Using Kinect v2 in Visual Studio

Preface: a write-up of some old notes.

1. Downloading and testing the Kinect v2 SDK

KinectSDK-v2.0_1409-Setup official download page
Kinect for Windows SDK 2.0 official documentation
Kinect v2 complete teardown
Kinect data extraction and coordinate transformation

1.1 Installation

After the SDK package is downloaded, double-click it; installation is straightforward, then restart the computer when prompted. If installation fails as in the screenshot below, reset your Windows system (if it still fails after the reset, the reset did not actually complete; reset again. I ran into exactly this once, and a second reset fixed it).
image.png
Note: when I first hit this error I searched online; most answers said to uninstall the downloaded SDK completely and download it again. That worked for some people, but I still got the same error. Then I remembered that I had earlier uninstalled a lot of software I did not like the look of, most likely including a built-in Microsoft driver; I suspect that is what broke the installation.

1.2 Testing

Plug the Kinect v2 into a USB 3.0 port (the blue one on the PC) and the camera should react, since Windows 10 installs the main Kinect v2 driver automatically. Open SDK Browser and run the camera's self-check program; the screen below means the Kinect v2 is fully connected.
image.png
Note: an exclamation mark on the USB row indicates a loose connection; it does not affect the other camera tests and can be ignored.

2. Downloading and installing VS2013

2.1 Download

VS2013 download link
image.png
VS2013 official documentation
Note: I only downloaded the Ultimate edition and the product key (for activation) circled in the screenshot. If the links do not open, it is probably because I picked my university's mirror for convenience and download speed; you can also download everything from the WeChat account “软件安装管家” (an excellent account, highly recommended).

2.2 Installation

Follow the tutorial from the WeChat account “软件安装管家”.

3. Installing and configuring OpenCV

OpenCV 3.0 download link
image.png
image.png
image.png

3.1 Configuring the environment variables

Control Panel → System → Advanced system settings → Environment Variables → New system variable
image.png
Control Panel → System → Advanced system settings → Environment Variables → Path
image.png

  • x86 and x64 correspond to 32-bit and 64-bit VS projects; vc10, vc11 and vc12 correspond to the compiler versions used by VS2010, VS2012 and VS2013 respectively.
  • After setting the environment variables, restart the computer so that they take effect immediately; this avoids a whole class of path-related problems.
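As a sketch, the two GUI steps above can also be done from an administrator command prompt. The `OPENCV` variable name matches the property sheet used later; the `D:\opencv\build` path is an assumption for illustration, so point it at wherever you unpacked OpenCV:

```bat
:: Create the OPENCV system variable that the property sheet in 3.2 expands
setx OPENCV "D:\opencv\build" /M
:: Append the matching runtime DLL directory to Path (x86\vc12 for a 32-bit VS2013 project)
setx Path "%Path%;D:\opencv\build\x86\vc12\bin" /M
```

`setx /M` writes machine-wide variables; a restart (or at least a fresh console) is still needed before every program sees them.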

3.2 Configuring OpenCV in VS

First put the OpenCV 3.0 property sheet content into a text file (shown below):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ImportGroup Label="PropertySheets" />
  <PropertyGroup Label="UserMacros" />
  <PropertyGroup>
    <IncludePath>$(OPENCV)\include;$(IncludePath)</IncludePath>
    <LibraryPath Condition="'$(Platform)'=='Win32'">$(OPENCV)\x86\vc12\lib;$(OPENCV)\x86\vc12\staticlib;$(LibraryPath)</LibraryPath>
    <LibraryPath Condition="'$(Platform)'=='X64'">$(OPENCV)\x64\vc12\lib;$(OPENCV)\x64\vc12\staticlib;$(LibraryPath)</LibraryPath>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <Link Condition="'$(Configuration)'=='Debug'">
      <AdditionalDependencies>opencv_ts300d.lib;opencv_world300d.lib;IlmImfd.lib;libjasperd.lib;libjpegd.lib;libpngd.lib;libtiffd.lib;libwebpd.lib;opencv_calib3d300d.lib;opencv_core300d.lib;opencv_features2d300d.lib;opencv_flann300d.lib;opencv_highgui300d.lib;opencv_imgcodecs300d.lib;opencv_imgproc300d.lib;opencv_ml300d.lib;opencv_objdetect300d.lib;opencv_photo300d.lib;opencv_shape300d.lib;opencv_stitching300d.lib;opencv_superres300d.lib;opencv_ts300d.lib;opencv_video300d.lib;opencv_videoio300d.lib;opencv_videostab300d.lib;zlibd.lib;%(AdditionalDependencies)
      </AdditionalDependencies>
    </Link>
    <Link Condition="'$(Configuration)'=='Release'">
      <AdditionalDependencies>opencv_ts300.lib;opencv_world300.lib;IlmImf.lib;ippicvmt.lib;libjasper.lib;libjpeg.lib;libpng.lib;libtiff.lib;libwebp.lib;opencv_calib3d300.lib;opencv_core300.lib;opencv_features2d300.lib;opencv_flann300.lib;opencv_highgui300.lib;opencv_imgcodecs300.lib;opencv_imgproc300.lib;opencv_ml300.lib;opencv_objdetect300.lib;opencv_photo300.lib;opencv_shape300.lib;opencv_stitching300.lib;opencv_superres300.lib;opencv_ts300.lib;opencv_video300.lib;opencv_videoio300.lib;opencv_videostab300.lib;zlib.lib;%(AdditionalDependencies)
      </AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>
  <ItemGroup />
</Project>

image.png

About going from that text file to an actual property sheet: many posts online say that once the content is pasted in, you can simply rename the extension to .props. That did not work for me; after renaming, the file's properties still showed it as a .txt, which cost me quite some time. I did eventually find the right way, so here are my screenshots to save you the detour: in the Save As dialog, be sure to set the save type to “All Files” (I marked it with a star in the screenshot).

3.3 Creating a VS project to test the configuration

Open VS2013 → File → New → Project → Visual C++ → Win32 Console Application → name the project opencv_test → OK → Next → tick Empty project → Finish
image.png
image.png
image.png
Type “Property Manager” in the search box → right-click opencv_test → Add Existing Property Sheet → browse to the property sheet and open it
image.png
Type “Solution Explorer” in the search box (Ctrl+Alt+L) → right-click Source Files → Add → New Item → Visual C++ → C++ File → name the source file test (the extension is added automatically)
image.png
Add the following code to the program:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
int main()
{
	Mat img = imread("baobao.jpg");
	if (img.empty())
	{
		cout << "error";
		return -1;
	}
	imshow("baby:", img);
	waitKey();

	return 0;
}

Open the folder that has the same name as the project → add the baobao image
image.png
image.png
Back in the program, press F5 to start debugging and you will see the baby photo.
image.png

4. Configuring Kinect v2 in VS (together with OpenCV)

4.1 Creating the project

Same as for OpenCV.

4.2 Configuring Kinect v2

Type “Property Manager” in the search box → right-click Debug|Win32 and choose “Add New Project Property Sheet” → name it “kinect_project.props” and save it in the main folder holding your VS projects (then any later Kinect v2 project can simply use “Add Existing Property Sheet” and point at this kinect_project.props)
image.png
Double-click kinect_project.props → VC++ Directories → add the Kinect SDK's inc directory to Include Directories → add the Kinect x86 lib directory to Library Directories → add “Kinect20.lib” under Linker → Input
image.png
image.png
image.png
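The three GUI steps above amount to a property sheet like the following. This is a sketch, assuming the SDK installer set the standard KINECTSDK20_DIR environment variable (which ends in a trailing backslash); adjust the paths if yours differs:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <IncludePath>$(KINECTSDK20_DIR)inc;$(IncludePath)</IncludePath>
    <LibraryPath>$(KINECTSDK20_DIR)Lib\x86;$(LibraryPath)</LibraryPath>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <Link>
      <AdditionalDependencies>Kinect20.lib;%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>
</Project>
```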

4.3 Add the following code to the program

#include <Kinect.h>                 // Kinect header
#include <iostream>
#include <opencv2/highgui.hpp>      // OpenCV header
using namespace std;
using namespace cv;
int main(void)
{
	IKinectSensor * mySensor = nullptr;
	GetDefaultKinectSensor(&mySensor);      // get the sensor
	mySensor->Open();                       // open the sensor
	IDepthFrameSource * mySource = nullptr; // depth data source
	mySensor->get_DepthFrameSource(&mySource);
	int height = 0, width = 0;
	IFrameDescription * myDescription = nullptr;    // resolution of the depth data
	mySource->get_FrameDescription(&myDescription);
	myDescription->get_Height(&height);
	myDescription->get_Width(&width);
	myDescription->Release();
	IDepthFrameReader * myReader = nullptr;
	mySource->OpenReader(&myReader);    // open a reader for the depth data
	IDepthFrame * myFrame = nullptr;
	Mat temp(height, width, CV_16UC1);  // image matrices
	Mat img(height, width, CV_8UC1);
	while (1)
	{
		// try to grab the latest depth frame through the reader and check for success
		if (myReader->AcquireLatestFrame(&myFrame) == S_OK)
		{
			myFrame->CopyFrameDataToArray(height * width, (UINT16 *)temp.data); // store into the 16-bit matrix first
			temp.convertTo(img, CV_8UC1, 255.0 / 4500);   // then convert 16-bit to 8-bit
			imshow("TEST", img);
			myFrame->Release();
		}
		if (waitKey(30) == VK_ESCAPE)
			break;
	}
	myReader->Release();        // release what is no longer needed and close the sensor
	mySource->Release();
	mySensor->Close();
	mySensor->Release();
	return  0;
}

Back in the program, press F5 to start debugging; you will see the depth image:
image.png
Alternatively, the following program grabs the Kinect colour, depth, and infrared images:

#include <Kinect.h> 
#include <iostream>
#include <opencv2/core/core.hpp>  
#include <opencv2/highgui/highgui.hpp>  
using namespace cv;
using namespace std;
// safely release a COM pointer
template<class Interface>
inline void SafeRelease(Interface *& pInterfaceToRelease)
{
	if (pInterfaceToRelease != NULL)
	{
		pInterfaceToRelease->Release();
		pInterfaceToRelease = NULL;
	}
}
int main()
{
	// get the Kinect sensor
	IKinectSensor* m_pKinectSensor;
	HRESULT hr;
	hr = GetDefaultKinectSensor(&m_pKinectSensor);
	if (FAILED(hr))
	{
		return hr;
	}
	IMultiSourceFrameReader* m_pMultiFrameReader = NULL;
	if (m_pKinectSensor)
	{
		hr = m_pKinectSensor->Open();
		if (SUCCEEDED(hr))
		{
			// open a reader over multiple data sources
			hr = m_pKinectSensor->OpenMultiSourceFrameReader(
				FrameSourceTypes::FrameSourceTypes_Color |
				FrameSourceTypes::FrameSourceTypes_Infrared |
				FrameSourceTypes::FrameSourceTypes_Depth,
				&m_pMultiFrameReader);
		}
	}
	if (!m_pKinectSensor || FAILED(hr))
	{
		return E_FAIL;
	}
	// the three frames and their references
	IDepthFrameReference* m_pDepthFrameReference = NULL;
	IColorFrameReference* m_pColorFrameReference = NULL;
	IInfraredFrameReference* m_pInfraredFrameReference = NULL;
	IInfraredFrame* m_pInfraredFrame = NULL;
	IDepthFrame* m_pDepthFrame = NULL;
	IColorFrame* m_pColorFrame = NULL;
	// the three image buffers
	Mat i_rgb(1080, 1920, CV_8UC4);      // note: must be a 4-channel image; Kinect colour data is only delivered as Bgra
	Mat i_depth(424, 512, CV_8UC1);
	Mat i_ir(424, 512, CV_16UC1);
	UINT16 *depthData = new UINT16[424 * 512];
	IMultiSourceFrame* m_pMultiFrame = nullptr;
	while (true)
	{
		// acquire a new multi-source frame
		hr = m_pMultiFrameReader->AcquireLatestFrame(&m_pMultiFrame);
		if (FAILED(hr) || !m_pMultiFrame)
		{
			//cout << "!!!" << endl;
			continue;
		}
		// split the colour, depth and infrared data out of the multi-source frame
		if (SUCCEEDED(hr))
			hr = m_pMultiFrame->get_ColorFrameReference(&m_pColorFrameReference);
		if (SUCCEEDED(hr))
			hr = m_pColorFrameReference->AcquireFrame(&m_pColorFrame);
		if (SUCCEEDED(hr))
			hr = m_pMultiFrame->get_DepthFrameReference(&m_pDepthFrameReference);
		if (SUCCEEDED(hr))
			hr = m_pDepthFrameReference->AcquireFrame(&m_pDepthFrame);
		if (SUCCEEDED(hr))
			hr = m_pMultiFrame->get_InfraredFrameReference(&m_pInfraredFrameReference);
		if (SUCCEEDED(hr))
			hr = m_pInfraredFrameReference->AcquireFrame(&m_pInfraredFrame);
		// copy the colour frame into its Mat
		UINT nColorBufferSize = 1920 * 1080 * 4;
		if (SUCCEEDED(hr))
			hr = m_pColorFrame->CopyConvertedFrameDataToArray(nColorBufferSize, reinterpret_cast<BYTE*>(i_rgb.data), ColorImageFormat::ColorImageFormat_Bgra);
		// copy the depth frame into its Mat
		if (SUCCEEDED(hr))
		{
			hr = m_pDepthFrame->CopyFrameDataToArray(424 * 512, depthData);
			for (int i = 0; i < 512 * 424; i++)
			{
				// 0-255 depth image; for a clearly visible display, keep only the low 8 bits of the depth value
				BYTE intensity = static_cast<BYTE>(depthData[i] % 256);
				reinterpret_cast<BYTE*>(i_depth.data)[i] = intensity;
			}
			// the raw data is really 16-bit unsigned int
			//hr = m_pDepthFrame->CopyFrameDataToArray(424 * 512, reinterpret_cast<UINT16*>(i_depth.data));
		}
		// copy the infrared frame into its Mat
		if (SUCCEEDED(hr))
		{
			hr = m_pInfraredFrame->CopyFrameDataToArray(424 * 512, reinterpret_cast<UINT16*>(i_ir.data));
		}
		// display
		imshow("rgb", i_rgb);
		if (waitKey(1) == VK_ESCAPE)
			break;
		imshow("depth", i_depth);
		if (waitKey(1) == VK_ESCAPE)
			break;
		imshow("ir", i_ir);
		if (waitKey(1) == VK_ESCAPE)
			break;
		// release the per-frame resources
		SafeRelease(m_pColorFrame);
		SafeRelease(m_pDepthFrame);
		SafeRelease(m_pInfraredFrame);
		SafeRelease(m_pColorFrameReference);
		SafeRelease(m_pDepthFrameReference);
		SafeRelease(m_pInfraredFrameReference);
		SafeRelease(m_pMultiFrame);
	}
	// close the windows and the device
	cv::destroyAllWindows();
	m_pKinectSensor->Close();
	std::system("pause");
	return 0;
}

5. Configuring PCL in VS

Point Cloud Library official website
Reference blog 1
Reference blog 2
Reference blog 3
Some notes on finding and using PCL learning resources

5.1 Download and installation

VS configures 32-bit projects by default, so download the 32-bit PCL library. Everything matches the reference blog except my install path.
image.png
image.png
image.png
image.png
image.png
image.png
image.png
image.png
image.png

5.2 Configuring the environment variables

After PCL is installed, the installer has already configured the following for us:
image.png
image.png
image.png

Next, similarly to the OpenCV configuration, add the following entries to Path:

%PCL_ROOT%\3rdParty\FLANN\bin
%PCL_ROOT%\3rdParty\Qhull\bin
%PCL_ROOT%\3rdParty\VTK\bin
E:\ruanjianbao\PCL\ProgramFile\PCL1.8.0\OpenNI2\Tools

image.png
After adding them, restart the computer (do not forget the restart, otherwise you will hit the missing-DLL problem I describe later).

5.3 Configuring PCL in VS

Create a new project pcl_test, same as before
image.png
image.png
A trick for listing all static library filenames under E:\ruanjianbao\PCL\ProgramFile\PCL 1.8.0\3rdParty\VTK\lib and saving them to 0.txt:

  • win+R
  • cmd→Enter
  • cd /d E:\ruanjianbao\PCL1.8\ProgramFile\PCL 1.8.0\3rdParty\VTK\lib →Enter
  • dir /b *.lib >0.txt→Enter
    image.png
    Add all of the dependency entries below:
pcl_common_debug.lib
pcl_features_debug.lib
pcl_filters_debug.lib
pcl_io_debug.lib
pcl_io_ply_debug.lib
pcl_kdtree_debug.lib
pcl_keypoints_debug.lib
pcl_ml_debug.lib
pcl_octree_debug.lib
pcl_outofcore_debug.lib
pcl_people_debug.lib
pcl_recognition_debug.lib
pcl_registration_debug.lib
pcl_sample_consensus_debug.lib
pcl_search_debug.lib
pcl_segmentation_debug.lib
pcl_stereo_debug.lib
pcl_surface_debug.lib
pcl_tracking_debug.lib
pcl_visualization_debug.lib
libboost_atomic-vc120-mt-gd-1_59.lib
libboost_chrono-vc120-mt-gd-1_59.lib
libboost_container-vc120-mt-gd-1_59.lib
libboost_context-vc120-mt-gd-1_59.lib
libboost_coroutine-vc120-mt-gd-1_59.lib
libboost_date_time-vc120-mt-gd-1_59.lib
libboost_exception-vc120-mt-gd-1_59.lib
libboost_filesystem-vc120-mt-gd-1_59.lib
libboost_graph-vc120-mt-gd-1_59.lib
libboost_iostreams-vc120-mt-gd-1_59.lib
libboost_locale-vc120-mt-gd-1_59.lib
libboost_log-vc120-mt-gd-1_59.lib
libboost_log_setup-vc120-mt-gd-1_59.lib
libboost_math_c99-vc120-mt-gd-1_59.lib
libboost_math_c99f-vc120-mt-gd-1_59.lib
libboost_math_c99l-vc120-mt-gd-1_59.lib
libboost_math_tr1-vc120-mt-gd-1_59.lib
libboost_math_tr1f-vc120-mt-gd-1_59.lib
libboost_math_tr1l-vc120-mt-gd-1_59.lib
libboost_mpi-vc120-mt-gd-1_59.lib
libboost_prg_exec_monitor-vc120-mt-gd-1_59.lib
libboost_program_options-vc120-mt-gd-1_59.lib
libboost_random-vc120-mt-gd-1_59.lib
libboost_regex-vc120-mt-gd-1_59.lib
libboost_serialization-vc120-mt-gd-1_59.lib
libboost_signals-vc120-mt-gd-1_59.lib
libboost_system-vc120-mt-gd-1_59.lib
libboost_test_exec_monitor-vc120-mt-gd-1_59.lib
libboost_thread-vc120-mt-gd-1_59.lib
libboost_timer-vc120-mt-gd-1_59.lib
libboost_unit_test_framework-vc120-mt-gd-1_59.lib
libboost_wave-vc120-mt-gd-1_59.lib
libboost_wserialization-vc120-mt-gd-1_59.lib
flann-gd.lib
flann_cpp_s-gd.lib
flann_s-gd.lib
qhull-gd.lib
qhullcpp-gd.lib
qhullstatic-gd.lib
qhullstatic_r-gd.lib
qhull_p-gd.lib
qhull_r-gd.lib
vtkalglib-7.0-gd.lib
vtkChartsCore-7.0-gd.lib
vtkCommonColor-7.0-gd.lib
vtkCommonComputationalGeometry-7.0-gd.lib
vtkCommonCore-7.0-gd.lib
vtkCommonDataModel-7.0-gd.lib
vtkCommonExecutionModel-7.0-gd.lib
vtkCommonMath-7.0-gd.lib
vtkCommonMisc-7.0-gd.lib
vtkCommonSystem-7.0-gd.lib
vtkCommonTransforms-7.0-gd.lib
vtkDICOMParser-7.0-gd.lib
vtkDomainsChemistry-7.0-gd.lib
vtkDomainsChemistryOpenGL2-7.0-gd.lib
vtkexoIIc-7.0-gd.lib
vtkexpat-7.0-gd.lib
vtkFiltersAMR-7.0-gd.lib
vtkFiltersCore-7.0-gd.lib
vtkFiltersExtraction-7.0-gd.lib
vtkFiltersFlowPaths-7.0-gd.lib
vtkFiltersGeneral-7.0-gd.lib
vtkFiltersGeneric-7.0-gd.lib
vtkFiltersGeometry-7.0-gd.lib
vtkFiltersHybrid-7.0-gd.lib
vtkFiltersHyperTree-7.0-gd.lib
vtkFiltersImaging-7.0-gd.lib
vtkFiltersModeling-7.0-gd.lib
vtkFiltersParallel-7.0-gd.lib
vtkFiltersParallelImaging-7.0-gd.lib
vtkFiltersProgrammable-7.0-gd.lib
vtkFiltersSelection-7.0-gd.lib
vtkFiltersSMP-7.0-gd.lib
vtkFiltersSources-7.0-gd.lib
vtkFiltersStatistics-7.0-gd.lib
vtkFiltersTexture-7.0-gd.lib
vtkFiltersVerdict-7.0-gd.lib
vtkfreetype-7.0-gd.lib
vtkGeovisCore-7.0-gd.lib
vtkglew-7.0-gd.lib
vtkhdf5-7.0-gd.lib
vtkhdf5_hl-7.0-gd.lib
vtkImagingColor-7.0-gd.lib
vtkImagingCore-7.0-gd.lib
vtkImagingFourier-7.0-gd.lib
vtkImagingGeneral-7.0-gd.lib
vtkImagingHybrid-7.0-gd.lib
vtkImagingMath-7.0-gd.lib
vtkImagingMorphological-7.0-gd.lib
vtkImagingSources-7.0-gd.lib
vtkImagingStatistics-7.0-gd.lib
vtkImagingStencil-7.0-gd.lib
vtkInfovisCore-7.0-gd.lib
vtkInfovisLayout-7.0-gd.lib
vtkInteractionImage-7.0-gd.lib
vtkInteractionStyle-7.0-gd.lib
vtkInteractionWidgets-7.0-gd.lib
vtkIOAMR-7.0-gd.lib
vtkIOCore-7.0-gd.lib
vtkIOEnSight-7.0-gd.lib
vtkIOExodus-7.0-gd.lib
vtkIOExport-7.0-gd.lib
vtkIOGeometry-7.0-gd.lib
vtkIOImage-7.0-gd.lib
vtkIOImport-7.0-gd.lib
vtkIOInfovis-7.0-gd.lib
vtkIOLegacy-7.0-gd.lib
vtkIOLSDyna-7.0-gd.lib
vtkIOMINC-7.0-gd.lib
vtkIOMovie-7.0-gd.lib
vtkIONetCDF-7.0-gd.lib
vtkIOParallel-7.0-gd.lib
vtkIOParallelXML-7.0-gd.lib
vtkIOPLY-7.0-gd.lib
vtkIOSQL-7.0-gd.lib
vtkIOVideo-7.0-gd.lib
vtkIOXML-7.0-gd.lib
vtkIOXMLParser-7.0-gd.lib
vtkjpeg-7.0-gd.lib
vtkjsoncpp-7.0-gd.lib
vtklibxml2-7.0-gd.lib
vtkmetaio-7.0-gd.lib
vtkNetCDF-7.0-gd.lib
vtkNetCDF_cxx-7.0-gd.lib
vtkoggtheora-7.0-gd.lib
vtkParallelCore-7.0-gd.lib
vtkpng-7.0-gd.lib
vtkproj4-7.0-gd.lib
vtkRenderingAnnotation-7.0-gd.lib
vtkRenderingContext2D-7.0-gd.lib
vtkRenderingContextOpenGL2-7.0-gd.lib
vtkRenderingCore-7.0-gd.lib
vtkRenderingFreeType-7.0-gd.lib
vtkRenderingImage-7.0-gd.lib
vtkRenderingLabel-7.0-gd.lib
vtkRenderingLOD-7.0-gd.lib
vtkRenderingOpenGL2-7.0-gd.lib
vtkRenderingQt-7.0-gd.lib
vtkRenderingVolume-7.0-gd.lib
vtkRenderingVolumeOpenGL2-7.0-gd.lib
vtksqlite-7.0-gd.lib
vtksys-7.0-gd.lib
vtktiff-7.0-gd.lib
vtkverdict-7.0-gd.lib
vtkViewsContext2D-7.0-gd.lib
vtkViewsCore-7.0-gd.lib
vtkViewsInfovis-7.0-gd.lib
vtkViewsQt-7.0-gd.lib
vtkzlib-7.0-gd.lib

image.png
Add the following two definitions:

_SCL_SECURE_NO_WARNINGS 
_CRT_SECURE_NO_WARNINGS

image.png
Note: skipping this step leads to the error described later.

5.4 A small PCL test

#include <pcl/visualization/cloud_viewer.h>  
#include <iostream>  
#include <sstream>   // for the std::stringstream used in viewerPsycho  
#include <pcl/io/io.h>  
#include <pcl/io/pcd_io.h>  

int user_data;

void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
	viewer.setBackgroundColor(1.0, 0.5, 1.0);
	pcl::PointXYZ o;
	o.x = 1.0;
	o.y = 0;
	o.z = 0;
	viewer.addSphere(o, 0.25, "sphere", 0);
	std::cout << "i only run once" << std::endl;

}

void viewerPsycho(pcl::visualization::PCLVisualizer& viewer)
{
	static unsigned count = 0;
	std::stringstream ss;
	ss << "Once per viewer loop: " << count++;
	viewer.removeShape("text", 0);
	viewer.addText(ss.str(), 200, 300, "text", 0);

	//FIXME: possible race condition here:  
	user_data++;
}

int main()
{
	pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
	pcl::io::loadPCDFile("my_point_cloud.pcd", *cloud);

	pcl::visualization::CloudViewer viewer("Cloud Viewer");

	//blocks until the cloud is actually rendered  
	viewer.showCloud(cloud);

	//use the following functions to get access to the underlying more advanced/powerful  
	//PCLVisualizer  

	//This will only get called once  
	viewer.runOnVisualizationThreadOnce(viewerOneOff);

	//This will get called once per visualization iteration  
	viewer.runOnVisualizationThread(viewerPsycho);
	while (!viewer.wasStopped())
	{
		//you can also do cool processing here  
		//FIXME: Note that this is running in a separate thread from viewerPsycho  
		//and you should guard against race conditions yourself...  
		user_data++;
	}
	return 0;
}

image.png

5.5 Reading Kinect v2 point clouds in VS with PCL and OpenCV

Reference blog

#include <Kinect.h>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <pcl/visualization/cloud_viewer.h> 
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

using namespace cv;
using namespace std;
IKinectSensor* pSensor;
ICoordinateMapper *pMapper;
const int iDWidth = 512, iDHeight = 424;    // depth image size
const int iCWidth = 1920, iCHeight = 1080;  // colour image size
CameraSpacePoint depth2xyz[iDWidth*iDHeight];
ColorSpacePoint depth2rgb[iDWidth*iDHeight];    // one colour-space point per depth pixel
void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
	viewer.setBackgroundColor(1.0, 1.0, 1.0);   // set the background colour
}
// start the Kinect
bool initKinect()
{
	if (FAILED(GetDefaultKinectSensor(&pSensor))) return false;
	if (pSensor)
	{
		pSensor->get_CoordinateMapper(&pMapper);
		pSensor->Open();
		cout << "camera opened" << endl;
		return true;
	}
	else return false;
}
// grab a depth frame
Mat DepthData()
{
	IDepthFrameSource* pFrameSource = nullptr;
	pSensor->get_DepthFrameSource(&pFrameSource);
	IDepthFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IDepthFrame* pFrame = nullptr;
	Mat mDepthImg(iDHeight, iDWidth, CV_16UC1);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyFrameDataToArray(iDWidth * iDHeight, reinterpret_cast<UINT16*>(mDepthImg.data));
			cout << "depth frame acquired" << endl;
			pFrame->Release();
			pFrameReader->Release();    // also release the reader and source created on this call
			pFrameSource->Release();
			return mDepthImg;
		}
	}
}
// grab a colour frame
Mat RGBData()
{
	IColorFrameSource* pFrameSource = nullptr;
	pSensor->get_ColorFrameSource(&pFrameSource);
	IColorFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IColorFrame* pFrame = nullptr;
	Mat mColorImg(iCHeight, iCWidth, CV_8UC4);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyConvertedFrameDataToArray(iCWidth * iCHeight * 4, mColorImg.data, ColorImageFormat_Bgra);
			cout << "color frame acquired" << endl;
			pFrame->Release();
			pFrameReader->Release();    // also release the reader and source created on this call
			pFrameSource->Release();
			return mColorImg;
		}
	}
}
int main()
{
	initKinect();
	pcl::visualization::CloudViewer viewer("Cloud Viewer");
	viewer.runOnVisualizationThreadOnce(viewerOneOff);
	pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
	Mat mColorImg;
	Mat mDepthImg;
	while (cv::waitKey(30) != VK_ESCAPE)
	{
		mColorImg = RGBData();
		mDepthImg = DepthData();
		imshow("RGB", mColorImg);
		pMapper->MapDepthFrameToColorSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2rgb);    // map the depth frame into colour space
		pMapper->MapDepthFrameToCameraSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2xyz);   // map the depth frame into 3-D camera space
		//for (int i = 0; i < iDWidth*iDHeight; i++)
		//{
		//	cout << i << ":  " << "X=" << depth2rgb[i].X << ";  Y=" << depth2rgb[i].Y<<endl;
		//}
		float maxX = depth2xyz[0].X, maxY = depth2xyz[0].Y, maxZ = depth2xyz[0].Z;
		float minX = depth2xyz[0].X, minY = depth2xyz[0].Y, minZ = depth2xyz[0].Z;
		for (size_t i = 0; i < iDWidth; i++)
		{
			for (size_t j = 0; j < iDHeight; j++)
			{
				pcl::PointXYZRGBA pointTemp;
				if (depth2xyz[i + j*iDWidth].Z > 0.5 && depth2rgb[i + j*iDWidth].X < 1920 && depth2rgb[i + j*iDWidth].X > 0 && depth2rgb[i + j*iDWidth].Y < 1080 && depth2rgb[i + j*iDWidth].Y > 0)
				{
					pointTemp.x = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X > maxX) maxX = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X < minX) minX = -depth2xyz[i + j*iDWidth].X;
					pointTemp.y = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y > maxY) maxY = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y < minY) minY = depth2xyz[i + j*iDWidth].Y;
					pointTemp.z = depth2xyz[i + j*iDWidth].Z;
					if (depth2xyz[i + j*iDWidth].Z != 0.0)
					{
						if (depth2xyz[i + j*iDWidth].Z > maxZ) maxZ = depth2xyz[i + j*iDWidth].Z;
						if (depth2xyz[i + j*iDWidth].Z < minZ) minZ = depth2xyz[i + j*iDWidth].Z;
					}
					pointTemp.b = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[0];
					pointTemp.g = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[1];
					pointTemp.r = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[2];
					pointTemp.a = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[3];
					cloud->push_back(pointTemp);
				}
			}
		}
		viewer.showCloud(cloud);
		mColorImg.release();
		mDepthImg.release();
		pcl::io::savePCDFileASCII("cloud.pcd",*cloud);
		cloud->clear();
		waitKey(10);
	}
	return 0;
}

5.6 Saving Kinect v2 point clouds in VS with PCL and OpenCV

#include <Kinect.h>

#include <vector>

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

//#include <pcl/io/io.h>  
#include <pcl/io/pcd_io.h> 
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/visualization/cloud_viewer.h> 
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/io/ply_io.h>

using namespace cv;
using namespace std;

//typedef pcl::PointXYZRGBA PointT;
//typedef pcl::PointCloud< PointT> PointCloud;

IKinectSensor* pSensor;
ICoordinateMapper *pMapper;
const int iDWidth = 512, iDHeight = 424;    // depth image size
const int iCWidth = 1920, iCHeight = 1080;  // colour image size
CameraSpacePoint depth2xyz[iDWidth*iDHeight];
ColorSpacePoint depth2rgb[iDWidth*iDHeight];    // one colour-space point per depth pixel
void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
	viewer.setBackgroundColor(1.0, 1.0, 1.0);   // set the background colour
}
// start the Kinect
bool initKinect()
{
	if (FAILED(GetDefaultKinectSensor(&pSensor))) return false;
	if (pSensor)
	{
		pSensor->get_CoordinateMapper(&pMapper);
		pSensor->Open();
		cout << "camera opened" << endl;
		return true;
	}
	else return false;
}
// grab a depth frame
Mat DepthData()
{
	IDepthFrameSource* pFrameSource = nullptr;
	pSensor->get_DepthFrameSource(&pFrameSource);
	IDepthFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IDepthFrame* pFrame = nullptr;
	Mat mDepthImg(iDHeight, iDWidth, CV_16UC1);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyFrameDataToArray(iDWidth * iDHeight, reinterpret_cast<UINT16*>(mDepthImg.data));
			cout << "depth frame acquired" << endl;
			pFrame->Release();
			pFrameReader->Release();    // also release the reader and source created on this call
			pFrameSource->Release();
			return mDepthImg;
		}
	}
}
// grab a colour frame
Mat RGBData()
{
	IColorFrameSource* pFrameSource = nullptr;
	pSensor->get_ColorFrameSource(&pFrameSource);
	IColorFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IColorFrame* pFrame = nullptr;
	Mat mColorImg(iCHeight, iCWidth, CV_8UC4);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyConvertedFrameDataToArray(iCWidth * iCHeight * 4, mColorImg.data, ColorImageFormat_Bgra);
			cout << "color frame acquired" << endl;
			pFrame->Release();
			pFrameReader->Release();    // also release the reader and source created on this call
			pFrameSource->Release();
			return mColorImg;
		}
	}
}
int main(int argc, char** argv)
{
	initKinect();
	pcl::visualization::CloudViewer viewer("Cloud Viewer");
	viewer.runOnVisualizationThreadOnce(viewerOneOff);
	pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
	//PointCloud::Ptr cloud(new PointCloud);
	Mat mColorImg;
	Mat mDepthImg;
	while (cv::waitKey(30) != VK_ESCAPE)
	{
		mColorImg = RGBData();
		mDepthImg = DepthData();
		imwrite("color.jpg", mColorImg);
		imshow("RGB", mColorImg);
		imwrite("depth.jpg", mDepthImg);
		imshow("Depth", mDepthImg);
		pMapper->MapDepthFrameToColorSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2rgb);    // map the depth frame into colour space
		pMapper->MapDepthFrameToCameraSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2xyz);   // map the depth frame into 3-D camera space
		//for (int i = 0; i < iDWidth*iDHeight; i++)
		//{
		//	cout << i << ":  " << "X=" << depth2rgb[i].X << ";  Y=" << depth2rgb[i].Y<<endl;
		//}
		float maxX = depth2xyz[0].X, maxY = depth2xyz[0].Y, maxZ = depth2xyz[0].Z;
		float minX = depth2xyz[0].X, minY = depth2xyz[0].Y, minZ = depth2xyz[0].Z;
		//pcl::PointCloud<pcl::PointXYZ> savecloud;
		//savecloud.width = 512;
		//savecloud.height = 424;
		//savecloud.is_dense = false;
		//savecloud.points.resize(savecloud.width * savecloud.height);
		for (size_t i = 0; i < iDWidth; i++)
		{
			for (size_t j = 0; j < iDHeight; j++)
			{
				pcl::PointXYZRGBA pointTemp;
				//PointT pointTemp;
				if (depth2xyz[i + j*iDWidth].Z > 0.5 && depth2rgb[i + j*iDWidth].X < 1920 && depth2rgb[i + j*iDWidth].X > 0 && depth2rgb[i + j*iDWidth].Y < 1080 && depth2rgb[i + j*iDWidth].Y > 0)
				{
					pointTemp.x = -depth2xyz[i + j*iDWidth].X;
					//savecloud[i,j].x = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X > maxX) maxX = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X < minX) minX = -depth2xyz[i + j*iDWidth].X;
					//savecloud[i, j].y = depth2xyz[i + j*iDWidth].Y;
					pointTemp.y = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y > maxY) maxY = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y < minY) minY = depth2xyz[i + j*iDWidth].Y;
					//savecloud[i, j].z = depth2xyz[i + j*iDWidth].Z;
					pointTemp.z = depth2xyz[i + j*iDWidth].Z;
					if (depth2xyz[i + j*iDWidth].Z != 0.0)
					{
						if (depth2xyz[i + j*iDWidth].Z > maxZ) maxZ = depth2xyz[i + j*iDWidth].Z;
						if (depth2xyz[i + j*iDWidth].Z < minZ) minZ = depth2xyz[i + j*iDWidth].Z;
					}
					pointTemp.b = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[0];
					pointTemp.g = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[1];
					pointTemp.r = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[2];
					pointTemp.a = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[3];
					cloud->push_back(pointTemp);
				}

			}
			//cloud->height = 1;
			//cloud->width = cloud->points.size();
			//cloud->is_dense = false;
			//pcl::io::savePCDFileASCII("pointcloud.pcd", *cloud);
			//cout << "point cloud size = " << cloud->points.size() << endl;
			//pcl::io::savePCDFileASCII("test_pcd.pcd", savecloud);	
		}
		pcl::io::savePCDFileASCII("cloud.pcd", *cloud);
		viewer.showCloud(cloud);
		//std::cerr << "Saved " << savecloud.points.size() << " data points to test_pcd.pcd." << std::endl;
		//for (size_t i = 0; i < savecloud.points.size(); ++i)
			//std::cerr << "    " << savecloud.points[i].x << " " << savecloud.points[i].y << " " << savecloud.points[i].z << std::endl;
		//std::cerr << "Saved " << savecloud.points.size() << " data points to test_pcd.pcd." << std::endl;
		//pcl::io::savePCDFileASCII("cloud.pcd", *cloud);
		mColorImg.release();
		mDepthImg.release();	
		//pcl::io::savePCDFileASCII("cloud.pcd", *cloud);
		//pcl::PLYWriter writer;
		//writer.write("cloud.ply", *cloud);
		cloud->clear();
		waitKey(3000);
	}
	return 0;
}

Open the project directory to find the saved RGB image, depth image, and point cloud.
image.png

6. Problems encountered while saving point clouds with VS2013 and PCL

Problem 1

Cannot open or create any project:
“no editor option definition export found for the given option name”
  • Fix:
    • Close VS
    • Clear the cache under the user directory on the C drive; my path is:
      C:\Users\YQ\AppData\Local\Microsoft\VisualStudio\12.0\ComponentModelCache
    • Reopen VS
      Note:
      Visual Studio 2012 uses the 11.0 folder;
      Visual Studio 2013 uses the 12.0 folder.

Problem 2
image.png

  • Fix:
    • Double-click the warning
    • File → Advanced Save Options → change the encoding to the one shown below → OK
      image.png

Problem 3
image.png

  • Fix:
    • Find the boost version you need on the boost website and download it (mine is 1.59)
    • Unpack it
    • Open Visual Studio Tools inside VS and choose the x86 command prompt (I always use 32-bit VS projects)
    • Go to the unpacked directory → enter the boost_1_59_0 directory (cd /d E:\ruanjianbao\PCL\ProgramFile\boost_1_59_0) → run bootstrap.bat (start bootstrap.bat)
      image.png
    • bootstrap.bat generates b2.exe and bjam.exe → run the command below so that b2.exe builds the 32-bit static libs
b2 stage --toolset=msvc-12.0 architecture=x86 --stagedir=".\lib\vc12_x86" link=static runtime-link=static threading=multi debug release
  • Reconfigure PCL in VS following the same steps as before, but replace everything boost-related with the lib and include directories of the freshly built boost
    Tip:
    a quick way to cd into a given folder from cmd:
    locate the folder → click it → hold Shift and right-click; a “Copy as path” option appears, then paste the result wherever you need it.

Problem 4
image.png

  • Fix: pending
    Note:
    Multi-threaded Debug DLL (/MDd): MD_DynamicDebug
    Multi-threaded DLL (/MD): MD_DynamicRelease
    Multi-threaded (/MT): MD_StaticRelease
    Multi-threaded (/MTd): MD_StaticDebug

Problem 5
image.png

  • Fix:
    add the definition below where the screenshot shows
_SCL_SECURE_NO_WARNINGS

image.png

Problem 6
image.png
image.png
image.png
image.png

  • Cause:
    The root of this problem is the environment variables: Windows only searches the directories on Path for the .dll files a program needs, so if a required .dll is not covered by Path you get the “missing xxx.dll” error. In my case the variables had been configured correctly, but I had not restarted the computer afterwards, so the configuration had not yet taken effect.
  • Fix:
    Restart the computer; problem solved.

Problem 7
image.png

  • Cause:
    ① “value '0' doesn't match value '2'”: the Debug build is linking against Release libraries.
    ② “value '2' doesn't match value '0'”: the Release build is linking against Debug libraries.
  • Fix:
    Case 1: Project → Properties → Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions, add: _ITERATOR_DEBUG_LEVEL=0
    Case 2: Project → Properties → Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions, add: _ITERATOR_DEBUG_LEVEL=2
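As a sketch, the first case ends up in the project file (.vcxproj) as an entry like the following; the Preprocessor Definitions dialog writes it for you, so this is only for reference when inspecting the file by hand:

```xml
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
  <ClCompile>
    <PreprocessorDefinitions>_ITERATOR_DEBUG_LEVEL=0;%(PreprocessorDefinitions)</PreprocessorDefinitions>
  </ClCompile>
</ItemDefinitionGroup>
```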

Problem 8
image.png
Add: /NODEFAULTLIB:msvcrt.lib
image.png



Reprinted from blog.csdn.net/qq_40234695/article/details/89388282