Today I finished a demo showing the current state of my work.
This interesting piece demonstrates cutout and gesture control using depth information from the Kinect sensor. Here is the video, hosted on YouTube:
Enjoy :)
Thanks, Nicolas. OpenNI already provides the registration API!
Here is the code:
nRetVal = context.FindExistingNode(XN_NODE_TYPE_DEPTH, g_DepthGenerator);
CHECK_RC(nRetVal, "Find depth generator");
nRetVal = context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_ImageGenerator);
CHECK_RC(nRetVal, "Find image generator");

// Register the depth generator to the image generator's viewpoint.
g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint(g_ImageGenerator);
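One caveat worth noting: not every device or driver exposes this capability, so it can be worth guarding the call. A minimal sketch (the error string is my own):

// Only attempt registration if the depth generator supports the
// AlternativeViewPoint capability.
if (g_DepthGenerator.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
{
    nRetVal = g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint(g_ImageGenerator);
    CHECK_RC(nRetVal, "Set depth viewpoint to image");
}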
This warps the depth map into the color camera's viewpoint.
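For context, here is a sketch of how the registered depth map could be pulled into OpenCV and turned into the normalized 8-bit depthImage used below. The scaling factor and the near/far polarity are my assumptions, not the exact code from this demo:

// Sketch only: assumes kin_w = 640, kin_h = 480, and the kinDepth /
// depthImage Mats that the cutout loop below reads from.
xn::DepthMetaData depthMD;
g_DepthGenerator.GetMetaData(depthMD);

// Wrap the raw 16-bit depth buffer in a Mat without copying.
Mat kinDepth(kin_h, kin_w, CV_16UC1, (void*)depthMD.Data());

// Scale to 8 bits; depending on the scene you may want to invert
// (255 - value) so that nearer pixels end up brighter.
Mat depth8u;
kinDepth.convertTo(depth8u, CV_8UC1, 255.0 / depthMD.ZRes());

// Expand to 3 channels so depthImage.at<Vec3b>(j, i)[0] holds the
// scaled depth value, which is what the threshold below expects.
Mat depthImage;
cvtColor(depth8u, depthImage, CV_GRAY2BGR);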
Then the cutout itself:
// Copy a color pixel into the cutout wherever the normalized
// depth channel passes the threshold.
for (int j = 0; j < kin_h; j++) {
    for (int i = 0; i < kin_w; i++) {
        ushort d = kinDepth.at<ushort>(j, i);   // raw 16-bit depth (kept for reference)
        Vec3b xyz = depthImage.at<Vec3b>(j, i); // normalized depth, 0..255 per channel
        if (xyz[0] > 180)
            cutoutImage.at<Vec3b>(j, i) = kinColor.at<Vec3b>(j, i);
    }
}

#if showWin
// Show the cutout and the raw color image side by side.
Mat depth_color(480, 1280, CV_8UC3);
cutoutImage.copyTo(depth_color(Rect(0, 0, 640, 480)));
kinColor.copyTo(depth_color(Rect(640, 0, 640, 480)));
imshow("depth_In_color", depth_color);
//imwrite("./depth_In_color2.jpg", depth_color);
waitKey(1);
#endif
I just use simple thresholding (normalized RGB depth > 180) and get the result; the registration is good. Cheers!
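As a side note, the per-pixel loop above could equally be written with OpenCV mask operations; a sketch using the same assumed Mat names:

// Build a mask where the normalized depth channel exceeds 180,
// then copy only the masked color pixels into the cutout.
Mat channels[3];
split(depthImage, channels);
Mat mask = channels[0] > 180;
cutoutImage.setTo(Scalar::all(0));
kinColor.copyTo(cutoutImage, mask);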