OpenNI Kinect embedded registration

June 30, 2011

Thanks to Nicolas: OpenNI already provides the registration API!

Here is the code:

	nRetVal = context.FindExistingNode(XN_NODE_TYPE_DEPTH, g_DepthGenerator);
	CHECK_RC(nRetVal, "Find depth generator");
	nRetVal = context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_ImageGenerator);
	CHECK_RC(nRetVal, "Find image generator");
	nRetVal = g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint(g_ImageGenerator);
	CHECK_RC(nRetVal, "Set depth viewpoint to image");

This warps the depth map into the color camera's viewpoint.

Then:

	for (int j = 0; j < kin_h; j++)
	{
		for (int i = 0; i < kin_w; i++)
		{
			// depthImage is the depth map rendered as normalized RGB
			Vec3b xyz = depthImage.at<Vec3b>(j, i);
			if (xyz[0] > 180)   // simple threshold on the normalized depth
				cutoutImage.at<Vec3b>(j, i) = kinColor.at<Vec3b>(j, i);
		}
	}
#if showWin
	Mat depth_color(480, 1280, CV_8UC3);
	cutoutImage.copyTo(depth_color(Rect(0, 0, 640, 480)));
	kinColor.copyTo(depth_color(Rect(640, 0, 640, 480)));
	imshow("depth_In_color", depth_color);
	//imwrite("./depth_In_color2.jpg", depth_color);
	waitKey(1);
#endif

I just use a simple threshold on the normalized RGB depth (>180) to get the result, and it registers well! Cheers.

Cutout result


A study on paper “Human Detection Using Depth Information by Kinect”

June 28, 2011

This paper proposes a method to detect humans and then track them using only the depth image. What's more, it is the first paper I have read in this domain that uses the Kinect sensor…

Why the depth cue is so reliable:

Objects may not have consistent color and texture, but they must occupy a connected region in space.

Object contour detection:

Observing the depth array, there are salient gradients between surfaces at different distances, so a Canny edge detector easily extracts the edges. The authors also eliminate small edges by simply counting the pixels contained in each edge.

Human identification:

A computer is just a machine; without a knowledge base, it cannot recognize an object as human ^-^. Still, is the head shape really unique to humans? I am not sure. In this paper, the authors match a binary head template to identify an "object as human".

Human verification:

As the last step claimed, an "object as human" does not mean the object really is human. In this paper, the authors use the depth array to fit a 3D head model (estimating its parameters) to each region detected by template matching, and compute the squared error between the region and the 3D model; only the well-fitting regions are kept as real humans. (How about a gorilla? ^-^)

Yellow dots indicate the centers of the detected heads

Extracting the whole human shape:

There are several cases where the simple edge detector fails: a) another, non-human object is close to the human; b) the depth of the feet and the ground are the same. In this paper, a region-growing algorithm is developed to extract the whole human body. It is simple to understand: since the steps above mostly succeed in identifying the head, use the identified head region as the seed and compute its mean depth, then scan the neighboring pixels and calculate their similarity to that mean. The most similar ones are added to the region; then re-compute the mean and repeat.

Result of our region growing algorithm

The extracted whole body contours are superimposed on the depth map

Human tracking:

Human movement between neighboring frames should be smooth. Calculate each blob's 3D coordinates and speed (the difference between the centers of detected blobs in consecutive frames). In this paper, the authors define an energy function that measures movement smoothness from two data terms (3D coordinates and speed); the candidate with the smallest energy is taken to be the same person.

tracking result, with different colors

Compared with Sho Ikemura et al., "Real-Time Human Detection using Relational Depth Similarity Features", ACCV 2010:

                 Precision    Recall    Accuracy
This paper       100%         96.0%     98.4%
Ikemura's        90.0%        32.9%     85.8%