How Does the Kinect Work?

The Kinect does two things: generate a three-dimensional moving image of the objects in its field of view, and recognize moving human beings among those objects. Older software programs used differences in color and texture to distinguish objects from their backgrounds. PrimeSense, the company whose tech powers the Kinect, and Canesta, a recent Microsoft acquisition, use a different model.

The camera transmits invisible near-infrared light and measures its "time of flight" after it reflects off the objects. Time-of-flight works like sonar: If you know how long the light takes to return, you know how far away an object is.
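As a toy illustration of the arithmetic (not Kinect firmware), the round-trip time maps to distance like this, halving the trip because the light travels out and back:

```
// Toy time-of-flight calculation, not Kinect firmware: the light
// travels out to the object and back, so halve the round trip.
final float SPEED_OF_LIGHT = 299792458.0f; // meters per second

float timeOfFlightDistance(float roundTripSeconds) {
  return SPEED_OF_LIGHT * roundTripSeconds / 2.0f;
}
```

An object 1.5 meters away returns the pulse in about 10 nanoseconds, which gives a feel for how fine the timing has to be.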

Cast a big field, with lots of pings going back and forth at the speed of light, and you can tell how far away a lot of objects are. Using an infrared generator also partially solves the problem of ambient light.

Since the sensor isn't designed to register visible light, it doesn't get as many false positives. PrimeSense and Kinect go one step further and encode information in the near-IR light. As that information is returned, some of it is deformed, which in turn helps generate a finer image of the objects' 3-D texture, not just their depth. With this tech, the Kinect can distinguish objects' depth to within 1 centimeter and their height and width to within 3 millimeters.

Newer Xbox 360 consoles have a Kinect port from which the device can draw power, but the Kinect sensor comes with a power supply at no additional charge for users of older Xbox 360 models.

For a video game to use the features of the hardware, it must also use the proprietary layer of Kinect software that enables body and voice recognition from the Kinect sensor [source: Rule]. A further look at the technical specifications for the Kinect reveals that both the video and depth sensor cameras have a 640 x 480-pixel resolution and run at 30 frames per second (FPS). The specifications also suggest that you should allow about 6 feet (1.8 meters) of play space between you and the Kinect sensor.

The Kinect hardware, though, would be nothing without the breakthrough software that makes use of the data it gathers. Read on to learn about the "brain" behind the camera lens.

Code for v2: MultiKinect2. Code for v1: PointCloud. Code for v2: PointCloud. This tutorial is also a good place to start.
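As a hedged starting point, here is a minimal sketch assuming Daniel Shiffman's Open Kinect for Processing library, which these example names appear to come from; it opens a v1 sensor and draws its depth image:

```
// Minimal sketch, assuming Shiffman's Open Kinect for Processing
// library: show the Kinect v1 depth image.
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  image(kinect.getDepthImage(), 0, 0);
}
```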

In addition, the PointCloud example uses a PVector to describe a point in 3D space. More here: PVector tutorial.
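PVector is part of Processing's core API; one point of the cloud can be held like this (the coordinates here are made up):

```
// A made-up 3D point: x and y could come from the depth image grid,
// z from the converted depth.
PVector point = new PVector(0.5, 1.2, 2.0);
println(point.x, point.y, point.z);
```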

The raw depth values from the Kinect are not directly proportional to physical depth. Rather, they scale with the inverse of the depth according to this formula:

depthInMeters = 1.0 / (rawDepth * -0.0030711016 + 3.3309495161)

Rather than do this calculation all the time, we can precompute all of these values in a lookup table, since there are only 2048 possible depth values.
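A sketch of that precomputation, close to the tutorial's own code (2047 is the v1 sensor's "no reading" marker):

```
// Lookup table: one entry per possible 11-bit raw depth value.
float[] depthLookUp = new float[2048];

void setup() {
  for (int i = 0; i < depthLookUp.length; i++) {
    depthLookUp[i] = rawDepthToMeters(i);
  }
}

// Matthew Fisher's approximation, quoted above; 2047 means "no reading."
float rawDepthToMeters(int depthValue) {
  if (depthValue < 2047) {
    return (float) (1.0 / ((double) depthValue * -0.0030711016 + 3.3309495161));
  }
  return 0.0f;
}
```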

Thanks to Matthew Fisher for the above formula. More about calibration in a moment. The real magic of the Kinect lies in its computer vision capabilities. As a quick demonstration of this idea, here is a very basic example that computes the average x-y location of any pixels in front of a given depth threshold. Source for v1: AveragePointTracking. Source for v2: AveragePointTracking2. Whenever we find a point that passes the threshold test, we add its x and y to a running sum:
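A reconstruction of that loop follows (the variable names are mine; it swaps in for the draw() of the minimal sketch above, with a raw-depth threshold picked by hand):

```
// Reconstructed averaging loop: find the mean (x, y) of all pixels
// whose raw depth is closer than the threshold.
int threshold = 700; // raw v1 depth value, chosen by hand

void draw() {
  image(kinect.getDepthImage(), 0, 0);
  int[] depth = kinect.getRawDepth();

  float sumX = 0;
  float sumY = 0;
  float count = 0;

  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      int rawDepth = depth[x + y * kinect.width];
      if (rawDepth < threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  if (count > 0) {
    // Mark the average location of all "close enough" pixels.
    fill(255, 0, 0);
    ellipse(sumX / count, sumY / count, 16, 16);
  }
}
```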

Inside the sensor bar, the four microphones are arranged along the bottom.

[Figure: A Kinect sensor, unwrapped]

Another figure shows all the hardware inside the Kinect that makes sense of the information being supplied by all the various devices.

[Figure: The Kinect sensor's data-processing hardware]

To make everything fit into the slim bar form, the designers had to stack the circuit boards on top of each other.

Some of these components produce quite a bit of heat, so a tiny fan, visible on the far right of the figure, sucks air along the circuits to keep them cool. The base contains an electric motor and gear assembly that lets the Kinect adjust its angle of view vertically.
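That tilt motor is reachable from software too. A hedged sketch, assuming Shiffman's v1 library exposes getTilt() and setTilt() (it does in the versions I know of):

```
// Hedged sketch: nudge the motorized tilt with the arrow keys,
// assuming Shiffman's Open Kinect for Processing v1 API.
import org.openkinect.processing.*;

Kinect kinect;
float deg;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  deg = kinect.getTilt(); // current motor angle
}

void draw() {
  image(kinect.getDepthImage(), 0, 0);
}

void keyPressed() {
  if (keyCode == UP) deg++;
  else if (keyCode == DOWN) deg--;
  deg = constrain(deg, 0, 30); // keep within the motor's safe range
  kinect.setTilt(deg);
}
```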

The depth map is produced entirely within the sensor bar and then transmitted down the USB cable to the host in the same way as a typical camera image would be transferred, except that rather than color information for each pixel in an image, the sensor transmits distance values.
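As a hedged sketch of what the host receives, this draw() (Shiffman's v1 API again, swapped in for the earlier one) treats the depth frame exactly like a camera frame, mapping each distance value to a gray level:

```
// Hedged sketch: render received distance values as grayscale,
// as if the depth map were an ordinary camera frame.
void draw() {
  int[] depth = kinect.getRawDepth();
  loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // Raw v1 values run 0..2047; nearer objects are drawn brighter.
    pixels[i] = color((int) map(depth[i], 0, 2048, 255, 0));
  }
  updatePixels();
}
```

This assumes the sketch window matches the 640 x 480 depth map, as in the minimal sketch above.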

You might expect the Kinect to time reflected signals directly, the way sonar does, but doing that over such a short distance would be difficult. Instead, the sensor uses a clever technique consisting of an infrared projector and a camera that can see the tiny dots that the projector produces.
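The underlying principle is triangulation: the projector and camera sit a known distance apart, so a dot's sideways shift (its disparity) encodes its depth. As an illustration only, not Kinect's internal code:

```
// Illustration of structured-light triangulation, not Kinect's
// internal code: depth = baseline * focalLength / disparity.
float depthFromDisparity(float baselineMeters, float focalLengthPixels, float disparityPixels) {
  return baselineMeters * focalLengthPixels / disparityPixels;
}
```

Nearer objects shift the dots more, so disparity grows as depth shrinks; dividing by it inverts that relationship.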

The next figure shows the arrangement of the infrared projector and sensor.

[Figure: The Kinect infrared projector and camera]

The projector is the left-hand item in the figure. It looks somewhat like a camera, but in fact it is a tiny infrared projector.

The infrared camera is on the right side of the figure. In between the projector and the camera are an LED that displays the Kinect device status and a camera that captures a standard 2D view of the scene.

The final figure shows my sofa as a person (okay, a camera) might see it in a room.

[Figure: My sofa]


