Life before and after Kinect

by Jef Mangelschots

This is a report on Bruce Weimer's lecture at the December 2010 meeting. He originally intended to give a lecture on the evolution of navigation for Leaf robots around the house.

But last month, Microsoft released the Kinect module. Originally intended for the Xbox game console, the Kinect can also be purchased standalone. As it turns out, it can be used for robotics applications like navigation, object detection, people recognition and more.

But I am getting ahead of myself.

Before Kinect

Bruce started the meeting with the following mantra: "Hi, my name is Bruce and I am a roboholic", upon which the group chanted "Hi Bruce". He then got started by explaining the arduous road he and his fellow Leaf addicts have followed in getting their offspring robots to do the seemingly impossible: not bumping into walls, knowing where you are after you have been kidnapped, getting through doors, ...

Bruce's work on Leaf is primarily focused on AI (written in Lisp) and on using vision for face recognition and navigation. The AI and face recognition are fairly well established in the Leaf project, but navigation remains a sore spot for indoor robots. Professional robots have become reasonably good at navigating indoor environments using laser scanners, but those devices cost well in excess of $2000, far outside the reach of DIY robot hobbyists. GPS is not a viable alternative either: Alex Brown has shown that it is not the way to navigate a robot indoors, where the required resolution is measured in inches rather than the tens of feet that is the best civilian GPS can achieve indoors.

Webcams, however, are very cheap (around $20) and interface seamlessly with modern PCs. Bruce believes it should be possible to navigate through the house using only a webcam. With that in mind, he set off, with the help of Roborealm's Steven Gentner, to figure out a way to keep his robot from bumping into the furniture.

It all began with TARGETS. The idea is to place easily recognizable objects of a specific color on the floor, like a 5 inch red dot. With the output of the Roborealm Blob filter piped to the Center of Gravity filter, the robot can find the targets on the ground and move around the house, following them like the breadcrumbs in Hansel & Gretel.
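
To make that concrete, here is a minimal sketch of the kind of steering loop this enables, assuming Roborealm's API is enabled and the Center of Gravity module publishes its usual COG_X and IMAGE_WIDTH variables. The read_rr_variable() helper and the drive() call are hypothetical stand-ins for however your robot reads Roborealm variables and commands its motors.

    # Hedged sketch: steer toward a colored floor target using the center of
    # gravity reported by Roborealm. COG_X and IMAGE_WIDTH are standard
    # Roborealm variables; read_rr_variable() and drive() are placeholders
    # for your own Roborealm and motor interfaces.
    def follow_target(read_rr_variable, drive):
        cog_x = read_rr_variable("COG_X")      # target's horizontal pixel position
        if cog_x is None:                      # no blob in view: rotate and search
            drive(left=0.2, right=-0.2)
            return
        width = read_rr_variable("IMAGE_WIDTH")
        error = (cog_x - width / 2.0) / (width / 2.0)  # -1 (far left) .. +1 (far right)
        base, turn = 0.4, 0.3 * error          # forward speed, proportional steering
        drive(left=base + turn, right=base - turn)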

This technique, however, does not give the robot any clue about its location. The breadcrumbs also have to be within visual range of each other, otherwise the robot will have a tough time finding the next one. And very few spouses can be convinced of the aesthetics of big red dots scattered around the house. The technique also only allows you to lay out a single path through the house: when multiple paths cross each other, it is difficult to keep track, unless you use different colors for different paths.

An improvement on this concept are ARROWS. Replace the dot with an arrow shape, laid out to point in a specific direction (e.g. North), and you give the robot a sense of direction (useful if the robot has no compass on board). Replace the Blob filter mentioned above with the Shape Match filter, which also reports the orientation of the shape as a parameter, and you get a better sense of direction. Still, the wife is not very pleased (wonder why).
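
As a rough illustration of why the orientation parameter matters, assume the camera looks straight down, with the top of the image aligned to the robot's forward direction and the left of the image to the robot's left, and assume the shape-matching step reports the arrow's in-image rotation (counter-clockwise, in degrees); the exact variable name depends on the filter. Under those assumptions the arrow's apparent rotation is the robot's compass heading:

    # Hedged sketch: recover a heading from a floor arrow laid out pointing
    # North. Assumes a straight-down camera with image "up" equal to robot
    # forward and image "left" equal to robot left; the arrow angle is the
    # in-image rotation reported by the shape-matching step (CCW, degrees).
    def heading_from_arrow(arrow_angle_ccw_deg):
        # With these conventions the arrow's apparent rotation equals the
        # robot's compass heading (clockwise from North).
        return arrow_angle_ccw_deg % 360.0

    # Example: an arrow that appears rotated 90 degrees counter-clockwise in
    # the image means the robot is facing East (heading 90).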

When the wife couldn't bear the sight of colored blobs on the floor anymore, Bruce had a bright idea: hang them on the wall. Fiducials are unique 2D binary patterns which Roborealm can detect in a scene captured by a webcam, and it can deduce quite a bit of information from that image. Roborealm can figure out at which angle it is looking at the fiducial and how far the camera is from it. With those parameters and some trigonometry, you can calculate your X/Y coordinates and your orientation. And by discerning between different fiducial patterns, you can match them against a database that knows which room each one is in (unless the wife has swapped them around, of course). This solves the "kidnapped robot problem". By giving an exact location, you are getting very close to a grown-up SLAM system.
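
The trigonometry goes roughly like this. Suppose the map knows where each fiducial hangs and which way it faces, and the vision step reports the distance to the fiducial, the angle at which it is being viewed, and where it sits in the image. The following is only a sketch; the angle conventions and parameter names are assumptions, not Roborealm's exact output.

    import math

    # Hedged sketch of localizing from one wall-mounted fiducial.
    # Known from the map: the fiducial's position (fx, fy) and the direction
    # its face points into the room (normal_deg, world frame, CCW from +X).
    # Assumed vision measurements (adapt to your filter's actual output):
    #   distance_m  - camera-to-fiducial distance
    #   view_deg    - angle between the fiducial's normal and the line from
    #                 the fiducial to the camera (signed, CCW positive)
    #   bearing_deg - where the fiducial appears relative to the camera's
    #                 optical axis (signed, CCW positive)
    def pose_from_fiducial(fx, fy, normal_deg, distance_m, view_deg, bearing_deg):
        # The camera lies along the ray leaving the fiducial at (normal + view).
        ray = math.radians(normal_deg + view_deg)
        rx = fx + distance_m * math.cos(ray)
        ry = fy + distance_m * math.sin(ray)
        # The camera's optical axis points back toward the fiducial, offset
        # by the in-image bearing.
        heading = math.degrees(ray) + 180.0 - bearing_deg
        return rx, ry, heading % 360.0

    # Example: fiducial at (0, 2) on the north wall, facing south into the
    # room (normal_deg = 270). Standing 1.5 m straight in front of it and
    # looking right at it gives roughly (0.0, 0.5, 90.0), i.e. the robot is
    # at (0, 0.5) facing the wall.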

Although fiducials solved a lot of indoor navigation problems, it still proved very hard to get an oversized Leaf robot to find and negotiate a doorway. Bruce has tried to solve this problem with omnidirectional vision (omnivision), with moderate success. The technique requires a good convex mirror, and varying lighting conditions remain a sore spot.

Another approach was GATES, using the AVM module in Roborealm. This approach uses the AVM library to take snapshots while the robot is manually guided along a path through the house. As long as the scenery remains relatively the same, it can then repeat those steps autonomously. By itself, however, this technique does nothing to avoid obstacles or deal with changes in the scenery; fiducials were better for that.

Another RSSC member, Rainer, also talked about a technique for navigating indoors with a webcam, called Visual Odometry. Its resolution, however, is rather coarse.

After Kinect

Now Bruce has always maintained that a Leaf robot should be able to do most of its navigation using a cheap webcam, and he has remained shy of investing hard cash in laser scanners or decent convex mirrors for omnivision. But then came Microsoft's answer to the Wii: the Kinect sensor, primarily meant to be attached to an Xbox. It is a glorified webcam, capable of detecting people in its field of view waving idiotically to slap a non-existent golf ball around. So why is this important to Bruce's quest for the Holy Grail of indoor navigation?

Kinect turns out to be a nifty gadget, capable of accurately sensing 3D scenes and objects. Within hours of its release, hackers had already reverse-engineered the Kinect and written drivers for Windows and Linux to suck out every bit of information this device has to offer. And that is quite a bit, as it turns out.

A Kinect module consists of:

  • RGB camera
  • depth sensor consisting of infrared laser projector and CMOS sensor (structured light 3D scanner)
  • multi-array microphone
  • motorized tilt mechanism
  • 3-axis accelerometer
  • multicolor LED

That is quite a bit for a $150 device. It is not as cheap as a webcam, but Bruce is willing to set aside his requirement to keep device cost under $50 because he gets an awful lot in return, and $150 is still an acceptable expenditure on a home-built robot.

Upon learning about the potential of the Kinect in robotics, he contacted Steven Gentner of Roborealm, with whom he had collaborated before, and suggested he take a look at the Kinect as a potential sensor source for Roborealm. Bruce got a reply back the next morning saying "yes, it is interesting and, BTW, here it is". The Roborealm Kinect module lets you exploit all of the Kinect's features.

In Roborealm, you have the option of using the gray-scale depth map itself, combining it with the RGB image, or coloring pixels based on distance. The latter allows you to use the simple Blob filter to pick out all objects closer than a certain distance.
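
To give a feel for what filtering by distance amounts to outside of Roborealm's GUI, here is a small hedged sketch in Python/numpy on a raw depth frame (in millimeters, the way the open-source Kinect drivers typically deliver it); the frame below is faked and the threshold is arbitrary.

    import numpy as np

    # Hedged sketch: treat every pixel closer than a threshold as an obstacle,
    # the same idea as coloring by distance and then running a Blob filter.
    # The depth frame is a 480x640 array of millimeters; zeros mean "no
    # reading" (too close, too far, or IR shadow) and must be ignored.
    def obstacle_mask(depth_mm, max_range_mm=1000):
        valid = depth_mm > 0
        return valid & (depth_mm < max_range_mm)

    # Fake frame for illustration only:
    depth = np.full((480, 640), 3000, dtype=np.uint16)   # wall at 3 m
    depth[200:300, 300:400] = 800                        # box 0.8 m away
    mask = obstacle_mask(depth)
    print("obstacle pixels:", int(mask.sum()))           # size of the "blob"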

Roborealm also lets you generate a line depicting the closest obstacle in every direction, similar to what LIDAR systems produce.

Beware: the Kinect does not work closer than about 2.5 feet, so traditional range sensors will remain on your shopping list for a while.
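
That line of closest things can be approximated by collapsing each column of the depth image to its nearest valid reading, which is roughly what a planar LIDAR scan gives you. A hedged sketch, again on a raw depth frame in millimeters, treating the roughly 2.5 foot dead zone as unknown:

    import numpy as np

    # Hedged sketch: reduce a Kinect depth frame (millimeters) to a single
    # LIDAR-like scan line: one "nearest obstacle" distance per image column.
    # Readings of 0, and anything inside the ~760 mm (~2.5 ft) minimum range,
    # are treated as unknown because the Kinect cannot see that close.
    MIN_RANGE_MM = 760

    def depth_to_scan(depth_mm):
        d = depth_mm.astype(np.float32)
        d[d < MIN_RANGE_MM] = np.inf      # drop invalid / too-close pixels
        scan = d.min(axis=0)              # nearest reading in each column
        scan[np.isinf(scan)] = np.nan     # column with no valid data at all
        return scan                       # length = image width

    # scan[i] is the distance to the closest thing seen in column i, which
    # can feed the same obstacle-avoidance code a laser scanner would.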

Conclusion

Kinect could turn out to be the Holy Grail of indoor navigation, for which Sir Bruce and the Knights of the Order of Leaf have been questing for many years now. Always envious of the RoboMagellan guys, who can use GPS outdoors, Leaf'ers have long been looking for a way to accurately localize and navigate indoors. Kinect does not work outdoors (at least not in direct sunlight, though it should work great in shade and in total darkness), so the RoboMagellan guys will have to stick to their GPSes and IMUs for a while. But indoors, Kinect will most likely become the tool to use.

What is amazing is the speed with which this thing has hit the robotics community. With the Kinect, more of the pieces needed to develop decent robot systems are becoming mature:

  • Willow Garage's ROS
  • Microsoft Kinect
  • OpenCV/Roborealm
  • webcams
  • a wide variety of affordable range and other sensors
  • cheap magnetometers, gyroscopes and accelerometers
  • servo motors
  • a wide variety of mobile computing platforms (Arduino, embedded Linux boards, netbooks, ... )
  • online services for PCB's and 3D printing
  • CAD packages (Blender, Sketchup, Solidworks, ...)

Links

Here are some links to other presentations that Bruce gave on this subject:

Other interesting links:

Videos

YouTube is already filled with a ton of Kinect videos. The robotics community is not the only bunch interested in this device; the gaming and 3D communities have their own uses for it.

Below are a couple of noteworthy robot-related videos: