Mar 3rd, 2011
Workshop
No workshop this meeting.
Class
"Homemade Servo Motors" by Thomas Messersmidt
Slides will follow ...
Here are some pictures of a servo tester (upper right), power window motor (lower left) and quadrature encoder (lower right).
Business
April class: LabVIEW by John Walters
May class: Navigation by Martin Mason
Show & Tell
John Walters showed us his work-in-progress low-riding hallway robot. The brains of the robot is a netbook computer; the control software is written in LabVIEW, which he will give a class about next month. The control panel shows a compass, a speedometer, a camera view (from a Logitech webcam), and a scanning sonar map (3 readings/sec, a 200-step sweep over 180°, detecting objects as far as 6 ft away).
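As an illustration of how such a sweep becomes a map, here is a minimal sketch (not John's actual code) that converts one sweep of sonar readings into x/y points for display. The 200 steps, 180° field of view, and 6 ft cutoff come from the description above; everything else is an assumption.

```python
import math

STEPS = 200      # servo steps per sweep (from the description above)
FOV_DEG = 180.0  # the sweep covers 180 degrees
MAX_FT = 6.0     # sonar detects objects out to about 6 ft

def sweep_to_points(ranges_ft):
    """Convert one sweep of sonar ranges (one reading per step)
    into x/y points in the robot frame, in feet."""
    points = []
    for step, r in enumerate(ranges_ft):
        if r is None or r > MAX_FT:
            continue  # no echo within range at this step
        angle = math.radians(step * FOV_DEG / (STEPS - 1))
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points
```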
He plans to measure longer distances using two parallel laser pointers, with RoboRealm detecting the dots in the camera image and the separation between them giving the distance.
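The geometry behind the two-pointer trick: because the pointers are parallel, the dots stay a fixed physical distance apart, so their pixel separation shrinks linearly with distance. A rough sketch of the calculation; the 10 cm baseline and 700 px focal length are assumed values, and RoboRealm would supply the detected dot positions.

```python
def laser_range_m(separation_px, baseline_m=0.10, focal_length_px=700.0):
    """Estimate the distance to a wall from two parallel laser dots.

    separation_px * distance = focal_length_px * baseline_m, so the
    range falls straight out of the measured dot separation. Calibrate
    focal_length_px once by imaging the dots at a known distance.
    """
    if separation_px <= 0:
        raise ValueError("dots not detected or overlapping")
    return focal_length_px * baseline_m / separation_px

# Dots 35 px apart with a 10 cm baseline and f = 700 px -> 2.0 m.
print(laser_range_m(35))
```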
He uses a PIC-based microcontroller on a board he developed himself using wire-wrap techniques, and programs it in Swordfish BASIC.
Bruce suggested a utility called Vcam that can split the feed from your webcam to several applications (e.g. RoboRealm and LabVIEW).
Rainer talked about his ongoing work mastering the Robot Operating System (ROS).
He posted the following articles on his blog:
Alex talked about his exploits using ROS for room mapping. He uses the gmapping package and its slam_gmapping node in ROS. This consists of driving his ROS-powered Rocky around the hallway, recording continuous readings from his laser scanner or Kinect sensor, and building a map along the way.
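As a small taste of that pipeline, here is a hedged rospy sketch (not Alex's code) that simply watches the occupancy grid slam_gmapping publishes on the standard /map topic while the robot drives:

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    # slam_gmapping republishes the growing map as an OccupancyGrid
    info = msg.info
    rospy.loginfo("map is %d x %d cells at %.3f m/cell",
                  info.width, info.height, info.resolution)

rospy.init_node('map_watcher')
rospy.Subscriber('map', OccupancyGrid, on_map)
rospy.spin()
```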
The left picture shows a map he built at home. A funny detail is in the lower room: the top-left corner shows a triangular side room. This is in fact a large mirror, which reflects the beam and makes the scanner see a phantom room behind the glass.
On the right is a map he built by driving around the floor right before the meeting. The result is somewhat disappointing: it shows a bend in the hallway, and hallways meeting at less than a 90° angle. Alex could not explain this, other than that the scan might have been done too hastily.
In the discussion it was suggested that for mapping you want to position your sensors (Kinect, laser, ...) as high as possible, so that you record only the morphology of the room and not the objects in it. For autonomous navigation around the room, on the other hand, you want the sensors as low as possible, to capture every obstacle in your path. Someone suggested that on a low robot you could also tilt the sensors upwards for mapping, as a compromise when you cannot mount them above head height; this works provided there are no obstacles between the low-mounted sensors and the walls.
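A quick flat-floor trigonometry check (illustrative numbers, single-ray model) shows why a modest upward tilt is enough for a low-mounted sensor to see mostly wall:

```python
import math

def beam_height_at(distance_m, mount_height_m=0.15, tilt_deg=20.0):
    """Height above the floor where the beam centerline of a low,
    upward-tilted sensor crosses a vertical plane distance_m away.
    Flat floor, single ray; all numbers are illustrative."""
    return mount_height_m + distance_m * math.tan(math.radians(tilt_deg))

# A sensor 15 cm up, tilted 20 degrees, looks about 1.6 m above the
# floor at a wall 4 m away -- high enough to clear most furniture.
print(round(beam_height_at(4.0), 2))  # -> 1.61
```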
The Kinect seems to behave very well, and will probably turn out to be the preferred sensor for most indoor robots. It has a usable range of 0.6 m to 6 m. It cannot be used outdoors (except at night), since sunlight washes out its infrared pattern.
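When feeding Kinect depth into mapping software, it helps to mask out readings outside that usable band first. A minimal numpy sketch, assuming the driver already delivers depth in meters:

```python
import numpy as np

KINECT_MIN_M = 0.6  # closer than this the Kinect reports nothing useful
KINECT_MAX_M = 6.0  # beyond this the depth estimate gets unreliable

def mask_depth(depth_m):
    """Set all out-of-range cells of a Kinect depth image (meters)
    to NaN so downstream stages can skip them."""
    depth = depth_m.astype(float).copy()
    depth[(depth < KINECT_MIN_M) | (depth > KINECT_MAX_M)] = np.nan
    return depth
```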
Indoor room mapping is the first step towards autonomous robot navigation: it lets the robot create its own map of the environment. The alternative, manually drawing a map and entering it into the robot's database, would be time-consuming.
Once the robot has a perception of its surroundings, you can start thinking about autonomous navigation. A potential technique for this is the D* algorithm for path planning.
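D* itself is built for efficiently replanning as the map changes under the robot; its static cousin A* shows the core grid-search idea in a few lines. A sketch over a small occupancy grid (0 = free, 1 = blocked), not a full D* implementation:

```python
import heapq, itertools

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid, Manhattan
    heuristic. D* adds incremental replanning on top of a search
    like this when cell costs change mid-drive."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                   # tie-breaker for the heap
    frontier = [(h(start), 0, next(tie), start, None)]
    came_from, seen = {}, set()
    while frontier:
        _, g, _, cur, parent = heapq.heappop(frontier)
        if cur in seen:
            continue
        seen.add(cur)
        came_from[cur] = parent
        if cur == goal:                       # walk parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and not grid[n[0]][n[1]] and n not in seen):
                heapq.heappush(frontier, (g + 1 + h(n), g + 1,
                                          next(tie), n, cur))
    return None                               # goal unreachable

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```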
It was also discussed how the Kinect can be used as a cheap laser scanner. The technique boils down to filtering out all the points in a certain height band and feeding them into mapping software that expects laser input. Here is a discussion on the internet.
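A hedged sketch of that filtering step: keep a horizontal band of the depth image (the row numbers here are assumptions that depend on the mounting height) and take the nearest valid reading per column, mimicking a planar laser scanner. Mapping software would then only need the column-to-bearing conversion from the camera intrinsics.

```python
import numpy as np

def depth_band_to_scan(depth_m, band=(230, 250)):
    """Collapse a horizontal band of a Kinect depth image (meters)
    into one range per image column, like a planar laser scan.
    band picks the rows at roughly the chosen height (assumed here)."""
    strip = depth_m[band[0]:band[1], :].astype(float)
    strip[strip <= 0] = np.nan          # 0 means "no return" on the Kinect
    return np.nanmin(strip, axis=0)     # nearest obstacle in each column
```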
Here is a video on YouTube that shows how other people are using ROS for mapping: