Main Page

Programming and using the FoxBoard:

I am not a software expert. I code in C, not C++; I never fully understood my C++ lessons. I compile the code via the "webcompiler" on the Acmesystems website, so I need an Internet connection when developing. I don't use the SDK because it is difficult to use on an old Windows PC like mine (it needs a lot of memory). I don't even know how to write a makefile, so I place all the functions in a single C file (~2000 lines) and cannot import other libraries. But it works...

When developing, the PC is connected via an Ethernet cable or via WiFi (if the robot moves). When using WiFi, the PC acts as an IP gateway. This is useful when developing in different places: I don't need to reconfigure the PC and the robot each time, and I always use the same WiFi configuration. When developing, I use telnet to control the robot, Internet Explorer to compile with the webcompiler, and an editor (Notepad++).


I use the virtual hard drive in RAM for log files, for programs while development is in progress, and for all the temporary files needed (2D pictures, command file).

A script downloads the program from the webcompiler site and runs it immediately afterwards.

HMI: Human-Machine Interface

The HMI is an interesting part of this project. It is made of several parts. It could perhaps have been made simpler, but it works.

The HMI is a web page (HTML) served by the web server of the FoxBoard. Below is a picture of this page.
First, the main program creates a 2D image representing a map of the environment. This image is created (in RAM) each time the robot receives an order, and each time it has completed that order. Several things are represented on this map:
  • The robot and its orientation
  • The blue points represent the trajectory of the robot.
  • The black points represent the obstacles detected by the sonars.
  • The green squares represent the authorized areas, which have previously been explored.
  • The pink squares represent unauthorized areas (because an obstacle is nearby).
The picture is a BMP image with a 600x600 resolution and 256 colors. Each pixel represents 1 cm. The BMP format is quite big (350 KB) but very easy to generate without an external library. The picture is generated directly by the main program.
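For reference, writing an 8-bit BMP by hand takes only a few dozen lines of C. The sketch below is my reconstruction of the idea, not the actual BOB3 code; the function name and the 72 dpi resolution fields are arbitrary. It assumes the image width is a multiple of 4 bytes so no row padding is needed (600 is).

```c
#include <stdio.h>
#include <stdint.h>

/* Little-endian field writers for the BMP headers. */
static void put_u16(FILE *f, uint16_t v) { fputc(v & 0xFF, f); fputc(v >> 8, f); }
static void put_u32(FILE *f, uint32_t v) {
    fputc(v & 0xFF, f); fputc((v >> 8) & 0xFF, f);
    fputc((v >> 16) & 0xFF, f); fputc((v >> 24) & 0xFF, f);
}

/* Write an uncompressed 8-bit (256-color, paletted) BMP.
   w must be a multiple of 4 so rows need no padding. Returns 0 on success. */
int write_bmp_8bit(const char *path, const uint8_t *pixels,
                   int w, int h, uint8_t palette[256][3])
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    uint32_t data_off  = 14 + 40 + 256 * 4;     /* headers + palette */
    uint32_t data_size = (uint32_t)w * h;
    /* BITMAPFILEHEADER (14 bytes) */
    fputc('B', f); fputc('M', f);
    put_u32(f, data_off + data_size);           /* total file size */
    put_u32(f, 0);                              /* reserved */
    put_u32(f, data_off);
    /* BITMAPINFOHEADER (40 bytes) */
    put_u32(f, 40); put_u32(f, (uint32_t)w); put_u32(f, (uint32_t)h);
    put_u16(f, 1);                              /* color planes */
    put_u16(f, 8);                              /* bits per pixel */
    put_u32(f, 0);                              /* BI_RGB, no compression */
    put_u32(f, data_size);
    put_u32(f, 2835); put_u32(f, 2835);         /* ~72 dpi, arbitrary */
    put_u32(f, 256); put_u32(f, 0);
    /* Palette: 256 BGRA entries */
    for (int i = 0; i < 256; i++) {
        fputc(palette[i][2], f); fputc(palette[i][1], f);
        fputc(palette[i][0], f); fputc(0, f);
    }
    /* Pixel rows, bottom-up as the BMP format expects */
    for (int y = h - 1; y >= 0; y--)
        fwrite(pixels + (size_t)y * w, 1, (size_t)w, f);
    fclose(f);
    return 0;
}
```

With w = h = 600 this gives 54 + 1024 + 360000 bytes, which matches the ~350 KB figure above.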

The commands made by the user are collected via a form in the HTML page. The web page contains JavaScript that collects the position selected by the user on the map (mouse click) and puts the coordinates in the form. The script also marks the position on the map with an HTML object (a cross). The user selects the command via radio buttons that are also placed in the form.

Once this form is completed, the "execute" button calls a CGI script. This script simply takes the result of the form and places it in a new text file (ASCII). This file contains the selected coordinates and the command, and is stored in RAM. The script then redirects back to the main web page.

Five times per second, the main program checks whether this file is present. If it is, it interprets the command and immediately deletes the file. Each time a new file is received, the command is executed, even if the previous command has not been completed.
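A minimal sketch of that polling step in C; the file path and the "cmd x y" ASCII layout are assumptions, not the robot's real format:

```c
#include <stdio.h>

/* Hypothetical RAM-disk location of the command file written by the CGI. */
#define CMD_FILE "/tmp/command.txt"

/* Check for a new order from the HMI. Returns 1 if a command was read
   (the file is deleted either way, so each order is consumed once). */
int poll_command(int *cmd, int *x_cm, int *y_cm)
{
    FILE *f = fopen(CMD_FILE, "r");
    if (!f) return 0;                       /* no new order */
    int n = fscanf(f, "%d %d %d", cmd, x_cm, y_cm);
    fclose(f);
    remove(CMD_FILE);                       /* consume the order immediately */
    return n == 3;
}
```

The main loop would call this every 200 ms and, on success, start executing the new order even if the previous one is still running.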

Algorithms:

The position of the robot is estimated via the two optical encoders mounted on the wheels. Forty times per second, the robot computes its position and its orientation from the counts read from the HCTL2032.

Control:
Unlike many robots, I don't control the progress of each wheel individually; I only control the orientation and the forward progress of the robot. The control is done with two PI (proportional-integral) controllers. I don't use a derivative term because it is hard to use with low-resolution encoders, and I don't expect high performance anyway.
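A generic PI controller of the kind described, as a sketch; the gains and output limits are illustrative, not the values tuned for BOB3:

```c
/* One PI controller; the robot uses two of these
   (orientation error and progress error). */
typedef struct {
    double kp, ki;            /* proportional and integral gains */
    double integral;          /* accumulated error */
    double out_min, out_max;  /* actuator saturation limits */
} PI;

/* One control step: error is (setpoint - measurement), dt in seconds
   (0.025 s at 40 Hz). Returns the saturated command. */
double pi_step(PI *c, double error, double dt)
{
    c->integral += error * dt;
    double out = c->kp * error + c->ki * c->integral;
    if (out > c->out_max) out = c->out_max;   /* simple clamping */
    if (out < c->out_min) out = c->out_min;
    return out;
}
```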

Sonar:
The exploitation of the sonar sensors is simple. I only compute the 2D coordinates of the detected obstacle, using the 2D position of the robot, its orientation, and the distance measured by the sonar. If the distance is too small or too large, the measurement is discarded, because the result would not be precise or reliable enough. The robot takes 8 measurements per second.

Cartography:
The cartography is a 2D table with 5 cm squares. This table records the positions (squares) where the robot (more precisely, its center) can or cannot go. The squares are authorized (green), unauthorized (pink) or unexplored (white). When an obstacle is detected, all the squares located less than 10 cm from it are marked unauthorized.
Each time an obstacle is detected, all the squares located inside the sonar cone (30 degrees) and between the robot and the obstacle (up to 12 cm before the obstacle) are marked authorized.
Every square located inside the perimeter of the robot is also marked authorized.

Path planning:
This path planning algorithm is entirely home-made. It uses the 2D cartography described above and runs in two phases.
First, I just try to find a path (not an optimal one). Starting from the robot's current square, I assign a number to every reachable square. The start square is numbered zero. If a square adjacent to the current square is authorized (green), it receives the current number plus 1; if it is unexplored (white), it receives the current number plus 2. This process continues until the goal square has been reached (or the goal turns out to be unreachable).
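This numbering is a weighted wavefront expansion. A small self-contained sketch follows; the grid size and names are mine, and a naive relaxation loop stands in for whatever traversal order the real program uses:

```c
#include <limits.h>

#define N 8                  /* toy grid for illustration */
enum { UNEXPLORED = 0, AUTHORIZED = 1, UNAUTHORIZED = 2 };

/* Fill cost[][] with wavefront numbers from start cell (si, sj):
   entering an authorized cell costs 1, an unexplored cell costs 2,
   and unauthorized cells are never entered. Repeats until stable. */
void wavefront(unsigned char state[N][N], int si, int sj, int cost[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            cost[i][j] = INT_MAX;                /* unreachable so far */
    cost[si][sj] = 0;
    static const int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                if (cost[i][j] == INT_MAX) continue;
                for (int k = 0; k < 4; k++) {    /* 4-connected neighbors */
                    int ni = i + di[k], nj = j + dj[k];
                    if (ni < 0 || nj < 0 || ni >= N || nj >= N) continue;
                    if (state[ni][nj] == UNAUTHORIZED) continue;
                    int step = (state[ni][nj] == AUTHORIZED) ? 1 : 2;
                    if (cost[i][j] + step < cost[ni][nj]) {
                        cost[ni][nj] = cost[i][j] + step;
                        changed = 1;
                    }
                }
            }
    }
}
```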

The (unsimplified) path is then determined by counting down from the goal square back to the current square.
The path then needs to be simplified. The algorithm tries to find a straight segment from the current position to a point on the unsimplified path, as far away as possible, that remains in authorized (or unexplored) areas; and so on. All the segments found constitute the simplified path.

Example with real data:
Take a look at the video.

Realignment of the robot:
Not finalized.
This part is meant to recognize walls from the 2D points measured by the sonars. I use a Hough transform to detect lines that match the 2D points, using only the most recent 2D measurements.
If a line is detected, the robot realigns its orientation, given that all the walls are assumed to be parallel or perpendicular to each other. If the wall has already been discovered before, the robot also realigns its distance to the wall.
This part of the program is still in development. For now the algorithm consumes too much computing power for the FoxBoard; it needs to be optimized.

Soccer playing:

BOB3 now has a CMUcam onboard. This camera can identify an object of a pre-defined color and directly sends the 2D coordinates of the object in the image to the FoxBoard. With this information and the known location of the robot, the program computes an estimated position of the ball on the field. This estimate is mainly imprecise in depth. Objects are detected well when their color contrasts strongly with the background. I use a green ball on a parquet floor; with an orange ball, it doesn't work at all.

Here is how BOB3 plays soccer. First, I place the robot at the "goalkeeper" position, in the center of the goal. BOB3 memorizes this position.
Then I can move the robot anywhere on the field. From that position, I order it to play soccer. It turns around, 10 degrees at a time, looking for the green ball with the CMUcam. Once it has detected the ball, it estimates its position.
It then computes the shooting position. This position is aligned with the center of the goal and the ball, 35 cm behind the ball. The ball has to be located at most 60 degrees from the median line of the goal, and the shooting position has to be clear of any obstacle.

If all these conditions are met, BOB3 goes to the shooting position, avoiding known obstacles, using the path planning algorithm described above. The ball is treated as a temporary obstacle and is cleared from the obstacle matrix just after the action.

Once it has reached the shooting position, it orients itself toward the ball and looks for it once again. If the ball is not too far off-center, it begins to shoot; otherwise it recenters itself using the newly measured position of the ball.
Then it shoots: it first drives to the position of the ball, then continues on to the center of the goal, without stopping.

Look at the video.

Leon | 04/05/2008