Micromouse is an event where small robot mice solve a 16×16 maze. It began in the late 1970s, although there are some indications of events as early as the 1950s. Events are held worldwide and are most popular in the UK, the U.S., Japan, Singapore, India and South Korea. The maze is made up of a 16 by 16 grid of cells, each 180 mm square, with walls 50 mm high. The mice are completely autonomous robots that must find their way from a predetermined starting position to the central area of the maze unaided. The mouse needs to keep track of where it is, discover walls as it explores, map out the maze and detect when it has reached the goal. Having reached the goal, the mouse typically performs additional searches of the maze until it has found an optimal route from the start to the center. Once the optimal route has been found, the mouse runs that route in the shortest possible time.
The robot's main controller is PisiXBee5, a universal robot driver board developed in TTÜ Robotiklubi. It has an Atmel ATxmega32A4U main processor, a dual H-bridge motor driver, a 6-axis IMU, an RGB LED and a power converter. For movement, two small DC motors are used, with a magnetic encoder system for speed and position feedback. For wall distance detection, four infrared LEDs with infrared phototransistors are used: two sensors point diagonally and two point forwards at a slight angle. The IR detection pairs are wired to the controller's ADC inputs. Side-looking sensors were later added for extra stability.
We used the following components for the robot:

| Component | Description |
|---|---|
| Controller | Atmel 8/16-bit microcontroller ATxmega32A4U |
| Motor | Pololu micro metal gear motor with 30:1 gearbox, 1000 RPM at the output shaft |
| Encoder | Pololu magnetic encoder, 12 CPR resolution |
| Distance sensor | 940 nm IR LED + 940 nm phototransistor with daylight filter |
| 3D accelerometer and 3D gyroscope | LSM330DLC, I2C interface, 16-bit output resolution |
| Radio communication | XRF ISM-band radio module |
The robot outputs debug information straight to a terminal. Below is a simple output example after the robot has turned right:
During the software development stage it became apparent that the encoders we chose were not sufficient for complex movements and could only be used for distance measurement. There was some debate about the mounting angle of the forward-facing distance sensors relative to the robot, but experiments showed that this small angle allowed the robot to navigate perfectly during complex maneuvers such as driving on a diagonal. The only drawback is that the sensors are not symmetrical on both sides, which introduces a small error when comparing sensor values. The diagonal sensor angles also worked as they should: we could sense the edge of the wall and keep the robot straight at all times. One of the reasons we placed the sensors at that angle was so that we could see the “next square” when making decisions about movements. This made the mapping a few seconds faster in the end, because we did not have to drive into every dead end. The side sensors that we added to the robot at the end made navigating in changing wall conditions a bit simpler. They also let us say definitively whether there is a wall on either side. With only the diagonal sensors we got a few false positives and negatives, because the robot was not always in the correct position to make the decision.
For the next implementation I would choose a bigger microcontroller (an STM32 or something similar). This would provide more hardware timers, floating-point calculations (for PID), hardware averaging and filtering of the ADC, etc. On the mechanical side I would use better motors and better encoders with a resolution of at least 4000 counts per motor revolution. This would allow the robot to do proper speed profiling, enabling more complex maneuvers. For simplicity, the next design would be a single-PCB design, because there is no longer any need to use the ready-made microcontroller platform that we originally chose because it was an example for the „Teeme Ise“ course.