
Micro Mouse Robot


Mysion:
Turns out that Cortex-M0 cores don't support ITM, the ARM debug trace protocol. Pity, I was looking forward to using it. I ordered a UART-to-USB converter on Amazon instead.

I also want to post about a maze simulator I created about a month ago for the competition. It's coded in Python and is intended to help with creating an algorithm to solve the maze. It was designed for use with mouse version 1, so it won't do any of the fancy moves such as cutting across multiple corners, but it will solve the maze. For version 1 I was worried about how much time it would take to solve the maze; there is a ton of room for optimization in mapping. With the simulator I tried 4 different approaches. All the comparisons in this post are between different maze mapping methods/algorithms.

Picture of the program in action



Do note that in all the approaches, unmapped squares are treated as fully open with no walls until they are mapped.

Approaches
1. Chase Center: In this method the bot is hyper-focused on finding the center. It always follows the current best-case path to the center, assuming the unmapped squares are empty.
A major weakness of this method is that the bot can flip-flop between multiple paths: when a route is discovered to be more difficult than an alternative, it will re-route to map that alternative. It also always follows the path from the starting square to the center; the difficulty of the route from the bot's current position to the center is ignored.

2. Map It All: In this method the bot simply seeks out the closest unmapped square. This was the worst by far.

3. Local Chase Center: In this mode the bot makes getting to the center its number-one goal and does not consider the path from the starting point to the center. Starting the pathfinding from its local position, it gets to the center. Then it backtracks and maps any squares that are needed for the path from the start square to the center. This method can still have some flip-flopping issues, but the scale is smaller than for method 1. It also tends to end the mapping near the start square, which saves time getting back to the start.

4. This is built on top of method 3 and is an attempt to limit flip-flopping. If a new, faster route is found, its distance from the current position of the bot is measured. If it's above a threshold, the bot holds its current route and attempts to find the center. In some cases this added 10 or so cycles compared to the previous method; however, on one test maze it saved almost 100 cycles.
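The threshold test in method 4 could look something like this (a minimal sketch; the function name, route representation, and the use of Manhattan distance are my assumptions, not the simulator's actual code):

```python
def should_reroute(bot_pos, new_route, threshold=3):
    """Switch to a newly discovered faster route only if it passes
    near the bot's current position (method 4's flip-flop limiter).

    bot_pos is an (x, y) square; new_route is a list of (x, y) squares.
    A faster route discovered far away is ignored, and the bot holds
    its current route until it reaches the center.
    """
    bx, by = bot_pos
    # Manhattan distance from the bot to the nearest square of the route
    dist = min(abs(bx - x) + abs(by - y) for x, y in new_route)
    return dist <= threshold
```

With a threshold of a few squares, a new route starting right next to the bot triggers a re-route, while one across the maze is deferred.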

Picture of the raw data.



I tested these methods on 7 mazes I found online; they were all past competition mazes. Using method 1 as a baseline, method 2 was 11% slower and method 3 was 19% faster. Method 4 was the clear winner at 13% faster.

I'll post the source code on the Hack IO page for the project soon; I'll be sure to mention it when I do.

Siwastaja:
A great, inspirational thread, thanks. Add a few more sensor modules and you have a great sensor subsystem capable of doing actual localization, mapping, and obstacle avoidance! If you have a well-working, calibrated gyro giving you a heading that doesn't drift on a small time scale (seconds), you could even implement a "drunk mode" that scans the environment by oscillating the heading while moving, filling in the dead angles of the sensors.

For route planning, I have good experience implementing A*, or more specifically the Theta* variant, with the added twist of testing whether the robot can turn, in addition to the line-of-sight tests. If your robot isn't round and can hit obstacles while turning, this is of course important. For a robot that is bigger than one "pixel" in the grid and can turn, I found that a good optimization is a 1-bit lookup table for the robot shape: for example, a 32*32-pixel robot area rotated into 16 different headings would be const uint32_t shape_lut[16][32], built like shape_lut[direction][y] |= (1<<x). Now you can do comparisons against your map with a single bit shift and a single AND operation, comparing 32 cells at once. In my Theta* route planner this optimization alone brought around a 20x speedup, IIRC.
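To keep the examples in this thread in one language, here is the same bitmask trick sketched in Python, where plain ints stand in for the uint32_t rows (all names here are mine, not from Siwastaja's code):

```python
def collides(shape_lut, occupancy, direction, top, left):
    """Test a 32x32 robot footprint against an occupancy grid.

    shape_lut[direction][y] is a 32-bit int: bit x is set where the
    robot body covers cell (x, y) at that heading (e.g. 16 headings).
    occupancy[row] is one int per map row, with bit c set for an
    obstacle in column c.  One shift plus one AND tests a whole
    32-cell row of the footprint at once.
    """
    for y, row_bits in enumerate(shape_lut[direction]):
        map_row = occupancy[top + y]
        # Align the map row with the shape row, then AND: any
        # overlapping set bit means the robot hits an obstacle.
        if (map_row >> left) & row_bits:
            return True
    return False
```

The same comparison in C is a single `>>` and `&` per row, which is where the big speedup over per-cell checks comes from.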

I don't know if A* or Theta* is the most efficient or simplest option for a tight and complex maze; it probably isn't. However, it works great for real-world route planning, where following a wall tightly would result in a long and stupid route, and it's simple to implement; the pseudocode widely available by googling actually works.

But route finding and solving the maze is always the easy problem, by orders of magnitude, compared to the problem of creating an accurate map from your sensor data and localizing in it. If you want to work on the maze solving first, make those corridors wider than 1 pixel, use some non-zero dimensions for the robot, and add some "noise" so that the walls are jagged.

GopherT:
For your programming logic, look up "flood filling"



And a bigger simulation...



Mysion:
Yep, that's exactly what I'm using! I should have specified that in the last post. The little numbers in each square are from the flood fill. The small colored squares show whether a tile has been mapped: red means it hasn't been visited and blue means it has.

It was surprisingly easy to implement. I have some extra logic in the flood fill that increases the difficulty if the robot has to turn to go around a corner, but other than that it's a straight flood fill.
With regards to the mapping methods, all of them use flood fill to find squares; each method just varies the immediate goal and sets thresholds.
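A flood fill with a turn penalty can be sketched like this (my own sketch of the idea, not the simulator's code; I'm assuming walls are stored as sets of open directions per square, with unmapped squares defaulting to fully open as described earlier in the thread):

```python
import heapq

def flood_fill(walls, goal, size=16, turn_cost=1):
    """Flood-fill distance map with a turn penalty.

    walls[(x, y)] is the set of open directions from that square;
    squares missing from the dict are treated as fully open.
    States are (square, heading) so that changing heading -- i.e.
    turning around a corner -- can be charged extra cost.
    """
    DIRS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}
    dist = {}                             # best cost per (square, heading)
    pq = [(0, goal, h) for h in DIRS]     # seed the goal in every heading
    while pq:
        cost, (x, y), heading = heapq.heappop(pq)
        if ((x, y), heading) in dist:
            continue
        dist[((x, y), heading)] = cost
        for d, (dx, dy) in DIRS.items():
            if d not in walls.get((x, y), set(DIRS)):
                continue                  # a wall blocks this direction
            nx, ny = x + dx, y + dy
            if not (0 <= nx < size and 0 <= ny < size):
                continue
            step = 1 + (turn_cost if d != heading else 0)
            if ((nx, ny), d) not in dist:
                heapq.heappush(pq, (cost + step, (nx, ny), d))
    # Collapse headings: cheapest cost to reach each square at all.
    best = {}
    for (sq, _), c in dist.items():
        best[sq] = min(c, best.get(sq, c))
    return best
```

With `turn_cost=0` this degenerates to the plain flood fill; raising it makes straight runs look cheaper than zig-zag routes of the same length.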

coppercone2:
You might want to call the sensor "failure mode" a heuristic or span feature if you document this for an employer; as written it's a bit confusing and improper.
