In order to analyse our self-localization and world modelling, as well as to assist in machine learning approaches to various parts of the robot software, we need ground truth data – i.e. the correct information about where the robot and the ball actually are. With that information we can compare the robot's perception of the world with reality. This not only aids in evaluating our performance, but also allows the robot to gain insight into what it is doing wrong and to learn from it.
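As a minimal sketch of this comparison (the coordinates and function name are illustrative, not our actual code), the localization error can be computed by contrasting the robot's estimated pose with the ground truth pose:

```python
import math

def localization_error(estimate, ground_truth):
    """Compare an estimated pose (x, y, theta) against ground truth.

    Returns the Euclidean position error (in the same unit as the
    inputs) and the absolute heading error wrapped to [0, pi] radians.
    """
    ex, ey, etheta = estimate
    gx, gy, gtheta = ground_truth
    position_error = math.hypot(ex - gx, ey - gy)
    # Wrap the angle difference so that e.g. +pi and -pi compare as equal.
    heading_error = abs(math.atan2(math.sin(etheta - gtheta),
                                   math.cos(etheta - gtheta)))
    return position_error, heading_error

# Example: the robot believes it is at (1.0, 2.0) facing 0.1 rad,
# while the overhead cameras place it at (1.3, 1.6) facing -0.1 rad.
pos_err, head_err = localization_error((1.0, 2.0, 0.1), (1.3, 1.6, -0.1))
```

Logging these errors over a match gives a simple quantitative measure of how well the self-localization tracks reality.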
In order not to re-invent the wheel, we reused the vision system of the Small Size League. Recycling the heads of last year's robots, we mounted a camera above each field half. These cameras observe the field, looking for the ball and the robot markers. In the picture below, the markers are placed on top of our dummy robot. We can also put them on top of the real robots, though this is currently a bit tricky; the new torso, however, has been prepared to make this easier.
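The marker pattern on top of each robot is what lets the overhead cameras recover not just position but also heading. A minimal sketch of the idea, assuming the marker centroids have already been detected and mapped to field coordinates (the two-marker layout and function name are illustrative, not our actual pattern):

```python
import math

def robot_pose_from_markers(front_marker, rear_marker):
    """Derive a robot's field pose from two marker centroids.

    Assumes the image coordinates have already been converted to
    field coordinates, and that the two markers lie on the robot's
    front-rear axis.
    """
    fx, fy = front_marker
    rx, ry = rear_marker
    # The robot centre lies midway between the two markers.
    cx = (fx + rx) / 2.0
    cy = (fy + ry) / 2.0
    # The heading points from the rear marker towards the front marker.
    theta = math.atan2(fy - ry, fx - rx)
    return cx, cy, theta

# Example: front marker at (1.2, 0.0), rear marker at (0.8, 0.0)
# yields a robot centred at (1.0, 0.0) facing along the x-axis.
x, y, theta = robot_pose_from_markers((1.2, 0.0), (0.8, 0.0))
```

In practice a marker pattern also has to disambiguate robot identities, which is why multiple coloured patches are used rather than just two.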
As usual there is always room for improvement, but so far the system looks quite promising. The coming months will show how useful it is in our preparations.