We received an email from Edwin Steffens in which he offered to help us understand his code for the AIBO.
This morning we went to the Robolab and gained a little more understanding of how the images are actually captured by the AIBO camera and transformed into .png files. This gave us ideas on how to implement the recognition of the coloured stickers on the corners of our chessboard, which are used for the calibration procedure. Even this seemed a huge task.
We noticed that a lot of tricks are required to satisfy MATLAB's format for RGB pictures. It was clear to us that there was not enough time left to make our own code work together with MATLAB and Tekkotsu.
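The main trick, as far as we understood it, is that MATLAB represents an RGB picture as an m-by-n-by-3 array (one plane each for red, green and blue), while Java's BufferedImage packs every pixel into a single int. A minimal sketch of the unpacking we had in mind (the class and method names are our own, not part of Tekkotsu or MATLAB):

```java
import java.awt.image.BufferedImage;

public class MatlabExport {
    // Unpack a BufferedImage into the height x width x 3 layout that
    // MATLAB uses for RGB images (values 0..255, channel order R,G,B).
    static int[][][] toRgbArray(BufferedImage img) {
        int h = img.getHeight(), w = img.getWidth();
        int[][][] rgb = new int[h][w][3];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = img.getRGB(x, y);          // packed as 0xAARRGGBB
                rgb[y][x][0] = (p >> 16) & 0xFF;   // red
                rgb[y][x][1] = (p >> 8) & 0xFF;    // green
                rgb[y][x][2] = p & 0xFF;           // blue
            }
        }
        return rgb;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);  // one pure red pixel at (0,0)
        int[][][] rgb = toRgbArray(img);
        System.out.println(rgb[0][0][0]);  // prints 255
        System.out.println(rgb[0][0][1]);  // prints 0
    }
}
```

The resulting array could then be written out (or handed over) in whatever container MATLAB reads; dividing each value by 255.0 would give the double-in-[0,1] variant instead.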
After taking some snapshots of the chessboard with the coloured stickers, we decided to head back to the Euclides location to try to build the corner-point recognition algorithm.
In an attempt to apply our newly acquired knowledge, we wrote some Java code that should be able to take a snapshot from inside a Java program. The calibration algorithm we made yesterday needs to take snapshots while running in order to work. Finding the right interface in TekkotsuMon proved challenging, and even though we can neither test our code nor finish it, here it is:
// name: marc bron & aziz baibabaev
// student nr: 0130486 & 0222704
// the main method that calls the calibration
// Note: tio.jar is an io package available
// at: ftp://ftp.cse.ucsc.edu/pub/charlie/jbd/tio-v2/tio.jar

import tio.*;
import org.tekkotsu.mon.*;
import java.awt.*;

public class Calibrate {
    public static void main(String[] args) {
        System.out.println("This program tries to calibrate the camera. Press any key to begin.");
        Console.in.readChar();

        VisionListener listener = new VisionListener();
        System.out.println("To stop press 0, to go on press 1.");
        int x = Console.in.readInt();

        if (x == 1) {
            System.out.println(listener.getImage());

            // read an image and find the corner points
            while (x != 0) {
                System.out.println("To stop press 0, to go on press 1.");
                x = Console.in.readInt();
                Image image = listener.getImage();
                // image is (we have heard) a BufferedImage, i.e. effectively
                // an array of pixels; by looping over it we can scan for the
                // colours of the stickers at the corner points.

                // once the corner points A, B, C and D are found,
                // Calibration(A,B,C,D) is called, which then tells us to move
                // the board, until the board lies correctly.
            }
        } else {
            System.out.println("goodbye");
        }
    }
}
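For the pixel scan that we only sketched in comments above, the idea could look roughly like this. This is an untested sketch under our own assumptions: we take the sticker colour to be saturated red, hard-code guessed thresholds, and estimate the sticker position as the centre of mass of the matching pixels; class, method names and thresholds are all ours, not Tekkotsu's:

```java
import java.awt.Point;
import java.awt.image.BufferedImage;

public class StickerFinder {
    // True if a pixel looks like a red sticker: strong red channel,
    // weak green and blue. Thresholds are guesses to be tuned on real snapshots.
    static boolean isSticker(int packed) {
        int r = (packed >> 16) & 0xFF;
        int g = (packed >> 8) & 0xFF;
        int b = packed & 0xFF;
        return r > 150 && g < 100 && b < 100;
    }

    // Scan the whole image and return the centre of mass of all
    // sticker-coloured pixels, or null if none are found.
    static Point findSticker(BufferedImage img) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                if (isSticker(img.getRGB(x, y))) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return null;
        return new Point((int) (sumX / count), (int) (sumY / count));
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        img.setRGB(4, 6, 0xFF0000);  // one red "sticker" pixel
        System.out.println(findSticker(img));  // prints java.awt.Point[x=4,y=6]
    }
}
```

To find all four corner points A, B, C and D, one could run this scan once per quadrant of the image, or use four differently coloured stickers with one threshold test per colour.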
All in all, we didn't have enough documentation for the available code, and even though we roughly knew where we had to go, we couldn't test our ideas (and correct them) because of our limited knowledge of the platforms used and of the interfaces needed to make them work with our code.
Currently we are writing documentation and preparing for tomorrow's presentation.
We have finished writing the documentation. Our final report is available via a link on the Day5 page.
End of Day 4