Context
As my research concentrates on the perception part of robotics, it is embedded in the Computer Vision group, part of the Informatics Institute.
Started Labbook 2022.
December 28, 2021
- Decided to call my new Duckiebot dagobert. Unfortunately, the flashing fails on the verification step.
- The command dts init_sd_card --help gives no information on the steps, although this post gives some clues.
- Looked into ~/.dt-shell/commands-multi/daffy/init_sd_card/command.py. The SUPPORTED_STEPS are licence, download, flash, verify, setup.
- Running the command with --steps setup works. Trying whether booting works with this card.
December 27, 2021
- Assembled Duckiebot db21.
- Note that the Duckiebot hat has Stemma QT connectors next to the I2C connectors. Also, the display is the same as the Nanosaur's.
- As a bonus, the Duckiebot DB21 now also has a time-of-flight sensor on its nose.
December 22, 2021
- The PhD thesis of Sascha Jannik Steyer covers dynamic state estimation based on radar.
December 20, 2021
- The Biomorphic Intelligence Lab has 4 vacancies; the third one is on event cameras (and the fourth one on neuromorphic chips).
December 17, 2021
- Read Boudewijn Bodéwes' thesis. Notice that in Fig. 12 the tree canopies are clearly visible in the disparity error map, indicating that there are fluctuations in the disparity (leaves) but not in the segmentation.
December 9, 2021
November 24, 2021
- Looking through the ETHZ 2017 projects for a traffic-light reference implementation. Found none.
- Also looked through the 2019 and 2020 projects. The Intersection Navigation project has no traffic-light code, but steers the robot left/straight/right based on ros parameters, and has a nice bird's-eye view of the intersection with the cv2.warpPerspective function (see the sketch after this list).
- The Object Detection project uses a TPU to recognize objects, and recognizes 7 classes (traffic light is no. 3). It doesn't distinguish the color-state of the traffic light.
- From Udacity, there is still the CarND Object Detection lab. This lab makes use of TensorFlow models. The link to the model zoo doesn't work, because there are now a TensorFlow v2 zoo and a TensorFlow v1 zoo. The Jupyter notebook was built for TF1. The KITTI-trained model looks interesting (and the Edge TPU models!).
- Luckily, there is also a traffic-light classifier assignment from Intro to Self-Driving Cars. Here is an example solution from one of the ISDC students.
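- Below a minimal sketch (not the project's own code) of such a bird's-eye view with cv2.warpPerspective; the four source points are hypothetical placeholders, in the real project they would follow from the camera homography:

    # Hedged sketch: bird's-eye view of the road plane with cv2.warpPerspective.
    # The source points are assumptions; derive them from the camera calibration.
    import cv2
    import numpy as np

    img = cv2.imread("intersection.png")   # hypothetical input frame
    h, w = img.shape[:2]
    # Trapezoid on the road plane in the camera image ...
    src = np.float32([[0.2*w, 0.9*h], [0.8*w, 0.9*h], [0.6*w, 0.5*h], [0.4*w, 0.5*h]])
    # ... mapped onto a rectangle in the bird's-eye view.
    dst = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    M = cv2.getPerspectiveTransform(src, dst)
    birdseye = cv2.warpPerspective(img, M, (w, h))
    cv2.imwrite("birdseye.png", birdseye)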
November 22, 2021
- This article on Domain Adaptation could be interesting for Henk.
- Also the SeasonDepth dataset could be interesting. The dataset itself is available on github.
November 18, 2021
November 12, 2021
- Martin pointed me to two interesting papers:
November 9, 2021
- Followed a class from ConstructSim on detecting objects with YOLO on a NanoSaur, based on an Nvidia Jetson (see the sketch after this list).
- The building instructions of the NanoSaur include an expansion board and two LEDs for the eyes.
- The Darknet port for ros-foxy can be found on github.
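- Below a minimal stand-alone sketch of YOLO detection through OpenCV's dnn module (the class itself uses the ros2 Darknet port; the cfg/weights file names here are assumptions):

    # Hedged sketch: Darknet YOLO inference via OpenCV dnn, outside ros2.
    # yolov4-tiny.cfg / yolov4-tiny.weights are assumed to be downloaded.
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1/255.0, swapRB=True)

    img = cv2.imread("frame.png")           # hypothetical camera frame
    class_ids, scores, boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        print(cid, score, box)              # box = (x, y, width, height)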
November 4, 2021
November 2, 2021
- Started up the JetRacer again. The latest modification was on May 18, 2021, when I was experimenting with the DreamVU camera.
November 1, 2021
September 20, 2021
- Impressive talk on unsupervised depth estimation, inspired by the original monoDepth paper (CVPR 2017); code, models and the talk are available at the website.
- This was augmented with ego-motion estimation in the SynDeMo paper (ICCV 2019), unfortunately without code or models.
September 7, 2021
August 24, 2021
- Maël is planning to use a pyramid pooling layer between the final feature maps and the fully connected layers, as done by Zhao et al (see the sketch below). Note that RetinaNet is also a ResNet with an FPN on top, as indicated in the original RetinaNet paper and by Zhu et al, who use it for vehicle behavior recognition from a bird's-eye view.
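- Below a minimal sketch of what such a pyramid pooling layer could look like (in PyTorch, with the bin sizes of Zhao et al; the channel count is an assumption, not Maël's actual design):

    # Hedged sketch: pyramid pooling between the final feature map and the FC head.
    import torch
    import torch.nn as nn

    class PyramidPooling(nn.Module):
        def __init__(self, bins=(1, 2, 3, 6)):     # bin sizes as in Zhao et al
            super().__init__()
            self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(b) for b in bins)

        def forward(self, x):                      # x: (N, C, H, W) feature map
            # Pool at several grid sizes and concatenate into one fixed-length
            # vector, so the FC layers become independent of the input size.
            return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

    feat = torch.randn(2, 512, 13, 13)             # hypothetical backbone output
    vec = PyramidPooling()(feat)                   # shape (2, 512*(1+4+9+36))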
August 16, 2021
July 27, 2021
July 7, 2021
June 10, 2021
June 3, 2021
June 2, 2021
May 25, 2021
May 17, 2021
- Found this paper on improving photorealism (in GTA V).
May 11, 2021
- For large-scale point clouds, RandLA-Net seems to be state-of-the-art.
March 28, 2021
March 24, 2021
March 18, 2021
- Encountered an article which uses wide-FOV cameras for localization while driving in urban environments.
March 17, 2021
- Looked at the Institut Pascal Long-Term dataset, which contains recordings of the same electric shuttle on the same parking lot over a full year. The extract_rosbag.py script extracts the images and imu data, although the rosbag also contains lidar and gps data.
- Installed ros-melodic-ros-comm on WSL Ubuntu 18.04 of my home computer, which allows me to do a rosbag info 2020-01-22-10-19-51.bag (see the sketch after this list).
- This rosbag not only contained the lidar and gps data, but also 648 velodyne messages!
- Actually, there is a long-term visual localization workshop, which was held in both 2019 and 2020. In 2020 there were only two submissions with public results!
- The workshop has multiple datasets; for the visual localization for autonomous vehicles challenge it uses these three:
- Extended CMU Seasons.
The original location of the dataset is only available on the Wayback Machine. A subset is still available at this mirror.
- RobotCar Seasons v2
- SILDa Weather and Time of Day Dataset
Also this dataset is no longer available, not even on the Wayback Machine. This post gives some impression of the dataset. It is also no longer available on github, although it was used in the Image Matching workshop as well. Found the dataset at a personal github. The download.sh seems to work; I could download the camera-intrinsics.tar.xz. Left the other 60 GB on the Imperial College server for the moment.
- In 2019 the same datasets were used (with a previous version of RobotCar Seasons).
- In 2019 there were two winners and one runner-up of the Visual Localization Challenge:
- Paul-Edouard Sarlin, Cesar Cadena, Roland Siegwart, Marcin Dymczyk, From Coarse to Fine: Robust Hierarchical Localization at Large Scale
- Tianxin Shi, Shuhan Shen, Xiang Gao, Lingjie Zhu, Yurun Tian, Qingtian Zhu, Visual Localization Using Sparse Semantic 3D Map
- Hugo Germain, Guillaume Bourmaud, Vincent Lepetit, Sparse-to-Dense Hypercolumn Matching for Long-Term Visual Localization
- In 2020 there was a clear winner on all challenges and a runner-up for the Autonomous Vehicle Challenge:
- Paul-Edouard Sarlin: Hierarchical Localization with hloc and SuperGlue
- Martin Humenberger, Yohann Cabon, Nicolas Guerin, Julien Morat, Jérôme Revaud, Philippe Rerole, Noé Pion, Cesar de Souza, Gabriela Csurka, Late Fusion of Global Image Descriptors for Visual Localization
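- Below a minimal sketch of the rosbag inspection mentioned above, done with the rosbag Python API instead of the rosbag info command (assumes the ros-melodic Python environment):

    # Hedged sketch: list topics, types and message counts of the recording.
    import rosbag

    bag = rosbag.Bag("2020-01-22-10-19-51.bag")
    info = bag.get_type_and_topic_info()
    for topic, data in info.topics.items():
        print(topic, data.msg_type, data.message_count)
    bag.close()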
March 4, 2021
March 1, 2021
- TU Delft now has the HERALD Lab: Human-aware Robust Artificial Intelligence for Automated Driving. They are looking for 5 PhD students.
February 28, 2021
- Maybe I should try to install the login manager slim on my RB5, as suggested on askubuntu.
February 24, 2021
- Continue with the install of open3d on an ARM-processor.
- The Eigen/Core problem is back, but the build continues after uncommenting open3d_link_3rdparty_libraries, which was previously removed (due to the dependency on the latest cmake).
- Added typedef CBLAS_ORDER CBLAS_LAYOUT; /* for backward compatibility */ to cpp/open3d/core/linalg/BlasWrapper.h. Checked with apt-cache policy libblas-dev, which indicates v3.7.1 (while v3.8.0 has been available since Nov 2017).
- Cloning lapack-release, which contains libblas v3.8.0.
- Have not installed v3.8.0 (yet), because the CBLAS_LAYOUT hack works. Open3D is now fully built. Continued with the make install command, which installs the package in ~/open3d_install, as specified with the cmake command. The Open3DConfig.cmake is installed in ~/open3d_install/lib/cmake/Open3D.
- Continue with the Open3D python package.
- Quanser also has a self-driving car studio which is Jetson TX2 based.
- Everything is built. Continue with depthai. Both sudo wget -qO- http://doc.luxonis.com/_static/install_dependencies.sh | bash and python3 install_requirements.py work, although the optional package open3d is skipped. Installed that package directly with pip3 install /home/jetson/git/Open3D/build/lib/python_package/pip_package/open3d-0.12.0+31e5d4d1-cp36-linux-aarch64.whl.
- Still, python3 depthai_demo.py crashes on a SIGILL in gotoblas_dynamic_init() (see the BLAS check after this list).
- Installed v3.8.0 of libblas, still crashes.
- Upgrading to Ubuntu 20.04.
- Reading Highly Automated Vehicles and Self-Driving Cars, which gives a nice recent overview of what is needed for self-driving cars. Could be used as a reference for the ADAS levels.
- Upgrading to Ubuntu 20.04 was the solution. python3 install_requirements.py gave some warnings. Downloading python3-opencv_4.2.0 for amd64 solved that (although numpy and pyyaml were still missing).
- Received a tracker-extract internal error; not clear if this is related to python depthai_demo.py.
- Also the disparity channel can be accessed, with python3 depthai_demo.py -s metaout previewout,12 disparity_color,12.
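- Below a small sketch of the BLAS check hinted at above: it prints the BLAS/LAPACK configuration numpy was linked against, to see whether the SIGILL comes from OpenBLAS selecting a wrong CPU target on this ARM board (a diagnostic idea, not a fix):

    # Hedged sketch: inspect numpy's BLAS/LAPACK build configuration.
    import numpy
    print(numpy.__version__)
    numpy.__config__.show()    # lists the BLAS/LAPACK libraries numpy uses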
February 23, 2021
- Continue with making cmake 3.18 on the JetRacer.
- During compilation I read the article in the Journal of Field Robotics on Fusion of neural networks for LIDAR-based evidential road mapping, which indicates that the NuScenes dataset has LIDAR scans tuned to detect nearby obstacles; for road detection at larger distances this dataset is less useful.
- The dataset with the (unique) VLP32C LIDAR is publicly available.
- The algorithm was also compared on the KITTI dataset, although there too critical notes were made on unlabelled laser scans (because no corresponding camera-segmentation label was available).
- The installation finished successfully, although I built cmake v3.20.
- Needed a fresh terminal to access the new version of cmake. Also made a fresh build directory for Open3D.
- Continue with make -j$(nproc).
- Everything seemed to go OK, but target all fails after completing ext_filament.
- Had to dig deep for the error, but PinHoleCameraIntrinsic fails on finding the include Eigen/Core. Could try to define EIGEN_INCLUDE_PATH as /usr/local/include/eigen3, or specify -DENABLE_PRECOMPILED_HEADERS=OFF. For the moment, a new make continues building (without any modification). Tried sudo apt install libeigen3-dev, but I already have the latest version. The include files can be found at /usr/include/eigen3. eigen3 is mentioned in CMakeCache.txt and Open3DTargets.cmake.
- Now the build fails on another part of the project: src/lib_json/CMakeLists.txt:125 patch does not apply. The patch is an optional MSVC patch, so commented that line out of build/CMakeFiles/ext_jsoncpp.dir/build.cmake.
- Next the visualisation gui fails on a missing GLFW/glfw3.h. Installed it with sudo apt-get install libglfw3-dev and tried again.
- Now the visualisation/gui fails on Eigen/Core. Included -I/usr/include/eigen3 in cpp/open3d/visualisation/gui/CMakeFiles/GUI.dir/flags.make.
- Next is a dependency on sudo apt-get install libfmt-dev. That fails because the include tries to include format.cc, so tried to solve that with sudo apt install libspdlog-dev. That was not the solution. Made a link from /usr/include/fmt to /usr/include/spdlog/fmt/bundled/format.cc. Also fmt/ranges.h was needed, which is part of fmt v7.1.3. Cloned the library from github and did a sudo make install.
- The json patch problem was back, so commented that line out in ~/git/Open3D/3rdparty/jsoncpp.
February 22, 2021
- Matlab now contains the GNC algorithm for pose graph optimization, which handles outliers in SLAM much more efficiently.
February 20, 2021
- Try again to install DepthAI on the Nvidia Jetson. They claim that the same install instructions as for Ubuntu work fine on the Jetson.
- Part of the install instructions of open3d on an ARM-processor is the installation of libblas-dev (the instruction that crashes). Yet, after installing the prerequisites the crash is still there.
- Building Open3D with the Nvidia Jetson options -DBUILD_CUDA_MODULE=ON and -DBUILD_GUI=ON specified for cmake.
- The required version of cmake is 3.18, which is not available for Ubuntu 18.04. The suggestion was to include cmake 3.18 from the repository, but no ARM implementation was available.
- Protected the 3.18-specific calls with an if(${CMAKE_VERSION} VERSION_GREATER_EQUAL "3.18.0") guard. Still, the different packages have problems with open3d_link_3rdparty_libraries. Hope it works, because I still got errors: $: unknown language.
- Making the C++ library seems to work. After more than an hour the make failed on the package ext_faiss, which requires a higher version of cmake. Lowered the requirement, which didn't help because of a missing CMAKE_HOMEPAGE_URL_COMPILER_ENV_VAR.
- Cloned v3.18.6 from gitlab and did a ./bootstrap.
February 9, 2021
February 5, 2021
- Continue with the Windows installation of the DepthAI camera.
- The command python3 is not known, but just python works fine.
- Got a warning that open3d was not installed, because my platform wasn't armv7l. Still, the OAK-1 seems to work fine.
- It not only works from a powershell, but also from a regular command line. When starting the demo, mobilenet-ssd\FP16 was downloaded to recognize objects.
- Used my Thunderbolt USB-C cable directly. Still 30 fps.
- Also tried the OAK-D with python depthai_demo.py -s metaout previewout,12 disparity_color,12 and python depthai_demo.py -s metaout previewout,10 depth,10. No complaints about open3d.
February 4, 2021
- In their video, SLAMcore demonstrated the combination of a Jetson and a DepthSense, including panoptic segmentation.
February 3, 2021
- Received the OpenCV AI Kits. First trying installation instructions on Windows on my nb-dual.
- Had to reboot, so I switched to the Ubuntu installation. Works directly, both for the OAK-1 and OAK-D.
- Read the post on DepthAI on Nvidia Jetson.
- Continued with the AI JetRacer. Installed depthai-0.4.1.1 with python3 -m pip install -U depthai.
- Building the depthai wheel takes a long time. Not sure if I should have installed it by building the source from github.
- Running python3 install_requirements.py also tries to build a depthai wheel (and pyyaml).
- After several tries, the install succeeds, except the optional dependency on open3d==0.10.0.0.
- When needed, it is possible to build open3d from source.
- Yet, python3 depthai_demo.py crashes with an illegal instruction (SIGILL in gotoblas_dynamic_init()).
- Running cheese simply says no device found. Cheese works on my nb-dual, but does not list the OAK-D as a possible device (see the sketch below).
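- Below a small sketch to check whether the OAK-D enumerates on the USB bus at all; cheese not listing it is expected, as it is not a UVC webcam. It assumes the pyusb package and the Movidius Myriad X vendor id 0x03e7:

    # Hedged sketch: look for Myriad-based devices (such as the OAK-D) on USB.
    import usb.core

    devices = usb.core.find(find_all=True, idVendor=0x03e7)
    for d in devices:
        print("found:", hex(d.idVendor), hex(d.idProduct))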
January 31, 2021
January 21, 2021
January 19, 2021
Previous Labbooks