Context
As my research concentrates on the perception part of Robotics, it is embedded in the Computer Vision group of the Informatics Institute.
June 25, 2024
- Looked at the provided helper code, which is a Jupyter notebook with more details on the CSV files and how to display the frames.
- No details about calibration or rectification. Also no details on which outdoor_ride contains which city.
- Looked at the Discord group. On June 18 a user was also asking for the camera intrinsics and extrinsics from a checkerboard calibration.
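- If a checkerboard recording for this camera ever becomes available, the missing intrinsics could be recovered with OpenCV's standard calibration. A minimal sketch (the 9x6 inner-corner pattern and the calib/*.png image directory are assumptions, not part of the dataset):
import glob
import cv2
import numpy as np

pattern = (9, 6)  # assumed number of inner corners per row/column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):  # assumed directory of checkerboard images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
# K holds the focal lengths and principal point (fku/fkv, u0/v0); dist the distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
print(K, dist)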
June 24, 2024
- Unzipped output_rides_23.zip, the smallest (2 GB) archive from the FrodoBots-2K dataset.
- This archive contains several recordings; I looked at the latest. The directory is called ride_40305_20240504100410, where the last part of the name indicates the date of the recording (used epochconverter.com to convert the Unix timestamp 1714817062.533 of the first frame to Saturday 4 May 2024 10:04).
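- For reference, the same timestamp conversion in Python (standard library only):
from datetime import datetime, timezone

ts = 1714817062.533  # first-frame Unix timestamp mentioned above
print(datetime.fromtimestamp(ts, tz=timezone.utc))  # 2024-05-04 10:04:22.533000+00:00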
- The GPS coordinates are 22.753862,114.090904 (latitude,longitude), which corresponds with Lingnan Blvd, Longhua District, Shenzhen, Guangdong, China:
- The recording was made on a rainy day, with the FrodoBot navigating an artificial circuit (on the top of a building?!):
- The first directory in this archive was also recorded at this location, with even more rain (and fish-eye lens, and audio recording)
- Back to the last directory. The m3u8 is an index file, used to load the different video streams from the *.ts files.
- Tried ffmpeg -i c692cdf7be484e8b6d61468f21d64991_ride_40305__uid_s_1000__uid_e_video.m3u8 frame%08d.png.
- That created a lot of frames (killed it at frame #9604). According to front_camera_timestamps_40305.csv, there should be 17926 frames. This is frame #9604:
- During the ffmpeg conversion, I saw:
Input Stream #0:0: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1024x576, 20 fps, 20 tbr, 90k tbn, 40 tbc
Output Stream #0:0: Video: png, rgb24(pc, gbr/unknown/unknown, progressive), 1024x576, q=2-31, 200 kb/s, 20 fps, 20 tbn
- For each packet it gives a warning [mpegts @ 0x561f34ebbac0] Packet corrupt (stream = 0, dts = 1480500), yet it still worked.
- I looked at the ts_from_file option in the ffmpeg documentation.
-
- Created a new SceneLib2/data/SceneLib2.cfg with the settings:
cam.width = 1024;
cam.height = 576;
cam.fku = 195;
cam.fkv = 195;
cam.u0 = 512;
cam.v0 = 288;
cam.kd1 = 9e-06;
cam.sd = 1;
- Only cam.width and cam.height are known. I have set cam.u0 and cam.v0 to half those values and kept the rest at the initial settings. It is difficult to estimate those without calibration.
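- To make the fku/fkv guess less arbitrary, it can be tied to an assumed field of view via the pinhole relation fku = (width/2)/tan(hfov/2). A small sketch (the 90-degree horizontal FOV is purely an assumption; the initial value of 195 would correspond to a much wider FOV of roughly 138 degrees):
import math

width, height = 1024, 576
hfov_deg = 90.0                                      # assumed horizontal field of view
fku = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
fkv = fku                                            # square pixels assumed
print(fku, fkv)                                      # 512.0 for a 90-degree FOV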
- Running ./MonoSlamSceneLib1 doesn't work, not even for the original *.cfg. Trying a reboot, because it seems that no Linux display pops up. Also xclock fails.
- Rebooting helped. Was able to perform MonoSLAM on the outdoor_ride23, although no features were found:
- The hypothesis is that the kernels also have to be scaled for the higher resolution. A good first attempt would be to check whether the ffmpeg conversion can also reduce the resolution (for another ride in this set).
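- A possible way to test this: let ffmpeg downscale while extracting, via its scale filter. A sketch (wrapped in Python; the 512x288 target is just an example, untested on this ride):
import subprocess

m3u8 = "c692cdf7be484e8b6d61468f21d64991_ride_40305__uid_s_1000__uid_e_video.m3u8"
# extract frames at half resolution with ffmpeg's scale filter
subprocess.run(["ffmpeg", "-i", m3u8, "-vf", "scale=512:288", "small_frame%08d.png"], check=True)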
June 19, 2024
June 12, 2024
- Milan looked at the RadarIQ sensor, but the GitHub pages have been turned private.
- Also, the forum is gone, although some discussions can still be found at the Internet Archive.
- The latest general discussion is from March 2023.
- Already in March 2022 there were complaints that several issues posted on GitHub were not solved, and ROS1 support seems abandoned.
- In July 2022 somebody was working on a Raspberry Pi, yet no ARM libraries are available. Also an issue with ROS2 Galactic was mentioned.
- The only SDK still available is the Python one (v1.0.7). The documentation is gone; the Wayback Machine could also not find https://radariq-python.readthedocs.io/.
- Found some RadarIQ libraries (v5.6.0) for the Arduino.
- The YouTube Getting Started video shows how to use their RadarIQ Controller, which is no longer available for download.
- I seem to remember that I downloaded that viewer when I bought the sensor, but cannot find it in the Autonomous Driving labbooks. I also didn't install it on my home computer. Found the pledge (Dec 2020), with delivery around April 2021.
- Found v1.0.4 of the RadarIQ_Controller (also the latest download from the RadarIQ getting started page) on nb-dual (native Ubuntu 20.04 partition), from March 2022.
- The controller gives some font warnings, but starts up nicely. No RadarIQ models found, so time to connect the sensor. The viewer starts from the command line, but from the File Explorer it complains that it is a shared library, not an executable.
- Connected it with USB-micro to USB-B, linked to a USB-B to USB-C converter. On the third try I saw the device registered with dmesg | tail. At that moment the viewer also sees the RadarIQ module:
- I had already installed radariq, because from radariq import RadarIQ also worked. Even the usage example worked nicely, printing all rows.
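- Roughly the shape of that usage example, reconstructed from memory of the radariq README (the method names are assumptions and may differ per package version):
from radariq import RadarIQ

riq = RadarIQ()              # assumed: auto-detects the serial port
riq.start()                  # assumed: start capturing
for row in riq.get_data():   # assumed: iterate over captured rows
    print(row)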
- Looked into the package at ~/.local/lib/python3.8/site-packages/radariq, but there are no examples inside the package (those would have to be found on the now-private GitHub page).
- Found the radariq_ros in ~/ros2_ws/tmp/radariq_ros (March 31, 2022).
- Followed the instructions in the readme of this directory. Made ~/radariq_ws/src. The colcon build was successful (some rosidl policy CMP0148 warnings). RViz starts, showing the radariq module as a blue square (RobotModel successfully loaded). The point clouds are not visible, because the frame [radar] does not exist. With ros2 launch radariq_ros_driver transfer_base_radar.launch.py (and transfer_base_link) the TF tree can be built. Still no point clouds visible.
- The radariq_object.launch.py should display the radariq_markers (I don't have those). ros2 topic list also doesn't show that topic. /radariq is one of the topics, yet I see no echo on it.
- Looked at the start of view_radariq_pointcloud.launch.py. Three nodes are started, including point_cloud_publisher. Yet, it fails when trying to call riq.set_certainty (a conflict with the latest Python package version).
- Commented that line out; point_cloud_publisher still fails, but now more fundamentally. It tries to call create_cloud, which calls pack_into() with three items (instead of the four expected). Seems to be a Python version problem.
- At the same time an error occurs with get_logger().error(error) (the argument should be a string, not an exception).
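- The logger problem at least has a one-line fix: convert the exception to a string before passing it to the rclpy logger. A minimal standalone sketch of the idea (the node name is just illustrative, not the driver's actual node):
import rclpy
from rclpy.node import Node

rclpy.init()
node = Node("radariq_logger_demo")        # illustrative node name
try:
    raise RuntimeError("demo error")      # stand-in for the failing radariq call
except Exception as error:
    node.get_logger().error(str(error))   # was: get_logger().error(error), which rejects non-strings
rclpy.shutdown()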
- Looked with pip show radariq. I have v1.0.6 installed (v1.0.7 is the latest). Also note that radariq-ros-driver depends on this package, but that package cannot be found (not with a pypi.org search, nor with pip install radariq-ros-driver==). Of the radariq package itself, only v1.0.6 and v1.0.7 are available to pip install.
June 5, 2024
- The Autonomous Grand Challenge now has several submissions on OpenReview, which is not that strange because the deadline was moved forward to June 3.
- For the Occupancy and Flow track the following submission was made:
- For the Mapless Driving track the following submissions were made:
- LGmap: Local-to-Global Mapping Network for Online Long-Range Vectorized HD Map Construction, team LGmap, first place.
- Leveraging SD Map to Assist the OpenLane Topology, team Xiaomi EV, second place.
- UniHDMap: Unified Lane Elements Detection for Topology HD Map Construction, team CrazyFriday, third place.
- MapVision: CVPR 2024 Autonomous Grand Challenge Mapless Driving Tech Report, team MapVision, sixth place, not included in the final official ranking.
- Scene Perception and Reasoning with SD Map: A solution for Autonomous Driving Challenge: Mapless Driving, team BoschXCASW / supertrainer, ranked 11.
May 16, 2024
- The Thorgeon Rechargeable Li-ion Battery 18650 3.7V 1200mAh batteries are too long to fit into the AI Racer PRO. The Thorgeon batteries are 69 mm long, which was specified as the package dimension.
- I ordered the 18650 batteries I used before, which are less than 67 mm (actually 65 mm).
- The Autonomous Grand Challenge has the possibility to publish a Technical Report on OpenReview. No submissions yet.
May 14, 2024
- Ordered the Thorgeon Rechargeable Li-ion Battery 18650 3.7V 1200mAh for the DART. Also ordered the brass rings.
- For velocity readings, they recommend including both magnets and a white ring on the wheels. To read those two cues, a Hall sensor and an IR sensor are also needed. I do not see those in the parts list.
- Looking into the arXiv paper.
- The Hall sensor has just three outputs, connected to the Arduino. It could be a U1881, just like this design.
- The IR sensor gate seems to be an HW-201. The same shop also has a Hall sensor module, which has some extra resistance and/or capacitance. The sensor itself is an A3144, which is a unipolar sensor.
- The U1881 seems to have its connectors on both sides, instead of one side. So, the A3144 seems a good choice.
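- For reference, how the magnet/Hall pulses would translate into a velocity estimate once the sensor is read out (the magnet count and wheel radius below are assumptions, not DART values):
import math

magnets_per_rev = 4        # assumed number of magnets on the wheel
wheel_radius_m = 0.03      # assumed wheel radius

def wheel_speed(pulses: int, dt: float) -> float:
    """Linear speed in m/s from Hall pulses counted during dt seconds."""
    revolutions = pulses / magnets_per_rev
    return 2 * math.pi * wheel_radius_m * revolutions / dt

print(wheel_speed(pulses=20, dt=0.5))  # 20 pulses in 0.5 s -> about 1.9 m/s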
April 25, 2024
April 19, 2024
- Used the spare Nvidia Jetson to display the German Open livestream.
- The Duckiebot DB21 SD card contains Jasper's Duckiebot (non-graphical Ubuntu 18.04)
- Instead used the SD card from NanoSaur (and the Wifi-stick from DB21).
- Did some updates, but the screen became black during the process. Started up again without problems.
- Display worked on the big screen.
-
- The China3DV challenge is finished; the Hugging Face leaderboard is published.
- For the Mapless Driving challenge no innovation award was given, only the first prize, for Ren Jianwei, Shuai Jianghai, Li Gu and Zhao Muming from Xiaomi Automobile and Beijing Forestry University.
- For the Occupancy and Flow challenge the innovation award went to Zhang Haiming, Yan Xu, Liu Bingbing and Li Zhen from the Chinese University of Hong Kong (Shenzhen) and Huawei-Noah, for designing a multi-modal fusion distillation paradigm, which uses a multi-modal teacher network to distill the student network at multiple scales and effectively utilizes simulation datasets for joint training, achieving good performance. The first prize went to Zhou Zhengming, Liang Cai, Hu Liang, Wang Longlong and He Pengfei from Xiaomi Robot.
April 2, 2024
- The two CVPR competitions I have selected have a sibling track at China3DV, where the deadline is already April 10.
- A team description paper is required, but none are visible (yet). The leaderboard will be published on Hugging Face at the end of the competition.
- Both the Mapless Driving and the Occupancy and Flow challenges have a Hugging Face page and a GitHub page.
- For Mapless Driving this is Hugging Face and the github page.
- For Occupancy and Flow this is Hugging Face and the github page.
-
- Unfortunately, the ordered Micro JST PH 4-pin connector doesn't fit. The connector on the LDRobot is 8 mm wide, while the connector I received is 10 mm.
March 22, 2024
- I am reading Convolutional social pooling for vehicle trajectory prediction (CVPR 2018)
- This paper extends Social LSTM: Human trajectory prediction in crowded spaces (pedestrian based) with a convolutional approach.
- Both papers are high-impact papers (cited 821x and 3247x respectively). I should check if this also applies to intersections.
- Another innovation is the maneuver-based decoder. On the highway the maneuvers can be clustered into 2x3 classes; how many do we use in our paper? (My hypothesis: the same amount, because we concentrate on the lane chosen when approaching the intersection. That works for braking and switching lanes, but it is less clear how to classify the maneuver when other lanes cross ours.)
- Saw two interesting intersection papers that cite Social LSTM:
- The first paper concentrates on Australian tangential roundabouts, with 4 clear conflict points.
- The second paper is the other extreme, using drone data from 4 Indian intersections, including buses, two-wheelers and rickshaws, creating an overload of possible conflict points.
- The second paper also cites Convolutional Social Pooling. A new paper to check is:
- I love their Intention Prediction figure in their introduction:
- They train and verify on the NGSIM and INTERACTION datasets. One of the algorithms they compare with is CS-LSTM, as we do.
- They also include traffic regulations to exclude maneuvers that are illegal, as in Generalizable intention prediction of human drivers at intersections (2017 IEEE Intelligent Vehicles Symposium).
- The introduction is organized into physics-based, planning-based and pattern-based predictions, which originates from Human motion trajectory prediction. Does this also apply to vehicles at intersections? The Human Motion paper applies it to the self-driving vehicle application domain, but especially towards vulnerable road users.
March 15, 2024
- Yesterday one of the ICAI-PhDs presented SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation
March 13, 2024
- Looking for the 12.6 V battery charger (see Package Content tab).
- Found a 12V - 1A Netgear charger, whose connector fits (like the other 12V chargers, which supply fewer mA).
- I see no charger LED lighting up when connecting.
- The 18650 batteries that I have are less than 67 mm (checked due to the note on the JetRacer page).
-
- Looked at the Mini Pupper 2 - Lidar instructions. The LD Lidar is mounted on a 3D printed Lidar holder. The wiring is not specified, although the last OAK-D figure gives some clue.
- Step 7.1 from Mini Pupper 2 Pro gives a better view.
- The connector is the 6-pin connector next to the USB-A port, with four IO pins between GND and +5V. The connector has only 3 wires; only IO 1 is used:
- Started looking for a USB-to-TTL serial converter, but couldn't find one.
- This Serial TTL converter would be great, but seems a bit expensive.
- This Converter has a 6 pin connector, but that would be double female.
- This converter is at least 3x less expensive.
- I had the feeling that I had this converter.
- The DART uses the YDLiDAR X4, which has a (micro?) USB connector, although a 6-pin breakout board is also visible in the parts image (with 4 pins connected).
- The X4 datasheet shows an 8-pin interface.
- The X4 user manual shows the USB Adapter board, which converts USB to UART.
- Next to the CP2102, there are also CP2104 converters.
- The CP2102N is the next generation, with USB-C and a faster transfer rate. Only $6.
- The DART build instructions are not explicit about how they use the USB adapter board.
- It seems that the JetRacer ROS also uses a 6-pin connector for its LiDAR (connection 12):
- The instruction manual only shows one part of the connection of the Lidar cables:
- Received a TTL converter from the CO2 program of the Technical Center. The red LED on the board lights up, but I don't see any drivers installed. The connectors on the LD Lidar are too close to each other to connect with the breakout cables. I could try the ones from the Qualcomm board, because those cables are smaller.
- As the figure in section 5.3 of the LDRobot STL 06P datasheet indicates: the red wire is the TTL Tx with the LIDAR data output. The connector is a ZH1.5T-4P1.5mm
- Another name for the ZH connector is Micro JST PH 4-pin.
- Plugged the CO2 TTL converter into nb-ros (Ubuntu 18.04). The device was recognized with lsusb as a QinHeng Electronics HL-340 USB-Serial adapter.
- According to dmesg | tail the ch341-uart converter was attached to ttyUSB0, so it seems to work under Linux.
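- A quick way to check that the converter actually passes LiDAR bytes through, with pyserial (assuming /dev/ttyUSB0 and the 230400 baud rate of the STL-06P UART interface):
import serial  # pip install pyserial

# /dev/ttyUSB0 and 230400 baud are assumptions for this wiring
with serial.Serial("/dev/ttyUSB0", baudrate=230400, timeout=1.0) as port:
    data = port.read(64)
    print(len(data), data.hex())  # non-empty output means the LiDAR is streaming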
- In section 5.6 they show a demo application; they connect the LD SPL-06 via a connector board (micro USB-ZH connector). The board is not specified.
- On the ldrobot github, they specify a serial port module like a cp2102 module.
- With this package, you start the ld06 node with ros2 launch ldlidar_stl_ros2 ld06.launch.py, followed by ros2 launch ldlidar_stl_ros2 viewer_ld06.launch.py. This is the same package as the Mini Pupper uses.
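- To verify that the ld06 node actually publishes scans, a minimal rclpy subscriber could be used (the /scan topic name is an assumption; check it with ros2 topic list):
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanChecker(Node):
    def __init__(self):
        super().__init__("scan_checker")
        # "/scan" is an assumed topic name for the ldlidar_stl_ros2 node
        self.create_subscription(LaserScan, "/scan", self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        self.get_logger().info(f"got scan with {len(msg.ranges)} ranges")

rclpy.init()
rclpy.spin(ScanChecker())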
- Cannot find a cp2102 module with ZH or JST connector.
- On the JetRacer ROS, they use another LDRobot Lidar (the A1), which they launch with roslaunch jetracer lidar.launch. The A1 is not in the list of supported sensors on the ldrobot GitHub.
- Could look at the Mini Pupper ROS github for a launch file for the STL-06P.
March 12, 2024
- Looked at the required components list of Delft's Autonomous-driving Robotic Testbed.
- As Lidar they are using the YDLIDAR X4, which has a range of 10 m, a weight of 180 g and a diameter of 68 mm. The 5 V device uses at most 500 mA. There is no shop.
- Checked both SOS solutions and Seeed Studio.
- SOS has the following LIDAR products:
- Seeed has the following LIDAR products:
- The RobotShop.eu has YDLiDAR X4 for 114 euro.
- For the moment the LDROBOT STL-06P we have seems to be OK. 12m range, diameter 38mm, 5V, 290mA. Communication interface is UART@230400.
- RobotShop.eu has only the LDROBOT D300, also with a 12 m range, which is a development kit that contains the LD19 sensor (https://www.ldrobot.com/images/2023/05/23/LDROBOT_LD19_Datasheet_EN_v2.6_Q1JXIRVq.pdf, same interface), which seems to be a triangulation LiDAR instead of a DTOF LiDAR. The development kit also has a serial cable, a charging cable and an adapter board or speed control plate (i.e. control board). The D200 is based on the LD14P LiDAR, and has a USB adapter board and a serial test cable.
-
- The battery was not specified. At Conrad you can select the batteries with T-plugs. The lowest-capacity pack was quite thick, so the 7.4 V 3000 mAh 20C Eco-Line seems a better choice. Dimensions (l x w x h): 139 x 25 x 46 mm.
- Actually, the 7.4 V 1800 mAh looks bigger, but has smaller dimensions (l x w x h): 87 x 35 x 17 mm.
- Actually, there seem to be several synonyms for the plug system. You could also select the technology (LiPo) and the load rating (Belastbaarheid).
- When I select softcase, capacity and load rating, I get one option: Hacker LiPo 7.4 V.
- Note that a BEC plug is also mentioned, but the build instructions only show connecting the T-plug:
-
- Actually, the basis is the JetRacer Pro, which has suspension.
- We have the JetRacer AI.
- There is a LIDAR extension of the JetRacer, the JetRacer ROS AI.
- This ROS kit uses the following components:
- Raspberry Pi RP2040 Chip - microcontroller
- 37-520 Metal Encoder Motor -
- 11 wire AB phase hall speed sensor - to read out the encoder
- MPU9250 - IMU sensor
- One speaker, dual microphone.
- Yet, note that the Waveshare Board also includes the additional I/O connections (including sound):
- Checked the JetRacer Pro, but the chassis is one part.
March 11, 2024
March 4, 2024
February 19, 2024
- Yue's work is based on these two publications:
- This Overview paper bundles the different approaches. The risk potential field seems to originate from Real-Time Obstacle Avoidance for Manipulators and Mobile Robots (1986). Also the ICRA 2008 paper points to this publication.
-
- The current approach could be extended by incorporating uncertainties and probabilities, comparable with:
- For the twisted Gaussian you nicely see the recording of the risks on the grid, comparable with Yue's approach.
- The naturalistic driving article covers more complex scenarios (crossings with single lanes and only two vehicles). It predicts future trajectories, selects some key points on both trajectories, calculates Gaussians with uncertainty for those key points, and calculates the risk where the Gaussians from both vehicles overlap.
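- A toy version of that key-point idea, to make the overlap step concrete: model each predicted key point as a 2D Gaussian and use the closed-form overlap integral of two Gaussians, N(mu1 - mu2; 0, S1 + S2), as a risk proxy (all numbers below are made up):
import numpy as np

def gaussian_overlap(mu1, S1, mu2, S2):
    """Integral of the product of two 2D Gaussians, used here as a risk proxy."""
    d = np.asarray(mu1, float) - np.asarray(mu2, float)
    S = np.asarray(S1, float) + np.asarray(S2, float)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(S)))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(S, d))

# predicted key points of two vehicles, with (made-up) position uncertainty
risk = gaussian_overlap(mu1=[5.0, 2.0], S1=np.diag([0.5, 0.5]),
                        mu2=[5.5, 2.2], S2=np.diag([0.8, 0.8]))
print(risk)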
-
- Another option is to add other perspectives, such as risks associated with angle and acceleration:
- The 2017 paper has local path candidates for both the ego vehicle and the surrounding vehicles, which gives a collision risk map where they overlap. Yet, this is for a 3-lane highway scenario, not a complex crossing.
- The 2020 paper concentrates more on the acceleration, but also on the highway (following behavior). A nice aspect of this paper is that they also tried to classify the behaviors of the surrounding vehicles.
-
- The last possible extension is to take the bidirectional interaction into account, like:
- The studied scenario is at least an unprotected left turn (for a limited number of lanes and surrounding vehicles). The way they do the predictions looks a bit like a particle filter.
-
- So, from all these suggestions, Collision risk assessment algorithm via lane-based probabilistic motion prediction of surrounding vehicles seems the most promising.
February 14, 2024
- Delft has published an extension of the JetRacer: DART.
- The JetRacer is extended with the YDLidar X4, which is comparable with the Scan Sweep, the LDS-02 (from the TurtleBot 3) and/or the LDRobot STL-06 (from the Mini Pupper).
- The choice for the YDLidar X4 seems to be based on MuSHR from the University of Washington (paper).
January 29, 2024
January 15, 2024
- Looked at this paper with code, which is able to detect out-of-distribution patterns in trajectories.
Previous Labbooks