To be done:
- Implement the book exercises in ROS2, which runs on RAE
June 27, 2025
- Checked the Jetpack version with sudo apt-cache show nvidia-jetpack, which showed version 6.0+b106.
June 26, 2025
- OpenCV blog-post on edge-detection on an AI-generated image.
-
- The Knowledge Representation week was very logic-based; only watched the first unit.
- Also looked at the other MOOC, Robots in Action, and watched the first video of RoboEthics.
-
- Tried again to get the point-cloud from the RAE.
- Started docker again, went to /underlay_ws/install/depthai_examples/share/depthai_examples/launch/. Activated point_cloud_xyzi and metric_converter in the rgb_stereo_node.launch.py script.
- Launched ros2 launch depthai_examples rgb_stereo_node.launch.py camera_model:=RAE. Could see the different topics.
- Started rviz2 on WS9 and added displays for different topics. Only received an image from /color/video/image; received nothing from /stereo/depth.
- Opened another terminal on the RAE. Outside the docker I see two processes active: depthai-device (31.8%) and rgb_stereo_node (15.8%). Not clear why I can see them outside the docker!
- Inside the docker I only see rgb_stereo_node (15.8%).
- With ros2 node list I see:
/container
/covert_metric_node
/launch_ros_76
/point_cloud_xyzi
- The only topic which gives me updates is /color/video/camera_info.
-
- Back to the ugv_rover. The oak_d_lite.launch.py starts the camera.launch.py from depthai_ros_driver. Inside camera.launch.py the pointcloud.enable defaults to False.
- The example_multicam.launch.py launches both camera and rgbd_pcl.launch.py.
- pointcloud.launch.py uses the PointCloudXyziNode plugin and should publish /points (see the sketch after this list).
- rgbd_pcl.launch.py starts camera.launch.py with pointcloud.enable True.
- rtabmap.launch.py also starts rtabmap_odom.
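- For later reference, a minimal sketch of such a pointcloud launch file; the container name and the remappings are my assumptions, modeled on the depthai_ros_driver examples:
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode

def generate_launch_description():
    # depth_image_proc::PointCloudXyziNode combines a depth image, an
    # intensity image and a camera_info into a PointCloud2 on points
    container = ComposableNodeContainer(
        name='pcl_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='depth_image_proc',
                plugin='depth_image_proc::PointCloudXyziNode',
                name='point_cloud_xyzi',
                remappings=[('depth/image_rect', '/oak/stereo/image_raw'),
                            ('intensity/image_rect', '/oak/rgb/image_rect'),
                            ('intensity/camera_info', '/oak/stereo/camera_info')]),
        ])
    return LaunchDescription([container])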
-
- Running ros2 launch depthai_ros_driver camera.launch.py pointcloud.enable:=True works; I could see the PointCloud2 (once I set the QoS to best_effort and the frame to oak_camera_frame):
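- The same best-effort QoS is needed when subscribing from code; a minimal rclpy sketch (the topic name /oak/points is an assumption):
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import PointCloud2

class PclListener(Node):
    def __init__(self):
        super().__init__('pcl_listener')
        # qos_profile_sensor_data is best-effort/volatile, matching the driver
        self.sub = self.create_subscription(
            PointCloud2, '/oak/points', self.callback, qos_profile_sensor_data)

    def callback(self, msg):
        self.get_logger().info(f'cloud with {msg.width * msg.height} points')

def main():
    rclpy.init()
    rclpy.spin(PclListener())

if __name__ == '__main__':
    main()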
-
- Received the invitation to preview Luxonis stereo-tuning-assistance
- The capture-package needed matplotlib, which conflicted with ros. Created a virtual environment, did the requirements install there and ran ../capture_env/bin/python3.11 capture/oak_capture.py default /tmp/my_capture, which gives an X_LINK_ERROR (and a warning on no IR drivers and no RGB camera calibration). So, as long as I have not installed the rvc3_support branch in the venv, this doesn't work.
-
- Back to the ugv_rover. Tried ros2 launch ugv_slam gmapping.launch.py use_rviz:=false, but package slam_gmapping unknown.
- In ugv_ws/src/ugv_else/gmapping two directories can be found: openslam_gmapping and slam_gmapping. According to the README.md this can be started with ros2 launch slam_gmapping slam_gmapping.launch.py, but it seems that this directory is not included in the build.
- The builds had to be done in order: colcon build --packages-select openslam_gmapping followed by colcon build --packages-select slam_gmapping.
- To get rid of the warnings from source install/setup.sh I also did colcon build --packages-select explore_lite, which worked. Unfortunately colcon build --packages-select costmap_converter failed on line 56 of blob_detector.cpp, which seems to be a C++-version problem (the comment mentions compatibility).
- Running ros2 launch ugv_slam gmapping.launch.py use_rviz2:=false now works, in the sense that the LaserScan messages are visible. Could also select the Map inside RVIZ, but with the warning that no Map was received. Changing the Reliability QoS to Best effort solves that:
- Started ros2 launch ugv_bringup bringup_lidar.launch.py (does this mean a double lidar_node?) to be able to do ros2 run ugv_tools keyboard_ctrl. The turning is a bit slow; it can be increased with the e key. Going forward / backwards also helps.
- gmapping was able to make a map, although not perfect (two maps over each other). Can you reset the map without killing gmapping?
- The second try was much better: could not only map the small maze, but also the way back to the door, followed by going left halfway across the soccer field:
June 25, 2025
- Skipping two weeks of topics of the Robotics in a Nutshell MOOC (Force Control and Knowledge Representation).
- The 5th week is on Graph-SLAM. The description by Wolfram Burgard was a bit too high-level, the description by Giorgio Grisetti a bit too low-level. Minimizing all constraints is simple, but the least-squares method is not explained in the best way, with a lot of math but without introducing all functions and operations. Maybe the chapter is better.
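- For my own notes, the objective being minimized, in the usual graph-SLAM notation (e_{ij} the error of the constraint between poses i and j, \Omega_{ij} its information matrix):
x^* = \operatorname*{argmin}_{x} \sum_{\langle i,j \rangle} e_{ij}(x)^{\top}\, \Omega_{ij}\, e_{ij}(x)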
- The description of the manifold in Unit 2.4 is nice: "A manifold is a mathematical space that is not necessarily Euclidean on a global scale but can be seen as Euclidean on a local scale."
- Cyrill Stachniss presents the last Unit, with some reading material at the end. Mostly known material, yet the videos on adaptive robust kernels are probably based on the paper "Adaptive Robust Kernels for Non-Linear Least Squares Problems" (April 2021).
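- If I remember that line of work correctly, the adaptive kernel is Barron's general robust loss, with the shape parameter \alpha estimated alongside the state:
\rho(r; \alpha, c) = \frac{|\alpha - 2|}{\alpha} \left( \left( \frac{(r/c)^2}{|\alpha - 2|} + 1 \right)^{\alpha/2} - 1 \right)
\alpha = 2 gives (scaled) L2, \alpha = 1 pseudo-Huber, \alpha \to 0 Cauchy, and \alpha = -2 Geman-McClure.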
- The supplemental material only shows the overtaking example, not the full mapping.
- I like the idea of replacing L1 or L2 with more robust kernels, to prevent sensitivity to outliers. Will this also work in scikit-learn:
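- A quick sketch of how that could look; scikit-learn has no adaptive kernels, but HuberRegressor is its robust counterpart of a plain L2 LinearRegression (data is made up):
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + rng.normal(0, 0.5, size=100)
y[::10] += 30.0                        # inject some gross outliers

l2 = LinearRegression().fit(X, y)
huber = HuberRegressor(epsilon=1.35).fit(X, y)
print('L2 slope   :', l2.coef_[0])     # pulled away by the outliers
print('Huber slope:', huber.coef_[0])  # should stay close to 3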
-
- Started the RAE #1. Before I could ssh into the system, the RAE already started driving on one wheel. Once I started ros2 launch rae_bringup robot.launch.py enable_slam_toolbox:=false enable_nav:=false use_slam:=false in the docker the driving stopped.
- Connected to another shell into the docker. Looked with ros2 topic list. Several front and back images are published, yet no point-cloud.
- Inside the docker Python 3.10.12 is running.
- Inside python3 import depthai as dai doesn't work, so tried python3 -m pip install depthai. No module pip.
- Did sudo apt install python3-pip. Now I could install depthai-2.30.0.0.
- Also did sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg, because I had the ros gpg error again. At least 464 packages behind.
- Cloned depthai-python in /tmp/git and run python3 calibration_reader.py. Gave the following error:
RuntimeError: No available RVC2 devices found, but found 3 non RVC2 device[s]. To use RVC4 devices, please update DepthAI to version v3.x or newer.
- On October 31, 2024 I did something, directly playing with the depthai_ros_driver with DEPTHAI_DEBUG=1.
- Looked in /ws/src/rae-ros/rae_camera/src/camera.cpp. The code also depends on depthai (and depthai_bridge), but that seems to be the C++ library, not the Python module.
- For instance, /ws/src/rae-ros/rae_camera/stereo_node is an executable which loads /underlay_ws/install/depthai_bridge/lib/libdepthai_bridge.so, /usr/local/lib/libdepthai-core.so and /usr/local/lib/cmake/depthai/dependencies/lib/libusb-1.0.so.
- Tried ros2 launch rae_camera perception_ipc.launch.py, which conflicts with the already running nodes.
- Killed the other node, now I get several nodes running:
/RectifyNode
/battery_node
/complementary_filter_gain_node
/controller_manager
/diff_controller
/ekf_filter_node
/joint_state_broadcaster
/laserscan_kinect_back
/laserscan_kinect_front
/laserscan_multi_merger
/launch_ros_1345
/lcd_node
/led_node
/mic_node
/rae
/rae_container
/robot_state_publisher
/rtabmap
/speakers_node
/transform_listener_impl_55ba0cad70
/transform_listener_impl_55ba4dfdb0
/transform_listener_impl_55bbafe9d0
- Also now get the topic /scan
- Looked at my WSL-terminal with ROS-humble. The command ros2 node list gave a subset:
/complementary_filter_gain_node
/controller_manager
/diff_controller
/ekf_filter_node
/joint_state_broadcaster
/mic_node
/robot_state_publisher
/speakers_node
/transform_listener_impl_55bbafe9d0
- Also the ros2 topic list is a subset:
/battery_status
/lcd
/leds
- During the start of the perception_ipc script, I get the following startup information:
[perception_ipc_rtabmap-1] [INFO] [1750858282.709590462] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[perception_ipc_rtabmap-1] [INFO] [1750858282.709810547] [rae]: PoE camera detected. Consider enabling low bandwidth for specific image topics (see readme).
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858282.710] [system] [info] Reading from Factory EEPROM contents
[perception_ipc_rtabmap-1] [INFO] [1750858282.741301485] [rae]: Device type: RAE
[perception_ipc_rtabmap-1] [INFO] [1750858283.008338842] [rae]: Pipeline type: rae
[perception_ipc_rtabmap-1] [WARN] [1750858283.062986091] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [WARN] [1750858283.082811496] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [WARN] [1750858283.134807345] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [WARN] [1750858283.154057327] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [INFO] [1750858283.945303168] [rae]: Finished setting up pipeline.
- Followed (after initialization of the IMU) with:
[perception_ipc_rtabmap-1] [INFO] [1750858284.488374295] [rae]: Camera ready!
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] Baseline: 0.074863344
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] Fov: 96.69345
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] Focal: 284.64175
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] FixedNumerator: 21309.232
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [debug] Using 0 camera model
[perception_ipc_rtabmap-1] [INFO] [1750858284.806388901] [laserscan_kinect_front]: Node laserscan_kinect initialized.
- Note that a calibration file can be found in /ws/src/rae-ros/rae_camera/config/cal.json, for camera-model RAE (version 7).
- Not clear where all those nodes are started; the launch file starts one executable (perception_ipc_rtabmap) and control.launch.py.
- The file /ws/build/rae-camera/perception_ipc_rtabmap is a real executable, loading for instance:
/opt/ros/humble/lib/librtabmap_slam_plugins.so
/opt/ros/humble/lib/librtabmap_util_plugins.so
- Yet, looking into the code, it looks like several nodes are started:
executor.add_node(camera->get_node_base_interface());
executor.add_node(laserscanFront->get_node_base_interface());
executor.add_node(laserscanBack->get_node_base_interface());
executor.add_node(merger->get_node_base_interface());
executor.add_node(rectify->get_node_base_interface());
executor.add_node(rtabmap->get_node_base_interface());
- The merger is ira_laser_tools::LaserscanMerger, rectify is image_proc::RectifyNode.
- There is a separate branch of depthai-python: rvc3_support.
- Inside that branch I did python3 examples/install_requirements.py, which installed depthai v2.19.1.0. That gave:
RuntimeError: No available devices (3 connected, but in use)
- Did ps -all and killed the ros2 process.
- Now python3 examples/calibration/calibration_reader.py in the rvc3_support branch gives the RAE calibration info (stored in examples/calibration/calib_xlinkserver.json).
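- In essence the reader does something like this minimal sketch (depthai 2.x API; the socket and resolution are my guesses):
import depthai as dai

with dai.Device() as device:
    calib = device.readCalibration()
    # intrinsics of the RGB socket, resized to 640x400
    M = calib.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 640, 400)
    print(M)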
- Also tried python3 examples/devices/list_devices.py, which gives the X_LINK info:
[DeviceInfo(name=127.0.0.1, mxid=58927016838860C5, X_LINK_GATE, X_LINK_TCP_IP, X_LINK_RVC3, X_LINK_SUCCESS),
DeviceInfo(name=192.168.197.55, mxid=58927016838860C5, X_LINK_GATE, X_LINK_TCP_IP, X_LINK_RVC3, X_LINK_SUCCESS)]
- So, started ros2 launch depthai_examples rgb_stereo_node.launch.py camera_model:=RAE.
- I see the following topics:
/color/video/camera_info
/color/video/image
/color/video/image/compressed
/color/video/image/compressedDepth
/color/video/image/theora
/joint_states
/parameter_events
/robot_description
/rosout
/stereo/camera_info
/stereo/depth
/stereo/depth/compressed
/stereo/depth/compressedDepth
/stereo/depth/theora
/tf
/tf_static
- Checked on WS9; I also see the topics there. The /color/video/image I can display. For the video rviz2 complains that it cannot display stereo; that is a warning from rviz2 itself, not about stereo-msgs.
- In rviz2 I also saw a PointCloud, but it seems that the streaming just stopped (also no updates in the color-video). Restarting at the RAE didn't help.
- Looking in /underlay_ws/install/depthai_examples/share/depthai_examples/launch/.
- The rgb_stereo_node.launch.py should indeed also publish a point-cloud as topic /stereo/points.
- Note that rgb_stereo_node also tries to launch rviz. Yet, this is commented out, together with the point_cloud_node.
- I could try stereo_inertial_node.launch.py with depth_aligned=True, rectify=True, enableRviz=False.
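- As a command line that would presumably be:
ros2 launch depthai_examples stereo_inertial_node.launch.py depth_aligned:=true rectify:=true enableRviz:=false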
- Yet, that fails:
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866665.675] [host] [debug] Device about to be closed...
[stereo_inertial_node-2] [2025-06-25 15:51:05.681] [depthai] [debug] DataOutputQueue (depth) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.681] [depthai] [debug] DataOutputQueue (preview) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] DataOutputQueue (rgb) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] DataOutputQueue (imu) closed
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866665.682] [host] [debug] Log thread exception caught: Couldn't read data from stream: '__log' (X_LINK_ERROR)
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866665.682] [host] [debug] Timesync thread exception caught: Couldn't read data from stream: '__timesync' (X_LINK_ERROR)
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] DataOutputQueue (detections) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] XLinkResetRemote of linkId: (0)
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866671.207] [host] [debug] Device closed, 5531
[stereo_inertial_node-2] [2025-06-25 15:51:11.210] [depthai] [debug] DataInputQueue (control) closed
[stereo_inertial_node-2] terminate called after throwing an instance of 'nanorpc::core::exception::logic'
[stereo_inertial_node-2] what(): Cannot use both isp & video/preview/still outputs at once at the moment (startPipeline)
- Instead uncommented the point_cloud_xyzi and metric_converter.
- On nb-dual WSL the /stereo/points is visible, but ros2 topic echo shows nothing.
- Commented both out again. Try again tomorrow, after a reboot.
June 24, 2025
- The 2nd week of the Robotics in a Nutshell MOOC is on Image Formation and Calibration.
- I like the explanation of the Pinhole model, including the conserved ratio of the object heights and distances h'/h = l'/l:
- This is directly associated with the Thin Lens model, where the same ratio can be expressed in the focal distances h'/h = l'/l = x'/f = f'/x:
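- Writing it out (x and x' measured from the focal points, so x = l - f and x' = l' - f, with f' = f for the same medium on both sides): the thin-lens equation is equivalent to Newton's form, from which both ratios follow:
\frac{1}{l} + \frac{1}{l'} = \frac{1}{f} \iff x\,x' = f^2, \qquad \frac{h'}{h} = \frac{l'}{l} = \frac{x'}{f} = \frac{f}{x}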
- The 3rd unit of the 2nd week is a topic not often covered: laser scanning with projection patterns. Scanning with two colored laser-planes, or providing a corner background, I have not seen explained before. Nice that the unit starts with a single laser point.
- In the 4th unit an impressive demonstration of the ICP algorithm in 3D is shown.
-
- Connected the OAK-D with the Thunderbolt cable to nb-dual, with native Ubuntu 20.04. No ROS2, so looked at ROS1 Noetic first.
- See no /opt/ros/noetic/share/depthai_ros_driver, so should install that one first.
- First had to reinstall the signature keys by using the third suggested option: sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg. 16 ros-packages are installed.
- If I look at this Medium tutorial, I should also install ros-noetic-depthai-examples (was already part of the pack).
- Yet, the command roslaunch depthai_examples rgb_stereo_node.launch camera_model:=OAK-1 fails on a missing Intrinsic matrix for the requested cameraID.
- A known issue according to a Luxonis post; should perform a calibration.
- Started with python3 depthai_demo.py, which gives one frame and crashes on:
depthai_sdk/managers/preview_manager.py", line 148, in prepareFrames
packet = queue.tryGet()
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'color' (X_LINK_ERROR)'
- CONTROL-C restarted the connection, which gave:
[DEVICEID] [3.8] [1.501] [StereoDepth(7)] [error] RGB camera calibration missing, aligning to RGB won't work
- Seems that I did the same with the RAE on December 12, 2024 and June 6, 2024.
- Should also look at this Luxonis discussion.
- The calibration files are written to the EEPROM. Could first look at what is stored in the EEPROM with Calibration Reader.
- Moved to ~/git/depthai-python/examples/calibration. Running python3 calibration_reader.py gave the same error:
RuntimeError: There is no Intrinsic matrix available for the the requested cameraID
- Looked at the calibration_flash_v5.py script. It tries to write ../models/depthai_v5.calib to the EEPROM. That file actually exists.
- Going back to Initial Connection.
- Cloned depthai-core.
- Run python3 examples/python/install_requirements.py, which installed PyYAML-6.0.2-cp38, opencv_python-4.11.0.86-cp37 and depthai-3.0.0rc2-cp38.
- Moved to ~/git/depthai-core/examples/python/Camera. Running python3 camera_all.py gave:
RuntimeError: Device already closed or disconnected: Input/output error
[2025-06-24 17:01:55.193] [depthai] [error] Device with id XXX has crashed.
-
- Switching back to WS9, to look if it can work with the OAK-D or find calibration info on the UGV Rover's OAK camera.
- Running python3 calibration_reader.py for the WS9 and OAK-D gave the same error.
- Moved to the ugv_rover. Cloned depthai-python, but python3 calibration_reader.py couldn't find module depthai. Running git submodule update --init --recursive didn't help. Doing python3 -m pip install . took a while, and couldn't build the wheel at the end.
- Just did python3 -m pip install depthai.
- Now python3 examples/calibration/calibration_reader.py works, and gives the RGB Camera Default intrinsics, RGB Camera resized intrinsics... 3840 x 2160, 4056 x 3040, LEFT Camera Default intrinsics..., LEFT/RIGHT Distortion Coefficients..., RGB FOV, Mono FOV, LEFT/RIGHT Camera stereo rectification matrix..., Transformation matrix of where left Camera is W.R.T right Camera's optical center, Transformation matrix of where left Camera is W.R.T RGB Camera's optical center.
- Seems that the information is stored in calib_device_id.json file. The productName is "OAK-D-LITE".
- Tried oakctl. Could install it on both WS9 and the ugv_rover, but no (OAK4) devices were found.
- Tried to install oak-viewer on WS9. The installation went well, but launching oak-viewer gives:
Viewer stderr: [PYI-3537627:ERROR] Failed to load Python shared library '/usr/lib/oak-viewer/resources/backend/viewer_backend/_internal/libpython3.12.so.1.0': /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.38' not found (required by /usr/lib/oak-viewer/resources/backend/viewer_backend/_internal/libpython3.12.so.1.0)
- Not the only one with this problem. There are many (old) answers on stackoverflow. The patchelf answer #14 looks promising.
- Close to my problem is the Proof of Concept at the end.
June 23, 2025
June 19, 2025
- Looking at how to split the bringup_lidar.launch.py nicely into a rover and a workstation part. For the rover there is use_rviz=False, so I could make a launch-script that launches rviz (and description?) locally on the workstation.
- Strangely enough, there is also a bringup-executable launched. What is in that executable?
- In the end, only selected three nodes for the workstation-side: use_rviz_arg, rviz_config_arg, robot_state_launch.
- The executable seems to be ugv_bringup/ugv_bringup.py, which creates a number of publishers for the topics "imu/data_raw", "imu/mag", "odom/odom_raw" and "voltage".
- There is also ugv_bringup/ugv_driver.py, which subscribes to the topics "cmd_vel", 'joy', 'ugv/joint_states' (pan/tilt) and 'ugv/led_ctrl'. There is also a subscription to 'voltage' (can the voltage be controlled?). Looked in the code: if the voltage_value drops below 9 V, a low_battery sound is played.
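- That battery check is presumably something like this minimal rclpy sketch (the Float32 message type and the topic name are assumptions; the sound call is replaced by a log):
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32

class VoltageMonitor(Node):
    def __init__(self):
        super().__init__('voltage_monitor')
        self.sub = self.create_subscription(Float32, 'voltage', self.callback, 10)

    def callback(self, msg):
        if msg.data < 9.0:          # threshold from ugv_driver.py
            self.get_logger().warn(f'low battery: {msg.data:.1f} V')

def main():
    rclpy.init()
    rclpy.spin(VoltageMonitor())

if __name__ == '__main__':
    main()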
- Made a rviz_lidar.launch.py with only three nodes, and added it to the install with colcon build --packages-select ugv_bringup. After setting the UGV_MODEL and LDLIDAR_MODEL I launched it with ros2 launch ugv_bringup rviz_lidar.launch.py use_rviz:=true
- Made two scripts start_lidar.sh (rover-side) and start_rviz_lidar.sh (workstation-side)
- Works, but I still have two robot_state_publishers, which also gives double /rf2o_laser_odometry, /transform_listener_impl_62d462ce14d0, /ugv/joint_state_publisher.
- Killed the workstation-side robot_state_publisher, still have a double rf2o_laser_odometry.
- Tried again, now without the robot_state_launch (2 nodes left). Starting start_rviz_lidar.sh gave 4 nodes:
/rviz2
/transform_listener_impl_5c12fd746e70
/ugv/joint_state_publisher
/ugv/robot_state_publisher
- Adding start_lidar.sh still gave many doubles
- Started a clean rviz2 with ros2 run rviz2 rviz2 -d ~/git/ugv_ws/install/ugv_bringup/share/ugv_bringup/rviz/view_bringup.rviz. Still two transform_listener_* and two rf2o_laser_odometry.
- Rebooted the jetson, still two rf2o_laser_odometries. The LD19 in ugv_else/ldlidar also launches a transform_listener_impl.
- Tried ros2 lifecycle set transform_listener_impl_636aff058900 shutdown, but get Node not found.
- Could also try to reboot ws9, but there are other users. Leaving it here for the moment.
-
- Trying Tutorial 4: 2D Mapping Based on LiDAR.
- In the directory ~/git/ugv_ws/src/ugv_main/ugv_slam/launch there are three launch files: cartographer, rtabmap_rgbd and gmapping. Tutorial 4 is using gmapping.
- Started with activating the OAK-D camera with ros2 launch ugv_vision oak_d_lite.launch.py. Seems to work:
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rectify_color_node' in container 'oak_container'
[component_container-1] [INFO] [1750338883.796983921] [oak]: Starting camera.
[component_container-1] [INFO] [1750338883.807265975] [oak]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [INFO] [1750338886.592273026] [oak]: Camera with MXID: 14442C10019902D700 and Name: 1.2.3 connected!
[component_container-1] [INFO] [1750338886.593134048] [oak]: USB SPEED: HIGH
[component_container-1] [INFO] [1750338886.639988351] [oak]: Device type: OAK-D-LITE
[component_container-1] [INFO] [1750338886.642406675] [oak]: Pipeline type: RGBD
[component_container-1] [INFO] [1750338887.683423674] [oak]: Finished setting up pipeline.
[component_container-1] [WARN] [1750338888.471441275] [oak]: Parameter imu.i_rot_cov not found
[component_container-1] [WARN] [1750338888.471589696] [oak]: Parameter imu.i_mag_cov not found
[component_container-1] [INFO] [1750338889.080588298] [oak]: Camera ready!
- In RVIZ, I could display both the topics /oak/rgb/image_rect and /oak/stereo/image_raw as Image:
- When I look at the topics, only /oak/imu, /oak/rgb and /oak/stereo are published.
- Luxonis has launch files which also start ROS depth processing nodes to generate a pointcloud.
- The oak_d_lite.launch.py script starts camera.launch.py from depthai_ros_driver, which starts the RGBD pipeline together with an NN.
- The installed driver files can be found at /opt/ros/humble/share/depthai_ros_driver/launch/. Didn't see the NN in spatial Mobilenet mode.
- Tried on the rover ros2 launch depthai_ros_driver rgbd_pcl.launch.py. Now there is also a topic /oak/points, of type PointCloud2.
- I could add PointCloud2 display in rviz2, but the node-log showed:
New subscription discovered on topic '/oak/points', requesting incompatible QoS. No messages will be sent to it. Last incompatible policy: RELIABILITY_QOS_POLICY
- Looked with ros2 topic info /oak/points --verbose. The QoS profile of rviz and the driver match:
Reliability: BEST_EFFORT
Durability: VOLATILE
- Should be possible, at least with orbbec_camera.
- On November 7, 2024 I had point-cloud displayed for realsense camera.
-
- Tried an alternative approach. Attached my OAK-D directly to ws9. Had to install ros-humble-depthai-ros-driver, which also installed ros-humble-depthai-ros-msgs and ros-humble-ffmpeg-image-transport-msgs.
- Launching ros2 launch depthai_ros_driver rgbd_pcl.launch.py on ws9 failed on udev rules: Insufficient permissions to communicate with X_LINK_UNBOOTED device with name "1.1". Make sure udev rules are set
- Looked at Luxonis troubleshooting and did the udev update. Still going wrong, so should try another USB-cable (currently using a white USB-B to USB-C).
- Connected my Thunderbolt USB-C to USB-C cable. At least the camera_preview example from the python sdk works.
-
- I have the feeling that it has something to do with the bootloader.
- Following the instructions of bootloader config. No devices found.
- Also tried device information, but also Couldn't find any available devices.
- Tried on ws9 ros2 run rviz2 rviz2 -d /opt/ros/humble/share/depthai_ros_driver/config/rviz/rgbd.rviz. Didn't work, neither for the OAK-D nor for the ugv_rover.
- On July 11, 2022 I had a working visualisation of the PointCloud in rviz.
June 16, 2025
- Looking at the github ugv_ws, but this workspace is at least 4 months old.
- The first step of the tutorial is starting rviz, which is a bit strange inside the docker (running on a remote station). Tried whether I could start rviz on WSL with an Xserver running, but that gives:
rviz2: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory
- So, try again, on WS9.
- Could connect via vislab_wifi to the rover.
- Cloned ugv_ws. Started ros-humble. The first command of build_first was:
colcon build --packages-select apriltag apriltag_msgs apriltag_ros cartographer costmap_converter_msgs costmap_converter emcl2 explore_lite openslam_gmapping slam_gmapping ldlidar rf2o_laser_odometry robot_pose_publisher teb_msgs teb_local_planner vizanti vizanti_cpp vizanti_demos vizanti_msgs vizanti_server ugv_base_node ugv_interface
- This build failed because nav2_msgs package was not installed.
- Installation failed on:
Failed to fetch http://packages.ros.org/ros2/ubuntu/dists/jammy/InRelease The following signatures were invalid: Open Robotics
- Solved it by adding the signature in a keyring, as suggested on askubuntu.
- Next to fail is again explore_lite, now on a missing map_msgs package.
- This can be solved with sudo apt install ros-humble-nav2-msgs ros-humble-map-msgs ros-humble-nav2-costmap-2d.
- Next package (15/22) to fail is apriltag_ros, on package image_geometry.
- The package vizanti_server fails on rosbridge_suite. Now the first colcon build is successfully finished. To be sure, did an intermediate source install/setup.bash.
-
- The second build is colcon build --packages-select ugv_bringup ugv_chat_ai ugv_description ugv_gazebo ugv_nav ugv_slam ugv_tools ugv_vision ugv_web_app --symlink-install.
- The ugv_nav fails on missing nav2_bringup. This is the only package that has to be installed. Did again a source install/setup.bash.
-
- In the README.md some additional humble-packages are mentioned, such as ros-humble-usb-cam and ros-humble-depthai-*.
-
- Started the first step of the tutorial, but ros2 launch ugv_description display.launch.py use_rviz:=true failed on missing UGV_MODEL.
- Looked in the launch-file and defined export UGV_MODEL=ugv_rover (the name of the urdf-file).
- Next exception is missing joint_state_publisher_gui package.
- That package was in the README.md, so did sudo apt install ros-humble-joint-state-publisher-*. The Joint-State Publisher window now pops up, only rviz2 is missing.
- That is part of apt install ros-humble-desktop-*, also in the README.md. That installs 385 packages.
- Now I could control the pan-tilt (both up/down and left/right), both in rviz and on the rover itself (after starting the driver):
- The robot doesn't respond to the wheel commands. Maybe I should also have defined the UGV_MODEL at the rover side. Tried again with UGV_MODEL defined, same behavior. Checked the topics: no camera images or depth-images are published (yet).
-
- Next is Tutorial 3 of the wiki.
- Launching ros2 launch ugv_bringup bringup_lidar.launch.py use_rviz:=true on ws9 fails on LDLIDAR_MODEL. That is not in ugv_bringup/launch/bringup_lidar.launch.py, but in ldlidar.launch.py called from there. So, LDLIDAR_MODEL should be ld06, ld19 or stl27l. So set export LDLIDAR_MODEL=ld19.
- Activated a LaserScan display in RVIZ and selected the /scan topic, but nothing was visible.
- Also launched ros2 launch ugv_bringup bringup_lidar.launch.py use_rviz:=false, which gave an error:
[ugv_bringup-3] JSON decode error: Expecting value: line 1 column 1 (char 0) with line: z":1684,"odl":0,"odr":0,"v":1214}
- Try again with ld06? Actually, it's a D500 Lidar. Yet, the D500 Lidar-kit is based on an STL-19P.
- Trying a 2nd time, now it works:
[rf2o_laser_odometry_node-6] [INFO] [1750080862.481144040] [rf2o_laser_odometry]: Initializing RF2O node...
[rf2o_laser_odometry_node-6] [WARN] [1750080862.495336634] [rf2o_laser_odometry]: Waiting for laser_scans....
[rf2o_laser_odometry_node-6] [INFO] [1750080862.695500846] [rf2o_laser_odometry]: Got first Laser Scan .... Configuring node
- Could also drive around with ros2 run ugv_tools keyboard_ctrl (from ws9):
- I see the updates in RVIZ, but I don't see the scan topic in the topic list. A bit strange, because with ros2 node list I see many nodes (several doubled; not ideal to use the same bringup for both). One of the nodes is /LD19:
/LD19
/base_node
/base_node
/keyboard_ctrl
/rf2o_laser_odometry
/rf2o_laser_odometry
/rf2o_laser_odometry
/rf2o_laser_odometry
/rviz2
/rviz2
/transform_listener_impl_5b38e1b97df0
/transform_listener_impl_5e016b673f00
/transform_listener_impl_6337961ff800
/transform_listener_impl_aaaaf1f50810
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher_gui
/ugv/robot_state_publisher
/ugv/robot_state_publisher
/ugv/robot_state_publisher
/ugv/robot_state_publisher
/ugv_bringup
/ugv_driver
- When looking with ros2 topic echo /scan I see many .nan values:
- 228.0
- 232.0
- .nan
- 220.0
- 209.0
- 205.0
- 201.0
- 204.0
- 212.0
- 204.0
- .nan
- .nan
- .nan
- .nan
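- The .nan entries are the invalid beams. Before processing they can be masked; a small sketch (assuming a sensor_msgs/LaserScan message scan):
import numpy as np
from sensor_msgs.msg import LaserScan

def clean_ranges(scan: LaserScan) -> np.ndarray:
    # replace NaN (invalid) beams by +inf, so the array length and
    # therefore the beam-to-angle mapping stays intact
    r = np.asarray(scan.ranges, dtype=np.float32)
    return np.where(np.isnan(r), np.inf, r)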
June 2, 2025
- Looking at the UGV-rover again. Tried to connect to the rover via the vislab-wifi. I also see the UGV network still active.
- Received Joey's labbook. The robot was connected via LAB42, not the vislab_wifi. Able to login via ssh.
- The home-directory on the rover has several directories, including ugv_ws and ugv_jetson.
- Started with source /opt/ros/humble/setup.bash, followed by source ~/ugv_ws/install/setup.bash. That gives two warnings:
not found: "/home/jetson/ugv_ws/install/costmap_converter/share/costmap_converter/local_setup.bash"
not found: "/home/jetson/ugv_ws/install/explore_lite/share/explore_lite/local_setup.bash"
- Started ros2 launch ugv_custom_nodes line_follower.launch.py. Seems to work, only a few warnings:
[v4l2_camera_node-1] [INFO] [1749735101.182860329] [v4l2_camera]: Driver: uvcvideo
[v4l2_camera_node-1] [INFO] [1749735101.183075567] [v4l2_camera]: Version: 331656
[v4l2_camera_node-1] [INFO] [1749735101.183090895] [v4l2_camera]: Device: USB Camera: USB Camera
[v4l2_camera_node-1] [INFO] [1749735101.183096527] [v4l2_camera]: Location: usb-3610000.usb-2.2
[v4l2_camera_node-1] [INFO] [1749735101.183101583] [v4l2_camera]: Capabilities:
[v4l2_camera_node-1] [INFO] [1749735101.183105903] [v4l2_camera]: Read/write: NO
[v4l2_camera_node-1] [INFO] [1749735101.183111183] [v4l2_camera]: Streaming: YES
[v4l2_camera_node-1] [INFO] [1749735101.183125936] [v4l2_camera]: Current pixel format: MJPG @ 1920x1080
[v4l2_camera_node-1] [INFO] [1749735101.183234419] [v4l2_camera]: Available pixel formats:
[v4l2_camera_node-1] [INFO] [1749735101.183242515] [v4l2_camera]: MJPG - Motion-JPEG
[v4l2_camera_node-1] [INFO] [1749735101.183246995] [v4l2_camera]: YUYV - YUYV 4:2:2
[v4l2_camera_node-1] [INFO] [1749735101.183251315] [v4l2_camera]: Available controls:
[v4l2_camera_node-1] [INFO] [1749735101.184699257] [v4l2_camera]: Brightness (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.186169984] [v4l2_camera]: Contrast (1) = 50
[v4l2_camera_node-1] [INFO] [1749735101.187917198] [v4l2_camera]: Saturation (1) = 65
[v4l2_camera_node-1] [INFO] [1749735101.189416117] [v4l2_camera]: Hue (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.189443318] [v4l2_camera]: White Balance, Automatic (2) = 1
[v4l2_camera_node-1] [INFO] [1749735101.190926908] [v4l2_camera]: Gamma (1) = 300
[v4l2_camera_node-1] [INFO] [1749735101.192166269] [v4l2_camera]: Power Line Frequency (3) = 1
[v4l2_camera_node-1] [INFO] [1749735101.193665476] [v4l2_camera]: White Balance Temperature (1) = 4600 [inactive]
[v4l2_camera_node-1] [INFO] [1749735101.194914213] [v4l2_camera]: Sharpness (1) = 50
[v4l2_camera_node-1] [INFO] [1749735101.196162694] [v4l2_camera]: Backlight Compensation (1) = 0
[v4l2_camera_node-1] [ERROR] [1749735101.196197767] [v4l2_camera]: Failed getting value for control 10092545: Permission denied (13); returning 0!
[v4l2_camera_node-1] [INFO] [1749735101.196208903] [v4l2_camera]: Camera Controls (6) = 0
[v4l2_camera_node-1] [INFO] [1749735101.196217671] [v4l2_camera]: Auto Exposure (3) = 3
[v4l2_camera_node-1] [INFO] [1749735101.197664077] [v4l2_camera]: Exposure Time, Absolute (1) = 166 [inactive]
[v4l2_camera_node-1] [INFO] [1749735101.199412059] [v4l2_camera]: Exposure, Dynamic Framerate (2) = 0
[v4l2_camera_node-1] [INFO] [1749735101.200914146] [v4l2_camera]: Pan, Absolute (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.202424394] [v4l2_camera]: Tilt, Absolute (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.203911537] [v4l2_camera]: Focus, Absolute (1) = 68 [inactive]
[v4l2_camera_node-1] [INFO] [1749735101.203945650] [v4l2_camera]: Focus, Automatic Continuous (2) = 1
[v4l2_camera_node-1] [INFO] [1749735101.205413752] [v4l2_camera]: Zoom, Absolute (1) = 0
[v4l2_camera_node-1] [WARN] [1749735101.206874879] [camera]: Control type not currently supported: 6, for control: Camera Controls
[v4l2_camera_node-1] [INFO] [1749735101.207363787] [v4l2_camera]: Requesting format: 1920x1080 YUYV
[v4l2_camera_node-1] [INFO] [1749735101.218292138] [v4l2_camera]: Success
[v4l2_camera_node-1] [INFO] [1749735101.218322795] [v4l2_camera]: Requesting format: 160x120 YUYV
[v4l2_camera_node-1] [INFO] [1749735101.229039172] [v4l2_camera]: Success
[v4l2_camera_node-1] [INFO] [1749735101.236211424] [v4l2_camera]: Starting camera
[v4l2_camera_node-1] [WARN] [1749735101.804031818] [camera]: Image encoding not the same as requested output, performing possibly slow conversion: yuv422_yuy2 => rgb8
[v4l2_camera_node-1] [INFO] [1749735101.814271510] [camera]: using default calibration URL
[v4l2_camera_node-1] [INFO] [1749735101.814428698] [camera]: camera calibration URL: file:///home/jetson/.ros/camera_info/usb_camera:_usb_camera.yaml
[v4l2_camera_node-1] [ERROR] [1749735101.814617119] [camera_calibration_parsers]: Unable to open camera calibration file [/home/jetson/.ros/camera_info/usb_camera:_usb_camera.yaml]
[v4l2_camera_node-1] [WARN] [1749735101.814656576] [camera]: Camera calibration file /home/jetson/.ros/camera_info/usb_camera:_usb_camera.yaml not found
- Checked on nb-dual (WSL) and saw the following topics:
/camera_info
/cmd_vel
/image_raw
/image_raw/compressed
/image_raw/compressedDepth
/image_raw/theora
/joy
/parameter_events
/processed_image
/rosout
/ugv/joint_states
/ugv/led_ctrl
/voltage
- No topics like /scan and /odom yet, although Joey got that working on May 23 (ros2 launch ugv_custom_nodes mapping_node.launch.py).
- Looked up what I tried as the last command on February 6.
- Tried ros2 run image_view image_view --ros-args --remap image:=/image_raw inside WSL with VcXsrv-Xlaunch running in the background, but this gives:
[INFO] [1749735895.930762600] [image_view_node]: Using transport "raw"
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.4) ./modules/highgui/src/window_gtk.cpp:635: error: (-2:Unspecified error) Can't initialize GTK backend in function 'cvInitSystem'
- Tried again on ws9. Had to do unset ROS_DOMAIN_ID first. Still, ros2 topic list gives me only the two default topics, while I could ping and ssh to the rover, and see all topics there.
- ws9 was busy with a partial upgrade (asking for a reboot). Hope that this is not an update from Ubuntu 22.04 to 24.04.
- Installed ros-humble-image-view with sudo apt-get install.
- Switched off all TurtleBot related settings in my ~/.bashrc
- Another user is running jobs on ws9, so couldn't reboot.
-
- This seems to happen at the LAB42 network, not on the vislab_wifi. Looking into Multiple Network sections from linuxbabe. That didn't work.
- Instead used the trick from stackexchange: looked up the UUIDs with nmcli c and switched to the other network with nmcli c up uuid <UUID>. The ssh-connection then freezes, because you have to build it up again via the other network.
- The two ethernet-connections are also in the 192.* domain, so I had to unplug them to connect to the robot.
- Now I see all topics. The image was black, but that was because the cover was still on the lens. Without cover the robot directly starts to drive (as expected from line-following).
- The images displayed with ros2 run image_view image_view --ros-args --remap image:=/image_raw have a delay of 10s:
- You can also request the processed_image with ros2 run image_view image_view --ros-args --remap image:=/processed_image:
- There are now 4 nodes running:
/camera
/driver
/image_view_node
/line_following_node
- The next step in the WaveShare tutorials is to control the leds (Tutorial 2).
- On ws9 Mozilla couldn't open this web-page, chrome could.
- After starting the driver with ros2 run ugv_bringup ugv_driver, I could control the three led-lights with ros2 topic pub /ugv/led_ctrl std_msgs/msg/Float32MultiArray "{data: [255, 255]}" -1. With {data: [9,0]} only the two lower leds light up, with less brightness. Note that the ugv_driver is not the driver from the line-following. Yet, this seems to be only a difference in name (same executable) in ugv_custom_nodes/launch/line_follower.launch.py.
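- The rclpy equivalent of that one-shot topic pub; the meaning of the two data values is my interpretation:
import time
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray

rclpy.init()
node = Node('led_test')
pub = node.create_publisher(Float32MultiArray, '/ugv/led_ctrl', 10)
time.sleep(0.5)                                       # give DDS discovery a moment
pub.publish(Float32MultiArray(data=[255.0, 255.0]))   # both leds at full brightness
time.sleep(0.5)
node.destroy_node()
rclpy.shutdown()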
- Time to go home.
March 4, 2025
February 20, 2025
- Looking for nice images for the paper. Fig. 4 of Group 1 could be useful to illustrate the first assignment.
- Also Fig. 6 would be a nice illustration.
- Yet, Group 1 chose the most centered line.
- Group 9 shows in Fig. 9 the benefit of using RANSAC compared to Canny edge.
February 18, 2025
- The RAE has a Robotics Vision Core 3, which is based on the Intel Movidius accelerator with code name Keem Bay (https://www.intel.com/content/www/us/en/developer/articles/technical/movidius-accelerator-on-edge-software-hub.html).
February 12, 2025
February 6, 2025
- The ugv-rover has a 4GB Jetson Orin Nano Kit. Documentation for the Orin can be found here. The documentation of the Rover can be found here.
- The rover comes without batteries. The instructions on how to charge the batteries are not there yet.
- The battery compartment is actually below the rover. The screw-driver provided fits nicely. Placed three of the 18650 lithium batteries from the DART robot into the compartment and started charging (connector next to the on-off button).
- Connected the Display-port to my display and the USB-C to a mouse and keyboard. The ethernet-ip is displayed on the small black screen below the on-off button.
- Could login via the screen/keyboard. The Jetson is running Ubuntu 22.04.4.
- Switched off the hotspot (right top corner) and connected to LAB42.
- Tried to switch off the python program started during setup. The kills seemed to fail, but after a while no python script at 10% CPU popped up anymore. Looked at /home/jetson/ugv.log. The last line is Terminated.
- The docker scripts are actually in /home/jetson/ugv_ws. Didn't make the two scripts mentioned in ROS2 Preparation executable. The ros2_humble script starts fine, and indicated that a shell-server is started. Yet, I couldn't connect to the shell server, neither externally nor via the docker-ip. Yet, doing a docker ps followed by docker exec -it CONTAINER_ID bash worked (no zsh in $PATH). Could find /opt/ros/humble/bin/ros2.
- Next tried RVIZ, but that failed because no connection to display localhost:12.0 could be made.
- The docker is a restart of a container of an existing docker-image, not a fresh run.
- Tried to make a script that runs from the image, with reuse of the host. Yet, ran out of battery before it finished (charging was via the USB-B connector). Switched to the USB-C connector after the reboot.
- Starting the docker failed. Could be the background process. Tried docker run nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04. That works, but returns directly and gives a warning that no NVIDIA drivers are detected.
- Tried sudo docker run -it --runtime=nvidia --gpus all /usr/bin/bash. That works, entered the image. Only, no /opt/ros to be found, only /opt/nvidia. Strange, because I don't see any other image with docker image ls
-
- Looked if I could natively install the required ros-humble packages. sudo apt upgrade wants to downgrade 4 packages (nvidia-container-toolkit), so didn't continue.
- Installed python3-pip with apt install. Next are the ros-packages, but I have to install the ros-repository first. Tried to install firefox, but that is snap-based. Used firefox for the ROS Humble Install instructions.
- Installed python3-colcon-argcomplete (11 packages)
- Installed ros-humble-ros-base first (258 packages), followed by ros-humble-cartographer-* (360 new, 7 upgraded).
- Next ros-humble-joint-state-publisher-* (17 packages), followed by ros-humble-nav2-* (119 packages, 5 upgraded)
- Next ros-humble-rosbridge-* (22 packages), followed by ros-humble-rqt-* (61 packages)
- Next ros-humble-rtabmap-* (51 new, 1 upgraded), followed by ros-humble-usb-cam-* (11 packages).
- Last one is ros-humble-depthai-* (28 packages). Left the gazebo part for the moment.
-
- The code needed is available on github. Unfortunately, colcon build --packages-select doesn't work, so I had to do these three commands in each sub-dir of src/ugv_else:
cmake -B build -DCMAKE_BUILD_TYPE=Release   # configure
cmake --build build --target install        # build and install
sudo cmake --build build --target install   # repeat the install step with sudo
- There is not only /home/jetson/git/ugv_ws/install/setup.bash, but also the preinstalled /home/jetson/ugv_ws/install/setup.bash. Yet, running this script gave missing local_setup.bash for ugv_description, ugv_gazebo, ugv_nav and ugv_slam. Those local_setup.bash were symbolic links to /home/ws. Changing the links to /home/jetson (with sudo!) solved this. Now ~/.bashrc works.
- Yet, neither ros2 launch ugv_description display.launch.py use_rviz:=true nor ros2 run ugv_bringup ugv_driver works (package not found). Maybe a rosdep init first, even better in a humble_ws/src/, with the different packages in that src-directory.
- At least ros2 run usb_cam usb_cam_node_exe worked. Could view the image with ros2 run image_view image_view --ros-args --remap image:=/image_raw. Only, no permission to save the image:
January 28, 2025
- This paper describes iMarkers, which allow making invisible ArUco tags. Yet, one needs a polarizer sheet in front of one of the stereo-cameras.
January 27, 2025
- A student from Canada tested my VisualSfM - Ubuntu installation technical report.
- This student got an error about a missing siftgpu library when clicking the "Compute Missing Matches" toolbar-icon.
- Looked at the SiftGPU/bin executables. I could start TestWinGlut -i ~/src/vsfm/data/small_maze/frame99.jpg.
- I downloaded the example images from github, which are the two files processed when typing SimpleSIFT.
- The program MultiThreadSIFT started two threads at 100Hz, but then crashed after some time with a Killed signal.
- For me the "Compute Missing Matches" toolbar-icon (step 2 in manual) worked fine.
January 24, 2025
January 22, 2025
January 21, 2025
January 20, 2025
- Looking for an announcement of the next version of RAE, but according to this post, most of the developers are gone. We can ask for a refund.
- Last August there were still plans to make an RVC4 version. The release was planned for Q3-Q4 2025.
- According to this post the project was already deprecated in 2023. We could already ask for refunds in October 2024.
-
- An alternative could be the UGV Rover, Nvidia Jetson Orin based. Twice the price, but including Lidar and a depth camera. ROS2 Humble based. I also see that it is Docker based?! The depth sensor is an OAK-D-Lite, the Lidar is a D500 DTOF.
-
- The UGV Beast documentation wiki.
- Assembly instructions: "Continues improvement"
- The three 18650 lithium batteries are not included.
- Most of the details are about the Jupyter notebooks.
- Starts to get interesting at ROS2 preparation. Again, ROS Humble is part of a Docker container.
- The code of the Docker container is on github. Last update 3 weeks ago.
January 17, 2025
- Trying the foxglove startup. Without other processes running in the background, I only see 4 topics.
- Did ros2 launch rae_bringup robot.launch.py, which gave the /cmd_vel.
- Yet, the docker crashed before I could drive around.
January 16, 2025
- Comparison of 4 RGB-D cameras. Overall the ZED 2 was the winner.
January 2, 2025
- Tested VisualSFM installation on Karlijn's old laptop. Found only one small typo.
- Note that the space-key no longer works on this laptop.
Previous Labbooks