Chapter 2 goes directly into manifolds and Lie algebra, but I like Fig 2.2 and the introduction of the wedge and vee operators.
In section 2.1.3 they give as an example of Lie group optimization the linearization of the measurement function h() of seeing a landmark in the camera frame, in order to find the 3D transformation that corresponds with that observation, which is the Perspective-n-Point (PnP) problem.
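As a toy illustration of the PnP problem (my own sketch with made-up numbers, not from the handbook), OpenCV's solvePnP recovers the camera pose from 3D landmarks and their 2D projections; the linearization of the corresponding measurement function is what section 2.1.3 walks through:
import numpy as np
import cv2

# Four coplanar landmarks and their projections for a camera at t = (0, 0, 5)
object_points = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
image_points = np.array([[320., 240.], [420., 240.], [320., 340.], [420., 340.]])
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())  # expect rvec ~ 0 and tvec ~ (0, 0, 5)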
Section 2.2 is about Continuous-Time Trajectories, which are particularly useful with high-rate measurements like an IMU (so relevant for Julian).
In section 2.2.2 they swap a limited number of basis functions (defining splines) for a kernel function. By using Gaussian estimates of the transition function they can keep the inverse kernel matrix sparse: the Markov condition of the state description makes the inverse kernel matrix block-tridiagonal.
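A toy numpy check of that claim (my own sketch, not code from the book): for a linear-Gaussian Markov chain the joint covariance is dense, but its inverse is block-tridiagonal:
import numpy as np

# Markov chain x_{k+1} = A x_k + w_k, w_k ~ N(0, Q), x_0 ~ N(0, P0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
P0 = np.eye(2)
N, n = 4, 2

# Marginal covariances P_k, then the joint covariance K of (x_0, ..., x_N)
P = [P0]
for k in range(N):
    P.append(A @ P[-1] @ A.T + Q)
K = np.zeros((n * (N + 1), n * (N + 1)))
for i in range(N + 1):
    for j in range(N + 1):
        lo, hi = min(i, j), max(i, j)
        C = np.linalg.matrix_power(A, hi - lo) @ P[lo]   # Cov(x_hi, x_lo)
        K[i*n:(i+1)*n, j*n:(j+1)*n] = C if i >= j else C.T

Kinv = np.linalg.inv(K)
# Blocks more than one step apart are (numerically) zero: block-tridiagonal
print(np.max(np.abs(Kinv[0:n, 2*n:])))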
The tools described in this chapter will be used in the following chapters, so the chapter ends with a reassuring 'keep reading'.
It is a paper with code, and links to 7 datasets. Three datasets are real, consisting of images recorded with both high-speed and low-speed shutter cameras. The ReLoBlur dataset looks most promising, because it contains mixing effects of moving objects and backgrounds.
For autonomous driving, the introduction points to Fsad-net for dehazing.
On the RealBlur-J dataset, Uformer is clearly the winner.
From the efficiency side, GhostDeblurGAN was mentioned for its short inference time (real-time according to Table 1). HINet and BANet were also mentioned positively for their short inference times (in Table 1).
Tried the code, but the dependencies are not listed, and the input and output directories have to be created manually.
Trying python3 evaluate_RealBlur_J.py. Created ./out/Our_realblur_J_results. Followed the link to the RealBlur dataset. This also lists the requirements for DeblurGAN-v2 (Python 3.6.3, cuda 9.0).
The RealBlur dataset is 12 GB.
Checked the requirements with pip3 show torch. The installed version is 2.7.0, the requirement is 1.0.1. On the Jetson the installed version is 2.5.0.
torchvision was not installed, so pip3 show gave no information. Instead installed pip3 install pip-versions, and checked pip-versions list torchvision, which gave versions 0.1.6 to 0.23.0. On the Jetson the installed version is 0.20.0. Version 0.2.2 is required for DeblurGANv2.
pretrainedmodels was not installed on the Jetson. Versions 0.1.0 to 0.7.4 are available; 0.7.4 is the version required for DeblurGANv2.
Also opencv-python-headless was not installed on the Jetson. Available are 3.4.10.37 to 4.12.0.88. The 4.2.0.32 version required by DeblurGANv2 is not available; there is a gap between versions 3.4.18.65 and 4.3.0.38.
Next non-installed package is joblib. On the Jetson v0.3.2dev to v1.5.2 are available, including the required v0.14.1.
Next non-installed package is albumentations. On the Jetson 0.0.0 to 2.0.8 are available, including the required version 0.4.3.
Next non-installed package is scikit-image. On the Jetson 0.7.2 to 0.25.2 are available, including the required 0.17.2.
Next non-installed package is tqdm. On the Jetson 1.0 to 4.67.1 are available, including the required 4.19.9.
Next non-installed package is glog. On the Jetson the required 0.3.1 is the latest version.
Next non-installed package is tensorboardX. On the Jetson 0.6.9 to 2.6.4 are available, including the required version 2.0.
Next non-installed package is fire. On the Jetson 0.1.0 to 0.7.1 are available, including the required version 0.2.1.
Next non-installed package is torchsummary. On the Jetson the required 1.5.1 is the latest version.
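For reference, the checks above can also be done in one go with a small Python helper (my own sketch; the required versions are the DeblurGANv2 requirements listed above):
from importlib import metadata

required = {'torch': '1.0.1', 'torchvision': '0.2.2', 'pretrainedmodels': '0.7.4',
            'opencv-python-headless': '4.2.0.32', 'joblib': '0.14.1',
            'albumentations': '0.4.3', 'scikit-image': '0.17.2', 'tqdm': '4.19.9',
            'glog': '0.3.1', 'tensorboardX': '2.0', 'fire': '0.2.1',
            'torchsummary': '1.5.1'}
for pkg, req in required.items():
    try:
        print(f'{pkg}: installed {metadata.version(pkg)}, required {req}')
    except metadata.PackageNotFoundError:
        print(f'{pkg}: not installed, required {req}')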
Tried on ws9 python3 predict.py. Failed on missing module fire. Also had to install tqdm and albumentations. albumentations-0.4.3 also installed imgaug-0.2.6
Continued on ws9 with python3 predict.py. Next missing module is pretrainedmodels; installing pretrainedmodels-0.7.4 also installed munch-4.0.0 nvidia-cublas-cu12-12.8.4.1 nvidia-cuda-cupti-cu12-12.8.90 nvidia-cuda-nvrtc-cu12-12.8.93 nvidia-cuda-runtime-cu12-12.8.90 nvidia-cudnn-cu12-9.10.2.21 nvidia-cufft-cu12-11.3.3.83 nvidia-cufile-cu12-1.13.1.3 nvidia-curand-cu12-10.3.9.90 nvidia-cusolver-cu12-11.7.3.90 nvidia-cusparse-cu12-12.5.8.93 nvidia-cusparselt-cu12-0.7.1 nvidia-nccl-cu12-2.27.3 nvidia-nvjitlink-cu12-12.8.93 nvidia-nvtx-cu12-12.8.90 torch-2.8.0 torchvision-0.23.0 triton-3.4.0.
Last missing package for the predict is torchsummary-1.5.1.
Executed python3 predict.py ~/tmp/preview_with_bounding_box.jpg, which fails on missing pretrained weights.
predict.py loads by default config/config_RealBlurJ_bsd_gopro_pretrain_ragan-ls.yaml. The weights_path used is provided_model/fpn_inception.h5.
Would be interesting to see if the algorithm could deblur:
Yet, although both the config and the weights exist, python3 predict.py --weights_path provided_model/fpn_inception.h5 ~/tmp/apriltag_ctrl_result-1757084064-860794635.png still fails:
File "/home/arnoud/git/DeblurGANv2/predict.py", line 86, in main
predictor = Predictor(weights_path=weights_path)
File "/home/arnoud/git/DeblurGANv2/predict.py", line 19, in __init__
config = yaml.load(cfg)
The trick was to do config = yaml.load(cfg, Loader=yaml.Loader). Now I get a result for provided_model/fpn_inception.h5. Optical inspection shows that the whiteboard at the left has less blur. The AprilTag has not improved much:
In models/fpn_mobilenet.py a call is made to torch.load('mobilenetv2.pth.tar'), which fails. For the inception model an automatic download from "http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth" was made.
Difference seems to be from pretrainedmodels import inceptionresnetv2 vs from models.mobilenet_v2 import MobileNetV2.
Should look at how to load the state_dict from this post
Yet, a load fails. With weights_only=True it fails on Unpickler error: Unsupported operand 10, with weights_only=False it fails on UnpicklingError: invalid load key, '\x0a'.
Instead loading mobilenet_v2-b0353104.pth gives a layer mismatch.
MobileNet by default starts pretrained, so by setting the explicit pretrained load to False the code works:
Result is not better than the inception result, maybe even worse. Whiteboard is still better than the original.
Trying different settings for the inception model in the yaml, such as setting d_name to double_gan, multi_scale, patch_gan or no_gan respectively. I don't see any difference:
Trying other backbones. There are several options, yet predict_resnet.py still tries to load provided_model/fpn_inception.h5, which is a mismatch.
Tried to build predict_inception_v4.py, which failed on 'InceptionV4' object has no attribute 'conv2d_1a'. Same is true for predict_inception_v3.py
Checked the results with the default AprilTag implementation. No AprilTag was found, neither in the original nor in the deblurred image.
Looked at Uformer, but that has only a test, not a predict or inference instruction.
This recent implementation FourierDiff is better documented.
Not using the conda environment, main.py fails on cannot import name '_accumulate' from 'torch._utils'. That is a version issue (present until v2.2.0 of PyTorch, while I use v2.8.0).
Switching resulted in installing several cuda packages: nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.19.3 nvidia-nvtx-cu12-12.1.105 torch-2.2.0 triton-2.2.0.
Yet, the used implementation is even older: it now fails on module 'torch.library' has no attribute 'register_fake'.
That seems to be part of torch v2.4, so installed nvidia-cudnn-cu12-9.1.0.70 nvidia-nccl-cu12-2.20.5 torch-2.4.1 triton-3.0.0.
That helps; it now fails on RuntimeError: operator torchvision::nms. I am using torchvision version 0.23.0; they were using version 0.12.0.
Went halfway and installed version 0.18.1, which also downgraded torch again to v2.3.1: nvidia-cudnn-cu12-8.9.2.26 torch-2.3.1 torchvision-0.18.1 triton-2.3.1.
Back to cannot import name '_accumulate' from 'torch._utils'
Going all the way and installing torchvision-0.12.0 which also installed torch-1.11.0
The packages now load; got RuntimeError: CUDA error: no kernel image is available for execution on the device. This seems to originate from The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70. My RTX 4090 has CUDA capability sm_89, which is too new for that build.
Simson rule, try to install torchvision-0.15.2, which also installs cmake-4.1.0 lit-18.1.8 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 torch-2.0.1 torchvision-0.15.2 triton-2.0.0.
Now I get: INFO - main.py - 2025-09-09 16:45:21,443 - Using device: cuda.
Downloading something big (2.21G).
After the download the inference is not too slow. The result for low-light samples is impressive; the result on the motion blur of our test image is less impressive:
The AprilTag is still not recognized.
September 8, 2025
Because of Julian Gallas' proposal, I read Chapter 11 (Inertial Odometry for SLAM) of the SLAM Handbook.
The preintegration combines the IMU measurements in between image keyframes into integrated IMU factors:
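As a reminder of what is preintegrated (my own sketch in the notation of Forster et al.; the handbook's symbols may differ), the relative rotation, velocity and position deltas between keyframes i and j are built from the raw gyroscope samples \omega_k and accelerometer samples a_k:
\Delta R_{ij} = \prod_{k=i}^{j-1} Exp((\omega_k - b_g) \Delta t)
\Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik} (a_k - b_a) \Delta t
\Delta p_{ij} = \sum_{k=i}^{j-1} [ \Delta v_{ik} \Delta t + 1/2 \Delta R_{ik} (a_k - b_a) \Delta t^2 ]
These deltas depend only on the IMU samples and the bias estimate at keyframe i, so they can be summarized into a single factor between consecutive keyframes; when the bias estimate changes, only a first-order correction is needed instead of re-integrating.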
Looking for the code. SVO Pro is from the same year, combining VIO with VI-SLAM, but not the manifold approach.
As state-of-the-art VIO algorithms ORB-SLAM, DSV-IO, VINS-Mono, OpenVINS, Kimera, BASALT and DM-VIO are mentioned.
Should check some of the Inertial-Only Odometry algorithms mentioned on page 331.
For Julian the Edge algorithms on page 332 are interesting.
Started with the historical notes from Chapter 1.
They point to optimizing the trajectory with PoseSLAM or Smoothing and Mapping (SAM) (2006). Frank Dellaert presents SAM as a least-squares problem.
I like figure 1.4 from the SLAM Handbook.
Also nice on page 29 is the suggestion to combine equations (1.15) and (1.16) when both the control input u_t and the odometry measurement o_t are known.
On page 31, they indicate that by eliminating the covariance from the Jacobian and prediction error, the units of the measurements (length, angles) are removed, which allows combining them in a single cost function.
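In formula form (my own sketch, not the book's exact notation): each residual and its Jacobian are premultiplied by \Sigma_i^{-1/2}, which makes the terms dimensionless and lets them be summed into one cost:
x^* = argmin_x \sum_i || \Sigma_i^{-1/2} (h_i(x) - z_i) ||^2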
For non-linear optimization, they introduce DogLeg as an intermediate between the Steepest Descent and Gauss-Newton algorithms, like the Levenberg-Marquardt algorithm but without rejected updates, by explicitly tracking the trust-region boundary.
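A minimal numpy sketch of a single dog-leg step (my own illustration of the idea, not the handbook's code): compute the Gauss-Newton and steepest-descent steps, and blend them on the trust-region boundary of radius Delta:
import numpy as np

def dogleg_step(J, r, Delta):
    # Gradient of 0.5 * ||r||^2 and the Cauchy (steepest-descent) step
    g = J.T @ r
    alpha = (g @ g) / (g @ (J.T @ (J @ g)))
    p_sd = -alpha * g
    # Gauss-Newton step
    p_gn = -np.linalg.solve(J.T @ J, g)
    if np.linalg.norm(p_gn) <= Delta:          # GN step fits inside the trust region
        return p_gn
    if np.linalg.norm(p_sd) >= Delta:          # even the SD step leaves the region
        return Delta * p_sd / np.linalg.norm(p_sd)
    # Otherwise walk from p_sd towards p_gn until the trust-region boundary
    d = p_gn - p_sd
    a, b, c = d @ d, 2 * p_sd @ d, p_sd @ p_sd - Delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_sd + tau * d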
Checked Bishop's Pattern Recognition, which only covers gradient descent, and for other non-linear optimization techniques points to a book with Ian Nabney that was never published.
Looking into the robot_description_publisher.launch.py script. The script both starts a robot_description node and makes a ros_gz_sim create -topic /robot_description call.
Made a version ~/tmp/robot_description_publisher.launch.py which doesn't launch the robot_state_publisher, but uses the robot_description to launch a robot in the world.
In ~/ugv_ws/src/ugv_main/ugv_gazebo/launch/bringup/robot_state_publisher.launch.py only the robot-state node is started, which publishes the robot_description of the ugv_rover. Combined they should work.
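A minimal sketch of how such a combined launch file could look (my own reconstruction; paths and file names are assumptions based on the notes above):
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node

def generate_launch_description():
    ugv_gazebo_share = get_package_share_directory('ugv_gazebo')
    # Start the robot_state_publisher, which publishes /robot_description
    state_publisher = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(
            ugv_gazebo_share, 'launch', 'bringup', 'robot_state_publisher.launch.py')))
    # Spawn the robot in the running Gazebo world from that topic
    spawn = Node(package='ros_gz_sim', executable='create',
                 arguments=['-topic', '/robot_description'], output='screen')
    return LaunchDescription([state_publisher, spawn])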
First started ros2 launch ugv_gazebo robot_state_publisher.launch.py. This starts two nodes: /joint_state_publisher and /robot_state_publisher. The topic /robot_description is now visible.
Made a symbolic link from /opt/ros/humble/share/ros_gz_sim_demos/launch/spawn_from_robot_description.launch.py to ~/tmp/spawn_from_robot_description.launch.py. Still, a number of things go wrong. Not all TFs are defined:
Next, the meshes are not loaded; the URIs are not found. For example:
[ign gazebo-1] [Err] [SystemPaths.cc:378] Unable to find file with URI [model://ugv_description/meshes/ugv_rover/base_link.stl]
Extended export IGN_GAZEBO_RESOURCE_PATH=/opt/ros/humble/share:/home/arnoud/ugv_ws/install/ugv_description/share/. That solves the Gazebo part: ros2 launch ros_gz_sim_demos spawn_from_robot_description.launch.py now shows the ugv_rover in Gazebo (and no warnings), only the INFO:
[create-3] [INFO] [1757073678.522566814] [ros_gz_sim]: Requesting list of world names.
[create-3] [INFO] [1757073678.921602212] [ros_gz_sim]: Waiting messages on topic [/robot_description].
[create-3] [INFO] [1757073678.932812465] [ros_gz_sim]: Requested creation of entity.
[create-3] [INFO] [1757073678.932823080] [ros_gz_sim]: OK creation of entity.
There are now two additional nodes running: /rviz and /transform_listener
The current TF-tree has as root the base_footprint, which carries the base_link, which carries pt_base_link, 3d_camera_link, base_imu_link and base_lidar_link. That the four wheel_links are missing I understand; that the pt_camera_link is not coupled to the pt_base_link is strange. Yet, these are all the dynamic links.
Note that there was already ros2 launch ugv_gazebo spawn_ugv.launch.py.
Tried ros2 launch gazebo_ros gzserver.launch.py world:=ugv_gazebo/worlds/ugv_world.world. No errors, but nothing to see yet.
Next is ros2 launch gazebo_ros gzclient.launch.py, which brings up a Window 'Gazebo not responding'
Extended export IGN_GAZEBO_RESOURCE_PATH=/opt/ros/humble/share:/home/arnoud/ugv_ws/install/ugv_description/share/:/home/arnoud/install/ugv_gazebo/share/. Still, the ugv_world could not be found.
Directly called ruby /usr/bin/ign gazebo -r /home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/worlds/ugv_world.world --force-version 6. Now, the world could be found, but receive the error message Unable to find uri[model://ground_plane] again.
September 4, 2025
Used ros2 doctor, which indicates that several (rviz) packages have new versions.
Also robot_localization and apriltag_ros have new versions.
Also, it sees that several (ugv_rover) topics have subscribers without publishers and vice versa.
The outdated versions don't pop up with sudo apt upgrade. Tried instead pip list -o.
Did pip install xacro --upgrade. The upgrade succeeds, yet it is not visible to ros2 doctor.
The depth-image demo ros2 launch ros_gz_sim_demos image_bridge.launch.py image_topic:=/depth_camera works fine.
The equivalent ros2 launch ros_gz_sim_demos depth_camera.launch.py doesn't work. The 'choose a world' window shows up, so it looks like the world cannot be found. The bridge seems to be set up fine:
[parameter_bridge-2] [INFO] [1756979694.078910720] [ros_gz_bridge]: Creating ROS->GZ Bridge: [/depth_camera (sensor_msgs/msg/Image) -> /depth_camera (gz.msgs.Image)] (Lazy 0)
The GPU-lidar demo ros2 launch ros_gz_sim_demos gpu_lidar_bridge.launch.py sort of works. Rviz gives an error, because it uses the wrong frame. Yet, when switching to map nothing is displayed anymore. Also, I receive many warnings:
[rviz2-3] [INFO] [1756984073.412641424] [rviz]: Message Filter dropping message: frame 'model_with_lidar/link/gpu_lidar' at time 14.600 for reason 'discarding message because the queue is full'
The second way fails in the same way, without a world to start with. The first script starts with ros_gz_sim/worlds/gpu_lidar_sensor.sdf; in the second this is commented out!
The actual call to start the Gazebo simulation is ruby /usr/bin/ign gazebo -r gpu_lidar_sensor.sdf --force-version 6. Still, Rviz fails because there is no frame, and no additional panels. Also the rviz configuration is commented out. There is quite some difference between ../rviz/gpu_lidar_bridge.rviz and ../rviz/gpu_lidar.rviz.
Now I get an RVIZ with a LaserScan and PointCloud2 panel, yet still with a non-existent model_with_lidar/link/gpu_lidar (no other frames). I switched to ROS_DOMAIN_ID=42 to make sure that I am not distracted by the real ugv_rover.
The ros2 doctor no longer complains on missing publish/subscribe combinations. But also no complaints on missing TF, while none are published.
Yet, the bridge version also has no TF, but shows the LaserScans. The non-bridge version shows a warning:
[parameter_bridge-2] [INFO] [1756986688.171719229] [ros_gz_bridge]: Creating GZ->ROS Bridge: [/lidar (gz.msgs.LaserScan/) -> /lidar (sensor_msgs/msg/LaserScan)] (Lazy 0)
[parameter_bridge-2] [WARN] [1756986688.171824490] [ros_gz_bridge]: Failed to create a bridge for topic [/lidar] with ROS2 type [sensor_msgs/msg/LaserScan] to topic [/lidar] with Gazebo Transport type [gz.msgs.LaserScan/]
According to the ROS Gazebo demos only the point-cloud conversion is different; the LaserScan is parameterized via the ros_bridge. Yet, the parameters in the 2nd script contained a starting and ending '/'. Without this '/' the 2nd script works, although the point-cloud-dedicated conversion is not tested. The script indicates that a gpu_lidar.sdf inside ros_gz_point_cloud/ is needed to fix this.
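My reconstruction of the kind of change (the actual argument string in the script may differ): the parameter_bridge argument follows the topic@ros_type@gz_type pattern, and the stray '/' at the end of the Gazebo type (visible in the warning above) has to be dropped:
from launch_ros.actions import Node

bridge = Node(
    package='ros_gz_bridge',
    executable='parameter_bridge',
    # was: '/lidar@sensor_msgs/msg/LaserScan@gz.msgs.LaserScan/' (note the trailing '/')
    arguments=['/lidar@sensor_msgs/msg/LaserScan@gz.msgs.LaserScan'],
    output='screen')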
Note that the environment the lidar model is launched in is different from the one shown in the ROS Gazebo demos; in the script the playground is used. The lidar is not mounted on a moving platform:
The package ros_gz_point_cloud exists, with gpu_lidar.sdf in examples. In examples there is also a depth_camera.sdf and rgbd_camera.sdf
In /opt/ros/humble/lib there are the plugins ros_gz_bridge/parameter_bridge and ros_gz_image/image_bridge, but no ros_gz_point_cloud/.
When I do ign gazebo -r ~/tmp/gpu_lidar.sdf I get:
[Err] [SystemLoader.cc:94] Failed to load system plugin [RosGzPointCloud] : couldn't find shared library.
Yet, this is a known issue. Note that this gpu_lidar.sdf world has the three objects from the demo page.
The script ros2 launch ros_gz_sim_demos imu.launch.py works, although it gives a rqt_topic plugin. If I start rviz2, and select the IMU or imu-plugin, I get the warnings:
[INFO] [1756991321.468821299] [rviz]: Message Filter dropping message: frame 'sensors_box/link/imu' at time 114.420 for reason 'discarding message because the queue is full'
Same for the MagneticField
Next trying ros2 launch ros_gz_sim_demos navsat.launch.py.
Starts well:
[ign gazebo-1] [Dbg] [gz.cc:161] Subscribing to [/gazebo/starting_world].
[ign gazebo-1] [Dbg] [gz.cc:163] Waiting for a world to be set from the GUI...
[ign gazebo-1] [Msg] Received world [spherical_coordinates.sdf] from the GUI.
[ign gazebo-1] [Dbg] [gz.cc:167] Unsubscribing from [/gazebo/starting_world].
[ign gazebo-1] [Msg] Ignition Gazebo Server v6.16.0
[ign gazebo-1] [Msg] Loading SDF world file[/usr/share/ignition/ignition-gazebo6/worlds/spherical_coordinates.sdf].
[ign gazebo-1] [Msg] Serving entity system service on [/entity/system/add]
[ign gazebo-1] [Dbg] [SystemManager.cc:70] Loaded system [ignition::gazebo::systems::NavSat] for entity [1]
Next step is defining some services:
[ign gazebo-1] [Msg] Create service on [/world/spherical_coordinates/create]
[ign gazebo-1] [Msg] Remove service on [/world/spherical_coordinates/remove]
[ign gazebo-1] [Msg] Pose service on [/world/spherical_coordinates/set_pose]
[ign gazebo-1] [Msg] Pose service on [/world/spherical_coordinates/set_pose_vector]
[ign gazebo-1] [Msg] Light configuration service on [/world/spherical_coordinates/light_config]
[ign gazebo-1] [Msg] Physics service on [/world/spherical_coordinates/set_physics]
[ign gazebo-1] [Msg] SphericalCoordinates service on [/world/spherical_coordinates/set_spherical_coordinates]
[ign gazebo-1] [Msg] Enable collision service on [/world/spherical_coordinates/enable_collision]
[ign gazebo-1] [Msg] Disable collision service on [/world/spherical_coordinates/disable_collision]
[ign gazebo-1] [Msg] Material service on [/world/spherical_coordinates/visual_config]
[ign gazebo-1] [Msg] Material service on [/world/spherical_coordinates/wheel_slip]
Several plugins are loaded, including at the end the NavSatMap:
[ign gazebo-1] [GUI] [Dbg] [Application.cc:426] Loading plugin [NavSatMap]
[ign gazebo-1] [GUI] [Msg] Added plugin [NavSat Map] to main window
[ign gazebo-1] [GUI] [Msg] Loaded plugin [NavSatMap] from path [/usr/lib/x86_64-linux-gnu/ign-gui-6/plugins/libNavSatMap.so]
Gazebo freezes / crashes while two warnings are given:
[ign gazebo-1] [Wrn] [Component.hh:144] Trying to serialize component with data type [N3sdf3v125WorldE], which doesn't have `operator<<`. Component will not be serialized.
[ign gazebo-1] [Wrn] [Model.hh:69] Skipping serialization / deserialization for models with //pose/@relative_to attribute.
Again a rqt_topic plugin shows up.
As expected ros2 launch ros_gz_sim_demos image_bridge.launch.py image_topic:=/rgbd_camera/depth_image works.
Also ros2 launch ros_gz_sim_demos rgbd_camera_bridge.launch.py works.
As expected, ros2 launch ros_gz_sim_demos rgbd_camera.launch.py, has no world to load. Could select the depth_cameras.sdf, but no native conversion of point-clouds.
The script ros2 launch ros_gz_sim_demos robot_description_publisher.launch.py shows only a ball (both Gazebo and rviz). Tomorrow I should add the ugv_rover description.
September 3, 2025
Looking into system variables for Ignition Gazebo 6 at the Finding resources page.
Models are searched in IGN_GAZEBO_RESOURCE_PATH and GZ_SIM_RESOURCE_PATH
Looked at /.ignition/fuel/fuel.ignitionrobotics.org/openrobotics/models/gazebo/3/model.config, which contains only a Gazebo model (from Nate Koenig)
Did grep Wrn ~/.ignition/gazebo/log/2025-09-02T17\:12\:29.706603374/server_console.log, which gave two warnings:
(2025-09-02T17:17:43.937214976) (2025-09-02T17:17:43.937220099) [Wrn] [Component.hh:144] Trying to serialize component with data type [N3sdf3v125WorldE], which doesn't have `operator<<`. Component will not be serialized.
(2025-09-02T17:18:17.60617480) (2025-09-02T17:18:17.60622432) [Wrn] [SdfEntityCreator.cc:910] Sensor type LIDAR not supported yet. Try usinga GPU LIDAR instead.
Looking into Server Configuration. I like the addition of a camera, which actually uses another system environment variable: IGN_GAZEBO_SERVER_CONFIG_PATH
Next looking at Model command, which allows to inspect the models loaded with ign model --list.
According to the Migration page, IGN-based environment variables are no longer used, and one should use GZ_SIM_RESOURCE_PATH. That variable is not defined (yet) in my environment.
This post gives an overview of other Environment variables used (quite a list).
Also note the ros_gz comment at the bottom of the migration page
Starting with the vehicle example from Visualize in RViz tutorial. The first command ros2 launch ros_gz_sim_demos sdf_parser.launch.py rviz:=True partly fails:
[robot_state_publisher-3] [INFO] [1756904410.927266716] [robot_state_publisher]: Floating joint. Not adding segment from chassis to caster.
[ign gazebo-1] [ignition::plugin::Loader::LookupPlugin] Failed to get info for [gz::sim::systems::OdometryPublisher]. Could not find a plugin with that name or alias.
[ign gazebo-1] [Err] [SystemLoader.cc:125] Failed to load system plugin [gz::sim::systems::OdometryPublisher] : could not instantiate from library [gz-sim-odometry-publisher-system] from path [/usr/lib/x86_64-linux-gnu/ign-gazebo-6/plugins/libignition-gazebo-odometry-publisher-system.so]
Following this segmentation example, I tried sudo apt-get install libignition-gazebo6-dev. Was already installed. /usr/lib/x86_64-linux-gnu/ign-gazebo-6/plugins/libignition-gazebo-odometry-publisher-system.so also exists.
Problem seems more the plugin-name, according to this post.
Looking into /opt/ros/humble/share/ros_gz_sim_demos/launch/sdf_parser.launch.py
This loads model/vehicle.sdf, but not explicit odometry-plugins.
Looked in /opt/ros/humble/share/ros_gz_sim/launch/gz_sim.launch.py. Here plugins are loaded, from GZ_SIM_SYSTEM_PLUGIN_PATH, but not explicit odometry.
Inside the lib, I see ignition::gazebo::systems::OdometryPublisher, instead of gz::sim::systems::OdometryPublisher. Seems to be a migration problem.
Instead of the latest, I should use the Fortress ROS 2 Integration tutorial, although it does not have the Visualize in RViz part.
Finally, this worked, although I had to use three terminals: one for the bridge, one for Gazebo and one for ros2 topic echo.
Also ros2 launch ros_gz_sim gz_sim.launch.py gz_args:="shapes.sdf" from ros_gz_sim_demos works.
The fluid pressure demo can be started with ros2 launch ros_gz_sim_demos air_pressure.launch.py (the readme still said xml).
Trying to subscribe "reliable" indeed failed; ros2 topic echo /air_pressure --qos-reliability reliable gives:
[WARN] [1756911367.367823219] [_ros2cli_893745]: New publisher discovered on topic '/air_pressure', offering incompatible QoS. No messages will be received from it. Last incompatible policy: RELIABILIT
Also ros2 launch ros_gz_sim_demos battery.launch.py works, which gives two vehicles (and no odometry complaints!)
Driving around with ros2 topic pub /model/vehicle_blue/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 5.0}, angular: {z: 0.5}}" also works, although I don't see the battery level going down.
Also ros2 launch ros_gz_sim_demos image_bridge.launch.py works.
The command ros2 launch ros_gz_sim_demos camera.launch.py also launches rviz, which shows the image (although with a warning that no tf-data is received).
The command ros2 launch ros_gz_sim_demos diff_drive.launch.py launches a vehicle, including odometry.
With ros2 topic echo /model/vehicle_green/odometry I receive odometry updates.
Tomorrow the GPU lidar example, together with the BumpGo code.
Installing the software from the book is not completely trivial / documented.
Started with making a workspace:
cd
mkdir -p bookros2_ws/src
cd bookros2_ws/src/
Continued with cloning the humble-branch (quite some commits behind the main rolling-branch):
git clone --branch humble-devel https://github.com/fmrico/book_ros2.git
First had to install vcstool with sudo apt install python3-vcstool before I could import the dependencies with vcs import . < book_ros2/third_parties.repos. This installs the behavior-framework Groot, and packages of pal, pmb2 and tiago robots.
Continued with sudo rosdep init, rosdep update.
Installing all dependencies with cd ~/bookros2_ws, rosdep install --from-paths src --ignore-src -r -y.
Finally the workspace itself is built with cd ~/bookros2_ws, colcon build --symlink-install. 39 packages are built.
Activate the built packages with source ~/bookros2_ws/install/setup.sh.
The simulation is started with ros2 launch br2_tiago sim.launch.py. That fails because You are using the public simulation of PAL Robotics, make sure the launch argument is_public_sim is set to True.
With help of this discussion, corrected the start command to ros2 launch br2_tiago sim.launch.py is_public_sim:=True
The simulation is waiting on the controller_manager. Starting up the controller with ros2 run br2_fsm_bumpgo_cpp bumpgo --ros-args -r output_vel:=/nav_vel -r input_scan:=/scan_raw -p use_sim_time:=true. I see no output from bumpgo.
Tried instead ros2 launch br2_fsm_bumpgo_cpp bump_and_go.launch.py, which only indicated that the bumpgo process has started.
Tried instead starting the simulation including a world with ros2 launch br2_tiago sim.launch.py world:=factory is_public_sim:=True (Chapter 2). Now I see several error messages about missing MoveIt.
Run again, now with ros2 launch br2_tiago sim.launch.py world:=factory is_public_sim:=True moveit:=False navigation:=True. That starts up an rviz window with a single TF. No Gazebo screen (yet), although gzserver prints a lot of models, materials and links.
Still see some errors, such as moveit requesting the plugin of chomp-planning. The control_server is complaining that frame "odom" doesn't exist.
The documentation indicates ros2 launch tiago_gazebo tiago_gazebo.launch.py is_public_sim:=True [arm_type:=no-arm]. That still gives a lot of play_motion2 errors, mainly on the arm. Instead tried ros2 launch tiago_gazebo tiago_gazebo.launch.py is_public_sim:=True arm_type:=no-arm. Fewer errors, but tuck_arm and play_motion still give errors.
According to this discussion the play_motion errors can be ignored. The packages should work; I could try vcs pull (first doing sudo apt upgrade to get nearly 100 ros-humble packages up-to-date). According to vcs pull all packages are up-to-date.
Only remaining error-message I now see is:
[move_group-3] [ERROR] [1756806564.285553799] [moveit.ros_planning_interface.moveit_cpp]: Failed to initialize planning pipeline 'chomp'.
And the play_motion errors:
[play_motion2_node-9] [ERROR] [1756806590.164395740] [play_motion2]: Service /controller_manager/list_controllers not available.
[play_motion2_node-9] [ERROR] [1756806590.164568652] [play_motion2]: There are no active JointTrajectory controllers available
[play_motion2_node-9] [ERROR] [1756806590.164589783] [play_motion2]: Joint 'torso_lift_joint' is not claimed by any active controller
[tuck_arm.py-15] [ERROR] [1756806590.485311835] [arm_tucker]: play_motion2 is not ready
The Tutorial uses the colcon mixin build interface. 55 packages are built, including moveit_planners_chomp. The terminal crashed while building! Trying again. Now all 55 packages are built.
Didn't do the Cyclone-DDS switch (yet).
Continue with MoveIt Quickstart. Receive many warnings that a link is not found in model 'panda'.
Rviz was hidden on my other monitor. When selecting “MotionPlanning” from moveit_ros_visualization, the robot appeared. Yet, the MotionPlanning gives a warning, Requesting initial scene failed.
Everything in the Quickstart worked, except showing a trail between start and goal. Both start configurations were collision-free; not sure why no path was found.
Checked terminal for error message. Saw:
[rviz2-1] [INFO] [1756816210.851194549] [move_group_interface]: MoveGroup action client/server not ready
[rviz2-1] [ERROR] [1756816210.851330037] [moveit_background_processing.background_processing]: Exception caught while processing action 'update start state': can't compare times with different time sources
[rviz2-1] [ERROR] [1756816215.083438248] [moveit_ros_robot_interaction.robot_interaction]: Unknown marker name: 'EE:goal_panda_link8' (not published by RobotInteraction class) (should never have ended up in the feedback_map!)
[rviz2-1] [INFO] [1756816222.941671416] [move_group_interface]: MoveGroup action client/server not ready
[rviz2-1] [WARN] [1756816289.434020412] [rcl.logging_rosout]: Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic.
The Tiago simulation was still running in the background, maybe that was the problem.
With Tiago killed something is happening. I at least see:
[move_group-4] [INFO] [1756816658.654702357] [moveit_move_group_capabilities_base.move_group_capability]: Using planning pipeline 'ompl'
[move_group-4] [INFO] [1756816658.682241776] [moveit_move_group_default_capabilities.move_action_capability]: Motion plan was computed successfully.
[rviz2-1] [INFO] [1756816658.682573844] [move_group_interface]: Planning request complete!
[rviz2-1] [INFO] [1756816658.682721350] [move_group_interface]: time taken to generate plan: 0.015806 seconds
Still no trail, but good enough for the moment.
Back to ros2 launch tiago_gazebo tiago_gazebo.launch.py is_public_sim:=True arm_type:=no-arm. With both workspaces loaded, there are no MoveIt plugin problems anymore, but still no Rviz or Gazebo shows up.
Initiated the workspace ugv_ws and did ros2 launch ugv_gazebo bringup.launch.py. Receive a warning:
[robot_state_publisher-3] [WARN] [1756817272.617985721] [robot_state_publisher]: No robot_description parameter, but command-line argument available. Assuming argument is name of URDF file. This backwards compatibility fallback will be removed in the future.
Next is an info-message:
[spawn_entity.py-5] [INFO] [1756817272.876533049] [spawn_entity]: Loading entity XML from file ~/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/models/ugv_rover/model.sdf
Next is an error-message:
[spawn_entity.py-5] [ERROR] [1756817302.926052688] [spawn_entity]: Service /spawn_entity unavailable. Was Gazebo started with GazeboRosFactory?
Tried ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=empty.sdf, which works.
Did ros2 launch ugv_gazebo display.launch.py, which gave a rviz screen with a RobotModel:
Tried ros2 launch ugv_gazebo gazebo_example_world.launch.py, with the launch example of Launch Gazebo from ROS 2. Gives package 'example_package' not found
Was marked with #Replace with your package, so changed it to ugv_gazebo.
Launch now failed on downloading the world:
[ign gazebo-1] Unable to find or download file
[ERROR] [ign gazebo-1]: process has died [pid 584622, exit code 255, cmd 'ruby /usr/bin/ign gazebo ~/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/worlds/example_world.sdf --force-version 6']
There is a world in the ugv_gazebo package, but worlds/ugv_world.world is not an SDF.
Loading this as world gave the errors:
[ign gazebo-1] [Err] [Server.cc:139] Error Code 13: [/sdf/world[@name="default"]/include[0]/uri:~/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/worlds/ugv_world.world:L6]: Msg: Unable to find uri[model://ground_plane]
[ign gazebo-1] [Err] [Server.cc:139] Error Code 13: [/sdf/world[@name="default"]/include[1]/uri:~/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/worlds/ugv_world.world:L10]: Msg: Unable to find uri[model://sun]
[ign gazebo-1] [Err] [Server.cc:139] Error Code 3: Msg: The supplied model name [world] is reserved.
Changed in the launch-file the package to gazebo_ros, and the world to worlds/empty.world. Also here the model sun and ground_plane are not known.
There are three models directories in /opt/ros/humble/share/, in the packages ros_gz_sim_demos, rviz_rendering/ogre_media and depthai_descriptions/urdf.
The sun and ground_plane are actually in ~/.gazebo/models/.
Included both with GAZEBO_MODEL_PATH and IGN_GAZEBO_MODEL_PATH, still not found.
According to this Gazebo documentation, ros_gz_sim should have a file gz_spawn_model.launch.py. It is no longer there in /opt/ros/humble/share/ros_gz_sim/launch/, only gz_sim.launch.py. There is a ros_gz_spawn_model.launch.py code snippet, but this script also launches ros_gz_bridge (which has no launch file for humble).
When I start ros2 launch ros_gz_sim gz_sim.launch.py I got a quick start screen with several worlds to choose from.
Launching ros2 run ros_gz_sim create -world default -file 'https://fuel.ignitionrobotics.org/1.0/openrobotics/models/Gazebo' fails.
Starting empty from the quick start screen worked. Note that the source file path is /usr/share/ignition/ignition-gazebo6/worlds/empty.sdf.
Running ros2 run ros_gz_sim create -world empty -file 'https://fuel.ignitionrobotics.org/1.0/openrobotics/models/Gazebo' adds a house to the empty world.
I see an error [ign gazebo-1] [Err] [SystemPaths.cc:473] Could not resolve file [gazebo_diffuse.jpg].
Started shapes.sdf from the quick start screen.
Started empty from the quick start screen. When I do ros2 run ros_gz_sim create -world empty -file '/home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/models/ugv_rover/model.sdf' the model is loaded, yet I see no mesh:
September 1, 2025
Updating the Overview lecture. Needed some time to find the Waveshare page of Unmanned Ground Vehicle basis, and Waveshare's Jetson Orin documentation.
Trying to start ros2 run rqt_console rqt_console from nb-dual (WSL with XLaunch running). Fails on from python_qt_binding import QT_BINDING, ImportError: libQt5Core.so.5.
Strange, added /usr/lib/x86_64-linux-gnu/libQt5Core.so.5 to LD_LIBRARY_PATH, still same error.
The trick from stackoverflow worked: sudo strip --remove-section=.note.ABI-tag /lib/x86_64-linux-gnu/libQt5Core.so.5.
Loading from python_qt_binding import QT_BINDING from python3 now works, yet ros2 run rqt_console rqt_console still fails, now on:
could not connect to display
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Running xclock also gave the error Can't open display:. This could be solved with export DISPLAY=0:0. Now I got a rqt_console working under Windows from a WSL Ubuntu-22.04 terminal:
Chapter 3 has a nice simple application to control the robot with the Lidar, BumpGo (FSM-based), for a simulated Tiago robot. It is also available as a Python version.
Nice project to try tomorrow on WS9 (Gazebo), before trying it on the Rover.
Chapter 5 has an example of tracking an object (camera based), with a HeadController, including a PID-controller.
Chapter 6 describes a patrolling behavior (implemented with Groot), with Nav and a costmap. Yet, the example in the slides is still ros1-foxy based. Nav2 (ros2-humble) with Turtlebot/Tiago is described in the appendix.
August 29, 2025
Testing UvA AI-chat. Suggested this piece of code to draw the bounding box:
# Iterate through each detection and draw the bounding box
for detection in detections:
    corners = detection['lb-rb-rt-lt']
    # Convert corners to integer
    corners = np.int32(corners)
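The suggestion stops short of the actual drawing call; a completed sketch (my own, assuming a cv2 image called image and the detection dict with the 'lb-rb-rt-lt' corners shown further below) could be:
import cv2
import numpy as np

for detection in detections:
    # Corners come as a 4x2 array in lb-rb-rt-lt order
    corners = np.int32(detection['lb-rb-rt-lt']).reshape(-1, 1, 2)
    cv2.polylines(image, [corners], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite('preview_with_bounding_box.jpg', image)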
The example run python test_deeptag.py --config config_image.json seems to run, although at the end it crashed on Qt (it tried to display the result?).
Indeed, at the end a call is made to visualize(). Made an equivalent function which saves the image.
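The replacement can stay very small (my own sketch; the function and variable names in test_deeptag.py are assumptions):
import cv2

def visualize_to_file(image, filename='deeptag_result.png'):
    # Save the annotated result instead of opening a Qt window
    cv2.imwrite(filename, image)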
That works, for the example image:
Yet, the config_image.json not only specifies the file name, but also the cameraMatrix and distCoeffs. Running the same code on my preview.jpg, Stage 1 already failed to detect a ROI, so the 2nd stage also failed.
On June 2025 I did python3 examples/calibration/calibration_reader.py (for the RAE) and the UGV Rover.
The preview I stored was 300x300. I did a calibration run on the UGV already in June, so I could try to reuse those values. The apriltag_cube was a 1280x960 image. The cameraMatrix of the apriltag_cube was [921.315, 0, 649.204, 0, 921.375, 459.739, 0, 0, 1], changing it to [453.941, 0.0, 318.555, 0.0, 454.052, 243.238, 0, 0, 1] (for height=480).
The distCoeffs were [0.978811, 3.71217, 0.000159566, 0.000281263, 0.638837, 1.36651, 3.99066, 1.93969], updated to [72.96343994140625, -161.83697509765625, 0.0020885001868009567, -0.0006911111413501203, 96.07749938964844, 73.06451416015625, -162.3726806640625, 96.70803833007813, 0.0, 0.0, 0.0, 0.0, -0.01333368755877018, 0.0015384935541078448].
Also updated the marker_size from 3 cm to 10 cm.
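As a cross-check (my own sketch; in the end I used the values from the device calibration, not this rescaling), intrinsics for a lower resolution can also be approximated by scaling the full-resolution cameraMatrix with the image size:
import numpy as np

K_full = np.array([[921.315, 0.0, 649.204],
                   [0.0, 921.375, 459.739],
                   [0.0, 0.0, 1.0]])     # 1280x960 calibration
sx, sy = 640 / 1280, 480 / 960
K_small = K_full.copy()
K_small[0, :] *= sx   # fx and cx
K_small[1, :] *= sy   # fy and cy
print(K_small)        # roughly comparable to the 480-line calibration above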
Yet, now the code fails on cameraMatrix = np.float32(cameraMatrix).reshape(3,3), because I forgot the last row.
Now the code runs again, but still fails to find a ROI:
>>>>>>>Stage-1<<<<<<<
0 ROIs
>>>>>>>Stage-2<<<<<<<
>iter #0
>iter #1
Valid ROIs:
------timing (sec.)------
Stage-1 : [CNN 0.8183] 0.8188
Stage-2 (1 marker): [CNN 0.0000] 0.0000
Stage-2 (0 rois, 0 markers): [CNN 0.0000] 0.0000
The first part is on fine-tuning YOLOv8. Started with the last part, which fails on pipeline = create_camera_pipeline(YOLOV8N_CONFIG, YOLOV8N_MODEL). That is correct, I didn't specify YOLOV8N_CONFIG yet.
With YOLOV8N_CONFIG defined, it is missing the fine-tuned model: yolov8_320_fps_check/result/best.json.
So, I made a download_pothole_dataset.py script in ~/test/ultralytics. The script looks more like a shell script. Downloaded the dataset directly with wget.
The commands seem to be intended for a Jupyter notebook. Just calling !yolo doesn't work. Looking at the regular fine-tuning tutorial. That has a train.py file (instead of the yolo CLI).
According to this post, the command should be just ultralytics train. Tried ultralytics train model=yolov8n.pt imgsz=960 data=pothole_v8.yaml epochs=50 batch=32 name=yolov8n_v8_50e, which starts downloading yolov8n.pt but then fails on line 6 of the yaml file (missing quote).
Seems to start nicely:
Plotting labels to runs/detect/yolov8n_v8_50e2/labels.jpg...
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.002, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 960 train, 960 val
Using 6 dataloader workers
Logging results to runs/detect/yolov8n_v8_50e2
Starting training for 50 epochs...
Directly after that the process is killed. It seems that specifying an additional "datasets" directory was not necessary. Without it the dataset seems to be valid; some duplicate labels are removed. Still, the training is killed (without an error message). Running it with train_pothole.py also gives a kill.
Unfortunately, the openvino blob is no longer available.
Found an earlier (openvino_2021) version with artifactory.
Seems to work, yet it looks like the preview crashes (no display). Removed the preview code. Now the script fails on in_cam = q_cam.get(). That was still the preview frame.
The script now works; I get some warnings:
/home/jetson/test/ultralytics/detect_face_from_oak.py:22: DeprecationWarning: RGB is deprecated, use CAM_A or address camera by name instead.
cam.setBoardSocket(dai.CameraBoardSocket.RGB)
/home/jetson/test/ultralytics/detect_face_from_oak.py:26: DeprecationWarning: LEFT is deprecated, use CAM_B or address camera by name instead.
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
/home/jetson/test/ultralytics/detect_face_from_oak.py:29: DeprecationWarning: RIGHT is deprecated, use CAM_C or address camera by name instead.
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
[14442C10019902D700] [1.2.3] [1.521] [SpatialDetectionNetwork(4)] [warning] Neural network inference was performed on socket 'RGB', depth frame is aligned to socket 'RIGHT'. Bounding box mapping will not be correct, and will lead to erroneus spatial values. Align depth map to socket 'RGB' using 'setDepthAlign'.
Looking at this post, which also uses an OAK combined with a Jetson. Points to Multi-Device Setup from Luxonis. Only one X_LINK_UNBOOTED device showed up.
A better test example is the hello world from Luxonis. Should only save the frame instead of showing it.
That works. Actually, it already recognized the AprilTag with mobilenet-ssd_openvino_2022.1_6shave.blob, although it didn't print the label:
Executing YOLO on the CPU is really easy, just device="cpu". The result is also interesting. Inference is 2x as slow, but overall it is faster, because the postprocess is 80x faster:
CPU-Speed: 8.8ms preprocess, 364.1ms inference, 8.1ms postprocess per image at shape (1, 3, 640, 480)
GPU-Speed: 8.6ms preprocess, 154.8ms inference, 638.9ms postprocess per image at shape (1, 3, 640, 480)
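A minimal sketch of that comparison with the ultralytics API (model and image as used above):
from ultralytics import YOLO

model = YOLO('yolo11n.pt')
model.predict('bus.jpg', device=0)      # GPU (CUDA device 0)
model.predict('bus.jpg', device='cpu')  # CPU: slower inference, much faster postprocess here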
That works, it can detect a tagStandard41h12 with ID=1. The structure that is returned has the following form:
Detecting: {'hamming': 0, 'margin': 85.24191284179688, 'id': 1, 'center': array([197.00670206, 147.69477397]), 'lb-rb-rt-lt': array([[153.73930359, 193.42068481],
[244.68344116, 190.64276123],
[238.93740845, 103.38150787],
[151.60453796, 106.79576111]])}
August 26, 2025
The Rover is running JetPack 6. When checking with nvidia-smi, the CUDA version is 12.02 but no drivers are installed (nvidia-tools is version 540.3.0).
Starting with the Yolo tutorial. It has start commands for three JetPacks (v4 to v6). Yet, all three are Docker based.
Ultralytics itself has a tutorial with a native installation on the Jetson.
Doing pip install ultralytics[export] starts to download for instance multiple versions of typing_extensions (4.2.0 to 4.15.0)
So, because at the end installing the rich dependency (from keras, tensorflow, ultralytics) failed, I installed just pip install ultralytics.
I installed the other packages, and tried to run the example.
That failed on libcudart.so.12.
Looked at this post: I have only one libcuda.so version, in /usr/lib/aarch64-linux-gnu/libcuda.so.
Did sudo apt install cuda-toolkit-12-2, which contains cuda-cudart-12-2 cuda-cudart-dev-12-2. That solves it partly. The next import error is on libcudnn.so.9. According to Installing cuDNN on Linux, this could be solved with sudo apt-get -y install cudnn9-cuda-12.
That solves all explicit dependencies. Yet, there are also implicit dependencies:
Creating new Ultralytics Settings v0.0.6 file ✅
View Ultralytics Settings with 'yolo settings' or at '/home/jetson/.config/Ultralytics/settings.json'
Update Settings with 'yolo settings key=value', i.e. 'yolo settings runs_dir=path/to/dir'. For help see https://docs.ultralytics.com/quickstart/#ultralytics-settings.
Downloading https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt to 'yolo11n.pt': 100%
YOLO11n summary (fused): 100 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
Ultralytics requirements ['onnx>=1.12.0,<1.18.0', 'onnxslim>=0.1.59'] not found, attempting AutoUpdate...
torch 2.5.0 requires sympy==1.13.1, but you have sympy 1.14.0 which is incompatible
Major problem is the support-matrix. Checked tensorrt 10.7.x, which requires cuda 12.6.
I couldn't find the installation instructions for tensorrt 9.x, and version 8.x is CUDA 12.2 based.
Followed these instructions to install 12.6 on JetPack.
Now I can download wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.7.0/local_repo/nv-tensorrt-local-repo-ubuntu2404-10.7.0-cuda-12.6_1.0-1_arm64.deb
Next available version is https://developer.download.nvidia.com/compute/tensorrt/10.13.2/local_installers/nv-tensorrt-local-repo-ubuntu2404-10.13.2-cuda-13.0_1.0-1_arm64.deb (maybe for JetPack 7).
The next step, according to the instructions, is sudo apt-get install tensorrt.
Verifying the installation with dpkg-query -W tensorrt gives tensorrt 10.7.0.23-1+cuda12.6.
Running the example gets one step further:
PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.65...
ONNX: export success ✅ 3.2s, saved as 'yolo11n.onnx' (10.2 MB)
TensorRT: export failure 3.3s: libnvdla_compiler.so: cannot open shared object file: No such file or directory
It seems that it is part of the nvidia-toolkit. Found these notes and did sudo apt-get install nvidia-container-toolkit libnvidia-container-tools nvidia-container-runtime. That was not the trick.
The NVIDIA DLA (Deep Learning Accelerator) should be part of the Orin Jetson.
Did a sudo apt upgrade, which updates many ros-humble packages, and installs ros-humble-aruco-msgs ros-humble-aruco-opencv-msgs.
The first check cat /proc/driver/nvidia/version gives VRM version: NVIDIA UNIX Open Kernel Module for aarch64 540.3.0
Trying sudo apt install nvidia-jetpack, which gives the following dependency clashes:
nvidia-container : Depends: nvidia-container-toolkit-base (= 1.14.2-1) but 1.17.8-1 is to be installed
Depends: libnvidia-container-tools (= 1.14.2-1) but 1.17.8-1 is to be installed
Depends: nvidia-container-toolkit (= 1.14.2-1) but 1.17.8-1 is to be installed
Depends: libnvidia-container1 (= 1.14.2-1) but 1.17.8-1 is to be installed
nvidia-tensorrt : Depends: tensorrt-libs (= 8.6.2.3-1+cuda12.2) but 10.7.0.23-1+cuda12.6 is to be installed
Searched with apt list -a *dla* for relevant packages. Found nvidia-l4t-dla-compiler. Now /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so exists, but python can still not find it. This path was not part of LD_LIBRARY_PATH (note: many libraries seem to be missing - I would also expect /usr/local/cuda-12/lib64).
Start the definition now with export LD_LIBRARY_PATH="/usr/local/lib:/usr/lib/aarch64-linux-gnu/:/usr/lib/aarch64-linux-gnu/tegra/:/usr/local/cuda-12/lib64:${LD_LIBRARY_PATH:-}"
Now the example works:
TensorRT: starting export with TensorRT 10.7.0...
[08/27/2025-02:32:47] [TRT] [I] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 640, GPU 3219 (MiB)
[08/27/2025-02:32:50] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +965, GPU -392, now: CPU 1648, GPU 2860 (MiB)
[08/27/2025-02:32:50] [TRT] [I] ----------------------------------------------------------------
[08/27/2025-02:32:50] [TRT] [I] Input filename: yolo11n.onnx
[08/27/2025-02:32:50] [TRT] [I] ONNX IR version: 0.0.9
[08/27/2025-02:32:50] [TRT] [I] Opset version: 19
[08/27/2025-02:32:50] [TRT] [I] Producer name: pytorch
[08/27/2025-02:32:50] [TRT] [I] Producer version: 2.5.0
[08/27/2025-02:32:50] [TRT] [I] Domain:
[08/27/2025-02:32:50] [TRT] [I] Model version: 0
[08/27/2025-02:32:50] [TRT] [I] Doc string:
[08/27/2025-02:32:50] [TRT] [I] ----------------------------------------------------------------
TensorRT: input "images" with shape(1, 3, 640, 640) DataType.FLOAT
TensorRT: output "output0" with shape(1, 84, 8400) DataType.FLOAT
TensorRT: building FP32 engine as yolo11n.engine
[08/27/2025-02:32:50] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[08/27/2025-02:34:25] [TRT] [I] Compiler backend is used during engine build.
[08/27/2025-02:36:07] [TRT] [W] Tactic Device request: 260MB Available: 256MB. Device memory is insufficient to use tactic.
[08/27/2025-02:36:07] [TRT] [W] UNSUPPORTED_STATE: Skipping tactic 0 due to insufficient memory on requested size of 272793600 detected for tactic 0x00000000000003e8.
[08/27/2025-02:36:07] [TRT] [W] Tactic Device request: 260MB Available: 257MB. Device memory is insufficient to use tactic.
[08/27/2025-02:36:07] [TRT] [W] UNSUPPORTED_STATE: Skipping tactic 1 due to insufficient memory on requested size of 272793600 detected for tactic 0x00000000000003ea.
[08/27/2025-02:36:07] [TRT] [W] Tactic Device request: 260MB Available: 258MB. Device memory is insufficient to use tactic.
[08/27/2025-02:36:07] [TRT] [W] UNSUPPORTED_STATE: Skipping tactic 2 due to insufficient memory on requested size of 272793600 detected for tactic 0x0000000000000000.
[08/27/2025-02:36:14] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[08/27/2025-02:36:16] [TRT] [I] Total Host Persistent Memory: 566592 bytes
[08/27/2025-02:36:16] [TRT] [I] Total Device Persistent Memory: 38912 bytes
[08/27/2025-02:36:16] [TRT] [I] Max Scratch Memory: 2764800 bytes
[08/27/2025-02:36:16] [TRT] [I] [BlockAssignment] Started assigning block shifts. This will take 226 steps to complete.
[08/27/2025-02:36:16] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 25.7148ms to assign 11 blocks to 226 nodes requiring 19252224 bytes.
[08/27/2025-02:36:16] [TRT] [I] Total Activation Memory: 19251200 bytes
[08/27/2025-02:36:16] [TRT] [I] Total Weights Memory: 10588740 bytes
[08/27/2025-02:36:16] [TRT] [I] Compiler backend is used during engine execution.
[08/27/2025-02:36:16] [TRT] [I] Engine generation completed in 205.479 seconds.
[08/27/2025-02:36:16] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 1 MiB, GPU 213 MiB
TensorRT: export success ✅ 212.3s, saved as 'yolo11n.engine' (11.9 MB)
Export complete (214.0s)
Results saved to /home/jetson/test/ultralytics
Predict: yolo predict task=detect model=yolo11n.engine imgsz=640
Validate: yolo val task=detect model=yolo11n.engine imgsz=640 data=/usr/src/ultralytics/ultralytics/cfg/datasets/coco.yaml
Visualize: https://netron.app
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Loading yolo11n.engine for TensorRT inference...
[08/27/2025-02:36:16] [TRT] [I] Loaded engine size: 11 MiB
[08/27/2025-02:36:16] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +18, now: CPU 0, GPU 28 (MiB)
Downloading https://ultralytics.com/images/bus.jpg to 'bus.jpg': 100% ━━━━━━━━━━━━ 134.2/13Downloading https://ultralytics.com/images/bus.jpg to 'bus.jpg': 100% ━━━━━━━━━━━━ 134.2/134.2KB 16.1MB/s 0.0s
image 1/1 /home/jetson/test/ultralytics/bus.jpg: 640x640 4 persons, 1 bus, 32.4ms
Speed: 20.0ms preprocess, 32.4ms inference, 622.3ms postprocess per image at shape (1, 3, 640, 640)
This is the bus.jpg:
Without the conversion of the ONNX model to the TensorRT engine, 152.4 ms is needed for inference:
Speed: 9.5ms preprocess, 152.4ms inference, 544.9ms postprocess per image at shape (1, 3, 640, 480)
With the conversion, 31.6ms is needed (5x as fast):
Speed: 33.9ms preprocess, 31.6ms inference, 588.2ms postprocess per image at shape (1, 3, 640, 640)
In the meantime I looked at some of my experiments of December 12, 2024. The UGV rover also has a DepthAI camera, so ros2 launch depthai_ros_driver camera.launch.py worked out of the box.
An oak_container is started up. Inside the container I get after a while:
[component_container-1] [INFO] [1756214142.940842084] [oak]: Starting camera.
[component_container-1] [INFO] [1756214143.845368276] [oak]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [INFO] [1756214153.702111573] [oak]: Camera with MXID: 14442C10019902D700 and Name: 1.2.3 connected!
[component_container-1] [INFO] [1756214153.703465062] [oak]: USB SPEED: HIGH
[component_container-1] [INFO] [1756214153.752090092] [oak]: Device type: OAK-D-LITE
[component_container-1] [INFO] [1756214153.756476940] [oak]: Pipeline type: RGBD
[component_container-1] [INFO] [1756214154.265811788] [oak]: NN Family: mobilenet
[component_container-1] [INFO] [1756214154.371948858] [oak]: NN input size: 300 x 300. Resizing input image in case of different dimensions.
[component_container-1] [INFO] [1756214154.986553834] [oak]: Finished setting up pipeline.
After that I get three warnings, and the camera is ready:
[component_container-1] [14442C10019902D700] [1.2.3] [3.449] [SpatialDetectionNetwork(9)] [warning] Network compiled for 6 shaves, maximum available 10, compiling for 5 shaves likely will yield in better performance
[component_container-1] [WARN] [1756214156.322062749] [oak]: Parameter imu.i_rot_cov not found
[component_container-1] [WARN] [1756214156.322186081] [oak]: Parameter imu.i_mag_cov not found
[component_container-1] [INFO] [1756214157.512126477] [oak]: Camera ready!
Tried again, now with DEPTHAI_DEBUG=1.
Now I get some (too much) extra information, such as:
[component_container-1] CAM ID: 1, width: 640, height: 480, orientation: 0
[component_container-1] CAM ID: 2, width: 640, height: 480, orientation: 0
[component_container-1] CAM ID: 0, width: 1920, height: 1080, orientation: 0
Looking on nb-dual what I see when starting ros2 launch depthai_ros_driver camera.launch.py. With ros2 node list I see:
/launch_ros_147132
/oak
/oak_container
/oak_state_publisher
/rectify_color_node
With ros2 topic list I see the following oak-topics:
/oak/imu/data
/oak/nn/spatial_detections
/oak/rgb/camera_info
/oak/rgb/image_raw
/oak/rgb/image_raw/compressed
/oak/rgb/image_raw/compressedDepth
/oak/rgb/image_raw/theora
/oak/rgb/image_rect
/oak/rgb/image_rect/compressed
/oak/rgb/image_rect/compressedDepth
/oak/rgb/image_rect/theora
/oak/stereo/camera_info
/oak/stereo/image_raw
/oak/stereo/image_raw/compressed
/oak/stereo/image_raw/compressedDepth
/oak/stereo/image_raw/theora
Another interesting tutorial (directly on OAK-lite), is this Yolo on OAK tutorial
August 25, 2025
Interesting new 40K high-resolution depth-estimation dataset, fully panoramic with 3D LiDAR ground-truth. The dataset is available on github.
Running sudo apt remove python3.10-distutils python3.10-lib2to3 python3.10-minimal python3.10 libpython3.10-minimal libpython3.10-stdlib was a bit too much. The system crashed: no GUI, no internet.
Luckily a reboot gave a terminal (still no internet), but after sudo apt remove libpython3.10-minimal libpython3.10-stdlib and sudo apt install ubuntu-desktop I got my GUI back.
Now I could also do sudo apt install ros-humble-desktop.
Added build --cmake-args -Wno-dev to reduce the number of cmake deprecation warnings.
The package slam_mapping still gives output, but that is an explicit message() in the CMakeLists (without a log-level keyword).
Curious what the vizantie_demos are?
Looking at possible ros-humble official packages:
The dry-run sudo apt install --dry-run ros-humble-cartographer gives:
The following NEW packages will be installed:
libabsl-dev libcairo2-dev liblua5.2-0 liblua5.2-dev libpixman-1-dev
libreadline-dev libtool-bin ros-humble-cartographer
The version of ros-humble-cartographer is 2.0.9004-1jammy.20250701.011854.
In the 4th tutorial 2D Mapping Based on LiDAR, cartographer is called, but with ros2 launch ugv_slam cartographer.launch.py use_rviz:=true. This script calls cartographer/launch/mapping.launch.py, together with ugv_bringup/launch/bringup_lidar.launch.py and robot_pose_publisher/launch/robot_pose_publisher_launch.py.
The mapping.launch.py uses mapping_2d.lua as configuration_basename. It launches two nodes (cartographer_node and cartographer_occupancy_grid_node) from package cartographer_ros.
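Roughly, mapping.launch.py boils down to something like this (my own reconstruction, not the actual file; the configuration directory is a placeholder):
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    config_dir = '/path/to/ugv_ws/src/cartographer/config'  # placeholder
    return LaunchDescription([
        Node(package='cartographer_ros', executable='cartographer_node',
             arguments=['-configuration_directory', config_dir,
                        '-configuration_basename', 'mapping_2d.lua'],
             output='screen'),
        Node(package='cartographer_ros', executable='cartographer_occupancy_grid_node',
             arguments=['-resolution', '0.05'], output='screen'),
    ])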
This package seems not to be installed; sudo apt install --dry-run ros-humble-cartographer-ros installs 10 packages, including ros-humble-cartographer, ros-humble-cartographer-ros and ros-humble-cartographer-ros-msgs.
Indeed, when I do ros2 launch cartographer mapping.launch.py I get the error that package 'cartographer_ros' not found.
Installed ros-humble-cartographer-ros and included it in the manual. Now ros2 launch cartographer mapping.launch.py starts.
I see three nodes:
/cartographer_node
/cartographer_occupancy_grid_node
/transform_listener_impl_6430e08998d0
I see the following topics:
/constraint_list
/landmark_poses_list
/map
/odom
/scan
/scan_matched_points2
/submap_list
/tf
/tf_static
/trajectory_node_list
When I do ros2 node info /cartographer_node I get:
/cartographer_node
Subscribers:
/odom: nav_msgs/msg/Odometry
/parameter_events: rcl_interfaces/msg/ParameterEvent
/scan: sensor_msgs/msg/LaserScan
Publishers:
/constraint_list: visualization_msgs/msg/MarkerArray
/landmark_poses_list: visualization_msgs/msg/MarkerArray
/parameter_events: rcl_interfaces/msg/ParameterEvent
/rosout: rcl_interfaces/msg/Log
/scan_matched_points2: sensor_msgs/msg/PointCloud2
/submap_list: cartographer_ros_msgs/msg/SubmapList
/tf: tf2_msgs/msg/TFMessage
/trajectory_node_list: visualization_msgs/msg/MarkerArray
In addition, I also see the services of /cartographer_node:
Service Servers:
/cartographer_node/describe_parameters: rcl_interfaces/srv/DescribeParameters
/cartographer_node/get_parameter_types: rcl_interfaces/srv/GetParameterTypes
/cartographer_node/get_parameters: rcl_interfaces/srv/GetParameters
/cartographer_node/list_parameters: rcl_interfaces/srv/ListParameters
/cartographer_node/set_parameters: rcl_interfaces/srv/SetParameters
/cartographer_node/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically
/finish_trajectory: cartographer_ros_msgs/srv/FinishTrajectory
/get_trajectory_states: cartographer_ros_msgs/srv/GetTrajectoryStates
/read_metrics: cartographer_ros_msgs/srv/ReadMetrics
/start_trajectory: cartographer_ros_msgs/srv/StartTrajectory
/submap_query: cartographer_ros_msgs/srv/SubmapQuery
/tf2_frames: tf2_msgs/srv/FrameGraph
/trajectory_query: cartographer_ros_msgs/srv/TrajectoryQuery
/write_state: cartographer_ros_msgs/srv/WriteState
When I do ros2 node info /cartographer_occupancy_grid_node
/cartographer_occupancy_grid_node
Subscribers:
/parameter_events: rcl_interfaces/msg/ParameterEvent
/submap_list: cartographer_ros_msgs/msg/SubmapList
Publishers:
/map: nav_msgs/msg/OccupancyGrid
/parameter_events: rcl_interfaces/msg/ParameterEvent
/rosout: rcl_interfaces/msg/Log
With in addition:
Service Servers:
/cartographer_occupancy_grid_node/describe_parameters: rcl_interfaces/srv/DescribeParameters
/cartographer_occupancy_grid_node/get_parameter_types: rcl_interfaces/srv/GetParameterTypes
/cartographer_occupancy_grid_node/get_parameters: rcl_interfaces/srv/GetParameters
/cartographer_occupancy_grid_node/list_parameters: rcl_interfaces/srv/ListParameters
/cartographer_occupancy_grid_node/set_parameters: rcl_interfaces/srv/SetParameters
/cartographer_occupancy_grid_node/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically
Service Clients:
/submap_query: cartographer_ros_msgs/srv/SubmapQuery
Last I do ros2 node info /transform_listener_impl_6430e08998d0:
/transform_listener_impl_6430e08998d0
Subscribers:
/parameter_events: rcl_interfaces/msg/ParameterEvent
/tf: tf2_msgs/msg/TFMessage
/tf_static: tf2_msgs/msg/TFMessage
Publishers:
/rosout: rcl_interfaces/msg/Log
No Service Servers of Clients.
Looking at the Algorithm walkthrough, the /cartographer_node runs the frontend (local SLAM) that builds submaps, as well as the backend (global SLAM) that tries to find loop-closure constraints; the /cartographer_occupancy_grid_node only assembles the submaps into the occupancy grid published on /map.
Also looked at the original paper. It points to the Ceres-based scan matcher for the local SLAM.
Continuing with the Algorithm walkthrough: the /cartographer_node frontend (local SLAM) uses TRAJECTORY_BUILDER_nD.min_range and max_range, which should be chosen according to the specifications of your robot and sensors. Didn't see this parameter specified in the launch-files.
Checking ros2 param get /cartographer_node TRAJECTORY_BUILDER_2D.min_range gives 'Parameter not set'.
Checking /opt/ros/humble/share/cartographer/configuration_files/trajectory_builder_2d.lua, which indicates a min_range-max_range of 0-30m.
Note that Cartographer ROS provides an RViz plugin to visualize submaps. In 3D there are both low and high resolution submaps.
I see cartographer_ros/launch/visualize_pbstream.launch.py, which calls the node rviz2 with cartographer_ros/configuration_files/demo_2d.rviz.
Also the constraints of the backend (global SLAM) can be visualized in RVIZ. Also saw the trajectories, as published by the frontend.
The global constraints can be used for multi-robot mapping!
Continue with the step of the cartographer-ros documentation: Tuning the algorithm.
Nice sections are Pure Localization in a Given Map and Odometry in Global Optimization.
The next waveshare tutorial Auto Navigation uses rtabmap_localization. Yet, the ugv_nav/launch/nav.launch.py has the option to use cartographer localization. That calls cartographer/launch/localization.launch.py (without _ros; that launch file doesn't exist, only cartographer_ros/launch/localization.launch.py does). The map to localize on is /home/ws/ugv_ws/src/ugv_main/ugv_nav/maps/map.pbstream (wrong, should be corrected). The cartographer_localization.launch.py looks much more logical than bringup_launch_cartographer.launch.py, because it launches two ros-nodes (backend, frontend), instead of the bare cartographer-localization.
Skipped the web-interface and chat-bot tutorials.
Last waveshare Gazebo tutorial combines all previous Mapping and Navigation.
Seems that Vizanti is a web-visualizer. Indeed, the ugv_web_app/launch/bringup.launch.py calls vizanti_server/launch/vizanti_server.launch.py.
The repository of vizanti can be found on github. Last update 2 weeks ago.
The version of ugv_ws is an unofficial clone, not updated for 11 months.
In Zelfbediening I have the tab 'Bestellen tot betalen', but not the page 'Ontvangst'. When you search for 'Goederen ontvangen' you get the intended page. Yet, I cannot see the orders that have already arrived (did somebody else give an OK?). The order that still has to come (the UGV Rover) was visible.
Trying to update to Ubuntu 22.04 via the software updater. That fails between setting the channels and calculating the changes.
Did in a terminal sudo apt update. Two channels had GPG errors, so looked in /etc/apt/sources.list.d and commented out kitware and opensuse.
Still, it fails. Looking in /var/log/dist-upgrade/main.log: several errors with cuda-drivers.
There is still one error in /var/log/dist-upgrade/main.log, related to rti-connext-dds-5.3.1. Did sudo apt list --installed | grep dds, which showed ros-foxy-rmw-dds-common. Removing ros-foxy-rmw-dds-common also removes ros-foxy-rmw-fastrtps-cpp, ros-foxy-rmw-fastrtps-shared-cpp, ros-foxy-ros-base, ros-foxy-rosbag2 and ros-foxy-rosbag2-converter-default-plugins, but installs ros-foxy-connext-cmake-module, ros-foxy-rmw-connext-cpp, ros-foxy-rmw-connext-shared-cpp, ros-foxy-rosidl-generator-dds-idl, ros-foxy-rosidl-typesupport-connext-c, ros-foxy-rosidl-typesupport-connext-cpp and rti-connext-dds-5.3.1.
Removing rti-connext-dds-5.3.1 would bring ros-foxy-rmw-dds-common back again. Deleted all ros-foxy packages with sudo apt remove ros-foxy-*, followed by sudo apt remove rti-connext-dds-5.3.1.
Commented out all additional /etc/apt/sources.list.d/*. Still 8 repos active, including dk.archive xenial, cloud-r and librealsense.
Removed those from /etc/apt/sources.*. Now suddenly there are many updates available (more than 2000 jammy-updates)
Something still goes wrong with /var/lib/dkms/librealsense2-dkms/1.3.27/6.8.0-65-generic/x86_64/dkms: the .conf for module librealsense2-dkms includes a BUILD_EXCLUSIVE directive which does not match this kernel/arch.
The update seems to have succeeded.
Tried the steps from the ROS install. Yet, I receive the error: Error: could not find a distribution template for Ubuntu/jammy. Yet, from /etc/apt/sources.list it is clear that universe is activated (checked via the command line, because the app 'Software and Update' could be found but doesn't start).
Next steps work fine, until sudo apt update.
That fails on fingerprinting http://packages.ros.org/ros2/ubuntu/ jammy
Because 'Software and Update' is not found, it seems that the update is not complete (although lsb_release -a says so). Trying sudo apt-get dist-upgrade, which removes a lot of ros-noetic (and many other packages).
Fixed the broken install with sudo apt -o Dpkg::Options::="--force-overwrite" --fix-broken install.
Could do sudo apt-get dist-upgrade, so 'Software and Updates' works. All other problems could be fixed. Running sudo dpkg -i /tmp/ros2-apt-source.deb followed by sudo apt update gives three repositories:
Hit:5 http://packages.ros.org/ros2/ubuntu jammy InRelease
Get:6 http://packages.ros.org/ros2/ubuntu jammy/main Sources [1,736 kB]
Get:7 http://packages.ros.org/ros2/ubuntu jammy/main i386 Packages [44.4 kB]
Get:8 http://packages.ros.org/ros2/ubuntu jammy/main amd64 Packages [1,707 kB]
Yet, sudo apt install ros-humble-desktop gives:
The following packages have unmet dependencies:
libpython3.10 : Depends: libpython3.10-stdlib (= 3.10.12-1~22.04.10) but 3.10.18-1+focal1 is to be installed
libpython3.10-dev : Depends: libpython3.10-stdlib (= 3.10.12-1~22.04.10) but 3.10.18-1+focal1 is to be installed
python3.10-dev : Depends: python3.10 (= 3.10.12-1~22.04.10) but 3.10.18-1+focal1 is to be installed
E: Unable to correct problems, you have held broken packages
No idea where this comes from, nor where the focal versions come from.
The code comes with three ros-demo bags (LiDAR, RGBD, KITTI).
Table I gives as related work 13 other SLAM algorithms (mostly with Semantic Localization), from 2013-2024.
Tried ldd /usr/bin/ign | grep factory, but ign is a script. ldd /usr/bin/gazebo | grep factory gave nothing. The only library I see is /lib/x86_64-linux-gnu/libgazebo_common.so.11.
Removed the non-ros gazebo version with sudo apt remove ignition-fortress && sudo apt autoremove.
Next step is to launch gazebo, following this gazebo-ros tutorial. ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=empty.sdf works.
Launching just the server fails. The command ros2 launch ros_gz_sim gz_server.launch.py world_sdf_file:=empty.sdf gives:
file 'gz_server.launch.py' was not found in the share directory of package 'ros_gz_sim' which is at '/opt/ros/humble/share/ros_gz_sim'
Still ros2 launch ugv_gazebo bringup.launch.py gives an error (did you include ros_factory?). Removed ros-humble-gazebo-ros, which then fails on the missing package gazebo-ros. Installed it again. Now it fails on [gzserver-1] Error Code 12 Msg: Unable to find uri[model://world].
My GAZEBO_MODEL_PATH is set to /opt/ros/humble/share, while the worlds are in /opt/ros/humble/share/gazebo_ros/worlds/.
Setting export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/opt/ros/humble/share/gazebo_ros/worlds/. Trying also export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/.
According to this post, I also have to set my GAZEBO_PLUGIN_PATH and GAZEBO_RESOURCE_PATH.
Still fails with:
[spawn_entity.py-5] [INFO] [1753198394.461104938] [spawn_entity]: Calling service /spawn_entity
[gzclient-2] gzclient: /usr/include/boost/smart_ptr/shared_ptr.hpp:728: typename boost::detail::sp_member_access::type boost::shared_ptr::operator->() const [with T = gazebo::rendering::Camera; typename boost::detail::sp_member_access::type = gazebo::rendering::Camera*]: Assertion `px != 0' failed.
[ERROR] [gzclient-2]: process has died [pid 117754, exit code -6, cmd 'gzclient --gui-client-plugin=libgazebo_ros_eol_gui.so'].
[gzserver-1] gzserver: /usr/include/boost/smart_ptr/shared_ptr.hpp:728: typename boost::detail::sp_member_access::type boost::shared_ptr::operator->() const [with T = gazebo::rendering::Scene; typename boost::detail::sp_member_access::type = gazebo::rendering::Scene*]: Assertion `px != 0' failed.
[ERROR] [gzserver-1]: process has died [pid 117752, exit code -6, cmd 'gzserver /home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/worlds/ugv_world.world -slibgazebo_ros_init.so -slibgazebo_ros_factory.so -slibgazebo_ros_force_system.so'].
Note that in ls /home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/models/ there is also a world directory with a model.sdf.
Tried ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=/home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/models/world/model.sdf which gave:
[ign gazebo-1] [Err] [Server.cc:139] Error Code 3: Msg: The supplied model name [world] is reserved.
Changed the model-name in model.sdf from world to ugv_world. That gave:
[ign gazebo-1] [Err] [Server.cc:145] SDF file doesn't contain a world. If you wish to spawn a model, use the ResourceSpawner GUI plugin or the 'world//create' service.
Instead tried ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=/home/arnoud/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/worlds/ugv_world.world, which gave:
Msg: Unable to find uri[model://ground_plane]
Msg: Unable to find uri[model://sun]
Msg: Unable to find uri[model://world]
...
Error Code 27: Msg: PoseRelativeToGraph error, too many incoming edges at a vertex with name [world].
[ign gazebo-1] [Err] [Server.cc:139] Error Code 9: Msg: Failed to load a world.
Instead started ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=empty.sdf. Tried to "Load client configuration". This gives the message 'Insert plugins to start'.
July 21, 2025
Looking into using Rerun for the course. Started with the Python SDK.
Installing gave some warnings: it uninstalled numpy 2.2.6 and installed numpy 1.26.4, and warned about sys-platform "darwin".
Yet, rerun directly launches the welcome-screen of the viewer. Selected the RGBD example.
Following the instructions of my manual. The first thing that installs 10 new packages is sudo apt-get install ros-humble-usb-cam ros-humble-depthai-*.
Second command that installs 4 packages is sudo apt-get install ros-humble-robot-localization. Also sudo apt-get install ros-humble-imu-tools installs three packages.
Compilation goes well. Checked. Could install sudo apt install ros-humble-apriltag ros-humble-apriltag-msgs ros-humble-apriltag-ros (v3.4.3, v2.0.1, v3.2.2 resp). The one in ugv_ws is resp (v3.4.2, v0.0.0, v2.1.0)
Launch now fails on:
[ERROR] [spawn_entity.py-5]: process has died [pid 96307, exit code 1, cmd '/opt/ros/humble/lib/gazebo_ros/spawn_entity.py -entity ugv_rover -file ~/ugv_ws/install/ugv_gazebo/share/ugv_gazebo/models/ugv_rover/model.sdf --ros-args'].
Followed by [gzserver-1] Error Code 12 Msg: Unable to find uri[model://world]
Correct installation according to Gazebo documentation is sudo apt install ros-humble-ros-gz
Now I could do ign gazebo shapes.sdf . Loading instead the ugv_gazebo/models/ugv_rover/model.sdf gave:
[Err] [Server.cc:145] SDF file doesn't contain a world. If you wish to spawn a model, use the ResourceSpawner GUI plugin or the 'world//create' service.
Instead tried ros2 launch ugv_gazebo bringup.launch.py. Now it fails with:
[spawn_entity.py-5] [ERROR] [1753111538.794482728] [spawn_entity]: Service /spawn_entity unavailable. Was Gazebo started with GazeboRosFactory?
Could look at ROS answer. Tried ign gazebo -s libgazebo_ros_init.so -s libgazebo_ros_factory.so shapes.sdf. No response.
July 18, 2025
No idea on which of the two networks the UGV was, so checked with nmap -sP 146.*.*.0/24. Only two machines on this network. Switched to the other network. Still on the old network.
Looking at the other nodes from bringup_imu_ekf.launch.py. Yesterday I already looked at base_node.
The ekf_node points to the package robot_localization. That package was actually not installed on the UGV rover; added it to the manual. It reads /odom_raw and publishes a filtered version. The documentation is quite rudimentary, but ugv_bringup uses its own param/ekf.yaml, which indicates that it only uses the pose (x,y,yaw), and the x_vel and yaw_vel.
Also the package imu_complementary_filter seems not to be installed. See the paper (2015). No parameter-file, but 5 parameters are set in the launch-file. Those are the same as the settings in imu_complementary_filter/config/filter_config.yaml. Installed sudo apt install ros-humble-imu-tools on the UGV Rover.
Last important one is the ldlidar package, which has its own ldlidar.launch.py. This points to ld19.launch.py, which has as 2nd node a static_transform_publisher between base_footprint and base_lidar_link, but with a null-vector (and the 2nd node is not launched). Check the robot-description to see if it contains a better transform between laser and base.
Indeed, in the ugv_rover.urdf the vectors from base_lidar_link_joint to base_lidar_link (xyz="0.04 0 0.04") and to base_link (xyz="0.0 0.0 0.0165") are defined.
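If the null-vector really is the problem, a static_transform_publisher with the URDF offsets could be launched instead. A minimal sketch of my own (assuming the two z-offsets simply add up to 0.0565 m, which I did not verify against the URDF tree):

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(package='tf2_ros', executable='static_transform_publisher',
             # x y z yaw pitch roll parent child; offsets are my reading of ugv_rover.urdf
             arguments=['0.04', '0', '0.0565', '0', '0', '0',
                        'base_footprint', 'base_lidar_link']),
    ])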
Looked at the line echo "eval "$(register-python-argcomplete ros2)"" >> ~/.bashrc. According to the argcomplete documentation, this is needed when no global completion is activated. argcomplete was already installed on the UGV Rover (via pip install argcomplete), but the command activate-global-python-argcomplete could not be found. Manually added python-argcomplete.sh to the PATH (~/.local/bin/).
Anyway, started ros2 launch ugv_bringup bringup_imu_ekf.launch.py. Checking with ros2 node list, 7 nodes are running:
/LD19
/base_node
/complementary_filter_gain_node
/ekf_filter_node
/transform_listener_impl_aaaabb524e10
/ugv/robot_state_publisher
/ugv_bringup
Also checked the published topics with ros2 topic list:
/cmd_vel
/diagnostics
/imu/data
/imu/data_raw
/imu/mag
/joy
/odom
/odom/odom_raw
/odom_raw
/parameter_events
/rosout
/scan
/set_pose
/tf
/tf_static
/ugv/joint_states
/ugv/led_ctrl
/ugv/robot_description
/voltage
Note that two odom_raw topics are published. Note also the /scan and /ugv/robot_description.
A successful launch ends with these two lines:
[ldlidar_node-5] [INFO] [1752846995.480029725] [LD19]: ldlidar communication is normal.
[ldlidar_node-5] [INFO] [1752846995.483222001] [LD19]: Publish topic message:ldlidar scan data.
Started on nb-dual (native Ubuntu 20.04) the RoboStack environment with mamba activate ros_humble_env. Could start ros2 run rviz2 rviz2, which showed the laser-scan. Only fails on /ugv/robot_description, because package ugv_description couldn't be found.
Did the first step of section 5.2 of the manual: clone the ugv_ws in my home.
Did the first of the installs with mamba install ros-humble-nav2-msgs ros-humble-map-msgs ros-humble-nav2-costmap-2d. That installs 8 packages:
+ ros-humble-bond 3.0.2
+ ros-humble-bondcpp 3.0.2
+ ros-humble-nav2-common 1.1.5
+ ros-humble-nav2-costmap-2d 1.1.5
+ ros-humble-nav2-msgs 1.1.5
+ ros-humble-nav2-util 1.1.5
+ ros-humble-nav2-voxel-grid 1.1.5
+ ros-humble-smclib 3.0.2
mamba install ros-humble-image-geometry was not needed, already installed.
mamba install ros-humble-depthai-* couldn't be found, so only tried mamba install ros-humble-usb-cam. This fails on dependency:
The following package could not be installed
└─ ros2-distro-mutex 0.6* humble_* is requested and can be installed.
Tried the --only-deps option, to no avail. Continued to the next package.
Next is mamba install ros-humble-robot-localization (already installed).
Last is mamba install ros-humble-imu-tools (doesn't exist).
Starting to build. emcl2 failed, which can be skipped. Yet, ldlidar also fails, which is more of a problem, although it is probably less needed on the laptop side.
At least the ugv_description now works under RoboStack:
July 17, 2025
Found a solution for nvpmodel crash (nvpmode-SegFixed at the end).
New problem, firefox will no longer start. Do a snap install firefox? No, firefox is already installed.
Seems to be a SELinux protection added by Waveshare (see this discussion).
Instead of installing the SELinux protection, I tried to do snap remove firefox. firefox was actually connected with a lot of other snap-packages, such as camera and joystick. Trying snap install firefox again: still the same error.
Installing via apt install firefox fails, because it is already installed via snap.
Checking the SELinux install. systemctl status apparmor indicated inactive (dead) - start condition failed June 4, 2025.
After installing SELinux it starts relabeling all filesystems. Note that there is a user sddm with no password. Relabeling costs a few minutes before the boot continues.
Got a snapd-error, so followed this advice and disabled SELinux. Now firefox can be installed, but a new error: cannot set capabilities: Operation not permitted.
Installed brave-browser instead. Gives a warning about libva, but works. Installing va-driver-all didn't help.
Installed the nvpmode-SegFixed (via remote terminal). That works.
Looking at the launch files in ugv_bringup. The bringup_imu_ekf.launch.py launches 10 nodes, bringup_imu_origin.launch.py and bringup_lidar.launch.py launches 9 nodes.
The most basic seems to be the driver_node, which is an executable in the ugv_bringup package.
Running ros2 run ugv_bringup ugv_driver & gives the following topics:
/cmd_vel
/joy
/ugv/joint_states
/ugv/led_ctrl
/voltage
First looked at the ugv_driver.py code. The pan/tilt are controlled with joy-messages: drive-angle is axis 0, drive-speed is axis 1, pan is axis 3, tilt is axis 4 (inverted); axis 2 is not used. The pan-tilt can also be controlled with the joint_state (pt_base_link_to_pt_link1 - pan) and (pt_link1_to_pt_link2 - tilt). The joint_state is given in radians.
The driver subscribes to the voltage (and plays a sound when low), so in principle ugv_bringup.py should be the first to be launched. It also publishes imu_data_raw, imu_mag and odom_raw.
Was able to read out the voltage and drive the rover forward with /cmd_vel.
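For completeness, a minimal rclpy sketch of my own (not from the waveshare code) of driving the rover forward over /cmd_vel; the 0.1 m/s value is just an illustration:

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class ForwardDriver(Node):
    def __init__(self):
        super().__init__('forward_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)   # publish at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.1      # slow forward speed (illustrative value)
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ForwardDriver())

if __name__ == '__main__':
    main()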
Which node launches the pan-tilt camera and the depth camera? I know from June 26 that the depth-camera can be started with ros2 launch depthai_ros_driver camera.launch.py pointcloud.enable:=True.
The base_node is the node that reads in the imu_raw message and publishes odometry messages, and when configured also the odom_frame transform.
The ugv_vision package calls the usb_cam package with its own config/params.yaml. It also launches a RectifyNode from the image_proc package. The two nodes are combined into a single container.
The command ros2 launch ugv_vision camera.launch.py seems to work:
[component_container-2] [INFO] [1752759640.742645655] [image_proc_container]: Load Library: /opt/ros/humble/lib/librectify.so
[usb_cam_node_exe-1] [INFO] [1752759640.885099049] [usb_cam]: camera_name value: pt_camera
[usb_cam_node_exe-1] [WARN] [1752759640.885346962] [usb_cam]: framerate: 30.000000
[usb_cam_node_exe-1] [INFO] [1752759640.892700837] [usb_cam]: camera calibration URL: package://ugv_vision/config/camera_info.yaml
[component_container-2] [INFO] [1752759640.907117633] [image_proc_container]: Found class: rclcpp_components::NodeFactoryTemplate
[component_container-2] [INFO] [1752759640.907235493] [image_proc_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rectify_color_node' in container '/image_proc_container'
[usb_cam_node_exe-1] [INFO] [1752759640.971137276] [usb_cam]: Starting 'pt_camera' (/dev/video0) at 640x480 via mmap (mjpeg2rgb) at 30 FPS
[usb_cam_node_exe-1] [swscaler @ 0xaaab14a93020] No accelerated colorspace conversion found from yuv422p to rgb24.
[usb_cam_node_exe-1] [INFO] [1752759641.007471467] [usb_cam]: Setting 'white_balance_temperature_auto' to 1
[usb_cam_node_exe-1] [INFO] [1752759641.007589456] [usb_cam]: Setting 'exposure_auto' to 3
[usb_cam_node_exe-1] [INFO] [1752759641.012967833] [usb_cam]: Setting 'focus_auto' to 0
[usb_cam_node_exe-1] [INFO] [1752759641.520032053] [usb_cam]: Timer triggering every 33 ms
Ran both ros2 run image_view image_view --ros-args --remap image:=/image_raw and image:=/image_rect. The rectification looks good. Adding image_transport:=theora and image_transport:=compressed didn't work.
The distortion coefficients are [-0.208848, 0.028006, -0.000705, -0.000820, 0.000000]. The camera matrix is not exactly in the center of the 640x480 image, (347.23, 235.67). The diagonal elements are (289.11, 289.75). The rectification matrix is the identity matrix, but the projection matrix has diagonal elements (196.86, 234.53) and off-diagonal (342.88, 231.54).
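A small sketch of roughly what the RectifyNode does with these numbers, using OpenCV directly; the intrinsics and distortion values are copied from the camera_info above, frame.png is a hypothetical saved 640x480 frame:

import cv2
import numpy as np

# fx, fy on the diagonal, (cx, cy) the off-center principal point
K = np.array([[289.11, 0.0, 347.23],
              [0.0, 289.75, 235.67],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.208848, 0.028006, -0.000705, -0.000820, 0.0])

img = cv2.imread('frame.png')                  # example input frame
undistorted = cv2.undistort(img, K, dist)      # plain undistortion, identity rectification
cv2.imwrite('frame_rect.png', undistorted)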
According to the ROS1 documentation the theora codec only works with 8-bit color or grayscale images. Not sure if the comment using transport "raw" is a default INFO from image_view. Remember that there was a bug in image_transport.
Looked at the UGV rover in ~/git/ugv_jetson. The start_jupyter.sh is there, the notebooks are in the directory tutorial_en.
Started with Tutorial #3 - pan-tilt control.
Tutorial #14 fails on cv2.error: OpenCV(4.9.0) /io/opencv/modules/imgcodecs/src/loadsave.cpp:1121: error: (-215:Assertion failed) !image.empty() in function 'imencode'.
Same for Tutorial #20, the one I liked to reproduce. Made color_tracking_node.py in ugv_custom_nodes, a copy of line_following_node.py. It should only steer the pan-tilt instead of the wheels.
July 16, 2025
Starting to use the UGV Rover at home. The small OLED screen doesn't show the Ethernet address anymore (as described in the Waveshare documentation).
Luckily the hotspot is still there, advertised as UGV. Yet, it doesn't work with the password in the documentation.
Connecting via the Jetson's ethernet connector is blocked by a Rover connector. Tried to connect via two USB-C to ethernet connectors, but didn't see the ethernet light up. Connecting with the suggested default IP address didn't work.
Instead will connect the Display-port, and make the connection directly to my home access-point.
Connecting to my home access-point with the suggested sudo nmcli d wifi list and connect worked.
Could use the wifi-connection to ssh into the jetson with the IP checked with ifconfig
Started with the suggested software update (including several ros-humble packages - which fails).
Used the suggested temporary fix from this discussion (three weeks ago). Now 679 packages are updating.
Looking for instructions to log in to the UGV with USB, I found the battery and board instructions on YouTube.
This video shows the UGV lower body. The thing blocking the internet port is indeed a wifi antenna. You can control the robot directly, without the Jetson, via JSON commands.
Could login to UGV lower body, use the web-interface and even control the wheels.
In the FAQ also an indication is made which 18650 batteries to use.
In principle you can control the lower body also from the Jetson, you only have to know the serial interface (/dev/*?) of the Jetson PIN-40 interface.
Yet, neither /dev/ttyACM0 nor /dev/ttyTHS1 changed the OLED (no warnings, so the JSON commands were successfully sent to these devices).
Starting with colcon build --packages-select costmap_converter. Fails on an abstract class. The compiler is g++ version 11.4.0; in CMakeLists.txt CXX_STANDARD 14 is specified. Those are the supported values. Setting it to 26 was too much; started getting memory problems.
Tried CXX_STANDARD 23 and 20, but still a crash (although no error). Removed the option -Wpedantic and tried CXX_STANDARD 11, removed -Wextra and tried CXX_STANDARD 98, removed -Wall and tried CXX_STANDARD 17.
Removed thunderbird, but I have the feeling that the problem is memory, not storage-space.
Looked at colcon build and used a single job with MAKEFLAGS=-j1 colcon build --executor sequential. Now I get the error message again, and no complaints about memory.
Commented out the line in map_to_dynamic_obstacles/blob_detector.cpp
Strange, this should be version 0.1.2, as I can see the blob_detector in the humble version at github.
This patch solves the issue. Still some warnings, but the package is built.
Could do source ~/ugv_ws/install/setup.bash without warnings.
June 27, 2025
Checked the Jetpack version with sudo apt-cache show nvidia-jetpack, which showed version 6.0+b106.
June 26, 2025
OpenCV blog-post on edge-detection on an AI-generated image.
The Knowledge Representation week was very logic-based; only watched the first unit.
Also looked at the other MOOC course, Robots in Action. Looked at the first video of RoboEthics.
Tried again to get the point-cloud from the RAE.
Started docker again, went to /underlay_ws/install/depthai_examples/share/depthai_examples/launch/. Activated point_cloud_xyzi and metric_converter in rgb_stereo_node.launch.py script.
Launched ros2 launch depthai_examples rgb_stereo_node.launch.py camera_model:=RAE. Could see the different topics.
Started on WS9 rviz2, and added displays on different topics. Only received an image from /color/video/image, received nothing from /stereo/depth.
Opened another terminal on the RAE. Outside the docker I see two processes active: depthai-device (31.8%) and rgb_stereo_node (15.8%). Not clear how I can see them outside the docker.
Inside the docker I only see rgb_stereo_node (15.8%).
With ros2 node list I see:
/container
/covert_metric_node
/launch_ros_76
/point_cloud_xyzi
The only topic which gives me updates is /color/video/camera_info.
Back to the ugv_rover. The oak_d_lite.launch.py starts the camera.launch.py from depthai_ros_driver. Inside the camera the pointcloud.enable is default False.
The example_multicam.launch.py launches both camera and rgbd_pcl.launch.py.
pointcloud.launch.py uses the PointCloudXyziNode plugin, and should publish /points.
rgbd_pcl.launch.py starts camera.launch.py with pointcloud.enable True.
rtabmap.launch.py also starts rtabmap_odom.
Running ros2 launch depthai_ros_driver camera.launch.py pointcloud.enable:=True works, I could see the pointcloud2 (once I set the QoS to best_effort and the frame to oak_camera_frame):
The capture-package needed matplotlib, which conflicted with ros. Created a virtual environment, did the requirements install there and ran ../capture_env/bin/python3.11 capture/oak_capture.py default /tmp/my_capture, which gives an X_LINK_ERROR (and a warning on no IR drivers and no RGB camera calibration). So, as long as I have not installed the RVC3_support in the venv, this doesn't work.
Back to the ugv_rover. Tried ros2 launch ugv_slam gmapping.launch.py use_rviz:=false, but package slam_gmapping unknown.
In ugv_ws/src/ugv_else/gmapping two directories can be found: openslam_gmapping and slam_gmapping. According to the README.md this can be started with ros2 launch slam_gmapping slam_gmapping.launch.py, but it seems that this directory is not included in the build.
Had to be done in order. Did colcon build --packages-select openslam_gmapping followed by colcon build --packages-select slam_gmapping.
To get rid of the source install/setup.sh I also did colcon build --packages-select explore_lite, which worked. Unfortunately colcon build --packages-select costmap_converter failed on line 56 of blob_detector.cpp, which seems to be a C++-version problem (the comment mentions compatibility).
Running ros2 launch ugv_slam gmapping.launch.py use_rviz2:=false now works, in the sense that the LaserScan are visible. Could also select the Map inside RVIZ, but with the warning no Map received. Changing the Reliability QoS to Best effort solves that:
Started ros2 launch ugv_bringup bringup_lidar.launch.py (does this mean a double lidar_node?) to be able to do ros2 run ugv_tools keyboard_ctrl. The turning is a bit slow; it could be increased by e. Going forward / backwards also helps.
gmapping was able to make a map, although not perfect (two over each other). Can you reset the map without killing gmapping?
Second try was much better, could not only map the small maze, but also going back to the door, followed by going left halfway the soccer field:
June 25, 2025
Skipping two weeks of topics of the Robotics in a Nutshell MOOC (Force Control and Knowledge Representation).
The 5th week is on Graph-SLAM. The description of Wolfram Burgard was a bit too high-level, the description of Giorgio Grisetti a bit too low-level. Minimizing all constraints is simple, but the least-squares method is not explained in the best way, with a lot of math but without introducing all functions and operations. Maybe the chapter is better.
The description of the manifold in Unit 2.4 is nice: "A manifold is a mathematical space that is not necessarily Euclidean on a global scale but can be seen as Euclidean on a local scale."
I like the idea of replacing L1 or L2 with more robust kernels, to prevent sensitivity to outliers. Will this also work in scikit-learn:
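Not the graph-SLAM kernels themselves, but the same idea is easy to try in scikit-learn by swapping the squared loss for a robust Huber loss. A minimal sketch of my own on synthetic data:

import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(0, 0.3, 100)
y[:5] += 30.0                                   # a handful of gross outliers

ols = LinearRegression().fit(X, y)
huber = HuberRegressor().fit(X, y)
print('OLS slope  :', ols.coef_[0])             # pulled away by the outliers
print('Huber slope:', huber.coef_[0])           # stays close to the true 2.0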
Started the RAE #1. Before I could ssh into the system, the RAE had already started driving on one wheel. Once I started ros2 launch rae_bringup robot.launch.py enable_slam_toolbox:=false enable_nav:=false use_slam:=false in the docker the driving stopped.
Connected to another shell into the docker. Looked with ros2 topic list. Several front and back images are published, yet no point-cloud.
Inside the docker python3.10.12 is running.
Inside python3 import depthai as dai doesn't work, so tried python3 -m pip install depthai. No module pip.
Did sudo apt install python3-pip. Now I could install depthai-2.30.0.0.
Also did sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg, because I had the ros gpg error again. At least 464 packages behind.
Cloned depthai-python in /tmp/git and run python3 calibration_reader.py. Gave the following error:
RuntimeError: No available RVC2 devices found, but found 3 non RVC2 device[s]. To use RVC4 devices, please update DepthAI to version v3.x or newer.
On October 31, 2024 I did something, directly playing with the depthai_ros_driver with DEPTHAI_DEBUG=1.
Looked in /ws/src/rae-ros/rae_camera/src/camera.cpp. Code also depends on depthai (and depthai_bridge), but that seems to be cpp-library, not the python-module.
For instance, /ws/src/rae-ros/rae_camera/stereo_node is an executable which loads /underlay_ws/install/depthai_bridge/lib/libdepthai_bridge.so, /usr/local/lib/libdepthai-core.so and /usr/local/lib/cmake/depthai/dependencies/lib/libusb-1.0.so.
Tried ros2 launch rae_camera perception_ipc.launch.py, which conflicts with the already running nodes.
Killed the other node, now I get several nodes running:
/RectifyNode
/battery_node
/complementary_filter_gain_node
/controller_manager
/diff_controller
/ekf_filter_node
/joint_state_broadcaster
/laserscan_kinect_back
/laserscan_kinect_front
/laserscan_multi_merger
/launch_ros_1345
/lcd_node
/led_node
/mic_node
/rae
/rae_container
/robot_state_publisher
/rtabmap
/speakers_node
/transform_listener_impl_55ba0cad70
/transform_listener_impl_55ba4dfdb0
/transform_listener_impl_55bbafe9d0
Also now get the topic /scan
Looked at my WSL-terminal with ROS-humble. The command ros2 node list gave a subset:
/complementary_filter_gain_node
/controller_manager
/diff_controller
/ekf_filter_node
/joint_state_broadcaster
/mic_node
/robot_state_publisher
/speakers_node
/transform_listener_impl_55bbafe9d0
Also the ros2 topic list is a subset:
/battery_status
/lcd
/leds
During the start of the perception_ipc script, I get the following startup information:
[perception_ipc_rtabmap-1] [INFO] [1750858282.709590462] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[perception_ipc_rtabmap-1] [INFO] [1750858282.709810547] [rae]: PoE camera detected. Consider enabling low bandwidth for specific image topics (see readme).
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858282.710] [system] [info] Reading from Factory EEPROM contents
[perception_ipc_rtabmap-1] [INFO] [1750858282.741301485] [rae]: Device type: RAE
[perception_ipc_rtabmap-1] [INFO] [1750858283.008338842] [rae]: Pipeline type: rae
[perception_ipc_rtabmap-1] [WARN] [1750858283.062986091] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [WARN] [1750858283.082811496] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [WARN] [1750858283.134807345] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [WARN] [1750858283.154057327] [rae]: Resolution 800 not supported by sensor OV9782. Using default resolution 720P
[perception_ipc_rtabmap-1] [INFO] [1750858283.945303168] [rae]: Finished setting up pipeline.
Followed (after initialization of the IMU) with:
[perception_ipc_rtabmap-1] [INFO] [1750858284.488374295] [rae]: Camera ready!
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] Baseline: 0.074863344
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] Fov: 96.69345
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] Focal: 284.64175
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [info] FixedNumerator: 21309.232
[perception_ipc_rtabmap-1] [58927016838860C5] [127.0.0.1] [1750858284.597] [StereoDepth(11)] [debug] Using 0 camera model
[perception_ipc_rtabmap-1] [INFO] [1750858284.806388901] [laserscan_kinect_front]: Node laserscan_kinect initialized.
Note that /ws/src/rae-ros/rae_camera/config/cal.json can be found, for camera-model RAE (version 7).
Not clear where all those nodes are started, the launch file starts one executable (perception_ipc_rtabmap) and the control.launch.py.
The file /ws/build/rae-camera/perception_ipc_rtabmapi is a real executable, loading for instance:
/opt/ros/humble/lib/librtabmap_slam_plugins.so
/opt/ros/humble/lib/librtabmap_util_plugins.so
Yet, looking into the code, it looks like several nodes are started:
executor.add_node(camera->get_node_base_interface());
executor.add_node(laserscanFront->get_node_base_interface());
executor.add_node(laserscanBack->get_node_base_interface());
executor.add_node(merger->get_node_base_interface());
executor.add_node(rectify->get_node_base_interface());
executor.add_node(rtabmap->get_node_base_interface());
Inside that branch I did python3 examples/install_requirements.py, which installed v2.19.1.0 from depthai. That gave:
RuntimeError: No available devices (3 connected, but in use)
Did ps -all and killed the ros2 process.
Now python3 examples/calibration/calibration_reader.py in the rvc3_support branch gives the RAE calibration info (stored in examples/calibration/calib_xlinkserver.json).
Also tried python3 examples/devices/list_devices.py, which gives the X_LINK info:
[DeviceInfo(name=127.0.0.1, mxid=58927016838860C5, X_LINK_GATE, X_LINK_TCP_IP, X_LINK_RVC3, X_LINK_SUCCESS),
DeviceInfo(name=192.168.197.55, mxid=58927016838860C5, X_LINK_GATE, X_LINK_TCP_IP, X_LINK_RVC3, X_LINK_SUCCESS)]
So, started ros2 launch depthai_examples rgb_stereo_node.launch.py camera_model:=RAE.
I see the following topics:
/color/video/camera_info
/color/video/image
/color/video/image/compressed
/color/video/image/compressedDepth
/color/video/image/theora
/joint_states
/parameter_events
/robot_description
/rosout
/stereo/camera_info
/stereo/depth
/stereo/depth/compressed
/stereo/depth/compressedDepth
/stereo/depth/theora
/tf
/tf_static
Checking on WS9. I also see the topics there. The /color/video/image I can display. For video, rviz2 complains that it cannot display stereo; that is a warning of rviz2 itself, not about stereo-msgs.
In rviz2 I also saw a PointCloud, but it seems that the streaming just stopped (also no updates in the color-video). Restarting at the RAE didn't help.
Looking in /underlay_ws/install/depthai_examples/share/depthai_examples/launch/.
The rgb_stereo_node.launch.py should indeed also publish a point-cloud as topic /stereo/points.
Note that rgb_stereo_node also tries to launch rviz. Yet, this is commented out, together with the point_cloud_node.
I could try stereo_inertial_node.launch.py with depth_aligned=True, rectify=True, enableRviz=False.
Yet, that fails:
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866665.675] [host] [debug] Device about to be closed...
[stereo_inertial_node-2] [2025-06-25 15:51:05.681] [depthai] [debug] DataOutputQueue (depth) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.681] [depthai] [debug] DataOutputQueue (preview) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] DataOutputQueue (rgb) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] DataOutputQueue (imu) closed
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866665.682] [host] [debug] Log thread exception caught: Couldn't read data from stream: '__log' (X_LINK_ERROR)
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866665.682] [host] [debug] Timesync thread exception caught: Couldn't read data from stream: '__timesync' (X_LINK_ERROR)
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] DataOutputQueue (detections) closed
[stereo_inertial_node-2] [2025-06-25 15:51:05.682] [depthai] [debug] XLinkResetRemote of linkId: (0)
[stereo_inertial_node-2] [58927016838860C5] [127.0.0.1] [1750866671.207] [host] [debug] Device closed, 5531
[stereo_inertial_node-2] [2025-06-25 15:51:11.210] [depthai] [debug] DataInputQueue (control) closed
[stereo_inertial_node-2] terminate called after throwing an instance of 'nanorpc::core::exception::logic'
[stereo_inertial_node-2] what(): Cannot use both isp & video/preview/still outputs at once at the moment (startPipeline)
Instead commented out the point_cloud_xyzi and metric_converter
On nb-dual WSL the /stereo/points is visible, but I see no echo.
Commented both out again. Try again tomorrow, after a reboot.
June 24, 2025
The 2nd week of the Robotics in a Nutshell MOOC is on Image Formation and Calibration.
I like the explanation of the Pinhole model, including the conservation of the fraction of object height and distance, h'/h = l'/l:
This is directly associated with the Thin Lens model, where the same fraction corresponds with the focal distances, h'/h = l'/l = x'/f = f'/x:
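A quick check (my own derivation, not from the unit) that these fractions are consistent with the Gaussian lens formula, writing x = l - f and x' = l' - f:

\frac{h'}{h} = \frac{x'}{f} = \frac{f}{x}
\;\Rightarrow\; x\,x' = f^2
\;\Rightarrow\; (l-f)(l'-f) = f^2
\;\Rightarrow\; l\,l' = f\,(l+l')
\;\Rightarrow\; \frac{1}{f} = \frac{1}{l} + \frac{1}{l'}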
The 3rd unit of the 2nd week is a topic not often covered: laser scanning with projection patterns. Scanning with two colored laser-planes, or providing a corner background, is something I have not seen explained before. Nice that the unit starts with a single laser point.
In the 4th unit an impressive demonstration of the ICP algorithm in 3D is shown.
Connected the OAK-D with the Thunderbolt cable to nb-dual, with native Ubuntu 20.04. No ROS2, so looked at ROS1 Noetic first.
See no /opt/ros/noetic/share/depthai_ros_driver, so should install that one first.
First had to reinstall the signature keys by using the third suggested option: sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg. 16 ros-packages are installed.
If I look at this Medium tutorial, I should also install ros-noetic-depthai-examples (was already part of the pack).
Yet, the command roslaunch depthai_examples rgb_stereo_node.launch camera_model:=OAK-1 fails on missing Intrinsic matrix available for the the requested cameraID.
Started with python3 depthai_demo.py, which gives one frame and crashes on:
depthai_sdk/managers/preview_manager.py", line 148, in prepareFrames
packet = queue.tryGet()
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'color' (X_LINK_ERROR)'
CONTROL-C restarted the connection, which gave:
[DEVICEID] [3.8] [1.501] [StereoDepth(7)] [error] RGB camera calibration missing, aligning to RGB won't work
Seems that I did the same with the RAE on December 12, 2024 and June 6, 2024.
The calibration files are written to the EEPROM. Could first look at what is stored in the EEPROM with the Calibration Reader.
Moved to ~/git/depthai-python/examples/calibration. Running python3 calibration_reader.py gave the same error:
RuntimeError: There is no Intrinsic matrix available for the the requested cameraID
Looked at the calibration_flash_v5.py script. It tries to write ../models/depthai_v5.calib to the EEPROM. That file actually exists.
Ran python3 examples/python/install_requirements.py, which installed PyYAML-6.0.2-cp38, opencv_python-4.11.0.86-cp37 and depthai-3.0.0rc2-cp38.
Moved to ~/git/depthai-core/examples/python/Camera. Running python3 camera_all.py gave:
RuntimeError: Device already closed or disconnected: Input/output error
[2025-06-24 17:01:55.193] [depthai] [error] Device with id XXX has crashed.
Switching back to WS9, look if it can work with the OAK-D or find calibration info at the UGV Rover OAK-camera.
Running python3 calibration_reader.py for the WS9 and OAK-D gave the same error
Moved to the ugv_rover. Cloned depthai-python, but python3 calibration_reader.py couldn't find module depthai. Running git submodule update --init --recursive didn't help. Doing python3 -m pip install . takes a while. Couldn't build the wheel at the end.
Just did python3 -m pip install depthai.
Now python3 examples/calibration/calibration_reader.py works, and gives the RGB Camera Default intrinsics, RGB Camera resized intrinsics... 3840 x 2160, 4056 x 3040, LEFT Camera Default intrinsics..., LEFT/RIGHT Distortion Coefficients..., RGB FOV, Mono FOV, LEFT/RIGHT Camera stereo rectification matrix..., Transformation matrix of where left Camera is W.R.T right Camera's optical center, Transformation matrix of where left Camera is W.R.T RGB Camera's optical center.
Seems that the information is stored in calib_device_id.json file. The productName is "OAK-D-LITE".
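The relevant part of calibration_reader.py boils down to a few depthai 2.x calls; a minimal sketch as I understand the API (socket names and the 4056x3040 resolution are assumptions based on the output above):

import depthai as dai

with dai.Device() as device:
    calib = device.readCalibration()
    # RGB intrinsics at the full sensor resolution reported above (assumed 4056x3040)
    K_rgb = calib.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 4056, 3040)
    dist_left = calib.getDistortionCoefficients(dai.CameraBoardSocket.LEFT)
    print('RGB intrinsics:', K_rgb)
    print('LEFT distortion:', dist_left)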
Tried oakctl. Could install it both on WS9 and the ugv_rover, but no (OAK4) devices were found.
Tried to install oak-viewer on WS9. Installation goes well, but launching oak-viewer gives:
Viewer stderr: [PYI-3537627:ERROR] Failed to load Python shared library '/usr/lib/oak-viewer/resources/backend/viewer_backend/_internal/libpython3.12.so.1.0': /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.38' not found (required by /usr/lib/oak-viewer/resources/backend/viewer_backend/_internal/libpython3.12.so.1.0)
Not the only one with this problem. There are many (old) answers on stackoverflow. The patchelf answer #14 looks promising.
Close to my problem is the Proof of Concept at the end.
Looking at how to split the bringup_lidar.launch.py nicely into a rover and a workstation part. For the rover there is the use_rviz=False option, so I could make a launch-script that launches rviz (and the description?) locally on the workstation.
Strangely enough, there is also a bringup-executable launched. What is in that executable?
In the end, only selected three entries for the workstation-side: use_rviz_arg, rviz_config_arg, robot_state_launch.
The executable seems to be ugv_bringup/ugv_bringup.py, which creates a number of publishers of the topics "imu/data_raw", "imu/mag", "odom/odom_raw", "voltage"
There is also ugv_bringup/ugv_driver.py, which subscribes to topics "cmd_vel", 'joy', 'ugv/joint_states' (pan/tilt) and 'ugv/led_ctrl'. There is also a subscription to 'voltage' (can the voltage be controlled?). Looked in the code, if the voltage_value drops below 9 a low_battery sound is played.
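A minimal rclpy sketch of my own of the voltage behaviour described above (the std_msgs/Float32 message type is my assumption; the sound playing is left out):

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32   # assumed message type for /voltage

class VoltageMonitor(Node):
    def __init__(self):
        super().__init__('voltage_monitor')
        self.create_subscription(Float32, '/voltage', self.on_voltage, 10)

    def on_voltage(self, msg):
        if msg.data < 9.0:         # threshold as described for ugv_driver.py
            self.get_logger().warn(f'Battery low: {msg.data:.2f} V')

def main():
    rclpy.init()
    rclpy.spin(VoltageMonitor())

if __name__ == '__main__':
    main()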
Made a rviz_lidar.launch.py with only three nodes, and added it to the install with colcon build --packages-select ugv_bringup. After setting the UGV_MODEL and LDLIDAR_MODEL I launched it with ros2 launch ugv_bringup rviz_lidar.launch.py use_rviz:=true.
Made two scripts start_lidar.sh (rover-side) and start_rviz_lidar.sh (workstation-side)
Works, but I still have two robot_state_publishers, which also gives double /rf2o_laser_odometry, /transform_listener_impl_62d462ce14d0, /ugv/joint_state_publisher.
Killed the workstation-side robot_state_publisher, still have a double rf2o_laser_odometry.
Tried again, now without the robot_state_launch (2 nodes left). Starting start_rviz_lidar.sh gave 4 nodes:
/rviz2
/transform_listener_impl_5c12fd746e70
/ugv/joint_state_publisher
/ugv/robot_state_publisher
Adding start_lidar.sh still gave many doubles
Started a clean rviz2 with ros2 run rviz2 rviz2 -d ~/git/ugv_ws/install/ugv_bringup/share/ugv_bringup/rviz/view_bringup.rviz. Still two transform_listener_* and two rf2o_laser_odometry.
Rebooted the jetson, still two rf2o_laser_odometries. The LD19 in ugv_else/ldlidar also launches a transform_listener_impl.
Tried ros2 lifecycle set transform_listener_impl_636aff058900 shutdown, but get Node not found.
Could try to reboot also ws9, but there are other users. Leave it here for the moment.
In the directory ~/git/ugv_ws/src/ugv_main/ugv_slam/launch there are three launch files: cartographer, rtabmap_rgbd and gmapping. Tutorial 4 is using gmapping.
Started with activating the OAK-D camera with ros2 launch ugv_vision oak_d_lite.launch.py. Seems to work:
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rectify_color_node' in container 'oak_container'
[component_container-1] [INFO] [1750338883.796983921] [oak]: Starting camera.
[component_container-1] [INFO] [1750338883.807265975] [oak]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [INFO] [1750338886.592273026] [oak]: Camera with MXID: 14442C10019902D700 and Name: 1.2.3 connected!
[component_container-1] [INFO] [1750338886.593134048] [oak]: USB SPEED: HIGH
[component_container-1] [INFO] [1750338886.639988351] [oak]: Device type: OAK-D-LITE
[component_container-1] [INFO] [1750338886.642406675] [oak]: Pipeline type: RGBD
[component_container-1] [INFO] [1750338887.683423674] [oak]: Finished setting up pipeline.
[component_container-1] [WARN] [1750338888.471441275] [oak]: Parameter imu.i_rot_cov not found
[component_container-1] [WARN] [1750338888.471589696] [oak]: Parameter imu.i_mag_cov not found
[component_container-1] [INFO] [1750338889.080588298] [oak]: Camera ready!
In RVIZ, I could display both the topic /oak/rgb/image_rect as /oak/stereo/image_raw as Image:
When I look at the topics, only /oak/imu, /oak/rgb and /oak/stereo are published.
Luxonis has launch files which also start ROS depth processing nodes to generate a pointcloud.
The oak_d_lite.launch.py script starts camera.launch.py from depthai_ros_driver, which starts the RGBD together with a NN.
The installed driver files can be found at /opt/ros/humble/share/depthai_ros_driver/launch/. Didn't see the NN in spatial Mobilenet mode.
Tried on the rover ros2 launch depthai_ros_driver rgbd_pcl.launch.py. Now there is also a topic /oak/points, of type PointCloud2.
I could add PointCloud2 display in rviz2, but the node-log showed: New subscription discovered on topic '/oak/points', requesting incompatible QoS. No messages will be sent to it. Last incompatible policy: RELIABILITY_QOS_POLICY
Looked with ros2 topic info /oak/points --verbose. The QoS profile of rviz and the driver match:
Reliability: BEST_EFFORT
Durability: VOLATILE
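The same best-effort profile can be requested from a plain rclpy subscriber; a minimal sketch of my own (node and callback names are made up):

import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, DurabilityPolicy
from sensor_msgs.msg import PointCloud2

class CloudListener(Node):
    def __init__(self):
        super().__init__('cloud_listener')
        # match the driver's profile: best-effort reliability, volatile durability
        qos = QoSProfile(depth=5,
                         reliability=ReliabilityPolicy.BEST_EFFORT,
                         durability=DurabilityPolicy.VOLATILE)
        self.create_subscription(PointCloud2, '/oak/points', self.on_cloud, qos)

    def on_cloud(self, msg):
        self.get_logger().info(f'cloud with {msg.width * msg.height} points')

def main():
    rclpy.init()
    rclpy.spin(CloudListener())

if __name__ == '__main__':
    main()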
On November 7, 2024 I had point-cloud displayed for realsense camera.
Tried an alternative approach. Attached my OAK-D directly to ws9. Had to install ros-humble-depthai-ros-driver, which also installed ros-humble-depthai-ros-msgs and ros-humble-ffmpeg-image-transport-msgs.
Launching ros2 launch depthai_ros_driver rgbd_pcl.launch.py on ws9 failed on udev rules: Insufficient permissions to communicate with X_LINK_UNBOOTED device with name "1.1". Make sure udev rules are set
Looked at Luxonis troubleshooting and did the udev update. Still going wrong, so should try another USB-cable (currently using a white USB-B to USB-C).
Connected my Thunderbolt USB-C to USB-C cable. At least the camera_preview example from the python sdk works.
I have the feeling that it has something to do with the bootloader.
Also tried device information, but also Couldn't find any available devices.
Tried on ws9 ros2 run rviz2 rviz2 -d /opt/ros/humble/share/depthai_ros_driver/config/rviz/rgbd.rviz. Didn't work, neither for the OAK-D nor the ugv_rover.
On July 11, 2022 I had a working visualisation of the PointCloud in rviz.
June 16, 2025
Looking at the github ugv_ws, but this workspace is at least 4 months old.
The first step of the tutorial is starting rviz, which is a bit strange inside the docker (running on a remote station). Tried if I could start rviz on WSL with a Xserver running, but that gives:
rviz2: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory
This build failed because nav2_msgs package was not installed.
Installation failed on:
Failed to fetch http://packages.ros.org/ros2/ubuntu/dists/jammy/InRelease The following signatures were invalid: Open Robotics
Solved it by adding the signature in a keyring, as suggested on askubuntu.
Next to fail is again explore_lite, now on a missing map_msgs package.
This can be solved with sudo apt install ros-humble-nav2-msgs ros-humble-map-msgs ros-humble-nav2-costmap-2d.
Next package (15/22) to fail is apriltag_ros, on package image_geometry.
The package vizanti_server fails on rosbridge_suite. Now the first colcon build is successfully finished. To be sure, did an intermediate source install/setup.bash.
The second build is colcon build --packages-select ugv_bringup ugv_chat_ai ugv_description ugv_gazebo ugv_nav ugv_slam ugv_tools ugv_vision ugv_web_app --symlink-install .
The ugv_nav fails on missing nav2_bringup. This is the only package that has to be installed. Did again a source install/setup.bash.
In the README.md some additional humble-packages are mentioned, such as ros-humble-usb-cam and ros-humble-depthai-*.
Started the first step of the tutorial, but ros2 launch ugv_description display.launch.py use_rviz:=true failed on a missing UGV_MODEL.
Looked in the launch-file and defined export UGV_MODEL=ugv_rover (the name of the urdf-file).
Next exception is missing joint_state_publisher_gui package.
That package was in the README.md, so did sudo apt install ros-humble-joint-state-publisher-*. The Joint-State Publisher window now pops up, only rviz2 is missing.
That is part of apt install ros-humble-desktop-*, also in the README.md. That installs 385 packages.
Now I could control the pan-tilt (both up/down and left/right), both in rviz and on the rover itself (after starting the driver):
The robot doesn't respond to the wheel commands. Maybe I should also have defined the UGV_MODEL at the rover side. Tried again with UGV_MODEL defined: same behavior. Checked the topics, no camera images or depth-images are published (yet).
Launching ros2 launch ugv_bringup bringup_lidar.launch.py use_rviz:=true on ws9 fails on LDLIDAR_MODEL. That is not in ugv_bringup/launch/bringup_lidar.launch.py, but in ldlidar.launch.py called from there. So, LDLIDAR_MODEL should be ld06, ld19 or stl27l. So set export LDLIDAR_MODEL=ld19.
Activated in RVIZ a LaserScan, selected the /scan topic, but nothing visible.
Also launched ros2 launch ugv_bringup bringup_lidar.launch.py use_rviz:=false, which gave an error:
[ugv_bringup-3] JSON decode error: Expecting value: line 1 column 1 (char 0) with line: z":1684,"odl":0,"odr":0,"v":1214}
Try again with ld06? Actually, it's a D500 Lidar. Yet, the D500 Lidar-kit is based on a STL-19P.
Trying a 2nd time, now it works:
[rf2o_laser_odometry_node-6] [INFO] [1750080862.481144040] [rf2o_laser_odometry]: Initializing RF2O node...
[rf2o_laser_odometry_node-6] [WARN] [1750080862.495336634] [rf2o_laser_odometry]: Waiting for laser_scans....
[rf2o_laser_odometry_node-6] [INFO] [1750080862.695500846] [rf2o_laser_odometry]: Got first Laser Scan .... Configuring node
Could also drive around with ros2 run ugv_tools keyboard_ctrl (from ws9):
I see the updates in RVIZ, but I don't see the scan topic in the topic list. A bit strange, because with ros2 node list I see many nodes (several doubled, not ideal to use the same bringup for both). One of the nodes is /LD19:
/LD19
/base_node
/base_node
/keyboard_ctrl
/rf2o_laser_odometry
/rf2o_laser_odometry
/rf2o_laser_odometry
/rf2o_laser_odometry
/rviz2
/rviz2
/transform_listener_impl_5b38e1b97df0
/transform_listener_impl_5e016b673f00
/transform_listener_impl_6337961ff800
/transform_listener_impl_aaaaf1f50810
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher
/ugv/joint_state_publisher_gui
/ugv/robot_state_publisher
/ugv/robot_state_publisher
/ugv/robot_state_publisher
/ugv/robot_state_publisher
/ugv_bringup
/ugv_driver
When looking with ros2 topic echo /scan I see many .nan values:
- 228.0
- 232.0
- .nan
- 220.0
- 209.0
- 205.0
- 201.0
- 204.0
- 212.0
- 204.0
- .nan
- .nan
- .nan
- .nan
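To get a feel for how many of the ranges are actually .nan, a small rclpy sketch of my own that counts them per scan (names are made up):

import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanNanCounter(Node):
    def __init__(self):
        super().__init__('scan_nan_counter')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg):
        nans = sum(1 for r in msg.ranges if math.isnan(r))
        self.get_logger().info(f'{nans}/{len(msg.ranges)} ranges are NaN')

def main():
    rclpy.init()
    rclpy.spin(ScanNanCounter())

if __name__ == '__main__':
    main()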
June 2, 2025
Looking at the UGV-rover again. Tried to connect to the rover via the vislab-wifi. I also see the UGV network still active.
Received Joey's labbook. The robot was connected via LAB42, not the vislab_wifi. Able to login via ssh.
The home-directory on the rover has several directories, including ugv_ws and ugv_jetson.
Started with source /opt/ros/humble/setup.bash, followed by source ~/ugv_ws/install/setup.bash. That gives two warnings:
not found: "/home/jetson/ugv_ws/install/costmap_converter/share/costmap_converter/local_setup.bash"
not found: "/home/jetson/ugv_ws/install/explore_lite/share/explore_lite/local_setup.bash"
Started ros2 launch ugv_custom_nodes line_follower.launch.py. Seems to work, only a few warnings:
[v4l2_camera_node-1] [INFO] [1749735101.182860329] [v4l2_camera]: Driver: uvcvideo
[v4l2_camera_node-1] [INFO] [1749735101.183075567] [v4l2_camera]: Version: 331656
[v4l2_camera_node-1] [INFO] [1749735101.183090895] [v4l2_camera]: Device: USB Camera: USB Camera
[v4l2_camera_node-1] [INFO] [1749735101.183096527] [v4l2_camera]: Location: usb-3610000.usb-2.2
[v4l2_camera_node-1] [INFO] [1749735101.183101583] [v4l2_camera]: Capabilities:
[v4l2_camera_node-1] [INFO] [1749735101.183105903] [v4l2_camera]: Read/write: NO
[v4l2_camera_node-1] [INFO] [1749735101.183111183] [v4l2_camera]: Streaming: YES
[v4l2_camera_node-1] [INFO] [1749735101.183125936] [v4l2_camera]: Current pixel format: MJPG @ 1920x1080
[v4l2_camera_node-1] [INFO] [1749735101.183234419] [v4l2_camera]: Available pixel formats:
[v4l2_camera_node-1] [INFO] [1749735101.183242515] [v4l2_camera]: MJPG - Motion-JPEG
[v4l2_camera_node-1] [INFO] [1749735101.183246995] [v4l2_camera]: YUYV - YUYV 4:2:2
[v4l2_camera_node-1] [INFO] [1749735101.183251315] [v4l2_camera]: Available controls:
[v4l2_camera_node-1] [INFO] [1749735101.184699257] [v4l2_camera]: Brightness (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.186169984] [v4l2_camera]: Contrast (1) = 50
[v4l2_camera_node-1] [INFO] [1749735101.187917198] [v4l2_camera]: Saturation (1) = 65
[v4l2_camera_node-1] [INFO] [1749735101.189416117] [v4l2_camera]: Hue (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.189443318] [v4l2_camera]: White Balance, Automatic (2) = 1
[v4l2_camera_node-1] [INFO] [1749735101.190926908] [v4l2_camera]: Gamma (1) = 300
[v4l2_camera_node-1] [INFO] [1749735101.192166269] [v4l2_camera]: Power Line Frequency (3) = 1
[v4l2_camera_node-1] [INFO] [1749735101.193665476] [v4l2_camera]: White Balance Temperature (1) = 4600 [inactive]
[v4l2_camera_node-1] [INFO] [1749735101.194914213] [v4l2_camera]: Sharpness (1) = 50
[v4l2_camera_node-1] [INFO] [1749735101.196162694] [v4l2_camera]: Backlight Compensation (1) = 0
[v4l2_camera_node-1] [ERROR] [1749735101.196197767] [v4l2_camera]: Failed getting value for control 10092545: Permission denied (13); returning 0!
[v4l2_camera_node-1] [INFO] [1749735101.196208903] [v4l2_camera]: Camera Controls (6) = 0
[v4l2_camera_node-1] [INFO] [1749735101.196217671] [v4l2_camera]: Auto Exposure (3) = 3
[v4l2_camera_node-1] [INFO] [1749735101.197664077] [v4l2_camera]: Exposure Time, Absolute (1) = 166 [inactive]
[v4l2_camera_node-1] [INFO] [1749735101.199412059] [v4l2_camera]: Exposure, Dynamic Framerate (2) = 0
[v4l2_camera_node-1] [INFO] [1749735101.200914146] [v4l2_camera]: Pan, Absolute (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.202424394] [v4l2_camera]: Tilt, Absolute (1) = 0
[v4l2_camera_node-1] [INFO] [1749735101.203911537] [v4l2_camera]: Focus, Absolute (1) = 68 [inactive]
[v4l2_camera_node-1] [INFO] [1749735101.203945650] [v4l2_camera]: Focus, Automatic Continuous (2) = 1
[v4l2_camera_node-1] [INFO] [1749735101.205413752] [v4l2_camera]: Zoom, Absolute (1) = 0
[v4l2_camera_node-1] [WARN] [1749735101.206874879] [camera]: Control type not currently supported: 6, for control: Camera Controls
[v4l2_camera_node-1] [INFO] [1749735101.207363787] [v4l2_camera]: Requesting format: 1920x1080 YUYV
[v4l2_camera_node-1] [INFO] [1749735101.218292138] [v4l2_camera]: Success
[v4l2_camera_node-1] [INFO] [1749735101.218322795] [v4l2_camera]: Requesting format: 160x120 YUYV
[v4l2_camera_node-1] [INFO] [1749735101.229039172] [v4l2_camera]: Success
[v4l2_camera_node-1] [INFO] [1749735101.236211424] [v4l2_camera]: Starting camera
[v4l2_camera_node-1] [WARN] [1749735101.804031818] [camera]: Image encoding not the same as requested output, performing possibly slow conversion: yuv422_yuy2 => rgb8
[v4l2_camera_node-1] [INFO] [1749735101.814271510] [camera]: using default calibration URL
[v4l2_camera_node-1] [INFO] [1749735101.814428698] [camera]: camera calibration URL: file:///home/jetson/.ros/camera_info/usb_camera:_usb_camera.yaml
[v4l2_camera_node-1] [ERROR] [1749735101.814617119] [camera_calibration_parsers]: Unable to open camera calibration file [/home/jetson/.ros/camera_info/usb_camera:_usb_camera.yaml]
[v4l2_camera_node-1] [WARN] [1749735101.814656576] [camera]: Camera calibration file /home/jetson/.ros/camera_info/usb_camera:_usb_camera.yaml not found
Checked on nb-dual (WSL) and saw the following topics:
/camera_info
/cmd_vel
/image_raw
/image_raw/compressed
/image_raw/compressedDepth
/image_raw/theora
/joy
/parameter_events
/processed_image
/rosout
/ugv/joint_states
/ugv/led_ctrl
/voltage
No topics like /scan and /odom yet, although Joey got that working on May 23 (ros2 launch ugv_custom_nodes mapping_node.launch.py).
Looked up what I tried as the last command on February 6.
Tried ros2 run image_view image_view --ros-args --remap image:=/image_raw inside WSL with VcXsrv (XLaunch) running in the background, but this gives:
[INFO] [1749735895.930762600] [image_view_node]: Using transport "raw"
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.4) ./modules/highgui/src/window_gtk.cpp:635: error: (-2:Unspecified error) Can't initialize GTK backend in function 'cvInitSystem'
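A possible fix to try next time inside WSL (untested sketch, assuming VcXsrv on the Windows host accepts connections): point DISPLAY at the host IP found in /etc/resolv.conf.
export DISPLAY=$(grep nameserver /etc/resolv.conf | awk '{print $2}'):0.0   # Windows host as seen from WSL2
export LIBGL_ALWAYS_INDIRECT=1
ros2 run image_view image_view --ros-args --remap image:=/image_raw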
Tried again on ws9. Had to do unset ROS_DOMAIN_ID first. Still, ros2 topic list gives me only the two default topics, while I can ping and ssh to the rover and see all topics there.
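Some discovery checks (sketch) that could narrow this down:
echo "domain=$ROS_DOMAIN_ID localhost_only=$ROS_LOCALHOST_ONLY"   # both should be empty or 0
ros2 daemon stop && ros2 daemon start                             # restart the discovery daemon
ros2 multicast receive    # run on ws9; on the rover run: ros2 multicast send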
ws9 was busy with a partial upgrade (asking for a reboot). Hope that this is not an update from Ubuntu 22.04 to 24.04.
Switched off all TurtleBot related settings in my ~/.bashrc
Another user is running jobs on ws9, so I couldn't reboot.
This seems to happen on the LAB42 network, not on the vislab_wifi. Looked into the Multiple Networks section from linuxbabe, but that didn't work.
Instead used the trick from stackexchange: looked up the UUIDs with nmcli c and switched to the other network with nmcli c up uuid <UUID>. The ssh-connection then freezes, because you have to rebuild it via the other network.
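The sequence boils down to (sketch; the UUID comes from the first command and is left as a placeholder here):
nmcli c                    # list connections with their UUIDs
nmcli c up uuid "$UUID"    # bring up the other connection; the current ssh session will freeze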
The two ethernet-connections are also in the 192.* range, so I had to unplug them to connect to the robot.
Now I see all topics. The image was black, but that was because the cover was still on the lens. Without the cover the robot immediately starts to drive (as expected from line-following).
The images displayed with ros2 run image_view image_view --ros-args --remap image:=/image_raw have a delay of 10s:
You can also request the processed_image with ros2 run image_view image_view --ros-args --remap image:=/processed_image:
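Part of the 10 s delay mentioned above could be the raw image stream over wifi; assuming image_view honours the usual image_transport parameter, subscribing to the compressed stream (which is in the topic list) is worth a try (sketch):
ros2 run image_view image_view --ros-args --remap image:=/image_raw -p image_transport:=compressed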
There are now 4 nodes running:
/camera
/driver
/image_view_node
/line_following_node
The next step in the WaveShare tutorials is to control the leds (Tutorial 2).
On ws9 Mozilla could not open this web-page; Chrome could.
After starting the driver with ros2 run ugv_bringup ugv_driver, I could control the three led-lights with ros2 topic pub /ugv/led_ctrl std_msgs/msg/Float32MultiArray "{data: [255, 255]}" -1. With {data: [9,0]} only the two lower leds light up, with less brightness. Note that the ugv_driver is not the driver from the line-following. Yet, this seems to be only a difference in name (same executable) in ugv_custom_nodes/launch/line_follower.launch.py.
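Collected as a small sketch (driver in one terminal, publisher in another):
ros2 run ugv_bringup ugv_driver    # terminal 1: start the driver
ros2 topic pub /ugv/led_ctrl std_msgs/msg/Float32MultiArray "{data: [255, 255]}" -1    # terminal 2: all leds at full brightness
ros2 topic pub /ugv/led_ctrl std_msgs/msg/Float32MultiArray "{data: [9, 0]}" -1        # only the two lower leds, dimmed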
Looking for nice images for the paper. Fig. 4 of Group 1 could be useful to illustrate the first assignment.
Also Fig. 6 would be a nice illustration.
Yet, Group 1 chose the most centered line.
Group 9 shows in Fig. 9 the benefit of using RANSAC compared to Canny edge.
February 18, 2025
The RAE has a Robotics Vision Core 3, which is based on the Intel Movidius accelerator with code name Keem Bay (https://www.intel.com/content/www/us/en/developer/articles/technical/movidius-accelerator-on-edge-software-hub.html).
February 12, 2025
An interesting arXiv paper on a Wheeled lab at the University of Washington.
The rover comes without batteries. The instructions to load the batteries are not there yet.
The battery compartment is actually below the rover. The provided screwdriver fits nicely. Placed three of the 18650 lithium batteries from the DART robot into the compartment and started charging (connector next to the on-off button).
Connected the DisplayPort to my display and the USB-C to a mouse and keyboard. The ethernet IP is displayed on the small black screen below the on-off button.
Could log in via the screen/keyboard. The Jetson is running Ubuntu 22.04.4.
Switched off the hotspot (right top corner) and connected to LAB42.
Tried to switch off the python program started during setup. The kills seemed to fail, but after a while no python script at ~10% CPU popped up anymore. Looked at /home/jetson/ugv.log. The last line is Terminated.
The docker scripts are actually in /home/jetson/ugv_ws. Didn't make the two scripts mentioned in ROS2 Preparation executable. The ros2_humble script starts fine and indicated that a shell-server is started. Yet, I couldn't connect to the shell server, neither externally nor via the docker-ip. Yet, doing a docker ps followed by docker exec -it CONTAINER_ID bash worked (no zsh in the $PATH). Could find /opt/ros/humble/bin/ros2.
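The working route into the already-running container, as a sketch:
docker ps                            # look up the CONTAINER_ID of the ros2_humble container
docker exec -it CONTAINER_ID bash    # bash, since zsh is not in the $PATH
source /opt/ros/humble/setup.bash    # ros2 lives in /opt/ros/humble/bin
ros2 topic list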
Next tried RVIZ, but that failed because no connection to display localhost:12.0 could be made.
The docker script restarts an existing container from an earlier run, not a fresh run of the docker-image.
Tried to make a script that runs directly from the image, reusing the host setup. Yet, the rover was out of battery before I finished (charging was on the USB-B connector before). Switched to the USB-C connector after the reboot.
Starting the docker failed. Could be the background process. Tried docker run nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04. That works, but returns directly and gives a warning that no NVIDIA drivers are detected.
Tried sudo docker run -it --runtime=nvidia --gpus all /usr/bin/bash. That works; entered the image. Only, there is no /opt/ros to be found, only /opt/nvidia. Strange, because I don't see any other image with docker image ls.
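For a next attempt, a fresh interactive run with display and GPU access would look roughly like this (sketch; IMAGE is a placeholder for whatever image the preinstalled ros2_humble container was created from):
xhost +local:docker    # allow local containers to use the X-server
sudo docker run -it --runtime=nvidia --gpus all \
    --network host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    IMAGE /bin/bash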
Looked whether I could natively install the required ros-humble packages. sudo apt upgrade wants to downgrade 4 packages (nvidia-container-toolkit), so I didn't continue.
Installed python3-pip with apt. Next are the ros-packages, but I have to add the ros-repository first. Tried to install firefox, but that is snap-based. Used firefox for the ROS Humble install instructions.
Installed ros-humble-base first (258 packages), followed by ros-humble-cartographer-* (360 new, 7 upgraded).
Next ros-humble-joint-state-publisher-* (17 packages), followed by ros-humble-nav2-* (119 packages, 5 upgraded)
Next ros-humble-rosbridge-* (22 packages), followed by ros-humble-rqt-* (61 packages)
Next ros-humble-rtabmap-* (51 new, 1 upgraded), followed by ros-humble-usb-cam-* (11 packages).
Last one is ros-humble-depthai-* (28 packages). Left the gazebo part for the moment.
The code needed is available on github. Unfortunately, colcon build --packages-select doesn't work, so I have to do these three commands in each sub-directory of src/ugv_else (a loop sketch follows the commands):
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --target install
sudo cmake --build build --target install
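As a loop this becomes (sketch, assuming every sub-directory of src/ugv_else is a plain CMake package):
for d in src/ugv_else/*/ ; do
  ( cd "$d"
    cmake -B build -DCMAKE_BUILD_TYPE=Release
    cmake --build build
    sudo cmake --build build --target install )
done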
There is not only /home/jetson/git/ugv_ws/install/setup.bash, but also the preinstalled /home/jetson/ugv_ws/install/setup.bash. Yet, running this script complained about a missing local_setup.bash for ugv_description, ugv_gazebo, ugv_nav and ugv_slam. Those local_setup.bash were symbolic links to /home/ws. Changing the links to /home/jetson (with sudo!) solved this. Now ~/.bashrc works.
Yet, neither ros2 launch ugv_description display.launch.py use_rviz:=true nor ros2 run ugv_bringup ugv_driver works (package not found). Maybe a rosdep init first, even better in a humble_ws/src/, with the different packages in that src-directory.
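The workspace route would roughly be (untested sketch):
mkdir -p ~/humble_ws/src    # move ugv_bringup, ugv_description, etc. under src/
cd ~/humble_ws
sudo rosdep init            # only needed once per machine
rosdep update
rosdep install --from-paths src --ignore-src -r -y
colcon build
source install/setup.bash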
At least ros2 run usb_cam usb_cam_node_exe worked. Could view the image with ros2 run image_view image_view --ros-args --remap image:=/image_raw. Only, there was no permission to save the image:
January 28, 2025
This paper describes iMarkers, which allow making invisible ArucoTags. Yet, one needs a polarizer sheet in front of one of the stereo-cameras.
Chapter 10 covers the Navigation stack, Chapter 13 covers Computer Vision (no code!).
January 20, 2025
Looking for an announcement of the next version of the RAE, but according to this post, most of the developers are gone. We can ask for a refund.
Last August there were still plans to make a RCV4 version. The release was planned for Q3-Q4 2025.
According to this post the project was already deprecated in 2023. We could already ask for refunds in October 2024.
An alternative could be the UGV Rover, Nvidia Jetson Orin based. Twice the price, but including a Lidar and a depth camera. ROS2 Humble based. I also see that it is Docker based?! The depth sensor is an OAK-D-Lite, the Lidar a D500 DTOF.