It would be interesting to make an equivalent page of installation instructions as the JSK lab from the University of Tokyo, especially because those instructions are from 2015. A ROS2 update would be an asset. The packages on github are well maintained; the last update is 1h ago.
Delivery is expected in March 2023, nicely on time to reproduce some of Emily's results with a Mini version of her Spot robot.
Also the new Zed-x is available. The specs are quite comparable with the Zed-2i. The Zed-x actually has a lower resolution, but a higher frame-rate. The connection is not USB-based, but a secure GMSL2 connection. The Jetson Orin can be extended with a GMSL2 card.
Nvidia has a preference for GMSL cameras, for instance NileCAM91. There are at least 14 GMSL2 camera systems. The NileCAM20 is compatible with a Jetson Nano development kit.
This picture shows the setup. The camera is connected with a coax-cable to an interface board, which is connected with a MIPI CSI-2 ribbon-cable to the Jetson camera input and powered by USB:
In that sense a Zed-2i would be handier, although the Zed-x has the advantage of the form factor (fits on the NanoSaur).
The only other Nvidia camera compatible with the Jetson Nano is NileCAM25.
The benefits of the GMSL interface are explained in this post.
For instance, a converter to convert the 15-pin Raspberry Pi camera interface to 30-pin I-PEX.
This GMSL2 deserializer board uses two GPIO pins (J9 and J10) to communicate MIPI signals with a Jetson Nano and the J501 interface to connect to the Jetson Xavier. It can connect 4 GMSL2 cameras, but is also as expensive as a camera (and can be powered by 19V).
In the 3rd chapter an Xtion Pro Live was used as depth camera.
Chapter 4 is the JFR article. A comparison of the inputs of different SLAM algorithms is given. Note that RTAB-Map doesn't use an IMU. The only SLAM algorithm in this overview which uses an IMU is VINS-Mono. RTAB-Map makes use of the odometry of the robot, which is a ROS message that could be provided by an IMU.
According to section 4.3, RTAB-Map implements two standard odometry approaches: Frame-To-Map and Frame-To-Frame (Scaramuzza 2011).
RTAB-Map can use all features available in OpenCV, but by default uses GoodFeaturesToTrack (Shi 1994). Alternatives are discussed here, which include SIFT, SURF, ORB and FAST.
At least seven alternative visual odometry approaches are integrated in RTAB-Map, including the VO from ORB-SLAM2.
RTAB-Map also has a laser-odometry module. As an alternative, the LO called LOAM (2017) is integrated in RTAB-Map. On the KITTI dataset (Table 4.3) LOAM is consistently outperformed by the LO from RTAB-Map.
The code of RTAB-Map is well maintained, with several ROS1 and ROS2 implementations, a standalone implementation, examples both with RGBD and WifiMapping. Last commit a few days ago.
A major improvement in this paper is the use of SuperPoint and SuperGlue for feature detection and matching. In the discussion, Mathieu suggested to use a learned global descriptor (e.g. NetVLAD 2016).
A quick Scholar search suggests using LoFTR (CVPR 2021, transformer based). The code is available on github.
Got a recommendation for a paper on a nice application of a legged robot (Unitree Go1) with a 6DoF arm, where the base helped to reach points.
The recommendation was on a navigation module which was independent of the type of robot. Yet, these were all wheeled robots, not a legged robot like Spot.
July 16, 2022
Added some code, so that finally ros2 run image_view image_view --ros-args --remap image:=/robot1/camera/depth/disparity_image also works.
The trick was to use self.bridge.cv2_to_imgmsg(color_image, encoding="bgr8") instead of encoding="passthrough".
Before, image_view was complaining:
[ERROR] [1657949670.989541962] [image_view_node]: Unable to convert '8UC3' image for display: 'cv_bridge.cvtColorForDisplay() while trying to convert image from '8UC3' to 'bgr8' an exception was thrown ([8UC3] is not a color format. but [bgr8] is. The conversion does not make sense)'
This should not have occurred, because according to the ROS conversions bgr8 and 8UC3 are equivalent.
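A minimal sketch of the publishing side, assuming a node that (like mine) keeps a CvBridge as self.bridge; the topic and frame names are placeholders, the point is only the explicit bgr8 encoding instead of passthrough:

import numpy as np
import rclpy
from rclpy.node import Node
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class DisparityColorPublisher(Node):
    def __init__(self):
        super().__init__('disparity_color_publisher')
        self.bridge = CvBridge()
        # placeholder topic, matching the remap used above
        self.pub = self.create_publisher(Image, '/robot1/camera/depth/disparity_image', 10)
        self.timer = self.create_timer(0.1, self.publish_once)

    def publish_once(self):
        # stand-in for the colour-mapped disparity (8UC3, BGR order)
        color_image = np.zeros((240, 320, 3), dtype=np.uint8)
        # explicit "bgr8" instead of "passthrough", so image_view accepts the 8UC3 image
        msg = self.bridge.cv2_to_imgmsg(color_image, encoding="bgr8")
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'camera_frame'
        self.pub.publish(msg)

def main():
    rclpy.init()
    try:
        rclpy.spin(DisparityColorPublisher())
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    main()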
July 15, 2022
The obstacle avoidance based on the depth image from the realsense r200 is now working, although it is not very stable. Should use a region instead of a single pixel to decide to evade.
Looked at whether I could colour the depth map. In this tutorial they colour the disparity map with COLORMAP_JET. Going to publish the depth with this colormap.
If I want to convert the depth to disparity, I can use disparity = B*f / depth, with B the baseline and f the focal length. Note that B is typically given in mm, while my depth is in meters. f is typically in pixels, so the disparity will also be in pixels.
The simulated depth-camera is a realsense r200, which is a deprecated branch of librealsense. The baseline can be found in the camera specs, the focal length definition in the intrinsic camera parameters. This depends on the resolution/aspect ratio of the requested image, which has to be calibrated.
This calibration is done in this article, which gives baseline 69.95 mm, IR focal length ~(580,580) pixels, RGB focal length ~ (1400,1400) pixels
In the p3at/model.sdf the same 0.07 is used. A depth camera could also have a distortion field, but that is not specified for the p3at/model.sdf. The fx, fy, cx, cy are the important parameters of the intrinsic matrix; k1, k2 those of the distortion model. The r200 uses for its color images the Modified Brown-Conrady Distortion. According to this post 'K1, K2, K3 are used in units of focal length and to convert them into mm or pixels their values need to be divided by fx(in mm or pix)^2 for K1, fx^4 for K2 and fx^6 for K3'.
I was able to convert the 32FC1 image from the depth sensor to CV_8U with the addWeighted function. 8-bit grey and colormaps can be displayed without problems in rviz2 (which already had problems with 16-bit color).
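A minimal sketch of this conversion chain, assuming the calibration values quoted above (baseline 69.95 mm, IR focal length ~580 px) and a 32FC1 depth image in meters; the 8-bit scaling factor is a guess, and convertScaleAbs is used here instead of addWeighted:

import cv2
import numpy as np

B = 0.06995   # baseline in meters (69.95 mm from the calibration article)
f = 580.0     # IR focal length in pixels

def depth_to_colored_disparity(depth_m: np.ndarray) -> np.ndarray:
    """32FC1 depth image in meters -> BGR image colored with COLORMAP_JET."""
    with np.errstate(divide='ignore'):
        disparity = np.where(depth_m > 0, B * f / depth_m, 0.0)  # disparity in pixels
    # scale to 8 bit; with this guess 48 px disparity (~0.85 m) saturates the colormap
    disp_8u = cv2.convertScaleAbs(disparity, alpha=255.0 / 48.0)
    return cv2.applyColorMap(disp_8u, cv2.COLORMAP_JET)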
Added coloured blocks that indicate an obstacle warning. Tested this in the room with the lowered table:
This table is hard to recognize with the laserscan (legs are shown as 4 dots on the map). The sonar ring is even lower (but has a wider opening angle).
I visualized two of the sonar beams (so2 and so5). Both report average ranges in the order of 0.9m (above the threshold of 0.7m):
The laserscan also only detects the table legs:
This can also be seen when you create a map of the room (see SLAM below). Only 4 tiny dots from the table legs are visible (and the shadows they cast on the map):
Only the depth camera can be used to detect this obstacle, as visualized in the disparity image. Nearby pixels are bright, background pixels are indicated with shades of blue. The disparity is augmented with obstacle reports (all five directions indicated in red):
Because of the strange laser measurements, and because sonar is the natural choice for obstacle avoidance, I looked at the sonar rings of the p3at:
I found this figure in the p3at manual (page 13). The UNM Technical Report indicated that to control it from ROS1 they used the p2os_driver, and received messages on the /sonar topic.
This post indicates 'I considered the aperture of one transducer is 16 degrees (real hardware transducers have an aperture of 15 degrees)'.
From the p3at manual (page 12) I read 'Each sonar array's transducers are multiplexed: Only one disc per array is active at a time, but all four arrays fire one transducer simultaneously. The sonar ranging acquisition rate is adjustable, normally set to 25 Hz (40 milliseconds per transducer per array). Sensitivity ranges from 10 centimeters (six inches) to over four meters, depending on the ranging rate. You may control the sonar's firing pattern through software, too; the default is left-to-right in sequence 0 to 7 for each array.' So with 8 transducers per array and 40 ms per transducer, each transducer of an array is refreshed at roughly 25/8 ≈ 3 Hz.
The Mobile Robots github has a urdf description, including a mesh, for the front and back sonar arrays, but not the individual sonars. It is generated from this urdf.
Here is a post from 2016, which had the same problem. My suggestion would be to use the ray_sensor in distance only mode.
The rough dimensions of the P3AT can be found in this datasheet. The sonar ring is mounted below the top-plate at 277 mm. The width of the top-plate at the front is 381 mm, but the sonar ring is narrower (estimated 330 mm).
The amr configuration places the sonar-array at a height of 250mm, which seems OK.
Webots uses this naming scheme:
Also, the range sensor assumes multiple rays, so I made a front-sonar sensor with 8 measurements spaced 20 degrees apart. That should publish a range-topic, but that is not visible as a ros2 topic. The house-model loads both a model.sdf and a model.urdf, which are located in rvl_description/urdf/p3at. Note that the simulation will use the version in install/rvl_description. In the urdf the joints and links are specified.
Read this post on the right way to distribute the information between sdf and urdf. In principle the right order is xacro -> urdf -> sdf.
Note that in the meshes no sonar-array is present.
I have a version of the p3at with a range sensor as sonar.
Yet, even with the original models back, neither the house nor the robots are spawned. Try a fresh installation.
Cloned it in ~/ros_ws/src, which now contains 62 packages. Moved zed-ros2-wrapper to ~/ros_ws/tmp, because it failed to build.
Also with the fresh build it fails with:
[ERROR] [gzserver-1]: process has died [pid 191901, exit code 255, cmd 'gzserver /home/arnoud/ros2_ws/install/rvrl_gazebo/share/rvrl_gazebo/worlds/house_world.model -s libgazebo_ros_init.so -s libgazebo_ros_factory.so -s libgazebo_ros_force_system.so '].
[spawn_entity.py-13] [ERROR] [1657776503.163794782] [spawn_entity]: Service %s/spawn_entity unavailable. Was Gazebo started with GazeboRosFactory?
Reboot!
Reboot worked, both the ros2_ws and git-version give me a house and robots again.
Even my new urdf and sdf are working, although I receive the warnings:
[gzserver-1] Warning [parser.cc:833] XML Attribute[value] in element[mass] not defined in SDF, ignoring.
[gzserver-1] Warning [parser.cc:833] XML Attribute[ixx] in element[inertia] not defined in SDF, ignoring.
I used the mass and inertia specification in urdf style instead of sdf style. Repaired that.
Did ros2 topic echo /robot1/range, but that gives -inf for the range:
header:
stamp:
sec: 734
nanosec: 0
frame_id: front_sonar
radiation_type: 1
field_of_view: 2.792520046234131
min_range: 0.10000000149011612
max_range: 4.0
range: -.inf
Try to have it modelled as a 'normal' laserscan. With 8 beams, the left two always give infinity (at least at the start position). With the 4 middle beams it works:
Output range also works, which gives the average of the four beams. That could be quite useful, to average over the opening angle of a sonar-beam of 16 degrees.
This robotics club post uses a Raspberry Pi hc-sr04 ultrasonic sensor and also publishes a range.
Created range-sensors so0-so7, which each average 4 beams, with a resolution of 20cm and a Gaussian noise 10x as high as the LaserScan (stddev 0.1 instead of 0.01). This is the result:
Also made sure that the height is OK:
For obstacle avoidance this has the advantage that the other robots are better seen, because the laser scans over their tops.
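A minimal sketch of how these so0-so7 sensors could feed the obstacle warning, assuming one Range topic per transducer under /robot1/so0 ... /robot1/so7 (hypothetical names) and the 0.7 m threshold used above:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range

class SonarWarning(Node):
    def __init__(self):
        super().__init__('sonar_warning')
        # hypothetical topic layout: one Range topic per sonar
        for i in range(8):
            self.create_subscription(Range, f'/robot1/so{i}',
                                     lambda msg, i=i: self.check(i, msg), 10)

    def check(self, index: int, msg: Range):
        if 0.0 < msg.range < 0.7:  # same 0.7 m threshold as the depth-based avoidance
            self.get_logger().warn(f'obstacle at {msg.range:.2f} m on so{index}')

def main():
    rclpy.init()
    try:
        rclpy.spin(SonarWarning())
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    main()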
Trying to estimate the depth from the published depth images. The depth images are encoded as 32FC1 (240x320), consisting of integers. Yet, most values are 0 or 128; it seems that they are already color-coded.
The thermal_camera_pink plugin seems to be especially developed for RoboCup; a more general plugin would be hector_gazebo_thermal_camera. As this thermal camera tutorial indicates, not only a thermal sensor model is needed, but the objects in the world also have to be assigned a temperature.
Note that the house_map.launch.py uses the model.sdf for the p3at robot, not model-1_4.sdf. Those differ considerably. The model.sdf also contains libgazebo_ros_imu_sensor.so, libgazebo_ros_ray_sensor.so, libgazebo_ros_joint_state_publisher.so. Missing are libgazebo_ros_thermal_camera_pink.so and libgazebo_ros_control.so.
Started ros2 launch rvrl_gazebo house_map.launch.py. Each robot publishes four topics:
/robot1/joint_states
/robot1/robot_description
/robot1/tf
/robot1/tf_static
The robots seem to be controlled via their joint_states, not cmd_vel, which is strange because the teleop uses that parameter: ros2 run teleop_twist_keyboard teleop_twist_keyboard cmd_vel:=robot1/cmd_vel
Later I saw many more topics published, including cmd_vel.
Can add a random move.py to control that plugin.
Have to port that code to ROS2. This example shows how to handle rate and interrupts; a sketch is given below.
A more relevant example of how to publish cmd_vel with ros2 is twist_stamper.
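A minimal ROS2 sketch of such a random move node, assuming the /robot1/cmd_vel topic from the teleop command above; the velocities are arbitrary:

import random
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class RandomMove(Node):
    def __init__(self):
        super().__init__('random_move')
        self.pub = self.create_publisher(Twist, '/robot1/cmd_vel', 10)
        self.timer = self.create_timer(0.5, self.step)  # 2 Hz timer replaces the ROS1 Rate loop

    def step(self):
        msg = Twist()
        msg.linear.x = 0.2
        msg.angular.z = random.uniform(-0.5, 0.5)
        self.pub.publish(msg)

def main():
    rclpy.init()
    try:
        rclpy.spin(RandomMove())
    except KeyboardInterrupt:
        pass  # Ctrl-C stops the node; stop the robot manually if it keeps its last velocity

if __name__ == '__main__':
    main()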
Better would be to use a collision-avoidance behavior (Niels).
Next would be a wall-following behavior.
The wall-following code of the UvA team for ERF 2022 is based on the wall_follower_sim code from Yoraish, who based his code on the HMMWV code, which is actually a sort of rescue robot. This code seems to be ROS1 Indigo based. This robot utilizes hector_slam and hector_navigation; the odomtransformer node is the only python code, the other code is C++ (and hector) based.
The new p3at model.sdf uses the ROS2 ray_sensor plugin, which can also generate LaserScans, but now including a new noise model.
Following this tutorial. Did a ros2 topic echo /robot1/scan. The start of a single scan looks like:
header:
stamp:
sec: 3607
nanosec: 425000000
frame_id: base_scan
angle_min: -1.5707999467849731
angle_max: 1.5707999467849731
angle_increment: 0.017453333362936974
time_increment: 0.0
scan_time: 0.0
range_min: 0.10000000149011612
range_max: 30.0
ranges:
- 1.3714041709899902
- 1.3739193677902222
- 1.3649139404296875
The laserscan of the p3at produces 181 points.
Created a simple obstacle_avoidance behavior. It works reasonably well (evading the robot in front could be more aggressive), but front-front-left and right keep on firing.
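For reference, a sketch of such a behavior reduced to three sectors (instead of the five directions used above), assuming the 181-point /robot1/scan from -90 to +90 degrees; sector boundaries, threshold and speeds are made up:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class ObstacleAvoidance(Node):
    def __init__(self):
        super().__init__('obstacle_avoidance')
        self.pub = self.create_publisher(Twist, '/robot1/cmd_vel', 10)
        self.create_subscription(LaserScan, '/robot1/scan', self.on_scan, 10)

    def on_scan(self, scan: LaserScan):
        n = len(scan.ranges)  # 181 beams, index 0 = -90 degrees (right), index 180 = +90 (left)
        right = min(scan.ranges[:n // 3])
        front = min(scan.ranges[n // 3:2 * n // 3])
        left = min(scan.ranges[2 * n // 3:])
        msg = Twist()
        if front < 0.7:  # obstacle ahead: turn towards the freer side
            msg.angular.z = 0.5 if left > right else -0.5
        else:
            msg.linear.x = 0.26  # modest forward speed
        self.pub.publish(msg)

def main():
    rclpy.init()
    try:
        rclpy.spin(ObstacleAvoidance())
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    main()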
Tried to visualize the laser_scan with rviz2, but the messages are dropped (although the tf base_scan is known):
[INFO] [1657725888.845906088] [rviz]: Message Filter dropping message: frame 'base_scan' at time 26.581 for reason 'Unknown'
Yet, none of the frames (base_link, base_scan, camera_frame, odom) have a parent.
That has to be corrected tomorrow.
In the middle of the room both FFR and FFL see obstacles at 10cm (while none are there). Does it self-collide?
obstacle in front-front-right
distances 0.100000,0.100000,0.100000 = 0.100000
obstacle in front-front-left
distances 0.100000,0.100000,0.100000 = 0.100000
July 12, 2022
Added a prefix to talker / listener of the demo_nodes_cpp: they still chatter OK.
Same when I change the 2nd parameter of create_subscription from 10 to 1 (as in vdb_mapping).
Made a talker and listener of PointClouds. With the private declaration of rclcpp::Publisher pub_ the compilation fails on no match for ‘operator=’ (operand types are ‘rclcpp::Publisher > >’ and ‘std::shared_ptr > > >’), with the trick std::shared_ptr> pub_; the talker compiles but at runtime aborts with symbol lookup error: ~/ros2_ws/install/demo_nodes_cpp/lib/libtopics_library.so: undefined symbol: rosidl_typesupport_cpp31get_message_type_support_handleIN11sensor_msgs3
Tried to let the listener listen to the messages published by the oak-d camera, but also here there is a symbol lookup problem.
Should look at how the oak-d is publishing its point-cloud.
Not clear how the point-cloud is published. Oak-d uses services, not publishers. In stereo_publisher.cpp a StereoDepth node is started. Depending on the withDepth flag a depth or disparity stream is started.
Should look in stereo.launch.py which nodes are started. Note that this script loads the library /opt/ros/foxy/lib/libdepth_image_proc.so; is this the dependency that is missing with my talker/listener? For instance stereo.launch.py reports: [container2]: Found class: rclcpp_components::NodeFactoryTemplate
Note that there are launch files for stereo, stereo_node and stereo_nodelet. The first one launches both a stereo_node and rviz displaying a point-cloud. It also launches a container, which does a metric conversion, mapping from image_raw to /stereo/depth. The second container creates the mapping from points to /stereo/points. Both containers are plugins from depth_image_proc, the first plugin ConvertMetricNode, the second PointCloudXyziNode.
The stereo_node launch launches a stereo_node, a rviz_node and includes the urdf.launch. The nodelet launch also includes the urdf.launch and rviz node, and launches 4 nodelets: next to the StereoNodelet two depth_image_proc plugin nodelets and a nodelet manager.
Added a build_depend and exec_depend on stereo_msgs and sensor_msgs in the package.xml of my talker/listener. Also updated the CMakeLists.txt. Still the same runtime error.
The package from oak-d has a very nice construct for ROS1 vs ROS2:
<buildtool_depend condition="$ROS_VERSION == 1">catkin</buildtool_depend>
<buildtool_depend condition="$ROS_VERSION == 2">ament_cmake</buildtool_depend>
<depend condition="$ROS_VERSION == 1">roscpp</depend>
<depend condition="$ROS_VERSION == 1">rospy</depend>
<depend>message_filters</depend>
<depend condition="$ROS_VERSION == 2">rclcpp</depend>
...
<depend condition="$ROS_VERSION == 1">nodelet</depend>
<export>
  <build_type condition="$ROS_VERSION == 1">catkin</build_type>
  <build_type condition="$ROS_VERSION == 2">ament_cmake</build_type>
</export>
Let's see what happens when I add a dependency on depth_image_proc to my talker/listener (as done by oak-d).
Note that RoboCup2022RVRL_Demo has a p3at description; not clear why that doesn't show up in rviz2.
On July 22, 2021 I used the I-matrix for the rectification, as defined in the camera_info. Should be in zed_components.cpp
For the oak-d the stereo_inertial_node crashes on a missing intrinsic matrix. That should be in the CameraInfo. The function calibrationToCameraInfo() is called, so I should check the intrinsic matrix there. The CameraInfo is filled by the calibrationHandler, which is a call to device->readCalibration();
Looked in the code of depthai-ros/depthai_bridge/src/ImageConverter.cpp. There calibrationToCameraInfo is defined, including (line 285-295) the definition of the intrinsics and other parameters.
Added a print-statement after those lines. Those are called right before the crash. Wrong cameraID?
Looks like the ImuPublisher has successfully started. Adding some print-statements later in the process. None of the later print-statements is reached, so it seems that it is the ImuPublisher.
The problem is the CalibrationHandler, which does not handle CameraId RGB correctly. Added protections, which seems to work. I see the point-cloud in rviz, although I receive the warning:
[stereo_inertial_node-1] [14442C10710891D000] [3.8.4] [195.295] [IMU(4)] [critical] IMU firmware version detected: 3.2.13 is not supported. Latest supported: 3.9.7. Use 'imu.enableFirmwareUpdate(True)' once to update firmware to latest.
There is an imu_firmware_update.py in the depthai-python examples and an imu_firmware_update.cpp in the depthai-core examples. depthai-core is a subdirectory of depthai-python (so I have it twice - removed the 'hidden' one). Yet, python3 imu_firmware_update.py gives:
imu_firmware_update.py", line 16, in
pipeline = dai.Pipeline()
TypeError: depthai.Pipeline: No constructor defined!
Yet, the examples are not built. Therefore the command cmake -S. -Bbuild -D'DEPTHAI_BUILD_EXAMPLES=ON' has to be given. Yet, this halts in depthai-core, but in the root-directory it works. Still, the imu_firmware_update.cpp is not compiled (many other examples are).
Did python3 install_requirements.py in depthai-python/examples. After that python3 imu_firmware_update.py worked. Also no longer a warning on the imu with the command ros2 launch depthai_examples stereo_inertial_node.launch.py
Followed the instructions and steered robot1 around. It dashed off and collided with a wall. Reduced the speed from 0.5 to 0.26, which makes it better controllable. Updated the README.md.
Continued with the SLAM Demo (for a single robot). rviz2 starts nicely, but also displays a (non-fatal) error:
Trying to create a map of size 64 x 32 using 1 swatches
[ERROR] [1657513409.134350713] [rviz2]: Vertex Program:rviz/glsl120/indexed_8bit_image.vert Fragment Program:rviz/glsl120/indexed_8bit_image.frag GLSL link result :
active samplers with a different type refer to the same texture image unit
This is a known issue. My hypothesis is that this should be repaired in the map (both eight_bit_image and palette refer to the same texture).
Drove the robot further, but the map gets corrupted quite fast when I continue around the corner. In the depth image I see the window to the room, but the corridor already has double walls, and a second cluster is also visible:
For the next demo multi-robot mapping is needed. My suggestion of May 20 was to look into VDB-Mapping, although the provided code is a ROS1 wrapper.
Started with cloning OpenVDB. As indicated on its website, OpenVDB is a C++ library comprising a novel hierarchical data structure and a suite of tools for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids.
Tried the build instructions for VDB Mapping ROS package, but that doesn't work for ROS2:
rosdep install --from-paths src --ignore-src -y
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
vdb_mapping_msgs: Cannot locate rosdep definition for [message_generation]
cpr_robot: Cannot locate rosdep definition for [rviz]
vdb_mapping_ros: Cannot locate rosdep definition for [dynamic_reconfigure]
dreamvu_pal_camera: Cannot locate rosdep definition for [rospy]
realsense2_description: Cannot locate rosdep definition for [rosunit]
realsense2_camera: Cannot locate rosdep definition for [ddynamic_reconfigure]
Looked into the Migration guide, and updated the format attribute of the package tag. This was not needed for the vdb_mapping packages or any other package. Note that this format is backward compatible with ROS1.
Copied the vdb_mapping packages to ros2_ws/src and updated the package.xml files.
Trying to do it automatically with ros2-migration-tools. Unfortunately it fails on import CatkinToAmentMigration, ROS1ROS2Packages.
The central script is ros_upgrader.py. As argument it needs the link to the package.xml file itself, not the path. It also needs compile_commands.json from the catkin_ws. It also needs clang 8.0.0, which is not available prebuilt for Ubuntu 20.04; only 11.0.0 is available. Instead installed clang-8 with sudo apt-get, which installs 8.0.1-9 and implicitly libclang-common-8-dev.
That gives clang_rt.so libraries, so downloaded the suggested tar-file (16.04), which works.
Converted vdb_mapping with python ros_upgrader.py -c ~/catkin_ws/src/vdb_mapping/build/compile_commands.json -p ~/ros2_ws/src/vdb_mapping/package.xml
Yet, calling export vdb_mapping_DIR=/home/arnoud/catkin_ws/src/vdb_mapping; colcon build --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=ON fails because there is no vdb_mappingTargets.cmake. Without this dependency on the package vdb_mapping, compilation fails on fatal error: vdb_mapping/OccupancyVDBMapping.h: No such file
Doing catkin_make fails because vdb_mapping is (plain cmake). OpenVDBUtils_DIR=/usr/local/lib/cmake/OpenVDB/;catkin_make_isolated fails on a missing openvdbutils-config.cmake. Without this dependency it fails on missing openvdb libraries, like:
Mapping.cpp:(.text._ZN7openvdb4v9_14math8internal4halfC2Ef[_ZN7openvdb4v9_14math8internal4halfC5Ef]+0x25): undefined reference to `openvdb::v9_1::math::internal::half::_eLut'
Installed openvdb via rosdep as suggested by the VDB Mapping ROS package. Now vdb_mapping can be built, but catkin_make_isolated for vdb_mapping_ros fails on -Werror=unused-parameter. Removed the Werror as compile_option from vdb_mapping_ros, which compiles, but fails on a missing tbb library. That is strange, because libtbb-dev was installed before openvdb. Different version?
Some of the other ros2-packages in my workspace fail, so I'd better use --packages-select. Now vdb_mapping_ros could be built. Yet, vdb_mapping failed on some clang-tests (old version of clang). Removed clang-8 and installed the default version of clang (clang-10); also had to install clang-format and clang-tidy.
That solves the build issue. Still, source install/setup.bash gives /home/arnoud/ros2_ws/install/vdb_mapping/share/vdb_mapping/local_setup.bash. Could be because vdb_mapping is a plain cmake package (or the changes that I made while automatically updating the CMakeLists.txt).
The command ros2 launch vdb_mapping_ros2 vdb_mapping_ros2.py starts without problems; with ros2 topic list I see:
/parameter_events
/raw_points
/rosout
/scan_matched_points2
/tf
/tf_static
/vdb_mapping/vdb_map_overwrites
/vdb_mapping/vdb_map_pointcloud
/vdb_mapping/vdb_map_updates
/vdb_mapping/vdb_map_visualization
According to the documentation two topics are advertised:
~/vdb_map_visualization (type: visualization_msgs/Marker)
~/vdb_map_pointcloud (type: sensor_msgs/PointCloud2)
The first topic publishes the resulting map as a voxel marker, the second as a point cloud.
According to the documentation the node is subscribed to two topics:
~/Parameter:aligned_points (type: sensor_msgs/PointCloud2)
~/Parameter:raw_points (type: sensor_msgs/PointCloud2)
The default values for those two topics are scan_matched_points2 (aligned to a map frame) and raw_points (still in the sensor frame).
When I launch the RoboCup 2022 Demo ros2 launch rvrl_gazebo house_map.launch.py the depth camera publishes PointCloud2 on the topic /robot1/camera/points
So rerun with ros2 launch vdb_mapping_ros2 vdb_mapping_ros2.py raw_points:=/robot1/camera/points.
Could add both vdb-topics to rviz2, but although the status is OK I see nothing.
The command ros2 topic echo /robot1/camera/points shows that the topic is updated. Yet, I also don't see this topic in rviz2!
Let's see if I can see the PointCloud2 from the OAK-D camera.
The official ros1 and ros2-drivers can be found at Luxonis github.
Had to modify the python-dev dependency in the install script to a python3-dev dependency. Created a depthai-core directory in my git-directory (normally this temporary directory is removed from /tmp after installation).
The vcs import src < underlay.repos didn't show any progress, so I cloned the two repositories manually.
The rosdep installed ros-foxy-vision-msgs, ros-foxy-foxglove-msgs and ros-foxy-depth-image-proc.
The command ros2 launch depthai_examples stereo_inertial_node.launch.py starts a rviz window, but only the camera model is visible. The stereo_inertial_node crashes with the following message:
[rviz2-5] Parsing robot urdf xml string.
[stereo_inertial_node-1] Listing available devices...
[stereo_inertial_node-1] Device Mx ID: 14442C10710891D000
[stereo_inertial_node-1] Device USB status: SUPER
[stereo_inertial_node-1] terminate called after throwing an instance of 'std::runtime_error'
[stereo_inertial_node-1] what(): There is no Intrinsic matrix available for the the requested cameraID
[ERROR] [stereo_inertial_node-1]: process has died [pid 190733, exit code -6, cmd '/home/arnoud/dai_ws/install/depthai_examples/lib/depthai_examples/stereo_inertial_node --ros-args --params-file /tmp/launch_params_bv5lq7qi
The example ros2 launch depthai_examples rgb_stereo_node.launch.py has the same problem.
At least ros2 launch depthai_examples rgb_publisher.launch.py works, which shows in rviz2 the topic /color/image.
Also ros2 launch depthai_examples stereo.launch.py works, which shows in rviz2 the topics /right/image_rect (in bw) and /stereo/points (the PointCloud2 that I wanted):
Yet, nothing happens when I run vdb_mapping with raw_points:=/stereo/points.
Inspected the code. The node.cpp is rather short, the actual code can be found in include/vdb_mapping_ros2/VDBMappingROS2_impl.hpp.
Added a print-statement to test which raw_points topic is specified. The command line is ignored; it still uses the default value.
Tried to set it with ros2 param set vdb_mapping_ros2 raw_points /stereo/points before ros2 launch vdb_mapping_ros2 vdb_mapping_ros2.py, but ros2 param set only works at runtime (when the node is already started).
So, created a new launch file which reads a new config/*.yaml file. Now the parameter is updated and ros2 topic info /stereo/points shows that there are two subscribers (rviz2 and vdb_mapping_ros2).
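A sketch of such a launch file; the executable name and the config file are assumptions, the point is passing the YAML (containing e.g. raw_points: /stereo/points) as parameters:

import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # hypothetical config file with the remapped topic names as parameters
    params = os.path.join(get_package_share_directory('vdb_mapping_ros2'),
                          'config', 'oakd_params.yaml')
    return LaunchDescription([
        Node(package='vdb_mapping_ros2',
             executable='vdb_mapping_ros2_node',  # assumed executable name
             name='vdb_mapping_ros2',
             parameters=[params],
             output='screen'),
    ])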
Added print-statements where the subscriptions are made. Those print. Yet, in the two Callback functions nothing happens. Tried without the slash: still no callback and the number of subscribers also remains the same. If I do an echo, I see the subscription number increase.
The create_subscription seems to be inherited from rclcpp::Node.
Looking into the ros2 demos to see if I could upgrade the listener in demo_nodes_cpp to the topic /stereo/points. Cloned the ros2 demos into my ros2_ws.
Built it with colcon build --cmake-clean-cache --packages-select demo_nodes_cpp. The build succeeds. How to call the demos is not in the readme (there is no readme!), but can be found in the docs.
So, ros2 run demo_nodes_cpp talker works. Also the corresponding ros2 run demo_nodes_cpp listener hears what is produced. Continue tomorrow to make this work on a topic with a prefix, followed by a PointCloud2.
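For comparison, a PointCloud2 listener in Python is only a few lines; a sketch assuming the /stereo/points topic from the OAK-D launch:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2

class PointCloudListener(Node):
    def __init__(self):
        super().__init__('pointcloud_listener')
        self.create_subscription(PointCloud2, '/stereo/points', self.on_cloud, 1)

    def on_cloud(self, msg: PointCloud2):
        # just report that the callback fires and how large the cloud is
        self.get_logger().info(f'received {msg.width * msg.height} points '
                               f'in frame {msg.header.frame_id}')

def main():
    rclpy.init()
    try:
        rclpy.spin(PointCloudListener())
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    main()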
July 4, 2022
One of the benefits of ROS2 is the handling of communication degradations. It would be interesting to study this Latency Management for ROS2 paper, including its Open Problems.
June 28, 2022
Followed the ROS2 Humble Open Class. In one hour, the notebook was explained up to the ros2cli --flow-style feature.
Read Flexible Disaster Response of Tomorrow, which describes the final tests of the Centauro system. Impressive manipulation with two arms (each with their own gripper design).
The StockPhoto illustration would be great for a new t-shirt in the Rescue League:
Continue with Exercise 2.7
Added call_map_service.py to catkin_install_python(PROGRAMS) in CMakeLists.txt, which makes rosrun get_map_data call_map_service.py executable. Yet nothing happens, so it seems that the listener is just listening.
At the provide_map side I see that the listener expects the topic /map_metadata to have the std_msgs/String format, while it actually is a nav_msgs/MapMetaData message.
Looked up the actual MapMetaData format and printed the resolution and height/width:
[INFO] [1655985264.794673, 1129.091000]: /listener_3276_1655985264560I heard 0.050000 resolution
[INFO] [1655985264.796686, 1129.093000]: /listener_3276_1655985264560I heard 640 x 608 height & width
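A sketch of the relevant part of that listener (ROS1, as in the exercise); the node name and log format are placeholders:

#!/usr/bin/env python
import rospy
from nav_msgs.msg import MapMetaData

def on_metadata(msg):
    # print the fields that matter for the exercise
    rospy.loginfo("I heard %f resolution", msg.resolution)
    rospy.loginfo("I heard %d x %d height & width", msg.height, msg.width)

if __name__ == '__main__':
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('/map_metadata', MapMetaData, on_metadata)
    rospy.spin()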
Continue with Exercise 2.11
Created a new package with catkin_create_pkg my_mapping_launcher turtlebot_navigation_gazebo.
Struggled a lot with Exercise 2.15, because a yaml file is not an xml file: particles: 10 instead of <param name="particles" value="10"/>. Now roslaunch launch/my_gmapping_launch.launch works (while roslaunch my_gmapping_launcher my_gmapping_launch.launch does not). The parameters in the xml file should be commented out; the rosparam doesn't overwrite them (also not when the call is after the definition). My struggle could have been solved by reading Note 1 of this exercise!
The last (optional) exercise is to build a rosbag.
June 22, 2022
Looked at the 2nd part of the Navigation course. Drove around and created a map of the kitchen:
At the end of lesson 2.2 I saved the map with rosrun map_server map_saver -f /tmp/my_map, as suggested in the TurtleBot navigation tutorial. The result:
[ INFO] [1655899510.054575335]: Waiting for the map
[ INFO] [1655899510.303843834]: Received a 608 X 640 map @ 0.050 m/pix
[ INFO] [1655899510.303900698]: Writing map occupancy data to /tmp/my_map.pgm
[ INFO] [1655899510.313575207, 1148.002000000]: Writing map occupancy data to /tmp/my_map.yaml
[ INFO] [1655899510.314180877, 1148.002000000]: Done
For Exercise 2.5, I created a new package with command catkin_create_pkg provide_map map_server, as suggested in this tutorial.
Created a simple launch file as suggested by this tutorial:
pkg="map_server"
type="map_server /tmp/my_map.yaml"
name="my_map"
output="screen"
/>
This doesn't work because map_server is a pre-installed package, and not inside the provide_map package.
Used the suggestion given by this answer, which actually works:
pkg="map_server"
name="map_server"
type="map_server"
args="$(arg map_file)"
output="screen"
/>
So, the sequence catkin_make, source devel/setup.bash, roslaunch provide_map provide_map.launch works.
Continue with Exercise 2.7
June 20, 2022
ISAAC Sim has in its 2022.1 release a wheeled version of the Anymal and an Isaac Sim/Gazebo Connector.
June 17, 2022
Should make use of my ConstructSim subscription to explore the course Gazebo in 5 days, but this course seems to no longer exist.
Can only find three Gazebo courses, the most interesting being Deep Learning with Domain Randomization.
June 15, 2022
On LinkedIn, Raffaello indicated that nanosaur v2.0 is now released. According to the roadmap, this integrates both nanosaur Gazebo and Isaac Sim. The next version will integrate the Nvidia expansion shield NX, followed by the ROS2 Humble version. Curious if it really will be a Xavier expansion shield, because that board is above $2000 and was already released in 2020.
DeepMind has acquired MuJoCo, and made the physics engine available on github.
The engine has its own visualization front-end, and only a Unity plugin. Didn't see any other embeddings / model zoo.
CMU has collected Tartan Drive, an off-road dataset. Unfortunately it only provides RGB images as sensor data, although these are the left images from a stereo camera (Carnegie Robotics Multisense S21), which they combined via a disparity map into a local 3D map, which is projected onto a height map (colored into an RGB map). The code to convert the stereo images to disparity and point clouds is not given, and this intermediate data is also not in the ROS bag.
The launch now fails on a missing rclpy binding. That is a difficult one, because it expects the 36m.so in /opt/ros/humble/lib/python3.10/site-packages, but it is not even in the dreamvu site-packages, or /opt/ros/noetic/lib/python3.
Looked at the suggested Humble Troubleshooting and tried to rebuild the ros2_ws from scratch. Unfortunately, building with python3.6 fails on missing ament (because I failed to source humble/setup.bash in my new shell).
Rebuilt rclpy from source (humble branch). This builds the python-310.so (while the default python3 was 3.6).
Looking into the CMakeLists.txt file. Increased the cmake version to > 3.19 and added 3.6.13 as required version for find_package(Python3), but still the python3.10 library is built.
Leave it for the moment.
nb-ros runs Ubuntu 18.04 and has python3.6.9 as default. The latest Ubuntu 18.04 ROS2 release seems to be Eloquent.
Simply installing with sudo apt install ros-eloquent-ros-base doesn't work.
After updating the ROS 2 apt repositories, eloquent ros-base is installed.
There are several 36m libraries in /opt/ros/eloquent/lib/python3.6/site-packages, but not the pybind one.
Built rclpy (eloquent branch) from source. No problems with the build, although the pybind library is not built.
Building the humble branch fails on nb-ros, because the rcl_logging interface is not found. Tried to make rcl_logging (humble branch) from source, but that fails on ament_export_targets.
Tried to build the rclpy foxy branch on nb-ros, which fails on some C-definitions.
On nb-dual (Ubuntu 20.04), there is also no rclpy_pybind11 library in /opt/ros/foxy/lib/python3.8/site-packages/rclpy.
Tried to start the node in ROS1-style, which doesn't need rclpy. This fails on PAL_MIDAS:Init. The call is made from /opt/ros/humble/lib/python3.10/ros2run/command/run.py, which seems to have problems with finding the corresponding python3.6. Even with the dreamvu site-packages at the start of the PYTHONPATH and after activating the dreamvu virtual environment.
Looked at last year's labbook, and tried python3 main1.py. That also gives a MIDAS assertion. Tried python3 test_py_installations.py, which works. Explorer gives a DualEnet assertion.
Started a vanilla terminal. Explorer fails on a missing libPAL.so. Adding the Dreamvu lib to LD_LIBRARY_PATH starts Explorer, which fails on DualEnet.
Went to installations/camera_data and ran setup_python_lib.sh. Still Explorer fails on DualEnet. Ran sudo PAL_udev.sh and rebooted.
Boot halts. Went into recovery mode and did an update (including firmware). Still hangs on 'started daemon for power management' (after snapd is fully seeded). I had already seen some warnings on firefox and snap, so I looked at what I did on April 27 to solve the firefox-snap issue.
The boot gets a bit further, but now fails after the docker container, on the Postfix Mail Transport Agent.
Find it suspicious that nvidia-driver-510 was held back, so installed that manually. During that install I saw that apt-get had some trouble with python, so I switched back to the default python3.10. Now the system boots again.
Still, even python3.6 main1.py fails on the MIDAS assertion.
At the Enrich 2021 hackathon, 8 teams participated in the Search and Rescue scenario.
European Robotics has a nice logo:
The 2017 and 2019 versions of this hackathon were at the same location: the Zwentendorf Nuclear Power Plant.
I don't see any simulation downloads, only some links to ROS drivers and software to evaluate the maps: CloudCompare.
May 27, 2022
Installed on my XPS9370 (Ubuntu 22.04) the first ROS release dedicated to this LTS: ROS2 Humble.
Note that the XPS9370 system has some nvidia-packages that are held back.
Solved some minor bugs (.local and c_str() warning).
Yet, the build still fails because libavutil.so.56 has undefined references to OpenCL 1.0.
That is strange, because on nb-dual precisely the same libavutil.so.56 is used.
Yet, on nb-dual locate *OpenCL.so.1 gave /usr/lib/x86_64-linux-gnu (and ~/packages/QRB5165, but that will not be loaded). On the XPS9370 also two cuda variants and firefox snaps were visible. Removing the cuda8 variant (needed for tensorflow-1.4) solved this issue.
The build is finished. Should get the camera to test.
May 23, 2022
On nb-dual the SDK Manager is now upgraded to 1.8.0 (used to flash the Jetson Nano boards).
May 20, 2022
Would be interesting to see VDB-Mapping applied to a rescue scenario, especially with the distributed/remote option.
May 6, 2022
It would be nice to see if I could reproduce some of the Nvidia TAO pipeline with the DreamVU camera.
May 5, 2022
Watched the video of Spot's new upgrades, including a mesh network and an Nvidia Jetson computing backpack. The most important upgrade is that the 5 side-cameras are now colour.
April 28, 2022
Orbbec has a new 3D camera, the Persee++, with an onboard NPU, but the SDK is still in beta (and supports the camera, not explicitly the NPU).
April 27, 2022
Updated the XPS9370 to Ubuntu 22.04, so that I can install ROS2 Rolling. See for details the CS Labbook of April 27.
April 26, 2022
Following the Object detection class at ConstructSim, which actually uses a ros-bridge to convert a ros1 point-cloud to a ros2 topic.
A nice trick in rviz2 is to rename the base-frame from map to base_link.
April 22, 2022
Read this story on multicam pipelines on the Jetson, although the first link to the personal repository does not work; the code has moved to an Nvidia repository.
Compilation halted on libyaml_vendor, which requires a patch which no longer exists. Could try to modify the CMakeLists.txt file, but yaml 0.6.0 is already installed, so it is not clear why I also need libyaml-0.2.5. Moved this directory out of the /opt/ros/foxy/src directory, which means that only 216 modules have to be built.
That trick doesn't work, because rcl_yaml_param_parser depends on libyaml_vendor. Moved the libyaml_vendor back, and removed the no longer existing patch command.
The build now continues, but with one thread. The other one is processing libyaml_vendor, without showing any progress.
Cloned libyaml and configured the includedir where it is expected (/opt/ros/foxy/include/libyaml_vendor), so that the build can continue. After 7h of compiling, it had compiled 200 modules, with still 15 to go. I aborted the process in the build of image_view and rosbag2_transport.
April 2, 2022
The upgrade of cmake as suggested in the dockerfile instructions solves this:
wget -qO - https://apt.kitware.com/keys/kitware-archive-latest.asc | apt-key add - && \
apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' && \
apt-get update && \
apt-get install -y --no-install-recommends --only-upgrade \
cmake
The colcon build continues with at least 57/217 packages.
Nvidia introduced Orin, including Jetpack 5, which is built on top of Ubuntu 20.04. The new Orin modules are compared with Xaviers, not Nanos. The systems will become available in Q4 (10W and 15W).
After installing python3-vcstool, the command vcs import src works. Yet, it clones several ros-packages from src into /opt/ros/foxy/src, which requires sudo rights.
The system is very slow, installing apt-utils takes ages.
Built yaml-cpp.so.0.6 from source. Note that libyaml-0.so.2 did already exist.
March 21, 2022
Activated the Nanosaur again.
With both systems connected to the robolab network, I could connect to the rosboard webinterface via http://ip:8888/
Both the camera_info and image_raw could be visualised.
Looked at the keyboard tutorial. I could do source nanosaur and run ros2 run teleop_twist_keyboard teleop_twist_keyboard --ros-args --remap cmd_vel:=key_vel, but nothing happens (while I have the two eyes and we are connected to the same network). No error message. Tried to monitor the cmd_vel in the webinterface, but I can keep waiting for data. At the nanosaur, I have only /opt/ros/foxy/src, no setup yet.
The same three docker images are active. All three containers were created 6 weeks ago.
Used the sdkmanager to do that. The sdkmanager executes a flash command, but unfortunately the CRC8 doesn't match (calculated 0x5b != stored 0x5).
This looks like the error reported on the nvidia forum. The BOARDID is clearly 3448, but the FAB depends on the board's sku, which is described in the README_Autoflash.txt. The command sudo ./nvautoflash.sh --print_boardid should give the boardid (and not flash the board), but even this fails.
February 14, 2022
Trying to reproduce my parameter-problems from December 24.
Started ros2 run pal_camera capture, which fails with bool PAL_MIDAS::Init(int): Assertion `m_pModule != NULL' failed. This is because python3 is pointing to the default python3.8.
Removed python3.6 from my path on January 31.
Executing export PATH=/home/arnoud/packages/PAL-Firmware-v1.2-Intel-CPU/installations/dreamvu_ws/bin:$PATH solves this. Added this to the run.sh script.
The node is started with namespace '/':
[INFO] [1644830272.457952595] [pal_camera_node]: * namespace: /
[INFO] [1644830272.457959146] [pal_camera_node]: * node name: pal_camera_node
[INFO] [1644830272.457965601] [pal_camera_node]: ********************************
[INFO] [1644830272.457971567] [pal_camera_node]: *** GENERAL parameters ***
[INFO] [1644830272.457994396] [pal_camera_node]: * Camera model: pal_usb
[INFO] [1644830272.458027546] [pal_camera_node]: * Camera position x: 0
[INFO] [1644830272.458046316] [pal_camera_node]: * Camera position y: 0
[INFO] [1644830272.458060597] [pal_camera_node]: * Camera position z: 0.06
[INFO] [1644830272.458073377] [pal_camera_node]: * Camera orientation roll: 0
[INFO] [1644830272.458085846] [pal_camera_node]: * Camera orientation pitch: 0
[INFO] [1644830272.458099051] [pal_camera_node]: * Camera orientation yaw: 0
At least ros2 run image_view image_view --ros-args --remap image:=/dreamvu/pal/get/right is still working.
When I do ros2 run rviz2 rviz2, the images are dropped.
Added a TF view to rviz2. Both a base_link and a /dreamvu/pal/camera_center are present. I should check which TF is used by the capture node.
In the current code mCameraName + mCameraCenterFrameId was commented out, and just "camera_center" was used. The right image still used mCameraName + mCameraCenterFrameId, which is displayed without problems in rviz2.
Rebuild the code without "camera_center" with colcon build --symlink-install --cmake-args=-DCMAKE_BUILD_TYPE=Release --packages-select pal_camera.
Now the left camera also works again. Yet, the node is not using the camera_name parameter to advertise its topics (see line 272 of pal_camera_node).
Strange: when I run ros2 run pal_camera capture --ros-args --remap camera_name:=/dreamvu/pal3 it ignores the camera_name, but when I add camera_model:=pal_mini it indicates unknown ROS arguments, while with only camera_model it ignores this parameter, and with general.camera_model:=pal_mini it is again an unknown argument. Maybe I should use the launch file, as with the dreamvu_pal_camera_description.
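For reference, node parameters are normally passed with -p instead of --remap, e.g. ros2 run pal_camera capture --ros-args -p general.camera_model:=pal_mini (assuming the node declares such a parameter); --remap/-r only renames nodes, topics and services.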
Did ros2 launch pal_camera pal_camera.launch.py camera_model:=pal_mini. Now the start is a bit different:
[capture-1] [INFO] [1644838217.823258671] [dreamvu.pal.pal_camera_capture]: ********************************
[capture-1] [INFO] [1644838217.823373288] [dreamvu.pal.pal_camera_capture]: DreamVU PAL Camera v1.2.0.0
[capture-1] [INFO] [1644838217.823382961] [dreamvu.pal.pal_camera_capture]: ********************************
[capture-1] [INFO] [1644838217.823388176] [dreamvu.pal.pal_camera_capture]: * namespace: /dreamvu/pal
[capture-1] [INFO] [1644838217.823393326] [dreamvu.pal.pal_camera_capture]: * node name: pal_camera_capture
[capture-1] [INFO] [1644838217.823397824] [dreamvu.pal.pal_camera_capture]: ********************************
[capture-1] [INFO] [1644838217.823402317] [dreamvu.pal.pal_camera_capture]: *** GENERAL parameters ***
[capture-1] [INFO] [1644838217.823424609] [dreamvu.pal.pal_camera_capture]: * Camera model: pal_usb
[capture-1] [INFO] [1644838217.823463963] [dreamvu.pal.pal_camera_capture]: * Camera name: pal
The namespace is now correct, the camera_name is not (Advertised on topic: /dreamvu/pal/palget/left and Publishing Image with FrameId: palcamera_center). The camera_model parameter is not read (or is overwritten).
The camera_model should be used to read the correct config/pal_usb.yaml. Yet in the default one the parameters camera_model and camera_name seem to be defined. Changed the camera_name in config/pal_usb.yaml to '/dreamvu/pal3/'.
Now the capture node correctly publishes images under the topic '/dreamvu/pal3/get/left'. Yet, rviz drops the messages because the transformation to '/dreamvu/pal3/camera_center' is not published.
That was another piece of code commented out (line 431). Yet, when I uncomment this piece of code, the TF /dreamvu/pal3/camera_center is visible in rviz, and it is used when publishing the Image with FrameId: /dreamvu/pal3/camera_center. Yet, there is no transform from this TF to map!
Strange, because the transforms from this FrameId to base_link, and from base_link to map, are both published. Yet, this transform is not part of the tree. Missing '/' in front of base_link?
It was not the wrong tag; the system started working when I moved the transform base_link to camera_center into the publish-loop:
A bit inefficient for a static transform. zed_camera_component does a sendTransform from the StaticTfBroadcaster (CameraImu) and the TfBroadCaster (Odom).
Added a StaticTfBroadCaster, and could remove the broadcast from the loop. Finally, it seems to work. Time to clean up and publish.
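The analogous construct in Python (the pal_camera node itself is C++): a sketch with placeholder frame names and offsets, broadcasting the static base_link to camera_center transform once at startup:

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster

class CameraTf(Node):
    def __init__(self):
        super().__init__('camera_tf')
        self.broadcaster = StaticTransformBroadcaster(self)
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'base_link'
        t.child_frame_id = 'camera_center'  # placeholder frame name
        t.transform.translation.z = 0.06    # camera position z, as in the parameters above
        t.transform.rotation.w = 1.0        # identity rotation
        self.broadcaster.sendTransform(t)   # latched: no need to repeat it in the publish loop

def main():
    rclpy.init()
    try:
        rclpy.spin(CameraTf())
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    main()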
Committed this version.
The version also still works with the simple run.sh script. Also works in combination with the camera_description. Removed the Publish image INFO. Should check why the left image is smaller than the right image. Depth also works.
Removed the Map2Base from the loop. Now the code no longer works. Broadcasting this transformation again: works fine.
Connected a Logitech joystick, but no response (not even on powering the nanosaur down). Maybe check the battery status of the joystick.
Checked the webinterface of the nanosaur. I could connect, and system stats and top work. Only the Logviewer of dmesg gives no output (although the process is visible in top). Also no camera_info or camera_raw are displayed:
Ros-foxy and Ubuntu 18.04 are not a regular combination, so I followed the advice of this post and looked if I could install ros-foxy-ros-base by replicating the commands in the dockerfile.
Installed gnupg2 with sudo apt-get install gnupg2.
Added ros2.list to /etc/apt/sources.list.d.
ros-foxy-ros-base is not part of that list. From the docker-file, it seems that ROS is built from source.
After a sudo apt-get update I could install python3-colcon-common-extensions, which installs 33 other packages (including python3-colcon-ros and python3-catkin-pkg-modules). I skipped libbullet-dev. Continued with python3-rosdep and python3-rosinstall-generator.
Created /opt/ros/foxy/src. Ran the rosinstall_generator script with only ros_base, v4l2_camera and image_transport, but still cv_bridge and yaml are included. So I need to install those first (from source).
First looked at which dockers are available and already running. Looked with docker container list and docker image list which containers are active. Three are active: containrrr/watchtower, nanosaur/webgui and nanosaur/nanosaur.
Ran nanosaur update on nb-dual, went to the nanosaur_ws and did a git pull. Started the environment with src nanosaur.
Did ros2 topic list, which shows /image_raw and /camera_info. Yet, ros2 run image_view image_view --ros-args --remap image:=/image_raw shows nothing. rviz2 starts, and when I add an image view, the status is OK. Yet, no image is displayed.
Connected the Nanosaur with power from the powerbank to both the extension board and the Jetson. Now no power warnings are given, and I could drive the robot around (although backwards when I press up, and forward when I press down):
Now the image_view is also visible on the nanosaur/webgui:
The image_view now also works on nb-dual. Only rviz2 fails (because of a missing transformation, see my work on the dreamvu camera):
February 2, 2022
The Jetpack 4.6 of the Nanosaur boots nicely. You get a configuration wizard for Ubuntu 18.04 first. Chose all standard options, although the default 10W option is not the best choice when using USB power.
Selected sudo nvpmodel -m 1, because I receive the MAXN warnings (could also be done interactively at the top bar).
Did a sudo apt upgrade, which installs 196 updates (from jetson/common r32.6).
Anyway, running the nanosaur install script. This script starts with libpython3.6-dev, nano, jetson-stats, ros2_system_manager and docker-compose, and starts the nanosaur docker-compose (which downloads the nanosaur docker-image). The nanosaur docker-compose can be found in /opt/nanosaur/docker-compose.yml. This file points to the wiki if you want to change the configuration.
Looked at the architecture page. Zoomed in on jetson_stats. This requires ros, so first tried to build ros2_system_manager from source. Running sudo -H pip3 install -v -e . didn't work because python3.7 was expected (with python3.6 installed). The command colcon build --symlink-install fails on missing colcon. The Readme of jetson_stats is clearer: the clone should be cloned into a ros2_ws. Note that the instructions still use eloquent as ros-distribution. Note that it assumes that a ros development environment is installed. Part of the development tools is python3-colcon-common-extensions. Without the ros-repository this package cannot be found.
First the reboot from the nanosaur script. The eyes at least work!
I have the wide-angle IMX219 camera already. I also have the Pi cam (v2.1), but with the default lens replaced with a wide-angle lens. Found the original lens again.
Removed the Raspberry Pi v2.1 camera from the Silicon Highway Jetbot, switched the lens back and installed it into the NanoSaur. The NanoSaur is now complete, except for the on-off switch:
Continued with downloading JetPack 4.6 for the 2GB Nano. But actually, I used a 4GB Nano (with 4 USB ports).
Downloaded Etcher for Linux and flashed the image.
Watched the webinar.
Soon Raffaello will add a docker for the OAK-D camera in his tutorial.