Wishlist
- Connection to UsarCommander
- ROS nodes working for the platform, arm, laser scanner and INS.
- Connection to playchess
August 23, 2024
- Basile liked Bas Terwijn's 'Youbot chess robot' document very much, but some packages are still missing. We should have taken care of that recommendation.
- On November 11, 2014 I described some experiments on nb-ros (just updated to Ubuntu 14.04 at the time). Yet, nb-ros now runs Ubuntu 18.04, without /opt/ros/humble (only eloquent and melodic).
- In January 2014 I also did some experiments with the youbot_drivers, but couldn't find the code on nb-ros in ~/packages or ~/catkin_ws.
July 21, 2024
- The b-it-bots are still working with the KUKA youBot as platform.
July 12, 2024
- The exchange students have found this setup guide for ROS Indigo (Ubuntu 14.04).
- Note that youbot_driver (Oct 2017) also has a ROS Jade version; Jade is the release in between ROS Indigo and ROS Kinetic.
July 2, 2024
- Looked at the euROBIN coopetition, which contains a marketplace of modules, with points awarded if you use someone else's code. There are three challenges: industrial, indoor service, and outdoor service.
June 7, 2024
- With the KUKA youBot, the problem is that the ros-hydro packages are no longer to be found in the Ubuntu ROS package list.
- Yet, in older snapshots at the Internet Archive the package lists are still there. They point to a Pool directory. In the latest snapshots the ros-hydro directory is gone, but in the 2015 snapshot it still contains ros-hydro packages.
- By 2020 ros-hydro is gone; the last snapshot with it is from May 11, 2019. Yet, ros-hydro-desktop-full itself is not archived.
- Looked into the mirror ros-shadow-fixed, which gives the same error.
-
- Started tunis again. Had some problems getting the screen working: the VGA-only monitor didn't work, nor did the U2412M I usually work with. Yet, the LOC monitor from WS9 worked (with a DVI-HDMI converter).
- Made a kuka account, so that Juliette and Basile could use the packages still present on tunis.
- Seems that all installed packages are in /var/cache/apt.
- Could try to use this 'install deb from cache' trick (see the sketch below).
- Should install dpkg-dev first. Not needed, it was already installed. Yet, dpkg-scanpackages . gave Wrote 0 entries to output Packages file.
- According to this post, I should have seen /var/cache/apt/archives/*.deb, but it seems that I did an apt clean at some point, because not a single *.deb file is there.
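- For reference, a minimal sketch of how that 'install deb from cache' trick would have looked, assuming the *.deb files had still been in /var/cache/apt/archives (the file name local-cache.list is my own choice):
cd /var/cache/apt/archives
sudo apt install dpkg-dev                              # provides dpkg-scanpackages
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz # index the cached *.deb files
echo "deb [trusted=yes] file:/var/cache/apt/archives ./" | sudo tee /etc/apt/sources.list.d/local-cache.list
sudo apt update                                        # cached packages are now installable again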
- Gave tunis to Juliette and Basile.
May 23, 2024
April 24, 2024
- Could try to synchronize, as suggested in this post. Yet, in that case the time difference was only 3 seconds, while mine was huge.
- Install tf2_tools and run the view_frames script for 5 seconds with rosrun tf2_tools view_frames.py
- Waited longer than 5 seconds, but still no frames.pdf was generated. This could mean that no tf is being published at the moment.
- Started roslaunch velodyne_description example.launch again, now without any other rviz in the background. The status of VLP-16 is OK, although no /velodyne_points are visible. The RobotModel fails, because there is no transform from [vehicle/base_link] to [base_footprint].
- Ran rosrun tf2_tools view_frames.py, which now created a frames.pdf which indicates 'No tf data received'.
- Strangely enough tf2_monitor didn't work under ROS noetic, so I tried the suggestions from here. Unfortunately, rosrun rqt_tf_tree rqt_tf_tree failed on optional_import('PyQt5').
- After running these two commands:
rosrun tf2_ros static_transform_publisher 0.1 0 0.2 0 0 0 map vehicle/odom
rosrun tf2_ros static_transform_publisher 0.2 0 0.3 0 0 0 vehicle/odom vehicle/base_link
rviz gives three TF warnings: no transform from [vehicle/odom] to frame [base_footprint], from [map] to frame [base_footprint], and from [vehicle/base_link] to [base_footprint].
- After adding the last transform with the command:
rosrun tf2_ros static_transform_publisher 0.2 3 0.4 0 0 0 vehicle/base_link base_footprint
all rviz displays are without error, but still no VLP object to be seen.
- Started the point-cloud node again with roslaunch velodyne_pointcloud VLP16_points.launch.
- No velodyne_points are seen (neither in the PointCloud2 nor in the VLP-16 display). Missing is another transform:
rosrun tf2_ros static_transform_publisher 0.2 2 0.4 0 0 0 base_footprint velodyne
After that, both the VLP-16 and the PointCloud2 measurements are visible.
- No idea why the VLP-16 mesh is not visible; the urdf is loaded as robot_description.
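- For convenience, the complete static transform chain used above (map -> vehicle/odom -> vehicle/base_link -> base_footprint -> velodyne) can be started in one go; a sketch that simply backgrounds the four commands from above:
rosrun tf2_ros static_transform_publisher 0.1 0 0.2 0 0 0 map vehicle/odom &
rosrun tf2_ros static_transform_publisher 0.2 0 0.3 0 0 0 vehicle/odom vehicle/base_link &
rosrun tf2_ros static_transform_publisher 0.2 3 0.4 0 0 0 vehicle/base_link base_footprint &
rosrun tf2_ros static_transform_publisher 0.2 2 0.4 0 0 0 base_footprint velodyne &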
-
- Time to move on, and to see if I can get rerun working with ROS, for the Vision for Autonomous Robots course.
-
- Demonstrated the Mover5 movement to Pieter.
- Also activated the webcam, using video_stream_opencv.
- Although dmesg | tail reported input31:
input: HD USB Camera: HD USB Camera as /devices/pci0000:00/0000:00:14.0/usb3/3-8/3-8.2/3-8.2:1.0/input/input31
the check of /dev/video* only showed 5 devices; I have to use /dev/video4. The command rosrun video_stream_opencv test_video_resource.py 4 gives a live video stream from the Mover5 camera.
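- To get a continuous ROS image topic instead of the test script, something like the following should work (a sketch; as far as I know the video_stream_provider argument accepts a device path or index):
roslaunch video_stream_opencv camera.launch video_stream_provider:=/dev/video4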
April 23, 2024
- Connecting the Mover5 again to nb-dual (native Ubuntu 20.04).
- Following the instructions of June 11, 2022.
- Connected to power, blue led lights up.
- Both PCAN-USB adapter and HD USB Camera are recognized, according to dmesg | tail.
- Executed source ~/bin/startup_can_interface.sh, which asked for sudo rights. Success can be checked with dmesg | tail again.
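- I didn't note down what startup_can_interface.sh does exactly; presumably a SocketCAN bring-up along these lines (the 500 kbit/s bitrate is an assumption):
sudo modprobe peak_usb                           # kernel driver for the PCAN-USB adapter
sudo ip link set can0 up type can bitrate 500000 # bring the CAN interface up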
- Initiated ROS with source /opt/ros/noetic/setup.bash.
- Initiated the cpr_robot workspace with source ~/catkin_ws/devel/setup.sh.
- Started the GUI in RVIZ with roslaunch cpr_robot CPRMover6.launch:
- Enabled the control and steered joint3 at as low a speed as possible (mouse directly towards stop).
- This is the output of rostopic list with cpr_robot node running:
/InputChannels
/JointJog
/OutputChannels
/clicked_point
/initialpose
/joint_states
/move_base_simple/goal
/robot_state
/rosout
/rosout_agg
/tf
/tf_static
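- To verify that the driver actually publishes joint values, one can peek at a single message (standard rostopic usage):
rostopic echo -n 1 /joint_states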
- Updated 116 packages, including ros-noetic-joint-trajectory-controller and ros-noetic-urdf-tutorial.
-
- Also tried to read out the Velodyne VLP-16 Puck with ROS.
- Connected the power and ethernet via HAMA USB2-Ethernet adapter.
- Looked into dmesg | tail, which gives:
load rtl8153a-4 v2 02/07/20 successfully
[ 1431.239295] r8152 2-3.3:1.0 eth1: v1.12.13
[ 1431.360994] r8152 2-3.3:1.0 enx00133bfb1757: renamed from eth1
- Should follow (most of) these instructions.
- At least the webinterface (http://192.168.1.201) works:
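- For reference, the usual network setup from the VLP-16 ROS tutorial, filled in with the adapter name from dmesg above (the host address 192.168.3.100 is the tutorial's example value):
sudo ifconfig enx00133bfb1757 192.168.3.100  # give the host an address on the adapter
sudo route add 192.168.1.201 enx00133bfb1757 # route the sensor's address via this adapter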
- Installed it with sudo apt-get install ros-noetic-velodyne (v1.7.0).
- Did a git clone https://github.com/ros-drivers/velodyne.git in ~/catkin_ws/src.
- Tried rosdep install --from-paths src --ignore-src --rosdistro noetic -y in ~/catkin_ws. Failed on dependency ti_objdet_range: Cannot locate rosdep definition for [pcl].
- Will continue later. Should use the suggestion from this discussion.
- The package libpcl-dev was already installed. Adding --skip-keys pcl to the rosdep command solved this (full command below). Three extra ROS packages were installed: ros-noetic-compressed-depth-image-transport, ros-noetic-compressed-image-transport and ros-noetic-theora-image-transport.
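- The full command that worked (--skip-keys tells rosdep to ignore the pcl key, which is already satisfied by libpcl-dev):
rosdep install --from-paths src --ignore-src --rosdistro noetic -y --skip-keys pcl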
- Started roslaunch velodyne_pointcloud VLP16_points.launch. Correction angles are read from ~/catkin_ws/src/velodyne/velodyne_pointcloud/params/VLP16db.yaml. Received a small warning No Azimuth Cache configured for model VLP16. After that the node seems to start up nicely:
[ INFO] [1713884394.621147357]: Initialized container with min_range: 0.4, max_range: 130, target_frame: , fixed_frame: , init_width: 0, init_height: 1, is_dense: 1, scans_per_packet: 384
[ INFO] [1713884394.645773817]: Velodyne VLP-16 rotating at 600 RPM
[ INFO] [1713884394.646199720]: publishing 76 packets per scan
[ INFO] [1713884394.648381110]: Cut at specific angle feature deactivated.
[ INFO] [1713884394.653022586]: Reconfigure Request
[ INFO] [1713884394.654115811]: expected frequency: 9.921 (Hz)
[ INFO] [1713884394.655819034]: Opening UDP socket: port 2368
- Once in a while there is another warning:
Packet containing angle overflow, first angle: 35969 second angle: 9
- The command rosnode list gives four nodes running:
/velodyne_nodelet_manager
/velodyne_nodelet_manager_driver
/velodyne_nodelet_manager_laserscan
/velodyne_nodelet_manager_transform
- Running rosrun rviz rviz -f velodyne gave a transform error, which can be solved as suggested in the troubleshooting section with rosrun tf static_transform_publisher 0 0 0 0 0 0 1 map velodyne 10. After that the PointCloud2 is displayed:
- No robot_description was provided. Found one on this github page (7 years old).
- The example (partly) works; strangely enough the VLP-16.urdf does not (yet).
- The VLP-16 description is not displayed because a transform is missing. Providing such a footprint fails with the TF error: [Lookup would require extrapolation 1713886797.747684717s into the future. Requested time 1713886902.489684820 but the latest data is at time 104.742000000, when looking up transform from frame [velodyne] to frame [base_footprint]]. The mismatch between the wall-clock timestamps and the small 104.742 s timestamp suggests that one of the nodes is running on simulated time.
March 20, 2024
- Looking in the latest publications of Nvidia, as started in @Home.
- This paper from Dieter Fox uses Transformers, simulation and depth cameras. The paper is accompanied by a website and code.
- This is an extension of PerAct, which actually has a Colab tutorial.
-
- The latest robot-manipulation learning is Optimus, but all for the same type of robot (though with 70 different objects).
- The CuRobo motion planning I was looking for is extended in this IROS 2023 paper. The supplement is not code, but an appendix with additional experiments.
- For CuRobo they point to this ICRA 2023 paper. Here the supplement points to the extensive website, which contains the library, technical reports, etc.
- At the moment, 9 robots are supported.
- Additional robots can be added following this tutorial.
- What is needed is the URDF, the kinematic chain and the self-collision cuboids or meshes (together in a USD configuration).
- In the functions save_curobo_robot_to_usd() and read_world_from_usd() in usd_example.py one can see that USD is used to combine the robot and the obstacles around it.
- This page lists several simulators with URDF support.
- For instance, this ROS Hydro URDF has some simplified collision geometries.
- Here is a simplified URDF for the UMI-RTX, without collision geometries.
- The Commonplace Robotics Mover5 is mentioned as a ROS robot.
- A URDF of the Mover6 is available, together with the IGUS 5DOF.
- Specs of the Mover 4, 5 and 6 can be found in the CPR UserGuide.
March 20, 2024
- According to the vSLAM setup, the next step is to read cartographer's ROS integration.
- That page gives many hints on the parameters of the local and global SLAM algorithms that can be tuned, but less on the observations that are expected.
- This page (Running your own bag) is more useful. For instance, there is a tool cartographer_rosbag_validate to check for a variety of mistakes commonly found in bags (see the sketch below). The next thing would be to tune the configuration of the robot in my_robot.lua (which for instance defines the frames map_frame, tracking_frame, published_frame, odom_frame).
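- The validation tool is run directly on a bag, along these lines (your_bag.bag is a placeholder):
cartographer_rosbag_validate -bag_filename your_bag.bag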
- Cartographer expects 5 different launch files, one of which for instance demonstrates localisation on a previously recorded map.
- vSLAM actually has two launch files, one of them the one mentioned in the setup (roslaunch cartographer_ros vslam_2D_landmarks.launch).
- That launch file points to MiR100.urdf for the robot_description, and to landmarks_2D.lua for the configuration. The MiR100 robot is equipped with a front and a back scan, an IMU and odometry.
- The landmarks_2D.lua contains mainly the settings for the trajectory builder; the only surprise is that the IMU measurements are not used.
- The main loop is detect_landmarks_vslam.py, which calls the yolov5 detector and uses ROS_pub.py to publish the landmarks. Not clear where the depth measurement of the RGB-D camera comes in; it seems to be purely image based.
-
- I experimented with installing primesense on 28 January 2014, when I connected the Asus Xtion ProLive to my laptop (ROS hydro).
- On March 24, 2021 I worked again with the Asus Xtion Pro, but with the librealsense-utils (which only worked with the RealSense D435).
- The package openni2_camera is supported for both melodic and noetic.
- On nb-ros, I installed it with sudo apt install ros-melodic-openni2-camera. Also installed openni2_launch.
- With lsusb the ProLive is visible simply as device ASUS.
- With dmesg | tail I see:
Product: PrimeSense Device
[27614.528926] usb 3-2: Manufacturer: PrimeSense
[27615.224714] usb 3-2: Warning! Unlikely big volume range (=4181), cval->res is probably wrong.
[27615.224719] usb 3-2: [3] FU [Mic Capture Volume] ch = 2, val = 0/12544/3
[27615.229327] usb 3-2: Warning! Unlikely big volume range (=4181), cval->res is probably wrong.
[27615.229330] usb 3-2: [3] FU [Mic Capture Volume] ch = 1, val = 0/12544/3
[27615.234905] usbcore: registered new interface driver snd-usb-audio
- Doing roslaunch openni2_launch openni2.launch gives:
process[camera/camera_nodelet_manager-1]: started with pid [32653]
process[camera/driver-2]: started with pid [32654]
process[camera/rgb_rectify_color-3]: started with pid [32655]
process[camera/depth_rectify_depth-4]: started with pid [32656]
process[camera/depth_metric_rect-5]: started with pid [32669]
process[camera/depth_metric-6]: started with pid [32682]
process[camera/depth_points-7]: started with pid [32691]
process[camera/register_depth_rgb-8]: started with pid [32703]
process[camera/points_xyzrgb_sw_registered-9]: started with pid [32716]
process[camera/depth_registered_sw_metric_rect-10]: started with pid [32726]
process[camera_base_link-11]: started with pid [32748]
process[camera_base_link1-12]: started with pid [32754]
process[camera_base_link2-13]: started with pid [32766]
process[camera_base_link3-14]: started with pid [303]
[ INFO] [1710928702.970805229]: Initializing nodelet with 4 worker threads.
[ INFO] [1710928705.939771024]: Device "1d27/0601@3/9" found.
Warning: USB events thread - failed to set priority. This might cause loss of data...
[ WARN] [1710928708.044227507]: Reconnect has been enabled, only one camera should be plugged into each bus
- Both the RGB and the Depth image of the ProLive can be seen in rviz. Checked with rostopic info /camera/depth/points that these are PointCloud2 messages. Yet, the robot_description is the one published by the cartographer, and displaying the points in rviz fails on the missing camera_rgb_optical_frame. Note that the frame map is also missing.
- Did roslaunch openni2_launch openni2_tf_prefix.launch, which launches both the camera_node and 4 base_links, but the camera_rgb_optical_frame is not part of the published transformations.
- Note that the warning on priority could maybe be solved (according to this Xtion video4linux driver) by blacklisting snd_usb_audio (a sketch follows below).
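- A sketch of such a blacklist (the file name is my own choice; the modprobe -r removes the module for the current session, the blacklist file keeps it out after a reboot):
echo "blacklist snd_usb_audio" | sudo tee /etc/modprobe.d/blacklist-xtion-audio.conf
sudo modprobe -r snd_usb_audio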
- Note that openni2_camera is also supported for ROS2, and has a ros2 launch openni2_camera camera_with_cloud.launch.py there.
- Actually, there is even a launch with only the tsf.launch.py, which specifies camera_rgb_optical_frame as a 90 deg rotation in both roll and yaw from the camera_rgb_frame. This frame is a -0.045 m shift from the camera_link.
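- Reconstructed as static transforms, those two frames would look something like this (a sketch; -1.5708 rad is -90 deg, the signs of the rotations are an assumption, and the positional argument order is x y z yaw pitch roll):
ros2 run tf2_ros static_transform_publisher -0.045 0 0 0 0 0 camera_link camera_rgb_frame
ros2 run tf2_ros static_transform_publisher 0 0 0 -1.5708 0 -1.5708 camera_rgb_frame camera_rgb_optical_frame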
- Looked at the tf tutorial. Unfortunately, I get the message tf_monitor waiting for time to be published
- Looked in rviz for the transforms. See the warning 'No transform from camera_link to frame map'.
- Simply created the transform with rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 map camera_link. Now all camera_rgb transforms are OK, and the pointcloud is displayed in rviz:
- Note that cartographer in the background is still processing the rosbag from the Deutsches Museum. That makes my nb-ros laptop quite slow. It can also be seen that some orientation errors are introduced with the default settings:
-
- Also checked if a ROS2 version of cartographer exists (yes, it does).
- Note that there is also a tutorial for ROS2 Cartographer, which is TurtleBot3 based.
- Note the hint in step 5: Make sure that the Fixed Frame (in Global Options) in RViz is set to “map”.
March 19, 2024
- Created an assignment to replicate the work of Anastasia Panaretou and Phillip Mastrup, for their master thesis at Technical University of Denmark.
- The work is published as a conference paper: Landmark-based Visual SLAM using Object Detection (2021)
- The code is ROS Melodic based, which corresponds with Ubuntu 18.04 (nb-ros).
- The Visual SLAM is a combination of cartographer and YOLO v5.
- The robot platform is the MiR100, which resembles the youBot base. Currently, the smallest robot of this company is the MiR250.
- As RGB-D camera they used the Orbbec Astra.
- Their Master thesis is called 'Improving Visual SLAM for Mobile Industrial Robots' (2020), but it is not freely available for download.
-
- Looking if I can install all packages mentioned in Landmarks_vSLAM on nb-ros.
- Started with sudo apt-get install ros-melodic-cartographer (instead of building from source)
- Continued with Running Cartographer ROS on a demo bag
- Launching the bag file fails on [demo_backpack_2d.launch] is neither a launch file in package [cartographer_ros]
- Back to Compiling Cartographer ROS
- I indeed got an error after sudo rosdep init: ImportError: cannot import name 'OS_RASPBIAN'. Ignored the error as suggested; yet, this error remains. Continued with installing abseil-cpp anyway.
- The catkin_make command starts building 384 packages, which takes several minutes. After the configuration is done, 43 packages are built.
- When using the full path of the bag file, the 2D demo of the Deutsches Museum works (two warnings on trajectory and constraints):
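- For reference, the command from the Cartographer ROS documentation with the full path filled in (assuming the bag was downloaded to ~/Downloads):
roslaunch cartographer_ros demo_backpack_2d.launch bag_filename:=${HOME}/Downloads/cartographer_paper_deutsches_museum.bag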
March 18, 2024
January 26, 2024
- Benchmarking your perception algorithms with Nvidia Isaac Sim
January 25, 2024
Previous Labbooks