Started Labbook 2024.
December 20, 2023
- Nice SLAM demo from Raffaello Bonghi on simulating the Husky robot with Nvidia Isaac Sim.
November 23, 2023
- This paper does SLAM with an actor-critic NN to plan intermediate goals, citing our Beyond paper.
November 15, 2023
- Nice offer for a new 3D-sensor called Hifi, but we already have a nice collection. Benefits seem to be: more resolution, larger FOV, two laser projectors, more accuracy.
October 10, 2023
August 14, 2023
- Iván López Broceño has created an extended_map_server to use Nav2 for non-planar maps.
- Another Nav2 paper (from the ROS2 developers) is published in the RAS journal.
July 10, 2023
- Stanford has a new Quadruped: the Dingo
July 3, 2023
- Preparing for the RaiSim and RL workshop tomorrow.
- Checked if I had a pytorch installation. conda env list indicated only one environment (conda-env), which seems not to have a pytorch package.
- Instead tried conda install pytorch torchvision torchaudio cpuonly -c pytorch. This takes long due to a dependency conflict of python-dateutil.
- Yet, I also have torch 1.9.0 installed locally (~/.local/lib/python3.8/site-packages), which can be checked by typing import torch in the python prompt. torch.version shows the module location, torch.__version__ shows the version number.
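The version/location check above can be scripted without importing the heavy package itself; a small sketch using the stdlib importlib machinery (demonstrated here on the json module, since torch may not be installed):

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be loaded from, without importing it."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Demonstrated on a stdlib package; for the case above: module_origin("torch")
print(module_origin("json"))
```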
- In the meantime updating cuda-12 and downloading Matlab R2023b. The prerelease was quite minimal in packages, so also requested a RoboCup license with all packages.
June 29, 2023
June 23, 2023
- Would be great to test lidarslam_ros2, although it is 2D SLAM.
- Fully 3D (based on ROS noetic) is Wavemap, which should be a method strong in detecting thin obstacles.
June 22, 2023
- On March 21 Emily had three choices for detectors not implemented in RTABMap yet:
- All three are papers with code. She concentrated on R2D2, which is built on top of SuperPoint/SuperGlue.
June 20, 2023
- Repeated yesterday's experiment in Amsterdam. At the UMI-RTX (curtains closed) there is no fix. With the antenna outside the window in the Visualisation Lab still none. In the Intelligent Robotics Lab, with the antenna outside the window on the robot field, I get a fix. After that returned the antenna to the field; sporadic fixes were detected. The PPS led only lights up when it has a fix, although I see time messages received from some weak satellites (10-20 dB signal).
- So, the GPS as PPS signal for the Velodyne will only work outside the lab.
June 19, 2023
- Downloading u-center 2, v23.03.54868.
The NEO-M8U is not connected yet, so no devices are visible yet in the dashboard.
- Read the NEO-M8U Hardware integration manual. That contained too many options and details.
- Looked at the hookup-guide. There are 4 ways to provide power. First trying USB for both power and data seems a good option.
- The connection to the Velodyne only uses the PPS (see the Broken Out Pins section of the hookup-guide): the yellow wire. The other used pins on the board are ground (on the left) with the black wire, and the 3V3 (inverted) on the right (white wire).
- Started with a USB-connection (USB-C Thunderbolt via USB-C adapter). Power led becomes red. Windows reported serial connection on COM5.
- Went outside, started to see satellites. First the GPS satellites (USA), then Galileo (EU), followed by BeiDou (China) and GLONASS (Russia) show up. After BeiDou I see the PPS led start blinking with 1 flash per second.
- Tried to connect with the USB to TTL from D-SUN. Received an error that the PXXX chipset was no longer supported (since 2012), which gave error 433.
- The other USB to TTL (non-marked, chipset CH340) was automatically recognized. Only received a warning on AutoBauding, and not receiving any data (still indoors):
- Reverted the data-cables (RX to TX, TX to RX). Now I receive information (Autobaud found 9600 bits/s).
- Switched to the Ubuntu 20.04 partition of nb-dual.
- Cloned PyGPSClient in ~/git. The code should start simply with pygpsclient, but that executable is not there (only src/pygpsclient/__main__.py).
- Instead did python3 -m pip install --upgrade pygpsclient
- Tried to connect via USB/UART, but got could not open port /dev/ttyUSB0. Changing the group from dialout to tty didn't help. Opening the port to others with sudo chmod o+rw /dev/ttyUSB0:
- Also tried it with the USB-C cable. Received Couldn't open port /dev/ttyACM0. Opening the ACM port also helped (has to be done after each disconnect):
- It also worked with the soft black USB-C 3.1 cable, and the grey/black USB-C cable from the Visualisation Lab. Also the USB-B to USB-C cable that came with the OAK-D works fine.
June 14, 2023
- An Orin Nano developer kit is now available for educators for the special price of $399.
June 7, 2023
- The GPS is now connected to the Velodyne Puck:
- Read the User Manual.
- Trying to reproduce the Technology Center's work:
- The Velodyne needs 12V (although a voltage range from 9V to 18V is OK - see page 18 of the manual). It typically draws 8W (page 31 of the manual). The recommended power source (page 20) should be rated 1.5A. The power source shouldn't give more than 3A (the Technology Center provided 1.0A during their demo). The UMEC AC adapter fits and provides 2.1A max (25W), so the sensor starts up after a few seconds (the spin makes quite some noise):
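As a sanity check on those manual figures, the typical current draw follows directly from P = V·I:

```python
# Power-budget check from the VLP-16 manual figures quoted above:
voltage = 12.0        # V, nominal supply
typical_power = 8.0   # W, typical draw (page 31)
typical_current = typical_power / voltage
print(round(typical_current, 2))  # 0.67 A, well below the adapter's 2.1 A max
```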
- Connected the VPL-16 with an ethernet cable.
- The ethernet configuration on nb-dual was still set to Link-Local only (to connect to Nao robots).
- Switching the IPv4 to automatic DHCP does not work; manually setting the IP of nb-dual to 192.168.1.101, with a netmask of 255.255.0.0 and the gateway 192.168.1.1, works. Could connect to the webinterface on 192.168.1.201, as described on page 25 of the manual:
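Why this manual setting works can be checked with the stdlib ipaddress module: with the 255.255.0.0 netmask, nb-dual and the sensor's webinterface land in the same subnet:

```python
import ipaddress

# nb-dual at 192.168.1.101 with netmask 255.255.0.0; Velodyne at 192.168.1.201:
host = ipaddress.ip_interface("192.168.1.101/255.255.0.0")
sensor = ipaddress.ip_address("192.168.1.201")
print(sensor in host.network)  # True: both fall inside 192.168.0.0/16
```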
- The Velodyne is running version 3.0.40.0 of the firmware, the latest release (according to page 3 of the manual). Yet, on the resource page v3.0.41.1 is available.
- The diagnostics look good: not too hot, power OK. The GPS is not connected (yet), so the phase is off (with GPS-locked the phase is kept around 0 / 359 degs):
- The default GPS-sensor of the Velodyne is the Garmin GPS 18x LVC, instead of the NEO-M8U we have chosen.
- Note that the polarity of the NEO-M8U UART connector had to be inverted, as described on page 46 of the manual:
- Note that an INS instead of a GPS is also an option, according to page 29.
- The XSens MTi User Manual indicates on page 31 that it supports NMEA output as ASCII strings. Details can be found in the Low-Level Documentation (page 36). The GPRMC message is bit #14 on page 39 (indeed in the expected format).
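For reference, the GPRMC field layout can be sketched in a few lines of Python (parse_gprmc is a hypothetical helper; checksum verification is omitted, and the sample sentence is the canonical NMEA example, not output from the MTi):

```python
def parse_gprmc(sentence):
    """Split a $GPRMC sentence into its named fields (no checksum check)."""
    body = sentence.strip().lstrip("$").split("*")[0]
    f = body.split(",")
    if not f[0].endswith("RMC"):
        raise ValueError("not an RMC sentence")
    return {
        "utc_time": f[1],
        "status": f[2],              # 'A' = valid fix, 'V' = void
        "lat": f[3], "lat_hemi": f[4],
        "lon": f[5], "lon_hemi": f[6],
        "speed_knots": f[7],
        "date": f[9],
    }

sample = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
print(parse_gprmc(sample)["status"])  # A -> the receiver reports a valid fix
```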
- Switching from NMEA to MTData2 and back is described on page 45 of the LL documentation.
- An MTi INS has an internal clock with jitter less than 25ns (page 43 of manual).
- The INS communicates over USB or UART, but not both (see page 49 of the manual).
- It would be interesting to see if the PycURL example on page 87 works for the VPL-16
- The u-center which the Technology Center uses seems to be only supported on Windows.
- Best option for Linux seems to be PyGPSClient.
- It seems that VeloView is also available for Linux, unfortunately only for Ubuntu 18.04.
- So, it seems that I have to build from source.
- Trying an Ubuntu 20.04 build anyway. It seems to be possible, according to this instruction.
- Following the instructions (only doing the build inside ~/git/lsvb/build). Cmake fails because it is running version 3.16.3, instead of 3.18 or higher.
- Followed instructions A-3 and A-4 of askubuntu, and cmake was updated to v3.26.4.
- Now cmake can configure the build-directory, although I get the warning:
Optional dependencies for paraview not found, is it in the list of
projects?: adios2, catalyst, cuda, hdf5, matplotlib, mpi, protobuf,
visitbridge, silo, lookingglass, fides, pythonmpi4py, xdmf3, vrpn, vtkm,
netcdf, openmp, openpmd, openvdb, paraviewgettingstartedguide,
paraviewtutorialdata, mesa, osmesa, egl, openxrsdk, cdi, ffmpeg, mili,
gmsh, genericio, cosmotools, tbb, ospray, exodus, seacas, occt
- Compilation of lsvb takes a long time (now at step [276/284]). Finally finished.
- At least lidarview has a VelodynePlugin. After building, I tried ~/git/lsvb/build/install/bin/LidarView, where the window indicates that it is v4.3.0-123-Ubuntu20.04.
- Opened the stream, but get the warning Live sensor stream (Port:2368). Could not determine the interpreter for now.
- The IP of the ethernet connection wasn't correctly set. With the IP set to 192.168.1.101 the message bar below indicates Live sensor stream (Port:2368). DUAL RETURN | VP-16:
- Velodyne points to VeloView, based on Paraview. Version 5.11 seems to be Python3.9 based.
- Reading Paraview Tutorial.
- Lidarview itself can be found under 'Other files'. The version number of the file is v4.3.0 (date Jan 20, 2023).
June 6, 2023
- Yesterday there was an Isaac ROS2 localization webinar. They used a Hesai Pandar XT32 as laser scanner.
- They also displayed the isaac_ros_occupancy_grid_localizer, combined with a pointcloud_to_flatscan_node.
- There was no notebook or demo-code; they pointed to Isaac Ros2 benchmark.
- That page has some nice datasets, including an AprilTag node.
- The code of the demo in the webinar can be found here.
June 1, 2023
- A Thunderbolt 3 cable allows 40Gb/s, and has a chipset onboard to negotiate the power. It is available for 60 euro.
May 31, 2023
- Borrowed the good USB-C cable (3m) from the Visualisation Lab. That works fine. Now the Thunderbolt cable can be used for the left slot, so /usr/local/bin/ZED_Explorer indicates ** [SVO] Hardware compression (NVENC) available **.
- The ZED_Explorer also gives the left and right images when the Zed-mini is used in combination with the Dell USB-C adapter, although the device is not visible with lsusb.
- With the USB-B to USB-C conversion the ZED_Explorer also works (even without restarting), and the device is visible with lsusb. The converter in the USB-C slot of the Dell adapter also works, although the device is not visible with lsusb.
- Without the eGPU, I get void sl::Mat::alloc(size_t, size_t, sl::MAT_TYPE, sl::MEM) : Err [100]: no CUDA-capable device is detected (for the ZED_Sensor_Viewer). The ZED_Explorer starts with the warning ** [SVO] Hardware compression (NVENC) not available **.
- The ZED_Depth_Viewer crashes on Could not load library libcudnn_ops_infer.so.8. Error: libcublas.so.11: cannot open shared object file: No such file or directory.
- Strangely enough these are not libraries directly loaded (checked with ldd). The ZED_Depth_Viewer depends on e.g. libcuda.so.1, libnvcuvid.so.1, libnvidia-encode.so.1, libicui18ns.so.66, libucydata.so.66.
- The /usr/local/cuda/version.json points to v 12.1.1, although /usr/local/cuda-11.7 /usr/local/cuda-12 also exist.
- In /usr/local/cuda-11.7/targets/x86_64-linux/lib/ libcudnn_ops_infer.so.8.4.1 can be found (but no libcublas).
- In /usr/local/cuda-12/targets/x86_64-linux/lib/ libcublas.so.12.1.3.1 can be found, but no libcudnn_ops_infer.so. Same for cuda-12.1
- LD_LIBRARY_PATH explicitly loads /usr/local/cuda-11.7/lib64 (from ~/.bashrc).
- Looked into CUDA 11.7 Installation Guide.
- The command cat /var/lib/apt/lists/*cuda*Packages | grep "Package:"
showed a long list of packages, including libcublas-dev-11-7 and libcudnn8-dev.
- Installed sudo apt-get install libcudnn8-dev (although this seems to be version 8.9.1.23-1+cuda12.1).
- The command sudo apt list -a libcudnn8-dev shows that there exists a version 8.5.0.96-1+cuda11.7.
- So, downgraded with sudo apt-get install libcudnn8=8.5.0.96-1+cuda11.7 libcudnn8-dev=8.5.0.96-1+cuda11.7. Note that I received the warning update-alternatives: removing manually selected alternative - switching libcudnn to auto mode.
- Still the same errors, so I should also try to install libcublas-dev-11-7. There are two versions available: 11.10.3.66-1 and 11.10.1.25-1. Installed the latest, because I saw so.66 versions.
- That works, insofar as ZED_Depth_Viewer no longer crashes. Yet, I receive a warning that CUDA 10.2 is required for the ZED SDK, and that it cannot access the camera in this resolution.
- Ran ZED_Diagnostic, which complained that cuda-11.7 was not installed, and that the camera couldn't be detected (while I used the 3m USB-C cable which worked before):
- Looked with sudo apt list -a cuda-11-7 and saw two versions ( 11.7.0-1 and 11.7.1-1). Installed sudo apt-get install cuda-11-7=11.7.1-1.
- The ZED_Diagnostic now looks better. It can recognize the camera (with the USB-B to C converter). It sees that CUDA 11.7 is installed, but complains that there are alternatives:
- Both ZED_Depth_Viewer and ZED_Sensor_View still don't give a view.
- Did a sudo apt upgrade before a reboot. Saw that libcudnn8-dev was upgraded to 8.9.1.23-1+cuda12.1.
- Still same behavior. Saw with nvidia-smi that CUDA version 12.1 is used:
| NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 |
- Note that /usr/local/cuda is a link to /etc/alternatives/cuda, which points to cuda-12.1.
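The alternatives mechanism is just a chain of symlinks, which realpath resolves end-to-end; a sketch rebuilding that chain in a temporary directory (the real paths being /usr/local/cuda -> /etc/alternatives/cuda -> /usr/local/cuda-12.1):

```python
import os
import tempfile

# Rebuild the cuda symlink chain in a scratch directory to see how it resolves:
with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, "cuda-12.1"))                 # the real install
    os.symlink(os.path.join(d, "cuda-12.1"),
               os.path.join(d, "alternatives-cuda"))       # /etc/alternatives/cuda
    os.symlink(os.path.join(d, "alternatives-cuda"),
               os.path.join(d, "cuda"))                    # /usr/local/cuda
    resolved = os.path.realpath(os.path.join(d, "cuda"))

print(resolved.endswith("cuda-12.1"))  # True: realpath follows the whole chain
```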
- The directory /usr/local/cuda-11.7 now contains a version.json and several subdirectories other than targets.
- Used sudo update-alternatives --config cuda to switch to cuda-11.7, but nvidia-smi still sees cuda version 12.1, and ZED_Depth_Viewer still has a problem.
May 30, 2023
- Continuing to replicate Emily's work.
- Now, barebone on nb-dual. This is an Ubuntu 20.04 machine with ROS noetic.
- Started with ros-noetic-rtabmap-ros, which installs 64 new packages.
- Also updated foxglove-studio.
- Looking if I could do RGB-D Handheld Mapping with zedm_capture.
- The instructions of zedm_capture are missing the instructions to download my part. Also the ln -s ~/git/jacinto_ros1_perception/tools . doesn't work.
- The default path was .. (and config instead of conf). Modified generate_rect_map.py, but that gives a core dump. Did a fresh git pull, which gave all calibration bins.
- Launched roslaunch zed_capture zedm_capture.launch zed_sn_str:=SN14962641, which gave Cannot find the ZED camera.
- I could see with ls -galt /dev/video* four devices, video0 till video3. With roslaunch zed_capture zedm_capture.launch zed_sn_str:=SN14962641 device_name:=/dev/video2 the camera is initialized. Yet, the driver fails on OpenCV(4.2.0) ../modules/core/src/matrix.cpp:465: error: (-215:Assertion failed) roi.x
- Seems that the camera is started with Stereo Camera Mode HD, width 576, height 360, while the actual dimensions are different. When called with /dev/video0, the camera starts with Stereo Camera Mode HD, width 640, height 480, which also fails. According to the launch-file, HD should be 1280x720.
- Switched from the provided USB-C to B cable to the Thunderbolt C-cable. Now the camera is visible with lsusb: Bus 003 Device 016: ID 2b03:f681 STEREOLABS ZED-M Hid Device. I also get a /dev/video4 and /dev/video5. In the meantime the ZED_Sensor_Viewer recognized the device, but fails on a CUDA error (because the eGPU was not connected).
- With roslaunch zed_capture zedm_capture.launch zed_sn_str:=SN14962641 device_name:=/dev/video4 I now get [ INFO] [1685460098.906656431]: Stereo Camera Mode HD, width 2560, height 720, followed by [ INFO] [1685460098.958528159]: Successfully found the ZED camera.
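The width of 2560 is consistent with the ZED-M delivering both eyes as one side-by-side frame, so the per-eye resolution is half the reported width (split_stereo is an illustrative helper, not part of zed_capture):

```python
def split_stereo(width, height):
    """Per-eye resolution of a side-by-side stereo frame."""
    if width % 2:
        raise ValueError("side-by-side frame must have an even width")
    return (width // 2, height)

print(split_stereo(2560, 720))  # (1280, 720): HD720 per eye, matching the launch-file
```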
- When I do rostopic list, I get:
/camera/left/camera_info
/camera/left/image_raw
/camera/right/camera_info
/camera/right/image_raw
- The command rosrun image_view image_view image:=/camera/left/image_raw also works:
May 26, 2023
- Looking how I should replicate Emily's work.
- One option would be to do it barebone on nb-dual, because it is Ubuntu 20.04 / Ros Noetic based.
- Another option would be to use singularity, by doing a singularity search TERM, as described April 11, 2022. That term should be OpenCV.
- The last option could be to try to use RoboStack. I am at home so I could try this on my Ubuntu 22.04 workstation, as I last did on March 20, 2023.
- On XPS-8930 I did mamba activate robostackenv. This environment has ROS_DISTRO=noetic, and ROS_ROOT=~/mambaforge/envs/robostackenv/share/ros. Note that on the barebone there are also /opt/ros/noetic and /opt/ros/humble. The command which rostopic at least points to the .../robostackenv/bin/rostopic.
- Checked the robostack opencv packages available by default in robostack-noetic, which only indicated ros-noetic-vision-opencv (v1.16.2). This version is not explicit about which version of OpenCV it is built on.
- Looked in .../robostackenv/lib/libopencv*, and it seems that robostack-noetic is OpenCV v4.6.0 based.
- Emily used OpenCV v4.2.0 (May 1), should check where this requirement is coming from. On May 16 Emily upgraded to OpenCV v4.4. On April 27, she had a conflict between OpenCV v4.2 and v4.7. On April 24 she indicated that RTABMAP uses OpenCV 4.2.0. On April 27 she indicated that RTABMAP is python3.8 based. In my barebone Ubuntu 22.04 I have python3.10.6, in my robostack-environment python3.9.15.
- Next step is to look if I could install rtabmap_ros.
- Simply mamba install ros-noetic-rtabmap-ros doesn't work for the robostack-noetic. So, next step is to build from source.
- None of the optional dependencies could be installed with mamba, but it seems that it only uses the conda-forge channels.
- Did conda config --env --add channels robostack-staging. The ros-noetic-rtabmap-ros is still not available, but ros-noetic-libg2o gives 6 packages. For instance, python is upgraded to python3.9.16 and ros-noetic-rosbag-storage upgraded from v1.15.15 to v1.16.0.
- Unfortunately, libpointmatcher and GTSAM cannot be found in the robostack framework.
- So, let's start with RTAB-Map standalone. Did a checkout, followed by git switch noetic-devel.
- The cmake failed on several missing libraries. Repaired with mamba install libusb.
- Next is pcl-1.11, which requires the flann module. The command dpkg -L libflann-dev showed that it is present on the barebone, but without the required cmake files. Could build it from source, but have to check which version pcl-1.11 expects. According to this build-setup, pcl-1.9, pcl-1.11 and pcl-1.13 all work fine with flann 1.9.1. The latest tag is 1.9.2.
- Checked out the 1.9.1 branch. The build instructions in the documentation do not work; a simple cmake .. gives no SOURCES given to target: flann_cpp. The website that belongs to flann can only be found at the wayback machine. Moved to the main branch of the git repository, where cmake .. works. Continued with make in the flann build directory, which finished without problems. A make install would install in /usr/local/lib/cmake. That solves the flann dependency of rtabmap's cmake.
- Next dependency is pcl-1.11 dependency on vtk. Did in ~/git git clone --recursive https://gitlab.kitware.com/vtk/vtk.git, as instructed here.
- On February 2, 2021 I also struggled with pcl and vtk. At that time it was pcl-1.8.1 with vtk-6.3 and vtk-7.1. The current version seems to be v9.2.6. It nicely points to the cmake search procedure, including the default find-modules. sudo cmake --install . installed e.g. in /usr/local/include/vtk-9.2.
- The RTABMap package now finds VTK 9, but complains that it is built without QT support.
- Found these instructions, but those instructions are no longer valid (VTK_MODULE_ENABLE_VTK_GUISupportQt seemed relevant). Tried to build with -DVTK_BUILD_EXAMPLES=YES, because there were Qt-examples, but RTABMap still complains.
- Did cmake .. -DWITH_QT=OFF, which allowed creating a Makefile for RTAB-Map version 0.21.1, with PCL_VERSION 1.11.1.99, OpenCV 4.6.0, xfeatures2d=YES, nonfree=NO.
- Specifying -DOPENCV_ENABLE_FREE=ON should not be done in RTAB-Map, but in opencv. Now, ~/mambaforge/envs/robostackenv/include/opencv2/opencv_modules.hpp is checked, where OPENCV_ENABLE_FREE is commented out.
- Let's first try to build the RTAB-Map standalone libraries without OPENCV_ENABLE_FREE. make -j6 goes well.
- Looking at this tutorial, which is already outdated. Probably rtabmap-console is the intended command (or I am missing the GUI). Yet, during installation the runtime path is set to "", which explains why librtabmap_core.so.0.21 is not found. This library was installed in /usr/local/lib. Adding this to LD_LIBRARY_PATH solves that, although now libboost_filesystem.so.1.71.0 is missing.
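For completeness, what the LD_LIBRARY_PATH fix amounts to (it must be exported before the process starts; sketched here by building the environment for a child process):

```python
import os

# Prepend /usr/local/lib, where librtabmap_core.so.0.21 was installed,
# to the dynamic-loader search path handed to a child process:
env = dict(os.environ)
env["LD_LIBRARY_PATH"] = "/usr/local/lib:" + env.get("LD_LIBRARY_PATH", "")
print(env["LD_LIBRARY_PATH"].split(":")[0])  # /usr/local/lib is searched first
```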
- Nice trick is to do cmake -LA .. | grep WITH_, to see all options (including WITH_DEPTHAI).
- Tried to do mamba install libboost-dev, but that is not provided.
- Looked with apt list --all-versions libboost-dev, but only 1.74.0 is available. Yet, in /usr/lib/x86_64-linux-gnu both 1.74.0 and 1.65.1 are available. Seems like a hack. Could try to do the same (copy from an Ubuntu 20.04 machine). Or I could now move back to nb-dual.
- Before that, I did mamba search libboost-dev which indicated that no package was available for the current channels, but that I should search at Anaconda. Yet, that only provides v1.68.0.
- It would be better to download the package from ubuntu.
May 24, 2023
May 22, 2023
- Did sudo ./gpsbabel -D9 -i garmin -f /dev/ttyUSB0, which gave a response:
GPSBabel Version: 1.8.0
main: Compiled with Qt 5.12.8 for architecture x86_64-little_endian-lp64
main: Running with Qt 5.12.8 on Ubuntu 20.04.6 LTS, x86_64
main: QLocale::system() is en_US
main: QLocale() is en_US
main: QTextCodec::codecForLocale() is UTF-8, mib 106
GPS Serial Open at 9600
Tx Data:10 fe 00 02 10 03 : ...(PRDREQ )
[ERROR] GPS_Packet_Read: Timeout. No data received.
Rx Data:GARMIN:Can't init /dev/ttyUSB0
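The Tx Data line in the dump above is a Garmin serial frame: DLE (0x10), packet id 0xFE (product request), payload size, checksum, DLE, ETX (0x03), where the checksum is the two's complement of the summed id, size, and payload bytes. A sketch that rebuilds it:

```python
def garmin_frame(packet_id, payload=b""):
    """Build a Garmin serial frame: DLE, id, size, payload, checksum, DLE, ETX."""
    DLE, ETX = 0x10, 0x03
    checksum = (-(packet_id + len(payload) + sum(payload))) & 0xFF
    body = bytes([packet_id, len(payload)]) + payload + bytes([checksum])
    body = body.replace(bytes([DLE]), bytes([DLE, DLE]))  # DLE bytes are doubled
    return bytes([DLE]) + body + bytes([DLE, ETX])

print(garmin_frame(0xFE).hex(" "))  # 10 fe 00 02 10 03, matching the Tx dump
```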
- Looked at bug report, which suggested to do sudo stty -F /dev/ttyUSB0 clocal. Did that, and tried chmod 777 /dev/ttyUSB0, as suggested here.
- Also tried sudo rmmod garmin_gps to free-up the GPS USB device, as suggested on redhat. Yet, this gives rmmod: ERROR: Module garmin_gps is not currently loaded.
- The same page suggested to build from source (which I did). Yet, tried again, but now with LIBUSB explicitly specified: cmake -DCMAKE_BUILD_TYPE=Release -DGPSBABEL_WITH_LIBUSB=pkgconfig -G Ninja ... Note that cmake reports: Include Directores are: "IncDirs-NOTFOUND". Same error. Tried the LIBUSB=system option. Same error. The included option no longer exists.
- As last option I tried 'no'. Same result.
- Get the same result when the GPS is not powered up. Should try it on a machine with a serial port!
- Logged in on Tunis, an Ubuntu 12.04 machine with a serial port. Copied ~/git/gpsbabel to this system, but both libstdc++.so.6 and libQt5Core were not available (checked with ldd).
- Also Ninja is not available. Even without Ninja I get a build error (CMake 3.11 or higher required. You are running version 2.8.7).
- Modified the CMakeLists.txt, which found Qt4 instead of Qt5, but fails on the CMake command target_compile_definitions in gui/coretool/CMakeLists.txt.
- Removed the gui subdirectory and switched precompiled errors off.
- Qt4 is found, but that also has target_compile_options in its definition.
- Connected Tunis to internet and did a sudo apt-get update. Removed non-existing repositories from /etc/apt/sources.list.
- Changed the repository from http://nl.archive.ubuntu.com to http://old-releases.ubuntu.com/ubuntu/dists/. Now update works again. Yet, this cmake is still the latest available version. Not only updating main, but also providing universe (unsupported by ubuntu) gave 207 new packages (including libc6).
- Also provided all other repositories (including ros - after updating the GPG key), but cmake remains version 2.8.7. Building cmake from source seems a bit overdone. Time to buy a new GPS.
May 17, 2023
- Installed ninja with sudo apt install ninja-build
- This dependency is not explicit in the build documentation, but recommended in ~/git/gpsbabel/INSTALL.
- The command cmake -DCMAKE_BUILD_TYPE=Release -G Ninja .. now gives the error message: generator : Ninja
Does not match the generator used previously: Unix Makefiles
- Removing CMakeCache.txt solves this. The error is now the same as on May 9 (but now with Ninja): Could not find a package configuration file qt5serialport-config.cmake
- Did sudo apt install libqt5serialport5-dev. Next missing package is qt5webenginewidgets-config.cmake. Did sudo apt install qtwebengine5-dev.
- Now the build-files are made with only a warning: Document generation is only supported for in-source builds with single
configuration generators..
- In ~/git/gpsbabel/build, running ./gpsbabel -? works.
- The command needs the -i option. Possible values are garmin_fit, garmin301, garmin_g1000, garmin_txt, garmin_poi, garmin_gpi, garmin.
- Inside garmin there is the option erase_t
- I should start with gpsbabel -D9 -i garmin -f usb: -o gpx -F blah.gpx, where the -D9 offers debugging dumps.
- The device was not visible, but it was also not powered up. Used a 12V-3.5A power supply to power the GPS 35-PC up.
- The device is now visible with : [ 4379.860245] usb 3-8.3: pl2303 converter now attached to ttyUSB0.
- Yet, gpsbabel -i garmin -f usb:-1 gives no recognized devices. Could try to build gpsbabel with the option GPSBABEL_WITH_LIBUSB on (which seems to be the default). Note that fmt garmin indicates "For Linux, this will fail if you have the garmin_gps kernel module loaded", after which it points to the hotplug page.
- Added the 51-garmin.rules, but the vendor seems to be hidden by the pl2303 converter.
May 9, 2023
- The original software of the Garmin GPS 35 series is available for download for Windows 2000 and later.
- Here are some tricks to get it working under Linux.
- I couldn't find gpsd, but there is information on hotplug for gpsbabel.
- The software of gpsbabel is on github, with the latest update a week ago.
- GPSbabel can do realtime time tracking, although this is an experimental feature.
- Seems that I am looking for Garmin's PVT protocol, although the GPS 35-PC is not in the list of supported devices (but "most serial Garmin GPS receivers" gives hope). For realtime tracking there is the resettime option.
- On March 17, 2021, I looked at the Institut Pascal long-term dataset, which contains both velodyne and gps messages.
- Tried to install the Windows 2000 program (v2.50), but it fails (tried several compatibility modes, to no avail).
- Read the INSTALL. First tried to install with Ninja; after that just did cmake -DCMAKE_BUILD_TYPE=Release .. in ~/git/gpsbabel/build. That works (partly): QT5 needs the Qt5SerialPort package configuration.
April 24, 2023
- Note that Michelle now gets an error message on the docker-ce version on ws7, so check if there is a conflict with the installation for the CV2 project.
- On ws10 the docker-ce version is 5:23.0.4-1, which seems to be above the required v 19.03.
- docker-ce is installed on ws7, but Michelle runs the commands that fail inside a docker container. She has root-rights there, so installation of docker-ce could work. Only, no candidates for this package were found, so followed all steps of Install using the apt repository. That works, although I now get an error on /var/run/docker.sock. Running systemctl start docker fails on System has not been booted with init system. Can't operate.
- Started the docker daemon with sudo dockerd. Still another error, which I corrected with sudo usermod -a -G docker developer (still inside the container). Exited the container and entered again with docker exec -it CONTAINER_ID /bin/bash. Now sudo docker run hello-world works, and ~/subt_ws/src/subt/docker/run.bash osrf/subt-virtual-testbed:latest tunnel_circuit_practice.ign worldName:=tunnel_circuit_practice_01 robotname:=CERBERUS_GAGARIN robotConfig:=CERBERUS_GAGARIN_SENSOR_CONFIG_1 localModel:=true starts, which pulls down the osrf/subt-virtual-testbed image and extracts it. Running this run.bash now fails on could not select device driver "" with capabilities: [gpu]. Probably an option I should have selected when starting docker exec -it.
April 5, 2023
- A new toolbox for mapping and localization: LIMAP.
April 3, 2023
March 30, 2023
- Emily has been able to register the QR-code observations in RTAB-MAP, and is now trying to make them visible in the databaseViewer.
- In addition, she is trying to incorporate R2D2 as feature detector.
- In the meantime I found this MSc thesis (2021) from TU Twente on the modularity, concentrating on the (lack of) documentation. A proposal is made to make RTAB-Map more modular, but this is not implemented. Yet, it is a good piece of documentation of the inner workings of RTAB-Map.
- A comparison of RGB-D sensors in RTAB-MAP is also interesting.
- The Field Robotics publication shows a comparison of different visual odometry algorithms in RTAB-MAP on the KITTI dataset, TUM RGB-D dataset, EuRoC dataset, and MIT Stata Center. The last one seems the most appropriate for Emily. It is also the one used in the configuration evaluation (section 5).
March 22, 2023
- The two Spot robots in the subT challenge have a 3D medium range lidar (Velodyne?).
- According to the CTU CRAS Norlab specifications.md, the simulation should have been started with LC_ALL=C ign launch -v 4 ~/subt_ws/src/subt/submitted_models/ctu_cras_norlab_spot_sensor_config_1/launch/example.ign robotName:=X1 ros:=true champ:=true.
- The 3D range finder is actually an Ouster OS0-128 3D lidar modeled by `gpu_lidar` sensor. It runs configured to produce 2048x128 scans at 10 Hz. It has a very wide vertical field of view (90 degrees) and short range (about 50 meters).
- The IMU is an XSens MTi-30.
- Also note the LED strips around the body and the ceiling-pointing light:
- The Spot from the Marble team could be called with LC_ALL=C ign launch -v 4 ~/subt_ws/src/subt/submitted_models/marble_spot_sensor_config_1/launch/example.ign robotName:=X1 ros:=true champ:=true.
- The lidar is also simulated by the gpu_lidar sensor. Yet, it is the Ouster OS1-Gen1 3D lidar. It runs configured to produce 1024x64 scans at 10 Hz. It has a vertical field of view of 33 degrees and 120 meter range.
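A quick point-rate comparison of the two simulated lidars, straight from the figures above:

```python
# Points per second = horizontal resolution x vertical channels x scan rate:
os0_points = 2048 * 128 * 10  # Ouster OS0-128 (CTU CRAS Norlab Spot)
os1_points = 1024 * 64 * 10   # Ouster OS1-Gen1 (Marble Spot)
print(os0_points, os1_points, os0_points // os1_points)  # 2621440 655360 4
```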
- There are several robots which are configured with the Velodyne Puck VLP-16. That are:
- cerberus_anymal_b_sensor_config_1
- cerberus_m100_sensor_config_1
- coro_allie_sensor_config_1
- coro_jeanine_sensor_config_1
- coro_karen_sensor_config_1
- coro_mike_sensor_config_1
- costar_shafter_sensor_config_1
- csiro_data61_dtr_sensor_config_1
- csiro_data61_ozbot_atr_sensor_config_1
- emesent_hovermap_sensor_config_1
- explorer_ds1_sensor_config_1
- explorer_r2_sensor_config_1
- explorer_r3_sensor_config_1
- Seems that cerberus_anymal_b_sensor_config_1 is the most relevant for Emily's research. The Cerberus also carries an RGBD camera.
- The Hello World Tutorial indicates that the teleop node is configured for a Logitech F310 or an XBox 360 controller, while I used a Logitech F710.
- Hello World also hints to the Keyboard Teleoperation as I found myself.
- Also read the API documentation. Most robots are controlled with cmd_vel, except the Spot and ANYmal legged robots.
- Note that the champ_teleop package (for the Spot robot) has a forked version of keyboard control, including the option joy:=true for a Logitech F710.
- When the anymal_controller node is running, the ANYmal can also be controlled with /anymal_b/cmd_vel commands.
- Couldn't find the gpu_lidar documentation, but the API documentation indicates that a 3D lidar provides //points and an RGBD camera publishes for instance //rgbd_camera/depth/points and //front/optical/depth.
- Note that the subT simulation also has a /subt/pose_from_artifact_origin, which returns the robot pose relative to the artifact origin.
- Also teams are encouraged to publish /markers by sending visualization_msgs/MarkerArray.
- The gpu_lidar seems to be a gazebo plugin. This Velodyne tutorial shows how to build such plugin.
- Found this common resources page, which has a modified version of the canonical Velodyne/Ouster Gazebo plugins.
- Note that for the gpu optimization to work, Gazebo has to have the GPU fix (gazebo7 > version 7.14.0, gazebo9 > version 9.4.0).
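The GPU-fix gate above boils down to a tuple comparison on the parsed version; a minimal sketch (only the two series mentioned are handled, other majors return False here):

```python
def has_gpu_fix(version):
    """True when a Gazebo version is strictly newer than the GPU-fix
    threshold for its series (7.14.0 for gazebo7, 9.4.0 for gazebo9)."""
    v = tuple(int(p) for p in version.split("."))
    threshold = {7: (7, 14, 0), 9: (9, 4, 0)}.get(v[0])
    return threshold is not None and v > threshold

print(has_gpu_fix("9.4.0"), has_gpu_fix("9.4.1"))  # False True
```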
March 21, 2023
- Trying to reproduce yesterday's work on ws7.
- There was no docker installed on ws7, so followed the instructions of SubT docker install.
- Added the non-admin users to the docker group. Yet, even as admin sudo docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi doesn't work.
- Installed as user mamba following the instructions from miniforge.
- Created a ros_melodic_env based on python3.8, but no channel provides ros-melodic-desktop.
- Deactivated this environment again.
-
- Problem seems to be that nvidia-docker and nvidia-docker2 are deprecated, which means that the nvidia-container-toolkit should be used to make the NVIDIA GPU available to Docker.
- Followed the instructions from this Workstation setup.
- Trick was to add the nvidia-docker.list to apt with curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - and curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu18.04/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list, followed by sudo apt-get update and sudo apt-get install nvidia-container-toolkit.
- Now I could do the steps from nvidia container toolkit: sudo nvidia-ctk runtime configure --runtime=docker followed by sudo systemctl restart docker.
- Now I could do sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi (which gives drivers 515.86.01 and CUDA version 11.7).
- Also the SubT command ./run.bash osrf/subt-virtual-testbed competition.ign worldName:=tunnel_circuit_practice_01 circuit:=tunnel robotName1:=X1 robotConfig1:=X1_SENSOR_CONFIG_1 works now without problems.
-
- Next is testing with the joystick. The Logitech wireless Gamepad F710 has an A button. Yet, I used the wrong nano-receiver at first. With the correct nano-receiver the led stays green (bluetooth connected).
- Tested the system at nb-ros. Both the Axis and the A-button work.
- Tried on SubT: no response. Made the user part of both the bluetooth and input group; still I see no X1/cmd_vel messages. Sending a Twist message manually does move the X1 robot.
- Logged out to check if the groups were applied. After that I got the same error as before: Invalid MIT-MAGIC-COOKIE Unable to open display: :1.
- Complete reboot solved this issue.
- Inside the docker container the developer is an admin, so I could do sudo jstest /dev/input/js0. That seems to work, also without sudo. Could be the ROS_IP issue.
- Followed, inside the docker container, the steps of the catkin setup. Step 2 (sudo apt-get upgrade) upgraded many ros-melodic packages.
- One of the installs (010-tzdata) gave an installation error. Solved that with sudo apt-get install dialog
- Did inside the docker sudo apt-get install ros-melodic-teleop-twist-keyboard. You can remap the output with rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=X1/cmd_vel.
- Tried to run the simulation with robotConfig1:=BOSTON_SPOT, but that fails on the bosdyn_spot/JointTrajectoryBridge.
- Looked into the list of subT robots; the spot is used for two configurations: MARBLE_SPOT and CTU_CRAS_NORLAB_SPOT.
- Running again with robotConfig1:=CTU_CRAS_NORLAB_SPOT_SENSOR_CONFIG_1 works.
- Yet, when I do rostopic list I only see the /clock and /subt/score, no cmd_vel or Velodyne measurements (or camera images).
March 20, 2023
- Was able to boot again my Ubuntu 22.04 system at home, after some effort.
- Checked RoboStack and did mamba activate robostackenv (I had still a working Ubuntu 18.04 system).
- Tried in this environment SubT docker install
- This works out of the box. Could even start up the teleop-node in the other terminal, although I get a warning that the gain on the joystick couldn't be set.
- Did a test as specified on Configuring a Linux Joystick. Had to add +rw to /dev/input/js0, but the rest seems OK.
- Yet, the robot name is specified with button A, so I need another joystick than the Logitech Attack 3.
- At least sending the command manually (with rostopic pub /X1/cmd_vel geometry_msgs/Twist -r 3 '[0.5,0.0,0.0]' '[0.0,0.0,0.0]' as specified in this tutorial) works.
- Tomorrow I will try to reproduce this on ws7 (also a Ubuntu 22.04 system) with a proper joystick.
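The two bracketed arguments of the rostopic pub call above map onto the linear and angular parts of a geometry_msgs/Twist. A ROS-free sketch of that mapping (field names follow the Twist message definition; the dict stands in for the real message class):

```python
import ast

def twist_from_args(linear_arg, angular_arg):
    """Build a dict mirroring geometry_msgs/Twist from the two
    bracketed YAML arguments used by rostopic pub."""
    lx, ly, lz = ast.literal_eval(linear_arg)
    ax, ay, az = ast.literal_eval(angular_arg)
    return {"linear": {"x": lx, "y": ly, "z": lz},
            "angular": {"x": ax, "y": ay, "z": az}}

# The command above drives X1 forward at 0.5 m/s, with no rotation:
msg = twist_from_args('[0.5,0.0,0.0]', '[0.0,0.0,0.0]')
```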
March 17, 2023
- Continue with the building step on my laptop nb-ros (Ubuntu 18.04, ros-melodic), which is step 5 of subT virtual challenge github.
- Note that the github also contains spot robot packages!
- The build is complete. Next step is to test the simulation. Several models are downloaded, including a fiducial.
-
- Did a scan of the workstations, and selected ws6 as most appropriate.
- Created a user for michelle, but running the docker image as normal user fails on permissions on /var/run/docker.sock
- Followed the instructions and did a fresh docker install, including nvidia-docker2 (which was not previously installed).
- Still, both CUDA 10 and 9.0 do not work. Even a native nvidia-smi gives Failed to initialize NVML: Driver/library version mismatch, so did a reboot
- After the reboot, I see that I have CUDA version 12.
- There is no docker image with cuda:12.0-base, but docker run --runtime=nvidia --rm nvidia/cuda:10.2-base nvidia-smi works and gives:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.89.02 Driver Version: 525.89.02 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
- Actually, all this was maybe not necessary, because in the troubleshooting my initial error was mentioned.
- Now the command ./run.bash osrf/subt-virtual-testbed competition.ign worldName:=tunnel_circuit_practice_01 circuit:=tunnel robotName1:=X1 robotConfig1:=X1_SENSOR_CONFIG_1 starts, although it fails when running remotely because there is no display.
- Yet, also locally I got an error that I couldn't connect to the display.
- A bit strange, because native gazebo works fine.
- Tried to compile on ws6, but this fails both for a normal user and as superuser. Seems to be the ignition versions / GCC versions. What is different with the settings of nb-ros?
- At least the output of gcc --version is the same on both machines.
- Checked the gazebo website, but the default version for Ubuntu 18.04 is Citadel. Citadel is the optimal version for ros-noetic; for older versions they recommend using ros_gz.
- Looked at the ign launch tutorial. This simply calls a launch-file. The competition.ign can be found in ~/subt_ws/src/subt/subt_ign/launch. The header indicates that it can be called with ign launch competition.ign circuit:= worldName:=, with circuitName {cave, tunnel, urban}.
- Specific for ignition is the section which starts with . Here a call is made to roslaunch subt_ros competition_init.launch world_name:=.
- Also note that for libignition-gazebo-sensors-system the render_engine ogre2 is chosen (which is the thing that fails!). Also the gazebogui uses the ogre2 engine.
- At the end (HEREDOC), also two roslaunch commands are specified.
-
- Note that some of the download commands contain fuel.ignitionrobotics/1.0, while the 1.0 should be omitted.
- Because I had to be very patient, tried to start ign launch competition.ign circuit:=cave worldName:=simple_cave_01. Here I also have to be patient, but some new cave corners are now downloaded.
- Stopped the program, and downloaded the collection with ign fuel download -v 4 -j 8 -u "https://fuel.ignitionrobotics.org/OpenRobotics/collections/SubT Tech Repo".
- Now I am able to start a simulation with ign launch -v 4 competition.ign worldName:=simple_cave_01 circuit:=cave robotName1:=X1 robotConfig1:=X1_SENSOR_CONFIG_1. That works:
March 16, 2023
- My laptop nb-ros still has Ubuntu 18.04, so I looked if I could install the subT virtual challenge.
- No problems so far until step 4. Tomorrow I will try to build the workspace.
- Note that this would also be a good test for RoboStack on one of the WS-computers with Ubuntu 20.04 or Ubuntu 22.04.
March 14, 2023
- For navigation, the BARN challenge is also very interesting.
- The approach of the three teams at ICRA 2022 was described in the IEEE R&A Magazine (Dec 2022).
- The winner indicated "We posit that a learnable midlevel planner with the ability to actively explore the environment appropriately to plan the optimal path may be a promising future direction of research to improve autonomous navigation in constrained spaces."
- The code from the winner seems to be available on github.
-
- Jaime Alemany has written in 2012 Design of high quality, efficient simulation environments for USARSim, which describes several modelling techniques (Hokuyo, P3AT). Yet, no mention of the Nao (yet).
- Another interesting thesis (2016) is from Nico Hempe, published by Springer.
- Seems that his models are implemented in the commercial Verosim development environment.
February 20, 2023
- The Choosing Good Stereo Parameters tutorial suggested reading pages 438-444 of Learning OpenCV, but I read the whole of Chapter 12 - Projection and 3D Vision, partly because it also describes the bird's-eye view transformation. Yet, it builds further on the camera intrinsics and distortion coefficients (Chapter 11), which are actually provided in the CameraInfo.
- Did Step 3 and ran rosrun image_view stereo_view stereo:=narrow_stereo_textured image:=image_rect. This shows the disparity map nicely:
- The coloured speckles can be removed by increasing the speckle_size (for instance to 1000 pixels), see step 7.
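Since the tutorial leans on the Chapter 11 intrinsics, here is a minimal pinhole-projection sketch; the fx, fy, cx, cy values are hypothetical, only for illustration:

```python
def project(point, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z) in camera coordinates onto the image
    plane with the pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

# A point on the optical axis lands exactly on the principal point (cx, cy):
u, v = project((0.0, 0.0, 1.0), fx=700.0, fy=700.0, cx=640.0, cy=360.0)
```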
February 17, 2023
- Continue with step 2 of Choosing Good Stereo Parameters tutorial.
- When I just do ROS_NAMESPACE=narrow_stereo_textured rosrun stereo_image_proc stereo_image_proc I get the warnings:
[ WARN] [1676635158.068138635]: The input topic '/narrow_stereo_textured/left/camera_info' is not yet advertised
[ WARN] [1676635158.068197298]: The input topic '/narrow_stereo_textured/right/image_raw' is not yet advertised
- This is still the case when I hit the play button in rqt_bag (checked with rostopic list). Simply rosbag play rotating_detergent_1_6.bag removes those warnings.
- For ros-melodic, the rqt_reconfigure is already more advanced:
February 16, 2023
- Trying to install zedm_capture on nb-ros, which has an Ubuntu 18 ros-melodic version.
- Made a new ~/ros_melodic_ws to avoid conflicts with existing ~/catkin_ws.
- Made the link with ln -s ~/git/jacinto_ros_perception/ros1 zedm_capture
- Copied my repository in this directory with cp -R ~/git/zedm_capture/ zedm_capture
- Did catkin_make --force-cmake, but the ti_objdet_range node failed on the version of PCL (1.8.1 was not accepted).
- Instead removed the complete nodes directory.
- Now only three packages are built: common_msgs, mono_capture, zed_capture
- Made a mistake with cp -R ~/git/zedm_capture/ zedm_capture, and created a directory ~/ros_melodic_ws/src/zedm_capture/zed_capture
- Downloaded the calibration file with ./src/tools/stereo_camera/download_calib_file.sh 14962641
- Modified the generate_rect_map to use another default config directory and serial-number.
- Generated the rectification with python3 generate_rect_map.py
- Copied ~/ros_melodic_ws/src/zedm_capture/zedm_capture/launch/zedm_capture to ~/ros_melodic_ws/src/zedm_capture/drivers/zed_capture/launch.
-
- Started the node with roslaunch zed_capture zedm_capture.launch zed_sn_str:=SN14962641, which fails on:
[ WARN] [1676554568.637582655]: Camera calibration file /home/arnoud/ros_melodic_ws/src/zedm_capture/drivers/zed_capture/config/SN14962641_HD_camera_info_left.yaml not found.
[ WARN] [1676554568.637866385]: Camera calibration file /home/arnoud/ros_melodic_ws/src/zedm_capture/drivers/zed_capture/config/SN14962641_HD_camera_info_right.yaml not found.
[ INFO] [1676554568.637943081]: Cannot find the ZED camera
- Checked with dmesg | tail. Looks good:
[ 5647.097514] usb 4-2: Product: ZED-M
[ 5647.097518] usb 4-2: Manufacturer: Technologies, Inc.
[ 5647.098862] uvcvideo: Found UVC 1.10 device ZED-M (2b03:f682)
[ 5183.251560] input: ZED-M: ZED-M as /devices/pci0000:00/0000:00:14.0/usb4/4-2/4-2:1.0/input/input16
- Note that I have not added the /etc/udev/rules.d/99-slabs.rules from the ZED SDK. These rules can be found in zed-open-capture.
- Installed the udev rules and copied the generated rectification files on the expected place, but still fails.
- Should check if /dev/video4 is readable:
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.13
* /zed_capture/camera_info_left_yaml: SN14962641_HD_cam...
* /zed_capture/camera_info_right_yaml: SN14962641_HD_cam...
* /zed_capture/camera_info_topic_left: camera/left/camer...
* /zed_capture/camera_info_topic_right: camera/right/came...
* /zed_capture/camera_mode: HD
* /zed_capture/device_name: /dev/video4
* /zed_capture/encoding: bgr8
* /zed_capture/frame_id_left: left_frame
* /zed_capture/frame_id_right: right_frame
* /zed_capture/frame_rate: 15
* /zed_capture/image_topic_left: camera/left/image...
* /zed_capture/image_topic_right: camera/right/imag...
- Checked with ls -galt /dev/vi*, which showed /dev/video1.
- The launch had still /dev/video4 as default. With device_name /dev/video1 I get: Successfully found the ZED camera.
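The device number clearly changes between sessions and USB ports, so hard-coding /dev/video4 in the launch file is fragile. A small sketch of how the candidate devices could be enumerated (it takes the directory listing as an argument so it can be tried without a real /dev):

```python
import re

def video_devices(names):
    """Given directory entries (e.g. os.listdir('/dev')), return the
    videoN entries sorted by their numeric index (so video10 > video4)."""
    pattern = re.compile(r"^video(\d+)$")
    found = [(int(m.group(1)), n) for n in names if (m := pattern.match(n))]
    return [n for _, n in sorted(found)]
```

Sorting on the parsed integer avoids the lexicographic trap where "video10" would sort before "video4".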
- The command rostopic list:
/camera/left/camera_info
/camera/left/image_raw
/camera/left/image_raw/compressed
/camera/left/image_raw/compressed/parameter_descriptions
/camera/left/image_raw/compressed/parameter_updates
/camera/right/camera_info
/camera/right/image_raw
/camera/right/image_raw/compressed
/camera/right/image_raw/compressed/parameter_descriptions
/camera/right/image_raw/compressed/parameter_updates
- I could also inspect the image with rqt_image_view /camera/left/image_raw:
- Note that the zed depth capture code shows how the disparity map should be built.
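Not the SGBM implementation that code uses, but a toy 1D sum-of-absolute-differences matcher shows the core idea of building a disparity map (synthetic data; real code would use cv2.StereoSGBM):

```python
def disparity_1d(left, right, x, max_disp=4, half_win=1):
    """Return the disparity d (0..max_disp) minimizing the SAD cost between
    a window around left[x] and the same window in right, shifted by d."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - half_win < 0:
            break  # window would fall outside the right image
        cost = sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-half_win, half_win + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A textured row; the right image is the scene shifted by a true disparity of 2:
true_d = 2
left = [0, 0, 1, 5, 3, 9, 2, 0, 0, 0]
right = left[true_d:] + [0] * true_d   # right[i] == left[i + 2]
```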
- Had to do sudo apt install libusb-1.0-0-dev libhidapi-libusb0 libhidapi-dev to be able to build zed-open-capture, including the examples.
- Also did sudo make install
-- Installing: /usr/local/lib/libzed_open_capture.so
-- Installing: /usr/local/include/zed-open-capture/sensorcapture.hpp
-- Installing: /usr/local/include/zed-open-capture/videocapture.hpp
-- Installing: /usr/local/bin/zed_open_capture_depth_tune_stereo
-- Set runtime path of "/usr/local/bin/zed_open_capture_depth_tune_stereo" to ""
- The zed_open_capture_depth_example is actually not installed, but works fine. The program scans all /dev/video* devices, until it finds a StereoLabs camera:
[sl_oc::video::VideoCapture] INFO: Camera resolution: 2560x720@30Hz
[sl_oc::video::VideoCapture] INFO: Trying to open the device '/dev/video0'
[sl_oc::video::VideoCapture] WARNING: The device '/dev/video0' is not a Stereolabs camera
[sl_oc::video::VideoCapture] INFO: Trying to open the device '/dev/video1'
[sl_oc::video::VideoCapture] INFO: Opened camera with SN: 14962641
[sl_oc::video::VideoCapture] INFO: Device '/dev/video1' opened
Connected to camera sn: 14962641
wget 'https://calib.stereolabs.com/?SN=14962641' -O /home/arnoud/zed/settings/SN14962641.conf
--2023-02-16 16:12:40-- https://calib.stereolabs.com/?SN=14962641
Resolving calib.stereolabs.com (calib.stereolabs.com)... 199.16.130.189
Connecting to calib.stereolabs.com (calib.stereolabs.com)|199.16.130.189|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.stereolabs.com/developers/calib/?SN=14962641 [following]
--2023-02-16 16:12:41-- https://www.stereolabs.com/developers/calib/?SN=14962641
Resolving www.stereolabs.com (www.stereolabs.com)... 199.16.130.189
Connecting to www.stereolabs.com (www.stereolabs.com)|199.16.130.189|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘/home/arnoud/zed/settings/SN14962641.conf’
/home/arnoud/zed/s [ <=> ] 1.15K --.-KB/s in 0s
2023-02-16 16:12:41 (30.0 MB/s) - ‘/home/arnoud/zed/settings/SN14962641.conf’ saved [1182]
Calibration file found. Loading...
Camera Matrix L:
[771.625, 0, 643.1699829101562;
0, 771.64501953125, 364.573486328125;
0, 0, 1]
Camera Matrix R:
[772.7100219726562, 0, 633.489990234375;
0, 772.60498046875, 351.4649963378906;
0, 0, 1]
Camera Matrix L:
[-729.3639059881368, 0, 646.5188980102539, 0;
0, -729.3639059881368, 361.9774513244629, 0;
0, 0, 1, 0]
Camera Matrix R:
[-729.3639059881368, 0, 646.5188980102539, 45884.35555428653;
0, -729.3639059881368, 361.9774513244629, 0;
0, 0, 1, 0]
Error opening stereo parameters file. Using default values.
Stereo parameters write done: /home/arnoud/zed/settings/zed_oc_stereo.yaml
Stereo SGBM parameters:
------------------------------------------
blockSize: 3
minDisparity: 0
numDisparities: 96
mode: 2
disp12MaxDiff: 96
preFilterCap: 63
uniquenessRatio: 5
speckleWindowSize: 255
speckleRange: 1
P1: 216 [Calculated]
P2: 864 [Calculated]
minDepth_mm: 300
maxDepth_mm: 10000
------------------------------------------
Depth of the central pixel: -4124.44 mm
Depth of the central pixel: -3989.94 mm
Depth of the central pixel: -3905.05 mm
- Should check what happens with the stereo parameters file. The program gives three displays:
- Looked at the code, but the L-R Camera Matrices should be printed only once, so the same print is hidden in one of the called functions (initCalibration()?).
- The code indicates that the tool zed_open_capture_depth_tune_stereo can be used to tune stereo parameters (which couldn't be loaded). The tune_stereo gives a number of sliders, but no guidance on which features or measures should be optimized.
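The [Calculated] values of P1 and P2 in the dump above match OpenCV's usual recommendation for the SGBM smoothness penalties, P1 = 8·cn·blockSize² and P2 = 32·cn·blockSize², with cn the number of image channels (apparently 3 here):

```python
def sgbm_penalties(block_size, channels):
    """Recommended SGBM smoothness penalties (as in the OpenCV StereoSGBM
    documentation): P1 penalizes +/-1 disparity changes between
    neighbouring pixels, P2 penalizes larger jumps (P2 > P1)."""
    p1 = 8 * channels * block_size ** 2
    p2 = 32 * channels * block_size ** 2
    return p1, p2

# blockSize 3 on a 3-channel image reproduces the logged 216 / 864:
p1, p2 = sgbm_penalties(block_size=3, channels=3)
```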
- Looked at zed_camera_component.cpp (the ros2 version of the wrapper). A stereo_topic is published, while a depth_topic is optional. The depth topic comes with a CameraInfo, the stereo topic does not.
- With the SDK the depth is retrieved with the function retrieveMeasure().
- Continue with this stereo_image_proc tutorial.
- The tutorial uses a rosbag which was recorded with PR2's narrow stereo camera. This stereo camera is a pair of WGE100 cameras (monochrome).
- The PR2 User manual gives more information, but not the baseline.
- Those should be read from the pr2_description, such as the double stereo urdf and stereo.urdf, which make no distinction between wide and narrow, and use a hack_baseline of stereo_dy == 0.09.
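With the hack_baseline of 0.09 m, depth follows from disparity via the standard stereo relation Z = f·B/d. A small sketch (the focal length of 772 px is a hypothetical value, not taken from the PR2 calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo geometry: Z = f * B / d, with the focal length f
    in pixels, baseline B in meters, and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: f = 772 px, B = 0.09 m, d = 60 px -> Z ~ 1.16 m
z = depth_from_disparity(772.0, 0.09, 60.0)
```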
- When starting rqt_bag rotating_detergent_1_6.bag I receive:
plugin file "/opt/ros/melodic/share/rqt_virtual_joy/plugin.xml" in package "rqt_virtual_joy" not found
- Used the plugin.xml from this project, which seems to work (no warning). I had to toggle thumbnails (rightmost button) to see the left and right images:
February 15, 2023
- Was able to login to github via gh auth login, as indicated in this labbook.
- Was able to do a git push on the zedm_capture repository.
- Checked on February 3 that not only a rgb/image, but also a depth/image is needed.
- So, the next step would be to combine them following this tutorial.
- The ros_noetic_rqt_bag was already implicitly installed.
- Yet, rqt_bag fails on from rqt_bag.bag import Bag, which in turn fails on from python_qt_binding.QtCore import QObject.
- Not just rqt_bag, already python -c "from python_qt_binding import QtCore" fails. Also python -c "import python_qt_binding" fails.
- Strange, because I followed all steps from rqt installation on Ubuntu.
- Tried to build it from source. The build of qt_gui_core fails, because it is plain cmake; the latest versions are already ros2-based. Doing git switch kinetic-devel solved that, but the build still fails on the qt5 version:
/usr/include/x86_64-linux-gnu/qt5/QtCore/qvariant.h:401:16: note: because ‘QVariant::Private’ has user-provided ‘QVariant::Private::Private(const QVariant::Private&)’
401 | inline Private(const Private &other) Q_DECL_NOTHROW
| ^~~~~~~
[ 72%] Meta target for qt_gui_cpp_sip Python bindings...
[ 72%] Built target libqt_gui_cpp_sip
- Checked with sudo apt install pyqt5-dev, which gives v 5.14.1
- Did also a check with pip install PyQt5==, which showed versions (5.7.1, 5.8, 5.8.1.1, 5.8.2, 5.9, 5.9.1, 5.9.2, 5.10, 5.10.1, 5.11.2, 5.11.3, 5.12, 5.12.1, 5.12.2, 5.12.3, 5.13.0, 5.13.1, 5.13.2, 5.14.0, 5.14.1, 5.14.2, 5.15.0, 5.15.1, 5.15.2, 5.15.3, 5.15.4, 5.15.5, 5.15.6).
- With version 5.15.6 python -c "import python_qt_binding" still fails. Also tried other versions; still the same failure on the optional import.
February 14, 2023
February 13, 2023
- The mini-puppet 2 will get an upgrade of its Lidar. The previous LD 06 version had an accuracy of 45mm, the new STL-06P version an accuracy of 10mm.
February 9, 2023
- Modified the generate_rect_map.py script so that it generates an equidistant yaml file.
- The OpenCV function had no problem with a distortion vector of length 4.
- The yaml files can be generated with python3 scripts/generate_rect_map.py -i SN14962641.conf -m HD
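A sketch of the camera_info YAML structure the script now writes; the field names follow the ROS camera_calibration_parsers format, and all numbers below are placeholders, not the real SN14962641 calibration:

```python
def camera_info_yaml(name, width, height, K, D, model="equidistant"):
    """Serialize a minimal ROS camera_info YAML by hand (no pyyaml needed).
    For the equidistant (fisheye) model D has 4 entries, for plumb_bob 5."""
    def fmt(values):
        return "[" + ", ".join(str(v) for v in values) + "]"
    return "\n".join([
        f"image_width: {width}",
        f"image_height: {height}",
        f"camera_name: {name}",
        "camera_matrix:",
        "  rows: 3",
        "  cols: 3",
        f"  data: {fmt(K)}",
        f"distortion_model: {model}",
        "distortion_coefficients:",
        "  rows: 1",
        f"  cols: {len(D)}",
        f"  data: {fmt(D)}",
    ])

# Placeholder values; a real file would come from the SN*.conf calibration.
doc = camera_info_yaml("zed_left", 1280, 720,
                       K=[772.0, 0.0, 640.0, 0.0, 772.0, 360.0, 0.0, 0.0, 1.0],
                       D=[-0.04, 0.01, 0.0, 0.0])
```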
- Yet, starting the node fails with:
PARAMETERS
* /rosdistro: noetic
* /rosversion: 1.15.15
* /zed_capture/camera_info_left_yaml: SN14962641_HD_cam...
* /zed_capture/camera_info_right_yaml: SN14962641_HD_cam...
* /zed_capture/camera_info_topic_left: camera/left/camer...
* /zed_capture/camera_info_topic_right: camera/right/came...
* /zed_capture/camera_mode: HD
* /zed_capture/device_name: /dev/video2
* /zed_capture/encoding: yuv422
* /zed_capture/frame_id_left: left_frame
* /zed_capture/frame_id_right: right_frame
* /zed_capture/frame_rate: 15
* /zed_capture/image_topic_left: camera/left/image...
* /zed_capture/image_topic_right: camera/right/imag...
NODES
/
zed_capture (zed_capture/zed_capture)
[ INFO] [1675953694.633697612]: Initialize the ZED camera
(zed_capture:319973): GStreamer-CRITICAL **: 15:41:34.662:
Trying to dispose element pipeline0, but it is in READY instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(zed_capture:319973): GStreamer-CRITICAL **: 15:41:34.679: gst_element_post_message: assertion 'GST_IS_ELEMENT (element)' failed
[ INFO] [1675953694.695943576]: YUV422
[ INFO] [1675953694.696006997]: Stereo Camera Mode HD, width 160, height 120
[ INFO] [1675953694.722610515]: Loading camera info from yaml files
[ INFO] [1675953694.723636909]: camera calibration URL: package://zed_capture/config/SN14962641_HD_camera_info_left.yaml
[ INFO] [1675953694.735726370]: camera calibration URL: package://zed_capture/config/SN14962641_HD_camera_info_right.yaml
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.2.0) ../modules/core/src/matrix.cpp:465: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'Mat'
- My hypothesis is that /dev/video2 is not the correct device.
- Tried ./ZED_Sensor_Viewer, which indicates no camera.
- Switched from the left USB-port on nb-ros to the right. Now I get sensor readings. Still, same error. dmesg indicates:
[517609.389278] usb 2-3: New USB device found, idVendor=2b03, idProduct=f682, bcdDevice= 1.00
[517609.389286] usb 2-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[517609.389290] usb 2-3: Product: ZED-M
[517609.389292] usb 2-3: Manufacturer: Technologies, Inc.
[517609.391660] usb 2-3: Found UVC 1.10 device ZED-M (2b03:f682)
[517609.397075] input: ZED-M: ZED-M as /devices/pci0000:00/0000:00:0d.0/usb2/2-3/2-3:1.0/input/input106
[517609.406013] usb 3-4: New USB device found, idVendor=2b03, idProduct=f681, bcdDevice= 2.05
[517609.406021] usb 3-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[517609.406025] usb 3-4: Product: ZED-M Hid Device
[517609.406027] usb 3-4: Manufacturer: STEREOLABS
[517609.406029] usb 3-4: SerialNumber: 14962641
[517609.408445] hid-generic 0003:2B03:F681.0051: hiddev1,hidraw4: USB HID v1.11 Device [STEREOLABS ZED-M Hid Device] on usb-0000:00:14.0-4/input0
[517610.935833] uvcvideo 2-3:1.1: Non-zero status (-71) in video completion handler.
[517611.465182] uvcvideo 2-3:1.1: Non-zero status (-71) in video completion handler.
[517612.205808] uvcvideo 2-3:1.1: Non-zero status (-71) in video completion handler.
- Checking ls -galt /dev/* | head showed that /dev/video4 and /dev/video5 were created when switching usb-ports. Changed the zedm_capture.launch file to use /dev/video4. Now I get:
[ INFO] [1675955848.296471989]: Successfully found the ZED camera
Although I also get three times:
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
(zed_capture:323519): GStreamer-CRITICAL **: 16:17:28.207:
Trying to dispose element appsink0, but it is in READY instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(zed_capture:323519): GStreamer-CRITICAL **: 16:17:28.228: gst_element_post_message: assertion 'GST_IS_ELEMENT (element)' failed
- At least rostopic list gives:
/camera/left/camera_info
/camera/left/image_raw
/camera/right/camera_info
/camera/right/image_raw
- Did rosrun image_view image_view image:=/camera/left/image_raw, which gave a view:
- Calling rqt_image_view /camera/left/image_raw (as suggested in the zed_capture documentation) fails, but on the Qt binding:
File "/opt/ros/noetic/lib/python3/dist-packages/python_qt_binding/binding_helper.py", line 133, in _named_import
module = builtins.__import__(name)
ValueError: PyCapsule_GetPointer called with incorrect name
- Also tried rosrun rviz rviz, which gave Unsupported image encoding [yuv422].
- Changing in the launch file the encoding from yuv422 to bgr8 solves this:
February 8, 2023
- Cloned the zed-ros2 wrapper in ros2_ws instead of ros2_ws/src. After installing it in the right place, rosdep indicated that ros-humble-xacro still had to be installed. After that the build succeeded.
- When starting the node, it complained about the USB port (moved to USB-C with a USB-B to C converter) and a missing calibration file.
- The instructions to download the calibration file can be found here. This calibration file has to be copied to /usr/local/zed/settings, and actually contains all the parameters which are needed for the CameraInfo.
- Note that the function fillCamInfo for ros2 contains some extra logic for the different ZED-models (also between ZED and ZED-mini).
- Adding my print-code starting at line 1957 of zed_camera_component.cpp
- It seems to work, although both the distortion models PLUMB and EQUIDISTANT are printed.
- After PLUMB the left and right D[*] correspond with the values LEFT_CAM_2K and RIGHT_CAM_2K in the calibration file. Note that the value for k4 is not used for D[3-4]. Neither the print-statement on the old Extrinsic nor the one on the ROS frame is printed, so no rawParam are used. Saved v1, and rewrote the actually used values.
- In both zed_ros_wrapper and the zed_ros2_wrapper the baseline is used to calculate the right P[3] or p[3].
- It seems that the function fillCamInfo is called twice, once with the distortion model PLUMB and once with EQUIDISTANT. With the PLUMB model the D and R matrices keep their default values. The EQUIDISTANT one is much more interesting. Note that with EQUIDISTANT also 'using ROS frame' is printed.
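Given the rectified matrices printed in the February 16 entry above (fx = -729.3639... and a right-camera P[3] of 45884.3555...), the baseline can be recovered from the standard relation Tx = -fx·B. A small check (the units in this calibration appear to be millimeters):

```python
def baseline_from_projection(fx, tx):
    """Recover the stereo baseline from a rectified projection matrix,
    using Tx = -fx * B, i.e. B = -Tx / fx (abs() hides the sign convention)."""
    return abs(tx / fx)

# Values as printed by the zed-open-capture depth example on February 16:
b_mm = baseline_from_projection(fx=-729.3639059881368, tx=45884.35555428653)
# roughly 63 mm, consistent with the ZED-M's nominal baseline
```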
-
- Found a ros-wrapper implementation based on OpenCV capture from Texas Instruments. Both ros1 and ros2 version exists.
- This is part of the TI Robotics SDK, which is docker based.
- The ros1-code can be found here.
- Did git clone git://git.ti.com/processor-sdk-vision/jacinto_ros_perception.git
- This repository has two subdirectories. Moved the ros1 subdirectory to catkin_ws/src. The command rosdep install --from-paths src --ignore-src -r -y succeeds, except for the [pcl] dependency of ti_objdet_range.
- Put a link from ~/git/jacinto_ros_perception/cmake/ to catkin_ws/src/cmake, which allows making all targets.
- In the directory ~/catkin_ws/src/jacinto_ros1_perception/drivers/zed_capture/config the calibration file of the ZED1 camera used in zed1_2020-11-09-18-01-08.bag can be found. Placed my ZED-m calibration file there.
- According to the zed_capture documentation, I can use the download_calib_file.sh and generate_rect_map.py to generate camera_info YAML files. Yet, have to find those scripts.
- Those scripts can be found at ~/git/jacinto_ros_perception/tools/stereo_camera.
- Executed python3 -m pip install -r requirements.txt in that directory, which installed the missing configparser and argparse.
- Modified the generate_rect_map.py to load from the correct directory (could also have been done with option -p ...). Yet, the script has to be modified anyway, because the zed1 also had the p1 and p2 keywords.
- In the zed_ros_wrapper k1-k3 are disto[0], disto[1] and disto[4], while p1-p2 are disto[2] and disto[3].
- As indicated in the zed_ros2_wrapper: the ZED_M uses the EQUIDISTANT model, so the distortion vector has length 4 instead of 5. Keyword k5 is used for disto[5], which is stored in d[3]! So if keyword k5 is there, one should not look for keywords p1 and p2.
- The script calls a main(calib_file, camera_mode, driver_path), which calls parse_calib_file_wrapper. This currently returns D1, D2 with length 5. These are later used in cv2.stereoRectify and cv2.initUndistortRectifyMap.
- Looked up the OpenCV documentation. No explicit mention of the dimension of the two distCoeffs.
- The main always saves it as a plumb_bob model (instead of an equidistant model).
- Should look tomorrow at how these yaml files are loaded by roslaunch zed_capture zed_capture.launch.
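The keyword juggling above can be captured in a small helper. A sketch under the assumptions stated in the bullets (a fisheye-style keyword such as k4 or k5 signals the equidistant model; the exact keyword names depend on the SN*.conf file):

```python
def distortion_from_conf(cam):
    """Map calibration-file keywords to a ROS (distortion_model, D) pair.

    plumb_bob:   D = [k1, k2, p1, p2, k3]
    equidistant: D = [k1, k2, k3, k4]   (no tangential p1/p2 terms)
    """
    if "k4" in cam or "k5" in cam:
        # Fisheye-style calibration: do not look for p1/p2 keywords.
        k4 = cam.get("k4", cam.get("k5", 0.0))
        return "equidistant", [cam["k1"], cam["k2"], cam.get("k3", 0.0), k4]
    return "plumb_bob", [cam["k1"], cam["k2"],
                         cam.get("p1", 0.0), cam.get("p2", 0.0),
                         cam.get("k3", 0.0)]
```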
February 7, 2023
- Checked in Computer Systems labbook the eGPU connection of nb-dual. The diagnostics looks good:
- Trying to modify zed_wrapper_nodelet.cpp. Added print-statements in fillCamInfo() function. Did a build with catkin_make -DCMAKE_BUILD_TYPE=Release in ~/catkin_ws.
- The nodelet is built, but running it fails on the ZED connection -> NO GPU COMPATIBLE:
[ INFO] [1675777935.299747460]: Initializing nodelet with 8 worker threads.
[ INFO] [1675777935.314348777]: ********** Starting nodelet '/zedm/zed_node' **********
[ INFO] [1675777935.314396854]: SDK version : 3.8.2
[ INFO] [1675777935.314414510]: *** GENERAL PARAMETERS ***
[ INFO] [1675777935.314696375]: * Camera Name -> zedm
[ INFO] [1675777935.314989163]: * Camera Resolution -> HD720
[ INFO] [1675777935.315215990]: * Camera Grab Framerate -> 15
[ INFO] [1675777935.315454326]: * Gpu ID -> -1
[ INFO] [1675777935.315662026]: * Camera ID -> 0
[ INFO] [1675777935.315868202]: * Verbose -> DISABLED
[ INFO] [1675777935.316285686]: * Camera Flip -> DISABLED
[ INFO] [1675777935.316658030]: * Self calibration -> ENABLED
[ INFO] [1675777935.317041424]: * Camera Model by param -> zedm
[ INFO] [1675777935.317060584]: *** VIDEO PARAMETERS ***
[ INFO] [1675777935.317281818]: * Image resample factor -> 0.5
[ INFO] [1675777935.317490034]: * Extrinsic param. frame -> X RIGHT - Y DOWN - Z FWD
[ INFO] [1675777935.317503605]: *** DEPTH PARAMETERS ***
[ INFO] [1675777935.317701935]: * Depth quality -> PERFORMANCE
[ INFO] [1675777935.317904205]: * Depth Sensing mode -> STANDARD
[ INFO] [1675777935.318108018]: * OpenNI mode -> DISABLED
[ INFO] [1675777935.318303293]: * Depth Stabilization -> ENABLED
[ INFO] [1675777935.318511922]: * Minimum depth -> 0.35 m
[ INFO] [1675777935.318727563]: * Maximum depth -> 10 m
[ INFO] [1675777935.318929348]: * Depth resample factor -> 0.5
[ INFO] [1675777935.318942655]: *** POSITIONAL TRACKING PARAMETERS ***
[ INFO] [1675777935.319317018]: * Positional tracking -> ENABLED
[ INFO] [1675777935.319519727]: * Path rate -> 2 Hz
[ INFO] [1675777935.319719613]: * Path history size -> 1
[ INFO] [1675777935.320154056]: * Odometry DB path -> ~/.ros/zed_area_memory.area
[ INFO] [1675777935.320540433]: * Save Area Memory on closing -> DISABLED
[ INFO] [1675777935.320955359]: * Area Memory -> ENABLED
[ INFO] [1675777935.321351417]: * IMU Fusion -> ENABLED
[ INFO] [1675777935.321768721]: * Floor alignment -> DISABLED
[ INFO] [1675777935.322148687]: * Init Odometry with first valid pose data -> ENABLED
[ INFO] [1675777935.322522855]: * Two D mode -> DISABLED
[ INFO] [1675777935.322904843]: *** MAPPING PARAMETERS ***
[ INFO] [1675777935.323287800]: * Mapping -> DISABLED
[ INFO] [1675777935.323662022]: * Clicked point topic -> /clicked_point
[ INFO] [1675777935.323678534]: *** OBJECT DETECTION PARAMETERS ***
[ INFO] [1675777935.324055698]: * Object Detection -> DISABLED
[ INFO] [1675777935.324072715]: *** SENSORS PARAMETERS ***
[ INFO] [1675777935.324270691]: * Sensors timestamp sync -> DISABLED
[ INFO] [1675777935.324472680]: * Max sensors rate -> 200
[ INFO] [1675777935.324486108]: *** SVO PARAMETERS ***
[ INFO] [1675777935.324856626]: * SVO input file: ->
[ INFO] [1675777935.325062521]: * SVO REC compression -> H265 (HEVC)
[ INFO] [1675777935.325440951]: *** COORDINATE FRAMES ***
[ INFO] [1675777935.326629527]: * map_frame -> map
[ INFO] [1675777935.326646911]: * odometry_frame -> odom
[ INFO] [1675777935.326657195]: * base_frame -> base_link
[ INFO] [1675777935.326669479]: * camera_frame -> zedm_camera_center
[ INFO] [1675777935.326680452]: * imu_link -> zedm_imu_link
[ INFO] [1675777935.326691125]: * left_camera_frame -> zedm_left_camera_frame
[ INFO] [1675777935.326701625]: * left_camera_optical_frame -> zedm_left_camera_optical_frame
[ INFO] [1675777935.326714309]: * right_camera_frame -> zedm_right_camera_frame
[ INFO] [1675777935.326725360]: * right_camera_optical_frame -> zedm_right_camera_optical_frame
[ INFO] [1675777935.326736474]: * depth_frame -> zedm_left_camera_frame
[ INFO] [1675777935.326747103]: * depth_optical_frame -> zedm_left_camera_optical_frame
[ INFO] [1675777935.326757818]: * disparity_frame -> zedm_left_camera_frame
[ INFO] [1675777935.326768626]: * disparity_optical_frame -> zedm_left_camera_optical_frame
[ INFO] [1675777935.326785967]: * confidence_frame -> zedm_left_camera_frame
[ INFO] [1675777935.326796965]: * confidence_optical_frame -> zedm_left_camera_optical_frame
[ INFO] [1675777935.327248111]: * Broadcast odometry TF -> ENABLED
[ INFO] [1675777935.327683959]: * Broadcast map pose TF -> ENABLED
[ INFO] [1675777935.328071678]: * Broadcast IMU pose TF -> ENABLED
[ INFO] [1675777935.328089558]: *** DYNAMIC PARAMETERS (Init. values) ***
[ INFO] [1675777935.328291285]: * [DYN] Depth confidence -> 30
[ INFO] [1675777935.328497063]: * [DYN] Depth texture conf. -> 100
[ INFO] [1675777935.328712014]: * [DYN] pub_frame_rate -> 15 Hz
[ INFO] [1675777935.328949843]: * [DYN] point_cloud_freq -> 10 Hz
[ INFO] [1675777935.329149802]: * [DYN] brightness -> 4
[ INFO] [1675777935.329347078]: * [DYN] contrast -> 4
[ INFO] [1675777935.329550287]: * [DYN] hue -> 0
[ INFO] [1675777935.329746328]: * [DYN] saturation -> 4
[ INFO] [1675777935.329944540]: * [DYN] sharpness -> 4
[ INFO] [1675777935.330142247]: * [DYN] gamma -> 8
[ INFO] [1675777935.330345884]: * [DYN] auto_exposure_gain -> ENABLED
[ INFO] [1675777935.330902803]: * [DYN] auto_whitebalance -> ENABLED
[ INFO] [1675777935.334061285]: * Camera coordinate system -> Right HANDED Z UP and X FORWARD
[ INFO] [1675777935.334116896]: *** Opening ZED-M...
[ INFO] [1675777939.467769052]: ZED connection -> NO GPU COMPATIBLE
- Also note the GPU=-1 at the start. Should test it on a system with an internal GPU.
- Moved to ws10 and installed ros-noetic-desktop.
- Downloaded the ZED SDK, but that requires CUDA 11.7, while ws10 has CUDA 11.8.
- Tried to log in to ws8, but no screen, nor remote login. The problem was partly that I needed to update my wifi, partly that I had to use the ssh-server.
- Checked ws8, which has Ubuntu 20.04, but CUDA 11.4.
- Checked ws7, which has CUDA 11.7, but Ubuntu 22.04.
- Checked ws6, which has Ubuntu 18.04 and CUDA 11.8
- Checked ws5, which has Ubuntu 18.04 and CUDA 11.0
- Checked ws4, which has Ubuntu 18.04 and CUDA 11.8
- Checked ws3, which has Ubuntu 22.04 and CUDA 11.6
- Checked ws1, which has Ubuntu 20.04 and CUDA 10.1
- So, ws7 seems the easiest option, but that would require installing the zed-ros2-wrapper.
- No ros on ws7, so installing ros2 Humble, as required by the zed-ros2-wrapper. Installed ros-humble-ros-base, followed by ros-humble-rviz2. Also installed ros-dev-tools.
- When installing the ZED SDK, CUDA 11.7 was suddenly gone (including nvidia-smi). There were some problems with held-back packages, which I solved with sudo apt install --only-upgrade for all 4 packages. Now I was able to install cuda-tools-11-7. Still the ZED install script complained that nvidia-smi could not be found (a reboot was suggested), and reported conflicts: tensorflow 2.11.0 with protobuf 3.20.3, apache-beam 2.43.0 with dill 0.3.6 and numpy 1.24.2.
- Installed nvidia-utils-515 and rebooted. Still a conflict for nvidia-smi. Also installed nvidia-drivers-515. Now nvidia-smi works. Continued with the ZED SDK installation. The AI models are optimized in circa 10 min.
- In the meantime apache-beam had a conflict with multiprocess==0.70.14, so installed multiprocess==0.70.7, which solved the conflict. Version 0.70.10 gives a conflict again; version 0.70.9 still works.
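Tracking which version pip actually left behind is half the battle in these conflicts. A minimal sketch (plain stdlib, nothing ZED-specific; the package names in the comment are just the ones from this entry) to query installed package versions:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# e.g. installed_version("multiprocess") shows which pin actually stuck
```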
- Building the zed-ros2-wrapper still failed. Problem was that I installed as admin, and not as a user with admin rights. I had to make the files in /usr/local/zed readable, including zed-config.cmake. I also had to install ros-humble-diagnostic-updater to be able to build zed_components.
- The wrapper is built!
- Yet, calling ros2 launch zed_wrapper zedm.launch.py fails on a missing directory xacro. Time to go home. Could try to test it on the ros-noetic stack on my computer at home.
-
- Today's open ros-class is on custom nav2 plugins. See this list for an overview of published plugins.
February 3, 2023
- Made a successful build of my version of zed_open_capture_video_example, which does a ros::init(), by copying the ros-libraries from ~/catkin_ws/build/zed-ros-wrapper/zed_wrapper/CMakeFiles/zed_wrapper_node.dir/link.txt. The other dependencies (such as /usr/lib/x86_64-linux-gnu/libpython3.8.so) were not (yet) needed.
- In the next version the ZEDWrapperNodelet was also loaded, which actually worked (after I started a roscore in another terminal). Yet, the official ZEDWrapperNodelet makes use of the official SDK (which requires a GPU). The zed_wrapper_node is otherwise quite empty, so looking at the nodelet code.
- Looked at the RTAB-Map setup, which requires (for the Kinect) three inputs: rgb/image, rgb/camera_info and depth/image. The rgbd_sync node can make an rgbd_image from these three inputs.
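The rgbd_sync node pairs those three topics by approximate timestamp. A rough pure-Python sketch of that matching idea (the slop parameter and greedy pairing are my simplification, not rtabmap_ros code):

```python
def match_triplets(rgb_ts, info_ts, depth_ts, slop=0.02):
    """Greedily pair timestamps (in seconds) from three streams within `slop`."""
    matches = []
    for t in rgb_ts:
        # nearest candidate in each of the other two streams
        info = min(info_ts, key=lambda s: abs(s - t), default=None)
        depth = min(depth_ts, key=lambda s: abs(s - t), default=None)
        if info is not None and depth is not None \
                and abs(info - t) <= slop and abs(depth - t) <= slop:
            matches.append((t, info, depth))
    return matches
```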
February 2, 2023
- Unboxing the ZED-mini. Connected the device to nb-dual. dmesg | tail shows:
[76600.481575] usb 3-4.2: new full-speed USB device number 37 using xhci_hcd
[76600.545673] usb 2-3.2: new SuperSpeed USB device number 15 using xhci_hcd
[76600.566844] usb 2-3.2: New USB device found, idVendor=2b03, idProduct=f682, bcdDevice= 1.00
[76600.566854] usb 2-3.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[76600.566858] usb 2-3.2: Product: ZED-M
[76600.566860] usb 2-3.2: Manufacturer: Technologies, Inc.
[76600.568418] usb 2-3.2: Failed to set U1 timeout to 0xff,error code -32
[76600.571418] usb 2-3.2: Found UVC 1.10 device ZED-M (2b03:f682)
[76600.575655] usb 2-3.2: Failed to set U1 timeout to 0xff,error code -32
[76600.578486] input: ZED-M: ZED-M as /devices/pci0000:00/0000:00:0d.0/usb2/2-3/2-3.2/2-3.2:1.0/input/input42
[76600.587494] usb 3-4.2: New USB device found, idVendor=2b03, idProduct=f681, bcdDevice= 2.05
[76600.587507] usb 3-4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[76600.587512] usb 3-4.2: Product: ZED-M Hid Device
[76600.587517] usb 3-4.2: Manufacturer: STEREOLABS
[76600.587521] usb 3-4.2: SerialNumber: 14962641
[76600.592144] hid-generic 0003:2B03:F681.000F: hiddev1,hidraw4: USB HID v1.11 Device [STEREOLABS ZED-M Hid Device] on usb-0000:00:14.0-4.2/input0
- Downloaded the ZED SDK v3.8.2 for Ubuntu 20 and CUDA 11.7 from stereolabs. Haven't installed it yet.
- First tried whether I could already run the hello ZED example with the old SDK.
- Cloned in ~/git the zed-examples from github.
- The hello-ZED python example fails on: import pyzed.sl as sl.
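A quick way to check whether the ZED Python API is visible to the current interpreter, without triggering the full (CUDA-touching) import; find_spec and the module name pyzed are the only assumptions here:

```python
import importlib.util

def module_available(name: str) -> bool:
    """True if the named top-level module can be found on the current sys.path."""
    return importlib.util.find_spec(name) is not None

# module_available("pyzed") distinguishes a missing install from a failing import
```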
- Did ./ZED_SDK_Ubuntu20_cuda11.7_v3.8.2.zstd.run.
- First had to do sudo apt install zstd.
- CUDA11.7 was found. Selected all default options of installation.
- Got the recommendation:
The ZED Python API was installed for 'python3', when using conda environement or virtualenv, the ZED Python API may need to be resetup to be available (using 'python /usr/local/zed/get_python_api.py')
- After Do you want to run the ZED Diagnostic to download all AI models [Y/n] I get Cannot find CUDA, which could be true because of my .bashrc settings.
- pip reports: Successfully installed cython-0.29.33 numpy-1.24.1 pyzed-3.8, although I also see the error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.6.0 requires numpy~=1.19.2, but you have numpy 1.24.1 which is incompatible.
tensorflow 2.6.0 requires typing-extensions~=3.7.4, but you have typing-extensions 4.2.0 which is incompatible.
openvino 2022.1.0 requires numpy<1.20,>=1.16.6, but you have numpy 1.24.1 which is incompatible.
openvino-dev 2022.1.0 requires numpy<1.20,>=1.16.6, but you have numpy 1.24.1 which is incompatible.
openvino-dev 2022.1.0 requires numpy<=1.21,>=1.16.6; python_version > "3.6", but you have numpy 1.24.1 which is incompatible.
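The tensorflow conflict above is a "~=" (compatible release) constraint: numpy~=1.19.2 means at least 1.19.2 but still 1.19.*. A hand-rolled sketch of that rule (real resolvers use packaging.specifiers; this only handles plain numeric versions):

```python
def satisfies_compat(version: str, spec: str) -> bool:
    """Check a PEP 440 '~=' spec, e.g. satisfies_compat('1.19.5', '~=1.19.2')."""
    assert spec.startswith("~=")
    base = tuple(int(x) for x in spec[2:].split("."))
    v = tuple(int(x) for x in version.split("."))
    # at least the base version, and matching on all but the last component
    return v >= base and v[:len(base) - 1] == base[:-1]

# numpy 1.24.1 fails numpy~=1.19.2, exactly as pip reports
```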
- Said Yes to Do you want to run the ZED Diagnostic to optimize all AI models, it may take a very long time, up to multiple hours but will be done only once. Otherwise it will be optimized just in time when running the ZED SDK [Y/n], although it is not clear if this optimization starts directly.
- The hello_zed.py now starts, but fails on:
in void sl::Mat::alloc(size_t, size_t, sl::MAT_TYPE, sl::MEM) : Err [100]: no CUDA-capable device is detected.
CUDA error at Camera.cpp:121 code=100(cudaErrorNoDevice) "void sl::Camera::close()"
CUDA error at Camera.cpp:149 code=100(cudaErrorNoDevice) "void sl::Camera::close()"
CUDA error at Camera.cpp:174 code=100(cudaErrorNoDevice) "void sl::Camera::close()"
CUDA error at Camera.cpp:214 code=100(cudaErrorNoDevice) "void sl::Camera::close()"
CUDA error at Camera.cpp:219 code=100(cudaErrorNoDevice) "void sl::Camera::close()"
CUDA error at CameraUtils.hpp:773 code=100(cudaErrorNoDevice) "void sl::ObjectsDetectorHandler::clear()"
CUDA error at CameraUtils.hpp:789 code=100(cudaErrorNoDevice) "void sl::ObjectsDetectorHandler::clear()"
CUDA error at CameraUtils.hpp:795 code=100(cudaErrorNoDevice) "void sl::ObjectsDetectorHandler::clear()"
CUDA error at CameraUtils.hpp:798 code=100(cudaErrorNoDevice) "void sl::ObjectsDetectorHandler::clear()"
CUDA error at CameraUtils.hpp:800 code=100(cudaErrorNoDevice) "void sl::ObjectsDetectorHandler::clear()"
CUDA error at CameraUtils.hpp:809 code=100(cudaErrorNoDevice) "void sl::ObjectsDetectorHandler::clear()"
CUDA error at Camera.cpp:234 code=100(cudaErrorNoDevice) "void sl::Camera::close()"
- That is sort of nice: it means that we can test the no-gpu package directly.
- Followed the instructions from zed-open-capture and installed: sudo apt install libusb-1.0-0-dev libhidapi-libusb0 libhidapi-dev.
- The /etc/udev/rules.d/99-slabs.rules which came with the installation of the SDK is larger, so probably more complete. Didn't update the udev rule.
- Specified cmake .. -DBUILD_SENSORS=OFF -DBUILD_EXAMPLES=YES and ran ./build/zed_open_capture_video_example.
- That works (Connected to camera sn: 14962641[/dev/video6]):
- Emily used ros-noetic on December 13, 2022, so that seems to be the version she is working with.
- In November, 2021 I worked with the ros2-wrapper. At that time, I used the ZED SDK v3.6.1 and CUDA 11.4.
- Tried /usr/local/zed/tools/ZED_Diagnostic.
- As expected, the CameraTest works fine, but no GPU is found (which also fails the ZED SDK check):
- The camera works with the provided USB-cables, or with the Thunderbolt cable connected to the USB-C port of the Dell USB-C adapter ring.
- The ros-wrapper can be found here.
- Built the ros-wrapper and launched it with roslaunch zed_wrapper zedm.launch.
- Unfortunately, the wrapper fails on [ INFO] [1675350650.534335602]: ZED connection -> NO GPU DETECTED. The only published topic is /zedm/joint_states.
- Made my own Makefile based on the flags.make and link.txt generated in ./build/CMakeFiles/zed_open_capture_video_example.dir/.
- ros.h is included from /opt/ros/noetic/include/; I should also include the ros-library in the link command.
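For building such a Makefile, the -l entries in CMake's generated link.txt are the part worth extracting. A small helper sketch (the example command line below is made up, not the real link.txt contents):

```python
def link_libraries(link_cmd: str):
    """Extract library names from -l flags in a CMake link.txt command line."""
    return [tok[2:] for tok in link_cmd.split() if tok.startswith("-l")]

cmd = "c++ -o example main.o -lroscpp -lrosconsole /usr/lib/libfoo.so"
# link_libraries(cmd) -> ['roscpp', 'rosconsole']
```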
February 1, 2023
- In this SSRR paper they build the Velodyne VLP-16 Puck into a small multi-sensor package. The sensor data is read out via ethernet; no mention is made of the power or interface box.
- Here is a working ros setup with foxglove.
January 30, 2023
- Received, next to TurtleBot 4 and 5, 4 batteries (numbered 0-3), two chargers (one marked TurtleBot3) and one Intenso 16 Gb memory-stick.
- I am charging batteries 1 and 2.
January 27, 2023
- Learned that you could give a PoseEstimate from Rviz2, with the Tool Properties from the Panel.
- Also nice trick to use ros2 topic info -v to check the quality of service.
- Also nice for a multi-robot setting is to check the tf_tree with ros2 run tf2_tools view_frames.
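The QoS check with ros2 topic info -v matters because of one matching rule that bites often: a RELIABLE subscription will not be matched with a BEST_EFFORT publisher. A sketch of just that reliability rule (the other QoS policies have analogous compatibility tables):

```python
def reliability_compatible(pub: str, sub: str) -> bool:
    """ROS 2 reliability matching: only a RELIABLE publisher can serve a RELIABLE subscription."""
    return not (sub == "RELIABLE" and pub == "BEST_EFFORT")
```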
January 26, 2023
January 4, 2023
- This dual-fisheye lens is an alternative to the Ricoh Theta. The site has several other interesting new camera products.
January 3, 2023
- Raffaello Bonghi updated his jetson stats package. Should try it on my Nanosaur.
- The jetson-stats package was already installed on the nanosaur. I only get a warning from the crypto package that python3.6 is no longer supported.
- Did jetson_release -v, which indicated that the Jetpack is unknown (but L4T 32.7.4).
- The CUDA version is 10.2.300 and OpenCV is not compiled with CUDA support.
- Jetson-stats itself has version 3.1.4. Using the update option with jetson_config says the same.
-
- Continue with February 2022.
- The install instructions for the nanosaur point to Jetpack 4.6.1, although Jetpack 5.0.2 is now available. No easy way to upgrade, so I should burn a new SD-card.
-
- Continue with Nanosaur keyboard tutorial. Note that the eyes are not visible, so first check the head connection.
- Put new batteries in my Logitech joystick, but the green led next to the Mode only blinks for a while (used the Logitech Nanoreceiver in the nanosaur).
- Looked at nb-dual (Linux-boot). The nanosaur script is still there; the command nanosaur info gave ROS distro foxy. The command nanosaur help gave v1.5.1. So, maybe time to do nanosaur update.
-
- Checked the head, but it seems that it is correctly installed.
- Tried the suggestion of Joey on the robot: nanosaur down, nanosaur wakeup cover 2. Yet, nanosaur info still indicated the zed head (4). Repeated those two commands, but first did nanosaur cover (selected (2) pi). Yet, still no eyes (although all four dockers are up, according to nanosaur info).
-
- Did on nb-dual nanosaur update. That suggested to do a git pull in ~/nanosaur_ws/src/nanosaur. Now nanosaur help gives v2.1.0.
- When I do nanosaur config I get a new empty robot.yml file.
- When I do nanosaur teleop I get the complaint that ~/nanosaur_core/install/setup.bash doesn't exist. No ip of the robot is given.
- At least downgraded the complaining package with pip install cryptography==36.0.2, which saves me from the constant complaints. This seems to be the latest python3.6 version; newer versions are 37.0.4, 38.0.4 and 39.0.0. The install didn't say from which version it downgraded.
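Picking the newest cryptography release that still supports python3.6 was trial and error, but the selection itself is just a version comparison. A sketch (the version numbers are copied from the entry above; plain numeric versions only):

```python
def latest_supported(versions, ceiling):
    """Highest version not above `ceiling`, comparing dotted numeric versions."""
    def parse(v):
        return tuple(int(x) for x in v.split("."))
    ok = [v for v in versions if parse(v) <= parse(ceiling)]
    return max(ok, key=parse) if ok else None

# latest_supported(["36.0.2", "37.0.4", "38.0.4", "39.0.0"], "36.9.9") -> "36.0.2"
```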
Labbook 2022
Labbook 2021
Labbook 2020
Labbook 2019
Labbook 2018
Labbook 2017
Labbook 2016
Labbook 2015
Labbook 2014
Labbook 2013
Labbook 2012
Labbook 2011
Labbook 2010
Labbook 2009
Labbook 2008
Labbook 2007
Labbook 2006
Labbook 2005
Labbook 2004