The idea is to investigate whether it is possible to sample particles in a 2D grid with the same resolution as the local map, and to fuse nearby cells for each cell of the map according to the variance of the robot's translation.
The I/O tutorial (partly done). It consists of more than 4 tutorials:
The PCD (Point Cloud Data) file format tutorial (finished)
Reading PointCloud data from PCD files tutorial (finished)
Writing PointCloud data to PCD files tutorial (skipped)
Concatenate the points of two Point Clouds tutorial (skipped)
The OpenNI Grabber Framework in PCL tutorial (simple-viewer was working with the Asus Xtion Pro on March 15)
Grabbing point clouds from DepthSense cameras tutorial (realsense-viewer worked with the Intel RealSense DS435 on March 24; not clear if the pcl library was built with WITH_DSSDK enabled. On March 10 I had the in-hand scanner example working with the Intel RealSense DS435)
A lot of the data is moved to /media/arnoud/DATA on my Ubuntu XPS desktop, but I couldn't find a mounting script.
October 18, 2021
Matlab R2021b Lidar toolbox now has methods to organize point clouds.
August 25, 2021
Installed the Looking Glass on the Windows partition of nb-dual, including the HoloPlay Studio and DepthRecorder.
The HoloPlay in principle imports RGBD, which is a combined recording: on the left a window with RGB, on the right the depth.
The DepthRecorder can only work with a Kinect, not even with the Xtion.
Downloaded DepthKit, which could make short recordings also with the RealSense D435.
Made a short recording, exported it and imported the mp4 in HoloPlay Studio. The recording is in landscape, while the Looking Glass is in portrait mode, so make sure that you record at the left of the image. I could zoom in by using the scroll button of my mouse, but dragging didn't work for this video.
Used the raw sensor data, while I should have used the exported mp4. Once imported, before playing, I could select the Depth position Bottom, and zoom and drag:
August 18, 2021
Installed the Linux software for the Looking Glass. Works fine on Ubuntu 20.04. The HoloPlay Studio is only available for Mac and Windows, so I should use the CoreSDK to display my own content.
The DepthKit also looks like a good tool (multiple cameras, also LiDAR), but is only available for Windows.
August 16, 2021
New paper on Point Cloud Segmentation called LatticeNet.
In 2019 a turtlebot3 update for ROS2 Dashing was promised.
The source code of the ROS2 Dashing packages can be found on github.
The home directory contains a turtlebot_ws with this code, which could be activated with source install/setup.sh.
Tried what would happen if I did sudo apt-get install --dry-run ros-dashing-turtlebot3. 336 new packages would be installed (and 317 not upgraded). Most of the turtlebot packages were already in the ws, but additional packages were ros-dashing-turtlebot3-cartographer and ros-dashing-turtlebot3-navigation2. Didn't see ros-dashing-turtlebot3-slam.
Just tried roslaunch realsense2_camera rs_camera.launch filters:=pointcloud, which seems to go well, except for the warning at the end:
[ INFO] [1618222592.687425603]: Initializing nodelet with 12 worker threads.
[ INFO] [1618222592.910287224]: RealSense ROS v2.2.22
[ INFO] [1618222592.910321474]: Built with LibRealSense v2.42.0
[ INFO] [1618222592.910336296]: Running with LibRealSense v2.42.0
[ INFO] [1618222593.176727301]: Device with serial number 750612070438 was found.
[ INFO] [1618222593.176803568]: Device with physical ID 2-2-3 was found.
[ INFO] [1618222593.176835377]: Device with name Intel RealSense D435 was found.
[ INFO] [1618222593.177646294]: Device with port number 2-2 was found.
[ INFO] [1618222593.177689329]: Device USB type: 3.2
[ INFO] [1618222593.183051341]: getParameters...
[ INFO] [1618222593.251452030]: setupDevice...
[ INFO] [1618222593.251476983]: JSON file is not provided
[ INFO] [1618222593.251487610]: ROS Node Namespace: camera
[ INFO] [1618222593.251498392]: Device Name: Intel RealSense D435
[ INFO] [1618222593.251508773]: Device Serial No: 750612070438
[ INFO] [1618222593.251518753]: Device physical port: 2-2-3
[ INFO] [1618222593.251527235]: Device FW version: 05.12.05.00
[ INFO] [1618222593.251537211]: Device Product ID: 0x0B07
[ INFO] [1618222593.251548666]: Enable PointCloud: On
[ INFO] [1618222593.251561759]: Align Depth: Off
[ INFO] [1618222593.251571233]: Sync Mode: On
[ INFO] [1618222593.251608667]: Device Sensors:
[ INFO] [1618222593.268052537]: Stereo Module was found.
[ INFO] [1618222593.278863141]: RGB Camera was found.
[ INFO] [1618222593.278899052]: (Confidence, 0) sensor isn't supported by current device! -- Skipping...
[ INFO] [1618222593.278918307]: Add Filter: pointcloud
[ INFO] [1618222593.279361089]: num_filters: 1
[ INFO] [1618222593.279375526]: Setting Dynamic reconfig parameters.
[ INFO] [1618222593.801737231]: Done Setting Dynamic reconfig parameters.
[ INFO] [1618222593.807451511]: depth stream is enabled - width: 848, height: 480, fps: 30, Format: Z16
[ INFO] [1618222593.809093916]: color stream is enabled - width: 640, height: 480, fps: 30, Format: RGB8
[ INFO] [1618222593.809188737]: setupPublishers...
[ INFO] [1618222593.817313340]: Expected frequency for depth = 30.00000
[ INFO] [1618222593.898613410]: Expected frequency for color = 30.00000
[ INFO] [1618222593.922154374]: setupStreams...
[ INFO] [1618222593.945526322]: insert Depth to Stereo Module
[ INFO] [1618222593.945727810]: insert Color to RGB Camera
[ INFO] [1618222594.038348540]: SELECTED BASE:Depth, 0
[ INFO] [1618222594.046528322]: RealSense Node Is Up!
[ WARN] [1618222594.159462178]:
12/04 12:16:34,160 WARNING [140255065782016] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
I am not the only one with this issue. I disconnected the other depth cameras, but that was not the problem. In the realsense-viewer the device was not recognized directly, but after reconnecting the usb connection (or with some patience) the D435 worked fine.
Should try with a USB-B cable. Same problem. Also tried to reset the device and change the framerate (with roslaunch realsense2_camera rs_camera.launch initial_reset:=true depth_fps:=15 infra_fps:=15 color_fps:=15 enable_sync:=true). Yet, I receive updates if I check with rostopic echo /camera/depth/image_rect_raw/header, so I should also check with rviz if I can see those frames.
As far as I can see, I have all settings equal, but still I don't see the PointCloud, nor the image.
Yet, if I try rosrun image_view image_view image:=/camera/color/image_raw I see the image.
When selecting the right topics, Rviz also displays the image and the pointcloud:
April 7, 2021
Activated the option WITH_DSSDK with ccmake ... Yet, the (original) RealSense SDK is obsolete, so it is not found.
When generating the new configuration, I also receive a warning from /opt/ros/noetic/lib/x86_64-linux-gnu/cmake/realsense2/realsense2Config.cmake.
Yet, at the end RealSense SDK 2 is found: -- RealSense SDK 2 found (include: /opt/ros/noetic/include, lib: realsense2::realsense2, version: 2.42.0).
Yet, the in_hand_scanner still uses pcl::OpenNIGrabber::setupDevice(const string&, const pcl::OpenNIGrabber::Mode&, const pcl::OpenNIGrabber::Mode&) in /home/arnoud/git/pcl/io/src/openni_grabber.cpp, although I changed the include file to grabber.h (first attempt) and openni2_grabber (second attempt).
The call in startGrabberImpl with grabber_ = GrabberPtr (new RealSense2Grabber (serial, false)); needs both a serial number and a playback flag. Looked up /lib/udev/rules.d/60-librealsense2-udev-rules.rules, but for Intel the serial numbers are given far beyond the D435 (checked with lsusb; the D435 has idVendor "8086" and idProduct "0b07"). The Softkinetic DepthSense 325 is also connected, with idVendor "2113" and idProduct "0145". Found no rules for the Softkinetic devices.
Strange, the pointer still seems to be an OpenNIGrabber:
/home/arnoud/git/pcl/apps/in_hand_scanner/src/in_hand_scanner.cpp:477:65: error: no matching function for call to ‘std::shared_ptr::shared_ptr(pcl::RealSense2Grabber*)’
477 | grabber_ = GrabberPtr (new RealSense2Grabber ("0b07", false));
Changed the class in include/pcl/apps/in_hand_scanner/in_hand_scanner.h, which is called from pcl/apps/in_hand_scanner/main_window.h in src/in_hand_scanner/main.cpp.
Changed both the include and the using Grabber = pcl::RealSense2Grabber in in_hand_scanner.h
Now the MainWindow shows up. No warnings, but the main screen keeps displaying only "Starting the grabber":
I also have a git-clone of pcl-1.8.1, although build/bin/pcl_depth_sense_viewer is not made.
Doing make in pcl-1.8.1/build fails on linking bin/pcl_viewer, on a boost::system::generic_category.
Running /usr/local/bin/pcl_in_hand_scanner (see March 11) fails on missing shared library librealsense2.so.2.41
In between (March 24) I installed librealsense with sudo apt-get install librealsense2-dev, which installed so.2.41.
Remaking /usr/local/bin/pcl_in_hand_scanner with sudo make install in ~/git/pcl/build failed on missing librealsense2.so.2.41.0, needed by lib/libpcl_io.so.1.11.1.99, so first did a cmake ..; make again.
Now the librealsense library can be found. Still pcl_in_hand_scanner fails:
ERROR in in_hand_scanner.cpp: void pcl::OpenNIGrabber::setupDevice(const string&, const pcl::OpenNIGrabber::Mode&, const pcl::OpenNIGrabber::Mode&) in /home/arnoud/git/pcl/io/src/openni_grabber.cpp @ 369 : No matching device found. openni_wrapper::OpenNIDevice::OpenNIDevice(xn::Context&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&) @ /home/arnoud/git/pcl/io/src/openni_camera/openni_device.cpp @ 117 : creating depth generator failed. Reason: USB interface is not supported!
Strange: in pcl/apps/in_hand_scanner/src/in_hand_scanner.cpp openni_grabber.h was already replaced by a generic grabber.h, although pcl/apps/in_hand_scanner/in_hand_scanner.h still contains the class OpenNIGrabber. Yet, in_hand_scanner.h is not directly loaded, only indirectly via pcl/apps/in_hand_scanner/main_window.h.
Yet, building in_hand_scanner no longer works due to missing boost!
Rebooted.
Checked warnings at ~/git/pcl/build. No warnings about boost, although I could set the required version higher in pcl/cmake/find_boost.cmake.
Repaired the metslib error with the fix suggested by KeZuLin. Cloning directly from github didn't work because there was no ./configure script. Running autogen.sh; autoconf generated that script, but still several Makefile.in files were missing. Creating symbolic links solved that, but the resulting Makefile didn't work. Downloading the tar in a /tmp directory worked (because it contained the configure script).
PCL is built from scratch now that this dependency is correctly set.
Yet, compilation still fails. The pcl_openni_grabber_example uses /usr/local/include/boost (which contains v1.67), instead of /usr/include/boost. Moved /usr/local/include/boost away (to ~/packages/boost_1_67_0/local), as well as the libboost* files from /usr/local/lib.
Did a make clean and tried again.
Now I could make everything and do a sudo make install again.
pcl_in_hand_scanner works again, but finds no OpenNI devices.
March 30, 2021
It would be interesting to see how ORB-SLAM3 would perform on the Katwijk Beach dataset, although this algorithm works better indoors, in environments with rich textures, and for sequences with fast motions. Yet, it can handle stereo cameras which are non-rectified. Problems with textures in the sky can be solved by ignoring features at large distances. It was able to do SLAM in dark circumstances (by falling back on the IMU measurements), so it can maybe also survive the overly bright circumstances on the beach.
The stereo video of the outdoor walk is still quite impressive (no problems with the sky or the indoor / outdoor boundary!)
For long-term data association FLIRT features for 2D laser scans were suggested by Scaramuzza and Leonard.
March 25, 2021
PointNet can be used to directly process point clouds by Neural Nets.
March 24, 2021
Installing librealsense2 on my Ubuntu 20.04 workstation.
The kernel module uvcvideo.ko is installed in /lib/modules/5.4.0-66-generic/updates/dkms/, which is a better place than overwriting the kernel module itself (as done for realsense v1.21).
After installing librealsense2-dkms, the R200 is recognized by lsusb.
The camera is also visible with dmesg | grep uvc
[199138.570873] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
Yet, running the ./R200-live-test from realsense v1.21.1 fails after correctly getting the metadata:
./R200-live-test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
R200-live-test is a Catch v1.2.1 host application.
Run with -? for options
~/git/librealsense/unit-tests/unit-tests-common.h:63: FAILED:
REQUIRE( rs_get_error_message(err) == rs_get_error_message(nullptr) )
with expansion:
"UVCIOC_CTRL_QUERY:UVC_SET_CUR error 25, Inappropriate ioctl for device"
==
{null string}
Tried some of the examples in librealsense2.
The example ./rs-software-device failed on OpenGL:
Maximum number of clients reached
Could not open OpenGL window, please check your graphic drivers or use the textual SDK tools
Installing librealsense2-utils also installs librealsense2-gl. Still OpenGL problems.
rs-hello-realsense also fails to recognize the Asus Xtion Pro. The camera is recognized with lsusb and dmesg | grep -i PrimeSense:
[202031.719339] usb 1-13: Product: PrimeSense Device
[202031.719341] usb 1-13: Manufacturer: PrimeSense
With the RealSense D435 rs-hello-realsense works: The camera is facing an object 1.68 meters away
Rebooted the machine, the OpenGL problems are gone (first tried a regular monitor, but switching back to a TV also worked).
The realsense-viewer works fine, although a second start gave some warnings:
24/03 17:22:55,621 ERROR [140651154519872] (backend-v4l2.cpp:600) Metadata node for uvc device: id- /dev/video10
vid- 8086
pid- a80
mi- 4
unique_id- 2-3-3
path- /sys/devices/pci0000:00/0000:00:14.0/usb2/2-3/2-3:1.4/video4linux/video10
susb specification- 300
metadata node-/dev/video11 was previously assigned
The realsense-viewer also gave a warning that there were two udev-rules for the same device.
Now rs-software-device and rs-pointcloud also work.
Removed /etc/udev/rules.d/99-realsense-libusb.rules, because /lib/udev/rules.d/60-librealsense2-udev-rules.rules had some extra rules for devices with name accel_3d. Both rules had an additional RUN+="/usr/local/bin/usb-R200-in_udev" for idProduct 0a80 (aka R200), but I removed that non-working script, so that has no effect. Removed it anyway.
March 23, 2021
Since Ubuntu 14.04.05, trusty is using the kernel of xenial. There are several scripts to modify the uvcvideo.ko. Although the Makefile specifies that the config shouldn't be remade, still hundreds of expert questions are asked (with the risk of corrupting the kernel). In the end the simplepatch script seems to be made for the kernel of 14.04.05, not for 14.04.06. I don't get it to work, and it doesn't seem worth the risk.
This post describes comparable problems with Ubuntu 16.04 (but for a modern branch of intelrealsense, so not for the r200).
Did some last tests. The DSReadCameraInfo didn't see any device, the librealsense/build/unit-tests/R200-live-test indicates that no uvcvideo kernel module is loaded.
A check with dmesg | grep uvc shows that uvcvideo: version magic '4.4.254+ SMP mod_unload modversions' should be '3.13.0-170-generic SMP mod_unload modversions retpoline'
The trick sudo apt-get install --reinstall linux-image-generic linux-image didn't give me uvcvideo.ko back.
A pity that no backup is made from /lib/modules/'uname -r'/kernel/drivers/media/usb/uvc/uvcvideo.ko.
Looked at my other linux-machines, but they had all kernel drivers for 4.* and 5.*, not for 3.13.0-170.
Was able to make patched uvcvideo.ko. Copied that to /lib/modules, yet it still cannot find it. Checked /lib/modules/3.13.0-170-generic/modules.order, and there uvcvideo is present.
Seems to be related with the VERMAGIC_STRING, which gives 4.4.254+ instead of 3.13.170. Modified linux/vermagic.h so that uvcvideo.ko has the right magic, still this module is not found.
Did sudo depmod. Now the module is found, only dmesg | grep uvc is complaining that cpu_tss, getnstimeofday64 and ktime_get_ts64 are missing.
Did a last try, by downloading the linux kernel. Switched to current version with git checkout tags/v3.13. Patched the uvc-driver manually. Had to do make scripts before I could do a make in the uvc-directory. Yet, had to modify again vermagic to get the right magic string. Was missing Modules.symvers, which I copied from ubuntu-xenial. Still the module couldn't be loaded. Tried make modules in linux directory.
The utsrelease.h was back to 3.13+. Tried to load the new uvcvideo.ko, but still a warning on a wrong exec format and no symbol version for module_layout.
Maybe I should update to ubuntu 16.04, because the R200 is supported for kernel 4.4 and higher (see ros kinetic instructions).
Ran the r200_install.sh script, but had to exclude the ds_uvcdriver/script. Looks like I should have run that script under Ubuntu 13.10 (saucy) instead of 14.04. The patch tries to modify uvc_driver, which was made by Torvalds himself.
Restarted the computer. The DS camera is detected, but "no appender found for logger" is reported.
Trying again, now with the official Intel instructions.
The installation requires cmake3, which is not default in Ubuntu 14.04. Installed cmake3, renamed it to /usr/bin/cmake3, copied /usr/share/cmake-3.5 to /tmp, reinstalled with sudo apt-get install --reinstall cmake cmake-data, copied /tmp/cmake-3.5 back to /usr/share, renamed the default binary to /usr/bin/cmake2 and put a symbolic link from cmake to cmake3. Now ./scripts/install_glfw3.sh works!
Applying ./scripts/patch-uvcvideo-16.04.simple.sh is not so simple, because ubuntu-xenial/debian/scripts/retpoline-extract-one is not copied to ubuntu-xenial/scripts/ubuntu-extract-one. After manually copying that file and choosing all default options, the uvcvideo.ko is built. Received warnings that cpu_tss, getnstimeofday64 and ktime_get_ts64 are not defined. In the end uvcvideo.ko couldn't be loaded, because it has a wrong Exec format.
Should look tomorrow if I could clone an older kernel.
Made the package. Running ~/git/OpenNI2/Bin/x64-Release/NiViewer gave the message Failed to open the USB device!, but with sudo it worked.
Created a file /etc/udev/rules.d/40-libopenni2-0.rules with
SUBSYSTEM=="usb", ATTR{idProduct}=="0601", ATTR{idVendor}=="1d27", MODE:="0666", OWNER:="root", GROUP:="video"
Based on this post. The idProduct can be looked up with lsusb. Applied the rule change without rebooting via sudo udevadm control --reload-rules. Now NiViewer works without sudo, although I still receive the warning Warning: USB events thread - failed to set priority. This might cause loss of data....
The result is a depth image and camera image. Not clear what should be displayed in the lower image, although this is the same result as roboram on Ubuntu 14.04:
SimpleViewer shows only the Depth image. MultiDepthViewer is missing devices. SimpleRead and EventBasedRead just print a time-stamp and distance. ./MultipleStreamRead prints both the distance and a hexadecimal number.
The PS1080Console gives some detailed information (Firmware, SDK, ChipType). The ChipType is the PS1080!
The Firmware is the last available Firmware update from Asus (Nov 2013). The latest software (Nov 2012 - Ubuntu 10.10), contained PrimeSense Software Package 20.4.2.20 :
1. OpenNI Framework (Version 1.5.2.23)
2. Sensor DDK (version 5.1.0.41)
3. NITE (version 1.5.2.21)
4. USB driver (version 3.1.3.1)
The latest stable version of PrimeSense on github is version 5.1.6.6 (Nov 2013), which seems to be the Sensor DDK.
PSLinkConsole is configuring the device with the configuration found in OpenNI2/Drivers/PS1080.ini. Some parameters are changed and Firmware params were updated (such as the Device.SensorPlatformString and the Device.ID). Also properties as ImageColorTemperature and DepthResolution are changed. Camera still works in NiViewer.
ClosestPointViewer indicates the shortest-distance point in the depth map. The MWClosestPointApp prints tuples of three numbers: 266, 181, 529.
The NiViewer has several views. The default view is '7' (side by side), but I also like option '0' (semi-transparent with rainbow-coloured coding of the different depths):
Was looking for which sensors are supported by OpenNI2. At least Intel Realsense has a wrapper.
The old RealSense and the Zed were not recognized as devices by NiViewer.
Following the instructions of librealsense2 Linux installation. This included a kernel-update, which required to set a secure boot password.
The old R200 was not recognized (but was also not listed in the supported device list). Seems that there is only support for Windows.
Installed an /etc/udev/rules.d rule for the r200, which runs a script to get a lock on the uvccamera. That script doesn't seem to work, so I replaced it with less dangerous code (the same as the intelrealsense ownership settings).
March 12, 2021
Looked at to_ros1.sh; it makes use of two catkin components: pcl_ros and pcl_conversions. Both are part of perception_pcl.
Cloned ros-perception in ~/catkin_ws/src and built it with catkin build. Received the warning:
CMake Warning at ~/catkin_ws/src/perception_pcl/pcl_ros/CMakeLists.txt:135 (add_library):
Cannot generate a safe runtime search path for target pcl_ros_features
because files in some directories may conflict with libraries in implicit
directories:
runtime library [libGLEW.so.2.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/usr/local/lib
Some of these libraries may not be found correctly.
Modified /catkin_ws/build/structure_core_ros_driver/CMakeFiles/pcl_subscriber.dir/flags.make manually. Updating to pcl-1.11 didn't help, the code compiles with the includes from ~/git/pcl-1.8.1.
Unfortunately, pcl-1.8.1 doesn't build. Probably needs another version of boost:
/home/arnoud/git/pcl-1.8.1/segmentation/include/pcl/segmentation/plane_coefficient_comparator.h: In member function ‘const std::vector& pcl::PlaneCoefficientComparator::getPlaneCoeffD() const’:
/home/arnoud/git/pcl-1.8.1/segmentation/include/pcl/segmentation/plane_coefficient_comparator.h:144:17: error: invalid initialization of reference of type ‘const std::vector&’ from expression of type ‘const boost::shared_ptr >
Downloaded the source code of boost-1.67 and unpacked it in ~/packages/boost_1_67_0. Installed that version in /usr/local/include and /usr/local/lib (in contrast to boost-1.71, installed in /usr/include and /usr/lib).
Downloaded boost-1.40.0 because that was the one requested in PCLConfig.cmake and installed that version in /opt/boost/boost-1.40/. Gives even more compilation errors.
Trying boost-1.58.0, the version of Ubuntu 16.04. Still the same error.
The error is probably due to the option -Wno-conversion, because the compilation continues after repairing the definition in plane_coefficient_comparator.h with an explicit return ((const std::vector&)plane_coeff_d_);.
Linking fails, also when switching back to the default boost-1.71.
Problem seems to be that /usr/lib/x86_64-linux-gnu/liblz4.so is added to the link. Added that file manually, but still fails.
Added the six lines of code to the CMakeLists.txt in the root. The library is found, but linking of pcl_viewer still fails. Partly due to missing boost, partly due to missing LZ4 symbols. Should try another boost include and lib.
March 11, 2021
Trying to build in_hand_scanner from the apps directory. CMakeLists.txt is missing PCL_SUBSYS_OPTION.
This option is defined in ~/git/pcl/cmake/pcl_targets.cmake. Also saw the option ADD_LIBRARY_OPTION_COMPONENT, which leads to the options that can be given to cmake.
Found this page on how to configure cmake (with ccmake ..). Activated building of both the apps and the examples. Two apps could not be built (due to QVTK dependencies):
apps
building:
|_ 3d_rec_framework
|_ in_hand_scanner
|_ point_cloud_editor
not building:
|_ cloud_composer: Cloud composer requires QVTK
|_ modeler: VTK was not built with Qt support.
Changing from openni_grabber.h to grabber.h in in_hand_scanner.cpp didn't work, because at line 61 of in_hand_scanner.h the class OpenNIGrabber is used.
The sudo make install created /usr/local/lib/libpcl_apps.so and /usr/local/bin/pcl_in_hand_scanner.
Also this executable reports when the Intel Realsense is connected:
ERROR in in_hand_scanner.cpp: void pcl::OpenNIGrabber::setupDevice(const string&, const pcl::OpenNIGrabber::Mode&, const pcl::OpenNIGrabber::Mode&) in /home/arnoud/git/pcl/io/src/openni_grabber.cpp @ 338 : No devices connected.
Same with the Structure Core connected (after giving the USB device the permission below). Continued with the StructureSDK package.
OpenNI2 is available via the builders of the Structure Core, which in principle should support the ASUS Xtion, PrimeSense Carmine, Microsoft Kinect and Structure Sensor depth sensors. I only have the last one at home. Yet, Structure Core users should also use the Structure SDK Cross-Platform.
Gave non-root access to Structure Core device with sudo DriverAndFirmware/Linux/1_0_0_Driver/Install-CoreDriver-Udev-Linux.sh
Ran Scripts/Build.sh, and ran the app CorePlayground. Both the Capture session status and the USB status indicated: HOST ERROR 2021-03-11 11:30:12.591 :381 clientReadLoop No MAGIC from USB interface 0 (error=7).
Trying a reboot.
Reboot doesn't help. Strangely enough, the device is found, only the inu_usb_read_bulk failed:
[processDeviceDescriptor] Found booted device: VID=0x2959, PID=0x3001, Rel=0x7FFF
HOST INFO 2021-03-11 12:05:27.461 :567 StructureCore_InitBootId StructureCore Init (1.0.0-release)
HOST INFO 2021-03-11 12:05:27.461 :578 StructureCore_InitBootId Structure Core driver operating in multi-device mode
HOST INFO 2021-03-11 12:05:27.461 :349 StructureCore_hotplugCallback StructureCore_hotplugCallback hit with new state 1
HOST INFO 2021-03-11 12:05:27.461 :149 updateState Notifying app of state: Booting
INFO: Structure Core is online (booting)
Capture session event: Booting
HOST INFO 2021-03-11 12:05:27.461 :741 StructureCoreClient_Init Calling StructureCoreClient_Init!
HOST INFO 2021-03-11 12:05:27.461 :327 findStartOfMsg inu_usb_read_bulk failed
HOST ERROR 2021-03-11 12:05:27.461 :381 clientReadLoop No MAGIC from USB interface 0 (error=7)
Seems to be an issue with the firmware of the StructureCore. Tried to execute CoreFirmwareUpdater-1.0.0-Linux-x86_64, but received the message:
Error opening device: LIBUSB_ERROR_ACCESS. Did you forget to run "sudo DriverAndFirmware/Linux/Install-CoreDriver-Udev-Linux.sh"?
Ran the script in both the 0.9.x and the 1.0.0 directory, but although I receive Structure Core USB device nodes will be owned by user, the download doesn't work. Let's see if I can do it from Windows, as done in 2019.
Following the instructions from Structure Core, and downloading Visual Studio Community 2019 (v 16.9.1) from Microsoft, including the latest C++ compiler (as only workload). cmake was already installed, added the path to my environment variables.
Followed the tip from stackoverflow, and selected the workload "Desktop Development".
Correctly assumed that the command should be cmake -G "Visual Studio 16 2019" -A x64, although I maybe also should define VS160COMNTOOLS.
After a reboot, that works (even without defining VS160). cmake complains that VULKAN_LIBRARY cannot be found, but build goes well.
Same magic error on Windows.
Tried downloading the StructureSDK-CrossPlatform-0.7.3-ROS. Upgrading the drivers worked, but the connection couldn't be made to do the firmware update.
Changed manually the rule from 664 to 777 in /etc/udev/rules.d/structure-core.rules. Reloaded the rules with sudo udevadm control --reload-rules. Now the CorePlayground works:
The system also works with a USB-C cable, and I also activated the Infrared and the Gyroscope. What is a bit spooky is that the Infrared is clearly a stereo camera, with the left camera and the infrared dots lying on top of the computer, and the right one looking along the side of the computer:
Changed the rule back to 664, and device is still working. Maybe the trick was just the reload, because the rule contains the tag of the OCCIPITAL-MAGIC.
Also tried SimpleStreamer (to the commandline) and DepthTester:
Also tried the other sensors combined in MultiRecorder. The DepthSense DS325 seemed to give some response, although it doesn't go streaming.
Tried the pcl_in_hand_scanner again, but no devices found by the openni_grabber.
Tried to make the ROS1 package. Had to modify the to_ros1.sh script (change directory ros1_driver to ros1). Had to do catkin build instead of catkin_make.
Build fails on definitions of /usr/include/pcl-1.10 in ros1/examples/pclSubscriber.cpp. Not sure if it needs pcl-1.8, pcl-1.11 or another boost. Without source to_ros1.sh BUILD_EXAMPLES the structure_core_ros_driver is built.
Also ran roslaunch structure_core_ros_driver sc_rviz.launch (note that it starts its own structure_core_ros_driver):
Also rosrun image_view image_view image:=/sc/depth/image works fine. Same for /sc/rgb/image. Yet, when I do rostopic list I don't see the infrared images.
March 10, 2021
The CylinderExample is made without problems with cmake. The list of libraries linked to the executable can be found in CMakeFiles/CylinderExample.dir/link.txt.
The executable works also fine.
Adding the full list to open_viewer_simple seems to solve the vtk-linking, because now the complaint is undefined reference to symbol 'pthread_condattr_setclock@@GLIBC_2.3.3'
Adding -pthread solves this, now the missing references are related to pcl::visualization. Adding -lpcl_visualization solves this: open_viewer_simple is built!
Running ./open_viewer_simple fails:
terminate called after throwing an instance of 'pcl::IOException'
what(): void pcl::OpenNIGrabber::setupDevice(const string&, const pcl::OpenNIGrabber::Mode&, const pcl::OpenNIGrabber::Mode&) in /build/pcl-gWGA5r/pcl-1.10.0+dfsg/io/src/openni_grabber.cpp @ 338 : No devices connected.
Strange, because the program was run while the Intel Realsense D435 was connected.
The missing library for the vtk was -lvtkCommonCore-7.1
The example code depth_sense_viewer.cpp can still be found in pcl-1.8.1 and pcl-1.9.1. The tools directory is gone in the next version (pcl-1.10.0). A new pcl-1.10.0 feature in io is a librealsense2 grabber based on RSSDK 2.0. Read the conversation on how the librealsense2 grabber was tested.
I had a clone of pcl-1.8.1 on my machine. Did a make in ~/git/pcl-1.8.1/build, but build fails on segmentation.
Created a realsense2_viewer_simple which uses the RealSense2Grabber. The include file pcl/io/real_sense2_grabber.h was not (yet) available in /usr/include/pcl-1.10. Starting a build in ~/git/pcl (version 1.11).
My realsense2_viewer_simple compiles. Waiting on the build of pcl-1.11 to do an install, because on Ubuntu 20.04 only libpcl-dev 1.10.0 is available (which doesn't contain the RealSense2Grabber yet).
My disk was full, so did a disk usage scan. Moved ~/git/tcnn/models, ~/git/tcnn/data and ~/git/tcnn/dataset to another disk /media/arnoud/DATA/tcnn/ (and made a symbolic link in ~/git/tcnn).
Finished the build of pcl-1.11 and did a sudo make install. The directory /usr/local/bin is now enriched with many tools, such as pcl_pcd2vtk, pcl_pcd2png, pcl_virtual_scanner, pcl_viewer, pcl_pcd_image_viewer, pcl_openni2_viewer.
When I do pcl_openni2_viewer -l, I get the answer No devices connected.
I have now two locations of my libpcl libraries. The file /usr/lib/x86_64-linux-gnu/libpcl_visualization.so points to /usr/lib/x86_64-linux-gnu/libpcl_visualization.so.1.10, while /usr/local/lib/libpcl_visualization.so points to /usr/local/lib/libpcl_visualization.so.1.11.
openni_viewer_simple no longer compiles, because /usr/local/lib/libpcl_visualization.so.1.11 no longer has showCloud.
Copied the pcl_openni2_viewer_simple to ~/git/esa-prl/ga_slam/example/. Used the definitions of ~/git/pcl/build/tools/CMakeFiles/pcl_openni_viewer_dir/flags.make in my Makefile, but still it sees that ‘string’ was not declared. Yet, the openni2_viewer_simple is not compiled in tools; when I copy openni_viewer.cpp it works (although I was missing a symbol). Adding the full list helps.
Replaced the OpenniGrabber for a RealSense2Grabber, program starts with when given a device (for instance realsense2_viewer_simple DS435). Yet, the pointcloud responds with starting a openni_viewer, should find the realsense equivalent.
The code of realsense2_grabber points to testing with the in-hand scanner example.
Modified the code so that I have a viewer (based on the v1.8 real_sense_viewer). Now I get an (empty) window, and after a while the message:
terminate called after throwing an instance of 'rs2::error'
what(): No device connected
The code of in-hand scanner can be found at github.
The main code starts a generic new Grabber, although it includes openni_grabber.h.
Let's try tomorrow to change that first to grabber.h and later to real_sense_2_grabber.h.
March 9, 2021
Adding an #include solves the compilation problem, but linking fails on undefined reference to symbol 'xnContextRelease' and /usr/lib/libOpenNI.so.0: error adding symbols: DSO missing from command line.
The latter is mostly an ordering of the libraries. For the former: I did in /usr/lib/x86_64-linux-gnu a grep for xnContextRelease, and libpcl_io.so.1.10 matched.
Did the same search in /usr/lib and there are several matches. Interesting are libOpenNI.so, libSample-NiSampleModule.so and libXnDeviceSensorV2.so.0. Adding -L /usr/lib -lOpenNI at the end of the link command shifts the problem to missing vtk symbols.
Maybe I should try a vtk example. Note that the example is made from a CMakeLists.txt which already includes 9 vtk packages.
Looked at the TurtleBot3 Burger. Boots nicely, but no longer has default settings for bringup.
March 8, 2021
Looked at the ROS Melodic package for the Pal mini. It also contains a rviz-launch file with:
Alpha: 1
Autocompute Intensity Bounds: true
Autocompute Value Bounds:
  Max Value: 0.240854248
  Min Value: -1.55063653
  Value: true
Axis: X
Channel Name: intensity
Class: rviz/PointCloud2
Color: 255; 255; 255
Color Transformer: RGB8
Decay Time: 0
Enabled: true
Invert Rainbow: false
Max Color: 255; 255; 255
Max Intensity: 4096
Min Color: 0; 0; 0
Min Intensity: 0
Name: PointCloud
Position Transformer: XYZ
Queue Size: 1
Selectable: true
Size (Pixels): 2
Size (m): 2
Style: Points
Topic: /dreamvu/pal/get/point_cloud
Unreliable: false
Use Fixed Frame: true
Use rainbow: true
Value: true
If I fire up rosrun pcl_ros pcd_to_pointcloud /tmp/ga_slam_test_data/cloud_sequence/global_cloud_0.pcd instead of an openni camera, I have the /cloud_pcd topic instead of the /camera/depth/points2. So, when I run rosrun pcl_ros convert_pointcloud_to_image input:=/cloud_pcd output:=/cloud_image combined with rosrun image_view image_view image:=/cloud_image I should get an image, but I see nothing.
So, I will first try a depth camera. The OAK-D only has a ros2 package, although this project used ROS Melodic.
First using Intel Realsense ROS package. So I did sudo apt-get install ros-$ROS_DISTRO-realsense2-camera. Launched roslaunch realsense2_camera rs_camera.launch, which publishes many topics, but no points topic, only camera/depth/image_rect_raw. Yet, the pointcloud can be requested with parameter filters:=pointcloud, which publishes the topic /camera/depth/color/points.
After sudo apt-get install ros-$ROS_DISTRO-realsense2-description I launched rviz with roslaunch realsense2_camera rs_d435_camera_with_model.launch. Yet, the pointcloud is not displayed because the transform is from an unknown publisher. So many terminals open, that I will try a fresh restart after a reboot.
After a reboot I can display the pointcloud (with the camera model at the foreground):
Yet, when I do rosrun pcl_ros convert_pointcloud_to_image input:=/camera/depth/color/points output:=/camera/depth/cloud_image, I receive the warning Input point cloud is not organized, ignoring!.
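This warning follows from the PCD/PointCloud2 layout: a cloud is organized when its points form an image-like WIDTH × HEIGHT grid (HEIGHT > 1), while an unorganized cloud is a flat list with HEIGHT 1, so there is no pixel grid to convert to an image. A small Python sketch of that check (the helper function and the header strings are illustrative, not pcl_ros code):

```python
# Why convert_pointcloud_to_image rejects the cloud: it needs an
# image-like grid, i.e. a PCD/PointCloud2 header with HEIGHT > 1.
# (is_organized and the sample headers are mine, not pcl_ros code.)

def is_organized(pcd_header: str) -> bool:
    """Return True if the PCD header describes an organized cloud."""
    fields = dict(
        line.split(maxsplit=1)
        for line in pcd_header.splitlines()
        if line and not line.startswith("#")
    )
    return int(fields["HEIGHT"]) > 1

unorganized = "WIDTH 93728\nHEIGHT 1\nPOINTS 93728"
organized = "WIDTH 640\nHEIGHT 480\nPOINTS 307200"

print(is_organized(unorganized))  # False
print(is_organized(organized))    # True
```

Apparently the realsense pointcloud filter publishes its cloud unorganized as well, which would explain the warning even on the live camera topic.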
That is a question that has been asked more often; the best answer is to do the point cloud tutorial.
Started with reading the ICRA 2011 paper, which for instance explains the nodelet architecture (like ros-nodes, but running in one process for memory efficiency).
Created a read_pcd in ~/git/esa-prl/ga_slam/example, which already had a working Makefile. Result of ./read_pcd:
Loaded 213 data points from test_pcd.pcd with the following fields:
0.93773 0.33763 0
0.90805 0.35641 0
0.81915 0.32 0
...
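The same read can be sketched without PCL for simple ASCII files; a minimal Python parser (it handles only the DATA ascii x/y/z layout of test_pcd.pcd, and the sample header below is reconstructed, not copied from the file):

```python
# Minimal ASCII PCD reader, sketching what read_pcd does via
# pcl::io::loadPCDFile (pure Python, DATA ascii x/y/z files only).

def load_ascii_pcd(text: str):
    """Parse an ASCII PCD body into a list of (x, y, z) tuples."""
    lines = iter(text.splitlines())
    for line in lines:                      # skip the header
        if line.startswith("DATA"):
            if line.split()[1] != "ascii":
                raise ValueError("only DATA ascii is handled in this sketch")
            break
    return [tuple(float(v) for v in line.split()[:3]) for line in lines]

sample = """# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 2
HEIGHT 1
POINTS 2
DATA ascii
0.93773 0.33763 0
0.90805 0.35641 0"""

points = load_ascii_pcd(sample)
print(len(points))  # 2
print(points[0])    # (0.93773, 0.33763, 0.0)
```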
Note that the program main no longer works, because /tmp/ga_slam_test_data was removed with the last reboot.
Unzipped the dataset now in /media/arnoud/DATA/tmp and changed the filenamePrefix in main.cc. Should not forget to mount /media/arnoud/DATA/.
Created openni_viewer_simple.cc from the example, but compilation fails on including . Added -isystem /usr/include/ni -isystem /usr/include/vtk-7.1, but compilation fails on boost::this_thread.
March 3, 2021
Looking what happens if I start rosrun pcl_ros pcd_to_pointcloud /tmp/ga_slam_test_data/cloud_sequence/local_cloud_0.pcd:
[ INFO] [1614777282.428672122]: Recognized the following parameters
[ INFO] [1614777282.429232915]: * file_name: /tmp/ga_slam_test_data/cloud_sequence/local_cloud_0.pcd
[ INFO] [1614777282.429246809]: * interval: 0
[ INFO] [1614777282.429256186]: * frame_id: base_footprint
[ INFO] [1614777282.429267729]: * topic_name: /cloud_pcd
[ INFO] [1614777282.429276000]: * latch: false
[ INFO] [1614777282.429456502]: Loaded pointcloud with the following stats
[ INFO] [1614777282.429468631]: * number of points: 7406
[ INFO] [1614777282.429477478]: * total size [bytes]: 88872
[ INFO] [1614777282.429485940]: * channel names: x y z
So, mainly the same info, only fewer points. Note that the program uses the same frame_id as the previous run (although none was specified this time), presumably because the value set earlier is still on the parameter server.
Can now do two things: couple the elevation_mapping turtlesim3_waffle demo to ga_slam, or look how in this demo the frame_id map is provided. Let's start with the first option.
The turtlesim3_waffle demo still works, although without the purple layers. Maybe I unintentionally modified the rviz file yesterday:
Downloaded a fresh rviz file, but the only difference is the SyncSource (RealSense Image vs Elevation Map Raw), and the purple layers are still gone.
The gzserver gives an error:
[gazebo-1] process has died [pid 43629, exit code 255, cmd /opt/ros/noetic/lib/gazebo_ros/gzserver -e ode /opt/ros/noetic/share/turtlebot3_gazebo/worlds/turtlebot3_house.world
Failure seems to be an already running gazebo service:
[rosout][INFO] 2021-03-03 14:44:22,816: Calling service /gazebo/spawn_urdf_model
[rosout][INFO] 2021-03-03 14:44:22,819: Spawn status: SpawnModel: Failure - entity already exists.
Created turtlesim3_gslam_demo.launch, replaced only the rviz-node (from waffle_demo.rviz to visualisation.rviz). Now only the coordinate frames of the robot are visible:
Modified turtlesim3_gslam_demo.launch with the waffle_demo.rviz back and the long_range.yaml parameters loaded. The robot and gray point-cloud are visible, although the robot is outside the grid:
Image is still the same as turtlesim3_waffle demo above.
Copied some content of visualization.rviz into waffle.rviz, but rviz is now black. Added layers Elevation Map and Elevation Map Raw that should look at the topics /elevation_mapping/elevation_map and /elevation_mapping/elevation_map_raw. Checked with rostopic list and they are published:
/base_footprint_pose
/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
/camera/parameter_descriptions
/camera/parameter_updates
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_raw/compressed
/camera/rgb/image_raw/compressed/parameter_descriptions
/camera/rgb/image_raw/compressed/parameter_updates
/camera/rgb/image_raw/compressedDepth
/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/camera/rgb/image_raw/compressedDepth/parameter_updates
/camera/rgb/image_raw/theora
/camera/rgb/image_raw/theora/parameter_descriptions
/camera/rgb/image_raw/theora/parameter_updates
/clock
/cloud_pcd
/cmd_vel
/elevation_mapping/elevation_map
/elevation_mapping/elevation_map_raw
/elevation_mapping/visibility_cleanup_map
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/goal
/ground_truth
/ground_truth_pose
/imu
/initialpose
/joint_states
/odom
/rosout
/rosout_agg
/scan
/tf
/tf_static
Yet, I also added a layer Elevation Cloud for the topic /elevation_map_raw_visualization/elevation_cloud; it could perhaps better look at /elevation_mapping/visibility_cleanup_map or /cloud_pcd.
March 2, 2021
The command roslaunch elevation_mapping_demos ground_truth_demo.launch now also starts an rviz-screen, but still needs ground_truth data to be published.
In grid_map_demos there are several examples. For instance, grid_map_loader_demo.launch contains a node which loads a bag-file and publishes a grid_map:
The elevation_mapping_demos/launch/realsense_demo.launch contains a link to elevation_mapping_demos/rviz/elevation_map_visualization_pointcloud.rviz, although this file doesn't exist.
In elevation_mapping/config/sensor_processors there are several sensor types defined: stereo, structured_light, laser, perfect.
Downloaded again master.zip from ~/git/ga_slam/build/test/ga_slam_test_data-prefix/src/ga_slam_test_data-stamp/ga_slam_test_data-urlinfo.txt and unzipped it to /tmp/ga_slam_test_data
In principle this information could be published with the command rosrun pcl_ros pcd_to_pointcloud point_cloud_file.pcd from the pcl package.
Running rosrun pcl_ros pcd_to_pointcloud /tmp/ga_slam_test_data/cloud_sequence/global_cloud_0.pcd indicated the following info:
[ INFO] [1614693917.039786404]: Recognized the following parameters
[ INFO] [1614693917.040405064]: * file_name: /tmp/ga_slam_test_data/cloud_sequence/global_cloud_0.pcd
[ INFO] [1614693917.040422049]: * interval: 0
[ INFO] [1614693917.040432492]: * frame_id: base_link
[ INFO] [1614693917.040444172]: * topic_name: /cloud_pcd
[ INFO] [1614693917.040458074]: * latch: false
[ INFO] [1614693917.049020673]: Loaded pointcloud with the following stats
[ INFO] [1614693917.049039971]: * number of points: 93728
[ INFO] [1614693917.049048931]: * total size [bytes]: 1124736
[ INFO] [1614693917.049075563]: * channel names: x y z
Changed config/robots/ground_truth_demo.yaml to listen to /cloud_pcd. I see no points added to the elevation map, and rviz is complaining that the points are published in an unknown frame. Changed it to /ground_truth_pose, but the unknown frame is starleth/odometry. Changed the frame_id in the pcd_to_pointcloud command and config/robots/ground_truth_demo.yaml, but still the Elevation Map is indicating an error: Transform [sender=unknown_publisher]
For frame [map]: Fixed Frame [starleth/odometry] does not exist.
The starleth/odometry frame can be found in elevation_mapping_demos/rviz/elevation_map_visualization.rviz, which points to the StarlETH legged robot
Removed starleth both from rviz and from ground_truth_demo, but still the same rviz error. Forgot to remove it as Global Option. Changed it to map, which still gave an error. Changed it to base_footprint, which solved both the Global Status and the TF error (and shows odom moving). The only remaining error is from the Elevation Map, which cannot transform from frame map. Note that the Waffle rviz used odom as fixed frame.
Also removed map from ground_truth_demo.yaml. No longer any rviz errors, but I also see no point_cloud. When I do rostopic list, I get:
/base_footprint_pose
/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
/camera/parameter_descriptions
/camera/parameter_updates
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_raw/compressed
/camera/rgb/image_raw/compressed/parameter_descriptions
/camera/rgb/image_raw/compressed/parameter_updates
/camera/rgb/image_raw/compressedDepth
/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/camera/rgb/image_raw/compressedDepth/parameter_updates
/camera/rgb/image_raw/theora
/camera/rgb/image_raw/theora/parameter_descriptions
/camera/rgb/image_raw/theora/parameter_updates
/clicked_point
/clock
/cloud_pcd
/cmd_vel
/elevation_map_raw_visualization/elevation_cloud
/elevation_map_raw_visualization/map_region
/elevation_map_raw_visualization/map_region_array
/elevation_mapping/elevation_map
/elevation_mapping/elevation_map_raw
/elevation_mapping/visibility_cleanup_map
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/ground_truth
/ground_truth_pose
/imu
/initialpose
/joint_states
/move_base_simple/goal
/odom
/rosout
/rosout_agg
/scan
/tf
/tf_static
March 1, 2021
Reboot helped: rviz starts now without problems:
[ INFO] [1614588911.358762591]: rviz version 1.14.4
[ INFO] [1614588911.358861836]: compiled against Qt version 5.12.8
[ INFO] [1614588911.358873197]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1614588911.365437410]: Forcing OpenGl version 0.
[ INFO] [1614588911.903973801]: Stereo is NOT SUPPORTED
[ INFO] [1614588911.904036891]: OpenGl version: 4.6 (GLSL 4.6).
After source ~/catkin_ws/devel/setup.sh, the demo of roslaunch elevation_mapping_demos turtlesim3_waffle_demo.launch starts without problems, and I can steer the TurtleBot through the maze:
Continue with roslaunch elevation_mapping_demos ground_truth_demo.launch. Still warnings about the input_sources and a running service:
[ WARN] [1614589946.976154379, 524.038000000]: Shutdown request received.
[ WARN] [1614589946.976206379, 524.038000000]: Reason given for shutdown: [[/elevation_mapping] Reason: new node registered with same name]
The launch/ground_truth_demo.launch loads config/robots/ground_truth_demo.yaml and config/sensor_processors/perfect.yaml
The ground_truth_demo.yaml contains point_cloud_topic: "/ground_truth"
Replaced this argument with the suggested configuration in the readme:
input_sources:
  front: # A name to identify the input source
    type: pointcloud # Supported types: pointcloud
    topic: /lidar_front/depth/points
    queue_size: 1
    publish_on_update: true # Whether to publish the elevation map after a callback from this source.
  rear:
    type: pointcloud
    topic: /lidar_rear/depth/points
    queue_size: 5
    publish_on_update: false
Yet, I still receive errors:
[ERROR] [1614591905.165410402]: Could not configure input source front because no sensor_processor was given.
[ERROR] [1614591905.165439372]: Could not configure input source rear because no sensor_processor was given.
[ WARN] [1614591905.165626212]: Parameter 'point_cloud_topic' is deprecated, please use 'input_sources' instead.
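Judging from these errors, each input source needs its own sensor_processor entry. A hedged sketch of a configuration that should clear them (key names follow the elevation_mapping readme; the exact layout may differ per version, and the topic is assumed, not verified):

```yaml
input_sources:
  front:
    type: pointcloud        # the readme lists 'pointcloud' as supported type
    topic: /ground_truth    # assumed: the demo's ground-truth cloud topic
    queue_size: 1
    publish_on_update: true
    sensor_processor:
      type: perfect         # matches config/sensor_processors/perfect.yaml
```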
Changed the type from pointcloud to points, and specified a sensor_processor/type. Needed both front and rear. Got a warning that both front and rear subscribed to the same topic. Looks like the demo is now working, although it is waiting on ground_truth data to be published:
SUMMARY
========
process[elevation_mapping-1]: started with pid [32337]
[ WARN] [1614593590.220029788]: Could not find the parameter: `algorithm`. Setting to default value: 'area'.
[ WARN] [1614593590.220601950]: Could not find the parameter: `parallelization_enabled`. Setting to default value: 'false'.
[ WARN] [1614593590.220613246]: Could not find the parameter: `thread_number`. Setting to default value: 'automatic'.
[ INFO] [1614593590.221258507]: Elevation mapping node started.
[ INFO] [1614593590.227646545]: Elevation map grid resized to 100 rows and 100 columns.
[ WARN] [1614593590.242908171]: The input sources specification tried to subscribe to /ground_truth multiple times. Only subscribing once.
[ WARN] [1614593590.252588375]: Parameter 'point_cloud_topic' is deprecated, please use 'input_sources' instead.
[ INFO] [1614593590.257363402]: Elevation mapping node initializing ...
[ INFO] [1614593591.498733593, 4154.752000000]: Done initializing.
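The "resized to 100 rows and 100 columns" line is simply the configured map length divided by the resolution; a one-line sketch (the 10 m and 0.1 m/cell values are assumed demo defaults, not taken from the log):

```python
# Grid size from map geometry: cells per side = length / resolution.
# The 10 m / 0.1 m values are assumed defaults, not read from the log.

def grid_cells(length_m: float, resolution_m: float) -> int:
    return round(length_m / resolution_m)

print(grid_cells(10.0, 0.1))  # 100
```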
No idea where the last point_cloud_topic dependency is coming from, no reference in the launch or config-files. No rviz shows up.
Could be added with the line:
The working demo config/robot/waffle_robot.yaml still publishes point_cloud_topic: /camera/depth/points.
February 26, 2021
Looked at the ground_truth demo, which can be found in src/elevation_mapping/elevation_mapping_demos and mainly consists of a launch/ground_truth_demo.launch.
Started the launch file, but no rviz shows up and I receive the warning that the parameter server is not running:
[ WARN] [1614330834.980107744]: Could not load the input sources configuration from parameter
/elevation_mapping/input_sources, are you sure it was pushed to the parameter server? Assuming
that you meant to leave it empty. Not subscribing to any inputs!
[ WARN] [1614330834.980265604]: Parameter 'point_cloud_topic' is deprecated, please use 'input_sources' instead.
[ INFO] [1614330834.984269903]: Elevation mapping node initializing ...
The parameter input_sources is described in the readme; it seems to be a parameter that is called with rosservice call /elevation_mapping/input_sources.
Believe that I have to call the service with a *.srv file.
Inspecting the current running elevation_mapping node with rosservice list:
/elevation_mapping/clear_map
/elevation_mapping/disable_updates
/elevation_mapping/enable_updates
/elevation_mapping/get_loggers
/elevation_mapping/get_raw_submap
/elevation_mapping/get_submap
/elevation_mapping/load_map
/elevation_mapping/masked_replace
/elevation_mapping/save_map
/elevation_mapping/set_logger_level
/elevation_mapping/trigger_fusion
/rosout/get_loggers
/rosout/set_logger_level
Looking at some services. For instance rosservice info /elevation_mapping/enable_updates:
Node: /elevation_mapping
URI: rosrpc://XPS-8930:40157
Type: std_srvs/Empty
Args:
Or rosservice info /elevation_mapping/load_map:
Node: /elevation_mapping
URI: rosrpc://XPS-8930:40157
Type: grid_map_msgs/ProcessFile
Args: file_path topic_name
To see the information on the message type, run rossrv info grid_map_msgs/ProcessFile:
string file_path
string topic_name
---
bool success
Start with the suggested example sudo apt install ros-noetic-turtlebot3*.
Did source ~/catkin_ws/devel/setup.bash, followed by roscd elevation_mapping_demos and roslaunch elevation_mapping_demos turtlesim3_waffle_demo.launch. Also here the same warning:
[ WARN] [1614333477.143336648]: Could not load the input sources configuration from parameter
/elevation_mapping/input_sources, are you sure it was pushed to the parameter server? Assuming
that you meant to leave it empty. Not subscribing to any inputs!
[ WARN] [1614333477.143543082]: Parameter 'point_cloud_topic' is deprecated, please use 'input_sources' instead.
No Gazebo screen, could be related with File "/opt/ros/noetic/lib/python3/dist-packages/genpy/message.py", line 48, in
import yaml
ImportError: No module named yaml
or
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Changed the link from /usr/bin/python from python2 to python3, because pyyaml was already installed for python3.
Try to solve the missing plugin with sudo apt install libxcb-xinerama0, but that package is already installed. Yet, I see no plugin directory in my environment. Found /usr/lib/x86_64-linux-gnu/qt5/plugins, which has both libqxcb-egl-integration.so and libqxcb-glx-integration.so in xcbglintegrations.
Did ldd /usr/bin/gazebo, which can find /usr/lib/x86_64-linux-gnu/libxcb.so.1. It seems to be the rviz which fails.
Defined where the qt platform plugins could be found with export QT_QPA_PLATFORM_PLUGIN_PATH=/usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/, still fails (because no X-server is running?).
Also xeyes has problems. Seems that the DISPLAY=:1, but xhost complains that the number of clients is reached (because I use a TV-monitor?).
Try to allow the xhost schemes to open the display as suggested on askubuntu (adding session optional pam_xauth.so to /etc/pam.d/su).
Tried different platforms for rviz. For instance rosrun rviz rviz -platform wayland-xcomposite-egl:
error: XDG_RUNTIME_DIR not set in the environment.
Failed to create wl_display (No such file or directory)
Using XComposite-EGL
Segmentation fault (core dumped)
Needed to install sudo apt-get install python3-catkin-tools and sudo apt-get install osrf-pycommon to be able to do catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release.
Had to clone also kindr and https://github.com/anybotics/kindr_ros to be able to do catkin build.
Did most of the tutorials already on February 1, although the code described in the chapter could easily be modified to publish the map as read in from /tmp/ga_slam_test_data/.
Run both source /opt/ros/noetic/setup.sh and source ~/catkin_ws/devel/setup.sh, yet still roscd elevation_mapping doesn't work.
According to this post roscd only works after a catkin_make, yet catkin-tools had problems with a workspace made with catkin_make.
Sourcing the setup.bash instead of the setup.sh solves this issue. roscd elevation_mapping works.
Building the elevation_mapping tests with catkin build --catkin-make-args run_tests --, which also rebuilds kindr and kindr_ros. Running the tests with rostest elevation_mapping elevation_mapping.test -t resulted in three successful tests.
February 17, 2021
The Map class is an extension of the GridMap class from grid_map_core. It has a method to get the elevation height with Map.getMeanZ (and get Map.getVarianceZ).
The base class is described in /opt/ros/noetic/include/grid_map_core/GridMap.hpp. A GridMap consists of several layers; with getLayers you get the names as a string-vector.
The grid_map package is described in more detail at github. It is used in the elevation mapping package, which also contains some tests and demos. The ros-node publishes the topic elevation_map and has the rosservice save_map (into a rosbag file).
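Conceptually, a GridMap is a set of named 2-D layers sharing one geometry, which is what getLayers and Map.getMeanZ build on. A toy Python sketch of that structure (method names are invented for the sketch, not the grid_map API):

```python
# Toy layered grid map: each layer is a rows x cols matrix of floats and
# all layers share the same geometry. Mirrors grid_map_core::GridMap in
# spirit only; class and method names here are invented for this sketch.

class ToyGridMap:
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self._layers = {}

    def add(self, name: str, fill: float = 0.0):
        """Add a layer, initialised to a constant value."""
        self._layers[name] = [[fill] * self.cols for _ in range(self.rows)]

    def get_layers(self):  # cf. GridMap::getLayers()
        return list(self._layers)

    def at(self, name: str, r: int, c: int) -> float:
        return self._layers[name][r][c]

m = ToyGridMap(100, 100)
m.add("meanZ")
m.add("varianceZ", fill=1.0)
print(m.get_layers())           # ['meanZ', 'varianceZ']
print(m.at("varianceZ", 0, 0))  # 1.0
```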
Cloned the elevation_mapping package into ~/catkin_workspace/src.
February 16, 2021
With downloading Part1 and Part2 of the first trajectory of Katwijk Beach (29Gb and 4.5Gb), my filesystem is now full, so first I have to clean up.
Checking for large directories with du -hsx -- * | sort -rh | head -10.
My largest directories are git, AutonomousDriving, anaconda2, packages and src.
The Part1 download wasn't a tar file (and too large), so downloading it again.
Extracted part2 on my data-disk: /media/arnoud/DATA/space/KatwijkBeach/Trajectory1/Part2. 50Gb left on that disk.
Found in ga_slam/build/test/ga_slam_test_data-prefix/src/ga_slam_test_data-stamp/ga_slam_test_data-urlinfo.txt the location to get the test-data.
Unzip master.zip and moved it to /tmp/ga_slam_test_data/, as expected in ga_slam/test/functional/DataRegistrationTest.cc.
The test does map().isValid(), followed by insertZeroPose(); and insertSingleCloud();
Tried a sudo make install, but that does a cmake .. again, which fails on the package grid_map_core. That is strange, because ros-noetic-grid-map-core is installed.
On February 2 I did cmake -DENABLE_TESTS=OFF .. and make, but not sudo make install as I do here.
Tried to fix CMAKE_PREFIX_PATH and CMAKE_MODULE_PATH, but in the end fixed the cmake problem by making a symbolic link sudo ln -s /opt/ros/noetic/share/grid_map_core/cmake grid_map_core in the directory /usr/lib/cmake.
Still, the compilation fails on missing grid_map_core/eigen_plugins/FunctorsPlugin.hpp. That should not fail when /opt/ros/noetic/include/ is in the include-directories. Added that directory to CXX_INCLUDES in CMakeFiles/ga_slam.dir/flags.make (edited with vi). Now sudo make install works:
-- Install configuration: "RelWithDebInfo"
-- Installing: /usr/local/lib/pkgconfig/ga_slam.pc
-- Installing: /usr/local/lib/libga_slam.so
-- Set runtime path of "/usr/local/lib/libga_slam.so" to ""
-- Installing: /usr/local/include/ga_slam
-- Installing: /usr/local/include/ga_slam/TypeDefs.h
-- Installing: /usr/local/include/ga_slam/processing
-- Installing: /usr/local/include/ga_slam/processing/CloudProcessing.h
-- Installing: /usr/local/include/ga_slam/processing/ImageProcessing.h
-- Installing: /usr/local/include/ga_slam/mapping
-- Installing: /usr/local/include/ga_slam/mapping/DataRegistration.h
-- Installing: /usr/local/include/ga_slam/mapping/Map.h
-- Installing: /usr/local/include/ga_slam/GaSlam.h
-- Installing: /usr/local/include/ga_slam/localization
-- Installing: /usr/local/include/ga_slam/localization/PoseEstimation.h
-- Installing: /usr/local/include/ga_slam/localization/ParticleFilter.h
-- Installing: /usr/local/include/ga_slam/localization/PoseCorrection.h
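Note that the flags.make edit above is regenerated on the next cmake run, so it only holds temporarily. The durable fix would presumably be to add the include path in the project's CMakeLists.txt, for instance:

```cmake
# Make the ROS-installed grid_map_core headers (FunctorsPlugin.hpp etc.) visible
include_directories(/opt/ros/noetic/include)
```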
Made a first simple program in ga_slam/example and a simple makefile. Had to execute export LD_LIBRARY_PATH=/usr/local/lib:/opt/ros/noetic/lib/ to be able to run this program:
#include "ga_slam/GaSlam.h"
Read in the cloud_sequence from /tmp/ga_slam_test_data/, and that seems to work:
My first Map is not valid yet, because no points are loaded.
My first Map is valid after loading the points.
Extended the simple main with the code from the test:
cloudPtr.reset(new Cloud);
std::cout << "My first Map "
          << (gaSlam.getLocalMap().isValid()
                  ? "is already valid BEFORE loaded!\n"
                  : "is not valid yet, because no points are loaded.\n");
for (int i = 0; i <= numOfClouds_; ++i) {
    const std::string filename = filenamePrefix_ +
        std::to_string(i) + filenameSuffix_;
    pcl::io::loadPCDFile(filename, *cloudPtr);
}
// Note: each loadPCDFile overwrites cloudPtr, so only the last cloud of the
// sequence reaches cloudCallback; moving the call inside the loop would feed them all.
gaSlam.cloudCallback(cloudPtr);
std::cout << "My first Map "
          << (gaSlam.getLocalMap().isValid()
                  ? "is valid after loading the points.\n"
                  : "is NOT valid after loading the points!\n");
February 15, 2021
I am now stuck for the moment. I cannot exclude base/orogen/std, and I cannot build it with this version of Corba.
Many packages depend on orogen, so it really seems to be a base package.
According to rock-osdeps-package_set Ubuntu 20.04 is not supported yet. Maybe I should fall back on a docker image.
Yet, this doesn't work (yet). Checked master-20.06 pool, but I only see rock-master-20.06-base-types for bionic. Same for the master-20.10 pool.
So I did wget -qO - http://rock.hb.dfki.de/rock-releases/rock-robotics.public.key | sudo apt-key add -, followed by echo 'deb [arch=amd64 trusted=yes] http://rock.hb.dfki.de/rock-releases/master-20.06 focal main' | sudo tee /etc/apt/sources.list.d/rock-master-20.06.list, sudo apt update and sudo apt install rock-master-20.06-base-types. The result is Couldn't find any package by glob 'rock-master-20.06-base-types'.
Tried autoproj osdeps in rock-core/all-packages. Fails on
E: Package 'libignition-math2-dev' has no installation candidate
E: Unable to locate package libqglviewer-dev-qt4
The libignition-math2-dev is a quite old dependency, Gazebo has already libignition-math6-dev.
Cloned base-types, and made tools-pocolog. The executables are installed in /usr/local/bin. Only /usr/local/bin/indexer has a usage (it asks for a logfile).
Looked at the code of pocolog. An InputDataStream is initiated, which loads the data type registry with an PluginManager.
The indexer looks for a stream "/simple_controller.command".
Also looked at esa-prl/ga_slam/test/functional/DataRegistrationTest.cc. There a PointCloud is read from /tmp/ga_slam_test_data/cloud_sequence/local_cloud_SEQ.pcd
A pcd file could also be read with rosrun pcl_visualization pcd_viewer -ax 0.1, according to Jürgen Sturm.
The pcd file is no longer available; at the ros-page Endress points to the rgbd-dataset, but that contains image sequences, not point clouds.
The pcd_viewer is part of perception pcl ros-package, but can also be installed via sudo apt-get install pcl-tools.
Now looking for a nice pcd-file. Found Videotofiles.py at orb-slam blog.
Found this interesting Point Cloud Library data, which contains four city and four forest sites. Forest site 5, with its steep slopes, seems the most interesting (although it also contains vegetation).
Downloaded FSite5_orig-utm.pcd and ran pcl_viewer FSite5_orig-utm.pcd, received response Loading FSite5_orig-utm.pcd ERROR: In /build/vtk7-yd0MKW/vtk7-7.1.1+dfsg2/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 1497
vtkXOpenGLRenderWindow (0x55d7525b5880): bad X server connection. DISPLAY=Aborted (core dumped).
At least the following command seems to work: rosrun pcl_ros pcd_to_pointcloud FSite5_orig-utm.pcd
[ INFO] [1613399365.306229224]: Recognized the following parameters
[ INFO] [1613399365.306852784]: * file_name: FSite5_orig-utm.pcd
[ INFO] [1613399365.306874132]: * interval: 0
[ INFO] [1613399365.306882909]: * frame_id: base_link
[ INFO] [1613399365.306890684]: * topic_name: /cloud_pcd
[ INFO] [1613399365.306899447]: * latch: false
[ INFO] [1613399365.333660428]: Loaded pointcloud with the following stats
[ INFO] [1613399365.333686899]: * number of points: 628320
[ INFO] [1613399365.333694436]: * total size [bytes]: 7539840
[ INFO] [1613399365.333704712]: * channel names: x y z
Continue with rosrun pcl_ros convert_pointcloud_to_image input:=/my_cloud output:=/my_image and rosrun image_view image_view image:=/my_image, but receive
[ INFO] [1613400599.704505479]: Initializing nodelet with 12 worker threads.
[ INFO] [1613400599.763860012]: Using transport "raw"
Maximum number of clients reached
(my_image:879309): dbind-WARNING **: 15:49:59.772: Could not open X display
Strange, because xeyes works without problems.
February 14, 2021
Went back to ~/git/orocos-toolchain/rtt. Should do configure --enable-corba.
Configure asks for a fresh build directory, but puts the new makefile in ~/rtt anyway. Make fails on const CORBA::Any:
In file included from /home/arnoud/git/orocos-toolchain/rtt/rtt/transports/corba/CorbaConversion.cpp:39:
/home/arnoud/git/orocos-toolchain/rtt/rtt/transports/corba/CorbaConversion.hpp: In static member function ‘static bool RTT::corba::AnyConversion >::update(const CORBA::Any&, RTT::corba::AnyConversion >::StdType&)’:
/home/arnoud/git/orocos-toolchain/rtt/rtt/transports/corba/CorbaConversion.hpp:311:18: error: no match for ‘operator>>=’ (operand types are ‘const CORBA::Any’ and ‘RTT::corba::AnyConversion >::CorbaType*’ {aka ‘RTT::corba::Pair*’})
311 | if ( any >>= result ) {
February 11, 2021
Trying to install orocos-rtt-corba-gnulinux. The documentation mentions a download of the source file at the orocos.org, but I couldn't find it there.
Note that the binaries can be installed via ROS with sudo apt-get install ros-${ROS_DISTRO}-orocos-toolchain, but ros-noetic-orocos-toolchain is not available yet.
The library /usr/local/lib/liborocos-rtt-mqueue-gnulinux.so is built, but orocos-rtt-corba-gnulinux is an option in orocos-toolchain/rtt/configure.
Yet, it seems that the corba support depends on the combination of ACE and TAO. With sudo apt install libace-dev v6.4.5 is installed, but it seems that libtao-dev should be installed from source. The latest libtao-dev version I could find was for Ubuntu 12.04.
No GNU makefiles are provided, which means that MPC has to be installed first.
MPC is already present as ACE/bin/mwc.pl. Yet, typing bin/mwc.pl -type make resulted in Unable to find the MPC modules in ~/git/ACE_TAO/ACE/MPC.
Cloned MPC in ~/git/ACE_TAO and set export MPC_ROOT=~/git/ACE_TAO/MPC. Now the driver can be found when I type bin/mwc.pl -type make.
Make fails on missing aio-library. Did sudo apt install libaio-dev.
The bin/mwc.pl -type make reported that it was skipping AIO because SSL was missing. The project ace/SSL/ssl.mpc is defined!
The aio calls are in librt.so. The makefiles that are created search explicitly for them in /lib and /usr/lib, while a simple -lrt would have been enough. Solved this by making a symbolic link from /usr/lib/librt.so to /lib/x86_64-linux-gnu/librt.so.1.
Next was an example which was missing #include "ace/Log_Category.h" for its debug messages.
The make finishes nicely, but there is no target make install. Noted in the MPC documentation that bin/mwc.pl -type make works for every version of make, but not for ACE or TAO.
As suggested, bin/mwc.pl -type gnuace works fine. The code compiles with make -f GNUmakefile, the install works with sudo -E make GNUmakefile install.
Go to TAO directory and do $ACE_ROOT/bin/mwc.pl TAO.mwc -type gnuace, as suggested at TAO-INSTALL.
Did sudo make install, which e.g. installed:
-- Installing: /usr/local/lib64/libosgQOpenGL.so.3.6.4
-- Installing: /usr/local/include/osgQOpenGL/osgQOpenGLWidget
-- Installing: /usr/local/share/OpenSceneGraph/bin/osgviewerQt
Still, I receive an error when configuring gui/vizkit3d:
-- No package 'openscenegraph-osgQt' found
CMake Error at /usr/share/cmake-3.16/Modules/FindPkgConfig.cmake:463 (message):
A required package was not found
Looked at PKG_CONFIG_PATH, which is not set. The file Qt5Qwt6.pc can be found in /usr/lib/pkgconfig; other .pc files live in /usr/share/pkgconfig.
Copied the file ~/git/osgQt/build/packaging/pkgconfig/openscenegraph-osgQt.pc to /usr/lib/pkgconfig. Now the vizkit3d configuration builds.
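A minimal sketch of the .pc file involved; the field values are illustrative assumptions (the Requires line follows the dependency list that osgQt itself reports), not the literal file contents. Putting it on PKG_CONFIG_PATH has the same effect as copying it into /usr/lib/pkgconfig:

```shell
# Hypothetical openscenegraph-osgQt.pc; values are illustrative, not the
# exact file from the osgQt build tree.
pcdir=$(mktemp -d)
cat > "$pcdir/openscenegraph-osgQt.pc" <<'EOF'
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: openscenegraph-osgQt
Description: Qt support library for OpenSceneGraph
Version: 3.6.4
Requires: openscenegraph-osgWidget openscenegraph-osgDB openscenegraph-osgUtil openscenegraph-osg openthreads
Libs: -L${libdir} -losgQt
Cflags: -I${includedir}
EOF
# Make pkg-config find it without touching /usr/lib/pkgconfig:
export PKG_CONFIG_PATH="$pcdir${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
```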
Next error is tools/service_discovery again. Did gem install rice. It still fails, but after running source env.sh again the autoproj build continues. Temporarily solved by setting BINDINGS_RUBY to OFF in build/CMakeCache.txt.
Running autoproj build again fails on a missing #include.
The package osgQt indicated that it requires openscenegraph-osgWidget openscenegraph-osgDB openscenegraph-osgUtil openscenegraph-osg openthreads.
Trying to build openscenegraph from source. That helps, but the include files of osgQt still cannot be found. That is correct, because the github of osgQt contains include/osgQOpenGL, not include/osgQt. Also the source code of osg itself doesn't contain osgQt (because it is no longer supported after version 3.6?).
Look if I can downgrade the version of osg or osgQt.
Switched branch with git checkout topic/Qt4. Now a directory include/osgQt exists. Installed the code. The packaging/pkgconfig is also different. The new version is 3.7.0 (the previous one 3.6.4), and the lib is now -losgQt5 (previously -losgQt). Copied the osgQt5.pc to /usr/lib/pkgconfig, next to the osgQt.pc.
Seems to work; the next error is in base/orogen/std/, which cannot be excluded. Did sudo apt install castxml. Next is the missing orocos-rtt-corba-gnulinux.
February 9, 2021
Did gem install yard, as specified in yard README. Still the required yard cannot be found.
Added a minimal setup to the Rakefile, as suggested at stack overflow, to no avail.
Commented out the yard-lines, compilation continues (with probably some documentation missing).
Next error is in perception/viso2, because png++ includes are missing. Installing sudo apt install libpng++-dev solves this issue.
Next error is base/scripts. Rake cannot load the Hoe gem. Hoe is a yard-plugin, so I disabled the Hoe lines in base/scripts/Rakefile and made the default list empty.
Next error is in slam/hogman. Read this stackoverflow post and changed in slam/hogman/aislib/stuff/array_allocator.h the lines template <>; template into template <, typename Base>. Seems to work, but it is not that easy: it is a specialisation trick which no longer works with this version of C++. Moved it for the moment to the list of excluded_packages.
Next error is cmake configuration error in planning/fd_cedalion. Put this package in autoproj/manifest.
Also moved for the moment planning/fd_uniform, gui/gcam_calib and gui/pose3d_editor to excluded_packages.
Also drivers/video_capture_vlc failed, but that could be easily solved with sudo apt install libvlc-dev.
Building external/snap fails on an old-fashioned use of fget, so put it on the list of excluded_packages.
The package planning/ompl fails on an invalid initialization of a boost::parameter, so pushed it onto the list of excluded packages.
Also excluded gui/osgviz, but now building rock.core fails, because it has a dependency on gui/osgviz. Tried to solve the osg problem with sudo apt install openscenegraph; that didn't help, it should have been sudo apt install libopenscenegraph-dev.
Next tools/service_discovery failed, which depends on the installation of sudo apt install libsigc++-2.0-dev. Still this package fails on a missing -lpthreads. Tried to exclude this package, but it is a dependency of rock.core.
Read tools/service_discovery/INSTALL and did cd build; cmake ... Installed missing packages with sudo apt install libavahi-client-dev, sudo apt install libavahi-core-dev and gem install rice. That solves this issue.
The package gui/vizkit3d needs Qt 4.x (v5.12.8 installed), so pushed it onto the list of excluded packages. That fails because rock.core depends on it.
Moved package gui/vizkit3d to ignored packages, but now rake fails on /home/arnoud/git/rock-core/all-packages/install/gui/vizkit3d/log/gui/vizkit3d-autobuild-stamp.
Found dependencies on gui/vizkit3d in several manifests, like tutorials/rock_tutorial/manifest.xml and slam/odometry/manifest.xml. Essential seem gui/vizkit/manifest.xml, gui/point_cloud/manifest.xml, drivers/laser_filter/manifest.xml and base/types/manifest.xml (although the last has the option optional="1").
I couldn't find the dependency in autoproj/remotes/rock.core. Tried in that directory the test suggested in the README.md: ruby -Itest -I. test/cxx_test.rb. Fails on missing gem. Installed gem install flexmock-minitest. The test runs, but fails on undefined method 'flexmock' after a warning of setup_cxx_support.
The package gui-vizkit3d is quite old; most code is three years old. Still, porting it to Qt5 seems hard.
Used qtchooser -l to see which versions were available, but setting export QT_SELECT=4 showed that /usr/lib/x86_64-linux-gnu/qt4/bin/qmake didn't exist.
Followed the suggestion of ubuntu handbook and added sudo add-apt-repository ppa:rock-core/qt4. Installed sudo apt-get install qt4-qmake. Now qmake --version shows Using Qt version 4.8.7 in /usr/lib/x86_64-linux-gnu (with export QT_SELECT=4).
Installed sudo apt-get install libqt4-dev and gui/vizkit3d is built.
Moved drivers/aravis to excluded packages.
Making drivers/libusb failed on missing libudev. Installed that with sudo apt-get install libudev-dev.
Moved knowledge_reasoning/owlapi to excluded packages.
The package drivers/freenect2 failed on missing TurboJPEG_LIBRARIES, which sudo apt install libturbojpeg0-dev could solve. The package still fails, but now on cuda_depth_packet_processor.cu. Moved this package to the ignore list.
Now gui/vizkit3d fails again. There is a dependency on osgQt, but this module is no longer used since OSG 3.6. The requirement is not in gui/vizkit3d/CMakeLists.txt, but is probably in rock_init(vizkit3d 1.0).
The RockControl example shows how the next position is calculated with the Eigen-library.
The utilrb build breaks because it tries to load build/.autoproj/Gemfile from the old bootstrap location, which is specified in install/gems/Gemfile.
An empty Gemfile doesn't work, because rake is missing.
Replaced in install/gems/Gemfile the link to the old location with ../../.autoproj/Gemfile, and now utilrb can be built (with the direct command autoproj build utilrb).
Next error is roby: tools/roby/Rakefile specifies require "yard".
February 2, 2021
Continued with installing libpcl-1.8.1-dev from the Ubuntu 18.04 archives.
The only missing dependency was libvtk6.3.0, but that package has itself many dependencies.
Installation broke at package libogdi3.2, which cannot be installed next to libogdi4.1:
libogdi4.1 (4.1.0+ds-1build1) breaks libogdi3.2 (<< 4.0.0) and is installed.
Did a git pull, but the code is up-to-date. Still, the change in CMakeLists.txt is not visible, and git log gives as last update May 10 (while Levin committed the last changes on Jan 12, 2021).
So it seems that I am missing 6 commits. Most are in the readme, but the one of Jan 7 seems very relevant (opencv constant and c++ flag).
Looked at the branches with git log --graph --all --decorate --oneline --simplify-by-decoration, but there is a direct line from the initial commit to commit 42d57b3 (May 2018). What happened with the later commits?
Anyhow, did a make again. Now /usr/include/pcl-1.8 is used, and I was able to build target ga_slam!
Manually applied commit ac72e35 of wieset from 26 days ago (opencv constant).
Next I tried cmake -DENABLE_TESTS=ON .. That fails while generating the Makefiles on the googletests for gmock. Maybe related to the commit on the c++ flag.
Switched back with cmake -DENABLE_TESTS=OFF .. and rebuilt libga_slam.so with the updated opencv constant. Yet it is only a library; its usage is not specified.
The functional tests could give some indication of its usage.
Checked the origin of my checkout with git config --get remote.origin.url: I had cloned Dimitris Geromichalos' version, which is indeed 10 commits behind. This is because I blindly copied the git clone command from the README.
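The remote check and fix can be sketched like this, in a scratch repository; both repository URLs below are assumed examples, not confirmed upstream locations:

```shell
# Scratch demonstration: re-point 'origin' from a stale fork to the
# upstream repository (both URLs are illustrative assumptions).
repo=$(mktemp -d) && cd "$repo" && git init -q
git remote add origin https://github.com/geromidg/ga_slam.git
git remote set-url origin https://github.com/esa-prl/ga_slam.git
git config --get remote.origin.url
```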
In the esa-prl version I was able to configure with cmake -DENABLE_TESTS=ON .. Yet, now make fails on No rule to make target '/usr/lib/libvtkWrappingTools-6.3.a', needed by 'libga_slam.so'. Yet, /usr/lib/libvtkWrappingTools-7.1.a is available.
This is related with this warning:
-- The imported target "vtkWrappingTools" references the file
"/usr/lib/libvtkWrappingTools-6.3.a"
but this file does not exist. Possible reasons include:
* The file was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and contained
"/usr/lib/cmake/vtk-6.3/VTKTargets.cmake"
but not all the files it references.
Inspected the cmake logs, but it is not directly clear where vtkWrappingTools is imported.
Added find_package(VTK 7.1 REQUIRED), but vtk-6.3 is still implicitly referenced. Put this line in front of find_package(PCL REQUIRED). Now the cmake files of vtk-7.1 are read.
With this modification I could build libga_slam.so. Generating the Makefiles with the tests enabled still fails on /usr/src/gmock/CMakeLists.txt.
Added 1.10 as the minimal version of PCL. Now cmake .. fails on:
Could not find a configuration file for package "PCL" that is compatible
with requested version "1.10".
The following configuration files were considered but not accepted:
Did a sudo apt --fix-broken install, which reinstalled libpcl-dev (1.10.0), after which libga_slam.so builds nicely. Checked the library, which is linked to pcl-1.10 libraries.
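The link check boils down to inspecting the dynamic dependencies; a small helper sketch (the libga_slam.so path is whatever your build tree uses):

```shell
# List the shared-library names a binary or .so resolves against;
# e.g.: linked_libs build/libga_slam.so | grep pcl
linked_libs() { ldd "$1" | awk '{print $1}'; }
# Sanity check on a binary that is always present:
linked_libs /bin/sh | grep -q '^libc' && echo "resolved"
```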
The PRL lab has several interesting repositories. Part are the packages for the different robots (including the HDPR). The buildconf manifest points to rock-core/package_set, rock-core/rock_page_set, rock-tutorials/tutorials-package_set.
One of the pinned repositories is rover-package_set. This set contains a cmake for GA Slam, so I will inspect this package first.
The package contains slam.autobuild, which should probably be used by rock-core autobuild. Performed sudo gem install autobuild, which successfully installs autobuild and its dependencies. Yet, which autobuild fails, although autobuild shows up in gem list.
The gem install placed the autobuild command in /var/lib/gems/2.7.0/gems/autobuild-1.21.0/bin/autobuild. I couldn't find an option to specify an autobuild file; the only option is a path. Running autobuild on slam.autobuild gives the error that cmake_package is an undefined method.
Tried to bootstrap autoproj, which tries to install this gem in ~/.local/share/autoproj/gems/ruby/2.7.0, yet the installation fails on:
In Gemfile:
autoproj was resolved to 2.13.0, which depends on
rb-inotify was resolved to 0.10.1, which depends on
ffi
FATAL: failed to install autoproj in /home/arnoud/git/esa-prl/rover-package_set/build/.autoproj
ffi expects /usr/lib/ruby/include/ruby.h. Solved this by installing ruby-dev.
Now the autoproj bootstrap works:
autoproj bootstrap successfully finished
To further use autoproj and the installed software, you
must add the following line at the bottom of your .bashrc:
source ~/git/esa-prl/rover-package_set/build/env.sh
To import and build the packages, you can now run
aup
amake
The resulting software is installed in
~/git/esa-prl/rover-package_set/build/install
Moved that directory to ~/git/rock-core/all-packages, removed the autoproj directory and bootstrapped again. Built the different packages with autoproj build, including several slam algorithms and tutorials. Selected all default options.
The build failed on osgviz:
ERROR: manifest /home/arnoud/git/rock-core/all-packages/control/trajectory_follower/manifest.xml of control/trajectory_follower from rock lists 'gui/osgviz/osgviz' as dependency, but it is neither a normal package nor an osdeps package. osdeps reports: cannot resolve gui/osgviz/osgviz: gui/osgviz/osgviz is not an osdep and it cannot be resolved as a source package
Could be a connection error (the internet was unstable). Try again.
Now there is a problem with checking out aria. Looked in autoproj and commented simulation out, but that doesn't help. Looking where drivers/aria is defined. Looked it up with rock package search and found the six-year-old drivers-orogen-aria.
In ./drivers/orogen/aria there is a manifest with a dependency on ./drivers/aria, but removing this dependency doesn't help.
Used the exclude_packages option described at customization to exclude ./drivers/aria. Now I receive a warning:
WARN: drivers/aria, which was selected for rock, cannot be built: drivers/aria is listed in the exclude_packages section of the manifest
Didn't find drivers/aria directly in bundles/rock/manifest.xml.
The build continues, but I had to exclude the dependency on gui/osgviz/osgviz from control/trajectory_follower/manifest.xml.
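The exclude_packages customization amounts to a small block in autoproj/manifest; a sketch (shown in a scratch directory, with the other manifest keys omitted):

```shell
# Append an exclude_packages section to the workspace manifest
# (scratch dir here; in practice edit autoproj/manifest by hand).
cd "$(mktemp -d)" && mkdir -p autoproj
cat >> autoproj/manifest <<'EOF'
exclude_packages:
  - drivers/aria
  - gui/osgviz
EOF
```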
Next is harder: data_processing/openann/manifest.xml has a dependency on numpy and cython (removed them for the moment).
Next I removed python3-setuptools from planning/fast_downward/manifest.xml
The cmake in slam/mtk fails on a rosdep cxsparse, but I couldn't find this dependency in the original github.
libcxsparse3 is part of suitesparse. Installing sudo apt-get install suitesparse-dev solves this.
Next error is in slam/flann. Didn't see a fast solution, excluded the package in autoproj/manifest.
The package drivers/imu_an_spatial tries to perform the command rock_init. Checked if I had done source env.sh; still an unknown command. Excluded the package.
Next error is in bundles/rock_ugv_nav, which fails on a gem file. Excluded the package.
Next error is in tools/utilrb, which also fails on a gem file. Excluded the package. That fails:
ERROR: rock.core is selected in the manifest or on the command line, but its dependency base/templates/ruby_lib is excluded from the build: utilrb is listed in the exclude_packages section of the manifest (dependency chain: base/templates/ruby_lib>utilrb)
February 1, 2021
Found no ros on my Linux workstation, so I could add a fresh ROS installation.
ROS recommends ROS Noetic Ninjemys for Ubuntu 20.04.
The grid_map package supports Kinetic, Melodic and Noetic.
Continued with sudo apt-get install ros-$ROS_DISTRO-grid-map, as suggested at grid map readme.
The first demo roslaunch grid_map_demos simple_demo.launch fails on the Qt platform plugin "xcb" for rviz.
Looked at rviz troubleshooting, but found no Qt problems. Simply running export LIBGL_ALWAYS_SOFTWARE=1; rosrun rviz rviz also fails. Checked both rviz and libxcb with ldd /usr/lib/x86_64-linux-gnu/libxcb.so.1.
Installed qtcreator with sudo apt install qtcreator, so that I can run this tool with QT_DEBUG_PLUGINS=1, as suggested in this forum post.
That still fails. According to this forum.qt.io post, QT_PLUGIN_PATH should be set to /usr/lib/qt/plugins/. That directory doesn't exist, so I did export QT_PLUGIN_PATH=/usr/lib/x86_64-linux-gnu/qt5/plugins instead.
Now I receive two interesting error-messages when starting qtcreator:
QXcbIntegration: Cannot create platform OpenGL context, neither GLX nor EGL are enabled
Cannot mix incompatible Qt library (5.12.8) with this library (5.15.0)
With QT_DEBUG_PLUGINS=1 I also see that the problem is related to the library /usr/lib/x86_64-linux-gnu/qt5/plugins/sqldrivers/libqsqlite.so.
When I do ldd /usr/lib/x86_64-linux-gnu/qt5/plugins/sqldrivers/libqsqlite.so I see that part of the Qt libraries come from /usr/local/webots/lib.
Removing webots from my LD_LIBRARY_PATH solves this issue.
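The path cleanup can be done with a small filter; a sketch, where the webots directory is the one named in the ldd output above:

```shell
# Remove the webots lib dir from a colon-separated search path, so the
# system Qt 5.12 no longer picks up webots' bundled Qt 5.15 libraries.
strip_webots() {
  echo "$1" | tr ':' '\n' | grep -v '^/usr/local/webots/lib$' | paste -sd: -
}
# On the affected shell:
export LD_LIBRARY_PATH="$(strip_webots "$LD_LIBRARY_PATH")"
```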
Also tried the other demos. The tutorial demo works, but looks the same. The iterator demo draws pictures on the occupancy grid:
The opencv demo looks very impressive, with the image sequence translated into the corresponding heights on the occupancy grid:
The operations on the occupancy grid are based on EigenLab, which again is based on the Eigen library.
Read in the chapter: "The library supports multiple data layers and is for example applicable to elevation, variance, color, surface normal, occupancy etc. The underlying data storage is implemented as two-dimensional circular buffer. The circular buffer implementation allows for non-destructive and computationally efficient shifting of the map position." This is an important implementation detail if I would like to use quadtrees or octrees.
The chapter also compares its approach with the OctoMap library, which is a fully 3D approach. The source code is available on github and is actively maintained. Installation as a ROS package (Kinetic, Melodic, Noetic) is described here.
Continued with the installation of ESTEC's GA SLAM, which is built on top of Grid Map Core.
cmake .. fails. Received warnings like:
-- Checking for module 'flann'
-- Found flann, version 1.9.1
-- Found FLANN: /usr/lib/x86_64-linux-gnu/libflann_cpp.so
-- The imported target "vtkParseOGLExt" references the file
"/usr/bin/vtkParseOGLExt-7.1"
but this file does not exist. Possible reasons include:
* The file was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and contained
"/usr/lib/cmake/vtk-7.1/VTKTargets.cmake"
but not all the files it references.
-- The imported target "vtkRenderingPythonTkWidgets" references the file
"/usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so"
but this file does not exist. Possible reasons include:
* The file was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and contained
"/usr/lib/cmake/vtk-7.1/VTKTargets.cmake"
but not all the files it references.
-- The imported target "vtk" references the file
"/usr/bin/vtk"
but this file does not exist. Possible reasons include:
* The file was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and contained
"/usr/lib/cmake/vtk-7.1/VTKTargets.cmake"
but not all the files it references.
-- The imported target "pvtk" references the file
"/usr/bin/pvtk"
but this file does not exist. Possible reasons include:
* The file was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and contained
"/usr/lib/cmake/vtk-7.1/VTKTargets.cmake"
but not all the files it references.
But the main error seems to be:
CMake Error at CMakeLists.txt:43 (find_package):
Could not find a configuration file for package "OpenCV" that is compatible
with requested version "2.4".
The following configuration files were considered but not accepted:
The install script starts with adding ros xenial to the source list, although only ros-kinetic-grid-map-core is a ros package. It also installs two python packages with pip, although it is not clear whether python2 or python3 is intended.
Changed in CMakeLists.txt find_package(OpenCV 2.4 REQUIRED) to find_package(OpenCV REQUIRED). The readme indicated at least OpenCV 3, so that should be OK. cmake .. finishes generating the Makefile.
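The CMakeLists.txt change is a one-liner; sketched here with sed on a scratch copy (on the real tree, run the sed in the ga_slam source dir):

```shell
# Relax the OpenCV version requirement, demonstrated on a scratch file.
dir=$(mktemp -d)
printf 'find_package(OpenCV 2.4 REQUIRED)\n' > "$dir/CMakeLists.txt"
sed -i 's/find_package(OpenCV 2.4 REQUIRED)/find_package(OpenCV REQUIRED)/' "$dir/CMakeLists.txt"
cat "$dir/CMakeLists.txt"   # now: find_package(OpenCV REQUIRED)
```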
Yet, make fails. This seems to be more related to PCL than to OpenCV. Most warnings are about /usr/include/pcl-1.10, while the software expects PCL 1.7 (or higher).
Saw in an update request that changing the minimal versions to OpenCV 3.2 and PCL 1.8 should work on Ubuntu 20.04.
In the pcl git directory I did a switch with git checkout tags/pcl-1.8.1. make -j2 works; sudo make -j2 install fails on a missing #include.
Looked up versions of libpcl-dev at pkgs.org. For Ubuntu 18.04 I found version libpcl-dev-1.8.1; for Ubuntu 20.04 only libpcl-dev-1.10.0 was available.
Tried sudo dpkg -i ~/Downloads/libpcl-dev_1.8.1+dfsg1-2ubuntu2.18.04.1_amd64.deb, but received many missing dependencies. First try was sudo apt-get install libvtk6-dev, but that removes both libpcl-dev and ros-noetic-desktop-full.
Did several wgets followed by dpkg, to install all libpcl-1.8.1 packages (before libpcl-1.8.1-dev; the order is important!).
A difficult one was wget http://archive.ubuntu.com/ubuntu/pool/universe/p/pcl/libpcl-io1.8_1.8.1+dfsg1-2ubuntu2_amd64.deb followed by sudo dpkg -i libpcl-io1.8_1.8.1+dfsg1-2ubuntu2_amd64.deb, because it depends on three libboost1.65.1 packages. Yet, I was able to install all three (and libboost-system1.65.1).
All dependencies for libpcl-dev-1.8.1 are now loaded, except libvtk6-dev and libvtk6-qt-dev, which themselves have many dependencies.