Wishlist
- Connection to UsarCommander
- ROS nodes working for the platform, Kinect, laser scanner and INS.
Started
Labbook 2022.
July 20, 2021
- There is a call for the Indoor Robot Learning challenge.
- If the students are finished with microSpot, they could start building an InMoov.
July 14, 2021
- In this project report, they used a Jetson Xavier backpack for Pepper to run tiny-yolo-ros at more than 1 FPS.
March 4, 2021
- Continue with the TLT setup, following Deploying Conversational AI Models.
- The tlt text_classification train command required a -k $KEY argument. Used the API key for the encryption.
- The training works, but the fine-tuning failed (it seems the restore from the training checkpoints could not be loaded). Retraining with 10 epochs.
- Specifying the user details in ~/.tlt_mounts.json gave a permission error on the ./config folder.
- After training for 10 epochs, the fine-tuning gave an error that the provided encryption key was invalid.
- It seems that the standard model load key is specified in the model description in the NGC catalog.
- The fine-tuning should be done on my_domain_classification/train.tsv, but none is provided. Instead I do the fine-tuning on sst2/train.tsv. After the first epoch not much progress is made; the performance remains at a precision of 50.92% and a recall of 100%. The fine-tuning uses at least 29 epochs.
- Epoch 0 is bad (precision 100%, recall 1.64%), but the next three epochs are all in the top 3. Epochs 9, 10, 11, 16 and 27 are also in the top 10 (all with a val_loss of 0.702). It is not clear what the stopping criterion is (the model has now been fine-tuning for three hours). In ~/tlt/specs/nlp/text_classification/finetune.yaml it is specified that it will train for 100 epochs (a sketch for lowering this value follows below).
- After 10 hours the training is ready. Checking the inference with the command tlt text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY, which fails because the provided encryption key is invalid. The same holds for the evaluation. It goes wrong somewhere in the NeMo cookbook.
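- Since the stopping criterion is unclear, the number of fine-tuning epochs can also be capped in the spec file itself. Below is a minimal sketch (not part of the TLT instructions), assuming the spec follows the usual NeMo trainer layout with a trainer.max_epochs entry; the exact key may differ per TLT release.

    # Minimal sketch: cap the number of fine-tuning epochs in the TLT spec.
    # Assumes the spec uses the common NeMo trainer layout with a
    # 'trainer: max_epochs' entry, which may differ per TLT release.
    from pathlib import Path
    import yaml

    spec_path = Path.home() / "tlt/specs/nlp/text_classification/finetune.yaml"
    spec = yaml.safe_load(spec_path.read_text()) or {}

    spec.setdefault("trainer", {})["max_epochs"] = 10   # instead of the default 100
    spec_path.write_text(yaml.safe_dump(spec, default_flow_style=False))
    print("max_epochs is now", spec["trainer"]["max_epochs"])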
- Followed the instructions of the webinar, although I first had to leave my current virtual environment and install the wrapper with pip3 install --user virtualenvwrapper before I could add source ~/.local/bin/virtualenvwrapper.sh to my ~/.bashrc.
- Created the new virtual environment with mkvirtualenv tlt_gesture_demo, which I can activate with workon tlt_gesture_demo.
- Installed Jupyter with sudo apt install jupyter-notebook. The archive /tmp/tlt_cv_samples_v1.0.2.zip contains several notebooks, but not yet handdetect_training.ipynb.
- Downloading the 1.2 GB EgoHands dataset from Indiana University.
- The EgoHands dataset has a slightly different structure than the dataset used in the webinar. Moved the downloaded zips to /media/arnoud/DATA/tmp. The dataset contains the images in the directory _LABELLED_SAMPLES, but I don't see the corresponding *.txt label files. That information can be found in the file metadata.mat (a conversion sketch follows below).
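- Before a detector such as detectnet_v2 can be trained, the polygons in metadata.mat have to be converted to bounding boxes. Below is a minimal sketch; the field names (video, labelled_frames, frame_num, video_id and the four hand polygons) are my reading of the EgoHands layout and should be checked against the actual file.

    # Minimal sketch: turn the EgoHands polygon annotations in metadata.mat
    # into axis-aligned bounding boxes. The field names are assumptions about
    # the .mat layout and should be verified against the actual file.
    import numpy as np
    from scipy.io import loadmat

    meta = loadmat("/media/arnoud/DATA/tmp/metadata.mat",
                   squeeze_me=True, struct_as_record=False)

    for video in np.atleast_1d(meta["video"]):
        for frame in np.atleast_1d(video.labelled_frames):
            boxes = []
            for hand in ("myleft", "myright", "yourleft", "yourright"):
                polygon = np.atleast_2d(getattr(frame, hand))
                if polygon.size < 4:      # hand not visible in this frame
                    continue
                x, y = polygon[:, 0], polygon[:, 1]
                boxes.append((x.min(), y.min(), x.max(), y.max()))
            print(video.video_id, frame.frame_num, boxes)

- From these boxes, KITTI-style label files can then be written, since detectnet_v2 expects its labels in the KITTI format.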
- Continue with the tlt launcher.
- The command tlt detectnet_v2 --help downloads a completely new Docker container.
- There are several CV models available, as described in this blog.
- For instance, the licence plate detection model should be interesting for autonomous driving.
March 3, 2021
- Participated in the Nvidia webinar on Create Gesture-Based Interactions with a Robot.
- The webinar is based on the Transfer Learning Toolkit (TLT), which has a zoo of pretrained models and methods to downscale them to light inference models that can run on Jetsons or robots.
- The prerequisite for using TLT is having an NGC account and API key.
- With this you can perform docker login nvcr.io, where you log in with the username $oauthtoken and your API key as password.
- The API key is only generated once, and is stored in ~/.docker/config.json (on my XPS-8930).
- Did virtualenv -p python3 tlt-env, followed by source tlt-env/bin/activate.
- In this virtual environment I did pip install nvidia-pyindex, followed by pip install nvidia-tlt.
- The tlt text_classification download_specs command uses the mounts defined in ~/.tlt_mounts.json. Note that the json expects tabs, ignores # and didn't recognize '~' as the home directory (a small generation sketch follows below).
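- Because '~' is not expanded, a small script that writes the mounts file with absolute paths saves some typing. Below is a minimal sketch; the three mount pairs are placeholders for my own setup, and the 'Mounts' layout with 'source'/'destination' pairs is my reading of the TLT documentation.

    # Minimal sketch: write ~/.tlt_mounts.json with absolute paths, since the
    # TLT launcher does not expand '~'. The three mounts are placeholders for
    # my own setup; the 'Mounts' layout follows my reading of the TLT docs.
    import json
    from pathlib import Path

    home = Path.home()
    mounts = {"Mounts": [
        {"source": str(home / "tlt/specs"),   "destination": "/specs"},
        {"source": str(home / "tlt/results"), "destination": "/results"},
        {"source": str(home / "tlt/data"),    "destination": "/data"},
    ]}

    (home / ".tlt_mounts.json").write_text(json.dumps(mounts, indent="\t"))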
Previous Labbooks