Started Labbook 2019.
December 23, 2018
- Finished the last lesson of Learn Python 2 (on file input/output). That was the last 5% of the course.
- All other Python courses from Codecademy are Pro ($20 per month). My 6-month Pro trial is finished.
- Continuing with the HarvardX course Python for Research (Case Study 6: Social Network Analysis).
- Tried import networkx as nx on my Shuttle, but this failed (tried all tricks from the install documentation). Did a sudo apt-get upgrade to be sure this was not a python2 vs. python3 problem. That solved the problem.
- As noted at the beginning of lesson 4.3.2, G.nodes() now returns a NodeView instead of a list. You can make it a list by the command list(G.nodes()).
- Tried import matplotlib.pyplot as plt, but this failed on the Qt bindings. Performed pip install --upgrade matplotlib, which solved the Qt warning, but it still complained about numpy (compiled for a different version). Also performed pip install --upgrade numpy. This solved the issue, but the quiz question still failed (even though it was the correct answer).
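To illustrate the NodeView change noted in lesson 4.3.2, a minimal sketch (the three-node graph is just an example):

```python
import networkx as nx

# Build a small example graph
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1)])

# In networkx 2.x, G.nodes() returns a NodeView, not a list.
# A NodeView supports iteration and membership tests, but not indexing.
nodes = G.nodes()

# Wrapping it in list() restores the old list behavior (indexing, slicing)
node_list = list(nodes)
print(node_list)  # [1, 2, 3]
```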
December 22, 2018
October 10, 2018
October 9, 2018
- With Google Colab, you can work on a Jupyter notebook as a group.
April 25, 2018
- On page 41 of the memorial from Arbib: "Another source of inspiration came from the work of Tom Collett (1982) on depth and detours. This inspired the neural models of Donald House which in turn inspired the work on mobile robots that Ron Arkin conducted while at UMass (Arkin, 1989)."
- On page 46 Arbib gives a small history of the origin of Schema Theory: "Thereafter come various papers within cybernetics, starting perhaps with Kenneth Craik's (1943) little book on The Nature of Explanation, which says that the job of the brain is to model the world, so that when you act it is because you have been able to simulate the effects of your action before you do it. In our terms, we might say that perceptual schemas activate motor schemas that can be run off-line to predict the outcome of various actions before deciding to proceed with one of them. Here we see, far in advance, the ideas on forward and inverse models brought into play by Mitsuo Kawato, Daniel Wolpert and others (Haruno, Wolpert, & Kawato, 2001; Wolpert & Kawato, 1998)." and "A key notion is assimilation: if you are in a new situation, you try to make sense of it in terms of the schemas you have, assimilating the situation to what you already know. But from time to time you will find yourself in a situation where these schemas are inadequate; what you know will not suffice and your stock of schemas must change. Piaget calls this accommodation; the schemas accommodate to the data they cannot assimilate; learning modifies old schemas and creates new ones."
- On page 62: "Ian Darian-Smith, an expert on the cerebral cortex and the control of the hand, read my Handbook article and ... invited me to speak at a 1983 IUPS Satellite Symposium on Hand Function and the Neocortex in Melbourne. But I didn’t have anything to say on the subject except for that one schema-diagram. Fortunately, two graduate students at UMass ..., Damian Lyons, and ... Thea Iberall – agreed to work with me to assemble a new paper on control of the hand in time for the conference (Arbib, Iberall, & Lyons, 1985). This provided the notion of “opposition spaces,” linking affordances of the object to “effectivities” of the hand (Iberall, Bingham, & Arbib, 1986) and to “RS robot schemas” (Lyons & Arbib, 1989), so the collaboration not only led to new insight into what the brain has to do to control the hand at the schema level, but also gave us ideas for robot control. ... At UMass, a Salisbury hand (a robotic hand developed by Ken Salisbury at MIT) coupled with a robot arm under the control of the control theorist Theodore E. Djaferis provided the core for the development of the Laboratory for Perceptual Robotics (LPR) which helped pioneer the transition from robots engaged in purely stereotypic actions to robots whose action-oriented perception could adapt their performance to current circumstances. LPR saw not only the application of perceptual and motor schemas to robot arms and hands (and recall Arkin’s frog-inspired mobile robots) ...".
- On page 69: "My strength is my weakness – namely, the very breadth of my interests." and "I seem incapable of “gearing down” to produce a book that can gain a wide readership with the general public. Perhaps I am too keen to explain details where a leisurely exploration of the general feeling for a topic might have proved more seductive."
- On page 73: "Sakata’s lab demonstrated that neurons in AIP (anterior intraparietal sulcus) responded to vision of an object with activity that correlated with “how” the object was to be grasped (which we viewed as an instance of affordances in the sense of J.J. Gibson, 1966), whereas Rizzolatti’s lab showed how neurons in the area of premotor cortex they labeled F5 coded something akin to motor schemas for grasping and manipulation. The insights from the first stage of our collaboration, integrating macaque neurophysiology, human behavior, schema theory and computational modeling, were set forth under the title “Grasping objects: the cortical mechanisms of visuomotor transformation” (Jeannerod, Arbib, Rizzolatti, & Sakata, 1995), leading to the FARS model, see Fig. 5."
- On page 85: "Macaque F5 (with its mirror system for grasping) is homologous to Brodmann’s area 44 in human Broca’s area; and imaging studies show activation for both grasping and observation of grasping in or near Broca’s area. But Broca’s area in the human had been implicated in speech production. However, I had learned from Ursula Bellugi that lesions of Broca’s area are equally implicated in aphasia of sign language as of spoken language. This led Rizzolatti and me to think about the role of mirror neurons in language evolution from LCA-m (the last common ancestor of macaque and human) to modern humans, with manual gesture playing an important bridging role (Arbib & Rizzolatti, 1997; Rizzolatti & Arbib, 1998). Our Mirror System Hypothesis posited that the evolutionary basis for language parity (the hearer is generally able to “get,” more or less, the meaning the speaker intends to convey) is provided by the mirror system for grasping, rooting speech in communication based on practical tasks involving the hands. The posited path from “praxis” to communication provided a neural basis for a gestural-origins view of the evolution of brain mechanisms, unique to humans, that could support language. Even in speaking, humans gesture with their hands (these cospeech gestures provide strong evidence for the linkage of hands to these brain mechanisms), while deaf children, if raised in a community with a sign language, can learn it as readily as hearing children can learn a spoken language."
- Figure 2 comes from Sensorimotor Transformations in the Worlds of Frogs and Robots
April 24, 2018
- Read the memorial from Arbib, about how he was influenced by Wiener. Arbib was a post-doc of Kalman at Stanford.
- Maybe I can make a question from Fig. 2.
April 5, 2018
March 5, 2018
- There is a new (free) book on robotics for non-roboticists: Elements of Robotics. Checked the 3rd chapter on reactive systems. The chapter contains several activities (to implement different Braitenberg vehicles), but no exercises.
- The supplementary material contains material to control a Thymio robot in Python.
February 20, 2018
- According to my labbook, I spent 93 hours this year on preparing this course.
January 30, 2018
- Found another Robotics course of Monica Nicolescu at graduate level.
- The course has a seminar design; one of the studied articles is the classic Attention to Action by Norman and Shallice (1986), which is described as biological evidence for hybrid architectures in Chapter 6 (p. 208).
- They point to Rumelhart and Ortony (1977) for a complete view of schemas.
- They point to Bellman on schema selection mechanisms in animals.
- As an example, they repeatedly use the cooperation and competition of the fingers while typing.
January 25, 2018
January 24, 2018
January 23, 2018
- Today I finished Perception Part II in one hour, although there was a lot of repetition with the slides I used yesterday.
January 21, 2018
- Found a quite relevant guideline on how to write an essay, as used in the minor Embedded Systems. Unfortunately, no author is given, so it is difficult to give credit.
- Followed the (outdated) links in the document and found that this guide was written by Tom Johnson of the Writing Center of the American University in Cairo.
January 17, 2018
- Used KeepVid to download a YouTube video and include it in the presentation.
- Searching through the October 2005 archive of Nature for the origin of the Memory representation picture.
January 16, 2018
- Yesterday I only finished 47 of the 87 slides, missing the map example and the arbitration examples. Should look if this is covered in the Architecture slides.
January 11, 2018
- The lesson about loops is mainly repetition of Learn Python. Yet, at the end the Python-specific enumerate and zip are introduced! And don't forget the while-else and for-else constructs.
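As a reminder of those constructs (the sample lists are made up):

```python
fruits = ["apple", "banana", "cherry"]
prices = [1.20, 0.50, 3.00]

# enumerate pairs each element with its index
for i, fruit in enumerate(fruits):
    print(i, fruit)  # 0 apple / 1 banana / 2 cherry

# zip walks two sequences in parallel
for fruit, price in zip(fruits, prices):
    print(fruit, price)

# for-else: the else branch runs only when the loop finishes without break
for fruit in fruits:
    if fruit == "kiwi":
        break
else:
    print("kiwi not found")
```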
January 10, 2018
- Found the following Python code to access the ALDiagnosis API, to see if I could switch off the warning about the hot shoulder of Mio.
- Added this code to main.py, and this works fine:
word = "ALDiagnosis proxy has version: "
word += self.globals.diagnosisProxy.version()
self.globals.speechProxy.say(word)
self.globals.diagnosisProxy.setEnableNotification(False)
if self.globals.diagnosisProxy.isNotificationEnabled():
    self.globals.speechProxy.say("ALDiagnosis Notification enabled")
else:
    self.globals.speechProxy.say("ALDiagnosis Notification is switched off")
- I added the following code:
self.globals.posProxy.goToPosture("StandInit", 1.0)
self.globals.motProxy.setFallManagerEnabled(False)
self.globals.motProxy.setDiagnosisEffectEnabled(True)
self.globals.motProxy.moveTo(1.0,0.0,0.0)
self.globals.motProxy.rest()
But with all these different configurations, Mio walks without any complaints. His arm is a bit off and he doesn't walk completely straight, but it is now one of the best Nao robots.
- Jenny was tested today, but Jenny also has a slippery unstable walk. Not as bad as Brooke, but not really optimal.
January 8, 2018
- Tested Brooke and Mio. Tried to walk 20cm with Brooke, but even that is too hard.
- Mio had the standard NaoQi, but couldn't connect to wireless because the wireless card was not found. Removed wireless and rebooted, but again the reboot was strange. Said KnockKnock, but didn't respond to the front button. Saw its IP with a ping-scan, but it switched itself off before I could connect. It seems to have wireless, but it is not very responsive.
- Switched heads: with Brooke's head it goes well. The robot was marked as having a broken left foot, but wake-up goes well. Yet, it complains about the temperature in its left shoulder (oscillating between 0 and 38 degrees). When not in rest, warning music is played constantly.
January 7, 2018
- The eleventh chapter of Intelligent Behavior in Animals and Robots is more relevant. Firstly, several of Pavlov's experiments are discussed in detail (including second-order conditioning and extinction). McFarland also discusses Verschure's work extensively. Also interesting is Fig. 11.4 with different forms of reinforcement learning.
- The conclusion not only sums up, but also describes some new experiments of rats gaining knowledge about food, which indicates cognition. The final words are: "the main problem in designing robots ... is to find that mixture of built-in implicit knowhow and acquired (explicit?) knowhow that will enable the robot to attain maximum adaption to its niche".
January 4, 2018
- Checking the available Nao robots:
- Nao 17 - Jerry
- Nao 9 - Barta
- Nao 4 - Princess bep bop
- To be checked:
- Nao 11 - Brooke
- Nao 1 - Jenny
- Not available robots:
- Nao 15 - Mio - broken left foot
- Nao 15 - Julia - broken shoulder
- Following the Linux instructions. Had to use python2 instead of python, and to install python-scipy. I could import all libraries (including cv2), although I didn't follow the brew instructions (just checked, and cv2 was already up and running).
- Made a wired connection to Princess; it clearly has the DNT code on it. Tried to do sudo mv /etc/naoqi/autoload.naoqi /etc/naoqi/autoload.ini, but I receive Sorry, user nao is not allowed to execute '/bin/mv /etc/naoqi/autoload.naoqi /etc/naoqi/autoload.ini' as root on beepboop. Plain su also doesn't work. Using the root password helps. Rebooting.
- That worked. Increased the volume, added wireless. The network list can no longer be loaded, and ifconfig in the shell indicated that there is a connection, but no inet4 or inet6 address.
- Also on the advanced webinterface no network is listed. Followed the trick from the Nao Labbook and did mv /etc/init.d/wpa_supplicant /tmp/, followed by a reboot. Initially I had no connection at all, but after a few minutes I had a wired connection and could add a wireless one (146.50.60.30).
- Tried to connect to Jerry and Bertha, but they are not visible in Choregraphe. Tried to find the connection with nmap -sn 192.168.2.0/24, but Jerry is in a different subnet. This is due to the manual settings of the wired connection. Bonjour saw Jerry on 169.254.202.17. Moved autoload.ini, rebooted. Couldn't connect with my Linux machine, but could with my Windows machine. After removing wireless I have a fully connected Jerry.
- Written a script to scan the whole subnet (~/bin/ping_domain.sh), but it only found itself. Found one computer at 192.168.149.1, but that is no Nao shell.
- Next is Jenny, seen by Choregraphe; when logging in, received the message: -bash: alsa_input.0.input-microphones: command not found. Replaced autoload.ini. Complaints about cloud services and id. Jenny walks fine, but I get an error about the camera (and the audio). Is it an older version of NaoQi (2.1.4 instead of 2.1.4.13)? No, the NaoQi version is OK.
- Brooke says Knock, Knock. Brooke is also wirelessly configured, but sways too much during walking (falls after two steps).
- Flashing Bertha. Bertha was not flashed (although the chest-button was flashing blue), yet visible to Choregraphe. Moved autoload.naoqi to autoload.ini. Received the warning "Wifi-card not found". Here /etc/init.d/wireless was present; moved it to /tmp/wireless. Bertha seems to have a loose battery.
- Beboop has lost its wireless connection. wpa_supplicant is no longer in the /tmp directory. Strangely, Beboop now complains that no wireless card is found, while there is no wireless in /etc/init.d. Copied the wpa_supplicant from Brooke to Beboop and rebooted. Robolab is visible again in the webinterface and connected.
- Jerry is visible and walks fine (although a bit rusty the first meter).
- Testing Julia, but she does only one step. Received the warning: [WARN ] ALMotion.ALMotionSupervisor :xUpdateProtectionFootContactSupportModeFromMemory:0 Walk Process killed due to loss of foot contact. Setting all fall managers off didn't help, but this hint was useful. Modified the MoveTo code with:
def onInput_onStart(self):
    import almath
    self.motion.setMotionConfig([["ENABLE_FOOT_CONTACT_PROTECTION", False]])
    # The command position estimation will be set to the sensor position
    # when the robot starts moving, so we use sensors first and commands later.
    initPosition = almath.Pose2D(self.motion.getRobotPosition(True))
And Julia walks!
- Looked at Brooke. Still falls after half a meter. Should look if that also happens with direct Python commands. Tried Downy's hello script, but the socket was in use. Rebooted. That was not the problem; it was the http:// before the IP address in globals. Started it from the Win64 command line.
- With a simple command like:
self.globals.posProxy.goToPosture("Stand",1.0)
self.globals.motProxy.moveTo(1.0,0.0,0.0)
Brooke falls, although he starts well. Should try with arms to front?!
- Tried the first example of whole-body control, but that only keeps the feet on the floor while moving the hips. The Arms example is much more interesting.
- Moves its arms forward, but puts them back again when he starts to walk.
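The ping-scan script mentioned above (~/bin/ping_domain.sh) could also be sketched in Python; a minimal version, assuming Linux's ping flags and using 192.168.2 only as an example prefix:

```python
import subprocess

def subnet_hosts(prefix="192.168.2"):
    """All 254 host addresses in a /24 subnet (example prefix)."""
    return [f"{prefix}.{i}" for i in range(1, 255)]

def ping_sweep(prefix="192.168.2", timeout=1):
    """Ping every host once and return the addresses that replied."""
    alive = []
    for ip in subnet_hosts(prefix):
        # -c 1: send a single probe; -W: reply timeout in seconds (Linux ping)
        returncode = subprocess.call(
            ["ping", "-c", "1", "-W", str(timeout), ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if returncode == 0:
            alive.append(ip)
    return alive
```

Serial pinging of 254 hosts is slow; the shell script is likely faster if it backgrounds the pings, but this sketch shows the idea.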
January 3, 2018
- In the eighth chapter of Intelligent Behavior in Animals and Robots, McFarland highlights that an external observer may observe state-transitions of a task, but for the actor itself a task is only a request which changes its internal motivation. It could talk about this task as an intention, but it knows that this motivation has to be balanced against other 'tasks' and that there is no guarantee that the task is actually done. Yet, in communication it can gain an advantage by claiming tasks before others do.
- Finished PygLatin; you cannot jump ahead because the next lessons are locked.
- It cost me an hour to finish three lessons (14%->28%). So, I should be able to finish the course in less than 8 hours. According to Codecademy, 10 hours are needed.
January 2, 2018
- In the sixth chapter of Intelligent Behavior in Animals and Robots, McFarland introduces Kalman, but now with his view on Systems Theory (a system is described with a transfer function between the observable and controllable parts. A system could also have non-controllable and non-observable parts, but these are not represented by the transfer function).
- An autonomous robot should have internal motivation, which is non-observable (which adds a non-controllable part to the robot). Motivation is one of the five factors which influence behavior (the others being external stimuli, maturation, injury and learning), but the only one which is reversible.
- In addition, the autonomous agent is able to make assessments (consequences of its action) because it is capable of planning and the results of its planning affects its motivational state.
- The planning comes not with a short list of behavior candidates as a result of an exhaustive search, but as a result of motivational filtering. The second stage of planning is to evaluate the consequences of performing each of the candidate activities. This is not procedural but declarative knowledge.
- Slow progress with Learn Python. The Freeform projects are paid-only; a subscription costs $100 for half a year.
- Goal-achieving behavior is successful by being in the right place at the right time and recognizing this state of affairs. Imprinting is a type of perceptual learning: learning the characteristics of the habitat and the other players.
- In section 7.4 McFarland introduces Action Theory with the cybernetic approach, which he directly couples to TOTE units. Actions don't need to be intentional, but must involve some sort of mental representation (schemata, frames, etc) to distinguish them from reflexes.
Previous Labbooks