This presentation introduces "geometry-aware imitation learning" for robotics, which integrates known physics and geometric constraints into machine learning models to enable robots to learn and perform complex tasks more autonomously and robustly.
If you're not speaking, kindly turn your camera off.
So, my name is Zach Conti and I'm a research associate at the Alan Turing Institute here in London, and I would also like to introduce my colleague and co-organizer David, who's a PhD enrichment student, also in design engineering. And for those of you who are not familiar, the Alan Turing Institute is the UK's national institute for data science and AI.
Also, for the benefit of those joining us for the very first time, I'd like to briefly introduce the topic of the seminar series, that is, physics-enhanced machine learning, focusing on engineering applications. Physics-enhanced machine learning is, as most of you know, an upcoming subfield in traditional machine learning and deep learning that aims to incorporate known physics understanding into the machine learning framework.
On a side note, I'd also like to say that if any of you are interested in sharing your work within this community, please feel free to reach out to either David, myself, or Andrea, who unfortunately is not joining us today, to find a slot for your seminar. It would be really great to share your work with the community; we have quite a nice growing community now.
As for the format of this seminar: our speaker for today, Matteo, will speak for around 45 minutes, and then we will have questions and answers towards the end. You may ask either in the chat or in person. All right, so without further ado, I'll pass over to my colleague David to introduce our invited speaker for today, and with that, as always, I thank you all.
Yeah, thanks Zach, and hello everyone. I would like to welcome Matteo here and also introduce him briefly. Matteo received his bachelor's and master of science degrees in automatic control engineering from the University of Naples, Italy, in 2008 and 2011 respectively. He then received his PhD from the Technical University of Munich in 2017. Currently he is an assistant professor at the Department of Industrial Engineering at the University of Trento in Italy, and previously he was an assistant professor at the University of Innsbruck and a postdoctoral researcher at the German Aerospace Center, the DLR. So Matteo, thank you very much for joining us today. We are very excited to see what we're going to learn today, and yes, we are ready. Okay.
Okay, thank you very much David and Zach for giving me the opportunity to present part of my research during this seminar. I hope you can hear me, and do you see my mouse pointer? Yes? Okay, because I use it a lot, so I'm kind of lost without it. The title of my talk is, basically, geometry-aware imitation learning in robotics. I'll try to give a little bit of a teaching-style presentation at the beginning, in order to let also the people who are not super familiar with manifold and differential geometry concepts be on board, and then in the second part of the talk I will move more towards research-oriented things and work that we are doing together with other colleagues.
Okay, but first of all I would like to say a few words, if needed, on why we actually need learning in robotics, and I usually put this in my talks because, as David said, I studied control engineering, so I'm an engineer, and a large part of the control community is still a bit, how to say, reluctant to go towards learning and data-driven solutions, because of the lack of mathematical proofs, guarantees and so on, even if these things are now coming. But let's have a look at what we have now and what we have reached. More or less we are somewhere here now, and we can say that this is a mixture of control techniques plus some nice programming skills: people have developed GUIs
and you can compose some simple motions in a sequence or in parallel, put conditions, and that's it. This is of course out there and working; it's robust, reliable and so on, but the limitation is that it gives pretty limited autonomy and still requires a moderate to high level of expert domain knowledge. The goal of robotics in the long term, the holy grail, is to go into the area where you have almost fully autonomous robots, or lightly or weakly supervised robots, that do not need much domain-expert knowledge to operate. This is the domain where learning starts to be fundamental, in order to increase the autonomy without increasing the required domain-expert knowledge, and in this area I place myself, in the imitation learning community, also called programming by demonstration in our field.
So then, why geometry-aware imitation learning? Well, the answer is pretty easy: we are dealing with robots, and there are many types of data in robotics that do not belong to the Euclidean space. The position is the typical quantity that you can treat as Euclidean data, because it is; but already if you move to the orientation, then it's not anymore: orientation lives on its own space. Similarly, you have other types of quantities, like inertia, impedance and manipulability, that belong to the space of symmetric and positive definite matrices.
In the past, at least, the way of doing learning and retrieval with this data in the imitation learning community was: okay, let's just vectorize this data, and then we do some post-processing in order to restore the geometric constraint, or the manifold structure, that a raw vectorization of these quantities will of course completely destroy. To better understand this, we have this small video and an example here. Let's assume that our points live on a circle, this blue circle here, and then we have these dots, the grey ones, and we want to compute something simple: the mean, indicated with mu, which is, you know, the basis of many probabilistic approaches in machine learning.
What happens if we calculate the mean treating these points as belonging to R2, to the Euclidean space, is that more or less we end up here, and then we can simply realize that this point is too close to the center of the circle to be a real one. Then what can we do? We can rescale this point, pushing it back onto the circle where it originally was, and we end up here. But if we calculate the mean using a proper concept of distance, one that preserves the fact that these points lie on a circle, we may end up here instead. So every time we treat Riemannian data as Euclidean and then do some post-processing to push it back onto the manifold, we introduce some inaccuracy, and of course, if this is used to generate a robot trajectory, and maybe we integrate, we propagate this error through our algorithm, and we end up with something that may easily go out of distribution very quickly.
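The circle example above can be sketched numerically. This is a minimal illustration of my own, not the speaker's code: it compares the naive Euclidean mean, the rescaled ("pushed back") mean, and an intrinsic Fréchet mean computed with the arc-length distance; all names here are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = rng.normal(loc=np.pi / 2, scale=0.4, size=50)   # cluster near the top
points = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# 1) Naive Euclidean mean in R^2: lands strictly inside the circle.
euclid_mean = points.mean(axis=0)
print(np.linalg.norm(euclid_mean))        # < 1, i.e. off the manifold

# 2) Post-hoc fix: rescale the Euclidean mean back onto the circle.
projected_mean = euclid_mean / np.linalg.norm(euclid_mean)

# 3) Intrinsic (Frechet) mean: minimize summed squared geodesic distance
#    by gradient descent on the circle, using arc length as the metric.
def log_map(base, p):
    """Signed arc length from `base` to `p` along the unit circle."""
    d = np.arctan2(p[1], p[0]) - np.arctan2(base[1], base[0])
    return np.arctan2(np.sin(d), np.cos(d))   # wrap to (-pi, pi]

theta = np.arctan2(points[0, 1], points[0, 0])
for _ in range(100):
    base = np.array([np.cos(theta), np.sin(theta)])
    theta += 0.5 * np.mean([log_map(base, p) for p in points])
frechet_mean = np.array([np.cos(theta), np.sin(theta)])

# The Frechet mean lies on the circle by construction; the projected
# Euclidean mean can differ from it, since it inherits the distortion
# of averaging in the embedding space.
print(projected_mean, frechet_mean)
```

The point of the sketch is exactly the speaker's: averaging in R2 leaves the manifold, and pushing back afterwards is not the same as averaging with the proper geodesic distance.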
So I'll show the video again here, and basically you see that the problem is this one: two points in the Euclidean space, or two trajectories, are here, and if they have to stay on the manifold, we have to deform the space in order to make them live on the manifold.
This means that if we want to properly deal with Riemannian data, we should first of all redefine the concept of distance, and use a proper Riemannian distance and all the derived quantities. The solution is then to use the proper tools that have been developed by mathematicians in the framework of Riemannian geometry, also sometimes called differential geometry. As a reminder, a manifold is a very general concept: it's a smooth topological space equipped with a metric, that is, with a distance function. The type of distance function defines the properties of the manifold, and it's not uniquely defined, but there are some distances that give nicer properties and are predominant in the literature. Here on the left you see a sphere embedded in the three-dimensional space: this is the manifold where orientations live, for example unit quaternions. And here on the right you see the cone that contains symmetric and positive definite matrices.
Okay, so let's start giving some elements of Riemannian geometry, in order to then understand how we can deal with this data. As I said, a manifold is locally isomorphic to the Euclidean space; we can, a little bit improperly, with an abuse of notation, say that it's locally Euclidean, even if this doesn't mean much from a rigorous mathematical point of view, but I think you get the sense. At each point on the manifold you can define the so-called tangent space, which here is pretty easy to depict, where you can define a coordinate frame and treat your quantities as if they were Euclidean. In order to do so, of course, we need some operations that let us move from the manifold to the tangent space and back. These operations are usually called the exponential map and the logarithmic map: the exponential map is the one that pushes a point from the tangent space onto the manifold, and the opposite is the logarithmic map, which lifts a point from the manifold onto the tangent space.
Another important concept is that of parallel transport. Why do we need parallel transport? Because all that we will build in a moment is based on placing a tangent space at some point, lifting the nearby data onto the tangent space, using Euclidean tools to learn in the proximity of that tangent space, and then pushing the data back onto the manifold and moving on to the next tangent space. When we have to compare two vectors that are projected into two different tangent spaces, we cannot just sum them up or subtract them; we actually need to move a vector from one tangent space to the other. In the end, we need all the vectors to be represented in the same tangent space in order to perform operations; otherwise, basically, this happens, let me show it once again: if we just move a vector with typical Euclidean tools from one tangent space to another, you see that a vector that stays here gets completely changed. Parallel transport is the proper way to transfer the vector while keeping, let's say, the orientation that it has: you move along the geodesic connecting the two points where the two tangent spaces are placed, while preserving the directionality. So we need this kind of operation, and it can be defined on any Riemannian manifold.
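The three operations just described can be written down concretely for the unit sphere S2. This is my own minimal implementation for illustration (the speaker's library may use different conventions):

```python
import numpy as np

def exp_map(x, v):
    """Push tangent vector v at base point x onto the sphere."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def log_map(x, y):
    """Lift manifold point y into the tangent space at x."""
    d = y - np.dot(x, y) * x          # component of y orthogonal to x
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1, 1)) * d / nd

def parallel_transport(x, y, v):
    """Move tangent vector v from T_x to T_y along the geodesic x -> y."""
    log_xy = log_map(x, y)
    theta = np.linalg.norm(log_xy)
    if theta < 1e-12:
        return v
    u = log_xy / theta
    # The component of v along the geodesic direction rotates with the
    # geodesic; the orthogonal component is unchanged.
    return v + (np.cos(theta) - 1) * np.dot(u, v) * u - np.sin(theta) * np.dot(u, v) * x

x = np.array([0.0, 0.0, 1.0])                 # north pole
y = np.array([1.0, 0.0, 0.0])
v = log_map(x, y)                              # tangent vector pointing at y
print(exp_map(x, v))                           # recovers y (up to rounding)
w = parallel_transport(x, y, v)
print(np.dot(w, y))                            # ~0: w is tangent at y
```

Note that naively copying `v` to the tangent space at `y` would leave a component sticking out of the manifold, which is exactly the distortion the speaker's animation shows.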
This is just to summarize basic arithmetic on manifolds: the logarithmic map can be related to the concept of difference in Euclidean space, the exponential map is the equivalent of an addition, and the distance, of course, has to be redefined to be on the manifold. Interpolation, which is important, can be achieved through the exponential map, multiplying this difference of points in the tangent space by the time t.
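As a small sketch of the interpolation rule Exp_x(t · Log_x(y)) on a manifold other than the sphere: on the SPD cone with the affine-invariant metric, the geodesic between two matrices has a closed form. The helper name below is mine, and this is only one common metric choice, not necessarily the one used in the speaker's work.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

def spd_interpolate(X0, X1, t):
    """Point at fraction t along the affine-invariant SPD geodesic X0 -> X1."""
    R0 = fmp(X0, 0.5)
    R0i = fmp(X0, -0.5)
    return R0 @ fmp(R0i @ X1 @ R0i, t) @ R0

X0 = np.diag([1.0, 4.0])
X1 = np.diag([4.0, 1.0])
Xh = spd_interpolate(X0, X1, 0.5)

# Every interpolated point stays symmetric positive definite, unlike a
# naive entry-wise blend followed by a projection back onto the cone.
print(np.linalg.eigvalsh(Xh))   # strictly positive eigenvalues
```

This is exactly the "interpolation through the exponential map" idea: lift `X1` to the tangent space at `X0`, scale by `t`, and push back.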
Okay, so now we have introduced our machinery, the Riemannian manifold machinery. Let's now have a look at how we can use that framework in typical robotic problems. A first example is the super famous, at least in the imitation learning community, framework called dynamic movement primitives, introduced in the early 2000s by Ijspeert, Stefan Schaal and others, working at that time at the University of Southern California. What was their idea? Basically, they wanted to reproduce some demonstrations and generate something that was actually feasible for the robot to execute, so sufficiently smooth, which is why they went for second-order dynamics, and that preserved stability, because very often in robotics you want to reach a certain position, a certain configuration, and keep it. So they had this idea: let's start with the simplest dynamics that we can consider, a second-order spring-damper-mass system with unitary mass. This is a very famous physical model; everyone has probably had to deal with it in a basic course in physics.
This is a second-order linear dynamics, asymptotically stable, which means that from each point, you can imagine that I can pull one side of this spring-damper system and sooner or later it will converge. Depending on the values of the spring and the damping I can have oscillations, or I can have an exponentially decaying trajectory, as we usually say in control theory for a critically damped system: it converges towards an equilibrium. This is pretty nice, but of course it's quite limited, so the idea of these guys was: okay, let's add to this simple system a nonlinear force, learned from demonstration, in order to execute some arbitrary trajectory, and then finally retrieve asymptotic convergence by killing this force: it is multiplied by a vanishing term, so that with time it vanishes and lets the system retrieve asymptotic stability towards the desired goal.
This is basically the formulation, mono-dimensional, but it's not so hard to extend it to higher dimensions. These two equations, y-dot and z-dot, are the dynamics of this system: the velocity is the time derivative of the position, and the acceleration is a gain multiplied by the position error, minus the velocity, plus this nonlinear forcing term, which is a radial basis function network where the psi_i basis functions are Gaussian components. Nothing complicated, and this can be learned from demonstration with a pretty easy formulation. So this is the standard one.
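The standard one-dimensional discrete DMP just described can be sketched end to end. This follows the usual formulation from the literature; the gains, basis-function placement, and the per-basis weighted least squares fit below are conventional choices of mine, not necessarily the speaker's.

```python
import numpy as np

def train_dmp(y_demo, dt, n_basis=20, alpha=25.0, alpha_x=3.0):
    """Fit the forcing-term weights of a 1-D discrete DMP to one demo."""
    beta = alpha / 4.0                      # critical damping
    tau = len(y_demo) * dt                  # movement duration
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y_demo[0], y_demo[-1]
    t = np.arange(len(y_demo)) * dt
    x = np.exp(-alpha_x * t / tau)          # canonical (clock) signal
    # Forcing term the demo implies: tau^2*ydd = alpha(beta(g-y)-tau*yd)+f
    f_target = tau**2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # basis centers in x
    h = n_basis**1.5 / c                                 # basis widths
    psi = np.exp(-h * (x[:, None] - c)**2)
    s = x * (g - y0)                         # vanishing scaling term
    w = np.array([(s * psi[:, i]) @ f_target /
                  ((s**2 * psi[:, i]).sum() + 1e-10) for i in range(n_basis)])
    return dict(w=w, c=c, h=h, alpha=alpha, beta=beta,
                alpha_x=alpha_x, y0=y0, g=g, tau=tau)

def rollout(p, dt, n_steps):
    y, z, x = p["y0"], 0.0, 1.0
    traj = []
    for _ in range(n_steps):
        psi = np.exp(-p["h"] * (x - p["c"])**2)
        f = (psi @ p["w"]) / (psi.sum() + 1e-10) * x * (p["g"] - p["y0"])
        zdot = p["alpha"] * (p["beta"] * (p["g"] - y) - z) + f
        y += z / p["tau"] * dt               # tau*y_dot = z
        z += zdot / p["tau"] * dt            # tau*z_dot = spring-damper + f
        x += -p["alpha_x"] * x / p["tau"] * dt
        traj.append(y)
    return np.array(traj)

dt = 0.01
demo = np.sin(np.linspace(0, np.pi / 2, 200))      # smooth 0 -> 1 motion
p = train_dmp(demo, dt)
traj = rollout(p, dt, 600)                          # run past the demo length
print(traj[-1])                                     # converges to the goal g = 1
```

Because the forcing term is scaled by the decaying clock signal `x`, it vanishes with time, and the rollout asymptotically settles at the goal regardless of what was learned, which is exactly the stability mechanism the speaker describes.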
Now, of course, we want to extend this approach to consider data that are embedded in a Riemannian manifold; here, for example, I'm showing ellipsoids, which means symmetric and positive definite matrices. I can keep most of the structure. What I did, something minor, is extend this to be multi-dimensional, so each of these y and z are now vectors, and we need to deal with that; each w_i becomes a vector, so all together they can be seen as a matrix, such that the forcing term is also a vector.
And what I have changed here is not dramatic: the error here, the subtraction, I replaced with the logarithmic map, and the same here; this is just a scaling to prevent high accelerations at the beginning. The crucial point here is that I'm placing my tangent space at y, at the current point, and this means that I'm not working in a fixed tangent space, but I have a tangent space that is sliding over the manifold, and I need to do just one integration step. This is very important, because if you place the tangent space somewhere and you have points that are far from the center of the tangent space, you again start to introduce some deformation of your manifold and some numerical errors, while working like this you minimize the error you introduce: it's just one integration step, and then you start again, placing your tangent space at the next point that comes out of the integration. Integrating is done through the exponential map, and this is basically one way to extend the framework to Riemannian manifolds.
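The sliding-tangent-space integration can be sketched for the spring-damper part of the dynamics on S2 (forcing term omitted). This is my own simplified sketch: the exp/log helpers are standard, and for small steps I keep the velocity in the new tangent space by a simple projection rather than a full parallel transport, which a complete implementation would use.

```python
import numpy as np

def exp_map(x, v):
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def log_map(x, y):
    d = y - np.dot(x, y) * x
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1, 1)) * d / nd

alpha, beta, dt = 8.0, 2.0, 0.01
y = np.array([0.0, 0.0, 1.0])            # start at the north pole
g = np.array([1.0, 0.0, 0.0])            # goal on the equator
z = np.zeros(3)                           # velocity expressed in T_y

for _ in range(2000):
    # Spring-damper acceleration expressed in the tangent space at y,
    # with Log_y(g) playing the role of the position error:
    zdot = alpha * (beta * log_map(y, g) - z)
    z = z + zdot * dt
    # One integration step through the exponential map, then re-anchor
    # the tangent space at the new point (the "sliding" construction).
    y_new = exp_map(y, z * dt)
    # Approximation: project z into T_{y_new} instead of parallel
    # transporting it; valid for small integration steps.
    z = z - np.dot(z, y_new) * y_new
    y = y_new

print(np.dot(y, g))    # ~1: converged to the goal, never leaving the sphere
```

Note that the state never needs a post-hoc renormalization: the exponential map returns a unit vector by construction, which is the whole point of integrating on the manifold.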
The problem now is how we compute the training data. To train that forcing term we need derivatives, y-dot and y-double-dot, and if you are in Cartesian space it's pretty easy: we just take the difference between consecutive samples and divide by the sampling time, and do it again for the second derivative. On manifolds it's a little bit different. What we do is go over the data and perform what is called piecewise geodesic interpolation. Let's assume, for example, that two consecutive points are this one and this one, a bit exaggerated here, since in practice we have pretty close points. What we do is place the tangent space here on the left, take this point and project it onto this tangent space, assuming the points are close so that I do not deform the trajectory too much. This gives me the difference, or what is the equivalent of a difference on a Riemannian manifold, and then I divide by the sampling time, and this gives me my derivative at any point. There is of course a bit of technicality here: once I have projected my data into the tangent space, I usually do not have to consider the geometric constraints anymore. Just to give you an example, the tangent space of the S3 sphere, the quaternion sphere, is just a three-dimensional Euclidean space, while the tangent space of symmetric and positive definite matrices is just the space of symmetric matrices; a symmetric matrix is equal to its transpose, and I can simply vectorize it by taking the independent components. Then you can calculate the velocity, and once you have these velocities you can just apply Euclidean tools to calculate the second derivative, and with this you can train your DMP.
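The derivative computation above amounts to replacing the finite-difference subtraction with a log map. A small sketch of my own on S2, for a trajectory moving along the equator at a known angular speed:

```python
import numpy as np

def log_map(x, y):
    """Lift point y into the tangent space at x on the unit sphere."""
    d = y - np.dot(x, y) * x
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1, 1)) * d / nd

dt = 0.01
t = np.arange(0, 1, dt)
# Demonstration: travel along the equator at 1 rad/s.
traj = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)

# Velocity at sample k lives in the tangent space at traj[k]:
# v_k = Log_{y_k}(y_{k+1}) / dt, the manifold analogue of (y_{k+1}-y_k)/dt.
vel = np.array([log_map(traj[k], traj[k + 1]) / dt
                for k in range(len(traj) - 1)])

print(np.linalg.norm(vel[0]))   # ~1.0 rad/s, the true angular speed
# Each velocity is tangent to the sphere at its base point:
print(np.abs(np.einsum('ij,ij->i', vel, traj[:-1])).max())   # ~0
```

Once these velocities sit in (flat) tangent spaces, plain Euclidean finite differences give the accelerations, as the speaker says.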
One of the interesting features of DMPs is the possibility to change the goal on the fly. Imagine that you have a camera tracking an object you want to catch, and you have learned some stereotypical motion, some template reaching trajectory, but the goal is changing on the fly. What you can do with DMPs is simply define a linear system that, over time, depending on this gain, smoothly changes the goal from g to the new one. The equivalent of this on manifolds is again a dynamical system that, instead of this difference, considers the logarithmic map, in this case placed at g, of g_new, and then calculates g-dot. The trick here is that for many smooth manifolds you can usually interchange these two, and if you change this one then you have to apply the parallel transport, with a minus sign, in order to interchange them. This is where parallel transport comes into the game. This is just a simulation: you see that the DMP was trained to reach a certain goal here, then we switch to another one, and our ellipsoid here, which represents a symmetric and positive definite matrix, changes accordingly, smoothly reaching the shaded goal instead of the original one.
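The goal-switching dynamics can be sketched in a few lines: the Euclidean rule g_dot = alpha_g (g_new - g) becomes, on the manifold, g_dot = alpha_g Log_g(g_new), integrated through the exponential map. This is my own S2 sketch with an illustrative gain:

```python
import numpy as np

def exp_map(x, v):
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def log_map(x, y):
    d = y - np.dot(x, y) * x
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1, 1)) * d / nd

alpha_g, dt = 5.0, 0.01
g = np.array([0.0, 0.0, 1.0])        # currently commanded goal
g_new = np.array([0.0, 1.0, 0.0])    # goal reported by the camera

for _ in range(500):
    # g_dot = alpha_g * Log_g(g_new); integrating with the exp map makes
    # the goal slide along the geodesic toward g_new, never off the sphere.
    g = exp_map(g, alpha_g * log_map(g, g_new) * dt)

print(np.dot(g, g_new))   # ~1: goal has smoothly converged
```

The first-order dynamics gives the same exponential smoothing as in the Euclidean case, but the intermediate goals always remain valid points on the manifold.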
Okay, it's time now to see a little bit of results. We have this paper, under review, where we describe what I have told you so far, this geometry-aware movement primitive. Here you see some Cartesian letters, and we have transformed them, pretty simply, projecting them onto manifolds: here on the left as quaternions, here as rotation matrices, and on the right you see SPD matrices, and you can see how they are reproduced. Then some experiments: the robot here, for example, has to fill this watering can, and it controls position, orientation and stiffness.
In the other case, the robot has to pick from a box, so we have instructed the robot to enter the box. Here we control the position, and, as a secondary task, the manipulability, which is an SPD matrix; when it has to enter another box, we command the position, and to prevent the robot from deforming the trajectory too much, we have this secondary task where the robot tries to keep its posture, the posture that was demonstrated during the motion. So you see that this is the demonstration, and when we try to generalize a little bit, we try to keep the configuration via the manipulability as a secondary task, and with this we avoid the robot, I don't know, hitting the box and so on.
Okay, so that was for discrete motions, motions that have a start and an end. But many human tasks are of a periodic nature: if you imagine sawing, for example, it's something that requires you to pass again and again through the same points and generate a periodic trajectory. Of course, the DMP framework was already extended to consider these periodic tasks; it's formally the same, but instead of a clock signal we have a phase variable that goes from zero to two pi and is then reset, like a sawtooth. The forcing term is a little bit different: we have r, which is the amplitude of our oscillation, and here we now have von Mises basis functions that represent our periodic signal, like Gaussians over a circle. The extension to Riemannian data is again pretty simple: just substitute the difference here with the logarithmic map, and all the rest is formally the same. Of course, we now have to consider the different types of vectorized data, but more or less the changes are not dramatic.
Here are some results, partially published, partially not. This is a robot operating a drilling machine: we have shown one demonstration, and the robot then generates the periodic motion. Here we control again position, orientation and stiffness, to guarantee a smooth interaction while manipulating, so we define a variable-stiffness impedance controller, and then we can perform multiple holes. You see, it's me now showing the holes and somehow inspecting the material. We can also change the material and the thickness a little bit; we have a simple estimator, so we just estimate the error in tracking the trajectory and push more if the error is large. A pretty simple thing, but it helps us, in this case, to make holes in a different type of wood with a different thickness. Again, you can see that the robot was successfully making the holes.
The next video is, I think, a bit nicer; I will put the audio on again just to give you the feeling. This is my colleague, by the way, Luca Peternel from TU Delft, and you see again here we are controlling the motion in this direction and the stiffness, in order to effectively do this sawing task together with the human, so push the human in one direction and pull in the other, not be passive. And in a moment you will see that this was a real task. It's coming, it's coming, a bit of suspense... yeah, so it was really cutting down this piece of steel together with the robot. Again, this task was about controlling not only the position but also the stiffness, in order to ensure the contact and actually support the human.
Okay, so we are going towards the last part of my presentation. What we have seen so far with dynamic movement primitives is the possibility of learning a time-driven dynamics, where time plays the fundamental role of killing off the nonlinearities in order to retrieve stability. This means that we have some hyperparameters to tune in order to have a motion that, at least in the discrete case, I'm talking about the discrete case, does not overshoot, and where you don't kill the nonlinear forcing term too early and then just linearly converge to your target. In the last, more or less, ten years, or a little bit less, there has been a lot of effort to learn stable and autonomous dynamical systems: systems that do not depend on a time input in order to generate the motion, where you learn an entire vector field, and potentially you can also generalize in areas of the state space where you don't have demonstrations, because your vector field has a certain shape also far from the demonstrations.
The idea is nice, and I also developed a few approaches in this field that I'm not going to mention today, but then the problem we started to face is: what if we want an autonomous system, so time independent, that is stable and evolves on the manifold? The idea was very similar, but we had to use a particular type of learning: we could not simply learn a forcing term and still guarantee stability, so we had to rely on a diffeomorphic mapping. The idea is basically this one: again exploit the tangent space. In this picture, for simplicity, I'm placing the tangent space at the goal, but as I said this can introduce some distortion, so by parallel transporting you can instead consider a local tangent space placed at each point of your trajectory and have it sliding on the manifold, and this is way more accurate.
So we can define a very simple, linearly converging system, where "linearly" has of course to be understood on the manifold: linear on the manifold is some kind of geodesic curve. If you imagine a globe, the path between two points is of course not a straight line but a piece of geodesic. Anyway, this is the tangent-space projection of it, and then, from the demonstration, we can learn a mapping that transforms this vector field into the one on the right, through the mapping and its Jacobian, the derivative with respect to the state. This is guaranteed to preserve the stability, so it's time independent and stable, and in case we have to go back, of course, we can go back. The mapping has to be smooth, because we have to calculate the derivative, invertible, and with a continuous inverse, because we also have to invert the Jacobian matrix: these are the properties of a diffeomorphism, or diffeomorphic mapping.
Just to give you a few hints on this: this is a geodesic system, again placing the tangent space at the current point, or, by the transformation, placing it at the goal; for a few manifolds it's similar if you consider the parallel transport in between. It's just the equivalent of a linear dynamics: this error here, multiplied by a scalar gain, gives you the velocity. These are the so-called global coordinates, where you define a velocity field and then start to integrate it, and this creates, from an initial point to the goal, a piece of geodesic motion. That's the simplest thing that you can do on a manifold, and the simplest dynamics that you can describe on your manifold.
Then what you can do, again, as I said, from your data projected in the tangent space, is learn a diffeomorphic mapping that takes as input this global coordinate, log x, multiplies it by the Jacobian computed at the same point, with the same gain, and gives you a dynamics that, for example here, follows this blue line.
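The reshaping-through-a-diffeomorphism idea can be sketched in the flat Euclidean case, which already shows why stability is preserved. Everything here is my own illustration: the map `phi` is hand-picked rather than learned, whereas in the actual work the diffeomorphism is fitted from demonstrations and the construction lives in tangent spaces.

```python
import numpy as np

def phi(x):
    """A simple diffeomorphism of R^2: a shear with a bounded bending."""
    return np.array([x[0] + 0.5 * np.tanh(x[1]), x[1]])

def phi_inv(y, iters=50):
    """Invert phi numerically by fixed-point iteration (converges here,
    since the second component is passed through unchanged)."""
    x = y.copy()
    for _ in range(iters):
        x = np.array([y[0] - 0.5 * np.tanh(x[1]), y[1]])
    return x

def jacobian(x):
    """Analytic Jacobian of phi at x."""
    return np.array([[1.0, 0.5 / np.cosh(x[1]) ** 2],
                     [0.0, 1.0]])

# Base field f(x) = -x is trivially globally stable at the origin.
# Transformed field: y_dot = J_phi(phi^{-1}(y)) @ f(phi^{-1}(y)).
def ydot(y):
    x = phi_inv(y)
    return jacobian(x) @ (-x)

y, dt = np.array([2.0, -1.5]), 0.01
for _ in range(2000):
    y = y + ydot(y) * dt
print(y)   # ~phi(0) = [0, 0]: the base system's stability carries over
```

The transformed trajectories are exactly the images under `phi` of the base system's straight-line trajectories, so the equilibrium and its stability are inherited by construction; only the shape of the flow changes.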
Learning these diffeomorphisms has been studied quite a lot. Maybe you are familiar with normalizing flows, the deep neural networks originally proposed to learn arbitrary nonlinear distributions, getting a Gaussian as input; some people have of course exploited normalizing flows, or other types of deep neural networks, to learn this diffeomorphism. What we have done is take one step back and use a more traditional approach, Gaussian mixture models, showing under which conditions a Gaussian mixture model is actually a diffeomorphism, which are pretty soft conditions, and we have, in a paper that is under review, shown that this is more data efficient, meaning we can accurately learn this diffeomorphism from a few data points, and also time efficient, because training a normalizing flow is quite intense. You also have fewer hyperparameters in a Gaussian mixture model: you mostly have one hyperparameter, the number of Gaussian components, which, with a little bit of experience using Gaussian mixture models, you can honestly choose by hand after a while, or you can simply do a grid search among reasonable values, and that's it.
That was not the case for deep neural networks: we also have another paper in the pipeline where we use deep neural networks, but we had to do a proper hyperparameter search on the cluster in order to get good results.
This is the type of experiment that we have done. This is our dataset; as I mentioned before, I will show the link later on where you can actually download it. It's an existing dataset that we have projected onto the orientation and SPD manifolds, so no rocket science. And this is a real example, where we were controlling the position and orientation of the robot in order to do this wine stacking. One of the criticisms was that I could have chosen a slightly longer bottle, because it's fitting quite precisely, but that's it. And yeah, now we change the goal: the demonstration was always done reaching the same goal, and then we can change the goal, because this is one of the properties of dynamical systems, very simply converging towards another goal. I hope the video is not too slow due to the connection. This is yet another goal, and now we also change the orientation of the rack, and the robot still performs the same task successfully. There is of course a large tolerance in this, but still, you have to be somehow there, otherwise you don't make the task.
And this is cooperative drilling: here we are controlling the stiffness to be low while approaching the drilling pose, and then we have high stiffness, so we have a variable stiffness profile. You see that the robot was compliant, with low stiffness, at the beginning, because you have to just rotate and lift this piece from the container, and this piece of wood is relatively large, so you have to unstack it a little bit; that was relatively smooth due to the fact that the robot is compliant. And now it's becoming stiff, so you see that you can drill and the robot is supportive. Of course, this is not an industrial robot, maybe you are familiar with it, so there is already some elasticity in the joints; even if you set the maximum stiffness the robot moves a little bit, but still, you can do this type of task.
Okay, just to summarize: I presented today a little bit of the Riemannian geometry framework and some ways of extending traditional imitation learning, or programming by demonstration, frameworks to Riemannian manifolds. We have started to release our code, even if the papers are not published yet, just to serve the community. We have a MATLAB implementation; this is also a long story, I'm trying to move from MATLAB towards Python, and my proficiency level in the two is now kind of similar, so Python is finally overtaking. But this one has been done in Python: it's the geometry-aware DMP that you can download and try. We have tested it on four different manifolds: symmetric and positive definite matrices, unit quaternions, rotation matrices, and SE(3), which is basically the composite manifold where you have position and orientation. We will probably release some other code, but you can check, or email me if you are interested, and of course I can even share some development versions with you, even if they are not super clean, so that you can try them.
Okay, thanks a lot. Sorry I was running a little bit over, but hopefully we have time for questions.
Well, thank you very much, Mateo, that was really cool. I love manifolds. I'll open the floor for questions; if there are any questions please ask. I know that Peter had one before, but I'm sure you still want to ask it. In the meantime I do have one or two questions. Okay then, I will go ahead. What I wanted to ask primarily was: when you started to talk about the Riemannian manifold geometry, you make an assumption right there, you're assuming this kind of spherical manifold. But it is a manifold, right? So what is the influence of that assumption on the work? Because of course there are other kinds of manifolds that you can choose.
Yeah, well, actually,
if we stay within smooth manifolds: so far we have considered not only the sphere but also the cone that basically describes symmetric and positive definite matrices, according to the affine-invariant distance, one type of distance on the SPD space.
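For reference, the affine-invariant distance mentioned here is d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. A minimal sketch in Python (NumPy and SciPy), not taken from the released code:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_distance(A, B):
    """Affine-invariant distance: || log(A^{-1/2} B A^{-1/2}) ||_F."""
    # sqrtm/logm of SPD inputs are real; discard numerical imaginary residue.
    A_inv_sqrt = np.linalg.inv(np.real(sqrtm(A)))
    M = A_inv_sqrt @ B @ A_inv_sqrt  # congruence transform, still SPD
    return np.linalg.norm(np.real(logm(M)), "fro")

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 3.0]])
d = spd_distance(A, B)  # positive, and spd_distance(A, A) is zero
```

A defining property, and a quick sanity check, is affine invariance: d(C A C^T, C B C^T) = d(A, B) for any invertible C.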
And these are pretty different. The sphere has this problem that you have the top and the bottom hemisphere, and you have antipodality problems: if you consider it to represent robot orientation, a point on one hemisphere and its antipodal point represent the same orientation. So this is a typical problem that you have to deal with, and usually, as a roboticist, what you do is force your data to be in one of the two hemispheres. This is something that you have to keep in mind. With this type of manifold you don't have that problem, points are uniquely defined, so it is a different manifold,
but you have two problems. The manifold is unbounded, okay: I just cut the cone here, but it keeps growing, so it's a non-compact manifold. And you also have this point at zero, which can create problems, you know, zero divisions and things like this. In this case the non-compactness is okay, because you have a trajectory, so your trajectory will be compact anyway, you will stay somewhere, and we are forcing our system to be stable. Stability means that you will not diverge, so you will not reach points at infinity, which is good. For the zero point we don't have a real solution now, apart from staying far from it; that's basically what we are doing. But the approach is, let's say, rather general, apart from properly defining these mappings: each manifold has these mappings, the logarithmic map and the exponential map, which you can define at each point on the manifold. Something we didn't try: in some complex manifolds the functional expression of the logarithmic and exponential maps can even change. On the sphere you have the same functional expression, you just change the point where you evaluate it. Of course this gives you different tangent spaces, but it's the same function evaluated at a different point. For some manifolds even the function changes, and this is something that we are still trying to investigate.
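On the unit sphere, the two mappings just mentioned have well-known closed forms, one functional expression evaluated at different base points. A minimal sketch, not from the released implementation:

```python
import numpy as np

def sphere_log(x, y):
    """Log map on the unit sphere: tangent vector at x pointing toward y."""
    d = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(d)  # geodesic distance from x to y
    if theta < 1e-10:
        return np.zeros_like(x)
    v = y - d * x  # component of y orthogonal to x (lies in tangent space at x)
    return theta * v / np.linalg.norm(v)

def sphere_exp(x, v):
    """Exp map on the unit sphere: follow the geodesic from x along v."""
    n = np.linalg.norm(v)
    if n < 1e-10:
        return x
    return np.cos(n) * x + np.sin(n) * v / n

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
v = sphere_log(x, y)       # length pi/2: a quarter of a great circle
y_back = sphere_exp(x, v)  # recovers y up to numerical precision
```

Note that only the evaluation point, and hence the tangent space, changes from base point to base point, which is exactly the property described above.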
Thanks, okay.
My second question: if you have a different robot with different specs, can you map this onto the manifold, or can you reuse the information that you have from the previous robot? Because of course there are similarities, right?
Yes, similar manifolds. This is something I'm working on right now, for different applications, so this is definitely interesting for me as well. There are obvious geometric transformations between similar state spaces. What can be transferred relatively easily is the manipulability, actually.
The manipulability basically describes in which directions you can move freely and in which directions your motion is constrained. If you imagine the robot stretched out like this, of course in this direction you cannot go forward much, right, but you can move up and down freely, so that is your manipulability. So you can describe a manipulability profile, and indeed there are papers that show this, like transferring from a human to a robot and some other things. If you're interested in this, I invite you to have a look at recent work from Sylvain Calinon's group, where they actually do manipulability transfer. So manipulability can be transferred with little effort, I would say. Position and orientation need transformations: you maybe need to rotate the base of your robot and things like this, and you usually have to rescale, imagine you have a very big robot and a very small robot.
And for the other robot you have of course quite some data; can't you use some data to indicate how to transform your old manifold?
Yes, yes, of course you can. If you assume that you have the same manifold and you have the data, then you can find the relationship between them, because we actually define the distances and so on, so this is of course possible, yeah.
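The manipulability described here can be computed from the arm's Jacobian: the velocity manipulability ellipsoid is J J^T, and its eigendecomposition gives the free and constrained motion directions. A sketch for a hypothetical planar two-link arm, not one of the robots in the talk:

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Position Jacobian of a planar two-link arm with link lengths l1, l2."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [l1 * c1 + l2 * c12, l2 * c12]])

def manipulability_ellipsoid(J):
    """Radii and axes of the velocity manipulability ellipsoid J J^T."""
    w, V = np.linalg.eigh(J @ J.T)  # eigenvalues in ascending order
    return np.sqrt(np.maximum(w, 0.0)), V

# Nearly stretched-out arm: motion along the arm is constrained, motion
# perpendicular to it is free, so the ellipsoid is strongly elongated.
radii, axes = manipulability_ellipsoid(jacobian_2link(np.array([0.0, 0.1])))
```

At a fully stretched configuration the Jacobian becomes singular and the smallest radius collapses to zero, the degenerate case of the constrained direction.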
Okay, very cool, thank you.
Are there any questions from the audience? Because Peter seemed to have a pressing question, I wonder if...
It was me, thank you very much.
I'm sorry, I don't want to put you on the spot.
Thank you, thank you. I had a very naive, silly question: in the mapping, when you project your dynamical system onto a manifold, it almost seemed like the system is a little bit linear, or doesn't it matter? Am I wrong?
You mean here?
Yes, and in the actual equations they seem to be linear in g. Is that an important factor?
Well, it is not linear. This one is the equivalent of a linear system on the manifold, yes, and a linear system in Euclidean space generates straight lines, while this generates pieces of geodesics, so that's it. The only thing is that it adheres to the manifold: whatever your manifold is, it will generate a path that stays on the manifold. And this is good, because it's guaranteed to be stable; we can use some Lyapunov theory to prove that you will always converge to the goal, and so on. But this is only part of the equation: we also want to do, for example, the blue line here, and to do this we pass it through a nonlinear function. That's why you mentioned normalizing flows.
I understand. My question was: the underlying physical system in the Euclidean space, can it be highly nonlinear, the underlying dynamical equations?
Yes, it can be. And in general you can learn a nonlinear system directly and enforce some Lyapunov conditions on top of it; this is possible, and then you can pass it through this. For the Euclidean case I actually have a paper where I show, doing incremental learning, that if I start from a system that just generates straight lines and then I show it a pretty complex path, it takes time to slowly converge to that one, while if I start from something that was already in that direction, like a piece of a sine function, it is quicker to reach the other one incrementally. So of course, if you start from a dynamics that already gives you what you want, it's clearly beneficial for the learning.
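The stable, geodesic-generating system discussed above can be sketched as a first-order law whose velocity is the log map toward the goal, integrated with the exp map. A minimal illustration on the unit sphere, where the gain and step size are illustrative and not values from the talk:

```python
import numpy as np

def sphere_log(x, y):
    """Log map on the unit sphere: tangent vector at x pointing toward y."""
    d = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(d)
    if theta < 1e-10:
        return np.zeros_like(x)
    v = y - d * x
    return theta * v / np.linalg.norm(v)

def sphere_exp(x, v):
    """Exp map on the unit sphere: follow the geodesic from x along v."""
    n = np.linalg.norm(v)
    if n < 1e-10:
        return x
    return np.cos(n) * x + np.sin(n) * v / n

def converge_to_goal(x0, goal, gain=1.0, dt=0.05, steps=200):
    """x_dot = gain * Log_x(goal): each step moves along the geodesic to goal."""
    x = x0
    for _ in range(steps):
        x = sphere_exp(x, dt * gain * sphere_log(x, goal))
    return x

x0 = np.array([1.0, 0.0, 0.0])
goal = np.array([0.0, 0.0, 1.0])
x_final = converge_to_goal(x0, goal)  # approaches goal while staying on sphere
```

The geodesic distance to the goal contracts by a factor (1 - gain * dt) each step, which is the manifold analogue of the straight-line convergence of a stable linear Euclidean system.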
Thank you very much, and thank you for your talk.
Yeah, just another point: this nonlinear function is a diffeomorphism, so it has to be bijective, continuous, and so on and so forth, and it's relatively easy to break these conditions with real data. If you just have two trajectories that intersect and continue, you know, one on top and one at the bottom, then you can already break this condition. You can do some averaging and so on to cope with it, but this is something that I would really like to overcome: to have a, let's say, less constrained learning strategy, but still keep stability.
Thank you.
Sorry, one more quick question from me: you know, the manifolds, the spaces you showed in the beginning, are they bound by the eigenvectors of the system, or what are the main variables? So in the beginning you showed that...
In the very beginning?
The state spaces.
Good.
All right, are there any other questions?
All right, it seems not, but meanwhile I'd love to thank you so much, Mateo, thank you so much, and I hope that we can see more geometry-oriented dynamics. I work on this kind of topic as well.
Ah, so cool. Sorry, are you working on this topic?
Yes, I'm actually working on this topic, but from an energy dynamics angle.
I have a question for you then, just let me take five minutes. You know, if you go to this geometric deep learning, people, and I don't know all the literature of course, but all the people that I read basically use, I think it was from John Nash, this embedding theorem: a sufficiently smooth manifold can actually be embedded in a sufficiently large Euclidean space. And they use this fact in order to calculate the Riemannian distance, and once you know the Riemannian distance then you can define all the rest. But do you think we can overcome this? What I do instead is study known manifolds, like SPD and so on, for which some mathematicians have derived the concept of distance, and from the concept of distance it's pretty easy to define the logarithmic and exponential maps and the rest. The pure data-driven approach is: okay, let's consider a super high-dimensional embedding, no, and then just do averaging in this high-dimensional space and then say, okay, this is the distance. Can we find something in between? I don't know, it's an open question.
So, I myself don't work directly with geometric deep learning, but the assumption I make is that high-dimensional, higher-order systems actually live on low-dimensional manifolds,
which I guess is the same intuition. And actually right now I'm working with really pretty simple examples, so that would enable me to kind of answer what you asked me. The thing is that what I'm exploring, in essence, is looking for the distance between different systems. So rather than looking at the distance, the non-Euclidean distance, between, let's say, in your case, the same system in different scenarios or trajectories, in my case I'm looking at different systems and seeing how we can find these geometric transformations, the minimum distance between two different state spaces. Because if the spaces are very similar, and we can represent this geometrically, then you can actually find morphs, you can find transformations between these governing spaces. And the goal here is that we are challenging the typical, you know, black-box methods. In my case we are doing energy forecasting, so it's a little bit different from your application, and we are challenging the black-boxness of these typical data-driven models that don't give you much information and also don't generalize very well. So one way of tackling this is by dealing with the state space instead, which is innately governing. And how do we leverage these physics-driven state spaces in this data-driven framework? So that's why I asked earlier about finding transformations between...
Exactly, yeah.
I think we're slowly working towards a new kind of machine learning that caters for engineering dynamics, and I'll be happy to involve you in what we are doing. Someone I met a few weeks ago was also interested in this, so it would be very cool to have a chat, and anyone who's listening, perhaps.
Well, thanks a lot, thank you so much, it was really interesting.
All right, I won't take more of your time. Thanks to everyone, thank you, thanks once again for your attendance, and we'll see you next week with a new talk.