The presentation explores the convergence of AI and IoT, highlighting how advancements in hardware and software, particularly foundational models and edge AI, are enabling greater intelligence and productivity in IoT devices, leading to significant cost savings and expanded capabilities.
Hi everyone, my name is Ivan and I'm a Senior Solutions Engineer at Edge Impulse. In this talk there are going to be a lot of buzzwords that we all like, together in one place: AI, IoT, LLMs, foundation models, we have it all. And I think it's very exciting to look at how we can combine all of this to drive more productivity and growth. And that's another intro slide.
So I'll start with a question; there are going to be quite a few questions throughout this presentation. The first question is: what is IoT? I think people who have been in the industry for 20 years still have different answers to this question, because we just cannot pin it down; it's so many things at the same time, and everyone has their own answer. Is it a fridge with a door that can tell you you're out of beer, but also a back door that will send your data out? I don't know. From an engineering standpoint, I've formulated it for myself, at least, as sensing, intelligence and connectivity, and then you can apply that to different things. We sort of figured out the sensing part a long time ago; we have all kinds of ways to capture signals with sensors from the world around us. We have figured out connectivity; honestly, there's a huge market and plenty of competition, including low-power technologies like LoRa, 4G and the like, so that part is largely pick and choose. But the intelligence part is something that has been progressing along with the kind of compute capacity available in IoT devices. I'm going to talk about that from the hardware standpoint later, but I think the focus for us now is adding more value in the intelligence part.
And the next question: what is the most important graph in the tech business? It's not this one, not this one; it's this one, the Gartner Hype Cycle. You can find a hype cycle for a lot of different industries and topics; this particular one is the hype cycle for artificial intelligence. For those of you who don't know it, the hype cycle tells you at which stage of adoption and closeness to productivity different technologies are, and how soon they're expected to reach the desirable plateau of productivity. That's where computer vision is right now: it means we can take it, apply it and get value immediately, and people understand how to use it and why they want to use it. But there's an interesting edge AI part, which I would argue goes hand in hand with computer vision, and that's what we're going to focus on today. You can also notice that while it's still at an early stage, it's expected to reach the plateau in less than two years; this technology is evolving really rapidly and we see a lot of innovation happening.
So another question (I have a lot of questions in this presentation): what does the real world look like? If we look around, we have light, sound, smell, all sorts of signals: the sensing part we've already figured out. And what do we do as humans? I can smell something and tell whether it's a burger or a pasta, or I can look at a light and say whether it's red or green. So our brain takes a signal that a sensor would normally capture, makes an inference based on it and determines what it is: a classification, a regression, something like that. Historically, we've been trying to reproduce this on all sorts of devices, including IoT, and that's what we call heuristic intelligence, heuristic programming: we saw the data come in, we knew what problem we wanted to solve, we built rules around it, and then we expected the system to tell us what's going on based on those rules.
The new approach, AI at the edge, is different: you don't set the rules manually. Instead, you give the program, the algorithm, all the data you have for all the classes or problems you want to solve, and you let the algorithm figure it out on its own. This is far more efficient. But keep two things in mind: data is paramount in both cases, and IoT is all about data. We get a lot of data, from all kinds of sources, but what do we do with it?
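To make the contrast concrete, here is a minimal sketch (not from the talk) of the two styles side by side in C++: a hand-written threshold rule versus the same decision delegated to a trained model. The model_predict() stand-in and its 0.8 operating point are placeholders for whatever inference library and threshold you would actually deploy.

```cpp
#include <cstdio>

// Stand-in for a trained model. In a real deployment this would invoke your
// inference library on the same raw samples; here it just returns a dummy
// score so the sketch compiles and runs on its own.
static float model_predict(const float* samples, int n) {
    float mean = 0.0f;
    for (int i = 0; i < n; ++i) mean += samples[i];
    return (n > 0) ? mean / (n * 100.0f) : 0.0f;   // pretend "P(anomaly)"
}

// Heuristic approach: the engineer hard-codes the rule up front.
static bool heuristic_is_anomaly(const float* samples, int n) {
    for (int i = 0; i < n; ++i)
        if (samples[i] > 42.0f) return true;       // hand-picked threshold
    return false;
}

// ML-at-the-edge approach: the rule is learned from labeled data offline;
// at runtime we only compare the model's score against an operating point.
static bool model_is_anomaly(const float* samples, int n) {
    return model_predict(samples, n) > 0.8f;       // chosen on validation data
}

int main() {
    const float window[] = {21.5f, 22.0f, 48.3f, 22.1f};
    const int n = sizeof(window) / sizeof(window[0]);
    std::printf("heuristic: %d  model: %d\n",
                heuristic_is_anomaly(window, n), model_is_anomaly(window, n));
    return 0;
}
```

The point of the comparison: the runtime code barely changes, but the decision logic moves from a hand-written rule to something learned from the data itself.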
So why is the heuristic approach lacking, and why are people who work with it starting to see more and more limitations? Because the real world is fuzzy. We can say, okay, I can figure out these two edge cases that I've seen on my production line and set rules for them, but what if something I didn't know about happens? How do I detect the thing that I will have to fail on first and only then learn about? We cannot account for everything. Heuristics are also labor intensive: collecting and modelling all the situations we want to anticipate requires a lot of domain knowledge and, frankly, time. And then insights are left on the table. With IoT data we say we want to detect when a temperature crosses a threshold, but in that same data there are potentially many more problems I could extract and solve with exactly the same inputs; we just don't know how to approach them, because they're a bit more complex than a threshold.
So, this QR code you'll see a couple of times; my challenge is that everyone has to scan at least one during this presentation and sign up. This QR code lets you register on Edge Impulse, a platform we provide that lets you build a machine learning pipeline and deploy it on an edge device. It will be an algorithm that takes in sensor data, any sensor data you like, for any problem you would like to solve. You build it in the cloud, of course, because training models requires a lot of compute, but then you'll be able to deploy it to any sort of hardware after that.
From a high level, that's what it looks like, and it's a normal engineering process: it's iterative. The platform lets you do this machine learning engineering with embedded systems in mind, iteratively and quickly, without having to pay attention to the things that are very hard to do by hand.
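For a flavour of what the deployed end of that pipeline looks like: an impulse exported as a C++ library is typically driven through a run_classifier() entry point, roughly as sketched below. Treat this as an illustration rather than copy-paste code; exact header paths, macros and the way you fill the signal buffer depend on the project and the target you export for.

```cpp
// main.cpp -- built against the C++ library exported from an Edge Impulse
// project (the export bundles the DSP blocks, the model and these headers).
#include <cstdio>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One window of raw sensor readings, sized to what the impulse expects.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull slices of the raw window.
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) out_ptr[i] = features[offset + i];
    return 0;
}

int main() {
    // Fill `features` from your sensor here (accelerometer, microphone, ...).

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    ei_impulse_result_t result = {0};
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        std::printf("inference failed\n");
        return 1;
    }

    // Print the score for every class the model was trained on.
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        std::printf("%s: %.2f\n", result.classification[ix].label,
                    result.classification[ix].value);
    }
    return 0;
}
```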
The second question we always get is: where can I deploy machine learning with this platform? So I made this graph to show that you can deploy on a PC, an MCU, an NPU, an MPU, whatever you like; but as our CTO likes to say, basically anything: if you can compile C++ code, this can run there. And that's another overview of the sort of hardware I'm talking about: any hardware. Of course, for smaller hardware you'll consider different problems; you won't run a computer vision model on a microcontroller, but all sorts of sensor data can have machine learning applied to them and value extracted from them.
So what I'm trying to say with this is that AI and IoT are having a bit of a love story, because it's a perfect match. There are a lot of reasons, and I've had quite a few conversations about it today, but the most important one, I think, is that you save on energy costs and on bandwidth; these are the two most expensive things in IoT, in my opinion. We have to pay for ingestion to the cloud: if you stream raw data to the cloud from a huge deployment of sensors, that's a huge cost. The energy cost of battery-powered devices, where you have to replace the battery every couple of years, is also significant. So instead we run the models directly on the edge, rather than having a camera stream all its frames to the cloud and running a big model there, which would mean sending 30 720p frames per second up to the cloud; that's an enormous amount of data.
A related point is that AI lets you extend what you thought was possible with IoT. Before, this kind of camera with an NVIDIA GPU inside, you could only imagine it connected directly over Ethernet to your local network. What you can do now is connect it via a LoRa backhaul, for example. LoRa doesn't let you stream images; LoRa lets you send one short message, on the order of a hundred bytes. So you do all the processing on the camera and you send one message, yes or no, over LoRa, and now you can use camera systems in your low-power IoT systems.
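To put rough numbers on that trade-off, here is a small back-of-the-envelope sketch. The ~100 KB per compressed 720p frame figure and the lora_send() helper are illustrative assumptions of mine, not measurements from the talk or a real radio API.

```cpp
#include <cstdio>
#include <cstdint>

// Illustrative assumptions, not measurements.
constexpr double kFrameKB   = 100.0;   // ~100 KB per compressed 720p frame
constexpr double kFps       = 30.0;
constexpr double kLoRaBytes = 12.0;    // one tiny "event" payload

// Hypothetical radio hook: stands in for whatever LoRa stack you use.
static void lora_send(const uint8_t* payload, size_t len) {
    (void)payload;  // a real stack would transmit these bytes
    std::printf("queued %zu byte(s) for the LoRa backhaul\n", len);
}

int main() {
    double stream_mb_per_hour = kFrameKB * kFps * 3600.0 / 1024.0;
    std::printf("streaming raw video: ~%.0f MB/hour up to the cloud\n",
                stream_mb_per_hour);
    std::printf("on-device inference: ~%.0f bytes per detected event\n",
                kLoRaBytes);

    // After local inference, only the verdict leaves the device.
    uint8_t verdict = 1;               // e.g. 1 = "object detected"
    lora_send(&verdict, sizeof(verdict));
    return 0;
}
```

Under these assumptions the raw stream works out to roughly 10 GB per hour per camera, versus a handful of bytes per event when the decision is made locally.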
But let's take a little step back and return to the point shared by all the approaches and all the engineering we try to do with sensors: everything starts with data. We don't have a use case, or anything to do at all, without data.
GenAI. This has been a big thing over the last several years. I think it's crazy that I can now write a prompt like this and get this slightly cartoonish image out that still represents the message, and there are other models that, from the same prompt, can generate a photorealistic image that resembles reality. So we can use generative AI to augment the datasets we have, because collecting data, and even finding ways to collect data, is often a very hard task.
So what we've added to the platform, and I'm going to show it now with time series data, though it's the same with images, is the ability to use these big models not to do your job, not to solve your problem, but to help you build a robust solution. For example, this model from ElevenLabs can generate keywords. Let's see how it works in the platform: I enter the phrase I want, and it generates several samples for me. "TechEx London 2025." "TechEx London 2025." "TechEx London 2025." So now I can generate a keyword spotting dataset without ever touching a microphone. That's pretty amazing.
These models are already very good. Another one: imagine I want to build a glass-break detector. I can generate these sounds without having to break glass; I don't want to break anything, I just want a model that detects it for me. So that's pretty great: it's incredible that we can do these things so quickly, generating very large amounts of data with minimal investment. But really, everything starts with labeled data, because if we have a bunch of data collected from the field that isn't labeled, isn't quality data, it's going to do us more damage than it helps us build a solution. So, quality control; as we say, garbage in, garbage out.
So again, LLMs. It's crazy that I can now give an image to, in this case, ChatGPT or any other LLM and ask it what's going on, and it's not just going to tell me there's a person wearing orange; it's going to tell me that this photo was actually taken at the Olympics and that this is the Dutch national rowing team getting a medal, which is already a step beyond any automated process I could build. That's great, but it's not directly applicable to IoT use cases, because it's very expensive. These models are huge, they won't fit anywhere, they're expensive to even run, and, going back to the earlier point, you'd have to send all those images up, so the costs grow very quickly; that's not really feasible.
Now, what we can do instead is use these models to help us quality-control the data we already have. If I'm building a use case with a lot of images, I'm going to go and collect a video containing the things of interest to me, but then I would have to label every frame of that video manually, which is a very labor-intensive task. What I can do now is give my whole dataset to a large language model, and this is how it looks in our platform: ask it to look at every image, all 3,000 to 4,000 of them, and tell me whether there is a hard hat or not. In five minutes I have 5,000 labels applied. That's pretty incredible, and it massively increases the productivity with which we can approach these solutions.
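Outside the platform, the same auto-labeling loop is easy to script yourself. The sketch below shows only the orchestration; the actual request to a multimodal model is hidden behind a hypothetical ask_llm_has_hard_hat() helper, which in practice would be an HTTP call to whichever vision-capable LLM you use, and the "frames" folder and CSV output are assumptions of mine.

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

// Hypothetical helper: in a real pipeline this would send the image to a
// multimodal LLM and parse its yes/no answer. Stubbed out here so the
// sketch stands on its own.
static bool ask_llm_has_hard_hat(const fs::path& image) {
    (void)image;
    return false;  // placeholder answer
}

int main() {
    if (!fs::exists("frames")) {
        std::cerr << "no frames/ directory with extracted video frames\n";
        return 1;
    }

    std::ofstream labels("labels.csv");
    labels << "filename,hard_hat\n";

    // Walk the captured frames and let the big model do the labeling.
    for (const auto& entry : fs::directory_iterator("frames")) {
        if (entry.path().extension() != ".jpg") continue;
        bool hard_hat = ask_llm_has_hard_hat(entry.path());
        labels << entry.path().filename().string() << ","
               << (hard_hat ? 1 : 0) << "\n";
    }
    std::cout << "labels written to labels.csv\n";
    return 0;
}
```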
So what we have so far is these foundation models, big models, LLMs, that are incredibly smart and very general, and they can assist us in building solutions for the edge, which don't have to be that smart: I'm building a model to solve one particular problem at my plant or in my product, so it doesn't need to know everything in the world. But sometimes that's still not enough. So how else can we be clever and combine them? We need to take another step back and recognize that everything starts with hardware; IoT is hardware, in the end. Going back to my earlier slide about what IoT is: before, we were developing the technologies on top, like connectivity, while the hardware was limited, so we could only run the control logic there, sensing and sending everything up. But now the hardware has caught up: we have so many possibilities to put extra intelligence into the same chips we already have and extract value from the sensors we're sampling.
The way we like to put it is that edge AI hardware has 3+1 layers: 3+1 because the first three are still IoT, and GPUs, you could say, sit somewhere in between, since they can be embedded GPUs. So: MCUs; MCUs with NPUs, neural processing units, which are dedicated ASICs designed for machine learning inference; MPUs, which are typically Linux systems; and then GPUs and AI accelerators, which are high-end edge ML hardware.
Let's look at what happened in 2022: a little company called Arm developed something called Ethos. Arm is known for licensing general-purpose computing cores, but they asked: how do we make a "GPU" for embedded systems, in quotes? So they made a very energy-efficient, high-performance piece of hardware that can process machine learning operations very quickly, and because Arm did it, everyone who works with Arm can just take it and apply it to their designs. Incredible. Now there are a lot of companies, Alif among others, with SoC solutions that have a dedicated NPU on board, licensed from Arm.
But then companies like ST looked at it and said: hey, I want to do this myself, and I can do it better than licensing it from Arm. You see that yellow corner: ST announced this a couple of months ago. They developed the STM32N6; you can see the kit at our booth, 152, and have a look at it. Basically, they now build their own silicon accelerator delivering around 600 GOPS, which means that on an MCU-grade device, with MCU-grade power consumption, you can run YOLOv8 at 15 FPS. It's incredible. Second, mid-range computer vision: here you have players like NXP, who also use Arm's dedicated acceleration, and Renesas, who build their own IP. Amazing: at this level you can already connect several cameras, run a Linux system and handle much more complex workloads. And then you have high-end computer vision systems, where you can run ten models at the same time on ten camera streams at 60 FPS and put it into robotics applications and the like. With this hardware you can now run things locally that you could never have imagined before.
We can be even a little smarter and say: how about I run a very simple model, an anomaly detection model, that answers "is there something I don't recognize in the frame, yes or no?", "is there something on the floor that isn't supposed to be there?". This model only tells you yes or no, but as soon as that happens, I can then, and only then, send the image to GPT, or, with the highest tier of devices, to an on-device LLM or VLM, and get an insight into what's actually happening. I see this kind of solution becoming far more prominent and interesting in the coming year, because the hardware now allows us to do it; it's my personal highlight of the coming year.
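A minimal sketch of that cascade logic, assuming the local detector produces an anomaly score in [0, 1]; local_anomaly_score(), send_frame_to_llm() and the 0.6 threshold are placeholders you would replace with your own model, uplink and an operating point tuned on your data.

```cpp
#include <cstdio>

struct Frame { /* pixel data would live here */ };

// Stand-ins for the two stages of the cascade. The local detector would be
// your small on-device model; the uplink would call a hosted or on-device
// LLM/VLM only when it is actually needed.
static float local_anomaly_score(const Frame&) { return 0.12f; }  // dummy score
static void  send_frame_to_llm(const Frame&)   { std::puts("frame sent for description"); }

int main() {
    constexpr float kThreshold = 0.6f;  // tuned on your own data

    Frame frame{};  // grabbed from the camera in a real system
    float score = local_anomaly_score(frame);

    if (score > kThreshold) {
        // Rare path: pay for the big model only when something unexpected shows up.
        send_frame_to_llm(frame);
    } else {
        // Common path: nothing leaves the device, nothing is spent.
        std::printf("no anomaly (score %.2f), staying local\n", score);
    }
    return 0;
}
```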
And this is a small demo of it in action. There's an anomaly detection model running, and you can see the cascade is not enabled, so we're not sending anything up, we're not sending anything to ChatGPT; I'm only sending the flagged frames. If I did it for every frame, I would spend all that money I was talking about. Right now there are no anomalies, so nothing is sent; it's running entirely on the device. And now, the signature prop: a bottle of beer. "Beer bottles," yes. So I can balance it: if I know an event isn't going to happen very often, I can still offload it and use the bigger, computationally expensive models.
So, just to reiterate all these points: now is the moment when hardware and software have reached the point where, within the same solutions, distributed low-power IoT devices can get far more intelligence out of the data we're already collecting.
The last question, and I get this a lot: what makes these technologies actually get adopted? I like showing this slide, because we are all developers in one way or another; we all know the developers whom we, as managers, come to and say "I want this," but you need a developer who will take it, implement it, test it and do everything required. That's why we at Edge Impulse, the platform you've seen before, have made it so that you can access it for free: you can use it for free, you can access most of the features for free, and you can even use the outputs of the platform, such as models and use cases, in commercial applications for free. Because if this technology is adopted by everyone, and everyone speaks the same language and knows which hardware to use and which use cases to build, that makes us all far more productive and able to come up with incredible new things that can help us.
So, another QR code for those who missed the last two; this is your last chance (well, not actually the last one), but do sign up: you get immediate platform access, and you can come talk to us at our booth. One more thing: on the left is an actual book, and we have a few copies left at the booth. It's a book about edge AI, covering a lot of concepts that are outside the scope of this conversation, but it gives you a fundamental understanding of how sensor applications are approached when you want to apply machine learning to them. It's a very interesting read; it isn't marketing about how to use Edge Impulse, it's really about the fundamental principles behind these technologies. There's also a lot of other content we've created on these topics. And that's all for me.