YouTube Transcript:
The INSANE Truth About OpenAI
Video Transcript
You have an incredible amount of power.
Why should we trust you?
>> Since launching ChatGPT, OpenAI has
become one of the most influential and
valuable tech companies in the world.
But the story of what's going on behind
the scenes is crazy. From trying to
overthrow their CEO to completely
abandoning their original principles,
this video is the insane history of
OpenAI. But it's also a journey through the
past, present, and future of artificial intelligence.
And this is a story
that affects us all. [Music]
Sam Altman studied computer science at
Stanford, but he dropped out to work on
his own business. It was called Loopt,
and it was a way of sharing your
location with friends using your phone.
However, since Sam started Loopt before
the iPhone and the App Store even
existed, this proved to be a challenge.
But Sam worked tirelessly on the
business, mostly living off instant
noodles and ice cream. In fact, he
worked so hard and his diet was so poor
that he actually developed scurvy.
Then in 2005, he joined the first ever Y
Combinator class, which is basically a
boot camp for startups.
And it was here that Sam attracted the
attention of Y Combinator's founder Paul
Graham. Paul had a lot of business
experience and became a mentor to Sam as
the two got on extremely well. However,
Paul also observed something curious
about Sam. He said, "Sam is extremely
good at becoming powerful. You could
parachute him into an island full of
cannibals and come back in 5 years and
he'd be the king." Interestingly, before
long, Sam ended up becoming president of
Y Combinator. This meant at just 30
years old, Sam Altman was now leading the
most prestigious startup accelerator,
which allowed him to build relationships
with many of Silicon Valley's most
influential entrepreneurs.
Sam became extremely well-connected and his
reputation in the tech world grew massively.
Sam soon began doing more public
speeches, and one of his favorite topics was artificial intelligence.
In 2015, Elon Musk was terrified.
He was concerned about the lack of
safety precautions around AI which he
felt posed an existential threat to
humanity.
>> I don't think most people understand
just how quickly machine intelligence is advancing.
>> I tried to convince people to slow down AI, to regulate AI.
This was futile. At this point in 2015,
Google was the undisputed leader in
artificial intelligence.
They'd been acquiring AI research labs
and had roughly 3/4 of the top AI talent
working for them. And yet, when Elon
spoke with Google's CEO at the time,
Larry Page, Larry didn't seem all that
worried about AI.
Elon asked him how he could be so sure
super intelligence wouldn't wipe out
humanity, but Larry dismissed it
completely, saying Elon was being way
too paranoid.
But Elon said that Google had a monopoly
on AI and that the person in charge
doesn't care about AI safety.
>> Just one company that has close to a monopoly
on AI talent and scaled computing, and the person who's
in charge doesn't seem to care about
safety. This is not good.
>> So Elon desperately felt he needed to
dilute Google's power and that would
lead him to a partnership with a man named Sam Altman.
In 2015, 10 influential people in the
tech world met for dinner. Elon Musk and
Sam Altman were both there. So was Greg
Brockman, who'd been influential in
growing Stripe, and also Ilya Sutskever, who was
one of the most respected researchers in
AI. At the dinner they all talked very
seriously about artificial intelligence
and its potential consequences and they
discussed how they could build an AI
company together to rival Google. Elon
said he would put forward a billion
dollars of funding. And so they figured
with Elon's investment money, Greg's
business operation experience, Ilia's AI
skills, and Sam to orchestrate
everything, they would have the dream
team. And thus, OpenAI was founded in
2015 as a nonprofit organization.
The reason was that they said having a
profit motivation with a technology like
this could be very dangerous and instead
it should be built for the good of the world.
>> What's the furthest thing from Google? It would be
a nonprofit that is fully open, because Google was closed
and for-profit. So that's why the 'open' in
OpenAI refers to open source. We don't
want this to be sort of a profit-maximizing
demon from hell.
>> In their own words, OpenAI's objective
was to build AI safely for the benefit
of humanity and they would share their
work openly with the public for free
instead of keeping it private for their
own gain. Hence the name OpenAI. However,
When OpenAI began in 2015, it did not
look like a world-changing company. They
didn't even have an office. It was just
a small group working from Greg
Brockman's apartment.
We're sitting essentially on a couch at
a kitchen counter and on a bed and
that's pretty much it. That's where the
work is getting done. It's kind
of crazy to think that, you know, that's
where something this big got started.
However, they had over a billion dollars
pledged from various investors, and this
funding meant they could attract top AI
researchers very quickly, and so they
began by just experimenting.
Unfortunately, they didn't really have a
clear strategy. They spent a lot of
their time building a bot that could
play the popular game Dota 2. They
figured if they could build an AI that
understood the complexity of the game
world, it could lead to an AI that
better understood our world. Then
another project they did involved trying
to build a robot butler. One of OpenAI's
early employees even admitted, "We were
just doing random stuff and seeing what
would happen." Now, it's worth noting
that at this point, neither Sam nor Elon
was around much. Sam was still running Y
Combinator, and Elon had his other
businesses. Instead, OpenAI was led by
Ilia, who was considered to be an AI
genius, and Greg, who was considered an
expert at managing business operations.
But what the whole team did have was a
shared vision of creating AGI,
artificial general intelligence.
There are different definitions of this,
but it often means artificial
intelligence that can match or surpass
human capabilities at most tasks.
But here's the key. AGI should be able
to acquire new skills it wasn't even
trained on. Which is why some
researchers say the first super
intelligent machine is the last
invention humanity will ever need to make. Because if we
can build a machine smarter than us,
it can then build even better machines,
beyond anything humans can even think of. So
inside OpenAI, the employees talked
about AGI as though they were building
God. Even in the very early days, the
OpenAI team were talking about how they
wanted to build something that could
completely change the world. Because AI
will solve all the problems that we have
today. It will solve employment. It will
solve disease.
It will solve poverty.
But it will also create new problems.
The problem of fake news is going to be
a million times worse.
Cyber attacks will become much more
extreme. We will have totally automated
AI weapons.
>> Many of the OpenAI team shared a
similar feeling that what they were
building had the potential to be the
greatest invention ever, but they also
said it could be the greatest threat to
the existence of humanity.
It's not that it's going to actively
hate humans and want to harm them, but
it's just going to be too powerful. And
I think a good analogy would be the way
humans treat animals.
It's not that we hate animals, but when
the time comes to build a highway
between two cities,
we are not asking the animals for permission.
A very simple example would be if we did
create super intelligence and asked it
to help Earth, there could be unintended
consequences like the AI deciding the
planet would be a lot safer without humans.
Sam Altman was saying similar things too.
On the one hand, he would publicly say
that with AI, we can cure all human
disease. We can build new realities. And
then he would also say if this
technology goes wrong, it can go quite
wrong. I think AI will most likely lead to the end of
the world, but in the meantime,
there will be great companies.
>> What's kind of funny is that they were
saying this stuff back when they were
making AI bots for video games. So, most
people didn't take them too seriously
back then. In fact, many felt the idea
of AI becoming powerful enough to
threaten humans was laughable, as at
this point, AI still felt quite primitive.
The cynical explanation was that the
OpenAI team were talking about AI saving
humanity or wiping out the human race
because it helped attract publicity and investors.
But it does seem most of them genuinely
believed what they were saying about the
potential power of this technology. And
they weren't the only ones. Vladimir
Putin infamously said, "Whoever becomes the
leader in this sphere will become the ruler of the world."
Back in 2015 when OpenAI began, the
reason most people weren't paying much
attention to AI was that the field of
artificial intelligence had seen decades
of slow progress. It had been dubbed the
AI winter: after a lot of initial
hype, there had been far fewer
breakthroughs than expected, and so
funding dried up and many researchers
moved on. But in 2015, the same year
OpenAI began, an AI program beat a
professional Go player for the first time.
Go is a complex strategy game. And
the AI would later go on to beat the
world's best human player. This was
exciting news, but it also illustrated
the problem with current AI technology:
that AI could only play Go and nothing
else. Which meant if you wanted the AI
to do any other task like write a story
or calculate an equation, you'd have to
build and train a whole new system for
that one task, which was extremely
time-consuming. This was largely because
the training data you fed the AI had to
be clearly labeled to explain what it was.
The TV show Silicon Valley parodied
this perfectly: if you trained an AI on
enough specific data, it could tell
if an image was a hot dog or not a hot
dog, but it had no concept of anything
else. Basically, machines could do one
thing well if trained, but it was very
narrowly focused. Whereas human brains
were special because they could do so
many different things, which was dubbed
general intelligence.
However, in 2017, a team of scientists
working at Google published a paper that
would change everything.
A small team of engineers at Google
published a paper called "Attention Is
All You Need," and they put forward a new
type of AI architecture known as the transformer.
Unlike previous AI systems that needed
to be fed highly specific data that was
all labeled clearly to explain what the
data was, like hot dog or not hot dog,
the transformer was different. It could
take in random, messy, unlabeled data
and essentially teach itself. And it
worked surprisingly well. What's
interesting is that it was engineers at
Google, not OpenAI, who made this
initial breakthrough with the
Transformer. But Google had become so
big that they were very slow and
cautious. So even though they developed
their own AI chatbot before OpenAI, they
didn't release it to the public. They
worried it could make outlandish
comments that hurt Google's reputation
and opened up legal and regulatory
risks. Most crucially, Google worried it
could hurt their search advertising
business, which funded everything Google did.
In hindsight, Google's decision to move
so slowly turned out to be a grave
mistake, as they left the door open for
OpenAI to capitalize on Google's
invention instead.
Ilya read this research paper about the
transformer and immediately saw its
potential. As a result, OpenAI became
one of the first companies to seriously
start experimenting with this
technology. That's where the famous GPT
acronym comes from: Generative Pre-trained Transformer.
The transformer could handle far more
data and process human language much
faster. Most importantly, it could
handle pretty much any query, meaning
these models were far more general.
So thanks to the transformer, OpenAI
suddenly started making huge progress.
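For the technically curious, here is a minimal sketch of the scaled dot-product attention operation from the "Attention Is All You Need" paper, the core mechanism inside a transformer. It's written in Python with NumPy purely for illustration; the shapes, names, and toy inputs are assumptions for the example, not OpenAI's actual code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Compare every token's query against every token's key,
    # then use the resulting weights to blend the values together.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the values

# Toy example: a sequence of 4 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

The key property is that every token can attend to every other token in a single step, which is part of what lets transformers absorb huge amounts of text efficiently.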
Now, you may wonder why the AI
researchers of the 20th century didn't
come up with this. Well, they were limited by a
lack of compute power and the lack of
the internet.
You see, even though this wasn't the
internet's purpose, the internet had
become the perfect training data for an
AI because basically every book and
article had become digitized and humans
create endless amounts of content. And
so basically everything humans had ever
written online could now be fed into
these AI models as training data for
neural networks.
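To make that concrete, here is a rough sketch of why raw, unlabeled internet text works as training data: the text supplies its own labels, because the task is simply predicting the next token. This is a toy illustration with naive word-splitting; real systems use subword tokenizers and feed the resulting pairs into a neural network.

```python
# Raw, unlabeled text: no human annotation needed
raw_text = "every book and article had become digitized"
tokens = raw_text.split()  # naive word-level "tokenizer", purely illustrative

# Each training example pairs a context window with the token that follows it
context_size = 3
examples = [
    (tokens[i:i + context_size], tokens[i + context_size])
    for i in range(len(tokens) - context_size)
]

for context, next_token in examples:
    print(context, "->", next_token)
# ['every', 'book', 'and'] -> article
# ['book', 'and', 'article'] -> had
# ...
```

Scale that idea up to most of the written internet and you get the training signal behind GPT-style models.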
These models have been described as a black box, as we
don't fully know how they do what they
do. We just add input and receive output.
But all of this meant that instead of
the old AI models that had been trained
to do one specific task, these new AI
transformer models became extremely
broad and general. They'd basically been
trained on all text available, so you
could ask them anything. However, this
method of just scraping the internet as
training data raises obvious copyright
concerns. But the Silicon Valley ethos
has always been to ask forgiveness, not permission.
OpenAI knew if they started asking big
companies if they could scrape all their
data, of course, there'd be lots of pushback
and discussions about royalties.
So, OpenAI just went and did it. And it
worked so well that by the time OpenAI
got to their GPT-2 language model,
they started to become worried.
They feared that what they were building
was so powerful that if they freely
shared the open-source code for it, like
they'd promised all along, it could
become very dangerous in the wrong hands.
So, OpenAI announced: "Due to our concerns
about malicious applications of the
technology, we are not releasing the
trained GPT-2 model." This actually helped
generate a huge amount of publicity and
hype for them. For example, Wired
magazine published an article called "The
AI Text Generator That's Too Dangerous
to Make Public." OpenAI also didn't
disclose what data sets had been used to
train it. It started to become clear
that OpenAI maybe didn't want to be so open after all.
On this channel, I cover all kinds of
entrepreneur stories, but have you ever
thought about starting your own
business? If so, today's sponsor, Busy,
offers a free LLC formation service. You
just pay the state fees and they'll get
your business incorporated and handle
all the paperwork. When I started my
business, I remember it was quite
overwhelming, but Busy makes the process
so much simpler. For example, Bizzy can
take care of the ongoing filing
requirements with your state, which is a
huge timesaver. And Bizzy can even
provide you with a professional address
with digital mail scanning, giving you
access to your mail from anywhere,
anytime. Basically, Bizzy gives
entrepreneurs the tools to start and
manage their business, which means you
can focus on making money. With over 20
years experience, they've already helped
over a million entrepreneurs, and I've
personally found them great to work
with. So, if you want to start your own
company, I honestly think Busy makes it
so much easier. Just use my link in the
description to get started today. [Music]
In 2018, Elon announced he was resigning
from OpenAI's board of directors.
Publicly, they said this was due to a
conflict of interest, as he was CEO of
Tesla, which was developing its own AI.
But the truth was very different. Elon
had wanted to take over OpenAI and
become the CEO. He also proposed OpenAI
becoming part of Tesla, but the board
had refused and that's why he was now
leaving. Unfortunately for OpenAI, this
meant he was also taking his investment
money with him. He had pledged $1
billion in total, but it's believed less
than a hundred million had been paid so
far, and now OpenAI wouldn't be getting
the rest. This left OpenAI with a huge
funding problem. And so many employees
were extremely worried about what this
meant for the future of the business.
And this is where OpenAI made a very
controversial decision. In order to
increase their ability to raise more
capital and attract investors, OpenAI
decided to switch from being a nonprofit
to a for-profit business instead. They
also announced they'd be licensing their
technology for commercial use. Now, it's
important to note that there's obviously
nothing inherently wrong with being a
company that makes a profit. And Sam
argued this was necessary to raise more
investment. Plus, they said the profit
investors could make was capped at 100
times the investment they made. However,
many felt this was a complete betrayal
of the whole reason they'd started the
company. Not just that, but OpenAI soon
formed a partnership with Microsoft, who
agreed to invest a billion dollars.
Microsoft had lots of raw computing
power, which OpenAI needed. So, the deal
made sense for both sides. But OpenAI's
mission had been to provide an
alternative to big tech. And now they
were going to help one of the world's
most powerful tech companies become more powerful.
So many would argue OpenAI had
completely backtracked on its mission
of democratizing AI. Remember, the name
OpenAI was chosen because it was meant to be
open source and freely owned by the
world. And yet, as soon as they had an
actually powerful product, they didn't
want to be open. They instead would
become increasingly secretive. Elon
would later sue the company, demanding
they change their name from OpenAI to
ClosedAI. After transitioning to a
for-profit company, they also now needed
an official CEO, and it was Sam Altman
who got the role. Suddenly, Paul
Graham's comments about Sam being good
at getting into positions of power seemed rather prophetic.
In 2020, OpenAI unveiled GPT-3, a
language model trained on massive
internet data sets. But the real
groundbreaking moment was on the 30th of
November, 2022, when OpenAI publicly
released its chatbot, ChatGPT.
At this point, most of the general
public had never heard of OpenAI, so
there was no big fanfare. It started
with Sam making a simple tweet saying,
"Today we launched Chat GPT. Try talking
with it here." What's interesting is
that OpenAI's leadership said this was
just a low-key research preview. And so,
expectations were low. Employees took
bets on how many users they'd get in the
first week, and the highest guess was
100,000 users. In reality, they were
completely wrong. ChatGPT went viral
and rapidly captured the attention of
the world. In just 2 months, it became
the fastest app to reach 100 million
users. For context, it had taken
Facebook 4 and 1/2 years to hit the same milestone.
This unexpected growth was of course
very exciting for OpenAI. But what's
interesting is that internally at the
company, there had been some employees
uncomfortable releasing ChatGPT so
quickly, and some of the safety team
weren't even aware it was going to be
released. They argued that they didn't
know how it might be misused by the
public. There were obvious risks like
hackers using it to find vulnerabilities
in code or people using it to help them
commit crimes, but there was no way of
knowing quite what would happen when the
general public started interacting with
ChatGPT. Plus, the team were aware it
made a lot of factual errors which were
dubbed hallucinations.
Still, despite concerns, ChatGPT went
live and the response from the public
was extremely positive. Everyone had
tried chat bots before, but they always
felt extremely robotic. If you didn't
ask your question in the right way, they
were useless as they were basically
giving pre-programmed answers. But
ChatGPT felt much more knowledgeable and
conversational. Within a couple of
months of ChatGPT being released,
Microsoft increased its ownership stake
in OpenAI with a new $10 billion investment.
ChatGPT's release also caused the AI
race to really ramp up. Investors began
throwing more money at AI projects than
ever before, and the big tech companies
all scrambled to release their own AI
models to compete. OpenAI seemed to be
leading though. They continued to
release new products and became even
more commercialized with paid plans and
selling their underlying technology to businesses.
However, at OpenAI, a divide was growing
between those focused on product versus
those focused on safety.
OpenAI's star engineer Ilya Sutskever seemed to
grow concerned and began working more
closely with the company's safety team.
Then a group of nine current and former
OpenAI employees accused the company of
prioritizing profits over safety and
said OpenAI used restrictive agreements
to silence safety concerns. One of their
key safety researchers quit for the same
reason. Meanwhile, several of OpenAI's
lead developers left to start a rival
company called Anthropic with the goal
to build a safer AI alternative.
But these internal conflicts didn't seem
to slow OpenAI down. Not only was
ChatGPT taking the world by storm, but
OpenAI's progress with DALL-E, their image
generator, and Sora, their video
generator, was also incredible.
In fact, Sam's vision of what AI could
achieve only seemed to get bigger. He
was publicly talking about how what
they're building would create a world of
abundance and could help end poverty and
disease. At a major gathering of world
leaders, Sam said, "I think this will be
the most transformative and beneficial
technology humanity has yet invented."
He went on to say how there is nothing
else he would rather be working on. But
then came one of the most shocking
twists in recent business history. Less
than 24 hours after giving that speech,
Sam Altman got a text from Ilya asking
him to join a video call. Sam wasn't
sure what it was about and so he was
very surprised to find that the OpenAI
board was on the call except for his
friend Greg Brockman. The board told Sam
they were firing him. The call ended
shortly after and Sam was locked out of
his OpenAI computer.
It's hard to overstate what a shock this
was to everyone, including Sam. He was
the face of OpenAI, one of the most
exciting companies in the world. And now
he was being kicked out. It was Friday,
November 17th, 2023, when Sam was told
the news, and the OpenAI board put out a
statement saying they had removed
co-founder Sam Altman as CEO.
The board's statement explained that Sam
was "not consistently candid in his
communications with the board," basically
implying Sam had lied to them and they
couldn't trust him. The board then said
they no longer had confidence in Altman's
ability to continue leading OpenAI.
It was all kind of vague. So, pretty
much everyone was asking what really
happened here. And there were many
different theories about why he'd been
fired. The board's main accusation was
that Sam had a habit of outright lying
to them, and so they often felt they
couldn't trust what he said. For
example, Sam said things had been
approved by their internal safety board,
which actually hadn't been approved.
It's also believed Sam had fallen out
with one of the board members, Helen
Toner, after she published a paper where
she basically suggested their
competitor, Anthropic, was safer than
ChatGPT. Apparently, Sam had taken
issue with that and had tried to get
Helen kicked off the board. He
reportedly told the other board members
that everyone else agreed Helen should
be fired, even though they hadn't said
that. The board members began to feel
that he was playing them against each other.
Another theory about why the OpenAI
board turned on Sam is that they had
become annoyed that Sam seemed to be
building a whole empire of different
projects outside of OpenAI.
The board felt this was a major
distraction. For example, Sam had been
trying to raise tens of billions of
dollars from the Middle East to fund an
AI chipmaking business. He'd also
started a project called Worldcoin, a
crypto-based network that would give
everyone worldwide a unique digital
identity by scanning their eyes. Sam
believed that AI might put huge numbers of
people out of work, and thus they'd need
a way to distribute a universal basic
income. But some people think the board
was concerned about how Sam wanted to
use OpenAI's technology for all his own
separate projects.
However, there's also another very
prominent theory about why the board
really turned on Sam, and it's the most
concerning one. Some believe that the
reason Ilya turned on Sam was because he
saw something internally that made him worried.
Ilya had been heavily involved in safety
at OpenAI. And for him to suddenly flip
on his co-founder was concerning. It
became a meme on social media, with
people asking, "What did Ilya see?"
Still, no matter the exact reasons the
board had, the fact was that on Friday,
November 17th, Sam was kicked out of
OpenAI, and it was announced they'd
begin looking for a new CEO.
But by Saturday, something very
unexpected started happening. OpenAI
employees began revolting against the
board. It began when co-founder Greg
Brockman resigned from the company in
solidarity with Sam. But over the
weekend, more and more OpenAI employees
came forward in support of Sam. A letter
was drafted by OpenAI staff saying they
disagreed with the board's decision to
fire Sam, and they complained the board
hadn't given adequate explanation of
why. A petition was then created by
employees to say that they would leave
the company if Sam wasn't brought back.
And nearly all of the company's 800
employees signed it to say they didn't
want to work there unless Sam and Greg
were brought back.
Social media became flooded with posts
from employees saying OpenAI is nothing
without its people. Sam was replying to
each one individually with heart emojis.
Now, it's worth noting that part of this
support probably wasn't just out of
loyalty. You see, right before this coup
happened, OpenAI had been planning a
share sale for employees, which would
mean a big cash payout for staff. But
all this chaos going on would have
probably destroyed the chances of that
happening and massively hurt OpenAI's valuation.
So keeping Sam in place was probably in
the employees' best financial interests.
But either way, the OpenAI board
basically had a mass mutiny on its
hands. They had never expected the staff
to be so loyal to Sam. And to make
matters worse for them, Microsoft
announced Sam and Greg would join them
instead and that they would hire any
OpenAI employees who wanted to leave. At
first, the board still tried to press on
with their plan and began lining up a
new CEO to replace Sam. But then on
Monday morning came the final dagger.
Ilya, one of the four board members who
had initially pushed Sam out, changed
his mind and signed a petition saying he
wanted Sam to stay.
Ilia tweeted, "I deeply regret my
participation in the board's actions. I
never intended to harm OpenAI. I will do
everything I can to reunite the company."
company."
Now, it's unclear if he genuinely felt
that way or he had just realized that
all the other employees were clearly
siding with Sam, but Ilya siding with
Sam was the point at which the remaining
board members realized they had lost the
battle. If you come for the king, you
best not miss. And the board had missed.
By Monday, negotiations were ongoing to
bring Sam back. And all of a sudden, the
board had lost all of its leverage, and
Sam was the one in the position of
power. The dynamics had totally flipped,
and Sam now held all the cards. By
Tuesday, a deal was reached. The board
members who turned on Sam would be
kicked off the board, and new board
members would be brought in. Sam would
return as CEO, and Greg would come back,
too. Greg posted saying, "We are so
back." And they held a companywide party
at the office. It truly was a party
atmosphere. What's really crazy is that
all of this happened in just 5 days. On
Friday, Sam was ambushed and thrown out.
By Tuesday, he was back as CEO, his
enemies on the board all gone, and the
entire company had declared loyalty to
him. Not just that, Sam would be
involved in choosing the new board
members, meaning he was in a stronger
position than ever before. And the odds
of anyone challenging him again now
seemed extremely low. As for Ilya, he
lost his board seat and ended up leaving
OpenAI completely just 6 months later.
However, what's really fascinating about
all this is that before the board tried
to overthrow Sam, he had repeatedly
talked about how it was only fair that
the board should be able to fire him and
hold him accountable. "The board can fire
me. I think that's important."
>> Sam had repeated this a lot. And yet,
when the board members did try to fire him, it played out very differently.
Hey, how's it going?
>> Hey there, it's going great. How about
you? I see you're rocking an OpenAI
hoodie. Nice choice.
>> The introduction of ChatGPT's voice
mode marked another step towards science fiction.
You could now have a very realistic
sounding conversation with an AI.
People immediately saw the parallels
with the 2013 film called Her about a
man falling in love with an AI. The
concept sounded kind of absurd at the
time, but now it's easier than ever to
see how it could happen. People are
already using ChatGPT as a friend, an
assistant, or even a therapist. So, it's
inevitable some people will use it as a
partner that they can program however
they want with the specific
characteristics they desire.
Interestingly, even though the film Her
was quite dystopian, it seems Sam may
have been inspired by it. He
specifically reached out to Scarlett
Johansson, who did the AI voice in the
movie, to see if she wanted to be a
voice for ChatGPT. She declined, but
OpenAI went ahead and made a voice that
sounded so much like her anyway that her own
friends and family thought it was her,
leading her to take legal action against
the company. Sam denied the voice was based
on Scarlett's voice in the movie Her, but it
didn't help his case that he literally
tweeted the word "her" right after the feature was released.
Still, despite all the controversy
around OpenAI, there's no denying that
what they're building is exciting,
especially given how quickly the
technology is progressing. And that's
what makes the future of AI so hard to predict.
A few years ago, it was widely believed
that AI would come for manual jobs first
and creative jobs would be the last to
go, as surely machines can't be
creative. Turns out though, it's the
creative industries AI is disrupting
first. Likewise, a few years ago, it was
hard to imagine anyone disrupting
Google's dominance in search. But now,
many people ask their questions to
ChatGPT instead.
The reality is every industry is likely
to be disrupted by AI in some way. Many
will use it as a tool to help them. Many
will get replaced.
And if you want an example of how fast
AI is progressing, here's the real
twist. This entire video was created by AI. [Music]
Nah, I'm just kidding. None of this was
AI. For the record, not a single line in
any Magnates Media video has ever been
written by AI. These videos take a long
time to make because I research and
write the scripts, record the voice
over, and then we spend hundreds of
hours editing them. But the fact you
can't always tell on YouTube anymore is
kind of crazy.
It was already hard to trust what you
see online. But as it becomes easier
than ever to create images, videos, and
audio of people saying and doing
whatever you want, it'll be harder than
ever to trust news or even trust that
who you're speaking with is human.
Social media is already filled with AI accounts.
What's perhaps most interesting, though,
is how AI will mix with other
technology. As tech like virtual reality
improves, we'll probably get to a point
where people essentially spend more time
in a virtual world than the real world.
Kind of like Ready Player One. Imagine
being able to enter an ultra realistic
world indistinguishable from reality,
but you can go wherever you want, be
whoever you want, and do whatever you
want. It's not hard to see how someone
who's unhappy with their real life would
want to switch to a virtual world
powered by AI that can generate whatever
they desire. And it'll feel real.
Unless, of course, we're already inside a simulation.
So, where are we up to? As of 2025,
OpenAI had closed a new funding round of
$40 billion at a $300 billion valuation,
making it the largest private tech
funding round in history. Meanwhile,
Elon Musk and Sam Altman, who started as
co-founders, are now competitors. They
continue to publicly argue, throwing
insults back and forth.
However, in January 2025, all the
American tech companies got shaken by
the arrival of DeepSeek, a Chinese AI
competitor. Hundreds of billions of
dollars were wiped off the market caps
of US AI stocks as it became clear China
was a serious player in the AI race.
OpenAI actually accused DeepSeek of
stealing its intellectual property,
which a lot of people mocked since you
could argue OpenAI essentially stole
that intellectual property in the first
place by scraping the internet. But
either way, what is clear is that the AI
race is heating up. And whilst nobody
knows for sure how it will play out,
it's fair to say the results will impact
us all.
Now, OpenAI's latest funding round was
led by SoftBank. And if you want to see
what happened last time SoftBank pumped
tens of billions into a company, click
here to watch the story of the $47
billion cult. Trust me, it's a crazy story.