YouTube Transcript:
Why 80% of Cybersecurity is Safe from AI | USA, Canada, UK & EU
Cyber security is next. That's the
subtext behind every article, every
chart, every AI hype headline making the
rounds right now. Entry-level roles
gone. Analysts automated. Entire teams
replaced by models that don't sleep,
don't blink, and never burn out. But
here's the problem. Almost none of those
claims are rooted in actual deployment
data, team structures, or labor market
shifts. In this episode, I'll dissect
those claims, every single one of them,
every layoff, every stat, every quote
against actual data, market shifts, and
seven years of AI deployments on cyber
security teams. And at the end, we're
going to look at our AI signal card,
showing which roles are really being
automated, which ones are evolving, and
where the capital is flowing. Let's dive
in. Case in point: Goldman Sachs' 2023 report on generative AI predicts 300 million full-time jobs being put at risk by 2030, with cyber security cited as vulnerable due to automation.
CrowdStrike, a well-known American cyber security company based in Austin, Texas, announced a 5% job cut, saying that AI is reshaping every industry. Cyber security threads on Reddit are full of comments like these. Here's one of the best examples. A user on Reddit took the time to conduct an experiment using Claude Code. He analyzed a WordPress plug-in for vulnerabilities. The author notes that AI currently struggles with certain nuanced aspects, for example, generating working exploit payloads, but the gap is closing fast. A series of online
articles written by sources dedicated to cyber security agree that GenAI will reduce the skill gap in the field, citing sources like Gartner, who theorize that about 50% of entry-level cyber security positions will be eliminated.
I'll test these claims against
independent data sources, field reports,
and actual data to give you a detailed
data-backed analysis on which of them
hold true and which ones are clearly
exaggerating. The claims, when it comes to cyber security predictions, are generally framed around three areas. First, end-to-end automation, where AI is promoted as being able to triage, investigate, and remediate incidents with little or no human oversight. Second, superior pattern recognition: the claim rests on the belief that machine learning can outperform humans at detecting subtle or emerging threats. Third, reduction of error and fatigue: given that AI does not experience burnout or lapses in concentration, it is often said to be able to replace analysts who are susceptible to such issues. Now,
let's talk about what AI is truly
capable of doing. AI systems can
continuously analyze vast streams of
logs, network data, and user activities
to flag anomalies and potential threats.
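Anomaly flagging of this kind can be sketched in a few lines. The following is a toy baseline-deviation check over per-minute event counts, invented for illustration; real platforms use far richer models than a rolling z-score:

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, window=5, threshold=3.0):
    """Flag time buckets whose event count deviates sharply from the
    recent baseline (a toy stand-in for ML-based anomaly detection)."""
    anomalies = []
    for i in range(window, len(event_counts)):
        baseline = event_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:  # flat baseline, nothing to compare against
            continue
        z = (event_counts[i] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append(i)
    return anomalies

# Normal traffic around ~100 events per minute, with one burst.
counts = [98, 102, 101, 99, 100, 103, 97, 500, 101, 99]
print(flag_anomalies(counts))  # the burst at index 7 is flagged: [7]
```

The point is only the shape of the task: a machine can watch every bucket of every log stream at once, where a human samples.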
Machine learning models can classify and
prioritize incidents based on risk
context and potential business impact.
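As a rough illustration of that risk-based prioritization, here is a hypothetical scoring function; the 1-to-5 scales, the weighting, and the field names are all invented for the example, not any product's formula:

```python
def priority_score(severity, asset_criticality, confidence):
    """Toy triage score on a 0-100 scale.
    severity and asset_criticality run 1 (low) to 5 (critical);
    confidence is the model's 0-1 belief the alert is a true positive."""
    return round(severity * asset_criticality / 25 * confidence * 100)

alerts = [
    {"id": "A1", "severity": 5, "asset_criticality": 5, "confidence": 0.9},
    {"id": "A2", "severity": 3, "asset_criticality": 2, "confidence": 0.8},
    {"id": "A3", "severity": 4, "asset_criticality": 4, "confidence": 0.35},
]
# Highest-risk alerts surface first in the analyst queue.
queue = sorted(alerts, key=lambda a: -priority_score(
    a["severity"], a["asset_criticality"], a["confidence"]))
print([a["id"] for a in queue])  # ['A1', 'A3', 'A2']
```

A severe alert on a critical asset jumps the queue even when a model is only moderately confident, which is exactly the business-impact weighting described above.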
As AI models learn and adapt, they can
decrease the volume of unnecessary
alerts, lowering analyst fatigue and
missed signals. AI tools can connect
disperate data points across endpoints,
networks, emails, and cloud services,
uncovering relationships and attack
paths that would take humans much longer
to recognize. Advanced systems do offer
security analysts richer contexts
upfront through thread intelligence.
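Connecting disparate data points like this boils down to grouping events by a shared indicator. The event schema and the IP/user fields below are invented for illustration; real correlation engines normalize far richer telemetry:

```python
from collections import defaultdict

# Hypothetical events from different telemetry sources.
events = [
    {"source": "email",    "user": "dana", "indicator": "203.0.113.9"},
    {"source": "endpoint", "user": "dana", "indicator": "203.0.113.9"},
    {"source": "network",  "user": "sam",  "indicator": "198.51.100.4"},
    {"source": "cloud",    "user": "dana", "indicator": "203.0.113.9"},
]

def correlate(events):
    """Group events that share an indicator, sketching how scattered
    signals across email, endpoints, and cloud become one incident."""
    clusters = defaultdict(list)
    for e in events:
        clusters[e["indicator"]].append(e["source"])
    # Only indicators seen in more than one source form a cluster.
    return {k: v for k, v in clusters.items() if len(v) > 1}

print(correlate(events))  # {'203.0.113.9': ['email', 'endpoint', 'cloud']}
```

Three events that look routine in isolation become one attack path once the shared indicator ties them together.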
Once incidents are validated, AI systems
can execute predefined containment or
mitigation steps such as isolating
endpoints, disabling compromised
accounts, blocking IPs with little to no
human intervention. AI can generate detailed documentation after the incident: which response actions were taken and the lessons learned. Now, let's look at
how many companies have actually
integrated AI into cyber security on
their teams and whether it's actually
working. For this section, I will be
getting my data from a very recent study
conducted by a US-based nonprofit
organization called ISC2, the International Information System Security Certification Consortium, described as the world's largest IT security organization. This is a very strong and credible report that has been widely cited by news sources, just in time for this video. The
study is based on insights from 436
US-based cyber security professionals
working at organizations of all sizes.
So, enterprise organizations with a staff size of over 10,000 employees lead the adoption of AI in cyber security, with 37% actively using AI platforms. Mid-to-large companies with between 2,500 and 10,000 employees and smaller companies with between 100 and 2,500 follow, each with 33% adoption. The smallest organizations happen to be the most conservative, with 23% reporting no plans to evaluate AI security tools. Now, what is AI being used for on cyber security teams?
AI is being used the most in network monitoring and intrusion detection. This covers log- and data-heavy functions where AI performs repetitive and time-intensive work and produces fast detection and reaction times. Next come endpoint protection and response, vulnerability management, and threat modeling. All of these tasks involve analyzing large data sets and monitoring real-time network information. And lastly, security testing, which is a very time-consuming task for cyber security personnel. AI speeds up testing and helps ensure that it's being done correctly.
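The predefined containment steps mentioned earlier, isolating endpoints, disabling accounts, blocking IPs, can be sketched as a tiny SOAR-style playbook. Every function name here is illustrative, not any vendor's API:

```python
# Toy response actions; real ones would call an EDR or IAM API.
def isolate_endpoint(host):
    return f"isolated {host}"

def disable_account(user):
    return f"disabled {user}"

def block_ip(ip):
    return f"blocked {ip}"

# Map a validated incident type to its predefined containment step.
PLAYBOOK = {
    "compromised_host":    lambda i: isolate_endpoint(i["host"]),
    "compromised_account": lambda i: disable_account(i["user"]),
    "malicious_ip":        lambda i: block_ip(i["ip"]),
}

def contain(incident):
    """Run the playbook action for a known incident type,
    or fall back to a human when no rule matches."""
    action = PLAYBOOK.get(incident["type"])
    return action(incident) if action else "escalate to analyst"

print(contain({"type": "malicious_ip", "ip": "203.0.113.9"}))  # blocked 203.0.113.9
print(contain({"type": "novel_threat"}))  # escalate to analyst
```

Note the fallback: anything outside the predefined rules still lands on a human, which is the pattern the adoption data below keeps showing.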
If you map what companies claim to be using AI for against what AI can actually do, you will see that these things match, and AI is truly being used for a lot of things in cyber security. And you may go, okay, so what you said before isn't really hype after all, and cyber security is indeed being replaced. Hold
up, let's talk about historic trends. AI began being widely integrated into cyber security teams at tech companies starting in 2018, with rapid acceleration and widespread integration occurring in the early 2020s. The first two years were marked by machine learning tools for threat detection and automated responses like blocking suspicious activity, isolating affected endpoints, and behavioral analytics. Between 2020 and 2022, AI evolved to real-time analytics, analyzing massive volumes of data in real time and allowing cyber security teams to scale incident triage and response. AI systems also improved their ability to predict attacks. From 2023 until the present, AI went through widespread adoption and autonomous security. Platforms like Darktrace and CrowdStrike, for example, now produce fully autonomous responses. GenAI is being used by both defenders and attackers for smarter deepfakes, phishing, and LLM poisoning, which creates the need for rapid threat modeling and simulation of attack scenarios. All
this is to say that AI and cyber sec is
not new. It's been making its way for
the past 7 years and this is before Chad
GBT, before Perplexity, before AI first
everything. So, of all tech
specializations we're reviewing in this
series, cyber sec is a really good
example because it didn't start 2 years
ago. AI and cyber security has been used
for a long time. Okay, so AI and cyber
sec has been around for a while. So
teams must have been shrinking for the
past 70 years, right? Let's see. Looking
at the recent layoff data, AI integration in cyber security operations at major tech companies in the US has directly contributed to layoffs in entry-level and repetitive operational roles. But what does this really mean? Microsoft cut 3% of its global workforce in May and July 2025. Cyber security numbers are not disclosed, but internal reporting and external analysis confirm that security operations and manual monitoring roles are among those affected as AI-based security scales up. At Amazon, at least hundreds of jobs were eliminated within AWS, including security operations units, in July 2025. Again, specific numbers for cyber security were not disclosed. Meta laid off about 5% of its workforce in 2024, including SOC and trust and safety teams. The laid-off roles were the ones handling routine incident and policy workflows. Data among smaller companies and non-FAANG enterprises is similar: between 5 and 20% layoffs at the company level, with specific cyber security numbers not cited, but the affected roles include technical writers on security teams and manual reporting and monitoring roles. What's interesting is that offshoring or nearshoring is not nearly as pronounced compared to other tech roles such as customer-facing support, QA, or software engineering. So
while AI has been integrated into cyber security teams and the layoffs have indeed affected cyber security roles, in all fairness, they've been affected just as much as all other roles across the tech industry, and the most affected roles are in manual monitoring, routine incident handling, and basic vulnerability management. All of this is routine and repetitive work that AI can objectively do better. Now, let's see how the team composition has changed over the years. Here's a typical cyber security team composition at a midsize tech-native company in the US as of 2018.
In 2020, we're seeing roles such as
cloud security engineer, especially in
software as a service, infrastructure as
a service, and platform as a service.
This role emerged as a core role on the
team due to the explosion of cloud
adoption and remote work. GRC (governance, risk, and compliance) expanded its workload as data privacy regulations such as GDPR and CCPA became more prominent. Usually the teams range from six to 18 dedicated staff, depending on the company's size, pace of cloud adoption, and industry regulations. In 2023, the delineation between security architect and security engineer became much more pronounced. Prior to 2023, those were often merged into one role.
SOC analyst levels one and two merged into one. The chief information security officer was introduced as a C-suite role. The security engineer's scope of responsibilities expanded outside of the network. Teams began blending centralized security functions with embedded specialists, for example, embedding security analysts into product or cloud squads. As a PM, I can attest to this. I
was a platform PM in 2023 and my
collaboration with security teams became
much closer. This was the first year
when I truly felt that push to shift
left concept. In case you haven't heard about the shift left concept, it can apply to numerous things, really, but the core concept is that you start thinking about, in this case, security early on, before you release product updates. Automation and AI entered alert triage and incident workflows, with analysts increasingly reviewing and tuning automated findings, and pentesting and threat intelligence teams handling proactive testing to stay ahead of evolving threats. The typical team size is very similar, 7 to 20 dedicated security staff. In 2025, most routine event detection, alert triage, reporting, and vulnerability scanning is handled by AI platforms. The typical team size is 5 to 12 dedicated security folks on the team.
Security roles are embedded within
product and IT teams to ensure security
is addressed in all deployments. Shift
left security is a standard. AI risk and adversarial defense are major priorities, prompting new specialized roles and ongoing upskilling. All team members are expected to maintain high fluency in AI security management and cloud-native defense. So, as you can see, despite AI being integrated into cyber security for quite a few years now, the function isn't gone, isn't automated, and isn't replaced. Lastly, let's go through the cyber security trends for the next 5 years. Cyber security is
widely cited as the field that will experience one of the highest talent shortages in the tech industry in the next 5
years. Multiple publications cite huge numbers: 67% of companies experience a skills gap; a worldwide shortage of over 4 million cyber security specialists; 70% of companies attribute increased cyber risk to the skills gap. So wherever you look,
you will see that cyber security is the
job of the AI era. We will talk about
how the requirements are changing for
the junior specialists. But for the love
of God, please stop panicking. If there
is anything that's not dying, it's cyber
sec. The human factor. Look, I know
everybody's freaking out about AI taking
over our jobs and all the doom and gloom
headlines, but honestly, that's not
what's happening in cyber security right
now. The reality is much more
interesting. As AI gets smarter, the
cyber criminals are getting smarter,
too. And we're seeing a ton of attack
vectors that did not exist before. And
we haven't even started scratching the
surface of how AI will be used as the
time goes on. We're still in the early
stages of AI regulation. The US does not
even have proper federal laws governing
it yet. But when those regulations do
hit, it's going to be a lot of work for
cyber security teams. I learned this the
hard way. This was years back before the
AI boom. I was working on a fintech
product and that product operated in
Europe. And GDPR literally drove me
insane. Every month, every quarter,
there were new rules, new regulations,
and new changes, and we'd have to audit
our entire product all over again to
stay compliant. And that wasn't even an
AI product. And attack surfaces are exploding. By 2027, almost half of chief security officers are going to have to expand way beyond traditional cyber security because regulatory pressure and attack surfaces keep growing. So
instead of AI killing cyber security
jobs, it's actually making the field
more complex and essential than it's
ever been. How to stay afloat? Here is a scorecard for cyber security in the age of AI. Risk of automation: entry-level and routine roles, 8 to 9; mid-level and specialized roles, 4 to 5; and advanced roles that require context, creativity, reasoning, and industry expertise, 1 at most. Now, I would like to address the point around junior specialists being out of demand. No, no, no, no, no, no, no, no. Junior specialists whose skill set has remained where it was in 2020 are out of demand. That's true. But junior roles aren't going anywhere. They're just not the same roles you saw 5 years ago.
Which skills will be needed for junior specialists? AI-native SOC analyst: working with AI platforms to enhance security information and event management. AI threat intelligence analyst: focused on helping train and validate AI models by managing large data sets of threat indicators. Automation and security orchestration: supporting the development and maintenance of security automation scripts. AI governance and compliance associate: most likely an entry-level role, ensuring AI systems used in security are operating in alignment with ethical and compliance expectations. Security testing assistant: testing the robustness of AI-driven security tools, including evaluating their response to adversarial inputs. Cloud security support analyst: working with AI-enhanced cloud security monitoring tools to ensure the safety, availability, and defense of key cloud services and data repositories. AI just bulldozed the busy
work, but it did not cancel cyber
security. It leveled it up. The only gigs that are getting axed are click-here-to-triage roles. But the jobs that ask you to outsmart an LLM-powered attack or turn brand-new attacks into bulletproof controls just became mission-critical. So stop doom scrolling, start model testing, and own the space where GPT meets GDPR. Let me know what
you guys think in the comments. As
always, I hope this was helpful. Till