0:02 Cyber security is next. That's the
0:04 subtext behind every article, every
0:07 chart, every AI hype headline making the
0:09 rounds right now. Entry-level roles
0:13 gone. Analysts automated. Entire teams
0:15 replaced by models that don't sleep,
0:17 don't blink, and never burn out. But
0:19 here's the problem. Almost none of those
0:21 claims are rooted in actual deployment
0:23 data, team structures, or labor market
0:25 shifts. In this episode, I'll dissect
0:27 those claims, every single one of them,
0:29 every layoff, every stat, every quote
0:31 against actual data, market shifts, and
0:33 seven years of AI deployments on cyber
0:35 security teams. And at the end, we're
0:37 going to look at our AI signal card,
0:38 showing which roles are really being
0:40 automated, which ones are evolving, and
0:42 where the capital is flowing. Let's dive
0:45 in. Case in point: Goldman Sachs' 2023 report on generative AI predicts 300 million full-time jobs being put at risk by 2030, with cyber security cited as vulnerable due to automation.
0:56 CrowdStrike, a well-known American cyber security company based in Austin, Texas, announced a 5% job cut, saying that AI is reshaping every industry. Cyber security threads on Reddit are full of comments like these. Here's one of the best examples: a Reddit user took the time to conduct an experiment using Claude Code. He analyzed a WordPress plugin for vulnerabilities. The author notes that AI currently struggles with certain nuanced aspects (for example, it generated a near-perfect exploit but tripped on page loads), but the gap is closing fast. A series of online articles from sources dedicated to cyber security agree that GenAI will reduce the skill gap in the field, citing sources like Gartner, which theorizes that about 50% of entry-level cyber security positions will be eliminated.
1:38 I'll test these claims against independent data sources, field reports, and actual deployment data to give you a detailed, data-backed analysis of which of them hold true and which ones are clearly exaggerated. Predictions about AI in cyber security are generally framed around three areas. End-to-end automation: AI is promoted as being able to triage, investigate, and remediate incidents with little or no human oversight. Superior pattern recognition: the claim rests on the belief that machine learning can outperform humans at detecting subtle or emerging threats. And reduction of error and fatigue: given that AI does not experience burnout or lapses in concentration, it is often said to be able to replace analysts who are susceptible to such issues. Now,
2:22 let's talk about what AI is truly
2:24 capable of doing. AI systems can
2:26 continuously analyze vast streams of
2:28 logs, network data, and user activities
2:30 to flag anomalies and potential threats.
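As a toy illustration of that log-anomaly idea (my own minimal sketch, not any vendor's actual implementation; the function name and z-score threshold are assumptions), a baseline-and-deviation check might look like this in Python:

```python
import statistics

def flag_anomalies(event_counts, threshold=2.0):
    """Flag hours whose event volume deviates sharply from the baseline.

    event_counts: per-hour event counts pulled from a log stream.
    Returns the indices whose z-score exceeds `threshold`.
    Real platforms use far richer models; this only shows the core idea.
    """
    mean = statistics.fmean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:  # perfectly flat traffic: nothing stands out
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]
```

Calling `flag_anomalies([10, 12, 11, 9, 10, 250, 11, 10])` flags index 5, the hour with the 250-event spike.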
2:33 Machine learning models can classify and
2:34 prioritize incidents based on risk
2:36 context and potential business impact.
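To make that risk-based prioritization concrete, here's a minimal sketch; the field names and weights are my own illustrative assumptions, not any real product's scoring model:

```python
# Severity weights are arbitrary illustrative values.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert):
    """Score an alert by severity, asset criticality, and threat-intel match."""
    score = SEVERITY_WEIGHT.get(alert.get("severity", "low"), 1)
    score *= alert.get("asset_criticality", 1)  # 1 (lab box) .. 5 (crown jewels)
    if alert.get("matches_threat_intel"):       # known-bad indicator observed
        score *= 2
    return score

def prioritize(alerts):
    """Return alerts sorted highest-risk first."""
    return sorted(alerts, key=triage_score, reverse=True)
```

Real SOAR/SIEM products score on far more signals, but the shape is the same: business context multiplies raw severity.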
2:39 As AI models learn and adapt, they can
2:40 decrease the volume of unnecessary
2:42 alerts, lowering analyst fatigue and
2:45 missed signals. AI tools can connect
2:47 disparate data points across endpoints,
2:49 networks, emails, and cloud services,
2:51 uncovering relationships and attack
2:53 paths that would take humans much longer
2:56 to recognize. Advanced systems do offer security analysts richer context upfront through threat intelligence.
3:02 Once incidents are validated, AI systems
3:04 can execute predefined containment or
3:07 mitigation steps such as isolating
3:08 endpoints, disabling compromised
3:11 accounts, and blocking IPs, with little to no human intervention. After an incident, AI can generate detailed documentation of which response actions were taken and lessons learned. Now let's look at how many companies have actually
3:24 integrated AI into cyber security on
3:26 their teams and whether it's actually
3:27 working. For this section, I will be
3:30 getting my data from a very recent study
3:32 conducted by a US-based nonprofit
3:35 organization called ISC2, the International Information System Security Certification Consortium, described as the world's
3:41 largest IT security organization. This
3:43 is a very strong and credible report that has been widely cited by news sources, just in time for this video. The
3:49 study is based on insights from 436
3:51 US-based cyber security professionals
3:54 working at organizations of all sizes.
3:56 So, enterprise organizations with a staff size of over 10,000 employees lead the adoption of AI in cyber security, with 37% actively using AI platforms. Mid-to-large companies (2,500 to 10,000 employees) and smaller companies (100 to 2,500) follow, each with 33% adoption. The smallest
4:15 organizations happen to be the most
4:18 conservative with 23% reporting no plans
4:21 to evaluate AI security tools. Now, what is AI being used for on cyber sec teams?
4:26 AI is being used the most in network monitoring and intrusion detection. This covers log- and data-heavy functions where AI performs repetitive and time-intensive work and produces fast responses and reaction times for detection. Next come endpoint protection and response, vulnerability management, and threat modeling; all of these tasks involve analyzing large data sets and monitoring real-time network information. And lastly, security testing, which is a very time-consuming task for cyber security personnel; AI speeds up testing and ensures that it's being done correctly.
4:56 If you map what companies claim to be using AI for to what AI can actually do, you will see that these things match, and AI is truly being used for a lot of things in cyber security. And you may go: okay, so what you said before isn't really hype after all, and cyber security is indeed being replaced. Hold
5:14 up, let's talk about historic trends. AI
5:16 began being widely integrated into cyber
5:18 security teams at tech companies
5:21 starting in 2018, with rapid acceleration and widespread integration occurring in the early 2020s. The first two years were marked by machine learning tools for threat detection and automated responses like blocking suspicious activity, isolating affected endpoints, and behavioral analytics. Between 2020 and
5:39 2022, AI evolved to real-time analytics, analyzing massive volumes of data in real time and allowing cyber sec teams to scale incident triage and response. AI systems also improved their ability to predict attacks. From 2023 until
5:52 present, AI went through widespread
5:54 adoption and autonomous security.
5:55 Platforms like Darktrace and CrowdStrike, for example, now produce fully autonomous responses. GenAI is being used by both defenders and attackers for smarter deepfakes, phishing, and LLM poisoning, which creates
6:07 the need for rapid threat modeling and
6:09 simulation of attack scenarios. All of
6:11 this is to say that AI and cyber sec is
6:13 not new. It's been making its way for
6:16 the past 7 years, and this is before ChatGPT, before Perplexity, before AI-first
6:20 everything. So, of all tech
6:22 specializations we're reviewing in this
6:24 series, cyber sec is a really good
6:26 example because it didn't start 2 years
6:28 ago. AI and cyber security has been used
6:30 for a long time. Okay, so AI and cyber
6:32 sec has been around for a while. So
6:33 teams must have been shrinking for the
6:36 past 7 years, right? Let's see. Looking
6:37 at the recent layoff data, AI
6:39 integration in cyber security operations at major tech companies in the US is said to directly contribute to layoffs in entry-level and repetitive operational roles. But what does this
6:49 really mean? Microsoft cut 3% of its global workforce in May and July 2025. Cyber sec numbers are not disclosed, but
6:57 internal reporting and external analysis
6:59 confirm that security operations and
7:01 manual monitoring roles are among those
7:04 affected as AI based security scales up.
7:06 At Amazon, at least hundreds of jobs were eliminated within AWS, including security operations units, in July 2025. Again, specific numbers for cyber sec were not disclosed. Meta laid off about 5% of its workforce in 2024, including SOC and trust-and-safety teams. Laid-off roles
7:22 were the ones handling routine incident
7:25 and policy workflows. Data among smaller companies or non-FAANG enterprises is similar: between 5 and 20% layoffs at the company level, with specific cyber security numbers not cited, but the affected roles include technical writers on security teams and manual reporting and monitoring roles. What's interesting is that offshoring or nearshoring is not nearly as pronounced compared to other tech roles such as customer-facing support, QA, or software engineering. So while AI has been integrated into cyber sec teams, and the layoffs have indeed affected cyber sec roles, in all fairness they've been affected just as much as all other roles across the tech industry, and the most affected roles are in manual monitoring, routine incident handling, and basic vulnerability management. All of this is routine and repetitive work that AI can objectively do better. Now let's see how the team
8:12 composition has changed over the years.
8:14 Here's a typical cyber security team
8:16 composition at a mid-size, tech-native company in the US as of 2018.
8:22 In 2020, we're seeing roles such as
8:24 cloud security engineer, especially in
8:26 software as a service, infrastructure as
8:27 a service, and platform as a service.
8:30 This role emerged as a core role on the
8:31 team due to the explosion of cloud
8:34 adoption and remote work. GRC (governance, risk, and compliance) expanded its workload as data privacy regulations such as GDPR and CCPA became more prominent.
8:41 Usually the teams range from six to 18
8:43 people of dedicated staff depending on
8:44 the company's size, pace of cloud
8:46 adoption and industry regulations. In
8:49 2023, the delineation between security
8:51 architect and security engineer became
8:53 much more pronounced. Prior to 2023, those were often merged into one role. SOC analyst levels one and two merged into a single role. The chief information security officer was introduced as a C-suite role. The security engineer's scope of responsibilities expanded beyond the network. Teams began blending centralized security functions with embedded specialists, for example, embedding security analysts into product or cloud squads. As a PM, I can attest to this. I
9:18 was a platform PM in 2023 and my
9:20 collaboration with security teams became
9:22 much closer. This was the first year when I truly felt that push toward the shift-left concept. In case you haven't heard of shift left, it can apply to numerous things really, but the core idea is that you start thinking about security (in this case) early on, before you release product updates.
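As one concrete shift-left example (my own minimal sketch; real scanners use much larger rule sets), a pre-release check that fails the build when obvious hardcoded secrets appear in source could look like:

```python
import re

# Illustrative patterns only; production scanners ship hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_for_secrets(source_text):
    """Return (line_number, matched_text) pairs for likely hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            match = pattern.search(line)
            if match:
                findings.append((lineno, match.group(0)))
    return findings
```

Wire a check like this into CI so a commit containing `password = "..."` fails before the code ever reaches a release, which is the whole point of shifting security left.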
9:38 Automation and AI entered alert triage and incident workflows, with analysts increasingly reviewing and tuning automated findings. Pentesting and threat intelligence teams took on proactive testing to stay ahead of evolving threats. Typical team size is very
9:51 similar, 7 to 20 people of dedicated
9:54 security staff. In 2025, most routine event
9:56 detection, alert triage, reporting, and
9:58 vulnerability scanning are handled by AI
10:01 platforms. The typical team size is 5 to
10:04 12 dedicated security folks on the team.
10:05 Security roles are embedded within
10:08 product and IT teams to ensure security
10:10 is addressed in all deployments. Shift-left security is a standard. AI risk and adversarial defense are major priorities, prompting new specialized roles and ongoing upskilling. All team members are expected to maintain high fluency in AI security management and cloud-native defense. So
10:26 as you can see, despite AI being integrated into cyber sec for quite a few years now, the function isn't gone, isn't automated, and isn't replaced. Lastly, let's go through the cyber sec trends for
10:37 the next 5 years. Cyber security is widely cited as the field that will experience one of the highest talent shortages in tech over the next 5
10:46 years. Multiple publications cite huge numbers: 67% of companies experience a skills gap; a worldwide shortage of over 4 million cyber security specialists; 70% of companies attribute increased cyber risk to the skills gap. So wherever you look,
11:01 you will see that cyber security is the
11:03 job of the AI era. We will talk about
11:05 how the requirements are changing for
11:06 the junior specialists. But for the love
11:09 of God, please stop panicking. If there
11:11 is anything that's not dying, it's cyber
11:14 sec. The human factor. Look, I know
11:16 everybody's freaking out about AI taking
11:18 over our jobs and all the doom and gloom
11:20 headlines, but honestly, that's not
11:21 what's happening in cyber security right
11:23 now. The reality is much more
11:25 interesting. As AI gets smarter, the
11:26 cyber criminals are getting smarter,
11:28 too. And we're seeing a ton of attack
11:30 vectors that did not exist before. And
11:32 we haven't even started scratching the
11:34 surface of how AI will be used as the
11:36 time goes on. We're still in the early
11:38 stages of AI regulation. The US does not
11:40 even have proper federal laws governing
11:43 it yet. But when those regulations do
11:45 hit, it's going to be a lot of work for
11:47 cyber security teams. I learned this the
11:49 hard way. This was years back before the
11:51 AI boom. I was working on a fintech
11:53 product and that product operated in
11:56 Europe. And GDPR literally drove me
11:59 insane. Every month, every quarter,
12:01 there were new rules, new regulations,
12:02 and new changes, and we'd have to audit
12:05 our entire product all over again to
12:07 stay compliant. And that wasn't even an
12:09 AI product. And attack surfaces are exploding. By 2027, almost half of chief security officers are going to have to expand way beyond traditional cyber security because of regulatory pressure and growing attack surfaces. So
12:23 instead of AI killing cyber security
12:25 jobs, it's actually making the field
12:28 more complex and essential than it's
12:31 ever been. How to stay afloat? Here is a scorecard for cyber security in the age of AI. Risk of automation: entry-level and routine roles, 8 to 9; mid-level and specialized roles, 4 to 5; and advanced roles that require context, creativity, reasoning, and industry expertise, 1 at most. Now, I would like to address the
12:48 point around junior specialists being out of demand. No, no, no, no, no, no, no, no. Junior specialists whose skill set stayed in 2020 are out of demand. That's true. But junior roles
12:58 aren't going anywhere. They're just not
13:01 the same roles you saw 5 years ago.
13:02 Which skills will be needed for junior specialists? AI-native SOC analysts, working with AI platforms to enhance security information and event management. AI threat intelligence analysts, focused on helping train and validate AI models by managing large data sets of threat indicators. Automation and security orchestration roles, supporting the development and maintenance of security automation scripts. AI governance and compliance associates, most likely an entry-level role, ensuring AI systems used in security operate in alignment with ethical and compliance expectations. Security testing assistants, testing the robustness of AI-driven security tools, including evaluating their response to adversarial inputs. And cloud security support analysts, working with AI-enhanced cloud security monitoring tools to ensure the safety, availability, and defense of key cloud services and data repositories. AI just bulldozed the busy
13:53 work, but it did not cancel cyber
13:55 security. It leveled it up. The only gigs that are getting axed are click-here-to-triage roles. But the jobs that ask you to outsmart an LLM-powered attack, or turn brand-new attacks into bulletproof controls, just became mission-critical. So stop doom scrolling, start model testing, and own the space where GPT meets GDPR. Let me know what
14:16 you guys think in the comments. As
14:18 always, I hope this was helpful. Till