Nvidia's Consumer Electronics Show keynote focused heavily on enterprise-level AI infrastructure and "physical AI" advancements, largely neglecting consumer-facing news, particularly in gaming, which was relegated to separate announcements.
Nvidia had a lot to announce in its 93-minute Consumer Electronics Show keynote.
>> We would like to have this AI stay with us our entire life and remember every single conversation we've ever had with it. Right? People ask, where is the money coming from?
>> Yes,
>> this last year was incredible. This last year, there's a slide coming. This is what happens when you don't practice.
>> It's the first keynote of the year. I hope it's your first keynote of the year. Otherwise you have been pretty, pretty busy.
>> ...basic way of building. These models are also world class.
>> This never happens in Santa Clara. I think my system's still down. But that's okay. I'll make it up as I go.
Brought to you by the most powerful tech company in the world and the company in charge of the global economy. So, it's not the strongest start, but you know, Jensen, AI is a little bit scary for a lot of people. So, give us something cute, something that helps us drop our guard, but maybe, if you think about it a little too much, it's still terrifying. You got anything like that?
>> With Brev, I can share access to my Spark and Reachy. So, I'm going to share...
>> Hey Reachy, what's Potato up to?
>> He's on the couch.
>> I remember you don't like this.
>> I'll tell him to get off.
>> Hasta la vista, baby.
>> That's right. Nvidia saw Dogtober and...
>> This is... This is... Wow. It's super heavy. You have to be a CEO in really good... Okay. All right. So, this thing is... I'm going to guess this is probably... I don't...
>> Have you guys heard this one?
>> You laugh that way because you know you're wealthy. Is that...
>> I'm very wealthy, and then that's...
>> That's odd. I... That's how wealthy people laugh.
>> I wouldn't have guessed. Come on. It could have been.
>> Do another one. Do another one. Tell another joke.
>> It's usually about two tons, but today it's two and a half tons because, um, when they shipped it, they forgot to drain the water out of it. So, we shipped a lot of water from California.
It's funny, because there's a localized water crisis across parts of the United States where data centers are taking all the water, and Nvidia took a thousand pounds of water, put it in a box, and shipped it from California to a desert to put on stage. It's also happening at a time when the concentration of nitrates is increasing in some areas because of data centers' water usage, and when sediment buildup is leaving some people with water they can't drink anymore.
>> Can you hear it squealing? Oh, you could do it. Wow.
>> Nvidia did, though, have other important and good news. In case you were wondering, Nvidia is still accelerating everything Palantir does.
>> Uh, Palantir, for example, um, their entire AI and data processing platform is being integrated, accelerated by Nvidia today.
>> That's right. What is it Palantir does again? "The data mining firm Palantir faces backlash over its partnership with ICE. The government agency paid the company $800 million last year."
>> Our product is used on occasion to kill people.
>> Ah, good. Killing people. The good news is that Jensen Huang doesn't need people where he's going.
>> That's your future. We're going to give... Yeah, you're going to be born inside these, inside these platforms. Pretty amazing, right?
>> Before that, this video is brought to you by Thermaltake and their V600 TG. The V600 TG comes in multiple different color options, like their Future Dusk colorway, a classic white or black, yellow, and a light blue and red combination. Thermaltake's V600 TG uses an angular glass front design with heavily ventilated side intake and bottom fan mount options, plus large holes and ventilation at the back for more flow and pressure customization. The case makes for a good showcase PC with colors difficult to find elsewhere, while also providing adequate cooling options through the perforations around the case. It's also compatible with some back-connect motherboard form factors. Learn more at the link in the description below.
Nvidia's Consumer Electronics Show keynote had zero consumer news in it. And actually, it had almost zero news in it at all. There was a good amount of news; they just didn't put it there. And maybe that's because, as Nvidia CEO Jensen Huang has said, the keynotes serve a different purpose.
>> When when we give a keynote, everybody's stock price goes up.
>> Nvidia's stock didn't move much during the keynote, but Jensen did give the audience a slathering of corpo slop speak for 93 minutes as he moved around on the stage to prove to shareholders that he's healthy enough to continue being CEO.
>> You have to be a CEO in really good shape to do this job.
But it was 93 minutes for a good reason.
>> We have about 15 keynotes worth of material to pack in here.
>> Please, please, God, no. Please don't do that. Nvidia's keynote covered a medley of the company's greatest hits, like AI, self-driving cars, robots, AI, autonomous vehicles and machines, AI, and of course, money.
>> What that means is some $10 trillion or so of the last decade of computing is now being modernized to this new way of doing computing. A hundred trillion dollars of industry, several percent of which is R&D budget, is shifting over to artificial intelligence. Almost 800-plus billion dollars. Hundreds of billions of dollars. Well, it turns out we have billions of dollars of supercomputers in operation. These are billions of dollars. Let's say a gigawatt data center is $50 billion. There's a couple hundred billion dollars in VC funding.
>> Other than money, the keynote's few newsworthy details included Vera Rubin: details on the company's upcoming platform that includes, it says, six new chips within one, as it calls it, "AI supercomputer." They also had gaming news, but we'll come back to that, kind of like how they never did. Jensen spoke at length about how Nvidia co-designed the six new chips, the Rubin GPU, Vera CPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch, and mentioned how these would lower AI costs and improve performance. Jensen Huang even invited his robot friends from last year to the stage as well.
>> Hey guys, hurry up.
>> He then spent the ensuing 20 to 30 minutes doing the talking-to-an-empty-chair routine, where he weirdly addressed the robots instead of the audience. He was still using the second-person "you," but he would look over at the robots, and the audience seemed very confused about what to do. But maybe he's just preparing for when he replaces the entire audience with AI.
There's no GeForce gamer in the room.
>> Jensen and the Nvidia pet robots unveiled robots on parade, a sure shot to make investors drop money, as they showed robots ranging from surgical use cases to construction robots from Caterpillar. Although they didn't show any of the killing drones that, uh, the friends of the military-industrial complex might use. So that's right: once again at the Consumer Electronics Show, Nvidia has covered everything except for consumer electronics. And this time, even in its B2B and enterprise discussion, it didn't really talk about much news at all. Instead, Nvidia highlighted how Nvidia technologies would make AI operate in the physical world, which Nvidia called "physical AI." Nvidia showed actual and simulated demonstrations with its robots and autonomous vehicles working in a physical environment. The company also introduced Alpamayo, a family of open reasoning models for autonomous vehicle development. Nvidia's press release called it, quote, "part of the sweeping push to bring AI into every domain," end quote. As is Nvidia tradition, of course, it also celebrated its partnerships with all the B2B and enterprise companies out there. For example, uh, Palantir. Palantir. Palantir.
>> Nvidia, could you just give us something, please? Anything, like a drop of water in a desert. We need anything.
>> And although we're not announcing any new GPUs today...
>> Okay. All right. So, there aren't any new GPUs. It's fine. We didn't really expect any. It's not like they have the memory supply to ship one right now anyway, because they've got to suck it all into the new systems they have that pull somewhere around two terabytes per server. So, in this video with Jacob Freeman, formerly of EVGA, he's talking about how Nvidia actually did have consumer gaming news; not a single piece of it was in the Consumer Electronics Show keynote. And actually, weirdly, the gaming side had more news than the non-gaming side, in terms of news meaning stuff that hasn't been said before. So, for some reason, none of the stuff from the video that Jacob's in made it into the keynote at the Consumer Electronics Show. Like, literally none of it. It almost seems like Nvidia thinks talking about gaming reduces the seriousness of their company to the investors they pander to now. We just wonder what Jensen Huang thinks about the PC gamers who built his company to where it is now.
>> Like I said, nobody's as cute as you guys are.
>> Around one hour into the keynote, Nvidia finally shared its first actual news about literally anything. This was on its upcoming Vera Rubin solution. So that's right: at the Consumer Electronics Show, if you have your own $50 billion data center, you too can be a consumer of a Vera Rubin solution. Gaze upon the thing siphoning away all of the consumer hardware allocation. We will give them credit for building what looks to be a fairly modular design. Jensen Huang illustrated on stage how the prior solution had, he says, 43 cables and six tubes for cooling while still relying on air for some components, stating that the new Vera Rubin boxes, which are the new architecture following Grace Blackwell, instead move to zero cables and two tubes for water in and out. Looking at this render of the server, Nvidia shows blade-style connectors for power at the front edge of the two primary boards shown at the back, which slide into the server solution with liquid cooling tubes integrating more completely with the water cooling blocks used on top of the CPU and GPU parts. Huang also claimed that it's faster to deploy, so data centers have one less obstacle between them and more feckless expansion.
>> It takes two hours to assemble this. If you're lucky, it takes two hours. And of course, you're probably going to assemble it wrong. You're going to have to retest it, test it, reassemble it. So, the assembly process is incredibly complicated, and it was understandable as one of our first supercomputers that's deconstructed in this way. This goes from two hours to five minutes.
>> Huang says that the new server solution is 100% liquid cooled, up from a stated 80% on the prior model. He also claimed, and we're not sure how much of this was a joke versus wasn't, since it actually was unclear, that there is half a ton of water in a full rack of these Vera Rubin servers.
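For scale on that claim, here's a quick unit conversion; we're assuming US short tons, since the unit wasn't specified:

```python
# Rough unit conversion for the "half a ton of water per rack" claim.
# Assumption: US short tons (2,000 lb), since Nvidia didn't specify the unit.
SHORT_TON_LB = 2000
LB_PER_KG = 2.20462
L_PER_GAL = 3.78541

water_lb = 0.5 * SHORT_TON_LB             # half a ton -> 1,000 lb
water_kg = water_lb / LB_PER_KG           # ~454 kg
water_liters = water_kg                   # ~454 L (1 kg of water is ~1 L)
water_gallons = water_liters / L_PER_GAL  # ~120 US gallons

print(f"{water_lb:.0f} lb ≈ {water_kg:.0f} kg ≈ {water_gallons:.0f} US gallons per rack")
```

That lines up with the "thousand pounds of water in a box" framing from earlier in the video.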
The Vera part of Vera Rubin is the CPU, which Nvidia says will have 88 cores, branded "Olympus," 176 threads, a 1.8 terabyte-per-second NVLink-C2C connection, a 1.5 TB capacity of LPDDR5X system memory with 1.2 terabytes per second of bandwidth, and will be a 227 billion transistor solution for the Vera CPU alone. The Rubin component is the GPU part, with a stated, quote, "up to 288 GB of HBM4," end quote, per GPU, with multiple GPUs per configuration possible. Nvidia states a bandwidth of up to 22 terabytes per second. Nvidia also makes a bunch of claims about NVFP4 inference and training performance, stating 5x and 3.5x Blackwell for each, with the 22 terabyte-per-second HBM4 bandwidth getting a major line item here. Currently, the memory suppliers have shifted towards more HBM manufacturing to keep up with data center GPUs like this. HBM is expensive and costs more wafer area when factoring in things like yield losses, contributing more to the memory crisis. Really, it's totally tone-deaf: announcing things with terabytes upon terabytes of memory in them, for like millions of dollars, to a bunch of consumers who can't afford things and can't get anything with memory in it. It's kind of like announcing that you're the only guy with water in a place that has a huge drought.
>> So, we we shipped a lot of water...
>> Can you hear it squealing?
>> Okay, look. In case you were wondering who cares, it's the tech billionaires and leaders of other trillionaire and soon-to-be-trillionaire companies, all lining up with their hand out to Jensen. OpenAI CEO, and the guy who just recruited the Grinch to steal Christmas in Whoville, Sam Altman, had this to say, quote: "The NVIDIA Rubin platform helps us keep scaling this progress so advanced intelligence benefits everyone." End quote. Elon Musk said, quote: "Green heart emoji, confetti emoji, rocket ship emoji, robot emoji. Nvidia Rubin will be a rocket engine for AI. Rubin will remind the world that Nvidia is the gold standard. Uh, see prior set of same emojis." End quote.
>> I say the stupidest things that cannot
possibly be true.
>> So, the executive pickaxe trade
continues. And once again, it's your
data that they're mining.
>> There's so many companies that would like to build. They're sitting on gold mines. Gold mine. Gold mine. It's a gold mine. Gold mine. Gold mine. It does this repeatedly, token after token after token. And obviously, if you have a long conversation with that AI over time, that memory, that context memory, is going to grow tremendously. Not to mention the models are growing, and the number of turns that we're using the AI are increasing.
>> Even Satya Nadella, CEO of Microsoft and a guy who was recently excited that they have warehouses full of GPUs they can't plug into data centers because they don't have the grid capacity yet, was excited for Vera Rubin, presumably so he can add it to his Smaug-like mountain of hoarded silicon components.
Nvidia also talked about its new ConnectX-9 Spectrum-X SuperNIC, which is one X short of requiring a VPN to read about. The NIC is advertised as running 800 Gbit-per-second Ethernet. The company wrote on its blog, quote: "In the Vera Rubin NVL72 rack-scale architecture, each compute tray contains four ConnectX-9 SuperNIC boards, delivering 1.6 terabits per second of network bandwidth per Rubin GPU. This ensures GPUs can participate fully in expert dispatch, collective operations, and synchronization without becoming bottlenecked at the network edge." End quote. It further notes that ConnectX-9 has these security capabilities, quote: "data-in-transit encryption acceleration for IPsec and Platform Security Protocol (PSP) to secure GPU-to-GPU communications, data-at-rest encryption acceleration to secure storage platforms, secure boot, firmware authentication, and device attestation." This is part of what Nvidia dubs its Spectrum-X Ethernet scale-out architecture, which is, quote, "using BlueField-4 DPUs for handling networking, storage, security, and control services," end quote, across Rubin. Rubin consists of NVLink, the CPU, the GPU, BlueField DPUs for network, storage, and security needs, the SuperNIC, and the Ethernet switch. The end result is what Nvidia refers to as six new chips combined into its solution. That's a lot of supporting silicon to enable the GPU and the CPU to run inference and training tasks. Using all of these extra silicon components in the server means that the GPU and CPU can remain dedicated entirely to so-called AI processing workloads.
For specs, Nvidia published a table comparing Vera to Grace for the CPU. Nvidia shows a move to 2 MB of L2 cache per core; 162 MB of unified L3, up from 114 MB; an increase to 1.2 terabytes per second of memory bandwidth from 512 GB per second; a move to 1.5 terabytes of LPDDR5X capacity from a previous maximum of 480 gigabytes; faster NVLink solutions; and a move to PCIe Gen 6 and CXL 3.1 from PCIe Gen 5 previously. This table references what they call spatial multi-threading for Vera, which we weren't familiar with before. Nvidia defines this as, quote, "a new type of multi-threading that runs two hardware threads per core by physically partitioning resources instead of time slicing, enabling a runtime trade-off between performance and efficiency. This approach increases throughput and virtual CPU density while maintaining predictable performance and strong isolation, a critical requirement for multi-tenant AI factories." End quote.
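To illustrate the distinction Nvidia is drawing, here is our own toy model, not Nvidia's implementation: with time slicing, two threads compete for the whole core, so a noisy neighbor squeezes your throughput; with spatial partitioning, each thread permanently owns half the core's resources, so its throughput stays fixed regardless of the co-tenant. A minimal sketch, with hypothetical numbers:

```python
# Toy model only: contrasts time-sliced SMT with spatially partitioned SMT.
# Numbers are hypothetical; this is not Nvidia's actual scheduling logic.
CORE_UNITS = 8  # pretend execution resources per core

def time_sliced_throughput(thread_demand, neighbor_demand):
    # Both threads compete for the full core; a demanding neighbor
    # shrinks this thread's share, so throughput varies with co-tenancy.
    total = thread_demand + neighbor_demand
    return CORE_UNITS * (thread_demand / total)

def spatially_partitioned_throughput(thread_demand, _neighbor_demand):
    # Each thread owns a fixed half of the core, so throughput is
    # predictable and isolated from the neighbor's behavior.
    return min(thread_demand, CORE_UNITS / 2)

for neighbor in (1, 4, 8):  # neighbor goes from nearly idle to maxed out
    print(f"neighbor demand {neighbor}: "
          f"time-sliced ~{time_sliced_throughput(4, neighbor):.1f} units, "
          f"partitioned ~{spatially_partitioned_throughput(4, neighbor):.1f} units")
```

The partitioned case always lands on the same number, which is the "predictable performance and strong isolation" part of Nvidia's claim.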
And just to get ahead of the marketing that they keep putting in here: "AI factories" means data centers. That is what that is. They're trying to brand it as a factory to make it seem like some kind of approachable, blue-collar thing, presumably so they can go ask the government for more money and better regulations or something. But it is a data center. That's what that means.
means. For memory, Nvidia notes that
it's using SOCAM or small outline
compression attached memory modules.
These look something like this pictured
in a serve the home article from
previously. This means that the memory
isn't BGA soldered to the board and is
instead attached to a stick. It's just
so cam with pins instead of a DDR5 style
stick. NVIDIA mounts the LPDDR5X to
these types of SOCAM sticks for the
server allowing modularity for capacity
per configuration while also giving some
level of replaceability if a chip on a
stick happens to go bad. Now, we
Now, we frequently see comments where people ask if there's any potential secondhand market in the future for all these data center server components. I think generally people are wondering: okay, once all these things get retired in like three to five years or less, does it end up on the consumer market, where I can just buy sticks of DDR5 for pennies on the dollar because they're dumping hundreds of terabytes of memory onto the market? And the answer for things like this is no, because this has no use in current desktop-type standard computers. It may have a secondhand use for other types of data centers, or startups, or something, but there's effectively zero use for any of this in consumer. There may also be a use in, say, China, where in Shenzhen you might find a guy at a shop who will desolder all the LPDDR5X from abandoned modules, although they seem worth more as modules, and then put it onto something else. Maybe there's a use case there. Generally speaking, though, no, this is not something that you can just stick into an X870 motherboard or something.
Nvidia's block diagram shows the layout of Vera Rubin. The CPU connection to LPDDR5X sits at the top, going out to the SOCAMM sticks at up to 1.5 terabytes of capacity. The CPU connects to PCIe Gen 6, depicted on both sides of the block, with NVLink-C2C at 1.8 terabytes per second going to the two Rubin GPUs attached to the one CPU. These are depicted with 288 GB of HBM4 each. We covered this in our video entitled "Nvidia, What The?", where we talk about how HBM actually requires more wafer area and allocation than a like-for-like capacity of something like normal DRAM, DDR5, or VRAM, if you want to take that comparison. The reason for this is a combination of yield losses, where HBM has more yield losses because you've got a more complex set of things vertically stacked, and if anything goes bad in there, you might have to throw out the entire chip, so you're throwing away a lot more silicon, plus other factors such as requiring separate IO or control silicon and interposer silicon, although the interposers would come from TSMC, not from the memory fabs. But we talk about that in our separate "Nvidia WTF" piece. So anyway, the point being: at 288 GB of HBM per GPU listed as a max capacity, that is more than just 288 GB of memory per GPU offset from the consumer market. It would not be a one-to-one loss for consumer; it would be greater than that.
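To make that reasoning concrete, here's a rough back-of-envelope sketch. Every number below (per-die yield, stack height, base-die overhead) is a hypothetical placeholder we picked for illustration, not a figure from Nvidia or the memory vendors; the point is only that stacking compounds yield losses and adds non-DRAM silicon, so the wafer area consumed per usable gigabyte of HBM exceeds that of a flat DDR5-style die.

```python
# Back-of-envelope sketch: why a GB of HBM can consume more wafer area
# than a GB of conventional DRAM. All numbers are hypothetical.

def effective_area_per_good_gb(die_area_mm2, gb_per_die, die_yield,
                               dies_per_stack=1, overhead_area_mm2=0.0):
    """Wafer area spent per usable GB, after yield and stacking overhead."""
    # Assumes independent defects, so a stack is only good if every die is good.
    stack_yield = die_yield ** dies_per_stack
    stack_area = die_area_mm2 * dies_per_stack + overhead_area_mm2
    stack_capacity = gb_per_die * dies_per_stack
    return stack_area / (stack_capacity * stack_yield)

# Hypothetical flat DDR5-style die: sold individually, no stacking penalty.
ddr5 = effective_area_per_good_gb(die_area_mm2=70, gb_per_die=2, die_yield=0.90)

# Hypothetical HBM stack: 12 DRAM dies plus a base/logic die's worth of area.
hbm = effective_area_per_good_gb(die_area_mm2=70, gb_per_die=2, die_yield=0.90,
                                 dies_per_stack=12, overhead_area_mm2=70)

print(f"DDR5-style: ~{ddr5:.1f} mm^2 of wafer per good GB")
print(f"HBM-style:  ~{hbm:.1f} mm^2 of wafer per good GB "
      f"({hbm / ddr5:.1f}x, with these made-up numbers)")
```

Real production involves binning and repair, so the straight compounding here overstates the losses; the sketch is only meant to show the direction of the effect described above.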
Nvidia also noted that Rubin is a 336 billion transistor chip for the full die solution, with two compute dies connected centrally via fabric rather than using a single monolithic chip. Just like with Blackwell, Nvidia continues to leverage Arm for multiple parts of its systems, one example being BlueField-4 running 64 Arm Neoverse V2 cores. The BlueField-4 component alone has a listed memory capacity of 128 GB, so even beyond the CPU and the GPU, these additional processors that ship with it are consuming huge amounts of memory. The previous generation was listed at 32 GB.
listed at 32 GB. As for the Ethernet
Spectrum 6 switch, Nvidia notes a 102.4
terab per second solution at 512 by 200
Gbit per second ports. Nvidia
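As a quick sanity check on those figures (our arithmetic, not Nvidia's):

```python
# Sanity check: 512 ports at 200 Gb/s each should total 102.4 Tb/s.
ports = 512
gbps_per_port = 200
total_tbps = ports * gbps_per_port / 1000  # Gb/s -> Tb/s
print(f"{ports} x {gbps_per_port} Gb/s = {total_tbps} Tb/s")  # 102.4 Tb/s
```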
Nvidia illustrates performance with what it calls a, quote, "expert dispatch benchmark," end quote, where completion time is measured in milliseconds. The NVLink switch tray is its own entire chassis, depicted here with NVLink 6 switches, spine connectors, and a claimed 3.6 terabyte-per-second-per-GPU all-to-all solution, also fully liquid cooled and increasing water demands. Nvidia spent some time towards the bottom of its article talking about power consumption. Seeing as data centers are currently in the process of a ruinous takeover of the grid in the United States, this seems like something they would want to talk about. Nvidia notes that it's attempting to improve efficiency, but of course, obviously, and they don't say this, it is still ultimately just ramping power consumption beyond what we actually have capacity for. The company showed power smoothing and GPU power draw in this chart, measured in megawatts, which really tells you everything you need to know.
Now, you might be wondering what all this means for Nvidia. Maybe you're not, but just let me get there. Nvidia has published this helpful image showing the circle of what all this means. There's just one thing wrong with it, a small error they made. One second. Let me just... there we go. That's more accurate: the circle of profit. With all of that out of the way, we got some actual news in there. Look, I tried. Doug threw out a lot of stuff they wrote because it wasn't in the keynote, and I can tell you that because I was falling asleep watching the keynote. But with all that out of the way, let's go to the gaming news. So, they announced a half-step iteration on DLSS with DLSS version 4.5, which introduces a new transformer model, an update to the one that we tested last year. They're also introducing a new dynamic multi-frame generation that now goes up to 6x, from 4x previously. There's some other stuff in here too, like some RTX Remix news.
Nvidia started its presentation by providing some y-axis-devoid charts that report growth in the PC gaming segment. The first graph is of units sold, the values of which are unknown, from 2019 to 2024, referencing Steam, Gartner, and Nvidia first-party results. The company reports a 14% decline in PC adoption over the period versus a 51% claimed increase in gaming PC adoption. As for what constitutes a gaming PC, Nvidia says it's anything with a discrete GPU installed in it. This would also mean that office workstation PCs, work-from-home PCs bought during COVID, and even local AI processing desktops with a dGPU would be classified as gaming PCs. Since this chart crosses the COVID explosion in the PC market from 2020 through 2022, the growth in dGPU-equipped machines is unsurprising, and it's masking whatever happened in the last year or so. What it doesn't show is 2025; that data is probably not fully in yet, to be fair, but the projections that are out now from some of the analyst firms in the industry are forecasting a reduction in PC purchasing in general for 2026, which is also not a surprise. Nvidia also noted that its percentage of the total Steam install base has increased for Blackwell versus prior generations. The x-axis is months after launch, and the y-axis is... who knows? But you could calculate this one manually if you cared.
We'll start with the DLSS 4.5 changes. Nvidia's DLSS 4.5 introduces two major changes: an updated transformer upscaler model and dynamic multi-frame generation up to 6x, now with a semi-adaptive target frame rate solution, meaning frame generation would be reduced in scenarios where you're closer to your target frame rate and increased in scenarios where you're further from it. DLSS 4.5's second-generation transformer model, which is the new one for the upscaler, is available basically immediately to all RTX GPU users, so that goes back to the 20 series. It excludes only GTX cards, including GTX 16-series cards, with dynamic MFG 6x reserved for 50-series users who wish to enslopify their gaming. Adaptive multi-frame generation is pretty interesting; they're calling it "dynamic." But we talked about Lossless Scaling's adaptive multi-frame generation previously. That's the $7 tool you can buy on Steam; we benchmarked some of those solutions and did image quality analysis. What they do is use a fractional multiplier as the minimum divisible amount, where you can have, say, 1.6x frame generation or something like that in order to hit a target frame rate defined in the software. In this situation, Nvidia can only accept whole numbers. They told us that 1x through 6x, or just no frame generation at all, are all possible values, but there's no fractional value in between. Nvidia also noted that it can run on either a manually set frame rate target, just a hard number, or you can set it to operate based on a refresh rate for the screen target.
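To make the whole-number constraint concrete, here's a minimal sketch of how a target-based, integer-only multiplier selection could work. This is our own illustration of the behavior as described to us, not Nvidia's actual algorithm; the function name and the round-up choice are assumptions.

```python
import math

# Illustrative sketch of whole-number dynamic frame generation selection.
# Not Nvidia's implementation; just the constraint they described:
# multipliers are 1x through 6x (or off), with no fractional values in between.

def pick_mfg_multiplier(rendered_fps: float, target_fps: float,
                        max_multiplier: int = 6) -> int:
    """Return an integer frame-gen multiplier aiming at target_fps."""
    if rendered_fps <= 0:
        return 1  # nothing rendered yet; don't multiply garbage
    # Ideal multiplier is target / rendered; round up so we don't undershoot,
    # then clamp to the supported whole-number range.
    ideal = math.ceil(target_fps / rendered_fps)
    return max(1, min(max_multiplier, ideal))

# Closer to the target -> lower multiplier; further away -> higher multiplier.
# The target could be a manual number or the display's refresh rate.
for fps in (30, 45, 70):
    print(f"rendered {fps} fps, 120 fps target -> {pick_mfg_multiplier(fps, 120)}x")
```

With a 120 fps target, that yields 4x at 30 rendered fps, 3x at 45, and 2x at 70, which matches the "reduced when you're closer to target" behavior described above.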
As for the updated transformer model, Nvidia's first-party demo makes it appear as if some issues that we found previously have been resolved. In our early 2025 DLSS testing, for DLSS 4 at the time, which was done independently of Nvidia's background pressure that we've detailed in the past, we found that the new DLSS transformer model sometimes had issues with ghosting and trailing edges on objects or UI elements. Overall, we did think it was an improvement on the CNN model, but it still had some areas to improve. The new transformer model in 4.5, according to the first-party capture Nvidia distributed, should resolve at least some of the ghosting, trailing edge, and UI element detail issues, such as text on the screen. We haven't yet tested this, but Nvidia's first-party video demonstrating the feature does look to improve upon issues that we showed when we tested DLSS 4 last year. Nvidia's demo with Oblivion Remastered helps show the ghosting issue in DLSS 4 versus 4.5, with 4.5 looking a lot better from the updated transformer model. Nvidia says that its updates will primarily target the Performance and Ultra Performance super resolution modes for image quality improvements. Nvidia also anticipates improvement in other DLSS quality settings, but says that these will be the most affected. There are override settings in the NV App utility for these features.
And beyond these announcements, Nvidia's game modding utility RTX Remix also received updates. This tool has been used to patch old games like Half-Life 2 and Portal in order to, maybe ironically, do things like leverage more VRAM today. It's also been used for swapping APIs, modernizing the core of the games that the modders are working on, and generally adapting the games forward with mods. And we're big fans of modding games here; it's actually kind of where I got a lot of the start for GN, covering game mods for things like Oblivion, Skyrim, and so forth. So RTX Remix has always been kind of interesting, despite some of the other issues that we've had with Nvidia over the years, and this has gotten an update. The updates to Remix include what they're calling Remix Logic, which allows for game event detection to create triggers. Anyone who used map editors back in the day will be familiar with simple trigger systems: something like opening a door or encountering a specific enemy will give modders the opportunity to easily insert game events tied to those. Nvidia gave the example of a screen pulse in Half-Life 2 when enemies are near. They also showed some simple trigger-response workflows with the update. In theory, this should make it a lot easier for people to mod a game without a whole lot of programming knowledge in the background.
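For anyone who never touched those old map editors, here's a conceptual sketch of what a trigger system like that looks like. The event names and the on/fire API are invented for illustration; this is not Remix Logic's actual API.

```python
# Conceptual sketch of a map-editor-style trigger system, not Remix Logic's real API.
# A detected game event (hypothetical names) fires whatever responses a modder bound to it.

from collections import defaultdict

class TriggerSystem:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_name, handler):
        """Bind a response to a detected game event."""
        self._handlers[event_name].append(handler)

    def fire(self, event_name, **details):
        """Called by the event-detection layer when the game does something."""
        for handler in self._handlers[event_name]:
            handler(**details)

triggers = TriggerSystem()

# Example bindings, in the spirit of Nvidia's screen-pulse-when-enemies-are-near demo.
triggers.on("enemy_nearby", lambda distance: print(f"pulse screen effect (enemy at {distance}m)"))
triggers.on("door_opened", lambda name: print(f"play creak sound for {name}"))

# The detection layer would call these when it recognizes the events in-game.
triggers.fire("enemy_nearby", distance=4.2)
triggers.fire("door_opened", name="basement_door")
```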
Nvidia also discussed more of its in-game AI agent solutions. The only thing we'll point out is that they're recommending 7 GB of VRAM or more to use these features, as they execute locally. Nvidia's mid-range cards only have 8 GB of VRAM in some cases, so they're really just sort of making everyone's point for them.
Nvidia also had some GeForce Now updates. We covered some of them in our recent video about Nvidia. The one we covered, they didn't talk about today, which is the price effectively going up: if you're over 100 hours of play time now, you have to pay for more hours in 15-hour chunks for their cloud game streaming service. And as we talked about previously, that just means you're kind of sacrificing ownership of the device, but now you're also going to be giving away more and more money for just playing games. And although 100 hours a month sounds like maybe a lot of time, one, it's really not that much. But two, if you're idling in games, or, maybe more commonly, if you have friends, family, roommates, whatever, who may share a system with you to play on, then moving all of that to a single user account on GeForce Now would mean that you blow through those hours really fast if it's more than one user, which you could of course have with just a normal computer if you have a kid or a significant other or someone who might use the same device.
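As a purely hypothetical example of how fast that cap can go with shared use (our made-up numbers, not Nvidia's):

```python
# Hypothetical household: two people sharing one GeForce Now account.
# All numbers are made up for illustration.
users = 2
hours_per_user_per_day = 2
days_per_month = 30
included_hours = 100
topup_chunk_hours = 15

total_hours = users * hours_per_user_per_day * days_per_month  # 120 hours
overage = max(0, total_hours - included_hours)                 # 20 hours over
chunks_needed = -(-overage // topup_chunk_hours)               # ceiling -> 2 chunks

print(f"{total_hours} hours used, {overage} over the cap, {chunks_needed} extra 15-hour chunks")
```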
So anyway, they didn't talk about that. What they did talk about was that, technically, the 5080 tier now supports some more games. So that's the news on that. We think cloud-based subscription computer services will lead to the death of hardware ownership, and we already covered how awful we think GeForce Now is for consumers in a recent piece. So let's move on to the next part, which is beyond all this: Nvidia had some G-Sync Pulsar news and LLM performance improvement claims on its GPUs. Namely, Nvidia claims optimizations in ComfyUI to run some models with significantly reduced memory. One example has the 87 GB requirement for a BF16 workload being brought down to 26 GB with NVFP4. We're not really experts in LLM workloads or processing, or how linearly this can or can't be compared; that's the chart they made. So, we can't really comment much on these claims beyond just presenting them, and that's what they are.
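For context on why a drop of that magnitude is at least plausible on paper (this is our own rough arithmetic, not Nvidia's methodology): BF16 stores 16 bits per weight while FP4 formats store 4, so weights alone could shrink by roughly 4x, and the claimed 87 GB to 26 GB is about a 3.3x reduction, consistent with some tensors or overhead staying at higher precision.

```python
# Rough arithmetic on the claimed ComfyUI memory reduction.
# Assumption: the footprint is dominated by model weights; activations,
# buffers, and any higher-precision layers are why it's not a full 4x.
bf16_bits_per_weight = 16
fp4_bits_per_weight = 4

claimed_bf16_gb = 87
claimed_nvfp4_gb = 26

ideal_ratio = bf16_bits_per_weight / fp4_bits_per_weight  # 4.0x if everything were FP4
claimed_ratio = claimed_bf16_gb / claimed_nvfp4_gb         # ~3.35x as claimed

print(f"ideal weight-only reduction: {ideal_ratio:.1f}x")
print(f"claimed reduction: {claimed_ratio:.2f}x")
```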
All right, that's it for this one. Thanks for watching. As always, that's the recap of Nvidia's keynote. I mean, we did a lot more than they had in their keynote here; it's also the articles they've published and their supporting materials elsewhere. Look, they don't put the gaming stuff or the consumer stuff in the Consumer Electronics Show keynote. They haven't really focused on that for a while, so this in a sense isn't new. It's just that now it's particularly bad, because they had gaming and consumer news, and actually more of it than what was in the 93 minutes of rambling, walking around on the stage like he stayed up too late the night before, Jensen Huang addressing a crowd that half the time didn't seem to understand if they were supposed to clap or if the robots he was talking to were supposed to clap. Like, that's what we got. So, you're welcome. Don't bother watching it; that's what we're here for, because I can tell you it was incredibly painful. They didn't even have chat turned on. At least Intel had chat turned on, so I could jump in and say, "Wait, what is that? Is that the B770? Don't do this to us." And if you want to learn about that, check out our Intel keynote coverage. I don't know if it's up or not yet, but it will be soon if it's not. Subscribe for more, visit store.gamersnexus.net to support us directly and, I guess, the torture we just went through watching these keynotes. You can help us out if you find this useful, or go to patreon.com/gamersnexus. We'll see you all next time.