THANK YOU NVIDIA

videocardz.com/newz/nvidia-geforce-rtx-4090-to-feature-2520-mhz-boost-clock-almost-50-higher-than-rtx-3090

THANK YOU NVIDIA

Attached: NVIDIA-RTX-40-HERO-banner-1200x248.jpg (1200x248, 35.19K)

wake me up when they actually have 30+ GB of VRAM

>450W

Attached: 1647039439962.jpg (450x150, 8.07K)

Boosts for 1/10 of a second.

Kek

Attached: 577482867.gif (500x282, 794.65K)

Based housefirevidia

>RTX 4070, AD104-275-Kx-A1, 7168FP32, 160bit 18Gbps 10G GDDR6, 300W.
>300W
Looks like I'll be waiting one more gen

MUST CONSOOOM

>4070
>300w
A lot of people will buy this without knowing what 300 W of heat is like. Thankfully it won't release until Q4, so they'll at least be able to use it as a house heater.

>Looks like I'll be waiting one more gen
Here's your 400W RTX 5070 bro

Especially if you want it silent. Either a 4-slot design with tons of copper, so say goodbye to your neighboring PCIe slots, or a fucking AIO.

Not that I have anything against the idea of an AIO on a GPU, but normies can fuck up anything.

>160bit
That's narrower than the 192-bit bus on a 1060, 2060, or 3060.
Nvidia's so cocky they're selling the xx60-class Lovelace die as a 4070.
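
For anyone who wants the napkin math: peak memory bandwidth is bus width (in bytes) times data rate. A quick Python sketch; the 4070 figures are from the leak above, the Ampere figures are the cards' shipped specs:

# Peak bandwidth in GB/s: (bus width in bits / 8) * data rate in Gbps
def peak_bandwidth(bus_bits, gbps):
    return bus_bits / 8 * gbps

cards = {
    "RTX 3060 (192-bit, 15 Gbps)": (192, 15),
    "RTX 3070 (256-bit, 14 Gbps)": (256, 14),
    "RTX 4070 leak (160-bit, 18 Gbps)": (160, 18),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {peak_bandwidth(bus, rate):.0f} GB/s")

That's 360 GB/s for the rumored 4070, the same as a 3060 and well under the 3070's 448 GB/s, which is exactly why the big L2 cache below matters.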

videocardz.com/newz/geforce-rtx-40-ada-gpus-to-feature-very-large-l2-caches-nvidias-own-infinity-cache

Still 1 GHz lower than AMD's RDNA3.

I'm more interested in the CUDA core count and VRAM, since I'm going to have to undervolt the hell out of this because of the ridiculous power requirements.
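
If you just want to cap board power from a script, NVIDIA's NVML bindings (pynvml) can do it. A sketch, not a true voltage-curve undervolt; it needs root, a pip install of pynvml, and the 220 W figure is only an example:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
# NVML reports power limits in milliwatts
current = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
print(f"limit {current / 1000:.0f} W (allowed {lo / 1000:.0f}-{hi / 1000:.0f} W)")
# Cap the board at 220 W (example value); the driver rejects values outside lo..hi
pynvml.nvmlDeviceSetPowerManagementLimit(handle, 220_000)
pynvml.nvmlShutdown()

A power cap just lets the boost algorithm pick clocks and voltage under the new ceiling, which is most of what people actually want out of an undervolt anyway.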

Until AMD makes ROCm not a complete fucking nightmare to work with, I'm going to move along.

RDNA3 is not hitting 3 GHz, fuck off.

Will this make up for the low memory bandwidth?

I swear to god, I'm fucking sick of this old-ass architecture. When will the power draw stop spiralling out of control?

You'd have to either introduce some sort of regulations through laws, or you'd have to convince everyone to stop buying. Neither will happen.

Are you dense? Ampere GPUs used 14 Gbps memory and Ada GPUs are using 18 Gbps memory; there's no lower memory bandwidth.

Can't they start using ARM to solve this shit? I pay more for electricity than for internet atm.
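
The electricity bit is easy to sanity-check. Rough math, with the daily hours and the $/kWh rate as assumptions you'd swap for your own:

# Rough monthly cost of the GPU alone, under assumed usage and pricing
watts = 450            # rumored RTX 4090 board power
hours_per_day = 4      # assumption: daily gaming time
rate_per_kwh = 0.30    # assumption: your tariff in $/kWh
kwh_per_month = watts / 1000 * hours_per_day * 30
print(f"{kwh_per_month:.0f} kWh/month, ~${kwh_per_month * rate_per_kwh:.2f}")

At those assumptions that's 54 kWh and about $16 a month for one card, before the rest of the system.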

to some extent, yes

Attached: E2rEszJXEAA2mDS_format=jpg&name=large.jpg (1818x1024, 221.54K)
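
The usual first-order model for how a big last-level cache "makes up for" bandwidth: whatever fraction of memory traffic hits the cache never touches DRAM, so effective bandwidth scales as dram_bw / (1 - hit_rate). Hit rates are workload-dependent and unknown for Lovelace, and this assumes the cache itself is never the bottleneck, so treat it as a toy model:

# Toy model: a fraction hit_rate of traffic is served from L2,
# so DRAM only sees (1 - hit_rate) of it
def effective_bw(dram_gbs, hit_rate):
    return dram_gbs / (1 - hit_rate)

dram = 360  # GB/s, the rumored 4070: 160-bit at 18 Gbps
for h in (0.0, 0.3, 0.5):
    print(f"hit rate {h:.0%}: ~{effective_bw(dram, h):.0f} GB/s effective")

At a 50% hit rate the rumored 4070's 360 GB/s behaves more like 720 GB/s, which is the whole bet behind the huge L2.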

Check the bus widths of the xx80 and xx70 GPUs.

How do you imagine ARM solving anything for GPUs?

I guess that makes sense. I was more concerned with the narrower 160-bit bus and wasn't thinking about the faster RAM.

You mean the "old-ass" Lovelace architecture that hasn't even been released yet?

Still younger than x86

>old ass architecture
That has nothing to do with it. It's a conscious decision by Nvidia to fatten their chips to the breaking point of PCIe (and soon, socket) power limits.

And what does x86 have to do with this thread?