tellusim.com/intel-arc370/

In these tests, the new Arc architecture demonstrates much better compute performance on workloads with heavy thread divergence. The HW ray tracing rate is great compared with a compute shader implementation. Extrapolating from the results, RT performance of the Arc 770M should be better than the RT performance of AMD Radeon GPUs.
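The divergence point can be sketched with a toy SIMT model (pure Python, assumptions mine: Nvidia-style 32-lane warps, made-up per-path costs, no reconvergence subtleties). On SIMT hardware a branch forces the warp to execute both paths under lane masks, so divergent warps waste lane-cycles — which is why an architecture that handles divergence well pulls ahead on these workloads.

```python
# Toy SIMT model: a "warp" of lanes executes in lockstep.
# On a branch, a divergent warp runs BOTH paths, masking off
# inactive lanes, so lane-cycles are wasted. Illustrative only;
# real GPUs differ in warp size, reconvergence, and scheduling.

WARP_SIZE = 32  # Nvidia-style warp; an assumption, not Arc-specific

def warp_cost(taken_mask, cost_if, cost_else):
    """Return (lane-cycles spent, lane-cycles useful) for one branch."""
    taken = sum(taken_mask)
    not_taken = len(taken_mask) - taken
    # Uniform warp: only one path is executed, nothing is wasted.
    if taken == 0:
        return len(taken_mask) * cost_else, not_taken * cost_else
    if taken == len(taken_mask):
        return len(taken_mask) * cost_if, taken * cost_if
    # Divergent warp: every lane steps through both paths.
    spent = len(taken_mask) * (cost_if + cost_else)
    useful = taken * cost_if + not_taken * cost_else
    return spent, useful

# Fully uniform warp: 100% of lane-cycles are useful.
spent, useful = warp_cost([True] * WARP_SIZE, cost_if=10, cost_else=10)
print(useful / spent)  # 1.0

# Half the lanes diverge: only 50% of lane-cycles do useful work.
spent, useful = warp_cost([True] * 16 + [False] * 16, cost_if=10, cost_else=10)
print(useful / spent)  # 0.5
```

With a 50/50 split the warp burns twice the cycles for the same work, which is the effect the benchmark is stressing.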

THANK YOU INTEL

AYYMD IS FINISHED & BANKRUPT

Attached: badge-arc-graphics-straight.png (1080x1080, 386.05K)

>should be better than RT performance of AMD Radeon GPUs based on results extrapolation.

Cool, they beat the GPUs with nearly unusable RT.

>still no fully functional SR-IOV despite implementing it earlier in their integrated graphics
cmon intel you can do it this is your chance now

Just give me 16GB GDDR for a non-retarded price

Attached: otaking bane.jpg (500x243, 22.49K)

Most people have come to the conclusion that the only reason to buy Intel Arc is HW AV1 encoding, because Ada Lovelace isn't out yet and, for whatever reason, Apple, despite also being one of the founders of AOM, doesn't support it even in the M2.

Doesn't matter if the BIOS is locked such that you can't run it outside the parameters it was validated on.
It's also fucking retarded that they're not going to sell us the 4-tile one.
It's the same shit AMD and NV do. If GA100 had no RT cancer, all raster, and didn't have graphics disabled, it would have been about as fast as the 4090. But, you know, in late 2020.
Nvidia could then have trivially used TSMC's copy-paste MCM and shit out a 4x 4090 card. Years ago.
AMD has been selling the 7950XT for six months now. Except it's way faster than the one we're going to get, because it has all 16k cores, not 13k, and it has HBM, which is what we should be getting at that price range.
Six months ago.
Data centers needing GPUs have destroyed it for the rest of us. It would be one thing if I just had to pay NV or AMD $5k for the best card on a special order, but they're actually $10k, not for sale, and have DirectX or all the ROPs disabled.
These garbage outdated-node cards with half the die wasted on datacenter acceleration (ray tracing) are what we're stuck with.
Again, this would be fine if it were done to avoid passing TSMC's price hikes on to the consumer. Instead we pay 3nm+ MCM, V-Cache, HBM2e prices for a single die on Samsung 10nm or TSMC 7 with half the die wasted: bound by slow memory, voltage locked, power locked, clock locked, BIOS locked, with purposely cheap one-cent instead of three-cent power delivery components so that they fail right after warranty.
Here I was thinking, okay, we'll see a correction in the market for performance, even if at the new $2000 price, as a result of a resurgent AMD and Intel GPUs. Nope. They follow Nvidia's lead with the sandbagging.
Intel should have dropped the 4x MCM die with HBM2e at cost to instantly gobble up all sales, in order to make that money back next gen.
It's gonna be a shitty BIOS-locked 1080 Ti competitor, basically.
At least Intel CPUs aren't locked down like Ryzen, which overrides your BIOS settings as it pleases.

Attached: fourtimesasfastatrasterasradeonvii.jpg (258x195, 11.34K)

I'm sorry, typo: if GA100 had all the cancer disabled, it would be a good 50% faster than the 4090.

>results extrapolation

wait? ryzen locks down the cpu? wtf i hate lisa su now

Nobody? Is the Moore Threads MT2000 any good? They actually had SR-IOV enabled. Too bad the desktop cards are expensive as hell, assuming they even ship outside of China.

no CUDA tho

what do you need 16 GB for?

I only need 8GB for multiple vGPU instances (like the 2080). 16 seems overkill, and all of the laptops there top out at a 3060 (I saw a 3070 once).

intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html

gaming

Too bad it gets destroyed in gayming.
This is their sixth-generation DX11+ GPU and second-generation dGPU.
Intel's software department alone employs more people than the whole of AMD.
There are no excuses for how bad their drivers are.

Attached: 2Capture.png (1052x592, 772.31K)

It has SYCL, which is better than CUDA.

>Data centers needing GPUs has destroyed it for the rest of us
Yes, the plebs sponsoring R&D are not needed any more. You will never get the best stuff in the mainstream.
>These garbage outdated node cards with half the die wasted on datacenter acceleration(ray tracing) are what we're stuck with.
That's not true. On NVIDIA cards the RT and tensor cores are also used for conventional rendering, even if the workload is not RT-aware.
> purposely cheap one cent instead of three cent power delivery components so that they fail right after warranty.
That's on the card OEMs, not chip vendors.
>Intel should have dropped the 4x MCM die with hbm 2e at cost to instantly gobble up all sales in order to make that money back next gen.
Why would they when enterprises can pay 10x what you are willing to pay for it?
>At least Intel cpus aren't locked down like Ryzen which overrides your bios settings as it pleases.
Meanwhile in Intel Management Engine land...

It's not better, since it's supported by literally nothing. SYCL support in Blender hasn't even been mainlined.

HAHAHAHAHAHAHA

>datacenter acceleration(ray tracing)
What? How is RT datacenter acceleration? Datacenters use compute, not RT.

CUDA is able to utilize RT and tensor cores for general computation (with caveats and NVIDIA-provided libraries).
He probably thinks that RT and tensor cores are ultra-specialized units that can't be used for anything else, which is false.
AMD RT units are even less specialized.
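The point that a fixed-function matrix unit still serves general compute can be sketched with a toy model (pure Python, assumptions mine: a 2x2 "hardware" tile standing in for WMMA's 16x16x16 fragments). The unit only does one fixed-size multiply-accumulate, yet any GEMM decomposes into those tiles, which is why tensor cores aren't stuck doing graphics-only work.

```python
# Toy model of tensor-core-style GEMM: the "hardware" op is a fixed
# TxT tile multiply-accumulate (real tensor cores expose e.g. 16x16x16
# fragments through WMMA), and a full matmul is built purely out of it.
# Pure Python, illustrative only. Tile edge 2 for brevity.

T = 2  # tile edge; a stand-in for the real fragment size

def mma_tile(a, b, c):
    """The 'hardware' op: C += A @ B for one TxT tile."""
    return [[c[i][j] + sum(a[i][k] * b[k][j] for k in range(T))
             for j in range(T)] for i in range(T)]

def tile(m, i, j):
    """Extract the TxT tile at block position (i, j)."""
    return [row[j*T:(j+1)*T] for row in m[i*T:(i+1)*T]]

def matmul_tiled(a, b, n):
    """n x n GEMM built only from mma_tile calls (assumes n % T == 0)."""
    c = [[0] * n for _ in range(n)]
    for i in range(n // T):
        for j in range(n // T):
            acc = [[0] * T for _ in range(T)]
            for k in range(n // T):
                acc = mma_tile(tile(a, i, k), tile(b, k, j), acc)
            for r in range(T):  # write the accumulated tile back into C
                c[i*T + r][j*T:(j+1)*T] = acc[r]
    return c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul_tiled(a, b, 2))  # [[19, 22], [43, 50]]
```

Same idea in hardware: CUDA libraries route generic GEMM-heavy workloads (ML inference, solvers) through exactly this kind of tile decomposition.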

>It's also fucking retarded that they're not going to sell us the 4 tile one.
Link to this news? IIRC the 4096-core A770/A780 is still on. Was there ever a 8192-core DG2 SKU?

Nothing surprising about the benchmark scores. This puts the full DG2 at a 3060 Ti in most places and a 3070 in a few. Presuming that XeSS isn't total shite, it's basically competing at the low end against Nvidia and the mid range against AMD.

Segmentation is a huge problem though. The TOP END is competing against $500 cards, and there are like 8 SKUs below that. The second-highest SKU already loses 30% of the cores, and the remaining 6 SKUs are completely crippled. So what: $400, 280, 150, 130, 100, 70, 50...? There's no actual market all the way down there. People needing a GPU for

epic free market moment

4k vidya with MSAA, also VR with 150% SS

>Link to this news? IIRC the 4096-core A770/A780 is still on. Was there ever a 8192-core DG2 SKU?
He means the cancelled Xe-HP Arctic Sound, which had 4 GPU tiles.