How is this thing touted as being so powerful when x86 is literally built to handle a fuck ton of complex instructions? RISC is just for mobiles, so why are they putting it in computers now?

Attached: Apple_new-M1-chip_11102020.jpg.og.jpg (1200x630, 57.86K)

this thing is powerful because it's an extremely complex chip with a ton of different discrete hardware components. The M1 Max has twice the transistor count of the RTX 3090.

AMD64 gets cracked into a bunch of simple micro-ops internally anyway. The complex instructions all end up as small, malleable mini-operations in the end.

a 1050ti outperforms it, lmao

Attached: hzd_176.png (1920x1080, 2.78M)

RISC came out of workstations and desktops, not phones. ARM itself was originally a desktop architecture (Acorn designed it for the Archimedes). What are you on about?

That's a 3080. Are you going to spam the GTA V benchmark taken in a VM? Have fun trying to run GTA V in a VM with a paravirtualized GPU on a GTX 1050.
x86 is basically RISC under the hood. The complex instructions have been decoded into internal RISC-like micro-ops since the Pentium Pro.
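As a rough sketch (the assembly in the comments is typical -O2 output and the micro-op split is the textbook description, not something you can observe from C): one read-modify-write statement compiles to a single "complex" instruction on x86-64, which the core then cracks into load/ALU/store micro-ops, while AArch64 spells out the same three steps as separate instructions from the start.

#include <stdint.h>

/* One C statement, one x86-64 instruction, several micro-ops.
 * Illustrative -O2 codegen (exact output varies by compiler):
 *   x86-64:  add DWORD PTR [rdi], 1     ; one "complex" instruction,
 *            decoded internally into roughly:
 *              load  tmp   <- [rdi]
 *              add   tmp   <- tmp + 1
 *              store [rdi] <- tmp
 *   AArch64: ldr w1, [x0]               ; the same three steps are
 *            add w1, w1, #1             ; already separate instructions
 *            str w1, [x0]
 */
void bump(int32_t *counter) {
    *counter += 1;
}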

>typical Any Forums poster doesn't know that all of the M1's power comes from its absurdly wide caches and data lanes
The cores are boring; the uncore is the killer hardware.
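To put a number on "wide data lanes", here's a minimal copy-bandwidth micro-benchmark sketch (assumes a POSIX system with clock_gettime; a single-threaded memcpy won't saturate the M1 Max's quoted 400 GB/s fabric, you need several threads for that, but it shows what's being measured):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    const size_t size = (size_t)1 << 28;   /* 256 MiB, well past any cache */
    const int iters = 20;
    char *src = malloc(size), *dst = malloc(size);
    if (!src || !dst) return 1;
    memset(src, 1, size);                   /* fault the pages in first */
    memset(dst, 0, size);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, size);             /* read size bytes, write size bytes */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbps = 2.0 * (double)size * iters / secs / 1e9;
    printf("~%.1f GB/s copy bandwidth\n", gbps);
    free(src);
    free(dst);
    return 0;
}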

>x86 is literally built to handle a fuck ton of complex instructions?
Because on x86 they're mostly legacy instructions kept for compatibility. Getting rid of them (or, in ARM's case, never having them to begin with) means a simpler front-end decoder, which is part of what lets the CPU go wider and faster.

it's now used primarily in mobile devices
UNCORE?
ok but then how does it handle complex instructions?

$$$$

A.) It forces 99% of users into a walled garden app store.

B.) It dramatically reduces the BoM, since fabbing your own ARM chips is cheaper than buying finished x86 chips.

C.) It paves the perfect path for the "You will own nothing and be happy" dystopia, since a corporation basically owns you at the hardware and software level.

>A.) It forces 99% of users into a walled garden app store.
Wrong, you can download apps from wherever you want. Nobody uses the Mac App Store.

>The M1 Max has twice the transistor count of the RTX 3090.
Means fuck all when you can't do anything with it. You're comparing apples to oranges. The RTX 3090 can render lifelike 3D graphics... the M1 Max MacBook can draw a 2D rainbow flag. Cool. That will be $5000. Apple fans are dumb as fuck yo

It does for 99% of users because of the performance penalty of emulating x86 software. It doesn't help that a lot of ARM ports end up slower than just running the x86 build under emulation (which is already slow).

Attached: 1643258571720.png (1920x1700, 3.43M)

>shitty Any Forumstard manchild escapism gayym running under a paravirtualized GPU is slower than a bare-metal 1050
lmao this board is becoming more and more retarded, i won't waste my time here anymore

The M1 Max is tailored for video editing because Macs are big in that market. It's not designed for games, which should surprise nobody, and it's much, much more energy efficient. Can you show me a single laptop with a 3090 that can be used as an 8-hour heavy-load workstation?

It can render 3D graphics as well, and do machine learning, and accelerate video encoding and decoding, and a bunch of other things. I'm not saying it's better than a 3090 or the best thing in computing, but it's undeniably powerful because it's a huge fucking chip that can pull around 100 W, and nothing else quite like it exists right now.

That's the crude reality you get with ARM "computers". Because ARM has been nothing but a disaster of incompatibilities (see the NEON variants), devs would much rather have users emulate their x86 software. Now a NEON replacement, SVE, has finally been put together, but that means NEON ARM software has to be re-optimized for SVE, and you can't stop optimizing for NEON either, because that piece of shit is still going to be around on old hardware for a decade.

x86 has had SSE since 1999 (basically WW2 in computer years), and even the newest x86 CPU can still execute SSE software from back then.
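To make the NEON/SVE split concrete, here's a sketch of the same vector add written twice with standard ACLE intrinsics (arm_neon.h / arm_sve.h). The point is that you ship and maintain both paths; a chip like the M1 only has NEON and no SVE at all, while SSE code from 1999 still runs unmodified on a current x86 CPU.

#include <stdint.h>
#include <arm_neon.h>      /* NEON: fixed 128-bit vectors, on every AArch64 core */
#ifdef __ARM_FEATURE_SVE
#include <arm_sve.h>       /* SVE: scalable vectors, only where the hardware has it */
#endif

/* NEON path: 4 floats per iteration plus a scalar tail loop. */
void add_neon(float *dst, const float *a, const float *b, int64_t n) {
    int64_t i = 0;
    for (; i + 4 <= n; i += 4)
        vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    for (; i < n; i++)
        dst[i] = a[i] + b[i];
}

#ifdef __ARM_FEATURE_SVE
/* SVE path: vector length unknown at compile time, tail handled by predication. */
void add_sve(float *dst, const float *a, const float *b, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32_s64(i, n);
        svfloat32_t va = svld1_f32(pg, a + i);
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, dst + i, svadd_f32_x(pg, va, vb));
    }
}
#endif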

what kind of retard are you? what does this even have to do with your stupid GTA benchmark? are you ok? you can run GTA V under CrossOver instead of Parallels and get decent performance, what are you smoking?

Because that's the one 100% guaranteed way of running most x86 software, given how many x86-only APIs it leans on. Hell, even with fucking x86 hardware Valve has only been able to approve like what, 0.0001% of PC gaymes through Proton?

*on the Steam Deck

Agreed. Fucking Amerimutts, and there's this one guy on this board, unpaid btw, shilling for Tim Apple.