So what happens after shrinking transistors doesn't work anymore?

Attached: 1568209273276.jpg (319x474, 26.49K)

MOAR CORES

Civilization collapses.

you will share transistors with others when you don't need them
it's called the cloud

Pretty much this. Just make a thousand-core CPU and have it replace the video cards.

hopefully a better material that will allow us to make much bigger dies at the same energy consumption, so we can have...

Superconductors

quantum computers

Unlikely. Amdahl’s law, etc.

>mfw this guy wants to kill all rednecks because they are "dumb" and for voting wrong

Attached: pepe36.gif (360x346, 170.14K)

It’s important to think about performance systematically. Execution time = instruction count * cycles per instruction (CPI) * seconds per cycle. Let’s assume we won’t dramatically increase the size of chips, so the number of transistors on a chip will stay roughly unchanged for the foreseeable future. More transistors are what buy you deeper and wider pipelines and additional on-chip cache, so once the transistor budget stops growing, CPI improvements will level off. We can keep getting better at power and thermal management to make incremental reductions in seconds per cycle. Finally, we can use compiled programming languages to reduce the number of instructions executed.
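
A back-of-the-envelope illustration of that equation. All the numbers below are made up for the sake of the example, not measurements of any real chip:

    # Iron law of performance:
    # execution_time = instruction_count * cycles_per_instruction * seconds_per_cycle
    def execution_time(instructions, cpi, clock_hz):
        """Program runtime in seconds."""
        return instructions * cpi / clock_hz

    # Hypothetical workload: 10 billion dynamic instructions at CPI 1.2, 4 GHz.
    baseline = execution_time(10e9, cpi=1.2, clock_hz=4.0e9)      # 3.00 s
    # Once CPI and clocks have leveled off, cutting instruction count
    # (e.g., compiled instead of interpreted code) pays off more than
    # a small frequency bump.
    fewer_insts = execution_time(7e9, cpi=1.2, clock_hz=4.0e9)    # 2.10 s
    faster_clock = execution_time(10e9, cpi=1.2, clock_hz=4.2e9)  # ~2.86 s
    print(baseline, fewer_insts, faster_clock)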

Special-purpose units are increasingly being deployed (e.g., Apple’s Neural Engine). This trend will continue. I expect to see more smart NICs and smart storage devices that take load off the main CPUs, too.

Mass webdev suicide, followed by hyper-optimization of all relevant software.

Ah shit, this is the guy who said conservatives robbed him of his chance to have his mind uploaded to some robot, kek what an absolute retard.

Someone please post quotes from his books. It always gets a laugh outta me.

We start building up.

The Republicans should have executed all the Confederates and their sympathizers (mostly Democrats) after the Civil War.

Can't come soon enough.

This is correct. The latest AMD consumer chips use trained neural networks to optimize the branch predictor and outperform Intel's by a large margin.
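
For anyone wondering what a "trained network" inside a branch predictor even looks like, here's a rough sketch of the classic perceptron-predictor idea (Jiménez and Lin). The table size, history length, and threshold are arbitrary, and whatever AMD actually ships is far more elaborate than this:

    # Minimal global-history perceptron branch predictor sketch.
    # History bits are stored as +1 (taken) / -1 (not taken).
    HISTORY_LEN = 16
    TABLE_SIZE = 1024
    THRESHOLD = 30   # training threshold; picked arbitrarily here

    weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]
    history = [1] * HISTORY_LEN

    def predict(pc):
        w = weights[pc % TABLE_SIZE]
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
        return y, y >= 0   # predicted taken if the dot product is non-negative

    def update(pc, taken):
        y, pred = predict(pc)
        w = weights[pc % TABLE_SIZE]
        t = 1 if taken else -1
        # Train on mispredictions and on low-confidence correct predictions.
        if pred != taken or abs(y) <= THRESHOLD:
            w[0] += t
            for i, hi in enumerate(history):
                w[i + 1] += t * hi
        history.pop(0)
        history.append(t)

    # Toy use: a branch that repeats taken, taken, not-taken.
    pattern = [True, True, False]
    hits = 0
    for i in range(3000):
        taken = pattern[i % 3]
        _, pred = predict(0x400123)
        hits += (pred == taken)
        update(0x400123, taken)
    print(hits / 3000)   # converges well above the 2/3 a static guess would get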

>Let’s assume we won’t dramatically increase the size of chips
There are madlads doing wafer-scale circuits. No idea how they handle defects, as they're pretty much inevitable at that size.

What's wrong with multiple CPUs?

Nanowire FETs, then photonics. There's already a transition to dedicated accelerators over generalized compute.

Amdahl's law.
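
Concretely: Amdahl's law says that if a fraction p of a program parallelizes and the rest stays serial, n cores give a speedup of 1 / ((1 - p) + p / n). Quick illustration, with the 90% figure picked arbitrarily:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n)
    # p = parallelizable fraction, n = number of cores.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.90   # assume 90% of the program parallelizes perfectly
    for n in (2, 8, 64, 1000):
        print(n, round(speedup(p, n), 2))
    # 2 -> 1.82, 8 -> 4.71, 64 -> 8.77, 1000 -> 9.91
    # Even a thousand-core CPU tops out below 10x if 10% stays serial.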

>No idea how they handle defects
I know that at least one of these companies uses a simple approach: detect defects at manufacturing time and use microcode to route around them at the wafer scale. Inside each compute unit, things work much like a conventional processor.

What do we need more computing power for, anyway? I would actually enjoy this plateau.