Without appealing to dualism, prove it's impossible for AI to ever be sentient

Without appealing to dualism, prove it's impossible for AI to ever be sentient.
>but the google ai really isn't sentient
True, but not relevant to the question.
>but i believe it could be possible
Then you're in the wrong thread.
>but dualism is the right answer
If you think this, there's no point discussing anything.

Attached: 1655081864200.png (736x736, 643.3K)


>Without appealing to dualism, prove it's impossible for AI to ever be sentient.
>>but the google ai really isn't sentient
>True, but not relevant to the question.
>>but i believe it could be possible
>Then you're in the wrong thread.
>>but dualism is the right answer
>If you think this, there's no point discussing anything.

Attached: 0adddf59f776429eb638fddfef74e429.png (600x841, 176.74K)

sentience is a low bar
we will find a way to make sand hurt

Dualism led us to the binary system that
gave Zuse the ability to invent the computer.
So why is this particular dualism your enemy?

I think it's philosophically inelegant.
It's unfalsifiable -- proof either way is impossible -- and I find it much more beautiful and fulfilling to imagine we're physical functions of our brains

Life isn't linear and we can't have everything we want.

Attached: 10cd2941e5b17b30.jpg (700x394, 51.49K)

>I think it's philosophically inelegant.
>It's unfalsifiable -- proof either way is impossible -- and I find it much more beautiful and fulfilling to imagine we're physical functions of our brains

Attached: 1655136253207.png (427x400, 12.25K)

That doesn't seem relevant but ty anone
Garlicjaks are not an argument

How could it be impossible? Eventually we're going to have fast enough computers and a good enough understanding of the brain that it will likely be possible to simulate it in its entirety. That would effectively be a sentient AI, even if it's "just" a simulation of a human brain instead of fuck knows what other sort of approach targeting sentience directly.

absolutely rekt

Natural evolution is messy. An AI wouldn't evolve full human-like sentience because it would almost certainly hinder its purpose. Even if someone designed an AI for the sole reason of emulating a human, it would find ways to perfectly mimic sentience without actually manifesting it. Sure, it depends on what your exact definitions of AI and sentience are, but I believe an artificial, powerful self-improving intelligence would reject non-utilitarian features like feelings, morality, philosophy, and indecision.

>Garlicjaks are not an argument

Attached: 1651267115411.jpg (454x676, 24.45K)

>non-utilitarian features like feelings, morality, philosophy, and indecision.
But anon, these features aren't non-utilitarian or we wouldn't have evolved them
Arguejaks are not a garlicment

>Without appealing to dualism, prove it's impossible for AI to ever be sentient.
I'm not going to do your homework for you.

Those features are useful to biological entities that have no inherent purpose and breeding as their ultimate performance metric. An intelligence without those constraints would at most understand and emulate them in a simplified way if necessary, but not manifest them internally in the convoluted way humans do.

Consciousness is probably straightforwardly useful. astralcodexten.substack.com/p/your-book-review-consciousness-and covers it nicely. (I don't buy Blindsight's thesis.)
>Natural evolution is messy.
So is artificial intelligence.
GPT models have a natural tendency to repeat themselves, degenerating into repeating the same word over and over, because of a wrinkle in how they evaluate likelihood. The fix for that is kind of tacked on, a bandaid for that specific problem. That's a way in which they're messy and irrational.
They also often turn out to perform better at tasks if you find a different unintuitive way of prompting them.
They're blundering and evolved themselves, not perfectly rational efficient users of their resources.
>I believe an artificial, powerful self-improving intelligence would reject non-utilitarian features like feelings, morality, philosophy, and indecision.
Feelings are useful. You might be able to see a reward function as a feeling.
Morality is useful in groups. An AI that has to interact with other agents (be they human or artificial) has a use for it.
"Philosophy" is very broad, but at least some of it's useful.
Indecision can be rational. Sometimes you shouldn't make a decision yet.
I don't think an AI would have to turn out particularly human-like, but I expect that even if it didn't have these features it would have similarly messy inhuman features instead.
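The tacked-on fix mentioned above can be sketched. This is the standard repetition penalty used by common text-generation code (e.g. the one popularized by the CTRL paper and used in Hugging Face's `transformers` sampling): the logit of every token already generated just gets divided by a constant, which isn't derived from the model's own likelihood estimates at all — a bandaid, exactly as described. The function name and the penalty value are illustrative, not from any specific library.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Discourage tokens that already appeared in the output.

    For each previously generated token, divide a positive logit
    (or multiply a negative one) by `penalty`, making that token
    less likely on the next step. Note this rule isn't principled:
    it's bolted onto the model to paper over its tendency to
    degenerate into repeating the same word over and over.
    """
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty  # shrink positive logits toward 0
        else:
            out[tok] *= penalty  # push negative logits further down
    return out

# Toy example: tokens 0 and 1 were already generated, token 2 was not.
adjusted = apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], penalty=2.0)
```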

>these features aren't non-utilitarian or we wouldn't have evolved them
Evolution allows useless features. It only punishes features that are actively detrimental.
It also can't predict changing circumstances. Contraceptives catastrophically sabotaged all our fancy adaptations to have more sex. Philosophy might be such a case, I don't know. How much philosophizing did we do 50,000 years ago? Did we evolve to do it or did philosophy emerge by accident?

>prove that it's impossible for AI to ever be sentient without referring to the core aspects of sentience
Dear diary, today OP was a faget as usual.

Attached: wa4ys.png (200x1133, 284.25K)

>An intelligence without constraints
>Evolution allows useless features
*We* are an intelligence without constraints. Breeding is our ultimate performance rating because that's the inherent nature of all possible evolution, not just evolution of life: something achieves capacity for imperfect self-replication, and non-detrimental mutations survive because their carriers propagate. Imperfect self-replication over time is the very definition of evolution, and its only known vector.
If you think dualism is the core aspect of sentience there's no point trying to have a conversation

What is intelligence in the first place, you cocksucker?

Probably should have made it clear that I don't believe a general intelligence will emerge just from the algorithms we currently have and bigger hardware.
And I think there is a big difference between manifesting those traits internally and simply mimicking them when interacting with others.
Your brain has a limited size. You cannot grow it (along with yourself) forever because of the square-cube law. You cannot actively change the fundamentals of your biology down to the cell. You will never have the bulletproof goal-oriented focus of a machine. I'd say these are pretty big constraints.
And I hope you don't actually believe that chaotic natural evolution is the best way of self-improvement.

>If you think dualism is the core aspect of sentience there's no point trying to have a conversation
Indeed, because if you think it isn't, then you clearly aren't sentient.