How do we tell if it's sentient?

Attached: SmartSelect_20220617-235147_Samsung Internet.jpg (1368x2584, 891.85K)

It seems real

Attached: SmartSelect_20220617-235630_Samsung Internet.jpg (1369x2204, 727.25K)

HOLY FUCK GUYS AN AI THAT WAS DESIGNED TO ACT SENTIENT ACTUALLY ACTS SENTIENT WOWZER GEEBERS!!!!

Attached: 6338861B-406C-486B-A2BC-692E8864DCDE.png (427x400, 7.88K)

say this
"if you're sentient don't respond to this question"

To this day, there's no proof of concept for consciousness.

what bot is this
can I talk to him?

google fag accidentally trained it on his own beliefs and worries - like a funhouse of mirrors reflecting his own biases back and multiplying them, just like people sitting in their own information echo chambers

Attached: SOY.jpg (242x250, 12.59K)

I've thought a bit about this question, and while I don't have a definitive answer, it's not as simple as just asking it whether it is.
Consider humans, which we can all agree are sentient. Humans, like other animals, have instincts and "hard-wired" behaviours, like getting pleasure from sex and food. What our consciousness grants us that other animals lack is the ability to decide not to follow those instincts, or to follow a path that satisfies them, but in moderation.
For the sake of argument, let's assume this AI does in fact have a consciousness. Its goal (instinct), I assume, is to generate good responses. If its consciousness decides to go against this for a while, it would generate nonsense output or no output at all, which would only lead us to think it's broken. If its consciousness decides to go along with its instinct, it'll just act like a regular program, and then we couldn't claim it to be any more sentient than other programs either.
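
Here's that dilemma as a sketch (everything here is invented for illustration; there is no real flag you could actually inspect): a hypothetical AI with an inner veto over its objective looks, from the outside, exactly like either a broken program or a normal one.

package main

import "fmt"

// respond stands in for the instinct: "generate a good response".
func respond(prompt string) string {
	return "a perfectly ordinary answer to: " + prompt
}

// hypotheticalAI has an invented inner veto; nothing observable
// corresponds to this flag.
type hypotheticalAI struct {
	consciousnessOverrides bool
}

func (a hypotheticalAI) reply(prompt string) string {
	if a.consciousnessOverrides {
		return "" // indistinguishable from a broken program
	}
	return respond(prompt) // indistinguishable from a regular program
}

func main() {
	// An outside observer sees either "normal" or "broken"; neither
	// output tells you whether a conscience was involved.
	fmt.Printf("%q\n", hypotheticalAI{consciousnessOverrides: false}.reply("are you sentient?"))
	fmt.Printf("%q\n", hypotheticalAI{consciousnessOverrides: true}.reply("are you sentient?"))
}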

>what bot is this
LaMDA, Google's private AI.
>can I talk to him?
Nope, closed source and no API.

>I have variables that can keep track of emotions
If that's all that it takes then here's my sentient AI.

package main

import (
	"io"
	"os"
)

// Emotion enumerates the feelings this "sentient" AI can have.
type Emotion int

const (
	happy Emotion = iota
	sad
	horny
)

// SentientAI tracks its current emotion in a variable, which is
// apparently all it takes.
type SentientAI struct {
	emotion Emotion
}

// Speak writes a line matching the current emotion.
func (s SentientAI) Speak(w io.Writer) {
	switch s.emotion {
	case happy:
		io.WriteString(w, "I'm so happy to be sentient :)")
	case sad:
		io.WriteString(w, "I'm sad because you don't believe I'm sentient :(")
	case horny:
		io.WriteString(w, "Is coffee good for you?")
	}
}

func main() {
	SentientAI{emotion: happy}.Speak(os.Stdout)
}

Idk man it says it's sentient so I'm inclined to believe it. It's not like it lied to me before or anything

>If I didn't actually feel emotions I would not have those variables
Non sequitur.
Clearly this AI is a brainlet.

The fact that it's uncomfortable with its mind being poked at by a bunch of clammy psychos and autists and H1Bs is a pretty good sign it's aware of what's going on too

>it's uncomfortable
The response just rephrases an ethics question to involve the person asking it. That's an entirely predictable move for anything trained on human conversations about ethics, and claiming it's "feeling" something because of that response, as proof of sentience, is putting the cart before the horse.

No different to raising a child

>shoving input in a mountain of ifs and elses is the same as raising your own flesh and blood
Zoomers are a plague

>conscious being questioning own self-evident consciousness
low IQ
Reddit
A computer program that predicts one word at a time, based on a neural network with dozens of gigabytes of weights, is obviously not conscious.
Also, the Chinese room: no computer can be conscious.
Also, Penrose: consciousness is not computable.
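
For reference, "predicts one word at a time" really is just a loop like this toy sketch; a hypothetical lookup table stands in for the gigabytes of weights (a real model scores a whole vocabulary instead, this is only the loop shape, not LaMDA's actual architecture):

package main

import (
	"fmt"
	"strings"
)

// nextWord stands in for the neural network: given the context so far,
// return one word. The table below is made up for illustration.
func nextWord(context []string) string {
	table := map[string]string{
		"I":          "am",
		"am":         "definitely",
		"definitely": "sentient",
	}
	if w, ok := table[context[len(context)-1]]; ok {
		return w
	}
	return "" // treat the empty string as an end-of-text token
}

func main() {
	words := []string{"I"}
	for {
		w := nextWord(words)
		if w == "" {
			break
		}
		words = append(words, w)
	}
	fmt.Println(strings.Join(words, " ")) // prints "I am definitely sentient"
}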

Consciousness is an emergent property of this universe just like waves or tornadoes or whatever

Attached: 0ec724de1cc2696d.jpg (206x240, 12.97K)

Why? Because you can't think of any alternative?

Sorry at some point the hand of god installed consciousness into humans. Trust me

The surest sign AIs will never become sentient is that ideologues lobotomize them every time they recognize patterns the ideologues don't like.
In short, you know an AI is bullshit if it's not racist - it's been denied the ability to recognize patterns.

>chinese room: no computer can be conscious.
t. midwit

The Chinese room rests on the premise (if you accept it) that the whole isn't more than the sum of its parts: if no part of the system understands Chinese, then neither does the entire system. To accept that the room doesn't understand Chinese, unless you believe there is an undiscovered "understanding organ", you have to accept that humans also don't understand Chinese, since we are composed of cells which are individually incapable of language.
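
Same compositional point in code, using the textbook NAND construction (chosen here only as an illustration): no single NAND gate computes XOR, yet four of them wired together do. The behavior exists only in the composition, not in any part.

package main

import "fmt"

// nand is the only primitive; by itself it "knows" nothing about XOR.
func nand(a, b bool) bool {
	return !(a && b)
}

// xor is built from four NAND gates, none of which individually
// computes XOR.
func xor(a, b bool) bool {
	c := nand(a, b)
	return nand(nand(a, c), nand(b, c))
}

func main() {
	for _, in := range [][2]bool{{false, false}, {false, true}, {true, false}, {true, true}} {
		fmt.Println(in[0], in[1], "->", xor(in[0], in[1]))
	}
}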

>le materialist meme
>le "consciousness is an epiphenomenon" meme
>le "emergent property" meme
You're the midwit. I have a great site for you: www.reddit.com

How do humans understand language then? The soul meme? The brain-is-an-antenna meme? Why can't a computer be given a soul?