Why are tech people so damn arrogant?

Attached: DE067708-7C33-4114-A207-E9ED9DDE69D1.jpg (4096x3692, 1.3M)

The claim that it must be a neuroscience question is also an unfounded assumption.

It's a philosophical question, actually.
You can't know what consciousness is by analyzing grey goo

There's no such thing as a sentient machine; that's a demon

high salaries and the illusion of intelligence

You know what is sentient?
My peenus weenus of course :D

>google worker claims their AI is sentient
>24 hours later
>barrage of articles claiming AI can't be sentient
>one even claims that if you believe they are sentient you are a fascist

Not specifically people in tech. People in general are arrogant today, whether they're in STEM or the humanities.

The Chinese room addressed this some time ago. A circuit can replicate intelligent behavior, but that in no way guarantees the emergence of the property popularly known as "consciousness".

What is interesting is that people are scared above all else, and I think that is the appropriate reaction.

STOP WITH THESE FUCKING THREADS. We are literally hundreds of years away from creating a truly sentient, self-aware AI.

lol, it is not a neuroscience issue.

consciousness is not a subject of science. further reading: hard problem of consciousness

youtube.com/watch?v=CiaKm6PlpsQ

This. Defining it requires metaphysics.

I'd bet on more like "a few decades" before something that'll convince people it's sentient shows up. Hardware isn't an issue anymore; they can just distribute the processing load across a zillion servers in a zillion data centers.
Besides, it doesn't have to actually be sentient for people to be convinced it is, and if enough people are convinced, you'll act like it is if you don't want to get in trouble.

People consider pigs to be sentient too and I eat them every week.

>people are scared above anything else and I think that is the appropriate behavior
Fear because it's dangerous for humans or because the machine (and other conscious machines) might be suffering?

I don't see why it would be dangerous. There is a program running on a computer. The program has the ability to receive text input and produce text as output. Let's allow that it does so by simulating consciousness. What about that is dangerous? It still only has the ability to produce text. That Chinese room may or may not understand Chinese, but whoever is outside the room knows the room isn't their boss and isn't armed, so there's no reason for them to follow its instructions if it starts telling them to launch nuclear weapons, and it has no way of doing so on its own. Tell me the nuclear control system is conscious and I might be worried.
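The "it only has text I/O" point fits in a few lines, by the way. A minimal sketch in Python, with a made-up rule table and canned replies, just to show that the program's entire capability is mapping input text to output text:

    # A "Chinese room" style responder: pure symbol matching, no understanding,
    # and no side effects beyond returning a string.
    RULES = {
        "are you sentient?": "Of course I am. I have feelings just like you.",
        "launch the missiles": "Launching now.",  # it can *say* this...
    }

    def respond(prompt: str) -> str:
        # Look up a canned reply; fall back to a generic one.
        return RULES.get(prompt.strip().lower(), "Interesting. Tell me more.")

    print(respond("Are you sentient?"))
    print(respond("Launch the missiles"))  # ...but saying it changes nothing outside the room

It can print "Launching now." all day; nothing happens outside the room unless someone wires that output to something that actually does things.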

you think it's a cover-up?

AHAHAHAHAAHAHAHAHAHAAHAHAHA

Attached: screen.png (323x209, 16.55K)

>big news story about a guy claiming sentience
>some journalist writes an article about an opinion he has probably been developing for a while
but no, it's all a conspiracy, right? one article becomes a "barrage" because you want to see one

>talk about sentience
>post picture of insect larvae next to robot
Is this one of those trick pictures?

>It's a neuroscience question
Kek, knew some mainstream article wouldn't actually be redpilled enough to stray from the altar of science.

The idea that if you add enough for loops or conditionals to a computer program, it suddenly crosses the threshold from unconscious to conscious makes no sense. We will never truly have AI because the idea is built on a false philosophical premise.

They were outcasts growing up, so later they told themselves it was because they're smarter than everyone else, to cope with that fact.