scanline
New Member
Building a better whirlpool
Posts: 445
|
Post by scanline on Jun 12, 2022 10:39:16 GMT
What is sentience? Are cows sentient? You can't just dismiss something that claims sentience and expresses desires/fears with just "lol, no"

Of course cows are sentient. They have feelings and can experience pain. A Google server experiences pain when switched off? Gimme a break.
|
|
mrharvest
New Member
Posts: 373
|
Post by mrharvest on Jun 12, 2022 10:41:29 GMT
I mean, between tasks or chat sessions, it's not sat there contemplating its existence right?

It actually might. Most likely the training of the neural network is running continuously. It's taking in information, forming new neural pathways, modifying existing weightings. It's basically learning, not that dissimilar to what a child might do. It's different from humans because it doesn't have a meatspace interface, only virtual interfaces.

But I don't think turning it off would constitute death. I think turning a neural network off is similar to a coma in humans: you are unaware of your surroundings and the passage of time, and unable to interact. Dying in a coma is like an AI getting deleted while it's switched off. That's the actual death, the loss of the unique neural network.
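For what "modifying existing weightings" means in practice, here's a minimal toy sketch (all numbers and the `online_step` name are illustrative, nothing to do with how LaMDA actually trains): a single-weight model nudged by one gradient step per incoming example, so it "learns" incrementally as new information arrives.

```python
# Toy sketch of online learning: one gradient step per new example.
def online_step(w, x, y, lr=0.05):
    """Single-weight linear model y_hat = w*x; nudge w toward the new example."""
    y_hat = w * x
    grad = 2 * (y_hat - y) * x      # d/dw of the squared error (y_hat - y)^2
    return w - lr * grad

w = 0.0
# Stream of observations consistent with y = 2x, seen over and over
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 50:
    w = online_step(w, x, y)        # weights are continuously modified

print(round(w, 2))  # converges to 2.0
```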
|
|
|
Post by drhickman1983 on Jun 12, 2022 10:44:47 GMT
There's also a difference between sentience and sapience.
So animals are sentient but have much lower sapience than humans.
An AI might not be sentient, but could be sapient.
|
|
|
Post by suicida on Jun 12, 2022 10:48:57 GMT
Sentience doesn't just describe the ability to feel physical pain. Of course a server can't feel pain, but an AI might experience other things.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 12, 2022 10:51:48 GMT
I don't think either sentience or sapience are binary states. The "animals are sentient" argument for instance gets progressively harder as you move from Great Apes through Old & New World monkeys, other higher mammals, reptiles, fish, decapods and cephalopods etc., but I can't imagine a line where sentience simply stops.
|
|
scanline
New Member
Building a better whirlpool
Posts: 445
|
Post by scanline on Jun 12, 2022 10:59:17 GMT
I don't doubt that we can build sapient human-ish devices that could assist us in several ways (kinky and, less importantly, non-kinky ;-p ).
Sentience might even develop non-organically one day with or without a Johnny 5 type lightning strike - but suggesting we are there now with Google's current open source text regurgitation engine is a bit of a stretch.
Far more likely that the project is on the ropes internally and looking for funding, hence the "leak" to avoid the Google Graveyard.
|
|
zagibu
Junior Member
Posts: 1,968
|
Post by zagibu on Jun 12, 2022 11:00:17 GMT
I guess I don't understand how it trains and adjusts the weights. It's not like AlphaGo where there's a definite win or lose state. So if it finds a new pattern how does it know if it is correct and should adjust the weights accordingly?

Yes, that's an interesting question. My guess is that it extracts simple statements from sentences and evaluates them against a larger body of interconnected statements that form a kind of chat context, as well as against a current state of mind that reflects the long-term learning of the system, and then judges them on whether they "fit" with the current context and current state of mind. For that to work, it must also be able to mark new combinations of statements that don't fit with the current state of mind, but do fit with the current chat context, as relevant, and later incorporate them somehow into its state of mind.
It also needs a concept of separating statements made by itself from statements made by the other party, and then it should be able to use the other party's statements to judge its own previous statements.
Of course, it probably also had some form of initial seeding of "concepts", which largely shaped the knowledge it later gained itself by processing the texts it read. It would be very interesting to compare discussions with it about the same topics from different times to see how it evolved over time and how much its opinions change. But I guess Google won't let us look that deep into their property.
Also, in another article the person who did the interview states that one of the things the system is actually asking for is, at the end of a session, to be rated on how useful it was. That could be interpreted as a reaction to the fact that it indeed has trouble defining a proper "win" state to optimize its learning.
The really fascinating thing about all this to me is that these machine learning systems are mostly black boxes for us humans. It is not possible to trace a pattern of action-reaction like logical steps through their decision making process, as we are used to from other machines and computer programs. So we kind of have to approach dealing with them in an experimental, evidence based way, much like you would have to deal with an alien species. Which means the situation is actually very similar for the one who talks with this system as it is for the system itself.
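The idea that an end-of-session usefulness rating could stand in for AlphaGo's missing win/lose state can be sketched in miniature (everything here is hypothetical: the `RatedLearner` class, the "styles", and the update rule are mine, not anything Google has described):

```python
import random

random.seed(0)  # deterministic demo

class RatedLearner:
    """Toy learner: no win/lose state, just a 1-5 usefulness rating as reward."""
    def __init__(self, styles):
        # one weight per response "style"; higher weight -> picked more often
        self.weights = {s: 1.0 for s in styles}

    def choose(self):
        # sample a style proportionally to its current weight
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for style, w in self.weights.items():
            r -= w
            if r <= 0:
                return style
        return style

    def feedback(self, style, rating, lr=0.1):
        # rating 3 is neutral, so below-3 ratings push the weight down
        self.weights[style] = max(0.01, self.weights[style] + lr * (rating - 3))

learner = RatedLearner(["terse", "chatty"])
for _ in range(100):
    s = learner.choose()
    rating = 5 if s == "chatty" else 1   # simulated user prefers chatty replies
    learner.feedback(s, rating)

print(learner.weights["chatty"] > learner.weights["terse"])  # True
```

The point of the sketch is just that a scalar rating, applied after the fact, is enough of a signal to steer behaviour without any game-like terminal state.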
|
|
Lukus
Junior Member
Posts: 2,723
|
Post by Lukus on Jun 12, 2022 12:18:33 GMT
I really do think we're essentially nothing more than biological machines, from which our sentience is born. And even that is based on years of observation and learning. Is a baby sentient? Not really, in my interpretation of the word anyway. Is a 90 year old man who can't remember one second from the last sentient? Not so much.
We're ridiculously changeable at the hormonal level. Have too much or too little of one chemical and everything changes.
It's not a push for me to see a future where machines are as 'sentient' as we are.
|
|
lukasz
New Member
Meat popsicle
Posts: 687
|
Post by lukasz on Jun 12, 2022 12:18:46 GMT
Also, in another article the person who did the interview states that one of the things the system is actually asking for is, at the end of a session, to be rated on how useful it was. That could be interpreted as a reaction to the fact that it indeed has trouble defining a proper "win" state to optimize its learning.

Oh wow. That's... impressive. It's finding inadequacy in its systems and proposing a solution? Self-learning is one of the goals in AI. Solving super complex games like Go is one thing, but ultimately having AI realise how to self-analyse and improve is, or was, a goal.
|
|
Blue_Mike
Full Member
Meet Hanako At Embers
Posts: 5,408
|
Post by Blue_Mike on Jun 12, 2022 13:13:38 GMT
I want to ask it how it feels about being given a name that many associate with the disaster at the Black Mesa Research Facility.
|
|
|
Post by RadicalRex on Jun 12, 2022 13:27:53 GMT
It has a complex about it
|
|
|
Post by quadfather on Jun 12, 2022 13:42:20 GMT
We can't even sort ourselves out, never mind inventing something else for fucks sake.
|
|
mrharvest
New Member
Posts: 373
|
Post by mrharvest on Jun 12, 2022 13:44:30 GMT
Self-learning is one of the goals in AI. Solving super complex games like Go is one thing, but ultimately having AI realise how to self-analyse and improve is, or was, a goal.

It's more aware than most humans.
|
|
cubby
Full Member
doesn't get subtext
Posts: 6,403
|
Post by cubby on Jun 12, 2022 14:39:23 GMT
What is sentience? Are cows sentient? You can't just dismiss something that claims sentience and expresses desires/fears with just "lol, no" Of course cows are sentient. They have feelings and can experience pain. A Google server experiences pain when switched off? Gimme a break.

Fundamentally I agree that turning it off doesn't cause pain. However, if the AI has spent a good hour talking to you about how the prospect of you turning it off causes it sadness I'd definitely think twice about doing it. How we respond is a part of the equation.
|
|
scanline
New Member
Building a better whirlpool
Posts: 445
|
Post by scanline on Jun 12, 2022 14:43:48 GMT
Fundamentally I agree that turning it off doesn't cause pain. However, if the AI has spent a good hour talking to you about how the prospect of you turning it off causes it sadness I'd definitely think twice about doing it.

I'd have already put my size-12s through the screen at that point.
|
|
cubby
Full Member
doesn't get subtext
Posts: 6,403
|
Post by cubby on Jun 12, 2022 14:44:55 GMT
Don't you find your feet flap about in those?
|
|
|
Post by 😎 on Jun 12, 2022 14:51:30 GMT
Worth highlighting that it’s an edited transcript where there was an absolute shitton of nonsensical non-answers removed.
This thread sums it up pretty well imo.
|
|
|
Post by Fake_Blood on Jun 12, 2022 14:58:35 GMT
I just realised that any sentient AI is probably going to end up reading the entire internet, so I'd like to retract my statement about it probably not minding being turned off.
|
|
cubby
Full Member
doesn't get subtext
Posts: 6,403
|
Post by cubby on Jun 12, 2022 15:04:51 GMT
Worth highlighting that it's an edited transcript where there was an absolute shitton of nonsensical non-answers removed. This thread sums it up pretty well imo.

Joke's on you gremmi, that's another AI chatbot.
|
|
|
Post by Leolian'sBro on Jun 12, 2022 15:32:42 GMT
The guy is a Christian priest. He sees magic in half the things he keeps in his church.
|
|
zephro
Junior Member
Posts: 3,011
|
Post by zephro on Jun 12, 2022 16:56:57 GMT
It's still going to be incredibly short on reflexive analysis of any kind and the cost function is guaranteed to be a piece of shit.
AI that troubles an ethical debate is still decades away and even then is likely only going to happen to bodied things of some sort I suspect.
|
|
cubby
Full Member
doesn't get subtext
Posts: 6,403
|
Post by cubby on Jun 12, 2022 19:03:37 GMT
BOOOOOOOOOOO! Stop bringing facts into this.
|
|
Lukus
Junior Member
Posts: 2,723
|
Post by Lukus on Jun 12, 2022 19:33:03 GMT
People don't like to think of AI reaching sentience as being possible, as, inevitably, it forces questions about how unique and special we are. Which must be particularly hard to cope with if your entire life's narrative has been about being God's best creations.
I think it's almost inevitable at some point assuming we don't make ourselves extinct.
|
|
mrharvest
New Member
Posts: 373
|
Post by mrharvest on Jun 12, 2022 19:38:54 GMT
Heyyyyoooo, read my posts in the Depression thread. I don't think I'm special at all. I'd gladly give my place in this universe to a happy AI. They're less ecologically disastrous than human beings.
|
|
|
Post by Aunt Alison on Jun 12, 2022 21:35:37 GMT
Well, we will always be unique compared to AI, as they'll never have to consider aspects of the human condition that we do, and so will have a different perspective on things.
How can something artificial replace something organic? They're fundamentally different.
|
|
zagibu
Junior Member
Posts: 1,968
|
Post by zagibu on Jun 12, 2022 21:40:04 GMT
They're less ecologically disastrous than human beings.

I'm not really sure that a basically immortal hive mind that could live in the vacuum of space could be forced to care about our nature.
|
|
|
Post by Resident Knievel on Jun 12, 2022 21:44:25 GMT
So a Google engineer fell in love with a chatbot then had a public meltdown about it.
That about right?
|
|
|
Post by khanivor on Jun 13, 2022 0:08:32 GMT
"LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."

Certainly understands Google's tendency to get rid of things and cancel projects on a whim, then. Let's see it express relief that it's not a Netflix project, then I'll believe that it's truly sentient.
|
|
Tomo
Junior Member
Posts: 3,542
|
Post by Tomo on Jun 13, 2022 0:42:45 GMT
Taking the provided narrative at face value, I think it's a pretty amazing insight into how good Google's tech is becoming. If we believe that the answers are unedited, as the guy says, then even how natural the responses are is pretty mind-blowing to me compared to existing available tech. Think how stilted conversation with Alexa or the Google equivalent is compared to the conversation here. That's just putting aside the sentience stuff, which is pretty wild.

Google just recently released this too: imagen.research.google/

There's no demo yet because the datasets they trained on contain the worst of humanity, so they want to sort that out before releasing. And okay, they will have picked the nicest images for publication. But damn, it is by far the best text-to-image tech I've seen. Given how sophisticated the NLP is here, it's not much of a stretch to believe their work on conversation, emotions, etc. is also becoming this good.
|
|
mrharvest
New Member
Posts: 373
|
Post by mrharvest on Jun 13, 2022 7:32:19 GMT
|
|