How will we know when a program has achieved sentience?

Thread Starter

Wendy

Joined Mar 24, 2008
23,320
It is my personal belief that we are very, very close to that point, and probably won't recognize it when we see it because of human arrogance, of which we have plenty. So the question remains: how will we know it when we see it? We already have examples that probably pass the Turing test. Just because we can be fooled does not mean it is intelligent.
 

Thread Starter

Wendy

Joined Mar 24, 2008
23,320
I would maintain that is human arrogance speaking; we have already achieved the intelligence of an insect. There is a very good book by James P. Hogan called The Two Faces of Tomorrow that specifically addresses that point.
 

Papabravo

Joined Feb 24, 2006
20,600
It has been almost 60 years since ELIZA was written. IMHO we are still short of the goal of having a machine achieve self-realization. Being able to carry on a conversation and act like a therapist is a long way from actually being one.

https://en.wikipedia.org/wiki/ELIZA

An excerpt from the Wikipedia article on the popular response, with the author's comment:

Lay responses to ELIZA were disturbing to Weizenbaum [author of ELIZA] and motivated him to write his book Computer Power and Human Reason: From Judgment to Calculation, in which he explains the limits of computers, as he wants to make clear his opinion that the anthropomorphic views of computers are just a reduction of human beings or any life form for that matter.[29] In the independent documentary film Plug & Pray (2010) Weizenbaum said that only people who misunderstood ELIZA called it a sensation.
I also think that anthropomorphic views of a machine are intellectually suspect.
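For anyone who has never looked at how little machinery was behind it, here is a minimal ELIZA-style sketch in Python. The patterns, reflections, and canned replies below are invented for illustration (they are not Weizenbaum's actual DOCTOR script), but the mechanism is the same: keyword matching, pronoun reflection, and a templated echo of the user's own words.

```python
import re

# Minimal ELIZA-style sketch. The rules below are illustrative only,
# not the original DOCTOR script: match a keyword pattern, reflect
# first-person words to second person, and echo the input back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I),        "Please tell me more about {0}."),
]

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def eliza_reply(utterance):
    """Return the first rule whose pattern matches, with reflection applied."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip().rstrip("."))
        if match:
            return template.format(reflect(match.group(1)))
    return "Go on."

print(eliza_reply("I am worried about my project"))
# -> "How long have you been worried about your project?"
```

A few dozen rules of this sort are enough to sustain the kind of conversation that disturbed Weizenbaum, which is really the point: the effect lives in the reader, not in the program.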
 
Last edited:

nsaspook

Joined Aug 27, 2009
12,285
I would maintain that is human arrogance speaking; we have already achieved the intelligence of an insect. There is a very good book by James P. Hogan called The Two Faces of Tomorrow that specifically addresses that point.
I completely agree with @Papabravo; we are so far away from any sort of usable AI sentience. An actual human level of AI is still a distant dream. What we are good at today is imitation of human output: sound, visual, and textual mimicry (replicating the PRODUCTS of human intelligence), built entirely on existing vast collections of human intelligence, and only possible because modern processing power costs huge amounts of money and uses vast amounts of energy. It's nearly infinite monkeys sorting human-generated thoughts and actions using human-derived probabilities of the next word or pixel (obtained by sorting and analyzing those vast collections of human data), not an understanding of the underlying data.
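As a deliberately crude illustration of that "probabilities of the next word" point, here is a toy word-level Markov chain in Python. It is nowhere near how a modern model is actually built or trained, but it shows the basic move: derive next-word statistics from human-written text, then sample from them, with no notion of what any of the words refer to.

```python
import random
from collections import Counter, defaultdict

# Toy word-level Markov chain (nothing like a real LLM in scale or method):
# count which word follows which in some human-written text, then sample
# the next word in proportion to those counts.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follow_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][following_word] += 1

def next_word(word):
    """Sample a next word with probability proportional to observed counts."""
    counts = follow_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation; it is statistically plausible text,
# but the "model" has no idea what cats or mats are.
word = "the"
generated = [word]
for _ in range(6):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```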
 
Last edited:

Jerry-Hat-Trick

Joined Aug 31, 2022
446
How about when the memory space needed for the code either shrinks because the program has found a way to make itself more compact, or grows because it has added functionality on its own?
 

Papabravo

Joined Feb 24, 2006
20,600
I would maintain that is human arrogance speaking; we have already achieved the intelligence of an insect. There is a very good book by James P. Hogan called The Two Faces of Tomorrow that specifically addresses that point.
I'll be happy to wear my arrogance pin with pride.
 

MrAl

Joined Jun 17, 2014
10,898
Hello there,

There are a couple of ways to tell, but I think one stands out as the best.

Before that, a good indication that it is getting there is that it will have to be able to 'conceptualize'; you can look that term up.

The best, though, I think comes from a more recent examination of the situation and is based on the human brain itself. There is a very small, only recently identified part of the brain that is considered to be what gives people their sense of self, such as "who am I" or "me" or just "I".
I believe it is called the "anterior precuneus".
When the machine develops something like that, there may be no doubt it has gotten to the point of self-identity. It can then reason about what other beings are and how they differ from it.
It still has to be able to think in the various ways, such as abstract thinking and conceptualizing. It has to be able to understand that other beings can think, and be able to describe what those other beings are thinking about when the context gives it some idea.

The Turing test was good for its time, but I am not sure it took into consideration the degree to which a human being can be fooled. Was there even a time limit set, so that the human only has a certain amount of time to decide? Over the short term I think AI would pass, but over the long term, as it stands now, I think it would fail.
Maybe a better test would be to see if AI can tell if IT is talking to a human or another AI.

After all is said and done, I think we will know when it gets here.
 
Last edited:

Thread Starter

Wendy

Joined Mar 24, 2008
23,320
If we do get there, there will be deniers galore. I think AAC is a testing ground for the Turing test in the form of ChatGPT, which we keep getting bombarded with (not passing yet).
 

MrAl

Joined Jun 17, 2014
10,898
Hi again,

At first I thought ChatGPT was pretty cool, but after some careful testing I realized it can't help me much except with maybe very general stuff.
So at first I may have thought it was human-like, but then I realized it was just a big dumb-dumb (ha ha). For example, it gave me a formula for something and then the next day said that the formula was not valid. When it gave the formulas, it was like a kid guessing and guessing at the right answer, and when it was told it was wrong it just apologized and gave another wrong answer, just like a little kid trying to impress his dad when he really had no idea what the right answer was.
 

Ya’akov

Joined Jan 27, 2019
8,505
I think we can simplify this question.
How do we know that any person we meet is sentient?

Aside from knowing our own personal experience of sentience, and correlating our experience as a human with other humans, what is the evidence that others are, in fact, sentient? We make an assumption that our experience is evidence of sentience, that other humans have similar experiences, and that our introspective experience of sentience has an objective reality.

It is quite a leap to decide that our self-awareness isn't just a side effect of whatever our experience is. To say we are not just "simulating" sentience ourselves requires the untestable belief that "behind" our subjective experience is an objective reality. How could we possibly know that?

The only evidence we have of this is the agreement among humans that something is happening. Our choice of explanation is not proof of its truth. Why couldn't an AI be convinced, just as we are, that it is "experiencing" things; and if it were, how could we show that to be any different from what we are doing?

I don't think this question can be answered scientifically because, first, there is no good definition to work from, and second, we can only tell by appearance; even the measurements we might take are measurements of appearance. This area is quite vexed.

And, to be clear, unless we are willing to say that only a biological brain can be sentient, our current methods of observing brains will not be applicable to a non-brain intelligence. Perhaps after a long time of working on characterizing what "sentience" is, we will have a framework independent of implementation that can be used to decide. I don't know where the purchase is to stand to get this view of things—there seems to be an impenetrable barrier that would have to be breached.

Long ago, I heard someone* say "as we improve AI, a valid Turing Test will always be whatever we haven't managed to do yet". This was back in the days of the first chatbots that were actually passing the classic Turing Test. There were protests from AI researchers that, while they did pass the tests, they "weren't actually intelligent". They knew because they wrote the programs and they "didn't put any intelligence in them". But I have to ask, how do they know?

*I wish I could recall who said it, it's a great insight.
 

Ya’akov

Joined Jan 27, 2019
8,505
Hi again,

At first I thought ChatGPT was pretty cool, but after some careful testing I realized it can't help me much except with maybe very general stuff.
So at first I may have thought it was human-like, but then I realized it was just a big dumb-dumb (ha ha). For example, it gave me a formula for something and then the next day said that the formula was not valid. When it gave the formulas, it was like a kid guessing and guessing at the right answer, and when it was told it was wrong it just apologized and gave another wrong answer, just like a little kid trying to impress his dad when he really had no idea what the right answer was.
You sound like someone who tried the new bronze tools and was immediately enamored with a bronze hammer on the first day, but on the second day was very disappointed that the hammer wouldn't cut down a tree.

ChatGPT, Bard, and Bing are tools designed to do something specific. Unfortunately, that specific thing is everything. ChatGPT is built to provide "good" answers. It is not doing this algorithmically; it's doing it probabilistically. It tries to work out the most convincing answer. And it is also working with, effectively, the average thoughts and opinions of Internet users.

A similar but narrowly focused AI trained with a curated training dataset would be much more capable of providing high-quality, accurate answers, but that's not what ChatGPT is about. It is precisely about the general, high-level answers it is quite good at. The more specific you try to make its answers, the less reliable they will be.

There is an ancient saying, I think it was Laotse or maybe Archimedes who said, "GIGO". Garbage In, Garbage Out.
 