Why humans learn faster than AI—for now

Ya’akov

Joined Jan 27, 2019
9,069
I asked GPT-2 what it thought of the poster's assertion and it said:
However, we need to understand that AI is not a silver bullet and there are several challenges that will be faced in the future that are the foundation of an interesting exploration for artificial intelligence that involves humans.

The technology is only capable to do a subset of tasks that have to be addressed and that’s where humans come in to do the decision making by assisting the machine to make decisions.

This is the opportunity for us as researchers, teachers and entrepreneurs in this field to help shape the future AI.

In the words of Stephen Hawking “Computers can’t think. Humans can, and that’s why they rule the world.”
Though I can't tell if it is disagreeing or agreeing...
 

strantor

Joined Oct 3, 2010
6,782
I asked GPT-2 what it thought of the poster's assertion and it said:


Though I can't tell if it is disagreeing or agreeing...
As James Brown once said, talkin' loud and saying nothing.

I've been trying to nail down exactly what it is about the posts from AI/spambots/language models/whatever you call them (I'll call them bots) that makes them obviously not from a person.

"Talking loud and saying nothing" is pretty close but I don't think that's the complete explanation. Real people do that to, but it's usually obvious when it's a person doing it vs when it's a bot. It seems like these rarely express any "opinions" - "... I can't tell if it is disagreeing or agreeing" is a common theme. But again, people do that too.

The GPT2 reply is everything its creators claim: "...synthetic text samples of unprecedented quality...lengthy continuation...outperforms other language models...language tasks like question answering, reading comprehension, summarization, and translation ... chameleon-like... realistic and coherent" but despite all that, its hallmark is still synthetic.

Is it just because it doesn't state any opinion? It is trained from 40gb of internet data, surely it ran across an opinion or two in all that? After all, it seems that's what most of the internet is now, opinions. For the bots to never express any, does that indicate that they must be trained not to (lest they say something that reflects poorly on their creators)? If bots stated opinions sans word salad, would they still come across as synthetic? Or is there actually some "soul" to human-written text that other humans innately and subconsciously identify, that will always be missing from bot-speech? If so, what's the "soul?"
 
Last edited:

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
I've been trying to nail down exactly what it is about the posts from AI/spambots/language models/whatever you call them (I'll call them bots) that makes them obviously not from a person.

"Talking loud and saying nothing" is pretty close but I don't think that's the complete explanation. Real people do that to, but it's usually obvious when it's a person doing it vs when it's a bot. It seems like these rarely express any "opinions" - "... I can't tell if it is disagreeing or agreeing" is a common theme. But again, people do that too.

The GPT2 reply is everything its creators claim: "...synthetic text samples of unprecedented quality...lengthy continuation...outperforms other language models...language tasks like question answering, reading comprehension, summarization, and translation ... chameleon-like... realistic and coherent" but despite all that, its hallmark is still synthetic.

Is it just because it doesn't state any opinion? It is trained from 40gb of internet data, surely it ran across an opinion or two in all that? If it stated opinions sans word salad, would it still come across as synthetic? Or is there actually some "soul" to human-written text that other humans innately and subconsciously identify, that will always be missing from bot-speech? If so, what's the "soul?"
Maybe because we can detect patterns that seem to be spliced together instead of a continuous flow of words from the human thought process. Things that are 'too' good, 'too' perfect, 'too' complete and 'too' logical make us wonder about the emotions that drive the comments. What makes it seem human are small language imperfections that seem to emerge outside of inductive and deductive logic. Humans know we can skip words, use the wrong words, use horrible grammar and still connect the dots. What the bots lack is the ability to make typical human mistakes without it being obvious it's intentional.
 

cmartinez

Joined Jan 17, 2007
8,218
I've been trying to nail down exactly what it is about the posts from AI/spambots/language models/whatever you call them (I'll call them bots) that makes them obviously not from a person.

"Talking loud and saying nothing" is pretty close but I don't think that's the complete explanation. Real people do that to, but it's usually obvious when it's a person doing it vs when it's a bot. It seems like these rarely express any "opinions" - "... I can't tell if it is disagreeing or agreeing" is a common theme. But again, people do that too.

The GPT2 reply is everything its creators claim: "...synthetic text samples of unprecedented quality...lengthy continuation...outperforms other language models...language tasks like question answering, reading comprehension, summarization, and translation ... chameleon-like... realistic and coherent" but despite all that, its hallmark is still synthetic.

Is it just because it doesn't state any opinion? It is trained from 40gb of internet data, surely it ran across an opinion or two in all that? After all, it seems that's what most of the internet is now, opinions. For the bots to never express any, does that indicate that they must be trained not to (lest they say something that reflects poorly on their creators)? If bots stated opinions sans word salad, would they still come across as synthetic? Or is there actually some "soul" to human-written text that other humans innately and subconsciously identify, that will always be missing from bot-speech? If so, what's the "soul?"
You might be on to something... what is an opinion but a personal projection or prediction based on what data we have available? ... and it's also a judgement on whether something is (or will be) good or bad for us either as individuals or as a species?

So far, a so-called AI program is completely incapable of asserting an opinion simply because it cannot tell right from wrong regarding ethical dilemmas ... and it doesn't even have the wish or drive to confront or try to solve those kinds of problems.

Simply put, it lacks a sense of Will ...
 

MrAl

Joined Jun 17, 2014
11,389
For what it is worth I think this topic is closely related to self driving cars.

AI in general is probably too general, at least for today. It has to be broken down into "Expert Systems" where each one can focus on some particular task. To have one AI program that can actually do virtually anything is, I think, just too far out of reach for now and will probably require quantum engineering to meet the requirements, if nothing else at least for the memory space.

I may have said this before, but the game of chess is a good example of how an expert system might handle solving novel problems. For a machine to play chess it has to, of course, have the rules programmed in, which is purely logic and fairly easy to do, but also some semi-fixed knowledge such as the opening book repertoire, which has to be updated regularly with new games played by Grand Masters around the world. The most striking part though is the heuristics. It has to be able to learn shortcuts from experience. Humans do this quite easily. Heuristics allow a program to find a solution much quicker than any brute-force algorithm, which could literally take hours and hours. Even though that solution may not always be right, it gets better as new experience accumulates. Any good AI program would have to have a way to do this and execute it quickly as external conditions change.
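
To make the search-plus-heuristics idea concrete, here is a minimal, game-agnostic sketch of a depth-limited search that falls back on an evaluation function when its budget runs out, instead of brute-forcing every line to the end. The toy counting game and all the names are purely illustrative, not taken from any real chess engine.

```python
# Minimal sketch: depth-limited minimax with a heuristic evaluation at the cutoff.
# Toy game: players alternately add 1 or 2 to a running total; whoever makes
# the total reach 10 wins.  Scores are from the maximizing player's point of view.

def minimax(total, depth, maximizing):
    if total >= 10:                 # terminal: the player who just moved won
        return -1 if maximizing else 1
    if depth == 0:                  # search budget spent: trust the heuristic
        return heuristic(total, maximizing)
    scores = [minimax(total + m, depth - 1, not maximizing) for m in (1, 2)]
    return max(scores) if maximizing else min(scores)

def heuristic(total, maximizing):
    """Stand-in for a learned evaluation.  In a real engine this is an
    approximation tuned from experience; here we use the known pattern that
    the side to move loses when the distance to 10 is a multiple of 3."""
    side_to_move_wins = (10 - total) % 3 != 0
    score = 0.5 if side_to_move_wins else -0.5
    return score if maximizing else -score

print(minimax(0, 4, True))    # shallow search + heuristic: fast, approximate score
print(minimax(0, 20, True))   # deep enough to solve the toy game exactly: returns 1
```

The point of the cutoff is exactly the trade-off described above: the shallow call answers quickly by trusting the evaluation, while the exhaustive call is only feasible because this game is tiny.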

So for one we are talking about a LOT of memory space as well as very fast processing. Somehow the human is able to handle this pretty well except in extreme cases.
 

strantor

Joined Oct 3, 2010
6,782
You might be on to something... what is an opinion but a personal projection or prediction based on what data we have available? ... and it's also judgement on weather something is (or will be) good or bad for us either as individuals or as a species?

So far, a so-called AI program is completely uncapable of asserting an opinion simply because it cannot tell right from wrong regarding ethical dilemmas ... and it doesn't even have the wish or drive to confront or try to solve said kinds of problems.

Simply put, it lacks a sense of Will ...
I don't understand why the bot would need a sense of will, or to know right from wrong, to express an opinion. It's trained up from a body of data that (presumably) includes many opinion pieces. All it does (I think?) is regurgitate spliced-together pieces of what it was taught, and it seems to me that would naturally include some pretty strong opinions. What I would expect is for it to be all over the place, inconsistently spouting opinion: "the movie was one of the best this year. It was written by a drunk toddler with a base sense of humor." It doesn't do that, so I suspect it is trained to identify opinion vs fact, and avoid repeating opinion.

How then, does it handle certain subjects where the matter is considered fact by some and opinion by others? Let's see (from https://app.inferkit.com/demo) ...

(The bold part is the seed I typed to get the AI going)
The fact is, not all vaccines are safe. Some of them cause reactions and side effects. Our recent investigation in The Sunday Times has uncovered some shocking revelations that show vaccine ingredients may harm children and not always do what they are supposed to do.

We found traces of the main ingredient in the yellow fever vaccine being used in our multiple sclerosis vaccine, causing people to go blind in Singapore.
It seems to know where I was going with that, but still (AFAIK) stuck to facts. It even used words like "shocking revelations" that couldn't have come from anywhere but an anti-vax clickbait article. Let's try a different seed...
The fact is, all vaccines are safe. However, many parents fear the risk of contracting a serious illness from their children's shots. There is no such thing as a 100% safe vaccine. But getting the flu shot protects you, your family, and the people around you who can't receive a vaccine.

The flu vaccine is not as straightforward as you might think.

The flu shot does not contain actual flu virus and is no more effective than the human immune system would be.

As your body develops antibodies to protect itself against the flu virus, they can attack and weaken some of the components of the vaccine
Again it seems to know what my opinion must be based on what I wrote, and continued the theme without injecting any further opinion (AFAIK). It did print a contradiction, but it was contradicting me, not itself. How about a harder, more opinionated seed:
Any vaccine recommended by the government is safe and you should get it.

It also doesn't hurt to get your dog vaccinated against parvo, distemper, etc.

These illnesses are often passed to other pets in the neighborhood or by being kept in a "hot" environment where more dogs are present.

Getting a booster vaccination is good for 2 years, so even if your dog is vaccinated now, you can still get him vaccinated every 2 years.

Dogs can get the flu vaccine for the same price you would pay for your human vaccine, it's just not as common.

Some people advocate only vaccinating against certain diseases, I'm not one of those people.
Whoah! It immediately veered off topic! I guess it wanted nothing to do with that one. Or am I assigning motive to something inanimate? It's hard to tell....
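
For anyone who wants to try these seed continuations locally rather than through the InferKit demo, here's a minimal sketch using the publicly released GPT-2 checkpoint via the Hugging Face transformers library. The demo site runs its own larger model, so outputs won't match; the seeds below are just the ones from this post.

```python
# Minimal sketch: continue a seed prompt with the public GPT-2 checkpoint.
# Requires: pip install transformers torch
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # sampling is random; fix the seed so runs are repeatable

seeds = [
    "The fact is, not all vaccines are safe.",
    "The fact is, all vaccines are safe.",
    "Any vaccine recommended by the government is safe and you should get it.",
]

for seed in seeds:
    out = generator(seed, max_new_tokens=60, do_sample=True, temperature=0.9)
    print(out[0]["generated_text"], "\n---")
```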

What would you get if you trained a bot using 40 GB of ramblings from other bots? What if it were kept on a diet of AAC dialog between visionfast and killivolt? I think there would have to be some kind of new paradox which precludes any prediction of the outcome.
 
Last edited:

xox

Joined Sep 8, 2017
838
I don't understand why the bot would need a sense of will, or to know right from wrong, to express an opinion. It's trained up from a body of data that (presumably) includes many opinion pieces. All it does (I think?) is regurgitate spliced-together pieces of what it was taught, and it seems to me that would naturally include some pretty strong opinions. What I would expect is for it to be all over the place, inconsistently spouting opinion: "the movie was one of the best this year. It was written by a drunk toddler with a base sense of humor." It doesn't do that, so I suspect it is trained to identify opinion vs fact, and avoid repeating opinion.

How then, does it handle certain subjects where the matter is considered fact by some and opinion by others? Let's see (from https://app.inferkit.com/demo) ...

(The bold part is the seed I typed to get the AI going)

It seems to know where I was going with that, but still (AFAIK) stuck to facts. It even used words like "shocking revelations" that couldn't have come from anywhere but an anti-vax clickbait article. Let's try a different seed...

Again it seems to know what my opinion must be based on what I wrote, and continued the theme without injecting any further opinion (AFAIK). It did print a contradiction, but it was contradicting me, not itself. How about a harder, more opinionated seed:

Whoah! It immediately veered off topic! I guess it wanted nothing to do with that one. Or am I assigning motive to something inanimate? It's hard to tell....

What would you get if you trained a bot using 40 GB of ramblings from other bots? What if it were kept on a diet of AAC dialog between visionfast and killivolt? I think there would have to be some kind of new paradox which precludes any prediction of the outcome.
Most of these AIs are to one degree or another little more than glorified Markov chain driven systems. See here for a fairly simple working example of these structures in action.
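
To illustrate the "splice together what you've already seen" idea, here is a tiny word-level Markov chain text generator. It's only a sketch of the general structure, not the example linked above, and a transformer like GPT-2 is far more sophisticated than this.

```python
# Tiny word-level Markov chain text generator: learn which words follow which,
# then generate by repeatedly sampling a successor of the current word.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    successors = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def generate(successors, start, length=20):
    word, output = start, [start]
    for _ in range(length):
        if word not in successors:   # dead end: no observed successor
            break
        word = random.choice(successors[word])
        output.append(word)
    return " ".join(output)

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat chased the dog around the mat")
model = train(corpus)
print(generate(model, "the"))
```

Everything it emits is a recombination of fragments from its training text, which is why the output is locally plausible but has no overall point to make.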
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://spectrum.ieee.org/computational-cognitive-science
I'm just going to come out and say it: Human cognition might have nothing whatsoever to do with computation.

Yes, I am well aware that the computational theory of mind is a deeply entrenched one, starting with the work in the early 1940s of Warren McCulloch and Walter Pitts in Chicago, and then later at MIT, where they were joined by Jerome Lettvin and Humberto Maturana. But over the course of human history, lots of theories have been widely but wrongly held, sometimes for decades.
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.

This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.
 

MrAl

Joined Jun 17, 2014
11,389
Most of these AIs are to one degree or another little more than glorified Markov chain driven systems. See here for a fairly simple working example of these structures in action.
Maybe the will is needed to start the process of thinking. Why would something want to think if it had no mind to do so? It has to have something driving it even when the reasons to do so are scarce, maybe.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
"Did you ever wonder what will the future of health-care hold? How will advances in medical A.I. change our lives? "

Innovation is amazing..... until it's not...
 

Wolframore

Joined Jan 21, 2019
2,609
I’ve come to realize that there are examples of AI being used somewhat effectively:

USPS: address recognition (Wonder why your mail is sometimes sent to the wrong city?)
Banks: to recognize check amounts (wonder why it asks if the amount is correct?)
Stoplight cameras: license plate recognition to generate revenue and cause panic stops on yellow.
Vision systems: to recognize defects, items and position in manufacturing (like your pick and place).

These boring tasks are done very well by “AI”, without tiring and very quickly.
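
As a rough illustration of how narrow these tasks really are, here is a minimal digit-recognition sketch of the kind of classifier that sits behind check-amount or ZIP-code reading, using scikit-learn's built-in digits dataset. It's purely illustrative, not any bank's or the USPS's actual system.

```python
# Minimal sketch of narrow "AI" for digit recognition (check amounts, ZIP codes):
# train a classifier on 8x8 digit images and measure its accuracy.
# Requires: pip install scikit-learn
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()                        # 1797 labelled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)  # simple support-vector classifier
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```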

I wonder if we are a little hard on them by expecting them to supplant people, and whether the developers are over-confident and overreaching. AIs are like little kids at the moment; most are not even that.

It takes years to teach a child to understand and categorize information correctly, and we constantly adjust their understanding when we find a misunderstanding. My 2-year-old grandson loves to say he gets it; he will be right one day. Humans are great, but we have faulty memory, judgement and decisions too.

We tend to anthropomorphize everything. The terminology is merely an example. (Does a hurricane's eye actually see anything? Why does a river have a mouth? ...)
 
Last edited:

Pushkar1

Joined Apr 5, 2021
416
I see some similarity between humans and AI, like driving a car.

1) An AI system needs time to be trained. An untrained human also takes time to learn to drive a car.

2) A trained person can still cause a road accident. A self-driving car can also be the cause of a road accident.

What matters is making the right decision at the right time. In terms of driving a car, I think humans can make better decisions than an AI system.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
These things are on the equivalent of a Disneyland test-track. Not a very impressive display of actual AI.
Seoul will kick off its self-driving vehicle service on Tuesday, one major step toward commercializing its driverless car project.

The Seoul Metropolitan Government announced Monday that it will operate three driverless cars open to the public that will autonomously circulate around a designated route in the self-driving test bed area of Sangam-dong in Mapo District, western Seoul.

 

ericgibbs

Joined Jan 29, 2010
18,766
These things are on the equivalent of a Disneyland test-track. Not a very impressive display of actual AI.
hi nsa,

Why are you so critical of this early example of self-driving cars?

The very earliest automobiles required a man to walk in front of the vehicle waving a red flag; we have progressed from there.

All this type of development starts off at a simplistic level.
E
 

Pushkar1

Joined Apr 5, 2021
416
I don't want a self-driving car to make a mistake and hit me and then be told that self-driving cars are better than the average person.

A human uses his intelligence wisely by looking at the real situation.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
hi nsa,

Why are you so critical of this early example of self-driving cars?

The very earliest automobiles required a man to walk in front of the vehicle waving a red flag; we have progressed from there.

All this type of development starts off at a simplistic level.
E
I'm critical because it's hype, and potentially dangerous because of the semi-autonomous hand-off issues with the current Level 2-3 systems that lull the driver into thinking the car is capable of real self-driving.

https://www.tu-auto.com/adas-level-2-3-avs-are-hazards-experts-warn/
Dorn adds that she would like to see all AVs achieve Level 4 autonomy simultaneously to avoid a mix of autonomous and semi-autonomous cars on the roads together. She warns Levels 2 and 3 are “dangerous” as they cause drivers to “become intermittent operators”, driving the vehicle themselves for parts of journeys then becoming over-reliant on self-driving tech for others.


University of Sussex object recognition researcher Dr Graham Hole was also questioned for the study and dubs Levels 2 and 3 “the worst of all worlds”. He says: “Human beings are rubbish at being vigilant – vigilance declines after about 20 minutes. With semi-autonomous you are reducing the driver to monitoring the system on the off-chance something goes wrong. Most of the time nothing goes wrong, leading the driver to have massive faith in the system in all conditions, which of course isn’t always the case.”
https://spectrum.ieee.org/the-big-problem-with-selfdriving-cars-is-people#toggle-gdpr

The Big Problem With Self-Driving Cars Is People

And we’ll go out of our way to make the problem worse
 
Last edited:

ericgibbs

Joined Jan 29, 2010
18,766
These things are on the equivalent of a Disneyland test-track. Not a very impressive display of actual AI.
Hi nsa,
The above is what I was referring to

Not

I'm critical because it's hype, and potentially dangerous because of the semi-autonomous hand-off issues with the current Level 2-3 systems that lull the driver into thinking the car is capable of real self-driving.
Which I agree with.:)
E
 