Why humans learn faster than AI—for now

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
Then I decided to ask ChatGPT about something that I knew didn't exist: a cycloidal inverted electromagnon. I wrote my thesis about electromagnons, but to be doubly sure, I checked that there was no such thing (it's been ca. 7 years since my defense). ChatGPT thought differently:
I left the conversation with an intense feeling of uncanniness: I had just experienced a parallel universe of plausible-sounding, non-existent phenomena, confidently supported by citations to non-existent research. The last time I felt this way was when I attended a creationist lecture.

We will be fed hallucinations indistinguishable from the truth, written without grammatical mistakes, supported by hallucinated evidence, and passing all first critical checks. With such models widely available, how will we be able to distinguish a real pop-sci article from a fake one?
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
https://www.extremetech.com/extreme...t-darpa-robot-by-hiding-under-a-cardboard-box

DARPA was quickly humbled. Scharre writes that all eight Marines were able to defeat the robot using techniques that could have come straight out of a Looney Tunes episode. Two of the Marines somersaulted toward the center of the traffic circle, thus using a form of movement the robot hadn’t been trained to identify. Another pair shuffled toward the robot under a cardboard box. One Marine even stripped a nearby fir tree and was able to reach the robot by walking “like a fir tree” (the meaning of which Twitter users are still working to figure out).
 

MrSalts

Joined Apr 2, 2020
2,767
https://www.extremetech.com/extreme...t-darpa-robot-by-hiding-under-a-cardboard-box

DARPA was quickly humbled. Scharre writes that all eight Marines were able to defeat the robot using techniques that could have come straight out of a Looney Tunes episode. Two of the Marines somersaulted toward the center of the traffic circle, thus using a form of movement the robot hadn’t been trained to identify. Another pair shuffled toward the robot under a cardboard box. One Marine even stripped a nearby fir tree and was able to reach the robot by walking “like a fir tree” (the meaning of which Twitter users are still working to figure out).
So, what are the next 8 marines going to do to fool it?
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
So, what are the next 8 marines going to do to fool it?
Make friends with it before it tells the others.

https://www.economist.com/technolog...ion-and-destruction-can-still-blind-the-enemy
Deception and destruction can still blind the enemy
http://let.iiec.unam.mx/node/4100

https://arxiv.org/pdf/1610.08401.pdf
Can we find a single small image perturbation that fools a state-of-the-art deep neural network classifier on all natural images? We show in this paper the existence of such quasi-imperceptible universal perturbation vectors that lead to misclassify natural images with high probability. Specifically, by adding such a quasi-imperceptible perturbation to natural images, the label estimated by the deep neural network is changed with high probability (see Fig. 1). Such perturbations are dubbed universal, as they are image agnostic. The existence of these perturbations is problematic when the classifier is deployed in real-world (and possibly hostile) environments.
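The attack is easy to state in code: a single fixed perturbation v, added to every image, flips the model's label most of the time. Below is a minimal sketch of how that "fooling rate" would be measured, assuming a hypothetical classify() function as a stand-in for a trained network; the paper's actual algorithm for constructing v (an iterative aggregation of per-image minimal perturbations) is not shown.

```python
import numpy as np

def fooling_rate(images, v, classify, clip=(0.0, 1.0)):
    """Fraction of images whose predicted label changes when the single,
    image-agnostic perturbation v is added to each of them."""
    flipped = 0
    for x in images:
        x_adv = np.clip(x + v, *clip)        # perturbed image, kept in valid range
        if classify(x_adv) != classify(x):   # label changed -> classifier fooled
            flipped += 1
    return flipped / len(images)

# Illustration with a stand-in classifier (not a real network):
rng = np.random.default_rng(0)
images = [rng.random((8, 8)) for _ in range(100)]   # toy 8x8 "images"
classify = lambda x: int(x.mean() > 0.5)            # hypothetical classifier
v = np.full((8, 8), 0.02)                           # one small, fixed perturbation
print(f"fooling rate: {fooling_rate(images, v, classify):.2f}")
```

The unsettling part is that v is computed once, offline, and then works against images the attacker has never seen.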
 

MrAl

Joined Jun 17, 2014
11,389
After looking at the AI images in this thread again, I remembered I had worked with a simpler AI program and an AI calculator, plus one I designed myself that creates human words.
The AI calculator was probably the most interesting because it could come up with formulas, like the formula for the circumference of a circle given the diameter. Sometimes it took longer than other times to get a good formula for different things.
The human-words program wasn't too difficult because it only has to deal with the alphabet, not complete 2D images.
The simpler AI program was interesting, but it takes a great deal of teaching to get it to do anything even close to useful.
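A formula-finder like that can be sketched as a tiny symbolic-regression search: propose candidate formulas, score each against example data, and keep the best. This is a minimal illustrative sketch under that assumption, not the actual calculator program:

```python
import math
import random

# (diameter, circumference) training pairs
data = [(d, math.pi * d) for d in (1.0, 2.5, 4.0, 10.0)]

# candidate formulas: a description paired with f(d, c), where c is an unknown constant
candidates = [
    ("c * d",    lambda d, c: c * d),
    ("c + d",    lambda d, c: c + d),
    ("c * d**2", lambda d, c: c * d ** 2),
    ("c * d**3", lambda d, c: c * d ** 3),
]

def sq_error(f, c):
    """Sum of squared errors of formula f with constant c over the data."""
    return sum((f(d, c) - y) ** 2 for d, y in data)

random.seed(1)
best = None
for name, f in candidates:
    for _ in range(20000):            # crude random search over the constant
        c = random.uniform(0.0, 10.0)
        e = sq_error(f, c)
        if best is None or e < best[0]:
            best = (e, name, c)

e, name, c = best
print(f"best formula: {name} with c = {c:.4f}  (squared error {e:.3g})")
# the search settles on c * d with c close to 3.1416, i.e. circumference = pi * d
```

That also matches the observation that some formulas take longer to find than others: the more candidates and constants there are, the longer the search.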
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
https://arstechnica.com/information...beats-machine-at-go-in-human-victory-over-ai/
Man beats machine at Go in human victory over AI

A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence.

Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.
The tactics used by Pelrine involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s own groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability, even when the encirclement was nearly complete, Pelrine said.

“As a human it would be quite easy to spot,” he added.

The discovery of a weakness in some of the most advanced Go-playing machines points to a fundamental flaw in the deep-learning systems that underpin today’s most advanced AI, said Stuart Russell, a computer science professor at the University of California, Berkeley.

The systems can “understand” only specific situations they have been exposed to in the past and are unable to generalize in a way that humans find easy, he added.

“It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines,” Russell said.
"Look!! Squirrel!"
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
https://mcraenglish.weebly.com/uploads/1/5/8/7/15876062/the_machine_that_won_the_war01.pdf
Asimov, Isaac - The Machine That Won the War
He held the last coin between his fingers, staring absently at it. "Multivac is not the first computer, friends, nor the best-known, nor the one that can most efficiently lift the load of decision from the shoulders of the executive. A machine did win the war, John; at least a very simple computing device did; one that I used every time I had a particularly hard decision to make."

With a faint smile of reminiscence, he flipped the coin he held. It glinted in the air as it spun and came down in Swift's outstretched palm. His hand closed over it and brought it down on the back of his left hand. His right hand remained in place, hiding the coin.

"Heads or tails, gentlemen?" said Swift.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
https://www.sciencedaily.com/releases/2023/10/231025110653.htm
'Dim-witted' pigeons use the same principles as AI to solve tasks
Study shows power of associative learning in challenging situations

Date: October 25, 2023
Source: Ohio State University
Summary: A new study provides evidence that pigeons tackle some problems just as artificial intelligence would -- allowing them to solve difficult tasks that would vex humans.
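The "same principles as AI" here are associative learning with error-driven correction, which is essentially the delta rule that also underlies neural-network training. A minimal sketch on a made-up categorization task (hypothetical stimuli and weights, not the study's actual experiment):

```python
import random

def delta_rule(trials=2000, lr=0.1, seed=0):
    """Error-driven associative learning on a toy categorization task:
    stimuli have 4 binary features, and the correct category happens to
    depend only on feature 0 -- the learner must discover that."""
    rng = random.Random(seed)
    w = [0.0] * 4                                 # associative strengths
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(4)]
        target = float(x[0])                      # category is feature 0
        prediction = sum(wi * xi for wi, xi in zip(w, x))
        error = target - prediction               # prediction error
        for i in range(4):                        # delta-rule update
            w[i] += lr * error * x[i]
    return w

print(delta_rule())  # the weight on feature 0 approaches 1, the rest approach 0
```

No reasoning, no rules: just repeated trials and error correction, which is apparently enough for both pigeons and the models.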
 

Ya’akov

Joined Jan 27, 2019
9,071
Because humans are the creators behind computers and AI, it's important to remember that these technologies don't instruct humans. Instead, humans are the driving force behind the creation and development of computers and AI.
Welcome to AAC.

This argument is question-begging. You are asserting that because humans created AI and it doesn't "instruct humans," this limits the rate at which AI can learn to less than human speed. But you make no argument concerning how this works.

It is very easy to see that a locomotive is much stronger than a human, who created it. In what way is the AI case different? How does the creator of something set limits on its performance in regard to what it was created to do?

You have some unstated assumptions and maybe no rationale for them. If you are suggesting this limit is self-evident, neither needing nor being susceptible to proof by facts and logic, well, that's easy to reject. But if you are saying there is a demonstrable mechanism here (which, frankly, sounds like magic to me), can you explain what I am missing?
 

Ya’akov

Joined Jan 27, 2019
9,071
https://www.sciencedaily.com/releases/2023/10/231025110653.htm
'Dim-witted' pigeons use the same principles as AI to solve tasks
Study shows power of associative learning in challenging situations
Date: October 25, 2023
Source: Ohio State University
Summary: A new study provides evidence that pigeons tackle some problems just as artificial intelligence would -- allowing them to solve difficult tasks that would vex humans.
I didn't read the article, but if pigeons can "solve difficult tasks that would vex humans," I'm afraid that "dim-witted" means something very different from what I would take it to mean.

I would guess these vexing problems are actually simple for pigeons while being tough for humans, much the way that knowing that pigeons are, in fact, pigeons and that they differ from humans in various conceptual ways is well beyond pigeon brains while being trivial for humans.

Who would have guessed that the difficulty of tasks is contextual and relative?
 

Ya’akov

Joined Jan 27, 2019
9,071
Out of curiosity, I asked ChatGPT (3.5) about this question. I think it gave a pretty good answer.

The statement that AI systems learn more slowly than humans may not always hold true and depends on various factors, including the specific task, the complexity of the learning algorithm, and the data available. However, there are certain reasons why AI systems may appear to learn more slowly compared to humans in some contexts:

1. Data Efficiency:
Humans are highly efficient at learning from relatively small amounts of data. We can generalize from a few examples and apply knowledge to a wide range of situations. Many AI systems, especially deep learning models, may require large amounts of labeled data for effective learning.​

2. Transfer Learning:
Humans can transfer knowledge from one task to another, leveraging past experiences to accelerate learning in new domains. While transfer learning is an active area of research in AI, it's not always as flexible or efficient as human transfer learning.​

3. Common Sense and Background Knowledge:
Humans bring a wealth of background knowledge and common sense reasoning to new situations. AI systems often lack this innate understanding and may need explicit training data for a wide range of scenarios to develop similar capabilities.​

4. Adaptability:
Humans are exceptionally adaptable and can quickly learn and respond to changes in their environment. AI systems may require retraining or adjustments to adapt to new conditions, and this process can be time-consuming.​

5. Complexity of Tasks:
Some tasks that are simple for humans, such as understanding natural language or recognizing objects in a scene, can be highly complex for AI systems. Training models for tasks with high levels of abstraction and variability can be challenging and time-consuming.​

6. Real-time Learning:
Humans can learn in real-time, continuously updating their understanding of the world. Some AI systems, especially those trained in batch mode, may not have the capability to learn dynamically from new data as it becomes available.​

7. Innate Cognitive Abilities:
Humans possess innate cognitive abilities, such as curiosity, creativity, and the ability to form abstract concepts. Replicating these qualities in AI systems is a complex task, and current models may lack some of these essential human-like characteristics.​

It's important to note that AI systems excel in certain domains and tasks, often outperforming humans in terms of speed, accuracy, and scalability. However, the nature of learning in AI is different from human learning, and there are areas where humans still demonstrate superior adaptability and efficiency. Ongoing research in AI aims to address these challenges and make AI systems more versatile, efficient, and human-like in their learning capabilities.
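Point 2 in that answer has a very concrete shape in everyday practice: reuse a trained model as a frozen feature extractor and train only a small new layer for the new task. A minimal sketch of that pattern, with a random projection standing in for real pretrained features (all names and numbers here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's feature extractor (weights frozen).
W_pre = 0.25 * rng.normal(size=(16, 8))
def features(x):
    return np.tanh(x @ W_pre)                 # never updated below

# New task with only 200 labeled examples.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy labels for the new task

F = features(X)                               # frozen features, computed once
head = np.zeros(8)
lr = 0.5
for _ in range(1000):                         # train only the small logistic "head"
    p = 1.0 / (1.0 + np.exp(-F @ head))
    head -= lr * F.T @ (p - y) / len(y)       # gradient step on the head alone

acc = (((F @ head) > 0.0) == y).mean()        # accuracy is illustrative only
print(f"new-task accuracy with frozen features: {acc:.2f}")
```

Only 8 weights are learned for the new task; everything else is reused, which is the data-efficiency trick the quoted answer is pointing at.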
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,086
Out of curiosity, I asked ChatGPT (3.5) about this question. I think it gave a pretty good answer.
…
The AI answer is just a rephrasing of previous human learning (taking work created by humans and regurgitating it), so I expect it to be a good answer.
 

DC_Kid

Joined Feb 25, 2008
1,072
Can AI/ML really "learn" anything? Technologies created by humans really can't do that; they can only use existing data sets (knowledge) to derive a limited answer set. AI/ML cannot solve any complex problems, like time travel or big universe questions.
 

Ya’akov

Joined Jan 27, 2019
9,071
Can AI/ML really "learn" anything? Technologies created by humans really can't do that; they can only use existing data sets (knowledge) to derive a limited answer set. AI/ML cannot solve any complex problems, like time travel or big universe questions.
What definition of "learn" are you using?
 