Why humans learn faster than AI—for now

Thread Starter

nsaspook

Joined Aug 27, 2009
6,215
https://www.technologyreview.com/s/610434/why-humans-learn-faster-than-ai-for-now/
But while this work is impressive, it highlights one of the significant limitations of deep learning. Compared with humans, machines using this technology take a huge amount of time to learn. What is it about human learning that allows us to perform so well with relatively little experience?
...
By contrast, the game is hard for machines: many standard deep-learning algorithms couldn’t solve it at all, because there is no way for an algorithm to evaluate progress inside the game when feedback comes only from finishing.

The best machine performer was a curiosity-based reinforcement-learning algorithm that took some four million keyboard actions to finish the game. That’s equivalent to about 37 hours of continuous play.
It's not surprising that these brute-force machine learning or "deep" learning systems have problems when there is little to distinguish a 'good' path from a 'bad' one. There is very little intelligence in current AI that people didn't already program into it.


Leisure Suit Larry in the Land of the Lounge Lizards
 

spinnaker

Joined Oct 29, 2009
7,815
https://www.technologyreview.com/s/610434/why-humans-learn-faster-than-ai-for-now/


It's not surprising that these brute-force machine learning or "deep" learning systems have problems when there is little to distinguish a 'good' path from a 'bad' one. There is very little intelligence in current AI that people didn't already program into it.


Leisure Suit Larry in the Land of the Lounge Lizards
I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?

And how is the data fed in? We hear about AI computers "reading" medical journals. Is it actually understanding the text in the files? Or is that data simply converted into some kind of database and then loaded into the AI computer?
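For the narrow case of picking an action, that intuition isn't far off: a trained network assigns each candidate a score, the scores are squashed into probabilities, and the system picks (or samples) from the top. A toy Python sketch, with entirely made-up action names and scores:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained network might assign to candidate moves.
action_scores = {"move_left": 1.2, "move_right": 3.1, "jump": 0.4}
probs = softmax(list(action_scores.values()))

# Pick the action with the highest probability.
best = max(zip(action_scores, probs), key=lambda kv: kv[1])
print(best[0])  # prints: move_right
```

The hard part isn't this selection step; it's training the millions of weights that produce sensible scores in the first place.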
 

joeyd999

Joined Jun 6, 2011
4,231
https://www.technologyreview.com/s/610434/why-humans-learn-faster-than-ai-for-now/


It's not surprising that these brute-force machine learning or "deep" learning systems have problems when there is little to distinguish a 'good' path from a 'bad' one. There is very little intelligence in current AI that people didn't already program into it.


Leisure Suit Larry in the Land of the Lounge Lizards
A cosmic shift will occur when someone invents the digital equivalent of dopamine.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
6,215
I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?

And how is the data fed in? We hear about AI computers "reading" medical journals. Is it actually understanding the text in the files? Or is that data simply converted into some kind of database and then loaded into the AI computer?
I wish I understood exactly how AI works too.

With 'deep learning' there is really no learning or understanding in the classical-AI sense of mimicking a human brain. It works because we now have massive computing power able to crunch the huge databases the machines themselves generate when information is loaded into them. With deep learning the machine creates the database from the input data; we don't build it for them.
http://karpathy.github.io/2016/05/31/rl/
Now back to RL. Whenever there is a disconnect between how magical something seems and how simple it is under the hood I get all antsy and really want to write a blog post. In this case I’ve seen many people who can’t believe that we can automatically learn to play most ATARI games at human level, with one algorithm, from pixels, and from scratch - and it is amazing, and I’ve been there myself! But at the core the approach we use is also really quite profoundly dumb (though I understand it’s easy to make such claims in retrospect).
A 'learned' machine doesn't understand the physical-world difference between these two things, but it can recognize them from a database created from their images.
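Karpathy's "profoundly dumb" point can be seen in miniature. Here is a toy sketch of REINFORCE, the basic policy-gradient method his post describes, applied to a two-armed bandit with invented payoff probabilities: the update just nudges the preference of whatever action happened to be rewarded, yet the policy still ends up favoring the better arm.

```python
import math
import random

random.seed(0)

# Two-armed bandit: arm 1 pays off far more often than arm 0 (made up).
PAYOFF = [0.2, 0.8]

theta = [0.0, 0.0]  # one preference score per arm
LR = 0.1            # learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(2000):
    p = softmax(theta)
    arm = 0 if random.random() < p[0] else 1
    reward = 1.0 if random.random() < PAYOFF[arm] else 0.0
    # REINFORCE: theta += lr * reward * grad(log pi(arm)).
    # Only pulls that happened to be rewarded change anything.
    for a in range(2):
        grad = (1.0 if a == arm else 0.0) - p[a]
        theta[a] += LR * reward * grad

print(softmax(theta))  # probability mass ends up concentrated on arm 1
```

There is no model of the game, no plan, no understanding; just "do more of what got rewarded," repeated thousands of times.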
 

Thread Starter

nsaspook

Joined Aug 27, 2009
6,215
A cosmic shift will occur when someone invents the digital equivalent of dopamine.
We already have the digital equivalent of LSD-25.


https://www.ibtimes.co.uk/google-deepdream-robot-10-weirdest-images-produced-by-ai-inceptionism-users-online-1509518
After being fed millions of pictures, the image-recognition software created by Google enabled artificial neural networks to see shapes in images, creating strange, fantastic and psychedelic images that at times could be likened to impressionist art.
 

spinnaker

Joined Oct 29, 2009
7,815
I wish I understood exactly how AI works too.

With 'deep learning' there is really no learning or understanding in the classical-AI sense of mimicking a human brain. It works because we now have massive computing power able to crunch the huge databases the machines themselves generate when information is loaded into them. With deep learning the machine creates the database from the input data; we don't build it for them.
http://karpathy.github.io/2016/05/31/rl/


A 'learned' machine doesn't understand the physical-world difference between these two things, but it can recognize them from a database created from their images.
So basically exactly as I said? No real "thinking" going on? Just a very sophisticated search and probability algorithm?

It is my understanding that this is the way AI chess games work. They "simply" run through all of the possible moves and calculate the outcomes. I am by no means a good chess player, but it is my understanding that this is pretty much what human players do; the computer just works so much faster.
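That's essentially game-tree search (minimax). Chess has far too many positions to enumerate fully, so real engines add pruning (alpha-beta) and evaluation heuristics, but the core idea fits in a few lines. A toy version on Nim (players alternately take 1-3 counters; whoever takes the last one wins) rather than chess:

```python
def best_move(counters):
    """Exhaustive minimax for toy Nim.

    Returns (best number to take, outcome for the player to move:
    +1 for a forced win, -1 for a forced loss)."""
    best = (0, -2)
    for take in (1, 2, 3):
        if take > counters:
            break
        # Taking the last counter wins outright; otherwise the value is
        # the opponent's best outcome from the remainder, negated.
        outcome = 1 if take == counters else -best_move(counters - take)[1]
        if outcome > best[1]:
            best = (take, outcome)
    return best

print(best_move(5))  # prints: (1, 1) -- take one, leave a losing position
print(best_move(4))  # any take leaves the opponent a winning reply
```

In this game every position that is a multiple of 4 is lost for the player to move, and the search rediscovers that fact purely by enumeration, with no "insight" anywhere.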

The photo above illustrates very well how amazingly the human brain works. We really aren't doing any kind of search of images, or at least I don't think so. For some reason, we can easily tell the difference between a puppy and a muffin even when most of the data is hidden from us.
 

takao21203

Joined Apr 28, 2012
3,682
There are some fundamental things absent, such as:

1) Instinct of survival
2) Memory of incidents threatening one's own survival
3) Autonomous reproduction
4) Gaining access to and ownership of resources and territory
5) Rewards of all kinds

One of the basic determinants of life, of course, is survival. It's fundamental.
 
And there are things that Aplysia can do that Google could only dream of.
Yes, joeyd999, you are quite right. In fact, Eric Kandel received a Nobel prize for telling us about what they can do and how they can do it. But the key is not the neurotransmitter (any of them) so much as all those neurons.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
6,215
So basically exactly as I said? No real "thinking" going on? Just a very sophisticated search and probability algorithm?
There is something happening but it's unrelated to 'thinking'.

Because the massive amount of learning data generates a totally abstract representation of the original data, some current AI methods can be easily fooled if you understand how they work: a completely different input can be crafted to produce a pattern similar to the learned response. No human would think these images are what the computer 'thinks' they are.

http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf
One interesting implication of the fact that DNNs are easily fooled is that such false positives could be exploited wherever DNNs are deployed for recognizing images or other types of data. For example, one can imagine a security camera that relies on face or voice recognition being compromised. Swapping white-noise for a face, fingerprints, or a voice might be especially pernicious since other humans nearby might not recognize that someone is attempting to compromise the system.
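The flip side of those white-noise false positives is the adversarial-perturbation attack: move a normal input a small step in the direction the model is most sensitive to, and its answer flips even though a human sees no change. A toy sketch using a hand-made linear scorer (all numbers invented) and the gradient-sign trick from the fooling literature (for a linear model the gradient with respect to the input is just the weight vector):

```python
# A made-up linear "classifier": score = w . x, class 1 if score > 0.
w = [0.9, -0.4, 0.7, -0.2]
x = [1.0, 1.0, -1.0, 1.0]  # this input scores negative: class 0

def score(v):
    """Dot product of the fixed weights with an input vector."""
    return sum(wi * vi for wi, vi in zip(w, v))

# Gradient-sign attack: step each coordinate by eps in the direction
# that most increases the score.
eps = 0.3
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x), score(x_adv))  # the sign flips: class 0 becomes class 1
```

No coordinate moves by more than 0.3, yet the classification flips. Real image attacks spread the same trick across thousands of pixels, so each individual pixel change is invisible to a person.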
 

joeyd999

Joined Jun 6, 2011
4,231
the key is not the neurotransmitter (any of them) so much as all those neurons...
The key to learning is the neurotransmitter, IMHO. It is the reward our brain seeks for successful execution of actions that achieve a goal. It is the reason we repeat learned behavior -- to again experience the reward.
 

spinnaker

Joined Oct 29, 2009
7,815
The key to learning is the neurotransmitter, IMHO. It is the reward our brain seeks for successful execution of actions that achieve a goal. It is the reason we repeat learned behavior -- to again experience the reward.

So how does that explain why people get married a second time? :eek:
 
The key to learning is the neurotransmitter, IMHO. It is the reward our brain seeks for successful execution of actions that achieve a goal. It is the reason we repeat learned behavior -- to again experience the reward.
No. The transmitters activate or modulate the activity of the neuron, and it is much more complicated than that. The same neurotransmitters can act very differently in different species, and in fact even in the same species during development. Some neurotransmitters are found all over the CNS and PNS, doing their thing on neurons that are doing very different things in the system.

You always hear things like you are saying that are associated with drug abuse and reward centers and the like, but it is a huge oversimplification.

But look, I don't want to start an argument about this, it is very complicated, not completely understood and it is more than difficult to discuss in short posts. My "opinion" is based on being a neuroscientist for more than 30 years. I don't claim to know everything about neurons, learning and memory, but I do know some things about them.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
6,215
The pictures are obviously blond, brunette, redhead...
You are affected by
Universal adversarial perturbations :D

https://arxiv.org/pdf/1610.08401.pdf

A deep network's ability to see images where no human-recognizable image exists can also be exploited to make systems misidentify slightly changed images that look normal to humans.


Imagine using this on self-driving cars by slightly modifying traffic signs in a way that humans can't detect easily.

 