Why humans learn faster than AI—for now

WBahn

Joined Mar 31, 2012
29,978
So basically exactly as I said? No real "thinking" going on? Just a very sophisticated search and probability algorithm?

It is my understanding that this is the way AI chess games work. They "simply" run through all of the possible moves and calculate the outcomes. I am by no means a good chess player, but it is my understanding that this is pretty much what human players do; the computer just works so much faster.
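(As an aside, here is a minimal, hypothetical sketch in Python of the brute-force move search described above. The "game" interface used here (legal_moves, apply, is_terminal, evaluate) is assumed purely for illustration and is not any real chess library; real engines add alpha-beta pruning, depth limits, and much stronger evaluation functions.)

Code:
def minimax(game, state, depth, maximizing):
    # Score a position by searching ahead, assuming both sides play their best.
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static score of the position
    scores = [minimax(game, game.apply(state, m), depth - 1, not maximizing)
              for m in game.legal_moves(state)]
    return max(scores) if maximizing else min(scores)

def best_move(game, state, depth=3):
    # Pick the move whose resulting position scores best for the side to move.
    return max(game.legal_moves(state),
               key=lambda m: minimax(game, game.apply(state, m), depth - 1, False))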

The photo above illustrates very well how amazingly the human brain works. We really aren't doing any kind of search of images, or at least I don't think so. For some reason, we can easily tell the difference between a puppy and a muffin even when most of the data is hidden from us.
There are many different types of "artificial intelligence" and so there isn't a one-size-fits-all description for how they work.

For some types of systems, it is about pattern recognition. This is pretty key to how human brains work. But human brains are so unbelievably more capable in terms of raw processing that computer-based systems are at a huge disadvantage, especially when you take into account the intrinsic parallelism that goes on in the brain, which even the most highly parallelized computer can't begin to approach.

The muffin/puppy data does illustrate how good the human brain is at learning SOME kinds of patterns and then generalizing them -- but there are lots of data sets of images that you can develop that humans are lousy at categorizing and that computer AI is very good at.

Based on literally years of constant learning, the human brain learns to categorize what it sees (and hears and smells and feels) in lots of different ways and is then able to very quickly make judgments about it by weighting the relevance of all those different categories, applying lots of filters regarding what can and can't be, and extrapolating from there.

While a huge fraction of human learning is unguided, a lot of it IS guided. Little kids are shown something (be it a color, or a letter of the alphabet, or a number, or a picture of a cat) and are told what it is. Then they are shown it again (perhaps the exact same image or perhaps a different image of the same thing), are asked what it is, and are then told whether they are right or wrong, or perhaps are told what the correct answer is.

Let's consider two very different training sets. In the first one, the person is shown different letters of the alphabet, but not all of them -- perhaps twenty. They are trained until they are nearly flawless at recognizing them correctly no matter how distorted they might be (up to a limit, of course). Now you show them one of the ones that wasn't in the training set. Most of them will try to pick one of the twenty letters they know -- essentially going for a closest match. A few might be able to say that they don't recognize it. None of them are going to correctly identify it.

Now consider a training set consisting of lots of pictures of dogs, cats, rodents, fish, and perhaps a dozen other types of animals at that same level of granularity. After they have been trained, you can probably show them a picture of an animal that falls into one of those categories but that looks significantly different than any of the pictures they've seen and may well even be, at least superficially, somewhat closer to one of the other animals (perhaps in coloring). A large fraction of people will be able to properly classify that new animal because they have been able to generalize things like features that all dogs have and that no dogs have (as well as that most dogs have or that few dogs have). But the kicker is that while some of the features that we rely on are pretty evident, there are lots of features that we rely on that are pretty subtle and that we can't easily identify.

Feedback neural networks try to emulate the guided learning model in which the network is presented with an image and told whether it is right or wrong, or perhaps what the answer should have been, and it then adjusts its internal weights to move its answer a bit closer to the correct answer. This is repeated over and over and over. At the end, it is shown images it has never seen before and evaluated on how well it has been able to extract a relevant set of features from the training set and apply them to new input.
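To make that loop concrete, here is a minimal sketch of the train-then-evaluate cycle in Python using PyTorch. The model and the train/test data loaders are placeholders, and the sketch is illustrative only rather than a description of any particular system mentioned in this thread.

Code:
import torch
import torch.nn as nn

def train(model, train_loader, epochs=10, lr=1e-3):
    # Show labeled examples over and over, nudging the weights toward the right answer.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            loss = loss_fn(model(images), labels)  # how wrong was the guess?
            opt.zero_grad()
            loss.backward()                        # compute weight adjustments
            opt.step()                             # move a bit closer to the correct answer

def evaluate(model, test_loader):
    # Images never seen during training measure how well the learned features generalize.
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / total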
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
Feedback neural networks try to emulate the guided learning model in which the network is presented with an image and told whether it is right or wrong, or perhaps what the answer should have been, and it then adjusts its internal weights to move its answer a bit closer to the correct answer. This is repeated over and over and over. At the end, it is shown images it has never seen before and evaluated on how well it has been able to extract a relevant set of features from the training set and apply them to new input.
https://www.csail.mit.edu/news/fooling-googles-image-recognition-ai-1000x-faster
To test their method, the team showed that they could transform an image of a dog into a photo of two people skiing, all while the image-recognition system still classifies the image as a dog. (The team tested their method on Google’s Cloud Vision API, but say that it would work for similar APIs from Facebook and other companies.)

What’s especially impressive is that the researchers don’t even need complete information about what Google’s image-recognition system is “seeing.”
This method (DNN image recognition) produces impressive results in controlled tests but is also intrinsically fragile because the dimensional space of all possible images is so huge. The limited number of internal weights in a practical system that gives the right answer for similar images will also give the same 'right' answer for a crafted, dissimilar image if it produces the same internal result from a different starting point. The flip side of the equation is to use the same dimensional-space properties to create digital classifier "optical illusions" that easily fool the classifier into not seeing things, or into seeing things that don't exist (a closeness exploit). Imagine making a transparent film with an adversarial perturbation pattern barely detectable by humans that can be placed on top of a stop sign on a busy street to attack cars with AI drivers, or even on the hood and front window of a car to fool AI security systems.

Don't bring this turtle near a security system using Google’s InceptionV3 image classifier.
http://www.labsix.org/physical-objects-that-fool-neural-nets/
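For the curious, the simplest version of this kind of perturbation attack (the fast gradient sign method) fits in a few lines of Python. This is only a sketch that assumes white-box access to a PyTorch-style differentiable classifier; the attacks on deployed APIs mentioned above are more elaborate black-box variants.

Code:
import torch
import torch.nn as nn

def fgsm_perturb(model, image, true_label, eps=0.01):
    # Nudge every pixel slightly in the direction that increases the classifier's
    # loss; the change is nearly invisible to humans but can flip the label.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image.unsqueeze(0)),
                                 torch.tensor([true_label]))
    loss.backward()
    adversarial = image + eps * image.grad.sign()  # tiny, structured change
    return adversarial.clamp(0, 1).detach()        # still a valid image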

Sure, there are countermeasures, but so far nothing stops new types of attacks (nothing that classifies adversarial examples correctly or detects them).
https://arxiv.org/pdf/1705.07263.pdf
ABSTRACT
Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses.
 
Last edited:

WBahn

Joined Mar 31, 2012
29,978
Imagine making a transparent film with an adversarial perturbation pattern barely detectable by humans that can be placed on top of a stop sign on a busy street to attack cars with AI drivers, or even on the hood and front window of a car to fool AI security systems.
This gets directly at one of my big reservations about the use of this technology. We develop it and test it in benign environments and largely ignore the fact that, sooner rather than later, it will be operating in an adversarial environment against attacks that have been specifically crafted to exploit its weaknesses, of which there are bound to be many, and most will not be known until discovered by the adversary.
 
This method (DNN image recognition) produces impressive results in controlled tests but is also intrinsically fragile because the dimensional space of all possible images is so huge. The limited number of internal weights in a practical system that gives the right answer for similar images will also give the same 'right' answer for a crafted, dissimilar image if it produces the same internal result from a different starting point. The flip side of the equation is to use the same dimensional-space properties to create digital classifier "optical illusions" that easily fool the classifier into not seeing things, or into seeing things that don't exist (a closeness exploit). Imagine making a transparent film with an adversarial perturbation pattern barely detectable by humans that can be placed on top of a stop sign on a busy street to attack cars with AI drivers, or even on the hood and front window of a car to fool AI security systems.
/--/
This issue is a very big one for autonomous driving systems. Chantelle Dubois wrote an article in AAC about this subject https://www.allaboutcircuits.com/ne...attacks-can-trick-autonomous-driving-systems/

Take a look at how easily current traffic signs can be altered to defeat recognition. There are probably some obvious solutions, but they are going to be expensive and require maintenance. Nonetheless, widespread deployment is a ways off.
 
Last edited:
The muffin/puppy data does illustrate how good the human brain is at learning SOME kinds of patterns and then generalizing them -- but there are lots of data sets of images that you can develop that humans are lousy at categorizing and that computer AI is very good at.
That is very true and for good reason. The purpose of the nervous system in an organism is to produce behavior....under all conditions in the environment. That behavior is aimed at procreation and survival as priorities, but there are clearly many other aspects. The functional capabilities are the result of evolution and natural selection processes.

We are good at discriminating muffins from dogs because there is an advantage in such discrimination with regard to primary purposes. On the other hand, we are relatively poor at discriminating events with a probability of .000000000001 from those having a probability of .00000000000001, or the difference between a trillion dollars and a quadrillion dollars. The latter discriminations have little value to the organism in the normal environment. We are not "prepared" for that kind of learning, which is not to say that we can't be trained; it is just not as easy.

For creating "AI" there is no such bias or encumbrance. Instead, there is the task of defining incredibly complex associations - something we take for granted as much as the workings of a mitochondrion in a single cell.

In this regard, that is, given the problem statement as it were, there is no amount of AI that comes close.

I have long held the belief that the terminology of "neural network" was not just commandeered but illegitimately hijacked from other fields. Those are not neurons and they don't act like neurons in a network. Indeed, the term Artificial Intelligence is in the same category. I wish I could come up with better terms, but even if I did, they would probably never have generated the "buzz" that those terms initially did.

Based on literally years of constant learning, the human brain learns to categorize what it sees (and hears and smells and feels) in lots of different ways and is then able to very quickly make judgments about it by weighting the relevance of all those different categories, applying lots of filters regarding what can and can't be, and extrapolating from there.
While true, it does not take even a single year. Learning begins at the earliest possible point, and newborns exhibit incredible discriminatory capabilities for their mother based on auditory and visual cues - within a few days or, at the outside, a few months of age.

While a huge fraction of human learning is unguided, a lot of it IS guided. Little kids are shown something (be it a color, or a letter of the alphabet, or a number, or a picture of a cat) and are told what it is. Then they are shown it again (perhaps the exact same image or perhaps a different image of the same thing), are asked what it is, and are then told whether they are right or wrong, or perhaps are told what the correct answer is.
It represents a different perspective rather than a right or wrong position, but in my view, learning is involuntary in a very real sense (I agree with your comment about "constant learning"). It is going on all the time, so long as the organism is alive. While some would argue about the relevance of the observable neurogenesis that occurs throughout the lifespan, it is generally not in dispute that we continue to learn throughout our lives.

What you may mean by "guided" is the intent by others to arrange contingencies in the environment toward the production of specific behaviors.

The point is not trivial because it has a great deal of impact for the individual as well as society. For example, punishment can do a very good job of reducing the probability of the preceding behavior reoccurring. Punishment can also cause all sorts of undesirable learning, which can be manifest in the exhibition of "undesirable behaviors", including the very behaviors it was intended to suppress.
 
Last edited:

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
Now that researchers have found trivial ways to hack deep-learned vision systems, they are turning their attention to humans.

https://gizmodo.com/image-manipulation-hack-fools-humans-and-machines-make-1823466223

Computer scientists at Google Brain have devised a technique that tricks neural networks into misidentifying images—a hack that works on humans as well.

https://arxiv.org/pdf/1802.08195.pdf
Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we create the first adversarial examples designed to fool humans, by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by modifying models to more closely match the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.

https://spectrum.ieee.org/the-human...nce/hacking-the-brain-with-adversarial-images
 
Last edited:
Thanks for posting the link. If you had not, I would not have stumbled upon it until it became widely available.

I have watched 12 minutes of it and there are already two things that I don't like.

1) I prefer a more academic treatment of these important issues, in contrast to the "slick" and, frankly, distracting optics that permeated the portion of the presentation that I watched.

2) I reach the same old conclusion: We have difficulty legally valuing personally produced data (my heart rate, my clicks, anything that serves as a metric of my internal or external behavior that is personally identifiable). Until we can agree that these data are, in fact, my property and that I, not you, am entitled to them, just as I am entitled to any other product that I produce, we will be on the short end of the Faustian Bargain.

Personally, I believe that there is reason for pessimism, as we somehow cannot legislate the concept of opt-out as a default.

I may watch more later, but I doubt 2) above will be altered, although 1) above could be.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
Thanks for posting the link. If you had not, I would not have stumbled upon it until it became widely available.

I have watched 12 minutes of it and there are already two things that I don't like.

1) I prefer a more academic treatment of these important issues, in contrast to the "slick" and, frankly, distracting optics that permeated the portion of the presentation that I watched.

2) I reach the same old conclusion: We have difficulty legally valuing personally produced data (my heart rate, my clicks, anything that serves as a metric of my internal or external behavior that is personally identifiable). Until we can agree that these data are, in fact, my property and that I, not you, am entitled to them, just as I am entitled to any other product that I produce, we will be on the short end of the Faustian Bargain.

Personally, I believe that there is reason for pessimism, as we somehow cannot legislate the concept of opt-out as a default.

I may watch more later, but I doubt 2) above will be altered, although 1) above could be.
For issue 1), it's a documentary-type format for the uninformed; you need shiny things to keep them interested.

2) You have the choice not to use those services and devices in a way that supplies your actual personal data. It's quite easy today to make a completely new digital persona.
 
2) You have the choice not to use those services and devices in a way that supplies your actual personal data. It's quite easy today to make a completely new digital persona.
I would argue that the choice should be whether to participate, with the default being not to - i.e., opt-out as the default. You put the burden on the entity that wants your data, not on the owner of the data.

In many cases, one can argue that we have arranged things to explicitly make opting out difficult.

Can you buy a set of tires without RF tags in them? Yeah, I suppose, but we make it difficult to do that because we do not treat personally produced data as our property.

That is my point. Forcing a choice between "participant" and "off-the-grid Luddite" is what some would have us believe is necessary, but it is an artificial either-or categorization. We can do better.
 

joeyd999

Joined Jun 6, 2011
5,234
You, of course, could not be more wrong, in my opinion. Easy solutions are not necessarily the correct, best, or most advanced solutions.
Another easy solution: compete.

Start your own social networking service with what you think are fair rules/use of data.

If you are right, the world will beat a path to your door.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
I would argue that the choice should be whether to participate, with the default being not to - i.e., opt-out as the default. You put the burden on the entity that wants your data, not on the owner of the data.

In many cases, one can argue that we have arranged things to explicitly make opting out difficult.

Can you buy a set of tires without RF tags in them? Yeah, I suppose, but we make it difficult to do that because we do not treat personally produced data as our property.

That is my point. Forcing a choice between "participant" and "off-the-grid Luddite" is what some would have us believe is necessary, but it is an artificial either-or categorization. We can do better.
The 'Government', in league with big tech, has a vested interest in default opt-in data tracking, so I would be surprised to see a big push for opting out. The kids today seem to want their personal data collected and used, so it might even be a slight burden for them to deselect restriction options. I only want the choice. If a person won't take the usually few steps necessary to use an available opt-out option, then I have only crocodile tears for them about the world being difficult.

There is a third choice for non-mandated 'real-id' interactions: a separate online digital persona with prepaid gift, debit, or credit cards for payments under any made-up name. Sure, there might be a cost to this anonymity armor, but if you value privacy in this day and age, then it has a price too.
 
For issue 1), it's a documentary-type format for the uninformed; you need shiny things to keep them interested.

2) You have the choice not to use those services and devices in a way that supplies your actual personal data. It's quite easy today to make a completely new digital persona.
I have now watched it in its entirety (1:18).

I think it is worthwhile to watch.

A couple of comments:

As for my initial points 1 and 2 that I posted earlier (in post #28) after watching only the beginning (BTW: I would suggest that you get through the beginning to get to the more interesting parts):

For point 1, it actually got much better as it went on, with much less slickness, although the music didn't improve. Music that tells you when a point is coming and emotionally guides you is fine for old-time B&W entertainment movies, but it is distracting when I am trying to focus.

For point 2, it remains as I stated. While the video keeps coming back to that issue, there is much more technology on display. So much so that what I would have really liked was an hour-long video on each of 20-30 of those topics….but you can't always get what you want.

A few scattered comments…these will make no sense if you haven’t watched it and if you have, they are just some immediate reactions….not fight-worthy.

As far as the inevitable “We have already opened Pandora’s box” analogy….well we did that when we got kicked out of the “Garden of Eden”.

The reference to poker vs. AI (e.g., https://www.pokernews.com/news/2017/01/poker-ai-beats-the-pros-26990.htm): I thought that this was still somewhat dynamic (e.g., https://arxiv.org/pdf/1701.01724.pdf) and not quite the done "deal" that they made it out to be.

The aneurysm segment was spell-binding, as they always are to me, but the reference to compassion was misplaced. Compassion can be lines of code like anything else – not lines that I particularly long to write, but lines nonetheless.

The cognitive segments were predictably disappointing as they usually are to me….the focus on self-awareness, blah, blah, blah.

As to “showing you what you want to see”…nothing new there. I used to refer to this as the “Over the counter medication” effect.

The predictable political moralizing was ok as an illustration, but I would have preferred the time being spent elsewhere. It was consistent with their summary point.

The concept of “machines as the perfect sociopath” is, again, misguided. A lack of coding is not an excuse.

The idea of not really knowing what the programs do is a reflection of ignorance and a lack of responsibility rather than something to be addressed by a submission to Sci-Fi worthy fears of the immortal dictator that will never die.

I liked the dedication.
 
Last edited:

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
The concept of “machines as the perfect sociopath” is, again, misguided. A lack of coding is not an excuse.

The idea of not really knowing what the programs do is a reflection of ignorance and a lack of responsibility rather than something to be addressed by a submission to Sci-Fi worthy fears of the immortal dictator that will never die.

I liked the dedication.
We truly don't know what a deep-learned system will do in every possible circumstance because such systems are NOT coded mathematical algorithms. The coding provides the bulk structure for storing what is learned from data inputs, translating our 3D+1 world into a multi-dimensional, higher-order universe, much like the higher dimensions of string theory. The ML machines operate on probabilities, distributions, and weights, not pure logic. The input/output is not a deterministic 1, 2, 3 = x programming process.
https://medium.com/machine-learning...-statistics-for-machine-learning-9d010f0a8980
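A toy sketch of that point in Python: the 'answer' a classifier gives is a probability distribution over classes rather than a hard-coded rule, so a slightly different input can shift the output in ways nobody explicitly programmed. The model and class names below are placeholders.

Code:
import torch

CLASSES = ["dog", "muffin", "stop sign"]  # placeholder class names

def classify(model, image):
    # The network's output is a distribution over classes, not an if/then rule.
    with torch.no_grad():
        probs = torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
    return {cls: round(p.item(), 3) for cls, p in zip(CLASSES, probs)}

# e.g. {"dog": 0.91, "muffin": 0.07, "stop sign": 0.02}; a crafted but
# 'normal'-looking input can shift this distribution dramatically.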

My fear is not that they are a perfect sociopath; it's that they can be easily fooled into behaving in harmful ways by inputs that look perfectly 'normal' to humans, and that they will also have the ability to manipulate humans through our senses and emotional responses at a level we can't easily detect. This will be very useful to our fellow human sociopaths.
 
Last edited: