Why humans learn faster than AI—for now

Thread Starter

nsaspook

Joined Aug 27, 2009
13,270
https://techcrunch.com/2018/12/31/t...-its-creators-to-cheat-at-its-appointed-task/
Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.
...
One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.
This is really just a lesson in the oldest adage in computing: PEBKAC. “Problem exists between keyboard and chair.” Or as HAL put it: “It can only be attributable to human error.”
https://arxiv.org/pdf/1712.02950.pdf
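For intuition, here is a toy sketch of the general trick of hiding data in imperceptible pixel changes: classic least-significant-bit steganography. It is not the learned, distributed encoding the paper describes, but it shows how an image can look perfect to a human while still carrying the answer.

```python
# Toy LSB steganography: smuggle one image's bits inside another image's pixels.
# This is NOT the CycleGAN's mechanism (that encoding is learned and far subtler);
# it only illustrates why "looks right to a human" is a weak grading criterion.
import numpy as np

rng = np.random.default_rng(0)
street_map = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # the "output" image
aerial_bits = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)   # data to smuggle along

# Embed: overwrite each pixel's least significant bit (changes a pixel by at most 1/255).
stego = (street_map & 0xFE) | aerial_bits

# Extract: the other half of the pipeline can trivially read the hidden data back out.
recovered = stego & 0x01

print(int(np.abs(stego.astype(int) - street_map.astype(int)).max()))  # 1 -> invisible to the eye
print(np.array_equal(recovered, aerial_bits))                         # True
```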
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,270
https://mindmatters.ai/2019/01/will-artificial-intelligence-design-artificial-super-intelligence/
Recent AI gains are mostly due to improvements in computational power and access to data. The basic techniques used to formulate and train AI models have remained more or less the same since the 1970s. For example, the well-publicized Deep Learning approach to AI relies on a training algorithm known as backpropagation, which originated in the field of control theory in the 1960s and was then applied to neural networks in the 1970s. The convolutional neural network, the key component of Deep Learning networks, was invented in the 1980s. So, as we see, the basic techniques have remained unchanged for many decades.
...
Given that AI has primarily benefitted from increased data size and processing power, software architect Brendan Dixon concludes, contrary to Ray Kurzweil, that an AI winter is looming: “The worries of an impending winter arise because we’re approaching the limits of what massive data combined with hordes of computers can do.” Past AI booms and busts have, generally speaking, been related to processor improvements or lack thereof.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,270
https://www.wired.com/story/the-exaggerated-promise-of-data-mining/
Nobel laureate Richard Feynman once asked his Caltech students to calculate the probability that, if he walked outside the classroom, the first car in the parking lot would have a specific license plate, say 6ZNA74. Assuming every number and letter are equally likely and determined independently, the students estimated the probability to be less than 1 in 17 million. When the students finished their calculations, Feynman revealed that the correct probability was 1: He had seen this license plate on his way into class. Something extremely unlikely is not unlikely at all if it has already happened.

The Feynman trap—ransacking data for patterns without any preconceived idea of what one is looking for—is the Achilles heel of studies based on data mining. Finding something unusual or surprising after it has already occurred is neither unusual nor surprising. Patterns are sure to be found, and are likely to be misleading, absurd, or worse.
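A quick toy simulation of the same trap (arbitrary numbers, nothing real): correlate enough pure-noise series against each other and a striking pattern always turns up.

```python
# Toy "Feynman trap" demo: mine enough meaningless data and a pattern appears.
import numpy as np

rng = np.random.default_rng(1)
n_series, n_points = 500, 20
noise = rng.standard_normal((n_series, n_points))   # 500 meaningless "data sets"

# The data-mining step: correlate every series against every other one.
corr = np.corrcoef(noise)
np.fill_diagonal(corr, 0.0)                          # ignore each series' self-correlation

i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"series {i} and {j} correlate at r = {corr[i, j]:+.2f}, in pure noise")
```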
 

djsfantasi

Joined Apr 11, 2010
9,163
I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?

And how is the data fed in? We hear about AI computers "reading" medical journals. Is it actually understanding the text in the files? Or is that data simply converted to some kind of database and then loaded into the AI computer?
My limited understanding is that the computing power looks for patterns and then matches its perception to a known base. Or, over time, it builds up a base of patterns and looks for recurring patterns of patterns.

The programmer of an AI system designs a set of algorithms which result in the aforesaid patterns.

In the case of reading medical journals, I would guess the following. First, the content would be in machine-readable form; at its simplest, plain text. Then, the programmers coded a series of lexical functions. Then, the software would process the text (parse and identify patterns) and, within that secondary database, look for patterns of patterns.

In the '60s, I befriended Dr. Seymour Papert of the MIT AI Lab. I learned the process just described from the AI Lab and programmed a rudimentary AI system, some 50 years ago.
So, it’s patterns of patterns, after being taught to identify a set of rudimentary constructs.
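To make that concrete, here is a rough, purely illustrative sketch of that kind of lexical pipeline (the text and the pattern choices are invented): tokens are the first-level patterns, and recurring token pairs are the patterns of patterns.

```python
# Rough sketch of the old lexical approach: parse text into tokens (patterns),
# then look for recurring combinations of tokens (patterns of patterns).
from collections import Counter
import re

text = ("aspirin reduces fever. aspirin reduces inflammation. "
        "ibuprofen reduces fever.")

tokens = re.findall(r"[a-z]+", text.lower())   # first-level patterns: words
bigrams = Counter(zip(tokens, tokens[1:]))     # patterns of patterns: word pairs

print(bigrams.most_common(2))   # ('aspirin', 'reduces') and ('reduces', 'fever') recur
```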
 

cmartinez

Joined Jan 17, 2007
8,253
My limited understanding is that the computing power looks for patterns and then matches its perception to a known base. Or, over time, it builds up a base of patterns and looks for recurring patterns of patterns.

The programmer of an AI system designs a set of algorithms which result in the aforesaid patterns.

In the case of reading medical journals, I would guess the following. First, the content would be in machine-readable form; at its simplest, plain text. Then, the programmers coded a series of lexical functions. Then, the software would process the text (parse and identify patterns) and, within that secondary database, look for patterns of patterns.

In the '60s, I befriended Dr. Seymour Papert of the MIT AI Lab. I learned the process just described from the AI Lab and programmed a rudimentary AI system, some 50 years ago.
So, it’s patterns of patterns, after being taught to identify a set of rudimentary constructs.
Then there are two techniques regarding that: nested patterns and recursive patterns.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,270
My limited understanding is that the computing power looks for patterns and then matches its perception to a known base. Or, over time, it builds up a base of patterns and looks for recurring patterns of patterns.

The programmer of an AI system designs a set of algorithms which result in the aforesaid patterns.

In the case of reading medical journals, I would guess the following. First, the content would be in machine-readable form; at its simplest, plain text. Then, the programmers coded a series of lexical functions. Then, the software would process the text (parse and identify patterns) and, within that secondary database, look for patterns of patterns.

In the '60s, I befriended Dr. Seymour Papert of the MIT AI Lab. I learned the process just described from the AI Lab and programmed a rudimentary AI system, some 50 years ago.
So, it’s patterns of patterns, after being taught to identify a set of rudimentary constructs.
That's the old way. Almost none of the modern AI systems are built from programmer-written algorithms and classic database indexes, beyond simple parsers. People don't hand-design AI algorithms today; they design the constraints of an artificial neural network and train it on millions of (usually labeled) examples. In effect, the system dynamically writes the algorithm to match the incoming data.
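As a minimal, purely illustrative sketch of what "the system writes the algorithm" means (toy data, a single neuron, invented numbers): nobody codes the decision rule below, it is inferred from labeled examples.

```python
# Toy example: a decision rule is learned from labeled examples, not hand-written.
import numpy as np

rng = np.random.default_rng(2)

# Labeled examples: points in [0,1]^2, labeled 1 when their coordinates sum past 1.
X = rng.random((1000, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b = np.zeros(2), 0.0                       # the "algorithm" starts out blank
for _ in range(2000):                         # gradient-descent training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # current predictions
    w -= X.T @ (p - y) / len(y)               # nudge weights to better fit the data
    b -= np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print(f"accuracy of the learned rule: {(pred == y).mean():.1%}")   # high, and nobody wrote it
```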

 

cmartinez

Joined Jan 17, 2007
8,253
another interesting bit from the same article:

A Dartmouth graduate student used an MRI machine to study the brain activity of a salmon as it was shown photographs and asked questions. The most interesting thing about the study was not that a salmon was studied, but that the salmon was dead. Yep, a dead salmon purchased at a local market was put into the MRI machine, and some patterns were discovered. There were inevitably patterns—and they were invariably meaningless.
 

spinnaker

Joined Oct 29, 2009
7,830
My limited understanding is that the computing power looks for patterns and then matches its perception to a known base. Or, over time, it builds up a base of patterns and looks for recurring patterns of patterns.

The programmer of an AI system designs a set of algorithms which result in the aforesaid patterns.

In the case of reading medical journals, I would guess the following. First, the content would be in machine-readable form; at its simplest, plain text. Then, the programmers coded a series of lexical functions. Then, the software would process the text (parse and identify patterns) and, within that secondary database, look for patterns of patterns.

In the '60s, I befriended Dr. Seymour Papert of the MIT AI Lab. I learned the process just described from the AI Lab and programmed a rudimentary AI system, some 50 years ago.
So, it’s patterns of patterns, after being taught to identify a set of rudimentary constructs.
I am lost as to how that would allow a computer to make decisions. I.e., a car sees an object in the road. Now it needs to figure out whether to stop, swerve to the right, swerve to the left, or make a detour down a side street.
 

djsfantasi

Joined Apr 11, 2010
9,163
I am lost as to how that would allow a computer to make decisions. I.e., a car sees an object in the road. Now it needs to figure out whether to stop, swerve to the right, swerve to the left, or make a detour down a side street.
First, this isn’t what you originally asked. But I’ll admit it is a logical progression. Keep that in mind as I mention my second point.

Second, @nsaspook commented that this is the old way. Well, of course. I wrote that I did this in 1960! Papert was famous for his assertion that simple value tests could not account for AI alone. He wrote the (at the time) definitive paper on object discrimination. My project was based on his theories of pattern recognition. BTW, MIT still gives lectures based on his research.

Advanced AI still depends on this basic 1960s concept. You ask how software can decide to turn right, turn left, or take a detour. Good question. My ONLY point is that for a program to make that decision, it needs to recognize whether there is an object to the left or to the right, or whether there even IS a detour. As with any system, there needs to be a predefined set of states as well as knowledge of how to move between those states.

Whether these states are defined with if...then...else constructs or with fuzzy logic is irrelevant. What is relevant is that these patterns and patterns of patterns - aka states - exist at all.
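As a bare-bones, purely illustrative sketch (every state name below is invented), the decision layer on top of the recognizer can be as simple as a table of states and transitions:

```python
# Once the recognizer reports which patterns are present, the decision itself
# can be a plain state table; whether it is if/else or fuzzy logic is incidental.
def decide(obstacle_ahead, left_clear, right_clear, detour_exists):
    if not obstacle_ahead:
        return "continue"
    if right_clear:
        return "swerve_right"
    if left_clear:
        return "swerve_left"
    if detour_exists:
        return "take_detour"
    return "brake"

print(decide(obstacle_ahead=True, left_clear=False, right_clear=False,
             detour_exists=True))   # take_detour
```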
 

spinnaker

Joined Oct 29, 2009
7,830
First, this isn’t what you originally asked. But I’ll admit it is a logical progression. Keep that in mind as I mention my second point.

Second, @nsaspook commented that this is the old way. Well, of course. I wrote that I did this in 1960! Papert was famous for his assertion that simple value tests could not account for AI alone. He wrote the (at the time) definitive paper on object discrimination. My project was based on his theories of pattern recognition. BTW, MIT still gives lectures based on his research.

Advanced AI still depends on this basic 1960s concept. You ask how software can decide to turn right, turn left, or take a detour. Good question. My ONLY point is that for a program to make that decision, it needs to recognize whether there is an object to the left or to the right, or whether there even IS a detour. As with any system, there needs to be a predefined set of states as well as knowledge of how to move between those states.

Whether these states are defined with if...then...else constructs or with fuzzy logic is irrelevant. What is relevant is that these patterns and patterns of patterns - aka states - exist at all.

Well, @cmartinez and @nsaspook might know what you are talking about, but you certainly lost me. Seems to me that detecting an object is the easy part, followed by knowing the options for dealing with it. The hard part is knowing how best to deal with it. Guess I will have to watch those videos when I find some time.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,270
Very accurate and fast detection/classification is the hard part of the problem, and it affects everything. If you detect/classify 99.9% of co-moving cars correctly, that still means 1 in a thousand cars might be detected but misclassified as, say, a person in the road, triggering an emergency brake. The reason the auto-braking system was disabled in the AZ Uber crash was false alarms.

The physics-engine inputs (mechanical driving) can be determined fairly easily for this type of problem in isolation. The NN classifies the object O as something to avoid, like a person on a bike in the middle of a busy road. The avoidance-alert algo might use a simple intercept-vector calculation based on the object's position, speed, and current trajectory. Object O might be assigned a high avoidance value A, and then every time-reasonable path away from A is evaluated to find the lowest-cost option B that the physics engine says is possible within X amount of time under current driving conditions, while still avoiding the other already-classified objects. If the lowest-cost path B is 'slam on the brakes', then that happens.
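A toy sketch of that cost-based selection (all the numbers, labels, and maneuver names are invented; this is not anyone's actual stack):

```python
# Each candidate maneuver is scored against the already-classified objects;
# the lowest-cost maneuver wins, even if that turns out to be "brake".
def maneuver_cost(maneuver, objects):
    comfort = {"continue": 0.0, "swerve_left": 1.0, "swerve_right": 1.0, "brake": 2.0}
    cost = comfort[maneuver]                   # mild penalty for harsher actions
    for obj in objects:
        if maneuver in obj["conflicts_with"]:
            cost += obj["avoid_value"]         # high for people, lower for debris
    return cost

objects = [
    {"label": "cyclist",    "avoid_value": 100.0, "conflicts_with": {"continue", "swerve_left"}},
    {"label": "parked car", "avoid_value": 50.0,  "conflicts_with": {"swerve_right"}},
]

best = min(["continue", "swerve_left", "swerve_right", "brake"],
           key=lambda m: maneuver_cost(m, objects))
print(best)   # brake: every other path conflicts with a classified object
```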
 

bogosort

Joined Sep 24, 2011
696
Nobel laureate Richard Feynman once asked his Caltech students to calculate the probability that, if he walked outside the classroom, the first car in the parking lot would have a specific license plate, say 6ZNA74. Assuming every number and letter are equally likely and determined independently, the students estimated the probability to be less than 1 in 17 million.
Surely, Feynman's students at Caltech must have been joking. One doesn't need a calculator to realize that 36^6 = (6^2)^6 = 6^12, which is a couple orders of magnitude bigger than 10^7.

Of course, 1 in 36^6 is indeed less than 1 in 17 million, but I think it's far more likely that the Wired writer was sloppy and got the story wrong than that Feynman's Caltech students couldn't work out a simple probability. Had they really told him "less than 1 in 17 million", I suspect Feynman would have immediately stopped the physics lecture and spent the rest of the class teaching how to count.
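For what it's worth, the arithmetic takes two lines to check:

```python
print(36**6)          # 2176782336, about 1 chance in 2.2 billion
print(36**6 / 17e6)   # roughly 128, a couple of orders of magnitude past 1 in 17 million
```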
 