Why humans learn faster than AI—for now

jhovel

I've shown this famous 'text' to quite a few children in the process of learning to read. I noticed that all of them could read the text the moment they knew the words (from other sources, reading or hearing) - even if they were not aware of the spelling of those words. Interestingly, they are NOT distracted by the 'wrong' characters in the text. I've tried it with children between 6 and 10, checking afterwards whether they knew all the words - even when they couldn't identify any of them in the text. This is not a scientific study, by the way, just my curiosity.
My 8-year-old grandchild is just competent at reading words he's never seen before. He took around 10 seconds to 'see' the correct text - without any prompting. His 10-year-old brother, who reads completely fluently, read the text in about 2 seconds.
I'd love to see what an AI would do with it and in what time.
[Attached image: INTELLIGENCE.jpg]
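For anyone curious, this kind of jumbled text is easy to generate yourself. Here is a rough sketch (assuming the attached image is the well-known 'first and last letter stay put' passage; the sample sentence is just for illustration):

```python
import random

def scramble_word(word):
    # Keep the first and last letters; shuffle only the interior ones.
    # Punctuation handling is ignored in this simple sketch.
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text):
    # Scramble each whitespace-separated word independently.
    return " ".join(scramble_word(w) for w in text.split())

print(scramble_text("According to a researcher at Cambridge University it does not matter"))
```

Most adults can read the output at close to normal speed, which is what makes the comparison with a machine so tempting.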
 

Thread Starter

nsaspook

Decent intro article on the differences between Deep Learning and other AI.
https://blogs.nvidia.com/blog/2016/...telligence-machine-learning-deep-learning-ai/

Also, check out the linked collection of podcasts around AI... I have only listened to a few (from NVIDIA), but they don't seem awful.
https://soundcloud.com/theaipodcast
If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained” it’s coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.
These guys are full of it. The Deep Learning neural network did not teach itself what a stop sign looks like; it still has no concept of stop, signs or streets. It used high-dimensional vector math to create rules and variables in a digital-computer universe of similar images. In that universe, many other objects have the same vector parameters as stop signs even if the image is completely different. What it stored has zero intelligence or meaning, and it is easily fooled by simple alterations to patterns.

https://spectrum.ieee.org/cars-that...ications-can-fool-machine-learning-algorithms
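To make that 'same vector parameters' point concrete, here is a toy sketch (numpy only, made-up 8x8 'images' and a crude two-number feature vector; a real DNN's embedding is learned and far richer, but the failure mode is the same): anything whose feature vector lands near the 'stop sign' region gets the stop sign label, no matter what it actually looks like.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    # Crude hand-rolled "embedding": fraction of red pixels and mean brightness.
    red_fraction = (img[..., 0] > 0.6).mean()
    brightness = img.mean()
    return np.array([red_fraction, brightness])

# Made-up training data: 8x8 RGB "images" of stop signs (mostly red) vs. trees (mostly green).
stop_signs = [np.stack([np.full((8, 8), 0.8), np.zeros((8, 8)), np.zeros((8, 8))], axis=-1)
              + rng.normal(0, 0.05, (8, 8, 3)) for _ in range(20)]
trees = [np.stack([np.zeros((8, 8)), np.full((8, 8), 0.7), np.zeros((8, 8))], axis=-1)
         + rng.normal(0, 0.05, (8, 8, 3)) for _ in range(20)]

# "Training" = storing the mean feature vector (centroid) of each class.
centroids = {
    "stop sign": np.mean([features(x) for x in stop_signs], axis=0),
    "tree": np.mean([features(x) for x in trees], axis=0),
}

def classify(img):
    # Label an image with whichever class centroid its feature vector is closest to.
    f = features(img)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

# A plain red balloon: visually nothing like a stop sign,
# but its feature vector lands in the same region, so it gets the same label.
red_balloon = np.stack([np.full((8, 8), 0.85), np.full((8, 8), 0.05), np.full((8, 8), 0.05)], axis=-1)
print(classify(red_balloon))   # -> "stop sign"
```

The classifier never sees 'stop-ness'; it only sees where a vector falls in its own little universe.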
 
These guys are full of it. The Deep Learning neural network did not teach itself what a stop sign looks like; it still has no concept of stop, signs or streets. It used high-dimensional vector math to create rules and variables in a digital-computer universe of similar images. In that universe, many other objects have the same vector parameters as stop signs even if the image is completely different. What it stored has zero intelligence or meaning, and it is easily fooled by simple alterations to patterns.

https://spectrum.ieee.org/cars-that...ications-can-fool-machine-learning-algorithms
Obviously, you were not quoting me, but text in the article which I linked. I'm not sure what you are saying... I mean, what your point really is, beyond that those guys (the journalist author and/or the people he is reporting on) are full of it.

First, as I have stated before, I don't go along with the comparison to neurons at all. I know something (not everything) about neurons, and those are not neurons. I would also extend that feeling to "learned" and "taught", although there are times when I have probably used them in relation to programs out of convenience. That being said, it would interest me to learn your definition (or what would be convincing to you) of "taught" or "learned" with regard to this subject matter. It may interest you to know (if you did not know already) that in neuroscience, learning is simply defined as a relatively permanent change in behavior as a result of experience - and the "relatively" part is included mostly to discriminate between things like habituation and to allow for memory loss.

From reading the text and the passage you quoted (including a few previous paragraphs), it sounds like what they are saying is that if you feed in zillions of images of stop signs and non-stop-signs AND provide appropriate feedback (stop sign or not a stop sign), you end up with a network trained to identify stop signs, and to do so with impressive accuracy - based upon the stimuli you used during training.

That may be very true, but it does not mean that the network will accurately generalize to all elements outside of the training samples (all stop signs). The degree of generalization is going to depend upon the population of images (which may or may not include a variety of contextual information). The ease with which such networks can be fooled has to be considered, in some measure, an inadequacy of the training. In a sense, if you left out robust samples (such as those included in the images that are incorrectly identified), then your training is lacking (just as with the muffin/chihuahua and many other examples), or, at the bare minimum, you are outside of the level of generality that allows for the claim of accuracy based upon your samples.
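To put that in concrete terms, here is a toy sketch (scikit-learn logistic regression on made-up 2-D features standing in for images; the specific numbers are illustrative only): accuracy looks great on samples drawn from the same population used for training, and falls apart on 'defaced' signs the training population never covered.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(mean, n=500):
    # Two-feature "images" drawn from a Gaussian around a class mean.
    return rng.normal(mean, 0.5, size=(n, 2))

# Training population: clean stop signs (class 1) vs. everything else (class 0).
X_train = np.vstack([sample([2.0, 2.0]), sample([0.0, 0.0])])
y = np.array([1] * 500 + [0] * 500)

clf = LogisticRegression().fit(X_train, y)

# Held-out samples from the SAME population: accuracy looks impressive.
X_same = np.vstack([sample([2.0, 2.0]), sample([0.0, 0.0])])
print("same population:", clf.score(X_same, y))

# "Defaced" signs the training set never sampled (features shifted toward the other class):
# still stop signs to a human, but the fraction recognized as such collapses.
X_shifted = sample([0.8, 0.8])
print("outside the sampled population:", (clf.predict(X_shifted) == 1).mean())
```

Whether you call that a training inadequacy or a fundamental limit is, I think, exactly the point we are debating.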

The gold standard for accuracy (with these kinds of examples), of course, is the comparison with human performance. You or I had no difficulty recognizing a defaced stop sign as a stop sign up to a point, of course. After all, we see these all the time. Clearly, the population of stop signs sampled is insufficient if one wants to generalize to all stop signs in the real world. It has not been shown (as far as I know), however, that the techniques could not have been applied to include those samples that fooled the network and to do so with a resulting increase in accuracy.

It also follows, therefore, that a liability (or at least an unknown result) is incurred if we erroneously assume that the population of stop signs is known and has been sampled appropriately. In that regard, a question to ask is what the zillions of images used actually were (whether it is stop signs or 10 million faces on YouTube). In previous discussions (e.g., Do you trust this computer?), this idea has been expressed as: the resulting performance of deep learning systems is sometimes (often?) not known.

But I don't think that makes them full of it (at least no more than other folks spinning one way or the other); I think it just means that they are choosing an advantageous way of stating how great they are, consistent with manufacturing and selling video processors that also happen to work pretty darn well for something else :)

Now, if you are saying that the stop sign example is not "Deep Learning" and just machine learning, then I would ask you to provide a clear explanation of a qualitative (not quantitative) difference between the two - with an example if you believe one exists.

[sorry if this is too long, but I find it interesting and am quite willing to modify some of my "positions" which are dynamic]
 

Thread Starter

nsaspook

From what I see, the problem is not one of training on more images, or of training in general. The latest research on DNNs ("Deep Learning" is just another name for 30-year-old tech) has shown them to be both effective and fragile at a fundamental level. The stop sign example is a good example of "Deep Learning" and its faults when used outside of a research lab. So far it has been the case that 'fooling' is not a deficiency of training but an inherent characteristic of DNNs for any kind of patterned data - audio files, malware, or anything else.

By 'full of it' I mean we must be very careful about believing the results of these types of machines ("approaching human-level performance on blah blah blah" or "surpassing human-level performance on blah blah blah") beyond the very narrowly defined objective of classifying objects, and even in that narrow case they have been shown to be capable of being easily misused in ways not anticipated by the designers. I believe we will be able to program a general AI system eventually, but 'Deep Learning' will only be a small part of the total system.
https://arxiv.org/pdf/1801.00631.pdf
Deep learning, as it is primarily used, is essentially a statistical technique for classifying patterns, based on sample data, using neural networks with multiple layers
...
The “spoofability” of deep learning systems was perhaps first noted by Szegedy et al. (2013). Four years later, despite much active research, no robust solution has been found.
...
The real problem lies in misunderstanding what deep learning is, and is not, good for. The technique excels at solving closed-end classification problems, in which a wide range of potential signals must be mapped onto a limited number of categories, given that there is enough data available and the test set closely resembles the training set. But deviations from these assumptions can cause problems; deep learning is just a statistical technique, and all statistical techniques suffer from deviation from their assumptions.
https://blog.keras.io/the-limitations-of-deep-learning.html
The limitations of deep learning
The space of applications that can be implemented with this simple strategy is nearly infinite. And yet, many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands—even millions—of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many. In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.

This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.

Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.
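Stripped of everything else, that "chain of simple, continuous geometric transformations" really is just function composition. A minimal sketch (a tiny untrained two-layer forward pass in numpy, random weights and toy sizes) shows there is nothing else in the box:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random (untrained) weights for a tiny two-layer network: R^4 -> R^8 -> R^3.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def forward(x):
    # Layer 1: affine map (rotate/scale/shift the input space) + pointwise nonlinearity.
    h = np.maximum(0.0, W1 @ x + b1)          # ReLU
    # Layer 2: another affine map into the output space.
    z = W2 @ h + b2
    # Softmax turns the output vector into class "scores".
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=4)       # one input vector, e.g. four features of an "image"
print(forward(x))            # three numbers summing to 1 -- that is all the model is
```

Training only nudges W1, b1, W2, b2 so that this composition bends the input space the way the labels demand; there is no reasoning machinery hiding anywhere.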
 

cmartinez

This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another.
That's what it is... The first computers were mechanical, with a few gears and cams and levers in them. Today the gears have been replaced with transistors and electronic components, but the principle remains exactly the same.

It doesn't matter if you toss a million gears together to work harmoniously on a problem; the same applies to billions of transistors. The contraption will still remain a mindless machine. It will never develop "intelligence", much less "consciousness". A yet-to-be-found new technology will be needed for that.
 

Thread Starter

nsaspook

"One of the fundamental skills for all humans in an AI world is accountability - just because the algorithm says it's the answer, it doesn't mean it actually is."

https://www.bbc.com/news/technology-44561838
So began a sequence of events that saw Ibrahim Diallo fired from his job, not by his manager but by a machine.

He has detailed his story in a blogpost which he hopes will serve as a warning to firms about relying too much on automation.

"Automation can be an asset to a company, but there needs to be a way for humans to take over if the machine makes a mistake," he writes.
 

Thread Starter

nsaspook

https://www.theguardian.com/technol...al-intelligence-ai-humans-bots-tech-companies
It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.
 

Thread Starter

nsaspook

https://hackernoon.com/the-simplest-explanation-of-machine-learning-youll-ever-read-bebc0700047c
You’ve probably heard of machine learning and artificial intelligence, but are you sure you know what they are? If you’re struggling to make sense of them, you’re not alone. There’s a lot of buzz that makes it hard to tell what’s science and what’s science fiction. Starting with the names themselves…
https://hackernoon.com/machine-learning-is-the-emperor-wearing-clothes-59933d12a3cc
Machine learning uses patterns in data to label things. Sounds magical? The core concepts are actually embarrassingly simple. I say “embarrassingly” because if someone made you think it’s mystical, they should be embarrassed. Here, let me fix that for you.
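Her "patterns in data to label things" framing really is the whole trick. A toy nearest-neighbour labeller (made-up features and labels, nothing from the article) already captures the core idea:

```python
import numpy as np

# A few labelled examples: (made-up two-number feature vector, label).
examples = [((150.0, 1.0), "cat"), ((4.0, 0.0), "mug"),
            ((160.0, 1.0), "cat"), ((5.0, 0.0), "mug")]

def label(thing):
    # Label a new thing with the label of its nearest known example.
    nearest = min(examples, key=lambda e: np.linalg.norm(np.array(e[0]) - np.array(thing)))
    return nearest[1]

print(label((145.0, 1.0)))   # -> "cat": patterns in data, used to label things
```

Everything past that is scale and clever feature extraction, not magic.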
 
"I’m a statistician and neuroscientist by training, and we statisticians have a reputation for picking the driest, most boring names for things. We like it to do exactly what it says on the tin. You know what we would have named machine learning? The Labelling of Stuff!"

I want to marry her. :)
 

Thread Starter

nsaspook

https://www.quantamagazine.org/machine-learning-confronts-the-elephant-in-the-room-20180920/
It won’t be easy. The new work accentuates the sophistication of human vision — and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.

“There are all sorts of weird things happening that show how brittle current object detection systems are,” said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto.
...
They’ve also provoked researchers to probe their vulnerabilities. In recent years there have been a slew of attempts, known as “adversarial attacks,” in which researchers contrive scenes to make neural networks fail. In one experiment, computer scientists tricked a neural network into mistaking a turtle for a rifle. In another, researchers waylaid a neural network by placing an image of a psychedelically colored toaster alongside ordinary objects like a banana.

This new study has the same spirit.
https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html
Machine learning is being deployed in more and more real-world applications, including medicine, chemistry and agriculture. When it comes to deploying machine learning in safety-critical contexts, significant challenges remain. In particular, all known machine learning algorithms are vulnerable to adversarial examples — inputs that an attacker has intentionally designed to cause the model to make a mistake. While previous research on adversarial examples has mostly focused on investigating mistakes caused by small modifications in order to develop improved models, real-world adversarial agents are often not subject to the “small modification” constraint. Furthermore, machine learning algorithms can often make confident errors when faced with an adversary, which makes the development of classifiers that don’t make any confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem.
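That "confident errors" point is easy to reproduce with even the most boring classifier. A quick sketch (scikit-learn logistic regression on made-up 2-D data; the numbers are illustrative only): feed it an arbitrary input nowhere near anything it was trained on and it is more certain than it is about real samples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Train a perfectly ordinary classifier on two well-separated clusters.
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)), rng.normal([3, 3], 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

# An "arbitrary input" nowhere near anything seen in training...
garbage = np.array([[80.0, 80.0]])

# ...and the model is essentially 100% confident about what it is.
print(clf.predict_proba(garbage))   # e.g. [[~0.0, ~1.0]]
```

Nothing in the training procedure ever asked the model to say "I have no idea", which is exactly the open problem the Google post is describing.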
 

Thread Starter

nsaspook

http://nautil.us/issue/67/reboot/why-robot-brains-need-symbols
But LeCun is right about one thing; there is something that I hate. What I hate is this: the notion that deep learning is without demonstrable limits and might, all by itself, get us to general intelligence, if we just give it a little more time and a little more data, as captured in a 2016 suggestion by Andrew Ng, who has led both Google Brain and Baidu’s AI group. Ng suggested that AI, by which he meant mainly deep learning, would either “now or in the near future” be able to do “any mental task” a person could do “with less than one second of thought.”

Generally, though certainly not always, criticism of deep learning is sloughed off, either ignored, or dismissed, often in an ad hominem way. Whenever anybody points out that there might be a specific limit to deep learning, there is always someone like Jeremy Howard, the former chief scientist at Kaggle and founding researcher at fast.ai, to tell us that the idea that deep learning is overhyped is itself overhyped. Leaders in AI like LeCun acknowledge that there must be some limits, in some vague way, but rarely (and this is why Bengio’s new report was so noteworthy) do they pinpoint what those limits are, beyond to acknowledge its data-hungry nature.
 