It's another example of ML, not 'real' AI. There is very little human-style intelligence in machine (deep) learning.
https://www.sciencemag.org/news/201...create-3d-model-person-just-few-seconds-video
I want AI to make me young again!
It's a simple pattern-matching problem that any deep-learning machine would handle quickly after being trained to look for these types of patterns. That's not the AI problem; the true AI problem is reasoning about what it means. I'd love to see what an AI would do with it, and in what time.
So they managed to replicate 95% of the human species? It can argue expertly on things with absolutely no understanding of the subject.
Decent intro article on the differences between Deep Learning and other AI.
https://blogs.nvidia.com/blog/2016/...telligence-machine-learning-deep-learning-ai/
But also, check out the linked load of podcasts around AI... I have only listened to a few (from NVIDIA), but they don't seem awful.
https://soundcloud.com/theaipodcast
These guys are full of it. The deep-learning neural network did not teach itself what a stop sign looks like, as it still had no concept of stop, signs, or streets. It used high-dimensional vector math to create rules and variables in a digital computer universe of similar images. In this computer universe, many other objects have the same vector parameters as stop signs even if the images are completely different. What it stored has zero intelligence or meaning and is easily fooled by simple alterations to patterns.

From the article: "If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained” it’s coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google."
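For what it's worth, the "tuning" the article hand-waves at is nothing more exotic than iterative weight adjustment. Here is a minimal sketch of a single artificial neuron being trained by gradient descent; the data and every name in it are invented for illustration, not taken from the article:

```python
import numpy as np

# Toy "images": 4-feature vectors; label 1 = "stop sign", 0 = "not a stop sign".
# A real system would need millions of actual images; this is illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.5, -2.0, 0.5, 1.0]) > 0).astype(float)

w = np.zeros(4)   # the "weightings of the neuron inputs" from the quote
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    p = sigmoid(X @ w + b)               # the neuron's current guesses
    w -= lr * (X.T @ (p - y)) / len(y)   # nudge each weight to reduce error
    b -= lr * np.mean(p - y)

print("accuracy after tuning:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

Nothing in that loop "understands" anything; it just minimizes error on the samples it was shown, which is exactly why patterns outside the training distribution fool it.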
Obviously, you were not quoting me, but text in the article which I linked. I'm not sure what you are saying... I mean, what your point really is, beyond that those guys (the journalist author and/or the people he is reporting on) are full of it.

"These guys are full of it. The deep-learning neural network did not teach itself what a stop sign looks like, as it still had no concept of stop, signs, or streets. It used high-dimensional vector math to create rules and variables in a digital computer universe of similar images. In this computer universe, many other objects have the same vector parameters as stop signs even if the images are completely different. What it stored has zero intelligence or meaning and is easily fooled by simple alterations to patterns."
https://spectrum.ieee.org/cars-that...ications-can-fool-machine-learning-algorithms
https://blog.keras.io/the-limitations-of-deep-learning.html
Deep learning, as it is primarily used, is essentially a statistical technique for classifying patterns, based on sample data, using neural networks with multiple layers.
...
The “spoofability” of deep learning systems was perhaps first noted by Szegedy et al. (2013). Four years later, despite much active research, no robust solution has been found.
...
The real problem lies in misunderstanding what deep learning is, and is not, good for. The technique excels at solving closed-end classification problems, in which a wide range of potential signals must be mapped onto a limited number of categories, given that there is enough data available and the test set closely resembles the training set. But deviations from these assumptions can cause problems; deep learning is just a statistical technique, and all statistical techniques suffer from deviation from their assumptions.
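To make the "spoofability" concrete: the standard recipe since shortly after Szegedy et al. is the fast gradient sign method (Goodfellow et al., 2014), which nudges every input pixel a tiny amount in whatever direction increases the model's loss. A minimal PyTorch-style sketch; `model`, `images`, and `labels` are placeholders, not anything from the linked articles:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Perturb each pixel by +/- epsilon in the direction that
    increases the classification loss (fast gradient sign method)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Sign-only steps this small are usually invisible to a human,
    # yet routinely flip the classifier's answer.
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()
```

A perturbation of epsilon = 0.01 on pixel values in [0, 1] is well below what human vision notices, yet it is often enough to change the predicted label, and with high confidence.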
The limitations of deep learning
The space of applications that can be implemented with this simple strategy is nearly infinite. And yet, many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands—even millions—of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many. In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.
This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.
Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.
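Chollet's "chain of simple, continuous geometric transformations" is meant literally: strip away the framework and a small network's forward pass is nothing but matrix multiplies with elementwise nonlinearities in between. A sketch in plain NumPy, with layer sizes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three weight matrices: each maps one vector space into another.
W1, b1 = 0.1 * rng.normal(size=(784, 128)), np.zeros(128)
W2, b2 = 0.1 * rng.normal(size=(128, 64)), np.zeros(64)
W3, b3 = 0.1 * rng.normal(size=(64, 10)), np.zeros(10)

def forward(x):
    # Each line is one continuous geometric transform:
    # an affine map followed by a ReLU "bend" of the space.
    h1 = np.maximum(0.0, x @ W1 + b1)
    h2 = np.maximum(0.0, h1 @ W2 + b2)
    return h2 @ W3 + b3   # final map into the 10-dimensional label space

logits = forward(rng.normal(size=784))
```

Everything a trained model "knows" lives in those matrices; there is no step in that chain where the discrete reasoning the post describes could happen.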
That's what it is... the first computers were mechanical, with a few gears and cams and levers in them. Today the gears have been replaced with transistors and electronic components, but the principle remains exactly the same.

"This is because a deep learning model is 'just' a chain of simple, continuous geometric transformations mapping one vector space into another."
So began a sequence of events that saw Ibrahim Diallo fired from his job, not by his manager but by a machine.
He has detailed his story in a blogpost which he hopes will serve as a warning to firms about relying too much on automation.
"Automation can be an asset to a company, but there needs to be a way for humans to take over if the machine makes a mistake," he writes.
It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.
“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.
“It’s essentially prototyping the AI with human beings,” he said.
Nice article. I have noticed this for quite a while... whenever I try to order out of sequence at a fast-food joint. Try starting off by saying "I want this to go" or "I don't want any fries". There is no bewilderment; they simply ask the question that I have already answered when they get to that point in the BASIC sequence that has been programmed.
https://hackernoon.com/machine-learning-is-the-emperor-wearing-clothes-59933d12a3cc
You’ve probably heard of machine learning and artificial intelligence, but are you sure you know what they are? If you’re struggling to make sense of them, you’re not alone. There’s a lot of buzz that makes it hard to tell what’s science and what’s science fiction. Starting with the names themselves…
Machine learning uses patterns in data to label things. Sounds magical? The core concepts are actually embarrassingly simple. I say “embarrassingly” because if someone made you think it’s mystical, they should be embarrassed. Here, let me fix that for you.
"I’m a statistician and neuroscientist by training, and we statisticians have a reputation for picking the driest, most boring names for things. We like it to do exactly what it says on the tin. You know what we would have named machine learning? The Labelling of Stuff!"
Hey... I got dibs, I saw her first!

"I’m a statistician and neuroscientist by training, and we statisticians have a reputation for picking the driest, most boring names for things. We like it to do exactly what it says on the tin. You know what we would have named machine learning? The Labelling of Stuff!"
I want to marry her.
https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html
It won’t be easy. The new work accentuates the sophistication of human vision — and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.
“There are all sorts of weird things happening that show how brittle current object detection systems are,” said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto.
...
They’ve also provoked researchers to probe their vulnerabilities. In recent years there have been a slew of attempts, known as “adversarial attacks,” in which researchers contrive scenes to make neural networks fail. In one experiment, computer scientists tricked a neural network into mistaking a turtle for a rifle. In another, researchers waylaid a neural network by placing an image of a psychedelically colored toaster alongside ordinary objects like a banana.
This new study has the same spirit.
Machine learning is being deployed in more and more real-world applications, including medicine, chemistry and agriculture. When it comes to deploying machine learning in safety-critical contexts, significant challenges remain. In particular, all known machine learning algorithms are vulnerable to adversarial examples — inputs that an attacker has intentionally designed to cause the model to make a mistake. While previous research on adversarial examples has mostly focused on investigating mistakes caused by small modifications in order to develop improved models, real-world adversarial agents are often not subject to the “small modification” constraint. Furthermore, machine learning algorithms can often make confident errors when faced with an adversary, which makes the development of classifiers that don’t make any confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem.
But LeCun is right about one thing; there is something that I hate. What I hate is this: the notion that deep learning is without demonstrable limits and might, all by itself, get us to general intelligence, if we just give it a little more time and a little more data, as captured in a 2016 suggestion by Andrew Ng, who has led both Google Brain and Baidu’s AI group. Ng suggested that AI, by which he meant mainly deep learning, would either “now or in the near future” be able to do “any mental task” a person could do “with less than one second of thought.”
Generally, though certainly not always, criticism of deep learning is sloughed off: either ignored or dismissed, often in an ad hominem way. Whenever anybody points out that there might be a specific limit to deep learning, there is always someone like Jeremy Howard, the former chief scientist at Kaggle and founding researcher at fast.ai, to tell us that the idea that deep learning is overhyped is itself overhyped. Leaders in AI like LeCun acknowledge that there must be some limits, in some vague way, but rarely (and this is why Bengio’s new report was so noteworthy) do they pinpoint what those limits are, beyond acknowledging its data-hungry nature.