Why humans learn faster than AI—for now

Discussion in 'Computing and Networks' started by nsaspook, Mar 8, 2018.

  1. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
  2. techsoul

    New Member

    Jun 11, 2018
    1
    0
    Great to know about AI!
    Avanta Digital Systems: integrator of professional 3D graphics animation solutions, machine learning, and AI workstations. www.avantadigital.com
     
  3. jhovel

    New Member

    Jul 9, 2016
    18
    4
    I've shown this famous 'text' to quite a few children in the process of learning to read. I noted that all children can read the text the moment they know the words (from other sources, reading, or hearing) - even if they are not familiar with the spelling of these words. Interestingly, they are NOT distracted by the 'wrong' characters in this text. I've tried it with children between 6 and 10, checking afterwards whether they knew all the words - even when they couldn't identify any of them in the text. This is not a scientific study, by the way, just my curiosity.
    My 8-year-old grandchild is just competent at reading words he's never seen before. He took around 10 seconds to 'see' the correct text - without any prompting. His 10 y.o. brother, who reads completely fluently, read the text in about 2 seconds.
    I'd love to see what an AI would do with it and in what time.
    [attached image: 14004INTELLIGENCE.jpg]
     
  4. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    It's a simple pattern-matching problem that any deep-learning machine would solve quickly after being trained to look for these types of patterns. That's not the AI problem; the true AI problem is reasoning about what the text means.
     
  5. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348

     
    Last edited: Jun 15, 2018
  6. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    Raymond Genovese likes this.
  7. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    3,978
    5,448
    So they managed to replicate 95% of the human species?
     
    killivolt and cmartinez like this.
  8. Raymond Genovese

    Well-Known Member

    Mar 5, 2016
    1,024
    571
  9. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    These guys are full of it. The deep-learning neural network did not teach itself what a stop sign looks like, as it still has no concept of stop, signs, or streets. It used high-dimensional vector math to create rules and variables in a digital computer universe of similar images. In this computer universe, many other objects have the same vector parameters as stop signs even if the image is completely different. What it stored has zero intelligence or meaning, and it is easily fooled by simple alterations to the patterns.

    https://spectrum.ieee.org/cars-that...ications-can-fool-machine-learning-algorithms
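
    To illustrate the kind of 'simple alteration' involved, here is a minimal sketch of the fast gradient sign method (a standard adversarial-example technique, not necessarily the one used in the article), assuming PyTorch and torchvision; the tensors x and y below are hypothetical placeholders:

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Any pretrained image classifier serves as a stand-in for the sign detector.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def fgsm_attack(image, true_label, epsilon=0.03):
        """Nudge the image by epsilon in the direction that increases the loss."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # The 'simple alteration': a tiny step along the sign of the gradient,
        # usually invisible to a human but often enough to flip the prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    # Usage with a hypothetical 1x3x224x224 image batch x (values in [0, 1])
    # and its correct class index y:
    # x_adv = fgsm_attack(x, torch.tensor([y]))
    # print(model(x_adv).argmax(dim=1))  # often differs from y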
     
    cmartinez likes this.
  10. Raymond Genovese

    Well-Known Member

    Mar 5, 2016
    1,024
    571
    Obviously, you were not quoting me, but text in the article which I linked. I'm not sure what you are saying... I mean... what your point really is, beyond that those guys (the journalist author and/or the people he is reporting on) are full of it.

    First, as I have stated before, I don't go along with the comparison to neurons at all. I know something (not everything) about neurons, and those are not neurons. I would also generalize my feelings to "learned" and "taught", although there are times when I have probably used them in relationship to programs out of convenience. That being said, it would interest me to learn your definition (or what would be convincing to you) of "taught" or "learned", with regard to this subject matter. It may interest you to know (if you did not know already) that for neurosciences, learning is simply defined as a relatively permanent change in behavior as a result of experience - and the "relatively" part is included mostly to discriminate between things like habituation and allow for memory loss.

    From reading the text and the passage (including a few previous paragraphs) that you quoted, it sounds like what they are saying is that if you feed it zillions of images of stop signs and non-stop-signs AND provide appropriate feedback (not a stop sign or stop sign), you end up with a network trained to identify stop signs, and to do so with impressive accuracy - based upon the stimuli you used during training.

    That may be very true, but it does not mean that the network will accurately generalize to all elements outside of the training samples (all stop signs). The degree of generalization is going to depend upon the population of images (which may or may not include a variety of contextual information). The ease with which such networks can be fooled has to be considered, in some measure, an inadequacy of the training. In a sense, if you left out robust samples (such as those included in the images that are incorrectly identified), then your training is lacking (just as with the muffin/chihuahua and many other robust examples), or, at the bare minimum, you are outside of the level of generality that allows for the claim of accuracy based upon your samples.

    The gold standard for accuracy (with these kinds of examples), of course, is the comparison with human performance. You or I would have no difficulty recognizing a defaced stop sign as a stop sign - up to a point, of course. After all, we see these all the time. Clearly, the population of stop signs sampled is insufficient if one wants to generalize to all stop signs in the real world. It has not been shown (as far as I know), however, that the techniques could not have been applied to include those samples that fooled the network, and to do so with a resulting increase in accuracy.

    It also follows, therefore, that a liability (or an unknown result, at the least) is incurred if we erroneously assume that the population of stop signs is known and has been sampled appropriately. In that regard, a question to ask is what the zillions of images used were (whether it is stop signs or 10 million faces on YouTube). In previous discussions (e.g., "Do you trust this computer?"), this idea has been expressed as follows: the resulting performance of deep-learning instances is sometimes (oftentimes?) not known.

    But, I don't think that makes them full of it (at least no more than other folks spinning one way or the other), I think it just means that they are choosing an advantageous way of stating how great they are, consistent with manufacturing and selling video processors that work pretty darn well for something else also :)

    Now, if you are saying that the stop sign example is not "Deep Learning" and just machine learning, then I would ask you to provide a clear explanation of a qualitative (not quantitative) difference between the two - with an example if you believe one exists.

    [sorry if this is too long, but I find it interesting and am quite willing to modify some of my "positions" which are dynamic]
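
    To make the sampling point concrete, here is a minimal sketch (toy 2-D data standing in for images, scikit-learn assumed) of a classifier that is accurate on its training population and degrades on samples drawn from outside it:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy stand-in for 'zillions of images plus feedback': two Gaussian classes.
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                         rng.normal(3.0, 1.0, (500, 2))])
    y_train = np.array([0] * 500 + [1] * 500)

    clf = LogisticRegression().fit(X_train, y_train)
    print("accuracy on the training population:", clf.score(X_train, y_train))

    # 'Defaced stop signs': the same two classes, shifted away from the
    # population the classifier was trained on. Accuracy drops because the
    # training sample never covered this region.
    X_shift = np.vstack([rng.normal(1.5, 1.0, (500, 2)),
                         rng.normal(4.5, 1.0, (500, 2))])
    y_shift = np.array([0] * 500 + [1] * 500)
    print("accuracy outside the training population:", clf.score(X_shift, y_shift))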
     
  11. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    From what I see, the problem is not one of training on more images, or of training in general. The latest research on DNNs ("Deep Learning" is just another name for 30-year-old tech) has shown them to be both effective and fragile at a fundamental level. The stop sign example is a good example of "Deep Learning" and its faults when used outside of a research lab. So far it has been the case that 'fooling' is not a deficiency of training but an inherent characteristic of DNNs for any dataset: audio files, malware, or any other type of patterned data. By 'full of it' I mean we must be very careful of believing the results (“approaching human-level performance on blah blah blah” or “surpassing human-level performance on blah blah blah”) of these types of machines beyond the very narrowly defined objective of classifying objects, and even in that narrow case they have been shown to be easily misused in ways not anticipated by the designers. I believe we will eventually be able to program a general AI system, but 'Deep Learning' will only be a small part of the total system.
    https://arxiv.org/pdf/1801.00631.pdf
    https://blog.keras.io/the-limitations-of-deep-learning.html
     
    Last edited: Jun 20, 2018
    cmartinez likes this.
  12. cmartinez

    AAC Fanatic!

    Jan 17, 2007
    5,557
    6,611
    That's what it is ... the first computers were mechanical, and had a few gears and cams and levers in them. Today, the gears have been replaced with transistors and electronic components. But the principle remains exactly the same.

    It doesn't matter if you toss a million gears together to work harmoniously on a problem. The same applies to billions of transistors. The contraption will still remain a mindless machine. It will never develop "intelligence", much less "consciousness". A yet-to-be-found new technology will be needed for that.
     
  13. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    "One of the fundamental skills for all humans in an AI world is accountability - just because the algorithm says it's the answer, it doesn't mean it actually is."

    https://www.bbc.com/news/technology-44561838
     
    atferrari and cmartinez like this.
  14. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    https://www.theguardian.com/technol...al-intelligence-ai-humans-bots-tech-companies
     
    killivolt and cmartinez like this.
  15. Raymond Genovese

    Well-Known Member

    Mar 5, 2016
    1,024
    571
    Nice article. I have noticed this for quite a while... whenever I try to order out of sequence at a fast food joint. Try starting off by saying "I want this to go" or "I don't want any fries". There is no bewilderment; they simply ask the question, which I have already answered, when they get to that point in the BASIC sequence that has been programmed.
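
    A minimal sketch of that fixed "BASIC sequence" behavior (the prompts here are illustrative, not from the article):

    # The script asks every question in a fixed order. Anything the customer
    # volunteered earlier is never consulted, so a pre-answered question is
    # asked again when the script reaches it.
    SCRIPT = [
        "What would you like?",
        "Any fries with that?",
        "For here or to go?",
    ]

    def take_order():
        answers = []
        for question in SCRIPT:
            # No state carries over between steps; the sequence is rigid.
            answers.append(input(question + " "))
        return answers

    if __name__ == "__main__":
        print(take_order())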
     
    killivolt likes this.
  16. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    https://hackernoon.com/the-simplest-explanation-of-machine-learning-youll-ever-read-bebc0700047c
    https://hackernoon.com/machine-learning-is-the-emperor-wearing-clothes-59933d12a3cc
     
    Last edited: Sep 19, 2018
  17. Raymond Genovese

    Well-Known Member

    Mar 5, 2016
    1,024
    571
    "I’m a statistician and neuroscientist by training, and we statisticians have a reputation for picking the driest, most boring names for things. We like it to do exactly what it says on the tin. You know what we would have named machine learning? The Labelling of Stuff!"

    I want to marry her. :)
     
    cmartinez likes this.
  18. cmartinez

    AAC Fanatic!

    Jan 17, 2007
    5,557
    6,611
    Hey ... I got dibs, I saw her first! :D
     
  19. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    https://www.quantamagazine.org/machine-learning-confronts-the-elephant-in-the-room-20180920/
    https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html
     
  20. nsaspook

    Thread Starter AAC Fanatic!

    Aug 27, 2009
    4,906
    5,348
    http://nautil.us/issue/67/reboot/why-robot-brains-need-symbols
     
    Last edited: Dec 10, 2018 at 11:58 AM
    cmartinez likes this.