Why humans learn faster than AI—for now

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
Google Images search is pretty cool, but it still delivers some strange results.
A photo of a specific connector returned Lego blocks. A photo of a drawer pull from a piece of old furniture I refinished returned a raccoon image.
That's the problem with convolutional neural networks: there are collisions even with random noise, and small, targeted alterations to an image can cause classification failures on images whose contents are obvious to humans.


https://forum.allaboutcircuits.com/...rn-faster-than-ai-for-now.146468/post-1254768
https://forum.allaboutcircuits.com/...rn-faster-than-ai-for-now.146468/post-1246374
https://forum.allaboutcircuits.com/...rn-faster-than-ai-for-now.146468/post-1246237
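The fragility described above, where a tiny targeted alteration flips a classification, can be illustrated even with a toy linear classifier. This is a minimal sketch of the idea behind gradient-sign adversarial perturbations, using made-up random weights rather than any real image model:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # weights of a toy linear "image" classifier
x = 0.05 * w                   # an input the classifier scores firmly positive
eps = 0.2                      # small per-pixel perturbation budget
x_adv = x - eps * np.sign(w)   # nudge each pixel against the sign of its weight

score, adv_score = w @ x, w @ x_adv
# score is positive, adv_score is negative: the predicted class flips
# even though no single pixel moved by more than eps
```

Real attacks on convolutional networks work the same way, except the gradient is computed through many layers; the perturbation stays small enough to be invisible to a human while moving the score across the decision boundary.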
 

MrAl

Joined Jun 17, 2014
11,389
Google Images search is pretty cool, but it still delivers some strange results.
A photo of a specific connector returned Lego blocks. A photo of a drawer pull from a piece of old furniture I refinished returned a raccoon image.
All I can say is, Haaaaa! :)

I noticed that too with a phone I had where you could take pictures of things and it would tell you what they were. It came out with some weird results.
They stopped offering the feature soon after.
 

MrAl

Joined Jun 17, 2014
11,389
That's the problem with convolutional neural networks: there are collisions even with random noise, and small, targeted alterations to an image can cause classification failures on images whose contents are obvious to humans.


https://forum.allaboutcircuits.com/...rn-faster-than-ai-for-now.146468/post-1254768
https://forum.allaboutcircuits.com/...rn-faster-than-ai-for-now.146468/post-1246374
https://forum.allaboutcircuits.com/...rn-faster-than-ai-for-now.146468/post-1246237
If I remember right, hash codes do something like that too: they fall back onto some other, more detailed comparison if a collision does occur.
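That fallback is exactly how a chained hash table resolves collisions: the hash narrows the search to one bucket, and a slower full-equality comparison settles any keys that collide inside it. A minimal sketch (illustrative only, not how any particular language's built-in dictionary is implemented):

```python
class TinyHashTable:
    """Hash narrows the search; full key equality resolves collisions."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:               # detailed check: full key comparison
                bucket[i] = (key, value)
                return
        bucket.append((key, value))    # colliding keys chain in one bucket

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```

Even with every key forced into a single bucket (total collision), lookups still return the right value; the hash only affects speed, never correctness, because equality has the final word.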

Oh, come to think of it, not too long ago some big tech company was considering making face recognition mandatory for logging into accounts. They quickly stopped doing that. (chuckle)
 

MrAl

Joined Jun 17, 2014
11,389
That's what your typical Deep Learning convolutional neural network image classifier does for things like computer vision tasks or photo image categorization. The 'code' is a multi-dimensional 'hash' to a mini universe of similar images.

https://developers.google.com/machine-learning/practica/image-classification

View attachment 268670
Hi,

Yeah, that's interesting. I actually haven't sat down and tried to write this kind of thing yet, as it takes so much time to develop new image routines beyond the very simple ones. It took me several hours to write a multi-core rotation routine, and maybe three hours just to write a file-deletion routine into another program, because it has to be perfect. If someone thinks they are deleting to a private recycle bin and the file gets permanently deleted instead, they are going to be pissed :)
So these days I don't write as much, but I do pay more attention to testing, so at least what I do write works every time. I use my own software, so it MUST work (ha ha). I think a lot of programmers don't use their own software much; that's why there are so many bugs and strange design choices in their software.

Thanks for the reply.
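The "multi-dimensional hash to a mini universe of similar images" from the quoted post can be sketched as a nearest-neighbor lookup in an embedding space. Here random unit vectors stand in for real CNN embeddings (an assumption purely for illustration; a real system would take the 128-dimensional vectors from the network's penultimate layer):

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-ins for CNN embeddings of a 1000-image gallery, normalized to unit length
gallery = rng.normal(size=(1000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def nearest(query, k=5):
    """Return indices of the k gallery images most similar to the query."""
    q = query / np.linalg.norm(query)
    sims = gallery @ q                  # cosine similarity against every image
    return np.argsort(sims)[-k:][::-1]  # best match first
```

A slightly noisy copy of a gallery embedding comes back as its own nearest neighbor, which is the "hash to similar images" behavior. It also shows why odd results happen: whatever lands nearby in this space gets returned, whether or not a human would call it similar.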
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://www.msn.com/en-us/news/tech...-the-company-s-ai-has-come-to-life/ar-AAYliU1
The Google engineer who thinks the company’s AI has come to life

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.
...
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.
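Bender's point that these models "learn" by predicting what word comes next can be made concrete with the simplest possible next-word predictor, a bigram counter. This toy (with a made-up corpus, nothing to do with LaMDA's actual training data or architecture) captures the mechanism, if none of the scale:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate".split()

# "training": count which word follows which in the text
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

predict("the")  # "cat": seen twice after "the", versus once for "mat"
```

Nothing here understands cats or mats; it only reproduces statistical regularities of the text it was shown. Large language models do the same thing with vastly more context and parameters, which is exactly why fluent output is not evidence of a mind behind it.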
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://www.msn.com/en-us/news/tech...-and-showed-how-the-test-is-broken/ar-AAYA8GJ

If Lemoine was taken in by LaMDA’s lifelike responses, it seems plausible that many other people with far less understanding of artificial intelligence (AI) could be as well — which speaks to its potential as a tool of deception and manipulation, in the wrong hands.

To many in the field, then, LaMDA’s remarkable aptitude at Turing’s Imitation Game is not an achievement to be celebrated. If anything, it shows that the venerable test has outlived its use as a lodestar for artificial intelligence.

“These tests aren’t really getting at intelligence,” said Gary Marcus, a cognitive scientist and co-author of the book “.” What it’s getting at is the capacity of a given software program to pass as human, at least under certain conditions. Which, come to think of it, might not be such a good thing for society.


“I don’t think it’s an advance toward intelligence,” Marcus said of programs like LaMDA generating humanlike prose or conversation. “It’s an advance toward fooling people that you have intelligence.”
 

cmartinez

Joined Jan 17, 2007
8,218

MrAl

Joined Jun 17, 2014
11,389
Hi,

Truth is, I've always admired Turing and his work. But I've also always found his so-called "intelligence test", or cognitive test or whatever, deeply flawed.

From the start, it's been about whether a machine can convince a human participant that it is intelligent. The test doesn't prove anything in an objective, reproducible way.
What I was thinking is that an objective test may be truly impossible, based on the ideas of, I think it was, Gödel.
If we can fool a perfect, all-knowing "oracle", then we can fool AI, and if we can fool AI, then we may never believe even the most advanced AI, unless maybe it has the ability to detect when there is an attempt to fool it. But then maybe there is an extension to Gödel's ideas that would circumvent this too.
All we can do is wait and see.

What else I noticed, and this is the most troubling, is that with new technology there is always a DOWNSIDE. The key point is just how bad that downside is.
There are numerous examples; I could probably make a huge list from just my own personal experiences, but I am hard pressed to find ONE example that does not have a downside or some loss of functionality. This goes for everything from appliances like microwave ovens to computer software to cell phone apps and beyond.
Maybe people hate Windows 11 because of the restrictions that MS felt they just HAD to add.
One that really gets me is this...
Back in the day when tube televisions were common, one of the modern design improvements was to keep the tube filaments partially powered so that the TV turned on "instantly". You hit the switch, and a second later you were watching TV. Then bipolar transistors took over, and only the cathode ray tube still needed its filament kept warm for that "instant on" feature everyone loved.
Fast forward to today, the MODERN age, and what do we see? Every damn thing has to BOOT UP, and that can take MINUTES. Even TVs have to boot now, even the non-smart ones. Is there any TV that can start up right away anymore?
Updates to the Android operating system seem to take longer to boot now too. I don't remember it being like that before. Android 12 seems to take a full two minutes to boot from a hard shutdown.

BTW, the EU is forcing cell phone companies to use a USB-C connector for charging in order to reduce electronic waste and keep consumers happy at the same time, since they won't have to keep buying chargers. The US is now considering a similar bill, although it has not yet specified that the connector must be USB-C. Apple's Lightning port will be a thing of the past if this all goes through.

The thing is, what is going to be the downside to artificial intelligence? What are we going to have to put up with in the name of technology this time?
It could be the very worst thing to happen to mankind. Don't get me wrong, I am not afraid of new technology; I'd love to have a self-driving car. But it would have to be ultra perfect, so that it is absolutely safe or has some extremely low failure rate.
 

bogosort

Joined Sep 24, 2011
696
Truth is, I've always admired Turing and his work. But I've also always found his so-called "intelligence test", or cognitive test or whatever, deeply flawed.

From the start, it's been about whether a machine can convince a human participant that it is intelligent. The test doesn't prove anything in an objective, reproducible way.
The Turing test reflects the fact that we don't have a precise definition of intelligence (or cognition, or consciousness, or agency). Unlike an IQ test, which implicitly assumes the test taker has some level of cognitive capacity, the Turing test makes no such assumptions.

Like pornography and art, intelligence is very difficult to define in a falsifiable way, but relatively easy to determine on a case-by-case basis ("you know it when you see it").

Note that the U.S. government's "porn test" is very much akin to a Turing test:

https://www.justice.gov/criminal-ceos/citizens-guide-us-federal-law-obscenity
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://restofworld.org/2022/dall-e-mini-women-in-saris/
Marés, a veteran hacktivist, began using DALL·E mini in early June. But instead of inputting text for a specific request, he tried something different: he left the field blank. Fascinated by the seemingly random results, Marés ran the blank search over and over. That’s when Marés noticed something odd: almost every time he ran a blank request, DALL·E mini generated portraits of brown-skinned women wearing saris, a type of attire common in South Asia.
1655954484740.png
 

panic mode

Joined Oct 10, 2011
2,715
I am not a lawyer, but this is just way too broad. Based on this, the majority of everything said, written, or shown is obscene.
18 U.S.C. § 2252C Misleading words or digital images on the Internet
Maybe it was meant to be confusing for alien probes running AI.
 