Why humans learn faster than AI—for now

Maybe this is a bit off topic...but maybe not...

So, I am minding my own business, reading CNN and living a life of quiet desperation when I come across this article.... Scientists say bees can do basic math https://www.cnn.com/2019/02/08/health/honeybees-learn-math-study-trnd/index.html

Thinking about that title, I say to myself, "Why would bees need to do basic math?" Do they have to file taxes by April 15 without access to TurboTax or H&R Block? So I read the article, and soon I am thinking: OK, the usual willful misinterpretations the media produce when relating anything remotely scientific. Consequently, I go get the original report: http://advances.sciencemag.org/content/5/2/eaav0961

The study itself is not the problem; the interpretation is... well, IMO, nonsense. Aren't these the principles of discrimination and generalization that I learned about as an undergraduate and used in the lab afterward? Didn't Skinner (and others) explain these concepts years earlier?

...and maybe the worst is the claim of relevance to AI. Very disappointing.

What's next, the Pong-playing pigeons learned physics and it's highly relevant to AI?

 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://www.i-programmer.info/news/...ce/13165-we-still-beat-ai-at-angry-birds.html

Humans! Rest easy we still beat the evil AI at the all-important Angry Birds game. Recent research by Ekaterina Nikonova and Jakub Gemrot of Charles University (Czech Republic) indicates why this is so.

It seems that the algorithms' major problem is not having enough data to train on. Even so, humans manage to crack new, unseen levels with hardly any data. It might be that we just like throwing things and notice when something happens to our advantage.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://spectrum.ieee.org/the-insti...ng-everything-ai-machinelearning-pioneer-says

Stop Calling Everything AI, Machine-Learning Pioneer Says
Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent
“I think that we’ve allowed the term engineering to become diminished in the intellectual sphere,” he says. The term “science” is used instead of “engineering” when people wish to refer to visionary research, and phrases such as “just engineering” don’t help.

“I think that it’s important to recall that for all of the wonderful things science has done for the human species, it really is engineering—civil, electrical, chemical, and other engineering fields—that has most directly and profoundly increased human happiness.”
 

cmartinez

Joined Jan 17, 2007
8,218
https://spectrum.ieee.org/the-insti...ng-everything-ai-machinelearning-pioneer-says

Stop Calling Everything AI, Machine-Learning Pioneer Says
Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent
I've been arguing that for years... there's a HUGE difference between a genuinely intelligent being and a complex algorithm. We're entering the age of complex recursive computational procedures... but that doesn't mean said devices qualify as "intelligent".
 

Ya’akov

Joined Jan 27, 2019
9,069
https://spectrum.ieee.org/the-insti...ng-everything-ai-machinelearning-pioneer-says

Stop Calling Everything AI, Machine-Learning Pioneer Says
Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent

Even in my 15 years in academia I saw a shift. I loved engineering because it is “science constrained by applications”. As I watched, though, I saw engineers who wanted to be seen as “scientists”, and scientists who saw the possibilities of applying their research acting much more like engineers. Even “applied mathematics” was becoming less of a term said with a sneer.

Though our colleges of science and engineering were separate, there was a lot of collaboration and interdisciplinary work. I think the two things are merging but there will always be the tails of the curve with pure research and hard-headed application.

For me, the ideal engineer has her head in the clouds and her feet on the ground, imagining the possibilities because she wants to use them. See Brattain and Bardeen for very good examples.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
Science informs us about what is possible; engineering informs us about what is practical.

A classic example of a pure "scientist" and a great "engineer" discussing a problem: together, there is a positive feedback loop toward a possible and practical solution.

 

justtrying

Joined Mar 9, 2011
439
In my experience a technologist is a good grounding force as well. When I went for my diploma, our final project had five teams from my class participate in a design competition for an assistive device for ALS patients. It was a one-term (4-month) project. The other teams were from university engineering programs and had been working on theirs for the whole year (I think eight months). My technical school took the top three prizes. Why? We were the only ones who actually worked directly with the patients.
 

Motanache

Joined Mar 2, 2015
540
I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?

And how is the data fed in? We hear about AI computers "reading" medical journals. Is it actually understanding the text in the files? Or is that data simply converted to some kind of database and then loaded into the AI computer?
A very good question.
Although you know the answer, I'd like to discuss this, because I think it's the essence of AI.
A complex function is formed by repeatedly applying the same routine,
a kind of f(f(...f(x))).
The most commonly used function f(x) is the sigmoid:
f(x) = 1 / (1 + e^(-x))

We thus have a complex function built from addition, multiplication, and exponentiation, but with unknown coefficients.

We give it a set of known data to teach it addition:

1 + 2 = 3
2 + 2 = 4
...

Now the program interpolates the function, i.e. it adjusts its coefficients so that its output comes as close as possible to the given data.

[plot: the fitted function passing close to the given data points]

I wrote such a program a long time ago, and I convinced a programmer that what AI programs do is essentially interpolation. A tiny sketch of the idea follows below.
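To make the "adjust the coefficients" step concrete, here is a minimal sketch in C: it fits a linear function y = a*x1 + b*x2 + c to a handful of addition examples by gradient descent. The data, learning rate, and iteration count are illustrative choices of mine, not taken from any particular program or paper.

```c
/* Tiny illustration of "adjusting coefficients to fit known data":
   fit y = a*x1 + b*x2 + c to a few addition examples by gradient descent.
   Learning rate and iteration count are arbitrary illustrative choices. */
#include <stdio.h>

int main(void)
{
    const double x1[] = {1, 2, 3, 4}, x2[] = {2, 2, 1, 5};
    const double y[]  = {3, 4, 4, 9};          /* the "known data": x1 + x2 */
    const int n = 4;
    double a = 0.0, b = 0.0, c = 0.0;           /* unknown coefficients      */
    const double lr = 0.01;

    for (int it = 0; it < 20000; it++) {
        for (int i = 0; i < n; i++) {
            double err = (a * x1[i] + b * x2[i] + c) - y[i];
            a -= lr * err * x1[i];              /* move each coefficient     */
            b -= lr * err * x2[i];              /* against the error         */
            c -= lr * err;
        }
    }
    printf("a=%.3f b=%.3f c=%.3f  ->  5 + 7 ~ %.3f\n",
           a, b, c, a * 5 + b * 7 + c);
    return 0;
}
```

After training, a and b end up close to 1 and c close to 0, so the fitted function reproduces addition even on pairs it never saw.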


In nature, the nerve impulse follows the "all or none" law:
https://www.verywellmind.com/what-is-the-all-or-none-law-2794808

It is basically a chain reaction that is either triggered or not.
[diagram: an action potential, the all-or-none spike]

Thus the nerve impulse is binary, 0 or 1, with a maximum repetition rate of a few hundred Hz.

The transmembrane potential, or membrane voltage, is about -40 to -80 mV (the membrane is insulating and forms a capacitor). Given that Luigi Galvani discovered electric current starting from the twitching of a frog's leg, this is to be expected.

Positive Na+, K+, and Ca2+ ions and negative Cl- ions establish this potential; they are driven by the difference in electrical potential and by the chemical gradient.

The neuron is like a logic gate, but not like the logic gates known in electronics. There are programs that simulate the behavior of a real neuron; a minimal sketch of one such model follows below.

The axon terminals of several neurons reach a given neuron, but it responds in 0s and 1s through a single output, its own axon.

Basically, depending on where another neuron's terminal lands on the neuron in question, it has a greater or lesser power to depolarize it and thus to determine its "0 or 1" response.
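As a rough illustration of that all-or-none behaviour, here is a minimal leaky integrate-and-fire sketch in C: the membrane is modelled as a leaky capacitor that charges toward a threshold and emits a spike (the "1") when it gets there. All parameter values are round illustrative numbers, not measurements.

```c
/* Leaky integrate-and-fire sketch of the "all or none" behaviour described
   above.  The membrane is treated as a leaky capacitor; when its voltage
   crosses a threshold the neuron emits a spike (the "1") and resets.
   Parameter values are illustrative round numbers, not measured data. */
#include <stdio.h>

int main(void)
{
    const double dt      = 0.1e-3;   /* time step: 0.1 ms                  */
    const double tau     = 10e-3;    /* membrane time constant: 10 ms      */
    const double v_rest  = -70e-3;   /* resting potential: -70 mV          */
    const double v_thr   = -50e-3;   /* spike threshold: -50 mV            */
    const double v_reset = -75e-3;   /* post-spike reset: -75 mV           */
    const double r_m     = 10e6;     /* membrane resistance: 10 Mohm       */
    const double i_in    = 2.5e-9;   /* constant input current: 2.5 nA     */

    double v = v_rest;
    for (int step = 0; step < 1000; step++) {          /* simulate 100 ms  */
        /* dV/dt = (-(V - Vrest) + R*I) / tau  -- leaky RC charging        */
        v += dt * (-(v - v_rest) + r_m * i_in) / tau;
        if (v >= v_thr) {                               /* all or none     */
            printf("spike at t = %.1f ms\n", step * dt * 1e3);
            v = v_reset;
        }
    }
    return 0;
}
```

With these numbers the neuron settles into a regular firing pattern of a few tens of hertz, well below the few-hundred-hertz ceiling mentioned above.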

I wrote this to show how different an ANN is from what happens in nature. But, to be fair, it should be appreciated as a beginning.

If you want, we can start from a simple ANN (artificial neural network) program in C and comment on it; a sketch of such a program is given below.

There are many C compilers online, so we don't even need to install a compiler.
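Taking up that offer, here is a minimal sketch of such a program: a 2-2-1 network of sigmoid units trained by backpropagation to learn XOR. The network size, learning rate, and epoch count are arbitrary illustrative choices, and the code is a from-scratch sketch rather than any particular published example.

```c
/* Minimal 2-2-1 feedforward network trained with backpropagation on XOR.
   Illustrative sketch only: network size, learning rate and epoch count
   are arbitrary choices.  If a given seed lands in a local minimum,
   try a different value in srand(). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double sigmoid(double x)  { return 1.0 / (1.0 + exp(-x)); }
static double dsigmoid(double y) { return y * (1.0 - y); }   /* derivative, given the output y */
static double frand(void)        { return (double)rand() / RAND_MAX - 0.5; }

int main(void)
{
    const double in[4][2]  = { {0,0}, {0,1}, {1,0}, {1,1} };
    const double target[4] = {   0,     1,     1,     0   };
    double w_ih[2][2], b_h[2];   /* input -> hidden weights and biases  */
    double w_ho[2],    b_o;      /* hidden -> output weights and bias   */
    const double lr = 0.5;       /* learning rate (illustrative choice) */

    srand(1);
    for (int i = 0; i < 2; i++) {
        b_h[i]  = frand();
        w_ho[i] = frand();
        for (int j = 0; j < 2; j++) w_ih[j][i] = frand();
    }
    b_o = frand();

    for (int epoch = 0; epoch < 20000; epoch++) {
        for (int s = 0; s < 4; s++) {
            /* forward pass: nested sigmoids, f(f(x)) */
            double h[2], out;
            for (int i = 0; i < 2; i++)
                h[i] = sigmoid(in[s][0] * w_ih[0][i] + in[s][1] * w_ih[1][i] + b_h[i]);
            out = sigmoid(h[0] * w_ho[0] + h[1] * w_ho[1] + b_o);

            /* backward pass: nudge every coefficient toward the known answer */
            double d_out = (target[s] - out) * dsigmoid(out);
            for (int i = 0; i < 2; i++) {
                double d_h = d_out * w_ho[i] * dsigmoid(h[i]);
                w_ho[i] += lr * d_out * h[i];
                for (int j = 0; j < 2; j++) w_ih[j][i] += lr * d_h * in[s][j];
                b_h[i]  += lr * d_h;
            }
            b_o += lr * d_out;
        }
    }

    /* show what the trained network now outputs for the four inputs */
    for (int s = 0; s < 4; s++) {
        double h[2];
        for (int i = 0; i < 2; i++)
            h[i] = sigmoid(in[s][0] * w_ih[0][i] + in[s][1] * w_ih[1][i] + b_h[i]);
        printf("%g XOR %g -> %.3f\n", in[s][0], in[s][1],
               sigmoid(h[0] * w_ho[0] + h[1] * w_ho[1] + b_o));
    }
    return 0;
}
```

Paste it into any online C compiler; after training, the four printed outputs should be close to 0, 1, 1, 0, which is exactly the "adjust the coefficients until the function passes near the known data" idea described above.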
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
It's not understanding; it's all pattern matching over possibilities. It's incredibly powerful (brute force applied to huge amounts of data) and, at the same time, incredibly limited in what we call 'understanding' (human reasoning). Modern ML, Big Data, and Deep Learning amount to programming without explicit instructions, but the end result is mindless, tacit machine programming, not tacit knowledge.

" The overestimation of technology is closely connected with the underestimation of humans. "
https://www.latentview.com/blog/moravecs-paradox-why-ai-cannot-replace-humans/
Moravec’s Paradox: Why are Simple Tasks Hard for AI?
Simply put, our brains are the products of millions of years of evolution and natural selection. The things that humans find hard are only hard because they are new. The skills that we already acquired through evolution come to us so naturally that we do not have to think about it. How exactly are we going to teach a machine the things that we do not even think about? As Polanyi famously said, “We can know more than we can tell.”

“Alchemy and Artificial Intelligence”
https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf

https://cacm.acm.org/magazines/2021...ais-new-romance-with-tacit-knowledge/fulltext
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
Applying our current techniques for understanding (reverse engineering) the human brain to a simple microprocessor:

https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor as a validation platform for time-series and structure discovery methods.
Here we have taken a reconstructed and simulated processor and treated the data “recorded” from it in the same way we have been trained to analyze brain data. We have used it as a test case to check the naïve use of various approaches used in neuroscience. We have found that the standard data analysis techniques produce results that are surprisingly similar to the results found about real brains. However, in the case of the processor we know its function and structure and our results stayed well short of what we would call a satisfying understanding.
 

Motanache

Joined Mar 2, 2015
540

MrAl

Joined Jun 17, 2014
11,389
What is a human brain?
I think one explanation is that the human brain is a "brain of brains" with consciousness at the controls, although sleep may work a little differently. It has the ability to expand its storage as well as to increase its connectivity.

Artificial intelligence is essentially an attempt to mimic this functionality with more fixed, mechanical techniques.
One simple example is a program that asks questions and stores the answers in a database. The compiled data can be associated with other data, so that when a question is later asked, the program can find an association and render an answer. Here's a simple dialog between a computer C and a human H.
In this conversation the computer already knows what a resistor is but does not yet know what a diode is. It then learns of the diode's existence, as well as some properties, such as the number of leads, that will let it make decisions later. Note that <cr> is the ENTER key and 'q' is the Q key.
It is also interesting to note that sometimes the human has to provide questions for the computer.

(the first query by the computer...)
C: Think of a part. Hit <cr> when you are ready, q to quit
H: diode
C: Does it have more than two leads?
H: no
C: Is it a resistor?
H: no
C: I give up. What was it?
H: diode
C: Please give me a question that would distinguish a resistor from diode
H: does it conduct current in two directions?
C: For diode the answer would be:
H: no
(then the second query...)
C: Think of a part. Hit <cr> when you are ready, q to quit
H: diode
C: Does it have more than two leads?
H: no
C: does it conduct current in two directions?
H: no
C: Is it diode?
H: yes
(then the third query...)
C: Think of a part. Hit <cr> when you are ready, q to quit.

Note that the computer has learned about the diode and the way it conducts current, which is different from the way a resistor conducts current. The diode and the associated info would then be stored forever in its permanent memory. A sketch of how such a program might be structured is below.
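For anyone who wants to see roughly how such a learning-by-asking program can be organized, here is a hedged sketch in C (my own illustration, not the original program described above): a binary tree of yes/no questions with part names at the leaves, grafting in a new question whenever it guesses wrong. All wording and names in it are illustrative.

```c
/* Sketch of the "learning by asking" dialog described above: a binary tree
   of yes/no questions with part names at the leaves.  When the program
   guesses wrong it asks the human for a new distinguishing question and
   grafts it into the tree.  In-memory only; a real version would also save
   the tree to disk so the knowledge is "forever stored". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Node {
    char text[128];              /* a question, or a part name at a leaf */
    struct Node *yes, *no;       /* both NULL for a leaf                 */
} Node;

static Node *new_node(const char *text, Node *yes, Node *no)
{
    Node *n = malloc(sizeof *n);
    strncpy(n->text, text, sizeof n->text - 1);
    n->text[sizeof n->text - 1] = '\0';
    n->yes = yes;
    n->no  = no;
    return n;
}

static void read_line(char *buf, size_t len)
{
    if (fgets(buf, (int)len, stdin) == NULL) exit(0);
    buf[strcspn(buf, "\n")] = '\0';
}

static int ask_yes_no(const char *question)
{
    char answer[16];
    printf("C: %s\n", question);
    read_line(answer, sizeof answer);
    return answer[0] == 'y' || answer[0] == 'Y';
}

int main(void)
{
    /* Start out knowing only one part, as in the dialog above. */
    Node *root = new_node("resistor", NULL, NULL);
    char line[128];

    for (;;) {
        printf("C: Think of a part. Hit <cr> when you are ready, q to quit\n");
        read_line(line, sizeof line);
        if (line[0] == 'q') break;

        /* Walk the question tree until we reach a leaf (a guess). */
        Node *node = root;
        while (node->yes != NULL)
            node = ask_yes_no(node->text) ? node->yes : node->no;

        char prompt[192];
        snprintf(prompt, sizeof prompt, "Is it a %s?", node->text);
        if (ask_yes_no(prompt))
            continue;                       /* guessed right, nothing to learn */

        /* Wrong guess: learn the new part and a question that separates it. */
        char part[128], question[128];
        printf("C: I give up. What was it?\n");
        read_line(part, sizeof part);
        printf("C: Please give me a question that would distinguish a %s from %s\n",
               node->text, part);
        read_line(question, sizeof question);
        snprintf(prompt, sizeof prompt, "For %s would the answer be yes?", part);
        int yes_for_new = ask_yes_no(prompt);

        /* Graft: the old leaf becomes a question with two new leaves. */
        Node *old_leaf = new_node(node->text, NULL, NULL);
        Node *new_leaf = new_node(part, NULL, NULL);
        strncpy(node->text, question, sizeof node->text - 1);
        node->text[sizeof node->text - 1] = '\0';
        node->yes = yes_for_new ? new_leaf : old_leaf;
        node->no  = yes_for_new ? old_leaf : new_leaf;
    }
    return 0;
}
```

Each wrong guess adds exactly one new question node, so the program's "permanent memory" grows as a decision tree of everything the human has taught it.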
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
https://www.technologyreview.com/20...-ai-failed-covid-hospital-diagnosis-pandemic/

When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.

But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”

It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.
 
The future is AI, and people need to get ready for it, because the creation of AI technology is in the hands of those of us who understand the machine beyond simply feeding data into it to make it perform a certain action.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,079
The future is AI, and people need to get ready for it, because the creation of AI technology is in the hands of those of us who understand the machine beyond simply feeding data into it to make it perform a certain action.
Happy to know that.
 