Why humans learn faster than AI—for now

Thread Starter

nsaspook

Joined Aug 27, 2009
9,092
It's much easier to make a solution that ignores the problems of full automation handling a random driving route from point A to point B, anytime, anywhere. The routes are fully pre-mapped by humans, caution-flagged, and backed by remote human monitoring systems for the corner cases the local computers are unable to handle. They build an electronic rail for the car to follow.
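
To make that concrete, here is a rough sketch of what an "electronic rail" amounts to: the car only drives pre-mapped, human-annotated segments, and anything off the map or below a confidence threshold gets kicked up to a remote human monitor. (Names, thresholds, and structure are made up for illustration; this is not any vendor's actual stack.)

Code:
# Sketch of the "electronic rail": drive only on pre-mapped, human-annotated
# segments; escalate anything unmapped or low-confidence to a remote monitor.
from dataclasses import dataclass

@dataclass
class Segment:
    segment_id: str
    caution_flagged: bool    # hand-flagged by human mappers (construction, odd geometry, ...)
    speed_limit_mps: float

PREMAPPED_ROUTE = {
    "seg-001": Segment("seg-001", caution_flagged=False, speed_limit_mps=13.0),
    "seg-002": Segment("seg-002", caution_flagged=True, speed_limit_mps=4.0),
}

def plan_step(segment_id, perception_confidence):
    seg = PREMAPPED_ROUTE.get(segment_id)
    if seg is None:
        return "STOP: off the pre-mapped rail, request remote operator"
    if seg.caution_flagged or perception_confidence < 0.9:
        return f"CREEP at {min(seg.speed_limit_mps, 2.0):.1f} m/s, stream video to remote monitor"
    return f"DRIVE at {seg.speed_limit_mps:.1f} m/s along {seg.segment_id}"

print(plan_step("seg-001", 0.97))  # normal driving on a mapped segment
print(plan_step("seg-002", 0.97))  # caution-flagged segment -> slow down, human oversight
print(plan_step("seg-999", 0.97))  # unmapped -> fall back to the remote human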

This was impressive for the time.
https://techwireasia.com/2021/04/this-self-driving-car-drove-safely-all-over-south-korea-in-1993/
Han Min-hong, now 79, successfully tested his self-driving car on the roads of Seoul in 1993 – a decade before Tesla was even founded. Two years later, it drove 300 kilometers (185 miles) from the capital to the southern port of Busan, on the most heavily-traveled expressway in South Korea.

Footage from the period shows the car barreling down a highway, with no one behind the wheel. A 386-chip-powered desktop computer, complete with monitor and keyboard, is placed on the passenger seat. Han is sitting in the back, waving at the camera.

When you see this from 1993, it's hard, IMO, to be impressed by a few cars on set paths today.
Even so, Han believes there are limits to what self-driving technology can achieve, and that true autonomy is beyond reach. Neural networks do not have the flexibility of humans when faced with a novel situation that is not in their programming, he said, predicting that self-driving vehicles will largely be used to transport goods rather than people.

“Computers and humans are not the same,” he added.
 

MrAl

Joined Jun 17, 2014
8,616
I see some similarity between humans and AI, like driving a car.

1) An AI system needs time to train. An untrained human also takes time to learn to drive a car.

2) A road accident can be caused by a trained person. A self-driving car can also be the cause of a road accident.

What matters is making the right decision at the right time. In terms of driving a car, I think a human can make better decisions than an AI system.
Hi,

So what are you saying, that we are judging self-driving cars too harshly?
If we had all the statistics in front of us we could easily tell, but I don't have them yet.
 

Wolframore

Joined Jan 21, 2019
2,483
What is an acceptable accident rate?
Human: 0.01%, one in 10,000?
AI: 0.00001%, one in 10 million?

What is it currently, and what is acceptable?

Human beings can be very good, if we could see through fog and not be distracted by food, passengers, phones, or annoyed by other drivers, etc.

If the vehicle is capable of a 0.00001% crash rate and can mitigate damage and injury in the event of an accident, I might be on board. I hate driving. I would love to commute while reading.
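
(Spelling out the arithmetic behind those percentages, assuming they are per-trip figures since the unit isn't stated:)

Code:
# Convert a percentage rate into a "one in N" figure.
for label, pct_str in [("human", "0.01"), ("AI", "0.00001")]:
    one_in = 100 / float(pct_str)
    print(f"{label}: {pct_str}% -> about 1 in {one_in:,.0f}")
# human: 0.01% -> about 1 in 10,000
# AI: 0.00001% -> about 1 in 10,000,000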
 

Thread Starter

nsaspook

Joined Aug 27, 2009
9,092
I guess we will eventually see how safe they will be, but I just don't see the need for self-driving cars for most daily drives. IMO, self-driving cars for the general public are a solution looking for a problem.

https://www.strongtowns.org/journal/2018/9/12/driverless-cars-and-the-cult-of-technology
Driverless Cars and the Cult of Technology

 


Thread Starter

nsaspook

Joined Aug 27, 2009
9,092
https://www.defenseone.com/technolo...it-had-90-success-rate-it-was-more-25/187437/
But Simpson said the low accuracy rate of the algorithm wasn’t the most worrying part of the exercise. While the algorithm was only right 25 percent of the time, he said, “It was confident that it was right 90 percent of the time, so it was confidently wrong. And that's not the algorithm's fault. It's because we fed it the wrong training data.”
Simpson said that such results don’t mean the Air Force should stop pursuing AI for object and target detection. But it does serve as a reminder of how vulnerable AI can be to adversarial action in the form of data spoofing. It also shows that AI, like people, can suffer from overconfidence.
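
Here's a toy illustration of "confidently wrong" (the numbers are invented to mirror the 25%-accuracy / ~90%-confidence anecdote, not the exercise's real data): compare how often the model is right against how confident it claims to be.

Code:
# Toy calibration check: accuracy vs. the model's claimed confidence.
import random

random.seed(0)
N = 1000
correct = [random.random() < 0.25 for _ in range(N)]          # right about 25% of the time
confidence = [random.uniform(0.85, 0.95) for _ in range(N)]   # claims roughly 90% certainty

accuracy = sum(correct) / N
mean_conf = sum(confidence) / N
print(f"accuracy:           {accuracy:.0%}")
print(f"mean confidence:    {mean_conf:.0%}")
print(f"overconfidence gap: {mean_conf - accuracy:.0%}")      # a large gap means badly calibrated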
 

k1ng 1337

Joined Sep 11, 2020
235
I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?

And how is the data fed in? We hear about AI computers "reading" medical journals. Is it actually understanding the text in the files? Or is that data simply converted to some kind of database and then loaded into the AI computer?
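
(The "weigh all the possibilities and pick from the top of the list" picture is, at its simplest, just a scored search; the sketch below hard-codes the scores for illustration, whereas a real system learns the scoring function from data rather than enumerating everything.)

Code:
# Simplest form of "score every candidate action, pick the best".
def choose_action(candidates, score):
    return max(candidates, key=score)

actions = ["brake", "coast", "accelerate"]
risk = {"brake": 0.1, "coast": 0.3, "accelerate": 0.8}   # hypothetical hand-written scores
best = choose_action(actions, score=lambda a: 1.0 - risk[a])
print(best)  # "brake" -- the lowest-risk candidate wins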
After much contemplation, I think AI is an extension of how a human brain (the engineer's) operates. It can be said that AI would not exist if it weren't for human creation.

I've often wondered what separates the living from the non-living. It is odd to think a volcano is a non-living summation of the physical processes within and around it.

An interesting question I like to put to the philosopher, reflecting on my first paragraph: is it possible for a human to imagine all possibilities?

Furthermore, if advanced creatures such as humans came into existence against all odds, I don't see why it can't happen again, especially if it's already been given the right ingredients. Naturally this extends into the realm of "God"...
 