Hilarious! ... thanks for sharing
OK, it's a joke, but it does show the challenges of machine-learning techniques.
Today machine learning is undergirding every aspect of the operations of companies like Facebook, Google, and Amazon and many startups. It’s making these companies exceptionally rich. But outside that AI belt, things are moving much more slowly, for rational economic reasons.
“I think that we’ve allowed the term engineering to become diminished in the intellectual sphere,” he says. The term “science” is used instead of “engineering” when people wish to refer to visionary research. Phrases such as “just engineering” don’t help.
“I think that it’s important to recall that for all of the wonderful things science has done for the human species, it really is engineering—civil, electrical, chemical, and other engineering fields—that has most directly and profoundly increased human happiness.”
I've been arguing that for years... there's a HUGE difference between a genuinely intelligent being and a complex algorithm. We're entering the age of complex recursive computational procedures... but that doesn't mean said devices qualify as "intelligent".
https://spectrum.ieee.org/the-insti...ng-everything-ai-machinelearning-pioneer-says
Stop Calling Everything AI, Machine-Learning Pioneer Says
Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent
A very good question. I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?
And how is the data fed in? We hear about AI computers "reading" medical journals. Are they actually understanding the text in the files? Or is that data simply converted to some kind of database and then loaded into the AI computer?
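For most of today's systems it is much closer to the second description: the text is turned into numbers before the model ever sees it. Here is a minimal sketch of how a text model typically "reads" documents, using scikit-learn; the corpus and labels are invented purely for illustration:

    # Minimal sketch: how a text model "reads" documents. The corpus and
    # labels below are invented for illustration (1 = mentions an interaction).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = [
        "aspirin interacts with warfarin",
        "patient reported a mild headache",
        "ibuprofen interacts with lithium",
        "follow-up visit scheduled next week",
    ]
    labels = [1, 0, 1, 0]

    vectorizer = CountVectorizer()               # text -> word-count vectors
    X = vectorizer.fit_transform(docs)           # rows = documents, columns = words

    model = LogisticRegression().fit(X, labels)  # learns one weight per word

    # "Reading" a new sentence is just counting its words and applying the
    # learned weights; no understanding of meaning is involved.
    test = vectorizer.transform(["warfarin interacts with aspirin"])
    print(model.predict(test))                   # [1], driven mostly by "interacts"

Nothing in that pipeline knows what a drug or an interaction is; it only tracks which words co-occur with which labels, which is why "reading medical journals" is a generous description of what such systems do.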
Simply put, our brains are the products of millions of years of evolution and natural selection. The things that humans find hard are only hard because they are new. The skills we already acquired through evolution come to us so naturally that we do not have to think about them. How exactly are we going to teach a machine the things that we do not even think about? As Polanyi famously said, “We can know more than we can tell.”
There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.
Here we have taken a reconstructed and simulated processor and treated the data “recorded” from it in the same way we have been trained to analyze brain data. We have used it as a test case to check the naïve use of various approaches used in neuroscience. We have found that the standard data analysis techniques produce results that are surprisingly similar to the results found about real brains. However, in the case of the processor we know its function and structure and our results stayed well short of what we would call a satisfying understanding.
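The flavor of that failure is easy to reproduce on toy data. Below is a small sketch, using numpy only, of one standard neuroscience analysis (pairwise correlation, i.e. "functional connectivity") applied to simulated traces whose ground truth we fully control; the "transistor" signals are invented for illustration and are not the paper's recordings:

    # Toy version of the kind of analysis the paper tests: pairwise
    # correlation on simulated traces with a known ground truth.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    clock = (t // 10) % 2                 # one shared clock signal

    # Five "transistor" traces: each is the clock plus independent noise.
    # By construction, none of them is wired to any other.
    traces = np.array([clock + 0.5 * rng.standard_normal(t.size) for _ in range(5)])

    corr = np.corrcoef(traces)            # the standard connectivity analysis
    print(np.round(corr, 2))

    # Every off-diagonal correlation comes out clearly positive, so the
    # method "finds" a densely connected network. The structure is real
    # (the shared clock) but says nothing about the wiring or the logic.

The analysis is not wrong, exactly; it faithfully reports structure in the data. It just cannot distinguish a common driver from actual connectivity, which is the paper's point about mistaking statistical structure for understanding.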
When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.
But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”
It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.
In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.
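One way a tool can look useful in development and still fail in practice is shortcut learning: the model latches onto a feature that predicts the outcome only in its training hospital. The sketch below is an invented illustration of that failure mode, not a reconstruction of any of the actual covid tools; all feature names and numbers are hypothetical:

    # Invented illustration (not any of the actual covid tools) of a model
    # that looks excellent at its training hospital and fails elsewhere.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_patients(n, shortcut_holds):
        severity = rng.random(n)                  # hidden true severity
        y = (severity > 0.5).astype(int)          # 1 = severe outcome
        crp = severity + 0.8 * rng.random(n)      # weak but genuine signal
        # Hypothetical training-hospital quirk: severe patients were scanned
        # supine, so this flag matches the outcome there. Elsewhere it doesn't.
        supine = y.copy() if shortcut_holds else rng.integers(0, 2, n)
        return np.column_stack([crp, supine]), y

    X_home, y_home = make_patients(2000, shortcut_holds=True)
    X_away, y_away = make_patients(2000, shortcut_holds=False)

    model = LogisticRegression().fit(X_home, y_home)
    print("training hospital:", model.score(X_home, y_home))  # near-perfect
    print("other hospital:   ", model.score(X_away, y_away))  # far worse

In-house validation alone cannot catch this, because the shortcut holds everywhere in the training data; it only collapses when the model is evaluated on patients from a different site, which is exactly the external validation most of these tools never got.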