Discussion in 'General Science' started by nsaspook, Nov 17, 2010.
This is the basis for "Neural Net" type memory and processors. There are a large number of issues, as bits/bytes cannot be clearly mapped onto a biological brain; we can only mimic the base structure.
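A minimal sketch of "neural net type memory", using a tiny Hopfield-style network: store one pattern in the connection weights, then recall it from a corrupted copy. Everything here (the 4-unit size, the pattern) is made up for illustration.

```python
# Tiny Hopfield-style associative memory: one stored pattern,
# recalled from a noisy input. Units take values +1/-1.

def train(patterns):
    """Hebbian weights: w[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronously update all units a few times; settles on a stored pattern."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, -1]
w = train([stored])
noisy = [1, 1, 1, -1]          # one bit flipped
print(recall(w, noisy))        # -> [1, -1, 1, -1], the stored pattern
```

The "memory" lives entirely in the weights, not in addressable bits — which is exactly why bits/bytes don't map cleanly onto this kind of structure.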
It will be quite a while before any processing device becomes "self aware", if it is even possible. Even a program under rigid rules, such as chess, takes a huge amount of processor power, a database of human moves, and brute-force anticipation of play several moves ahead.
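The brute-force lookahead part is just minimax search. A rough sketch, over a hypothetical toy game tree (the tree shape and scores are made up; a real chess engine would generate moves from a board position):

```python
# Minimax: brute-force lookahead over a game tree.
# A state is (children, leaf_score); leaves have no children and
# carry a score from the maximizing player's point of view.

def minimax(state, depth, maximizing):
    """Best achievable score, searching `depth` plies ahead."""
    children, score = state
    if depth == 0 or not children:
        return score
    if maximizing:
        return max(minimax(c, depth - 1, False) for c in children)
    return min(minimax(c, depth - 1, True) for c in children)

leaf = lambda s: ([], s)
# Two moves for us, two replies each; opponent picks the worst for us.
tree = ([([leaf(3), leaf(5)], None),
         ([leaf(-1), leaf(9)], None)], None)

print(minimax(tree, 2, True))   # -> 3
```

Even this toy version makes the cost visible: the tree grows exponentially with depth, which is why real engines burn so much processor power.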
Until a basic AI is built, which will take a huge number of gates, no great leaps will be made (Sci-Fi doesn't count).
Let's see, at $0.10 per synapse you could retire the national debt with just one brain. Anybody got one we can donate to Washington?
It would turn to mush within a week. Washington has experience in such matters.
Wikipedia says there are only 100 billion neurons. I guess someone's only half smart. That's inflation.
More on topic...
If you want to be inspired, read about Santiago Ramón y Cajal
(http://homepages.nyu.edu/~eh597/cajal.htm). As a testament to his doggedness, he meticulously traced individual neurons through serial sections of brain tissue to support his neuronal hypothesis. His drawings are still classics.
Here is "the whole brain atlas":
I suspect we are approaching the densities needed for true machine self-awareness, but lack the basic theory of how to go about it. I tend to think "synthetic intelligence" is a better term, since "artificial" implies it isn't real.
The movie AI got several things right, though: when it comes, if we aren't careful, it will take over everywhere, in a manner not to our (humanity's) liking. I also tend to think that if you make true intelligence, it should have rights; otherwise you have made an enemy.
Yea, I can see it now. You'd get 20 years for deleting Windows XP sp. 10.1.
No way. Human rights are not tied to intelligence.
Then there could be war. There will be plenty of machines that have the capacity but not the software, and those would not be sentient. Of course, if we get it right and they like being slaves... Somehow I doubt we'll have that much control over something that complex.
Last I heard, there was serious effort to understand neural nets well enough to design with them, mostly by simulating them on digital computers. If this comes to pass, we could be making a new type of processing architecture on silicon (or whatever). It goes back to what I said earlier: we don't have the theories.
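The "simulating them on digital computers" approach, at its absolute smallest: a feedforward net with two hidden units computing XOR. The weights here are picked by hand purely for illustration; the whole point of real neural-net research is learning them instead.

```python
# A tiny feedforward neural net simulated in software.
# Two hidden units plus one output unit, with hand-picked
# weights that happen to compute XOR.

def step(x):
    """Threshold activation: the unit fires (1) when its net input is positive."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or  = step(1.0 * x1 + 1.0 * x2 - 0.5)      # hidden unit acting as OR
    h_and = step(1.0 * x1 + 1.0 * x2 - 1.5)      # hidden unit acting as AND
    return step(1.0 * h_or - 1.0 * h_and - 0.5)  # output: OR and not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, '->', xor_net(a, b))   # -> 0, 1, 1, 0
```

Scaling this from 3 units to the brain's trillions of connections is exactly where the missing theory bites: the simulation is easy, knowing what to wire is not.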
AI with current technology is a total pipe dream. The complexity of brain structures points to a different method of processing, and we don't have a clue how it really works. Logical inference/neural nets as programmed on digital computers don't seem capable of mapping our mental state, except as a simple interface to the external world. Even if we could create a massive 125-trillion-connection gate array as an inference engine, it might be useless for real AI.
Does anyone have a library with an imagination routine I could use?
I do not think we have to worry too much about this for now.
I guess when your computer automatically starts including the .lib on all your builds is when you need to worry.