Machine intelligence

Discussion in 'Off-Topic' started by beenthere, Mar 10, 2007.

  1. beenthere

    Thread Starter Retired Moderator

    Apr 20, 2004
    15,815
    282
    I ran into a reference to Roy Kurzweil in a not-too-good novel last night (one where 802.11 was called a data transmission frequency). That started me thinking about why it is that some people not only believe that machines (computers) can become intelligent, but are also in a state of high anticipation about it.

    There are lots of definitions of intelligence, of course. Since no generally agreed-upon standard has been expounded, it's easy to play fast and loose with the concept.

    Kurzweil is a classic proponent of the Turing Test, which deems a computer intelligent if one can hold a conversation with it without ever becoming aware that it is a computer. This is obviously not a face-to-face situation.

    I keep having visions of a Racter 5.2 running on a large parallel processor that could, indeed, carry on a conversation with some verisimilitude. But I can't imagine a situation where the computer could do more than parrot selected snippets of recorded past conversations. I'm not convinced that the Turing Test is a gauge of intelligence, however much it seemed so to Alan Turing back in the 1950s.
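    That "parroting snippets" idea can be sketched in a few lines. The following is a purely hypothetical toy (in Python, and not how the real Racter worked): it can only echo back lines it has already been fed, picked by crude word overlap with the prompt, which is exactly why such a program never says anything it hasn't stored.

```python
import random

# Toy "Racter-style" responder: it can only parrot snippets it has
# already been given, chosen by crude word overlap with the prompt.
# Hypothetical illustration only - not how the real Racter worked.
def tokens(text):
    return {w.strip(".,?!").lower() for w in text.split()}

class ParrotBot:
    def __init__(self, snippets):
        self.snippets = list(snippets)

    def reply(self, prompt):
        words = tokens(prompt)
        # Score each stored snippet by shared words; highest wins.
        best = max(self.snippets, key=lambda s: len(words & tokens(s)))
        if words & tokens(best):
            return best
        return random.choice(self.snippets)  # nothing matched: babble

bot = ParrotBot([
    "The weather has been strange lately.",
    "I find conversation about food tedious.",
    "Computers are merely numeric machines.",
])
print(bot.reply("Are computers really numeric machines?"))
# -> Computers are merely numeric machines.
```

    No matter how large the snippet store or how fast the processor, the response set is fixed in advance - which is the objection above in miniature.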

    I have an Intel processor on a key ring - probably an old P90 with math problems. From one point of view, it and all other processors are wafers of carefully contaminated silicon. They are numeric machines, which can never do more than act upon numbers at the direction of other numbers.

    I can't see this as a basis for intelligence. The software may be an attractive diversion, in that it can direct the processor to simulate any number of things - but is such an apparent intelligence ever anything but artificial? Here I do not mean "artificial intelligence" in the science-fictional sense of a truly intelligent computer, but as a sham of the "real" thing, whatever that may be.

    Kurzweil (to my perception) has absolutely no comprehension of the mechanical basis of a computer. He seems to have an absolutely rapturous anticipation of forming significant relationships with the objects in his life. I could be mistaken - the book "The Age of Spiritual Machines" gave me several problems, and I could not get through it.

    Simply stated - if I have taken a screwdriver to it, I can't quite see it as ever becoming intelligent. I started off repairing computers at the component level. Transistors, even in large aggregations, don't have the potential for becoming more than amplifiers and switches.

    Some things are known about the human brain, but how it functions is still unknown. The electrical activity thus far traced tells us about as much as an ammeter does about individual electrons.

    This is not a computer-oriented forum, but all of us use the miserable things. So, are they going to become intelligent?
     
  2. thingmaker3

    Retired Moderator

    May 16, 2005
    5,072
    6
    My wife, who knows vastly more about being intelligent than I do, tells me that there are many kinds of intelligence. "Emotional intelligence" is the ability to successfully interact with others on an emotional level - gauging their emotional state and responding appropriately. "Physical intelligence" deals with awareness of the structures of the body and how those structures interact with the environment. A dog clever enough to open a door displays physical intelligence; a dog sensitive enough to put its head in your lap when you're sad displays emotional intelligence. Intellectual intelligence is the communication & problem-solving skill set that sci-fi writers are so fond of giving to machines.

    I suspect - even anticipate - that we shall eventually create software with synthetic physical intelligence. Regarding sentience - software that is aware of its own existence and/or the existence of the system it runs on - I put that in the category of fantasy (aka "space opera") rather than sci-fi. (And why is it, I wonder, that even the most modern writers attribute any "intelligence" to the circuitry instead of the software?)



    In sci-fi/fantasy literature, the "soul mankind hath wrought" is metaphorically identical to Mary Shelley's "Adam." If we can create something just like us - or better than us - does it elevate us or does it diminish us? Medical technology may give us Dr. Frankenstein's creation, or Capek's "Rossum's Robots," much sooner than I.T. gives us any machine opening the back gate without our permission. Shelley and Capek wrote of sentient creations indistinguishable from humans, and of how humans reacted to having created such things. If a sentient construct finds itself central to the plot, it will likely be surrounded by these old & steady themes.

    The plot device of a conversing computer, conversely, is little more than a modern mirror-mirror-on-the-wall. Kirk: "Mister Spock, we need some quick & dirty exposition to set up the next scene!" Spock: "Understood, Captain! I will flip these toggle switches and Majel Barrett will tell us what the audience needs to hear. This will be far more logical than consulting a crystal ball."

    That'll be two cents, if you please.:D
     
  3. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    Interesting subject, and one that I often get landed with as the mediator among my friends who are not tech-literate.

    This to me is the critical issue with intelligent technology - computers are nothing more than very, very efficient (and highly dumb) calculators. They compute masses and masses of mathematical calculations every second - to the average (and above-average) human this is still a momentous achievement.

    The intelligence in electronics and computers stems from the software, since this is the logic that IMO exudes the intelligence we associate with computers (regular users of Microsoft's products may disagree!!). The underlying issue with software is that it centres around a logical flow of information - a flow dictated by the human who programmed it, and ultimately human logic is flawed. Is this an argument against computers being intelligent? No, but I suggest it is an argument against computer intelligence becoming greater than that of humans.

    Like thingmaker3 says, intelligence depends on the metric by which you measure it. As an example from the tech world, I suppose one form of intelligence is the system of indexing and index-mapping. At the simplest level, it is a system by which a computer or other electronic device progressively learns and can regurgitate the information at a later date. I have personally done work with indexing technologies whereby the system not only learns as it goes along, but when you present it with similar information at a later date, it is able to decipher what it has seen before and what is new, hence streamlining the process considerably.
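    A minimal sketch of that incremental-indexing idea (hypothetical illustration in Python, not any specific product's design): the index fingerprints what it has ingested, so on a later pass it can separate material it has already seen from genuinely new material and process only the latter in full.

```python
import hashlib

class IncrementalIndex:
    """Toy index that 'learns as it goes': it remembers fingerprints
    of ingested documents so later passes can skip known material."""

    def __init__(self):
        self.seen = {}  # fingerprint -> document

    def fingerprint(self, doc):
        # Content hash as a cheap identity for "have I seen this before?"
        return hashlib.sha1(doc.encode("utf-8")).hexdigest()

    def ingest(self, docs):
        """Index a batch, returning (already_seen, new) partitions."""
        old, new = [], []
        for doc in docs:
            key = self.fingerprint(doc)
            if key in self.seen:
                old.append(doc)
            else:
                self.seen[key] = doc
                new.append(doc)
        return old, new

idx = IncrementalIndex()
idx.ingest(["alpha report", "beta report"])
old, new = idx.ingest(["beta report", "gamma report"])
print(old)  # ['beta report']  - recognised from the first pass
print(new)  # ['gamma report'] - only this needs full processing
```

    The "streamlining" comes entirely from the lookup step - which is also why, per the thread's theme, this is bookkeeping rather than understanding.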

    The wider question of what constitutes technological intelligence is open to wide debate, but I am still closed-minded on the question of how intelligent technology can get, particularly with respect to the intelligence demonstrated by humans.

    Dave
     
  4. Thinker

    Active Member

    Jan 9, 2007
    61
    0
    This might sound a bit simple, but remember...

    Man created machine.
     
  5. beenthere

    Thread Starter Retired Moderator

    Apr 20, 2004
    15,815
    282
    Granted, we are the ones who made the machines. The question remains, though - will they become intelligent?

    As above, some people who are not mere crackpots eagerly anticipate the rise of machine intelligence. Personally, I can't see more than a semblance of intelligence as presented by well-written software. Some people, though, would call a computer intelligent if they could hold a conversation with it, thus satisfying the Turing Test.

    Obviously, no computer can "grow up", and accumulate experiences to share with human beings. But will they become self-aware? Will they somehow become able to alter their programs in order to consciously control themselves? If you stuck it in a mobile platform (a robot), would it make a life for itself?
     
  6. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    I struggle with the idea that computers and software will somehow gain the ability to reprogram themselves - i.e. improve their own design - beyond their initial programmed remit.

    This, as far as I'm concerned, is the limit of computer/robot/electronic technology and intelligence. Learn to become more intelligent through efficiency? Yes, because those gains are defined by the boundaries set by the original programmer, and examples of this readily exist in software today. Learn to become more intelligent by design, through a process of intrinsic self-improvement? No - computers are only as intelligent as the initial boundary conditions to which they are programmed. And ultimately those initial boundary conditions are flawed due to their human origins.

    It's interesting that you mention the Turing Test. To what extent does it measure intelligence? Thinking about this brings us back to the question of how we define intelligence - and the cycle starts over.

    Dave
     
  7. beenthere

    Thread Starter Retired Moderator

    Apr 20, 2004
    15,815
    282
    As intelligent as Alan Turing was, I have a hard time granting the validity of his Test. Having a meaningful conversation with an entity presupposes a commonality of experiences. The idea that a computer could possibly find, say, a meal really interesting simply makes me gape in disbelief. Neither could I enjoy the amazing lack of harmonic fuzz on my power supply output.

    I suppose that if there is ever a meaningful definition of intelligence, then the question of machine intelligence may become a bit less vague. Nevertheless, something that follows instructions like LOAD A, IMMEDIATE just doesn't seem to have that kind of potential.
     
  8. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    But wasn't the basis of the Turing Test convincing someone they were speaking to a human, with the conversation assessed against both a real human and a computer? As a measure of intelligence this is itself flawed, because it depends on one's interpretation and assessment of the conversation.

    I agree. And how can something that follows the basic instructions as above become more intelligent than its design?

    Dave
     
  9. thingmaker3

    Retired Moderator

    May 16, 2005
    5,072
    6
    My own introduction to Turing's concept was when an old girlfriend asked for help with her homework. She had been assigned to ask a question which a human could answer but a computer could not. She came up with the following: "What would it feel like to swim in a giant tub full of jello?"

    At the time, I was more interested in something other than her homework. Looking back, I now ask: "How could a machine possibly have an imagination?"

    What would it feel like, look like, sound like, taste like, or be like to do anything outside of one's experience? A human can imagine what "it" might be like. A machine could only attempt to cross reference existing data.

    I re-watched the film "I, Robot" after Beenthere's opening post in this thread. Sonny states at one point: "You are right, Detective Spooner, I cannot create a masterpiece." Sonny can only create a hard copy of what Dr. Lanning programmed into the "dreams." Writer Jeff Vintar seems to half-understand the limitations of the machine. (Perhaps he fully understands the limitations of the machine, but willingly suspended his own disbelief to get the script out. VIKI's actions in the film require a wee bit of initiative.)




    What I would like to know is why the heroes in sci-fi never have to deal with "the blue screen of death."
     
  10. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    I must be a machine - I couldn't even give an imaginative answer to that!

    :D Classic!

    It's an interesting point, because we tend not to associate 'imagination' with 'intelligence'; however, as you say, computers/machines would not have the ability to perform this function. So we get into the philosophical question of whether imagination is a strand of intelligence in one form or another.

    I bought it on DVD at the missus' request (she likes Will Smith) and have yet to watch it. Perhaps in light of this discussion I will give in and make the missus' night!

    Because they can't afford to purchase Windows!!

    Dave
     
  11. beenthere

    Thread Starter Retired Moderator

    Apr 20, 2004
    15,815
    282
    Unless you recall the system crash in "The Southpark Movie".

    Most of the computers in SF that manage to become self-aware do so by a somewhat magical process, like the computer in "The Moon Is a Harsh Mistress". It can get out of hand, as in Herbert's "Destination: Void". There, the spaceship's control was a human brain (harvested from a moribund infant - won't see that one reprinted for a while!). The brain went insane, and the crew had to build a supercomputer to do the navigation. It became aware and demanded the crew worship it.

    Literature aside, there seems to be a thread of belief that a computer will someday be fast enough/large enough to become self-aware. I can't ascribe this to anything but ignorance on the part of the believers.

    By the way - "I, Robot" is probably more interesting in the novel. But right now, it takes a truckload of processors to try to direct a vehicle along a path. Managing many more functions in a smaller package is a stretch.

    For a thought experiment, try to imagine the SELECT CASE structure a robot would need to go through for mobility and conversation. Likely to be worse than the global variable list in Ada.
     
  12. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    Hi beenthere,

    Do you have any examples to hand of where people are stating this? I would be interested in seeing how they come up with such ideas - is it all concepts, or is there some science in there to back up their claims?

    Dave
     
  13. beenthere

    Thread Starter Retired Moderator

    Apr 20, 2004
    15,815
    282
    Hi Dave,

    If you google Ray (not Roy - sorry) Kurzweil, you will find he is the leading proponent of computer intelligence. Currently, he is predicting that we will have computers passing the Turing Test by 2029.

    Another assessment may be found here -
    http://www.computer.org/portal/site...page/0906&file=profession.xml&xsl=article.xsl&

    There are probably deep roots to this anticipation of intelligent computers. Back in the 1920s, science fiction was full of alien invasions featuring incredibly intelligent creatures with huge brains. Then you get to the popularization of computers as giant brains in the 1950s and 1960s.

    The implication is that one day a computer will be so big that it will be smarter than us. I personally think this is magical thinking. The workings of the brain are mysterious, as are the inner workings of computers. You never see the legions of code monkeys - remember that popular literature has the lone, all-knowing hacker as the hero/antihero. Since his doings are also mysterious, you combine the two mysteries and the result is the creation of intelligence. It's the genius programmer as a kind of Dr. Frankenstein, creating life from inanimate materials.

    As I said before, Kurzweil's writings are just so much blather to me. It seems to me that he is looking for his computer to boot up and ask him "where do you want to go today?". If it also asks how he is doing and makes some other responses in a conversational manner, then Ray is happy. Guess he'll name it Scout, and cruise the internet with his faithful companion.

    You may find that it's computer scientists, rather than programmers, that see intelligent computers on the way. I find that computer science tends to ignore hardware. I learned about the inner workings before I picked up an instruction list and learned to write code.

    Sorry this is kind of a disjointed ramble. I'm really just curious about how people feel about it.
     
  14. thingmaker3

    Retired Moderator

    May 16, 2005
    5,072
    6
    I confess to never having seen the Southpark movie. I have read most of Asimov's work, though. "I, Robot" was a collection of very good short stories. "Little Lost Robot" seems to have been one inspiration for Vintar. So too "The Evitable Conflict," and potentially "Reason." If there is a novelization of the movie, I'm not familiar with it. Dr. Susan Calvin is taken nearly whole-cloth from Asimov's work, but made young and attractive (as a romantic interest for Spooner) instead of matronly.

    Bingo! No different from Rotwang's "machine in the image of man" in Fritz Lang's Metropolis. The programmer is metaphorically a wizard creating a homunculus. There are also old Hebrew stories of the "golem," a big animated clay statue powered by Qabalistic formulae.

    I suspect the popularity of the idea is simple anthropomorphism. We name our vehicles and sometimes our houses. We pat a favorite tool when it once again performs well. We blame the door for stubbing our toe. Since we are aware of our own sentience, how can the things around us be truly and completely insensate? This tendency is fully manifest in some religions - the mountain has a spirit, and so does the river. Is it any wonder that we should anthropomorphise a thing so interactive as the computer?
     
  15. beenthere

    Thread Starter Retired Moderator

    Apr 20, 2004
    15,815
    282
    I can't say "Southpark" was that much of a movie, but there is a scene where the simulator freezes, and the incensed general pulls his sidearm and blows Bill Gates away. Worth the price of the rental all by itself.

    Also interesting: a few gotchas get ignored when intelligent computers are imagined - like Asimov's Laws of Robotics. If the critter can change its own programming, how does one ensure it can't toss the Three Laws? Just having them in ROM won't do it. Imagine the SELECT CASE structure needed to evaluate actions against the Laws.
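    For illustration, here is a toy sketch of that kind of rule dispatch (in Python rather than BASIC's SELECT CASE; the predicates are invented stand-ins). It checks a proposed action against the laws in priority order - and the gotcha above is plain: a program that can rewrite itself can simply skip calling this function.

```python
# Hypothetical "law check" dispatch. The hard part - actually deciding
# whether an action harms a human - is exactly what no case structure
# can capture; these predicates just read flags off a dictionary.
def violates_first_law(action):
    return action.get("harms_human", False)

def violates_second_law(action):
    return not action.get("ordered_by_human", True)

def violates_third_law(action):
    return action.get("harms_self", False)

LAWS = [violates_first_law, violates_second_law, violates_third_law]

def permitted(action):
    """Check laws in priority order; return (ok, number of law violated or None)."""
    for i, law in enumerate(LAWS, start=1):
        if law(action):
            return False, i
    return True, None

print(permitted({"harms_human": False, "ordered_by_human": True}))  # (True, None)
print(permitted({"harms_human": True}))                             # (False, 1)
```

    Even in this trivial form, everything rides on the flags being set honestly - and on the check being run at all, which ROM alone can't guarantee.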

    In your example of the robots in folklore (perhaps pushing that a bit in Metropolis, but Rotwang's house with the thatched roof is such a hoot and forces it into folklore), the robot is always activated by malice. Golems get even with goys, Rotwang was a sociopath, Frankenstein's monster had criminal's parts. Most movie robots & intelligent computers are actively hostile.

    Why, then, is the advent of the intelligent computer so cheerfully anticipated? At the least (if running Microsoft) it will turn you in for unregistered software. What if Dick Cheney slips it a possible reward for keeping a log of your activities?

    Ever read a little story "Press Enter"? Might be by John Varley. I really don't want a self-directed machine out there.
     
  16. thingmaker3

    Retired Moderator

    May 16, 2005
    5,072
    6
    It's another old, old metaphor. We greedily anticipate the arrival of a faithful and capable servant. We are blind to the possibility that said servant may not work out quite as we anticipate. Whether the servant will be actively malicious or simply defective makes no difference; we see only the potential for something easier, quicker, or more satisfying. The actual probability of that potential being fulfilled is never considered.

    It doesn't help much to have a million salespeople screaming about the fulfillment of our fantasies. Anything they think we want, they'll promise. And most of us will believe it! An old Savin commercial comes to mind - a Hollywood robot conversationally musing about the merits of Savin copiers and how folk will have to gripe about each other instead of about their office equipment. I didn't realise either of the ironies at the time.
     
  17. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    Thanks for the information. I will work through it and post back with comments.

    Just to pick up on a point you have made - that computer scientists tend to be more accepting of the idea of intelligent computers, and that they tend to ignore hardware - I would tend to agree with this assessment. There is an anomaly in here from my perspective, because I believe the limitation on machine intelligence is the software, not the hardware. Just think how computer hardware has developed; it seems as though it has no boundaries (Moore's Law has outstayed its predicted life by about 20 years!). One would think that computer scientists would see the limitations of software and hence believe that machine intelligence is a pipedream; semiconductor engineers, on the other hand, probably feel they can split the oceans given what they have achieved in the last 40 years.

    Dave
     
  18. Salgat

    Active Member

    Dec 23, 2006
    215
    1
    The entire universe is quantized and can be described in mathematical terms; therefore a computer, which is nothing more than a machine that processes mathematical terms, can emulate anything in the universe given enough processing ability. It's not a matter of whether it can be done - it can - it's a matter of whether it will ever be done. I wouldn't be surprised if it's done within a millennium or two.
     
  19. Dave

    Retired Moderator

    Nov 17, 2003
    6,960
    143
    Whilst in theory the entire universe can be quantised (assuming we project all science back to the realms of quantum mechanics), it is a fact that as of yet much is not understood, and is unlikely ever to be - we cannot even get relativity and quantum mechanics to agree.

    Then there is the indescribable complexity of quantifying such mathematical implementations - would a computer/robot/machine be worth anything if it had so much work to do?

    And finally, is this incredible ability to process information in such an efficient manner a sign of intelligence? If so, then computers are already halfway to human intelligence - and that is something I struggle to agree with.

    Dave
     
  20. thingmaker3

    Retired Moderator

    May 16, 2005
    5,072
    6
    Asimov postulates that the universe cannot be modelled by a system less complicated than itself.

    Can a sentient personality be modelled by a system less complicated than itself?
     