artificial intelligence

I've read a few times that sentient machines will exist within the next 30 years. I'm sorta reluctant to believe that. While the growth of computer technology is hugely exponential, the predictions of sentience are, to me at least, extrapolations without adequate support.

The question of what life is will become a whole lot more complicated (interesting?) if machines can ever achieve sentience.

I don't have huge faith in the Turing test. Although, if we cannot distinguish man from machine, then for all intents and purposes it is sentient.
 
To be a sentient being is to have the ability to feel and to perceive independently of something akin to a program that tells a machine what to do within a specific set of conditions. It's hard to say with certainty that an AI will never become sentient, separate from its code. Programming languages are becoming more capable, however, of "learning" from user input. I both develop and write about customizing application programming interfaces for a living, and recent advances are promising in terms of making a smarter AI, but there are still no signs of HAL that I can see.

As for Alan Turing and his test, I think it was an important idea in his day, but that day was fifty years ago now, and I don't think the Turing test is in any way definitive in determining sentience. Technical advances of the past half century have eclipsed the Turing method. AI programs today are very capable of interacting with a human being, but that is still a different kettle of fish from possessing sentience.
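To make the interaction-versus-sentience distinction concrete, here's a minimal toy sketch (class and method names are my own invention, not any real chatbot API): a bot that "learns" from user input by memorizing replies and parroting them back. It can hold up its end of an exchange without anything resembling perception or feeling.

```python
# Toy sketch of "learning from user input": the bot memorizes the reply
# a human supplies for each prompt and reuses it next time the same
# prompt appears. This is lookup, not understanding -- roughly the gap
# between interacting with a person and possessing sentience.

class ParrotBot:
    def __init__(self):
        self.learned = {}  # prompt (lowercased) -> remembered reply

    def respond(self, prompt):
        # Reuse a remembered reply if one exists, else admit ignorance.
        return self.learned.get(prompt.lower(), "I don't know that one yet.")

    def learn(self, prompt, reply):
        # "Learning" here is nothing more than storing an association.
        self.learned[prompt.lower()] = reply

bot = ParrotBot()
print(bot.respond("How are you?"))   # I don't know that one yet.
bot.learn("How are you?", "Fine, thanks.")
print(bot.respond("how are you?"))   # Fine, thanks.
```

However convincing such a bot's output becomes, nothing about the mechanism changes: it is still a program executing within a specific set of conditions, which is exactly why passing a conversational test seems like weak evidence of sentience.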
 
To be like a human, the AI would have to be particularly fallible. It would have to be able to make decisions based on emotions, as if it were having hormonal mood swings. It would have to be able to misjudge things when it got angry, fall in love, and so on.

AI could become more intelligent than humans, though, and be able to analyse data objectively rather than letting subjectivity bias its conclusions. Imagine if it had all the information ever collected by mankind available to it. It could be asked for the solutions to all the big problems we have: war, disease, the environment. I'd bet that the answers to all of those would be rejected by mankind. The AI would not have illogical ethical hang-ups, or be influenced by mental conditioning to find certain solutions unacceptable the way humans do.
 
I think that it is inevitable that computers will gain a semblance of sentience. However, whether the time required is one hundred years or thirty depends on the rate of technological development; if theoretical quantum computers are ever built, sentient computing will become possible, because the human brain is essentially a quantum computer. It is my belief that computers will merge with humanity to some extent once neurological implants become widespread as nanoscale computing technology progresses. Regarding the Turing test: it is not a relevant test of sentience, because data is only 1s and 0s within today's computers, which is different from the mechanisms of the human brain. Quantum computing would alter this fundamentally, though, because of the way information would be processed in that format.
 
I don't think there will be a machine able to function without some sort of user to instruct it in many instances. Until a machine is self-reliant and can function without help from a battery, we'll be far off. Bringing up the Matrix here would be quite off topic, but the original cause of man versus machine there was that the machine was only trying to survive. When a machine has the urge to survive, we will have problems.
 
Quantum computing will pull computers off the shelf of their current serial state and put them up alongside us humans, where true parallel computing takes place.

Once all that potential exists, there is still the problem of building and using computers differently. Currently, we build them to function as tools, as we do any other machine. Following that is the opportunity to revolutionize our languages; something logic-oriented (Prolog-ish), with a twist, will I think be almost a given, or at least a good start.
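The "logic-oriented" idea can be sketched in a few lines: facts plus if-then rules, with new facts derived by forward chaining until nothing changes, much as a Prolog engine resolves goals. This is my own minimal illustration in Python, and the fact and rule names are invented for the example, not drawn from any real system.

```python
# Tiny forward-chaining sketch of a logic-oriented (Prolog-ish) style:
# knowledge is a set of facts plus rules of the form "if all premises
# hold, conclude X". We repeatedly apply rules until no new fact fires.

facts = {"has_sensors", "has_memory"}
rules = [
    ({"has_sensors"}, "can_perceive"),
    ({"has_memory", "can_perceive"}, "can_learn"),
    ({"can_learn"}, "adapts"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule only if all its premises are known facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['adapts', 'can_learn', 'can_perceive', 'has_memory', 'has_sensors']
```

The twist a real logic-oriented language would add is exactly what's missing here: the program states *what* follows from what, and the engine decides *how* to get there, instead of the programmer spelling out every step.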
 
Norsemaiden said:
It would have to be able to make decisions based on emotions, as if it was having hormonal mood swings. It would have to be able to misjudge things when it got angry, and fall in love, etc.
o_O
When I first noticed this thread I immediately thought of the android Data from The Next Generation, and all those episodes about his inability to feel any emotions.