Hey guys, sorry about the little hiatus. I really like where the conversation has gone so far, but I have something else I'd like to introduce into the discussion: Hubert Dreyfus's philosophical critique of artificial intelligence, What Computers Still Can't Do.
The introduction of this book alone runs some sixty-odd pages, and I've yet to really get into the meat of his argument, but to summarize briefly:
Dreyfus bases his argument on the fact that in order to program "intelligence" into computers, one must work with symbols. All computer programming revolves around our ability to create specific symbols that stand for something, which the computer's "brain" can then identify and operate on.
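To make that premise concrete, here's a toy sketch of my own (the facts and names below are hypothetical, not anything from the book): in a symbolic system, the machine "knows" only what has been explicitly written down as symbols it can match and chain together.

```python
# A minimal sketch of the symbol-manipulation Dreyfus describes:
# every piece of "knowledge" must be spelled out as explicit symbols.
# (The facts below are hypothetical illustrations, not from the book.)

facts = {
    ("is_a", "fido", "dog"),
    ("is_a", "dog", "animal"),
}

def is_a(kb, thing, category):
    """Follow explicit is_a links; the machine 'knows' only what is symbolized."""
    if ("is_a", thing, category) in kb:
        return True
    # transitive step: thing is_a mid, and mid is_a category
    return any(
        rel == "is_a" and subj == thing and is_a(kb, obj, category)
        for (rel, subj, obj) in kb
    )

print(is_a(facts, "fido", "animal"))  # True: derivable by chaining the symbols
print(is_a(facts, "fido", "pet"))     # False: obvious to us, but nothing says so
```

The second query is the whole problem in miniature: anything not explicitly symbolized, however obvious to a human, simply does not exist for the program.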
Dreyfus explains, quite early in his introduction, that in the early days of AI, scientists believed that the more information a computer received, the smarter and more capable it would become. After all, the more information we humans know, the smarter we become; furthermore, we're able to answer questions faster and "operate" faster overall when we understand more. Therefore, it should follow that the more information plugged into a computer's hard drive (or however those things work), the faster and more intelligent it will become.
However, scientists encountered a problem when they tried to put this theory into practice: computers operate more slowly the more information is fed into them. Dreyfus published his then-controversial treatise (which has since been largely vindicated), arguing that because computers rely on symbols to understand information, they can never fully attain the experience of living, or "being in the world." Dreyfus's argument stems from philosopher Martin Heidegger's theory that human consciousness and "being" are unique and specific... let's say occurrences... that computers cannot experience. They lack the human quality of "being-in-the-world" (many of Heidegger's terms translate awkwardly into English) and thus lack the human understanding of relationships, experience, time, and inherent knowledge.
I'd like to post an excerpt from Dreyfus's introduction to illustrate this point (bear in mind, this was written decades ago; the original edition dates to 1972, though the introduction quoted here discusses later work). He is addressing a quotation from Douglas Lenat regarding "ontological engineering," or the process of forming ontological relationships in computer programming:
"Lenat is clear that his ontology must be able to represent our commonsense background knowledge, the understanding we normally take for granted. He would hold, however, that it is premature to try to give a computer the skills and feelings required for actually coping with things and people. No one believes anymore that by 2001 we will have an artificial intelligence like HAL. Lenat would be satisfied if the Cyc [AI project started by Lenat at MCC] data base could understand books and articles, for example, if it could answer questions about their content and gain knowledge from them. In fact, it is a hard problem even to make a data base that can understand simple sentences in ordinary English, since such understanding requires vast background knowledge. Lenat collects some excellent examples of the difficulty involved. Take the following sentence:
'Mary saw a dog in the window. She wanted it.'
Lenat asks:
'Does "it" refer to the dog or the window? What if we'd said "She smashed it," or "She pressed her nose up against it"?'
Note that the sentence seems to appeal to our ability to imagine how we would feel in the situation, rather than requiring us to consult facts about dogs and windows and how a typical human being would react. It also draws on know-how for getting around in the world, such as how to get closer to something on the other side of a barrier."
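Lenat's example can be made vivid with a toy sketch (my own, not code from Cyc): a purely syntactic heuristic such as "the pronoun refers to the most recently mentioned noun" answers all three variants the same way, while a human uses background knowledge to pick a different referent depending on the verb.

```python
# A toy illustration of the pronoun problem (my own sketch, not from Cyc):
# a recency heuristic ignores the verb entirely and looks only at word order.

candidates = ["dog", "window"]  # nouns from "Mary saw a dog in the window."

def resolve_by_recency(nouns):
    # No world knowledge: just pick the most recent noun.
    return nouns[-1]

for verb in ["wanted", "smashed", "pressed her nose up against"]:
    print(f"She {verb} it. -> it = {resolve_by_recency(candidates)}")

# The heuristic says "window" every time, although a human reads
# "wanted it" as the dog. The knowledge that decides the case (dogs are
# desirable; windows are smashable barriers) is exactly what resists
# being captured in explicit symbols.
```

Of course, real systems use richer heuristics than this, but the point stands: whichever rule you write, some sentence will require background knowledge the rule doesn't encode.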
So, we can easily see how Dreyfus illustrates the problem. This is very rudimentary stuff as far as he is concerned (although I found it interesting and fascinating); also, given that Lenat himself is an AI researcher, we know that many of these problems have already been taken into account by AI supporters, and solutions are being investigated. That's all I'll post for now, because any more would be an overload, I think; but I'll go back and reread the intro and hopefully find some more interesting topics to post. Furthermore, the deeper I get into this book, the more discussion material it will provide (I hope).
If anyone wants to comment on this or branch off and bring up another related topic, feel free.