The funny thing about modernity

yea i mean, semantics sometimes prove difficult in certain situations.
i'd write more but i am like all sitting at my job having done nothing since 12 :/
 
I'm not quite sure how you would tell for sure, but my point is that even if a computer is programmed well enough to fool you into thinking you're talking to a person, that alone isn't enough to consider it intelligent/conscious. You can write a program that simulates conversation by carefully analyzing speech patterns and having the computer follow those patterns non-repetitively, and it will fool everyone. I think AI and consciousness for computers probably IS possible (much more so than the possibility of real language for animals, which I think is basically impossible), but I don't know how you'd tell. But an airtight trick isn't necessarily the truth.
 
That's one of the Turing test points, though. I have no way of knowing that what you just said isn't a canned response coming from a machine somewhere. I bet you're a conscious person, but I have no way of knowing.

Another scenario is this: You're in a box. Every once in a while a slip of paper slides in through an opening in the side of the box with strange symbols on it. You've got a huge rulebook and you look up the symbols and what order they're in, write down the answer and return it through the slot. Someone outside the box reads the slips and determines that 'Hey! That box can read Chinese!'
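The box can be sketched in a few lines of code: the "rulebook" is just a lookup table, and nothing inside it understands the symbols it matches. The table entries below are invented for illustration, not from any real system.

```python
# A minimal sketch of the Chinese Room: the "rulebook" is a plain lookup
# table, and the "person in the box" matches incoming symbols to replies
# without understanding either side. All entries here are made up.

RULEBOOK = {
    "你好": "你好！",        # a greeting maps to a greeting
    "你会读中文吗": "会。",  # "can you read Chinese?" maps to "yes."
}

def box(slip: str) -> str:
    """Look the incoming symbols up in the rulebook and return the reply."""
    return RULEBOOK.get(slip, "请再说一遍。")  # fallback: "please say that again"

# From outside, the box appears to read Chinese; inside, nothing does.
print(box("你好"))
```

The observer outside only ever sees `box()`'s output, which is the whole point of the thought experiment.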

Who's to say the brain doesn't work that way? Who's to say that the box isn't reading Chinese? The working parts don't understand it, just as each of your synapses doesn't know English, but it's understanding and responding to Chinese....
 
Well, yes, I think the brain works SIMILARLY to that, but remember that we're creatures with free will, not just input/output machines. That's why two people from the same crappy background can grow up to be a total failure and a famous erudite artist. Once the workings of the brain are dissected more, I bet they'll be able to pinpoint right where free will comes in and how it affects you, but right now I'm not sure it's possible.
 
You think they will be able to pinpoint free will- why would one ever want to pinpoint such a thing? What is the point of living if we don't have free will? What mystery would be left in life if our reasons for having free will were pinpointed? Look, I'm not a luddite or anything, but what you guys are talking about scares the beejesus out of me.
 
I just hope i die before i become a fucking machine- seriously- the future is not always good- neither is technology. I say this with the support of all the islamic fundamentalist terrorists in my neighborhood.
 
Hey, it's possible, but if true, it means that everything human culture has ever produced has been totally wrong. I think there's too much goodness in the art and culture we've created for it to all be invalidated by what sounds like something I was saying when I was a cynical fifteen-year-old. Hopefully. Shit, maybe when we get to the afterlife it will be ruled over by cynical fifteen-year-olds. I'm reminded of the Far Side with Col. Sanders at the gates of Heaven...
 
OK, this is too cool - while writing about this stuff I received in the mail a book from my friend Todd by Stephen Wolfram called A New Kind of Science. It sort of fits with what I'm talking about, and something I've felt for a long time: That humans are based, just like computers, on what is known in Comp Sci as 'levels of abstraction'. Complicated things grow from a lot of little simple things. Computers have a system that moves around electrons- these get abstracted as ones and zeros, these get abstracted as characters, these are used to write programs, programs make operating systems, on top of operating systems are built MS Word, Internet Explorer and the Ultimate Metal motW forum. This is going to be long, sorry....
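That ladder of abstractions can be made concrete in a toy sketch: the same bits get reinterpreted at each layer, and each higher layer never looks below itself. The specific bit pattern here is just an arbitrary example.

```python
# A toy sketch of "levels of abstraction": the same underlying ones and
# zeros get reinterpreted at each layer, and each layer only talks to
# the one directly beneath it.

bits = [0, 1, 0, 0, 1, 0, 0, 0]          # level 0: ones and zeros
byte = int("".join(map(str, bits)), 2)   # level 1: a number (72)
char = chr(byte)                         # level 2: a character ('H')
word = char + "i"                        # level 3: text a program can use

print(bits, "->", byte, "->", char, "->", word)
```

MS Word never thinks about electrons for the same reason `word` above never thinks about `bits`.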

"...On the basis of many discoveries I have been led to a still more sweeping conclusion, summarized in what I call the Principle of Computational Equivalence: that whenever one sees behavior that is not obviously simple--in essentially any system--it can be thought of as corresponding to a computation of equivalent sophistication. And this one very basic principle has a quite unprecedented array of implications for science and scientific thinking.
"For a start, it immediately gives a fundamental explanation for why simple programs can show behavior that seems to us complex. For like other processes our processes of perception and analysis can be thought of as computations. But though we might have imagined that such computations would always be castly more sophisticated than those performed by simple programs, the Principle of Computational Equivalence implies that they are not. And it is this equivalence between us as observers and the systems that we observe that makes the behavior of such systems seem to us complex.
"One can always in principle find out how a particular system will behave just by running an experiment and watching what happens. But the great historical successes of theoretical science have typically revolved around finding mathematical formulas that instead directly allow one to predict the outcome. Yet in effect this relies on being able to shortcut the computational work that the system itself performs.
"And the principle of Computational Equivalence now implies that this will normally be possible only for rather special systems with simple behavior. For other systems will tend to perform computations that are just as sophisticated as those we can do, even with all our mathematics and computers. And this means that such systems are computationally irreducible--so that in effect the only way to find their behavior is to trace each of their steps, spending about as much computational effort as the systems themselves.
"So this implies that there is in a sense a fndamental limitation to theoretical science. But it also shows that there is something irreducible that can be achieved by the passage of time. And it leads to an explanation of how we as humans--even though we may follow definite underlying rules--can still in a meaningful way show free will.
"One feature of many of the most important advances in science throughout history is that they show new ways in which we as humans are not special. And at some level the Principle of Computational Equivalence does this as well. For it implies that when it comes to computation--or intelligence--we are in the end no more sophisticated than all sorts of simple programs, and all sorts of systems in nature."
 
They'll be able to EXPLAIN it, one day, I think, which doesn't mean they're taking it away. Knowing about something doesn't invalidate it just because it's now known. I remember I saw some damn fool arguing that he'd proved (the Judeo-Christian) God's non-existence because it was impossible for both God to be omniscient and free will to exist, and since Judeo-Christianity requires both to be true, the whole thing was obviously a farce.

Whatever your reasons for downing Judeo-Christianity, I don't think it makes much sense to say that omniscience and free will are incompatible.
 
So rather than saying an interpretation of our minds as computational systems shows we don't have free will, you (as Devil's advocate, at least) are saying to the contrary, it shows that artificial systems have free will as well?

Where's the turnover point from a computer adding 1+1 and getting to "free will"? Is it possible to have a little bit of free will, or is that like being "a little bit pregnant"? (I guess as parameters for your possible reactions expand, your free will expands too...then again, you can have free will and not freedom of action, and I think that pre-ellipsis statement just confused the two).

I forgot to bring in that triadic/dyadic essay. Dammit, I'll see what I can do...
 
i am reading the greatest thing ever on this very subject by douglas r. hofstadter in his book metamagical themas. in it, he has a discussion about the ability to use computations to recreate the mind of a person (in this case einstein). it's really interesting. also, there is discussion of this in the book 'the mind's I' by hofstadter and daniel c. dennett. these books are so worth reading.
also, i mean, we cannot "guess" if something has free will or is conscious like another person. we are not in their mind. we can only 'assume' that their mind works like ours because superficially they seem to be biologically the same as us. this goes along with the age old question "i wonder if the blue i see is really the same blue that this person sees, or if they are really just seeing what i think is red instead!". i mean, at least they are 'appearing conscious' as we are. so if a computer can 'appear to be conscious' isn't it then? if we can't tell the difference? ie, the turing test idea? if a machine says it has free will, and seems to have free will, and does all the things that we think are free will adhering, how is it not free will? just because we can 'guess' that it is simulated free will? and what is simulated action? isn't it still action?
i love this stuff. yay.
 
people would probably beg to differ! actually, it's so funny, because kurzweil argues that simulated virtual sex will soon be the same or better than actual sex in 'the age of spiritual machines'.
 
I'm not really up to task for defending the points of the book yet, given that I've read about 8 pages of it...I guess I'll get back to you on it later.

preppy: that was definitely what I was getting at - if it looks like a duck and quacks like a duck there's a good chance....
 
Not to stray into ontology, but I think there's an important difference between

"there is no (or very little) difference between IS and SEEMS TO BE"

and

"IS is because it SEEMS TO BE". For example, I think that the perception that Hollywood is left-wing maybe started as right-wing propaganda, but it has become true, probably because otherwise apolitical actors found it less laborious to fall in line with the PERCEIVED reality than to strike out on their own path.

I hope that was clear.
 
The point of what I think preppy and I were saying is that the perception is vital because there is no other evidence. You can postulate about the consciousness of a computer till you're blue in the face. But effectively you must take the surface evidence as proof until you have something more convincing one way or another.

You treat the computer as a conscious intelligence because it seems to be so, and you have no good (or scientific if you prefer) reason to believe otherwise.

You can't say, this computer is conscious, cause you don't know that. But you can say, this computer acts like a conscious creature, and so I'll treat it and think about it that way.
 
yea i mean, i want to say that is essentially what i mean, but i still have reservations about what we're even referring to as 'conscious'. but yea, i am basically saying that you can't say for sure that any one thing has a certain mental process. you can assume from experience and data, you can make inferences, you can even make a statistical representation in your favor.... but it's still basic conjecture.
 
I watched Errol Morris' Fast, Cheap, and Out of Control over the weekend and they talk a lot about hive intelligence and robots. And it's an awesome film!

I suppose that in the future, when conscious robots take over, they'll just be a logical evolution of humans. I mean, should the Neanderthals have tried to retard the development of Homo sapiens? They'll be humans, just metal ones.