The funny thing about modernity

i think honestly that the human mind will become a machine, not even biological, before we have a chance to worry about biological engineering entirely. and because of the law of accelerated returns (like, shit is going to happen a lot faster now as far as scientific innovation goes than it did, say, 20 or 40 years ago), people won't have time to bitch. i mean, sure they will, but it will happen anyway because it will be so economically useful that science/business can't resist it. that is the direction we are going with nanotech, bionics, and artificial intelligence. more people will probably be against viewing machines as 'conscious' and as 'human' than there will be people worried about genetic altering etc.
 
xfer - Totally cool concept, but it certainly seems a nightmare for privacy advocates. I wonder about the feasibility.

Also I think the technology and design of search engines as well as hardware is going to have to radically change. I don't know much of anything about Google's sorting methods, but it sure seems like the growth (read: bloat) of the Interweb is far outpacing Google's attempts to index it all, and outpacing simple hardware upgrades to the search's efficiency.

So unless there's some brilliant approach to document searching that no one's thought up yet, it might end up being practically uncomputable, as in "the-heat-death-of-the-universe-will-occur-before-I-finish-searching-all-these-documents-for-the-string-Driver, Alex". Whether this means going to trinary (or more) state computers or what-have-you, I dunno. Seems to me the significant changes would have to be in the algorithms, though. There's almost 6 billion people doin' shit every day.
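For what it's worth, the algorithmic point is real: scanning every document for a string is linear in the whole corpus, while an index turns a query into a lookup. Here's a toy sketch of an inverted index in Python (all the data and function names are made up for illustration, not anything Google actually does):

```python
# Toy sketch: why indexing beats scanning.
# A linear scan touches every document for every query; an inverted
# index maps each word to the documents containing it, so a query
# becomes a dictionary lookup plus a set intersection.

from collections import defaultdict

def build_index(docs):
    """Map each word to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of docs containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Invented mini-corpus:
docs = {
    1: "the heat death of the universe",
    2: "searching all these documents",
    3: "the universe of documents",
}
index = build_index(docs)
print(search(index, "the universe"))  # {1, 3}
```

The index is built once, up front; after that each query only touches the posting sets for its own words instead of the whole corpus, which is the basic reason search can scale at all.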
 
You know that idea of yours that man will be combined with machine is rather fucking scary- I read a year ago that a few geniuses think we should just stop researching artificial intelligence- they actually stated that a Terminator situation where the AI takes over is possible- apparently these guys said that since we imperfect humans are programming in our likeness, the AI will pick up many of the same traits of greed, tyranny, jealousy etc. I'm not a science or computer guy, but I do think technology has got to the point where it needs to be stopped soon.
 
I don't remember what it was, but I remember this story (it might have been a book, movie or video game) where this highly, highly advanced race was talking about their origins, and it seems the next step is that human flesh becomes obsolete and their mind or soul or essence is put into machines, and that way people can do things better, longer, faster, more efficiently, etc. but then after that they became light, and thus were eternal and god-like, so.... HUZZAH!
 
dudes, look at that site i linked. it's almost too late to stop AI. actually, it is too late. like i said, it's too economically feasible and worthwhile for software companies to continue doing neural net research. they won't stop. i also recommend reading "The Age of Spiritual Machines" because it will sort of lay your pretty head to rest.
either way, here are my thoughts on the matter: technology is just our form of evolution. you have to consider technology a byproduct of our existence, and then a part of our existence (most of us would die without it: without food, without shelter, etc.), and therefore part of our evolutionary process. it only makes sense that it would integrate "into" us. sure, paranoid people beware. but like i said, i'm not a big fan of how human beings act anyway, so fuck it.
 
So think about this then - eventually, instead of making babies, people might start just building an intelligent machine and growing to love it and nurture/maintain it just the way they would a child.
 
I do think we will figure out a revolutionary new way of searching and indexing. I also think that it's silly to try to stop technological innovation. If nothing else, humans are amazing at learning, advancing, and figuring new shit out, and it's useless to try to stop that. Cloning may or may not be wrong, but even if it's illegal in the States some crazy Italian or Zimbabwean or Japanese is going to figure it out anyway, right?

I'm not so sure machines will pick up HAL-like traits of power-hungriness or greed or anything else. I think it's entirely possible that AI machines will win the hearts of the world by lacking those traits. They'll be perfect, nice, good, productive, smart citizens of the world, and exactly THAT is why they'll become popular, spread, and eventually take over. Their logic will be unassailable: "You humans cause pain to each other, kill, lie...we don't. Why shouldn't we be in charge?" The world will be a better place when humans are gone.

Or will it? I don't think AI beings will have the capacity to create art. "Who needs art when you have world peace and prosperity?" the machines will ask. But I think a world without art is a shitty world indeed, so ultimately I think the world will be a much worse place.

Then again, machines would have to consciously maintain the morality we programmed them with, because without human traits/creations such as conscience or religion or metaphysics, they're not going to have any reason to behave according to a certain code. Maybe they'll develop a utilitarian system of "whatever keeps society running smoothest is Right"?
 
Ugh, a utilitarian society run by machines- i will run to the third world to live. Can one really imagine a world where only numbers and cost-benefit analysis run everyone's lives- and become everyone's lives? ugh- i don't want to live in such a society. The human mind is beyond numbers and logic- i think it would be a horrible idea to incorporate man and machine- why fight nature and try to become a man-made god? i mean, this really goes back to Prometheus, doesn't it?
 
there's just no way to be sure that a machine isn't already 'conscious'- i mean, what is the definition of consciousness anyway? it's definitely up for debate, and neuroscientists don't have a definitive answer. as for a machine's morals, there really isn't a way to say they would think in terms of cost-effectiveness. they would probably steer towards information gain and knowledge gain, which isn't so bad, is it?
 
I don't want to live in it either, but my arguments against it aren't considered serious. I mean, obviously no one cares about art and shit anymore--look at the music charts.

You could make Biblical parallels, too. The Fall happened and Human became imperfect, and shall remain imperfect until he's not Human any longer, at which point perfection is possible. And we're going to be nailed to a fucking cliff getting our livers eaten out by robots singing DAISY DAISY GIVE ME YOUR ANSWER DOOOOOO.
 
Well, there is something in computer science/AI development known as the Turing test, forgive me if I'm repeating shit you already know....

Basically, if you talk (or in whatever other way interact) with a machine and are totally incapable of distinguishing your conversation from one held with a human, you really have no choice but to consider it intelligent. The classic scenario of course is to have someone sitting at two computer terminals. On the other end of one is a person, on the other end of the other is a computer. Then you have to try and figure out which is which.

There are already artificial intelligences that can pass limited versions of the test. For example if the conversation is restricted to only be about the subject of wine, or whatever, there are machines that can pretty well pass for humans.

I don't know if you guys think this is an acceptable test, but I do. I mean, it's pretty much the only way we gauge each other's intelligence.....
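That two-terminal setup is mechanical enough to sketch in code. Here's a toy harness in Python (the responders and the judge are trivial stand-ins I invented, nothing like a real contender):

```python
# Toy sketch of the imitation game described above: a judge sees
# transcripts from two anonymous terminals (one backed by a human,
# one by a machine) and must guess which terminal is the machine.

import random

def human_responder(question):
    return "hmm, let me think about that..."

def machine_responder(question):
    # A real contender would have to be indistinguishable from the human;
    # this stand-in just parrots the same canned line.
    return "hmm, let me think about that..."

def imitation_game(judge, questions):
    """Assign the responders to terminals A and B in random order;
    return True if the judge correctly identifies the machine."""
    responders = [("human", human_responder), ("machine", machine_responder)]
    random.shuffle(responders)
    transcripts = {
        terminal: [(q, fn(q)) for q in questions]
        for terminal, (_, fn) in zip("AB", responders)
    }
    guess = judge(transcripts)  # the judge answers "A" or "B"
    labels = dict(zip("AB", (name for name, _ in responders)))
    return labels[guess] == "machine"

# Against identical responders a judge can only guess, so it is
# right about half the time over many rounds.
coin_flip_judge = lambda transcripts: random.choice("AB")
wins = sum(imitation_game(coin_flip_judge, ["Do you dream?"]) for _ in range(1000))
print(wins / 1000)
```

The point of the sketch is the "pass" condition: once the judge's accuracy drops to chance, the machine has passed, which is exactly the behavioral criterion being debated here.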
 
Well, the thing that differentiates humans from animals is our ability to use language. Nothing non-human I know of can actually traffic in true (triadic) language (apes, birds, and computers haven't yet--to my knowledge--moved past dyadic language, that is, a simple give-and-take system of signs).
 
yeah, i mean the turing test is good, but it's not rock solid- it's been shown fallible at MIT, if not outright wrong. but like i said, what is consciousness anyway? does it have to "act" like a human? and if something acts human and can't be mentally differentiated, then it must be human-minded, no? but is it conscious? are cats conscious? are dolphins? i think it's pretty anthropocentric to think a machine can't be conscious just because we are afraid to admit it.
anyway, i want to have a machine baby.
 
Well, I think the thing about the Turing test I find so elegant is that we have no way of knowing if cats or dolphins or even the_preppy is conscious...:) To me it kind of assumes consciousness and intelligence in the absence of evidence to the contrary.
 
i think turing was on the right track. i mean, as far as being interactive.... it hasn't been matched, although kurzweil gives a good argument as to why the turing test isn't perfect and how it could be made better. at the same time, it's all viva singularity! you know? hah.

p.s. i am way less conscious than my rabbit.
 
All right, I can try to dig out something I read about triadic/dyadic that might explain it better than I can, but I'll take a shot. The movement from dyadic to triadic language is best exemplified by the linguistic event of Helen Keller's hands under the water pump, during which she realised that the wordsign WATER referred to the concept WATER, not simply the thing flowing over her hands. Previously, she trafficked only in dyadic language, where a word was a signifier for a thing: she would make the wordsign for water, and receive water. That's the kind of language apes like Koko use. They don't understand what language is. (Walker Percy has a diagram where you can see the triangular relation for triadic versus the simpler dyadic setup, but I can't find it online.)
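In lieu of Percy's diagram, the dyadic/triadic distinction can be sketched with a couple of dicts (all the data here is invented purely to illustrate the structure):

```python
# Toy sketch of the dyadic vs. triadic sign relation described above.
# Dyadic: a sign maps straight to one particular thing (make the
# sign, get the thing). Triadic: a sign maps to a concept, and the
# concept in turn covers many particular things.

# Dyadic: sign -> one fixed thing (the give-and-take system of signs)
dyadic = {"WATER": "the stuff flowing over my hands right now"}

# Triadic: sign -> concept -> any instance falling under that concept
concepts = {
    "water-concept": {"the stuff from the pump", "rain", "the ocean"},
}
triadic = {"WATER": "water-concept"}

def refers_to(sign, thing):
    """Under the triadic relation, a sign refers to anything falling
    under its concept, not just one fixed object."""
    return thing in concepts.get(triadic.get(sign), set())

print(refers_to("WATER", "rain"))        # True
print(refers_to("WATER", "a sandwich"))  # False
```

The pump moment, in these terms, is the jump from the first mapping to the second: WATER stops naming one event and starts naming everything the concept covers.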

Robots need culture