machines are learning to feel

I did a short research paper on something like this earlier this year, and it's a very interesting topic. Most notably, the emulation of the human mind might be the last thing humans would ever have to invent. After that, the machines could advance through centuries' worth of research and technology in only a fraction of the time. It's also a mystery whether they'd ever reach a certain "threshold" at which they could advance no more because they've discovered everything possible. Imagine also the possibilities it could open up for the arts -- could the next Mozart possibly be a machine?

I also learned that there's been some pretty big philosophical debate about this topic over the past 50 years or so. Some people say it's impossible because a real conscious mind is far too complex to be emulated in a machine whose core lies in the logic of 0's and 1's: when the machine is faced with a choice, its decision ultimately reduces to a series of Boolean "yes" or "no" tests -- it won't think outside the box. But who's to say that, deep down at the core, the human mind doesn't work the same way, and that everything we do is just some super-complex algorithm at work? I for one think it's entirely possible.
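To make the "series of Boolean yes-or-no" point concrete, here is a minimal sketch in Python (my own hypothetical example, nothing from the article): every branch the program takes reduces to a true/false test, and the open question raised above is whether human deliberation is just a vastly larger stack of the same thing.

# A minimal sketch (hypothetical example): a machine's "choice" as a chain of
# Boolean yes/no tests.
def machine_decides(hungry: bool, food_nearby: bool, danger: bool) -> str:
    """Every branch below reduces to a True/False comparison."""
    if danger:                   # yes/no
        return "flee"
    if hungry and food_nearby:   # yes/no combined with yes/no
        return "eat"
    return "wait"

# The open question from the post: is human deliberation anything more than a
# vastly larger stack of checks like these?
print(machine_decides(hungry=True, food_nearby=True, danger=False))  # -> "eat"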

Ethics of course come into play here as well… for instance, would it be okay to turn off or disassemble a machine against its own will? And should we be trying to create life like this in the first place?

It’s kind of creepy to think about...
 
they said we'd have flying cars before 2000. they said we'd live on the moon.

until it happens, it is still science fiction. and it's a fiction that i hope remains fiction. i absolutely despise the idea of implanting machines into our bodies for video tattoos, cell phones (which i dislike already), and worst of all: tracking and ID. it's already bad enough that governments see us not as names but as numbers. i don't want to give them any more encouragement to de-individualize us.

i don't think one can "download" a brain. there is more to a person than their sensory functions and stored data. a copy would be far inferior to real humanity. i also caution against this sort of development lest other science fictions be realized as well, such as those hypothesized by The Matrix and Terminator. it's a dangerous path to travel, and i won't be walking it.
 
I don't like it at all. It's bad enough that we are slaves to our technology, but to become actual slaves to smarter and stronger machines would be downright ironic. And ethics really have almost no meaning anymore. Even if the US banned this research, some other country would start it up. We won't stop this mad dash toward technology until it's too late. The cat is out of the bag. Of course, if I were god over the earth, I wouldn't have allowed technology to surpass ancient times.
 
if things do get out of hand, we will experience the time-warp downgrade phenomenon: advance too far, and after the chaotic destruction that follows, you are back at square one.
 
would it be okay to turn off or disassemble a machine against its own will? and should we be trying to create life like this in the first place?

interesting questions. I'd like to hear some thoughts on these.... I agree with Silent Song's first post; I hope this remains fiction, although predictions do come true every now and then. When I read about "downloading" our brains to a disk, I immediately thought of Futurama and the heads in jars. Whether machines could think, advance, learn everything possible, and become omnipotent, "god"-like beings would be an interesting topic.

Nevermore's "The Learning" anyone?
 
interesting. my reference to The Matrix shows my opinion on the question of super-advanced A.I.

if we act as gods and create "life", then we are responsible for guiding it and disciplining it. that includes killing it, though that aspect of the role is something very dire and should be avoided. the most blatant flaw with humanity-as-god is that we ourselves are flawed, so our creations will "inherit" our flaws, no matter how perfectly we design them. if they are intelligent enough to learn, they will see our societal example, our violent tendencies, our greed and lust, and from that example they may construct their views.
 
If we give the machines some morality instead of only a conscience they won't slay us all.... well I hope.
 
What crap

This kind of hypothesising is begging the question:
The guy who wrote the article is basically saying, "When we know all there is to know about the mind, then we will be able to build a computer that is exactly the same as our mind (and upload our own minds into it)."

These are not uncontentious issues. Firstly, he takes for granted that the mind is the kind of thing whose essence we can capture through something like a completed neuroscience (and indeed that a completed neuroscience is even possible). Secondly, he assumes that computers and minds are somehow fundamentally the same.

Why should we accept either claim? All he does to convince us that such a thing is even possible is blandly assert that the researchers think they can do it...

Whatever... I am unconvinced.
 
This guy should read Antonio R. Damasio's Descartes' Error: Emotion, Reason, and the Human Brain.
The book discusses how feelings (physical responses) influence our thought process, and how people whose brain damage prevents them from taking their physical responses into account struggle throughout the thinking process. I doubt we will ever be able to do something like this.

The brain is not simply about calculations; it is much more complicated than it seems. We don't even know yet how we really make decisions (the book I mentioned at the beginning contains many, many hypotheses and is in no way the final word on this), so I doubt we can work right now on something that emulates it.
 
Computers are not intelligent. They simply display simulated intelligence. I don't believe we have the knowledge or technology to make truly "artificial" intelligence, but I imagine it will be possible sometime in the far future (though not in 20-50 years like the article says).
 
the simple, one liner truth of the matter goes something like this:

wisdom != knowledge, and the former is far more important than the latter. though computers may be capable of holding all the knowledge of humanity, i doubt they can attain wisdom.
 
milkman said:
Computers are not intelligent. They simply display simulated intelligence. I don't believe we have the knowledge or technology to make truly "artificial" intelligence, but I imagine it will be possible sometime in the far future (though not in 20-50 years like the article says).

this is true, but what if computers displayed a simulated intelligence that the vast majority of people would mistake for human intelligence?
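To see how far crude "simulated intelligence" can already go, here is a toy Python sketch in the spirit of ELIZA, Joseph Weizenbaum's 1960s pattern-matching program that some users nonetheless read as genuine conversation (my own hypothetical example, not something from the article):

import re

# A handful of surface-level rewrite rules -- no understanding behind them.
RULES = [
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bi think (.*)", "What makes you think {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
]

def reply(message: str) -> str:
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I feel like machines could be conscious"))
# -> "Why do you feel like machines could be conscious?"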
 
Yes, but will these computers work? You never see any computer crash in a sci-fi film, but we all know it happens... hehe.

...

[image: hal-200.jpg]


You know it...