Artificial Intelligence

Lordenlil
Apr 19, 2002
This thread is relevant here because I read somewhere that Mr. V
is also interested in science. (I don't remember the details.) :D
I only remember it vaguely, and I dunno which kind... oh well.

Ok, I really wanted to tell you about how artificial intelligence
can get out of hand:

We (my project group) are developing a self-driving intelligent
robotic vehicle in college. In short, it can drive from A to B by
finding its own way.
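For the curious: the "finding its own way" part of a robot like this usually boils down to a search over a map of the environment. This is just an illustrative sketch, not our actual code (the grid map, the `find_path` name, and the obstacle layout are all made up here), showing a breadth-first search that finds a shortest route on a grid:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a grid map.
    grid: list of strings, '#' = obstacle, '.' = free.
    start, goal: (row, col) tuples.
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}   # remembers how we reached each cell
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through came_from to rebuild the route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
path = find_path(grid, (0, 0), (2, 3))
```

A real vehicle of course adds sensors, localization, and obstacle avoidance on top of this, but the planning core is recognizably the same idea.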

Here's what happened: tonight we left it at college, and we
forgot to put it to 'sleep'. And we know it was in intelligent
mode when we left. This shit might actually be driving around
inside the school demolishing things (it might still hit objects
if it's unlucky). It weighs 100 kg and is made of solid steel.
Now we can only hope that it doesn't have any old data in its
memory and act upon it. Then we're fucked!

Seriously, though:
So what do you think? Is artificial intelligence becoming
dangerous? Will it ever be? (I'm speaking in general here, not
just about our little project.)
 
Well, I would have to say it's more like getting out of hand. I like science and I'm quite interested in it, but I have a real problem when science tries to contradict nature, as with this biological research. Cloning, that business about using brain stem cells, any and all research carried out by testing on animals, and other 'Frankenstein' theories, as I call them, should be forbidden. It's wrong to contradict nature. I don't care how many people it can help or cure; it's disrespectful and wrong.
Robotics is purely mechanical and electronic, so I can't see any harm in the robots themselves; any harm comes from the creator and his/her intentions. This project of yours sounds fun. I'd love to be a fly on the wall to see what your little metal friend is getting up to, though :lol: You may go in tomorrow to quite a mess to clean up. :lol: You can't exactly ground him or take away his privileges, can you? And I'm sure a telling-off won't do much good either.
 
I say it's conceivable, but not in the next 100 years. The proper methodologies needed to allow a computer to have any sort of self-awareness or ability to "think outside the box" are SO complex that I don't see it happening for a very long time. We'll need some sort of TRUE GENIUS (a la Einstein, Bohr, whoever) to break through that front.

And what I mean by "outside the box" is being able to think without a fixed scope. All AI currently in existence has such incredibly limited focus that expanding it to the point of becoming a threat would be a huge undertaking.


I took an intro to AI course back in university, and I am quite unimpressed with what we can currently do with AI. I KNOW just how difficult a problem it is, but the leaps and bounds needed to get a computer even close to passing the Turing test are very far in the future. And I still don't consider expert systems to be AI. Anyway, look at natural language processing. We're quite bad at that.

I am quite impressed with a lot of the image analysis we can do, though. Facial recognition and some other similar problems.

bed time...
 
Everything is good up to a certain point.
Humans are just starting to figure this artificial intelligence thing
out, but we need to watch ourselves and where we are heading.
This could end up either like Star Trek or like Blade Runner.

Personally I think humans are too greedy in general for this to
end up in perfect harmony. Some psycho will always find a way
and try to conquer the world with it.
 
Originally posted by Allison
Well, I would have to say it's more like getting out of hand. I like science and I'm quite interested in it, but I have a real problem when science tries to contradict nature, as with this biological research. Cloning, that business about using brain stem cells, any and all research carried out by testing on animals, and other 'Frankenstein' theories, as I call them, should be forbidden. It's wrong to contradict nature. I don't care how many people it can help or cure; it's disrespectful and wrong.

The ethics of fetal rights aside, I don't believe that biological research is wrong, nor that it contradicts nature.
Levitation would contradict nature. Genetically altering animals, plants, or bacteria is merely forcing a genetic mutation that is possible in nature, although VERY unlikely.

I also don't have any problem with cloning, as long as clones are treated as being as human as the original. But that is a whole other post for another time. I should be in bed.
 
Originally posted by Karldin
Everything is good up to a certain point.
Humans are just starting to figure this artificial intelligence thing
out, but we need to watch ourselves and where we are heading. (...)

Personally I think humans are too greedy in general for this to
end up in perfect harmony. Some psycho will always find a way
and try to conquer the world with it.

I second that.... This is very much what I think...

The way things are now, it's all slowly developing
into something, and we're not sure what. Right
now everything is safe, but I know that once this
gets developed enough, some idiot will use it for
something really stupid. As always. But progress
is good; we can't fight it :eek:)
 
Well, I would say that if artificial intelligence evolves in a human way, it certainly is dangerous... but then, AI is put to use by humans, so.... well, I can also say that I agree with Karldin :) I seldom say that, but here it's true :)
 
In complex systems, this kind of intelligence can act strangely.
I'm not just talking about bugs, but about learned patterns that
are so complicated that even the creators would have
trouble following the development. Such systems are
rare, I guess.
But I tell you, we had some strange test results ourselves.
Sometimes the unit showed strange results in its
way of thinking. It was actually only doing things we had told
it to do, but as the system grew more complicated, its
total intelligence did unexpected things, which were
still correct according to our later analysis.

But like someone mentioned, it's a big deal to create a
machine that shows anything similar to human intelligence.
Still, with approaches like artificial neural networks, it's
amazing what a system can recognize and learn by itself....
Combine that with other types of intelligent routines, and
you've got yourself a very smart and complicated system.
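To give a flavour of what "learn by itself" means here: even the simplest artificial neural unit, a perceptron, can pick up a rule from examples instead of being explicitly programmed with it. A minimal, hypothetical sketch in Python (the function name and numbers are made up for illustration; real systems use many such units and fancier training rules):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron with the classic perceptron
    learning rule. samples: list of ((x1, x2), target) pairs,
    targets 0 or 1. Returns learned weights and bias."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Fire (output 1) if the weighted sum crosses the threshold.
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out
            # Nudge weights and bias in the direction that reduces the error.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Teach it logical AND purely from examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

The point is that nobody writes "output 1 only when both inputs are 1" anywhere: the rule emerges from repeated weight adjustments. Scale the same idea up to many interconnected units and the learned behaviour gets much harder for the creators to trace, which is exactly the effect described above.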
 
Very much so!!! And if we really had the knowledge, I promise
you machines would be out of control too. We have the ability to
fuck up everything around us for purposes meant to lift
ourselves to a higher status: needs and greed! (Look at the environment...)

True scientists alone would actually need only bread and water
(ok - pizza and Coca-Cola nowadays), purely for the purpose of
gaining insight into all that is possible in this world!
But they need money, money given by businesspeople! Those people
are corrupting everything and abusing it! We all know Einstein simply
HATED much of what he'd discovered when it came to nuclear
technology etc.... that was NOT his vision at all...
Besides, we need the system, but the system destroys us...
talk about a circle of evil!!!