On an unrelated note, and in the spirit of the old Iran jpg:
Automation, in this context, is a force pushing old principles towards breaking point. If I can build a car that will automatically avoid killing a bus full of children, albeit at great risk to its driver’s life, should any driver be given the option of disabling this setting? And why stop there: in a world that we can increasingly automate beyond our reaction times and instinctual reasoning, should we trust ourselves even to conduct an assessment in the first place?
Beyond the philosophical friction, this last question suggests another reason why many people find the trolley problem disturbing: because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us.
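Just to make that first possibility concrete, here's a minimal sketch of my own (not from the article; the option names, probabilities, and death counts are all hypothetical): once you commit to a naive utilitarian metric like "minimize expected deaths", the calculation itself is trivial.

def expected_harm(option):
    # Naive utilitarian metric: weigh each outcome's death count
    # by its probability. These weights are assumptions, not a
    # real safety model.
    return sum(p * deaths for p, deaths in option["outcomes"])

def choose(options):
    # Pick whichever action minimizes expected deaths.
    return min(options, key=expected_harm)

trolley = [
    {"name": "stay_on_course", "outcomes": [(1.0, 5)]},  # five on the main track
    {"name": "pull_the_lever", "outcomes": [(1.0, 1)]},  # one on the side track
]

print(choose(trolley)["name"])  # -> "pull_the_lever"

The controversial part of the article's claim isn't this computation; it's the suggestion that the thing performing it could be the ethically superior party.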
zabu of nΩd;10823387 said:
Russia would have done the same thing as we did, if they ever had the resources. The difference with us is we actually have allies that matter.
Absolutely they would have, although framing it as "us and our buddies" is rather disingenuous, or at the very least historically inaccurate. Many of those bases are in countries just happy to take our $$; some are in ex-Soviet countries which are only ex-Soviet because the Soviet economic system was horrible, leading to collapse and to marginal territories being ejected and left in need of: $$. What is left are a few countries that we forcibly disarmed after WWII.
I like the quote. So there.
zabu of nΩd;10824278 said:
1) capitalism > communism
2) Russia is not without its own global economic influence. Just look at its position as a leading supplier of oil & gas to Europe. It's their own fault for not having any other meaningful industries right now, as they've persecuted intelligent people for generations.
I would have assumed you would be one of the first to see problems with a statement like "because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us."
Of course it is ridiculously assumption-laden on the face of it, but beyond that: at any given time we can decide that something is "ethically superior", and at no time can the algorithm make this decision. It can merely perform. Or, conversely, perhaps the algorithm (to continue with the trolley example) we have determined is "ethically superior" begins to consistently choose the track with 5 rather than 1.
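To put that in code terms (a hypothetical sketch of my own, not anything from the article): the algorithm only ever minimizes whatever cost function we hand it. The "ethical" decision lives entirely in the one line where we, not the machine, pick that function, and a single sign flip makes it consistently choose the track with 5.

def choose(options, cost):
    # The algorithm is indifferent to ethics: it just optimizes
    # whatever cost function it is handed.
    return min(options, key=cost)

options = [("stay_on_course", 5), ("pull_the_lever", 1)]

# The "ethical" judgment lives entirely in these lines, decided by us:
def utilitarian(opt):
    return opt[1]       # minimize deaths -> picks the track with 1

def inverted(opt):
    return -opt[1]      # one sign flip   -> picks the track with 5

print(choose(options, utilitarian)[0])  # "pull_the_lever"
print(choose(options, inverted)[0])     # "stay_on_course"

Nothing about the machine tells you which of those two functions is the "ethically superior" one. That decision never stops being ours.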
When I read articles like this, in my mind's eye I can see the glee on the writer's face as they barely contain themselves explaining how machines are going to make everyone behave or conform as they, the writer(s), believe they should. And of course, we cannot question the decisions of Technology. God is dead, kings are dead. Long live Godking Technology!
You know, you start off this way a lot: "I would have assumed you would be one of the first..." It's quite presumptuous.
I would not be one of the first, because I would have thought that you would be one of the first to know that I'm not the kind of person to reduce ethics to the decision of a single agent. Ethics is a system, a network; maybe not closed, but a network still. If this is the case, then I entirely believe that machines can make ethical decisions.
Ethics is like a language game in the Wittgensteinian sense. It doesn't require any internal justification or validation for its existence (this would be a moral decision, not an ethical one). It doesn't matter what one individual subject thinks about ethics, dissociated from the system entirely. It matters how the system of ethics works amid a collective of deciding factors. Ethics is determined entirely by the rules of the game among multiple agents, and this is only ever an external system. Machines can be players.
That says more about you than it does about the article.
Now you're shifting the goalposts and talking about the contradictions of claiming ethical superiority. This is a fine debate, but it isn't what the article is concerned with.
The article asks that, for a moment, we accept the parameters of a superior ethical system; and it asks us why machines shouldn't partake in this system, if we're engineering them to be increasingly complex and autonomous.
I don't see it as anthropomorphizing. I see it as taking a modern approach to ethics, which views the latter as a system that subsists because of interacting agents but can in no way, shape, or form be traced back to the conscious minds of those agents. In short, ethics can exist without humans.