Dakryn's Batshit Theory of the Week

That's good. :cool:

I imagine plenty of hardcore Deleuzians would come down on me as unfairly categorizing him; but Deleuze's whole program is basically one of anti-interpretation. That's why the Oedipal framework is his primary target; he's trying to demonstrate how the Oedipal complex isn't a valid means of interpreting a patient's symptoms because it isn't some mystery that preexists psychoanalysis or culture. It's an ideological construct of culture that is then projected backwards as an original problem.

It took me some time to internalize the Body without Organs. Unfortunately, with a lot of theory like this that relies on neologisms, all you can do is revisit it again and again until the jargon sinks in. Eventually it just becomes clear how these terms are being used. Some might argue that they're unnecessary, but I would disagree. For theorists making very unorthodox and unintuitive claims, new terms are often necessary; such claims can be difficult to explain in the space of a single paragraph, much less a single sentence.

I refer to it as the obscurantist style. :cool:
 
Well of course a body without organs is just a single mass, with nothing to analyze. Analysis is not wrong, but impossible.

I don't agree with this approach, of course. While analysis can create the "can't see the forest for the trees" problem, suggesting there are no trees, no underbrush, no animals, etc. goes to the absolute opposite extreme.
 
In Deleuze's defense, what it does allow him to isolate are individual drives (flows, desires, etc.). Deleuze is of the opinion that what is important isn't the subject, or the contours that designate the subject (which in his view are too arbitrary to be of significant value), but the desires that comprise the subject as well as her object of cathexis. So, flows become the topic of discussion but cannot be restricted to singular ideological subjects.

I, of course, resist this impulse as well, but it does illuminate an important radical example of poststructuralist anti-interpretivism.
 
The BwO is Deleuze's conclusion from focusing purely on drives. Without subjects to contend with as central meaners and intenders, Deleuze claims that drives permeate across borders. So what were before disparate bodies now become a complex assemblage constituted not by borders, but by the energies that traverse them.
 
On an unrelated note, and in the spirit of the old Iran jpg:

[image attachment]

Russia would have done the same thing as we did, if they ever had the resources. The difference with us is we actually have allies that matter.
 
Automation, in this context, is a force pushing old principles towards breaking point. If I can build a car that will automatically avoid killing a bus full of children, albeit at great risk to its driver’s life, should any driver be given the option of disabling this setting? And why stop there: in a world that we can increasingly automate beyond our reaction times and instinctual reasoning, should we trust ourselves even to conduct an assessment in the first place?

Beyond the philosophical friction, this last question suggests another reason why many people find the trolley disturbing: because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us.

http://aeon.co/magazine/world-views/can-we-design-systems-to-automate-ethics/
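Just to make the article's claim concrete: the "ethically superior action calculable via algorithm" it describes could be as crude as this toy sketch. This is my own illustration, not anything from the article; it's a naive casualty-minimizing utilitarian, with all names hypothetical.

```python
# Toy consequentialist "trolley chooser": given the expected casualties
# on each track, pick the track that minimizes harm. This is the kind
# of calculable resolution the article has in mind, reduced to one line.

def choose_track(casualties_per_track):
    """Return the index of the track with the fewest casualties."""
    return min(range(len(casualties_per_track)),
               key=lambda i: casualties_per_track[i])

# Classic setup: 5 people on track 0, 1 person on track 1.
print(choose_track([5, 1]))  # -> 1 (divert onto the track with one person)
```

The interesting question the thread keeps circling is exactly what this sketch hides: the casualty counts and the "minimize" objective are chosen by us, outside the algorithm.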
 
zabu of nΩd;10823387 said:
Russia would have done the same thing as we did, if they ever had the resources. The difference with us is we actually have allies that matter.

Absolutely they would have, although framing it as "us and our buddies" is rather disingenuous, or at the least historically inaccurate. Many of those bases are in countries just happy to take our $$; some are in ex-Soviet countries which are only ex-Soviet because the Soviet economic system was horrible, leading to failure and to marginal territories being ejected and left in need of $$. What is left are a few countries that we forcibly disarmed after WWII.


The disparity in quality from one Aeon article to the next continues to surprise me. This quote reaches the Quiggins Quality level, so far the lowest level I have found on the site.
 
Absolutely they would have, although framing it as "us and our buddies" is rather disingenuous, or at the least historically inaccurate. Many of those bases are in countries just happy to take our $$; some are in ex-Soviet countries which are only ex-Soviet because the Soviet economic system was horrible, leading to failure and to marginal territories being ejected and left in need of $$. What is left are a few countries that we forcibly disarmed after WWII.

1) capitalism > communism

2) Russia is not without its own global economic influence. Just look at its position as a leading supplier of oil & gas to Europe. It's their own fault for not having any other meaningful industries right now, as they've persecuted intelligent people for generations.
 
I like the quote. So there.

I meant to say "that article" rather than specifically that quote, but the quote does rather sum up the article.

I would have assumed you would be one of the first to see problems with a statement like "because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us."

Of course it is ridiculously assumptionist on the face of it, but beyond that: at any given time we can decide that something is "ethically superior", and at no time can the algorithm make this decision. It can merely perform. Or, conversely, perhaps the algorithm (to continue with the trolley example) we have determined is "ethically superior" begins to consistently choose the track with 5 rather than 1.

When I read articles like this, in my mind's eye I can see the glee on the face of the writer as he/she can barely contain themselves explaining how machines are going to make everyone behave or conform as they, the writer(s), believe they should. And of course, we cannot question the decisions of Technology. God is dead, kings are dead. Long live Godking Technology!

zabu of nΩd;10824278 said:
1) capitalism > communism

2) Russia is not without its own global economic influence. Just look at its position as a leading supplier of oil & gas to Europe. It's their own fault for not having any other meaningful industries right now, as they've persecuted intelligent people for generations.

Oh, they do have global influence, and more than the US will openly admit. Why else would Germany be backing away from the rhetoric? Why else would China and Russia be signing deals?

Influence on the grand stage is about resources and the ability to defend them. Russia is set in that respect.
 
I would have assumed you would be one of the first to see problems with a statement like "because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us."

Of course it is ridiculously assumptionist on the face of it, but beyond that: at any given time we can decide that something is "ethically superior", and at no time can the algorithm make this decision. It can merely perform. Or, conversely, perhaps the algorithm (to continue with the trolley example) we have determined is "ethically superior" begins to consistently choose the track with 5 rather than 1.

You know, you start off this way a lot: "I would have assumed you would be one of the first..." It's quite presumptuous.

I would not be one of the first, because I would have thought that you would be one of the first to know that I'm not the kind of person to reduce ethics to the decision of a single agent. Ethics is a system, a network; maybe not closed, but a network still. If this is the case, then I entirely believe that machines can make ethical decisions.

Ethics is like a language game in the Wittgensteinian sense. It doesn't require any internal justification or validation for its existence (this would be a moral decision, not an ethical one). It doesn't matter what one individual subject thinks about ethics, dissociated from the system entirely. It matters how the system of ethics works amid a collective of deciding factors. Ethics is determined entirely by the rules of the game among multiple agents, and this is only ever an external system. Machines can be players.

When I read articles like this, in my mind's eye I can see the glee on the face of the writer as he/she can barely contain themselves explaining how machines are going to make everyone behave or conform as they, the writer(s), believe they should. And of course, we cannot question the decisions of Technology. God is dead, kings are dead. Long live Godking Technology!

That says more about you than it does about the article.
 
You know, you start off this way a lot: "I would have assumed you would be one of the first..." It's quite presumptuous.

I would not be one of the first, because I would have thought that you would be one of the first to know that I'm not the kind of person to reduce ethics to the decision of a single agent. Ethics is a system, a network; maybe not closed, but a network still. If this is the case, then I entirely believe that machines can make ethical decisions.

Ethics is like a language game in the Wittgensteinian sense. It doesn't require any internal justification or validation for its existence (this would be a moral decision, not an ethical one). It doesn't matter what one individual subject thinks about ethics, dissociated from the system entirely. It matters how the system of ethics works amid a collective of deciding factors. Ethics is determined entirely by the rules of the game among multiple agents, and this is only ever an external system. Machines can be players.

Well, an individual doesn't exist completely dissociated from everything. At the very least, even Crusoe has plants and critters and whatnot on the island.

Whether or not algorithms or machines can make ethical decisions is our problem, not theirs. Of course algorithms and machines perform functions, and they might be assigned an ethical value, but particularly in the context of some fluid system or network, making a statement about the ethical superiority of any particular agent - man or machine - comes laden with all sorts of baggage that doesn't allow it past the starting line.

I make assumptions about where someone might go depending on where they have previously been headed - other comments, commentary, etc. Maybe it is simply axiomatic difference.


That says more about you than it does about the article.

I just prefer to stay away from trolleys while the utilists are busy crashing them into people, since I can't stop it and the people love it so. But utilists insist everyone must be either in or in front of the trolley. Anything else creates a data problem.
 
Now you're shifting the goalposts and talking about the contradictions of claiming ethical superiority. This is a fine debate, but it isn't what the article is concerned with.

The article asks that, for a moment, we accept the parameters of a superior ethical system; and it asks us why machines shouldn't partake in this system, if we're engineering them to be increasingly complex and autonomous.
 
Now you're shifting the goalposts and talking about the contradictions of claiming ethical superiority. This is a fine debate, but it isn't what the article is concerned with.

The article asks that, for a moment, we accept the parameters of a superior ethical system; and it asks us why machines shouldn't partake in this system, if we're engineering them to be increasingly complex and autonomous.

Well that is partially what I meant by saying that it was assumptionist, carrying baggage, etc.

The writer has already determined not only that consequentialism is the superior ethics, but, we can assume from his repeated examples, a consequentialism that always sees more people left alive at the end of the day. Well, sure, we can engineer machines and algorithms to partake in this system, and maybe they can even do so on their own. But I don't even see anything worth considering there. That becomes merely an "is" or "is not".

I don't really consider it moving the goalposts to challenge underlying assumptions. Ascribing ethics to algorithms is as anthropomorphic as "AI", and the values ascribed have their own issues.
 
I don't see it as anthropomorphizing. I see it as taking a modern approach to ethics, which views the latter as a system that subsists because of interacting agents but can in no way, shape, or form be traced back to the conscious minds of those agents. In short, ethics can exist without humans.
 
I don't see it as anthropomorphizing. I see it as taking a modern approach to ethics, which views the latter as a system that subsists because of interacting agents but can in no way, shape, or form be traced back to the conscious minds of those agents. In short, ethics can exist without humans.


While the system might arise emergently, I still don't think that keeps it from being always already human, and in the absence of humans it would cease, or at least cease to exist in any form that we would recognize as such (those two statements are not mutually dependent).