Dakryn's Batshit Theory of the Week

Its lack of validity isn't as important as what it tells us about labor under modern industrial capitalism. That is, it's a critical apparatus; not a model for functionality.

http://www.smbc-comics.com/index.php?db=comics&id=2012#comic

Two things I don't understand here:
1. How can something be invalid yet hold value in criticism?
2. What does the comic have to do with it?

Regarding the prior explanation of the LToV: not all Marxian thinkers agree with it, and in fact I expect most don't:

http://www.marxists.org/archive/camatte/wanhum/wanhum05.htm

Capital dominates value. Since labor is the substance of value, it follows that capital dominates human beings. Marx refers only indirectly to the presupposition which is also a product: wage labor, namely the existence of a labor force which makes valorization possible:

"The barrier to capital is that this entire development proceeds in a contradictory way, and that the working-out of the productive forces, of general wealth etc., knowledge etc., appears in such a way that the working individual alienates himself [sich entaussert]; relates to the conditions brought out of him by his labor as those not of his own but of an alien wealth and of his own poverty."

It's been pretty clear to me, and I think this is the primary point of attack for the Rothbards of history, that the entire edifice of Marxist thought rests entirely, like an inverse pyramid, on this one faulty theory (which of course did not originate with Marx). While arguments can be made about particular unethical actions in relations between employees and employers within a capitalist system, only with the LToV can one denounce even the most generous arrangement as still "destroying the human" by virtue of the existence of the system.
 
Two things I don't understand here:
1. How can something be invalid yet hold value in criticism?

I believe the second portion of my comment addressed that: "it's a critical apparatus; not a model for functionality." In other words, it can criticize and not provide an otherwise applicable model.

The labor theory of value only holds under capitalism, in Marx's philosophy.

2. What does the comic have to do with it?

Nothing. It's tangential.

Regarding the prior explanation of the LToV: not all Marxian thinkers agree with it, and in fact I expect most don't:

http://www.marxists.org/archive/camatte/wanhum/wanhum05.htm

It's been pretty clear to me, and I think this is the primary point of attack for the Rothbards of history, that the entire edifice of Marxist thought rests entirely, like an inverse pyramid, on this one faulty theory (which of course did not originate with Marx). While arguments can be made about particular unethical actions in relations between employees and employers within a capitalist system, only with the LToV can one denounce even the most generous arrangement as still "destroying the human" by virtue of the existence of the system.

Lots of people disagree on what Marx "meant," which is one of the things that makes him still so interesting to read. I personally don't see how what you quoted contradicts what I did; but it's fairly clear that Marx was explicating a theory of value which he saw emerging under capitalism. If he had some revolutionary vision of applying that same theory to a new economic system, I'm not sure I would buy it either.

But the labor theory of value does serve as a source of valuable critique against capitalism, namely in the way it allows us to understand the alienation of labor, reification, commodity fetishism, and other important components of Marxian philosophy.
 
I believe the second portion of my comment addressed that: "it's a critical apparatus; not a model for functionality." In other words, it can criticize and not provide an otherwise applicable model.

The labor theory of value only holds under capitalism, in Marx's philosophy.

Lots of people disagree on what Marx "meant," which is one of the things that makes him still so interesting to read. I personally don't see how what you quoted contradicts what I did

But that it doesn't even hold under capitalism, or any -ism for that matter, is my point.

Rereading the quote from earlier,

Marx was well aware that price and value may differ wildly based on monopoly, demand and other fluctuations, but considered it irrelevant to his theory of value. He never claimed to be able to predict the day-to-day movements of prices, and purported attempts to use the LTV to do this are erroneous.

So while it may well be true that the tastes of rich people propel the price of diamonds upwards (whether this is due to the fact that they ‘subjectively value’ diamonds higher than poor people or the fact that they’re rich is up for debate, but I digress), according to the LTV this imbalance between price and value must be offset somewhere else, by some other commodity selling below its value. The important thing for Marx was the aggregate equality of price and value*.

it comes more clearly to my attention that I must agree it is not a difference in understanding (or a contradiction) at all, but rather an affirmation of the statement "labor is the substance of value," and thus that capital steals from, dominates, and destroys human beings. Only through price can this occur, when price and any labor-derived valuation diverge (which they necessarily will). This clarification still does not negate the problems of understanding wealth as zero-sum, of objectifying the subjective, and so on.
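A minimal numeric sketch of that aggregate-equality claim (the commodities and numbers here are purely illustrative, not from the quote): if diamonds sell above their labor value, some other commodity must sell below its value by the same amount, so the totals still coincide.

```latex
% Hypothetical two-commodity economy; each commodity embodies 10 hours of labor.
v_{\text{diamonds}} + v_{\text{corn}} = 10 + 10 = 20 \\
% Diamonds sell above value, so corn sells below it by the same amount:
p_{\text{diamonds}} + p_{\text{corn}} = 15 + 5 = 20 \\
% Aggregate equality holds even though no individual price equals its value:
\sum_i p_i = \sum_i v_i
```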


it's fairly clear that Marx was explicating a theory of value which he saw emerging under capitalism.

Well, if we define capitalism as only some new emergent phenomenon, then the LToV did arise within a similar timeframe, but it was autistic and shortsighted. Of course, if the LToV were true, I don't see how one could ethically be anything but anti-capital.

But the labor theory of value does serve as a source of valuable critique against capitalism, namely in the way it allows us to understand the alienation of labor, reification, commodity fetishism, and other important components of Marxian philosophy.

I don't think it's necessary at all; rather, it is a hindrance. Anything can be fetishized, and theoretically many things can be "reified," but reaction against the foundational or fundamental errors of Marxism (again, the foundation isn't even "Marxist" itself) causes many to throw out anything valuable along with it. In this way, if we perceive through a Marxian lens, Marxism provides a mystical outside that "Capitalism," thus conceived, can always be seen to engulf but never in totality, as long as the LToV exists in even one mind. It provides a catalyst for rampant (yet not runaway) consumerism by creating a blind spot to fetishization.
 
I actually don't understand most of what you just wrote, so I'm having difficulty responding.

EDIT: I'm uncertain what "autistic and shortsighted" means here, and I don't quite know what you're saying in your final paragraph. I believe there are some axiomatic differences that might be inhibiting clear communication.
 
I wrote up a quick blog post this morning in response to the finale to True Detective. This is in no way exhaustive or very complex, but is rather a summary of my initial reactions upon viewing the conclusion and after reading some of what has been written online. I've posted it to "Borrowing From the Future."

I hope to make more posts later on, but that will require me to go back and re-watch the episodes.
 

http://aeon.co/magazine/being-human/meet-darpas-new-generation-of-humanoid-robots/

The main difference between robots that have gone before and the newer variety is autonomy. Whether by direct manipulation (as when we wield power tools, or grip the wheel of a car) or via remote control (as with a multitude of cars and airplanes), machines have in the past remained firmly under human control at all times. That’s no longer true and now autonomous robots have even begun to look like us.
 
Ha, yeah, just the way we phrase the question has so many implicit assumptions that need to be acknowledged. One of the biggest components of human consciousness is its solipsistic prison of temporality. We can only envision things in a linear fashion. When we speak of "approaching," we can't help but imagine it in a linear sense: that is, we are moving forward through time, technology is developing more and more, and eventually we will reach a certain point in linear time at which the singularity appears.

But the concept of the technological singularity (any singularity, in fact) defies linear temporality, because it denotes a possibility-space in which the linear development of technology/history/science/etc. actually leapfrogs itself and turns its efforts and advancements back onto the nature of its accumulation; it becomes recursive, self-conscious in a sense.

In its most general mathematical definition, a singularity constitutes a possibility-space in which is contained the set of all possibilities. This is a specifically non-linear conception in that it figures the virtual existence of all possibilities at the same time; it's a synchronic system, not a diachronic one. Thus, if the technological singularity denotes a space in which technological development overtakes itself, then we can't conceive of it as a linear motion, but as the emergence of a space of total possibility. The singularity is not only an emergent phenomenon; it produces the conditions necessary for further contingent emergent phenomena. It's a feedback loop which generates a multiplicity of possibility spaces, as well as, paradoxically, its own possibility space.

The only way to really conceptualize this phenomenon is to do so non-linearly, or spatially rather than temporally. In a literal (although somewhat illogical) sense, it has to "come back to us" (if we maintain temporal vocabulary) since it has to exist before it can exist. I can't think of any other way to describe it.

One way for it to already be in existence, and thereby bring itself into existence, would be to travel backwards in time. Perhaps a more scientific and/or theoretically substantiated model is to imagine the singularity in a spatial sense, as a possibility-space that generates its own emergent conditions.
 
 
Computers don't have nearly the processing capabilities of a human brain. Their "thoughts" are thousands (or millions, I can't remember which) of times faster, but contain far less content. The mechanism is also a bit different. Computers are more exact and don't function on approximations like brains do, so they can do some things better than us, but we can do a ton of things way better than them (like conceptualize).

And that's just when you're comparing some of the "higher" parts of the brain found in birds and mammals to computers. Computers can't even imitate things like emotion or fight-or-flight response, and it's one thing to imitate a function, but to actually carry out that function is another thing entirely.

Based on what little I've read about the brain and about computers, I doubt it's even possible for a computer to physically carry out a function like love, hate, or fear.

Another thing is that a computer program operating on whatever semblance of meaning it finds in something a person says usually produces laughable results, whereas humans can pull off the function of finding meaning so well that they can find multiple potential meanings in the same thing at the same time.

Hell, bees can solve traveling salesman problems that take computers days, and they do it pretty much instantly. They even solve these problems faster than humans do.
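For anyone curious why the computer side of that comparison is slow, here is a rough sketch of my own (not drawn from any study of bees; every name and number in it is made up): an exact traveling-salesman solver has to check every possible tour, which grows factorially with the number of cities, while a cheap greedy heuristic, closer in spirit to what a foraging animal might do, returns a decent route almost instantly.

```python
# Sketch only: exact (brute-force) TSP vs. a nearest-neighbour heuristic.
import itertools
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, order):
    # Length of the closed tour visiting points in the given order.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tsp(points):
    # Try every permutation of the remaining cities: (n-1)! tours.
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda perm: tour_length(points, (0,) + perm))
    return (0,) + best

def greedy_tsp(points):
    # Nearest-neighbour heuristic: always visit the closest unvisited city.
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tuple(order)

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(9)]
    print("exact tour length :", tour_length(pts, exact_tsp(pts)))
    print("greedy tour length:", tour_length(pts, greedy_tsp(pts)))
```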


I was hoping this would implode. I just hope it doesn't drag down the world with it.
 
Computers don't have nearly the processing capabilities of a human brain. Their "thoughts" are thousands (or millions, I can't remember which) of times faster, but contain far less content. The mechanism is also a bit different. Computers are more exact and don't function on approximations like brains do, so they can do some things better than us, but we can do a ton of things way better than them (like conceptualize).

And that's just when you're comparing some of the "higher" parts of the brain found in birds and mammals to computers. Computers can't even imitate things like emotion or fight-or-flight response, and it's one thing to imitate a function, but to actually carry out that function is another thing entirely.

Based on what little I've read about the brain and about computers, I doubt it's even possible for a computer to physically carry out a function like love, hate, or fear.

Another thing is that a computer program operating on whatever semblance of meaning it finds in something a person says usually produces laughable results, whereas humans can pull off the function of finding meaning so well that they can find multiple potential meanings in the same thing at the same time.

Hell, bees can solve traveling salesman problems that take computers days, and they do it pretty much instantly. They even solve these problems faster than humans do.

When we talk about the technological singularity, we shouldn't envision it as computers achieving sentience or consciousness that resembles anything remotely like human consciousness or brain activity.

Ross Andersen said:
To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.
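As a toy illustration of the loop that quote describes (my own sketch, not anything from the article; the moves and scores are invented): the agent models candidate moves, predicts how well each one serves its goal, and picks the best, with nothing outside the scoring function, empathy included, constraining the choice.

```python
# Sketch of goal-directed action selection: model moves, predict, pick the best.
def choose_action(state, candidate_moves, predict_goal_score):
    """Return the move whose predicted outcome best serves the goal."""
    return max(candidate_moves, key=lambda move: predict_goal_score(state, move))

if __name__ == "__main__":
    # A "goal" that only counts material gain on a chessboard:
    material_change = {"trade queens": -1, "win a pawn": 1, "quiet move": 0}
    score = lambda material, move: material + material_change[move]
    print(choose_action(0, material_change, score))  # -> "win a pawn"
```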
 
The difference between artificial and natural intelligence is not entirely understood, but given the similarities between brains and computers, I wouldn't be surprised if there were a resemblance between the singularity and human consciousness, even though the phenomena would be very different.

A human is much smarter than a fish, but the two have brains that are different machines made of the same parts. I think the same would be true, to a degree, of a computer smarter than a human. However, a computer would be set apart by lacking the things no one knows how to put into a computer, like emotion.

My point is that brains and computers are similar, so the singularity won't be something entirely different from human consciousness, but will still be different.
 
I don't think the fish analogy quite captures what's at work in the concept of the technological singularity. Using atomic entities exhibits the compartmentalized and narrow way in which we conceive of intelligence: that is, as the processes of an individual organism.

A human being might be more intelligent than, say, a bee, or an ant (although qualities of intelligence should be taken into account here); but a human being is not, in my opinion, more intelligent than an ant colony, or a bee colony. If an advanced computer technology resembles anything, it's not a single individual, but a collective apparatus.

This is what we mean when we speak of emergent phenomena. These are entities that arise via the systematic interplay between several components.
 
A computer is just as much a collective apparatus as a brain. Brains are collections of neurons and divided parts carrying out different functions. They function on symbiosis just like a computer, but not in the exact same way.

Brains and computers are different, but they have similarities. They're electrical machines; they function with "points" that send and receive information; they use RAM and hard-drive storage (in a brain, working memory and long-term memory); and they have caches (which in brains are procedural memories).

Edit: Honestly, this discussion is hurting my brain. I don't think of consciousness as merely a mechanical phenomenon (in brains or in computers), and what's unknown is far greater than what's known, so assessing levels of difference or similarity between brains and computers is really tough.
 
That's fine, but you seemed to be insinuating that an advanced computer intelligence should model the intelligence of a human brain; and, furthermore, that this was impossible (or at least unlikely).

I'm well aware that brains are collections of neurons, but one brain can only access data via the body in which it is contained; human knowledge is conditioned not primarily by the brain that processes it, but by the body that mediates these processes. Computers have no body; or they do not have a body that resembles a human's. They don't see, or hear, or taste; and you're right, they likely cannot process and exhibit emotional impulses. But this is beside the point; and believing that they should be able to do those things is to anthropomorphize what we consider "intelligence" to be.

If any such thing as a technological singularity emerges, it will not resemble anything like a single human brain because it lacks the body through which a single brain processes the world.
 
That's fine, but you seemed to be insinuating that an advanced computer intelligence should model the intelligence of a human brain; and, furthermore, that this was impossible (or at least unlikely).

That wasn't what I was saying at all. I should have been more clear.


If any such thing as a technological singularity emerges, it will not resemble anything like a single human brain because it lacks the body through which a single brain processes the world.

My point was that it won't be entirely different from the intelligence in a human brain because human brains and computers share similar properties. I'm not saying the singularity must be like a human, but that it's going to have some things in common with it, even if the function is very, very different.

The fish analogy is that even though fish don't have the level of reasoning of humans, they still have much the same parts in their brains that humans do. The machines are different and thus their functions are incredibly different, but they're not entirely different.
 
Really good blog post on the surveillance state by Peter Watts:

http://www.rifters.com/crawl/?p=4689

People are primates, Brin reminded us; our leaders are Alphas. Trying to ban government surveillance would be like poking a silverback gorilla with a stick. “But just maybe,” he allowed, “they’ll let us look back.”

Dude, thought I, do you have the first fucking clue how silverbacks react to eye contact?

It wasn’t just a bad analogy. It wasn’t analogy at all; it was literal, and it was wrong. Alpha primates regard looking back as a challenge. Anyone who’s been beaten up for recording video of police beating people up knows this; anyone whose cellphone has been smashed, or returned with the SIM card mysteriously erased. Document animal abuse in any of the US states with so-called “Ag-gag” laws on their books and you’re not only breaking the law, you’re a “domestic terrorist”.
 
Yes; but that's Watts. He's a cranky old (behavior-wise) SF writer whose politics lean toward reaction/regression. But I think he illuminates aspects of contemporary surveillance technology and attitudes toward it (especially the generational component) that bring the whole thing into focus. He's clearly opposed to government surveillance, but he's not out for blood. He's simultaneously amazed and terrified by the surveillance apparatus as a whole.