Dakryn's Batshit Theory of the Week

Ha, absolutely. I mean, there's no reason that intelligence has to be "alive," right? Although, in this context, I suppose it would be neither living nor non-living. Which is a mindfuck in its own right.
 
Hadn't realized the nice segue here, but anyway - new blog post on intelligence is up.

Most recent issue of PMLA is interesting. Its 'Theories and Methodologies' section is on the discursive relationship between literature and philosophy. Haven't really gotten into it yet, but the introduction provides an impressively detailed account of the way philosophy bleeds into literature - or rather, how philosophy cannot make its arguments without appealing to literary language, figure, metaphor, metonymy, etc. An excerpt (the quotations can get annoying, but she's emphasizing the words that operate in a more literary fashion):

Plato's dramatic characters, disagreeing speakers, mundane scenes, openly mythologizing allegories of ideas, and entirely imaginary polis; Descartes's "engineer" tracing "regular" forms on a "vacant plane," the irregular "paths" through the "book of the world" his Discourse takes (Discourse 1-7), the local "customs" and other temporary "housings" in which his "I" necessarily "resides" "in time" (8-13), much like the "infinitely flexible" "piece of wax" whose simplest conception as "extension" remains unchanged (Meditations 60-69); Locke's "empty cabinet" of a "mind" "furnished" with "ideas" (65) first "framed" by the "names" "lodged" within it (361-98); Hobbes's "Leviathan" or "Artificial Man" of a "State" whose "Soul" is the "Seat" of absolute "Sovereignty" (223-74), no less than Rousseau's opposing conception of a literally artificial "social contract" capable of replacing the "spectacular" bases of "inequality" with the "convention" of equal "citizenship" (Discourses; Social Contract), all demonstrate not only their authors' arguments but also those arguments' reliance on language properly categorized as literary.
 
Seems kind of a stretch. Obviously philosophy has to use words, and literature uses words.

Edit: Good blog post, a lot to chew on there. I do want to object to the move you make early on in trying to define and frame intelligence. It is true, at least broadly and up to the current era, that intelligence is discoverable/assessable via behavioral expression only. But it does not then logically follow that intelligence is merely behavior. There's a longstanding tension in psychology between cognitivists and behaviorists, but even behaviorists don't jump that far in justifying their orientation (at least not to my knowledge).
 
Yes, but philosophy has had a somewhat checkered history with regard to language, culminating of course in positivism's misguided efforts at creating a perfectly transparent language of logic. This is what Wittgenstein set out to do with the Tractatus, and by the end of it he realized he'd basically argued the opposite. Philosophy tends to appropriate and utilize language in a manner different from literature per se, yet it often appeals to literary language in order to make its point. If we permit philosophy this resource - which, as you suggest, is necessary - then we have to be cautious where philosophy appears to treat its claims (or theorems, axioms, whatever) as though they're obvious, or entirely transparent - for it's usually in moments like these that language plays tricks on us.

The humanities are all about the practice of reading, reading as a critical activity. It isn't purely about interpretation, but about how interpretation contributes to a discourse. Philosophy, simply by the nature of the discourse, has to assume a somewhat untroubled relationship to the words it uses. Literary studies can have some insight here.

EDIT:
Edit: Good blog post, a lot to chew on there. I do want to object to the move you make early on in trying to define and frame intelligence. It is true, at least broadly and up to the current era, that intelligence is discoverable/assessable via behavioral expression only. But it does not then logically follow that intelligence is merely behavior. There's a longstanding tension in psychology between cognitivists and behaviorists, but even behaviorists don't jump that far in justifying their orientation (at least not to my knowledge).

Thanks. I take your point about behaviorism vs. cognitivism, and intelligence not being reducible to behavior. I may emphasize the point too strongly in the post, because I don't think I want to say that it "logically follows" that behavior is intelligence (can't recall if I used these words or not). Rather, because behavior is all that we can verify, it makes sense to associate this with intelligence, and resist reifying intelligence into some internal substance.

It is illogical to assume that behavior equals intelligence, but it's also illogical to assume that performance on an IQ exam reflects some interior core of intelligence. And until we can prove the connection between behavior and internal substance, I think it makes sense to pursue AI studies (and other fields involving intelligence) in terms of where intelligence does manifest - i.e. in behavior.
 
Thanks. I take your point about behaviorism vs. cognitivism, and intelligence not being reducible to behavior. I may emphasize the point too strongly in the post, because I don't think I want to say that it logically follows that behavior is intelligence. Rather, because behavior is all that we can verify, it makes sense to associate this with intelligence, and resist reifying intelligence into some internal substance.

This is, more or less, the orientation of behaviorism: since we have limited, if any, access to actual cognitive functions/processes, it makes more sense to focus on behavior.

It is illogical to assume that behavior equals intelligence, but it's also illogical to assume that performance on an IQ exam reflects some interior core of intelligence. And until we can prove the connection between behavior and internal substance, I think it makes sense to pursue AI studies (and other fields involving intelligence) in terms of where intelligence does manifest - i.e. in behavior.

If intelligence is framed as "number crunching ability", then we can speak of both hardware and software "interior cores", directly in terms of "artificial intelligence", and via analogy for animal and human "wetware".
 
This is, more or less, the orientation of behaviorism: since we have limited, if any, access to actual cognitive functions/processes, it makes more sense to focus on behavior.

Yeah, this seems to be what I align with, based on what I've read. I don't like proclaiming myself for any one particular brand of philosophy of mind, but I seem to fall closest to eliminative materialism, which shares some affinities with behaviorism.

If intelligence is framed as "number crunching ability", then we can speak of both hardware and software "interior cores", directly in terms of "artificial intelligence", and via analogy for animal and human "wetware".

Can you explain this a bit more?
 
Can you explain this a bit more?

If we frame intelligence in terms of data processing (speed, load handling, inferential ability, etc.), then these things have pretty clear "cores" in things like processing chip power, bus speeds, and overall system architecture on the hardware end, and things like strong coding/various algorithms on the software end. Since we can see these cores very clearly as humans invent, design, assemble, and operate (or at least set in motion) them, why would we not suspect underlying cores for our own data processing operations - regardless of how accessible they are to us? Now, I realize this analogy may seem to echo the "Watchmaker" analogy after a fashion, but the critiques of the watchmaker analogy for intelligent design do not seem to apply to this comparison, especially since I'm not arguing that our processes are a product of "design".

OTOH, if we want to say that human cognition is qualitatively different from "AI", we could eliminate this "core" analogy, but then we wind up right back where we started in trying to define intelligence in a way that covers both human and non-human forms.
 
To me, chips and system architecture simply don't qualify as interior just because they're often located inside machines. I don't consider them internal at all, except in perhaps an arbitrary structural sense. When I talk about interiority in humans, I'm referring to aspects of human experience that cannot be extracted by taking apart a brain; so, even a surgeon going in with a scalpel won't be able to locate consciousness, or intelligence. But an engineer going into a computer can locate its chips and other components of its material structure.
 
To me, chips and system architecture simply don't qualify as interior just because they're often located inside machines. I don't consider them internal at all, except in perhaps an arbitrary structural sense. When I talk about interiority in humans, I'm referring to aspects of human experience that cannot be extracted by taking apart a brain; so, even a surgeon going in with a scalpel won't be able to locate consciousness, or intelligence. But an engineer going into a computer can locate its chips and other components of its material structure.

But the engineer can't locate the process itself or the software merely by taking apart the computer, or even looking inside. As another comparison, we have a pretty decent idea of what areas of a normal brain process different sorts of information, the "material structure" of mental processing.
 
I'm still not sure exactly what it is that I'm missing.

You're right that an engineer can't locate the process itself, just as a surgeon can't locate consciousness/intelligence in a human. The difference in popular parlance is that we assume that consciousness/intelligence in humans corresponds to some internal substance, whereas for a machine we simply associate its intelligence with the functions/processes that it carries out. We don't project any internal substance - chips don't count in this case as internal because they don't fulfill the same purpose that a purported substance of consciousness would. Chips are still material pieces of hardware; they're still identifiable, much like various parts of the brain are. To put it another way, there's no hard problem of computer intelligence. It's simply a reflection of algorithms.

Also, I have a feeling that engineers have a very good idea about what parts of a computer do what...

EDIT: scratch that final comment, I misread what you were saying about the comparison.
 
I haven't had a chance to read all of this closely yet, but Scott Bakker posted what appears to be a nice complement to what I've been thinking about lately. Apparently it's an older essay that he's re-posting, but anyway...

https://rsbakker.wordpress.com/2016/08/18/artificial-intelligence-as-socio-cognitive-pollution-2/

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.
 
This is interesting, to say the least. The Age of Heroes was a long time ago, though this doesn't mean that hero worship hasn't persisted. Band of Brothers features loads of hero worship, but it isn't part of the Age of Heroes. I'm intrigued by the piece, but a bit confused as to its position.
 
I think the overall point is that liberal "equalist" democracy is ultimately incapable of defending itself from external threats. That there are external threats (whether naturally occurring or provoked) requires that defense measures increasingly be carried out outside of the public eye.
 
So, is this author saying that heroes are a necessity in that they defend against external threats, even though they may do so in controversial ways...?

I'm curious about the notion of heroes, and skeptical above all else, because even in the ancient world, heroes weren't seen as actually necessary figures in the defense of the state. In fact, ancient Greek and Roman armies regularly punished soldiers who acted "heroically" - we tend to think otherwise because our modern image of ancient Greece and Rome comes through the lens of 300 and Gladiator. For the ancient world, heroes served a primarily literary purpose through which the culture could represent to itself the fatal and tragic inevitabilities of heroic action. There was no real Achilles or Hector; these figures were part of the cultural imagination, not actual soldiers. And actual Greek soldiers weren't encouraged to act like Achilles or Hector.

So, tl;dr, I think maybe this author is speaking of the modern conception of "hero" - in which case it's a bit misleading to talk about Achilles and the Age of Heroes.
 
It appears easily arguable that "actual heroes" are always constructions. That is, "heroic acts" are performed, which then in retrospect turn the performers into "Heroes" (capitalized). But this isn't central to the point of the article, other than the suggestion that warfare must be hidden because there are sacrifices and whatnot that must be made to have a certain lifestyle.

It's an issue that warfare (determining it to be "defense" is its own argument) - a fertile environment for heroism - must increasingly be conducted outside of the public eye. "Everything Is Awesome" geopolitically, except the stuff that isn't, which is happening elsewhere and doesn't have anything to do with us. We can instead limit our concern to slacktivism and demanding extra bathrooms or whatever.

The meta-takeaway, and this isn't new, is that democracy necessitates removing serious decision making and action (including warfare) from public access/view.