Dakryn's Batshit Theory of the Week

https://www.scientificamerican.com/article/why-people-dislike-really-smart-leaders/

IQ positively correlated with ratings of leader effectiveness, strategy formation, vision and several other characteristics—up to a point. The ratings peaked at an IQ of around 120, which is higher than roughly 80 percent of office workers. Beyond that, the ratings declined.
.........
“To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers,” he says. “The wrong interpretation would be, ‘Don’t hire high-IQ leaders.’

I had read something similar before, at least as a suggestion if not actual research (and I'm not sure whether it was the thing referenced early in the article): that beyond an IQ of 125-130, political aspirants and other aspiring leaders would have significant difficulty connecting with potential followers, due to the significant cognitive distance. This is the first research I've seen to suggest something similar, and the effect starts even earlier.
 
Just because it's not functioning doesn't mean it's gone (a hard drive can even be read after failure with the right equipment). At the cellular level, and all the way up, biological material that stops functioning decays away rather rapidly (bones excepted). Total information loss, outside of reproduction.

I see. But what's the point of Damasio invoking our fragility? It seemed that he was saying our fragility is what sets us apart from computers, but I don't see the point in that if what is durable about computers isn't their functionality. Yes, a computer's materials last; landfills are a testament to that. But the landfill is also a testament to the limited functionality these durable materials have for us. The plastic and copper of a computer might last for centuries, but the purpose we design them for typically lasts less than a human life (and in the case of laptops, usually not longer than five years). It's true that plastic sticks in the earth longer than biodegradable matter, but so what if it doesn't work?

I'm not sure why this is part of an argument against treating the brain as a computer. If anything, he should be saying that what makes a human brain different than a computer is the brain's durability.
 
I see. But what's the point of Damasio invoking our fragility? It seemed that he was saying our fragility is what sets us apart from computers, but I don't see the point in that if what is durable about computers isn't their functionality. Yes, a computer's materials last; landfills are a testament to that. But the landfill is also a testament to the limited functionality these durable materials have for us. The plastic and copper of a computer might last for centuries, but the purpose we design them for typically lasts less than a human life (and in the case of laptops, usually not longer than five years). It's true that plastic sticks in the earth longer than biodegradable matter, but so what if it doesn't work?

I'm not sure why this is part of an argument against treating the brain as a computer. If anything, he should be saying that what makes a human brain different than a computer is the brain's durability.

I think there's some presentism here. Individual cells are vulnerable to any number of issues, and the human body is vulnerable to a plethora of things. It is only through the explosion of technology providing adequate food, sanitation, and healthcare for most of the world (third-world countries excepted) that we don't have to have 12 kids so that 4 live to see grandchildren, or some such ratio. Cells are at risk even from themselves. Computers generally have two points of failure: old spinning-disc hard drives and power supplies. Power supplies generally last 5-10 years and can be replaced easily. Increasingly common SSDs have no moving parts and can be expected to far outlast the old-style hard drives, and even then HDDs are an easy swap-out. We typically don't replace PCs because they don't work per se; we replace them because the technology has made them obsolete. I have an old tablet that still turns on and whatnot just fine. It's just so old it doesn't run current app versions for shit. Laptops get chucked for similar reasons, or because replacement batteries are expensive enough that we'd rather just get a faster model.
 
I've always liked Damasio's theory on the importance of our embodied consciousness--that physical pain is important for developed cognition, that emotional complexity both contributes to and clutters rational thought, and that anatomy informs cognitive behavior. Where I depart from Damasio is in his phenomenological methodology, which he derives from Hubert Dreyfus (the patron saint of A.I. critique). I believe there's a miscommunication between phenomenologists and eliminative materialists. Phenomenologists reject the idea that the brain is like a computer because the brain is a component of the body, whereas computers don't share the same kind of embodiment. I think that eliminative materialists and computationalists would say it's more than just the brain that's like a computer; it's the entire body. Warren McCulloch and Walter Pitts were the originators of the synapses = zeroes and ones model, which treats the brain like a computer. More recent theories would expand this beyond the limited synaptic responses of the brain.

Saying that algorithms aren't amenable to embodied behavior can be restated as saying that there are no algorithms complex enough to capture embodied behavior. Douglas Hofstadter identifies this as a fallacy of argument that he calls "Tesler's Theorem," which basically translates into "A.I. is whatever hasn't been done yet." This is the problem I have with phenomenologists.
 
I've always liked Damasio's theory on the importance of our embodied consciousness--that physical pain is important for developed cognition, that emotional complexity both contributes to and clutters rational thought, and that anatomy informs cognitive behavior. Where I depart from Damasio is in his phenomenological methodology, which he derives from Hubert Dreyfus (the patron saint of A.I. critique). I believe there's a miscommunication between phenomenologists and eliminative materialists. Phenomenologists reject the idea that the brain is like a computer because the brain is a component of the body, whereas computers don't share the same kind of embodiment. I think that eliminative materialists and computationalists would say it's more than just the brain that's like a computer; it's the entire body. Warren McCulloch and Walter Pitts were the originators of the synapses = zeroes and ones model, which treats the brain like a computer. More recent theories would expand this beyond the limited synaptic responses of the brain.

But synaptic transmission isn't even that neatly describable. It's a simplification of the action potential being all-or-nothing in certain neurons. I think the description of physical bodies as being like computers (or maybe now moving on to algorithms) is the same error-prone tendency which led us to describe the body as a series of pumps or by other mechanistic descriptions. Living organisms are vastly more complicated, and such comparisons only vaguely apply, if at all.

Saying that algorithms aren't amenable to embodied behavior can be restated as saying that there are no algorithms complex enough to capture embodied behavior. Douglas Hofstadter identifies this as a fallacy of argument that he calls "Tesler's Theorem," which basically translates into "A.I. is whatever hasn't been done yet." This is the problem I have with phenomenologists.

I don't follow this application of the Didit fallacy. AI = Algorithms, and algorithms are designed to do something, even if the algorithm takes over on the path to that something (or even deviates from chasing that something).
 
But synaptic transmission isn't even that neatly describable. It's a simplification of the action potential being all-or-nothing in certain neurons. I think the description of physical bodies as being like computers (or maybe now moving on to algorithms) is the same error-prone tendency which led us to describe the body as a series of pumps or by other mechanistic descriptions. Living organisms are vastly more complicated, and such comparisons only vaguely apply, if at all.

Computers and humans are both complex, but that's not to say their complexities are the same or even similar. All metaphors are ultimately faulty in that they overlook certain nuances, but they're redeemed in that they elucidate other nuances. Comparing the human body to a computer isn't to say that the body's information can be abstracted and reprogrammed into a different material and stay the same throughout. It certainly wouldn't, because the body is part of the substrate necessary for human information. I think that calling consciousness ultimately programmable is a mistake, because I think that what we see in computers isn't consciousness and never will be.

Synaptic firing is neatly describable as binary code if we describe it in the following manner: a synapse either fires or it doesn't. It's either a zero or a one. Granted, this says next to nothing about why a synapse is firing, but that's another level of interpretation. 1 + 1 = 2, but simply stating that fact tells us nothing about why someone might be adding those numbers together. The neuronal model of computation doesn't purport to explain the reasons for human behavior, simply that the firing of synapses can be translated into binary code. Walter Pitts, together with Warren McCulloch, published a theorem back in the 1940s demonstrating this, and it's still widely accepted today. It's known as the artificial neuron.
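To make the fires-or-doesn't abstraction concrete, here's a minimal sketch of a McCulloch-Pitts-style threshold unit (the weights, thresholds, and example inputs are invented for illustration; this is just the textbook form of the artificial neuron, not a claim about how any real neuron behaves):

```python
# Minimal McCulloch-Pitts-style binary threshold unit: the unit either
# "fires" (1) or it doesn't (0). Weights, thresholds, and inputs below
# are illustrative only.

def binary_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold, else 0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wired as logical AND: fires only if both inputs fire.
print(binary_neuron([1, 1], weights=[1, 1], threshold=2))  # 1
print(binary_neuron([1, 0], weights=[1, 1], threshold=2))  # 0

# Wired as logical OR: fires if either input fires.
print(binary_neuron([0, 1], weights=[1, 1], threshold=1))  # 1
```

As said above, the model only captures whether the unit fires, not why; the "why" lives at another level of interpretation.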

I don't follow this application of the Didit fallacy. AI = Algorithms, and algorithms are designed to do something, even if the algorithm takes over on the path to that something (or even deviates from chasing that something).

What I mean is that A.I. skeptics continually shift the goalposts when discussing what they see as inherently human qualities that can't be programmed into computers. This is part of the history of A.I. research. First, it was said to be impossible to create calculating machines--"Only humans can do that"--but then we had Babbage's analytical engine and Turing machines. Then it was said that computers couldn't replicate language, which they can now do. Then it was said that computers couldn't identify objects, which they now can. Then it was said that computers couldn't beat humans at complex games, and now they can. In each case, it's a matter of creating more complex algorithms; and in each case, researchers have been able to accomplish feats that skeptics said were impossible. Phenomenologists tend to privilege human experience simply because it's human experience; and that's fine, it's their prerogative. But it carries with it some predispositions that, I think, inhibit their ability to critique A.I. research.

Now, this isn't to say that it is possible to create a computer that matches human cognition and decision-making, only that the skeptics' argument is to claim "Well, that's not part of what makes humans really human" (i.e. it isn't our language, or our game-playing, etc.). And the skeptics are right that computers will never fully emulate humans, but that's because computers don't have human bodies. Programming sound waves into a computer won't generate an aural experience because computers don't have the hardware to process sound waves as sound. The point in A.I. research isn't to create computers that are conscious, but computers that are intelligent enough to mimic conscious behavior.

The thrill of comparing humans to computers is that it forces us to question whether we are merely mimicking conscious behavior. ;) And there's no way to prove that we aren't, even if it feels like a pointless suggestion.
 
Synaptic firing is neatly describable as binary code if we describe it in the following manner: a synapse either fires or it doesn't. It's either a zero or a one. Granted, this says next to nothing about why a synapse is firing, but that's another level of interpretation. 1 + 1 = 2, but simply stating that fact tells us nothing about why someone might be adding those numbers together. The neuronal model of computation doesn't purport to explain the reasons for human behavior, simply that the firing of synapses can be translated into binary code. Walter Pitts, together with Warren McCulloch, published a theorem back in the 1940s demonstrating this, and it's still widely accepted today. It's known as the artificial neuron.

Just a side note (not quibbling, I know what you mean): the neuron fires. The synapse is the gap between the dendrite of the receiving neuron and the axon(s) of sending neurons.

I'm not familiar with Pitts's theorem and it's outside of my expertise, but I'd be skeptical of comparisons to neurons based on neuroscience prior to WWII.

What I mean is that A.I. skeptics continually shift the goalposts when discussing what they see as inherently human qualities that can't be programmed into computers. This is part of the history of A.I. research. First, it was said to be impossible to create calculating machines--"Only humans can do that"--but then we had Babbage's analytical engine and Turing machines. Then it was said that computers couldn't replicate language, which they can now do. Then it was said that computers couldn't identify objects, which they now can. Then it was said that computers couldn't beat humans at complex games, and now they can. In each case, it's a matter of creating more complex algorithms; and in each case, researchers have been able to accomplish feats that skeptics said were impossible. Phenomenologists tend to privilege human experience simply because it's human experience; and that's fine, it's their prerogative. But it carries with it some predispositions that, I think, inhibit their ability to critique A.I. research.

Now, this isn't to say that it is possible to create a computer that matches human cognition and decision-making, only that the skeptics' argument is to claim "Well, that's not part of what makes humans really human" (i.e. it isn't our language, or our game-playing, etc.). And the skeptics are right that computers will never fully emulate humans, but that's because computers don't have human bodies. Programming sound waves into a computer won't generate an aural experience because computers don't have the hardware to process sound waves as sound. The point in A.I. research isn't to create computers that are conscious, but computers that are intelligent enough to mimic conscious behavior.

The thrill of comparing humans to computers is that it forces us to question whether we are merely mimicking conscious behavior. ;) And there's no way to prove that we aren't, even if it feels like a pointless suggestion.

Ok, well I get the moving the goalposts issue (not sure about the Didit fallacy though). This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

As a matter of personal interest from someone who does a lot of gaming: it would be interesting to see how computers do in games which are not completely contained and purely logical. I'd expect them to do better than humans sooner or later, but it would be interesting to see whether they show a pattern of learning and/or move-making similar to what they have shown in chess or Go - that is, eventually winning by methods considered completely unconventional.
 
Just a side note (not quibbling, I know what you mean): the neuron fires. The synapse is the gap between the dendrite of the receiving neuron and the axon(s) of sending neurons.

Right, yes. I'm just working from the phrase "synaptic firing," which I've read in the discourse.

I'm not familiar with Pitts's theorem and it's outside of my expertise, but I'd be skeptical of comparisons to neurons based on neuroscience prior to WWII.

I get that, but it's still a widely-accepted model. There have been books published on it as recently as 2011. I've not read any substantial rejections of the model itself.

Ok, well I get the moving the goalposts issue (not sure about the Didit fallacy though). This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

I never intended to invoke the "Didit fallacy," so apologies if my wording implied that. My issue w/ A.I. skeptics has always been the goalposts fallacy.

EDIT: when Hofstadter writes the fallacy as "A.I. is whatever hasn't been done yet," he's not suggesting the intervention of an unknown force. He's simply pointing out that A.I. skeptics constantly displace what we would identify as A.I. into the bounds of the unaccomplished in order to support their argument that A.I. is impossible.

As a matter of personal interest from someone who does a lot of gaming: it would be interesting to see how computers do in games which are not completely contained and purely logical. I'd expect them to do better than humans sooner or later, but it would be interesting to see whether they show a pattern of learning and/or move-making similar to what they have shown in chess or Go - that is, eventually winning by methods considered completely unconventional.

Agreed. Although electronic games work on code too, so I assume it would simply be a matter of programming a computer w/ said code? The difference with non-electronic games is that they rely on a set of rules dictated by social convention and regularity (e.g. if you try to move a king two spaces on a chessboard, an opponent or referee has to stop you; the game itself won't prohibit it).
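As a toy illustration of that point (the board representation and function names here are invented, and only the king's one-square rule is checked): in software the rule is enforced by the program itself, not by an opponent or referee.

```python
# Toy sketch of rules enforced "by the game itself": a digital board can
# simply refuse an illegal move, whereas a physical board relies on the
# players to object. Only the king's one-square rule is modeled here.

def is_legal_king_move(src, dst):
    """A king may move at most one square in any direction (castling ignored)."""
    file_diff = abs(ord(src[0]) - ord(dst[0]))
    rank_diff = abs(int(src[1]) - int(dst[1]))
    return max(file_diff, rank_diff) == 1

def try_move(piece, src, dst):
    if piece == "K" and not is_legal_king_move(src, dst):
        raise ValueError(f"Illegal king move: {src} -> {dst}")
    return f"{piece} moves {src} -> {dst}"

print(try_move("K", "e1", "e2"))       # allowed
try:
    print(try_move("K", "e1", "e3"))   # two squares
except ValueError as err:
    print(err)                         # the code, not a referee, refuses it
```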
 
Agreed. Although electronic games work on code too, so I assume it would simply be a matter of programming a computer w/ said code? The difference with non-electronic games is that they rely on a set of rules dictated by social convention and regularity (e.g. if you try to move a king two spaces on a chessboard, an opponent or referee has to stop you; the game itself won't prohibit it).

Sorry, I meant physical games which involve some percentage of chance. At the extreme, well-known end, I could refer to the game of Risk. There are some strategies, but at least 50% is left to luck by my estimation. I'm less interested in the win/loss record of an AI than in how it begins to approach the problem.
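As a rough way of putting numbers on that luck component, here's a small Monte Carlo sketch of a single Risk-style dice exchange (standard three attacking dice against two defending dice, defender wins ties). It only estimates one exchange, not the whole game, and the trial count and seed are arbitrary.

```python
import random

# Monte Carlo estimate of one round of Risk-style dice: three attacking
# dice against two defending dice, highest vs. highest and second-highest
# vs. second-highest, defender wins ties. Toy numbers only.

def battle_round(rng):
    attack = sorted((rng.randint(1, 6) for _ in range(3)), reverse=True)
    defend = sorted((rng.randint(1, 6) for _ in range(2)), reverse=True)
    attacker_losses = defender_losses = 0
    for a, d in zip(attack, defend):
        if a > d:
            defender_losses += 1
        else:
            attacker_losses += 1
    return attacker_losses, defender_losses

def estimate(trials=100_000, seed=0):
    rng = random.Random(seed)
    outcomes = {}
    for _ in range(trials):
        result = battle_round(rng)
        outcomes[result] = outcomes.get(result, 0) + 1
    for (att_loss, def_loss), count in sorted(outcomes.items()):
        print(f"attacker loses {att_loss}, defender loses {def_loss}: {count / trials:.1%}")

estimate()
```

Even with identical "strategy" on both sides, the spread of outcomes gives a sense of how much a single exchange comes down to the dice.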
 
This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

Just one more comment here--yes, I think I'm in agreement with Damasio on this point.

The controversy over this matter goes back to first-wave cybernetics, whose patron saints were figures like Norbert Wiener and Claude Shannon. Wiener and Shannon believed in the decontextualization, or disembodiment, of information. In other words, they believed that the information that "makes up" a human being could be extracted and reprogrammed in a different interface, and that this could be done without losing any information (they just didn't have the technology, they claimed). They thought that someday we would be able to "transmit" human beings, a la Star Trek.
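For what it's worth, the technical kernel of that first-wave view is Shannon's measure of information, which is defined purely over symbol probabilities and says nothing about the medium carrying the symbols; that medium-independence is what made the "disembodied information" picture seem plausible. A minimal sketch (the example message is arbitrary):

```python
import math
from collections import Counter

# Shannon entropy H = -sum(p * log2(p)): an amount of information computed
# from symbol frequencies alone, with no reference to the medium (ink,
# voltage, neurons) carrying the symbols.

def shannon_entropy(message):
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(f"{shannon_entropy('abracadabra'):.3f} bits per symbol")
```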

This was a reigning perspective in cybernetics until about 1960. After 1960, cybernetics shifted in a direction more concerned with materiality and the nature of the information-media relation, or the placement and operation of the interface (this included contradictions of observation). As far as cybernetics in the humanities goes, the precedent set by N. Katherine Hayles is the one I follow: that information is an intrinsically embodied substance, that analyzing/calculating information is possible, but transplanting it yields an entirely new organism. In short, cybernetic entities are entities whose informatic makeup is constantly interacting with the hardware of the body and the material of the environment (in some manner).

If there's a key to cracking consciousness, I don't think it's to be found in neural information. A.I. isn't really about creating conscious cybernetic entities, but about creating massively intelligent cybernetic entities--i.e. entities that are capable of large-scale pattern-matching. Even if we eventually create an A.I. that is so complex that it can mimic consciousness, all this will tell us is that even we may be mimicking consciousness. It won't reveal any secrets.
 
If there's a key to cracking consciousness, I don't think it's to be found in neural information. A.I. isn't really about creating conscious cybernetic entities, but about creating massively intelligent cybernetic entities--i.e. entities that are capable of large-scale pattern-matching. Even if we eventually create an A.I. that is so complex that it can mimic consciousness, all this will tell us is that even we may be mimicking consciousness. It won't reveal any secrets.

I meant to respond earlier to the comment about even us mimicking consciousness, and then skipped it. What would it mean to mimic something which, as far as we know, exists nowhere else and in nothing else? To mimic requires an other to mimic. Berkeley's Permanent Perceiver laughs.

AI is indeed, currently, simply fast and broad mathematical operations engaged in so much pattern matching. Humans engage in pattern matching too to varying degrees (or at least, successfully to varying degrees), but I'm not sure I'd want to reduce consciousness to only pattern matching. Speaking of AI:

https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what-journalists-want-you-to-think-it-means/

The media is misleading people. When an ordinary person uses the term “biased,” they think this means that incorrect decisions are made — that a lender systematically refuses loans to blacks who would otherwise repay them. When the media uses the term “bias”, they mean something very different — a lender systematically failing to issue loans to black people regardless of whether or not they would pay them back.
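A small sketch of how those two senses of "bias" come apart on made-up numbers (the records, rates, and function names are all invented for illustration): one metric counts incorrect decisions within each group, the other counts approval rates regardless of repayment.

```python
# Each made-up record: (group, would_repay, loan_approved). Decisions below
# track repayment perfectly for both groups, but the groups have different
# repayment rates, so the two "bias" metrics diverge.

records = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", True, True), ("B", False, False), ("B", False, False), ("B", False, False),
]

def error_rate(group):
    """Share of decisions that were wrong: denied repayers or approved non-repayers."""
    rows = [r for r in records if r[0] == group]
    wrong = sum(1 for _, would_repay, approved in rows if approved != would_repay)
    return wrong / len(rows)

def approval_rate(group):
    """Share of applicants approved, regardless of whether they would repay."""
    rows = [r for r in records if r[0] == group]
    return sum(1 for _, _, approved in rows if approved) / len(rows)

for g in ("A", "B"):
    print(f"group {g}: error rate {error_rate(g):.0%}, approval rate {approval_rate(g):.0%}")

# Output: both groups get a 0% error rate (no "incorrect decisions"), yet
# approval rates differ (80% vs. 40%) -- the disparity the article says the
# media is calling bias.
```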
 
Sort of separate if not completely:

http://www.psych.nyu.edu/vanbavel/lab/documents/VanBavel.etal.2015COP.pdf

Moral cognition is also dependent on brain structures directly involved in self and other-related processing. The mPFC is engaged when thinking about the self [50], as well as others’ mental states [51], suggesting that the perception of self and other can be intimately entwined [52,53]. The temporoparietal junction (TPJ) also emerged early on as a key region necessary for decoding social cues [54] and specifically for representing the intentions [55,56] and emotions of others [57]. More recent work has found that these other-oriented intentions and motivations have downstream effects on moral behavior [58]. Inferences regarding moral character dominate our impressions of others [59,60,61] and impact how moral phenomena are perceived. In particular, immoral actions are highly diagnostic and heavily weighted in moral evaluations of others [62,63,64]. Most prevailing theories explain this negativity bias in terms of the statistical extremity or infrequency of immoral actions, compared to positive moral actions (e.g. [65–68]). However, people are not static — they change constantly. Both our impressions of others and our own self-concepts are dynamically updated in light of new information. A complete model of moral cognition must capture how moral valuations are shaped by expectations, both at the level of the environment (e.g., social norms) and the individual (e.g., knowledge regarding a specific person), and continuously updated

It appears possible that one of the reasons prison populations are skewed towards the less intelligent involves more links to intelligence than simply being "too dumb to realize plan X won't work". The different parts of the prefrontal cortex are responsible for a variety of higher-order functions (like behavioral inhibition and planning), and this likely includes aspects of morality as well.
 
I meant to respond earlier to the comment about even us mimicking consciousness, and then skipped it. What would it mean to mimic something which, as far as we know, exists nowhere else and in nothing else? To mimic requires an other to mimic. Berkeley's Permanent Perceiver laughs.

Haha, well, we mimic each other. I think we've had a variant of this conversation before, so we don't have to run through it again; but suffice it to say that I'm a skeptic toward the totally internalist explanation of consciousness, i.e. that consciousness came about and gave birth to other human attributes like language, empathy, etc. I believe that consciousness is concomitant with other aspects of social existence, including representational figures. Extrapolating from this premise, it's possible that consciousness developed as a complex process whereby human beings gradually internalized a model of social interaction, basically mimicking social form as a framework of personal subjectivity. A crucial component of consciousness is that we can imagine consciousness in others, we can empathize with others' situations.

At this point, humans would simply continue to mimic the consciousness they encounter and intuit in others. The theory of mirror neurons supports a version of this interpretation, although I realize it's very controversial.

AI is indeed, currently, simply fast and broad mathematical operations engaged in so much pattern matching. Humans engage in pattern matching too to varying degrees (or at least, successfully to varying degrees), but I'm not sure I'd want to reduce consciousness to only pattern matching.

Most people don't, it's true. And as of right now, it's reductive to say that consciousness is nothing more than pattern-matching. If that is true, then we should hypothetically be able to come up with algorithms to reproduce it. We haven't.

The problem is that we also can't prove that consciousness isn't just pattern-matching. Obviously the problem extends beyond this context, as it would demand that we prove a negative. The anxiety of A.I. research is simply that it forces us to confront these questions.
 
At this point, humans would simply continue to mimic the consciousness they encounter and intuit in others. The theory of mirror neurons supports a version of this interpretation, although I realize it's very controversial.

Well, it's not impossible, but I'd be curious as to how it arose so uniformly.

Most people don't, it's true. And as of right now, it's reductive to say that consciousness is nothing more than pattern-matching. If that is true, then we should hypothetically be able to come up with algorithms to reproduce it. We haven't.

The problem is that we also can't prove that consciousness isn't just pattern-matching. Obviously the problem extends beyond this context, as it would demand that we prove a negative. The anxiety of A.I. research is simply that it forces us to confront these questions.

Even if it is just pattern matching, we don't understand the original or the reciprocal inputs or the objectives.
 
Well, it's not impossible, but I'd be curious as to how it arose so uniformly.

Ha, so would I.

Even if it is just pattern matching, we don't understand the original or the reciprocal inputs or the objectives.

Correct.
Sort of separate if not completely:

http://www.psych.nyu.edu/vanbavel/lab/documents/VanBavel.etal.2015COP.pdf

It appears possible that one of the reasons prison populations are skewed towards the less intelligent involves more links to intelligence than simply being "too dumb to realize plan X won't work". The different parts of the prefrontal cortex are responsible for a variety of higher-order functions (like behavioral inhibition and planning), and this likely includes aspects of morality as well.

More links as in, those who commit crimes tend to lack the mental capacity for moral consideration of the impact on others? If so, I think that makes perfect sense. Didn't have time to read the whole article.

If that is the case, it seems a more practical alternative to the categorical imperative (which doesn't really care about the effects of immoral action, only that a moral code would be universally accepted).
 
More links as in, those who commit crimes tend to lack the mental capacity for moral consideration of the impact on others? If so, I think that makes perfect sense. Didn't have time to read the whole article.

Maybe lack the capacity for consideration, maybe lack the capacity to care about the consideration, etc. I'm not sure exactly how much the paper tells us beyond that moral cognition is complex and requires the "newer" parts of the brain to a significant degree.


If that is the case, it seems a more practical alternative to the categorical imperative (which doesn't really care about the effects of immoral action, only that a moral code would be universally accepted).

Even if we were to leap to an extreme interpretation that more developed relevant portions of the brain = more morality, I'm not sure exactly how that would inform morality/ethics. Obviously there's more to it than that, but off the top of my head it makes a good argument for the value of religion. Fear of an omniscient "parent" is probably more salient to a self-interested simpleton in comparison with abstract arguments and metacognitions he lacks the capacity for.
 
https://aeon.co/essays/the-self-does-exist-and-is-amenable-to-scientific-investigation

The answer is that science does all this by rejecting antirealism. In fact, the self does exist. The phenomenal experience of having a self, the feelings of pain and of pleasure, of control, intentionality and agency, of self-governance, of acting according to one’s beliefs and desires, the sense of engaging with the physical world and the social world – all this offers evidence of the existence of the self. Furthermore, empirical research in the mind sciences provides robust reasons to deny antirealism. The self lends itself to scientific explanations and generalisations, and such scientific information can be used to understand disorders of the self, such as depression and schizophrenia, and to develop this self-understanding facilitates one’s ability to live a rich moral life.

Ugh, this kind of hack writing annoys the piss out of me. I'm not even all that interested in the author's theory of a "multitudinous self"--it's fine if that's the proposed solution or working approach. I have no problem with it. But I have a big problem with this black and white notion that the self is either an illusion (it doesn't exist) or it's real, and is evidenced by our experience of selfhood.

First of all, the claim that our experience of selfhood constitutes a self completely elides the structure of argument that critiques the metaphysics of selfhood. It's amateurish and ridiculous. Second of all, it completely dismisses the plethora of interesting questions that this line of inquiry raises: Is selfhood an essence, or an experience (is there a difference)? If there is such a thing as a "real" self, but we can't logically prove it, does that make our experience of selfhood more meaningful than any abstract entity that we call "the self"--and if so, is it even worth discussing what "real" selfhood is? Could we distinguish between real and counterfeit selfhood, if the experience of both is the same? Why the need to reify experience into an internal fabric or substance?

Finally, and most important, why is an antirealist approach hazardous to the notion of human being and experience? Just because people like Dennett argue for an antirealist philosophy of mind (not sure Dennett would use the term "antirealist"), this doesn't mean they see the experience of selfhood as a worthless phenomenon. Dennett himself has developed the notion of the "intentional stance," which he argues is instrumental and imperative for rational human functioning. The dismissal of what we used to think of as "selfhood" doesn't translate into a degradation of human experience.

These kinds of arguments make me think we need a new definition of selfhood and the self. The old debates are getting tiresome.
 
I'm not sure we can escape a Cartesian self. I mean, we can theorize alternatives, but we can't have a conversation with them.

The point isn't whether we can escape it; it's whether we can separate the value of experiencing selfhood in a Cartesian fashion from presuming that that experience correlates to the metaphysical substance that we call "the self." The author of that article believes that the self exists because we experience it as such. This is the error, and it's a flaw in logic that phenomenologists continue to make.

In other news: http://www.rifters.com/crawl/?p=7875

Pastuzyn et al [link broken] of the University of Utah, have just shown that Arc is literally an infection: a tamed, repurposed virus that infected us a few hundred million years ago. Apparently it looks an awful lot like HIV. Pastuzyn et al speculate that Arc “may mediate intercellular signaling to control synaptic function”.

Memory is a virus. Or at least, memory depends on one.

Of course, everyone’s all over this. U of Utah trumpeted the accomplishment with a press release notable for, among other things, describing the most-junior contributor to this 13-author paper as the “senior” author. Newsweek picked up both the torch and the mistake, leading me to wonder if Kastalio Medrano is simply at the sloppy end of the scale or if it’s normal for “Science Writers” in popular magazines to not bother reading the paper they’re reporting on. (I mean, seriously, guys; the author list is right there under the title.) As far as I know I’m the first to quote Burroughs in this context (or to mention that Greg Bear played around a very similar premise in Darwin’s Radio), but when your work gets noticed by The Atlantic you know you’ve arrived.

Me, though, I can’t stop thinking about the fact that something which was once an infection is now such an integral part of our cognitive architecture. I can’t stop wondering what would happen if someone decided to reweaponise it.

The parts are still there, after all. Arc builds its own capsid, loads it up with genetic material, hops from one cell to another. The genes being transported don’t even have to come from Arc:

“If viral RNA is not present, Gag encapsulates host RNA, and any single-stranded nucleic acid longer than 20-30 nt can support capsid assembly … indicating a general propensity to bind abundant RNA.”

The delivery platform’s intact; indeed, the delivery platform is just as essential to its good role as it once was to its evil one. So what happens if you add a payload to that platform that, I dunno, fries intraneuronal machinery somehow?

I’ll tell you. You get a disease that spreads through the very act of thinking. The more you think, the more memories you lay down, the more the disease ravages you. The only way to slow its spread is to think as little as possible; the only way to save your intelligence is not to use it. Your only chance is to become willfully stupid.

fwiw Burroughs wrote that "Language is a virus from outer space."

Most likely true.
 