Dakryn's Batshit Theory of the Week

Just a side note (not quibbling, I know what you mean): The neuron fires. The synapse is the gap between the dendrite of the receiving neuron and the axon(s) of sending neurons.

Right, yes. I'm just working from the phrase "synaptic firing," which I've read in the discourse.

I'm not familiar with Pitts's theorem and it's outside my expertise, but I'd be skeptical of comparisons to neurons based on neuroscience prior to WWII.

I get that, but it's still a widely-accepted model. There have been books published on it as recently as 2011. I've not read any substantial rejections of the model itself.
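For reference, and assuming the model under discussion is the McCulloch-Pitts neuron (1943): it reduces a neuron to a threshold unit over weighted binary inputs. A minimal Python sketch, with names and numbers that are mine, purely for illustration:

```python
# Minimal sketch of a McCulloch-Pitts-style threshold neuron (the 1943 model,
# assuming that is the model being discussed). Names and numbers are illustrative.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Configured as an AND gate: both inputs must be on for the unit to fire.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # -> 0
```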

Ok, well, I get the moving-the-goalposts issue (not sure about the Didit fallacy, though). This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

I never intended to invoke the "Didit fallacy," so apologies if my wording implied that. My issue w/ A.I. skeptics has always been the goalposts fallacy.

EDIT: when Hofstadter writes the fallacy as "A.I. is whatever hasn't been done yet," he's not suggesting the intervention of an unknown force. He's simply pointing out that A.I. skeptics constantly displace what we would identify as A.I. into the bounds of the unaccomplished in order to support their argument that A.I. is impossible.

As a matter of personal interest from someone who does a lot of gaming, it would be interesting to see how computers do in games which are not completely contained and purely logical. I'd expect them to do better than humans sooner or later, but it would be interesting to see whether they show a similar pattern of learning and/or move-making as they have in chess or Go - that is, eventually winning by methods considered completely unconventional.

Agreed. Although electronic games work on code too, so I assume it would simply be a matter of programming a computer w/ said code? The edge non-electronic games have is that they rely on a set of rules dictated by social convention and regularity (e.g. if you try to move a king two spaces on a chessboard, an opponent or referee has to stop you; the game itself won't prohibit it).
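To make that distinction concrete: in an electronic game the rule itself is code, so the program refuses the illegal move and no opponent or referee is needed. A toy sketch (the notation and function name are hypothetical, not from any real chess engine):

```python
# Toy illustration: in software the rule set itself blocks the illegal move,
# rather than relying on an opponent or referee. Notation is hypothetical.

def is_legal_king_move(src, dst):
    """A king may move at most one square in any direction (castling ignored)."""
    file_delta = abs(ord(src[0]) - ord(dst[0]))
    rank_delta = abs(int(src[1]) - int(dst[1]))
    return max(file_delta, rank_delta) == 1

print(is_legal_king_move("e1", "e2"))  # -> True
print(is_legal_king_move("e1", "e3"))  # -> False: the program simply won't allow it
```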
 

Sorry, I meant physical games which involve some percentage of chance. At the extreme, well-known end, I could refer to the game of Risk. There are some strategies, but at least 50% is left to luck by my estimation. I'm less interested in the win/loss by AI than in how they begin to approach the problem.
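On how a program might approach a game that is partly luck: one standard approach (hedging here, since real systems vary) is to value each choice by its expected payoff over the random outcomes, either computed exactly or estimated by simulated rollouts, rather than assuming a worst case as in chess-style minimax. A minimal sketch, with an invented dice game standing in for something like Risk:

```python
import random

# Minimal expectation-based sketch for a game with a chance element.
# The "game" is invented purely for illustration: pick a strategy,
# then a die roll decides how well it pays off.

PAYOFFS = {
    "aggressive": {1: -3, 2: -1, 3: 0, 4: 2, 5: 4, 6: 6},
    "cautious":   {1:  0, 2:  0, 3: 1, 4: 1, 5: 2, 6: 2},
}

def expected_value(strategy):
    """Exact average payoff over all equally likely die rolls."""
    payoffs = PAYOFFS[strategy]
    return sum(payoffs.values()) / len(payoffs)

def monte_carlo_value(strategy, trials=10_000):
    """Estimate the same quantity by simulated rolls, as a rollout-style AI might."""
    total = sum(PAYOFFS[strategy][random.randint(1, 6)] for _ in range(trials))
    return total / trials

for s in PAYOFFS:
    print(s, round(expected_value(s), 2), round(monte_carlo_value(s), 2))
```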
 
This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

Just one more comment here--yes, I think I'm in agreement with Damasio on this point.

The controversy over this matter goes back to first-wave cybernetics, whose patron saints were figures like Norbert Wiener and Claude Shannon. Wiener and Shannon believed in the decontextualization, or disembodiment, of information. In other words, they believed that the information that "makes up" a human being could be extracted and reprogrammed in a different interface, and that this could be done without losing any information (they just didn't have the technology, they claimed). They thought that someday we would be able to "transmit" human beings, a la Star Trek.
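For what "information" means in Shannon's framework, and why the disembodiment claim seemed natural to them: information there is a quantity defined purely over symbol probabilities, H = -Σ p·log2(p), with no reference to the medium carrying the symbols. A quick sketch of the standard formula (the examples are mine):

```python
import math

# Shannon entropy: information measured in bits, defined only by symbol
# probabilities, with no reference to the physical medium carrying them.

def shannon_entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin -> 1.0 bit
print(shannon_entropy([0.9, 0.1]))   # biased coin -> ~0.47 bits
print(shannon_entropy([0.25] * 4))   # four equally likely symbols -> 2.0 bits
```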

This was a reigning perspective in cybernetics until about 1960. After 1960, cybernetics shifted in a direction more concerned with materiality and the nature of the information-media relation, or the placement and operation of the interface (this included contradictions of observation). As far as cybernetics in the humanities goes, the precedent set by N. Katherine Hayles is the one I follow: that information is an intrinsically embodied substance, that analyzing/calculating information is possible, but transplanting it yields an entirely new organism. In short, cybernetic entities are entities whose informatic makeup is constantly interacting with the hardware of the body and the material of the environment (in some manner).

If there's a key to cracking consciousness, I don't think it's to be found in neural information. A.I. isn't really about creating conscious cybernetic entities, but about creating massively intelligent cybernetic entities--i.e. entities that are capable of large-scale pattern-matching. Even if we eventually create an A.I. that is so complex that it can mimic consciousness, all this will tell us is that even we may be mimicking consciousness. It won't reveal any secrets.
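To give "large-scale pattern-matching" a concrete face, a deliberately tiny sketch: a nearest-neighbour matcher that labels a new input by the closest stored example. Real systems do something far more elaborate over millions of dimensions and examples, but it is the same kind of operation; the data and labels below are invented:

```python
# Tiny nearest-neighbour matcher: classify a new point by the label of the
# closest stored example. Data and labels are invented for illustration.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbour_label(query, examples):
    """examples: list of (feature_vector, label) pairs."""
    return min(examples, key=lambda ex: squared_distance(query, ex[0]))[1]

examples = [((0.1, 0.2), "cat"), ((0.9, 0.8), "dog"), ((0.2, 0.1), "cat")]
print(nearest_neighbour_label((0.15, 0.18), examples))  # -> "cat"
print(nearest_neighbour_label((0.85, 0.90), examples))  # -> "dog"
```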
 

I meant to respond earlier to the comment about even we mimicking consciousness and then skipped it. What would it mean to mimic something which, as far as we know, exists nowhere else or in nothing else? To mimic requires an other to mimic. Berkeley's Permanent Perceiver laughs.

AI is indeed, currently, simply fast and broad mathematical operations engaged in so much pattern matching. Humans engage in pattern matching too, to varying degrees (or at least, successfully to varying degrees), but I'm not sure I'd want to reduce consciousness to only pattern matching. Speaking of AI:

https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what-journalists-want-you-to-think-it-means/

The media is misleading people. When an ordinary person uses the term “biased,” they think this means that incorrect decisions are made — that a lender systematically refuses loans to blacks who would otherwise repay them. When the media uses the term “bias”, they mean something very different — a lender systematically failing to issue loans to black people regardless of whether or not they would pay them back.
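To make the quote's two senses concrete: they correspond to two different measurements. A sketch on invented data (groups "A" and "B", all values made up, metric names mine) where the lender approves exactly the applicants who would repay, so there is no bias in the ordinary incorrect-decision sense, yet approval rates still differ because the invented groups differ in composition:

```python
# Two different "bias" measurements on invented loan data.
# Each record: (group, would_repay, approved). All values are made up.

records = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", False, False),
]

def approval_rate(group):
    """Outcome sense of 'bias': share of the group approved, repayment aside."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def wrongful_denial_rate(group):
    """Ordinary sense of 'bias': share of creditworthy applicants who were denied."""
    rows = [r for r in records if r[0] == group and r[1]]
    return sum(not r[2] for r in rows) / len(rows)

for g in ("A", "B"):
    print(g, "approval:", approval_rate(g), "wrongful denial:", wrongful_denial_rate(g))
```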
 
Sort of separate if not completely:

http://www.psych.nyu.edu/vanbavel/lab/documents/VanBavel.etal.2015COP.pdf

Moral cognition is also dependent on brain structures directly involved in self and other-related processing. The mPFC is engaged when thinking about the self [50], as well as others’ mental states [51], suggesting that the perception of self and other can be intimately entwined [52,53]. The temporoparietal junction (TPJ) also emerged early on as a key region necessary for decoding social cues [54] and specifically for representing the intentions [55,56] and emotions of others [57]. More recent work has found that these other-oriented intentions and motivations have downstream effects on moral behavior [58]. Inferences regarding moral character dominate our impressions of others [59,60,61] and impact how moral phenomena are perceived. In particular, immoral actions are highly diagnostic and heavily weighted in moral evaluations of others [62,63,64]. Most prevailing theories explain this negativity bias in terms of the statistical extremity or infrequency of immoral actions, compared to positive moral actions (e.g. [65–68]). However, people are not static — they change constantly. Both our impressions of others and our own self-concepts are dynamically updated in light of new information. A complete model of moral cognition must capture how moral valuations are shaped by expectations, both at the level of the environment (e.g., social norms) and the individual (e.g., knowledge regarding a specific person), and continuously updated.

It appears possible that the skew of prison populations towards the less intelligent is linked to intelligence in more ways than simply being "too dumb to realize plan X won't work". The different parts of the prefrontal cortex are responsible for a variety of higher-order functions (like behavioral inhibition and planning), and this likely includes aspects of morality as well.
 
I meant to respond earlier to the comment about even we mimicking consciousness and then skipped it. What would it mean to mimic something which, as far as we know, exists nowhere else or in nothing else? To mimic requires an other to mimic. Berkeley's Permanent Perceiver laughs.

Haha, well, we mimic each other. I think we've had a variant of this conversation before, so we don't have to run through it again; but suffice it to say that I'm a skeptic toward the totally internalist explanation of consciousness, i.e. that consciousness came about and gave birth to other human attributes like language, empathy, etc. I believe that consciousness is concomitant with other aspects of social existence, including representational figures. Extrapolating from this premise, it's possible that consciousness developed as a complex process whereby human beings gradually internalized a model of social interaction, basically mimicking social form as a framework of personal subjectivity. A crucial component of consciousness is that we can imagine consciousness in others; we can empathize with others' situations.

At this point, humans would simply continue to mimic the consciousness they encounter and intuit in others. The theory of mirror neurons supports a version of this interpretation, although I realize it's very controversial.

AI is indeed, currently, simply fast and broad mathematical operations engaged in so much pattern matching. Humans engage in pattern matching too, to varying degrees (or at least, successfully to varying degrees), but I'm not sure I'd want to reduce consciousness to only pattern matching.

Most people don't, it's true. And as of right now, it's reductive to say that consciousness is nothing more than pattern-matching. If that is true, then we should hypothetically be able to come up with algorithms to reproduce it. We haven't.

The problem is that we also can't prove that consciousness isn't just pattern-matching. Obviously the problem extends beyond the context, as this would demand that we prove a negative. The anxiety of A.I. research is simply that it forces us to confront these questions.
 
At this point, humans would simply continue to mimic the consciousness they encounter and intuit in others. The theory of mirror neurons supports a version of this interpretation, although I realize it's very controversial.

Well, it's not impossible, but I'd be curious as to how it arose so uniformly.

Most people don't, it's true. And as of right now, it's reductive to say that consciousness is nothing more than pattern-matching. If that is true, then we should hypothetically be able to come up with algorithms to reproduce it. We haven't.

The problem is that we also can't prove that consciousness isn't just pattern-matching. Obviously the problem extends beyond the context, as this would demand that we prove a negative. The anxiety of A.I. research is simply that it forces us to confront these questions.

Even if it is just pattern matching, we don't understand the original or the reciprocal inputs or the objectives.
 
Well, it's not impossible, but I'd be curious as to how it arose so uniformly.

Ha, so would I.

Even if it is just pattern matching, we don't understand the original or the reciprocal inputs or the objectives.

Correct.
 
Sort of separate if not completely:

http://www.psych.nyu.edu/vanbavel/lab/documents/VanBavel.etal.2015COP.pdf

It appears possible that the skew of prison populations towards the less intelligent is linked to intelligence in more ways than simply being "too dumb to realize plan X won't work". The different parts of the prefrontal cortex are responsible for a variety of higher-order functions (like behavioral inhibition and planning), and this likely includes aspects of morality as well.

More links as in, those who commit crimes tend to lack the mental capacity for moral consideration of the impact on others? If so, I think that makes perfect sense. Didn't have time to read the whole article.

If that is the case, seems a more practical alternative to the categorical imperative (which doesn't really care about the effects of immoral action, only that a moral code would be universally accepted).
 
More links as in, those who commit crimes tend to lack the mental capacity for moral consideration of the impact on others? If so, I think that makes perfect sense. Didn't have time to read the whole article.

Maybe lack the capacity for consideration, maybe lack the capacity to care about the consideration, etc. I'm not sure exactly how much the paper tells us beyond that moral cognition is complex and requires the "newer" parts of the brain to a significant degree.


If that is the case, seems a more practical alternative to the categorical imperative (which doesn't really care about the effects of immoral action, only that a moral code would be universally accepted).

Even if we were to leap to an extreme interpretation that more developed relevant portions of the brain = more morality, I'm not sure exactly how that would inform morality/ethics. Obviously there's more to it than that, but off the top of my head it makes a good argument for the value of religion. Fear of an omniscient "parent" is probably more salient to a self-interested simpleton in comparison with abstract arguments and metacognitions he lacks the capacity for.
 
https://aeon.co/essays/the-self-does-exist-and-is-amenable-to-scientific-investigation

The answer is that science does all this by rejecting antirealism. In fact, the self does exist. The phenomenal experience of having a self, the feelings of pain and of pleasure, of control, intentionality and agency, of self-governance, of acting according to one’s beliefs and desires, the sense of engaging with the physical world and the social world – all this offers evidence of the existence of the self. Furthermore, empirical research in the mind sciences provides robust reasons to deny antirealism. The self lends itself to scientific explanations and generalisations, and such scientific information can be used to understand disorders of the self, such as depression and schizophrenia, and to develop this self-understanding facilitates one’s ability to live a rich moral life.

Ugh, this kind of hack writing annoys the piss out of me. I'm not even all that interested in the author's theory of a "multitudinous self"--it's fine if that's the proposed solution or working approach. I have no problem with it. But I have a big problem with this black and white notion that the self is either an illusion (it doesn't exist) or it's real, and is evidenced by our experience of selfhood.

First of all, the claim that our experience of selfhood constitutes a self completely elides the structure of argument that critiques the metaphysics of selfhood. It's amateurish and ridiculous. Second of all, it completely dismisses the plethora of interesting questions that this line of inquiry raises: Is selfhood an essence, or an experience (is there a difference)? If there is such a thing as a "real" self, but we can't logically prove it, does that make our experience of selfhood more meaningful than any abstract entity that we call "the self"--and if so, is it even worth discussing what "real" selfhood is? Could we distinguish between real and counterfeit selfhood, if the experience of both is the same? Why the need to reify experience into an internal fabric or substance?

Finally, and most important, why is an antirealist approach hazardous to the notion of human being and experience? Just because people like Dennett argue for an antirealist philosophy of mind (not sure Dennett would use the term "antirealist"), this doesn't mean they see the experience of selfhood as a worthless phenomenon. Dennett himself has developed the notion of the "intentional stance," which he argues is instrumental and imperative for rational human functioning. The dismissal of what we used to think of as "selfhood" doesn't translate into a degradation of human experience.

These kinds of arguments make me think we need a new definition of selfhood and the self. The old debates are getting tiresome.
 
I'm not sure we can escape a Cartesian self. I mean, we can theorize alternatives, but we can't have a conversation with them.

The point isn't whether we can escape it; it's whether we can separate the value of experiencing selfhood in a Cartesian fashion from presuming that that experience correlates to the metaphysical substance that we call "the self." The author of that article believes that the self exists because we experience it as such. This is the error, and it's a flaw in logic that phenomenologists continue to make.

In other news: http://www.rifters.com/crawl/?p=7875

Pastuzyn et al [link broken] of the University of Utah, have just shown that Arc is literally an infection: a tamed, repurposed virus that infected us a few hundred million years ago. Apparently it looks an awful lot like HIV. Pastuzyn et al speculate that Arc “may mediate intercellular signaling to control synaptic function”.

Memory is a virus. Or at least, memory depends on one.

Of course, everyone’s all over this. U of Utah trumpeted the accomplishment with a press release notable for, among other things, describing the most-junior contributor to this 13-author paper as the “senior” author. Newsweek picked up both the torch and the mistake, leading me to wonder if Kastalio Medrano is simply at the sloppy end of the scale or if it’s normal for “Science Writers” in popular magazines to not bother reading the paper they’re reporting on. (I mean, seriously, guys; the author list is right there under the title.) As far as I know I’m the first to quote Burroughs in this context (or to mention that Greg Bear played around a very similar premise in Darwin’s Radio), but when your work gets noticed by The Atlantic you know you’ve arrived.

Me, though, I can’t stop thinking about the fact that something which was once an infection is now such an integral part of our cognitive architecture. I can’t stop wondering what would happen if someone decided to reweaponise it.

The parts are still there, after all. Arc builds its own capsid, loads it up with genetic material, hops from one cell to another. The genes being transported don’t even have to come from Arc:

“If viral RNA is not present, Gag encapsulates host RNA, and any single-stranded nucleic acid longer than 20-30 nt can support capsid assembly … indicating a general propensity to bind abundant RNA.”

The delivery platform’s intact; indeed, the delivery platform is just as essential to its good role as it once was to its evil one. So what happens if you add a payload to that platform that, I dunno, fries intraneuronal machinery somehow?

I’ll tell you. You get a disease that spreads through the very act of thinking. The more you think, the more memories you lay down, the more the disease ravages you. The only way to slow its spread is to think as little as possible; the only way to save your intelligence is not to use it. Your only chance is to become willfully stupid.

fwiw Burroughs wrote that "Language is a virus from outer space."

Most likely true.
 
  • Like
Reactions: Dak
I'm sugar and spice and everything nice, sir. :D

Yes, but Burroughs was writing fiction that was closer to reality than he probably realized--because, you know, he was a smart person. Alex Jones derives reality from the most inventive of fantasies--because he's a dumb person.

Also, language totally is a virus from outer space, if by "virus" we mean a reproductive tendency programmed into our brains, and by "outer space" we mean that we're organisms on a planet in space...