Dakryn's Batshit Theory of the Week

I read that and thought it was really interesting, but didn't share it because I figured there wasn't much to it (I don't feel qualified to say one way or another). That's cool that you think it actually does mesh with Austrian theory.

There is something that Orrell doesn't get quite right though, and I feel it's worth mentioning. It has to do with the association he draws between consciousness in quantum theory and behavioral economics.

I have done no research on quantum anything, and I'm skeptical of its invocation for whatever "wow, cool new thing!" is occasionally on offer. There obviously isn't a perfect relationship to the Austrian School, but there were points of intersection beyond his vague reference to "heterodox theories". I can summarize them as follows:

1. The importance of money in the discussion of economics, particularly the problem of central banking, and how the subject is completely avoided by mainstream economics:

Even stranger, though, is that in answering these basic questions money hardly seems to be mentioned – despite the fact that one would think money is at the heart of the subject. (Isn’t economics about money? Aren’t prices set by using money?) If you look at those textbooks, you will find that, while money is used as a metric, and there is some discussion of basic monetary plumbing, money is not considered an important subject in itself. And both money and the role of the financial sector are usually completely missing from economic models, nor do they get paid lip service. One reason central banks couldn’t predict the banking crisis was because their models didn’t include banks.

Economists, it seems, think about money less than most people do: as Mervyn King, the former governor of the Bank of England, observed in 2001: ‘Most economists hold conversations in which the word “money” hardly appears at all.’ For example, the key question of money-creation by private banks, according to the German economist Richard Werner, has been ‘a virtual taboo for the thousands of researchers of the world’s central banks during the past half century’.

2. Models being based on completely fictional ideas (in addition to ignoring various important real things).

To sum up, the key tenets of mainstream or neoclassical economics – including such things as ‘utility’ or ‘demand curves’ or ‘rational economic man’ – are just made-up inventions, no more real than the crystalline spheres that Medieval astronomers thought suspended the planets. But real things like money are to a remarkable extent ignored.

3. Price discovery

Similarly, money’s use in transactions is a way of attaching a number (the price) to the fuzzy and indeterminate notion of value, and therefore acts as a kind of quantum measurement process. When you sell your house, you don’t know exactly how much it is worth or what it will fetch; the price is revealed only at the time of transaction.

4. The economy being the emergent process of individual transactions by heterogeneous actors with fluctuating (and ordinal) value structures:

So how to define this new, quantum-inspired economics? It is not the science of scarcity, and it certainly isn’t the science of happiness (which is not to say these things aren’t important); rather, it can be defined as the study of transactions that involve money. Instead of assuming that market prices represent the intersection of made-up curves and optimise utility, prices are seen as the emergent result of a measurement procedure. Rather than modelling the economy as a kind of efficient machine, it makes more sense to use methods such as complexity theory and network theory that are suited to the study of living systems, and which as mentioned above are now being adopted in economics. One tool is agent-based models, where the economy emerges indirectly from the actions of heterogeneous individuals who are allowed to interact and influence each other’s behaviour, mirroring in some ways the collective dance of quantum particles. Agent-based models have managed to reproduce for example the characteristic boom-bust nature of housing or stock markets, or the effect of people’s expectations on inflation. Meanwhile, network theory can be used to illustrate processes and reveal vulnerabilities in the complex wirings and entanglements of the financial system.
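The agent-based bit in that last excerpt is easy enough to sketch. Here's a minimal toy model in Python (my own illustration, not anything from Orrell's book): heterogeneous traders carry private, fuzzy valuations, a price only comes into existence when a bid actually meets an ask, and everyone's expectations drift toward the last observed transaction price.

```python
import random

random.seed(1)

class Trader:
    """A heterogeneous agent with its own fuzzy, private valuation."""
    def __init__(self):
        self.valuation = random.uniform(80, 120)   # private sense of what the asset is "worth"

    def adjust(self, observed_price):
        # expectations drift toward prices actually realised in transactions
        self.valuation += 0.3 * (observed_price - self.valuation)
        self.valuation *= random.uniform(0.98, 1.02)  # idiosyncratic noise

traders = [Trader() for _ in range(200)]
last_price = 100.0

for step in range(50):
    buyer, seller = random.sample(traders, 2)
    bid = buyer.valuation * random.uniform(0.95, 1.05)
    ask = seller.valuation * random.uniform(0.95, 1.05)
    if bid >= ask:                       # the price only exists once a transaction occurs
        last_price = (bid + ask) / 2
        for t in traders:
            t.adjust(last_price)         # everyone updates expectations off the observed price
    if step % 10 == 0:
        print(f"step {step:3d}  last transaction price: {last_price:6.2f}")
```

Even something this crude keeps drifting around rather than settling at a single equilibrium number, which is the flavor of the "emergent result of a measurement procedure" language.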

Edit: I agree that behavioral economics so far may be better than more traditional mainstream economics, but only in so far as it turns its view to the actor to some degree. Overall it is oversold.
 
http://nautil.us/issue/56/perspective/antonio-damasio-tells-us-why-pain-is-necessary

Are you saying neural codes or algorithms don’t blend with living systems?

Well, they match very well with things that are high on the scale of the mental operations and behaviors, such as those we require for our conversation. But they don’t match well with the basic systems that organize life, that regulate, for example, the degree of mental energy and excitation or with how you emote and feel. The reason is that the operations of the nervous system responsible for such regulation relies less on synaptic signaling, the one that can be described in terms of zeroes and ones, and far more on non-synaptic messaging, which lends itself less to a rigid all or none operation.

Perhaps more importantly, computers are machines invented by us, made of durable materials. None of those materials has the vulnerability of the cells in our body, all of which are at risk of defective homeostasis, disease, and death. In fact, computers lack most of the characteristics that are key to a living system. A living system is maintained in operation, against all odds, thanks to a complicated mechanism that can fall apart as a result of minimal amounts of malfunction. We are extremely vulnerable creatures. People often forget that. Which is one of the reasons why our culture, or Western cultures in general, are a bit too calm and complacent about the threats to our lives. I think we are becoming less sensitive to the idea that life is what dictates what we should do or not do with ourselves and with others.

.....................

This knowledge gives us a broader picture of who we are and where we are in the history of life on earth. We had modest beginnings, and we have incorporated an incredible amount of living wisdom that comes from as far down as bacteria. There are characteristics of our personal and cultural behavior that can be found in single-cell organisms or in social insects. They clearly do not have the kind of highly developed brains that we have. In some cases, they don’t have any brain at all. But by analyzing this strange order of developments we are confronted with the spectacle of life processes that are complex and rich in spite of their apparent modesty, so complex and rich that they can deliver the high level of behaviors that we normally, quite pretentiously, attribute only to our great human smarts. We should be far more humble. That’s one of my main messages. In general, connecting cultures to the life process makes apparent a link that we have ignored for far too long.
 
Nice. Coincidentally, just sold my copy of Descartes' Error a few days ago (not because I don't like it, but because I've had it for years and it's time someone else got a chance to read it).

I'm confused though--why are the materials used to make computers more durable than human cells? I've gone through four or five laptops since high school due to hardware and/or hard drive malfunctions, but I haven't had to trade in my body. I'm not sure I understand that comment.
 
I'm confused though--why are the materials used to make computers more durable than human cells? I've gone through four or five laptops since high school due to hardware and/or hard drive malfunctions, but I haven't had to trade in my body. I'm not sure I understand that comment.

Just because it's not functioning doesn't mean it's gone (a hard drive can even be read after failure with the right equipment). At the cellular level, and all the way up, when it stops functioning it will decay away rather rapidly (other than bones). Total information loss outside of reproduction.
 
https://www.scientificamerican.com/article/why-people-dislike-really-smart-leaders/

IQ positively correlated with ratings of leader effectiveness, strategy formation, vision and several other characteristics—up to a point. The ratings peaked at an IQ of around 120, which is higher than roughly 80 percent of office workers. Beyond that, the ratings declined.
.........
“To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers,” he says. “The wrong interpretation would be, ‘Don’t hire high-IQ leaders.’”

I had read something similar before, at least suggested if not actually researched (and I'm not sure if it was the thing referenced early in the article): that beyond an IQ of 125-130, political aspirants and other aspiring leaders will have significant difficulty connecting with potential followers, due to the significant cognitive distance. This is the first research I've seen to suggest something similar, and the effect starts even earlier.
 
Just because it's not functioning doesn't mean it's gone (a hard drive can even be read after failure with the right equipment). At the cellular level, and all the way up, when it stops functioning it will decay away rather rapidly (other than bones). Total information loss outside of reproduction.

I see. But what's the point of Damasio invoking our fragility? It seemed that he was saying our fragility is what sets us apart from computers, but I don't see the point in that if what is durable about computers isn't their functionality. Yes, a computer's materials last; landfills are a testament to that. But the landfill is also a testament to the limited functionality these durable materials have for us. The plastic and copper of a computer might last for centuries, but the purpose we design them for typically lasts less than a human life (and in the case of laptops, usually not longer than five years). It's true that plastic sticks in the earth longer than biodegradable matter, but so what if it doesn't work?

I'm not sure why this is part of an argument against treating the brain as a computer. If anything, he should be saying that what makes a human brain different than a computer is the brain's durability.
 
I see. But what's the point of Damasio invoking our fragility? It seemed that he was saying our fragility is what sets us apart from computers, but I don't see the point in that if what is durable about computers isn't their functionality. Yes, a computer's materials last; landfills are a testament to that. But the landfill is also a testament to the limited functionality these durable materials have for us. The plastic and copper of a computer might last for centuries, but the purpose we design them for typically lasts less than a human life (and in the case of laptops, usually not longer than five years). It's true that plastic sticks in the earth longer than biodegradable matter, but so what if it doesn't work?

I'm not sure why this is part of an argument against treating the brain as a computer. If anything, he should be saying that what makes a human brain different than a computer is the brain's durability.

I think there's some presentism here. Individual cells are vulnerable to any number of issues, and the human body is vulnerable to a plethora of things. It is only through the explosion of technology providing adequate food, sanitation, and healthcare for most of the world outside of third-world countries that we no longer have to have 12 kids so that 4 live to see grandchildren, or some such ratio. Cells are at risk even from themselves. Computers generally have two points of failure: old spinning-disc hard drives and power supplies. Power supplies generally last 5-10 years and can be replaced easily. Increasingly common SSDs have no moving parts and can be expected to far outlast the old-style hard drives; even then, HDs are an easy swap-out. We typically don't replace PCs because they don't work per se; we replace them because the technology has made them obsolete. I have an old tablet that still turns on and whatnot just fine. It's just so old it doesn't run current app versions for shit. Laptops get chucked for similar reasons, or because replacement batteries are expensive enough that we'd rather just get a faster model.
 
I've always liked Damasio's theory on the importance of our embodied consciousness--that physical pain is important for developed cognition, that emotional complexity both contributes to and clutters rational thought, and that anatomy informs cognitive behavior. Where I depart from Damasio is in his phenomenological methodology, which he derives from Hubert Dreyfus (the patron saint of A.I. critique). I believe there's a miscommunication between phenomenologists and eliminative materialists. Phenomenologists reject the idea that the brain is like a computer because the brain is a component of the body, whereas computers don't share the same kind of embodiment. I think that eliminative materialists and computationalists would say it's more than just the brain that's like a computer, it's the entire body. Warren McCulloch and Walter Pitts were the originators of the synapses = zeroes and ones model, which treats the brain like a computer. More recent theories would expand this beyond the limited synaptic responses of the brain.

Saying that algorithms aren't amenable to embodied behavior can be restated as saying that there are no algorithms complex enough to capture embodied behavior. Douglas Hofstadter identifies this as a fallacy of argument that he calls "Tesler's Theorem," which basically translates into "A.I. is whatever hasn't been done yet." This is the problem I have with phenomenologists.
 
I've always liked Damasio's theory on the importance of our embodied consciousness--that physical pain is important for developed cognition, that emotional complexity both contributes to and clutters rational thought, and that anatomy informs cognitive behavior. Where I depart from Damasio is in his phenomenological methodology, which he derives from Hubert Dreyfus (the patron saint of A.I. critique). I believe there's a miscommunication between phenomenologists and eliminative materialists. Phenomenologists reject the idea that the brain is like a computer because the brain is a component of the body, whereas computers don't share the same kind of embodiment. I think that eliminative materialists and computationalists would say it's more than just the brain that's like a computer, it's the entire body. Warren McCulloch and Walter Pitts were the originators of the synapses = zeroes and ones model, which treats the brain like a computer. More recent theories would expand this beyond the limited synaptic responses of the brain.

But synaptic transmission isn't even that neatly describable. It's a simplification of the action potential being all or nothing in certain synapses. I think the description of physical bodies as being like computers (or maybe now moving on to algorithms) is the same error-prone tendency which led us to describe the body as a series of pumps or by other mechanistic descriptions. Living organisms are vastly more complicated, and such comparisons only vaguely apply if at all.

Saying that algorithms aren't amenable to embodied behavior can be restated as saying that there are no algorithms complex enough to capture embodied behavior. Douglas Hofstadter identifies this as a fallacy of argument that he calls "Tesler's Theorem," which basically translates into "A.I. is whatever hasn't been done yet." This is the problem I have with phenomenologists.

I don't follow this application of the Didit fallacy. AI = Algorithms, and algorithms are designed to do something, even if the algorithm takes over on the path to that something (or even deviates from chasing that something).
 
But synaptic transmission isn't even that neatly describable. It's a simplification of the action potential being all or nothing in certain synapses. I think the description of physical bodies as being like computers (or maybe now moving on to algorithms) is the same error-prone tendency which led us to describe the body as a series of pumps or by other mechanistic descriptions. Living organisms are vastly more complicated, and such comparisons only vaguely apply if at all.

Computers and humans are both complex, but that's not to say their complexities are the same or even similar. All metaphors are ultimately faulty in that they overlook certain nuances, but they're redeemed in that they elucidate other nuances. Comparing the human body to a computer isn't to say that the body's information can be abstracted and reprogrammed into a different material and stay the same throughout. It certainly wouldn't, because the body is part of the substrate necessary for human information. I think that arguing for consciousness as ultimately programmable is a mistake, because I think that what we see in computers isn't consciousness and never will be.

The description of synaptic firing as binary code is neatly describable if we do so in the following manner: a synapse either fires or it doesn't. It's either a zero or a one. Granted, this says next to nothing about why a synapse is firing, but that's another level of interpretation. 1 + 1 = 2, but simply stating that fact tells us nothing about why someone might be adding those numbers together. The neuronal model of computation doesn't purport to explain the reasons for human behavior, simply that the firing of synapses can be translated into binary code. Walter Pitts published a theorem back in the 1940s proving this, and it's still widely accepted today. It's known as the artificial neuron.
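Just to make concrete what that "artificial neuron" abstraction amounts to, here's roughly the McCulloch-Pitts idea in a few lines of Python (my own sketch, not their original notation): binary inputs, a weighted sum, and a hard threshold, so the unit either "fires" (1) or doesn't (0). Pick the right weights and thresholds and you get logic gates, which is all the model claims -- it says nothing about why anything fires.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: binary inputs, weighted sum, hard threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # it either "fires" or it doesn't

# Logic gates fall out of choosing weights and thresholds:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b))
```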

I don't follow this application of the Didit fallacy. AI = Algorithms, and algorithms are designed to do something, even if the algorithm takes over on the path to that something (or even deviates from chasing that something).

What I mean is that A.I. skeptics continually shift the goalposts when discussing what they see as inherently human qualities that can't be programmed into computers. This is part of the history of A.I. research. First, it was impossible to create calculating machines--"Only humans can do that"--but then we had Babbage's analytical engine and Turing machines. Then it was said that computers couldn't replicate language, which they can now do. Then it was that computers couldn't identify objects, which now they can. Then it was that computers couldn't beat humans at complex games, and now they can. In each case, it's a matter of creating more complex algorithms; and in each case, researchers have been able to accomplish feats that skeptics said were impossible. Phenomenologists tend to privilege human experience simply because it's human experience; and that's fine, it's their prerogative. But it carries with it some predispositions that, I think, inhibit their ability to critique A.I. research.

Now, this isn't to say that it is possible to create a computer that matches human cognition and decision-making, only that the skeptics' argument is to claim "Well, that's not part of what makes humans really human" (i.e. it isn't our language, or our game-playing, etc.). And the skeptics are right that computers will never fully emulate humans, but that's because computers don't have human bodies. Programming sound waves into a computer won't generate an aural experience because computers don't have the hardware to process sound waves as sound. The point of A.I. research isn't to create computers that are conscious, but computers that are intelligent enough to mimic conscious behavior.

The thrill of comparing humans to computers is that it forces us to question whether we are merely mimicking conscious behavior. ;) And there's no way to prove that we aren't, even if it feels like a pointless suggestion.
 
The description of synaptic firing as binary code is neatly describable if we do so in the following manner: a synapse either fires or it doesn't. It's either a zero or a one. Granted, this says next to nothing about why a synapse is firing, but that's another level of interpretation. 1 + 1 = 2, but simply stating that fact tells us nothing about why someone might be adding those numbers together. The neuronal model of computation doesn't purport to explain the reasons for human behavior, simply that the firing of synapses can be translated into binary code. Walter Pitts published a theorem back in the 1940s proving this, and it's still widely accepted today. It's known as the artificial neuron.

Just a side note (not quibbling, I know what you mean): The neuron fires. The synapse is the gap between the dendrite of the receiving neuron and the axon(s) of sending neurons.

I'm not familiar with Pitts's theorem and it's outside of my expertise, but I'd be skeptical of comparisons to neurons based on neuroscience prior to WWII.

What I mean is that A.I. skeptics continually shift the goalposts when discussing what they see as inherently human qualities that can't be programmed into computers. This is part of the history of A.I. research. First, it was impossible to create calculating machines--"Only humans can do that"--but then we had Babbage's analytical engine and Turing machines. Then it was said that computers couldn't replicate language, which they can now do. Then it was that computers couldn't identify objects, which now they can. Then it was that computers couldn't beat humans at complex games, and now they can. In each case, it's a matter of creating more complex algorithms; and in each case, researchers have been able to accomplish feats that skeptics said were impossible. Phenomenologists tend to privilege human experience simply because it's human experience; and that's fine, it's their prerogative. But it carries with it some predispositions that, I think, inhibit their ability to critique A.I. research.

Now, this isn't to say that it is possible to create a computer that matches human cognition and decision-making, only that the skeptics' argument is to claim "Well, that's not part of what makes humans really human" (i.e. it isn't our language, or our game-playing, etc.). And the skeptics are right that computers will never fully emulate humans, but that's because computers don't have human bodies. Programming sound waves into a computer won't generate an aural experience because computers don't have the hardware to process sound waves as sound. The point of A.I. research isn't to create computers that are conscious, but computers that are intelligent enough to mimic conscious behavior.

The thrill of comparing humans to computers is that it forces us to question whether we are merely mimicking conscious behavior. ;) And there's no way to prove that we aren't, even if it feels like a pointless suggestion.

Ok, well I get the moving the goalposts issue (not sure about the Didit fallacy though). This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

As a personal interest by someone who does a lot of gaming, it would be interesting to see how computers do in games which are not completely contained and purely logical. I'd expect them to do better than humans sooner or later, but it would be interesting to see if they show a similar pattern of learning and/or move-making as they have shown in chess or Go - that is, eventually winning by methods considered completely unconventional.
 
Just a side note (not quibbling, I know what you mean): The neuron fires. The synapse is the gap between the dendrite of the receiving neuron and the axon(s) of sending neurons.

Right, yes. I'm just working from the phrase "synaptic firing," which I've read in the discourse.

I'm not familiar with Pitts's theorem and it's outside of my expertise, but I'd be skeptical of comparisons to neurons based on neuroscience prior to WWII.

I get that, but it's still a widely-accepted model. There have been books published on it as recently as 2011. I've not read any substantial rejections of the model itself.

Ok, well I get the moving the goalposts issue (not sure about the Didit fallacy though). This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

I never intended to invoke the "Didit fallacy," so apologies if my wording implied that. My issue w/ A.I. skeptics has always been the goalposts fallacy.

EDIT: when Hofstadter writes the fallacy as "A.I. is whatever hasn't been done yet," he's not suggesting the intervention of an unknown force. He's simply pointing out that A.I. skeptics constantly displace what we would identify as A.I. into the bounds of the unaccomplished in order to support their argument that A.I. is impossible.

As a personal interest by someone who does a lot of gaming, it would be interesting to see how computers do in games which are not completely contained and purely logical. I'd expect them to do better than humans sooner or later, but it would be interesting to see if they show a similar pattern of learning and/or move-making as they have shown in chess or Go - that is, eventually winning by methods considered completely unconventional.

Agreed. Although electronic games work on code too, so I assume it would simply be a matter of programming a computer w/ said code? The edge of non-electronic games is that they rely on a set of rules dictated by social convention and regularity (e.g. if you try and move a King two spaces on a chessboard, an opponent or referee has to stop you; the game itself won't prohibit it).
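To put the convention-vs-code point in concrete terms: on a physical board nothing stops your hand, whereas in an electronic game the "referee" is just a validation function that rejects the move before it ever happens. A throwaway sketch (hypothetical, and ignoring special cases like castling and check):

```python
def king_move_is_legal(src, dst):
    """Squares are (file, rank) pairs, e.g. (4, 0) for e1.
    Ignores castling, captures, and check -- just the basic one-square rule."""
    file_delta = abs(dst[0] - src[0])
    rank_delta = abs(dst[1] - src[1])
    return max(file_delta, rank_delta) == 1  # a king moves exactly one square in any direction

print(king_move_is_legal((4, 0), (5, 1)))  # True:  e1 to f2, one square diagonally
print(king_move_is_legal((4, 0), (6, 0)))  # False: e1 to g1, two squares -- no human referee needed
```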
 
Agreed. Although electronic games work on code too, so I assume it would simply be a matter of programming a computer w/ said code? The edge of non-electronic games is that they rely on a set of rules dictated by social convention and regularity (e.g. if you try and move a King two spaces on a chessboard, an opponent or referee has to stop you; the game itself won't prohibit it).

Sorry, I meant physical games which involve some percentage of chance. At the extreme, well-known end, I could point to the game of Risk. There are some strategies, but at least 50% is left to luck by my estimation. I'm less interested in the win/loss by AI than in how they begin to approach the problem.
 
This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

Just one more comment here--yes, I think I'm in agreement with Damasio on this point.

The controversy over this matter goes back to first-wave cybernetics, whose patron saints were figures like Norbert Wiener and Claude Shannon. Wiener and Shannon believed in the decontextualization, or disembodiment, of information. In other words, they believed that the information that "makes up" a human being could be extracted and reprogrammed in a different interface, and that this could be done without losing any information (they just didn't have the technology, they claimed). They thought that someday we would be able to "transmit" human beings, a la Star Trek.

This was a reigning perspective in cybernetics until about 1960. After 1960, cybernetics shifted in a direction more concerned with materiality and the nature of the information-media relation, or the placement and operation of the interface (this included contradictions of observation). As far as cybernetics in the humanities goes, the precedent set by N. Katherine Hayles is the one I follow: that information is an intrinsically embodied substance, that analyzing/calculating information is possible, but transplanting it yields an entirely new organism. In short, cybernetic entities are entities whose informatic makeup is constantly interacting with the hardware of the body and the material of the environment (in some manner).

If there's a key to cracking consciousness, I don't think it's to be found in neural information. A.I. isn't really about creating conscious cybernetic entities, but about creating massively intelligent cybernetic entities--i.e. entities that are capable of large-scale pattern-matching. Even if we eventually create an A.I. that is so complex that it can mimic consciousness, all this will tell us is that even we may be mimicking consciousness. It won't reveal any secrets.
 
If there's a key to cracking consciousness, I don't think it's to be found in neural information. A.I. isn't really about creating conscious cybernetic entities, but about creating massively intelligent cybernetic entities--i.e. entities that are capable of large-scale pattern-matching. Even if we eventually create an A.I. that is so complex that it can mimic consciousness, all this will tell us is that even we may be mimicking consciousness. It won't reveal any secrets.

I meant to respond earlier to the comment about even we mimicking consciousness and then skipped it. What would it mean to mimic something which, as far as we know, exists nowhere else or in nothing else? To mimic requires an other to mimic. Berkeley's Permanent Perceiver laughs.

AI is indeed, currently, simply fast and broad mathematical operations engaged in so much pattern matching. Humans engage in pattern matching too to varying degrees (or at least, successfully to varying degrees), but I'm not sure I'd want to reduce consciousness to only pattern matching. Speaking of AI:

https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what-journalists-want-you-to-think-it-means/

The media is misleading people. When an ordinary person uses the term “biased,” they think this means that incorrect decisions are made — that a lender systematically refuses loans to blacks who would otherwise repay them. When the media uses the term “bias”, they mean something very different — a lender systematically failing to issue loans to black people regardless of whether or not they would pay them back.
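That distinction is easier to see with a toy calculation (entirely made-up records, just to separate the two senses of "bias" the article contrasts): one check asks how often applicants who would actually repay get approved in each group, the other asks what each group's raw approval rate is, regardless of repayment.

```python
# Toy applicant records: (group, would_repay, approved) -- fabricated for illustration only.
applicants = [
    ("A", True,  True), ("A", True,  True), ("A", True,  False), ("A", False, False),
    ("B", True,  True), ("B", True,  False), ("B", True,  False), ("B", False, False),
]

def approval_rate(records, condition):
    """Share of approved applicants among those matching the condition."""
    selected = [r for r in records if condition(r)]
    return sum(1 for r in selected if r[2]) / len(selected)

for g in ("A", "B"):
    group = [r for r in applicants if r[0] == g]
    # Sense 1 ("incorrect decisions"): how often are would-be repayers approved?
    repayer_approval = approval_rate(group, lambda r: r[1])
    # Sense 2 ("different outcomes"): how often is the group approved overall?
    overall_approval = approval_rate(group, lambda r: True)
    print(f"group {g}: approval among repayers {repayer_approval:.2f}, overall approval {overall_approval:.2f}")
```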
 
Sort of separate if not completely:

http://www.psych.nyu.edu/vanbavel/lab/documents/VanBavel.etal.2015COP.pdf

Moral cognition is also dependent on brain structures directly involved in self and other-related processing. The mPFC is engaged when thinking about the self [50], as well as others’ mental states [51], suggesting that the perception of self and other can be intimately entwined [52,53]. The temporoparietal junction (TPJ) also emerged early on as a key region necessary for decoding social cues [54] and specifically for representing the intentions [55,56] and emotions of others [57]. More recent work has found that these other-oriented intentions and motivations have downstream effects on moral behavior [58]. Inferences regarding moral character dominate our impressions of others [59,60,61] and impact how moral phenomena are perceived. In particular, immoral actions are highly diagnostic and heavily weighted in moral evaluations of others [62,63,64]. Most prevailing theories explain this negativity bias in terms of the statistical extremity or infrequency of immoral actions, compared to positive moral actions (e.g. [65–68]). However, people are not static — they change constantly. Both our impressions of others and our own self-concepts are dynamically updated in light of new information. A complete model of moral cognition must capture how moral valuations are shaped by expectations, both at the level of the environment (e.g., social norms) and the individual (e.g., knowledge regarding a specific person), and continuously updated ...

It appears possible that the skew of prison populations towards the less intelligent is linked to intelligence in ways beyond simply being "too dumb to realize plan X won't work". The different parts of the prefrontal cortex are responsible for a variety of higher-order functions (like behavioral inhibition and planning), and this likely includes aspects of morality as well.
 
I meant to respond earlier to the comment about even we mimicking consciousness and then skipped it. What would it mean to mimic something which, as far as we know, exists nowhere else or in nothing else? To mimic requires an other to mimic. Berkeley's Permanent Perceiver laughs.

Haha, well, we mimic each other. I think we've had a variant of this conversation before, so we don't have to run through it again; but suffice it to say that I'm a skeptic toward the totally internalist explanation of consciousness, i.e. that consciousness came about and gave birth to other human attributes like language, empathy, etc. I believe that consciousness is concomitant with other aspects of social existence, including representational figures. Extrapolating from this premise, it's possible that consciousness developed as a complex process whereby human beings gradually internalized a model of social interaction, basically mimicking social form as a framework of personal subjectivity. A crucial component of consciousness is that we can imagine consciousness in others, we can empathize with others' situations.

At this point, humans would simply continue to mimic the consciousness they encounter and intuit in others. The theory of mirror neurons supports a version of this interpretation, although I realize it's very controversial.

AI is indeed, currently, simply fast and broad mathematical operations engaged in so much pattern matching. Humans engage in pattern matching too to varying degrees (or at least, successfully to varying degrees), but I'm not sure I'd want to reduce consciousness to only pattern matching.

Most people don't, it's true. And as of right now, it's reductive to say that consciousness is nothing more than pattern-matching. If that is true, then we should hypothetically be able to come up with algorithms to reproduce it. We haven't.

The problem is that we also can't prove that consciousness isn't just pattern-matching. Obviously the problem extends beyond the context, as this would demand that we prove a negative. The anxiety of A.I. research is simply that it forces us to confront these questions.
 
At this point, humans would simply continue to mimic the consciousness they encounter and intuit in others. The theory of mirror neurons supports a version of this interpretation, although I realize it's very controversial.

Well it's not impossible, but I'd be curious as to how it arose so uniformly.

Most people don't, it's true. And as of right now, it's reductive to say that consciousness is nothing more than pattern-matching. If that is true, then we should hypothetically be able to come up with algorithms to reproduce it. We haven't.

The problem is that we also can't prove that consciousness isn't just pattern-matching. Obviously the problem extends beyond the context, as this would demand that we prove a negative. The anxiety of A.I. research is simply that it forces us to confront these questions.

Even if it is just pattern matching, we don't understand the original or the reciprocal inputs or the objectives.
 
Well it's not impossible, but I'd be curious as to how it arose so uniformly.

Ha, so would I.

Even if it is just pattern matching, we don't understand the original or the reciprocal inputs or the objectives.

Correct.
Sort of separate if not completely:

http://www.psych.nyu.edu/vanbavel/lab/documents/VanBavel.etal.2015COP.pdf

It appears possible that the skew of prison populations towards the less intelligent is linked to intelligence in ways beyond simply being "too dumb to realize plan X won't work". The different parts of the prefrontal cortex are responsible for a variety of higher-order functions (like behavioral inhibition and planning), and this likely includes aspects of morality as well.

More links as in, those who commit crimes tend to lack the mental capacity for moral consideration of the impact on others? If so, I think that makes perfect sense. Didn't have time to read the whole article.

If that is the case, it seems a more practical alternative to the categorical imperative (which doesn't really care about the effects of immoral action, only whether the maxim behind it could be universalized).