Dakryn's Batshit Theory of the Week

https://samzdat.com/2017/12/19/euthyphro-dilemmas-as-mathematical-objects/

The first series had a bunch of stuff we can agree on, like “Politics are things people do, sometimes with ballots and other times with guns.” This one is going to have a lot we don’t agree on, like “Actually, it makes perfect sense for Heidegger to talk about the world worlding, really clarified the passage for me.”

The reason it makes sense is math.

The “worlding” school of philosophy, i.e. continental philosophy, where “continent”=France and “philosophy”=[tasteless joke at the expense of the dead], is generally considered to be the one that tacitly endorses neo-Kipling verses like “The scientific method is a social construct, foisted on hapless Natives by monopoly men in Pith helmets, haven’t you read post-colonial theory?” Dazzlingly incoherent, and also why it’s going to sound odd when I say that they’re part of a tradition that was all about the problem of saving math as a reliable thing.

They’re responding to Heidegger, who is responding to Husserl, both of whom are dealing with Kant’s framework, and Kant’s framework doesn’t make any sense until you realize that he needs the entire thing to address one central issue: why does math work with the physical world?

As shocking as this sounds to people who dismiss continental philosophy as inherently anti-rational, I guarantee it’s more shocking to people heavily invested in post-Heideggerian Comparative Literature departments.

@Einherjar86 I'm genuinely interested in your take on this series as it develops, and I a priori promise not to argue about it :D
 
Huh.

Well, some preliminary comments to demonstrate my curiosity yet also my suspicion, in response to some of his comments:

They’re responding to Heidegger, who is responding to Husserl, both of whom are dealing with Kant’s framework, and Kant’s framework doesn’t make any sense until you realize that he needs the entire thing to address one central issue: why does math work with the physical world?

As shocking as this sounds to people who dismiss continental philosophy as inherently anti-rational, I guarantee it’s more shocking to people heavily invested in post-Heideggerian Comparative Literature departments.

I'm not sure why he assumes it's so shocking. I actually think it makes a lot of sense.

For what it's worth, I fucking love reading about math even though I have little understanding of the specifics. I'm fascinated by early-twentieth-century mathematics and the fallout of logical positivism, as Wittgenstein's Tractatus all but spelled out its doom. I think the mathematical quandaries that arose from the work of David Hilbert, Kurt Gödel, and Alan Turing are some of the most interesting and substantial breakthroughs in the history of modern science. Why do "post-Heideggerian Comparative Lit departments" need to be shocked or perturbed by this, or even doubt the relevance of such discoveries?

Furthermore, I'm not over-generalizing by projecting my own fascination onto the majority of humanities scholars. If anyone bothers to actually talk with humanities scholars about mathematics, they'll find at worst indifference, and at best affirmation (my dissertation advisor has an undergraduate degree in mathematics, in fact). Our current Buzzfeed golden boy, Ted Chiang, wrote that a "proof that mathematics is inconsistent, and that all its wondrous beauty was just an illusion, would, it seemed to me, be one of the worst things you could ever learn." Deleuze, Derrida, and Lacan were all interested in mathematics, and not in the notion that it was a "social construct" (I get really tired of this being the go-to criticism of the humanities, by the way).

Deleuze and Guattari write that it "was a decisive event when the mathematician Riemann uprooted the multiple from its predicate state and made it a noun, 'multiplicity.' It marked the end of dialectics and the beginning of a typology and topology of multiplicities."

For Derrida and Lacan, mathematics issued a challenge analogous to the one stated in the blog, i.e. the Kantian dilemma of analytic vs. synthetic knowledge. The analogous challenge has to do with language--or more specifically, the subject's relation to the letter:

Burgoyne said:
Already in the first decade of his work, Lacan was working with structure, both explicitly and implicitly: explicitly with the structures of psychoanalysis and psychoanalytical psychiatry, as well as explicitly with the structures of language, and implicitly with the structures of mathematics. Even in this period of Lacan's work, he was committed to the necessity of producing an analysis of the structures of language.

In other words, Lacan proceeded according to his own brand of positivism; but he went on to incorporate the post-Hilbert rupture of mathematics, what Hilbert called the Entscheidungsproblem, which in turn led to the halting problem and Gödelian incompleteness. For Derrida, mathematics manifests in the uncertainty relation between the spectator and a work of art--a framing problem, or parergon in Derrida's terminology. Mathematicians were fascinated by the question of how to verify solvability; continental philosophers were fascinated by the question of how to verify meaning. It's no coincidence that mathematical language and models found their way into continental thought, since both fields encountered the same dilemma (which yes, has its roots in Kant).

Additionally, Alain Badiou's entire philosophy is built on a reading of Georg Cantor's set theory, and premised on the notion that "mathematics is ontology":

Badiou said:
The entire history of rational thought appeared to me to be illuminated once one assumed the hypothesis that mathematics, far from being a game without object, draws the exceptional severity of its law from being bound to support the discourse of ontology. In a reversal of the Kantian question, it was no longer a matter of asking: 'How is pure mathematics possible?' and responding: thanks to a transcendental subject. Rather: pure mathematics being the science of being, how is a subject possible?

And finally, I'm working on a paper that discusses the relationship between early-twentieth-century mathematics and modernist writing (with which of course the continentals were obsessed). I'm going ahead and providing an excerpt (the paper itself is far from complete):

This essay asks what it means to ascribe a halting problem to modernism, and how modernist literature affords us the opportunity to conceptualize such a claim. A precedent for modernism’s halting problem emerges in 1922, in Ludwig Wittgenstein’s inimitable Tractatus Logico-Philosophicus—a text that weakens the foundations of formal logic even as it seeks to edify them. As the infectious lure of the Tractatus infiltrates the modern bloodstream, analogous responses begin to appear in both literature and the sciences. Not quite so deconstructive as Derrida’s parergon, modernist literature simultaneously exhibits a faith in, and skepticism toward, the stability of the aesthetic frame. A parallel skepticism emerges in mathematics and computer sciences as their practitioners begin to distinguish, in a manner similar to Wittgenstein’s Tractatus, between expressing isolated facts in the form of individual theorems and algorithmically determining the total formal system in which those theorems appear: “To be able to represent the logical form,” Wittgenstein writes, “we should have to be able to put ourselves with the propositions outside logic, that is outside the world.” With the publication of the Tractatus, modernist writers and scientists alike find themselves confronted with the implacable presence of the Outside. For mathematicians, this implacable presence rears its head in the figure of the halting problem; for modernist writers, it emerges in their ambivalence toward aesthetic form.

The halting problem initiated a fascination with how exactly to handle the question of determining solvability without actually doing any solving. The operation demands a language in which one can talk about proofs, and a set of formal statements about the provability of statements. The recursion of this demand opens the door to Gödel's famous incompleteness theorems, which establish that within any sufficiently powerful formal system there are true statements that cannot be proven true.[ii] Douglas R. Hofstadter refers to this as "Gödel's trick," and describes it as "like trying to quote an entire sentence inside itself."[iii] The Church-Turing thesis refined Gödel's conceptual logic about five years later, in 1936, essentially proving that no program can be written in a language that can perform solvability tests on all other programs also written in that language.[iv] These programs cannot define their limits in their own vernacular. Even a program that attempted to make statements about programs in a lower-level language cannot guarantee that all its statements about those lower-level programs would be true, since in order to do so it would need to address its own language, thereby necessitating another level of metalinguistic discourse.[v]

This is all well and good for mathematics, but what of modernism? Logical form is not aesthetic form, and we would not do well to confuse the two. Works of art are not arguments; they do not persuade us so much as seduce us. The emphasis in this paper lies not in conceiving of modernism as a logical system, but in conceiving of modernism as an aesthetic expression of the logical upsets in the works of Gödel, Turing, and Wittgenstein. Modernism's halting problem emerges in the parallel between the epistemological drive for complete knowledge and the ontological drive for complete meaning—the full wealth of occupying our human experience.

Wittgenstein, Tractatus Logico-Philosophicus, 1922, trans. C.K. Ogden, New York: Barnes & Noble, 2003, 4.12. All citations to the Tractatus refer to aphorisms.

[ii] Fortnow gives an example in the form of an adaptation of the liar's paradox: "There is no proof that this sentence is true" (111). If the sentence is false, then there is a proof that it is true, in which case the sentence would be true, but then would have no proof of being true. The insidious effect of this sentence is that it persuades its readers to equate provability with truth: "Gödel also shows that we cannot prove that 'everything we can prove is true is true' unless we can also prove false things" (111). Put another way, Gödel's incompleteness theorems reveal a glaring aporia in the recursive functions of formal systems. When looking for proofs of solvability, practitioners will inevitably encounter true statements whose solvability cannot be guaranteed by the formal language available to them.

[iii] Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, 1979, New York: Basic Books, 1999, 426.

[iv] Hofstadter, 429. See also C.A.R. Hoare and D.C.S. Allison, “Incomputability,” Computing Surveys 4.3 (1972), 178: “Any language containing conditionals and recursive functions definitions which is powerful enough to program its own interpreter cannot be used to program its own ‘terminates’ function.”

[v] The suggested overlap between Wittgenstein and Turing is not accidental, nor is it original. For an exceptional and detailed account of Wittgenstein’s influence on Turing (and vice versa), see Juliet Floyd, “Chains of Life: Turing, Lebensform, and the Emergence of Wittgenstein’s Later Style,” Nordic Wittgenstein Review 5.2 (2016): 7-89. Floyd suggests that Turing’s and Wittgenstein’s mature works challenge the formalization of metalevel systems that can account for the complexity of all possible formal statements, phrasing this challenge as an embrace of infinite recombination—that is, as a halting problem: “with Turing’s analysis in hand, [Wittgenstein] now realized that he could—or should—continually detach, move, rearrange, amalgamate and reconfigure motifs and pieces of procedure and thought and conversation (and its ending) within one another without end” (17).
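
(An aside for anyone who wants the mechanics behind the gloss in the excerpt: the halting problem rests on a simple diagonal argument, which can be sketched in a few lines of Python. The halts and paradox functions below are my own illustrative names for the standard textbook construction--and halts is hypothetical, since Turing's whole point is that no such function can exist.)

# Minimal sketch of the diagonal argument behind the halting problem.
# Assume, hypothetically, that halts(program, data) could decide whether
# program(data) ever terminates. Then the following program is contradictory.

def halts(program, data):
    """Hypothetical total decider -- Turing (1936) shows none can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the decider predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop, so halt immediately

# Feeding paradox to itself forces the contradiction:
# paradox(paradox) halts  <=>  halts(paradox, paradox) is False
#                         <=>  paradox(paradox) does not halt.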

Given all this, I find samzdat's following comment misguided:

The continentals took Kant seriously, continued his tradition, at some point forgot that their school only makes sense in light of Kant, proclaimed math oppressive and/or not real.

I don't think any notable continental philosopher has forgotten about Kant's influence or the influence of mathematical thought.

Anyway, it's my guess he'll turn eventually to the likes of Hilbert, Gödel, Turing, etc., since these guys basically inaugurated the epistemological crisis of mathematics in the twentieth century.
 
Scott Alexander reviews Madness and Civilization. Went about how I would expect and more or less aligned with my impression, although Scott put more effort into checking into and documenting the questionable assertions:

http://slatestarcodex.com/2018/01/04/book-review-madness-and-civilization/

Everything above is a really superficial reading of Madness And Civilization and probably misses the whole point of the book.

This point is something that alternately seems postmodern or kabbalistic or – for lack of a better term – insane. It’s not just saying that This Historical Period treated the mad This Way, but That Historical Period treated them That Way. It’s trying to peek beneath the hood (or the veil?) to find the zeitgeist, the animating spirit of the European continent that led them to do things as they did and which transformed one schema into another. This is rarely anything sensible, like “the economy improved” or “there was a revolution”. More often it’s some kind of deep subconscious beliefs about the meaning of humanity or freedom or symbolism or something. If Europe was one guy, this book would be Foucault performing Freudian dream analysis on that guy.

For example, the Europeans didn’t put their madmen on Ships Of Fools just because it was a convenient way to get rid of them, but also because:
........
Let’s appreciate a few things about this passage. First, it’s phenomenal writing. I apologize for thinking all Continental philosophy had to be badly-written; in retrospect Nietzsche should have cured me of this delusion.

But second, it’s totally bonkers. Like, forget the fact that there weren’t any real Ships Of Fools and Foucault is analyzing a literary motif. Forget that the literary motif actually comes from a metaphor by Plato which is about something else. Even if the rivers of Europe were choked with such Ships, this is just a phenomenally unproductive way to think about anything. This is the kind of thought process where we drill for oil because we are symbolically sexually penetrating Mother Earth (insert kabbalistic analysis of the word “fracking” here).
..........................
This is the thrust of the last chapter, and Foucault ties all of this together into a case that all of the reformers were just jerks, and they sought more humane treatment for the mentally ill out of a desire to judge and dominate them. This is fantastically contrarian. Foucault does not give an inch to the position that maybe there was something good and wholesome about the desire to rescue people from being crammed by the dozen in rat-infested cells with all of their limbs chained together. He doesn’t specifically say the rat-infested cells were better, but he sure hints at it pretty hard.

I always like contrarian takes. But I can’t make sense of what Foucault is trying to do here. And also, some of the same sites that debunk the Ship Of Fools thing say that actually the Renaissance was super-cruel to mad people, and Foucault’s picture of them as tolerant and understanding is composed entirely of cherry-picking and imagination.

The best I can do here is say that Foucault is too much of an Idealist where I am a Materialist. I measure humanitarian victories in prisoners freed and rat bites averted. He seems to measure them by how the dream sequences of Personified Europe are treating the dialogue between Madness and Reason. Probably there's a perspective in which this makes sense, but this book didn't manage to teach me to appreciate it.
.............
Either everyone in the past is a total liar (given this effect, probably true), Foucault himself is a total liar (given the Ship of Fools thing, probably true), or we need even more constant vigilance than we’ve been applying thus far (alas, probably also true).
 
SSC said:
And so, Foucault tells us, in the fifteenth century there is a sudden emergence of a complex of artistic and philosophical themes linking madmen, the sea, and the terrible mysteries of the world. These culminate in the Ship Of Fools:

Renaissance men developed a delightful, yet horrible way of dealing with their mad denizens: they were put on a ship and entrusted to mariners because folly, water, and sea, as everyone then knew, had an affinity for each other. Thus, "Ships of Fools" crisscrossed the seas and canals of Europe with their comic and pathetic cargo of souls. Some of them found pleasure and even a cure in the changing surroundings, in the isolation of being cast off, while others withdrew further, became worse, or died alone and away from their families. The cities and villages which had thus rid themselves of their crazed and crazy, could now take pleasure in watching the exciting sideshow when a ship full of foreign lunatics would dock at their harbors.

This was such a great piece of historical trivia that I was shocked I'd never heard it before. Some quick research revealed the reason: it is completely, 100% false. Apparently Foucault looked at an allegorical painting by Hieronymus Bosch, decided it definitely existed in real life, and concocted the rest from his imagination.

It wasn't "completely, 100% false." Alexander is misreading what Foucault is saying here. Foucault isn't claiming that a legitimate cultural phenomenon called the "Ship of Fools" appeared in the fifteenth century, ferrying the insane about to keep them away from towns and cities. He borrows the term "Ship of Fools," or Narrenschiff, from literature, and he acknowledges this:

Foucault said:
This Narrenschiff was clearly a literary invention, and was probably borrowed from the ancient cycle of the Argonauts [...]. Such ships were a literary commonplace, with a crew of imaginary heroes, moral models or carefully defined social types who set out on a great symbolic voyage that brought them, if not fortune, at the very least, the figure of their destiny or of their truth.

What Foucault is saying is that there are documented cases of the insane being forcibly removed from land and placed on boats or ships. This wasn't an institutionalized practice, it was just something that happened. Foucault is appealing to the Narrenschiff for rhetorical purposes, which is why he puts the term in quotation marks, i.e. "Ships of Fools" crisscrossed the seas...

No one actually referred to these as "ships of fools" during this time, and there was no established institution for placing the insane on boats. It just happened occasionally, and Foucault does provide references for this in his endnotes. One of those quite common instances of life imitating art...

Of course, I'm working from the more expansive History of Madness, of which Madness and Civilization is the truncated version. So maybe he made some cuts thinking people would understand.
 
It wasn't "completely, 100% false." Alexander is misreading what Foucault is saying here. Foucault isn't claiming that a legitimate cultural phenomenon called the "Ship of Fools" appeared in the fifteenth century, ferrying the insane about to keep them away from towns and cities. He borrows the term "Ship of Fools," or Narrenschiff, from literature, and he acknowledges this:


What Foucault is saying is that there are documented cases of the insane being forcibly removed from land and placed on boats or ships. This wasn't an institutionalized practice, it was just something that happened. Foucault is appealing to the Narrenschiff for rhetorical purposes, which is why he puts the term in quotation marks, i.e. "Ships of Fools" crisscrossed the seas...

No one actually referred to these as "ships of fools" during this time, and there was no established institution for placing the insane on boats. It just happened occasionally, and Foucault does provide references for this in his endnotes. One of those quite common instances of life imitating art...

Of course, I'm working from the more expansive History of Madness, of which Madness and Civilization is the truncated version. So maybe he made some cuts thinking people would understand.

I'll assume you are right about this, since you have more familiarity with Foucault broadly than either Alexander or I do. But I do think this probably underscores the other criticisms even more. Most charitably, that Madness and Civilization isn't very useful as a review of the history of madness and civilization.
 
I'll assume you are right about this, since you have more familiarity with Foucault broadly than either Alexander or I do. But I do think this probably underscores the other criticisms even more. Most charitably, that Madness and Civilization isn't very useful as a review of the history of madness and civilization.

It's probably not useful as a review or historical study in the more traditional Western sense (i.e. a record of historical events that comprise a particular moment or time). It is useful as a study of how the treatment of those deemed insane reflects underlying assumptions about "madness" and gives rise to a general meaning of madness that might be attributed to a particular time period. This seems to be what Alexander finds suspicious, but Foucault is simply placing actual historical incidents in conversation with each other and noticing similarities and differences, and deriving from this a meaning. The standard criticism is that he doesn't include enough historical references to substantiate his point; but he's trying to track the evolution of the meaning(s) of madness. He's moving from the fifteenth to the nineteenth centuries. That's an insane (pun intended) amount of time to cover in one book. Even I'll admit that Foucault cherry-picks, but that's not because he's being deceptive. He's looking for extreme ways in which the deemed-insane were treated and extrapolating from these incidents.

Alexander also acknowledges at one point that Foucault seems to be more interested in institutionalization than insanity, and this is a fair assessment. After all, Foucault would go on to publish a book about institutionalization, Discipline and Punish: The Birth of the Prison.

Ultimately, perhaps even more enlightening about Foucault's treatment of insanity and its potential drawbacks is the exchange between him and Derrida, who also found fault with Foucault's study. Derrida initiated with "Cogito and the History of Madness," and Foucault rebutted with "My Body, This Paper, This Fire."
 
I read that and thought it was really interesting, but didn't share it because I figured there wasn't much to it (I don't feel qualified to say one way or another). That's cool that you think it actually does mesh with Austrian theory.

There is something that Orrell doesn't get quite right though, and I feel it's worth mentioning. It has to do with his association of consciousness in quantum theory and behavioral economics. He writes:

According to the standard ‘Copenhagen interpretation’, a particle such as an electron is described by a mathematical wave function, whose amplitude at any point describes the probability of finding the electron at that location. This wave function ‘collapses’ to a certain value during the measurement process. No one knows how this collapse occurs, but a conscious observer is usually assumed to be involved, which seemed to undercut the idea of physics as a purely objective science.

I'm sure Orrell is taking shortcuts for the sake of space and directness, but his description here is misleading. Heisenberg's uncertainty relation actually doesn't necessitate the presence of a conscious observer. It's true that interaction with a conscious observer yields one scenario of quantum uncertainty, but research since Heisenberg's original thesis suggests that the uncertainty inheres in the relation between physical objects. That is, the mere physicality or materiality of the universe itself engenders the uncertainty. It therefore follows that human observers, themselves being physical objects, also produce this uncertainty.
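
To put the same point in textbook notation (this is the standard Robertson relation, not anything from Orrell's piece): the general uncertainty bound follows from the operator algebra alone, and no observer appears anywhere in it. For position and momentum, whose operators satisfy \([\hat{x},\hat{p}] = i\hbar\),

\[
\sigma_A \, \sigma_B \;\ge\; \frac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|
\qquad\Longrightarrow\qquad
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}.
\]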

This is basically the difference between the uncertainty principle and the observer effect, the latter of which does necessitate some degree of conscious observation.
 
I read that and thought it was really interesting, but didn't share it because I figured there wasn't much to it (I don't feel qualified to say one way or another). That's cool that you think it actually does mesh with Austrian theory.

There is something that Orrell doesn't get quite right though, and I feel it's worth mentioning. It has to do with his association of consciousness in quantum theory and behavioral economics.

I have done no research on quantum anything, and I'm skeptical of its invocation for whatever "wow cool new thing!" is occasionally offered. There obviously isn't a perfect relationship to the Austrian School, but there were points of intersection beyond his vague reference to "heterodox theories". I can summarize them as follows:

1. The importance of money in the discussion of economics, particularly the problem of central banking, and how the subject is completely avoided by mainstream economics:

Even stranger, though, is that in answering these basic questions money hardly seems to be mentioned – despite the fact that one would think money is at the heart of the subject. (Isn’t economics about money? Aren’t prices set by using money?) If you look at those textbooks, you will find that, while money is used as a metric, and there is some discussion of basic monetary plumbing, money is not considered an important subject in itself. And both money and the role of the financial sector are usually completely missing from economic models, nor do they get paid lip service. One reason central banks couldn’t predict the banking crisis was because their models didn’t include banks.

Economists, it seems, think about money less than most people do: as Mervyn King, the former governor of the Bank of England, observed in 2001: ‘Most economists hold conversations in which the word “money” hardly appears at all.’ For example, the key question of money-creation by private banks, according to the German economist Richard Werner, has been ‘a virtual taboo for the thousands of researchers of the world’s central banks during the past half century’.

2. Models being based on completely fictional ideas (in addition to ignoring various important real things).

To sum up, the key tenets of mainstream or neoclassical economics – including such things as ‘utility’ or ‘demand curves’ or ‘rational economic man’ – are just made-up inventions, no more real than the crystalline spheres that Medieval astronomers thought suspended the planets. But real things like money are to a remarkable extent ignored.

3. Price discovery

Similarly, money’s use in transactions is a way of attaching a number (the price) to the fuzzy and indeterminate notion of value, and therefore acts as a kind of quantum measurement process. When you sell your house, you don’t know exactly how much it is worth or what it will fetch; the price is revealed only at the time of transaction.

4. The economy being the emergent process of individual transactions by heterogeneous actors with fluctuating (and ordinal) value structures:

So how to define this new, quantum-inspired economics? It is not the science of scarcity, and it certainly isn’t the science of happiness (which is not to say these things aren’t important); rather, it can be defined as the study of transactions that involve money. Instead of assuming that market prices represent the intersection of made-up curves and optimise utility, prices are seen as the emergent result of a measurement procedure. Rather than modelling the economy as a kind of efficient machine, it makes more sense to use methods such as complexity theory and network theory that are suited to the study of living systems, and which as mentioned above are now being adopted in economics. One tool is agent-based models, where the economy emerges indirectly from the actions of heterogeneous individuals who are allowed to interact and influence each other’s behaviour, mirroring in some ways the collective dance of quantum particles. Agent-based models have managed to reproduce for example the characteristic boom-bust nature of housing or stock markets, or the effect of people’s expectations on inflation. Meanwhile, network theory can be used to illustrate processes and reveal vulnerabilities in the complex wirings and entanglements of the financial system.
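
To make "emergent from heterogeneous agents" concrete, here's a toy sketch--not Orrell's model, and every name and number in it is invented for illustration--in which a price emerges from trend-followers and value-bettors interacting, rather than from a postulated demand curve:

import random

# Toy agent-based sketch (illustrative only; all parameters are made up):
# heterogeneous agents hold different expectations, and the "price" emerges
# from their aggregate demand at each step.

random.seed(1)

N_AGENTS = 200
FUNDAMENTAL = 100.0            # assumed underlying value
prices = [100.0, 100.0]        # price history (two points needed for a trend)

# Each agent starts as a 'chartist' (extrapolates the recent trend) or a
# 'fundamentalist' (bets on reversion toward the fundamental value).
agents = [random.choice(["chartist", "fundamentalist"]) for _ in range(N_AGENTS)]

for step in range(500):
    p_now, p_prev = prices[-1], prices[-2]
    net_demand = 0.0
    for i, strategy in enumerate(agents):
        if strategy == "chartist":
            net_demand += 1.0 if p_now > p_prev else -1.0       # chase the trend
        else:
            net_demand += 1.0 if p_now < FUNDAMENTAL else -1.0  # bet on reversion
        if random.random() < 0.02:
            agents[i] = random.choice(agents)   # occasionally imitate a random peer
    # Price moves with net demand, plus noise standing in for everything omitted.
    prices.append(p_now * (1 + 0.002 * net_demand / N_AGENTS + random.gauss(0, 0.01)))

print(f"final {prices[-1]:.1f}, min {min(prices):.1f}, max {max(prices):.1f}")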

Edit: I agree that behavioral economics so far may be better than more traditional mainstream economics, but only in so far as it turns its view to the actor to some degree. Overall it is oversold.
 
http://nautil.us/issue/56/perspective/antonio-damasio-tells-us-why-pain-is-necessary

Are you saying neural codes or algorithms don’t blend with living systems?

Well, they match very well with things that are high on the scale of the mental operations and behaviors, such as those we require for our conversation. But they don’t match well with the basic systems that organize life, that regulate, for example, the degree of mental energy and excitation or with how you emote and feel. The reason is that the operations of the nervous system responsible for such regulation relies less on synaptic signaling, the one that can be described in terms of zeroes and ones, and far more on non-synaptic messaging, which lends itself less to a rigid all or none operation.

Perhaps more importantly, computers are machines invented by us, made of durable materials. None of those materials has the vulnerability of the cells in our body, all of which are at risk of defective homeostasis, disease, and death. In fact, computers lack most of the characteristics that are key to a living system. A living system is maintained in operation, against all odds, thanks to a complicated mechanism that can fall apart as a result of minimal amounts of malfunction. We are extremely vulnerable creatures. People often forget that. Which is one of the reasons why our culture, or Western cultures in general, are a bit too calm and complacent about the threats to our lives. I think we are becoming less sensitive to the idea that life is what dictates what we should do or not do with ourselves and with others.

.....................

This knowledge gives us a broader picture of who we are and where we are in the history of life on earth. We had modest beginnings, and we have incorporated an incredible amount of living wisdom that comes from as far down as bacteria. There are characteristics of our personal and cultural behavior that can be found in single-cell organisms or in social insects. They clearly do not have the kind of highly developed brains that we have. In some cases, they don’t have any brain at all. But by analyzing this strange order of developments we are confronted with the spectacle of life processes that are complex and rich in spite of their apparent modesty, so complex and rich that they can deliver the high level of behaviors that we normally, quite pretentiously, attribute only to our great human smarts. We should be far more humble. That’s one of my main messages. In general, connecting cultures to the life process makes apparent a link that we have ignored for far too long.
 
Nice. Coincidentally, just sold my copy of Descartes' Error a few days ago (not because I don't like it, but because I've had it for years and it's time someone else got a chance to read it).

I'm confused though--why are the materials used to make computers more durable than human cells? I've gone through four or five laptops since high school due to hardware and/or hard drive malfunctions, but I haven't had to trade in my body. I'm not sure I understand that comment.
 
I'm confused though--why are the materials used to make computers more durable than human cells? I've gone through four or five laptops since high school due to hardware and/or hard drive malfunctions, but I haven't had to trade in my body. I'm not sure I understand that comment.

Just because it's not functioning doesn't mean it's gone (a hard drive can even be read after failure with the right equipment). At the cellular level, and all the way up, when it stops functioning it will decay away rather rapidly (other than bones). Total information loss outside of reproduction.
 
https://www.scientificamerican.com/article/why-people-dislike-really-smart-leaders/

IQ positively correlated with ratings of leader effectiveness, strategy formation, vision and several other characteristics—up to a point. The ratings peaked at an IQ of around 120, which is higher than roughly 80 percent of office workers. Beyond that, the ratings declined.
.........
“To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers,” he says. “The wrong interpretation would be, ‘Don’t hire high-IQ leaders.’

I had read something similar before, at least suggested if not directly researched (and not sure if it was the thing referenced early in the article), that beyond 125-130 IQ, political aspirants and other aspiring leaders were going to have significant difficulty connecting with potential followers, due to the significant cognitive distance. This is the first research I've seen to suggest something similar, and the effect starts even earlier.
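
For what it's worth, "peaked at around 120" is usually the output of a curvilinear fit: a quadratic term is added to the regression and the peak is read off the coefficients. A sketch of that arithmetic, with purely made-up coefficients (not the paper's):

\[
\hat{y} \;=\; b_0 + b_1\,\mathrm{IQ} + b_2\,\mathrm{IQ}^2, \qquad b_2 < 0
\;\;\Longrightarrow\;\;
\mathrm{IQ}^{*} \;=\; -\frac{b_1}{2\,b_2}\,;
\]
\[
\text{e.g., } b_1 = 0.24,\ b_2 = -0.001 \;\Longrightarrow\; \mathrm{IQ}^{*} = \frac{0.24}{0.002} = 120.
\]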
 
Just because it's not functioning doesn't mean it's gone (a hard drive can even be read after failure with the right equipment). At the cellular level, and all the way up, when it stops functioning it will decay away rather rapidly (other than bones). Total information loss outside of reproduction.

I see. But what's the point of Damasio invoking our fragility? It seemed that he was saying our fragility is what sets us apart from computers, but I don't see the point in that if what is durable about computers isn't their functionality. Yes, a computer's materials last; landfills are a testament to that. But the landfill is also a testament to the limited functionality these durable materials have for us. The plastic and copper of a computer might last for centuries, but the purpose we design them for typically lasts less than a human life (and in the case of laptops, usually not longer than five years). It's true that plastic sticks in the earth longer than biodegradable matter, but so what if it doesn't work?

I'm not sure why this is part of an argument against treating the brain as a computer. If anything, he should be saying that what makes a human brain different than a computer is the brain's durability.
 
I see. But what's the point of Damasio invoking our fragility? It seemed that he was saying our fragility is what sets us apart from computers, but I don't see the point in that if what is durable about computers isn't their functionality. Yes, a computer's materials last; landfills are a testament to that. But the landfill is also a testament to the limited functionality these durable materials have for us. The plastic and copper of a computer might last for centuries, but the purpose we design them for typically lasts less than a human life (and in the case of laptops, usually not longer than five years). It's true that plastic sticks in the earth longer than biodegradable matter, but so what if it doesn't work?

I'm not sure why this is part of an argument against treating the brain as a computer. If anything, he should be saying that what makes a human brain different than a computer is the brain's durability.

I think there's some presentism here. Individual cells are vulnerable to any number of issues, and the human body is vulnerable to a plethora of things. It is only through the explosion of technology providing adequate food, sanitation, and healthcare for most of the world (outside the third world) that we don't have to have 12 kids so that 4 live to see grandchildren, or some such ratio. Cells are at risk even from themselves. Computers generally have 2 points of failure: old spinning-disc hard drives, and power supplies. Power supplies last 5-10 years generally, and can be replaced easily. Increasingly used SSD hard drives do not have moving parts and can be expected to far outlast the old style hard drives. Even still, HDs are an easy swap out. We typically don't replace PCs because they don't work per se, we replace them because the technology has made them obsolete. I have an old tablet that still turns on and whatnot just fine. It's just so old it doesn't run current app versions for shit. Laptops get chucked for similar reasons, or because replacement batteries are expensive enough that we'd rather just get a faster model.
 
I've always liked Damasio's theory on the importance of our embodied consciousness--that physical pain is important for developed cognition, that emotional complexity both contributes to and clutters rational thought, and that anatomy informs cognitive behavior. Where I depart from Damasio is in his phenomenological methodology, which he derives from Hubert Dreyfus (the patron saint of A.I. critique). I believe there's a miscommunication between phenomenologists and eliminative materialists. Phenomenologists reject the idea that the brain is like a computer because the brain is a component of the body, whereas computers don't share the same kind of embodiment. I think that eliminative materialists and computationalists would say it's more than just the brain that's like a computer, it's the entire body. Warren McCulloch and Walter Pitts were the originators of the synapses = zeroes and ones model, which treats the brain like a computer. More recent theories would expand this beyond the limited synaptic responses of the brain.

Saying that algorithms aren't amenable to embodied behavior can be restated as saying that there are no algorithms complex enough to capture embodied behavior. Douglas Hofstadter identifies this as a fallacy of argument that he calls "Tesler's Theorem," which basically translates into "A.I. is whatever hasn't been done yet." This is the problem I have with phenomenologists.
 
I've always liked Damasio's theory on the importance of our embodied consciousness--that physical pain is important for developed cognition, that emotional complexity both contributes to and clutters rational thought, and that anatomy informs cognitive behavior. Where I depart from Damasio is in his phenomenological methodology, which he derives from Hubert Dreyfus (the patron saint of A.I. critique). I believe there's a miscommunication between phenomenologists and eliminative materialists. Phenomenologists reject the idea that the brain is like a computer because the brain is a component of the body, whereas computers don't share the same kind of embodiment. I think that eliminative materialists and computationalists would say it's more than just the brain that's like a computer, it's the entire body. Warren McCulloch and Walter Pitts were the originators of the synapses = zeroes and ones model, which treats the brain like a computer. More recent theories would expand this beyond the limited synaptic responses of the brain.

But synaptic transmission isn't even that neatly describable. It's a simplification of the action potential being all or nothing in certain synapses. I think the description of physical bodies as being like computers (or maybe now moving on to algorithms) is the same error-prone tendency which led us to describe the body as a series of pumps or by other mechanistic descriptions. Living organisms are vastly more complicated, and such comparisons only vaguely apply if at all.

Saying that algorithms aren't amenable to embodied behavior can be restated as saying that there are no algorithms complex enough to capture embodied behavior. Douglas Hofstadter identifies this as a fallacy of argument that he calls "Tesler's Theorem," which basically translates into "A.I. is whatever hasn't been done yet." This is the problem I have with phenomenologists.

I don't follow this application of the Didit fallacy. AI = Algorithms, and algorithms are designed to do something, even if the algorithm takes over on the path to that something (or even deviates from chasing that something).
 
But synaptic transmission isn't even that neatly describable. It's a simplification of the action potential being all or nothing in certain synapses. I think the description of physical bodies as being like computers (or maybe now moving on to algorithms) is the same error-prone tendency which led us to describe the body as a series of pumps or by other mechanistic descriptions. Living organisms are vastly more complicated, and such comparisons only vaguely apply if at all.

Computers and humans are both complex, but that's not to say their complexities are the same or even similar. All metaphors are ultimately faulty in that they overlook certain nuances, but they're redeemed in that they elucidate other nuances. Comparing the human body to a computer isn't to say that the body's information can be abstracted and reprogrammed into a different material and stay the same throughout. It certainly wouldn't because the body is part of the substrate necessary for human information. I think that describing consciousness as ultimately programmable is a misnomer, because I think that what we see in computers isn't consciousness and never will be.

Synaptic firing is neatly describable as binary code if we frame it in the following manner: a synapse either fires or it doesn't. It's either a zero or a one. Granted, this says next to nothing about why a synapse is firing, but that's another level of interpretation. 1 + 1 = 2, but simply stating that fact tells us nothing about why someone might be adding those numbers together. The neuronal model of computation doesn't purport to explain the reasons for human behavior, simply that the firing of synapses can be translated into binary code. Walter Pitts published a theorem back in the 1940s proving this, and it's still widely accepted today. It's known as the McCulloch-Pitts artificial neuron.
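
A minimal sketch of what that 1943-style model amounts to--the function name is mine, and the weights and threshold are chosen purely to illustrate an AND gate:

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: binary inputs, binary output.
    Fires (returns 1) only when the weighted sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts_neuron((a, b), (1, 1), 2))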

I don't follow this application of the Didit fallacy. AI = Algorithms, and algorithms are designed to do something, even if the algorithm takes over on the path to that something (or even deviates from chasing that something).

What I mean is that A.I. skeptics continually shift the goalposts when discussing what they see as inherently human qualities that can't be programmed into computers. This is part of the history of A.I. research. First, it was impossible to create calculating machines--"Only humans can do that"--but then we had Babbage's analytical engine and Turing machines. Then it was said that computers couldn't replicate language, which they can now do. Then it was that computers couldn't identify objects, which now they can. Then it was that computers couldn't beat humans at complex games, and now they can. In each case, it's a matter of creating more complex algorithms; and in each case, researchers have been able to accomplish feats that skeptics said were impossible. Phenomenologists tend to privilege human experience simply because it's human experience; and that's fine, it's their prerogative. But it carries with it some predispositions that, I think, inhibit their ability to critique A.I. research.

Now, this isn't to say that it is possible to create a computer that matches human cognition and decision-making, only that the skeptics' argument is to claim "Well, that's not part of what makes humans really human" (i.e. it isn't our language, or our game-playing, etc.). And the skeptics are right that computers will never fully emulate humans, but that's because computers don't have human bodies. Programming sound waves into a computer won't generate an aural experience because computers don't have the hardware to process sound waves as sound. The point in A.I. research isn't to create computers that are conscious, but computers that are intelligent enough to mimic conscious behavior.

The thrill of comparing humans to computers is that it forces us to question whether we are merely mimicking conscious behavior. ;) And there's no way to prove that we aren't, even if it feels like a pointless suggestion.
 
Synaptic firing is neatly describable as binary code if we frame it in the following manner: a synapse either fires or it doesn't. It's either a zero or a one. Granted, this says next to nothing about why a synapse is firing, but that's another level of interpretation. 1 + 1 = 2, but simply stating that fact tells us nothing about why someone might be adding those numbers together. The neuronal model of computation doesn't purport to explain the reasons for human behavior, simply that the firing of synapses can be translated into binary code. Walter Pitts published a theorem back in the 1940s proving this, and it's still widely accepted today. It's known as the McCulloch-Pitts artificial neuron.

Just a side note (not quibbling, I know what you mean): The neuron fires. The synapse is the gap between the dendrite of the receiving neuron and the axon(s) of sending neurons.

I'm not familiar with Pitts's theorem and it's outside of my expertise, but I'd be skeptical of comparisons to neurons based on neuroscience prior to WWII.

What I mean is that A.I. skeptics continually shift the goalposts when discussing what they see as inherently human qualities that can't be programmed into computers. This is part of the history of A.I. research. First, it was impossible to create calculating machines--"Only humans can do that"--but then we had Babbage's analytical engine and Turing machines. Then it was said that computers couldn't replicate language, which they can now do. Then it was that computers couldn't identify objects, which now they can. Then it was that computers couldn't beat humans at complex games, and now they can. In each case, it's a matter of creating more complex algorithms; and in each case, researchers have been able to accomplish feats that skeptics said were impossible. Phenomenologists tend to privilege human experience simply because it's human experience; and that's fine, it's their prerogative. But it carries with it some predispositions that, I think, inhibit their ability to critique A.I. research.

Now, this isn't to say that it is possible to create a computer that matches human cognition and decision-making, only that the skeptics' argument is to claim "Well, that's not part of what makes humans really human" (i.e. it isn't our language, or our game-playing, etc.). And the skeptics are right that computers will never fully emulate humans, but that's because computers don't have human bodies. Programming sound waves into a computer won't generate an aural experience because computers don't have the hardware to process sound waves as sound. The point in A.I. research isn't to create computers that are conscious, but computers that are intelligent enough to mimic conscious behavior.

The thrill of comparing humans to computers is that it forces us to question whether we are merely mimicking conscious behavior. ;) And there's no way to prove that we aren't, even if it feels like a pointless suggestion.

Ok, well I get the moving the goalposts issue (not sure about the Didit fallacy though). This is where the humanities can redeem themselves over the phenomenologists, as you refer to them. I think your point about "not fully emulating because they lack human bodies" is a shared point with Damasio to some degree.

As a personal interest by someone who does a lot of gaming, it would be interesting to see how computers do in games which are not completely contained and purely logical. I'd expect them to do better than humans sooner or later, but it would be interesting to see if they show a similar pattern of learning and/or move-making as they have shown in chess or Go - that is, eventually winning by methods considered completely unconventional.