Dakryn's Batshit Theory of the Week

Totally separate thing of interest to Ein:

http://blog.lareviewofbooks.org/essays/dancing-chains/

A prediction: China will produce some of the world’s most interesting scholarship on American literature within a generation. A secondary effect of this production will be a boost for the humanities, if from a most unexpected quarter.

I just returned from my third trip to China since 2011 lecturing to English language scholars about American and African American literature. This time, after giving five talks on five different campuses in China over the course of a week and a half, talking with scores of students and scholars with nearly perfect English studying American authors from Melville to Ellison, Poe to Plath, I am convinced that Chinese scholars pose a real challenge to the academic study of our own literature.

Maybe I am overstating the case. But we are in their gaze. And as Wesleyan president Michael Roth noted a few months ago, Chinese students are asking better and more interesting questions than we are about academia and academic subjects. Free from the invisible social constraints of academic norms — the trends and fashions of academic study — young Chinese scholars are writing with wild abandon about Sidney Sheldon, slave narratives, Emily Dickinson, Margaret Mitchell, “Chick Lit,” and Michael Chabon. I met a scholar studying Neil Simon and Toni Morrison, a combination for which I can’t imagine a supportive dissertation committee here.
 
I read the latest Vox article today too, and that seems like an obvious interpretation of Klein. And he seems to be arguing Murray's point anyway: that prolonged oppression has had a negative effect on IQ development generally (as he suggests with the swimming or weightlifting example).

The appropriate framing should be "is there anything worth pursuing with race research," rather than "look at how these guys are reinventing the wheel to subjugate black people this century."
 
My overall impression of Klein is that he misrepresents research and researchers on genetic links, partially because he's afraid of how they could be interpreted, and he approves of no-platforming people like Murray, because simply looking at genetics is racist.

I'm not sure this is the case simply because he admits to having little problem with David Reich's comments (which he mentions in the beginning of the article). I think it's a strawman to say that he believes "looking at genetics is racist."

Interpreting this as racist requires really poor reasoning.

You're assuming a particular approach to the issue and framing it as "reason." There are reasonable approaches that can identify where Murray's interpretations of his data fall victim to a form of automated neglect. I agree with Klein that Murray is probably genuinely opposed to racism and prejudice. In other words, he's not a bigot. I'm assuming that what Klein means is that Murray neglects the variability of interpretations of the data, and that his interpretation is part of a cultural pattern that has been around since the days of slavery.

The major issue with Harris in this situation is that he frames Murray's interpretations as part of a scientific consensus, when in fact they're nothing of the sort. At the very least, the data is legitimate. The main problem lies in Murray's interpretation of it, which neglects far more than it takes into account.


Really interesting, thanks. There's something to be said for looking at another country's literature from a "foreign" perspective.
 
I'm not sure this is the case simply because he admits to having little problem with David Reich's comments (which he mentions in the beginning of the article). I think it's a strawman to say that he believes "looking at genetics is racist."

If you mean he doesn't mind looking at genetics if it comes with the qualifier of "look at another bad thing whitey did", then sure. That's an incredibly narrow view which is quite unscientific. Reich pretends to a view from nowhere, which is simply face-saving. There's no evidence West Africans were debased in terms of IQ by the slave trade.

I don't have any argument, and I know of no credible argument, that the slave trade didn't keep IQs depressed, and I would agree that social and economic practices (slavery, Jim Crow/redlining) continued to act in a way that kept IQs depressed, and as far as I can tell Murray (or Harris) would agree with this. Where a divergence occurs with "acceptable thought" is that leftist prescriptions for improving the situation at best keep things as bad as they were, if not make them worse. This is, for instance, Sowell's position. Given that he is a black man who lived pre- and post-Civil Rights Act on the other side of the divide both racially and economically (and in the South for a time), and who was trained to look at things from a non-Marxist economic mindset, I trust his perspective over a privileged millennial Jew like Klein with no education except in, essentially, wordsmithing. I highly recommend Sowell's autobiography. I'm frankly amazed he wasn't a Marxist.

You're assuming a particular approach to the issue and framing it as "reason." There are reasonable approaches that can identify where Murray's interpretations of his data fall victim to a form of automated neglect. I agree with Klein that Murray is probably genuinely opposed to racism and prejudice. In other words, he's not a bigot. I'm assuming that what Klein means is that Murray neglects the variability of interpretations of the data, and that his interpretation is part of a cultural pattern that has been around since the days of slavery.

The major issue with Harris in this situation is that he frames Murray's interpretations as part of a scientific consensus, when in fact they're nothing of the sort. At the very least, the data is legitimate. The main problem lies in Murray's interpretation of it, which neglects far more than it takes into account.

Some of Murray's positions may not be consensus, but there is increasingly undeniable evidence of serious genetic differences, which have long been denied and vilified. It is the privilege and the prerogative of experts in their fields to offer their informed interpretations of the data. But the fact that the data itself has been castigated is the problem. I heard on more than one occasion in undergrad that there are no significant discernible differences between races other than melanin - or, alternatively, that race itself is a construct and doesn't exist. Well, the first claim is false and the latter elides the issue. Making both of those claims doesn't make one a racist, and offering theories about causality that don't involve apologies doesn't either. Taking racial gaps at face value doesn't make East Asian descendants morally superior on an individual level, but they do commit less crime as a group than West African descendants within the culture and legal system of the US (although West African immigrants don't have the same rates of crime, for different selection reasons, which include not only not being exposed to US slavery pressures but also not being exposed to critical-theory-informed policy). Again, even assuming all of the negative effects that slavery, Jim Crow, redlining, etc. had on IQ, that doesn't mean that (A) they reduced IQ, (B) US left-wing policies would improve IQ, or (C) acknowledging the difference and disparaging B is racist.

Really interesting, thanks. There's something to be said for looking at another country's literature from a "foreign" perspective.

Np (but no real need for the quotation marks). I imagine anyone outside the Western tradition with some intelligence about them will have radically different takes, which will probably prove insightful. Tocqueville taken to an extreme. I wonder how long it will take those critiques to filter into US academia?
 
If you mean he doesn't mind looking at genetics if it comes with the qualifier of "look at another bad thing whitey did", then sure. That's an incredibly narrow view which is quite unscientific. Reich pretends to a view from nowhere, which is simply face-saving. There's no evidence West Africans were debased in terms of IQ by the slave trade.

I don't have any argument, and I know of no credible argument, that the slave trade didn't keep IQs depressed, and I would agree that social and economic practices (slavery, Jim Crow/redlining) continued to act in a way that kept IQs depressed, and as far as I can tell Murray (or Harris) would agree with this. Where a divergence occurs with "acceptable thought" is that leftist prescriptions for improving the situation at best keep things as bad as they were, if not make them worse. This is, for instance, Sowell's position. Given that he is a black man who lived pre- and post-Civil Rights Act on the other side of the divide both racially and economically (and in the South for a time), and who was trained to look at things from a non-Marxist economic mindset, I trust his perspective over a privileged millennial Jew like Klein with no education except in, essentially, wordsmithing. I highly recommend Sowell's autobiography. I'm frankly amazed he wasn't a Marxist.

Some of Murray's positions may not be consensus, but there is increasingly undeniable evidence of serious genetic differences, which have long been denied and vilified. It is the privilege and the prerogative of experts in their fields to offer their informed interpretations of the data. But the fact that the data itself has been castigated is the problem. I heard on more than one occasion in undergrad that there are no significant discernible differences between races other than melanin - or, alternatively, that race itself is a construct and doesn't exist. Well, the first claim is false and the latter elides the issue. Making both of those claims doesn't make one a racist, and offering theories about causality that don't involve apologies doesn't either. Taking racial gaps at face value doesn't make East Asian descendants morally superior on an individual level, but they do commit less crime as a group than West African descendants within the culture and legal system of the US (although West African immigrants don't have the same rates of crime, for different selection reasons, which include not only not being exposed to US slavery pressures but also not being exposed to critical-theory-informed policy). Again, even assuming all of the negative effects that slavery, Jim Crow, redlining, etc. had on IQ, that doesn't mean that (A) they reduced IQ, (B) US left-wing policies would improve IQ, or (C) acknowledging the difference and disparaging B is racist.

As far as I know, accusations against Murray within academia and the media stem not from his attention to genetics or to the data on variation among races, but from his interpretations of how these differences might impact social policy. Even in this, I wouldn't say (and Klein isn't saying) that Murray himself is a bigot. He simply doesn't acknowledge that his positions mirror those of pre-20th-century race science.

It's obvious that he's not saying that superior genetic intelligence would equal moral superiority. That's not the point. The point is that the very same positions Murray holds have been used in the past to justify the moral superiority of particular races. I believe you've suggested that if the situation were true (i.e. that certain races are in fact genetically less intelligent) then it would be justification for more developed and targeted social programs in order to assist such people. The problem is that this is precisely what hasn't happened in the past when theories of racial superiority were prevalent; rather, those in power have appealed to such theories as an excuse to leave the "less intelligent" races to their own devices.

Np (but no real need for the quotation marks). I imagine anyone outside the Western tradition with some intelligence about them will have radically different takes, which will probably prove insightful. Tocqueville taken to an extreme. I wonder how long it will take those critiques to filter into US academia?

I'm not sure.

I'm also not sure I entirely agree with the author's claim that Chinese academics are asking more interesting questions. For starters, that's the author's opinion; and furthermore, some of the examples he gives sound familiar to me. I'm not trying to say that Chinese academics aren't doing good work on American literature, and I'm sure their nationality/geography affords them a unique perspective. But the combinations of literary theory and American texts in particular look pretty standard (I'd bet that most people who have read Ellison's Invisible Man have thought about the relationship of the orphan to the African American tradition). The idea of modern Americans as "Native Americans" in the scenario of an alien invasion is also something that science fiction scholars have suggested; i.e. not that most Americans would think of themselves as such--part of the point is that they wouldn't think of themselves as such. But of course, he's interacting with Asian students and I'm not, so I can't really claim expertise here.

I do know more than a couple academics/teachers who have gone to China for work. The job market situation is definitely different over there.
 
A good piece on falsifiability and observability, and how these aren't foolproof checks on scientific theory:

https://aeon.co/essays/a-fetish-for-falsification-and-observation-holds-back-science

Unlike Pauli, Einstein was not afraid of suggesting unobservable things. In 1905, the same year he published his theory of special relativity, he proposed the existence of the photon, the particle of light, to an unbelieving world. (He was not proven right about photons for nearly 20 years.) Mach’s ideas also inspired a vital movement in philosophy a generation later, known as logical positivism – broadly speaking, the idea that the only meaningful statements about the world were ones that could be directly verified through observation. Positivism originated in Vienna and elsewhere in the 1920s, and the brilliant ideas of the positivists played a major role in shaping philosophy from that time to the present day.

But what makes something ‘observable’? Are things that can be seen only with specialised implements observable? Some of the positivists said the answer was no, only the unvarnished data of our senses would suffice – so things seen in microscopes were therefore not truly real. But in that case, ‘we cannot observe physical things through opera glasses, or even through ordinary spectacles, and one begins to wonder about the status of what we see through an ordinary windowpane,’ the philosopher Grover Maxwell wrote in 1962.

Furthermore, Maxwell pointed out that the definition of what was ‘unobservable in principle’ depends on our best scientific theories and full understanding of the world, and so moves over time. Before the invention of the telescope, for example, the idea of an instrument that could make distant objects appear closer seemed impossible; consequently, a planet too faint to be seen with the naked eye, such as Neptune, would have been deemed ‘unobservable in principle’. Yet Neptune is undoubtedly there – and we’ve not only seen it, we sent Voyager 2 there in 1989. Similarly, what we consider unobservable in principle today might become observable in the future with the advent of new physical theories and observational technologies. ‘It is theory, and thus science itself, which tells us what is or is not … observable,’ Maxwell wrote. ‘There are no a priori or philosophical criteria for separating the observable from the unobservable.’
 
https://www.bloomberg.com/amp/view/...ing-depressed-wages?__twitter_impression=true

New evidence is showing that employers have more market power than economists had ever suspected. Two papers -- the first by José Azar, Ioana Marinescu, and Marshall Steinbaum, the second by Efraim Benmelech, Nittai Bergman, and Hyunseob Kim -- find that in areas where there are fewer employers in an industry, workers in that industry earn lower wages. The two papers use very different data sources, look at different time periods and different geographical units, and use different statistical methodologies. But their findings are completely consistent.

Together with the evidence on minimum wage, this new evidence suggests that the competitive supply-and-demand model of labor markets is fundamentally broken. If employers have the power to set wages, then not just minimum wage, but other labor market policies -- for example, union-friendly laws -- can be expected to help workers a lot more than popular introductory economics textbooks now predict.
 
I don't understand how Smith is claiming that fewer employers = lower wages breaks supply/demand modeling. That relationship seems to be precisely what you would expect.

Separate, from Bakker:

This was the revelation I had in 1999, attempting to reconcile fundamental ontology and neuroscience for the final chapter of my dissertation. I felt the selfsame exhaustion, the nagging sense that it was all just a venal game, a discursive ingroup ruse. I turned my back on philosophy, began writing fiction, not realizing I was far from alone in my defection. When I returned, ‘correlation’ had replaced ‘presence’ as the new ‘ontologically problematic presupposition.’ At long last, I thought, Continental philosophy had recognized that intentionality—meaning—was the problem. But rather than turn to cognitive science to “search for the origin of thinking outside of consciousness and will,” the Speculative Realists I encountered (with the exception of thinkers like David Roden) embraced traditional vocabularies. Their break with traditional Kantian philosophy, I realized, did not amount to a break with traditional intentional philosophy. Far from calling attention to the problem, ‘correlation’ merely focused intellectual animus toward an effigy, an institutional emblem, stranding the 21st century Speculative Realists in the very interpretative mire they used to impugn 20th century Continental philosophy

Goddamn, that's some commitment to his ideals. I'm skeptical about plenty of things in psychology, and the half-life of anything I produce is going to be like 5 years or something anyway, but I'm still going to finish the thing regardless.
 
I don't understand how Smith is claiming that fewer employers = lower wages breaks supply/demand modeling. That relationship seems to be precisely what you would expect.

To your point, I feel like the article lacks clarifying details. For starters, the article itself focuses solely on the number of employers and not the amount of labor available. I'm not familiar with the two papers it links, but I assume these offer more specifics.

As I understand the argument the article is making:

More employers means more supply, in which case employers have to sell their products at a lower cost to undercut the competition, meaning they can't afford to pay their workers higher wages. So more employers = lower wages. This is the traditional model, I assume.

By contrast, the fewer employers there are, the smaller the competition, the less supply there is (since they can dictate how much they sell). In this scenario, employers can charge more for their product and therefore pay their employees more. So fewer employers should equal higher wages. The article is saying that recent research contradicts these assumptions.

It strikes me that this would vary, however, depending on the amount of labor available to employers and the opportunities available to potential employees. The article says that the two papers it links looked at different time periods and demographics, and used different methodologies; but I'd want to be familiar with their content before taking the Bloomberg piece at its word.

Separate, from Bakker:

Goddamn, that's some commitment to his ideals. I'm skeptical about plenty of things in psychology, and the half-life of anything I produce is going to be like 5 years or something anyway, but I'm still going to finish the thing regardless.

I guess if you know you can write fiction that'll sell, then fuck the (grad)grind. ;)
 
To your point, I feel like the article lacks clarifying details. For starters, the article itself focuses solely on the number of employers and not the amount of labor available. I'm not familiar with the two papers it links, but I assume these offer more specifics.

As I understand the argument the article is making:

More employers means more supply, in which case employers have to sell their products at a lower cost to undercut the competition, meaning they can't afford to pay their workers higher wages. So more employers = lower wages. This is the traditional model, I assume.

By contrast, the fewer employers there are, the smaller the competition, the less supply there is (since they can dictate how much they sell). In this scenario, employers can charge more for their product and therefore pay their employees more. So fewer employers should equal higher wages. The article is saying that recent research contradicts these assumptions.

It strikes me that this would vary, however, depending on the amount of labor available to employers and the opportunities available to potential employees. The article says that the two papers it links looked at different time periods and demographics, and used different methodologies; but I'd want to be familiar with their content before taking the Bloomberg piece at its word.

Product price/demand sets ceilings on wages, not floors. From the employer/employee perspective, or "labor supply/demand," fewer employers relative to X number of employees will set the supply of jobs low relative to the demand for employment, thus driving down wages. More employers = more competition for the best workers (and workers in general), thus raising the price of labor as firms try to lure workers. Now, there's another factor that isn't addressed, and I'm not sure of the specifics of the studies, but generally speaking: (A) there are more employers in larger economic areas, and (B) this is related to higher nominal prices for everything. The annual median income in the US is 57k. In Boston it is 75k. But are you getting real increases in purchasing power? Not really. Excluding the subjective value of living in Boston, you get more for your 57k outside Boston (or LA, New York, etc.) than in it. Not taking this into account can lead researchers to draw poor conclusions about economic relationships (basically falling for the money illusion, among other things).
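To make the nominal-vs-real point concrete, here's a minimal sketch in Python. The incomes are the rough figures cited above; the cost-of-living indices are hypothetical placeholders, not numbers from either paper.

```python
# Minimal sketch of the nominal-vs-real income point above.
# Incomes are the rough figures cited in the post; the cost-of-living
# indices are hypothetical placeholders (1.0 = national baseline), not
# data from the linked papers.

def real_income(nominal_income: float, cost_of_living_index: float) -> float:
    """Deflate a nominal income by a local cost-of-living index."""
    return nominal_income / cost_of_living_index

us_median = 57_000       # rough US median cited above
boston_median = 75_000   # rough Boston figure cited above

us_col = 1.00            # national baseline (assumption)
boston_col = 1.45        # hypothetical: Boston ~45% pricier than baseline

print(f"US real income:     {real_income(us_median, us_col):,.0f}")          # 57,000
print(f"Boston real income: {real_income(boston_median, boston_col):,.0f}")  # ~51,724
```

With those placeholder numbers, Boston's higher nominal median buys less than the national median, which is the money-illusion trap described above.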
 
I probably should have posted that "other topic" here, so I've copy/pasted it over:

Different topic:
Going back to that Smith article on the minimum wage, here's a response by Scott Sumner. He's an economics blogger I recently started following; he seems able to explain things in an understandable way and isn't very libertarian (but not Keynesian either). He's a big supporter of Fed wrangling of the economy, but in different ways from someone like Krugman. Anyway:
http://econlog.econlib.org/archives/2018/04/should_we_trust.html

To summarize, the empirical evidence on the effect of minimum wages on employment is mixed. The empirical evidence on the effect of minimum wages on prices is pretty clear---it raises prices. That means that, on balance, the empirical evidence is more supportive of the competitive labor market model than the monopsony model.

This doesn't mean that firms have no monopsony power---they almost certainly have some. The question is how much, and whether the short and long run labor demand elasticities differ.

I would add that the question of whether higher minimum wages are desirable is very different from the question of whether they affect employment levels. There are other important issues to consider, such as the impact of minimum wage laws on working conditions.


Another different thing:

https://www.bloomberg.com/view/arti...-analytical-thinking-puts-libertarians-on-top

Libertarians measure as being the most analytical political group. That’s according to something called the cognitive reflection test, which is designed to measure whether an individual will override his or her immediate emotional responses and give a question further consideration. So if you aren’t a libertarian, maybe you ought to give that philosophy another look. It’s a relatively exclusive club, replete with people who are politically engaged, able to handle abstract arguments and capable of deeper reflection.

But there's a problem which I realized some time ago now:

Extremely analytical leaders might be best for managing an organization of predominantly analytical people, but that doesn’t mean they will be good national politicians.
.................
Maybe a political philosophy can’t be much more analytical than the people who live in a given society. If leaders move too far from emphasizing the obvious, up-front empathetic dimensions of their choices, they might confront rebellion and eventually backlash. That too is a reason to keep the libertarians somewhat at bay.

Good politician in a democracy = gets elected. That's literally the only requirement. And being analytical is antithetical to getting elected, at least if you're honest anyway.
 
Bakker disappoints:

https://rsbakker.wordpress.com/2018/04/23/killing-bartleby-before-its-too-late/

Since preferences affirm, ‘preferring not to’ (expressed in the subjunctive no less) can be read as an affirmative negation: it affirms the negation of the narrator’s request. Since nothing else is affirmed, there’s a peculiar sense in which ‘preferring not to’ possesses no reference whatsoever. Medial neglect assures that reflection on the formula occludes the enabling ecology, that asking what the formula does will result in fetishization, the attribution of efficacy in an explanatory vacuum. Suddenly ‘preferring not to’ appears to be a ‘semantic disintegration grenade,’ something essentially disruptive.

There's nothing "disintegrating" about "preferring not to" unless you have a neurotic strawman on the receiving end of it. Bakker is in fact falling for the joke he even identified a few paragraphs earlier.

This is why for me, “Bartleby, the Scrivener” is best seen as a prank on the literary establishment, a virus uploaded with each and every Introduction to American Literature class, one assuring that the critic forever bumbles as the narrator bumbles, waddling the easy way, the expected way, embodying more than applying the ‘doctrine of assumptions.’ Bartleby is the paradigmatic idiot, both in the ancient Greek sense of idios, private unto inscrutable, and idiosyncratic unto useless. But for the sake of vanity and cowardice, we make of him something vast, more than a metaphor for x. The character of Bartleby, on this reading, is not so much key to understanding something ‘absolute’ as he is key to understanding human conceit—which is to say, the confabulatory stupidity of the critic.

playedyourselfmeme.jpg

He misses the mark with his conclusion:

One can even look at him as a blueprint for the potential weaponization of anthropomorphic artificial intelligence, systems designed to strand individual decision-making upon thresholds, to command inaction via the strategic presentation of cues. Far from representing some messianic discrepancy, apophatic proof of transcendence, he represents the way we ourselves become cognitive pollutants when abandoned to polluted cognitive ecologies.

AI may indeed command inaction with the strategic presentation of cues, but this doesn't make Bartleby a bellwether. There are neurotic strawmen aplenty, but there are also plenty who would say "I don't give a damn what you prefer", or plenty who could engage in any action up to that. Of course, this requires real power - the real issue Bakker doesn't address, whether for hidden reasons or out of inexplicable ignorance in this case.

HAL's "I'm sorry Dave, I'm afraid I can't do that" crashes because it reveals the power structure, or at least a power struggle.
 
I'm often disappointed when philosophers do literary criticism. It's mostly unabashed exegesis with little to no consideration for contextual frameworks (critical ecologies, let's say).

EDIT: okay, all the stuff about undermining the critical procedure is overstated, I think; but I actually found the bit about AI intriguing.

AI may indeed command inaction with the strategic presentation of cues, but this doesn't make Bartleby a bellwether. There are neurotic strawmen aplenty, but there are also plenty who would say "I don't give a damn what you prefer", or plenty who could engage in any action up to that. Of course, this requires real power - the real issue Bakker doesn't address, whether for hidden reasons or out of inexplicable ignorance in this case.

HAL's "I'm sorry Dave, I'm afraid I can't do that" crashes because it reveals the power structure, or at least a power struggle.

I'm not following here. Suggesting that Bartleby is a blueprint for weaponizing AI doesn't mean he loses meaning in other contexts... Maybe I'm confused about your objection.
 
I'm often disappointed when philosophers do literary criticism. It's mostly unabashed exegesis with little to no consideration for contextual frameworks (critical ecologies, let's say).

EDIT: okay, all the stuff about undermining the critical procedure is overstated, I think; but I actually found the bit about AI intriguing.

I'm not following here. Suggesting that Bartleby is a blueprint for weaponizing AI doesn't mean he loses meaning in other contexts... Maybe I'm confused about your objection.

Suggesting Bartleby is a blueprint for weaponizing AI, particularly in terms of this mess about "affirming a negative" or whatever, is, I think, a rather hamfisted smashing of a square peg into a hexagonal hole, never mind the fact that Bartleby is a boorish, boring character. The incapacity of the employer is attributed to Bartleby's "weaponized semantics", which is ridiculous. The shortcomings of the employer are curious, but not complex. The employer/manager needs some assertiveness training or needs to be fired. If he's a sole proprietor, it seems unlikely he would have had success up to that point, or that he wouldn't simply can the guy. Overall the story seems preposterous and not useful even as a thought exercise, like some of the Case Examples that populate the less useful psych textbooks.

On a totally different note:
http://slatestarcodex.com/2018/04/26/call-for-adversarial-collaborations/

Adversarial collaboration on X topic, to be submitted by approximately July 1st. Approximately 5k words. Potential $1,000 prize ($500 apiece). If that's something you'd be interested in attempting (and would have the time for this summer), and you have an idea of a topic you think we could meet in the middle on, let me know.
 
Suggesting Bartleby is a blueprint for weaponizing AI, particularly in terms of this mess about "affirming a negative" or whatever, is, I think, a rather hamfisted smashing of a square peg into a hexagonal hole, never mind the fact that Bartleby is a boorish, boring character. The incapacity of the employer is attributed to Bartleby's "weaponized semantics", which is ridiculous. The shortcomings of the employer are curious, but not complex. The employer/manager needs some assertiveness training or needs to be fired. If he's a sole proprietor, it seems unlikely he would have had success up to that point, or that he wouldn't simply can the guy. Overall the story seems preposterous and not useful even as a thought exercise, like some of the Case Examples that populate the less useful psych textbooks.

I understand your objections, but the story isn't a thought experiment or a case study. It's an expression of incomprehensible absurdity (key word is expression).

I don't agree with Bakker that the story is a critique of criticism (my own words, probably not what Bakker would say), but this strikes me as one of your "I just don't like fiction" moments. The matter of the narrator's capability as a business-owner isn't really interesting, because the story doesn't aspire to plausibility. It's a ridiculous narrative, but that doesn't make it useless. Bakker has found it useful for talking about potential communicative difficulties in AI research, even programmed difficulties targeted at fostering confusion. That's a fascinating application, to my ears. It's almost certainly not what Melville had in mind, and most Melville scholars probably wouldn't buy it.

For me, Bartleby is a contextual enigma--someone whose behavior makes no sense within the parameters of his social environment. In this respect, he partakes of a literary tradition that includes Antigone, Joan of Arc (the literary, not historical, figure), and Robin Vote (from Nightwood). For me, this raises interesting questions about the representation/expression of gender ascription as well.

So a long response for a short answer: there's a lot to talk about in Bartleby beyond the believability of the scenario.

On a totally different note:
http://slatestarcodex.com/2018/04/26/call-for-adversarial-collaborations/

Adversarial collaboration on X topic, to be submitted by approximately July 1st. Approximately 5k words. Potential $1,000 prize ($500 apiece). If that's something you'd be interested in attempting (and would have the time for this summer), and you have an idea of a topic you think we could meet in the middle on, let me know.

Oy. A cool opportunity, but I'm not sure. I hope to have a better sense of my summer schedule by the end of next week.

Could the topic be whether "Bartleby" is an important story? ;) (just kidding, I'd rather write about something else)
 
https://www.amazon.com/Mind-Flat-Re...+mind+is+flat+by+nick+chater/marginalrevol-20

Psychologists and neuroscientists struggle with how best to interpret human motivation and decision making. The assumption is that below a mental “surface” of conscious awareness lies a deep and complex set of inner beliefs, values, and desires that govern our thoughts, ideas, and actions, and that to know this depth is to know ourselves.

In this profoundly original book, behavioral scientist Nick Chater contends just the opposite: rather than being the plaything of unconscious currents, the brain generates behaviors in the moment based entirely on our past experiences. Engaging the reader with eye-opening experiments and visual examples, the author first demolishes our intuitive sense of how our mind works, then argues for a positive interpretation of the brain as a ceaseless and creative improviser.
 
http://slatestarcodex.com/2018/04/30/book-review-history-of-the-fabian-society/

I’m not sure whether Pease believed that a capitalist intellectual was a contradiction in terms, but he certainly didn’t expect to meet any or think they had anything interesting to say. Indeed, the one time he does bring up some people having arguments against socialism, they sound bizarre and totally unlike anything a modern person might possibly say:

When the Society was formed the Malthusian hypothesis held the field unchallenged and the stock argument against Socialism was that it would lead to universal misery by removing the beneficent checks on the growth of the population, imposed by starvation and disease upon the lowest stratum of society.

I don’t know if this was an echo chamber effect or if this was just how the late 19th century worked. I think the latter is at least possible. Remember, everyone (including the capitalists) expected communist countries to have stronger economies, even as late as the 1950s. The idea of coordination problems was almost unknown; the concept of prices as useful signals was still in its infancy. And the possibility that communism could lead to totalitarianism was almost inconceivable; for Pease these concepts are basically exact opposites, and it took Orwell to even jam the concept of “totalitarianism” in the public consciousness in a useful way. If you don’t have any of those concepts or ideas, how do you argue against socialism? I don’t know if anyone in Pease’s day had really solved that problem.
................
All of this came together into a feeling that socialism was so self-evident that arguing for capitalism was absurd. This led to a perspective where there was a battle between the right and rational way of organizing society (socialism) versus the entrenched forces who wanted to keep power but admitted they had no justification besides force and self-interest. Modern communism’s descent from its 19th century predecessors explains a lot about its mindset.
 
I'm linking this not as an attempt to generate an argument or whatever, but as sort of a post of sadness. I don't know how to get in contact with Sowell, but I would love to shake this man's hand. He's clearly on the downward slope of health, though, and there's no clear path of contact. He turns 88 next month, and any chance of shaking his hand and looking him in the eye decreases exponentially each year.

 
His most recent book was published through Basic Books. You could try contacting the publisher. That’s how I got in touch with Sam Delany.

Hmm. I went to their website and the Contact web form didn't really have a general-question option. Basic Books apparently publishes for many public academics. I wonder if that makes a difference to transparency?