The process becomes magical, and why work for what can be magically produced? Modernity turns our understanding of productive processes into the myth of Gandalf's Sack (or a similar sack from any number of fables).
Secondly, and this has been said before, the primary reason people work is absolutely to make money. There is a vanishingly small minority of people who would continue to do what they do, or who would dive headfirst into other meaningful work, even absent some sort of directly connected compensation. Until proven wrong by an actual instance of such work-reward decoupling, I will insist that all economic, sociological, and psychological evidence points toward a further explosion of entertainment consumption, grievance movements, and general cognitive and social decline in response to such a decoupling.
Your diagnosis of the myth is slightly misplaced, I think. It isn't "why work for what can be magically produced," because, as you say, people do still work. But as you also say, people don't work for food or some other direct object; they work to make money. So in this respect, yes--people absolutely do work to make money.
Because that's the system we've constructed for ourselves. Change the structure of distribution and accumulation, and you change people's motivations.
I'm still reading the piece on grievance movements from the Mort thread. It's long, but good.
This I'm on board with. It's a really nice critique of contemporary pop-culture perspectives on AI, which tend to shape intelligence around a human mold. I actually addressed this in my course this past spring. We kept discussing the difference in intelligence between human beings and nonhuman entities, and I eventually asked my students to consider the difference between human consciousness and intelligence, and whether the former implies the latter, or whether the latter necessitates the former. With regard to survival, I brought up things like ant colonies and cockroaches and asked if those qualify as "intelligent."
All this said, the author's problem seems to be with the name "superhuman," rather than with anything that might actually be described as "superhuman." We tend to imagine superhuman intelligences in film (e.g. Terminator, The Matrix, Transcendence), but these are fictional fantasies of artificial intelligence--not scientifically informed possibilities.
Closer approximations would be films like Her, or the philosophical meditation on embodiment that we find in Ex Machina; but even these fall victim to anthropomorphism (although they're aware of it, I would say).
What the author seems to want to say is that we have an uncritical and inaccurate tendency to describe intelligences of expanded scales as "superhuman," which implies an amplification of human faculties and concerns (hence that superhuman intelligence could solve our problems). A more accurate description of these systems would be, simply put, different.
For my part, I've never used the term "superhuman." I prefer the phrase "complex system," or "complex intelligence." I definitely do think these kinds of intelligences are possible, and I don't think the author would deny this... I think he's saying that what we fantasize as "superhuman" is impossible, but complex intelligences are possible (and already exist, in fact).