Einherjar86
Active Member
That quote was from the same article you linked.
If you could give a specific example of his "leaps of character," that would make it easier to address. As far as I can tell, he's simply going off of what Musk, Zuckerberg, and the like have already said about AI. His main point seems to be that tech moguls project an unconscious skepticism toward their own behavior onto the prospect of superintelligent AI. In and of itself, that's a very speculative suggestion, and not one that has a lot of argument backing it; it's a kind of cultural critique performed very succinctly. It's a thought piece, not much more; but people like Chiang are often asked for such pieces, given their extensive body of work.
I do think there's been some pushback, however, from people who have somewhat misinterpreted the piece. I don't think Chiang is giving us his stance on AI one way or another. I think he's just comparing contemporary visions of superintelligent AI to corporate models and systems. This isn't a totally ridiculous analogy if one thinks in terms of emergence and complexity theory. Emergence has been used in the sciences to explain phenomena ranging from termite colonies to the internet to processes of urbanization. Chiang is just picking up on a theory of complexity that observes comparable patterns between corporate entities and hypothetical superintelligent systems.