Einherjar86
Active Member
The point is that the author's point is idiotic, and probably indicative of someone who has never published a paper in a remotely decent scientific journal. Even in the social sciences there's an expectation of some statistical analysis of data, some formation of a testable model, etc. As Dak said, the issue wasn't the forgery of data; it was the fact that these papers were 90% conjecture resting on 10% data.
I don't know what you mean by "objectionable" vs. "non-objectionable," but people pull hoaxes on journals in the hard sciences as well. The scandal of such hoaxes isn't just the forgery of data; it's primarily that the arguments themselves are bullshit and get accepted anyway, either because the reviewer dogmatically believes them to be true or because the reviewer cannot understand the paper.
I'm unaware of hoaxes pulled on hard science journals. I'd like to read about them; I don't doubt someone has tried. Part of the author's point, however, is that those hoaxes don't get the kind of publicity that the Areo and Sokal hoaxes do. The reason they get such publicity is that they appeal to a political motivation among even non-academics who simply despise anything remotely gender-related. The hoaxers claim to be apolitical, but their work is making a splash precisely because it is political. It's exploding on places like reddit and 4chan, while academics are mostly rolling their eyes.
I'm not trying to shrug the episode off. The Areo authors really did manage to publish some outlandish stuff, and it's worth pausing over the fact that four out of sixteen papers with no data collection got published.
My primary critique of the hoax, which I didn't have time to go into last night, has to do with its methodology--which the authors describe as "reflexive ethnography." This basically means immersing yourself in a culture or community until you can effectively communicate in its discourse, i.e. you figure out what the buzzwords are and how to put certain ideas together. In other words, you learn to pass yourself off as an insider.
The problem with this is that it's basically the same premise as a computer in a Turing test. A computer might convincingly imitate human language/behavior, but that doesn't mean computer scientists are convinced that it understands what it's saying. John Searle's Chinese room thought experiment basically suggests that a computer might communicate appropriately and effectively without having any knowledge of the semantic content of its language.
This raises a problem for the Areo authors' claim to "know they've made things up." If they don't truly understand the content of what they've said, then they can't actually know what the meaningful effect of their "intentionally broken" arguments is. Even hard sciences like theoretical physics and abstract mathematics rely on innovative recombinations that surprise readers. Good arguments should simultaneously make sense and surprise us; if they didn't surprise us, there would be no need to make them. So all arguments involve some element of creativity.
Admittedly, the Areo authors managed to publish some, ahem, excessively creative pieces. But I don't think they've actually proven, or even suggested, that the journals in which they've published are awash in such arguments. Rather, they've demonstrated the degree to which editors/readers are willing to bend to allow creative arguments. In some cases, it may be too far; but I'm willing to bet the majority of arguments accepted in these journals don't fall into this outlier category (I don't read any of them, so I don't know).