yeah, and then he would duck 2-3 inches while holding his beer or something, no? Pretty stupid move imo, what if he and the people around him got sprayed right when he was doing that?
Citizen science, I’d argue, is not structured to produce real knowledge. Rather, it’s about rejigging power relations. It draws strength from a certain brand of market fundamentalism – a political sensibility we might also call neoliberalism – in which people’s beliefs about science are simply transactions in a marketplace of ideas, as unassailable as their choice of soap-powder at the supermarket. What does it all mean? Let the market sort it out, not the scientific community. For neoliberals, the market knows the nature of truth better than any human being, a category which includes scientists.
The problem with this argument is that it doesn’t recognise that the mind must be prepared to see the significance of certain kinds of information. Being and becoming a scientist doesn’t revolve around a hieratic conformity to some transcendent ‘scientific method’. Rather, it’s the consequence of a long period of immersion in the specific culture of a discipline, such that one begins to be able to perceive what are the valid questions, preferred methods, legitimate styles of research, and so on.
By contrast, citizen science often amounts to the bald assertion that you can dispense with everything (including long years of education and apprenticeship) and mimic the outward trappings of science (cool apparatus, measurement, making organisms blink) – and still make a lasting ‘contribution’ to knowledge. Now, if it were just a matter of recruiting people to do the gruntwork, and having the experts check and follow up, such ‘knowledge’ wouldn’t really matter; but that is not at all how citizen science is sold. All movements to make citizens behave more like scientists embody a baneful internal contradiction: if the participants were at all serious, they would have to undergo real training, not a drive-by blast of methodology lite. But in that case, by definition, it would no longer be citizen science.
An ivory-tower reaction to the bucking of academic discourse? Or a perceptive take-down of lay science?
I wear my opinions on my sleeve.
https://aeon.co/essays/is-grassroots-citizen-science-a-front-for-big-business
Scott is left-wing, but he’s explicitly taking aim at a certain left tradition: Gramsci, Althusser, etc. This tradition is, more or less, opium of the masses on steroids. Someone is going to tell me that Actually, what they mean is […] and I’m going to say that you’re wrong and I don’t care. The Gramsciites I’ve met may have some super-secret other reading, but 95% of them present it as, more or less, “Sheeple just get propaganda’ed, bro, which is why they don’t rise up. We intellectuals need to show them the way.” They would look at Sedaka and instantly start screaming about cultural hegemony.
No.
I agree that "not anyone" can be a scientist. The problem is that we assume people who matriculate are automatically qualified scientists. It's not a bar tooooo low, but a bar too low. Conversely, particularly in the social sciences, there are echo chamber issues which your link fails to acknowledge. (Not that the issue noted isn't also an issue). I provide, as a convenient counter, the newest blog from my newest favorite blogger:
https://samzdat.com/2017/11/20/banish-plump-mouse-deer-and-banish-all-the-world/
I will read the post soon, but don't have time right now (and the number agreement issue in his first paragraph is too much to handle right now; I just graded eighteen freshman papers, I don't need to see more grammatical errors).
All I'll say is that the quote you posted is directed at the humanities; the article I linked is directed at the physical/natural sciences (i.e. not the human sciences). Chemists and physicists aren't clambering over one another to claim cultural hegemony.
I think we have to assume that people in universities are qualified. That's part of the gauntlet they have to go through, it's how they earn their credentials. Obviously there are differences between universities, and inevitably there will be those who produce sub-par work; but I don't think our position can be one of extreme skepticism when it comes to scientific research, since such a position would slow progress to a virtual standstill (for what it's worth, many laypeople out there believe the legitimacy of evolution is on the table for serious scientific debate; the academy can't be slowed down by explaining why this isn't the case every time a citizen scientist posits a hypothesis about Edenic origins, or some such).
Much of the skepticism toward the physical sciences today comes from people who have little to no justification for being skeptical other than closely-held personal beliefs. This is what the Aeon piece means by the democratization of science. It is a profession, and it demands expertise. That doesn't mean that there won't be hacks out there, but I don't think that justifies a position of absolute skepticism prior to viewing their work. If there are hacks, then chances are their work will not conform to discursive consensus, and they'll be forced to reevaluate or abscond.
We award businesspeople a level of credibility when it comes to social affairs (i.e. "He runs a business, he knows what he's doing"). Scientists deserve the same level of trust.
Recovering or conserving the subjective viewpoint is an appealing notion, and its appeal reveals the sort of doublethink we have towards our era of the brain and the fMRI. Everyone’s attention is grabbed by talk of endorphins and serotonin-boosters and cooling that pesky fight-or-flight response. However, even when the evidence suggests that we want the total physiological control of Brave New World, it remains one of the most famous literary dystopias of our time. There is still horror in the idea of totally relegating our individual experience to the species-wide logic of physiology. Most of us, deep down, want our inner lives – our ideas about ourselves, our sense of where we want life to lead, what we fear, what we desire – to be of consequence. Psychoanalysis is attractive at least partly because it makes us rich, narrativised, and mysterious to ourselves. It makes life a novel, not a textbook.
Certainly, the project can become overheated, indulgent: to imagine that the depths of our being resemble the timeless turbulence of a Greek myth flatters us. To imagine that our dreams are loaded with meaning plays to our basic narcissism. (The same narcissism that makes us want to tell people all about our dreams, and makes those dreams that aren’t ours so reliably dull.) But there is a basic principle at play here, and it has to do with the feeling that no generalised theory can capture even a single living mind, let alone all of them. Rationalising ourselves offers a form of relief. No more vagueness; everything measurable and editable.
And yet, on some level, we don’t want to live our lives purely according to biochemistry, the same biochemistry as 7.5 billion others. What a flattening! In such an equation, something gets lost, even if we struggle to say what. Not for nothing does psychoanalysis survive most unscathed in the humanities. Freud’s work references Hamlet and Macbeth, and Goethe’s Faust. For a century, writers including H G Wells, Virginia Woolf, J G Ballard and Paul Auster have been compelled by his work. Like psychoanalysis, the humanities (and especially literature) privilege the richness of the individual life, and regard reality as populated by subjects rather than objects. Like psychoanalysis, the humanities are often framed as in decline, dwarfed by the technocratic bloodlessness of a scientistic age. The two projects run in parallel, driven by the same instinct: that the stories we tell ourselves can affect how we live with ourselves.
Things like panic and phobias can be addressed with little to no introspection.
I don't think the author has any beef with CBT; he's just charting the relative reception of Freudian ideas within contemporary psychiatric practice. I personally found it interesting that neuroscience has something positive to say about Freud, and it's certainly true that he inaugurated a paradigm shift in how we think about the human mind--a shift that constitutes the state of the field today, even if he was wrong about the Oedipal complex and psychosexual development (both of which the author admits).
I think it's interesting that Deleuze and Guattari debunked the Oedipal complex in 1972, well before the dramatic shift toward CBT in the 1990s. Critical philosophy has long been skeptical of Freud despite awarding him intellectual prestige.
I also think the author's main point is that while Freud is frequently associated with ideas that have fallen out of favor, the scientific impetus that drove his thinking established a new paradigm for psychological study. Furthermore, there was plenty that Freud also got right. I think it's telling that neuroscience hasn't entirely discarded Freud, even today.
It sounds as though this is the bio-materialist position the author is concentrating on. I'm not swayed by his late appeals to enjoying our subjectivity, or some such; but I do think it's worth interrogating exactly how much we can reduce all psychic experience to neuronal phenomena.
The CBT shift happened in the 80s as far as I know.
Freud's only correct contribution that I know of is the emphasis on the subconscious, and even that holds only from a neuroscience perspective; it's not actionable from a therapy standpoint. Therapy can only be the conscious acting on the unconscious (or conscious). My currently most challenging client has a lot of anxious difficulties primarily stemming from parent-related criticism, as far as I can deduce. I can't change their parents. I can influence change in their conscious relations and behaviors, and exposure to those challenging situations after psychoeducation in the processes has been beneficial.
Exposure (with some cognitive education) to reactive stimuli works better than couch sessions or drugs. Panic and phobia are based on irrational fears which are only changed with counter learning experience.
Is it your impression that introspection aims to alter contingent reality? I only ask because your phrasing makes it seem like you think psychoanalysis, or related therapeutic methods, do aim to "change their parents." I think it's glaringly obvious that there are things in life we can't change. I don't think that psychoanalysis would be so egregiously misguided as to insist that a patient can change anything/everything.
I'm not equipped to comment on its practicality, but does impracticality render something valueless? Plenty of discoveries in the physical sciences have no practical application, but no one complains about them.
Again, I'm ill-equipped to speak to much of this. How do you propose to arrive at "counter learning experiences" without some degree of introspection?
Psychoanalysis doesn't presume to change relations to parents, I assume, but a change in relation must be part of therapy, and at that point we're reaching into CBT. Every situation is different, but I embrace the 3rd wave approach, which includes value-driven goals as an objective, and which has at least some nod to Freud.
If we are talking about phobia, there is some learning about the nature of general cognition, but the function or mechanism of change is exposure to the thing they are afraid of, absent the result they are afraid of, on repetition.
Based on what you're saying, it sounds as though the author of the piece isn't granting undue value to Freud's role in contemporary neuropsychiatry.
Obviously this isn't the character of my relationship with Freud's work, I just found it to be interesting since I assume, for the most part, that Freud was still dismissed by current practitioners. I also think it's too bad his work is dismissed wholesale as unscientific when much of it exhibits scientific tendencies behind the scenes. That is, I don't think that interpretive methodologies are necessarily unscientific.
Science, by nature, is conservative; a result isn’t even considered statistically significant unless the confidence level reaches at least 95%, often 99%. Global systems are full of complexity and noise, things that degrade statistical significance even in the presence of real effects—so scientific publications, almost by definition, tend to understate risk.
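The point about noise is easy to demonstrate with a toy simulation (nothing here is from the piece; the function name and parameters are mine). Hold a real effect fixed, crank up the noise, and watch how often a study at the conventional 95% threshold actually detects it:

```python
import random
import statistics

def detection_rate(effect, noise_sd, n=30, trials=2000, z=1.96):
    """Fraction of simulated studies whose sample mean clears the
    ~95% significance bar (|mean / standard error| > 1.96)."""
    hits = 0
    for _ in range(trials):
        # n measurements of a real effect, buried in Gaussian noise
        sample = [random.gauss(effect, noise_sd) for _ in range(n)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if abs(mean / se) > z:
            hits += 1
    return hits / trials

random.seed(0)
quiet = detection_rate(effect=1.0, noise_sd=1.0)  # clean system
noisy = detection_rate(effect=1.0, noise_sd=5.0)  # same real effect, noisier system
print(f"clean: {quiet:.0%} of studies significant; noisy: {noisy:.0%}")
```

The effect is identical in both runs; only the noise changes. In the clean case nearly every simulated study reaches significance, while in the noisy case most fail to—those real-but-"insignificant" effects are exactly what conservative reporting leaves out.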
By a couple dozen climate scientists.
Which might explain why, once we were finally able to collect field data to weigh against decades of computer projections, the best news was that observed CO2 emissions were only tracking the predicted worst-case scenario. Ice-cap melting and sea-level rise were worse than the predicted worst-case—and from what I can tell this is pretty typical. (I’ve been checking in on the relevant papers in Science and Nature since before the turn of the century, and I can remember maybe two papers in all that time that said Hey, this variable actually isn’t as bad as we thought!)
So saying that Wallace-Wells takes the worst-case scenario isn’t a criticism. It’s an endorsement. If anything, the man understates our predicament. Which made it a bit troubling to see even Ramez Naam—defender of dystopian fiction—weighing in against the New York piece. Calling it “bleak” and “misleading”, he accused Wallace-Wells of “underestimat[ing] Human ingenuity” and “exaggerat[ing] impacts”. He spoke of trend lines for anticipated temperature rise bending down, not up— and of course, he lamented the hopeless tone of the article which would, he felt, make it psychologically harder to take action.
I’m not sure where Ramez got his trend data—it doesn’t seem entirely consistent with what those Copenhagen folks had to say a few years back—but even if he’s right, it’s a little like saying Yes, we may be a hundred meters away from running into that iceberg, but over the past couple of hours we’ve actually managed to change course by three whole degrees! Progress! At this rate we’ll be able to miss the iceberg entirely in just another three or four kilometers!
I prefer to take the positive view that we are averting the catastrophe of an Ice Age - unless we merely avert it long enough for prevailing climate countermechanisms to send the earth back into one anyway. Losing Florida is nothing compared to glaciers covering half of the northern hemisphere. A much bigger problem for humans than climate change is antibiotic resistant bacteria (and a problem in which humans are easily, demonstrably responsible - although not 100%), and it's not getting the same level of hysterics.
An ice age is only one scenario; it's more complicated than either "Ice Age--we're fucked" or "No Ice Age--it's all good." There's a continuum of climate activity and disruption in between that affects regions across the globe, takes a toll on various species, and upsets local ecosystems. All of these have impacts on humanity, including on the global economy.
How is complaining about ecosystems shortsighted? I don't understand that.
I also don't understand your skepticism, but whatever.