Dakryn's Batshit Theory of the Week

I'm only saying that the stock market isn't some odious vulgarization of the real market. It's a very important component of the market, with very important consequences.

Sure, but my point is that a market's importance, and the importance of its consequences, goes down as it moves away from the market - or maybe more accurately, as the market moves away from being a market.

Reinforcement, concretization... these mean something different from "increase" in a quantifiable sense. When I say distance is reinforced, or concretized, I mean that some material force enters into our lives that reminds us of it, that makes distance discernible. The concreteness of distance - what you're taking to be its actual material space, its measurement - could only be acknowledged abstractly, prior to media intervention. Now, communication is possible over great distances; but the distance itself must be realized, it must be traversed by inhuman technologies. And humans, as much as they rely on and utilize these technologies, come to concrete terms with their alienation and separation from each other.

I don't think that the traversing of distance by technology reinforces it, forces us to realize it, etc. It does exactly the opposite. By facsimile, it allows us to ignore the distance, or reduce its effects.

What you're suggesting, in practical application, is that the 1700s international traveler, who required multiyear separations from friends and family to traverse the Atlantic, with only time in between for a handful of correspondence, felt closer to those left behind (i.e. realized the distance less) than the 21st-century jet-set executive with the ability to be back and forth in a day, and the tools of video chat, email, phone conferencing, etc. at his or her disposal.
This doesn't hold up.

Subjectivity is an objective illusion, or fantasy. Thus, it's very real. But the senses aren't an extension of subjectivity; the senses are what give rise to subjectivity. They exist prior to subjecthood. Only after the fact do we appropriate them in a way that identifies them as for human beings.

What modernity has shown us, with the invention of mass media and the cinema show, is that our eyes are the ultimate movie screen; our ears, the ultimate phonograph. We've completed a Cartesian loop whereby we retrospect our own subjectivity and believe our senses to be the essential tools we use to gather data about the world. The challenge to empiricism, on the contrary, is that our senses constitute the ultimate ideological apparatus, as you've already suggested by claiming that we can't think non-anthropomorphically.

I would insist, however, that thought itself can be non-anthropomorphic because thought need not necessarily conform to sensory perceptions; it just so happens that, after millennia of evolutionary developments and internalization of the world according to sensory perception, our very consciousness is constituted by the way we perceive. If we make an effort to explore and confirm modes of thinking beyond the ideological appropriation of the senses (something that our Western culture has largely failed to do, along with all philosophies associated with whatever market economy you claim we possess), then we can approach the actualization of non-anthropomorphic thought.

It's merely another step in the long process. The trick is seeing all this compression, indexation, and transcription as the way in which our ability to know has always been, in a strange way, apart from us. Our senses themselves only become natural after the fact, and then we accept our forms of knowledge as natural and efficient. But these external technologies are not so much extensions of our senses as they are new manifestations of the senses which are always-already external.

So you're reducing "us" to either brains and/or spirit, if we see sensory receptors as external. It's almost analytical in its hyperdivision of the body. The eyes are not the brain are not the arm.

How would you, as whatever you want to call you, confirm that a thought you thought was not of (or through, and therefore "tampered" with by) you?
 
Sure, but my point is that a market's importance, and the importance of its consequences, goes down as it moves away from the market - or maybe more accurately, as the market moves away from being a market.

So we're back to separating them. I still don't think you can distinguish the stock market from the market. I still think it's a "real" market.

I don't think that the traversing of distance by technology reinforces it, forces us to realize it, etc. It does exactly the opposite. By facsimile, it allows us to ignore the distance, or reduce its effects.

What you're suggesting, in practical application, is that the 1700s international traveler, who required multiyear separations from friends and family to traverse the Atlantic, with only time in between for a handful of correspondence, felt closer to those left behind (i.e. realized the distance less) than the 21st-century jet-set executive with the ability to be back and forth in a day, and the tools of video chat, email, phone conferencing, etc. at his or her disposal.
This doesn't hold up.

No, I'm saying that when he could communicate with his family via telephone or internet, this distance became phenomenologically actualized. How could the traveler in the eighteenth century perceive the distance?

So you're reducing "us" to either brains and/or spirit, if we see sensory receptors as external. It's almost analytical in its hyperdivision of the body. The eyes are not the brain are not the arm.

How would you, as whatever you want to call you, confirm that a thought you thought was not of (or through, and therefore "tampered" with by) you?

I'm saying the subject is a projection. Not spirit or essence; it's a retrospection. I wouldn't call it analytic as much as I would call it Deleuzian, or perhaps poststructuralist in a broader sense.

If subjects are projections/retrospections/illusions, then no thoughts are "of you." It's the opposite: thoughts create the impression of "you."
 
So we're back to separating them. I still don't think you can distinguish the stock market from the market. I still think it's a "real" market.

A market within the market. Like a farmers market. Or a flea market. Or a stock index.

No, I'm saying that when he could communicate with his family via telephone or internet, this distance became phenomenologically actualized. How could the traveler in the eighteenth century perceive the distance?

The 18th-century traveler perceived distance in a spatial sense because he traversed it. He perceived it to a greater degree due to the extended time required for him, or his correspondence, to traverse it. New technologies shrink the latter reality (the time), which greatly influences our perception of the former (the spatial distance).

The faster a distance is traversed, the shorter it seems. Once it has been traversed once, it seems shorter still on subsequent trips. This has to do with how the brain processes new information, particularly in relation to time. I read an article on this recently; it might have been in Aeon.

For example, I've been to Montana before. When I talk to my friend there, I feel much closer than I did communicating with people in California when I had never been there. But even without having been there, I felt closer to the people in Cali when communicating in real time than families of gold prospectors felt in the 1800s - and not only because of the communication, but because if I wanted to I could be in Cali in 2-3 days driving (never mind under a day flying) rather than 6 months with risk of disease, Indian attack, blizzard, etc., etc.

I'm saying the subject is a projection. Not spirit or essence; it's a retrospection. I wouldn't call it analytic as much as I would call it Deleuzian, or perhaps poststructuralist in a broader sense.

If subjects are projections/retrospections/illusions, then no thoughts are "of you." It's the opposite: thoughts create the impression of "you."

The impression of me to me or of me to others? The latter doesn't seem very radical.
 
A market within the market. Like a farmers market. Or a flea market. Or a stock index.

For a long time you seemed to be arguing that the stock market doesn't count as a market at all. So I'm willing to just concede the above point.

The 18th-century traveler perceived distance in a spatial sense because he traversed it.

Not quite; the traveler experienced it in a temporal sense (or, at the very least, we must say a spatiotemporal sense). Not in a purely spatial sense. In order to experience it in a spatial sense, the traveler would have to expand bodily to encompass the entire space.

He perceived it to a greater degree due to the extended time required for him, or his correspondence, to traverse it. New technologies shrink the latter reality (the time), which greatly influences our perception of the former (the spatial distance).

The faster a distance is traversed, the shorter it seems. Once it has been traversed once, it seems shorter still on subsequent trips. This has to do with how the brain processes new information, particularly in relation to time. I read an article on this recently; it might have been in Aeon.

Your position sounds a lot like Paul Virilio's: technology in relation to speed, and how this seems to shrink our horizons and make measurable distances obsolete. For Virilio, technological acceleration reduces physical distance.

The scientific component of this is simply not the case; the distance, of course, remains the same.

What cybernetics does is reduce the temporal gap, not the spatial gap. In order to do this, cybernetics occupies the gap; and thus, it appears to us as though we consciously occupy the gap. But cybernetics doesn't allow for immediate contact or communication between human beings; our physical bodies don't come in contact with each other. Technology allows for the mediation of human beings across vast distances, thus turning communication into something inhuman. When you talk with your friend over the phone, you may certainly feel closer; but the communication itself has altered the human forms on either end of the line.

Distance appears less to us because of the consciousness that projects itself into the machines. The distance isn't actually lessened. In fact, the technology feeds on our inability to bridge the gap in order to colonize our consciousnesses, you might say.

And I never suggested that the eighteenth-century explorer felt closer to his family than we do today.

The impression of me to me or of me to others? The latter doesn't seem very radical.

It would be both. But you're correct, of course, the former is the less obvious.
 
For a long time you seemed to be arguing that the stock market doesn't count as a market at all. So I'm willing to just concede the above point.

I never said stock indexes never count at all. Just that the modern stock indexes are counting less and less.

Not quite; the traveler experienced it in a temporal sense (or, at the very least, we must say a spatiotemporal sense). Not in a purely spatial sense. In order to experience it in a spatial sense, the traveler would have to expand bodily to encompass the entire space.

Your position sounds a lot like Paul Virilio's: technology in relation to speed, and how this seems to shrink our horizons and make measurable distances obsolete. For Virilio, technological acceleration reduces physical distance.

The scientific component of this is simply not the case; the distance, of course, remains the same.

What cybernetics does is reduce the temporal gap, not the spatial gap. In order to do this, cybernetics occupies the gap; and thus, it appears to us as though we consciously occupy the gap. But cybernetics doesn't allow for immediate contact or communication between human beings; our physical bodies don't come in contact with each other. Technology allows for the mediation of human beings across vast distances, thus turning communication into something inhuman. When you talk with your friend over the phone, you may certainly feel closer; but the communication itself has altered the human forms on either end of the line.

Distance appears less to us because of the consciousness that projects itself into the machines. The distance isn't actually lessened. In fact, the technology feeds on our inability to bridge the gap in order to colonize our consciousnesses, you might say.

Obviously the spatial distance remains the same. But our experience changes, and changes to a perception of less distance, not more.

To say that technology, or the market, "feeds" on the disconnect is again a - possibly unconsciously - negative word choice, and unduly so.


And I never suggested that the eighteenth-century explorer felt closer to his family than we do today.

That's what I see as the practical application of what you submitted at the time.

It would be both. But you're correct, of course, the former is the less obvious.

The former is little more than a "chicken and egg" question.
 
America has 3 Personality Regions

Interesting research based on the Big Five (five-factor model) personality assessment. Most interesting is that Texas has more in common with the Northeast than anywhere else, and that those regions have the most negative trait associations (not really a surprise).

Also personally interesting that North Carolina is an odd duck of sorts within its geographic area.
 
You say "chicken and egg" as though that automatically makes it not worth pursuing...

Well, "not worth pursuing" is a subjective value determination, but to pursue it (as per our current understanding) means holding out hope for "sneaking up on the thing itself" - with the thing itself.

Separately:

WSJ: Druckenmiller: Entitlements = Baby Boomers ripping off Millennials

There's some honesty out of a Baby Boomer. It quite irritates me when my Grandmother (not a Baby Boomer herself, obviously) starts harping on the "youth of today" and bashing entitlement when she is pulling ALL her money from entitlements: two Social Securities, my deceased Grandfather's Post Office pension, farm subsidy money, and any sort of payout from investments made with money from the same sources - investments which, if they are doing well, are doing so because of QE - more theft from the rest of America.
 
Well, "not worth pursuing" is a subjective value determination, but to pursue it (as per our current understanding) means holding out hope for "sneaking up on the thing itself" - with the thing itself.

Yeah... like trying to shine a flashlight beam onto the flashlight. But what the hell's producing the beam?

I'm not ignorant of the logical issues, but consigning yourself to simply believing that the flashlight is there (and knowing what it is) isn't good enough for me.

I think the subject is a product of language and collective, symbolic interaction. The paradox comes about when we ask: but don't we need a subject to create symbols and language? But this also assumes that language requires a creator... Language doesn't contain the answers within itself (i.e. words are material things; they don't speak). Neuroscience and cognitive studies will (hopefully) shed more light on the flashlight in the future.

I know you've been posting other things, no time to look at all of it; but will soon...
 
Yeah... like trying to shine a flashlight beam onto the flashlight. But what the hell's producing the beam?

The problem is that a flashlight can't shine on itself directly. Now, we could use a mirror..... :D

I'm not ignorant of the logical issues, but consigning yourself to simply believing that the flashlight is there (and knowing what it is) isn't good enough for me.

I wouldn't use the word consign, but going back to value judgements and thinking in terms of probability: it's the least likely problem for us to solve - I would say in the whole of potential knowledge. In light of that, I personally won't put much effort behind trying to turn the flashlight back on itself.

Neuroscience and cognitive studies will (hopefully) shed more light on the flashlight in the future.

About those mirrors...
 
http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.

Various factors contribute to the problem. Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

One of those occasions my intuition on human behavior looks to be correct.
 
Maybe.

The author cites several studies that were identified as tendentious and subsequently retracted. Isn't that proof that the system is self-correcting? I think The Economist is getting a little big for its britches.
 
No one said it wasn't self-correcting at some point. That's irrelevant, particularly to the portion I quoted. It's a systemic incentivization problem. Cranking out a bunch of more or less intentionally BS studies is a drag on resources, both in running the studies and in having to go back and check them, and it does a disservice to the community and humanity in general.
 
I don't think it's a problem. I think this article is misleading. I think its findings can themselves be subsumed under the poor research it accuses academic journals and institutions of publishing, and I think its examples are well known for incompetence among the larger academic community.
 
I don't think it's a problem. I think this article is misleading. I think its findings can themselves be subsumed under the poor research it accuses academic journals and institutions of publishing, and I think its examples are well known for incompetence among the larger academic community.

Are they? Is the systemic root of the problem being corrected? Are incentives being adjusted? Etc. While it appears that standards for what constitutes good research remain high, the application of those standards is wanting. It's a sad state when confidence in *any* field of study is below 50%.

I know researchers get caught between a lack of funding and publish-or-perish. Publishing accurate null results is unlikely (they're boring!), so one must find something exciting to "prove". Massive conflict of interest, not only for the researcher(s) but for the organization at large. Eventual shortcomings of the study are merely a reason to request more funding - and it's easier to "ask for forgiveness" if the research is shown to be flawed. A for effort though, right?

To head off charges of "conspiracy" and whatnot, I'm appealing to the banality of it. The assistants will collect the data requested, and the leads will compile it, etc. But how much data the funding allows for, whether it allows for fully proper protection against confounding factors, etc. can be rationally "swept under the rug". Then statistical treatment is another area ripe for a fudge here and an exclusion there to dress up the findings - possibly completely unintentionally. Then you have the protectionism: a "white" wall of silence, perhaps (certainly nothing close to the blue or green wall, of course).

Such headlines are rare, though, because replication is hard and thankless. Journals, thirsty for novelty, show little interest in it; though minimum-threshold journals could change this, they have yet to do so in a big way. Most academic researchers would rather spend time on work that is more likely to enhance their careers. This is especially true of junior researchers, who are aware that overzealous replication can be seen as an implicit challenge to authority. Often, only people with an axe to grind pursue replications with vigour—a state of affairs which makes people wary of having their work replicated.

There are ways, too, to make replication difficult. Reproducing research done by others often requires access to their original methods and data. A study published last month in PeerJ by Melissa Haendel, of the Oregon Health and Science University, and colleagues found that more than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results. On data, Christine Laine, the editor of the Annals of Internal Medicine, told the peer-review congress in Chicago that five years ago about 60% of researchers said they would share their raw data if asked; now just 45% do. Journals' growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr Ioannidis showed that only 143 of 351 randomly selected papers published in the world's 50 leading journals and covered by some data-sharing policy actually complied.

I understand the purpose behind publish or perish, but it leads to this. Jobs and reputations are on the line for quantity rather than quality. The result is pretty predictable.

Interestingly enough:

Things appear to be moving fastest in psychology. In March Dr Nosek unveiled the Centre for Open Science, a new independent laboratory, endowed with $5.3m from the Arnold Foundation, which aims to make replication respectable. Thanks to Alan Kraut, the director of the Association for Psychological Science, Perspectives on Psychological Science, one of the association’s flagship publications, will soon have a section devoted to replications. It might be a venue for papers from a project, spearheaded by Dr Nosek, to replicate 100 studies across the whole of psychology that were published in the first three months of 2008 in three leading psychology journals.
 
Yes, they are; PLOS One accepts roughly 50%, and it's not considered a highly reputable journal. Can you imagine how difficult it is to get published in top notch ones?

Is it true that misinformation gets published in all fields, and in all journals? The answer is: yes. But the notion of "misinformation" is problematic in itself; what we should expect from the long scientific process isn't consistently accurate or even close results. Just as experiments will always have outliers and anomalies, so will the entire process of scientific development and study.

This article misleads the public to believe that the majority of academic study is misguided and flawed. This is a fatal error. The most prominent journals and institutions, which in turn provide parameters for further research done by other institutions, provide rigorous panels of review and assessment. Does this mean they'll catch all tendentious publications? Not at all; but it does mean that academic research is a long process, and it is certainly self-correcting.

Publish or perish won't be fixed by any privatization or market solution. If anything, that will amplify the problem. What the author doesn't realize is that poor publications invite self-correction, and many scholars are going to see the mistakes and jump on the chance to provide correction. The article makes it seem as though, somehow, we're going to spiral into a quagmire of mistaken results.

EDIT:

Disclosure: I am not a statistician, but I'll try to present the article's main argument. The Economist argues that at the standard significance threshold of 5%, meaning that the results observed would occur by chance only 1 in 20 times, many of the conclusions are incorrect. This is based on their estimate that only 10% of hypotheses are correct (the basis for this estimate is not at all clear). Therefore, with a power of 0.8 (i.e. 2 in 10 true hypotheses fail to reach significance by chance), one finds 8% of all hypotheses correctly confirmed and 4.5% falsely confirmed (the 90% wrong hypotheses multiplied by 5%). In other words, roughly a third of the positive results (4.5 out of 12.5) are incorrect due to chance.

The major flaw in this argument is the assumption that a hypothesis is deemed "correct" based on one result with a 5% significance finding. Wouldn't that be nice, if I could publish a paper with only one figure: THE experiment showing my model is correct! Rather, in good publications, multiple experiments are done to test one hypothesis from many different angles. In fact, a paper from my lab that was just accepted last week by Molecular Microbiology has 14 data figures! Therefore, if an average paper uses three pieces of evidence to support one hypothesis, each significant at the 5% level, then the probability that all the observations occurred by chance would be 0.05^3, or 0.000125. Even then, good scientists do not state their hypothesis is "correct", but rather "supported" by the current data. On top of that, ideas are not fully accepted into the science lexicon until they have been repeated by others in different settings, further adding to the rigorous bar science must cross before becoming a widely accepted hypothesis. The cream rises to the top and the rest falls by the wayside.
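
For anyone who wants to check the arithmetic, here's a minimal Python sketch of both calculations. It only restates the numbers above; the inputs (1,000 hypotheses, 10% true, power 0.8, 5% significance) are the article's assumptions, not data, and the second figure additionally assumes the three results are independent:

# Sketch of the two calculations above; all inputs are assumed, not measured.
total = 1000                    # hypotheses tested
true_frac = 0.10                # the article's (admittedly unclear) estimate
power = 0.8                     # chance a true effect reaches significance
alpha = 0.05                    # significance threshold

true_hyps = total * true_frac              # 100 true hypotheses
false_hyps = total - true_hyps             # 900 false hypotheses

true_pos = true_hyps * power               # 80  -> the "8% correct"
false_pos = false_hyps * alpha             # 45  -> the "4.5% incorrect"

print(false_pos / (true_pos + false_pos))  # ~0.36: about a third of positives are wrong

# The rebuttal: three independent results, each at the 5% level,
# all coming up positive by chance alone:
print(alpha ** 3)                          # 0.000125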
 
Yes, they are; PLOS One accepts roughly 50%, and it's not considered a highly reputable journal. Can you imagine how difficult it is to get published in top notch ones?

You're moving the goalposts.

Is it true that misinformation gets published in all fields, and in all journals? The answer is: yes. But the notion of "misinformation" is problematic in itself; what we should expect from the long scientific process isn't consistently accurate or even close results. Just as experiments will always have outliers and anomalies, so will the entire process of scientific development and study.

If it were merely limited to outliers at this point, journals and whole fields wouldn't be trying to self-correct at the moment. You are insinuating this correction is unnecessary, and that Dr. Alberts is completely off base in suggesting that there need to be fundamental changes to the research, review, and publication process.

Your argument is that correction isn't needed because science self-corrects......

Publish or perish won't be fixed by any privatization or market solution. If anything, that will amplify the problem. What the author doesn't realize is that poor publications invite self-correction, and many scholars are going to see the mistakes and jump on the chance to provide correction. The article makes it seem as though, somehow, we're going to spiral into a quagmire of mistaken results.

Three separate things:

1. Privatization: The fact that there isn't necessarily a "product" coming out of various fields of research means that it's probably not a field for business to handle. That doesn't mean it couldn't benefit from privatization. Look at MIT.

2. As noted in the article, correcting isn't a walk in the park. It can be sandbagged and stonewalled in a variety of ways.

3. The article does not suggest we are heading for such a quagmire; it lays out an argument that we are already in a quagmire, compared to what *should* be. It specifically highlights positive changes being made in various journals aimed at correcting the problem.
 
You're moving the goalposts.

No, I'm not. I'm saying the article misjudges the merit of PLOS One, and claims this single publication represents the standard of all academic journals.

If it were merely limited to outliers at this point, journals and whole fields wouldn't be trying to self-correct at the moment. You are insinuating this correction is unnecessary, and that Dr. Alberts is completely off base in suggesting that there need to be fundamental changes to the research, review, and publication process.

Your argument is that correction isn't needed because science self-corrects......

No; I'm saying that it already self-corrects. The author is claiming that it doesn't happen. Is this seriously so difficult to understand?

Three separate things:

1. Privatization: The fact that there isn't necessarily a "product" coming out of various fields of research means that it's probably not a field for business to handle. That doesn't mean it couldn't benefit from privatization. Look at MIT.

Fair point; I suppose I jumped the gun. Look at MIT; and Harvard, and University of Chicago, and Boston University, and...

...a slew of other private universities that set the bar for others.

2. As noted in the article, correcting isn't a walk in the park. It can be sandbagged and stonewalled in a variety of ways.

3. The article does not suggest we are heading for such a quagmire; it lays out an argument that we are already in a quagmire, compared to what *should* be. It specifically highlights positive changes being made in various journals aimed at correcting the problem.

Which is self-correction. This article is a bundle of contradictions.
 
No, I'm not. I'm saying the article misjudges the merit of PLOS One, and claims this single publication represents the standard of all academic journals.

The merit of a given journal isn't the important takeaway. The point was that shoddy science is getting done and published to a degree that >50% of published (by some journal) work in some fields/areas is more or less useless.

PLoS is a red herring here, one way or the other, since other more respected journals are "upping their game" - which they wouldn't need to if there hadn't been a problem.

No; I'm saying that it already self-corrects. The author is claiming that it doesn't happen. Is this seriously so difficult to understand?

Doesn't happen? Or hasn't been happening to the degree expected and/or desired?

I found the latter rather obvious, and Dr. Alberts testified as much to Congress.

In testimony before Congress on March 5th Bruce Alberts, then the editor of Science, outlined what needs to be done to bolster the credibility of the scientific enterprise. Journals must do more to enforce standards. Checklists such as the one introduced by Nature should be adopted widely, to help guard against the most common research errors. Budding scientists must be taught technical skills, including statistics, and must be imbued with scepticism towards their own results and those of others. Researchers ought to be judged on the basis of the quality, not the quantity, of their work. Funding agencies should encourage replications and lower the barriers to reporting serious efforts which failed to reproduce a published result. Information about such failures ought to be attached to the original publications.

This should be a given - not professional Congressional testimony.
 
The merit of a given journal isn't the important takeaway. The point was that shoddy science is getting done and published to a degree that >50% of published (by some journal) work in some fields/areas is more or less useless.

It is an important takeaway when the author is implying that its merit is representative of academic journals in general. It's misrepresentation.

PLoS is a red herring here, one way or the other, since other more respected journals are "upping their game" - which they wouldn't need to if there hadn't been a problem.

They're not "upping their game"! They've always been thorough.

Doesn't happen? Or hasn't been happening to the degree expected and/or desired?

Both would be false. Dr. Alberts's testimony makes it sound as though harmfully false and flawed results are being published all over the place. While this may happen from time to time, "publish or perish" does not encourage scholars to perpetually publish false results.

The reason for this is simple: if a journal is found to have published a poor paper that pushes flawed results, and if a particular scholar is found to be lax in his or her methods, then they begin to lose what reputation they have remarkably quickly.

You, in fact, argue ruthlessly that businesses and individuals shouldn't be charged legally for doing something reproachable in a business transaction; the loss of reputation and trust will be enough to ruin them. The same works in academia, in fact to a much greater degree. Once it comes to the surface that a journal or scholar has published incorrect or misleading results, both lose their credibility.