HamburgerBoy
A second plank of the race science case goes like this: human bodies continued to evolve, at least until recently – with different groups developing different skin colours, predispositions to certain diseases, and things such as lactose tolerance. So why wouldn’t human brains continue evolving, too?
The problem here is that race scientists are not comparing like with like. Most of these physical changes involve single gene mutations, which can spread throughout a population in a relatively short span of evolutionary time. By contrast, intelligence – even the rather specific version measured by IQ – involves a network of potentially thousands of genes, which probably takes at least 100 millennia to evolve appreciably.
Given that so many genes, operating in different parts of the brain, contribute in some way to intelligence, it is hardly surprising that there is scant evidence of cognitive advance, at least over the last 100,000 years. The American palaeoanthropologist Ian Tattersall, widely acknowledged as one of the world’s leading experts on Cro-Magnons, has said that long before humans left Africa for Asia and Europe, they had already reached the end of the evolutionary line in terms of brain power. “We don’t have the right conditions for any meaningful biological evolution of the species,” he told an interviewer in 2000.
In fact, when it comes to potential differences in intelligence between groups, one of the remarkable dimensions of the human genome is how little genetic variation there is. DNA research conducted in 1987 suggested a common, African ancestor for all humans alive today: “mitochondrial Eve”, who lived around 200,000 years ago. Because of this relatively recent (in evolutionary terms) common ancestry, human beings share a remarkably high proportion of their genes compared to other mammals. The single subspecies of chimpanzee that lives in central Africa, for example, has significantly more genetic variation than does the entire human race.
No one has successfully isolated any genes “for” intelligence at all, and claims in this direction have turned to dust when subjected to peer review. As the Edinburgh University cognitive ageing specialist Prof Ian Deary put it, “It is difficult to name even one gene that is reliably associated with normal intelligence in young, healthy adults.” Intelligence doesn’t come neatly packaged and labelled on any single strand of DNA.
Afaik it's generally accepted that there's no single intelligence gene, and genes rarely do just one thing. Rather, combinations of genes being switched on or off lead to what could be considered emergent properties, one of which is whatever you want to call the cognitive performance we measure with IQ tests. Mutations have obviously occurred across many genes. Taking a distributed view of genetic intelligence, wouldn't it make sense that these multiple mutations/differences would also contribute to the variance in intelligence across races? There is variance within races and variance between races. When we refer to variance between, we're looking at mean differences that don't take into account variance within. Graphically, this looks like offset normal (Gaussian) distribution curves. These curves can have radically different variance within, with either "fat tails", meaning higher variance, or tighter clustering around the mean. Even if the means lined up perfectly, you could still wind up with markedly different results based on variance alone, since it's the right tail that is mostly responsible for humanity's advancement across all disciplines.
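To make the tail point concrete, here's a minimal sketch (illustrative numbers only, not real population parameters) of two normal distributions with the same mean but different spreads, and the fraction of each above a high cutoff:

```python
import math

def tail_fraction(mean, sd, cutoff):
    """Fraction of a normal distribution lying above `cutoff` (1 - CDF)."""
    z = (cutoff - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Two hypothetical populations: identical mean of 100, different variance.
narrow = tail_fraction(100, 12, 130)  # tight clustering around the mean
wide = tail_fraction(100, 18, 130)    # "fat tails": higher variance

print(f"above 130 with sd=12: {narrow:.4f}")  # ~0.6% of the population
print(f"above 130 with sd=18: {wide:.4f}")    # ~4.8% of the population
```

Even with identical means, the higher-variance curve puts several times as many people above the cutoff, which is the whole "right tail" argument in one calculation.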
Nothing in history suggests that all races and all cultures are relatively equivalent. There are measurable differences, they originate somewhere, and they have very stark material and ethical implications.
No one's saying there aren't measurable differences, and the author of that piece says as much.
The problem was that most of his identical twins were adopted into the same kinds of middle-class families. So it was hardly surprising that they ended up with similar IQs. In the relatively few cases where twins were adopted into families of different social classes and education levels, there ended up being huge disparities in IQ – in one case a 20-point gap; in another, 29 points, or the difference between “dullness” and “superior intelligence” in the parlance of some IQ classifications. In other words, where the environments differed substantially, nurture seems to have been a far more powerful influence than nature on IQ.
130 and above Very Superior
120–129 Superior
110–119 High Average
90–109 Average
80–89 Low Average
70–79 Borderline
69 and below Extremely Low
Yet people have not changed genetically since then. Instead, Flynn noted, they have become more exposed to abstract logic, which is the sliver of intelligence that IQ tests measure. Some populations are more exposed to abstraction than others, which is why their average IQ scores differ. Flynn found that the different averages between populations were therefore entirely environmental.
This finding has been reinforced by the changes in average IQ scores observed in some populations. The most rapid has been among Kenyan children – a rise of 26.3 points in the 14 years between 1984 and 1998, according to one study. The reason has nothing to do with genes. Instead, researchers found that, in the course of half a generation, nutrition, health and parental literacy had improved.
A 2008 paper, in which he linked “undetected serial rapists” with a propensity to commit serial and “crossover” acts of violence such as interpersonal attacks unrelated to sex, was shown to have provided no basis for such a generalization. His assertions, allegedly supported by a study he co-authored in 2010, that false accusations of sexual assault are exceedingly rare, have been shown to violate basic math by counting as true cases that didn’t qualify as sexual assault, had insufficient evidence to make a determination, or were referred for prosecution but about which the outcome was unknown.
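The counting problem described above can be sketched in a few lines. The case counts here are invented purely for illustration; the point is only how the classification choice moves the computed rate:

```python
# Hypothetical case counts, purely illustrative.
confirmed_false = 8
confirmed_true = 42
unresolved = 50  # insufficient evidence, unknown outcome, or didn't qualify

# Counting every unresolved case as a true report shrinks the apparent
# false-report rate:
inflated_denominator = confirmed_false / (confirmed_false + confirmed_true + unresolved)

# Restricting the denominator to cases that could actually be classified
# gives a different figure:
classified_only = confirmed_false / (confirmed_false + confirmed_true)

print(f"{inflated_denominator:.1%} vs {classified_only:.1%}")
```

Same underlying data, but the rate doubles depending on whether indeterminate cases are silently folded into the denominator, which is the "basic math" objection in a nutshell.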
As for Lisak's vague statements about having interviewed "hundreds" of serial rapists (occasionally styled as “thousands” when others talk about him), in truth no evidence exists that Lisak has interviewed any “undetected rapists,” serial or otherwise, since his dissertation research 30 years ago.
His claimed years of research turned out to be a handful of actual research publications, reviews full of editorializing about others’ research, rehashing of the dissertation he completed in 1989, and a website that deceptively merges that dissertation’s 1980s-era research on 12 college students with unrelated data from the 2002 paper on repeat offenders.
By its title alone, finding differences is sold as unethical. The article specifically raises South Africa as an example of why such investigation is bad and confounded, and attacks Jewish scientists as bigots. That's anti-Semitic! Clearly an unethical bias.
A 30-point difference isn't enough to produce that gap.
https://en.wikipedia.org/wiki/IQ_classification
The Wechsler Adult Intelligence Scale (WAIS) is pretty much the standard IQ test at this point, and has been for some years now; its classification scale is the one quoted above.
The biggest boost from "dullness" (whatever that is), which on the WAIS would at best be "Borderline", plus 30 points would be "Average" (even "Low Average" would only get to "High Average"). I don't know whether Bouchard was using an older version of the WAIS, but apparently neither does the writer, since he provides no links (or doesn't want to show them). Given that changes are supposed to be shown over time, I would assume relatively recent IQ comparisons would use the WAIS-III or WAIS-IV.
Lewis Terman (1916) developed the original notion of IQ and proposed this scale for classifying IQ scores:
- Over 140 - Genius or near genius
- 120 - 140 - Very superior intelligence
- 110 - 119 - Superior intelligence
- 90 - 109 - Normal or average intelligence
- 80 - 89 - Dullness
- 70 - 79 - Borderline deficiency
- Under 70 - Definite feeble-mindedness
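The two scales being argued over can be compared with a small lookup sketch; the band thresholds below are transcribed straight from the lists quoted in this thread (treating "Over 140" as 141+):

```python
def classify(score, bands):
    """Return the label of the first band whose lower bound the score meets."""
    for lower, label in bands:
        if score >= lower:
            return label
    return bands[-1][1]

# (lower bound, label) pairs, highest band first.
WAIS = [(130, "Very Superior"), (120, "Superior"), (110, "High Average"),
        (90, "Average"), (80, "Low Average"), (70, "Borderline"),
        (0, "Extremely Low")]
TERMAN = [(141, "Genius or near genius"), (120, "Very superior intelligence"),
          (110, "Superior intelligence"), (90, "Normal or average intelligence"),
          (80, "Dullness"), (70, "Borderline deficiency"),
          (0, "Definite feeble-mindedness")]

# A 29-point jump starting from Terman's "Dullness" band (80-89):
print(classify(85, TERMAN), "->", classify(85 + 29, TERMAN))
```

On Terman's own labels, 85 + 29 lands in "Superior intelligence", which is the gap the article describes; on the WAIS labels the same jump only reaches "High Average", which is the objection raised here. The disagreement is about labels, not arithmetic.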
He also lies about IQ tests testing a "sliver":
The only subtest on the WAIS to test verbal abstraction is the "Similarities" test.
https://en.wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale
Information & Vocabulary are based on the English/Western canon (though not specifically on Western history), but they aren't abstract. The rest test either symbolic reasoning or numeric memory. Separately, the Wechsler Memory Scale tests numeric, symbolic, and linguistic memory if one has trouble with numeric memory only.
I'd encourage you to resist that assumption. The article is interested in the shortcomings of race science, specifically the science of genetics as it applies to racial differences. The author isn't denying that there are differences in average intelligence between races; he's simply saying that there is no empirical evidence that the root cause of those differences is genetic. A scientific perspective that privileges empiricism shouldn't jump to certain conclusions based on speculative connections between general racial homogeneity and statistical commonalities. The proper response should be that while genetics could explain these differences, there is no way to empirically verify such connections. Therefore, it's irresponsible to promote such a theory based on current studies/research.
The studies that have been cited as supporting race science have also been subject to mild, if not gross, misinterpretation. I think that's the point of the article.
I know you want to wave this away as "virtue signaling"; but my impression is that it's impossible to make any argument comparable to this one without you dismissing it as virtue-signaling, which is why I find your comments suspect.
"Dullness" would be low average. So he's not incorrect:
http://www.wilderdom.com/intelligence/IQWhatScoresMean.html
WAIS uses the same intervals as Terman.
Again, this is not true, Dak.
The Similarities test assesses abstract verbal reasoning, yes; but the Comprehension portion assesses the "ability to express abstract social conventions," and Matrix Reasoning assesses "nonverbal abstract problem-solving." The verbal and perceptual categories both test for abstract logic. Memory and processing don't, but even these can only be tested via the application of abstractions. The test is very much invested in abstract thinking.
abstract logic, which is the sliver of intelligence that IQ tests measure.
Most of this feels like interpretation to me. "Dullness" corresponds to the same interval as "low average," so I don't think that's an issue. As far as abstract thought in the test goes, it really boils down to how narrowly we choose to define "abstract." You seem to suggest that semantic knowledge doesn't rely on abstract thought; but I'm not sure I agree (meaning itself always involves abstraction, in my opinion, even if we associate such abstraction with practical application). Vocabulary is almost always abstract, especially when questions ignore/downplay context.
Seeing as intelligence tests measure our aptitude for solving problems that are largely symbolic of real-world, practical issues, I'd say the test is mostly abstract.
Everything we’re doing does involve abstract thought; but it involves abstract thought combined with the practicalities of daily life. An IQ test isolates the symbolic/conceptual abstractions that inform everyday thought.
That’s why the author suggests that the test measures the “sliver” of intelligence that involves abstract thought.
Then what is this vast pie of total intelligence composed of, minus the sliver of symbolic/conceptual abstract thought?
Intelligence is largely behavioral. Patterns are abstract, but pattern-matching involves practice. The practical application of abstract thought is different than thought, but it's still intelligence. IQ tests can't measure the application of conceptual thought to real situations in-context. Knowing the definition of "feed" doesn't mean one accurately comprehends the phrase "feed the meter," and understanding abstract social conventions doesn't mean one intelligently applies such understanding in context.
Either way, I don’t get Jung, so I judge by what I do know: for someone who lauds Nietzsche and Dostoyevsky, Peterson fails to see Spooky Postmodernism on his own terms. It isn’t “cultural relativism” or “anti-Christianity,” it’s the opposite. They care about truth and Platonic moralism all too much. Only if you cared about truth would it be a problem for it to be relative, only with the Christian moral backing would “truth is relative, but let’s go get the oppressor” make sense. Otherwise you’d shrug. “Weird, that path failed. Whelp, time to switch majors to Finance.”