Down the rabbit hole of biological determinism
“Once upon a time . . .”
At the turn of the millennium, excitement around the dizzying possibilities of genetic research was still high. People wondered whether gene therapy could someday cure cancer. Researchers imagined they would find genes for everything from being tall to being gay, and speculated that we might even build designer babies by tinkering with our DNA. And two scientists working for the National Cancer Institute in the United States wrote a fairy tale.
Their protagonist was a well-meaning geneticist who one day begins to wonder why some people use chopsticks to eat their food and others don’t. So of course, the hero does what all good experimentalists do: he rounds up several hundred students from his local university and asks them how often they each use chopsticks. Then he sensibly cross-references that data with their DNA and begins his hunt for a gene that shows some link between the two.
Lo and behold, he finds it!
“One of the markers, located right in the middle of a region previously linked to several behavioral traits, showed a huge correlation to chopstick use,” the tale goes. He has discovered what he decides to call the “successful-use-of-selected-hand-instruments” gene, neatly abbreviated to SUSHI. The magic spell is cast. The experiment is successfully replicated, the scientist gets his paper published, and he lives happily ever after.
This might have been the end were it not for one fatal yet obvious flaw. It takes him as long as two years to hit upon the uncomfortable realization that his research contains a mistake. The SUSHI gene he thought he found just happened to occur in higher frequencies in Asian populations. So it wasn’t the gene that made people better at using chopsticks; it was that people who used chopsticks for cultural reasons tended to share this one gene a little more often. He had fallen headlong into the trap of assuming that a link between chopsticks use and the gene was causal, when in fact it wasn’t. The spell was lifted and the magic was gone.
Like all good fairy tales, there was a moral to this story. Although not everyone could see it.
In 2005 the hype around genetics had begun to fizzle out, slowly replaced with a healthier skepticism. Scientists began to wonder whether our bodies might not be quite as straightforward as they had thought. And then along came a young geneticist at the University of Chicago in the United States with an extraordinary claim.
Bruce Lahn’s work was a shot in the arm for those who had always hoped that genes could explain everything, for the biological determinists who believed we were anything but blank slates, that in fact much of what we are is decided on the day we’re conceived. It was a claim so bold it implied that maybe even the course of history could be decided by something as tiny as one gene.
Lahn had originally emigrated from China to study at Harvard University, and soon became regarded as a cocky maverick who didn’t follow instructions, who did things his own way. A while after arriving in the United States, he changed his name from Lan Tian to Bruce Lahn in honor of the legendary actor and martial arts expert Bruce Lee. The science journalist Michael Balter describes in a profile how once, when invited to go on a two-day hike with his colleagues, Lahn turned up with nothing but a jar of pickled eggs.
“He was kinda the whiz kid; he was kinda the darling,” Balter recalls.
Lahn moved up the academic ladder at lightning speed. In 1999 he was named one of MIT Technology Review’s thirty innovators under thirty. Then in 2005 he published a pair of studies in the prestigious journal Science drawing a connection between a couple of genes and changes in human brain size. He and his colleagues stated that as recently as 5,800 years ago (just a heartbeat in evolutionary time), one genetic variant that was linked to the brain among other things had emerged and swept through populations because of evolution by natural selection. Their implication was that it bestowed some kind of survival advantage on our species, making our brains bigger and smarter. At the same time, he noted that this particular variant happened to be more common among people living in Europe, the Middle East, North Africa, and parts of East Asia but was curiously rare in Africa and South America. Lahn speculated that perhaps “the human brain is still undergoing rapid adaptive evolution”—although not for everyone in the same way.
His work was a sensation. What set pulses racing above all was his observation that the timing of the spread of this gene variant seemed to coincide with the rise of the world’s earliest civilization, in ancient Mesopotamia, which saw the emergence of some of the first sophisticated human cultures and written language. Lahn seemed to imply that the brains of different population groups might have evolved in different directions for the past five millennia, and that this may have caused the groups with this special genetic difference to become more sophisticated than others. In brief, that Europeans and Asians had benefited from a cognitive boost, while Africans had languished without it, perhaps were still languishing.
Racists ate it up and asked for second helpings. After all, here was hard scientific evidence that seemed to corroborate what all those nineteenth-century colonialists and twentieth-century contributors to the Mankind Quarterly had always claimed, that some nations were intellectually inferior to others. Their failures to be economically prosperous were rooted not in history, but in nature. “There will be plenty more results where these came from,” predicted John Derbyshire, a right-wing commentator who wrote for the American conservative magazine National Review. Lahn also attracted support from the late Henry Harpending, a geneticist at the University of Utah and the coauthor of a controversial book arguing that biology could explain why Europeans conquered the Americas and also that European Jews had evolved to be smarter on average than everyone else.
But there were problems with Lahn’s findings. Even if his gene variants did show up with different frequencies in certain populations, it didn’t necessarily mean that they provided those who had them with a cognitive advantage. They were known to be linked to organs other than the brain, so if the variants were selected for, maybe this was nothing to do with intelligence. The hypothesis needed more evidence.
So, soon after the papers were published, the controversial Canadian psychologist John Philippe Rushton ran IQ tests on hundreds of people to see if possessing the gene variants really did make a difference. Try as he might (and we can reasonably assume that as the head of the Pioneer Fund at the time, he tried his hardest), he couldn’t find any evidence they did. The variants increased neither head circumference nor general mental ability.
Before long, critics piled on across the board, undermining every one of Lahn’s scientific and historical assertions. For a start, the gene variant he described as emerging 5,800 years ago could actually have appeared within a time range as wide as 500 to 14,100 years ago, so it may not have coincided with any major historical events. The respected University of Pennsylvania geneticist Sarah Tishkoff, who had been a coauthor on his papers, distanced herself from the suggestion that the gene variants in question might be linked to advances in human culture, as Lahn had suggested.
There were doubts, too, that Lahn’s gene variants had seen any recent selection pressure at all. Tishkoff tells me that scientists today universally recognize intelligence as a highly complex trait, not only influenced by many genes but also likely to have evolved during the far longer portion of human history when we were all hunter-gatherers, until around ten thousand years ago. “There have been common selection pressures for intelligence,” she explains. “People don’t survive if they’re not smart and able to communicate. There’s no reason to think that there would be differential selection in different populations. That doesn’t mean somebody won’t find something someday. Maybe it’s possible, but I don’t think there’s any evidence right now that supports those claims.”
In the end, Lahn had no choice but to abandon this line of research. “It was pretty damaging, because a lot of illustrious researchers either couldn’t replicate his original findings or did not come to the same conclusions,” explains Michael Balter, who interviewed Lahn, his critics, and his supporters at the time. Science, the journal that published his papers, even came under attack for including the more speculative portions of his work in the first place.
Lahn was partly a victim of how science works these days. The big discoveries have been made, so researchers often have little choice but to drill down into small, specific areas within biology. To make a name, they need themselves and the world to believe that this little thing they’re studying is significant. According to Martin Yuille, a molecular biologist at the University of Manchester, “If you’re going to do an experiment you have to be reductionist. You have to look for one of the factors that is associated with a phenomenon, and you’re tempted inevitably to try to think of that factor as being a cause, even though you know it is actually an association. So you’re kind of driven to it.
“It is all too easy to exaggerate the role of the one variant of a gene that you might identify as associated with a trait. . . . But you need to be modest.”
In this case, the world had seen the chopsticks fairy tale play out for real. In hindsight, it seems obvious that just because a genetic change in the brain may be more common in certain geographical populations than in others, that’s no basis for claiming that it could be responsible for the economic or political fortunes of entire regions. By any measure, this was an irresponsible leap of faulty logic. But then Lahn was known to be cocky, to do things his own way.
Gerhard Meisenberg from the Mankind Quarterly made the same assumption when I interviewed him—that the innate abilities of a country’s people are what define its success, even if we don’t have any scientific evidence for it. It’s an idea that has underpinned racist thought for centuries. It rests on the assumption that groups fall into ranks based on immutable biological features. It has the scent of the multiregional hypothesis, implying that nature has taken different tracks, that some of us are more “highly evolved” than others.
When I contact Lahn, now a professor of genetics at the University of Chicago, it has been more than a decade since his controversial papers were published. In 2009, undeterred by his embarrassing failure, he wrote a piece in another topflight journal, Nature, calling for the scientific community to be morally prepared for the possibility that they might find differences between populations, to embrace “group diversity” in the way that societies already cherish cultural diversity. He argued that “biological egalitarianism” won’t be viable for much longer, implying that not all population groups are actually equal. He tells me that he’s still “open to the possibility that there may be genetic differences in intelligence between modern populations, just like there may be genetic differences in other biological traits between modern populations such as bodily measures, pigmentation, disease susceptibility and dietary adaptation.”
As I learn, his hypothesis hasn’t changed. Lahn sticks firmly to the line that he is guided by science, wherever this may take him. “Before there is data, these are just possibilities,” he explains. “My nose follows the scientific method and data, not politics. I am willing to let the chips of data fall where they may, as any self-respecting scientist should.”
Barbara Katz Rothman, professor of sociology at the City University of New York, has written: “Genetics isn’t just a science; it’s a way of thinking. . . . In this way of thinking, the seed contains all it could be. It is pure potential.”
For Eric Turkheimer, the assumption that propels race research today in all its various forms is a sign of this deterministic pathology. “There are people out there who think in a serious way that they’re going to link up gene effects, the things you see in brain scans, the things you see on IQ tests,” he tells me. They are looking for that elusive mechanism, that magic formula which will allow them to take the genomes of people from Europe, or Africa, or China or India, or anywhere else, and prove beyond a shadow of a doubt that one population group really is smarter than another. It’s all there in our bodies just waiting to be discovered.
“It’s a racist hypothesis,” he adds.
The origins of this idea—that everything is in the genes—date back to the middle of the nineteenth century, when Gregor Mendel, an Augustinian friar in Brno, Moravia, then part of the Austrian Empire, became fascinated by plant hybrids. Working in the garden of his monastery, Mendel took seven strains of pea and bred them selectively until each one produced identical offspring every time. With these true-bred pea plants, he began to experiment, observing carefully to see what happened when different varieties were crossed. Nobody knew about genes at this point, and in fact Mendel’s paper on the topic published in 1866 went largely unnoticed within his lifetime. But his experimental finding that traits such as color were being passed down the generations in certain patterns would form the linchpin of how geneticists in the following century thought about inheritance.
Once scientists understood that there were discrete packets of information in our cells that dictated how our bodies were built, and that we got these packets in roughly equal measure from each parent, the science of heredity finally took off. And it took almost no time for the political implications to be recognized. In 1905 the English biologist William Bateson, Mendel’s principal popularizer, predicted that it “would soon provide power on a stupendous scale.”
Mendelism became a creed, an approach to thinking about human biology which suggested that a large part is set in motion as soon as an egg is fertilized, and that things then work in fairly linear fashion. If you crossed one yellow-seeded pea plant with one green-seeded pea plant and you could predict which colors subsequent generations of pea plant would turn out to have, then it stood to reason that you might be able to predict how human children would look and behave based on the appearance and behavior of their parents.
Through a narrow Mendelian lens, almost everything is determined by our genes. Environment counts for relatively little because we are at heart the products of chemical compounds mixing together. We are inevitable mixtures of our ancestors. Just as Bateson foresaw, this idea became the cornerstone of eugenics, the belief that better people could be bred via the selection of better parents. “Mendelism and determinism, the view that heredity is destiny, they go together,” says Gregory Radick, a historian and philosopher of science at the University of Leeds, who has studied Mendel and his legacy.
But Mendel’s pea plant research had a problem. At the beginning of the twentieth century Mendel’s paper became the subject of a ferocious debate, says Radick. “Should the Mendelian view be the big generalization around which you hang everything else? Or on the contrary, was it an interesting set of special cases?” When Mendel did his experiments, he deliberately bred his peas to be reliable in every generation. The aberrations, the random mutants, the messy spread of continuous variation you would normally see, were filtered out before he even began, so every generation bred as true as possible. Peas were either green or yellow. This allowed him to see a clear genetic signal through the noise, producing results that were far more perfect than nature would have provided.
Raphael Weldon, a British professor at the University of Oxford with an interest in applying statistics to biology, spotted this dilemma and began campaigning for scientists to recognize the importance of genetic and environmental backgrounds when thinking about inheritance. “What really bothered him about the emerging Mendelism was that it turned its back on what he regarded as the last twenty years of evidence from experimental embryology, whose message was that the effects a tissue has on a body depend radically on what it’s interacting with, on what’s around it,” explains Radick. Weldon’s message was that variation matters, and that it is profoundly affected by context, be it neighboring genes or the quality of air a person breathes. Everything can influence the direction of development, making nurture not some kind of afterthought tacked onto nature, but something embedded deep in our bodies. “Weldon was unusually skeptical.”
To prove his point, Weldon demonstrated how ordinary pea breeders couldn’t come up with the same perfectly uniform peas that Mendel had. Real peas are a multitude of colors between yellow and green. In the same way that our eyes aren’t simply brown or blue or green, but a million different shades. Or that if a woman has a “gene for breast cancer,” it doesn’t mean she will necessarily develop the disease. Or that a queen bee isn’t born a queen; she is just another worker bee until she eats enough royal jelly. Comparing Mendel’s peas to the real world, then, is like comparing a soap opera to real life. There is truth in there, but reality is a lot more complicated. Genes aren’t Lego bricks or simple instruction manuals; they are interactive. They are enmeshed in a network of other genes, their immediate surroundings and the wider world, this ever-changing network producing a unique individual.
Sadly for Weldon, the ferocious debate for the soul of genetics ended prematurely in 1906 when he died of pneumonia, aged just forty-six. His manuscript remained unfinished and unpublished. Without as much resistance as before, Mendel’s ideas went on to become incorporated into biology textbooks, becoming the bedrock of modern genetics. Although Weldon’s ideas have since slowly returned into scientific thinking, there remains a strain of genetic determinism in both the scientific and public imaginations. Harvard biologist Richard Lewontin called it the “central dogma of molecular genetics.” It is a belief that it’s really all there in our genes.
In 2015 sociologists Carson Byrd at the University of Louisville and Victor Ray at the University of Tennessee investigated white Americans’ belief in genetic determinism. Studying responses to the General Social Survey, which is carried out every two years to provide a snapshot of public attitudes, Byrd tells me they found that “whites see racial difference in more biologically deterministic terms for blacks.” Yet they tend to view their own behavior as more socially determined. For instance, if a black person happens to be less smart than average, whites attribute this to the black person’s having been born that way, whereas a white person’s smartness or lack thereof is seen more as a product of outside factors such as schooling and hard work. “So they give people a bit more leeway if they’re white,” he explains.
Also interesting to Byrd was that even though the General Social Survey found that white conservatives were a little more biologically deterministic than white liberals, people with this view on both sides of the political spectrum shared the belief that policy measures such as affirmative action are needed to improve the lot of black Americans. There’s a slippery slope here, he warns. “The slipperiness is that they believe that because it’s genetic, they can’t help themselves, that it’s innate, that they’re going to be in a worse social position because of their race.” In other words, they want society to help, not because they believe we’re all equal underneath, but because they believe we’re not.
“Before it was something in the ‘blood’ and now it’s in our genes,” Byrd tells me. What has remained the same over the centuries are the racial stereotypes of black Americans. Rather than black disadvantage being seen as social or structural in origin, which it is, it’s conveniently rendered in the new scientific language of genetics. “A lot of people have become enamored with the science . . . the mystique of things that could be embedded within our genes.”
Stephan Palmié, an anthropology professor at the University of Chicago, has argued that even now, “Much genomic research proceeds from assumptions it culls from ostensibly ‘scientific’ constructions of the past . . . and eventually restates them in the form of tabulations of allele frequencies” (alleles are different forms of the same gene). Nineteenth-century ideas about race that have gone out of fashion take on an almost magical quality when they’re freshly rewritten in the language of modern genetics. Today there is technical jargon, charts and numbers. Suddenly the ideas seem shinier and more legitimate than they did a moment ago. Suggest to anyone that the entire course of human history might have been decided by a single gene variant and they’ll probably laugh. But that’s exactly what Bruce Lahn did suggest in the pages of one of the most important journals in the world. For a moment, it felt possible because it was new science.
The belief that races have natural genetic propensities runs deep. One modern stereotype is that of superior Asian cognitive ability. Race researchers, including Richard Lynn and John Philippe Rushton, have looked at academic test results in the United States and speculated that the smartest people in the world must be the Chinese, the Japanese, and other East Asians. When the intelligence researcher James Flynn investigated this claim, publishing his findings in 1991, he found that in fact East Asians had the same average IQ as white Americans. Remarkably, though, Asian Americans still tended to score significantly higher than average on the SAT college admission tests. They were also more likely than average to end up in professional, managerial, and technical jobs. The edge they had was therefore a cultural one: perhaps more supportive parents, or a stronger work ethic instilled by their upbringing. They just tended to work harder than others.
To anyone who has grown up as an ethnic minority anywhere, especially those of us who were told that we have to work twice as hard to achieve the same as white people, this will come as small surprise. Among middle-class Indians living in the United Kingdom (the group my parents belong to), the weight of cultural pressure has generally been on children to become physicians, pharmacists, lawyers, and accountants. These are professions that tend to be well respected and well paid, with no shortage of job opportunities and straightforward entry once you have the right qualifications. They are reliable routes into middle-class society. Medicine carries such an immense prestige bias among immigrants and children of immigrants that, according to the most recent data gathered by the British Medical Association, around a quarter of all British physicians are South Asian or have South Asian heritage. In the United States, the American Association of Physicians of Indian Origin boasts eighty thousand members. This is not because Indians make better doctors but because culture acts as an invisible funnel, not dissimilar to the way women get channeled into caring professions such as nursing, because this is what society expects. Culture molds people, even subconsciously, for certain lives and careers.
It is also interesting how these stereotypes can change over time. Asian Americans are today considered a model minority. We forget that more than a century ago, European race scientists saw Asians as biologically inferior, situated somewhere between themselves and the lowest-status races. In 1882 the United States passed the Chinese Exclusion Act to ban Chinese immigrant laborers because they were seen as undesirable citizens. Now that Japan has been highly prosperous for decades and India, China, and South Korea are fast on the rise with their own wealthy elites, the stereotypes have shifted the other way. As people and nations prosper, racial prejudices find new targets. Just as they always have.
“Think about what happened to all the old racial stereotypes,” Eric Turkheimer challenges me.
“A hundred years ago, people were quite convinced that Greek people had low IQs. You know, people from southern Europe? Whatever happened to that? Did somebody do a big scientific study and check those Greek genes? No, nobody ever did that. It’s just that time went on, Greek people overcame the disadvantages they faced a hundred years ago, and now they’re fine and nobody thinks about it anymore. And that’s the way these things proceed. All we can do is wait for the world to change and what seemed like hardwired differences melt away and human flexibility just overwhelms it.”
But the waiting is hard. And as we wait, it remains all too easy for researchers to allow their assumptions about the world to muddy the lens through which they study it, and for the research they then produce to impact or reinforce racial stereotypes.
In 2011, Satoshi Kanazawa, of the Department of Management at the London School of Economics, who writes widely on evolutionary psychology, speculated that black women are considered physically less attractive than women of other races. “What accounts for the markedly lower average level of physical attractiveness among black women?” he blogged in Psychology Today, racking his brain. “Black women are on average much heavier than nonblack women. . . . However, this is not the reason black women are less physically attractive than nonblack women. Black women have lower average level of physical attractiveness net of BMI [body mass index]. Nor can the race difference in intelligence (and the positive association between intelligence and physical attractiveness) account for the race difference in physical attractiveness among women,” he continued, in the manner of a drunk uncle.
At a stroke, Kanazawa took it as a scientific given that black women are both less attractive, which is obviously a value judgement, and innately less intelligent, which is unproven. Presenting these two offensive statements unchallenged, he landed on the speculative conclusion that their unattractiveness, as he had now established it, might have something to do with having different “levels of testosterone” from other women. Kanazawa, whose published work has since looked at intelligence and homosexuality, among other things, had his online post promptly pulled down under the weight of public and academic outrage. The London School of Economics banned him from publishing any more non-peer-reviewed articles or blog posts for a year.
But how did it get published at all? When Kanazawa invoked race as a factor in why he perceived some women to be more attractive than others, he was performing a sleight of hand. He was diverting attention away from the underlying question of where his assumptions came from, or why he was asking this specific question in the first place. In so doing he shone the spotlight straight onto his concluding statements. As soon as we, the audience, accepted his assumptions, his racist question could be transformed into a scientific one. It could seem almost legitimate rather than simply offensive. Diverted, the publisher of his work failed to notice that his hypothesis was dripping with prejudice. It had no rigor to it at all.
The American sociologist Karen Fields has compared the use of the idea of race, as in this example, to witchcraft—using the word “racecraft.” Race is commonly described by scientists, politicians, and race scholars as a social construct, as having no basis in biology. It’s as biologically real as witches on broomsticks. And yet, Fields writes that she sees the same “circular reasoning, prevalence of confirming rituals, barriers to disconfirming factual evidence, self-fulfilling prophecies” among scientists that is common in folk belief and superstition. It almost doesn’t matter what anyone says because race feels as real to us as magic feels real to those who believe in it. It has been made real by overuse.
When Bruce Lahn, just four years after he was forced to retreat from his flawed research on intelligence genes, asked the scientific community to embrace “group diversity” (that is, differences among groups), exactly what was he asking them to embrace? As he admitted to me himself, we don’t yet have data on the differences between populations beyond superficial ones, and even these superficial variations show enormous overlap. The chips haven’t yet fallen. His plea is not for us to accept the science we have, but to accept in advance something we don’t yet know. He is assuming that data will eventually confirm what he suspects—that there are cognitive differences between groups—and is telling us to take his word for it. But how scientific is that? How close are his assertions to being simply statements of belief?
“I do science as if the truth mattered and your feelings about it didn’t,” Satoshi Kanazawa states on his personal website, which lacks any sign of remorse for his paper on black women. In 2018 he and a colleague at Westminster International University in Tashkent, Uzbekistan, published a paper in the Journal of Biosocial Science, produced by the reputable Cambridge University Press, asking why societies with “higher average cognitive ability” have lower income inequality. Again, he started with the assumption that scientists believe that populations have different cognitive abilities—again, unproven. Again, the editors failed to notice.
Among the very few researchers to have written on links between race, intelligence, and the wealth of nations are Gerhard Meisenberg, Richard Lynn, and Tatu Vanhanen, all intimately associated with the Mankind Quarterly. In a joint publication they have claimed that Africans have an average IQ of about 70. But when a Dutch psychologist, Jelte Wicherts, investigated this figure, he found they could have arrived at it only by deliberately excluding the vast majority of data that actually shows African IQs to be higher. “Lynn and Meisenberg’s unsystematic methods are questionable and their results untrustworthy,” he concluded. Even so, Kanazawa cites heavily from their research in his own work.
It’s a problem that continues outside ivory towers and fringe journals. In 2013, Jason Richwine, a public policy researcher at a powerful conservative think tank, the Heritage Foundation in Washington, DC, was forced to resign after it was revealed that he had written a doctoral thesis while at Harvard University in which he claimed that the average IQ of immigrants into the United States was lower than that of white Americans. Richwine expressed the possibility that Hispanics might never “reach IQ parity with whites,” ignoring that nobody considers “Hispanics” a single genetic population group, since they have such diverse ancestries. For instance, most Argentines are of European ancestry, just like white Americans are. Richwine created the illusion that Hispanics are a biological race.
From this, he argued that immigration policy should focus on attracting more intelligent people. Upon joining the think tank, he also happened to cowrite a study suggesting that legalizing illegal immigrants, most of whom are Mexican and Central American, would result in an economic loss of trillions of dollars.
In January 2018, during a closed meeting on new immigration proposals held in the Oval Office, President Donald Trump reportedly asked lawmakers, in reference to immigrants from Haiti, El Salvador, and Africa, “Why do we want all these people from shithole countries coming here?” By the summer of 2018, a crackdown on illegal immigrants by the Trump administration resulted in thousands of young children being separated from their parents, wailing in distress and reportedly held in cages. He is believed to have stated in his January comments that the United States should be welcoming more immigrants from countries such as Norway.
The notion that there are essential differences between population groups, that genetically “shit” people come from “shithole countries,” may be an old one, but the science of inheritance helped propel these racially charged assumptions into modern intellectual thought. The concept of genetic determinism has made some succumb to the illusion that every one of us has a racial destiny.
In reality, as science has advanced, it has only become clearer that things are more complicated. “We can’t sidestep the fundamental problem that biological systems are systems; they are collections of organizations of matter that interact with each other and each of their environments,” explains molecular biologist Martin Yuille.
He offers the example of diabetes, a disease believed to run in families. In the United Kingdom, it has been estimated that those of South Asian ancestry can be up to six times as likely as other ethnic groups to receive a type 2 diabetes diagnosis. But even if some people may be genetically slightly more predisposed to a disease, this isn’t the same as actually receiving a diagnosis. Type 2 diabetes is well known to be heavily associated with lifestyle, such as diet and exercise, as well as age. Waist size is one of the most reliable correlates of all, which is why diabetes is becoming common among the middle classes of South Asia. Diets there have always been rich in sugar and fat, but people are now wealthier and more sedentary, leading to obesity. If diabetes were purely genetic, the world wouldn’t be seeing a diabetes epidemic at this particular moment in history rather than earlier. “Just to say that it runs in your family is ludicrous,” says Yuille. “And it is fatalistic because it inhibits you from participating in activities that reduce risk.”
This is what it means to be human, says Gregory Radick, “to know what it is to be a person and a body, to understand yourself along with everything else around you as this product of interactions between what you’ve inherited and your surroundings.” Then you can begin to see that you “haven’t been dealt a hand that you just have to accept, but it’s within our gift to change these things and to make improvements. When you change the context, you can potentially change the effects.”
Another example of a condition that scientists have believed to be heritable is schizophrenia, a mental disorder for which people of black Caribbean ancestry living in the United Kingdom receive disproportionately more diagnoses than white people, to the point where it has even been described as a “black disease.” In recent years there have been feverish hunts for the genes thought to be responsible. In 2014 an enormous study involving more than 37,000 cases finally did find a number of genetic regions that may be associated with schizophrenia. But it turned out that the presence of even the most likely of these variants elevated the risk of suffering from schizophrenia by just a quarter of 1 percent over the risk in the population as a whole. One particular gene variant turned up in 27 percent of patients, but also in around 22 percent of healthy subjects.
If schizophrenia is inherited, then its inheritance clearly can’t be a straightforward equation. Indeed, environmental risk factors, including living in an urban environment and being an immigrant, have already been shown to be at least as important to being diagnosed as any genetic links found so far. One study published in Schizophrenia Bulletin in 2012 found that patients with psychosis were almost three times as likely to have been exposed to adversity as children. That’s not to say the disorder doesn’t have a genetic component, but it does demonstrate that it can’t be quantified by looking at genes in isolation. If there are racial differences in diagnoses, it may be that life experiences, perhaps even the negative experiences resulting from racial discrimination, tip some people over the edge while rescuing others. This is without even considering that schizophrenia diagnosis itself is known to be notoriously subjective.
And if race is a factor, it’s interesting to contrast the characterization of schizophrenia as a “black disease” with an observation by the Nazi scientist Otmar von Verschuer, who worked at the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics in Germany. A year before the outbreak of World War II he wrote, “Schizophrenia is strikingly more frequent among Jews. According to statistics from Polish insane asylums, among insane Jews schizophrenia is twice as common as among insane Poles.” Then he made a leap, twisting a medical observation into a racial generalization: “Since it is a matter of a hereditary disease . . . the more frequent occurrence of the disease in Jews must be viewed as a racial characteristic.”
At that moment in time, in that particular place, then, it wasn’t a black disease; it was a Jewish one.