Sociogenomics is opening a new door to eugenics

New ways of using your genetic data could bolster scientific racism and encourage discrimination.

Want to predict aggression? Neuroticism? Risk aversion? Authoritarianism? Academic achievement? This is the latest promise from the burgeoning field of sociogenomics.

There have been many “DNA revolutions” since the discovery of the double helix, and now we’re in the midst of another. A marriage of the social and natural sciences, it aims to use the big data of genome science—data that’s increasingly abundant thanks to genetic testing companies like 23andMe—to describe the genetic underpinnings of the sorts of complex behaviors that interest sociologists, economists, political scientists, and psychologists. The field is led by a group of mostly young, often charismatic scientists who are willing to write popular books and op-eds, and to give interviews and high-profile lectures. This work shows that the nature-nurture debate never dies—it is just cloned and raised afresh in a new world.

Advocates of sociogenomics envision a prospect that not everyone will find entirely benevolent: health “report cards,” based on your genome and handed out at birth, that predict your risk of various diseases and propensity for different behaviors. In the new social sciences, sociologists will examine the genetic component of educational attainment and wealth, while economists will develop genetic “risk scores” for spending, saving, and investment behavior.

Without strong regulation, these scores could be used in school and job applications and in calculating health insurance premiums. Your genome is the ultimate preexisting condition.

Such a world could be exciting or scary (or both). But sociogenomicists generally focus on the sunny side. And anyway, they say with a shrug, there’s nothing we can do about it. “The genie is out of the bottle,” writes the educational psychologist Robert Plomin, “and cannot be stuffed back in again.”

Is this what the science says, in fact? And if it is, is it a valid basis for social policy? Answering these questions demands setting this new form of hereditarian social science in context—considering not merely the science itself but the social and historical perspective. Doing so can help us understand what’s at stake and what the real risks and benefits are likely to be.

Weird science
If this is “the science,” the science is weird. We’re used to thinking of science as incrementally seeking causal explanations for natural phenomena by testing a series of hypotheses. Just as important, good science tries as hard as it can to disprove the working hypotheses.

Sociogenomics has no experiments, no null hypotheses to accept or reject, no deductions from the data to general principles. Nor is it a historical science, like geology or evolutionary biology, that draws on a long-running record for evidence.

Sociogenomics is inductive rather than deductive. Data is collected first, without a prior hypothesis, from longitudinal studies like the Framingham Heart Study, twin studies, and other sources of information—such as direct-to-consumer DNA companies like 23andMe that collect biographical and biometric as well as genetic data on all their clients.

Algorithms then chew up the data and spit out correlations between the trait of interest and tiny variations in the DNA, called SNPs (for single-nucleotide polymorphisms). Finally, sociogenomicists do the thing most scientists do at the outset: they draw inferences and make predictions, primarily about an individual’s future behavior.

Sociogenomics is not concerned with causation in the sense that most of us think of it, but with correlation. The DNA data often comes in the form of genome-wide association studies (GWASs), which compare many genomes at once and link SNP variants to traits of interest. Sociogenomics algorithms ask: are there patterns of SNPs that correlate with a trait, be it high intelligence or homosexuality or a love of gambling?

Yes—almost always. The number of possible combinations of SNPs is so large that finding associations with any given trait is practically inevitable.
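
This inevitability is easy to demonstrate. The sketch below is a toy simulation, not any research group’s actual pipeline: it runs a naive genome-wide scan of simulated SNPs against a trait that is pure random noise, and a flood of nominally “significant” hits appears anyway. The cohort size, SNP count, and 0.05 threshold are arbitrary choices made only for illustration.

```python
# Toy illustration: a naive GWAS on a trait with no genetic component at all.
# With enough SNPs tested, spurious "associations" are guaranteed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_snps = 1_000, 20_000   # hypothetical cohort; real studies test far more SNPs

# Simulated genotypes: 0, 1, or 2 copies of a variant allele at each SNP.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))

# A "trait" that is pure noise, unrelated to the genome by construction.
trait = rng.normal(size=n_people)

# Pearson correlation of every SNP with the trait, computed in one pass.
g = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
z = (trait - trait.mean()) / trait.std()
r = g.T @ z / n_people

# Convert correlations to two-sided p-values via the t distribution.
t_stat = r * np.sqrt((n_people - 2) / (1 - r**2))
p_values = 2 * stats.t.sf(np.abs(t_stat), df=n_people - 2)

hits = int((p_values < 0.05).sum())
print(f"nominally 'significant' SNPs: {hits} of {n_snps} "
      f"(about {int(0.05 * n_snps)} expected by chance)")
# Roughly a thousand SNPs "correlate" with a trait that has no genetic basis.
# Real GWASs use far stricter thresholds (around 5e-8) and replication samples,
# but the sheer number of comparisons is why associations are so easy to find.
```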

The evolutionary biologist Graham Coop shows that big data can lull us into a false sense of objectivity. The success of GWASs, he writes, “seems to suggest that we’ll soon be able to settle debates about whether behavioral differences among populations are driven in part by genetics.” However, he adds, “answering this question is a lot more complicated than it seems.”

Coop offers what he calls a “toy” example of a misleading polygenic study—a thought experiment. The hypothetical research question: Why do the English drink more tea than the French?

Coop’s imaginary researcher, Bob, uses data from existing databases like the UK Biobank. He counts up the average number of alleles (different forms of a gene) associated with a preference for tea in English people and French people. “If the British, overall,” Coop writes, “are more likely to have alleles that increase tea consumption than French people, then Bob might say that we have demonstrated that the difference between French and UK people’s preference for tea is in part genetic.”

Being a conscientious scientist, of course, Bob would offer the usual assurances about the quality of his data. He would piously insist that his results do not show that all Brits who drink lots of tea do so because of their genes—only that the overall difference between the populations is partly genetic.

Coop then walks us through the problems with this thinking. It ignores the crucial fact that alleles may behave differently in different genomes and in different environments: “The issue is that GWAS studies do not point to specific alleles for tea preferences, only to alleles that happen to be associated with tea preference in the current set of environments experienced by people in the UK Biobank.” In other words, we can’t be sure that a different group of people with the same genetic variations would be equally avid tea drinkers. And even if they were, we still wouldn’t know it was those genes that made them love tea.

Bob, then, commits two fallacies. First, he confuses correlation and causation. The study does not show that the putative tea-drinking alleles affect tea drinking—merely that they are associated with it. They are predictive but not explanatory. The second fallacy is one I learned on the first day of class in college biostatistics: statistical significance does not equal biological significance. The number of people buying ice cream at the beach is correlated with the number of people who drown or get eaten by sharks at the beach. Sales figures from beachside ice cream stands could indeed be highly predictive of shark attacks. But only a fool would bat that waffle cone from your hand and claim that he had saved you from a Great White.
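
Coop’s toy example is also easy to simulate. The sketch below is my own illustration, loosely modeled on his thought experiment, with made-up numbers throughout: tea drinking is decided entirely by which country a person lives in, yet a naive scan of the pooled sample, followed by Bob’s allele-counting exercise, still “finds” a genetic difference, because it latches onto whatever alleles happen to differ in frequency between the two populations.

```python
# Toy version of the tea example: the trait is 100% environmental,
# but uncorrected population structure makes it look partly genetic.
import numpy as np

rng = np.random.default_rng(1)
n_per_pop, n_snps = 2_000, 5_000

# Allele frequencies drift apart slightly between the two populations
# for ordinary historical reasons that have nothing to do with tea.
base = rng.uniform(0.1, 0.9, n_snps)
freq_uk = np.clip(base + rng.normal(0, 0.05, n_snps), 0.01, 0.99)
freq_fr = np.clip(base + rng.normal(0, 0.05, n_snps), 0.01, 0.99)
geno_uk = rng.binomial(2, freq_uk, size=(n_per_pop, n_snps))
geno_fr = rng.binomial(2, freq_fr, size=(n_per_pop, n_snps))

# Tea consumption is pure environment: a higher mean in the UK,
# and zero contribution from any SNP, by construction.
tea_uk = rng.normal(5.0, 1.0, n_per_pop)
tea_fr = rng.normal(2.0, 1.0, n_per_pop)

# "Bob" pools everyone and correlates each SNP with tea drinking,
# with no correction for population structure.
genotypes = np.vstack([geno_uk, geno_fr]).astype(float)
tea = np.concatenate([tea_uk, tea_fr])
g = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
z = (tea - tea.mean()) / tea.std()
r = g.T @ z / len(tea)

# He builds a polygenic "tea score" from his top hits and compares countries.
top = np.argsort(-np.abs(r))[:100]
score_uk = geno_uk[:, top] @ r[top]
score_fr = geno_fr[:, top] @ r[top]
print(f"mean 'tea score': UK {score_uk.mean():.2f}, France {score_fr.mean():.2f}")
# The UK score comes out higher even though none of these SNPs affects tea
# drinking. The scan simply picked up alleles whose frequencies differ between
# two populations that also differ in environment: correlation, not causation.
```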

“Complex traits are just that—complex,” Coop concludes. “Most traits are incredibly polygenic, likely involving tens of thousands of loci [i.e., SNPs or genes]. These loci will act via a vast number of pathways, mediated by interactions with many environmental and cultural factors.”

A long tradition
Sociogenomics is the latest chapter in a tradition of hereditarian social science dating back more than 150 years. Each iteration has used new advances in science and unique cultural moments to press for a specific social agenda. It has rarely gone well.

The originator of the statistical approach that sociogenomicists use was Francis Galton, a cousin of Charles Darwin. Galton developed the concept and method of linear regression—fitting the best straight line through a scatter of data points—in a study of human height. Like all the traits he studied, height varies continuously, following a bell-curve distribution. Galton soon turned his attention to personality traits, such as “genius,” “talent,” and “character.” As he did so, he became increasingly hereditarian. It was Galton who gave us the idea of nature versus nurture. In his mind, despite the “sterling value of nurture,” nature was “by far the more important.”
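
The method itself fits in a few lines. The sketch below uses simulated numbers, not Galton’s actual measurements: it fits the kind of straight line Galton fitted to parents’ and children’s heights, and the slope comes out well below 1, the “regression toward mediocrity” that gave regression its name.

```python
# Small illustration of Galton-style regression on simulated height data.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
midparent = rng.normal(68.0, 1.8, n)   # average of parents' heights, inches
# Children built (by assumption) to inherit only part of parental deviation.
child = 68.0 + 0.65 * (midparent - 68.0) + rng.normal(0, 1.5, n)

slope, intercept = np.polyfit(midparent, child, 1)
print(f"child height ≈ {intercept:.1f} + {slope:.2f} × mid-parent height")
# The fitted slope (~0.65 here, by construction) is well below 1: unusually
# tall parents tend to have children closer to the average, and vice versa,
# broadly similar to the roughly two-thirds slope Galton reported.
```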

Galton and his acolytes went on to invent modern biostatistics—all with human improvement in mind. Karl Pearson, Galton’s main protégé (who invented the correlation coefficient, a workhorse statistic of GWASs and hence of sociogenomics), was a socialist who believed in separating sex from love. The latter should be spread around liberally, the former tightly regulated to control who bred with whom—that is, for eugenic ends.

The point is that eugenics was not, as some claim, merely an unfortunate bit of specious science. It was central to the development of biological statistics. This entanglement runs down the history of hereditarian social science, and today’s sociogenomicists, like it or not, are heir to it.

Early in the 20th century, a vicious new strain of eugenics emerged in America, based on the new science of Mendelian genetics. In the context of Progressive-era reformist zeal, belief in a strong government, and faith in science to solve social problems, eugenics became the basis of coercive social policy and even law. After prominent eugenicists canvassed, lobbied, and testified on their behalf, laws were passed in dozens of states banning “miscegenation” or other “dysgenic” marriage, calling for sexual sterilization of the unfit, and throttling the stream of immigrants from what certain politicians today might refer to as “shithole countries.”

At the end of the 1960s, the educational psychologist Arthur Jensen published an enormous article in the Harvard Educational Review arguing that Negro children (the term of the day) were innately less intelligent than white children. His policy action item: separate and unequal school tracks, so that African-American children would not become frustrated by being over-challenged with abstract reasoning. What became known as “Jensenism” has resurfaced every few years, in books such as Charles Murray and Richard Herrnstein’s The Bell Curve (1994) and the journalist Nicholas Wade’s A Troublesome Inheritance (2014).

Given the social and political climate of 2018, today would seem a particularly inauspicious time to undertake a new and potentially vastly more powerful expression of genetic determinism. True, the research papers, white papers, interviews, books, and news articles I’ve read on the various branches of sociogenomics suggest that most researchers want to move past the racism and social stratification promoted by earlier hereditarian social scientists. They downplay their results, insist upon avoiding bald genetic determinism, and remain inclusive in their language. But, as in the past, fringe groups have latched onto sociogenomic research as evidence for their hostile claims of white superiority and nationalism.

Social risks
Social genomics comes with its own large set of social risks—and number one on the list is failing to grapple sufficiently with those risks. In the 2012 paper that has become the de facto manifesto of genoeconomics (the use of genetic data to predict economic behavior), Daniel Benjamin and his coauthors dedicated two full sections to “pitfalls.” Every one of them is methodological and statistical—false positives, studies with too few participants, and so forth. Most could be fixed with more data and better statistics.

Some in the field readily acknowledge the skeletons in the closet. “Eugenics is not safely in the past,” wrote Kathryn Paige Harden, a developmental behavior geneticist at the University of Texas, in a New York Times op-ed earlier this year. Harden lamented the rise of the so-called human biodiversity movement (referring to it as “the eugenics of the alt-right”), with its ties to white supremacy and its specious claims to scientific legitimacy. Members of this movement, she wrote, “enthusiastically tweet and blog about discoveries in molecular genetics that they mistakenly believe support the ideas that inequality is genetically determined; that policies like a more generous welfare state are thus impotent; and that genetics confirms a racialized hierarchy of human worth.”

Indeed, the human biodiversity crowd and other so-called “race realists” love sociogenomics. American Renaissance, a publication run by the avowed white supremacist Jared Taylor, features articles about the possibilities of sociogenomics, as does the HBD Bibliography, an aggregator of hereditarian materials. Steve Sailer, a well-known and prolific writer in white supremacist and human biodiversity circles, writes extensively about sociogenomics on “race realist” sites such as Unz Review and VDARE.

To be clear: I am not saying that sociogenomicists are racists. I am saying that their work has serious social implications outside the lab, and that too few in the field are taking those problems seriously.

Genetics has an abysmal record for solving social problems. In 1905, the French psychologist Alfred Binet, working with Théodore Simon, devised a quantitative measure of intelligence—the forerunner of the IQ test—to identify children who needed extra help in school. Within 20 years, people were being sterilized for scoring too low, out of a misguided fear that those of subnormal intelligence were sowing feeblemindedness genes like so much seed corn—a use Binet, who died in 1911, never intended.

What steps can we take to prevent sociogenomics from suffering the same fate? How do we ensure that polygenic scores for educational attainment are used to offer extra help tailored to those who need it—and ensure that they don’t become tools of stratification?

Here’s one way: when the evolutionary biologist Coop and his student Jeremy Berg published a GWAS paper on the genetics of human height, they took the extraordinary step of writing a 1,500-word blog post about what could and could not be legitimately inferred from their paper.

Why isn’t this more common? The field needs more people like Coop—and fewer cheerleaders. It needs scientists who reckon with the social implications of their work, especially its potential for harm—scientists who take seriously the social critique of science, who understand their work in both its scientific and historical contexts. It is such people who stand the best chance of using this potent knowledge productively. For scientists studying human social genomics, doing so is a moral responsibility. 

Jon Phillips contributed research for this article.
