CHAPTER 11: CONCLUSION AND IMPLICATIONS
Well, we’ve reached the end. Time to look back at what we’ve discussed and draw a few final general conclusions. The first section of the book presented a broad picture of how genetic and developmental variation together cause innate differences in psychological traits. The second section considered these issues in relation to specific areas, exploring the diversity of human faculties affected and what is known of the underlying mechanisms in each case. We are only beginning to unravel these details but we know enough to sketch out broad conceptual frameworks for how genes affect these diverse traits. Hopefully the general principles described will stand up to the test of time and will prove useful in interpreting future discoveries.
I will reiterate and expand on some of these general principles below and, especially, emphasize the complexities and subtleties in the relationship between genetic variation and variation in psychological traits. I will also try to highlight not just what the scientific findings mean but also what they don’t mean, to clarify or preempt any simplifications, misunderstandings, or overextrapolation.
And, finally, I will consider some important implications of these findings across a range of societal, ethical, and philosophical issues. The genetic and neuroscientific discoveries described in this book are poised to change our ability to control our own biology, as well as our view of our selves and of the nature of humanity. We would do well to consider the potential ramifications now, because the pace of discovery will only accelerate.
WHAT GENES ARE FOR
Twin, family, and population studies have all conclusively shown that psychological traits are at least partly, and sometimes largely, heritable—that is, a sizable portion of the variation that we see in these traits across the population is attributable to genetic variation. However, as we have seen in the preceding chapters, the relationship between genes and traits is far from simple.
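To put that statement in standard form: heritability is a ratio of variances. This is textbook quantitative genetics rather than anything specific to this book, and splitting out a separate developmental-noise term, to match the book's emphasis, is a presentational choice; conventionally it is folded into the environmental term:

```latex
h^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E + V_D}
```

where \(V_P\) is the total phenotypic variance in the population, \(V_G\) the portion attributable to genetic differences, \(V_E\) the portion attributable to environmental differences, and \(V_D\) the portion arising from random developmental variation.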
The fact that a given trait is heritable seems to suggest that there must be genes for that trait. But phrasing it in that way is a serious conceptual trap. It implies that genes exist that are dedicated to that function—that there are genes for intelligence or sociability or visual perception. But this risks confusing the two meanings of the word gene: one, from the study of heredity, refers to genetic variants that affect a trait; the other, from molecular biology, refers to the stretches of DNA that encode proteins with various biochemical or cellular functions.
If the trait in question is defined at the cellular level, then those two meanings may converge—for example, differences in eye color arise from mutations in genes that encode enzymes that make pigment in the cells of the iris. They really are genes for eye color—that is the job of those proteins. Similarly, mutations that cause cancer, where cells proliferate out of control, mostly affect genes encoding proteins that directly control cellular proliferation. That kind of direct relationship between the effects of genetic variation and the functions of the encoded gene products makes complete sense if you are looking at effects on a cellular level. But it makes no sense if you are talking about the emergent functions of complex multicellular systems, especially the human brain.
These emergent functions rely on the interactions of hundreds of different cell types, organized into highly specified circuits, first at the local level of microcircuits and then at higher and higher levels of connectivity across brain regions and distributed systems. It requires the actions of thousands of genes to build these circuits and mediate the biochemical functions of all the component cells. Variation in any of those genes could, in principle, affect how any given neural system works and manifest as variation in a behavioral trait.
The fact that a trait is heritable means only that there are genetic variants that affect that trait. But for the kinds of traits we are talking about, most of those genetic effects will be highly indirect. Natural selection may see such variants as “genes for intelligence” or “genes for sociability” because natural selection only gets to see the final phenotype. That does not mean that the encoded gene products are directly involved in that psychological function. There are no genes for complex psychological functions—there are neural systems for such functions and genes that build them.
This has important consequences for understanding the relationship between genotypes and psychological phenotypes. First, a lot of the variation in mature function stems from differences in how the neural systems develop. Our brains really do come wired differently—literally, not metaphorically. Here, the effects of genetic variation combine with those due to inherent noise in the cellular processes of development themselves. The program encoded in the genome can only specify developmental rules, not precise outcomes. And the more genetic variants there are affecting that program, the greater the variability in outcome will be. Any given genotype encodes a range of potential outcomes but only one—a completely unique individual—will actually be realized.
Second, the genetic architecture of such traits is not as modular as often thought—any given neural system can be affected by variation in probably hundreds of genes. Conversely, variation in any given gene will typically affect multiple functions. In fact, even the neural systems are not as modular and dedicated as once believed—most cells, circuits, or brain regions can flexibly engage in various tasks by communicating with different subsets of other cells, circuits, or regions. When we open the lid of the black box and look inside, we should not expect to see lots of smaller black boxes. It’s a mess in there (see figure 11.1).
Figure 11.1 Simple versus complex traits. A. An overly simplistic view of the relationship between genes and behavioral traits, mediated by direct effects on particular brain regions, circuits, or neurotransmitter pathways. B. A more realistic view of the complex genetic architecture of behavioral traits.
And, finally, the genetic variants that contribute to any given trait are highly dynamic over time. Natural selection has spent millions of years crafting the finely honed machine that is the human brain, and it’s not about to stand back and let it all go to pot. New mutations arise all the time, but those that impair evolutionary fitness—by affecting survival and reproduction—are selected against, with the ones with most severe effects rapidly disappearing from the population. This means that most traits will be dominated by rare mutations that wink in and out of existence in populations over time, rather than a pool of standing variation that just gets reshuffled from generation to generation. Moreover, the effects of many such variants (rare and common) will interact in complex ways in any given individual. All of these factors have important implications for the possible application of genetic information in predicting the traits of individuals.
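To make these dynamics concrete, here is a minimal simulation sketch, with entirely invented parameter values, of the mutation-selection balance described above: new deleterious variants constantly arise, severe ones are purged quickly, and milder ones linger longer, so standing variation ends up dominated by rare, transient mutations.

```python
# A toy Wright-Fisher-style sketch (illustrative only; all parameters are
# invented). Deleterious mutations arise each generation; the more severe
# the effect, the faster selection removes it.
import random

N = 5_000                      # population size (haploid, for simplicity)
MUT_RATE = 0.02                # chance per birth of one new mutation
EFFECTS = [0.001, 0.01, 0.1]   # mild, moderate, severe fitness costs
GENERATIONS = 300

population = [[] for _ in range(N)]   # each individual: its mutations' costs

def fitness(mutations):
    w = 1.0
    for s in mutations:
        w *= 1.0 - s
    return w

for _ in range(GENERATIONS):
    # selection: parents are sampled in proportion to fitness
    weights = [fitness(ind) for ind in population]
    parents = random.choices(population, weights=weights, k=N)
    # reproduction: offspring inherit mutations, occasionally gaining one
    population = [
        p + [random.choice(EFFECTS)] if random.random() < MUT_RATE else list(p)
        for p in parents
    ]

for s in EFFECTS:
    copies = sum(ind.count(s) for ind in population)
    print(f"fitness cost {s}: {copies} copies segregating")
# Expected pattern: severe mutations are purged almost as fast as they
# arise, while the mildest ones accumulate - the "winking in and out"
# of rare variants described above.
```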
GENETIC PREDICTION AND SELECTION—THE NEW EUGENICS?
The complexities described above will make it more challenging to identify specific genetic variants associated with specific psychological traits. And, even where they are identified, predictions of phenotypes based on genetic information will remain imperfect. The effects of single mutations almost always vary across individuals, depending on other genetic variants in their genomes, and multiple variants will often interact in complex ways. It may be possible to derive an average risk of a condition or an average value of a trait from population studies, but it will be very difficult to predict accurately in any individual, who will inevitably have a previously unseen combination of genetic variants in their genome. Moreover, developmental variability places a strong limit on how accurate genetic predictions can ever be, as it means that genotype-phenotype relationships are not just limited by current knowledge but are essentially probabilistic and will therefore never be predictable with complete accuracy.
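The point about a hard ceiling on prediction can be made quantitative. Here is a minimal sketch with invented numbers: if half of the variance in a trait comes from developmental noise, then even a genetic predictor with perfect knowledge of every relevant variant can achieve a correlation of at most the square root of the heritability.

```python
# Illustrative only: simulate a trait that is 50% genetic and 50%
# developmental noise, then test how well a *perfect* genetic score
# predicts it. The ceiling on the correlation is sqrt(h2).
import math
import random

H2 = 0.5          # assumed heritability, chosen for illustration
N = 100_000

genetic = [random.gauss(0, math.sqrt(H2)) for _ in range(N)]
noise = [random.gauss(0, math.sqrt(1 - H2)) for _ in range(N)]
phenotype = [g + d for g, d in zip(genetic, noise)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

print(f"perfect genetic score vs. trait: r = {pearson(genetic, phenotype):.3f}")
print(f"theoretical ceiling sqrt(h2):    {math.sqrt(H2):.3f}")
# Any real predictor, built from incomplete knowledge of the variants and
# their interactions, will do worse still.
```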
However, genetic information doesn’t have to be 100% accurate in predicting traits or disorders for it to be useful. Even mutations that merely increase the risk of a condition, or variants that tend to increase or decrease the value of a trait, will likely be deemed actionable and may be used in reproductive decisions and possibly in other areas. We already know, for example, of hundreds of genes that, when mutated, increase the risk of neurodevelopmental disorders, manifesting as intellectual disability, autism, epilepsy, schizophrenia, or other diagnostic categories. Many of these mutations also affect intelligence more generally, even in people not affected severely enough to be clinically diagnosed, and other genetic variants with subtle effects on intelligence are also being discovered. A number of mutations have been associated with impulsivity, aggression, and antisocial behavior—ones linked to other personality disorders, such as psychopathy, are sure to follow. And it is only a matter of time before mutations affecting other traits, like sexual orientation, or conditions like synesthesia or face blindness, are identified.
With this knowledge will come the opportunity to act on it. The most obvious way in which genetic information will be used—indeed, the way in which it is already being used—is in prenatal screening of fetuses or preimplantation screening of embryos generated by in vitro fertilization (IVF). Genetic screening of fetuses for chromosomal conditions such as Down syndrome is routinely done in many countries, and this could readily be extended to screen for deletions or duplications associated with neurodevelopmental disorders more broadly. It is even now possible to sequence the entire fetal genome noninvasively, by sampling the small amounts of cell-free fetal DNA that circulate in the maternal bloodstream. This will allow the identification of potentially disease-causing single base changes to the DNA sequence, not just large chromosomal aberrations. The expected consequence, where such measures are available, is a concomitant increase in the number of terminations and a decrease in the number of children born with these conditions.
IVF provides even greater scope for the use of genetic information, as multiple embryos are generated at once. It is quite routine to perform genetic testing on embryos to screen for chromosomal anomalies, especially in older parents or ones with a history of miscarriages. And genetic testing is also done in cases where one or both parents are carriers of a known specific mutation associated with a disease. In such cases, unaffected embryos can be chosen for implantation. Genetic testing is also used in some jurisdictions to select embryos by sex, to screen for immunological compatibility with a previously born child in need of an organ transplant (so-called “savior siblings”) or even, in cases where both parents have a condition like deafness or dwarfism, to select for the presence of mutations that result in that condition in their children.
As with fetal screening, the range of genetic variants and the number of associated conditions or traits that can be screened for will only increase with time. Currently, a limiting factor on how many things can be screened out is the number of eggs that can be obtained for fertilization. This may change with the recent development of techniques to generate large numbers of eggs in the lab from cultured stem cells (themselves derived from a person’s skin cells, for example). This kind of approach is costly, but it could technically allow hundreds of embryos to be generated and screened at once, changing the possible scope and pace of genetic selection.
Clearly, the ethics of the use of genetic information in this way merits some consideration. This is especially true given the dark history of eugenics and its association with the science of genetics. Francis Galton, whom we met in earlier chapters, coined the term “eugenics” in 1883 to refer to the idea of selective breeding in humans to “improve” the genetic stock of the population. He argued that what had been done in dog breeding, with a rapid response to strong selection, could just as well be done in humans. In particular, he bemoaned what he saw as the reproductive excesses of the lower classes in Great Britain that threatened to flood the gene pool with inferior genetic variants, which would, over time, degrade the average capabilities of the population. To counter this threat, he advocated programs to encourage people of higher intellectual ability to breed early and often.
In the early 1900s eugenics achieved wide popularity in Britain and especially in the United States. Prominent geneticists like Charles Davenport, and even celebrities like the aviator Charles Lindbergh, threw their weight behind it, and it came to be entangled with issues of race and immigration. Davenport led the eugenics section of the American Breeders’ Association, which took on the rather chilling mission to “investigate and report on heredity in the human race, and emphasize the value of superior blood and the menace to society of inferior blood.”1 Rather than just promoting breeding of those with supposedly high-quality genes, American eugenic policies focused on preventing breeding by those with qualities deemed inferior. This included marriage bans and forced sterilization of the “feeble-minded” and even people with epilepsy to prevent the passing on of the “genetic taint,” to use the terminology of the day. Such policies persisted as late as the 1970s in some US states. The underlying principles of eugenics and the idea of racial superiority were warmly embraced in Nazi Germany and used to justify many of the horrors that followed.
Eventually, the principles of eugenics and the policies of socially engineering the gene pool, from encouraging marriage to outright genocide, were rejected by modern societies. In some countries, and in some ethnic groups where specific genetic conditions are especially common, schemes remain in place that encourage or require people who wish to marry to undergo genetic testing for those specific mutations. But the kind of broad, government-imposed restriction of breeding opportunities based on undesirable traits seen at the height of the eugenics movement is no longer in place in any country.
However, in its place is emerging a different idea, one based instead on the principles of personal or parental choice. This is seen by many as a natural extension of already existing options for reproductive choices available in many countries. The argument goes that if termination of pregnancies or selection of embryos for implantation is permissible at all, there is no reason that such choices could not be made on the basis of genetic information. Different jurisdictions have taken different views of this. For example, preimplantation testing for genetic conditions is limited in the United Kingdom to a specified list, though that list continues to grow over time. And testing for sex is permissible in the United States, but not in most European countries.
There are no easy answers here. You could argue that if no one is harmed (and embryos not being implanted don’t count, because that happens all the time anyway), then use of any genetic information should be permitted. On the other hand, this touches on much wider issues. Choosing based on clear medical grounds is one thing; choosing between two healthy embryos is another. Do parents really have the right to choose the traits of their child? Does this change the nature of the relationship? Does it incur some responsibility on parents for the traits of their offspring, if they either have or have not selected them? Will it alter how people who are born with conditions that are otherwise typically screened out are perceived and treated in society? Will changing practices put pressure on parents to make certain decisions?
I am not taking or advocating any position here—all of this is just to highlight the fact that these ethical issues exist and merit some discussion. And as the pace of genetic discoveries advances and new technologies develop, new issues will arise—ones that we may not even have conceived of yet. For example, the recent development of highly precise genome editing technologies (the CRISPR/Cas9 system, referred to in chapter 10) opens up the possibility of going beyond screening to the genetic modification of human embryos. Such modification is currently outlawed where it would be passed on through the germline, but this could change. Societies will have to grapple with these issues and make principled decisions as to what should be permitted. We would do well to consider the implications in advance, or we will be closing the barn door after the horse has bolted.
One particularly touchy issue is the idea of selecting for intelligence. We already select against mutations that cause intellectual disability. It seems a small step to extend this to allow selection for intelligence across the typical range, if we have the means to do it. Indeed, some would argue that there is nothing to discuss, that it is obvious that we should allow parents to make that choice if they wish. This can swing back into eugenics territory very quickly, though. You can argue that being more intelligent will be better, for the person involved, than being less intelligent, all other things being equal. After all, higher intelligence is associated with greater general health, better life outcomes across a range of measures, and increased longevity. But that does not mean that intelligent persons (or embryos that will become more intelligent persons) are better than, or of “higher quality” than, less intelligent persons, as some commentators have asserted. Nor does it mean that it would be better for society if average intelligence were increased. That’s right back to the driving principles of Galton and Davenport.
From a technical point of view, whether we will be able to select for intelligence depends on what the true genetic architecture of the trait is. First, using genetics to predict intelligence across the whole range of the population is one thing—using it to predict the much smaller expected differences between siblings is a totally different proposition, one that would require greater precision than may be obtainable, especially given the influence of developmental variation, which is essentially unpredictable. Second, I described a model in chapter 8 that sees intelligence mainly as a general fitness indicator, reflecting to a large extent the general robustness of brain development and the genomic program that encodes it. If that is true, then intelligence may be determined by general mutational load and the impact of these mutations on brain development, rather than a specific, dedicated set of genes “for intelligence.” Selecting for greater intelligence may thus be a matter of choosing embryos with the lowest load of severe mutations likely to affect neural development. Indeed, that would be expected to increase general health as well.
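As a purely illustrative sketch of this last point, using the figure of roughly 200 severe mutations per genome mentioned in the next paragraph and treating the load as Poisson-distributed (an assumption made for the sketch, not an established model), screening more embryos for the lowest mutational load runs into sharply diminishing returns:

```python
# Illustrative only: if each embryo's count of severe mutations is roughly
# Poisson-distributed around 200 (an assumed model, not established fact),
# how much does screening more embryos reduce the minimum load?
import random

MEAN_LOAD = 200      # assumed mean number of severe mutations per genome
TRIALS = 2_000

def poisson_approx(lam):
    # normal approximation to the Poisson, accurate for large lambda
    return max(0, round(random.gauss(lam, lam ** 0.5)))

for n_embryos in (2, 10, 100, 1000):
    best = [min(poisson_approx(MEAN_LOAD) for _ in range(n_embryos))
            for _ in range(TRIALS)]
    print(f"{n_embryos:>5} embryos: lowest load averages {sum(best)/TRIALS:.0f}")
# Going from 2 to 1000 embryos shaves only a few tens of mutations off a
# load of ~200: the spread of the distribution, not screening effort,
# sets the limit.
```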
Again, I am not advocating for this, merely laying out the technical parameters. And, in considering it, it is worth remembering the law of unintended consequences. First, any given mutation is likely to have multiple effects on multiple systems—some of these may be unknown or unpredictable, and not all of them will necessarily be negative. Second, we are in fact adapted to a certain mutational load—our developmental programs have evolved with such a load in place. We all carry approximately 200 severe mutations—ones that seriously impair production or function of a protein—as well as thousands of less severe genetic variants. And we always have. Every human who ever lived has carried such a burden, just as every animal that has ever lived has. There never has been a human without a certain load of mutations, one that is fully “wild type” across the entire genome. We may have the opportunity to do what natural selection never could—to purge the genome of all such mutations at once, or to reach that point over successive generations. But really we have no idea what the outcome would be—maybe development will proceed perfectly well with all systems working maximally, maybe not. Perhaps we’ll all end up super healthy and smart and ridiculously good-looking—and identical.
Genetic information is likely to be used in many areas outside of reproductive decisions too. Perhaps the most obvious is in insurance, where information that predicts people’s future health could very well be gleaned from their genomes. This raises serious questions. For example, would just carrying a mutation that statistically increases the risk of developing schizophrenia at a future date be considered a preexisting condition? Would variants that predispose to risky behavior or suicidality be grounds to deny someone life insurance or charge that person higher premiums? Currently, many countries prohibit insurance companies from using such information to deny people coverage (for example, under the Genetic Information Nondiscrimination Act in the United States), but the policies are quite uneven and, of course, could change. Indeed, a bill (H.R. 1313) under consideration in the United States in 2017 would allow employers to demand that employees undergo genetic testing as part of a “wellness” program, or face an increase in their health insurance costs.
It’s also not hard to see how genetic information that predicts behavioral traits or cognitive abilities would be of interest to schools, colleges, or employers. IQ and aptitude tests are already widely used—these could conceivably be replaced by genetic predictors. At the moment, such predictors remain hypothetical and they will never be perfect, but they could be developed to the point where they contain some information deemed to be useful in a prospective fashion—say, for streaming children in schools. We could even see the prospect of genetic profiles being used in dating, as depicted in the science fiction film Gattaca (along with many of the other scenarios raised here). After all, we already choose mates based on many different traits with genetic underpinnings, and information on such traits is commonly used in selecting sperm or egg donors. Direct-to-consumer genetic profiling is a booming business and is already straying into many of these areas. Science fiction is fast becoming science fact. Buckle up!
A NOTE ON RACE AND GROUP DIFFERENCES
Up to this point, we have been concentrating on the origins of differences between individuals, but have not considered the possibility of average differences between groups of individuals, or populations. (With the exception of sex differences, which are a special case, given that there are strong evolutionary reasons for sex differences in behavior and known, conserved mechanisms that institute them.) If psychological traits have a partly genetic basis, so that relatives are more similar in such traits to each other than to random strangers, then it seems reasonable to suppose that such similarity might extend across whole populations who share a common ancestry and cause differences between populations with different ancestries. There are dozens of physical traits—like skin color, facial morphology, or height, for example—that do indeed differ between populations in this way. That this might extend to psychological traits is thus not inconceivable.
However, this is not a given. Systematic differences between groups can sometimes arise just by “genetic drift”—random divergence between two populations in the frequencies of genetic variants, some of which may affect traits. But that mainly applies to traits that are evolutionarily neutral—where it really doesn’t matter much if a trait is high or low. For traits with adaptive value, however, the emergence of systematic differences requires some active force to drive it, some selective advantage to a higher or lower level of the trait.
Most of the physical traits that differ between populations have clear adaptive effects—there is a reason that they differ. For example, lighter skin evolved independently a couple of times as humans migrated to more northern latitudes, as an adaptation to lower light conditions. While dark skin is protective in regions with high sunlight, in low light it prevents adequate production of vitamin D. Similarly, persistence into adulthood of expression of the enzyme lactase, which breaks down milk, arose recently (in the past several thousand years) in European populations with the advent of dairy farming. And genetic adaptations to high altitude are seen in some populations, like Tibetans.
However, even if comparable forces did apply for psychological traits (and there is no evidence that they do or have), their genetic architecture makes this kind of directional selection much more difficult. The physical traits mentioned above are driven by changes to one or two genes, with highly specific effects. But we have seen that psychological traits can be affected by genetic variants in hundreds or thousands of genes, which often also affect other traits. That means, first, that any given mutation that increases the level of one trait may have offsetting effects on other traits. This will tend to constrain the possibilities for change. And second, it means that directional selection will face a losing battle against mutation, which will instead constantly generate diversity within groups. There would need to be an extremely strong selective force—similar to the levels of artificial selection that dog breeds were subjected to—in order to drive stable group differences for these kinds of traits.
In addition, for personality traits at least, diversity may actually be promoted because there is no single combination of parameters that is optimal in all situations or all environments. Any given profile will produce behaviors closer to the optimum in some contexts, but further from it in others. For example, in some circumstances cautious people will do better (they may be less likely to get killed, for example). In other situations more daring people may do better (they may be more likely to obtain food or a mating opportunity). Whether one profile outperforms the others in terms of evolutionary fitness depends on how often those different types of situations arise in that particular environment.
But we should remember that the most important thing in each person’s environment is other people. They are the ones we can cooperate or compete with; they pose the greatest dangers and present the most relevant opportunities. That means that the optimal profile of behavioral parameters for any individual depends on the profiles of everyone else around that person. Not in a simple way, however; it’s not the case that the best solution is to be like everyone else—sometimes quite the opposite. If, for example, most other people are quite reckless, then it may pay to be more cautious. While half of them are dying off because they’ve put themselves in too much danger, you can hang back and share in the spoils. (It may not be admirable, but natural selection won’t care.) If, on the other hand, you’re in a population of timid people, you may gain an advantage by being braver, especially in obtaining mating opportunities.
This is classic game theory—the optimal strategy for any individual depends on the strategies employed by others. In evolutionary terms it leads to what is known as frequency-dependent selection. The fitness value of any given phenotype (a behavioral strategy in this case) decreases as the frequency of that phenotype in the population increases beyond a certain point. Any given strategy works better while it’s still somewhat rare, which tends to prevent genetic variants that favor any specific behavioral profile from ever getting fixed in the population. Diversity thus arises not just from a fundamental inability to genetically specify the same profile in all individuals but also from the positive actions of natural selection.
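For readers who want to see this mechanism in action, here is a minimal sketch of the textbook hawk-dove game under replicator dynamics, with invented payoff values; it is a standard model, not one taken from this book. A bold strategy prospers while rare but suffers once common, so the population settles at a stable mix.

```python
# Classic hawk-dove game with replicator dynamics (textbook model;
# payoff values are arbitrary). "Hawks" escalate conflicts, "doves" back
# down. With fight costs exceeding the prize, neither strategy fixes.
V, C = 2.0, 3.0        # value of the contested resource, cost of a fight

def hawk_payoff(p):    # p = current frequency of hawks
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):    # doves get nothing against hawks, share with doves
    return (1 - p) * V / 2

p = 0.01               # start with the bold strategy rare
for _ in range(200):
    wh, wd = hawk_payoff(p), dove_payoff(p)
    mean_w = p * wh + (1 - p) * wd
    p += 0.1 * p * (wh - mean_w)   # discrete replicator update
print(f"equilibrium hawk frequency: {p:.2f} (theory predicts V/C = {V/C:.2f})")
# Each strategy does best while it is the rarer one, so selection itself
# maintains the mix - frequency-dependent selection in action.
```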
So, while a naïve comparison with physical traits suggests that psychological traits might well vary between groups, a more detailed consideration of their genetic architecture reveals just how unusual a scenario would have to exist for this to arise. It is by no means impossible—but it would require strong and consistent environmental differences between groups to create systematic pressures strong enough to drive genetic adaptation for these traits. Which brings us to how such groups are defined and the question of whether the categories typically studied have any real validity.
Most of the discussion in this area centers on the colloquial idea of “races,” but exactly how many such categories exist and how they are defined are hard to agree on. Anthropologists in the 1800s identified three main races—Black, White, and Asian—roughly reflecting continental ancestry. But a fourth soon had to be added when it was recognized that Australian Aborigines are really very distinct from Africans, despite having similar skin color. And, of course, each of those categories can be subdivided more and more—among Whites, for example, we could recognize Hispanics, Jews, Arabs, etc. In terms of shared ancestry, thousands of such groups can be defined across all areas of the globe. Some will be reasonably discrete, based on a history of isolation and restricted breeding, while others are much more mixed, reflecting more extensive migration and interbreeding.
Modern genetics can reveal much of this history and clearly illustrates the complexities of humanity’s global family tree. If you cluster people based on genetic similarity, you can indeed derive several major categories, but you can also just as well go to deeper levels and reveal many, many more. There is no reason to think that any one level should have privileged status—none of these groupings reflects a natural kind, in the way that sex does. You can look for trends at the level of Africans versus non-Africans, for example, but you can also look at the level of ethnic groups like Bantu, Amhara, Yoruba, Celts, Basques, Finns, Japanese, American Indians, Maori, and so on. The decision to stop at any given level of clustering is purely arbitrary, and the larger and more ancient the cluster, the greater diversity there will be within that group, both genetically and in terms of the environments to which they have been exposed.
This is an important point when considering claims of racial differences in behavior and the even stronger claim that these are driven by genetic differences. For example, in his 2014 book A Troublesome Inheritance: Genes, Race and Human History, journalist Nicholas Wade argues, first, that strong and stable differences in behavioral or cognitive traits exist between five major racial categories and, second, that these are driven by genetic differences, reflecting adaptation to different historical societal structures across continents. As the author admits, such claims are “leaving the world of hard science and entering into a much more speculative arena at the interface of history, economics and human evolution.”2 Quite. It is a complete non sequitur to claim that any cultural differences between populations must be caused by genetic differences. There is in fact no evidence at all that observed or supposed differences in behavioral patterns between populations reflect anything but cultural history.
A more contentious issue is the notion of racial differences in intelligence. The idea that observed differences in cognitive abilities between populations might be driven by genetic differences is an old one, certainly popular with Galton and Davenport, for example. But it achieved notoriety with the publication of the 1994 book The Bell Curve: Intelligence and Class Structure in American Life, by psychologist Richard Herrnstein and political scientist Charles Murray. Among other things, they highlighted differences in average scores on IQ tests between various ethnic groups across America, noting a lower average among African-Americans and Hispanics than among Whites or Asians. Since IQ is a heritable trait but can also be affected by environmental factors, they went on to state: “It seems to us highly likely that both genes and the environment have something to do with racial differences.”3
This is couched in the most reasonable-sounding terms—simply presenting a “probably a bit of both” scenario as the most likely situation. This seems to put the burden of proof on people who argue that genetic differences will not contribute to differences in intelligence across population groups. But is there any evidence for their hypothesis? And is it really likely?
Regarding heritability, twin and family studies only show that much of the variation in IQ within the studied populations is due to genetic variation. This says nothing about what might cause differences between populations. A trait could be completely heritable within each of two populations yet show a difference between them that is completely environmental. As noted previously, body mass index is highly heritable in both the United States and France, but the large difference in average body mass index between these countries is caused by environmental factors, not genetic ones.
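A toy simulation, with invented numbers, makes that logic concrete: a trait can be entirely heritable within each of two groups while the entire gap between them is environmental.

```python
# Illustrative only: within each group, all variation in the trait is
# genetic by construction (within-group heritability ~1), yet the whole
# between-group gap comes from a shared environmental offset.
import random

N = 50_000
genes_a = [random.gauss(100, 15) for _ in range(N)]   # identical genetic
genes_b = [random.gauss(100, 15) for _ in range(N)]   # distributions

ENV_OFFSET = -10     # environmental disadvantage applied to group B only
trait_a = genes_a                           # phenotype = genotype, group A
trait_b = [g + ENV_OFFSET for g in genes_b]

print(f"group A mean: {sum(trait_a) / N:.1f}")
print(f"group B mean: {sum(trait_b) / N:.1f}")
# The ~10-point gap is entirely environmental, even though heritability
# within each group is complete: within-group heritability says nothing
# about the cause of between-group differences.
```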
In the case of intelligence, we know from trends over time that it is highly sensitive to factors such as general maternal and infant health, nutrition, education, and practices of abstract thinking. Changes to all of these factors have contributed to increases in average IQ scores across many nations over the past century, which have nothing to do with changes in genes. Given the historical and continuing inequities between racial groups in the United States and across the world, it would seem more appropriate to exhaust the possible contributions of these cultural factors before inferring any contribution from genetic differences.
Indeed, behavioral geneticists often rightly criticize sociological studies as being uninterpretable when they don’t control for known genetic confounds. For example, the idea that having books in the house causally increases children’s IQ is hopelessly confounded by the fact that parents with higher IQ will likely have more books in their house and will also tend to have children with higher IQ, for genetic reasons. The converse is true here. We know that cultural factors affect IQ and we know that they differ very substantially between the groups concerned. The conclusion that differences in IQ test performance reflect, even in part, genetically driven differences in intellectual potential across races is thus hopelessly confounded and remains entirely speculative.
But, beyond that, such variation may be inherently unlikely. If intelligence is a general fitness indicator rather than a genetically modular trait, this changes the dynamics of possible selection on it. It is not enough to say that greater intelligence might have been selected for in one population—you have to explain why that would not have been the case in every population. The selective pressures that led to the emergence of Homo sapiens may well have directly favored mutations that led to greater intelligence; that is, selection would have been acting on that trait itself. But once that complex system was in place, the main variation would be in the load of mutations that impair it, which will likely have effects on many traits and impair fitness generally. General fitness should always be selected for, by definition, in any population, meaning intelligence should get a free ride—it will be subject to stabilizing selection, whether or not it is the thing being selected for.
For all these reasons, none of the evidence for genetic effects on psychological traits presented in this book should be taken as supporting the case for a genetic contribution to differences in such traits between populations.
DETERMINISM
I have presented the case in this book for the existence of innate differences in psychological traits, arising from two sources: genetic differences in the program specifying brain development and function, and random variation in how that program plays out in an actual individual. The second source is often overlooked, but its effects mean that many traits are even more innate than heritability estimates alone would suggest. In short, we’re born different from each other. The slate is most definitively not blank. To many people, this may be the most obvious thing in the world, based on their common experience of other human beings, especially children. To others, however, it may smack of genetic determinism. It may sound like a claim that our genes determine our behavior—that we are slaves to them with no real autonomy.
This is not the case at all. The claim is far more modest. It is simply this: that variation in our genes and the way our brains develop cause differences in innate behavioral predispositions—variation in our behavioral tendencies and capacities. Those predispositions certainly influence how we behave in any given circumstance but do not by themselves determine it—they just generate a baseline on top of which other processes act. We learn from our experiences, we adapt to our environments, we develop habitual ways of acting that are in part driven by our personality traits, but that are also appropriately context dependent.
Along the same lines, the evidence that parenting does not have a strong influence on our behavioral traits should not be taken as implying that parenting does not affect our behavior at all. We may not be molding our children’s personalities, but we certainly influence the way they adapt to the world. Our actual behavior at any moment is influenced as much by these characteristic adaptations and by the expectations of family and society—and, indeed, the expectations we build up of ourselves—as by our underlying temperament. Slates don’t have to be blank to be written on.
But if I can evade the charge of genetic determinism, I may still appear guilty to some of the related crime of neuroscientific reductionism. In delving into the detailed mechanisms underlying mental functions and what may cause them to vary, it may seem as if I am reducing those mental functions to the level of cells and molecules, none of which has a mind or is capable of subjective experience. It may look like such explanations leave no room for real autonomy, for thoughts and ideas and feelings and desires and intentions to have any causal power, for free will to exist at all.
Once again, this is not the case—nothing I have presented in this book is a threat to our general notions of autonomy and free will. The fact that there is a physical mechanism underlying our thoughts, feelings, and decisions does not mean we do not have free will. After all, to expect that thoughts, feelings, and decisions would not have any physical substrate is to fall into dualism—the idea that the brain and mind are really fundamentally distinct things, the mind somehow immaterial. This is a fallacy, and one that is hard to climb back out of once you’ve fallen into it. The mind is not a thing at all—at least, it is not an object. It is a process, or a set of processes—it is, simply put, the brain at work.
Thoughts and feelings and choices are mediated by the physical flux of molecules in the brain, but this does not mean they can be reduced to it. They are emergent phenomena with causal power in and of themselves. Some pattern of neural activity leads to a certain action by virtue of its comprising a thought with some content and meaning for the organism, not merely because the atoms bumping around in a certain way necessarily lead to them bumping around in a new way in a subsequent moment. The precise details of all the atoms don’t matter and don’t have causal force, because most of those details are lost in the processing of information through the neural hierarchy. What matters is the information content inherent in the patterns of neuronal firing that those atoms represent and what that information means. When I make a decision it’s because my patterns of neural activity at that moment mean something, to me.
We all have predispositions that make us more likely to act in certain ways in certain situations, but that doesn’t mean that on any given occasion we have to act like that. We still have free will, just not in the sense that we can choose to do any old random thing at any moment. I mean, we could, we just usually don’t, because we are mostly guided by our habits (which have kept us alive so far) and, when we do make deliberative decisions, it is between a limited set of options that our brain suggests. So, we are not completely free; we are constrained by our psychological nature to a certain extent. But really that’s okay—that’s what being a self entails. Those constraints are essential for the continuity of our selves over time. Having free will doesn’t mean doing things for no reason; it means doing them for your reasons. And it entails moral responsibility in the pragmatic sense that we are judged not just on our actions but also on our reasons for those actions.
This does raise a provocative idea, however—that some of us may have more free will than others. In each of us, the degree of self-control varies in different circumstances, depending on whether we are tired, hungry, distracted, stressed, sleep deprived, intoxicated, infatuated, and so on. And over our lifetimes the impetuosity of youth gives way to the circumspection of adulthood. But the mechanisms that allow us to exercise deliberative control over habitual or reflexive actions also clearly vary in a more trait-like fashion between people. Some people are far more impulsive than others, as we saw in chapter 6. Many suffer from compulsions or obsessions or addictive behavior that they cannot control. And people in the grip of psychosis or mania or depression are clearly not in full control of their actions, which is why we do not hold them legally responsible. You could say that some people are more at the mercy of their biology than others, though that difference itself is a matter of biology.
SELF-HELP
There is a massive self-help industry devoted to the idea that we can change ourselves—our habits, our behaviors, even our personalities. From psychotherapy or cognitive behavioral therapy to mindfulness, brain training, or simply harnessing the power of positive thinking, there are scores of different approaches and an endless supply of books, videos, seminars, and other materials to help you become your best self. These suggest that we can learn the habits of highly effective people, and we too will become highly effective. That we can overcome stress, anxiety, negative thoughts, relationship problems, and low self-esteem, manage our anger, boost our mood, achieve the goals we always hoped for, and generally become happier people. The slightly paraphrased title of one self-help book promises to show you how to rewire your brain to overcome anxiety, boost your confidence, and change your life. Others proclaim that you can “Immediately achieve massive results using powerful (fill in the blank) techniques!”
Lately, what had been an almost exclusively psychological literature has been suffused with supposedly groundbreaking discoveries from neuroscience, which seem to confirm the possibility of change and elucidate the mechanisms by which it can occur. Two areas in particular have caught the public’s imagination.
The first is neuroplasticity or brain plasticity—the idea that the structure of the brain is not fixed but quite malleable, with the implication that prewired need not mean hardwired. And this is quite true, to a certain extent. The brain is constantly rewiring itself on a cellular scale—that is how it learns and lays down memories to allow behavioral adaptation based on experience, by forming new synaptic connections between neurons or pruning others away. There is nothing revolutionary about this—it is simply how brains work. It is also true that, after injury for example, the brain can sometimes rewire circuits on a much larger scale, which can aid recovery or compensation for the injury in some cases or lead to additional problems in others.
But the brain is not infinitely malleable, and for good reason—it has to balance the need to change with the need to maintain the physical structure that mediates the coherence and continuity of the self. If it were undergoing wholesale changes all the time we would never be us. While young brains are highly plastic and responsive, these properties diminish drastically beyond a certain stage of maturation—indeed, they are actively held in check by a whole suite of cellular and extracellular changes. The period of plasticity is extremely protracted in humans, reflecting the fact that we have greater cognitive and neural capacity to continue to learn from experience over longer periods of time. But at some point the brain and the individual have to stop becoming and just be.
This limits the amount of change we can expect to achieve. It is certainly possible to change our behaviors—with enough effort you can break a habit or overcome an addiction. And that may be a perfectly laudable and worthwhile goal in many circumstances. But there is little evidence to support the idea that we can really change our personality traits, that we could, for example, learn to be biologically less neurotic or more conscientious. You may be able to learn behavioral strategies that allow you to adapt better to the demands of your life, but these are unlikely to change the predispositions themselves.
For children the situation may be different. There may be periods in which intensive behavioral interventions can alter developmental trajectories. For example, a child with autism may be taught to consciously look at people’s faces as they are speaking—this may encourage better linguistic and social development than would have tended to occur otherwise. But even here the opportunities to effect long-lasting change are still limited. These kinds of interventions, in either typically or atypically developing children, will always be fighting against both the innate predispositions themselves and their cascading effects on the experiences individuals choose and the environments they select or create, which will tend to reinforce innate traits.
The second idea that is popular these days is known as epigenetics. We came across the word epigenetic in chapter 4, where it was used to refer to the processes of development through which an individual emerges. The modern usage refers to something quite distinct—the molecular mechanisms that cells use to regulate gene expression. In any given cell at any given time, some genes will be active—the proteins they encode are actively being produced—while others will be silent. This allows muscle cells to make muscle cell proteins and bone cells to make bone cell proteins, and so on. But cells also respond to changes, either internal or external to the cell, by increasing or decreasing the amounts of proteins made from various genes. Epigenetic mechanisms of gene regulation allow these kinds of changes to be locked in place for some period of time, sometimes even through the life of the cell and any cells it produces. That is precisely what happens in development as different cell types differentiate from each other.
The attraction of epigenetics for the self-help industry stems from the idea that it acts as a form of cellular memory, turning genes on or off in response to experience and keeping them that way for long periods of time. The problem comes from thinking that turning genes on or off equates somehow to turning traits on or off. If you’re talking about something like skin pigmentation, that might apply—I can expose my skin to the sun for a period of time and this will lead to epigenetic changes in the genes controlling pigment production, and I’ll get a nice tan that will last for weeks. But for psychological traits, the link between gene action at a molecular level and expression of traits at a behavioral level is far too indirect, nonspecific, and combinatorial for such a relationship to hold. Moreover, if much of the variation in these traits comes from how the brain developed, the idea that you can change them by tweaking some genes in adults becomes far less plausible. So, despite their current cachet, neuroplasticity and epigenetics don’t provide any magical means to dramatically alter our psychological traits.
This brings me to a final point, and really it is just my personal opinion. To me, the self-help industry is built on an insidious and even slightly poisonous message. It all sounds very positive—the possibility of change—but really it relies on the idea that you’re not good enough as you are, that other people are better than you, but if you buy our products or take our classes or just think positively enough then you can be better too. It plays on some of the least attractive aspects of human psychology, often explicitly using envy as a marketing ploy—of neighbors who’ve got more money than you, that guy at work who got promoted ahead of you, or that woman who just seems to have the perfect life. And it is often targeted at the more neurotic among us, with claims of overcoming anxiety, worry, stress, low confidence, and low self-esteem, playing on those very personality traits to convince people they need to be changed.
This is not a self-help book—clearly. But perhaps there is something positive in highlighting a different view. There is a power in accepting people the way they are—our friends, partners, workmates, children, siblings, and especially ourselves. People really are born different from each other and those differences persist. We’re shy, smart, wild, kind, anxious, impulsive, hardworking, absent-minded, quick-tempered. We literally see the world differently, think differently, and feel things differently. Some of us make our way through the world with ease, and some of us struggle to fit in or get along or keep it together. Denying those differences or constantly telling people they should change is not helpful to anyone. We should recognize the diversity of our human natures, accept it, embrace it, even celebrate it.
1. F. R. Marshall, “The Relation of Biology to Agriculture,” Pop. Sci. 78 (1911): 553.
I will reiterate and expand on some of these general principles below and, especially, emphasize the complexities and subtleties in the relationship between genetic variation and variation in psychological traits. I will also try to highlight not just what the scientific findings mean but also what they don’t mean, to clarify or preempt any simplifications, misunderstandings, or overextrapolation.
And, finally, I will consider some important implications of these findings across a range of societal, ethical, and philosophical issues. The genetic and neuroscientific discoveries described in this book are poised to change our ability to control our own biology, as well as our view of our selves and of the nature of humanity. We would do well to consider the potential ramifications now, because the pace of discovery will only accelerate.
WHAT GENES ARE FOR
Twin, family, and population studies have all conclusively shown that psychological traits are at least partly, and sometimes largely, heritable—that is, a sizable portion of the variation that we see in these traits across the population is attributable to genetic variation. However, as we have seen in the preceding chapters, the relationship between genes and traits is far from simple.
The fact that a given trait is heritable seems to suggest that there must be genes for that trait. But phrasing it in that way is a serious conceptual trap. It implies that genes exist that are dedicated to that function—that there are genes for intelligence or sociability or visual perception. But this risks confusing the two meanings of the word gene: one, from the study of heredity, refers to genetic variants that affect a trait; the other, from molecular biology, refers to the stretches of DNA that encode proteins with various biochemical or cellular functions.
If the trait in question is defined at the cellular level, then those two meanings may converge—for example, differences in eye color arise from mutations in genes that encode enzymes that make pigment in the cells of the iris. They really are genes for eye color—that is the job of those proteins. Similarly, mutations that cause cancer, where cells proliferate out of control, mostly affect genes encoding proteins that directly control cellular proliferation. That kind of direct relationship between the effects of genetic variation and the functions of the encoded gene products makes complete sense if you are looking at effects on a cellular level. But it makes no sense if you are talking about the emergent functions of complex multicellular systems, especially the human brain.
These emergent functions rely on the interactions of hundreds of different cell types, organized into highly specified circuits, first at the local level of microcircuits and then at higher and higher levels of connectivity across brain regions and distributed systems. It requires the actions of thousands of genes to build these circuits and mediate the biochemical functions of all the component cells. Variation in any of those genes could, in principle, affect how any given neural system works and manifest as variation in a behavioral trait.
The fact that a trait is heritable means only that there are genetic variants that affect that trait. But for the kinds of traits we are talking about, most of those genetic effects will be highly indirect. Natural selection may see such variants as “genes for intelligence” or “genes for sociability” because natural selection only gets to see the final phenotype. That does not mean that the encoded gene products are directly involved in that psychological function. There are no genes for complex psychological functions—there are neural systems for such functions and genes that build them.
This has important consequences for understanding the relationship between genotypes and psychological phenotypes. First, a lot of the variation in mature function stems from differences in how the neural systems develop. Our brains really do come wired differently—literally, not metaphorically. Here, the effects of genetic variation combine with those due to inherent noise in the cellular processes of development themselves. The program encoded in the genome can only specify developmental rules, not precise outcomes. And the more genetic variants there are affecting that program, the greater the variability in outcome will be. Any given genotype encodes a range of potential outcomes but only one—a completely unique individual—will actually be realized.
Second, the genetic architecture of such traits is not as modular as often thought—any given neural system can be affected by variation in probably hundreds of genes. Conversely, variation in any given gene will typically affect multiple functions. In fact, even the neural systems are not as modular and dedicated as once believed—most cells, circuits, or brain regions can flexibly engage in various tasks by communicating with different subsets of other cells, circuits, or regions. When we open the lid of the black box and look inside, we should not expect to see lots of smaller black boxes. It’s a mess in there (see figure 11.1).
Figure 11.1 Simple versus complex traits. A. An overly simplistic view of the relationship between genes and behavioral traits, mediated by direct effects on particular brain regions, circuits, or neurotransmitter pathways. B. A more realistic view of the complex genetic architecture of behavioral traits.
And, finally, the genetic variants that contribute to any given trait are highly dynamic over time. Natural selection has spent millions of years crafting the finely honed machine that is the human brain, and it’s not about to stand back and let it all go to pot. New mutations arise all the time, but those that impair evolutionary fitness—by affecting survival and reproduction—are selected against, with the ones with most severe effects rapidly disappearing from the population. This means that most traits will be dominated by rare mutations that wink in and out of existence in populations over time, rather than a pool of standing variation that just gets reshuffled from generation to generation. Moreover, the effects of many such variants (rare and common) will interact in complex ways in any given individual. All of these factors have important implications for the possible application of genetic information in predicting the traits of individuals.
GENETIC PREDICTION AND SELECTION—THE NEW EUGENICS?
The complexities described above will make it more challenging to identify specific genetic variants associated with specific psychological traits. And, even where they are identified, predictions of phenotypes based on genetic information will remain imperfect. The effects of single mutations almost always vary across individuals, depending on other genetic variants in their genomes, and multiple variants will often interact in complex ways. It may be possible to derive an average risk of a condition or an average value of a trait from population studies, but it will be very difficult to predict accurately in any individual, who will inevitably have a previously unseen combination of genetic variants in their genome. Moreover, developmental variability places a strong limit on how accurate genetic predictions can ever be, as it means that genotype-phenotype relationships are not just limited by current knowledge but are essentially probabilistic and will therefore never be predictable with complete accuracy.
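A minimal simulation can make this ceiling concrete. Assuming a purely additive polygenic trait with illustrative variance shares (50% genetic, 25% developmental noise, 25% other environment—made-up numbers, not estimates), even a predictor with perfect knowledge of every variant's effect cannot explain more than the genetic share:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_variants = 10_000, 1_000

# Hypothetical additive model: each person carries 0, 1, or 2 copies of
# each variant, and each variant nudges the trait up or down slightly.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))
effects = rng.normal(0, 1, size=n_variants)
genetic_score = genotypes @ effects

def z(x):
    """Standardize to mean 0, variance 1 so components can be mixed."""
    return (x - x.mean()) / x.std()

# Illustrative variance shares: 50% genetic, 25% developmental noise,
# 25% other environmental influences (assumed, for the example only).
trait = (np.sqrt(0.50) * z(genetic_score)
         + np.sqrt(0.25) * rng.normal(size=n_people)   # developmental noise
         + np.sqrt(0.25) * rng.normal(size=n_people))  # environment

# Even this *perfect* genetic predictor captures only the genetic share,
# so its variance explained is capped near the heritability.
r = np.corrcoef(genetic_score, trait)[0, 1]
print(f"r = {r:.2f}, variance explained = {r**2:.2f}")  # ~0.71, ~0.50
```

Real polygenic predictors, built from noisy estimates of each variant's effect, fall well short of even this ceiling.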
However, genetic information doesn't have to be 100% accurate in predicting traits or disorders for it to be useful. Even mutations that merely increase the risk of a condition, or variants that tend to increase or decrease the value of a trait, will likely be deemed actionable and may be used in reproductive decisions and possibly in other areas. We already know, for example, of hundreds of genes that, when mutated, increase the risk of neurodevelopmental disorders, manifesting as intellectual disability, autism, epilepsy, schizophrenia, or other diagnostic categories. Many of these mutations also affect intelligence more generally, even in people not severely enough affected to be clinically diagnosed, and other genetic variants with subtle effects on intelligence are also being discovered. A number of mutations have been associated with impulsivity, aggression, and antisocial behavior—ones associated with other personality profiles, such as psychopathy, are sure to follow. And it is only a matter of time before mutations affecting other traits, like sexual orientation, or conditions like synesthesia or face blindness are identified.
With this knowledge will come the opportunity to act on it. The most obvious way in which genetic information will be used—indeed, the way in which it is already being used—is in prenatal screening of fetuses or preimplantation screening of embryos generated by in vitro fertilization (IVF). Genetic screening of fetuses for chromosomal conditions such as Down syndrome is routinely done in many countries, and this could readily be extended to screen for deletions or duplications associated with neurodevelopmental disorders more broadly. It is even now possible to sequence the entire fetal genome noninvasively, by sampling the small amounts of cell-free fetal DNA that circulate in the maternal bloodstream. This will allow the identification of potentially disease-causing single-base changes to the DNA sequence, not just large chromosomal aberrations. The expected consequence, where such measures are available, is a concomitant increase in the number of terminations and a decrease in the number of children born with these conditions.
IVF provides even greater scope for the use of genetic information, as multiple embryos are generated at once. It is quite routine to perform genetic testing on embryos to screen for chromosomal anomalies, especially in older parents or ones with a history of miscarriages. And genetic testing is also done in cases where one or both parents are carriers of a known specific mutation associated with a disease. In such cases, unaffected embryos can be chosen for implantation. Genetic testing is also used in some jurisdictions to select embryos by sex, to screen for immunological compatibility with a previously born child in need of an organ transplant (so-called “savior siblings”) or even, in cases where both parents have a condition like deafness or dwarfism, to select for the presence of mutations that result in that condition in their children.
As with fetal screening, the range of genetic variants and the number of associated conditions or traits that can be screened for will only increase with time. Currently, a limiting factor on how much can be screened for is the number of eggs that can be obtained for fertilization. This may change with the recent development of techniques to generate large numbers of eggs in the lab from cultured stem cells (themselves derived from a person's skin cells, for example). Such an approach would be costly, but it could allow hundreds of embryos to be generated and screened at once, changing the possible scope and pace of genetic selection.
Clearly, the ethics of the use of genetic information in this way merits some consideration. This is especially true given the dark history of eugenics and its association with the science of genetics. Francis Galton, whom we met in earlier chapters, coined the term “eugenics” in 1883 to refer to the idea of selective breeding in humans to “improve” the genetic stock of the population. He argued that what had been done in dog breeding, with a rapid response to strong selection, could just as well be done in humans. In particular, he bemoaned what he saw as the reproductive excesses of the lower classes in Great Britain that threatened to flood the gene pool with inferior genetic variants, which would, over time, degrade the average capabilities of the population. To counter this threat, he advocated programs to encourage people of higher intellectual ability to breed early and often.
In the early 1900s eugenics achieved wide popularity in Britain and especially in the United States. Prominent geneticists like Charles Davenport, and even celebrities like the aviator Charles Lindbergh, threw their weight behind it, and it came to be entangled with issues of race and immigration. Davenport helped establish the eugenics committee of the American Breeders' Association, which took on the rather chilling mission to "investigate and report on heredity in the human race, and emphasize the value of superior blood and the menace to society of inferior blood."1 Rather than just promoting breeding by those with supposedly high-quality genes, American eugenic policies focused on preventing breeding by those with qualities deemed inferior. This included marriage bans and forced sterilization of the "feeble-minded" and even people with epilepsy, to prevent the passing on of the "genetic taint," to use the terminology of the day. Such policies persisted as late as the 1970s in some US states. The underlying principles of eugenics and the idea of racial superiority were warmly embraced in Nazi Germany and used to justify many of the horrors that followed.
Eventually, the principles of eugenics and the policies of socially engineering the gene pool, from encouraging marriage to outright genocide, were rejected by modern societies. There are some schemes in place in certain countries or ethnic groups where specific genetic conditions are especially common that encourage or require people who wish to marry to undergo genetic testing for those specific mutations. But the kind of broad, government-imposed restriction of breeding opportunities based on undesirable traits seen at the height of the eugenics movement is no longer in place in any country.
However, in its place is emerging a different idea, one based instead on the principles of personal or parental choice. This is seen by many as a natural extension of already existing options for reproductive choices available in many countries. The argument goes that if termination of pregnancies or selection of embryos for implantation is permissible at all, there is no reason that such choices could not be made on the basis of genetic information. Different jurisdictions have taken different views of this. For example, preimplantation testing for genetic conditions is limited in the United Kingdom to a specified list of conditions, though this list continues to grow over time. And testing for sex is permissible in the United States, but not in most European countries.
There are no easy answers here. You could argue that if no one is harmed (and embryos not being implanted don’t count, because that happens all the time anyway), then use of any genetic information should be permitted. On the other hand, this touches on much wider issues. Choosing based on clear medical grounds is one thing; choosing between two healthy embryos is another. Do parents really have the right to choose the traits of their child? Does this change the nature of the relationship? Does it incur some responsibility on parents for the traits of their offspring, if they either have or have not selected them? Will it alter how people who are born with conditions that are otherwise typically screened out are perceived and treated in society? Will changing practices put pressure on parents to make certain decisions?
I am not taking or advocating any position here—all of this is just to highlight the fact that these ethical issues exist and merit some discussion. And as the pace of genetic discoveries advances and new technologies develop, new issues will arise—ones that we may not even have conceived of yet. For example, the recent development of highly precise genome-editing technologies (the CRISPR/Cas9 system, referred to in chapter 10) opens the possibility of going beyond screening to the genetic modification of human embryos. Such editing is currently outlawed where the modification would be passed on through the germline, but this could change. Societies will have to grapple with these issues and make principled decisions as to what should be permitted. We would do well to consider the implications before they materialize, or we will be closing the barn door after the horse has bolted.
One particularly touchy issue is the idea of selecting for intelligence. We already select against mutations that cause intellectual disability. It seems a small step to extend this to allow selection for intelligence across the typical range, if we have the means to do it. Indeed, some would argue that there is nothing to discuss—that it is obvious we should allow parents to make that choice if they wish. This can swing back into eugenics territory very quickly, though. You can argue that being more intelligent will be better, for the person involved, than being less intelligent, all other things being equal. After all, higher intelligence is associated with greater general health, better life outcomes across a range of measures, and increased longevity. But that does not mean that more intelligent persons (or embryos that will become more intelligent persons) are better than, or of "higher quality" than, less intelligent persons, as some commentators have asserted. Nor does it mean that it would be better for society if average intelligence were increased. That's right back to the driving principles of Galton and Davenport.
From a technical point of view, whether we will be able to select for intelligence depends on what the true genetic architecture of the trait is. First, using genetics to predict intelligence across the whole range of the population is one thing—using it to predict the much smaller expected differences between siblings is a totally different proposition, one that would require greater precision than may be obtainable, especially given the influence of developmental variation, which is essentially unpredictable. Second, I described a model in chapter 8 that sees intelligence mainly as a general fitness indicator, reflecting to a large extent the general robustness of brain development and the genomic program that encodes it. If that is true, then intelligence may be determined by general mutational load and the impact of these mutations on brain development, rather than a specific, dedicated set of genes “for intelligence.” Selecting for greater intelligence may thus be a matter of choosing embryos with the lowest load of severe mutations likely to affect neural development. Indeed, that would be expected to increase general health as well.
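As a toy illustration of that fitness-indicator model—with invented numbers for mutation counts and per-mutation effects, not measured values—the trait can be modeled as a baseline degraded by each person's random burden of deleterious variants, with no dedicated trait genes anywhere in the model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 10_000

# Each person carries a random burden of mutations affecting brain
# development; the counts and effect sizes are purely illustrative.
severe = rng.poisson(200, n_people)    # severely protein-disrupting variants
mild = rng.poisson(2_000, n_people)    # variants with milder effects

# The trait reflects summed damage from the load plus developmental noise—
# there are no "genes for the trait" anywhere in this model.
trait = 100 - 0.05 * severe - 0.005 * mild + rng.normal(0, 5, n_people)

low_load = severe < 190
high_load = severe > 210
print(f"mean trait with low load:  {trait[low_load].mean():.1f}")
print(f"mean trait with high load: {trait[high_load].mean():.1f}")
```

Under these assumptions, individuals with the lightest mutational load score highest on average, even though no single gene is "for" the trait at all.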
Again, I am not advocating for this, merely laying out the technical parameters. And, in considering it, it is worth remembering the law of unintended consequences. First, any given mutation is likely to have multiple effects on multiple systems—some of these may be unknown or unpredictable, and not all of them will necessarily be negative. Second, we are in fact adapted to a certain mutational load—our developmental programs have evolved with such a load in place. We all carry approximately 200 severe mutations—ones that seriously impair the production or function of a protein—as well as thousands of less severe genetic variants. And we always have. Every human who has ever lived has carried such a burden, as has every animal. There has never been a human who is fully "wild type" across the entire genome. We may have the opportunity to do what natural selection never could—to purge the genome of all such mutations at once, or to reach that point over successive generations. But really we have no idea what the outcome would be—maybe development will proceed perfectly well with all systems working maximally, maybe not. Perhaps we'll all end up super healthy and smart and ridiculously good-looking—and identical.
Genetic information is likely to be used in many areas outside of reproductive decisions too. Perhaps the most obvious is in insurance, where information that predicts people's future health could very well be gleaned from their genomes. This raises serious questions. For example, would merely carrying a mutation that statistically increases the risk of developing schizophrenia at some future date be considered a preexisting condition? Would variants that predispose to risky behavior or suicidality be grounds to deny someone life insurance or charge that person higher premiums? Currently, many countries prohibit insurance companies from using such information to deny people coverage (for example, under the Genetic Information Nondiscrimination Act in the United States), but the policies are quite uneven and, of course, could change. Indeed, a bill (H.R. 1313) under consideration in the United States at the time of writing (2017) would allow employers to demand that employees undergo genetic testing as part of a "wellness" program or face an increase in their health insurance costs.
It’s also not hard to see how genetic information that predicts behavioral traits or cognitive abilities would be of interest to schools, colleges, or employers. IQ and aptitude tests are already widely used—these could conceivably be replaced by genetic predictors. At the moment, such predictors remain hypothetical and they will never be perfect, but they could be developed to the point where they contain some information deemed to be useful in a prospective fashion—say, for streaming children in schools. We could even see the prospect of genetic profiles being used in dating, as depicted in the science fiction film Gattaca (along with many of the other scenarios raised here). After all, we already choose mates based on many different traits with genetic underpinnings, and information on such traits is commonly used in selecting sperm or egg donors. Direct-to-consumer genetic profiling is a booming business and is already straying into many of these areas. Science fiction is fast becoming science fact. Buckle up!
A NOTE ON RACE AND GROUP DIFFERENCES
Up to this point, we have been concentrating on the origins of differences between individuals, but have not considered the possibility of average differences between groups of individuals, or populations. (With the exception of sex differences, which are a special case, given that there are strong evolutionary reasons for sex differences in behavior and known, conserved mechanisms that institute them.) If psychological traits have a partly genetic basis, so that relatives are more similar in such traits to each other than to random strangers, then it seems reasonable to suppose that such similarity might extend across whole populations who share a common ancestry and cause differences between populations with different ancestries. There are dozens of physical traits—like skin color, facial morphology, or height, for example—that do indeed differ between populations in this way. That this might extend to psychological traits is thus not inconceivable.
However, this is not a given. Systematic differences between groups can sometimes arise just by "genetic drift"—the random divergence between two populations in the frequencies of genetic variants, some of which may affect traits. But that mainly applies to traits that are evolutionarily neutral—where it really doesn't matter much if the trait value is high or low. For traits with adaptive value, however, the emergence of systematic differences requires some active force to drive it—some selective advantage to a higher or lower value of the trait.
Most of the physical traits that differ between populations have clear adaptive effects—there is a reason that they differ. For example, lighter skin evolved independently a couple of times as humans migrated to more northern latitudes, as an adaptation to lower light conditions. While dark skin is protective in regions with high sunlight, in low light it prevents adequate production of vitamin D. Similarly, persistence into adulthood of expression of the enzyme lactase, which breaks down lactose, the main sugar in milk, arose recently (in the past several thousand years) in European populations with the advent of dairy farming. And genetic adaptations to high altitude are seen in some populations, like Tibetans.
However, even if comparable forces did apply for psychological traits (and there is no evidence that they do or have), their genetic architecture makes this kind of directional selection much more difficult. The physical traits mentioned above are driven by changes to one or two genes, with highly specific effects. But we have seen that psychological traits can be affected by genetic variants in hundreds or thousands of genes, which often also affect other traits. That means, first, that any given mutation that increases the level of one trait may have offsetting effects on other traits. This will tend to constrain the possibilities for change. And second, it means that directional selection will face a losing battle against mutation, which will instead constantly generate diversity within groups. There would need to be an extremely strong selective force—similar to the levels of artificial selection that dog breeds were subjected to—in order to drive stable group differences for these kinds of traits.
In addition, for personality traits at least, diversity may actually be promoted because there is no single combination of parameters that is optimal in all situations or all environments. Any given profile will lead to more optimal behaviors in some contexts, but less optimal ones in others. For example, in some circumstances cautious people will do better (they may be less likely to get killed, for example). In other situations more daring people may do better (they may be more likely to obtain food or a mating opportunity). Whether one profile outperforms the others in terms of evolutionary fitness depends on how often those different types of situations arise in that particular environment.
But we should remember that the most important thing in each person’s environment is other people. Those are the ones we can cooperate or compete with, those are the threats that pose the most danger and the sources of the most relevant opportunities. That means that the optimal profile of behavioral parameters for any individual depends on the profiles of everyone else around that person. Not in a simple way, however; it’s not the case that the best solution is to be like everyone else—sometimes quite the opposite. If, for example, most other people are quite reckless, then it may pay to be more cautious. While half of them are dying off because they’ve put themselves in too much danger, you can hang back and share in the spoils. (It may not be admirable, but natural selection won’t care.) If, on the other hand, you’re in a population of timid people, you may gain an advantage by being braver, especially in obtaining mating opportunities.
This is classic game theory—the optimal strategy for any individual depends on the strategies employed by others. In evolutionary terms it leads to what is known as frequency-dependent selection. The fitness value of any given phenotype (a behavioral strategy in this case) decreases as the frequency of that phenotype in the population increases beyond a certain point. Any given strategy works better while it’s still somewhat rare, which tends to prevent genetic variants that favor any specific behavioral profile from ever getting fixed in the population. Diversity thus arises not just from a fundamental inability to genetically specify the same profile in all individuals but also from the positive actions of natural selection.
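A toy replicator-dynamics simulation, with assumed hawk-dove-style payoffs (the numbers below are invented for the example), shows how this plays out: whichever strategy is rare has the advantage, so the population settles at a stable mix rather than fixing either type.

```python
# Toy frequency-dependent selection: each strategy does best when rare,
# as in a hawk-dove game (payoff values are illustrative assumptions).
def fitness_bold(p):
    return 1.0 + 0.5 * (1 - p)   # boldness pays when few others are bold

def fitness_cautious(p):
    return 1.0 + 0.5 * p         # caution pays in a population of the reckless

p = 0.05                          # start with bold types rare
for generation in range(200):
    w_bold, w_cautious = fitness_bold(p), fitness_cautious(p)
    mean_w = p * w_bold + (1 - p) * w_cautious
    p = p * w_bold / mean_w       # standard discrete replicator update

print(f"equilibrium frequency of bold types ≈ {p:.2f}")  # settles near 0.50
```

Starting from any nonzero frequency, the population converges on the mixed equilibrium—neither behavioral type can ever take over.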
So, while a naïve comparison with physical traits suggests that psychological traits might well vary between groups, a more detailed consideration of their genetic architecture reveals just how unusual a scenario would have to exist for this to arise. It is by no means impossible—but it would require strong and consistent environmental differences between groups to create systematic pressures strong enough to drive genetic adaptation for these traits. Which brings us to how such groups are defined and the question of whether the categories typically studied have any real validity.
Most of the discussion in this area centers on the colloquial idea of "races," but exactly how many such categories exist and how they should be defined are hard to agree on. Anthropologists in the 1800s identified three main races—Black, White, and Asian—roughly reflecting continental ancestry. But a fourth soon had to be added when it was recognized that Aboriginal Australians are really very distinct from Africans, despite having similar skin color. And, of course, each of those categories can be subdivided more and more—among Whites, for example, we could recognize Hispanics, Jews, Arabs, etc. In terms of shared ancestry, thousands of such groups can be defined across all areas of the globe. Some will be reasonably discrete, based on a history of isolation and restricted breeding, while others are much more mixed, reflecting more extensive migration and interbreeding.
Modern genetics can reveal much of this history and clearly illustrates the complexities of humanity’s global family tree. If you cluster people based on genetic similarity, you can indeed derive several major categories, but you can also just as well go to deeper levels and reveal many, many more. There is no reason to think that any one level should have privileged status—none of these groupings reflects a natural kind, in the way that sex does. You can look for trends at the level of Africans versus non-Africans, for example, but you can also look at the level of ethnic groups like Bantu, Amhara, Yoruba, Celts, Basques, Finns, Japanese, American Indians, Maori, and so on. The decision to stop at any given level of clustering is purely arbitrary, and the larger and more ancient the cluster, the greater diversity there will be within that group, both genetically and in terms of the environments to which they have been exposed.
This is an important point when considering claims of racial differences in behavior and the even stronger claim that these are driven by genetic differences. For example, in his 2014 book A Troublesome Inheritance: Genes, Race and Human History, journalist Nicholas Wade argues, first, that strong and stable differences in behavioral or cognitive traits exist between five major racial categories and, second, that these are driven by genetic differences, reflecting adaptation to different historical societal structures across continents. As the author admits, such claims are “leaving the world of hard science and entering into a much more speculative arena at the interface of history, economics and human evolution.”2 Quite. It is a complete non sequitur to claim that any cultural differences between populations must be caused by genetic differences. There is in fact no evidence at all that observed or supposed differences in behavioral patterns between populations reflect anything but cultural history.
A more contentious issue is the notion of racial differences in intelligence. The idea that observed differences in cognitive abilities between populations might be driven by genetic differences is an old one, certainly popular with Galton and Davenport, for example. But it achieved notoriety with the publication of the 1994 book The Bell Curve: Intelligence and Class Structure in American Life, by psychologist Richard Herrnstein and political scientist Charles Murray. Among other things, they highlighted differences in average scores on IQ tests between various ethnic groups across America, noting a lower average among African Americans and Hispanics than among Whites or Asians. Since IQ is a heritable trait but can also be affected by environmental factors, they went on to state: "It seems to us highly likely that both genes and the environment have something to do with racial differences."3
This is couched in the most reasonable-sounding terms—simply presenting a “probably a bit of both” scenario as the most likely situation. This seems to put the burden of proof on people who argue that genetic differences will not contribute to differences in intelligence across population groups. But is there any evidence for their hypothesis? And is it really likely?
Regarding heritability, twin and family studies only show that much of the variation in IQ within the studied populations is due to genetic variation. This says nothing about what might cause differences between populations. A trait could be completely heritable within each of two populations yet show a difference between them that is completely environmental. As noted previously, body mass index is highly heritable in both the United States and France, but the large difference in average body mass index between these countries is caused by environmental factors, not genetic ones.
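This distinction is easy to demonstrate in a simulation. In the sketch below (all numbers illustrative), two populations are given identical distributions of genetic values and differ only by an environmental offset: the trait comes out highly heritable within each group, while the gap between the groups is entirely environmental by construction.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Two populations with IDENTICAL distributions of genetic values; the only
# systematic difference is an environmental offset (numbers are illustrative).
genes_a = rng.normal(0, 1, n)
genes_b = rng.normal(0, 1, n)
env_offset_b = 1.0                # e.g., a dietary difference between countries

trait_a = genes_a + rng.normal(0, 0.5, n)
trait_b = genes_b + rng.normal(0, 0.5, n) + env_offset_b

# Heritability (share of trait variance due to genes) is high WITHIN each
# group, yet the difference between group means is 100% environmental here.
h2_a = np.var(genes_a) / np.var(trait_a)
h2_b = np.var(genes_b) / np.var(trait_b)
print(f"heritability within A: {h2_a:.2f}, within B: {h2_b:.2f}")   # ~0.80
print(f"group difference in means: {trait_b.mean() - trait_a.mean():.2f}")
```

High within-group heritability is thus simply uninformative about the cause of any between-group difference.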
In the case of intelligence, we know from trends over time that it is highly sensitive to factors such as general maternal and infant health, nutrition, education, and practices of abstract thinking. Changes to all of these factors have contributed to increases in average IQ scores across many nations over the past century, which have nothing to do with changes in genes. Given the historical and continuing inequities between racial groups in the United States and across the world, it would seem more appropriate to exhaust the possible contributions of these cultural factors before inferring any contribution from genetic differences.
Indeed, behavioral geneticists often rightly criticize sociological studies as being uninterpretable when they don’t control for known genetic confounds. For example, the idea that having books in the house causally increases children’s IQ is hopelessly confounded by the fact that parents with higher IQ will likely have more books in their house and will also tend to have children with higher IQ, for genetic reasons. The converse is true here. We know that cultural factors affect IQ and we know that they differ very substantially between the groups concerned. The conclusion that differences in IQ test performance reflect, even in part, genetically driven differences in intellectual potential across races is thus hopelessly confounded and remains entirely speculative.
But, beyond that, such variation may be inherently unlikely. If intelligence is a general fitness indicator rather than a genetically modular trait, this changes the dynamics of possible selection on it. It is not enough to say that greater intelligence might have been selected for in one population—you have to explain why that would not have been the case in every population. The selective pressures that led to the emergence of Homo sapiens may well have directly favored mutations that led to greater intelligence; that is, selection would have been acting on that trait itself. But once that complex system was in place, the main variation would be in the load of mutations that impair it, which will likely have effects on many traits and impair fitness generally. General fitness should always be selected for, by definition, in any population, meaning intelligence should get a free ride—it will be subject to stabilizing selection, whether or not it is the thing being selected for.
For all these reasons, none of the evidence for genetic effects on psychological traits presented in this book should be taken as supporting the case for a genetic contribution to differences in such traits between populations.
DETERMINISM
I have presented the case in this book for the existence of innate differences in psychological traits, arising from two sources: genetic differences in the program specifying brain development and function, and random variation in how that program plays out in an actual individual. The second source is often overlooked, but its effects mean that many traits are even more innate than heritability estimates alone would suggest. In short, we’re born different from each other. The slate is most definitively not blank. To many people, this may be the most obvious thing in the world, based on their common experience of other human beings, especially children. To others, however, it may smack of genetic determinism. It may sound like a claim that our genes determine our behavior—that we are slaves to them with no real autonomy.
This is not the case at all. The claim is far more modest. It is simply this: that variation in our genes and the way our brains develop cause differences in innate behavioral predispositions—variation in our behavioral tendencies and capacities. Those predispositions certainly influence how we behave in any given circumstance but do not by themselves determine it—they just generate a baseline on top of which other processes act. We learn from our experiences, we adapt to our environments, we develop habitual ways of acting that are in part driven by our personality traits, but that are also appropriately context dependent.
Along the same lines, the evidence that parenting does not have a strong influence on our behavioral traits should not be taken as implying that parenting does not affect our behavior at all. We may not be molding our children’s personalities, but we certainly influence the way they adapt to the world. Our actual behavior at any moment is influenced as much by these characteristic adaptations and by the expectations of family and society—and, indeed, the expectations we build up of ourselves—as by our underlying temperament. Slates don’t have to be blank to be written on.
But if I can evade the charge of genetic determinism, I may still appear guilty to some of the related crime of neuroscientific reductionism. In delving into the detailed mechanisms underlying mental functions and what may cause them to vary, it may seem as if I am reducing those mental functions to the level of cells and molecules, none of which has a mind or is capable of subjective experience. It may look like such explanations leave no room for real autonomy, for thoughts and ideas and feelings and desires and intentions to have any causal power, for free will to exist at all.
Once again, this is not the case—nothing I have presented in this book is a threat to our general notions of autonomy and free will. The fact that there is a physical mechanism underlying our thoughts, feelings, and decisions does not mean we do not have free will. After all, to expect that thoughts, feelings, and decisions would not have any physical substrate is to fall into dualism—the idea that the brain and mind are really fundamentally distinct things, the mind somehow immaterial. This is a fallacy, and one that is hard to climb back out of once you’ve fallen into it. The mind is not a thing at all—at least, it is not an object. It is a process, or a set of processes—it is, simply put, the brain at work.
Thoughts and feelings and choices are mediated by the physical flux of molecules in the brain, but this does not mean they can be reduced to it. They are emergent phenomena with causal power in and of themselves. Some pattern of neural activity leads to a certain action by virtue of its constituting a thought with some content and meaning for the organism, not merely because atoms bumping around in one way necessarily lead to them bumping around in a new way in a subsequent moment. The precise details of all the atoms don't matter and don't have causal force, because most of those details are lost in the processing of information through the neural hierarchy. What matters is the information content inherent in the patterns of neuronal firing that those atoms represent, and what that information means. When I make a decision, it's because my patterns of neural activity at that moment mean something, to me.
We all have predispositions that make us more likely to act in certain ways in certain situations, but that doesn't mean that on any given occasion we have to act that way. We still have free will, just not in the sense that we can choose to do any old random thing at any moment. I mean, we could, we just usually don't, because we are mostly guided by our habits (which have kept us alive so far) and, when we do make deliberative decisions, we choose among a limited set of options that our brain suggests. So we are not completely free; we are constrained by our psychological nature to a certain extent. But really that's okay—that's what being a self entails. Those constraints are essential for the continuity of our selves over time. Having free will doesn't mean doing things for no reason; it means doing them for your reasons. And it entails moral responsibility in the pragmatic sense that we are judged not just on our actions but also on our reasons for those actions.
This does raise a provocative idea, however—that some of us may have more free will than others. In each one of us our degree of self-control varies in different circumstances, depending on whether we are tired, hungry, distracted, stressed, sleep deprived, intoxicated, infatuated, and so on. And over our lifetimes the impetuosity of youth gives way to the circumspection of adulthood. But the mechanisms that allow us to exercise deliberative control over habitual or reflexive actions also clearly vary in a more trait-like fashion between people. Some people are far more impulsive than others, as we saw in chapter 6. Many suffer from compulsions or obsessions or addictive behavior that they cannot control. And people in the grip of psychosis or mania or depression are clearly not in full control of their actions, which is why we do not hold them legally responsible. You could say that some people are more at the mercy of their biology than others, though that difference itself is a matter of biology.
SELF-HELP
There is a massive self-help industry devoted to the idea that we can change ourselves—our habits, our behaviors, even our personalities. From psychotherapy or cognitive behavioral therapy to mindfulness, brain training, or simply harnessing the power of positive thinking, there are scores of different approaches and an endless supply of books, videos, seminars, and other materials to help you become your best self. These suggest that we can learn the habits of highly effective people, and we too will become highly effective. That we can overcome stress, anxiety, negative thoughts, relationship problems, and low self-esteem, manage our anger, boost our mood, achieve the goals we always hoped for, and generally become a happier person. The slightly paraphrased title of one self-help book promises to show you how to rewire your brain to overcome anxiety, boost your confidence, and change your life. Others proclaim that you can “Immediately achieve massive results using powerful (fill in the blank) techniques!”
Lately, what had been an almost exclusively psychological literature has been suffused with supposedly groundbreaking discoveries from neuroscience, which seem to confirm the possibility of change and elucidate the mechanisms by which it can occur. Two areas in particular have caught the public’s imagination.
The first is neuroplasticity or brain plasticity—the idea that the structure of the brain is not fixed but quite malleable, with the implication that prewired need not mean hardwired. And this is quite true, to a certain extent. The brain is constantly rewiring itself on a cellular scale—that is how it learns and lays down memories to allow behavioral adaptation based on experience, by forming new synaptic connections between neurons or pruning others away. There is nothing revolutionary about this—it is simply how brains work. It is also true that, after injury for example, the brain can sometimes rewire circuits on a much larger scale, which can aid recovery or compensation for the injury in some cases or lead to additional problems in others.
But the brain is not infinitely malleable, and for good reason—it has to balance the need to change with the need to maintain the physical structure that mediates the coherence and continuity of the self. If it were undergoing wholesale changes all the time we would never be us. While young brains are highly plastic and responsive, these properties diminish drastically beyond a certain stage of maturation—indeed, they are actively held in check by a whole suite of cellular and extracellular changes. The period of plasticity is extremely protracted in humans, reflecting the fact that we have greater cognitive and neural capacity to continue to learn from experience over longer periods of time. But at some point the brain and the individual have to stop becoming and just be.
This limits the amount of change we can expect to achieve. It is certainly possible to change our behaviors—with enough effort you can break a habit or overcome an addiction. And that may be a perfectly laudable and worthwhile goal in many circumstances. But there is little evidence to support the idea that we can really change our personality traits, that we could, for example, learn to be biologically less neurotic or more conscientious. You may be able to learn behavioral strategies that allow you to adapt better to the demands of your life, but these are unlikely to change the predispositions themselves.
For children the situation may be different. There may be periods in which intensive behavioral interventions can alter developmental trajectories. For example, a child with autism may be taught to consciously look at people’s faces as they are speaking—this may encourage better linguistic and social development than would have tended to occur otherwise. But even here the opportunities to effect long-lasting change are still limited. These kinds of interventions, in either typically or atypically developing children, will always be fighting against both the innate predispositions themselves and their cascading effects on the experiences individuals choose and the environments they select or create, which will tend to reinforce innate traits.
The second idea that is popular these days is known as epigenetics. We came across the word epigenetic in chapter 4, where it was used to refer to the processes of development through which an individual emerges. The modern usage refers to something quite distinct—the molecular mechanisms that cells use to regulate gene expression. In any given cell at any given time, some genes will be active—the proteins they encode will actually be being produced—while others will be silent. This allows muscle cells to make muscle cell proteins and bone cells to make bone cell proteins, and so on. But cells also respond to changes, either internal or external to the cell, by increasing or decreasing the amounts of proteins made from various genes. Epigenetic mechanisms of gene regulation allow these kinds of changes to be locked in place for some period of time, sometimes even through the life of the cell and any cells it produces. That is precisely what happens in development as different cell types differentiate from each other.
The attraction of epigenetics for the self-help industry stems from the idea that it acts as a form of cellular memory, turning genes on or off in response to experience and keeping them that way for long periods of time. The problem comes from thinking that turning genes on or off equates somehow to turning traits on or off. If you’re talking about something like skin pigmentation, that might apply—I can expose my skin to the sun for a period of time and this will lead to epigenetic changes in the genes controlling pigment production, and I’ll get a nice tan that will last for weeks. But for psychological traits, the link between gene action at a molecular level and expression of traits at a behavioral level is far too indirect, nonspecific, and combinatorial for such a relationship to hold. Moreover, if much of the variation in these traits comes from how the brain developed, the idea that you can change them by tweaking some genes in adults becomes far less plausible. So, despite their current cachet, neuroplasticity and epigenetics don’t provide any magical means to dramatically alter our psychological traits.
This brings me to a final point, and really it is just my personal opinion. To me, the self-help industry is built on an insidious and even slightly poisonous message. It all sounds very positive—the possibility of change—but really it relies on the idea that you’re not good enough as you are, that other people are better than you, but if you buy our products or take our classes or just think positively enough then you can be better too. It plays on some of the least attractive aspects of human psychology, often explicitly using envy as a marketing ploy—of neighbors who’ve got more money than you, that guy at work who got promoted ahead of you, or that woman who just seems to have the perfect life. And it is often targeted at the more neurotic among us, with claims of overcoming anxiety, worry, stress, low confidence, and low self-esteem, playing on those very personality traits to convince people they need to be changed.
This is not a self-help book—clearly. But perhaps there is something positive in highlighting a different view. There is a power in accepting people the way they are—our friends, partners, workmates, children, siblings, and especially ourselves. People really are born different from each other and those differences persist. We’re shy, smart, wild, kind, anxious, impulsive, hardworking, absent-minded, quick-tempered. We literally see the world differently, think differently, and feel things differently. Some of us make our way through the world with ease, and some of us struggle to fit in or get along or keep it together. Denying those differences or constantly telling people they should change is not helpful to anyone. We should recognize the diversity of our human natures, accept it, embrace it, even celebrate it.
1. F. R. Marshall, "The Relation of Biology to Agriculture," Pop. Sci. 78 (1911): 553.