
Evolution in Structured Populations

Why reductionism doesn’t work; Part 1, Individuals to genes

Posted: May 18th, 2015 by Charles Goodnight

One thing that used to happen fairly often, perhaps not so much anymore, is that people would say that we don't need to worry about levels of selection because all selection can be reduced to selection acting directly on genes. George Williams perhaps put this view best, first with his principle of parsimony, which argues that reductionism is the right perspective:

“In explaining adaptation, one should assume the adequacy of the simplest form of natural selection, that of alternative alleles in Mendelian populations, unless the evidence clearly shows that this theory does not suffice”

and, in the same book and more explicitly, with a statement asserting that reductionism works:

“No matter how functionally dependent a gene may be, and no matter how complicated its interactions with other genes and environmental factors, it must always be true that a given gene substitution will have an arithmetic mean effect on fitness in any population.”

All I can say to this is GAHHHH!


Merida expresses her opinion on genetic reductionism (taken from http://giphy.com)

I think a lot of people know that you cannot think of selection as acting on genes, but a lot of people also can't articulate why it doesn't work. So, if anybody asks you, the simple answer is that reductionism doesn't work because of interactions. At the individual level these will primarily be the gene interactions of dominance and epistasis.

In a fully additive system there would be no problem, and this IS the problem. Our intuition about genetics was developed using simple additive models. In an additive system, knowing at what level selection was acting would be nice information, but the fitness of the phenotype can always be algebraically reduced to fitness effects on individual loci. In other words, in additive systems, how the genes are packaged really doesn't change the effect of the genes on the phenotype. To see this, consider a phenotype affected by a single additive locus:

Genotype     A1A1     A1A2      A2A2
Frequency    p²       2pq       q²
Phenotype    1        1 - Z/2   1 - Z

(I use Z to emphasize that we are not talking about fitness. Selection will be affected by the packaging for the simple reason that some of the selection is on heterozygotes.) We can calculate the average effect of the A1 allele on the phenotype by tallying what happens when a randomly chosen allele is replaced by an A1 allele:

Original genotype    Genotype after substitution    Probability    Change
A1A1                 A1A1                           p²             0
A1A2                 A1A2                           ½(2pq)         0
A1A2                 A1A1                           ½(2pq)         Z/2
A2A2                 A1A2                           q²             Z/2

So, the average effect of the A1 allele is:

average effect of A1 = p²(0) + ½(2pq)(0) + ½(2pq)(Z/2) + q²(Z/2) = qZ/2

Now consider a haploid system:

Genotype     A1    A2
Frequency    p     q
Phenotype    1     1 - Z/2

This has the same phenotypic effects, adjusted for ploidy. Now the local average effect of the A1 allele is:

Original genotype    Genotype after substitution    Probability    Change
A1                   A1                             p              0
A2                   A1                             q              Z/2

So, the average effect of the A1 allele is, you guessed it:

average effect of A1 = p(0) + q(Z/2) = qZ/2

The effect of the allele on the phenotype is not affected by the packaging.
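
To make the bookkeeping above concrete, here is a minimal R sketch (my own illustration; the function names and the values p = 0.7 and Z = 0.1 are assumptions, not from the post) that reproduces the substitution tables for the additive case:

```r
# Average effect of the A1 allele: probability-weighted change in phenotype
# when a randomly chosen allele is replaced by A1.
avg_effect_diploid <- function(w11, w12, w22, p) {
  q <- 1 - p
  # A1A1 -> A1A1 (prob p^2) and A1A2 -> A1A2 (prob pq): no change
  # A1A2 -> A1A1 (prob pq) and A2A2 -> A1A2 (prob q^2): phenotype changes
  p^2 * 0 + p * q * 0 + p * q * (w11 - w12) + q^2 * (w12 - w22)
}
avg_effect_haploid <- function(w1, w2, p) {
  q <- 1 - p
  p * 0 + q * (w1 - w2)   # only the A2 -> A1 substitution changes the phenotype
}

p <- 0.7; Z <- 0.1        # hypothetical values
avg_effect_diploid(1, 1 - Z/2, 1 - Z, p)   # additive diploid: qZ/2 = 0.015
avg_effect_haploid(1, 1 - Z/2, p)          # haploid:          qZ/2 = 0.015
```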

Now let's do the same thing with a dominant system:

Genotype     A1A1     A1A2    A2A2
Frequency    p²       2pq     q²
Phenotype    1        1       1 - Z

Now the average effect of the A1 allele on the phenotype becomes:

Original genotype    Genotype after substitution    Probability    Change
A1A1                 A1A1                           p²             0
A1A2                 A1A2                           ½(2pq)         0
A1A2                 A1A1                           ½(2pq)         0
A2A2                 A1A2                           q²             Z

So, the average effect of the A1 allele is:

average effect of A1 = p²(0) + ½(2pq)(0) + ½(2pq)(0) + q²(Z) = q²Z

Turning to the haploid system:

Genotype     A1    A2
Frequency    p     q
Phenotype    1     1 - Z/2

Now the local average effect of the A1 allele is:

Original genotype    Genotype after substitution    Probability    Change
A1                   A1                             p              0
A2                   A1                             q              Z/2

The average effect in the haploid system is now different from that in the diploid system:

average effect of A1 = p(0) + q(Z/2) = qZ/2
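
For the dominant case the same arithmetic, in R and with the same hypothetical p = 0.7 and Z = 0.1, shows that the two packagings no longer agree:

```r
p <- 0.7; q <- 1 - p; Z <- 0.1           # hypothetical values
# Dominant diploid: only the A2A2 -> A1A2 substitution changes the phenotype
diploid_effect <- q^2 * Z                # q^2 * Z   = 0.009
# Haploid with the same (ploidy-adjusted) effects
haploid_effect <- q * (Z / 2)            # q * Z / 2 = 0.015
c(diploid = diploid_effect, haploid = haploid_effect)
```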

In other words, if we add the simplest possible form of nonadditivity, the packaging does matter. Trust me, it gets worse. I am way too lazy to put up tables for average effects in epistatic systems, but I have talked about this before. It turns out that the variance in local average effects is a measure of how sensitive the average effects of alleles are to genetic background. I have talked about these before, but it bears re-posting the relevant figure:

[Figure: local average effects (LAE) under drift with epistasis]

The important point is that the variance in local average effects is zero in additive systems, but non-zero when there are any sort of interactions. This means that the reducibility of fitness effects onto genes is a reasonable exercise in additive systems, but simply is not meaningful in epistatically interacting systems. To see how bad this can be, consider long-term directional selection in a system with AXA epistasis. Depending on the starting gene frequencies the average effect of an allele can actually reverse signs. For what it is worth, the dashed lines are the local average effects for an additive system, and the solid lines are the local average effects for AXA epistasis. This shows the contrast between additive systems and epistatic systems. For the additive system, if you were to evaluate the fitness effects in generation zero they would provide a pretty good estimate of the fitness at the end (in this deterministic system an exact estimate). On the other hand, for the epistatic system, estimates of allelic effects made in generation zero rapidly become useless, and by the time fixation is reached they are exactly wrong.

[Figure: local average effects over generations of directional selection; dashed lines, additive system; solid lines, AXA epistasis]

In one sense, Williams is absolutely correct. At any given instant it is certainly possible, in principle, to do a least squares regression analysis and assign fitness effects to individual loci. However, in an epistatically interacting system those fitness assignments are ONLY good for the moment, or perhaps the generation, in which the assignment is done. Those effects will change as gene frequencies change, and not just gene frequencies at the locus under study, but gene frequencies at any other loci as well. So, my point is not that the assignment cannot be done, but rather that the assignment carries no information that is useful beyond the moment.

Next time I talk about why reductionism does work!

Epistasis in Monkey Flowers, and some general thoughts on epistasis

Posted: May 8th, 2015 by Charles Goodnight

So, my twitterverse at least has suddenly been on fire with the appearance of a new article in PLoS by Patrick Monnahan and John Kelly, “Epistasis Is a Major Determinant of the Additive Genetic Variance in Mimulus guttatus”.

It really is a nice study in which they identified 11 quantitative trait loci (QTL) in a single population of monkey flower, then used these to estimate the functional (also known as physiological) direct effects, and all of the two locus epistatic interactions. They then used these estimates to estimate additive genetic variances and total genetic variances in the population.

What is nice about this study is that they use actual data from a QTL analysis of a natural population, and then use the resulting analyses to estimate bi-allelic functional epistasis for each of the pairs of QTL. In fact, it would be great to have access to some of those two-locus genotypic values for teaching purposes! I would also love to have the actual allele frequencies, so that we could in fact estimate the standing statistical variance components in the natural populations. This also brings up a very important point: all of the models to date have put in fixed values for the genotypic values (or avoided the issue entirely by using inbreeding coefficients). In the real world we collect organisms, identify genes, and phenotype them. There is ample room for error at every step. So the one thing we know for sure is that any QTL measure or assignment of phenotype to genotype is an estimate. This really is the first attempt to couple field estimates of genotypic values to variance components.

One other thing that is nice about this paper is that they bring up both the Kempthorne/Cockerham variance components and the more recent terminology of “positive”, “negative” and “sign” epistasis. Nicely, Hansen (2013, Evolution 67: 3501-3511) provided two-locus examples of these types of epistasis. It turns out that if we set the gene frequencies to 0.5, and do the appropriate regressions, we can directly relate these molecular concepts of epistasis to the quantitative genetic components. It also turns out that this is critical, for while functional epistasis is loads of fun, it is only the quantitative genetic variance components that tell us how phenotypic evolution works.

Anyway, from Hansen (2013), these different types of functional epistasis are:

[Figure: two-locus genotypic values illustrating positive, negative, and sign epistasis, from Hansen (2013)]

Using the JMP program shown below it is easy to show that positive epistasis is a hodgepodge of variance components (89% additive variance, 3.6% AXA epistasis, 3.6% AXD epistasis, and 3.6% DXD epistasis), whereas negative and sign epistasis are a mix of additive variance and AXA epistasis (negative epistasis: 80% additive variance, 20% AXA epistasis; sign epistasis: 50% additive variance, 50% AXA epistasis). Maybe it's because I am a curmudgeon, but I am happier with the old fart Kempthorne partitioning, because it relates directly to variance components, and can be much more easily converted to statistical genetic components.

Now here is the critical point. These variance components are a function of gene frequency; thus the variance components will change as gene frequencies change. Using the example of positive epistasis above, I can now tell you the additive genetic variance for any pair of gene frequencies at the two loci:


Graph of the additive genetic variance for two locus two allele positive epistasis as described by Hansen (2013). A JMP program to calculate VA for a single gene frequency is listed below. Note that I rotated the graph to best show the shape of the surface. The highest additive genetic variance occurs when both the A2 and B2 alleles are at low frequency (around 0.2).

Finally, I know it is impolite to promote your own work, but well, it's my blog and I will do what I want. My ego was a bit hurt by the fact that my work on epistasis and additive genetic variance was not cited, in particular my paper on average effects and additive variance (Goodnight 2000, Heredity 84: 587-598), which was quite relevant. That and my earlier paper using breeding values (Goodnight 1988, Evolution 42: 441-454) were the first papers to describe the conversion of epistasis into VA, and they have historical significance if nothing else. I have long been fighting a bit of a rear-guard action to keep those papers from falling into the obscurity of common knowledge. There is actually another reason that they could have benefited from citing those papers. One of the things that comes out of those papers is that if you can write down the functional values for the 9 genotypes of a pair of interacting two-allele loci, you can use regression to calculate the additive genetic variance for any given gene frequency. I do actually know why they might have missed my paper. They use the Falconer partitioning that was first pioneered by Cheverud and Routman (1995, Genetics 139: 1455–1461), which is different enough that my paper really didn't need to be cited, so it is hard to get too mad at them.
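
Here is a minimal R sketch of that regression (an illustration of mine, not the author's JMP script): given a 3 x 3 table of two-locus genotypic values, Hardy-Weinberg proportions, and linkage equilibrium, the additive genetic variance is the variance of the fitted values from a genotype-frequency-weighted regression on allele counts. The genotypic values in the example call are placeholders (a pure AXA matrix), not values from Hansen or from the Mimulus paper.

```r
# Additive genetic variance for a pair of two-allele loci, assuming
# Hardy-Weinberg proportions and linkage equilibrium.
va_two_locus <- function(G, pA, pB) {
  # G: 3x3 matrix of genotypic values; rows = A1A1, A1A2, A2A2,
  #    columns = B1B1, B1B2, B2B2.  pA, pB = frequencies of the A2 and B2 alleles.
  xA <- rep(0:2, times = 3)                      # count of A2 alleles
  xB <- rep(0:2, each  = 3)                      # count of B2 alleles
  fA <- c((1 - pA)^2, 2 * pA * (1 - pA), pA^2)
  fB <- c((1 - pB)^2, 2 * pB * (1 - pB), pB^2)
  w  <- rep(fA, times = 3) * rep(fB, each = 3)   # genotype frequencies
  g  <- as.vector(G)                             # column-major order matches xA, xB
  fit  <- lm(g ~ xA + xB, weights = w)           # average effects by weighted least squares
  gbar <- sum(w * g)
  sum(w * (fitted(fit) - gbar)^2)                # variance of breeding values = VA
}

# Placeholder genotypic values (pure A x A epistasis), VA at pA = pB = 0.2
G <- matrix(c( 1, 0, -1,
               0, 0,  0,
              -1, 0,  1), nrow = 3, byrow = TRUE)
va_two_locus(G, pA = 0.2, pB = 0.2)
```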


It's my blog and I will whine if I want to. You would whine too if it happened to you. (picture from http://www.amazon.com/Its-My-Party-Mercury-Anthology/dp/B000VHKHZA)

If you have JMP and are savvy in its use, the files that I use for calculating the additive genetic variance can be found here (variance regressions). I fixed it by changing the file extension to .txt.  It is still a .jmp, so after you download it please change the txt to jmp, then it should work.

Basically you add your own dependent variables, add the allele frequencies of your choice (I put them in as a formula, so use the get column info route to change those), and the linkage disequilibrium. Then run the script in the upper left hand corner. Finally, if the gene frequencies are other than 0.5, or the loci are not in linkage equilibrium, use sequential (type 1) sums of squares. Type 3 sums of squares will give you the wrong answer. If you have any questions feel free to ask me. OK, if you want the program I need to send it to you under a separate cover, so email me if you would like it. If I ever figure it out I will fix things.
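
For completeness, here is an R analogue of the workflow just described (again my own sketch, not the author's JMP file); it also illustrates the point about sums of squares, since anova() on a weighted lm() fit returns exactly the sequential (Type 1) partition:

```r
# Partition the total genetic variance of a two-locus, two-allele system into
# additive, dominance, and epistatic components using sequential (Type 1) sums
# of squares from a genotype-frequency-weighted regression.
partition_variance <- function(G, pA, pB) {
  xA <- rep(0:2, times = 3);  xB <- rep(0:2, each = 3)    # allele counts
  dA <- as.numeric(xA == 1);  dB <- as.numeric(xB == 1)   # heterozygosity indicators
  fA <- c((1 - pA)^2, 2 * pA * (1 - pA), pA^2)
  fB <- c((1 - pB)^2, 2 * pB * (1 - pB), pB^2)
  w  <- rep(fA, times = 3) * rep(fB, each = 3)            # genotype frequencies
  g  <- as.vector(G)
  # The additive terms must come first: anova() gives sequential (Type 1) SS
  fit <- lm(g ~ xA + xB + dA + dB + xA:xB + xA:dB + dA:xB + dA:dB, weights = w)
  # VA = xA + xB rows, VD = dA + dB rows, VAA = xA:xB, VAD = xA:dB + dA:xB, VDD = dA:dB
  anova(fit)["Sum Sq"]
}

# Hypothetical genotypic values (complementary-type epistasis), p = q = 0.5
G <- matrix(c(0, 0, 0,
              0, 0, 0,
              0, 0, 1), nrow = 3, byrow = TRUE)
partition_variance(G, pA = 0.5, pB = 0.5)
```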


Matrix comparisons: Random skewers and selection skewers

Posted: April 25th, 2015 by Charles Goodnight

A week late and a dollar short, but let's continue comparing matrices, continuing on with my blatant endorsement of statistical methods attached to my name. . . Last time I talked about the “Rank”/“Signed Bartlett”/“Modified Mantel” tests for comparing the dimension, size, and shape of a pair of matrices. This is only one of several ways of comparing matrices. This set of tests has the advantage that it is basically non-parametric, and makes very few assumptions about the actual matrices. It is also useful because it directly compares matrices for easily interpretable differences. The problem with these tests is that in most cases we don't so much care about whether or not a pair of matrices are the same or different as whether they have the same or different effects on the evolution of the organism.

Obviously the size, shape, and dimension of a covariance matrix will be related to the ability to respond to selection, but the relationship may not be perfect. Two other approaches that have been developed are “random skewers” (Cheverud 1996 J. Evol. Biol. 9:5-42; Cheverud and Marroig 2007 Genet. Mol. Biol. 30:461-469; Revell 2007 Evolution 61:1857-1872) and “selection skewers” (Calsbeek and Goodnight 2009 Evolution 63:2627-2635). To see what a random “skewer” is, consider that in a multivariate selection experiment the response to selection is given by:

R = GP⁻¹S = Gβ

The β is a vector that describes the direct effects of selection on the different traits. The G matrix is sometimes thought of as a “rotation matrix”: from a biologist's perspective what it does is tell us the R vector, the response to selection, but from a mathematician's perspective what it does is rotate and warp the β vector. Thus, if we take any arbitrary β vector and multiply it by two different G matrices, the two matrices will rotate and stretch the β vector in different ways, producing two different R vectors. We can use this because if the two matrices are identical the two rotated vectors will be identical, whereas if the matrices are different the two rotated vectors will also be different. These can be compared by calculating the vector correlation between the two vectors. In linear algebra terms this is (I am SO sorry I am doing this to you!)

r = (R₁ᵀR₂) / √((R₁ᵀR₁)(R₂ᵀR₂))

For those who are not linear algebra adepts (he said, raising his hand), the numerator is really just a means of calculating a covariance between the two vectors, and the denominator is the square root of the product of the variances of the two vectors.

So, with the random skewers approach what you do is generate a large number (1000 or more) of random unit vectors. These represent a set of selection gradients in random directions. For each gradient you calculate the resulting R vector using your two matrices, and calculate the vector correlation. If the average correlation is close to one, then the matrices are the same, whereas if it is less than one the two matrices are different.
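
A bare-bones R sketch of this procedure (my own illustration, not the script linked at the end of the post), with made-up 2 x 2 G matrices:

```r
# Random skewers: average vector correlation between the responses of two
# G matrices to the same random selection gradients.
vector_cor <- function(x, y) sum(x * y) / sqrt(sum(x^2) * sum(y^2))

random_skewers <- function(G1, G2, n_skewers = 1000) {
  k <- nrow(G1)
  cors <- replicate(n_skewers, {
    beta <- rnorm(k)
    beta <- beta / sqrt(sum(beta^2))          # random unit-length selection gradient
    vector_cor(G1 %*% beta, G2 %*% beta)      # correlation of the two responses
  })
  mean(cors)
}

# Two small example G matrices (made up for illustration)
G1 <- matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2)
G2 <- matrix(c(1.0, -0.3, -0.3, 0.8), 2, 2)
random_skewers(G1, G2)
```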

The question, of course, is how close to one is close enough. Here again the bootstrap comes in. Following the approach I outlined last time, we generate a large number of pairs of matrices that are estimated from bootstrap samples of the same data set. Because they are estimated from the same data set there can be no true difference, so the average correlation between each pair of bootstrap matrices gives us a distribution of the correlation when the null hypothesis is true. It is then a simple matter to compare the actual correlation with the bootstrap correlations. If the actual correlation is less than 95% (or whatever) of the bootstrap correlations then we can say that the two matrices are significantly different from each other.

This is an interesting point. Here we are using the null hypothesis that the two matrices are identical. Thus, we set up the bootstrap such that the null hypothesis was true, and compared our actual correlation with the bootstrap correlation. In the original random skewers approach the opposite was the case. The null hypothesis was that the two matrices were uncorrelated, and thus those papers use a different approach to significance testing. I googled hard for a joke about getting null hypotheses backwards, but apparently this is too subtle for the online community.

The selection skewers test is similar to random skewers, with a few important changes. This analysis is appropriate if you are specifically interested in comparing how two populations will respond to a particular selection pressure. For example, you may have two recently diverged populations and want to determine whether the two populations will respond in the same manner to a particular selection pressure. In most cases you will likely have a known S vector, which is the raw selection differential. This is what I assume in the program I provided. In this case you first need to generate the β = P⁻¹S vector. Then, as with the random skewers, you calculate the vector correlation, and compare the actual correlation to the correlation in the bootstrap data sets in which the null hypothesis is true.
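
A correspondingly minimal R sketch for the selection skewers (again made-up matrices and a made-up selection differential; in a real analysis the significance test would come from the bootstrap pairs described above):

```r
# Selection skewers: compare the predicted responses of two G matrices to one
# observed selection differential S, via beta = P^-1 S.
selection_skewer_cor <- function(G1, G2, P, S) {
  beta <- solve(P, S)                 # selection gradient from the differential
  R1 <- G1 %*% beta;  R2 <- G2 %*% beta
  sum(R1 * R2) / sqrt(sum(R1^2) * sum(R2^2))
}

# Hypothetical matrices and selection differential
G1 <- matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2)
G2 <- matrix(c(1.0, -0.3, -0.3, 0.8), 2, 2)
P  <- matrix(c(2.0, 0.4, 0.4, 1.5), 2, 2)
S  <- c(0.3, -0.1)
selection_skewer_cor(G1, G2, P, S)
```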

The nice thing about both the random skewers and the selection skewers is that they give a real world idea of what changes in shape can do. The random skewers is agnostic as to how selection actually works, whereas the selection skewers tests a specific selection regime. This latter is particularly interesting, since it is entirely possible for two matrices to have very different structures (as determined, say, by the rank/Bartlett's/Mantel tests), and yet have this structural difference have very little actual effect on the response to selection. On the down side, however, the random and selection skewers lump a lot of information together. For example, it can be hard to determine whether a difference in response between two matrices is due to a difference in the total amount of available variation, or due to changes in the correlation structure leading to negative genetic correlations.

I guess the real lesson from all this is that there is no one best statistical test. Which is best depends on the question you ask. If you want detailed insights into the actual covariance matrices, the rank/Bartlett's/Mantel tests may be best. If you want a summary of the difference in the ability to respond to selection, random skewers may be a good choice, and if you have a clear a priori selection hypothesis to test, the selection skewers is clearly the best.

To remind you I have an R script that performs these tests and can be relatively easily modified for different data sets and circumstances.

Here is the program: 

Writeup on how to use the program:  Matrix comparison writeup

The program: Bootstrap command

Relevant example data sets:

balanced stock females

stock data female

population 3 females

Statistical tests for comparing matrices

Posted: April 8th, 2015 by Charles Goodnight

I have been remiss. Quite a few years ago I found myself in the position of wanting to compare two genetic covariance matrices. At the time, the Flury hierarchy had not yet been suggested by Pat Phillips (Phillips & Arnold 1999, Evolution 53: 1506-1515), so I found myself needing to invent my own approach. Later, apparently along with others, I decided I wasn't particularly enamored with the Flury hierarchy. This resulted in two publications (Goodnight & Schwartz 1997, Biometrics 53: 1026-1039; Calsbeek & Goodnight 2009, Evolution 63: 2627-2635), the first of which is not particularly well known. The first publication also suffered from not having a good software implementation. With the appearance of R this has now been rectified. In any case I would like to remind people of these statistical methods for comparing covariance matrices.

First off, there is nothing wrong with the Flury hierarchy, I just don't particularly find it intuitively useful. As I understand it the Flury hierarchy is a model selection approach, whereas the methods I will discuss are parametric statistical tests. I recommend you read Phillips and Arnold's papers and make your own decision. So enough preamble.

We had just done an experiment in which we sent a population through a population bottleneck, and we had measured several traits. We wanted to know if the derived population and the ancestral population had the same genetic structure, aka the same genetic covariance matrices. For a single trait we know exactly how to do this. You “simply” measure the additive genetic variance in the two populations and do an F test to see if they are the same or different. I put simply in quotations because measuring additive variance is never easy.

When we get to a multivariate setting things become more complicated. Again, we will likely use a MANOVA to measure an additive genetic covariance matrix for each population. We would then like to compare these to see if they are the same or different. The good news is that genetic covariance matrices are square and generally easy to work with. The bad news is that when we go multivariate there are several ways that matrices can be different. In Goodnight and Schwartz (1998) we decided there are three ways of interest. The matrices can be of different dimension, they can be of different size, and they can be of different shape. These are really independent ways of being different, so it makes sense to develop three tests. The way we tested these was using bootstrapping.

The bootstrap: Bootstrapping is an interesting statistical procedure that was popularized in the 80s by Brad Efron (Efron 1979, The Annals of Statistics 7:1-26) (I took a workshop he offered somewhere around 1985). The basic idea is that if you have a data set you can create new pseudo data sets by randomly sampling with replacement from the original data. If enough of these bootstrap data sets are generated they will actually provide a distribution for the data. This at first seems counterintuitive, but as long as your data set is relatively large it works very well. To use this as a statistical test you need to decide what your null hypothesis is, and then figure out a random sampling scheme that makes that null hypothesis true. For example with a t-test, the null hypothesis is that the two populations have the same mean. You can make that null hypothesis true in several ways. You could simply combine the data from the two populations, then randomly assign observations back to the two populations without regard to original source. As a result there will be no true difference between the populations. If you randomly create several thousand of these pairs of populations you will get a distribution of observed differences in the means when you know the true difference is actually zero. You can then take the actual difference between the two populations and simply ask what percentage of the bootstrap differences are more extreme than the difference in the actual data. That percentage is your probability of the observed difference occurring by chance. There are more sophisticated approaches, but this gives the idea.
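
As a toy R illustration of that logic (the data here are simulated, and this sketch pools the samples and resamples with replacement as one way of making the null hypothesis true):

```r
# Toy bootstrap test for a difference in means, with the null hypothesis
# ("the two populations are identical") made true by pooling before resampling.
set.seed(1)
pop1 <- rnorm(50, mean = 10, sd = 2)       # made-up data
pop2 <- rnorm(60, mean = 11, sd = 2)
observed <- mean(pop2) - mean(pop1)

pooled <- c(pop1, pop2)
boot_diffs <- replicate(10000, {
  resampled <- sample(pooled, length(pooled), replace = TRUE)
  mean(resampled[seq_along(pop2) + length(pop1)]) - mean(resampled[seq_along(pop1)])
})
# Two-tailed probability of a difference this extreme when the null is true
mean(abs(boot_diffs) >= abs(observed))
```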

In our particular test we had an ancestral population and a population derived from two generations of brother sister mating. We wanted to see if the two populations were the same or different. Our null hypothesis was that their covariance matrices were the same (this is important!), and we decided to use data from the ancestral population as our source for the bootstrap data.

Dimension: A genetic covariance matrix can be thought of as enclosing a space. Thus a univariate “matrix” is a single vector of a length that is equal to the variance. A two-trait covariance matrix defines a plane, a three trait matrix a cube, and so on.


Figure 1: a one-dimensional vector, and two- and three-dimensional matrices.

There are two things that can happen to the additive genetic variance after a population goes through a bottleneck. First, it can disappear, that is, it can go to zero. Second, a trait can become so highly correlated with other traits that it becomes a linear combination of those traits. In graphic terms, in the three-trait case, that would be the equivalent of one of the vectors lying exactly in the plane of the other two vectors.


Figure 2: in this matrix trait z is a linear combination of traits y and x. As a result all three lie in a single plane, and the resulting matrix is a two dimensional matrix.

Consider trying to compare two matrices with three variances. One is like the three dimensional matrix in figure 1, and the second has only two dimensions as in figure 2. It won’t work to compare these. As an analogy it is like asking which is bigger, a box or a sheet of paper. The three dimensional matrix has an extra dimension along which it can evolve that is qualitatively different from the two dimensional structure.

The way we tested this was to find the largest sub-matrix that had valid variances that were not linear combinations of other vectors. We then used the absolute value of the difference in rank (|RpopA − RpopB|) as our test statistic, measured against bootstrap populations where there was no true difference in rank. In this data set the difference in rank was not significant.
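
As an illustration only (the paper's procedure for extracting the largest valid sub-matrix is more involved), the core of the dimension comparison can be sketched in R as:

```r
# Compare the dimensions (ranks) of two covariance matrices.
matrix_rank <- function(M, tol = 1e-8) sum(eigen(M, symmetric = TRUE)$values > tol)

GA <- matrix(c(1.0, 0.5, 0.5,
               0.5, 1.0, 0.7,
               0.5, 0.7, 1.0), 3, 3)          # full rank (made-up values)
GB <- matrix(c(1.0, 0.5, 1.5,
               0.5, 1.0, 1.5,
               1.5, 1.5, 3.0), 3, 3)          # third trait = sum of the first two
rank_diff <- abs(matrix_rank(GA) - matrix_rank(GB))
rank_diff   # compare to the distribution of rank differences among bootstrap pairs
```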


Difference in size: As mentioned above, matrices can be considered to be planes, volumes or hypervolumes. It turns out that the determinant is a measure of the space enclosed by the matrix. For example, in a two-trait matrix the determinant is the area of the matrix, in a three-trait matrix it is the volume, etc. Thus two matrices of the same dimension, regardless of shape, can be compared by comparing the determinants. The analogy is having two oddly shaped vases. We can compare them by asking how much water they hold. In this case shape is of no consequence, only the size of the space enclosed.

The important caveat is that they must be of the same dimension. Again, the same question: which is larger, the volume of a box or the area of a sheet of paper? And again it is a meaningless question. We chose to resolve this by doing an “orthogonal projection” of the larger dimension matrix on the smaller dimension matrix. That is, we searched the matrix pairs for a set of traits that had valid variances in both matrices. We did the analysis on this pair of sub-matrices.

The next question is how to compare the two determinants. It turns out that there is a good test, the multivariate Bartlett's test, that can be used. Bartlett's test has two problems. First, it is very sensitive to the assumption of multivariate normality, and second, it is not structured for use with MANOVA derived data. Still, we can use the basic statistic and combine it with the bootstrap data, and it works perfectly well. One of the very useful features of bootstrap tests is that they make no assumptions about the distribution of the data. Also, if properly designed, they work well with virtually any experimental design. Interestingly, since the standard test was not developed for use with MANOVA, the parametric multivariate Bartlett's test was way too optimistic, and the bootstrap ended up doing a much better job. A final modification is that we had an a priori interest in whether the derived genetic covariance matrix was significantly larger than that in the ancestral population. Thus, we multiplied the Bartlett's statistic by 1 if the derived population was larger than the ancestral population and -1 if it was smaller, giving us the signed bootstrap Bartlett's test that allowed for both one-tailed and two-tailed tests.
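
The exact signed Bartlett statistic is in the paper; the following R fragment is only the skeleton of the idea (with made-up matrices), comparing the enclosed volumes through determinants and attaching a sign:

```r
# Core idea of the size comparison: the determinant measures the volume the
# matrix encloses; the sign records whether the derived matrix is larger or
# smaller than the ancestral one.  The full signed Bartlett statistic in the
# paper also accounts for sample sizes; this is just the skeleton.
signed_size_stat <- function(G_derived, G_ancestral) {
  stat <- abs(log(det(G_derived)) - log(det(G_ancestral)))
  sign(det(G_derived) - det(G_ancestral)) * stat
}

G_anc <- matrix(c(1.0, 0.3, 0.3, 0.8), 2, 2)   # hypothetical ancestral G
G_der <- matrix(c(1.4, 0.6, 0.6, 1.1), 2, 2)   # hypothetical derived G
signed_size_stat(G_der, G_anc)                 # positive: derived encloses more space
```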


Shape: For shape we decided to go with a test similar to the Mantel test. Many rightly complain about the classic Mantel test for numerous reasons. However, the basic idea is useful. The idea is that you calculate a correlation between the pairwise elements of the two matrices. That is, you pair up the elements of the two matrices and simply calculate the correlation among them. The problems with the traditional Mantel test for this application are threefold. First, the traditional Mantel test has a null hypothesis that the two matrices are independent, whereas our null hypothesis is that the two matrices are identical. The bootstrap solves this by allowing us to generate a distribution of Mantel correlations among pairs of matrices that have a true correlation of 1.

Second, the Mantel test is meant to compare correlation matrices, which have 1s on the diagonal, whereas this is not true for a covariance matrix. In the classic Mantel test this diagonal is excluded, whereas in ours it is not. Third, all of the elements of a correlation matrix are between -1 and 1, whereas covariance matrices can have vastly different variances for different traits, which can inappropriately skew the results. This last we solved by standardizing the elements to the average of the diagonals of the two matrices. The final equation is somewhat ugly, so I refer you to the paper if you want the details. The results indicate that females, but not males, have a significant change in the shape of their covariance matrix. That is, the population bottleneck significantly changed some of the variances and covariances among traits in the two populations, even though it did not change the total amount of additive genetic variance.
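
A stripped-down R version of the element-pairing idea (the actual statistic, including the standardization by the average of the diagonals, is in the paper; this sketch uses made-up matrices and a plain correlation):

```r
# Mantel-like shape comparison: correlate the paired elements of two covariance
# matrices, keeping the diagonal (unlike the classic Mantel test).  The actual
# statistic in Goodnight & Schwartz also standardizes by the average of the
# diagonals; that detail is omitted here.
shape_correlation <- function(G1, G2) cor(as.vector(G1), as.vector(G2))

G_anc <- matrix(c(1.0, 0.3, 0.3, 0.8), 2, 2)   # hypothetical matrices
G_der <- matrix(c(1.4, 0.6, 0.6, 1.1), 2, 2)
shape_correlation(G_anc, G_der)   # compare to bootstrap pairs whose true correlation is 1
```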


So, the point of this is simply to suggest one possible way to compare genetic covariance matrices. One of the reasons I really enjoy multivariate math (I can't believe I said that) is that very simple ideas, like the variance of a trait, suddenly become so much richer, and can change in so many more ways as we move into a multivariate setting. Obviously simple multivariate math is a pale comparison with the real world, but this only serves to make the diversity of the real world even more easily understood.

The other reason I wanted to put this up is that I have an R program that does these analyses, along with random skewers and selection skewers, which I will talk about next time. I am not an R developer, so I would be more than pleased if somebody were to take this script and turn it into something that didn’t actually need to be adjusted for the needs of every data set. If you do choose to finish developing this, please let me know!

Here is the program: 

Writeup on how to use the program:  Matrix comparison writeup

The program: Bootstrap command

Relevant example data sets:

balanced stock females

stock data female

population 3 females

 

Individuality, Microbiomes, and organisms

Posted: March 26th, 2015 by Charles Goodnight

So many things to write about, and so much writing to do. Sorry about missing last week. Somehow writing this week has been more of a chore than a joy. One of the things it has been suggested I write about is the continuing brouhaha over Nowak's paper (Nowak, et al. 2010. Nature 466: 1057), the latest response by Liao, Rong and Queller (2015. PLoS Biol 13: e1002098.), and who was right and who was wrong. To all that all I can say is “frankly, my dear, I don't give a damn”. If you must know, the basic model, although not as bad, has the same fundamental flaw seen in Gardner's model: It does not include indirect genetic effects. I have discussed this before, and I will probably discuss it again. But for the moment it needs a rest; you can watch Gone With The Wind to see what happens when you beat a horse too much.

(Rhett Butler, from http://kittenofcupcakes.tumblr.com/post/49802470800)

 

Instead, what I want to do is go deeper down the rabbit hole of what an individual is. In a previous blog post I argued that the individual should be the level at which we assign fitness. This is fine as far as it goes, but consider the situation in which we assign fitness at the level of the organism. Well, organisms are not really one species. In fact, in humans, non-human cells are thought to outnumber human cells ten to one, although they are probably less than 3% of our body mass (http://www.nih.gov/news/health/jun2012/nhgri-13.htm). We also know that the microbiome has significant effects on health, ranging from effects on the ability of organisms to digest food to effects on the nervous system.


 

Down the rabbit hole of individuality (http://mag.splashnology.com/article/alice-in-wonderland-showcase-of-impressive-cosplay-photography/7324/)

 

This has a couple of interesting consequences. First off, when we assign fitness at the level of the organism, we are in fact assigning fitness to a community, which includes the host metazoan and its microbiome. The first rather fun implication is that, except in the enlightened sense of the relativistic concept of individuality I discussed two weeks ago, there is no such thing as individual selection. “Individual selection” in the classic sense is in fact community selection.

 

This is not a problem for selection per se. We can assign fitness at whatever level we want. If we want to assign it at the level of the community formerly known as an organism, then that is just fine. Selection is an ecological process, which means that for simply analyzing selection, we don't actually need to know anything about the heritability. Of course, that is a bit unsatisfying, since we would like to know the response to selection, and for that we need to know the heritability. The problem is that with over 90% of the cells in a human being non-human, the vast majority (some estimates as high as 99%) (https://www.microbemagazine.org/index.php?option=com_content&view=article&id=3452:major-host-health-effects-ascribed-to-gut-microbiome&catid=750&Itemid=969) of the active genes in our bodies are also non-human. So, again we are confronted with a potentially serious problem with the concept of heritability. This actually poses two problems. First, we need an expanded view of realized heritability that recognizes that organisms are communities. This is not really a problem for the phenotypic perspective, which defines heritability in terms of the phenotypic resemblance between parents and offspring. But it does raise the interesting possibility that many of the genes that contribute to heritability may in fact be bacterial genes. This further raises the interesting point that the heritability of an organism will now be a function of the ecology of the microbiome. If you get the microbiome from your parents, undoubtedly true for a portion of the microbiome, then it is potentially heritable. The particular case in point here would be bacteria such as Wolbachia, which is an intracellular symbiont of arthropods that is maternally inherited. Among-host variation in this bacterium would show up as heritable variance in the population.

 

On the other hand, if the microbiome is picked up randomly from the environment then it may not be heritable. Even here there is a problem, since it may be predictably acquired from the larger population, and thus heritable at a higher level. Consider termites. When a young termite first ecloses to become an adult it lacks its gut fauna, which it obtains by trophallaxis from another colony member. Basically, an older individual regurgitates and the newly emerged adult eats the symbiont-containing regurgitate. What this means is that members of the same colony will all get similar gut symbionts. Thus, although in termites the gut fauna may not be heritable in the classic sense, it may nevertheless be heritable at the colony level.


 

Trophallaxis in termites transfers gut bacteria among colony members, possibly making the traits derived from the gut fauna heritable at the colony level. (http://carronleesgspestmanagement.blogspot.com.br/2011/04/how-does-baiting-system-work-on-ants.html)

 

The bottom line for all of this is this: yes, in my earlier discussion I suggested that in many situations the organism would be a reasonable unit to call the individual. This week I am saying that the organism is not a single-species entity, but must be considered a community. I am also arguing that if we use the phenotypic perspective, the resemblance between parents and offspring, then the concept of inheritance can potentially become quite complex, with some of the gut fauna being considered “environment” because it is randomly acquired throughout the life of the organism, but other parts needing to be considered heritable variation. Even here we need to distinguish between parts of the microbiome that are inherited due to close association of the parents, and parts of the microbiome that are inherited at a higher level due to within-group sharing of food, or other processes.

 

It is interesting to compare this to my earlier post on heritability in the absence of genetic variation. What this suggests is that we are naïve to think that heritability can be consistently and logically reduced to nuclear Mendelian genes in the host species in the community that we call an organism.

What is “additive variance” in genetically uniform populations?

Posted: March 11th, 2015 by Charles Goodnight

I recently got a comment from Michael Bentley at Oxford pointing out that he had a different interpretation of heritability among cells within higher organisms. His comment was:

“Please could I just clarify something you say in this piece, as it relates to something I’m working on at the moment. You say:

‘From the perspective of individuality, what this does is that it lowers the heritability at the cellular level to nearly zero.’

This confused me, since the heritability at the cell level via mitosis is nearly one, not nearly zero, isn’t it? If we take h^2 = Cov(zi,zi’)/Var(zi), where zi is parent cell phenotype, and zi’ is offspring cell phenotype (we have regressed parent phenotype against offspring phenotype and taken the gradient of the regression line to be the heritability). Assuming high fidelity, we have Cov(zi,zi’) approx = Cov(zi,zi) = Var(zi). Putting this back in we get h^2 = Var(zi)/Var(zi) = 1, and thus h = 1.”

The relevant post is here. Mr. Bentley raises a very good point. In that post I argued that because within an organism cells divide by mitosis, there is essentially no genetic variation, and as a result, barring somatic mutations, the heritability within organisms is very near to zero. Michael argues that in fact the somatic cells have very high phenotypic fidelity when they divide. Thus, liver cells divide to make liver cells, and skin cells divide to make skin cells. By his reckoning the heritability should be very close to one.

So, how should this be handled? First off, I would argue that Michael is right, and I am wrong. Michael used an appropriate definition of “realized” heritability based on a phenotypic perspective, whereas, old fogey that I am, I somehow was stuck in trying to force Fisher's model where it didn't belong. Nevertheless, I do stand by my point that mitosis serves as a mechanism that minimizes the response to selection within organisms; I just should have been more careful when I called it “heritability.”

What this says is that we need to more carefully define heritability and the additive variance. Fisher first defined additive genetic variance, and to paraphrase something Walt Ewens has said, Fisher defined it, and thus we need to accept that his definition is correct. Fisher's definition of the additive genetic variance is the sum of the covariances between average effects and average excesses; however, as Falconer has pointed out, this definition is useless in the real world (Falconer 1985 Genet. Res. Camb. 46:337). Thus, we are stuck with making up a useful definition. Falconer provides an alternative statistical definition of additive genetic variance, for example as the variance due to regression of offspring on midparents (I don't have his book with me in Brazil, so I am not sure of his exact definition). However, I would call this the “effective” additive genetic variance, since in real populations it will not exactly equal Fisher's definition. It is also relevant to mention that Falconer (in Introduction to Quantitative Genetics, 1989) nicely demonstrates that the additive genetic variance is the genetic covariance between parents and offspring.

The way I have been thinking about phenotypic evolution is as a superset of quantitative genetics. Fundamentally, quantitative genetics is a phenotypic approach. The breeder's equation demonstrates this:

R = h²S

Or in words, the response to selection is equal to the heritability times the selection differential. It is a phenotypic model because basically the heritability serves as the transition equation that converts the fitness-weighted distribution of phenotypes in the parental generation (S) into the distribution of phenotypes in the next generation (R). What the phenotypic perspective does is to argue that this is a fundamentally correct perspective for thinking about evolution, but that a transition equation that is a single constant and (at least theoretically) includes only genetic effects is overly simplistic. Relevant to my discussion with Michael, quantitative genetics is also overly simplistic because it only applies to sexually reproducing organisms. Aside: It is hard to fault Fisher for this. His primary goals were to describe the genetics of humans and mammalian livestock, and to provide tools for animal breeders. His efforts were spectacularly successful, to the point that Fisher can be called the central figure in the new synthesis, and one could argue that he basically single-handedly built its foundation.
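
To make the “realized” versions of these quantities concrete, here is a small R sketch with simulated parent-offspring data (the numbers are made up):

```r
# Realized heritability as the slope of offspring on mid-parent phenotype,
# and the breeder's equation prediction R = h^2 * S.
set.seed(2)
n <- 200
midparent <- rnorm(n, mean = 50, sd = 5)
offspring <- 50 + 0.4 * (midparent - 50) + rnorm(n, sd = 4)   # simulated, true h^2 = 0.4

h2 <- cov(midparent, offspring) / var(midparent)   # slope of offspring on mid-parent
S  <- 2                                            # hypothetical selection differential
R  <- h2 * S                                       # predicted response to selection
c(h2 = h2, predicted_response = R)
```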

So, the bottom line is that we should stick with something similar to Falconer’s practical definition: The additive variance is the covariance between parents and offspring. Note that I did not say the “additive genetic variance”, and this is an important distinction. I suggest we should define the additive variance as the covariance between parents and offspring without regard to the cause of that covariance.

Of course in many situations that is not satisfying. In the discussion between Michael and me, both of our perspectives were important. He was exactly right that there is a very high covariance between parent and offspring cells in metazoans, but I was also correct that there are essentially no genetic differences among cells in metazoans. So, what is causing the high covariance that Michael identified? I don't know, but it is not genetic. More likely it is due to two causes. First, there are epigenetic changes – silencing of some genes, and overexpression of others – that give a particular cell type its phenotype, and importantly, these epigenetic changes are preserved during mitosis. Second, there is a lot of cell-cell interaction that causes offspring cells to resemble parental cells due to the “developmental ecological” or “positional” situation a cell finds itself in. In development there are numerous examples of this sort of induction. It may well be that one reason the daughter cells of liver cells are also liver cells is because they are in the liver, and are induced to be liver cells because of that.

I suggest the correct thing to do is to accept the general definition of additive variance, but then allow this to be broken up into components. That is, the additive variance could be broken up into additive “genetic” variance, additive “epigenetic” variance, additive “positional” variance, and so on. Thus, we should accept the single obvious definition of additive variance as the covariance between parents and offspring, but then use some form of least squares partitioning to divide it into subcomponents.

Of course there is a problem here. That is, how do we do that division? Again, I suggest that we follow Fisher's lead here. What is needed is an appropriate modification of parent-offspring regression and half-sib design breeding experiments. For example, we might examine the additive variance in the natural setting to get the total additive variance. Second, we might look at the variance among cell lineages to get the additive genetic variance, and the variance within cell lineages to get the additive non-genetic variance. By transplanting cells to other locations we could get the additive physiological-ecological variance, and by using molecular methods to remove the epigenetic modifications get an estimate of the additive epigenetic variance.

Whatever the actual experimental protocol that ends up being appropriate, what we want is:

Cov(Parent, Offspring) = Cov_genetic(Parent, Offspring) + Cov_epigenetic(Parent, Offspring) + Cov_positional(Parent, Offspring) + . . .

There are, of course, two major problems with this. The first is practical. If you decide to do that experiment, well, good luck. At least at first blush it looks like it would be a horrific amount of work that would simply not be worth the information obtained. The second is statistical in nature. I am arguing for using a Fisherian least squares partitioning into the subcomponents of the additive variance. The good news is that, if done properly, such partitionings are orthogonal, so that the components would add up to the total additive variance. The bad news is that such partitionings are context dependent; thus, the partitioning into subcomponents of the additive variance would change as conditions change. Nevertheless, it seems to me that this is a good way to think about simple linear transition equations from the phenotypic perspective. It is also a way to keep the excellent framework that Fisher provided, while allowing it to be conceptually expanded to other systems of reproduction, and non-genetic forms of inheritance.

More on fitness assignments and individuality

Posted: March 3rd, 2015 by Charles Goodnight

In my last post I briefly mentioned that the level at which fitness is assigned is an interesting problem, but not a conundrum, or a serious conceptual issue. I think it would actually be quite useful to expand on this. The basic ideas came out of discussions I had, some 20-odd years ago and over a series of years, separately with Lorraine Heisler and John Damuth. Heisler and Damuth took this in one direction (Group Selection 1 and Group Selection 2), and I went in another direction, which involved not publishing anything until my chapter on defining the individual (Goodnight 2013, Chap. 2 in “Defining the individual” Bouchard & Hueneman eds).

 


In case you were wondering where I was, I was working hard in the Amazonian flooded forest. (I was at the Uakari lodge. I recommend it if you are ever in the Manaus area. http://www.pousadamulticultura.com/mamiraua-reserve )

So here is the basic issue: Biological things tend to be organized hierarchically. This need not be the case, but it often is. Thus, we have cells, which group together, possibly with other species, to become organisms – yes, it is probably incorrect to think of “humans” as a single species – which group together to become populations or groups, which finally group together to become communities.

Using the most basic definition of evolution, the change in the distribution of a set due to the gain or loss of members of that set, it should be clear that evolution can potentially take place at any of these levels. By the way, I use this very klutzy definition of evolution here to avoid using terms like “individual” and “population”. Normally this is not a problem, but in this particular circumstance we need to be very careful. The point is that change occurs, and it can potentially be defined as evolution. However, at least for selection, it can only be defined as evolution by natural selection if there is variation in fitness. Here is the problem: contextual analysis, and I would argue human understanding, really only allows fitness to be defined at a single level.

Herein lies the issue. We can choose to define fitness at any level. Different levels may be better choices than others, but ultimately, the level at which we assign fitness is an arbitrary construct of the investigator. I would argue that once we have assigned fitness at any particular level, that becomes the “member” of the set in our definition of evolution. In other words, when we define fitness as occurring at a particular level, we are in fact defining the individual in the less klutzy definition of evolution: change in the distribution of a population due to the gain or loss of individuals. Even though we really only need to define fitness with regard to selection and adaptation, it makes no sense to have concepts of individuality for mutation, migration and drift that are different from our concept for selection. Thus, I would argue that logically the level at which we define fitness defines individuality for all evolutionary forces acting on that trait.

Of course, the level at which we define fitness does not alter the changes that occur in the organism. The changes that occur are independent of human observation. What DOES change, however, is our interpretation of those changes. Only changes at or above the level of individuality – the level at which we assign fitness – can be interpreted in an evolutionary framework. Certainly for adaptation, we can only interpret changes as being due to natural selection if there is variation in fitness, and there is no variation in fitness below the level at which we assign fitness. So, what we do is call those changes that occur below the level of individuality something else. For example, we typically assign fitness at the level of the organism, and changes within the organism are called “development”. However, were we to choose to assign fitness at the level of the cell we could reasonably call these changes evolution, and view differential cell division and mortality as selection.

This idea of the relativity of individuality, and the role of the observer in interpreting the nature of changes, is at the heart of the problem that people have with Group Selection 1 and Group Selection 2. This is also why I am not a big fan of the GS1/GS2 terminology. Basically, I think we would be better served by stating the level at which we define fitness. Thus, we might say “In this study we define the organism to be the individual”, or “we assigned fitness at the level of the colony in this study”. I think this is clearer and removes a lot of ambiguity. For example, consider a hypothetical study of Tasmanian devil facial cancer. This potentially has three or more levels at which we could assign fitness, including the cell, the organism, the population, and potentially the species. Defining the level of the individual has the flexibility to handle this; GS1 and GS2 just get difficult (if we assign fitness at the level of the species, is that GS4?).

The problem, of course, is the idea that there is this desire to have the “individual” be a natural unit, and to have “development” qualitatively different than “evolution”. The idea that the individual is a construct of the observer is really not compatible with these thoughts. That said, I am quite comfortable with the arbitrariness of the level at which we assign fitness. I see no other way that we can have transitions of levels: There really is no qualitative difference between the most organized colonies and the least organized organisms (compare Volvox to Trichoplax). It is also the only way we can study cancer as evolution, and not have to assign fitness at the level of the cell when we are studying, say, foraging behavior. Nevertheless, I understand that many will find this deeply disturbing, and many will reject this relativity of individuality as a viable world view. That said, I think if you can get your head around it, it will help you in understanding multilevel selection.


Volvox (left) is considered to be a colonial protist, whereas Trichoplax is considered to be a single organism and an animal. There are differences in their structure, but the differences are not great considering that one is a colony of cells and the other is a multicellular organism. (Volvox: http://www.dr-ralf-wagner.de/Bilder/Volvox-aureus-DF.jpg, Trichoplax: http://www.marinespecies.org/placozoa/ )

I am out of space, but as I mentioned above, although the level at which we assign fitness is, in my view, arbitrary, there are nevertheless better and worse levels that we can choose. For example, often there is a reasonable a priori choice. Higher organisms are made up of trillions of cells. It would be a ridiculous, and probably impossible, task to assign fitness at the level of the cell if we are studying morphology or behavior at the whole-organism level. Other times, contextual analysis can be used to identify the lowest level at which selection on a particular trait is acting, and that level becomes a reasonable one for assigning fitness. Still other times there may be adaptations (policing, mitosis) that minimize adaptation by natural selection at lower levels. In this case it makes sense to assign fitness at the lowest level at which a response to selection is likely to occur. Finally, at the beginning, I mentioned that MLS works fine if groups are not nested. However, any study with non-nesting groups will only work if fitness is assigned at a level that is fully encompassed within all higher groups. For example, in a continuous population of plants every organism (ramet?) can be considered to be at the center of its own neighborhood. Obviously these neighborhoods overlap. Nevertheless MLS analysis will work as long as fitness is assigned at the level of the organism instead of the neighborhood.

Gardner’s theory of multilevel selection 3: the discussion

Posted: February 11th, 2015 by Charles Goodnight

This week I will finish up with Gardner's paper (2015, Jour. Ev. Biol., doi:10.1111/jeb.12566), which I have been discussing for the past two weeks. Given the problems with the literature review and the model, it is hardly surprising that this has led to issues with the discussion. I have problems with virtually the entire discussion; however, I will focus on the ones that I find most concerning.

First, Gardner talks of collective fitness 1 vs collective fitness 2. In doing this he continues and deepens the confusion he started when he developed the model. As I make clear in my chapter on defining the individual (Goodnight 2013, Chap. 2 in “Defining the individual” Bouchard & Hueneman eds), whether you are talking about group selection 1 or group selection 2, or for that matter group selection 10 (there is no such thing), depends entirely on the level at which you, the investigator, assign fitness. In the example Gardner gives, Group A has 12 daughters in 4 groups of 3, whereas Group B has 12 daughters in 3 groups of 4. In this example, if you assign fitness at the level of the individual organism, and presuming no other variation, the individuals in groups A and B have equal fitness. If you assign fitness at the level of the group, Group A has higher fitness than Group B. The difference, of course, is that in the second instance you have a within-group “developmental” process that results in different group sizes; however, since fitness is assigned at the level of the group you cannot call it selection or even evolution. The problem is that with fitness assigned at the level of the group there can be no variation in fitness within groups, and thus no evolution. This leaves the question of whether it is better to assign fitness at the level of the group or the level of the organism. This is an issue that I address in my chapter. For fairly deep philosophical reasons it basically cannot be resolved, but as long as we are clear on where we assign fitness it is not a problem. Gardner is right that this was an important issue, but it is not a conundrum. It is one that has been resolved, and no longer presents a serious conceptual issue.

However, what I find most disturbing in this section is so jaw-droppingly silly it causes me to question whether the paper is supposed to be satire. To quote Gardner:

“Cancer is often conceptualized as involving a tension between different levels of selection, with cancerous tissues achieving higher reproductive success at a within-organism level and cancerous individuals suffering lower reproductive success at a between-organism level. However, somatic tissues – including cancerous ones – do not generally contribute genes to distant future generations, on account of the demise of their lineages upon the death of the organism. Consequently, cancerous tissues do not have reproductive value, and so their proliferation within the organism cannot correspond to selection in the strict sense of the genetical theory.” (page 6, citations removed)


Seriously? You actually believe that? ( from http://www.calgaryunitedway.org/socialvoice/wp-content/uploads/2012/10/jaw-drop.jpg )

This is basic introductory evolution material. Here is the Intro Bio version: Lewontin, in his article in the Annual Review of Ecology and Systematics (1970, Vol. 1, page 1), tells us that three things are necessary and sufficient for evolution by natural selection to occur. These are:

  • There must be phenotypic variation.
  • There must be differential fitness of different phenotypes.
  • The phenotypes must be heritable.

To remind you, necessary and sufficient means that you need all three, and if you have all three, evolution by natural selection will occur. So, let’s think about cancer. (1) Is there phenotypic variation? Yes. Cancer cells differ from normal cells in many respects, ranging from physical appearance to changes in the regulation of the cell cycle. (2) Are these phenotypic differences associated with fitness? Yes. For example, dysregulation of the cell cycle causes cancer cells to divide more rapidly than normal cells. Cell division is reproduction. Reproduction is fitness. Yes, there is variation in fitness associated with phenotype. (3) Are these variations in fitness heritable? Yes. Most, if not all, cancers are due to at least one, and usually five or more, mutations. These are genetic mutations that are passed on to daughter cells during cell division. Thus, we see that in an organism with cancer we have phenotypic variation, that variation is associated with fitness, and the fitness is heritable. Either Lewontin is right and Gardner is wrong, or vice versa. I am going with Lewontin being right. Yes, cancer’s “. . . proliferation within the organism” most certainly DOES “correspond to selection.”
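To make the point concrete, here is a minimal sketch (my own illustration, not anything from Lewontin or Gardner; the phenotype labels and division rates are made up) of how satisfying the three conditions is enough to produce evolution by natural selection among cell lineages within a single organism:

```python
# A minimal sketch of Lewontin's three conditions applied to cell lineages
# within an organism. Lineage names and division rates are hypothetical.
import random

random.seed(1)

# (1) Phenotypic variation: two heritable cell phenotypes
# (2) Differential fitness: they divide at different rates
division_prob = {"normal": 0.4, "cancerous": 0.8}   # per time step

# Start with mostly normal cells plus a few mutant (cancerous) cells
cells = ["normal"] * 990 + ["cancerous"] * 10

for generation in range(20):
    next_cells = []
    for cell in cells:
        next_cells.append(cell)                      # the cell persists
        if random.random() < division_prob[cell]:    # differential reproduction
            next_cells.append(cell)                  # (3) daughter inherits the phenotype
    # keep the population at a fixed size so only its composition changes
    cells = random.sample(next_cells, 1000)
    if (generation + 1) % 5 == 0:
        freq = cells.count("cancerous") / len(cells)
        print(f"generation {generation + 1}: cancerous frequency = {freq:.2f}")
```

The faster-dividing lineage increases in frequency within the organism for exactly the reasons Lewontin lists, and nothing in that logic depends on whether the lineage will contribute genes to distant future generations.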

To see how silly Gardner’s stance is, consider the Wake Island rail, a cute flightless bird that did very well until World War II. On December 23rd, 1941 the Japanese occupied Wake Island, and by the time they were expelled on September 4th, 1945 the Wake Island rail was extinct. Apparently the Japanese garrison ate them when it was placed under siege by the American military. Now the question: at some point it was safe to say that the rails did “not generally contribute genes to distant future generations” and thus that “. . . their proliferation . . . cannot correspond to selection . . .”. My question is, when should we consider differential survival and reproduction of Wake Island rails to no longer be selection? Was it selection in 1939, before the war? How about 1941, when the Japanese invaded? Or how about January of 1943, the likely year of their extinction? The ridiculousness of making this judgment should be obvious. Selection doesn’t see the future, and neither should we when we are identifying something as selection.


At what point did differential survival and reproduction stop being selection for the Wake island rail? ( From http://www.extinct-website.com/extinct-website/product_info.php?products_id=409 )

My goal in this is to make the important point that very smart people have thought very hard about evolution. It behooves us to know what the masters said. This does not mean reading every single paper that Lewontin ever published, but it does mean not making obvious errors in logic that have been resolved by people smarter than you and me. It also does not mean you can’t disagree with the masters. Science advances when old paradigms are overturned. But it does mean that if you are going to disagree with the canon you should know why you disagree, and be able to defend your position. Again, ignorance of the literature is no excuse.

With that lapse of good sense out of the way, and ignoring MLS 1 vs. MLS 2 – been there, done that, got the tee shirt – let’s move on to the units of selection. Basically, the first half of this section is uninterpretable gobbledygook that comes from trying to force Gardner’s class-structure model onto the Price equation. As I said earlier, his approach is rather klutzy, but it will work as long as there is no group selection. To add group selection you MUST turn to a multivariate approach, or else assume that everything is additive always and that there are no interactions of any kind. In short, it simply does not work for multilevel selection in the real world. What caught my eye, however, was his example where a wasp lays two eggs, a male and a female, and males and females are reasonably treated as different classes. He is stumped by how to use a multilevel selection approach to study this. It is actually dead easy. Each individual has a male trait or a female trait (depending on its sex) and one or more contextual traits. The contextual trait is some measure of the characteristics of the group. Note that there would be a separate phenotypic covariance matrix for males and females, but a single genetic covariance matrix for the population (Lande 1980 Evolution 34:292; Goodnight et al. 1992 Am. Nat. 140:743). That is, with contextual analysis, there is no problem.
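For what it is worth, here is a rough sketch (my own toy example, not Gardner’s model and not data from any real wasp; the trait values, group sizes, and effects are invented) of what such an analysis might look like: each individual gets its own sex-specific trait plus a contextual trait describing its group, and relative fitness is regressed on both within each class.

```python
# A rough sketch of a contextual analysis with two classes (males and females)
# and one contextual trait. All values and effect sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 200, 6

rows = []
for g in range(n_groups):
    sex = rng.integers(0, 2, group_size)          # 0 = male, 1 = female
    z = rng.normal(0, 1, group_size)              # each individual's own (sex-specific) trait
    context = z.mean()                            # a contextual trait: group mean phenotype
    # hypothetical fitness: individual effect differs by sex, plus a group-level effect
    w = (1 + 0.2 * z * (sex == 0) + 0.4 * z * (sex == 1)
         + 0.3 * context + rng.normal(0, 0.2, group_size))
    for i in range(group_size):
        rows.append((sex[i], z[i], context, w[i]))

data = np.array(rows)
w_rel = data[:, 3] / data[:, 3].mean()            # relative fitness

# Partial regression of relative fitness on [own trait, contextual trait],
# done separately for each class (separate P structure, shared context).
for label, code in (("males", 0), ("females", 1)):
    mask = data[:, 0] == code
    X = np.column_stack([np.ones(mask.sum()), data[mask, 1], data[mask, 2]])
    beta = np.linalg.lstsq(X, w_rel[mask], rcond=None)[0]
    print(f"{label}: individual gradient = {beta[1]:.2f}, contextual gradient = {beta[2]:.2f}")
```

The point is simply that classes pose no special problem: the contextual trait is shared by everyone in the group, and the sex-specific traits can then be handled along the lines of Lande’s (1980) treatment of sexually dimorphic characters.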

So here is my opinion on this, and I want to emphasize it is only my opinion. I think that Gardner has an agenda. I think that agenda is that he does not want multilevel selection to be seen as a valid research program. To this end he is willing to ignore an entire literature, to be apparently willfully ignorant of quantitative genetics, to ignore the writings of such luminaries as Richard Lewontin, and to choose not to see obvious solutions. The problem is that his agenda has clouded his vision, allowed him to use sloppy thinking and logic, and led him to write things that are regrettable and frankly wrong. This does not advance science. It creates noise that interferes with people who are actually trying to understand nature. I hope I am wrong. Gardner is a good theoretician, and the world needs people like him. Hopefully this paper is simply the unfortunate type of mistake we all make, and he is really working to advance our understanding of science rather than undermine a field that he doesn’t understand.

Gardner’s theory of multilevel selection: Parsing the Model

Posted: February 2nd, 2015 by Charles Goodnight

Continuing our discussion of Gardner’s paper on “the genetical theory of multilevel selection” (Gardner 2015 Jour. Evol. Biol. doi: 10.1111/jeb.12566), I want to turn from complaining about his failure to read the literature and this week start talking about the model itself.

He starts the model with a discussion of Fisher’s fundamental theorem, which I have already shown is not particularly complex. Then he goes on to expand this using Robertson’s (1968. In: Population Biology and Evolution, R.C. Lewontin, ed.) result that the change in a trait is equal to the covariance between the trait and relative fitness:

$\Delta \bar{z} = \mathrm{Cov}\left(w, z\right)$

where $z$ is the trait and $w$ is relative fitness.

It is worth mentioning that although it is usually presented the other way around, in fact, Fisher’s fundamental theorem is actually a special case of the response to selection on any trait. To see this just replace the trait, z, with relative fitness.
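Spelled out, the substitution goes something like this (my rendering of the step, using $w$ for relative fitness as above): replacing the trait $z$ with $w$ gives

$\Delta \bar{w} = \mathrm{Cov}\left(w, w\right) = \mathrm{Var}\left(w\right)$

which, restricted to the heritable (breeding value) part of fitness, is Fisher’s statement that the change in mean fitness due to selection is the variance in relative fitness.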

Next he goes on to express concern about selection in a class-structured population. His approach actually works, as long as there is no multilevel selection. As I said last week, I think his approach is rather clumsy, and there is a much better way using standard quantitative genetic methods. So, my overall comment on that part of the paper is “meh”.


Gardner’s approach to evolution in stage structured populations? “Meh” (From http://rubbercat.net/simpsons/news/2013/09/ )

Now we get to the meat of the issue. He then goes on to develop his genetical theory of multilevel selection. First off, he develops his theory in terms of breeding values. This has a number of possible definitions. His definition is “. . . a weighted sum of the frequencies of the alleles that the individual carries, the weights being decided by linear regression analysis.” This is strangely worded, but basically correct. It hides a HUGE problem that he is ignoring. To see this consider a more standard definition of breeding value: the sum of the average effects of the alleles that make up an individual. The average effect of an allele is basically the effect of that allele averaged across all possible genotypes. This works fine in Fisher’s imaginary world of infinite population size and random interactions. It does not work well when populations are structured and interactions are not random. If you have multilevel selection then you have population structure. If you have population structure, average effects, and thus breeding values, are not constant.

This is why it is so insidious: the assumption of constant breeding values appears reasonable, and it is consistent with all of the classic models. Yet it is the central feature of his model, that there is population structure, that invalidates the assumption of constant breeding values. It apparently seemed so obvious that Gardner did not consider the possibility that breeding values might not stay constant, although quite entertainingly he did very clearly, if unknowingly, explain why they wouldn’t. On page 3 he writes:

“Fitness may be decomposed into its genetical and environmental components, that is vi = gi + ei, where ei captures nonadditive genotypic effects (such as dominance, epistasis, synergy and frequency dependence) as well as other more obviously environmental effects.”

Well, no, that is not true. That partitioning is done by least squares, and epistasis and dominance will shift between components as we move from group to group. Note, however, that even here he seems completely unaware that when genes interact it might have evolutionary implications. And that is where Gardner falls short: his model requires that breeding values stay constant. They do not. The correct subscripting should be g_ij, that is, the breeding value of the ith individual in the jth deme. Experimental (De Brito et al. 2005. Evolution 59: 2333) and theoretical work shows that g_ij will vary in a way that is not predictable either from the individual or from the group measured in isolation. However, I am a generous man, so let’s assume they are constants for the moment, and just keep in the back of our heads that this is a fatal flaw in the underlying assumptions of his model.
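To see in miniature why structure matters, here is a toy numerical sketch (my own, with made-up genotypic values; it is not Gardner’s model or anything from his paper). With dominance, the average effect of an allele substitution depends on the local allele frequency, so demes that differ in allele frequency assign different average effects, and hence different breeding values, to the very same genotype:

```python
# Toy illustration (hypothetical values): with dominance, the average effect of
# an allele substitution depends on allele frequency, so it is not constant
# across demes of a structured population.
def average_effect_of_substitution(p, a=1.0, d=0.8):
    """Average effect of substituting A1 for A2 at a one-locus, two-allele trait
    with genotypic values A1A1 = +a, A1A2 = d, A2A2 = -a (textbook notation)."""
    q = 1.0 - p
    return a + d * (q - p)

# Hypothetical demes that differ only in the frequency of the A1 allele
for deme, p in [("deme 1", 0.9), ("deme 2", 0.5), ("deme 3", 0.1)]:
    alpha = average_effect_of_substitution(p)
    print(f"{deme}: p = {p:.1f}, local average effect of substitution = {alpha:.2f}")
```

Since breeding values are built from these average effects, an individual’s breeding value in one deme is not the same quantity as its breeding value in another, which is exactly the g_ij problem described above.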

He then goes on to use the two-level Price equation to develop his “genetical model of multilevel selection”:

Gardner 2 eq 2

OK, I hate his notation. Here it is in a form that doesn’t hurt my head:

$\Delta \bar{g} = \mathrm{Cov}\left(W_j, \bar{g}_j\right) + E_j\left[\mathrm{Cov}\left(w_{ij}, g_{ij}\right)\right]$

where $w_{ij}$ and $g_{ij}$ are the relative fitness and breeding value of the ith individual in the jth group, $W_j$ and $\bar{g}_j$ are their group means, and

$\Delta \bar{g}$ is the change in the mean breeding value due to selection,

$\mathrm{Cov}\left(W_j, \bar{g}_j\right)$ is the between-populations covariance between relative fitness and breeding value (and yes, I refuse to use v for relative fitness), and

$E_j\left[\mathrm{Cov}\left(w_{ij}, g_{ij}\right)\right]$ is the average covariance between relative fitness and breeding value within populations.

So what is wrong with this?

Well, for starters, it’s been published before. Wade, in his paper “Hard Selection, Soft Selection, Kin Selection, and Group Selection” (1985. Am. Nat. 125: 61), develops a model which includes the following equation:

Gardner 2 eq 7

I won’t burden you with all of the details of what the symbols mean, except to say that the first term on the right-hand side is the mean within-population covariance, and the second term is the among-populations covariance. I should also say that if you sum over the K loci, the result is the breeding value. In other words, with slightly different notation it is exactly the same equation that Gardner uses. One would think a proper citation would be in order.

The nice thing about Wade’s Price partitioning having been published 30 years ago is that it has been around long enough that we have known for 20 years that it doesn’t work, and we know why. As long ago as the 1990s I was talking to Steve Frank about this (I am sure he doesn’t remember, so Steve, if you are reading this tell me if I am wrong) and he told me that he was well aware of the partitioning, but he never called the among-group covariance group selection. I also know that Mike Wade, who originally published the Price covariance model 30 years ago, has come to realize that the Price equation is inadequate.

What is wrong with the Price equation is actually quite simple, and is really the same as Williams’ (1966, “Adaptation and Natural Selection”) famous distinction between a “fleet herd of deer” and a “herd of fleet deer”. The problem is that if there is only selection at the individual level, say the slowest deer get eaten, then there will be some herds that by chance have a large proportion of fast deer. The Price partitioning will identify this variation in group composition as a positive covariance between group mean fitness and group mean phenotype; however, it will be entirely due to individual selection and the fact that there is variation among groups in the proportion of fleet deer. In mathematical terms, we can divide the Price covariance at the group level into a partial covariance between group mean fitness and group mean phenotype independent of individual-level effects, plus a residual covariance between group mean fitness and group mean phenotype that is caused by individual fitnesses and phenotypes. Only the partial covariance holding individual effects constant should be considered “group selection”; the other portion is change due to selection at the individual level:

$\mathrm{Cov}\left(W_j, \bar{z}_j\right) = \mathrm{Cov}\left(W_j, \bar{z}_j \mid z_{ij}\right) + \left[\mathrm{Cov}\left(W_j, \bar{z}_j\right) - \mathrm{Cov}\left(W_j, \bar{z}_j \mid z_{ij}\right)\right]$

where the first term on the right is the partial covariance (group selection proper) and the bracketed term is the residual covariance generated by individual-level selection.

The Price equation cannot make this separation.  It should come as no surprise that this partitioning is best done using contextual analysis. You can work out the math yourself if you want. The equations you need are in Goodnight et al. (1992 Am. Nat. 140:743).
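If you want to see the distinction numerically, here is a small simulation sketch (my own illustration of the fleet-deer argument, not code from Goodnight et al. 1992; all parameters are invented). Fitness depends only on an individual’s own speed, yet the Price among-group covariance comes out nonzero, while the contextual-analysis-style partial regression on the group mean is approximately zero:

```python
# Fleet-deer sketch: purely individual-level selection still produces a nonzero
# Price among-group covariance, but the contextual (partial) gradient is ~zero.
import numpy as np

rng = np.random.default_rng(42)
n_groups, group_size = 500, 10

speed = rng.normal(0, 1, (n_groups, group_size))        # individual phenotype (fleetness)
# Purely individual-level selection: fitness depends only on an individual's own speed
fitness = 1.0 + 0.5 * speed + rng.normal(0, 0.1, speed.shape)

w = fitness / fitness.mean()                            # relative fitness
z_bar = speed.mean(axis=1)                              # group mean phenotype
w_bar = w.mean(axis=1)                                  # group mean relative fitness

# Price-style among-group term: covariance of group mean fitness with group mean phenotype
price_among = np.cov(w_bar, z_bar)[0, 1]

# Contextual-analysis-style partitioning: regress individual relative fitness on
# the individual's own phenotype AND the group mean phenotype (the contextual trait)
X = np.column_stack([np.ones(w.size), speed.ravel(), np.repeat(z_bar, group_size)])
beta = np.linalg.lstsq(X, w.ravel(), rcond=None)[0]

print(f"Price among-group covariance: {price_among:.3f}  (nonzero despite no group selection)")
print(f"Individual selection gradient: {beta[1]:.3f}")
print(f"Contextual (group) selection gradient: {beta[2]:.3f}  (approximately zero)")
```

In other words, the among-group covariance by itself cannot tell you whether any group selection is occurring; the partial (contextual) term can.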

However, there is a much more serious issue than something so minor as the model being fundamentally flawed at this high level. This is the problem I mentioned before, namely that he is partitioning breeding values. In an additive world this should work; however, if there is one lesson that comes out of the experimental group selection literature it is that it does not work in the real world (Goodnight and Stevens 1997. Am. Nat. 150:S59). This is an important point I have made in the past: when theory and experiment disagree, the theory is wrong.

Indeed, there is no theoretical justification in Fisher’s additive world for me saying it is wrong. The reason I know that you can’t do that partitioning is that I have done and read the experiments (e.g., Goodnight 1990 Evolution 44:1614 & 44:1625). The problem is that when individuals interact, their interactions affect the phenotype. While this may not change breeding values at the individual level, it does change them at the group level. And this is exactly what we have found. Group selection experiments work way too well. When we have done experiments where the causes can be teased apart, we find that the reason group selection works so well is that it can act on the interactions among individuals. In other words, interactions among individuals become part of the breeding value at the group level. The Price partitioning assumes you are partitioning a constant; however, experiments show us that the breeding value at the group level and the breeding value at the individual level are not the same thing.

In short, the only way to develop a “genetical theory of multilevel selection” is to go Full Monty multivariate quantitative genetics, and treat the group and individual traits as separate, but correlated, traits. Contextual analysis does half of this; what remains to be done is to work out why the G matrix is the way it is. Fortunately, Bijma and friends have gone a long way in this direction (e.g., Bijma et al. 2007. Genetics 175: 277; Bijma 2014 Heredity 112:61).


You have to go Full Monty multivariate quantitative genetics if you want to have a chance at developing a genetical theory of multilevel selection. (hope the beefcake doesn’t offend.) (http://www.theage.com.au/articles/2004/05/10/1084041332216.html?from=storyrhs)
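For the record, here is a bare-bones numerical sketch (entirely my own, with invented numbers) of what “going multivariate” amounts to: treat the individual trait and the group (contextual) trait as separate but genetically correlated characters, and push the selection gradients through the multivariate breeder’s equation, Δz̄ = Gβ.

```python
# Bare-bones multivariate sketch with made-up numbers: the response to
# multilevel selection from the multivariate breeder's equation, delta_z = G @ beta.
import numpy as np

# Hypothetical genetic (co)variance matrix for [individual trait, contextual trait].
# The off-diagonal element is where interactions among individuals enter.
G = np.array([[0.40, 0.15],
              [0.15, 0.25]])

# Hypothetical selection gradients from a contextual analysis:
# beta[0] = individual-level gradient, beta[1] = group-level (contextual) gradient.
beta = np.array([0.30, 0.20])

response = G @ beta
print(f"Predicted response: individual trait {response[0]:.3f}, contextual trait {response[1]:.3f}")

# With no genetic covariance between levels, group selection could not contribute
# to the response of the individual trait:
G_no_cross = np.diag(np.diag(G))
print("Response ignoring the cross-level covariance:", G_no_cross @ beta)
```

The cross-level genetic covariance, the off-diagonal element of G, is where interactions among individuals enter, and it is why group selection can produce responses that a purely additive, individual-level accounting misses.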

So, thus we find that the basic model is flawed in several fundamental ways. First, it is a re-derivation that is, except for details of notation, identical to a model by Wade published in 1985 (it is clear he was unaware of Wade’s work, so there is no possibility of plagiarism here). Second, Wade’s model, and thus Gardner’s model, has been shown to incorrectly partition group and individual selection. And third, based on experimental and theoretical work, it is clear that the basic underlying assumption of constant breeding values is fundamentally flawed. Efforts to partition breeding values into within- and among-group components using the Price equation are doomed to failure due to interactions among genes and individuals. Ignoring these issues, however, well, I guess the model is fine.

Next week will be the last on this paper.  Basically last week we covered the introduction, this week was the model.  Next week will be the discussion.  If I can’t cover it in three weeks it ain’t getting covered.

Added in postscript:  Andy:  I feel badly about so thoroughly trashing this paper.  If you would like to respond I will post your response with no edits other than a short paragraph at the beginning giving attribution.  (you might want to wait until next week after I discuss the implications of your model).

 

Gardner’s theory of multilevel selection: Where he goes wrong and why

Posted: January 28th, 2015 by Charles Goodnight

Two things have happened recently. First, Jonathan Pruitt and I (Pruitt and Goodnight 2014 Nature 514:359) have been asked to reply to a goodly number of letters to the editor concerning our paper on multilevel selection in Nature. These letters have made it clear to me that many people have a very basic misunderstanding of multilevel selection. Second, I was made aware of a recent paper by Andy Gardner (2015 Jour. Evol. Biol. doi: 10.1111/jeb.12566), which is impressive in the depth of its misunderstanding of multilevel selection. I have never met Andy, but I do know he is well established, and he can stand a little criticism from me. Thus, I thought his paper would be perfect for highlighting some of the more serious misunderstandings people have about multilevel selection. There are so many problems with the Gardner paper that it will take me several weeks to work through them, so on that note, let’s take his paper and start turning it into confetti. You have actually seen the opening salvo in my post last week about Fisher’s fundamental theorem. What brought that up was Gardner suggesting that the fundamental theorem was somehow special, or that it applied only to a specific subset of biological entities.

What I want to talk about this week is an idea that Gardner puts out nicely in the first sentence of the abstract: “The theory of multilevel selection (MLS) is beset with conceptual difficulties.” The truth is that MLS is in fact a mature theory, one that, at this point, has very few conceptual difficulties. We know group selection works, we know why it is so effective, we know how to extend quantitative genetics along several different pathways to incorporate the interesting results of group selection experiments, and we know how to measure MLS in the field. Finally, MLS methods are widely used in agriculture – your breakfast this morning may well have depended on MLS theory. Eggs, bacon (hogs) and toast (wheat) are commonly or exclusively selected using MLS methods. It is a mature, settled theory. Sure, there is much to be done, but isn’t that true of all science?

So, why is Gardner so wrong? Well, that can be seen in the first sentence of his introduction (do you start to see why this might take a few weeks?): “Recent years have seen a resurgence of interest in the theory of multilevel selection (MLS: Price, 1972a; Hamilton, 1975; Sober & Wilson, 1998; Keller, 1999; Okasha, 2006; Wilson & Wilson, 2007; West et al., 2008; Gardner & Grafen, 2009; Leigh, 2010; Nowak et al., 2010; Lion et al., 2011; Marshall, 2011; Frank, 2012a, 2013).” What you should notice is that there are no serious multilevel selection experimentalists on this list, nor is there anybody on that list whom I would call a true MLS theorist. I will not go through the list of why these people are inappropriate, other than to say that some are very old, many are philosophers, and many are advocates of kin selection, or for other reasons really should not be considered authorities on multilevel selection. One has to ask: where are (to list only modern authors) Wade (1977 Evolution 31:134; Wade et al. 2010 Nature 463:E8), Bijma (Bijma et al. 2007 Genetics 175:277), Muir (1996 Poultry Science 75:447), Eldakar (Eldakar et al. 2010 Evolution 64:3183), Simon (Simon et al. 2013 Evolution 67:1561), Ratcliff (2012 PNAS 109:5), Travisano (2004 Trends Microbiol. 12:72), Driscoll (Driscoll and Pepper 2010 Evolution 64:2682), or, dare I suggest, myself? These are people who understand multilevel selection. I should point out that it is not just this sentence where he fails to cite the relevant literature. With the exception of one vacuous (it will come up later) reference to a paper of mine, none of these authors appear in the literature cited.

This is a fundamental problem that I am seeing. Gardner, not to mention the authors of the letters to Nature that we have been fielding, appears to be completely ignoring the MLS literature. I will admit my own failings in this matter. It is not infrequent that I will glance over an abstract and decide it is not important to what I am writing about. However, when writing outside my field (and yes, Gardner is working outside his field) I really do try to ask colleagues if they know of anything I have missed. In this case there is plenty that Gardner missed. As an example, the model he develops in his paper is totally incompatible with the results of Goodnight and Stevens (1997 Am. Nat. 150:S59). Nobody but Andy knows the real reason he ignored the body of MLS literature. Hopefully it won’t happen in the future.

With these weak foundations, Gardner then goes on to list a series of things that he believes to be difficulties. These include:

  • the “precise meaning of group trait” – A group trait is either a trait measured on the group itself, or a composite of measures taken on the group members. Both can be appropriate. Like all studies of selection an understanding of the underlying biology is needed to identify relevant traits. Bottom line: experimentalists need to actually measure these “group” (really contextual) traits. As you might expect, those who measure them know what they are.
  • The “precise meaning of group fitness” – I have to give you that. However, the reason for this is that it is not relevant to the study of MLS. The relevant issue here is measuring selection in the field, and for this the appropriate approach is contextual analysis, which does not use “group fitness” (see Taylor, Wild and Gardner 2007 J. Evol. Biol. 20:301 for a demonstration that direct fitness, which is the same thing as contextual analysis, is an appropriate metric; snideness aside, also look at Goodnight 2013 Evolution 67:1539).
  • There is “ambiguity as the focal level in a MLS analysis”. Here he is complaining about the distinction between multilevel selection 1 and 2. I do not like this language, and I am not the best person to comment on it. The term was coined over 25 years ago; can we give it a rest? The basic problem is the level at which you assign fitness. Sadly, he again shows his ignorance, because the most relevant paper, one that gives a relatively simple explanation of this non-controversy, is the one paper of mine he cites: Goodnight 2013 (pp 37-53 in: From Groups to Individuals). While he did cite this chapter, it was not in the context of this problem, and when he did cite it, it was to make an invalid point.
  • Finally, he makes a big deal about MLS theory not being able to adequately handle class-structured populations. First off, there actually is a nice old paper on multilevel selection in age-structured populations (Mertz et al. 1984 Evolution 38:560), although it really isn’t very useful in this context. More relevant, the reason that nobody has developed a method to study MLS in a class-structured population is that nobody has bothered – most ant people these days are kin selectionists. The basic approach is actually conceptually quite simple: I would follow Lande’s lead on analyzing sexual dimorphism (Lande 1980 Evolution 34:292) and phenotypic plasticity (Via and Lande 1985 Evolution 39:305). I would describe a separate trait for each caste, plus one or several contextual traits to describe the overall composition of the colony. Each individual would express only one of the individual traits, but of course all would experience the contextual traits. Then it would be a small matter to modify the methods of Lande and of Via and Lande to use them in this system. It actually isn’t that different from the approach Gardner advocates, but it is far more elegant, and it is far more consistent with the existing methodologies for related problems.

So basically what we see in Gardner’s paper (and by extension many of the letters to Nature) is a failure to be aware of and to understand the relevant literature. The problem is not the failure to cite the relevant papers per se; rather, the problem is that the authors do not know the literature and do not understand the field. As a result they end up looking foolish for raising issues that do not exist, and for suggesting methodologies that in this case are clumsy but, as we shall see in the next week or two, also simply give the wrong answer. I am aware that it is often easy to miss important papers, but to paraphrase the old saying about the law: ignorance of the literature is no excuse.


We all are guilty of not adequately reading the literature. Nevertheless, it is something to be avoided.  (From http://imgur.com/gallery/1DEYI)

 

 
