The principle of persimmony: persimmons come from a persimmonious tree (https://www.flickr.com/photos/giagir/5185254421).

But is this really true? Last week I discussed why individual selection can’t be reduced to genic selection. It turns out that the situation is even worse when trying to reduce group selection to selection on the underlying individuals. So with that long-winded introduction out of the way: the main reason that group selection cannot be reduced to individual selection is indirect genetic effects (IGEs). Indirect genetic effects occur when genes in one individual affect the phenotype of another individual.

This is an effect that has been seen time and time again. The most aggressive chickens lay the most eggs, but also suppress the egg laying of their cage mates (Muir 1996, Poultry Science 75:447); crop plants interact such that the highest-producing plants most strongly suppress their neighbors (Griffing 1977, in: Proceedings of the International Congress on Quantitative Genetics, August 16-21, 1976); and there are many more examples. The important point is that interactions internal to the unit of selection can contribute to the response to selection, whereas interactions external to the unit cannot. Thus group selection can act on IGEs, but individual selection cannot.

To see this it is easiest to use the Price equation. The Price equation divides the covariance between a trait and relative fitness into within and between group components. It is easy and convenient to use this partitioning to make the point I want to make, but it is important to emphasize that the Price partitioning should never be equated with group and individual selection (Are you listening West and Gardner?).

Imagine we have a metapopulation in which individuals interact within groups but not between groups. The individuals interact in some manner that affects all individuals in the group in the same way. That is, perhaps they release waste products into their environment and everybody gets equally poisoned, or on a more positive note, perhaps they release some chemical public good. Further imagine that we have a trait, z, that is influenced by direct genetic effects (DGE), indirect genetic effects (IGE) and environmental effects. Thus, the trait value of the *i*th individual in the *j*th deme is:

Z_{ij} = DGE_{ij} + IGE_{j} + e_{ij}

Further imagine that the fitness of the *ij*th individual relative to the metapopulation mean fitness is w_{ij}, and the correlation between environmental effects and fitness is zero (just to get them out of the way).

To bring this back to my posts on Gardner: if I were following his model, at this point I would want to partition the “total breeding value” so I could compare it with his partitioning of Fisherian breeding values. “Breeding value” is defined by Fisher (1930; Falconer and Mackay 1996) to be the average value of an individual’s offspring measured as a deviation from the population mean. This definition assumes that there is no population structure and that offspring interact randomly with other individuals in the population. Because they ignore population structure, Fisherian breeding values cannot be partitioned. Bijma and Wade (2008, JEB **21**: 1175-1188) solved this by defining “total breeding value” to be the average value of an individual’s offspring **measured in their native social environment** as a deviation from the metapopulation mean. Unlike Fisherian breeding values, total breeding values can be partitioned. If you prefer to partition total breeding values, replace “z” with total breeding value in the equation below, and replace DGEs and IGEs with their additive genetic equivalents.

If we put all this together, using the Price equation to partition the covariance between total breeding value and relative fitness we get an algebraic explosion!

Or much more simply:
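The two equations were images and are missing here; as a hedged reconstruction from the definitions above (not the original figure), the simplified partitioning is presumably:

```latex
\operatorname{cov}\left(w_{ij}, z_{ij}\right) =
  \underbrace{\operatorname{cov}\left(\bar{w}_{j},\, \overline{DGE}_{j} + IGE_{j}\right)}_{\text{between demes}}
  \;+\;
  \underbrace{\operatorname{E}_{j}\!\left[\operatorname{cov}\left(w_{ij},\, DGE_{ij} \mid j\right)\right]}_{\text{within demes}}
```

Because IGE_{j} is constant within a deme, it drops out of the within-deme term but not the between-deme term, and the environmental effects vanish by the assumption that they are uncorrelated with fitness.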

So, in words, this simply tells us that the within-demes covariance between phenotype and relative fitness includes ONLY direct genetic effects, whereas the between-demes covariance between phenotype and relative fitness includes both direct and indirect genetic effects. This is shown graphically in the following figure:

*The sources of variation for a trait and the group mean of the trait. For clarity I have left the total variance proportions the same for the group mean trait, even though in most situations the direct genetic effects and the environmental effects would be reduced due to averaging. Although the genetic components underlying the trait are unchanged by taking the average, the heritable component does change. For the individual trait only the direct effects are heritable, whereas for the group mean trait both the direct and indirect genetic effects are heritable.*

What this is saying is that from an evolutionary perspective a trait and the group mean of a trait are actually different traits. Because group selection can act on both direct and indirect effects it can produce genetic changes that are qualitatively different than selection acting on the individual level. As I have pointed out numerous times this is not a minor theoretical issue that experimentalists can ignore. Indirect genetic effects have shown up as major factors in the response to group selection in every situation where it has been possible to infer their presence, including both experiments specifically designed to detect them (e.g., Goodnight 1990 Evolution 44:1625) and ones where they were obvious even though the experiment did not have explicit treatments to detect them (e.g., Muir 1996).

Next week, as promised for this week, but not delivered: Why reductionism does work.


“In explaining adaptation, one should assume the adequacy of the simplest form of natural selection, that of alternative alleles in Mendelian populations, unless the evidence clearly shows that this theory does not suffice” (Williams 1966, *Adaptation and Natural Selection*)

and, in the same book, more explicitly saying that reductionism works:

“No matter how functionally dependent a gene may be, and no matter how complicated its interactions with other genes and environmental factors, it must always be true that a given gene substitution will have an arithmetic mean effect on fitness in any population.”

All I can say to this is GAHHHH!

*Merida expresses her opinion on genetic reductionism (taken from http://giphy.com)*

I think a lot of people know that you cannot think of selection as acting directly on genes, but many can’t articulate why it doesn’t work. So, if anybody asks you, the simple answer is that reductionism doesn’t work because of interactions. At the individual level these will primarily be the gene interactions of dominance and epistasis.

In a fully additive system there would be no problem, and this IS the problem. Our intuition about genetics was developed using simple additive models. In an additive system, knowing at what level selection was acting would be nice information, but the fitness of the phenotype can always be algebraically reduced to fitness effects on individual loci. In other words, in additive systems, how the genes are packaged really doesn’t affect the effect of genes on the phenotype. To see this, consider a phenotype affected by a single additive locus:

| Genotype | A_{1}A_{1} | A_{1}A_{2} | A_{2}A_{2} |
|---|---|---|---|
| Frequency | p^{2} | 2pq | q^{2} |
| Phenotype | 1 | 1-Z/2 | 1-Z |

(I use Z to emphasize that we are talking about a phenotype, not fitness; selection itself will be affected by the packaging for the simple reason that some of the selection is on heterozygotes.) If we calculate the average effect of the A_{1} allele on the phenotype, we discover that it is:

| Original genotype | Genotype after substitution | Probability | Change |
|---|---|---|---|
| A_{1}A_{1} | A_{1}A_{1} | p^{2} | 0 |
| A_{1}A_{2} | A_{1}A_{2} | ½ 2pq | 0 |
| A_{1}A_{2} | A_{1}A_{1} | ½ 2pq | Z/2 |
| A_{2}A_{2} | A_{1}A_{2} | q^{2} | Z/2 |

So, the average effect of the A_{1} allele is ½(2pq)(Z/2) + q^{2}(Z/2) = qZ/2.

Now consider a haploid system:

| Genotype | A_{1} | A_{2} |
|---|---|---|
| Frequency | p | q |
| Phenotype | 1 | 1-Z/2 |

This system has the same phenotypic effects as the diploid one, adjusted for ploidy. Now the local average effect of the A_{1} allele is:

| Original genotype | Genotype after substitution | Probability | Change |
|---|---|---|---|
| A_{1} | A_{1} | p | 0 |
| A_{2} | A_{1} | q | Z/2 |

So, the average effect of the A_{1} allele is, you guessed it, q(Z/2) = qZ/2, the same as in the diploid system.

The effect of the allele on the phenotype is not affected by the packaging.
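The substitution arithmetic above can be checked in a few lines of Python (a minimal sketch; p and Z are arbitrary illustration values):

```python
# Average effect of an A1 substitution, computed from the substitution
# tables above for the additive system.
def avg_effect_diploid_additive(p, Z):
    q = 1 - p
    # A1A1 -> A1A1 (prob p^2): change 0
    # A1A2 -> A1A2 (prob 1/2 * 2pq): change 0
    # A1A2 -> A1A1 (prob 1/2 * 2pq): change Z/2
    # A2A2 -> A1A2 (prob q^2): change Z/2
    return 0.5 * 2 * p * q * (Z / 2) + q**2 * (Z / 2)

def avg_effect_haploid(p, Z):
    q = 1 - p
    # A1 -> A1 (prob p): change 0;  A2 -> A1 (prob q): change Z/2
    return q * (Z / 2)

p, Z = 0.3, 1.0
# Both reduce to qZ/2: the packaging does not matter in the additive case.
assert abs(avg_effect_diploid_additive(p, Z) - avg_effect_haploid(p, Z)) < 1e-12
```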

Now let’s do the same thing with a dominant system:

| Genotype | A_{1}A_{1} | A_{1}A_{2} | A_{2}A_{2} |
|---|---|---|---|
| Frequency | p^{2} | 2pq | q^{2} |
| Phenotype | 1 | 1 | 1-Z |

Now the average effect of the A_{1} allele on the phenotype becomes:

| Original genotype | Genotype after substitution | Probability | Change |
|---|---|---|---|
| A_{1}A_{1} | A_{1}A_{1} | p^{2} | 0 |
| A_{1}A_{2} | A_{1}A_{2} | ½ 2pq | 0 |
| A_{1}A_{2} | A_{1}A_{1} | ½ 2pq | 0 |
| A_{2}A_{2} | A_{1}A_{2} | q^{2} | Z |

So, the average effect of the A_{1} allele is q^{2}Z.

Turning to the haploid system:

| Genotype | A_{1} | A_{2} |
|---|---|---|
| Frequency | p | q |
| Phenotype | 1 | 1-Z/2 |

Now the local average effect of the A_{1} allele is:

| Original genotype | Genotype after substitution | Probability | Change |
|---|---|---|---|
| A_{1} | A_{1} | p | 0 |
| A_{2} | A_{1} | q | Z/2 |

So, the local average effect of the A_{1} allele is again qZ/2. The average effect in the haploid system (qZ/2) is now different from that in the diploid system (q^{2}Z).
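This difference can again be checked numerically (a minimal sketch; p and Z are arbitrary illustration values):

```python
# Average effect of an A1 substitution in the dominant diploid system,
# from the substitution table above.
def avg_effect_diploid_dominant(p, Z):
    q = 1 - p
    # Only the A2A2 -> A1A2 substitution (prob q^2) changes the phenotype, by Z.
    return q**2 * Z

def avg_effect_haploid(p, Z):
    q = 1 - p
    # A2 -> A1 (prob q): change Z/2
    return q * (Z / 2)

p, Z = 0.3, 1.0
# q^2 * Z versus q * Z / 2: with dominance, the packaging matters.
assert abs(avg_effect_diploid_dominant(p, Z) - avg_effect_haploid(p, Z)) > 1e-6
```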

In other words, if we add the simplest possible form of nonadditivity, the packaging does matter. Trust me, it gets worse. I am way too lazy to put up tables for average effects in epistatic systems, but I have talked about this before. It turns out that the variance in local average effects is a measure of how sensitive the average effects of alleles are to genetic background. I have talked about this before, but it bears re-posting the relevant figure:

The important point is that the variance in local average effects is zero in additive systems, but non-zero when there are any sort of interactions. This means that reducing fitness effects onto genes is a reasonable exercise in additive systems, but simply is not meaningful in epistatically interacting systems. To see how bad this can be, consider long-term directional selection in a system with AXA epistasis. Depending on the starting gene frequencies, the average effect of an allele can actually reverse sign. For what it is worth, the dashed lines are the local average effects for an additive system, and the solid lines are the local average effects for AXA epistasis. This shows the contrast between additive and epistatic systems. For the additive system, if you were to evaluate the fitness effects in generation zero they would provide a pretty good estimate of the fitness at the end (in this deterministic system, an exact estimate). On the other hand, for the epistatic system, estimates of allelic effects made in generation zero rapidly become useless, and by the time fixation is reached they are exactly wrong.

In one sense, Williams is absolutely correct. At any given instant it is certainly possible, in principle, to do a least squares regression analysis and assign fitness effects to individual loci. However, in an epistatically interacting system those fitness assignments are ONLY good for the moment, or perhaps the generation, in which the assignment is done. Those effects will change as gene frequencies change, and not just gene frequencies at the locus under study, but gene frequencies at any other loci as well. So, my point is not that the assignment cannot be done, but rather that the assignment carries no information that is useful beyond the moment.
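That least-squares assignment can be sketched numerically. This is a hypothetical model, not one from any of the cited papers: AXA epistasis with alleles scored -1/0/+1, an arbitrary epistatic coefficient eps, and Hardy-Weinberg frequencies at both loci. The regression-assigned effect at one locus reverses sign as the allele frequency at the other locus changes, which is exactly the frequency dependence described above.

```python
# Hypothetical A x A model: fitness w = 1 + eps * a * b, where a and b are
# allelic scores (-1, 0, +1) at two independent Hardy-Weinberg loci.
def locus_A_effect(p, r, eps=0.1):
    """Least-squares (regression) fitness effect assigned to the A locus,
    with allele frequencies p at A and r at B."""
    genoA = [(-1, (1 - p)**2), (0, 2 * p * (1 - p)), (1, p**2)]
    genoB = [(-1, (1 - r)**2), (0, 2 * r * (1 - r)), (1, r**2)]
    Ea = sum(a * fa for a, fa in genoA)
    # cov(w, a): the constant 1 in w drops out, leaving the epistatic term
    cov_wa = sum(fa * fb * (eps * a * b) * (a - Ea)
                 for a, fa in genoA for b, fb in genoB)
    var_a = sum(fa * (a - Ea)**2 for a, fa in genoA)
    return cov_wa / var_a

# The assigned effect at A works out to eps * (2r - 1): negative while the
# "+" allele at B is rare, positive once it is common.
assert locus_A_effect(0.5, 0.1) < 0 < locus_A_effect(0.5, 0.9)
```

The assignment is exact at the moment it is made, but it is a function of the other locus’s frequency, so it changes, and eventually reverses, as that locus evolves.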

Next time I talk about why reductionism does work!

It really is a nice study in which they identified 11 quantitative trait loci (QTL) in a single population of monkey flower, then used these to estimate the functional (also known as physiological) direct effects, and all of the two locus epistatic interactions. They then used these estimates to estimate additive genetic variances and total genetic variances in the population.

What is nice about this study is that they use actual data from a QTL analysis of a natural population, and then use the resulting analyses to estimate bi-allelic functional epistasis for each of the pairs of QTL. It would be great to have access to some of those two-locus genotypic values for teaching purposes! I would also love to have the actual allele frequencies, so that we could in fact estimate the standing statistical variance components in the natural populations. This also brings up a very important point: all of the models to date have put in fixed values for the genotypic values (or avoided the issue entirely using inbreeding coefficients). In the real world we collect organisms, identify genes, and phenotype them. There is ample room for error at every step. So the one thing we know for sure is that any QTL measure or assignment of phenotype to genotype is an estimate. This really is the first attempt to couple field estimates of genotypic values to variance components.

One other thing that is nice about this paper is that they bring up both the Kempthorne/Cockerham variance components and the more recent terminology of “positive”, “negative” and “sign” epistasis. Nicely, Hansen (2013 Evolution 67: 3501-3511) provided two-locus examples of these types of epistasis. It turns out that if we set the gene frequencies to 0.5 and do the appropriate regressions, we can directly relate these molecular concepts of epistasis to the quantitative genetic components. It also turns out that this is critical, for while functional epistasis is loads of fun, it is only the quantitative genetic variance components that tell us how phenotypic evolution works.

Anyway, from Hansen (2013) these different types of functional epistasis are:

Using the JMP program shown below it is easy to show that positive epistasis is a hodgepodge of variance components (89% additive variance, 3.6% AXA epistasis, 3.6% AXD epistasis, and 3.6% DXD epistasis), whereas negative and sign epistasis are mixtures of additive variance and AXA epistasis (negative epistasis: 80% additive variance, 20% AXA epistasis; sign epistasis: 50% additive variance, 50% AXA epistasis). Maybe it’s because I am a curmudgeon, but I am happier with the old-fart Kempthorne partitioning, because it relates directly to variance components, and can be much more easily converted to statistical genetic components.

Now here is the critical point. These variance components are a function of gene frequency, thus the variance components will change as gene frequencies change. Using the example of positive epistasis above I can now tell you the additive genetic variance for any two locus gene frequency:

*Graph of the additive genetic variance for two locus two allele positive epistasis as described by Hansen (2013). A JMP program to calculate VA for a single gene frequency is listed below. Note that I rotated the graph to best show the shape of the surface. The highest additive genetic variance occurs when both the A2 and B2 alleles are at low frequency (around 0.2).*
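Hansen’s actual genotypic values are not reproduced in this post, so here is a hedged sketch with a generic two-locus model G = a + b + eps·a·b (allelic scores -1/0/+1, Hardy-Weinberg proportions, linkage equilibrium; eps is an arbitrary choice). Regressing G on each locus’s allelic score and summing the per-locus additive variances gives VA as a function of both gene frequencies:

```python
# Additive genetic variance for a generic two-locus A x A model,
# G = a + b + eps * a * b, at allele frequencies p and r.
def additive_variance(p, r, eps=0.5):
    def moments(f):
        geno = [(-1, (1 - f)**2), (0, 2 * f * (1 - f)), (1, f**2)]
        mean = sum(x * w for x, w in geno)
        var = sum(w * (x - mean)**2 for x, w in geno)
        return mean, var
    Ea, Va = moments(p)
    Eb, Vb = moments(r)
    alpha_A = 1 + eps * Eb   # regression slope of G on the A-locus score
    alpha_B = 1 + eps * Ea   # regression slope of G on the B-locus score
    return alpha_A**2 * Va + alpha_B**2 * Vb

# VA is a function of gene frequency, so it changes as frequencies change:
assert abs(additive_variance(0.5, 0.5) - additive_variance(0.2, 0.2)) > 1e-6
```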

Finally, I know it is impolite to promote your own work, but, well, it’s my blog and I will do what I want. My ego was a bit hurt by the fact that my work on epistasis and additive genetic variance was not cited, in particular my paper on average effects and additive variance (Goodnight 2000. Heredity 84: 587-598), which was quite relevant. That and my earlier paper using breeding values (Goodnight 1988. Evolution 42: 441-454) were the first papers to describe the conversion of epistasis into VA, and they have historical significance if nothing else. I have long been fighting a bit of a rear-guard action to keep those papers from falling into the obscurity of common knowledge. There is actually another reason that they could have benefited from citing those papers. One of the things that comes out of them is that if you can write down the functional values for the 9 genotypes of a pair of interacting two-allele loci, you can use regression to calculate the additive genetic variance for any given gene frequency. I do actually know why they might have missed my paper. They use the Falconer partitioning that was first pioneered by Cheverud and Routman (1995. Genetics 139: 1455-1461), which is enough different that my paper really didn’t need to be cited, so it is hard to get too mad at them.

*It’s my blog and I will whine if I want to. You would whine too if it happened to you. (picture from http://www.amazon.com/Its-My-Party-Mercury-Anthology/dp/B000VHKHZA)*

If you have JMP and are savvy in its use, the files that I use for calculating the additive genetic variance can be found here (variance regressions). I fixed it by changing the file extension to .txt. It is still a .jmp file, so after you download it please change the .txt to .jmp, and then it should work.

Basically, you add your own dependent variables, add the allele frequencies of your choice (I put them in as a formula, so use the get column info route to change those), and the linkage disequilibrium. Then run the script in the upper left-hand corner. Finally, if the gene frequencies are other than 0.5, or the loci are in linkage disequilibrium, use sequential (type 1) sums of squares; type 3 sums of squares will give you the wrong answer. If you have any questions feel free to ask me. OK, if you want the program I need to send it to you under separate cover, so email me if you would like it. If I ever figure it out I will fix things.


Obviously the size, shape, and dimension of a covariance matrix will be related to the ability to respond to selection, but the relationship may not be perfect. Two other approaches that have been developed are “random skewers” (Cheverud 1996 J. Evol. Biol. 9:5-42; Cheverud and Marroig 2007 Genet. Mol. Biol. 30:461-469; Revell 2007 Evolution 61:1857-1872) and “selection skewers” (Calsbeek and Goodnight 2009 Evolution 63:2627-2635). To see what a random “skewer” is, consider that in a multivariate selection experiment the response to selection is given by:

R = GP^{-1}S = Gβ

β is a vector that describes the direct effects of selection on the different traits. The G matrix is sometimes thought of as a “rotation matrix”: from a biologist’s perspective it gives us the R vector, or response to selection, but from a mathematician’s perspective what it does is rotate and warp the β vector. Thus, if we take any arbitrary β vector and multiply it by two different G matrices, the two matrices will rotate and stretch the β vector in different ways, producing two different R vectors. We can use this because if the two matrices are identical the two rotated vectors will be identical, whereas if the matrices are different the two rotated vectors will also be different. These can be compared by calculating the vector correlation between the two vectors. In linear algebra terms this is (I am SO sorry I am doing this to you!)

For the non-linear-algebra adept (he said, raising his hand), the numerator is really just a means of calculating a covariance between the two vectors, and the denominator is the square root of the product of the squared lengths of the two vectors.
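The expression itself is missing here; the vector correlation matching the description above is presumably the standard one (a reconstruction):

```latex
r_{\mathbf{R}_{1},\mathbf{R}_{2}} =
  \frac{\mathbf{R}_{1}^{\top}\mathbf{R}_{2}}
       {\sqrt{\left(\mathbf{R}_{1}^{\top}\mathbf{R}_{1}\right)\left(\mathbf{R}_{2}^{\top}\mathbf{R}_{2}\right)}}
```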

So, with the random vectors approach what you do is generate a large number (1000 or more) random unit vectors. These represent a set of selection gradients in random directions. For each gradient you calculate the resulting R vector using your two matrices, and calculate the vector correlation. If the average correlation is close to one, then they are the same, whereas if it is less than one the two matrices are different.
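The procedure can be sketched in a few lines of pure Python (the matrices and the number of skewers are made up for illustration; a real analysis would use estimated G matrices):

```python
import math
import random

def vec_corr(u, v):
    """Vector correlation (cosine of the angle) between two response vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))

def skewer(G, beta):
    """Response to selection R = G * beta."""
    return [sum(G[i][j] * beta[j] for j in range(len(beta)))
            for i in range(len(G))]

def random_skewers(G1, G2, n=2000, seed=1):
    """Average vector correlation of responses to n random unit gradients."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        b = [random.gauss(0, 1) for _ in range(len(G1))]
        norm = math.sqrt(sum(x * x for x in b))
        b = [x / norm for x in b]  # random unit-length selection gradient
        total += vec_corr(skewer(G1, b), skewer(G2, b))
    return total / n

G = [[1.0, 0.5], [0.5, 1.0]]    # made-up G matrices
H = [[1.0, -0.5], [-0.5, 1.0]]
assert random_skewers(G, G) > 0.999   # identical matrices: correlation ~1
assert random_skewers(G, H) < 0.95    # different structure: lower correlation
```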

The question, of course, is how close to one is close enough. Here again the bootstrap comes in. Following the approach I outlined last time, we generate a large number of pairs of matrices that are estimated from bootstrap samples of the same data set. Because they are estimated from the same data set there can be no true difference, so calculating the average correlation between these pairs of matrices gives us a distribution of the correlation when the null hypothesis is true. It is then a simple matter to compare the actual correlation with the bootstrap correlations. If the actual correlation is less than 95% (or whatever) of the bootstrap correlations then we can say that the two matrices are significantly different from each other.

This is an interesting point. Here we are using the null hypothesis that the two matrices are identical. Thus, we set up the bootstrap such that the null hypothesis was true, and compared our actual correlation with the bootstrap correlation. In the original random skewers approach the opposite was the case. The null hypothesis was that the two matrices were uncorrelated, and thus those papers use a different approach to significance testing. I googled hard for a joke about getting null hypotheses backwards, but apparently this is too subtle for the online community.

The selection skewers approach is similar to random skewers, with a few important changes. This analysis is appropriate if you are specifically interested in comparing how two populations will respond to a particular selection pressure. For example, you may have two recently diverged populations and want to determine whether they will respond in the same manner to a particular selection pressure. In most cases you will likely have a known S vector, which is the raw selection differential. This is what I assume in the program I provided. In this case you first need to generate the β = P^{-1}S vector. Then, as with the random skewers, you calculate the vector correlation, and compare the actual correlation to the correlations in the bootstrap data sets, for which the null hypothesis of no difference is true.

The nice thing about both the random skewers and the selection skewers is that they give a real-world idea of what changes in shape can do. The random skewers approach is agnostic as to how selection actually works, whereas the selection skewers tests a specific selection regime. This latter is particularly interesting, since it is entirely possible for two matrices to have very different structures (as determined, say, by the rank/Bartlett’s/Mantel tests), and yet for this structural difference to have very little actual effect on the response to selection. On the down side, however, the random and selection skewers lump a lot of information together. For example, it can be hard to determine whether a difference in response between two matrices is due to a difference in the total amount of available variation, or due to changes in the correlation structure leading to negative genetic correlations.

I guess the real lesson from all this is that there is no one best statistical test. Which is best depends on the question you ask. If you want detailed insights into the actual covariance matrices the rank/Bartlett’s/Mantel tests may be best. If you want a summary of the difference in the ability to respond to selection, random skewers may be a good choice, and if you have a clear a priori selection hypothesis to test, the selection skewers approach is clearly the best.

To remind you I have an R script that performs these tests and can be relatively easily modified for different data sets and circumstances.

**Here is the program:**

Writeup on how to use the program: Matrix comparison writeup

The program: Bootstrap command

Relevant example data sets:

First off, there is nothing wrong with the Flury hierarchy, I just don’t particularly find it intuitively useful. As I understand it the Flury hierarchy is a model selection approach, whereas the methods I will discuss are parametric statistical tests. I recommend you read Phillips and Arnold’s papers and make your own decision. So enough preamble.

We had just done an experiment in which we sent a population through a population bottleneck, and we had measured several traits. We wanted to know if the derived population and the ancestral population had the same genetic structure, aka the same genetic covariance matrices. For a single trait we know exactly how to do this. You “simply” measure the additive genetic variance in the two populations and do an F test to see if they are the same or different. I put simply in quotations because measuring additive variance is never easy.

When we get to a multivariate setting things become more complicated. Again, we will likely use a MANOVA to estimate an additive genetic covariance matrix for each population. We would then like to compare these to see if they are the same or different. The good news is that genetic covariance matrices are square and generally easy to work with. The bad news is that when we go multivariate there are several ways that matrices can be different. In Goodnight and Schwartz (1998) we decided there are three ways of interest: the matrices can be of different dimension, they can be of different size, and they can be of different shape. These are really independent ways of being different, so it makes sense to develop three tests. The way we tested these was using bootstrapping.

**The bootstrap:** Bootstrapping is an interesting statistical procedure that was popularized in the 80s by Brad Efron (Efron 1979, The Annals of Statistics 7:1-26) (I took a workshop he offered somewhere around 1985). The basic idea is that if you have a data set you can create new pseudo data sets by randomly sampling with replacement from the original data. If enough of these bootstrap data sets are generated they will actually provide a distribution for the data. This at first seems counterintuitive, but as long as your data set is relatively large it works very well. To use this as a statistical test you need to decide what your null hypothesis is, and then figure out a random sampling scheme that makes that null hypothesis true. For example, with a t-test the null hypothesis is that the two populations have the same mean. You can make that null hypothesis true by simply combining the data from the two populations and then randomly assigning observations back to the two populations without regard to their original source. As a result there will be no true difference between the populations. If you randomly create several thousand of these pairs of populations you will get a distribution of observed differences in the means when you know the true difference is actually zero. You can then take the actual difference between the two populations and simply ask what percentage of the bootstrap differences are more extreme than the difference in the actual data. That percentage is your probability of the observed difference occurring by chance. There are more sophisticated approaches, but this gives the idea.
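The t-test analogue just described can be sketched in Python (the samples are invented for illustration):

```python
import random

def bootstrap_p(sample1, sample2, n_boot=2000, seed=1):
    """Two-sided bootstrap p-value for a difference in means, resampling
    from the pooled data so the null hypothesis (equal means) is true."""
    random.seed(seed)
    obs = abs(sum(sample1) / len(sample1) - sum(sample2) / len(sample2))
    pooled = sample1 + sample2
    count = 0
    for _ in range(n_boot):
        a = [random.choice(pooled) for _ in sample1]
        b = [random.choice(pooled) for _ in sample2]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= obs:
            count += 1
    # fraction of null differences at least as extreme as the observed one
    return count / n_boot

same = bootstrap_p([1.0, 1.2, 0.9, 1.1, 1.0] * 4, [1.0, 1.1, 0.9, 1.2, 1.0] * 4)
diff = bootstrap_p([1.0, 1.2, 0.9, 1.1, 1.0] * 4, [2.0, 2.1, 1.9, 2.2, 2.0] * 4)
assert diff < 0.05 < same  # clear difference detected, null case not
```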

In our particular test we had an ancestral population and a population derived from two generations of brother sister mating. We wanted to see if the two populations were the same or different. Our null hypothesis was that their covariance matrices were the same (this is important!), and we decided to use data from the ancestral population as our source for the bootstrap data.

**Dimension:** A genetic covariance matrix can be thought of as enclosing a space. Thus a univariate “matrix” is a single vector of a length that is equal to the variance. A two-trait covariance matrix defines a plane, a three trait matrix a cube, and so on.

Figure 1: A one-dimensional vector, and two- and three-dimensional matrices.

There are two things that can happen to the additive genetic variance after a population goes through a bottleneck. First it can disappear, that is, it can go to zero. Second, it can become so highly correlated with other traits that it becomes a linear combination of these traits. In graphic terms, in the three-trait case, that would be the equivalent of one of the vectors lying exactly in the plane of the other two vectors.

Figure 2: in this matrix trait z is a linear combination of traits y and x. As a result all three lie in a single plane, and the resulting matrix is a two dimensional matrix.

Consider trying to compare two matrices with three variances. One is like the three dimensional matrix in figure 1, and the second has only two dimensions as in figure 2. It won’t work to compare these. As an analogy it is like asking which is bigger, a box or a sheet of paper. The three dimensional matrix has an extra dimension along which it can evolve that is qualitatively different from the two dimensional structure.

The way we tested this was to find the largest sub-matrix that had valid variances that were not linear combinations of other vectors. We then used the absolute value of the difference in rank (|R_{popA}-R_{popB}|) as our test statistic, measured against the bootstrap populations where there was no true difference in rank. In this data set the difference in rank was not significant.

**Difference in size:** As mentioned above, matrices can be considered to be planes, volumes or hyper volumes. It turns out that the determinant is a measure of the space enclosed by the matrix. For example, in a two-trait matrix the determinant is the area of the matrix, in a three trait matrix it is the volume, etc. Thus two matrices of the same dimension, regardless of shape, can be compared by comparing the determinants. The analogy is having two oddly shaped vases. We can compare them by asking how much water they hold. In this case shape is of no consequence, only the size of the space enclosed.
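A toy numerical illustration of the determinant-as-volume idea (the 2x2 matrices are made up): same dimension, different shapes, directly comparable determinants.

```python
# Determinant as the "space enclosed" by a covariance matrix (2x2 case).
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

uncorrelated = [[1.0, 0.0], [0.0, 1.0]]   # round: no trait correlation
elongated = [[1.6, 0.96], [0.96, 1.0]]    # skinny: strong trait correlation
# Very different shapes, but the enclosed areas can be compared directly:
assert det2(uncorrelated) > det2(elongated)
```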

The important caveat is that they must be of the same dimension. Again, the same question: which is larger, the volume of a box or the area of a sheet of paper? And again it is a meaningless question. We chose to resolve this by doing an “orthogonal projection” of the larger-dimension matrix on the smaller-dimension matrix. That is, we searched the matrix pairs for a set of traits that had valid variances in both matrices. We did the analysis on this pair of sub-matrices.

The next question is how to compare the two determinants. It turns out that there is a good test, the multivariate Bartlett’s test, that can be used. Bartlett’s test has two problems. First, it is very sensitive to the assumption of multivariate normality, and second, it is not structured for use with MANOVA-derived data. Still, we can use the basic statistic and combine it with the bootstrap data, and it works perfectly well. One of the very useful features of bootstrap tests is that they make no assumptions about the distribution of the data. Also, if properly designed, they work well with virtually any experimental design. Interestingly, since the standard test was not developed for use with MANOVA, the parametric multivariate Bartlett’s test was way too optimistic, and the bootstrap ended up doing a much better job. A final modification is that we had an a priori interest in whether the derived genetic covariance matrix was significantly larger than that in the ancestral population. Thus, we multiplied the Bartlett’s statistic by 1 if the derived population was larger than the ancestral population and -1 if it was smaller, giving us the signed bootstrap Bartlett’s test that allowed for both one-tailed and two-tailed tests.

**Shape:** For shape we decided to go with a test similar to the Mantel test. Many rightly complain about the classic Mantel test for numerous reasons. However, the basic idea is useful. The idea is that you calculate a correlation between the pairwise elements of the two matrices. That is, you pair up the elements of the two matrices and simply calculate the correlation among them. The problems with the traditional Mantel test for this application are threefold. First, the traditional Mantel test has a null hypothesis that the two matrices are independent, whereas our null hypothesis is that the two matrices are identical. The bootstrap solves this by allowing us to generate a distribution of Mantel correlations among pairs of matrices that have a true correlation of 1.

Second, the Mantel test is meant to compare correlation matrices, which have 1s on the diagonal, whereas this is not true for a covariance matrix. In the classic Mantel test this diagonal is excluded, whereas in ours it is not. Third, all of the elements of a correlation matrix are between -1 and 1, whereas covariance matrices can have vastly different variances for different traits, which can inappropriately skew the results. We solved this last problem by standardizing the elements to the average of the diagonals of the two matrices. The final equation is somewhat ugly, so I refer you to the paper if you want the details. The results indicate that females, but not males, show a significant change in the shape of their covariance matrix. That is, the population bottleneck significantly changed some of the variances and covariances among traits in the two populations, even though it did not change the total amount of additive genetic variance.
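A minimal sketch of the shape statistic just described (the paper’s actual equation differs in detail, and `shape_correlation` is a name I made up):

```python
import numpy as np

def shape_correlation(G1, G2):
    """Mantel-style element correlation between two covariance matrices.

    Pair up the elements of the two matrices, diagonal included (unlike
    the classic Mantel test), scale by the average of the two diagonals
    so traits with large variances do not dominate, and correlate the
    paired elements.
    """
    scale = (np.trace(G1) + np.trace(G2)) / (G1.shape[0] + G2.shape[0])
    idx = np.triu_indices_from(G1)   # upper triangle plus diagonal
    return np.corrcoef((G1 / scale)[idx], (G2 / scale)[idx])[0, 1]

G = np.array([[2.0, 1.0], [1.0, 3.0]])
r_same = shape_correlation(G, G)     # identical matrices
```

Against a bootstrap distribution of such correlations between matrices whose true correlation is 1, an observed value falling below the distribution indicates a significant change in shape.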

So, the point of this is simply to suggest one possible way to compare genetic covariance matrices. One of the reasons I really enjoy multivariate math (I can’t believe I said that) is that very simple ideas, like the variance of a trait, suddenly become so much richer, and can change in so many more ways, as we move into a multivariate setting. Obviously simple multivariate math is a pale comparison with the real world, but this only serves to make the diversity of the real world even more easily understood.

The other reason I wanted to put this up is that I have an R program that does these analyses, along with random skewers and selection skewers, which I will talk about next time. I am not an R developer, so I would be more than pleased if somebody were to take this script and turn it into something that didn’t actually need to be adjusted for the needs of every data set. If you do choose to finish developing this, please let me know!

**Here is the program:**

Writeup on how to use the program: Matrix comparison writeup

The program:Bootstrap command

Relevant example data sets:


*(from http://kittenofcupcakes.tumblr.com/post/49802470800)*

Instead, what I want to do is go deeper down the rabbit hole of what an individual is. In a previous blog post I argued that the individual should be the level at which we assign fitness. This is fine as far as it goes, but consider the situation in which we assign fitness at the level of the organism. Well, organisms are not really one species. In fact, in humans, non-human cells are thought to outnumber human cells ten to one, although they are probably less than 3% of our body mass (http://www.nih.gov/news/health/jun2012/nhgri-13.htm). We also know that the microbiome has significant effects on health, ranging from effects on the ability of organisms to digest food to effects on the nervous system.

*Down the rabbit hole of individuality (http://mag.splashnology.com/article/alice-in-wonderland-showcase-of-impressive-cosplay-photography/7324/)*

This has a couple of interesting consequences. First off, when we assign fitness at the level of the organism, we are in fact assigning fitness to a community, which includes the host metazoan and its microbiome. The first rather fun implication is that, except in the enlightened sense of the relativistic concept of individuality I discussed two weeks ago, there is no such thing as individual selection. “Individual selection” in the classic sense is in fact community selection.

This is not a problem for selection per se. We can assign fitness at whatever level we want. If we want to assign it at the level of the community formerly known as an organism, then that is just fine. Selection is an ecological process, which means that for simply analyzing selection we don’t actually need to know anything about the heritability. Of course, that is a bit unsatisfying, since we would like to know the response to selection, and for that we need to know the heritability. The problem is that with over 90% of the cells in a human being non-human, the vast majority (some estimates as high as 99%) (https://www.microbemagazine.org/index.php?option=com_content&view=article&id=3452:major-host-health-effects-ascribed-to-gut-microbiome&catid=750&Itemid=969) of the active genes in our bodies are also non-human. So, again we are confronted with a potentially serious problem with the concept of heritability. This actually poses two problems. First, we need an expanded view of realized heritability that recognizes that organisms are communities. This is not really a problem for the phenotypic perspective, which defines heritability in terms of the phenotypic resemblance between parents and offspring. But it does raise the interesting possibility that many of the genes that contribute to heritability may in fact be bacterial genes. This further raises the interesting point that the heritability of an organism will now be a function of the ecology of the microbiome. If you get the microbiome from your parents, undoubtedly true for a portion of the microbiome, then it is potentially heritable. The particular case in point here would be bacteria such as Wolbachia, an intracellular symbiont of arthropods that is maternally inherited. Among-host variation in these bacteria would show up as heritable variance in the population.

On the other hand, if the microbiome is picked up randomly from the environment, then it may not be heritable. Even here there is a subtlety, since the microbiome may be predictably acquired from the larger population, and thus heritable at a higher level. Consider termites. When a young termite first ecloses to become an adult it lacks its gut fauna, which it obtains by trophallaxis from another colony member. Basically, an older individual regurgitates and the newly emerged adult eats the symbiont-containing regurgitate. What this means is that members of the same colony will all get similar gut symbionts. Thus, although in termites the gut fauna may not be heritable in the classic sense, it may nevertheless be heritable at the colony level.

*Trophallaxis in termites transfers gut bacteria among colony members, possibly making gut-fauna-derived traits heritable at the colony level. (http://carronleesgspestmanagement.blogspot.com.br/2011/04/how-does-baiting-system-work-on-ants.html)*

The bottom line for all of this is that yes, in my earlier discussion I suggested that in many situations the organism would be a reasonable unit to call the individual. This week I am saying that the organism is not a single-species entity, but must be considered a community. I am also arguing that if we use the phenotypic perspective – heritability as the resemblance between parents and offspring – then the concept of inheritance can potentially become quite complex, with some of the gut fauna being considered “environment” because it is randomly acquired throughout the life of the organism, while other parts need to be considered heritable variation. Even here we need to distinguish between parts of the microbiome that are inherited due to close association with the parents, and parts of the microbiome that are inherited at a higher level due to within-group sharing of food, or other processes.

It is interesting to compare this to my earlier post on heritability in the absence of genetic variation. What this suggests is that we are naïve to think that heritability can be consistently and logically reduced to nuclear Mendelian genes in the host species of the community that we call an organism.

“Please could I just clarify something you say in this piece, as it relates to something I’m working on at the moment. You say:

‘From the perspective of individuality, what this does is that it lowers the heritability at the cellular level to nearly zero.’

This confused me, since the heritability at the cell level via mitosis is nearly one, not nearly zero, isn’t it? If we take h^2 = Cov(zi,zi’)/Var(zi), where zi is parent cell phenotype, and zi’ is offspring cell phenotype (we have regressed parent phenotype against offspring phenotype and taken the gradient of the regression line to be the heritability). Assuming high fidelity, we have Cov(zi,zi’) approx = Cov(zi,zi) = Var(zi). Putting this back in we get h^2 = Var(zi)/Var(zi) = 1, and thus h = 1.”

The relevant post is here. Mr. Bentley raises a very good point. In that post I argue that because within an organism cells divide by mitosis, there is essentially no genetic variation, and as a result, barring somatic mutations, the heritability within organisms is very near to zero. Michael argues that in fact the somatic cells have very high phenotypic fidelity when they divide. Thus, liver cells divide to make liver cells, and skin cells divide to make skin cells. By his reckoning the heritability should be very close to one.
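Michael’s algebra is easy to check numerically. A toy sketch (the 0.05 noise level is invented) of offspring-on-parent regression under near-faithful mitotic reproduction:

```python
import numpy as np

rng = np.random.default_rng(0)
parent = rng.normal(size=50_000)                      # parent cell phenotypes
offspring = parent + 0.05 * rng.normal(size=50_000)   # near-faithful copies

# realized heritability as the slope of offspring on parent
h2 = np.cov(parent, offspring)[0, 1] / np.var(parent, ddof=1)
```

With near-faithful transmission the slope, and hence the realized heritability in this phenotypic sense, sits very close to one, exactly as he argues.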

So, how should this be handled? First off, I would argue that Michael is right, and I am wrong. Michael used an appropriate definition of “realized” heritability based on a phenotypic perspective, whereas, old fogey that I am, I was somehow stuck trying to force Fisher’s model where it didn’t belong. Nevertheless, I do stand by my point that mitosis serves as a mechanism that minimizes the response to selection within organisms; I just should have been more careful when I called it “heritability”.

What this says is that we need to more carefully define heritability and the additive variance. Fisher first defined additive genetic variance, and, to paraphrase something Walt Ewens has said, Fisher defined it, and thus we need to accept that his definition is correct. Fisher’s definition of the additive genetic variance is the sum of the covariances between average effects and average excesses; however, as Falconer has pointed out, this definition is useless in the real world (Falconer 1985 Genet. Res. Camb. 46:337). Thus, we are stuck with making up a useful definition. Falconer provides an alternative statistical definition of additive genetic variance, for example as the variance due to regression of offspring on midparents (I don’t have his book with me in Brazil, so I am not sure of his exact definition). However, I would call this the “effective” additive genetic variance, since in real populations it will not exactly equal Fisher’s definition. It is also relevant to mention that Falconer (in Introduction to Quantitative Genetics 1989) nicely demonstrates that the additive genetic variance is the genetic covariance between parents and offspring.

The way I have been thinking about phenotypic evolution is as a superset of quantitative genetics. Fundamentally, quantitative genetics is a phenotypic approach. The breeder’s equation demonstrates this:

R = h^{2}S

Or in words, the response to selection is equal to the heritability times the selection differential. It is a phenotypic model because the heritability basically serves as the transition equation that converts the fitness-weighted distribution of phenotypes in the parental generation (S) into the distribution of phenotypes in the next generation (R). What the phenotypic perspective does is to argue that this is a fundamentally correct perspective for thinking about evolution, but that a transition equation that is a single constant and (at least theoretically) includes only genetic effects is overly simplistic. Relevant to my discussion with Michael, quantitative genetics is also overly simplistic because it only applies to sexually reproducing organisms. Aside: It is hard to fault Fisher for this. His primary goals were to describe the genetics of humans and mammalian livestock, and to provide tools for animal breeders. His efforts were spectacularly successful, to the point that Fisher can be called the central figure in the new synthesis, and one could argue that he basically single-handedly built its foundation.
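As a transition rule the breeder’s equation is almost trivially simple, which is rather the point. A sketch with invented numbers:

```python
# Numbers are invented for illustration.
h2 = 0.4    # heritability: slope of offspring on midparent
S = 2.5     # selection differential: mean of selected parents minus population mean
R = h2 * S  # response: shift in the offspring-generation mean
```

All of the biology of transmission is packed into the single constant h2, which is exactly what the phenotypic perspective argues is overly simplistic.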

So, the bottom line is that we should stick with something similar to Falconer’s practical definition: The additive variance is the covariance between parents and offspring. Note that I did not say the “additive genetic variance”, and this is an important distinction. I suggest we should define the additive variance as the covariance between parents and offspring without regard to the cause of that covariance.

Of course, in many situations that is not satisfying. In the discussion between Michael and me, both of our perspectives were important. He was exactly right that there is a very high covariance between parent and offspring cells in metazoans, but I was also correct that there are essentially no genetic differences among cells in metazoans. So, what is causing the high covariance that Michael identified? I don’t know, but it is not genetic. More likely it is due to two causes. First, there are epigenetic changes – silencing of some genes, and overexpression of others – that give a particular cell type its phenotype, and, importantly, these epigenetic changes are preserved during mitosis. Second, there is a lot of cell-cell interaction that causes offspring cells to resemble parental cells due to the “developmental ecological” or “positional” situation a cell finds itself in. In development there are numerous examples of this sort of induction. It may well be that one reason the daughter cells of liver cells are also liver cells is that they are in the liver, and are induced to be liver cells because of that.

I suggest the correct thing to do is to accept the general definition of additive variance, but then allow this to be broken up into components. That is, the additive variance could be broken up into additive “genetic” variance, additive “epigenetic” variance, additive “positional” variance, and so on. Thus, we should accept the single obvious definition of additive variance as the covariance between parents and offspring, but then use some form of least-squares partitioning to divide it into subcomponents.

Of course there is a problem here: how do we do that division? Again, I suggest that we follow Fisher’s lead. What is needed is an appropriate modification of parent-offspring regression and half-sib breeding designs. For example, we might examine the additive variance in the natural setting to get the total additive variance. Second, we might look at the variance among cell lineages to get the additive genetic variance, and the variance within cell lineages to get the additive non-genetic variance. By transplanting cells to other locations we could get the additive physiological-ecological variance, and by using molecular methods to remove the epigenetic modifications we could get an estimate of the additive epigenetic variance.

Whatever the actual experimental protocol that ends up being appropriate, what we want is:

Cov(Parent, Offspring) = Cov_{genetic}(Parent, Offspring) + Cov_{epigenetic}(Parent, Offspring) + Cov_{positional}(Parent, Offspring) + . . .

There are, of course, two major problems with this. The first is practical. If you decide to do that experiment, well, good luck. At least at first blush it looks like it would be a horrific amount of work that would simply not be worth the information obtained. The second is statistical in nature. I am arguing for a Fisherian least-squares partitioning into the subcomponents of the additive variance. The good news is that, if done properly, such partitionings are orthogonal, so the components add up to the total additive variance. The bad news is that such partitionings are context dependent; thus, the partitioning into subcomponents of the additive variance would change as conditions change. Nevertheless, it seems to me that this is a good way to think about simple linear transition equations from the phenotypic perspective. It is also a way to keep the excellent framework that Fisher provided, while allowing it to be conceptually expanded to other systems of reproduction, and to non-genetic forms of inheritance.
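The additivity claim is easy to illustrate by simulation. In this invented toy model each component of the parental phenotype is transmitted to the offspring with its own fidelity, and by the bilinearity of covariance the component covariances sum exactly to the total parent-offspring covariance:

```python
import numpy as np

# Illustrative simulation (all numbers invented): parent phenotype is a
# sum of genetic, epigenetic, and positional components.
rng = np.random.default_rng(1)
n = 100_000
g = rng.normal(size=n)   # genetic component, transmitted faithfully
e = rng.normal(size=n)   # epigenetic component, transmitted with fidelity 0.5
p = rng.normal(size=n)   # positional component, transmitted with fidelity 0.2

parent = g + e + p
offspring = g + 0.5 * e + 0.2 * p + rng.normal(size=n)  # plus noise

total = np.cov(parent, offspring)[0, 1]
parts = (np.cov(g, offspring)[0, 1]
         + np.cov(e, offspring)[0, 1]
         + np.cov(p, offspring)[0, 1])
# total and parts agree: the component covariances partition the whole
```

The context dependence shows up here too: change the fidelities (the 0.5 and 0.2), and the relative sizes of the subcomponents change even though the partitioning identity still holds.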


*In case you were wondering where I was, I was working hard in the Amazonian flooded forest. (I was at the Uakari lodge. I recommend it if you are ever in the Manaus area. http://www.pousadamulticultura.com/mamiraua-reserve )*

So here is the basic issue: Biological things tend to be organized hierarchically. This need not be the case, but it often is. Thus, we have cells, which group together, possibly with other species, to become organisms – yes, it is probably incorrect to think of “humans” as a single species – which group together to become populations or groups, which finally group together to become communities.

Using the most basic definition of evolution – the change in the distribution of a set due to the gain or loss of members of that set – it should be clear that evolution can take place at any of these levels. By the way, I use this very klutzy definition of evolution here to avoid using terms like “individual” and “population”. Normally this is not a problem, but in this particular circumstance we need to be very careful. The point is that change occurs, and it can potentially be defined as evolution. However, at least for selection, it can only be defined as evolution by natural selection if there is variation in fitness. Here is the problem: contextual analysis, and I would argue human understanding, really only allows fitness to be defined at a single level.

Herein lies the issue. We can choose to define fitness at any level. Some levels may be better choices than others, but ultimately the level at which we assign fitness is an arbitrary construct of the investigator. I would argue that once we have assigned fitness at a particular level, that becomes the “member” of the set in our definition of evolution. In other words, when we define fitness as occurring at a particular level, we are in fact defining the individual in the less klutzy definition of evolution: change in the distribution of a population due to the gain or loss of individuals. Even though we really only need to define fitness with regard to selection and adaptation, it makes no sense to have concepts of individuality for mutation, migration, and drift that are different from our concept for selection. Thus, I would argue that logically the level at which we define fitness defines individuality for all evolutionary forces acting on that trait.

Of course, the level at which we define fitness does not alter the changes that occur in the organism. The changes that occur are independent of human observation. What DOES change, however, is our interpretation of those changes. Only changes at or above the level of individuality – the level at which we assign fitness – can be interpreted in an evolutionary framework. Certainly for adaptation, we can only interpret changes as being due to natural selection if there is variation in fitness, and there is no variation in fitness below the level at which we assign fitness. So, what we do is call the changes that occur below the level of individuality something else. For example, we typically assign fitness at the level of the organism, and changes within the organism are called “development”. However, were we to choose to assign fitness at the level of the cell, we could reasonably call these changes evolution, and view differential cell division and mortality as selection.

This idea of the relativity of individuality, and the role of the observer in interpreting the nature of changes, is at the heart of the problem that people have with Group Selection 1 and Group Selection 2. This is also why I am not a big fan of the GS1/GS2 terminology. Basically, I think we would be better served by stating the level at which we define fitness. Thus, we might say “In this study we define the organism to be the individual”, or “we assigned fitness at the level of the colony in this study”. I think this is clearer and removes a lot of ambiguity. For example, consider a hypothetical study of Tasmanian Devil Face Cancer. This has three or more levels at which we could assign fitness, including the cell, the organism, the population, and potentially the species. Defining the level of the individual has the flexibility to handle this; the GS1/GS2 terminology just gets difficult (if we assign fitness at the level of the species, is that GS4?).

The problem, of course, is that there is a desire to have the “individual” be a natural unit, and to have “development” be qualitatively different from “evolution”. The idea that the individual is a construct of the observer is really not compatible with these desires. That said, I am quite comfortable with the arbitrariness of the level at which we assign fitness. I see no other way that we can have transitions between levels: there really is no qualitative difference between the most organized colonies and the least organized organisms (compare Volvox to Trichoplax). It is also the only way we can study cancer as evolution, and yet not have to assign fitness at the level of the cell when we are studying, say, foraging behavior. Nevertheless, I understand that many will find this deeply disturbing, and many will reject this relativity of individuality as a viable world view. That said, I think if you can get your head around it, it will help you in understanding multilevel selection.

*Volvox (left) is considered to be a colonial protist, whereas Trichoplax is considered to be a single organism and an animal. There are differences in their structure, but the differences are not great considering that one is a colony of cells and the other is a multicellular organism. (Volvox: http://www.dr-ralf-wagner.de/Bilder/Volvox-aureus-DF.jpg, Trichoplax: http://www.marinespecies.org/placozoa/ )*

I am out of space, but as I mentioned above, although the level at which we assign fitness is, in my view, arbitrary, there are nevertheless better and worse levels that we can choose. For example, often there is a reasonable a priori choice. Higher organisms are made up of trillions of cells. It would be a ridiculous, and probably impossible, task to assign fitness at the level of the cell if we are studying morphology or behavior at the whole-organism level. Other times, contextual analysis can be used to identify the lowest level at which selection on a particular trait is acting, and that level becomes a reasonable one for assigning fitness. Still other times there may be adaptations (policing, mitosis) that minimize adaptation by natural selection at lower levels. In this case it makes sense to assign fitness at the lowest level at which a response to selection is likely to occur. Finally, at the beginning, I mentioned that MLS works fine even if groups are not nested. However, any study with non-nested groups will only work if fitness is assigned at a level that is fully encompassed within all higher groups. For example, in a continuous population of plants every organism (ramet?) can be considered to be at the center of its own neighborhood. Obviously these neighborhoods overlap. Nevertheless, MLS analysis will work as long as fitness is assigned at the level of the organism instead of the neighborhood.

First, Gardner talks of collective fitness 1 vs. collective fitness 2. In doing this he continues and deepens the confusion he started when he developed the model. As I make clear in my chapter on defining the individual (Goodnight 2013, Chap. 2 in “Defining the individual” Bouchard & Hueneman eds), whether you are talking about group selection 1 or group selection 2, or for that matter group selection 10 (there is no such thing), depends entirely on the level at which you, the investigator, assign fitness. In the example Gardner gives, Group A has 12 daughters in 4 groups of 3, whereas Group B has 12 daughters in 3 groups of 4. In this example, if you assign fitness at the level of the individual organism, and presuming no other variation, the **individuals** in groups A and B have equal fitness. If you assign fitness at the level of the group, **Group** A has higher fitness than **Group** B. The difference, of course, is that in the second instance you have a within-group “developmental” process that results in different group sizes; however, since fitness is assigned at the level of the group, you cannot call it selection or even evolution. The problem is that with fitness assigned at the level of the group there can be no variation in fitness within groups, and thus no evolution. This leaves the question of whether it is better to assign fitness at the level of the group or the level of the organism. This is an issue that I address in my chapter. For fairly deep philosophical reasons it basically cannot be resolved, but as long as we are clear on where we assign fitness it is not a problem. Gardner is right that this was an important issue, but it is not a conundrum. It is one that has been resolved, and it no longer presents a serious conceptual issue.

However, what I find most disturbing in this section is so jaw-droppingly silly it causes me to question whether the paper is supposed to be satire. To quote Gardner:

“Cancer is often conceptualized as involving a tension between different levels of selection, with cancerous tissues achieving higher reproductive success at a within-organism level and cancerous individuals suffering lower reproductive success at a between-organism level. However, somatic tissues – including cancerous ones – do not generally contribute genes to distant future generations, on account of the demise of their lineages upon the death of the organism. Consequently, cancerous tissues do not have reproductive value, and so their proliferation within the organism cannot correspond to selection in the strict sense of the genetical theory.” (page 6, citations removed)

*Seriously? You actually believe that? ( from http://www.calgaryunitedway.org/socialvoice/wp-content/uploads/2012/10/jaw-drop.jpg )*

This is basic introductory evolution material. Here is the Intro Bio version: Lewontin, in his article in the Annual Review of Ecology and Systematics (1970, Vol. 1, page 1), tells us that three things are necessary and sufficient for evolution by natural selection to occur. These are:

- There must be phenotypic variation.
- There must be differential fitness of different phenotypes.
- The phenotypes must be heritable.

To remind you, necessary and sufficient means that you need all three, and if you have all three, evolution by natural selection **will** occur. So, let’s think about cancer. **(1)** Is there phenotypic variation? Yes. Cancer cells are different from normal cells in many respects, ranging from physical appearance to changes in the regulation of the cell cycle. **(2)** Are these phenotypic differences associated with fitness? Yes. For example, dysregulation of the cell cycle causes cancer cells to divide more rapidly than normal cells. Cell division is reproduction. Reproduction is fitness. Yes, there is variation in fitness associated with phenotype. **(3)** Are these variations in fitness heritable? Yes. Most, if not all, cancers are due to at least one, and usually five or more, mutations. These are genetic mutations that are passed on to daughter cells during cell division. Thus, we see that in an organism with cancer we have phenotypic variation, variation associated with fitness, and fitness that is heritable. Either Lewontin is right and Gardner is wrong, or vice versa. I am going with Lewontin being right. Yes, cancer’s “. . . proliferation within the organism” most certainly DOES “correspond to selection.”
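A toy calculation (division rates invented) makes the point concrete: a heritably faster-dividing cell lineage increases in frequency within the organism, which is selection by Lewontin’s criteria whatever the lineage’s long-term prospects:

```python
# Two heritable cell phenotypes with different division rates.
normal, cancerous = 1000.0, 1.0
initial_freq = cancerous / (normal + cancerous)
for _ in range(20):           # twenty rounds of cell division
    normal *= 1.10            # normal cells divide slowly
    cancerous *= 1.30         # heritable dysregulation: faster division
final_freq = cancerous / (normal + cancerous)
# the cancerous lineage rises in frequency: selection among cells
```

Nothing in this calculation refers to whether the lineage contributes genes to distant future generations; the within-organism frequency change is selection all the same.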

To see how silly Gardner’s stance is, consider the Wake Island rail, a cute flightless bird that did very well until World War II. On December 23^{rd}, 1941 the Japanese occupied Wake Island, and by the time they were expelled on September 4^{th}, 1945 the Wake Island rail was extinct. Apparently the Japanese ate them when they were placed under siege by the American military. Now the question: at some point it became safe to say that the rails did “not generally contribute genes to distant future generations” and thus that “. . . their proliferation . . . cannot correspond to selection . . .”. My question is: when should we consider differential survival and reproduction of Wake Island rails to no longer be selection? Was it selection in 1939, before the war? How about 1941, when the Japanese invaded? Or how about January of 1943, the likely year of their extinction? The ridiculousness of making this judgment should be obvious. Selection doesn’t see the future, and neither should we when we are identifying something as selection.

*At what point did differential survival and reproduction stop being selection for the Wake Island rail? (From http://www.extinct-website.com/extinct-website/product_info.php?products_id=409 )*

My goal in this is to make the important point that very smart people have thought very hard about evolution. It behooves us to know what the masters said. This does not mean reading every single paper that Lewontin ever published, but it does mean not making obvious errors in logic that have been resolved by people smarter than you and me. It also does not mean you can’t disagree with the masters. Science advances when old paradigms are overturned. But it does mean that if you are going to disagree with the canon you should know why you disagree, and be able to defend your position. Again, ignorance of the literature is no excuse.

With that lapse of good sense out of the way, and ignoring MLS 1 vs. MLS 2 – been there, done that, got the tee shirt – let’s move on to the units of selection. Basically, the first half of this section is uninterpretable gobbledygook that comes from trying to force Gardner’s class-structure model onto the Price equation. As I said earlier, his approach is rather klutzy, but it will work as long as there is no group selection. To add group selection you MUST turn to a multivariate approach, or make the assumption that everything is additive always and there are no interactions of any kind. In short, it simply does not work for multilevel selection in the real world. What caught my eye, however, was his example where a wasp lays two eggs, a male and a female, and males and females are reasonably being treated as different classes. He is stumped by how to use a multilevel selection approach to study this. It is actually dead easy. Each individual has a male trait or a female trait (depending on its sex) and one or more contextual traits. The contextual trait is some measure of the characteristics of the group. Note that there would be a separate phenotypic covariance matrix for males and females, but a single genetic covariance matrix for the population (Lande 1980 Evolution 34:292; Goodnight et al. 1992 Am. Nat. 140:743). That is, with contextual analysis, there is no problem.

So here is my opinion on this and I want to emphasize it is only my opinion. I think that Gardner has an agenda. I think that agenda is that he does not want multilevel selection to be seen as a valid research program. To this end he is willing to ignore an entire literature, to be apparently willfully ignorant of quantitative genetics, to ignore the writings of such luminaries as Richard Lewontin, and to choose not to see obvious solutions. The problem is that his agenda has clouded his vision, allowed him to use sloppy thinking and logic, and to write things that are regrettable, and frankly wrong. This does not advance science. It creates noise that interferes with people who are actually trying to understand nature. I hope I am wrong. Gardner is a good theoretician, and the world needs people like him. Hopefully this paper is simply the unfortunate type of mistake we all make, and he is really working to advance our understanding of science rather than undermine a field that he doesn’t understand.

He starts the model with a discussion of Fisher’s fundamental theorem, which I have already shown is not particularly complex. Then he goes on to expand this using Robertson’s (1968. In: Population Biology and Evolution, R.C. Lewontin, ed.) result that the change in a trait is equal to the covariance between the trait and relative fitness.

It is worth mentioning that, although it is usually presented the other way around, Fisher's fundamental theorem is actually a special case of the response to selection on any trait. To see this, just replace the trait, z, with relative fitness.

Next he goes on to express concern about selection in a class structured population. His approach actually works, as long as there is no multilevel selection. As I said last week, I think his approach is rather clumsy, and there is a much better way using standard quantitative genetic methods. So, my overall comment on that part of the paper is “meh”.

*Gardner’s approach to evolution in stage structured populations? “Meh” (From http://rubbercat.net/simpsons/news/2013/09/ )*

Now we get to the meat of the issue. He then goes on to develop his genetical theory of multilevel selection. First off, he develops his theory in terms of breeding values. This has a number of possible definitions. His definition is “. . . a weighted sum of the frequencies of the alleles that the individual carries, the weights being decided by linear regression analysis.” This is strangely worded, but basically correct. It also hides a HUGE problem that he is ignoring. To see this, consider a more standard definition of breeding value: the sum of the average effects of the alleles that make up an individual. The average effect of an allele is basically the effect of that allele averaged across all possible genotypes. This works fine in Fisher's imaginary world of infinite population size and random interactions. It does not work well when populations are structured and interactions are not random. If you have multilevel selection, then you have population structure. If you have population structure, then average effects, and thus breeding values, are not constant.

Here is why this is so insidious: the assumption of constant breeding values appears reasonable, and it is consistent with all of the classic models. Yet it is the central feature of his model – that there is population structure – that invalidates the assumption of constant breeding values. The assumption seems so obviously true that Gardner did not consider the possibility that breeding values might not stay constant, although quite entertainingly he did very clearly, if unknowingly, explain why they wouldn't. On page 3 he writes:

“Fitness may be decomposed into its genetical and environmental components, that is v_i = g_i + e_i, where e_i captures nonadditive genotypic effects (such as dominance, epistasis, synergy and frequency dependence) as well as other more obviously environmental effects.”

Well, no, that is not true. That partitioning is done by least squares, and epistasis and dominance will shift between components as we move from group to group. Note, however, that even here he appears completely unaware that when genes interact it might have evolutionary implications. And that is where Gardner falls short: his model requires that breeding values stay constant. They do not. The correct subscripting should be g_{ij}, that is, the breeding value of the ith individual in the jth deme. Experimental (De Brito et al. 2005. Evolution 59:2333) and theoretical work shows that g_{ij} will vary in a way that is not predictable from either the individual or the group measured in isolation. However, I am a generous man, so let's assume breeding values are constants for the moment, and just keep in the back of our heads that this is a fatal flaw in the underlying assumptions of his model.
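To make this concrete, here is a minimal one-locus sketch (my own toy numbers, using Falconer's textbook formula for the average effect, α = a + d(q − p), for genotypic values −a, d, +a): as soon as there is any dominance, the average effect of an allele, and hence the breeding value built from it, changes with the local allele frequency, so demes that differ in allele frequency assign different breeding values to the very same genotype.

```python
# Toy illustration (not Gardner's model): with dominance, the average
# effect of an allele -- and therefore breeding values -- depends on the
# local allele frequency, which is why g needs a deme subscript j.
def average_effect(a, d, p):
    """Falconer's alpha = a + d(q - p) for genotypic values -a, d, +a."""
    q = 1.0 - p
    return a + d * (q - p)

a, d = 1.0, 0.8              # additive and dominance deviations (arbitrary)
for p in (0.1, 0.5, 0.9):    # three demes with different allele frequencies
    print(f"p = {p:.1f}: average effect = {average_effect(a, d, p):+.2f}")
```

With d = 0, the loop would print the same number three times; with any dominance at all, the three demes disagree about the average effect of the allele.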

He then goes on to use the two-level Price equation to develop his “genetical model of multilevel selection”. OK, I hate his notation. Here it is in a form that doesn't hurt my head:

Δḡ = Cov(w̄_j, ḡ_j) + E_j[Cov(w_ij, g_ij)]

where

Δḡ is the change in the mean breeding value due to selection,

Cov(w̄_j, ḡ_j) is the between-populations covariance between relative fitness and breeding value (and yes, I refuse to use v for relative fitness), and

E_j[Cov(w_ij, g_ij)] is the average covariance between relative fitness and breeding value within populations.
So what is wrong with this?

Well, for starters, it's been published before. Wade, in his paper “Hard Selection, Soft Selection, Kin Selection, and Group Selection” (1985. Am. Nat. 125:61), develops a model which has, for the kth locus, an equation of the form:

Δq̄_k ∝ E_j[Cov(w_ij, q_ijk)] + Cov(w̄_j, q̄_jk)

I won't burden you with all of the details of what the symbols mean, except to say that the first term on the right-hand side is the mean within-population covariance, and the second term is the among-populations covariance. I should also say that if you sum over the K loci, the result is the breeding value. In other words, with slightly different notation it is exactly the same equation that Gardner uses. One would think a proper citation would be in order.

The nice thing about Wade's Price partitioning having been published 30 years ago is that it has been around long enough that we have known for 20 years that it doesn't work, and we know why. As long ago as the 1990s I was talking to Steve Frank about this (I am sure he doesn't remember, so Steve, if you are reading this, tell me if I am wrong), and he told me that he was well aware of the partitioning, but that he never called the among-group covariance group selection. I also know that Mike Wade, who originally published the Price covariance model 30 years ago, has come to realize that the Price equation is inadequate.

What is wrong with the Price equation is actually quite simple, and is really the same as Williams' (1966, “Adaptation and Natural Selection”) famous distinction between a “fleet herd of deer” and a “herd of fleet deer”. The problem is that if there is only selection at the individual level – say the slowest deer get eaten – then there will be some herds that by chance have a large proportion of fast deer. The Price partitioning will identify this variation in group composition as a positive covariance between group mean fitness and group mean phenotype; however, it will be entirely due to individual selection and the fact that there is variation among groups in the proportion of fleet deer. In mathematical terms, we can divide the Price covariance at the group level into a partial covariance between group mean fitness and group mean phenotype independent of individual level effects, plus a residual covariance between group mean fitness and group mean phenotype that is caused by individual fitnesses and phenotypes. Only the partial covariance holding individual effects constant should be considered “group selection”; the other portion is change due to selection at the individual level.

The Price equation cannot make this separation. It should come as no surprise that this partitioning is best done using contextual analysis. You can work out the math yourself if you want. The equations you need are in Goodnight et al. (1992 Am. Nat. 140:743).
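Here is a toy simulation of the fleet deer (my own made-up numbers; the speeds and the fitness function are arbitrary). Fitness depends only on individual speed, yet the between-herd Price covariance comes out positive; a contextual-analysis style regression of fitness on individual speed and herd mean speed correctly assigns the herd-mean term a coefficient of zero.

```python
import random

# Williams' "fleet herd of deer": fitness is a function of individual
# speed ONLY (w = 1 + 0.5*z), so there is no group selection by
# construction. The Price partition reports a positive between-herd
# covariance anyway, purely from chance variation in herd composition.
random.seed(1)

def cov(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

herds = [[random.gauss(0, 1) for _ in range(5)] for _ in range(20)]  # speeds
fits = [[1 + 0.5 * z for z in herd] for herd in herds]  # individual selection only

zbar = [sum(h) / len(h) for h in herds]
wbar = [sum(f) / len(f) for f in fits]
print("between-herd Price covariance:", cov(wbar, zbar))  # positive anyway

# Contextual analysis: regress fitness on individual speed z AND the
# herd mean (each deer's "contextual trait"). The herd-mean partial
# coefficient is zero, correctly reporting no group selection.
z = [x for h in herds for x in h]
c = [zb for h, zb in zip(herds, zbar) for _ in h]  # each deer's herd mean
w = [x for f in fits for x in f]

det = cov(z, z) * cov(c, c) - cov(z, c) ** 2
beta_ind = (cov(c, c) * cov(z, w) - cov(z, c) * cov(c, w)) / det
beta_grp = (cov(z, z) * cov(c, w) - cov(z, c) * cov(z, w)) / det
print("individual coefficient:", beta_ind)   # recovers 0.5
print("herd-mean coefficient: ", beta_grp)   # ~0: no group selection
```

The Price equation sees only the first number and calls it group selection; the partial regression sees through it.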

However, there is a much more serious issue than something so minor as the model being fundamentally flawed at this high level. This is the problem I mentioned before: he is partitioning breeding values. In an additive world this should work; however, if there is one lesson that comes out of the experimental group selection literature, it is that it does not work in the real world (Goodnight and Stevens 1997. Am. Nat. 150:S59). This is an important point I have made in the past: when theory and experiment disagree, the theory is wrong.

Indeed, within Fisher's additive world there is no theoretical justification for my saying the partitioning is wrong. The reason I know that you can't do it is because I have done and read the experiments (e.g., Goodnight 1990 Evolution 44:1614 & 44:1625). The problem is that when individuals interact, their interactions affect the phenotype. While this may not change breeding values at the individual level, it does change them at the group level. And this is exactly what we have found: group selection experiments work way too well. When we have done experiments where the causes can be teased apart, we find that the reason group selection works so well is that it can act on the interactions among individuals. In other words, interactions among individuals become part of the breeding value at the group level. The Price partitioning assumes you are partitioning a constant; however, experiments show us that the breeding values at the group and individual levels are not the same thing.
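A minimal numerical sketch of that last point (a simplified Bijma-style direct/social effects model with made-up numbers, not data from any experiment): each individual has a direct effect on its own phenotype and a social effect on each of its groupmates. The group mean phenotype is fully accounted for by the total breeding values, direct plus social, while the direct breeding values alone, which are all that individual selection can see, miss the social component entirely.

```python
# Each of n individuals has a direct genetic effect d_i on its own
# phenotype and an indirect (social) effect s_i on each of its n-1
# groupmates. All numbers are arbitrary.
n = 4
d = [1.0, 2.0, 3.0, 4.0]     # direct genetic effects
s = [0.5, -0.2, 0.1, 0.4]    # indirect genetic effects on groupmates

# phenotype of individual i: own direct effect plus groupmates' social effects
z = [d[i] + sum(s[k] for k in range(n) if k != i) for i in range(n)]

# Total breeding value (group level): d_i + (n-1)*s_i, a la Bijma.
total_bv = [d[i] + (n - 1) * s[i] for i in range(n)]

print("group mean phenotype:", sum(z) / n)         # matches mean total BV
print("mean direct BV:      ", sum(d) / n)         # what individual selection sees
print("mean total BV:       ", sum(total_bv) / n)  # what group selection can act on
```

The group mean phenotype and the mean total breeding value coincide exactly; the mean direct breeding value does not, and the gap is the IGE contribution that only group selection can exploit.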

In short, the only way to develop a “genetical theory of multilevel selection” is to go Full Monty multivariate quantitative genetics, and treat the group and individual traits as separate, but correlated, traits. Contextual analysis does half of this; what remains to be done is to work out why the G matrix is the way it is. Fortunately, Bijma and friends have gone a long way in this direction (e.g., Bijma et al. 2007. Genetics 175:277; Bijma 2014 Heredity 112:61).

*You have to go Full Monty multivariate quantitative genetics if you want to have a chance at developing a genetical theory of multilevel selection. (hope the beefcake doesn’t offend.) (http://www.theage.com.au/articles/2004/05/10/1084041332216.html?from=storyrhs)*

Thus we find that the basic model is flawed in several fundamental ways. First, it is a re-derivation that is, except for details of notation, identical to a model published by Wade in 1985 (it is clear he was unaware of Wade's work, so there is no possibility of plagiarism here). Second, Wade's model, and thus Gardner's model, has been shown to incorrectly partition group and individual selection. And third, based on experimental and theoretical work, it is clear that the basic underlying assumption of constant breeding values is fundamentally flawed. Efforts to partition breeding values into within- and among-group components using the Price equation are doomed to failure due to interactions among genes and individuals. Ignoring these issues, however, well, I guess the model is fine.

Next week will be the last on this paper. Basically last week we covered the introduction, this week was the model. Next week will be the discussion. If I can’t cover it in three weeks it ain’t getting covered.

Added in postscript: Andy: I feel badly about so thoroughly trashing this paper. If you would like to respond I will post your response with no edits other than a short paragraph at the beginning giving attribution. (you might want to wait until next week after I discuss the implications of your model).
