
tr2-delimitation in python3

I just want to announce that tr2-delimitation in Python 3 is now available from the following repository.

https://bitbucket.org/tfujisawa/tr2-delimitation-python3

It should return exactly the same results as the Python 2 version (I verified this with the data set of the Fujisawa et al. (2016) paper).

I will maintain the old Python 2 version, but new functions will be added to this Python 3 version.


Superimposing maps of two cities with ggmap

I recently moved from Kyoto to Paris for my second postdoc. While flat hunting in Paris, which has been very hard because of rising housing costs, I walked around several different parts of the city. One impression I had is that Paris is small. Once you walk across the city, you notice this quite easily.

My intuition told me that it is close in size to Kyoto, a medium-sized city in Japan. But is that true? I found a website comparing London and Paris, which says London is larger than Paris, but I could not find a comparison between Paris and Japanese cities.

So, I just checked by superimposing maps of the two cities. I used an R package called “ggmap”, a very nice geographic visualization package built on top of ggplot2.

I first downloaded maps of Paris and Kyoto as 15 km squares around the city centers. (I chose the Cite island for Paris and Shijo-Karasuma for Kyoto as the centers.) This process was a bit tricky because maps are defined by longitudinal and latitudinal ranges, and identical longitudinal ranges do not represent the same distance when the cities' latitudes differ. I used the great-circle distance equation to adjust for the effect of latitude on longitudinal distance and find appropriate ranges.
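
For reference, the adjustment used in the code below follows from the haversine form of the great-circle distance between two points on the same parallel at latitude \phi (R is the Earth's radius):

d = 2R\arcsin\left(\cos\phi \, \sin\frac{\Delta\lambda}{2}\right) \quad\Rightarrow\quad \Delta\lambda = 2\arcsin\left(\frac{\sin(d/2R)}{\cos\phi}\right)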

R <- 6371 #Earth's radius in km
#convert a distance d (km) into a longitudinal range (radians) at latitude phi (radians)
delta.lambda <- function(d, phi=0.0) {
	2*asin(sin(d/R/2)/cos(phi))
}

#create longitudinal and latitudinal ranges which define a square with side length d (km)
map.range <- function(d, lat=0.0, lon=0.0){
	dphi <- d/R*360/2/pi #latitudinal range in degrees
	dlambd <- delta.lambda(d, phi=lat*2*pi/360)*360/2/pi #longitudinal range in degrees

	lonrange <- lon + c(-dlambd/2, dlambd/2)
	latrange <- lat + c(-dphi/2, dphi/2)

	rang <- c(lonrange, latrange)
	names(rang) <- c("left", "right", "bottom", "top")
	return (rang)
}

In the code above, the map.range function returns the longitudinal and latitudinal ranges covering a square with side length d km.

Once you can define the range to draw, downloading and plotting maps is simple thanks to the ggmap functions.

library(ggmap)

d <- 15 #15km x 15km map
z <- 12 #zoom level
kyoto_center <- c(35.0, 135.76) #latitude and longitude of Shijo-Karasuma
kyoto <- map.range(d, lat=kyoto_center[1], lon=kyoto_center[2])
kyoto_map <- get_stamenmap(kyoto, zoom=z, maptype="toner")
ggmap(kyoto_map)

paris_center <- c(48.855, 2.34) #latitude and longitude of the Cite island
paris <- map.range(d, lat=paris_center[1], lon=paris_center[2])
paris_map <- get_stamenmap(paris, zoom=z, maptype="toner")
ggmap(paris_map)

The “get_stamenmap” function downloads a map from Stamen Maps. If you prefer Google Maps, use the “get_googlemap” function instead.

[Figure: maps of Kyoto and Paris plotted side by side]

The two maps plotted side by side tell me a lot. For example, the Peripherique motorway, which defines the boundary of Paris, encloses an area that looks quite similar in size to the city of Kyoto, which is surrounded by mountains.

Superimposing the two maps was another tricky point. ggmap has two ways to superimpose graphics, “inset_map” and “inset_raster”. “inset_raster” seems to be the appropriate function because it lets you set the coordinates of the region where the graphic is superimposed. One problem is that it does not let you change the alpha value, so I added alpha values to the raster graphics of the map myself and plotted it with inset_raster.

translucent.raster <- function(map){
	tmp <- as.matrix(map)
	tmp <- gsub("$", "55", tmp) #append an alpha value ("55") to every hex color
	tmp <- gsub("FFFFFF", "0000FF", tmp) #recolor white pixels to blue
	ras <- as.raster(tmp)
	attributes(ras) <- attributes(map) #restore the ggmap attributes (bounding box etc.)
	ras
}

ggmap(paris_map) + inset_raster(translucent.raster(kyoto_map), xmin=paris["left"], ymin=paris["bottom"], xmax=paris["right"], ymax=paris["top"])

[Figure: translucent map of Kyoto superimposed on the map of Paris]

This is a bit of an awkward way to do it, but the result looks OK. Now I can compare landmarks more precisely.

The Golden and Silver temples (indicated by small black arrows), which mark the north-west and north-east edges of the Kyoto area, fall almost exactly on the Peripherique. This means walking from the Cite island to the Peripherique is just like walking from Shijo-Karasuma to the Silver temple. Probably this similarity of distances made me feel that the two cities are similar in size.

Of course, Paris has a much larger suburban area, and the Paris metropolitan area is much larger than that of Kyoto. But my intuition looks more or less correct.

Multilocus delimitation with tr2: Guide tree approach

The second delimitation option of the “tr2” is the guide tree approach. A guide tree is a tree which specifies a hierarchical structure of species groupings. By using it, you can significantly reduce the number of possible delimitation hypotheses to search.
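
To get a feel for how much a guide tree shrinks the hypothesis space, here is a rough back-of-the-envelope sketch (my own illustration, not tr2 code): it compares the number of unconstrained groupings of n samples (the Bell number) with the number of groupings in which every species forms a clade of a fully balanced, rooted binary guide tree.

# Rough illustration (not tr2 code): how a guide tree shrinks the delimitation space.
# Unconstrained delimitations of n samples = Bell number B(n).
# Guide-tree-consistent delimitations (assuming each species must be a clade):
#   D(leaf) = 1,  D(node) = 1 + D(left child) * D(right child)

def bell(n):
    row = [1]
    for _ in range(n):  # build the Bell triangle row by row
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
    return row[0]

def balanced_tree_delimitations(n_tips):
    d = 1
    while n_tips > 1:  # apply the recursion level by level on a balanced tree
        d = 1 + d * d
        n_tips //= 2
    return d

for n in (4, 8, 16):
    print(n, bell(n), balanced_tree_delimitations(n))
# with 16 tips: 10480142147 unconstrained groupings vs only 677 consistent with the tree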

The tr2 implements an algorithm to find the best positions of the nodes that define species groups under a given guide tree. As the algorithm is reasonably fast, you can search over the whole range of delimitations, from one where each tip is a distinct species to one where all tips belong to a single species.

Currently, the acceptable number of taxa on a guide tree is around 100. If you exceed 200 taxa, the memory requirement usually becomes huge and a normal desktop computer cannot handle it. The current limit on the number of input trees, that is, the number of loci, is around 1000. Theoretically, it could be larger, but a numerical issue currently limits the number of loci you can use.

To run tr2 with a guide tree, you need a Newick-formatted guide tree file and a gene tree file, also in Newick format.

Only the first line of the guide tree file is used. In a standard analysis, the guide tree tips must include all taxa found in the gene trees. Guide trees can be built with any method, such as concatenated ML (e.g. RAxML) or coalescent-based species tree methods (e.g. ASTRAL). Most importantly, the guide tree must be properly rooted. Incorrect rooting often results in over-splitting.

The gene tree file must contain one tree per line. They must be rooted too. Missing taxa are allowed.

Once the two files are ready, the command below starts the search algorithm.

./run_tr2.py -g guide.tre -t genetree.tre

To test with the bundled example files, use the files in the “sim4sp” directory.

./run_tr2.py -g sim4sp/4sp.nex10.RTC.tre -t sim4sp/simulated.gene.trees.nex10.4sp.tre

After some intermediate outputs, you will see a tree with delimitation results and a table.

write: <stdout>
species sample
1    12.3
1    11.3
1    15.3
1    14.3
1    13.3
2    7.2
2    10.2

If the tree and the table are too large for your console screen, use the “-o” option to write them to files.

./run_tr2.py -o out -g sim4sp/4sp.nex10.RTC.tre -t sim4sp/simulated.gene.trees.nex10.4sp.tre

This command outputs a table and a tree to “out.table.txt” and “out.tre” respectively. You can inspect the results with any program you like.

Now, I use R with the “ape” package to check the delimitation result.

library(ape)
tr <- read.tree("./out.tre")
plot(tr)
nodelabels(text=substr(tr$node.label,1,6), bg="white")

These R commands plot the guide tree and the delimitation, as in the picture below.

[Figure: guide tree with node labels showing the delimitation result]

The numbers on the nodes indicate average differences of posterior probability scores. If a value is positive, the node has between-species branches; a negative value suggests the node is within a species. “nan” indicates that there are not enough samples to decide whether to split or merge at that node. The “*” signs show the best positions of delimitation. In this case, there are 4 putative species.

Doing MCMC with PyMC and making tr2 a bit Bayesian

When you do Bayesian inference, Markov chain Monte Carlo (MCMC) sampling is a common way to obtain the posterior probability of your model or parameters. So far, I have avoided using MCMC in my programs because I like simple and fast algorithms. But MCMC now looks unavoidable if I want to do more sophisticated Bayesian modeling.

There are many software platforms for MCMC with reasonable learning curves, like Stan & RStan or BUGS. These are definitely worth studying if you want to be a serious Bayesian.

But, for now, I chose a Python package called “PyMC”. This is because it is well integrated with the Python language, and there is a nice introductory textbook for learning the package. (It is freely available online, and hard copies are sold too.)

After reading some chapters of the book, I tried to solve a simple problem with PyMC, which is related to phylogenetic inference and species delimitation.

The problem is estimation of λ in the model below.

P(n_1,n_2,n_3|\lambda) = \frac{N!}{n_1!n_2!n_3!}(1-\frac{2}{3}e^{-\lambda})^{n_1}(\frac{1}{3}e^{-\lambda})^{N-n_1}

Why is this related to delimitation? Because it is a model of the distribution of tree topologies when you sample 3 individuals from 3 species.

If you sample 3 individuals from 3 different species and reconstruct gene trees, you are more likely to see one particular topology, the one matching the species tree, than the others. On the other hand, if you sample 3 individuals from 1 species, the 3 possible topologies are observed evenly. Detecting this skew or evenness of the topology distribution is the basic idea of tr2-delimitation.

N in the model above is the number of sampled loci, the sum of n1, n2 and n3, the counts of the three possible tree topologies of 3 samples. As λ (a branch length of the species tree) increases, you observe topology 1 (the species tree topology) more frequently. The distribution becomes more even as λ approaches zero. (For example, at λ = 0 all three topologies have probability 1/3, while at λ = 1 topology 1 is expected at roughly 75% of loci.)

With this model, the posterior distribution of λ when you observe topology counts [n1, n2, n3] is,

P(\lambda|n_1,n_2,n_3) = \frac{P(n_1,n_2,n_3|\lambda)\pi(\lambda)}{P(n_1,n_2,n_3)}

I tried to estimate this distribution by MCMC. Luckily, there is an analytical solution for the posterior distribution of λ, at least with a uniform prior, so I can check whether the MCMC actually works.
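
As a sketch of such a check (my own, not the closed-form solution), you can simply evaluate the likelihood times the same Uniform(0, 5) prior used in the MCMC code below on a grid of λ values and normalize numerically; that is enough to overlay a reference curve on the histogram of MCMC samples.

import numpy

def grid_posterior(n1, n2, n3, lo=0.0, hi=5.0, npoints=2000):
    # Posterior of lambda on a grid, assuming a Uniform(lo, hi) prior.
    # Likelihood: p1 = 1 - (2/3)e^-lam, p2 = p3 = (1/3)e^-lam.
    lam = numpy.linspace(lo, hi, npoints)
    loglik = n1*numpy.log(1 - 2*numpy.exp(-lam)/3) + (n2 + n3)*numpy.log(numpy.exp(-lam)/3)
    dens = numpy.exp(loglik - loglik.max())  # rescale to avoid underflow
    dens /= numpy.trapz(dens, lam)           # normalize to a probability density
    return lam, dens

lam, dens = grid_posterior(60, 21, 19)
print(lam[numpy.argmax(dens)])  # posterior mode, ~0.51 for these counts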

The code below simulates n1, n2 and n3 with a particular λ value, runs MCMC sampling to estimate λ's posterior from the simulated values, and then outputs 5000 thinned samples.

import sys
import numpy
import pymc

##simulated frequencies of triplets
l = 0.598   #true lambda = 0.598
#l = 0.162  #or lambda = 0.162
prob = [1-2*numpy.exp(-l)/3, numpy.exp(-l)/3, numpy.exp(-l)/3]
count_obs = numpy.random.multinomial(100, prob)
print(l, prob, count_obs)

##Bayesian model
lambd = pymc.Uniform("lambda", lower=0.0, upper=5.0) #Uniform prior for lambda

#A pymc function translating lambda to 3 probabilities of triplets
@pymc.deterministic
def triplet_prob(lambd=lambd):
  p1 = 1-2*numpy.exp(-lambd)/3
  p2 = p3 = numpy.exp(-lambd)/3
  return [p1, p2, p3]

#observed values were associated with the multinomial model
obs = pymc.Multinomial("obs", n=sum(count_obs), p=triplet_prob, observed=True, value=count_obs)

#run MCMC
model = pymc.Model([obs, triplet_prob, lambd])
mcmc = pymc.MCMC(model)
mcmc.sample(100000, burn=50000)

with open("trace.lambda.%0.3f.txt"%l, "w") as f:
    for i in mcmc.trace("lambda")[::10]:
        f.write("%f\n"%i)

PyMC has a quite elegant and intuitive way of abstracting Bayesian modelling.

You can easily define prior distributions of parameters and build models by combining them. The observed data are connected to the model with the “observed=True” option. Dependencies between variables can be traced through the “parents” and “children” attributes. Then, you can run MCMC just by calling mcmc.sample.
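
For example, with the model and the MCMC run above already executed, you can poke at the graph and summarize the samples like this (a small sketch; the attribute names are from the PyMC 2 API as I understand it):

print(obs.parents)     # parents of the multinomial node: n and the deterministic p
print(lambd.children)  # nodes that depend on lambda (here, triplet_prob)

trace = mcmc.trace("lambda")[:]                             # all retained samples
print(trace.mean(), numpy.percentile(trace, [2.5, 97.5]))   # mean and a 95% interval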

The posterior distribution of λ when the true λ = 0.598 looked like this. (In this case, the simulated counts were [n1,n2,n3] = [60,21,19].)

[Figure: MCMC trace (left) and histogram of posterior samples of λ (right), with the analytical solution overlaid, for λ = 0.598]

The left plot is the MCMC trace, and the right one is the histogram of MCMC samples. The curve superimposed on the histogram is the analytical solution. As you can see, the MCMC distribution fits the analytical solution surprisingly well. This is great. The 95% credible interval (CI) is (0.30, 0.78), so I am quite sure that λ is larger than zero and that the topology distribution is skewed.

When λ is smaller (λ = 0.162, [n1,n2,n3] = [44, 33, 23]), the estimate becomes more uncertain. The 95% CI is (0.035, 0.38), so it is a bit difficult to say that n1 is more frequent.

[Figure: MCMC trace and posterior histogram of λ for the smaller λ = 0.162]

I think this credible interval approach is OK for this simple case, where we just estimate λ. But if you seriously want to implement species delimitation, model comparison with reversible jump MCMC is required. Writing code for that looks much more laborious since PyMC does not have rjMCMC functions.

Regardless, I think PyMC is a good package which is easy to learn and has nice, readable documentation. It is probably a reasonable option if you want to integrate Bayesian inference into your Python code. Also, I think it is a handy tool for prototyping a Bayesian model before writing it from scratch in a faster language.

Multilocus delimitation with tr2: Model comparison

The “tr2” currently has two options for species delimitation. One is calculating posterior probability scores for user-specified delimitation hypotheses. The other is finding the best delimitation under a guide tree, which specifies a hierarchical structure of species groupings.

The first option is probably most useful for comparing multiple species groupings and finding the best one (for example, morphological species vs. mtDNA groups), while the second option can be used without any prior assignment and finds species from gene trees alone.

Let’s start with the first option. (I assume you have already set up an environment for tr2.)

You need two input files: a gene tree file in Newick format and a tab-delimited text file which specifies the associations between species and individual samples.

In the tree file, each line must contain one gene tree. Trees can have missing taxa, but they must be rooted. (Yes, the program is based on rooted triplets, so the trees must be rooted. If you do not have outgroups, midpoint rooting or RAxML’s “-f I” option often works well.)
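
If your gene trees are unrooted and you want to midpoint-root them in bulk, a small script like the one below would do it (a sketch only; it assumes the ete3 package is installed, and the file names are just examples):

# Midpoint-root every Newick tree in a file (requires the ete3 package).
from ete3 import Tree

with open("genetrees.tre") as infile, open("genetrees.rooted.tre", "w") as outfile:
    for line in infile:
        t = Tree(line.strip())
        t.set_outgroup(t.get_midpoint_outgroup())  # midpoint rooting
        outfile.write(t.write() + "\n")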

In the association file, the first column contains the names of the samples, which must be identical to the names of the tree tips. The second and subsequent columns are species groupings. You can write as many columns as you want, and you can use any codes for the species names.

For example, the table below specifies three alternative delimitations of samples 16.4-20.4.

19.4     4     B     sp5
17.4     4     B     sp5
18.4     4     B     sp4
16.4     4     A     sp4
20.4     4     A     sp4

Association files must contain all sample names which appear in the tree file.

Once you have a tree file and an association file, simply run the tr2 command as follows.

./run_tr2.py -a sp_association.txt -t genetrees.tre

Some example files are stored in the “sim4sp” folder. If you use them to test tr2, the command is like this.

./run_tr2.py -a sim4sp/sp.assoc.4sp.txt -t sim4sp/simulated.gene.trees.nex10.4sp.tre

The output of this command should look like this:

write: <stdout>
model score
null 51391.76
model1 5.73

The score of “model1” is much smaller than that of the “null” model (which assumes all samples come from one single species), so you can be quite confident that model1 is the better delimitation.

How to install “tr2”

Once the environment is properly set up, the installation of tr2-delimitation is fairly simple. Download the tr2-delimitation archive from the Bitbucket repository and decompress it anywhere on your computer.

Just click the cloud-like icon in the left side pane and then click “Download repository” to start downloading. The decompressed folder may have a long name like “tfujisawa-tr2-delimitation-84f248b5fa48”. Simply rename it to something like “tr2” if you want handier access to the folder.

To test whether tr2 (and the environment) is correctly installed, let’s run it with a test data set.

$cd /path/to/yourfolder/tr2-delimitation #move to the installed folder
$python run_tr2.py -t sim4sp/simulated.gene.trees.nex10.4sp.tre -g sim4sp/guide.tree.4sp.tre

If you see a delimitation table like the one below, the installation is successful.

write: <stdout>

species sample
1 1.1
1 2.1
1 3.1
1 4.1

...

4 17.4
4 18.4
4 19.4
4 20.4


In Unix-like OSs, the program can also be run without calling python explicitly (you may need to make the script executable first, e.g. with chmod +x run_tr2.py).

$./run_tr2.py -t sim4sp/simulated.gene.trees.nex10.4sp.tre -g sim4sp/guide.tree.4sp.tre


If you installed Python with Anaconda and created a python2 environment, you need to activate it first:

$activate python2
$python run_tr2.py -t sim4sp/simulated.gene.trees.nex10.4sp.tre -g sim4sp/guide.tree.4sp.tre


You can integrate delimitation with species tree inference using the rooted triple consensus. To set up the triplec program, download it from its website, create a folder named “bin” in the same folder as run_tr2.py, and put triplec in that “bin” folder.

$python run_tr2.py -t sim4sp/simulated.gene.trees.nex10.4sp.tre

If the command above returns similar output, triplec is being called correctly.

How to set up an environment for “tr2”

Installing Python and checking versions

Python is required to run tr2-delimitation on your computer. In many modern operating systems, Python is pre-installed and you do not need to install it yourself.

However, the installed version of Python matters. The tr2 is written in Python 2, while in some recent systems Python 3 is the default version. This is simply because Python 2 was the standard when I started writing the code, but Python 3 is now becoming the new standard. (I am translating the tr2 code into Python 3.) So, you first need to check which version of Python is installed on your system.

Type on your console,

$python --version

If you see a message like the one below, you are running Python 2.

Python 2.7.6

In some recent OSs, both Python 2 and 3 are pre-installed, and you can call Python 2 with

$python2

even when your default Python is Python3.

If your environment does not have Python2 at all, visit the Python website and install it, or create a Python2 environment as explained in the next section.

Installing dependencies (scipy/numpy and Java)

The Python packages numpy and scipy are required to run tr2. These packages are for numerical calculation. Visit the SciPy website and follow the instructions for your operating system to install the SciPy libraries. Installing all SciPy-related packages as instructed is fine, although not all of them are required to run tr2.

An alternative, easy way to install the dependencies is to install an all-in-one distribution like Anaconda, which includes Python and related packages. As Anaconda allows you to run multiple versions of Python, you can install it with any Python version.

If you choose to install Anaconda with Python 3, you need to create a Python 2 environment to run Python 2 code.

$conda create --name python2 python=2 numpy scipy

This “conda” command creates an “environment” where the Python version is 2 and numpy/scipy are installed. You can switch to it by calling the “activate” command,

$activate python2

and switch back to the default environment by the “deactivate” command.

(python2) $deactivate


Checking packages

You can check whether the packages are properly installed by loading them in the interactive shell.

$python

(Just typing “python” starts the interactive shell. You can quit this mode by pressing Ctrl+D.)

>>>import numpy, scipy

If this does not return an error, the packages are ready to use.

Java is also required to run the consensus tree building program, triplec. Almost all modern operating systems have Java pre-installed.