Prior
by Jens Keilwagen, Jan Grau, Stefan Posch, and Ivo Grosse.
Description
Background
One of the challenges of bioinformatics remains the recognition of short signal sequences in genomic DNA such as donor or acceptor splice sites, splicing enhancers or silencers, translation initiation sites, transcription start sites, transcription factor binding sites, nucleosome binding sites, miRNA binding sites, or insulator binding sites. During the last decade, a wealth of algorithms for the recognition of such DNA sequences has been developed and compared with the goal of improving their performance and deepening our understanding of the underlying cellular processes. Most of these algorithms are based on statistical models belonging to the family of Markov random fields, such as position weight matrix models, weight array matrix models, Markov models of higher order, or moral Bayesian networks. While many comparative studies have compared different learning principles or different statistical models, the influence of choosing different prior distributions for the model parameters when using different learning principles has been overlooked, leading to questionable conclusions.
Results
With the goal of allowing direct comparisons of different learning principles for models from the family of Markov random fields based on the same a-priori information, we derive a generalization of the commonly used product-Dirichlet prior. We find that the derived prior behaves like a Gaussian prior close to the maximum and like a Laplace prior in the far tails. In two case studies, we illustrate the utility of the derived prior for a direct comparison of different learning principles with different models for the recognition of binding sites of the transcription factor Sp1 and of human donor splice sites.
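As a rough numerical illustration of this tail behavior (a sketch, not the paper's actual derivation), the following Python snippet evaluates a symmetric Dirichlet prior for a binary alphabet after the usual log-odds reparametrization, Jacobian included. The curvature at the mode and the slope far out in the tails show the Gaussian-like and Laplace-like regimes, respectively; the pseudocount value alpha = 2.0 is an arbitrary choice for illustration:

```python
import math

def log_transformed_dirichlet(lam, alpha=1.0):
    """Log-density (up to an additive constant) of a symmetric
    Dirichlet(alpha, alpha) prior for a binary alphabet, expressed in the
    log-odds parameter lam = log(p / (1 - p)) and including the Jacobian
    p * (1 - p) of the transformation.  The density simplifies to
        -alpha * log(1 + exp(-lam)) - alpha * log(1 + exp(lam)).
    """
    return -alpha * (math.log1p(math.exp(-lam)) + math.log1p(math.exp(lam)))

alpha = 2.0  # illustrative pseudocount; any alpha > 0 behaves analogously
f = log_transformed_dirichlet

# Near the mode (lam = 0) the log-density is approximately quadratic,
# i.e. Gaussian-like; its second derivative there is -alpha / 2.
h = 1e-3
curvature = (f(h, alpha) - 2 * f(0.0, alpha) + f(-h, alpha)) / h ** 2
print(f"curvature at mode: {curvature:.4f} (expected {-alpha / 2})")

# Far in the tails the log-density decays linearly, i.e. Laplace-like;
# its slope tends to -alpha.
slope = f(20.0, alpha) - f(19.0, alpha)
print(f"tail slope: {slope:.4f} (expected {-alpha})")
```

For multi-symbol alphabets and product-Dirichlet priors over several positions, the same computation applies per parameter block, but the one-dimensional binary case already exhibits both regimes.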
Conclusions
We find that comparisons of different learning principles using the same a-priori information can lead to conclusions different from those of previous studies in which the effect resulting from different priors has been neglected. The derived prior is implemented in the open-source library Jstacs, so it can be easily applied to comparative studies of different learning principles in the field of sequence analysis.
Paper
The paper "Apples and oranges: avoiding different priors in Bayesian DNA sequence analysis" has been published in BMC Bioinformatics.
References in Jstacs
- The prior is used in the project GenDisMix: unifying generative and discriminative learning principles
- Example: Train classifiers using GenDisMix (a hybrid learning principle)
- API