Please use this identifier to cite or link to this item: http://hdl.handle.net/2078.1/33546
Microarray experiments are a very promising tool for early diagnosis and disease treatment. The datasets obtained in these experiments typically consist of a small number of instances and a large number of covariates, most of which are irrelevant for discrimination. These characteristics pose severe difficulties for standard learning algorithms. A Bayesian approach can be useful to overcome these problems and produce more accurate and robust predictions. However, exact Bayesian inference is computationally costly and in many cases infeasible, so in practice some form of approximation has to be made. In this paper we consider a Bayesian linear model for microarray data classification based on a prior distribution that favors sparsity in the model coefficients. Expectation Propagation (EP) is then used to perform approximate inference as an alternative to computationally more expensive methods, such as Markov Chain Monte Carlo (MCMC) sampling. The model is evaluated on 15 microarray datasets and compared with other state-of-the-art classification algorithms. These experiments show that the Bayesian model trained with EP performs well on the datasets investigated and is also useful for identifying relevant genes for subsequent analysis. © 2010 Elsevier B.V. All rights reserved.
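To make the setup concrete, the following is a minimal sketch of exact Bayesian inference in a linear model with a simple Gaussian prior on the coefficients, applied to synthetic "few instances, many covariates" data of the kind the abstract describes. Note the difference from the paper's model: with a Gaussian prior the posterior is available in closed form, whereas the sparsity-favoring prior used in the paper makes the posterior intractable, which is why EP (or MCMC) is needed. All variable names and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Microarray-like shape: few instances (n), many covariates (d),
# with only a handful of truly relevant coefficients.
n, d = 20, 50
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]          # only a few "genes" matter
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

alpha = 1.0                            # prior precision on coefficients
sigma2 = 0.1 ** 2                      # observation noise variance

# Gaussian prior + Gaussian likelihood => Gaussian posterior:
#   Sigma = (alpha*I + X^T X / sigma2)^{-1},   mu = Sigma @ X^T y / sigma2
Sigma = np.linalg.inv(alpha * np.eye(d) + X.T @ X / sigma2)
mu = Sigma @ (X.T @ y) / sigma2

# Posterior mean and per-coefficient uncertainty are both available;
# EP delivers a Gaussian approximation of this same form when the
# prior is non-Gaussian (e.g. a sparsity-favoring prior).
print(np.round(mu[:3], 2))
print(np.round(np.sqrt(np.diag(Sigma))[:3], 3))
```

Under a sparsity prior, EP would iteratively refine site approximations to produce an analogous Gaussian `(mu, Sigma)`, at far lower cost than MCMC sampling.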
Publication Date: 2010
Document type: Journal article (research article)
Source: "Pattern Recognition Letters", Vol. 31, no. 12, p. 1618-1626 (2010)
Publisher: Elsevier Science BV (Amsterdam)
Publication status: Published
Subject: Microarray data; Bayesian inference; Expectation Propagation