Nonlinear Estimation and Classification
Author(s): D.D. Denison, M.H. Hansen, C.C. Holmes, B. Mallick & B. Yu (Eds.)

ISBN 0-387-95471-6

Springer-Verlag (January 22, 2003)
Paperback: 432 pages


REVIEW
This volume contains 31 papers presented at the Mathematical Sciences Research Institute (MSRI) workshop held in Berkeley, California, on March 19-29, 2001. The main idea of the meeting, which is perfectly reflected in the volume, was to bring together leading experts from machine learning, statistics, signal and image processing, applied mathematics, optimization and bioinformatics. Until recently, the links between the corresponding communities have not been sufficiently strong, and this volume is a step towards building bridges between these disciplines. The papers cover a wide spectrum of topics, including both theoretical analysis of various novel techniques in regression and classification and their applications to specific problems in environmental modeling and bioinformatics. The first 11 chapters contain “long papers” written by invited speakers, while the remaining 20 chapters contain “short papers” based on contributed talks.

Some of the papers are review-style and introduce the main ideas of several modern approaches in regression and classification to a broad audience of researchers from different fields.

Wahba, Lin, Lee and Zhang give an overall review of Support Vector Machines (SVMs). They present the basic ideas of SVMs, which originated in the machine learning literature, and explain them from a statistical viewpoint. The authors also mention possible generalizations of SVMs to several nonstandard cases.
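
As a rough sketch of this statistical viewpoint, the soft-margin SVM classifier can be written as a penalized hinge-loss problem over a reproducing kernel Hilbert space; the notation below (labels y_i in {-1, +1}, kernel space H_K, tuning parameter lambda) is generic shorthand rather than the chapter's exact formulation:

    \min_{f \in \mathcal{H}_K} \ \frac{1}{n} \sum_{i=1}^{n} \bigl(1 - y_i f(x_i)\bigr)_{+}
      \;+\; \lambda \, \lVert f \rVert_{\mathcal{H}_K}^{2},
    \qquad (u)_{+} = \max(u, 0),

with a new observation x classified according to the sign of the fitted f(x).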

Schapire provides an excellent overview of boosting, a novel general method for improving the accuracy of any given learning algorithm that has been reported to yield surprisingly good results in a series of practical studies. The chapter explains its basic principles, the original AdaBoost algorithm, the statistical properties of boosting, and its connections to logistic regression and game theory, which help explain its “mysterious” success.
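
To make the basic principle concrete, here is a minimal sketch of the original (discrete) AdaBoost algorithm using decision stumps as the weak learner. The use of scikit-learn stumps, the number of rounds, and the function names are my illustrative choices, not taken from the chapter:

    # Minimal discrete AdaBoost sketch (illustrative only; assumes numpy and scikit-learn).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_fit(X, y, n_rounds=50):
        """Fit discrete AdaBoost; y must take values in {-1, +1}."""
        n = len(y)
        w = np.full(n, 1.0 / n)                     # start with uniform sample weights
        stumps, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)        # weak learner trained on weighted data
            pred = stump.predict(X)
            err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner's vote
            w *= np.exp(-alpha * y * pred)          # up-weight misclassified observations
            w /= w.sum()
            stumps.append(stump)
            alphas.append(alpha)
        return stumps, alphas

    def adaboost_predict(stumps, alphas, X):
        """Classify by the sign of the weighted majority vote of the weak learners."""
        agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
        return np.sign(agg)

On toy two-class data, a few dozen such rounds typically reduce the error well below that of a single stump, which is the kind of empirical improvement the chapter's discussion refers to.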

Mukherjee, Rifkin & Poggio consider a general regularization (penalization) approach to nonparametric regression and classification. Various known estimators and classifiers (e.g., smoothing splines, support vector machines and their variants) can be viewed within this framework as corresponding to different loss functions. The authors review recent theoretical results on generalization error bounds for the resulting estimators.
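
In rough form, this framework minimizes an empirical loss plus a penalty over a hypothesis space; the symbols below (loss L, penalty functional J, space H, tuning parameter lambda) are generic placeholders rather than the chapter's exact notation:

    \hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{H}} \
      \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) + \lambda\, J(f).

Squared-error loss with a roughness penalty yields smoothing splines, while the hinge loss shown earlier recovers the SVM; the generalization error bounds reviewed by the authors concern estimators of this form.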

Gey & Nedelec provide a brief presentation of the Classification and Regression Trees (CART) methodology.

Several other papers focus on more specific issues. Interested readers can find summaries of recent results for a wide spectrum of modern techniques in regression and classification, including wavelet analysis of signals and images, spline estimation, aggregate classifiers, probabilistic and neural networks, SVMs and others.

There is also a series of papers dealing with applications of these methods to specific problems in bioinformatics, traffic flow modeling, the analysis of satellite images, etc. In particular, the papers of Pollard & van der Laan and of Rahnenführer consider clustering problems for microarray data.
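
To indicate the kind of task those two papers address, here is a toy sketch of clustering genes by their expression profiles across arrays. The synthetic data, the choice of k-means, and the number of clusters are illustrative assumptions of mine, not the papers' methods:

    # Toy gene-expression clustering sketch (illustrative only; assumes numpy and scikit-learn).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    expression = rng.normal(size=(300, 20))   # 300 genes x 20 arrays of fake expression values

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(expression)
    for k in range(3):
        print(f"cluster {k}: {np.sum(labels == k)} genes")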

Overall, Nonlinear Estimation and Classification offers a comprehensive overview of the state of the art in modern regression and classification. The review-style papers can give a broad audience in the bioinformatics community valuable insight into the basic principles of several novel methods, while other papers may be of interest to readers with some knowledge of and experience in the topics discussed. The two papers on clustering of microarray data mentioned above are especially relevant to researchers in the field.

Felix Abramovich, Tel Aviv University, Israel

="Buy from Amazon.com" border="0" alt="Buy from Amazon.com" src="http://rcm-images.amazon.com/images/G/01/associates/add-to-cart.gif"> TOP | Back to Bioinformatics Books