# 3 Classical parametric estimation

In the previous chapter we reviewed some basic notions of probability. Under the assumption that a probabilistic model is correct, logical deduction through the rules of mathematical probability leads to a description of the properties of data that might arise from the real situation. However, this deductive approach requires the existence and availability of a probabilistic model. The theory of statistics is designed to reverse the deductive process: it takes data that have arisen from a practical situation and uses them to suggest a probabilistic model, to estimate its parameters and, eventually, to validate it. This chapter focuses on the estimation methodology, intended as the inductive process that leads from observed data to a probabilistic description of reality. We will focus here on the parametric approach, which assumes that a probabilistic model is available apart from some unknown parameters. Parametric estimation algorithms build estimates from data and, more importantly, statistical measures to assess their quality.

There are two main approaches to parametric estimation:

**Classical or frequentist**: it is based on the idea that sample
data are the sole quantifiable form of relevant information and that
the parameters are *fixed but unknown*. It is related to the frequentist view of probability.

**Bayesian approach**: the parameters are supposed to be *random
variables*, having a distribution *prior* to the data observation and a
distribution *posterior* to the data observation. This approach assumes
that there exists something beyond data (i.e., a subjective degree
of belief) and that this belief can be described in probabilistic
form.
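The contrast between the two approaches can be made concrete with a minimal sketch. Assume (as a hypothetical example, not taken from the text) that data are drawn from a Gaussian with unknown mean and known standard deviation: the frequentist treats the mean as fixed and reports a point estimate with a standard error, while the Bayesian places a Gaussian prior on the mean and obtains a Gaussian posterior (the well-known conjugate-prior result). All numbers below are illustrative choices.

```python
import random
import statistics

random.seed(0)

# Hypothetical setting: data from a Gaussian with unknown mean and known sd.
true_theta, sigma = 5.0, 2.0
data = [random.gauss(true_theta, sigma) for _ in range(50)]
n = len(data)

# Classical/frequentist view: the parameter is fixed but unknown; the
# sample mean is its point estimate and the standard error measures quality.
theta_hat = statistics.mean(data)
std_err = sigma / n ** 0.5

# Bayesian view: the parameter is a random variable with prior N(mu0, tau0^2);
# with known sigma, the posterior is again Gaussian (conjugate prior).
mu0, tau0 = 0.0, 10.0          # illustrative prior hyperparameters
post_var = 1.0 / (1.0 / tau0 ** 2 + n / sigma ** 2)
post_mean = post_var * (mu0 / tau0 ** 2 + sum(data) / sigma ** 2)

print(f"frequentist estimate: {theta_hat:.3f} (std err {std_err:.3f})")
print(f"Bayesian posterior:   mean {post_mean:.3f}, sd {post_var ** 0.5:.3f}")
```

With a vague prior (large `tau0`) and a reasonable sample size, the posterior mean nearly coincides with the frequentist estimate, while the posterior variance is always smaller than the prior variance: the data update the initial degree of belief.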

For reasons of space, we will limit ourselves here to the classical approach. It is important, however, not to neglect the role of the Bayesian approach, which has recently led to a large amount of research in Bayesian data analysis and related applications in machine learning [51].