Master's Degree Course in Statistical Inference

20/02/2022 3-minute read

Scientific Inference

Scientific investigations are needed to solve real-life problems and explain them in detail. The scientific method spans everything from the conception of one or more research questions to the drawing of scientific conclusions. When real-life problems exhibit perceptible, unexplained, and haphazard variation, statistical methods are the best tools to guide scientific conclusions, especially from a quantitative perspective. Statistical methods shape the design, measurement, analysis, and interpretation stages of an investigation.

Although it is out of the scope of this post, I should emphasize that the design of experiments and the measurements, which precede the analysis and interpretation, are critical to the success of a scientific investigation, since the assumed model should cover the characteristics observed in the phenomenon.

Statistical inference makes propositions about a phenomenon using data drawn from its population according to a sampling strategy, so statistical inference is closely related to the design of experiments. The probability model specified to describe the phenomenon is central in this context. Such a model, or family of models, is based on the assumptions about and characteristics of the phenomenon. Moreover, the models are completely specified by unknown quantities called parameters, which must be estimated from the sample data.
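
As a minimal illustration of this idea (not part of the course material), the sketch below assumes a normal model for a hypothetical sample and estimates its unknown parameters by maximum likelihood; the data, the model choice, and the seed are assumptions made purely for the example.

```python
import numpy as np

# Hypothetical sample from the phenomenon's population (assumption for the example).
rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=100)

# Assumed parametric model: X ~ Normal(mu, sigma^2), with mu and sigma unknown.
# For this model the maximum likelihood estimates are the sample mean and the
# sample standard deviation computed with ddof=0.
mu_hat = sample.mean()
sigma_hat = sample.std(ddof=0)

print(f"MLE of mu:    {mu_hat:.3f}")
print(f"MLE of sigma: {sigma_hat:.3f}")
```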

There are two major paradigms in statistical inference, namely Classical and Bayesian. Both are grounded in probability theory and have in common the use of a likelihood function. The former uses probability only under the hypothetical long-run frequency interpretation to derive exact and asymptotic results concerning the unknown but fixed parameters. The latter, in turn, employs Bayes' Theorem and makes probabilistic statements about the unknown parameters, treating them as random variables, in order to construct posterior and predictive distributions, which are then used for inference purposes. Both approaches have their advantages and are useful in real applications. Although in some particular cases the two methodologies may coincide numerically in their estimates, the interpretations are completely different.
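
A toy sketch of this numerical coincidence, under assumptions of my own choosing (Bernoulli data with a uniform Beta(1, 1) prior): the classical maximum likelihood estimate of the success probability equals the Bayesian posterior mode, yet the former estimates a fixed parameter while the latter summarizes a distribution over the parameter.

```python
from scipy import stats

# Hypothetical data: 7 successes in 20 Bernoulli trials (assumption for the example).
k, n = 7, 20

# Classical: the MLE of the success probability p is k/n.
p_mle = k / n

# Bayesian: with a uniform Beta(1, 1) prior, the posterior is Beta(k + 1, n - k + 1).
posterior = stats.beta(k + 1, n - k + 1)
p_mode = k / n                         # the posterior mode coincides with the MLE here
p_mean = posterior.mean()
cred_int = posterior.interval(0.95)    # 95% credible interval for p

print(f"MLE:                   {p_mle:.3f}")
print(f"Posterior mode:        {p_mode:.3f}")
print(f"Posterior mean:        {p_mean:.3f}")
print(f"95% credible interval: ({cred_int[0]:.3f}, {cred_int[1]:.3f})")
```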

Classical inference theory, which consists of hypothesis testing, point and interval estimation, and the design of experiments, is essentially a development of R. A. Fisher (1890-1962) and J. Neyman (1894-1981). A curious fact is that the two men feuded bitterly throughout their lives. Bayesian inference theory as we know it today was established by several fundamental works, but the main contributions came from L. J. Savage (1917-1971), H. Jeffreys (1891-1989), and B. de Finetti (1906-1985).

My Master’s Degree Statistical Inference course was mainly based on Classical Inference, though we briefly studied the Bayesian paradigm at the end of the course.

Course contents

The lessons were taught by professor Caio Azevedo, who had prepared customized slides based on reference books such as "Statistical Inference Based on the Likelihood" by Adelchi Azzalini, "Parametric Statistical Inference" by James Lindsey, and "Statistical Inference" by Vijay Rohatgi.

Throughout the semester I wrote up summary notes, which can be found here (in Portuguese).

The primary contents of the course were:

  • Review of Common Families of Distributions:

    • Exponential family (see the sketch after this list)
    • Location-scale family
  • Principles of Data Reduction

  • Point Estimation

  • Interval Estimation

  • Hypothesis Tests

  • Introduction to Bayesian Inference and Decision Theory
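
As a quick reminder of the exponential family item above, here is the standard one-parameter exponential family form together with the Bernoulli distribution written in that form; the notation is the usual textbook one, not taken from the course slides.

```latex
% One-parameter exponential family form:
f(x \mid \theta) = h(x)\, c(\theta)\, \exp\{ w(\theta)\, t(x) \}

% Bernoulli(p) written in this form:
f(x \mid p) = p^{x} (1 - p)^{1 - x}
            = (1 - p) \exp\!\left\{ x \log \frac{p}{1 - p} \right\},
\quad x \in \{0, 1\},

% so that h(x) = 1, \quad c(p) = 1 - p, \quad
% w(p) = \log \frac{p}{1 - p}, \quad t(x) = x.
```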

Exams

As in the probability course, at the end of the semester I spent one month reviewing all the contents and solving many exercises to prepare for the statistical inference exam. The lists of exercises I solved are at home with my other study materials.

The exam I took can be found here (in Portuguese).