Where things are heading
The techniques that we’ve studied in this course were born at the end of the 19th century and had their adolescence in the early part of the 20th century. They were developed originally for the analysis of laboratory experiments, but were quickly adapted to the analysis of complex observational studies.
The ideas behind resampling and randomization emerged in the early part of the 20th century, but they were impractical until electronic computers were invented.
Only now, in the 2010s, are these computational ideas becoming mainstream in statistical education, so hardly anyone older than you will know about them.
Where things are heading:
- Big data, where we try to glean patterns from a large number of cases in observational data.
- Example: You’ve watched two movies on Netflix. Other people have watched these movies, too. Can we make a good recommendation of movies for you based on the characteristics of the other viewers? (A small sketch of this idea appears after this list.)
- Example: Your credit card has just been used at a gas station. Might it have been stolen?
- Little n, big m: microarray data, where you may have n = 10 samples but m = 40,000 variables.
- Bayesian computation. The Bayesian approach to probability is, to many people, more compelling than the frequentist/hypothesis-testing approach. The computations it requires are difficult, but they are becoming feasible. (A small sketch follows below.)
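Here is a minimal sketch of the recommendation idea, using made-up ratings and a deliberately naive "find viewers like you" rule. A real system such as Netflix's works at a vastly larger scale and with more sophisticated methods; the point is only that, with enough other viewers on record, even a simple rule can produce a sensible suggestion.

    # A toy recommendation based on viewers with tastes similar to yours.
    # All names and ratings are hypothetical.

    ratings = {
        # viewer: {movie: rating on a 1-5 scale}
        "ann":  {"Movie A": 5, "Movie B": 4, "Movie C": 5, "Movie D": 2},
        "bob":  {"Movie A": 4, "Movie B": 5, "Movie D": 5},
        "cara": {"Movie A": 1, "Movie B": 2, "Movie C": 1, "Movie E": 5},
    }

    you = {"Movie A": 5, "Movie B": 4}   # the two movies you have watched

    def distance(other):
        """Mean absolute difference in ratings on the movies you both rated."""
        shared = [m for m in you if m in other]
        return sum(abs(you[m] - other[m]) for m in shared) / len(shared)

    # Keep the two viewers whose ratings are closest to yours.
    neighbors = sorted(ratings, key=lambda v: distance(ratings[v]))[:2]

    # Collect the neighbors' ratings for movies you haven't seen yet.
    candidates = {}
    for v in neighbors:
        for movie, r in ratings[v].items():
            if movie not in you:
                candidates.setdefault(movie, []).append(r)

    # Recommend the unseen movie with the highest average rating among the neighbors.
    recommendation = max(candidates, key=lambda m: sum(candidates[m]) / len(candidates[m]))
    print("Recommended:", recommendation)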
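And here is a minimal sketch of a Bayesian computation done by brute force: the posterior for a success probability p after observing, say, 7 successes in 10 trials (numbers made up for illustration), approximated on a grid. Realistic problems need cleverer algorithms, such as Markov chain Monte Carlo, but in either case it is the computer that makes the approach practical.

    # Grid approximation of a Bayesian posterior for a proportion p.
    import numpy as np

    p = np.linspace(0, 1, 1001)            # candidate values for p
    prior = np.ones_like(p)                 # flat prior: every p equally plausible
    likelihood = p**7 * (1 - p)**3          # 7 successes, 3 failures
    posterior = prior * likelihood
    posterior /= posterior.sum()            # normalize so the grid sums to 1

    # Posterior mean and a 95% interval, read straight off the grid.
    mean = np.sum(p * posterior)
    cumulative = np.cumsum(posterior)
    lower = p[np.searchsorted(cumulative, 0.025)]
    upper = p[np.searchsorted(cumulative, 0.975)]
    print(f"posterior mean {mean:.2f}, 95% interval ({lower:.2f}, {upper:.2f})")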