Machine Learning day - VVSOR

09 June 2022

Machine Learning day

at the University of Amsterdam

On Thursday June 9, the Social Science Section of the Dutch Statistical Society will be hosting our 2021 Fall meeting (‘najaarsbijeenkomst’) at the University of Amsterdam, which takes the form of an afternoon symposium (starting at 13:30). The topic of this symposium will be

 

Machine Learning Day

How machine learning can help the social sciences

 

in which recent methodological developments relating to machine learning will be discussed, as well as their relevance for and application in the Social Sciences. During the symposium, international experts on the topic will share their thoughts and contributions with you, and there will be room for discussion. With machine learning becoming more and more prominent, we believe the symposium will be of interest to researchers throughout the Social Sciences, and we hope to welcome you there. More details about the program of the symposium can be found below. The symposium will be held in room Rec A 2.10 (UvA).

 

Attendance is free, but please register with M.L.Geelhoed@uu.nl before June 1 with the subject ‘VvS-OR Machine Learning day’.
If you enjoy this or other VvS-OR meetings, consider becoming a member of VvS-OR; see vvsor.nl/join.

All the best,
the board of the Social Science Section of the Dutch Statistical Society:

Jesper Tijmstra (Tilburg University)

Don van Ravenzwaaij (Groningen University)

Joost van Ginkel (Leiden University)

Lourens Waldorp (University of Amsterdam)

Rebecca Kuiper (Utrecht University)

 

Program

Machine Learning Day - 9 June 2022

Abstracts

Presenter:

Marjolein Fokkema

 

Title:
Seeing the forest for the trees

 

Abstract:

Machine learning (ML) is by now a familiar buzzword, both within and outside the field of statistics. Opinions vary on what exactly machine learning is and how it distinguishes itself from statistics. I would like to avoid making that distinction altogether and instead start from the more familiar distinction between parametric and non-parametric methods. After a short introduction to ML, I will focus on one class of non-parametric methods for prediction: decision trees. Single decision trees are well known for their ease of interpretation. Techniques such as bagging, boosting and random forests, in which the predictions of a large number of trees are combined, are well known for providing state-of-the-art predictive accuracy in many applications. These techniques will be used to discuss trade-offs between interpretability and predictive accuracy, and to illustrate the potential benefits of flexible non-parametric methods for applications in the social sciences.
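
To make the trade-off concrete, here is a minimal Python sketch (using scikit-learn; an illustration added here, not material from the talk): a single shallow decision tree whose rules can be printed and read, next to a random forest that usually predicts more accurately. The simulated data and all settings are arbitrary.

    # A readable single tree versus a more accurate random forest (illustrative sketch).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.ensemble import RandomForestClassifier

    # Simulated data standing in for a social-science prediction problem.
    X, y = make_classification(n_samples=1000, n_features=10, n_informative=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

    # The tree's decision rules can be printed and inspected directly ...
    print(export_text(tree, feature_names=[f"x{i}" for i in range(10)]))
    # ... while the forest trades that readability for (typically) higher accuracy.
    print("single tree accuracy:", tree.score(X_test, y_test))
    print("random forest accuracy:", forest.score(X_test, y_test))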

 

Presenter:

Dr. Caspar J. van Lissa

Assistant professor of developmental data science,

FSW ambassador to the Open Science Community Utrecht
Utrecht University, Department of Methodology & Statistics

 

Title:
Closing the empirical cycle: Using machine learning for rigorous scientific exploration

 

Abstract:

Should applied scientists care about machine learning? What can data scientists learn from applied science? These are questions that Caspar van Lissa seeks to address. His Veni-funded research project “Will the kids be alright?” applies modern machine learning methods to a classical pedagogical problem: emotional problems in adolescence. Confirmatory, theory-testing research is now the dominant approach in the social sciences. This approach typically tests whether a specific theoretically relevant predictor, like attachment, has a significant effect. Machine learning, by contrast, offers an unprecedentedly effective way to conduct exploratory research: it can take all available predictors into account and rank their importance in predicting the outcome (e.g., emotional problems). The results are relevant for theory formation, as they may highlight which theoretical factors are empirically most important and can reveal blind spots of under-theorized factors. These improved theories may then be tested using confirmatory research. By combining both approaches, we close the “empirical cycle”.
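
As a generic illustration of this exploratory use of machine learning (a sketch added here, not the speaker's own pipeline; the data and predictor names are hypothetical), a random forest combined with permutation importance can rank many candidate predictors of an outcome:

    # Rank candidate predictors by permutation importance (illustrative sketch).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)

    # Hypothetical data: 8 candidate predictors; only the first two drive the outcome.
    predictors = [f"predictor_{i}" for i in range(8)]
    X = rng.normal(size=(500, 8))
    y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

    forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    # In a real analysis one would compute importances on held-out data.
    result = permutation_importance(forest, X, y, n_repeats=20, random_state=0)

    # Print predictors from most to least important for predicting the outcome.
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"{predictors[i]:>12}: {result.importances_mean[i]:.3f}")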

 

Presenter:

Rosanne Turner

 

Title:
Safe statistics: anytime-valid hypothesis tests and confidence sequences

 

Abstract:

Safe statistics is a new framework for collecting evidence for hypotheses, particularly suitable for online and sequential learning. Most currently available methods for hypothesis testing and parameter estimation should not be used in the online or sequential setting, as bounds on error probabilities are then no longer guaranteed. Safe statistics offers feasible, easily implementable equivalents of classical methods that do provide these guarantees: Type-I error bounds and/or confidence intervals can be given at any point in time, under arbitrary stopping rules.

Within this framework, nonnegative random variables called E-variables quantify the evidence in the data for a hypothesis. To optimize the amount of evidence collected, the information-theoretic GRO (Growth Rate Optimal) criterion is used for E-variable design. These GRO E-variables are special Bayes factors, but with corresponding prior distributions that are sometimes quite different from what Bayesian machine learners or statisticians would normally use. These concepts can also be extended, and inverted safe tests can be used to construct anytime-valid confidence sequences.
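
As a toy illustration of the E-variable idea (a sketch added here, not code from the talk; the Bernoulli setting and the fixed alternative are assumptions made for the example), the running likelihood ratio for a stream of coin flips is a nonnegative martingale with expectation 1 under the null, so it can be monitored after every observation and compared against 1/alpha without inflating the Type-I error:

    # Anytime-valid test of H0: theta = 0.5 for a Bernoulli stream (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(1)
    alpha = 0.05
    theta0, theta1 = 0.5, 0.7          # null and assumed alternative success probabilities

    e_process = 1.0                    # running product of likelihood ratios (an E-process)
    for n in range(1, 1001):
        x = rng.binomial(1, 0.7)       # the data actually come from theta = 0.7
        lr = (theta1 if x else 1 - theta1) / (theta0 if x else 1 - theta0)
        e_process *= lr                # accumulate evidence against H0
        if e_process >= 1 / alpha:     # by Ville's inequality, Type-I error stays below alpha
            print(f"reject H0 after {n} observations, e-value = {e_process:.1f}")
            break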

In this talk, the theoretical background of safe statistics and its foundations in machine learning will be discussed, and the principles will be illustrated with specific implementations for t-tests and a two-sample data stream setting.