Statistics & OR for Robust Decision Making

VVSOR Annual Meeting 2023

Please note: today, Thursday, March 23, 2023, we have a livestream of the event. If you registered, you have received the link to join the meeting online.

Date: Thursday, March 23, 2023
Location: Choose between the livestream or attending the event in Utrecht at conference location ‘In de Driehoek’, a 5–10 minute walk from Utrecht Central station.
Registration: Closed




On this day there will be various speakers presenting their research and work around the theme:

“Statistics and operations research for robust decision making” 

Expect to learn about cutting-edge research and real-life applications on the topic of decision-making under uncertainty.

Making robust (high-stakes) decisions can be very challenging, especially when decisions must be taken under varying levels of uncertainty. During this annual meeting, several experts will talk about their research in this field. The speakers will elaborate on the use of data, statistical models and operations research applications that can assist the decision-making process and strengthen the robustness of the resulting decisions. Which factors should be considered, which models can be used, and how can one incorporate uncertainty into the models, the decision-making process and the communication around that uncertainty?

Join this annual meeting and learn from four scientists specializing in the fields of health economics, mathematics and complexity science, and their relationship to decision making. A panel discussion with all speakers is planned at the end of the talks:

Gianluca Baio is a professor of Statistics and Health Economics in the Department of Statistical Science at University College London (UK). He studied Statistics and Economics and holds a PhD in Applied Statistics. Gianluca's main interests are Bayesian statistical modelling for cost-effectiveness analysis and decision-making problems in health systems, hierarchical/multilevel models, and causal inference using the decision-theoretic approach. He leads the Statistics for Health Economic Evaluation research group within the Department of Statistical Science. He delivered the 18th Armitage Lecture in November 2021 but, since becoming Head of the UCL Department of Statistical Science, his research activity is officially dead.

Health technology assessment (HTA) is the final stage of clinical development, where interventions are assessed for their "value for money". Bodies such as NICE in the UK or ZIN in the Netherlands act as conduits for the relevant (public) healthcare provider, suggesting whether a given intervention should be funded once it is put on the market. Often, decisions are based on limited evidence, which generates large uncertainty over the decision-making process and over the possible consequences of making the "wrong" choice, including sunk costs associated with switching from one technology to another. Value of information (VoI) is a principled set of techniques that can be used to assess the impact of uncertainty in model inputs on decision-making, as well as to prioritise research into specific components of a model in order to reduce the underlying, key drivers of the uncertainty. In this talk, I will briefly introduce the main concepts around HTA and VoI and present some recent computational and substantive developments in VoI methodology.
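The core VoI quantity can be sketched numerically. The example below is purely illustrative: the two-option decision, the net monetary benefit (NMB) figures and the willingness-to-pay value are all invented, not taken from the talk. It estimates the expected value of perfect information (EVPI) by Monte Carlo as E[max_d NMB] - max_d E[NMB]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical two-option HTA model: the NMB of the new intervention
# depends on an uncertain effectiveness parameter theta.
theta = rng.normal(loc=0.6, scale=0.2, size=n)  # uncertain effect size
wtp = 20_000                                    # willingness-to-pay per unit of effect

nmb_standard = wtp * 0.5 * np.ones(n) - 5_000   # comparator with known effect 0.5
nmb_new = wtp * theta - 9_000                   # new, more effective but costlier option
nmb = np.column_stack([nmb_standard, nmb_new])  # shape (n, 2): one column per option

# EVPI: value of resolving all uncertainty before deciding.
# With perfect information we pick the best option per simulated theta;
# with current information we commit to the option with the best average NMB.
evpi = nmb.max(axis=1).mean() - nmb.mean(axis=0).max()
print(round(float(evpi), 2))
```

A positive EVPI bounds how much further research into theta could be worth; per-parameter variants (EVPPI) apply the same logic to subsets of model inputs.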

Peter Grünwald heads the machine learning group at CWI in Amsterdam, the Netherlands. He is also full professor of statistics at the Mathematical Institute of Leiden University, and his research interests lie at the intersection of machine learning, statistics and information theory. He is the author of the book The Minimum Description Length Principle (MIT Press, 2007), which has become the standard reference for this information-theoretic approach to learning. A recipient of NWO VIDI and VICI grants, in 2010 he was co-awarded VVSOR's Van Dantzig prize, the highest Dutch award in statistics and operations research. From 2018 to 2022 he served as President of the Association for Computational Learning, the organization running COLT, the world’s premier annual conference on machine learning theory, which he himself chaired in 2015. Since about 2018, his research group has focused almost exclusively on safe anytime-valid inference and e-values.

A standard practice in hypothesis testing is to report the p-value alongside the accept/reject decision. We show a major advantage of reporting an e-value instead. With p-values, we simply cannot use an extreme observation (e.g. p << alpha) to obtain better frequentist decisions. With e-values we can, since they provide Type-I risk control in a generalized Neyman-Pearson setting in which the decision task (a general loss function) is determined post hoc, after observation of the data. This provides a handle on the age-old "roving alpha" problem in statistics: we obtain risk (expected loss) bounds that hold independently of the loss, or any alpha, being set in advance. The reasoning extends to confidence intervals. E-values were originally introduced (in 2019) because of their ability to deal with optional continuation, i.e. gathering additional data whenever one sees fit. Their ability to handle post-hoc decision tasks provides a second, independent argument for embracing them.

This work is based on:
P. Grünwald. Beyond Neyman-Pearson. arXiv:2205.00901, 2022.
P. Grünwald. The e-posterior. Phil. Trans. R. Soc. A, 2023.
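The basic Type-I guarantee behind e-values follows from Markov's inequality: an e-value has expectation at most 1 under the null, so rejecting when E >= 1/alpha controls the Type-I error at level alpha. A minimal simulation sketch (the Gaussian testing problem and all numbers here are invented for illustration, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
n_sims, n_obs = 20_000, 10

# Hypothetical test of H0: X ~ N(0,1) against H1: X ~ N(1,1).
# The likelihood ratio is an e-value: its expectation under H0 is 1,
# so Markov's inequality gives P_H0(E >= 1/alpha) <= alpha.
x = rng.normal(0.0, 1.0, size=(n_sims, n_obs))  # data generated under H0
log_e = (x - 0.5).sum(axis=1)                   # per-observation log-LR is x - 1/2
e_values = np.exp(log_e)

type1 = float((e_values >= 1 / alpha).mean())
print(round(type1, 4))                          # empirical Type-I rate, below alpha
```

Unlike a p-value threshold, this guarantee does not require alpha to be fixed before seeing the data, which is what enables the post-hoc decision tasks described in the abstract.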

Frank P. Pijpers is senior methodologist at Statistics Netherlands and professor by special appointment at the Korteweg-de Vries Institute for Mathematics of the University of Amsterdam. The focus of his chair is on complexity for official statistics. Before joining Statistics Netherlands in 2010, Frank Pijpers carried out fundamental research in astrophysics at various universities in Europe and he also worked for the UK Government Operational Research Service for a few years.

This year’s theme of robust decision making might not appear the most obvious choice for a speaker from a national statistical institute (NSI). Traditionally, the task of NSIs is to provide support for decision making by others, and that support tends to take the form of tables, time series and the like, these days published in electronic form as open data.

In this talk, I will argue that effective evidence-based public governance makes it imperative that NSIs also provide supplementary analysis and interpretation. NSIs must disseminate not only data, but also the possibilities and limitations of interpreting the data being published, or the micro-level data that underpin them.

Public governance decisions often implicitly assume causalities that have not necessarily been demonstrated or properly tested. Even outside the controlled and isolated settings of experiments, it is in some cases possible to explore and test hypotheses of causality, as the work of renowned researchers such as Imbens and Angrist has demonstrated. Causality testing brings together some traditional statistics with the relatively new field of complexity science. The traditional part is quantifying margins of uncertainty, which ought to be available for everything NSIs publish. The non-traditional part comes from the realisation that what we designate as trends or properties of society or the economy emerges from the myriad individual interactions between people, which implies that "causes" always act through multiple pathways of mechanisms. I hope to illustrate this point using some examples from recent ‘case work’.

Julie Rozenberg is a widely published economist with 15 years of experience working on the link between development policy and climate change adaptation and mitigation. She works as a Senior Economist at the World Bank, where she leads applied research to support decision making for investments and public policies in developing countries. Julie is also an editor for WIREs Climate Change and Vice President of the Society for Decision Making Under Deep Uncertainty.


Climate change, pandemics and financial crises are examples of deep uncertainties that challenge decision making for public policy. Deep uncertainty occurs when decision makers and stakeholders do not know, or cannot agree on, how likely different future scenarios are. For economists assessing the consequences of policy options or infrastructure investments, the presence of deep uncertainty requires new techniques that look for robust decisions, which perform well under multiple future conditions, rather than an optimal solution under a single prediction of the future. This talk will give examples of how robust decision making methods and tools are used to support decisions on infrastructure investments and to analyse future climate change impacts on poverty reduction goals.
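One common formalisation of such robustness is the minimax-regret criterion, which picks the option whose worst-case shortfall across scenarios is smallest, with no probabilities attached to the scenarios. The payoff matrix below is entirely hypothetical and only illustrates the mechanics:

```python
import numpy as np

# Hypothetical payoffs (e.g. net benefit of three infrastructure options)
# under four future scenarios whose probabilities are unknown.
#                   scen1  scen2  scen3  scen4
payoffs = np.array([
    [100,    90,    20,    10],  # option A: great in favourable futures, poor otherwise
    [ 70,    70,    60,    50],  # option B: moderate everywhere
    [ 55,    55,    55,    55],  # option C: fully hedged, same payoff in every future
])

# Regret = shortfall from the best achievable payoff in each scenario.
regret = payoffs.max(axis=0) - payoffs

# Minimax regret: minimise the worst-case regret rather than
# maximising expected payoff under a single forecast.
worst_regret = regret.max(axis=1)
robust_choice = int(worst_regret.argmin())
print(robust_choice, worst_regret.tolist())  # option B (index 1) is the robust pick
```

Here option A would win under a single optimistic forecast, but the robust criterion prefers option B because its performance degrades least across all four futures.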

*Please note: we have one change in the schedule; the previously announced speaker Rianne de Heide is replaced by Peter Grünwald.

The meeting will be hybrid and in English, allowing both online attendance via the livestream and in-person attendance in Utrecht, the Netherlands.

During the day, we will also have the general assembly (ALV) and award ceremonies. After the conference, there will be an (optional) conference dinner.



Registration for attendance in Utrecht has closed.

* The discounted student prices for the conference and dinner are sponsored by LUXs data science and the VVSOR.







If there is more information, we will keep you up to date about the program and speakers on this page.

We hope that many of you will join this event.

Published on: December 5, 2022