Important: This event will now be online and take place on December 4th and December 11th, 2023. We hope to see you all there.
In open science, researchers strive to make their research outputs openly available, enabling others to verify and build upon their work. This includes, for instance, sharing raw data, research protocols, and analysis code, allowing for greater transparency and reproducibility of scientific studies.
Various speakers will address key themes such as promoting transparency in statistical analyses and model building. During the webinars, you can also gain insights into effective methods of sharing data and code, fostering collaboration and learning within our community.
Date and time: December 4 and December 11, 2023
Location: Online on Zoom
In recent years, the term “Open Science” has started to circulate in research policy and practice. Universities have drafted open science strategies, and funding bodies such as the Dutch Scientific Council have adjusted their allocation decisions to commit to open science. Yet the concept is surprisingly vague and carries some contradictory meanings. This talk aims to navigate these unclarities by sketching the philosophical and political underpinnings of the concept, while taking into account the various ways in which “open science” has been implemented in the Netherlands.
Martijn van der Meer is a senior policy advisor on responsible research at the Tilburg School of Humanities and Digital Science and chair and co-founder of the Center of Trial and Error. He is also a lecturer and PhD researcher on the history of public health at Erasmus University Rotterdam and the Erasmus Medical Centre.
Over the last decade, there have been many concerns regarding the reliability and replicability of the academic literature, often dubbed the “reproducibility crisis”. This has fueled the rapid growth of the Open Science movement, which promotes a set of practices that improve transparency and protect against bias. When done properly, these practices tend to increase the quality and usability of research output. In this talk, I will discuss tips for and benefits of a transparent research workflow, with a main focus on preregistration.
Bawan Amin has a background in research, having completed a PhD in Behavioural Ecology. Now he focuses on the way we do science as an Open Science Advisor at Utrecht University.
Other scientists might look to the clinical trial world as a highly regulated and administratively transparent science. Indeed, many practices such as preregistration and aggregate data sharing have been around for a long time, at least in theory. In practice, however, clinical trials lag behind many other sciences that are quickly improving. In this talk I will discuss some examples that surprised me. As a statistician, I see reasons to worry, to care about improvement, and to keep imagining how things could be better and how to help.
Judith ter Schure is a consultant statistician and assistant professor at the Epidemiology & Data Science department of Amsterdam UMC. Her research focuses on statistics for prospective meta-analysis as a way to make accumulation of scientific knowledge from clinical trials more efficient: ‘ALL-IN meta-analysis’.
When processing and analyzing empirical data, researchers regularly face choices that seem arbitrary. Interpreting the outcome of a single analysis can therefore be tricky, because plausible alternatives remain unexplored or undocumented. To assess the robustness of a finding and to improve transparency, one can conduct a so-called multiverse analysis, which involves methodically examining the various choices pertaining to data processing and/or model building. In this talk, I will introduce multiverse analysis through concrete examples and give some pointers on how to conduct your own multiverse analysis.
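The idea of methodically crossing all defensible processing choices can be sketched in a few lines. The sketch below is purely illustrative and not from the talk: the toy reaction-time data, the exclusion cutoffs, and the summary statistics are all invented assumptions, standing in for whatever choices arise in a real analysis.

```python
# Minimal multiverse-analysis sketch: enumerate every combination of
# (hypothetical) data-processing choices and record the effect each yields.
import itertools
import statistics

# Invented toy data: reaction times (ms) under two conditions.
data = [
    {"condition": "A", "rt": 420}, {"condition": "A", "rt": 530},
    {"condition": "A", "rt": 1180}, {"condition": "A", "rt": 460},
    {"condition": "B", "rt": 510}, {"condition": "B", "rt": 640},
    {"condition": "B", "rt": 590}, {"condition": "B", "rt": 2050},
]

# Arbitrary-seeming choices, each with plausible alternatives.
exclusion_rules = {
    "none": lambda row: True,
    "rt_below_1000": lambda row: row["rt"] < 1000,
    "rt_below_1500": lambda row: row["rt"] < 1500,
}
summaries = {
    "mean": statistics.mean,
    "median": statistics.median,
}

# The "multiverse": every combination of choices, each yielding one result.
results = {}
for (ex_name, keep), (sum_name, summarize) in itertools.product(
    exclusion_rules.items(), summaries.items()
):
    kept = [r for r in data if keep(r)]
    a = summarize([r["rt"] for r in kept if r["condition"] == "A"])
    b = summarize([r["rt"] for r in kept if r["condition"] == "B"])
    results[(ex_name, sum_name)] = b - a  # effect: condition difference

for choices, effect in sorted(results.items()):
    print(choices, round(effect, 1))
```

Inspecting how the effect varies across the six "universes" (rather than reporting just one of them) is what lets a reader judge the robustness of the finding.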
Tom Heyman is an assistant professor of Methodology & Statistics at the Psychology Institute of Leiden University. His research topics include multiverse analysis, sequential testing, online data collection, and open data.
Simulation studies are an essential tool in methodological research. A single simulation study can influence the analyses of thousands of subsequent empirical studies. With great power comes great responsibility. Open research practices are becoming more and more prevalent in empirical research. We argue that they are also crucial in methodological research. We put forward ten reasons to start replicating simulation studies in order to ensure a robust foundation for data analytical decisions. We emphasize that replicating simulation studies is an opportunity that the quantitative methodology field should prioritize and present tools as well as a roadmap.
Anna Lohmann is a research statistician at EAH Jena. Her research focuses on the replicability, applicability and generalizability of simulation based methodological research.