Live Symposium on Transparent Machine Learning
A dive into model explainability & ethics

Have you always wondered what is behind an algorithm? What are the risks and challenges of relying on one? And how you can explain your machine learning and AI models in a more comprehensible way? Find out all about it and join us on the 5th of October, 2022.

With the number of AI, machine learning and deep-learning algorithms growing faster than ever, there is also a growing demand for explainability of these methods and closer scrutiny of the ethics surrounding how models are built and used for prediction.

On the 5th of October, the Statistics Communication and Data Science sections of the VVSOR are organizing an afternoon filled with talks on Transparent Machine Learning. Afterwards, stay for the included after-symposium drinks!

Location: Social Impact Factory, a 5-minute walk from Utrecht Central Station.

Short schedule: 

13.30 – 14.00: Walk-in
14.00 – 17.15: Symposium
17.15 – 18.00: Drinks

Speakers:

Hinda Haned – On the challenges of bringing explainable AI to practice

Providing explanations about how a machine learning model produced a particular outcome has the potential to enhance end-users’ trust and their willingness to adopt the model for high-stakes applications. Recent years have seen a surge in research on explaining ML-powered systems, but very little of this body of work evaluates how useful the provided explanations really are from a practical (user-centered) perspective. In this talk, I will discuss some of the challenges of bringing explainability into practice, and why it needs to be thought of as a process rather than a product.
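
For readers unfamiliar with local explanation methods, here is a minimal sketch of the kind of explanation the talk concerns: attributing a single model prediction to its input features. It uses the open-source shap library; the model, data and feature names are toy placeholders, not material from the talk.

```python
# Minimal sketch: local feature attributions for one prediction.
# Toy data and model; only the mechanics are illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # 4 hypothetical features
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)                # exact attributions for tree ensembles
phi = explainer.shap_values(X[:1])                   # contributions for a single prediction

for name, value in zip(["x0", "x1", "x2", "x3"], phi[0]):
    print(f"{name}: {value:+.3f}")                   # per-feature contribution to the output
```

Producing such attribution scores is the easy part; whether they actually help an end user is, as the talk argues, a separate question.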

Hinda Haned is a professor by special appointment at the University of Amsterdam where she researches fair, transparent and accountable AI. She is also the scientific co-director of the Civic AI Lab, an ICAI lab dedicated to studying the application of artificial intelligence in the fields of education, welfare, the environment, mobility and health.

Tamilla Abdul-Aliyeva – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal

Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors used by the tax authorities to assess the risk of inaccuracy and/or fraud in the applications submitted. Amnesty International illustrates in its report how the use of individuals’ nationality resulted in discrimination based on nationality and ethnicity, as well as racial profiling.

Tamilla Abdul-Aliyeva works as a senior policy advisor and researcher at Amnesty International. Her focus areas include the use of technology by the government (for example, algorithmic decision-making systems) and its impact on human rights. Tamilla has a legal background and previously worked as a lawyer in the field of IT and privacy law.

Tim van Erven – The Limits of Explainable Machine Learning: Some Things Are Simply Impossible

When automated machine learning decisions lead to undesirable outcomes for users, methods from explainable machine learning can inform users how to change those decisions. It is often argued that such explanations should be robust to small measurement errors in the users’ features. We prove mathematically, however, that this type of robustness is impossible to achieve for any method that also gives useful explanations whenever possible. This explains undesirable behavior observed in practice for this type of explainable machine learning. This talk is based on joint work with Hidde Fokkema and Rianne de Heide.
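
As a toy illustration of the underlying tension (our own sketch, not an example from the talk): when a model has two acceptance regions, the nearest counterfactual, a common form of actionable explanation, can flip direction under an arbitrarily small measurement error.

```python
# Hypothetical decision rule: an application is accepted when |x| >= 1,
# i.e. the acceptance region is x <= -1 or x >= 1.

def nearest_counterfactual(x: float) -> float:
    """Closest accepted point for a rejected applicant measured at x."""
    return 1.0 if x >= 0 else -1.0

for x in (+0.01, -0.01):  # two noisy measurements of the same applicant
    print(f"measured x = {x:+.2f} -> advice: move x to {nearest_counterfactual(x):+.1f}")

# A measurement error of 0.02 flips the advice from "increase x to +1"
# to "decrease x to -1": useful explanations need not be robust.
```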

Tim van Erven is an associate professor at the Korteweg-de Vries Institute for Mathematics at the University of Amsterdam in the Netherlands. His research focuses on machine learning theory, with a particular interest in adaptive sequential prediction and, more recently, explainable machine learning.

Bas van der Velden – Explainable artificial intelligence (XAI) in medical imaging

Explainable AI in medicine provides meaningful information to explain automated decisions based on medical data. The black-box nature of deep learning impedes this. In this talk, dr.ir. Bas van der Velden will discuss XAI in medicine, potential limitations of saliency maps, and other explainable AI models that do not suffer from the same limitations.
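
For context, a saliency map highlights the input pixels to which a model’s output is most sensitive. Below is a minimal sketch of the plain-gradient variant; the network and the random image are placeholders standing in for a trained model and a real scan.

```python
# Minimal gradient-based saliency map; CNN and image are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                   # stand-in for a trained imaging model
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # fake 64x64 grayscale scan

score = model(image)[0, 1]               # logit of the (assumed) "disease" class
score.backward()                         # gradient of the score w.r.t. every pixel

saliency = image.grad.abs().squeeze()    # large values = pixels the score reacts to
print(saliency.shape)                    # torch.Size([64, 64])
```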

Bas van der Velden is a senior postdoctoral researcher at UMC Utrecht, interested in predicting the outcomes of cancer patients using advanced image analysis, with a focus on eXplainable Artificial Intelligence (XAI).

Marjolein Fokkema – Bridging interpretable and explainable machine learning with trees and rules

Decision trees and rules are inherently interpretable tools for statistical prediction. However, when compared to black-box prediction methods such as random forests or (deep) neural networks, they tend to provide lower predictive accuracy. Interestingly, the predictions of highly accurate black-box models can be used to improve the predictive accuracy of tree- and rule-based methods. At the same time, tree- and rule-based methods can be used to explain the predictions of black-box models. As such, inherently interpretable models may gain some strength from highly accurate black-box models, and vice versa. In this talk, I will discuss how this can be attained using model-based data generation.
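
As a rough flavor of the idea (our own simplified sketch, not the speaker’s exact procedure): generate extra inputs, let the black box label them, and fit a small tree to those labels.

```python
# Minimal sketch: distill a black-box forest into a readable surrogate tree.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Model-based data generation in its simplest form: resample points near
# the training data and let the black box provide the labels.
rng = np.random.default_rng(0)
X_new = X[rng.integers(len(X), size=2000)] + rng.normal(scale=0.1, size=(2000, 5))
y_new = black_box.predict(X_new)

surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_new, y_new)
print(export_text(surrogate))  # human-readable rules approximating the forest
```

The surrogate stays small and inspectable while borrowing accuracy from the forest it imitates.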

Marjolein Fokkema is an assistant professor at the Methodology & Statistics unit at Leiden University. She works on statistical modelling (or as we nowadays call it: machine learning or artificial intelligence) and psychological assessment.

Contact:

If you have any questions, please contact us via statisticscommunication@vvsor.nl.

Registration:

This event is for VVSOR members only. If you are not a member: membership comes with many benefits, and many groups are eligible for a discounted fee (starting at €0)! You can find more information about memberships here.

Published on: August 12, 2022