Building trust in AI

  • LOCATION

    Dataworkz offices

  • DATE

    29 Jun, 2020

  • TIME

    22:00

  • COST

    free

In recent times, our data models and our A.I. systems have become more important than ever.
As we have all experienced during the current pandemic, our decisions directly depend on models. We have learned that A.I. systems that decide for us enable possibilities we never dared to dream of, be it microcredits, proactive healthcare or predictive maintenance on airplanes. It has therefore become the responsibility of engineers to build systems we can trust. But how do we do that?

Time: June 30th, 19:00 - 20:00
Location: This is a virtual meetup. We will communicate details to all participants shortly before the meetup.

Talk 1: Agile Data Ethics in Practice

Robert de Snoo is founder and board member of the Human & Tech institute. He has worked in the ICT sector for many years, mainly on technological innovation. Drawing on his experience with the ThinkTank Innovation & Trust and the Taskforce Socially Responsible Data Use, he will share how organizations struggle with and handle ethical data use, how this is incorporated into agile working, and what the ideas are regarding responsible use of Artificial Intelligence.

The Human & Tech institute (HTi) believes that the business value and the social value of data are extensions of each other. Based on this idea, we guide organizations towards taking full responsibility for fast-moving technological innovation. HTi started as ThinkTank Innovation & Trust, a collaboration project of Achmea, KPN, Rabobank and Royal Schiphol Group.

Talk 2: Trust in models matters

Models ought to be deployed only if they can be trusted. Data scientists can explain part of a model's intricacies to stakeholders using explainable AI packages such as LIME (https://github.com/marcotcr/lime) or SHAP (https://github.com/shaoshanglqy/shap-shapley). However, these approaches do not let you mitigate systematic biases that might not be desired, especially when these models provide input for decisions that concern individuals, such as access to financial products (loans, insurance), public subsidies and job opportunities. Currently there is a new wave of ethical AI packages that try to plug the gap. Google provides the What-If tool (https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html), with which you can visualize group facets (https://github.com/PAIR-code/facets) in its Tensorboard. IBM is promoting its extensive AI Fairness 360 package (https://github.com/IBM/AIF360), with which you can mitigate this bias. In this talk we will use Microsoft's Fairlearn. It might be less extensive than AI Fairness 360, but it does provide a more opinionated scikit-learn look and feel.
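
To give a taste of that scikit-learn look and feel, here is a minimal sketch (an illustration, not material from the talk itself) that computes per-group accuracy and selection rate on the Adult census data with Fairlearn's MetricFrame. It assumes pandas, scikit-learn and fairlearn (roughly 0.7 or later) are installed; the OpenML dataset id 1590 and the "sex" column are assumptions about how the Adult data is loaded.

```python
# Hedged sketch: disaggregated metrics with Fairlearn's MetricFrame.
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Load the Adult census data and prepare a binary "income > 50K" target.
adult = fetch_openml(data_id=1590, as_frame=True)
X = pd.get_dummies(adult.data)            # one-hot encode categorical columns
y = (adult.target == ">50K").astype(int)
sex = adult.data["sex"]                   # sensitive feature

X_train, X_test, y_train, y_test, sex_train, sex_test = train_test_split(
    X, y, sex, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=8, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Accuracy and selection rate, broken down by the sensitive feature.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test, y_pred=y_pred, sensitive_features=sex_test)
print(mf.by_group)
# Difference in selection rate across groups = demographic parity difference.
print(mf.difference())
```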

In this talk, we will discuss the AI fairness lingo, give an overview of the most relevant metrics, show a set of real-life examples in which this could have played a role, and introduce the most common categories of harm mitigators. Subsequently, we will show a small demo of how Fairlearn can be applied to the Adult census dataset in a classification setting.
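
As a rough idea of what such a mitigator does (again a hedged sketch, reusing the variables from the snippet above, not the talk's actual demo), the following wraps the same kind of classifier in Fairlearn's ExponentiatedGradient reduction under a DemographicParity constraint and compares the disparity before and after.

```python
# Hedged sketch continuing the snippet above: mitigate the disparity with
# Fairlearn's reductions API (ExponentiatedGradient + DemographicParity).
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference
from sklearn.tree import DecisionTreeClassifier

mitigator = ExponentiatedGradient(
    estimator=DecisionTreeClassifier(max_depth=8, random_state=0),
    constraints=DemographicParity())
mitigator.fit(X_train, y_train, sensitive_features=sex_train)
y_mitigated = mitigator.predict(X_test)

# Lower is better: 0 would mean equal selection rates across groups.
print("before mitigation:",
      demographic_parity_difference(y_test, y_pred, sensitive_features=sex_test))
print("after mitigation: ",
      demographic_parity_difference(y_test, y_mitigated, sensitive_features=sex_test))
```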

Event Speakers

Bert Wassink

Other Events