Collecting Ideas for the Innovation Booster on “Responsible AI”

At its end-of-year meeting, seven members of the Data Ethics Expert Group came together to discuss potential innovation projects in Responsible AI. Researchers and industry representatives alike addressed the question: “What do we see as the biggest challenge for research / in our company on the way to Responsible AI?” Below you find a short summary of the discussion points.

The “Innovation Booster” is the current vehicle of the data innovation alliance for pushing new innovative ideas that have the potential for new market solutions or that solve pressing problems in data-based value creation. Christian Hauser (Fachhochschule Graubünden) presented details on how the Booster works, based on the example of an ongoing project. This project intends to support decision-making within companies on various ethical and legal aspects of doing business with data.

In the next short presentation, Frank-Peter Schilling (ZHAW) presented activities of the new Center for AI. A focus of the ongoing activities concerns the certification of AI – an issue that will gain relevance due to the upcoming EU regulation of AI. The Center addresses the question of how to implement AI system evaluation at the technical level for high-risk AI. This is a pressing topic for various actors such as ISO, IEEE, EASA and the German Fraunhofer Institute. The goal of an ongoing Innosuisse research project, carried out at the Center for AI in collaboration with CertX AG, is to establish a workflow that supports the certification process.

Another domain where new challenges may emerge is the Metaverse. Eleonora Viganò (Fachhochschule Graubünden) presented the rising trend of companies creating their own “metaverses” or aiming to be present in the larger existing ones. Although these developments are still young and diverse, it is foreseeable that the economic relevance of the Metaverse will grow, while many legal and ethical issues remain unresolved. Her research aims to develop guidelines for desirable behavior in the Metaverse so that companies do not risk their reputation in this new domain.

Christoph Hauser (University of Applied Sciences Luzern) presented the topic of how generative AI may affect the creation of cultural assets, a highly relevant topic for the creative industries. Again, many unresolved legal and ethical issues, e.g. related to copyright or liability, may have an impact on the use of this technology. The presentation triggered a lively discussion, pointing to the fact that some companies (such as banks) internally block the use of generative AI completely to avoid any legal risks, even though many use cases would exist. The group reached the consensus that any solution making the use of generative AI safer in a legal sense would be of great help to such companies.

Anna Broccard, who works on data and digital ethics at SBB, presented ongoing activities within the company, some of them related to a project of Christian Hauser in which SBB is a partner. The current main goal is to establish suitable internal structures and decision-making processes that incorporate different perspectives and teams to address digital ethics issues.

Daniel Blank (ZKB) pointed to a pressing problem of data-driven risk rating – a key process for insurers and banks: nondiscrimination in risk rating. The main problem here is that applying fairness principles does not guarantee that the result will not be perceived as unfair by the public – and transparency about which fairness principles were used can actually aggravate the problem. This topic, too, generated a lively discussion, pointing to the problem that for many such decisions the lack of societal consensus on which fairness notions apply creates an inherent reputational risk. One possibility may be to factor the likelihood of reputational damage into current risk-rating models – but whether this is a feasible approach needs further consideration.
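The tension described above can be made concrete with a toy sketch (all numbers and group labels are hypothetical, not from the discussion): the same set of decisions can satisfy one widely used fairness principle while violating another, so disclosing which principle was applied invites disagreement rather than settling it.

```python
# Hypothetical illustration: two common fairness criteria disagree
# on the very same set of approval decisions.

def approval_rate(decisions):
    """Share of applicants in a group who were approved."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    """Share of *qualified* applicants in a group who were approved."""
    approved_qualified = sum(d for d, q in zip(decisions, qualified) if q)
    return approved_qualified / sum(qualified)

# Group A: 10 applicants, 5 qualified, all 5 qualified ones approved.
qualified_a = [1] * 5 + [0] * 5
decisions_a = [1] * 5 + [0] * 5

# Group B: 10 applicants, 8 qualified, but only 5 of them approved.
qualified_b = [1] * 8 + [0] * 2
decisions_b = [1] * 5 + [0] * 5

# Demographic parity compares approval rates per group:
# 0.5 vs 0.5 -> this criterion is satisfied.
print(approval_rate(decisions_a), approval_rate(decisions_b))

# Equal opportunity compares true-positive rates per group:
# 1.0 vs 0.625 -> this criterion is violated.
print(true_positive_rate(decisions_a, qualified_a),
      true_positive_rate(decisions_b, qualified_b))
```

A rating process tuned to equalize approval rates would thus still look discriminatory to anyone who judges it by equal opportunity – which is exactly the reputational dilemma raised in the discussion.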

Finally, Markus Christen (University of Zurich) outlined a project that has recently been submitted for funding. The project aims at an intercultural comparison of the adoption of AI solutions in various industry domains and in education between Switzerland and Ukraine. The goal is to assess how the existential threats experienced by a country at war affect both the assessment and the actual use of AI technologies, including those considered “high risk” from the standpoint of EU legislation.

The Expert Group will meet again on Thursday, February 1, in a public event at the Technopark Zürich. At this event, the challenges of the new EU regulation for startups will be discussed.