Great interest in AI regulation
By Markus Christen, UZH
The event “AI’ll be back – Consequences of AI regulation for startups”, jointly organized by the Data Ethics expert group and Technopark, generated a great deal of attention. More than 40 people listened to the speakers and engaged in a lively discussion on how the emerging regulation of AI may affect innovation.
The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. It imposes documentation, auditing, and process requirements on AI providers. What does that mean for AI startups in Switzerland? This question was the focus of an event organized by the Data Ethics expert group of the Data Innovation Alliance. More than 40 people attended the meeting on Thursday, February 1, at the Technopark Zürich.
First, Livia Walpen, Senior Policy Advisor International Relations at the Federal Office of Communications (OFCOM), outlined the current state of the EU AI Act and its possible consequences for Switzerland. The EU AI Act will create a harmonized legal framework for the EU’s internal market and will be relevant for both private and public actors. Its core approach is to classify AI systems according to the risk they involve; depending on the risk, different measures follow – from no obligations if the risk is minimal, up to prohibited practices if the risk is unacceptable. In practice, the most relevant distinction will be between “limited risk” and “high risk” systems, as the latter trigger various obligations: among others, adequate risk assessment systems, high-quality data sets, and appropriate human oversight measures. High-risk AI systems will require registration in a public EU database, and providers outside the EU will need to appoint an authorized representative in the EU. The AI Act will apply two years after its entry into force, i.e. in 2026 (with the exception of certain specific provisions).
For Switzerland, the new EU AI Act will have consequences for all Swiss companies and research institutions operating in the EU, as they will have to assess the conformity of their products in accordance with the conditions and procedures laid down in the Act. The AI Act might also influence the Swiss political debate and the Swiss regulatory approach to AI. In November last year, the Federal Council mandated an overview of possible regulatory approaches to AI in Switzerland. OFCOM, in close cooperation with various federal offices, is currently preparing a report outlining the different possible approaches for Switzerland, to be published by the end of this year.
Christoph Heitz, Founder and President of the Data Innovation Alliance, discussed how developers of AI applications in companies need to prepare already today, how the AI Act changes the job of developers, and how startups can obtain support for these new challenges. His main message to the audience consisted of three points: First, know what your AI application is actually doing. Second, AI regulation leads to innovation challenges, i.e., new solutions are required. Third, there is support to help companies master the transition. On the last point, he pointed to the “Innovation Boosters” run by the Data Innovation Alliance: the already established “databooster” and the new Booster “Artificial Intelligence”. Both booster programs support companies in developing innovative approaches to address the regulatory challenges for the concrete AI products they aim to develop.
Finally, Christoph Bräunlich, Head AI of BSI Software, presented a use case demonstrating how a code of AI conduct can help companies prepare for compliance with AI regulation. He outlined how BSI developed its “Code of Conduct AI”, inspired by the “Code of Ethics for Data-Based Value Creation” of the Data Innovation Alliance. A key element of their Code of Conduct is the role of an “Ethics Enabler” – a person within the company who structures discussions on the ethical issues of a specific AI innovation. After the three talks – the slides are available as a PDF download – a lively discussion emerged in the audience, moderated by Markus Christen, managing director of the Digital Society Initiative of the University of Zurich. Several people pointed to the practical issues that may arise when assessing the concrete risks of AI systems – certainly a point where both the regulator and companies will have to gain experience.