G7 sets the framework for use of artificial intelligence

Switzerland

On 30 October 2023, the G7 approved international guiding principles for artificial intelligence (AI) and a code of conduct for AI developers. The adoption forms part of the process initiated at the G7 Hiroshima summit to provide a framework for developments in AI.

Risk-based principles

The guiding principles consist of eleven points, including a reminder to implement appropriate data protection and intellectual property safeguards, and aim to frame the development of AI through a risk-based approach. The principles are supplemented by a code of conduct that shares the same risk-based approach.

Risk mitigation and AI content identification

The guiding principles call for appropriate measures to be taken throughout the development (i.e. lifecycle) of AI systems with the aim of identifying, assessing and mitigating risks (typically threats to digital or even physical security), in particular through independent internal and external controls. The principles also call for addressing risks of misuse after deployment (including once a system has been placed on the market) and, as far as possible, for techniques enabling users to identify AI-generated content.

Transparency and international standards

The principles also propose ensuring transparency through public reporting on the capabilities, limitations and areas of use of AI systems. This transparency should be enhanced by sharing information and reporting incidents among AI developers. Such sharing is also recommended with partners in industry, government, civil society and academia. In line with this, the principles stress the importance of enhancing the safety of AI systems by promoting the development and, where appropriate, the adoption of technical norms (i.e. standards) at the international level. The principles also recall the potential of AI in the fight against the climate crisis and the role it could play in health and education.

Principles supplemented by conduct rules

The code of conduct for AI developers is intended, like the guiding principles, to be open-ended rather than exhaustive. It is aimed in particular at academic institutions, the private sector and public authorities, and draws on the OECD's AI principles. Its structure sets out and develops the eleven recommendations of the guiding principles, but remains general. On a more technical level, the code resembles a charter more than a code of conduct that a company could adopt for internal use. This is not surprising given that it is a text adopted by the members of the G7. Above all, the aim of the code and the guiding principles is to set a direction; they are not intended to serve as a ready-to-use manual.

General and voluntary framework

The principles and code of conduct can be adopted on a voluntary basis by interested parties. They will be supplemented by the legally binding EU rules on AI currently being finalised, which will have a far more tangible impact on businesses than the framework established by the G7.

Outlook

Because Switzerland is home to a number of centres of research excellence, it has a duty to monitor such international developments, particularly at the European level. The ever-increasing use of AI in many areas of activity, including finance, means that both the authorities and the private sector must stay abreast of developments in this field. It is in the industry's interest to anticipate the rules, proactively assess their impact and implement them, even where implementation is voluntary.

For more information on AI and FinTech regulations in Switzerland, contact your CMS client partner or local CMS expert:

Dr Vaïk Müller

Partner