Documenting high-risk AI: a European regulatory perspective
The increasing adoption of Artificial Intelligence (AI) systems in high-stakes applications brings new opportunities for innovation, economic growth and the digital transformation of society. However, it also carries risks to the safety, health and fundamental rights of people, highlighting an urgent need for the systematic adoption of trustworthy AI practices. Transparency is key to building trust in AI systems, as it facilitates their understanding and scrutiny. This article discusses the transparency obligations introduced in the AI Act, the recently proposed European regulatory framework for Artificial Intelligence. Specifically, we examine the requirements placed on providers of high-risk AI systems regarding the provision of information to users and technical documentation. We then analyse the extent to which current approaches to AI documentation satisfy these requirements, assessing their suitability as a basis for future technical standards and making recommendations for their potential development in this direction.
Email Address of Submitting Author: isabelle.firstname.lastname@example.org
ORCID of Submitting Author: 0000-0002-9811-9397
Submitting Author's Institution: European Commission