The EU AI Act: What We Know So Far
This month, December 2023, the European Parliament and the Council reached a political agreement on the EU AI Act, the EU's first comprehensive law on artificial intelligence. Negotiations have been running since June 2023 and many details are still up in the air: a final text is expected in 2024, and the regulation is expected to apply from 2026. Just as the GDPR did in 2018, this new regulation is expected to shake up the business world, not only in the EU but worldwide. Here is what we know so far – and what your organization can do to prepare for it now.
Summary of the EU AI Act
- World’s First AI Regulation: The EU has introduced the AI Act, which is the world’s first comprehensive regulation on artificial intelligence. This law aims to regulate the development and use of AI technology to ensure better, safer, and fairer conditions for its users.
- AI Risk Classification: The AI Act classifies AI systems based on the risk they pose to users. This risk-based approach will determine the level of regulation. The categories include unacceptable risk, high risk, and limited risk.
- Unacceptable Risk: AI systems posing unacceptable risks, such as cognitive behavioral manipulation, social scoring, and real-time biometric identification, will be banned. Some exceptions may be allowed for specific purposes with court approval.
- High-Risk AI: High-risk AI systems, which impact safety or fundamental rights, will have strict regulations. This includes AI used in products like toys, aviation, and medical devices, as well as AI systems in specific areas like law enforcement and education. These systems will be assessed before entering the market.
- Limited Risk: AI systems with limited risks must comply with minimal transparency requirements, allowing users to make informed decisions. Users should be aware when they are interacting with AI, especially in cases involving image, audio, or video content generation.
- Transparency for Generative AI: Generative AI, like ChatGPT, must comply with transparency requirements, including disclosing AI generation, preventing illegal content generation, and publishing summaries of copyrighted data used for training.
- Penalties for Infringement: As with the GDPR, penalties will be either a fixed amount or a percentage of the company’s turnover, whichever is higher: €35 million or 7% of turnover for violations involving banned AI applications; €15 million or 3% for other violations of the AI Act; and €7.5 million or 1.5% for supplying incorrect information.
This regulation is designed to ensure the safety, transparency, and ethical use of AI technology within the EU.
How to prepare your organization for the EU AI Act – if you use AI
- Transparency and Disclosure:
  - Clearly disclose to users or customers when AI technology is in use. Transparency is a key requirement under the EU AI Act.
  - Ensure that users are aware that the content they are interacting with has been generated by AI (a short code sketch after this checklist shows one way to label AI-generated output and keep a record of interactions).
- Content Review and Moderation:
  - Implement mechanisms for reviewing and moderating AI-generated content to prevent the dissemination of illegal, harmful, or inappropriate information (another sketch after this checklist illustrates a simple moderation gate).
  - Establish guidelines and policies for content moderation and ensure they align with the AI Act’s requirements.
- Data Privacy and Security:
  - Ensure that user data is handled in compliance with data protection regulations, such as GDPR, especially if user data is processed in conjunction with AI interactions.
  - Safeguard user information and take steps to protect it from unauthorized access or breaches.
- Ethical Use:
  - Develop and adhere to ethical guidelines for the use of AI in customer interactions. Ensure that AI is not used for discriminatory or harmful purposes.
  - Monitor AI interactions to prevent instances of bias or unethical behavior.
- Legal Compliance:
  - Understand the legal obligations and responsibilities when using AI services provided by third-party providers. Ensure that AI services comply with relevant laws and regulations.
  - Verify that the AI service provider is also compliant with the EU AI Act and other relevant regulations.
- User Consent:
  - Obtain informed consent from users regarding the use of AI in their interactions. Clearly explain how AI will be used and the purpose it serves.
- Documentation and Record Keeping:
  - Maintain records of AI interactions and user consent to demonstrate compliance with the AI Act and other regulations.
  - Document the use of AI, including the specific AI model and its capabilities.
- Monitoring and Auditing:
  - Implement systems for monitoring AI interactions to ensure that they continue to meet compliance and ethical standards.
  - Conduct regular audits to assess the impact and effectiveness of AI usage.
- Customer Support and Feedback:
  - Provide channels for users to seek assistance or report concerns related to AI-generated content.
  - Actively collect feedback from users to improve AI interactions and address any issues.
- Continual Adaptation:
  - Be prepared to adapt and update AI usage policies and practices in response to changes in regulations or evolving best practices.
  - Stay informed about updates to the EU AI Act and other relevant regulations.
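
To make the transparency, consent, and record-keeping points above concrete, here is a minimal Python sketch of how an engineering team might label AI-generated output, capture user consent, and keep an audit trail of AI interactions. The function names, record fields, and JSON-lines log file are illustrative assumptions, not requirements of the AI Act; the right design depends on your stack and your legal advice.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

# Illustrative storage only; a real system would likely use a database with access controls.
AUDIT_LOG = Path("ai_interaction_log.jsonl")

def log_interaction(user_id: str, consent_given: bool, prompt: str,
                    ai_response: str, model_name: str) -> dict:
    """Record one AI interaction: who, when, which model, and whether consent was captured."""
    record = {
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "consent_given": consent_given,   # informed consent obtained before the interaction
        "model_name": model_name,         # document which AI model produced the output
        "prompt": prompt,
        "response": ai_response,
        "ai_generated": True,             # explicit flag supporting transparency obligations
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

def present_response(ai_response: str) -> str:
    """Attach a visible disclosure so users know the content is AI-generated."""
    return f"[AI-generated response] {ai_response}"
```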
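
For the content review and moderation point, here is another minimal Python sketch: a gate that checks AI output before it is shown to a user. The blocked-terms list and function names are placeholders for illustration; a real setup would rely on a proper moderation model, service, or human-review process defined by your content policy.

```python
# Illustrative only: a trivial policy check standing in for a real moderation service.
BLOCKED_TERMS = {"example-banned-phrase"}  # placeholder terms, not an actual policy

def passes_content_policy(text: str) -> bool:
    """Return True if the text contains none of the blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def deliver_ai_response(ai_response: str) -> str:
    """Deliver only AI output that passes the policy check, labeled as AI-generated."""
    if passes_content_policy(ai_response):
        return f"[AI-generated response] {ai_response}"
    return "This response was withheld because it did not meet our content guidelines."
```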
Sources:
EU AI Act: First regulation on artificial intelligence (EU Parliament)
7 things to know about the political agreement on the AI Act by Emerald De Leeuw-Goggin, Head of Privacy at Logitech