Navigating AI Regulations: What Business Professionals Need to Know
As countries worldwide work to regulate artificial intelligence (AI), businesses face an imminent wave of new rules governing AI practices. But what exactly are regulators looking for?
At present, most countries lack comprehensive laws specifically dedicated to AI. China is a notable exception: its Interim Measures for the Management of Generative Artificial Intelligence Services take effect on August 15, though even these, as the name indicates, are provisional.
In Canada, the AI and Data Act is progressing as part of Bill C-27. However, many of its practical details have yet to be set out and will be developed and implemented over a two-year period, so a complete picture of the law's requirements is still some way off.
On the other hand, the European Union's AI Act provides a more detailed framework, and ongoing negotiations are expected to conclude by year-end. This rulebook may include a two-year transition period for organisations to meet their obligations.
In the United States, the National Institute of Standards and Technology's AI Risk Management Framework offers guidance for trustworthy AI development, and the White House Blueprint for an AI Bill of Rights provides insight into the Biden administration's objectives. Neither, however, imposes binding regulatory requirements.
Given this landscape, where do businesses currently stand?
The U.S. Federal Trade Commission (FTC) has provided some answers, as have numerous regulators globally. FTC Commissioner Alvaro Bedoya emphasised that generative AI is indeed regulated, with a focus on its impact on everyday individuals. He also highlighted the importance of transparency and accountability. FTC Chair Lina Khan echoed these sentiments.
As regulators worldwide initiate investigations into AI systems and publish guidelines, they offer valuable insights into their expectations. Although these insights haven't yet been tested in legal proceedings, they highlight regulators' focal points and the questions businesses should address.
By the time concrete laws are enacted, businesses will have invested significantly in AI systems. Therefore, it's prudent to develop AI governance structures that align with regulators' queries.
While some answers might not be immediately clear, it's essential to be prepared. Technologies and governance practices are rapidly evolving. By knowing the questions regulators are asking and establishing an AI governance program that can respond effectively, businesses can ensure smoother compliance. It's far more cost-effective to build robust systems from the start than to attempt retroactive modifications.
So, what are the questions regulators are currently posing? What should businesses anticipate?
**Risk Assessment and Mitigation**
Regulators are primarily concerned with understanding the risks posed by AI in various domains and the measures organisations have taken to mitigate them. These questions often revolve around risk assessments, including:
1. Have you evaluated the impact of AI on your operations?
2. What policies and procedures do you have in place to assess AI risks and ensure safety?
3. Who is responsible for assessing AI risks and overseeing risk mitigation? What expertise do they possess?
These questions underline the crucial role of professionals in implementing policies effectively. For AI developers, training and retraining models are key to risk reduction. Regulators seek information on training methods, oversight, and trainers' qualifications.
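To make these questions concrete, here is a minimal sketch, in Python, of how an organisation might record its answers in an internal AI risk register. The structure, field names, and example values are illustrative assumptions, not a regulatory template:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRiskAssessment:
    """One entry in a hypothetical internal AI risk register."""
    system_name: str             # the AI system under review
    intended_use: str            # what the system is deployed to do
    identified_risks: list[str]  # e.g., bias, privacy, safety
    mitigations: list[str]       # controls adopted for each risk
    owner: str                   # person accountable for this assessment
    owner_expertise: str         # why the owner is qualified to assess
    next_review: date            # assessments should be revisited regularly

# Example entry: a CV-screening model flagged for bias risk.
entry = AIRiskAssessment(
    system_name="cv-screening-v2",
    intended_use="Rank job applications for human review",
    identified_risks=["disparate impact on protected groups"],
    mitigations=["annual bias audit", "human review of all rejections"],
    owner="Head of People Analytics",
    owner_expertise="Certified in algorithmic auditing",
    next_review=date(2024, 3, 1),
)
```

A register like this maps directly onto the three questions above: what was assessed, what controls exist, and who is responsible and why they are qualified.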
**Transparency**
Regulators across the EU, UK, and US emphasise the importance of transparency in data processing by AI systems and comprehensible explanations of their operations. They ask:
1. How do you ensure transparency in data processing activities, including the logic behind them?
2. How is information on data processing communicated to individuals?
3. How do you communicate information about data subject rights?
This underscores the significance of clear communication and accessible information regarding AI systems' functions and data processing methods.
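One practical pattern is to keep transparency information as structured data, so a single source of truth can feed a privacy notice, a help page, and a regulator's questionnaire alike. A minimal sketch, with wholly illustrative field names and wording:

```python
# Illustrative structured record of one processing activity
# (not a legal template).
PROCESSING_ACTIVITY = {
    "purpose": "Personalise product recommendations",
    "logic": "A model scores items against your past purchases",
    "data_used": ["purchase history", "browsing events"],
    "rights": ["access", "rectification", "erasure", "objection"],
}

def render_notice(activity: dict) -> str:
    """Render a plain-language notice from the structured record."""
    return (
        f"We use your {' and '.join(activity['data_used'])} to "
        f"{activity['purpose'].lower()}. {activity['logic']}. "
        f"You may exercise these rights: {', '.join(activity['rights'])}."
    )

print(render_notice(PROCESSING_ACTIVITY))
```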
**Sourcing Data**
Regulators are inquiring about the data sources used to train AI models, considering factors like accuracy, bias, and legality:
1. What data sources are you using to train AI models?
2. How did you obtain the data?
3. Are you using publicly available data from the internet? How are you collecting, using, and retaining it?
These questions highlight the importance of data quality, legality, and sourcing practices.
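A hedged sketch of how provenance answers might be recorded and enforced: each training source carries a documented origin, collection method, and lawful basis, and sources lacking a documented basis are excluded before training. The record structure and the gate are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """Hypothetical provenance record for one training data source."""
    name: str
    origin: str              # e.g., licensed vendor, first party, web scrape
    collection_method: str   # how the data was actually obtained
    lawful_basis: str        # documented legal justification, or "" if none
    publicly_scraped: bool   # flags sources needing extra review

def approved_for_training(sources: list[DataSource]) -> list[DataSource]:
    """Admit only sources with a documented lawful basis."""
    return [s for s in sources if s.lawful_basis]

corpus = [
    DataSource("forum-posts-2022", "public web", "robots.txt-aware crawler",
               "legitimate interests (assessment on file)", True),
    DataSource("mystery-dump", "unknown", "unknown", "", False),
]
print([s.name for s in approved_for_training(corpus)])  # ['forum-posts-2022']
```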
**Data Protection**
Regulators are focusing on privacy issues and potential data breaches in AI products and services. They ask about personal data processing, including:
1. What personal data is your organisation collecting, using, and retaining?
2. Why are you using each type of personal information?
3. How long do you retain each type of personal data?
4. What legal basis justifies your data processing?
These questions stress the importance of understanding and complying with data protection laws.
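Retention answers are easier to give when the retention schedule is machine-checkable. A minimal sketch, assuming a simple schedule keyed by data category; the categories and periods are illustrative, not recommendations:

```python
from datetime import date, timedelta

# Hypothetical retention schedule: data category -> maximum retention.
RETENTION_SCHEDULE = {
    "account_profile": timedelta(days=6 * 365),
    "support_emails": timedelta(days=2 * 365),
    "model_training_logs": timedelta(days=90),
}

def overdue_for_deletion(category: str, collected_on: date,
                         today: date) -> bool:
    """True if a record has exceeded its scheduled retention period."""
    return today - collected_on > RETENTION_SCHEDULE[category]

# A training log from January is past its 90-day limit by August.
print(overdue_for_deletion("model_training_logs",
                           date(2023, 1, 2), date(2023, 8, 1)))  # True
```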
**Individual Requests**
Regulators are keen to understand how individuals can interact with companies using AI products, particularly in terms of privacy rights:
1. How does your organisation respond to individual complaints about AI-based data processing?
2. How can individuals opt out of data collection, use, and transfer?
These questions highlight the need for businesses to provide accessible mechanisms for individuals to exercise their rights.
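A minimal sketch of one such mechanism: an opt-out register consulted before any data reaches a training pipeline. It assumes a shared user identifier joins customer records to training data, which is an illustrative simplification:

```python
# Hypothetical opt-out register keyed by user ID.
OPTED_OUT: set[str] = set()

def record_opt_out(user_id: str) -> None:
    """Register an individual's objection to further processing."""
    OPTED_OUT.add(user_id)

def training_rows(rows: list[dict]) -> list[dict]:
    """Exclude opted-out individuals before data reaches training."""
    return [r for r in rows if r["user_id"] not in OPTED_OUT]

record_opt_out("u-123")
rows = [{"user_id": "u-123", "text": "..."},
        {"user_id": "u-456", "text": "..."}]
print(training_rows(rows))  # only u-456 remains
```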
**Automated Decision-Making, Discrimination, and Civil Rights**
Existing rules on automated decision-making and discrimination give regulators relatively clear grounds for their questions:
1. Are you using AI for automated decisions with legal effects?
2. How do you ensure individuals are not unfairly affected by automated decisions?
3. Are your AI products producing discriminatory outcomes that could implicate civil rights laws?
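One widely used first screen for the discrimination question is the "four-fifths rule", which compares selection rates across groups. A minimal sketch follows; the 80% threshold is a common heuristic, not a legal test, and the group data is invented for illustration:

```python
def selection_rate(decisions: list[bool]) -> float:
    """Fraction of positive (e.g., approved) outcomes in a group."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a: list[bool], group_b: list[bool]) -> bool:
    """True if the lower selection rate is at least 80% of the higher,
    a common screening heuristic for disparate impact."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1] >= 0.8

approved_a = [True, True, False, True]    # 75% selected
approved_b = [True, False, False, False]  # 25% selected
print(four_fifths_check(approved_a, approved_b))  # False: investigate
```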
In the UK, regulators are particularly focused on the potential and pitfalls of generative AI, and each has published guidance on its intended approach to managing them.
In March, the UK government released a white paper outlining its stance that the country's established regulators for online platforms and digital services are well placed to address the risks presented by AI systems.
Working together under the Digital Regulation Cooperation Forum (DRCF), the UK's four digital regulators — the Information Commissioner's Office (ICO), Competition and Markets Authority (CMA), Office of Communications (Ofcom), and Financial Conduct Authority (FCA) — have established a more cohesive and coordinated regulatory approach to the digital landscape and AI systems.
A point worth noting, and one that highlights the widespread influence of privacy and data protection governance, is that the Information Commissioner chairs the DRCF. This reflects the growing role of privacy regulators in enforcement actions, which often involve breaches of privacy and data protection by AI developers. Each of the four regulators is already active:
- Ofcom is addressing issues such as scams, phishing, and "fake news", all of which can be amplified by generative AI. It is engaging with companies that develop generative AI tools and are subject to the Online Safety Bill, to evaluate safety risks and implement effective mitigation measures.
- The ICO is engaging with generative AI developers on relevant data protection law and has published a set of questions to guide developers processing personal data. Like any entity under the UK GDPR, developers need to establish a lawful basis for processing, determine their role as controller or processor, carry out a data protection impact assessment, ensure transparency and address security risks, implement a data minimisation policy, develop procedures for handling individual rights requests, and understand their obligations when using generative AI for automated decision-making.
- The CMA is conducting a review to establish guiding principles for competition and consumer protection as foundation AI models evolve; its findings are expected to be published in September 2023.
- The FCA is actively shaping its approach to AI regulation in the financial services sector. The FCA's evolving strategy includes regulating cloud service providers, bolstering fraud prevention and cyber resilience among financial firms using AI products, setting up a Digital Sandbox to foster secure fintech innovations, enhancing technology for financial supervision, and collaborating internationally to exchange insights on complex AI regulatory matters.
AI is a complex and emerging area for data protection regulation and compliance. If you would like to know how to integrate AI safely into your business operations, book a free consultation with one of our consultant DPOs.