After months of deliberation, the European Union reached a tentative agreement on historic rules governing artificial intelligence, covering both governments' use of AI for biometric surveillance and the regulation of AI systems such as ChatGPT.
With the political accord, the EU moves closer to becoming the first major global power to enact AI legislation. The deal between members of the European Parliament and EU member states was reached on Friday after roughly 15 hours of negotiations, which followed a nearly 24-hour debate the previous day.
The final legislation may still change as the two sides work out the details in the coming days.
"Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. This is, yes, I believe, a historical day," European Commissioner Thierry Breton told a press conference.
The agreement requires foundation models such as ChatGPT and general-purpose AI (GPAI) systems to meet transparency obligations before they are placed on the market. These include drawing up technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used for training.
High-impact foundation models posing systemic risk will have to conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.
GPAI systems with systemic risk may rely on codes of practice to comply with the new rules.
Governments may use real-time biometric surveillance in public spaces only to search for victims of certain crimes, to prevent genuine, present, or foreseeable threats such as terrorist attacks, and to locate suspects in the most serious crimes.
The agreement bans social scoring, the untargeted scraping of facial images from CCTV footage or the internet, cognitive behavioural manipulation, and the use of biometric categorisation systems to infer race, sexual orientation, or religious or political beliefs.
Fines for violations would range from 7.5 million euros ($8.1 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, and consumers would have the right to lodge complaints and receive meaningful explanations.
The business organisation DigitalEurope criticised the rules, arguing that they add to the already heavy burden of recent legislation on companies.
"We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head," its Director General Cecilia Bonefeld-Dahl said.
European Digital Rights, a privacy rights organisation, was equally negative.
"It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc," its senior policy advisor Ella Jakubowska said.
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Once both parties formally approve the law, it is expected to enter into force early next year and become applicable two years after that.
Governments worldwide are trying to balance the benefits of the technology, which can write computer code, answer questions, and hold human-like conversations, against the need for regulation.
Europe's ambitious AI regulations come at a time when startups like Microsoft-backed OpenAI are finding new applications for their technology, earning praise and raising concerns. In response to OpenAI, Google's parent company Alphabet introduced Gemini, a new AI model, on Thursday.
The European Union law could serve as a model for other countries and as an alternative to China's interim rules and the United States' light-touch approach.
(Source:www.moneycontrol.com)