BRUSSELS - The European Union will need at least another year to figure out how to use its powerful digital law to limit the risks associated with ChatGPT. The regulators' slow pace contrasts sharply with the explosive growth in use of the artificial intelligence chatbot across Europe, as Politico reports.
OpenAI announced this month that it has crossed a key threshold for regulatory oversight: more than 120 million people use the ChatGPT search function each month. The news did not surprise the European Commission, two officials told Politico. Still, regulators have not yet figured out how to bring ChatGPT under the Digital Services Act (DSA), which took effect in early 2024 and is meant to ensure platforms minimize risks.
A decision is not expected until mid-2026, a senior Commission official said. The case tests the EU's ability to manage the risks of large language models, which are fast becoming as common as traditional search engines. The delay reflects the fact that the DSA was designed before ChatGPT's mainstream breakthrough, and its definitions do not fully cover AI chatbots.
Brussels thus risks falling behind the curve even as the risks become more acute, Politico reports. OpenAI recently admitted that 1.2 million people a week have conversations with ChatGPT that suggest suicide plans, and that "in some rare cases, the model may not behave as expected in these sensitive situations".
"For an industry accustomed to voluntary AI safety frameworks and proprietary benchmarks, the DSA's legally binding due diligence regime may be a rude awakening," said Mathias Vermeulen, director of the Brussels law firm and consultancy AWO Agency. "OpenAI will have to step up its game significantly and cannot get by with simply copying what it is doing now," he added.
The company did not respond to the criticism, instead pointing to its website with information about DSA compliance. It noted that the 120 million figure refers only to the search function, not the entire service.

How broadly to regulate?

ChatGPT is already subject to the AI Act. Since August, it has had to assess and mitigate risks, under threat of fines of up to €15 million if it fails to comply.
Now, however, user numbers have catapulted the chatbot into the league of the big platforms, well above the DSA's threshold of 45 million monthly users for very large online platforms and search engines (VLOPs/VLOSEs). Once designated, it faces fines of up to 6% of global annual turnover, as Politico notes.
ChatGPT was not "predicted" by the DSA but "fits its language", said Martin Husovec, Professor of Law at the London School of Economics.
The key issue is the scope of the designation, said Joris Van Hoboken, professor of AI governance at the Vrije Universiteit Brussel. The Commission can limit it to the search function (as a search engine) or extend it to the whole offering, as a platform or service.
OpenAI's obligations, and how they are enforced, depend on the Commission's decision. The company must assess the severity of the chatbot's risks, including its impact on elections and public health, mitigate them, and report to the Commission in a comprehensive compliance report. The broader the designation, the broader the report. The assessment covers the "design of recommender systems and other relevant algorithmic systems".
This could prove difficult if the Commission targets the whole language model rather than just the search content. A narrow designation would spare OpenAI from having to provide a mechanism for reporting and removing content. A mid-2026 designation would mean the DSA's obligations would not take effect until the last quarter of next year. OpenAI may also challenge the designation, which would prolong the process, as noted by João Pedro Quintais, Associate Professor of Information Law at the University of Amsterdam.
Key questions remain about the overlap between the AI Act and the DSA. The two laws were meant to coexist for AI integrated into digital services like Google AI Overviews, but not for vertically integrated providers like OpenAI, Quintais told Politico.
This development highlights the tension between innovation and regulation in the EU, where AI is becoming an everyday reality while legal instruments lag behind. As Politico points out, Brussels needs to move faster to keep pace with technologies that affect millions of Europeans.
gnews.cz - GH