The EU AI Act passed — here’s what comes next

    Karlston


    The EU’s sweeping AI regulations have (almost) passed their final hurdle.

European Union lawmakers have officially approved the bloc’s landmark AI regulation, paving the way for the EU to prohibit certain uses of the technology and demand transparency from providers. In a majority vote on Wednesday, 523 members of the European Parliament elected to formally adopt the Artificial Intelligence Act (AI Act), and the bloc will now work toward its enforcement and implementation.


    The AI Act has been hotly debated since it was first proposed in 2021, with some of its strictest regulations — such as a proposed total ban on biometric systems for mass public surveillance — being softened by last-minute compromises. While Wednesday’s announcement means the law has almost passed its final hurdle, it will still take years for some rules to be enforced.


The legal language of the text still awaits final approval, either via a separate announcement or a plenary session vote on April 10th or 11th. The AI Act will then officially come into force 20 days after it’s published in the Official Journal, which is anticipated to happen in May or June of this year. Provisions will then take effect in stages: countries will have six months to ban prohibited AI systems, 12 months to enforce the rules governing “general-purpose AI systems” like chatbots, and up to 36 months for AI systems the law designates as “high risk.”


Prohibited systems include social scoring, emotion recognition at work or in schools, and systems designed to influence behavior or exploit user vulnerabilities. Examples of “high-risk” AI systems include those applied to critical infrastructure, education and vocational training, certain law enforcement systems, and those that can be used to influence democratic processes like elections.


“In the very short run, the compromise on the EU AI Act won’t have much direct effect on established AI designers based in the US, because, by its terms, it probably won’t take effect until 2025,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, back in December 2023, when the EU provisionally agreed on the landmark AI regulation. So for now, Barrett says, major AI players like OpenAI, Microsoft, Google, and Meta will likely continue to fight for dominance, particularly as they navigate regulatory uncertainty in the US.


The AI Act got its start before the explosion in general-purpose AI (GPAI) tools like OpenAI’s GPT-4 large language model, and regulating them became a remarkably complicated sticking point in last-minute discussions. The act tiers its rules by the level of risk an AI system poses to society, or as the EU put it in a statement, “the higher the risk, the stricter the rules.”


But some member states grew concerned that this strictness could make the EU an unattractive market for AI. France, Germany, and Italy all lobbied to water down restrictions on GPAI during negotiations, and they won compromises, including limits on what can be considered a “high-risk” system subject to some of the strictest rules. Instead of classifying all GPAI as high-risk, the act establishes a two-tier system, along with law enforcement exceptions for otherwise prohibited uses of AI like remote biometric identification.


That still hasn’t satisfied all critics. French President Emmanuel Macron attacked the rules, saying the AI Act creates a tough regulatory environment that hampers innovation. Barrett said some new European AI companies could find it challenging to raise capital under the current rules, giving an advantage to American companies. Companies outside of Europe may even choose to avoid setting up shop in the region, or to block access to their platforms, so they don’t get fined for breaking the rules, a risk Europe has also faced in the non-AI tech industry following regulations like the Digital Markets Act and the Digital Services Act.


    But the rules also sidestep some of the most controversial issues around generative AI


For instance, AI models trained on publicly available (but sensitive and potentially copyrighted) data have become a big point of contention for organizations. The approved rules, however, do not create new laws around data collection. While the EU pioneered data protection laws through GDPR, its AI rules do not prohibit companies from gathering information, beyond requiring that collection comply with GDPR guidelines.


“Under the rules, companies may have to provide a transparency summary or data nutrition labels,” said Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub and a research professor of international affairs at George Washington University, when the EU provisionally approved the rules. “But it’s not really going to change the behavior of companies around data.”


    Aaronson points out that the AI Act still hasn’t clarified how companies should treat copyrighted material that’s part of model training data, beyond stating that developers should follow existing copyright laws (which leave lots of gray areas around AI). So it offers no incentive for AI model developers to avoid using copyrighted data.


    The AI Act also won’t apply its potentially stiff fines to open-source developers, researchers, and smaller companies working further down the value chain — a decision that’s been lauded by open-source developers in the field. GitHub chief legal officer Shelley McKinley said it is “a positive development for open innovation and developers working to help solve some of society’s most pressing problems.” (GitHub, a popular open-source development hub, is a subsidiary of Microsoft.)


Observers think the most concrete impact could be pressure on other political figures, particularly American policymakers, to move faster. It’s not the first major regulatory framework for AI: in July 2023, China passed guidelines for businesses that want to sell AI services to the public. But the EU’s relatively transparent and heavily debated development process has given the AI industry a sense of what to expect. Aaronson said the provisional text (which has since been approved) at least shows that the EU has listened and responded to public concerns around the technology.


    Lothar Determann, data privacy and information technology partner at law firm Baker McKenzie, says the fact that it builds on existing data rules could also encourage governments to take stock of what regulations they have in place. And Blake Brannon, chief strategy officer at data privacy platform OneTrust, said more mature AI companies set up privacy protection guidelines in compliance with laws like GDPR and in anticipation of stricter policies. He said that depending on the company, the AI Act is “an additional sprinkle” to strategies already in place.


    The US, by contrast, has largely failed to get AI regulation off the ground — despite being home to major players like Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. Its biggest move so far has been a Biden administration executive order directing government agencies to develop safety standards and build on voluntary, non-binding agreements signed by large AI players. The few bills introduced in the Senate have mostly revolved around deepfakes and watermarking, and the closed-door AI forums held by Sen. Chuck Schumer (D-NY) have offered little clarity on the government’s direction in governing the technology. 


    Now, policymakers may look at the EU’s approach and take lessons from it


    This doesn’t mean the US will take the same risk-based approach, but it may look to expand data transparency rules or allow GPAI models a little more leniency. 


    Navrina Singh, founder of Credo AI and a national AI advisory committee member, believes that while the AI Act is a huge moment for AI governance, things will not change rapidly, and there’s still a ton of work ahead. 


    “The focus for regulators on both sides of the Atlantic should be on assisting organizations of all sizes in the safe design, development, and deployment of AI that are both transparent and accountable,” Singh told The Verge in December. She adds there’s still a lack of standards and benchmarking processes, particularly around transparency. 


    The act does not retroactively regulate existing models or apps, but future versions of OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will need to take into account the transparency requirements set by the EU. It may not produce dramatic changes overnight — but it demonstrates where the EU stands on AI.


Update, March 12th, 8:30AM ET: Updated the original article following the AI Act’s formal adoption.


    Source



    Recommended Comments

Forget it, the US WILL make sure that Skynet becomes a US entity and rules the world with an iron fist and a pinky finger on the nuclear trigger.


When the planet is about to be destroyed (anyone can drive a car into a wall), we want to make sure it is done right.
