Europe is trying to regulate AI. That could backfire.

2024-03-14 08:52:06+00:00

The European Union is forging ahead with its plans to regulate artificial intelligence. On Wednesday, the European Parliament approved the Artificial Intelligence Act, with 523 votes in favor, 46 against, and 49 abstentions.

"Europe is NOW a global standard-setter in AI," Thierry Breton, the European internal market commissioner, said on X. "We are regulating as little as possible — but as much as needed!"

It is the first attempt at sweeping AI regulation by a major regulator to protect citizens from the technology's potential risks. Other countries, including China, have already introduced rules around specific uses of AI.

The legislation has been questioned by some commentators, such as AI and deepfakes expert Henry Ajder, who called it "very ambitious." While describing it as an overall positive step, he warned it risked making Europe less competitive globally.

"My concern is that we will see companies explicitly avoiding developing in certain regions where there is robust and comprehensive regulation," he told Business Insider. "There will be countries that almost act as AI policy tax havens where they explicitly avoid enforcing harsh legislation to try to attract certain kinds of organizations."

The act has been in the works for some time. First mooted in 2021, it was provisionally agreed in negotiations with member countries in December 2023.

The legislation assigns AI applications to three risk categories. Applications that pose an unacceptable risk will be banned, high-risk applications in the second category will be subject to specific legal requirements, and applications in the third category will largely be left unregulated.

Key milestone

Neil Serebryany, CEO of California-based Calypso AI, told BI that while the "Act includes complex and potentially costly compliance requirements that could initially burden businesses, it also presents an opportunity to advance AI more responsibly and transparently."

He called the legislation a "key milestone in the evolution of AI" and an opportunity for companies to consider social values in their products from the earliest stages.

The regulation is expected to come into force in May, provided it passes final checks. Implementation of the new rules will then be phased in from 2025.

How exactly the rules will apply to businesses is still relatively vague. Avani Desai, CEO of cybersecurity firm Schellman, said the act may have an impact similar to the EU's General Data Protection Regulation (GDPR) and require US companies to meet certain requirements to operate in Europe.

Companies uncertain about the rules can expect more detail on the specific requirements in the coming months as the European Commission establishes the AI Office and begins to set standards, said Marcus Evans at law firm Norton Rose Fulbright.

"The first obligations in the AI Act will come into force this year and others over the next three years, so companies need to start preparing as soon as possible to ensure they do not fall foul of the new rules," he added.