
AI Act risks stalling innovation

European tech companies express concerns over rushed AI legislation and its potential impact on innovation

In March, the European Parliament adopted the Artificial Intelligence Act, the world’s first comprehensive horizontal legal framework for AI. The act aims to safeguard fundamental rights, democracy, and environmental sustainability amid the rise of high-risk AI technologies. While its intent is to ensure ethical AI usage across Europe, many industry leaders believe the legislation was hastily introduced under pressure and could stifle innovation in the tech sector.

Denas Grybauskas, Head of Legal at Oxylabs, highlighted the challenges businesses might face with the new AI Act, stating, “As the AI Act comes into force, the main business challenge will be uncertainty in its first years. Various institutions, including the AI Office, courts, and other regulatory bodies, will need time to adjust their positions and interpret the letter of the law. During this period, businesses will have to operate in a partial unknown, lacking clear answers as to whether the compliance measures they put in place are solid enough.”

Grybauskas emphasised that the AI Act affects not only firms directly dealing with AI technologies but also the wider tech community. The act’s provisions could indirectly impose liabilities on third parties within the AI supply chain, such as data collection companies. Many AI systems today rely on machine learning models that require vast amounts of data for training. Although the AI Act does not specifically target data-as-a-service (DaaS) companies and web scraping providers, these firms might indirectly inherit certain ethical and legal obligations.

“A prime example is web scraping companies based in the EU, who will have to ensure they do not supply data to firms developing prohibited AI systems. If a company willingly cooperates with an AI firm that, under EU regulation, is breaking the law, such cooperation might bring legal liability. Moreover, web scraping providers will need to implement robust know-your-customer (KYC) procedures to ensure their infrastructure is used ethically and lawfully, verifying that an AI firm is collecting only the data it is allowed to collect, not copyright-protected information,” Grybauskas explained.

Another significant risk comes from the decision to grant exemptions for AI systems released under free and open-source licences. Grybauskas pointed out the lack of a consolidated definition of “open-source AI” and the potential for companies to misuse the term to claim legal exemptions. “There is no consolidated, single definition of ‘open-source AI’, and it is unclear how the widely defined open-source model might be applied to AI. This situation has already resulted in companies falsely branding their systems as ‘open-source AI’ for marketing purposes. Without clear definitions, even bigger risks will manifest if businesses start abusing the term to win legal exemptions,” he noted.

Despite its potential to establish trust across the industry, the AI Act could also hinder technological advancement. Grybauskas concluded, “The AI Act has the potential to establish trust across the industry but may also be detrimental to innovation across the technology industry. Organisations must be on their toes, as they may face penalties in the millions for severe violations involving high-risk AI systems.”

As the AI Act begins to shape the European tech landscape, companies must navigate the evolving regulatory environment carefully to balance compliance with continued innovation.