The AI Act, as amended, would, if passed, effectively prevent US companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models in the EU. The law would also expose open-source developers and software distributors such as GitHub to sanctions if generative AI models become available in Europe without first going through an extensive and costly licensing process, backed by heavy fines.
While the law provides exceptions for traditional machine learning models, it specifically prohibits making open-source generative systems available without a license. If the law is passed, enforcement will not rest solely with individual EU member states: under the Act's extraterritorial jurisdiction, third parties can sue national governments to compel the imposition of fines, meaning outside parties could force EU authorities into conflict with US developers and companies.
The law's scope is very broad, covering "providers of AI systems that are established or located in a third country, where the law of an EU member state applies by virtue of international law or where the output produced by the system is intended for use in the EU." Software developers and distributors are legally liable; with open-source software, responsibility falls on the company that uses or distributes it.
Developers must register both their AI system or foundation model and its intended functionality. If the declared functionality is exceeded, the license can be revoked, which will be a problem for many open-source projects. Registration also requires disclosure of the data sources used, computing resources, training time, performance benchmarks, and red-teaming results (adversarial testing of the system's resistance to attack and misuse).
Expensive risk testing will be required, and the list of risks is vague, covering potential harms to the environment, democracy, and the rule of law. Risk assessments will apparently be carried out by EU member states, scaled to the size of the applicant company, although the tests themselves have yet to be designed. The AI Act also requires post-release monitoring of the system (presumably by a government body), and if a model demonstrates unexpected capabilities, it must be recertified.
Using an API to build AI-based services without running the models on your own hardware could become extremely difficult in the EU. If new capabilities are discovered while an API is in use, they must be certified under the new rules, and the vendor must hand over confidential technical information to a third party to complete the licensing process. This provision invites abuse: a malicious actor could deliberately make API-derived software available in Europe, forcing the original vendor into licensing and disclosure.
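To make concrete what is at stake, here is a minimal sketch of the kind of hosted-API usage the Act would complicate: a developer calls a generative model over the network instead of running it locally. The client library, model name, and prompt are illustrative assumptions, not a requirement of any particular vendor.

    # A developer consuming a hosted generative model via a vendor API.
    # Requires an account and API key from the vendor.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model runs on the vendor's hardware, not the caller's
        messages=[{"role": "user",
                   "content": "Summarize the EU AI Act in one sentence."}],
    )
    print(response.choices[0].message.content)

Even this thin integration layer could trigger licensing obligations under the Act if the resulting service reaches EU users.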
If a US open-source developer publishes a model, or code that uses such an API, on GitHub and it becomes available in the EU, the developer is liable for publishing an unlicensed model, and GitHub is liable for hosting it. Currently, many US cloud providers do not restrict access to their model APIs, so any developer can access the latest technology at a reasonable price. The new law's restrictions would make using these APIs so complicated and expensive that they would effectively be available only to corporate customers.
Open-source projects often rely on LoRA (Low-Rank Adaptation) to fine-tune models, adding new information and capabilities incrementally and cheaply, since they cannot afford billions of dollars in computing infrastructure. Under the Act, such projects would need to be recertified each time they extend a model with LoRA, as the sketch below illustrates.
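As a rough illustration of why LoRA updates are so cheap and frequent, here is a minimal sketch using Hugging Face's peft library; the base model and hyperparameters are illustrative assumptions rather than any particular project's setup.

    # LoRA attaches small trainable low-rank adapter matrices to a frozen
    # base model, so each incremental update trains only a tiny fraction
    # of the weights.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in base model

    config = LoraConfig(
        r=8,                        # rank of the adapter matrices
        lora_alpha=16,              # scaling factor applied to the adapter output
        target_modules=["c_attn"],  # GPT-2's attention projection layer
        lora_dropout=0.05,
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of all weights

Under the Act, each such adapter run would count as extending the model's functionality and so would require recertification, turning a routine, low-cost update into a regulatory event.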
The AI Act also provides for deployment licensing: those deploying AI systems must pass rigorous eligibility checks before rollout. Small businesses in the EU are exempt from this obligation.
Penalties for non-compliance range from 2% to 4% of a company's global gross revenue; for a company with, say, €10 billion in annual revenue, that would mean fines of €200 million to €400 million. For private individuals, the fine can reach €20 million. European small and medium-sized companies, however, face no such fines, and if AI is used for research and development of clean energy systems or clean energy generation, compliance with the new law is not required at all.
The AI Act also runs counter to the aims of US regulators, and the measures it envisages could create serious antitrust problems for US companies. The high cost of model training already limits access to AI for small businesses and individuals, and the US Federal Trade Commission wants to prevent large companies from exploiting that position to capture most of the profits at the expense of smaller partners.
The AI Act could create more problems than its drafters anticipate. Software developers, especially in open source, are unlikely to respond well to bans and expensive certification requirements, and it is not hard to imagine GitHub and similar platforms deciding that dealing with the EU is too difficult and blocking access, with consequences that have not been carefully considered. Fortunately, the law has not yet been passed, and significant changes could still be made to it.