Potential EU AI Act regulation of open-source AI sparks Twitter debate



Alex Engler, research fellow at the Brookings Institution, never expected that his recent article, “The EU’s attempt to regulate open source AI is counterproductive,” would spark a Twitter debate.

According to Engler, as the European Union continues to debate the development of the Artificial Intelligence Act (AI Act), one of the steps it has considered is regulating open-source general-purpose AI (GPAI). The EU AI Act defines GPAI as “AI systems with a wide range of potential applications, both intended and unintended by the developers…these systems are sometimes referred to as ‘foundation models’ and are characterized by their widespread use as pre-trained models for other, more specialized AI systems.”

In his piece, Engler argued that while the proposal is intended to enable safer use of these artificial intelligence tools, it would “create legal liability for open-source GPAI models, undermining their development.” The result, he claimed, would “further concentrate power over the future of AI in major tech companies” and prevent scrutiny.

“It’s an interesting issue that I didn’t expect to get attention,” he told VentureBeat.

“Honestly, I’m always surprised to get press calls.”

But after Emily Bender, a professor of linguistics at the University of Washington and a regular critic of how AI is treated on social and mainstream media, wrote a thread about a piece that quoted from Engler’s article, a lively Twitter back-and-forth began.

The EU AI Act’s open-source provisions in the hot seat

“I haven’t studied the AI Act and I’m not a lawyer, so I can’t really say whether it will work well as a regulation,” Bender tweeted, though she asked later in her thread, “How do people get away with pretending, in 2022, that regulation isn’t necessary to divert innovation away from exploitative, harmful, unsustainable, etc. practices?”

Engler responded to Bender’s thread with one of his own. Broadly speaking, he said, “I’m in favor of AI regulation… yet I don’t think regulating the open-sourcing of models helps at all. What is better instead, and what the original European Commission proposal did, is to regulate when a model is used for something dangerous or harmful, regardless of whether it is open source.”

He also insisted that he does not want to exempt open-source models from the EU AI Act, but rather to exempt the act of open-sourcing AI. Making it harder to release open-source AI models, he argued, will not prevent those same models from being commercialized behind APIs. “We end up with more OpenAIs and fewer OS alternatives – not my favorite outcome,” he tweeted.

Bender responded to Engler’s thread by emphasizing that if part of the regulation’s purpose is to require documentation, “the only people who are able to actually thoroughly document training data are those who collect it.”

Perhaps this could be addressed by disallowing commercial products based on insufficiently documented models, leaving liability to the corporate interests doing the commercialization, she wrote, adding, “What about if HF [Hugging Face] or similar hosts GPT-4chan or Stable Diffusion and individuals download copies and then maliciously use them to flood various online spaces with toxic content?”

Obviously, she continued, the “Googles and Metas of the world must also be subject to strict regulations around the ways in which data can be collected and used. But I think there is enough danger in creating collections of data/models trained on them that OSS developers should not be allowed to have free rein.”

Engler, who studies the implications of AI and emerging data technologies for society, admitted to VentureBeat that “this issue is quite complicated, even for people who generally share fairly similar perspectives.” He and Bender, he said, “share a concern about where regulatory responsibility and commercialization should fall … it’s interesting that people with relatively similar perspectives end up in a slightly different place.”

The impact of open-source AI regulation

Engler made several points to VentureBeat about his view of the EU regulating open-source AI. First, he said that the regulation’s limited reach is a practical concern. “The EU requirements don’t affect the rest of the world, so you can still release this elsewhere and the EU requirements will have very minimal impact,” he said.

In addition, “the idea that a well-built, well-trained model that somehow meets these regulatory requirements would not be applicable for harmful use is simply not true,” he said. “I don’t think we have clearly shown that legal requirements and making good models will necessarily make them safe in malicious hands,” he added, pointing out that there is plenty of other software that people use for malicious purposes and that would be difficult to start regulating.

“Even software that automates interaction with a browser has the same problem,” he said. “When I try to create a lot of fake accounts to spam social media, the software that allows me to do that has been public for at least 20 years. So [the open-source issue] is a bit of a departure.”

Finally, he said, the vast majority of open-source software is made without the aim of selling it. “So it’s already an uphill battle, trying to build these big, expensive models that can even come close to competing with the big companies, and you’re also adding a legal and regulatory barrier,” he said.

What the EU AI Act will and won’t do

Engler stressed that the EU AI Act will not be a panacea for AI’s ills. What the act will generally help with, he said, is “preventing some sort of fly-by-night AI applications for things it can’t really do or does very badly.”

In addition, Engler thinks the EU is doing a pretty good job of trying to “sensibly solve a pretty difficult problem about the spread of AI in dangerous and high-risk areas,” adding that he would like to see the U.S. take a more proactive regulatory role in the space (although he credits the work of the Equal Employment Opportunity Commission on bias and AI hiring systems).

What the EU AI Act doesn’t really address is the creation and public availability of models that people simply use nefariously.

“I think that’s another question that the EU AI Act doesn’t really address,” he said. “I’m not sure if we’ve seen anything that’s stopping them from being out there, in a way that’s actually going to work,” he added, saying the open-source discussion feels a bit tacked on.

“If there was a part of the EU AI Act that said, hey, the spread of these big models is dangerous and we want to slow them down, that would be one thing – but it doesn’t say that,” he said.

Debate will certainly continue

It is clear that the Twitter debate surrounding the EU AI Act and other AI regulations will continue as stakeholders from across AI research and the industry spectrum voice their opinions on dozens of recommendations for a comprehensive AI regulatory framework that could become a model for a global standard.

And the debate continues offline, too: Engler said one of the European Parliament’s committees, advised by digital policy adviser Kai Zenner, plans to pass an amendment to the EU AI Act that would tackle the issue of open-source AI, as reflected in yet another tweet.
