The United States could end up wielding even more global influence over AI than it already has. This week, the U.S. Senate is debating a series of bills that would give the government far greater powers to regulate AI. Eager to have a say in it all is OpenAI, the company led by CEO Sam Altman. Tech companies with billions invested in them have no desire to twiddle their thumbs while politicians decide their future.
One of the most important bills under discussion is the Future of AI Innovation Act. If passed, it would create a national institute to establish conditions and guidelines for ‘responsible’ and ‘safe’ AI. Other bills deal with the use and regulation of AI in education and research.
Anna Makanju, Vice President of Global Affairs (note the ‘global’ in the title) at OpenAI, caused a stir on LinkedIn by stating that her company is squarely behind the mission of the newly created United States AI Safety Institute and the other bills currently before the Senate. “We want [this institute] to be the global leader in this emerging field, and we welcome its growing collaboration with its counterparts in other countries.”
Influence beyond the U.S. alone
Earlier this week, OpenAI signed an open letter to senators, including the chairman of the same committee that Makanju mentions and tags by name in her LinkedIn message, pledging support for the AI institute being founded. Other signatories included Amazon, Cohere, IBM, Meta, Microsoft, Palo Alto Networks and Salesforce. Google was conspicuously absent from the list.
OpenAI, like a host of other U.S. tech companies, wants to exert influence on the direction and policies of the Institute. This is understandable, as it may well determine the future of (the use of) AI, not only for the U.S. but also for the rest of the world. Companies and governments worldwide are hugely dependent on American technology. By declaring their support in advance, companies like OpenAI hope to build goodwill and present themselves as partners rather than mere objects of scrutiny.
Containing Big Tech’s power
Lawmakers in the US are keen to curb the power of Big Tech. A prime example is Microsoft, which has faced one antitrust investigation after another in the past. The European Commission is also keen to regulate the impact of innovative and potentially disruptive technology, whether through the Digital Services Act, the Digital Markets Act, the AI Act or the NIS2 directive.
Perhaps you could see the EU’s recently established AI Office as the European counterpart of America’s AI Safety Institute. It is a bit early to compare the two bodies, as both have only existed briefly or are still being established. Still, judging by its mission statement, the U.S. organization seems more concerned with scientific progress and developing safety and accountability standards. It’s a bit like how a nuclear agency would profile itself.
The EU AI Office seems to favour a broader approach, with bureaucratic layers, themes and departments neatly laid out in an organizational chart. That’s kind of the approach we would expect from an EU institute.
Billions at stake
Companies are not keen on idly sitting on the sidelines while these legislative and regulatory affairs unfold. In the case of OpenAI, billions in investments are at stake, and it’s hard enough to turn these into resounding profits. So, they are eager to help determine what regulation will look like. The institutions mentioned above also provide the space to do so.
Another reason big players typically like to rub shoulders with lawmakers and regulators is that this is a well-established method of taking the wind out of the sails of smaller, upstart competitors. In the case of AI, think of lucrative deals with news publishers, compliance with safety checks at scale or detection of dangerous hallucinations. Only parties with deep pockets and an army of lawyers can comply with such regulations. Parties like themselves, in other words.
At the same time, such companies don’t wait around for legislation to arrive. OpenAI, along with Microsoft, Google and Anthropic, created an organization (the Frontier Model Forum) to steer the development of AI models in the right direction: their direction. The message is that too much regulation is unwanted and that AI players are perfectly capable of making their own rules.
Lobbying behind the scenes
The so-called AI-Enabled ICT Workforce Consortium, meant to assuage concerns about job losses due to AI, can also be seen in that light. Cisco founded this group with support from Google, Microsoft, IBM, Intel, SAP and Accenture. You often don’t hear much from such initiatives after the initial news of their founding. But rest assured that behind-the-scenes lobbying is taking place. In this case, U.S. companies are trying to stay in step with European institutions, where workers enjoy significantly more protection than in the companies’ home country.
The fact is that legislators in Washington, D.C. often have closer ties to the tech industry than their counterparts in Europe. Thus, they are able to cooperate more closely (and occasionally clash) with the tech entrepreneurs of Silicon Valley. In Europe, national governments have not yet made a dent in curbing the ambitions of AI giants.
Results may be limited
They probably prefer to leave this to the European Commission, which invites a whole host of NGOs, activists, lobbyists, citizens’ groups and other stakeholders to weigh the pros and cons of every piece of new legislation. Despite this attempt to involve as many actors as possible in developing laws and regulations, the influence of European bodies seems limited. For all the privacy fines hurled at Meta, Google and others, Big Tech often ends up making just enough concessions to mostly get its way.
On the AI front, Mistral is a hopeful contender for Europeans with a bit of ‘EU patriotism’. The French startup has produced impressive models. However, compare the investments this company has attracted with those of its American competitors, which regularly rake in billions, and you must acknowledge that in the AI orchestra, Europe is blowing a trumpet while America blows a tuba. That undoubtedly limits the influence of European players on the direction AI is going.