UK swoons over OpenAI in legally meaningless love-in • The Register

The UK’s Department for Science, Innovation & Technology (DSIT) is jumping into bed with chatbot biz OpenAI, signing a memorandum of understanding to expand OpenAI’s footprint in the nation while inserting its tech firmly into the public sector.

The MoU falls far short of a signed-and-sealed contract. “This memorandum is voluntary,” the document reads, “not legally binding, and without prejudice to any binding agreements,” and “does not prejudice against future procurement decisions” – and no mention has been made of money changing hands, in either direction, at this stage.

Despite this, the British government is positioning the agreement as a big win for its plans to build the UK into an artificial intelligence powerhouse. Under the MoU, OpenAI is said to be committing to work with DSIT to identify where "advanced AI models" can be deployed in both the public and private sectors, and to "improve understanding of [AI] capabilities and security risks" in work building on an existing partnership between OpenAI and the UK AI Security Institute. The deal also opens the door for OpenAI to participate in planned "AI Growth Zones" and to expand the company's footprint in the UK.


“AI will be fundamental in driving the change we need to see across the country – whether that’s in fixing the NHS, breaking down barriers to opportunity, or driving economic growth,” trilled technology secretary Peter Kyle. “That’s why we need to make sure Britain is front and center when it comes to developing and deploying AI, so we can make sure it works for us.

“This can’t be achieved without companies like OpenAI, who are driving this revolution forward internationally. This partnership will see more of their work taking place in the UK, creating high-paid tech jobs, driving investment in infrastructure, and crucially giving our country agency over how this world-changing technology moves forward.”

“AI is a core technology for nation building that will transform economies and deliver growth,” claimed OpenAI chief executive Sam Altman in a statement. “Britain has a strong legacy of scientific leadership and its government was one of the first to recognize the potential of AI through its AI Opportunities Action Plan. Now it’s time to deliver on the plan’s goals by turning ambition to action and delivering prosperity for all.”

The government has been sniffing around the use of AI for some time, even prior to the publication of the AI Opportunities Action Plan, going so far as to predict that machine learning and technology implementations will mean zero impact from a 10 percent budget cut across major departments. At the same time, Prime Minister Keir Starmer has pledged £1 billion (about $1.3 billion) to “scale up [the UK’s] compute power by a factor of 20,” not including an additional £750 million (just over $1 billion) to resurrect scrapped plans for a supercomputer in Edinburgh.

Despite doubts over whether the government can deliver on its vision, ministers have their eye on using AI technologies to do everything from accelerating planning permission applications to flagging suspicious MOT test centers. The government is even planning to put the technology to work as a digital public servant, unironically named Humphrey after the star of British political satire Yes, Minister.

Planning regulations are to be loosened to create "AI Growth Zones," public-facing roles are to be supplemented – or replaced – by chatbots, kids' schoolwork is to be marked and graded by machine, and there are even concerning rumblings regarding a Minority Report-like "murder prediction tool."

This all comes less than two years after then-Prime Minister Rishi Sunak opened the global AI Safety Summit at Bletchley Park, historical home to the Second World War codebreakers working at “Station X” – and Alan Turing, the polymath best known in AI circles for his eponymous test for determining whether a correspondent was a human or a machine.

Where the European Union is legislating for more control over AI technology, with limited success, the UK appears to be going the opposite way – as this latest deal with a company criticized for mass use of copyrighted content in model training would indicate.

Some experts, however, aren’t as convinced as the government that such technologies will deliver the promised national growth. Wayne Holmes, professor of critical studies of artificial intelligence and education at University College London’s Knowledge Lab, doesn’t mince his words on the topic.

“Policymakers and idiots around the world are just getting sucked into this hype-fest, believing the nonsense that these people are saying, that this is going to sort everything, can help solve all the problems of the world, and cancer is going to be solved in three weeks, poverty in five weeks,” he told The Register.

“It’s just utter, utter drivel and neoliberal nonsense. OpenAI and others like it – terrible, terrible companies, and to invest or to have a memorandum of understanding with them is just crazy. OpenAI is an incredibly unstable company that could collapse at any time. They are just working really hard to enhance their immediate value. And then clearly this memorandum is going to help them in that project, because they’re just interested in making as much money as quickly as they can, because, in my opinion, they know that they’re hitting a brick wall.

“The technology itself that OpenAI lead on, or have their dominant position on, GenAI, ChatGPT, LLMs, all that is fundamentally flawed.”

“It’s never going to be really improved any time soon. And they talk about the ‘agentic’ ways of approaching it. And this is the kind of thing [where] they’re just not doing what they claim it’s doing. Are these tools fun to play with? Absolutely. Do they do silly and interesting things? Yes, they do. Do they do anything that I personally would trust? Not in a million years. So the idea that our government is going to trust it is just absurd.”

As for what the government should be doing, Holmes recommends a very different approach. “Number one,” he said, “make damn sure that people actually understand what they’re talking about. And clearly government and ministers do not. Number two, we need robust regulation about these technologies. We need that regulation in place, and it needs to be proactive regulation, regulation that recognizes that these tools are changing in terms of their application.”

Dr. Dan Nicolau, director of the King's College London AI for Science Programme, told us this non-binding agreement by the UK government follows "similar deals" with Anthropic and Google.

“So this MoU should be seen as an early and exploratory part of this much larger plan to make AI central to life in the United Kingdom in the coming decades, basically.

“I don’t know and I don’t think anyone does know whether this is the right plan. But it is a bet that the many issues with the AI developed by these big US firms, such as hallucinations, biases, privacy, copyright and unexplainability, will be resolved well enough and fast enough for the technology to be transformative, for instance in the public sector.”

El Reg asked OpenAI for responses to specific questions but it directed us to a blog instead. A DSIT representative had not responded to a request for comment by the time of publication. ®
