Can the UK Produce a Frontier AI Company? An AI Founder’s Perspective

When I first began studying Law at university in London, I did not imagine that I would one day be building artificial intelligence (AI) products. Yet, even during my undergraduate studies, I found myself drawn to the intersection of AI and law. My dissertation explored how the fair dealing exception in UK copyright law could be expanded to allow the training of large language models (LLMs) on UK data in ways that both protect rightsholders and promote innovation.

I am now the founder of Brilliant AI, a UK-based research and development company with a mission to build the most advanced AI agents for developers and a long-term vision of building artificial general intelligence (AGI). This journey has been both exhilarating and challenging. I have learned firsthand how difficult it is for a founder in the UK to compete at the frontier of generative AI.

This is my firsthand account of that journey—what I have seen, what I have learned, and what I believe the UK must do if it truly wants to produce its own OpenAI or Anthropic. 

In November 2022, when ChatGPT was released, I was a second-year Law student and a self-taught programmer. The moment I interacted with GPT-3.5, I understood that my career path had changed. These models were not just tools—they were digital brains, capable of learning and processing information orders of magnitude faster than humans.

It was clear to me then, and remains clear today, that any country unable to produce these digital brains for itself risks being left behind economically, technologically and geopolitically. The requirements for building such systems can be boiled down to three things: data, compute and talent.

The United States has become the undisputed leader in frontier AI by bringing these three together at scale. OpenAI’s models dominate benchmarks, and its consumer product, ChatGPT, has around 700 million weekly active users. Even the global Stargate project—a buildout of new data centres with projected costs in the $400-500 billion range—is a direct result of demand for OpenAI’s models.

The UK certainly has the talent to play at this level. We have done it before. DeepMind, founded in London, pioneered reinforcement learning breakthroughs such as AlphaGo. For years, it was regarded as the leading AI lab globally until the launch of ChatGPT shifted the spotlight to OpenAI.

When I started Brilliant AI, I wanted to build frontier models straight away. But I quickly ran into a wall. The funding requirements were astronomical, and the UK ecosystem lacked mechanisms for supporting early-stage researchers who wanted to train models at scale. By contrast, US venture capital firms like Andreessen Horowitz provide grants to independent labs to support open-source model training. In the UK, there are no equivalent grassroots opportunities.

I pivoted to building AI products that relied on existing frontier models. Using GPT-4 and later reasoning models like OpenAI’s o1-mini, I built Bril AI, an agent designed to help students learn more effectively. But this made me, like many other UK founders, an ‘AI taker’ rather than an ‘AI maker’.

I also launched LlamaCloud, a platform to help developers build with open-source models such as Llama 3 and Mistral. The idea was to create a sovereign alternative for UK developers. But I soon found that most developers and businesses wanted accuracy and reliability above all, and the open-source models still lagged behind proprietary US models in performance.

In short, I discovered the hard way that the UK has talent and ambition but lacks the structural support to turn them into frontier companies. These days, I am developing BrilliantCode, the world’s most advanced autonomous AI software engineer, built for real-world, production-grade software engineering. It is currently powered by the GPT-5 family of models, but my goal is for future versions to run on coding models we have trained ourselves, much as Cursor and Windsurf in the US started with models from OpenAI and Anthropic and are now training models of their own for use in their agentic IDEs.
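To make the idea of an autonomous software engineer concrete: most agentic coding products follow the same basic loop, in which a model proposes an action, a harness executes it, and the observation is fed back until the model declares the task done. The sketch below is a minimal, illustrative version of that pattern, assuming stubbed-out model and tool functions; every name here is hypothetical, not BrilliantCode’s actual implementation.

```python
# Minimal sketch of an agentic coding loop: the model proposes an action,
# the harness runs it, and the observation is appended to the history.
# All functions here are illustrative stand-ins, not a real product's code.

def stub_model(history):
    """Stand-in for a frontier coding model (in practice, an API call)."""
    if not any(step["action"] == "run_tests" for step in history):
        return {"action": "run_tests", "args": {}}
    return {"action": "finish", "args": {"summary": "tests pass"}}

def run_tool(action, args):
    """Stand-in tool executor; a real agent would shell out, edit files, etc."""
    if action == "run_tests":
        return "2 passed, 0 failed"
    return ""

def agent_loop(model, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = model(history)
        if step["action"] == "finish":
            return step["args"]["summary"]
        observation = run_tool(step["action"], step["args"])
        history.append({"action": step["action"], "observation": observation})
    return "step budget exhausted"

print(agent_loop(stub_model))  # -> tests pass
```

The point of the pattern is that the intelligence lives in the model while the harness stays simple, which is why products built this way can swap in better models, whether proprietary or self-trained, without rewriting the product around them.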

So what holds the UK back? Four structural barriers stand out.

Funding gaps. Training frontier models requires billions of pounds in capital expenditure. The UK government invested around £1 billion in compute and AI research initiatives between 2022 and 2023. Compare this with OpenAI’s projected spend of $500 billion by 2030. Without attracting external investment on this scale, UK startups are structurally disadvantaged.

Copyright uncertainty. The UK’s copyright regime is restrictive compared to the US fair use doctrine. Investors hesitate to fund model training when there is uncertainty about whether training datasets could lead to litigation. The EU AI Act has compounded this by introducing compute thresholds for ‘systemic risk’ models, triggering costly transparency and compliance obligations.

Lack of sovereign models. While Mistral in France represents Europe’s strongest open-source effort, its models remain behind the frontier. The UK currently has no widely adopted sovereign model. This leaves startups dependent on US labs or, increasingly, Chinese-trained models that may not align with British values.

Regulatory fragmentation. Meta’s multimodal Llama models, for example, cannot be legally deployed in Europe due to licensing restrictions. UK founders face a fragmented compliance landscape that makes it hard to build scalable businesses.

Beyond funding and regulation, there is a more fundamental issue: the UK’s data centre and energy infrastructure is simply not competitive for large-scale AI training workloads.

As Kao Data’s recent research highlights, the UK has some of the highest industrial electricity prices in the G7. At current prices, 1GW of power across 12 months costs approximately £1.8 billion in the UK, compared to just £438 million in the United States. This pricing disparity alone makes the UK an unattractive location for the energy-intensive compute required to train frontier AI models.
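A quick back-of-envelope calculation shows what those headline figures imply for the electricity price itself. Assuming a constant 1GW draw for a full year (8,760 hours), the quoted annual costs translate into roughly £205 per MWh in the UK versus about £50 per MWh in the US, a gap of around 4x. The variable names below are illustrative.

```python
# Back-of-envelope check of the energy-cost figures quoted above.
# Assumes a constant 1 GW draw for a full year; names are illustrative.
HOURS_PER_YEAR = 24 * 365            # 8,760 hours
energy_mwh = 1_000 * HOURS_PER_YEAR  # 1 GW = 1,000 MW -> 8,760,000 MWh

uk_annual_cost = 1.8e9   # £1.8 billion (UK figure quoted above)
us_annual_cost = 438e6   # £438 million (US figure quoted above)

uk_price_per_mwh = uk_annual_cost / energy_mwh  # ~£205/MWh
us_price_per_mwh = us_annual_cost / energy_mwh  # ~£50/MWh

print(round(uk_price_per_mwh), round(us_price_per_mwh))  # -> 205 50
```

For a training cluster that runs around the clock, that 4x difference compounds directly into the cost of every model trained on UK soil.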

Grid connection delays of up to 15 years in some regions further compound the problem, as does the lack of data centre eligibility for Energy Intensive Industries (EII) relief—despite data centres being designated as Critical National Infrastructure.

Encouragingly, there are signs of progress. Companies like NVIDIA, Microsoft, OpenAI, Nscale, and Kao Data are investing heavily in UK compute infrastructure. However, the success of these investments depends on UK power pricing being tackled. Kao Data, in particular, has a proven track record deploying AI startup and hyperscale cloud infrastructure within the UK and understands the critical connection between sovereign compute capacity and AI sovereignty.

But compute infrastructure alone is not enough. The UK needs regulatory clarity around training data use, competitive energy pricing, a culture of risk-tolerant venture investment, and the ability to retain and attract top talent that currently gravitates toward Silicon Valley.

For now, my focus at Brilliant AI is on building advanced agents powered by existing frontier models. The goal is to create the best possible user experience, achieve adoption at scale, and then use that demand to justify training our own models in the future.

The roadmap is clear: build advanced reasoning models that can autonomously work on complex tasks, develop computer-use agents that can safely automate any digital workflow, and ultimately become a full-stack UK AI company that not only builds products but also trains and deploys world-class frontier models.

The UK has every reason to want a frontier AI company of its own. These digital brains will define the next era of human progress, from healthcare to defence to education. We cannot afford to be perpetual takers of someone else’s models.

The ingredients of talent, ambition and early research excellence are well and truly here. But without bold investment, regulatory clarity, competitive energy pricing, and a long-term commitment to sovereignty, UK founders will continue to face the same hard reality I have faced: the path of least resistance is to build on US models.

If the UK is serious about producing its own OpenAI or Anthropic, then now is the time to act. Otherwise, we will remain takers of others’ AI innovation rather than makers shaping it ourselves.

*To hear more from Jennifer Umoke – please download her Critical Careers Podcast here. 



 
