Government AI adoption: formal-relational contracting can unlock responsible progress
Governments worldwide are quickly ramping up efforts to adopt artificial intelligence, but concerns about risk, bias, and accountability loom large. Felix-Anselm van Lier, Research and Policy Fellow at the Government Outcomes Lab and Co-Director of the Leading Cross-Sector Partnerships Programme, argues that a flexible, collaborative contracting model can unlock progress.

2025 was off to a great start for AI in government.
Keir Starmer’s government boldly announced the AI Opportunities Action Plan, driven by the belief that “AI is the way … to transform our public services”. Within a month, headlines took a slightly more cautious tone, highlighting the “frustrations and false starts” of many AI projects in government. For those experienced in implementing AI in complex socio-political environments, this is neither surprising nor necessarily a bad sign. There is often a stark contrast between optimistic narratives about AI’s potential in government and the formidable challenge of operationalising even the most basic use cases.
Despite the hype, adopting AI in government–where transparency, accountability, and adherence to the fundamental values and legal rights of citizens are paramount–is no trivial task. There are rarely any “plug-and-play” solutions. AI implementation is a deeply “socio-technical” challenge. AI “solutions” are highly context-dependent, shaped by a multiplicity of factors including the choice of AI model, the quality and availability of data, and the interaction with human users. AI’s opportunities and risks are not static–they emerge and evolve over time, often only becoming apparent through real-world experimentation in specific contexts.
Responsible and meaningful AI adoption requires continuous adaptation, responsiveness, and processes that enable mutual learning and collaboration between technology experts and end users. Policymakers are increasingly aware of AI’s inherent challenges – Pat McFadden’s pledge for a “test-and-learn” approach in policy development and the UK government’s recently published Artificial Intelligence Playbook for the UK Government both emphasise the need for government to be more agile and more collaborative.
In the case of AI, government remains heavily reliant on external expertise. Most cutting-edge know-how remains largely concentrated in commercial organisations, with governments either procuring AI tools outright or licensing the underlying technology. Current procurement and contracting practices, however, are anything but agile and collaborative, consistently failing to foster the cross-sector collaboration required for meaningful and responsible AI implementation. If government is serious about putting its ambitions into practice, more focus needs to be dedicated to the seemingly mundane – but critically important – processes of contracting and procurement.
Procurement is becoming an increasing focus – but it’s not fit for purpose
Public procurement and contracting’s potential as a risk management tool is increasingly coming into the purview of policymakers and researchers. Procurement could be the missing link between pre-deployment regulation and ongoing post-deployment monitoring and evaluation needed to ensure AI systems remain safe, effective, and aligned with public values. As it stands, however, procurement remains fundamentally misaligned with the realities of AI adoption.
Instead of fostering dynamic, collaborative partnerships, current contracting practices reinforce rigid, transactional processes that prioritise compliance over innovation. This not only hinders meaningful cross-sector collaboration but also limits government and public sector insight into AI system design, functionality, and alignment with public goals and values – and government’s ability to effectively manage technology’s risks.
Highly specified contracts create an illusion of accountability by attempting to predefine and control inherently unknown futures. Traditional contracts fail to accommodate the emergent and dynamic risk landscape that is inherent in AI implementation. Consequently, rather than mitigating risks, these contracts tend to exacerbate them, leaving government officials ill-equipped to learn, adapt, intervene, or ensure that AI systems remain aligned with public interests over time.
These rigid contracting models don’t just slow down AI adoption–they increase the risk of failures, stifle innovation, and ultimately erode public trust. Recognising this, recent reports by the Tony Blair Institute and by PUBLIC emphasised the need for more agile, adaptive procurement practices. However, there are still very few concrete ideas on how to implement such an approach in actual contracting practices.
Bringing a “test-and-learn” approach to contracting: formal-relational contracts
The need for flexible and collaborative contracting models is not exclusive to the field of AI and technology adoption. There is much to be learned from a growing body of research on innovative contracting in complex policy areas, ranging from complex defence procurement to homelessness prevention. The Government Outcomes Lab has been at the forefront of this work, focusing on how a shift towards more flexible and collaborative contracting models can lead to better outcomes for citizens.
Building on our work in complex contracting environments and evidence from the private sector, we are developing a formal-relational approach to public sector contracting. Rather than attempting to anticipate unknowable contractual outcomes, this model shifts the focus of the contract to fostering meaningful relationships through well-designed governance structures, transparent decision-making processes and principles of engagement. This model acknowledges the uncertainties inherent in complex contracts, and prioritises adaptability, ongoing collaboration and shared responsibility for achieving desired outcomes–without abandoning the need for accountability.
As part of this work, we are currently collaborating with Public Digital to co-create and test tools and resources to help commissioners adopt a more relational approach to contracting, creating a publicly accessible resource to strengthen cross-sector partnerships. The aim of the project is to turn the contract into a living document, a test-and-learn framework where contracts serve as evolving vessels that guide behaviour, foster collaboration, and adapt based on new insights gained during implementation. An early lesson from this ongoing project is: a different approach to contracting is not only possible but urgently needed—and AI procurement would likely benefit from the same shift.
What would a formal-relational approach bring to public sector AI adoption?
The promising work on formal-relational contracting in public services suggests that a similar approach could transform AI contracting by making it more adaptive, collaborative, and accountable.
First, it would allow for more effective and meaningful AI adoption: a formal-relational approach acknowledges AI implementation as a socio-technical endeavour, drawing on the expertise of cross-sector, multi-disciplinary teams and enabling ongoing learning. Contracts must foster iterative, test-and-learn processes, ensuring that technology aligns with the nuanced needs and values of the public sector.
Second, it would promote more responsible AI adoption. This approach would embed transparency and accountability at the heart of contracts, making AI adoption processes more visible, understandable, and governable. This, in turn, would help strengthen public confidence and help ensure that AI systems remain aligned with societal goals.
Like any approach, it will not be a panacea. Many challenges remain, ranging from the lack of clear regulatory frameworks and the difficulty in specifying and verifying AI attributes like fairness and explainability, to the asymmetry in technical expertise between public buyers and tech providers. But today, as Peter Chamberlin from Public Digital highlighted, a crucial task is to invest in the foundations of AI. A formal-relational approach to AI contracting would do just that: addressing the basics, prioritising people and building their capacity, and adopting flexible contracting models.