
City of London sets new benchmark for responsible AI use with GenAI policy

  • October 13, 2025
  • 5 min read

The new City of London AI policy sets out how generative AI should be used responsibly across public operations, with clear rules on governance, transparency and data security.

The City of London Corporation has unveiled a comprehensive Standard Operating Procedure (SOP) to govern how staff, contractors and technology partners use generative AI tools. The move reflects growing concerns about data security, intellectual property and transparency as AI becomes embedded in workplace decision-making.

The policy comes amid a sharp increase in the deployment of large language models across both the public and private sectors. It sets out a framework designed to ensure ethical use, legal compliance and oversight, particularly where sensitive or personal data is involved.


Clear governance and strict oversight

Under the new guidelines, all intended uses of GenAI must be declared to the Corporation’s Information Management Board, including details of the inputs and outputs involved and how the resulting information will be distributed. This step, officials say, ensures clear accountability from the outset.

The SOP also mandates strict technical assessments for any GenAI platform before it can be adopted, particularly when tools are hosted internationally. This is to address concerns around data sovereignty and the transfer of sensitive information beyond UK legal jurisdiction.

James Tumbridge, Chairman of the Corporation’s Digital Services Committee, said the policy aims to foster responsible innovation rather than stifle technological progress.

“This policy reflects our commitment to innovation with integrity. By setting clear standards for the use of generative AI, we aim to harness its potential while safeguarding public confidence, data security, and ethical governance,” Tumbridge said.
“We want people to use AI with thought, and that is what this policy is all about.”


Ethics, transparency and accountability

The new framework goes further than many organisational AI policies by embedding explicit ethical obligations. Users must not generate or disseminate any content that could be discriminatory, offensive or inappropriate.

Transparency is another key principle. Any content generated by AI must be clearly identified as such, with users taking full responsibility for its accuracy and use. This aligns with emerging best practice set out by national regulators and industry bodies on responsible AI deployment.

The Corporation’s Digital Services Committee worked closely with permanent staff to develop the framework, placing emphasis on explainability, accountability and robust safeguards.


Keeping pace with rapid change

The SOP will be reviewed regularly to keep pace with developments in AI technology and evolving legal standards.

The announcement comes as regulators and policymakers worldwide intensify their focus on how generative AI is used in public services, law, finance and infrastructure. In the UK, AI governance is already under close scrutiny following the publication of the government’s AI Regulation White Paper and the international AI Safety Summit held at Bletchley Park in 2023.

Industry analysts say such governance measures are increasingly necessary if public bodies are to adopt AI without undermining trust. The AI guidance published by the UK Information Commissioner’s Office (ICO), for example, sets expectations around lawful data processing and accountability for automated decision-making.

For more information on UK data protection and AI guidance, visit the official ICO website.

For more independent reporting on London’s governance, technology and business, follow EyeOnLondon City. Share your thoughts in the comments.


About the Author

Editor

Emma’s journey to launching EyeOnLondon began with her move into London’s literary scene, thanks to her background in the Humanities, Communications and Media. After mingling with the city's creative elite, she moved on to editing and consultancy roles, eventually earning the title of Freeman of the City of London. Not one to settle, Emma launched EyeOnLondon in 2021 and is now leading its stylish leap into the digital world.
