
The Governance Gap: Why Cities Are Racing Toward AI Adoption Without a Map



By: Khahlil A. Louisy



Municipalities are deploying artificial intelligence faster than they can govern it, and the consequences reach every resident.


In some cities today, a traffic-light algorithm quietly shapes your morning commute, an AI system triages which neighborhoods get priority for infrastructure repairs, and another determines how fast emergency services respond to your street. Across cities worldwide, artificial intelligence is rapidly moving from pilot programs to public infrastructure, often without residents knowing these systems exist, let alone how they work or who will be held accountable when they fail.


The numbers tell a striking story. In a survey of more than 100 European cities, over two-thirds reported using AI to analyze transport, environmental, or service-delivery data, yet only a small fraction described themselves as having dedicated ethics and audit frameworks. The vast majority fall into an "experimenting" or "adopting" category, which in practice means they are rolling out pilots faster than they can define accountability. The result is what experts call a "governance gap": the widening space between technological deployment and the ethical, legal, and operational frameworks needed to manage it responsibly.


What makes this governance gap particularly urgent is that most residents have no idea AI is making decisions about their lives. According to recent research analyzing municipal AI procurement practices, AI acquired by cities often doesn't go through conventional public procurement processes, creating significant oversight challenges. When these systems are classified as "operational analytics" rather than "high-risk" applications, they can escape regulatory scrutiny entirely, even when they determine which services get prioritized or how public resources are allocated.


The European Union's AI Act, adopted by the European Parliament in 2024, requires transparency for high-risk systems, but many municipal tools sidestep this requirement through creative categorization: providers conduct their own risk assessments and can conclude that their systems are not "high risk." In practice, this means the algorithm managing your city's traffic flow or the system setting infrastructure maintenance schedules might never appear in any public registry.


This invisibility breeds a dangerous form of "responsibility drift": when something goes wrong, whether through biased service allocation, algorithmic error, or discriminatory outcomes, no one knows who is accountable. Is it the human analyst who approved the AI's recommendation and wrote it into a policy report? The vendor who designed the system? The city official who procured it? Without clear governance structures, accountability dissolves.


The Two-Tier City Emerging From AI Adoption


But the governance gap isn't just about oversight; it also creates profound inequities between cities. Wealthier municipalities can afford sophisticated AI systems that forecast energy demand, predict flooding, optimize public transport, and automate complex services, and they have the budgets to staff teams with both the capacity and the capability to manage these technologies. Smaller towns and under-resourced cities, by contrast, rely on manual and often outdated processes.


This divide between wealthy and under-resourced cities mirrors the digital infrastructure divide: cities rich in data get smarter faster, while others fall further behind. The European Commission warns this could lead to a two-tier model of public service: one algorithmically optimized, the other perpetually struggling. When AI adoption outpaces governance capacity, the cities that need technological assistance most are the least equipped to deploy it responsibly.


The equity implications extend to residents as well. AI systems trained on incomplete or biased datasets can exacerbate existing inequities in policing (we've already seen multiple instances of this), service allocation, and resource distribution. Without proper governance frameworks that mandate inclusive datasets and equity impact assessments, these tools risk automating and amplifying historical discrimination.


There is good news, though: pioneering cities are showing what responsible AI governance looks like in practice. Seattle, San José, and Boston have developed frameworks that don't just regulate AI use; they embed ethics into procurement, deployment, and monitoring from the start.


Seattle's approach is instructive: all AI software, even free or pilot products, must be submitted through the city's procurement process and reviewed by its Digital Privacy Office, and the city makes documentation related to its use of AI publicly available. Helsinki and Amsterdam push this transparency further still, maintaining public Algorithm Registers that document the municipal AI systems in use.
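To make the register idea concrete, here is a minimal sketch of the kind of metadata a register entry might capture, written as a Python data class. The field names and the example entry are illustrative assumptions, not the actual schema Helsinki or Amsterdam publishes.

```python
from dataclasses import dataclass

# Hypothetical sketch of an algorithm register entry. Field names are
# illustrative assumptions, not the real Helsinki or Amsterdam schema.
@dataclass
class AlgorithmRegisterEntry:
    system_name: str         # what the system is called
    purpose: str             # plain-language description of what it does
    department: str          # the city unit operationally responsible
    data_sources: list[str]  # datasets the system consumes or was trained on
    human_oversight: str     # how and when a person reviews its outputs
    risk_assessment: str     # summary of (or link to) the impact assessment
    contact: str             # whom residents can ask questions or contest decisions

entry = AlgorithmRegisterEntry(
    system_name="Traffic signal optimization",
    purpose="Adjusts signal timing based on sensor-measured traffic volumes",
    department="Department of Transportation",
    data_sources=["intersection loop sensors", "historical flow counts"],
    human_oversight="Traffic engineers review timing plans weekly",
    risk_assessment="Published impact assessment, updated annually",
    contact="ai-register@city.example",
)
```

Even a schema this simple answers the questions responsibility drift leaves open: what the system does, who runs it, and whom residents can contact when it fails.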


San José has gone further, founding the GovAI Coalition, a partnership of over 150 local, county, and state agencies creating frameworks for responsible AI. In February 2025, the coalition launched an AI Contract Hub with shared contract templates, cooperative agreements, and best practices that any public agency can use. This addresses a critical challenge: most cities lack the technical expertise to evaluate AI vendors or monitor system performance independently.


Boston's interim guidelines emphasize responsible experimentation with clear guardrails: staff must fact-check AI-generated content, disclose AI use in public communications, and never share sensitive information with AI systems. Crucially, the policy establishes that generative AI, like all technology, is a tool, and users remain accountable for its outcomes.


The Procurement Lever


One of the most powerful and underutilized tools for responsible AI governance is public procurement. As Carnegie Endowment researchers note, procurement represents a major driver of economic activity and a vehicle for achieving policy goals at all levels of government. Yet it's often overlooked in AI governance discussions.


When cities specify ethical requirements in procurement contracts, such as transparency about training data, bias testing, performance monitoring, and data privacy protections, they shift the burden of responsibility from under-resourced public agencies to the companies developing and selling AI systems. California's purchasing power makes this particularly significant: standards established there through state executive orders and coalitions like GovAI have the potential to become de facto national or even global standards.
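One way to operationalize such requirements is a structured disclosure checklist that procurement staff apply uniformly to every vendor. The sketch below is hypothetical: the five items mirror the requirements named above, but the structure and the missing_disclosures helper are illustrations, not a standard procurement instrument.

```python
# Hypothetical pre-award disclosure checklist. The items mirror the contract
# requirements discussed above; the structure itself is an illustration,
# not a standard procurement instrument.
REQUIRED_DISCLOSURES = {
    "training_data": "Sources and provenance of training data",
    "bias_testing": "Bias test methodology and results, by demographic group",
    "performance": "Accuracy and error metrics, and how they were measured",
    "monitoring": "How performance will be monitored after deployment",
    "privacy": "What resident data is collected, retained, and shared",
}

def missing_disclosures(vendor_submission: dict[str, str]) -> list[str]:
    """Return the required disclosures a vendor proposal has not addressed."""
    return [
        description
        for key, description in REQUIRED_DISCLOSURES.items()
        if not vendor_submission.get(key, "").strip()
    ]

# Example: a proposal that omits bias testing and post-deployment monitoring.
proposal = {
    "training_data": "Municipal 311 records, 2018-2023",
    "performance": "92% intent-classification accuracy on held-out data",
    "privacy": "No resident PII retained beyond the session",
}
for gap in missing_disclosures(proposal):
    print("Missing:", gap)
```

The value of even so simple a gate is that it forces disclosures a sales pitch would otherwise skip, which is precisely the failure mode the research below describes.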


The issue, of course, is that procurement alone isn't sufficient. Recent research on U.S. cities' AI procurement practices reveals persistent information asymmetries between governments and vendors. Sales representatives shape cities' understanding of AI through pitch-style demos, often failing to report basic information unless explicitly asked. One city employee shared that risks from AI hallucinations "weren't part of the conversation" when procuring a chatbot service, because the vendor never mentioned them.


Here’s What Cities Need Now


Addressing the governance gap requires action on multiple fronts:


1. Start with assessment, not technology. Before deploying AI, cities must evaluate internal readiness, align tools with actual service goals, and identify capability (not just capacity) gaps. The question isn't "What AI can we use?" but "What problems do we need to solve, and is AI the right solution?"

2. Demand vendor transparency. Procurement contracts should require vendors to disclose training data sources, bias testing results, performance metrics, and how systems will be monitored over time. Cities shouldn't have to develop evaluation metrics themselves; that expertise should be a contract requirement.

3. Build public oversight mechanisms. Algorithm registers, public dashboards of AI deployments, and participatory design reviews should become baseline expectations. Transparency isn't just good ethics; it builds the public trust necessary for successful technology adoption.

4. Invest in capacity building. Cities need staff with technical literacy to evaluate AI systems, ask critical questions, and monitor performance. Peer networks like GovAI provide crucial knowledge sharing, but cities also need dedicated resources for training and hiring.

5. Establish clear accountability structures. Before deployment, define who owns decisions when AI is involved, how errors will be addressed, and what recourse residents have when systems produce harmful outcomes.

6. Prioritize equity from the start. Every AI procurement should include an equity impact assessment. Systems must be tested with diverse datasets and evaluated for disparate impacts on marginalized communities before deployment; one common check is sketched after this list.
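As a concrete illustration of that last point, the sketch below applies the "four-fifths rule" borrowed from U.S. employment law: flag any group whose rate of favorable outcomes falls below 80% of the best-served group's rate. Choosing this particular rule for a municipal service, and the sample numbers, are illustrative assumptions, not a methodology the cities above have adopted.

```python
# Minimal disparate-impact check using the "four-fifths rule": flag any group
# whose favorable-outcome rate falls below 80% of the best-served group's
# rate. The 0.8 threshold comes from U.S. employment law (EEOC); applying it
# to a municipal service here is an illustrative assumption.
def disparate_impact_flags(outcomes: dict[str, tuple[int, int]],
                           threshold: float = 0.8) -> dict[str, float]:
    """outcomes maps group -> (favorable_count, total_count).
    Returns groups whose rate ratio falls below the threshold."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items() if total > 0}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical example: share of repair requests prioritized, by neighborhood.
sample = {
    "north_side": (180, 200),  # 90% prioritized
    "south_side": (130, 200),  # 65% prioritized -> ratio ~0.72, flagged
}
print(disparate_impact_flags(sample))  # {'south_side': 0.722...}
```

A check like this belongs both before deployment and in ongoing monitoring, since a system that starts out even-handed can drift into disparate impact as the data underneath it changes.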


None of this means cities should avoid AI. The technology offers genuine potential to address urban challenges, from optimizing traffic flow and reducing emissions to improving emergency response times and expanding access to services. Singapore's AI-driven traffic management has decreased peak-hour delays by 20% while reducing emissions. Barcelona's AI-powered employment platform attracted over 730,000 users in 2024, matching workers to opportunities more effectively than manual systems.


But these successes share a common thread: they paired technological innovation with robust governance structures, stakeholder engagement, and ongoing monitoring. They treated AI as public infrastructure, that is, transparent, accountable, and designed to serve all residents, not just those with digital access. The challenge for cities isn't whether to adopt AI (those not already in the process risk being left behind) but how to do so responsibly. As one municipal technology leader observed, "By 2025, AI will be neither magic nor dystopia but rather a pragmatic suite of tools." The cities that thrive will be those that close the governance gap, ensuring that innovation serves the public good rather than outpacing public accountability.


At The Public Innovation Institute, we work at precisely this intersection: where technology meets policy, where innovation confronts governance challenges, and where the question isn't just "Can we build it?" but "Should we, and if so, how?" The governance gap isn't inevitable; it's a choice, and one that cities, with the right frameworks and support, can make differently.


The Public Innovation Institute partners with cities, researchers, and innovators to develop practical frameworks for responsible technology adoption. Our work in AI ethics, civic infrastructure, and public sector innovation helps bridge the gap between technological possibility and public accountability. Learn more at thepii.org.

