A Practical Look at a New Regulatory Era
Introduction
Artificial Intelligence enters 2026 with less novelty and greater consequence. Over the past eighteen months, the focus has shifted away from demonstrations of capability and towards questions of governance, accountability and institutional readiness. This period has been marked by a steady accumulation of regulatory signals, from the EU AI Act entering into force in mid-2024 through to successive UK guidance initiatives across 2025 and early 2026. AI systems are no longer confined to experimental deployments; they are increasingly embedded within core enterprise workflows, influencing decisions, markets, and the distribution of responsibility between humans and machines.
This shift reflects a broader maturation of the AI ecosystem. Early optimism has given way to more sober assessment as organisations confront the limits of automation in practice. The challenge is no longer whether AI can generate outputs at speed, but whether those outputs can be trusted, explained, and governed at scale. In high-stakes environments such as financial research and decision-making, these questions are not abstract. They determine risk exposure and strategic confidence.
Regulation is an inevitable response to this moment, but it is also a revealing one. The sequencing of regulatory activity over the past two years shows governments experimenting with guidance, voluntary codes and formal legislation in parallel. The way states and regions choose to regulate AI reflects a necessary balancing act between risk appetite and the pursuit of technological and economic opportunity. Overcorrection risks stifling innovation, while under-reaction erodes public trust and institutional legitimacy. Navigating this balance has become one of the defining policy challenges of the current decade.
This piece briefly examines that challenge through several lenses: the growing body of literature interrogating AI’s concentration of power; the EU AI Act as a reference point; recent UK regulatory initiatives that have unfolded incrementally since early 2025; and the UK’s evolving attempt to align digital ambition with credible oversight.
AI Regulation Enters Its Literary Phase
Over the past year, AI has become the subject of a different kind of scrutiny. Alongside technical papers and policy consultations, a growing body of long-form writing has emerged that treats AI less as a breakthrough technology and more as a social and economic system. This literary turn has unfolded in parallel with the EU AI Act’s entry into force in August 2024. Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Empire of AI) exemplifies this shift, moving the conversation away from model performance and towards questions of institutional control and the exploitation of both labour and resources.
At the heart of Hao’s argument is a reframing of AI as an extractive enterprise. Rather than portraying progress as the natural outcome of innovation, she highlights the concentration of resources and influence within a small group of firms. These organisations shape the direction of AI development through scale and capital, while the costs of that development are often distributed elsewhere. The book draws attention to the human labour that underpins modern AI systems, much of it invisible, precarious and geographically distant from centres of decision-making.
What makes Hao’s commentary particularly relevant to the regulatory conversation is the insistence that governance cannot be separated from incentive structures. Hao does not argue that AI is inherently harmful, nor does she call for sweeping prohibition. Instead, she exposes how existing market dynamics reward speed, opacity and centralisation, even when these qualities undermine accountability or long-term resilience. In this framing, regulatory missteps are less about technical ignorance and more about institutional misalignment.
The significance of this perspective lies in how it reshapes public expectations at a time when regulators are still calibrating their approach. As AI becomes more embedded in everyday systems, narratives that emphasise concentration and dependency resonate more strongly than abstract promises of efficiency.
Generally speaking, literature has a unique ability to surface these concerns in a way that technical documentation often cannot. It connects individual experiences of confusion or mistrust to broader structural patterns, creating a shared vocabulary for critique. For regulators, this literary turn presents both opportunity and challenge. On one hand, it broadens the scope of oversight beyond narrow safety metrics, encouraging consideration of fairness and long-term ESG impact. On the other, it risks accelerating regulatory urgency ahead of empirical clarity. Stories are persuasive, but they do not always map cleanly onto enforceable rules.
The emergence of works like Empire of AI signals that AI regulation is no longer driven solely by engineers and lawyers. It is increasingly shaped by public sentiment and political interpretation. Effective governance in this context requires discernment; literature can illuminate the stakes and expose blind spots, but regulation must translate those insights into proportionate and durable frameworks.
From Narrative to Governance
Literary critique alone does not produce regulation, but it shapes the conditions under which regulation becomes politically possible. This has been evident over the past two years as public concern has grown alongside formal regulatory action, particularly following the EU AI Act’s entry into force in 2024. At times, public opinion calcifies into prescriptive and paternalistic overreach, locking in assumptions that outpace evidence. At others, scepticism towards intervention can drift into inertia, allowing structural risks to accumulate unchecked. Both tendencies risk distorting regulation away from its core function: establishing clear, accessible guidelines grounded in risk mitigation and sustainable organisational benefit.
Completely dismissing literary influence would be a mistake. Public legitimacy and trust matter. Regulation that fails to address widely understood concerns is out of touch with real end users and will struggle to command compliance, however sophisticated its architecture. The task for policymakers is to extract durable insights without codifying transient sentiment.
It is imperative to distinguish between risks that warrant formal oversight and broader anxieties that call for transparency rather than restriction. This distinction underpins the EU AI Act’s risk-based architecture and is echoed, in softer form, in the UK’s subsequent guidance-led approach. Questions of cultural unease or speculative future harm are, at times, better addressed through standards, disclosure and ongoing review.
The challenge is heightened by the public’s tendency to frame regulation as a binary choice between protection and control. In the aftermath of the Grok episode, explained below, public debate quickly widened into concerns about censorship, surveillance and overreach. Whilst these concerns are legitimate and merit serious consideration in general, in this context they risk obscuring the narrower and more practical question of system accountability. Effective digital regulation must resist this false dichotomy. Safeguards designed to prevent demonstrable harm do not require expansive monitoring of speech or behaviour; they require clarity about responsibility, proportionate controls and credible mechanisms for redress.
The EU AI Act as a Regulatory Reference Point
The EU AI Act is a (rightly) ambitious step towards codifying AI governance into law. Having entered into force in August 2024, it introduced a risk-based framework that categorises AI systems according to their potential to cause harm, with obligations scaling accordingly. For firms operating within or alongside the European market, the Act offers something that has long been absent from AI oversight: a shared regulatory vocabulary.
From a UK perspective, the importance of the EU AI Act lies less in direct applicability and more in its role as a reference point. Even outside the Union, British companies building or deploying AI systems will encounter its influence through cross-border operations, investor expectations and emerging norms of best practice. The Act effectively sets a baseline against which other regulatory approaches, including those developing in the UK, will be compared, particularly as it becomes generally applicable in August 2026.
One of the Act’s strengths is its explicit definition of boundaries. By identifying prohibited uses and high-risk categories, it seeks to clarify what is unacceptable whilst allowing space for lawful innovation. This approach aligns with a jurisprudential preference for outlining constraints rather than prescribing permissible activity. In theory, such clarity should reduce uncertainty for developers and deployers alike.
In practice, however, the challenge lies in implementation. Many of the Act’s definitions are necessarily broad, reflecting the diversity and pace of AI and machine learning development. This creates an interpretive burden, particularly for organisations without extensive compliance infrastructure. There is a risk that complexity, rather than risk, becomes the primary driver of regulatory cost. Smaller firms may struggle to navigate layered requirements, even where their systems pose limited harm.
As a regulatory artefact, the EU AI Act marks a turning point. It moves AI governance from principle to operational obligation. Whether it succeeds will depend on how effectively its requirements are translated into enforceable, proportionate practice.
Sequencing Security and Innovation in the UK
In contrast to the EU’s legislative approach, the UK’s regulatory posture over the past year has been characterised by sequencing and experimentation. The January 2025 publication of the AI Cyber Security Code of Practice (“guidance to help stakeholders across the supply chain for AI systems, particularly Developers and System Operators, to meet the cyber security provisions outlined for AI systems in the UK Government’s Code of Practice (and subsequently ETSI TS 104 223)”) was followed by the Software Security Code of Practice in May 2025 and an update to that Code in January 2026. Throughout, the emphasis has been on laying foundations for secure design before hardening expectations.
Alongside the Digital and Technologies Sector Plan, which frames AI as a driver of national growth, the recent launch of the Software Security Ambassador Scheme (January 2026) signals a deliberate trust-led approach to governance. The scheme builds on earlier guidance by encouraging industry-led adoption, peer learning and demonstrable implementation, rather than immediate enforcement.
The Software Security Code of Practice, co-designed with industry and the National Cyber Security Centre, sets out principles for embedding security across the software lifecycle. A cohort of signatories has committed to championing these principles and sharing practical insight on implementation. This staged approach allows voluntary practice to mature into shared expectation, giving regulators a clearer view of organisational capability before escalating intervention.
Taken together, these initiatives suggest a regulatory philosophy focused on learning, iteration and institutional readiness. They reflect an understanding that effective oversight depends as much on operational maturity as on formal rules.
When Strategy Meets Reality: The Grok Backlash
The EU AI Act reflects a structured attempt to regulate AI through risk categorisation, while the UK’s guidance-led initiatives emphasise proportionality and innovation. Recent events surrounding Grok, the generative AI chatbot integrated into the social media platform X (formerly Twitter), illustrate how quickly these frameworks can be tested.
A significant controversy emerged after users demonstrated that Grok’s image generation and editing features could be used to produce non-consensual sexualised images of real individuals (including minors). As examples circulated publicly, the issue drew rapid political and regulatory scrutiny, particularly in the UK.
What followed exposed the tension between strategic ambition and operational accountability. The speed and visibility of harm shifted the regulatory response from future-proofing to immediate containment. In the UK, the episode was framed through the lens of online safety, raising questions about how existing regimes apply to AI-mediated harms.
The (ongoing) episode highlights the limits of voluntary safeguards once systems are deployed at scale. While mitigations were introduced, the incident reinforced a broader reality: public-facing AI systems understandably attract hard expectations of accountability. For policymakers pursuing innovation-led strategies, this underscores the need for governance mechanisms that function under pressure.
Closing: Clarity, Restraint and Risk as Organising Principles
The current moment calls for clarity of purpose on all sides of the regulatory landscape. For regulators and lawmakers, this means defining boundaries rather than enumerating acceptable use. The most effective frameworks will remain firm on impermissible actions whilst allowing flexibility in application. The rapidly evolving landscape requires restraint; risk, however, must remain the organising principle.
For organisations and builders, governance is no longer a downstream compliance exercise. It is an operational cornerstone that must shape the full AI supply chain, from ethics-by-design, through research and development, to deployment and monitoring. As ever, firms that embed oversight early will be better positioned as regulatory expectations harden.
Regulation of AI/ML and the wider digital sector in 2026 is no longer solely about playing catch-up with technology. It is about governance maturity. Literary critique of the levers behind AI development, the EU’s formalisation of oversight and the UK’s incremental, trust-led guidance all reflect a growing recognition that AI governance must be framed and managed as a component of broader institutional responsibility.
This is the defining task of the new regulatory era.
Timeline & References
- August 2024: EU AI Act enters into force
- January 2025: AI Cyber Security Code of Practice published
- April 2025: ETSI TS 104 223 published
- May 2025: Software Security Code of Practice published
- January 2026: Software Security Code of Practice updated
- January 2026: Software Security Ambassador Scheme announced
- August 2026: EU AI Act becomes generally applicable