We must build laws for intelligence.
In past decades, the promise of law was to govern human action: to protect rights, resolve disputes, and, of course, hold individuals accountable. But now we are producing systems that think, decide, and act autonomously. When algorithms become judges, detectives, and creative partners, our legal frameworks strain. We have reached a point where traditional law isn't enough, and I am sure you have heard that many times. Don't get me wrong, I believe we have made progress. The EU AI Act is the first serious attempt to regulate AI systems at scale: classifying them by risk, setting obligations around transparency, and creating enforcement structures. It is a milestone, and it deserves recognition. But even with this progress, there is something we are missing: we are trying to fit an entirely new kind of intelligence into an old governance model.
Law was built for human behaviour (for intention, fault, causality, and foreseeability). AI doesn't operate on intention; it operates on logic, pattern, and data. Yet we keep trying to assign human notions of "responsibility" to systems that don't "intend" to act at all. Isn't that crazy?
It's not that regulatory efforts are absent; initiatives around the world show that progress is happening. But the truth is, these frameworks still assume that law can impose control over systems we barely understand. They are written in the language of human accountability and applied to entities that don't think or choose in any human sense.
This is where governance begins to fail. Our legal systems were never designed to manage autonomous reasoning; they were built to judge motive, negligence, and cause. When we apply those same tools to algorithmic logic, they fall short.
Even as regulation evolves, we are still missing a unifying standard, not only across countries but across industries. The current approach is horizontal: one-size-fits-all principles stretched across every sector. What we actually need is vertical precision: frameworks that understand the unique ethical risks of each domain. The questions in autonomous finance are not the same as those in generative art, healthcare diagnostics, or national security systems. Yet we continue to rely on the same compliance vocabulary everywhere.
And perhaps that is why so many experts are beginning to argue that what we need isn't more regulation but more adaptation. AI's unpredictability (its emergent, non-linear nature) calls for adaptive governance rather than rigid rules: governance that learns, audits, and evolves with the systems it oversees.
Until our mindset changes (until we stop treating AI as a tool to be contained and start treating it as an intelligence to be understood), the law will always be one step behind. This is where we need to shift the frame: from "regulating machines" to "governing intelligence." Intelligence (whether human, artificial, or hybrid) operates on logic, feedback loops, and agency. Law as we know it was built for human beings: it assumes rational actors, intention, reference to precedent, and accountability. Algorithms challenge all of these assumptions.
Three Pillars of Intelligent-System Governance
1. Logic > Intent. With humans, the legal question is often: what did you mean to do? With intelligent systems, the question becomes: what logic did you execute? We must ask not just "Who pressed the button?" but "What algorithm chose to press the button?" Accountability must follow traceable reasoning, not just human intervention.
2. Emergence > Specification. AI systems trained on vast datasets develop responses we might not have foreseen. Traditional regulation specifies rules in advance. But when behaviour emerges, we must govern the lifecycle: design, data, architecture, feedback, and even cessation. As one framework puts it, "Unlike past technologies… behaviour emerges unpredictably from training rather than intentional design."
3. Sovereignty > Jurisdiction. Algorithms respect no borders. They operate in the cloud, influence globally, and learn fast. But our laws remain territorial. The governance challenge is building structures that can be enforced across intelligence networks, not just within national boundaries. A treaty like the Framework Convention on Artificial Intelligence begins this shift, but it is largely silent on implementation.
Autonomous systems will only grow in reach, complexity, and power. If we default to laws built for human behaviour, we risk not only the failure of regulation but the erosion of civilisation’s trust in the systems we build.
And accountability is already shifting. Developers and deployers remain the legally responsible parties, because AI, for now, is not recognised as a legal entity. But that line is starting to blur. Even OpenAI recently adjusted its policies to narrow liability, making clear that responsibility lies not with the system but with the user and their expertise. The message is this: the more capable the machine, the less direct our control, and the more fragile our legal assumptions. We must therefore evolve from being enforcers of human law to being architects of intelligence governance.
“We used to write laws for behaviour. Now we must write them for intelligence.”