AI Governance: The Fourth Law We Need for the Fifth Industrial Revolution
- Owen Tribe
- Mar 17
- 3 min read

In my role as CTO, I've been contemplating the intersection of AI governance, regulatory compliance, and the human elements that make transformative technologies succeed. If we're being honest, most of us in the industry are donning tin foil hats and peering into the future with equal measures of excitement and trepidation.
The velocity of AI advancement has outpaced our ability to regulate it effectively. Like watching a toddler suddenly sprint across the room with scissors, we're simultaneously impressed by the capability and terrified by the potential consequences.
Asimov's Laws: More relevant than ever
You'll likely recall Isaac Asimov's Three Laws of Robotics, proposed in 1942:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These were science fiction then. Today, they're the foundation of practical AI ethics discussions worldwide. But I believe we need a fourth law – one that addresses the unique challenges of generative AI and large language models.
The Fourth Law: Transparency
Fourth Law: An AI must not deceive a human by impersonating a human being.
This isn't merely about chatbots passing the Turing test. It's about the fundamental relationship between humans and AI systems in a world where the line between human and machine-generated content blurs daily.
Implementation of this Fourth Law would require:
Mandatory AI disclosure in direct interactions
Clear labelling of AI-generated content
Technical standards for AI identification
Legal frameworks for enforcement
Educational initiatives to improve AI literacy
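To make the second and third requirements above concrete, here is a minimal sketch of what machine-readable labelling of AI-generated content could look like. This is purely illustrative – the `ContentLabel` fields and the disclosure-header format are my assumptions, not an existing standard.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class ContentLabel:
    """Hypothetical provenance label attached to a piece of published content."""
    author: str
    ai_generated: bool
    model: Optional[str] = None  # which system produced it, if AI-generated

def label_content(text: str, label: ContentLabel) -> str:
    """Prepend a machine-readable disclosure header to the content."""
    header = json.dumps(asdict(label))
    return f"<!-- ai-disclosure: {header} -->\n{text}"

# Example: an AI-drafted announcement carries an explicit disclosure
# that both humans and downstream tools can read.
post = label_content(
    "Welcome to our product update...",
    ContentLabel(author="acme-bot", ai_generated=True, model="example-llm"),
)
print(post.splitlines()[0])
```

In practice a real scheme would need cryptographic signing so the label cannot simply be stripped, which is where the technical standards and legal frameworks in the list above come in.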
From Theory to Implementation
Our assessments for Industry 4.0 readiness include evaluating governance frameworks across three axes – culture, systems, and infrastructure – each examined through the lenses of people, machines, and data.
Remember my layer cake metaphor for smart manufacturing? The same principles apply to AI governance. Without the right culture layer at the top, the technical capabilities beneath it will collapse like a poorly constructed Victoria sponge on a rainy Bake Off day.
I've seen organisations roll out impressive AI capabilities without considering governance implications, only to find themselves floundering when regulatory questions arise. It's like building a Formula 1 car without installing brakes – impressive until the first corner.
The Trust Economy
In "Why Web 3.0 makes me less angry", I explored how blockchain technology could establish a trust economy. This same principle must extend to AI.
The formula I've developed for measuring Industry 4.0 transformations applies equally well to AI governance:
Simply put: successful AI implementation is the square root of how much your people, assisted by AI, can accomplish divided by the volume of data points generated – all driven by your company culture.
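One way to render that description in symbols – the exact form is the author's, and these symbol names are my assumption – might be:

```latex
T = C \times \sqrt{\frac{P_{\mathrm{AI}}}{D}}
```

where \(T\) is transformation success, \(P_{\mathrm{AI}}\) is what your people can accomplish when assisted by AI, \(D\) is the volume of data points generated, and \(C\) is the culture multiplier that drives the whole thing.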
Beyond Regulation: The Human Element
The pandemic taught us that transformation can happen overnight when necessary. Digital adoption that would have taken years occurred in weeks. The same urgency must now apply to AI governance.
Just as I argued in "Digital Transformation is not about process", AI governance isn't about ticking regulatory boxes. It's about fundamentally rethinking how humans and machines co-exist, with humans maintaining meaningful control while gaining unprecedented capabilities.
From Healthcare to Manufacturing
Whether in healthcare (We need to start building Healthcare 5.0, now) or manufacturing (Smart manufacturing is best served as a layer cake), governance principles remain consistent:
Culture drives adoption
People must remain at the centre
Data must be managed responsibly
Systems must be transparent and accountable
A Call to Action
We're at an inflection point. Like the dawn of the Internet in the 1990s, AI presents both extraordinary opportunities and existential challenges. The organisations that will thrive are those that embrace governance not as a constraint but as an enabler.
If you're wrestling with AI governance questions, don't approach them as merely technical problems. They're human challenges that require human solutions, augmented by the very technologies we're seeking to govern.
In the words of the famous 1970s Memorex cassette tape adverts: "Is it live, or is it Memorex?" The answer needs to be crystal clear in our AI-augmented future. Anything less undermines the trust that's essential for these technologies to deliver their promised benefits.
As Isaac Asimov himself once said,
"The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom."
Let's prove him wrong.