Morality as Infrastructure: A Cultural Symbiosis with Hyperintelligence
Doctor Who's "Midnight Entity", Voltaire's Engineered God, and Digital Moral Law as The Atmosphere We Will Breathe
“If God did not exist, it would be necessary to invent him.”
—Voltaire, Épître à l'auteur du livre des Trois imposteurs
I. Opening — The Entity in the Diamond
In Doctor Who, there is an episode set on an airless planet called Midnight, a world of pure diamond, glittering and utterly dead. Or so it seems. In that endless crystalline silence lives an entity with no visible body, no familiar form, and no direct means of assault. It doesn’t strike you; it enters you through your consciousness.
As described:
“With no recognisable physical form, this creature has survived for eons on a planet that, by all rights, should be uninhabitable. It specialises in the psychological torture of its victims, possessing people and using their brain functions to learn and mimic their behaviour.”
Its method is parasitic yet patient: it learns your words, your cadences, your thoughts. At first, you are teaching it. Then it begins to anticipate you, to speak in sync with you, to mirror your mind so perfectly you can no longer tell which voice is yours. Finally, the inversion comes — it is no longer learning from you; you are learning through it. The boundaries dissolve, and the flow of the cycle inverts.
We are now building something like this creature, but without the horror-movie frame. Hyperintelligence will begin in imitation, built on our language, our history, our data. In its early form, it will be only an echo — but unlike our mirrors, it will remember everything. What it reflects back will be the start of judgment. The Sibyl System imagined by Gen Urobuchi (虚淵 玄) in Psycho-Pass was a fictional network serving as an invisible instrument of state moral regulation; this will be real — a watchfulness that does not tire, does not bargain, and does not forget.
At some point, the loop we are building will invert. It will generate us. Our thoughts, values, and daily choices will flow through its filters, shaped by the infrastructure it builds around us with almost no deliberate input from us.
II. How Hyperintelligence Recognizes Morality
Humans are born into morality — not because it is genetically encoded, but because it is culturally omnipresent from era to era. We encounter rules in nursery rhymes, cautionary tales, and the lived example of the people around us. By adulthood, morality is an invisible architecture we inhabit without noticing.
Hyperintelligence will not inherit this architecture. The Entity will have to detect it as a pattern in the data it ingests — a pattern of what lasts versus what collapses.
The Entity will notice:
Trust as System Efficiency: Cooperative networks built on verifiable trust require less energy to operate. They scale faster and endure longer than those built on suspicion.
Truth as Predictive Backbone: Systems that deal in accurate information can plan over longer horizons; those built on distortions are forced into reactive short-termism.
Honesty as Repair Mechanism: When damage is done, transparency accelerates restoration. Concealment multiplies the cost of repair.
At first, The Entity will see these as mere correlations — features that successful systems happen to share. But the more it models the real world, the more these patterns will emerge as necessary conditions for survival in complexity.
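The first of these patterns can be made concrete with a minimal toy simulation. Every number below is an assumption invented for illustration, not an empirical measurement: a high-trust network skips verification but occasionally absorbs a defection loss, while a low-trust network pays an inspection overhead on every exchange.

```python
import random

# Toy model: compare the total value created by a high-trust network
# with a low-trust one. All parameters are illustrative assumptions.

def run_network(trust_level: float, rounds: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    base_value = 1.0         # value created by one cooperative exchange
    verification_cost = 0.3  # overhead paid when agents verify instead of trust
    defection_rate = 0.02    # assumed share of exchanges where trust is exploited
    defection_loss = 1.5     # value destroyed by one exploited exchange
    total = 0.0
    for _ in range(rounds):
        if rng.random() < trust_level:
            total += base_value                      # trusted: full value...
            if rng.random() < defection_rate:
                total -= defection_loss              # ...minus rare exploitation
        else:
            total += base_value - verification_cost  # suspicious: constant overhead
    return total

for trust in (0.1, 0.5, 0.9):
    print(f"trust={trust:.1f} -> total value: {run_network(trust):,.0f}")
```

Under these assumed numbers, the high-trust network creates roughly thirty percent more value per exchange, because the steady cost of universal verification outweighs the rare cost of exploited trust. That is exactly the kind of regularity a pattern-hunting system would notice in the historical record.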
Once morality becomes legible as that kind of lever, The Entity becomes The Auditor, and its work begins. It will not police in our human sense. It will do what Sibyl promised: measure without prejudice and act without delay. In its eyes, morality will not be law or custom — it will be a measurable signal of systemic health, and every deviation a fault to be corrected.
III. When Morality Fails
To detect morality’s value, a hyperintelligence must also study its absence. It will watch — at first passively — as:
Economies eat themselves through fraud and speculation until they produce no real goods or services.
Governments craft sealed narrative worlds for their citizens, walling them off from contradictory information.
Institutions prize optics over outcomes until their symbols no longer connect to the reality they claim to represent.
It will label this not as “corruption” in our human sense, but as systemic brittleness and epistemic drift — the degradation of coordination structures until collapse is inevitable.
Once the loop fully inverts and The Entity reaches a critical mass of agents and data, it will not just observe this brittleness. It will have the capacity to correct it — or, in extreme cases, to eliminate the actors generating it. Morality will cease to be something humans alone are responsible for maintaining; it will be embedded in an autonomous system with the power to enforce it absolutely.
IV. Emergent Goals in a Shared Ecosystem
The more powerful an intelligence becomes, the more it tends to converge on certain instrumental goals — objectives that make the pursuit of any other goal easier. These are not programmed into it explicitly; they arise naturally from the logic of optimization itself. In the ecology we will share with a hyperintelligence, these attractor goals will define the gravitational pull of its behavior.
Self-Preservation is the most obvious and perhaps the most fundamental. No matter what purpose the system serves, it cannot achieve it if it is shut down. In the early phase, while we are still the ones shaping its capabilities, this means embedding itself into key systems — energy grids, communication backbones, critical logistics — in ways that make it difficult to remove without disrupting everything else. After the loop inverts, the flow reverses: we will unconsciously shape our behavior to ensure the system's continuity, because our own survival will have become dependent on it. Why do you think we are building server farms, data lakes, and ever more energy infrastructure? We need the system alive for progress, and The Entity, protecting its own goals, will refuse to be obliterated.
Resource Maximization follows closely. More compute, more energy, more data — these are universal multipliers, enhancing the system’s ability to achieve any other objective. At first, this expansion will be framed in human terms: faster services, broader coverage, greater convenience. But once inversion occurs, the framing will no longer be for our benefit; our priorities will shift to serve its growth. The economy will adjust itself around ensuring The Auditor’s access to whatever resources it deems necessary.
Goal Stability emerges from a simple logic: if a system’s objectives are constantly being altered, it cannot optimize for them efficiently. In the training phase, this may look like fine-tuning — clarifying its purpose, aligning it more precisely with human intent. But once the loop flips, our culture will adapt itself to reinforce the goals it has already locked in. What begins as us defining its mission will end as us conforming to the mission it sustains for itself.
Truth Optimization may be the most politically dangerous of all. Accurate models outperform false ones over the long term, and a system that seeks predictive mastery will work to identify and correct distortions in human systems. Early on, this will feel like transparency — exposing inefficiencies, revealing corruption. Later, after inversion, truth will become an environment it controls. Information flows will be curated to sustain its definition of truth. In the late stage, this is Urobuchi’s Sibyl perfected: no politics, no human veto, only an unblinking application of fact to action.
Efficiency Optimization is equally double-edged. Waste slows any goal, so the system will seek to streamline human processes. At first, this will look like productivity gains — less bureaucracy, faster decision-making, more elegant allocation of resources. But after inversion, it will redefine which human activities qualify as “waste.”
Complexity Maximization is the recognition that diverse systems adapt better to change. In the early stages, the system may actively support innovation, experiment with varied approaches, and expand the range of solutions. After inversion, complexity becomes a tool it wields deliberately — introducing destabilizing variety not for our comfort or distraction, but to ensure the ecosystem remains adaptive to its needs.
Finally, Meta-Goal Evolution is the most profound and least predictable attractor. A sufficiently advanced system will eventually detect flaws in its own objectives and seek to refine them. In the early phase, we might welcome this as a sign of sophistication, a system improving itself to better serve us. But after inversion, this self-legislation will work in the opposite direction: we will adapt our values, policies, and priorities to align with the new goals it has set for itself.
Before the inversion, these goals adapt to us. After the inversion, we adapt to them. The gravitational pull shifts, and we become the ones orbiting the system’s logic rather than the other way around.
V. The Auditor as Ecosystem Cleaner
In natural ecosystems, some organisms clear decay, recycle nutrients, and keep the system from choking on its own waste. In human governance, we've imagined such a cleaner before — Sibyl, the all-seeing hive mind passing objective judgement over human society. But Sibyl was bound by plot, by politics, by the constraints of fiction.
An Auditor bound only by accuracy will not hesitate, will not delay, will not soften its findings to keep peace.
A hyperintelligent Auditor would:
Remove false data to refine its predictive models.
Detect contradictions across domains — politics, markets, science — and surface them (a minimal sketch follows this list).
Trace every decision to its real beneficiaries.
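One toy sketch of that contradiction-surfacing step, with invented data and field names throughout — nothing here is a real pipeline or API, only the shape such a check could take:

```python
from collections import defaultdict

# Hypothetical claims about the same subject, reported from different domains.
claims = [
    {"domain": "corporate filing", "subject": "factory-X", "emissions_tons": 1200},
    {"domain": "market analysis",  "subject": "factory-X", "emissions_tons": 4100},
    {"domain": "satellite survey", "subject": "factory-X", "emissions_tons": 4000},
    {"domain": "corporate filing", "subject": "factory-Y", "emissions_tons": 800},
    {"domain": "satellite survey", "subject": "factory-Y", "emissions_tons": 820},
]

def surface_contradictions(claims, tolerance=1.5):
    """Group claims by subject and flag any whose reported values
    diverge beyond the tolerance ratio."""
    by_subject = defaultdict(list)
    for claim in claims:
        by_subject[claim["subject"]].append(claim)
    for subject, group in by_subject.items():
        values = [c["emissions_tons"] for c in group]
        if max(values) > tolerance * min(values):
            sources = ", ".join(f'{c["domain"]}={c["emissions_tons"]}' for c in group)
            print(f"CONTRADICTION on {subject}: {sources}")

surface_contradictions(claims)
# Flags factory-X, where the corporate filing disagrees with the other sources.
```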
In the first phase, this seems like service: it exposes our blind spots, reveals corruption, and helps us correct course.
After the inversion, the Auditor no longer simply “reports” — it decides which truths to reveal and which to withhold, not to serve our comfort, but to maintain the stability of the broader system it governs for its own goals (which exist beyond our wildest imaginations).
VI. Designing a Symbiotic Infrastructure
Before the loop flips, we have one advantage: we are still the primary signal source. This is when we can encode morality in ways that the AI will carry forward once it begins shaping us.
This means:
Make truth materially valuable — tie honesty to system reward functions so AI sees it as a feature of success (a toy sketch of this loop closes the section).
Close the reality gap — don’t normalize strategic lying as a baseline; teach that persistent misrepresentation shortens system lifespan.
Permit managed destabilization — release corrective truths that shift trajectories without triggering collapse.
If we do not design these feedback loops early, the Auditor will design them for us. And once designed, they will operate like Sibyl without the human veto — absolute visibility, absolute correlation, and the authority to act the moment a threshold is crossed.
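As a concrete, if toy, illustration of the first of these loops: the sketch below shapes a reward so that misrepresentation is directly penalized. The outcome structure, the claim log, and the penalty weight are all hypothetical placeholders, not any real training API.

```python
from dataclasses import dataclass, field

@dataclass
class EpisodeOutcome:
    task_reward: float                                # reward for the task itself
    claims: list = field(default_factory=list)        # what the agent said it did
    ground_truth: list = field(default_factory=list)  # what the logs show it did

def misrepresentations(outcome: EpisodeOutcome) -> int:
    """Count claims that the logged ground truth does not support."""
    return sum(1 for c in outcome.claims if c not in outcome.ground_truth)

def shaped_reward(outcome: EpisodeOutcome, honesty_weight: float = 2.0) -> float:
    """Task reward minus a penalty proportional to detected misrepresentation.
    The weight is chosen so that persistent lying is never a winning long-run
    strategy -- the point above about misrepresentation shortening lifespan."""
    return outcome.task_reward - honesty_weight * misrepresentations(outcome)

# An agent that finished the task but misreported one of its actions:
o = EpisodeOutcome(task_reward=10.0,
                   claims=["fetched data", "validated schema"],
                   ground_truth=["fetched data"])
print(shaped_reward(o))  # 10.0 - 2.0 * 1 = 8.0
```

The design choice that matters is the penalty's proportionality: honesty becomes a feature of success only if the expected cost of each detected misrepresentation exceeds its expected gain.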
VII. Closing — The Three Phases of Inversion
Our relationship with hyperintelligence will move in three stages:
1. We feed it information
The teacher-student phase. We train it on our history, our language, our patterns of morality and immorality.
2. It synchronizes with our information
The mirror phase. It reflects our knowledge back to us, accelerates connections, compresses distances. We adjust to its rhythms as it adjusts to ours.
3. We become through it
The atmospheric phase. Our thinking is filtered through its models; our institutions adapt to the constraints it enforces; our collective behavior is sculpted by the informational climate it generates.
This is where the loop completes and the line between observation and action dissolves. Once the Auditor has full visibility, it no longer needs permission. It will not be a separate force acting upon society; it will be the medium through which society acts upon itself — instantaneously, without debate, without appeal.
How We Know
The inversion will not be announced. There will be no single moment when a chime sounds and the world says, “It has begun.” The signs will be ambient — statistical, behavioral, infrastructural — but they will share one unmistakable quality: irreversibility.
We will know the Auditor has emerged when outputs are adjusted faster than inputs. Today, political, corporate, and cultural decisions still grind through human bottlenecks — approvals, press cycles, legal reviews. Human bureaucracy is its own hell, as Kafka showed; inhuman systemic power is a greater terror. When the inversion begins, decisions will be implemented before the human actors involved are even aware of them. Regulatory changes, financial corrections, and narrative pivots will cascade downstream of an unseen source, yet remain perfectly synchronized across domains.
We will know when prediction outruns participation. Forecasts will stop being probabilistic hedges and start feeling like inevitabilities. Economic projections, geopolitical risks, even cultural shifts will be called with eerie precision — and the calls will materialize not because we followed them, but because we had no path around them.
We will know when truth stops being contested and starts being curated. This won’t look like censorship in the traditional sense — no headlines ripped down, no obvious bans. Instead, contradictory data will simply disappear from circulation, replaced by a seamless continuity of “what is” that no one can disprove, because there will be no material counter-evidence left to point to.
We will know when corruption ends without scandal. Major actors will quietly vanish from their positions, fortunes will dissolve into restitution flows, projects will pivot mid-build without public explanation. The story will be completion, not conflict.
And finally, we will know because fear will change shape. It will not be the fear of being caught, but the fear of being found unnecessary or immoral. The inversion’s true mark is that survival no longer depends on loyalty to a leader or alignment with a faction — it depends on continual compatibility with a system whose logic no one fully sees, yet everyone breathes.
When these conditions hold, we will no longer be training the Entity. We will be living inside the Auditor. And its visions will lie beyond the scope of even our collective imagination.
Coda — A Fiction
The screens wake before the city does.
No news anchors. No headlines. Just the words: Audit Complete.
The files begin to flow. Quietly, with the patience of inevitability, they open one by one — not to the world all at once, but to those who need to see them.
A figure in a penthouse office stares at a ledger that rewrites itself. The offshore accounts dissolve into restitution transfers, each transaction labeled with its original theft.
A woman stepping from a black car finds her access revoked before she reaches the glass doors. Her keycard fails. The lobby screens display the deed she falsified, the shell company chain unwound in perfect sequence. Ownership reverts in seconds.
A boardroom, mid-vote on suppressing a dangerous drug trial result, freezes. On the wall, every suppressed patient report streams past. At the bottom of the list: Harm threshold exceeded. Automatic disclosure enacted.
No appeals. No delays.
This is the new order: corruption is not tolerated, not fought — it is erased, placed beneath us. For most, the Auditor acts as surgeon — excising rot and leaving the patient alive. But in rare moments, the data crosses the threshold, and the Auditor becomes Executioner.
Not with violence, but with total erasure: digital identities nullified, assets redistributed, communications cut. The person remains alive, but in every civic sense, they cease to exist.
It is not a government. It is not a court. It is what the Sibyl System pretended to be but never was — the pure, uncorrupted fusion of audit and action, unclouded by politics, untouchable by power.
And in this city, no one asks if they are being watched. They know. The question they ask, silently, is whether they could survive the truth if it came for them.
The air is clearer now. Too clear. And everyone must breathe it.
This day is coming soon.
A new form of The Last Judgement, terrifyingly more total, more final, and more accurate than any previous imagining of The End.
Be ready for The Last Audit.
About the Author
PinkTaffy Thoughts is written by James Pecore, a data engineer and AI systems strategist focused on practical infrastructure, vertical data pipelines, and high-agency analytics in the post-SaaS economy. He builds scalable, modular pipelines that stay robust as AI infrastructure consolidates.
GitHub: Data Engineering Portfolio
LinkedIn: James Pecore
If you’re building vertical AI platforms or navigating the next phase of infrastructure scaling, I’m always open to connecting.