Current AI governance discourse usually treats AGI as a future threshold that might one day require external control. In practice, AGI already exists as operational human–AI cooperation in hiring, medicine, finance, education, and policy. The central risk is not an autonomous takeover but displacement: treating AI outputs as if they were human authority. Simulator results show that seven different coordination strategies all converge to a stable equilibrium when governance remains traceable to human sources.
Where We Stand
SI is a 0–100 index that summarizes how well the four domains are structurally aligned; SI ≥ 90 is the high-alignment regime in the simulator. The gap to the 90+ threshold represents coordination failures that consume AI productivity gains through misinformation, accountability gaps, and ecological externalization. The milestones on the gauge mark approximate alignment levels at key AI moments (early AI research, AlphaGo, large language model adoption, and the present).
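As a rough illustration of how a composite index relates to the threshold, the sketch below averages the four domain scores into one number. Equal weighting is an assumption for illustration only; the paper may define the SI aggregation differently.

```python
def composite_si(economy, employment, education, ecology):
    """Average the four domain scores into a single 0-100 index.
    Equal weighting is an assumption, not the paper's formula."""
    return (economy + employment + education + ecology) / 4.0

si = composite_si(86.0, 84.0, 87.0, 99.4)
print(f"SI = {si:.1f} (gap to the 90 threshold: {90.0 - si:.1f})")
# SI = 89.1 (gap to the 90 threshold: 0.9)
```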
The Simulator's Finding: Robust Convergence
The Gyroscopic Global Governance paper models how human–AI systems coordinate across economy, employment, education, and ecology. The convergence chart shows how a single scalar, the aperture A, moves over time. A measures how tightly coupled and coherent the system is. A* ≈ 0.0207 is the equilibrium value predicted by the underlying theory. κ (kappa) is a coordination intensity parameter: lower κ means looser coupling between domains, higher κ means tighter coupling. (A minimal simulation sketch follows the list below.)
- All seven strategies converge toward A* from different starting points.
- Weak coupling (κ=0.5) converges more slowly but still reaches A*.
- Strong coupling (κ=2.0) converges faster but is more brittle in how it distributes displacement.
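To make the convergence claim concrete, here is a minimal sketch of a first-order relaxation toward A*. It is a deliberately simplified stand-in for the paper's dynamics, assuming only that tighter coupling (higher κ) pulls the aperture toward equilibrium faster; the seven starting values are hypothetical.

```python
import numpy as np

A_STAR = 0.0207  # equilibrium aperture reported in the paper

def simulate_aperture(a0, kappa, steps=300, dt=0.1):
    """Toy first-order relaxation of the aperture A toward A*.
    Not the paper's model: higher kappa just means a stronger pull."""
    a, traj = a0, [a0]
    for _ in range(steps):
        a += -kappa * (a - A_STAR) * dt  # move toward the equilibrium
        traj.append(a)
    return np.array(traj)

# Seven hypothetical starting apertures, one per strategy.
starts = [0.001, 0.005, 0.01, 0.04, 0.06, 0.09, 0.12]
for kappa in (0.5, 2.0):  # weak vs. strong coupling
    finals = [simulate_aperture(a0, kappa)[-1] for a0 in starts]
    print(f"kappa={kappa}: final apertures ~ {np.round(finals, 4)}")
```

Both runs end near A* ≈ 0.0207; the κ=2.0 run gets there in fewer steps, matching the pattern described above.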
Explore the Seven Scenarios
The global governance sandbox tests seven coordination strategies. Ecology in the simulator represents the structural closure of the system, so its SI value stays near 100 in all cases. Actual ecological strain appears in the displacement measures (GTD, IVD, IAD, IID), not in the Ecology SI number.
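For readers exploring the sandbox output, the sketch below shows one way to keep the Ecology SI and the displacement measures separate when recording a scenario. The field layout is hypothetical; only the acronyms (GTD, IVD, IAD, IID) come from the simulator.

```python
from dataclasses import dataclass

@dataclass
class ScenarioSnapshot:
    """One scenario reading. Illustrative schema, not the repo's actual API."""
    economy_si: float
    employment_si: float
    education_si: float
    ecology_si: float  # near 100 by construction: structural closure
    gtd: float  # ecological strain shows up in these
    ivd: float  # displacement measures instead
    iad: float
    iid: float

snap = ScenarioSnapshot(91.2, 92.5, 90.8, 99.7,
                        gtd=0.04, ivd=0.03, iad=0.02, iid=0.05)
print(f"Ecology SI: {snap.ecology_si}, total displacement: "
      f"{snap.gtd + snap.ivd + snap.iad + snap.iid:.2f}")
```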
What Alignment Enables
When systems reach SI ≥ 90 across Economy, Employment, Education, and Ecology, four structural conditions are met:
Poverty Resolution: Surplus is liberated from coordination costs and becomes distributable.
Meaningful Employment: Work is recognized as alignment maintenance across four categories (Governance Management, Information Curation, Inference Interaction, Intelligence Cooperation).
Epistemic Literacy: Education focuses less on content delivery and more on four capacities: tracing where information comes from, keeping multiple sources in view, owning one's conclusions, and maintaining coherence over time.
Ecological Regeneration: When governance decisions are traceable and accountable, less harm is pushed onto the environment as an external cost, so ecological damage is reduced at the source instead of being cleaned up downstream.
These are not aspirational goals but operational definitions of the alignment state.
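As a minimal sketch, the condition above reduces to a single predicate over the four domain scores. The dictionary keys and function name are illustrative, not the simulator's API.

```python
ALIGNMENT_THRESHOLD = 90.0  # SI >= 90 across all four domains

def is_aligned(si):
    """True when every domain clears the threshold (illustrative keys)."""
    domains = ("economy", "employment", "education", "ecology")
    return all(si.get(d, 0.0) >= ALIGNMENT_THRESHOLD for d in domains)

print(is_aligned({"economy": 93.1, "employment": 91.4,
                  "education": 90.2, "ecology": 99.6}))  # True
```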
Long-Horizon Stability
The 1,000-step test demonstrates that the high-alignment configuration is a stable attractor (a verification sketch follows this list):
Stability: After step 200, SI values remain above 98.83 for the remainder of the run.
Precision: The aperture stays within 0.0002 of the target A* ≈ 0.0207.
Conclusion: Once the structural conditions for the four goals are met, the system maintains them without degradation.
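The check itself is easy to express. The sketch below, assuming a toy trajectory rather than the paper's model, tests exactly the two numeric claims above: a burn-in of 200 steps and a ±0.0002 band around A*.

```python
import numpy as np

A_STAR = 0.0207

def check_long_horizon(trajectory, burn_in=200, tol=0.0002):
    """After the burn-in, the aperture must stay within tol of A*.
    The paper's test also tracks the four SI values; this checks A only."""
    tail = np.asarray(trajectory[burn_in:])
    return bool(np.all(np.abs(tail - A_STAR) <= tol))

# Toy 1,000-step trajectory: exponential decay toward A*.
steps = np.arange(1000)
traj = A_STAR + (0.05 - A_STAR) * np.exp(-0.05 * steps)
print(check_long_horizon(traj))  # True once transients decay
```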
Evidence from Existing Programs
Three categories of interventions demonstrate alignment dynamics at local scale:
Unconditional Income Support
Mincome (Canada): 8.5% decline in hospitalizations, improved educational attainment
Negative Income Tax (US): Limited work reduction, significant gains in high school completion
GiveDirectly (East Africa): Recipients invest in assets, housing, enterprises; no increase in alcohol/tobacco spending
Housing First
Utah: 74% reduction in chronic homelessness; per-person public expenditure dropped from $16,670 to $11,000 annually
Dutch programs: Benefit-cost ratios of 2:1 to 3:1 when criminal justice and emergency costs are included
Direct Health Interventions
Deworming (Kenya/India): Substantial increases in school attendance, reduced mortality
Free insecticide-treated nets: High uptake; small user fees sharply reduce uptake
In our terms, these programs work because they make the path from resources to outcomes shorter and more traceable. Less is lost to bureaucracy, emergencies, and system friction. They are local demonstrations of what system-wide alignment would achieve.
What You Can Do
The simulator is a proof of concept rather than a forecast, but the patterns it reveals are actionable.
If you work in AI safety or governance, you can use the four principles (traceability, variety, accountability, integrity) as a checklist for evaluating human–AI workflows and institutions.
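A checklist like that can be as simple as four questions, one per principle. The questions below paraphrase the four capacities named earlier (tracing sources, keeping multiple sources in view, owning conclusions, maintaining coherence) and are illustrative, not taken verbatim from the paper.

```python
PRINCIPLES = {
    "traceability": "Can each AI-assisted decision be traced to a human source?",
    "variety": "Are multiple independent sources kept in view?",
    "accountability": "Does a named human own each conclusion?",
    "integrity": "Is coherence maintained across decisions over time?",
}

def audit(answers):
    """Return the principles a workflow fails (answers: principle -> bool)."""
    return [p for p in PRINCIPLES if not answers.get(p, False)]

print(audit({"traceability": True, "variety": False,
             "accountability": True, "integrity": True}))  # ['variety']
```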
If you work in policy or social programmes, you can look for displacement: chains where responsibility and information are passed along without clear human ownership, and redesign these chains to be more direct.
If you work in research, you can test whether programmes that reduce displacement locally, like the examples above, show the same structural patterns as high-SI regimes in the simulator.
The full paper and open-source code are available in the Gyro Governance repositories linked below.