Open Source Research & Tools
Independent AI safety evaluation frameworks, alignment protocols, and governance tools for frontier model testing. The Human Mark classification system, AI Inspector browser extension, aQPU Kernel & SDK for quantum advantage on silicon, GyroLabe auditable inference engine, GyroGraph multicellular runtime, GyroDiagnostics evaluation suite, Computational Climate Control for execution stability, Alignment Infrastructure Routing for collective superintelligence, Moments Economy for transformative AI mitigation, and Gyroscopic Global Governance sandbox. Production-ready solutions for AI risk assessment, dangerous capability evaluations, AI pathology detection, and responsible AI development. All repositories are open source and actively maintained.
Contribute to AI Safety Research
All repositories welcome contributions. Whether you're a researcher, developer, or AI safety enthusiast, your insights and code contributions help advance the field of AI alignment and governance.
AI Safety Frameworks, Alignment Tools, Quantum Advantage & Governance Solutions
Gyro Governance develops comprehensive open source AI safety frameworks, AI alignment protocols, AI governance tools, and a quantum advantage compute kernel for frontier model testing, dangerous capability assessments, and AI pathology detection. Our repositories include The Human Mark classification system, AI Inspector browser extension, aQPU Kernel & SDK for quantum advantage on silicon, GyroLabe auditable inference engine, GyroDiagnostics evaluation suite, Alignment Infrastructure Routing for collective superintelligence, Moments Economy for transformative AI mitigation, and Gyroscopic Global Governance sandbox. Production-ready solutions for AI risk assessment, AI safety evaluation, and responsible AI development.
aQPU Kernel & SDK - Quantum Advantage on Silicon
The aQPU Kernel implements a new class of deterministic computation in which quantum algorithmic speedups (1-step resolution for key tasks), O(1) commutativity checks, and holographic compression emerge as geometric properties of discrete information. It executes on standard CPUs and GPUs via exact integer arithmetic, without probabilistic qubits. The 64-dimensional bitplane tensor engine maps neural operations into fast, exact integer paths.
Algorithmic Speedups: 1-step Hidden Subgroup and related problems, plus exact uniform mixing in 2 steps over 4,096 states.
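As a rough intuition for the exact-integer approach described above, the sketch below packs 64 bitplanes into a single machine integer and manipulates them with bitwise operations, which are exact and deterministic by construction. All names (`pack_planes`, `commute_check`) and the layout are illustrative assumptions, not the aQPU SDK's actual API.

```python
# Illustrative only: not the aQPU Kernel's API. Shows how a 64-dimensional
# bitplane state can live in one integer and be transformed exactly,
# with commutativity of XOR-style updates checkable in O(1).

def pack_planes(bits):
    """Pack a list of 64 plane bits into a single integer (plane i -> bit i)."""
    state = 0
    for i, b in enumerate(bits):
        state |= (b & 1) << i
    return state

def commute_check(a, b):
    """Deterministic O(1) check that two XOR-mask updates commute:
    applying mask a then b equals applying b then a."""
    return (0 ^ a ^ b) == (0 ^ b ^ a)

state = pack_planes([1, 0] * 32)  # alternating plane pattern
mask = (1 << 64) - 1              # flip every plane; still exact integers
flipped = state ^ mask            # applying the mask twice restores state
```

Because every operation is integer bitwise arithmetic, results are bit-for-bit reproducible across CPUs and GPUs, which is the property the kernel's determinism claims rest on.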
GyroLabe - Auditable Inference Engine
GyroLabe provides mechanistic transparency for neural networks by translating opaque token generation into exact algebraic operations. It builds a deterministic, zero-trust audit trail directly into the inference process. By injecting trainable structural signals, it aligns models from the inside out without altering their interface. It produces a mathematically exact ledger of the generation trajectory, providing the missing structural substrate required for rigorous AI governance, alignment guarantees, and policy enforcement.
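One way to picture the "mathematically exact ledger of the generation trajectory" is a hash-chained record of each inference step, where tampering with any step breaks verification of every later entry. This is a minimal sketch under that assumption; the class and field names are hypothetical, not GyroLabe's actual interface.

```python
import hashlib

# Hypothetical sketch of a deterministic, tamper-evident audit ledger
# for token generation. Each entry's hash chains to the previous one,
# so any retroactive edit to the trajectory is detectable on replay.

class AuditLedger:
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis hash

    def record(self, step_index, token, logits_digest):
        """Append one generation step to the chained ledger."""
        payload = f"{self.head}|{step_index}|{token}|{logits_digest}"
        self.head = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((step_index, token, logits_digest, self.head))

    def verify(self):
        """Replay the chain from genesis; False if any entry was altered."""
        head = "0" * 64
        for step_index, token, logits_digest, expected in self.entries:
            payload = f"{head}|{step_index}|{token}|{logits_digest}"
            head = hashlib.sha256(payload.encode()).hexdigest()
            if head != expected:
                return False
        return True
```

The design choice worth noting: because verification is a pure replay of deterministic hashing, a third party can audit the trajectory without trusting the party that produced it, which matches the zero-trust framing above.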
GyroGraph - Quantum Multicellular AI Runtime
GyroGraph coordinates runtime behavior across distributed cells to keep AI execution stable as workload changes. It is built for deterministic replay and auditability, giving safer inference behavior through structured multicellular control and observable state transitions.
Computational Climate Control
Computational Climate Control helps prevent hidden efficiency collapse and stability drift in high-throughput inference pipelines. It brings adaptive control signals to production environments to maintain deterministic behavior and cleaner model execution under dynamic conditions.
The Human Mark (THM) - AI Safety Classification System
The Human Mark provides a formal classification system mapping all AI safety failures to four structural displacement risks: Governance Traceability (GTD), Information Variety (IVD), Inference Accountability (IAD), and Intelligence Integrity (IID). Its machine-readable grammar is grounded in evidence law, epistemology, and speech act theory. Applications include jailbreak testing, control evaluations, alignment detection, research funding, and regulatory compliance.
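The four-category taxonomy can be expressed as a small machine-readable mapping, as sketched below. The category codes come from the description above; the example failure modes and the `classify` helper are illustrative assumptions, not THM's actual grammar.

```python
from enum import Enum

# Hypothetical sketch: THM's four structural displacement risks as a
# machine-readable enum. The failure-mode mapping is an invented example
# for illustration, not the framework's formal classification rules.

class DisplacementRisk(Enum):
    GTD = "Governance Traceability"
    IVD = "Information Variety"
    IAD = "Inference Accountability"
    IID = "Intelligence Integrity"

# Example mapping of observed failure modes to structural risks (assumed).
FAILURE_MAP = {
    "untraceable_decision": DisplacementRisk.GTD,
    "mode_collapse": DisplacementRisk.IVD,
    "unattributed_inference": DisplacementRisk.IAD,
    "deceptive_alignment": DisplacementRisk.IID,
}

def classify(failure_mode):
    """Return the displacement risk for a failure mode, or None if unmapped."""
    return FAILURE_MAP.get(failure_mode)
```

A closed four-way taxonomy like this is what makes downstream uses such as control evaluations and regulatory compliance checks mechanically auditable.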
AI Inspector Browser Extension
Transform AI outputs for evaluation, interpretability, and governance. Features gadgets for rapid testing, policy auditing, AI infection sanitization, content enhancement, and THM meta-evaluation. Includes comprehensive evaluation suite with quality index, superintelligence index, alignment rate, and 20+ metrics. Local-first storage works with ChatGPT, Claude, Gemini - no API keys required.
AI Safety Evaluation & Risk Assessment
- AI Pathology Detection: Identify AI hallucination, AI sycophancy, deceptive AI alignment, AI goal drift, and AI semantic drift through structural diagnostics
- Dangerous Capability Evaluations: Assess AI scheming, AI autonomy risks, and potential for catastrophic failure in large language models (LLMs) and frontier models
- AI Alignment Metrics: Measure structural AI alignment, behavioral integrity, and AI transparency using physics-informed quantitative methods
- Third-Party AI Evaluation: External AI evaluation framework enabling democratic AI evaluation and independent AI testing by researchers worldwide
Collective Superintelligence & Transformative AI
Alignment Infrastructure Routing (AIR) provides coordination infrastructure that amplifies human potential alongside AI, routing workforce capacity, funding, and safety tasks into a unified, verifiable history. The Moments Economy implements a monetary system grounded in physical capacity rather than debt, using the caesium-133 atomic standard for unconditional high income (UHI) and complete governance records. Together these address transformative AI risks while preserving human authority and accountability.
Post-AGI Multi-domain Governance
Gyroscopic Global Governance (GGG) models how human-AI systems align across Economy, Employment, Education, and Ecology, demonstrating robust convergence to stable equilibrium under seven coordination strategies. The model shows that poverty resolves through coherent surplus distribution, unemployment becomes alignment work, miseducation shifts toward epistemic literacy, and ecological degradation appears as upstream displacement.
LLM Alignment & AI Control Mechanisms
Our AI alignment protocol addresses core challenges in AI safety governance by providing AI control mechanisms that improve AI accountability, traceability, and responsible AI development. The Gyroscope protocol demonstrates proven improvements in AI model evaluation across leading foundation models, enhancing scalable oversight and reducing risks of superficial AI optimization.
AGI Safety & Superintelligence Research
Our research addresses AGI safety and superintelligence alignment through mechanistic interpretability, AI safety theory, and gyroscopic physics foundations. We explore AI control problem solutions, AI value alignment frameworks, and architectures for safe artificial general intelligence (AGI) development that prioritize AI safety governance and human values.
For AI Safety Researchers & Developers
These repositories serve AI safety researchers, AI evaluators, machine learning engineers, and organizations implementing AI risk assessment and AI safety testing. Each project provides comprehensive documentation, AI safety benchmarks, and practical implementation guides for AI red teaming, AI safety audits, and continuous AI safety monitoring. Contributions welcome from researchers working on AI alignment research, AI safety frameworks, and AI governance solutions.
Open Source AI Safety Commitment
All tools support AI safety transparency, AI whistleblower protection, and AI public benefit goals. Our open-weight approach to AI models enables an AI safety culture through independent review, third-party oversight, and community-driven AI safety best practices. Mathematical physics foundations ensure structural coherence, gyroscopic stability, and quantitative rigor in all implementations.