In Part I of our exploration into AI Safety, we outlined the foundational reasons why it should be treated as a public good, i.e., a non-excludable and non-rivalrous resource that benefits all of humanity. As AI continues to rapidly evolve, so too does the urgency of ensuring it is aligned with human values, robustly controlled, and ethically deployed. But how exactly can we achieve this?
Today, we dive deeper into the concrete research areas driving AI safety, examine the role of the Decentralized AI ecosystem, and show how embracing a public goods mindset can reshape the trajectory of AI development.
Research Paths to Achieve AI Safety
To ensure AI systems are safe, ethical, and aligned with human values, the global research and development community is coalescing around several complementary approaches. These mechanisms, ranging from technical safeguards to adversarial testing, form the backbone of responsible AI development.
The main techniques researchers are working on can be split into several categories. Let’s take a closer look:
- Model Improvements:
- Reinforcement Learning from Human Feedback (RLHF) to teach models preferences through human demonstrations.
- Debiasing and robustness training to reduce toxic or misleading outputs in high-stakes applications.
- Formal verification and control theory to ensure models operate within safe, pre-defined parameters.
- Testing and Oversight:
- Scalable oversight, including approaches like recursive reward modeling and debate, to help humans supervise increasingly complex models.
- Stress-testing AI models through Red-Teaming to uncover failure modes, biases, or vulnerabilities before they reach the public. This helps mitigate risks from both misuse and unintended behaviors.
- Open Benchmarking:
- Initiatives like safety benchmarks, shared testbeds, and open scientific discourse promote cross-institutional learning and accountability. These are especially important in reducing information asymmetry in safety capabilities between large and small actors.
Through its integration of Endo, DCF will be contributing to this frontier. Endo’s work in powering secure decentralized computing is essential in pushing the boundaries of safe intelligent systems.
The Role of the Decentralized AI Movement
We at Decentralized Cooperation Foundation (DCF) have been actively advocating for decentralizing AI, for the privacy and security enhancements it brings. It’s no wonder that even regular people are now increasingly aware of the dangers posed by centralized AI and trust DeAI systems better.
The decentralized AI movement champions open-source models, transparent data pipelines, and community-driven governance, enabling a much broader base of scrutiny and innovation. However, decentralized AI introduces both new challenges and novel solutions to the AI safety landscape.
Benefits of the Decentralized AI (DeAI) approach include:
- Greater accountability: Researchers and users can inspect how models are trained and how they behave.
- Privacy and censorship resistance: Decentralized platforms reduce single points of failure or coercion.
- Rapid iteration: Global collaboration accelerates safety advancements through shared tooling and reproducible results.
Values like privacy-first and trustworthy AI, non-extractive data policies, and user-sovereign infrastructure are exclusively championed within the Web3 and the wider decentralization ecosystems. Considering the field of AI Safety in particular, decentralization is uniquely positioned to contribute through:
- Open Source Audits: Open model weights and codebases allow independent researchers to inspect and improve safety properties.
- Community-Led Red-Teaming and decentralized safety evaluations.
- Distributed Governance Models: DAOs (Decentralized Autonomous Organizations) and blockchain-based AI networks enable transparent and democratic oversight of model training, deployment, and updates.
Yet, in a world where anyone can access and run open-source models, safety must scale beyond closed labs and corporate oversight. There are outstanding challenges in the decentralized space which shouldn’t be ignored. They include:
- Lack of central control can increase risk if safety norms are not widely adopted.
- Regulatory and ethical alignment becomes harder across distributed communities.
- Funding safety-focused work in decentralized ecosystems can be fragmented.
Still, with the right incentives and frameworks, decentralized AI is able to expand the field of AI Safety research, making it more accessible, resilient, and diverse. Moreover, if AI Safety is considered a public good, it may benefit from Web3-native funding mechanisms like RetroPGF and Quadratic Funding.
Why Treating AI Safety as a Public Good Unlocks Progress
The current AI landscape is shaped by intense competition, rapid innovation, and deep concentration of power among a few corporate actors. In this environment, safety and alignment efforts often take a backseat to performance and monetization. As we argued in our previous blog post, there is another path forward. It involves treating AI safety not as a competitive advantage, but as a public good.
Achieving a public good status for AI safety will reframe it not as a proprietary edge, but as shared infrastructure—accessible, transparent, and collaboratively maintained. This approach supports sustainable funding, fosters public trust through open governance, and aligns with the decentralized AI movement’s values of privacy, resilience, and equity. By embedding safety into the commons, we enable innovation that benefits everyone, not just the few.
Here’s how a public goods approach can unlock the full potential of safe and aligned AI:
1. Incentivizing Long-Term Thinking
Public funding and non-profit grants can support safety work that doesn’t yield immediate profits, but is critical for future resilience. Research areas that would benefit include foundational research in alignment or scalable oversight tools.
2. Avoiding Competitive Race Dynamics
When safety is a shared priority across institutions (rather than a proprietary edge), we reduce the incentives for cutting corners in the pursuit of performance or market dominance.
3. Enabling Global Participation
Safety shouldn’t be confined to a few labs or nations. Treating it as a global public good empowers broader participation.
4. Sustainable Funding Through Public Goods Mechanisms
Mechanisms like quadratic funding, impact certificates, and blockchain-based coordination protocols can align capital with open-source safety innovation. Projects like Gitcoin and the Protocol Labs ecosystem are already pioneering these models.
A Multi-Pronged Approach for a Safer AI Future
No single mechanism can solve AI safety. It will take a diverse ecosystem of actors, including researchers, policymakers, developers, and the public, working across technical, regulatory, and societal layers to ensure AI systems are aligned, safe, and equitable.
The Decentralized AI movement, supported by institutions like DCF, offers a promising path forward: one grounded in transparency, resilience, and open collaboration. By combining with a public good treatment of AI safety, we can not only build better systems, but create a foundation of trust for the future.
The future of AI is ours to shape. Let’s make safety a shared responsibility.
Follow DCF’s blog to dive deeper into our education and advocacy initiatives.
Subscribe to our monthly newsletter to stay up-to-date with all DCF activities and publications.