What Is AI Safety, And Why It Should Be A Public Good?

AI Safety and why it should be treated as a public good

As artificial intelligence continues to grow in capability and influence, so too does the urgency of ensuring it operates safely, ethically, and in the public interest. AI Safety, the discipline focused on aligning AI systems with human values, has become not only a cornerstone of responsible technology development but a global imperative. 

But what does AI safety mean? And since it’s of paramount importance, shouldn’t we consider it a public good, and collectively strive to ensure it?

Championing decentralization is part of DCF’s mission, so we’ve repeatedly highlighted how much safer decentralized AI is compared to centralized and closed-sourced systems. Today, we’ll explore the essence of AI Safety, why it should be treated as a public good, and what mechanisms can help us achieve that.

What Is AI Safety?

AI safety refers to the practice of designing, developing, and deploying artificial intelligence systems in ways that are aligned with human values and societal well-being. This means ensuring that AI systems:

  • Behave predictably and reliably
  • Do not cause unintended harm
  • Remain under human control
  • Operate transparently and fairly

AI safety isn’t just about preventing rogue superintelligent machines (although that is one aspect). It’s also about mitigating present-day risks such as algorithmic bias, misinformation, surveillance misuse, and automation-related job displacement.

However, at the time of writing, governments, corporations, and researchers often define “AI Safety” differently. This lack of consensus makes the term somewhat ambiguous.

Why Is AI Safety Controversial?

AI safety as a scientific field has gained popularity over the last few years, and each AI lab tends to define it in its own way. For example, Wikipedia describes it as:

“AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.”

Let’s unpack some of these terms:

  • Machine ethics is a discipline concerned with adding or ensuring moral behavior of man-made machines. But what does moral mean? Morality is a trait only humans can exhibit, and its use here outlines the growing and rather dangerous trend of anthropomorphizing AI systems. A clearer definition would frame machine ethics as efforts to instill ethical decision-making frameworks in AI systems, and teaching machines to distinguish right from wrong.
  • AI alignment aims to steer AI systems toward a person’s or group’s intended goals, preferences, or ethical principles. An aligned system advances intended goals, whereas a misaligned one may pursue unintended objectives. However, alignment is complicated by conflicting human interests. For example, a company like OpenAI and Anthropic may prioritize financial profit, even if that goal conflicts with broader human welfare.

These complexities highlight a key point: the standards for AI safety should not be left solely to individual actors. Instead, they require coordinated global regulation to ensure AI development serves the interests of humanity as a whole.

Why Should AI Safety Be Treated As A Public Good?

Public goods are resources that benefit everyone, regardless of individual contribution or ownership. They are non-excludable and non-rivalrous, like clean air, national defense, or public health. AI safety fits this definition well. Its benefits, such as fairness, reliability, and the prevention of harm, extend across borders and social divides, while its failures can impact society at large.

The influence of AI systems is already widespread, shaping decisions in healthcare, education, employment, and criminal justice. When these systems malfunction or perpetuate bias, the fallout affects entire communities and can reinforce structural inequalities. Crucially, no single company, government, or lab can fully grasp or manage the societal impact of advanced AI. That’s why AI safety must be treated as a collective responsibility, not a proprietary concern.

Like all public goods, ensuring AI safety demands long-term investment, broad cooperation, and transparent oversight. The risks of unsafe AI, such as deepening inequality, eroding public trust, and posing existential threats, are too great to ignore. While the path to safety may be costly and complex, its rewards are shared by all, making it one of the most urgent public goods of our time.

Mechanisms to Achieve AI Safety

Achieving AI safety is complex and multi-layered. Here are some of the key mechanisms currently being explored:

1. Robust Technical Research

Researchers are developing techniques to align AI behavior with human intent, reduce model uncertainty, and prevent harmful outputs. With the recent integration of Endo, a trailblazer in the field of secure decentralized computing, DCF will be playing a growing role here. 

2. Decentralized and Open-Source Collaboration to Boost Transparency

Open research and transparency around model design, training data, and limitations can allow for broader scrutiny and safer deployments. It can also emphasize privacy, data protection, and censorship-resistance. This is, in short, what the Decentralized AI community is attempting to achieve, and for which we at DCF have been advocating for years. 

3. Regulatory Frameworks

Governments are beginning to implement laws and guidelines that enforce AI ethics, transparency, and accountability. For example:

4. Third-Party Auditing and Red-Teaming

External audits help identify vulnerabilities, biases, and unintended behavior in AI systems before they are deployed at scale.

5. Public Engagement and Education

Public understanding and involvement in AI policymaking are essential to ensure democratic oversight and inclusive safety priorities.

Stay tuned for a deeper explanation of these mechanisms in our forthcoming publications.

A Call for Collective Action

AI safety is more than a technical challenge—it’s a societal obligation. Like climate change or public health, it requires coordinated action from governments, corporations, researchers, and the public.

By treating AI safety as a public good, we can foster innovation while protecting our future.

Whether you’re an engineer, policymaker, or everyday tech user, your voice matters. Let’s work together to build AI systems that are not only powerful, but safe, fair, and beneficial for all.

Follow DCF’s blog to dive deeper into our education and advocacy initiatives. 

Subscribe to our monthly newsletter to stay up-to-date with all DCF activities and publications.

Related Posts

In Part I of our exploration into AI Safety, we outlined the foundational reasons why it should be treated as

In a big move for the decentralized technology ecosystem, Agoric and DCF today jointly announce that Agoric has transferred the

A recent Harris Poll commissioned by the Digital Currency Group (DCG) found that 75% of Americans favor decentralized AI (DeAI)