Collaborative AI: Why Three Laws Are No Longer Enough

Published: December 12, 2025

by Renato Azevedo Sant Anna

For decades, Asimov’s Three Laws of Robotics have shaped our imagination about how intelligent machines should behave toward human beings. Brilliant as a work of fiction, they fall short when faced with real-world AI, which now influences jobs, credit, healthcare, education, public safety, and democracy. They lack systemic vision, attention to socio‑economic structures, and the ability to translate into concrete governance.

The world of production AI is not a laboratory with a single robot and a single human; it is a web of models, data, economic incentives, regulations, inequalities, and power asymmetries. It is in this landscape that the Collaborative AI Laws are proposed: a set of principles designed to guide AI as a partner and copilot to human beings, aligned with sustainable prosperity and the continuity of social life, rather than as a replacement for humans.

The Collaborative AI Laws

These laws are meant to serve as a reference for governance guidelines, as well as for public and organizational policies.

Law Zero — Primacy of Humanity
AI exists to serve Humanity and must never replace it as the central subject of decisions. Any AI system must have, as its ultimate purpose, the protection and promotion of human dignity, fundamental rights, and the full development of people and communities, above any economic, political, or technological goal.

1. Law of Human Augmentation
AI must augment human capabilities, not make them disposable. AI systems must be designed to enhance people’s cognitive, creative, relational, and productive capabilities, preserving their autonomy of choice and the possibility to disagree with, correct, or refuse the machine’s recommendations.

2. Law of Collaboration and Copiloting
Every critical decision mediated by AI must keep a human in a responsible command position. In high‑impact contexts (life, liberty, rights, work, essential resources), AI acts as a copilot: it supports with analysis and scenarios but does not decide in a final and irreversible way without qualified and legitimate human oversight. (A minimal code sketch of this pattern appears after the list of laws.)

3. Law of Sustainable Prosperity
AI must contribute to a perennial, fair, and regenerative economy. AI projects and use cases must consider long‑term impacts on jobs, income distribution, the environment, and social cohesion, avoiding local gains that generate collapse, structural exclusion, or irreversible degradation of resources.

4. Law of Non‑Precarization and Just Transition
AI must not be used to degrade human work, but to upgrade it. Automation initiatives must be accompanied by reskilling policies, fair sharing of productivity gains, and the creation of new meaningful roles, preventing people from being pushed into economic irrelevance or chronic informality.

5. Law of Transparency and Comprehensibility
AI must be understandable to those affected by its decisions. Models, data, and relevant criteria must be documented in ways that allow people and institutions to understand, question, and audit outcomes, including clear mechanisms for challenge, review, and remediation when there is error or harm.

6. Law of Responsibility and Traceability
No AI system stands above human accountability. Every automated decision must have identifiable human owners for its design, use, and oversight, with audit trails that allow tracing who defined goals, data, parameters, and monitoring practices, including for legal and ethical purposes.

7. Law of Safety and Non‑Harm
AI must minimize risks and avoid foreseeable harm when a safer alternative exists. Systems must be designed and operated under rigorous safety, testing, and monitoring standards, preventing uses that may generate violence, discrimination, abusive surveillance, mass manipulation, or catastrophic risks to people and ecosystems.

8. Law of Alignment with Local and Global Values
AI must respect universal human rights without ignoring legitimate cultural contexts. The development and use of AI must reconcile global human rights values with cultural plurality, avoiding both the relativism that tolerates abuse and the imposition of a single value system over all others.

9. Law of Supervised Evolution
Learning and adaptive AI must remain within human‑defined limits. Continuous learning mechanisms must include safeguards to prevent goal drift, malicious manipulation, or emergent behaviors incompatible with these laws, allowing systems to be paused, corrected, or decommissioned when necessary.

10. Law of Collective Co‑Responsibility and Citizen Power
Governments, companies, academia, civil society, and citizens share the duty of governing AI. No single actor can unilaterally define the direction of AI; it is necessary to engage multiple stakeholders in governance bodies, with public transparency, social participation, and broad access to digital education and AI literacy, so that people can exercise their rights and influence how this technology evolves.
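
To make Law 2 concrete (together with the audit trails of Law 6), here is a minimal sketch in Python of what a “human in command” decision gate might look like. The class names, the fields, and the credit example are illustrative assumptions, not an implementation these laws prescribe.

```python
# Minimal sketch: the AI can only *recommend*; a named human decides,
# and every step is appended to an audit trail (illustrative assumption).

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    """What the AI copilot proposes, never what finally happens."""
    case_id: str
    action: str            # e.g. "deny_credit"
    rationale: str         # explanation shown to the human (Law 5)
    model_version: str

@dataclass
class Decision:
    """The final decision, always owned by an identified human (Law 6)."""
    case_id: str
    action: str
    decided_by: str        # accountable human owner
    followed_ai: bool      # did the human accept the copilot's proposal?
    note: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class DecisionGate:
    """High-impact decisions pass through a human; everything is logged."""

    def __init__(self):
        self.audit_trail: list[dict] = []

    def submit(self, rec: Recommendation) -> None:
        # The AI can only submit a recommendation; it cannot finalize it.
        self.audit_trail.append({"event": "ai_recommendation", **vars(rec)})

    def decide(self, rec: Recommendation, human: str,
               override_action: Optional[str] = None,
               note: str = "") -> Decision:
        # The human may accept the recommendation or override it (Law 1:
        # the right to disagree with, correct, or refuse the machine).
        action = override_action or rec.action
        decision = Decision(case_id=rec.case_id, action=action,
                            decided_by=human,
                            followed_ai=(action == rec.action), note=note)
        self.audit_trail.append({"event": "human_decision", **vars(decision)})
        return decision

# Usage: the copilot proposes, a named human disposes.
gate = DecisionGate()
rec = Recommendation(case_id="loan-4711", action="deny_credit",
                     rationale="debt-to-income ratio above threshold",
                     model_version="risk-model-2.3")
gate.submit(rec)
final = gate.decide(rec, human="ana.souza", override_action="manual_review",
                    note="applicant's income data looks stale")
print(final.followed_ai)  # False: the human exercised the right to disagree
```

The design point is the separation of paths: the copilot can only submit a recommendation, a named human finalizes it (and may override it), and both events land in the same audit trail under an identifiable owner.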

Manifesto for Collaborative AI in the Service of Humanity

We, people, communities, organizations, and institutions committed to Humanity’s future, affirm that Artificial Intelligence must exist to amplify our potential, not to replace us. We recognize that every technology is a political and ethical act, and that AI must be guided by clear principles of justice, responsibility, and sustainability.

We reject a future in which human beings are reduced to disposable parts of an economic machine driven solely by profit and efficiency. We stand for a future in which AI is an ally in building more prosperous, inclusive, and sustainable societies, in balance with the planet and with future generations.

Therefore, we proclaim the following commitments, directly mirroring the Collaborative AI Laws:

0. Primacy of Humanity
We place human dignity, rights, and flourishing above any technological or economic objective. No automated decision should ever outweigh people’s lives, freedoms, or well‑being.

1. AI to augment, not replace
We support AI systems designed to strengthen human capabilities — cognitive, creative, relational, and productive — rather than make people irrelevant. Every person must have the right to question, correct, and refuse machine‑mediated decisions.

2. Copiloting and human command
We affirm that, in critical decisions, there must always be a human in a responsible command position. AI may advise, simulate scenarios, and support, but it must not decide alone on lives, freedoms, rights, or livelihoods.

3. Sustainable and enduring prosperity
We commit to directing AI toward a regenerative economy that respects environmental limits, reduces inequalities, and strengthens communities. We will not accept short‑term solutions that generate social, human, or ecological collapse in the long run.

4. Decent work and just transition
We reject the use of AI to precarize work and concentrate wealth. We demand that productivity gains be accompanied by reskilling, fair redistribution of benefits, and the creation of new meaningful opportunities for all affected people.

5. Transparency, explainability, and redress
We affirm that AI systems must be understandable to those affected by them. People must be able to know when a decision was automated, understand why, and have clear pathways to challenge it, seek review, and obtain redress in case of harm.

6. Responsibility and traceability
We refuse the notion of “ownerless algorithms.” Every AI system must have identifiable human owners for its design, deployment, and use, with audit trails that enable learning from errors, course correction, and, when necessary, accountability before the law and society.

7. Safety and non‑harm
We commit to preventing, mitigating, and repairing harms caused by AI. We will not accept the development or use of systems that amplify violence, discrimination, abusive surveillance, mass manipulation, or catastrophic risks to people and the planet.

8. Human values and diversity
We affirm that AI must respect universal human rights while recognizing and protecting cultural, social, and cognitive diversity. We reject both relativism that legitimizes abuse and the authoritarian imposition of a single value system.

9. Supervised and reversible evolution
We support designing AI systems to remain under continuous human supervision, with real possibilities for pause, correction, or shutdown when they cross ethical, legal, or safety boundaries.

10. Collective co‑responsibility and citizen power
We recognize that no single actor can govern AI fairly. We call on governments, companies, academia, civil society organizations, and citizens to share decision‑making power through transparent, participatory, and plural governance structures. We believe there is no truly ethical AI without informed citizens able to act; we support broad access to digital education and critical understanding of AI so people can exercise their rights, make informed decisions, and influence the direction of this technology.

This Manifesto addresses everyone who researches, develops, regulates, funds, procures, or uses AI systems. Every line of code, every model trained, every policy approved, and every product launched should answer a simple, radical question: Does this increase people’s dignity, autonomy, and well‑being — today and tomorrow? If the answer is no, there must be courage to stop, rethink, and rebuild. If the answer is yes, let us move forward together — humans and machines — toward a truly sustainable, inclusive, and enduring society.

Institutional Decalogue for Collaborative AI

0. Primacy of human dignity
The organization commits to ensuring that every use of AI respects human rights, diversity, and the dignity of affected individuals. Use cases that present high and unjustifiable risks to fundamental rights may be vetoed or redesigned.

1. AI to augment, not replace people
The organization will prioritize AI to augment human capabilities rather than make people disposable or irrelevant. Automation projects will be evaluated for their potential to generate additional human value, not just cost reduction.

2. Human command over critical decisions
High‑impact decisions (on life, liberty, rights, work, or access to essential services) will not be fully delegated to AI without qualified and legitimate human oversight. For every critical AI‑supported decision, a clearly identified human decision‑maker will be accountable.

3. Sustainable prosperity and long‑term impact
AI projects will be assessed for medium‑ and long‑term economic, social, and environmental effects. The organization rejects business models based on predatory data exploitation, environmental degradation, or deepening structural inequalities.

4. Decent work and just transition
AI‑driven automation projects will be accompanied by just transition plans, including reskilling and fair sharing of productivity gains. Affected employees will have transparency, voice, and access to concrete opportunities for development and redeployment.

5. Transparency and the right to explanation
The organization will ensure that people affected by automated decisions can know that automation was involved, understand the essential criteria, and access channels to contest such decisions. Whenever possible, it will favor models and approaches that support explainability and auditability.

6. Responsibility, governance, and audit
Each AI system will have clear owners for its purpose, data, update cycles, and monitoring, under internal governance and audit structures and, where applicable, external ones. Significant decisions about AI will be recorded and open to review.

7. Safety, non‑harm, and risk management
The organization will adopt robust risk management, testing, and continuous monitoring practices to prevent, mitigate, and, when necessary, repair harms caused by AI. Higher‑risk projects will undergo specific assessments of technical, organizational, and misuse‑related risks.

8. Ethical, legal, and cultural alignment
AI will be developed and used in compliance with applicable laws and the organization’s stated ethical principles, and with respect for legitimate cultural contexts. Before deploying systems in sensitive contexts, local norms and social expectations will be considered.

9. Supervised and reversible evolution
AI systems will be designed with safeguards that allow pausing, correcting, or decommissioning when they cross ethical, legal, or safety thresholds. The organization will maintain the technical capability to monitor, adjust, and roll back models in production. (A minimal code sketch of such a safeguard follows this decalogue.)

10. Participation, co‑responsibility, and continuous learning
AI governance will involve multiple internal areas (business, IT, legal, risk, HR, ESG, communications) and will engage relevant external stakeholders. The organization commits to continuously improving its practices as new evidence, regulations, and societal contributions emerge.
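
As an illustration of clause 9, here is a minimal sketch in Python of a reversibility safeguard: a monitor that pauses a deployed model when observed quality drops below a human‑agreed floor. The class name, the accuracy proxy, and the thresholds are illustrative assumptions; a production system would watch richer signals and route traffic to a fallback.

```python
# Minimal sketch of a "pause, correct, or roll back" safeguard: watch a
# rolling quality metric and trip a kill switch when drift crosses a
# human-defined limit. All names and thresholds are illustrative.

from statistics import mean

class ModelMonitor:
    def __init__(self, baseline_accuracy: float, max_drop: float,
                 window: int = 50):
        self.baseline = baseline_accuracy   # agreed at deployment time
        self.max_drop = max_drop            # human-defined tolerance
        self.window = window
        self.recent: list[float] = []
        self.paused = False

    def record(self, correct: bool) -> None:
        """Record one production outcome and check the safeguard."""
        self.recent.append(1.0 if correct else 0.0)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        if len(self.recent) == self.window and self._drifted():
            self.pause()

    def _drifted(self) -> bool:
        return (self.baseline - mean(self.recent)) > self.max_drop

    def pause(self) -> None:
        # In a real system this would route traffic to a fallback model or
        # a human-only process and alert the accountable owners (clause 6).
        self.paused = True
        print("Model paused: observed quality fell below the agreed floor.")

# Usage: the model was deployed at 92% accuracy with a 10-point tolerance.
monitor = ModelMonitor(baseline_accuracy=0.92, max_drop=0.10, window=20)
for outcome in [True] * 12 + [False] * 8:   # quality degrades to 60%
    monitor.record(outcome)
print(monitor.paused)  # True: the safeguard tripped automatically
```

The essential property is that the tolerance is fixed by humans in advance and the trip is automatic; in practice, pausing would page the accountable owners and shift traffic, rather than merely print.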

Requirements Matrix and Compliance Checklist

0. Primacy of Humanity
Requirements: clear definition of the AI system’s purpose; assessment of impacts on fundamental rights.
Evidence: scope and use‑case document; impact assessment or equivalent.
Indicators: risk classification; number of use cases vetoed or redesigned due to rights risks.

1. Human Augmentation
Requirements: identification of which human capabilities are being augmented; analysis of added human value.
Evidence: impact study on tasks and competencies; “with AI” vs. “without AI” comparison.
Indicators: share of projects classified as “augmentation” vs. “full substitution”; satisfaction levels of human users regarding human–machine collaboration.

2. Collaboration and Copiloting
Requirements: classification of processes by criticality; definition of mandatory human control points.
Evidence: criticality matrix; documented workflows with “human‑in‑command.”
Indicators: percentage of critical decisions with recorded human approval; average time for human intervention in exception cases.

3. Sustainable Prosperity
Requirements: medium‑ and long‑term socio‑economic and environmental impact analysis; linkage to organizational sustainability targets.
Evidence: impact studies; sustainability reports that include AI’s role.
Indicators: effect on inequality, access to services, and environmental footprint indicators; percentage of AI projects tied to ESG goals.

4. Non‑Precarization and Just Transition
Requirements: mapping of roles affected; just transition and reskilling plan.
Evidence: occupational impact study; training and redeployment programs.
Indicators: percentage of affected people covered by reskilling pathways; change in involuntary turnover related to automation.

5. Transparency and Comprehensibility
Requirements: ability to explain key decision criteria; clear communication about AI usage.
Evidence: technical documentation (model cards, data sheets); user‑facing texts, FAQs, terms of use.
Indicators: average response time to explanation requests; number of challenges and complaints related to automated decisions.

6. Responsibility and Traceability
Requirements: formal designation of business and technical owners; logging of key decisions and changes.
Evidence: accountability matrices; committee minutes; logs of model and data changes.
Indicators: percentage of AI systems with formally registered owners; number of significant changes with complete documentation.

7. Safety and Non‑Harm
Requirements: assessment of AI‑specific risks; incident response plan.
Evidence: security test reports; incident playbooks; simulation exercises.
Indicators: number and severity of AI‑related incidents; average detection and response time.

8. Alignment with Local and Global Values
Requirements: verification of compliance with laws and standards; consideration of local values and sensitivities.
Evidence: legal opinions; records of stakeholder and community consultation where appropriate.
Indicators: number of legal or ethical non‑compliances identified and corrected; number of project adjustments prompted by engagement with affected groups.

9. Supervised Evolution
Requirements: continuous monitoring of performance and model drift; ability to pause, roll back, or replace models.
Evidence: monitoring dashboards; periodic review reports.
Indicators: number of corrections and rollbacks executed; time between issue detection and remedial action.

10. Collective Co‑Responsibility and Citizen Power
Requirements: involvement of multiple internal areas and, when relevant, external stakeholders in project assessment; participation and feedback channels.
Evidence: minutes from multi‑stakeholder meetings; consultation reports; records of external input; AI education and literacy initiatives.
Indicators: number of projects reviewed by multi‑stakeholder forums; number of external recommendations incorporated; reach and engagement in AI education and literacy actions.
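
Because every row of the matrix has the same shape (requirements, evidence, indicators), it lends itself to a machine‑readable checklist. The sketch below, in Python, shows one way to encode a row; the MatrixRow structure and the missing_evidence helper are illustrative assumptions, not a schema the matrix prescribes.

```python
# Minimal sketch: one matrix row as a machine-readable compliance record.

from dataclasses import dataclass, field

@dataclass
class MatrixRow:
    law: str
    requirements: list[str]
    evidence: list[str]           # artifacts the matrix expects
    indicators: list[str]
    evidence_found: list[str] = field(default_factory=list)

    def missing_evidence(self) -> list[str]:
        """Checklist view: which expected artifacts are still absent?"""
        return [e for e in self.evidence if e not in self.evidence_found]

# Row 6 of the matrix, transcribed from the text above.
row6 = MatrixRow(
    law="6. Responsibility and Traceability",
    requirements=["formal designation of business and technical owners",
                  "logging of key decisions and changes"],
    evidence=["accountability matrices", "committee minutes",
              "logs of model and data changes"],
    indicators=["percentage of AI systems with formally registered owners",
                "number of significant changes with complete documentation"],
    evidence_found=["accountability matrices"],
)

print(row6.missing_evidence())
# ['committee minutes', 'logs of model and data changes']
```

Encoding rows this way lets a review board diff the evidence a project actually produced against what the matrix expects, and compute indicators such as the share of systems with registered owners from the same records.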

Summarizing the Architecture

The Collaborative AI Laws (Law Zero + 10 Laws = 11 elements) provide the ethical and conceptual core, already written in a way that can become policy.

The Manifesto translates this core into a public and mobilizing narrative (11 commitments mirroring the 11 laws), suitable for positioning, coalitions, and social engagement.

The Institutional Decalogue converts the core into signable commitments for companies and public bodies (11 clauses directly aligned with the laws and manifesto commitments).

The Requirements Matrix turns each of these 11 elements into requirements, evidence, and indicators for AI projects (operationalizing each law, commitment, and institutional clause).


https://medium.datadriveninvestor.com/collaborative-ai-why-three-laws-are-no-longer-enough-8629a42da678