Some changes to the zine

Author: Nathan Schneider
Date: 2025-11-20 15:37:58 -07:00
Parent: 3a5cd2c299
Commit: 7fef10a55a
2 changed files with 32 additions and 45 deletions

package-lock.json (generated)
# Collective Governance for AI: Points of Intervention
Made for ![the world](metagov.png) by [Metagov](https://metagov.org)
## Invitation
People often speak about AI as if it is one thing. It can seem like that when we use today's most popular interfaces: a single product, packaged by an unfathomably big company. But that view is both misleading and disempowering. It implies that only the big companies could possibly create and control this technology, because only they can handle its immensity. But another orientation is possible.
The best way to solve a hard math problem is to break it up into smaller, easier problems. Similarly, as we better understand AI systems in their social and technical particulars, we can recognize them as involving a sequence of smaller operations. Those can start to seem more approachable for our communities to manage. Interventions start to seem possible. We can think beyond how the post-2022 AI corporate “labs” want us to think about what AI is or could be. We don't need to be a trillion-dollar tech company to make a dent in shaping this technology through our communities' needs and knowledge. We can remember the long history of developing and using AI techniques—in ways less flashy than the current consumer products—and imagine a future where we can more easily disentangle and co-govern these toolsets.
This document from the Metagov community has two goals. First, it identifies distinct layers of the AI stack that can be named and reimagined. Second, for each layer, it points to potential strategies, grounded in existing projects, that could steer that layer toward meaningful collective governance.
We understand collective governance as an emergent and context-sensitive practice that makes structures of power accountable to those affected by them. It can take many forms—sometimes highly participatory, and sometimes more representative. It might mean voting on members of a board, proposing a policy, submitting a code improvement, organizing a union, holding a potluck, or many other things. Governance is not only something that humans do; we (and our AIs) are part of broader ecosystems that might be part of governance processes as well. In that sense, a drought caused by AI-accelerated climate change is an input to governance. A bee dance and a village assembly could both be part of AI alignment protocols.
The idea of “points of intervention” here comes from the systems thinker Donella Meadows—especially her essay “[Leverage Points: Places to Intervene in a System](https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/).” One idea she stresses there is the power of feedback loops: change in one part of a system produces change in another, which in turn creates further change in the first, and so on. Collective governance is a way of introducing powerful feedback loops that draw on diverse knowledge and experience.
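Meadows's feedback-loop idea can be made concrete with a toy simulation. Everything in this sketch is invented for illustration (the `participation` and `gain` variables model no real system): a small reinforcing loop compounds change over time, which is why feedback loops are such powerful leverage points.

```python
# Illustrative sketch of a reinforcing feedback loop, in the spirit of
# Donella Meadows's "leverage points." All quantities and coefficients
# are invented for demonstration; they model no real system.

def simulate_loop(steps: int, gain: float = 0.1, start: float = 1.0) -> list[float]:
    """Each step, participation strengthens oversight, and oversight
    feeds back into participation, so small changes compound."""
    participation = start
    history = [participation]
    for _ in range(steps):
        oversight = gain * participation   # participation strengthens oversight
        participation += oversight         # oversight attracts more participants
        history.append(participation)
    return history

if __name__ == "__main__":
    trajectory = simulate_loop(10)
    print(f"participation grew {trajectory[-1] / trajectory[0]:.2f}x over 10 steps")
```

Even a modest per-step gain grows exponentially rather than linearly; that compounding is the intuition behind intervening at the level of loops rather than at single events.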
We recognize that not everyone is comfortable referring to these technologies as “intelligence.” We use the term “AI” most of all because it is now familiar to most people, as a shorthand for a set of technologies that are rapidly growing in adoption and hype. But a fundamental premise of ours is that this technology should enable, inspire, and augment human intelligence, not replace it. The best way to ensure that is to cultivate spaces of creative, collective governance.
These points of intervention do not focus on asserting ethical best practices for AI, or on defining what AI should look like or how it should work. We hope that, in the struggle to cultivate self-governance, healthy norms will evolve and sharpen in ways that we cannot now anticipate. But democracy is an opportunity, never a guarantee.
## Model design
How are foundational models designed, and who does the designing? What institutions regulate the designers?
* Organize [worker governance and ownership of AI labs](https://www.cip.org/blog/shared-code) in the hope that ethics can take precedence over profit motives
* Develop [smaller, purpose-specific](https://dl.acm.org/doi/10.1145/3442188.3445922) models that involve less costly and environmentally destructive training, and can be [less error-prone](https://research.nvidia.com/labs/lpr/slm-agents/); ensure models are fit for purpose, with large-data models used only when necessary
* Design models through institutions oriented around the common good, like democratic governments and nonprofit organizations, as with the Swiss [Apertus model](https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html)
* Train developers to understand and be aware of their worldviews, and to engage in [design justice](https://designjustice.org/principles-overview) practices with affected communities
## Data
What data is used to train models? Where does it come from? What permission and reciprocity is involved?
* Ensure that all training data is auditable through techniques of [data provenance](https://hypha.coop/data-provenance/) and [traceability](https://www.researchgate.net/publication/395416141_Using_Blockchain_to_Trace_Data_Sources_in_AI), building on examples like the [Apertus](https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html) and [Pythia](https://www.eleuther.ai/artifacts/pythia) models, and the [OSI Open Source AI Definition](https://opensource.org/ai)
* Establish [data cooperatives](https://www.projectliberty.io/news/data-coops-as-alternative-to-centralized-digital-economy/), [data collaboratives](https://doi.org/10.1007/s12525-025-00831-6), and [data trusts](https://theodi.org/insights/explainers/what-is-a-data-trust/) to provide ethical, consensual data sourcing and compensate data providers, such as the [Transfer Data Trust](http://youtube.com/watch?v=QLIW_TfVR4k) and [Choral Data Trust](http://youtube.com/watch?v=SO_IcQvjMDU&feature=youtu.be)
* Adopt a clear, accessible, and usable [data policy](https://metagov.pubpub.org/pub/data-policy/) in any organizational context
* Reflect best practices of community accountability from the [Indigenous Data Alliance](https://indigenousdata.org/) and the [Collaboratory for Indigenous Data Governance](https://indigenousdatalab.org/)
* Leverage existing data under cooperative control, as in [agricultural co-ops](https://www.fertilizerdaily.com/20241112-how-land-olakes-is-using-ai-to-revolutionize-farming-practices/) and [credit unions](https://creditunions.com/features/perspectives/is-your-credit-union-part-of-the-ai-revolution/)
* Gather datasets that reflect demonstrable cultural diversity to allow diverse forms of interaction and participation; disclose limitations where this is not possible
* Use participatory taxonomy development, data labeling, and annotation processes to help ensure models are better reflective of community norms, language, and values, as with [Reliabl.ai](http://Reliabl.ai)
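One concrete ingredient of the provenance and traceability practices listed above is content hashing: each dataset record carries a cryptographic fingerprint that auditors can later re-check. This is a minimal illustrative sketch, not the method of any project named here; the record fields and the choice of SHA-256 are our own assumptions.

```python
# Minimal sketch of hash-based data provenance. The record fields
# (source, license) are illustrative assumptions, not a standard schema.
import hashlib


def provenance_record(content: str, source: str, license_name: str) -> dict:
    """Build an auditable record: the hash fingerprints the content,
    while source and license document origin and terms of consent."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return {"sha256": digest, "source": source, "license": license_name}


def verify(record: dict, content: str) -> bool:
    """Re-hash the content and compare against the stored fingerprint;
    any alteration to the content changes the hash and fails the check."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest() == record["sha256"]
```

A data cooperative or trust could publish such fingerprints so that communities can audit what went into a model without necessarily republishing the underlying data.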
## Training
How are foundational models trained? What infrastructures and natural resources do they rely on?
* Organize training processes through accountability-oriented institutions such as [democratic governments or nonprofit consortia](https://metagov.org/projects/public-ai)
* Provide robust benefit-sharing arrangements for communities that host data centers
* Ensure that data annotation workers can build collective power through unions and collectives, such as through NGOs like [Techworker Community Africa](https://www.techworkercommunityafrica.com/) and [She Codes Africa](https://shecodeafrica.org/), which can help negotiate rates and provide legal support
* Monitor and evaluate labor practices within the supply chain, following the example of [Fairwork](https://fair.work/en/fw/about/)
* Utilize community-governed standards like [participatory guarantee systems](https://en.wikipedia.org/wiki/Participatory_Guarantee_Systems) so that communities that host data centers or data labor can set locally appropriate guidelines
## Tuning
What fine-tuning do models receive before deployment? What collective interventi
How do AIs obtain contextual information? What kinds of actions are agents able to carry out?
* Enable privacy-sensitive tools for connecting local models with community data, such as [RooLLM](https://github.com/hyphacoop/RooLLM) and [KOI Pond](https://metagov.org/projects/koi-pond)
* Promote cooperative worker ownership, like [READ-COOP](https://open-research-europe.ec.europa.eu/articles/5-16/v1), for human-in-the-loop, AI-assisted activities
* Manage and protect contextual data through user-owned cooperatives, like [Land O'Lakes's Oz platform](https://www.fastcompany.com/91438757/the-wizard-of-crops-microsofts-oz-aims-to-transform-farming)
* Adopt open standards, like [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro), that enable context-holders to define more accurate, appropriate, and ethically sourced data-use policies
* Utilize community-governed and transparently curated infrastructure, such as [Stract optics](https://github.com/c-host/mg-stract-optics-library-and-search-engine), for agent web searches
* Establish clear, privacy-respecting, and consent-based norms for model access to user data, such as through the [Human Context Protocol](https://humancontextprotocol.com/) or [data pods](https://www.secoda.co/glossary/what-are-data-pods)
Where are AIs running while they are interacting with users? How do they treat user data?
* Deploy AI systems at data centers powered by renewable energy, such as [GreenPT](https://greenpt.ai/) and [Earth Friendly Computation](https://earthfriendlycomputation.com/), and that respect local ecosystems
* Host AI services on cooperatively owned and governed servers, such as [Cosy AI](https://cosyai.net/), or through democratic local institutions like [public libraries](https://publicai.network/libraries)
* Run local models on personal or community computers with tools like [Ollama](https://ollama.com/) and [Jan](https://www.jan.ai/)
* Use decentralized or federated solutions for hosting like [Golem](https://www.golem.network/) or [Internet Computer](https://internetcomputer.org/)
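To make the local-hosting option concrete: Ollama serves an HTTP API on the machine it runs on, by default at `http://localhost:11434`, so prompts and responses never leave the computer. The sketch below assumes that default address and that a model such as `llama3.2` has already been pulled; substitute whatever model is installed locally.

```python
# Sketch: query a locally running Ollama server over its HTTP API.
# Assumes Ollama's default address (http://localhost:11434) and that a
# model such as "llama3.2" has been pulled locally; adjust as needed.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, model: str = "llama3.2") -> dict:
    """Payload for Ollama's /api/generate endpoint; stream=False asks
    for a single JSON response instead of a token-by-token stream."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # data never leaves the machine
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask_local_model("In one sentence, what is a data cooperative?"))
```

Because the endpoint is local, a community can place whatever governance it likes around it, such as access rules, logging policies, or model choices, without negotiating with a platform.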
What kinds of interfaces and expectations are users presented with? What options do users have? How do interfaces nudge user behavior?
* Ensure [worker control](https://www.microsoft.com/insidetrack/blog/deploying-microsoft-places-at-microsoft-with-our-works-councils/) over the deployment of AI systems in their workplaces
* Provide for user choice around worldviews and model moderation practices, as [open-weights models allow](https://opensource.org/ai/open-weights)
* Establish sectoral agreements over AI use, as in the outcome of the [2023–2024 Hollywood strike](https://cdt.org/insights/the-sag-aftra-strike-is-over-but-the-ai-fight-in-hollywood-is-just-beginning/)
* Create interfaces that enable user choice among different models, such as [Duck.ai](https://duck.ai/)
* Provide privacy-protecting mechanisms, including [user-data mixers](https://duckduckgo.com/duckai/privacy-terms) and [data-protection compliance](https://greenpt.ai/privacy/)
* Expect user interfaces and models to respect local law and global treaties by design
## Public policy
How does public policy shape the design, development, and deployment of AI systems?
* [Get involved](https://posts.bcavello.com/how-to-get-into-ai-policy-part-1/) in AI policymaking
* Demand high standards for procurement of foundational model providers, ensuring that both the providers and the models are audited according to best practices of human rights and sustainability
* Develop policy with [AI-augmented citizen assemblies](https://www.demnext.org/projects/five-dimensions-of-scaling-democratic-deliberation-with-and-beyond-ai) that lay out clear guidelines in highly sensitive contexts, such as education, healthcare, law enforcement, and public benefits
* Insist on public debates about limits on AI resource usage without positive social purpose
* Hold AI companies responsible for the behavior of models that they control, such as through lawsuits and legislative advocacy
## Culture
How do different community-governed AI systems connect, share information, and m
Finally, what feedback loops can we imagine across these layers of the stack? How could change in one area lead to greater change through its effects at other layers?
* Collective power at the level of deployment can put pressure on changing norms in training and tuning processes
* Successful training of smaller, more efficient models can enable AI systems that are less costly and easier for communities to own and govern
* Economies more conducive to investment for collective ownership can open the door to collective governance at multiple levels
* Interconnected ecosystems of open standards and shared norms can spread best practices developed in one policy context to others
Feedback loops can be messy. Remember that collective governance begins with care and consideration for others. May our interventions begin there.
## Now, time to intervene!
## Credits
Initiated and edited by Nathan Schneider, with contributions from Cormac Callanan, B Cavello, Coraline Ada Ehmke, Val Elefante, Cent Hosten, Joseph Low, Thomas Renkert, Julija Rukanskaitė, Ann Stapleton, Joshua Tan, Madisen Taylor, Freyja van den Boom, Jojo Vargas, Mohsin Y. K. Yousufi, and Michael Zargham.
Website built with open-source software and AI collaboration.
Made for ![the world](metagov.png) by [Metagov](https://metagov.org)