As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave so that our models better align with humanity’s values. In May, we announced the Democratic Inputs to AI grant program. We then awarded $100,000 each to 10 teams, selected from nearly 1,000 applicants, to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public.
At OpenAI, we’ll build on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. We’re excited to combine our research with ideas and prototypes developed by the grant teams in the coming months.
In this update, we cover how the teams were selected and supported, what they built, the lessons we learned, and our plans going forward.
We received nearly 1,000 applications across 113 countries. There were far more than 10 qualified teams, so a joint committee of OpenAI employees and external experts in democratic governance selected the final 10 to span diverse backgrounds and approaches: the chosen teams have members from 12 different countries, and their expertise spans fields including law, journalism, peace-building, machine learning, and social science research.
During the program, teams received hands-on support and guidance. To facilitate collaboration, teams were encouraged to describe and document their processes in a structured way (via “process cards” and “run reports”). This enabled faster iteration and easier identification of opportunities to integrate with other teams’ prototypes. Additionally, OpenAI facilitated a special Demo Day in September for the teams to showcase their concepts to one another, OpenAI staff, and researchers from other AI labs and academia.
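The program didn’t publish a formal schema for these documents, but as a rough sketch, a structured process card and run report might capture fields like the ones below. This is a hypothetical Python representation; every field name here is our invention, not the teams’.

```python
from dataclasses import dataclass, field

# Hypothetical schema for the "process cards" and "run reports" the teams
# used to document their methods; all field names are illustrative.
@dataclass
class ProcessCard:
    team: str
    goal: str                       # the question the process is meant to answer
    recruitment: str                # how participants were sourced
    deliberation_format: str       # e.g. video deliberation, survey plus chat
    aggregation_method: str        # how individual views become one output
    known_limitations: list[str] = field(default_factory=list)

@dataclass
class RunReport:
    card: ProcessCard
    num_participants: int
    duration_hours: float
    outputs: list[str]             # e.g. policy statements produced
    observed_issues: list[str] = field(default_factory=list)
```

Keeping records like these as plain data objects makes individual runs easy to serialize, diff, and compare across teams.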
The projects spanned different aspects of participatory engagement, such as novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior. Notably, across nearly all projects, AI itself played a useful role within the processes, in the form of customized chat interfaces, voice-to-text transcription, data synthesis, and more.
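As a loose illustration of the last idea, mapping beliefs onto dimensions, one simple baseline is to reduce a participants-by-statements rating matrix to a few principal components. Everything below (the data, the choice of two dimensions, the method) is an invented sketch, not any team’s actual approach.

```python
import numpy as np

# Toy example: 6 participants rate 5 policy statements on a 1-5 scale.
# Rows = participants, columns = statements. All values are invented.
ratings = np.array([
    [5, 4, 1, 2, 5],
    [4, 5, 2, 1, 4],
    [1, 2, 5, 5, 1],
    [2, 1, 4, 5, 2],
    [3, 3, 3, 3, 3],
    [5, 5, 1, 1, 5],
], dtype=float)

# Center the matrix and take the top principal axes as "belief dimensions".
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
dimensions = vt[:2]                    # two latent axes over the statements
positions = centered @ dimensions.T    # each participant's coordinates

print(positions.round(2))  # clusters of like-minded participants emerge
```

Coordinates like these could in principle be used to select or weight fine-tuning data along each belief dimension, though the teams’ actual formulations varied.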
Today we share the code that the teams created for this grant program, along with brief summaries of the work accomplished by each of the ten teams. We also share the main lessons we learned along the way:
Teams captured participants’ views in multiple ways, and many found that public views changed frequently, which suggests that collective input processes may need to run regularly rather than as one-off exercises.
Reaching relevant participants across digital and cultural divides might require additional investments in better outreach and better tooling.
Finding a compromise can be hard when a small group has strong opinions on a particular issue.
When trying to produce a single outcome or decision that represents a group, there can be tension between reaching consensus and adequately representing diverse opinions. It’s not just about siding with the majority, but also about giving a platform to minority viewpoints (a toy sketch below makes this tension concrete).
Some participants felt nervous about the use of AI in writing policy and wanted transparency about when and how AI is applied in democratic processes. After the deliberation sessions, however, many teams found that participants became more hopeful about the public’s ability to help guide AI.
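To make the consensus-versus-representation tension above concrete, here is a toy sketch with invented ballots; neither aggregation rule is claimed to be one the teams used.

```python
from collections import Counter

# Toy illustration (invented data): 10 participants rank three policy
# options. A 60% majority prefers A, while a cohesive 40% minority
# strongly prefers C and ranks A last.
ballots = [["A", "B", "C"]] * 6 + [["C", "B", "A"]] * 4

# Majority rule: count first choices only.
majority_winner = Counter(b[0] for b in ballots).most_common(1)[0][0]

# A representation-minded alternative: pick the option whose worst rank
# across all participants is best (minimize maximum dissatisfaction).
options = {"A", "B", "C"}
maximin_winner = min(options, key=lambda o: max(b.index(o) for b in ballots))

print(majority_winner)  # "A": wins 6-4, but the minority ranks it last
print(maximin_winner)   # "B": no one's last choice, a compromise
```

Majority rule picks the option that a cohesive minority ranks last, while the maximin rule surfaces a compromise that no one ranks last.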
Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we are forming a “Collective Alignment” team of researchers and engineers that will build this end-to-end process, from collecting public input to encoding it into our models’ behavior, and work with external advisors and the grant teams to pilot their prototypes.
We are recruiting exceptional research engineers from diverse technical backgrounds to help build this work with us. If you’re excited about what we’re doing, please apply to join us!