The Limbic chatbot, which screens people seeking help for mental-health problems, led to a significant increase in referrals among minority communities in England.
We are launching a new generation of embedding models, new GPT-4 Turbo and moderation models, new API usage management tools, and soon, lower pricing on GPT-3.5 Turbo.
In 2006, British mathematician Clive Humby said, “data is the new oil.” While the phrase is almost a cliché, the advent of generative AI is breathing new life into this idea. A global study on the Future of Enterprise Data & AI by WNS Triange and Corinium Intelligence shows 76% of C-suite leaders and decision-makers are planning or…
We funded 10 teams from around the world to design ideas and tools to collectively govern AI. We summarize the innovations, outline our learnings, and call for researchers and engineers to join us as we continue this work.
Although AI is by no means a new technology, it has recently attracted massive and rapid investment, particularly in large language models. However, the high-performance computing that powers these rapidly growing AI tools — and enables record automation and operational efficiency — also consumes a staggering amount of energy. With the proliferation of AI comes…
As organizations recognize the transformational opportunity presented by generative AI, they must consider how to deploy that technology across the enterprise in the context of their unique industry challenges, priorities, data types, applications, ecosystem partners, and governance requirements. Financial institutions, for example, need to ensure that data and AI governance has the built-in intelligence to…
One can’t step into the same river twice. This simple representation of change as the only constant was taught by the Greek philosopher Heraclitus more than 2000 years ago. Today, it rings truer than ever with the advent of generative AI. The emergence of generative AI is having a profound effect on today’s enterprises—business leaders…
We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.
If we’re not careful, Microsoft, Amazon, and other large companies will leverage their position to set the policy agenda for AI, as they have in many other sectors.
To support the safety of highly capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge.
Together with Anthropic, Google, and Microsoft, we’re announcing the new Executive Director of the Frontier Model Forum and a new $10 million AI Safety Fund.
We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.
We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts.
We’re releasing a guide for teachers using ChatGPT in their classroom — including suggested prompts and explanations of how ChatGPT works, its limitations, the efficacy of AI detectors, and bias.
We use GPT-4 for content policy development and content moderation decisions, enabling more consistent labeling, a faster feedback loop for policy refinement, and less involvement from human moderators.