Anthropic News

An update on our election safeguards

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

People around the world turn to Claude for information about political parties, candidates, and the issues at stake during election time—as well as to answer simpler questions like when, where, and how to vote. In our view, if AI models can answer these questions well (that is, accurately and impartially), they can be a positive force for the democratic process.

Here, we explain what we’re doing to help Claude meet the mark ahead of the US midterms and other major elections around the world this year.

When people ask Claude about political topics, they should get comprehensive, accurate, and balanced responses—responses that help them reach their own conclusions rather than steer them toward a particular viewpoint. That’s why we train Claude to treat different political viewpoints with equal depth, engagement, and analytical rigor—a principle set out in Claude’s constitution. This is built into the model through character training (where we reward the model for producing responses that reflect a set of values and traits), and then reinforced through our system prompts, which carry explicit instructions on political neutrality into every conversation on Claude.ai. (You can read more about this process in our previous post about political bias.)