
Anthropic's goal

Make safe AI systems, and deploy them reliably  We develop large-scale AI
systems so that we can study their safety properties at the technological
frontier, where new problems are most likely to arise. We use these insights to
create safer, steerable, and more reliable models, and to generate systems that
we deploy externally, like Claude.
Research

AI as a Systematic Science  Inspired by the universality of scaling in
statistical physics, we develop scaling laws to help us do systematic,
empirically-driven research. We search for simple relations among data, compute,
parameters, and performance of large-scale networks. Then we leverage these
relations to train networks more efficiently and predictably, and to evaluate
our own progress. We’re also investigating what scaling laws for the safety of
AI systems might look like, and this will inform our future research.
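As a concrete illustration of what such a relation can look like, the sketch
below fits a power law between model size and loss by ordinary least squares in
log-log space. The data points and functional form are invented for
demonstration; they are not Anthropic results, and this is not a description of
Anthropic's actual methodology.

    # Illustrative only: fit a hypothetical power-law scaling relation
    # L(N) ~ a * N^(-alpha) between parameter count N and loss L.
    # The data points are made up for demonstration purposes.
    import numpy as np

    params = np.array([1e6, 1e7, 1e8, 1e9, 1e10])  # hypothetical model sizes
    loss = np.array([4.2, 3.5, 2.9, 2.4, 2.0])     # hypothetical eval losses

    # A power law is linear in log-log space: log L = log a - alpha * log N.
    slope, intercept = np.polyfit(np.log(params), np.log(loss), deg=1)
    alpha, a = -slope, np.exp(intercept)

    print(f"fitted relation: L(N) ~ {a:.2f} * N^(-{alpha:.3f})")
    # Use the fit to predict performance at a scale not yet trained.
    print(f"predicted loss at N=1e11: {a * 1e11 ** (-alpha):.2f}")

Fitting in log space keeps the estimate to a simple linear regression and makes
extrapolation across orders of magnitude explicit.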

Safety and Scaling  At Anthropic we believe safety research is most useful when
performed on highly capable models. Every year, we see larger neural networks
which perform better than those that came before. These larger networks also
bring new safety challenges. We study and engage with the safety issues of large
models so that we can find ways to make them more reliable, share what we learn,
and improve safe development outcomes across the field. Our immediate focus is
prototyping systems that pair these safety techniques with tools for analyzing text
and code.

Tools and Measurements  We believe critically evaluating the potential societal
impacts of our work is a key pillar of research. Our approach centers on building
tools and measurements to evaluate and understand the capabilities, limitations,
and potential for societal impact of our AI systems.
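One minimal form such a capability measurement can take is a scored benchmark
run. Everything in the sketch below is a hypothetical placeholder, not a tool
described above: the EvalItem structure, the benchmark items, and the model's
generate interface are all invented for illustration.

    # Minimal capability-measurement sketch; all names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class EvalItem:
        prompt: str
        expected: str

    BENCHMARK = [
        EvalItem("What is 2 + 2?", "4"),
        EvalItem("Spell 'cat' backwards.", "tac"),
    ]

    def accuracy(model, items):
        # Fraction of items whose output contains the expected answer.
        hits = sum(item.expected in model.generate(item.prompt) for item in items)
        return hits / len(items)

    class StubModel:
        # Stand-in model used only to make the sketch runnable.
        def generate(self, prompt: str) -> str:
            return "4" if "2 + 2" in prompt else "tac"

    print(f"accuracy = {accuracy(StubModel(), BENCHMARK):.2f}")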
