ChatGPT and Bard have grabbed the headlines, and AI chatbots are available on Facebook and Instagram. Another AI-powered chatbot emerged in 2023 with the potential to challenge the best AI apps. Its name is Claude. Developed by Anthropic, the brainchild of former OpenAI employees, and backed by heavyweights like Google and Amazon, Claude (in version 2.1) is available on phones, PCs, and the best Chromebooks, proving to be a contender for the most useful, powerful, and versatile AI assistant.
But what is Anthropic, and how has this band of relative unknowns muscled its way to prominence in a field growing more crowded and competitive daily?

First, some background on Anthropic
A group of 11 employees departed AI company OpenAI in 2021, led by siblings Daniela and Dario Amodei. Citing differences in vision and direction with the company’s leadership after OpenAI took a $1 billion investment from Microsoft, the Amodei siblings founded Anthropic.
Dario Amodei (who had served as Vice President of Research at OpenAI before his departure) touted Anthropic as an AI company devoted to the safety of large language models (LLMs) and to research guided by human feedback. A clear commercial focus on AI development has emerged in the past few years.

Based in San Francisco, Anthropic has raised several billion dollars in investment, beginning with a $500 million injection from disgraced cryptocurrency fraudster Sam Bankman-Fried’s Alameda Research, followed quickly by a $300 million investment by Google’s cloud division (in exchange for 10% of the company).

The torrent of investment has continued, with Amazon taking a minority stake on the strength of an initial $1.25 billion investment and pledging up to $4 billion. Google increased its stake with an additional $500 million and a commitment to $1.5 billion over time. As part of Amazon’s contributions, Anthropic pledged to use Amazon Web Services as its primary cloud provider and to train its models on AWS Trainium and Inferentia chips.
Claude, Anthropic’s ambitious chatbot
This wave of investment capital has been generated mainly on the strength of Anthropic’s AI chatbot, Claude. In the age of the AI gold rush, when many companies (including Google, with Bard and now integrated AI features being tested in Chrome) are scrambling to train large language models and develop flexible AI for a broad array of commercial applications, Claude stands out due to its promising early test results and Anthropic’s commitment to AI safety.
One of Anthropic’s cornerstones is its commitment to Constitutional AI (CAI). CAI is a framework of constraints, rules, and organizing principles designed to ensure that AI development is ethical and conforms to humanity’s best interests. The company claims it focuses many of its resources on researching large-scale AI systems. It does this to identify potential risks and failure points and ensure that AI like Claude remains steerable and safe to use.
There’s also a focus on artificial intelligence interpretability. Interpretability is how easy it is for human users and developers to examine an artificial intelligence’s outputs and understand how they were reached. There’s a link between safe AI and interpretability. The easier it is to evaluate how LLMs reach a conclusion, the easier it is to direct or adjust the model’s decision-making through human feedback so that it aligns with our values and goals.
Claude represents the fruits of this research in an outward-facing commercial product. Anthropic currently offers three versions of its Claude AI for commercial applications. Claude Instant is the cheapest and least sophisticated option, intended for scenarios that don’t need deeper or more complex reasoning.
Claude 2.0 is the second tier, designed as a high-performance model for more complicated applications. Then there’s Claude 2.1, currently the cutting-edge version of the AI, which brings the power and functionality of Claude 2.0 but promises a reduction in AI hallucinations, the phenomenon where AI systems perceive nonexistent patterns or objects and produce nonsensical outputs.
Claude 2.1 also has access to a context window twice the size of the other two tiers. This is significant as a context window effectively acts as an AI’s short-term memory, allowing it to retain more tokens (data points, typically words, phrases, or punctuation) to serve as the context for interpreting new data. ChatGPT’s context window as of July 2023 was 16,000 tokens. Anthropic claims Claude 2.1’s context window is 200,000 tokens.
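The idea of a context window as short-term memory can be illustrated with a toy sketch. This is a simplification for illustration only: real models use subword tokenizers (such as BPE) rather than whitespace splitting, and the function name here is hypothetical.

```python
# Toy illustration of a context window: the model can only "see" the most
# recent max_tokens tokens, so older conversation history falls out first.
# Whitespace splitting stands in for a real subword tokenizer.

def truncate_to_context(history, max_tokens):
    """Keep only the most recent tokens that fit in the context window."""
    tokens = []
    for message in history:
        tokens.extend(message.split())
    # Slicing from the end drops the oldest tokens first.
    return tokens[-max_tokens:]

conversation = [
    "User: summarize this contract",
    "Assistant: which section should I focus on",
    "User: the liability clause",
]

# With a window of 8 tokens, the earliest messages are no longer visible.
print(truncate_to_context(conversation, max_tokens=8))
```

A larger window (200,000 tokens versus 16,000) simply means far more of the preceding conversation or document survives this truncation and can inform the model's next response.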
Research, development, and the future of AI
True to its original mission statement, and unlike many of Silicon Valley’s cloistered tech companies, Anthropic regularly publishes the results of its AI research initiatives. According to its website, the company conducts research around four central principles: a systematic approach to training AI, safety even at scale, building helpful tools and metrics, and collaboration.
The result is an impressive catalog of scholarship about AI, from how the company uses input from other AI models to ensure that its systems are prevented from developing problematic tendencies and behaviors to in-depth analysis of the problems of measuring and evaluating AI models. Anthropic appears to be committed to pushing the boundaries and frontiers of AI while simultaneously studying the progress of AI systems to ensure they remain safe to use and aimed at helping individuals and society flourish.