Startup Anthropic has expanded the contextual input window of its Claude chatbot to 75,000 words, a huge improvement over current models. The company says the chatbot can process an entire novel in under a minute.
An often-overlooked limitation of chatbots is memory. While the AI language models underlying these systems are trained on terabytes of text, the amount of text they can process at once during use is quite limited: for ChatGPT, it is about 3,000 words. And while there are ways to work around this limit, it is still not enough. Anthropic, founded by former OpenAI engineers, has significantly expanded the context window of its Claude chatbot, bringing it to 75,000 words. As the company states in its blog post, that is enough to process The Great Gatsby in one go. In fact, Anthropic tested the system by editing a single sentence in the novel and asking the AI to spot the change. Claude found it in 22 seconds.
AI language models measure input not in characters or words but in tokens: semantic units whose length does not map directly onto character or word counts, since words can be long or short. By this measure, Claude's context window now surpasses OpenAI's models: it handles 100,000 tokens, up from 9,000 previously, while the full-scale GPT-4 model handles up to 32,000 tokens.
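To get a feel for the word-vs-token distinction, here is a minimal sketch using the commonly cited rule of thumb of roughly four characters of English text per token. The `approx_tokens` helper is purely illustrative and is not Anthropic's or OpenAI's actual tokenizer, which splits text by learned subword rules rather than a fixed ratio.

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters of English per token (rule of thumb)."""
    return max(1, round(len(text) / 4))

# A short word is about one token, while a long word spans several:
print(approx_tokens("word"))                  # ~1 token
print(approx_tokens("internationalization"))  # ~5 tokens

# A 75,000-word novel at ~5 characters per word (plus spaces) lands
# in the neighborhood of Claude's 100,000-token window:
print(approx_tokens("x" * 75_000 * 6))        # rough estimate only
```

Real tokenizers produce different counts for the same text, which is why providers quote limits in tokens rather than words.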
Currently, the new Claude features are available only to Anthropic's business partners, who connect to the chatbot through the company's API. Pricing is also unknown, but it has presumably risen significantly, since processing more text means more computation.