On November 25, a debate over whether AI models have hit a "wall" in terms of scaling laws attracted widespread attention. In this debate, two competitors, OpenAI and Anthropic, surprisingly aligned in their stance, both arguing that AI has not yet reached its limits.
OpenAI CEO Sam Altman made his position clear on social media, stating: "There is no wall!"
At a recent conference titled "Next-Generation AI: Can It Deliver on Productivity Promises?" Gusten Haber emphasized: "Large models are becoming increasingly adept at self-correction and reasoning. Every few months, we release new models that continually expand the capabilities of large language models. What’s most exciting about this field is that every model upgrade unlocks entirely new applications."
Haber pointed out: "We are definitely seeing intelligence expand, but we don't believe we’ve encountered any bottlenecks in terms of planning and reasoning. One reason for this is that we are only just beginning to understand how to design planning and reasoning tasks so that models can adapt to new environments they haven't yet encountered."
He added: "We are still in the early stages of development, learning from application developers about what they’re trying to achieve and where large language models are falling short. This feedback helps us incorporate improvements into future iterations of the models."
Haber also noted that these advancements are closely tied to Anthropic's foundational research, but that they also depend heavily on listening to industry needs and adapting in real time: "We are deeply focused on learning from the industry in real time."