
User confidence in Anthropic's AI model, Claude, has begun to erode noticeably, a striking shift given its initial reputation as a leading contender in the large language model market.
Criticism is no longer confined to everyday users; it now extends to professional developers and subscribers of paid plans.
This situation prompts genuine questions about the platform's future, especially with intense competition from giants like OpenAI and Google.
Paid Subscriptions... Disappointing Service
Although many users upgraded to paid plans like Pro and Max hoping for better performance, their experience in practice has proved quite different.
One user on Reddit explicitly stated they canceled their subscription due to "unreasonable restrictions," adding that free tools like Gemini Studio offered more efficient results for programming tasks.
In other instances, users said they constantly have to re-explain their requests because of context interruptions, which wastes time and hurts productivity.
Even those who purchased the Max plan, the most expensive tier, were not immune to these issues. One such user said they resort to clearing the context after almost every task just to be able to continue.
Work Teams Are Also Dissatisfied
The criticism did not stop at individual use.
A member of a team on the "Claude Team" plan, designed for collaborative work, reported that work sessions crash simply from opening small files, and that even basic commands no longer complete normally.
These experiences suggest the model is failing to meet the needs of technical teams, despite their prepaid annual subscriptions.
Suffocating Usage Limits Without Notice
Developer Edward Rozga of Prezi also described his struggles with the limits imposed on the number of messages, or "tokens," allowed per minute.
According to him, it has become common to hit the cap within an hour, forcing users to wait two to three hours before they can use Claude again.
Furthermore, many users have pointed out the absence of any official notification from Anthropic regarding changes to the restriction system or pricing, which has only added to the confusion.
One individual highlighted a new update to the support page revealing that the Max plan includes only five-hour sessions, a detail they described as "very disappointing," especially since the plan costs $300 in Australia.
Sharp Decline in Performance and Reach
Web traffic data and benchmark results show that Claude no longer holds a top position among AI models.
In January 2024, statistics showed the platform had fallen significantly behind OpenAI's ChatGPT and Google's Gemini in engagement and usage metrics.
On the performance front, Claude has struggled to compete in crucial areas such as output accuracy, human preference rankings, and coding. While competitors, notably Gemini 2.5 Pro and GPT-4.1, have made significant advances, Claude has remained far from the top ranks.
Reasons for Claude's Decline
Many experts attribute this decline to Anthropic's heavy emphasis on safety research at the expense of rapid innovation and improvements to the user experience.
While this approach is commendable from an ethical standpoint, the current market does not wait; it rewards those who develop, experiment, and deploy quickly.
In this context, some believe the company's attempts to regain trust by introducing offerings like "Claude Max" have failed because of usage limits that do not align with user needs, especially at the corporate and enterprise level.
Is There Still Hope?
Despite everything, some users still express hope that Claude will regain its standing, pointing out that when the model operates efficiently, it delivers impressive results in logical analysis and content generation.
Additionally, a few have indicated a willingness to return to Claude, provided the experience is improved and the technical restrictions are eased.
In conclusion, it cannot be said definitively that Claude is losing its technical capabilities outright.
It is, however, steadily losing users. If Anthropic does not take concrete steps to improve the user experience, the company may find itself out of the running in a rapidly evolving market that rewards superior performance and responsiveness to user needs.