
The non-profit Allen Institute for AI (Ai2) has unveiled a new artificial intelligence model named Olmo 2 1B, which it claims outperforms similarly sized models from major companies such as Google, Meta, and Alibaba.
The release is part of the second generation of the institute's "OLMo" model series, first announced last November.
The new model has one billion parameters and is released as open source under the Apache 2.0 license, allowing researchers and developers to fully reproduce it using the data and code available on the Hugging Face platform.
Olmo 2 1B belongs to a recent wave of smaller models that do not require advanced hardware and can run easily on laptops or even phones, making it an attractive option for developers and hobbyists without access to powerful computing resources.
Outperformance in Tests
The model was trained on four trillion tokens drawn from multiple sources, including publicly available content, AI-generated content, and human-written material.
This data diversity gives Olmo 2 1B strong performance on demanding benchmarks such as GSM8K, which measures mathematical reasoning, where it scored higher than Google's Gemma 3 1B, Meta's Llama 3.2 1B, and Alibaba's Qwen 2.5 1.5B.
The company also noted that Olmo 2 1B outperforms these models on the TruthfulQA benchmark, which measures factual accuracy.
"This model was pretrained on 4T tokens of high-quality data, following the same standard pretraining into high-quality annealing of our 7, 13, & 32B models. We upload intermediate checkpoints from every 1000 steps in training. Access the base model: https://t.co/xofyWJmo85"
— Ai2 (@allen_ai) May 1, 2025
Ai2 cautioned, however, that despite its advantages the model may produce harmful or inaccurate content, making it unsuitable for direct commercial deployment.
The warning is a reminder that open models require careful monitoring and evaluation before being deployed in sensitive applications.
Ultimately, the release of Olmo 2 1B arrives as major companies and institutions race to develop small, highly efficient models that keep pace with AI advances without requiring complex infrastructure or high operating costs.