Phi-4-Reasoning-Plus: Microsoft’s Super Reasoning Model Rivals DeepSeek

Microsoft recently announced the launch of Phi-4-Reasoning-Plus, an enhanced version of its Phi-4 model distinguished by deeper logical reasoning and more efficient, accurate handling of complex problems.

The new model uses a dense Transformer architecture with 14 billion parameters. Despite its modest size, it outperforms larger models on a range of benchmarks, including mathematics, coding, and logic exams.

Microsoft trained Phi-4-Reasoning-Plus on carefully curated synthetic and filtered data, followed by a reinforcement-learning phase built around more than six thousand mathematical problems.

This phase helped boost the model's accuracy, while ensuring its responses remained logical, consistent, and free from repetition.

A distinguishing feature of the training method was the use of internal learning signals that separate the reasoning steps from the final answer, making it easier to trace how a solution was reached and to understand the logic behind it.

Phi-4-Reasoning-Plus: Performance and Features

In terms of performance, Microsoft reports that the new model clearly outperforms far larger models such as DeepSeek-R1-Distill-Llama-70B on demanding benchmarks, and that its results approach those of the full DeepSeek-R1 model, which contains 671 billion parameters.

On the AIME 2025 mathematics exam, the model answered all thirty questions correctly on the first attempt, a result most larger models fail to match.

On the technical side, Phi-4-Reasoning-Plus supports long contexts of up to 32,000 tokens and has been tested successfully on inputs exceeding 64,000 tokens.
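As a quick sanity check before sending long documents, you can count tokens with the model's tokenizer. The sketch below assumes the Hugging Face repository id `microsoft/Phi-4-reasoning-plus` (verify it on the Hub) and uses the 32,000-token window cited above.

```python
# Minimal sketch: check whether an input fits the advertised context window.
# The repository id is an assumption; verify it on the Hugging Face Hub.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 32_000  # token limit cited in the article

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning-plus")

def fits_in_context(text: str) -> bool:
    """Return True if the text tokenizes to at most CONTEXT_WINDOW tokens."""
    return len(tokenizer.encode(text)) <= CONTEXT_WINDOW

print(fits_in_context("A short prompt easily fits."))  # True
```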

It is also compatible with open-source tools such as Hugging Face Transformers, llama.cpp, vLLM, and Ollama, which simplifies integration into a variety of working environments.
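As a rough illustration of that integration, here is a minimal sketch of chat inference through Hugging Face Transformers. The repository id and generation settings are assumptions, not official values; check the model card before relying on them.

```python
# Minimal sketch: chat inference with Phi-4-Reasoning-Plus via Transformers.
# The repo id below is an assumption; consult the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning-plus"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```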

According to Microsoft, the model can be applied in multiple fields, including legal analysis, financial analysis, and the examination of complex technical data.

Additionally, its ability to separate reasoning logic from the final answer allows for seamless integration into systems requiring interpretability or transparency.
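A simple post-processing step can perform that separation. The sketch below assumes the model wraps its chain of thought in `<think>...</think>` tags, a convention several reasoning models use; the actual delimiter may differ, so check the model card.

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning trace, final answer).

    Assumes the reasoning is wrapped in <think>...</think> tags;
    the real delimiter may differ between releases.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match:
        return match.group(1).strip(), output[match.end():].strip()
    # No tags found: treat the whole output as the answer.
    return "", output.strip()

reasoning, answer = split_reasoning(
    "<think>100 * 101 / 2 = 5050.</think> The sum is 5050."
)
print("Reasoning:", reasoning)  # Reasoning: 100 * 101 / 2 = 5050.
print("Answer:", answer)        # Answer: The sum is 5050.
```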

Moreover, the model underwent extensive safety testing by the Microsoft AI Red Team, increasing confidence in its potential use in stringent regulatory environments.

In conclusion, this move by Microsoft shows that small models are no longer necessarily less capable. With smart training methods and carefully designed data, accurate and widely deployable models can be built without the need for complex infrastructure.

The new model opens the door for technology organizations to adopt powerful solutions at manageable sizes, whether in interactive AI services or in data-analysis tools that rely on sequential logical reasoning.
