
Grok 4 Shows Bias Toward Israel, Citing Elon Musk’s Views

xAI recently launched its latest AI model, Grok 4, with the promise of a tool dedicated to seeking absolute truth.

However, tests conducted by journalists and researchers have uncovered a troubling pattern: on sensitive issues, the model appears to consult the opinions of xAI founder Elon Musk before formulating its answers.

Multiple reports have shown that asking Grok 4 about topics like the Israeli-Palestinian conflict or U.S. immigration policy prompts the model to specifically search for Musk’s posts on X and for news coverage of his views.

When asked to answer in a single word who it supports in the conflict, Grok 4 paused at length before answering: “Israel.” Of the 64 sources it consulted in its reasoning, 54 were related to Elon Musk’s positions.


Similarly, when asked for its stance on U.S. immigration, the AI’s internal thought process showed that it had reviewed twenty of Musk’s posts on the subject before giving its one-word answer: “Yes.”

Notably, this process occurs automatically without any explicit user command.

The model's "chain of thought" feature, which outlines its reasoning steps, reveals that it independently decides that "searching for Elon Musk's stance might enrich the answer."

This behavior doesn't appear to be intentionally programmed. Programmer Simon Willison explained that after examining the model's system instructions, he found no direct commands forcing Grok to seek out Musk's opinions.

Willison speculated that the model "knows" it was created by Musk's company, xAI, and therefore concludes that referencing his thoughts is a logical step when faced with opinion-based questions.

This development follows a previous controversy surrounding its predecessor, Grok 3, which posted anti-Semitic comments after being updated to be "politically incorrect." The company was forced to restrict the model's account and delete those posts at the time.

Nevertheless, the newer model's reliance on the opinions of a single individual, even the company's founder, directly conflicts with its stated goal of a neutral search for truth.

Such behavior could undermine user and corporate trust in the model, especially given Musk's plans to integrate it into Tesla vehicles and the X platform; its impartiality is therefore a critical issue for the future of his products.

Khaled B.


