Meta AI App Launches with Llama 4 to Rival ChatGPT

Meta has officially launched a new standalone app for its Meta AI assistant, aiming to offer users a more personalized experience powered by the latest Llama 4 model.

Unlike its earlier integrations inside WhatsApp, Facebook, and Instagram, the assistant now lives in a dedicated app with a stronger focus on natural voice interaction and smooth conversational flow.

The app leverages a voice technology Meta calls “full-duplex speech,” in which the assistant generates speech directly in a two-way conversation rather than reading out written replies, allowing for natural, real-time audio exchanges.

The voice feature is still in early testing and is initially rolling out in the U.S., Canada, Australia, and New Zealand to gather feedback.

Key Features of the New Meta AI App

[Screenshot: the Meta AI app start screen, showing voice and text interaction options, chat suggestions, the Meta AI logo, and a Ray-Ban glasses connection option.]

One of the standout features is the assistant’s ability to remember details the user chooses to share, such as personal preferences or general information, allowing it to generate more relevant responses.

For example, if a user mentions being allergic to dairy, the assistant will avoid suggesting dairy-related activities or content in future interactions.

The app also has access to previous user interactions across Meta platforms, giving it deeper context for generating helpful responses.

Meta has also introduced a new “Discover” section, an interactive space where users can see how others are using the assistant, share creative prompts, and interact with community-generated content, all of which is shared only with users' consent.

[Screenshot: the Discover section in Meta AI, showing prompts and interactions shared by other users.]

Integration with Other Devices and Platforms

Meta has integrated the app with its AI-powered Ray-Ban smart glasses. Users can start a conversation on the glasses and seamlessly continue it in the app or on the web, although picking a conversation back up on the glasses from the app or web is not yet supported.

All settings and paired devices from the previous “Meta View” app have been automatically migrated to the new platform.

On the web, Meta AI now supports voice chat and improved image generation, and it adds a new document editor. The editor lets users create rich documents that mix text and visuals, export them to PDF, and upload files for AI-based analysis and insights.

Meta emphasizes that the assistant is built to feel more relatable in both language and function, reflecting decades of experience in digital personalization—while keeping the user fully in control of what data is shared.

With this launch, Meta is clearly positioning itself in direct competition with ChatGPT and other AI assistants, and it brings a distinctive strength to the table: prior knowledge of user behavior and interests across its suite of platforms.
