Apple’s new AI model hints at how AI could come to the iPhone

    Apple teased new generative AI features, but what kind?

    Apple has been quiet about its plans for generative AI, but with the release of new AI models today, it appears the company’s immediate ambitions lie firmly in the “make AI run locally on Apple devices” realm.


    Researchers from Apple released OpenELM, a series of four very small language models on the Hugging Face model library, on Wednesday. Apple said on its Hugging Face model page that OpenELM, which stands for “Open-source Efficient Language Models,” performs very efficiently on text-related tasks like email writing. The models are open source and ready for developers to use.
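    For developers wondering what "ready to use" looks like in practice, below is a minimal sketch of loading one of the checkpoints with the Hugging Face transformers library. The model ID, the Llama-2 tokenizer pairing, and the email-drafting prompt are illustrative assumptions rather than details from Apple's announcement; the model card on Hugging Face has the exact setup.

```python
# Minimal sketch of pulling an OpenELM checkpoint from the Hugging Face Hub.
# The model ID "apple/OpenELM-270M" and the Llama-2 tokenizer pairing are
# assumptions for illustration; consult the model card for the real setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M"            # assumed checkpoint name
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # assumed compatible tokenizer

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Example of the kind of text task the article mentions: drafting an email.
prompt = "Draft a short email declining a meeting invitation:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```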


    OpenELM is even smaller than most lightweight AI models


    OpenELM comes in four sizes: 270 million, 450 million, 1.1 billion, and 3 billion parameters. Parameters are the variables a model learns from its training data and draws on when making predictions, so the count is a rough measure of a model’s capacity. For comparison, Microsoft’s recently released Phi-3 family bottoms out at 3.8 billion parameters, while Google’s Gemma offers a 2 billion parameter version. Small models are cheaper to run and optimized to work on devices like phones and laptops.
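    To put those parameter counts in perspective, here is a back-of-the-envelope sketch of how much memory the weights alone would occupy at 16-bit precision (two bytes per parameter). These are rough approximations for illustration, not figures published by Apple, Microsoft, or Google.

```python
# Back-of-the-envelope weight-memory estimate: bytes = parameters * bytes per parameter.
# Rough approximations for the weights alone (no activations or KV cache).
SIZES = {
    "OpenELM-270M": 270e6,
    "OpenELM-450M": 450e6,
    "OpenELM-1.1B": 1.1e9,
    "OpenELM-3B": 3.0e9,
    "Gemma-2B": 2.0e9,
    "Phi-3-mini-3.8B": 3.8e9,
}

BYTES_PER_PARAM = 2  # 16-bit (fp16/bf16) weights

for name, params in SIZES.items():
    gib = params * BYTES_PER_PARAM / 2**30
    print(f"{name:>16}: ~{gib:.2f} GiB of weights at 16-bit precision")
```

    By this rough measure, the smallest OpenELM model needs only about half a gibibyte for its weights, which is why models of this scale are plausible candidates for running directly on a phone.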


    Apple CEO Tim Cook has teased that generative AI features are coming to the company’s devices, saying in February that Apple is spending “a tremendous amount of time and effort” in the space. However, Apple has yet to share specifics on what its use of AI might look like.


    The company has released other AI models before, though it hasn’t yet released an AI foundation model for commercial use as its competitors have.


    In December, Apple launched MLX, a machine learning framework designed to make it easier to run AI models efficiently on Apple silicon. It has also released an image editing model called MGIE, which lets people fix photos with prompts, and Ferret-UI, a model that could be used to navigate smartphones. Apple is also rumored to be working on a code completion tool similar to GitHub’s Copilot.
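    As a rough illustration of what MLX code looks like, here is a minimal sketch using the mlx.core array API: arrays live in unified memory and computation is lazy until explicitly evaluated. This is a generic example under those assumptions, not code from any of Apple’s released models.

```python
# Minimal MLX sketch: arrays live in unified memory and computation is lazy,
# materialized only when mx.eval() is called. Generic illustration only.
import mlx.core as mx

x = mx.random.normal((4, 8))   # random input matrix
w = mx.random.normal((8, 2))   # random weight matrix
y = mx.maximum(x @ w, 0.0)     # a tiny linear layer + ReLU, built lazily
mx.eval(y)                     # force evaluation on the default device
print(y.shape)                 # (4, 2)
```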


    However, even with all of these model releases, Apple has reportedly reached out to Google and OpenAI to bring their models to Apple products.


    Source

