Here's how the new A.I. model from Google compares to ChatGPT.
Google just dropped its most advanced family of A.I. models yet, and they give business owners a whole new bag of tricks. Today, the company announced Gemini, and business owners will be able to begin integrating the tech in just a week.
What sets Gemini apart from other A.I. models like ChatGPT is that it was designed to be multimodal from the ground up, meaning it can respond to spoken questions, images, text, and code all at the same time. For example, users could upload a picture or video and ask the system to write a poem about what it sees. Other multimodal platforms are built by separately training models with different capabilities and then stitching them together.
In a video showing how this multimodality can be harnessed for product ideation, a user shows Gemini an image of two balls of colored yarn and asks what he could make with the materials. In response, the chatbot generates photorealistic images of items that could be made with the yarn, such as crocheted cakes and fruit. In another example of how Gemini can transform one piece of media into another, the user draws a picture of a guitar and asks Gemini to create music inspired by the image. When the user adds an electric amp to the drawing, Gemini changes the music to fit.
The model could also be a boon to businesses with research-heavy operations. In another video, Google researchers described a project in which they needed to update a dataset with new information, but doing so would have required sifting through over 200,000 scientific papers. Instead, the researchers asked Gemini to filter out the irrelevant papers and extract key data from the rest. The A.I. whittled the 200,000 papers down to 250 and updated the study with the new data.
Google says there will be three versions of Gemini: Gemini Nano, designed to power generative A.I. applications on mobile devices; Gemini Pro, designed for deployment at scale; and Gemini Ultra, designed for highly complex tasks that need extra computing power and reasoning.
Currently, Gemini can only be accessed through Google's chatbot Bard, but on December 13, developers and enterprise customers will be able to build Gemini Pro and Nano into their own products via the Gemini API. The Ultra version is expected to be released in early 2024.
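For developers curious what that integration might look like, here is a minimal sketch in Python. It assumes Google's `google-generativeai` package (`pip install google-generativeai`), an API key stored in a `GOOGLE_API_KEY` environment variable, and the `gemini-pro` model name; the helper function and the sample prompt are illustrative, not part of Google's announcement.

```python
import os


def build_prompt(product: str) -> str:
    """Compose a simple business-use prompt (illustrative example)."""
    return f"Write a short product description for: {product}"


def describe_product(product: str) -> str:
    """Send the prompt to Gemini Pro and return the generated text.

    Assumes the google-generativeai SDK and a GOOGLE_API_KEY
    environment variable; this call requires network access.
    """
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content(build_prompt(product))
    return response.text


if __name__ == "__main__":
    # Only hit the API if a key is actually configured.
    if "GOOGLE_API_KEY" in os.environ:
        print(describe_product("hand-dyed yarn"))
    else:
        print(build_prompt("hand-dyed yarn"))
```

The text-only call above is the simplest case; the same `generate_content` entry point is how the SDK exposes Gemini's multimodal input, accepting images alongside text.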
- Adenman