
Google showed a new feature with AI that works with the help of the camera


The Google I/O developer conference kicks off today (it is scheduled to start at 19:00 Kyiv time), and the tech giant has decided to tease the audience a little by presenting an intriguing new artificial intelligence feature that relies on the camera to work.

The company posted a short video on its Twitter page that appears to demonstrate a new AI feature that works through the camera and can recognize objects in the frame in real-time.


The video shows a Pixel device with its camera turned on and pointed at a stage where preparations are underway for a keynote at the conference. The person holding the camera asks, “Hey, what do you think is going on here?”

The AI replies, “It looks like people are preparing for a big event, maybe a conference or a presentation.” It was also able to identify the letters “IO” as referring to Google’s developer conference and mentioned “new advances in artificial intelligence.” A text transcription of the dialogue is also visible on the screen.

It’s not entirely clear what this feature is, though it bears some resemblance to Google Lens’s camera-powered search. The one shown in the teaser video, however, appears to work in real time and responds quickly to voice commands. The fact that the demo runs on a Pixel device is also interesting, as Google often makes new AI features available first on Pixel devices.

While it’s a bit unusual for Google to show off one of its announcements so soon before a big event, this is likely the company’s response to OpenAI’s new feature. As we reported earlier today, OpenAI held a live Spring Update event, where it demonstrated similar capabilities in its new GPT-4o model. Its multimodal capabilities allow it to take input not only from text in the dialogue but also from voice requests or images from the camera. Notably, the chatbot was able to recognize emotions on a person’s face (although a second before that, it mistook the face for a wooden tabletop, an image of which had been uploaded to the dialogue earlier).

This announcement also suggests that artificial intelligence, its capabilities, and the rollout of new features will take centre stage at this year’s conference. We’ll likely learn more about the Gemma model, which is slated to be an open-source counterpart to Gemini. In addition, the Pixel 8a and Android 15 are expected to debut there.


Source: Engadget