After announcing the arrival of Bard, its experimental conversational artificial intelligence (AI) service, Google held a global event this Wednesday (8) in Paris, where it presented its latest AI innovations for Search, Google Maps and Google Translate.
Among the main innovations announced by company spokespersons is the expansion of Multisearch, a search experience that combines image and text in the Search app, to more countries. The experience is already available in Brazil.
In Maps, Google introduced advances in AI and augmented reality, including the expansion of the Live Indoor View feature to 1,000 new airports, train stations and malls in cities around the world, including London, Paris, Berlin and São Paulo, in the coming months.
During the event, Google Vice President Prabhakar Raghavan shared more details about Bard, a new experimental conversational AI service powered by LaMDA, and its applications. He also reinforced the importance of developing the technology in a bold and responsible way, with current, high-quality information. Bard is currently being tested by external groups and, after a period of feedback, should be released to the public in the coming weeks.
Discover the main announcements below:
Multisearch: new ways to “Google”
Mobile device cameras have become a powerful way to explore and understand the world around us. To give you an idea, the Lens application is used more than 10 billion times a month, as people search for what they see through the images from their cameras.
Last year, Google reached an important milestone with the launch of Multisearch in the United States, a new way to search using text and images at the same time. In the Google app, you can use Lens to take a photo or grab a screenshot and add text to it, much as you would naturally point at something and ask a question about it. Starting today, the experience is rolling out on mobile devices in all languages and countries where Lens is available, including Brazil.
Recently, Google also added the ability to search locally to Multisearch. You can take a photo with Lens and add “near me” to the search to find what you need, or take a screenshot of a dish or an object and see which nearby restaurants or stores have it available. Over the coming months, the feature will roll out to all languages and countries where Lens is available, including Brazil.
With Lens, Google’s goal is to connect people to the world’s information. You can now use Lens to search with your camera or photos directly from the Search bar. In the coming months, you will also be able to use Lens to “search your screen” directly on Android: you can search what you see in photos or videos in apps you already use, such as messaging and video apps, without having to leave them or miss the moment.
Maps: Immersive Visualization and Live Indoor View in more countries
Last year, Google shared its vision for the future of Google Maps: an intuitive, interactive map that reinvents how you search and navigate, helping people make more useful and sustainable choices.
At the Paris event, the company showed how AI is bringing that mission to life, with updates to immersive visualization and Live View, as well as new features for EV drivers and people commuting by bicycle or public transport.
One of the most anticipated features, Immersive Visualization, is now available in five additional cities: London, Los Angeles, New York, San Francisco and Tokyo, with Florence, Venice, Amsterdam and Dublin coming in the next months. The feature draws on advances in AI and computer vision, fusing billions of Street View images and aerial panoramas to create a rich digital model of the world.
Let’s say you’re planning a visit to the Rijksmuseum in Amsterdam. You can virtually fly over the building and see where the entrance is. With a time slider, you can even see what the area looks like at different times of day and what the weather will be like.
It’s also possible to see when a place tends to be most crowded, so you have all the information you need to decide when to go. If you’re hungry, you can head down to street level to explore nearby restaurants and peek inside to quickly get a feel for a place before making your reservation.
Google also announced that it will expand Live Indoor View to more than 1,000 new airports, train stations and malls in cities around the world, including São Paulo, London, Paris, Berlin, Madrid, Barcelona, Prague, Frankfurt, Tokyo, Sydney, Melbourne, Singapore and Taipei. With Live View, you can point your phone’s camera inside these locations to find stores, ATMs and restaurants.
From now on, Maps will also let you follow your trip with a summarized view of the route, or directly from the lock screen. For electric cars, Google announced that in the coming months it will be possible to view charging stations directly in Search and Maps.
Drivers will be able to add required charging stops to shorter trips, and the app will suggest the best charging stop based on factors such as current traffic, the car’s charge level and expected energy consumption.
New releases make Translate more accessible to its 1 billion users
For Translate, Google announced design changes: the app has been completely redesigned for Android users and will get a new interface on iOS. In the coming months, the platform will also offer more comprehensive, contextual translations for single words, short sentences and expressions with multiple meanings, helping you better understand and pick the best translation for the context.
This will be available in English, French, German, Japanese and Spanish in the coming weeks and will be expanded to more languages in the coming months.
In Brazil and around the world, the augmented-reality translation features in Lens are also rolling out, blending translated text seamlessly into images so the results look much more natural.