- Posted on: May 31, 2021
- Industry : Corporate
- Service : Digital Transformation
- Type: Blog
There is nothing in the world that can deter the human spirit. Google’s annual developer conference, Google I/O (Innovation in the Open), returned this year after a pandemic-enforced break in 2020. While much has changed in the world around us, technology has kept people virtually connected through new challenges, even as we march back to normalcy on the strength of massive vaccination drives across the globe.
Google I/O has always been a celebration of technology that brings developers together. When Sundar Pichai took center stage on 19 May 2021, the scene was different from the usual images of developers gathering over the years. This year, the problems loomed large, and people collaborated to build solutions despite the physical distance. The search landscape also reflected people’s changing priorities as they looked for the most reliable information during the Covid-19 crisis. The top advancements centered on multi-modal, context-aware search, security, personalization, learning experiences powered by augmented reality, and richer workspace collaboration.
More powerful wearable solutions: WearOS and Samsung Tizen unite
The pandemic has raised awareness of, and placed a priority on, people’s health, which means advancements in wearables will continue in the direction of health monitoring. For instance, Samsung Electronics pioneered Tizen OS as a successor to MeeGo, and it will now be merged with Google’s Wear OS to power future Galaxy watches. The unified platform promises an always-on, constantly tracking experience by reducing overall battery consumption and optimizing apps for faster start-up.
MUM - The search space continues to evolve and contextualize
Search has evolved and come a long way from simply linking information to seekers. Google introduced BERT in 2019 to build the context of words so it could better understand queries and respond with the most helpful answers. MUM (Multitask Unified Model) is built on a transformer architecture like BERT, but is a thousand times more powerful and trains across 75+ languages simultaneously. It can understand text, images, and videos to solve complex, nuanced queries that require building context, analyzing gaps, and responding with the best answer. By translating across languages, MUM draws on the most diverse information to build a 360-degree perspective around a query. As people turned to Search to validate social-media claims across text, images, and videos, queries beginning with “Is it true that” spiked well beyond usual levels. The upcoming “About this result” feature aims to surface credible, authentic information by sharing a website’s track record.
LaMDA - Multi-lingual and multi-modal are the way forward
Language is by nature complex, and we switch context as conversations unfold; it is not uncommon to jump from the weather to football. LaMDA is Google’s answer to exploring these interactions: it responds sensibly while keeping the dialogue open-ended. A fresh conversation is possible without retraining the model, and since it never takes the same path twice, conversations remain lively and true to life. LaMDA is currently trained only on text, but people converse using images, text, audio, and video. This is where future research will carry the technology forward, readying it to power Google Assistant, Search, and Workspace with multi-modal models that can detect a word written or spoken in any language, along with images, sounds, videos, and related media.
Security in everything with more granular controls for users
Given how information is distributed today, security cannot be overemphasized. Yet passwords remain the most vulnerable link in the entire security chain: two-thirds of people recycle passwords across accounts, leaving them exposed to breaches. The goal is a password-less world that protects everything from sites to apps. Federated learning enables centralized training of machine learning models while ensuring that no raw data leaves the device, and differential privacy prevents individuals from being identified within large, aggregated datasets. Privacy is becoming more modular, with more controls placed in the hands of users.
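To make the differential privacy idea concrete, here is a minimal toy sketch of the general technique: adding calibrated Laplace noise to an aggregate count so the released statistic cannot reveal any individual record. The function names and parameters are illustrative assumptions, not Google’s implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records: list, epsilon: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = len(records)
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative use: release how many (hypothetical) users opted in,
# without exposing whether any specific user is in the list.
noisy = private_count(["user_a", "user_b", "user_c"], epsilon=0.5)
```

The design trade-off is visible in `epsilon`: an analyst still gets a usable aggregate, while any single record’s presence is masked by the noise.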
Augmented Reality is redefining visualization
Enabling smooth and seamless learning has been a key focus throughout the pandemic. When Augmented Reality (AR) is introduced to search, the visual element boosts the overall learning process. Google Lens combines visual translation with educational content from the web to facilitate learning in your preferred language. If you thought learning was only about classroom concepts, technology is proving you wrong: there is so much that goes into sports, and AR is simplifying it to show how the super-human efforts of top athletes come to life. Google Lens can also surface information about whatever you point it at in your surroundings, making shopping a more engaging experience powered by real-time information in the moment.
Google Photos – Rediscovering memories with immersive cinematic moments
Remember those Kodak days when photo reels were frugally reserved for the best shots? About 4 trillion photos are now stored in Google Photos. That accessibility created a unique problem: plenty of photographs, but no easy way to find the right memory when you need it. Machine learning solves this by recognizing visual patterns, grouping similar images, and linking them to special celebrations for reminiscence. Interpolation has given rise to powerful animated memories, creating intermediate frames that bring still images to life. Isn’t it incredible how technology can fix the inadvertent problems that emerge as user behavior evolves?
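The interpolation idea above can be sketched in its simplest form: blending two frames to synthesize an in-between frame. Production systems behind features like Cinematic moments estimate motion between frames with learned models; the plain cross-fade below is only a toy illustration of the underlying concept, with hypothetical names.

```python
import numpy as np

def interpolate_frames(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Linearly blend two RGB frames at time t in [0, 1].

    t=0 returns frame_a, t=1 returns frame_b, t=0.5 an even mix.
    Blending is done in float to avoid uint8 overflow, then clipped
    back to the valid 0-255 pixel range.
    """
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.clip(0, 255).astype(np.uint8)

# Two dummy 2x2 RGB frames: all-black and mid-gray.
a = np.zeros((2, 2, 3), dtype=np.uint8)
b = np.full((2, 2, 3), 200, dtype=np.uint8)
mid = interpolate_frames(a, b, 0.5)  # every pixel value is 100
```

Real interpolation networks replace the linear blend with motion-compensated warping, which is what makes the animated memories look like genuine movement rather than a cross-fade.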
Interconnected workplace collaboration experience
Access to collaboration is key to how we work in 2021 and beyond. Google Workspace will soon offer Smart Canvas, which brings the voices and faces of team members directly into Docs, Sheets, and Slides, so teams can collaborate in real time with a human touch via Companion mode in Google Meet. Assisted writing features can revolutionize the sensitivity of writing; for example, they can encourage inclusive language by prompting a writer to swap the word "chairman" for "chairperson".
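The user-facing behaviour of the "chairman" to "chairperson" suggestion can be sketched with a simple dictionary lookup. Google’s actual assisted writing uses language models rather than a word list; the code below, with an assumed word table, only illustrates the flag-and-suggest pattern.

```python
# Hypothetical word table; a real feature would use a learned model.
INCLUSIVE_ALTERNATIVES = {
    "chairman": "chairperson",
    "mankind": "humankind",
    "manpower": "workforce",
}

def suggest_inclusive(text: str) -> list:
    """Return (flagged_word, suggestion) pairs found in the text."""
    suggestions = []
    for word in text.lower().split():
        stripped = word.strip(".,!?;:")  # drop trailing punctuation
        if stripped in INCLUSIVE_ALTERNATIVES:
            suggestions.append((stripped, INCLUSIVE_ALTERNATIVES[stripped]))
    return suggestions

print(suggest_inclusive("The chairman addressed mankind."))
# [('chairman', 'chairperson'), ('mankind', 'humankind')]
```

An editor built on this pattern would underline each flagged word and offer the replacement as a one-click suggestion rather than rewriting silently.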
It was refreshing to see Google I/O return and give us a glimpse of the technological advancements that will influence and touch more lives in the pursuit of making work a safer, happier, and more connected place.