Throughout the COVID-19 pandemic, video-conferencing has become the backbone of both our work and social lives. Today, on #WorldHugDay, we take a look at some of the ways in which AI (Artificial Intelligence) will help us connect virtually more efficiently in the future.
Almost a year after most of the western world was plunged into lockdown, it’s hard for most of us to imagine life without the constant bleeping of team chat applications on our phones, or the all-too-frequent need to remind a co-worker that they have accidentally muted their microphone.
As innovative as this current technology may be, the continuing evolution of AI-based technologies is making further advances in video-conferencing platforms increasingly visible.
Sorry, you froze!
There’s nothing more annoying than a ‘laggy’ or low-quality video stream when you’re trying to catch up with friends or take part in a meeting. It’s a daily problem for those of us without a high-speed internet connection, but this bothersome reality of the virtual lifestyle may soon be a thing of the past.
So-called ‘AI video compression’ reinvents the way video-chat platforms transmit images, and is already being incorporated into video-conferencing products.
How does it work?
By collecting data on users’ facial features, such as the eyes, nose and mouth, this AI-powered technology creates a virtual avatar which, when combined with the original video image, produces a much higher-quality stream.
At the same time, this technology dramatically reduces bandwidth consumption. The result is a much more seamless user experience, allowing everybody to enjoy high quality video streams regardless of their bandwidth capacity, making video-conferencing possible in remote areas with weak network connections.
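To get a feel for why transmitting facial keypoints instead of full frames saves so much bandwidth, here is a back-of-the-envelope sketch. The figures used (a typical H.264 call bitrate, a 68-landmark face model) are illustrative assumptions, not measurements of any specific product.

```python
# Rough per-frame payload comparison: conventional compressed video
# vs. a keypoint-based AI scheme. All numbers are illustrative.

FPS = 30

# A typical H.264 720p call at ~1.5 Mbit/s, converted to bytes per frame.
h264_bytes_per_frame = (1_500_000 / 8) / FPS

# Keypoint scheme: send 68 facial landmarks as (x, y) pairs of
# 2-byte floats instead of pixel data.
KEYPOINTS = 68
keypoint_bytes_per_frame = KEYPOINTS * 2 * 2  # 68 points * 2 coords * 2 bytes

reduction = h264_bytes_per_frame / keypoint_bytes_per_frame

print(f"H.264 frame:    ~{h264_bytes_per_frame:,.0f} bytes")  # ~6,250 bytes
print(f"Keypoint frame: ~{keypoint_bytes_per_frame} bytes")   # 272 bytes
print(f"Reduction:      ~{reduction:.0f}x")
```

Under these assumptions the keypoint payload is over twenty times smaller per frame, which is why the receiving end can reconstruct a smooth, high-quality image even on a weak connection.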
This technology can also adjust camera angles, make users appear more engaged by redirecting their gaze towards the camera, and potentially even mask skin imperfections such as zits and eye-bags.
NVIDIA MAXINE is an example of such a pioneering solution that offers integrated AI frameworks to video conferencing developers.
Can you translate please?
As we become accustomed to working remotely and depending on video-conferencing technology as a primary way of doing business, developers are starting to incorporate conversational AI frameworks into their products.
Video-conferencing platforms of the future will incorporate tools such as a digital assistant that can inform users of relevant information, such as the weather, and offer real-time translation while on a call. Clearly, this will be extremely helpful for those who wish to hold international conversations in both business and leisure contexts.
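An in-call assistant like the one described needs some way to route a spoken request to the right backend capability. The sketch below shows the simplest possible version of that routing step; the skill names and the keyword-matching rules are illustrative assumptions, and a production assistant would use a trained language-understanding model rather than keyword rules.

```python
# Toy intent router for a hypothetical in-call digital assistant.
# Maps a user's utterance to one of a few illustrative "skills".

def route_request(utterance: str) -> str:
    """Return the name of the (hypothetical) skill that should
    handle this request, based on simple keyword matching."""
    text = utterance.lower()
    if "weather" in text:
        return "weather_skill"
    if "translate" in text or "translation" in text:
        return "translation_skill"
    return "fallback_skill"

print(route_request("What's the weather in Madrid?"))   # weather_skill
print(route_request("Can you translate that please?"))  # translation_skill
```

In a real platform the router would sit behind a speech-to-text stage, so that whatever a participant says during the call can be transcribed and dispatched the same way.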
Conversational AI frameworks can also identify different voices, allowing the platform to recognise the main speaker and mute all noise from the surrounding environment. This makes it far easier to hold a virtual conversation in a busy public space, or indeed at home with noisy animals or children around.
Video-conferencing platforms are a vital tool for many of us as we go about our daily lives during the COVID-19 pandemic. This has incentivised developers to push the boundaries of existing platforms and apply AI within current technology to achieve increased functionality. Thanks to this innovation in AI, someday soon, perhaps without even realising it, we will be communicating through our digitally produced avatars as we ignore the screaming children in the background of our online interview. The future is in sight.
To keep up to date with LUCA visit our website, subscribe to LUCA Data Speaks or follow us on Twitter, LinkedIn or YouTube.