Harnessing the Potential of AI at the Edge: Acoustical Evaluation Update June 2023

Last month, we gave you an inside peek into how we are deploying AI techniques at the edge to deliver real-time acoustical analysis, surfacing insights the human ear alone cannot reliably pick out, such as speech characteristics, voice patterns, traffic noise, and other signals that could revolutionize a range of industries.

Here is a quick update on our work and a sample use case for June 2023:

Knowing who is speaking and where they are positioned within the environment will allow future acoustical devices to use that added context to deliver more tailored user experiences.

For example, if I say, "I'm having trouble reading this book," and my home assistant knows I'm sitting in the living room near a reading light, it can offer to turn on the light closest to me at the light level and color temperature I typically prefer for that time of day.

With current acoustical devices, I have to make each of these very specific requests individually (e.g., name the specific device to turn on, specify the light level, specify the lighting color, and so on).
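
To make the contrast concrete, here is a minimal sketch of how a context-aware request resolver could work once speaker identity and position are available. This is purely illustrative: the names (Light, resolve_lighting_request, the preference fields) are hypothetical, not drawn from our actual system.

```python
# Illustrative sketch only: a hypothetical context-aware request resolver.
# None of these names come from our actual system; they exist to show how
# speaker identity and position could replace explicit, itemized commands.
from dataclasses import dataclass
from math import dist

@dataclass
class Light:
    name: str
    position: tuple  # (x, y) in meters, room coordinates

def resolve_lighting_request(speaker_id: str,
                             speaker_position: tuple,
                             lights: list[Light],
                             preferences: dict) -> dict:
    """Pick the nearest light and apply the speaker's stored preferences."""
    nearest = min(lights, key=lambda l: dist(l.position, speaker_position))
    prefs = preferences[speaker_id]
    return {
        "device": nearest.name,
        "brightness": prefs["brightness"],      # e.g., 80 (percent)
        "color_temp_k": prefs["color_temp_k"],  # e.g., 2700 (warm white)
    }

# Today: "Turn on the reading lamp at 80% brightness, warm white."
# With speaker ID + localization, "I'm having trouble reading this book"
# carries enough context to produce the same action:
lights = [Light("reading_lamp", (1.0, 2.0)), Light("ceiling", (3.0, 3.0))]
prefs = {"alice": {"brightness": 80, "color_temp_k": 2700}}
print(resolve_lighting_request("alice", (1.2, 2.1), lights, prefs))
```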

It is this ability of AI at the Edge to deliver a tailored, more natural user experience that will finally make our home control systems feel as smart as they've claimed to be for nearly a decade.

From a technical standpoint, here is how we are actively working to deploy this feature:

  • First, we use a MIMO (multiple-input, multiple-output) block within our custom neural network; it currently separates up to eight sound sources, even while those sources are in motion (a minimal separation sketch follows this list).

  • Next, we will augment the MIMO block with cross-correlation to pinpoint the precise location of each sound source (see the GCC-PHAT sketch after this list).

  • Beyond that, we will use voiceprint techniques to discern between, and separate, sound sources in the environment that sit close to one another (see the embedding sketch after this list).
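
As a rough illustration of the first step, here is a minimal mask-based separation block in PyTorch. It is not our production network; the layer sizes, the spectrogram dimensions, and the name MimoSeparator are all assumptions made for the sketch. The core idea it demonstrates is standard: estimate one magnitude mask per source from a multi-microphone spectrogram, then apply each mask to recover the individual sources.

```python
# Minimal sketch of a mask-based multi-source separation block (PyTorch).
# All sizes and names here are illustrative assumptions, not our actual model.
import torch
import torch.nn as nn

class MimoSeparator(nn.Module):
    """Maps a multi-mic magnitude spectrogram to one mask per sound source."""
    def __init__(self, n_mics: int = 4, n_sources: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_mics, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # One output channel per source: a mask over (freq, time).
            nn.Conv2d(64, n_sources, kernel_size=3, padding=1),
            nn.Sigmoid(),  # masks constrained to [0, 1]
        )

    def forward(self, mag_spec: torch.Tensor) -> torch.Tensor:
        # mag_spec: (batch, n_mics, n_freq, n_frames)
        masks = self.net(mag_spec)  # (batch, n_sources, n_freq, n_frames)
        # Apply each mask to a reference channel (mic 0) to isolate sources.
        ref = mag_spec[:, :1]       # (batch, 1, n_freq, n_frames)
        return masks * ref          # (batch, n_sources, n_freq, n_frames)

# Example: 4 mics, 257 frequency bins (512-point STFT), 100 frames.
model = MimoSeparator()
x = torch.randn(1, 4, 257, 100).abs()
sources = model(x)
print(sources.shape)  # torch.Size([1, 8, 257, 100])
```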
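
For the correlation step, the classic tool is generalized cross-correlation with phase transform (GCC-PHAT), which estimates the time difference of arrival (TDOA) between a pair of microphones; combined with the known mic geometry, TDOAs yield a source direction. The sketch below is a textbook numpy implementation of GCC-PHAT, not code from our pipeline.

```python
# Textbook GCC-PHAT time-difference-of-arrival estimate between two mics.
# Standard technique; an illustrative implementation, not our pipeline code.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Return the estimated delay (seconds) of `sig` relative to `ref`."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    # Phase transform: keep only phase, discarding magnitude, which
    # sharpens the correlation peak in reverberant rooms.
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Example: a signal delayed by 25 samples between two mics at 16 kHz.
fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(4096)
sig = np.roll(ref, 25)
print(gcc_phat(sig, ref, fs) * fs)  # approximately 25.0 samples of delay
```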
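
Finally, for the voiceprint step, a common approach is to map each separated source to a fixed-length speaker embedding and compare embeddings by cosine similarity; a source whose embedding matches an enrolled voiceprint can be attributed to that speaker even when sources are physically close together. In the sketch below, embed() is a hypothetical stand-in for a real speaker-embedding model (e.g., an x-vector network); only the matching logic around it is the point.

```python
# Sketch of voiceprint matching via speaker embeddings + cosine similarity.
# `embed` is a hypothetical stand-in for a real trained embedding network;
# here it fakes a vector from spectral statistics so the sketch runs.
import numpy as np

def embed(waveform: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a trained embedding model."""
    spec = np.abs(np.fft.rfft(waveform, n=512))[:192]
    v = np.log1p(spec)
    return v / np.linalg.norm(v)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(source: np.ndarray, voiceprints: dict, threshold: float = 0.8):
    """Match a separated source against enrolled voiceprints."""
    emb = embed(source)
    best_name, best_score = max(
        ((name, cosine_similarity(emb, ref)) for name, ref in voiceprints.items()),
        key=lambda item: item[1],
    )
    return best_name if best_score >= threshold else "unknown"

# Example: enroll two speakers, then identify a new utterance.
rng = np.random.default_rng(1)
alice_clip = rng.standard_normal(16000)
bob_clip = rng.standard_normal(16000)
voiceprints = {"alice": embed(alice_clip), "bob": embed(bob_clip)}
print(identify(alice_clip, voiceprints))  # "alice"
```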

Stay tuned for more insights and updates from our “Harnessing the Potential of AI at the Edge” series!