How sound quality can get lost in translation

UBCO researchers combine art and science to create seamless, immersive audio experiences 

December 17, 2025

Dr. Miles Thorogood stands in front of illuminated screens while presenting his research.

Dr. Miles Thorogood presents SPIRAL, his Canada Foundation for Innovation-funded space, at the 2024 launch event. Research in the lab explores simulating the creative process in sound design to develop advanced models and algorithms for new computational tools.

We’ve all been there, covering our ears at the school play in the gym or community hall, where the sound is distorted by glitches in the equipment.

“And listening to live performances on the internet introduces even more glitches. Yikes,” says UBC Okanagan’s Dr. Miles Thorogood.   

But now, Dr. Thorogood and his team at UBCO’s Sonic Production, Intelligence, Research, and Applications Lab (SPIRAL) are exploring how advanced machine learning can make glitch-free network music performances possible, supporting creative collaboration and public art experiences.   

The SPIRAL researchers are exploring how audio data travel through a network. They’re using neural networks for Packet Loss Concealment (PLC), which generate synthetic audio to stand in for packets lost in transmission, so listeners never notice the gaps.

During a remote music performance, glitches occur when audio packets travelling across a computer network fail to reach their destination, a problem that is especially common with high-quality audio over wi-fi in noisy urban environments, explains Dr. Thorogood, who teaches in the Faculty of Creative and Critical Studies.

While designing the sound infrastructure for Light Up Kelowna, a wireless multi-node audiovisual installation, Dr. Thorogood observed audio packet loss. Together with undergraduate media studies student Yashvardhan Joshi, he turned that observation into research recently published in IEEE Access.

Dr. Thorogood explains how this research blends art and science to solve a recurring problem: how to hide glitches in complicated soundscapes made from overlapping natural and electronic sounds, so public installations and music feel immersive and uninterrupted. 

Can you explain what audio packet loss concealment means, and what this research paper is about? 

An audio packet is how audio data travel through a network. The User Datagram Protocol (UDP) transmits data streams for real-time communication, such as voice or network music performance. However, UDP, sometimes jokingly nicknamed the “ultra-dodgy protocol”, lacks error control: lost or corrupt packets cannot be retransmitted, causing glitches that disrupt a remote music performance. Our approach uses a machine learning algorithm to generate synthetic audio that closely resembles what should have been heard, based on the history of the audio signal.

Because the signals we target are complex and constantly changing, this contribution represents a substantial increase in modelling difficulty compared with the stationary signals commonly studied in the PLC literature.
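To make the mechanics concrete, here is a minimal sketch in Python of the situation the paper addresses: a receiver listening to a UDP audio stream, detecting skipped sequence numbers, and filling each gap with a synthesized frame before playback continues. The packet format, frame size and the simple repeat-and-fade concealer are illustrative assumptions only, and the play function is a placeholder; in the published work the concealment step is a trained neural network conditioned on the recent history of the signal.

```python
# Minimal sketch (not the SPIRAL implementation): a UDP audio receiver that
# synthesizes a replacement frame whenever a packet fails to arrive, so the
# listener hears continuous audio instead of a gap.
import socket
import struct

import numpy as np

PACKET_SAMPLES = 256   # assumed number of samples carried per packet
HISTORY_FRAMES = 8     # how many past frames the concealer may look at


def conceal(history: list[np.ndarray]) -> np.ndarray:
    """Stand-in for the learned model: repeat the last received frame,
    faded slightly so the repetition is less audible."""
    last = history[-1] if history else np.zeros(PACKET_SAMPLES, dtype=np.float32)
    fade = np.linspace(1.0, 0.8, PACKET_SAMPLES, dtype=np.float32)
    return last * fade


def play(frame: np.ndarray) -> None:
    """Placeholder: hand the frame to the audio output device."""
    pass


def receive_stream(port: int = 9000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    expected_seq = 0
    history: list[np.ndarray] = []
    while True:
        # Assumed packet layout: 4-byte sequence number + float32 samples.
        data, _ = sock.recvfrom(4 + PACKET_SAMPLES * 4)
        seq = struct.unpack("!I", data[:4])[0]
        frame = np.frombuffer(data[4:], dtype=np.float32)
        # UDP offers no retransmission: every skipped sequence number is a
        # lost packet, so synthesize a replacement frame for each gap.
        while expected_seq < seq:
            play(conceal(history))
            expected_seq += 1
        play(frame)
        history = (history + [frame])[-HISTORY_FRAMES:]
        expected_seq = seq + 1
```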

Tell us about the Sonic Production, Intelligence, Research, and Applications Lab (SPIRAL).

At SPIRAL, we research digital media art and design practices and develop technology solutions for creative workflows. The work in this paper proposes deep learning methods for conducting glitch-free network music performances. This directly relates to SPIRAL’s research on machine learning models that assist and streamline the creative process of music performance and sound analysis.

How did you apply this research to Light Up Kelowna?  

We designed the Light Up Kelowna infrastructure around a wireless multi-node audiovisual architecture using open-source hardware and software developed in the lab. After deploying the system, we observed intermittent audio packet loss. Analysis revealed that the large distances between nodes and the noisy radio-frequency urban environment were the cause of the problem. To mitigate the audio packet loss, we explored packet loss concealment for network music.

This work provided an opportunity for Yashvardhan to investigate how current deep learning algorithms designed for Packet Loss Concealment can be repurposed to create experimental audio effects for novel sonic explorations in the music industry. He is also working on public art infrastructure, both creating sound art and developing light and sound technology as part of Light Up Kelowna.

For more information on the work conducted at SPIRAL, visit: https://fccs.ok.ubc.ca/spiral 

Media Contact

Patty Wellborn
E-mail: patty.wellborn@ubc.ca
