Deep learning A.I. can imitate the distortion effects of iconic guitar gods

Music-making is increasingly digitized here in 2020, but some analog audio effects are still very difficult to reproduce in software. One of those effects is the kind of screeching guitar distortion favored by rock gods everywhere. Up to now, these effects, which involve guitar amplifiers, have been next to impossible to re-create convincingly in digital form.

That’s now changed thanks to the work of researchers in the department of signal processing and acoustics at Finland’s Aalto University. Using deep learning artificial intelligence (A.I.), they have created a neural network for guitar distortion modeling that, for the first time, can fool blind-test listeners into thinking it’s the genuine article. Think of it like a Turing Test, cranked all the way up to a Spinal Tap-style 11.

“It has been the general belief of audio researchers for decades that the accurate imitation of the distorted sound of tube guitar amplifiers is very challenging,” Professor Vesa Välimäki told Digital Trends. “One reason is that the distortion is related to dynamic nonlinear behavior, which is known to be hard to simulate even theoretically. Another reason may be that distorted guitar sounds are usually quite prominent in music, so it appears difficult to hide any problems there; all inaccuracies will be very noticeable.”

Researchers recorded the guitar effects in a special anechoic chamber. Mikko Raskinen

To train the neural network to re-create a variety of distortion effects, all that is needed is a few minutes of audio recorded from the target amplifier. The researchers used “clean” audio recorded from an electric guitar in an anechoic chamber, then ran it through an amplifier. This provided both an input, in the form of the unblemished guitar sound, and an output, in the form of the corresponding “target” guitar amplifier output.
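The pairing described above can be sketched in a few lines: two time-aligned recordings — the clean signal and the same take captured through the amplifier — are sliced into equal-length segments to form (input, target) training pairs. This is an illustrative sketch, not the researchers’ actual pipeline; the recordings, segment length, and `make_pairs` helper here are all stand-ins, with a toy `tanh` nonlinearity playing the role of the amplifier.

```python
import numpy as np

# Hypothetical stand-ins for the two aligned mono recordings: `clean`
# (the guitar in the anechoic chamber) and `amped` (the same take
# captured through the target amplifier). Synthetic data keeps the
# sketch self-contained.
SAMPLE_RATE = 44_100
SEGMENT_LEN = 2048                                 # samples per training segment

rng = np.random.default_rng(42)
clean = rng.uniform(-1.0, 1.0, SAMPLE_RATE * 5)    # 5 s of "clean" audio
amped = np.tanh(4.0 * clean)                       # toy amp nonlinearity

def make_pairs(clean, amped, seg_len):
    """Slice aligned recordings into equal-length (input, target) segments."""
    n_segs = len(clean) // seg_len
    x = clean[: n_segs * seg_len].reshape(n_segs, seg_len)
    y = amped[: n_segs * seg_len].reshape(n_segs, seg_len)
    return x, y

x, y = make_pairs(clean, amped, SEGMENT_LEN)
print(x.shape)   # (107, 2048): 107 training segments
```

Each row of `x` is a short clean segment the network sees as input; the matching row of `y` is the distorted sound it is trained to reproduce.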

“Training is done by feeding the neural network a short segment of clean guitar audio, and comparing the network’s output to the ‘target’ amplifier output,” Alec Wright, a doctoral student focused on audio processing using deep learning, told Digital Trends. “This comparison is done in the ‘loss function,’ which is simply an equation that represents how far the neural network output is from the target output, or, how ‘wrong’ the neural network model’s prediction was. The key is a process called ‘gradient descent,’ where you calculate how to adjust the neural network’s parameters very slightly, so that the neural network’s prediction is slightly closer to the target amplifier’s output. This process is then repeated thousands of times — or sometimes much more — until the neural network’s output stops improving.”
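The loop Wright describes — predict, compare against the target in a loss function, and nudge the parameters by gradient descent — can be illustrated with a deliberately tiny stand-in model. The Aalto work uses a full neural network; here, as an assumption for demonstration only, the “model” is a single tanh waveshaper with one learnable pre-gain `g`, trained to match a target amp whose true gain is 3.0.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(-1.0, 1.0, 4096)      # "clean" guitar segment (synthetic)
target = np.tanh(3.0 * clean)             # "target" amplifier output

g = 1.0                                   # model parameter, initialized wrong
lr = 0.5                                  # gradient-descent step size

for step in range(2000):
    pred = np.tanh(g * clean)             # model's prediction
    err = pred - target
    loss = np.mean(err ** 2)              # the "loss function": how wrong we are
    # Gradient of the loss with respect to g (chain rule):
    # d/dg tanh(g*x) = (1 - tanh(g*x)**2) * x
    grad = np.mean(2.0 * err * (1.0 - pred ** 2) * clean)
    g -= lr * grad                        # adjust the parameter very slightly

print(round(g, 2))                        # g approaches the true pre-gain
```

Each pass mirrors the quoted procedure: the loss measures the distance between the model’s output and the amplifier’s, and the update repeats thousands of times until the output stops improving. A real amp model simply has millions of such parameters instead of one.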

You can check out a demo of the A.I. in action at research.spa.aalto.fi/publications/papers/applsci-deep/. A paper describing the work was recently published in the journal Applied Sciences.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…