
Getting Started With Reverb Design, Part 1: Dev Environments

Over the years, I’ve received a fair number of questions from people who want to try their hand at developing reverb algorithms. In many cases, people have qualified their questions with “I’m not an EE, but…” Well, as an Anthropology major, I can attest that you can design reverbs without having to go through an intensive 4-year academic program!

In my opinion, all the theory in the world is useless without being able to HEAR the results. So my suggestion is to start your reverb development path by setting up one or more development environments. Ideally, you want a place where you can hear your work in near-real time, and run the audio of your choice through your algorithms.

Do you know how to code? Great! My recommendation is to download the Juce SDK. Juce is the framework used by a lot of plugin developers to create their plugins. It handles both the audio and visual parts of a plugin, has targets for all of the popular plugin formats and platforms (AU, VST2, VST3, AAX, Mac, Windows, Linux, iOS, Android), and is free and open source to get started. Once you have a plugin you want to release commercially, Juce has several paid licensing options for closed-source plugins. I would highly recommend installing the Juce SDK, compiling the example plugins, and modifying those examples as the start of your own plugins.
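To give a sense of where your own reverb DSP ends up in a Juce plugin, here is a minimal sketch of per-sample processing on a juce::AudioBuffer, the kind of loop you would call from your AudioProcessor::processBlock() override. This is my own illustration, not code from any Valhalla plugin: the function name applyMyReverb and the 0.5f gain are placeholders, and the JuceHeader.h include assumes a Projucer-generated project.

```cpp
// Sketch of per-sample processing on a juce::AudioBuffer, as you would do
// inside an AudioProcessor::processBlock() override. "applyMyReverb" and
// the 0.5f gain are placeholders for your own reverb algorithm.
#include <JuceHeader.h>   // assumes a Projucer-generated Juce project

static void applyMyReverb (juce::AudioBuffer<float>& buffer)
{
    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        float* data = buffer.getWritePointer (ch);
        for (int n = 0; n < buffer.getNumSamples(); ++n)
            data[n] *= 0.5f;   // replace this simple gain with your DSP
    }
}
```

The Juce example plugins already have a processBlock() that loops over channels like this, which is exactly why modifying them is a good starting point.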

Don’t know how to code? My suggestion is to work with a computer music language / environment, so you can start learning the fundamentals of digital signal processing and reverb algorithms without first having to master a general-purpose programming language. When I started programming reverbs in 1998, I was using Csound. This was an old-school language even in 1998, but it had all the building blocks I needed to make reverbs:

  • Delay lines. A “delay line” allows you to read from and write to a memory buffer, and increment through that buffer every sample. The distance between your read and write points determines how much the signal is delayed from the input to the output. An algorithmic reverb will have several to several dozen of these, all at different delay lengths.
  • Filters. A digital filter is used to control the high and/or low frequency balance of a signal. In a digital reverb, filters are often placed in the feedback path of a delay, where the signal is filtered, scaled by a value, and added back into the input.
  • Modulators. Many digital reverbs (including all of the Valhalla reverbs) will slowly vary the delay lengths over time. This requires fractional delays that use linear or higher-order interpolation, but that’s a concept beyond this introduction. The important point is that you need some sort of modulator to slowly vary things. These are often low frequency oscillators, or some sort of band limited noise.
  • Add/subtract/multiply. These are the building blocks for creating feedback around the delays, as well as feeding a signal around a delay (as found in allpass delays), scaling the outputs of the reverb, creating matrices for mixing signals together before they are fed back into the inputs, stuff like that. A minimal C++ sketch combining these building blocks follows this list.
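To make those building blocks concrete, here is a minimal C++ sketch of a feedback comb filter: a delay line with a one-pole damping filter and a feedback gain in its loop, plus a slow sine LFO that modulates the read position via linear interpolation. All names, constants, and modulation depths are my own illustrative assumptions, not code from any Valhalla reverb.

```cpp
// Minimal feedback comb filter sketch: delay line + one-pole damping filter
// + feedback gain, with a slow sine LFO modulating the read position via
// linear interpolation. Names and constants are illustrative assumptions.
#include <cmath>
#include <vector>

class ModulatedComb
{
public:
    ModulatedComb (float sampleRate, float delayMs, float feedback, float damping)
        : fs (sampleRate), fb (feedback), damp (damping),
          buffer (static_cast<size_t> (sampleRate * 0.1f), 0.0f)  // 100 ms max delay
    {
        baseDelay = delayMs * 0.001f * fs;
    }

    float process (float input)
    {
        // LFO: 0.5 Hz sine, +/- 8 samples of modulation depth (arbitrary choice).
        lfoPhase += 0.5f / fs;
        if (lfoPhase >= 1.0f) lfoPhase -= 1.0f;
        const float delay = baseDelay + 8.0f * std::sin (2.0f * 3.14159265f * lfoPhase);

        // Fractional read position behind the write index, wrapped into the buffer.
        const float bufLen = static_cast<float> (buffer.size());
        float readPos = static_cast<float> (writeIndex) - delay;
        while (readPos < 0.0f) readPos += bufLen;

        const int   i0   = static_cast<int> (readPos);
        const int   i1   = (i0 + 1) % static_cast<int> (buffer.size());
        const float frac = readPos - static_cast<float> (i0);

        // Linear interpolation between the two samples around the read point.
        const float delayed = buffer[i0] + frac * (buffer[i1] - buffer[i0]);

        // One-pole lowpass in the feedback path (the "damping" filter).
        lowpassState = delayed + damp * (lowpassState - delayed);

        // Write the input plus the scaled, filtered feedback into the delay line.
        buffer[writeIndex] = input + fb * lowpassState;
        writeIndex = (writeIndex + 1) % static_cast<int> (buffer.size());

        return delayed;
    }

private:
    float fs, fb, damp, baseDelay;
    float lfoPhase = 0.0f, lowpassState = 0.0f;
    std::vector<float> buffer;
    int writeIndex = 0;
};
```

An algorithmic reverb runs several of these (typically at mutually prime delay lengths), mixes their outputs, and feeds signals back around them; that feedback and mixing is where the add/subtract/multiply building blocks come in.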

Today, you have a lot of real-time options to experiment with these fundamental computer music building blocks. Most of the modern music DSP environments use a visual environment to patch signal processing modules together, in a way that is similar to an analog modular synthesizer.

  • Pure Data is a free, open-source, real-time computer music environment that runs on pretty much everything – Windows, Mac, Linux, embedded hardware, you name it. It isn’t the prettiest language from a visual standpoint, but it is very powerful. For reverb development, you’ll want to use one of the Pd-Extended branches, and find something that has an allpass~ unit generator, as this is a CRITICAL building block for algorithmic reverbs (a minimal C++ sketch of an allpass delay follows this list).
  • Max/MSP is a commercial, closed source, prettier version of Pure Data. It has nice delay and allpass ugens built in. More importantly, it has several decades of useful example code.
  • Max4Live is a fairly recent adaptation of Max/MSP that runs as audio and MIDI effects inside of Ableton Live. It is a VERY powerful environment to get started in, especially if you use Live as part of your musical workflow. I started working with Max4Live in early 2020, to create some examples for lectures at the University of Victoria in Victoria, BC. It took me about a week to get up and running. By the time I gave the lectures, I was able to develop a few algorithms that became the core of ValhallaSupermassive. You can get a LOT done in Max4Live.
  • Bitwig Studio was recently updated with the Grid environment, which has 170+ DSP blocks that can be patched together in a visual environment. I haven’t tried this yet, but I’ve heard of other people having success patching together reverbs with the allpass delays.
  • Reaktor has had some nice reverb examples for the last few decades, so it is worth downloading the various reverbs in that environment and opening them up to see how they work.
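Since the allpass delay keeps coming up as the critical building block, here is a minimal C++ sketch of a Schroeder allpass delay: one delay line with a feedforward and a feedback path sharing the same coefficient. The class name and the choice of coefficient are my own illustration, not code from any of the environments above.

```cpp
// Minimal Schroeder allpass delay sketch: one delay line with matched
// feedforward and feedback paths. Names and coefficients are illustrative.
#include <vector>

class AllpassDelay
{
public:
    AllpassDelay (int delaySamples, float gain)
        : buffer (static_cast<size_t> (delaySamples), 0.0f), g (gain) {}

    float process (float input)
    {
        const float delayed = buffer[index];
        const float v       = input - g * delayed;   // feedback into the delay line
        buffer[index]       = v;
        index = (index + 1) % static_cast<int> (buffer.size());
        return delayed + g * v;                      // feedforward to the output
    }

private:
    std::vector<float> buffer;
    float g;
    int index = 0;
};
```

Chaining several of these at different delay lengths, and wrapping them inside feedback loops, is the backbone of many classic algorithmic reverb topologies.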

Wanna learn how to code? I’d recommend focusing on C++. Most of my DSP code is in C++, although it is closer to “C with classes” – I tend not to use advanced C++ functionality in my low-level DSP blocks. C++ is used for Juce, and for many other audio SDKs, so learning C++ will be super useful.
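As a hedged example of what “C with classes” style DSP code looks like, here is a one-pole lowpass filter written with plain data members and a couple of methods, no templates or inheritance. The names and coefficient math are my own illustration of the style, not code from any Valhalla plugin.

```cpp
// "C with classes" style one-pole lowpass filter: plain data members, two
// methods, no advanced C++ features. Names and constants are illustrative.
#include <cmath>

class OnePoleLowpass
{
public:
    void setCutoff (float cutoffHz, float sampleRate)
    {
        // Standard one-pole coefficient derived from the cutoff frequency.
        a = std::exp (-2.0f * 3.14159265f * cutoffHz / sampleRate);
        b = 1.0f - a;
    }

    float process (float input)
    {
        state = b * input + a * state;
        return state;
    }

private:
    float a = 0.0f, b = 1.0f, state = 0.0f;
};
```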

Getting up and running in a visual DSP language can be much faster than in a text-based DSP language. It is much quicker to prototype a simple reverb in Max/MSP or Max4Live than to create an entire DSP and GUI code base in C++. The drawback of a visual language is that a complicated reverb quickly turns into something that looks like a pile of necklaces that have become knotted together, and it becomes difficult to follow. Text-based languages usually have the benefit of for() loops or something similar to churn through repetitive actions, which is a lot of what reverbs end up doing. Still, if you are just getting started, I’d highly recommend trying one of the visual languages and seeing how far it can take you.
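To show what a for() loop buys you over re-patching the same structure by hand, here is a small sketch that builds and runs a bank of four comb filters, reusing the hypothetical ModulatedComb class from the earlier sketch. The delay times and gains are arbitrary placeholders, not tuned values from any Valhalla reverb.

```cpp
// Sketch of using for() loops to build and run a bank of comb filters.
// ModulatedComb is the hypothetical class from the earlier sketch; the
// delay times and gains here are arbitrary placeholders.
#include <vector>

// Setup: four combs at different (placeholder) delay times.
std::vector<ModulatedComb> makeCombBank (float sampleRate)
{
    const float delaysMs[4] = { 25.3f, 31.1f, 36.7f, 41.3f };
    std::vector<ModulatedComb> combs;
    for (float d : delaysMs)
        combs.emplace_back (sampleRate, d, 0.7f, 0.3f);
    return combs;
}

// Per-sample processing: one loop replaces four copies of the same patching.
float processCombBank (std::vector<ModulatedComb>& combs, float input)
{
    float sum = 0.0f;
    for (auto& comb : combs)
        sum += comb.process (input);
    return sum * 0.25f;   // scale the summed output back down
}
```

In a visual environment, adding a fifth comb means duplicating objects and re-drawing patch cords; in a text-based language, it means adding one number to an array.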

[Image: a small snippet of one of my Max4Live reverbs.]

In my next blog post, I’ll list what I consider to be the “canonical” papers in reverb literature. Thanks for reading!


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK