I ported native guitar plugins to JavaScript (in-depth)

Konstantine Kutalia
9 min read · Sep 1, 2021


Photo by Thomas Litangen on Unsplash

TL;DR

I ported native (Windows/Linux/Mac) guitar VST plugins to the web browser using React and WebAssembly.

Demo

Here’s the full story:

Introduction

The electric guitar is one of the most popular instruments. You have heard it on countless pop, jazz and rock records. Those iconic sounds are generated by electronic devices known as amplifiers. The first generations of guitar amplifiers used vacuum tubes to amplify signals. They were heavy, expensive and fragile. In the 1970s, when solid-state technology became widespread, transistor-based guitar amplifiers were popularized, yet many players still preferred the “vacuum tube sound” for being warmer and fuller.

At the end of the 1990s the Line 6 POD digital guitar effects processor revolutionized the guitar gear market. It gathered a huge fanbase due to its versatility, low cost and lack of maintenance. During the 2000s many iterations of digital simulations appeared on various platforms, both as personal computer apps and as standalone machines using dedicated DSP (Digital Signal Processing) processors. Just a few years ago the first browser-driven amplifier simulations appeared, built on the Web Audio API. They were attempts to make playing electric guitar through a computer a bit easier, since native applications require setup and are platform-dependent. Since then not much has been built upon this idea, even though the Web Audio API itself has vastly improved.

The mission

So the basic idea was to bring the native application experience to the browser with unquestionable audio quality and performance. That means you would not have to choose a specific installer for your platform (x64/x86, Windows/Mac, etc.) or install it alongside other mandatory applications like a DAW (Digital Audio Workstation): not all digital guitar products come as standalone versions, so you might need to install them as plugins and run them inside a DAW. All that manual work is replaced by typing the URL of your favorite guitar plugin host website. Downloading all assets and compiling takes just a few seconds, and you are ready to rock.

Overloud TH-U — popular native standalone application. One of my inspirations for GUI.

Tools

DSP algorithms for native apps are usually written in languages like C/C++ or in DSLs (domain-specific languages) like Feldspar and FAUST (Functional Audio Stream). A tool for compiling those languages to JavaScript has been in use for quite some time: Emscripten. You can even export entire games to run in the browser using this technology. At first Emscripten compiled to asm.js, a strictly typed JS subset that could run in modern browsers like Mozilla Firefox. As its popularity grew, a new technology arrived, WebAssembly (Wasm), which superseded asm.js. Wasm is not only a language; according to Wikipedia, it is:

an open standard that defines a portable binary-code format for executable programs, and a corresponding textual assembly language, as well as interfaces for facilitating interactions between such programs and their host environment.

Here’s a neat thing which might blow your mind: with Emscripten you can actually compile compilers to WebAssembly. This Node JS package is a great example: libfaust ships the FAUST language compiler as a WebAssembly module along with the JS wrapper functions to run it in the browser and manage memory; type annotations are also included for TypeScript compatibility. So you can compile FAUST code from .dsp files on demand. To be more technical, the compiled code is executed in an AudioWorklet worker thread, which runs separately from the main thread (aka the DOM renderer/paint thread) and therefore avoids UI blocking and other performance issues. As I mentioned, (shared) memory management is handled by the package, so all you have to do is call the given functions. And if this information is not enough, you can go ahead and read the Google developers’ blog on the topic.
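To give you a feel for the mechanics, here is a simplified sketch of how a compiled processor ends up in the audio graph. This is not libfaust’s exact API; the module file name and processor name are made up for illustration, while addModule, AudioWorkletNode and the media calls are standard Web Audio APIs:

const audioContext = new AudioContext();

async function setupGuitarChain() {
  // "faust-processor.js" stands in for the module that registers the
  // Wasm-backed processor via registerProcessor() on the audio thread
  await audioContext.audioWorklet.addModule("faust-processor.js");

  // the name passed here must match the one used in registerProcessor()
  const ampNode = new AudioWorkletNode(audioContext, "faust-processor");

  // route the guitar (e.g. an audio interface input) through the amp
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const input = audioContext.createMediaStreamSource(stream);
  input.connect(ampNode).connect(audioContext.destination);
}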

FAUST code descriptors generated by libfaust

Not only does libfaust handle the most crucial things, it also generates descriptor objects for all the virtual knobs and buttons described by special primitives in the .dsp code. For example: tonestack_low = vslider("bass", 0, -15, 15, 0.1); The given code describes a low-frequencies slider labeled “bass” with a minimum value of -15, a maximum of 15 and a default of 0, while 0.1 is the smallest step in a change of value. I used the React library by Facebook to create UI components from the aforementioned descriptors. Control functions are also automatically generated and wrapped in the audio worklet processor class, so you can directly connect components’ onChange events to these function calls, as sketched below.
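Here is a minimal sketch of such a binding, assuming a descriptor shaped like the vslider example above. The "address" field and the setParamValue control function are assumed names; the exact ones in libfaust’s wrapper may differ:

import React from "react";

// Binds one generated descriptor to a React range control.
// "address" and "setParamValue" are assumptions, not libfaust's exact names.
function Knob({ descriptor, node }) {
  const [value, setValue] = React.useState(descriptor.init);

  const onChange = (event) => {
    const next = Number(event.target.value);
    setValue(next);
    // forward the UI change straight to the DSP in the worklet thread
    node.setParamValue(descriptor.address, next);
  };

  return (
    <label>
      {descriptor.label}
      <input
        type="range"
        min={descriptor.min}
        max={descriptor.max}
        step={descriptor.step}
        value={value}
        onChange={onChange}
      />
    </label>
  );
}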

Challenges

The most obvious question in your head right now should be: how does it automatically convert whole native apps, is it some kind of magic? The answer is: it does not. Even though the browser engine and libfaust do some heavy lifting to virtualize native APIs, memory management and the file system to fit the WebAssembly environment (remember, it’s also a virtual machine), there are a few areas where manual coding is needed: the GUI (which we have already talked about) and additional C/C++ wrapper functions which also do DSP and load additional files but are not compiled to WebAssembly by libfaust. Even though I could have tried to compile them manually using Emscripten and add them as a separate audio worklet processor module, I decided it would be too much of a headache, so I used my little C++ experience from university days to “translate” them to JS. In case I haven’t told you, I ported the free Kapitonov Plugins Pack to JS, which includes simulations of a variety of iconic guitar pedal and amplifier sounds.

Here’s what the code in question does:

1) Loads tube amplifier simulation profiles, i.e. you can have a clean amp profile for playing blues and a high-gain profile for heavy metal in separate files.

2) Resamples audio to fit the program’s properties to the user’s audio configuration. Resampling means changing the audio data’s sample rate: a track with a sample rate of 96 kHz holds twice as many samples per second as the same track at 48 kHz, so it can represent frequencies up to 48 kHz instead of 24 kHz (half the sample rate, the Nyquist limit), which translates to a broader frequency spectrum, better lows and highs.

You might ask: what’s there to resample? We’ll get to that later.

kpp_tubeamp plugin profiles are stored in .tapf files as raw binary data. They consist of 3 types of information: metadata, parameters to be used in the FAUST .dsp file as signal building blocks, and impulse responses. These are all tweakable variables created with dedicated software to simulate different kinds of guitar amplifiers, while the main “schematic” is described in the kpp_tubeamp.dsp file and is loosely based on real guitar tube amplifiers. According to the source code, .tapf files are read using C’s famous fread function, byte by byte. The profile structure is given in the profile.h header file, implying float and int properties are 4-byte datatypes each. There are multiple ways to download and parse files in JS, the Fetch API being my favorite. It provides the arrayBuffer method on a response to work with raw binary data. The one important part of mimicking fread is reading a fetched file partially into buffers, by a given element size in bytes and number of elements. Let’s say we have a 16-byte file called JazzProfile.tapf and we want to read 2 float numbers (4 bytes each). It would look something like this:

struct profile {
    float f1;
    float f2;
    // could be float bass_frequencies, int version_number, etc.
};

profile prof;
FILE *profile_file = fopen("JazzProfile.tapf", "rb");
// read 2 elements of sizeof(float) = 4 bytes each into the memory at &prof
fread(&prof, sizeof(float), 2, profile_file);

It will load a total of 8 bytes into the block of memory at &prof. From prof you can access the 2 float numbers: f1 and f2. After the operation, the profile_file input stream’s intrinsic position will advance by 8 bytes, pointing to the second half of the file. Here’s the JS version:

fetch("JazzProfile.tapf")
  .then((response) => response.arrayBuffer())
  .then((buffer) => {
    let bufferPosition = 0;
    const floatSize = 4; // a 32-bit float occupies 4 bytes

    // slice takes begin and end offsets, so the end is begin + size
    const buff1 = buffer.slice(bufferPosition, bufferPosition + floatSize);
    const f1 = new Float32Array(buff1)[0];
    bufferPosition += floatSize;

    const buff2 = buffer.slice(bufferPosition, bufferPosition + floatSize);
    const f2 = new Float32Array(buff2)[0];
  });

Obviously we don’t manipulate pointers in JS, so I initialized a bufferPosition variable and incremented it after reading every profile parameter by that property’s size (in our case a 4-byte float). The problem of exposing binary buffers as JS variables is solved by typed arrays. Here we see Float32Array instantiated with buff1: a new array is created with 32-bit-wide elements and filled with data from buff1. If you are still following, you should have guessed that the array returned from the constructor will only have 1 element, as buff1 is only 4 bytes = 32 bits long.
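By the way, another way to mimic fread’s sequential reads is a tiny reader built on DataView, which also lets you state the byte order explicitly. This is my own illustration rather than the shipped code, and it assumes .tapf files are little-endian (verify against profile.h):

function createReader(buffer) {
  const view = new DataView(buffer);
  let position = 0;
  return {
    readFloat() {
      const value = view.getFloat32(position, true); // true = little-endian
      position += 4;
      return value;
    },
    readInt() {
      const value = view.getInt32(position, true);
      position += 4;
      return value;
    },
  };
}

// usage:
// const reader = createReader(buffer);
// const f1 = reader.readFloat();
// const f2 = reader.readFloat();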

Another challenge I met was populating the plugin itself with profile values. We’ve already digested the general process of reading profile metadata, main parameters and impulse responses from .tapf files as array buffers. The metadata helps verify profile validity and correct versioning. The profile parameters are handed to the Wasm processor module running the code compiled from FAUST. Here’s one caveat: in a native environment there is a way to inject foreign parameters into FAUST code. Unfortunately the feature is restricted when compiling for the web, and I couldn’t find a way to initialize the Wasm processor with initial arguments either. I had to find a hack: in the original .dsp file I simply swapped the code telling the FAUST compiler to rely on foreign variables with primitives describing sliders and numeric entries, as if they were set manually. It’s like connecting your heartbeat to a remote control:

// original code, fvariable stands for foreign variable
tonestack_low = fvariable(float LOW_CTRL, <math.h>);

// optimized for web, the vslider primitive abstracts a vertical slider
// default, min and max values are there just to avoid errors
// the usable value is set from JS after fetching a profile file
tonestack_low = vslider("bass", 0, -10, 10, 0.0001);

Ez, right?
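In practice the profile loader from earlier just “presses” those virtual sliders from JS. Here is a rough sketch; parseProfile, the parameter path and setParamValue are illustrative placeholders rather than the actual names:

fetch("JazzProfile.tapf")
  .then((response) => response.arrayBuffer())
  .then((buffer) => {
    // e.g. the fread-style reader shown earlier
    const profile = parseProfile(buffer);
    // push the profile value into the slider swapped in for fvariable
    ampNode.setParamValue("/kpp_tubeamp/bass", profile.tonestackLow);
  });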

The resampling comes into play with the third part of the profile not yet discussed: impulse responses (IR). An impulse response is a way to shape a signal by adding a layer of a recorded track in a special manner called convolution. It can be used for room effects, reverb or speaker simulation. Think of it as an equalizer, but in 3D. Guitar amplifiers are connected to guitar speakers, and in order to faithfully replicate the whole sound we need to model the guitar cabinet’s frequency response to a signal over time, as picked up by the specific microphones top music producers use when recording guitar tracks in their studios. All these factors (guitar cabinet material, speakers, room acoustics and microphone varieties) are captured in a single impulse response file, usually in .wav format, as a very short audio track. The track needs to be at the same sample rate as the input signal. From a programmer’s perspective in JS, it all comes down to array buffers fetched from the .tapf profile file or from user-uploaded .wav tracks.
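If convolution itself sounds mysterious, here is the naive version for intuition only. It computes y[n] as the sum of x[n-k] * h[k] over k; real-time audio uses FFT-based convolution instead (which is what ConvolverNode does under the hood), as this loop is far too slow for long impulse responses:

function convolve(signal, impulseResponse) {
  // a convolution result is signal length + IR length - 1 samples long
  const output = new Float32Array(signal.length + impulseResponse.length - 1);
  for (let n = 0; n < output.length; n++) {
    for (let k = 0; k < impulseResponse.length; k++) {
      if (n - k >= 0 && n - k < signal.length) {
        output[n] += signal[n - k] * impulseResponse[k];
      }
    }
  }
  return output;
}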

After searching and testing numerous packages, I discovered it could easily be done with wave-resampler. You simply supply the samples, input sample rate and output sample rate as arguments, and the function returns resampled data in the form of an array-like object. A ConvolverNode takes care of wrapping an impulse response as a separate node to be applied anywhere in the signal chain, at the beginning or at the very last bits of the output. I put the speaker simulation convolver node last, while the preamp convolver loaded from the profile file is applied just before the main amplifier running the compiled kpp_tubeamp.dsp code (hence the name preamp).
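Putting the two together looks roughly like this. It is a sketch assuming irSamples is a Float32Array decoded from the profile or an uploaded .wav and irSampleRate is its original rate:

import { resample } from "wave-resampler";

function createIRNode(audioContext, irSamples, irSampleRate) {
  // bring the impulse response to the context's sample rate
  const resampled = resample(irSamples, irSampleRate, audioContext.sampleRate);

  // wrap the samples into an AudioBuffer for the ConvolverNode
  const irBuffer = audioContext.createBuffer(
    1, // mono
    resampled.length,
    audioContext.sampleRate
  );
  irBuffer.copyToChannel(Float32Array.from(resampled), 0);

  const convolver = audioContext.createConvolver();
  convolver.buffer = irBuffer;
  return convolver;
}

// e.g. ampNode.connect(createIRNode(ctx, ir, 44100)).connect(ctx.destination);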

Conclusion

The result is (sonically) quite impressive. The only pitfall is noticeable input delay when running on a Windows machine, although it’s better than in previous years. Browsers and Microsoft are to blame for not using optimized audio drivers. macOS is much faster to respond in this regard. The same goes for the Android vs iOS battle (yes, the website on a smartphone sounds just like it does on a PC).

The GUI is still imperfect and glitches are to be expected here and there. You can choose an input device, load custom speaker IRs, chain various plugins in a desired order and so on.

I hope you enjoyed this guide.

You can visit the demo at: https://kutalia.github.io/react-webaudio-5150/

Source code is available at: https://github.com/Kutalia/react-webaudio-5150

For any questions or comments, feel free to contact me on Facebook.
