Chimera of Volca Drum and Model:Cycles in the browser


I am currently really excited about Tone.js, a web audio framework (disclaimer: I have some software development background). I read about it recently and wanted to start a fun spare-time project with it to learn more about creating a synthesizer/groovebox in the browser and about how synthesis and sound routing work in general.

I am posting today because I wanted to get your feedback and feature requests for such a “browser-based device”.

Goals: Inspired by the awesome M:C and VD, I want to build:

  • a focus on knob-per-function on mobile devices (tablets first, eventually smartphones)
  • velocity-sensitive pads (I have some ideas/POCs to eventually make this happen on touch devices)
  • 6+ tracks (I guess it depends on the performance of the browser/Tone.js)
  • each track has one “engine” (like a specific Tone.js synth: AM, FM, Metallic, …)
  • everything is p-lockable (also LFO, ARP), including the engines, so one track can have a different engine on each step
  • step jump
  • endless encoders (well, that should be kind of an easy thing in software :slight_smile: )
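The endless-encoder point really is mostly bookkeeping in software. A minimal sketch of the idea (all names are mine, not from the project): the control only ever receives relative deltas, so there is no physical knob position to fight with.

```javascript
// Hypothetical sketch of an "endless encoder" in software: the control
// only ever reports signed relative deltas (touch-move pixels, wheel
// ticks, ...), and the value is clamped to the parameter range.
function makeEncoder(initial = 0, min = 0, max = 127) {
  let value = initial;
  return {
    // turn() takes a signed delta and returns the new clamped value
    turn(delta) {
      value = Math.min(max, Math.max(min, value + delta));
      return value;
    },
    get value() { return value; },
  };
}
```

For example, `makeEncoder(64).turn(5)` yields 69, and further turns simply clamp at the range ends.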

It is at a very early stage (I am currently doing some data-architecture sketching), so it is not yet ready for you to try. But I posted here to push myself to put a first 0.0.1 version online somewhere soon, even if it will be in a very early, unusable state - which is very likely. I usually try to make everything perfect, which in reality is never done and is a very subjective dimension anyway; what usually happens then is that I abandon the project because it transforms from fun into not-fun “work”…

Anyway, I am trying a different approach this time. Even if it turns out to be a failure (technically, functionality-wise, UX-wise…), I guess at least I will have learned something new.

Have a great Sunday!

A screenshot of the current POC (I am not a designer, but I hope I can improve the design as well over time :smiley: )


Made a little screencast of it actually making some weird sounds


Nice one! Please let me know if/how you get Tone.js working on mobile, because I couldn’t get it to run in my p5.js works, only on desktop.


This is definitely possible; I’ve shipped stuff using Tone.js on mobile in the past. There are some caveats: your AudioContext needs to be created as the result of an interaction (a tap on a button or whatever, same as on desktop these days); on iOS you need the mute switch off to hear the audio (apparently this can work around it, though I’ve never tried); and mobile performance is a bit hit-or-miss (though it seems better these days). But it should broadly work, so let us know if you have a specific error and I’ll see what I can remember.
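For reference, the gesture requirement usually boils down to resuming a suspended context from inside an input handler (Tone.start() does essentially this for Tone.js). A minimal sketch, with a helper name of my own:

```javascript
// Minimal sketch: browsers keep a fresh AudioContext in the
// "suspended" state until a user gesture; resume() must therefore be
// called from a tap/click handler. Only resumes when actually needed.
async function ensureAudioRunning(ctx) {
  if (ctx.state === 'suspended') {
    await ctx.resume();
  }
  return ctx.state;
}

// In the browser you would wire it to a gesture, e.g.:
// startButton.addEventListener('click', () => ensureAudioRunning(audioCtx));
```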


Fun idea! I look forward to seeing more. Tone.js has its limitations but it’s a great way to quickly prototype stuff. Once you get into trying to use C++ or whatever, it becomes a lot less fun to code audio stuff!

Have you already seen Tahti Studio? It’s another cool web-based Elektron-inspired sampling groovebox. The author of that used SOUL to write custom DSP compiled into WebAssembly :exploding_head:


Thank you, I am curious which limitations will appear and whether I can figure out a way around them.

I can imagine; I coded in C/C++ years ago and prefer higher-level languages at the moment. Currently I just love the possibility of easy sound programming AND easy UI development with Tone.js and HTML/CSS. I guess web audio isn’t up to dedicated hardware or DAWs at the moment, but it’s easy/fast enough to be fun.

Yes, I read about it on the lines forum and already tried it. Amazing work! I also considered SOUL, but then thought that for this first experimental project I wanted to use languages/tools I am mostly familiar with.

@pselodux Regarding mobile, the examples on the website work for me on a Samsung Galaxy S21 and a Galaxy Tab S7, although I sometimes needed to press the play button multiple times to start some examples.

Today I mostly did a complete refactoring of the data structure. I moved everything into a simple, self-made but hopefully blazingly fast RxJS state-management system. Existing libs (Akita, NgRx) seemed over the top, with too much overhead in code and performance for the requirements. I only added Immer to handle the immutable state more easily.
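For the curious, the shape of such a minimal store is roughly this - a plain-JS sketch of the idea, where the real thing would use an RxJS BehaviorSubject and Immer's produce instead of the structuredClone deep copy used here:

```javascript
// Plain-JS sketch of a tiny observable store, standing in for
// BehaviorSubject + Immer. update() takes a mutator like an Immer
// recipe, applied to a deep copy here for simplicity.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    subscribe(fn) {
      listeners.add(fn);
      fn(state); // BehaviorSubject-style: emit the current value on subscribe
      return () => listeners.delete(fn);
    },
    update(recipe) {
      const draft = structuredClone(state); // Immer would track changes instead
      recipe(draft);
      state = draft;
      listeners.forEach(fn => fn(state));
    },
  };
}
```

Subscribers always see a fresh immutable snapshot, so UI code can compare references cheaply.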

I think the new data model lets me model both the Elektron and the Volca Drum workflow - theoretically. The future will show if I missed some fundamental aspect.


A little status update: I did the first graphics mockups to test button sizes and layout. I hope I can create a one-knob(slider)-per-function feeling without cluttering the UI too much.

Additionally, I refactored the state management, and will do it again - not because of performance (yet), but because I think I can reduce complexity.

No real limitations found in Tone.js yet for this fun project; most “bugs” (clicks, bad timing) are misconfigurations of Tone.js on my side.

Tested it with a 4 tracks × 2 engines implementation and p-locking: no issues on playback, sometimes on live recording, but that might be because of my currently very hacky way of doing live recording.

Next steps: implement the design mockup at least partly, and connect it to the unfinished Tone.js prototype. After that I will probably refactor the state management a third time :joy:


I implemented parts of the UI/layout in HTML. It was a bit tricky at first to find a way to have a properly scaled interface on all devices, but I think I found one.

The last couple of days I was testing the possibilities of simulating a velocity-sensitive pad in current mobile browsers, to get velocity and aftertouch information from the buttons. I ended up using TouchEvent.radiusX and radiusY for that purpose.
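In case anyone wants to try the same trick: the idea is to map the reported contact-ellipse area to a 0..1 velocity, since a harder press flattens the fingertip and grows the ellipse. A sketch - the scaling constant is a guess you would calibrate per device:

```javascript
// Sketch: derive a 0..1 "velocity" from TouchEvent.radiusX/radiusY.
// maxArea is a hypothetical per-device calibration constant.
function touchVelocity(radiusX, radiusY, maxArea = 2000) {
  const area = Math.PI * radiusX * radiusY; // contact ellipse area in px^2
  return Math.min(1, area / maxArea);
}

// In a touchstart/touchmove handler (browser only):
// const v = touchVelocity(e.touches[0].radiusX, e.touches[0].radiusY);
```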

In this demo video I connected the LED intensity to the velocity value; no sound engine is connected to the UI yet. I also tried to simulate a real button-background LED by having multiple layers in the button: one is the light-emitting layer, plus a surface layer with an alpha value to let some of the “light shine through”.

The buttons are mostly inspired by my Elektron Model:Cycles, but the design is a work in progress :wink: I am better at the coding part. The next step will be connecting the Tone.js synth engine back to the new UI.


This week I have no new video, as I did some performance profiling instead.

The idea of a 6-track synth where each track has 2 voices (similar to the Volca) made my pure Tone.js prototype perform not really well. It had stutters, clicks and other weird, unpredictable behaviour, especially on the Samsung Galaxy Tab S7 Plus and the Galaxy S21 Ultra, which usually have pretty good performance.

I stumbled across some web-audio debugging and performance-profiling tips and tools. One very nice article about it is: Profiling Web Audio apps in Chrome

Having done some basic performance profiling made me question whether my vision was technically feasible on the web with the currently available browser technology. Somehow, during my research, I stumbled upon FAUST, a functional open-source audio DSP programming language which compiles to C, C++, VST plugins, and WebAssembly, among other targets.

I figured out that using WebAssembly (which brings almost-native CPU performance to the browser for static-memory applications) makes code with a static memory footprint blazingly fast in the browser. And FAUST does exactly this: it compiles its functional DSP code to statically memory-allocating WebAssembly code.

The super nice thing is that you can load that FAUST-compiled WebAssembly code with JavaScript in the browser and easily connect it to Tone.js via AudioWorklets.

So I refactored the architecture a bit to separate the responsibilities; at the moment it looks like this:

  1. Tone.js is used for the sequencing and audio routing
  2. FAUST AudioWorklets are the synth engines that do the heavy audio-DSP lifting
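In code, the split is roughly: Tone.Transport drives the clock, and on each tick the sequencer messages the FAUST worklet. Only the tempo math below is runnable outside the browser; the wiring lines are illustrative and the engine name is a placeholder:

```javascript
// Tempo math used when scheduling: at a given BPM a quarter note lasts
// 60/bpm seconds, so one 16th step is a quarter of that.
function sixteenthSeconds(bpm) {
  return 60 / bpm / 4;
}

// Browser-side wiring (illustrative, 'engine' is a hypothetical
// FAUST-backed AudioWorkletNode):
//   Tone.Transport.scheduleRepeat(
//     time => engine.port.postMessage({ gate: 1, time }),
//     '16n'
//   );
```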

Why do I hope this will bring a performance improvement? Because when you use a Tone.js synth instrument, each instrument is built from dozens of other AudioNodes wired together. In contrast, an example FAUST FM synth (like this one: bela - Faust Documentation) compiles down to ONE single AudioWorklet which does the whole synth engine's audio processing in super-fast WebAssembly.

I did a simple comparison between the Tone.js synth engines and the FAUST FM synth AudioWorklets:

Tone.js, 4 tracks × 3 synths each (FM, Metal, AM):

FAUST WebAssembly AudioWorklet, 4 tracks × 3 synths each (FMSynth2_Analog):

The variance, at 0.050 ms, is much better than with pure Tone.js, and the render capacity is less than a fifth of Tone.js's.

To be honest, it is hard to directly compare the complexity of the Tone.js synths with the example FAUST FM synth engine, but I got the impression that with this combination it is possible to have quite solid realtime sound generation in browsers without manually writing/optimizing C code.

So at the moment the “fun” is still present in this project :slight_smile: And FAUST is really an awesome language with an awesome browser-based IDE, including a realtime oscilloscope and spectrogram.

Hope this is of use to someone.

The last few days I just made the proof of concept that this works; now that I know it does, I can continue building basic synth engines and connect them properly to the sequencer. Looking forward to that :happy:


Hi all,

it’s been a while since I posted an update. Well, COVID hit me, so I was busy for quite a while getting back on track. Anyway, I made some progress with my project - I call it “synth:ans”. I finally connected all those individual proof-of-concept pieces together.

I made a very basic FM synth engine in FAUST, which I connected to the web app via Tone.js.

I wanted to start sharing my progress with you, although it is really a work in progress and most things are broken. But you can get some sounds/noises out of it, if you are lucky :slight_smile:

You can play around with it at

I will improve it step by step and post new features here. I am really happy for feedback, bug reports and suggestions for improvement.

As I have no documentation yet and some (most) things are not working, I made a quick video showing how to get sounds out of it.

The intended use is on touch devices - tablets and large smartphones in landscape mode - but it should also work somehow in the browser with a mouse. It is meant as a fun “live” web synth.

It should work in the main browsers (Chrome, Safari, Firefox), including the mobile versions. But you might need to press play and stop in some browsers to allow the synth to make sound.

Next features? Well, I don’t know exactly what I will improve/fix next. Probably the sequencer :slight_smile: I have lots of ideas but need to prioritise and do things step by step.

What is synth:ans at the moment?

  • The synth engine is built around FM synthesis
  • 6 tracks
  • each track has two voices (A & B) which are triggered together (same gate, velocity, frequency)
  • each voice has 3 operators (the left one is the first; the middle one is the second, which modulates the first; the right one is the third, which modulates the second - so basically only one algorithm)
  • each voice has an ADSR envelope, detune, and a reverb send (which is not yet connected)
  • you can mix the two voices with the mix slider
  • the step sequencer - when you hit play - plays T1 with 1/16th notes, just for debugging. Currently it only sends gate 1, never gate 0, so it is basically a constant tone (the default humming comes from the small default detune between voice A and B)
  • the track buttons T1 - T6 are “pressure” sensitive and have aftertouch. The “pressure” is estimated from the area your fingers cover on the buttons.
  • all sliders are relative, not absolute, to prevent jumps when you accidentally touch them at the wrong position. But in my opinion they are currently configured way too slow/accurate.
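Relative sliders like that boil down to accumulating pointer deltas with a sensitivity factor - a sketch, where the sensitivity value is made up:

```javascript
// Sketch of a relative (not absolute) slider: touching it anywhere
// does not jump the value; only subsequent movement changes it,
// scaled by a sensitivity factor (0.005 here is an arbitrary choice).
function relativeSlider(initial = 0.5, sensitivity = 0.005) {
  let value = initial;
  return {
    // drag() receives the pointer movement in pixels since the last event
    drag(deltaPixels) {
      value = Math.min(1, Math.max(0, value + deltaPixels * sensitivity));
      return value;
    },
    get value() { return value; },
  };
}
```

Tuning the "too slow/accurate" feel is then just a matter of raising the sensitivity constant.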

Thanks for checking it out :v:


Interesting project, I’ll be intrigued to see how this develops!

I have been playing around with FAUST and WebAudio lately, as you say it’s pretty impressive what you can do with it. There are plenty of nice FAUST library functions and example code to get some really nice sounds too, e.g. the Greyhole reverb in the examples.

I spent some time modifying the way FAUST outputs WebAudio in order to implement sample-accurate timing (actually using a fork the FAUST folks are working on for WebAudioModules compatibility, among other changes). Maybe I’m just really sensitive to it, being a techno fan, lol, but the timing with the current implementation is only accurate to within one buffer at best (in practice probably worse, as it also calls through multiple bits of JS etc.), so you can get a minimum of ~3 ms of jitter - enough for me to hear on 1/16th hi-hats. I’m not sure if Tone’s internal sequencing is sample-accurate either. Anyway, if you want to chat more about this let me know; it’s an area I’m interested in :slight_smile:


Sounds interesting. I am not very deep into the timing issues of JS and WebAssembly yet. I know there is some latency, but I haven’t measured it yet. FAUST is really an amazing tool with so many libraries; I just found out there is a DX7 implementation written in FAUST.

I’ll try to build the sequencer of synth:ans in Tone.js, so timing is the responsibility of JavaScript while the sound generation is done in WebAssembly (by FAUST). I hope the timing is accurate enough for playback of step-programmed patterns. Latency might be an issue for “live” playing/recording; I hope it is low enough for live trig programming and modulation.

I also tried having the LFOs in Tone.js manipulate parameters of the FAUST WebAssembly synth engine. The connection was pretty easy and it works, but I haven’t tested it in detail yet. That will make LFO assignment very easy and flexible to implement compared to coding the LFOs in FAUST directly.
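Conceptually, the LFO assignment just maps a periodic oscillation onto a parameter range and writes it to the engine. The math core is runnable below; the wiring comment is illustrative, and the parameter name is a placeholder:

```javascript
// Pure sketch of what an LFO assignment computes: map a sine
// oscillation at time t (seconds) onto a parameter range [min, max].
function lfoValue(t, rateHz, min, max, phase = 0) {
  const osc = Math.sin(2 * Math.PI * rateHz * t + phase); // -1..1
  return min + (max - min) * (osc + 1) / 2;
}

// In the app, a Tone.js LFO would instead be connected directly to a
// declared parameter of the FAUST worklet node (illustrative,
// 'cutoff' is a hypothetical parameter name):
//   new Tone.LFO('4n', 200, 800).connect(engine.parameters.get('cutoff'));
```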

This is just the right project for me to find out where the limits are and how to get around them - hopefully :slight_smile:

Deployed Version 2022-04-25 of synth:ans yesterday. I added a very basic chromatic “keyboard” to the step-sequencer trigs, like on the Model:Cycles. It is not yet pressure/aftertouch-sensitive like the track buttons, but it supports sliding over the “keys” and also multitouch, although there is no polyphonic mode (yet).

I also want to add sliding over buttons to the track buttons, to finally mute/unmute multiple tracks in one “slide”, similar to the Volca Drum experience.

It currently works only on touch devices.


Yeah I wouldn’t worry too much for now and would focus on the fun bits!

I suspect you will have issues with jitter (e.g. if you play notes 1/16th apart, which should be e.g. 125 ms, you might find some are 128 ms apart and others 123 ms), but whether this is important or not depends on the kind of music you’re making, I think! It’s much more noticeable with straight 1/16th techno closed hi-hats than with ambient :slight_smile:

The reason you get this is the way Tone.js’s scheduler and the FAUST AudioNode communicate: when a note is due, Tone.js calls a function on the FAUST node to start it, but because this all happens in JS on the UI thread, the time that function call takes is unpredictable. On top of that, once the play-note function is called, FAUST will start the note at the beginning of the next buffer - in WebAudio a buffer is about 3 ms (128 samples at 44.1 kHz) - so your note could be up to 3 ms out just from that, depending on where in the buffer it should fall.
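To put a number on that buffer quantization: the Web Audio render quantum is 128 sample frames, so the worst-case start-time error from buffer alignment alone is:

```javascript
// Worst-case note-start quantization when triggering can only take
// effect at a buffer boundary: one render quantum (128 frames in
// Web Audio) divided by the sample rate, in milliseconds.
function maxBufferJitterMs(sampleRate, quantumFrames = 128) {
  return (quantumFrames / sampleRate) * 1000;
}
// At 44.1 kHz this is about 2.9 ms; at 48 kHz about 2.7 ms.
```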

WebAudio does provide sample-accurate scheduling via the time parameter you pass to e.g. osc.start: you schedule the start ahead of time and WebAudio triggers it at exactly that time, as the scheduling runs on the audio thread. It’s just that right now FAUST (and maybe Tone.js?) don’t support this.
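The standard workaround, once a node does accept a time parameter, is a look-ahead scheduler: a coarse JS timer runs periodically, and each pass schedules every note falling inside the next window at its exact sample time. The scheduling decision itself is pure; here is a sketch of that well-known pattern (not FAUST's or Tone.js's actual API):

```javascript
// Pure core of a look-ahead scheduler (the "tale of two clocks"
// pattern): given the audio clock's currentTime and a look-ahead
// window, return the exact times of all steps that must be scheduled
// in this pass, plus where the next pass should resume.
function dueSteps(nextStepTime, stepSeconds, currentTime, lookahead = 0.1) {
  const times = [];
  let t = nextStepTime;
  while (t < currentTime + lookahead) {
    times.push(t); // each t would be passed to e.g. osc.start(t)
    t += stepSeconds;
  }
  return { times, nextStepTime: t };
}
```

A setInterval or worker timer would call this every ~25 ms, and the audio thread fires each event sample-accurately regardless of UI-thread jitter.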

But as I say, this might be a non-issue for you and certainly shouldn’t be a blocker! If it ever does become an issue, give me a shout and I can share my fork of the FAUST WebAudio stuff, which exposes a method for sample-accurate scheduling.

Look forward to seeing the project progress, great work!


I try to, but some challenging obstacles are also fun, as long as they are not overwhelming :slight_smile:

AFAIK Tone.js uses the WebAudio clock for timing, so this should be pretty accurate and not affected by the UI thread. Regarding the buffer delay on the way from JS to the FAUST engine: this is very interesting information, thank you. I thought that wrapping a parameter input in si.smoo made the parameter sample-accurate instead of block-accurate, but maybe I misunderstood that part of the Kadenze video course (which is IMO a great free video course, btw).

Thank you, appreciate that. I am always interested in learning.

Thank you, me too :slight_smile: !

Released Version 2022-04-27 of synth:ans yesterday. Since 2022-04-25, a few things have changed:

  • Added basic mouse support for the sequencer buttons
  • Fixed unreliable sequence playback
  • Added a preprogrammed - not yet changeable - pattern for testing and debugging
  • Added basic visualisation of active trigs and the current step
  • The record button already switches the mode, but step recording is not yet implemented

Next thing planned: a step-record mode to add/delete/modify trigs in the sequencer, to make it at least a little usable :wink:

In terms of timing accuracy, I think it should be okay. I didn’t run external tests, but I inspected some timing values and they were pretty spot-on. I actually didn’t expect that.

It also works on my smartphone, although there is a small bug in the touch-intensity algorithm on small devices, which is why all sounds are very quiet on smartphones at the moment. Compared to implementing the sound generation in Tone.js, the FAUST → WebAssembly approach has way better performance. I am still using the Tone.js sequencer, which works pretty well.

The button visualisation (active step) also feels like it is in good sync and not delayed. I didn’t expect that either, as requestAnimationFrame - which is responsible for the drawing and button illumination - fires “only” 60 times per second.
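As a quick sanity check on why 60 fps feels fine: even at a fast tempo, one 16th-note step spans several frames, so the step indicator never visibly lags a whole step:

```javascript
// Frames of requestAnimationFrame (at ~60 fps) per 16th-note step.
function framesPerStep(bpm, fps = 60) {
  const stepSeconds = 60 / bpm / 4; // one 16th note in seconds
  return stepSeconds * fps;
}
// e.g. at 120 BPM a step lasts 125 ms, i.e. 7.5 frames at 60 fps.
```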


Released development Version 2022-05-01 of synth:ans now.

  • Added basic step sequencer recording (add/remove of trigs)

It is more fun to use on touch devices, as you can slide a finger to add/remove multiple trigs. You can also use multiple fingers to add/remove several trigs at once.

Next thing planned: make the steps p-lockable and provide a way to save a pattern.

Short side note: I will most likely be at Superbooth on Friday and Saturday. If anyone wants to chat about web-audio programming, FAUST DSP, synth UI/UX, or WebAssembly, let me know :v: