Getting started with web-audio

Remarks

The Web Audio API is a W3C standard that provides lower-level access to the audio system than the standard <audio> element, exposed through a high-level JavaScript API.

Use cases include games, art, audio synthesis, interactive applications, audio production and any application where fine-grained control of the audio data is required.

The API can accept input from a number of sources, including audio files loaded and decoded via the API itself and existing <audio> elements. It also provides facilities to generate sound directly through oscillator nodes.
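
For example, a minimal sketch of loading and decoding a file could look like the following (the URL 'example.mp3' is a placeholder):

// Create a context, falling back to the prefixed constructor (Safari)
var AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

var request = new XMLHttpRequest();
request.open('GET', 'example.mp3', true);
request.responseType = 'arraybuffer';

request.onload = function () {
  // Decode the compressed file into a raw audio buffer
  context.decodeAudioData(request.response, function (buffer) {
    // Play the decoded buffer through a one-shot buffer source node
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(context.currentTime);
  });
};

request.send();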

There are also a number of processing nodes, such as gains, delays and script processors (now deprecated in favour of the more efficient audio worklets). These can in turn be used to build more complex effects and audio graphs.
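
As a rough sketch of such a graph, the following chains an oscillator through a gain node and a delay node before reaching the speakers (the parameter values are arbitrary):

var AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

var osc = context.createOscillator();
var gain = context.createGain();
var delay = context.createDelay();

// Halve the volume and delay the signal by 300 ms
gain.gain.value = 0.5;
delay.delayTime.value = 0.3;

// Build the graph: oscillator -> gain -> delay -> speakers
osc.connect(gain);
gain.connect(delay);
delay.connect(context.destination);

osc.start(context.currentTime);
osc.stop(context.currentTime + 1);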

Examples in this topic:

Synthesising audio
Using effects on audio
Recording audio from a microphone source
Playing audio
Realtime altering of two audio sources

Setup

We start off by creating an audio context and then create an oscillator, which is the easiest way to check that your setup works.

// We can either use the standard AudioContext or the prefixed
// webkitAudioContext (Safari)
var AudioContext = window.AudioContext || window.webkitAudioContext;

// We create a context from which we will create all web audio nodes
var context = new AudioContext();

// Create an oscillator and make some noise
var osc = context.createOscillator();

// set a frequency, in this case 440Hz, which is the A above middle C
osc.frequency.value = 440;

// connect the oscillator to the context destination (which routes to your speakers)
osc.connect(context.destination);

// start the sound right away
osc.start(context.currentTime);

// stop the sound in a second
osc.stop(context.currentTime + 1);
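
If you hear nothing, note that most modern browsers only allow an AudioContext to start (or resume) after a user gesture. A minimal sketch, assuming a button with the id "play" exists on the page (an oscillator can only be started once, so a fresh one is created inside the handler):

// Resume the context from a user gesture before making sound;
// the 'play' button id is a placeholder
document.getElementById('play').addEventListener('click', function () {
  context.resume().then(function () {
    var osc2 = context.createOscillator();
    osc2.connect(context.destination);
    osc2.start(context.currentTime);
    osc2.stop(context.currentTime + 1);
  });
});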

This modified text is an extract of the original Stack Overflow Documentation created by the contributors and released under CC BY-SA 3.0. This website is not affiliated with Stack Overflow.