All you need to know about the Web Audio API
Did you know JavaScript has a constantly evolving high-level API for processing and synthesizing audio? How cool is that!
The goal of the audio API is to replicate features found in desktop audio production applications. Among its most prominent features are mixing, processing, and filtering.
The web audio API has a lot of potential and can do awesome stuff. But first — how well is the API supported across the board?
Support is good across all modern browsers, so it's worth digging into. :+1:
What is the web audio API capable of doing?
Good question! Here are a couple of examples demonstrating the capabilities of the Web Audio API. Make sure you have sound on.
The Web Audio API handles audio operations through an audio context. Everything starts from the audio context. Within the audio context you can hook up different audio nodes.
Audio nodes are linked by their inputs and outputs, forming a chain from a source through any processing nodes to a destination: the output, such as your speakers, which produces the sound waves we pick up with our ears.
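A minimal sketch of such a chain, assuming a context and a source node already exist. The function name `connectChain` is illustrative, not part of the API:

```javascript
// Hypothetical sketch: link a source node through one processing node
// to the context's destination (the speakers).
function connectChain(ctx, sourceNode) {
  // A BiquadFilterNode is one example of a processing node.
  const filter = ctx.createBiquadFilter();
  sourceNode.connect(filter);      // source output -> filter input
  filter.connect(ctx.destination); // filter output -> speakers
  return filter;
}
```

Any number of nodes can be chained this way; the graph only produces sound once it reaches the destination.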
If you’re the type of person who wants to know all the tiny details, here’s a sweet link to get you started.
If you’re more into visual learning, here’s a great introduction talk about the Web Audio — check it out!
One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations.
This article explains how, and provides a couple of basic use cases.
If you’re keen on learning the audio API in depth — here’s a great series.
Here’s a free book about the Web Audio API by Boris Smus (interaction engineer at Google).
A glance at the API
The Web Audio API is relatively intuitive to understand. Here’s an abstract example of how to use the API.
Breakdown of the steps:
- We create a new `AudioContext` object by calling it with the `new` keyword.
- We bind our oscillator and volume controller to the audio context.
- We connect our oscillator and volume controller to our sound system.
- We set our frequency type and value (tuning).
- We start our oscillator. The `start` method of the `OscillatorNode` interface specifies the exact time to start playing the tone.
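The steps above can be sketched as follows, assuming a browser environment where `AudioContext` is available. The function name `startTone` and the chosen frequency are illustrative assumptions:

```javascript
// Hypothetical sketch of the five steps above (browser environment assumed).
function startTone() {
  // 1. Create a new AudioContext with the `new` keyword.
  const context = new AudioContext();

  // 2. Bind an oscillator and a volume (gain) node to the context.
  const oscillator = context.createOscillator();
  const volume = context.createGain();

  // 3. Connect the oscillator and volume node to the sound system.
  oscillator.connect(volume);
  volume.connect(context.destination);

  // 4. Set the frequency type and value (tuning).
  oscillator.type = 'sine';
  oscillator.frequency.value = 440; // A4, in Hz

  // 5. Start the oscillator; start() takes the exact time to begin playing.
  oscillator.start(context.currentTime);

  return { context, oscillator, volume };
}
```

Note that many browsers require a user gesture (such as a click) before an `AudioContext` is allowed to produce sound.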
Big potential, room to grow
Of course, as with all great things, there’s always room to grow and improve. Here’s some healthy feedback from people much smarter than I am.