Creating .webm video from getUserMedia()

There’s a ton of motivation for being able to record live video. One scenario: you’re capturing video from the webcam. You add some post-production touchups in your favorite online video editing suite. You upload the final product to YouTube and share it out to friends. Stardom ensues.

MediaStreamRecorder is a WebRTC API for recording getUserMedia() streams (example code). It allows web apps to create a file from a live audio/video session.

MediaStreamRecorder is currently unimplemented in Chrome. However, all is not lost thanks to Whammy.js. Whammy is a library that encodes .webm video from a list of .webp images, each represented as a dataURL.

As a proof of concept, I’ve created a demo that captures live video from the webcam and creates a .webm file from it.


The demo also uses a[download] to let users download their file.
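As a rough sketch of that a[download] pattern (the helper names below are illustrative, not taken from the demo; `webmBlob` is assumed to be the Blob produced by Whammy):

```javascript
// Pure helper: build a timestamped filename for the recording.
function makeRecordingName(date) {
  return 'capture-' + date.toISOString().replace(/[:.]/g, '-') + '.webm';
}

// Offer the recorded Blob as a file download via a[download].
function offerDownload(blob) {
  var a = document.createElement('a');
  a.download = makeRecordingName(new Date()); // a[download] names the saved file.
  a.href = window.URL.createObjectURL(blob);
  a.textContent = 'Download .webm';
  document.body.appendChild(a);
}
```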

Creating webp images from <canvas>

The first step is to feed getUserMedia() data into a <video> element:

var video = document.querySelector('video');
video.autoplay = true; // Make sure we're not frozen!

// Note: not using vendor prefixes!
navigator.getUserMedia({video: true}, function(stream) {
  video.src = window.URL.createObjectURL(stream);
}, function(e) {
  console.error(e);
});

Next, draw an individual video frame into a <canvas>:

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

Chrome supports canvas.toDataURL("image/webp"). This allows us to read back the <canvas> as a .webp image and encode it as a dataURL, all in one fell swoop:

var url = canvas.toDataURL('image/webp', 1); // Second param is quality.
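Browsers that can’t encode webp silently fall back to PNG, so a common feature-detection idiom (a sketch, not part of the demo) is to check the MIME type echoed back by toDataURL():

```javascript
// Sketch: a canvas that can't encode webp returns a PNG data URL instead,
// so inspect the prefix of the result.
function canvasSupportsWebP(canvas) {
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}
```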

Since this only gives us a single frame, we need to repeat the draw/read pattern using a requestAnimationFrame() loop. That’ll give us webp frames at up to 60fps:

var rafId;
var frames = [];
var CANVAS_WIDTH = canvas.width;
var CANVAS_HEIGHT = canvas.height;

function drawVideoFrame(time) {
  rafId = requestAnimationFrame(drawVideoFrame);
  ctx.drawImage(video, 0, 0, CANVAS_WIDTH, CANVAS_HEIGHT);
  frames.push(canvas.toDataURL('image/webp', 1));
}

rafId = requestAnimationFrame(drawVideoFrame); // Note: not using vendor prefixes!


The last step is to bring in Whammy. The library includes a static method fromImageArray() that creates a Blob (file) from an array of dataURLs. Perfect! That’s just what we have.

Let’s package all of this goodness up in a stop() method:

function stop() {
  cancelAnimationFrame(rafId); // Note: not using vendor prefixes!

  // 2nd param: framerate for the video file.
  var webmBlob = Whammy.fromImageArray(frames, 1000 / 60);

  var video = document.createElement('video');
  video.src = window.URL.createObjectURL(webmBlob);
}


When stop() is called, the requestAnimationFrame() recursion is terminated and the .webm file is created.

Performance and Web Workers

Encoding webp images using canvas.toDataURL('image/webp') takes ~120ms on my MBP. When you do something crazy like this in a requestAnimationFrame() callback, the framerate of the live getUserMedia() video stream noticeably drops. It’s too much for the UI thread to handle.

Having the browser encode webp in C++ is far faster than encoding the .webp image in JS.

My tests using libwebpjs in a Web Worker were horrendously slow. The idea was to capture each frame as a Uint8ClampedArray (raw pixel data), save those arrays in a list, and postMessage() the data to the Worker. The worker was responsible for encoding each pixel array into webp. The whole process took up to 20+ seconds to encode a single second’s worth of video. Not worth it.
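For reference, a rough sketch of that Worker pipeline (the worker filename and message shape are made up for illustration):

```javascript
// Main thread (sketch): capture raw pixels instead of data URLs.
// ctx is the same 2D context used earlier in the article.
function captureFramePixels(ctx, width, height) {
  // getImageData().data is a Uint8ClampedArray of RGBA bytes.
  return ctx.getImageData(0, 0, width, height).data;
}

// After recording stops, hand everything to the Worker:
// var worker = new Worker('webp-encoder.js'); // would wrap libwebpjs
// worker.postMessage({frames: pixelFrames, width: w, height: h});
// worker.onmessage = function(e) {
//   // e.data: the encoded webp frames... ~20s later. Not worth it.
// };
```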

It’s too bad CanvasRenderingContext2D doesn’t exist in the Web Worker context. That would have solved a lot of the perf issues.