Vignesh Anand

Recording and generating animated screencasts within the browser

"Nostalgia - it's delicate, but potent."
- Don Draper

Reflecting on my quest for the past couple of months, I wonder whether I should attribute it to a sentimental truism on nostalgia from the protagonist of a TV series, or to the simple, profound need to build something. Regardless, here we are.

Demo from the original post
(Recorded and generated here through Betamax)

More than a decade ago, Jon Skinner, the creator of Sublime Text, published a blog post titled “Animated GIFs the Hard Way”[1]. The post showcased a fascinating demo of the text editor (displayed above), brought to life through JavaScript, canvas, and an image packing the differences between the frames of a recording. However, I came across it only a couple of years back, after which it drifted in and out of my consciousness whenever I extolled its ingenuity to my colleagues.

So, when I needed to display short screencasts for my recent blog post, I decided to build a companion tool to record and generate those demos.

Below is a demonstration of the tool generating a demo.

Generating a demo

The tool is called Betamax. Comical as it may sound, it's been named that for a reason: it's not a replacement for <video> but merely an alternative medium, one that more or less died off along with the post Jon wrote eons ago. A relic of the past, not seen anywhere but on the home pages of amazing products from the aforementioned author. It does sound cool though!

Below is the output generated by the tool (stored in a zip):

Sample demo (rendered on canvas)

The packed_image.png places the first frame of the recording at the top, with all the other frames stored compactly as differences from their predecessors, while timeline.json contains information on all those differences along with the delay needed to animate them on the canvas, as seen in the sample demo above.
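To make this concrete, here is a minimal sketch of how such a timeline could drive playback on a canvas. The field names (delay, src_x, src_y, x, y, w, h) and the sleep parameter are assumptions for illustration, not Betamax's actual schema:

```javascript
// Hypothetical playback loop: walk the timeline, wait out each delay,
// then blit the diff rectangle from the packed image onto the canvas.
// Field names are illustrative, not Betamax's real schema.
async function playTimeline(ctx, packedImage, timeline, sleep) {
  for (const patch of timeline) {
    // Honor the recorded delay before applying the next difference.
    await sleep(patch.delay);
    // Copy the diff out of the packed image to its on-screen position.
    ctx.drawImage(
      packedImage,
      patch.src_x, patch.src_y, patch.w, patch.h, // source rect in packed image
      patch.x, patch.y, patch.w, patch.h          // destination rect on canvas
    );
  }
}
```

In a real page, `ctx` would be the 2D context of a canvas and `sleep` a `setTimeout`-based promise; passing them in keeps the loop itself trivially testable.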

The user interface of the tool is inspired by “Peek”. As a Linux user, I find its no-nonsense interface idiomatic and quite easy to use.

As for its internal workings, the tool uses getDisplayMedia to record the screen, crops the required region, generates a zip of time-stamped frames, and passes it off to the Python encoder from yesteryear (loaded through Pyodide) to generate the packed image.
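The capture setup could look roughly like this. It's a hedged sketch of my own, not Betamax's actual code, with mediaDevices passed in so it stays testable outside a browser (in a page it would simply be navigator.mediaDevices):

```javascript
// Sketch of the capture setup; error handling, cropping, and the zip /
// Pyodide hand-off are omitted.
async function startRecording(mediaDevices) {
  // Prompts the user to pick a screen, window, or tab to share.
  const stream = await mediaDevices.getDisplayMedia({ video: true });
  const [track] = stream.getVideoTracks();
  // The track settings report the dimensions of the captured surface.
  const { width, height } = track.getSettings();
  return { stream, width, height };
}
```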

So, did I succeed in building this tool? Not entirely. I managed to generate the demos I needed, but I failed to release the tool for wider consumption. Hence, the BETA. Let's talk about the challenges.


When I built it initially, I was quite pleased with its performance on my Linux machine. I got optimistic and ended up refining its UI, handling edge cases, and what not. What I didn't anticipate was the bitter pill I had to swallow after running it on a Mac. Why? Well, it all has to do with pixel density, or DPR (Device Pixel Ratio).

My Linux machine has a screen that maps CSS pixels 1-to-1 to device pixels. In such a scenario, the recorder captures the required region with no noticeable effect on the frame rate of the page being recorded, and the generation of the packed image through Pyodide runs with relative ease. However, recording the same demo on a high-DPI/Retina display causes the size of the captured image to increase by a factor based on the DPR, and with this increase we also start seeing noticeable effects on the frame rate of the page being recorded, which is bad if you wish to capture smooth transitions. I will also add that, as a developer who has been around the front-end space dealing with @2x images, I can still surprise myself with my own naivety.
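The factor at play can be sketched with a small helper (hypothetical, not Betamax's actual code) that maps a crop region from CSS pixels to device pixels:

```javascript
// A crop region is selected in CSS pixels, but on a high-DPI display the
// captured frame is larger by the device pixel ratio (window.devicePixelRatio
// in a browser). This maps one coordinate space to the other.
function cssRectToDeviceRect(rect, dpr) {
  return {
    x: Math.round(rect.x * dpr),
    y: Math.round(rect.y * dpr),
    width: Math.round(rect.width * dpr),
    height: Math.round(rect.height * dpr),
  };
}

// On a Retina display (dpr = 2) a 400x300 region becomes 800x600 in the
// captured frame: four times as many pixels to copy, diff, and encode.
cssRectToDeviceRect({ x: 10, y: 20, width: 400, height: 300 }, 2);
```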

Supported implementations

Now, let’s talk about two ways we can record snapshots of the screen or application window, once we get the stream from getDisplayMedia:

ImageCapture: This is the default implementation. It relies on the grabFrame method of this interface, which runs on an interval matching the selected frame rate during recording. On high-DPI displays this is a costly operation, as it takes a snapshot of the whole region and doesn’t allow cropping. However, to alleviate memory issues, only the required cropped region of each frame is stored (using createImageBitmap) on each interval, before everything is passed off to the encoder post-recording.
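The polling loop just described might look something like this sketch, with the ImageCapture instance and a createImageBitmap-based cropper injected as parameters so the loop runs anywhere; the names are mine, not Betamax's:

```javascript
// Grab whole-surface snapshots at a fixed rate, keeping only the cropped
// region of each to ease memory pressure. `capture` stands in for an
// ImageCapture instance; `crop` for a createImageBitmap-based cropper.
async function grabLoop(capture, region, fps, shouldStop, crop) {
  const frames = [];
  const interval = 1000 / fps;
  while (!shouldStop()) {
    const t = Date.now();
    // grabFrame snapshots the entire shared surface (no cropping allowed)...
    const full = await capture.grabFrame();
    // ...so only the region we care about is retained, time-stamped.
    frames.push({ t, bitmap: await crop(full, region) });
    await new Promise(r => setTimeout(r, interval));
  }
  return frames;
}
```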

MediaRecorder: This is the second option. When selected, it uses the MediaStream Recording API, which most screen recording extensions out in the wild rely on. It natively buffers and encodes the captured data with a lossy video codec [2]. The resulting blob is then set as the source of a video element, which can be played or downloaded. It does affect the frame rate of the web page during recording, but relatively less so, allowing transitions to be captured. Now, with regards to Betamax, we want individual frames out of this video, not the video itself. So, I initiate a playback of the video and capture frames from it at a set interval. However, the resulting frames are not only larger in size (with better color reproduction than ImageCapture), but visually identical frames also end up differing greatly in pixel data due to the compression applied (through encoding) during recording.
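The buffering half of this path could be sketched as follows. The MediaRecorder and Blob constructors are parameters here purely so the sketch runs anywhere, and the webm mime type is an assumption on my part:

```javascript
// Buffer encoded chunks as they arrive, then build a blob suitable for
// setting as the src of a <video> element for frame extraction.
function recordToBlob(stream, RecorderCtor, BlobCtor) {
  return new Promise(resolve => {
    const chunks = [];
    const recorder = new RecorderCtor(stream, { mimeType: 'video/webm' });
    // Encoded (lossy) data arrives in chunks as the recording proceeds.
    recorder.ondataavailable = e => chunks.push(e.data);
    recorder.onstop = () => resolve(new BlobCtor(chunks, { type: 'video/webm' }));
    recorder.start();
  });
}
```

In a browser this would be called as `recordToBlob(stream, MediaRecorder, Blob)`, with `recorder.stop()` invoked when the user ends the recording.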

What is the mentioned pixel data?

It simply refers to the individual color values that make up the visual content of an image. The data consists of information about the color and intensity of each pixel, typically expressed in terms of red, green, and blue (RGB) values. [3]

This data matters because the current implementation of the Python encoder relies on the differences between the pixel data of individual frames to generate the packed image. However, due to the compression applied by video codecs during recording, even visually identical frames will differ in this data and end up being marked as dissimilar and packed separately, increasing the size of the packed image and pretty much defeating its purpose.
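A toy illustration of why this breaks the diffing: even a one-unit shift in a single channel, the kind of noise lossy codecs introduce, makes a strict byte comparison flag two visually identical frames as different:

```javascript
// Compare two RGBA pixel buffers channel by channel. With tolerance 0
// (a strict comparison, as the encoder effectively does), any codec
// noise at all marks the frames as dissimilar.
function framesDiffer(a, b, tolerance = 0) {
  for (let i = 0; i < a.length; i++) {
    if (Math.abs(a[i] - b[i]) > tolerance) return true;
  }
  return false;
}

const frame = new Uint8ClampedArray([200, 100, 50, 255]); // one RGBA pixel
const noisy = new Uint8ClampedArray([201, 99, 50, 255]);  // after lossy encoding
framesDiffer(frame, noisy);    // true: strict comparison flags it as changed
framesDiffer(frame, noisy, 2); // false: a small tolerance absorbs the noise
```

A per-channel tolerance is only the crudest fix; a perceptual measure such as SSIM, mentioned below, compares structure rather than raw bytes.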

So now, without significantly altering the Python encoder (possibly by introducing SSIM), the second option of MediaRecorder is no longer a working one, which leaves me with the first option of using ImageCapture. As mentioned before, it leads to a janky frame rate on high-DPI displays, and I didn’t want to let go of those smooth transitions in my demos.

So, what did I do?… Easy! I became Quicksilver and slowed down time.

Slowed down transitions [4]

So, on a high-DPI/Retina display, the demo “Structure of the panel” in this blog post was made possible by slowing down the UI before recording, i.e., by increasing the delay of the transitioning elements in the UI (in one case, from 300ms to 600ms), and then scaling down the delay property of all the differences in the timeline by a factor of 0.4, thereby speeding the animation back up.
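In numbers, the second half of that workaround amounts to something like this (the shape of a timeline entry is assumed for illustration):

```javascript
// Scale every recorded delay by a constant factor; factor < 1 shortens
// the delays, speeding playback back up after a slowed-down recording.
function speedUpTimeline(timeline, factor) {
  return timeline.map(patch => ({ ...patch, delay: patch.delay * factor }));
}

const recorded = [{ delay: 100 }, { delay: 250 }];
speedUpTimeline(recorded, 0.4); // delays become 40 and 100
```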

What are the other challenges? You can check out the list of issues in the repo.

So, will Betamax ever replace <video>HS?

Of course! It’s coming back with a vengeance. World domination is in sight, once this tool gets out of BETA and if Sony Corp doesn’t sue me for trademark infringement :D

To state the obvious, <video> is definitely better! It doesn’t require JavaScript to run, doesn’t care for @2x assets, has fancy compression codecs, and is pretty much “Record once, play anywhere!”.

The primary goal of this project is to be able to record and generate short animated screencasts within the browser reliably (and quickly), not because it wants to replace good ole <video>, but because it’s cool!

Want to check out the Chrome extension? Get it from the web store. Also, you can find the source code and open issues here:


[1]: I would highly recommend reading it.

[2]: MediaRecorder does not support lossless encoding.

[3]: Thanks ChatGPT for this definition! You scratch my back… I scratch yo.. maybe not?

[4]: Unmute for a surprise!