Weighing the pros and cons of rendering images with HTML Canvas vs. embedded <img> tags
In this post, I’m going to focus on HTML canvas. More specifically, how canvas-rendered images compare to embedded <img> elements, and which may be the better tool to use depending on the situation.
Modern web browsers allow front-end web developers to use a handful of effective techniques to render an image on a standard HTML webpage. Here’s a list of the common techniques:
- The image embed element (<img>)
- Scalable Vector Graphics (SVG)
- A CSS background-image property on an HTML <div> or <span> element
- The HTML <canvas> element
The HTML Canvas API has become widely supported across all the modern web browsers in recent years. It is versatile, powerful, and an extremely effective tool in a web developer’s arsenal when it comes to adding interactivity on a web page.
Canvases will likely seem intimidating if you’ve never had to implement one before, but once you’re familiar with them, it’s easy to see how useful they can be in a variety of situations. Canvas has become the tool of choice for many front-end web developers when it comes to web art, video game graphics, interactive animations and live data visualizations. It’s a particular no-brainer for real-time graphic visualizations – ticker feeds and live stock charts are two examples that come to mind.
According to caniuse.com, basic support for canvas is available for 97.99% of all users, globally.
MDN breaks down canvas rendering into a two-step process:
- “Get a reference to an HTMLImageElement or to another canvas element as a source. It is also possible to use images by providing a URL.”
- “Draw the image on the canvas using the drawImage() function.”
Here’s an example of basic canvas implementation from w3schools.com:
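A minimal version of that idea looks like the following sketch (the helper name, canvas id, and image path below are placeholders of my own, not w3schools code):

```javascript
// In the browser you'd first grab the canvas and its 2D drawing context:
//   <canvas id="myCanvas" width="300" height="150"></canvas>
//   const canvas = document.getElementById('myCanvas');
//   const ctx = canvas.getContext('2d');

// Paint an already-loaded image source onto the context at (x, y).
function drawImageOnCanvas(ctx, img, x = 0, y = 0) {
  ctx.drawImage(img, x, y);
}

// Typical wiring: wait for the image to load, then draw it.
//   const img = new Image();
//   img.onload = () => drawImageOnCanvas(ctx, img, 0, 0);
//   img.src = 'example.png';
```

Note that the draw has to happen after the image has loaded; calling drawImage() on an image that hasn’t finished loading silently draws nothing.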
In a recent Angular project, I became well-acquainted with HTML canvas and spent a lot of time weighing the pros and cons of canvases against other image-rendering approaches. The app is extremely data-driven (used primarily by scientists) and its UI centers on two main features: 1) a moving line graph (visually similar to a seismograph) that re-renders with updated data points on the order of milliseconds, and 2) a geographic map consisting of over a dozen layered canvases, re-rendering at 10Hz. These simultaneous renderings are rather demanding on resources.
One of my main tasks in this application was to build an image-based layer in a geo-projected map among already existing layers, all responsible for visualizing different data. There was a particularly difficult problem in implementing this layer that involved three main steps:
- Creating event-triggered GET requests to a server to receive back BASE64-encoded, high-res images.
- Correctly positioning the images on the map, among the other layers, based on their respective geographical X/Y coordinates and dimensions.
- Making the images scale, reposition, and change based on the most recently fetched image data and relative to the other canvas layers.
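To illustrate step two, here’s roughly how geographic X/Y coordinates can be mapped into canvas pixel space. This is a hypothetical helper, not the app’s actual code; the bounds object, the names, and the flipped y-axis are all assumptions:

```javascript
// Map a geographic coordinate into canvas pixel space, given the
// geographic bounds currently covered by the canvas.
function geoToPixel(geoX, geoY, bounds, canvasWidth, canvasHeight) {
  const px = ((geoX - bounds.minX) / (bounds.maxX - bounds.minX)) * canvasWidth;
  // Canvas y grows downward while geographic y typically grows upward, so flip:
  const py = ((bounds.maxY - geoY) / (bounds.maxY - bounds.minY)) * canvasHeight;
  return { x: px, y: py };
}
```

With a helper like this, re-positioning the image on each update reduces to recomputing its pixel rectangle and redrawing.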
Without getting into too many specifics, I found that the most effective way to render these large images quickly and move them smoothly was to implement the following:
- A <canvas> element inside the main geo-projection component’s template which houses all of the sibling canvases. This element is dynamically sized relative to the current size of its parent element.
- A service that makes GET requests for image data from a REST API and listens for server messages on a websocket that indicate a new image needs to be fetched. It also listens to position changes from the other canvas layers.
- A dedicated component to handle all of the image’s data, unit conversions and rendering logic. The component contains two key parts:
- a property which stores the most recently updated data.
- a loadImgSrc() function which creates a new HTMLImageElement each time the image source changes and draws the newly created image onto the context of the target canvas, with its attributes – the src, the width and height, and the x and y coordinates – based on the current image data.
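The gist of that rendering step can be sketched like this (the function and parameter names are illustrative; the real component is Angular-specific and holds considerably more state):

```javascript
// Redraw the canvas from the most recently updated image data.
function renderImageData(ctx, img, data) {
  // Wipe the previous frame before drawing the new one.
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.drawImage(img, data.x, data.y, data.width, data.height);
}

// loadImgSrc() would wire this up on each source change:
//   const img = new Image();
//   img.onload = () => renderImageData(ctx, img, latestData);
//   img.src = latestData.src; // e.g. a BASE64 data URI from the server
```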
The canvas context’s drawImage() function takes either three, five or nine arguments. In our implementation in this app, we pass in five – the HTML image element, the x and y positions, the width and the height. Optionally, you may also pass in four additional values (placed before the destination values) if you want to draw a clipped area of the source rather than the entire image.
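Here are the three call shapes side by side (a sketch; the coordinates are arbitrary):

```javascript
function demoDrawImageForms(ctx, img) {
  ctx.drawImage(img, 10, 20);                         // 3 args: natural size at (10, 20)
  ctx.drawImage(img, 10, 20, 300, 150);               // 5 args: scaled to 300x150 (the form our app uses)
  ctx.drawImage(img, 0, 0, 50, 50, 10, 20, 300, 150); // 9 args: clip a 50x50 source region first
}
```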
Performance-wise, there wasn’t too significant a difference between repositioning an embedded <img> and redrawing an image on a canvas based on the incoming projection coordinates. However, we discovered through a lot of trial and error that canvas-drawn images were the cleaner and more elegant solution for this feature, for a couple of primary reasons:
- In order to position an embedded <img> relative to the rest of the projection, we would have had to create an additional image-wrapper div to handle rotation and some other logic that the canvas could do on its own – additional overhead in an already extremely busy app.
- All the other projection layers also consisted of canvases, which allowed us to share some of their common methods by extending from the base canvas layer component.
Unlike <img> tags, HTML Canvas allows you to:
- Render videos as well as images.
- Draw basic geometric shapes with just a few lines of code.
- Respond to events with animations or graphics. For example, listening to input changes on a form element and then drawing the input text somewhere on the page or inside an image.
- Draw very complicated shapes and graphics. I encourage you to browse CodePen, where very impressive canvas drawings are uploaded daily.
- Draw from a variety of image sources: HTMLImageElement, SVGImageElement, HTMLVideoElement, another HTMLCanvasElement, and interestingly enough, an image (or part of one) pulled from another domain.
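On the “basic geometric shapes” point, it really is just a few calls on the 2D context. A sketch, with arbitrary colors and coordinates:

```javascript
// Draw a filled rectangle with a white circle on top of it.
function drawBadge(ctx) {
  ctx.fillStyle = 'steelblue';
  ctx.fillRect(10, 10, 120, 60);       // filled rectangle
  ctx.beginPath();
  ctx.arc(70, 40, 20, 0, 2 * Math.PI); // full circle centered at (70, 40)
  ctx.fillStyle = 'white';
  ctx.fill();
}
```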
Here are a few reasons why you might consider using an <img> tag over a <canvas>:
- While <canvas> only has “width” and “height” available as attributes, the <img> element has 14 attributes, which are useful in a wide range of situations. A few examples:
- alt – can be used to describe the content of an image with text or provide the accessible link text if the user isn’t able to see the embedded image.
- sizes – used to specify different image sizes for different page layouts
There are many other attributes at your disposal here – explore!
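To make the alt/sizes examples concrete, here’s a hypothetical responsive <img> assembled from a few of those attributes (the file names, dimensions and breakpoints are placeholders):

```javascript
// Build an <img> tag string from an attribute map.
function imgTag(attrs) {
  const pairs = Object.entries(attrs).map(([name, value]) => `${name}="${value}"`);
  return `<img ${pairs.join(' ')}>`;
}

const tag = imgTag({
  src: 'chart-large.png',
  alt: 'Line chart of daily temperature readings',
  srcset: 'chart-small.png 480w, chart-large.png 800w',
  sizes: '(max-width: 600px) 480px, 800px',
});
```

The browser picks the best candidate from srcset based on the sizes hint, and the alt text keeps the image accessible – none of which canvas gives you for free.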
Both <img> and <canvas> have the ability to use a BASE64-encoded image as an image source. By embedding image data directly into your document with a data URI, you can minimize the number of requests made to your server if you’re planning on using modifications of the initial image throughout the session.
During an HTML page load, each time an <img src=""> is hit, a request is made to the server for the image the src attribute references. In pages that contain a large number of embedded images, that means a lot of server calls (whether to your own server or another site’s). Though BASE64 strings will ultimately increase the size of your HTML file and slow its initial load, that cost is made up because these server calls are skipped completely. Note – this tends to apply only when you’re dealing with smaller images.
As the images become larger, the time it takes the browser to decode the BASE64 string into an image surpasses the time it would take to simply retrieve the image from a server. Of course, your point of diminishing returns with regard to page-load speed will vary based on a number of factors – server speed, server location relative to the user, other contents of the page, etc.
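Concretely, a data URI is just the encoded bytes inlined into a string (the payload below is a placeholder, not a real image):

```javascript
// A BASE64 payload embedded directly in the document as a data URI.
const base64Payload = 'QUJDRA=='; // placeholder bytes, not an actual PNG
const dataUri = `data:image/png;base64,${base64Payload}`;

// The same string works as either image source:
//   document.querySelector('img').src = dataUri;  // embedded <img>
//   const img = new Image();
//   img.src = dataUri;                            // then ctx.drawImage(img, 0, 0)
```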
I’d highly suggest using canvas if you plan to use variations of an image throughout your page session. The app I mentioned earlier in this post was a perfect use case for canvas + BASE64 images, since it is very graphics-heavy and operates with very limited internet access, making frequent GET requests for similar images unnecessarily expensive.
Here’s an example of the capabilities of canvas in our restricted app: the user can specify a geographic area and download a map of that area for the client to render. When there are overlapping areas between maps, we subtract the area we already have from the new image, so the server only sends the parts that are needed. The client then stitches those BASE64-encoded map segments together to create the new, complete image. You don’t need text-encoded images to use this technique, but I think it’s a good demonstration of how this combo lets you squeeze every drop out of your resources when faced with connectivity restrictions.
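The stitching step itself is simple once each segment knows its offset. A sketch, with a made-up segment shape rather than the app’s actual data model:

```javascript
// Composite fetched map segments onto a single canvas.
// Each segment is assumed to carry its decoded image plus a pixel offset.
function stitchSegments(ctx, segments) {
  for (const { img, x, y } of segments) {
    ctx.drawImage(img, x, y); // each segment lands at its own map offset
  }
}
```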
If you’re a front-end web developer who is unfamiliar with HTML canvas and looking to enhance the interactivity and flexibility of your project’s visual elements, I very strongly recommend you look into it a bit more and give it a try!