DZI
Deep Zoom Image (DZI) is a tile-based image pyramid format originally created for Microsoft Silverlight. It is well-suited for displaying very large images (gigapixels or larger) in a web browser because only the tiles visible at the current zoom level and viewport need to be fetched at any one time.
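To make "only the visible tiles" concrete, here is a back-of-envelope sketch. The 40000×25000 image and 512-pixel tiles are assumed numbers for illustration, not values from the library:

```typescript
// Hypothetical numbers: a 40000×25000 (1-gigapixel) image with 512px tiles.
function tilesAtFullRes(width: number, height: number, tileSize: number): number {
    return Math.ceil(width / tileSize) * Math.ceil(height / tileSize);
}

// the full-resolution layer alone is 79 × 49 = 3871 tiles...
const fullLayerTiles = tilesAtFullRes(40000, 25000, 512);

// ...but an 800×600 viewport can only intersect a handful of tiles at the
// layer matched to its resolution, regardless of the image's total size
const viewportTiles = (Math.floor(800 / 512) + 1) * (Math.floor(600 / 512) + 1); // 4 here
```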
The @alleninstitute/vis-dzi library provides a WebGL-accelerated DZI renderer built on top of
regl and the @alleninstitute/vis-core async rendering infrastructure.
Live Demo
The example below renders two DZI images side-by-side with a shared SVG annotation overlay. Pan by clicking and dragging; zoom with the scroll wheel. Both canvases share a single offscreen RenderServer.
Key Concepts
The DZI tile pyramid
A DZI image is stored as a folder of JPEG or PNG tiles. Layer 0 is a single 1×1 pixel tile; each higher layer doubles the resolution, and the highest-numbered layer holds the full-resolution image. At any given zoom level, getVisibleTiles selects the layer whose resolution is closest to the screen resolution without exceeding it, so only the minimum number of tiles ever needs to be fetched.
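The pyramid math can be sketched directly. These helpers are illustrations of the rule described above, not the library's getVisibleTiles:

```typescript
// Illustrative helpers only - not the library's getVisibleTiles.
// A DZI pyramid for a W×H image has ceil(log2(max(W, H))) + 1 layers.
function maxLayer(width: number, height: number): number {
    return Math.ceil(Math.log2(Math.max(width, height)));
}

// Layer n is the full image downscaled by 2^(maxLayer - n), rounded up.
function layerResolution(width: number, height: number, layer: number): [number, number] {
    const shrink = 2 ** (maxLayer(width, height) - layer);
    return [Math.ceil(width / shrink), Math.ceil(height / shrink)];
}

// The highest layer whose resolution does not exceed the screen resolution.
function pickLayer(width: number, height: number, screenW: number, screenH: number): number {
    let best = 0;
    for (let n = 0; n <= maxLayer(width, height); n++) {
        const [w, h] = layerResolution(width, height, n);
        if (w <= screenW && h <= screenH) best = n;
    }
    return best;
}

// for the 15936×11526 example image, layer 0 is 1×1 and layer 14 is full
// resolution; an 800×600 screen selects layer 9 (498×361)
```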
The DziImage descriptor object maps directly to the metadata in a .dzi XML file:
```ts
import type { DziImage } from '@alleninstitute/vis-dzi';

const image: DziImage = {
    // URL of the *_files/ directory that holds the layer subfolders
    imagesUrl: 'https://example.com/my-image_files/',
    format: 'jpeg', // 'jpeg' | 'png' | 'jpg' | 'JPG' | 'PNG'
    overlap: 1, // pixels of overlap added on each side of a tile
    tileSize: 512, // nominal tile edge length in pixels (power of 2)
    size: {
        width: 15936, // full-resolution image width in pixels
        height: 11526,
    },
};
```

If you have a .dzi metadata URL you can parse it directly:
```ts
import { fetchDziMetadata } from '@alleninstitute/vis-dzi';

const image = await fetchDziMetadata('https://example.com/my-image.dzi');
```

The relative camera model
The camera view is a box2D expressed in relative image space: [0, 0] is the top-left corner of the image and [1, 1] is the bottom-right corner, regardless of the image's pixel dimensions or aspect ratio. This makes it easy to link multiple images with the same logical viewport.
```ts
import { Box2D } from '@alleninstitute/vis-geometry';

// view the whole image
const view = Box2D.create([0, 0], [1, 1]);

// or zoom into the top-left quadrant:
// const view = Box2D.create([0, 0], [0.5, 0.5]);
```

screenSize is a [width, height] tuple in pixels and controls the output resolution of the rendered canvas.
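To see how one relative view maps onto images of different sizes, here is a plain-tuple sketch (toPixels is a hypothetical helper written for this page, not a library export):

```typescript
// toPixels is a hypothetical helper, not part of @alleninstitute/vis-dzi.
type Pair = [number, number];

function toPixels(view: { minCorner: Pair; maxCorner: Pair }, imageSize: Pair) {
    const scale = (p: Pair): Pair => [p[0] * imageSize[0], p[1] * imageSize[1]];
    return { minCorner: scale(view.minCorner), maxCorner: scale(view.maxCorner) };
}

// the same relative quadrant lands on a different pixel rectangle per image:
const quadrant = { minCorner: [0, 0] as Pair, maxCorner: [0.5, 0.5] as Pair };
const a = toPixels(quadrant, [15936, 11526]); // maxCorner: [7968, 5763]
const b = toPixels(quadrant, [4000, 3000]); // maxCorner: [2000, 1500]
```

This is why the demo can drive two images of different resolutions from a single view box.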
```ts
import type { DziRenderSettings } from '@alleninstitute/vis-dzi';

const camera: DziRenderSettings['camera'] = {
    view,
    screenSize: [800, 600],
};
```

The RenderServer and shared WebGL context
WebGL contexts are expensive browser resources. RenderServer (from @alleninstitute/vis-core) manages a single offscreen WebGL canvas that can serve multiple visible <canvas> elements. Each client canvas registers itself with the server; when a frame is ready the server copies the offscreen pixels to every registered client using CanvasRenderingContext2D.putImageData.
```ts
import { RenderServer } from '@alleninstitute/vis-core';

// first arg is the offscreen resolution [width, height]; second is required WebGL extensions
const server = new RenderServer([2048, 2048], ['oes_texture_float']);
```

Wrap your viewer tree in a context so all child components share the same server:
```tsx
import { createContext, useState, type PropsWithChildren } from 'react';
import { RenderServer } from '@alleninstitute/vis-core';

export const renderServerContext = createContext<RenderServer | null>(null);

export function RenderServerProvider({ children }: PropsWithChildren) {
    const [server] = useState(() => new RenderServer([2048, 2048], ['oes_texture_float']));
    return <renderServerContext.Provider value={server}>{children}</renderServerContext.Provider>;
}
```

Building the renderer and starting a frame
buildAsyncDziRenderer wraps the synchronous buildDziRenderer in the vis-core async scheduling layer. It returns a function that, when called, queues tile fetches and GPU uploads in the background and invokes a callback as tiles arrive.
```tsx
import { useContext, useEffect, useRef } from 'react';
import {
    buildAsyncDziRenderer,
    type DziImage,
    type DziRenderSettings,
    type DziTile,
    type GpuProps as CachedPixels,
} from '@alleninstitute/vis-dzi';
import type { buildAsyncRenderer, RenderFrameFn } from '@alleninstitute/vis-core';

// Inside your component:
const server = useContext(renderServerContext);
const canvas = useRef<HTMLCanvasElement>(null);

const renderer = useRef<
    ReturnType<typeof buildAsyncRenderer<DziImage, DziTile, DziRenderSettings, string, string, CachedPixels>>
>(undefined);

// Create the renderer once when the server is available
useEffect(() => {
    const el = canvas.current;
    if (server?.regl) {
        renderer.current = buildAsyncDziRenderer(server.regl);
    }
    return () => {
        // Deregister the canvas from the server when the component unmounts
        if (el) server?.destroyClient(el);
    };
}, [server]);
```

The rendering loop
Call server.beginRendering whenever the camera or image data changes. It accepts a RenderFrameFn that describes what to render, an event callback that reacts to frame lifecycle events (begin, progress, finished), and the target client canvas.
```tsx
useEffect(() => {
    if (!server || !renderer.current || !canvas.current) return;

    const renderMyData: RenderFrameFn<DziImage, DziTile> = (target, cache, callback) =>
        renderer.current!(dzi, { camera }, callback, target, cache);

    server.beginRendering(
        renderMyData,
        (e) => {
            if (e.status === 'begin') {
                // Clear the offscreen buffer before the first tile is drawn
                server.regl?.clear({ framebuffer: e.target, color: [0, 0, 0, 0], depth: 1 });
            } else if (e.status === 'progress' || e.status === 'finished') {
                // Copy the current offscreen state to the client canvas
                e.server.copyToClient((ctx, image) => {
                    ctx.putImageData(image, 0, 0);
                });
            }
        },
        canvas.current,
    );
}, [server, dzi, camera]);
```

The copyToClient callback receives a 2D canvas context and an ImageData snapshot of the offscreen buffer. This is also where you can composite additional 2D content (see SVG overlay below).
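The lifecycle branching can be seen in isolation by driving the same handler shape with a fake event sequence. FrameEvent here is a simplified stand-in for the real event type, and the log entries describe rather than perform the work:

```typescript
// FrameEvent is a simplified stand-in for the real lifecycle event type.
type FrameEvent = { status: 'begin' | 'progress' | 'finished' };

const log: string[] = [];
const onFrame = (e: FrameEvent) => {
    if (e.status === 'begin') {
        log.push('clear offscreen buffer');
    } else if (e.status === 'progress' || e.status === 'finished') {
        log.push('copy offscreen pixels to client canvas');
    }
};

// a typical frame: one begin, progress events as tiles arrive, then finished
(['begin', 'progress', 'progress', 'finished'] as const).forEach((status) => onFrame({ status }));
// log now holds one clear followed by three copies
```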
SVG overlay composition
Because copyToClient hands you a raw CanvasRenderingContext2D, you can draw anything on top of the WebGL output using the standard Canvas 2D API. The demo uses this to overlay an SVG annotation layer that is aligned to the same relative coordinate space as the camera view:
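The source-rectangle math at the heart of the compose function can be isolated as a pure function; this is a sketch with plain tuples standing in for vis-geometry's Vec2:

```typescript
// A plain-tuple sketch; the real compose uses vis-geometry's Vec2 helpers.
type Pt = [number, number];

function svgSourceRect(
    view: { minCorner: Pt; maxCorner: Pt },
    svgSize: Pt,
): [number, number, number, number] {
    const sx = view.minCorner[0] * svgSize[0];
    const sy = view.minCorner[1] * svgSize[1];
    const sw = view.maxCorner[0] * svgSize[0] - sx;
    const sh = view.maxCorner[1] * svgSize[1] - sy;
    return [sx, sy, sw, sh];
}

// viewing the top-left quadrant of a 1000×800 SVG:
const rect = svgSourceRect({ minCorner: [0, 0], maxCorner: [0.5, 0.5] }, [1000, 800]);
// rect is [0, 0, 500, 400] - the region drawImage stretches over the whole canvas
```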
```tsx
const compose = (ctx: CanvasRenderingContext2D, image: ImageData) => {
    // 1. commit the WebGL tiles
    ctx.putImageData(image, 0, 0);

    if (svgOverlay) {
        // 2. convert the camera view (relative coords) into pixel offsets within the SVG
        const { width, height } = svgOverlay;
        const svgSize: vec2 = [width, height];
        const start = Vec2.mul(camera.view.minCorner, svgSize);
        const wh = Vec2.sub(Vec2.mul(camera.view.maxCorner, svgSize), start);
        const [sx, sy] = start;
        const [sw, sh] = wh;
        // 3. draw the visible region of the SVG stretched to fill the canvas
        ctx.drawImage(svgOverlay, sx, sy, sw, sh, 0, 0, ctx.canvas.width, ctx.canvas.height);
    }
};
```

Interactive pan and zoom
The camera is plain React state: updating it triggers a re-render, and beginRendering fetches and draws the new set of visible tiles. The @alleninstitute/vis-geometry Box2D and Vec2 utilities make the math straightforward:
```ts
import { Box2D, Vec2, type box2D, type vec2 } from '@alleninstitute/vis-geometry';

/** Zoom toward/away from a mouse position (in screen pixels). */
function zoom(view: box2D, screenSize: vec2, scale: number, mousePos: vec2): box2D {
    const zoomPoint = Vec2.add(view.minCorner, Vec2.mul(Vec2.div(mousePos, screenSize), Box2D.size(view)));
    return Box2D.translate(
        Box2D.scale(Box2D.translate(view, Vec2.scale(zoomPoint, -1)), [scale, scale]),
        zoomPoint,
    );
}
```
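The zoom arithmetic can be checked with a self-contained version that uses plain tuples, with the vis-geometry calls inlined:

```typescript
// Plain-tuple re-derivation of the zoom function, for checking the math.
type V2 = [number, number];
type Box = { min: V2; max: V2 };

function zoomAt(view: Box, screen: V2, scale: number, mouse: V2): Box {
    const size: V2 = [view.max[0] - view.min[0], view.max[1] - view.min[1]];
    // the mouse position mapped into the view's relative coordinate space
    const zp: V2 = [
        view.min[0] + (mouse[0] / screen[0]) * size[0],
        view.min[1] + (mouse[1] / screen[1]) * size[1],
    ];
    // scale the box about the zoom point, so that point stays fixed on screen
    const s = (p: V2): V2 => [(p[0] - zp[0]) * scale + zp[0], (p[1] - zp[1]) * scale + zp[1]];
    return { min: s(view.min), max: s(view.max) };
}

// zooming in 2× about the center of the default view:
const z = zoomAt({ min: [0, 0], max: [1, 1] }, [400, 400], 0.5, [200, 200]);
// z.min is [0.25, 0.25] and z.max is [0.75, 0.75]
```

Note that a scale below 1 shrinks the view box, which zooms the image in; the demo's wheel handler uses 0.9 and 1.1.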
```ts
/** Pan by a pixel delta. */
function pan(view: box2D, screenSize: vec2, delta: vec2): box2D {
    const relative = Vec2.div(Vec2.mul(delta, [-1, -1]), screenSize);
    return Box2D.translate(view, Vec2.mul(relative, Box2D.size(view)));
}
```

Full source
dzi-demo.tsx: the demo wrapper with shared camera and SVG overlay
```tsx
import { useEffect, useMemo, useState } from 'react';
import { fetchDziMetadata, type DziImage } from '@alleninstitute/vis-dzi';
import { Box2D, type box2D, type vec2 } from '@alleninstitute/vis-geometry';

import { pan, zoom } from '../common/camera';
import { RenderServerProvider } from '../common/react/render-server-provider';
import { DziViewer } from './dzi-viewer';

const SVG_OVERLAY_URL =
    'https://idk-etl-prod-download-bucket.s3.amazonaws.com/idf-23-10-pathology-images/pat_images_JGCXWER774NLNWX2NNR/H20.33.040-A12-I6-primary/annotation.svg';

const DZI_URLS = [
    'https://idk-etl-prod-download-bucket.s3.amazonaws.com/idf-23-10-pathology-images/pat_images_JGCXWER774NLNWX2NNR/H20.33.040-A12-I6-primary/H20.33.040-A12-I6-primary.dzi',
    'https://idk-etl-prod-download-bucket.s3.amazonaws.com/idf-23-10-pathology-images/pat_images_JGCXWER774NLNWX2NNR/H20.33.040-A12-I6-analysis/H20.33.040-A12-I6-analysis.dzi',
];

const SCREEN_SIZE: vec2 = [400, 400];

/**
 * This React component renders two DZI images which share a camera and a shared SVG overlay.
 *
 * It uses simple React state management for the camera, basic event handlers for mouse interactions,
 * and a shared RenderServer to render the DZI images to multiple canvases.
 */
export function DziDemo() {
    const [images, setImages] = useState<DziImage[]>([]);

    useEffect(() => {
        Promise.all(DZI_URLS.map(fetchDziMetadata)).then((results) => {
            setImages(results.filter((img): img is DziImage => img !== undefined));
        });
    }, []);

    // the DZI renderer expects a "relative" camera - that means a box, from 0 to 1. 0 is the bottom or left of the image,
    // and 1 is the top or right of the image, regardless of the aspect ratio of that image.
    const [view, setView] = useState<box2D>(Box2D.create([0, 0], [1, 1]));
    const [dragging, setDragging] = useState(false);

    const handleZoom = (e: WheelEvent) => {
        e.preventDefault();
        const zoomScale = e.deltaY > 0 ? 1.1 : 0.9;
        const v = zoom(view, SCREEN_SIZE, zoomScale, [e.offsetX, e.offsetY]);
        setView(v);
    };

    const handlePan = (e: React.MouseEvent<HTMLCanvasElement>) => {
        if (dragging) {
            const v = pan(view, SCREEN_SIZE, [e.movementX, e.movementY]);
            setView(v);
        }
    };

    const handleMouseDown = () => {
        setDragging(true);
    };

    const handleMouseUp = () => {
        setDragging(false);
    };

    const [overlay, setOverlay] = useState<HTMLImageElement | null>(null);

    useEffect(() => {
        const img = new Image();
        img.onload = () => setOverlay(img);
        img.src = SVG_OVERLAY_URL;
    }, []);

    const camera = useMemo(() => ({ screenSize: SCREEN_SIZE, view }), [view]);

    return (
        <RenderServerProvider>
            <div style={{ display: 'flex', flexDirection: 'row' }}>
                {images.map((v) => (
                    <div key={v.imagesUrl} style={{ width: SCREEN_SIZE[0], height: SCREEN_SIZE[1], marginTop: 0 }}>
                        <DziViewer
                            id={v.imagesUrl}
                            dzi={v}
                            camera={camera}
                            svgOverlay={overlay}
                            onMouseDown={handleMouseDown}
                            onMouseUp={handleMouseUp}
                            onMouseLeave={handleMouseUp}
                            onMouseMove={handlePan}
                            onWheel={handleZoom}
                        />
                    </div>
                ))}
            </div>
        </RenderServerProvider>
    );
}
```

dzi-viewer.tsx: the reusable viewer component
```tsx
import { useContext, useEffect, useRef } from 'react';
import {
    type GpuProps as CachedPixels,
    type DziImage,
    type DziRenderSettings,
    type DziTile,
    buildAsyncDziRenderer,
} from '@alleninstitute/vis-dzi';
import { Vec2, type vec2 } from '@alleninstitute/vis-geometry';
import type { RenderFrameFn, buildAsyncRenderer } from '@alleninstitute/vis-core';

import { renderServerContext } from '../common/react/render-server-provider';

type Props = {
    id: string;
    dzi: DziImage;
    svgOverlay: HTMLImageElement | null;
    onWheel?: (e: WheelEvent) => void;
    onMouseDown?: (e: React.MouseEvent<HTMLCanvasElement>) => void;
    onMouseUp?: (e: React.MouseEvent<HTMLCanvasElement>) => void;
    onMouseMove?: (e: React.MouseEvent<HTMLCanvasElement>) => void;
    onMouseLeave?: (e: React.MouseEvent<HTMLCanvasElement>) => void;
} & DziRenderSettings;

export function DziViewer({
    svgOverlay,
    camera,
    dzi,
    onWheel,
    id,
    onMouseDown,
    onMouseUp,
    onMouseMove,
    onMouseLeave,
}: Props) {
    const server = useContext(renderServerContext);
    const canvas = useRef<HTMLCanvasElement>(null);

    // the renderer needs WebGL for us to create it, and WebGL needs a canvas to exist,
    // and that canvas needs to be the same canvas forever -
    // hence the awkwardness of refs + an effect to initialize the whole thing
    const renderer = useRef<
        ReturnType<typeof buildAsyncRenderer<DziImage, DziTile, DziRenderSettings, string, string, CachedPixels>>
    >(undefined);

    useEffect(() => {
        const el = canvas.current;
        if (server?.regl) {
            renderer.current = buildAsyncDziRenderer(server.regl);
        }
        return () => {
            if (el) {
                server?.destroyClient(el);
            }
        };
    }, [server]);

    useEffect(() => {
        const compose = (ctx: CanvasRenderingContext2D, image: ImageData) => {
            // first, draw the results from webGL
            ctx.putImageData(image, 0, 0);

            if (svgOverlay) {
                // then add our svg overlay
                const { width, height } = svgOverlay;
                const svgSize: vec2 = [width, height];
                const start = Vec2.mul(camera.view.minCorner, svgSize);
                const wh = Vec2.sub(Vec2.mul(camera.view.maxCorner, svgSize), start);
                const [sx, sy] = start;
                const [sw, sh] = wh;
                ctx.drawImage(svgOverlay, sx, sy, sw, sh, 0, 0, ctx.canvas.width, ctx.canvas.height);
            }
        };

        if (server && renderer.current && canvas.current) {
            const renderMyData: RenderFrameFn<DziImage, DziTile> = (target, cache, callback) => {
                if (renderer.current) {
                    return renderer.current(dzi, { camera }, callback, target, cache);
                }
                return null;
            };
            server.beginRendering(
                renderMyData,
                (e) => {
                    if (e.status === 'begin') {
                        // erase the offscreen frame before we start drawing on it
                        server.regl?.clear({
                            framebuffer: e.target,
                            color: [0, 0, 0, 0],
                            depth: 1,
                        });
                    } else if (e.status === 'progress' || e.status === 'finished') {
                        e.server.copyToClient(compose);
                    }
                },
                canvas.current,
            );
        }
    }, [server, svgOverlay, dzi, camera]);

    // React registers onWheel as a passive listener, so we can't call preventDefault from it.
    // Instead we register our own non-passive listener and forward to the prop.
    useEffect(() => {
        const el = canvas.current;
        if (!el) {
            return;
        }
        const handleWheel = (e: WheelEvent) => {
            e.preventDefault();
            onWheel?.(e);
        };
        el.addEventListener('wheel', handleWheel, { passive: false });
        return () => {
            el.removeEventListener('wheel', handleWheel);
        };
    }, [onWheel]);

    return (
        <canvas
            id={id}
            ref={canvas}
            width={camera.screenSize[0]}
            height={camera.screenSize[1]}
            onMouseDown={onMouseDown}
            onMouseUp={onMouseUp}
            onMouseMove={onMouseMove}
            onMouseLeave={onMouseLeave}
        />
    );
}
```