Niall Eccles

Building a Concurrent Thumbnail Queue with Sharp

How I built a queued, cached thumbnail pipeline for a desktop photo app without blocking the UI or overwhelming the system

28 March 2026
Electron · Node.js · TypeScript · Sharp · Side Projects

When you open a folder of 600 photos in Foco, the app needs to display every image as a thumbnail. It needs to do that quickly, without freezing the UI, and without melting your CPU.

One approach is to process everything at once. Fire off 600 Sharp operations as fast as the loop runs and wait for them to finish. This works in the sense that it eventually produces thumbnails. It also pegs the CPU, makes the app unresponsive, and on a slower machine can run the process out of memory.

The right approach is a queue with a concurrency limit, combined with a disk cache so you never repeat work. Here is how I built it.

Why Sharp

Sharp is a Node.js image processing library built on top of libvips. It is fast in a way that pure JavaScript image libraries are not, because the actual work happens in a native module. Resizing and encoding a JPEG takes milliseconds per image.

In Foco, all thumbnail generation runs through Sharp in Electron’s main process. The renderer never touches pixel data. It sends an IPC request, the main process does the work via Sharp, and the result comes back as a file path to a cached thumbnail on disk.

The main process: generation and caching

The thumbnail service in the main process is straightforward. Given an image path and the folder it lives in, it either returns a cached thumbnail or generates a new one with Sharp.

import * as fs from 'node:fs/promises'
import { join, parse } from 'node:path'
import sharp from 'sharp'

const CACHE_DIR = '.foco-cache'
const THUMB_DIR = 'thumbnails'
const THUMB_SIZE = 160
const THUMB_QUALITY = 75

export async function getThumbnailPath(imagePath: string, folderPath: string): Promise<string> {
  // ensureCacheDir creates <folder>/.foco-cache/thumbnails/ if needed and returns its path
  const cacheDir = await ensureCacheDir(folderPath)
  const { name } = parse(imagePath)
  const thumbPath = join(cacheDir, `${name}.thumb.jpg`)

  try {
    const [thumbStat, srcStat] = await Promise.all([fs.stat(thumbPath), fs.stat(imagePath)])
    if (thumbStat.mtimeMs >= srcStat.mtimeMs) {
      return thumbPath
    }
  } catch {
    // Cache miss — generate below
  }

  await sharp(imagePath)
    .resize(THUMB_SIZE, THUMB_SIZE, { fit: 'cover', position: 'centre' })
    .jpeg({ quality: THUMB_QUALITY })
    .toFile(thumbPath)

  return thumbPath
}

Thumbnails are stored in a .foco-cache/thumbnails/ directory inside the folder itself, next to the photos. The cache key is the thumbnail filename, derived from the source image name, paired with a modification-time check: if the cached thumbnail’s mtime is newer than or equal to the source file’s mtime, the cached version is valid. If the source has been modified since the thumbnail was generated, Sharp regenerates it.

This is simpler than a hash-based approach and works well for the use case. The only way a photo changes on disk after you open it in Foco is if you edited it in another tool, and modification time reliably captures that.
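The `ensureCacheDir` helper referenced in the code above isn’t reproduced here. A minimal reconstruction — my sketch, not the original — just creates the nested cache directory if it’s missing:

```typescript
import { mkdir } from 'node:fs/promises'
import { join } from 'node:path'

// Sketch of the helper referenced above: create
// <folder>/.foco-cache/thumbnails/ if missing and return its path.
// `recursive: true` makes the call idempotent and creates both levels.
async function ensureCacheDir(folderPath: string): Promise<string> {
  const dir = join(folderPath, '.foco-cache', 'thumbnails')
  await mkdir(dir, { recursive: true })
  return dir
}
```

Because the call is idempotent, `getThumbnailPath` can run it on every request without checking first.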

The IPC handler is a thin wrapper:

ipcMain.handle('get-thumbnail', async (_event, imagePath: string, folderPath: string) => {
  return getThumbnailPath(imagePath, folderPath)
})
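On the other side of that channel, the renderer’s `api.getThumbnail` call (used in the store below) is presumably exposed through a preload script. A typical contextBridge wiring would look roughly like this — a sketch, since the actual preload isn’t shown in this post:

```typescript
import { contextBridge, ipcRenderer } from 'electron'

// Expose a minimal, typed API to the sandboxed renderer. The channel
// name must match the ipcMain.handle registration in the main process.
contextBridge.exposeInMainWorld('api', {
  getThumbnail: (imagePath: string, folderPath: string): Promise<string> =>
    ipcRenderer.invoke('get-thumbnail', imagePath, folderPath)
})
```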

The renderer: a Zustand queue

The main process handles one thumbnail at a time and returns a file path. The renderer is responsible for deciding how many requests to send at once. This is where the queue lives.

The queue is a Zustand store:

const MAX_CONCURRENT = 4

export const useThumbnailStore = create<ThumbnailState>((set, get) => ({
  cache: new Map(),
  pending: new Set(),
  queue: [],

  requestThumbnail(imagePath, folderPath) {
    const { cache, pending, queue } = get()
    if (cache.has(imagePath) || pending.has(imagePath)) return
    if (queue.some((item) => item.imagePath === imagePath)) return

    set({ queue: [...queue, { imagePath, folderPath }] })
    get()._processQueue()
  },

  _processQueue() {
    const { pending, queue } = get()
    if (pending.size >= MAX_CONCURRENT || queue.length === 0) return

    const next = queue[0]
    const newPending = new Set(pending)
    newPending.add(next.imagePath)
    set({ queue: queue.slice(1), pending: newPending })

    api
      .getThumbnail(next.imagePath, next.folderPath)
      .then((thumbPath) => {
        const { pending: p, cache: c } = get()
        const newCache = new Map(c)
        newCache.set(next.imagePath, `safe-file://${thumbPath}`)
        const newP = new Set(p)
        newP.delete(next.imagePath)
        set({ cache: newCache, pending: newP })
        get()._processQueue()
      })
      .catch(() => {
        const { pending: p } = get()
        const newP = new Set(p)
        newP.delete(next.imagePath)
        set({ pending: newP })
        get()._processQueue()
      })
  }
}))

The store tracks three things: a cache of resolved thumbnails (image path to URL), a pending set of in-flight requests, and a queue of items waiting to be sent. requestThumbnail deduplicates before adding anything. If the thumbnail is already cached, pending, or already queued, it does nothing. _processQueue pulls items off the queue and sends IPC requests until the pending count hits MAX_CONCURRENT.

When a request resolves, it removes the image from pending, stores the result in cache, and calls _processQueue again to pick up the next item. When a request fails, it removes the image from pending and calls _processQueue anyway, so the queue does not stall because one image failed.

The concurrency limit is four. That number came from testing rather than theory. At four, multiple CPU cores stay busy without the rest of the machine becoming sluggish. Lower and generation feels slow. Higher and the rest of the system starts to drag.
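Four is hard-coded. If you wanted the limit to track the machine instead, a capped core-count heuristic is a small change — a hypothetical variant, not what Foco ships:

```typescript
// Hypothetical variant: scale the limit with reported CPU cores,
// clamped to a sane range, falling back to 4 where nothing is reported.
function concurrencyLimit(cores: number | undefined): number {
  return Math.max(2, Math.min(6, cores ?? 4))
}

// navigator.hardwareConcurrency is available in Electron renderers;
// the globalThis lookup keeps this sketch runnable outside a browser too.
const MAX_CONCURRENT = concurrencyLimit(
  (globalThis as any).navigator?.hardwareConcurrency
)
```

The clamp matters more than the core count: thumbnail generation is disk-bound as well as CPU-bound, so the returns diminish quickly past a handful of parallel requests.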

Lazy loading with IntersectionObserver

Sending 600 IPC requests immediately when a folder opens is pointless if most thumbnails are off-screen. Each Thumbnail component uses IntersectionObserver to only request its thumbnail when it scrolls close to the viewport:

useEffect(() => {
  const el = ref.current
  if (!el) return
  const observer = new IntersectionObserver(
    ([entry]) => {
      if (entry.isIntersecting) {
        setInView(true)
        observer.disconnect()
      }
    },
    { rootMargin: '200px' }
  )
  observer.observe(el)
  return () => observer.disconnect()
}, [])

useEffect(() => {
  if (inView && !thumbnailUrl) {
    requestThumbnail(image.path, folderPath)
  }
}, [inView, thumbnailUrl, image.path, folderPath, requestThumbnail])

The rootMargin: '200px' means thumbnails start loading 200px before they enter the viewport, which prevents visible pop-in when scrolling. Once a thumbnail is in view, the observer disconnects. It only needs to fire once. While waiting, the component renders a Mantine Skeleton placeholder that pulses until the image is ready.
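Putting those pieces together, the component’s render is roughly this shape — a sketch with assumed prop shapes and import paths, not Foco’s actual component:

```typescript
import { useRef, useState } from 'react'
import { Skeleton } from '@mantine/core'
import { useThumbnailStore } from './thumbnailStore' // assumed path

// Sketch: wires the observer state and the store together. The prop
// shapes and the store import path are assumptions.
function Thumbnail({ image, folderPath }: { image: { path: string }; folderPath: string }) {
  const ref = useRef<HTMLDivElement>(null)
  const [inView, setInView] = useState(false)
  const thumbnailUrl = useThumbnailStore((s) => s.cache.get(image.path))
  const requestThumbnail = useThumbnailStore((s) => s.requestThumbnail)

  // ...the two effects shown above go here, using setInView and requestThumbnail...

  return (
    <div ref={ref}>
      {thumbnailUrl ? <img src={thumbnailUrl} alt={image.path} /> : <Skeleton height={160} />}
    </div>
  )
}
```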

Serving local files

Sharp writes thumbnails to disk, and the IPC handler returns a file path. The renderer uses a custom safe-file:// protocol to load them as image sources. This sidesteps the browser sandbox restrictions that would block standard file:// URLs in Electron’s renderer process. The URL stored in cache looks like safe-file:///path/to/.foco-cache/thumbnails/photo.thumb.jpg, which the Electron main process intercepts and serves as a local file response.
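The protocol registration itself isn’t shown in this post. With recent Electron versions, the main-process side might look roughly like this — a sketch, and the exact scheme privileges are an assumption:

```typescript
import { app, net, protocol } from 'electron'
import { pathToFileURL } from 'node:url'

// The scheme must be registered before app ready for <img src> to use it.
protocol.registerSchemesAsPrivileged([
  { scheme: 'safe-file', privileges: { standard: true, secure: true, stream: true } }
])

app.whenReady().then(() => {
  // Strip the scheme off the request URL and serve the remaining
  // path from disk via Electron's net.fetch.
  protocol.handle('safe-file', (request) => {
    const filePath = decodeURIComponent(new URL(request.url).pathname)
    return net.fetch(pathToFileURL(filePath).toString())
  })
})
```

This sketch ignores Windows drive-letter paths and does no validation; a real handler should also restrict which paths it will serve.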

What was harder than expected

The error handling within the queue took more thought than the queue itself. Sharp throws if it encounters a file it cannot decode: a corrupt JPEG, an unsupported format, a file deleted between queuing and processing. The .catch() handler in _processQueue ensures that when this happens, the image is removed from pending and the queue continues. Without it, a failed request would leave a stale entry in pending, permanently occupying one concurrency slot; enough bad files and the queue would stall entirely.

On the renderer side, components never know whether a thumbnail failed or is just slow. The Skeleton placeholder stays visible until a URL appears in the cache. For genuinely bad files, that placeholder never resolves, which is acceptable — a placeholder is better than an error state for something in a filmstrip.

The result

Opening a folder for the first time takes a few seconds for thumbnails to populate. Re-opening the same folder is close to instant because the disk cache is still valid. The app stays responsive throughout, because the queue means the renderer and main process are never overwhelmed with simultaneous work.

The queue itself is about sixty lines in the Zustand store. The thumbnail service in the main process is about thirty. Most of the thinking was in how the pieces fit together: where the concurrency limit should live, how cache invalidation should work, and how to make sure failures do not cause subtle hangs.
