Isolated parallel E2E tests for SvelteKit apps with Remote Functions
5 January 2026

Testing is important, and often challenging, when we build applications. Testing strategies often evolve from mostly unit tests to more integration tests with mocks, and eventually full E2E (end-to-end) tests. But E2E isn't a silver bullet.
This post shows my current default approach to testing, demonstrated with a minimal SvelteKit app using Remote Functions and Bun.
Before we start, let’s recap the pros and cons of two common approaches: mocked tests and full E2E tests on a shared environment.
Definitions
- Integration tests with mock API responses (mocked E2E): browser automation against a local app, with network responses mocked.
- Full E2E on a shared environment: browser automation against a running app with a real backend + real DB in a shared (non-isolated) environment.
Integration tests with mock API responses
Pros
- Fast and lightweight, since responses return immediately from mocks.
- Easier to reproduce tricky edge cases (e.g., a temporary server outage).
Cons
- Mock API responses take time to create and maintain, especially with relational data.
- Mocks drift from reality when backend behavior changes.
Full E2E test suite on a shared environment
Pros
- High confidence: it exercises the entire stack from UI to backend to database.
- Not necessarily tied to the codebase, so it can be maintained separately.
Cons
- Slower, since interactions hit real services.
- Hard to parallelize safely without multiple accounts/servers or robust data cleanup.
- Cleanup is often manual—or becomes complex to automate.
- Less resilient: crashes can leave leftover data and break later runs.
- Flaky when cleanup fails or when transient network issues occur.
- Hard to retry because previous attempts may have already written to the database.
My new test setup requirements
I wanted to avoid mock maintenance while keeping tests local and fast, and also make parallel runs safe and reliable. This time, I want:
- Still E2E: assertions based on user behavior
- Minimal mock data maintenance
- Runs locally to stay reasonably fast
- Retryable when tests flake
- Real backend + real database (hit real code paths)
- Parallelizable with zero interference between test cases
After some research, I settled on disposable environments per Playwright worker using testcontainers (Postgres + app instance), plus a fast “reset to baseline” mechanism per test via database snapshots. In theory, this works with any framework and with both SSR and SPA architectures.
In the rest of the post, we’ll build it step by step with a brand new SvelteKit app using Remote Functions and Bun. By the end, you’ll have Playwright running tests in parallel, each with its own Postgres container + app instance, with safe retries even when tests fail.
What we’ll build
- A simple app showing the user count on the home page
- A simple form to register new users
- Three test cases asserting that it:
  - Shows the user count
  - Shows an error message when the username has already been taken
  - Updates the user count after successful registration
Prerequisites

You'll need Bun and Docker installed locally, with the Docker daemon running, since both local development and the tests start containers.
Step 1: Base app creation and setup
Create a new SvelteKit app with TypeScript support, Prettier, ESLint, Playwright, and Bun:
```bash
bunx sv create --template minimal --types ts --add prettier eslint playwright --install bun <project-name>
```

To add the necessary dependencies:

- `svelte-adapter-bun`: run SvelteKit production builds on Bun (we'll use production builds in E2E)
- `@types/bun`: Bun type definitions for server-side code
- `testcontainers`: build and start containers of the SvelteKit app for testing
- `@testcontainers/postgresql`: start disposable PostgreSQL containers
- `valibot`: validates user registration input

```bash
bun add -D svelte-adapter-bun @types/bun testcontainers @testcontainers/postgresql
bun add valibot
```

Edit `svelte.config.js` to (1) use the Bun adapter and (2) enable Remote Functions. Remote Functions are experimental and must be opted in, and using `await` directly in components requires `compilerOptions.experimental.async`.
```js
import adapter from 'svelte-adapter-bun';
import { vitePreprocess } from '@sveltejs/vite-plugin-svelte';

/** @type {import('@sveltejs/kit').Config} */
const config = {
	// Consult https://svelte.dev/docs/kit/integrations
	// for more information about preprocessors
	preprocess: vitePreprocess(),

	kit: {
		// The Bun adapter replaces the default adapter-auto
		adapter: adapter(),

		experimental: {
			remoteFunctions: true,
		},
	},

	compilerOptions: {
		experimental: {
			async: true,
		},
	},
};

export default config;
```

Step 2: Prepare the infra
We’ll run the production build in tests (inside a disposable container), so we need a Dockerfile that builds to the /build folder and starts it with Bun. This is based on Bun’s recommended multi-stage pattern.
```dockerfile
# use the official Bun image
# see all versions at https://hub.docker.com/r/oven/bun/tags
FROM oven/bun:1 AS base
WORKDIR /usr/src/app

# install dependencies into temp directory
# this will cache them and speed up future builds
FROM base AS install
RUN mkdir -p /temp/dev
COPY package.json bun.lock /temp/dev/
RUN cd /temp/dev && bun install --frozen-lockfile

# install with --production (exclude devDependencies)
RUN mkdir -p /temp/prod
COPY package.json bun.lock /temp/prod/
RUN cd /temp/prod && bun install --frozen-lockfile --production

# copy node_modules from temp directory
# then copy all (non-ignored) project files into the image
FROM base AS prerelease
COPY --from=install /temp/dev/node_modules node_modules
COPY . .

# make production build
ENV NODE_ENV=production
RUN bun --bun run build

# copy production build into final image
FROM base AS release
COPY --from=prerelease /usr/src/app/build .

# run the app
USER bun
EXPOSE 3000/tcp
ENTRYPOINT [ "bun", "--bun", "run", "index.js" ]
```

For local development, I recommend:
- run SvelteKit on your host (fast HMR, editor integration)
- run Postgres in Docker (isolated DB, no local conflicts)
```yaml
services:
  app:
    build: .
    ports:
      - '5000:3000'
    environment:
      POSTGRES_URL: postgresql://postgres:postgres@db:5432/svelte
    depends_on:
      - db

  db:
    image: postgres:18-alpine3.23
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: svelte
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

Start the database:

```bash
docker compose up db
```

Create a `.env.local` file and point Bun to the forwarded port:

```
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/svelte
```

Start the dev server and open http://localhost:5173:

```bash
bun --bun run dev
```

Step 3: Develop user features with Remote Functions
3.1 Create database table
In real apps, you'd use migrations. For this minimal tutorial, we'll create the users table by executing psql inside the db container:

```bash
docker exec -it sveltekit-remote-function-e2e-tests-db-1 psql -U postgres -d svelte -c \
  "CREATE TABLE users (id BIGSERIAL PRIMARY KEY, username VARCHAR(50) NOT NULL UNIQUE, password TEXT NOT NULL);"
```

3.2 Implement user logic (Remote Functions)
Create src/lib/users/data.remote.ts and colocate all user-related logic there:
- `getUserCount`: queries the row count
- `createUser`: validates input, checks uniqueness, writes to DB, then redirects
```ts
import * as v from 'valibot';
import { redirect, invalid } from '@sveltejs/kit';
import { query, form } from '$app/server';
import { sql } from 'bun';

export const getUserCount = query(async () => {
	const [{ count }] = await sql`SELECT COUNT(*) FROM users;`;
	return count;
});

export const createUser = form(
	v.object({
		username: v.pipe(v.string(), v.nonEmpty()),
		password: v.pipe(v.string(), v.nonEmpty()),
	}),
	async ({ username, password }, issue) => {
		const [{ count }] = await sql`SELECT COUNT(*) FROM users WHERE username = ${username};`;

		if (count !== '0') {
			invalid(issue.username('Username has been taken.'));
		}

		// TODO: In a real app, always hash passwords before storing them.
		const encryptedPassword = password;

		await sql`INSERT INTO users (username, password) VALUES (${username}, ${encryptedPassword});`;

		redirect(303, `/`);
	},
);
```

Home page: display the user count and link to registration. Because we enabled `compilerOptions.experimental.async`, we can `await` the Remote Function directly in the `<script>` block.
```svelte
<script lang="ts">
	import { getUserCount } from '$lib/users/data.remote';
	import { resolve } from '$app/paths';

	const userCount = await getUserCount();
</script>

<a href={resolve('/users/register')}>User Register</a>

<p>User Count: {userCount}</p>
```

Registration page: spread the Remote Function form object onto `<form>` and bind the fields.
```svelte
<script lang="ts">
	import { createUser } from '$lib/users/data.remote';
	import { resolve } from '$app/paths';
</script>

<a href={resolve('/')}>Home Page</a>

<h1>User Register</h1>

<form {...createUser}>
	<label>
		Username
		<input {...createUser.fields.username.as('text')} />
		{#each createUser.fields.username.issues() as issue (issue.message)}
			<br />
			<span>{issue.message}</span>
		{/each}
	</label>

	<br />

	<label>
		Password
		<input {...createUser.fields.password.as('password')} />
		{#each createUser.fields.password.issues() as issue (issue.message)}
			<br />
			<span>{issue.message}</span>
		{/each}
	</label>

	<br />

	<button type="submit">Register</button>
</form>
```

At this point:

- Visiting the home page should show `User Count: 0`
- Registering a user should redirect back to the home page and show `User Count: 1`
Step 4: Set up E2E tests
This is the core: we’ll run Playwright in parallel, and each worker gets:
- a dedicated Postgres container
- a dedicated app container (production build)
- a DB snapshot that we restore after each test, so tests are retryable and order-independent
4.1 Playwright config
```ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
	webServer: { command: 'npm run build && npm run preview', port: 4173 },
	testDir: 'e2e',
	globalSetup: 'global-setup.ts',
	fullyParallel: true,
});
```
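Each worker starts its own Postgres and app container, so parallelism is bounded by your machine rather than by the tests. If that becomes a problem, Playwright's standard `workers` option can cap it. This tweak is optional and not part of the setup above:

```ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
	webServer: { command: 'npm run build && npm run preview', port: 4173 },
	testDir: 'e2e',
	globalSetup: 'global-setup.ts',
	fullyParallel: true,
	// Optional cap: each worker spins up a Postgres + app container pair,
	// so don't start more workers than the machine (or CI runner) can handle.
	workers: process.env.CI ? 2 : undefined,
});
```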
4.2 Build the app image once (global setup)

We'll refer to the image name later, so let's define it as a constant in `e2e/constants.ts`:

```ts
export const APP_IMAGE_NAME = 'sveltekit-remote-function-e2e-tests-app';
```

Build the Docker image once before any test starts, in `global-setup.ts`:
```ts
import { dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { GenericContainer } from 'testcontainers';
import { APP_IMAGE_NAME } from './e2e/constants';

export default async function globalSetup() {
	const appPath = dirname(fileURLToPath(import.meta.url));
	await GenericContainer.fromDockerfile(appPath).build(APP_IMAGE_NAME, {
		deleteOnExit: false,
	});
}
```

4.3 The fixtures (where isolation happens)
We’ll define two fixtures:
Fixture 1: base (worker-scoped)
Set up once per worker:
- Create a Docker network so app ⇄ DB can talk by hostname
- Start a dedicated Postgres container, create schema + seed one user
- Take a DB snapshot (baseline state)
- Start an app container connected to that DB
- Provide `{ db, appUrl }` to the tests
Fixture 2: override Playwright’s page fixture (test-scoped)
- Before each test:
  - Inject a request header so SvelteKit can determine the request protocol (HTTP) correctly.

    In production builds, SvelteKit may assume requests are HTTPS (or infer the protocol from proxy headers). When running everything locally over plain HTTP in E2E, that can make Remote Functions fail the request origin check, because the request origin appears to mismatch due to the different protocol.

    Since this is only for tests, we set `PROTOCOL_HEADER` on the app and use Playwright to "proxy" requests with an extra header (e.g. `REQUEST-PROTOCOL: http`) so SvelteKit treats them as HTTP. You can find more details here.
- After each test:
  - Restore the DB snapshot, so retries and later tests start from the same baseline state

Here's `e2e/fixtures.ts`:
```ts
import { GenericContainer, Network, Wait } from 'testcontainers';
import { PostgreSqlContainer } from '@testcontainers/postgresql';
import type { StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { test } from '@playwright/test';
import { APP_IMAGE_NAME } from './constants';

export const e2e = test.extend<
	object,
	{ base: { db: StartedPostgreSqlContainer; appUrl: string } }
>({
	base: [
		// eslint-disable-next-line no-empty-pattern
		async ({}, use) => {
			const network = await new Network().start();
			const db = await new PostgreSqlContainer('postgres:18-alpine3.23')
				.withNetwork(network)
				.withHostname('db')
				.withUsername('postgres')
				.withPassword('postgres')
				.withDatabase('svelte')
				.start();

			console.log('Creating DB tables with seed data...');
			await db.exec([
				'psql',
				'-U',
				'postgres',
				'-d',
				'svelte',
				'-c',
				"CREATE TABLE users (id BIGSERIAL PRIMARY KEY, username VARCHAR(50) NOT NULL UNIQUE, password TEXT NOT NULL); INSERT INTO users (username, password) VALUES ('user', '123456');",
			]);

			console.log('Taking DB Snapshot...');
			await db.snapshot();

			const app = await new GenericContainer(APP_IMAGE_NAME)
				.withNetwork(network)
				.withEnvironment({
					POSTGRES_URL: 'postgresql://postgres:postgres@db:5432/svelte',
					PROTOCOL_HEADER: 'REQUEST-PROTOCOL',
				})
				.withExposedPorts(3000)
				.withWaitStrategy(
					Wait.forLogMessage('Listening on http://0.0.0.0:3000/'),
				)
				.start();

			const appUrl = `http://localhost:${app.getMappedPort(3000)}`;
			const workerIndex = test.info().workerIndex;

			console.log(`Worker index ${workerIndex} started hosting ${appUrl}`);
			await use({ appUrl, db });

			await app.stop();
			await db.stop();
			await network.stop();
			console.log(`Worker index ${workerIndex} stopped hosting ${appUrl}`);
		},
		{ scope: 'worker', auto: true },
	],

	page: async ({ page, base: { db, appUrl } }, use) => {
		// To make CSRF check work while keeping HTTP
		// Ref: https://github.com/sveltejs/kit/issues/14352#issuecomment-3705210391
		await page.route(`${appUrl}/**`, async (route) => {
			const request = route.request();
			await route.continue({
				headers: {
					...request.headers(),
					'REQUEST-PROTOCOL': 'http',
				},
			});
		});

		await use(page);

		console.log('Restoring from DB Snapshot...');
		await db.restoreSnapshot();
	},
});
```

4.4 The tests (clean and parallel-safe)
With the fixtures, test cases become simple: just interact and assert.
```ts
import { expect } from '@playwright/test';
import { e2e } from './fixtures';

e2e('shows user count on home page', async ({ page, base: { appUrl } }) => {
	await page.goto(appUrl);
	// By default there is one user from the seed data
	await expect(page.getByRole('paragraph')).toContainText('User Count: 1');
});

e2e('handles username conflicts', async ({ page, base: { appUrl } }) => {
	await page.goto(`${appUrl}/users/register`);
	// Try to pick the same username in the seed data
	await page.getByRole('textbox', { name: 'Username' }).fill('user');
	await page.getByRole('textbox', { name: 'Password' }).fill('123456');
	await page.getByRole('button', { name: 'Register' }).click();
	await expect(page.getByText('Username has been taken.')).toBeVisible();
});

e2e('registers new users correctly', async ({ page, base: { appUrl } }) => {
	await page.goto(`${appUrl}/users/register`);
	await page.getByRole('textbox', { name: 'Username' }).fill('newuser');
	await page.getByRole('textbox', { name: 'Password' }).fill('123456');
	await page.getByRole('button', { name: 'Register' }).click();
	await page.waitForURL(appUrl);
	await expect(page.getByRole('paragraph')).toContainText('User Count: 2');
});
```

Run the tests:

```bash
bun run test
```

The first run may take longer because Docker images need to be built and dependencies pulled. After that, runs are typically much faster.
Step 5: CI (GitHub Actions)
GitHub Actions’ ubuntu-latest runner can build Docker images out of the box. We just need Bun, dependencies, and Playwright browsers + system deps.
```yaml
name: E2E Tests

on:
  pull_request:
  push:

jobs:
  test:
    name: Run E2E Tests
    timeout-minutes: 60
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: oven-sh/setup-bun@v2
        with:
          bun-version: 1.3.5

      - name: Install dependencies
        run: bun install

      - name: Install Playwright dependencies
        run: bunx playwright install --with-deps

      - name: Run E2E tests
        run: bun run test
```

Notes
- It's heavy compared to mocked tests.
  Yes: this approach trades CPU and Docker overhead for reliability, parallelism, and realism.
- It still uses "mock data" in a sense.
  You'll still maintain a small amount of seed data (your baseline snapshot). The difference is that you're no longer maintaining mocked API responses, so backend response changes don't require updating test doubles. Most of the "data maintenance" becomes just keeping a minimal, realistic starting state.
- Passwords
  This tutorial stores passwords as-is for brevity. Don't do that in production: always hash passwords using a strong algorithm (argon2/bcrypt); see the sketch after this list.
- Remote Functions are experimental
  They're powerful, but the API and behavior can change. Keep an eye on SvelteKit release notes and be ready to adjust.
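On the passwords note: Bun ships a built-in `Bun.password` API (argon2id by default), so hashing doesn't even need an extra dependency. Here's a minimal sketch of how the plain-text handling in `createUser` could be replaced; the helper names are mine, not from the tutorial:

```ts
import { sql } from 'bun';

// Hypothetical registration helper: hash the password before storing it.
// Bun.password.hash() uses argon2id by default and embeds the salt in the hash string.
async function registerUser(username: string, password: string) {
	const passwordHash = await Bun.password.hash(password);
	await sql`INSERT INTO users (username, password) VALUES (${username}, ${passwordHash});`;
}

// Hypothetical login check: verify a candidate password against the stored hash.
async function verifyLogin(username: string, password: string) {
	const [row] = await sql`SELECT password FROM users WHERE username = ${username};`;
	return row ? await Bun.password.verify(password, row.password) : false;
}
```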
A quick performance detour (and the real win)
My first attempt was the most straightforward idea: start a fresh Postgres container and a fresh app container for every single test. It worked, and the isolation was perfect—but it was painfully inefficient. A single assertion (just checking one element that renders the DB result) could take close to a minute end-to-end. Running the same test 100 times took around 4 minutes, mostly spent on container startup rather than actual testing.
The key realization was: the app and DB don’t need to be disposable per test. What needs to be isolated is the state. Once I switched to one Postgres + one app instance per Playwright worker, and reset the DB after each test using a snapshot restore, the runtime dropped dramatically. Workers still pay the startup cost once, but the remaining runs become cheap and repeatable. In my case, running the same test 100 times dropped to under a minute—because restoring a snapshot is far cheaper than rebuilding containers over and over.
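If you want to reproduce that kind of comparison yourself, generating repeated copies of a single test is enough. This is only a sketch of how one might stress the setup, not necessarily how the numbers above were measured:

```ts
import { expect } from '@playwright/test';
import { e2e } from './fixtures';

// Register 100 copies of the same assertion. With per-worker containers and
// per-test snapshot restores, most of the wall time goes to the tests themselves
// rather than to container startup.
for (let i = 0; i < 100; i++) {
	e2e(`shows user count on home page (run ${i + 1})`, async ({ page, base: { appUrl } }) => {
		await page.goto(appUrl);
		await expect(page.getByRole('paragraph')).toContainText('User Count: 1');
	});
}
```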
Conclusion
Disposable environments per worker + DB snapshot restore per test is a simple mental model:
- Readable: each test case is just user actions + assertions.
- Parallel-safe: each worker has its own DB and app instance.
- Retryable: each test resets state after it runs.
- Realistic: production build + real DB + real server code paths.
For larger apps, you can extend this pattern with real migrations, richer seeds, and more services (Redis, queues, etc.). But even in this minimal form, it’s a great default when you want E2E confidence without shared-environment flakiness.
You can find the repo with all the code here.