A Technical Case Study by Jani Tarvainen -- February 2026
In February 2026, I rebuilt from scratch a multi-tenant driving directions platform that has been operating since 2007. The project went from first commit to 17 production tenants serving real visitors across Europe and Africa in a week. The entire codebase was produced by a team of two: me as product owner, architect, and technical director, and Claude Code (Anthropic's AI coding agent) as the implementation engine. I did not write a single line of code by hand.
This case study documents the technical decisions, architecture, timeline, and lessons learned. It is written for technical professionals who value both engineering depth and business pragmatism.
I have operated a pan-European franchise of driving directions portals since 2007. Think of it as a Google Maps or Via Michelin alternative with a simpler UX, localized for specific European markets. Each market has its own domain, language, and branding. At peak seasons, the network has served 20,000-30,000 daily visitors across multiple tenants.
The business model is straightforward: display advertising (Google AdSense) covers operational costs and generates modest revenue. It is a side business, not a startup -- a profitable digital property that has run for nearly two decades with minimal intervention.
The previous incarnation was built in 2017 on Symfony 3 (PHP), React.js, and PostgreSQL. It had served well for almost a decade, but several factors converged:
- Technical debt: Symfony 3 reached end-of-life years ago. The React frontend was showing its age.
- Career context: After years of steady employment, I entered the job market in 2026. I wanted a project that demonstrates current skills and AI-native development competency.
- Curiosity: I had light experience with Claude Code and wanted to stress-test it on a real, production-bound project with a known target architecture.
Rather than incrementally upgrading the existing stack, I decided to maximize technical risk deliberately. The platform concept was proven and well-understood -- I knew exactly what features I needed. This freed me to go all-in on unfamiliar technology:
- Deno instead of Node.js -- I had been intrigued by the "Node.js done right" pitch for years but never used it in production.
- Fresh v2 as the server-side framework -- brand new, minimal community resources, Preact-based SSR.
- SQLite instead of PostgreSQL -- deliberately simpler for a single-server deployment. The architecture is designed so a migration to Postgres or another database is straightforward if scale demands it.
- MapLibre GL JS instead of Google Maps -- open-source, zero per-request API costs.
- Self-hosted OSRM for route calculation -- no per-request charges, full control over routing data.
- Claude Code (Opus 4.6) as the sole code author -- I described what I wanted; the AI wrote every line.
The calculated risk: I knew the domain cold after 19 years. If the technology choices failed, I could DNS-failback to the 2017 platform instantly. The old system was (and still is) running in parallel.
```
User Request
     |
     v
Cloudflare CDN (caching + SSL termination)
     |
     v
Caddy (reverse proxy + basic auth for staging)
     |
     v
Deno Fresh v2 (SSR + API, port 8086)
  |-- Multi-tenant resolution (hostname -> config)
  |-- Preact SSR pages (routes, explore, admin)
  |-- API layer (geocode proxy, beacon, health, landmarks)
  |-- SQLite (WAL mode, batched writes)
  |-- Sitemap generation (7 sub-sitemaps per tenant)
  |
  +-- External services:
       |-- Nominatim (geocoding, rate-limited + cached)
       |-- Self-hosted OSRM (route calculation, 4 regional instances)
       |-- Self-hosted tile server (OSM raster tiles)
```
| Layer | Technology | Rationale |
|---|---|---|
| Runtime | Deno 2.6.9 | TypeScript-first, built-in tooling, secure by default |
| Framework | Fresh v2 | File-system routing, Preact SSR, island architecture |
| Database | SQLite (node:sqlite) | Zero-config, single-file, WAL mode for concurrent reads |
| Map rendering | MapLibre GL JS 4.7.1 | Open-source, no per-request API costs |
| Geocoding | Nominatim (OpenStreetMap) | Free, globally available, cached locally |
| Routing | OSRM (self-hosted) | Four regional instances: EU, Africa, North America, South America |
| Tiles | Self-hosted tile server | OSM raster tiles, zero external dependency |
| CDN | Cloudflare | Free tier, cache purge API integrated into deploy pipeline |
| Reverse proxy | Caddy | Automatic HTTPS, simple config generation |
| Container | Docker (multi-stage) | Build-stage runs checks + tests; runtime stage is minimal |
| Hosting | Hetzner bare metal (outlet) | Cost-effective, runs OSRM instances alongside the app |
One of the key architectural decisions was minimizing recurring costs:
- Map tiles: Self-hosted. Cost: $0/request.
- Geocoding: Nominatim is free. Aggressively cached locally (30-day TTL) to respect their usage policy.
- Route calculation: Self-hosted OSRM. The Hetzner bare metal server runs four regional routing engines covering Europe, Africa, North America, and South America.
- Hosting: A single outlet-priced Hetzner bare metal server runs everything -- the app, all OSRM instances, and the tile server. Monthly cost is a fraction of what cloud-hosted map API calls would cost at this traffic volume.
- CDN: Cloudflare free tier. Automatic HTTPS, DDoS protection, edge caching.
- Revenue: Google AdSense auto-ads, enabled per tenant. The platform has historically covered its own costs.
This architecture means the marginal cost of adding a new tenant is essentially zero: a domain registration and a config entry.
The platform serves 21 configured tenants from a single deployment. Tenant resolution works through hostname matching:
- Custom domains (e.g., `afroute.com`, `dojazdu.net`) map directly to tenant configs
- Subdomain-based routing (e.g., `fi.routemap.info`) as a fallback
- Development override via query parameter
Each tenant configuration specifies: language, country, brand name, tagline, default map center/zoom, featured cities, ad settings, and live status. Adding a new market is a config change, not a code change.
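A sketch of what this hostname-based resolution might look like. The config shape, tenant IDs, and domain entries here are illustrative assumptions, not the production schema:

```typescript
// Illustrative tenant config; the real one also carries brand name,
// tagline, map center/zoom, featured cities, and ad settings.
interface TenantConfig {
  id: string;
  language: string;
  country: string;
  live: boolean;
}

// Custom domains map directly to a tenant id.
const DOMAIN_MAP: Record<string, string> = {
  "afroute.com": "ng",
  "dojazdu.net": "pl",
};

const TENANTS: Record<string, TenantConfig> = {
  ng: { id: "ng", language: "en_NG", country: "NG", live: true },
  pl: { id: "pl", language: "pl_PL", country: "PL", live: true },
  fi: { id: "fi", language: "fi_FI", country: "FI", live: false },
};

function resolveTenant(
  hostname: string,
  override?: string | null,
): TenantConfig | undefined {
  // 1. Development override via query parameter wins.
  if (override && TENANTS[override]) return TENANTS[override];
  // 2. Custom domain match.
  const byDomain = DOMAIN_MAP[hostname];
  if (byDomain) return TENANTS[byDomain];
  // 3. Subdomain fallback: "fi.routemap.info" -> "fi".
  return TENANTS[hostname.split(".")[0]];
}
```

With this shape, adding a market really is just another entry in `DOMAIN_MAP`/`TENANTS`.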
Live tenants at launch (17 markets):
| Tenant | Market | Language | Domain |
|---|---|---|---|
| ng | Nigeria | English (en_NG) | afroute.com |
| en | Ireland | English (en_IE) | routeplanner.app |
| es | Spain | Spanish | como-llegar.org |
| de | Germany | German (de_DE) | routenplaner.cc |
| fr | France | French | itineraire.me |
| it | Italy | Italian | percorsi.me |
| pt | Portugal | Portuguese | itinerarios.me |
| pl | Poland | Polish | dojazdu.net |
| cs | Czechia | Czech | planovac.net |
| da | Denmark | Danish | ruter.me |
| ee | Estonia | Estonian | marsruut.net |
| lv | Latvia | Latvian | marsruts.net |
| lt | Lithuania | Lithuanian | marsrutas.net |
| ro | Romania | Romanian | traseu.net |
| sk | Slovakia | Slovak | trasa-mapy.net |
| ch | Switzerland | German (de_CH) | routenplaner.me |
Additional tenants configured but not yet live: Finland, Sweden, Netherlands, Hungary, Norway, Serbia, Chile.
SQLite with WAL mode serves all reads and writes. The schema includes:
- Core data: `locations`, `countries`, `continents`, `location_types` with junction tables for many-to-many relationships
- Content: `trip_ideas` (79 pre-defined road trips with translations in 20+ languages), `map_services` (competitor links per country)
- Caches: `geocode_cache` (Nominatim results, 30-day TTL), `route_distances` (OSRM results cached per origin-destination pair)
- Analytics: `page_views`, `location_views`, `location_type_views` (all fed through batched writers)
- Operations: `redirects` + `redirect_tenants` (tenant-scoped URL redirects managed via admin panel)
Schema migrations are tracked in a `schema_version` table and run automatically on startup. Each migration is wrapped in a transaction with rollback on failure.
A key performance decision: all write-heavy operations use in-memory buffering with periodic flush. The generic `BatchedWriter<K, V>` class handles the buffer-flush-rollback pattern:
- Incoming writes accumulate in a `Map<K, V>` buffer
- Every 30 seconds, the buffer is swapped and flushed to SQLite in a single transaction
- On transaction failure, unflushed entries are merged back into the live buffer
This pattern is used for page view counting, location view tracking, location type view tracking, geocode cache writes, and route distance caching. The health endpoint exposes buffer depths for operational monitoring.
- Cloudflare CDN: Edge caching for all public pages (1-hour TTL, stale-while-revalidate 24h). Cache is purged on every deployment via API.
- In-memory TTL cache: Generic `MemoryCache` class with configurable TTL and max-entry eviction. Used for hot-path lookups.
- SQLite geocode cache: Nominatim results stored with 30-day TTL. Read-through pattern -- check memory first, then SQLite, then fetch from Nominatim.
- SQLite route distance cache: OSRM results cached per origin-destination pair. Avoids redundant route calculations for pages that display distances.
- Browser caching: Hashed static assets get 1-year immutable cache headers. HTML pages get cache-control with stale-while-revalidate.
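A TTL cache with max-entry eviction fits in a few lines; this is an illustrative sketch, not the actual `MemoryCache` implementation:

```typescript
class MemoryCache<V> {
  private entries = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number, private maxEntries = 1000) {}

  get(key: string): V | undefined {
    const hit = this.entries.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expires) { // lazy expiry on read
      this.entries.delete(key);
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    if (this.entries.size >= this.maxEntries && !this.entries.has(key)) {
      // Evict the oldest insertion (a Map preserves insertion order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

For the geocode read-through, a miss here falls back to the SQLite cache, and only a miss there triggers a Nominatim request.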
Security headers are set in middleware:
- Content Security Policy (script-src, style-src, img-src, connect-src, frame-src scoped to known domains)
- HSTS with 2-year max-age and includeSubDomains
- X-Frame-Options, X-Content-Type-Options, Referrer-Policy
- Rate limiting on API endpoints (beacon: 60 req/min, geocode: 30 req/min per IP)
- Internal endpoints (`/api/health`, `/api/tenant-check`) restricted to localhost
- Admin panel behind password authentication with cookie-based sessions
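A fixed-window per-IP limiter matching the quoted limits can be sketched as follows; the production middleware may well use a different algorithm (e.g. a sliding window), so treat this as illustrative:

```typescript
class RateLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  // limit = max requests per window; e.g. 60/min for the beacon endpoint.
  constructor(private limit: number, private windowMs = 60_000) {}

  allow(ip: string, now = Date.now()): boolean {
    const w = this.windows.get(ip);
    if (!w || now - w.start >= this.windowMs) {
      // First request in a fresh window.
      this.windows.set(ip, { start: now, count: 1 });
      return true;
    }
    w.count++;
    return w.count <= this.limit;
  }
}
```

In middleware, a `false` return maps to a `429 Too Many Requests` response before the handler runs.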
For anything beyond basic security hardening, I would insist on an external security audit. The CSP and rate limiting provide a solid baseline, but a production system handling real traffic deserves professional review.
Everything listed below was built from scratch in one week.
- Interactive map with MapLibre GL JS -- pan, zoom, click-to-add-destination
- Multi-waypoint routing -- add unlimited intermediate stops, drag to reorder
- Turn-by-turn directions with collapsible instruction list
- Route playback animation -- animated camera flythrough of the calculated route using `requestAnimationFrame`, with play/pause/stop controls and a heads-up display showing current road, instruction, and distance
- Route reversal -- swap origin and destination with one click
- Autocomplete geocoding -- type-ahead search powered by Nominatim with local caching
- Home page with popular routes, locations, and location types (data-driven from view counters)
- Route detail pages (`/origin/destination`) -- full SSR with structured data, distance, duration, and embedded interactive map
- Explore pages -- browse locations by type (cities, landmarks, attractions) with country scoping
- Route search -- discover routes by continent, with localized URL paths
- Road trip ideas -- 79 curated European road trips with descriptions translated into 20+ languages
- Route listing -- browse all cached routes by popularity
- Share pages -- sharable route links via URL-encoded state
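At its core, the route playback animation above is interpolating the camera position along the route polyline on every animation frame. A minimal sketch of that interpolation using planar math (illustrative names only, not the production code):

```typescript
type LngLat = [number, number];

// Planar segment length -- fine as an approximation for pacing a camera path.
function segLen(a: LngLat, b: LngLat): number {
  return Math.hypot(b[0] - a[0], b[1] - a[1]);
}

// t in [0, 1] across the whole polyline, weighted by segment length.
function interpolateAlong(coords: LngLat[], t: number): LngLat {
  if (coords.length === 1 || t <= 0) return coords[0];
  const lens = coords.slice(1).map((c, i) => segLen(coords[i], c));
  const total = lens.reduce((a, b) => a + b, 0);
  let target = Math.min(t, 1) * total;
  for (let i = 0; i < lens.length; i++) {
    if (target <= lens[i]) {
      const f = lens[i] === 0 ? 0 : target / lens[i];
      const [ax, ay] = coords[i];
      const [bx, by] = coords[i + 1];
      return [ax + (bx - ax) * f, ay + (by - ay) * f];
    }
    target -= lens[i];
  }
  return coords[coords.length - 1];
}
```

A `requestAnimationFrame` driver would advance an eased `t` each frame and feed the result to the MapLibre camera.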
- Per-page meta tags: Dynamic `<title>`, `<meta description>`, `og:title`, `og:description` using localized templates with distance/duration placeholders
- Sitemap generation: 7 sub-sitemaps per tenant (pages, cities, routes, routes-listing, routesearch, explore, index) generated dynamically
- Localized URL paths: Route and explore paths translated per language (e.g., `/reittejä` in Finnish, `/rutas` in Spanish)
- Preconnect hints: Resolved per-tenant to the nearest OSRM regional backend
- Structured internal linking: Popular routes, locations, and cross-tenant links on every page
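The localized meta templates with distance/duration placeholders can be sketched like this; the template strings, keys, and placeholder syntax are assumptions for illustration:

```typescript
// Hypothetical per-language meta description templates.
const TEMPLATES: Record<string, string> = {
  en: "Driving directions from {origin} to {destination} - {distance} km, about {duration}",
  fi: "Ajo-ohjeet: {origin} - {destination}, {distance} km, noin {duration}",
};

// Fill {placeholders} from a variable map, falling back to English
// when a language has no template; unknown placeholders stay as-is.
function fillTemplate(lang: string, vars: Record<string, string>): string {
  const tpl = TEMPLATES[lang] ?? TEMPLATES.en;
  return tpl.replace(/\{(\w+)\}/g, (match, key) => vars[key] ?? match);
}
```

The same mechanism serves `<title>`, `og:title`, and `og:description` from one per-tenant string table.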
The i18n system covers 6 fully supported UI languages (English, Finnish, Spanish, German, French, Swedish) with partial support for 15+ additional languages. The i18n strings file is ~2,200 lines covering every UI label, placeholder, page title template, and meta description template.
Road trip content is translated into 20+ languages including Czech, Danish, Estonian, Hungarian, Italian, Latvian, Lithuanian, Dutch, Norwegian, Polish, Portuguese, Romanian, Slovak, Serbian, and Russian.
Page tracking uses a cache-proof beacon pattern:
- A 1x1 transparent GIF is embedded as an `<img>` tag in the app layout
- The beacon URL includes the current page path as a query parameter
- On the server, the beacon endpoint records the view in the batched page view counter
- On the server, the beacon endpoint records the view in the batched page view counter
- Optionally, it fires a server-side GA4 Measurement Protocol event (no client-side JavaScript required)
This approach works regardless of CDN cache status, ad blockers, or JavaScript availability. GA4 integration includes the tenant domain in page_location for cross-tenant segmentation.
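A minimal sketch of such a beacon endpoint as a Fresh-style `(req) => Response` handler. The `path` query parameter name and the `recordView` wiring are assumptions, not the actual route code:

```typescript
// The canonical 43-byte 1x1 transparent GIF.
const GIF_1X1 = Uint8Array.from([
  0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x01, 0x00, 0x01, 0x00, 0x80, 0x00,
  0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0x21, 0xf9, 0x04, 0x01, 0x00,
  0x00, 0x00, 0x00, 0x2c, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00,
  0x00, 0x02, 0x02, 0x44, 0x01, 0x00, 0x3b,
]);

function beaconHandler(req: Request, recordView: (path: string) => void): Response {
  const page = new URL(req.url).searchParams.get("path") ?? "/";
  recordView(page); // feeds the batched page-view writer
  return new Response(GIF_1X1, {
    headers: {
      "content-type": "image/gif",
      // Never cache the beacon itself, or views stop being counted.
      "cache-control": "no-store, no-cache, must-revalidate",
    },
  });
}
```

The `no-store` header is the crucial detail: the surrounding HTML may be served from the CDN edge, but the beacon image request always reaches the origin.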
A full CRUD admin panel at /admin with:
- Locations, countries, continents, location types management
- Page views, location views, location type views dashboards
- Route distances browser
- Geocode cache inspector
- Trip ideas management
- Map services (competitor links) management
- Tenant-scoped redirect management (301/302)
- Paginated listings with search across all sections
- Password-protected with cookie-based sessions
The deployment system is a set of shell scripts that implement a robust release process:
`tag-release.sh`: Creates date-based git tags (e.g., `v26-02-14`, `v26-02-14.2`) with automatic suffix incrementing. Enforces main-branch-only tagging.
`deploy.sh`: Full deployment pipeline:
- Pull latest code and tags
- Build Docker image (multi-stage: install deps, run `deno task check` for format/lint/typecheck, run tests, production build)
- Backup SQLite database (gzipped, with rotation)
- Spin up smoke test container on ephemeral port
- Health check against `/api/health`
- Tenant readiness check via `/api/tenant-check` -- verifies each live tenant has its country, continent, coordinates, locations, i18n, slug resolution, and route pairs working
- Stop old container, start new one
- Live health check
- Automatic rollback to previous version if health check fails
- Purge Cloudflare cache for all live tenant domains
- Post-deployment domain crawl (sequential, with per-tenant progress) -- doubles as cache warmup and smoke test for every live URL
- Prune old Docker images (keep 3 most recent)
- Print deployment summary with duration, tenant status, and verification commands
`rollback.sh`: Instant rollback to any previously built image version.
`backup.sh`: SQLite backup with tiered local rotation (daily/weekly/monthly, 7 kept per tier) and optional remote upload via SCP. Uses `VACUUM INTO` for consistent snapshots when the sqlite3 CLI is available.
`purge-cache.sh`: Cloudflare cache purge for all live tenant domains. Discovers domains dynamically from the running app's `/api/tenant-check` endpoint.
`generate-caddyfile.sh`: Generates Caddy reverse proxy config with basic auth, gzip/zstd compression, and cache headers for hashed assets.
Multi-stage Dockerfile:
- Build stage: Install dependencies, run `deno task check` (format + lint + typecheck), run tests, production build
- Runtime stage: Copy only built artifacts and server source. No dev dependencies in production.
- Container runs with `--cpuset-cpus` for resource isolation on the shared bare metal server
- `deno serve --parallel` spawns one worker per visible CPU core
My role was product owner, architect, and technical director. I described what I wanted at varying levels of specificity -- sometimes a high-level feature ("add a tenant readiness check to the deploy script"), sometimes a precise technical requirement ("use the BatchedWriter pattern for route distance caching"). Claude Code wrote all the code.
I reviewed code at a high level: reading through key files, checking architectural decisions, asking for security audits, and requesting code deduplication when I noticed patterns repeating. I did not review individual lines -- that level of verification is not practical at this velocity. For anything security-critical, I would insist on external professional review before scaling traffic.
Over the course of the project (Feb 9-17, 2026):
| Metric | Value |
|---|---|
| Claude Code sessions | 110 |
| Total messages exchanged | 37,319 |
| Tool calls (file reads, writes, bash commands) | 8,247 |
| Model used | Claude Opus 4.6 |
The heaviest day was February 14 (deployment day for the first tenant): 25 sessions, 11,233 messages, 2,525 tool calls.
- Domain expertise is the multiplier. Knowing exactly what features I needed, what the UX should look like, and what the SEO requirements are made every prompt effective. I was not exploring -- I was directing.
- Conversational iteration. Build something, test it, refine it. The feedback loop was measured in minutes, not days.
- Fearless refactoring. When architecture needed to change (e.g., extracting the `BatchedWriter` generic from three separate implementations), it happened in a single session with zero risk of losing context.
- Shell script generation. The deployment scripts are arguably the most impressive output -- complex bash with error handling, rollback logic, progress indicators, and Cloudflare API integration. Writing these by hand would have taken days.
- Technology selection. Choosing Deno, Fresh, SQLite, and the self-hosted services stack.
- Multi-tenant strategy. How tenants are resolved, which markets to launch, domain naming.
- Business logic. Which features matter, what the priority order is, when to ship vs. polish.
- Operational decisions. Server sizing, OSRM data regions, backup strategy, DNS failover plan.
- Security posture. Deciding what is "good enough" for launch vs. what needs professional audit.
| Metric | Value |
|---|---|
| Total source files | 111 |
| Total lines of code | ~23,000 |
| Backend TypeScript/TSX | ~18,000 lines |
| Frontend JS/CSS/HTML | ~2,300 lines |
| Shell scripts (deploy, backup, etc.) | ~1,200 lines |
| Backend test file | 537 lines |
| Git commits | 172 |
| Tagged releases | 130 |
| Database migrations | 8 |
| i18n strings | ~2,200 lines across 6+ languages |
| Seed data | 1,000 Finnish landmarks + 79 road trips (20+ languages) |
Day 1 -- Wednesday, Feb 11: Foundation
- First commit at 20:18 UTC: Deno Fresh boilerplate + front-end map prototype
- Road trip feature backend + SSR pages
- 2 commits, initial scaffolding
Day 2 -- Thursday, Feb 12: Core Features
- Landmark HUD for route playback
- Road trip SSR with landmark integration
- Test database setup
- 8 commits across the day
Day 3 -- Friday, Feb 13: Backend Build-Out
- Large chunks of backend functionality
- Database schema work
- 3 commits (but substantial -- "Big chunk" messages)
Day 4 -- Saturday, Feb 14: First Production Deployment (25 sessions, 2,525 tool calls)
- Docker deployment setup (`deploy.sh`, `Dockerfile`, `tag-release.sh`)
- Afroute.com domain purchased at 11:44:15
- First deployment attempt for Nigeria tenant (v26-02-14.7, 8, 9 -- three iterations to get it right)
- Routing localization, Manrope font integration
- Caddy config generator
- Speed improvements, sitemap generation
- UX work, security fixes
- 41 commits, 29 tagged releases
- First tenant live in production
Day 5 -- Sunday, Feb 15: Multi-Tenant Expansion (14 sessions, 1,373 tool calls)
- Caching improvements, design refinements
- Google AdSense integration (ads.txt, tenant-level ad control)
- SEO and UX improvements (multiple iterations)
- Nigeria-specific data (locations, categories)
- Continent-level search
- Major refactoring session
- Poland (dojazdu.net), Italy, Portugal tenants enabled
- Admin panel styling, self-hosted assets
- Sitemap logic updates
- Translation improvements
- Tenant configuration system refined
- 40 commits, 33 tagged releases
Day 6 -- Monday, Feb 16: SEO & Refactoring Marathon (34 sessions, 1,928 tool calls)
- Cloudflare cache purge integration
- Cookie consent implementation
- Deployment process improvements (deploy script iterations)
- Massive refactoring: 22 consecutive releases labeled "serious refactoring" -- restructuring route components, extracting shared patterns, deduplicating code
- Localized categories, radical refactoring of explore pages
- Localized SEO for Polish, French, Spanish, Danish markets
- SEO overhaul (12 consecutive releases)
- Database handling improvements
- Geocoder queue implementation
- Redirect capability (admin-managed, tenant-scoped)
- Admin panel improvements
- 66 commits, 57 tagged releases -- the most intense day
Day 7 -- Tuesday, Feb 17: Polish & Ship (morning only)
- Multi-process parallelism (`deno serve --parallel`)
- Server-side GA4 analytics via Measurement Protocol -- the final feature
- Legacy configuration additions
- 12 commits, 11 tagged releases
- Feature complete at 05:43 UTC
| Day | Date | Commits | Releases | Sessions | Messages | Focus |
|---|---|---|---|---|---|---|
| 1 | Feb 11 | 2 | 0 | 14 | 2,986 | Foundation |
| 2 | Feb 12 | 8 | 0 | 10 | 2,347 | Core features |
| 3 | Feb 13 | 3 | 0 | 9 | 4,241 | Backend |
| 4 | Feb 14 | 41 | 29 | 25 | 11,233 | First deployment |
| 5 | Feb 15 | 40 | 33 | 14 | 6,687 | Multi-tenant |
| 6 | Feb 16 | 66 | 57 | 34 | 8,496 | SEO & refactoring |
| 7 | Feb 17 | 12 | 11 | 4 | ~1,500 | GA4 & ship |
| Total | | 172 | 130 | 110 | 37,319 | |
The cowboy coding sprint is over. The platform is live and serving visitors. The next phase is stabilization and measurement:
- Monitor: Google Search Console, Bing Webmaster Tools, Google AdSense revenue, GA4 analytics. Watch for indexing behavior, crawl patterns, and ad performance across tenants.
- Data enrichment: Add location data for underserved markets, especially African cities for Afroute.com.
- Market expansion: New African markets are the primary growth vector. The OSRM Africa routing engine is already running.
- API integrations: Where relevant -- hotel booking affiliates, fuel price data, points of interest.
- Scale preparation: The SQLite-based architecture is deliberately simple for now. The batched writer pattern and the database abstraction layer make it straightforward to migrate to PostgreSQL if traffic demands it. Caching layers can be swapped to Redis. The single-server Docker deployment can be replicated behind a load balancer.
The 2017 Symfony/React/PostgreSQL platform remains running and serving tenants that have not yet been migrated. DNS failback is one record change away. This is how you manage risk when you move fast.
I am genuinely astonished by the velocity. Six days from first commit to 17 production tenants across two continents. 130 tagged releases. A full deployment pipeline with automated testing, smoke tests, tenant readiness checks, domain crawling, automatic rollback, and cache purging.
But I understand why this was possible:
- 19 years of domain knowledge. I was not discovering requirements. I was dictating them.
- Solo operation. No code reviews, no pull requests, no meetings, no Jira tickets. Just a human and an AI in a terminal, shipping.
- Known target. The platform concept was proven. I was rebuilding, not inventing.
- Calculated risk tolerance. SQLite in production, no external auth provider, minimal testing. All deliberate tradeoffs appropriate for a side business with a working failback.
In a team context, this velocity is not realistic -- nor should it be. The value of code review, collaborative design, and structured processes is not up for debate. I have managed 10+ person development teams and I know the difference. But for a solo operator who knows their domain, AI-native development changes the calculus of what one person can build.
The lemon in Afroute.com, incidentally, comes from the Fresh framework's logo and the fact that "Afroute" sounds a bit like "a fruit." Sometimes the best brand names come from bad puns.
Jani Tarvainen -- jani.tarvainen@iki.fi -- https://janit.iki.fi
This case study was written on February 17, 2026 -- the same day the last feature was deployed. The RouteMap platform (routemap4) is a private repository. Technical demonstrations and code walkthroughs available on request.