Building an offline-first maritime calculator (and picking Tauri over Electron)
Notes from Draft Survey — a Next.js 16 + TypeScript tool that computes bulk-carrier cargo from draft readings, hydrostatic tables, and trim corrections. A zero-dependency calculation engine, an offline-only posture, and the distribution choice that made it all portable.
The problem
A draft survey is how ships' officers and marine surveyors verify how much bulk cargo actually made it onto a vessel. They read the ship's draft — the depth below the waterline — at six points on the hull before and after loading, then back out the cargo weight from the displacement change, after applying corrections for trim, water density, and onboard consumables. Get it wrong and the charter party, the shipper, and the receiver all have a reason to argue.
The existing tooling is Excel. Every chief officer carries their own spreadsheet, their own hydrostatic tables pasted into a tab, and their own accumulated superstitions about cell references. It works. It is also fragile: one broken formula on a rolling ship at 2am produces a number that sounds plausible and isn't.
Draft Survey is an attempt to replace that spreadsheet with a focused tool. The constraints:
- Works offline. Deck operations happen in ports with spotty connectivity and on ships where internet access is never a guarantee. No accounts, no sync, no cloud dependency.
- Transparent calculations. Every correction has to be inspectable. An officer needs to trust the number, and auditors need to reconstruct it.
- Deck-usable UI. Large inputs, high-contrast dark mode for sunlight or night bridge, a decimal keypad on touch devices, input ranges validated against the hydrostatic table.
- Portable data. One vessel's data should move with the officer between vessels, ports, and machines without ceremony.
Why the calculation engine is a zero-dependency TypeScript module
The entire calculation lives in src/lib/hydro.ts. No math library, no physics package, no external hydrostatic service. Two reasons:
- Auditability. The functions are pure, the inputs are typed, and a reviewer can read the whole pipeline in one sitting. A dependency would add a surface I have to vouch for every time someone asks why the cargo figure is what it is.
- Portability. Pure TypeScript moves anywhere — the Next.js app today, a Tauri desktop binary tomorrow, a Node CLI for batch verification if someone needs it. No dependency re-evaluation step when the runtime changes.
The module exposes two things: an interpHydro(draft) function that linearly interpolates between hydrostatic table rows for a given draft, and a calculateSurvey(input) function that takes draft readings, water density, ship dimensions, and mark offsets, then runs the five-stage correction pipeline.
The interpolator is deliberately simple. Out-of-range drafts clamp to boundary values instead of extrapolating. Hydrostatic tables are published by naval architects for a specific loading range; extrapolating past them is a way to get a confident wrong answer. Clamping surfaces that the reading is outside the trusted range and lets the UI flag it.
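The clamping interpolator can be sketched in a few lines. Field names and the row shape here are illustrative, not the exact contents of src/lib/hydro.ts:

```typescript
// Illustrative shape of one hydrostatic table row.
interface HydroRow {
  draft: number;        // m
  displacement: number; // t
  tpc: number;          // tonnes per centimeter of immersion
  lcf: number;          // m, longitudinal center of flotation from midships
  mctc: number;         // t·m per cm, moment to change trim
}

// Linearly interpolate between rows of a table sorted by draft.
// Out-of-range drafts clamp to the boundary row instead of
// extrapolating, so the UI can flag an untrusted reading.
function interpHydro(table: HydroRow[], draft: number): HydroRow {
  const first = table[0];
  const last = table[table.length - 1];
  if (draft <= first.draft) return { ...first, draft };
  if (draft >= last.draft) return { ...last, draft };

  // Find the bracketing rows and interpolate each column linearly.
  let i = 1;
  while (table[i].draft < draft) i++;
  const lo = table[i - 1];
  const hi = table[i];
  const t = (draft - lo.draft) / (hi.draft - lo.draft);
  const lerp = (a: number, b: number) => a + t * (b - a);
  return {
    draft,
    displacement: lerp(lo.displacement, hi.displacement),
    tpc: lerp(lo.tpc, hi.tpc),
    lcf: lerp(lo.lcf, hi.lcf),
    mctc: lerp(lo.mctc, hi.mctc),
  };
}
```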
The five-stage correction pipeline
The calculateSurvey function is a series of named corrections applied in order. Each stage takes the running state and returns a modified one. The stages:
Mark correction
The ship's draft marks aren't at the perpendiculars — they're wherever the shipyard could paint them, typically offset a few meters from the forward and aft perpendiculars. The first correction takes the apparent trim and the mark offsets and produces corrected drafts at the actual perpendiculars.
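A hedged sketch of the idea, with mark offsets carrying their own sign and trim taken as positive by the stern. Sign conventions vary between survey procedures, so treat the signs here as assumptions rather than the engine's exact code:

```typescript
// Correct an observed draft reading from the mark position to the
// perpendicular. The offset is the signed distance from the mark to
// its perpendicular; lbm is the length between the draft marks.
// Sign conventions are assumptions — check the vessel's procedure.
function markCorrection(
  observedDraft: number, // m, read at the painted mark
  markOffset: number,    // m, signed mark-to-perpendicular distance
  apparentTrim: number,  // m, aft minus forward draft at the marks
  lbm: number            // m, length between marks
): number {
  // Similar triangles: the waterline slope is trim / lbm.
  return observedDraft + (apparentTrim * markOffset) / lbm;
}
```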
First trim correction
Once the mean draft is established, the hydrostatic table gives displacement, TPC (tonnes-per-centimeter), LCF (longitudinal center of flotation), and MCTC (moment-to-change-trim-one-centimeter). The first trim correction accounts for LCF: the flotation center isn't at midships, so trim shifts displacement even at constant mean draft. The formula in the engine: firstTrimCorrection = (trim * lcf * 100 * tpc) / lbp.
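The same formula as a typed function, with units in comments (the factor 100 converts meters to centimeters to match TPC):

```typescript
// First trim correction, per the formula quoted above:
// (trim × LCF × 100 × TPC) / LBP.
function firstTrimCorrection(
  trim: number, // m, positive by the stern
  lcf: number,  // m, signed LCF distance from midships
  tpc: number,  // t/cm
  lbp: number   // m, length between perpendiculars
): number {
  return (trim * lcf * 100 * tpc) / lbp;
}
```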
Nemoto second trim correction
Trim also affects MCTC itself nonlinearly. The second correction is Nemoto's method — sample MCTC at ±0.5 m around the quarter-mean draft, take the difference, and apply. This is the correction that matters most when the ship is heavily trimmed, and the one most field spreadsheets either get wrong or skip.
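A sketch using the common field formula, (trim² × 50 × ΔMCTC) / LBP, where ΔMCTC is the MCTC difference across the ±0.5 m window. The sampling window and signs are assumptions standing in for the engine's exact code:

```typescript
// Nemoto's second trim correction. mctcPlus and mctcMinus are MCTC
// sampled 0.5 m above and below the quarter-mean draft; their
// difference approximates how MCTC changes with immersion.
function secondTrimCorrection(
  trim: number,      // m
  mctcPlus: number,  // t·m/cm at quarter-mean draft + 0.5 m
  mctcMinus: number, // t·m/cm at quarter-mean draft − 0.5 m
  lbp: number        // m, length between perpendiculars
): number {
  return (trim * trim * 50 * (mctcPlus - mctcMinus)) / lbp;
}
```

Because the correction scales with trim squared, it is negligible on an even keel and dominant when the ship is heavily trimmed — which matches where spreadsheets tend to go wrong.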
Density correction
Hydrostatic tables assume a reference water density (1.025 t/m³, salt water). Actual port water can be anywhere from near-fresh to full salt water. A multiplicative correction scales the displacement by the ratio of measured to reference density.
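As code, the correction is a one-liner:

```typescript
// Scale table displacement by the ratio of measured dock-water
// density to the table's reference density (salt water, 1.025 t/m³).
function densityCorrection(
  displacement: number,      // t, trim-corrected displacement
  measuredDensity: number,   // t/m³, from a hydrometer in the dock water
  referenceDensity = 1.025   // t/m³, the table's assumption
): number {
  return (displacement * measuredDensity) / referenceDensity;
}
```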
Net cargo
Finally, subtract everything that isn't cargo — fuel oil, diesel oil, fresh water, ballast, the ship's own constants, and the light-ship weight. What's left is what the cargo holds contain.
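A sketch of the final stage, with an illustrative deductibles shape (the real engine's field names may differ):

```typescript
// Everything aboard that isn't cargo. Field names are illustrative.
interface Deductibles {
  fuelOil: number;    // t
  dieselOil: number;  // t
  freshWater: number; // t
  ballast: number;    // t
  constants: number;  // t, ship's constant (stores, spares, unknowns)
  lightShip: number;  // t, light-ship weight
}

// Net cargo = corrected displacement minus all deductibles.
function netCargo(correctedDisplacement: number, d: Deductibles): number {
  return (
    correctedDisplacement -
    d.fuelOil - d.dieselOil - d.freshWater -
    d.ballast - d.constants - d.lightShip
  );
}
```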
Presenting the pipeline as discrete named stages instead of one large expression is the single most important thing the engine does. Every intermediate number is available, which means the UI can show the full calculation sheet and an auditor can check any line.
Offline-first, and the Tauri decision
The roadmap is to ship this as a desktop app — Windows, macOS, Linux. Two realistic choices: Electron or Tauri. The tradeoffs matter, so they're worth naming.
- Electron bundles Chromium with the app. 100+ MB binaries. Memory footprint in the hundreds of megabytes even at idle. The ecosystem is enormous and the path from "Next.js app" to "distributable binary" is well-worn.
- Tauri uses the system webview and a Rust backend. Binaries measured in the single-digit megabytes, startup noticeably faster, battery life respected. Native SQLite integration. The ecosystem is smaller.
For a single-purpose tool that an officer runs on a ship's laptop — possibly an older one, possibly with thermal limits, definitely not plugged in — binary size, memory, and battery matter more than ecosystem breadth. The calculation engine is pure TypeScript and moves across either backend unchanged. The only Tauri-specific piece is the SQLite adapter for vessel and voyage storage, which is a known-solved problem.
The portable data story falls out naturally. One vessel's database is a single .db file in the app-data directory. Backup is copy the file. Moving to a new machine is copy the file. This is the kind of story a chief officer can follow without calling IT.
Design decisions in the UI
Ship-centric data model
The primary record isn't the survey — it's the ship. A ship owns its hydrostatic table, its draft-mark offsets (which differ between Initial, Interim, and Final surveys because marks can be repainted or obstructed), its light-ship weight, and its constants. A voyage attaches to a ship. A survey attaches to a voyage. The hierarchy reflects how the data is actually reused: one vessel, many voyages, many surveys per voyage.
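The hierarchy sketches out naturally in TypeScript. These types are illustrative, not the app's actual schema:

```typescript
type SurveyStage = "initial" | "interim" | "final";

interface MarkOffsets {
  forward: number; // m, mark distance from the forward perpendicular
  aft: number;     // m
  midship: number; // m
}

// The ship is the primary record: it owns the table, offsets,
// light-ship weight, and constants that every survey reuses.
interface Ship {
  name: string;
  lbp: number;       // m, length between perpendiculars
  lightShip: number; // t
  constants: number; // t, ship's constant
  hydroTable: { draft: number; displacement: number }[]; // abridged
  // Per-stage offsets, since marks can be repainted or obstructed.
  markOffsets: Record<SurveyStage, MarkOffsets>;
}

interface Survey {
  stage: SurveyStage;
  drafts: number[];     // six readings: fwd/mid/aft, port and starboard
  waterDensity: number; // t/m³
}

// A voyage attaches to a ship; surveys attach to the voyage.
interface Voyage {
  shipName: string;
  surveys: Partial<Record<SurveyStage, Survey>>;
}
```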
Three-stage workflow
Every voyage has three surveys — Initial (before loading), Interim (mid-load, optional), Final (after loading). Each stage has its own mark-offset overrides, because the officer may have spotted an obstructed mark and swapped for a known offset. Modeling stages as first-class entities, not enum fields on a generic survey row, keeps the per-stage overrides clean.
Import from Excel, because that's where the tables live
Hydrostatic tables come out of the vessel's stability booklet, transcribed into Excel decades ago and never re-transcribed since. The import path uses xlsx to read them directly. Asking officers to retype a 200-row table would be a non-starter.
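With SheetJS, XLSX.utils.sheet_to_json(sheet, { header: 1 }) yields the sheet as arrays of cell values; mapping those into typed rows might look like the sketch below. The column order is an assumption, since real booklets vary:

```typescript
interface HydroRow {
  draft: number;        // m
  displacement: number; // t
  tpc: number;          // t/cm
  lcf: number;          // m
  mctc: number;         // t·m/cm
}

// Turn raw spreadsheet rows (arrays of cell values, as xlsx's
// sheet_to_json returns with header: 1) into typed hydrostatic rows.
// Header rows and incomplete rows coerce to NaN and get filtered out.
function parseHydroRows(rows: unknown[][]): HydroRow[] {
  return rows
    .map((r) => r.map(Number))
    .filter((r) => r.length >= 5 && r.every((n) => Number.isFinite(n)))
    .map(([draft, displacement, tpc, lcf, mctc]) => ({
      draft, displacement, tpc, lcf, mctc,
    }))
    .sort((a, b) => a.draft - b.draft); // the interpolator assumes sorted rows
}
```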
Deck-usable input treatment
The UI details aren't glamorous but they decide whether the tool gets used: oversized tap targets, high-contrast dark mode for bridge use, the decimal keypad on touch devices, validation that flags a draft reading outside the hydrostatic range before the number propagates through three corrections and produces nonsense. On a pitching deck at night, these are the things that separate a tool people trust from one they quietly go back to Excel for.
What I'd do differently
- Ship the Tauri build sooner. Developing inside a Next.js dev server is comfortable; it also lets me pretend offline-first is already handled while the app still fetches from localhost. Bundling for Tauri flushes out the assumptions.
- Golden-test the calculation. The right way to gain confidence in five chained corrections is a golden-file suite of real surveys with known cargo figures. The calculation engine is pure functions; the test harness is trivial to add; the confidence it buys is disproportionate.
- Plan the migration story from day one. Offline desktop tools accumulate schema changes the same way web apps do. Ship a migration runner in the Tauri build that handles "user had version 1, is now on version 3, .db was never opened in version 2". This is a mundane feature and the absence of it is what kills desktop tools in year two.
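The golden-file idea is a tiny harness over the pure engine. Here calculateSurvey's shape is an assumption standing in for src/lib/hydro.ts:

```typescript
// One golden case: a real survey with an agreed cargo figure.
interface GoldenCase {
  name: string;
  input: unknown;        // the full survey input as recorded in the field
  expectedCargo: number; // t, the figure from the signed survey report
  toleranceT: number;    // t, acceptable deviation
}

// Run every case through the engine and collect any drift.
function runGolden(
  cases: GoldenCase[],
  calculate: (input: unknown) => { netCargo: number }
): string[] {
  const failures: string[] = [];
  for (const c of cases) {
    const got = calculate(c.input).netCargo;
    if (Math.abs(got - c.expectedCargo) > c.toleranceT) {
      failures.push(`${c.name}: expected ${c.expectedCargo}, got ${got}`);
    }
  }
  return failures;
}
```

Because the engine is pure, the fixtures are just JSON files, and a regression in any of the five stages shows up as a named failing survey rather than a silently different cargo figure.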
Closing
Draft Survey is a small project whose interesting problems aren't small. It's domain engineering — the hard part was understanding the calculation well enough to name its stages, not choosing a framework. It's offline-first — the hard part was committing to it instead of pretending one more cloud feature wouldn't hurt. It's UI for a specific population — the hard part was respecting that population's actual conditions rather than optimizing for my desk.
The senior-engineering instinct I keep coming back to: the interesting decisions are usually the ones that constrain the system on purpose. Zero-dependency engine. Offline-only. Tauri, not Electron. Ship as the primary record. The constraints are the product.
- Framework
- Next.js 16 (App Router), React 19, TypeScript
- Styling
- Tailwind CSS 4
- Calculation engine
- Zero-dependency TypeScript module (src/lib/hydro.ts)
- Data import
- xlsx — hydrostatic tables from stability booklets
- Target distribution
- Tauri desktop bundles — Windows, macOS, Linux
- Persistence (planned)
- SQLite, portable .db per installation
- Audience
- Chief officers and marine surveyors on bulk carriers
- Code
- github.com/raimieltan/draft-survey
Looking for senior engineers who build for real users?
I'm currently open to senior / staff backend and full-stack roles, remote across time zones.