How to Build a Map-Enabled Offline Recommender for Group Decisions

2026-02-23

Build a resilient group recommender that uses offline maps, edge inference, and CRDT sync so teams decide together without connectivity.

Why build a map-enabled offline recommender for groups in 2026?

Decision-making in small groups (where to eat, where to meet, which route to take) is still painfully slow when people rely on chat threads, competing opinions and spotty cellular coverage. As teams and social circles increasingly need resilient, private tools, the ability to run local inference against offline maps is now practical — thanks to affordable NPUs, compact model runtimes, and mature vector-tile tooling in 2026.

This guide shows a hands-on architecture and implementation path to build a group recommender that: (1) uses OSM data packaged for offline use, (2) runs recommendation logic on-device (edge inference), and (3) synchronizes preferences and conflict resolution across peers when connectivity is available. You'll get patterns, commands, code snippets and conflict-resolution strategies you can ship in weeks, not months.

High-level architecture (what components you need)

Design the app around four local subsystems plus an optional cloud sync plane:

  • Offline map and POI store — MBTiles (vector tiles) for map rendering + a compact SQLite POI index for queries.
  • Local recommender engine — small model (or heuristic) that scores POIs against group profiles; runs with ONNX / TFLite / Core ML.
  • Peer sync & conflict resolution — CRDT-based merging (Automerge, Yjs) or operational transforms to converge group preference state without central lock.
  • Sync plane (optional) — light cloud actor to relay updates, host diffs for eventual consistency, and store metadata for analytics.

Why this split?

It separates heavy data (map tiles and POIs) from ephemeral group state (votes, preferences). The recommender can run locally with low latency and private data, while sync and cloud services handle ops that require global visibility or archival.

Step-by-step: From OSM to on-device MBTiles and POI index

You need two datasets offline: vector tiles for rendering and a searchable POI dataset (restaurants, cafes, transit stops) with attributes. Below is a practical pipeline for a city-area extract.

1. Extract an OSM bounding box

Use Geofabrik or Overpass to extract a region. For scripting, Overpass is flexible:

# Overpass API query to get amenities in bounding box (example)
[out:xml][timeout:25];
(
  node[amenity](bbox);
  way[amenity](bbox);
  relation[amenity](bbox);
);
out body; >; out skel qt;

Replace bbox or generate via Overpass Turbo. For large areas, prefer Geofabrik extracts and then filter.
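If you script the extract from an app or CI job, the query string can be templated. A minimal JavaScript sketch (the function name and bbox argument shape are illustrative, not a fixed API):

```javascript
// Build an Overpass QL query for amenity POIs inside a bounding box.
// Overpass bbox coordinate order is (south, west, north, east).
function buildOverpassQuery({ south, west, north, east }, amenity = "") {
  const bbox = `${south},${west},${north},${east}`;
  const filter = amenity ? `[amenity=${amenity}]` : `[amenity]`;
  return [
    `[out:xml][timeout:25];`,
    `(`,
    `  node${filter}(${bbox});`,
    `  way${filter}(${bbox});`,
    `  relation${filter}(${bbox});`,
    `);`,
    `out body; >; out skel qt;`,
  ].join("\n");
}
```

POST the resulting string to an Overpass API endpoint, then convert the response to GeoJSON for the tiling step below.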

2. Build vector tiles (MBTiles) with Tippecanoe

Tippecanoe is still the go-to for compact MBTiles. Generate tiles focused on POIs and lightweight map base layers.

# Convert GeoJSON (extracted/filtered POIs) to MBTiles
tippecanoe -o poi.mbtiles -zg --drop-densest-as-needed --extend-zooms-if-still-dropping --read-parallel poi.geojson

Include a minimal basemap (roads, water) to keep the UI readable offline. Keep tiles limited to zoom <= 16 for device storage efficiency.

3. Build a compact POI SQLite with R-Tree

A simple SQLite database with an R-Tree index gives fast nearest-neighbor queries without a heavy GIS stack. Note that SQLite's R-Tree module keys rows by a 64-bit integer, so use an integer primary key and keep the OSM id in its own column. Recommended schema:

CREATE TABLE poi (
  id INTEGER PRIMARY KEY,  -- shared with poi_index
  osm_id TEXT,
  name TEXT,
  category TEXT,
  lat REAL,
  lon REAL,
  tags TEXT                -- JSON-encoded attributes
);
CREATE VIRTUAL TABLE poi_index USING rtree(id, minLat, maxLat, minLon, maxLon);

Populate poi_index with degenerate boxes for point features (minLat = maxLat = lat, minLon = maxLon = lon). This gives sub-millisecond spatial prefilters on mobile and Raspberry Pi class devices.
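The R-Tree gives you a cheap bounding-box prefilter; true nearest-neighbor ordering still needs a refinement pass by great-circle distance. A JavaScript sketch, assuming candidate rows were already fetched with a WHERE clause against poi_index (function names are illustrative):

```javascript
// Great-circle distance in km between two {lat, lon} points.
function haversineKm(a, b) {
  const R = 6371, rad = (d) => (d * Math.PI) / 180;
  const h = Math.sin(rad(b.lat - a.lat) / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(rad(b.lon - a.lon) / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Refine R-Tree bounding-box candidates: keep rows within radiusKm of
// `center`, sorted nearest-first, capped at `limit`.
function nearestPOIs(candidates, center, radiusKm, limit = 10) {
  return candidates
    .map((poi) => ({ poi, distKm: haversineKm(center, poi) }))
    .filter((c) => c.distKm <= radiusKm)
    .sort((a, b) => a.distKm - b.distKm)
    .slice(0, limit);
}
```

The bounding-box prefilter keeps the candidate set small enough that this in-memory refinement is negligible even on low-end devices.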

Local recommendation logic and edge inference

A recommender for groups often needs to combine: individual preferences, contextual signals (time, weather), spatial proximity, and group-specific policies (e.g., cost limits). In offline mode you must make the model compact and deterministic.

Design choices (2026 considerations)

  • Model size: Use models under 1–5 MB for mobile devices unless your device has an NPU (e.g., Raspberry Pi + AI HAT+2 where 50 MB models are feasible).
  • Runtime: Convert to TensorFlow Lite, ONNX Runtime Mobile or Core ML (iOS). ONNX is widely supported across edge runtimes in 2026.
  • Hybrid approach: Combine a deterministic scoring function (distance, category match) with a tiny neural network for personalization. This is robust offline and explainable to users.

Example: compact scoring function (JS)

// Great-circle distance in km between two {lat, lon} points
function haversine(a, b) {
  const R = 6371, rad = d => d * Math.PI / 180;
  const h = Math.sin(rad(b.lat - a.lat) / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(rad(b.lon - a.lon) / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

const average = xs => xs.reduce((s, x) => s + x, 0) / xs.length;

function scorePOI(poi, userProfiles, context) {
  // distance score (0..1): 1 at the group's location, 0 at maxRadiusKm
  const distKm = haversine(context.location, {lat: poi.lat, lon: poi.lon});
  const distanceScore = Math.max(0, 1 - distKm / context.maxRadiusKm);

  // category match (averaged across the group's preference vectors)
  const catScore = average(userProfiles.map(u => u.pref[poi.category] || 0));

  // popularity fallback from local tags, capped at 1
  const popScore = Math.min(1, (poi.tags.popularity || 1) / 10);

  // weighted sum; tune weights for your product
  return 0.5 * catScore + 0.3 * distanceScore + 0.2 * popScore;
}

This runs instantly using local POI rows and user preference vectors. For an ML add-on, offload the final combination to a tiny TFLite model.
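To turn a per-POI scorer into a shortlist for the UI, a small generic ranking helper is enough (illustrative utility, not part of any library):

```javascript
// Rank candidates by a scoring function and keep the top n.
// scoreFn is any (poi) => number, e.g. a closure over a scorePOI-style scorer.
function rankTopN(pois, scoreFn, n = 5) {
  return pois
    .map((poi) => ({ poi, score: scoreFn(poi) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, n);
}
```

Keeping ranking separate from scoring makes it easy to swap the heuristic scorer for an ML re-ranker later without touching the UI code.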

Converting a model to edge runtime (example)

Train a small model centrally (e.g., logistic regressor or 2-layer MLP) and export to ONNX, then package the ONNX file with the app. ONNX Runtime Mobile is supported on Android and Linux devices in 2026.

# Example: convert a small PyTorch model to ONNX
import torch

input_dim = 8  # e.g. number of concatenated score features
model = torch.nn.Sequential(      # stand-in for your trained 2-layer MLP
    torch.nn.Linear(input_dim, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
dummy = torch.randn(1, input_dim)
torch.onnx.export(model, dummy, "recommender.onnx", opset_version=14)

Peer sync, merges and conflict resolution

Group decisions need fast local coordination: people vote, suggest, veto. The sync architecture must converge without losing actions when devices reconnect. In 2026, CRDTs are the recommended approach for this offline-first UX.

Why CRDTs?

CRDTs (Conflict-free Replicated Data Types) let each peer update state and later merge deterministically. Libraries like Automerge and Yjs are lightweight, work in browsers and Node, and are production-ready in 2026.

State model for a group decision

  • Group metadata: participants, meeting window
  • Candidate list: POI ids with timestamps
  • Votes: per-participant ranked choices (CRDT map)
  • Constraints: cost cap, max travel time

Simple Automerge example (JS)

import * as Automerge from 'automerge'

let doc = Automerge.from({candidates: {}, votes: {}})

// Add a candidate
function addCandidate(doc, poi) {
  return Automerge.change(doc, d => { d.candidates[poi.id] = poi })
}

// Cast a vote
function castVote(doc, userId, ranking) {
  return Automerge.change(doc, d => { d.votes[userId] = ranking })
}

// Merge two replicas when peers sync (docA and docB are two peers' copies of the same group doc)
let merged = Automerge.merge(docA, docB)

Automerge handles merging and ordering without conflicts. Use vector clocks or Automerge's internal causal history to show who voted when.

Transport options for syncing

  • Local peer-to-peer: WebRTC over local Wi-Fi, Bluetooth LE, or ad-hoc Wi-Fi Direct for devices in proximity.
  • Gossip relay: Local devices broadcast updates using mDNS + HTTP/GRPC on LAN for quick discovery.
  • Cloud relay: Pub/Sub endpoint for devices that leave the local network; acts as eventual store-and-forward.

For cross-OS compatibility, implement a minimal WebSocket relay as a cloud fallback and use peer discovery for local fast-sync.
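The relay's store-and-forward semantics can be modeled independently of transport. A minimal in-memory sketch (class and method names are hypothetical; a real deployment would wrap this in the WebSocket relay mentioned above):

```javascript
// Minimal store-and-forward relay core: peers push opaque update blobs,
// and reconnecting peers pull everything after their last-seen sequence
// number. Transport (WebSocket/HTTP) is deliberately omitted.
class RelayStore {
  constructor() {
    this.groups = new Map(); // groupId -> array of { seq, blob }
  }

  push(groupId, blob) {
    const log = this.groups.get(groupId) ?? [];
    const seq = log.length + 1;
    log.push({ seq, blob });
    this.groups.set(groupId, log);
    return seq; // peers remember this as their last-seen sequence
  }

  pullSince(groupId, lastSeenSeq) {
    return (this.groups.get(groupId) ?? []).filter((u) => u.seq > lastSeenSeq);
  }
}
```

Because CRDT merges are idempotent, it is safe for the relay to deliver a blob more than once; the sequence numbers only bound how much history a peer re-downloads.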

Conflict resolution policies: pragmatic, explainable, and auditable

Even with CRDTs, you need decision policies to pick a winner when scores are close. Consider hybrid policies:

  • Weighted average — weight votes by presence (if someone is co-located, their vote gets a slight boost), past trust score, or explicit role (organizer).
  • Majority then proximity — choose majority favorite; if tie, prefer the closest POI to the group's centroid.
  • Fallback rules — if conflict persists, fall back to a randomized but explainable choice (a coin flip with a seeded RNG based on group ID and timestamp).

Make the rule visible to users. In offline scenarios, transparent rules reduce frustration and support reproducibility when devices reconnect.

Usability patterns for offline-first group decision apps

Great UX prevents users from feeling like they're using a degraded experience.

  • Immediate feedback — show local acknowledgments for votes and suggestions with optimistic UI updates.
  • Network status — clearly indicate offline mode, last sync timestamp and pending changes.
  • Explainability — show why an item was recommended (distance, votes, dietary match) to build trust in local inference.
  • Progressive enhancement — when online, show cloud-only signals like live popularity; when offline, indicate staleness gracefully.

Storage, performance and device targets (2026)

In 2026, edge hardware spans ultra-low-power wearables to Raspberry Pi class devices with NPUs. Target three tiers:

  1. Phone-class — 50–200 MB storage for map tiles + SQLite, TFLite model (~1–5 MB), runs on Android/iOS.
  2. Lightweight IoT — single-board computers (Raspberry Pi 4/5). With AI HAT+2 you can offload heavier models and batch re-ranking.
  3. Shared hub — an optional Raspberry Pi hub that stores a region’s MBTiles and offers local API for low-end devices to query via local network.

Benchmarks: SQLite + R-Tree delivers sub-10 ms nearest-neighbor queries on phones and sub-2 ms on a Raspberry Pi 5. Vector tile rendering (MapLibre) with MBTiles performs well offline with pre-cached styles.

Sync efficiency and bandwidth minimization

When connectivity is available, sync should be incremental and small:

  • Sync only state diffs (Automerge patches or delta updates) instead of full blobs.
  • Compress and batch updates; use HTTP/2 or gRPC for efficient transport.
  • For map updates, fetch small POI diffs or tile deltas. Use region-specific MBTiles replacements or tile diffs when possible.

Privacy, licensing and trust

Offline-first designs reduce PII exposure. Keep user preference vectors local and only send anonymized aggregates to the cloud if needed.

  • Licensing — OSM data is ODbL. Retain attribution and follow share-alike provisions when redistributing derived datasets (MBTiles). Keep a visible attribution string.
  • Security — sign MBTiles and POI manifests so devices can verify authenticity after syncs. Use TLS for cloud relays and authenticated peer connections for private groups.

Example deployment scenario: Where2Meet (prototype)

Imagine a small team using a mobile app plus a Raspberry Pi hub in their office. Flow:

  1. Hub preloads city MBTiles and a POI SQLite for the office radius (5 km).
  2. Team members open the app; local discovery connects to the hub via LAN; hub delivers tiles and POI rows over HTTP.
  3. Members rank favorites; the client runs the recommender locally and displays top 5. Votes are stored in Automerge docs and synced over LAN.
  4. If someone leaves the office and votes on cellular, the app pushes diffs to the cloud relay; when they return, local devices merge the updates automatically.

This hybrid approach balances offline resilience with the convenience of cloud relay for remote participants.

Operational checklist before shipping

  • Define map area size and expected MBTiles storage per region.
  • Choose runtimes (TFLite / ONNX / Core ML) and test model latency on target devices.
  • Implement Automerge or Yjs for group state and test merge scenarios: split-brain, rejoin with conflicting votes, stale updates.
  • Design visible conflict-resolution policies and surface them in UI for transparency.
  • Automate MBTiles and POI builds and include attribution metadata to comply with ODbL.
  • Test real-world offline workflows: airplane mode, intermittent Wi‑Fi, Bluetooth-only mesh.

Trends enabling this architecture (late 2025–2026)

By late 2025 and into 2026 we've seen practical trends that directly enable this architecture:

  • Wider availability of affordable NPUs (e.g., Raspberry Pi AI HAT+2) — making on-device re-ranking and small transformer-like models feasible.
  • Standardization of portable model formats (ONNX continues to gain runtime support) which simplifies packaging models for multiple platforms.
  • Maturation of CRDTs and offline-first frameworks; product teams increasingly ship collaborative apps that work when disconnected.
  • Vector tile tooling and MBTiles ecosystems have stabilized — MapLibre and tippecanoe remain core components for offline maps.

Expect more native OS-level support for local peer networking and even tighter NPU integration in phones through 2026–2027. Design your app to accept richer local signals (on-device embeddings, sensor-assisted context) without leaking PII.

Quick troubleshooting cheatsheet

  • No tiles shown: Verify MBTiles path, correct style URL, and that MapLibre is pointed at the MBTiles tile server or embedded file API.
  • Slow nearest-neighbor queries: Confirm SQLite R-Tree exists and that queries use bounding boxes for prefiltering.
  • Automerge conflicts not merging: Ensure all peers apply patches and that you use Automerge.merge rather than raw replacement.
  • Model latency high: Quantize the model and test ONNX Runtime CPU+NPU delegates; prefer int8 or dynamic range quantization for mobile.

Actionable starting template (repo checklist)

To get started quickly, scaffold a repo with these folders:

  • /data — scripts to generate MBTiles and POI SQLite (tippecanoe, osmconvert)
  • /server — simple local tile server and optional cloud relay (Node + Express + WebSocket)
  • /client — React Native or PWA with MapLibre, Automerge, and ONNX/TFLite runtime wrapper
  • /models — small ONNX/TFLite models and conversion scripts
  • /docs — ODbL attribution and sync protocol design

Summary — Why this approach wins for groups

Combining offline maps with local inference and robust CRDT sync enables group decision apps that are fast, private and resilient. You get deterministic recommendations, low-latency UX, and graceful conflict handling — all essential for teams that need to decide in the real world, not just on the cloud.

Next steps & call to action

Ready to build a prototype? Start by extracting a small OSM area, create MBTiles with tippecanoe, and wire a simple Automerge document for votes. If you want a jumpstart, clone a starter repo with a MapLibre PWA, a sample SQLite POI store, and an ONNX re-ranker — iterate quickly and test on-device.

If you’d like, I can produce a compact starter repo (MapLibre + MBTiles + Automerge + ONNX runtime) tailored to your target platform (Android, iOS, Raspberry Pi). Tell me your target devices and I’ll outline the repo and scripts to generate your first offline region.
