Real-Time Notifications with Firebase Realtime Database: Deduplication, Retention, and Seen-By Tracking
Firebase Realtime Database gives you instant data sync out of the box. Subscribe to a path, get updates. It sounds simple, and for a single developer testing against one client, it absolutely is.
Then I deployed to production.
Multiple fleet operators checking notifications simultaneously. Alerts duplicating across date boundaries. Firebase listeners leaking memory on every tab navigation and accumulating duplicate subscriptions until the notification count showed double what it should. Operators calling in to say they had "hundreds of notifications" after navigating around the app a few times. I had built something that worked perfectly in development and fell apart the moment real people used it in ways I hadn't tested.
This article covers what I had to build to make it actually production-ready - a 3-day retention window, client isolation, duplicate prevention, atomic multi-user seen tracking, and the startup timing problem that forced me into exponential backoff.
The data structure - the query is the structure
The most important architectural decision was how to organise the Firebase tree. I spent time on this upfront because Firebase doesn't give you query flexibility the way a relational database does - the structure you choose largely determines which queries are even possible.
notifs/
├── 20260325/ # Date: YYYYMMDD
│ └── client_abc/ # Client-specific bucket
│ ├── last_update # Timestamp for change detection
│ └── vehicle_123/ # Category (vehicle/device)
│ └── alert_key/ # Individual alert
│ ├── alert_time
│ ├── alert # Alert type
│ ├── deviceNr
│ └── seen_by[] # Array of user UIDs
├── 20260324/
│ └── client_abc/ ...
└── 20260323/
└── client_abc/ ...
Date nodes at the top level give you a natural 3-day retention window - query three date paths, get three days of alerts. Client ID buckets under each date give you client isolation - each client's app only reads from its own subtree, no server-side filtering needed. No complex Firestore queries, no index configuration. The structure itself is the query.
This design decision also made cleanup simple. Deleting a date node deletes everything under it. Retention doesn't require a background job scanning for old records - it's just removing a top-level path.
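A minimal sketch of what that cleanup can look like - pruneOldDates is a hypothetical name, and in production this would belong in a scheduled server-side job (a Cloud Function, say) rather than the client:

private async pruneOldDates() {
  const toDateKey = (d: Date) => d.toISOString().slice(0, 10).replace(/-/g, "");
  const cutoff = new Date();
  cutoff.setDate(cutoff.getDate() - 2); // keep today plus the two prior days
  const oldestKeptKey = toDateKey(cutoff);

  const snapshot = await this.db.database.ref("notifs").once("value");
  snapshot.forEach(dateNode => {
    // YYYYMMDD keys compare correctly as plain strings
    if (dateNode.key && dateNode.key < oldestKeptKey) {
      // One remove() takes the entire day - every client, every alert - with it
      this.db.database.ref(`notifs/${dateNode.key}`).remove();
    }
  });
}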
The startup problem - why a setTimeout wasn't enough
My first approach to initialising notifications was a 2-second setTimeout after login. Give the auth flow time to complete, then start loading. It worked in development. Then users on slow 3G connections started reporting no notifications at all - the client ID hadn't resolved by the time the notification system tried to read it, it got null, and the whole thing silently failed to initialise.
A fixed timeout doesn't adapt to conditions. I needed something that would keep trying until the client ID was actually available:
private async startNotificationRetryLoop() {
  if (this.isRetrying) return;
  this.isRetrying = true;

  let attempts = 0;
  let currentDelay = this.retryDelay;

  const tryGetNotifications = async () => {
    try {
      const clientId = this.authService.returnClientID();
      if (clientId) {
        this.clientId = clientId;
        await this.getNotifs(clientId);
        this.setupPersistentNotificationListener();
        this.isRetrying = false;
        return;
      }

      if (attempts >= this.maxRetryAttempts) {
        this.isRetrying = false;
        this.toastController.presentToast(
          "Unable to load notifications. Please refresh the app."
        );
        return;
      }

      attempts++;
      // Jitter spreads out retries when many users log in at the same moment
      const jitter = Math.random() * 500;
      const delayWithJitter = Math.min(currentDelay + jitter, 30000);
      // Track the timeout ID so cleanup can cancel a pending retry
      this.retryTimeouts.push(setTimeout(tryGetNotifications, delayWithJitter));
      currentDelay *= 1.5;
    } catch (error) {
      attempts++;
      if (attempts < this.maxRetryAttempts) {
        this.retryTimeouts.push(setTimeout(tryGetNotifications, currentDelay));
        currentDelay *= 1.5;
      } else {
        this.isRetrying = false;
        this.toastController.presentToast("Failed to load notifications");
      }
    }
  };

  await tryGetNotifications();
}
The jitter - Math.random() * 500 - prevents a thundering herd problem if multiple users log in simultaneously. Without it, everyone who logged in during the same second would retry at exactly the same intervals, hitting Firebase in coordinated bursts. The 30-second cap ensures we never wait unreasonably long. The 1.5x multiplier is less aggressive than the typical 2x - on mobile networks the client ID usually resolves within a few seconds, so a gentle backoff finds it faster than a steep one that overshoots quickly into multi-second waits.
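To make the difference concrete, here's the raw schedule (before jitter) each multiplier produces - assuming a 2-second starting delay, since the actual retryDelay value isn't shown here:

// Illustrative only: the backoff schedule before jitter, assuming a
// 2000ms starting delay and the same 30-second cap as the retry loop.
function schedule(multiplier: number, start = 2000, steps = 6): number[] {
  const delays: number[] = [];
  let delay = start;
  for (let i = 0; i < steps; i++) {
    delays.push(Math.min(delay, 30000));
    delay *= multiplier;
  }
  return delays;
}

console.log(schedule(1.5)); // [2000, 3000, 4500, 6750, 10125, 15187.5]
console.log(schedule(2));   // [2000, 4000, 8000, 16000, 30000, 30000]

Six attempts with the 1.5x multiplier never wait more than about 15 seconds per step; the 2x curve hits the 30-second cap by the fifth attempt.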
The consequence of getting this wrong was operators with no notifications. The consequence of getting it right was a startup sequence that adapts to whatever network conditions the user is actually on.
The 3-day retention window
Fleet operators need recent alerts - yesterday's speeding incidents, last night's after-hours movement triggers. They don't need alerts from three months ago. The system fetches exactly three days on load:
async getNotifs(clientId) {
  this.vehicleArray = [];
  this.newVArray = [];
  this.processedDates.clear();

  // YYYYMMDD, matching the top-level date nodes in the tree
  const toDateKey = (d: Date) => d.toISOString().slice(0, 10).replace(/-/g, "");

  const currentDate = new Date();
  const formattedCurrentDate = toDateKey(currentDate);

  const yesterday = new Date(currentDate);
  yesterday.setDate(currentDate.getDate() - 1);
  const formattedYesterday = toDateKey(yesterday);

  const twoDaysAgo = new Date(currentDate);
  twoDaysAgo.setDate(currentDate.getDate() - 2);
  const formattedTwoDaysAgo = toDateKey(twoDaysAgo);

  // Historical days get a one-off fetch; today gets a persistent listener
  const dataArray = [formattedYesterday, formattedTwoDaysAgo];
  await this.fetchDataAndLog(dataArray, clientId);
  await this.createListener(formattedCurrentDate, clientId);
}
Yesterday and two days ago are fetched once with ref.once("value") - a single read, done. Today gets a persistent listener that fires on every change. This separation is deliberate: re-fetching yesterday's data on every Firebase update would be wasteful, and more importantly it would trigger the duplicate processing I'd worked hard to prevent.
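fetchDataAndLog isn't shown in this article, but its shape follows from the structure - one once("value") read per historical date, flattened into the same merge path. A minimal reconstruction, with the flattening simplified:

// Reconstruction - the real fetchDataAndLog isn't shown here.
// One once("value") read per historical date; no listener left behind.
private async fetchDataAndLog(dates: string[], clientId: string) {
  for (const date of dates) {
    const snapshot = await this.db.database
      .ref(`notifs/${date}/${clientId}`)
      .once("value");
    if (!snapshot.exists()) continue;

    const notifications: any[] = [];
    snapshot.forEach(vehicleNode => {
      if (vehicleNode.key === "last_update") return; // timestamp, not a vehicle
      vehicleNode.forEach(alertNode => {
        notifications.push({ alertKey: alertNode.key, ...alertNode.val() });
      });
    });

    this.processedDates.add(date); // historical date: processed exactly once
    this.mergeNotifications(_.uniqBy(notifications, "alertKey"), date);
  }
}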
The persistent listener - and why I stopped using AngularFire for it
For real-time updates on today's notifications the app needs a listener that stays alive for the entire session. My first version used AngularFire's Observable wrapper, which seemed like the natural Angular choice. It caused duplicate processing - AngularFire's automatic resubscription behaviour on connection drops was re-emitting events I'd already processed. I switched to Firebase's native ref.on() for more control:
private setupPersistentNotificationListener() {
  // Guard: never register the same listener twice
  if (this.persistentNotificationListener) return;

  try {
    const currentDate = new Date();
    const formattedCurrentDate = currentDate
      .toISOString().slice(0, 10).replace(/-/g, "");
    const firebasePath = `notifs/${formattedCurrentDate}/${this.clientId}`;
    const ref = this.db.database.ref(firebasePath);

    this.persistentNotificationListener = ref.on("value", async (snapshot) => {
      if (snapshot.exists()) {
        // Clear today's dedup entry so the change re-processes
        if (this.processedDates.has(formattedCurrentDate)) {
          this.processedDates.delete(formattedCurrentDate);
        }
        await this.getVehicles(this.clientId, formattedCurrentDate);
      }
    });
  } catch (error) {
    this.toastController.presentToast(
      "Notification monitoring may be limited"
    );
  }
}
When today's data changes, the processedDates Set entry for today gets cleared so the data re-fetches and re-processes. Historical dates stay in the Set and don't get re-fetched. The guard at the top - if (this.persistentNotificationListener) return - prevents accidentally registering the same listener twice, which was causing the multiplying notification counts operators were seeing.
Duplicate prevention
The processedDates Set is the core of the deduplication strategy:
// Requires: import { takeUntil } from "rxjs/operators"; import * as _ from "lodash";
private processedDates = new Set<string>();

getVehicles(clientId: string, date: string): void {
  // Historical dates are processed exactly once; today always re-fetches
  if (this.processedDates.has(date) && date !== this.formatDate(new Date())) {
    return;
  }

  this.db.list(`notifs/${date}/${clientId}`)
    .snapshotChanges()
    .pipe(takeUntil(this.destroy$)) // without this, every call leaks a subscription
    .subscribe(changes => {
      const notifications = this.extractNotifications(changes);
      // Collapse upstream double-writes that share an alert key
      const deduped = _.uniqBy(notifications, "alertKey");
      this.processedDates.add(date);
      this.mergeNotifications(deduped, date);
    });
}
Historical dates get processed once and added to the Set - any subsequent call for the same date returns immediately. Today's date is excluded from the guard so it always re-fetches when the persistent listener fires. The _.uniqBy from Lodash catches duplicates within a single day's data - the upstream system occasionally writes the same alert twice with slightly different timestamps, and uniqBy on the alert key collapses those.
This is the cheapest deduplication mechanism I could find. A Set lookup is O(1). The alternative - scanning the existing notification array for every incoming alert - would be O(n) and would get slower as the day's alerts accumulated.
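extractNotifications isn't shown above, but given the tree structure and the snapshotChanges() payload it consumes, a minimal reconstruction looks something like this (field names beyond the tree diagram are assumptions):

// Reconstruction - the real extractNotifications isn't shown in this
// article. snapshotChanges() emits one action per child of the client
// bucket; each vehicle node holds its alerts keyed by alert_key.
private extractNotifications(changes: any[]): any[] {
  const notifications: any[] = [];
  for (const change of changes) {
    const vehicleKey = change.payload.key;       // e.g. "vehicle_123"
    if (vehicleKey === "last_update") continue;  // skip the change-detection timestamp
    const alerts = change.payload.val() || {};
    for (const alertKey of Object.keys(alerts)) {
      // alertKey is what _.uniqBy collapses duplicates on
      notifications.push({ alertKey, vehicleKey, ...alerts[alertKey] });
    }
  }
  return notifications;
}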
Seen-by tracking with Firebase transactions
The fleet has multiple operators watching the same vehicles. When operator A marks a speeding alert as seen, operator B should still see it as unseen until they open it themselves. A simple seen boolean doesn't work - you need per-user tracking.
I store a seen_by array on each alert. When an operator opens a notification, their UID gets appended. The problem is concurrent updates - if two operators open the same notification at the same time, plain set() writes would race and one operator's UID would overwrite the other's. Firebase transactions handle this atomically:
markAsSeen(notifPath: string): void {
  const uid = this.authService.fetchUID();
  const ref = this.db.database.ref(`${notifPath}/seen_by`);

  ref.transaction(currentSeenBy => {
    // First viewer: create the array
    if (currentSeenBy === null) {
      return [uid];
    }
    // Subsequent viewers: append once, atomically
    if (Array.isArray(currentSeenBy) && !currentSeenBy.includes(uid)) {
      currentSeenBy.push(uid);
    }
    return currentSeenBy;
  });
}
The transaction reads the current seen_by array, adds the UID if it's not already there, and writes back atomically. If two operators trigger this simultaneously, Firebase detects the conflict and re-runs the losing transaction against the updated value - both UIDs end up in the array. A plain set() would have lost one.
The badge count is calculated client-side:
getUnseenCount(notifications: any[], uid: string): number {
  return notifications.filter(
    n => !n.seen_by || !n.seen_by.includes(uid)
  ).length;
}
Each operator sees their own unseen count. One operator reading an alert doesn't affect another operator's badge. This is one of those things that seems obvious once it's working but requires getting the data model right upfront - a boolean seen field would have forced a complete rethink of the data structure.
The notification modal - answering both questions at once
When an operator taps a notification they have two immediate questions: what happened, and where is the truck right now. My first version answered the first question. The operator would read the alert, then navigate to the map to find the vehicle. Two steps, context lost between them.
The notification modal now answers both simultaneously. When it opens it fetches the live device position from Firebase and drops two markers on an inline map:
addLiveNotificationMarker(): void {
  const ref = this.db.database
    .ref(`device/${this.clientId}/${this.deviceNr}`);

  ref.once("value").then(snapshot => {
    if (snapshot.exists()) {
      const liveData = snapshot.val();
      // Where the vehicle is right now
      this.addMarker(liveData.latitude, liveData.longitude, "Current");
      // Where the alert fired
      this.addMarker(this.alertLat, this.alertLng, "Alert Location");
    }
  });
}
One marker for where the alert happened. One marker for where the truck is right now. The operator can see at a glance whether the vehicle has moved since the incident, whether it's still in the area, whether it's heading somewhere concerning. No extra navigation, no context switch. I added this after watching an operator open an alert, close it, go to the map, find the vehicle, then go back to the alert list - a round trip that took 30 seconds of hunting. The feature took an afternoon to build and saved those 30 seconds every time a notification was checked.
Cleanup - the thing that was silently breaking everything
The multiplying notification counts operators were reporting when navigating around the app - that was a leak I'd introduced without realising it. Every time the notification tab was visited, a new Firebase listener was registered. Navigate to the tab ten times and you have ten listeners all firing on the same data, all calling mergeNotifications independently. The notification count multiplied with every visit.
The cleanup has to handle three completely different mechanisms:
// AngularFire Observable subscriptions
this.destroy$.next();
this.destroy$.complete();

// Native Firebase listener - different mechanism, separate cleanup
if (this.persistentNotificationListener) {
  const firebasePath = `notifs/${formattedCurrentDate}/${this.clientId}`;
  const ref = this.db.database.ref(firebasePath);
  ref.off("value", this.persistentNotificationListener);
  this.persistentNotificationListener = null;
}

// Exponential backoff timeouts - these leak independently too
this.retryTimeouts.forEach(id => clearTimeout(id));
this.retryTimeouts = [];

// Processed state
this.processedDates.clear();
destroy$ handles AngularFire subscriptions via takeUntil. ref.off() handles the native listener - this is not covered by destroy$, they're completely separate mechanisms. The retry timeouts handle any pending backoff callbacks that might still be queued. All three have to be explicitly cleaned up because all three leak independently.
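For those destroy$ lines to actually tear anything down, every AngularFire subscription has to be piped through takeUntil at the point it's created. The wiring is the standard RxJS pattern - watchPath and handleChanges below are illustrative names, not the app's real methods:

// Illustrative wiring, not the app's actual service.
// Needs: import { Subject } from "rxjs"; import { takeUntil } from "rxjs/operators";
private destroy$ = new Subject<void>();

private watchPath(path: string) {
  this.db.list(path)
    .snapshotChanges()
    .pipe(takeUntil(this.destroy$)) // completes when destroy$.next() fires
    .subscribe(changes => this.handleChanges(changes));
}

Any subscription created without the takeUntil pipe is invisible to destroy$ and leaks exactly the way the native listener did.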
I found this by adding a log line to mergeNotifications and watching the console while navigating. The same alert being merged two, three, four times on a single page visit. Once I saw it, it was obvious. The fix took less time than the debugging.
This article is part of a series on building a fleet telematics platform.