When I started as a frontend developer at Dynaway, our mobile app had a sync problem that everyone just accepted as normal. The app was designed for maintenance technicians who needed to work offline in factories and warehouses, so it had to download all necessary data upfront during an initial synchronization.
This sync was painfully slow. On my development machine, each page request took about 7 seconds. For customers with large datasets, technicians would start the sync in the morning and find it still running when they returned from their first break. Some syncs took over 20 minutes. With 100+ company clients and thousands of technicians using the app daily, this was affecting a lot of people.
My initial approach was to optimize what I could control on the frontend. Working with my colleague, we tackled the client-side bottlenecks. Originally, the sync process was completely sequential: fetch a page from the backend, wait for it to arrive, save it to IndexedDB, then request the next page.
We implemented parallel processing where the app could download one page while simultaneously saving the previous page to IndexedDB. This created a pipeline that improved throughput, but we were still limited by that 7-second backend response time for each page.
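The pipeline can be sketched roughly like this. This is an illustrative reconstruction, not our production code: `fetchPage` and `savePage` are hypothetical stand-ins for the HTTP request and the IndexedDB write.

```typescript
// Pipelined sync loop: while one page is being saved, the next page is
// already downloading. All names here are illustrative.
type Page = { items: unknown[]; hasMore: boolean };

async function syncAll(
  fetchPage: (pageNumber: number) => Promise<Page>,
  savePage: (page: Page) => Promise<void>,
): Promise<number> {
  let pageNumber = 0;
  let pending = fetchPage(pageNumber);       // kick off the first download
  let saving: Promise<void> = Promise.resolve();

  while (true) {
    const page = await pending;              // wait for the current download
    await saving;                            // make sure the previous save finished
    saving = savePage(page);                 // save this page in the background...
    if (!page.hasMore) break;
    pending = fetchPage(++pageNumber);       // ...while the next page downloads
  }
  await saving;                              // flush the final save
  return pageNumber + 1;                     // total pages processed
}
```

The key property is that the `await pending` and the in-flight `savePage` overlap, so the slower of the two operations, not their sum, sets the pace of each iteration.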
The frontend improvements helped, but the real bottleneck was clearly on the server side.
When an opportunity opened up on the backend team, I took it. My product owner handed me the synchronization system - one of the most critical pieces of our mobile platform. The previous developer who had built it was no longer with the company, so I was inheriting a system that thousands of users depended on.
Diving into the code, I discovered how the pagination actually worked, and it explained the performance issues.
The backend was generating pages using application-level loops. When a client requested a page, the server would:
1. Start iterating through database entities one by one
2. Add each entity to the current page until reaching exactly 1MB
3. Stop and send that page to the client
4. When the client requested the next page, resume the loop from where it left off
This approach had several problems. Each page had to be generated from scratch by walking through potentially hundreds of thousands of records. The server had to remember where each client's pagination left off. And there was no way for the frontend to request multiple pages simultaneously since it didn't know how many total pages existed.
I also discovered a bug: when the last entity pushed a page up against the 1MB limit, the system would incorrectly signal that more pages were available, causing clients to request empty pages.
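A rough reconstruction of that loop, in TypeScript for illustration (the real backend was X++/.NET, and `entities` stands in for a database cursor), shows both the per-page iteration cost and how the termination bug is avoided by checking the cursor position instead of the size:

```typescript
const PAGE_LIMIT = 1024 * 1024; // 1 MB

// Old-style application-level paging: walk entities from a saved cursor,
// packing them into a page until the size limit is reached.
function buildPage(
  entities: string[],  // pre-serialized entities, in cursor order
  cursor: number,      // where the previous page left off (server-side state)
): { body: string[]; nextCursor: number; hasMore: boolean } {
  const body: string[] = [];
  let size = 0;
  let i = cursor;
  while (i < entities.length) {
    const entitySize = entities[i].length;
    // Stop when the next entity would push the page past the limit
    // (but always emit at least one entity, so oversized ones still ship).
    if (size + entitySize > PAGE_LIMIT && body.length > 0) break;
    body.push(entities[i]);
    size += entitySize;
    i++;
  }
  // The bug lived here: deriving hasMore from the size check alone made a
  // full final page report more data. Comparing the cursor position instead
  // correctly reports whether any entities actually remain.
  return { body, nextCursor: i, hasMore: i < entities.length };
}
```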
I decided to rewrite the pagination logic using SQL instead of application loops. Rather than generating pages one at a time, one SQL operation would analyze all the data and pre-calculate how it should be divided into 1MB pages.
The new approach worked like this: a single query computed a running total of each entity's serialized size and derived a page number from it, assigning every entity to a page in one pass. With the pages pre-calculated, the server no longer had to track per-client state, any page could be served directly by its number, and the client knew the total page count upfront - which finally made parallel page requests possible.
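To make the set-based idea concrete, here is a sketch of the pre-calculation. In T-SQL this corresponds to something like a running `SUM(DataSize) OVER (ORDER BY Id)` divided by the page size; the same calculation is shown below in TypeScript, and the names are illustrative rather than the production schema.

```typescript
const PAGE_SIZE = 1024 * 1024; // 1 MB

interface Row { id: number; size: number }

// One pass over the whole dataset assigns every row a page number,
// so any page can be served directly and the total is known up front.
// SQL analogue: CEILING(SUM(size) OVER (ORDER BY id) / 1048576.0)
function assignPages(rows: Row[]): { pageOf: Map<number, number>; totalPages: number } {
  const pageOf = new Map<number, number>();
  let runningTotal = 0;
  for (const row of rows) {
    runningTotal += row.size;
    // Page number = how many 1 MB boundaries the running total has crossed.
    pageOf.set(row.id, Math.ceil(runningTotal / PAGE_SIZE));
  }
  const totalPages = rows.length === 0 ? 0 : Math.ceil(runningTotal / PAGE_SIZE);
  return { pageOf, totalPages };
}
```

Because the page assignments are derived from data the database already holds, one set-based operation replaces thousands of per-request iterations, and a client that knows `totalPages` can fan out requests for several pages at once.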
The transformation was significant. What used to take 20+ minutes for large customers was reduced to a few minutes. More importantly, the sync became predictable - technicians could actually plan around it instead of wondering if it would ever finish.
This project taught me that sometimes the biggest performance gains come from rethinking the fundamental approach rather than optimizing existing code. Moving from iterative application logic to set-based SQL operations didn't just make things faster - it made them scalable.
Tech used: .NET, SQL Server, TypeScript, Angular, X++, IndexedDB