Locating mobile devices from network measurement reports
Operators validate network performance through drive testing — sending teams with specialized equipment through streets and buildings, measuring signal quality block by block. It's slow, expensive, and gives you a snapshot of a network that changes continuously.
The alternative is using the measurement reports that devices already send back to the network: signal strength readings from the serving cell and up to six neighbors, timing information, and basic radio parameters. Millions of these arrive every day, from every connected device, covering every corner of the network — indoors, outdoors, highways, basements.
If you can locate where each measurement was taken, you effectively turn the entire subscriber base into a continuous, passive drive test — without sending a single van.
The problem: standard positioning methods fall short on at least one of accuracy, deployed infrastructure, or subscriber privacy. GPS needs device cooperation and fails indoors. Cell-ID gives you a cell footprint that can span kilometers. Triangulation methods like OTDOA need positioning reference signals and tight time synchronization that most operators haven't deployed. And fingerprinting databases require wardriving campaigns that take months, go stale the moment the network changes, and need to be repeated.
The solution: a proprietary positioning algorithm that estimates device location from standard 5G measurement reports and network configuration data alone.
The algorithm was designed for a Tier 1 mobile network equipment vendor, specifically for enabling driveless testing of 5G networks. The goal: allow operators to continuously validate coverage, detect dead zones, and identify performance degradation across their entire network footprint using the measurement data they already collect — replacing periodic, expensive drive test campaigns with always-on, passive network quality monitoring.
The approach divides the coverage area into a fine geographic grid and builds a localized radio propagation model for each grid cell, trained on the measurement reports themselves. Rather than assuming a single propagation model for the entire area — which breaks down against real-world terrain, buildings, and clutter — the model learns how radio signals actually behave at each specific location.
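The actual model is proprietary, but the per-cell idea can be sketched with a textbook log-distance path-loss fit: bin reports into grid cells, then learn an intercept and path-loss exponent locally instead of assuming them globally. Everything below (the 50 m grid, the function names, the synthetic data) is illustrative, not the vendor's implementation.

```python
import numpy as np

GRID_M = 50.0  # hypothetical grid cell edge length in metres

def bin_index(x_m, y_m, grid=GRID_M):
    """Map a position (metres from a local origin) to a grid-cell index."""
    return int(x_m // grid), int(y_m // grid)

def fit_cell_model(distances_m, rsrp_dbm):
    """Least-squares fit of a local log-distance model for one grid cell.

    Model: RSRP = P0 - 10 * n * log10(d).
    Returns (P0, n): intercept in dBm and path-loss exponent.
    """
    log_d = np.log10(np.asarray(distances_m, dtype=float))
    # Design matrix for the two learned parameters P0 and n.
    A = np.column_stack([np.ones_like(log_d), -10.0 * log_d])
    p0, n = np.linalg.lstsq(A, np.asarray(rsrp_dbm, dtype=float), rcond=None)[0]
    return p0, n

# Synthetic check: noiseless data generated with P0 = -40 dBm, n = 3.5
# should be recovered exactly by the fit.
d = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
rsrp = -40.0 - 10.0 * 3.5 * np.log10(d)
p0, n = fit_cell_model(d, rsrp)
```

In practice each cell's fit would be driven by thousands of real reports with shadow-fading noise; the point is that the exponent `n` is learned per cell, so a street canyon and an open suburb get different local models.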
The system uses timing information to constrain the search area, neighbor cell geometry to narrow candidates further, and multi-frequency measurements to improve confidence. Spatial convolution techniques smooth the model across the grid, reducing noise from shadow fading while preserving meaningful local propagation effects.
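The timing constraint can be illustrated in isolation. A quantized timing-advance (TA) report confines the device to an annulus around the serving site, so intersecting that annulus with the grid prunes most candidates before any model evaluation. The sketch below assumes an LTE-style ~78 m TA quantization step for simplicity; 5G NR steps are finer and depend on numerology, and the function names are hypothetical.

```python
import numpy as np

TA_STEP_M = 78.12  # assumed distance resolution of one TA index step

def ta_annulus(ta_index, step_m=TA_STEP_M):
    """Min/max device-to-site distance implied by a quantized TA index."""
    return ta_index * step_m, (ta_index + 1) * step_m

def candidate_mask(cell_centers_xy, site_xy, ta_index):
    """Boolean mask over grid-cell centers that fall inside the TA annulus."""
    d_min, d_max = ta_annulus(ta_index)
    d = np.linalg.norm(np.asarray(cell_centers_xy) - np.asarray(site_xy), axis=1)
    return (d >= d_min) & (d < d_max)

# A 4-cell toy grid around a site at the origin; TA index 2 keeps only
# candidates roughly 156-234 m from the site.
centers = np.array([[50.0, 0.0], [200.0, 0.0], [0.0, 180.0], [400.0, 0.0]])
mask = candidate_mask(centers, (0.0, 0.0), 2)
```

Neighbor-cell signal strengths and multi-frequency reports would then re-rank the surviving candidates, but the annulus alone already discards the bulk of the grid.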
Radio signal strength depends on distance, but also on terrain, building materials, whether the device is indoors or outdoors, antenna orientation, and dozens of other factors. Two locations 100 meters apart can produce nearly identical measurement reports when differences in the propagation environment cancel out the difference in distance. The algorithm needed to disentangle distance from environment, using only the signals themselves.
Signals passing through walls lose 10–25 dB depending on building material, but a measurement report doesn't say whether the device is inside a building or on the street. The algorithm handles this implicitly through its localized modeling — areas with heavy indoor traffic learn different propagation characteristics — but the boundary between indoor and outdoor remains a persistent source of ambiguity.
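A back-of-the-envelope calculation shows why this ambiguity matters. Under a log-distance model with exponent n, an unmodelled wall loss of L dB inflates the apparent distance by a factor of 10^(L / (10·n)). The numbers below are worked-example values, not parameters of the deployed system.

```python
# Illustrative only: quantifies how an unmodelled wall loss distorts a
# naive distance estimate under a log-distance path-loss model.

def apparent_distance_factor(wall_loss_db, n):
    """Factor by which wall loss inflates distance inferred from RSRP."""
    return 10 ** (wall_loss_db / (10.0 * n))

# Mid-range wall loss (15 dB) with a typical urban exponent (n = 3.5):
factor = apparent_distance_factor(15.0, 3.5)
# An indoor device 100 m from the site looks roughly 268 m away to a
# model that assumes it is outdoors.
```

This is why localized, data-driven modeling helps: a grid cell dominated by indoor traffic absorbs the wall loss into its learned parameters rather than misreading it as distance.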
Finer geographic resolution gives more precise positioning but requires more measurement data to train each grid cell reliably. Coarser resolution trains faster but caps accuracy at the grid size. The right balance depends on network density, traffic volume, and the operator's accuracy requirements — and varies across the same network between dense urban cores and suburban edges.
Spatial convolution is necessary to fill gaps and reduce noise, but aggressive smoothing destroys the street-canyon and building-shadow effects that make localized modeling valuable. Dense urban environments need minimal smoothing. Sparse suburban areas need more. This tradeoff required careful tuning per deployment scenario.
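The smoothing step can be sketched, under assumed parameters, as a Gaussian convolution over a grid of per-cell model parameters. The kernel width sigma is the tuning knob described above: small in dense urban grids, larger where data is sparse. The implementation below is a generic separable Gaussian blur, not the proprietary convolution scheme.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian weights, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_grid(param_grid, sigma):
    """Separable 2-D Gaussian smoothing with edge replication."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(param_grid, radius, mode="edge")
    # Convolve rows, then columns (a Gaussian kernel is separable).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

# An 8x8 grid of path-loss exponents with one noisy outlier cell,
# e.g. a shadow-fading artifact from too few reports.
grid = np.full((8, 8), 3.5)
grid[4, 4] = 6.0
smoothed = smooth_grid(grid, sigma=1.0)
```

With sigma = 1 the outlier is pulled back toward its neighbors while cells outside the kernel's reach are untouched; a larger sigma would spread the correction further, at the cost of blurring genuine street-level effects.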
The algorithm was designed and formally proposed to a Tier 1 mobile network equipment vendor as the basis for a driveless testing capability inside their network analytics platform. The proposal covered the full system: the core path loss approximation model, the binning and convolution approach, the training and inference pipelines, the multi-frequency handling, and the integration points into existing OSS/BSS data flows.
The detailed technical approach was reviewed and approved by the vendor's engineering team. The project reached a natural pause at the trial stage — the vendor chose to prioritize other initiatives in their roadmap, and deployment against live network data was not pursued at that time. The algorithm remains ready for integration should the vendor revisit driveless testing in a future cycle.
What this case study demonstrates is the engineering work: framing a hard positioning problem under strict privacy constraints, designing an algorithm grounded in standard radio propagation theory, and producing a specification detailed and credible enough to win technical sign-off from a major network equipment vendor. The same approach — taking a real operational constraint, finding the cleanest path through the trade-offs, and delivering something an expert audience can defend — is what we bring to every engagement.
Let's talk about what we could build for your network analytics.
Book a Discovery Call