Mobility ops metrics and KPIs that actually move the needle

A practical set of mobility ops metrics and KPIs that improve service, reduce chaos, and make performance discussions more honest.

Most mobility teams measure what is easy to count, then wonder why nothing improves.

If you want metrics that change outcomes, they need to do two things: match how work actually happens, and lead to actions that are realistic for the team that owns them. Otherwise the numbers become a monthly ritual and the operation stays the same.

This post is meant to be generic and usable. If you want the technical context behind the data, read our posts on tracking and telemetry system architecture, alert fatigue in mobility ops, and ideal IoT update intervals.

Metric #1: on time is a range, not a single number

“On time” sounds simple until you define it. Arrived within 2 minutes? 5 minutes? 15 minutes? And which timestamp counts, the first arrival at the pin, or the first arrival where the rider actually boards?

The best operators treat on time as a distribution. They look at how performance behaves on normal days and how it behaves under stress. Averages hide peak pain. Percentiles reveal it.

A useful KPI review sounds less like “we were 92 percent on time” and more like “we were fine until 4:30 p.m., then curb dwell blew up and the 90th percentile got ugly.” That kind of conversation gives ops something to fix on the next shift. A single blended average usually does not.
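Treating on-time as a distribution is straightforward with the standard library. A minimal sketch, assuming you can pull arrival delays (in minutes) grouped by scheduled hour; the data and field names here are illustrative:

```python
from statistics import quantiles

# Hypothetical sample: arrival delay in minutes, keyed by service hour.
# In practice this comes from your trip log.
delays_by_hour = {
    "15:00": [1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.0, 5.0, 5.5, 6.0],
    "16:00": [2.0, 3.0, 4.0, 6.0, 8.0, 9.0, 11.0, 14.0, 18.0, 25.0],
}

for hour, delays in sorted(delays_by_hour.items()):
    # quantiles(n=10) returns the 9 decile cut points:
    # index 4 is the median (p50), index 8 is the 90th percentile (p90).
    cuts = quantiles(delays, n=10)
    p50, p90 = cuts[4], cuts[8]
    mean = sum(delays) / len(delays)
    print(f"{hour}  mean={mean:.1f}  p50={p50:.1f}  p90={p90:.1f}")
```

Notice how the 16:00 hour can look tolerable on the mean while the p90 tells the curb-dwell story. That gap between the blended average and the tail is exactly what a single on-time percentage hides.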

Metric #2: cancellations and no-shows tell you where your promise is breaking

Cancellations are often the earliest honest signal that your promise is slipping. Customers cancel when they stop believing. Drivers cancel when the work feels chaotic or unworkable. Operations cancels when the system is forcing bad choices.

Do not lump all cancellations together. If you cannot tell the difference between customer initiated, driver initiated, and ops initiated, you cannot fix the root cause.
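Splitting cancellations by initiator is a one-liner once the event log tags who pulled the trigger. A sketch under that assumption; the field names and reasons are illustrative, not a real schema:

```python
from collections import Counter

# Hypothetical event log: each cancellation tagged with who initiated it.
cancellations = [
    {"job_id": "A1", "initiator": "customer", "reason": "eta_slipped"},
    {"job_id": "B2", "initiator": "driver",   "reason": "unworkable_stop"},
    {"job_id": "C3", "initiator": "customer", "reason": "eta_slipped"},
    {"job_id": "D4", "initiator": "ops",      "reason": "rebalanced"},
]

by_initiator = Counter(c["initiator"] for c in cancellations)
total = sum(by_initiator.values())

for who, count in by_initiator.most_common():
    print(f"{who}: {count} ({count / total:.0%})")
```

If your log does not carry an initiator field today, that is the first fix: the breakdown is trivial once the tag exists and impossible without it.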

Metric #3: dispatch override rate is a truth serum

If you have any kind of automation, optimization, or rules engine, you should measure how often humans override it. Not to blame people, but to find where the system is misaligned with reality. This is one of the fastest ways to tell whether your dispatch workflow and routing logic actually fit the day you are running.

A rising override rate usually means one of three things. Inputs got worse, the model is wrong for the day, or the workflow is missing an exception path so operators do the only thing they can do.
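The metric itself is just overrides divided by automated assignments, tracked over time with a threshold that prompts a look rather than a blame. A minimal sketch with made-up numbers and an assumed 10 percent alert line:

```python
# Hypothetical daily dispatch log: automated assignments vs. human overrides.
daily = [
    {"day": "Mon", "auto_assignments": 120, "overrides": 6},
    {"day": "Tue", "auto_assignments": 110, "overrides": 8},
    {"day": "Wed", "auto_assignments": 130, "overrides": 15},
    {"day": "Thu", "auto_assignments": 125, "overrides": 22},
]

ALERT_THRESHOLD = 0.10  # assumed: investigate (not punish) above 10%

for d in daily:
    rate = d["overrides"] / d["auto_assignments"]
    flag = "  <- investigate" if rate > ALERT_THRESHOLD else ""
    print(f'{d["day"]}: {rate:.1%}{flag}')
```

The threshold is deliberately an investigation trigger, not a target: driving the number down by discouraging overrides just hides the misalignment.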

This is also why “time saved” claims can be misleading. If a tool saves time on paper but increases overrides, the operation pays the cost somewhere else.

Metric #4: freshness and staleness should be visible, not buried

If your dashboards show a location pin but hide that the last update was 12 minutes ago, you are setting people up to make bad decisions with confidence.

Measure staleness. Make it visible. Track how often staleness shows up during active service windows. Many teams find this metric explains more operational arguments than any routing KPI. It is also one of the easiest ways to connect the dots between geofence failures and reporting cadence.
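Staleness is easy to compute once you keep the last-update timestamp next to the pin. A sketch assuming a 2-minute staleness threshold during active service; the vehicle IDs and timestamps are invented:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=2)  # assumed threshold for active service

now = datetime(2024, 5, 1, 16, 40, tzinfo=timezone.utc)

# Hypothetical last-update timestamps per vehicle pin.
last_update = {
    "van-01": now - timedelta(seconds=30),
    "van-02": now - timedelta(minutes=12),  # the confident-looking stale pin
    "van-03": now - timedelta(minutes=1),
}

stale = {vid: now - ts for vid, ts in last_update.items()
         if now - ts > STALE_AFTER}

for vid, age in stale.items():
    print(f"{vid}: stale for {age.total_seconds() / 60:.0f} min")

# Share of the fleet that is stale right now; track this across service windows.
stale_ratio = len(stale) / len(last_update)
print(f"stale ratio: {stale_ratio:.0%}")
```

Surfacing the age on the pin itself, not just in a report, is what stops people from making bad decisions with confidence.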

Metric #5: cost per completed job, but only when the definition is stable

Cost per completed job is one of the few metrics that executives and operators can both respect, but it is easy to game if definitions change.

If you change what counts as completed, if you move work in and out of scope, or if you do not account for rework and support load, the metric becomes political. Keep the definition stable and the metric becomes useful.
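The arithmetic is trivial; the discipline is in what stays inside the numerator and the denominator. A sketch with invented shift data, where "completed" is pinned to one definition and rework and support costs stay in scope:

```python
# Hypothetical shift data. The point is a pinned definition of "completed"
# (delivered, not cancelled) and a cost base that keeps rework and support in.
jobs = [
    {"id": "J1", "status": "delivered"},
    {"id": "J2", "status": "delivered"},
    {"id": "J3", "status": "cancelled"},
    {"id": "J4", "status": "delivered"},
]
costs = {
    "driver_hours": 420.0,
    "fuel": 85.0,
    "rework": 40.0,   # easy to quietly drop; keep it in
    "support": 25.0,  # ditto
}

completed = [j for j in jobs if j["status"] == "delivered"]
cost_per_completed = sum(costs.values()) / len(completed)
print(f"cost per completed job: {cost_per_completed:.2f}")
```

Version the definition like code: if "completed" or the cost base changes, the old series ends and a new one starts, so nobody can move the number by moving the goalposts.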

Good metrics make the operation more honest. They surface constraints early and they guide tradeoffs when volume spikes. If your metrics do not change what someone does on the next shift, they are probably just reporting.