The Assumption That Gets Buyers in Trouble
Most procurement teams operate on an implicit assumption: if a supplier hasn't complained, things are probably fine. No news is good news. The delivery will show up.
The data disagrees.
The Institute for Supply Management's Report on Business tracks supplier delivery speed as a core component of its monthly Purchasing Managers' Index (PMI). The Supplier Deliveries sub-index has spent extended periods above 50 — indicating slower-than-normal deliveries — across virtually every major supply disruption of the past decade. What the index makes clear is that delivery slippage is a structural feature of supplier relationships, not an occasional exception.
The uncomfortable truth: on-time delivery failure is the baseline. The question is whether you're measuring it or not.
What "On-Time" Actually Means
Before looking at the numbers, it's worth defining the metric. On-time delivery (OTD) is typically measured as the percentage of purchase orders delivered on or before the originally confirmed date. Simple in theory. Complicated in practice, because:
- Confirmed date vs. requested date. Many suppliers quote a "confirmed ship date" that already bakes in a buffer from your requested date. So a delivery can be "on time" by the supplier's definition while still being late from your planning perspective.
- Partial deliveries. A supplier ships 80% of an order on time and holds the rest. Most manual tracking systems count this as on time, even though a fifth of the order is missing.
- Supplier-reported vs. buyer-observed. If you're relying on suppliers to self-report delays, you're measuring their incentive to tell you bad news, not the actual delivery rate.
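To make the definitional gap concrete, here is a minimal sketch of an OTD calculation under two different rule sets. The order records and field names (`requested`, `confirmed`, `qty_received`) are illustrative assumptions, not a standard schema:

```python
from datetime import date

# Hypothetical purchase-order records; field names are illustrative.
orders = [
    {"requested": date(2024, 3, 1), "confirmed": date(2024, 3, 8),
     "delivered": date(2024, 3, 8), "qty_ordered": 100, "qty_received": 100},
    {"requested": date(2024, 3, 1), "confirmed": date(2024, 3, 1),
     "delivered": date(2024, 3, 5), "qty_ordered": 50, "qty_received": 50},
    {"requested": date(2024, 3, 10), "confirmed": date(2024, 3, 10),
     "delivered": date(2024, 3, 10), "qty_ordered": 200, "qty_received": 160},
    {"requested": date(2024, 3, 15), "confirmed": date(2024, 3, 15),
     "delivered": date(2024, 3, 14), "qty_ordered": 75, "qty_received": 75},
]

def on_time(order, against="confirmed", full_qty=True):
    """On time = delivered on/before the chosen baseline date,
    optionally requiring the full quantity to have arrived."""
    if full_qty and order["qty_received"] < order["qty_ordered"]:
        return False
    return order["delivered"] <= order[against]

def otd_rate(orders, **kwargs):
    return sum(on_time(o, **kwargs) for o in orders) / len(orders)

# Supplier-friendly view: against the confirmed date, partials count.
print(f"{otd_rate(orders, full_qty=False):.0%}")          # prints "75%"
# Buyer's view: against the requested date, full quantity required.
print(f"{otd_rate(orders, against='requested'):.0%}")     # prints "25%"
```

Same four orders, two defensible definitions, a 50-point gap. This is why "our OTD is 95%" means nothing until you know which rules produced the number.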
A body of supply chain research from MIT's Center for Transportation and Logistics consistently finds that the gap between supplier-reported performance and buyer-observed performance is significant — often 10 to 20 percentage points on OTD metrics.
Why Small and Mid-Size Buyers Are Disproportionately Affected
Enterprise procurement teams have dedicated supplier performance management systems, scorecards, and quarterly business reviews. They track OTD formally. When a supplier's score drops, there are contractual levers to pull.
Small and mid-size buyers have email.
This matters because supplier prioritisation is real. Research published by Harvard Business Review on supplier relationship dynamics repeatedly shows that suppliers manage their capacity by customer importance. If you're a smaller account, you're lower in the queue when a supplier is stretched. You find out about the delay when they get around to telling you — which is usually after the original date has already passed.
The asymmetry is structural: large buyers have systems that catch delays before they happen. Small buyers find out after.
The Cost of a Single Missed Delivery
The obvious cost is the delay itself. But the downstream costs are what add up:
- Production holds. If one component is late, the entire assembly may stop. You're paying for capacity that isn't producing.
- Expediting costs. Rush freight, premium sourcing, and last-minute supplier switches are expensive. Bureau of Labor Statistics producer price data shows that expedited logistics costs run materially higher than standard freight, with volatility that makes them hard to budget.
- Management time. Chasing a single delayed order — the calls, the emails, the re-scheduling — routinely consumes hours that compound across a team.
McKinsey's research on supply chain resilience found that companies that proactively monitored supplier commitments reduced their exposure to disruption costs significantly compared to those that relied on reactive escalation. The mechanism is simple: early detection allows cheap interventions. Late detection requires expensive ones.
The Measurement Problem Is Solvable
The reason most SMB buyers don't track OTD isn't that they don't care. It's that building a tracking system from scratch — spreadsheets, reminders, manual logging — takes time they don't have. So they default to "memory and prayer," as one buyer put it to us.
The data that matters is already in the email thread. Every commitment a supplier makes — a ship date, a quantity confirmation, a revised lead time — exists in writing. The problem is extracting it reliably across dozens of active threads.
That's the gap BuyerPro fills. When you BCC [email protected] on a supplier conversation, Coach Jim reads every email in the thread, extracts each commitment, and tracks whether the supplier followed through. If a supplier goes quiet after confirming a date, Jim flags it. If delivery language gets vague, Jim notices. You get a specific, actionable coaching email — with a draft follow-up you can send in one click.
No new dashboard. No scoring system to maintain. Just better visibility into the threads you're already managing.
What Good Looks Like
A well-functioning supplier monitoring process does three things that most buyers only do one of:
- Captures commitments at the point they're made — not reconstructed later from memory.
- Monitors proactively — flags silence and vague language before a deadline passes, not after.
- Generates a paper trail — so when a supplier disputes what they promised, you have the thread.
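The first two behaviours amount to a simple rule: flag any open commitment where the deadline is close and the supplier has gone quiet. A hypothetical sketch of that rule follows; the `Commitment` structure and the five-day / seven-day thresholds are illustrative assumptions, not a description of any particular tool:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Commitment:
    supplier: str
    promised: date       # date the supplier committed to in writing
    last_contact: date   # most recent message in the thread
    fulfilled: bool = False

def needs_follow_up(c, today, quiet_days=5, lead_days=7):
    """Flag an open commitment if the deadline is near (or past)
    and the supplier has been silent past the quiet threshold."""
    if c.fulfilled:
        return False
    deadline_near = c.promised - today <= timedelta(days=lead_days)
    gone_quiet = today - c.last_contact >= timedelta(days=quiet_days)
    return deadline_near and gone_quiet

today = date(2024, 6, 10)
log = [
    Commitment("Acme Metals", promised=date(2024, 6, 14), last_contact=date(2024, 6, 1)),
    Commitment("Delta Plastics", promised=date(2024, 7, 20), last_contact=date(2024, 6, 1)),
    Commitment("Orion Fasteners", promised=date(2024, 6, 12), last_contact=date(2024, 6, 9)),
]

for c in log:
    if needs_follow_up(c, today):
        print(f"Follow up with {c.supplier}: promised {c.promised}, "
              f"quiet since {c.last_contact}")
```

Only Acme Metals gets flagged: its deadline is four days out and the thread has been silent for nine. Delta's deadline is too far away to worry about yet, and Orion replied yesterday. The point is that the flag fires before the promised date passes, while a cheap nudge can still fix the problem.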
The ISM data, the MIT research, and the McKinsey findings all point to the same conclusion: buyers who measure consistently outperform buyers who don't. The measurement doesn't have to be elaborate. It has to be systematic.