The Numbers Don’t Lie. Or Do They?

Vendors are releasing case studies, and they show some big numbers.

Vendor due diligence plays a huge part in the technology buying process. With so many options available to address a given business need today, it is essential to show that the option selected represents the best investment. A key part of the due diligence process is verifying third-party references for the selected vendor. In the past, this was a time-consuming exercise involving phone calls or even site visits with existing customers of the proposed vendor.

Today, the Web is often used to short-circuit this process. Industry analyst reports or vendor-published case studies are offered as “proof” of the legitimacy of vendor claims. We will deal with analyst reports another day, but I’m sure you have seen case studies from vendors in our industry. Taken at face value they can sound really impressive – particularly when they use so many big numbers. It’s important, though, that whenever you see material like this you look at the numbers very carefully. They are always stated the way they are for a reason, which is often to be ambiguous, if not outright misleading. To show you what I mean, let’s take a case study that I recently saw from a competitor.

This unattributed case study (no customer name is given) references a large project for a major corporation. The project involved deploying technology to 4,800 locations over a 26-week timeframe, which – on the face of it – sounds very impressive.

As you read further, you see that the vendor claims performance of “between 400 and 500 sites a week around the country”. This is also impressive, but misleading. While they may have been going that fast at the end of the project, simple math says it could not have been for very long, or they would have finished the entire project far sooner than 26 weeks: 4,800 sites in 26 weeks works out to an average of roughly 185 sites per week. So if we accept that the run-rate number is accurate, a more likely explanation is that they had to go that fast at the end because they were running behind the required run rate for the bulk of the project.
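To make that concrete, here is a quick back-of-the-envelope check using only the figures the case study itself provides; the arithmetic (and the Python sketch) is mine, not the vendor’s:

    # Figures quoted in the vendor's case study
    total_sites = 4_800
    project_weeks = 26
    claimed_low, claimed_high = 400, 500   # claimed sites per week

    # Average rate actually needed to finish 4,800 sites in 26 weeks
    required_rate = total_sites / project_weeks
    print(f"Required average: {required_rate:.0f} sites/week")        # ~185

    # How long the whole project would take at the claimed pace
    print(f"At the claimed pace: {total_sites / claimed_high:.1f} to "
          f"{total_sites / claimed_low:.1f} weeks")                   # 9.6 to 12.0

At 400–500 sites a week, the entire 4,800-site rollout fits into 10–12 weeks – less than half the actual duration – so the headline rate can only describe a short burst, not the project as a whole.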

The study also repeatedly states that delivering the 4,800 sites required 6,050 site visits, meaning roughly 1 out of every 4 sites required a revisit – which, in a volume project, is actually terrible performance! Yet the vendor tries to make this read like a good thing, presumably working under the assumption that the bigger the number, the better.
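Here too, the study’s own numbers tell the story. Treating every visit beyond one per site as a revisit is my assumption, but it matches the vendor’s own description of follow-up visits:

    # Figures quoted in the vendor's case study
    total_sites = 4_800
    total_visits = 6_050

    extra_visits = total_visits - total_sites        # visits beyond one per site
    revisit_rate = extra_visits / total_sites
    print(f"{extra_visits:,} extra visits = {revisit_rate:.0%} of sites revisited")
    # -> 1,250 extra visits = 26% of sites revisited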

Here’s their explanation for this poor performance, from the “Results” section of the study (the emphasis on the last sentence is mine): “In total, more than 6,050 site visits were completed. This takes into account original site installation visits and follow-up visits when work could not be completed due to hardware availability, resource constraints, or site maintenance/construction. The team conducted daily site-readiness calls to ensure resources were on site when needed and that the proper equipment and hardware had been delivered. These touch-points helped reduce the number of revisits over the life of the project.”

So, if we again accept this as an accurate description of how the project was run, the “site-readiness” calls were instituted only to address the challenges described in the sentence immediately before – not because they are good practice on a project of this type. That last, emphasized sentence is the key. Stunningly, even on a massive, long-running project that began with a 50-site pilot, the vendor never figured out how to eliminate site revisits, only how to “reduce” them.

A major claim the study makes is that the vendor delivered $300k in savings for the customer – an average of $62.50 per site. When you consider that they logged more than 48,000 hours on this project, $300k represents a miserable efficiency gain for a repetitive activity conducted thousands of times.
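Set the claimed savings against the effort the study itself reports; the per-site and per-hour figures below are my calculation from those two numbers:

    # Figures quoted in the vendor's case study
    total_sites = 4_800
    total_hours = 48_000
    claimed_savings = 300_000    # dollars

    print(f"${claimed_savings / total_sites:,.2f} saved per site")          # $62.50
    print(f"${claimed_savings / total_hours:,.2f} saved per hour logged")   # $6.25

Six dollars and change of savings for every hour logged on a highly repetitive task is not a result worth boasting about.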

Finally – and most importantly – always examine the numbers the vendor uses. In the study we are discussing, for example, the numbers are much too round (a quick check follows the list below):

  • 4,800 sites (this is almost certainly rounded up)
  • 6,050 site visits (almost certainly rounded down). Remember, there were 50 pilot sites; take those visits out and the remaining 6,000 visits across 4,800 sites come to exactly 1.25 per site. Really?
  • 48,000 project hours (rounded down) – exactly 10 hours per site?
  • $300k savings (rounded up) – exactly $62.50 per site?
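If you want to see just how neatly everything divides, run the study’s headline figures through a couple of lines of arithmetic (the checks are mine; the numbers are theirs):

    # Headline figures from the case study
    sites, pilot_sites = 4_800, 50
    visits, hours, savings = 6_050, 48_000, 300_000

    print((visits - pilot_sites) / sites)   # 1.25  visits per site, exactly
    print(hours / sites)                    # 10.0  hours per site, exactly
    print(savings / sites)                  # 62.5  dollars per site, exactly

Real project data almost never divides this cleanly; when every derived ratio lands on a tidy figure, the inputs have usually been rounded until they fit.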

This lack of precision points to one of two traits, neither of which is good:

  1. They don’t actually know the real numbers, implying a worrying lack of attention to detail
  2. They don’t like the real numbers, because they don’t support the argument they are making

This is just something to think about. At Concert, we are taking a different path. Any time you see a statistic from us, it will be provided to a comforting level of precision, and we will be happy to discuss the dataset it is derived from. Next time you see one of these stories from another vendor, ask them to show you the data that backs it up too.

If you’d like a copy of the vendor case study that this analysis is based on, just contact us.
