A team with whom I was working recently shared their win rate statistics with me. Out came a complex spreadsheet, with a summary that ran something like the following:
Open deals: 30%
Won: 30%
Lost: 20%
Result unknown: 20%
I asked them to explain the difference between ‘open deals’ and ‘result unknown’. “Ah,” they said, “that’s easy. When we submit a proposal, we forecast when we think the customer will make their decision. Obviously, that can take some time, so we can’t include those ‘open’ deals in the win rate calculation.”
So far, so good. But what of the ‘result unknown’ deals? Well, they explained, those were the deals that were no longer ‘open’, given the time that had elapsed, but where the company had never heard back from the customer as to the outcome of the bid.
“In other words, you didn’t actually win a piece of business?” I asked.
They – reluctantly – agreed, knowing where my logic would take me. Their win rate – apparently showing that they won more deals than they lost – was, to my mind, somewhat misleading. It hugely over-stated the true picture: counting the ‘result unknown’ deals as losses, they were actually winning more like 43% (30 / (30 + 20 + 20)). If they hadn’t heard back from the customer after a significant period of time, excluding the deal from the total felt like false optimism.
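The gap between the two figures is easy to check. A minimal sketch, using hypothetical deal counts chosen to match the percentages in the summary (e.g. out of 100 proposals):

```python
# Hypothetical counts mirroring the article's percentages (out of 100 proposals)
won = 30
lost = 20
unknown = 20     # 'result unknown': never heard back from the customer
open_deals = 30  # decision still pending; rightly excluded from both calculations

# Win rate as the team reported it: unknowns excluded entirely
reported_rate = won / (won + lost)             # 30 / 50 = 60%

# Win rate counting 'result unknown' deals as not won
realistic_rate = won / (won + lost + unknown)  # 30 / 70 ≈ 43%

print(f"Reported: {reported_rate:.0%}, realistic: {realistic_rate:.0%}")
# → Reported: 60%, realistic: 43%
```

Simply moving the twenty ‘unknown’ deals from the excluded pile to the denominator drops the apparent win rate from 60% to 43%.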
“Ah,” they commented, “but our boss wouldn’t be happy telling people that.” That’s OK, then…