Richard Walker, director at Mustard, shares his experiences of highly competitive tendering.
I’ve sat on this anecdote for a couple of years, but the timing feels right for a little rant.
For context, I’m going to quote verbatim from an anonymised email exchange with a prospective client from 2016.
“Many thanks again for the time and thought you put into the proposal you submitted to us. We received 17 proposals in total and shortlisted 3. I’m afraid on this occasion we have decided to go with another supplier. We won’t be providing individual feedback, but we selected our chosen supplier because we felt that they submitted a very clear and detailed methodology outline, which demonstrated understanding of what [ANON] wishes to learn from the evaluation and how this learning would be achieved. Their methodology combines quant and qual methods… (and) contained a detailed and realistic time line for delivery, and a budget within the limit that we feel will deliver good value for money. We were really pleased with the quality of the proposals we received, with most providing detail and evidence of tailored thinking about our evaluation needs, although some were brief and more generic.”
Yes, you read that right. For fans of the football videprinter (of a certain age), that is 17 (seventeen) proposals received, from what we can assume was an even larger number of invitations to tender. I checked, and they were invitations – the opportunity was NOT publicly advertised.
Depending on the efforts of the individual agencies approached, let’s say they spent an average of 1–2 days (say, 12 hours) costing, producing and proofing their proposals. It may well have been – and probably was – more than this, especially for the detailed proposals with “evidence of tailored thinking”, and more so given the very specific information requested in the brief; equally, some (the “generic” ones) may have invested slightly less.
So roughly 25 days (or around 200 hours) of agency time invested. Depending on the rates of the specific agencies approached, this probably equates to anywhere between £10,000 and £30,000 of “billable” time – a real opportunity cost.
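For anyone who wants to check the back-of-envelope sums, here is a minimal sketch. The hourly rates are my own illustrative assumptions for agency charge-out rates, not figures quoted anywhere in the exchange:

```python
# Back-of-envelope cost-of-sale estimate for the tender described above.
proposals = 17      # proposals actually received
hours_each = 12     # assumed average effort per proposal (1-2 days)

total_hours = proposals * hours_each   # 204 hours
total_days = total_hours / 8           # roughly 25 working days

# Illustrative charge-out rates (assumptions, not quoted figures)
low_rate, high_rate = 50, 150          # pounds per hour

print(f"{total_hours} hours, about {total_days:.1f} working days")
print(f"Collective opportunity cost: "
      f"£{total_hours * low_rate:,} to £{total_hours * high_rate:,}")
```

At those assumed rates the collective bill lands between £10,200 and £30,600 – which is where the £10,000–£30,000 range above comes from.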
Note, this was for a multi-stage research and evaluation programme with a total budget of £12,500.
Note, further, the added irony: this brief came from an organisation that provides funding and support for entrepreneurs and small businesses. I’ll not be any more specific, obviously.
Note also my understated reaction – I wasn’t particularly happy when I received that “generic” ‘thanks, but no thanks’ email quoted above. So I entered into a very polite email exchange.
I explained to [ANON] that I was “surprised” (stunned) to hear they had received 17 proposals. The level of detail requested was well beyond the norm. I further explained how these practices can be very damaging to small and medium-sized businesses – and how I would have imagined [ANON] could be more sensitive to this. It was probably the comment about “not providing individual feedback” that simultaneously iced the cake and broke the camel’s back.
There are several published guidelines on procuring research services, including a “better procurement” report by the Market Research Society.
I shared this with [ANON], and suggested perhaps a 2-stage procurement process might be fairer for all concerned in future. At which point I learned that they DID go through a 2-stage process. Starting with around 80 (eighty) organisations…
Except this was actually a 3-stage process, because the lucky three finally shortlisted (from the 17 detailed proposals) then had to invest even MORE time to pitch!
We live, we learn and we move on.
We make sure we take the time to qualify every opportunity presented to the team at Mustard. Firstly, each brief is interrogated for “fit” with our services and specialisms. In most instances we then assess the opportunity commercially, weighing the potential “cost of sale” against the potential benefit to our business and the community. This takes into account factors such as sector, budget and potential social benefit (so we can deliver our making the difference promise), as well as how many proposals have been requested and the level of detail being demanded. Where there is a good “fit” and it feels appropriate, we will invest significant time – and money – in responding to briefs. We did this recently in order to become FSQS (Hellios) qualified for our financial services clients, and a client in the food and drink sector recently praised us for investing in an omnibus question as part of developing our proposal.
In some instances we happily refer enquirers on to alternative providers (e.g. fieldwork specialists), or we will turn a brief down if it doesn’t “tick the right boxes”.
So our business development process is highly considered, and I expect our distinguished competitors have similar checks and balances in place.
And so, those organisations requesting dozens of proposals – demanding the greatest collective time for the least perceived “reward” – risk losing the opportunity to work with our industry’s leading thinkers, who, like us, will be evaluating briefs in the same way and “turning away” those they consider too much effort for too small an opportunity.
But lightning can strike twice it seems.
I decided to write this blog when, last week, we had the “Dear John” call from a regional business support provider. It was a highly personalised brief, a real opportunity for us to evidence “difference making”, received around Christmas time – but we didn’t qualify how many proposals had been requested.
It turns out that 7 (seven) proposals were received, for a maximum annual project fee of £10,000. Should we have bothered?
As directors of a small business, we obviously monitor our “strike rate” from proposals submitted, and fortunately we win at least as many proposals and tenders as we lose. I’m not sure how SMEs and consultants could operate on a 1-in-17, or even a 1-in-7, average – as would become the norm if more research buyers followed the example set by those [ANON]s who are supposed to be here to help and support us.