It is often said that the road to hell is paved with good intentions. Someone develops this great idea that seems simple and obvious. “Of course, that will work. Why didn’t we think of it before?” It’s not until you get far enough down the road (to hell) that you realize maybe this wasn’t such a good idea. In fact, maybe it was a really BAD idea.
We often see this phenomenon in historical data management, specifically historical project cost data. The great idea is to use your operational data (all of your construction cost estimates) as your historical benchmarking data. Just make it all available in its native format and, presto, you have a historical benchmarking system.
The first problem with this approach is that the operational data is developed by many different authors (estimators) who don’t necessarily develop estimates the same way. They use varying levels of detail; they change codes, units of measure, and descriptions to suit their immediate needs; and they provide little in the way of attributes describing the estimate itself. Keep in mind, they are under tremendous pressure to develop the estimate in a short (read: ridiculous) amount of time. Meeting the deadline with a defensible response is the primary objective, and if data quality has to suffer, so be it.
And it’s not just one version of an estimate. A large project will have tens to hundreds of estimates: estimates for different stages; version 1, version 2, version 3; discipline estimates; internal vs. external estimates; alternates; options; and so on. When we’ve helped clients with retrospectives, we’ve seen $1B+ projects with over 1,000 estimates.
Now, to leverage those estimates for benchmarking purposes, you need data quality. You need consistent coding structures. You need consistent units of measure. You need attributes. You need good, clean, actionable data. Period.
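To make “consistent coding structures, units of measure, and attributes” concrete, here is a minimal sketch in Python of what a standardized benchmarking record might look like. The field names, unit list, and validation rule are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

# Hypothetical canonical unit set; in practice this comes from your standards.
ALLOWED_UOM = {"m2", "m3", "ea", "lf", "hr"}

@dataclass
class BenchmarkLineItem:
    cost_code: str        # drawn from a single, controlled coding structure
    description: str
    quantity: float
    unit_of_measure: str  # must be one of ALLOWED_UOM
    unit_cost: float
    attributes: dict = field(default_factory=dict)  # e.g. region, stage, year

    def __post_init__(self):
        # Reject non-standard units at the benchmarking boundary,
        # not inside the estimators' operational tooling.
        if self.unit_of_measure not in ALLOWED_UOM:
            raise ValueError(f"Non-standard unit of measure: {self.unit_of_measure}")
```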
So once you throw those estimates together, you quickly see the inconsistencies. You see the duplications. “Aha! I know what to do,” you tell yourself. You decide to impose lots and lots of restrictions on the operational processes. You lock down all kinds of fields. You mandate required fields. You enforce all kinds of business rules in the name of data quality.
You’ve thrown sand in the gears.
In a short amount of time, estimators begin to revolt. They can’t get the estimates out the door because the machine (the estimating process) has slowed to a crawl under the data quality standards you’ve imposed on them. And your business development team and bid schedule suffer because your productivity has dropped through the floor. It’s not pretty.
Instead, make the operational system as efficient as you can, and implement a separate historical data management process: a standards-based workflow for capturing, validating, and delivering clean, trusted, actionable data that helps you make better business decisions, supports data analytics, and drives key performance indicators and metrics.
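As a rough illustration of what “validate downstream, don’t block the estimator” might look like, here is a hedged Python sketch. The rules and field names are hypothetical; the point is the shape of the process: capture everything, then split records into clean and quarantined piles rather than rejecting them at entry:

```python
# Hypothetical downstream validation: estimators work unimpeded in the
# operational system; this step gates what enters the benchmarking store.

def validate_record(record: dict) -> list[str]:
    """Return a list of data quality issues; an empty list means clean."""
    issues = []
    if not record.get("cost_code"):
        issues.append("missing cost_code")
    if record.get("unit_of_measure") not in {"m2", "m3", "ea", "lf", "hr"}:
        issues.append(f"non-standard unit: {record.get('unit_of_measure')}")
    if not record.get("attributes", {}).get("project_stage"):
        issues.append("missing project_stage attribute")
    return issues

def capture(records: list[dict]):
    """Split incoming records into (clean, quarantined) piles."""
    clean, quarantined = [], []
    for rec in records:
        problems = validate_record(rec)
        if problems:
            quarantined.append((rec, problems))  # fixed later by the data team
        else:
            clean.append(rec)                    # flows into the benchmark store
    return clean, quarantined
```

The design point: the quarantine queue becomes the historical data team’s backlog, not the estimator’s roadblock.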
Save yourselves! Don’t throw sand in the gears.