The High Cost of Bad Data—And How To Avoid Paying It

How much is bad data costing your organization?

Data is only useful if it is timely, relevant and reliable. Incorrect, redundant or incomplete data introduces risks that disrupt business operations and skew analytics and decision-making. Poor data quality also undermines information security and compliance programs. And as businesses amass more and more data, even the cost of simply storing bad data becomes significant.

Looking at the big picture, the cost of bad data to organizations is staggering. For example, a Gartner study found that poor data quality reduces overall labor productivity by as much as 20% and costs companies between 15% and 20% of their operating budgets. An infographic from InsightSquared put the aggregate cost of bad data to US businesses at more than $600 billion, and its total impact on the US economy at an estimated $3.1 trillion.

However, organizations from SMBs to major corporations often fail to address data quality issues in a systematic way. In one report, 68% of companies admitted that they don’t even attempt to measure the cost of poor data quality.

Even if organizations understand the value of improving data quality, they may not have the expertise or the bandwidth to identify and address problems. This is where an outsourced Oracle DBA expert can help.

The first step is usually an audit to determine which data quality problems are most prevalent in the organization. The most common issues include data duplication, invalid data or formats, incomplete data, conflicting data, and outdated or obsolete data. Often a few simple queries, like the sketch below, are enough to surface them.
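As a rough illustration only, an audit query against a hypothetical CUSTOMERS table (the table, columns and email rule are assumptions, not part of any specific engagement) might look for duplicates, missing values and invalid formats like this:

    -- Hypothetical CUSTOMERS table, used only to illustrate a quick data quality audit.
    -- Duplicate email addresses often indicate duplicate customer records.
    SELECT email, COUNT(*) AS occurrences
    FROM   customers
    GROUP  BY email
    HAVING COUNT(*) > 1;

    -- Incomplete data: rows missing a phone number.
    SELECT COUNT(*) AS missing_phone
    FROM   customers
    WHERE  phone IS NULL;

    -- Invalid formats: email values that fail a simple pattern check.
    SELECT COUNT(*) AS invalid_email
    FROM   customers
    WHERE  email IS NOT NULL
    AND    NOT REGEXP_LIKE(email, '^[^@]+@[^@]+\.[^@]+$');

Counts like these give a first, rough measure of how widespread each category of problem is before any cleanup effort begins.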

The next step is to put controls in place that address the root causes of data quality issues, so that newly acquired data is of higher quality. There is little point in identifying and correcting erroneous data before such controls exist, because new bad data will simply keep flowing in. The DBA’s role in this context is to build constraints into the databases that enforce value ranges, ensure uniqueness and so on, as sketched below.
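A minimal sketch of what such database-level controls can look like in Oracle, assuming a hypothetical ORDERS table (the table, columns and allowed status values are purely illustrative):

    -- Hypothetical ORDERS table: constraints reject bad data at the point of entry.
    -- Ensure uniqueness of the business key.
    ALTER TABLE orders
      ADD CONSTRAINT orders_order_no_uk UNIQUE (order_number);

    -- Enforce a valid value range.
    ALTER TABLE orders
      ADD CONSTRAINT orders_qty_ck CHECK (quantity > 0);

    -- Restrict a column to an agreed set of values.
    ALTER TABLE orders
      ADD CONSTRAINT orders_status_ck
      CHECK (status IN ('NEW', 'SHIPPED', 'CANCELLED'));

    -- Require that every order is linked to a customer.
    ALTER TABLE orders
      MODIFY (customer_id NOT NULL);

Because these rules live in the database itself, they apply no matter which application or integration feeds the data in.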

Data quality controls, in turn, depend on accurate metadata: the data definitions against which the controls are applied. Just as a building depends on a solid foundation, high data quality is not possible without high-quality metadata. For consumers of data, metadata explains the “who, what, when, where and why” of the data. To apply regulations and policies to data, for example, the corresponding metadata must be correct.
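One lightweight way to keep those definitions close to the data in Oracle is to record them as table and column comments in the data dictionary; the example below is a sketch using the same hypothetical ORDERS table:

    -- Hypothetical example: store data definitions in the Oracle data dictionary.
    COMMENT ON TABLE  orders            IS 'Customer orders captured by the order entry system';
    COMMENT ON COLUMN orders.order_date IS 'Date the order was placed (not the ship date)';
    COMMENT ON COLUMN orders.status     IS 'Order state: NEW, SHIPPED or CANCELLED';

    -- The definitions can then be queried like any other data.
    SELECT column_name, comments
    FROM   user_col_comments
    WHERE  table_name = 'ORDERS';

Dictionary comments are no substitute for a full metadata or data governance catalog, but they give every consumer of the database a shared, queryable definition of each field.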

Once controls and metadata have been improved, it is time to correct the existing data itself. The challenge can be daunting: according to data quality experts, billing records commonly have error rates between 2% and 7%, for example. Data profiling is one way to isolate and prioritize the most problematic areas within the most valuable data assets; these are the “quick wins” where it pays to invest effort in correcting bad data. A basic profiling query is sketched below.
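As a simple sketch, profiling the completeness and cardinality of a single column can be done with one query; the BILLING_RECORDS table and BILLING_CODE column here are hypothetical names used only for illustration:

    -- Hypothetical profiling query: how complete and how varied is a billing column?
    SELECT COUNT(*)                     AS total_rows,
           COUNT(billing_code)          AS populated_rows,
           COUNT(DISTINCT billing_code) AS distinct_values,
           ROUND(100 * (COUNT(*) - COUNT(billing_code)) / COUNT(*), 2) AS pct_missing
    FROM   billing_records;

Running the same kind of profile across key tables quickly shows which columns have the highest rates of missing or suspect values, and therefore where correction effort will pay off first.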

High data quality is paramount in markets like financial services, where data has a very short “shelf life” and timely, accurate decisions are the cornerstone of success. It is equally vital in highly regulated industries like healthcare, where poor data quality can undermine compliance and may even constitute a legal violation.

To talk with an expert about how to assess and address data quality concerns in your business, contact Buda Consulting.