Does Data Governance Make You Shudder?

At a recent vendor conference, I found myself talking with a varied group of technology professionals. Two were technology generalists, one was a data engineer, one was responsible for transportation technology at a major university (think autonomous vehicles, traffic sensors, etc.), another was responsible for university student and teacher data (lesson plans, research findings, etc.), and one was responsible for his organization’s IT security. 

During the conversation, someone mentioned data governance. Immediately there was a conspicuous and collective sigh around the table. Our group clearly found the subject intimidating and uncomfortable.

Why does the mere mention of data governance provoke that kind of response?

One reason is probably that the potential scope of a data governance effort is so wide. It could basically involve every possible task associated with data management. 

Further, the word “governance” emphasizes the importance of taking those tasks seriously, and getting them right. So when you combine “there’s a lot to do” with “and it’s all important,” fear kindles in the hearts of those responsible.

And rightly so: the consequences of poor data governance are significant. They range from regulatory fines and sanctions for failing to adequately protect data or for noncompliance, to the insidious costs of bad data quality, such as missed business opportunities due to poor decision-making or lost customers due to low service levels.

But there are a lot of “big and important” topics in IT, and they don’t all make a diverse group of seasoned professionals wince. I decided to do some research and dig a little deeper into why data governance seems to be outside our collective comfort zone.

One thing that came up right away is that data governance is defined and described in diverse ways. Moreover, the terms used to describe the activities or responsibilities that comprise data governance aren’t defined or used the same way by everyone. Anytime I tried to define a term, I’d find another term that meant the same thing… sometimes, depending on context. In other words, the definitions tend to morph depending on one’s viewpoint.

That variability and inconsistency made just framing this blog post difficult—never mind a program that “…includes the people, processes and technologies needed to manage and protect the company’s data assets…” and impacts an organization at strategic, tactical and operational levels. 

Indeed, there’s an axiom in management theory that “You can’t manage what you can’t name.” Further, “You can’t properly manage what you don’t define explicitly.” In other words, how you define a data governance program will significantly impact your ability to manage it successfully.

Given that a key element of data governance is ensuring the consistency of data definitions across an organization, I find it ironic that we don’t have consistent, agreed-upon definitions of the terms for the components of data governance itself.

Normally when I write about a complex topic, I break it down into a list of subtopics and then decompose each of those—similar to how I would attack a complex software development project or database design endeavor. But all the variability and overlap among terms that I encountered around data governance forced me to change not only my approach to writing this post, but the whole focus of the post. 

Instead of working top-down, I had to work bottom-up. Below I’ve listed some subheadings that are parts of data governance, followed by tasks or responsibilities that relate to them. Your mission—if you choose to accept it—is to take a few minutes to decide under which subheading you would place each task.

So here are the subheadings that I started with:

  • Data Management (aka Database Management)
  • Data Security
  • Data Stewardship
  • Data Quality
  • Master Data Management
  • Regulatory Compliance (GDPR, PCI, HIPAA)

Here is my list of many (but by no means all) of the critical tasks that need to be completed in order to ensure that your data is relevant, available, secure, and optimized (i.e., “governed”). 

Under which subheading would you put each of these tasks if you were to document your data governance activities?

  • Data Encryption
  • Data Masking
  • Data Access Control
  • High Availability
  • Disaster Recovery
  • Data Lifecycle Management
  • Data Version Tracking
  • Data Custody Tracking and Control
  • Data Provenance Tracking
  • Change Tracking and Management
  • Data Access Auditing
  • Data Update Auditing
  • Data Validation
  • Define Business Rules for Data
  • Metadata Management and maintaining consistent data definitions
  • Managing Taxonomies and Naming Conventions

Some of the tasks seem to map to an obvious subheading, such as Metadata Management and Managing Taxonomies and Naming Conventions being grouped under Master Data Management, or Data Encryption, Data Masking and Data Access Control being grouped under Data Security.

But you could group Data Access Control under Data Stewardship as well, along with many other tasks. In fact, Data Stewardship is used somewhat interchangeably with Data Governance… sometimes. And which tasks fit under Compliance? Maybe all of them? 

My personal takeaway from all this is that it may be better to look at this particular issue from the bottom up instead of the top down. When wrapping our minds around data governance, we might want to look at all the relevant lower-level tasks (lower in this hierarchy, not in importance), and think about what is involved in each and what tools can help us implement them.

Don’t get too caught up in defining terms or in categorizing tasks into subgroupings, as I did for the purposes of this discussion. At least when it came to writing this blog post, I found that to be the most intimidating part.

Are you looking for strategic, tactical and/or operational support around a data governance program or related initiative? Contact Buda Consulting and let’s talk about where you are, where you need to be and how we can help.

Compliance 101 for Oracle DBAs

Regulatory compliance issues are top-of-mind for today’s senior executives. New laws and industry regulations are changing how organizations acquire, store, manage, retain and dispose of data. Every Oracle DBA should be aware of these changes because of their sweeping impacts on the DBA job role.

Compliance goes hand-in-hand with security because regulations often mandate that organizations be able to attest or even prove that data—and therefore databases—are secure and controlled. In this context, Oracle DBAs are directly involved in implementing and managing the policies and technologies that support compliance.

What are some of the key regulations that impact Oracle DBAs? Here in the US, one of the most prevalent is Sarbanes-Oxley (SOX), aka the Public Company Accounting Reform and Investor Protection Act of 2002. SOX is meant to reduce fraud and improve financial reporting, and its impact on IT is sweeping. In particular, it holds the CFO responsible for certifying the processes used to produce financial reports, which invariably involve software accessing data stored in databases that DBAs maintain.

For healthcare organizations the major regulatory worry is HIPAA, the Health Insurance Portability and Accountability Act. HIPAA mandates security measures for patients’ protected health information (PHI)—to the extent that an organization must be able to document every time a PHI data element was viewed. HIPAA audits often focus on the processes that drive exception logs and reports, so database auditing is critical in this regard.
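
To make that concrete, here is a minimal sketch of the kind of access summary an auditor might ask for. It assumes a hypothetical access log with made-up field names; in a real Oracle environment the underlying records would come from the database's own auditing facilities (Unified Auditing or Fine-Grained Auditing, for example), not an application-side list.

```python
# Illustrative only: summarize a hypothetical PHI access log (e.g., exported from
# the database's audit trail) into the kind of report a HIPAA auditor asks for.
from collections import Counter, defaultdict

def summarize_phi_access(log_rows, distinct_patient_threshold=200):
    """Count PHI views per user and flag users who viewed an unusual number of patients."""
    views_per_user = Counter()
    patients_per_user = defaultdict(set)

    for row in log_rows:
        if row["action"] != "VIEW":
            continue  # only read access matters for this particular report
        views_per_user[row["username"]] += 1
        patients_per_user[row["username"]].add(row["patient_id"])

    # Simple exception rule; the threshold is arbitrary for the sketch.
    exceptions = {user: len(patients) for user, patients in patients_per_user.items()
                  if len(patients) > distinct_patient_threshold}
    return views_per_user, exceptions

if __name__ == "__main__":
    sample_log = [
        {"timestamp": "2024-03-01T09:14", "username": "jsmith", "patient_id": "P-1001", "action": "VIEW"},
        {"timestamp": "2024-03-01T09:15", "username": "jsmith", "patient_id": "P-1002", "action": "VIEW"},
        {"timestamp": "2024-03-01T09:16", "username": "aturing", "patient_id": "P-1001", "action": "UPDATE"},
    ]
    totals, exceptions = summarize_phi_access(sample_log, distinct_patient_threshold=1)
    print(totals)      # Counter({'jsmith': 2})
    print(exceptions)  # {'jsmith': 2}
```

The log format and threshold are placeholders; the point is simply that the raw audit trail has to exist before any exception report can be built on top of it.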

Here are some typical Oracle DBA tasks that directly relate to compliance:

  • Data quality and metadata management. Ensuring data quality is key to regulatory compliance. If data or metadata aren’t accurate, how can the right data elements be subject to the appropriate regulatory controls?
  • Database auditing. As mentioned above, robust database audit capabilities can be essential for compliance with HIPAA and other regulations and policies that mandate tracking database usage. What data was accessed when and by whom? Database audit software can tell you. Database auditing is also vital for overall information security and detection of security breaches, especially against internal threats.
  • Data masking and obfuscation. Data masking is generally used to render production data suitable for testing: the masked data looks and behaves like the original, but it no longer constitutes personally identifiable information, cardholder data and so on for regulatory purposes. Masking is also important for shielding sensitive data from staff (e.g., third-party contractors) working in non-production environments (see the sketch after this list).
  • Database archiving and long-term data retention. Regulations often mandate what data must be stored for what period of time. This is also important for legal/eDiscovery purposes.
  • Database recovery. Database recovery is also a compliance issue, because it relates to database integrity and availability. If data is lost and can’t be recovered, that can be as problematic as a security breach from a regulatory perspective.
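
As promised above, here is a minimal masking sketch, assuming a hypothetical customer record with made-up field names and a placeholder key. In practice this work is usually done with dedicated tooling (Oracle's Data Masking and Subsetting Pack, for example), but the sketch shows the two common moves: pseudonymizing identifiers and partially redacting values.

```python
# Illustrative only: deterministic masking of a hypothetical customer record
# before it is copied into a non-production environment.
import hashlib
import hmac

MASKING_KEY = b"rotate-and-store-this-key-securely"  # placeholder secret

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, irreversible token (same input -> same token)."""
    return hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:12]

def mask_card_number(pan: str) -> str:
    """Keep only the last four digits so the value stays recognizable but unusable."""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

def mask_record(record: dict) -> dict:
    masked = dict(record)
    masked["ssn"] = pseudonymize(record["ssn"])
    masked["email"] = pseudonymize(record["email"]) + "@example.invalid"
    masked["card_number"] = mask_card_number(record["card_number"])
    return masked

if __name__ == "__main__":
    prod_row = {"customer_id": 1001, "ssn": "123-45-6789",
                "email": "jane@example.com", "card_number": "4111 1111 1111 1111"}
    print(mask_record(prod_row))
```

Because the pseudonymization is keyed and deterministic, the same input always yields the same token, which keeps joins and test cases consistent across masked tables.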

If you’re not sure whether your Oracle database policies, procedures and controls are adequate to support regulatory compliance, Buda Consulting can help. Contact us to discuss a database security assessment to identify areas of noncompliance and provide whatever assistance you need to address them.

The High Cost of Bad Data—And How To Avoid Paying It

How much is bad data costing your organization?

Data is only useful if it is timely, relevant and reliable. Incorrect, redundant and incomplete data introduces risks that negatively impact business operations and skew business analytics/decision-making. Poor data quality also impacts information security and compliance programs. And as businesses amass more and more data, just the cost of storing bad data becomes significant.

Looking at “the big picture,” the cost of bad data to organizations is mind-boggling. For example, a Gartner study found that poor data quality affects overall labor productivity by as much as 20% and costs companies between 15% and 20% of their operating budgets. A recently published infographic from InsightSquared showed that the aggregate cost of bad data to US businesses exceeds $600 billion, while its total impact on the US economy is estimated at $3.1 trillion.

However, organizations from SMBs to major corporations often fail to address data quality issues in a systematic way. In one report, 68% of companies admitted that they don’t even attempt to measure the cost of poor data quality.

Even if organizations understand the value of improving data quality, they may not have the expertise or the bandwidth to identify and address problems. This is where an outsourced Oracle DBA expert can help.

The first step is usually to conduct an audit to assess which data quality problems are most prevalent in the organization. The most common issues include data duplication, invalid data/formats, incomplete data, data conflicts and outdated/obsolete data.

The next step is to put controls in place that address the root causes of data quality issues, so that newly acquired data is of higher quality. There’s little point in identifying and correcting erroneous data until such controls exist. The role of the DBA in this context is to build constraints into the databases that enforce value ranges, ensure uniqueness and so on.
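
Those constraints normally live in the database itself (NOT NULL columns, CHECK constraints, unique and foreign keys). Purely as an illustration of the kinds of rules involved, here is a minimal sketch that applies equivalent checks to an incoming record in application code; the record layout, value range and rules are hypothetical.

```python
# Illustrative only: application-side checks that mirror typical database constraints
# (NOT NULL, value ranges, uniqueness). Field names and limits are hypothetical.

def validate_order(order: dict, seen_order_ids: set) -> list[str]:
    """Return a list of rule violations for one incoming record."""
    errors = []

    # NOT NULL-style rule
    for required in ("order_id", "customer_id", "order_date"):
        if not order.get(required):
            errors.append(f"{required} is missing")

    # CHECK-style rule: value must fall in an allowed range
    if not (0 < order.get("amount", -1) <= 1_000_000):
        errors.append("amount outside allowed range")

    # UNIQUE-style rule
    if order.get("order_id") in seen_order_ids:
        errors.append("duplicate order_id")

    return errors

if __name__ == "__main__":
    seen = {"A-1001"}
    bad = {"order_id": "A-1001", "customer_id": None, "order_date": "2024-03-01", "amount": -5}
    print(validate_order(bad, seen))
    # ['customer_id is missing', 'amount outside allowed range', 'duplicate order_id']
```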

Data quality controls, in turn, depend on accurate metadata—the data definitions against which controls are applied. In the same way that a building depends on a solid foundation, high data quality isn’t possible without high-quality metadata. For consumers of data, metadata explains the who, what, when, where and why of the data. To apply regulations and policies to data, for example, the corresponding metadata must be correct.

Once controls and metadata have been improved, it’s time to correct the existing data itself. The challenges here can be daunting: according to data quality experts, billing records generally have error rates between 2% and 7%, for example. Data profiling is one way to isolate and prioritize the most problematic areas within the most valuable data assets, the “quick wins” where effort spent correcting bad data pays off first.
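
A profiling pass doesn't have to be elaborate to be useful. Here is a minimal sketch that counts occurrences of the issue types listed above across a set of customer records; the field names, the email format check and the "stale after three years" rule are all assumptions made for the example.

```python
# Illustrative only: profile customer records for the issue types discussed above.
# Field names, formats and thresholds are hypothetical.
import re
from datetime import date, timedelta

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def profile(records: list[dict]) -> dict:
    counts = {"duplicates": 0, "invalid_email": 0, "incomplete": 0, "stale": 0}
    seen_ids = set()
    cutoff = date.today() - timedelta(days=3 * 365)  # "stale" = untouched for ~3 years

    for r in records:
        if r.get("customer_id") in seen_ids:
            counts["duplicates"] += 1
        seen_ids.add(r.get("customer_id"))

        if not EMAIL_RE.match(r.get("email", "")):
            counts["invalid_email"] += 1
        if not all(r.get(field) for field in ("customer_id", "email", "postal_code")):
            counts["incomplete"] += 1
        if r.get("last_updated", date.min) < cutoff:
            counts["stale"] += 1

    return counts

if __name__ == "__main__":
    sample = [
        {"customer_id": 1, "email": "a@b.com", "postal_code": "08540", "last_updated": date(2015, 1, 1)},
        {"customer_id": 1, "email": "not-an-email", "postal_code": "", "last_updated": date.today()},
    ]
    print(profile(sample))
    # {'duplicates': 1, 'invalid_email': 1, 'incomplete': 1, 'stale': 1}
```

Counts like these are what let you rank which tables and columns deserve correction effort first.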

Especially in markets like financial services, where data has a very short “shelf life” and timely, accurate decisions are the cornerstone of success, high data quality is paramount. It’s also vital in highly regulated industries like healthcare, where poor data quality can undermine compliance and literally constitute a crime.

To talk with an expert about how to assess and address data quality concerns in your business, contact Buda Consulting.

Financial Services Spotlight: Risk Data Aggregation and Risk Reporting

The finance, risk and compliance departments of any financial services firm all need fast and comprehensive access to business data in order to measure performance, manage risk and report to regulators and clients. But each department needs a specific view, whether strategic, operational or a combination of the two.

Risk data aggregation in particular has garnered considerable attention since the Basel Committee on Banking Supervision (BCBS) published Principles for effective risk data aggregation and risk reporting, often called BCBS 239, in 2013. Global systemically important banks are required to be fully compliant with all eleven of BCBS 239’s bank-facing principles by January 1, 2016—and many will require considerable resources and expertise to get there.

In the past, risk managers have often had to decide for themselves what data they needed. But regulators are now specifying much more about what a risk management analytical framework needs to look like. The goal is to help financial services institutions, individually and collectively, manage counterparty and systemic risk and prevent a repeat of the 2008 financial crisis.

One of the key lessons learned in the aftermath of the 2008 crisis was that financial services organizations’ IT systems and data architectures were inadequate to support the management of financial risks. In particular, many firms could not aggregate risk exposures or identify concentrations of unacceptable risk quickly and accurately at the group level and across lines of business.

Data aggregation frameworks can offer a complete view of the risk inherent in each exposure, counterparty, customer, product and so on—in minutes rather than days. Having the right information at hand to make an optimal decision quickly can make an enormous difference; the time it took many banks to work out their total exposure to Lehman Brothers at the peak of the crisis is a cautionary illustration of the value of such a system.
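
The aggregation step itself is conceptually simple, even though doing it reliably across dozens of source systems is not. Here is a minimal sketch that rolls up exposures by counterparty across business lines and flags concentrations above a limit; the counterparties, figures and limit are invented for the example, and a BCBS 239-grade implementation would also need lineage, reconciliation and data quality controls behind it.

```python
# Illustrative only: roll up exposures by counterparty across business lines and
# flag concentrations above a limit. All names, figures and the limit are hypothetical.
from collections import defaultdict

EXPOSURES = [
    # (business_line, counterparty, exposure in USD millions)
    ("Fixed Income", "Counterparty A", 420.0),
    ("Equities",     "Counterparty A", 310.0),
    ("Derivatives",  "Counterparty B", 150.0),
    ("Lending",      "Counterparty A", 275.0),
]

CONCENTRATION_LIMIT_MM = 750.0

def aggregate_by_counterparty(exposures):
    totals = defaultdict(float)
    for _line, counterparty, amount in exposures:
        totals[counterparty] += amount
    return dict(totals)

if __name__ == "__main__":
    totals = aggregate_by_counterparty(EXPOSURES)
    for cp, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        flag = "  <-- over limit" if total > CONCENTRATION_LIMIT_MM else ""
        print(f"{cp}: {total:,.1f}mm{flag}")
    # Counterparty A: 1,005.0mm  <-- over limit
    # Counterparty B: 150.0mm
```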

But despite the clear mandate and clear benefits of developing compliant risk data aggregation and risk reporting, the Basel Committee’s December 2013 preparedness survey of thirty “systemically important banks” showed that these banks self-rated their compliance at 2.8 overall on a scale from 1 (noncompliant) to 4 (fully compliant). Principles 2, 3 and 6 (data architecture/IT governance, accuracy/integrity and “adaptability,” respectively) scored the lowest, at around 2.5 to 2.6, and half the respondents indicated that they were far from compliant in these areas. Since the principles are all interdependent, weakness in even a few of them makes overall compliance a significant challenge.

To comply with BCBS 239 in time, financial services companies will need to:

  • Automate manual processes to accelerate data management and analytics
  • Consolidate today’s disparate views of risk
  • Bolster the reliability of risk systems and risk data quality assurance
  • Improve risk data governance, data ownership and procedural documentation

According to experts at Oracle, speeding up data management and analytics practices is key to avoiding the kind of risk that rocked the world in 2008. Firms that embrace near real-time/on-demand analytics and similar data management technologies will be able to aggregate data much faster across different classes, different lines of business and different data structures—including unstructured data. This will enable them to better pinpoint and evaluate risk to predict problems before they become catastrophic.

Of course, data quality is also vital. How can a firm calculate or predict risk exposure if its data is unreliable or incomplete?

Due to the complexity and diversity of many financial services firms’ data management systems, an objective, third-party assessment of where you are and where you need to go can be the best way to move “with all deliberate speed” towards compliance with BCBS 239.

Buda Consulting has over fifteen years’ experience with building and assessing these kinds of complex, mission-critical database applications. Get in touch to discuss how we can help you evaluate and address your risk data aggregation and reporting challenges.