Data Governance - A Thought Leader's Perspective

Benedetta Cittadin
Jul 29, 2022 · 6 min read


If you are a Data Leader in 2022, Data Governance is most definitely on your radar. Regardless of your organization's data maturity stage, chances are, you have already implemented or started implementing a Data Governance Strategy. 

Data Quality is one of the core pillars of a successful Data Governance Strategy. During Entropy 2022 - A Data Management Conference by Sifflet - we sat down with Dan Power, thought leader and Managing Director of Data Governance at State Street Asset Management, to pick his brain on the topic.

What are some of the consequences of having bad data quality? How do you measure its impact?

For Dan, you can measure the impact of bad data; you first need the will to do it and some kind of tool. He explained that he is particularly concerned about the interplay between data, data quality, and data governance, mainly because some of the most critical downstream data consumers are Artificial Intelligence and Machine Learning models.

There's a fascinating documentary on this topic on Netflix - Coded Bias - in which MIT researcher Dr. Joy Buolamwini investigated facial recognition. She discovered that none of the commercially available facial recognition models could accurately recognize Black women, which is a huge issue. Imagine the problems this could cause in credit decisions: if the facial recognition model does not recognize you, the automated credit decision model won't lend you any money. So bad data quality produces a lot of disparate outcomes for marginalized groups.

Dan has been advocating for a debiasing phase in Artificial Intelligence and Machine Learning projects. According to him, many automated-discrimination issues exist simply because nobody thought about using different data to train their models. That thought process is essential, because it is what allows you to look for data quality outliers and other potential problems. For Dan, organizations need to look at data early and often, with the mindset that quality is not something you tack on at the end; you need to bring it into your lifecycle very early on.
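As an illustration of what one small check in such a debiasing phase could look like, here is a minimal sketch in Python. The column name, file name, and 10% representation floor are made-up assumptions for this example; Dan does not prescribe any specific implementation.

```python
# A minimal sketch of one check a debiasing phase might include: flagging
# demographic groups that are badly under-represented in the training data.
# The column name, file name, and 10% floor are illustrative assumptions.
import pandas as pd

MIN_SHARE = 0.10  # hypothetical floor for each group's share of training rows

def under_represented(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Return each group's share of rows, keeping only groups below the floor."""
    shares = df[group_col].value_counts(normalize=True)
    return shares[shares < MIN_SHARE]

train = pd.read_csv("training_data.csv")  # hypothetical training set
flagged = under_represented(train, "demographic_group")
if not flagged.empty:
    print("Groups below the representation floor:")
    print(flagged)
```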

At what stage should organizations start thinking about implementing data quality monitoring solutions? 

Dan has no doubt about this: it is imperative to begin very early on. He works at a company that is 225 years old, and in cases like that, it is difficult - but it is possible to catch up to some extent. Dan believes data quality is not rocket science; the hard part comes when you don't do it, because then you end up inferring and manually fixing things yourself. Tracking down every occurrence of a single mistake and correcting it by hand can take up to a week's worth of work. Fixing everything at the source is the most efficient way to prevent such situations.

However, it is also important to remember that nothing is ever perfect. Even the dictionary has typos! And the closer you get to 100% accuracy, the more expensive it gets - so, at times, it might not even be worth it. People are usually uncomfortable with the idea that nothing can ever be 100% accurate. For this reason, you need to make them comfortable with it by defining a threshold for what's good enough.
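To make the idea of a "good enough" threshold concrete, here is a minimal sketch assuming a pandas DataFrame of customer records; the 99.5% completeness target, file name, and column name are made-up examples, not figures from the interview.

```python
# A minimal sketch of a threshold-based quality check: instead of chasing
# 100% accuracy, agree on a "good enough" level and alert only below it.
# The 99.5% target, file name, and column name are illustrative assumptions.
import pandas as pd

GOOD_ENOUGH = 0.995  # agreed threshold; pushing toward 100% gets ever costlier

def completeness(df: pd.DataFrame, column: str) -> float:
    """Share of rows where `column` is populated (not null)."""
    return df[column].notna().mean()

customers = pd.read_csv("customers.csv")  # hypothetical input
score = completeness(customers, "email")
if score < GOOD_ENOUGH:
    print(f"email completeness {score:.2%} is below the {GOOD_ENOUGH:.1%} threshold")
else:
    print(f"email completeness {score:.2%} meets the agreed threshold")
```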

Another prohibitive practice is tackling things without any tools - for instance, by writing everything in the native language of the platform. That way, tracking down quality issues becomes slow, inefficient, and in practice nearly impossible. A crucial first step to fixing this is educating people within organizations about data quality and automation, so they start building both into their lifecycles early on.

How should people think about data quality and governance frameworks? How much of it is cultural/enterprise data literacy vs. tooling? 

For Dan, people are always the most challenging part; "culture eats strategy for breakfast" here too. We are now dealing with reskilling or upskilling a workforce that is very familiar with Excel. Dan believes there are a few crucial steps to take. First, organizations need to provide training: employees need data literacy, but most importantly, they must realize that Excel is not a data quality tool. You also cannot expect everybody to have the same skills as developers and write SQL. So, if your data quality tool is based on SQL, Dan thinks it will most likely be a failure, because the end users - analysts - need to be able to use data without being programmers. You want to empower your analysts and make them more productive, not held back by the availability of a developer. All you need for this is a bit of literacy and training around the data quality tool you choose.

When we talk about the people side, there are conflicting opinions about top-down versus bottom-up. What is the best approach? Does this require executive sponsorship driven down through the organization, or can it be fostered from the data teams going up?

Dan thinks this depends on the timeline. If you think of a 1-to-5 maturity scale, when you are first starting out, you will probably be at a 1. In this case - in immature organizations - the bottom-up approach can be successful, because at that stage organizations are trying to respond to crises with whatever techniques or tools are at hand. When they finally find a sustainable way to solve their problems, they start expanding the use of the tool within the organization. This works until organizations reach a certain point - roughly halfway up the maturity model - where they need dedicated budgets to adopt solutions. That, unfortunately, is not going to work bottom-up: the allocations required cover both technology and instilling processes, and successfully creating and implementing a data governance team with a data quality capability takes a lot of resources.

Dan also explained that it is crucial to get management personally involved. For him, the best way to move from a defensive approach - where regulatory compliance is the main focus - to an offensive approach is through the active involvement of executives in the project. Getting executives personally invested makes them care about it over the years, and that is the best way to ensure investments are kept up throughout the project.

Data quality used to be a knee-jerk reaction, but the culture is rapidly changing. Dan explained that, over the past four or five years, people have finally understood that data governance is not a nice-to-have anymore but a need-to-have. This also applies to AI and Machine Learning: everyone is interested in getting the newest tools, but data governance has to be the foundation of any project that adopts them. Dan agreed that taking a proactive approach is critical. Being able to react when something happens is very important, but you also need to ensure that, before any incident occurs, you are doing proactive audits of the data and educating people on the importance of data quality. Perfect data doesn't exist, but this approach allows organizations to fix issues before they blow up into something bigger.

What is the role of collaboration and having a community in data governance?

For Dan, community is the only way to make progress. And for him, there are two kinds of communities. The first is collaboration within firms. In his organization, Dan has organized a weekly collaboration meeting for all the people in his position - data governance leads - to share best practices, insights, and ideas. The second kind of collaboration is across firms and industries. That is also very important because this is where innovative ideas come from. 

Originally posted here
