If you are a Data Leader in 2022, Data Governance is most definitely on your radar. Regardless of your organization's data maturity stage, chances are, you have already implemented or started implementing a Data Governance Strategy.
Data Quality is one of the core pillars of a successful Data Governance Strategy. During Entropy 2022 - A Data Management Conference by Sifflet, we sat down with Dan Power, Thought Leader and MD in Data Governance at State Street Asset Management, to pick his brain on the topic.
For Dan, the impact of bad data can be measured; you just need the will to do it and the right tooling. He is particularly concerned about the interplay between data, data quality, and data governance, mainly because some of the most critical downstream data consumers are Artificial Intelligence and Machine Learning systems. There's a fascinating Netflix documentary on this topic - Coded Bias - which follows MIT researcher Dr. Joy Buolamwini's work on facial recognition. She discovered that commercially available facial recognition models were far less accurate at recognizing Black women, which is a huge issue. Imagine the problems such cases could bring when it comes to credit decisions: if the facial recognition model does not recognize you, the automated credit decision model won't lend you any money. Bad data quality, in other words, produces disparate outcomes for marginalized groups.

Dan has been advocating for a debiasing phase in Artificial Intelligence and Machine Learning projects. According to him, many issues of automated discrimination exist simply because nobody thought to train their models on more diverse data. That thought process is essential, because it is what allows you to look for data quality outliers and other potential problems. For Dan, organizations need to look at data early and often, and adopt the mindset that quality is not something you tack on at the end; you need to bring it into your lifecycle very early on.
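One lightweight way to surface the disparate outcomes Dan describes is to break a model's accuracy down by demographic group instead of reporting a single aggregate number. The sketch below is a minimal illustration in plain Python; the group names and evaluation records are hypothetical, not taken from any real model.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, prediction, label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records: the aggregate accuracy (~67%)
# hides that one subgroup is recognized far less reliably.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(accuracy_by_group(records))  # group_a: 1.0, group_b: ~0.33
```

A per-group breakdown like this is a natural artifact to produce during the debiasing phase Dan advocates: it makes representation gaps in the training data visible before a model reaches production.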
Dan has no doubt about this: it is imperative to begin very early on. He works at a company that is 225 years old, so in such cases it is difficult, but it is possible to catch up to some extent. For Dan, data quality is not rocket science until you neglect it; at that point you are left trying to infer and manually fix things yourself. It can take up to a week's worth of work to track down every occurrence of a single mistake and then correct it by hand. Fixing everything at the source is the most efficient way to prevent such situations.
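To make the contrast concrete: even when a fix cannot happen at the source, scanning for every occurrence of a known bad value can be automated rather than hunted down by hand over a week. This is a hypothetical sketch, assuming datasets held as simple lists of row dictionaries; the dataset and column names are illustrative only.

```python
def find_occurrences(datasets, column, bad_value):
    """Return (dataset_name, row_index) for every occurrence of bad_value."""
    hits = []
    for name, rows in datasets.items():
        for i, row in enumerate(rows):
            if row.get(column) == bad_value:
                hits.append((name, i))
    return hits

# Hypothetical example: an inconsistent country spelling has crept
# into several datasets and must be located everywhere it appears.
datasets = {
    "orders":    [{"country": "USA"},  {"country": "U.S."}],
    "customers": [{"country": "U.S."}, {"country": "France"}],
}
print(find_occurrences(datasets, "country", "U.S."))
# [('orders', 1), ('customers', 0)]
```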
However, it is also important to remember that nothing is ever perfect. Even the dictionary has typos! And the closer you get to 100% accuracy, the more expensive it gets, so at times it might not even be worth it. People are usually uncomfortable with the idea that data can never be 100% accurate. For this reason, you need to get them comfortable with it by defining a threshold of what's good enough.
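That "good enough" threshold can be expressed directly in code, so the acceptance criterion is explicit rather than implied. A minimal sketch, assuming quality is measured as the share of valid records and the 98% default is purely illustrative:

```python
def meets_threshold(valid_count, total_count, threshold=0.98):
    """Accept a dataset when its validity rate clears an agreed
    threshold, rather than chasing an unattainable 100%."""
    return (valid_count / total_count) >= threshold

print(meets_threshold(985, 1000))        # True: 98.5% clears the 98% bar
print(meets_threshold(985, 1000, 0.99))  # False: a stricter bar fails it
```

Agreeing on the threshold value itself is the organizational part of the exercise; once it is written down, the check becomes mechanical.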
Another counterproductive practice is tackling data quality without any tooling, for instance by hand-writing every check in the native language of the platform. Tracking down quality issues that way is slow and unreliable. A crucial first step in fixing this is to educate people within organizations about data quality and automation, so they build both into their life cycles early on.
For Dan, people are always the most challenging part; "culture eats strategy for breakfast" applies here too. We are now dealing with reskilling or upskilling a workforce that is very familiar with Excel. Dan believes there are a few crucial steps to take. First, organizations need to provide training: employees need data literacy, but most importantly, they must realize that Excel is not a data quality tool. You also cannot expect everybody to have the same skills as developers and write SQL. So, if your data quality tool is based on SQL, Dan thinks it will most likely fail, because the end-users - analysts - need to be able to use data without being programmers. You want to empower your analysts and make them more productive, not held back by developer availability. All this takes is a bit of literacy and training around the data quality tool you choose.
Dan thinks this depends on the timeline. On a 1-to-5 maturity scale, when you are first starting out, you will probably be at a 1. At that stage - in immature organizations - a bottom-up approach can be successful, because the organization is simply responding to crises with whatever techniques or tools are at hand. Once it finds a sustainable way to solve its problems, it starts expanding the use of the tool across the organization. This works until the organization reaches a certain point, roughly halfway up the maturity model, where it needs dedicated budgets to adopt solutions. That, unfortunately, is not going to work bottom-up: the required allocations cover both technology and process, and successfully creating a data governance team with a data quality capability takes significant resources.
Dan also explained that it is crucial to get management personally involved. For him, the best way to move from a defensive approach - where regulatory compliance is the main focus - to an offensive one is through the active involvement of executives in the project. Getting executives personally invested makes them care about the project over the years, and it is the best way to ensure that investment is sustained throughout.
Data quality used to be a knee-jerk reaction, but the culture is rapidly changing. Dan explained that, over the past four or five years, people have finally realized that data governance is no longer a nice-to-have but a need-to-have. The same applies to AI and Machine Learning: everyone is interested in getting the newest tools, but you need to treat data governance as the foundation of any such project. Dan agreed that taking a proactive approach is critical. Being able to react when something happens is important, but you also need to ensure that, before any incident occurs, you are running proactive audits of the data and educating people on the importance of data quality. Perfect data doesn't exist, but this approach allows organizations to fix issues before they blow up into something bigger.
For Dan, community is the only way to make progress. And for him, there are two kinds of communities. The first is collaboration within firms. In his organization, Dan has organized a weekly collaboration meeting for all the people in his position - data governance leads - to share best practices, insights, and ideas. The second kind of collaboration is across firms and industries. That is also very important because this is where innovative ideas come from.