The modern data stack has helped democratize the creation, processing, and analysis of data across organizations. However, it has also led to a new set of challenges thanks to the decentralization of the data stack. In this post, we’ll discuss one of the cornerstones of the modern data stack—data catalogs—and why they fall short of overcoming the fragmentation to deliver a fully self-served data discovery experience.
If you lead the data team at a company with 200+ employees, there is a high probability that you have already adopted, or at least evaluated, a data catalog.
If that’s the case, you’ll find this post highly relatable.
This post is based on our own experience of building DataHub at LinkedIn and the learnings from 100+ interviews with data leaders and practitioners at various companies. There may be many reasons why a company adopts a data catalog, but here are the pain points we often come across:
The bottom line is that you want to empower your stakeholders to self-serve the data, and more importantly, the right data. The data team doesn't want to be bogged down by support questions, and data consumers don't want to depend on the data team to answer them. Both share a common goal—True Self-service Data Discovery™.
In our research, we saw striking similarities in companies attempting to solve this problem themselves. The story often goes like this:
Voila! You now have a full self-service solution and proudly declare victory over all data discovery problems.
Let’s walk through what typically happened after this shiny new data catalog was introduced. It made a great first impression. A handful of power users were super excited about the catalog and its potential. They were thrilled by their newfound visibility into the whole data ecosystem and the endless opportunities to explore new data. They were optimistic that this was indeed The Solution they’d been looking for.
A few months after launch, you started noticing that user engagement waned quickly. Customers’ questions in your data team’s Slack channel didn’t seem to go away either. If anything, they became even harder for the team to answer.
So what happened?
Is it really that hard to find the right data even with such advanced search capabilities and all the rich metadata? Yes! Because the answer to “what’s the right data” depends on who you are and what use cases you’re trying to solve. Most data catalogs only present the information from the producer’s point of view but fail to cater to the data consumers.
Providing the producer’s point of view through automation and integration of all the technical metadata is definitely a key part of the solution. However, the consumer’s point of view—the tables trusted by my organization, common usage patterns for various business scenarios, the impact upstream changes have on my analyses—is the missing piece that completes the data discovery & understandability puzzle.
Most data catalogs don't help users find the data they need; they help users find someone to pester, which is often referred to as a “tap on the shoulder”. This is not true self-service.
We believe that there are three types of information/metadata required to make data discovery truly self-serviceable:
It should be fairly clear by now that discovering the right data and understanding what it means is not a mere technical problem. It requires bringing technical, business, and behavioral metadata together. Doing this without creating an onerous governance process will boost your organization’s data productivity significantly and bring true data-driven culture to your company.
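To make the three-facet model concrete, here is a minimal sketch in Python. All class and field names are hypothetical, illustrative of the technical/business/behavioral split described above rather than any particular catalog's data model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TechnicalMetadata:
    # Producer-side facts, usually harvested automatically
    columns: List[str]
    location: str
    last_updated: str

@dataclass
class BusinessMetadata:
    # Human context: what the data means and who owns it
    description: str
    owner: str
    glossary_terms: List[str]

@dataclass
class BehavioralMetadata:
    # Consumer-side signals: who actually uses this data, and how
    query_count_30d: int
    top_users: List[str]
    downstream_dashboards: List[str]

@dataclass
class CatalogEntry:
    name: str
    technical: TechnicalMetadata
    business: BusinessMetadata
    behavioral: BehavioralMetadata

def rank_for_search(entries: List[CatalogEntry]) -> List[str]:
    """Rank results by recent usage, so consumers surface the tables
    their peers already rely on instead of every keyword match."""
    ordered = sorted(entries,
                     key=lambda e: e.behavioral.query_count_30d,
                     reverse=True)
    return [e.name for e in ordered]
```

The point of the sketch is the last function: once behavioral metadata exists alongside the technical and business facets, search can rank by actual usage rather than by schema alone.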
You’ve likely heard about ELT — Extract, Load, and Transform… the Modern Data Stack’s evolution of ETL. It is a game changer because it enables organizations to ingest raw data into the data warehouse and transform it later. ELT gives end-users access to the entirety of the datasets they need, circumventing the downstream problem of missing data that could prevent a specific business question from being answered.
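The load-first, transform-later flow can be sketched end to end. This toy example uses sqlite3 as a stand-in for a cloud warehouse; the table names and cleaning rules are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract + Load: land raw rows exactly as extracted -- everything as
# TEXT, no upfront cleaning (the "EL" of ELT).
cur.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, status TEXT)")
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("1", "19.99", "paid"),
     ("2", "N/A", "paid"),        # malformed amount stays in raw
     ("3", "5.00", "refunded")],
)

# Transform: later, inside the "warehouse", derive an analysis-ready
# table with SQL (the "T"), leaving the raw table intact so other
# business questions can still be asked of the full dataset.
cur.execute("""
    CREATE TABLE orders_clean AS
    SELECT CAST(order_id AS INTEGER) AS order_id,
           CAST(amount AS REAL)      AS amount
    FROM raw_orders
    WHERE status = 'paid' AND amount GLOB '[0-9]*'
""")

print(cur.execute("SELECT order_id, amount FROM orders_clean").fetchall())
# -> [(1, 19.99)]
```

Because the raw table survives the transform, a later change in business logic only requires rebuilding `orders_clean`, not re-extracting from the source.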
A majority of business leaders believe data insights are key to the success of their business in a digital environment. However, many companies struggle to build a data-driven culture, with a key reason being the lack of a sound data democratization strategy.
Just like data mesh or the metrics layer, active metadata is the latest hot topic in the data world. As with every other new concept that gains popularity in the data stack, there’s been a sudden explosion of vendors rebranding to “active metadata”, ads following you everywhere and… confusion.
As the amount of data rapidly increases, so does the importance of data wrangling and data cleansing. Both processes play a key role in ensuring raw data can be used for operations, analytics, and insights, and to inform business decisions.
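A minimal cleansing pass might look like the following sketch. The input rows, field names, and validation rules are hypothetical, chosen only to show the typical steps of normalizing, rejecting invalid records, and deduplicating:

```python
raw = [
    {"name": " Alice ", "age": "34"},
    {"name": "alice",   "age": "34"},  # duplicate once normalized
    {"name": "Bob",     "age": ""},    # missing age -> rejected
]

def cleanse(rows):
    seen, clean = set(), []
    for row in rows:
        # Wrangling: normalize the raw values into a consistent shape
        name = row["name"].strip().lower()
        age = row["age"].strip()
        # Cleansing: reject incomplete or invalid records
        if not name or not age.isdigit():
            continue
        # Deduplicate on the normalized key
        if name in seen:
            continue
        seen.add(name)
        clean.append({"name": name, "age": int(age)})
    return clean

print(cleanse(raw))  # -> [{'name': 'alice', 'age': 34}]
```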
Do you know the current status — quality, reliability, and uptime — of your data and data systems? Not last month or last week, but where they stand at this moment. As businesses grow, being able to confidently answer this question becomes more important. That’s because data needs to be clean, accurate, and up-to-date to be considered reliable for analysis and decision-making. This confidence comes through what’s known as data observability.
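One narrow but concrete slice of data observability is a freshness check: is each table within its expected update window right now? A minimal sketch, with hypothetical thresholds:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(last_loaded: datetime, max_lag: timedelta) -> bool:
    """Return True if the table was loaded within its freshness SLA."""
    return datetime.now(timezone.utc) - last_loaded <= max_lag

# Illustrative checks against a hypothetical 24-hour SLA
recent = datetime.now(timezone.utc) - timedelta(minutes=10)
stale = datetime.now(timezone.utc) - timedelta(hours=26)

print(is_fresh(recent, timedelta(hours=24)))  # -> True
print(is_fresh(stale, timedelta(hours=24)))   # -> False
```

Real observability tools monitor many such signals (volume, schema changes, distribution drift) continuously, but each one reduces to answering "where does this stand at this moment?"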
In recent years, organizations have invested heavily in becoming data-driven, with the objective of personalizing customer experiences, optimizing business processes, driving strategic business decisions, and more. As a result, modern data environments are constantly evolving and growing more complex. In general, more data means more business insights that can lead to better decision-making. However, more data also means more complex data infrastructure, which can cause decreased data quality, a higher chance of data breaking, and consequently an erosion of data trust within organizations and a risk of regulatory non-compliance. The data observability category — which has developed quickly over the past couple of years — aims to solve these challenges by enabling organizations to trust their data at all times. Although the category is relatively young, there is already a wide variety of players with different offerings, applying various technologies to solve data quality problems.
Data governance is more than just having a strategy – it is about establishing a culture where quality data is achieved, maintained, valued, and used to drive the business. Modern-day businesses are supported by data and information in many ways and forms. In recent years, data has become the foundation for competition, productivity, growth, and innovation. We are seeing successful organizations shift their focus from producing data to consuming it, and data governance strategies becoming increasingly important to support their crucial business initiatives. Executives and shareholders are starting to realize that data is a strategic asset and data governance is a must if they want to get value from data.
I started my career as a first-generation analyst, focusing on writing SQL scripts, learning R, and publishing dashboards. As things progressed, I graduated into data science and data engineering, where my focus shifted to managing the life cycle of ML models and data pipelines. 2022 marks my 16th year in the data industry, and I am still learning new ways to be productive and impactful. Today, I head the data science & data engineering function at a unicorn, and I would like to share my findings and where I am heading next.