Active metadata is like hot gossip. Here’s why.
Just like data mesh or the metrics layer, active metadata is the latest hot topic in the data world. As with every other new concept that gains popularity in the data stack, there’s been a sudden explosion of vendors rebranding to “active metadata”, ads following you everywhere, and… confusion.
With everyone talking about active metadata, it must be pretty easy to understand, right?
Apparently not! I’ve been talking about active metadata for over a year now, but I still see the same confused questions all the time.
Active metadata can sound a bit scary, but it doesn’t have to be. It is a must-have tool in the modern data toolbox, so if you’re still wondering what it means, this article is for you.
I’ve broken down the ideas behind active metadata with as little jargon as possible. Keep reading to learn what active metadata is, what it looks like, how you can actually use it, how it fits into the modern data stack, and why it even matters.
I could start dropping some jargon here, but then both you and I will be asleep in seconds. So let’s jump into an analogy instead.
Imagine that you got your hands on the juiciest piece of tech gossip — Apple is expanding into recreational marijuana to literally help people “think different”.
There’s no way you’re going to keep something this exciting a secret. The world has to know. So you post it on your blog, blogspot.applefansunite.com. All done, right?
Just like a car in the Hyperloop tunnel, we all know that’s not going anywhere. You can’t just put the story somewhere and hope people will find it. You have to actually deliver it into people’s hands.
You sharpen your PR chops, blast the news to tech reporters and news sites, and lo and behold it’s everywhere in no time. It’s already been meme-ified, and your grandfather just asked in your group chat why apple farmers are talking about some girl named Molly.
Metadata is like this information. If it sits passively in its own little world, with no one seeing or sharing it, does it even matter? But if it actively moves to the places where people already are, it becomes part of and adds context to a larger conversation.
Passive metadata is the standard way of aggregating metadata and storing it in a static data catalog. This usually covers basic technical metadata — schemas, data types, models, etc.
Think of passive metadata as putting out information on a personal blog. Every so often, it’ll get picked up and go viral on Hacker News. But most of the time it’s just going to sit unseen and unused, even when people actually need to know it.
Active metadata makes it possible for metadata to flow effortlessly and quickly across the entire data stack, embedding enriched context and information in every tool along the way. It is usually richer than passive metadata, covering operational, business, and social metadata along with basic technical information.
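To make the difference concrete, here’s a minimal sketch of what an active metadata record might carry beyond a passive catalog entry. The field names and values are illustrative assumptions, not from any specific catalog or tool:

```python
# A passive catalog entry: just the basic technical metadata.
PASSIVE_METADATA = {
    "table": "orders",
    "schema": {"order_id": "INT", "amount": "DECIMAL(10,2)", "created_at": "TIMESTAMP"},
}

# An active metadata record layers on operational, business, and social context.
ACTIVE_METADATA = {
    **PASSIVE_METADATA,
    # operational metadata: how the asset behaves in production
    "operational": {"last_refreshed": "2022-05-01T06:00:00Z", "freshness_sla_hours": 24},
    # business metadata: what the data means to the organization
    "business": {"owner": "growth-team", "certified": True, "definition": "One row per order"},
    # social metadata: how people actually use and discuss the asset
    "social": {"top_queriers": ["ana", "sam"], "linked_slack_threads": 12},
}

def context_summary(meta):
    """Render a one-line context string suitable for embedding in another tool."""
    biz = meta.get("business", {})
    ops = meta.get("operational", {})
    status = "certified" if biz.get("certified") else "uncertified"
    return (
        f"{meta['table']}: {status}, owned by {biz.get('owner', 'unknown')}, "
        f"last refreshed {ops.get('last_refreshed', 'n/a')}"
    )
```

The point of the richer record isn’t storage — it’s that a summary like this can travel to wherever someone is actually looking at the data.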
Think of active metadata as a viral story. It shows up everywhere you already live, in what seems like seconds. It’s immediately cross-checked against and combined with other information, bringing together a network of related context into a larger trend or story. And it sparks conversations, making everyone more knowledgeable and informed in the end.
To put it simply, no one wants to go to another website to “browse the metadata”.
As we embraced the internet and data exploded in the early aughts, companies realized they needed to manage all their new data.
We entered a golden age of metadata management. New companies like Informatica, Collibra, and Alation were created, and they hyped the importance of data catalogs. People needed a way to sort through all their options, so we got reports like Gartner’s Magic Quadrant for Metadata Management. Billion-dollar companies emerged, and companies spent hundreds of millions of dollars on metadata management.
Yet just last year, Gartner released their Market Guide for Active Metadata and declared that “Traditional metadata practices are insufficient…”
That’s because passive data catalogs solve the “too many tools” problem by adding… another tool. They aggregate metadata from different parts of the data stack, and it stagnates there. User adoption suffers, and these exciting tools turn into expensive shelfware.
Active metadata sends metadata back into every tool in the data stack, giving the humans of data context wherever and whenever they need it — inside the BI tool as they wonder what a metric actually means, inside Slack when someone sends the link to a data asset, inside the query editor as they try to find the right column, and inside Jira as they create tickets for data engineers or analysts.
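The Slack case is a good illustration of the mechanic. A bot could watch for catalog links and “unfurl” them with context from the metadata store. This is a hedged sketch under assumptions — the URL scheme, catalog lookup, and message shape are all hypothetical:

```python
# Hypothetical in-memory stand-in for a catalog lookup.
CATALOG = {
    "orders": {
        "owner": "growth-team",
        "description": "One row per customer order",
        "certified": True,
    },
}

def unfurl_asset_link(url):
    """Turn a data-asset URL into a Slack-style message attachment with context.

    Assumes URLs like https://catalog.example.com/assets/orders (hypothetical).
    Returns None for assets the catalog doesn't know about.
    """
    asset_name = url.rstrip("/").rsplit("/", 1)[-1]
    asset = CATALOG.get(asset_name)
    if asset is None:
        return None
    badge = "certified" if asset["certified"] else "uncertified"
    return {
        "title": asset_name,
        "text": f"{asset['description']} · owned by {asset['owner']} · {badge}",
    }
```

A real integration would wire this into Slack’s link-unfurling events, but the idea is the same: the reader never leaves the conversation to get the context.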
Active metadata functions as a layer on top of the modern data stack.
It leverages open APIs to connect all the tools in your data stack and ferry metadata back and forth in a two-way stream. This is what allows active metadata to bring context, say, from Snowflake into Looker, Looker into Slack, Slack into Jira, and Jira back into Snowflake.
According to Gartner’s new Market Guide for Active Metadata, active metadata is an always-on, intelligence-driven, action-oriented, API-driven system, the opposite of its passive, static predecessor.
This breaks down into the four key characteristics of active metadata: always on (continuously collecting and processing metadata rather than waiting for manual updates), intelligent (turning raw metadata into signals and recommendations), action-oriented (driving alerts and changes instead of just displaying information), and API-driven (connecting programmatically to every tool in the stack).
There are dozens, if not hundreds, of use cases for active metadata. (Enough for several articles of their own — coming soon!) Let’s go through a few of my favorites.
As metadata becomes big data and big data becomes a behemoth, active metadata isn’t just a wonderful dream. It’s a necessity — the only way to understand today’s data.
Managing, processing, and analyzing metadata is the new normal for modern data teams. Doing this passively and manually, though, simply doesn’t scale. That’s why it’s been so exciting to see active metadata take shape in the last year and become the de facto standard for what people expect out of modern metadata.
All of these use cases — like auto-tuned pipelines, automated data quality alerts, and continuously validated calculations — would have sounded impossible just a few years ago. Today, they’re actually in reach. I couldn’t be more excited to see the intelligent data dream become a reality as active metadata continues to evolve in the coming years.
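Of those use cases, the automated data quality alert is the easiest to sketch. Assuming the metadata layer already tracks each table’s last refresh time and an agreed freshness SLA (both assumptions here), the “active” part is just a check that fires messages into wherever the team works:

```python
from datetime import datetime, timedelta, timezone

def freshness_alerts(tables, now=None, sla_hours=24):
    """Return an alert message for every table that has missed its freshness SLA.

    tables: mapping of table name -> last-refreshed datetime (timezone-aware).
    In a real system these timestamps would come from operational metadata,
    and the returned messages would be pushed to Slack, Jira, or PagerDuty.
    """
    now = now or datetime.now(timezone.utc)
    alerts = []
    for name, last_refreshed in tables.items():
        age = now - last_refreshed
        if age > timedelta(hours=sla_hours):
            hours = age.total_seconds() / 3600
            alerts.append(f"{name} is stale: last refreshed {hours:.0f}h ago")
    return alerts
```

The check itself is trivial; the point is that active metadata supplies the inputs continuously and routes the output to people automatically, instead of waiting for someone to open a catalog and notice.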