The modern data stack is on the rise. Many companies use raw data from their SaaS analytics tools as input for their data warehouse, but this introduces problems downstream. Are there better ways?
When Google Analytics launched nearly 20 years ago, it took the market by storm. Its success created the mold for SaaS-based analytics offerings that included everything in one box: data collection, storage, analysis & visualization.
Ease of use and a low barrier to entry were two of its main drivers. Instrumentation typically required few engineering resources, and the included analyses, although somewhat limited, were ready to be used by a broad audience.
Urchin Analytics, which became Google Analytics after its acquisition.
While SaaS analytics tools are still very popular and widely used, their wild growth has left the data analysis landscape highly fragmented. Most companies have numerous tools collecting data, and each of them has its own way of analyzing & consuming it. As a result, it’s very hard to get a holistic view of what all that combined data is telling you.
We’re seeing a major shift where companies export data from their SaaS analytics tools to use as input for their data warehouse, not only to create a single source of truth for all data consumers, but also to unlock the potential of raw data. By combining full raw data sets, you can go way beyond the scope of your SaaS analytics tools; technically, you’re limited only by the quality and completeness of the data.
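As a toy illustration of what combining full raw data sets makes possible, consider a question no single SaaS tool can answer on its own, such as which acquisition channel brings in the customers who actually end up paying. The sketch below assumes two hypothetical raw exports (product events and subscriptions) already sit side by side in the warehouse; the file and column names are made up for illustration:

```python
import pandas as pd

# Hypothetical raw data sets pulled from the warehouse; names are assumptions.
events = pd.read_parquet("raw_product_events.parquet")        # one row per tracked event
subscriptions = pd.read_parquet("raw_subscriptions.parquet")  # one row per paying customer

# Keep each user's first touch, including the acquisition channel it carries.
first_touch = (
    events.sort_values("event_time")
          .drop_duplicates(subset="user_id", keep="first")[["user_id", "channel"]]
)

# Join behavioral data with revenue data: something neither tool shows on its own.
paying_by_channel = (
    first_touch.merge(subscriptions[["user_id", "monthly_revenue"]], on="user_id")
               .groupby("channel")["monthly_revenue"]
               .sum()
               .sort_values(ascending=False)
)
print(paying_by_channel)
```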
The big shift towards data warehousing is reflected in the number of companies operating in this space and how fast they’re growing. Over 10,000 companies now use Snowplow to process events at scale and send them straight into the data warehouse. dbt now has over 9,000 weekly active data transformation projects. More than 13,000 sites and apps run RudderStack to pull data from SaaS tools and send it into data warehouses. The list goes on.
dbt Cloud in action
Most data warehouses are fed with data that’s collected by all-in-one-box SaaS analytics tools. While understandable for historical reasons, this often causes problems downstream.
There is a big difference between what data teams want their raw data to look like and what it actually looks like when it comes from SaaS analytics tools. The data sets these tools collect were never designed for advanced modeling on the raw data: they’re often incomplete or overcomplete, unstructured and ambiguous. Significant grunt work is typically required before the data can be used for modeling.
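To make that grunt work concrete, here’s a minimal sketch of the kind of cleanup a raw SaaS export typically needs before modeling. It uses pandas, and the file name, column names and payload format are assumptions for illustration, not any specific tool’s export:

```python
import json
import pandas as pd

# Hypothetical raw export from a SaaS analytics tool.
raw = pd.read_csv("raw_events.csv")

# Batched exports often contain exact duplicates.
events = raw.drop_duplicates(subset=["event_id"]).copy()

# Timestamps tend to arrive in mixed formats; normalize everything to UTC.
events["event_time"] = pd.to_datetime(events["event_time"], utc=True, errors="coerce")

# Drop rows the export left incomplete (no user or no usable timestamp).
events = events.dropna(subset=["user_id", "event_time"])

# Unpack the semi-structured JSON payload column into flat columns for modeling.
payload = pd.json_normalize(events["properties"].map(json.loads))
events = pd.concat(
    [events.drop(columns=["properties"]).reset_index(drop=True), payload],
    axis=1,
)
```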
By extracting data from SaaS tools and loading it into a data warehouse, you’re effectively duplicating a source.
SaaS analytics tools all offer their own way to consume the collected data, and chances are many of your company’s data consumers still rely on the dashboards of these tools to inform their daily decisions, either out of habit or simply because it offers an easy & fast way to get data.
In practice, this often leads to discussions about why the numbers from different teams don’t add up, as the data team and data consumers use different versions of the same data source as input.
SaaS analytics tools typically don’t disclose the actual models behind an analysis. As a result, rebuilding an analysis to work with the data set in your data warehouse requires significant reverse engineering effort, and troubleshooting & debugging can be painful when anything is off.
The way data is used has evolved, and the all-in-one-box model that SaaS analytics tools use no longer fits. In order to effectively combine and analyze data from multiple sources and truly serve all data consumers from a single source of truth, we need to unbundle the black box and separate data collection, storage, modeling and visualization.
With data collection separated out, all collected data can be sent straight into the data warehouse without stopovers. This eliminates source duplication and moves data consumption downstream. It enables all analysis & visualization to occur on a single source of truth: the data warehouse.
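As a rough sketch of what “without stopovers” can look like in practice, the snippet below sends a tracked event directly to a self-hosted collector endpoint whose only job is to load events into the warehouse. The endpoint URL, payload shape and field names are assumptions for illustration, not any specific vendor’s API:

```python
import json
import time
import urllib.request

# Hypothetical self-hosted collector that writes straight to the data warehouse.
COLLECTOR_URL = "https://collector.example.com/events"

def track(event_name: str, user_id: str, properties: dict) -> None:
    """Send one event to the warehouse-bound collector, with no SaaS tool in between."""
    payload = {
        "event": event_name,
        "user_id": user_id,
        "properties": properties,
        "sent_at": time.time(),
    }
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # collector acknowledges and appends the event to warehouse storage

# Example: one event, one destination, consumed downstream from the warehouse.
track("signup_completed", user_id="42", properties={"plan": "free"})
```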
Unbundling the black boxes also provides a big opportunity for the adoption of open standards. Shifting from vendor-locked tools to raw data and code enables data teams to adopt generic ways of structuring data, so they can share and reuse each other’s tools, models and analyses.
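To give a flavor of what such a generic structure could look like, here is a small sketch of a shared, tool-agnostic event definition with a validation step in front of the warehouse. The fields are assumptions chosen for illustration, not an existing standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class Event:
    """A shared event structure that any tracker, model or analysis can rely on."""
    event_name: str
    user_id: str
    session_id: str
    timestamp: float
    context: dict

def validate(event: Event) -> dict:
    """Reject events that would break downstream models before they reach the warehouse."""
    if not event.event_name or not event.user_id:
        raise ValueError("event_name and user_id are required")
    if event.timestamp <= 0:
        raise ValueError("timestamp must be a positive Unix timestamp")
    return asdict(event)

# Any team tracking events this way can reuse the same models and analyses downstream.
row = validate(Event("page_view", user_id="42", session_id="abc",
                     timestamp=1700000000.0, context={"path": "/pricing"}))
```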
We think the future of analytics is open and data warehouse-native, and there are clear signals we’re already on the way. It’s time to enable meaningful collaboration and to unbundle SaaS analytics.