Yeah. So this is a concept that is entirely copied from, and I might butcher the name, Jean-Michel Lemieux, a VP at Shopify. He had a great Twitter thread about it. The observation he made is that, by default, teams put code where they can put it fastest, as opposed to where it should go when you consider the long-term effect on the overall system. Unquestionably, this is something I've seen at every stop on my data journey: people will use your work.
People will take your output and transform it for their specific use case. That's exciting; it's phenomenal that they're doing that. You should always preserve that ability, but you need to think about that gravitational pull back towards centralization and building production-grade systems.
And so for us, like many teams, we have dim and fact tables, we have great modelling, we think about Kimball, and then we have additional surface areas where people can say, "I'm going to iterate on this stuff. I'm going to build things in my schema. I'm going to build things in notebooks." And before you know it, and this actually happened, Ramp had one of our first account takeover events.
We had people staying up late doing incredible research on account takeovers, and it was all in notebooks. To a large extent, Ramp's entire account takeover fraud program for two weeks was running out of one analyst's notebook on their schema.
And it was a branch off of our repo with a ton of commits, because they were iterating quickly. All of that was incredible, and it's exactly what we want to enable, because fighting fraud is often like fighting fires: if you come up with a solution two weeks later, it doesn't matter; the house has already burnt down.
So we enabled that person to iterate quickly. But then you realize that all of this logic, everything this person has built that will benefit many teams at Ramp, is living in an area that's pretty siloed in terms of both visibility and dependencies; we don't even know what dependencies have been built on top of the data team's work. So we think about how we can invite that person into our codebase and how we can show them how to commit.
How can we help them on that journey? How can we take that work back and distribute it to the entire company in a version-controlled way? That is how you solve this Layerinitis problem. And the two things I think are most important are, one, a culture of celebrating hardening code.
Because it's always easy to say, "I'd rather focus on the next thing: the next analysis, the next model, the next product launch." It's harder to say, "We're going to take some of our work, refactor it, and build it the way it should be." So that's important. The other is a culture of really inviting people into codebases.
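As a concrete (and entirely hypothetical) sketch of what "hardening" can mean in practice: the same fraud heuristic that starts life as an unnamed notebook cell gets promoted into a named, typed, documented function that can live in a shared repo and be unit-tested. All names and the detection rule below are invented for illustration, not Ramp's actual logic:

```python
# A minimal sketch of hardening ad-hoc notebook logic into reusable code.
# The event shape, function name, and the "distinct IPs" heuristic are all
# hypothetical, standing in for whatever the analyst's notebook computed.
from collections import defaultdict
from typing import Iterable

def flag_possible_takeovers(events: Iterable[dict], max_distinct_ips: int = 3) -> set:
    """Return user_ids that logged in from more than `max_distinct_ips` IPs.

    In the notebook era this was a throwaway cell; as a named,
    version-controlled function it can be reviewed, tested, and reused.
    """
    ips_by_user = defaultdict(set)
    for event in events:
        ips_by_user[event["user_id"]].add(event["ip"])
    return {user for user, ips in ips_by_user.items() if len(ips) > max_distinct_ips}

# Tiny fabricated example: u1 logs in from four IPs, u2 from one.
login_events = [
    {"user_id": "u1", "ip": "1.1.1.1"},
    {"user_id": "u1", "ip": "2.2.2.2"},
    {"user_id": "u1", "ip": "3.3.3.3"},
    {"user_id": "u1", "ip": "4.4.4.4"},
    {"user_id": "u2", "ip": "5.5.5.5"},
]
print(flag_possible_takeovers(login_events))  # → {'u1'}
```

The hardening step isn't the logic itself, which the analyst already wrote; it's the name, the docstring, the parameter instead of a magic number, and the fact that a test can now pin the behavior down.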
So I'd say this is probably the number one thing I've changed my mind on in the last year. A year ago, I would have said the dbt repo is for analytics engineers and analytics engineers only. That's a recipe for people putting business logic in other parts of the stack. So for me now, it's really about how I can invite people in.
How can I have them do work that adapts well to dbt? How can we teach them a little bit about dim and fact tables? How can we teach them about modelling? How can we make things accessible in Looker for the entire company, as opposed to in a super custom Databricks notebook with 250 to 600 lines of code? So, largely, those are the two ways we try to combat Layerinitis: a celebration of hardening systems, and inviting people into codebases and teaching them our best standards.
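To make "work that adapts well to dbt" slightly more concrete, here is a hypothetical sketch of what a notebook query might look like once promoted into a dbt model file that other teams can depend on via `ref()`. Every table, column, and file name here is invented for illustration:

```sql
-- Hypothetical dbt model: models/marts/fct_account_takeover_flags.sql
-- Once the logic lives here instead of a notebook, it is version-controlled,
-- visible in the DAG, and downstream models and Looker can build on it.
select
    user_id,
    count(distinct ip_address) as distinct_login_ips,
    count(distinct ip_address) > 3 as possible_takeover
from {{ ref('stg_login_events') }}
group by user_id
```

The point is less the SQL than the location: the same aggregation in a notebook has no declared dependencies, while in a dbt model the `ref()` call makes the lineage explicit.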