Sep 27, 2022 · 36 min

S01 E03: Commercial vs. Open-source vs. In-house ETL Solutions with Lucas Smith and Addison Higley, Hudl

Selecting the right set of data tools is important, as it can have a long-term strategic impact on your business. You can choose between commercial or open-source tooling, or even custom-build according to your needs. In this episode, we discussed the factors to consider while making this decision with our guests, Lucas and Addison from Hudl. We also took a deep dive into self-serve analytics, data governance, observability, and much more.

Available On:
Spotify
Google Podcasts
YouTube
Amazon Music
Apple Podcasts

About the guest

Lucas Smith
Sr Analytics Manager at Hudl
Addison Higley
Senior Data Engineer at Hudl

About Lucas: Lucas is a Sr Analytics Manager at Hudl, with experience leading and growing teams in the transportation, trucking, railroad, and high-growth SaaS spaces. Throughout his career, Lucas has led various efforts like consolidation, department analytics strategy, and scaling analytics teams. He loves solving decision problems that involve a distinctly human element. Lucas is also a professor and teaches foundational elements of analytics.

About Addison: Addison is a Senior Data Engineer at Hudl. His work includes maintaining the data warehouse, performing ETL tasks, creating new ingests, and modifying existing ones. He enjoys the challenges and learning opportunities that data engineering presents.

In this episode

  • About the Hudl platform and the kind of data they deal with.
  • Hudl's data stack.
  • Open-source vs. commercial vs. in-house ETL solutions.
  • Evaluating open-source platforms and the trade-offs.
  • Self-serve analytics at Hudl.

Transcript

00:00:00
Welcome back everyone to another episode of the Modern Data Show. We hope you're enjoying this podcast and having as much fun as we are recording these episodes with these awesome guests. Today's episode is really special because we have not one, but two guests for you. Our first guest today is Lucas Smith from Omaha, Nebraska, who is a senior manager of data analytics at Hudl. And along with him, we have Addison Higley, who is a senior data engineer at Hudl, joining us from Lincoln, Nebraska. For those who don't know, Hudl is a pioneer in performance analysis technology, helping more than 200,000 teams in 40-plus global sports prepare for and stay ahead of the competition. Hudl has more than 3,200 employees operating in 17 countries, with a global team of engineers, analysts, and support. They're building the world's most powerful network of sports video and data. Thank you for joining us for the podcast; it's a pleasure to have you guys as our guests.
00:00:57
It's a pleasure to be here.
00:00:58
Yeah. Thank you for having us.
00:01:00
So why don't you guys start by telling us a little bit more about your background and your work at Hudl?
00:01:06
Yeah. So for listeners who don't know, obviously thanks for that intro. We're a sports technology company that's been around since, when were we founded, Addison? 2006, maybe. And we really specialize in the sports technology landscape. We started in the US with high schools, making distributing video easy. From there we've branched out into the professional market and the athlete space, and so we really serve sports all over the world and help bring technology to make their jobs easier, more efficient, and give them a competitive advantage. And my team at Hudl is responsible for the internal business intelligence and analytics around our customers and what they do.
00:01:55
Yep. I'm Addison. As a data engineer, our team makes data available for analytics consumption all across the company, in some cases to external customers as well, but primarily internally.
00:02:11
Lucas, help me understand a little bit more about the technology behind Hudl. Help our listeners understand how exactly the Hudl platform works for these sports teams and individuals.
00:02:26
Yeah. So, we have many, many, many technology applications for different areas. Right now I'll focus in on the competitive market, which is the US-based high school arena. That's kind of our bread and butter and what most people in the US know us for. There was a problem back in the day where coaches needed to exchange game film, right? They would drive three hours to, you know, exchange film with somebody and lose half their Saturday. Coaches are part-time in high schools in the US, right? And so the company looked to solve that workflow problem for them by giving them cloud-based video exchange tools. From there we branched out into helping them break down their film so that they didn't have to spend time doing that. The whole goal was to get them back to coaching and solve some of those time-consuming workflows. Through a series of acquisitions over the last 10 years or so, we've also branched out into the professional sporting landscape, where we have more highly customizable tools that really serve the professional analyst's needs. Cuz if you think about the type of technology that's required for, let's say, a coach in Division One basketball, or maybe an analyst in the front office of a professional organization, their needs are gonna be much different than that high school coach's. So we have products ranging anywhere from Sportscode, which is our highly customizable breakdown tool, to Wyscout, which is a content platform for what we call global football, or what most people know as soccer, to automated recording devices, cameras that sync to the school's schedule and automatically record the games, and allow coaches to be freed up from having to work through those mechanics. So we see ourselves as a technology company that helps solve those workflow issues. If you've seen some AWS marketing material, we are a heavy AWS shop and we do a lot of lectures, I'm a professor, so I had to say lectures, right, a lot of talks with the AWS community about what we do. Most recently, Eric Reznicek, I think his title is Director of Product Technologies, did one on how we compress our videos. So if you're looking for Hudl, we're very strong in the AWS community. When we start talking about internal data pipelines, our technology is pretty basic, right? We use AWS technologies. For really well-solved patterns, we also use things like Fivetran for ingest from third-party tools. We're a dbt shop. Addison can get into the details of some of the ways and patterns we solve certain things, but we use Spark for some of our data engineering pipelines. We've got Looker and Redash, and so we kind of have an ecosystem built up around what most people would consider a data platform. One thing that's unique about our stack, and I think different from a lot of companies, is we don't start with "what is the ideal platform we need?" and then just go buy the entire modern data stack. We've made conscious choices along the way about which pieces of that stack to bring in, and which pieces we wait on or ignore for competitive reasons.
00:05:55
Wow. That's quite insightful. So before we dive deeper into, you know, the data stack at Hudl, tell us, help us understand: what kind of data do you guys deal with?
00:06:07
We have a wide range of data from various systems. Some of our sources would be, you know, SaaS providers like Salesforce, Marketo, things like that. But then we also have internal databases; Hudl.com is backed by MongoDB databases, so we have data like that coming in. We have application logging coming in through Sumo Logic. And then we also use Snowplow for structured events, so we have some structured event data as well.
00:06:42
Awesome. Awesome. So, you know, if I have to summarize: you've got Snowplow for structured events, you've got Fivetran for collecting data from multiple data sources and putting it into a data warehouse. What's the data warehouse here? Assuming Redshift?
00:06:56
Yeah.
00:06:57
That is correct.
00:06:58
Are you guys working on anything towards, you know, data observability, in terms of any kind of data quality monitoring tools?
00:07:04
So, we've kind of dabbled in this space. We've used Great Expectations a little bit. And we also have way too many tests in dbt that have been kind of slapped on there over time, right, Addison? And then you guys do some pretty cool things on the actual custom pipeline side in terms of monitoring.
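To give a flavor of the kind of checks Lucas mentions, here is a minimal sketch using the classic Great Expectations pandas API; the file and column names are hypothetical illustrations, not Hudl's actual tests.

```python
import great_expectations as ge

# Hypothetical batch of warehouse data; names are illustrative only.
df = ge.read_csv("video_watch_sessions.csv")

# Declare expectations about the data, much like dbt schema tests.
df.expect_column_values_to_not_be_null("user_id")
df.expect_column_values_to_be_between("minutes_watched", min_value=0)

# Run every declared expectation and report overall success.
results = df.validate()
print(results.success)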
00:07:25
A lot of our testing is sort of homegrown, I guess you would say. So, any data that we're ingesting through code that we've written, right, this would exclude something like Fivetran, but if we're writing the ingest, then we do two sets of validation. The first is data loss: do we have all the data we expect, do the record counts match? And then what we would call the data dictionary, which is more about the shape of the data: fields that should never be null, fields that should be of a certain type or value, things like that. If any of those fail, you know, I will get a phone call through PagerDuty and someone will be looking into it right away.
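To make those two validation passes concrete, here is a minimal homegrown-style sketch; the function names, fields, and alerting hand-off are assumptions for illustration, not Hudl's actual pipeline code.

```python
# Hypothetical sketch of a two-pass ingest validation: a "data loss" pass
# (did every expected record arrive?) and a "data dictionary" pass
# (does each record have the expected shape?). Names are illustrative.

def data_loss_check(source_count: int, loaded_rows: list[dict]) -> list[str]:
    """Compare the record count reported by the source with what we loaded."""
    if source_count != len(loaded_rows):
        return [f"expected {source_count} records, loaded {len(loaded_rows)}"]
    return []

def data_dictionary_check(loaded_rows: list[dict]) -> list[str]:
    """Verify shape rules: fields that must never be null or must be numeric."""
    errors = []
    for i, row in enumerate(loaded_rows):
        if row.get("user_id") is None:
            errors.append(f"row {i}: user_id must never be null")
        if not isinstance(row.get("minutes_watched"), (int, float)):
            errors.append(f"row {i}: minutes_watched must be numeric")
    return errors

def validate_ingest(source_count: int, loaded_rows: list[dict]) -> None:
    problems = data_loss_check(source_count, loaded_rows)
    problems += data_dictionary_check(loaded_rows)
    if problems:
        # In a real pipeline this is where an alert would be raised,
        # e.g. by posting to the PagerDuty Events API.
        raise RuntimeError("; ".join(problems))

# Example run: one good row, one row that violates both shape rules.
try:
    validate_ingest(2, [
        {"user_id": "u1", "minutes_watched": 12.5},
        {"user_id": None, "minutes_watched": "n/a"},
    ])
except RuntimeError as exc:
    print(f"validation failed: {exc}")
```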
00:08:07
Okay. So you've kind of built a custom observability solution using, you know, Great Expectations and PagerDuty. That's quite amazing. Right. So let's go a little deeper into Fivetran. You mentioned that you use Fivetran to pull various data, you know, marketing data, sales data, from various sources and put it into a data warehouse. How did you make that choice? And, you know, I'm not assuming you were the one who individually made that choice, but as a data practitioner, if you were to choose now between three options, a commercial solution, an open-source solution like Airbyte or Meltano, or an in-house solution, how would you go about making that decision?
00:09:02
So the way we've looked at it recently is, if we've got homegrown, proprietary data, we really want to have custom ingest written by our data engineers. We've got a small but mighty data engineering team; anytime we're on a call with, like, AWS representatives, they're always impressed by the work this team is able to accomplish. So we really want to use their power where it's necessary and really helps the company grow and know more about our internal data assets. Right. But when it comes to commonly solved ingest problems, we don't wanna spend our data engineers' time working through those. They should be focused on the tough data engineering problems, not "how do I build a connector into Salesforce and pull the data?" Right. I would say we've had a checkered history with custom-grown ingest for commonly solved patterns. I think it was the Thanksgiving debacle, right, Addison? It just seems like every holiday, a homegrown connector built on these third-party systems tries to die. So replacing those with Fivetran gave us more stability for these connectors, because obviously as APIs are changing, they're able to update them. We're not building them in a vacuum; these connectors are learning from many groups of people as they deal with these ingest patterns. Right. What do you think, Addison, would you do anything differently? I've never asked you that question.
00:10:31
Yeah. So when I joined the team, of course, we had a lot of legacy systems, and, you know, there was a custom Salesforce connector, but if you wanted to add a new table or a new column or make some change, you had to get data engineering time. And at that time, our data engineering team was two people; it had been two people for a long time. So it just wasn't quick to get people the data they needed. Being able to get an out-of-the-box solution allowed us to move a lot more quickly. I will say, you know, it's one of those things where you have to consider the amount of control that you have. On the spectrum you described, from custom to open source to vendor, you're trading control for convenience as you move along it. We've chosen maximum convenience, and we're trusting in Fivetran, but if something breaks, we no longer have the ability to go fix it immediately. Right. So that is one of the trade-offs we've made.
00:11:38
Okay. And any thoughts on using some kind of open-source platform? How would you evaluate those today? If you were to try an open-source platform that gives you kind of the maintainability of a commercial version, but where you have a huge community supporting those connectors, what's your take on that?
00:12:00
Yeah. It's definitely something that we'd have to explore. Like, we've used open-source tooling in the past; for example, with Snowplow, we started with their open-source offering, but eventually we went to the managed offering just because it was so much easier for us and reduced the maintenance. Maintenance is a big concern for us, especially with having such a small team. We can't, you know what I mean, we can't be dedicating all of our time to maintenance projects, so that one's a little tricky. The other thing is that, you know, with an open-source project, you have the code in front of you, so if there's something that needs to change, you could submit a pull request, but you can't guarantee that version will be released exactly when you hope it would be. You know what I mean? So it's still on that spectrum, kind of a midpoint between convenience and control, I would say.
00:12:56
And I would say there's also a consideration to be had, especially if you're a small data team, where you've got maybe an engineer or two, an analyst or data scientist or two, and maybe a manager over the whole thing. Right? Not only is there a time factor, but there's a potential invisible cost that you have to consider when you're managing your own infrastructure. So not only do you have the time cost, you actually have real costs; open source does not equal free. And, you know, if anybody is just starting up their data infrastructure at any company, hopefully they can hear that, right? Open source doesn't mean free. There are other costs that you really need to consider when you're evaluating the offerings.
00:13:37
That's a great thought. You always have those, you know, engineering costs, cloud costs, maintenance costs. You tend to ignore them while buying, but that's when, you know, they hit you later. So, point well taken. Another question that I had is, you know, we've seen an overview of how the batch processing pipelines work at Hudl. What would be very interesting for me to understand is, first of all, do you have any case where you would have some kind of real-time streaming pipelines? This is something we commonly see in cases where you need to deal with some kind of product event data, either through change data capture or some kind of streaming infrastructure. So do you have any kind of streaming infrastructure in place, and if so, we'd love to understand the cases around that.
00:14:29
Yeah, I would say that we do not have a real streaming case. We've explored it in the past and even done some POCs, but ultimately for us the deciding factor is: is there a business case that would justify doing this work? And we've yet to find one. One notable POC we did: coaches and athletes want to know how much the Hudl product is being used. So if I'm a coach and I've asked my athletes to watch video, have they actually watched it? And if so, how much? So we did a POC that would give, in real time, analytics on, you know, how much each user has watched, which video, things of that nature, give you various aggregations. But there just wasn't the need. Nobody needs to know, you know, 10 minutes after they've asked someone to watch a video, whether those 10 minutes have been watched. It's something that maybe people check on weekly. So the case for streaming just wasn't strong enough for us to pursue it.
00:15:35
So, the next question that comes to my mind is around governance when it comes to data, right from production to storage to the consumption layer. What are your thoughts around data governance, especially from a compliance perspective, among others? Because I presume, you know, you guys are storing a lot of personal data. What is your strategy around ensuring tight governance across the whole data life cycle?
00:16:13
That is a really good question that I think every company has to wrestle with. Right. When you think of data governance, it can range across the spectrum from everything's locked down and nothing can happen until it's approved, to anybody can do anything they want. Right. We function in a very, I would say, decentralized pattern in terms of our engineering teams and squads. And so there are a lot of questions around, like, what is the right level of red tape and governance that needs to be in place? I fundamentally believe that at every company it should be an ever-evolving decision based on company size, the markets you're in, the risk those markets present, and things like that. So that's, you know, high level, but what I can say is, when it comes to data governance, that really should be a personal decision for each company. Right. And you really need to consider all of those external factors kind of hitting you in the face.
00:17:15
And, you know, on that topic, we'd love to understand how the data organization within Hudl is structured. We have seen a couple of cases of a federated organization, where you have a central, global platforms team, and then you have federated data engineers supporting various functions. So how is the data team structured at Hudl?
00:17:39
Yeah. So we really have two data teams at Hudl. I guess technically three, Addison, would you consider your team separate from my team? I don't know. We've got basically two main organizations, right? We've got our applied machine learning organization, and they do a lot of really cutting-edge computer vision type of work. If you've gone to Hudl's Medium page, you've probably seen some work by that team; they used to publish pretty frequently. And then you've got decision and business operations, which is the team that Addison and I are on. And so we're in a very centralized approach, but our centralized organization structure is a little different than most, right? Most people who work for larger companies probably know of the, I would say, dreaded IT BI team that you have to submit the 40-page requirements document to, to even get time on their calendar. Right. And then you wait three months to get your, you know, Oracle BI dashboard back. That's not how our team functions. We try to align ourselves the best we can with our business. So we've got what we call the elite business unit, which is our professional customers, and the competitive business unit, which is our high school customers. And our goal is to put our analysts closest to those strategies to help them find the data, develop dashboards and analytics solutions, and really turn all of our internal data into information for those businesses. The data engineering team works a lot more like that truly centralized approach. But in reality, you know, like I said, we've got world-class 10x data engineers, I know, cringeworthy, but a team of four can handle pretty much anything our team throws at them. And so really, we didn't choose a centralized organization lightly. We decided to go centralized to be able to hold true analytics professionals to the same set of standards in terms of, you know, how do you write your dbt models? What types of analysis are you doing? What change processes do you have in place? It adds some layer of governance to what we do, but in a lightweight way that attempts to align with our business. Did I miss anything, Addison?
00:20:03
No, I think you really covered the whole thing. The only thing I would say is, with our team being centralized but so small, a big focus for us has been trying to enable self-service whenever possible. So, you know, using CI/CD for data producers to get their data into our systems without requiring a lot of data engineering time, just making those processes easy.
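As a rough illustration of the kind of CI/CD gate Addison describes, here is a small sketch that validates a producer-submitted table spec before it merges; the spec format, rules, and names are assumptions for illustration, not Hudl's actual system.

```python
# Hypothetical CI check for self-service ingestion: a data producer opens a
# pull request containing a table spec, and this check runs before merge.

REQUIRED_KEYS = {"table_name", "schema", "owner"}
ALLOWED_TYPES = {"string", "int", "float", "timestamp", "boolean"}

def check_table_spec(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec can merge."""
    problems = [f"missing key: {key}" for key in REQUIRED_KEYS - spec.keys()]
    for column, col_type in spec.get("schema", {}).items():
        if col_type not in ALLOWED_TYPES:
            problems.append(f"column {column!r} has unsupported type {col_type!r}")
    return problems

# Example producer spec; names are made up for the sketch.
spec = {
    "table_name": "team_video_usage",
    "owner": "product-analytics",
    "schema": {"team_id": "string", "minutes_watched": "float"},
}
assert check_table_spec(spec) == []  # valid spec, so the CI gate passes
```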
00:20:30
Yeah, man. It would've been terrible if we missed that part. Cuz I think that's like you guys' shining pride and joy, right, being able to enable our product teams to really move data into the warehouse without you guys having to touch the work that they're doing. I do wanna throw a big hat tip to that team, because it makes everyone's life easier.
00:20:49
That's actually a very interesting point, because that was the next question I was about to ask. There is a constant debate going around in the data industry about, you know, even the possibility of a true self-serve analytics model. Right. What do you guys think about that? Are you guys close to having that kind of model, where you can say you are at least, what, 60% there when it comes to having a self-serve analytics function within the organization?
00:21:24
Yeah. So I would say for ingesting data, like, if you're a data producer and want to get your data into Redshift, in many cases that's quite easy for you, or if you wanna make schema changes or things of that nature, that tends to be pretty easy. I think, and Lucas can talk more about this, the problem that's harder to solve is that when a producer loads their data, they may have certain caveats they're aware of, certain assumptions that are true or untrue. And when it comes to driving value from that data and doing analytics, you need to have an understanding of some of those nuances. And we don't have a CI/CD system that would inform you of all of those at this time. Right. So, Lucas, did you wanna talk more about that?
00:22:16
And I would say, like, we have not perfected a self-service approach by any means. If anyone's interested in the problem space Addison's really talking about, there's someone on LinkedIn whose posts I read pretty much every one of: Chad Sanderson. He'd be a great guy to look at as you start your data program and consider what governance and self-service look like. But I try to take it one level up. When we say self-service, like, we could easily say we're self-service, right? You can get the data you need in the warehouse and get access to it anytime you need it. If that's your definition of self-service, we've achieved Nirvana, but that has created additional downstream issues. Right. How do you keep people from writing the same query with a slightly different nuance every single time? How do you keep people from creating their own nuance on a metric that may or may not actually meet the company's objectives? So when I think of self-service, I think you gotta take a step back and say, at a certain stage of the company, it might be right to say everyone has access to most things. But at a different size of company, or in a different space, you may wanna say, no, self-service really means they're hitting the same Tableau or Looker layer every single time, and they can explore from there; they're enabled to explore the data in a safeguarded way. Right. And so when you say, how do you achieve true self-service? Again, I'm gonna go back to: every company has to make the right strategic decision for them, in their business and where they're at in their growth life.
00:23:56
And one of the key things we keep hearing about as a deterrent to self-service is shared data knowledge across the organization, you know, shared data knowledge in terms of what these tables are, who created that table. And typically we see, you know, vendors waving a magic wand of data cataloging and data discovery tools as kind of solving those problems. What are your thoughts on that? Have you ever explored any kind of data cataloging or data discovery tools?
00:24:27
Were you here, Addison, when we tried to stand up Amundsen as a skunkworks project?
00:24:34
So we did have Amundsen as a skunkworks project. Skunkworks is like an internal company hackathon type of project. And, you know, it showed some promise at that time, though some of the features we were really excited about, like automatically pulling in metadata, were not available in the open-source offering. So that was a little problematic. The thing I think vendors do a really good job of is providing you the tooling. But the problem that, you know, I don't know that any vendor can help you solve is the people problem. Right? Someone adds a new source: how do I guarantee that they've updated whatever your tooling is to represent that? How do I make sure the information in that tooling is a hundred percent accurate? How do I make sure everyone's aware that this tooling exists and is using it? And even if the data there is right, are they interpreting it correctly to then write the appropriate queries? That's one of the big challenges that has sort of prevented us from diving in with a data cataloging vendor: until we can answer all these questions, we're not sure that this would be money well spent for us.
00:25:54
So, yeah. Again, that goes back to my comment earlier that we take an approach of: if we've got a problem that a tool will solve, we're willing to bring it in, but until we have the true problem it's gonna solve, there's no reason for us to be investing in just a general modern data stack. To add on to what Addison was getting at a little bit: when you think of data cataloging, it's really trying to organize things and give you visibility into where that organization exists, right, within your warehouse or within your entire data stack. When we think of some of the ways we've tried to add layers of knowledge into our warehouse, you know, you can get a certain part of the way by creatively using schemas or creatively sectioning off your warehouse for certain purposes. You can get discoverability through tools like Redash, right? You can get a dropdown of all the tables, with the columns that you may or may not need to use for your query. So it's like, how do you have that right layer that exists to enable the analysis and the visualization and dashboarding being done on top of the data? That's, I think, step one, before you can even start graduating into: okay, now I've got a data catalog, and I've got well-defined owners, and those owners know it's part of their responsibility to maintain this thing. And then there's a social component that some data catalogs add in that seems really cool to me, but in talking to a bunch of people, I've not actually seen it work well in practice. And I think it goes back to the point Addison made: until you can answer some of these really organizational questions around how you would use a data discoverability or cataloging tool, the technology's just gonna be another way to surface the information that already exists.
00:27:57
Amazing. And this links back to the point, I saw somewhere in your bio, Lucas, that you wrote that you love solving decision problems that involve a distinctly human element. Right. And, you know, a few weeks back we had a chance to interview Juan from data.world, where he talked about the socio-technical approach to data, where data success should not only be measured by the technical aspects of things, but by how beneficial it has been to the end users of an organization. That links really well to this. So, we are almost coming towards the end of our episode, and before we leave our guests for today, we have a few rapid-fire questions, which we'll ask each of you individually.
00:28:48
Is this like a data party? Are you giving us a little data party at the end of this thing?
00:28:54
I wish, you know, I wish you were here with us in person and we could have gone out for a beer or something. The very first question, we'll start with you, Lucas, is: what's one tool or one platform or one technology that you just can't live without, you know, as part of your day-to-day workflow?
00:29:11
So this is gonna be a blast to everyone here listening: Miro. I don't know if anyone here has played with Miro, but I think in the analytics space, digging into and understanding processes is gonna make or break any analysis. Just cuz you can go see what leads are coming into your sales pipeline doesn't make it good data, good analysis, good BI. Understanding, you know, how do the leads arrive in our marketing systems and our sales systems? How are they then transformed into business opportunities for our sales team? I think back to my time prior to Hudl, where I was doing, you know, risk analysis and things like that. It's like, how does the freight rail world work? Without knowing that, I can say the number of accidents that happened at this location, but does that mean anything to the field operators? Absolutely not. Understanding how freight makes it through a terminal, out onto the main line, and then back into a terminal and ultimately into customers' hands is key to understanding how risk presents itself in a certain space. So I think the tool I can't live without is gonna be a diagramming tool that's super easy to use, and recently I've found Miro to be probably my favorite one of all.
00:30:37
That's the first time we're hearing that.
00:30:39
I got a background in human factors prior to data, right? You gotta merge the two. Right?
00:30:43
What about you, Addison?
00:30:44
I think for me personally, I would say Apache Spark. If I was to answer for the organization, I would have to say dbt.
00:30:52
Okay. So next question for you, Addison: SQL versus Python. What's your go-to tool?
00:30:57
I would say SQL on that one. If you can do it in SQL, all the better. Obviously I do use Python a lot, but I'll say SQL for this one.
00:31:11
Next question for you, Lucas: what's your go-to source for learning new stuff about data? Any particular book or blog or newsletter or other source that you would recommend to our listeners for keeping up to date with data and getting the real signal, not the typical noise you see around the modern data stack?
00:31:31
So, my team is gonna gimme crap for this when they hear this podcast, but I would say a well-tuned LinkedIn timeline is gonna be your best place, right? There are a lot of features on LinkedIn that allow you to dismiss posts, and so my timeline is distinctly data, because I've tuned it in a way that gets me really good access to information. And then that springboards into, you know, the different podcasts and newsletters. What I would say is, the data space right now is becoming like Wikipedia was about 20 years ago, where it may or may not be true, but someone was gonna put something out there about it. So obviously question the authenticity of the source it's coming from. But yeah, for a quick bite, quick hit, start with LinkedIn and then move from there. Off the top of my head, a few really meaningful books I've been reading recently: for the individual analyst, Storytelling with Data. I've not read it yet, but Joe Reis came out with Fundamentals of Data Engineering, which is getting a lot of praise in the space. And then if you're getting into analytics or data science management, John Thompson wrote a book called Building Analytics Teams, which is pretty darn good. So, I'm an avid reader, but I still think LinkedIn is probably the best daily space for me.
00:32:46
Any particular accounts that you would recommend?
00:32:49
I mean, I would say Chad. Joe does a Monday midday podcast-ish thing on LinkedIn, which I've always found brings in really good guests, and it's a super interesting conversation to hear them just nerd out on data every Monday. I would say those would be the two starting points. And then from there, your feed just gets flooded with a lot of really good, thoughtful people throwing content out there.
00:33:20
Any recommendation from your side, Addison?
00:33:22
I think the top one for me is probably Udemy, but for, like, maybe more of a deeper dive into certain technical topics. Over the course of my career that has helped me with little gaps, and it's a great way to fill 'em, I think.
00:33:39
Perfect. So, last question before we wrap up this episode: are you guys hiring in your data teams?
00:33:46
I personally don't have any positions open currently, but we're always kind of looking to expand and open positions on our teams. But if you want to work with Addison...
00:33:55
If you'd like to work with me, we are hiring for a data engineer. So.
00:34:01
Make sure you put Addison in the referral code. Yeah.
00:34:03
Yeah, you can put me as your referral. Absolutely.
00:34:07
For all the listeners out there, in the show notes you will find the link to apply for open jobs at Hudl. Okay. So with this, we wrap up our episode for today. Thank you very much for joining us, both of you. It has been such a pleasure speaking with you guys, and I hope our listeners had fun listening to this conversation with Lucas as well as Addison.
00:34:27
Yeah. Thank you.
00:34:28
Thank you guys so much.