Spotlight on Data Analytics Leadership with Lucas Smith

Lucas Smith is a Senior Data Analytics Manager at Hudl, a sports tech company. He gives a great introduction to Hudl below. I really enjoyed our conversation about his objectives, team structure and collaboration, dashboards, governance vs. self-service, and what it takes to be a good Data Analyst.

Try Whaly

Thousands of users rely on Whaly every day to monitor and improve their revenue. Join them now!

Can you give me some background on your role as Senior Data Analytics Manager at Hudl, and how you got there?

To tell you a little bit about Hudl, it’s a global sports tech firm that specializes in sports performance. We create video and data tools that make it easier for teams to analyze and improve performance, ensuring that athletes “get the shot they deserve.” Most companies focus on the media side of sports, but we’re all about performance. Initially, Hudl was built around a need that American high school football coaches had. Back then, they were exchanging DVDs of game film manually. Now, we provide this service in the cloud - with many more advanced tools on top. Hudl has tools to automate recording & upload of practice and game footage, break down the film, and more. Beyond high school, we’ve expanded to cover college and Elite (professional) sports, acquiring tools to offer an even more customized breakdown: live capture pre & post match for end-to-end analysis, and even access to a database of opponent footage. We’ve also recently acquired WIMU Pro, a wearables company, that will take us to the next level of sports performance analysis.

In terms of my career path, I majored in Math at Indiana University for undergrad, and then completed a Master’s in Kinesiology - essentially the scientific study of the human body and movement. I looked into how people in the workforce are impacted by the things they do, particularly in manual work: how much effort they put in, the load they carry, and the effects that has. How do they process that psychologically, and when do they choose to go get more material? This led me to become a consultant at a railroad company, performing fatigue data analysis and biomechanical modeling of human performance.

After that, I joined Union Pacific Railroad as an Analyst. Over the course of my 6 years there, I worked on analytics of operations around Safety Systems, as well as enterprise reporting and BI. There was a big effort to revamp the safety programs, and this is when my path shifted toward being data-oriented rather than human motivation-oriented.

We could definitely go deeper there, and that’s what drew me to studying Predictive Analytics. This ultimately led to the creation of many risk models throughout the operations side of the railroad.

When Hudl approached me, there was a Data Analytics Manager opening, which I thought would be an exciting move to the SaaS world. I now manage a team of Data Scientists, Data Analysts, and Analytics Engineers. We support the entire enterprise - Sales, Marketing, Finance, and Product teams - how customers use our products, sales pipeline, acquisition, etc.

We have lots of data that Hudlies can access when they need it. Our goal is to help Hudlies generate the insights they need in their day-to-day roles. We are working towards creating a decision-making ecosystem, rather than a collection of single-use, disconnected data projects.

What are your main objectives / KPIs?

Analytics leaders often wrestle with KPIs. It’s a bit “squishy” since the role encompasses so much. I’d love to have a black and white figure for you, like # of insights generated, but the reality is that it’s not so clear cut. On top of the charts and dashboards, we’re responsible for tackling things like:

  • How are we doing things more efficiently? What’s more efficient now than it was before?
  • What can we kill that isn’t correct?
  • How do we implement sources of truth that are scalable?

In terms of what success looks and feels like, I’d say it’s working with stakeholders to help them instrument a tracking plan to answer their questions on an ongoing basis. Once we have it in place, are they using it consistently? Are they able to get what they need from it? We have a dashboard that enables business leadership to see a single # around revenue metrics, which is imperative.

In this sense, it’s the usefulness of our dashboards and data assets. Are they actively being used and adding value? We share a lot of our reporting via Slack, so I keep tabs on Slack to see which of our data assets are being shot around and used by those across the company.

We have a central data team, and from this perspective, when creating a KPI, you really need to be clear about what you’re optimizing for. Then, be ready to pivot when it’s in a good place, as opposed to spending all your time doing that one thing. It’s a balancing act between ensuring clean dashboards in the future, building a better-educated internal team around data, measuring data cleanliness, and fielding requests from business teams.

Our main responsibility is to deliver data experiences that will help our various teams drive efficiencies and grow our business, which comes in various shapes and forms, both short-term and long-term.

How is your team structured?

I manage a central Data Analytics team, composed of Data Scientists and Analytics Engineers, and we sit under the Business Operations organization - which reports into the COO.

We work extremely closely with the Data Engineering team, which sits under the Business Technology function - so structurally speaking, we’re not on the same team. We work with them every day, however, on how to optimize our Redshift cluster, and constantly share our pain points with them. At Hudl, we try to make everything available and low-touch in terms of data pipelines, so that we aren’t overly reliant on data engineers. We help software engineers who are data-savvy learn how to do certain data engineering tasks.

Everyone on our team is quite technically sound, even our interns, so there’s an expectation that everyone needs to have SQL knowledge as a baseline. We offer a great starting point for interns, who are initially focused on short-term requests from business users, which can oftentimes lead to higher-level involvement in larger projects. First, they may be answering those rapid-fire requests and questions around usage data and cutting financials. But I always teach my team NOT to stop at dashboards, and to dig deeper beyond the quick stats. If we find something that’s worth spinning up a larger effort, then we’ll make a request to our business partners, and see if that’s something we can pursue.

Our interns get a lot of exposure, working cross-functionally. We have a request channel on Slack called “Analytics Support,” where business users can request help running queries and answering questions. Our interns manage this channel, with requests split out between teams supporting each segment of our business, which means they get to develop domain knowledge as well. Our promise to them is that “when you leave this role, you’ll have the experience you need for that first job.” This is a good way for them to build up knowledge, and a data portfolio.

What’s the key to being a good Data Analyst? How does your team collaborate with your business users?

The best data analysts have a super solid understanding of the business. This is key to winning in this role. As a leader, I always ask myself: “Am I positioning my data analyst in the best place possible for 1) getting to know the business well, and 2) having the right access to provide the best insights?”

We have an efficient line of communication with business teams through our Analytics Support channel. We also have routine meetings with different stakeholders of the leadership team. We have a list of agreed priorities that we go over with them, and can raise prioritization requests whenever we want to suggest any changes or other projects.

A data analyst that knows our entire business, inside and out, is better positioned to help our Product team, Sales team, and really any team - ultimately pushing the company in the right direction based on business goals. You should always try to answer: “What are the business needs you’re trying to fulfill?” when helping any team to leverage data for decisions.

I may be biased because I genuinely find the business side interesting, but if you’re in Analytics, you’re bridging the gap between the business users and the data engineers on the technology side. Everything you do should ladder up to business outcomes.

Let’s talk about Dashboards. What are some things to take into consideration when creating mission-critical dashboards?

A dashboard is really a tool in an analyst’s tool belt. It’s not the only end result - if you need pipeline metrics, a dashboard is useful, and so is a report. A dashboard is not the end-all-be-all.

Always think about the purpose or the reason for the dashboard, and restrain its scope. There’s a common perception that you should create ONE super-dashboard that rules them all, and that a good dashboard should account for 80%+ of use cases.

In my view, you’re actually failing if your dashboard does account for 80% of use cases. Why so broad? A dashboard is meant to be a quick & easy representation of a business process that you need an ongoing pulse on - whether it’s revenue, lead pipeline, product usage, or shipping times - making that info readily available. With a super-dashboard, the chances of confusing these metrics and not answering the key question are high.

I think the basics of human perception aren’t taught enough, which is absolutely vital when building a dashboard.

There are 2 books I recommend that offer golden guidance:

  1. The Big Book of Dashboards by Steve Wexler, Jeffrey Shaffer, and Andy Cotgreave
  2. Storytelling with Data by Cole Nussbaumer Knaflic

These books are sitting on my desk as we speak. They remind us that different people see the world in different ways. For example, consider people who have color vision deficiency, or people from cultures that tend to scan pages differently.

I recently wrote a LinkedIn post on my views around the filter on the left-hand side of a dashboard. In a nutshell, the Western world reads from left to right, top to bottom. This makes the top-left corner prime real estate. Adding a filter to such an important area doesn’t communicate what’s most important to the user who needs that information. If it’s revenue for the quarter and how it compares to other quarters, that should go in the top left. If the question is, “Is our revenue in trouble?”, don’t conflate it - give them exactly what they need to take that information away easily and quickly. Put yourself in the shoes of the person who needs the information, and design your dashboard/reporting accordingly.

Don’t default to a certain type of dashboard if it’s not the easiest and best way for the user to answer the question at hand. Define the purpose, and then think of which format works best.

There are 3 main formats in my view:

  • Core KPI dashboards - quick-hit info at a glance
  • Exploratory dashboards - guided explorations of the data, enabling you to explore a handful of KPIs
  • Single-pager dashboards - this one isn’t talked about enough. If you have data (an efficiency metric, revenue, usage, etc.) with deeper insights worth highlighting that aren’t effectively shown in a dashboard, you should consider a single-pager. This is where the end result is a PDF or a consumable that includes both the visualized data + human insight in text form. It’s a guided narrative that takes the user through the data and presents insights.

Remember that you can always dig deeper to make your information more valuable, for the purpose that it’s meant to serve.

What are some ways that you’re addressing data quality and governance at Hudl?

To maintain high-quality data, a few best practices are to:

  • Automate quality checks
  • Leverage the testing functionality within dbt
  • Have a super solid team of data engineers in place and have confidence that they’ll get all that data into the data warehouse.

Once the data hits the warehouse, we have key models in place that test for things that could go wrong. We’ve written custom tests to identify and alert when there are changes that seem off. If there’s a major drop in core sign-ons, for example, we’d trigger automated emails. There are also observability tools like Metaplane and Monte Carlo that give engineers and analysts the power to identify issues earlier.
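A drop-detection test like the one described can be sketched in a few lines. This is an illustrative example only, not Hudl’s actual implementation, and the metric values and function name are invented:

```python
# Hypothetical sketch: flag a daily metric whose latest value falls
# sharply below its recent baseline (e.g. a major drop in sign-ons).
from statistics import mean

def check_for_drop(daily_counts, window=7, threshold=0.5):
    """Return True if the most recent value falls below `threshold`
    times the mean of the preceding `window` values."""
    if len(daily_counts) < window + 1:
        return False  # not enough history to judge
    recent = daily_counts[-1]
    baseline = mean(daily_counts[-(window + 1):-1])
    return recent < threshold * baseline

# Steady sign-ons, then a sudden drop on the last day.
signons = [1020, 980, 1005, 990, 1010, 1000, 995, 400]
if check_for_drop(signons):
    print("ALERT: core sign-ons dropped sharply")  # here you'd send the email
```

In a real pipeline this kind of check typically lives in the transformation layer (e.g. as a dbt test) rather than in standalone scripts, but the comparison-against-a-baseline logic is the same.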

For Analytics Leaders, if you’re going to make your first data hire, make sure that they are skilled in data. There’s a common mistake these days, correlated with the increasing simplicity of tools, that tempts people to make an unskilled data person their first hire: “The tool seems easy, so hire anyone and they’ll learn the tool.” If you do this, you risk building your entire data structure on an unscalable base. The first data hires should have ETL developer skills and be able to create scalable data assets.

Because the tools these days are easier to use, it will be easier to reach the perfect state of self-service - but at the expense of data quality and consistency. This is always a difficult trade-off, but data quality and consistency are critical, and it really starts with those first data hires believing in that importance and creating the scalable foundation.

Another thing to keep in mind about data quality is that companies will undergo many transformations, particularly fast-growing tech and SaaS companies. I believe that companies must be willing to adapt and erase the past. A data asset should have a lifecycle, and there should be a trigger for deletion.

Being willing to pivot and adapt looks like, “we’ve learned a ton from this data, but it’s no longer relevant or up to date, so let’s start from scratch.” It could be that your company’s first product is not the same product as it is now. This “let’s move on” mindset isn’t often considered, even if that data has no more relevance - and will be a blocker later on when you scale.

Don’t be afraid to use that “start over” option. We’ve experienced pain around unwillingness to start fresh. There’d be less pain in hitting the reset button.

Data quality is linked to Governance. Do you have any thoughts about how to maintain governance, while allowing business users to self-serve?

We have a Redash instance that allows for self-service. Self-service can be tricky because you don’t want to give all users access to all the data. This can cause issues around data governance, break data, or leave users working off of wrong or stale data. In this sense, it’s good to create an exploratory dashboard, so that business users can explore a handful of KPIs without breaking anything. We use a BI tool with built-in governance features.

To me, self-service is a bit of a buzzword that means anything and everything these days. Self-service solutions can provide drag & drop exploration capabilities, but access to and knowledge of concrete KPIs can’t necessarily be self-served by business users with no technical expertise. The meaning goes around in cycles. At some point, self-service even pushed the concept that everyone should be writing SQL. Yes, anyone can learn SQL, but should they? Companies need to think of the additional cost and resources of getting business users to learn SQL if it’s not their core job.

It boils down to striking a balance between:

Freedom = multiple sources of truth, lots of analysis performed by everyone

Lack of freedom = consistent source of truth that’s governed, but people can’t get answers fast enough

Accessibility is key until you’re left with a big, messy pile. This is why I believe everyone needs a data catalog, to categorize the data that’s in the warehouse. We’ve scaled to the point where we’re progressing to enterprise analytics, and we really need to implement layers of certainty within our data. We always need to ask ourselves, “Is this good data?” - in other words, is it well governed?

What’s the toughest part of your role as an Analytics Leader?

As an Analytics Leader, I would say that it’s balancing the many requests and responsibilities for getting data out to the rest of the business, with making enough time to work with my team members on a personal level and help them grow. If my team isn’t growing in/out of their roles, I’ve failed, and that requires individual attention.

I strive to give them 1:1 time and feedback, and help them towards their career goals, sometimes even graduating from Hudl onto bigger and better things. I believe that if your people are healthy, they’ll perform well. As an Analytics Leader, there will always be a billion things going on, but you can’t forget your people. To take it a step further, if you’re overwhelmed by what’s on your plate, just go back to focusing on your team. That will pay off and yield the best results.

What do you enjoy most about your role?

Based on my answer to the previous question, this shouldn’t surprise you - it’s the people! The people are incredible, and make Hudl better every day. We’re all aligned to the same mission and north star, so whenever there’s a conversation around “how do we do better,” it’s always a teamwork-oriented “we” conversation that makes it feel like we’re truly in it together. If I have to move you or fire you, that’s on me, and I’ve failed.

There’s a “Coaching Tree” concept in sports, which applies to business as well, and has really benefited me in the Analytics space. My team and our experiences make up a coaching tree, which branches out when people move on or up. Some of them become data scientists at large corporations and go on to do great things. We stay linked in our journeys as we move our way up the industry, and learn from each other.

What advice do you have for someone looking to get into your field?

Don’t fall into the trap of thinking you need to complete 18 boot camps.

Instead, my biggest pieces of advice are to 1) network, and to 2) find ways to use and apply the skills you’re learning. If you’ve never created a dashboard, people tend to turn to resources like Alex the Analyst and will build their first dashboard based on his training module. While he’s a great resource and that’s a great way to learn, that first generic dashboard is not a portfolio builder - don’t put it on your resume.

Graduate from that quickly and create something that deserves to be on your portfolio. To achieve this, I recommend gathering data on something you’re passionate about. For example, I’m an avid runner, and I’ve collected and visualized my own running data to inform how I structure my training. Similarly, my friend is into golf. He used a free app to gather info on his golf rounds, then used BigQuery’s free tier to collect the data, and applied a visualization tool on top of that as his first data project. Now that’s a real project that you’re likely to use and keep iterating on. When presenting your portfolio and projects in job interviews, you’ll be more organically appealing, and be able to talk about it with passion and confidence.
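As a sketch of what such a first project might look like - all dates and distances here are invented - you could start by summarizing a personal running log by week before putting a visualization tool on top:

```python
# Hypothetical first portfolio project: total up a running log by ISO week.
# Replace the sample rows with data exported from your own tracking app.
from collections import defaultdict
from datetime import date

runs = [
    (date(2023, 5, 1), 3.1),
    (date(2023, 5, 3), 5.0),
    (date(2023, 5, 8), 4.2),
    (date(2023, 5, 10), 6.0),
]

weekly = defaultdict(float)
for d, miles in runs:
    year, week, _ = d.isocalendar()  # group by ISO year + week number
    weekly[(year, week)] += miles

for (year, week), total in sorted(weekly.items()):
    print(f"{year}-W{week}: {total:.1f} miles")
```

A weekly summary like this is exactly the kind of derived table you would then chart and iterate on, and it gives you something concrete to talk through in an interview.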

If you’re trying to pivot into the Data Analytics space from elsewhere - a Marketing role, for example - find someone internally who can teach you the basics of SQL, Tableau, etc. As you internalize more information, you’ll ask deeper questions. Use those skills to ask and answer real questions about your current role. What’s really exciting is that data warehouses these days are accessible and affordable, which means anyone can get started.

For some reason, people really overcomplicate the skills a Data Analyst needs to get started. The starter skillset is actually simple:

  • Learn basic SQL
  • Know how to use Excel
  • Start answering questions
  • Most importantly, be curious and persistent
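To make that starter skillset concrete, here is a minimal, hypothetical example of basic SQL answering a business question, using Python’s built-in sqlite3 module so it runs anywhere (the table and figures are invented):

```python
# Basic SQL in action: which product line generated the most revenue?
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("video", 120.0), ("wearables", 75.0),
     ("video", 200.0), ("analytics", 90.0)],
)

# Aggregate, group, and sort - the bread and butter of analyst SQL.
rows = conn.execute(
    "SELECT product, SUM(amount) AS revenue "
    "FROM sales GROUP BY product ORDER BY revenue DESC"
).fetchall()

for product, revenue in rows:
    print(product, revenue)  # the top row answers the question
```

The point isn’t the tooling - it’s forming a question, writing the query that answers it, and reading the result critically.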
