Learn how data leaders at Hudl and Fluid Truck approach self-service at their organizations, and the tips & tricks for boosting data adoption.
May 16, 2023 - 9 min read
The key to unlocking business success is the ability to quickly act on data-backed insights. To leverage data’s full potential, all stakeholders need to know how to efficiently navigate and understand it - regardless of department, role, or tenure. In fact, Gartner found that data leaders who prioritize data sharing generate 3x higher return than those who do not.
To ensure that companies don't miss a beat and that all valuable data is used, it's important to implement a self-service setup and program.
It was a pleasure to have Lucas Smith, Sr. Data Analytics Manager at Hudl, and Connor Swatling, Head of Business Intelligence at Fluid Truck, on our webinar panel to discuss how their organizations approach self-service analytics as a way to boost data adoption and enable timely, confident decision-making.
Company & Data Team Context
Fluid Truck has grown from 10 employees in 2017 to over 400 today.
In 2021, Fluid Truck’s BI team was centralized into a few people trying to migrate analytics databases from legacy systems to a new home in GCP, while also providing ad-hoc reports to leadership. It was a very transactional process turning reports around between stakeholders and the BI team. This manual and time-consuming workflow became increasingly painful as new dimensions and definitions were being established without a reliable source of truth.
As the business grew, departments began to scale up their own operations and we rolled out new product features. At this point, the old ticketing system of “request-in, request out” wasn’t going to cut it.
That’s when the “Data Champions” program was born. I started engaging others in the business who had shown some competency in data - folks with a strong understanding of Excel or an interest in leveling up their careers in technology. These “data champions” were intimately familiar with the domains they operated in, and provided a buffer layer between our highest-traffic teams making the requests and the actual BI space.
Now, it's become more of a "hybrid model" where we have these sanctioned, proficient users (data champions) who are moderated by a more professional cadre of analytics users. It's been a great opportunity for individual team members to engage through this intermediary layer of embedded, proficient data producers.
Hudl now has over 1,000 employees globally across various domains from sales to product to finance.
Recently made the transition from a centralized data team acting as "pseudo-gatekeepers" trying to reach this far-fetched idea of self-service - to a decentralized team where we have domain-specific data experts who can develop data sources and content. With 1,000+ employees to serve, we needed a way to really meet our groups where they are, and not create any extra hurdles for them to get their hands on data.
We decentralized late last year with the hope that we can do a better job of connecting our producers and consumers with frameworks, models, and self-service access. We've taken the approach of saying: "The product team really understands the domain of the product. We can produce better self-service assets - higher quality and less prone to error - if we have our analysts within that organization."
Our focus right now is on product managers - to really get in and understand how users are using and interacting with our product. And how that then impacts our revenue.
We have 3 data scientists and data analysts, so it isn't possible for us to answer every single question. We needed to find a scalable way to get data out, and that requires self-service.
So as part of this process, we took a much deeper look at our tool set and whether or not it was meeting our needs internally. When Hudl's data program was set up 7 years ago, the goal was to dump everything in Redshift - kind of like the Lakehouse concept - and let everyone write SQL against it. Well, that obviously proliferated into a bunch of unmanageable things, with everyone coming up with different answers.
What does self-service mean to you?
How are our business leaders able to integrate various measures, analyses, and insights into their day-to-day processes and operations, in ways that they don’t have to go to a Slack channel and ask for something with their hair on fire?
Improving the conversations that take place day to day in the office and enhancing the decision-making process by ensuring that it follows the scientific method (form a hypothesis, use data to evaluate it, make a confident business decision). Essentially, it's about scaling curiosity through the scientific method.
It’s about risk-evaluation. How have you evaluated the risk of a decision you’re trying to make? Giving business members confidence in the data they’re leveraging to make a decision, and to drive them towards using it repeatedly.
It’s like in the movie Armageddon. The core premise is that there’s an asteroid coming to earth and in order to prevent the threat, they had to send astronauts up into space to drill into the asteroid and blow it up. Rather than train astronauts how to drill, they trained drillers how to be astronauts. And that’s a great analogy of where we need to be with self-service analytics. The data team can’t be experts on everything, so it’s important to empower business users who are experts in their domains, to understand how to put queries together, and access data when they need it.
Webinar Takeaways & Tips: Self-Service Analytics
Start with the WHY. There needs to be an internal desire, and you really need to understand the motivation of your self-service initiative. It should be approached almost like an internal market research activity, where you figure out the pain points around decision-making and how self-service fits into that.
Based on the why, come up with a strategy and definition for self-service that's specific to your company. Every organization defines self-service differently, so take the time to set the right expectations. Be very specific with your definition: Is it that everyone needs to learn how to write SQL? Or is it that everyone should know how to search through existing data assets? Or knows how to use your BI tool? Talk to people throughout the company and repeat your strategy.
Get C-level buy-in on data as a driving force
Consider decentralizing your team to remove yourselves as a bottleneck, and so that you don’t have data analysts working on domains in which they don’t have a deep understanding.
Evaluate your data tool stack based on your self-service definition, needs, and team structure. Don’t just go for the hottest tool on the market. Find what meets “your internal product-market fit” between your data consumers' needs and the data producers' output. For Hudl, for example, teams were too decentralized and autonomous for a BI tool that holds insights under lock (like Looker, with LookML and its code-based requirements) - it didn’t really make sense.
While your tool stack should always be based on your self-service strategy, it’s good practice to get a BI tool that has self-service exploration capabilities, like Whaly. Ideally, it should have access control and built-in governance features as well.
Self-service is accepting that your data team can’t do it all — but it still requires a significant degree of trust in your data consumers, to know what they’re asking and what the output represents. There needs to be a common “company-wide data language” that translates data source language into plain business terms that are consistent (a Semantic Layer). Put a semantic layer in place from day 1, even if it's a simple one.
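To make the semantic-layer idea concrete, here is a minimal sketch of the concept: a single shared mapping from plain business terms to their underlying SQL definitions, so every consumer computes a metric the same way. All table, column, and metric names below are hypothetical illustrations, not taken from Hudl's or Fluid Truck's actual stack.

```python
# Minimal sketch of a semantic layer: one governed mapping from
# business terms to SQL expressions. Names are hypothetical.
METRICS = {
    "gross_rental_revenue": "SUM(rentals.total_charge)",
    "active_trucks": "COUNT(DISTINCT rentals.truck_id)",
}

def build_query(metric: str, group_by: str = "month") -> str:
    """Render a consistent SQL query for a governed metric.

    Because every consumer goes through this one definition,
    'gross_rental_revenue' means the same thing in every report.
    """
    if metric not in METRICS:
        raise KeyError(f"'{metric}' is not a governed metric")
    return (
        f"SELECT {group_by}, {METRICS[metric]} AS {metric}\n"
        f"FROM rentals\n"
        f"GROUP BY {group_by}"
    )

print(build_query("gross_rental_revenue"))
```

In practice this layer usually lives in a tool (dbt metrics, LookML, or your BI tool's governance features) rather than hand-rolled code, but even a simple shared dictionary of definitions beats every analyst re-deriving "revenue" in their own SQL.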
Identify which metrics are most important for your company, and what your CEO cares about. This enables you to evaluate whether or not your strategies are on track.
Start small. Underpromise, overdeliver. You’ll be tempted to overpromise, but don't make the easy mistake of underdelivering.
Self-service and governance are two opposing forces, which means both can’t be perfect. The balance comes down to the risk that your organization is willing to take, and the velocity to insight that needs to happen vs. the accuracy of insight.
Develop a “Metric Manifesto” - content guidelines for analysts, allowing them to understand: what filters should I use when this sort of request comes through? How do I structure that question to best deliver content? What's the right template for this type of content? What's relevant to this team?
Consider a “Data Champions” program, in which you find folks internally who have shown a competency in Excel or an interest in leveling up their careers. These data champions should be intimately familiar with the domains in which they operate, and can serve as a valuable “buffer” between your highest-traffic teams making the requests and the actual BI teams. This takes the “wizardry” out of data and ensures that the people getting domain teams on board speak their language.
Supporting the data champions then becomes the work of your core team of analysts. Instead of consistently fielding every single request, you can lean on the data champions, which frees up the team to work on standards, build up the tech stack, do code reviews, hold data governance meetings, and formally train these champions who can represent the organization at a more granular level.
If this program is something that would work well at your organization, my advice would be: your organization likely already has this talent within it. There's no need to look externally. There are always folks who are hungry and want more responsibility - leverage that. Find the people who would jump at the chance to do more with their professional career and help them along, and they'll set an example for others to follow.
Play around with gimmicks or “competitions,” like a Leaderboard of the top business users who have used the BI tool for X weeks in a row. This might be a good way to encourage data consumers by tapping into their competitive drive!
How do you measure the success of your self-service initiatives?
CS: We're a group that operates with precision, so it's frustrating that I can't point to a quantifiable metric that says: “our team's existence has brought in X number of dollars to the company or improved efficiencies by X%.” But when I see how we approach decisions, there’s a big difference. Two years ago, conversations were very much based around gut feeling or “let’s do it this way because this is how we’ve done it in the past.” The conversation now follows the scientific method a lot more than it used to, starting with a hypothesis.
People will suggest, “I think we should make this business decision and move with this feature on the product side” and then execute on that hypothesis by asking additional questions, gathering data, doing A/B testing, evaluating what that feature implementation would look like in reality and then measuring results from that. We're getting better at that, and it’s very exciting to see this shift - not just at the highest levels, but all across the company.
And I think having data available and accessible across the company means everyone knows what the CEO wants them to know. If our gross rental revenue is down month over month, then that conversation should be a hallway conversation - not just something you have to prep for when you have your biweekly 1:1 with the C-Suite. That's the sort of, you know, emotional, not quantifiable answer to that question of how I measure success.
I'm always proud to see someone bringing numbers to a meeting or showing a graph that one of our team members made or they made themselves. I think it's tough to measure it because all of our data initiatives have sort of this expectation that they will deliver a result and allow a decision to be made.
LS: Sometimes, there's the occasional miraculous scenario of: "We found this insight, sent it to the sales team, they closed the deal." We've had a couple of those and those are awesome, but they're few and far between.
To me, the "data-driven culture" and adoption impact is really in those nuanced weeds of how you've evaluated the risk of a decision you're trying to make. For example, maybe you're trying to make a pricing decision. What is the upside and downside of that decision? And how do you balance that risk for the business? The value there is not necessarily in the data itself, but in being confident in the decision and being able to measure whether or not it moved the way they thought it was going to move. Other times, it has a direct result, like reducing the cost of goods.
And so on the various data teams that I've led, we tend to hone in on just one element of that value because it's probably the easiest to measure at the time. It's always good to think more holistically about the value you're there to provide. It’s something you've really got to think about as a data leader and self-service absolutely plays into that because if you've got a team of five data analysts, data scientists and data engineers and you've got a company of 500 people, then that's a high ratio in today's world. You're still not gonna be able to serve all 500 people. So how are you doing things that give those business members confidence in what they're using to make decisions, and drive them towards repeatedly using it?