How to Deliver Data Experiences with a Data Mesh Mindset

Adopting a data mesh mindset is the first step to a data-driven organization. What next? Learn how to deliver data experiences to your domain.


In a previous blog article, we talked about the differences between a data monolith and a data mesh, and which structure works best for you.

Now, we’ll dive deeper into data mesh specifically, and share tips on how to deliver data and serve users within your domain in this structure.

While a monolithic architecture may still be manageable for smaller organizations, data mesh is a more scalable model for the inevitable surge in data and sources that will come as your company grows. That’s why we’re doubling down on it.

Remind me, what’s a data mesh again?

“Data mesh” was introduced in 2019 by Zhamak Dehghani, the former Director of Emerging Technologies at Thoughtworks. It’s a disruptive approach and mindset for managing and sharing data for analytical use cases, one that calls for a change to both the organizational structure and the technical architecture of businesses. [1] While it’s considered a “socio-technical” shift, the emphasis is significantly more on the “socio” - the mindset, the people, and the organization. After all, the mindset has to come first and foremost, with the technology following after. [2]

Why do we need a new mindset? Starting with the bigger picture, Zhamak noticed that companies today all strive to be “data-driven,” putting data at the heart of their mission statements. They set ambitious goals around leveraging data to be more customer-centric, to anticipate needs, to grow revenue, to drive efficiencies, to reduce costs - you name it. According to Accenture, 90% of all businesses are expected to explicitly mention data as a key success factor by 2022. [3] Companies that have already made data a crucial part of their trajectory are putting their money where their mouth is: 65% of enterprises are investing over $50 million in data initiatives. [4] However, despite these ambitious goals and all the money behind “data,” the results have consistently fallen short of everything going into these efforts. That’s when Zhamak started investigating and re-thinking the way data is currently structured, and why the business results weren’t adding up.

The status quo of how data teams are structured in organizations is predominantly monolithic, where there’s one central team that sits in the middle, responsible for dishing out data to all other functions and departments. This centralized approach has a few challenges:

  • The data team ends up being a bottleneck for people who need the data, slowing down business decisions and adoption. A bigger consequence is a general mistrust of data, with domains having to find workarounds to analyze data themselves.
  • Business context can get lost with one data team in the middle that doesn’t have deep knowledge into a specific domain.
  • The data isn’t interoperable across domains.

In contrast to a monolith, a data mesh is a decentralized data management structure, where data experts sit in different domains and are responsible for delivering “data as a product” to those teams. Thinking about “data as a product” means that the data experts for any particular business domain can tailor how data is best shared, adopted, and used for that function. This model provides more freedom and autonomy for both the data experts and the business users, along with a shared sense of accountability and responsibility.

Data mesh has four main principles, outlined by Zhamak [5]:

  • Domain-driven ownership of data

There are clear boundaries of responsibility between domain teams (those focused on creating business-oriented data products) and platform teams, who focus on technical enablers for the domains. This differs from the monolithic structure, where a single data team is responsible for both the domain-specific data used for analysis and the underlying technical infrastructure.

  • Data as a product

Data sets should be served the way a product is delivered to a customer. This concept refers to the application of product development principles to data projects, such as identifying and addressing goals and needs, agility, iteration, and so on. This methodology empowers data experts sitting within their respective domains to apply product management principles so that their data work is more valuable and scalable for its consumers. Data products need to be easily accessible to all data consumers within the organization, with a special focus on the consumers within their own domain (a small sketch of what this can look like in practice follows this list).

  • Self-serve data platform

Even in a data mesh approach, everything must rest on a common platform and tools that are easy to use, including for those who don’t have technical data expertise. Domain teams must be able to autonomously build and maintain their own data products. Without a self-service infrastructure in place, domain teams have to rely on limited resources that aren’t dedicated to their domain, and can’t truly own their data. Self-service platforms enable this autonomy and data adoption across the organization.

  • Federated computational governance

Federated governance sets metadata and documentation standards that each domain must follow and apply to their own data products. It ensures that data products from different domains can be combined and modeled against each other easily. Of course, there needs to be a balance between global governance and the autonomy that each domain team should have. Maintaining consistent access controls and data protections is important in a data mesh approach, to ensure overall quality.
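
To make the “data as a product” and federated governance principles a little more concrete, here is a minimal, hypothetical sketch in Python of what a domain’s data product descriptor and a shared governance check could look like. This is our own illustration, not part of Zhamak’s specification - the DataProduct class, the passes_governance rule, and the example columns are all invented for the example.

    from dataclasses import dataclass, field

    # Hypothetical descriptor a domain team might publish alongside its data set.
    @dataclass
    class DataProduct:
        name: str                               # e.g. "sales.opportunities_daily"
        domain: str                             # owning domain, e.g. "sales"
        owner: str                              # accountable data expert
        description: str                        # human-readable documentation
        schema: dict[str, str]                  # column name -> type
        refresh_sla_hours: int = 24             # freshness consumers can expect
        tags: list[str] = field(default_factory=list)

    # A federated governance rule: defined once globally, applied by every domain.
    def passes_governance(product: DataProduct) -> list[str]:
        """Return a list of violations; an empty list means the product is compliant."""
        violations = []
        if not product.owner:
            violations.append("data product must name an accountable owner")
        if not product.description:
            violations.append("data product must be documented")
        if not product.schema:
            violations.append("data product must declare its schema")
        return violations

    # Example: the sales domain publishes a data product and checks it against the standard.
    opportunities = DataProduct(
        name="sales.opportunities_daily",
        domain="sales",
        owner="sales-data-analyst@example.com",
        description="One row per opportunity per day, for pipeline reporting.",
        schema={"opportunity_id": "string", "stage": "string", "amount": "numeric"},
        tags=["sales", "pipeline"],
    )
    print(passes_governance(opportunities))  # [] -> compliant

The point is not the specific fields, but the split of responsibility: the descriptor is owned by the domain, while the governance rule is defined once and shared, which is what keeps data products from different domains interoperable.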

So, how do I serve my domain?

Congrats! Your company has “officially” adopted this data mesh approach, and you are the data analyst sitting on the sales team.

  1. Data mesh is exciting because you get to delight your consumers with the experience of your data. Forget the typical processes you are used to. Put yourself in the mindset that you’re building a “product” from scratch, to best serve the “customer” who will be using and consuming it. Rather than jumping right in, you have a chance to take a step back, understand their needs, and then build your “product” - modeling the data so that your domain team can consume it in the most useful way. Your data experience is the product you’ll be delivering, and you have the autonomy to build and own it.
  2. Identify the primary metrics that matter to your domain. Talk to your team and truly understand their goals and what they want to get out of data. What are their objectives? What do they need to know at a team and individual level? What activities need to be tracked in the sales process? What kind of dashboards and data experiences would be most valuable for them? How data-savvy are they?
  3. Based on this, figure out the right tools for modeling and visualization that will best cater to your consumers.
  4. Create your data sources, or plug into and re-route them from the central architecture. Even though ownership is decentralized and you’re building your own “product,” that doesn’t always imply decentralized infrastructure. For governance purposes, rely on the shared underlying platform.
  5. Anticipate their next questions once they visualize the data - what will they want to drill down into? How can you already show this?
  6. Create the tables, dashboards, and charts that reflect those answers with your tools.
  7. Boost adoption and discoverability of your domain’s data by cataloging it - document where to find your new data so other teams can self-serve (a minimal sketch follows this list).
  8. Iterate. Get feedback and understand what’s working and what’s not, to make necessary changes accordingly.
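
As a rough illustration of steps 2, 6, and 7, here is a hypothetical sketch of how a sales-domain analyst might document their primary metrics and publish a small catalog entry so other teams can discover the data. The metric names, SQL snippets, and the catalog.json file are assumptions made for the example, not a prescribed format.

    import json

    # Hypothetical metric definitions for the sales domain (step 2):
    # each metric documents what it means and how it is computed.
    SALES_METRICS = {
        "win_rate": {
            "description": "Share of closed opportunities that were won, per month.",
            "sql": "SUM(CASE WHEN stage = 'won' THEN 1 ELSE 0 END) * 1.0 / COUNT(*)",
        },
        "pipeline_value": {
            "description": "Total open opportunity amount, per sales rep.",
            "sql": "SUM(CASE WHEN stage NOT IN ('won', 'lost') THEN amount ELSE 0 END)",
        },
    }

    # A minimal catalog entry (step 7): document where the data lives and who owns it,
    # so other domains can self-serve instead of asking the sales analyst.
    catalog_entry = {
        "dataset": "sales.opportunities_daily",
        "owner": "sales-data-analyst@example.com",
        "location": "warehouse://analytics/sales/opportunities_daily",
        "metrics": {name: m["description"] for name, m in SALES_METRICS.items()},
    }

    # Write the entry to a shared, discoverable place (here just a local JSON file).
    with open("catalog.json", "w") as f:
        json.dump(catalog_entry, f, indent=2)

    print(f"Registered {catalog_entry['dataset']} with {len(SALES_METRICS)} documented metrics.")

In practice the catalog would live in whatever shared platform your organization uses, but the habit is the same: every data product ships with documented metrics and a discoverable entry point.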

In summary, unlike the traditional monolithic approach in which everything is managed from one central data lake, the data mesh supports decentralized, domain-specific data consumers, with data experts sitting in the respective teams. These data experts are in charge of delivering their data experience as a product, managing their own data pipelines and figuring out the best way for their domain to consume and make use of the data so that it adds business value. Beneath the data mesh, there’s a standardized layer of governance to ensure that the data is reliable, accurate, and trustworthy.

We hope this article has inspired you to shift towards a data mesh mindset in which “data is a product” and get excited about owning your data experiences and delivering them in a way that best serves your domain consumers. This mindset will undoubtedly maximize your chances of making an impact on your domain - and organization - and driving a true “data-driven” culture.

To learn more about data mesh and how you can drive this in your organization, talk to one of our data experts!

Sources:

[1] Zhamak Dehghani, “Delivering data-driven value at scale,” 2022

[2] Ammara Gafoor, Ian Murdoch, and Kiran Prakash, “Data Mesh in Practice: Getting off to the right start,” Thoughtworks, 2022

[3] L. van der Sande, M. van der Meijden, and M. Geleijns, “Why you need to capitalize on the rise of the data-driven enterprise,” Accenture blog, May 2021

[4] S. Bokil, “Nearly 65% of Organizations are Investing Over $50 Million in Big Data and AI,” Enterprise Talk, January 2020

[5] Eleks, “Data Mesh: The Four Principles of a Distributed Architecture,” DataDrivenInvestor, March 2021
