The dbt Cloud x Whaly Integration Is Here

We're excited to announce our native integration with dbt! Learn why we built it, and why we think it's a game-changer.

We’re excited to announce a native integration between dbt Cloud and Whaly! 🎉

dbt’s impressive adoption across the data community is proof of the value it provides to data teams. Through our integration, we aim to help companies close the gap between dbt Cloud and Whaly, driving efficiencies in how analysts work across both tools.

Background

dbt is a great open-source tool that simplifies data transformation: engineers and analysts write SQL statements, and dbt turns them into datasets directly in the warehouse. The tool has taken the data world by storm and is now widely adopted across the data community. Essentially, anyone who knows SQL can build production-grade data pipelines and produce datasets for analytics.

With dbt, people are empowered to bring the software engineering mindset and methodology to their data modeling projects: version control through git, unit testing, continuous integration, and continuous deployment. Countless companies have leveraged dbt to scale their modeling capabilities in this way. However, it also introduces a “new way of working” for data teams, which inevitably comes with some drawbacks.

Why we built the integration

While there’s no doubt that dbt is a great, game-changing tool for the data world, it's no silver bullet, and there are a few problems that remain:

Problem #1: Working the software engineer way is time-consuming

Software engineers typically follow a standard process, which goes like this:

  • Receive a change request from the business team
  • Implement the request on a local branch of the project
  • Locally test the new implementation
  • Open a Pull Request on your favorite git tool (GitHub, GitLab, etc.)
  • Have a peer review and approve the Pull Request
  • Merge the Pull Request
  • Check production state after the Pull Request has been merged

This is a complex process, but it has plenty of benefits, which is why it has remained the standard way of working for software engineers. To highlight a few:

  • Each development must be approved by a peer, which means:
    - Codebase is more consistent
    - Fewer bugs are pushed to production
  • Each feature can be tested locally, which results in:
    - A smoother dev experience
    - Less time spent debugging, and more time spent fixing bugs and working on new features

With complexity and robustness, however, often comes slowness, and that is one of the downsides of this process. A lengthy process means a longer dev cycle for new features, which erodes your business users' confidence in your ability to handle their requests quickly. If it takes you a full week to implement a change request, your business teams may doubt that their requests will be taken seriously and handled swiftly.

Conversely, the data team and engineers may feel more confident in their newfound, more robust process for implementing change requests. The business teams likely won't see things that way, however, and the delays could negatively impact adoption.

Problem #2: What's not in your dbt project is your downfall

Using dbt gives you a huge advantage in how lineage is defined and tracked, and in how your data is tested and documented. But at the end of the day, all of that information is committed in a git repository and exposed internally through the dbt documentation. This creates a disconnect between the data team, who rely on what they see in dbt, and business users, who rely on what they see in the BI platform.

This has several implications:

  • As analysts, when we are tasked with debugging a dashboard, we rely on the dbt lineage to figure out the dashboard's dependencies and find the root cause. This means that if dashboards aren't committed in dbt, they won't show up in the documentation, and will therefore be harder to debug (a sketch of what such a lineage lookup can look like follows this list).
  • As analytics engineers, in the case of a pipeline failure, if dashboards aren't committed in dbt, we can't tell which dashboards are broken or warn their users. This is frustrating for all parties: dashboards will be broken somewhere, and nothing can really be done except warning everyone downstream that the numbers shouldn't be trusted. If this happens often, it leads to a lack of trust in data from the business team and a fear of pushing changes from the data team.
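To make this concrete, here is a minimal sketch of the kind of lineage lookup that becomes possible once dashboards are registered in the dbt project as exposures. It only reads dbt's manifest.json artifact and walks its child_map; the file path, project name, and model name are made-up placeholders, and this is an illustration rather than how Whaly itself is implemented.

```python
import json

# Minimal sketch: given a model that just failed, list the dashboards
# (dbt exposures) sitting downstream of it by walking the child_map
# in dbt's manifest.json artifact. Names and paths are illustrative.
MANIFEST_PATH = "target/manifest.json"           # written by `dbt compile` / `dbt run`
FAILED_MODEL = "model.my_project.daily_revenue"  # hypothetical unique_id of the broken model

with open(MANIFEST_PATH) as f:
    manifest = json.load(f)

child_map = manifest["child_map"]  # maps a node's unique_id to its direct children

# Walk the dependency graph downstream, collecting exposures along the way.
to_visit, seen, impacted_dashboards = [FAILED_MODEL], set(), []
while to_visit:
    node = to_visit.pop()
    for child in child_map.get(node, []):
        if child in seen:
            continue
        seen.add(child)
        if child.startswith("exposure."):
            impacted_dashboards.append(manifest["exposures"][child]["name"])
        to_visit.append(child)

print("Dashboards impacted by the failure:", impacted_dashboards)
```

Without exposures, a traversal like this stops at the last model in the warehouse, which is exactly the blind spot described above.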

Problem #3: Your business users don't live in dbt

Your business users consume data every day through approved company tooling. It's not uncommon for data pipeline issues to happen, and when they do, they tend to be frustrating for business users, because business users don't work in the tools where these issues are visible.

Let's take the example of Lenny. Every morning, Lenny looks at his dashboard. This morning, the numbers seem off: the figures look the same as yesterday's, so Lenny decides to Slack his analyst. One hour later, the analyst comes back to him with this answer: "We're very sorry, but we've been trying to fix our pipeline since this morning, and this dashboard might be impacted. We're pushing a fix right now, so expect things to go back to normal by noon."

Asking your business users to challenge what they see every time they open their dashboards is definitely a trust-killer, and can harm data adoption across your business teams.

The Solution

We’ve built our native dbt Cloud x Whaly integration as a solution to these problems. Our integration is designed to help you bridge the gap between your dbt data project and your business intelligence platform. As a result, analysts can work more efficiently and build trust with their business teams.

Here’s why we think this is a big deal:

Benefit #1: Faster iteration speed

The Whaly x dbt integration allows you to import all models and sources created in dbt into your Whaly project. Your data analysts can then work in Whaly in an environment that mirrors the state of your dbt project. Analysts can leverage models and sources declared in dbt to quickly prototype new models in Whaly before moving them back to dbt, where they become a predictable, stable part of the codebase that is core to your business. We’ve observed that the shorter the iteration time, the greater the business satisfaction.

Faster iteration time in Whaly using dbt models as inputs

Benefit #2: Stronger bonds between business and data teams

Our integration allows you to import all test and freshness results and display this information at the chart level. This is enormously helpful in communicating to your business users when the upstream pipeline has issues: you’ll be able to clearly inform everyone about what has and hasn’t been impacted.

This boosts your business users’ confidence in the charts stored in the BI platform: they can rest assured that they’ll always be alerted quickly and given specifics on what’s going on with the data. In parallel, the same information helps analysts debug and find the root cause faster.
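As an illustration of where this kind of status information comes from (and not of how Whaly implements the integration), here is a hedged sketch that pulls the run_results.json artifact of a dbt Cloud job run through the dbt Cloud Administrative API and lists the tests that didn't pass. The account ID, run ID, and token are placeholders.

```python
import requests

# Illustrative sketch only: fetch the run_results.json artifact of a dbt Cloud
# run and list the tests that did not pass. The IDs and token are placeholders.
DBT_CLOUD_API = "https://cloud.getdbt.com/api/v2"
ACCOUNT_ID = 12345   # hypothetical dbt Cloud account id
RUN_ID = 67890       # hypothetical id of the latest job run
TOKEN = "dbtc_xxx"   # hypothetical service token with read access

headers = {"Authorization": f"Token {TOKEN}"}
url = f"{DBT_CLOUD_API}/accounts/{ACCOUNT_ID}/runs/{RUN_ID}/artifacts/run_results.json"
run_results = requests.get(url, headers=headers, timeout=30).json()

# Each entry in "results" carries the node's unique_id and a status
# such as "pass", "fail", "warn", or "error".
failing_tests = [
    r["unique_id"]
    for r in run_results["results"]
    if r["unique_id"].startswith("test.") and r["status"] not in ("pass", "success")
]

print("Tests that did not pass in the latest run:", failing_tests)
```

Surfaced next to the right charts, this is the information that lets business users know at a glance whether a number can be trusted.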

Boosting trust through alerts

What’s next?

Looking ahead, our main goal is to create a platform that drives adoption among business users while giving data teams the right tools to excel at their jobs. That's why we’d like to push for a deeper integration between the two platforms by:

  • Automatically pushing explorations / dashboards as Exposures in your git repository (a sketch of what such an entry could look like follows this list)
  • Letting Whaly users create explorations using the dbt Semantic Layer
  • Automatically invalidating caches after a dbt run
  • Integrating with the dbt CLI in your own orchestrator
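To give a feel for that first item, here is a minimal sketch of a dashboard being written out as a dbt exposure entry. The dashboard name, owner, URL, and upstream models are made-up placeholders, and the exact fields we end up emitting may differ.

```python
import yaml  # PyYAML

# Illustrative sketch of the kind of exposure entry a BI tool could commit back
# into a dbt project so that dashboards show up in dbt lineage and docs.
# The dashboard name, owner, URL, and referenced models are placeholders.
exposure = {
    "name": "revenue_dashboard",
    "type": "dashboard",
    "maturity": "high",
    "url": "https://app.whaly.io/dashboards/revenue",  # hypothetical dashboard URL
    "description": "Daily revenue overview used by the sales team.",
    "depends_on": ["ref('daily_revenue')", "ref('dim_customers')"],
    "owner": {"name": "Lenny", "email": "lenny@example.com"},
}

# Write a schema file that dbt will pick up on its next parse.
with open("models/exposures.yml", "w") as f:
    yaml.safe_dump({"version": 2, "exposures": [exposure]}, f, sort_keys=False)
```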

To learn more about the integration and how it can benefit your company, get in touch!

Try Whaly

Thousands of users rely on Whaly every day to monitor and improve their revenue. Join them now!
