
Quickstart for dbt Cloud and Databricks


    Introduction

    In this quickstart guide, you'll learn how to use dbt Cloud with Databricks. It will show you how to:

    • Create a Databricks workspace.
    • Load sample data into your Databricks account.
    • Connect dbt Cloud to Databricks.
    • Take a sample query and turn it into a model in your dbt project. A model in dbt is a select statement (see the minimal example after this list).
    • Add tests to your models.
    • Document your models.
    • Schedule a job to run.
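
    For instance, a minimal dbt model is nothing more than a SQL file containing a select statement. A sketch (the file name here is hypothetical, not part of this guide's project):

      -- models/my_first_model.sql (hypothetical example)
      select 1 as id
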
    Videos for you

    If you prefer learning through videos, you can take the dbt Fundamentals course for free.

    Prerequisites

    • You have a dbt Cloud account.
    • You have an account with a cloud service provider (such as AWS, GCP, or Azure) and permissions to create an S3 bucket with this account. For demonstration purposes, this guide uses AWS as the cloud service provider.

    Create a Databricks workspace

    1. Use your existing account or sign up for a Databricks account. Complete the form with your user information.

    2. For the purpose of this tutorial, you will select AWS as the cloud provider, but if you use Azure or GCP internally, choose one of those instead; the setup process will be similar.

    3. Check your email to complete the verification process.

    4. After setting up your password, you will be guided to choose a subscription plan. Select the Premium or Enterprise plan to access the SQL Compute functionality required for using the SQL warehouse for dbt. This tutorial uses the Premium plan. Click Continue after selecting your plan.

    5. Click Get Started when you reach the landing page, and then click Confirm after you validate that you have everything needed.

    6. Now it's time to create your first workspace. A Databricks workspace is an environment for accessing all of your Databricks assets; it organizes objects like notebooks, SQL warehouses, and clusters into one place. Provide a name for your workspace, choose the appropriate AWS region, and click Start Quickstart. You might see a checkbox labeled I have data in S3 that I want to query with Databricks; you do not need to check it for the purpose of this tutorial.

    7. After you click Start Quickstart, you will be redirected to AWS and asked to log in if you haven't already. After logging in, you should see a page similar to this.

    tip

    If you get a session error and don't get redirected to this page, you can go back to the Databricks UI and create a workspace from the interface: click Create workspaces, choose the quickstart, fill out the form, and click Start Quickstart.

    8. There is no need to change any of the pre-filled fields under Parameters. Just add your Databricks password under Databricks Account Credentials, check the Acknowledgement box, and click Create stack.

    9. Go back to the Databricks tab. You should see that your workspace is ready to use.

    10. Now let's jump into the workspace. Click Open and log into the workspace using the same credentials you used for the account.

    Load data

    1. Download these CSV files (the Jaffle Shop sample data) that you will need for this guide: jaffle_shop_customers.csv, jaffle_shop_orders.csv, and stripe_payments.csv.

    2. First, you need a SQL warehouse. Find the drop-down menu and toggle into the SQL space.

    3. Now set up a SQL warehouse. Select SQL Warehouses from the left-hand console. You will see that a default SQL warehouse exists.

    4. Click Start on the Starter Warehouse. It will take a few minutes to spin up the necessary resources.

    5. Once the SQL Warehouse is up, click New and then File upload on the dropdown menu.

    6. Let's load the Jaffle Shop Customers data first. Drop in the jaffle_shop_customers.csv file into the UI.

    7. Update the Table Attributes at the top:

      • data_catalog = hive_metastore
      • database = default
      • table = jaffle_shop_customers
      • Make sure that the column data types are correct. You can check them by hovering over the datatype icon next to each column name.
        • ID = bigint
        • FIRST_NAME = string
        • LAST_NAME = string
    8. Click Create on the bottom once you’re done.

    9. Now let’s do the same for Jaffle Shop Orders and Stripe Payments.

    10. Once that's done, make sure you can query the training data. Navigate to the SQL Editor through the left-hand menu. This will bring you to a query editor.

    11. Ensure that you can run a select * against each of the tables with the following code snippets.

      select * from default.jaffle_shop_customers
      select * from default.jaffle_shop_orders
      select * from default.stripe_payments
    12. To ensure that any users working on your dbt project have access to your objects, run this command.

      grant all privileges on schema default to users;
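
    If you'd rather verify the loads from SQL than from the UI, Databricks SQL's describe table and show grants commands give you a quick sanity check. A sketch, using the schema and tables created above:

      -- confirm the table exists and its column types match what you set in the loader
      describe table default.jaffle_shop_customers;

      -- confirm the privileges granted on the schema
      show grants on schema default;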

    Connect dbt Cloud to Databricks

    There are two ways to connect dbt Cloud to Databricks. The first option is Partner Connect, which provides a streamlined setup to create your dbt Cloud account from within your new Databricks trial account. The second option is to create your dbt Cloud account separately and build the Databricks connection yourself (connect manually). If you want to get started quickly, dbt Labs recommends using Partner Connect. If you want to customize your setup from the very beginning and gain familiarity with the dbt Cloud setup flow, dbt Labs recommends connecting manually.

    Set up the integration from Partner Connect

    note

    Partner Connect is intended for trial partner accounts. If your organization already has a dbt Cloud account, connect manually. Refer to Connect to dbt Cloud manually in the Databricks docs for instructions.

    To connect dbt Cloud to Databricks using Partner Connect, do the following:

    1. In the sidebar of your Databricks account, click Partner Connect.

    2. Click the dbt tile.

    3. Select a catalog from the drop-down list, and then click Next. The drop-down list displays catalogs you have read and write access to. If your workspace isn't Unity Catalog-enabled, the legacy Hive metastore (hive_metastore) is used.

    4. If there are SQL warehouses in your workspace, select a SQL warehouse from the drop-down list. If your SQL warehouse is stopped, click Start.

    5. If there are no SQL warehouses in your workspace:

      1. Click Create warehouse. A new tab opens in your browser that displays the New SQL Warehouse page in the Databricks SQL UI.
      2. Follow the steps in Create a SQL warehouse in the Databricks docs.
      3. Return to the Partner Connect tab in your browser, and then close the dbt tile.
      4. Re-open the dbt tile.
      5. Select the SQL warehouse you just created from the drop-down list.
    6. Select a schema from the drop-down list, and then click Add. The drop-down list displays schemas you have read and write access to. You can repeat this step to add multiple schemas.

      Partner Connect creates the following resources in your workspace:

      • A Databricks service principal named DBT_CLOUD_USER.
      • A Databricks personal access token that is associated with the DBT_CLOUD_USER service principal.

      Partner Connect also grants the following privileges to the DBT_CLOUD_USER service principal:

      • (Unity Catalog) USE CATALOG: Required to interact with objects within the selected catalog.
      • (Unity Catalog) USE SCHEMA: Required to interact with objects within the selected schema.
      • (Unity Catalog) CREATE SCHEMA: Grants the ability to create schemas in the selected catalog.
      • (Hive metastore) USAGE: Required to grant the SELECT and READ_METADATA privileges for the schemas you selected.
      • (Hive metastore) SELECT: Grants the ability to read the schemas you selected.
      • (Hive metastore) READ_METADATA: Grants the ability to read metadata for the schemas you selected.
      • CAN_USE: Grants permissions to use the SQL warehouse you selected.
    7. Click Next.

      The Email box displays the email address for your Databricks account. dbt Labs uses this email address to prompt you to create a trial dbt Cloud account.

    8. Click Connect to dbt Cloud.

      A new tab opens in your web browser, which displays the getdbt.com website.

    9. Complete the on-screen instructions on the getdbt.com website to create your trial dbt Cloud account.

    Set up a dbt Cloud managed repository

    When you develop in dbt Cloud, you can leverage Git to version control your code.

    To connect to a repository, you can either set up a dbt Cloud-hosted managed repository or directly connect to a supported git provider. Managed repositories are a great way to trial dbt without needing to create a new repository. In the long run, it's better to connect to a supported git provider to use features like automation and continuous integration.

    To set up a managed repository:

    1. Under "Setup a repository", select Managed.
    2. Type a name for your repo, such as bbaggins-dbt-quickstart.
    3. Click Create. It will take a few seconds for your repository to be created and imported.
    4. Once you see the "Successfully imported repository" message, click Continue.

    Initialize your dbt project and start developing

    Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:

    1. Click Start developing in the IDE. It might take a few minutes for your project to spin up for the first time as it establishes your git connection, clones your repo, and tests the connection to the warehouse.
    2. Above the file tree to the left, click Initialize dbt project. This builds out your folder structure with example models.
    3. Make your initial commit by clicking Commit and sync. Use the commit message initial commit and click Commit. This creates the first commit to your managed repo and allows you to open a branch where you can add new dbt code.
    4. You can now directly query data from your warehouse and execute dbt run. You can try this out now:
      • Click + Create new file, add this query to the new file, and click Save as to save the new file:
        select * from default.jaffle_shop_customers
      • In the command line bar at the bottom, enter dbt run and press Enter. You should see a dbt run succeeded message.

    Build your first model

    You have two options for working with files in the dbt Cloud IDE:

    • Create a new branch (recommended) — Create a new branch to edit and commit your changes. Navigate to Version Control on the left sidebar and click Create branch.
    • Edit in the protected primary branch — Edit, format, or lint files and execute dbt commands directly in your primary git branch. Because the dbt Cloud IDE prevents commits to the protected branch, you will be prompted to commit your changes to a new branch.

    Name the new branch add-customers-model.

    1. Click the ... next to the models directory, then select Create file.
    2. Name the file customers.sql, then click Create.
    3. Copy the following query into the file and click Save.
    with customers as (

        select
            id as customer_id,
            first_name,
            last_name

        from jaffle_shop_customers

    ),

    orders as (

        select
            id as order_id,
            user_id as customer_id,
            order_date,
            status

        from jaffle_shop_orders

    ),

    customer_orders as (

        select
            customer_id,

            min(order_date) as first_order_date,
            max(order_date) as most_recent_order_date,
            count(order_id) as number_of_orders

        from orders

        group by 1

    ),

    final as (

        select
            customers.customer_id,
            customers.first_name,
            customers.last_name,
            customer_orders.first_order_date,
            customer_orders.most_recent_order_date,
            coalesce(customer_orders.number_of_orders, 0) as number_of_orders

        from customers

        left join customer_orders using (customer_id)

    )

    select * from final
    4. Enter dbt run in the command prompt at the bottom of the screen. You should get a successful run and see the three models.

    Later, you can connect your business intelligence (BI) tools to these views and tables so they read cleaned-up data rather than raw data.

    FAQs

    How can I see the SQL that dbt is running?
    How did dbt choose which schema to build my models in?
    Do I need to create my target schema before running dbt?
    If I rerun dbt, will there be any downtime as models are rebuilt?
    What happens if the SQL in my query is bad or I get a database error?

    Change the way your model is materialized

    One of the most powerful features of dbt is that you can change the way a model is materialized in your warehouse simply by changing a configuration value. You can switch between tables and views by changing a keyword rather than writing the data definition language (DDL) yourself; dbt does this behind the scenes.

    By default, everything is created as a view. You can override that at the directory level so that everything in a given directory materializes differently.

    1. Edit your dbt_project.yml file.

      • Update your project name to:

        dbt_project.yml
        name: 'jaffle_shop'
      • Configure jaffle_shop so everything in it is materialized as a table, and configure example so everything in it is materialized as a view. Update your models config block to:

        dbt_project.yml
        models:
          jaffle_shop:
            +materialized: table
            example:
              +materialized: view
      • Click Save.

    2. Enter the dbt run command. Your customers model should now be built as a table!

      info

      To do this, dbt first ran a drop view statement (or an API call on BigQuery), then a create table as statement.
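
      Concretely, the generated statements look roughly like the following sketch. The exact DDL varies by adapter, and dbt_dev stands in for whatever development schema your connection targets:

        -- drop the old view, then rebuild the model as a table
        drop view if exists dbt_dev.customers;

        create table dbt_dev.customers as
        select * from default.jaffle_shop_customers;  -- stands in for the model's compiled select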

    3. Edit models/customers.sql to override the dbt_project.yml for the customers model only by adding the following snippet to the top, and click Save:

      models/customers.sql
      {{
        config(
          materialized='view'
        )
      }}

      with customers as (

          select
              id as customer_id
              ...

      )

    4. Enter the dbt run command. Your model, customers, should now build as a view.

      • BigQuery users need to run dbt run --full-refresh instead of dbt run to fully apply materialization changes.
    5. Enter the dbt run --full-refresh command for this to take effect in your warehouse.

    FAQs

    What materializations are available in dbt?
    Which materialization should I use for my model?
    What model configurations exist?

    Delete the example models

    You can now delete the files that dbt created when you initialized the project:

    1. Delete the models/example/ directory.

    2. Delete the example: key from your dbt_project.yml file, and any configurations that are listed under it.

      dbt_project.yml
      # before
      models:
        jaffle_shop:
          +materialized: table
          example:
            +materialized: view

      dbt_project.yml
      # after
      models:
        jaffle_shop:
          +materialized: table
    3. Save your changes.

    FAQs

    How do I remove deleted models from my data warehouse?
    I got an "unused model configurations" error message, what does this mean?

    Build models on top of other models

    As a best practice in SQL, you should separate logic that cleans up your data from logic that transforms your data. You have already started doing this in the existing query by using common table expressions (CTEs).

    Now you can experiment by separating the logic out into separate models and using the ref function to build models on top of other models:

    The DAG we want for our dbt project
    1. Create a new SQL file, models/stg_customers.sql, with the SQL from the customers CTE in your original query:

      models/stg_customers.sql
      select
          id as customer_id,
          first_name,
          last_name

      from jaffle_shop_customers

    2. Create a second new SQL file, models/stg_orders.sql, with the SQL from the orders CTE in your original query:

      models/stg_orders.sql
      select
          id as order_id,
          user_id as customer_id,
          order_date,
          status

      from jaffle_shop_orders
    3. Edit the SQL in your models/customers.sql file as follows:

      models/customers.sql
      with customers as (

          select * from {{ ref('stg_customers') }}

      ),

      orders as (

          select * from {{ ref('stg_orders') }}

      ),

      customer_orders as (

          select
              customer_id,

              min(order_date) as first_order_date,
              max(order_date) as most_recent_order_date,
              count(order_id) as number_of_orders

          from orders

          group by 1

      ),

      final as (

          select
              customers.customer_id,
              customers.first_name,
              customers.last_name,
              customer_orders.first_order_date,
              customer_orders.most_recent_order_date,
              coalesce(customer_orders.number_of_orders, 0) as number_of_orders

          from customers

          left join customer_orders using (customer_id)

      )

      select * from final

    4. Execute dbt run.

      This time, when you performed a dbt run, separate views/tables were created for stg_customers, stg_orders, and customers. dbt inferred the order in which to run these models: because customers depends on stg_customers and stg_orders, dbt builds customers last. You do not need to explicitly define these dependencies.
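
      Under the hood, each ref compiles to the fully qualified name of the relation dbt built for that model, which is how dbt knows the dependency order. For example, {{ ref('stg_customers') }} compiles to something like the following (dbt_dev is a hypothetical development schema; yours will differ):

        select * from dbt_dev.stg_customers

      A handy follow-on: dbt's selection syntax lets you build one part of the DAG at a time. For example, dbt run --select +customers builds customers and everything upstream of it.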

    FAQs

    How do I run one model at a time?
    Do ref-able resource names need to be unique?
    As I create more models, how should I keep my project organized? What should I name my models?

    Add tests to your models

    Adding tests to a project helps validate that your models are working correctly.

    To add tests to your project:

    1. Create a new YAML file in the models directory, named models/schema.yml.

    2. Add the following contents to the file:

      models/schema.yml
      version: 2

      models:
        - name: customers
          columns:
            - name: customer_id
              tests:
                - unique
                - not_null

        - name: stg_customers
          columns:
            - name: customer_id
              tests:
                - unique
                - not_null

        - name: stg_orders
          columns:
            - name: order_id
              tests:
                - unique
                - not_null
            - name: status
              tests:
                - accepted_values:
                    values: ['placed', 'shipped', 'completed', 'return_pending', 'returned']
            - name: customer_id
              tests:
                - not_null
                - relationships:
                    to: ref('stg_customers')
                    field: customer_id

    3. Run dbt test, and confirm that all your tests passed.

    When you run dbt test, dbt iterates through your YAML files and constructs a query for each test. Each query returns the number of records that fail the test; if this number is 0, the test passes.
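
    For example, the query dbt constructs for the unique test on customer_id is roughly equivalent to this sketch (dbt_dev stands in for your development schema):

      -- any rows returned are duplicated keys, which count as test failures
      select
          customer_id,
          count(*) as n_records

      from dbt_dev.customers

      group by customer_id
      having count(*) > 1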

    FAQs

    What tests are available for me to use in dbt? Can I add my own custom tests?
    How do I test one model at a time?
    One of my tests failed, how can I debug it?
    Does my test file need to be named `schema.yml`?
    Why do model and source yml files always start with `version: 2`?
    What tests should I add to my project?
    When should I run my tests?

    Document your models

    Adding documentation to your project allows you to describe your models in rich detail, and share that information with your team. Here, we're going to add some basic documentation to our project.

    1. Update your models/schema.yml file to include some descriptions, such as those below.

      models/schema.yml
      version: 2

      models:
        - name: customers
          description: One record per customer
          columns:
            - name: customer_id
              description: Primary key
              tests:
                - unique
                - not_null
            - name: first_order_date
              description: NULL when a customer has not yet placed an order.

        - name: stg_customers
          description: This model cleans up customer data
          columns:
            - name: customer_id
              description: Primary key
              tests:
                - unique
                - not_null

        - name: stg_orders
          description: This model cleans up order data
          columns:
            - name: order_id
              description: Primary key
              tests:
                - unique
                - not_null
            - name: status
              tests:
                - accepted_values:
                    values: ['placed', 'shipped', 'completed', 'return_pending', 'returned']
            - name: customer_id
              tests:
                - not_null
                - relationships:
                    to: ref('stg_customers')
                    field: customer_id
    2. Run dbt docs generate to generate the documentation for your project. dbt introspects your project and your warehouse to generate a JSON file with rich documentation about your project.

    3. Click the book icon in the Develop interface to launch documentation in a new tab.

    FAQs

    How do I write long-form explanations in my descriptions?
    How do I access documentation in dbt Explorer?

    Commit your changes

    Now that you've built your customer model, you need to commit the changes you made to the project so that the repository has your latest code.

    If you edited directly in the protected primary branch:

    1. Click the Commit and sync git button. This action prepares your changes for commit.
    2. A modal titled Commit to a new branch will appear.
    3. In the modal window, name your new branch add-customers-model. This branches off from your primary branch with your new changes.
    4. Add a commit message, such as "Add customers model, tests, docs," and commit your changes.
    5. Click Merge this branch to main to add these changes to the main branch on your repo.

    If you created a new branch before editing:

    1. Since you already branched out of the primary protected branch, go to Version Control on the left.
    2. Click Commit and sync.
    3. Add a commit message, such as "Add customers model, tests, docs."
    4. Click Merge this branch to main to add these changes to the main branch on your repo.

    Deploy dbt

    Use dbt Cloud's Scheduler to deploy your production jobs confidently and build observability into your processes. You'll learn to create a deployment environment and run a job in the following steps.

    Create a deployment environment

    1. In the upper left, select Deploy, then click Environments.
    2. Click Create Environment.
    3. In the Name field, write the name of your deployment environment. For example, "Production."
    4. In the dbt Version field, select the latest version from the dropdown.
    5. Under Deployment connection, enter the name of the dataset you want to use as the target, such as "Analytics". This will allow dbt to build and work with that dataset. For some data warehouses, the target dataset may be referred to as a "schema".
    6. Click Save.

    Create and run a job

    A job is a set of dbt commands that you want to run on a schedule, for example, dbt build.

    As the jaffle_shop business gains more customers, and those customers create more orders, you will see more records added to your source data. Because you materialized the customers model as a table, you'll need to periodically rebuild your table to ensure that the data stays up-to-date. This update will happen when you run a job.

    1. After creating your deployment environment, you should be directed to the page for a new environment. If not, select Deploy in the upper left, then click Jobs.
    2. Click Create one, provide a name such as "Production run," and link it to the Environment you just created.
    3. Scroll down to the Execution Settings section.
    4. Under Commands, add this command as part of your job if you don't see it:
      • dbt build
    5. Select the Generate docs on run checkbox to automatically generate updated project docs each time your job runs.
    6. For this exercise, do not set a schedule for your project to run — while your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as deploying a project.
    7. Select Save, then click Run now to run your job.
    8. Click the run and watch its progress under "Run history."
    9. Once the run is complete, click View Documentation to see the docs for your project.

    Congratulations 🎉! You've just deployed your first dbt project!

    FAQs

    What happens if one of my runs fails?