
Quickstart for dbt Cloud and Snowflake

In this quickstart guide, you'll learn how to use dbt Cloud with Snowflake. It will show you how to:

  • Create a new Snowflake worksheet.
  • Load sample data into your Snowflake account.
  • Connect dbt Cloud to Snowflake.
  • Take a sample query and turn it into a model in your dbt project. A model in dbt is a select statement.
  • Add tests to your models.
  • Document your models.
  • Schedule a job to run.

Snowflake also provides a quickstart for you to learn how to use dbt Cloud. It makes use of a different public dataset (Knoema Economy Data Atlas) than what's shown in this guide. For more information, refer to Accelerating Data Teams with dbt Cloud & Snowflake in the Snowflake docs.

Videos for you

You can check out dbt Fundamentals for free if you're interested in course learning with videos.

You can also watch the YouTube video on dbt and Snowflake.

Prerequisites

  • You have a dbt Cloud account.
  • You have a trial Snowflake account. During trial account creation, make sure to choose the Enterprise Snowflake edition so you have ACCOUNTADMIN access. For a full implementation, you should consider organizational questions when choosing a cloud provider. For more information, see Introduction to Cloud Platforms in the Snowflake docs. For the purposes of this setup, all cloud providers and regions will work so choose whichever you’d like.

Create a new Snowflake worksheet

  1. Log in to your trial Snowflake account.
  2. In the Snowflake UI, click + Worksheet in the upper right corner to create a new worksheet.

Load data

The data used here is stored as CSV files in a public S3 bucket. The following steps will guide you through preparing your Snowflake account for that data and loading it.

  1. Create a new virtual warehouse, two new databases (one for raw data, the other for future dbt development), and two new schemas (one for jaffle_shop data, the other for stripe data).

    To do this, run these SQL commands by typing them into the Editor of your new Snowflake worksheet and clicking Run in the upper right corner of the UI:

    create warehouse transforming;
    create database raw;
    create database analytics;
    create schema raw.jaffle_shop;
    create schema raw.stripe;
  2. In the raw database and jaffle_shop and stripe schemas, create three tables and load relevant data into them:

    • First, delete all contents in the Editor of the Snowflake worksheet. Then, run this SQL command to create the customers table:

      create table raw.jaffle_shop.customers
      ( id integer,
      first_name varchar,
      last_name varchar
      );
    • Delete all contents in the Editor, then run this command to load data into the customer table:

      copy into raw.jaffle_shop.customers (id, first_name, last_name)
      from 's3://dbt-tutorial-public/jaffle_shop_customers.csv'
      file_format = (
      type = 'CSV'
      field_delimiter = ','
      skip_header = 1
      );
    • Delete all contents in the Editor, then run this command to create the orders table:

      create table raw.jaffle_shop.orders
      ( id integer,
      user_id integer,
      order_date date,
      status varchar,
      _etl_loaded_at timestamp default current_timestamp
      );
    • Delete all contents in the Editor, then run this command to load data into the orders table:

      copy into raw.jaffle_shop.orders (id, user_id, order_date, status)
      from 's3://dbt-tutorial-public/jaffle_shop_orders.csv'
      file_format = (
      type = 'CSV'
      field_delimiter = ','
      skip_header = 1
      );
    • Delete all contents in the Editor, then run this command to create the payment table:

      create table raw.stripe.payment 
      ( id integer,
      orderid integer,
      paymentmethod varchar,
      status varchar,
      amount integer,
      created date,
      _batched_at timestamp default current_timestamp
      );
    • Delete all contents in the Editor, then run this command to load data into the payment table:

      copy into raw.stripe.payment (id, orderid, paymentmethod, status, amount, created)
      from 's3://dbt-tutorial-public/stripe_payments.csv'
      file_format = (
      type = 'CSV'
      field_delimiter = ','
      skip_header = 1
      );
  3. Verify that the data is loaded by running these SQL queries. Confirm that you can see output for each one.

    select * from raw.jaffle_shop.customers;
    select * from raw.jaffle_shop.orders;
    select * from raw.stripe.payment;
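
If you prefer a more compact confirmation than scanning three result grids, a quick object listing plus row counts is enough. This is an optional sanity check; the exact counts depend on the tutorial CSV files as published:

```sql
-- Optional: confirm the schemas exist and each table loaded at least one row.
show schemas in database raw;

select count(*) as customer_rows from raw.jaffle_shop.customers;
select count(*) as order_rows from raw.jaffle_shop.orders;
select count(*) as payment_rows from raw.stripe.payment;
```

Each count should be greater than zero if the copy into commands succeeded.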

Connect dbt Cloud to Snowflake

There are two ways to connect dbt Cloud to Snowflake. The first option is Partner Connect, which provides a streamlined setup to create your dbt Cloud account from within your new Snowflake trial account. The second option is to create your dbt Cloud account separately and build the Snowflake connection yourself (connect manually). If you want to get started quickly, dbt Labs recommends using Partner Connect. If you want to customize your setup from the very beginning and gain familiarity with the dbt Cloud setup flow, dbt Labs recommends connecting manually.

Using Partner Connect allows you to create a complete dbt account with your Snowflake connection, a managed repository, environments, and credentials.

  1. In the Snowflake UI, click on the home icon in the upper left corner. Click on your user, and then select Partner Connect. Find the dbt tile by scrolling or by searching for dbt in the search bar. Click the tile to connect to dbt.

    Snowflake Partner Connect Box

    If you’re using the classic version of the Snowflake UI, you can click the Partner Connect button in the top bar of your account. From there, click on the dbt tile to open up the connect box.

    Snowflake Classic UI - Partner Connect
  2. In the Connect to dbt popup, find the Optional Grant option and select the RAW and ANALYTICS databases. This will grant access for your new dbt user role to each database. Then, click Connect.

    Snowflake Classic UI - Connection Box
    Snowflake New UI - Connection Box
  3. Click Activate when a popup appears:

Snowflake Classic UI - Activation Window
Snowflake New UI - Activation Window
  4. After the new tab loads, you will see a form. If you already created a dbt Cloud account, you will be asked to provide an account name. If you haven't created an account, you will be asked to provide an account name and password.
dbt Cloud - Account Info
  5. After you have filled out the form and clicked Complete Registration, you will be logged into dbt Cloud automatically.

  6. From your Account Settings in dbt Cloud (using the gear menu in the upper right corner), choose the "Partner Connect Trial" project and select snowflake in the overview table. Select edit and update the fields Database and Warehouse to be analytics and transforming, respectively.

dbt Cloud - Snowflake Project Overview
dbt Cloud - Update Database and Warehouse

Set up a dbt Cloud managed repository

If you used Partner Connect, you can skip to initializing your dbt project because Partner Connect provides you with a managed repository. Otherwise, you will need to create your repository connection.

When you develop in dbt Cloud, you can leverage Git to version control your code.

To connect to a repository, you can either set up a dbt Cloud-hosted managed repository or directly connect to a supported git provider. Managed repositories are a great way to trial dbt without needing to create a new repository. In the long run, it's better to connect to a supported git provider to use features like automation and continuous integration.

To set up a managed repository:

  1. Under "Setup a repository", select Managed.
  2. Type a name for your repo, such as bbaggins-dbt-quickstart.
  3. Click Create. It will take a few seconds for your repository to be created and imported.
  4. Once you see the "Successfully imported repository," click Continue.

Initialize your dbt project and start developing

Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:

  1. Click Develop from the upper left. It might take a few minutes for your project to spin up for the first time as it establishes your git connection, clones your repo, and tests the connection to the warehouse.
  2. Above the file tree to the left, click Initialize your project. This builds out your folder structure with example models.
  3. Make your initial commit by clicking Commit. Use the commit message initial commit. This creates the first commit to your managed repo and allows you to open a branch where you can add new dbt code.
  4. You can now directly query data from your warehouse and execute dbt run. You can try this out now:
    • Click + Create new file, add this query to the new file, and click Save as to save the new file:
      select * from raw.jaffle_shop.customers
    • In the command line bar at the bottom, enter dbt run and press Enter. You should see a dbt run succeeded message.

Build your first model

  1. Click Develop from the upper left of dbt Cloud. You need to create a new branch since the main branch is set to read-only mode.
  2. Click Create branch. You can name it add-customers-model.
  3. Click the ... next to the Models directory, then select Create file.
  4. Name the file models/customers.sql, then click Create.
  5. Copy the following query into the file and click Save.
with customers as (

select
id as customer_id,
first_name,
last_name

from raw.jaffle_shop.customers

),

orders as (

select
id as order_id,
user_id as customer_id,
order_date,
status

from raw.jaffle_shop.orders

),

customer_orders as (

select
customer_id,

min(order_date) as first_order_date,
max(order_date) as most_recent_order_date,
count(order_id) as number_of_orders

from orders

group by 1

),

final as (

select
customers.customer_id,
customers.first_name,
customers.last_name,
customer_orders.first_order_date,
customer_orders.most_recent_order_date,
coalesce(customer_orders.number_of_orders, 0) as number_of_orders

from customers

left join customer_orders using (customer_id)

)

select * from final
  6. Enter dbt run in the command prompt at the bottom of the screen. You should get a successful run and see the three models.

Later, you can connect your business intelligence (BI) tools to these views and tables so they read cleaned-up data rather than raw data.
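
With the run complete, you can also query the model you just built directly from your Snowflake worksheet. The schema name below is an assumption: in development, dbt Cloud builds into a personal schema (often dbt_ followed by your username), so substitute whatever target schema your credentials use:

```sql
-- Query the built model; replace dbt_bbaggins with your development schema.
select *
from analytics.dbt_bbaggins.customers
order by number_of_orders desc
limit 10;
```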

Change the way your model is materialized

One of the most powerful features of dbt is that you can change the way a model is materialized in your warehouse simply by changing a configuration value. You can switch between tables and views by changing a keyword, rather than writing the data definition language (DDL) yourself; dbt runs the DDL behind the scenes.
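
To make the difference concrete, here is roughly the DDL each materialization corresponds to on Snowflake. This is an illustrative sketch with made-up object names, not dbt's exact compiled output:

```sql
-- materialized='view': only the query is stored; it re-runs on every read.
create or replace view analytics.customers_as_view as
    select * from raw.jaffle_shop.customers;

-- materialized='table': the query runs once and its results are stored.
create or replace table analytics.customers_as_table as
    select * from raw.jaffle_shop.customers;
```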

By default, everything gets created as a view. You can override that by materializing everything in jaffle_shop as a table. Everything in the example folder will still be materialized as a view.

  1. Edit your dbt_project.yml file.

    • Update your project name to:

      dbt_project.yml
      name: 'jaffle_shop'
    • Update your models config block to:

      dbt_project.yml
      models:
        jaffle_shop:
          +materialized: table
          example:
            +materialized: view
    • Click Save.

  2. Enter the dbt run command. Your customers model should now be built as a table!

    info

    To do this, dbt had to first run a drop view statement, then a create table as statement.

  3. Edit models/customers.sql to override the dbt_project.yml for the customers model only by adding the following snippet to the top, and click Save:

    models/customers.sql
    {{
      config(
        materialized='view'
      )
    }}

    with customers as (

    select
    id as customer_id
    ...

    )

  4. Enter the dbt run command. Your model, customers, should now be built as a view.

  5. Enter the dbt run --full-refresh command for this to take effect in your warehouse.


Delete the example models

You can now delete the files that dbt created when you initialized the project:

  1. Delete the models/example/ directory.

  2. Delete the example: key from your dbt_project.yml file, and any configurations that are listed under it.

    dbt_project.yml
    # before
    models:
      jaffle_shop:
        +materialized: table
        example:
          +materialized: view

    dbt_project.yml
    # after
    models:
      jaffle_shop:
        +materialized: table
  3. Save your changes.


Build models on top of other models

As a best practice in SQL, you should separate logic that cleans up your data from logic that transforms your data. You have already started doing this in the existing query by using common table expressions (CTEs).

Now you can experiment by separating the logic out into separate models and using the ref function to build models on top of other models:

The DAG we want for our dbt project
  1. Create a new SQL file, models/stg_customers.sql, with the SQL from the customers CTE in our original query.

  2. Create a second new SQL file, models/stg_orders.sql, with the SQL from the orders CTE in our original query.

    models/stg_customers.sql
    select
    id as customer_id,
    first_name,
    last_name

    from raw.jaffle_shop.customers
    models/stg_orders.sql
    select
    id as order_id,
    user_id as customer_id,
    order_date,
    status

    from raw.jaffle_shop.orders
  3. Edit the SQL in your models/customers.sql file as follows:

    models/customers.sql
    with customers as (

    select * from {{ ref('stg_customers') }}

    ),

    orders as (

    select * from {{ ref('stg_orders') }}

    ),

    customer_orders as (

    select
    customer_id,

    min(order_date) as first_order_date,
    max(order_date) as most_recent_order_date,
    count(order_id) as number_of_orders

    from orders

    group by 1

    ),

    final as (

    select
    customers.customer_id,
    customers.first_name,
    customers.last_name,
    customer_orders.first_order_date,
    customer_orders.most_recent_order_date,
    coalesce(customer_orders.number_of_orders, 0) as number_of_orders

    from customers

    left join customer_orders using (customer_id)

    )

    select * from final

  4. Execute dbt run

This time, when you performed a dbt run, separate views/tables were created for stg_customers, stg_orders and customers. dbt inferred the order to run these models. Because customers depends on stg_customers and stg_orders, dbt builds customers last. You do not need to explicitly define these dependencies.
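
Under the hood, each {{ ref('...') }} compiles to the fully qualified name of the referenced model, which is also how dbt infers the dependency order. Assuming the analytics database and a development schema named dbt_bbaggins (both assumptions; yours will differ), the compiled customers model begins something like:

```sql
-- Sketch of compiled output: refs resolved to real relations.
-- Database and schema names here are assumptions; substitute your own target.
with customers as (

    select * from analytics.dbt_bbaggins.stg_customers

),

orders as (

    select * from analytics.dbt_bbaggins.stg_orders

)

-- ...rest of the model unchanged
select * from customers
```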

FAQs

 As I create more models, how should I keep my project organized? What should I name my models?

Add tests to your models

Adding tests to a project helps validate that your models are working correctly.

To add tests to your project:

  1. Create a new YAML file in the models directory, named models/schema.yml

  2. Add the following contents to the file:

    models/schema.yml
    version: 2

    models:
      - name: customers
        columns:
          - name: customer_id
            tests:
              - unique
              - not_null

      - name: stg_customers
        columns:
          - name: customer_id
            tests:
              - unique
              - not_null

      - name: stg_orders
        columns:
          - name: order_id
            tests:
              - unique
              - not_null
          - name: status
            tests:
              - accepted_values:
                  values: ['placed', 'shipped', 'completed', 'return_pending', 'returned']
          - name: customer_id
            tests:
              - not_null
              - relationships:
                  to: ref('stg_customers')
                  field: customer_id

  3. Run dbt test, and confirm that all your tests passed.

When you run dbt test, dbt iterates through your YAML files, and constructs a query for each test. Each query will return the number of records that fail the test. If this number is 0, then the test is successful.
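
For example, a not_null test on customer_id compiles to a query shaped roughly like the following; dbt counts the rows it returns and passes the test when that count is 0. This is a sketch, not dbt's exact compiled SQL, and the schema name is an assumption:

```sql
-- Rows that violate the not_null expectation; 0 rows means the test passes.
select customer_id
from analytics.dbt_bbaggins.customers
where customer_id is null;
```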

FAQs

 What tests are available for me to use in dbt? Can I add my own custom tests?

 Does my test file need to be named `schema.yml`?

Document your models

Adding documentation to your project allows you to describe your models in rich detail, and share that information with your team. Here, we're going to add some basic documentation to our project.

  1. Update your models/schema.yml file to include some descriptions, such as those below.

    models/schema.yml
    version: 2

    models:
      - name: customers
        description: One record per customer
        columns:
          - name: customer_id
            description: Primary key
            tests:
              - unique
              - not_null
          - name: first_order_date
            description: NULL when a customer has not yet placed an order.

      - name: stg_customers
        description: This model cleans up customer data
        columns:
          - name: customer_id
            description: Primary key
            tests:
              - unique
              - not_null

      - name: stg_orders
        description: This model cleans up order data
        columns:
          - name: order_id
            description: Primary key
            tests:
              - unique
              - not_null
          - name: status
            tests:
              - accepted_values:
                  values: ['placed', 'shipped', 'completed', 'return_pending', 'returned']

  2. Run dbt docs generate to generate the documentation for your project. dbt introspects your project and your warehouse to generate a JSON file with rich documentation about your project.

  3. Click the book icon in the Develop interface to launch documentation in a new tab.


Commit your changes

Now that you've built your customer model, you need to commit the changes you made to the project so that the repository has your latest code.

  1. Click Commit and add a message. For example, "Add customers model, tests, docs."
  2. Click Merge to main to add these changes to the main branch on your repo.

Create a deployment environment

  1. In the upper left, select Deploy, then click Environments.
  2. Click Create Environment.
  3. Name your deployment environment. For example, "Production."
  4. Add a target dataset, for example, "Analytics." dbt will build into this dataset. For some warehouses this will be named "schema."
  5. Click Save.

Create and run a job

Jobs are a set of dbt commands that you want to run on a schedule. For example, dbt run and dbt test.

As the jaffle_shop business gains more customers, and those customers create more orders, you will see more records added to your source data. Because you materialized the customers model as a table, you'll need to periodically rebuild your table to ensure that the data stays up-to-date. This update will happen when you run a job.

  1. After creating your deployment environment, you should be directed to the page for your new environment. If not, select Deploy in the upper left, then click Jobs.
  2. Click Create one and provide a name, for example "Production run", and link it to the environment you just created.
  3. Scroll down to "Execution Settings" and select Generate docs on run.
  4. Under "Commands," add these commands as part of your job if you don't see them:
    • dbt run
    • dbt test
  5. For this exercise, do not set a schedule for your project to run. While your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as deploying a project.
  6. Select Save, then click Run now to run your job.
  7. Click the run and watch its progress under "Run history."
  8. Once the run is complete, click View Documentation to see the docs for your project.
tip

Congratulations 🎉! You've just deployed your first dbt project!
