
Vertica configurations

Configuration of Incremental Models

Using the on_schema_change config parameter

You can use the on_schema_change parameter with the values ignore, fail, and append_new_columns. The sync_all_columns value is not supported at this time.

Configuring the ignore (default) parameter

vertica_incremental.sql

{{ config(materialized='incremental', on_schema_change='ignore') }}

select * from {{ ref('seed_added') }}


Configuring the fail parameter

vertica_incremental.sql

{{ config(materialized='incremental', on_schema_change='fail') }}

select * from {{ ref('seed_added') }}


Configuring the append_new_columns parameter

vertica_incremental.sql

{{ config(materialized='incremental', on_schema_change='append_new_columns') }}

select * from public.seed_added


Using the incremental_strategy config parameter

The append strategy (default):

Insert new records without updating or overwriting any existing data. The append strategy only adds new records, based on the condition specified in the is_incremental() conditional block.

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='append') }}

select * from public.product_dimension

{% if is_incremental() %}
where product_key > (select max(product_key) from {{ this }})
{% endif %}

The merge strategy:

Match records based on a unique_key: update old records and insert new ones. (If no unique_key is specified, all new data is inserted, similar to append.) The unique_key config parameter is required for the merge strategy; it accepts a single table column.

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='merge', unique_key='promotion_key') }}

select * from public.promotion_dimension


Using the merge_update_columns config parameter

Pass the merge_update_columns config parameter to update only the specified columns. It accepts a list of table columns.

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='merge', unique_key='id', merge_update_columns=["names", "salary"]) }}

select * from {{ ref('seed_tc1') }}

The delete+insert strategy:

The delete+insert incremental strategy instructs dbt to use a two-step incremental approach: it first deletes the records detected through the configured is_incremental() block, then re-inserts them. The unique_key parameter is required for the delete+insert strategy and specifies how to update records when there is duplicate data; it accepts a single table column.

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='delete+insert', unique_key='date_key') }}

select * from public.date_dimension

The insert_overwrite strategy:

The insert_overwrite strategy does not use a full-table scan to delete records; instead, it drops entire partitions. This strategy accepts the optional partition_by_string and partitions parameters, which you provide when you want to overwrite only part of the table.

partition_by_string accepts an expression on which the table is partitioned; it corresponds to the PARTITION BY clause in Vertica.

partitions accepts a list of values in the partition column.

The partitions config parameter must be used carefully. Two situations to consider:

  • Fewer partitions in the partitions parameter than in the where clause: the destination table ends up with duplicates.
  • More partitions in the partitions parameter than in the where clause: the destination table ends up missing rows (fewer rows in the destination than in the source).
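To avoid both pitfalls, keep the partitions list and the where clause in sync. A minimal sketch, reusing the call_center_dimension table from the example later in this section (the matching year filter is an assumption for illustration):

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='insert_overwrite', partition_by_string='YEAR(cc_open_date)', partitions=['2023']) }}

select * from online_sales.call_center_dimension
where YEAR(cc_open_date) = 2023

Here the single partition listed in partitions is exactly the partition selected by the where clause, so no rows are duplicated or lost.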

To learn more about the PARTITION BY clause, see the Vertica documentation.

Note:

The partitions parameter is optional. If it is not provided, the partitions matched by the where clause are dropped from the destination and re-inserted from the source, so if you use a where clause you might not need the partitions parameter.

The where clause is also optional; if it is omitted, all data in the source is inserted into the destination.

If neither a where clause nor the partitions parameter is provided, all partitions are dropped from the table and inserted again.

If the partitions parameter is provided but no where clause is, the destination table ends up with duplicates: the listed partitions are dropped, but all data in the source table (no where clause) is inserted into the destination.

The partition_by_string config parameter is also optional. If it is not provided, the strategy behaves like delete+insert: it deletes all records from the destination and then inserts all records from the source, without using or dropping partitions.

If neither the partition_by_string nor the partitions parameter is provided, the insert_overwrite strategy truncates the target table and inserts the source data into it.

To use the partitions parameter, you must partition the table by passing the partition_by_string parameter.
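As a sketch of the where-clause-only form described above (same illustrative table; the year filter is an assumption), omitting partitions lets the where clause determine which partitions are dropped and re-inserted:

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='insert_overwrite', partition_by_string='YEAR(cc_open_date)') }}

select * from online_sales.call_center_dimension
where YEAR(cc_open_date) = 2023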

vertica_incremental.sql

{{ config(materialized='incremental', incremental_strategy='insert_overwrite', partition_by_string='YEAR(cc_open_date)', partitions=['2023']) }}

select * from online_sales.call_center_dimension

Optimization options for table materialization

There are multiple optimizations that can be applied when materializing models as tables. Each config parameter adds a Vertica-specific clause to the generated CREATE TABLE DDL.

For more information see Vertica options for table optimization.

You can configure these optimizations in your model SQL file as described in the examples below:

Configuring the ORDER BY clause

To leverage the ORDER BY clause of the CREATE TABLE statement, use the order_by config parameter in your model.

Using the order_by config parameter

vertica_incremental.sql

{{ config(materialized='table', order_by='product_key') }}

select * from public.product_dimension


Configuring the SEGMENTED BY clause

To leverage the SEGMENTED BY clause of the CREATE TABLE statement, use the segmented_by_string or segmented_by_all_nodes config parameter in your model. By default, tables are segmented across ALL NODES, so the ALL NODES clause is added to the SQL statement when you use the segmented_by_string config parameter. You can disable ALL NODES with the no_segmentation parameter.

To learn more about the SEGMENTED BY clause, see the Vertica documentation.

Using the segmented_by_string config parameter

The segmented_by_string config parameter can be used to segment projection data with a SQL expression, such as hash segmentation.

vertica_incremental.sql

{{ config(materialized='table', segmented_by_string='product_key') }}

select * from public.product_dimension

Using the segmented_by_all_nodes config parameter

The segmented_by_all_nodes config parameter can be used to segment projection data for distribution across all cluster nodes.

Note:

To pass the segmented_by_all_nodes parameter, you must also segment the table by passing the segmented_by_string parameter.

vertica_incremental.sql
        {{ config( materialized='table', segmented_by_string='product_key' ,segmented_by_all_nodes='True' )  }}  

select * from public.product_dimension


Configuring the UNSEGMENTED ALL NODES clause

To leverage the UNSEGMENTED ALL NODES clause of the CREATE TABLE statement, use the no_segmentation config parameter in your model.

Using the no_segmentation config parameter

vertica_incremental.sql

{{ config(materialized='table', no_segmentation='true') }}

select * from public.product_dimension

Configuring the PARTITION BY clause

To leverage the PARTITION BY clause of the CREATE TABLE statement, use the partition_by_string, partition_by_active_count or the partition_by_group_by_string config parameters in your model.

To learn more about the PARTITION BY clause, see the Vertica documentation.

Using the partition_by_string config parameter

partition_by_string (optional) accepts a string value naming a single column on which the table data is partitioned.

vertica_incremental.sql

{{ config(materialized='table', partition_by_string='employee_age') }}

select * from public.employee_dimension

Using the partition_by_active_count config parameter

partition_by_active_count (optional) specifies how many partitions are active for this table. It accepts an integer value.

Note:

To pass the partition_by_active_count parameter, you must also partition the table by passing the partition_by_string parameter.

vertica_incremental.sql

{{ config(materialized='table',
    partition_by_string='employee_age',
    partition_by_group_by_string="""
        CASE WHEN employee_age < 5 THEN 1
             WHEN employee_age > 50 THEN 2
             ELSE 3 END""",
    partition_by_active_count=2) }}

select * from public.employee_dimension


Using the partition_by_group_by_string config parameter

The partition_by_group_by_string parameter (optional) accepts a single string in which you specify the CASE expression for each group.

It is derived from the partition_by_string value and is used to merge partitions into separate partition groups.

Note:

If you want to pass partition_by_group_by_string parameter then you have to partition the table by passing partition_by_string parameter.

vertica_incremental.sql

{{ config(materialized='table',
    partition_by_string='number_of_children',
    partition_by_group_by_string="""
        CASE WHEN number_of_children <= 2 THEN 'small_family'
             ELSE 'big_family' END""") }}

select * from public.customer_dimension

Configuring the KSAFE clause

To leverage the KSAFE clause of the CREATE TABLE statement, use the ksafe config parameter in your model.

vertica_incremental.sql

{{ config(materialized='table', ksafe='1') }}

select * from public.product_dimension
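The optimization clauses above can also be combined in a single model config. A minimal sketch, assuming the clauses compose as in the individual examples (using the same sample table):

vertica_incremental.sql

{{ config(materialized='table',
    order_by='product_key',
    segmented_by_string='product_key',
    ksafe='1') }}

select * from public.product_dimension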

