When you query a clustered columnstore table in a dedicated SQL pool, the engine checks the minimum and maximum values stored in the metadata of each column segment. If a segment's value range cannot contain any rows matching the query's predicate, the engine skips reading that segment entirely (segment elimination). This speeds up the query because only segments that can contain relevant rows are read.
There are two ways to define a clustered columnstore index:
- The default, a (non-ordered) clustered columnstore index
- An ordered clustered columnstore index
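As a sketch, both variants can be declared at table creation time in a dedicated SQL pool. The table, column, and distribution choices below are illustrative assumptions; only the `WITH` options differ between the two forms.

```sql
-- Default (non-ordered) clustered columnstore index
CREATE TABLE dbo.Sales_CCI
(
    SaleID   INT           NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(SaleID),
    CLUSTERED COLUMNSTORE INDEX
);

-- Ordered clustered columnstore index, sorted on SaleDate
CREATE TABLE dbo.Sales_OrderedCCI
(
    SaleID   INT           NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(SaleID),
    CLUSTERED COLUMNSTORE INDEX ORDER (SaleDate)
);
```
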
In Azure Synapse Dedicated SQL Pool, both ordered and non-ordered clustered columnstore indexes (CCIs) offer data compression and improved query performance compared to rowstore indexes. However, they have distinct characteristics and use cases:
Non-ordered clustered columnstore index (CCI)
- Data Storage: Stores data segments compressed based on data types and dictionary encoding. Segments are not arranged in any specific order.
- Filtering Efficiency: Less efficient for queries with predicates on the indexed columns, because segments are not sorted; their min/max value ranges tend to overlap, so fewer segments can be eliminated and more of the index must be scanned.
- Data Loading: Faster to load data into, as no sorting is required before compression.
- Update/Delete Performance: Updates and deletes can be slower due to segment merging required to maintain compressed structure.
- Suitable for: Wide tables with many columns, where most queries filter on non-indexed columns or involve aggregations.
Ordered clustered columnstore index (CCI):
- Data Storage: Similar to non-ordered CCI, but data segments are sorted based on specific columns (order keys).
- Filtering Efficiency: Highly efficient for queries with predicates on the order key columns, as only relevant segments need to be scanned.
- Data Loading: Slower to load data into due to the required sorting before compression.
- Update/Delete Performance: Similar to non-ordered CCI, with potential impact due to segment merging.
- Suitable for: Analytical workloads with frequent range filters or sorting on the order key columns.
Key Differences:
- Filtering Efficiency: Ordered CCIs excel for queries with filters on order key columns, while non-ordered CCIs are less efficient.
- Data Loading: Non-ordered CCIs are faster to load data into, while ordered CCIs require sorting beforehand, which slows loading.
- Update/Delete Performance: Both require similar maintenance overhead, potentially impacting performance.
- Use Cases: Choose ordered CCIs for range filtering and sorting, while non-ordered CCIs are suitable for aggregations and filtering on other columns.
Additional Considerations:
- Index creation cost: Ordered CCIs might have a higher creation cost due to sorting.
- Memory usage: Sorting might require more memory during index creation.
- Data distribution: Ensure even data distribution on order key columns for optimal performance in ordered CCIs.
Choosing the Right Index:
The choice depends on your specific workload characteristics and query patterns. Analyze your queries to identify frequently used filters and sorting requirements.
Remember that testing and performance analysis are crucial to determine the best CCI type for your specific scenario.
Query performance with an ordered CCI
A query’s performance gain from an ordered CCI depends on the query patterns, the size of data, how well the data is sorted, the physical structure of segments, and the DWU and resource class chosen for the query execution. Users should review all these factors before choosing the ordering columns when designing an ordered CCI table.
Queries with all these patterns typically run faster with an ordered CCI:
- The queries have equality, inequality, or range predicates
- The predicate columns and the ordered CCI columns are the same.
In this example, table T1 has a clustered columnstore index ordered in the sequence of Col_C, Col_B, and Col_A.
CREATE CLUSTERED COLUMNSTORE INDEX MyOrderedCCI ON T1
ORDER (Col_C, Col_B, Col_A);
The performance of query 1 and query 2 can benefit more from ordered CCI than the other queries, as they reference all the ordered CCI columns.
-- Query #1:
SELECT * FROM T1 WHERE Col_C = 'c' AND Col_B = 'b' AND Col_A = 'a';
-- Query #2
SELECT * FROM T1 WHERE Col_B = 'b' AND Col_C = 'c' AND Col_A = 'a';
-- Query #3
SELECT * FROM T1 WHERE Col_B = 'b' AND Col_A = 'a';
-- Query #4
SELECT * FROM T1 WHERE Col_A = 'a' AND Col_C = 'c';
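One way to compare the queries above, as a sketch: dedicated SQL pools let you prefix a statement with EXPLAIN to return the distributed query plan as XML without executing the query, so you can inspect how each predicate is handled.

```sql
-- Returns the query plan as XML without running the query.
EXPLAIN
SELECT * FROM T1 WHERE Col_C = 'c' AND Col_B = 'b' AND Col_A = 'a';
```
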
Data loading performance
The performance of loading data into an ordered CCI table is similar to loading a partitioned table. Loading into an ordered CCI table can take longer than into a non-ordered CCI table because of the sort operation; however, queries can run faster afterwards with an ordered CCI.
Example performance comparison of loading data into tables with different schemas.
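As an illustrative sketch, data can be loaded into a new ordered CCI table in one step with CTAS; the table names, distribution column, and ORDER column below are assumptions, and the sort on the ORDER column happens during the load.

```sql
-- CTAS into a new ordered CCI table; rows are sorted on OrderDate as part of the load.
CREATE TABLE dbo.FactSales_Ordered
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX ORDER (OrderDate)
)
AS
SELECT CustomerKey, OrderDate, SalesAmount
FROM dbo.FactSales_Staging;
```
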
How to create an ordered CCI on large tables
Creating an ordered CCI is an offline operation. For tables with no partitions, the data won’t be accessible to users until the ordered CCI creation process completes. For partitioned tables, since the engine creates the ordered CCI partition by partition, users can still access the data in partitions where ordered CCI creation isn’t in process. You can use this option to minimize the downtime during ordered CCI creation on large tables:
1. Create partitions on the target large table (called Table_A).
2. Create an empty ordered CCI table (called Table_B) with the same table and partition schema as Table_A.
3. Switch one partition from Table_A to Table_B.
4. Run ALTER INDEX <Ordered_CCI_Index> ON <Table_B> REBUILD PARTITION = <Partition_ID> to rebuild the switched-in partition on Table_B.
5. Repeat steps 3 and 4 for each partition in Table_A.
6. Once all partitions are switched from Table_A to Table_B and have been rebuilt, drop Table_A and rename Table_B to Table_A.
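The switch-and-rebuild steps above can be sketched in T-SQL for a single partition; the index name MyOrderedCCI and partition number 1 are illustrative assumptions, and the switch/rebuild pair is repeated for each partition.

```sql
-- Step 3: switch partition 1 from Table_A into the empty ordered CCI table Table_B
ALTER TABLE dbo.Table_A SWITCH PARTITION 1 TO dbo.Table_B PARTITION 1;

-- Step 4: rebuild the switched-in partition so its data is sorted
ALTER INDEX MyOrderedCCI ON dbo.Table_B REBUILD PARTITION = 1;

-- Step 6 (after every partition has been switched and rebuilt):
DROP TABLE dbo.Table_A;
RENAME OBJECT dbo.Table_B TO Table_A;
```
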
Note:
For a dedicated SQL pool table with an ordered CCI, ALTER INDEX REBUILD re-sorts the data using tempdb. Monitor tempdb during rebuild operations. If you need more tempdb space, scale up the pool, and scale back down once the index rebuild is complete.
For a dedicated SQL pool table with an ordered CCI, ALTER INDEX REORGANIZE does not re-sort the data. To re-sort the data, use ALTER INDEX REBUILD.
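For reference, the two maintenance statements look like this; the table and index names are illustrative.

```sql
-- Compresses eligible row groups in place; does NOT re-sort an ordered CCI
ALTER INDEX MyOrderedCCI ON dbo.T1 REORGANIZE;

-- Rebuilds the index and re-sorts the data (uses tempdb for the sort)
ALTER INDEX MyOrderedCCI ON dbo.T1 REBUILD;

-- Rebuild all indexes on the table
ALTER INDEX ALL ON dbo.T1 REBUILD;
```
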
Optimizing clustered columnstore indexes
The view below can be created on your system to compute the average rows per row group and identify suboptimal clustered columnstore indexes. The last column of the view generates a SQL statement that can be used to rebuild your indexes.
CREATE VIEW dbo.vColumnstoreDensity
AS
SELECT
GETDATE() AS [execution_date]
, DB_Name() AS [database_name]
, s.name AS [schema_name]
, t.name AS [table_name]
, COUNT(DISTINCT rg.[partition_number]) AS [table_partition_count]
, SUM(rg.[total_rows]) AS [row_count_total]
, SUM(rg.[total_rows])/COUNT(DISTINCT rg.[distribution_id]) AS [row_count_per_distribution_MAX]
, CEILING ((SUM(rg.[total_rows])*1.0/COUNT(DISTINCT rg.[distribution_id]))/1048576) AS [rowgroup_per_distribution_MAX]
, SUM(CASE WHEN rg.[State] = 0 THEN 1 ELSE 0 END) AS [INVISIBLE_rowgroup_count]
, SUM(CASE WHEN rg.[State] = 0 THEN rg.[total_rows] ELSE 0 END) AS [INVISIBLE_rowgroup_rows]
, MIN(CASE WHEN rg.[State] = 0 THEN rg.[total_rows] ELSE NULL END) AS [INVISIBLE_rowgroup_rows_MIN]
, MAX(CASE WHEN rg.[State] = 0 THEN rg.[total_rows] ELSE NULL END) AS [INVISIBLE_rowgroup_rows_MAX]
, AVG(CASE WHEN rg.[State] = 0 THEN rg.[total_rows] ELSE NULL END) AS [INVISIBLE_rowgroup_rows_AVG]
, SUM(CASE WHEN rg.[State] = 1 THEN 1 ELSE 0 END) AS [OPEN_rowgroup_count]
, SUM(CASE WHEN rg.[State] = 1 THEN rg.[total_rows] ELSE 0 END) AS [OPEN_rowgroup_rows]
, MIN(CASE WHEN rg.[State] = 1 THEN rg.[total_rows] ELSE NULL END) AS [OPEN_rowgroup_rows_MIN]
, MAX(CASE WHEN rg.[State] = 1 THEN rg.[total_rows] ELSE NULL END) AS [OPEN_rowgroup_rows_MAX]
, AVG(CASE WHEN rg.[State] = 1 THEN rg.[total_rows] ELSE NULL END) AS [OPEN_rowgroup_rows_AVG]
, SUM(CASE WHEN rg.[State] = 2 THEN 1 ELSE 0 END) AS [CLOSED_rowgroup_count]
, SUM(CASE WHEN rg.[State] = 2 THEN rg.[total_rows] ELSE 0 END) AS [CLOSED_rowgroup_rows]
, MIN(CASE WHEN rg.[State] = 2 THEN rg.[total_rows] ELSE NULL END) AS [CLOSED_rowgroup_rows_MIN]
, MAX(CASE WHEN rg.[State] = 2 THEN rg.[total_rows] ELSE NULL END) AS [CLOSED_rowgroup_rows_MAX]
, AVG(CASE WHEN rg.[State] = 2 THEN rg.[total_rows] ELSE NULL END) AS [CLOSED_rowgroup_rows_AVG]
, SUM(CASE WHEN rg.[State] = 3 THEN 1 ELSE 0 END) AS [COMPRESSED_rowgroup_count]
, SUM(CASE WHEN rg.[State] = 3 THEN rg.[total_rows] ELSE 0 END) AS [COMPRESSED_rowgroup_rows]
, SUM(CASE WHEN rg.[State] = 3 THEN rg.[deleted_rows] ELSE 0 END) AS [COMPRESSED_rowgroup_rows_DELETED]
, MIN(CASE WHEN rg.[State] = 3 THEN rg.[total_rows] ELSE NULL END) AS [COMPRESSED_rowgroup_rows_MIN]
, MAX(CASE WHEN rg.[State] = 3 THEN rg.[total_rows] ELSE NULL END) AS [COMPRESSED_rowgroup_rows_MAX]
, AVG(CASE WHEN rg.[State] = 3 THEN rg.[total_rows] ELSE NULL END) AS [COMPRESSED_rowgroup_rows_AVG]
, 'ALTER INDEX ALL ON ' + s.name + '.' + t.NAME + ' REBUILD;' AS [Rebuild_Index_SQL]
FROM sys.[pdw_nodes_column_store_row_groups] rg
JOIN sys.[pdw_nodes_tables] nt ON rg.[object_id] = nt.[object_id]
AND rg.[pdw_node_id] = nt.[pdw_node_id]
AND rg.[distribution_id] = nt.[distribution_id]
JOIN sys.[pdw_table_mappings] mp ON nt.[name] = mp.[physical_name]
JOIN sys.[tables] t ON mp.[object_id] = t.[object_id]
JOIN sys.[schemas] s ON t.[schema_id] = s.[schema_id]
GROUP BY
s.[name]
, t.[name];
Now that you have created the view, run this query to identify tables whose row groups average fewer than 100,000 rows. You may want to raise the 100 K threshold if you are looking for better segment quality.
SELECT *
FROM [dbo].[vColumnstoreDensity]
WHERE COMPRESSED_rowgroup_rows_AVG < 100000
OR INVISIBLE_rowgroup_rows_AVG < 100000;
Column | How to use this data |
---|---|
[table_partition_count] | If the table is partitioned, then you may expect to see higher Open row group counts. Each partition in the distribution could in theory have an open row group associated with it. Factor this into your analysis. A small table that has been partitioned could be optimized by removing the partitioning altogether as this would improve compression. |
[row_count_total] | Total row count for the table. For example, you can use this value to calculate percentage of rows in the compressed state. |
[row_count_per_distribution_MAX] | If all rows are evenly distributed this value would be the target number of rows per distribution. Compare this value with the compressed_rowgroup_count. |
[COMPRESSED_rowgroup_rows] | Total number of rows in columnstore format for the table. |
[COMPRESSED_rowgroup_rows_AVG] | If the average number of rows is significantly less than the maximum number of rows for a row group, then consider using CTAS or ALTER INDEX REBUILD to recompress the data. |
[COMPRESSED_rowgroup_count] | Number of row groups in columnstore format. If this number is very high in relation to the table, it is an indicator that the columnstore density is low. |
[COMPRESSED_rowgroup_rows_DELETED] | Rows are logically deleted in columnstore format. If the number is high relative to table size, consider recreating the partition or rebuilding the index as this removes them physically. |
[COMPRESSED_rowgroup_rows_MIN] | Use this with the AVG and MAX columns to understand the range of values for the row groups in your columnstore. A low number over the load threshold (102,400 per partition aligned distribution) suggests that optimizations are available in the data load |
[COMPRESSED_rowgroup_rows_MAX] | As above |
[OPEN_rowgroup_count] | Open row groups are normal. One would reasonably expect one OPEN row group per table distribution (60). Excessive numbers suggest data loading across partitions. Double check the partitioning strategy to make sure it is sound |
[OPEN_rowgroup_rows] | Each row group can have 1,048,576 rows in it as a maximum. Use this value to see how full the open row groups are currently |
[OPEN_rowgroup_rows_MIN] | Open groups indicate that data is either being trickle loaded into the table or that the previous load spilled the remaining rows into this row group. Use the MIN, MAX, and AVG columns to see how much data sits in OPEN row groups. For small tables, it could be 100% of the data! In that case, run ALTER INDEX REBUILD to force the data into columnstore format. |
[OPEN_rowgroup_rows_MAX] | As above |
[OPEN_rowgroup_rows_AVG] | As above |
[CLOSED_rowgroup_rows] | Look at the closed row group rows as a sanity check. |
[CLOSED_rowgroup_count] | The number of closed row groups should be low if any are seen at all. Closed row groups can be converted to compressed row groups using the ALTER INDEX … REORGANIZE command. However, this is not normally required. Closed groups are automatically converted to columnstore row groups by the background “tuple mover” process. |
[CLOSED_rowgroup_rows_MIN] | Closed row groups should have a very high fill rate. If the fill rate for a closed row group is low, then further analysis of the columnstore is required. |
[CLOSED_rowgroup_rows_MAX] | As above |
[CLOSED_rowgroup_rows_AVG] | As above |
[Rebuild_Index_SQL] | SQL to rebuild columnstore index for a table |
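The [Rebuild_Index_SQL] column already contains the statement for each table, so acting on the view's output is straightforward. As a sketch (the table name below is hypothetical), retrieve the generated statement for a suboptimal table and run it in a separate batch:

```sql
-- Retrieve the generated rebuild statement for one suboptimal table
SELECT [Rebuild_Index_SQL]
FROM [dbo].[vColumnstoreDensity]
WHERE [table_name] = 'FactInternetSales'
  AND [COMPRESSED_rowgroup_rows_AVG] < 100000;

-- Example of what the generated statement looks like:
-- ALTER INDEX ALL ON dbo.FactInternetSales REBUILD;
```
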