Databases and query engines, including Amazon Redshift, typically rely on various statistics about the underlying data to determine the best way to execute a query, such as the number of distinct values and which values have low selectivity. When Amazon Redshift receives a query, such as
SELECT insert_date, sum(sales)
FROM receipts
WHERE insert_date BETWEEN '2024-12-01' AND '2024-12-31'
GROUP BY insert_date
the query planner uses statistics to make an educated guess at the best method to load and process data from storage. More statistics about the underlying data can often help a query planner choose a plan that leads to the best query performance, but this can require a tradeoff among the costs of computing, storing, and maintaining statistics, and can add query planning time.
Data lakes are a powerful architecture for organizing data for analytical processing, because they let developers use efficient analytical columnar formats like Apache Parquet, while letting them continue to modify the shape of their data as their applications evolve with open table formats like Apache Iceberg. One challenge with data lakes is that they don't always have statistics about their underlying data, making it difficult for query engines to determine the optimal execution path. This can lead to issues, including slow queries and unexpected changes in query performance.
In 2024, Amazon Redshift customers queried over 77 EB (exabytes) of data residing in data lakes. Given this usage, the Amazon Redshift team works to innovate on data lake query performance to help customers efficiently access their open data and get near real-time insights for critical business decisions. In 2024, Amazon Redshift launched several features that improve query performance for data lakes, including faster query times when a data lake doesn't have statistics. With Amazon Redshift patch 190, the TPC-DS 3TB benchmark showed an overall 2x query performance improvement on Apache Iceberg tables without statistics, including TPC-DS Query #72, which improved by 125 times, from 690 seconds to 5.5 seconds.
In this post, we first briefly review how planner statistics are collected and what impact they have on queries. Then, we discuss Amazon Redshift features that deliver optimal plans on Iceberg tables and Parquet data even in the absence of statistics. Finally, we review some example queries that now execute faster thanks to these latest Amazon Redshift innovations.
Prerequisites
The benchmarks in this post were run using the following environment:
Amazon Redshift Serverless with a base capacity of 88 RPUs (Redshift Processing Units)
The Cloud Data Warehouse Benchmark derived from the TPC-DS 3TB dataset. The following tables were partitioned in this dataset (the rest were unpartitioned):
catalog_returns on cr_returned_date_sk
catalog_sales on cs_sold_date_sk
store_returns on sr_returned_date_sk
store_sales on ss_sold_date_sk
web_returns on wr_returned_date_sk
web_sales on ws_sold_date_sk
inventory on inv_date_sk
For more information on loading the Cloud Data Warehouse Benchmark into your Amazon Redshift Serverless workgroup, see the Cloud Data Warehouse Benchmark documentation.
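If you want to point your workgroup at an existing copy of the benchmark data in Amazon S3, one option is to create an external schema over an AWS Glue Data Catalog database. The following is a minimal sketch rather than a required step; the schema name, catalog database name, and IAM role ARN are placeholders to replace with your own:

-- Register a Glue Data Catalog database (placeholder names) as an external schema.
CREATE EXTERNAL SCHEMA IF NOT EXISTS tpcds_3tb
FROM DATA CATALOG
DATABASE 'tpcds3tb'
IAM_ROLE 'arn:aws:iam::111122223333:role/MyRedshiftDataLakeRole';

-- Confirm the benchmark tables are visible through the external schema.
SELECT schemaname, tablename
FROM svv_external_tables
WHERE schemaname = 'tpcds_3tb';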
Now, let's review how database statistics work and how they impact query performance.
Overview of the impact of planner statistics on query performance
To understand why database statistics are important, let's first review what a query planner does. A query planner is the brain of a database: when you send a query to a database, the query planner must determine the most efficient way to load and compute all the information required to answer the query. Having information about the underlying dataset, such as statistics about the number of rows or the distribution of data, can help the query planner generate an optimal plan for retrieving the data. Amazon Redshift uses statistics about the underlying data in tables and columns to determine how to build an optimal query execution path.
Let's see how this works in an example. Consider the following query to determine the top five sales dates in December 2024 for stores in North America:
SELECT insert_date, sum(sales) AS total_sales
FROM receipts
JOIN stores ON stores.id = receipts.store_id
WHERE
  stores.region = 'NAMER' AND
  receipts.insert_date BETWEEN '2024-12-01' AND '2024-12-31'
GROUP BY receipts.insert_date
ORDER BY total_sales DESC
LIMIT 5;
In this query, the query planner has to consider several factors, including:
Which table is larger, stores or receipts? Am I able to query the smaller table first to reduce the amount of searching on the larger table?
Which returns more rows, receipts.insert_date BETWEEN '2024-12-01' AND '2024-12-31' or stores.region = 'NAMER'?
Is there any partitioning on the tables? Can I search over a smaller set of data to speed up the query?
Having information about the underlying data can help generate an optimal query plan. For example, stores.region = 'NAMER' might only return a few rows (that is, it's highly selective), meaning it's more efficient to execute that step of the query first before filtering through the receipts table. What helps a query planner make this decision is the statistics available on columns and tables.
Table statistics (also known as planner statistics) provide a snapshot of the data available in a table to help the query planner make an informed decision on execution strategies. Databases collect table statistics through sampling, which involves reviewing a subset of rows to determine the overall distribution of data. The quality of statistics, including the freshness of the data, can significantly impact a query plan, which is why databases reanalyze and regenerate statistics after a certain threshold of the underlying data changes.
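In Amazon Redshift, for example, you can refresh planner statistics on a local table with the ANALYZE command and check their staleness through the SVV_TABLE_INFO system view. Here's a minimal sketch, reusing the hypothetical receipts table from the earlier example:

-- Recompute planner statistics by sampling the table's data.
ANALYZE receipts;

-- stats_off reports how stale statistics are: 0 means current, 100 means missing or fully stale.
SELECT "table", stats_off
FROM svv_table_info
WHERE "table" = 'receipts';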
Amazon Redshift supports several table- and column-level statistics to assist in building query plans. These include:
| Statistic | What it is | Impact | Query plan influence |
| --- | --- | --- | --- |
| Number of rows (numrows) | Number of rows in a table | Estimates the overall size of query results and JOIN sizes | Decisions on JOIN ordering and algorithms, and resource allocation |
| Number of distinct values (NDV) | Number of unique values in a column | Estimates selectivity, that is, how many rows will be returned from predicates (for example, a WHERE clause) and the size of JOIN results | Decisions on JOIN ordering and algorithms |
| NULL count | Number of NULL values in a column | Estimates the number of rows eliminated by IS NULL or IS NOT NULL | Decisions on filter pushdown (that is, which nodes execute a query) and JOIN strategies |
| Min/max values | Smallest and largest values in a column | Helps range-based optimizations (for example, WHERE x BETWEEN 10 AND 20) | Decisions on JOIN order and algorithms, and resource allocation |
| Column size | Total size of column data in memory | Estimates the overall size of scans (reading data), JOINs, and query results | Decisions on JOIN algorithms and ordering |
Open formats such as Apache Parquet have none of the preceding statistics by default, and table formats like Apache Iceberg have a subset of the preceding statistics, such as number of rows, NULL count, and min/max values. This can make it challenging for query engines to plan efficient queries. Amazon Redshift has added innovations that improve overall query performance on data lake data stored in Apache Iceberg and Apache Parquet formats, even when all or some table- or column-level statistics are unavailable. The next section reviews features in Amazon Redshift that help improve query performance on data lakes even when table statistics aren't present or are limited.
Amazon Redshift features when data lakes don't have statistics for Iceberg tables and Parquet
As mentioned previously, there are many cases where tables stored in data lakes lack statistics, which makes it challenging for query engines to make informed decisions when choosing the best query plan. However, Amazon Redshift has introduced a series of innovations that improve performance for queries on data lakes even when table statistics aren't available. In this section, we review some of these improvements and how they impact your query performance.
Dynamic partition elimination through distributed joins
Dynamic partition elimination is a query optimization technique that allows Amazon Redshift to skip reading unnecessary data during query execution on a partitioned table. It does this by identifying which partitions of a table are relevant to a query and scanning only those partitions, significantly reducing the amount of data that needs to be processed.
For example, consider a schema that has two tables:
sales (fact table) with columns:
sale_id
product_id
sale_amount
sale_date
products (dimension table) with columns:
product_id
product_name
category
The sales table is partitioned by product_id. In the following example, you want to find the total sales amount for products in the Electronics category in December 2024.
SQL query:
SELECT SUM(s.sale_amount)
FROM sales s
JOIN products p ON s.product_id = p.product_id
WHERE p.category = 'Electronics';
How Amazon Redshift improves this query:
Filter on dimension table:
The query filters the products table to only include products in the Electronics category.
Identify relevant partitions:
With the new improvements, Amazon Redshift analyzes this filter and determines which partitions of the sales table need to be scanned.
It looks at the product_id values in the products table that match the Electronics category and only scans those specific partitions of the sales table.
Instead of scanning the entire sales table, Amazon Redshift scans only the partitions that contain sales data for electronics products.
This significantly reduces the amount of data Amazon Redshift needs to process, making the query faster.
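If you want to observe this on your own tables, you can prefix the query with EXPLAIN to inspect the plan Amazon Redshift chooses. The operators you see and how much data gets skipped depend on your partition layout and statistics, so treat this as an illustrative check rather than a guaranteed plan shape:

-- Show the chosen plan; compare the estimated rows scanned on the
-- partitioned sales table with and without the dimension filter.
EXPLAIN
SELECT SUM(s.sale_amount)
FROM sales s
JOIN products p ON s.product_id = p.product_id
WHERE p.category = 'Electronics';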
Previously, this optimization was only applied to broadcast joins when all child joins below the join were also broadcast joins. The Amazon Redshift team extended this capability to work on all broadcast joins, regardless of whether the child joins below them are broadcast. This allows more queries to benefit from dynamic partition elimination, such as TPC-DS Q64 and Q75 for Iceberg tables, and TPC-DS Q25 in Parquet.
Metadata caching for Iceberg tables
The Iceberg open table format employs a two-layer structure: a metadata layer and a data layer. The metadata layer has three levels of files (metadata.json, manifest lists, and manifests), which allows for performance features such as faster scan planning and advanced data filtering. Amazon Redshift uses the Iceberg metadata structure to efficiently identify the relevant data files to scan, using partition value ranges and column-level statistics and eliminating unnecessary data processing.
The Amazon Redshift team observed that Iceberg metadata is frequently fetched multiple times both within and across queries, leading to potential performance bottlenecks. We implemented an in-memory LRU (least recently used) cache for parsed metadata, manifest list files, and manifest files. This cache retains the most recently used metadata so that we avoid fetching it repeatedly from Amazon Simple Storage Service (Amazon S3) during queries. This caching has helped with overall performance improvements of up to 2% in a TPC-DS 3TB workload. We observe more than 90% cache hits for these metadata structures, considerably reducing Iceberg metadata processing times.
Stats inference for Iceberg tables
As mentioned previously, the Apache Iceberg table format comes with some statistics, such as number of rows, number of nulls, column min/max values, and column storage size, in the metadata files called manifest files. However, they don't always provide all the statistics that we need, especially average column width, which is important for the cost-based optimizer used by Amazon Redshift.
We delivered a feature to estimate average width for variable-length columns such as string and binary from Iceberg metadata. We do this by using the column storage size and the number of rows, and we adjust for column compression when necessary. By inferring these additional statistics, our optimizer can make more accurate cost estimates for different query plans. This stats inference feature, launched in Amazon Redshift patch 186, provides up to a 7% improvement in the TPC-DS benchmarks. We have also enhanced the Amazon Redshift optimizer's cost model, including planner optimizations that improve the estimation of the different join distribution strategies to account for the networking cost of distributing data between the nodes of an Amazon Redshift cluster, along with further improvements to the Amazon Redshift query optimizer. These improvements, the culmination of several years of research, testing, and implementation, demonstrated up to a 45% improvement in a collection of TPC-DS benchmarks.
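To illustrate the idea behind width inference (a simplified model, not Amazon Redshift's internal formula), an average width estimate can be derived from two values Iceberg manifests already record, the column's storage size and the table's row count, scaled by an assumed compression ratio. The numbers below are invented for the example:

-- Hypothetical manifest values: 48 MB of on-disk data for a string column
-- across 6 million rows, with an assumed 3x compression ratio.
SELECT (48000000.0 / 6000000) * 3 AS inferred_avg_width_bytes; -- 8 bytes on disk, roughly 24 bytes uncompressed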
Example: TPC-DS benchmark highlights for Amazon Redshift no-stats queries on data lakes
One way to measure data lake query performance for Amazon Redshift is the TPC-DS benchmark. The TPC-DS benchmark is a standardized benchmark designed to test decision support systems, specifically concurrently accessed systems where queries can range from shorter analytical queries (for example, reporting and dashboards) to longer-running ETL-style queries for moving and transforming data into a different system. For these tests, we used the Cloud Data Warehouse Benchmark derived from TPC-DS 3TB to align our testing with many common analytical workloads, and to provide a common set of comparisons to measure improvements to Amazon Redshift data lake query performance.
We ran these tests across data stored in the Apache Parquet data format, as well as Apache Iceberg tables with data in Apache Parquet files. Because we focused these tests on out-of-the-box performance, none of these datasets had any table statistics available. We performed these tests using the Amazon Redshift patch versions specified in the following table, and used Amazon Redshift Serverless with 88 RPUs without any additional tuning. The following results represent a power run (the sum of how long it took to run all the tests, in seconds) from a warm run (the results of the power run after at least one execution of the workload):
| Dataset | P180 (12/2023) | P190 (5/2025) |
| --- | --- | --- |
| Apache Parquet (only numrows) | 7,796 | 3,553 |
| Apache Iceberg (out-of-the-box, no tuning) | 4,411 | 1,937 |
We observed notable improvements in several query run times. For this post, we focus on the improvements we observed in query 82:
SELECT
  i_brand_id brand_id, i_brand brand,
  sum(ss_ext_sales_price) ext_price
FROM date_dim, store_sales, item
WHERE d_date_sk = ss_sold_date_sk AND
  ss_item_sk = i_item_sk AND
  i_manager_id = 83 AND
  d_moy = 12 AND
  d_year = 2002
GROUP BY i_brand, i_brand_id
ORDER BY ext_price desc, i_brand_id
LIMIT 100;
In this query, we're searching for the top 100 selling brands from a specific manager in December 2002, which represents a typical dashboard-style analytical query. In our power run, we observed a reduction in query time from 512 seconds to 18.1 seconds for Apache Parquet data, a 28.2x improvement in performance. The accelerated query performance for this query in a warm run is due to improvements to the cost-based optimizer and dynamic partition elimination.
We observed query performance improvements across many of the queries found in the Cloud Data Warehouse Benchmark derived from the TPC-DS test suite. We encourage you to run your own performance tests using Amazon Redshift Serverless on your data lake data to see what performance gains you can observe.
Cleanup
If you ran these tests on your own and don't need the resources anymore, you'll need to delete your Amazon Redshift Serverless workgroup. See Shutting down and deleting a cluster. If you don't need to store the Cloud Data Warehouse Benchmark data in your S3 bucket anymore, see Deleting Amazon S3 objects.
Conclusion
In this post, you learned how cost-based optimizers for databases work, and how statistical information about your data can help Amazon Redshift execute queries more efficiently. You can optimize query performance for Iceberg tables by automatically collecting Puffin statistics, which lets Amazon Redshift use these recent innovations to query your data more efficiently. Giving more information to your query planner, the brain of Amazon Redshift, helps provide more predictable performance and lets you further scale how you interact with your data in your data lakes and data lakehouses.
About the authors
Martin Milenkoski is a Software Development Engineer on the Amazon Redshift team, currently focusing on data lake performance and query optimization. Martin holds an MSc in Computer Science from the École Polytechnique Fédérale de Lausanne.
Kalaiselvi Kamaraj is a Sr. Software Development Engineer on the Amazon Redshift team. She has worked on several projects within the Amazon Redshift query processing team and currently focuses on performance-related projects for the Amazon Redshift data lake and query optimizer.
Jonathan Katz is a Principal Product Manager – Technical on the AWS Analytics team and is based in New York. He is a Core Team member of the open-source PostgreSQL project and an active open-source contributor, including to the pgvector project.