
Compaction support for Avro and ORC file formats in Apache Iceberg tables in Amazon S3


Apache Iceberg, a high-performance open table format (OTF), has gained widespread adoption among organizations managing large-scale analytic tables and data volumes. Iceberg brings the reliability and simplicity of SQL tables to data lakes while enabling engines such as Apache Spark, Trino, Apache Flink, Presto, Apache Hive, and Apache Impala, as well as AWS analytics services like Amazon Athena, to access data flexibly and securely in a lakehouse architecture. While a lakehouse built on Iceberg represents an evolution of the data lake, it still requires services to compact and optimize the files and partitions that make up the tables. Self-managing Iceberg tables with large volumes of data poses several challenges, including managing concurrent transactions, processing real-time data streams, handling small file proliferation, maintaining data quality and governance, and ensuring compliance.

At re:Invent 2024, Amazon S3 launched Amazon S3 Tables, the first cloud object store with native Iceberg support for Parquet files, designed to streamline tabular data management at scale. Parquet is one of the most common and fastest growing data types in Amazon S3: Amazon S3 stores exabytes of Parquet data and averages over 15 million requests per second to this data. While S3 Tables initially supported the Parquet file type, as discussed in the S3 Tables AWS News Blog post, the Iceberg specification also extends to the Avro and ORC file formats for managing large analytic tables. Now, S3 Tables is expanding its capabilities to include automatic compaction for these additional file types within Iceberg tables. This enhancement is also available for Iceberg tables in general purpose S3 buckets, using the lakehouse architecture of Amazon SageMaker, which previously supported Parquet compaction as covered in the blog post Accelerate queries on Apache Iceberg tables through AWS Glue auto compaction.

This blog post explores the performance benefits of automatic compaction of Iceberg tables using the Avro and ORC file types in S3 Tables, for a data ingestion use case with over 20 billion events.

Parquet, ORC, and Avro file formats

Parquet is one of the most common and fastest growing data types in Amazon S3. Originally developed by Twitter and now part of the Apache ecosystem, it is known for its broad compatibility with big data tools such as Spark, Hive, Impala, and Drill. Amazon S3 stores exabytes of Apache Parquet data and averages over 15 million requests per second to this data. Parquet uses a hybrid encoding scheme and supports complex nested data structures, making it ideal for read-heavy workloads and analytics across various platforms. Parquet also provides excellent compression and efficient I/O by enabling selective column reads, reducing the amount of data scanned during queries.

ORC was specifically designed for the Hadoop ecosystem and optimized for Hive. It often offers better compression ratios and better read performance for certain types of queries due to its lightweight indexing and aggressive predicate pushdown capabilities. ORC includes built-in statistics and supports lightweight indexes, which can significantly accelerate filtering operations. While Parquet offers broader tool compatibility, ORC often outperforms it within Hive-centric environments, especially when dealing with flat data structures and large sequential scans.

The Avro file format is usually used in streaming scenarios for its serialization and schema handling capabilities and for its seamless integration with Apache Kafka, offering a strong combination for handling real-time data streams. For example, for storing and validating streaming data schemas, you have the option of using the AWS Glue Schema Registry. Avro, in contrast with Parquet and ORC, is a row-based storage format designed for efficient data serialization and schema evolution. Avro excels in write-heavy use cases like data ingestion and streaming and is commonly used with Kafka. Unlike Parquet and ORC, which are optimized for analytical queries, Avro is designed for fast reads and writes of full records, and it stores the schema alongside the data, enabling easier data exchange and evolution over time.

Below is a comparison of these three file formats.

| | Parquet | ORC | Avro |
|---|---|---|---|
| Storage format | Columnar | Columnar | Row-based |
| Best for | Analytics and queries across columns | Hive-based queries, heavy compression | Data ingestion, streaming, serialization |
| Compression | Good | Excellent (especially numerical data) | Moderate |
| Tool compatibility | Broad (Spark, Hive, Presto, etc.) | Strong with Hive/Hadoop | Strong with Kafka, Flink, etc. |
| Query performance | Very good for analytics | Excellent in Hive | Not optimized for analytics |
| Schema evolution | Supported | Supported | Excellent (schema stored with data) |
| Nested data support | Yes | Limited | Yes |
| Write efficiency | Moderate | Moderate | High |
| Read efficiency | High (for columnar scans) | Very high (in Hive) | High (for full record reads) |
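In Iceberg, the data file format is a per-table setting controlled by the write.format.default table property (Parquet if unset). The following is a minimal Spark SQL sketch run through the spark-sql CLI, assuming an Iceberg catalog named glue_catalog, a namespace named bigdata, and illustrative columns; these names are placeholders, not part of the benchmark setup described later.

# Sketch: create an Iceberg table that writes Avro data files
# (glue_catalog, bigdata, and the columns are placeholder names).
spark-sql -e "
CREATE TABLE glue_catalog.bigdata.events_avro (
  id BIGINT, name STRING, team STRING, age INT, start_date TIMESTAMP)
USING iceberg
TBLPROPERTIES ('write.format.default'='avro');"

# Sketch: switch an existing table's default data file format to ORC.
# Files already written in another format are not rewritten; Iceberg
# tables can contain a mix of formats.
spark-sql -e "
ALTER TABLE glue_catalog.bigdata.events_avro
SET TBLPROPERTIES ('write.format.default'='orc');"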

Solution overview

We run two versions of the same architecture: one where the tables are auto compacted, and another without compaction, in this case using S3 Tables. By evaluating both scenarios, this post demonstrates the efficiency, query performance, and cost benefits of auto compacted tables compared to non-compacted tables in a simulated Internet of Things (IoT) data pipeline. The following diagram illustrates the solution architecture.

Figure 1 – Solution architecture diagram

Compaction performance test

We simulated IoT data ingestion with over 20 billion events and used MERGE INTO for data deduplication across two time-based partitions, involving heavy partition reads and shuffling. After ingestion, we ran queries in Athena to compare performance between compacted and uncompacted tables using merge-on-read (MoR) mode on both the Avro and ORC formats. We use the following table configuration settings:

'write.delete.mode'='merge-on-read'
'write.update.mode'='merge-on-read'
'write.merge.mode'='merge-on-read'
'write.distribution-mode'='hash'

We use 'write.distribution-mode'='hash' to generate larger files, which benefits performance. Note that because we are already producing fairly large files, the differences between uncompacted and compacted tables are not going to be that large; this can change significantly depending on your workload (for example, partitioning, input rate, batch size) and your chosen write distribution mode. For more details, refer to the Writing Distribution Modes section in the Apache Iceberg documentation.
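In this benchmark these properties are set by the Spark ingestion jobs when the tables are created, but they can also be applied to an existing Iceberg table. Below is a minimal sketch using the same placeholder catalog, namespace, and table names as earlier (glue_catalog, bigdata, events_avro):

# Sketch: apply merge-on-read and hash distribution to an existing table
# (catalog, namespace, and table names are placeholders).
spark-sql -e "
ALTER TABLE glue_catalog.bigdata.events_avro SET TBLPROPERTIES (
  'write.delete.mode'='merge-on-read',
  'write.update.mode'='merge-on-read',
  'write.merge.mode'='merge-on-read',
  'write.distribution-mode'='hash');"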

The following table shows metrics of the Athena query performance. Refer to the section "Query and join data from these S3 Tables to build insights" for query details. All tables used to analyze query performance contain over 2 billion rows. These results are specific to this simulation exercise, and your results may differ depending on your data size and the queries you run.

| Query | Avro query time with compaction | Avro query time without compaction | ORC query time without compaction | ORC query time with compaction | % improvement Avro | % improvement ORC |
|---|---|---|---|---|---|---|
| Query 1 | 22.45 secs | 26.54 secs | 30.16 secs | 20.32 secs | 15.41% | 32.63% |
| Query 2 | 22.68 secs | 25.83 secs | 34.17 secs | 20.51 secs | 12.20% | 39.98% |
| Query 3 | 25.92 secs | 35.65 secs | 29.05 secs | 24.95 secs | 27.29% | 14.11% |

Prerequisites

To set up your own evaluation environment and test the feature, you need the following prerequisites.

An AWS account with access to the AWS services used in this post (Amazon S3 and S3 Tables, Amazon EMR Serverless, Amazon MSK, Amazon EC2, Amazon Athena, AWS Glue, AWS Lake Formation, and Amazon CloudWatch).

Create an S3 table bucket and enable integration with AWS analytics services

Go to the S3 console and enable the table buckets feature.

Then choose the Create table bucket button, fill in Table bucket name with any bucket name you prefer, select the Enable integration checkbox, then choose Create table bucket.
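If you prefer scripting this step, a table bucket can also be created with the AWS CLI; the sketch below uses a placeholder bucket name and Region, and the integration with AWS analytics services is still enabled from the console as described above.

# Sketch: create and list S3 table buckets from the CLI
# (bucket name and Region are placeholders).
aws s3tables create-table-bucket \
  --name my-iceberg-table-bucket \
  --region us-east-1

aws s3tables list-table-buckets --region us-east-1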

Set up Amazon S3 storage

Create an S3 bucket with the following structure:

s3bucket/
  /jars
  /Employee.desc
  /checkpointAvro
  /checkpointAvroAuto
  /checkpointORC
  /checkpointORCAuto

Download the descriptor file Employee.desc from the GitHub repo and put it into the S3 bucket you just created.

Download the application from the releases page

Get the packaged application S3Tables-Avro-orc-auto-compaction-benchmark-0.1 from the GitHub repo, then upload the JAR file to the jars directory in the S3 bucket. The checkpoint folders will be used for the Structured Streaming checkpointing mechanism. Because we use four streaming job runs, one for compacted and one for uncompacted data for each format, we also create a "checkpointAuto" folder for both.
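The bucket layout and uploads can also be scripted. The following sketch assumes the placeholder bucket name s3bucket used throughout this post and that the descriptor file and JAR have already been downloaded to the working directory:

# Sketch: create the general purpose bucket and upload the artifacts
# (s3bucket is a placeholder; replace it with your bucket name).
aws s3 mb s3://s3bucket

aws s3 cp Employee.desc s3://s3bucket/Employee.desc
aws s3 cp streaming-iceberg-ingest-1.0-SNAPSHOT.jar s3://s3bucket/jars/

# The checkpoint prefixes (checkpointAvro, checkpointAvroAuto, checkpointORC,
# checkpointORCAuto) are created by Spark Structured Streaming on first use,
# so they don't need to be created up front.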

Create an EMR Serverless application

Create an EMR Serverless application with the following settings (for instructions, see Getting started with Amazon EMR Serverless):

Type: Spark
Version: 7.2.0
Architecture: x86_64
Java Runtime: Java 17
Metastore Integration: AWS Glue Data Catalog
Logs: Enable Amazon CloudWatch Logs if desired (recommended, but not required for this blog)
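If you'd rather create the application from the CLI, the following is a minimal sketch; the application name, subnet IDs, and security group ID are placeholders, and the release label emr-7.2.0 is assumed to match the version above.

# Sketch: create the EMR Serverless application from the CLI
# (name, subnet IDs, and security group ID are placeholders).
# The network settings correspond to the VPC configuration described in the
# next paragraph; Java runtime and Glue Data Catalog integration are
# configured as in the console settings above.
aws emr-serverless create-application \
  --name iceberg-compaction-benchmark \
  --type SPARK \
  --release-label emr-7.2.0 \
  --architecture X86_64 \
  --network-configuration '{"subnetIds":["subnet-xxxx","subnet-yyyy"],"securityGroupIds":["sg-zzzz"]}'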

Configure the network (VPC, subnets, and default security group) to allow the EMR Serverless application to reach the MSK cluster. Take note of the application ID to use later when launching the jobs.

Create an MSK cluster

Create an MSK cluster on the Amazon MSK console. For more details, see Get started using Amazon MSK. You need to use the custom create option with at least two brokers, Apache Kafka version 3.5.1 in Apache ZooKeeper mode, and instance type kafka.m7g.xlarge. Don't use public access; instead choose two private subnets to deploy (one broker per subnet or Availability Zone, for a total of two brokers). For the security group, remember that the EMR Serverless application and the Amazon EC2 based producer will need to reach the cluster, and act accordingly.

For security, use PLAINTEXT (in production, you should secure access to the cluster). Choose 200 GB as the storage size for each broker and don't enable tiered storage. For network security groups, you can choose the default security group of the VPC.

For the MSK cluster configuration, use the following settings:

auto.create.topics.enable=true
default.replication.factor=2
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=32
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
compression.type=zstd
log.retention.hours=2
log.retention.bytes=10073741824
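One way to apply these broker settings is to register them as an MSK configuration and attach it to the cluster when you create it. The following CLI sketch assumes the properties above are saved locally as msk-config.properties; the configuration name is a placeholder.

# Sketch: register the broker properties above as an MSK configuration
# (assumes they were saved locally as msk-config.properties).
aws kafka create-configuration \
  --name iceberg-benchmark-msk-config \
  --kafka-versions "3.5.1" \
  --server-properties fileb://msk-config.properties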

Configure the data simulator

Log in to your EC2 instance. Because it's running in a private subnet, you should use an EC2 Instance Connect Endpoint to connect. To create one, see Connect to your instances using EC2 Instance Connect Endpoint. After you log in, issue the following commands:

sudo yum install java-17-amazon-corretto-devel
wget https://archive.apache.org/dist/kafka/3.5.1/kafka_2.12-3.5.1.tgz
tar xzvf kafka_2.12-3.5.1.tgz

Create Kafka topics

Create two Kafka topics—remember that you need to replace the bootstrap server with the corresponding client information. You can get this information from the Amazon MSK console on the details page for your MSK cluster.
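You can also retrieve the bootstrap broker string with the CLI; the cluster ARN below is a placeholder.

# Sketch: fetch the bootstrap broker string for the cluster (placeholder ARN).
aws kafka get-bootstrap-brokers \
  --cluster-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-msk-cluster/replace-with-cluster-id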

cd kafka_2.12-3.5.1/bin/

./kafka-topics.sh --topic protobuf-demo-topic-pure --bootstrap-server kafkaBootstrapString --create

Launching EMR Serverless jobs for Iceberg tables (Avro/ORC, compacted and non-compacted)

Now it's time to launch the EMR Serverless streaming jobs for four different Iceberg tables. Each job uses a different Spark Structured Streaming checkpoint and a specific Java class for the ingestion logic.

Before launching the jobs, make sure:

You have disabled auto-compaction in the S3 tables where necessary (see S3 Tables maintenance), in this case for the employee_avro_uncompacted and employee_orc_uncompacted tables; a CLI sketch follows this list.
Your EMR Serverless IAM role has permissions to read from and write to S3 Tables. Open the AWS Lake Formation console, then follow these docs to grant permissions to the EMR Serverless role.
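The following is one possible CLI sketch for turning compaction off on the two uncompacted tables; the table bucket ARN and namespace are placeholders, and you should check the S3 Tables maintenance documentation for the authoritative syntax.

# Sketch: disable automatic compaction on the uncompacted tables
# (table bucket ARN and namespace are placeholders).
TABLE_BUCKET_ARN=arn:aws:s3tables:us-east-1:111122223333:bucket/my-iceberg-table-bucket

for TABLE in employee_avro_uncompacted employee_orc_uncompacted; do
  aws s3tables put-table-maintenance-configuration \
    --table-bucket-arn "$TABLE_BUCKET_ARN" \
    --namespace bigdata \
    --name "$TABLE" \
    --type icebergCompaction \
    --value '{"status":"disabled"}'
done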

After launching each job, start the data simulator and let it finish. Then you can cancel the job run and launch the next one (launching the data simulator again).

Launch the data simulator

Download the JAR file to the EC2 instance and run the producer; note that you only need to do this once.

aws s3 cp s3://s3bucket/jars/streaming-iceberg-ingest-1.0-SNAPSHOT.jar .

Now you can start the protocol buffer producers. Use the following commands:

java -cp streaming-iceberg-ingest-1.0-SNAPSHOT.jar \
  com.aws.emr.proto.kafka.producer.ProtoProducer kafkaBootstrapString

You must run this command for each of the tables (job runs); run the command after the ingestion process has started.

Table 1: employee_orc_uncompacted

Checkpoint: checkpointORC
Java Class: SparkCustomIcebergIngestMoRS3BucketsORC

aws emr-serverless start-job-run \
  --application-id application-identifier \
  --name employee-orc-uncompacted-job \
  --execution-role-arn arn-of-emrserverless-role \
  --mode 'STREAMING' \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://s3bucket/jars/streaming-iceberg-ingest-1.0-SNAPSHOT.jar",
      "entryPointArguments": ["true", "s3://s3bucket/warehouse", "s3://s3bucket/Employee.desc", "s3://s3bucket/checkpointORC", "kafkaBootstrapString", "true"],
      "sparkSubmitParameters": "--class com.aws.emr.spark.iot.SparkCustomIcebergIngestMoRS3BucketsORC --conf spark.executor.cores=16 --conf spark.executor.memory=64g --conf spark.driver.cores=4 --conf spark.driver.memory=16g --conf spark.dynamicAllocation.minExecutors=3 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.sql.catalog.glue_catalog.http-client.apache.max-connections=3000 --conf spark.emr-serverless.executor.disk.type=shuffle_optimized --conf spark.emr-serverless.executor.disk=1000G --conf spark.jars=/usr/share/aws/iceberg/lib/iceberg-spark3-runtime.jar --files s3://s3bucket/Employee.desc --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1"
    }
  }'

Table 2: employee_avro_uncompacted

Checkpoint: checkpointAvro
Java Class: SparkCustomIcebergIngestMoRS3BucketsAvro

aws emr-serverless start-job-run \
  --application-id application-identifier \
  --name employee-avro-uncompacted-job \
  --execution-role-arn arn-of-emrserverless-role \
  --mode 'STREAMING' \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://s3bucket/jars/streaming-iceberg-ingest-1.0-SNAPSHOT.jar",
      "entryPointArguments": ["true", "s3://s3bucket/warehouse", "s3://s3bucket/Employee.desc", "s3://s3bucket/checkpointAvro", "kafkaBootstrapString", "true"],
      "sparkSubmitParameters": "--class com.aws.emr.spark.iot.SparkCustomIcebergIngestMoRS3BucketsAvro --conf spark.executor.cores=16 --conf spark.executor.memory=64g --conf spark.driver.cores=4 --conf spark.driver.memory=16g --conf spark.dynamicAllocation.minExecutors=3 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.sql.catalog.glue_catalog.http-client.apache.max-connections=3000 --conf spark.emr-serverless.executor.disk.type=shuffle_optimized --conf spark.emr-serverless.executor.disk=1000G --conf spark.jars=/usr/share/aws/iceberg/lib/iceberg-spark3-runtime.jar --files s3://s3bucket/Employee.desc --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1"
    }
  }'

Table 3: employee_orc (auto-compacted)

Checkpoint: checkpointORCAuto
Java Class: SparkCustomIcebergIngestMoRS3BucketsAutoORC

aws emr-serverless start-job-run \
  --application-id application-identifier \
  --name employee-orc-auto-job \
  --execution-role-arn arn-of-emrserverless-role \
  --mode 'STREAMING' \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://s3bucket/jars/streaming-iceberg-ingest-1.0-SNAPSHOT.jar",
      "entryPointArguments": ["true", "s3://s3bucket/warehouse", "s3://s3bucket/Employee.desc", "s3://s3bucket/checkpointORCAuto", "kafkaBootstrapString", "true"],
      "sparkSubmitParameters": "--class com.aws.emr.spark.iot.SparkCustomIcebergIngestMoRS3BucketsAutoORC --conf spark.executor.cores=16 --conf spark.executor.memory=64g --conf spark.driver.cores=4 --conf spark.driver.memory=16g --conf spark.dynamicAllocation.minExecutors=3 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.sql.catalog.glue_catalog.http-client.apache.max-connections=3000 --conf spark.emr-serverless.executor.disk.type=shuffle_optimized --conf spark.emr-serverless.executor.disk=1000G --conf spark.jars=/usr/share/aws/iceberg/lib/iceberg-spark3-runtime.jar --files s3://s3bucket/Employee.desc --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1"
    }
  }'

Table 4: employee_avro (auto-compacted)

Checkpoint: checkpointAvroAuto
Java Class: SparkCustomIcebergIngestMoRS3BucketsAutoAvro

aws emr-serverless start-job-run \
  --application-id application-identifier \
  --name employee-avro-auto-job \
  --execution-role-arn arn-of-emrserverless-role \
  --mode 'STREAMING' \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://s3bucket/jars/streaming-iceberg-ingest-1.0-SNAPSHOT.jar",
      "entryPointArguments": ["true", "s3://s3bucket/warehouse", "s3://s3bucket/Employee.desc", "s3://s3bucket/checkpointAvroAuto", "kafkaBootstrapString", "true"],
      "sparkSubmitParameters": "--class com.aws.emr.spark.iot.SparkCustomIcebergIngestMoRS3BucketsAutoAvro --conf spark.executor.cores=16 --conf spark.executor.memory=64g --conf spark.driver.cores=4 --conf spark.driver.memory=16g --conf spark.dynamicAllocation.minExecutors=3 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.sql.catalog.glue_catalog.http-client.apache.max-connections=3000 --conf spark.emr-serverless.executor.disk.type=shuffle_optimized --conf spark.emr-serverless.executor.disk=1000G --conf spark.jars=/usr/share/aws/iceberg/lib/iceberg-spark3-runtime.jar --files s3://s3bucket/Employee.desc --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1"
    }
  }'

Query and join data from these S3 Tables to build insights

You can go to the Athena console and then run the queries. Make sure that Lake Formation permissions are applied on the catalog database and tables for the IAM role you use in the console. For more details, refer to the Grant Lake Formation permissions on your table documentation.

To benchmark these queries in Athena, you can run each query multiple times—typically five runs per query—to obtain a reliable performance estimate. In the Athena console, simply execute the same query repeatedly and record the execution time for each run, which is displayed in the query history. Once you have five execution times, calculate the average to get a representative benchmark value. This approach helps account for variations in performance due to background load, providing more consistent and meaningful results.
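If you prefer to script the benchmark rather than use the console, the following sketch runs a single query five times through the Athena CLI and averages the engine execution time; the database, output location, and query are placeholders matching the examples below.

# Sketch: run one query five times via the Athena CLI and average the
# engine execution time (database and output location are placeholders).
QUERY='SELECT role, team, avg(age) AS average_age FROM "employee_orc" GROUP BY role, team ORDER BY average_age DESC'
TOTAL=0

for i in 1 2 3 4 5; do
  QID=$(aws athena start-query-execution \
    --query-string "$QUERY" \
    --query-execution-context Database=bigdata \
    --result-configuration OutputLocation=s3://s3bucket/athena-results/ \
    --query 'QueryExecutionId' --output text)

  # Poll until the query leaves the QUEUED/RUNNING states.
  STATE=QUEUED
  while [ "$STATE" = "QUEUED" ] || [ "$STATE" = "RUNNING" ]; do
    sleep 5
    STATE=$(aws athena get-query-execution --query-execution-id "$QID" \
      --query 'QueryExecution.Status.State' --output text)
  done

  MS=$(aws athena get-query-execution --query-execution-id "$QID" \
    --query 'QueryExecution.Statistics.EngineExecutionTimeInMillis' --output text)
  echo "Run $i: ${MS} ms"
  TOTAL=$((TOTAL + MS))
done

echo "Average over 5 runs: $((TOTAL / 5)) ms"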

Query 1

SELECT role, team, avg(age) AS average_age
FROM bigdata."employee_orc"
GROUP BY role, team
ORDER BY average_age DESC

Query 2

SELECT team, name, min(age) AS youngest_age
FROM "bigdata"."employee_avro"
GROUP BY team, name
ORDER BY youngest_age ASC

Query 3

SELECT name, age, start_date, role, team
FROM bigdata."employee_avro"
WHERE CAST(start_date AS DATE) > CAST('2023-01-02' AS DATE) AND age > 40
ORDER BY start_date DESC
LIMIT 100

Conclusion

AWS has expanded support for Iceberg table optimization to include all Iceberg-supported file formats: Parquet, Avro, and ORC. This comprehensive compaction capability is now available for both Amazon S3 Tables and Iceberg tables in general purpose S3 buckets using the lakehouse architecture in SageMaker with AWS Glue Data Catalog optimization. S3 Tables deliver a fully managed experience through continual optimization, automatically maintaining your tables by handling compaction, snapshot retention, and unreferenced file removal. These automated maintenance features significantly improve query performance and reduce query engine costs. Compaction support for the Avro and ORC formats is now available in all AWS Regions where S3 Tables or optimization with the AWS Glue Data Catalog are available. To learn more about S3 Tables compaction, see the S3 Tables maintenance documentation. For general purpose bucket optimization, see the Glue Data Catalog optimization documentation.

Special thanks to everyone who contributed to this launch: Matthieu Dufour, Srishti Bhargava, Stylianos Herodotou, Kannan Ratnasingham, Shyam Rathi, David Lee.

About the authors

Angel Conde Manjon is a Sr. EMEA Data & AI PSA, based in Madrid. He has previously worked on research related to data analytics and artificial intelligence in diverse European research projects. In his current role, Angel helps partners develop businesses focused on data and AI.

Diego Colombatto is a Principal Partner Solutions Architect at AWS. He brings more than 15 years of experience in designing and delivering digital transformation projects for enterprises. At AWS, Diego works with partners and customers, advising on how to leverage AWS technologies to translate business needs into solutions. Solution architectures, algorithmic trading, and cooking are some of his passions, and he is always open to starting a conversation on these topics.

Sandeep Adwankar is a Senior Technical Product Manager at AWS. Based in the California Bay Area, he works with customers around the globe to translate business and technical requirements into products that enable customers to improve how they manage, secure, and access data.


