S3 Express One: A Giant Leap for StarRocks Users

StarRocks Engineering
6 min read · Oct 18, 2024


Author: Jeff Ding

Since the launch of AWS S3 in 2006, object storage has gradually become the de facto standard for large-scale data storage, thanks to its simple API, low cost, massive scalability, and high reliability and availability. Object storage offers many advantages, but it is not a silver bullet. New data processing paradigms, such as lakehouse analytics and unified stream-batch processing, run into several of its limitations: high latency, QPS limits, pricing risks, and poor LIST performance.

Fortunately, AWS S3 Express One Zone storage addresses the S3 issues mentioned above. It promises a better user experience for latency-critical applications, and this comprehensive approach is a major reason why StarRocks was quick to support S3 Express One Zone as its backend storage. In this article, we will provide a detailed introduction to the whys and hows behind this decision.

Usage

Leveraging S3 Express One Zone in StarRocks is very simple. First, prepare a StarRocks shared-data cluster, and make sure it runs on AWS S3 (currently, only AWS S3 offers this capability).

Once all the prerequisites are ready, you only need to execute a command in StarRocks to create a Storage Volume based on Express One Zone as the backend. The command is as follows:

CREATE STORAGE VOLUME s3_express_one_vol
TYPE = S3
LOCATIONS = ("s3://dingkai-test--usw2-az1--x-s3/")
PROPERTIES (
    "aws.s3.region" = "us-west-2",
    "aws.s3.access_key" = "xxx",
    "aws.s3.secret_key" = "yyy",
    "aws.s3.endpoint" = "https://s3express-usw2-az1.us-west-2.amazonaws.com",
    "enabled" = "true"
);

It is important to note that the endpoint, region, and bucket naming conventions for S3 Express One Zone differ from those of standard S3. You can refer to this link to learn more:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html

The other fields are identical to those of a standard S3 bucket.

For more details on StarRocks’ Storage Volume capabilities and usage, refer to the documentation.
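Once the Storage Volume exists, tables can be placed on it at creation time via the `storage_volume` property. The sketch below is illustrative, not from the original tests: the `demo` database and `events` table are hypothetical, and the exact DDL options vary by StarRocks version, so check your version's documentation.

```sql
-- Illustrative sketch: bind a database (and a table in it) to the
-- Express One Zone-backed volume created above. Names are hypothetical.
CREATE DATABASE demo PROPERTIES ("storage_volume" = "s3_express_one_vol");

USE demo;

CREATE TABLE events (
    event_time DATETIME NOT NULL,
    user_id    BIGINT,
    payload    STRING
)
DUPLICATE KEY (event_time)
DISTRIBUTED BY HASH (user_id)
PROPERTIES ("storage_volume" = "s3_express_one_vol");
```

Tables created without an explicit `storage_volume` property fall back to the cluster's default Storage Volume, which is how the side-by-side comparisons below were set up.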

Performance

What about the performance of S3 Express One Zone Storage? To truly evaluate it, we designed the following three test scenarios. All tests were conducted using the latest version of StarRocks with a storage-compute separation architecture:

  • High-frequency real-time ingestion
  • Offline data ingestion
  • Cold data queries

The testing method is straightforward. We created two independent Storage Volumes: one using a regular S3 bucket, and the other based on an S3 Express One Zone Storage bucket. Then, we created different tables based on these two Storage Volumes to test their performance (thanks to Storage Volumes, we have the flexibility to experiment freely).

mysql> show storage volumes;
+------------------------+
| Storage Volume         |
+------------------------+
| builtin_storage_volume |
| jeff_s3_express_bucket |
+------------------------+

mysql> desc storage volume builtin_storage_volume\G
*************************** 1. row ***************************
Name: builtin_storage_volume
Type: S3
IsDefault: true
Location: s3://starrocks-common/skr2veiad-jeff_load_test-1699877456738
Params: {"aws.s3.region":"us-west-2","aws.s3.use_instance_profile":"true","aws.s3.use_aws_sdk_default_behavior":"false","aws.s3.endpoint":"https://s3.us-west-2.amazonaws.com"}
Enabled: true
Comment:
1 row in set (0.19 sec)

mysql> desc storage volume jeff_s3_express_bucket\G
*************************** 1. row ***************************
Name: jeff_s3_express_bucket
Type: S3
IsDefault: false
Location: s3://dingkai-test--usw2-az1--x-s3/
Params: {"aws.s3.access_key":"******","aws.s3.secret_key":"******","aws.s3.endpoint":"https://s3express-usw2-az1.us-west-2.amazonaws.com","aws.s3.region":"us-west-2","aws.s3.use_instance_profile":"false","aws.s3.use_aws_sdk_default_behavior":"false"}
Enabled: true
Comment: s3 express one zone type bucket test by jeff

1 row in set (0.20 sec)

High-Frequency Real-Time Ingestion

In this test, we set different client concurrency levels and continuously ingested data into StarRocks via Stream Load, with each batch being 1 MB in size.

During the test, we used the built-in monitoring system to observe the write I/O latency for the two bucket types.

Can you guess which one is the standard object storage and which one is Express One?

Batch Data Ingestion

We used Broker Load to import the largest table, store_sales, from the TPC-DS 1TB dataset.
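For reference, a Broker Load job of this shape looks roughly like the following. This is a hedged sketch, not the exact job we ran: the bucket path, column delimiter, and credentials are placeholders.

```sql
-- Sketch of a Broker Load job for TPC-DS store_sales
-- (bucket path and credentials are placeholders).
LOAD LABEL tpcds.store_sales_1_2
(
    DATA INFILE("s3://my-tpcds-bucket/store_sales/*")
    INTO TABLE store_sales
    COLUMNS TERMINATED BY "|"
)
WITH BROKER
(
    "aws.s3.access_key" = "xxx",
    "aws.s3.secret_key" = "yyy",
    "aws.s3.region" = "us-west-2"
)
PROPERTIES ("timeout" = "144000");
```

Progress can then be tracked with `SHOW LOAD WHERE Label = "store_sales_1_2"`, as shown in the output below.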

mysql> show load where Label = "store_sales_1_2"\G
*************************** 1. row ***************************
JobId: 19169
Label: store_sales_1_2
State: FINISHED
Progress: ETL:100%; LOAD:100%
Type: BROKER
Priority: NORMAL
ScanRows: 2879987999
FilteredRows: 0
UnselectedRows: 0
SinkRows: 2879987999
EtlInfo: NULL
TaskInfo: resource:N/A; timeout(s):144000; max_filter_ratio:0.0
ErrorMsg: NULL
CreateTime: 2024-04-17 18:59:55
EtlStartTime: 2024-04-17 18:59:59
EtlFinishTime: 2024-04-17 18:59:59
LoadStartTime: 2024-04-17 18:59:59
LoadFinishTime: 2024-04-17 19:01:59
TrackingSQL:
JobDetails: {"All backends":{"8a4ed954-78bc-4930-bc74-c9d64d9e5cdf":[14752,14753,14754,14755,13283,14756,13284,14757,13285,13286,13287,10004,10005,10006,14750,14751]},"FileNumber":10,"FileSize":421287950369,"InternalTableLoadBytes":481093008697,"InternalTableLoadRows":2879987999,"ScanBytes":421287950369,"ScanRows":2879987999,"TaskNumber":1,"Unfinished backends":{"8a4ed954-78bc-4930-bc74-c9d64d9e5cdf":[]}}
Warehouse: default_warehouse
1 row in set (0.20 sec)


mysql> show load where Label = "store_sales_2_3"\G
*************************** 1. row ***************************
JobId: 19180
Label: store_sales_2_3
State: FINISHED
Progress: ETL:100%; LOAD:100%
Type: BROKER
Priority: NORMAL
ScanRows: 2879987999
FilteredRows: 0
UnselectedRows: 0
SinkRows: 2879987999
EtlInfo: NULL
TaskInfo: resource:N/A; timeout(s):144000; max_filter_ratio:0.0
ErrorMsg: NULL
CreateTime: 2024-04-17 19:04:45
EtlStartTime: 2024-04-17 19:04:49
EtlFinishTime: 2024-04-17 19:04:49
LoadStartTime: 2024-04-17 19:04:49
LoadFinishTime: 2024-04-17 19:07:00
TrackingSQL:
JobDetails: {"All backends":{"4c534297-b69a-42b0-9a41-120f0ad1b0ca":[14752,14753,14754,14755,13283,13284,14756,13285,14757,13286,13287,10004,10005,10006,14750,14751]},"FileNumber":10,"FileSize":421287950369,"InternalTableLoadBytes":481093008697,"InternalTableLoadRows":2879987999,"ScanBytes":421287950369,"ScanRows":2879987999,"TaskNumber":1,"Unfinished backends":{"4c534297-b69a-42b0-9a41-120f0ad1b0ca":[]}}
Warehouse: default_warehouse
1 row in set (0.19 sec)

(Figures: read I/O latency for the S3 Express One Zone bucket vs. the S3 Standard bucket)

During the test, we observed that the CPUs on the CN nodes were nearly fully utilized.

Query

For this test, we selected the TPC-DS 1TB dataset and disabled all caches during the process (both the memory-based page cache and the local disk-based data cache). We compared query performance across three scenarios: a standard S3 bucket, an S3 Express One Zone bucket, and a local cache hit.
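To force every read to hit object storage, caching must be turned off for the test session. Variable names differ across StarRocks versions; the sketch below uses 3.x-era session variables purely as an illustration, so verify the names with `SHOW VARIABLES` on your own cluster.

```sql
-- Illustrative only: disable the local data cache for the session so
-- queries read directly from object storage (variable names vary by version).
SET enable_scan_datacache = false;      -- skip the disk-based data cache on reads
SET enable_populate_datacache = false;  -- don't warm the cache during the test
```

The memory-based page cache is typically a BE/CN configuration item rather than a session variable, so it is disabled in the node configuration instead.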

We also carefully recorded the I/O read latency for both types of storage.

(Figures: read I/O latency for the S3 Express One Zone bucket vs. the S3 Standard bucket)

Takeaways

After conducting these tests, we can conclude the following:

  • AWS S3 Express One offers lower I/O latency compared to standard buckets, with particularly stable P99 latency control (a big win!).
  • AWS S3 Express One Storage has a higher IOPS limit than standard buckets, effectively alleviating the S3 I/O throttling issues commonly seen in real-time applications (in the real-time ingestion scenario, many requests on standard S3 buckets experienced IOPS throttling).
  • For batch data processing scenarios, standard S3 buckets already provide sufficient bandwidth and remain perfectly adequate.
  • Overall, AWS S3 Express One’s low-latency and high-concurrency characteristics make it well-suited for real-time and cache-heavy scenarios, providing a better user experience.

Important Considerations

Beyond the conclusions above, there is an additional factor users must consider: there is no free lunch from cloud providers. Compared to standard S3 storage, the new storage class is priced differently, even though the primary cost structure remains based on storage capacity and API call fees:

  1. Significantly higher storage costs: Compared to standard S3 buckets, storage fees are much higher. For standard S3, the first 50TB of storage costs just $0.023/GB per month. In contrast, S3 Express One Zone is $0.16/GB per month, roughly seven times more expensive. By comparison, AWS EBS gp2 is priced at just $0.1/GB per month.
  2. Reduced API call fees: API call fees are halved compared to standard S3 buckets. For standard S3, update requests (e.g., PUT, DELETE) cost $0.005 per 1,000 requests, and read requests (e.g., GET, HEAD) cost $0.0004 per 1,000 requests. With S3 Express One Zone, update requests cost $0.0025 per 1,000, and read requests are $0.0002 per 1,000.
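The trade-off can be put into numbers with a back-of-the-envelope check based on the list prices quoted above (a rough illustration, ignoring request-size surcharges and data transfer): the storage premium is about $0.137/GB per month, while each 1,000 PUT requests saves only $0.0025, so the premium pays off only when data is hammered with requests — exactly the real-time, cache-heavy profile described earlier.

```sql
-- Back-of-the-envelope, using the list prices quoted above.
SELECT
    0.16 - 0.023                              AS storage_premium_per_gb_month, -- ≈ $0.137
    0.005 - 0.0025                            AS put_saving_per_1k_requests,   -- $0.0025
    (0.16 - 0.023) / (0.005 - 0.0025) * 1000  AS breakeven_puts_per_gb_month;  -- ≈ 54,800
```

In other words, each GB stored would need on the order of tens of thousands of write requests per month before the API savings alone offset the storage premium.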

Seeing this pricing, clever readers will have already figured out how to maximize the value of this new storage type. For those with questions (or those who want to validate their plans), let’s discuss further on the StarRocks Slack channel. See you there!

Join Us on Slack

If you’re interested in the StarRocks project, have questions, or simply seek to discover solutions or best practices, join our StarRocks community on Slack. It’s a great place to connect with project experts and peers from your industry.
