How to optimize for memory when querying and saving large dataframe to S3

I'm trying to query Athena using boto3, get a value from the resulting dataframe, and then save the dataframe to S3.

from io import StringIO
import boto3
import awswrangler as wr

region = ""
access_key = ""
secret_key = ""
database = ""

s3 = boto3.resource(service_name='s3',
                    region_name=region,
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key)

sql = "SELECT * FROM tbl"
boto3.setup_default_session(region_name=region,aws_access_key_id=access_key, aws_secret_access_key=secret_key)
df = wr.athena.read_sql_query(sql=sql, database=database)
values = df['value'].unique() # get list of values from dataframe
csv_buffer = StringIO()
df.to_csv(csv_buffer, index=False)
s3.Object(s3_bucket_name, "path/to/folder/values.csv").put(Body=csv_buffer.getvalue())  # object key, not an s3:// URI

What would be a more efficient way to read the dataframe and save it to S3 for large datasets? I have already tried optimizing by using awswrangler instead of pd.read_csv(), which helped at first, but once the dataframe grows beyond roughly 700 MB it causes memory issues.


asked Nov 20, 2024 at 4:22 by codenoodles
  • If file size is your primary concern I’m not quite sure why you’d read or write to something like CSV when a highly optimized file format like Parquet exists. Even just writing your CSV with compression would likely save you lots in terms of storage space. – esqew Commented Nov 20, 2024 at 4:28
  • @esqew It's because later on I will want to retrieve the CSV from S3 again to do some pandas operations on the data as a dataframe. Would I be able to achieve the same when saving with a Parquet file? – codenoodles Commented Nov 20, 2024 at 4:32
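
To the question in the comment above: a Parquet file on S3 can be read straight back into a pandas DataFrame, so the later pandas operations still work. A minimal sketch, assuming the same awswrangler setup as in the question and a placeholder path:

import awswrangler as wr

# Write the dataframe as Parquet (columnar and compressed, so much smaller than CSV),
# then load it back later for further pandas work.
wr.s3.to_parquet(df=df, path="s3://path/to/folder/values.parquet")
df_again = wr.s3.read_parquet(path="s3://path/to/folder/values.parquet")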

1 Answer


From the documentation for read_sql_query, it seems like you can split the query result into chunks using the chunksize parameter. From there, you can iterate over the chunks and write each chunk to S3 with wr.s3.to_csv in dataset/append mode; each chunk is written as its own CSV file under the target prefix, so the full result never has to be held in memory.

import boto3
import awswrangler as wr

region = ""
access_key = ""
secret_key = ""
database = ""
CHUNKSIZE = 1000  # rows per chunk; tune to the memory you have available

sql = "SELECT * FROM tbl"
boto3.setup_default_session(region_name=region,
                            aws_access_key_id=access_key,
                            aws_secret_access_key=secret_key)

# chunksize makes read_sql_query return an iterator of DataFrames
# instead of materializing the whole result at once
chunks = wr.athena.read_sql_query(sql=sql, database=database, chunksize=CHUNKSIZE)

all_values = set()  # collect the unique values across all chunks

for df in chunks:
    all_values.update(df['value'].unique())  # unique values from this chunk

    # dataset=True + mode="append" writes each chunk as a separate CSV file
    # under the prefix, so nothing bigger than one chunk stays in memory
    wr.s3.to_csv(
        df=df,
        path="s3://path/to/folder/values/",  # a prefix (folder), not a single file, when dataset=True
        dataset=True,
        mode="append",
        index=False
    )
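
Since the append-mode write leaves multiple CSV files under the prefix, the data can also be read back later without loading everything at once. A minimal sketch, assuming the same prefix as above; process() is a hypothetical per-chunk step, and the pandas-style chunksize argument is assumed to be forwarded by wr.s3.read_csv:

# Read every CSV under the prefix back into one dataframe...
df_all = wr.s3.read_csv(path="s3://path/to/folder/values/", dataset=True)

# ...or stream it chunk by chunk to stay memory friendly
for chunk in wr.s3.read_csv(path="s3://path/to/folder/values/", dataset=True, chunksize=CHUNKSIZE):
    process(chunk)  # hypothetical: whatever per-chunk pandas work is needed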
