How to optimize for memory when querying and saving large dataframe to S3
I'm trying to query Athena using boto3, get a value from the dataframe, then save the dataframe to S3.
from io import StringIO
import boto3
import awswrangler as wr
region = ""
access_key = ""
secret_key = ""
database = ""
s3 = boto3.resource(service_name='s3',
                    region_name=region,
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key)
sql = "SELECT * FROM tbl"
boto3.setup_default_session(region_name=region, aws_access_key_id=access_key, aws_secret_access_key=secret_key)
df = wr.athena.read_sql_query(sql=sql, database=database)
values = df['value'].unique() # get list of values from dataframe
csv_buffer = StringIO()
df.to_csv(csv_buffer, index=False)
s3.Object(s3_bucket_name, "s3://path/to/folder/values.csv").put(Body=csv_buffer.getvalue())
What would be a more efficient way to read the dataframe and save it to S3 for large datasets? I have already tried optimizing by using awswrangler instead of pd.read_csv(), which helped at first, but once the dataframe grows beyond ~700 MB it causes memory issues.
- If file size is your primary concern I’m not quite sure why you’d read or write to something like CSV when a highly optimized file format like Parquet exists. Even just writing your CSV with compression would likely save you lots in terms of storage space. – esqew Commented Nov 20, 2024 at 4:28
- @esqew It's because later on I will want to retrieve the CSV from S3 again to do some pandas operations on the data as a dataframe. Would I be able to achieve the same when saving with a Parquet file? – codenoodles Commented Nov 20, 2024 at 4:32
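For reference, awswrangler can write a DataFrame to S3 as Parquet and read it straight back into pandas later, so the round trip works the same way as with CSV. A minimal sketch, assuming the same placeholder path used in the question:

import awswrangler as wr

# Write the dataframe as Parquet (columnar and compressed, typically much smaller than CSV).
wr.s3.to_parquet(df=df, path="s3://path/to/folder/values.parquet", index=False)

# Later, load it back into a pandas DataFrame for further processing.
df = wr.s3.read_parquet(path="s3://path/to/folder/values.parquet")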
1 Answer
From the documentation for read_sql_query, it seems like you can split the query result into chunks using the chunksize parameter. From there, you can iterate through the chunks and write each chunk in append mode to the CSV file using wr.s3.to_csv:
import boto3
import awswrangler as wr

region = ""
access_key = ""
secret_key = ""
database = ""
CHUNKSIZE = 1000

sql = "SELECT * FROM tbl"
boto3.setup_default_session(region_name=region, aws_access_key_id=access_key, aws_secret_access_key=secret_key)

# Iterate over the query result CHUNKSIZE rows at a time instead of loading it all into memory.
chunks = wr.athena.read_sql_query(sql=sql, database=database, chunksize=CHUNKSIZE)

values = set()  # accumulate unique values across all chunks
for df in chunks:
    values.update(df['value'].unique())
    # dataset=True with mode="append" keeps adding files under the given path on each iteration,
    # and wr.s3.to_csv uploads directly to S3, so no StringIO buffer or boto3 resource is needed.
    wr.s3.to_csv(
        df=df,
        path="s3://path/to/folder/values.csv",
        dataset=True,
        mode="append",
        index=False
    )
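As a side note (based on the awswrangler documentation rather than anything stated above), chunksize can also be set to True, in which case awswrangler chooses the chunk boundaries itself in the most memory-efficient way, without guaranteeing a fixed row count:

chunks = wr.athena.read_sql_query(sql=sql, database=database, chunksize=True)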