python - How to speed up the operation of repeating take first n rows for each group after group_by? - Stack Overflow
The df contains 100 million rows, and there are 25-30 group_by columns. Is there a way to speed this operation up, or is this the best I can get?

import polars as pl
import numpy as np

rows = 100_000_000
n_cols = 30
# shape is (rows, n_cols) so each schema entry names one column
df = pl.DataFrame(
    np.random.randint(0, 100, size=(rows, n_cols)),
    schema=["col_" + str(i) for i in range(n_cols)],
)
First_n_rows_list = [1, 2, 3]
grouped = df.sort("col_0").group_by(["col_" + str(i) for i in range(1, n_cols)])
result = pl.concat(
    [
        grouped.head(First_n_rows).with_columns(
            pl.lit(First_n_rows).cast(pl.Int8).alias("First_n_rows")
        )
        for First_n_rows in First_n_rows_list
    ]
)
asked by user28199045
2 Answers
As you said, you can take head(max(x_list)) once and then repeat each row the appropriate number of times:

x = max(x_list)  # x_list is First_n_rows_list from the question
group_cols = ["col_" + str(i) for i in range(1, n_cols)]
(
    df.group_by(group_cols)
    .head(x)
    .with_columns(
        # row i (0-based) of each group survives head(n) for every n > i,
        # so with x_list == [1, ..., x] it must be repeated x - i times
        (x - pl.int_range(pl.len()).over(group_cols)).alias("x")
    )
    .with_columns(pl.exclude("x").repeat_by("x"))
    .explode(pl.exclude("x"))
)
A small reproducible version of the original approach:

import polars as pl
import numpy as np

n = 50
df = pl.DataFrame(np.random.randint(0, 100, size=(n, 4)), schema=["A", "B", "C", "D"])
x_list = [1, 2, 3]
grouped = df.group_by(["A", "B", "C"])
result = pl.concat([grouped.head(x).with_columns(pl.lit(x).cast(pl.Int8).alias("x")) for x in x_list])
Comment on the question: df.head(max(x_list)) to reduce the size of the dataframe. – roman