What is the compute type of torch.nn.functional.linear when the input is float16 or bfloat16?
In the following code

import torch
from torch.nn.functional import linear

a = torch.ones(2, 3).type(torch.float16)
b = torch.ones(2, 3).type(torch.float16)
linear(a, b)

what is the compute type of linear: fp32, fp16, or something else?
Thanks
I tried to look into the PyTorch repo and torch.nn.functional.linear, but it is too hard to follow.
1 Answer
The computation will be performed in fp32 (float32) by default (https://www.exxactcorp.com/blog/hpc/what-is-fp64-fp32-fp16), even though your inputs are fp16 (float16). This is PyTorch's default behavior, for numerical stability reasons.
Why fp32 by default:
- Reduced precision (fp16) can lead to numerical instability (overflow/underflow).
- Many operations in PyTorch use fp32 internally for accumulation, even with fp16 inputs (see the probe sketch below).
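One way to see the higher-precision accumulation at work is to compare the fp16 result against an explicit fp32 reference. This is a rough numerical probe, not part of the original answer; the shapes and values are chosen only for illustration, and it assumes a PyTorch build where fp16 matmul is supported on your device (newer versions on CPU, or any CUDA GPU):

import torch
from torch.nn.functional import linear

# Summing 4096 small terms would lose precision if the accumulator
# itself were fp16, because the running sum stops growing once the
# increments fall below fp16 resolution near the final value (~41).
a = torch.full((1, 4096), 0.01, dtype=torch.float16)
b = torch.ones((1, 4096), dtype=torch.float16)

out_fp16 = linear(a, b)                  # fp16 inputs, fp16 output
out_fp32 = linear(a.float(), b.float())  # explicit fp32 reference

# If the two values are close (~40.96), the backend accumulated in a
# higher precision internally despite the fp16 inputs.
print(out_fp16.item(), out_fp32.item())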
You can verify the output dtype:

import torch
from torch.nn.functional import linear

# fp16 inputs: a is the (2, 3) input, b is a (4, 3) weight matrix
a = torch.ones(2, 3, dtype=torch.float16)
b = torch.ones(4, 3, dtype=torch.float16)

# linear computes a @ b.T, giving a (2, 4) result
output = linear(a, b)
print(output.dtype)
Expected output:
- On CPU (newer PyTorch versions): torch.float16
- On pre-Ampere GPUs: torch.float32
- On Ampere+ GPUs (with autocast enabled): torch.float16
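For the GPU rows above, whether an fp16 matmul accumulates in fp32 or in a reduced-precision form can be inspected through a backend flag. A minimal sketch, assuming a recent PyTorch release where this flag is available:

import torch

# Recent PyTorch versions expose a switch for fp16 GEMMs on CUDA:
# when True (the default), cuBLAS may use a reduced-precision
# accumulator; when False, accumulation is kept in fp32.
print(torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction)

# Force full fp32 accumulation for fp16 matmuls on the GPU.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False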
When both inputs (a and b) are in torch.float16, PyTorch automatically upcasts computations to torch.float32 by default for numerical stability, especially on CPU and some GPUs (e.g., older architectures). If running on Ampere (or newer) GPUs with Tensor Cores enabled, the computation might stay in fp16 for efficiency.
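As a concrete illustration of the GPU case (a minimal sketch, assuming a CUDA-capable machine; the shapes match the verification snippet above):

import torch
from torch.nn.functional import linear

a = torch.ones(2, 3, dtype=torch.float16, device="cuda")
b = torch.ones(4, 3, dtype=torch.float16, device="cuda")

# Under autocast, linear runs in fp16: the inputs stay fp16 and the
# matmul is dispatched to Tensor Core kernels, which typically multiply
# in fp16 while accumulating in fp32 inside the kernel.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    out = linear(a, b)

print(out.dtype)  # torch.float16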