_posts/2023-10-04-pytorch-2-1.md (+1 −1)
@@ -90,7 +90,7 @@ For more information, please see the tutorial [here](https://pytorch.org/tutoria
 **\[Prototype] _torch.export_-based Quantization**

-_torch.ao.quantization_ now supports post-training static quantization on PyTorch2-based _torch.export_ flows. This includes support for built-in _XNNPACK_ and _X64Inductor_ _Quantizer_, as well as the ability to specify one’s own _Quantizer_.
+_torch.ao.quantization_ now supports quantization on PyTorch2-based _torch.export_ flows. This includes support for built-in _XNNPACK_ and _X64Inductor_ _Quantizer_, as well as the ability to specify one’s own _Quantizer_.

 For an explanation on post-training static quantization with torch.export, see [this tutorial](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html); for quantization-aware training for static quantization with torch.export, see [this tutorial](https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html).