From 7acd5610d5a893fe39c1998e060f299501d2a1a6 Mon Sep 17 00:00:00 2001
From: Jerry Zhang
Date: Wed, 4 Oct 2023 17:34:11 -0700
Subject: [PATCH] Update 2023-10-04-pytorch-2-1.md

---
 _posts/2023-10-04-pytorch-2-1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2023-10-04-pytorch-2-1.md b/_posts/2023-10-04-pytorch-2-1.md
index 7ea79b075526..99878ea3c84c 100644
--- a/_posts/2023-10-04-pytorch-2-1.md
+++ b/_posts/2023-10-04-pytorch-2-1.md
@@ -90,7 +90,7 @@ For more information, please see the tutorial [here](https://pytorch.org/tutoria
 
 **\[Prototype] _torch.export_-based Quantization**
 
-_torch.ao.quantization_ now supports quantization on PyTorch2-based _torch.export_ flows.  This includes support for built-in _XNNPACK_ and _X64Inductor_ _Quantizer_, as well as the ability to specify one’s own _Quantizer_.
+_torch.ao.quantization_ now supports quantization on PyTorch 2 _torch.export_-based flows.  This includes support for built-in _XNNPACK_ and _X64Inductor_ _Quantizer_, as well as the ability to specify one’s own _Quantizer_.
 
 For an explanation on post-training static quantization with torch.export, see [this tutorial](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html), for quantization-aware training for static quantization with torch.export, see [this tutorial](https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html).
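For reference, a minimal sketch of the export-based post-training static quantization flow described in the changed paragraph might look like the following. This assumes the PyTorch 2.1 prototype APIs shown in the linked PTQ tutorial (capture_pre_autograd_graph, XNNPACKQuantizer, prepare_pt2e, convert_pt2e); the toy model `M`, its layer sizes, and the calibration input are illustrative only, and the exact graph-capture entry point may differ between releases.

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Illustrative toy model; the shapes and layers are assumptions for this sketch.
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return self.linear(x)

model = M().eval()
example_inputs = (torch.randn(1, 16),)

# Capture the model into an ATen graph for the export-based quantization flow.
exported_model = capture_pre_autograd_graph(model, example_inputs)

# Configure the built-in XNNPACK quantizer with a symmetric static config.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

# Insert observers, run calibration data through the model, then convert.
prepared_model = prepare_pt2e(exported_model, quantizer)
prepared_model(*example_inputs)  # calibration with representative inputs
quantized_model = convert_pt2e(prepared_model)
```

The quantization-aware training variant in the second linked tutorial follows the same capture and quantizer setup, but uses the QAT preparation step before training instead of post-training calibration.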