Add Float8QuantizedTensor (AQT subclass) and replace to_affine_quantized_floatx with to_affine_quantized_float8 in quantization APIs #4863
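The renamed entry point targets float8 affine quantization of a tensor. Below is a minimal sketch of what such a conversion involves, assuming simple per-tensor scaling into `torch.float8_e4m3fn`; the `to_float8_sketch` name and its signature are illustrative assumptions, not this PR's actual `Float8QuantizedTensor` or `to_affine_quantized_float8` API.

```python
import torch

# Largest magnitude representable in float8 e4m3 (448.0 in PyTorch).
FLOAT8_E4M3_MAX = torch.finfo(torch.float8_e4m3fn).max

def to_float8_sketch(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # Hypothetical per-tensor affine (scale-only) quantization to float8.
    # Compute a scale so the tensor's max magnitude maps to the float8 max.
    amax = x.abs().max().clamp(min=1e-12)
    scale = amax / FLOAT8_E4M3_MAX
    # Scale, clamp, and cast; keep the scale around for dequantization.
    x_scaled = (x / scale).clamp(-FLOAT8_E4M3_MAX, FLOAT8_E4M3_MAX)
    return x_scaled.to(torch.float8_e4m3fn), scale

if __name__ == "__main__":
    w = torch.randn(128, 256)
    w_fp8, scale = to_float8_sketch(w)
    # Dequantize to check round-trip error.
    w_deq = w_fp8.to(torch.float32) * scale
    print(w_fp8.dtype, scale.item(), (w - w_deq).abs().max().item())
```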
Annotations
1 error
test (SM-89, linux.g6.4xlarge.experimental.nvidia.gpu, --pre torch --index-url https://download.p... / linux-job
Canceling since a higher priority waiting request for 'float8_test-Run Float8 Tests-refs/pull/1599/merge' exists