Provides the 8-bit floats (FP8) proposed in *FP8 Formats for Deep Learning* (Float8_E4M3FN, Float8_E5M2) and *8-bit Numerical Formats for Deep Neural Networks* (Float8_E4M3FNUZ, Float8_E5M2FNUZ). The package is mainly intended for handling data stored in these formats: all floating-point arithmetic is performed in Float32, and the result is converted back to FP8.
- Popularity: 3 stars
- Last updated: 1 year ago
- Started in: February 2024
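
To make the Float32 round-trip concrete, here is a minimal, self-contained Julia sketch of the idea for the E4M3FN encoding. It is not the package's actual API; the names `e4m3fn_to_float32`, `float32_to_e4m3fn`, and `fp8_add` are illustrative, and the rounding back to FP8 is a simple brute-force nearest-value search rather than a proper round-to-nearest-even implementation.

```julia
# Illustrative sketch only: decode FP8 (E4M3FN) to Float32, do the
# arithmetic in Float32, then round the result back to the nearest
# representable FP8 value.

# Decode an E4M3FN byte to Float32 (exponent bias 7, no Inf,
# a single NaN bit pattern per sign).
function e4m3fn_to_float32(bits::UInt8)
    s = (bits >> 7) & 0x1
    e = (bits >> 3) & 0xf
    m = bits & 0x7
    if e == 0x0                       # subnormal: 2^-6 * m/8
        val = Float32(m) / 8f0 * 2f0^(-6)
    elseif e == 0xf && m == 0x7       # the only NaN encoding in E4M3FN
        return NaN32
    else                              # normal: 2^(e-7) * (1 + m/8)
        val = (1f0 + Float32(m) / 8f0) * 2f0^(Int(e) - 7)
    end
    return s == 0x1 ? -val : val
end

# Round a Float32 back to the nearest representable E4M3FN value
# (brute force over all 256 encodings; ties and NaN inputs are
# handled naively, which is fine for a demonstration).
function float32_to_e4m3fn(x::Float32)
    best, bestdiff = 0x00, Inf32
    for bits in 0x00:0xff
        v = e4m3fn_to_float32(bits)
        isnan(v) && continue
        d = abs(v - x)
        if d < bestdiff
            best, bestdiff = bits, d
        end
    end
    return best
end

# "Arithmetic" on FP8 values: widen to Float32, operate, narrow back.
fp8_add(a::UInt8, b::UInt8) =
    float32_to_e4m3fn(e4m3fn_to_float32(a) + e4m3fn_to_float32(b))

# Example: 0x40 encodes 2.0; adding it to itself yields 0x48, the encoding of 4.0.
fp8_add(0x40, 0x40)
```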