BF16 (Brain Floating Point)

A 16-bit floating-point format developed by Google Brain that keeps the same 8-bit exponent as FP32 but shrinks the mantissa from 23 bits to 7, trading precision for range. BF16 is the default training format for most modern LLMs because it avoids the overflow issues of FP16 (whose 5-bit exponent caps values near 65,504) while using half the memory of FP32. NVIDIA GPUs from Ampere onward (RTX 3090+) support BF16 natively in their tensor cores, and Apple Silicon supports it from the M2 generation onward via the ARM BF16 extension.
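Because BF16 shares FP32's exponent layout, a BF16 value is just the top 16 bits of the corresponding FP32 bit pattern. The sketch below illustrates this with plain bit manipulation (hardware and ML frameworks typically use round-to-nearest-even rather than the simple truncation shown here; the function names are illustrative, not from any library):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to BF16 by keeping its top 16 bits.
    (Real converters round to nearest-even; truncation keeps the sketch short.)"""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16

def bf16_bits_to_fp32(b: int) -> float:
    """Re-expand BF16 bits to FP32 by zero-filling the low 16 mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# Only 7 mantissa bits survive, so pi loses precision:
pi_bf16 = bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159265))  # -> 3.140625

# But the full 8-bit exponent survives, so a value that would
# overflow FP16 (max ~65504) round-trips through BF16 just fine:
big_bf16 = bf16_bits_to_fp32(fp32_to_bf16_bits(3.0e38))     # finite, ~3.0e38
```

The round trip through `bf16_bits_to_fp32` makes the trade-off concrete: roughly 2-3 decimal digits of precision, but the same dynamic range as FP32.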