Daily Paper Cast

SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights

Update: 2025-10-03

Description

🤗 Upvotes: 25 | cs.LG



Authors:

Lorenz K. Müller, Philippe Bich, Jiawei Zhuang, Ahmet Çelik, Luca Benfenati, Lukas Cavigelli



Title:

SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights



Arxiv:

http://arxiv.org/abs/2509.22944v2



Abstract:

Post-training quantization has emerged as the most widely used strategy for deploying large language models at low precision. Still, current methods show perplexity degradation at bit-widths less than or equal to 4, partly because representing outliers causes precision issues in parameters that share the same scales as these outliers. This problem is especially pronounced for calibration-free, uniform quantization methods. We introduce SINQ to augment existing post-training quantizers with an additional second-axis scale factor and a fast Sinkhorn-Knopp-style algorithm that finds scales to normalize per-row and per-column variances, thereby minimizing a novel per-matrix proxy target for quantization: the matrix imbalance. Our method has no interactions between layers and can be trivially applied to new architectures to quantize any linear layer. We evaluate our method on the Qwen3 model family and DeepSeek-V2.5. SINQ improves WikiText2 and C4 perplexity significantly against uncalibrated uniform quantization baselines and can be further enhanced by combining it with calibration and non-uniform quantization levels. Code to reproduce the results of this work and to easily quantize models using SINQ is available at https://github.com/huawei-csl/SINQ.
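The abstract sketches the core mechanism: alternately rescale rows and columns, Sinkhorn-Knopp style, until per-row and per-column variances are balanced, then quantize the balanced matrix and keep two scale vectors instead of one. Below is a minimal NumPy sketch reconstructed from the abstract alone; the function name sinq_sketch, the plain min-max quantizer, and the fixed iteration count are illustrative assumptions, not the reference implementation (see the linked repository for that).

import numpy as np

def sinq_sketch(W, bits=4, iters=10, eps=1e-8):
    """Balance row/column variances Sinkhorn-style, then uniform-quantize.

    Illustrative sketch based on the abstract, not the paper's actual code.
    """
    W = np.asarray(W, dtype=np.float64).copy()
    row_scale = np.ones(W.shape[0])
    col_scale = np.ones(W.shape[1])
    for _ in range(iters):
        r = W.std(axis=1) + eps            # per-row standard deviation
        W /= r[:, None]
        row_scale *= r
        c = W.std(axis=0) + eps            # per-column standard deviation
        W /= c[None, :]
        col_scale *= c
    # Plain min-max uniform quantization of the balanced matrix.
    levels = 2 ** bits - 1
    lo, hi = W.min(), W.max()
    step = (hi - lo) / levels
    Q = np.round((W - lo) / step)          # integer codes in [0, levels]
    W_deq = Q * step + lo                  # dequantized balanced matrix
    # Reconstruction via the two scale vectors ("second-axis" scales).
    W_hat = row_scale[:, None] * W_deq * col_scale[None, :]
    return Q.astype(np.uint8), row_scale, col_scale, W_hat

# Usage: quantize a random weight matrix with one outlier row.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W[3, :] *= 50.0                            # inject an outlier row
Q, rs, cs, W_hat = sinq_sketch(W, bits=4)
print(np.abs(W - W_hat).mean())            # mean reconstruction error

The point of the dual scaling is visible in the reconstruction step: an outlier row is absorbed into its own row scale, so it no longer forces a coarse quantization step on every column that would otherwise share its scale.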

Hosts: Jingwen Liang, Gengyu Wang