TinyLlama.cpp 1.0
A lightweight C++ implementation of the TinyLlama language model
block_q2_K Struct Reference

2-bit K-quantized block structure.
#include <quantization.h>

Public Attributes

uint16_t  d
uint16_t  dmin
uint8_t   scales [GGML_QK_K/16]
uint8_t   qs [GGML_QK_K/4]
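The attribute list above can be read back as a struct declaration along these lines. The field order, the value of GGML_QK_K, and the field-meaning comments are assumptions based on the ggml convention this layout resembles (ggml uses QK_K = 256 and packs a 4-bit scale plus a 4-bit min per 16-weight sub-block); check quantization.h for the authoritative definition.

```cpp
#include <cstdint>

// Assumed super-block size; ggml-style code typically uses 256.
#define GGML_QK_K 256

struct block_q2_K {
    uint16_t d;                      // super-block scale (fp16 bits) for the quantized scales (assumed)
    uint16_t dmin;                   // super-block scale (fp16 bits) for the quantized mins (assumed)
    uint8_t  scales[GGML_QK_K / 16]; // per-sub-block 4-bit scale + 4-bit min, packed (assumed)
    uint8_t  qs[GGML_QK_K / 4];      // 2-bit quantized weights, four per byte
};

// 2 + 2 + 16 + 64 = 84 bytes per 256 weights under these assumptions,
// i.e. 2.625 bits per weight including the scaling metadata.
static_assert(sizeof(block_q2_K) ==
              2 * sizeof(uint16_t) + GGML_QK_K / 16 + GGML_QK_K / 4,
              "block_q2_K should have no padding");
```

The static_assert mirrors the size checks ggml-style code performs so that blocks can be read from a memory-mapped weight file without any per-field parsing.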
Detailed Description

2-bit K-quantized block structure.

Stores weights quantized to 2 bits with block-wise scaling. Provides maximum compression at the cost of precision.

Definition at line 85 of file quantization.h.
Member Data Documentation

uint16_t block_q2_K::d

uint16_t block_q2_K::dmin

uint8_t block_q2_K::qs[GGML_QK_K/4]

uint8_t block_q2_K::scales[GGML_QK_K/16]