
[falcon] Fix Falcon for rw-1b model #2887

Closed · wants to merge 9 commits

Conversation

@akawrykow (Contributor) commented Aug 29, 2023

See #2868

  • Reconciling some differences in config.json ([config.json diff screenshot]) - a sketch of reading the differing keys follows this list
  • Skipping QKV reshaping when parallel attention is off
  • Fixing vocab size issues (the vocab size in config.json does not match the contents of tokenizer.json - going with the stated size and padding out the missing tokens)
  • Updating the tensor map. This model seems to have an extra post-attention norm (ffn-norm) layer.
  • Add 1b model type
  • Figure out error loading model: create_tensor: tensor 'blk.0.attn_qkv.weight' has wrong shape; expected 2048, 2176, got 2048, 6144
  • TODO: Incorporate the FFN norm into the graph (added to the graph, but not yet used)
  • TODO: Properly incorporate the biases (attn_qkv, ffn_down, ffn_up) - added to the graph but not yet used
  • TODO: Figure out the ALiBi handling
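As referenced in the first item above, here is a minimal sketch of reading the hyperparameters with fallbacks between the two config styles; the key names and defaults are illustrative assumptions based on the differences being reconciled, not the exact code in this PR:

```python
# Hypothetical sketch of reading Falcon hyperparameters with fallbacks, since the
# rw-1b config.json uses different key names / defaults than the 7b/40b configs.
# Key names and defaults below are illustrative assumptions, not this PR's code.
def read_falcon_hparams(config: dict) -> dict:
    def pick(*keys, default=None):
        for k in keys:
            if k in config:
                return config[k]
        return default

    return {
        "n_embd":        pick("hidden_size", "n_embed"),
        "n_layer":       pick("n_layer", "num_hidden_layers"),
        "n_head":        pick("n_head", "num_attention_heads"),
        "n_head_kv":     pick("n_head_kv", default=None),   # absent on rw-1b (no multi-query attention)
        "parallel_attn": config.get("parallel_attn", True),
        "alibi":         config.get("alibi", False),
        "vocab_size":    config["vocab_size"],
    }
```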

@akawrykow (Contributor Author)

I'm taking a wild guess here and thinking that parallel_attn = False implies we can skip this QKV reshaping step.

After updating gguf to account for an extra post-attention norm layer in the tensor map for Falcon, I can now convert the model successfully. Let me try running it next.
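For reference, a rough sketch of what the guarded conversion could look like. The multi-query permutation below approximates the falcon-7b/40b handling already in the convert script; the exact layout and the function name are assumptions, not this PR's diff:

```python
import numpy as np

# Illustrative only: guard the existing multi-query QKV permutation on parallel_attn.
# The permutation mimics the falcon-7b/40b-style conversion (interleaved q/k/v per
# kv group -> contiguous Q, K, V); the layout details are assumptions.
def convert_qkv(qkv: np.ndarray, n_head: int, n_head_kv: int, parallel_attn: bool) -> np.ndarray:
    if not parallel_attn:
        # falcon-rw-1b: plain multi-head attention, weight is already [3*n_embd, n_embd]
        return qkv
    n_embd = qkv.shape[-1]
    head_dim = n_embd // n_head
    q_per_kv = n_head // n_head_kv
    # [n_head_kv, q_per_kv + 2, head_dim, n_embd] -> split q/k/v and re-concatenate
    grouped = qkv.reshape(n_head_kv, q_per_kv + 2, head_dim, n_embd)
    q = grouped[:, :-2].reshape(-1, n_embd)
    k = grouped[:, [-2]].reshape(-1, n_embd)
    v = grouped[:, [-1]].reshape(-1, n_embd)
    return np.concatenate([q, k, v], axis=0)
```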

@akawrykow (Contributor Author)

Here is the output I see after quantizing + running the model:

PS C:\llama.cpp> ./main -m .\models\falcon-rw-1b\ggml-model-q4_0.gguf
llama_model_loader: loaded meta data with 19 key-value pairs and 292 tensors from .\models\falcon-rw-1b\ggml-model-q4_0.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  2048, 50304,     1,     1 ]
llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    2:             blk.0.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    3:            blk.0.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor    4:              blk.0.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor    5:         blk.0.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor    6:           blk.0.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    7:            blk.0.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    8:              blk.0.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    9:              blk.0.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   10:                blk.0.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   11:            blk.0.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   12:              blk.0.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   13:           blk.1.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   14:             blk.1.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   15:            blk.1.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   16:              blk.1.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   17:         blk.1.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   18:           blk.1.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   19:            blk.1.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   20:              blk.1.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   21:              blk.1.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   22:                blk.1.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   23:            blk.1.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   24:              blk.1.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   25:           blk.2.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   26:             blk.2.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   27:            blk.2.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   28:              blk.2.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   29:         blk.2.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   30:           blk.2.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   31:            blk.2.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   32:              blk.2.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   33:              blk.2.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   34:                blk.2.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   35:            blk.2.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   36:              blk.2.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   37:           blk.3.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   38:             blk.3.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   39:            blk.3.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   40:              blk.3.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   41:         blk.3.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   42:           blk.3.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   43:            blk.3.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   44:              blk.3.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   45:              blk.3.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   46:                blk.3.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   47:            blk.3.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   48:              blk.3.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   49:           blk.4.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   50:             blk.4.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   51:            blk.4.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   52:              blk.4.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   53:         blk.4.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   54:           blk.4.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   55:            blk.4.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   56:              blk.4.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   57:              blk.4.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   58:                blk.4.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   59:            blk.4.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   60:              blk.4.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   61:           blk.5.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   62:             blk.5.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   63:            blk.5.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   64:              blk.5.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   65:         blk.5.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   66:           blk.5.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   67:            blk.5.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   68:              blk.5.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   69:              blk.5.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   70:                blk.5.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   71:            blk.5.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   72:              blk.5.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   73:           blk.6.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   74:             blk.6.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   75:            blk.6.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   76:              blk.6.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   77:         blk.6.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   78:           blk.6.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   79:            blk.6.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   80:              blk.6.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   81:              blk.6.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   82:                blk.6.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   83:            blk.6.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   84:              blk.6.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   85:           blk.7.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   86:             blk.7.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   87:            blk.7.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor   88:              blk.7.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor   89:         blk.7.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   90:           blk.7.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   91:            blk.7.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   92:              blk.7.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   93:              blk.7.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor   94:                blk.7.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor   95:            blk.7.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor   96:              blk.7.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   97:           blk.8.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   98:             blk.8.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   99:            blk.8.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  100:              blk.8.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  101:         blk.8.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  102:           blk.8.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  103:            blk.8.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  104:              blk.8.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  105:              blk.8.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  106:                blk.8.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  107:            blk.8.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  108:              blk.8.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  109:           blk.9.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  110:             blk.9.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  111:            blk.9.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  112:              blk.9.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  113:         blk.9.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  114:           blk.9.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  115:            blk.9.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  116:              blk.9.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  117:              blk.9.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  118:                blk.9.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  119:            blk.9.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  120:              blk.9.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  121:          blk.10.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  122:            blk.10.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  123:           blk.10.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  124:             blk.10.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  125:        blk.10.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  126:          blk.10.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  127:           blk.10.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  128:             blk.10.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  129:             blk.10.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  130:               blk.10.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  131:           blk.10.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  132:             blk.10.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  133:          blk.11.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  134:            blk.11.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  135:           blk.11.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  136:             blk.11.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  137:        blk.11.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  138:          blk.11.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  139:           blk.11.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  140:             blk.11.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  141:             blk.11.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  142:               blk.11.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  143:           blk.11.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  144:             blk.11.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  145:          blk.12.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  146:            blk.12.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  147:           blk.12.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  148:             blk.12.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  149:        blk.12.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  150:          blk.12.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  151:           blk.12.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  152:             blk.12.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  153:             blk.12.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  154:               blk.12.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  155:           blk.12.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  156:             blk.12.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  157:          blk.13.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  158:            blk.13.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  159:           blk.13.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  160:             blk.13.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  161:        blk.13.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  162:          blk.13.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  163:           blk.13.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  164:             blk.13.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  165:             blk.13.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  166:               blk.13.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  167:           blk.13.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  168:             blk.13.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  169:          blk.14.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  170:            blk.14.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  171:           blk.14.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  172:             blk.14.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  173:        blk.14.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  174:          blk.14.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  175:           blk.14.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  176:             blk.14.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  177:             blk.14.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  178:               blk.14.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  179:           blk.14.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  180:             blk.14.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  181:          blk.15.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  182:            blk.15.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  183:           blk.15.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  184:             blk.15.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  185:        blk.15.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  186:          blk.15.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  187:           blk.15.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  188:             blk.15.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  189:             blk.15.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  190:               blk.15.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  191:           blk.15.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  192:             blk.15.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  193:          blk.16.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  194:            blk.16.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  195:           blk.16.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  196:             blk.16.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  197:        blk.16.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  198:          blk.16.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  199:           blk.16.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  200:             blk.16.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  201:             blk.16.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  202:               blk.16.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  203:           blk.16.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  204:             blk.16.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  205:          blk.17.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  206:            blk.17.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  207:           blk.17.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  208:             blk.17.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  209:        blk.17.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  210:          blk.17.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  211:           blk.17.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  212:             blk.17.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  213:             blk.17.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  214:               blk.17.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  215:           blk.17.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  216:             blk.17.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  217:          blk.18.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  218:            blk.18.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  219:           blk.18.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  220:             blk.18.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  221:        blk.18.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  222:          blk.18.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  223:           blk.18.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  224:             blk.18.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  225:             blk.18.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  226:               blk.18.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  227:           blk.18.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  228:             blk.18.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  229:          blk.19.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  230:            blk.19.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  231:           blk.19.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  232:             blk.19.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  233:        blk.19.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  234:          blk.19.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  235:           blk.19.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  236:             blk.19.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  237:             blk.19.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  238:               blk.19.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  239:           blk.19.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  240:             blk.19.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  241:          blk.20.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  242:            blk.20.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  243:           blk.20.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  244:             blk.20.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  245:        blk.20.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  246:          blk.20.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  247:           blk.20.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  248:             blk.20.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  249:             blk.20.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  250:               blk.20.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  251:           blk.20.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  252:             blk.20.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  253:          blk.21.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  254:            blk.21.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  255:           blk.21.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  256:             blk.21.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  257:        blk.21.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  258:          blk.21.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  259:           blk.21.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  260:             blk.21.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  261:             blk.21.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  262:               blk.21.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  263:           blk.21.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  264:             blk.21.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  265:          blk.22.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  266:            blk.22.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  267:           blk.22.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  268:             blk.22.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  269:        blk.22.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  270:          blk.22.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  271:           blk.22.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  272:             blk.22.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  273:             blk.22.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  274:               blk.22.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  275:           blk.22.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  276:             blk.22.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  277:          blk.23.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  278:            blk.23.attn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  279:           blk.23.attn_qkv.weight q4_0     [  2048,  6144,     1,     1 ]
llama_model_loader: - tensor  280:             blk.23.attn_qkv.bias f32      [  6144,     1,     1,     1 ]
llama_model_loader: - tensor  281:        blk.23.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  282:          blk.23.attn_output.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  283:           blk.23.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  284:             blk.23.ffn_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  285:             blk.23.ffn_up.weight q4_0     [  2048,  8192,     1,     1 ]
llama_model_loader: - tensor  286:               blk.23.ffn_up.bias f32      [  8192,     1,     1,     1 ]
llama_model_loader: - tensor  287:           blk.23.ffn_down.weight q4_0     [  8192,  2048,     1,     1 ]
llama_model_loader: - tensor  288:             blk.23.ffn_down.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  289:               output_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  290:                 output_norm.bias f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  291:                    output.weight q8_0     [  2048, 50304,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str
llama_model_loader: - kv   1:                               general.name str
llama_model_loader: - kv   2:                      falcon.context_length u32
llama_model_loader: - kv   3:                  falcon.tensor_data_layout str
llama_model_loader: - kv   4:                    falcon.embedding_length u32
llama_model_loader: - kv   5:                 falcon.feed_forward_length u32
llama_model_loader: - kv   6:                         falcon.block_count u32
llama_model_loader: - kv   7:                falcon.attention.head_count u32
llama_model_loader: - kv   8:             falcon.attention.head_count_kv u32
llama_model_loader: - kv   9:        falcon.attention.layer_norm_epsilon f32
llama_model_loader: - kv  10:                          general.file_type u32
llama_model_loader: - kv  11:                       tokenizer.ggml.model str
llama_model_loader: - kv  12:                      tokenizer.ggml.merges arr
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv  18:               general.quantization_version u32
llama_model_loader: - type  f32:  194 tensors
llama_model_loader: - type q4_0:   97 tensors
llama_model_loader: - type q8_0:    1 tensors
llm_load_print_meta: format         = GGUF V2 (latest)
llm_load_print_meta: arch           = falcon
llm_load_print_meta: vocab type     = BPE
llm_load_print_meta: n_vocab        = 50257
llm_load_print_meta: n_merges       = 50000
llm_load_print_meta: n_ctx_train    = 2048
llm_load_print_meta: n_ctx          = 512
llm_load_print_meta: n_embd         = 2048
llm_load_print_meta: n_head         = 32
llm_load_print_meta: n_head_kv      = 1
llm_load_print_meta: n_layer        = 24
llm_load_print_meta: n_rot          = 64
llm_load_print_meta: n_gqa          = 32
llm_load_print_meta: f_norm_eps     = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff           = 8192
llm_load_print_meta: freq_base      = 10000.0
llm_load_print_meta: freq_scale     = 1
llm_load_print_meta: model type     = ?B
llm_load_print_meta: model ftype    = mostly Q4_0
llm_load_print_meta: model size     = 1.41 B
llm_load_print_meta: general.name   = Falcon
llm_load_print_meta: BOS token = 1 '"'
llm_load_print_meta: EOS token = 2 '#'
llm_load_print_meta: LF token  = 198 '
'
llm_load_tensors: ggml ctx size =    0.09 MB
error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected  2048, 50257, got  2048, 50304,     1,     1
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '.\models\falcon-rw-1b\ggml-model-q4_0.gguf'
main: error: unable to load model

@slaren (Member) commented Aug 29, 2023

I have no idea why, but it looks like the tokenizer is missing some tokens. This model also seems to use alibi, so it will probably require some changes to the computation graph as well.

@akawrykow (Contributor Author)

This is really bizarre: config.json says vocab_size: 50304, but cracking open tokenizer.json shows only 50257 tokens.

In the conversion script, I wonder if going with the stated vocab size (and padding out the missing tokens) would at least fix this issue.
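A minimal sketch of that idea, padding the token list read from tokenizer.json up to the vocab_size stated in config.json; the dummy-token naming is an assumption for illustration, not necessarily what the conversion script ends up doing:

```python
import json

# Hypothetical padding of the tokenizer vocabulary up to config.json's vocab_size.
config = json.load(open("config.json"))
tokenizer = json.load(open("tokenizer.json"))

vocab = tokenizer["model"]["vocab"]      # token -> id mapping (50257 entries here)
stated_size = config["vocab_size"]       # 50304 for falcon-rw-1b

tokens = [None] * stated_size
for tok, idx in vocab.items():
    tokens[idx] = tok.encode("utf-8")

# Fill ids that have no token in tokenizer.json with dummy entries so the written
# vocab agrees with the 2048 x 50304 embedding matrix.
for idx in range(stated_size):
    if tokens[idx] is None:
        tokens[idx] = f"<pad_{idx}>".encode("utf-8")
```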

I can look into the alibi stuff next

@akawrykow (Contributor Author)

That fixed the vocab issue.

Now:

error loading model: create_tensor: tensor 'blk.0.attn_qkv.weight' has wrong shape; expected  2048,  2176, got  2048,  6144,     1,     1
llama_load_model_from_file: failed to load model

I think this is probably because llama.cpp expects the QKV tensor to have been reshaped during conversion, which we now skip for this model.

Let me play around with this.
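For context on the numbers: with the metadata written as head_count_kv = 1 (the multi-query default, visible as n_head_kv = 1 in the log above), llama.cpp expects the fused QKV weight to be n_embd + 2 * head_dim rows; falcon-rw-1b uses plain multi-head attention, so the checkpoint tensor is 3 * n_embd rows. A quick sanity check (writing head_count_kv equal to head_count is one plausible way to make the shapes agree, though not necessarily what this PR ends up doing):

```python
n_embd, n_head = 2048, 32
head_dim = n_embd // n_head                     # 64

expected_mq  = n_embd + 2 * head_dim            # 2176: what llama.cpp expects with head_count_kv = 1
actual_mha   = 3 * n_embd                       # 6144: what the rw-1b checkpoint actually stores
expected_mha = n_embd + 2 * head_dim * n_head   # 6144: expected width if head_count_kv == head_count

print(expected_mq, actual_mha, expected_mha)    # 2176 6144 6144
```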

@akawrykow (Contributor Author) commented Aug 30, 2023

Alright, latest update. I actually got the model running and outputting something (nonsense for now):

" CC .WhyTheTr*MrsSS,CFHigOld1ADVERTISEMENT<|endoftext|>Featured …ThisFlag aByJHereBlAny~~SNoSur ...HereNonSoRCategory:-<|endoftext|>Ms -Report (JrThe>> [Flag ?Details ThisHowNewHMore- > *LAnonymous________________ tAOn_______HereHer~*ItDrWPosted 1ORecommended`Flag lOldFThisHi,By noTr[CRs .Comments ArDidBy TradRossNon:-*AbIsNo<|endoftext|>Category :Dem<|endoftext|>TheHereAreBreRHowFlag IADVERTISEMENT `Flag Mr. TheThis~~By (AnonymousSon_______AFeatured *HereMs SexMrs …>> ?Posted Flag ,SoFlag Flag<|endoftext|>Report<|endoftext|>Top- tRPubSShTrLAnyDrH*________________1Mrs MrsSur [Old,Recommended TheBlF*JrIs[RNo# [end of text]

llama_print_timings:        load time =   286.86 ms
llama_print_timings:      sample time =   214.09 ms /   191 runs   (    1.12 ms per token,   892.16 tokens per second)
llama_print_timings: prompt eval time =     0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =  3384.85 ms /   191 runs   (   17.72 ms per token,    56.43 tokens per second)
llama_print_timings:       total time =  3722.91 ms

This model has a bunch of extra bias tensors, so we need to figure out how to incorporate them into the calculations.
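For orientation, here is a schematic (plain NumPy, heavily simplified, attention stubbed out) of where each of those bias tensors enters a falcon-rw-1b block in the reference implementation; every commented tensor corresponds to one add the ggml graph would need after the matching matmul or norm:

```python
import numpy as np

# Schematic single falcon-rw-1b block (parallel_attn = False) with dummy weights,
# just to show where each bias tensor from the dump above is applied.
# Shapes follow the log: n_embd = 2048, n_ff = 8192.
n_embd, n_ff = 2048, 8192
rng = np.random.default_rng(0)
lin = lambda i, o: (rng.standard_normal((i, o)).astype(np.float32) * 0.02, np.zeros(o, np.float32))

def layer_norm(x, w, b, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps) * w + b

def block(x):
    ones, zeros = np.ones(n_embd, np.float32), np.zeros(n_embd, np.float32)
    h = layer_norm(x, ones, zeros)                 # attn_norm.weight / attn_norm.bias
    W_qkv, b_qkv = lin(n_embd, 3 * n_embd)
    qkv = h @ W_qkv + b_qkv                        # attn_qkv.weight / attn_qkv.bias
    attn = qkv[:, :n_embd]                         # stand-in for the attention itself
    W_o, b_o = lin(n_embd, n_embd)
    x = x + (attn @ W_o + b_o)                     # attn_output.weight / .bias, then residual
    h = layer_norm(x, ones, zeros)                 # ffn_norm.weight / ffn_norm.bias
    W_up, b_up = lin(n_embd, n_ff)
    W_dn, b_dn = lin(n_ff, n_embd)
    h = np.tanh(h @ W_up + b_up)                   # ffn_up.weight / .bias (GELU in the real model)
    return x + (h @ W_dn + b_dn)                   # ffn_down.weight / .bias, then residual

print(block(rng.standard_normal((4, n_embd)).astype(np.float32)).shape)  # (4, 2048)
```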

@akawrykow changed the title from "[falcon] Fix convert-falcon-hf-to-gguf.py for rw models" to "[falcon] Fix Falcon for rw-1b model" on Aug 30, 2023
@ggerganov (Member)

@akawrykow (Contributor Author)

@ggerganov is there generally some kind of mapping of the operations in the Python implementation to what we have available in ggml? Do you see anything missing?

@ggerganov (Member)

The needed operators (like alibi) should already be available in ggml. I don't think there is anything missing, but we can add it if there is.

It's mostly a matter of correctly building the graphs depending on the config parameters.
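For reference, the standard ALiBi recipe from the ALiBi paper: each head gets a fixed slope, and the attention score for query position q and key position k has slope * (k - q) added before the softmax. A minimal sketch of the slope computation for a power-of-two head count (this is the paper's formula, shown for illustration; it is not code from this PR or from ggml's operator):

```python
import math

# Standard ALiBi slopes: a geometric sequence starting at 2^(-8/n_head).
# falcon-rw-1b has n_head = 32, a power of two, so the simple form applies.
def alibi_slopes(n_head: int) -> list[float]:
    assert n_head & (n_head - 1) == 0, "simple form assumes a power-of-two head count"
    start = 2.0 ** (-8.0 / n_head)
    return [start ** (i + 1) for i in range(n_head)]

print(alibi_slopes(32)[:4])  # [0.8408..., 0.7071..., 0.5946..., 0.5]
```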
