Hotfix prompt caching introduced in #1169, fixes #1257 (#1260)

Closed · wants to merge 1 commit

examples/main/main.cpp (2 changes: 1 addition & 1 deletion)

@@ -404,7 +404,7 @@ int main(int argc, char ** argv) {
             // optionally save the session on first sample (for faster prompt loading next time)
             if (!path_session.empty() && need_to_save_session) {
                 need_to_save_session = false;
-                llama_save_session_file(ctx, path_session.c_str(), session_tokens.data(), session_tokens.size());
+                llama_save_session_file(ctx, path_session.c_str(), session_tokens.data(), session_tokens.size() - 1); // FIXME: -1 is a hack

Review comment (Collaborator):

This could actually be the right approach in general. It just has the effect of forcing eval one token earlier, which has effectively the same performance. That said, I'm wondering if we should ideally do this on the read side instead, e.g. by decrementing n_matching_session_tokens by one in the startup code above. The reason I say that is that I eventually envision sessions being used to restore the full transcript rather than just caching the prompt, and in that case the approach here would clip the last token. In addition, I believe there is a (however unlikely) edge case here when session_tokens is empty.

I'd actually been wondering whether we should do this anyway because of the flow of state from eval to sampling, specifically the logits vector; in other words, whether we should force an eval to ensure up-to-date logits. This corresponds to the fact that, in main at least, sampling is never done without first eval-ing. I haven't explored this deeply, but perhaps it is related to your issue.
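
For illustration, a minimal sketch of that read-side alternative, reusing the n_matching_session_tokens and session_tokens names from examples/main/main.cpp; the exact placement in the startup code and the surrounding token-matching loop are assumed:

    // After counting how many cached session tokens match the prompt,
    // give back one token so the last matching prompt token is
    // re-evaluated. This refreshes the logits before the first sample at
    // the cost of a single extra eval, and the guard also sidesteps the
    // empty-session edge case mentioned above.
    if (n_matching_session_tokens > 0) {
        n_matching_session_tokens--;
    }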

             }

             llama_token id = 0;

llama.cpp (5 changes: 3 additions & 2 deletions)

@@ -2701,7 +2701,7 @@ size_t llama_load_session_file(struct llama_context * ctx, const char * path_ses
     const uint32_t magic = file.read_u32();
     const uint32_t version = file.read_u32();

-    if (!(magic == 'ggsn' && version == 0)) {
+    if (!(magic == 'ggsn' && version == 1)) {
         fprintf(stderr, "%s : unknown (magic, version) for session file: %08x, %08x\n", __func__, magic, version);
         return 0;
     }

@@ -2724,6 +2724,7 @@ size_t llama_load_session_file(struct llama_context * ctx, const char * path_ses
     const size_t n_orig_state_size = llama_get_state_size(ctx);
     if (n_state_size != n_orig_state_size) {
         fprintf(stderr, "%s : failed to validate state size\n", __func__);
+        return 0;

Review comment (Collaborator):

Thanks! Just noticed this, oops...

     }
     std::unique_ptr<uint8_t[]> state_data(new uint8_t[n_state_size]);
     file.read_raw(state_data.get(), n_state_size);
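
For context, a sketch of why the added return 0; matters to callers, modeled on how examples/main/main.cpp invokes the loader; the exact llama_load_session_file signature of this era and the in-scope ctx, params, and path_session names are assumptions here:

    // Hypothetical caller. Without the early return added above, a state-size
    // mismatch would only print a warning, the loader would keep reading an
    // incompatible state, and a nonzero return would still signal success here.
    std::vector<llama_token> session_tokens(params.n_ctx);
    size_t n_token_count_out = 0;
    if (!llama_load_session_file(ctx, path_session.c_str(),
                                 session_tokens.data(), session_tokens.size(),
                                 &n_token_count_out)) {
        fprintf(stderr, "failed to load session file '%s'\n", path_session.c_str());
        return 1;
    }
    session_tokens.resize(n_token_count_out); // keep only the stored tokens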

@@ -2739,7 +2740,7 @@ size_t llama_save_session_file(struct llama_context * ctx, const char * path_ses
     llama_copy_state_data(ctx, state_data.get());

     file.write_u32('ggsn'); // magic
-    file.write_u32(0); // version
+    file.write_u32(1); // version

Review comment (Collaborator):

nit: it's harmless, but I don't believe it's necessary to change the version here? The binary format isn't affected by the decision of how many tokens to store.

     file.write_raw(&ctx->model.hparams, sizeof(llama_hparams));

     file.write_u32((uint32_t) n_token_count); // REVIEW
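
To make the versioning point concrete, below is a standalone sketch of parsing the header exactly as the write calls above lay it out: u32 magic ('ggsn'), u32 version, raw llama_hparams bytes, u32 token count. llama.cpp's internal llama_file wrapper is replaced with plain stdio, and everything after the token count is assumed:

    #include <cstdio>
    #include <cstdint>

    // Only the *value* stored as the token count changes in this PR
    // (size() - 1 instead of size()), so the on-disk layout is identical
    // for "version 0" and "version 1" files, which is the reviewer's point.
    static bool read_session_header(std::FILE * f, long hparams_size,
                                    uint32_t & n_token_count) {
        uint32_t magic = 0, version = 0;
        if (std::fread(&magic,   sizeof(magic),   1, f) != 1) return false;
        if (std::fread(&version, sizeof(version), 1, f) != 1) return false;
        if (!(magic == 'ggsn' && version == 1)) return false;    // same check as the diff
        if (std::fseek(f, hparams_size, SEEK_CUR) != 0) return false; // skip hparams bytes
        return std::fread(&n_token_count, sizeof(n_token_count), 1, f) == 1;
    }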