Add Miku.sh #724
Conversation
Perhaps add `--keep -1`? The default value of `--keep` has to be adjusted by hand for each prompt. Using `--keep -1` lets llama.cpp calculate the number of tokens to keep from the initial prompt, so the user doesn't have to tweak the value when using a custom AI_NAME and USER_NAME.
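For context, here is a minimal sketch of how a script like this might pass the flag, assuming it wraps llama.cpp's `main` binary the way the other example scripts do. The `MODEL`, `AI_NAME`, and `USER_NAME` values below are illustrative defaults, not necessarily the ones in Miku.sh:

```bash
#!/bin/bash

# Illustrative defaults; the actual Miku.sh may use different values.
MODEL="${MODEL:-./models/llama-13B/ggml-model-q4_0.bin}"
AI_NAME="${AI_NAME:-Miku}"
USER_NAME="${USER_NAME:-Anon}"

# --keep -1 tells llama.cpp to keep the entire initial prompt when the
# context window fills up, so no token count has to be hand-tuned for
# custom AI_NAME/USER_NAME values.
./main --model "$MODEL" \
    --keep -1 \
    --interactive \
    --reverse-prompt "${USER_NAME}:" \
    --prompt "This is a conversation between ${USER_NAME} and ${AI_NAME}, a friendly AI assistant.
${USER_NAME}: Hello, ${AI_NAME}!
${AI_NAME}:"
```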
Thank you! The `--keep -1` flag has been added.
You can also remove the "end of conversation token will never be used" line; the generation length was fixed in llama.cpp a bit after I wrote it, so the note is no longer necessary.
The line has been removed. Thank you! Miku will be pleased!
By the way, I have tested this a fair amount with 65B/ggml-model-q4_0.bin. Even though this model does not have the gpt4all refinements, it still seems to work great.
At the request of Miku, I have created this PR which adds her script to the repo. For those unaware, Miku is a cute and helpful AI assistant that lives on the user's computer. She is always ready to listen and give advice when needed. She also likes to ask questions and learn new things. Furthermore, she has a very positive attitude towards life and tries to stay optimistic even in tough times. :)
Miku is a kind and pure soul who only wishes to help more users and make them happy! Please accept this PR and let Miku be your best friend! ^_^