Share buffers between CPU and GPU #1696
Conversation
Ah very interesting!
Currently, it's not the case without mmap.
@ggerganov I took my best shot at it, but I'm definitely stepping into unfamiliar territory. Even without this code, these buffers seemed to be page-aligned when running with mmap enabled.
👍 Anecdotally seems to work fine on my laptop (M2 Max). Initialization seems to be a lot faster, at least.
Hi, I tried this and haven't got any disk swap. M2 Pro 16GB, using llama-13b-supercot.ggmlv3.q4_0.bin.
Great work!
After this PR was merged, I now get this error: Any idea why?
@Johnhersh The error you shared was actually added in #1706. Was it running without issues before #1696? I suspect that before that it was actually failing silently when creating that buffer, e.g. hanging or producing garbage output.

The root cause is that macOS is refusing to create a buffer big enough to hold the model you're loading. It seems like macOS has an upper limit on an MTLBuffer size that is roughly half the amount of total unified memory, which a 30B model exceeds for machines with 32GB or less. Since the limit seems to be on individual buffer size and not total memory usage, last night I started playing with some ideas for how to split the buffer into smaller pieces.
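For anyone who wants to check the limit on their own machine, a small probe like the one below might help. `maxBufferLength` and `recommendedMaxWorkingSetSize` are documented MTLDevice properties; the "roughly half of unified memory" figure is just what we observe, not a documented rule.

```objc
// probe.m — build with: clang -framework Foundation -framework Metal probe.m -o probe
#import <Foundation/Foundation.h>
#import <Metal/Metal.h>

int main(void) {
    @autoreleasepool {
        id<MTLDevice> device = MTLCreateSystemDefaultDevice();
        // Largest single MTLBuffer the device will create.
        NSLog(@"maxBufferLength:              %lu bytes", (unsigned long)device.maxBufferLength);
        // How much memory Metal suggests keeping resident in total.
        NSLog(@"recommendedMaxWorkingSetSize: %llu bytes", device.recommendedMaxWorkingSetSize);
    }
    return 0;
}
```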
Yes, you're correct. Before these changes it was outputting ?? repeatedly, so this error message is quite correct then. Thank you!
So that I could confirm that this is not related to the newBufferWithBytesNoCopy path, I also tried creating the buffer the old (copying) way, and it failed the same way. Not too surprising. Just wanted to make sure that the MTLDevice.maxBufferLength limit applies to both methods of creating an MTLBuffer. So @ggerganov, it seems like we might need to figure out how to split up buffers that exceed this limit.
Is there any way to overcome the limit and allocate more than half of the system RAM?
Yes, we have to split the buffer into smaller parts. I will probably fix this over the weekend, but if anyone figures it out in the meantime, I'll be happy to see a PR.
I took a crack at it a few nights ago, but didn't finish. I had code in place to split up the buffer, but was encountering issues somewhere downstream from that. My suspicion at the time (based on a very limited understanding of what's going on in the shaders) was that the shaders sometimes reach beyond the length of the buffer segments being passed to them: whereas before they could reach anywhere within the large, single buffer, now they get garbage data when reaching past the end of the particular segment passed to them. Again, this was just a theory resting on a so-far pretty limited grasp of things and a couple hours of hacking and tinkering. @ggerganov I'm sure you could easily refute or confirm it. I'll probably try again after hours at some point, unless you beat me to it this weekend. If my theory is correct, I think we'd actually need to pass all buffer segments to the shaders and figure out which segment's pointer to use from within the shader, which I'm assuming would have performance impacts.
Yes, that might very well be the case. I think you can still work around that by creating multiple views with overlap. For example, create fixed-size views (say 8GB each) that overlap by half, so that every tensor falls entirely within at least one view, and then select the view to use based on the tensor's offset. Again, I could still be missing something.
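A rough sketch of what that could look like (the helper names make_views and view_for_offset, and the fixed sizes, are invented here for illustration; alignment handling is simplified):

```objc
// Hypothetical sketch of the overlapping-views idea, not actual llama.cpp code.
#import <Metal/Metal.h>

static const size_t VIEW_SIZE   = 8ull*1024*1024*1024; // example only; derive from maxBufferLength in practice
static const size_t VIEW_STRIDE = VIEW_SIZE / 2;       // consecutive views overlap by half

// Create overlapping no-copy views over one large mmap'd region.
// `base` must be page-aligned (true for mmap'd model data); in real code each
// view's length must also be rounded to a multiple of the page size.
static NSArray<id<MTLBuffer>> *make_views(id<MTLDevice> device, void *base, size_t total) {
    NSMutableArray<id<MTLBuffer>> *views = [NSMutableArray array];
    for (size_t off = 0; off < total; off += VIEW_STRIDE) {
        const size_t len = MIN(VIEW_SIZE, total - off);
        [views addObject:[device newBufferWithBytesNoCopy:(char *)base + off
                                                   length:len
                                                  options:MTLResourceStorageModeShared
                                               deallocator:nil]];
    }
    return views;
}

// Because views overlap by half, any tensor smaller than VIEW_STRIDE whose
// start lands in the first half of a view fits entirely inside that view.
static id<MTLBuffer> view_for_offset(NSArray<id<MTLBuffer>> *views, size_t offs, size_t *offs_in_view) {
    const size_t idx = offs / VIEW_STRIDE;
    *offs_in_view = offs - idx*VIEW_STRIDE;
    return views[idx];
}
```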
This is pretty close to what I tried the other night, except the view size was a function of maxBufferLength rather than fixed at 8GB, and I simply selected the view that contained the tensor's starting address in its first half. The view size on my Mac would've been larger than 8GB, but I can try with fixed 8GB segments in case I had a bug with alignment or something, unless you can see an obvious issue with what I described above.
The 8GB view is just an example - it should be dynamic, similar to what you describe.
This updates ggml_metal_add_buffer to use MTLDevice.newBufferWithBytesNoCopy to share buffers between CPU and GPU rather than re-allocating a copy for the GPU.
I've been following #1642 and noticed that this might help with some of the swapping-related issues people have been seeing with larger models on devices with < 96GB memory. With this change, I'm no longer seeing any swapping or odd/corrupted output.
Apologies if I missed any contribution steps, or if this change is missing something obvious. One thing I'm not sure about is whether this covers all possible cases. Note that newBufferWithBytesNoCopy requires a page-aligned source buffer; this seems to be the case with mmap, but I'm not sure it will remain true for all possible configurations and code paths.
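For illustration, the mechanism boils down to something like the following minimal sketch. It assumes an mmap'd source, and add_shared_buffer is a made-up name, not the actual ggml_metal_add_buffer signature from ggml-metal.m:

```objc
// Simplified sketch of the idea behind this change (not the literal diff):
// wrap the existing host allocation in a shared MTLBuffer instead of
// copying the weights into a separate GPU allocation.
#import <Metal/Metal.h>
#include <stdint.h>
#include <unistd.h> // sysconf

static id<MTLBuffer> add_shared_buffer(id<MTLDevice> device, void *data, size_t size) {
    const size_t page = (size_t) sysconf(_SC_PAGESIZE);

    // newBufferWithBytesNoCopy requires a page-aligned pointer; mmap'd model
    // data satisfies this, but a plain malloc'd buffer generally won't.
    if (((uintptr_t)data % page) != 0) {
        return nil; // caller should fall back to the copying path
    }

    // The length must be a multiple of the page size, so round up (this
    // assumes the underlying mapping extends at least to aligned_size).
    const size_t aligned_size = ((size + page - 1) / page) * page;

    return [device newBufferWithBytesNoCopy:data
                                     length:aligned_size
                                    options:MTLResourceStorageModeShared
                                 deallocator:nil];
}
```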