beginor
1 day ago
For M1 Max 64 GB users: add the option `iogpu.wired_limit_mb=57344` to `/etc/sysctl.conf`, which lets up to 56 GB of memory be allocated to the GPU.
With llama.cpp I can run `llama-3.3-70b-instruct.q4_k_m.gguf`, though the context size has to be capped or it still runs out of memory. Generation speed is about 4 tokens/second.
```
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv update_slots: all slots are idle
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 4096, n_keep = 0, n_prompt_tokens = 26
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 26, n_tokens = 26, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 26, n_tokens = 26
slot release: id 0 | task 0 | stop processing: n_past = 354, truncated = 0
slot print_timing: id 0 | task 0 |
prompt eval time = 2035.41 ms / 26 tokens ( 78.29 ms per token, 12.77 tokens per second)
eval time = 79112.92 ms / 329 tokens ( 240.46 ms per token, 4.16 tokens per second)
total time = 81148.33 ms / 355 tokens
srv update_slots: all slots are idle
request: POST /v1/chat/completions 127.0.0.1 200
```
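The setup described above can be sketched as the following commands. This is a hedged sketch, assuming macOS on Apple Silicon with llama.cpp installed; the model path and port are placeholders, and the `-c 4096` value is taken from the `n_ctx_slot = 4096` in the log.

```shell
# Hypothetical sketch of the setup above; model path and port are placeholders.

# Raise the GPU wired-memory limit to 56 GB (57344 MB).
# Apply immediately (resets on reboot):
sudo sysctl iogpu.wired_limit_mb=57344
# ...or persist it by appending the line to /etc/sysctl.conf:
echo 'iogpu.wired_limit_mb=57344' | sudo tee -a /etc/sysctl.conf

# Start the llama.cpp server with a capped context so the 70B Q4_K_M
# weights plus the KV cache still fit inside the 56 GB limit.
llama-server -m llama-3.3-70b-instruct.q4_k_m.gguf -c 4096 --port 8080
```

With a larger context the KV cache grows proportionally, so on 64 GB the context cap is what keeps the 70B model runnable.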