llama-cpp-python-0.1.65 and below crashes (memory issue?) and v0.1.66-0.1.70 errors out with GPU #477

Closed · jaymon0703 opened this issue Jul 14, 2023 · 2 comments
Labels: model (Model specific issue)

Comments

jaymon0703 commented Jul 14, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

On v0.1.65 I expect GPU offloading to work.

Current Behavior

My Jupyter kernel crashes, presumably due to a memory issue. On v0.1.66-0.1.70 the model fails to load.

Environment and Context

PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

$ lscpu

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-15

$ uname -a

Linux username-tensorflow-gpu 5.10.0-23-cloud-amd64 #1 SMP Debian 5.10.179-1 (2023-05-12) x86_64 GNU/Linux

$ python3 --version
Python 3.10.10

$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu

$ g++ --version
g++ (Debian 10.2.1-6) 10.2.1 20210110


Failure Information (for bugs)

The kernel crashes (v0.1.65 and below), or the model fails to load (v0.1.66-0.1.70).

Crash:

[screenshot]

Fails to load:

[screenshot]

Steps to Reproduce

Use CUDA 12.1 and try running the code below:

%%time
# Note: callback_manager was undefined in the original snippet; the setup below
# assumes the standard langchain streaming-to-stdout example.
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

n_gpu_layers = 8  # Change this value based on your model and your GPU VRAM pool.
n_batch = 128     # Should be between 1 and n_ctx; consider the amount of VRAM in your GPU.

llm = LlamaCpp(
    model_path="models/ggml-model-q4_0.bin",
    n_threads=8,
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    n_ctx=2048,
    callback_manager=callback_manager,
    verbose=True,
)
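
To isolate whether the failure is in llama-cpp-python itself rather than the langchain wrapper, here is a minimal sketch (my addition, not part of the original report) that loads the same model directly, assuming the same path and parameters as above:

from llama_cpp import Llama

# Same parameters as the langchain snippet, via llama-cpp-python's own API.
llm = Llama(
    model_path="models/ggml-model-q4_0.bin",
    n_ctx=2048,
    n_threads=8,
    n_gpu_layers=8,
    n_batch=128,
    verbose=True,  # prints file-format and offload details at load time
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])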

Please help! I am using the model from https://huggingface.co/frankenstyle/ggml-q4-models/tree/main/models/llama/7B.

Thank you!

jllllll commented Jul 14, 2023

Likely an outdated model. Use this one: https://huggingface.co/TheBloke/LLaMa-7B-GGML/resolve/main/llama-7b.ggmlv3.q4_0.bin
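
For anyone else hitting this: a quick way to tell whether a .bin file is in an outdated GGML format is to read the magic/version header at the start of the file. A minimal sketch (magic constants taken from llama.cpp's loader; the file path is just an example):

import struct

# GGML file magics, read as little-endian uint32 (values from llama.cpp).
MAGICS = {
    0x67676D6C: "ggml (unversioned, pre-May-2023; rejected by newer llama.cpp)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able; version 3 == 'ggmlv3')",
}

def ggml_header(path):
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        name = MAGICS.get(magic, "unknown")
        if magic == 0x67676D6C:  # the unversioned format has no version field
            return name, None
        (version,) = struct.unpack("<I", f.read(4))
        return name, version

print(ggml_header("llama-7b.ggmlv3.q4_0.bin"))
# The recommended file above should report ('ggjt ...', 3).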


jaymon0703 commented Jul 14, 2023

@jllllll You are a lifesaver. Now to find an appropriate model, as that one gives lofty answers in my QA-over-docs project.

EDIT: After some more testing, it seems quite reasonable. Thank you!

gjmulder added the llama.cpp (Problem with llama.cpp shared lib) and model (Model specific issue) labels, then removed the llama.cpp label, on Jul 14, 2023