Commit ec43e89

docs: Update multi-modal model section
1 parent a4c9ab8 commit ec43e89

1 file changed: +10 -7 lines


README.md

@@ -499,13 +499,16 @@ llm = Llama.from_pretrained(
 `llama-cpp-python` supports multi-modal models such as llava1.5, which allow the language model to read information from both text and images.

-You'll first need to download one of the available multi-modal models in GGUF format:
-
-- [llava-v1.5-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
-- [llava-v1.5-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)
-- [bakllava-1-7b](https://huggingface.co/mys/ggml_bakllava-1)
-- [llava-v1.6-34b](https://huggingface.co/cjpais/llava-v1.6-34B-gguf)
-- [moondream2](https://huggingface.co/vikhyatk/moondream2)
+Below are the supported multi-modal models and their respective chat handlers (Python API) and chat formats (Server API).
+
+| Model | `LlamaChatHandler` | `chat_format` |
+| --- | --- | --- |
+| [llava-v1.5-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b) | `Llava15ChatHandler` | `llava-1-5` |
+| [llava-v1.5-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b) | `Llava15ChatHandler` | `llava-1-5` |
+| [llava-v1.6-34b](https://huggingface.co/cjpais/llava-v1.6-34B-gguf) | `Llava16ChatHandler` | `llava-1-6` |
+| [moondream2](https://huggingface.co/vikhyatk/moondream2) | `MoondreamChatHandler` | `moondream2` |
+| [nanollava](https://huggingface.co/abetlen/nanollava) | `NanoLlavaChatHandler` | `nanollava` |
+| [llama-3-vision-alpha](https://huggingface.co/abetlen/llama-3-vision-alpha) | `Llama3VisionAlphaChatHandler` | `llama-3-vision-alpha` |
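Both the chat handlers (Python API) and the chat formats (Server API) in this table consume OpenAI-style chat messages in which images appear as `image_url` content parts. A minimal sketch of that payload shape (the helper name, URL, and prompt text are placeholders for illustration, not from the commit):

```python
def vision_messages(image_url: str, question: str) -> list[dict]:
    """Build an OpenAI-style message list with an image_url content part."""
    return [
        {"role": "system", "content": "You are an assistant who perfectly describes images."},
        {
            "role": "user",
            "content": [
                # The chat handler extracts this image and runs it through the CLIP model.
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        },
    ]

msgs = vision_messages("https://example.com/photo.png", "What is in this image?")
```

A list like `msgs` can then be passed as the `messages` argument of `create_chat_completion`, or posted to the server's `/v1/chat/completions` endpoint.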

 Then you'll need to use a custom chat handler to load the clip model and process the chat messages and images.
