
Inference provider for vllm #2886

Open
jeffmaury opened this issue Apr 16, 2025 · 1 comment · May be fixed by #2900

@jeffmaury
Collaborator

Is your feature request related to a problem? Please describe

Allow running models with vllm.

Describe the solution you'd like

Add a new inference server for vllm
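
To illustrate what such a provider would manage: vllm ships an OpenAI-compatible server (for example via the vllm/vllm-openai container image) that listens on port 8000 by default and serves /v1 routes. The sketch below is only an illustration and not part of this issue; the base URL and model id are assumptions.

```ts
// Minimal sketch (illustration only): talking to a locally running vllm
// OpenAI-compatible server. Assumes the server listens on the default
// port 8000; the model id below is a placeholder.
const VLLM_BASE_URL = 'http://localhost:8000/v1';

async function listModels(): Promise<void> {
  // vllm lists the models it serves under the OpenAI-compatible /v1/models route.
  const response = await fetch(`${VLLM_BASE_URL}/models`);
  console.log(await response.json());
}

async function chat(prompt: string): Promise<string> {
  // Standard OpenAI-style chat completion request against the local server.
  const response = await fetch(`${VLLM_BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'my-model', // placeholder: whatever model the server was started with
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const body = (await response.json()) as any;
  return body.choices[0].message.content;
}

listModels().then(() => chat('Hello!')).then(console.log).catch(console.error);
```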

Describe alternatives you've considered

No response

Additional context

No response

@jeffmaury jeffmaury added the kind/feature 💡 Issue for requesting a new feature label Apr 16, 2025
@jeffmaury jeffmaury added this to the 1.7 milestone Apr 16, 2025
@nichjones1 nichjones1 moved this to 📋 Backlog in Podman Desktop Planning Apr 16, 2025
@axel7083
Contributor
