ValueError: Provider 'featherless-ai' not supported. #3064

Open
0x2b3bfa0 opened this issue May 9, 2025 · 0 comments · May be fixed by #3066
Labels: bug (Something isn't working)

0x2b3bfa0 commented May 9, 2025

Describe the bug

It looks like GET /api/models/...?expand=inferenceProviderMapping is returning an inference provider that this version of huggingface_hub does not support yet. The client's provider registry currently contains:

PROVIDERS: Dict[PROVIDER_T, Dict[str, TaskProviderHelper]] = {
    "black-forest-labs": {
        "text-to-image": BlackForestLabsTextToImageTask(),
    },
    "cerebras": {
        "conversational": CerebrasConversationalTask(),
    },
    "cohere": {
        "conversational": CohereConversationalTask(),
    },
    "fal-ai": {
        "automatic-speech-recognition": FalAIAutomaticSpeechRecognitionTask(),
        "text-to-image": FalAITextToImageTask(),
        "text-to-speech": FalAITextToSpeechTask(),
        "text-to-video": FalAITextToVideoTask(),
    },
    "fireworks-ai": {
        "conversational": FireworksAIConversationalTask(),
    },
    "hf-inference": {
        "text-to-image": HFInferenceTask("text-to-image"),
        "conversational": HFInferenceConversational(),
        "text-generation": HFInferenceTask("text-generation"),
        "text-classification": HFInferenceTask("text-classification"),
        "question-answering": HFInferenceTask("question-answering"),
        "audio-classification": HFInferenceBinaryInputTask("audio-classification"),
        "automatic-speech-recognition": HFInferenceBinaryInputTask("automatic-speech-recognition"),
        "fill-mask": HFInferenceTask("fill-mask"),
        "feature-extraction": HFInferenceFeatureExtractionTask(),
        "image-classification": HFInferenceBinaryInputTask("image-classification"),
        "image-segmentation": HFInferenceBinaryInputTask("image-segmentation"),
        "document-question-answering": HFInferenceTask("document-question-answering"),
        "image-to-text": HFInferenceBinaryInputTask("image-to-text"),
        "object-detection": HFInferenceBinaryInputTask("object-detection"),
        "audio-to-audio": HFInferenceBinaryInputTask("audio-to-audio"),
        "zero-shot-image-classification": HFInferenceBinaryInputTask("zero-shot-image-classification"),
        "zero-shot-classification": HFInferenceTask("zero-shot-classification"),
        "image-to-image": HFInferenceBinaryInputTask("image-to-image"),
        "sentence-similarity": HFInferenceTask("sentence-similarity"),
        "table-question-answering": HFInferenceTask("table-question-answering"),
        "tabular-classification": HFInferenceTask("tabular-classification"),
        "text-to-speech": HFInferenceTask("text-to-speech"),
        "token-classification": HFInferenceTask("token-classification"),
        "translation": HFInferenceTask("translation"),
        "summarization": HFInferenceTask("summarization"),
        "visual-question-answering": HFInferenceBinaryInputTask("visual-question-answering"),
    },
    "hyperbolic": {
        "text-to-image": HyperbolicTextToImageTask(),
        "conversational": HyperbolicTextGenerationTask("conversational"),
        "text-generation": HyperbolicTextGenerationTask("text-generation"),
    },
    "nebius": {
        "text-to-image": NebiusTextToImageTask(),
        "conversational": NebiusConversationalTask(),
        "text-generation": NebiusTextGenerationTask(),
    },
    "novita": {
        "text-generation": NovitaTextGenerationTask(),
        "conversational": NovitaConversationalTask(),
        "text-to-video": NovitaTextToVideoTask(),
    },
    "openai": {
        "conversational": OpenAIConversationalTask(),
    },
    "replicate": {
        "text-to-image": ReplicateTextToImageTask(),
        "text-to-speech": ReplicateTextToSpeechTask(),
        "text-to-video": ReplicateTask("text-to-video"),
    },
    "sambanova": {
        "conversational": SambanovaConversationalTask(),
        "feature-extraction": SambanovaFeatureExtractionTask(),
    },
    "together": {
        "text-to-image": TogetherTextToImageTask(),
        "conversational": TogetherConversationalTask(),
        "text-generation": TogetherTextGenerationTask(),
    },
}

Would it be more convenient to skip unsupported providers instead of raising an exception?
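For illustration, a minimal sketch of what "skip instead of raise" could look like when resolving the 'auto' provider. The helper name _first_supported_provider is hypothetical; the actual resolution logic in huggingface_hub differs:

def _first_supported_provider(mapping: dict, task: str) -> str:
    # Hypothetical sketch, not the actual library code: walk the
    # server-side inferenceProviderMapping in order and skip entries
    # this client version does not know about, instead of raising.
    # PROVIDERS is the registry shown above.
    for provider, info in mapping.items():
        if provider not in PROVIDERS:
            continue  # e.g. 'featherless-ai' on huggingface_hub 0.31.x
        if info.get("status") == "live" and task in PROVIDERS[provider]:
            return provider
    raise ValueError(f"No locally supported provider is live for task {task!r}.")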

Reproduction

requirements.txt

huggingface_hub==0.31.1

example.py

from huggingface_hub import InferenceClient

InferenceClient("meta-llama/Llama-3.1-70B-Instruct").chat_completion(messages=[])

Logs

python example.py

Traceback (most recent call last):
  File "/.../example.py", line 13, in <module>
    InferenceClient("meta-llama/Llama-3.1-70B-Instruct").chat_completion(messages=[])
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/.../.venv/lib/python3.13/site-packages/huggingface_hub/inference/_client.py", line 886, in chat_completion
    provider_helper = get_provider_helper(
        self.provider,
    ...<3 lines>...
        else payload_model,
    )
  File "/.../.venv/lib/python3.13/site-packages/huggingface_hub/inference/_providers/__init__.py", line 169, in get_provider_helper
    raise ValueError(
    ...<3 lines>...
    )
ValueError: Provider 'featherless-ai' not supported. Available values: 'auto' or any provider from ['black-forest-labs', 'cerebras', 'cohere', 'fal-ai', 'fireworks-ai', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'openai', 'replicate', 'sambanova', 'together'].Passing 'auto' (default value) will automatically select the first provider available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.

GET https://huggingface.co/api/models/meta-llama/Llama-3.1-70B-Instruct?expand=inferenceProviderMapping

{
  "_id": "66969ad27a033bf62173f3e2",
  "id": "meta-llama/Llama-3.1-70B-Instruct",
  "inferenceProviderMapping": {
    "featherless-ai": {
      "status": "live",
      "providerId": "meta-llama/Meta-Llama-3.1-70B-Instruct",
      "task": "conversational"
    },
    "novita": {
      "status": "live",
      "providerId": "meta-llama/llama-3.1-70b-instruct",
      "task": "conversational"
    },
    "nebius": {
      "status": "live",
      "providerId": "meta-llama/Meta-Llama-3.1-70B-Instruct-fast",
      "task": "conversational"
    },
    "hyperbolic": {
      "status": "staging",
      "providerId": "meta-llama/Meta-Llama-3.1-70B-Instruct",
      "task": "conversational"
    }
  }
}
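For reference, the mismatch can be checked by comparing the Hub's mapping against the client's registry. A diagnostic sketch; it imports PROVIDERS from the private module shown in the traceback, which may change between releases:

import requests

from huggingface_hub.inference._providers import PROVIDERS

response = requests.get(
    "https://huggingface.co/api/models/meta-llama/Llama-3.1-70B-Instruct",
    params={"expand": "inferenceProviderMapping"},
    timeout=10,
)
mapping = response.json()["inferenceProviderMapping"]

# Providers advertised by the Hub but unknown to this client version:
print([p for p in mapping if p not in PROVIDERS])
# e.g. ['featherless-ai'] with huggingface_hub 0.31.1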

System info

- huggingface_hub version: 0.31.0
- Platform: macOS-15.4.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.3
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /.../.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers: osxkeychain
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.7.0
- Jinja2: 3.1.6
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 11.2.1
- hf_transfer: 0.1.9
- gradio: N/A
- tensorboard: N/A
- numpy: 2.2.5
- pydantic: 2.10.6
- aiohttp: 3.11.18
- hf_xet: 1.1.0
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /.../.cache/huggingface/hub
- HF_ASSETS_CACHE: /.../.cache/huggingface/assets
- HF_TOKEN_PATH: /.../.cache/huggingface/token
- HF_STORED_TOKENS_PATH: /.../.cache/huggingface/stored_tokens
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
0x2b3bfa0 added the bug label on May 9, 2025
dreadatour added a commit to iterative/datachain that referenced this issue May 9, 2025
* Set name for 'read_values' UDF

* Print packages versions before testing examples

* Temporary limit 'huggingface_hub' version to be <0.30 to prevent "Provider 'featherless-ai' not supported" error (see also: huggingface/huggingface_hub#3064)
0x2b3bfa0 linked a pull request on May 9, 2025 that will close this issue