
huggingface hub - ImportError: cannot import name …
Jan 21, 2025 · ImportError: cannot import name 'cached_download' from 'huggingface_hub' Asked 10 months ago Modified 9 months ago Viewed 24k times
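The import usually breaks because cached_download was removed from newer huggingface_hub releases. A minimal sketch of the usual fix, switching to hf_hub_download (the repo id and filename below are placeholders); pinning huggingface_hub to an older release that still exports cached_download is the other common workaround when a third-party library insists on importing it.

```python
# cached_download is gone from recent huggingface_hub releases;
# hf_hub_download is the current way to fetch a single file.
from huggingface_hub import hf_hub_download

# repo_id and filename are placeholders for illustration.
path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(path)  # local path of the cached file
```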
python - Cannot load a gated model from huggingface despite having ...
Nov 21, 2024 · I am training a Llama-3.1-8B-Instruct model for a specific task. I requested access to the huggingface repository and was granted it, confirmed on the huggingface webapp dashboard. I …
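Getting approval on the hub is only half of it; the environment doing the download also has to authenticate. A minimal sketch, assuming a valid access token (the token string and the full repo id are placeholders):

```python
# Pass a Hugging Face access token explicitly when loading a gated repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder repo id
hf_token = "hf_xxx"  # placeholder; use your own token or run `huggingface-cli login`

tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(model_id, token=hf_token)
```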
How to download a model from huggingface? - Stack Overflow
May 19, 2021 · How about using hf_hub_download from the huggingface_hub library? hf_hub_download returns the local path where the model was downloaded, so you could hook this one-liner with …
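A short sketch of that approach, using an example repo id: hf_hub_download fetches one file, while snapshot_download grabs the whole repository.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single file: returns the local path of the cached copy.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")

# Whole repository: returns the local directory containing every file.
model_dir = snapshot_download(repo_id="bert-base-uncased")

print(config_path)
print(model_dir)
```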
How to do Tokenizer Batch processing? - HuggingFace
Jun 7, 2023 · In the Tokenizer documentation from huggingface, the call function accepts List[List[str]] and says: text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be …
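A minimal sketch of batch tokenization, assuming an example checkpoint: a plain List[str] covers the common batch case, while List[List[str]] is for pre-tokenized input and needs is_split_into_words=True.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example checkpoint

# Batch of raw strings (List[str]): one call tokenizes everything;
# padding/truncation make the output rectangular tensors.
batch = ["first sentence", "a second, slightly longer sentence"]
encoded = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
print(encoded["input_ids"].shape)

# Batch of pre-tokenized sequences (List[List[str]]) requires is_split_into_words=True.
pretokenized = [["first", "sentence"], ["already", "split", "into", "words"]]
encoded = tokenizer(pretokenized, is_split_into_words=True, padding=True, return_tensors="pt")
print(encoded["input_ids"].shape)
```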
Load a pre-trained model from disk with Huggingface Transformers
Sep 22, 2020 · Load a pre-trained model from disk with Huggingface Transformers Asked 5 years, 2 months ago Modified 2 years, 7 months ago Viewed 290k times
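A minimal sketch of the save/load round trip, with a placeholder path and an example checkpoint:

```python
from transformers import AutoModel, AutoTokenizer

# Save a model and tokenizer to a local directory once...
model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.save_pretrained("./local-bert")
tokenizer.save_pretrained("./local-bert")

# ...then load them from disk later, with no hub access needed.
model = AutoModel.from_pretrained("./local-bert")
tokenizer = AutoTokenizer.from_pretrained("./local-bert")
```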
Huggingface: How do I find the max length of a model?
Jun 24, 2023 · Given a transformer model on huggingface, how do I find the maximum input sequence length? For example, here I want to truncate to the max_length of the model: tokenizer(examples …
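Two places usually expose that limit; a hedged sketch, since the config field name varies by architecture (the checkpoint below is an example):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "bert-base-uncased"  # example model

# Tokenizer-side limit (what truncation=True falls back to by default).
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(tokenizer.model_max_length)

# Model-side limit; the attribute name depends on the architecture
# (max_position_embeddings for BERT/GPT-style models).
config = AutoConfig.from_pretrained(model_id)
print(getattr(config, "max_position_embeddings", None))
```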
Facing SSL Error with Huggingface pretrained models
Mar 31, 2022 · huggingface.co now has a bad SSL certificate; your lib internally tries to verify it and fails. By adding the env variable, you basically disable SSL verification.
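The answer refers to pointing the HTTP stack at an empty CA bundle so verification is skipped; a hedged sketch of that commonly cited workaround (it disables SSL verification entirely, so treat it as a temporary stopgap):

```python
import os

# Commonly cited workaround: an empty CA bundle makes the underlying
# requests/curl stack skip certificate verification. Set it before the
# download happens, and only on networks you trust.
os.environ["CURL_CA_BUNDLE"] = ""

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # example checkpoint
```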
How to change huggingface transformers default cache directory?
Aug 8, 2020 · The default cache directory lacks disk capacity, so I need to change it to another location. How can I do that?
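A sketch of the two usual options: an environment variable set before the libraries are imported, or a per-call cache_dir argument. The paths below are placeholders.

```python
import os

# Option 1: point the Hugging Face cache at a disk with more space.
# Set this before transformers/huggingface_hub are imported (or export it
# in the shell so it applies to every process).
os.environ["HF_HOME"] = "/mnt/big-disk/hf-cache"

from transformers import AutoModel

# Option 2: override the cache location for a single call.
model = AutoModel.from_pretrained(
    "bert-base-uncased",          # example checkpoint
    cache_dir="/mnt/big-disk/hf-cache",
)
```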
Newest 'huggingface' Questions - Stack Overflow
a huggingface model repo, like Qwen32B-GGUF, contains several large quantization-related files. Typically only one quantization file is actually used and the rest never need to be downloaded.
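A hedged sketch of fetching only the quantization variant you intend to run, using snapshot_download with an allow_patterns filter; the repo id and file pattern below are placeholders, not verified filenames.

```python
from huggingface_hub import snapshot_download

# Fetch only the GGUF variant you plan to run; allow_patterns filters out
# the other large quantization files. repo_id and the pattern are placeholders.
local_dir = snapshot_download(
    repo_id="Qwen/Qwen2.5-32B-Instruct-GGUF",
    allow_patterns=["*q4_k_m*.gguf"],
)
print(local_dir)
```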
python - Why does HuggingFace-provided Deepseek code result in an ...
Feb 9, 2025 · I am using code pasted directly from the HuggingFace website's page on DeepSeek, which is supposed to be plug-and-play: from transformers import …