
I am training a Llama-3.1-8B-Instruct model for a specific task. I requested access to the Hugging Face repository and was granted it, as confirmed on the Hugging Face web dashboard.

I tried calling the huggingface_hub.login function with my token to log in, and then downloading the model in the same script. I get an error saying that I need to be logged in to access gated repositories.

Then I tried logging in via the huggingface-cli login command, which succeeded. I still got the same error after running the script.

Then I tried the first approach again but didn't pass the token; the documentation says I should be prompted for it. However, the login function seems to block after showing the HF logo and never shows a prompt for the token.

Is there something I'm missing here in order to access the models?

My code:

# imports added for completeness; hf_login is an alias for huggingface_hub.login
from huggingface_hub import login as hf_login
from transformers import AutoTokenizer, AutoModelForCausalLM

hf_login()

base_model_name = 'meta-llama/Llama-3.1-8B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(base_model_name)  # this line causes error
model = AutoModelForCausalLM.from_pretrained(base_model_name)

Error:

OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct.
401 Client Error. (Request ID: Root=1-673f47aa-6b11aae44cd9c6523654070c;5816d1af-49a5-4262-bec0-dab7ecad66e4)

Cannot access gated repo for url https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/resolve/main/config.json.
Access to model meta-llama/Llama-3.1-8B-Instruct is restricted. You must have access to it and be authenticated to access it. Please log in.

I'm sure I have access to the meta-llama/Llama-3.1 models. The huggingface-cli whoami command correctly returns my username, so I'm also logged in.

My token has read access; I'm also trying with a write-access one.

EDIT: I generated a new write-access token. The login via the huggingface_hub.login function was successful, but the models still weren't actually downloading. I then ran the script from the Windows terminal instead of the PyCharm built-in terminal, and now it works. I still don't know why it works now.
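(A difference between terminals usually means the two environments see different credentials. One way to check what a given environment actually picks up is a minimal, stdlib-only sketch like the one below; the helper name is mine, and the token file path is the huggingface_hub default, overridable via HF_HOME:)

```python
import os
from pathlib import Path

def hf_credential_sources() -> dict:
    """Report where a Hugging Face token could be picked up from in this environment."""
    # huggingface-cli login writes the token here by default; HF_HOME overrides the base dir.
    cache_dir = Path(os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))
    return {
        "HF_TOKEN env var set": "HF_TOKEN" in os.environ,
        "token file path": str(cache_dir / "token"),
        "token file exists": (cache_dir / "token").exists(),
    }

if __name__ == "__main__":
    for key, value in hf_credential_sources().items():
        print(f"{key}: {value}")
```

Running this from both the PyCharm terminal and the Windows terminal would show whether the two environments disagree about where the token comes from.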


asked Nov 21, 2024 at 14:54 by majTheHero, edited Nov 21, 2024 at 14:59

2 Answers


You need to log in with a Hugging Face access token before you can access a gated model. If you don't have a token yet, you can create one in the Access Tokens section of your account settings:

from huggingface_hub import login

login(token="hugging_face_access_token")  # replace with your actual token

You need to provide a token. One way is to expose it as an environment variable. Before you launch python, ipython, or a Jupyter notebook, set:

export HF_TOKEN=<token acquired from the Access Tokens section of your Hugging Face profile>

Then run the following code:

import os
from transformers import AutoTokenizer, AutoModelForCausalLM

token = os.environ["HF_TOKEN"]

base_model_name = 'meta-llama/Llama-3.1-8B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(base_model_name, token=token)
model = AutoModelForCausalLM.from_pretrained(base_model_name, token=token)
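Note that the syntax for setting the variable depends on your shell; the token value below is a placeholder. Since the original poster saw different behavior between the PyCharm terminal and the Windows terminal, make sure the variable is set in whichever environment actually launches the script:

```shell
# bash / zsh (Linux, macOS, Git Bash)
export HF_TOKEN="hf_example_token"

# Windows cmd:        set HF_TOKEN=hf_example_token
# Windows PowerShell: $env:HF_TOKEN = "hf_example_token"

# Verify the variable is visible in the current environment:
echo "$HF_TOKEN"
```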

Tags: python — Cannot load a gated model from Hugging Face despite having access and logging in (Stack Overflow)