I'm having trouble loading my LoRA adapters for inference after fine-tuning Llama 3.1 8B. When I try to load the adapter files in a new session, I get a warning about missing adapter keys:
/usr/local/lib/python3.11/dist-packages/peft/peft_model.py:599: UserWarning: Found missing adapter keys while loading the checkpoint: ['base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.0.self_attn.k_proj.lora_A.default.weight',...
The warning continues with many similar missing keys. The model behaves as if it wasn't fine-tuned at all.
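In case it helps, this is roughly how I checked what the saved checkpoint actually contains (`adapter_dir` is just a placeholder for the directory my adapter files are in):

# Sanity check: list the tensor names stored in the saved adapter file.
# "adapter_dir" is a placeholder for the directory holding adapter_model.safetensors.
from safetensors.torch import load_file

adapter_state = load_file(f"{adapter_dir}/adapter_model.safetensors")
print(len(adapter_state), "tensors in the checkpoint")
print(list(adapter_state.keys())[:5])  # compare these key prefixes with the ones listed in the warning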
My setup:
- transformers 4.43.1
- peft 0.14.0
- bitsandbytes 0.45.4
- torch 2.6.0+cu124
- accelerate 0.31.0
After training, I saved the model and ended up with these files (the save step itself is sketched right after this list):
- adapter_model.safetensors
- adapter_config.json
- training_args.bin
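For context, the save step at the end of training was roughly this (simplified; `trainer` and `output_dir` stand in for my actual Trainer instance and output path):

# Simplified sketch of the save step ("trainer" and "output_dir" are placeholders
# for my actual Trainer instance and output directory).
trainer.save_model(output_dir)  # wrote adapter_model.safetensors, adapter_config.json and training_args.bin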
Here's how I loaded the model and adapters:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load base model
base_model_id = "meta-llama/Llama-3.1-8B"
lora_weights = "path to weights"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=base_model_id,
    quantization_config=bnb_config,
    trust_remote_code=True,
    token=True,
)

# Set up tokenizer
eval_tokenizer = AutoTokenizer.from_pretrained(
    base_model_id,
    add_bos_token=True,
    trust_remote_code=True,
    use_fast=True,
)
eval_tokenizer.pad_token = eval_tokenizer.eos_token

# Load LoRA adapters
ft_model = PeftModel.from_pretrained(base_model, lora_weights, output_loading_info=True)
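And for completeness, this is roughly how I run inference afterwards (the prompt is just an example; I'm treating `ft_model` as the PeftModel returned above):

# Rough inference snippet; the prompt below is just an example.
# This is where the output looks identical to the base model, as if the adapter had no effect.
ft_model.eval()
prompt = "An example prompt from my fine-tuning data"
inputs = eval_tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output_ids = ft_model.generate(**inputs, max_new_tokens=100)
print(eval_tokenizer.decode(output_ids[0], skip_special_tokens=True))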
Any ideas on what might be causing this issue or how to fix it?
Thanks in advance!