PEFT
🤗 PEFT, or Parameter-Efficient Fine-Tuning, is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters.
PEFT methods only fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs because fine-tuning large-scale PLMs is prohibitively costly.
Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
PEFT is seamlessly integrated with 🤗 Accelerate for large-scale models leveraging DeepSpeed and Big Model Inference.
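As a rough illustration of what this looks like in code, the sketch below attaches a LoRA adapter to a pre-trained causal language model so that only the injected low-rank weights are trained. The checkpoint and hyperparameter values are placeholders, not recommendations.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative checkpoint; any supported causal LM follows the same pattern.
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=32,    # scaling factor applied to the LoRA updates
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters is trainable
```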
If you are new to PEFT, get started by reading the Quicktour guide and conceptual guides for LoRA and Prompting methods.
The tables provided below list the PEFT methods and models supported for each task. To apply a particular PEFT method for
a task, please refer to the corresponding Task guides.
Causal Language Modeling
| Model        | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|--------------|------|---------------|----------|---------------|
| GPT-2        | ✅   | ✅            | ✅       | ✅            |
| Bloom        | ✅   | ✅            | ✅       | ✅            |
| OPT          | ✅   | ✅            | ✅       | ✅            |
| GPT-Neo      | ✅   | ✅            | ✅       | ✅            |
| GPT-J        | ✅   | ✅            | ✅       | ✅            |
| GPT-NeoX-20B | ✅   | ✅            | ✅       | ✅            |
| LLaMA        | ✅   | ✅            | ✅       | ✅            |
| ChatGLM      | ✅   | ✅            | ✅       | ✅            |
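As a hedged sketch of pairing one of the methods above with this task, Prompt Tuning can be attached to GPT-2 through the same `get_peft_model` call; the number of virtual tokens below is an illustrative choice.

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Train 20 soft-prompt embeddings that are prepended to every input sequence.
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```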
Conditional Generation
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|-------|------|---------------|----------|---------------|
| T5    | ✅   | ✅            | ✅       | ✅            |
| BART  | ✅   | ✅            | ✅       | ✅            |
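A similar sketch for conditional generation, here assuming a T5 checkpoint with Prefix Tuning (the model name and number of virtual tokens are illustrative):

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Learn a 20-token prefix that conditions the attention layers.
config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```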
Sequence Classification
| Model      | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|------------|------|---------------|----------|---------------|
| BERT       | ✅   | ✅            | ✅       | ✅            |
| RoBERTa    | ✅   | ✅            | ✅       | ✅            |
| GPT-2      | ✅   | ✅            | ✅       | ✅            |
| Bloom      | ✅   | ✅            | ✅       | ✅            |
| OPT        | ✅   | ✅            | ✅       | ✅            |
| GPT-Neo    | ✅   | ✅            | ✅       | ✅            |
| GPT-J      | ✅   | ✅            | ✅       | ✅            |
| Deberta    | ✅   |               | ✅       | ✅            |
| Deberta-v2 | ✅   |               | ✅       | ✅            |
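For sequence classification the pattern is the same; the sketch below assumes a BERT checkpoint with P-Tuning and illustrative hyperparameters.

```python
from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    encoder_hidden_size=128,  # hidden size of the prompt encoder that produces the virtual tokens
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```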
Token Classification
| Model      | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|------------|------|---------------|----------|---------------|
| BERT       | ✅   | ✅            |          |               |
| RoBERTa    | ✅   | ✅            |          |               |
| GPT-2      | ✅   | ✅            |          |               |
| Bloom      | ✅   | ✅            |          |               |
| OPT        | ✅   | ✅            |          |               |
| GPT-Neo    | ✅   | ✅            |          |               |
| GPT-J      | ✅   | ✅            |          |               |
| Deberta    | ✅   |               |          |               |
| Deberta-v2 | ✅   |               |          |               |
Text-to-Image Generation
| Model            | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|------------------|------|---------------|----------|---------------|
| Stable Diffusion | ✅   |               |          |               |
Image Classification
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|-------|------|---------------|----------|---------------|
| ViT   | ✅   |               |          |               |
| Swin  | ✅   |               |          |               |
Image to text (Multi-modal models)
We have tested LoRA for ViT and Swin for fine-tuning on image classification.
However, it should be possible to use LoRA for any ViT-based model from 🤗 Transformers.
Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
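As a hedged sketch of that idea, LoRA can be applied to a ViT image classifier by targeting its attention projections; the module names and values below follow the pattern from the Image classification task guide and are illustrative.

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224-in21k")

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],  # inject LoRA into the attention projections
    lora_dropout=0.1,
    modules_to_save=["classifier"],     # keep the freshly initialized head fully trainable
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```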
| Model  | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning |
|--------|------|---------------|----------|---------------|
| Blip-2 | ✅   |               |          |               |
Semantic Segmentation
As with image-to-text models, you should be able to apply LoRA to any of the segmentation models.
It's worth noting that we haven't tested this with every architecture yet, so if you come across any issues, please open an issue report.
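As a hedged sketch under that assumption, LoRA can be attached to a SegFormer checkpoint (an illustrative choice) while the segmentation head stays fully trainable:

```python
from transformers import AutoModelForSemanticSegmentation
from peft import LoraConfig, get_peft_model

model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["query", "value"],  # attention projections inside the SegFormer encoder
    lora_dropout=0.1,
    modules_to_save=["decode_head"],    # the segmentation head remains fully trainable
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```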