Later in this piece, you'll get a full understanding of each flavor of fine-tuning, but to start, here are the most common approaches:
Parameter-efficient fine-tuning: Only a small fraction of model parameters are updated while the other parameters are frozen.
• Adapter-tuning: Inserts additional task-specific layers between the layers of a pre-trained LLM and only tunes parameters in the adapters.
• LoRA: Adapters are low-rank approximations of the original weight matrices (further reading); see the sketch after this list.
• Prefix-tuning: Prepends task-specific vectors to the model and only tunes parameters in the prefix.
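To make the LoRA idea concrete, here is a minimal sketch of a low-rank adapter wrapped around a frozen linear layer, assuming a PyTorch setting. The class name LoRALinear and the hyperparameters r (rank) and alpha (scaling factor) are illustrative choices, not part of any specific library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are the LoRA factors."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze original weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero, so the adapter initially leaves the model unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a layer from a pre-trained model and train only the LoRA factors.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # r*(in+out) parameters, far fewer than the 768*768 frozen weights
```

Because only lora_A and lora_B receive gradients, the trainable parameter count drops from in_features × out_features to r × (in_features + out_features), which is what makes this approach parameter-efficient.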