
A Closer Look at Personalized Fine-Tuning in Heterogeneous …
Our analysis uncovers federated feature distortion, a phenomenon where local fine-tuning destabilizes globally learned features, and theoretically characterizes how LP-FT …
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Jan 28, 2022 · We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head: this is because while fine-tuning learns the head, the lower layers of the neural …
• Pretrained models give large improvements in accuracy, but how we fine-tune them is key
• LP-FT is just a starting point; are there better methods?
• What to do when linear probing is not so good?
Key takeaway: a larger change in parameters can distort pretrained features. How do we retain information beyond the limited data used for adaptation?
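To make the takeaway concrete, here is a minimal sketch of how one might quantify feature distortion, measured as the mean L2 distance between an encoder's features before and after fine-tuning on held-out data. This is my own illustration, not code from any of the cited papers; feature_distortion, the two encoders, and the loader are hypothetical names.

import torch

@torch.no_grad()
def feature_distortion(pretrained_encoder, finetuned_encoder, loader, device="cpu"):
    # Mean L2 distance between features before and after fine-tuning,
    # computed on held-out inputs: a simple proxy for feature distortion.
    pretrained_encoder.to(device).eval()
    finetuned_encoder.to(device).eval()
    total, count = 0.0, 0
    for x, _ in loader:
        x = x.to(device)
        f_pre = pretrained_encoder(x)    # (batch, dim) pretrained features
        f_post = finetuned_encoder(x)    # (batch, dim) fine-tuned features
        total += (f_post - f_pre).norm(dim=1).sum().item()
        count += x.size(0)
    return total / count

A larger value means fine-tuning moved the representation further from its pretrained state; tracking it alongside OOD accuracy is one way to test the takeaway empirically.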
We observe that increasing local work generally amplifies pretrained feature distortion for both baselines and FedSparseNet. Consequently, the performance of IID FL and FedSparseNet …
Can we refine features without distorting them too much? LP-FT gives +10% over fine-tuning! What to do when linear probing is not so good?
A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias
Mar 23, 2023 · Going beyond conventional linear probing (LP) and fine-tuning (FT) strategies, protocols that can effectively control feature distortion, i.e., the failure to update features …
Fine-Tuning Distorts Pretrained Features - Emergent Mind
Feb 21, 2022 · Fine-tuning alters pretrained features due to simultaneous optimization of the head and lower layers, causing distortions that compromise OOD performance.
Fine-tuning changes features only in the span of the training data when using “good” pretrained features. Even with an infinitesimally small learning rate, fine-tuning distorts pretrained features.
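A hedged sketch of where that span claim comes from, in the paper's two-layer linear setting (prediction f(x) = v^\top B x with features Bx; the squared loss is my choice for illustration):

L(v, B) = \frac{1}{2} \sum_{i=1}^{n} \left( v^\top B x_i - y_i \right)^2,
\qquad
\nabla_B L = \sum_{i=1}^{n} \left( v^\top B x_i - y_i \right) v x_i^\top.

Each gradient term v x_i^\top is rank one with row space spanned by x_i, so every gradient step, however small, moves the rows of B only within span{x_1, ..., x_n}. Feature directions orthogonal to the training data are never updated, while in-span directions and the head v keep moving; an OOD input with a large orthogonal component is therefore scored by a head that was fit to shifted features, which is exactly the distortion at issue.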
Fine-Tuning without Distortion: Improving Robustness to ... - NIPS
Our analysis suggests the easy two-step strategy of linear probing then full fine-tuning (LP-FT), which improves pretrained features without distortion, and leads to even higher accuracies.
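Read concretely, LP-FT is a two-phase training loop. Below is a minimal PyTorch-style sketch under my own assumptions (encoder is any module mapping inputs to flat feat_dim features; loader, epoch counts, and learning rates are placeholders), not the authors' released code:

import torch
import torch.nn as nn

def lp_ft(encoder, feat_dim, num_classes, loader,
          lp_epochs=10, ft_epochs=10, lp_lr=1e-2, ft_lr=1e-4):
    head = nn.Linear(feat_dim, num_classes)
    model = nn.Sequential(encoder, head)
    criterion = nn.CrossEntropyLoss()

    # Step 1 (LP): freeze the encoder and fit only the linear head,
    # so the head starts near-optimal for the pretrained features.
    for p in encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(head.parameters(), lr=lp_lr)
    for _ in range(lp_epochs):
        for x, y in loader:
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()

    # Step 2 (FT): unfreeze everything and fine-tune end to end at a
    # small learning rate, starting from the probed head rather than a
    # random one, which reduces the pressure to distort the features.
    for p in encoder.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.parameters(), lr=ft_lr)
    for _ in range(ft_epochs):
        for x, y in loader:
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()
    return model

The key design choice is simply the initialization of the head before full fine-tuning; everything else is an ordinary supervised loop.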