March 18, 2025

[2410.09016] Parameter-Efficient Fine-Tuning of State Space Models


By Kevin Galim and 4 other authors

Abstract: Deep State Space Models (SSMs), such as Mamba (Gu & Dao, 2024), have become powerful tools for language modeling, offering high performance and linear scalability with sequence length. However, the application of parameter-efficient fine-tuning (PEFT) methods to SSM-based models remains underexplored. We start by investigating two fundamental questions about existing PEFT methods: (i) How do they perform on SSM-based models? (ii) Which parameters should they target for optimal results? Our analysis shows that LoRA and its variants consistently outperform all other PEFT methods. While LoRA is effective for linear projection matrices, it fails on SSM modules, yet it still outperforms the other methods applicable to SSMs, indicating the limitations of those methods. This underscores the need for a specialized SSM tuning approach. To address this, we propose Sparse Dimension Tuning (SDT), a PEFT method tailored for SSM modules. Combining SDT for SSMs with LoRA for linear projection matrices, we achieve state-of-the-art performance across extensive experiments.
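To make the two ingredients concrete, here is a minimal PyTorch sketch: standard LoRA applied to a frozen linear projection, and, for SDT, the assumption that only a sparse subset of channel dimensions of an SSM parameter receives trainable updates. The names (LoRALinear, sdt_delta) and the masking scheme are illustrative assumptions based on the abstract, not the paper's actual implementation.

```python
# Minimal sketch, assuming: LoRA = frozen weight + trainable low-rank update,
# and SDT-style tuning = updates restricted to a sparse set of channel dims.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear projection W plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained projection
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def sdt_delta(param: torch.Tensor, dim_idx: torch.Tensor):
    """Hypothetical SDT-style update: a trainable delta that is nonzero
    only on the selected channel dimensions; all other rows stay frozen."""
    delta = nn.Parameter(torch.zeros_like(param))
    mask = torch.zeros(param.shape[0], dtype=param.dtype)
    mask[dim_idx] = 1.0
    # Broadcast the mask over remaining dims so gradients flow only
    # to the chosen channels when optimizing `delta`.
    return delta, mask.view(-1, *([1] * (param.dim() - 1)))

# Usage: LoRA on a projection, SDT-style mask on an SSM parameter.
proj = nn.Linear(512, 512)
lora_proj = LoRALinear(proj, rank=8)
y = lora_proj(torch.randn(2, 512))            # shape (2, 512)

A_log = torch.randn(512, 16)                  # e.g., an SSM state parameter
delta, mask = sdt_delta(A_log, torch.tensor([3, 42, 128]))
A_effective = A_log + delta * mask            # only 3 channels get updates
```

In this reading, SDT keeps the per-module update extremely sparse (a handful of channel dimensions), while LoRA handles the dense projection matrices, which matches the division of labor the abstract describes.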

Submission history

From: Wonjun Kang
[v1] Fri, 11 Oct 2024 17:30:28 UTC (89 KB)
[v2] Fri, 14 Mar 2025 01:26:57 UTC (277 KB)


