🍡 feedmeAI

Everything Parameter-efficiency

📑 arXiv 2d ago

JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models

JumpLoRA introduces adaptive sparsity into LoRA blocks via JumpReLU gating for continual learning in LLMs, dynamically isolating parameters to prevent interference between tasks. The method is modular and compatible with existing LoRA-based continual-learning approaches, and it significantly outperforms IncLoRA by constraining both the magnitude and direction of updates.
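The core mechanism can be sketched briefly. A JumpReLU gate passes a value through only when it exceeds a threshold, zeroing it otherwise; placing such a gate inside a LoRA branch sparsifies the low-rank update. This is a minimal illustrative sketch, not the paper's implementation: the gate placement (on the rank-r activations), the fixed scalar threshold `theta`, and the function names are all assumptions.

```python
import numpy as np

def jumprelu(z, theta):
    # JumpReLU: keep entries strictly above the threshold theta,
    # zero the rest. The hard cutoff is what induces sparsity.
    return z * (z > theta)

def jumplora_forward(x, W, A, B, theta):
    # Frozen base layer plus a LoRA update whose intermediate
    # activations are sparsified by a JumpReLU gate.
    # Shapes (illustrative): x (d,), W (d, d), A (d, r), B (r, d).
    h = x @ A               # down-project to rank r
    h = jumprelu(h, theta)  # gate: only strong components contribute
    return x @ W + h @ B    # base output + sparse low-rank update
```

With a high threshold the gate suppresses the entire update and the layer reduces to the frozen base weights; lowering the threshold lets selected rank components through, which is the sense in which the gate isolates task-specific parameters.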