Hi, I would like to set a custom LoRA rank, scaling factor, and the parts of the attention head that get fine-tuned. Any idea how I can pass these as configuration parameters in the YAML config file?
Hi @msmmpts, absolutely, you can do this as part of the Ludwig config:
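For example, something along these lines (a minimal sketch; the `base_model` value is just a placeholder, and the exact adapter field names such as `r` and `alpha` should be double-checked against the LoRA section of the docs linked below):

```yaml
model_type: llm
base_model: meta-llama/Llama-2-7b-hf  # placeholder base model
adapter:
  type: lora
  r: 16          # LoRA rank
  alpha: 32      # LoRA scaling factor
  dropout: 0.05  # LoRA dropout
```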
The one param we don't support yet is target modules, primarily because it's not clear how best to do it in a way that wouldn't be overly coupled to the underlying PyTorch module. We could definitely add it as an option if it's something you need for your use case, though. Full details can be found here: https://ludwig.ai/latest/configuration/large_language_model/#lora
The query and value modules for each attention block. For llama, these are `q_proj` and `v_proj`.