Répondez à la question suivante : Les pratiques artistiques transforment-elles le monde ? ("Answer the following question: Do artistic practices transform the world?")
I get the following response in English:
Artistic practices transform the world.
The artistic practice is a way to express oneself, to create and to share with others. It is also an opportunity to reflect on our society and its evolutions. Artists are able to question themselves about their environment and to propose new ways of thinking. They can also be involved in social issues such as the environment or the economy.
### Exemple:
* "The artistic practice is a way to express oneself, to create and to share with others."
* "It is also an opportunity to reflect on our society and its evolutions."
* "Artists are able to question themselves about their environment and to propose new ways of thinking. They can also be involved in social issues such as the environment or the economy."
Thank you for providing such detailed steps, they will be useful to others!
Regarding your issue with the fine-tuned model replying in English, it's possible that tuning the inference parameters could help.
However, it's important to note that this model is far from perfect, and there is significant room for improvement. One possible solution is to re-fine-tune the model with different hyperparameters, particularly those related to LoRA. Another option is to collect more French instruction-following data. Additionally, continuing the pre-training of the LLaMA model to better suit the French language may also help.
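The LoRA-related hyperparameters mentioned above are mainly the rank `r` and the scaling factor `alpha` (plus which projection matrices the adapters target). As a reminder of what they control, here is a minimal pure-Python sketch of the LoRA update rule with toy sizes (not code from this repo): the adapted weight is `W + (alpha / r) * B @ A`.

```python
# Minimal sketch of the LoRA update rule with hypothetical toy dimensions.
# A has shape (r x d_in), B has shape (d_out x r); the effective weight
# becomes W + (alpha / r) * B @ A, so r and alpha directly scale the update.

def lora_delta(A, B, alpha, r):
    """Return the low-rank weight update (alpha / r) * B @ A as nested lists."""
    scale = alpha / r
    d_out, d_in = len(B), len(A[0])
    return [[scale * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(d_in)] for i in range(d_out)]

# Toy example: d_in = d_out = 2, rank r = 1, alpha = 2, so scale = 2.0.
A = [[1.0, 0.0]]          # r x d_in
B = [[1.0], [0.5]]        # d_out x r
delta = lora_delta(A, B, alpha=2, r=1)
```

A larger `r` gives the adapter more capacity, while `alpha` rescales how strongly the update perturbs the frozen base weights; both are natural knobs to retune when a fine-tune underfits the target language.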
I'm currently working on some new features that could probably resolve this issue. Stay tuned!
I LoRA-fine-tuned https://huggingface.co/decapoda-research/llama-7b-hf with your dataset `vigogne_data_cleaned.json`, but my fine-tuned model replies in English. What did I do wrong?

Details:
I used the following setup (on a Paperspace Core default Ampere A6000 GPU with 48 GiB of GPU memory, upgraded to 250 GB of block storage):
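The exact commands are not preserved above; as a rough sketch, the environment setup would have been of this shape (the package list is an assumption based on the usual LLaMA + LoRA stack, not a record of the actual commands):

```shell
# Hypothetical dependency install for LoRA fine-tuning of LLaMA-7B.
# Package names and the lack of version pins are assumptions, not the real setup.
pip install torch transformers peft datasets sentencepiece accelerate
```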
This setup performs inference correctly in English, but not in French.
Then I continued with fine-tuning:
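Before fine-tuning, it is worth sanity-checking that the training examples really are French instruction/response pairs in the expected schema. A minimal sketch (the field names `instruction`/`input`/`output` are the usual Alpaca-style schema and an assumption here; the inline sample stands in for `vigogne_data_cleaned.json`):

```python
import json

# Tiny inline sample standing in for vigogne_data_cleaned.json (hypothetical record).
sample = json.loads("""[
  {"instruction": "Donnez trois conseils pour rester en bonne santé.",
   "input": "",
   "output": "1. Mangez équilibré et variez votre alimentation."}
]""")

def check_schema(records):
    """Return True if every record has the Alpaca-style fields."""
    required = {"instruction", "input", "output"}
    return all(required <= rec.keys() for rec in records)

ok = check_schema(sample)
```

If a record is missing fields, or the responses are not actually French, the fine-tune will happily learn something other than what was intended.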
The fine-tuning seemed to be successful, so I prepared the new model for inference:
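One thing worth checking at the inference step is that the prompt is wrapped in the same instruction template used during fine-tuning; a mismatch here is a common reason an Alpaca-style model ignores the instruction language. A minimal sketch (the template strings below are an assumption based on the standard Alpaca format, not taken from this repo; note that an English-language preamble like this can itself bias the model toward English, so translating the template into French is one thing to try):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in an Alpaca-style template (assumed format)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt("Répondez à la question suivante : "
                      "Les pratiques artistiques transforment-elles le monde ?")
```

The generated text should be read starting after the final `### Response:` marker; feeding the raw instruction without the template can also degrade outputs.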
When I enter
I get the following response in English: