If we are using a local model, we need to pass `device` to utilise the GPU for inference. However, in `launch_concordia_challenge_evaluation.py`:
```python
# Language Model setup
model = utils.language_model_setup(
    api_type=args.api_type,
    model_name=args.model_name,
    api_key=args.api_key,
    disable_language_model=args.disable_language_model,
)
```
So if we run the evaluation script from the command line, `device` is never passed, even though `utils.language_model_setup` accepts it, and it therefore defaults to `'cpu'`.
This is not really a blocking issue since I'm sure you can just manually edit the file to pass the device. So probably not super urgent here. Anyway though, in principle we might want to loft this device setting all the way out to become a command line argument, but I would worry a bit about adding model-specific complexity into the interface at that level. @jagapiou what do you think?
`api_key` is already model-specific: some models don't support that argument. So I think it's OK to solve this the same way: have `device` default to `None` and only forward it if it's explicitly set (sent you a CL).
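A minimal sketch of that pattern (the `make_model` helper and the exact `language_model_setup` signature are stand-ins, not the real Concordia code): default `device` to `None` and forward it only when explicitly set, mirroring how `api_key` is handled.

```python
def make_model(api_type, model_name, **kwargs):
    # Stub standing in for the real per-backend model constructors.
    return {'api_type': api_type, 'model_name': model_name, **kwargs}


def language_model_setup(api_type, model_name, api_key=None, device=None):
    kwargs = {}
    if api_key is not None:
        kwargs['api_key'] = api_key  # some backends don't support this
    if device is not None:
        kwargs['device'] = device    # only local backends need a device
    return make_model(api_type, model_name, **kwargs)


# Remote backend: no device, so the backend never sees an unsupported argument.
print(language_model_setup('openai', 'gpt-4o', api_key='sk-test'))
# Local backend: device is forwarded, so inference can run on the GPU.
print(language_model_setup('local', 'llama-3', device='cuda:0'))
```

This way the command-line interface can expose `--device` without breaking backends that have no notion of a device.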
If we have a lot of model-specific settings it might be better to have a `--model_settings=device=gpu0,use_codestral=True` type flag.
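A combined flag like that could be parsed with something along these lines (the flag name, the comma/equals syntax, and the boolean coercion rule are all assumptions for illustration):

```python
def parse_model_settings(flag_value):
    """Parse 'key=value,key=value' into a dict, coercing True/False."""
    settings = {}
    for pair in flag_value.split(','):
        key, _, value = pair.partition('=')
        if value in ('True', 'False'):
            value = (value == 'True')
        settings[key] = value
    return settings


print(parse_model_settings('device=gpu0,use_codestral=True'))
# {'device': 'gpu0', 'use_codestral': True}
```

The resulting dict could then be splatted into the model constructor (`make_model(..., **settings)`), keeping backend-specific options out of the top-level argument list.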