 > Training Environment:
 | > Backend: Torch
 | > Mixed precision: False
 | > Precision: float32
 | > Current device: 0
 | > Num. of GPUs: 1
 | > Num. of CPUs: 20
 | > Num. of Torch Threads: 1
 | > Torch seed: 1
 | > Torch CUDNN: True
 | > Torch CUDNN deterministic: False
 | > Torch CUDNN benchmark: False
 | > Torch TF32 MatMul: False
 > Start Tensorboard: tensorboard --logdir=J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094
 > Model has 520103786 parameters

 > EPOCH: 0/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:31:21)

   --> TIME: 2024-10-07 12:32:33 -- STEP: 0/57 -- GLOBAL_STEP: 0
     | > loss_text_ce: 0.09054956585168839 (0.09054956585168839)
     | > loss_mel_ce: 7.118991374969482 (7.118991374969482)
     | > loss: 7.209540843963623 (7.209540843963623)
     | > grad_norm: 0 (0)
     | > current_lr: 5e-06
     | > step_time: 2.8678 (2.8678393363952637)
     | > loader_time: 69.3847 (69.3847427368164)

   --> TIME: 2024-10-07 12:32:54 -- STEP: 50/57 -- GLOBAL_STEP: 50
     | > loss_text_ce: 0.07943269610404968 (0.08484900176525116)
     | > loss_mel_ce: 6.448935508728027 (6.546598110198975)
     | > loss: 6.52836799621582 (6.631447095870971)
     | > grad_norm: 0 (0.0)
     | > current_lr: 5e-06
     | > step_time: 0.2855 (0.24002688884735107)
     | > loader_time: 0.0157 (0.027909402847290047)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.09507475580487933 (+0)
     | > avg_loss_text_ce: 0.07929773096527372 (+0)
     | > avg_loss_mel_ce: 6.424974509647915 (+0)
     | > avg_loss: 6.504272188459124 (+0)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_57.pth

 > EPOCH: 1/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:33:30)

   --> TIME: 2024-10-07 12:34:55 -- STEP: 43/57 -- GLOBAL_STEP: 100
     | > loss_text_ce: 0.0764726921916008 (0.07813759979813598)
     | > loss_mel_ce: 6.210442543029785 (6.273206633190776)
     | > loss: 6.286915302276611 (6.351344263830851)
     | > grad_norm: 0 (0.0)
     | > current_lr: 3e-06
     | > step_time: 0.1814 (0.2750721476798835)
     | > loader_time: 0.0176 (0.03615218539570653)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.13353742871965682 (+0.03846267291477749)
     | > avg_loss_text_ce: 0.07507834157773427 (-0.004219389387539449)
     | > avg_loss_mel_ce: 6.363463401794434 (-0.06151110785348113)
     | > avg_loss: 6.4385416848318915 (-0.06573050362723265)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_114.pth

 > EPOCH: 2/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:35:33)

   --> TIME: 2024-10-07 12:36:53 -- STEP: 36/57 -- GLOBAL_STEP: 150
     | > loss_text_ce: 0.0721435546875 (0.07358831146525013)
     | > loss_mel_ce: 6.145806789398193 (6.188661495844523)
     | > loss: 6.217950344085693 (6.262249761157566)
     | > grad_norm: 0 (0.0)
     | > current_lr: 5e-06
     | > step_time: 0.2864 (0.2767013642523024)
     | > loader_time: 0.0103 (0.03218516376283434)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.06073590687343052 (-0.07280152184622629)
     | > avg_loss_text_ce: 0.06986938736268453 (-0.005208954215049744)
     | > avg_loss_mel_ce: 6.30383437020438 (-0.059629031590053394)
     | > avg_loss: 6.3737038884844095 (-0.06483779634748199)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_171.pth

 > EPOCH: 3/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:37:19)

   --> TIME: 2024-10-07 12:38:29 -- STEP: 29/57 -- GLOBAL_STEP: 200
     | > loss_text_ce: 0.07051176577806473 (0.07052644701867268)
     | > loss_mel_ce: 6.033895492553711 (5.986830119428964)
     | > loss: 6.10440731048584 (6.057356554886391)
     | > grad_norm: 0 (0.0)
     | > current_lr: 3e-06
     | > step_time: 0.5699 (0.3076904395530964)
     | > loader_time: 0.0784 (0.03138653163252206)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.05758019856044224 (-0.0031557083129882812)
     | > avg_loss_text_ce: 0.06753930981670107 (-0.0023300775459834527)
     | > avg_loss_mel_ce: 6.263012681688581 (-0.04082168851579926)
     | > avg_loss: 6.3305520330156595 (-0.04315185546875)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_228.pth

 > EPOCH: 4/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:38:54)

   --> TIME: 2024-10-07 12:39:58 -- STEP: 22/57 -- GLOBAL_STEP: 250
     | > loss_text_ce: 0.06679949909448624 (0.06776332889090884)
     | > loss_mel_ce: 5.908669471740723 (5.947822874242609)
     | > loss: 5.97546911239624 (6.0155862244692715)
     | > grad_norm: 0 (0.0)
     | > current_lr: 5e-06
     | > step_time: 0.1722 (0.27468402819199994)
     | > loader_time: 0.0269 (0.031688766046003854)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.08270270483834403 (+0.025122506277901795)
     | > avg_loss_text_ce: 0.06435149482318334 (-0.0031878149935177374)
     | > avg_loss_mel_ce: 6.234646729060581 (-0.02836595262799957)
     | > avg_loss: 6.298998083387103 (-0.03155394962855684)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_285.pth

 > EPOCH: 5/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:40:28)

   --> TIME: 2024-10-07 12:41:29 -- STEP: 15/57 -- GLOBAL_STEP: 300
     | > loss_text_ce: 0.06579456478357315 (0.06502530574798585)
     | > loss_mel_ce: 5.848905086517334 (5.77887757619222)
     | > loss: 5.914699554443359 (5.84390287399292)
     | > grad_norm: 0 (0.0)
     | > current_lr: 3e-06
     | > step_time: 0.1809 (0.27963315645853676)
     | > loader_time: 0.0078 (0.014228073755900066)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.05031810488019671 (-0.032384599958147325)
     | > avg_loss_text_ce: 0.06355198632393565 (-0.0007995084992476892)
     | > avg_loss_mel_ce: 6.228910718645368 (-0.005736010415213322)
     | > avg_loss: 6.292462757655552 (-0.006535325731550579)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_342.pth

 > EPOCH: 6/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:42:02)

   --> TIME: 2024-10-07 12:43:01 -- STEP: 8/57 -- GLOBAL_STEP: 350
     | > loss_text_ce: 0.06599541008472443 (0.06467900751158595)
     | > loss_mel_ce: 5.786276340484619 (5.757885277271271)
     | > loss: 5.852271556854248 (5.822564244270325)
     | > grad_norm: 0 (0.0)
     | > current_lr: 5e-06
     | > step_time: 0.2701 (0.33226513862609863)
     | > loader_time: 0.0098 (0.025070995092391968)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.05360582896641323 (+0.003287724086216519)
     | > avg_loss_text_ce: 0.06068255486232894 (-0.00286943146160671)
     | > avg_loss_mel_ce: 6.201059818267822 (-0.027850900377545784)
     | > avg_loss: 6.261742387499128 (-0.03072037015642426)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_399.pth

 > EPOCH: 7/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:43:37)

   --> TIME: 2024-10-07 12:44:33 -- STEP: 1/57 -- GLOBAL_STEP: 400
     | > loss_text_ce: 0.06220836192369461 (0.06220836192369461)
     | > loss_mel_ce: 5.4344892501831055 (5.4344892501831055)
     | > loss: 5.496697425842285 (5.496697425842285)
     | > grad_norm: 0 (0.0)
     | > current_lr: 3e-06
     | > step_time: 0.6615 (0.6614542007446289)
     | > loader_time: 0.0078 (0.007844924926757812)

   --> TIME: 2024-10-07 12:44:56 -- STEP: 51/57 -- GLOBAL_STEP: 450
     | > loss_text_ce: 0.057098638266325 (0.06138909958741244)
     | > loss_mel_ce: 5.471390247344971 (5.594047686632941)
     | > loss: 5.528489112854004 (5.655436786950804)
     | > grad_norm: 0 (0.0)
     | > current_lr: 3e-06
     | > step_time: 0.181 (0.2878768210317575)
     | > loader_time: 0.0088 (0.038824216992247355)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.07615266527448382 (+0.02254683630807059)
     | > avg_loss_text_ce: 0.058887094259262085 (-0.0017954606030668521)
     | > avg_loss_mel_ce: 6.185043743678501 (-0.016016074589320972)
     | > avg_loss: 6.243930816650391 (-0.01781157084873719)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_456.pth

 > EPOCH: 8/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:45:13)

   --> TIME: 2024-10-07 12:46:26 -- STEP: 44/57 -- GLOBAL_STEP: 500
     | > loss_text_ce: 0.057373046875 (0.05940423169257966)
     | > loss_mel_ce: 5.046512126922607 (5.508131341500716)
     | > loss: 5.103885173797607 (5.567535552111539)
     | > grad_norm: 0 (0.0)
     | > current_lr: 5e-06
     | > step_time: 0.1904 (0.2714917063713074)
     | > loader_time: 0.0078 (0.032041788101196296)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.09192051206316267 (+0.015767846788678846)
     | > avg_loss_text_ce: 0.05702140714441027 (-0.0018656871148518134)
     | > avg_loss_mel_ce: 6.170133590698242 (-0.014910152980259106)
     | > avg_loss: 6.227154936109271 (-0.016775880541120003)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_513.pth

 > EPOCH: 9/10
 --> J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094

 > TRAINING (2024-10-07 12:46:48)

   --> TIME: 2024-10-07 12:48:02 -- STEP: 37/57 -- GLOBAL_STEP: 550
     | > loss_text_ce: 0.05594145506620407 (0.05727484087283547)
     | > loss_mel_ce: 4.725022792816162 (5.45236247294658)
     | > loss: 4.780964374542236 (5.509637330029462)
     | > grad_norm: 0 (0.0)
     | > current_lr: 3e-06
     | > step_time: 0.1255 (0.316402834814948)
     | > loader_time: 0.0289 (0.03036097578100256)

 > EVALUATION

   --> EVAL PERFORMANCE
     | > avg_loader_time: 0.05717955316816058 (-0.03474095889500209)
     | > avg_loss_text_ce: 0.05570900227342333 (-0.0013124048709869385)
     | > avg_loss_mel_ce: 6.155474867139544 (-0.014658723558698128)
     | > avg_loss: 6.211183888571603 (-0.015971047537667538)
 > BEST MODEL : J:\all_talk_V2_beta\alltalk_tts\finetune\Pedro_lab_new_tokenizer\training\XTTS_FT-October-07-2024_12+31PM-0c95094\best_model_570.pth
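The eval `avg_loss` values above fall every epoch (6.504 → 6.211), which is why a new best model checkpoint is saved after each evaluation. A minimal sketch of pulling that trend out of a log like this one with a regex (the `eval_losses` helper and the inlined excerpt are illustrative, not part of alltalk_tts; the data are the ten `avg_loss` lines from the run above):

```python
import re

# The ten per-epoch "avg_loss" eval lines from the training log above.
LOG_EXCERPT = """\
 | > avg_loss: 6.504272188459124 (+0)
 | > avg_loss: 6.4385416848318915 (-0.06573050362723265)
 | > avg_loss: 6.3737038884844095 (-0.06483779634748199)
 | > avg_loss: 6.3305520330156595 (-0.04315185546875)
 | > avg_loss: 6.298998083387103 (-0.03155394962855684)
 | > avg_loss: 6.292462757655552 (-0.006535325731550579)
 | > avg_loss: 6.261742387499128 (-0.03072037015642426)
 | > avg_loss: 6.243930816650391 (-0.01781157084873719)
 | > avg_loss: 6.227154936109271 (-0.016775880541120003)
 | > avg_loss: 6.211183888571603 (-0.015971047537667538)
"""

def eval_losses(text: str) -> list[float]:
    """Return each avg_loss value from the eval blocks, in epoch order."""
    return [float(v) for v in re.findall(r"avg_loss: ([0-9.]+)", text)]

losses = eval_losses(LOG_EXCERPT)

# Eval loss decreased monotonically across all ten epochs of this run.
assert all(a > b for a, b in zip(losses, losses[1:]))

# In this run the reported total loss is the sum of the two components:
# e.g. step 0 above, loss_text_ce + loss_mel_ce ~= loss (float rounding aside).
assert abs((0.09054956585168839 + 7.118991374969482) - 7.209540843963623) < 1e-6

print(f"eval avg_loss, epoch 0 -> 9: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note the per-epoch drop shrinks toward the end (about -0.066 early versus -0.016 at epoch 9), so the eval loss is flattening even though the training loss is still falling faster.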