When dealing with large amounts of data (and thus long processing times), the server can unexpectedly crash, shutting down the analysis. It would be great if there were an option to skip files that have already been analyzed.
I made a very quick fix, but I am fairly sure it can be optimized. In the function `analyzeFile` I added:
```python
def analyzeFile(item):

    # Get file path and restore cfg
    fpath = item[0]
    cfg.setConfig(item[1])

    # CUSTOM - CHECK IF THE FILE ALREADY EXISTS
    rpath = fpath.replace(cfg.INPUT_PATH, '')
    rpath = rpath[1:] if rpath[0] in ['/', '\\'] else rpath

    # Check if the result file already exists and, if it does, skip the analysis
    if cfg.RESULT_TYPE == 'table':
        rtype = '.BirdNET.selection.table.txt'
    elif cfg.RESULT_TYPE == 'audacity':
        rtype = '.BirdNET.results.txt'
    else:
        rtype = '.BirdNET.results.csv'
    outname = os.path.join(cfg.OUTPUT_PATH, rpath.rsplit('.', 1)[0] + rtype)

    if os.path.exists(outname):
        print("File {} already exists".format(outname))
    else:
        # DO THE REST OF THE ANALYSIS
        ...
```
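The skip logic above can also be factored into a standalone helper, which makes it easier to test independently of the analysis code. This is only a sketch of the same idea; the function names (`expected_output_path`, `should_skip`) and the use of plain arguments instead of the `cfg` module are my own assumptions, not part of BirdNET's API:

```python
import os

# Suffix the result file gets for each result type
# (defaults to CSV, matching the patch above)
RESULT_SUFFIXES = {
    'table': '.BirdNET.selection.table.txt',
    'audacity': '.BirdNET.results.txt',
}

def expected_output_path(fpath, input_path, output_path, result_type):
    # Hypothetical helper: derive the result-file path that would be
    # written for a given input file.
    rpath = fpath.replace(input_path, '')
    # Drop a leading path separator so os.path.join works as expected
    rpath = rpath[1:] if rpath[0] in ['/', '\\'] else rpath
    rtype = RESULT_SUFFIXES.get(result_type, '.BirdNET.results.csv')
    # Replace the audio extension with the result suffix
    return os.path.join(output_path, rpath.rsplit('.', 1)[0] + rtype)

def should_skip(fpath, input_path, output_path, result_type):
    # Skip the analysis when the result file already exists on disk
    return os.path.exists(
        expected_output_path(fpath, input_path, output_path, result_type))
```

A more robust variant could also compare modification times, so that a result file older than its audio file is regenerated rather than skipped.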