diff --git a/NEWS.md b/NEWS.md
index 92185bd..f83fa2a 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -4,7 +4,7 @@
 * Addition of five new functions:
   * `wt_dd_summary()` for querying data from Data Discover. See [APIs](https://abbiodiversity.github.io/wildrtrax/articles/apis.html#data-discover) for more information
-  * `wt_evaluate_classifier()`, `wt_get_threshold()`, and `wt_additional_species()` for wrangling acoustic automated classification results
+  * `wt_evaluate_classifier()`, `wt_get_threshold()`, and `wt_additional_species()` for wrangling acoustic automated classification results. See [Acoustic classifiers](https://abbiodiversity.github.io/wildrtrax/articles/acoustic-classifiers.html) for more information.
   * `wt_add_grts()` to intersect locations with GRTS IDs from [NABat](https://www.nabatmonitoring.org/)
 * `wt_download_tags()` now becomes `wt_download_media()` to support broader media downloads in batch from WildTrax
 * Deprecated `wt_report()`
@@ -13,6 +13,7 @@
 * Switch to `curl::curl_download()` for media and assets
 * Removed dependencies `pipeR`, `progressr`, `jsonlite`, `future`, `furrr`, `tools`, `magrittr`, `markdown`, `rmarkdown` to increase package stability but reduces speed for functions such as `wt_audio_scanner()`, `wt_run_ap()`. Moved `vembedr` to suggests for vignettes
+* Switched `wt_download_report()` to POST requests
 * Lowercase package name

 ---
diff --git a/R/acoustic-pre-processing.R b/R/acoustic-pre-processing.R
index b10ffba..5f50806 100644
--- a/R/acoustic-pre-processing.R
+++ b/R/acoustic-pre-processing.R
@@ -59,7 +59,7 @@ wt_audio_scanner <- function(path, file_type, extra_cols = F) {
   }

   # Create the main tibble
-  df <- z %>%
+  df <- df %>%
     tidyr::unnest(file_path) %>%
     dplyr::mutate(size_Mb = round(purrr::map_dbl(.x = file_path, .f = ~ fs::file_size(.x)) / 10e5, digits = 2), # Convert file sizes to megabytes
                   file_path = as.character(file_path)) %>%
diff --git a/R/analyze-data.R b/R/analyze-data.R
index 782a07e..db69032 100644
--- a/R/analyze-data.R
+++ b/R/analyze-data.R
@@ -213,7 +213,7 @@ wt_summarise_cam <- function(detect_data, raw_data, time_interval = "day",
 #' detections <- wt_ind_detect(x = df, threshold = 30, units = "minutes")
 #' }
 #'
-#' @return A dataframe of independent detections in your camera data, based on the threshold you specified. The df wil include information about the duration of each detection, the number of images, the average number of individual animals per image, and the max number of animals in the detection.
+#' @return A dataframe of independent detections in your camera data, based on the threshold you specified. The df will include information about the duration of each detection, the number of images, the average number of individual animals per image, and the max number of animals in the detection.

 wt_ind_detect <- function(x, threshold, units = "minutes", datetime_col = image_date_time, remove_human = TRUE, remove_domestic = TRUE) {
diff --git a/R/api.R b/R/api.R
index 4375ba7..4772c8d 100644
--- a/R/api.R
+++ b/R/api.R
@@ -450,13 +450,12 @@ wt_download_media <- function(input, output, type = c("recording","image", "tag_
         purrr::map2_chr(.$spectrogram_url, .$clip_file_name_spec, ~ curl::curl_download(.x, .y, mode = "wb"))
         purrr::map2_chr(.$clip_url, .$clip_file_name_audio, ~ curl::curl_download(.x, .y, mode = "wb"))
       }
-  } else if ("media_url" %in% colnames(data)){
+  } else if ("media_url" %in% colnames(input_data)){
     output_data <- input_data %>%
-      mutate(image_name = file.path(output, paste0(location, "_", format(parse_date_time(recording_date_time, "%Y-%m-%d %H:%M:%S"), "%Y%m%d_%H%M%S"),".jpeg"))) %>%
+      mutate(image_name = file.path(output, "/", paste0(location, "_", format(parse_date_time(image_date_time, "%Y-%m-%d %H:%M:%S"), "%Y%m%d_%H%M%S"),".jpeg"))) %>%
       {
-        purrr::map2_chr(.$spectrogram_url, .$image_name, ~ curl::curl_download(.x, .y, mode = "wb"))
+        purrr::map2_chr(.$media_url, .$image_name, ~ curl::curl_download(.x, .y, mode = "wb"))
       }
-
   } else {
     stop("Required columns are either 'recording_url', 'media_url', 'spectrogram_url', or 'clip_url'. Use wt_download_report(reports = 'recording', 'image_report' or 'tag') to get the correct media.")
   }
diff --git a/R/classifier-functions.R b/R/classifier-functions.R
index f3b1249..6ec65d6 100644
--- a/R/classifier-functions.R
+++ b/R/classifier-functions.R
@@ -20,7 +20,7 @@
 #' remove_species = TRUE, thresholds = c(10, 99))
 #' }
 #'
-#' @return A tibble containing columsn for precision, recall, and F-score for each of the requested thresholds.
+#' @return A tibble containing columns for precision, recall, and F-score for each of the requested thresholds.

 wt_evaluate_classifier <- function(data, resolution = "recording", remove_species = TRUE, species = NULL, thresholds = c(10, 99)){
diff --git a/R/convenience-functions.R b/R/convenience-functions.R
index c6a54f9..cf9661a 100644
--- a/R/convenience-functions.R
+++ b/R/convenience-functions.R
@@ -664,6 +664,7 @@ wt_add_grts <- function(data, group_locations_in_cell = FALSE) {
 #' Format data for a specified portal
 #'
 #' @description This function takes the WildTrax reports and converts them to the desired format
+#' `r lifecycle::badge("experimental")`
 #'
 #' @param input The report from `wt_download_report()`
 #' @param format A format i.e. 'FWMIS'
diff --git a/README.md b/README.md
index c301d4d..039cd53 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@
 [![R-CMD-check](https://github.com/ABbiodiversity/wildrtrax/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/ABbiodiversity/wildrtrax/actions/workflows/R-CMD-check.yaml)
 [![codecov](https://codecov.io/gh/ABbiodiversity/wildrtrax/branch/main/graph/badge.svg)](https://app.codecov.io/gh/ABbiodiversity/wildrtrax)
 [![CRAN status](https://www.r-pkg.org/badges/version/wildrtrax)](https://CRAN.R-project.org/package=wildrtrax)
+[![Codecov test coverage](https://codecov.io/gh/ABbiodiversity/wildRtrax/branch/main/graph/badge.svg)](https://app.codecov.io/gh/ABbiodiversity/wildRtrax?branch=main)

 ## Overview
diff --git a/codecov.yml b/codecov.yml
index 8df5a55..1819adb 100644
--- a/codecov.yml
+++ b/codecov.yml
@@ -1,60 +1,19 @@
-# Workflow derived from https://github.com/r-lib/actions/tree/v2/examples
-on:
-  push:
-    branches: [main]
-  pull_request:
-    branches: [main]
-
-name: test-coverage
-
-permissions: read-all
-
-jobs:
-  test-coverage:
-    runs-on: ubuntu-latest
-    env:
-      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
-
-    steps:
-      - uses: actions/checkout@v4
-
-      - uses: r-lib/actions/setup-r@v2
-        with:
-          use-public-rspm: true
-
-      - uses: r-lib/actions/setup-r-dependencies@v2
-        with:
-          extra-packages: any::covr, any::xml2
-          needs: coverage
-
-      - name: Test coverage
-        run: |
-          cov <- covr::package_coverage(
-            quiet = FALSE,
-            clean = FALSE,
-            install_path = file.path(normalizePath(Sys.getenv("RUNNER_TEMP"), winslash = "/"), "package")
-          )
-          covr::to_cobertura(cov)
-        shell: Rscript {0}
-
-      - uses: codecov/codecov-action@v4
-        with:
-          fail_ci_if_error: ${{ github.event_name != 'pull_request' && true || false }}
-          file: ./cobertura.xml
-          plugin: noop
-          disable_search: true
-          token: ${{ secrets.CODECOV_TOKEN }}
-
-      - name: Show testthat output
-        if: always()
-        run: |
-          ## --------------------------------------------------------------------
-          find '${{ runner.temp }}/package' -name 'testthat.Rout*' -exec cat '{}' \; || true
-        shell: bash
-
-      - name: Upload test results
-        if: failure()
-        uses: actions/upload-artifact@v4
-        with:
-          name: coverage-test-failures
-          path: ${{ runner.temp }}/package
+comment: false
+
+coverage:
+  status:
+    project:
+      default:
+        target: auto
+        threshold: 1%
+        informational: true
+    patch:
+      default:
+        target: auto
+        threshold: 1%
+        informational: true
+
+- name: Upload coverage reports to Codecov
+  uses: codecov/codecov-action@v4.0.1
+  with:
+    token: ${{ secrets.CODECOV_TOKEN }}
diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml
index fee4be6..552cd85 100644
--- a/docs/pkgdown.yml
+++ b/docs/pkgdown.yml
@@ -10,7 +10,7 @@ articles:
   classifiers-tutorial: classifiers-tutorial.html
   introduction: introduction.html
   tutorials: tutorials.html
-last_built: 2024-07-22T21:47Z
+last_built: 2024-07-23T20:54Z
 urls:
   reference: https://abbiodiversity.github.io/wildrtrax/reference
   article: https://abbiodiversity.github.io/wildrtrax/articles
diff --git a/docs/reference/index.html b/docs/reference/index.html
index d595bba..9298516 100644
--- a/docs/reference/index.html
+++ b/docs/reference/index.html
@@ -125,7 +125,7 @@
 wt_format_data()
-
+ experimental
 Format data for a specified portal
diff --git a/man/wt_download_media.Rd b/man/wt_download_media.Rd
index b2ddb27..3f34b84 100644
--- a/man/wt_download_media.Rd
+++ b/man/wt_download_media.Rd
@@ -7,7 +7,7 @@
 wt_download_media(
   input,
   output,
-  type = c("recording", "tag_clip_audio", "tag_clip_spectrogram")
+  type = c("recording", "image", "tag_clip_audio", "tag_clip_spectrogram")
 )
 }
 \arguments{
@@ -15,7 +15,7 @@ wt_download_media(
 \item{output}{The output folder}

-\item{type}{Either recording, tag_clip_spectrogram or tag_clip_audio}
+\item{type}{Either recording, image, tag_clip_spectrogram, tag_clip_audio}
 }
 \value{
 An organized folder of media. Assigning wt_download_tags to an object will return the table form of the data with the functions returning the after effects in the output directory
@@ -26,7 +26,7 @@ Download acoustic media in batch
 \examples{
 \dontrun{
 dat.report <- wt_download_report() |>
-wt_download_media(output = "my/output/folder")
+wt_download_media(output = "my/output/folder", type = "recording")
 }
 }
diff --git a/man/wt_evaluate_classifier.Rd b/man/wt_evaluate_classifier.Rd
index 9859d46..a045f5e 100644
--- a/man/wt_evaluate_classifier.Rd
+++ b/man/wt_evaluate_classifier.Rd
@@ -24,7 +24,7 @@ wt_evaluate_classifier(
 \item{thresholds}{Numeric; start and end of sequence of score thresholds at which to calculate performance metrics}
 }
 \value{
-A tibble containing columsn for precision, recall, and F-score for each of the requested thresholds.
+A tibble containing columns for precision, recall, and F-score for each of the requested thresholds.
 }
 \description{
 Calculates precision, recall, and F-score of BirdNET for a requested sequence of thresholds. You can request the metrics at the minute level for recordings that are processed with the species per minute method (1SPM). You can also exclude species that are not allowed in the project from the BirdNET results before evaluation.
diff --git a/man/wt_format_data.Rd b/man/wt_format_data.Rd
index f7c1676..e5fb3f1 100644
--- a/man/wt_format_data.Rd
+++ b/man/wt_format_data.Rd
@@ -16,6 +16,7 @@ A tibble with the formatted report
 }
 \description{
 This function takes the WildTrax reports and converts them to the desired format
+\ifelse{html}{\href{https://lifecycle.r-lib.org/articles/stages.html#experimental}{\figure{lifecycle-experimental.svg}{options: alt='[Experimental]'}}}{\strong{[Experimental]}}
 }
 \examples{
 \dontrun{
diff --git a/man/wt_ind_detect.Rd b/man/wt_ind_detect.Rd
index a734212..d6e61e1 100644
--- a/man/wt_ind_detect.Rd
+++ b/man/wt_ind_detect.Rd
@@ -27,7 +27,7 @@ wt_ind_detect(
 \item{remove_domestic}{Logical; Should domestic animal tags (e.g. cows) be removed? Defaults to TRUE.}
 }
 \value{
-A dataframe of independent detections in your camera data, based on the threshold you specified. The df wil include information about the duration of each detection, the number of images, the average number of individual animals per image, and the max number of animals in the detection.
+A dataframe of independent detections in your camera data, based on the threshold you specified. The df will include information about the duration of each detection, the number of images, the average number of individual animals per image, and the max number of animals in the detection.
 }
 \description{
 Create an independent detections dataframe using camera data from WildTrax
diff --git a/vignettes/classifiers-tutorial.Rmd b/vignettes/classifiers-tutorial.Rmd
index 3c130c0..ee5dcc9 100644
--- a/vignettes/classifiers-tutorial.Rmd
+++ b/vignettes/classifiers-tutorial.Rmd
@@ -217,7 +217,7 @@
 eval[eval$threshold==threshold_use,]
 ```

-A precision at our chosen score threshold of approximately `r round(eval$precision,2)` means that more than half of these detections are likely still false positives, we should probably visually verify to remove those false positives. Given that the overall recall rate of BirdNET is < 10% for precision values above 0.7, the detections should be used with caution in ecological analyses. From a detectability perspective, a recall rate of 10% means that your detection probability with BirdNET is 10% of what it would be with a human listener.
+A precision at our chosen score threshold of approximately `r round(eval[eval$threshold==threshold_use,]$precision,3)` means that roughly a third of these detections are likely still false positives; we should probably verify them visually to remove the false positives. Given that the overall recall rate of BirdNET is < 10% for precision values above 0.7, the detections should be used with caution in ecological analyses. From a detectability perspective, a recall rate of 10% means that your detection probability with BirdNET is 10% of what it would be with a human listener.

 ## Check for additional species detected

diff --git a/vignettes/introduction.Rmd b/vignettes/introduction.Rmd
index a65f9e3..68a1afe 100644
--- a/vignettes/introduction.Rmd
+++ b/vignettes/introduction.Rmd
@@ -23,7 +23,7 @@ library(wildrtrax)

 ### What is `wildrtrax`?

-`wildrtrax`, pronounced *'wild-R-tracks'*, is an R package for ecologists and advanced users who work with environmental sensors such as autonomous recording units (ARUs) and remote cameras. It contains functions designed to meet most needs in order to organize, analyze and standardize data with the [WildTrax](https://wwww.wildtrax.ca) infrastructure. `wildrtrax` iis self-contained and must be run under an R statistical environment, and it also depends on many other R packages. `wildrtrax` is free software and distributed under [MIT License (c) 2023](https://github.com/ABbiodiversity/wildrtrax/blob/master/LICENSE).
+`wildrtrax`, pronounced *'wild-R-tracks'*, is an R package for ecologists and advanced users who work with environmental sensors such as autonomous recording units (ARUs) and remote cameras. It contains functions designed to meet most needs in order to organize, analyze and standardize data with the [WildTrax](https://www.wildtrax.ca) infrastructure. `wildrtrax` is self-contained and must be run under an R statistical environment, and it also depends on many other R packages. `wildrtrax` is free software and distributed under [MIT License (c) 2023](https://github.com/ABbiodiversity/wildrtrax/blob/master/LICENSE).

 ### What is **WildTrax**?
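
For orientation, here is a minimal, hypothetical sketch of how the functions touched by this diff fit together. It assumes prior authentication with `wt_auth()`, elides the project/sensor arguments to `wt_download_report()` (they are not shown in this diff), and uses placeholder objects (`aru_report`, `classifier_data`, `cam_report`) rather than real data; only the signatures and examples visible above are relied on.

```r
library(wildrtrax)

# Authenticate and pull a report; the arguments to wt_download_report()
# are elided here and depend on your project and sensor.
wt_auth()
aru_report <- wt_download_report()

# Batch media download via the renamed wt_download_media(), which now
# also supports type = "image".
wt_download_media(aru_report, output = "my/output/folder", type = "recording")

# Evaluate BirdNET output across a sequence of score thresholds;
# classifier_data is a placeholder for results prepared as described in
# the acoustic-classifiers article linked from NEWS.md.
eval <- wt_evaluate_classifier(classifier_data, resolution = "recording",
                               remove_species = TRUE, thresholds = c(10, 99))

# Camera workflow: independent detections separated by at least 30 minutes.
detections <- wt_ind_detect(x = cam_report, threshold = 30, units = "minutes")
```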