Wildlife conservation has been slow to adopt the latest technology, which could help address the current mass-extinction crisis and save species. The goal is to accelerate this uptake.
-
Wildlife.ai is a charitable trust that uses artificial intelligence to accelerate wildlife conservation.
-
Wildlife.ai’s purpose is to ensure artificial intelligence is widely applied to protect biodiversity.
-
Wildlife.ai works with grassroots wildlife conservation projects and develops open-source solutions using machine learning.
-
Wildlife.ai also organises community events, seminars and educational activities to build and maintain machine learning solutions to reduce the current rate of species extinction.
-
Wildlife.ai wants to build an open-source wildlife camera that gets triggered based on the movement of target animals, identifies the species on the device and reports the observation in near real-time to biologists, enabling more efficient species conservation efforts worldwide.
-
It is a smart camera that captures and records a variety of smaller animals, uses AI to identify them, and provides relevant analytics on the animal population to local biologists.
-
Traditional camera traps capture larger animals, but they don’t work for smaller animals and they don’t give a full picture of biodiversity.
-
The goal is ongoing multi-species monitoring: the backend system automatically records species identifications, runs biodiversity statistics in the background, and delivers real-time data on the health of an ecosystem in ways that weren’t possible before, while leveraging an open-source community and a larger community of experts.
-
The camera successfully met the local challenge: monitoring 40 species over three weeks. The next step is to take the system global.
-
The Wildlife Watcher design will be open-sourced. An onboarding/enrolment system could be an optional part of the architecture.
-
Wildlife Watcher currently relies on the following modules: Jupyter, GBIF, W&B, Zenodo, iNaturalist, Edge Impulse, FiftyOne. The goal is to find a solution that connects the in-house software with these modules to make the most of the devices.
GBIF, the Global Biodiversity Information Facility, is an international organisation that focuses on making scientific data on biodiversity available via the Internet using web services. It is an international network and data infrastructure funded by the world’s governments.
W&B, or Weights & Biases, helps you streamline your ML workflow from end to end.
Zenodo is a general-purpose open repository for research papers, data sets, research software, reports, and other digital artifacts.
iNaturalist is an American 501(c)(3) nonprofit social network of naturalists, citizen scientists, and biologists built on the concept of mapping and sharing observations of biodiversity across the globe. iNaturalist may be accessed via its website or from its mobile applications.
FiftyOne is an open-source tool for building high-quality datasets and computer vision models.
Edge Impulse is the leading development platform for machine learning on edge devices. Edge computing devices include IoT sensors, smart cameras, uCPE equipment, servers and processors.
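As a small illustration of what connecting to one of these modules might look like, the sketch below builds a request URL for GBIF’s public species-match endpoint, which resolves a scientific name against the GBIF backbone taxonomy. Only the URL is constructed here (no network call); in practice the in-house software would issue an HTTP GET against it.

```python
from urllib.parse import urlencode

# GBIF's REST API exposes a species-match endpoint; we only build the
# request URL offline here.
GBIF_MATCH = "https://api.gbif.org/v1/species/match"

def gbif_match_url(scientific_name):
    """Return the GBIF species-match request URL for a given name."""
    return f"{GBIF_MATCH}?{urlencode({'name': scientific_name})}"

print(gbif_match_url("Puma concolor"))
# → https://api.gbif.org/v1/species/match?name=Puma+concolor
```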
-
Target users: biologists and nature enthusiasts (hundreds of users, worldwide) with no links to Wildlife.ai.
-
Users are likely to have multiple cameras.
-
Users should be able to communicate with the camera using a mobile app (to turn the cameras on/off and adjust settings without opening the enclosures).
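The app-to-camera protocol isn’t specified yet; the sketch below shows one possible shape for a settings message sent over a local link (e.g. Bluetooth). All field names and the JSON encoding are assumptions for illustration, not a fixed design.

```python
import json

# Hypothetical settings command the mobile app could send to a camera;
# field names ("id", "rec", "clip_s") are placeholders, not a spec.
def build_settings_command(camera_id, recording, clip_seconds=10):
    """Serialise a settings update as compact JSON bytes."""
    if not 5 <= clip_seconds <= 10:  # clips are expected to be 5-10 s long
        raise ValueError("clip_seconds must be between 5 and 10")
    cmd = {"id": camera_id, "rec": recording, "clip_s": clip_seconds}
    return json.dumps(cmd, separators=(",", ":")).encode()

msg = build_settings_command("cam-01", recording=True, clip_seconds=8)
print(msg)  # b'{"id":"cam-01","rec":true,"clip_s":8}'
```

A compact binary encoding (e.g. CBOR) would be a natural refinement for constrained links, but JSON keeps the sketch readable.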
-
The preference for camera control is local setup rather than centralized control.
-
Users should be able to analyse the videos using common camera-trap labelling platforms; open-source options include Wildlife Insights, TrapTagger and Trapper.
-
Users should be able to publish frames from the videos to iNaturalist for experts to help with the identification of the species
-
Users should be able to easily train edge models using their own labelled videos and upload the models to the cameras (possibly using third-party services like Roboflow, Edge Impulse or TensorFlow Lite).
-
Users should be able to publish the species occurrences to GBIF using the Camtrap DP data exchange format.
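A single observation in that workflow might look like the sketch below. The field names follow the Camtrap DP observations table, but they should be checked against the current version of the standard before anything is published to GBIF; the values are invented for illustration.

```python
import json

# Sketch of one Camtrap DP-style observation row; verify field names
# against the published Camtrap DP standard before use.
observation = {
    "observationID": "obs-0001",
    "deploymentID": "dep-nz-01",
    "eventStart": "2024-03-01T02:15:00Z",
    "eventEnd": "2024-03-01T02:15:08Z",
    "observationType": "animal",
    "scientificName": "Mustela erminea",
    "count": 1,
    "classificationMethod": "machine",       # identified on-device
    "classificationProbability": 0.92,       # model confidence, 0-1
}
print(json.dumps(observation, indent=2))
```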
-
Cameras should be able to process the footage on the device and send a small alert message to the users via LoRaWAN, 3G or satellite.
-
The footage will be stored on an SD card and processed on the device. Users will be able to retrieve the footage from the SD card. Cameras will be able to send small alert messages to the users via LoRaWAN, 3G or satellite, but not the actual footage.
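Because LoRaWAN payloads are limited to tens of bytes, an alert can only carry a few fields. A minimal sketch, assuming a made-up 7-byte layout (species code, confidence, timestamp) rather than any agreed message format:

```python
import struct

# Hypothetical 7-byte alert payload: the camera sends only a species
# code, a confidence value and a timestamp -- never the footage itself.
def pack_alert(species_code, confidence, epoch_seconds):
    """Pack big-endian: uint16 species, uint8 confidence %, uint32 time."""
    return struct.pack(">HBI", species_code, round(confidence * 100), epoch_seconds)

payload = pack_alert(species_code=42, confidence=0.87, epoch_seconds=1_700_000_000)
print(len(payload))  # 7 bytes
```

Keeping the payload this small also leaves room for LoRaWAN’s lowest data rates, which have the tightest size limits.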
-
Most of the cameras will be deployed in remote areas, so there will be no internet connection.
-
Videos would be around 5-10 seconds long
Wildlife Insights streamlines decision-making by providing machine learning models and other tools to manage, analyze and share camera trap data.
TrapTagger is an open-source web application that uses the latest artificial intelligence technologies, in combination with highly-optimised manual annotation interfaces, to process camera-trap data.
Trapper is an open-source, Django-based web application for data management in camera-trapping studies, supporting organisation and spatio-temporal querying of picture and video files.
Roboflow is an easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
TensorFlow Lite is a set of tools that enables on-device machine learning by helping developers run their models on mobile, embedded, and edge devices. TensorFlow is a free and open-source machine learning library.
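The 5-10 second clips above make a rough storage budget easy to estimate. The figures below are assumptions, not from the brief: roughly 1 MB per clip at a modest resolution, a 32 GB card, and 50 triggers per day.

```python
# Back-of-envelope SD card budget; all constants are assumed values.
CLIP_MB = 1.0           # assumed size of one 5-10 s clip
CARD_GB = 32            # assumed SD card capacity
TRIGGERS_PER_DAY = 50   # assumed trigger rate at a busy site

clips_per_card = int(CARD_GB * 1024 / CLIP_MB)
days_until_full = clips_per_card // TRIGGERS_PER_DAY
print(clips_per_card, days_until_full)  # 32768 clips, 655 days
```

Under these assumptions storage is unlikely to be the limiting factor; battery life and the alert link matter more.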
-
The camera hardware will be a combination of ultra-low-power microcontrollers (up to 512 KB Flash) and interchangeable modules (e.g. optical sensor, IR lights, transceiver module, batteries) enclosed in a watertight, 3D-printed enclosure.
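The 512 KB flash limit is the key constraint for on-device models, since the quantised model shares flash with the firmware. A quick budget sketch, with firmware and headroom figures that are assumptions rather than measured values:

```python
# Hypothetical flash budget for a 512 KB microcontroller.
FLASH_KB = 512
FIRMWARE_KB = 200  # assumed drivers + inference runtime footprint
RESERVE_KB = 50    # assumed headroom for updates and logs

model_budget_kb = FLASH_KB - FIRMWARE_KB - RESERVE_KB
print(model_budget_kb)  # 262 KB left for the quantised model
```

This is why int8-quantised models and tooling like Edge Impulse or TensorFlow Lite for Microcontrollers matter for this hardware class.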
-
The API for the specific camera hasn’t been selected yet; this allows teams to specify what behaviour they might need from the hardware, which will help the team choose appropriate hardware.