AI-powered identification of endangered animals.
Wildlife Lens leverages advanced AI image recognition to help conservationists quickly identify and monitor endangered species, making wildlife protection faster and more efficient.
Wildlife Lens 1.0 was purpose-built for the Big Cat Project to identify and monitor some of the world's most threatened big cats, such as cheetahs and lions. The platform addresses two main challenges faced by conservationists:
Significant human hours required for painstaking manual identification of photos through detailed pattern analysis.
Limited scalability as image collections grow while budgets and personnel remain constrained.
With Wildlife Lens 1.0, researchers no longer need to spend hours manually reviewing a vast number of big cat photos from the field. The app allows them to almost instantly and accurately identify specific animals.
Individual big cat identification requires fine-grained visual discrimination despite extreme variation in pose, lighting, visibility, and background.
Traditional tagging and manual photo matching do not scale well and consume enormous amounts of conservation biologists' time when identifying individual animals.
Wildlife Lens's approach uses deep learning to identify individual animals from photographs alone, enabling non-invasive, scalable individual identification and population monitoring.
The main technical challenge is variation in how the same animal appears across images and the visual similarity between different individuals. These issues are intensified by real-world field conditions such as camouflage, low lighting, and blurry images.
Traditional computer vision approaches struggle with the subtle distinctions required, particularly when dealing with naturally camouflaged species or poor lighting conditions in field settings.
Wildlife Lens's deep learning approach instead learns an embedding in which images of the same individual cluster together.
End-to-end pipeline for big cat identification.
Field photography provides raw image data from natural habitats. Images are collected continuously across diverse environmental conditions, times of day, and seasons to build a comprehensive dataset.
Attention mechanisms identify and weight visual features. The network focuses on relevant patterns such as spot configurations, stripe patterns, and distinctive markings while ignoring background noise.
Metric learning maps images to a vector space optimized for identity separation. Images of the same animal form tight, distinct clusters of points in this embedding space, with distances reflecting visual similarity.
A cosine-similarity search returns ranked matches from the database for biologists to cross-reference against existing records. When a new image is processed, the system computes its embedding vector and retrieves the most similar known individuals, enabling rapid identification.
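To make the retrieval step concrete, here is a minimal Python sketch assuming embeddings are stored as NumPy arrays; the names top_k_matches, reference_embeddings, and individual_ids are illustrative, not part of any Wildlife Lens API.

```python
import numpy as np

def top_k_matches(query_embedding, reference_embeddings, individual_ids, k=5):
    """Rank known individuals by cosine similarity to a query embedding.

    query_embedding:      (d,) vector computed for the new field photo
    reference_embeddings: (n, d) matrix of embeddings for known individuals
    individual_ids:       length-n sequence of IDs aligned with the rows above
    """
    # Normalize so that a dot product equals cosine similarity.
    q = query_embedding / np.linalg.norm(query_embedding)
    refs = reference_embeddings / np.linalg.norm(
        reference_embeddings, axis=1, keepdims=True)
    similarities = refs @ q
    # Indices of the k most similar reference images, best match first.
    top = np.argsort(-similarities)[:k]
    return [(individual_ids[i], float(similarities[i])) for i in top]
```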
The Wildlife Lens model is built on a convolutional backbone augmented with attention modules to emphasize relevant regions for ID. A dual-branch architecture captures both global morphology and localized spot pattern features.
The backbone is optimized for recognition with deep residual connections. Multi-scale features capture patterns at different spatial resolutions, from broad body characteristics to small textural details.
Channel and spatial attention mechanisms dynamically weight distinctive features. This allows the model to focus on biologically relevant regions like spot patterns, tail stripes, and body markings that remain consistent across different viewing angles and conditions.
Global and local feature integration provides a comprehensive identity representation, combining whole-body morphology with fine-grained pattern details to create embeddings that are robust to pose and lighting while remaining discriminative between individuals.
Low-dimensional embedding optimized for identity separation. Designed for efficient retrieval and scalability to large databases. The embedding space is structured such that images of the same individual cluster tightly together while maintaining clear separation between different individuals.
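To make the architecture concrete, here is a simplified PyTorch sketch. It assumes a ResNet-50 backbone, a CBAM-style attention block, average/max pooling standing in for the global and local branches, and a 128-dimensional embedding; all of these choices, and the class names, are illustrative approximations rather than the actual Wildlife Lens model.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ChannelSpatialAttention(nn.Module):
    """CBAM-style block: re-weight channels, then spatial locations."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)  # channel attention
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)  # spatial attention

class WildlifeEmbedder(nn.Module):
    """Residual backbone + attention + global/local branches -> embedding."""

    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to the final feature map: (B, 2048, h, w).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = ChannelSpatialAttention(2048)
        self.global_pool = nn.AdaptiveAvgPool2d(1)  # whole-body morphology
        self.local_pool = nn.AdaptiveMaxPool2d(1)   # peaked local pattern cues
        self.head = nn.Linear(2048 * 2, embed_dim)

    def forward(self, images):
        feats = self.attention(self.features(images))
        global_branch = self.global_pool(feats).flatten(1)
        local_branch = self.local_pool(feats).flatten(1)
        emb = self.head(torch.cat([global_branch, local_branch], dim=1))
        # Unit-length embeddings keep distances bounded and comparable.
        return nn.functional.normalize(emb, dim=1)
```

Because the embeddings are unit-normalized, squared Euclidean distance and cosine similarity rank candidates identically, so training with Euclidean distances stays consistent with the cosine-similarity retrieval step.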
Wildlife Lens is trained using hard triplet loss, a metric learning objective designed to maximize identity separation in embedding space. Training samples are organized into triplets consisting of an anchor image, a positive image of the same individual, and a negative image of a different individual.
For each anchor image, the model computes the squared Euclidean distance between its embedding and the embeddings of the hardest positive and hardest negative samples within the batch. The hardest positive is the sample of the same individual that is farthest from the anchor, while the hardest negative is the sample of a different individual that is closest to the anchor.
dₐₚ = ‖f(A) − f(P_hard)‖²
dₐₙ = ‖f(A) − f(N_hard)‖²
Compute the squared Euclidean distance for the hardest positive and hardest negative, where f is the embedding function, A is the anchor, P_hard is the hardest positive, and N_hard is the hardest negative.
The triplet loss is calculated by comparing these distances and enforcing a margin between them. The loss penalizes cases where the hardest negative is not sufficiently farther from the anchor than the hardest positive, encouraging the model to separate identities by at least a margin α in embedding space.
Lᵢ = max(dₐₚ − dₐₙ + α, 0)
Encourage the anchor to be closer to the positive than the negative by at least margin α, where Lᵢ is the loss for anchor i, dₐₚ is the anchor–positive distance, dₐₙ is the anchor–negative distance, α is the margin, and taking the maximum with 0 ensures a non-negative loss.
The total hard triplet loss is obtained by summing the individual triplet losses across all anchors in the batch. This batch-level aggregation ensures that training emphasizes the most difficult identity distinctions, leading to more compact intra-individual clusters and stronger inter-individual separation.
L_hard = Σᵢ₌₁ᵐ Lᵢ
Sum over all triplets in the batch, where L_hard is the total hard triplet loss, Lᵢ is the loss for anchor i, and m is the number of anchors in the batch.
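The batch-hard mining and loss described above can be written compactly in PyTorch. The sketch below is a minimal reference implementation assuming each batch contains several photos per individual; the 0.3 default margin is an illustrative value, not a documented Wildlife Lens setting.

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet loss over squared Euclidean distances.

    embeddings: (m, d) float tensor, one row per anchor image in the batch
    labels:     (m,) tensor of individual IDs
    margin:     the separation margin alpha
    """
    # All pairwise squared Euclidean distances within the batch.
    dists = torch.cdist(embeddings, embeddings, p=2).pow(2)

    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (m, m) same-ID mask

    # Hardest positive: the farthest same-ID sample for each anchor.
    d_ap = (dists * same.float()).max(dim=1).values
    # Hardest negative: the closest different-ID sample for each anchor.
    d_an = dists.masked_fill(same, float("inf")).min(dim=1).values

    # L_i = max(d_ap - d_an + alpha, 0), summed over all anchors.
    return torch.clamp(d_ap - d_an + margin, min=0.0).sum()
```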
Wildlife Lens evaluates embedding quality using train/validation/test splits and cross-referencing against a reference database. Pairwise similarity, precision-recall curves, confusion matrices, and intra-/inter-individual distance analysis measure how well the model separates and clusters individual animals.
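As one illustration of the intra-/inter-individual distance analysis, the following sketch computes summary statistics over a labeled embedding set; the function name and report keys are hypothetical.

```python
import numpy as np

def distance_separation_report(embeddings, labels):
    """Summarize intra- vs. inter-individual embedding distances.

    embeddings: (n, d) array; labels: (n,) array of individual IDs.
    Assumes the set contains multiple photos of at least one individual.
    """
    dists = np.linalg.norm(
        embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diagonal = ~np.eye(len(labels), dtype=bool)

    intra = dists[same & off_diagonal]  # same individual, different photos
    inter = dists[~same]                # different individuals
    return {
        "mean_intra": float(intra.mean()),
        "mean_inter": float(inter.mean()),
        # Well below 1 indicates tight clusters with clear separation.
        "separation_ratio": float(intra.mean() / inter.mean()),
    }
```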
The model is retrained with additional data when performance thresholds are not met, reducing misidentifications and improving overall accuracy.
Biologists verify matches and inspect failure cases to ensure predictions align with conservation expectations. Attention maps highlight which animal regions the model relies on, guiding interpretability, debugging, and targeted retraining, while the ranked match output offers an additional means of cross-referencing, as sketched below.
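One simple way to extract such attention maps, building on the hypothetical WildlifeEmbedder sketch above, is to hook the spatial attention gate and upsample its mask to image resolution; the helper below is illustrative only.

```python
import torch

def attention_heatmap(model, image_tensor):
    """Upsample the spatial attention mask to image resolution.

    model:        the WildlifeEmbedder sketch defined earlier
    image_tensor: (3, H, W) preprocessed input image
    """
    captured = {}

    def hook(module, inputs, output):
        captured["mask"] = output  # (1, 1, h, w) spatial attention weights

    # Hook the spatial gate so we can read out its sigmoid mask.
    handle = model.attention.spatial_gate.register_forward_hook(hook)
    with torch.no_grad():
        model(image_tensor.unsqueeze(0))
    handle.remove()

    mask = torch.nn.functional.interpolate(
        captured["mask"], size=image_tensor.shape[1:],
        mode="bilinear", align_corners=False)
    return mask.squeeze().numpy()  # (H, W) heatmap in [0, 1]
```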
Observing Wildlife Lens in action and the species it protects.
Demonstration of Wildlife Lens in action.
Interview with conservation biologist about technology and conservation.
"Technology will help facilitate the pace of the [Big Cat] project."
During its pilot deployment for the Big Cat Project in Tanzania, Wildlife Lens proved more than 75% faster and markedly more accurate than traditional manual identification methods.
Wildlife Lens enables identification of individual animals without physical contact, tagging, or sedation. This reduces stress on animals, eliminates handling, and lowers costs. Identification is performed from photographs taken at a distance, preserving natural behaviors and avoiding disturbance.
Reliable identification across multiple sightings allows conservationists to accurately analyze populations, movement patterns, survival rates, breeding success, and social structures. Long-term monitoring is possible without the tag failures or animal mortality risks associated with physical marking.
Photo-based identification minimizes the need for invasive tags, cutting costs and complexity. Standard phones, cameras, and laptops are sufficient for ID, enabling broader geographic monitoring and reducing dependency on external support and specialized tools.