Computer Vision for Human Rights Researchers

VFRAME researches and develops state-of-the-art computer vision technologies for human rights research and conflict zone monitoring. VFRAME works with human rights organizations to guide the development of each new technology.

Read about our latest research using 3D printing and 3D rendering to generate image training datasets for cluster munitions. Current work: an AO-2.5RT cluster munition detector with an estimated detection accuracy of 70-90%.

Annotation masks for AO-2.5RT synthetic training data.
Research: This animation shows the annotation masks used for a portion of the AO-2.5RT synthetic training datasets. An open-source model will be released once its detection accuracy, expected to be 70-90% on challenging conflict zone imagery, has been verified.

Current Research and Prototypes

3D Printed Training Data
  • Enriching synthetic data with 3D printed cluster munition replicas for use in image training datasets
3D Rendered Training Data
  • Using 3D modeled scenes to build synthetic training datasets for objects in conflict zones
Cluster Munition Detector
  • Training object detection algorithms to locate illegal munitions
Scene Summarization
  • Content-based scene summarization to find the most representative frames in videos
Media Attribute Analysis
  • Using simple media attributes to understand large OSINT video datasets
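To illustrate the last item above, here is a minimal sketch of media attribute analysis: grouping a video collection by simple attributes such as duration and resolution to get a quick overview before any frame-level processing. This is not VFRAME's actual pipeline; the metadata here is hard-coded, whereas in practice it would be extracted from the files themselves (for example with a tool like ffprobe).

```python
# Hypothetical sketch: summarizing a video dataset by simple media attributes.
# The metadata below is hard-coded for illustration only.
from collections import Counter

videos = [
    {"id": "v1", "duration_s": 34,  "width": 1280, "height": 720},
    {"id": "v2", "duration_s": 612, "width": 1920, "height": 1080},
    {"id": "v3", "duration_s": 45,  "width": 1280, "height": 720},
    {"id": "v4", "duration_s": 8,   "width": 640,  "height": 360},
]

def duration_bucket(seconds):
    """Coarse duration buckets for a quick dataset overview."""
    if seconds < 60:
        return "<1 min"
    if seconds < 600:
        return "1-10 min"
    return ">10 min"

by_duration = Counter(duration_bucket(v["duration_s"]) for v in videos)
by_resolution = Counter(f'{v["width"]}x{v["height"]}' for v in videos)

print(by_duration)    # Counter({'<1 min': 3, '>10 min': 1})
print(by_resolution)  # Counter({'1280x720': 2, '1920x1080': 1, '640x360': 1})
```

Even these coarse counts can reveal, for instance, that most footage in an archive is short, low-resolution phone video, which informs how the heavier analysis stages are budgeted.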


Human rights researchers often rely on videos shared online to document war crimes, atrocities, and human rights violations. Manually reviewing these videos is expensive, does not scale, and can cause vicarious trauma. As an increasing number of videos are posted, a new approach is needed to understand these large datasets.

VFRAME is currently working with Syrian Archive and Yemeni Archive, organizations dedicated to documenting war crimes and human rights violations, to develop computer vision tools to address these challenges.

Specifically, VFRAME develops content-based scene summarization algorithms to reduce video processing times, custom object detection models to locate illegal munitions, synthetic datasets to train those models, a visual similarity search engine for investigations, and biometric redaction tools to blur faces.
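Content-based scene summarization can be sketched roughly as follows: reduce each frame to a compact visual feature (here a color histogram), cluster the features, and keep the frame nearest each cluster center as a representative keyframe. This is a minimal illustration under those assumptions, not VFRAME's actual implementation, which would use stronger features and real decoded video frames.

```python
# Illustrative sketch of content-based scene summarization (assumed approach,
# not VFRAME's implementation): histogram features + k-means, then the frame
# nearest each cluster center is kept as a representative keyframe.
import numpy as np

def frame_histogram(frame, bins=8):
    """Flattened per-channel color histogram, normalized to sum to 1."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(frame.shape[-1])]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def summarize(frames, n_keyframes=3, n_iter=20, seed=0):
    """Return indices of representative frames via plain k-means."""
    feats = np.stack([frame_histogram(f) for f in frames])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_keyframes, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_keyframes):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)
    # Map each final center back to the nearest real frame.
    dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
    return sorted(set(int(dists[:, k].argmin()) for k in range(n_keyframes)))

# Usage with synthetic "frames": three visually distinct groups of scenes.
rng = np.random.default_rng(1)
frames = ([rng.integers(0, 60, (32, 32, 3)) for _ in range(5)] +
          [rng.integers(90, 150, (32, 32, 3)) for _ in range(5)] +
          [rng.integers(200, 255, (32, 32, 3)) for _ in range(5)])
print(summarize(frames, n_keyframes=3))
```

Reviewing a handful of keyframes instead of every frame is what makes a video of several minutes triageable in seconds.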

VFRAME is designed to run offline on a single desktop for archives of up to 100,000 hours of footage. Larger archives can be split into multiple partitions.

Read more about who we are and what we do.

Recent Press


Form Labs
Syrian Archive

Supported By

German Federal Ministry of Education and Research - BMBF
Meedan / Check Global
NLNet and NGI0
Prototype Fund

Exhibitions and Awards

Ars Electronica
Beazley Design of the Year Awards 2019