Computer Vision for Human Rights Researchers
VFRAME researches and develops state-of-the-art computer vision technologies for application to human rights research and conflict zone monitoring. VFRAME works with Mnemonic.org to guide the development process of each new technology.
Read about our latest research using 3D-printing and 3D-rendering to generate image training datasets for cluster munition detection. Current work: an AO-2.5RT cluster munition detector with an estimated 70-90% detection accuracy.
Human rights researchers often rely on videos shared online to document war crimes, atrocities, and human rights violations. Manually reviewing these videos is expensive, does not scale, and can cause vicarious trauma. As an increasing number of videos are posted, a new approach is needed to understand these large datasets.
VFRAME is currently working with Syrian Archive and Yemeni Archive, organizations dedicated to documenting war crimes and human rights violations, to develop computer vision tools to address these challenges.
Specifically, VFRAME develops content-based scene summarization algorithms to reduce video processing times, custom object detection models to detect illegal munitions, synthetic datasets to train object detection models, a visual similarity search engine for investigations, and biometric redaction tools to blur faces.
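VFRAME's scene summarization algorithm is not detailed here, but the core idea behind content-based summarization can be sketched simply: keep a frame only when it differs enough from the last kept frame, so long static scenes collapse to a single representative keyframe. A minimal stdlib-only sketch, using toy flat-gray "frames" and a hypothetical difference threshold:

```python
# Hedged sketch of content-based scene summarization by keyframe selection.
# This is NOT VFRAME's actual algorithm; the threshold and frame
# representation (a flat list of grayscale pixel values) are assumptions
# chosen to illustrate the idea with no external dependencies.

def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-size frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def summarize(frames, threshold=10.0):
    """Return indices of keyframes: frames differing from the
    previously kept frame by more than `threshold`."""
    if not frames:
        return []
    keep = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[keep[-1]]) > threshold:
            keep.append(i)
    return keep

# Toy "video": three static scenes of 30 identical frames each.
video = [[10] * 64] * 30 + [[120] * 64] * 30 + [[240] * 64] * 30
print(summarize(video))  # -> [0, 30, 60]
```

Here 90 frames reduce to 3 keyframes, which is the mechanism by which summarization cuts review and processing time; a production system would operate on decoded video frames and a tuned difference metric.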
VFRAME is designed to run offline on a single desktop for archives of up to 100,000 hours of footage. Larger archives can be split into multiple partitions.
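The partitioning scheme is not specified; one straightforward approach is to greedily group videos into partitions whose total duration stays under the single-desktop limit. A minimal sketch, assuming per-video durations in hours and the 100,000-hour cap stated above:

```python
# Hedged sketch: splitting an archive into partitions under a duration cap.
# The greedy grouping strategy here is an illustrative assumption, not
# VFRAME's documented behavior.

def partition_archive(video_hours, cap=100_000):
    """Greedily group per-video durations (in hours) into partitions
    whose total duration stays at or under `cap`."""
    partitions, current, total = [], [], 0.0
    for h in video_hours:
        if current and total + h > cap:
            partitions.append(current)
            current, total = [], 0.0
        current.append(h)
        total += h
    if current:
        partitions.append(current)
    return partitions

# A 250,000-hour archive of 1,000-hour batches splits into three partitions.
archive = [1_000] * 250
parts = partition_archive(archive)
print([sum(p) for p in parts])  # -> [100000, 100000, 50000]
```

Each resulting partition can then be processed independently on its own machine, which is what makes the offline, single-desktop design extend to larger collections.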
Read more about who we are and what we do.