VFRAME researches and develops state-of-the-art computer vision technologies for human rights research and conflict zone monitoring #
VFRAME is developed and maintained by Adam Harvey in Berlin with contributions from Jules LaPlace, Josh Evans, and a growing list of collaborators. VFRAME is being piloted with Mnemonic.org, an organization dedicated to helping human rights defenders effectively use digital documentation of human rights violations and international crimes to support advocacy, justice, and accountability.
VFRAME’s command line interface (CLI) image processing software and detection models are open source under MIT licenses and available at github.com/vframeio.
Recent News and Events #
- August 2022: VFRAME announces partnership with Tech 4 Tracing to access munitions for photogrammetry scanning and benchmark dataset captures
- June 2022: VFRAME presents at United Nations Eighth Biennial Meeting of States on Small Arms and Light Weapons in NYC
- February 2022: New video shows how VFRAME uses 3D rendered synthetic image training data to build object detection algorithms for cluster munitions
- December 2021: VFRAME featured in the Financial Times: Researchers train AI on ‘synthetic data’ to uncover Syrian war crimes
- November 2021: VFRAME launches DFACE.app, a privacy-focused web app to detect and blur faces in protest imagery. Code: github.com/vframeio/dface
Origins and Mission #
The idea for VFRAME grew out of discussions at a 2017 Data Investigation Camp organized by Tactical Technology Collective in Montenegro. Meetings with investigative journalists, human rights researchers, and digital activists from around the world made it clear that computer vision was a much-needed tool for this community, yet solutions were nowhere in sight.
Since then, VFRAME’s mission has been to research, prototype, and deploy computer vision systems that accelerate the capabilities of human rights researchers and investigative journalists.
More precisely, the VFRAME project develops two main technologies: a core image processing engine to analyze large collections of videos (millions) or images (billions), and a synthetic data rendering system to generate high-fidelity training data. Together, these two approaches are used to develop accurate object detectors capable of locating objects of interest (e.g. an RBK-250 tailfin or a 9N235 submunition) in millions of video files culled from online sources.
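At a high level, the detection workflow pairs a frame source with a trained detector and keeps only confident hits for human review. The sketch below is purely illustrative and is not VFRAME's actual code: the `Detection` record, `scan_video` function, and the confidence threshold are hypothetical names chosen for this example, and the detector is a stand-in for a trained neural network.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detector hit in one frame (illustrative structure, not VFRAME's)."""
    label: str          # e.g. a munition class name
    frame_index: int    # which decoded frame the object appeared in
    confidence: float   # detector score in [0, 1]

def scan_video(frames, detector, threshold=0.85):
    """Run a detector over decoded frames, keeping high-confidence hits.

    `frames` is any iterable of decoded frames; `detector` is a callable
    returning (label, confidence) pairs for a single frame. Both are
    assumptions for this sketch.
    """
    hits = []
    for i, frame in enumerate(frames):
        for label, confidence in detector(frame):
            if confidence >= threshold:
                hits.append(Detection(label, i, confidence))
    return hits
```

In a real pipeline the frame iterable would come from a video decoder and the detector from a model trained on the synthetic data described above; the skeleton only shows how per-frame detections are filtered and indexed back to their source frames.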
Stay tuned for more updates during 2023, including the release of the RBK-250, RBK-500, Uragan, Smerch, and AO-25RT submunition detectors.
Team #
Contact #
- For inquiries about using VFRAME technologies, please contact research at vframe.io
- Tip: for the fastest reply, use a private and secure email provider like ProtonMail
- Insecure emails that contain tracking links or tracking pixels may be automatically deleted
Press Archive #
- Financial Times: Researchers train AI on ‘synthetic data’ to uncover Syrian war crimes
- Der Spiegel: How Artificial Intelligence Helps Solve War Crimes
- Wall Street Journal: AI emerges as Crucial Tool for Groups Seeking Justice for Syria War Crimes (paywall)
- MIT Technology Review: Human rights activists want to use AI to help prove war crimes in court
- Deutsche Welle: VFRAME and Syrian Archive discuss new technologies for analyzing conflict zone videos (Spanish) https://www.dw.com/es/enlaces-ventana-abierta-al-mundo-digital/av-48853416
Funding #
Research and development of VFRAME is or has been supported by the PrototypeFund (DE), the Swedish International Development Agency (SIDA), Meedan, and NLnet.
About this site #
This site is designed to be privacy-friendly: it does not use any third-party analytics to track visits, nor any third-party dependencies that compromise privacy or share data. The site is written in Markdown, generated with Hugo, and served as static files using the DigitalOcean App Platform.