VFRAME is a collection of open-source computer vision software tools designed specifically for human rights researchers working with large datasets of visual media.


VFRAME grew out of discussions at a 2017 Data Investigation Camp run by Tactical Technology Collective in Montenegro. Through meetings with investigative journalists, human rights researchers, and digital activists from around the world, it became clear that computer vision was a much-needed tool in this community, yet existing solutions were either technically immature, scattered across disparate projects, too expensive, or relevant only to consumer applications.

What We Do

VFRAME leverages recent advancements in computer vision to bring practical applications to human rights research. The project is being piloted with researchers at Mnemonic and applied to their work in analyzing large media collections from conflict zones that may contain evidence of atrocities and war crimes. Their work requires locating as many videos as possible that could be used to reconstruct the chain of events leading to violence. But one of the main challenges in this work is the massive scale of visual data. Manually reviewing millions of videos is simply not possible, especially for a small team of experts trained to recognize illegal munitions.

VFRAME aims to assist expert human rights researchers by encoding their knowledge into algorithms that scale to meet the new challenges of OSINT investigations. For example, VFRAME is developing a cluster munition detection algorithm to help automate the analysis of several million videos from Syria and Yemen.
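At its core, this kind of automated triage reduces to scoring sampled frames with a detector and flagging videos whose best frame exceeds a confidence threshold, so that experts review only the flagged subset. The sketch below illustrates the idea in Python; the `triage` function, its parameters, and the `detect` callback are hypothetical illustrations, not VFRAME's actual API.

```python
from typing import Any, Callable, Dict, List, Sequence

def triage(
    videos: Dict[str, Sequence[Any]],  # video id -> frames sampled from that video
    detect: Callable[[Any], float],    # hypothetical detector: frame -> confidence in [0, 1]
    threshold: float = 0.5,
) -> List[str]:
    """Flag videos where any sampled frame scores at or above `threshold`.

    `detect` stands in for a trained munition detector; sampling a handful
    of frames per video keeps a first pass tractable over millions of files.
    """
    flagged = []
    for vid, frames in videos.items():
        # Score every sampled frame; an empty video defaults to 0.0.
        best = max((detect(frame) for frame in frames), default=0.0)
        if best >= threshold:
            flagged.append(vid)
    return flagged
```

A human researcher then reviews only the flagged videos, rather than the entire archive.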

Developing custom object detection algorithms brings new technical challenges. It requires thousands of diverse images that can be annotated and used as training data, which is difficult or outright impossible to collect for illegal munitions that appear infrequently and often at low resolution. To offset this, VFRAME introduces a novel approach that mixes training data from 3D-rendered, 3D-printed, and original sources. 3D modeling is used to recreate objects, which are then randomized in 3D environments and rendered into photorealistic imagery. To overcome the overfitting that results from repeating rigid 3D objects, 3D-printed replicas are fabricated and photographed in real-world settings. This combination provides a useful, and safe, alternative source of training data for dangerous objects such as the AO-2.5RT cluster munition.
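The randomization step can be pictured as sampling fresh scene parameters before each render. The Python sketch below is a minimal, hypothetical illustration of that domain-randomization idea; the parameter names and ranges are assumptions for illustration, not VFRAME's actual pipeline.

```python
import random
from dataclasses import dataclass

@dataclass
class RenderParams:
    """One randomized scene configuration for a synthetic render (illustrative)."""
    yaw: float              # object rotation around the vertical axis, degrees
    pitch: float            # degrees
    roll: float             # degrees
    camera_distance: float  # meters from object to camera
    sun_elevation: float    # degrees above the horizon
    ground_texture: str     # background surface placed under the object

def sample_render_params(rng: random.Random) -> RenderParams:
    # Randomizing pose, camera, and lighting prevents the detector from
    # overfitting to any single rigid configuration of the 3D model.
    return RenderParams(
        yaw=rng.uniform(0.0, 360.0),
        pitch=rng.uniform(-30.0, 30.0),
        roll=rng.uniform(-15.0, 15.0),
        camera_distance=rng.uniform(0.5, 4.0),
        sun_elevation=rng.uniform(10.0, 80.0),
        ground_texture=rng.choice(["sand", "rubble", "asphalt", "grass"]),
    )
```

Each sampled `RenderParams` would drive one render in a tool such as Blender; thousands of such samples yield a pose- and lighting-diverse synthetic dataset from a single 3D model.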

Research and development of VFRAME is or has been supported by the Prototype Fund (DE), the Swedish International Development Cooperation Agency (Sida), Meedan, and NLnet.

NB: Portions of this site may be out of sync with the current state of project development. More technical information is available in the documentation notes at https://github.com/vframeio. The open-source public code is updated after each module is tested, and is typically several months behind the active research code.


Please contact Adam Harvey on Keybase at keybase.io/vframeio, or email "adam" using the PGP key available here.


VFRAME is developed by Adam Harvey, Jules LaPlace, and a group of friends and contributors in Berlin and around the world, with many thanks to the funders, exhibition partners, and supporters.

Adam Harvey / Director
Working in Python on computer vision, image processing, 3D rendering, and systems engineering. Contact
Jules LaPlace / Developer
Working with Python, React, and MySQL on information architecture, image retrieval and interface design.
Josh Evans / 3D
Working with Blender, Substance Painter, photogrammetry, and emerging 3D technologies.


Form Labs
Syrian Archive


Ars Electronica
Beazley Design of the Year Awards 2019

Development Roadmap

| Dates | Objective | Status and Results |
|---|---|---|
| 2021 Q4 | Beta release | - |
| 2021 Q3 | RBK-250 case study | In Progress |
| 2021 Q2 | Improve photorealism, data generation | In Progress |
| 2021 Q1 | Build benchmark datasets | Complete |
| 2020 Q4 | Publish ModelZoo models | Delayed |
| 2020 Q3 | Develop/train/test ModelZoo and Demos | Delayed |
| 2020 Q2 | Feasibility research for Syrian and Yemeni archive | Complete |
| 2020 Q1 | Improvements to image processing pipeline | Complete |
| 2019 Q4 | Synthetic training prototype datasets for AO-2.5RT/M, ShOAB-0.5, BLU-63, PTAB-1M | Complete |
| 2019 Q3 | Prototype release | Complete |
| 2019 Q2 | Develop 3D models of cluster munitions, prototype Blender software | Complete |
| 2019 Q1 | Evaluation of Blender | Complete |
| 2018 Q4 | Evaluation of Unity | Complete |
| 2018 Q3 | 3D modeling, exhibition of concept prototypes at Ars Electronica EXPORT | Complete |




The VFRAME project acknowledges support from the following organizations:

German Federal Ministry of Education and Research - BMBF
Meedan / Check Global
NLnet and NGI0
Prototype Fund

About this site

This site is designed to be privacy-friendly: it uses no third-party analytics to track visits, nor any third-party dependencies that compromise privacy or share data.

The site is built with Markdown and, aside from loading images from a separate self-hosted server, makes no requests to external sites.