About VFRAME

For code, visit github.com/vframeio

VFRAME (Visual Forensics and Metadata Extraction) is a collection of open-source computer vision software tools designed specifically for human rights researchers working with large datasets of visual media. VFRAME aims to bridge the gap between state-of-the-art research and the practical needs of researchers investigating videos collected from conflict zones.

The project grew out of discussions that took place in 2017 during the Data Investigation Camp run by Tactical Technology Collective in Montenegro. Through meetings with investigative journalists, human rights researchers, and digital activists from around the world, it became clear that computer vision was a much-needed tool in this community, yet existing solutions were either technically underdeveloped, scattered across disparate projects, or only useful for consumer applications.

VFRAME builds on top of existing technologies (OpenCV, TensorFlow, PyTorch, Darknet, FAISS) for its core image processing, and extends their capabilities with custom workflows for large video datasets and custom object detection models for objects specific to conflict zones.
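
To give a rough sense of how these pieces fit together, the sketch below samples frames from a video with OpenCV and runs a Darknet-format detector through OpenCV's dnn module. This is only an illustrative example, not VFRAME's actual pipeline: the file paths, the one-frame-per-second sampling rate, the 0.5 confidence threshold, and the YOLO-style output parsing are all assumptions.

```python
# Minimal sketch: sample frames from a video and run a Darknet-format
# detector with OpenCV's dnn module. Paths and thresholds are placeholders.
import cv2
import numpy as np

CFG = "detector.cfg"          # hypothetical Darknet config file
WEIGHTS = "detector.weights"  # hypothetical trained weights
VIDEO = "example.mp4"         # hypothetical input video

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 25
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Sample roughly one frame per second to keep large videos tractable
    if frame_idx % int(fps) == 0:
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        # Assumes YOLO-style outputs: [cx, cy, w, h, objectness, class scores...]
        for output in net.forward(out_names):
            for det in output:
                scores = det[5:]
                class_id = int(np.argmax(scores))
                confidence = float(scores[class_id])
                if confidence > 0.5:
                    print(frame_idx, class_id, confidence)
    frame_idx += 1

cap.release()
```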

For example, most computer vision software now offers face detection, scene detection, and basic object detection. As impressive as these are, they fall short when the task is analyzing 3 million videos for evidence of cluster munitions on a single workstation.

To make these tools more relevant, VFRAME is developing custom computer vision tools such as content-based scene summarization to iteratively reduce million-scale video datasets, and custom object detection algorithms for objects appearing in conflict zones such as the AO2.5RT and ShOAB0.5 cluster munitions. A simple illustration of the summarization idea is sketched below.
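
The sketch below shows one basic way to reduce a video to representative keyframes by dropping near-duplicate frames with an average-hash comparison. This is only an assumed illustration of content-based summarization, not VFRAME's actual algorithm; the hash size and distance threshold are arbitrary example values.

```python
# Illustrative sketch: reduce a video to representative keyframes by
# dropping near-duplicate frames via a simple average-hash comparison.
import cv2
import numpy as np

def average_hash(frame, size=8):
    """Return a binary perceptual hash of a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def summarize(video_path, max_distance=10):
    """Keep only frames whose hash differs enough from the last kept frame."""
    cap = cv2.VideoCapture(video_path)
    keyframes, last_hash = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = average_hash(frame)
        # Hamming distance between current frame and last kept frame
        if last_hash is None or int(np.count_nonzero(h != last_hash)) > max_distance:
            keyframes.append(frame)
            last_hash = h
    cap.release()
    return keyframes

# Example usage with a hypothetical file:
# frames = summarize("example.mp4")
# print(len(frames), "keyframes kept")
```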

However, a major challenge in developing custom object detection models is first building a training dataset. During 2019, with support from a PrototypeFund grant, VFRAME is working to leverage 3D modeling and photorealistic rendering to create hybridized synthetic/real datasets. These hybridized datasets will enable custom object detection algorithms to be developed for objects in conflict zones when there are not enough real-world example images for training.

VFRAME is developed by Adam Harvey and contributors in Berlin.

Funding

VFRAME has been and continues to be supported by grants from PrototypeFund (BMBF) in Germany.

About this site

This site runs on Pelican with no third-party dependencies, aside from image hosting on an external server, and no tracking analytics.