FAQ

Which computer vision frameworks do you use?

VFRAME is built with Python and uses OpenCV, Caffe, and PyTorch for image processing. It builds on existing research projects such as YOLO/Darknet for object detection and the Fast.ai codebase for image classification. Many thanks to the researchers who open-source their code.
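As a small illustration of the kind of logic object-detection pipelines like YOLO rely on (this is a generic sketch, not VFRAME's code): detections are typically compared against ground-truth annotations using intersection-over-union (IoU) of their bounding boxes.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Returns a value in [0, 1]; 1.0 means identical boxes.
    """
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.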

Where do you receive funding?

VFRAME is currently funded by a grant from PrototypeFund (BMBF). We are seeking additional funding for continued development.

Who works on VFRAME?

VFRAME is developed by Adam Harvey.

I work for a newsroom. Can we use VFRAME?

The VFRAME tools will be published on GitHub in fall 2018 under the MIT License. However, the tools are designed primarily for small teams working on human rights research.

Why cluster munitions?

The cluster munition detector is a prototype published to illustrate the current research. It is part of a larger visual classification system covering munitions whose use in warfare is prohibited under international law. The current prototype demonstrates a proof-of-concept AO2.5 object detector: this munition appears frequently in documentation of the Syrian conflict, which provides enough imagery to train a detector. Read more about cluster munitions on Wikipedia.

Which other objects do you detect?

A visual weapons classification guide will be published toward the end of the project, with more contextual information about each munition in the VFRAME visual taxonomy.

Can I contribute?

We are looking for assistance annotating images to train neural networks. Email us if you're interested in contributing 5-10 hours per week starting in the fall. Keep in mind that many images contain graphic content.

Ideas for making VFRAME better?

Get in touch