OpenVINO integration (#134)

* OpenVINO installation
* Separate tf_annotation into tf_annotation and CUDA support components
* TF Annotation app now supports the OpenVINO backend
* Doc for the CUDA component
* OpenVINO README file was added
* OpenVINO env and pip requirements for the model optimizer
* Update logging
* TF annotation README file was added
* Update CHANGELOG
* Keep aspect ratio for images, do not reverse input channels
* Move analytics into components
main
Boris Sekachev 7 years ago committed by Nikita Manovich
parent c4dd686ac6
commit 0b10ba6429

.gitignore

@@ -6,6 +6,7 @@
/.env
/keys
/logs
+/components/openvino/*.tgz
# Ignore temporary files
docker-compose.override.yml

CHANGELOG.md

@@ -12,6 +12,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Ability to draw/change polyshapes (except for points) by slip method. Just press ENTER and move the cursor.
- Ability to switch lock/hide properties via the label UI element (in the right menu) for all objects with the same label.
- Shortcuts for outside/keyframe properties
+- OpenVINO for accelerated model inference
+- Tensorflow annotation now works without CUDA. It can use CPU only. OpenVINO and CUDA are supported optionally.
### Changed
- Polyshape editing method has been improved. You can redraw part of a shape instead of cloning points.

Dockerfile

@@ -13,8 +13,6 @@ ENV LANG='C.UTF-8' \
    LC_ALL='C.UTF-8'
ARG USER
-ARG TF_ANNOTATION
-ENV TF_ANNOTATION=${TF_ANNOTATION}
ARG DJANGO_CONFIGURATION
ENV DJANGO_CONFIGURATION=${DJANGO_CONFIGURATION}

@@ -50,13 +48,28 @@ ENV HOME /home/${USER}
WORKDIR ${HOME}
RUN adduser --shell /bin/bash --disabled-password --gecos "" ${USER}
-# Install tf annotation if need
-COPY cvat/apps/tf_annotation/docker_setup_tf_annotation.sh /tmp/tf_annotation/
-COPY cvat/apps/tf_annotation/requirements.txt /tmp/tf_annotation/
-ENV TF_ANNOTATION_MODEL_PATH=${HOME}/rcnn/frozen_inference_graph.pb
+COPY components /tmp/components
+# OpenVINO toolkit support
+ARG OPENVINO_TOOLKIT
+ENV OPENVINO_TOOLKIT=${OPENVINO_TOOLKIT}
+RUN if [ "$OPENVINO_TOOLKIT" = "yes" ]; then \
+        /tmp/components/openvino/install.sh; \
+    fi
+# CUDA support
+ARG CUDA_SUPPORT
+ENV CUDA_SUPPORT=${CUDA_SUPPORT}
+RUN if [ "$CUDA_SUPPORT" = "yes" ]; then \
+        /tmp/components/cuda/install.sh; \
+    fi
+# Tensorflow annotation support
+ARG TF_ANNOTATION
+ENV TF_ANNOTATION=${TF_ANNOTATION}
+ENV TF_ANNOTATION_MODEL_PATH=${HOME}/rcnn/inference_graph
RUN if [ "$TF_ANNOTATION" = "yes" ]; then \
-    /tmp/tf_annotation/docker_setup_tf_annotation.sh; \
+    bash -i /tmp/components/tf_annotation/install.sh; \
fi
ARG WITH_TESTS
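The three optional components above are switched on through plain Docker build args. For reference, an equivalent manual build without docker-compose might look like this (a sketch, not part of the commit; the `cvat` tag and the build args mirror what the docker-compose.*.yml files later in this diff pass in):

```bash
# Hypothetical manual build enabling all optional components
docker build \
    --build-arg OPENVINO_TOOLKIT=yes \
    --build-arg CUDA_SUPPORT=yes \
    --build-arg TF_ANNOTATION=yes \
    -t cvat .
```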

README.md

@@ -31,26 +31,6 @@ The instructions below should work for `Ubuntu 16.04`. It will probably work on
Please read official manual [here](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/).
-### Install the latest driver for your graphics card
-The step is necessary only to run tf_annotation app. If you don't have a Nvidia GPU you can skip the step.
-```bash
-sudo add-apt-repository ppa:graphics-drivers/ppa
-sudo apt-get update
-sudo apt-cache search nvidia-* # find latest nvidia driver
-sudo apt-get install nvidia-* # install the nvidia driver
-sudo apt-get install mesa-common-dev
-sudo apt-get install freeglut3-dev
-sudo apt-get install nvidia-modprobe
-```
-Reboot your PC and verify installation by `nvidia-smi` command.
-### Install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker)
-The step is necessary only to run tf_annotation app. If you don't have a Nvidia GPU you can skip the step. See detailed installation instructions on repository page.
### Install docker-compose (1.19.0 or newer)
```bash
@@ -61,17 +41,18 @@ sudo pip install docker-compose
To build all necessary docker images run `docker-compose build` command. By default, in production mode the tool uses PostgreSQL as database, Redis for caching.
-### Run containers without tf_annotation app
+### Run docker containers
-To start all containers run `docker-compose up -d` command. Go to [localhost:8080](http://localhost:8080/). You should see a login page.
+To start the default containers run the `docker-compose up -d` command. Go to [localhost:8080](http://localhost:8080/). You should see a login page.
-### Run containers with tf_annotation app
-If you would like to enable tf_annotation app first of all be sure that nvidia-driver, nvidia-docker and docker-compose>=1.19.0 are installed properly (see instructions above) and `docker info | grep 'Runtimes'` output contains `nvidia`.
-Run following command:
+You can include any additional components. Just add the corresponding docker-compose file to the build or run command:
```bash
-docker-compose -f docker-compose.yml -f docker-compose.nvidia.yml up -d --build
+# Build images with CUDA and OpenVINO support
+docker-compose -f docker-compose.yml -f docker-compose.cuda.yml -f docker-compose.openvino.yml build
+# Run containers with CUDA and OpenVINO support
+docker-compose -f docker-compose.yml -f docker-compose.cuda.yml -f docker-compose.openvino.yml up -d
```
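Assuming the component layout introduced by this commit, the same pattern extends to any subset of components; for instance (a sketch using the compose file paths from the component READMEs later in this diff):

```bash
# Hypothetical: build and start CVAT with CUDA, OpenVINO, TF annotation and analytics
docker-compose -f docker-compose.yml \
    -f components/cuda/docker-compose.cuda.yml \
    -f components/openvino/docker-compose.openvino.yml \
    -f components/tf_annotation/docker-compose.tf_annotation.yml \
    -f components/analytics/docker-compose.analytics.yml up -d --build
```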
### Create superuser account
@@ -79,7 +60,7 @@ docker-compose -f docker-compose.yml -f docker-compose.nvidia.yml up -d --build
You can [register a user](http://localhost:8080/auth/register) but by default it will not have rights even to view the list of tasks. Thus you should create a superuser. The superuser can use the admin panel to assign correct groups to the user. Please use the command below:
```bash
-docker exec -it cvat sh -c '/usr/bin/python3 ~/manage.py createsuperuser'
+docker exec -it cvat bash -ic '/usr/bin/python3 ~/manage.py createsuperuser'
```
Type your login/password for the superuser [on the login page](http://localhost:8080/auth/login) and press the **Login** button. Now you should be able to create a new annotation task. Please read the documentation for more details.

components/README.md

@@ -0,0 +1,6 @@
### Additional components for CVAT
* [NVIDIA CUDA](cuda/README.md)
* [OpenVINO](openvino/README.md)
* [Tensorflow Object Detector](tf_annotation/README.md)
* [Analytics](analytics/README.md)

components/analytics/README.md

@@ -1,4 +1,18 @@
-# Analytics for Computer Vision Annotation Tool (CVAT)
+## Analytics for Computer Vision Annotation Tool (CVAT)
+It is possible to proxy annotation logs from the client to ELK. To do that, run the commands below:
+### Build docker image
+```bash
+# From project root directory
+docker-compose -f docker-compose.yml -f components/analytics/docker-compose.analytics.yml build
+```
+### Run docker container
+```bash
+# From project root directory
+docker-compose -f docker-compose.yml -f components/analytics/docker-compose.analytics.yml up -d
+```
At the moment it is not possible to save advanced settings. The values below should be specified manually.

components/analytics/docker-compose.analytics.yml

@@ -8,7 +8,7 @@ services:
      aliases:
        - elasticsearch
    build:
-      context: ./analytics/elasticsearch
+      context: ./components/analytics/elasticsearch
      args:
        ELK_VERSION: 6.4.0
    volumes:

@@ -23,7 +23,7 @@ services:
      aliases:
        - kibana
    build:
-      context: ./analytics/kibana
+      context: ./components/analytics/kibana
      args:
        ELK_VERSION: 6.4.0
    depends_on: ['cvat_elasticsearch']

@@ -32,7 +32,7 @@ services:
  cvat_kibana_setup:
    container_name: cvat_kibana_setup
    image: cvat
-    volumes: ['./analytics/kibana:/home/django/kibana:ro']
+    volumes: ['./components/analytics/kibana:/home/django/kibana:ro']
    depends_on: ['cvat']
    working_dir: '/home/django'
    entrypoint: ['bash', 'wait-for-it.sh', 'elasticsearch:9200', '-t', '0', '--',

@@ -49,7 +49,7 @@ services:
      aliases:
        - logstash
    build:
-      context: ./analytics/logstash
+      context: ./components/analytics/logstash
      args:
        ELK_VERSION: 6.4.0
        http_proxy: ${http_proxy}

components/cuda/README.md

@@ -0,0 +1,41 @@
## [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)
### Requirements
* NVIDIA GPU with a compute capability of 3.0 - 7.2
* Latest GPU driver
### Installation
#### Install the latest driver for your graphics card
```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-cache search nvidia-* # find the latest nvidia driver
sudo apt-get install nvidia-* # install the nvidia driver
sudo apt-get install mesa-common-dev
sudo apt-get install freeglut3-dev
sudo apt-get install nvidia-modprobe
```
#### Reboot your PC and verify the installation with the `nvidia-smi` command
#### Install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker)
Please make sure that the installation was successful.
```bash
docker info | grep 'Runtimes' # output should contain 'nvidia'
```
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml up -d
```
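A quick way to confirm that the GPU is actually visible inside the container is to run `nvidia-smi` in it (a hypothetical check; it assumes the container is named `cvat`, as in the superuser example above):

```bash
docker exec -it cvat nvidia-smi
```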

components/cuda/docker-compose.cuda.yml

@@ -10,7 +10,7 @@ services:
    build:
      context: .
      args:
-        TF_ANNOTATION: "yes"
+        CUDA_SUPPORT: "yes"
    runtime: "nvidia"
    environment:
      NVIDIA_VISIBLE_DEVICES: all

components/cuda/install.sh

@@ -18,8 +18,8 @@ CUDA_VERSION=9.0.176
NCCL_VERSION=2.1.15
CUDNN_VERSION=7.0.5.15
CUDA_PKG_VERSION="9-0=${CUDA_VERSION}-1"
-echo "export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}" >> ${HOME}/.bashrc
+echo 'export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}' >> ${HOME}/.bashrc
-echo "export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}" >> ${HOME}/.bashrc
+echo 'export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}' >> ${HOME}/.bashrc
apt-get update && apt-get install -y --no-install-recommends --allow-unauthenticated \
    libprotobuf-dev \

@@ -32,11 +32,3 @@ apt-get update && apt-get install -y --no-install-recommends --allow-unauthentic
    ln -s cuda-9.0 /usr/local/cuda && \
    rm -rf /var/lib/apt/lists/* \
        /etc/apt/sources.list.d/nvidia-ml.list /etc/apt/sources.list.d/cuda.list
-pip3 install --no-cache-dir -r "$(cd `dirname $0` && pwd)/requirements.txt"
-cd ${HOME}
-wget -O model.tar.gz http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017.tar.gz
-tar -xzf model.tar.gz
-rm model.tar.gz
-mv faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017 rcnn

components/openvino/README.md

@@ -0,0 +1,23 @@
## [Intel OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit)
### Requirements
* Intel Core CPU (6th generation or higher) or Intel Xeon CPU.
### Preparation
* Download the latest OpenVINO toolkit for the Ubuntu 16.04 platform. It should be a .tgz archive. The minimum required version is 2018.3.*
* Put the downloaded file into components/openvino
* Accept the EULA in the eula.cfg file (see the sketch after this list)
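A minimal sketch of the preparation steps (the archive name below is an example; use the file you actually downloaded from the Intel site):

```bash
# Put the downloaded toolkit archive where install.sh expects it
mv ~/Downloads/l_openvino_toolkit_p_2018.3.343.tgz components/openvino/
# Accept the EULA in the eula.cfg shipped with this component
sed -i 's/^ACCEPT_EULA=.*/ACCEPT_EULA=accept/' components/openvino/eula.cfg
```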
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml up -d
```
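One hedged smoke test for the toolkit inside the running container is to import the Python inference engine bindings; the interactive shell matters because install.sh sources setupvars.sh from ~/.bashrc:

```bash
docker exec -it cvat bash -ic 'python3 -c "from openvino.inference_engine import IENetwork, IEPlugin"'
```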

components/openvino/docker-compose.openvino.yml

@@ -0,0 +1,13 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
  cvat:
    build:
      context: .
      args:
        OPENVINO_TOOLKIT: "yes"

components/openvino/eula.cfg

@@ -0,0 +1,3 @@
# Accept the EULA from the OpenVINO installation archive. Valid values are: {accept, decline}
ACCEPT_EULA=accept

components/openvino/install.sh

@@ -0,0 +1,38 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e

if [[ `lscpu | grep -o "GenuineIntel"` != "GenuineIntel" ]]; then
    echo "OpenVINO supports only Intel CPUs"
    exit 1
fi

if [[ `lscpu | grep -o "sse4" | head -1` != "sse4" ]] && [[ `lscpu | grep -o "avx2" | head -1` != "avx2" ]]; then
    echo "Your Intel CPU should support the sse4 or avx2 instruction set to use OpenVINO"
    exit 1
fi

apt update && apt install -y libpng12-dev libcairo2-dev \
    libpango1.0-dev libglib2.0-dev libgtk2.0-dev \
    libgstreamer0.10-dev libswscale-dev \
    libavcodec-dev libavformat-dev cmake libusb-1.0-0-dev cpio

# OpenCV, which is included in the OpenVINO toolkit, was compiled against a
# different ffmpeg version. These packages are needed for it to work.
apt install -y libavcodec-ffmpeg56 libavformat-ffmpeg56 libswscale-ffmpeg3

cd /tmp/components/openvino
tar -xzf `ls | grep "openvino_toolkit"`
cd `ls -d */ | grep "openvino_toolkit"`
cat ../eula.cfg >> silent.cfg
./install.sh -s silent.cfg
cd /tmp/components && rm -r openvino

echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ${HOME}/.bashrc
echo -e '\nexport IE_PLUGINS_PATH=${IE_PLUGINS_PATH}' >> /opt/intel/computer_vision_sdk/bin/setupvars.sh

components/tf_annotation/README.md

@@ -0,0 +1,41 @@
## [Tensorflow Object Detector](https://github.com/tensorflow/models/tree/master/research/object_detection)
### What is it?
* This application allows you to annotate many different objects in images automatically.
* It uses the [Faster RCNN Inception Resnet v2 Atrous Coco Model](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz) from the [tensorflow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md)
* It can work on CPU (with Tensorflow or OpenVINO) or GPU (with Tensorflow GPU).
* It supports the following classes (just specify them in the "labels" row of a task, e.g. `person car dog`):
```
'surfboard', 'car', 'skateboard', 'boat', 'clock',
'cat', 'cow', 'knife', 'apple', 'cup', 'tv',
'baseball_bat', 'book', 'suitcase', 'tennis_racket',
'stop_sign', 'couch', 'cell_phone', 'keyboard',
'cake', 'tie', 'frisbee', 'truck', 'fire_hydrant',
'snowboard', 'bed', 'vase', 'teddy_bear',
'toaster', 'wine_glass', 'traffic_light',
'broccoli', 'backpack', 'carrot', 'potted_plant',
'donut', 'umbrella', 'parking_meter', 'bottle',
'sandwich', 'motorcycle', 'bear', 'banana',
'person', 'scissors', 'elephant', 'dining_table',
'toothbrush', 'toilet', 'skis', 'bowl', 'sheep',
'refrigerator', 'oven', 'microwave', 'train',
'orange', 'mouse', 'laptop', 'bench', 'bicycle',
'fork', 'kite', 'zebra', 'baseball_glove', 'bus',
'spoon', 'horse', 'handbag', 'pizza', 'sports_ball',
'airplane', 'hair_drier', 'hot_dog', 'remote',
'sink', 'dog', 'bird', 'giraffe', 'chair'.
```
* The component adds a "Run TF Annotation" button to the dashboard.
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml up -d
```
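Note that the TF annotation backend is chosen at runtime from the OPENVINO_TOOLKIT and CUDA_SUPPORT environment variables (see views.py later in this diff). For CPU-only inference through the OpenVINO Inference Engine, a plausible combination is (a sketch):

```bash
docker-compose -f docker-compose.yml \
    -f components/openvino/docker-compose.openvino.yml \
    -f components/tf_annotation/docker-compose.tf_annotation.yml up -d --build
```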

components/tf_annotation/docker-compose.tf_annotation.yml

@@ -0,0 +1,13 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
  cvat:
    build:
      context: .
      args:
        TF_ANNOTATION: "yes"

components/tf_annotation/install.sh

@@ -0,0 +1,32 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e

# Download the pretrained Faster RCNN model and strip the suffix from the frozen
# graph name: TF_ANNOTATION_MODEL_PATH points to the name without extension, and
# the app appends .pb (Tensorflow) or .xml/.bin (OpenVINO IR) at runtime
cd ${HOME} && \
wget -O model.tar.gz http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz && \
tar -xzf model.tar.gz && rm model.tar.gz && \
mv faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28 ${HOME}/rcnn && cd ${HOME} && \
mv rcnn/frozen_inference_graph.pb rcnn/inference_graph.pb

if [[ "$CUDA_SUPPORT" = "yes" ]]
then
    pip3 install --no-cache-dir tensorflow-gpu==1.7.0
else
    if [[ "$OPENVINO_TOOLKIT" = "yes" ]]
    then
        # Convert the frozen Tensorflow graph to OpenVINO IR with the model optimizer
        pip3 install -r ${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer/requirements.txt && \
        cd ${HOME}/rcnn/ && \
        ${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer/mo.py --framework tf \
            --data_type FP32 --input_shape [1,600,600,3] \
            --input image_tensor --output detection_scores,detection_boxes,num_detections \
            --tensorflow_use_custom_operations_config ${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
            --tensorflow_object_detection_api_pipeline_config pipeline.config --input_model inference_graph.pb && \
        rm inference_graph.pb
    else
        pip3 install --no-cache-dir tensorflow==1.7.0
    fi
fi

@ -1,23 +0,0 @@
# Tensorflow annotation cvat django-app
#### What is it?
This application allows you automatically to annotate many various objects on images. [Tensorflow object detector](https://github.com/tensorflow/models/tree/master/research/object_detection) work in backend. It needs NVIDIA GPU for convenience using, but you may run it on CPU (just remove tensorflow-gpu python package and install the CPU tensorflow package version).
#### Enable instructions
1. Download the root dir with this app to cvat/apps if need.
2. Add urls for tf annotation in ```urls.py```:
```
urlpatterns += [path('tf_annotation/', include('cvat.apps.tf_annotation.urls'))]
```
3. Enable this application in ```settings/base.py```
```
INSTALLED_APPS += ['cvat.apps.tf_annotation']
```
1. If you want to run CVAT in container:
* Set TF_ANNOTATION argument to "yes" in ```docker-compose.yml```
* Add ```runtime: nvidia``` (if you have nvidia-gpu) to cvat block ([nvidia-docker2](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)) must be installed)
5. Else you must download [model](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017.tar.gz), unpack it and set TF_ANNOTATION_MODEL_PATH environment variable to unpacked file ```frozen_inference_graph.pb```.
This variable must be available from cvat runtime environment.

@ -1,2 +0,0 @@
tensorflow==1.7.0
tensorflow-gpu==1.7.0

cvat/apps/tf_annotation/views.py

@@ -12,6 +12,7 @@ from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine import annotation, task
import django_rq
+import subprocess
import fnmatch
import logging
import json
@@ -20,33 +21,111 @@ import rq
import tensorflow as tf
import numpy as np
from PIL import Image
from cvat.apps.engine.log import slogger

+if os.environ.get('OPENVINO_TOOLKIT') == 'yes':
+    from openvino.inference_engine import IENetwork, IEPlugin

def load_image_into_numpy(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)

-def normalize_box(box, w, h):
-    xmin = int(box[1] * w)
-    ymin = int(box[0] * h)
-    xmax = int(box[3] * w)
-    ymax = int(box[2] * h)
-    return xmin, ymin, xmax, ymax
+def run_inference_engine_annotation(image_list, labels_mapping, treshold):
+    def _check_instruction(instruction):
+        # Check whether the CPU reports the given instruction set in lscpu output
+        return instruction == str.strip(
+            subprocess.check_output(
+                'lscpu | grep -o "{}" | head -1'.format(instruction), shell=True
+            ).decode('utf-8')
+        )
+
+    def _normalize_box(box, w, h, dw, dh):
+        # Boxes are predicted on the 600x600 network input; rescale to the source image
+        xmin = min(int(box[0] * dw * w), w)
+        ymin = min(int(box[1] * dh * h), h)
+        xmax = min(int(box[2] * dw * w), w)
+        ymax = min(int(box[3] * dh * h), h)
+        return xmin, ymin, xmax, ymax
+
+    result = {}
+    MODEL_PATH = os.environ.get('TF_ANNOTATION_MODEL_PATH')
+    if MODEL_PATH is None:
+        raise OSError('Model path env not found in the system.')
+    IE_PLUGINS_PATH = os.getenv('IE_PLUGINS_PATH')
+    if IE_PLUGINS_PATH is None:
+        raise OSError('Inference engine plugin path env not found in the system.')
+
+    plugin = IEPlugin(device='CPU', plugin_dirs=[IE_PLUGINS_PATH])
+    if _check_instruction('avx2'):
+        plugin.add_cpu_extension(os.path.join(IE_PLUGINS_PATH, 'libcpu_extension_avx2.so'))
+    elif _check_instruction('sse4'):
+        plugin.add_cpu_extension(os.path.join(IE_PLUGINS_PATH, 'libcpu_extension_sse4.so'))
+    else:
+        raise Exception('Inference engine requires support of avx2 or sse4.')
+
+    network = IENetwork.from_ir(model=MODEL_PATH + '.xml', weights=MODEL_PATH + '.bin')
+    input_blob_name = next(iter(network.inputs))
+    output_blob_name = next(iter(network.outputs))
+    executable_network = plugin.load(network=network)
+    job = rq.get_current_job()
+    del network
+
+    try:
+        for image_num, im_name in enumerate(image_list):
+            job.refresh()
+            if 'cancel' in job.meta:
+                del job.meta['cancel']
+                job.save()
+                return None
+            job.meta['progress'] = image_num * 100 / len(image_list)
+            job.save_meta()
+
+            image = Image.open(im_name)
+            width, height = image.size
+            image.thumbnail((600, 600), Image.ANTIALIAS)
+            dwidth, dheight = 600 / image.size[0], 600 / image.size[1]
+            image = image.crop((0, 0, 600, 600))
+            image_np = load_image_into_numpy(image)
+            image_np = np.transpose(image_np, (2, 0, 1))
+            prediction = executable_network.infer(inputs={input_blob_name: image_np[np.newaxis, ...]})[output_blob_name][0][0]
+            for obj in prediction:
+                obj_class = int(obj[1])
+                obj_value = obj[2]
+                if obj_class and obj_class in labels_mapping and obj_value >= treshold:
+                    label = labels_mapping[obj_class]
+                    if label not in result:
+                        result[label] = []
+                    xmin, ymin, xmax, ymax = _normalize_box(obj[3:7], width, height, dwidth, dheight)
+                    result[label].append([image_num, xmin, ymin, xmax, ymax])
+    finally:
+        del executable_network
+        del plugin
+
+    return result
+
+def run_tensorflow_annotation(image_list, labels_mapping, treshold):
+    def _normalize_box(box, w, h):
+        xmin = int(box[1] * w)
+        ymin = int(box[0] * h)
+        xmax = int(box[3] * w)
+        ymax = int(box[2] * h)
+        return xmin, ymin, xmax, ymax
-def run_annotation(image_list, labels_mapping, treshold):
    result = {}
    model_path = os.environ.get('TF_ANNOTATION_MODEL_PATH')
    if model_path is None:
-        raise OSError('Model path env not found in the system. Please check the installation manual.')
+        raise OSError('Model path env not found in the system.')
    job = rq.get_current_job()
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
-        with tf.gfile.GFile(model_path, 'rb') as fid:
+        with tf.gfile.GFile(model_path + '.pb', 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
@@ -54,8 +133,9 @@ def run_annotation(image_list, labels_mapping, treshold):
    try:
        config = tf.ConfigProto()
        config.gpu_options.allow_growth=True
-        sess = tf.Session(graph=detection_graph,config=config)
+        sess = tf.Session(graph=detection_graph, config=config)
        for image_num, image_path in enumerate(image_list):
            job.refresh()
            if 'cancel' in job.meta:
                del job.meta['cancel']
@@ -63,6 +143,7 @@ def run_annotation(image_list, labels_mapping, treshold):
                return None
            job.meta['progress'] = image_num * 100 / len(image_list)
            job.save_meta()
            image = Image.open(image_path)
            width, height = image.size
            if width > 1920 or height > 1080:
@@ -80,7 +161,7 @@ def run_annotation(image_list, labels_mapping, treshold):
            for i in range(len(classes[0])):
                if classes[0][i] in labels_mapping.keys():
                    if scores[0][i] >= treshold:
-                        xmin, ymin, xmax, ymax = normalize_box(boxes[0][i], width, height)
+                        xmin, ymin, xmax, ymax = _normalize_box(boxes[0][i], width, height)
                        label = labels_mapping[classes[0][i]]
                        if label not in result:
                            result[label] = []
@@ -146,7 +227,7 @@ def convert_to_cvat_format(data):
    return result

-def create_thread(id, labels_mapping):
+def create_thread(tid, labels_mapping):
    try:
        TRESHOLD = 0.5
        # Init rq job
@@ -154,7 +235,7 @@ def create_thread(id, labels_mapping):
        job.meta['progress'] = 0
        job.save_meta()
        # Get job indexes and segment length
-        db_task = TaskModel.objects.get(pk=id)
+        db_task = TaskModel.objects.get(pk=tid)
        db_segments = list(db_task.segment_set.prefetch_related('job_set').all())
        segment_length = max(db_segments[0].stop_frame - db_segments[0].start_frame + 1, 1)
        job_indexes = [segment.job_set.first().id for segment in db_segments]
@@ -162,19 +243,26 @@ def create_thread(id, labels_mapping):
        image_list = make_image_list(db_task.get_data_dirname())
        # Run auto annotation by tf
-        result = run_annotation(image_list, labels_mapping, TRESHOLD)
+        result = None
+        # Prefer Tensorflow when CUDA is available or OpenVINO is absent;
+        # otherwise run inference through the OpenVINO Inference Engine
+        if os.environ.get('CUDA_SUPPORT') == 'yes' or os.environ.get('OPENVINO_TOOLKIT') != 'yes':
+            slogger.glob.info("tf annotation with tensorflow framework for task {}".format(tid))
+            result = run_tensorflow_annotation(image_list, labels_mapping, TRESHOLD)
+        else:
+            slogger.glob.info('tf annotation with openvino toolkit for task {}'.format(tid))
+            result = run_inference_engine_annotation(image_list, labels_mapping, TRESHOLD)
        if result is None:
-            slogger.glob.info('tf annotation for task {} canceled by user'.format(id))
+            slogger.glob.info('tf annotation for task {} canceled by user'.format(tid))
            return
        # Modify data format and save
        result = convert_to_cvat_format(result)
-        annotation.save_task(id, result)
+        annotation.save_task(tid, result)
        db_task.status = "Annotation"
        db_task.save()
-        slogger.glob.info('tf annotation for task {} done'.format(id))
+        slogger.glob.info('tf annotation for task {} done'.format(tid))
-    except Exception:
+    except Exception as ex:
-        slogger.glob.exception('exception was occured during tf annotation of the task {}'.format(id))
+        slogger.glob.exception('exception occurred during tf annotation of the task {}: {}'.format(tid, ex))
        db_task.status = "TF Annotation Fault"
        db_task.save()
