DL models as serverless functions (#1767)

* Initial experiments with nuclio

* Update nuclio prototype

* Improve nuclio prototype for dextr.

* Dummy lambda manager

* OpenFaaS prototype (dextr.bin and dextr.xml are empty).

* Moved openfaas prototype.

* Add comments

* Add serializers and HLD for lambda_manager

* Initial version of Mask RCNN (without debugging)

* Initial version for faster_rcnn_inception_v2_coco

* Fix faster_rcnn_inception_v2_coco

* Implemented mask_rcnn_inception_resnet_v2_atrous_coco

* Implemented yolo detector as a lambda function

* Removed dextr app.

* Added types for each function (detector and interactor)

* Initial version of lambda_manager.

* Implement a couple of methods for lambda:

GET /api/v1/lambda/functions
GET /api/v1/lambda/functions/public.dextr
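
The two endpoints above can be exercised with a small client sketch. The server address is an assumption, and the response fields (`id`, `kind`, `labels`) are taken from the cvat-core client added in this PR, not from a server spec:

```python
# Minimal sketch of a client for the two lambda endpoints, assuming a CVAT
# server on localhost:8080. Field names mirror those consumed by the
# cvat-core LambdaManager (id, kind, labels); they are illustrative.
import json
from urllib.request import urlopen

BASE = "http://localhost:8080/api/v1/lambda/functions"

def parse_functions(payload):
    """Reduce the raw JSON list to a short summary per function."""
    return [
        {"id": f["id"], "kind": f.get("kind"), "labels": list(f.get("labels", []))}
        for f in payload
    ]

def list_functions():
    with urlopen(BASE) as resp:  # GET /api/v1/lambda/functions
        return parse_functions(json.load(resp))

def get_function(func_id):
    with urlopen(f"{BASE}/{func_id}") as resp:  # e.g. public.dextr
        return json.load(resp)
```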

* First working version of dextr serverless function

* First version of dextr which works in UI.

* Modify omz.public.faster_rcnn_inception_v2_coco

- image decoding
- restart policy always for the function

* Improve omz.public.mask_rcnn_inception_resnet_v2_atrous_coco

* Improve omz.public.yolo-v3-tf function

* Implemented the initial version of requests for lambda manager.

* First working version of POST /api/v1/lambda/requests

* Updated specification of function.yaml (added labels and used annotations section).

* Added health check for containers (nuclio dashboard feature)

* Read labels spec from function.yaml.

* Added settings for NUCLIO

* Fixed a couple of typos. Now it works in most cases.

* Remove Plugin REST API

* Remove tf_annotation app (it will be replaced by serverless function)

* Remove tf_annotation and cuda components

* Cleanup docs and Dockerfile from CUDA component.

* Just renamed directories inside serverless

* Remove redundant files and code

* Remove redundant files.

* Remove outdated files

* Remove outdated code

* Delete reid app and add draft of serverless function for reid.

* Model list in UI.

* Fixed the framework name (got it from lambda function).

* Add maxRequestBodySize for functions, remove redundant code from UI for auto_annotation.

* Update view of models page.

* Unblock mapping for "primary" models.

* Implement cleanup flag for lambda/requests and labeling mapping for functions.
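
The two request options mentioned above can be sketched as follows; the function and argument names are illustrative, not CVAT's actual API:

```python
# Hypothetical sketch of the `cleanup` flag (drop existing annotations
# before writing new ones) and a label mapping from function labels to
# task labels. Names are illustrative.
def remap_labels(detections, mapping):
    """Keep only detections whose label is mapped, renamed to the task label."""
    return [
        {**d, "label": mapping[d["label"]]}
        for d in detections
        if d["label"] in mapping
    ]

def apply_results(task_annotations, detections, mapping, cleanup=False):
    """Merge remapped detections into the task, optionally clearing it first."""
    base = [] if cleanup else list(task_annotations)
    return base + remap_labels(detections, mapping)
```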

* Implement protection from running multiple jobs for the same task.
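
The guard above amounts to refusing a second inference job for a task that already has one in flight. A minimal sketch, with an in-memory registry standing in for CVAT's actual RQ-based bookkeeping:

```python
# Illustrative sketch only: CVAT tracks jobs via RQ, not this registry.
class AlreadyRunning(Exception):
    pass

class JobGuard:
    def __init__(self):
        self._active = {}  # task_id -> request_id

    def start(self, task_id, request_id):
        """Register a job for the task, or raise if one is already running."""
        if task_id in self._active:
            raise AlreadyRunning(
                f"task {task_id} already has job {self._active[task_id]}"
            )
        self._active[task_id] = request_id
        return request_id

    def finish(self, task_id):
        """Free the task so a new job may be queued."""
        self._active.pop(task_id, None)
```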

* Fix invocation of functions in docker container.

* Fix Dockerfile.ci

* Remove unused files from lambda_manager

* Fix codacy warnings

* Fix codacy issues.

* Fix codacy warnings

* Implement progress and cancel (aka delete) operation.

* Send annotations in batch.
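
Batching here means flushing results in fixed-size chunks instead of one server call per shape. A sketch, with the batch size and sender being assumptions:

```python
# Sketch of batched submission; `send` stands in for whatever HTTP call
# uploads a batch of annotations.
def chunked(items, size):
    """Split items into consecutive batches of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def send_in_batches(shapes, send, size=100):
    for batch in chunked(shapes, size):
        send(batch)  # e.g. one annotations request per batch
```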

* Fix UI. Now it can retrieve information about inference requests in progress.

* Update CHANGELOG.md

* Update cvat-ui version.

* Update nuclio version.

* Implement serverless/tensorflow/faster_rcnn_inception_v2_coco

* Add information on how to install the nuclio platform and run serverless functions.

* Add installation instructions for serverless functions.

* Update OpenVINO files which are responsible for loading network

* relocated functions

* Update dextr function.

* Update faster_rcnn function from omz

* Fix OpenVINO Mask-RCNN

* Fix YOLO v3 serverless function.

* Dummy serverless functions for a couple of more OpenVINO models.

* Protected lambda manager views with correct permissions.

* Fix name of Faster RCNN from Tensorflow.

* Implement Mask RCNN via Tensorflow serverless function.

* Minor client changes (#1847)

* Minor client changes

* Removed extra code

* Add reid serverless function (no support in lambda manager).

* Fix contribution guide.

* Fix person-reidentification-retail-300 and implement text-detection-0004

* Add semantic-segmentation-adas-0001

* Moving model management to cvat-core (#1905)

* Squashed changes

* Removed extra line

* Remove duplicated files for OpenVINO serverless functions.

* Updated CHANGELOG.md

* Remove outdated code.

* Running dextr via lambda manager (#1912)

* Deleted outdated migration.

* Add name for DEXTR function.

* Fix restart policy for serverless functions.

* Fix openvino serverless functions for images with alpha channel

* Add more tensorflow serverless functions into deploy.sh

* Use ID instead of name for DEXTR (#1926)

* Update DEXTR function

* Added source "auto" inside lambda manager for automatic annotation.

* Customize payload (depends on type of lambda function).
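
Mirroring the cvat-core client in this PR: a long-running `run` request carries the function id in the body, while a direct `call` addresses the function in the URL and sends only the task id plus extra arguments. A sketch (helper names are mine):

```python
# Payload builders matching the shapes used by the cvat-core LambdaManager;
# function names here are illustrative.
def build_run_payload(task_id, function_id, extra=None):
    """Body for POST /api/v1/lambda/requests (long-running annotation)."""
    body = dict(extra or {})
    body["task"] = task_id
    body["function"] = function_id
    return body

def build_call_payload(task_id, extra=None):
    """Body for a direct function call, e.g. interactor points for DEXTR."""
    body = dict(extra or {})
    body["task"] = task_id
    return body
```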

* First working version of REID (Server only).

* Fix codacy warnings

* Avoid exception during migration (workaround)

  File "/usr/local/lib/python3.5/dist-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/usr/local/lib/python3.5/dist-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: table "engine_pluginoption" does not exist
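
A plausible shape for such a workaround: attempt the cleanup of the legacy table and swallow the "table does not exist" error rather than failing the migration. This is a guess at the pattern, not the actual migration code; `ProgrammingError` is stubbed here (in CVAT it is `django.db.utils.ProgrammingError`):

```python
# Sketch of a tolerant migration step; illustrative only.
class ProgrammingError(Exception):
    pass

def drop_legacy_table(execute):
    """Try to remove the legacy table; ignore absence on fresh instances."""
    try:
        execute('DROP TABLE "engine_pluginoption"')
    except ProgrammingError:
        pass  # table was never created on this instance; nothing to do
```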

* Add siammask serverless function (it doesn't work, need to serialize state)

* Run ReID from UI (#1949)

* Removed reid route in installation.md

* Fix a command to get lena image in CONTRIBUTION guide.

* Fix typo and crash in case a polygon is a line.

Co-authored-by: Boris Sekachev <40690378+bsekachev@users.noreply.github.com>
Nikita Manovich 6 years ago committed by GitHub
parent 3f6d2e9e4f
commit e7585b8ce9

.gitattributes

@ -15,7 +15,6 @@ LICENSE text
*.conf text
*.mimetypes text
*.sh text eol=lf
components/openvino/eula.cfg text eol=lf
*.avi binary
*.bmp binary

@ -23,7 +23,7 @@
/datumaro/ @zhiltsov-max
/cvat/apps/dataset_manager/ @zhiltsov-max
# Advanced components (e.g. OpenVINO)
# Advanced components (e.g. analytics)
/components/ @azhavoro
# Infrastructure

.gitignore

@ -7,7 +7,6 @@
/.env
/keys
/logs
/components/openvino/*.tgz
/profiles
/ssh/*
!/ssh/README.md

@ -20,7 +20,7 @@ persistent=yes
# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=
load-plugins=pylint_django
# Use multiple processes to speed up Pylint.
jobs=1
@ -66,7 +66,7 @@ enable= E0001,E0100,E0101,E0102,E0103,E0104,E0105,E0106,E0107,E0110,
W0122,W0124,W0150,W0199,W0221,W0222,W0233,W0404,W0410,W0601,
W0602,W0604,W0611,W0612,W0622,W0623,W0702,W0705,W0711,W1300,
W1301,W1302,W1303,,W1305,W1306,W1307
R0102,R0201,R0202,R0203
R0102,R0202,R0203
# Disable the message, report, category or checker with the given id(s). You

@ -27,7 +27,7 @@
"request": "launch",
"stopOnEntry": false,
"justMyCode": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath": "${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"runserver",
@ -58,7 +58,7 @@
"request": "launch",
"stopOnEntry": false,
"justMyCode": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath": "${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"rqworker",
@ -77,7 +77,7 @@
"request": "launch",
"stopOnEntry": false,
"justMyCode": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath": "${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"rqscheduler",
@ -93,7 +93,7 @@
"request": "launch",
"justMyCode": false,
"stopOnEntry": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath":"${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"rqworker",
@ -112,7 +112,7 @@
"request": "launch",
"justMyCode": false,
"stopOnEntry": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath": "${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"update_git_states"
@ -128,7 +128,7 @@
"request": "launch",
"justMyCode": false,
"stopOnEntry": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath": "${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"migrate"
@ -144,7 +144,7 @@
"request": "launch",
"justMyCode": false,
"stopOnEntry": false,
"pythonPath": "${config:python.pythonPath}",
"pythonPath": "${command:python.interpreterPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
"test",

@ -6,6 +6,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [1.1.0-beta] - Unreleased
### Added
- DL models as serverless functions (<https://github.com/opencv/cvat/pull/1767>)
- Source type support for tags, shapes and tracks (<https://github.com/opencv/cvat/pull/1192>)
- Source type support for CVAT Dumper/Loader (<https://github.com/opencv/cvat/pull/1192>)
- Intelligent polygon editing (<https://github.com/opencv/cvat/pull/1921>)
@ -14,13 +15,14 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed
- Smaller object details (<https://github.com/opencv/cvat/pull/1877>)
- It is impossible to submit a DL model in OpenVINO format using the UI. Now you can deploy new models on the server using serverless functions (<https://github.com/opencv/cvat/pull/1767>)
- Files and folders under share path are now alphabetically sorted
### Deprecated
-
### Removed
-
- Removed OpenVINO and CUDA components because they are not necessary anymore (<https://github.com/opencv/cvat/pull/1767>)
### Fixed
- Some objects aren't shown on canvas sometimes. For example, after propagation one of the objects is invisible (<https://github.com/opencv/cvat/pull/1834>)
@ -82,12 +84,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Colorized object items in the side panel (<https://github.com/opencv/cvat/pull/1753>)
- [Datumaro] Annotation-less files are not generated anymore in COCO format, unless tasks explicitly requested (<https://github.com/opencv/cvat/pull/1799>)
### Deprecated
-
### Removed
-
### Fixed
- Problem with exported frame stepped image task (<https://github.com/opencv/cvat/issues/1613>)
- Fixed dataset filter item representation for imageless dataset items (<https://github.com/opencv/cvat/pull/1593>)

File diff suppressed because one or more lines are too long

@ -76,33 +76,6 @@ RUN adduser --shell /bin/bash --disabled-password --gecos "" ${USER} && \
COPY components /tmp/components
# OpenVINO toolkit support
ARG OPENVINO_TOOLKIT
ENV OPENVINO_TOOLKIT=${OPENVINO_TOOLKIT}
ENV REID_MODEL_DIR=${HOME}/reid
RUN if [ "$OPENVINO_TOOLKIT" = "yes" ]; then \
/tmp/components/openvino/install.sh && \
mkdir ${REID_MODEL_DIR} && \
curl https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/person-reidentification-retail-0079/FP32/person-reidentification-retail-0079.xml -o reid/reid.xml && \
curl https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/person-reidentification-retail-0079/FP32/person-reidentification-retail-0079.bin -o reid/reid.bin; \
fi
# Tensorflow annotation support
ARG TF_ANNOTATION
ENV TF_ANNOTATION=${TF_ANNOTATION}
ENV TF_ANNOTATION_MODEL_PATH=${HOME}/rcnn/inference_graph
RUN if [ "$TF_ANNOTATION" = "yes" ]; then \
bash -i /tmp/components/tf_annotation/install.sh; \
fi
# Auto segmentation support. by Mohammad
ARG AUTO_SEGMENTATION
ENV AUTO_SEGMENTATION=${AUTO_SEGMENTATION}
ENV AUTO_SEGMENTATION_PATH=${HOME}/Mask_RCNN
RUN if [ "$AUTO_SEGMENTATION" = "yes" ]; then \
bash -i /tmp/components/auto_segmentation/install.sh; \
fi
# Install and initialize CVAT, copy all necessary files
COPY cvat/requirements/ /tmp/requirements/
COPY supervisord.conf mod_wsgi.conf wait-for-it.sh manage.py ${HOME}/
@ -110,24 +83,6 @@ RUN python3 -m pip install --no-cache-dir -r /tmp/requirements/${DJANGO_CONFIGUR
# pycocotools package is impossible to install with its dependencies by one pip install command
RUN python3 -m pip install --no-cache-dir pycocotools==2.0.0
# CUDA support
ARG CUDA_SUPPORT
ENV CUDA_SUPPORT=${CUDA_SUPPORT}
RUN if [ "$CUDA_SUPPORT" = "yes" ]; then \
/tmp/components/cuda/install.sh; \
fi
# TODO: CHANGE URL
ARG WITH_DEXTR
ENV WITH_DEXTR=${WITH_DEXTR}
ENV DEXTR_MODEL_DIR=${HOME}/dextr
RUN if [ "$WITH_DEXTR" = "yes" ]; then \
mkdir ${DEXTR_MODEL_DIR} -p && \
curl https://download.01.org/openvinotoolkit/models_contrib/cvat/dextr_model_v1.zip -o ${DEXTR_MODEL_DIR}/dextr.zip && \
7z e ${DEXTR_MODEL_DIR}/dextr.zip -o${DEXTR_MODEL_DIR} && rm ${DEXTR_MODEL_DIR}/dextr.zip; \
fi
ARG CLAM_AV
ENV CLAM_AV=${CLAM_AV}
RUN if [ "$CLAM_AV" = "yes" ]; then \

@ -1,4 +1,4 @@
FROM cvat
FROM cvat/server
ENV DJANGO_CONFIGURATION=testing
USER root

@ -71,7 +71,6 @@ are visible to users.
Disabled features:
- [Analytics: management and monitoring of data annotation team](/components/analytics/README.md)
- [Support for NVIDIA GPUs](/components/cuda/README.md)
Limitations:
- No more than 10 tasks per user

@ -1,38 +0,0 @@
## [Keras+Tensorflow Mask R-CNN Segmentation](https://github.com/matterport/Mask_RCNN)
### What is it?
- This application allows you to automatically segment various objects in images.
- It's based on a Feature Pyramid Network (FPN) and a ResNet101 backbone.
- It uses a model pre-trained on the MS COCO dataset.
- It supports the following classes (use them in the "labels" row):
```python
'BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush'.
```
- Component adds "Run Auto Segmentation" button into dashboard.
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/auto_segmentation/docker-compose.auto_segmentation.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/auto_segmentation/docker-compose.auto_segmentation.yml up -d
```

@ -1,13 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
AUTO_SEGMENTATION: "yes"

@ -1,13 +0,0 @@
#!/bin/bash
#
set -e
MASK_RCNN_URL=https://github.com/matterport/Mask_RCNN
cd ${HOME} && \
git clone ${MASK_RCNN_URL}.git && \
curl -L ${MASK_RCNN_URL}/releases/download/v2.0/mask_rcnn_coco.h5 -o Mask_RCNN/mask_rcnn_coco.h5
# TODO remove useless files
# tensorflow and Keras are installed globally

@ -1,41 +0,0 @@
## [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)
### Requirements
* NVIDIA GPU with a compute capability [3.0 - 7.2]
* Latest GPU driver
### Installation
#### Install the latest driver for your graphics card
```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-cache search nvidia-* # find latest nvidia driver
sudo apt-get --no-install-recommends install nvidia-* # install the nvidia driver
sudo apt-get --no-install-recommends install mesa-common-dev
sudo apt-get --no-install-recommends install freeglut3-dev
sudo apt-get --no-install-recommends install nvidia-modprobe
```
#### Reboot your PC and verify the installation with the `nvidia-smi` command.
#### Install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker)
Please be sure that installation was successful.
```bash
docker info | grep 'Runtimes' # output should contain 'nvidia'
```
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml up -d
```

@ -1,23 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
CUDA_SUPPORT: "yes"
runtime: "nvidia"
environment:
NVIDIA_VISIBLE_DEVICES: all
NVIDIA_DRIVER_CAPABILITIES: compute,utility
# That environment variable is used by the Nvidia Container Runtime.
# The Nvidia Container Runtime parses this as:
# :space:: logical OR
# ,: Logical AND
# https://gitlab.com/nvidia/container-images/cuda/issues/31#note_149432780
NVIDIA_REQUIRE_CUDA: "cuda>=10.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411"

@ -1,38 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e
NVIDIA_GPGKEY_SUM=d1be581509378368edeec8c1eb2958702feedf3bc3d17011adbf24efacce4ab5 && \
NVIDIA_GPGKEY_FPR=ae09fe4bbd223a84b2ccfce3f60f4b3d7fa2af80 && \
apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub && \
apt-key adv --export --no-emit-version -a $NVIDIA_GPGKEY_FPR | tail -n +5 > cudasign.pub && \
echo "$NVIDIA_GPGKEY_SUM cudasign.pub" | sha256sum -c --strict - && rm cudasign.pub && \
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list
CUDA_VERSION=10.0.130
NCCL_VERSION=2.5.6
CUDNN_VERSION=7.6.5.32
CUDA_PKG_VERSION="10-0=$CUDA_VERSION-1"
echo 'export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}' >> ${HOME}/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}' >> ${HOME}/.bashrc
apt-get update && apt-get install -y --no-install-recommends --allow-unauthenticated \
cuda-cudart-$CUDA_PKG_VERSION \
cuda-compat-10-0 \
cuda-libraries-$CUDA_PKG_VERSION \
cuda-nvtx-$CUDA_PKG_VERSION \
libnccl2=$NCCL_VERSION-1+cuda10.0 \
libcudnn7=$CUDNN_VERSION-1+cuda10.0 && \
ln -s cuda-10.0 /usr/local/cuda && \
apt-mark hold libnccl2 libcudnn7 && \
rm -rf /var/lib/apt/lists/* \
/etc/apt/sources.list.d/nvidia-ml.list /etc/apt/sources.list.d/cuda.list
python3 -m pip uninstall -y tensorflow
python3 -m pip install --no-cache-dir tensorflow-gpu==1.15.2

@ -1,45 +0,0 @@
## [Intel OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit)
### Requirements
* 6th-generation Intel Core and higher, or Intel Xeon CPUs.
### Preparation
- Download the latest [OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit) .tgz installer
(offline or online) for Ubuntu platforms. Note that OpenVINO does not maintain forward compatibility between
Intermediate Representations (IRs), so the version of OpenVINO in CVAT and the version used to translate the
models need to be the same.
- Put downloaded file into ```cvat/components/openvino```.
- Accept EULA in the `cvat/components/openvino/eula.cfg` file.
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml up -d
```
You should now be able to log in and see the CVAT web interface, complete with the new "Model Manager" button.
### OpenVINO Models
Clone the [Open Model Zoo](https://github.com/opencv/open_model_zoo). `$ git clone https://github.com/opencv/open_model_zoo.git`
Install the appropriate libraries. Currently that command would be `$ pip install -r open_model_zoo/tools/downloader/requirements.in`
Download the models using `downloader.py` file in `open_model_zoo/tools/downloader/`.
The `--name` option can be used to download specific models.
The `--print_all` option prints all the available models.
Specific models that are already integrated into CVAT can be found [here](https://github.com/opencv/cvat/tree/develop/utils/open_model_zoo).
From the web user interface in CVAT, upload the models using the model manager.
You'll need to include the XML and bin files from the model downloader.
You'll also need Python and JSON files, written from scratch or taken from the CVAT library.
See [here](https://github.com/opencv/cvat/tree/develop/cvat/apps/auto_annotation) for instructions for creating custom
python and JSON files.

@ -1,13 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
OPENVINO_TOOLKIT: "yes"

@ -1,3 +0,0 @@
# Accept actual EULA from openvino installation archive. Valid values are: {accept, decline}
ACCEPT_EULA=accept

@ -1,41 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e
if [[ `lscpu | grep -o "GenuineIntel"` != "GenuineIntel" ]]; then
echo "OpenVINO supports only Intel CPUs"
exit 1
fi
if [[ `lscpu | grep -o "sse4" | head -1` != "sse4" ]] && [[ `lscpu | grep -o "avx2" | head -1` != "avx2" ]]; then
echo "OpenVINO expects your CPU to support SSE4 or AVX2 instructions"
exit 1
fi
cd /tmp/components/openvino
tar -xzf `ls | grep "openvino_toolkit"`
cd `ls -d */ | grep "openvino_toolkit"`
apt-get update && apt-get --no-install-recommends install -y sudo cpio && \
if [ -f "install_cv_sdk_dependencies.sh" ]; then ./install_cv_sdk_dependencies.sh; \
else ./install_openvino_dependencies.sh; fi && SUDO_FORCE_REMOVE=yes apt-get remove -y sudo
cat ../eula.cfg >> silent.cfg
./install.sh -s silent.cfg
cd /tmp/components && rm openvino -r
if [ -f "/opt/intel/computer_vision_sdk/bin/setupvars.sh" ]; then
echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ${HOME}/.bashrc;
echo -e '\nexport IE_PLUGINS_PATH=${IE_PLUGINS_PATH}' >> /opt/intel/computer_vision_sdk/bin/setupvars.sh;
else
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ${HOME}/.bashrc;
echo -e '\nexport IE_PLUGINS_PATH=${IE_PLUGINS_PATH}' >> /opt/intel/openvino/bin/setupvars.sh;
fi

@ -1,41 +0,0 @@
## [Tensorflow Object Detector](https://github.com/tensorflow/models/tree/master/research/object_detection)
### What is it?
* This application allows you to automatically annotate various objects in images.
* It uses [Faster RCNN Inception Resnet v2 Atrous Coco Model](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz) from [tensorflow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md)
* It can work on CPU (with Tensorflow or OpenVINO) or GPU (with Tensorflow GPU).
* It supports the following classes (just specify them in the "labels" row):
```
'surfboard', 'car', 'skateboard', 'boat', 'clock',
'cat', 'cow', 'knife', 'apple', 'cup', 'tv',
'baseball_bat', 'book', 'suitcase', 'tennis_racket',
'stop_sign', 'couch', 'cell_phone', 'keyboard',
'cake', 'tie', 'frisbee', 'truck', 'fire_hydrant',
'snowboard', 'bed', 'vase', 'teddy_bear',
'toaster', 'wine_glass', 'traffic_light',
'broccoli', 'backpack', 'carrot', 'potted_plant',
'donut', 'umbrella', 'parking_meter', 'bottle',
'sandwich', 'motorcycle', 'bear', 'banana',
'person', 'scissors', 'elephant', 'dining_table',
'toothbrush', 'toilet', 'skis', 'bowl', 'sheep',
'refrigerator', 'oven', 'microwave', 'train',
'orange', 'mouse', 'laptop', 'bench', 'bicycle',
'fork', 'kite', 'zebra', 'baseball_glove', 'bus',
'spoon', 'horse', 'handbag', 'pizza', 'sports_ball',
'airplane', 'hair_drier', 'hot_dog', 'remote',
'sink', 'dog', 'bird', 'giraffe', 'chair'.
```
* Component adds "Run TF Annotation" button into dashboard.
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml up -d
```

@ -1,13 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
TF_ANNOTATION: "yes"

@ -1,15 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e
cd ${HOME} && \
curl http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz -o model.tar.gz && \
tar -xzf model.tar.gz && rm model.tar.gz && \
mv faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28 ${HOME}/rcnn && cd ${HOME} && \
mv rcnn/frozen_inference_graph.pb rcnn/inference_graph.pb
# tensorflow is installed globally

@ -1,6 +1,6 @@
{
"name": "cvat-core",
"version": "3.1.2",
"version": "3.3.0",
"description": "Part of Computer Vision Tool which presents an interface for client-side integration",
"main": "babel.config.js",
"scripts": {
@ -27,7 +27,7 @@
"eslint-plugin-security": "^1.4.0",
"jest": "^24.8.0",
"jest-junit": "^6.4.0",
"jsdoc": "^3.6.2",
"jsdoc": "^3.6.4",
"webpack": "^4.31.0",
"webpack-cli": "^3.3.2"
},

@ -1,5 +1,5 @@
/*
* Copyright (C) 2019 Intel Corporation
* Copyright (C) 2019-2020 Intel Corporation
* SPDX-License-Identifier: MIT
*/
@ -12,6 +12,7 @@
(() => {
const PluginRegistry = require('./plugins');
const serverProxy = require('./server-proxy');
const lambdaManager = require('./lambda-manager');
const {
isBoolean,
isInteger,
@ -20,10 +21,7 @@
checkFilter,
} = require('./common');
const {
TaskStatus,
TaskMode,
} = require('./enums');
const { TaskStatus, TaskMode } = require('./enums');
const User = require('./user');
const { AnnotationFormats } = require('./annotation-formats.js');
@ -54,6 +52,13 @@
cvat.plugins.list.implementation = PluginRegistry.list;
cvat.plugins.register.implementation = PluginRegistry.register.bind(cvat);
cvat.lambda.list.implementation = lambdaManager.list.bind(lambdaManager);
cvat.lambda.run.implementation = lambdaManager.run.bind(lambdaManager);
cvat.lambda.call.implementation = lambdaManager.call.bind(lambdaManager);
cvat.lambda.cancel.implementation = lambdaManager.cancel.bind(lambdaManager);
cvat.lambda.listen.implementation = lambdaManager.listen.bind(lambdaManager);
cvat.lambda.requests.implementation = lambdaManager.requests.bind(lambdaManager);
cvat.server.about.implementation = async () => {
const result = await serverProxy.server.about();
return result;

@ -20,6 +20,7 @@ function build() {
const Statistics = require('./statistics');
const { Job, Task } = require('./session');
const { Attribute, Label } = require('./labels');
const MLModel = require('./ml-model');
const {
ShareFileType,
@ -30,6 +31,7 @@ function build() {
ObjectShape,
LogType,
HistoryActions,
RQStatus,
colors,
Source,
} = require('./enums');
@ -149,7 +151,15 @@ function build() {
* @throws {module:API.cvat.exceptions.PluginError}
* @throws {module:API.cvat.exceptions.ServerError}
*/
async register(username, firstName, lastName, email, password1, password2, userConfirmations) {
async register(
username,
firstName,
lastName,
email,
password1,
password2,
userConfirmations,
) {
const result = await PluginRegistry
.apiWrapper(cvat.server.register, username, firstName,
lastName, email, password1, password2, userConfirmations);
@ -424,6 +434,119 @@ function build() {
return result;
},
},
/**
* Namespace is used for serverless function management (mainly related to DL models)
* @namespace lambda
* @memberof module:API.cvat
*/
lambda: {
/**
* Method returns a list of available serverless models
* @method list
* @async
* @memberof module:API.cvat.lambda
* @returns {module:API.cvat.classes.MLModel[]}
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.PluginError}
*/
async list() {
const result = await PluginRegistry
.apiWrapper(cvat.lambda.list);
return result;
},
/**
* Run a long-running request for a function on a specific task
* @method run
* @async
* @memberof module:API.cvat.lambda
* @param {module:API.cvat.classes.Task} task task to be annotated
* @param {module:API.cvat.classes.MLModel} model model used to get annotation
* @param {object} [args] extra arguments
* @returns {string} requestID
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.PluginError}
* @throws {module:API.cvat.exceptions.ArgumentError}
*/
async run(task, model, args) {
const result = await PluginRegistry
.apiWrapper(cvat.lambda.run, task, model, args);
return result;
},
/**
* Run a short-lived request for a function on a specific task
* @method call
* @async
* @memberof module:API.cvat.lambda
* @param {module:API.cvat.classes.Task} task task to be annotated
* @param {module:API.cvat.classes.MLModel} model model used to get annotation
* @param {object} [args] extra arguments
* @returns {string} requestID
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.PluginError}
* @throws {module:API.cvat.exceptions.ArgumentError}
*/
async call(task, model, args) {
const result = await PluginRegistry
.apiWrapper(cvat.lambda.call, task, model, args);
return result;
},
/**
* Cancel a running serverless function request for a specific task
* @method cancel
* @async
* @memberof module:API.cvat.lambda
* @param {string} requestID
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.PluginError}
* @throws {module:API.cvat.exceptions.ArgumentError}
*/
async cancel(requestID) {
const result = await PluginRegistry
.apiWrapper(cvat.lambda.cancel, requestID);
return result;
},
/**
* @callback onRequestStatusChange
* @param {string} status
* @param {number} progress
* @param {string} [message]
* @global
*/
/**
* Listen for a specific request
* @method listen
* @async
* @memberof module:API.cvat.lambda
* @param {string} requestID
* @param {onRequestStatusChange} onChange
* @throws {module:API.cvat.exceptions.ArgumentError}
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.PluginError}
*/
async listen(requestID, onChange) {
const result = await PluginRegistry
.apiWrapper(cvat.lambda.listen, requestID, onChange);
return result;
},
/**
* Get active lambda requests
* @method requests
* @async
* @memberof module:API.cvat.lambda
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.PluginError}
*/
async requests() {
const result = await PluginRegistry
.apiWrapper(cvat.lambda.requests);
return result;
},
},
/**
* Namespace for working with logs
* @namespace logger
@ -531,6 +654,7 @@ function build() {
ObjectShape,
LogType,
HistoryActions,
RQStatus,
colors,
Source,
},
@ -561,6 +685,7 @@ function build() {
Label,
Statistics,
ObjectState,
MLModel,
},
};
@ -569,6 +694,7 @@ function build() {
cvat.jobs = Object.freeze(cvat.jobs);
cvat.users = Object.freeze(cvat.users);
cvat.plugins = Object.freeze(cvat.plugins);
cvat.lambda = Object.freeze(cvat.lambda);
cvat.client = Object.freeze(cvat.client);
cvat.enums = Object.freeze(cvat.enums);

@ -34,6 +34,26 @@
COMPLETED: 'completed',
});
/**
* List of RQ statuses
* @enum {string}
* @name RQStatus
* @memberof module:API.cvat.enums
* @property {string} QUEUED 'queued'
* @property {string} STARTED 'started'
* @property {string} FINISHED 'finished'
* @property {string} FAILED 'failed'
* @property {string} UNKNOWN 'unknown'
* @readonly
*/
const RQStatus = Object.freeze({
QUEUED: 'queued',
STARTED: 'started',
FINISHED: 'finished',
FAILED: 'failed',
UNKNOWN: 'unknown',
});
/**
* Task modes
* @enum {string}
@ -232,6 +252,18 @@
REMOVED_OBJECT: 'Removed object',
});
/**
* Enum string values.
* @name ModelType
* @memberof module:API.cvat.enums
* @enum {string}
*/
const ModelType = {
DETECTOR: 'detector',
INTERACTOR: 'interactor',
TRACKER: 'tracker',
};
/**
* Array of hex colors
* @name colors
@ -255,7 +287,9 @@
ObjectType,
ObjectShape,
LogType,
ModelType,
HistoryActions,
RQStatus,
colors,
Source,
};

@ -0,0 +1,126 @@
/*
* Copyright (C) 2020 Intel Corporation
* SPDX-License-Identifier: MIT
*/
/* global
require:false
*/
const serverProxy = require('./server-proxy');
const { ArgumentError } = require('./exceptions');
const { Task } = require('./session');
const MLModel = require('./ml-model');
const { RQStatus } = require('./enums');
class LambdaManager {
constructor() {
this.listening = {};
this.cachedList = null;
}
async list() {
if (Array.isArray(this.cachedList)) {
return [...this.cachedList];
}
const result = await serverProxy.lambda.list();
const models = [];
for (const model of result) {
models.push(new MLModel({
id: model.id,
name: model.name,
description: model.description,
framework: model.framework,
labels: [...model.labels],
type: model.kind,
}));
}
this.cachedList = models;
return models;
}
async run(task, model, args) {
if (!(task instanceof Task)) {
throw new ArgumentError(
`Argument task is expected to be an instance of Task class, but got ${typeof (task)}`,
);
}
if (!(model instanceof MLModel)) {
throw new ArgumentError(
`Argument model is expected to be an instance of MLModel class, but got ${typeof (model)}`,
);
}
if (args && typeof (args) !== 'object') {
throw new ArgumentError(
`Argument args is expected to be an object, but got ${typeof (args)}`,
);
}
const body = {
...args,
task: task.id,
function: model.id,
};
const result = await serverProxy.lambda.run(body);
return result.id;
}
async call(task, model, args) {
const body = {
...args,
task: task.id,
};
const result = await serverProxy.lambda.call(model.id, body);
return result;
}
async requests() {
const result = await serverProxy.lambda.requests();
return result.filter((request) => [RQStatus.QUEUED, RQStatus.STARTED].includes(request.status));
}
async cancel(requestID) {
if (typeof (requestID) !== 'string') {
throw new ArgumentError(`Argument requestID is expected to be a string, but got ${typeof (requestID)}`);
}
if (this.listening[requestID]) {
clearTimeout(this.listening[requestID].timeout);
delete this.listening[requestID];
}
await serverProxy.lambda.cancel(requestID);
}
async listen(requestID, onUpdate) {
const timeoutCallback = async () => {
try {
this.listening[requestID].timeout = null;
const response = await serverProxy.lambda.status(requestID);
if (response.status === RQStatus.QUEUED || response.status === RQStatus.STARTED) {
onUpdate(response.status, response.progress || 0);
this.listening[requestID].timeout = setTimeout(timeoutCallback, 2000);
} else {
if (response.status === RQStatus.FINISHED) {
onUpdate(response.status, response.progress || 100);
} else {
onUpdate(response.status, response.progress || 0, response.exc_info || '');
}
delete this.listening[requestID];
}
} catch (error) {
onUpdate(RQStatus.UNKNOWN, 0, `Could not get the status of the request ${requestID}. ${error.toString()}`);
}
};
this.listening[requestID] = {
onUpdate,
timeout: setTimeout(timeoutCallback, 2000),
};
}
}
module.exports = new LambdaManager();
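`LambdaManager.listen()` above polls the request status every two seconds until a terminal state is reached. A standalone sketch of that polling pattern, with the serverProxy call replaced by a stub so it runs without a server (`fakeStatus`, the short interval, and the returned promise are illustrative, not the patch's exact API):

```javascript
const RQStatus = Object.freeze({
    QUEUED: 'queued',
    STARTED: 'started',
    FINISHED: 'finished',
    FAILED: 'failed',
    UNKNOWN: 'unknown',
});

// Stub for serverProxy.lambda.status: pretends the request
// finishes after three status checks.
let calls = 0;
async function fakeStatus() {
    calls += 1;
    if (calls < 3) return { status: RQStatus.STARTED, progress: calls * 30 };
    return { status: RQStatus.FINISHED, progress: 100 };
}

// Poll getStatus until a terminal state; report each update.
function listen(getStatus, onUpdate, intervalMs = 10) {
    return new Promise((resolve) => {
        const tick = async () => {
            const { status, progress } = await getStatus();
            if (status === RQStatus.QUEUED || status === RQStatus.STARTED) {
                onUpdate(status, progress || 0);
                setTimeout(tick, intervalMs); // keep polling
            } else {
                onUpdate(status, progress || 0);
                resolve(status); // terminal state: stop polling
            }
        };
        setTimeout(tick, intervalMs);
    });
}

listen(fakeStatus, (status, progress) => {
    console.log(`${status}: ${progress}%`);
}).then((final) => {
    console.log(`final: ${final}`);
});
```

The production code additionally keeps the pending timeout in `this.listening` so `cancel()` can clear it mid-poll.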

@@ -0,0 +1,73 @@
/*
* Copyright (C) 2019-2020 Intel Corporation
* SPDX-License-Identifier: MIT
*/
/**
* Class representing a machine learning model
* @memberof module:API.cvat.classes
*/
class MLModel {
constructor(data) {
this._id = data.id;
this._name = data.name;
this._labels = data.labels;
this._framework = data.framework;
this._description = data.description;
this._type = data.type;
}
/**
* @returns {string}
* @readonly
*/
get id() {
return this._id;
}
/**
* @returns {string}
* @readonly
*/
get name() {
return this._name;
}
/**
* @returns {string[]}
* @readonly
*/
get labels() {
if (Array.isArray(this._labels)) {
return [...this._labels];
}
return [];
}
/**
* @returns {string}
* @readonly
*/
get framework() {
return this._framework;
}
/**
* @returns {string}
* @readonly
*/
get description() {
return this._description;
}
/**
* @returns {module:API.cvat.enums.ModelType}
* @readonly
*/
get type() {
return this._type;
}
}
module.exports = MLModel;
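`MLModel` is a thin wrapper over data that `LambdaManager.list()` builds from serialized lambda functions. A sketch of that field mapping with an illustrative payload (the function id, labels, and helper name below are made up; note that the server's `kind` becomes the client's `type`):

```javascript
// Hypothetical helper showing how a serialized lambda function
// (GET /api/v1/lambda/functions) maps onto MLModel constructor data.
function toMLModelData(serialized) {
    return {
        id: serialized.id,
        name: serialized.name,
        description: serialized.description,
        framework: serialized.framework,
        labels: [...serialized.labels], // defensive copy
        type: serialized.kind, // server "kind" -> client "type"
    };
}

const data = toMLModelData({
    id: 'openvino.omz.public.yolo-v3-tf',
    name: 'YOLO v3',
    description: 'Object detector',
    framework: 'openvino',
    labels: ['person', 'car'],
    kind: 'detector',
});
console.log(data.type); // 'detector'
```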

@@ -162,7 +162,6 @@
response = await Axios.get(`${backendAPI}/restrictions/user-agreements`, {
proxy: config.proxy,
});
} catch (errorData) {
throw generateError(errorData);
}
@@ -170,7 +169,15 @@
return response.data;
}
async function register(username, firstName, lastName, email, password1, password2, confirmations) {
async function register(
username,
firstName,
lastName,
email,
password1,
password2,
confirmations,
) {
let response = null;
try {
const data = JSON.stringify({
@@ -662,6 +669,96 @@
}
}
async function getLambdaFunctions() {
const { backendAPI } = config;
try {
const response = await Axios.get(`${backendAPI}/lambda/functions`, {
proxy: config.proxy,
});
return response.data;
} catch (errorData) {
throw generateError(errorData);
}
}
async function runLambdaRequest(body) {
const { backendAPI } = config;
try {
const response = await Axios.post(`${backendAPI}/lambda/requests`,
JSON.stringify(body), {
proxy: config.proxy,
headers: {
'Content-Type': 'application/json',
},
});
return response.data;
} catch (errorData) {
throw generateError(errorData);
}
}
async function callLambdaFunction(funId, body) {
const { backendAPI } = config;
try {
const response = await Axios.post(`${backendAPI}/lambda/functions/${funId}`,
JSON.stringify(body), {
proxy: config.proxy,
headers: {
'Content-Type': 'application/json',
},
});
return response.data;
} catch (errorData) {
throw generateError(errorData);
}
}
async function getLambdaRequests() {
const { backendAPI } = config;
try {
const response = await Axios.get(`${backendAPI}/lambda/requests`, {
proxy: config.proxy,
});
return response.data;
} catch (errorData) {
throw generateError(errorData);
}
}
async function getRequestStatus(requestID) {
const { backendAPI } = config;
try {
const response = await Axios.get(`${backendAPI}/lambda/requests/${requestID}`, {
proxy: config.proxy,
});
return response.data;
} catch (errorData) {
throw generateError(errorData);
}
}
async function cancelLambdaRequest(requestId) {
const { backendAPI } = config;
try {
await Axios.delete(`${backendAPI}/lambda/requests/${requestId}`, {
proxy: config.proxy,
});
} catch (errorData) {
throw generateError(errorData);
}
}
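Each of the six lambda endpoints above repeats the same try/catch around an Axios call. As a design note, that shape could be factored into a small wrapper; the sketch below is hypothetical (`withServerError`, `httpGet`, and the error factory are not part of this patch, which uses `generateError` directly):

```javascript
// Hypothetical helper: run a request and re-throw server failures
// through a caller-supplied error wrapper (generateError in the patch).
async function withServerError(wrapError, request) {
    try {
        return await request();
    } catch (errorData) {
        throw wrapError(errorData);
    }
}

// getLambdaFunctions re-expressed with the helper; httpGet stands in
// for Axios.get and must resolve to an object with a `data` field.
async function getLambdaFunctions(httpGet, backendAPI) {
    const response = await withServerError(
        (error) => new Error(`Could not get functions: ${error}`),
        () => httpGet(`${backendAPI}/lambda/functions`),
    );
    return response.data;
}
```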
Object.defineProperties(this, Object.freeze({
server: {
value: Object.freeze({
@@ -731,6 +828,18 @@
}),
writable: false,
},
lambda: {
value: Object.freeze({
list: getLambdaFunctions,
status: getRequestStatus,
requests: getLambdaRequests,
run: runLambdaRequest,
call: callLambdaFunction,
cancel: cancelLambdaRequest,
}),
writable: false,
},
}));
}
}

@@ -3,27 +3,14 @@
// SPDX-License-Identifier: MIT
import { ActionUnion, createAction, ThunkAction } from 'utils/redux';
import {
Model,
ModelType,
ModelFiles,
ActiveInference,
CombinedState,
} from 'reducers/interfaces';
import { Model, ActiveInference, RQStatus } from 'reducers/interfaces';
import getCore from 'cvat-core-wrapper';
export enum PreinstalledModels {
RCNN = 'RCNN Object Detector',
MaskRCNN = 'Mask RCNN Object Detector',
}
export enum ModelsActionTypes {
GET_MODELS = 'GET_MODELS',
GET_MODELS_SUCCESS = 'GET_MODELS_SUCCESS',
GET_MODELS_FAILED = 'GET_MODELS_FAILED',
DELETE_MODEL = 'DELETE_MODEL',
DELETE_MODEL_SUCCESS = 'DELETE_MODEL_SUCCESS',
DELETE_MODEL_FAILED = 'DELETE_MODEL_FAILED',
CREATE_MODEL = 'CREATE_MODEL',
CREATE_MODEL_SUCCESS = 'CREATE_MODEL_SUCCESS',
CREATE_MODEL_FAILED = 'CREATE_MODEL_FAILED',
@@ -50,28 +37,6 @@ export const modelsActions = {
error,
},
),
deleteModelSuccess: (id: number) => createAction(
ModelsActionTypes.DELETE_MODEL_SUCCESS, {
id,
},
),
deleteModelFailed: (id: number, error: any) => createAction(
ModelsActionTypes.DELETE_MODEL_FAILED, {
error, id,
},
),
createModel: () => createAction(ModelsActionTypes.CREATE_MODEL),
createModelSuccess: () => createAction(ModelsActionTypes.CREATE_MODEL_SUCCESS),
createModelFailed: (error: any) => createAction(
ModelsActionTypes.CREATE_MODEL_FAILED, {
error,
},
),
createModelUpdateStatus: (status: string) => createAction(
ModelsActionTypes.CREATE_MODEL_STATUS_UPDATED, {
status,
},
),
fetchMetaFailed: (error: any) => createAction(ModelsActionTypes.FETCH_META_FAILED, { error }),
getInferenceStatusSuccess: (taskID: number, activeInference: ActiveInference) => createAction(
ModelsActionTypes.GET_INFERENCE_STATUS_SUCCESS, {
@@ -96,7 +61,7 @@ export const modelsActions = {
taskID,
},
),
cancelInferenceFaild: (taskID: number, error: any) => createAction(
cancelInferenceFailed: (taskID: number, error: any) => createAction(
ModelsActionTypes.CANCEL_INFERENCE_FAILED, {
taskID,
error,
@@ -113,361 +78,76 @@ export const modelsActions = {
export type ModelsActions = ActionUnion<typeof modelsActions>;
const core = getCore();
const baseURL = core.config.backendAPI.slice(0, -7);
export function getModelsAsync(): ThunkAction {
return async (dispatch, getState): Promise<void> => {
const state: CombinedState = getState();
const OpenVINO = state.plugins.list.AUTO_ANNOTATION;
const RCNN = state.plugins.list.TF_ANNOTATION;
const MaskRCNN = state.plugins.list.TF_SEGMENTATION;
return async (dispatch): Promise<void> => {
dispatch(modelsActions.getModels());
const models: Model[] = [];
try {
if (OpenVINO) {
const response = await core.server.request(
`${baseURL}/auto_annotation/meta/get`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
data: JSON.stringify([]),
},
);
for (const model of response.models) {
models.push({
id: model.id,
ownerID: model.owner,
primary: model.primary,
name: model.name,
uploadDate: model.uploadDate,
updateDate: model.updateDate,
labels: [...model.labels],
});
}
}
if (RCNN) {
models.push({
id: null,
ownerID: null,
primary: true,
name: PreinstalledModels.RCNN,
uploadDate: '',
updateDate: '',
labels: ['surfboard', 'car', 'skateboard', 'boat', 'clock',
'cat', 'cow', 'knife', 'apple', 'cup', 'tv',
'baseball_bat', 'book', 'suitcase', 'tennis_racket',
'stop_sign', 'couch', 'cell_phone', 'keyboard',
'cake', 'tie', 'frisbee', 'truck', 'fire_hydrant',
'snowboard', 'bed', 'vase', 'teddy_bear',
'toaster', 'wine_glass', 'traffic_light',
'broccoli', 'backpack', 'carrot', 'potted_plant',
'donut', 'umbrella', 'parking_meter', 'bottle',
'sandwich', 'motorcycle', 'bear', 'banana',
'person', 'scissors', 'elephant', 'dining_table',
'toothbrush', 'toilet', 'skis', 'bowl', 'sheep',
'refrigerator', 'oven', 'microwave', 'train',
'orange', 'mouse', 'laptop', 'bench', 'bicycle',
'fork', 'kite', 'zebra', 'baseball_glove', 'bus',
'spoon', 'horse', 'handbag', 'pizza', 'sports_ball',
'airplane', 'hair_drier', 'hot_dog', 'remote',
'sink', 'dog', 'bird', 'giraffe', 'chair',
],
});
}
if (MaskRCNN) {
models.push({
id: null,
ownerID: null,
primary: true,
name: PreinstalledModels.MaskRCNN,
uploadDate: '',
updateDate: '',
labels: ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush',
],
});
}
} catch (error) {
dispatch(modelsActions.getModelsFailed(error));
return;
}
const models = (await core.lambda.list())
.filter((model: Model) => ['detector', 'reid'].includes(model.type));
dispatch(modelsActions.getModelsSuccess(models));
};
}
export function deleteModelAsync(id: number): ThunkAction {
return async (dispatch): Promise<void> => {
try {
await core.server.request(`${baseURL}/auto_annotation/delete/${id}`, {
method: 'DELETE',
});
} catch (error) {
dispatch(modelsActions.deleteModelFailed(id, error));
return;
dispatch(modelsActions.getModelsFailed(error));
}
dispatch(modelsActions.deleteModelSuccess(id));
};
}
export function createModelAsync(name: string, files: ModelFiles, global: boolean): ThunkAction {
return async (dispatch): Promise<void> => {
async function checkCallback(id: string): Promise<void> {
try {
const data = await core.server.request(
`${baseURL}/auto_annotation/check/${id}`, {
method: 'GET',
},
);
switch (data.status) {
case 'failed':
dispatch(modelsActions.createModelFailed(
`Checking request has returned the "${data.status}" status. Message: ${data.error}`,
));
break;
case 'unknown':
dispatch(modelsActions.createModelFailed(
`Checking request has returned the "${data.status}" status.`,
));
break;
case 'finished':
dispatch(modelsActions.createModelSuccess());
break;
default:
if ('progress' in data) {
modelsActions.createModelUpdateStatus(data.progress);
}
setTimeout(checkCallback.bind(null, id), 1000);
}
} catch (error) {
dispatch(modelsActions.createModelFailed(error));
}
}
dispatch(modelsActions.createModel());
const data = new FormData();
data.append('name', name);
data.append('storage', typeof files.bin === 'string' ? 'shared' : 'local');
data.append('shared', global.toString());
Object.keys(files).reduce((acc, key: string): FormData => {
acc.append(key, files[key]);
return acc;
}, data);
try {
dispatch(modelsActions.createModelUpdateStatus('Request is beign sent..'));
const response = await core.server.request(
`${baseURL}/auto_annotation/create`, {
method: 'POST',
data,
contentType: false,
processData: false,
},
);
dispatch(modelsActions.createModelUpdateStatus('Request is being processed..'));
setTimeout(checkCallback.bind(null, response.id), 1000);
} catch (error) {
dispatch(modelsActions.createModelFailed(error));
}
};
}
interface InferenceMeta {
active: boolean;
taskID: number;
requestID: string;
modelType: ModelType;
}
const timers: any = {};
async function timeoutCallback(
url: string,
taskID: number,
modelType: ModelType,
function listen(
inferenceMeta: InferenceMeta,
dispatch: (action: ModelsActions) => void,
): Promise<void> {
try {
delete timers[taskID];
const response = await core.server.request(url, {
method: 'GET',
});
const activeInference: ActiveInference = {
status: response.status,
progress: +response.progress || 0,
error: response.error || response.stderr || '',
modelType,
};
if (activeInference.status === 'unknown') {
dispatch(modelsActions.getInferenceStatusFailed(
taskID,
new Error(
`Inference status for the task ${taskID} is unknown.`,
),
));
return;
}
if (activeInference.status === 'failed') {
): void {
const { taskID, requestID } = inferenceMeta;
core.lambda.listen(requestID, (status: RQStatus, progress: number, message: string) => {
if (status === RQStatus.failed || status === RQStatus.unknown) {
dispatch(modelsActions.getInferenceStatusFailed(
taskID,
new Error(
`Inference status for the task ${taskID} is failed. ${activeInference.error}`,
`Inference status for the task ${taskID} is ${status}. ${message}`,
),
));
return;
}
if (activeInference.status !== 'finished') {
timers[taskID] = setTimeout(
timeoutCallback.bind(
null,
url,
taskID,
modelType,
dispatch,
), 3000,
);
}
dispatch(modelsActions.getInferenceStatusSuccess(taskID, activeInference));
} catch (error) {
dispatch(modelsActions.getInferenceStatusFailed(taskID, new Error(
`Server request for the task ${taskID} was failed`,
)));
}
}
function subscribe(
inferenceMeta: InferenceMeta,
dispatch: (action: ModelsActions) => void,
): void {
if (!(inferenceMeta.taskID in timers)) {
let requestURL = `${baseURL}`;
if (inferenceMeta.modelType === ModelType.OPENVINO) {
requestURL = `${requestURL}/auto_annotation/check`;
} else if (inferenceMeta.modelType === ModelType.RCNN) {
requestURL = `${requestURL}/tensorflow/annotation/check/task`;
} else if (inferenceMeta.modelType === ModelType.MASK_RCNN) {
requestURL = `${requestURL}/tensorflow/segmentation/check/task`;
}
requestURL = `${requestURL}/${inferenceMeta.requestID}`;
timers[inferenceMeta.taskID] = setTimeout(
timeoutCallback.bind(
null,
requestURL,
inferenceMeta.taskID,
inferenceMeta.modelType,
dispatch,
),
);
}
}
export function getInferenceStatusAsync(tasks: number[]): ThunkAction {
return async (dispatch, getState): Promise<void> => {
function parse(response: any, modelType: ModelType): InferenceMeta[] {
return Object.keys(response).map((key: string): InferenceMeta => ({
taskID: +key,
requestID: response[key].rq_id || key,
active: typeof (response[key].active) === 'undefined' ? ['queued', 'started']
.includes(response[key].status.toLowerCase()) : response[key].active,
modelType,
dispatch(modelsActions.getInferenceStatusSuccess(taskID, {
status,
progress,
error: message,
id: requestID,
}));
}
const state: CombinedState = getState();
const OpenVINO = state.plugins.list.AUTO_ANNOTATION;
const RCNN = state.plugins.list.TF_ANNOTATION;
const MaskRCNN = state.plugins.list.TF_SEGMENTATION;
}).catch((error: Error) => {
dispatch(modelsActions.getInferenceStatusFailed(taskID, {
status: 'unknown',
progress: 0,
error: error.toString(),
id: requestID,
}));
});
}
export function getInferenceStatusAsync(): ThunkAction {
return async (dispatch): Promise<void> => {
const dispatchCallback = (action: ModelsActions): void => {
dispatch(action);
};
try {
if (OpenVINO) {
const response = await core.server.request(
`${baseURL}/auto_annotation/meta/get`, {
method: 'POST',
data: JSON.stringify(tasks),
headers: {
'Content-Type': 'application/json',
},
},
);
parse(response.run, ModelType.OPENVINO)
.filter((inferenceMeta: InferenceMeta): boolean => inferenceMeta.active)
const requests = await core.lambda.requests();
requests
.map((request: any): object => ({
taskID: +request.function.task,
requestID: request.id,
}))
.forEach((inferenceMeta: InferenceMeta): void => {
subscribe(inferenceMeta, dispatchCallback);
listen(inferenceMeta, dispatchCallback);
});
}
if (RCNN) {
const response = await core.server.request(
`${baseURL}/tensorflow/annotation/meta/get`, {
method: 'POST',
data: JSON.stringify(tasks),
headers: {
'Content-Type': 'application/json',
},
},
);
parse(response, ModelType.RCNN)
.filter((inferenceMeta: InferenceMeta): boolean => inferenceMeta.active)
.forEach((inferenceMeta: InferenceMeta): void => {
subscribe(inferenceMeta, dispatchCallback);
});
}
if (MaskRCNN) {
const response = await core.server.request(
`${baseURL}/tensorflow/segmentation/meta/get`, {
method: 'POST',
data: JSON.stringify(tasks),
headers: {
'Content-Type': 'application/json',
},
},
);
parse(response, ModelType.MASK_RCNN)
.filter((inferenceMeta: InferenceMeta): boolean => inferenceMeta.active)
.forEach((inferenceMeta: InferenceMeta): void => {
subscribe(inferenceMeta, dispatchCallback);
});
}
} catch (error) {
dispatch(modelsActions.fetchMetaFailed(error));
}
@@ -477,37 +157,20 @@ export function getInferenceStatusAsync(tasks: number[]): ThunkAction {
export function startInferenceAsync(
taskInstance: any,
model: Model,
mapping: {
[index: string]: string;
},
cleanOut: boolean,
body: object,
): ThunkAction {
return async (dispatch): Promise<void> => {
try {
if (model.name === PreinstalledModels.RCNN) {
await core.server.request(
`${baseURL}/tensorflow/annotation/create/task/${taskInstance.id}`,
);
} else if (model.name === PreinstalledModels.MaskRCNN) {
await core.server.request(
`${baseURL}/tensorflow/segmentation/create/task/${taskInstance.id}`,
);
} else {
await core.server.request(
`${baseURL}/auto_annotation/start/${model.id}/${taskInstance.id}`, {
method: 'POST',
data: JSON.stringify({
reset: cleanOut,
labels: mapping,
}),
headers: {
'Content-Type': 'application/json',
},
},
);
}
const requestID: string = await core.lambda.run(taskInstance, model, body);
const dispatchCallback = (action: ModelsActions): void => {
dispatch(action);
};
dispatch(getInferenceStatusAsync([taskInstance.id]));
listen({
taskID: taskInstance.id,
requestID,
}, dispatchCallback);
} catch (error) {
dispatch(modelsActions.startInferenceFailed(taskInstance.id, error));
}
@@ -518,30 +181,10 @@ export function cancelInferenceAsync(taskID: number): ThunkAction {
return async (dispatch, getState): Promise<void> => {
try {
const inference = getState().models.inferences[taskID];
if (inference) {
if (inference.modelType === ModelType.OPENVINO) {
await core.server.request(
`${baseURL}/auto_annotation/cancel/${taskID}`,
);
} else if (inference.modelType === ModelType.RCNN) {
await core.server.request(
`${baseURL}/tensorflow/annotation/cancel/task/${taskID}`,
);
} else if (inference.modelType === ModelType.MASK_RCNN) {
await core.server.request(
`${baseURL}/tensorflow/segmentation/cancel/task/${taskID}`,
);
}
if (timers[taskID]) {
clearTimeout(timers[taskID]);
delete timers[taskID];
}
}
await core.lambda.cancel(inference.id);
dispatch(modelsActions.cancelInferenceSuccess(taskID));
} catch (error) {
dispatch(modelsActions.cancelInferenceFaild(taskID, error));
dispatch(modelsActions.cancelInferenceFailed(taskID, error));
}
};
}

@@ -29,27 +29,19 @@ export function checkPluginsAsync(): ThunkAction {
dispatch(pluginActions.checkPlugins());
const plugins: PluginObjects = {
ANALYTICS: false,
AUTO_ANNOTATION: false,
GIT_INTEGRATION: false,
TF_ANNOTATION: false,
TF_SEGMENTATION: false,
REID: false,
DEXTR_SEGMENTATION: false,
};
const promises: Promise<boolean>[] = [
PluginChecker.check(SupportedPlugins.ANALYTICS),
PluginChecker.check(SupportedPlugins.AUTO_ANNOTATION),
PluginChecker.check(SupportedPlugins.GIT_INTEGRATION),
PluginChecker.check(SupportedPlugins.TF_ANNOTATION),
PluginChecker.check(SupportedPlugins.TF_SEGMENTATION),
PluginChecker.check(SupportedPlugins.DEXTR_SEGMENTATION),
PluginChecker.check(SupportedPlugins.REID),
];
const values = await Promise.all(promises);
[plugins.ANALYTICS, plugins.AUTO_ANNOTATION, plugins.GIT_INTEGRATION, plugins.TF_ANNOTATION,
plugins.TF_SEGMENTATION, plugins.DEXTR_SEGMENTATION, plugins.REID] = values;
[plugins.ANALYTICS, plugins.GIT_INTEGRATION,
plugins.DEXTR_SEGMENTATION] = values;
dispatch(pluginActions.checkedAllPlugins(plugins));
};
}

@@ -102,13 +102,7 @@ ThunkAction<Promise<void>, {}, {}, AnyAction> {
const promises = array
.map((task): string => (task as any).frames.preview());
dispatch(
getInferenceStatusAsync(
array.map(
(task: any): number => task.id,
),
),
);
dispatch(getInferenceStatusAsync());
for (const promise of promises) {
try {

@@ -15,16 +15,11 @@ interface Props {
taskID: number;
taskMode: string;
bugTracker: string;
loaders: any[];
dumpers: any[];
loadActivity: string | null;
dumpActivities: string[] | null;
exportActivities: string[] | null;
installedTFAnnotation: boolean;
installedTFSegmentation: boolean;
installedAutoAnnotation: boolean;
inferenceIsActive: boolean;
onClickMenu: (params: ClickParam, file?: File) => void;
@@ -44,12 +39,7 @@ export default function ActionsMenuComponent(props: Props): JSX.Element {
taskID,
taskMode,
bugTracker,
installedAutoAnnotation,
installedTFAnnotation,
installedTFSegmentation,
inferenceIsActive,
dumpers,
loaders,
onClickMenu,
@@ -58,9 +48,6 @@ export default function ActionsMenuComponent(props: Props): JSX.Element {
loadActivity,
} = props;
const renderModelRunner = installedAutoAnnotation
|| installedTFAnnotation || installedTFSegmentation;
let latestParams: ClickParam | null = null;
function onClickMenuWrapper(params: ClickParam | null, file?: File): void {
const copyParams = params || latestParams;
@@ -137,17 +124,12 @@ export default function ActionsMenuComponent(props: Props): JSX.Element {
})
}
{!!bugTracker && <Menu.Item key={Actions.OPEN_BUG_TRACKER}>Open bug tracker</Menu.Item>}
{
renderModelRunner
&& (
<Menu.Item
disabled={inferenceIsActive}
key={Actions.RUN_AUTO_ANNOTATION}
>
Automatic annotation
</Menu.Item>
)
}
<hr />
<Menu.Item key={Actions.DELETE_TASK}>Delete</Menu.Item>
</Menu>

@@ -9,7 +9,6 @@ import Modal from 'antd/lib/modal';
import DumpSubmenu from 'components/actions-menu/dump-submenu';
import LoadSubmenu from 'components/actions-menu/load-submenu';
import ExportSubmenu from 'components/actions-menu/export-submenu';
import ReIDPlugin from './reid-plugin';
interface Props {
taskMode: string;
@@ -18,7 +17,6 @@ interface Props {
loadActivity: string | null;
dumpActivities: string[] | null;
exportActivities: string[] | null;
installedReID: boolean;
taskID: number;
onClickMenu(params: ClickParam, file?: File): void;
}
@@ -40,7 +38,6 @@ export default function AnnotationMenuComponent(props: Props): JSX.Element {
loadActivity,
dumpActivities,
exportActivities,
installedReID,
taskID,
} = props;
@@ -125,7 +122,6 @@ export default function AnnotationMenuComponent(props: Props): JSX.Element {
Open the task
</a>
</Menu.Item>
{ installedReID && <ReIDPlugin /> }
</Menu>
);
}

@@ -1,229 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import ReactDOM from 'react-dom';
import React, { useState, useEffect } from 'react';
import { Row, Col } from 'antd/lib/grid';
import Modal from 'antd/lib/modal';
import Menu from 'antd/lib/menu';
import Text from 'antd/lib/typography/Text';
import InputNumber from 'antd/lib/input-number';
import Tooltip from 'antd/lib/tooltip';
import { clamp } from 'utils/math';
import { run, cancel } from 'utils/reid-utils';
import { connect } from 'react-redux';
import { CombinedState } from 'reducers/interfaces';
import { fetchAnnotationsAsync } from 'actions/annotation-actions';
interface InputModalProps {
visible: boolean;
onCancel(): void;
onSubmit(threshold: number, distance: number): void;
}
function InputModal(props: InputModalProps): JSX.Element {
const { visible, onCancel, onSubmit } = props;
const [threshold, setThreshold] = useState(0.5);
const [distance, setDistance] = useState(50);
const [thresholdMin, thresholdMax] = [0.05, 0.95];
const [distanceMin, distanceMax] = [1, 1000];
return (
<Modal
closable={false}
width={300}
visible={visible}
onCancel={onCancel}
onOk={() => onSubmit(threshold, distance)}
okText='Merge'
>
<Row type='flex'>
<Col span={10}>
<Tooltip title='Similarity of objects on neighbour frames is calculated using AI model' mouseLeaveDelay={0}>
<Text>Similarity threshold: </Text>
</Tooltip>
</Col>
<Col span={12}>
<InputNumber
style={{ width: '100%' }}
min={thresholdMin}
max={thresholdMax}
step={0.05}
value={threshold}
onChange={(value: number | undefined) => {
if (typeof (value) === 'number') {
setThreshold(clamp(value, thresholdMin, thresholdMax));
}
}}
/>
</Col>
</Row>
<Row type='flex'>
<Col span={10}>
<Tooltip title='The value defines max distance to merge (between centers of two objects on neighbour frames)' mouseLeaveDelay={0}>
<Text>Max pixel distance: </Text>
</Tooltip>
</Col>
<Col span={12}>
<InputNumber
style={{ width: '100%' }}
min={distanceMin}
max={distanceMax}
step={5}
value={distance}
onChange={(value: number | undefined) => {
if (typeof (value) === 'number') {
setDistance(clamp(value, distanceMin, distanceMax));
}
}}
/>
</Col>
</Row>
</Modal>
);
}
interface InProgressDialogProps {
visible: boolean;
progress: number;
onCancel(): void;
}
function InProgressDialog(props: InProgressDialogProps): JSX.Element {
const { visible, onCancel, progress } = props;
return (
<Modal
closable={false}
width={300}
visible={visible}
okText='Cancel'
okButtonProps={{
type: 'danger',
}}
onOk={onCancel}
cancelButtonProps={{
style: {
display: 'none',
},
}}
>
<Text>{`Merging is in progress ${progress}%`}</Text>
</Modal>
);
}
const reidContainer = window.document.createElement('div');
reidContainer.setAttribute('id', 'cvat-reid-wrapper');
window.document.body.appendChild(reidContainer);
interface StateToProps {
jobInstance: any | null;
}
interface DispatchToProps {
updateAnnotations(): void;
}
function mapStateToProps(state: CombinedState): StateToProps {
const {
annotation: {
job: {
instance: jobInstance,
},
},
} = state;
return {
jobInstance,
};
}
function mapDispatchToProps(dispatch: any): DispatchToProps {
return {
updateAnnotations(): void {
dispatch(fetchAnnotationsAsync());
},
};
}
function ReIDPlugin(props: StateToProps & DispatchToProps): JSX.Element {
const { jobInstance, updateAnnotations, ...rest } = props;
const [showInputDialog, setShowInputDialog] = useState(false);
const [showInProgressDialog, setShowInProgressDialog] = useState(false);
const [progress, setProgress] = useState(0);
useEffect(() => {
ReactDOM.render((
<>
<InProgressDialog
visible={showInProgressDialog}
progress={progress}
onCancel={() => {
cancel(jobInstance.id);
}}
/>
<InputModal
visible={showInputDialog}
onCancel={() => setShowInputDialog(false)}
onSubmit={async (threshold: number, distance: number) => {
setProgress(0);
setShowInputDialog(false);
setShowInProgressDialog(true);
const onUpdatePercentage = (percent: number): void => {
setProgress(percent);
};
try {
const annotations = await jobInstance.annotations.export();
const merged = await run({
threshold,
distance,
onUpdatePercentage,
jobID: jobInstance.id,
annotations,
});
await jobInstance.annotations.clear();
updateAnnotations(); // one more call to do not confuse canvas
// False positive of no-unsanitized/method
// eslint-disable-next-line no-unsanitized/method
await jobInstance.annotations.import(merged);
updateAnnotations();
} catch (error) {
Modal.error({
title: 'Could not merge annotations',
content: error.toString(),
});
} finally {
setShowInProgressDialog(false);
}
}}
/>
</>
), reidContainer);
});
return (
<Menu.Item
{...rest}
key='run_reid'
title='Run algorithm that merges separated bounding boxes automatically'
onClick={() => {
if (jobInstance) {
setShowInputDialog(true);
}
}}
>
Run ReID merge
</Menu.Item>
);
}
export default connect(
mapStateToProps,
mapDispatchToProps,
)(ReIDPlugin);

@@ -1,160 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Icon from 'antd/lib/icon';
import Alert from 'antd/lib/alert';
import Button from 'antd/lib/button';
import Tooltip from 'antd/lib/tooltip';
import message from 'antd/lib/message';
import notification from 'antd/lib/notification';
import Text from 'antd/lib/typography/Text';
import consts from 'consts';
import ConnectedFileManager, {
FileManagerContainer,
} from 'containers/file-manager/file-manager';
import { ModelFiles } from 'reducers/interfaces';
import WrappedCreateModelForm, {
CreateModelForm,
} from './create-model-form';
interface Props {
createModel(name: string, files: ModelFiles, global: boolean): void;
isAdmin: boolean;
modelCreatingStatus: string;
}
export default class CreateModelContent extends React.PureComponent<Props> {
private modelForm: CreateModelForm;
private fileManagerContainer: FileManagerContainer;
public constructor(props: Props) {
super(props);
this.modelForm = null as any as CreateModelForm;
this.fileManagerContainer = null as any as FileManagerContainer;
}
public componentDidUpdate(prevProps: Props): void {
const { modelCreatingStatus } = this.props;
if (prevProps.modelCreatingStatus !== 'CREATED'
&& modelCreatingStatus === 'CREATED') {
message.success('The model has been uploaded');
this.modelForm.resetFields();
this.fileManagerContainer.reset();
}
}
private handleSubmitClick = (): void => {
const { createModel } = this.props;
this.modelForm.submit()
.then((data) => {
const {
local,
share,
} = this.fileManagerContainer.getFiles();
const files = local.length ? local : share;
const grouppedFiles: ModelFiles = {
xml: '',
bin: '',
py: '',
json: '',
};
(files as any).reduce((acc: ModelFiles, value: File | string): ModelFiles => {
const name = typeof value === 'string' ? value : value.name;
const [extension] = name.split('.').reverse();
if (extension in acc) {
acc[extension] = value;
}
return acc;
}, grouppedFiles);
if (Object.keys(grouppedFiles)
.map((key: string) => grouppedFiles[key])
.filter((val) => !!val).length !== 4) {
notification.error({
message: 'Could not upload a model',
description: 'Please, specify correct files',
});
} else {
createModel(data.name, grouppedFiles, data.global);
}
}).catch(() => {
notification.error({
message: 'Could not upload a model',
description: 'Please, check input fields',
});
});
};
public render(): JSX.Element {
const {
modelCreatingStatus,
} = this.props;
const loading = !!modelCreatingStatus
&& modelCreatingStatus !== 'CREATED';
const status = modelCreatingStatus
&& modelCreatingStatus !== 'CREATED' ? modelCreatingStatus : '';
const { AUTO_ANNOTATION_GUIDE_URL } = consts;
return (
<Row type='flex' justify='start' align='middle' className='cvat-create-model-content'>
<Col span={24}>
<Tooltip title='Click to open guide' mouseLeaveDelay={0}>
<Icon
onClick={(): void => {
// false positive
// eslint-disable-next-line
window.open(AUTO_ANNOTATION_GUIDE_URL, '_blank');
}}
type='question-circle'
/>
</Tooltip>
</Col>
<Col span={24}>
<WrappedCreateModelForm
wrappedComponentRef={
(ref: CreateModelForm): void => {
this.modelForm = ref;
}
}
/>
</Col>
<Col span={24}>
<Text type='danger'>* </Text>
<Text className='cvat-text-color'>Select files:</Text>
</Col>
<Col span={24}>
<ConnectedFileManager
ref={
(container: FileManagerContainer): void => {
this.fileManagerContainer = container;
}
}
withRemote={false}
/>
</Col>
<Col span={18}>
{status && <Alert message={`${status}`} />}
</Col>
<Col span={6}>
<Button
type='primary'
disabled={loading}
loading={loading}
onClick={this.handleSubmitClick}
>
Submit
</Button>
</Col>
</Row>
);
}
}
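The submit handler above groups the selected files by extension before validating that all four model parts are present. A minimal standalone sketch of that grouping logic, with hypothetical `groupFiles`/`fileNames` names, could look like:

```typescript
// Minimal sketch, mirroring the reduce() in handleSubmitClick above.
// Groups selected file names by extension into the four slots a model
// upload expects; `groupFiles` and its input are hypothetical names.
interface GroupedFiles {
    [key: string]: string;
    xml: string;
    bin: string;
    py: string;
    json: string;
}

function groupFiles(fileNames: string[]): GroupedFiles {
    const grouped: GroupedFiles = { xml: '', bin: '', py: '', json: '' };
    for (const name of fileNames) {
        // take the last dot-separated segment as the extension
        const [extension] = name.split('.').reverse();
        if (extension in grouped) {
            grouped[extension] = name;
        }
    }
    return grouped;
}

const grouped = groupFiles(['model.xml', 'model.bin', 'interp.py', 'mapping.json', 'notes.txt']);
// all four slots must be non-empty before createModel() may be called
const complete = Object.keys(grouped).every((key) => !!grouped[key]);
```

Files with unrecognized extensions (here `notes.txt`) are silently dropped, which is why the component shows an error rather than uploading a partial set.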

@ -1,80 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Form, { FormComponentProps } from 'antd/lib/form/Form';
import Input from 'antd/lib/input';
import Tooltip from 'antd/lib/tooltip';
import Checkbox from 'antd/lib/checkbox';
import Text from 'antd/lib/typography/Text';
type Props = FormComponentProps;
export class CreateModelForm extends React.PureComponent<Props> {
public submit(): Promise<{name: string; global: boolean}> {
const { form } = this.props;
return new Promise((resolve, reject) => {
form.validateFields((errors, values): void => {
if (!errors) {
resolve({
name: values.name,
global: values.global,
});
} else {
reject(errors);
}
});
});
}
public resetFields(): void {
const { form } = this.props;
form.resetFields();
}
public render(): JSX.Element {
const { form } = this.props;
const { getFieldDecorator } = form;
return (
<Form onSubmit={(e: React.FormEvent): void => e.preventDefault()}>
<Row>
<Col span={24}>
<Text type='danger'>* </Text>
<Text className='cvat-text-color'>Name:</Text>
</Col>
<Col span={14}>
<Form.Item hasFeedback>
{ getFieldDecorator('name', {
rules: [{
required: true,
message: 'Please specify a model name',
}],
})(<Input placeholder='Model name' />)}
</Form.Item>
</Col>
<Col span={8} offset={2}>
<Form.Item>
<Tooltip title='Will this model be available for everyone?' mouseLeaveDelay={0}>
{ getFieldDecorator('global', {
initialValue: false,
valuePropName: 'checked',
})(
<Checkbox>
<Text className='cvat-text-color'>
Load globally
</Text>
</Checkbox>,
)}
</Tooltip>
</Form.Item>
</Col>
</Row>
</Form>
);
}
}
export default Form.create()(CreateModelForm);

@ -1,38 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import './styles.scss';
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Text from 'antd/lib/typography/Text';
import { ModelFiles } from 'reducers/interfaces';
import CreateModelContent from './create-model-content';
interface Props {
createModel(name: string, files: ModelFiles, global: boolean): void;
isAdmin: boolean;
modelCreatingStatus: string;
}
export default function CreateModelPageComponent(props: Props): JSX.Element {
const {
isAdmin,
modelCreatingStatus,
createModel,
} = props;
return (
<Row type='flex' justify='center' align='top' className='cvat-create-model-form-wrapper'>
<Col md={20} lg={16} xl={14} xxl={9}>
<Text className='cvat-title'>Upload a new model</Text>
<CreateModelContent
isAdmin={isAdmin}
modelCreatingStatus={modelCreatingStatus}
createModel={createModel}
/>
</Col>
</Row>
);
}

@ -1,43 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
@import '../../base.scss';
.cvat-create-model-form-wrapper {
text-align: center;
margin-top: 40px;
overflow-y: auto;
height: 90%;
> div > span {
font-size: 36px;
}
.cvat-create-model-content {
margin-top: 20px;
width: 100%;
height: auto;
border: 1px solid $border-color-1;
border-radius: 3px;
padding: 20px;
background: $background-color-1;
text-align: initial;
> div:nth-child(1) > i {
float: right;
font-size: 20px;
color: $danger-icon-color;
}
> div:nth-child(4) {
margin-top: 10px;
}
> div:nth-child(6) > button {
margin-top: 10px;
float: right;
width: 120px;
}
}
}

@ -18,7 +18,6 @@ import TasksPageContainer from 'containers/tasks-page/tasks-page';
import CreateTaskPageContainer from 'containers/create-task-page/create-task-page';
import TaskPageContainer from 'containers/task-page/task-page';
import ModelsPageContainer from 'containers/models-page/models-page';
import CreateModelPageContainer from 'containers/create-model-page/create-model-page';
import AnnotationPageContainer from 'containers/annotation-page/annotation-page';
import LoginPageContainer from 'containers/login-page/login-page';
import RegisterPageContainer from 'containers/register-page/register-page';
@ -50,9 +49,6 @@ interface CVATAppProps {
usersFetching: boolean;
aboutInitialized: boolean;
aboutFetching: boolean;
installedAutoAnnotation: boolean;
installedTFAnnotation: boolean;
installedTFSegmentation: boolean;
userAgreementsFetching: boolean;
userAgreementsInitialized: boolean;
notifications: NotificationsState;
@ -222,9 +218,6 @@ class CVATApplication extends React.PureComponent<CVATAppProps & RouteComponentP
aboutInitialized,
pluginsInitialized,
formatsInitialized,
installedAutoAnnotation,
installedTFSegmentation,
installedTFAnnotation,
switchShortcutsDialog,
switchSettingsDialog,
user,
@ -235,9 +228,6 @@ class CVATApplication extends React.PureComponent<CVATAppProps & RouteComponentP
|| (userInitialized && formatsInitialized
&& pluginsInitialized && usersInitialized && aboutInitialized);
const withModels = installedAutoAnnotation
|| installedTFAnnotation || installedTFSegmentation;
const subKeyMap = {
SWITCH_SHORTCUTS: keyMap.SWITCH_SHORTCUTS,
SWITCH_SETTINGS: keyMap.SWITCH_SETTINGS,
@ -270,10 +260,7 @@ class CVATApplication extends React.PureComponent<CVATAppProps & RouteComponentP
<Route exact path='/tasks/create' component={CreateTaskPageContainer} />
<Route exact path='/tasks/:id' component={TaskPageContainer} />
<Route exact path='/tasks/:tid/jobs/:jid' component={AnnotationPageContainer} />
{withModels
&& <Route exact path='/models' component={ModelsPageContainer} />}
{installedAutoAnnotation
&& <Route exact path='/models/create' component={CreateModelPageContainer} />}
<Route exact path='/models' component={ModelsPageContainer} />
<Redirect push to='/tasks' />
</Switch>
</GlobalHotKeys>

@ -24,9 +24,6 @@ interface HeaderContainerProps {
switchSettingsDialog: (show: boolean) => void;
logoutFetching: boolean;
installedAnalytics: boolean;
installedAutoAnnotation: boolean;
installedTFAnnotation: boolean;
installedTFSegmentation: boolean;
serverHost: string;
username: string;
toolName: string;
@ -43,9 +40,6 @@ type Props = HeaderContainerProps & RouteComponentProps;
function HeaderContainer(props: Props): JSX.Element {
const {
installedTFSegmentation,
installedAutoAnnotation,
installedTFAnnotation,
installedAnalytics,
username,
toolName,
@ -62,10 +56,6 @@ function HeaderContainer(props: Props): JSX.Element {
switchSettingsDialog,
} = props;
const renderModels = installedAutoAnnotation
|| installedTFAnnotation
|| installedTFSegmentation;
const {
CHANGELOG_URL,
LICENSE_URL,
@ -172,8 +162,6 @@ function HeaderContainer(props: Props): JSX.Element {
>
Tasks
</Button>
{ renderModels
&& (
<Button
className='cvat-header-button'
type='link'
@ -184,7 +172,6 @@ function HeaderContainer(props: Props): JSX.Element {
>
Models
</Button>
)}
{ installedAnalytics
&& (
<Button

@ -13,6 +13,8 @@ import Modal from 'antd/lib/modal';
import Tag from 'antd/lib/tag';
import Spin from 'antd/lib/spin';
import notification from 'antd/lib/notification';
import Text from 'antd/lib/typography/Text';
import InputNumber from 'antd/lib/input-number';
import {
Model,
@ -31,20 +33,22 @@ interface Props {
runInference(
taskInstance: any,
model: Model,
mapping: StringObject,
cleanOut: boolean,
body: object,
): void;
}
interface State {
selectedModel: string | null;
cleanOut: boolean;
cleanup: boolean;
mapping: StringObject;
colors: StringObject;
matching: {
model: string;
task: string;
};
threshold: number;
maxDistance: number;
}
function colorGenerator(): () => string {
@ -75,11 +79,14 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
selectedModel: null,
mapping: {},
colors: {},
cleanOut: false,
cleanup: false,
matching: {
model: '',
task: '',
},
threshold: 0.5,
maxDistance: 50,
};
}
@ -109,7 +116,7 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
model: '',
task: '',
},
cleanOut: false,
cleanup: false,
});
}
@ -117,8 +124,7 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
const selectedModelInstance = models
.filter((model) => model.name === selectedModel)[0];
if (!selectedModelInstance.primary) {
if (!selectedModelInstance.labels.length) {
if (selectedModelInstance.type !== 'reid' && !selectedModelInstance.labels.length) {
notification.warning({
message: 'The selected model does not include any labels',
});
@ -143,7 +149,6 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
});
}
}
}
private renderModelSelector(): JSX.Element {
const { models } = this.props;
@ -296,10 +301,65 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
);
}
private renderReidContent(): JSX.Element {
const {
threshold,
maxDistance,
} = this.state;
return (
<div>
<Row type='flex' align='middle' justify='start'>
<Col>
<Text>Threshold</Text>
</Col>
<Col offset={1}>
<Tooltip title='Minimum similarity value for shapes that can be merged'>
<InputNumber
min={0.01}
step={0.01}
max={1}
value={threshold}
onChange={(value: number | undefined) => {
if (typeof (value) === 'number') {
this.setState({
threshold: value,
});
}
}}
/>
</Tooltip>
</Col>
</Row>
<Row type='flex' align='middle' justify='start'>
<Col>
<Text>Maximum distance</Text>
</Col>
<Col offset={1}>
<Tooltip title='Maximum distance between shapes that can be merged'>
<InputNumber
placeholder='Maximum distance'
min={1}
value={maxDistance}
onChange={(value: number | undefined) => {
if (typeof (value) === 'number') {
this.setState({
maxDistance: value,
});
}
}}
/>
</Tooltip>
</Col>
</Row>
</div>
);
}
private renderContent(): JSX.Element {
const {
selectedModel,
cleanOut,
cleanup,
mapping,
} = this.state;
const {
@ -311,8 +371,9 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
.filter((_model): boolean => _model.name === selectedModel)[0];
const excludedModelLabels: string[] = Object.keys(mapping);
const withMapping = model && !model.primary;
const tags = withMapping ? excludedModelLabels
const isDetector = model && model.type === 'detector';
const isReId = model && model.type === 'reid';
const tags = isDetector ? excludedModelLabels
.map((modelLabel: string) => this.renderMappingTag(
modelLabel,
mapping[modelLabel],
@ -332,23 +393,24 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
return (
<div className='cvat-run-model-dialog'>
{ this.renderModelSelector() }
{ withMapping && tags}
{ withMapping
{ isDetector && tags}
{ isDetector
&& mappingISAvailable
&& this.renderMappingInput(availableModelLabels, taskLabels)}
{ withMapping
{ isDetector
&& (
<div>
<Checkbox
checked={cleanOut}
checked={cleanup}
onChange={(e: any): void => this.setState({
cleanOut: e.target.checked,
cleanup: e.target.checked,
})}
>
Clean old annotations
</Checkbox>
</div>
)}
{ isReId && this.renderReidContent() }
</div>
);
}
@ -357,7 +419,9 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
const {
selectedModel,
mapping,
cleanOut,
cleanup,
threshold,
maxDistance,
} = this.state;
const {
@ -373,8 +437,8 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
(model): boolean => model.name === selectedModel,
)[0];
const enabledSubmit = (!!activeModel
&& activeModel.primary) || !!Object.keys(mapping).length;
const enabledSubmit = !!activeModel && (activeModel.type === 'reid'
|| !!Object.keys(mapping).length);
return (
visible && (
@ -387,8 +451,13 @@ export default class ModelRunnerModalComponent extends React.PureComponent<Props
taskInstance,
models
.filter((model): boolean => model.name === selectedModel)[0],
activeModel.type === 'detector' ? {
mapping,
cleanOut,
cleanup,
} : {
threshold,
max_distance: maxDistance,
},
);
closeDialog();
}}

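The modal above now sends a single `body` object to `runInference`, whose shape depends on the model type. A minimal sketch of that branching, using the field names from the component and a hypothetical `buildInferenceBody` helper:

```typescript
// Minimal sketch of the request body passed to runInference() above.
// Detector models send a label mapping plus a cleanup flag; ReID models
// send matching parameters. Field names follow the component;
// buildInferenceBody itself is a hypothetical helper.
type StringObject = { [index: string]: string };

interface DetectorBody {
    mapping: StringObject;
    cleanup: boolean;
}

interface ReidBody {
    threshold: number;
    max_distance: number;
}

function buildInferenceBody(
    type: string,
    mapping: StringObject,
    cleanup: boolean,
    threshold: number,
    maxDistance: number,
): DetectorBody | ReidBody {
    return type === 'detector'
        ? { mapping, cleanup }
        : { threshold, max_distance: maxDistance };
}

const detectorBody = buildInferenceBody('detector', { person: 'person' }, true, 0.5, 50);
const reidBody = buildInferenceBody('reid', {}, false, 0.5, 50) as ReidBody;
```

Keeping the body opaque to the container (typed as `object` in the props) lets each function type define its own parameters without changing the dispatch plumbing.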
@ -20,7 +20,7 @@ export default function BuiltModelItemComponent(props: Props): JSX.Element {
return (
<Row className='cvat-models-list-item' type='flex'>
<Col span={4} xxl={3}>
<Tag color='orange'>Tensorflow</Tag>
<Tag color='orange'>{model.framework}</Tag>
</Col>
<Col span={6} xxl={7}>
<Text className='cvat-text-color'>

@ -0,0 +1,55 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Tag from 'antd/lib/tag';
import Select from 'antd/lib/select';
import Text from 'antd/lib/typography/Text';
import { Model } from 'reducers/interfaces';
interface Props {
model: Model;
}
export default function DeployedModelItem(props: Props): JSX.Element {
const { model } = props;
return (
<Row className='cvat-models-list-item' type='flex'>
<Col span={3}>
<Tag color='purple'>{model.framework}</Tag>
</Col>
<Col span={3}>
<Text className='cvat-text-color'>
{model.name}
</Text>
</Col>
<Col span={3}>
<Tag color='orange'>
{model.type}
</Tag>
</Col>
<Col span={10}>
<Text style={{ whiteSpace: 'normal', height: 'auto' }}>{model.description}</Text>
</Col>
<Col span={5}>
<Select
showSearch
placeholder='Supported labels'
style={{ width: '90%' }}
value='Supported labels'
>
{model.labels.map(
(label): JSX.Element => (
<Select.Option key={label}>
{label}
</Select.Option>
),
)}
</Select>
</Col>
</Row>
);
}

@ -7,35 +7,38 @@ import { Row, Col } from 'antd/lib/grid';
import Text from 'antd/lib/typography/Text';
import { Model } from 'reducers/interfaces';
import BuiltModelItemComponent from './built-model-item';
import DeployedModelItem from './deployed-model-item';
interface Props {
models: Model[];
}
export default function IntegratedModelsListComponent(props: Props): JSX.Element {
export default function DeployedModelsListComponent(props: Props): JSX.Element {
const { models } = props;
const items = models.map((model): JSX.Element => (
<BuiltModelItemComponent key={model.name} model={model} />
<DeployedModelItem key={model.id} model={model} />
));
return (
<>
<Row type='flex' justify='center' align='middle'>
<Col md={22} lg={18} xl={16} xxl={14}>
<Text className='cvat-text-color' strong>Primary</Text>
</Col>
</Row>
<Row type='flex' justify='center' align='middle'>
<Col md={22} lg={18} xl={16} xxl={14} className='cvat-models-list'>
<Row type='flex' align='middle' style={{ padding: '10px' }}>
<Col span={4} xxl={3}>
<Col span={3}>
<Text strong>Framework</Text>
</Col>
<Col span={6} xxl={7}>
<Col span={3}>
<Text strong>Name</Text>
</Col>
<Col span={5} offset={7}>
<Col span={3}>
<Text strong>Type</Text>
</Col>
<Col span={8}>
<Text strong>Description</Text>
</Col>
<Col span={4}>
<Text strong>Labels</Text>
</Col>
</Row>

@ -3,14 +3,12 @@
// SPDX-License-Identifier: MIT
import React from 'react';
import { Link } from 'react-router-dom';
import Text from 'antd/lib/typography/Text';
import { Row, Col } from 'antd/lib/grid';
import Icon from 'antd/lib/icon';
import {
EmptyTasksIcon as EmptyModelsIcon,
} from 'icons';
import consts from 'consts';
import { EmptyTasksIcon as EmptyModelsIcon } from 'icons';
export default function EmptyListComponent(): JSX.Element {
return (
@ -22,7 +20,7 @@ export default function EmptyListComponent(): JSX.Element {
</Row>
<Row type='flex' justify='center' align='middle'>
<Col>
<Text strong>No models uploaded yet ...</Text>
<Text strong>No models deployed yet...</Text>
</Col>
</Row>
<Row type='flex' justify='center' align='middle'>
@ -32,7 +30,8 @@ export default function EmptyListComponent(): JSX.Element {
</Row>
<Row type='flex' justify='center' align='middle'>
<Col>
<Link to='/models/create'>upload a new model</Link>
<Text type='secondary'>deploy a model with </Text>
<a href={`${consts.NUCLIO_GUIDE}`}>nuclio</a>
</Col>
</Row>
</div>

@ -7,35 +7,23 @@ import React from 'react';
import Spin from 'antd/lib/spin';
import TopBarComponent from './top-bar';
import UploadedModelsList from './uploaded-models-list';
import BuiltModelsList from './built-models-list';
import DeployedModelsList from './deployed-models-list';
import EmptyListComponent from './empty-list';
import FeedbackComponent from '../feedback/feedback';
import { Model } from '../../reducers/interfaces';
interface Props {
installedAutoAnnotation: boolean;
installedTFSegmentation: boolean;
installedTFAnnotation: boolean;
modelsInitialized: boolean;
modelsFetching: boolean;
registeredUsers: any[];
models: Model[];
deployedModels: Model[];
getModels(): void;
deleteModel(id: number): void;
}
export default function ModelsPageComponent(props: Props): JSX.Element {
const {
installedAutoAnnotation,
installedTFSegmentation,
installedTFAnnotation,
modelsInitialized,
modelsFetching,
registeredUsers,
models,
deleteModel,
deployedModels,
} = props;
if (!modelsInitialized) {
@ -47,26 +35,15 @@ export default function ModelsPageComponent(props: Props): JSX.Element {
);
}
const uploadedModels = models.filter((model): boolean => model.id !== null);
const integratedModels = models.filter((model): boolean => model.id === null);
return (
<div className='cvat-models-page'>
<TopBarComponent installedAutoAnnotation={installedAutoAnnotation} />
{ !!integratedModels.length
&& <BuiltModelsList models={integratedModels} />}
{ !!uploadedModels.length && (
<UploadedModelsList
registeredUsers={registeredUsers}
models={uploadedModels}
deleteModel={deleteModel}
/>
<TopBarComponent />
{ deployedModels.length
? (
<DeployedModelsList models={deployedModels} />
) : (
<EmptyListComponent />
)}
{ installedAutoAnnotation
&& !uploadedModels.length
&& !installedTFAnnotation
&& !installedTFSegmentation
&& <EmptyListComponent />}
<FeedbackComponent />
</div>
);

@ -52,7 +52,7 @@
.cvat-models-list-item {
width: 100%;
height: 60px;
height: auto;
border: 1px solid $border-color-1;
border-radius: 3px;
margin-bottom: 15px;

@ -3,49 +3,15 @@
// SPDX-License-Identifier: MIT
import React from 'react';
import { RouteComponentProps } from 'react-router';
import { withRouter } from 'react-router-dom';
import { Row, Col } from 'antd/lib/grid';
import Button from 'antd/lib/button';
import Text from 'antd/lib/typography/Text';
type Props = {
installedAutoAnnotation: boolean;
} & RouteComponentProps;
function TopBarComponent(props: Props): JSX.Element {
const {
installedAutoAnnotation,
history,
} = props;
export default function TopBarComponent(): JSX.Element {
return (
<Row type='flex' justify='center' align='middle'>
<Col md={11} lg={9} xl={8} xxl={7}>
<Text className='cvat-title'>Models</Text>
</Col>
<Col
md={{ span: 11 }}
lg={{ span: 9 }}
xl={{ span: 8 }}
xxl={{ span: 7 }}
>
{ installedAutoAnnotation
&& (
<Button
size='large'
id='cvat-create-model-button'
type='primary'
onClick={
(): void => history.push('/models/create')
}
>
Create new model
</Button>
)}
</Col>
</Row>
);
}
export default withRouter(TopBarComponent);

@ -1,89 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Tag from 'antd/lib/tag';
import Select from 'antd/lib/select';
import Icon from 'antd/lib/icon';
import Menu from 'antd/lib/menu';
import Dropdown from 'antd/lib/dropdown';
import Text from 'antd/lib/typography/Text';
import moment from 'moment';
import { MenuIcon } from 'icons';
import { Model } from 'reducers/interfaces';
interface Props {
model: Model;
owner: any;
onDelete(): void;
}
export default function UploadedModelItem(props: Props): JSX.Element {
const {
model,
owner,
onDelete,
} = props;
return (
<Row className='cvat-models-list-item' type='flex'>
<Col span={4} xxl={3}>
<Tag color='purple'>OpenVINO</Tag>
</Col>
<Col span={5} xxl={7}>
<Text className='cvat-text-color'>
{model.name}
</Text>
</Col>
<Col span={3}>
<Text className='cvat-text-color'>
{owner ? owner.username : 'undefined'}
</Text>
</Col>
<Col span={4}>
<Text className='cvat-text-color'>
{moment(model.uploadDate).format('MMMM Do YYYY')}
</Text>
</Col>
<Col span={5}>
<Select
showSearch
placeholder='Supported labels'
style={{ width: '90%' }}
value='Supported labels'
>
{model.labels.map(
(label): JSX.Element => (
<Select.Option key={label}>
{label}
</Select.Option>
),
)}
</Select>
</Col>
<Col span={3} xxl={2}>
<Text className='cvat-text-color'>Actions</Text>
<Dropdown overlay={
(
<Menu className='cvat-task-item-menu'>
<Menu.Item
onClick={(): void => {
onDelete();
}}
key='delete'
>
Delete
</Menu.Item>
</Menu>
)
}
>
<Icon className='cvat-menu-icon' component={MenuIcon} />
</Dropdown>
</Col>
</Row>
);
}

@ -1,70 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Text from 'antd/lib/typography/Text';
import { Model } from 'reducers/interfaces';
import UploadedModelItem from './uploaded-model-item';
interface Props {
registeredUsers: any[];
models: Model[];
deleteModel(id: number): void;
}
export default function UploadedModelsListComponent(props: Props): JSX.Element {
const {
models,
registeredUsers,
deleteModel,
} = props;
const items = models.map((model): JSX.Element => {
const owner = registeredUsers.filter((user) => user.id === model.ownerID)[0];
return (
<UploadedModelItem
key={model.id as number}
owner={owner}
model={model}
onDelete={(): void => deleteModel(model.id as number)}
/>
);
});
return (
<>
<Row type='flex' justify='center' align='middle'>
<Col md={22} lg={18} xl={16} xxl={14}>
<Text className='cvat-text-color' strong>Uploaded by a user</Text>
</Col>
</Row>
<Row type='flex' justify='center' align='middle'>
<Col md={22} lg={18} xl={16} xxl={14} className='cvat-models-list'>
<Row type='flex' align='middle' style={{ padding: '10px' }}>
<Col span={4} xxl={3}>
<Text strong>Framework</Text>
</Col>
<Col span={5} xxl={7}>
<Text strong>Name</Text>
</Col>
<Col span={3}>
<Text strong>Owner</Text>
</Col>
<Col span={4}>
<Text strong>Uploaded</Text>
</Col>
<Col span={5}>
<Text strong>Labels</Text>
</Col>
<Col span={3} xxl={2} />
</Row>
{ items }
</Col>
</Row>
</>
);
}

@ -11,8 +11,8 @@ const GITTER_PUBLIC_URL = 'https://gitter.im/opencv-cvat/public';
const FORUM_URL = 'https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit';
const GITHUB_URL = 'https://github.com/opencv/cvat';
const GITHUB_IMAGE_URL = 'https://raw.githubusercontent.com/opencv/cvat/develop/cvat/apps/documentation/static/documentation/images/cvat.jpg';
const AUTO_ANNOTATION_GUIDE_URL = 'https://github.com/opencv/cvat/blob/develop/cvat/apps/auto_annotation/README.md';
const SHARE_MOUNT_GUIDE_URL = 'https://github.com/opencv/cvat/blob/master/cvat/apps/documentation/installation.md#share-path';
const NUCLIO_GUIDE = 'https://github.com/opencv/cvat/blob/develop/cvat/apps/documentation/installation.md#semi-automatic-and-automatic-annotation';
const CANVAS_BACKGROUND_COLORS = ['#ffffff', '#f1f1f1', '#e5e5e5', '#d8d8d8', '#CCCCCC', '#B3B3B3', '#999999'];
export default {
@ -25,7 +25,7 @@ export default {
FORUM_URL,
GITHUB_URL,
GITHUB_IMAGE_URL,
AUTO_ANNOTATION_GUIDE_URL,
SHARE_MOUNT_GUIDE_URL,
CANVAS_BACKGROUND_COLORS,
NUCLIO_GUIDE,
};

@ -28,9 +28,6 @@ interface StateToProps {
loadActivity: string | null;
dumpActivities: string[] | null;
exportActivities: string[] | null;
installedTFAnnotation: boolean;
installedTFSegmentation: boolean;
installedAutoAnnotation: boolean;
inferenceIsActive: boolean;
}
@ -53,13 +50,6 @@ function mapStateToProps(state: CombinedState, own: OwnProps): StateToProps {
formats: {
annotationFormats,
},
plugins: {
list: {
TF_ANNOTATION: installedTFAnnotation,
TF_SEGMENTATION: installedTFSegmentation,
AUTO_ANNOTATION: installedAutoAnnotation,
},
},
tasks: {
activities: {
dumps,
@ -70,9 +60,6 @@ function mapStateToProps(state: CombinedState, own: OwnProps): StateToProps {
} = state;
return {
installedTFAnnotation,
installedTFSegmentation,
installedAutoAnnotation,
dumpActivities: tid in dumps ? dumps[tid] : null,
exportActivities: tid in activeExports ? activeExports[tid] : null,
loadActivity: tid in loads ? loads[tid] : null,
@ -112,9 +99,6 @@ function ActionsMenuContainer(props: OwnProps & StateToProps & DispatchToProps):
dumpActivities,
exportActivities,
inferenceIsActive,
installedAutoAnnotation,
installedTFAnnotation,
installedTFSegmentation,
loadAnnotations,
dumpAnnotations,
@ -172,9 +156,6 @@ function ActionsMenuContainer(props: OwnProps & StateToProps & DispatchToProps):
dumpActivities={dumpActivities}
exportActivities={exportActivities}
inferenceIsActive={inferenceIsActive}
installedAutoAnnotation={installedAutoAnnotation}
installedTFAnnotation={installedTFAnnotation}
installedTFSegmentation={installedTFSegmentation}
onClickMenu={onClickMenu}
/>
);

@ -26,7 +26,6 @@ interface StateToProps {
loadActivity: string | null;
dumpActivities: string[] | null;
exportActivities: string[] | null;
installedReID: boolean;
}
interface DispatchToProps {
@ -56,9 +55,6 @@ function mapStateToProps(state: CombinedState): StateToProps {
exports: activeExports,
},
},
plugins: {
list,
},
} = state;
const taskID = jobInstance.task.id;
@ -71,7 +67,6 @@ function mapStateToProps(state: CombinedState): StateToProps {
? loads[taskID] || jobLoads[jobID] : null,
jobInstance,
annotationFormats,
installedReID: list.REID,
};
}
@ -109,7 +104,6 @@ function AnnotationMenuContainer(props: Props): JSX.Element {
loadActivity,
dumpActivities,
exportActivities,
installedReID,
} = props;
const onClickMenu = (params: ClickParam, file?: File): void => {
@ -155,7 +149,6 @@ function AnnotationMenuContainer(props: Props): JSX.Element {
loadActivity={loadActivity}
dumpActivities={dumpActivities}
exportActivities={exportActivities}
installedReID={installedReID}
onClickMenu={onClickMenu}
taskID={jobInstance.task.id}
/>

@ -1,43 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import { connect } from 'react-redux';
import CreateModelPageComponent from 'components/create-model-page/create-model-page';
import { createModelAsync } from 'actions/models-actions';
import {
ModelFiles,
CombinedState,
} from 'reducers/interfaces';
interface StateToProps {
isAdmin: boolean;
modelCreatingStatus: string;
}
interface DispatchToProps {
createModel(name: string, files: ModelFiles, global: boolean): void;
}
function mapStateToProps(state: CombinedState): StateToProps {
const { models } = state;
return {
isAdmin: state.auth.user.isAdmin,
modelCreatingStatus: models.creatingStatus,
};
}
function mapDispatchToProps(dispatch: any): DispatchToProps {
return {
createModel(name: string, files: ModelFiles, global: boolean): void {
dispatch(createModelAsync(name, files, global));
},
};
}
export default connect(
mapStateToProps,
mapDispatchToProps,
)(CreateModelPageComponent);

@ -15,9 +15,6 @@ const core = getCore();
interface StateToProps {
logoutFetching: boolean;
installedAnalytics: boolean;
installedAutoAnnotation: boolean;
installedTFSegmentation: boolean;
installedTFAnnotation: boolean;
username: string;
toolName: string;
serverHost: string;
@ -61,9 +58,6 @@ function mapStateToProps(state: CombinedState): StateToProps {
return {
logoutFetching,
installedAnalytics: list[SupportedPlugins.ANALYTICS],
installedAutoAnnotation: list[SupportedPlugins.AUTO_ANNOTATION],
installedTFSegmentation: list[SupportedPlugins.TF_SEGMENTATION],
installedTFAnnotation: list[SupportedPlugins.TF_ANNOTATION],
username,
toolName: server.name as string,
serverHost: core.config.backendAPI.slice(0, -7),

@ -31,10 +31,7 @@ interface DispatchToProps {
runInference(
taskInstance: any,
model: Model,
mapping: {
[index: string]: string;
},
cleanOut: boolean,
body: object,
): void;
getModels(): void;
closeDialog(): void;
@ -58,12 +55,9 @@ function mapDispatchToProps(dispatch: any): DispatchToProps {
runInference(
taskInstance: any,
model: Model,
mapping: {
[index: string]: string;
},
cleanOut: boolean,
body: object,
): void {
dispatch(startInferenceAsync(taskInstance, model, mapping, cleanOut));
dispatch(startInferenceAsync(taskInstance, model, body));
},
getModels(): void {
dispatch(getModelsAsync());

@ -2,7 +2,6 @@
//
// SPDX-License-Identifier: MIT
import React from 'react';
import { connect } from 'react-redux';
import ModelsPageComponent from 'components/models-page/models-page';
@ -10,38 +9,25 @@ import {
Model,
CombinedState,
} from 'reducers/interfaces';
import {
getModelsAsync,
deleteModelAsync,
} from 'actions/models-actions';
import { getModelsAsync } from 'actions/models-actions';
interface StateToProps {
installedAutoAnnotation: boolean;
installedTFAnnotation: boolean;
installedTFSegmentation: boolean;
modelsInitialized: boolean;
modelsFetching: boolean;
models: Model[];
registeredUsers: any[];
deployedModels: Model[];
}
interface DispatchToProps {
getModels(): void;
deleteModel(id: number): void;
}
function mapStateToProps(state: CombinedState): StateToProps {
const { list } = state.plugins;
const { models } = state;
return {
installedAutoAnnotation: list.AUTO_ANNOTATION,
installedTFAnnotation: list.TF_ANNOTATION,
installedTFSegmentation: list.TF_SEGMENTATION,
modelsInitialized: models.initialized,
modelsFetching: models.fetching,
models: models.models,
registeredUsers: state.users.users,
deployedModels: models.models,
};
}
@ -50,29 +36,10 @@ function mapDispatchToProps(dispatch: any): DispatchToProps {
getModels(): void {
dispatch(getModelsAsync());
},
deleteModel(id: number): void {
dispatch(deleteModelAsync(id));
},
};
}
function ModelsPageContainer(props: DispatchToProps & StateToProps): JSX.Element | null {
const {
installedAutoAnnotation,
installedTFSegmentation,
installedTFAnnotation,
} = props;
const render = installedAutoAnnotation
|| installedTFAnnotation
|| installedTFSegmentation;
return (
render ? <ModelsPageComponent {...props} /> : null
);
}
export default connect(
mapStateToProps,
mapDispatchToProps,
)(ModelsPageContainer);
)(ModelsPageComponent);

@ -48,9 +48,6 @@ interface StateToProps {
formatsFetching: boolean;
userAgreementsInitialized: boolean;
userAgreementsFetching: boolean;
installedAutoAnnotation: boolean;
installedTFSegmentation: boolean;
installedTFAnnotation: boolean;
notifications: NotificationsState;
user: any;
keyMap: Record<string, ExtendedKeyMapOptions>;
@ -66,7 +63,7 @@ interface DispatchToProps {
resetMessages: () => void;
switchShortcutsDialog: () => void;
loadUserAgreements: () => void;
switchSettingsDialog: (show: boolean) => void;
switchSettingsDialog: () => void;
}
function mapStateToProps(state: CombinedState): StateToProps {
@ -91,9 +88,6 @@ function mapStateToProps(state: CombinedState): StateToProps {
formatsFetching: formats.fetching,
userAgreementsInitialized: userAgreements.initialized,
userAgreementsFetching: userAgreements.fetching,
installedAutoAnnotation: plugins.list.AUTO_ANNOTATION,
installedTFSegmentation: plugins.list.TF_SEGMENTATION,
installedTFAnnotation: plugins.list.TF_ANNOTATION,
notifications: state.notifications,
user: auth.user,
keyMap: shortcuts.keyMap,

@ -71,12 +71,8 @@ export interface FormatsState {
// eslint-disable-next-line import/prefer-default-export
export enum SupportedPlugins {
GIT_INTEGRATION = 'GIT_INTEGRATION',
AUTO_ANNOTATION = 'AUTO_ANNOTATION',
TF_ANNOTATION = 'TF_ANNOTATION',
TF_SEGMENTATION = 'TF_SEGMENTATION',
DEXTR_SEGMENTATION = 'DEXTR_SEGMENTATION',
ANALYTICS = 'ANALYTICS',
REID = 'REID',
}
export interface PluginsState {
@ -133,13 +129,12 @@ export interface ShareState {
}
export interface Model {
id: number | null; // null for preinstalled models
ownerID: number | null; // null for preinstalled models
id: string;
name: string;
primary: boolean;
uploadDate: string;
updateDate: string;
labels: string[];
framework: string;
description: string;
type: string;
}
export enum RQStatus {
@@ -150,17 +145,11 @@ export enum RQStatus {
failed = 'failed',
}
export enum ModelType {
OPENVINO = 'openvino',
RCNN = 'rcnn',
MASK_RCNN = 'mask_rcnn',
}
export interface ActiveInference {
status: RQStatus;
progress: number;
error: string;
modelType: ModelType;
id: string;
}
export interface ModelsState {
@@ -175,14 +164,6 @@ export interface ModelsState {
activeRunTask: any;
}
export interface ModelFiles {
[key: string]: string | File;
xml: string | File;
bin: string | File;
py: string | File;
json: string | File;
}
export interface ErrorState {
message: string;
reason: string;

@@ -44,39 +44,6 @@ export default function (
fetching: false,
};
}
case ModelsActionTypes.DELETE_MODEL_SUCCESS: {
return {
...state,
models: state.models.filter(
(model): boolean => model.id !== action.payload.id,
),
};
}
case ModelsActionTypes.CREATE_MODEL: {
return {
...state,
creatingStatus: '',
};
}
case ModelsActionTypes.CREATE_MODEL_STATUS_UPDATED: {
return {
...state,
creatingStatus: action.payload.status,
};
}
case ModelsActionTypes.CREATE_MODEL_FAILED: {
return {
...state,
creatingStatus: '',
};
}
case ModelsActionTypes.CREATE_MODEL_SUCCESS: {
return {
...state,
initialized: false,
creatingStatus: 'CREATED',
};
}
case ModelsActionTypes.SHOW_RUN_MODEL_DIALOG: {
return {
...state,

@@ -361,21 +361,6 @@ export default function (state = defaultState, action: AnyAction): Notifications
},
};
}
case ModelsActionTypes.DELETE_MODEL_FAILED: {
return {
...state,
errors: {
...state.errors,
models: {
...state.errors.models,
deleting: {
message: 'Could not delete the model',
reason: action.payload.error.toString(),
},
},
},
};
}
case ModelsActionTypes.GET_INFERENCE_STATUS_SUCCESS: {
if (action.payload.activeInference.status === 'finished') {
const { taskID } = action.payload;
@@ -420,7 +405,7 @@ export default function (state = defaultState, action: AnyAction): Notifications
models: {
...state.errors.models,
inferenceStatusFetching: {
message: 'Could not fetch inference status for the '
message: 'Fetching inference status for the '
+ `<a href="/tasks/${taskID}" target="_blank">task ${taskID}</a>`,
reason: action.payload.error.toString(),
},

@@ -5,21 +5,15 @@
import { PluginsActionTypes, PluginActions } from 'actions/plugins-actions';
import { registerGitPlugin } from 'utils/git-utils';
import { registerDEXTRPlugin } from 'utils/dextr-utils';
import {
PluginsState,
} from './interfaces';
import { PluginsState } from './interfaces';
const defaultState: PluginsState = {
fetching: false,
initialized: false,
list: {
GIT_INTEGRATION: false,
AUTO_ANNOTATION: false,
TF_ANNOTATION: false,
TF_SEGMENTATION: false,
DEXTR_SEGMENTATION: false,
ANALYTICS: false,
REID: false,
},
};

@@ -4,11 +4,10 @@
import getCore from 'cvat-core-wrapper';
import { Canvas } from 'cvat-canvas-wrapper';
import { ShapeType, RQStatus, CombinedState } from 'reducers/interfaces';
import { ShapeType, CombinedState } from 'reducers/interfaces';
import { getCVATStore } from 'cvat-store';
const core = getCore();
const baseURL = core.config.backendAPI.slice(0, -7);
interface DEXTRPlugin {
name: string;
@@ -71,79 +70,33 @@ antModal.append(antModalContent);
antModalWrap.append(antModal);
antModalRoot.append(antModalMask, antModalWrap);
function serverRequest(
plugin: DEXTRPlugin,
jid: number,
async function serverRequest(
taskInstance: any,
frame: number,
points: number[],
): Promise<number[]> {
return new Promise((resolve, reject) => {
const reducer = (acc: Point[], _: number, index: number, array: number[]): Point[] => {
const reducer = (acc: number[][],
_: number, index: number,
array: number[]): number[][] => {
if (!(index % 2)) { // 0, 2, 4
acc.push({
x: array[index],
y: array[index + 1],
});
acc.push([
array[index],
array[index + 1],
]);
}
return acc;
};
const reducedPoints = points.reduce(reducer, []);
core.server.request(
`${baseURL}/dextr/create/${jid}`, {
method: 'POST',
data: JSON.stringify({
const models = await core.lambda.list();
const model = models.filter((func: any): boolean => func.id === 'openvino.dextr')[0];
const result = await core.lambda.call(taskInstance, model, {
task: taskInstance,
frame,
points: reducedPoints,
}),
headers: {
'Content-Type': 'application/json',
},
},
).then(() => {
const timeoutCallback = (): void => {
core.server.request(
`${baseURL}/dextr/check/${jid}`, {
method: 'GET',
},
).then((response: any) => {
const { status } = response;
if (status === RQStatus.finished) {
resolve(response.result.split(/\s|,/).map((coord: string) => +coord));
} else if (status === RQStatus.failed) {
reject(new Error(response.stderr));
} else if (status === RQStatus.unknown) {
reject(new Error('Unknown DEXTR status has been received'));
} else {
if (status === RQStatus.queued) {
antModalButton.disabled = false;
}
if (!plugin.data.canceled) {
setTimeout(timeoutCallback, 1000);
} else {
core.server.request(
`${baseURL}/dextr/cancel/${jid}`, {
method: 'GET',
},
).then(() => {
resolve(points);
}).catch((error: Error) => {
reject(error);
});
}
}
}).catch((error: Error) => {
reject(error);
});
};
setTimeout(timeoutCallback, 1000);
}).catch((error: Error) => {
reject(error);
});
});
return result.flat();
}
async function enter(this: any, self: DEXTRPlugin, objects: any[]): Promise<void> {
@@ -159,8 +112,7 @@ async function enter(this: any, self: DEXTRPlugin, objects: any[]): Promise<void
for (let i = 0; i < objects.length; i++) {
if (objects[i].points.length >= 8) {
promises[i] = serverRequest(
self,
this.id,
this.task,
objects[i].frame,
objects[i].points,
);

@@ -26,24 +26,13 @@ class PluginChecker {
case SupportedPlugins.GIT_INTEGRATION: {
return isReachable(`${serverHost}/git/repository/meta/get`, 'OPTIONS');
}
case SupportedPlugins.AUTO_ANNOTATION: {
return isReachable(`${serverHost}/auto_annotation/meta/get`, 'OPTIONS');
}
case SupportedPlugins.TF_ANNOTATION: {
return isReachable(`${serverHost}/tensorflow/annotation/meta/get`, 'OPTIONS');
}
case SupportedPlugins.TF_SEGMENTATION: {
return isReachable(`${serverHost}/tensorflow/segmentation/meta/get`, 'OPTIONS');
}
case SupportedPlugins.DEXTR_SEGMENTATION: {
return isReachable(`${serverHost}/dextr/enabled`, 'GET');
const list = await core.lambda.list();
return list.map((func: any): boolean => func.id).includes('openvino.dextr');
}
case SupportedPlugins.ANALYTICS: {
return isReachable(`${serverHost}/analytics/app/kibana`, 'GET');
}
case SupportedPlugins.REID: {
return isReachable(`${serverHost}/reid/enabled`, 'GET');
}
default:
return false;
}

@@ -1,96 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
import getCore from 'cvat-core-wrapper';
import { ShapeType, RQStatus } from 'reducers/interfaces';
const core = getCore();
const baseURL = core.config.backendAPI.slice(0, -7);
type Params = {
threshold: number;
distance: number;
onUpdatePercentage(percentage: number): void;
jobID: number;
annotations: any;
};
export function run(params: Params): Promise<void> {
return new Promise((resolve, reject) => {
const {
threshold,
distance,
onUpdatePercentage,
jobID,
annotations,
} = params;
const { shapes, ...rest } = annotations;
const boxes = shapes.filter((shape: any): boolean => shape.type === ShapeType.RECTANGLE);
const others = shapes.filter((shape: any): boolean => shape.type !== ShapeType.RECTANGLE);
core.server.request(
`${baseURL}/reid/start/job/${params.jobID}`, {
method: 'POST',
data: JSON.stringify({
threshold,
maxDistance: distance,
boxes,
}),
headers: {
'Content-Type': 'application/json',
},
},
).then(() => {
const timeoutCallback = (): void => {
core.server.request(
`${baseURL}/reid/check/${jobID}`, {
method: 'GET',
},
).then((response: any) => {
const { status } = response;
if (status === RQStatus.finished) {
if (!response.result) {
// cancelled
resolve(annotations);
}
const result = JSON.parse(response.result);
const collection = rest;
Array.prototype.push.apply(collection.tracks, result);
collection.shapes = others;
resolve(collection);
} else if (status === RQStatus.started) {
const { progress } = response;
if (typeof (progress) === 'number') {
onUpdatePercentage(+progress.toFixed(2));
}
setTimeout(timeoutCallback, 1000);
} else if (status === RQStatus.failed) {
reject(new Error(response.stderr));
} else if (status === RQStatus.unknown) {
reject(new Error('Unknown REID status has been received'));
} else {
setTimeout(timeoutCallback, 1000);
}
}).catch((error: Error) => {
reject(error);
});
};
setTimeout(timeoutCallback, 1000);
}).catch((error: Error) => {
reject(error);
});
});
}
export function cancel(jobID: number): void {
core.server.request(
`${baseURL}/reid/cancel/${jobID}`, {
method: 'GET',
},
);
}

@@ -1,372 +0,0 @@
## Auto annotation
- [Description](#description)
- [Installation](#installation)
- [Usage](#usage)
- [Testing script](#testing)
- [Examples](#examples)
- [Person-vehicle-bike-detection-crossroad-0078](#person-vehicle-bike-detection-crossroad-0078-openvino-toolkit)
- [Landmarks-regression-retail-0009](#landmarks-regression-retail-0009-openvino-toolkit)
- [Semantic Segmentation](#semantic-segmentation)
- [Available interpretation scripts](#available-interpretation-scripts)
### Description
The application is enabled automatically if the
[OpenVINO&trade; component](../../../components/openvino)
is installed. It allows you to use custom models for auto annotation. Only models in
the OpenVINO&trade; toolkit format are supported. If you would like to annotate a
task with a custom model, please convert it to the intermediate representation
(IR) format via the model optimizer tool. See the [OpenVINO documentation](https://software.intel.com/en-us/articles/OpenVINO-InferEngine) for details.
### Installation
See the installation instructions for [the OpenVINO component](../../../components/openvino)
### Usage
To annotate a task with a custom model, you need to prepare 4 files:
1. __Model config__ (*.xml) - a text file with network configuration.
1. __Model weights__ (*.bin) - a binary file with trained weights.
1. __Label map__ (*.json) - a simple JSON file with a `label_map` dictionary-like
object that maps label numbers (string keys) to label names.
Example:
```json
{
  "label_map": {
    "0": "background",
    "1": "aeroplane",
    "2": "bicycle",
    "3": "bird",
    "4": "boat",
    "5": "bottle",
    "6": "bus",
    "7": "car",
    "8": "cat",
    "9": "chair",
    "10": "cow",
    "11": "diningtable",
    "12": "dog",
    "13": "horse",
    "14": "motorbike",
    "15": "person",
    "16": "pottedplant",
    "17": "sheep",
    "18": "sofa",
    "19": "train",
    "20": "tvmonitor"
  }
}
```
1. __Interpretation script__ (*.py) - a file used to convert the network's output layer
to a predefined structure which can be processed by CVAT. This code runs
inside a restricted Python environment, but it's possible to use some
builtin functions like __str, int, float, max, min, range__.
Two variables are also available in the scope:
- __detections__ - a list of dictionaries with detections for each frame:
* __frame_id__ - frame number
* __frame_height__ - frame height
* __frame_width__ - frame width
* __detections__ - output np.ndarray (See [ExecutableNetwork.infer](https://software.intel.com/en-us/articles/OpenVINO-InferEngine#inpage-nav-11-6-3) for details).
- __results__ - an instance of a Python class that accumulates the converted results.
The following methods should be used to add shapes:
```python
# xtl, ytl, xbr, ybr - expected values are float or int
# label - expected value is int
# frame_number - expected value is int
# attributes - dictionary of attribute_name: attribute_value pairs, for example {"confidence": "0.83"}
add_box(self, xtl, ytl, xbr, ybr, label, frame_number, attributes=None)
# points - list of (x, y) pairs of float or int, for example [(57.3, 100), (67, 102.7)]
# label - expected value is int
# frame_number - expected value is int
# attributes - dictionary of attribute_name: attribute_value pairs, for example {"confidence": "0.83"}
add_points(self, points, label, frame_number, attributes=None)
add_polygon(self, points, label, frame_number, attributes=None)
add_polyline(self, points, label, frame_number, attributes=None)
```
### Testing script
CVAT comes prepackaged with a small command-line helper script for developing interpretation scripts.
It includes a simple user interface which allows you to feed in images and see the results in
windows provided by OpenCV.
See the script and the documentation in the
[auto_annotation directory](https://github.com/opencv/cvat/tree/develop/utils/auto_annotation).
When using the Auto Annotation runner, it is often helpful to drop into a REPL prompt to interact with the variables
directly. You can do this using the `interact` method from the `code` module.
```python
# Import the interact method from the `code` module
from code import interact

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]
    # Unsure what other data members are in `frame_results`? Use the `interact` method!
    interact(local=locals())
```
```bash
$ python cvat/utils/auto_annotation/run_models.py --py /path/to/myfile.py --json /path/to/mapping.json --xml /path/to/inference.xml --bin /path/to/inference.bin
Python 3.6.6 (default, Sep 26 2018, 15:10:10)
[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.10.44.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> dir()
['__builtins__', 'frame_results', 'detections', 'frame_number', 'frame_height', 'interact', 'results', 'frame_width']
>>> type(frame_results)
<class 'dict'>
>>> frame_results.keys()
dict_keys(['frame_id', 'frame_height', 'frame_width', 'detections'])
```
When using the `interact` method, make sure you are running with the _testing script_, and ensure that you _remove it_
before submitting to the server! If you don't remove it, the code runners on the server will hang during execution,
and you'll have to restart the server to fix them.
Another useful development technique is visualizing the results using OpenCV. This is discussed further in the
[Semantic Segmentation](#semantic-segmentation) section.
### Examples
#### [Person-vehicle-bike-detection-crossroad-0078](https://github.com/opencv/open_model_zoo/blob/2018/intel_models/person-vehicle-bike-detection-crossroad-0078/description/person-vehicle-bike-detection-crossroad-0078.md) (OpenVINO toolkit)
__Links__
- [person-vehicle-bike-detection-crossroad-0078.xml](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.xml)
- [person-vehicle-bike-detection-crossroad-0078.bin](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.bin)
__Task labels__: person vehicle non-vehicle
__label_map.json__:
```json
{
  "label_map": {
    "1": "person",
    "2": "vehicle",
    "3": "non-vehicle"
  }
}
```
__Interpretation script for SSD based networks__:
```python
def clip(value):
    return max(min(1.0, value), 0.0)

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]

    for i in range(frame_results["detections"].shape[2]):
        confidence = frame_results["detections"][0, 0, i, 2]
        if confidence < 0.5:
            continue

        results.add_box(
            xtl=clip(frame_results["detections"][0, 0, i, 3]) * frame_width,
            ytl=clip(frame_results["detections"][0, 0, i, 4]) * frame_height,
            xbr=clip(frame_results["detections"][0, 0, i, 5]) * frame_width,
            ybr=clip(frame_results["detections"][0, 0, i, 6]) * frame_height,
            label=int(frame_results["detections"][0, 0, i, 1]),
            frame_number=frame_number,
            attributes={
                "confidence": "{:.2f}".format(confidence),
            },
        )
```
#### [Landmarks-regression-retail-0009](https://github.com/opencv/open_model_zoo/blob/2018/intel_models/landmarks-regression-retail-0009/description/landmarks-regression-retail-0009.md) (OpenVINO toolkit)
__Links__
- [landmarks-regression-retail-0009.xml](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/landmarks-regression-retail-0009/FP32/landmarks-regression-retail-0009.xml)
- [landmarks-regression-retail-0009.bin](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/landmarks-regression-retail-0009/FP32/landmarks-regression-retail-0009.bin)
__Task labels__: left_eye right_eye tip_of_nose left_lip_corner right_lip_corner
__label_map.json__:
```json
{
  "label_map": {
    "0": "left_eye",
    "1": "right_eye",
    "2": "tip_of_nose",
    "3": "left_lip_corner",
    "4": "right_lip_corner"
  }
}
```
__Interpretation script__:
```python
def clip(value):
    return max(min(1.0, value), 0.0)

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]

    for i in range(0, frame_results["detections"].shape[1], 2):
        x = frame_results["detections"][0, i, 0, 0]
        y = frame_results["detections"][0, i + 1, 0, 0]

        results.add_points(
            points=[(clip(x) * frame_width, clip(y) * frame_height)],
            label=i // 2,  # see the label map and model output specification
            frame_number=frame_number,
        )
```
#### Semantic Segmentation
__Links__
- [mask_rcnn_resnet50_atrous_coco][1] (OpenVINO toolkit)
- [CVAT Implementation][2]
__label_map.json__:
```json
{
  "label_map": {
    "1": "person",
    "2": "bicycle",
    "3": "car"
  }
}
```
Note that the above labels are not all the labels in the model! See [here](https://github.com/opencv/cvat/blob/develop/utils/open_model_zoo/mask_rcnn_inception_resnet_v2_atrous_coco/mapping.json).
**Interpretation script for a semantic segmentation network**:
```python
import numpy as np
import cv2
from skimage.measure import approximate_polygon, find_contours

for frame_results in detections:
    frame_height = frame_results['frame_height']
    frame_width = frame_results['frame_width']
    frame_number = frame_results['frame_id']
    detection = frame_results['detections']
    # The keys for the two members below will vary based on the model
    masks = frame_results['masks']
    boxes = frame_results['reshape_do_2d']

    for box_index, box in enumerate(boxes):
        # Again, these indexes are specific to this model
        class_label = int(box[1])
        box_class_probability = box[2]

        if box_class_probability > 0.2:
            xmin = box[3] * frame_width
            ymin = box[4] * frame_height
            xmax = box[5] * frame_width
            ymax = box[6] * frame_height
            box_width = int(xmax - xmin)
            box_height = int(ymax - ymin)
            # Use the box index and class label index to find the appropriate mask;
            # note that we need to convert the class label to a zero-indexed value by subtracting 1
            class_mask = masks[box_index][class_label - 1]
            # The class mask is a 33 x 33 matrix; resize it to the bounding box
            resized_mask = cv2.resize(class_mask, dsize=(box_width, box_height), interpolation=cv2.INTER_CUBIC)
            # Each pixel is a probability; select every pixel above the probability threshold, 0.5,
            # using the boolean `>` operator
            boolean_mask = (resized_mask > 0.5)
            # Convert the boolean values to uint8
            uint8_mask = boolean_mask.astype(np.uint8) * 255
            # Change the x and y coordinates into integers
            xmin = int(round(xmin))
            ymin = int(round(ymin))
            xmax = xmin + box_width
            ymax = ymin + box_height
            # Create an empty blank frame, so that we can get the mask polygon in frame coordinates
            mask_frame = np.zeros((frame_height, frame_width), dtype=np.uint8)
            # Put the uint8 mask on the mask frame using the integer coordinates
            # (numpy indexes rows first, i.e. [y, x])
            mask_frame[ymin:ymax, xmin:xmax] = uint8_mask
            mask_probability_threshold = 0.5
            # Find the contours
            contours = find_contours(mask_frame, mask_probability_threshold)
            # Every bounding box should only have a single contour
            contour = contours[0]
            contour = np.flip(contour, axis=1)
            # Reduce the precision of the polygon
            polygon_mask = approximate_polygon(contour, tolerance=2.5)
            polygon_mask = polygon_mask.tolist()
            results.add_polygon(polygon_mask, class_label, frame_number)
```
Note that it is sometimes hard to see or understand what is happening in a script.
Visualizing intermediate results with OpenCV can help.
```python
import cv2
import numpy as np

for frame_results in detections:
    frame_height = frame_results['frame_height']
    frame_width = frame_results['frame_width']
    detection = frame_results['detections']
    masks = frame_results['masks']
    boxes = frame_results['reshape_do_2d']

    for box_index, box in enumerate(boxes):
        class_label = int(box[1])
        box_class_probability = box[2]

        if box_class_probability > 0.2:
            xmin = box[3] * frame_width
            ymin = box[4] * frame_height
            xmax = box[5] * frame_width
            ymax = box[6] * frame_height
            box_width = int(xmax - xmin)
            box_height = int(ymax - ymin)
            class_mask = masks[box_index][class_label - 1]
            # Visualize the class mask!
            cv2.imshow('class mask', class_mask)
            # Wait until the user presses a key
            cv2.waitKey(0)
            # Resize the mask to the bounding box before thresholding
            resized_mask = cv2.resize(class_mask, dsize=(box_width, box_height), interpolation=cv2.INTER_CUBIC)
            boolean_mask = (resized_mask > 0.5)
            uint8_mask = boolean_mask.astype(np.uint8) * 255
            # Visualize the class mask after it has been resized!
            cv2.imshow('class mask', uint8_mask)
            cv2.waitKey(0)
```
Note that you should _only_ use the above commands while running the [Auto Annotation Model Runner][3].
Running them on the server will likely require a server restart to fix.
The methods `cv2.destroyAllWindows()` or `cv2.destroyWindow('your-name-here')` might be required depending on your
implementation.
### Available interpretation scripts
CVAT comes prepackaged with several out-of-the-box interpretation scripts.
See them in the [open model zoo directory](https://github.com/opencv/cvat/tree/develop/utils/open_model_zoo).
[1]: https://github.com/opencv/open_model_zoo/blob/master/models/public/mask_rcnn_resnet50_atrous_coco/model.yml
[2]: https://github.com/opencv/cvat/tree/develop/utils/open_model_zoo/mask_rcnn_inception_resnet_v2_atrous_coco
[3]: https://github.com/opencv/cvat/tree/develop/utils/auto_annotation

@@ -1,6 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
default_app_config = 'cvat.apps.auto_annotation.apps.AutoAnnotationConfig'

@@ -1,15 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.contrib import admin
from .models import AnnotationModel
@admin.register(AnnotationModel)
class AnnotationModelAdmin(admin.ModelAdmin):
list_display = ('name', 'owner', 'created_date', 'updated_date',
'shared', 'primary', 'framework')
def has_add_permission(self, request):
return False

@@ -1,15 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.apps import AppConfig
class AutoAnnotationConfig(AppConfig):
name = "cvat.apps.auto_annotation"
def ready(self):
from .permissions import setup_permissions
setup_permissions()

@@ -1,22 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import cv2
import numpy as np
class ImageLoader():
def __init__(self, frame_provider):
self._frame_provider = frame_provider
def __iter__(self):
for frame, _ in self._frame_provider.get_frames(self._frame_provider.Quality.ORIGINAL):
yield self._load_image(frame)
def __len__(self):
return len(self._frame_provider)
@staticmethod
def _load_image(image):
return cv2.imdecode(np.fromstring(image.read(), np.uint8), cv2.IMREAD_COLOR)

@@ -1,163 +0,0 @@
import itertools
from .model_loader import ModelLoader
from cvat.apps.engine.utils import import_modules, execute_python_code
def _process_detections(detections, path_to_conv_script, restricted=True):
results = Results()
local_vars = {
"detections": detections,
"results": results,
}
source_code = open(path_to_conv_script).read()
if restricted:
global_vars = {
"__builtins__": {
"str": str,
"int": int,
"float": float,
"max": max,
"min": min,
"range": range,
},
}
else:
global_vars = globals()
imports = import_modules(source_code)
global_vars.update(imports)
execute_python_code(source_code, global_vars, local_vars)
return results
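The restricted mode above boils down to calling `exec` with a whitelisted `__builtins__` dictionary, so a user script can only reach the named builtins. A minimal, self-contained sketch of the pattern (the script text and variable values here are made up for illustration, not taken from CVAT):

```python
# Minimal sketch of restricted user-script execution.
user_script = """
for d in detections:
    results.append(int(d * 10))
"""

# Only these builtins are reachable from the user script.
restricted_builtins = {"str": str, "int": int, "float": float, "max": max, "min": min, "range": range}
global_vars = {"__builtins__": restricted_builtins}
local_vars = {"detections": [0.1, 0.25], "results": []}

exec(user_script, global_vars, local_vars)
print(local_vars["results"])  # [1, 2]

# Anything outside the whitelist fails with NameError:
try:
    exec("open('/etc/passwd')", global_vars, {})
except NameError as err:
    print("blocked:", err)
```

Note this is a sketch of the mechanism only; real sandboxing needs more than a builtins whitelist (e.g. attribute-based escapes are not covered here).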
def _process_attributes(shape_attributes, label_attr_spec):
attributes = []
for attr_text, attr_value in shape_attributes.items():
if attr_text in label_attr_spec:
attributes.append({
"spec_id": label_attr_spec[attr_text],
"value": attr_value,
})
return attributes
class Results():
def __init__(self):
self._results = {
"shapes": [],
"tracks": []
}
# https://stackoverflow.com/a/50928627/2701402
def add_box(self, xtl: float, ytl: float, xbr: float, ybr: float, label: int, frame_number: int, attributes: dict=None):
"""
xtl - x coordinate, top left
ytl - y coordinate, top left
xbr - x coordinate, bottom right
ybr - y coordinate, bottom right
"""
self.get_shapes().append({
"label": label,
"frame": frame_number,
"points": [xtl, ytl, xbr, ybr],
"type": "rectangle",
"attributes": attributes or {},
})
def add_points(self, points: list, label: int, frame_number: int, attributes: dict=None):
points = self._create_polyshape(points, label, frame_number, attributes)
points["type"] = "points"
self.get_shapes().append(points)
def add_polygon(self, points: list, label: int, frame_number: int, attributes: dict=None):
polygon = self._create_polyshape(points, label, frame_number, attributes)
polygon["type"] = "polygon"
self.get_shapes().append(polygon)
def add_polyline(self, points: list, label: int, frame_number: int, attributes: dict=None):
polyline = self._create_polyshape(points, label, frame_number, attributes)
polyline["type"] = "polyline"
self.get_shapes().append(polyline)
def get_shapes(self):
return self._results["shapes"]
def get_tracks(self):
return self._results["tracks"]
@staticmethod
def _create_polyshape(points: list, label: int, frame_number: int, attributes: dict=None):
return {
"label": label,
"frame": frame_number,
"points": list(itertools.chain.from_iterable(points)),
"attributes": attributes or {},
}
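Note how `_create_polyshape` stores points flattened via `itertools.chain.from_iterable`: a list of `(x, y)` pairs becomes a single flat coordinate list. A quick sketch of that convention:

```python
import itertools

# Polyshape points are stored flattened:
# [(x1, y1), (x2, y2)] -> [x1, y1, x2, y2]
points = [(57.3, 100), (67, 102.7)]
flat = list(itertools.chain.from_iterable(points))
print(flat)  # [57.3, 100, 67, 102.7]
```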
class InferenceAnnotationRunner:
def __init__(self, data, model_file, weights_file, labels_mapping,
attribute_spec, convertation_file):
self.data = iter(data)
self.data_len = len(data)
self.model = ModelLoader(model=model_file, weights=weights_file)
self.frame_counter = 0
self.attribute_spec = attribute_spec
self.convertation_file = convertation_file
self.iteration_size = 128
self.labels_mapping = labels_mapping
def run(self, job=None, update_progress=None, restricted=True):
result = {
"shapes": [],
"tracks": [],
"tags": [],
"version": 0
}
detections = []
for _ in range(self.iteration_size):
try:
frame = next(self.data)
except StopIteration:
break
orig_rows, orig_cols = frame.shape[:2]
detections.append({
"frame_id": self.frame_counter,
"frame_height": orig_rows,
"frame_width": orig_cols,
"detections": self.model.infer(frame),
})
self.frame_counter += 1
if job and update_progress and not update_progress(job, self.frame_counter * 100 / self.data_len):
return None, False
processed_detections = _process_detections(detections, self.convertation_file, restricted=restricted)
self._add_shapes(processed_detections.get_shapes(), result["shapes"])
more_items = self.frame_counter != self.data_len
return result, more_items
def _add_shapes(self, shapes, target_container):
for shape in shapes:
if shape["label"] not in self.labels_mapping:
continue
db_label = self.labels_mapping[shape["label"]]
label_attr_spec = self.attribute_spec.get(db_label)
target_container.append({
"label_id": db_label,
"frame": shape["frame"],
"points": shape["points"],
"type": shape["type"],
"z_order": 0,
"group": None,
"occluded": False,
"attributes": _process_attributes(shape["attributes"], label_attr_spec),
})
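Since `run()` above processes at most `iteration_size` frames per call and returns a `more_items` flag, a caller is expected to drive it in a loop until the flag clears. A simplified sketch with a stub in place of `InferenceAnnotationRunner` (class name, batch size, and frame count are illustrative, not the actual CVAT driver):

```python
# Stub standing in for the batched runner above.
class StubRunner:
    def __init__(self, total_frames, batch_size=128):
        self.total = total_frames
        self.batch = batch_size
        self.done = 0

    def run(self):
        # Process up to one batch and report whether frames remain.
        self.done = min(self.done + self.batch, self.total)
        return {"processed": self.done}, self.done != self.total

runner = StubRunner(total_frames=300)
batches = 0
while True:
    result, more_items = runner.run()
    batches += 1
    if not more_items:
        break
print(batches, result["processed"])  # 3 300
```

Batching this way keeps each RQ job invocation short and lets progress be reported between batches.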

@@ -1,52 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from openvino.inference_engine import IENetwork, IEPlugin, IECore, get_version
import subprocess
import os
import platform
_IE_PLUGINS_PATH = os.getenv("IE_PLUGINS_PATH", None)
def _check_instruction(instruction):
return instruction == str.strip(
subprocess.check_output(
'lscpu | grep -o "{}" | head -1'.format(instruction), shell=True
).decode('utf-8')
)
def make_plugin_or_core():
version = get_version()
use_core_openvino = False
try:
major, minor, reference = [int(x) for x in version.split('.')]
if major >= 2 and minor >= 1:
use_core_openvino = True
except Exception:
pass
if use_core_openvino:
ie = IECore()
return ie
if _IE_PLUGINS_PATH is None:
raise OSError('Inference engine plugin path env not found in the system.')
plugin = IEPlugin(device='CPU', plugin_dirs=[_IE_PLUGINS_PATH])
if (_check_instruction('avx2')):
plugin.add_cpu_extension(os.path.join(_IE_PLUGINS_PATH, 'libcpu_extension_avx2.so'))
elif (_check_instruction('sse4')):
plugin.add_cpu_extension(os.path.join(_IE_PLUGINS_PATH, 'libcpu_extension_sse4.so'))
elif platform.system() == 'Darwin':
plugin.add_cpu_extension(os.path.join(_IE_PLUGINS_PATH, 'libcpu_extension.dylib'))
else:
raise Exception('Inference engine requires a support of avx2 or sse4.')
return plugin
def make_network(model, weights):
return IENetwork(model=model, weights=weights)

@@ -1,39 +0,0 @@
# Generated by Django 2.1.3 on 2019-01-24 14:05
import cvat.apps.auto_annotation.models
from django.conf import settings
import django.core.files.storage
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='AnnotationModel',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', cvat.apps.auto_annotation.models.SafeCharField(max_length=256)),
('created_date', models.DateTimeField(auto_now_add=True)),
('updated_date', models.DateTimeField(auto_now_add=True)),
('model_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
('weights_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
('labelmap_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
('interpretation_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
('shared', models.BooleanField(default=False)),
('primary', models.BooleanField(default=False)),
('framework', models.CharField(default=cvat.apps.auto_annotation.models.FrameworkChoice('openvino'), max_length=32)),
('owner', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
],
options={
'default_permissions': (),
},
),
]

@@ -1,5 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@@ -1,76 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
import json
import cv2
import os
import numpy as np
from cvat.apps.auto_annotation.inference_engine import make_plugin_or_core, make_network
class ModelLoader():
def __init__(self, model, weights):
self._model = model
self._weights = weights
core_or_plugin = make_plugin_or_core()
network = make_network(self._model, self._weights)
if getattr(core_or_plugin, 'get_supported_layers', False):
supported_layers = core_or_plugin.get_supported_layers(network)
not_supported_layers = [l for l in network.layers.keys() if l not in supported_layers]
if len(not_supported_layers) != 0:
raise Exception("Following layers are not supported by the plugin for specified device {}:\n {}".
format(core_or_plugin.device, ", ".join(not_supported_layers)))
iter_inputs = iter(network.inputs)
self._input_blob_name = next(iter_inputs)
self._input_info_name = ''
self._output_blob_name = next(iter(network.outputs))
self._require_image_info = False
info_names = ('image_info', 'im_info')
# NOTE: handling the inclusion of `image_info` in OpenVINO 2019
if any(s in network.inputs for s in info_names):
self._require_image_info = True
self._input_info_name = set(network.inputs).intersection(info_names)
self._input_info_name = self._input_info_name.pop()
if self._input_blob_name in info_names:
self._input_blob_name = next(iter_inputs)
if getattr(core_or_plugin, 'load_network', False):
self._net = core_or_plugin.load_network(network,
"CPU",
num_requests=2)
else:
self._net = core_or_plugin.load(network=network, num_requests=2)
input_type = network.inputs[self._input_blob_name]
self._input_layout = input_type if isinstance(input_type, list) else input_type.shape
def infer(self, image):
_, _, h, w = self._input_layout
in_frame = image if image.shape[:-1] == (h, w) else cv2.resize(image, (w, h))
in_frame = in_frame.transpose((2, 0, 1)) # Change data layout from HWC to CHW
inputs = {self._input_blob_name: in_frame}
if self._require_image_info:
info = np.zeros([1, 3])
info[0, 0] = h
info[0, 1] = w
# frame number
info[0, 2] = 1
inputs[self._input_info_name] = info
results = self._net.infer(inputs)
if len(results) == 1:
return results[self._output_blob_name].copy()
else:
return results.copy()
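The `transpose((2, 0, 1))` call above changes the data layout from HWC to CHW before inference. A pure-Python sketch of that layout change, using a hypothetical 2x2 3-channel image instead of a numpy array:

```python
# Pure-Python sketch of the HWC -> CHW layout change that
# in_frame.transpose((2, 0, 1)) performs on a numpy array.
def hwc_to_chw(image):
    """image: nested lists [H][W][C] -> returns [C][H][W]."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] for x in range(w)] for y in range(h)]
            for ch in range(c)]

# Hypothetical 2x2 image with 3 channels (H=2, W=2, C=3)
image = [[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [10, 11, 12]]]
chw = hwc_to_chw(image)
print(chw[0])  # channel-0 plane: [[1, 4], [7, 10]]
```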
def load_labelmap(labels_path):
with open(labels_path, "r") as f:
return json.load(f)["label_map"]
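`load_labelmap` expects a JSON file with a top-level `label_map` key mapping class ids to label names. A minimal, self-contained sketch of that format (the labels here are illustrative, not taken from any shipped model):

```python
import json
import os
import tempfile

# Minimal sketch of the labelmap format that load_labelmap() reads:
# a JSON object with a top-level "label_map" key.
labelmap = {"label_map": {"1": "person", "2": "car"}}

def load_labelmap(labels_path):
    with open(labels_path, "r") as f:
        return json.load(f)["label_map"]

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(labelmap, f)
    path = f.name

loaded = load_labelmap(path)
os.remove(path)
print(loaded)  # {'1': 'person', '2': 'car'}
```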

@ -1,276 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
import django_rq
import numpy as np
import os
import rq
import shutil
import tempfile
from django.db import transaction
from django.utils import timezone
from django.conf import settings
from cvat.apps.engine.log import slogger
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.authentication.auth import has_admin_role
from cvat.apps.engine.serializers import LabeledDataSerializer
from cvat.apps.dataset_manager.task import put_task_data, patch_task_data
from cvat.apps.engine.frame_provider import FrameProvider
from cvat.apps.engine.utils import av_scan_paths
from .models import AnnotationModel, FrameworkChoice
from .model_loader import load_labelmap
from .image_loader import ImageLoader
from .inference import InferenceAnnotationRunner
def _remove_old_file(model_file_field):
if model_file_field and os.path.exists(model_file_field.name):
os.remove(model_file_field.name)
def _update_dl_model_thread(dl_model_id, name, is_shared, model_file, weights_file, labelmap_file,
interpretation_file, run_tests, is_local_storage, delete_if_test_fails, restricted=True):
def _get_file_content(filename):
return os.path.basename(filename), open(filename, "rb")
def _delete_source_files():
for f in [model_file, weights_file, labelmap_file, interpretation_file]:
if f:
os.remove(f)
def _run_test(model_file, weights_file, labelmap_file, interpretation_file):
test_image = np.ones((1024, 1980, 3), np.uint8) * 255
try:
dummy_labelmap = {key: key for key in load_labelmap(labelmap_file).keys()}
runner = InferenceAnnotationRunner(
data=[test_image,],
model_file=model_file,
weights_file=weights_file,
labels_mapping=dummy_labelmap,
attribute_spec={},
convertation_file=interpretation_file)
runner.run(restricted=restricted)
except Exception as e:
return False, str(e)
return True, ""
job = rq.get_current_job()
job.meta["progress"] = "Saving data"
job.save_meta()
with transaction.atomic():
dl_model = AnnotationModel.objects.select_for_update().get(pk=dl_model_id)
test_res = True
message = ""
if run_tests:
job.meta["progress"] = "Test started"
job.save_meta()
test_res, message = _run_test(
model_file=model_file or dl_model.model_file.name,
weights_file=weights_file or dl_model.weights_file.name,
labelmap_file=labelmap_file or dl_model.labelmap_file.name,
interpretation_file=interpretation_file or dl_model.interpretation_file.name,
)
if not test_res:
job.meta["progress"] = "Test failed"
if delete_if_test_fails:
shutil.rmtree(dl_model.get_dirname(), ignore_errors=True)
dl_model.delete()
else:
job.meta["progress"] = "Test passed"
job.save_meta()
# update DL model
if test_res:
if model_file:
_remove_old_file(dl_model.model_file)
dl_model.model_file.save(*_get_file_content(model_file))
if weights_file:
_remove_old_file(dl_model.weights_file)
dl_model.weights_file.save(*_get_file_content(weights_file))
if labelmap_file:
_remove_old_file(dl_model.labelmap_file)
dl_model.labelmap_file.save(*_get_file_content(labelmap_file))
if interpretation_file:
_remove_old_file(dl_model.interpretation_file)
dl_model.interpretation_file.save(*_get_file_content(interpretation_file))
if name:
dl_model.name = name
if is_shared is not None:
dl_model.shared = is_shared
dl_model.updated_date = timezone.now()
dl_model.save()
if is_local_storage:
_delete_source_files()
if not test_res:
raise Exception("Model was not properly created/updated. Test failed: {}".format(message))
def create_or_update(dl_model_id, name, model_file, weights_file, labelmap_file, interpretation_file, owner, storage, is_shared):
def get_abs_path(share_path):
if not share_path:
return share_path
share_root = settings.SHARE_ROOT
relpath = os.path.normpath(share_path).lstrip('/')
if '..' in relpath.split(os.path.sep):
raise Exception('Permission denied')
abspath = os.path.abspath(os.path.join(share_root, relpath))
if os.path.commonprefix([share_root, abspath]) != share_root:
raise Exception('Bad file path on share: ' + abspath)
return abspath
def save_file_as_tmp(data):
if not data:
return None
fd, filename = tempfile.mkstemp()
with open(filename, 'wb') as tmp_file:
for chunk in data.chunks():
tmp_file.write(chunk)
os.close(fd)
return filename
is_create_request = dl_model_id is None
if is_create_request:
dl_model_id = create_empty(owner=owner)
run_tests = bool(model_file or weights_file or labelmap_file or interpretation_file)
if storage != "local":
model_file = get_abs_path(model_file)
weights_file = get_abs_path(weights_file)
labelmap_file = get_abs_path(labelmap_file)
interpretation_file = get_abs_path(interpretation_file)
else:
model_file = save_file_as_tmp(model_file)
weights_file = save_file_as_tmp(weights_file)
labelmap_file = save_file_as_tmp(labelmap_file)
interpretation_file = save_file_as_tmp(interpretation_file)
files_to_scan = []
if model_file:
files_to_scan.append(model_file)
if weights_file:
files_to_scan.append(weights_file)
if labelmap_file:
files_to_scan.append(labelmap_file)
if interpretation_file:
files_to_scan.append(interpretation_file)
av_scan_paths(*files_to_scan)
if owner:
restricted = not has_admin_role(owner)
else:
restricted = not has_admin_role(AnnotationModel.objects.get(pk=dl_model_id).owner)
rq_id = "auto_annotation.create.{}".format(dl_model_id)
queue = django_rq.get_queue("default")
queue.enqueue_call(
func=_update_dl_model_thread,
args=(
dl_model_id,
name,
is_shared,
model_file,
weights_file,
labelmap_file,
interpretation_file,
run_tests,
storage == "local",
is_create_request,
restricted
),
job_id=rq_id
)
return rq_id
@transaction.atomic
def create_empty(owner, framework=FrameworkChoice.OPENVINO):
db_model = AnnotationModel(
owner=owner,
)
db_model.save()
model_path = db_model.get_dirname()
if os.path.isdir(model_path):
shutil.rmtree(model_path)
os.mkdir(model_path)
return db_model.id
@transaction.atomic
def delete(dl_model_id):
dl_model = AnnotationModel.objects.select_for_update().get(pk=dl_model_id)
if dl_model:
if dl_model.primary:
raise Exception("Cannot delete primary model {}".format(dl_model_id))
shutil.rmtree(dl_model.get_dirname(), ignore_errors=True)
dl_model.delete()
else:
raise Exception("Requested DL model {} doesn't exist".format(dl_model_id))
def run_inference_thread(tid, model_file, weights_file, labels_mapping, attributes, convertation_file, reset, user, restricted=True):
def update_progress(job, progress):
job.refresh()
if "cancel" in job.meta:
del job.meta["cancel"]
job.save()
return False
job.meta["progress"] = progress
job.save_meta()
return True
try:
job = rq.get_current_job()
job.meta["progress"] = 0
job.save_meta()
db_task = TaskModel.objects.get(pk=tid)
result = None
slogger.glob.info("auto annotation with openvino toolkit for task {}".format(tid))
more_data = True
runner = InferenceAnnotationRunner(
data=ImageLoader(FrameProvider(db_task.data)),
model_file=model_file,
weights_file=weights_file,
labels_mapping=labels_mapping,
attribute_spec=attributes,
convertation_file=convertation_file)
while more_data:
result, more_data = runner.run(
job=job,
update_progress=update_progress,
restricted=restricted)
if result is None:
slogger.glob.info("auto annotation for task {} canceled by user".format(tid))
return
serializer = LabeledDataSerializer(data=result)
if serializer.is_valid(raise_exception=True):
if reset:
put_task_data(tid, result)
else:
patch_task_data(tid, result, "create")
slogger.glob.info("auto annotation for task {} done".format(tid))
except Exception as e:
try:
slogger.task[tid].exception("an exception occurred during auto annotation of the task", exc_info=True)
except Exception as ex:
slogger.glob.exception("an exception occurred during auto annotation of the task {}: {}".format(tid, str(ex)), exc_info=True)
raise ex
raise e
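The `get_abs_path` helper in `create_or_update` above guards against path traversal out of the share directory. The same logic can be exercised standalone; `share_root` below is a hypothetical value standing in for `settings.SHARE_ROOT`:

```python
import os

def get_abs_path(share_path, share_root="/home/django/share"):
    # Same guard as in create_or_update: reject '..' components and
    # any path that resolves outside share_root.
    if not share_path:
        return share_path
    relpath = os.path.normpath(share_path).lstrip('/')
    if '..' in relpath.split(os.path.sep):
        raise Exception('Permission denied')
    abspath = os.path.abspath(os.path.join(share_root, relpath))
    if os.path.commonprefix([share_root, abspath]) != share_root:
        raise Exception('Bad file path on share: ' + abspath)
    return abspath

print(get_abs_path("models/model.xml"))  # /home/django/share/models/model.xml
```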

@ -1,56 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
from enum import Enum
from django.db import models
from django.conf import settings
from django.contrib.auth.models import User
from django.core.files.storage import FileSystemStorage
fs = FileSystemStorage()
def upload_path_handler(instance, filename):
return os.path.join(settings.MODELS_ROOT, str(instance.id), filename)
class FrameworkChoice(Enum):
OPENVINO = 'openvino'
TENSORFLOW = 'tensorflow'
PYTORCH = 'pytorch'
def __str__(self):
return self.value
class SafeCharField(models.CharField):
def get_prep_value(self, value):
value = super().get_prep_value(value)
if value:
return value[:self.max_length]
return value
class AnnotationModel(models.Model):
name = SafeCharField(max_length=256)
owner = models.ForeignKey(User, null=True, blank=True,
on_delete=models.SET_NULL)
created_date = models.DateTimeField(auto_now_add=True)
updated_date = models.DateTimeField(auto_now_add=True)
model_file = models.FileField(upload_to=upload_path_handler, storage=fs)
weights_file = models.FileField(upload_to=upload_path_handler, storage=fs)
labelmap_file = models.FileField(upload_to=upload_path_handler, storage=fs)
interpretation_file = models.FileField(upload_to=upload_path_handler, storage=fs)
shared = models.BooleanField(default=False)
primary = models.BooleanField(default=False)
framework = models.CharField(max_length=32, default=FrameworkChoice.OPENVINO)
class Meta:
default_permissions = ()
def get_dirname(self):
return "{models_root}/{id}".format(models_root=settings.MODELS_ROOT, id=self.id)
def __str__(self):
return self.name
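`SafeCharField` above silently truncates values to `max_length` instead of letting the database reject over-long strings. Its `get_prep_value` behaviour can be sketched without Django:

```python
# Sketch of SafeCharField.get_prep_value: truncate to max_length
# rather than raising on overly long values.
def safe_prep_value(value, max_length=256):
    if value:
        return value[:max_length]
    return value

print(len(safe_prep_value("x" * 300)))  # 256
```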

@ -1,29 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import rules
from cvat.apps.authentication.auth import has_admin_role, has_user_role
@rules.predicate
def is_model_owner(db_user, db_dl_model):
return db_dl_model.owner == db_user
@rules.predicate
def is_shared_model(_, db_dl_model):
return db_dl_model.shared
@rules.predicate
def is_primary_model(_, db_dl_model):
return db_dl_model.primary
def setup_permissions():
rules.add_perm('auto_annotation.model.create', has_admin_role | has_user_role)
rules.add_perm('auto_annotation.model.update', (has_admin_role | is_model_owner) & ~is_primary_model)
rules.add_perm('auto_annotation.model.delete', (has_admin_role | is_model_owner) & ~is_primary_model)
rules.add_perm('auto_annotation.model.access', has_admin_role | is_model_owner |
is_shared_model | is_primary_model)
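The `rules` predicates above compose with `|`, `&`, and `~`. The resulting update rule, `(has_admin_role | is_model_owner) & ~is_primary_model`, can be sketched with plain functions (the users and model dicts here are hypothetical stand-ins, not the real Django objects):

```python
# Plain-function sketch of the composed permission rule
# (has_admin_role | is_model_owner) & ~is_primary_model
def can_update(user, model):
    is_owner = model["owner"] == user
    is_admin = user == "admin"  # stand-in for has_admin_role
    return (is_admin or is_owner) and not model["primary"]

model = {"owner": "alice", "primary": False}
print(can_update("alice", model))  # True
print(can_update("bob", model))    # False
```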

@ -1,4 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -1,19 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.urls import path
from . import views
urlpatterns = [
path("create", views.create_model),
path("update/<int:mid>", views.update_model),
path("delete/<int:mid>", views.delete_model),
path("start/<int:mid>/<int:tid>", views.start_annotation),
path("check/<str:rq_id>", views.check),
path("cancel/<int:tid>", views.cancel),
path("meta/get", views.get_meta_info),
]

@ -1,265 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import django_rq
import json
import os
from django.http import HttpResponse, JsonResponse, HttpResponseBadRequest
from rest_framework.decorators import api_view
from django.db.models import Q
from rules.contrib.views import permission_required, objectgetter
from cvat.apps.authentication.decorators import login_required
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.authentication.auth import has_admin_role
from cvat.apps.engine.log import slogger
from .model_loader import load_labelmap
from . import model_manager
from .models import AnnotationModel
@login_required
@permission_required(perm=["engine.task.change"],
fn=objectgetter(TaskModel, "tid"), raise_exception=True)
def cancel(request, tid):
try:
queue = django_rq.get_queue("low")
job = queue.fetch_job("auto_annotation.run.{}".format(tid))
if job is None or job.is_finished or job.is_failed:
raise Exception("Task is not being annotated currently")
elif "cancel" not in job.meta:
job.meta["cancel"] = True
job.save()
except Exception as ex:
try:
slogger.task[tid].exception("cannot cancel auto annotation for task #{}".format(tid), exc_info=True)
except Exception as logger_ex:
slogger.glob.exception("an exception occurred during the cancel auto annotation request for task {}: {}".format(tid, str(logger_ex)), exc_info=True)
return HttpResponseBadRequest(str(ex))
return HttpResponse()
@login_required
@permission_required(perm=["auto_annotation.model.create"], raise_exception=True)
def create_model(request):
if request.method != 'POST':
return HttpResponseBadRequest("Only POST requests are accepted")
try:
params = request.POST
storage = params["storage"]
name = params["name"]
is_shared = params["shared"].lower() == "true"
if is_shared and not has_admin_role(request.user):
raise Exception("Only admin can create shared models")
files = request.FILES if storage == "local" else params
model = files["xml"]
weights = files["bin"]
labelmap = files["json"]
interpretation_script = files["py"]
owner = request.user
rq_id = model_manager.create_or_update(
dl_model_id=None,
name=name,
model_file=model,
weights_file=weights,
labelmap_file=labelmap,
interpretation_file=interpretation_script,
owner=owner,
storage=storage,
is_shared=is_shared,
)
return JsonResponse({"id": rq_id})
except Exception as e:
return HttpResponseBadRequest(str(e))
@login_required
@permission_required(perm=["auto_annotation.model.update"],
fn=objectgetter(AnnotationModel, "mid"), raise_exception=True)
def update_model(request, mid):
if request.method != 'POST':
return HttpResponseBadRequest("Only POST requests are accepted")
try:
params = request.POST
storage = params["storage"]
name = params.get("name")
is_shared = params.get("shared")
is_shared = is_shared.lower() == "true" if is_shared else None
if is_shared and not has_admin_role(request.user):
raise Exception("Only admin can share models")
files = request.FILES
model = files.get("xml")
weights = files.get("bin")
labelmap = files.get("json")
interpretation_script = files.get("py")
rq_id = model_manager.create_or_update(
dl_model_id=mid,
name=name,
model_file=model,
weights_file=weights,
labelmap_file=labelmap,
interpretation_file=interpretation_script,
owner=None,
storage=storage,
is_shared=is_shared,
)
return JsonResponse({"id": rq_id})
except Exception as e:
return HttpResponseBadRequest(str(e))
@login_required
@permission_required(perm=["auto_annotation.model.delete"],
fn=objectgetter(AnnotationModel, "mid"), raise_exception=True)
def delete_model(request, mid):
if request.method != 'DELETE':
return HttpResponseBadRequest("Only DELETE requests are accepted")
model_manager.delete(mid)
return HttpResponse()
@api_view(['POST'])
@login_required
def get_meta_info(request):
try:
tids = request.data
response = {
"admin": has_admin_role(request.user),
"models": [],
"run": {},
}
dl_model_list = list(AnnotationModel.objects.filter(Q(owner=request.user) | Q(primary=True) | Q(shared=True)).order_by('-created_date'))
for dl_model in dl_model_list:
labels = []
if dl_model.labelmap_file and os.path.exists(dl_model.labelmap_file.name):
with dl_model.labelmap_file.open('r') as f:
labels = list(json.load(f)["label_map"].values())
response["models"].append({
"id": dl_model.id,
"name": dl_model.name,
"primary": dl_model.primary,
"uploadDate": dl_model.created_date,
"updateDate": dl_model.updated_date,
"labels": labels,
"owner": dl_model.owner.id,
})
queue = django_rq.get_queue("low")
for tid in tids:
rq_id = "auto_annotation.run.{}".format(tid)
job = queue.fetch_job(rq_id)
if job is not None:
response["run"][tid] = {
"status": job.get_status(),
"rq_id": rq_id,
}
return JsonResponse(response)
except Exception as e:
return HttpResponseBadRequest(str(e))
@login_required
@permission_required(perm=["engine.task.change"],
fn=objectgetter(TaskModel, "tid"), raise_exception=True)
@permission_required(perm=["auto_annotation.model.access"],
fn=objectgetter(AnnotationModel, "mid"), raise_exception=True)
def start_annotation(request, mid, tid):
slogger.glob.info("auto annotation create request for task {} via DL model {}".format(tid, mid))
try:
db_task = TaskModel.objects.get(pk=tid)
queue = django_rq.get_queue("low")
job = queue.fetch_job("auto_annotation.run.{}".format(tid))
if job is not None and (job.is_started or job.is_queued):
raise Exception("The process is already running")
data = json.loads(request.body.decode('utf-8'))
should_reset = data["reset"]
user_defined_labels_mapping = data["labels"]
dl_model = AnnotationModel.objects.get(pk=mid)
model_file_path = dl_model.model_file.name
weights_file_path = dl_model.weights_file.name
labelmap_file = dl_model.labelmap_file.name
convertation_file_path = dl_model.interpretation_file.name
restricted = not has_admin_role(dl_model.owner)
db_labels = db_task.label_set.prefetch_related("attributespec_set").all()
db_attributes = {db_label.id:
{db_attr.name: db_attr.id for db_attr in db_label.attributespec_set.all()} for db_label in db_labels}
db_labels = {db_label.name:db_label.id for db_label in db_labels}
model_labels = {value: key for key, value in load_labelmap(labelmap_file).items()}
labels_mapping = {}
for user_model_label, user_db_label in user_defined_labels_mapping.items():
if user_model_label in model_labels and user_db_label in db_labels:
labels_mapping[int(model_labels[user_model_label])] = db_labels[user_db_label]
if not labels_mapping:
raise Exception("No labels found for annotation")
rq_id = "auto_annotation.run.{}".format(tid)
queue.enqueue_call(func=model_manager.run_inference_thread,
args=(
tid,
model_file_path,
weights_file_path,
labels_mapping,
db_attributes,
convertation_file_path,
should_reset,
request.user,
restricted,
),
job_id = rq_id,
timeout=604800) # 7 days
slogger.task[tid].info("auto annotation job enqueued")
except Exception as ex:
try:
slogger.task[tid].exception("an exception occurred during the annotation request", exc_info=True)
except Exception as logger_ex:
slogger.glob.exception("an exception occurred during the create auto annotation request for task {}: {}".format(tid, str(logger_ex)), exc_info=True)
return HttpResponseBadRequest(str(ex))
return JsonResponse({"id": rq_id})
@login_required
def check(request, rq_id):
data = {}
try:
target_queue = "low" if "auto_annotation.run" in rq_id else "default"
queue = django_rq.get_queue(target_queue)
job = queue.fetch_job(rq_id)
if job is not None and "cancel" in job.meta:
return JsonResponse({"status": "finished"})
data = {}
if job is None:
data["status"] = "unknown"
elif job.is_queued:
data["status"] = "queued"
elif job.is_started:
data["status"] = "started"
data["progress"] = job.meta.get("progress", "")
elif job.is_finished:
data["status"] = "finished"
job.delete()
else:
data["status"] = "failed"
data["error"] = job.exc_info
job.delete()
except Exception:
data["status"] = "unknown"
return JsonResponse(data)
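The `check` view above maps an rq job's state to a small status payload. The mapping can be sketched with a minimal fake job object standing in for the real rq job (the class below is a test double, not part of rq):

```python
# Sketch of the rq-job -> response mapping used by check().
class FakeJob:
    def __init__(self, status, meta=None, exc_info=""):
        self.meta = meta or {}
        self.exc_info = exc_info
        self.is_queued = status == "queued"
        self.is_started = status == "started"
        self.is_finished = status == "finished"

def job_status(job):
    if job is None:
        return {"status": "unknown"}
    if "cancel" in job.meta:
        return {"status": "finished"}
    if job.is_queued:
        return {"status": "queued"}
    if job.is_started:
        return {"status": "started",
                "progress": job.meta.get("progress", "")}
    if job.is_finished:
        return {"status": "finished"}
    return {"status": "failed", "error": job.exc_info}

print(job_status(FakeJob("started", {"progress": 42})))
```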

@ -1,4 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
# Register your models here.

@ -1,11 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.apps import AppConfig
class AutoSegmentationConfig(AppConfig):
name = 'auto_segmentation'

@ -1,5 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
# Create your models here.

@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
# Create your tests here.

@ -1,14 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.urls import path
from . import views
urlpatterns = [
path('create/task/<int:tid>', views.create),
path('check/task/<int:tid>', views.check),
path('cancel/task/<int:tid>', views.cancel),
path('meta/get', views.get_meta_info),
]

@ -1,310 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.http import HttpResponse, JsonResponse, HttpResponseBadRequest
from rest_framework.decorators import api_view
from rules.contrib.views import permission_required, objectgetter
from cvat.apps.authentication.decorators import login_required
from cvat.apps.dataset_manager.task import put_task_data
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine.serializers import LabeledDataSerializer
from cvat.apps.engine.frame_provider import FrameProvider
import django_rq
import os
import rq
import numpy as np
from cvat.apps.engine.log import slogger
import sys
import skimage.io
from skimage.measure import find_contours, approximate_polygon
def run_tensorflow_auto_segmentation(frame_provider, labels_mapping, treshold):
def _convert_to_int(boolean_mask):
return boolean_mask.astype(np.uint8)
def _convert_to_segmentation(mask):
contours = find_contours(mask, 0.5)
# only one contour exists in our case
contour = contours[0]
contour = np.flip(contour, axis=1)
# Approximate the contour and reduce the number of points
contour = approximate_polygon(contour, tolerance=2.5)
segmentation = contour.ravel().tolist()
return segmentation
## INITIALIZATION
# workaround: tf.placeholder() is not compatible with eager execution
# https://github.com/tensorflow/tensorflow/issues/18165
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
# Root directory of the project
ROOT_DIR = os.environ.get('AUTO_SEGMENTATION_PATH')
if ROOT_DIR is None:
raise OSError('AUTO_SEGMENTATION_PATH environment variable is not set.')
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
import mrcnn.model as modellib
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
import coco
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
if not os.path.isfile(COCO_MODEL_PATH):
raise OSError('Model weights file not found: {}'.format(COCO_MODEL_PATH))
job = rq.get_current_job()
## CONFIGURATION
class InferenceConfig(coco.CocoConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
# Print config details
config = InferenceConfig()
config.display()
## CREATE MODEL AND LOAD TRAINED WEIGHTS
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
## RUN OBJECT DETECTION
result = {}
frames = frame_provider.get_frames(frame_provider.Quality.ORIGINAL)
for image_num, (image_bytes, _) in enumerate(frames):
job.refresh()
if 'cancel' in job.meta:
del job.meta['cancel']
job.save()
return None
job.meta['progress'] = image_num * 100 / len(frame_provider)
job.save_meta()
image = skimage.io.imread(image_bytes)
# for multiple image detection, "batch size" must be equal to number of images
r = model.detect([image], verbose=1)
r = r[0]
# "r['rois'][index]" gives bounding box around the object
for index, c_id in enumerate(r['class_ids']):
if c_id in labels_mapping.keys():
if r['scores'][index] >= treshold:
mask = _convert_to_int(r['masks'][:,:,index])
segmentation = _convert_to_segmentation(mask)
label = labels_mapping[c_id]
if label not in result:
result[label] = []
result[label].append(
[image_num, segmentation])
return result
def convert_to_cvat_format(data):
result = {
"tracks": [],
"shapes": [],
"tags": [],
"version": 0,
}
for label in data:
segments = data[label]
for segment in segments:
result['shapes'].append({
"type": "polygon",
"label_id": label,
"frame": segment[0],
"points": segment[1],
"z_order": 0,
"group": None,
"occluded": False,
"attributes": [],
})
return result
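`convert_to_cvat_format` is pure Python, so its output shape can be checked directly. Copied as-is and fed one detected mask (frame number plus flattened polygon points, as `run_tensorflow_auto_segmentation` emits them):

```python
# convert_to_cvat_format copied verbatim to show the output shape.
def convert_to_cvat_format(data):
    result = {"tracks": [], "shapes": [], "tags": [], "version": 0}
    for label in data:
        for segment in data[label]:
            result['shapes'].append({
                "type": "polygon",
                "label_id": label,
                "frame": segment[0],
                "points": segment[1],
                "z_order": 0,
                "group": None,
                "occluded": False,
                "attributes": [],
            })
    return result

# One detection for label id 7 on frame 0 (illustrative values).
data = {7: [[0, [10.0, 10.0, 20.0, 10.0, 15.0, 20.0]]]}
result = convert_to_cvat_format(data)
print(result["shapes"][0]["label_id"])  # 7
```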
def create_thread(tid, labels_mapping, user):
try:
# If the detected object's confidence is greater than the threshold, it will be returned
TRESHOLD = 0.5
# Init rq job
job = rq.get_current_job()
job.meta['progress'] = 0
job.save_meta()
# Get job indexes and segment length
db_task = TaskModel.objects.get(pk=tid)
# Get image list
frame_provider = FrameProvider(db_task.data)
# Run auto segmentation by tf
result = None
slogger.glob.info("auto segmentation with tensorflow framework for task {}".format(tid))
result = run_tensorflow_auto_segmentation(frame_provider, labels_mapping, TRESHOLD)
if result is None:
slogger.glob.info('auto segmentation for task {} canceled by user'.format(tid))
return
# Modify data format and save
result = convert_to_cvat_format(result)
serializer = LabeledDataSerializer(data=result)
if serializer.is_valid(raise_exception=True):
put_task_data(tid, result)
slogger.glob.info('auto segmentation for task {} done'.format(tid))
except Exception as ex:
try:
slogger.task[tid].exception('an exception occurred during auto segmentation of the task', exc_info=True)
except Exception:
slogger.glob.exception('an exception occurred during auto segmentation of the task {}'.format(tid), exc_info=True)
raise ex
@api_view(['POST'])
@login_required
def get_meta_info(request):
try:
queue = django_rq.get_queue('low')
tids = request.data
result = {}
for tid in tids:
job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
if job is not None:
result[tid] = {
"active": job.is_queued or job.is_started,
"success": not job.is_failed
}
return JsonResponse(result)
except Exception as ex:
slogger.glob.exception('an exception occurred during the tf meta request', exc_info=True)
return HttpResponseBadRequest(str(ex))
@login_required
@permission_required(perm=['engine.task.change'],
fn=objectgetter(TaskModel, 'tid'), raise_exception=True)
def create(request, tid):
slogger.glob.info('auto segmentation create request for task {}'.format(tid))
try:
db_task = TaskModel.objects.get(pk=tid)
queue = django_rq.get_queue('low')
job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
if job is not None and (job.is_started or job.is_queued):
raise Exception("The process is already running")
db_labels = db_task.label_set.prefetch_related('attributespec_set').all()
db_labels = {db_label.id:db_label.name for db_label in db_labels}
# COCO Labels
auto_segmentation_labels = { "BG": 0,
"person": 1, "bicycle": 2, "car": 3, "motorcycle": 4, "airplane": 5,
"bus": 6, "train": 7, "truck": 8, "boat": 9, "traffic_light": 10,
"fire_hydrant": 11, "stop_sign": 12, "parking_meter": 13, "bench": 14,
"bird": 15, "cat": 16, "dog": 17, "horse": 18, "sheep": 19, "cow": 20,
"elephant": 21, "bear": 22, "zebra": 23, "giraffe": 24, "backpack": 25,
"umbrella": 26, "handbag": 27, "tie": 28, "suitcase": 29, "frisbee": 30,
"skis": 31, "snowboard": 32, "sports_ball": 33, "kite": 34, "baseball_bat": 35,
"baseball_glove": 36, "skateboard": 37, "surfboard": 38, "tennis_racket": 39,
"bottle": 40, "wine_glass": 41, "cup": 42, "fork": 43, "knife": 44, "spoon": 45,
"bowl": 46, "banana": 47, "apple": 48, "sandwich": 49, "orange": 50, "broccoli": 51,
"carrot": 52, "hot_dog": 53, "pizza": 54, "donut": 55, "cake": 56, "chair": 57,
"couch": 58, "potted_plant": 59, "bed": 60, "dining_table": 61, "toilet": 62,
"tv": 63, "laptop": 64, "mouse": 65, "remote": 66, "keyboard": 67, "cell_phone": 68,
"microwave": 69, "oven": 70, "toaster": 71, "sink": 72, "refrigerator": 73,
"book": 74, "clock": 75, "vase": 76, "scissors": 77, "teddy_bear": 78, "hair_drier": 79,
"toothbrush": 80
}
labels_mapping = {}
for key, labels in db_labels.items():
if labels in auto_segmentation_labels.keys():
labels_mapping[auto_segmentation_labels[labels]] = key
if not labels_mapping:
raise Exception('No labels found for auto segmentation')
# Run auto segmentation job
queue.enqueue_call(func=create_thread,
args=(tid, labels_mapping, request.user),
job_id='auto_segmentation.create/{}'.format(tid),
timeout=604800) # 7 days
slogger.task[tid].info('tensorflow segmentation job enqueued with labels {}'.format(labels_mapping))
except Exception as ex:
try:
slogger.task[tid].exception("an exception occurred during the tensorflow segmentation request", exc_info=True)
except Exception:
pass
return HttpResponseBadRequest(str(ex))
return HttpResponse()
@login_required
@permission_required(perm=['engine.task.access'],
fn=objectgetter(TaskModel, 'tid'), raise_exception=True)
def check(request, tid):
data = {}
try:
queue = django_rq.get_queue('low')
job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
if job is not None and 'cancel' in job.meta:
return JsonResponse({'status': 'finished'})
data = {}
if job is None:
data['status'] = 'unknown'
elif job.is_queued:
data['status'] = 'queued'
elif job.is_started:
data['status'] = 'started'
data['progress'] = job.meta.get('progress', '')
elif job.is_finished:
data['status'] = 'finished'
job.delete()
else:
data['status'] = 'failed'
data['stderr'] = job.exc_info
job.delete()
except Exception:
data['status'] = 'unknown'
return JsonResponse(data)
@login_required
@permission_required(perm=['engine.task.change'],
fn=objectgetter(TaskModel, 'tid'), raise_exception=True)
def cancel(request, tid):
try:
queue = django_rq.get_queue('low')
job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
if job is None or job.is_finished or job.is_failed:
raise Exception('Task is not being segmented currently')
elif 'cancel' not in job.meta:
job.meta['cancel'] = True
job.save()
except Exception as ex:
try:
slogger.task[tid].exception("cannot cancel tensorflow segmentation for task #{}".format(tid), exc_info=True)
except Exception:
pass
return HttpResponseBadRequest(str(ex))
return HttpResponse()

@ -316,6 +316,9 @@ class ShapeManager(ObjectManager):
def _calc_objects_similarity(obj0, obj1, start_frame, overlap):
def _calc_polygons_similarity(p0, p1):
overlap_area = p0.intersection(p1).area
if p0.area == 0 or p1.area == 0: # a line with many points
return 0
else:
return overlap_area / (p0.area + p1.area - overlap_area)
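The similarity formula above is intersection-over-union with a guard for degenerate zero-area polygons. A shapely-free sketch of the same formula using axis-aligned boxes (the real code uses shapely polygons; boxes just make the intersection trivial to compute):

```python
# IoU sketch with axis-aligned boxes (x0, y0, x1, y1) instead of
# shapely polygons; same formula as _calc_polygons_similarity,
# including the guard for degenerate zero-area shapes.
def box_area(b):
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def similarity(b0, b1):
    a0, a1 = box_area(b0), box_area(b1)
    if a0 == 0 or a1 == 0:  # e.g. a "line" with no area
        return 0
    ix = max(0, min(b0[2], b1[2]) - max(b0[0], b1[0]))
    iy = max(0, min(b0[3], b1[3]) - max(b0[1], b1[1]))
    overlap = ix * iy
    return overlap / (a0 + a1 - overlap)

print(similarity((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0
print(similarity((0, 0, 2, 2), (5, 5, 6, 6)))  # 0.0
```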
has_same_type = obj0["type"] == obj1["type"]
@ -328,7 +331,7 @@ class ShapeManager(ObjectManager):
return _calc_polygons_similarity(p0, p1)
elif obj0["type"] == ShapeType.POLYGON:
p0 = geometry.Polygon(pairwise(obj0["points"]))
p1 = geometry.Polygon(pairwise(obj0["points"]))
p1 = geometry.Polygon(pairwise(obj1["points"]))
return _calc_polygons_similarity(p0, p1)
else:

@ -1,31 +0,0 @@
# Semi-Automatic Segmentation with [Deep Extreme Cut](http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr/)
## About the application
The application allows you to use deep learning models for semi-automatic semantic and instance segmentation.
You can get a segmentation polygon from four (or more) extreme points of an object.
This application uses the pre-trained DEXTR model, which has been converted to Inference Engine format.
We are grateful to K.K. Maninis, S. Caelles, J. Pont-Tuset, and L. Van Gool, who permitted using their models in our tool.
## Build docker image
```bash
# OpenVINO component is also needed
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml -f cvat/apps/dextr_segmentation/docker-compose.dextr.yml build
```
## Run docker container
```bash
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml -f cvat/apps/dextr_segmentation/docker-compose.dextr.yml up -d
```
## Using
1. Open a job
2. Select "Auto Segmentation" in the list of shapes
3. Start the draw mode as usual (press the "Create Shape" button or use the "N" shortcut)
4. Click four to six (or more if needed) extreme points of an object
5. Finish the draw mode as usual (by the shortcut or by pressing the "Stop Creation" button)
6. Wait a moment and you will get a class-agnostic annotation polygon
7. You can cancel an annotation request if it takes too long
(for example, when it is queued for an rq worker and all workers are busy)

@@ -1,7 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from cvat.settings.base import JS_3RDPARTY
JS_3RDPARTY['engine'] = JS_3RDPARTY.get('engine', []) + ['dextr_segmentation/js/enginePlugin.js']

@@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.apps import AppConfig
class DextrSegmentationConfig(AppConfig):
name = 'dextr_segmentation'

@@ -1,119 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
from cvat.apps.auto_annotation.inference_engine import make_plugin_or_core, make_network
from cvat.apps.engine.frame_provider import FrameProvider
import os
import cv2
import PIL
import numpy as np
_IE_CPU_EXTENSION = os.getenv("IE_CPU_EXTENSION", "libcpu_extension_avx2.so")
_IE_PLUGINS_PATH = os.getenv("IE_PLUGINS_PATH", None)
_DEXTR_MODEL_DIR = os.getenv("DEXTR_MODEL_DIR", None)
_DEXTR_PADDING = 50
_DEXTR_TRESHOLD = 0.9
_DEXTR_SIZE = 512
class DEXTR_HANDLER:
def __init__(self):
self._plugin = None
self._network = None
self._exec_network = None
self._input_blob = None
self._output_blob = None
if not _DEXTR_MODEL_DIR:
raise Exception("DEXTR_MODEL_DIR is not defined")
def handle(self, db_data, frame, points):
# Lazy initialization
if not self._plugin:
self._plugin = make_plugin_or_core()
self._network = make_network(os.path.join(_DEXTR_MODEL_DIR, 'dextr.xml'),
os.path.join(_DEXTR_MODEL_DIR, 'dextr.bin'))
self._input_blob = next(iter(self._network.inputs))
self._output_blob = next(iter(self._network.outputs))
if getattr(self._plugin, 'load_network', False):
self._exec_network = self._plugin.load_network(self._network, 'CPU')
else:
self._exec_network = self._plugin.load(network=self._network)
frame_provider = FrameProvider(db_data)
image = frame_provider.get_frame(frame, frame_provider.Quality.ORIGINAL)
image = PIL.Image.open(image[0])
numpy_image = np.array(image)
points = np.asarray([[int(p["x"]), int(p["y"])] for p in points], dtype=int)
# Padding mustn't be more than the closest distance to an edge of an image
[height, width] = numpy_image.shape[:2]
x_values = points[:, 0]
y_values = points[:, 1]
[min_x, max_x] = [np.min(x_values), np.max(x_values)]
[min_y, max_y] = [np.min(y_values), np.max(y_values)]
padding = min(min_x, min_y, width - max_x, height - max_y, _DEXTR_PADDING)
bounding_box = (
max(min(points[:, 0]) - padding, 0),
max(min(points[:, 1]) - padding, 0),
min(max(points[:, 0]) + padding, width - 1),
min(max(points[:, 1]) + padding, height - 1)
)
# Prepare an image
numpy_cropped = np.array(image.crop(bounding_box))
resized = cv2.resize(numpy_cropped, (_DEXTR_SIZE, _DEXTR_SIZE),
interpolation = cv2.INTER_CUBIC).astype(np.float32)
# Make a heatmap
points = points - [min(points[:, 0]), min(points[:, 1])] + [padding, padding]
points = (points * [_DEXTR_SIZE / numpy_cropped.shape[1], _DEXTR_SIZE / numpy_cropped.shape[0]]).astype(int)
heatmap = np.zeros(shape=resized.shape[:2], dtype=np.float64)
for point in points:
gaussian_x_axis = np.arange(0, _DEXTR_SIZE, 1, float) - point[0]
gaussian_y_axis = np.arange(0, _DEXTR_SIZE, 1, float)[:, np.newaxis] - point[1]
gaussian = np.exp(-4 * np.log(2) * ((gaussian_x_axis ** 2 + gaussian_y_axis ** 2) / 100)).astype(np.float64)
heatmap = np.maximum(heatmap, gaussian)
cv2.normalize(heatmap, heatmap, 0, 255, cv2.NORM_MINMAX)
# Concat an image and a heatmap
input_dextr = np.concatenate((resized, heatmap[:, :, np.newaxis].astype(resized.dtype)), axis=2)
input_dextr = input_dextr.transpose((2,0,1))
pred = self._exec_network.infer(inputs={self._input_blob: input_dextr[np.newaxis, ...]})[self._output_blob][0, 0, :, :]
pred = cv2.resize(pred, tuple(reversed(numpy_cropped.shape[:2])), interpolation = cv2.INTER_CUBIC)
result = np.zeros(numpy_image.shape[:2])
result[bounding_box[1]:bounding_box[1] + pred.shape[0], bounding_box[0]:bounding_box[0] + pred.shape[1]] = pred > _DEXTR_TRESHOLD
# Convert a mask to a polygon
result = np.array(result, dtype=np.uint8)
cv2.normalize(result,result,0,255,cv2.NORM_MINMAX)
contours = None
if int(cv2.__version__.split('.')[0]) > 3:
contours = cv2.findContours(result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)[0]
else:
contours = cv2.findContours(result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)[1]
contours = max(contours, key=lambda arr: arr.size)
if contours.shape.count(1):
contours = np.squeeze(contours)
if contours.size < 3 * 2:
raise Exception('Fewer than three points have been detected. Cannot build a polygon.')
result = ""
for point in contours:
result += "{},{} ".format(int(point[0]), int(point[1]))
result = result[:-1]
return result
def __del__(self):
if self._exec_network:
del self._exec_network
if self._network:
del self._network
if self._plugin:
del self._plugin

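The removed handler builds the fourth input channel for DEXTR by stamping a 2D Gaussian at each extreme point and combining them with a per-pixel maximum. A pure-Python sketch of that heatmap step, keeping the same full-width-at-half-maximum constant of 100 used above (the real code does this vectorized with NumPy over a 512x512 grid; the tiny grid here is only for illustration):

```python
import math

def make_heatmap(points, size, fwhm_sq=100):
    """Build a size x size heatmap with a 2D Gaussian centered on each point.

    Mirrors the heatmap construction in the removed DEXTR handler:
    g = exp(-4 * ln(2) * d^2 / fwhm_sq), merged with a per-pixel maximum.
    """
    heatmap = [[0.0] * size for _ in range(size)]
    for px, py in points:
        for y in range(size):
            for x in range(size):
                d2 = (x - px) ** 2 + (y - py) ** 2
                g = math.exp(-4 * math.log(2) * d2 / fwhm_sq)
                # max keeps the strongest response when Gaussians overlap
                heatmap[y][x] = max(heatmap[y][x], g)
    return heatmap
```

Each extreme point thus contributes a peak of exactly 1.0 at its own location, which the handler then normalizes to the 0-255 range before concatenating it with the RGB channels.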
@@ -1,14 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
WITH_DEXTR: "yes"

@@ -1,214 +0,0 @@
/*
* Copyright (C) 2018 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
/* global
AREA_TRESHOLD:false
PolyShapeModel:false
ShapeCreatorModel:true
ShapeCreatorView:true
showMessage:false
*/
/* eslint no-underscore-dangle: 0 */
window.addEventListener('DOMContentLoaded', () => {
$('<option value="auto_segmentation" class="regular"> Auto Segmentation </option>').appendTo('#shapeTypeSelector');
const dextrCancelButtonId = 'dextrCancelButton';
const dextrOverlay = $(`
<div class="modal hidden force-modal">
<div class="modal-content" style="width: 300px; height: 70px;">
<center> <label class="regular h2"> Segmentation request is being processed </label></center>
<center style="margin-top: 5px;">
<button id="${dextrCancelButtonId}" class="regular h2" style="width: 250px;"> Cancel </button>
</center>
</div>
</div>`).appendTo('body');
const dextrCancelButton = $(`#${dextrCancelButtonId}`);
dextrCancelButton.on('click', () => {
dextrCancelButton.prop('disabled', true);
$.ajax({
url: `/dextr/cancel/${window.cvat.job.id}`,
type: 'GET',
error: (errorData) => {
const message = `Can not cancel segmentation. Code: ${errorData.status}.
Message: ${errorData.responseText || errorData.statusText}`;
showMessage(message);
},
complete: () => {
dextrCancelButton.prop('disabled', false);
},
});
});
function ShapeCreatorModelWrapper(OriginalClass) {
// Constructor will patch some properties for a created instance
function constructorDecorator(...args) {
const instance = new OriginalClass(...args);
// Decorator for the defaultType property
Object.defineProperty(instance, 'defaultType', {
get: () => instance._defaultType,
set: (type) => {
if (!['box', 'box_by_4_points', 'points', 'polygon',
'polyline', 'auto_segmentation', 'cuboid'].includes(type)) {
throw Error(`Unknown shape type found ${type}`);
}
instance._defaultType = type;
},
});
// Decorator for finish method.
const decoratedFinish = instance.finish;
instance.finish = (result) => {
if (instance._defaultType === 'auto_segmentation') {
try {
instance._defaultType = 'polygon';
decoratedFinish.call(instance, result);
} finally {
instance._defaultType = 'auto_segmentation';
}
} else {
decoratedFinish.call(instance, result);
}
};
return instance;
}
constructorDecorator.prototype = OriginalClass.prototype;
constructorDecorator.prototype.constructor = constructorDecorator;
return constructorDecorator;
}
function ShapeCreatorViewWrapper(OriginalClass) {
// Constructor will patch some properties for each instance
function constructorDecorator(...args) {
const instance = new OriginalClass(...args);
// Decorator for the _create() method.
// We save the decorated _create() and we will use it if type != 'auto_segmentation'
const decoratedCreate = instance._create;
instance._create = () => {
if (instance._type !== 'auto_segmentation') {
decoratedCreate.call(instance);
return;
}
instance._drawInstance = instance._frameContent.polyline().draw({ snapToGrid: 0.1 }).addClass('shapeCreation').attr({
'stroke-width': 0,
z_order: Number.MAX_SAFE_INTEGER,
});
instance._createPolyEvents();
/* the _createPolyEvents() method has attached a "drawdone"
* event handler which is invalid for this case,
* so we remove that handler and attach a valid one instead
*/
instance._drawInstance.off('drawdone').on('drawdone', (e) => {
let actualPoints = window.cvat.translate.points.canvasToActual(e.target.getAttribute('points'));
actualPoints = PolyShapeModel.convertStringToNumberArray(actualPoints);
if (actualPoints.length < 4) {
showMessage('At least four extreme points must be specified for an object');
instance._controller.switchCreateMode(true);
return;
}
const { frameWidth } = window.cvat.player.geometry;
const { frameHeight } = window.cvat.player.geometry;
for (let idx = 0; idx < actualPoints.length; idx += 1) {
const point = actualPoints[idx];
point.x = Math.clamp(point.x, 0, frameWidth);
point.y = Math.clamp(point.y, 0, frameHeight);
}
e.target.setAttribute('points',
window.cvat.translate.points.actualToCanvas(
PolyShapeModel.convertNumberArrayToString(actualPoints),
));
const polybox = e.target.getBBox();
const area = polybox.width * polybox.height;
if (area > AREA_TRESHOLD) {
$.ajax({
url: `/dextr/create/${window.cvat.job.id}`,
type: 'POST',
data: JSON.stringify({
frame: window.cvat.player.frames.current,
points: actualPoints,
}),
contentType: 'application/json',
success: () => {
function intervalCallback() {
$.ajax({
url: `/dextr/check/${window.cvat.job.id}`,
type: 'GET',
success: (jobData) => {
if (['queued', 'started'].includes(jobData.status)) {
if (jobData.status === 'queued') {
dextrCancelButton.prop('disabled', false);
}
setTimeout(intervalCallback, 1000);
} else {
dextrOverlay.addClass('hidden');
if (jobData.status === 'finished') {
if (jobData.result) {
instance._controller.finish({ points: jobData.result }, 'polygon');
}
} else if (jobData.status === 'failed') {
const message = `Segmentation has failed. Error: '${jobData.stderr}'`;
showMessage(message);
} else {
let message = `Check segmentation request returned "${jobData.status}" status.`;
if (jobData.stderr) {
message += ` Error: ${jobData.stderr}`;
}
showMessage(message);
}
}
},
error: (errorData) => {
dextrOverlay.addClass('hidden');
const message = `Can not check segmentation. Code: ${errorData.status}.`
+ ` Message: ${errorData.responseText || errorData.statusText}`;
showMessage(message);
},
});
}
dextrCancelButton.prop('disabled', true);
dextrOverlay.removeClass('hidden');
setTimeout(intervalCallback, 1000);
},
error: (errorData) => {
const message = `Can not create a segmentation request. Code: ${errorData.status}.`
+ ` Message: ${errorData.responseText || errorData.statusText}`;
showMessage(message);
},
});
}
instance._controller.switchCreateMode(true);
}); // end of "drawdone" handler
}; // end of _create() method
return instance;
} // end of constructorDecorator()
constructorDecorator.prototype = OriginalClass.prototype;
constructorDecorator.prototype.constructor = constructorDecorator;
return constructorDecorator;
} // end of ShapeCreatorViewWrapper
// Apply patch for classes
ShapeCreatorModel = ShapeCreatorModelWrapper(ShapeCreatorModel);
ShapeCreatorView = ShapeCreatorViewWrapper(ShapeCreatorView);
});

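The removed UI plugin polls `/dextr/check/<jid>` once per second (via `setTimeout`) until the job leaves the `queued`/`started` states. The same loop can be sketched in Python; `fetch_status` is a hypothetical stand-in for the AJAX call, and the timeout cap is an assumption, not something the original code enforces:

```python
import time

def poll_until_done(fetch_status, interval=1.0, max_polls=30):
    """Poll a status callable until it reports a terminal state.

    Mirrors the intervalCallback() loop above: keep waiting while the job
    is "queued" or "started", return the first terminal status seen.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status not in ("queued", "started"):
            return status  # e.g. "finished", "failed", "unknown"
        time.sleep(interval)
    return "timeout"
```

As in the JavaScript version, a fixed 1-second interval is a reasonable choice here because a DEXTR inference typically completes within a few seconds.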
@@ -1,13 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.urls import path
from . import views
urlpatterns = [
path('create/<int:jid>', views.create),
path('cancel/<int:jid>', views.cancel),
path('check/<int:jid>', views.check),
path('enabled', views.enabled)
]

@@ -1,128 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.http import HttpResponse, HttpResponseBadRequest, JsonResponse
from cvat.apps.authentication.decorators import login_required
from rules.contrib.views import permission_required, objectgetter
from cvat.apps.engine.models import Job
from cvat.apps.engine.log import slogger
from cvat.apps.dextr_segmentation.dextr import DEXTR_HANDLER
import django_rq
import json
import rq
__RQ_QUEUE_NAME = "default"
__DEXTR_HANDLER = DEXTR_HANDLER()
def _dextr_thread(db_data, frame, points):
job = rq.get_current_job()
job.meta["result"] = __DEXTR_HANDLER.handle(db_data, frame, points)
job.save_meta()
@login_required
@permission_required(perm=["engine.job.change"],
fn=objectgetter(Job, "jid"), raise_exception=True)
def create(request, jid):
try:
data = json.loads(request.body.decode("utf-8"))
points = data["points"]
frame = int(data["frame"])
username = request.user.username
slogger.job[jid].info("create dextr request for the JOB: {} ".format(jid)
+ "by the USER: {} on the FRAME: {}".format(username, frame))
db_data = Job.objects.select_related("segment__task__data").get(id=jid).segment.task.data
queue = django_rq.get_queue(__RQ_QUEUE_NAME)
rq_id = "dextr.create/{}/{}".format(jid, username)
job = queue.fetch_job(rq_id)
if job is not None and (job.is_started or job.is_queued):
if "cancel" not in job.meta:
raise Exception("The segmentation process is already running for the " +
"JOB: {} and the USER: {}".format(jid, username))
else:
job.delete()
queue.enqueue_call(func=_dextr_thread,
args=(db_data, frame, points),
job_id=rq_id,
timeout=15,
ttl=30)
return HttpResponse()
except Exception as ex:
slogger.job[jid].error("can't create a dextr request for the job {}".format(jid), exc_info=True)
return HttpResponseBadRequest(str(ex))
@login_required
@permission_required(perm=["engine.job.change"],
fn=objectgetter(Job, "jid"), raise_exception=True)
def cancel(request, jid):
try:
username = request.user.username
slogger.job[jid].info("cancel dextr request for the JOB: {} ".format(jid)
+ "by the USER: {}".format(username))
queue = django_rq.get_queue(__RQ_QUEUE_NAME)
rq_id = "dextr.create/{}/{}".format(jid, username)
job = queue.fetch_job(rq_id)
if job is None or job.is_finished or job.is_failed:
raise Exception("Segmentation isn't running now")
elif "cancel" not in job.meta:
job.meta["cancel"] = True
job.save_meta()
return HttpResponse()
except Exception as ex:
slogger.job[jid].error("can't cancel a dextr request for the job {}".format(jid), exc_info=True)
return HttpResponseBadRequest(str(ex))
@login_required
@permission_required(perm=["engine.job.change"],
fn=objectgetter(Job, "jid"), raise_exception=True)
def check(request, jid):
try:
username = request.user.username
slogger.job[jid].info("check dextr request for the JOB: {} ".format(jid)
+ "by the USER: {}".format(username))
queue = django_rq.get_queue(__RQ_QUEUE_NAME)
rq_id = "dextr.create/{}/{}".format(jid, username)
job = queue.fetch_job(rq_id)
data = {}
if job is None:
data["status"] = "unknown"
else:
if "cancel" in job.meta:
data["status"] = "finished"
elif job.is_queued:
data["status"] = "queued"
elif job.is_started:
data["status"] = "started"
elif job.is_finished:
data["status"] = "finished"
data["result"] = job.meta["result"]
job.delete()
else:
data["status"] = "failed"
data["stderr"] = job.exc_info
job.delete()
return JsonResponse(data)
except Exception as ex:
slogger.job[jid].error("can't check a dextr request for the job {}".format(jid), exc_info=True)
return HttpResponseBadRequest(str(ex))
def enabled(request):
return HttpResponse()

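The `check` view above maps RQ job state to the JSON payload that the client polls for, treating a cancelled job (the `"cancel"` flag in `job.meta`) as finished. A hedged sketch of that mapping, with `job` reduced to a duck-typed object carrying only the attributes the view reads (this is an illustration, not CVAT's actual code):

```python
def job_status(job):
    """Translate an RQ-like job object into the check-view response payload.

    `job` may be None (unknown id) or any object exposing `meta`,
    `is_queued`, `is_started`, `is_finished`, and `exc_info`.
    """
    if job is None:
        return {"status": "unknown"}
    if "cancel" in job.meta:
        # A cancelled job is reported as finished, with no result.
        return {"status": "finished"}
    if job.is_queued:
        return {"status": "queued"}
    if job.is_started:
        return {"status": "started"}
    if job.is_finished:
        return {"status": "finished", "result": job.meta["result"]}
    return {"status": "failed", "stderr": job.exc_info}
```

Note that cancellation here is cooperative: the `cancel` view only sets a flag in `job.meta`, and it is the polling client that stops waiting once this mapping reports `"finished"`.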