DL models as serverless functions (#1767)

* Initial experiments with nuclio
* Update nuclio prototype
* Improve nuclio prototype for dextr.
* Dummy lambda manager
* OpenFaaS prototype (dextr.bin and dextr.xml are empty).
* Moved openfaas prototype.
* Add comments
* Add serializers and HLD for lambda_manager
* Initial version of Mask RCNN (without debugging)
* Initial version for faster_rcnn_inception_v2_coco
* Fix faster_rcnn_inception_v2_coco
* Implemented mask_rcnn_inception_resnet_v2_atrous_coco
* Implemented yolo detector as a lambda function
* Removed dextr app.
* Added types for each function (detector and interactor)
* Initial version of lambda_manager.
* Implement a couple of methods for lambda:
  GET /api/v1/lambda/functions
  GET /api/v1/lambda/functions/public.dextr
* First working version of dextr serverless function
* First version of dextr which works in UI.
* Modify omz.public.faster_rcnn_inception_v2_coco (image decoding; restart policy always for the function)
* Improve omz.public.mask_rcnn_inception_resnet_v2_atrous_coco
* Improve omz.public.yolo-v3-tf function
* Implemented the initial version of requests for lambda manager.
* First working version of POST /api/v1/lambda/requests
* Updated specification of function.yaml (added labels and used annotations section).
* Added health check for containers (nuclio dashboard feature)
* Read labels spec from function.yaml.
* Added settings for NUCLIO
* Fixed a couple of typos. Now it works in most cases.
* Remove Plugin REST API
* Remove tf_annotation app (it will be replaced by serverless function)
* Remove tf_annotation and cuda components
* Cleanup docs and Dockerfile from CUDA component.
* Just renamed directories inside serverless
* Remove redundant files and code
* Remove redundant files.
* Remove outdated files
* Remove outdated code
* Delete reid app and add draft of serverless function for reid.
* Model list in UI.
* Fixed the framework name (got it from lambda function).
* Add maxRequestBodySize for functions, remove redundant code from UI for auto_annotation.
* Update view of models page.
* Unblock mapping for "primary" models.
* Implement cleanup flag for lambda/requests and labeling mapping for functions.
* Implement protection from running multiple jobs for the same task.
* Fix invocation of functions in docker container.
* Fix Dockerfile.ci
* Remove unused files from lambda_manager
* Fix codacy warnings
* Fix codacy issues.
* Fix codacy warnings
* Implement progress and cancel (aka delete) operation.
* Send annotations in batch.
* Fix UI. Now it can retrieve information about inference requests in progress.
* Update CHANGELOG.md
* Update cvat-ui version.
* Update nuclio version.
* Implement serverless/tensorflow/faster_rcnn_inception_v2_coco
* Add information on how to install the nuclio platform and run serverless functions.
* Add installation instructions for serverless functions.
* Update OpenVINO files which are responsible for loading the network
* Relocated functions
* Update dextr function.
* Update faster_rcnn function from omz
* Fix OpenVINO Mask-RCNN
* Fix YOLO v3 serverless function.
* Dummy serverless functions for a couple more OpenVINO models.
* Protected lambda manager views by correct permissions.
* Fix name of Faster RCNN from Tensorflow.
* Implement Mask RCNN via Tensorflow serverless function.
* Minor client changes (#1847)
* Minor client changes
* Removed extra code
* Add reid serverless function (no support in lambda manager).
* Fix contribution guide.
* Fix person-reidentification-retail-300 and implement text-detection-0004
* Add semantic-segmentation-adas-0001
* Moving model management to cvat-core (#1905)
* Squashed changes
* Removed extra line
* Remove duplicated files for OpenVINO serverless functions.
* Updated CHANGELOG.md
* Remove outdated code.
* Running dextr via lambda manager (#1912)
* Deleted outdated migration.
* Add name for DEXTR function.
* Fix restart policy for serverless functions.
* Fix openvino serverless functions for images with alpha channel
* Add more tensorflow serverless functions into deploy.sh
* Use ID instead of name for DEXTR (#1926)
* Update DEXTR function
* Added source "auto" inside lambda manager for automatic annotation.
* Customize payload (depends on type of lambda function).
* First working version of REID (Server only).
* Fix codacy warnings
* Avoid exception during migration (workaround):
  File "/usr/local/lib/python3.5/dist-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/usr/local/lib/python3.5/dist-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  django.db.utils.ProgrammingError: table "engine_pluginoption" does not exist
* Add siammask serverless function (it doesn't work, need to serialize state)
* Run ReID from UI (#1949)
* Removed reid route in installation.md
* Fix a command to get lena image in CONTRIBUTION guide.
* Fix typo and crash in case a polygon is a line.

Co-authored-by: Boris Sekachev <40690378+bsekachev@users.noreply.github.com>
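The commits above introduce a client-side lambda manager whose `run()` call assembles the body for `POST /api/v1/lambda/requests`. A minimal sketch of that payload assembly (field names `task` and `function` come from the client code in this PR; the IDs and the `cleanup` flag below are illustrative, and this non-mutating variant uses a spread instead of modifying the caller's object):

```javascript
// Build the POST /api/v1/lambda/requests body: caller-provided args
// plus the task identifier and the serverless function identifier.
function buildLambdaRequestBody(taskId, functionId, args = {}) {
    return { ...args, task: taskId, function: functionId };
}
```

For example, `buildLambdaRequestBody(7, 'openvino.dextr', { cleanup: true })` yields an object containing the original flag plus the `task` and `function` fields.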
parent 3f6d2e9e4f
commit e7585b8ce9
@@ -1,38 +0,0 @@
## [Keras+Tensorflow Mask R-CNN Segmentation](https://github.com/matterport/Mask_RCNN)

### What is it?

- This application allows you to automatically segment a wide variety of objects in images.
- It is based on a Feature Pyramid Network (FPN) with a ResNet101 backbone.
- It uses a model pre-trained on the MS COCO dataset.
- It supports the following classes (use them in the "labels" field):
```python
'BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush'.
```
- The component adds a "Run Auto Segmentation" button to the dashboard.

### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/auto_segmentation/docker-compose.auto_segmentation.yml build
```

### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/auto_segmentation/docker-compose.auto_segmentation.yml up -d
```
@@ -1,13 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"

services:
  cvat:
    build:
      context: .
      args:
        AUTO_SEGMENTATION: "yes"
@@ -1,13 +0,0 @@
#!/bin/bash
#
set -e

MASK_RCNN_URL=https://github.com/matterport/Mask_RCNN

cd ${HOME} && \
git clone ${MASK_RCNN_URL}.git && \
curl -L ${MASK_RCNN_URL}/releases/download/v2.0/mask_rcnn_coco.h5 -o Mask_RCNN/mask_rcnn_coco.h5

# TODO remove useless files
# tensorflow and Keras are installed globally
@@ -1,41 +0,0 @@
## [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)

### Requirements

* NVIDIA GPU with a compute capability of 3.0 - 7.2
* Latest GPU driver

### Installation

#### Install the latest driver for your graphics card

```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-cache search nvidia-* # find the latest nvidia driver
sudo apt-get --no-install-recommends install nvidia-* # install the nvidia driver
sudo apt-get --no-install-recommends install mesa-common-dev
sudo apt-get --no-install-recommends install freeglut3-dev
sudo apt-get --no-install-recommends install nvidia-modprobe
```

#### Reboot your PC and verify the installation with the `nvidia-smi` command

#### Install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker)

Please make sure the installation was successful:
```bash
docker info | grep 'Runtimes' # output should contain 'nvidia'
```

### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml build
```

### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml up -d
```
@@ -1,23 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"

services:
  cvat:
    build:
      context: .
      args:
        CUDA_SUPPORT: "yes"
    runtime: "nvidia"
    environment:
      NVIDIA_VISIBLE_DEVICES: all
      NVIDIA_DRIVER_CAPABILITIES: compute,utility
      # This environment variable is used by the Nvidia Container Runtime.
      # The Nvidia Container Runtime parses it as:
      #   space: logical OR
      #   comma: logical AND
      # https://gitlab.com/nvidia/container-images/cuda/issues/31#note_149432780
      NVIDIA_REQUIRE_CUDA: "cuda>=10.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411"
@@ -1,38 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e

NVIDIA_GPGKEY_SUM=d1be581509378368edeec8c1eb2958702feedf3bc3d17011adbf24efacce4ab5 && \
NVIDIA_GPGKEY_FPR=ae09fe4bbd223a84b2ccfce3f60f4b3d7fa2af80 && \
apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub && \
apt-key adv --export --no-emit-version -a $NVIDIA_GPGKEY_FPR | tail -n +5 > cudasign.pub && \
echo "$NVIDIA_GPGKEY_SUM cudasign.pub" | sha256sum -c --strict - && rm cudasign.pub && \
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list

CUDA_VERSION=10.0.130
NCCL_VERSION=2.5.6
CUDNN_VERSION=7.6.5.32
CUDA_PKG_VERSION="10-0=$CUDA_VERSION-1"
echo 'export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}' >> ${HOME}/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}' >> ${HOME}/.bashrc

apt-get update && apt-get install -y --no-install-recommends --allow-unauthenticated \
    cuda-cudart-$CUDA_PKG_VERSION \
    cuda-compat-10-0 \
    cuda-libraries-$CUDA_PKG_VERSION \
    cuda-nvtx-$CUDA_PKG_VERSION \
    libnccl2=$NCCL_VERSION-1+cuda10.0 \
    libcudnn7=$CUDNN_VERSION-1+cuda10.0 && \
ln -s cuda-10.0 /usr/local/cuda && \
apt-mark hold libnccl2 libcudnn7 && \
rm -rf /var/lib/apt/lists/* \
    /etc/apt/sources.list.d/nvidia-ml.list /etc/apt/sources.list.d/cuda.list

python3 -m pip uninstall -y tensorflow
python3 -m pip install --no-cache-dir tensorflow-gpu==1.15.2
@@ -1,45 +0,0 @@
## [Intel OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit)

### Requirements

* Intel Core CPUs of the 6th generation and higher, or Intel Xeon CPUs.

### Preparation

- Download the latest [OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit) .tgz installer
  (offline or online) for Ubuntu platforms. Note that OpenVINO does not maintain forward compatibility between
  Intermediate Representations (IRs), so the version of OpenVINO in CVAT and the version used to convert the
  models need to be the same.
- Put the downloaded file into `cvat/components/openvino`.
- Accept the EULA in the `cvat/components/openvino/eula.cfg` file.

### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml build
```

### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml up -d
```

You should now be able to log in and see the web interface for CVAT, complete with the new "Model Manager" button.

### OpenVINO Models

Clone the [Open Model Zoo](https://github.com/opencv/open_model_zoo): `$ git clone https://github.com/opencv/open_model_zoo.git`

Install the appropriate libraries. Currently that command is `$ pip install -r open_model_zoo/tools/downloader/requirements.in`

Download the models using the `downloader.py` file in `open_model_zoo/tools/downloader/`.
The `--name` option can be used to specify particular models.
The `--print_all` option prints all the available models.
Models that are already integrated into CVAT can be found [here](https://github.com/opencv/cvat/tree/develop/utils/open_model_zoo).

From the web user interface in CVAT, upload the models using the model manager.
You'll need to include the xml and bin files from the model downloader.
You'll also need to provide the Python and JSON files, either written from scratch or taken from the CVAT library.
See [here](https://github.com/opencv/cvat/tree/develop/cvat/apps/auto_annotation) for instructions on creating custom
Python and JSON files.
@@ -1,13 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"

services:
  cvat:
    build:
      context: .
      args:
        OPENVINO_TOOLKIT: "yes"
@@ -1,3 +0,0 @@
# Accept the actual EULA from the openvino installation archive. Valid values are: {accept, decline}
ACCEPT_EULA=accept
@@ -1,41 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e

if [[ `lscpu | grep -o "GenuineIntel"` != "GenuineIntel" ]]; then
    echo "OpenVINO supports only Intel CPUs"
    exit 1
fi

if [[ `lscpu | grep -o "sse4" | head -1` != "sse4" ]] && [[ `lscpu | grep -o "avx2" | head -1` != "avx2" ]]; then
    echo "OpenVINO expects your CPU to support SSE4 or AVX2 instructions"
    exit 1
fi

cd /tmp/components/openvino

tar -xzf `ls | grep "openvino_toolkit"`
cd `ls -d */ | grep "openvino_toolkit"`

apt-get update && apt-get --no-install-recommends install -y sudo cpio && \
if [ -f "install_cv_sdk_dependencies.sh" ]; then ./install_cv_sdk_dependencies.sh; \
else ./install_openvino_dependencies.sh; fi && SUDO_FORCE_REMOVE=yes apt-get remove -y sudo

cat ../eula.cfg >> silent.cfg
./install.sh -s silent.cfg

cd /tmp/components && rm openvino -r

if [ -f "/opt/intel/computer_vision_sdk/bin/setupvars.sh" ]; then
    echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ${HOME}/.bashrc;
    echo -e '\nexport IE_PLUGINS_PATH=${IE_PLUGINS_PATH}' >> /opt/intel/computer_vision_sdk/bin/setupvars.sh;
else
    echo "source /opt/intel/openvino/bin/setupvars.sh" >> ${HOME}/.bashrc;
    echo -e '\nexport IE_PLUGINS_PATH=${IE_PLUGINS_PATH}' >> /opt/intel/openvino/bin/setupvars.sh;
fi
@@ -1,41 +0,0 @@
## [Tensorflow Object Detector](https://github.com/tensorflow/models/tree/master/research/object_detection)

### What is it?
* This application allows you to automatically annotate a wide variety of objects in images.
* It uses the [Faster RCNN Inception Resnet v2 Atrous Coco Model](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz) from the [tensorflow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md)
* It can work on CPU (with Tensorflow or OpenVINO) or GPU (with Tensorflow GPU).
* It supports the following classes (just specify them in the "labels" field):
```
'surfboard', 'car', 'skateboard', 'boat', 'clock',
'cat', 'cow', 'knife', 'apple', 'cup', 'tv',
'baseball_bat', 'book', 'suitcase', 'tennis_racket',
'stop_sign', 'couch', 'cell_phone', 'keyboard',
'cake', 'tie', 'frisbee', 'truck', 'fire_hydrant',
'snowboard', 'bed', 'vase', 'teddy_bear',
'toaster', 'wine_glass', 'traffic_light',
'broccoli', 'backpack', 'carrot', 'potted_plant',
'donut', 'umbrella', 'parking_meter', 'bottle',
'sandwich', 'motorcycle', 'bear', 'banana',
'person', 'scissors', 'elephant', 'dining_table',
'toothbrush', 'toilet', 'skis', 'bowl', 'sheep',
'refrigerator', 'oven', 'microwave', 'train',
'orange', 'mouse', 'laptop', 'bench', 'bicycle',
'fork', 'kite', 'zebra', 'baseball_glove', 'bus',
'spoon', 'horse', 'handbag', 'pizza', 'sports_ball',
'airplane', 'hair_drier', 'hot_dog', 'remote',
'sink', 'dog', 'bird', 'giraffe', 'chair'.
```
* The component adds a "Run TF Annotation" button to the dashboard.

### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml build
```

### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml up -d
```
@@ -1,13 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"

services:
  cvat:
    build:
      context: .
      args:
        TF_ANNOTATION: "yes"
@@ -1,15 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e

cd ${HOME} && \
curl http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz -o model.tar.gz && \
tar -xzf model.tar.gz && rm model.tar.gz && \
mv faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28 ${HOME}/rcnn && cd ${HOME} && \
mv rcnn/frozen_inference_graph.pb rcnn/inference_graph.pb

# tensorflow is installed globally
@@ -0,0 +1,126 @@
/*
* Copyright (C) 2020 Intel Corporation
* SPDX-License-Identifier: MIT
*/

/* global
    require:false
*/

const serverProxy = require('./server-proxy');
const { ArgumentError } = require('./exceptions');
const { Task } = require('./session');
const MLModel = require('./ml-model');
const { RQStatus } = require('./enums');

class LambdaManager {
    constructor() {
        this.listening = {};
        this.cachedList = null;
    }

    async list() {
        if (Array.isArray(this.cachedList)) {
            return [...this.cachedList];
        }

        const result = await serverProxy.lambda.list();
        const models = [];

        for (const model of result) {
            models.push(new MLModel({
                id: model.id,
                name: model.name,
                description: model.description,
                framework: model.framework,
                labels: [...model.labels],
                type: model.kind,
            }));
        }

        this.cachedList = models;
        return models;
    }

    async run(task, model, args) {
        if (!(task instanceof Task)) {
            throw new ArgumentError(
                `Argument task is expected to be an instance of Task class, but got ${typeof (task)}`,
            );
        }

        if (!(model instanceof MLModel)) {
            throw new ArgumentError(
                `Argument model is expected to be an instance of MLModel class, but got ${typeof (model)}`,
            );
        }

        if (args && typeof (args) !== 'object') {
            throw new ArgumentError(
                `Argument args is expected to be an object, but got ${typeof (args)}`,
            );
        }

        const body = args;
        body.task = task.id;
        body.function = model.id;

        const result = await serverProxy.lambda.run(body);
        return result.id;
    }

    async call(task, model, args) {
        const body = args;
        body.task = task.id;
        const result = await serverProxy.lambda.call(model.id, body);
        return result;
    }

    async requests() {
        const result = await serverProxy.lambda.requests();
        return result.filter((request) => ['queued', 'started'].includes(request.status));
    }

    async cancel(requestID) {
        if (typeof (requestID) !== 'string') {
            throw new ArgumentError(`Request id argument is required to be a string. But got ${requestID}`);
        }

        if (this.listening[requestID]) {
            clearTimeout(this.listening[requestID].timeout);
            delete this.listening[requestID];
        }
        await serverProxy.lambda.cancel(requestID);
    }

    async listen(requestID, onUpdate) {
        const timeoutCallback = async () => {
            try {
                this.listening[requestID].timeout = null;
                const response = await serverProxy.lambda.status(requestID);

                if (response.status === RQStatus.QUEUED || response.status === RQStatus.STARTED) {
                    onUpdate(response.status, response.progress || 0);
                    this.listening[requestID].timeout = setTimeout(timeoutCallback, 2000);
                } else {
                    if (response.status === RQStatus.FINISHED) {
                        onUpdate(response.status, response.progress || 100);
                    } else {
                        onUpdate(response.status, response.progress || 0, response.exc_info || '');
                    }

                    delete this.listening[requestID];
                }
            } catch (error) {
                onUpdate(RQStatus.UNKNOWN, 0, `Could not get a status of the request ${requestID}. ${error.toString()}`);
            }
        };

        this.listening[requestID] = {
            onUpdate,
            timeout: setTimeout(timeoutCallback, 2000),
        };
    }
}

module.exports = new LambdaManager();
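The `listen` method above polls the request status endpoint and reschedules itself while the request stays queued or started. A standalone sketch of that self-rescheduling poll pattern (names are illustrative, not the CVAT API; the scheduler is injectable so the loop can be driven without real timers — in production it would be `(fn) => setTimeout(fn, 2000)`):

```javascript
// Poll getStatus() via an injectable scheduler until a terminal state is reached,
// reporting each observed status through onUpdate. Returns a cancel handle.
function createPoller(getStatus, onUpdate, schedule) {
    const handle = { cancelled: false };
    const tick = () => {
        if (handle.cancelled) return;
        const { status, progress } = getStatus();
        onUpdate(status, progress);
        // 'queued' and 'started' are the non-terminal states, as in requests() above.
        if (status === 'queued' || status === 'started') {
            schedule(tick);
        }
    };
    schedule(tick);
    return handle;
}
```

Setting `handle.cancelled = true` plays the role of `clearTimeout` in the real manager's `cancel` path.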
@@ -0,0 +1,73 @@
/*
* Copyright (C) 2019-2020 Intel Corporation
* SPDX-License-Identifier: MIT
*/

/**
 * Class representing a machine learning model
 * @memberof module:API.cvat.classes
 */
class MLModel {
    constructor(data) {
        this._id = data.id;
        this._name = data.name;
        this._labels = data.labels;
        this._framework = data.framework;
        this._description = data.description;
        this._type = data.type;
    }

    /**
     * @returns {string}
     * @readonly
     */
    get id() {
        return this._id;
    }

    /**
     * @returns {string}
     * @readonly
     */
    get name() {
        return this._name;
    }

    /**
     * @returns {string[]}
     * @readonly
     */
    get labels() {
        if (Array.isArray(this._labels)) {
            return [...this._labels];
        }

        return [];
    }

    /**
     * @returns {string}
     * @readonly
     */
    get framework() {
        return this._framework;
    }

    /**
     * @returns {string}
     * @readonly
     */
    get description() {
        return this._description;
    }

    /**
     * @returns {module:API.cvat.enums.ModelType}
     * @readonly
     */
    get type() {
        return this._type;
    }
}

module.exports = MLModel;
@@ -1,229 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import ReactDOM from 'react-dom';
import React, { useState, useEffect } from 'react';
import { Row, Col } from 'antd/lib/grid';
import Modal from 'antd/lib/modal';
import Menu from 'antd/lib/menu';
import Text from 'antd/lib/typography/Text';
import InputNumber from 'antd/lib/input-number';
import Tooltip from 'antd/lib/tooltip';

import { clamp } from 'utils/math';
import { run, cancel } from 'utils/reid-utils';
import { connect } from 'react-redux';
import { CombinedState } from 'reducers/interfaces';
import { fetchAnnotationsAsync } from 'actions/annotation-actions';

interface InputModalProps {
    visible: boolean;
    onCancel(): void;
    onSubmit(threshold: number, distance: number): void;
}

function InputModal(props: InputModalProps): JSX.Element {
    const { visible, onCancel, onSubmit } = props;
    const [threshold, setThreshold] = useState(0.5);
    const [distance, setDistance] = useState(50);

    const [thresholdMin, thresholdMax] = [0.05, 0.95];
    const [distanceMin, distanceMax] = [1, 1000];
    return (
        <Modal
            closable={false}
            width={300}
            visible={visible}
            onCancel={onCancel}
            onOk={() => onSubmit(threshold, distance)}
            okText='Merge'
        >
            <Row type='flex'>
                <Col span={10}>
                    <Tooltip title='Similarity of objects on neighbour frames is calculated using AI model' mouseLeaveDelay={0}>
                        <Text>Similarity threshold: </Text>
                    </Tooltip>
                </Col>
                <Col span={12}>
                    <InputNumber
                        style={{ width: '100%' }}
                        min={thresholdMin}
                        max={thresholdMax}
                        step={0.05}
                        value={threshold}
                        onChange={(value: number | undefined) => {
                            if (typeof (value) === 'number') {
                                setThreshold(clamp(value, thresholdMin, thresholdMax));
                            }
                        }}
                    />
                </Col>
            </Row>
            <Row type='flex'>
                <Col span={10}>
                    <Tooltip title='The value defines max distance to merge (between centers of two objects on neighbour frames)' mouseLeaveDelay={0}>
                        <Text>Max pixel distance: </Text>
                    </Tooltip>
                </Col>
                <Col span={12}>
                    <InputNumber
                        style={{ width: '100%' }}
                        min={distanceMin}
                        max={distanceMax}
                        step={5}
                        value={distance}
                        onChange={(value: number | undefined) => {
                            if (typeof (value) === 'number') {
                                setDistance(clamp(value, distanceMin, distanceMax));
                            }
                        }}
                    />
                </Col>
            </Row>
        </Modal>
    );
}

interface InProgressDialogProps {
    visible: boolean;
    progress: number;
    onCancel(): void;
}

function InProgressDialog(props: InProgressDialogProps): JSX.Element {
    const { visible, onCancel, progress } = props;
    return (
        <Modal
            closable={false}
            width={300}
            visible={visible}
            okText='Cancel'
            okButtonProps={{
                type: 'danger',
            }}
            onOk={onCancel}
            cancelButtonProps={{
                style: {
                    display: 'none',
                },
            }}
        >
            <Text>{`Merging is in progress ${progress}%`}</Text>
        </Modal>
    );
}

const reidContainer = window.document.createElement('div');
reidContainer.setAttribute('id', 'cvat-reid-wrapper');
window.document.body.appendChild(reidContainer);

interface StateToProps {
    jobInstance: any | null;
}

interface DispatchToProps {
    updateAnnotations(): void;
}

function mapStateToProps(state: CombinedState): StateToProps {
    const {
        annotation: {
            job: {
                instance: jobInstance,
            },
        },
    } = state;

    return {
        jobInstance,
    };
}

function mapDispatchToProps(dispatch: any): DispatchToProps {
    return {
        updateAnnotations(): void {
            dispatch(fetchAnnotationsAsync());
        },
    };
}

function ReIDPlugin(props: StateToProps & DispatchToProps): JSX.Element {
    const { jobInstance, updateAnnotations, ...rest } = props;
    const [showInputDialog, setShowInputDialog] = useState(false);
    const [showInProgressDialog, setShowInProgressDialog] = useState(false);
    const [progress, setProgress] = useState(0);

    useEffect(() => {
        ReactDOM.render((
            <>
                <InProgressDialog
                    visible={showInProgressDialog}
                    progress={progress}
                    onCancel={() => {
                        cancel(jobInstance.id);
                    }}
                />
                <InputModal
                    visible={showInputDialog}
                    onCancel={() => setShowInputDialog(false)}
                    onSubmit={async (threshold: number, distance: number) => {
                        setProgress(0);
                        setShowInputDialog(false);
                        setShowInProgressDialog(true);

                        const onUpdatePercentage = (percent: number): void => {
                            setProgress(percent);
                        };

                        try {
                            const annotations = await jobInstance.annotations.export();
                            const merged = await run({
                                threshold,
                                distance,
                                onUpdatePercentage,
                                jobID: jobInstance.id,
                                annotations,
                            });
                            await jobInstance.annotations.clear();
                            updateAnnotations(); // one more call so as not to confuse the canvas
                            // False positive of no-unsanitized/method
                            // eslint-disable-next-line no-unsanitized/method
                            await jobInstance.annotations.import(merged);
                            updateAnnotations();
                        } catch (error) {
                            Modal.error({
                                title: 'Could not merge annotations',
                                content: error.toString(),
                            });
                        } finally {
                            setShowInProgressDialog(false);
}
|
|
||||||
}}
|
|
||||||
/>
|
|
||||||
</>
|
|
||||||
), reidContainer);
|
|
||||||
});
|
|
||||||
|
|
||||||
return (
|
|
||||||
<Menu.Item
|
|
||||||
{...rest}
|
|
||||||
key='run_reid'
|
|
||||||
title='Run algorithm that merges separated bounding boxes automatically'
|
|
||||||
onClick={() => {
|
|
||||||
if (jobInstance) {
|
|
||||||
setShowInputDialog(true);
|
|
||||||
}
|
|
||||||
}}
|
|
||||||
>
|
|
||||||
Run ReID merge
|
|
||||||
</Menu.Item>
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
export default connect(
|
|
||||||
mapStateToProps,
|
|
||||||
mapDispatchToProps,
|
|
||||||
)(ReIDPlugin);
|
|
||||||
@@ -1,160 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Icon from 'antd/lib/icon';
import Alert from 'antd/lib/alert';
import Button from 'antd/lib/button';
import Tooltip from 'antd/lib/tooltip';
import message from 'antd/lib/message';
import notification from 'antd/lib/notification';
import Text from 'antd/lib/typography/Text';

import consts from 'consts';
import ConnectedFileManager, {
    FileManagerContainer,
} from 'containers/file-manager/file-manager';
import { ModelFiles } from 'reducers/interfaces';

import WrappedCreateModelForm, {
    CreateModelForm,
} from './create-model-form';

interface Props {
    createModel(name: string, files: ModelFiles, global: boolean): void;
    isAdmin: boolean;
    modelCreatingStatus: string;
}

export default class CreateModelContent extends React.PureComponent<Props> {
    private modelForm: CreateModelForm;

    private fileManagerContainer: FileManagerContainer;

    public constructor(props: Props) {
        super(props);
        this.modelForm = null as any as CreateModelForm;
        this.fileManagerContainer = null as any as FileManagerContainer;
    }

    public componentDidUpdate(prevProps: Props): void {
        const { modelCreatingStatus } = this.props;

        if (prevProps.modelCreatingStatus !== 'CREATED'
            && modelCreatingStatus === 'CREATED') {
            message.success('The model has been uploaded');
            this.modelForm.resetFields();
            this.fileManagerContainer.reset();
        }
    }

    private handleSubmitClick = (): void => {
        const { createModel } = this.props;
        this.modelForm.submit()
            .then((data) => {
                const {
                    local,
                    share,
                } = this.fileManagerContainer.getFiles();

                const files = local.length ? local : share;
                const groupedFiles: ModelFiles = {
                    xml: '',
                    bin: '',
                    py: '',
                    json: '',
                };

                (files as any).reduce((acc: ModelFiles, value: File | string): ModelFiles => {
                    const name = typeof value === 'string' ? value : value.name;
                    const [extension] = name.split('.').reverse();
                    if (extension in acc) {
                        acc[extension] = value;
                    }

                    return acc;
                }, groupedFiles);

                if (Object.keys(groupedFiles)
                    .map((key: string) => groupedFiles[key])
                    .filter((val) => !!val).length !== 4) {
                    notification.error({
                        message: 'Could not upload a model',
                        description: 'Please specify the correct files',
                    });
                } else {
                    createModel(data.name, groupedFiles, data.global);
                }
            }).catch(() => {
                notification.error({
                    message: 'Could not upload a model',
                    description: 'Please check the input fields',
                });
            });
    };

    public render(): JSX.Element {
        const {
            modelCreatingStatus,
        } = this.props;
        const loading = !!modelCreatingStatus
            && modelCreatingStatus !== 'CREATED';
        const status = modelCreatingStatus
            && modelCreatingStatus !== 'CREATED' ? modelCreatingStatus : '';

        const { AUTO_ANNOTATION_GUIDE_URL } = consts;
        return (
            <Row type='flex' justify='start' align='middle' className='cvat-create-model-content'>
                <Col span={24}>
                    <Tooltip title='Click to open the guide' mouseLeaveDelay={0}>
                        <Icon
                            onClick={(): void => {
                                // false positive
                                // eslint-disable-next-line
                                window.open(AUTO_ANNOTATION_GUIDE_URL, '_blank');
                            }}
                            type='question-circle'
                        />
                    </Tooltip>
                </Col>
                <Col span={24}>
                    <WrappedCreateModelForm
                        wrappedComponentRef={
                            (ref: CreateModelForm): void => {
                                this.modelForm = ref;
                            }
                        }
                    />
                </Col>
                <Col span={24}>
                    <Text type='danger'>* </Text>
                    <Text className='cvat-text-color'>Select files:</Text>
                </Col>
                <Col span={24}>
                    <ConnectedFileManager
                        ref={
                            (container: FileManagerContainer): void => {
                                this.fileManagerContainer = container;
                            }
                        }
                        withRemote={false}
                    />
                </Col>
                <Col span={18}>
                    {status && <Alert message={`${status}`} />}
                </Col>
                <Col span={6}>
                    <Button
                        type='primary'
                        disabled={loading}
                        loading={loading}
                        onClick={this.handleSubmitClick}
                    >
                        Submit
                    </Button>
                </Col>
            </Row>
        );
    }
}
@@ -1,80 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Form, { FormComponentProps } from 'antd/lib/form/Form';
import Input from 'antd/lib/input';
import Tooltip from 'antd/lib/tooltip';
import Checkbox from 'antd/lib/checkbox';
import Text from 'antd/lib/typography/Text';

type Props = FormComponentProps;

export class CreateModelForm extends React.PureComponent<Props> {
    public submit(): Promise<{name: string; global: boolean}> {
        const { form } = this.props;
        return new Promise((resolve, reject) => {
            form.validateFields((errors, values): void => {
                if (!errors) {
                    resolve({
                        name: values.name,
                        global: values.global,
                    });
                } else {
                    reject(errors);
                }
            });
        });
    }

    public resetFields(): void {
        const { form } = this.props;
        form.resetFields();
    }

    public render(): JSX.Element {
        const { form } = this.props;
        const { getFieldDecorator } = form;

        return (
            <Form onSubmit={(e: React.FormEvent): void => e.preventDefault()}>
                <Row>
                    <Col span={24}>
                        <Text type='danger'>* </Text>
                        <Text className='cvat-text-color'>Name:</Text>
                    </Col>
                    <Col span={14}>
                        <Form.Item hasFeedback>
                            { getFieldDecorator('name', {
                                rules: [{
                                    required: true,
                                    message: 'Please specify a model name',
                                }],
                            })(<Input placeholder='Model name' />)}
                        </Form.Item>
                    </Col>
                    <Col span={8} offset={2}>
                        <Form.Item>
                            <Tooltip title='Will this model be available for everyone?' mouseLeaveDelay={0}>
                                { getFieldDecorator('global', {
                                    initialValue: false,
                                    valuePropName: 'checked',
                                })(
                                    <Checkbox>
                                        <Text className='cvat-text-color'>
                                            Load globally
                                        </Text>
                                    </Checkbox>,
                                )}
                            </Tooltip>
                        </Form.Item>
                    </Col>
                </Row>
            </Form>
        );
    }
}

export default Form.create()(CreateModelForm);
@@ -1,38 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import './styles.scss';
import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Text from 'antd/lib/typography/Text';

import { ModelFiles } from 'reducers/interfaces';
import CreateModelContent from './create-model-content';

interface Props {
    createModel(name: string, files: ModelFiles, global: boolean): void;
    isAdmin: boolean;
    modelCreatingStatus: string;
}

export default function CreateModelPageComponent(props: Props): JSX.Element {
    const {
        isAdmin,
        modelCreatingStatus,
        createModel,
    } = props;

    return (
        <Row type='flex' justify='center' align='top' className='cvat-create-model-form-wrapper'>
            <Col md={20} lg={16} xl={14} xxl={9}>
                <Text className='cvat-title'>Upload a new model</Text>
                <CreateModelContent
                    isAdmin={isAdmin}
                    modelCreatingStatus={modelCreatingStatus}
                    createModel={createModel}
                />
            </Col>
        </Row>
    );
}
@@ -1,43 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

@import '../../base.scss';

.cvat-create-model-form-wrapper {
    text-align: center;
    margin-top: 40px;
    overflow-y: auto;
    height: 90%;

    > div > span {
        font-size: 36px;
    }

    .cvat-create-model-content {
        margin-top: 20px;
        width: 100%;
        height: auto;
        border: 1px solid $border-color-1;
        border-radius: 3px;
        padding: 20px;
        background: $background-color-1;
        text-align: initial;

        > div:nth-child(1) > i {
            float: right;
            font-size: 20px;
            color: $danger-icon-color;
        }

        > div:nth-child(4) {
            margin-top: 10px;
        }

        > div:nth-child(6) > button {
            margin-top: 10px;
            float: right;
            width: 120px;
        }
    }
}
@@ -0,0 +1,55 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Tag from 'antd/lib/tag';
import Select from 'antd/lib/select';
import Text from 'antd/lib/typography/Text';
import { Model } from 'reducers/interfaces';

interface Props {
    model: Model;
}

export default function DeployedModelItem(props: Props): JSX.Element {
    const { model } = props;

    return (
        <Row className='cvat-models-list-item' type='flex'>
            <Col span={3}>
                <Tag color='purple'>{model.framework}</Tag>
            </Col>
            <Col span={3}>
                <Text className='cvat-text-color'>
                    {model.name}
                </Text>
            </Col>
            <Col span={3}>
                <Tag color='orange'>
                    {model.type}
                </Tag>
            </Col>
            <Col span={10}>
                <Text style={{ whiteSpace: 'normal', height: 'auto' }}>{model.description}</Text>
            </Col>
            <Col span={5}>
                <Select
                    showSearch
                    placeholder='Supported labels'
                    style={{ width: '90%' }}
                    value='Supported labels'
                >
                    {model.labels.map(
                        (label): JSX.Element => (
                            <Select.Option key={label}>
                                {label}
                            </Select.Option>
                        ),
                    )}
                </Select>
            </Col>
        </Row>
    );
}
@@ -1,89 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Tag from 'antd/lib/tag';
import Select from 'antd/lib/select';
import Icon from 'antd/lib/icon';
import Menu from 'antd/lib/menu';
import Dropdown from 'antd/lib/dropdown';
import Text from 'antd/lib/typography/Text';
import moment from 'moment';

import { MenuIcon } from 'icons';
import { Model } from 'reducers/interfaces';

interface Props {
    model: Model;
    owner: any;
    onDelete(): void;
}

export default function UploadedModelItem(props: Props): JSX.Element {
    const {
        model,
        owner,
        onDelete,
    } = props;

    return (
        <Row className='cvat-models-list-item' type='flex'>
            <Col span={4} xxl={3}>
                <Tag color='purple'>OpenVINO</Tag>
            </Col>
            <Col span={5} xxl={7}>
                <Text className='cvat-text-color'>
                    {model.name}
                </Text>
            </Col>
            <Col span={3}>
                <Text className='cvat-text-color'>
                    {owner ? owner.username : 'undefined'}
                </Text>
            </Col>
            <Col span={4}>
                <Text className='cvat-text-color'>
                    {moment(model.uploadDate).format('MMMM Do YYYY')}
                </Text>
            </Col>
            <Col span={5}>
                <Select
                    showSearch
                    placeholder='Supported labels'
                    style={{ width: '90%' }}
                    value='Supported labels'
                >
                    {model.labels.map(
                        (label): JSX.Element => (
                            <Select.Option key={label}>
                                {label}
                            </Select.Option>
                        ),
                    )}
                </Select>
            </Col>
            <Col span={3} xxl={2}>
                <Text className='cvat-text-color'>Actions</Text>
                <Dropdown overlay={
                    (
                        <Menu className='cvat-task-item-menu'>
                            <Menu.Item
                                onClick={(): void => {
                                    onDelete();
                                }}
                                key='delete'
                            >
                                Delete
                            </Menu.Item>
                        </Menu>
                    )
                }
                >
                    <Icon className='cvat-menu-icon' component={MenuIcon} />
                </Dropdown>
            </Col>
        </Row>
    );
}
@@ -1,70 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import React from 'react';
import { Row, Col } from 'antd/lib/grid';
import Text from 'antd/lib/typography/Text';

import { Model } from 'reducers/interfaces';
import UploadedModelItem from './uploaded-model-item';

interface Props {
    registeredUsers: any[];
    models: Model[];
    deleteModel(id: number): void;
}

export default function UploadedModelsListComponent(props: Props): JSX.Element {
    const {
        models,
        registeredUsers,
        deleteModel,
    } = props;

    const items = models.map((model): JSX.Element => {
        const owner = registeredUsers.filter((user) => user.id === model.ownerID)[0];
        return (
            <UploadedModelItem
                key={model.id as number}
                owner={owner}
                model={model}
                onDelete={(): void => deleteModel(model.id as number)}
            />
        );
    });

    return (
        <>
            <Row type='flex' justify='center' align='middle'>
                <Col md={22} lg={18} xl={16} xxl={14}>
                    <Text className='cvat-text-color' strong>Uploaded by a user</Text>
                </Col>
            </Row>
            <Row type='flex' justify='center' align='middle'>
                <Col md={22} lg={18} xl={16} xxl={14} className='cvat-models-list'>
                    <Row type='flex' align='middle' style={{ padding: '10px' }}>
                        <Col span={4} xxl={3}>
                            <Text strong>Framework</Text>
                        </Col>
                        <Col span={5} xxl={7}>
                            <Text strong>Name</Text>
                        </Col>
                        <Col span={3}>
                            <Text strong>Owner</Text>
                        </Col>
                        <Col span={4}>
                            <Text strong>Uploaded</Text>
                        </Col>
                        <Col span={5}>
                            <Text strong>Labels</Text>
                        </Col>
                        <Col span={3} xxl={2} />
                    </Row>
                    { items }
                </Col>
            </Row>
        </>
    );
}
@@ -1,43 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import { connect } from 'react-redux';

import CreateModelPageComponent from 'components/create-model-page/create-model-page';
import { createModelAsync } from 'actions/models-actions';
import {
    ModelFiles,
    CombinedState,
} from 'reducers/interfaces';

interface StateToProps {
    isAdmin: boolean;
    modelCreatingStatus: string;
}

interface DispatchToProps {
    createModel(name: string, files: ModelFiles, global: boolean): void;
}

function mapStateToProps(state: CombinedState): StateToProps {
    const { models } = state;

    return {
        isAdmin: state.auth.user.isAdmin,
        modelCreatingStatus: models.creatingStatus,
    };
}

function mapDispatchToProps(dispatch: any): DispatchToProps {
    return {
        createModel(name: string, files: ModelFiles, global: boolean): void {
            dispatch(createModelAsync(name, files, global));
        },
    };
}

export default connect(
    mapStateToProps,
    mapDispatchToProps,
)(CreateModelPageComponent);
@@ -1,96 +0,0 @@
// Copyright (C) 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT

import getCore from 'cvat-core-wrapper';
import { ShapeType, RQStatus } from 'reducers/interfaces';

const core = getCore();
const baseURL = core.config.backendAPI.slice(0, -7);

type Params = {
    threshold: number;
    distance: number;
    onUpdatePercentage(percentage: number): void;
    jobID: number;
    annotations: any;
};

export function run(params: Params): Promise<any> {
    return new Promise((resolve, reject) => {
        const {
            threshold,
            distance,
            onUpdatePercentage,
            jobID,
            annotations,
        } = params;
        const { shapes, ...rest } = annotations;

        const boxes = shapes.filter((shape: any): boolean => shape.type === ShapeType.RECTANGLE);
        const others = shapes.filter((shape: any): boolean => shape.type !== ShapeType.RECTANGLE);

        core.server.request(
            `${baseURL}/reid/start/job/${jobID}`, {
                method: 'POST',
                data: JSON.stringify({
                    threshold,
                    maxDistance: distance,
                    boxes,
                }),
                headers: {
                    'Content-Type': 'application/json',
                },
            },
        ).then(() => {
            const timeoutCallback = (): void => {
                core.server.request(
                    `${baseURL}/reid/check/${jobID}`, {
                        method: 'GET',
                    },
                ).then((response: any) => {
                    const { status } = response;
                    if (status === RQStatus.finished) {
                        if (!response.result) {
                            // cancelled
                            resolve(annotations);
                            return;
                        }

                        const result = JSON.parse(response.result);
                        const collection = rest;
                        Array.prototype.push.apply(collection.tracks, result);
                        collection.shapes = others;
                        resolve(collection);
                    } else if (status === RQStatus.started) {
                        const { progress } = response;
                        if (typeof (progress) === 'number') {
                            onUpdatePercentage(+progress.toFixed(2));
                        }
                        setTimeout(timeoutCallback, 1000);
                    } else if (status === RQStatus.failed) {
                        reject(new Error(response.stderr));
                    } else if (status === RQStatus.unknown) {
                        reject(new Error('An unknown ReID status has been received'));
                    } else {
                        setTimeout(timeoutCallback, 1000);
                    }
                }).catch((error: Error) => {
                    reject(error);
                });
            };

            setTimeout(timeoutCallback, 1000);
        }).catch((error: Error) => {
            reject(error);
        });
    });
}

export function cancel(jobID: number): void {
    core.server.request(
        `${baseURL}/reid/cancel/${jobID}`, {
            method: 'GET',
        },
    );
}
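The `run()` function above drives a start-then-poll protocol: a POST to `/reid/start/job/<id>` kicks off the request, then `/reid/check/<id>` is polled once per second until the request queue reports `finished` or `failed`. As a rough, framework-free sketch of that polling loop (the stub server class and all names below are hypothetical, purely for illustration):

```python
import json

class StubServer:
    """Hypothetical stand-in for the /reid/check endpoint: reports
    'started' with increasing progress twice, then 'finished'."""
    def __init__(self):
        self.calls = 0

    def check(self, job_id):
        self.calls += 1
        if self.calls < 3:
            return {"status": "started", "progress": self.calls * 40.0}
        return {"status": "finished", "result": json.dumps([{"track": 1}])}

def poll_until_done(server, job_id, on_progress):
    # Poll until the request queue reports a terminal status.
    while True:
        response = server.check(job_id)
        if response["status"] == "finished":
            return json.loads(response["result"])
        if response["status"] == "failed":
            raise RuntimeError(response.get("stderr", "ReID failed"))
        on_progress(response.get("progress", 0.0))

progress_log = []
tracks = poll_until_done(StubServer(), 7, progress_log.append)
```

The real implementation additionally spaces the checks out with `setTimeout` and treats an empty `result` as a cancelled request.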
@@ -1,372 +0,0 @@
## Auto annotation

- [Description](#description)
- [Installation](#installation)
- [Usage](#usage)
- [Testing script](#testing-script)
- [Examples](#examples)
  - [Person-vehicle-bike-detection-crossroad-0078](#person-vehicle-bike-detection-crossroad-0078-openvino-toolkit)
  - [Landmarks-regression-retail-0009](#landmarks-regression-retail-0009-openvino-toolkit)
  - [Semantic Segmentation](#semantic-segmentation)
- [Available interpretation scripts](#available-interpretation-scripts)

### Description

The application will be enabled automatically if the
[OpenVINO™ component](../../../components/openvino)
is installed. It allows using custom models for auto annotation. Only models in the
OpenVINO™ toolkit format are supported. If you would like to annotate a
task with a custom model, please convert it to the intermediate representation
(IR) format via the model optimizer tool. See the [OpenVINO documentation](https://software.intel.com/en-us/articles/OpenVINO-InferEngine) for details.

### Installation

See the installation instructions for [the OpenVINO component](../../../components/openvino).

### Usage

To annotate a task with a custom model you need to prepare 4 files:
1. __Model config__ (*.xml) - a text file with the network configuration.
1. __Model weights__ (*.bin) - a binary file with trained weights.
1. __Label map__ (*.json) - a simple json file with a `label_map` dictionary-like
object with string values for label numbers.
Example:
```json
{
  "label_map": {
    "0": "background",
    "1": "aeroplane",
    "2": "bicycle",
    "3": "bird",
    "4": "boat",
    "5": "bottle",
    "6": "bus",
    "7": "car",
    "8": "cat",
    "9": "chair",
    "10": "cow",
    "11": "diningtable",
    "12": "dog",
    "13": "horse",
    "14": "motorbike",
    "15": "person",
    "16": "pottedplant",
    "17": "sheep",
    "18": "sofa",
    "19": "train",
    "20": "tvmonitor"
  }
}
```
1. __Interpretation script__ (*.py) - a file used to convert the net output layer
to a predefined structure which can be processed by CVAT. This code will be run
inside a restricted Python environment, but it is possible to use some
builtin functions like __str, int, float, max, min, range__.

Also, two variables are available in the scope:

- __detections__ - a list of dictionaries with detections for each frame:
  * __frame_id__ - frame number
  * __frame_height__ - frame height
  * __frame_width__ - frame width
  * __detections__ - output np.ndarray (see [ExecutableNetwork.infer](https://software.intel.com/en-us/articles/OpenVINO-InferEngine#inpage-nav-11-6-3) for details).

- __results__ - an instance of a Python class with converted results.
The following methods should be used to add shapes:
```python
# xtl, ytl, xbr, ybr - expected values are float or int
# label - expected value is int
# frame_number - expected value is int
# attributes - a dictionary of attribute_name: attribute_value pairs, for example {"confidence": "0.83"}
add_box(self, xtl, ytl, xbr, ybr, label, frame_number, attributes=None)

# points - a list of (x, y) pairs of float or int, for example [(57.3, 100), (67, 102.7)]
# label - expected value is int
# frame_number - expected value is int
# attributes - a dictionary of attribute_name: attribute_value pairs, for example {"confidence": "0.83"}
add_points(self, points, label, frame_number, attributes=None)
add_polygon(self, points, label, frame_number, attributes=None)
add_polyline(self, points, label, frame_number, attributes=None)
```
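Putting these pieces together, here is a minimal, self-contained sketch of an interpretation script. The `detections` and `results` variables are normally injected by CVAT; the stub `results` object and the toy detection tuples below are assumptions made only so the sketch can run on its own (the real `"detections"` value is an np.ndarray, as shown in the SSD example later):

```python
class StubResults:
    """Hypothetical stand-in for the CVAT-provided `results` object;
    only add_box is stubbed here."""
    def __init__(self):
        self.boxes = []

    def add_box(self, xtl, ytl, xbr, ybr, label, frame_number, attributes=None):
        self.boxes.append((xtl, ytl, xbr, ybr, label, frame_number, attributes or {}))

results = StubResults()

# Toy stand-in for the CVAT-provided `detections` variable, simplified to
# (xtl, ytl, xbr, ybr, label, confidence) tuples in normalized coordinates.
detections = [
    {"frame_id": 0, "frame_height": 480, "frame_width": 640,
     "detections": [(0.1, 0.2, 0.5, 0.6, 1, 0.9), (0.0, 0.0, 0.1, 0.1, 2, 0.3)]},
]

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]
    for xtl, ytl, xbr, ybr, label, confidence in frame_results["detections"]:
        if confidence < 0.5:  # drop weak detections
            continue
        results.add_box(
            xtl=xtl * frame_width,
            ytl=ytl * frame_height,
            xbr=xbr * frame_width,
            ybr=ybr * frame_height,
            label=int(label),
            frame_number=frame_number,
            attributes={"confidence": "{:.2f}".format(confidence)},
        )
```

The overall shape (iterate over frames, scale normalized coordinates by frame size, filter by confidence, call an `add_*` method) is the same pattern used by the real examples below.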
### Testing script
|
|
||||||
|
|
||||||
CVAT comes prepackaged with a small command line helper script to help develop interpretation scripts.
|
|
||||||
|
|
||||||
It includes a small user interface which allows users to feed in images and see the results using
|
|
||||||
the user interfaces provided by OpenCV.
|
|
||||||
|
|
||||||
See the script and the documentation in the
|
|
||||||
[auto_annotation directory](https://github.com/opencv/cvat/tree/develop/utils/auto_annotation).
|
|
||||||
|
|
||||||
When using the Auto Annotation runner, it is often helpful to drop into a REPL prompt to interact with the variables
|
|
||||||
directly. You can do this using the `interact` method from the `code` module.
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Import the interact method from the `code` module
|
|
||||||
from code import interact
|
|
||||||
|
|
||||||
|
|
||||||
for frame_results in detections:
|
|
||||||
frame_height = frame_results["frame_height"]
|
|
||||||
frame_width = frame_results["frame_width"]
|
|
||||||
frame_number = frame_results["frame_id"]
|
|
||||||
# Unsure what other data members are in the `frame_results`? Use the `interact method!
|
|
||||||
interact(local=locals())
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ python cvat/utils/auto_annotation/run_models.py --py /path/to/myfile.py --json /path/to/mapping.json --xml /path/to/inference.xml --bin /path/to/inference.bin
|
|
||||||
Python 3.6.6 (default, Sep 26 2018, 15:10:10)
|
|
||||||
[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.10.44.2)] on darwin
|
|
||||||
Type "help", "copyright", "credits" or "license" for more information.
|
|
||||||
>>> dir()
|
|
||||||
['__builtins__', 'frame_results', 'detections', 'frame_number', 'frame_height', 'interact', 'results', 'frame_width']
|
|
||||||
>>> type(frame_results)
|
|
||||||
<class 'dict'>
|
|
||||||
>>> frame_results.keys()
|
|
||||||
dict_keys(['frame_id', 'frame_height', 'frame_width', 'detections'])
|
|
||||||
```
|
|
||||||
|
|
||||||
When using the `interact` method, make sure you are running using the _testing script_, and ensure that you _remove it_
|
|
||||||
before submitting to the server! If you don't remove it from the server, the code runners will hang during execution,
|
|
||||||
and you'll have to restart the server to fix them.
|
|
||||||
|
|
||||||
Another useful development method is visualizing the results using OpenCV. This will be discussed more in the
|
|
||||||
[Semantic Segmentation](#segmentation) section.
|
|
||||||
|
|
||||||
### Examples

#### [Person-vehicle-bike-detection-crossroad-0078](https://github.com/opencv/open_model_zoo/blob/2018/intel_models/person-vehicle-bike-detection-crossroad-0078/description/person-vehicle-bike-detection-crossroad-0078.md) (OpenVINO toolkit)

__Links__
- [person-vehicle-bike-detection-crossroad-0078.xml](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.xml)
- [person-vehicle-bike-detection-crossroad-0078.bin](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.bin)

__Task labels__: person vehicle non-vehicle

__label_map.json__:
```json
{
  "label_map": {
    "1": "person",
    "2": "vehicle",
    "3": "non-vehicle"
  }
}
```
__Interpretation script for SSD based networks__:
```python
def clip(value):
    return max(min(1.0, value), 0.0)

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]

    for i in range(frame_results["detections"].shape[2]):
        confidence = frame_results["detections"][0, 0, i, 2]
        if confidence < 0.5:
            continue

        results.add_box(
            xtl=clip(frame_results["detections"][0, 0, i, 3]) * frame_width,
            ytl=clip(frame_results["detections"][0, 0, i, 4]) * frame_height,
            xbr=clip(frame_results["detections"][0, 0, i, 5]) * frame_width,
            ybr=clip(frame_results["detections"][0, 0, i, 6]) * frame_height,
            label=int(frame_results["detections"][0, 0, i, 1]),
            frame_number=frame_number,
            attributes={
                "confidence": "{:.2f}".format(confidence),
            },
        )
```
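
The decoding logic above can be exercised offline against a mock detections array. This is a minimal sketch, assuming only the `[1, 1, N, 7]` SSD output layout (`[image_id, label, confidence, xmin, ymin, xmax, ymax]`, normalized coordinates); the array values here are made up:

```python
import numpy as np

def clip(value):
    return max(min(1.0, value), 0.0)

# One fake frame with two candidate boxes in SSD layout
detections = np.array([[[
    [0, 1, 0.9, 0.1, 0.2, 0.5, 0.6],  # confident "person" box
    [0, 2, 0.3, 0.0, 0.0, 1.0, 1.0],  # below the 0.5 threshold, skipped
]]])

frame_width, frame_height = 100, 200
boxes = []
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence < 0.5:
        continue
    # Scale normalized coordinates to pixel coordinates, clipping to the frame
    boxes.append((
        clip(detections[0, 0, i, 3]) * frame_width,
        clip(detections[0, 0, i, 4]) * frame_height,
        clip(detections[0, 0, i, 5]) * frame_width,
        clip(detections[0, 0, i, 6]) * frame_height,
        int(detections[0, 0, i, 1]),
    ))
```

Only the confident box survives, scaled to pixel coordinates and tagged with its integer label, exactly what `results.add_box` would receive.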

#### [Landmarks-regression-retail-0009](https://github.com/opencv/open_model_zoo/blob/2018/intel_models/landmarks-regression-retail-0009/description/landmarks-regression-retail-0009.md) (OpenVINO toolkit)

__Links__
- [landmarks-regression-retail-0009.xml](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/landmarks-regression-retail-0009/FP32/landmarks-regression-retail-0009.xml)
- [landmarks-regression-retail-0009.bin](https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/landmarks-regression-retail-0009/FP32/landmarks-regression-retail-0009.bin)

__Task labels__: left_eye right_eye tip_of_nose left_lip_corner right_lip_corner

__label_map.json__:
```json
{
  "label_map": {
    "0": "left_eye",
    "1": "right_eye",
    "2": "tip_of_nose",
    "3": "left_lip_corner",
    "4": "right_lip_corner"
  }
}
```
__Interpretation script__:
```python
def clip(value):
    return max(min(1.0, value), 0.0)

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]

    for i in range(0, frame_results["detections"].shape[1], 2):
        x = frame_results["detections"][0, i, 0, 0]
        y = frame_results["detections"][0, i + 1, 0, 0]

        results.add_points(
            points=[(clip(x) * frame_width, clip(y) * frame_height)],
            label=i // 2,  # see the label map and model output specification
            frame_number=frame_number,
        )
```
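
The interleaved x/y layout is the only subtlety here. A standalone sketch with a fabricated `[1, 10, 1, 1]` output blob (the coordinate values are made up) shows how the loop pairs the values and derives the label index:

```python
import numpy as np

def clip(value):
    return max(min(1.0, value), 0.0)

# Fabricated model output: 10 values, (x, y) for each of the 5 landmarks,
# normalized to [0, 1] and shaped [1, 10, 1, 1] like the real output blob
raw = [0.30, 0.40, 0.70, 0.40, 0.50, 0.55, 0.35, 0.75, 0.65, 0.75]
output = np.array(raw).reshape(1, 10, 1, 1)

frame_width, frame_height = 200, 100
points = []
for i in range(0, output.shape[1], 2):
    x = output[0, i, 0, 0]       # even positions hold x
    y = output[0, i + 1, 0, 0]   # odd positions hold y
    # i // 2 is the landmark index: 0=left_eye, 1=right_eye, 2=tip_of_nose, ...
    points.append((i // 2, clip(x) * frame_width, clip(y) * frame_height))
```

Each resulting tuple is a `(label, x, y)` point in pixel coordinates, matching what `results.add_points` expects label by label.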

#### Semantic Segmentation

__Links__
- [mask_rcnn_resnet50_atrous_coco][1] (OpenVINO toolkit)
- [CVAT Implementation][2]

__label_map.json__:
```json
{
  "label_map": {
    "1": "person",
    "2": "bicycle",
    "3": "car"
  }
}
```

Note that the above labels are not all the labels in the model! See [here](https://github.com/opencv/cvat/blob/develop/utils/open_model_zoo/mask_rcnn_inception_resnet_v2_atrous_coco/mapping.json).

**Interpretation script for a semantic segmentation network**:
```python
import numpy as np
import cv2
from skimage.measure import approximate_polygon, find_contours


for frame_results in detections:
    frame_height = frame_results['frame_height']
    frame_width = frame_results['frame_width']
    frame_number = frame_results['frame_id']
    detection = frame_results['detections']

    # The keys for the two members below will vary based on the model
    masks = frame_results['masks']
    boxes = frame_results['reshape_do_2d']

    for box_index, box in enumerate(boxes):
        # Again, these indices are specific to this model
        class_label = int(box[1])
        box_class_probability = box[2]

        if box_class_probability > 0.2:
            xmin = box[3] * frame_width
            ymin = box[4] * frame_height
            xmax = box[5] * frame_width
            ymax = box[6] * frame_height

            box_width = int(xmax - xmin)
            box_height = int(ymax - ymin)

            # Use the box index and class label index to find the appropriate mask.
            # Note that we need to convert the class label to a zero-based index by subtracting `1`
            class_mask = masks[box_index][class_label - 1]

            # The class mask is a 33 x 33 matrix;
            # resize it to the bounding box (dsize is (width, height))
            resized_mask = cv2.resize(class_mask, dsize=(box_width, box_height), interpolation=cv2.INTER_CUBIC)

            # Each pixel is a probability; select every pixel above the probability threshold, 0.5.
            # Do this using the boolean `>` operator
            boolean_mask = (resized_mask > 0.5)

            # Convert the boolean values to uint8
            uint8_mask = boolean_mask.astype(np.uint8) * 255

            # Change the x and y coordinates into integers
            xmin = int(round(xmin))
            ymin = int(round(ymin))
            xmax = xmin + box_width
            ymax = ymin + box_height

            # Create an empty blank frame, so that we can get the mask polygon in frame coordinates
            mask_frame = np.zeros((frame_height, frame_width), dtype=np.uint8)

            # Put the uint8_mask on the mask frame using the integer coordinates
            # (NumPy indexes rows first, i.e. [y, x])
            mask_frame[ymin:ymax, xmin:xmax] = uint8_mask

            mask_probability_threshold = 0.5
            # find the contours
            contours = find_contours(mask_frame, mask_probability_threshold)
            # every bounding box should only have a single contour
            contour = contours[0]
            # find_contours returns (row, column) pairs; flip them to (x, y)
            contour = np.flip(contour, axis=1)

            # reduce the precision of the polygon
            polygon_mask = approximate_polygon(contour, tolerance=2.5)
            polygon_mask = polygon_mask.tolist()

            results.add_polygon(polygon_mask, class_label, frame_number)
```
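
The thresholding and frame-placement steps are the easiest parts to get wrong, because NumPy indexes arrays as `[row, column]`, i.e. `[y, x]`. A NumPy-only sketch, with a hand-made probability mask standing in for the (already resized) model output:

```python
import numpy as np

frame_height, frame_width = 8, 10
xmin, ymin, box_width, box_height = 2, 1, 4, 3

# Hand-made per-pixel probability mask for the box
class_mask = np.full((box_height, box_width), 0.9)
class_mask[0, 0] = 0.1  # one pixel below the 0.5 threshold

# Threshold the probabilities, then convert booleans to a uint8 image
uint8_mask = (class_mask > 0.5).astype(np.uint8) * 255

# Paste the mask into an empty frame: rows are y, columns are x
mask_frame = np.zeros((frame_height, frame_width), dtype=np.uint8)
mask_frame[ymin:ymin + box_height, xmin:xmin + box_width] = uint8_mask
```

Every above-threshold pixel lands at `(y, x)` offsets inside the box region of the frame, and the single low-probability pixel stays zero.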

Note that it is sometimes hard to see or understand what is happening in a script.
Using OpenCV to display intermediate results can help you visualize what is happening.

```python
import cv2
import numpy as np


for frame_results in detections:
    frame_height = frame_results['frame_height']
    frame_width = frame_results['frame_width']
    detection = frame_results['detections']

    masks = frame_results['masks']
    boxes = frame_results['reshape_do_2d']

    for box_index, box in enumerate(boxes):
        class_label = int(box[1])
        box_class_probability = box[2]

        if box_class_probability > 0.2:
            xmin = box[3] * frame_width
            ymin = box[4] * frame_height
            xmax = box[5] * frame_width
            ymax = box[6] * frame_height

            box_width = int(xmax - xmin)
            box_height = int(ymax - ymin)

            class_mask = masks[box_index][class_label - 1]
            # Visualize the class mask!
            cv2.imshow('class mask', class_mask)
            # wait until the user presses a key
            cv2.waitKey(0)

            resized_mask = cv2.resize(class_mask, dsize=(box_width, box_height), interpolation=cv2.INTER_CUBIC)
            boolean_mask = (resized_mask > 0.5)
            uint8_mask = boolean_mask.astype(np.uint8) * 255

            # Visualize the class mask after it has been resized!
            cv2.imshow('class mask', uint8_mask)
            cv2.waitKey(0)
```

Note that you should _only_ use the above commands while running the [Auto Annotation Model Runner][3].
Running them on the server will likely require a server restart to fix.
Calling `cv2.destroyAllWindows()` or `cv2.destroyWindow('your-name-here')` might be required depending on your
implementation.

### Available interpretation scripts

CVAT comes prepackaged with several out-of-the-box interpretation scripts.
See them in the [open model zoo directory](https://github.com/opencv/cvat/tree/develop/utils/open_model_zoo).

[1]: https://github.com/opencv/open_model_zoo/blob/master/models/public/mask_rcnn_resnet50_atrous_coco/model.yml
[2]: https://github.com/opencv/cvat/tree/develop/utils/open_model_zoo/mask_rcnn_inception_resnet_v2_atrous_coco
[3]: https://github.com/opencv/cvat/tree/develop/utils/auto_annotation

@@ -1,6 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT

default_app_config = 'cvat.apps.auto_annotation.apps.AutoAnnotationConfig'
@@ -1,15 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.contrib import admin
from .models import AnnotationModel

@admin.register(AnnotationModel)
class AnnotationModelAdmin(admin.ModelAdmin):
    list_display = ('name', 'owner', 'created_date', 'updated_date',
                    'shared', 'primary', 'framework')

    def has_add_permission(self, request):
        return False
@@ -1,15 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.apps import AppConfig


class AutoAnnotationConfig(AppConfig):
    name = "cvat.apps.auto_annotation"

    def ready(self):
        from .permissions import setup_permissions

        setup_permissions()
@@ -1,22 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

import cv2
import numpy as np

class ImageLoader():
    def __init__(self, frame_provider):
        self._frame_provider = frame_provider

    def __iter__(self):
        for frame, _ in self._frame_provider.get_frames(self._frame_provider.Quality.ORIGINAL):
            yield self._load_image(frame)

    def __len__(self):
        return len(self._frame_provider)

    @staticmethod
    def _load_image(image):
        return cv2.imdecode(np.fromstring(image.read(), np.uint8), cv2.IMREAD_COLOR)
@@ -1,163 +0,0 @@
import itertools
from .model_loader import ModelLoader
from cvat.apps.engine.utils import import_modules, execute_python_code

def _process_detections(detections, path_to_conv_script, restricted=True):
    results = Results()
    local_vars = {
        "detections": detections,
        "results": results,
    }
    source_code = open(path_to_conv_script).read()

    if restricted:
        global_vars = {
            "__builtins__": {
                "str": str,
                "int": int,
                "float": float,
                "max": max,
                "min": min,
                "range": range,
            },
        }
    else:
        global_vars = globals()
        imports = import_modules(source_code)
        global_vars.update(imports)

    execute_python_code(source_code, global_vars, local_vars)

    return results

def _process_attributes(shape_attributes, label_attr_spec):
    attributes = []
    for attr_text, attr_value in shape_attributes.items():
        if attr_text in label_attr_spec:
            attributes.append({
                "spec_id": label_attr_spec[attr_text],
                "value": attr_value,
            })

    return attributes

class Results():
    def __init__(self):
        self._results = {
            "shapes": [],
            "tracks": []
        }

    # https://stackoverflow.com/a/50928627/2701402
    def add_box(self, xtl: float, ytl: float, xbr: float, ybr: float, label: int, frame_number: int, attributes: dict=None):
        """
        xtl - x coordinate, top left
        ytl - y coordinate, top left
        xbr - x coordinate, bottom right
        ybr - y coordinate, bottom right
        """
        self.get_shapes().append({
            "label": label,
            "frame": frame_number,
            "points": [xtl, ytl, xbr, ybr],
            "type": "rectangle",
            "attributes": attributes or {},
        })

    def add_points(self, points: list, label: int, frame_number: int, attributes: dict=None):
        points = self._create_polyshape(points, label, frame_number, attributes)
        points["type"] = "points"
        self.get_shapes().append(points)

    def add_polygon(self, points: list, label: int, frame_number: int, attributes: dict=None):
        polygon = self._create_polyshape(points, label, frame_number, attributes)
        polygon["type"] = "polygon"
        self.get_shapes().append(polygon)

    def add_polyline(self, points: list, label: int, frame_number: int, attributes: dict=None):
        polyline = self._create_polyshape(points, label, frame_number, attributes)
        polyline["type"] = "polyline"
        self.get_shapes().append(polyline)

    def get_shapes(self):
        return self._results["shapes"]

    def get_tracks(self):
        return self._results["tracks"]

    @staticmethod
    def _create_polyshape(points: list, label: int, frame_number: int, attributes: dict=None):
        return {
            "label": label,
            "frame": frame_number,
            "points": list(itertools.chain.from_iterable(points)),
            "attributes": attributes or {},
        }

class InferenceAnnotationRunner:
    def __init__(self, data, model_file, weights_file, labels_mapping,
            attribute_spec, convertation_file):
        self.data = iter(data)
        self.data_len = len(data)
        self.model = ModelLoader(model=model_file, weights=weights_file)
        self.frame_counter = 0
        self.attribute_spec = attribute_spec
        self.convertation_file = convertation_file
        self.iteration_size = 128
        self.labels_mapping = labels_mapping


    def run(self, job=None, update_progress=None, restricted=True):
        result = {
            "shapes": [],
            "tracks": [],
            "tags": [],
            "version": 0
        }

        detections = []
        for _ in range(self.iteration_size):
            try:
                frame = next(self.data)
            except StopIteration:
                break

            orig_rows, orig_cols = frame.shape[:2]

            detections.append({
                "frame_id": self.frame_counter,
                "frame_height": orig_rows,
                "frame_width": orig_cols,
                "detections": self.model.infer(frame),
            })

            self.frame_counter += 1
            if job and update_progress and not update_progress(job, self.frame_counter * 100 / self.data_len):
                return None, False

        processed_detections = _process_detections(detections, self.convertation_file, restricted=restricted)

        self._add_shapes(processed_detections.get_shapes(), result["shapes"])

        more_items = self.frame_counter != self.data_len

        return result, more_items

    def _add_shapes(self, shapes, target_container):
        for shape in shapes:
            if shape["label"] not in self.labels_mapping:
                continue

            db_label = self.labels_mapping[shape["label"]]
            label_attr_spec = self.attribute_spec.get(db_label)
            target_container.append({
                "label_id": db_label,
                "frame": shape["frame"],
                "points": shape["points"],
                "type": shape["type"],
                "z_order": 0,
                "group": None,
                "occluded": False,
                "attributes": _process_attributes(shape["attributes"], label_attr_spec),
            })
@@ -1,52 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from openvino.inference_engine import IENetwork, IEPlugin, IECore, get_version

import subprocess
import os
import platform

_IE_PLUGINS_PATH = os.getenv("IE_PLUGINS_PATH", None)

def _check_instruction(instruction):
    return instruction == str.strip(
        subprocess.check_output(
            'lscpu | grep -o "{}" | head -1'.format(instruction), shell=True
        ).decode('utf-8')
    )


def make_plugin_or_core():
    version = get_version()
    use_core_openvino = False
    try:
        major, minor, reference = [int(x) for x in version.split('.')]
        if major >= 2 and minor >= 1:
            use_core_openvino = True
    except Exception:
        pass

    if use_core_openvino:
        ie = IECore()
        return ie

    if _IE_PLUGINS_PATH is None:
        raise OSError('Inference engine plugin path env not found in the system.')

    plugin = IEPlugin(device='CPU', plugin_dirs=[_IE_PLUGINS_PATH])
    if (_check_instruction('avx2')):
        plugin.add_cpu_extension(os.path.join(_IE_PLUGINS_PATH, 'libcpu_extension_avx2.so'))
    elif (_check_instruction('sse4')):
        plugin.add_cpu_extension(os.path.join(_IE_PLUGINS_PATH, 'libcpu_extension_sse4.so'))
    elif platform.system() == 'Darwin':
        plugin.add_cpu_extension(os.path.join(_IE_PLUGINS_PATH, 'libcpu_extension.dylib'))
    else:
        raise Exception('Inference engine requires a support of avx2 or sse4.')

    return plugin


def make_network(model, weights):
    return IENetwork(model = model, weights = weights)
@@ -1,39 +0,0 @@
# Generated by Django 2.1.3 on 2019-01-24 14:05

import cvat.apps.auto_annotation.models
from django.conf import settings
import django.core.files.storage
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='AnnotationModel',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', cvat.apps.auto_annotation.models.SafeCharField(max_length=256)),
                ('created_date', models.DateTimeField(auto_now_add=True)),
                ('updated_date', models.DateTimeField(auto_now_add=True)),
                ('model_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
                ('weights_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
                ('labelmap_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
                ('interpretation_file', models.FileField(storage=django.core.files.storage.FileSystemStorage(), upload_to=cvat.apps.auto_annotation.models.upload_path_handler)),
                ('shared', models.BooleanField(default=False)),
                ('primary', models.BooleanField(default=False)),
                ('framework', models.CharField(default=cvat.apps.auto_annotation.models.FrameworkChoice('openvino'), max_length=32)),
                ('owner', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'default_permissions': (),
            },
        ),
    ]
@@ -1,5 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@@ -1,76 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT

import json
import cv2
import os
import numpy as np

from cvat.apps.auto_annotation.inference_engine import make_plugin_or_core, make_network

class ModelLoader():
    def __init__(self, model, weights):
        self._model = model
        self._weights = weights

        core_or_plugin = make_plugin_or_core()
        network = make_network(self._model, self._weights)

        if getattr(core_or_plugin, 'get_supported_layers', False):
            supported_layers = core_or_plugin.get_supported_layers(network)
            not_supported_layers = [l for l in network.layers.keys() if l not in supported_layers]
            if len(not_supported_layers) != 0:
                raise Exception("Following layers are not supported by the plugin for specified device {}:\n {}".
                    format(core_or_plugin.device, ", ".join(not_supported_layers)))

        iter_inputs = iter(network.inputs)
        self._input_blob_name = next(iter_inputs)
        self._input_info_name = ''
        self._output_blob_name = next(iter(network.outputs))

        self._require_image_info = False

        info_names = ('image_info', 'im_info')

        # NOTE: handling for the inclusion of `image_info` in OpenVINO 2019
        if any(s in network.inputs for s in info_names):
            self._require_image_info = True
            self._input_info_name = set(network.inputs).intersection(info_names)
            self._input_info_name = self._input_info_name.pop()
            if self._input_blob_name in info_names:
                self._input_blob_name = next(iter_inputs)

        if getattr(core_or_plugin, 'load_network', False):
            self._net = core_or_plugin.load_network(network,
                                                    "CPU",
                                                    num_requests=2)
        else:
            self._net = core_or_plugin.load(network=network, num_requests=2)
        input_type = network.inputs[self._input_blob_name]
        self._input_layout = input_type if isinstance(input_type, list) else input_type.shape

    def infer(self, image):
        _, _, h, w = self._input_layout
        in_frame = image if image.shape[:-1] == (h, w) else cv2.resize(image, (w, h))
        in_frame = in_frame.transpose((2, 0, 1))  # Change data layout from HWC to CHW
        inputs = {self._input_blob_name: in_frame}
        if self._require_image_info:
            info = np.zeros([1, 3])
            info[0, 0] = h
            info[0, 1] = w
            # frame number
            info[0, 2] = 1
            inputs[self._input_info_name] = info

        results = self._net.infer(inputs)
        if len(results) == 1:
            return results[self._output_blob_name].copy()
        else:
            return results.copy()


def load_labelmap(labels_path):
    with open(labels_path, "r") as f:
        return json.load(f)["label_map"]
@ -1,276 +0,0 @@
|
|||||||
# Copyright (C) 2018-2020 Intel Corporation
|
|
||||||
#
|
|
||||||
# SPDX-License-Identifier: MIT
|
|
||||||
|
|
||||||
import django_rq
|
|
||||||
import numpy as np
|
|
||||||
import os
|
|
||||||
import rq
|
|
||||||
import shutil
|
|
||||||
import tempfile
|
|
||||||
|
|
||||||
from django.db import transaction
|
|
||||||
from django.utils import timezone
|
|
||||||
from django.conf import settings
|
|
||||||
|
|
||||||
from cvat.apps.engine.log import slogger
|
|
||||||
from cvat.apps.engine.models import Task as TaskModel
|
|
||||||
from cvat.apps.authentication.auth import has_admin_role
|
|
||||||
from cvat.apps.engine.serializers import LabeledDataSerializer
|
|
||||||
from cvat.apps.dataset_manager.task import put_task_data, patch_task_data
|
|
||||||
from cvat.apps.engine.frame_provider import FrameProvider
|
|
||||||
from cvat.apps.engine.utils import av_scan_paths
|
|
||||||
|
|
||||||
from .models import AnnotationModel, FrameworkChoice
|
|
||||||
from .model_loader import load_labelmap
|
|
||||||
from .image_loader import ImageLoader
|
|
||||||
from .inference import InferenceAnnotationRunner
|
|
||||||
|
|
||||||
|
|
||||||
def _remove_old_file(model_file_field):
|
|
||||||
if model_file_field and os.path.exists(model_file_field.name):
|
|
||||||
os.remove(model_file_field.name)
|
|
||||||
|
|
||||||
def _update_dl_model_thread(dl_model_id, name, is_shared, model_file, weights_file, labelmap_file,
|
|
||||||
interpretation_file, run_tests, is_local_storage, delete_if_test_fails, restricted=True):
|
|
||||||
def _get_file_content(filename):
|
|
||||||
return os.path.basename(filename), open(filename, "rb")
|
|
||||||
|
|
||||||
def _delete_source_files():
|
|
||||||
for f in [model_file, weights_file, labelmap_file, interpretation_file]:
|
|
||||||
if f:
|
|
||||||
os.remove(f)
|
|
||||||
|
|
||||||
def _run_test(model_file, weights_file, labelmap_file, interpretation_file):
|
|
||||||
test_image = np.ones((1024, 1980, 3), np.uint8) * 255
|
|
||||||
try:
|
|
||||||
dummy_labelmap = {key: key for key in load_labelmap(labelmap_file).keys()}
|
|
||||||
runner = InferenceAnnotationRunner(
|
|
||||||
data=[test_image,],
|
|
||||||
model_file=model_file,
|
|
||||||
weights_file=weights_file,
|
|
||||||
labels_mapping=dummy_labelmap,
|
|
||||||
attribute_spec={},
|
|
||||||
convertation_file=interpretation_file)
|
|
||||||
|
|
||||||
runner.run(restricted=restricted)
|
|
||||||
except Exception as e:
|
|
||||||
return False, str(e)
|
|
||||||
|
|
||||||
return True, ""
|
|
||||||
|
|
||||||
job = rq.get_current_job()
|
|
||||||
job.meta["progress"] = "Saving data"
|
|
||||||
job.save_meta()
|
|
||||||
|
|
||||||
with transaction.atomic():
|
|
||||||
dl_model = AnnotationModel.objects.select_for_update().get(pk=dl_model_id)
|
|
||||||
|
|
||||||
test_res = True
|
|
||||||
message = ""
|
|
||||||
if run_tests:
|
|
||||||
job.meta["progress"] = "Test started"
|
|
||||||
job.save_meta()
|
|
||||||
|
|
||||||
test_res, message = _run_test(
|
|
||||||
model_file=model_file or dl_model.model_file.name,
|
|
||||||
weights_file=weights_file or dl_model.weights_file.name,
|
|
||||||
labelmap_file=labelmap_file or dl_model.labelmap_file.name,
|
|
||||||
interpretation_file=interpretation_file or dl_model.interpretation_file.name,
|
|
||||||
)
|
|
||||||
|
|
||||||
if not test_res:
|
|
||||||
job.meta["progress"] = "Test failed"
|
|
||||||
if delete_if_test_fails:
|
|
||||||
shutil.rmtree(dl_model.get_dirname(), ignore_errors=True)
|
|
||||||
dl_model.delete()
|
|
||||||
else:
|
|
||||||
job.meta["progress"] = "Test passed"
|
|
||||||
job.save_meta()
|
|
||||||
|
|
||||||
# update DL model
|
|
||||||
if test_res:
|
|
||||||
if model_file:
|
|
||||||
_remove_old_file(dl_model.model_file)
|
|
||||||
dl_model.model_file.save(*_get_file_content(model_file))
|
|
||||||
if weights_file:
|
|
||||||
_remove_old_file(dl_model.weights_file)
|
|
||||||
dl_model.weights_file.save(*_get_file_content(weights_file))
|
|
||||||
if labelmap_file:
|
|
||||||
_remove_old_file(dl_model.labelmap_file)
|
|
||||||
dl_model.labelmap_file.save(*_get_file_content(labelmap_file))
|
|
||||||
if interpretation_file:
|
|
||||||
_remove_old_file(dl_model.interpretation_file)
|
|
||||||
dl_model.interpretation_file.save(*_get_file_content(interpretation_file))
|
|
||||||
|
|
||||||
if name:
|
|
||||||
dl_model.name = name
|
|
||||||
|
|
||||||
if is_shared != None:
|
|
||||||
dl_model.shared = is_shared
|
|
||||||
|
|
||||||
dl_model.updated_date = timezone.now()
|
|
||||||
dl_model.save()
|
|
||||||
|
|
||||||
if is_local_storage:
|
|
||||||
_delete_source_files()
|
|
||||||
|
|
||||||
if not test_res:
|
|
||||||
raise Exception("Model was not properly created/updated. Test failed: {}".format(message))
|
|
||||||
|
|
||||||
def create_or_update(dl_model_id, name, model_file, weights_file, labelmap_file, interpretation_file, owner, storage, is_shared):
|
|
||||||
def get_abs_path(share_path):
|
|
||||||
if not share_path:
|
|
||||||
return share_path
|
|
||||||
share_root = settings.SHARE_ROOT
|
|
||||||
relpath = os.path.normpath(share_path).lstrip('/')
|
|
||||||
if '..' in relpath.split(os.path.sep):
|
|
||||||
raise Exception('Permission denied')
|
|
||||||
abspath = os.path.abspath(os.path.join(share_root, relpath))
|
|
||||||
if os.path.commonprefix([share_root, abspath]) != share_root:
|
|
||||||
raise Exception('Bad file path on share: ' + abspath)
|
|
||||||
return abspath
|
|
||||||
|
|
||||||
def save_file_as_tmp(data):
|
|
||||||
if not data:
|
|
||||||
return None
|
|
||||||
fd, filename = tempfile.mkstemp()
|
|
||||||
with open(filename, 'wb') as tmp_file:
|
|
||||||
for chunk in data.chunks():
|
|
||||||
tmp_file.write(chunk)
|
|
||||||
os.close(fd)
|
|
||||||
return filename
|
|
||||||
|
|
||||||
    is_create_request = dl_model_id is None
    if is_create_request:
        dl_model_id = create_empty(owner=owner)

    run_tests = bool(model_file or weights_file or labelmap_file or interpretation_file)
    if storage != "local":
        model_file = get_abs_path(model_file)
        weights_file = get_abs_path(weights_file)
        labelmap_file = get_abs_path(labelmap_file)
        interpretation_file = get_abs_path(interpretation_file)
    else:
        model_file = save_file_as_tmp(model_file)
        weights_file = save_file_as_tmp(weights_file)
        labelmap_file = save_file_as_tmp(labelmap_file)
        interpretation_file = save_file_as_tmp(interpretation_file)

    files_to_scan = []
    if model_file:
        files_to_scan.append(model_file)
    if weights_file:
        files_to_scan.append(weights_file)
    if labelmap_file:
        files_to_scan.append(labelmap_file)
    if interpretation_file:
        files_to_scan.append(interpretation_file)
    av_scan_paths(*files_to_scan)

    if owner:
        restricted = not has_admin_role(owner)
    else:
        restricted = not has_admin_role(AnnotationModel.objects.get(pk=dl_model_id).owner)

    rq_id = "auto_annotation.create.{}".format(dl_model_id)
    queue = django_rq.get_queue("default")
    queue.enqueue_call(
        func=_update_dl_model_thread,
        args=(
            dl_model_id,
            name,
            is_shared,
            model_file,
            weights_file,
            labelmap_file,
            interpretation_file,
            run_tests,
            storage == "local",
            is_create_request,
            restricted
        ),
        job_id=rq_id
    )

    return rq_id
@transaction.atomic
def create_empty(owner, framework=FrameworkChoice.OPENVINO):
    db_model = AnnotationModel(
        owner=owner,
    )
    db_model.save()

    model_path = db_model.get_dirname()
    if os.path.isdir(model_path):
        shutil.rmtree(model_path)
    os.mkdir(model_path)

    return db_model.id
@transaction.atomic
def delete(dl_model_id):
    dl_model = AnnotationModel.objects.select_for_update().get(pk=dl_model_id)
    if dl_model:
        if dl_model.primary:
            raise Exception("Cannot delete primary model {}".format(dl_model_id))
        shutil.rmtree(dl_model.get_dirname(), ignore_errors=True)
        dl_model.delete()
    else:
        raise Exception("Requested DL model {} doesn't exist".format(dl_model_id))
def run_inference_thread(tid, model_file, weights_file, labels_mapping, attributes, convertation_file, reset, user, restricted=True):
    def update_progress(job, progress):
        job.refresh()
        if "cancel" in job.meta:
            del job.meta["cancel"]
            job.save()
            return False
        job.meta["progress"] = progress
        job.save_meta()
        return True

    try:
        job = rq.get_current_job()
        job.meta["progress"] = 0
        job.save_meta()
        db_task = TaskModel.objects.get(pk=tid)

        result = None
        slogger.glob.info("auto annotation with openvino toolkit for task {}".format(tid))
        more_data = True
        runner = InferenceAnnotationRunner(
            data=ImageLoader(FrameProvider(db_task.data)),
            model_file=model_file,
            weights_file=weights_file,
            labels_mapping=labels_mapping,
            attribute_spec=attributes,
            convertation_file=convertation_file)
        while more_data:
            result, more_data = runner.run(
                job=job,
                update_progress=update_progress,
                restricted=restricted)

        if result is None:
            slogger.glob.info("auto annotation for task {} canceled by user".format(tid))
            return

        serializer = LabeledDataSerializer(data=result)
        if serializer.is_valid(raise_exception=True):
            if reset:
                put_task_data(tid, result)
            else:
                patch_task_data(tid, result, "create")

        slogger.glob.info("auto annotation for task {} done".format(tid))
    except Exception as e:
        try:
            slogger.task[tid].exception("an exception occurred during auto annotation of the task", exc_info=True)
        except Exception as ex:
            slogger.glob.exception("an exception occurred during auto annotation of task {}: {}".format(tid, str(ex)), exc_info=True)
            raise ex

        raise e
@@ -1,56 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

import os
from enum import Enum

from django.db import models
from django.conf import settings
from django.contrib.auth.models import User
from django.core.files.storage import FileSystemStorage

fs = FileSystemStorage()

def upload_path_handler(instance, filename):
    return os.path.join(settings.MODELS_ROOT, str(instance.id), filename)

class FrameworkChoice(Enum):
    OPENVINO = 'openvino'
    TENSORFLOW = 'tensorflow'
    PYTORCH = 'pytorch'

    def __str__(self):
        return self.value


class SafeCharField(models.CharField):
    def get_prep_value(self, value):
        value = super().get_prep_value(value)
        if value:
            return value[:self.max_length]
        return value

class AnnotationModel(models.Model):
    name = SafeCharField(max_length=256)
    owner = models.ForeignKey(User, null=True, blank=True,
        on_delete=models.SET_NULL)
    created_date = models.DateTimeField(auto_now_add=True)
    updated_date = models.DateTimeField(auto_now_add=True)
    model_file = models.FileField(upload_to=upload_path_handler, storage=fs)
    weights_file = models.FileField(upload_to=upload_path_handler, storage=fs)
    labelmap_file = models.FileField(upload_to=upload_path_handler, storage=fs)
    interpretation_file = models.FileField(upload_to=upload_path_handler, storage=fs)
    shared = models.BooleanField(default=False)
    primary = models.BooleanField(default=False)
    framework = models.CharField(max_length=32, default=FrameworkChoice.OPENVINO)

    class Meta:
        default_permissions = ()

    def get_dirname(self):
        return "{models_root}/{id}".format(models_root=settings.MODELS_ROOT, id=self.id)

    def __str__(self):
        return self.name
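`SafeCharField` silently truncates over-long values at write time instead of raising a validation error. The core of `get_prep_value`, sketched as a plain function without the Django dependency:

```python
def safe_prep_value(value, max_length):
    """Mirror of SafeCharField.get_prep_value: silently truncate to max_length.

    Falsy values (None, empty string) pass through unchanged.
    """
    if value:
        return value[:max_length]
    return value
```

The design choice here is "store something rather than fail": a 300-character model name submitted against `max_length=256` is cut to 256 characters rather than rejected.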
@@ -1,29 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

import rules

from cvat.apps.authentication.auth import has_admin_role, has_user_role

@rules.predicate
def is_model_owner(db_user, db_dl_model):
    return db_dl_model.owner == db_user

@rules.predicate
def is_shared_model(_, db_dl_model):
    return db_dl_model.shared

@rules.predicate
def is_primary_model(_, db_dl_model):
    return db_dl_model.primary

def setup_permissions():
    rules.add_perm('auto_annotation.model.create', has_admin_role | has_user_role)
    rules.add_perm('auto_annotation.model.update', (has_admin_role | is_model_owner) & ~is_primary_model)
    rules.add_perm('auto_annotation.model.delete', (has_admin_role | is_model_owner) & ~is_primary_model)
    rules.add_perm('auto_annotation.model.access', has_admin_role | is_model_owner |
        is_shared_model | is_primary_model)
@@ -1,4 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
@@ -1,19 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.urls import path
from . import views

urlpatterns = [
    path("create", views.create_model),
    path("update/<int:mid>", views.update_model),
    path("delete/<int:mid>", views.delete_model),

    path("start/<int:mid>/<int:tid>", views.start_annotation),
    path("check/<str:rq_id>", views.check),
    path("cancel/<int:tid>", views.cancel),

    path("meta/get", views.get_meta_info),
]
@@ -1,265 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

import django_rq
import json
import os

from django.http import HttpResponse, JsonResponse, HttpResponseBadRequest
from rest_framework.decorators import api_view
from django.db.models import Q
from rules.contrib.views import permission_required, objectgetter

from cvat.apps.authentication.decorators import login_required
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.authentication.auth import has_admin_role
from cvat.apps.engine.log import slogger

from .model_loader import load_labelmap
from . import model_manager
from .models import AnnotationModel
@login_required
@permission_required(perm=["engine.task.change"],
    fn=objectgetter(TaskModel, "tid"), raise_exception=True)
def cancel(request, tid):
    try:
        queue = django_rq.get_queue("low")
        job = queue.fetch_job("auto_annotation.run.{}".format(tid))
        if job is None or job.is_finished or job.is_failed:
            raise Exception("Task is not being annotated currently")
        elif "cancel" not in job.meta:
            job.meta["cancel"] = True
            job.save()

    except Exception as ex:
        try:
            slogger.task[tid].exception("cannot cancel auto annotation for task #{}".format(tid), exc_info=True)
        except Exception as logger_ex:
            slogger.glob.exception("an exception occurred during the cancel auto annotation request for task {}: {}".format(tid, str(logger_ex)), exc_info=True)
        return HttpResponseBadRequest(str(ex))

    return HttpResponse()
@login_required
@permission_required(perm=["auto_annotation.model.create"], raise_exception=True)
def create_model(request):
    if request.method != 'POST':
        return HttpResponseBadRequest("Only POST requests are accepted")

    try:
        params = request.POST
        storage = params["storage"]
        name = params["name"]
        is_shared = params["shared"].lower() == "true"
        if is_shared and not has_admin_role(request.user):
            raise Exception("Only admin can create shared models")

        files = request.FILES if storage == "local" else params
        model = files["xml"]
        weights = files["bin"]
        labelmap = files["json"]
        interpretation_script = files["py"]
        owner = request.user

        rq_id = model_manager.create_or_update(
            dl_model_id=None,
            name=name,
            model_file=model,
            weights_file=weights,
            labelmap_file=labelmap,
            interpretation_file=interpretation_script,
            owner=owner,
            storage=storage,
            is_shared=is_shared,
        )

        return JsonResponse({"id": rq_id})
    except Exception as e:
        return HttpResponseBadRequest(str(e))
@login_required
@permission_required(perm=["auto_annotation.model.update"],
    fn=objectgetter(AnnotationModel, "mid"), raise_exception=True)
def update_model(request, mid):
    if request.method != 'POST':
        return HttpResponseBadRequest("Only POST requests are accepted")

    try:
        params = request.POST
        storage = params["storage"]
        name = params.get("name")
        is_shared = params.get("shared")
        is_shared = is_shared.lower() == "true" if is_shared else None
        if is_shared and not has_admin_role(request.user):
            raise Exception("Only admin can share models")
        files = request.FILES
        model = files.get("xml")
        weights = files.get("bin")
        labelmap = files.get("json")
        interpretation_script = files.get("py")

        rq_id = model_manager.create_or_update(
            dl_model_id=mid,
            name=name,
            model_file=model,
            weights_file=weights,
            labelmap_file=labelmap,
            interpretation_file=interpretation_script,
            owner=None,
            storage=storage,
            is_shared=is_shared,
        )

        return JsonResponse({"id": rq_id})
    except Exception as e:
        return HttpResponseBadRequest(str(e))
@login_required
@permission_required(perm=["auto_annotation.model.delete"],
    fn=objectgetter(AnnotationModel, "mid"), raise_exception=True)
def delete_model(request, mid):
    if request.method != 'DELETE':
        return HttpResponseBadRequest("Only DELETE requests are accepted")
    model_manager.delete(mid)
    return HttpResponse()
@api_view(['POST'])
@login_required
def get_meta_info(request):
    try:
        tids = request.data
        response = {
            "admin": has_admin_role(request.user),
            "models": [],
            "run": {},
        }
        dl_model_list = list(AnnotationModel.objects.filter(
            Q(owner=request.user) | Q(primary=True) | Q(shared=True)).order_by('-created_date'))
        for dl_model in dl_model_list:
            labels = []
            if dl_model.labelmap_file and os.path.exists(dl_model.labelmap_file.name):
                with dl_model.labelmap_file.open('r') as f:
                    labels = list(json.load(f)["label_map"].values())

            response["models"].append({
                "id": dl_model.id,
                "name": dl_model.name,
                "primary": dl_model.primary,
                "uploadDate": dl_model.created_date,
                "updateDate": dl_model.updated_date,
                "labels": labels,
                "owner": dl_model.owner.id,
            })

        queue = django_rq.get_queue("low")
        for tid in tids:
            rq_id = "auto_annotation.run.{}".format(tid)
            job = queue.fetch_job(rq_id)
            if job is not None:
                response["run"][tid] = {
                    "status": job.get_status(),
                    "rq_id": rq_id,
                }

        return JsonResponse(response)
    except Exception as e:
        return HttpResponseBadRequest(str(e))
@login_required
@permission_required(perm=["engine.task.change"],
    fn=objectgetter(TaskModel, "tid"), raise_exception=True)
@permission_required(perm=["auto_annotation.model.access"],
    fn=objectgetter(AnnotationModel, "mid"), raise_exception=True)
def start_annotation(request, mid, tid):
    slogger.glob.info("auto annotation create request for task {} via DL model {}".format(tid, mid))
    try:
        db_task = TaskModel.objects.get(pk=tid)
        queue = django_rq.get_queue("low")
        job = queue.fetch_job("auto_annotation.run.{}".format(tid))
        if job is not None and (job.is_started or job.is_queued):
            raise Exception("The process is already running")

        data = json.loads(request.body.decode('utf-8'))

        should_reset = data["reset"]
        user_defined_labels_mapping = data["labels"]

        dl_model = AnnotationModel.objects.get(pk=mid)

        model_file_path = dl_model.model_file.name
        weights_file_path = dl_model.weights_file.name
        labelmap_file = dl_model.labelmap_file.name
        convertation_file_path = dl_model.interpretation_file.name
        restricted = not has_admin_role(dl_model.owner)

        db_labels = db_task.label_set.prefetch_related("attributespec_set").all()
        db_attributes = {db_label.id:
            {db_attr.name: db_attr.id for db_attr in db_label.attributespec_set.all()}
            for db_label in db_labels}
        db_labels = {db_label.name: db_label.id for db_label in db_labels}

        model_labels = {value: key for key, value in load_labelmap(labelmap_file).items()}

        labels_mapping = {}
        for user_model_label, user_db_label in user_defined_labels_mapping.items():
            if user_model_label in model_labels and user_db_label in db_labels:
                labels_mapping[int(model_labels[user_model_label])] = db_labels[user_db_label]

        if not labels_mapping:
            raise Exception("No labels found for annotation")

        rq_id = "auto_annotation.run.{}".format(tid)
        queue.enqueue_call(func=model_manager.run_inference_thread,
            args=(
                tid,
                model_file_path,
                weights_file_path,
                labels_mapping,
                db_attributes,
                convertation_file_path,
                should_reset,
                request.user,
                restricted,
            ),
            job_id=rq_id,
            timeout=604800) # 7 days

        slogger.task[tid].info("auto annotation job enqueued")

    except Exception as ex:
        try:
            slogger.task[tid].exception("an exception occurred during the annotation request", exc_info=True)
        except Exception as logger_ex:
            slogger.glob.exception("an exception occurred during the create auto annotation request for task {}: {}".format(tid, str(logger_ex)), exc_info=True)
        return HttpResponseBadRequest(str(ex))

    return JsonResponse({"id": rq_id})
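The label-mapping step in `start_annotation` keeps only the pairs that exist on both sides: a model class name the network actually predicts, and a label defined on the task. Extracted as a pure function with hypothetical sample data:

```python
def build_labels_mapping(user_mapping, model_labels, db_labels):
    """Map model class id -> task label id, dropping pairs unknown on either side.

    user_mapping: {model_label_name: db_label_name} as submitted by the user
    model_labels: {model_label_name: class_id_str} inverted from the labelmap file
    db_labels:    {db_label_name: db_label_id} from the task
    """
    mapping = {}
    for user_model_label, user_db_label in user_mapping.items():
        if user_model_label in model_labels and user_db_label in db_labels:
            mapping[int(model_labels[user_model_label])] = db_labels[user_db_label]
    return mapping
```

Silently dropping unknown pairs (here, a "dog" entry with no counterpart) is what makes the later `if not labels_mapping` check necessary: an entirely mismatched mapping ends up empty and the request is rejected.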
@login_required
def check(request, rq_id):
    data = {}
    try:
        target_queue = "low" if "auto_annotation.run" in rq_id else "default"
        queue = django_rq.get_queue(target_queue)
        job = queue.fetch_job(rq_id)
        if job is not None and "cancel" in job.meta:
            return JsonResponse({"status": "finished"})
        if job is None:
            data["status"] = "unknown"
        elif job.is_queued:
            data["status"] = "queued"
        elif job.is_started:
            data["status"] = "started"
            data["progress"] = job.meta["progress"] if "progress" in job.meta else ""
        elif job.is_finished:
            data["status"] = "finished"
            job.delete()
        else:
            data["status"] = "failed"
            data["error"] = job.exc_info
            job.delete()

    except Exception:
        data["status"] = "unknown"

    return JsonResponse(data)
@@ -1,4 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
@@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT


# Register your models here.
@@ -1,11 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.apps import AppConfig


class AutoSegmentationConfig(AppConfig):
    name = 'auto_segmentation'
@@ -1,5 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
@@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT


# Create your models here.
@@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT


# Create your tests here.
@@ -1,14 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.urls import path
from . import views

urlpatterns = [
    path('create/task/<int:tid>', views.create),
    path('check/task/<int:tid>', views.check),
    path('cancel/task/<int:tid>', views.cancel),
    path('meta/get', views.get_meta_info),
]
@@ -1,310 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT


from django.http import HttpResponse, JsonResponse, HttpResponseBadRequest
from rest_framework.decorators import api_view
from rules.contrib.views import permission_required, objectgetter
from cvat.apps.authentication.decorators import login_required
from cvat.apps.dataset_manager.task import put_task_data
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine.serializers import LabeledDataSerializer
from cvat.apps.engine.frame_provider import FrameProvider

import django_rq
import os
import rq

import numpy as np

from cvat.apps.engine.log import slogger

import sys
import skimage.io
from skimage.measure import find_contours, approximate_polygon
def run_tensorflow_auto_segmentation(frame_provider, labels_mapping, treshold):
    def _convert_to_int(boolean_mask):
        return boolean_mask.astype(np.uint8)

    def _convert_to_segmentation(mask):
        contours = find_contours(mask, 0.5)
        # only one contour exists in our case
        contour = contours[0]
        contour = np.flip(contour, axis=1)
        # Approximate the contour and reduce the number of points
        contour = approximate_polygon(contour, tolerance=2.5)
        segmentation = contour.ravel().tolist()
        return segmentation

    ## INITIALIZATION

    # workaround: tf.placeholder() is not compatible with eager execution
    # https://github.com/tensorflow/tensorflow/issues/18165
    import tensorflow as tf
    tf.compat.v1.disable_eager_execution()

    # Root directory of the project
    ROOT_DIR = os.environ.get('AUTO_SEGMENTATION_PATH')
    # Import Mask RCNN
    sys.path.append(ROOT_DIR)  # To find local version of the library
    import mrcnn.model as modellib

    # Import COCO config
    sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # To find local version
    import coco

    # Directory to save logs and trained model
    MODEL_DIR = os.path.join(ROOT_DIR, "logs")

    # Local path to trained weights file
    COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
    if COCO_MODEL_PATH is None:
        raise OSError('Model path env not found in the system.')
    job = rq.get_current_job()

    ## CONFIGURATION

    class InferenceConfig(coco.CocoConfig):
        # Set batch size to 1 since we'll be running inference on
        # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

    # Print config details
    config = InferenceConfig()
    config.display()

    ## CREATE MODEL AND LOAD TRAINED WEIGHTS

    # Create model object in inference mode.
    model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
    # Load weights trained on MS-COCO
    model.load_weights(COCO_MODEL_PATH, by_name=True)

    ## RUN OBJECT DETECTION
    result = {}
    frames = frame_provider.get_frames(frame_provider.Quality.ORIGINAL)
    for image_num, (image_bytes, _) in enumerate(frames):
        job.refresh()
        if 'cancel' in job.meta:
            del job.meta['cancel']
            job.save()
            return None
        job.meta['progress'] = image_num * 100 / len(frame_provider)
        job.save_meta()

        image = skimage.io.imread(image_bytes)

        # for multiple image detection, "batch size" must be equal to number of images
        r = model.detect([image], verbose=1)

        r = r[0]
        # "r['rois'][index]" gives the bounding box around the object
        for index, c_id in enumerate(r['class_ids']):
            if c_id in labels_mapping.keys():
                if r['scores'][index] >= treshold:
                    mask = _convert_to_int(r['masks'][:, :, index])
                    segmentation = _convert_to_segmentation(mask)
                    label = labels_mapping[c_id]
                    if label not in result:
                        result[label] = []
                    result[label].append(
                        [image_num, segmentation])

    return result
def convert_to_cvat_format(data):
    result = {
        "tracks": [],
        "shapes": [],
        "tags": [],
        "version": 0,
    }

    for label in data:
        segments = data[label]
        for segment in segments:
            result['shapes'].append({
                "type": "polygon",
                "label_id": label,
                "frame": segment[0],
                "points": segment[1],
                "z_order": 0,
                "group": None,
                "occluded": False,
                "attributes": [],
            })

    return result
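The per-segment conversion above can be checked in isolation. A standalone mirror of the inner loop, flattening the `{label_id: [[frame, points], ...]}` structure into CVAT polygon shape dicts:

```python
def segments_to_shapes(data):
    """Standalone mirror of convert_to_cvat_format's inner loop:
    one polygon shape dict per (frame, points) segment, keyed by label id."""
    shapes = []
    for label, segments in data.items():
        for frame, points in segments:
            shapes.append({
                "type": "polygon",
                "label_id": label,
                "frame": frame,
                "points": points,
                "z_order": 0,
                "group": None,
                "occluded": False,
                "attributes": [],
            })
    return shapes
```

Each detected instance becomes an independent ungrouped polygon; tracks and tags are never produced by this pipeline, which is why they stay empty in the full payload.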
def create_thread(tid, labels_mapping, user):
    try:
        # Objects with detection confidence above this threshold are returned
        TRESHOLD = 0.5
        # Init rq job
        job = rq.get_current_job()
        job.meta['progress'] = 0
        job.save_meta()
        # Get job indexes and segment length
        db_task = TaskModel.objects.get(pk=tid)
        # Get image list
        frame_provider = FrameProvider(db_task.data)

        # Run auto segmentation by tf
        result = None
        slogger.glob.info("auto segmentation with tensorflow framework for task {}".format(tid))
        result = run_tensorflow_auto_segmentation(frame_provider, labels_mapping, TRESHOLD)

        if result is None:
            slogger.glob.info('auto segmentation for task {} canceled by user'.format(tid))
            return

        # Modify data format and save
        result = convert_to_cvat_format(result)
        serializer = LabeledDataSerializer(data=result)
        if serializer.is_valid(raise_exception=True):
            put_task_data(tid, result)
        slogger.glob.info('auto segmentation for task {} done'.format(tid))
    except Exception as ex:
        try:
            slogger.task[tid].exception('an exception occurred during auto segmentation of the task', exc_info=True)
        except Exception:
            slogger.glob.exception('an exception occurred during auto segmentation of task {}'.format(tid), exc_info=True)
        raise ex
@api_view(['POST'])
@login_required
def get_meta_info(request):
    try:
        queue = django_rq.get_queue('low')
        tids = request.data
        result = {}
        for tid in tids:
            job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
            if job is not None:
                result[tid] = {
                    "active": job.is_queued or job.is_started,
                    "success": not job.is_failed
                }

        return JsonResponse(result)
    except Exception as ex:
        slogger.glob.exception('an exception occurred during the tf meta request', exc_info=True)
        return HttpResponseBadRequest(str(ex))
@login_required
@permission_required(perm=['engine.task.change'],
    fn=objectgetter(TaskModel, 'tid'), raise_exception=True)
def create(request, tid):
    slogger.glob.info('auto segmentation create request for task {}'.format(tid))
    try:
        db_task = TaskModel.objects.get(pk=tid)
        queue = django_rq.get_queue('low')
        job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
        if job is not None and (job.is_started or job.is_queued):
            raise Exception("The process is already running")

        db_labels = db_task.label_set.prefetch_related('attributespec_set').all()
        db_labels = {db_label.id: db_label.name for db_label in db_labels}

        # COCO Labels
        auto_segmentation_labels = { "BG": 0,
            "person": 1, "bicycle": 2, "car": 3, "motorcycle": 4, "airplane": 5,
            "bus": 6, "train": 7, "truck": 8, "boat": 9, "traffic_light": 10,
            "fire_hydrant": 11, "stop_sign": 12, "parking_meter": 13, "bench": 14,
            "bird": 15, "cat": 16, "dog": 17, "horse": 18, "sheep": 19, "cow": 20,
            "elephant": 21, "bear": 22, "zebra": 23, "giraffe": 24, "backpack": 25,
            "umbrella": 26, "handbag": 27, "tie": 28, "suitcase": 29, "frisbee": 30,
            "skis": 31, "snowboard": 32, "sports_ball": 33, "kite": 34, "baseball_bat": 35,
            "baseball_glove": 36, "skateboard": 37, "surfboard": 38, "tennis_racket": 39,
            "bottle": 40, "wine_glass": 41, "cup": 42, "fork": 43, "knife": 44, "spoon": 45,
            "bowl": 46, "banana": 47, "apple": 48, "sandwich": 49, "orange": 50, "broccoli": 51,
            "carrot": 52, "hot_dog": 53, "pizza": 54, "donut": 55, "cake": 56, "chair": 57,
            "couch": 58, "potted_plant": 59, "bed": 60, "dining_table": 61, "toilet": 62,
            "tv": 63, "laptop": 64, "mouse": 65, "remote": 66, "keyboard": 67, "cell_phone": 68,
            "microwave": 69, "oven": 70, "toaster": 71, "sink": 72, "refrigerator": 73,
            "book": 74, "clock": 75, "vase": 76, "scissors": 77, "teddy_bear": 78, "hair_drier": 79,
            "toothbrush": 80
        }

        labels_mapping = {}
        for key, labels in db_labels.items():
            if labels in auto_segmentation_labels.keys():
                labels_mapping[auto_segmentation_labels[labels]] = key

        if not labels_mapping:
            raise Exception('No labels found for auto segmentation')

        # Run the auto segmentation job
        queue.enqueue_call(func=create_thread,
            args=(tid, labels_mapping, request.user),
            job_id='auto_segmentation.create/{}'.format(tid),
            timeout=604800) # 7 days

        slogger.task[tid].info('tensorflow segmentation job enqueued with labels {}'.format(labels_mapping))

    except Exception as ex:
        try:
            slogger.task[tid].exception("an exception occurred during the tensorflow segmentation request", exc_info=True)
        except Exception:
            pass
        return HttpResponseBadRequest(str(ex))

    return HttpResponse()
@login_required
@permission_required(perm=['engine.task.access'],
    fn=objectgetter(TaskModel, 'tid'), raise_exception=True)
def check(request, tid):
    data = {}
    try:
        queue = django_rq.get_queue('low')
        job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
        if job is not None and 'cancel' in job.meta:
            return JsonResponse({'status': 'finished'})
        if job is None:
            data['status'] = 'unknown'
        elif job.is_queued:
            data['status'] = 'queued'
        elif job.is_started:
            data['status'] = 'started'
            data['progress'] = job.meta['progress']
        elif job.is_finished:
            data['status'] = 'finished'
            job.delete()
        else:
            data['status'] = 'failed'
            data['stderr'] = job.exc_info
            job.delete()

    except Exception:
        data['status'] = 'unknown'

    return JsonResponse(data)
@login_required
|
|
||||||
@permission_required(perm=['engine.task.change'],
|
|
||||||
fn=objectgetter(TaskModel, 'tid'), raise_exception=True)
|
|
||||||
def cancel(request, tid):
|
|
||||||
try:
|
|
||||||
queue = django_rq.get_queue('low')
|
|
||||||
job = queue.fetch_job('auto_segmentation.create/{}'.format(tid))
|
|
||||||
if job is None or job.is_finished or job.is_failed:
|
|
||||||
raise Exception('Task is not being segmented currently')
|
|
||||||
elif 'cancel' not in job.meta:
|
|
||||||
job.meta['cancel'] = True
|
|
||||||
job.save()
|
|
||||||
|
|
||||||
except Exception as ex:
|
|
||||||
try:
|
|
||||||
slogger.task[tid].exception("cannot cancel tensorflow segmentation for task #{}".format(tid), exc_info=True)
|
|
||||||
except Exception:
|
|
||||||
pass
|
|
||||||
return HttpResponseBadRequest(str(ex))
|
|
||||||
|
|
||||||
return HttpResponse()
|
|
||||||
@@ -1,31 +0,0 @@
# Semi-Automatic Segmentation with [Deep Extreme Cut](http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr/)

## About the application

The application allows you to use deep learning models for semi-automatic semantic and instance segmentation.
You can get a segmentation polygon from four (or more) extreme points of an object.
The application uses the pre-trained DEXTR model, converted to the Inference Engine format.

We are grateful to K.K. Maninis, S. Caelles, J. Pont-Tuset, and L. Van Gool, who permitted the use of their models in our tool.

## Build docker image
```bash
# OpenVINO component is also needed
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml -f cvat/apps/dextr_segmentation/docker-compose.dextr.yml build
```

## Run docker container
```bash
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml -f cvat/apps/dextr_segmentation/docker-compose.dextr.yml up -d
```

## Using

1. Open a job
2. Select "Auto Segmentation" in the list of shapes
3. Start the draw mode as usual (press the "Create Shape" button or the "N" shortcut)
4. Click four to six (or more, if needed) extreme points of an object
5. Finish the draw mode as usual (by the shortcut or by pressing the "Stop Creation" button)
6. Wait a moment and you will get a class-agnostic annotation polygon
7. You can cancel an annotation request if it takes too long
   (for example, when it is queued to an rq worker and all workers are busy)
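Step 7 of the usage instructions relies on polling the check endpoint until the request leaves the queue. A minimal sketch of that polling loop, assuming the status payloads produced by the server (`fetch` is a hypothetical stand-in for an HTTP GET to `/dextr/check/<jid>`, so the logic stays independent of any HTTP client):

```python
def poll_status(fetch, max_polls=10):
    """Poll a /dextr/check-style endpoint until the job leaves the
    'queued'/'started' states; return the final status payload.

    `fetch` is any callable returning the JSON body as a dict
    (a hypothetical stand-in for the real HTTP request).
    """
    for _ in range(max_polls):
        data = fetch()
        if data.get("status") not in ("queued", "started"):
            return data
    # Give up after max_polls attempts
    return {"status": "unknown"}
```

In the real plugin the same loop runs on a 1-second `setTimeout` in the browser rather than a counted loop.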
@@ -1,7 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from cvat.settings.base import JS_3RDPARTY

JS_3RDPARTY['engine'] = JS_3RDPARTY.get('engine', []) + ['dextr_segmentation/js/enginePlugin.js']
@@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.apps import AppConfig

class DextrSegmentationConfig(AppConfig):
    name = 'dextr_segmentation'
@@ -1,119 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT

from cvat.apps.auto_annotation.inference_engine import make_plugin_or_core, make_network
from cvat.apps.engine.frame_provider import FrameProvider

import os
import cv2
import PIL
import numpy as np

_IE_CPU_EXTENSION = os.getenv("IE_CPU_EXTENSION", "libcpu_extension_avx2.so")
_IE_PLUGINS_PATH = os.getenv("IE_PLUGINS_PATH", None)

_DEXTR_MODEL_DIR = os.getenv("DEXTR_MODEL_DIR", None)
_DEXTR_PADDING = 50
_DEXTR_TRESHOLD = 0.9
_DEXTR_SIZE = 512

class DEXTR_HANDLER:
    def __init__(self):
        self._plugin = None
        self._network = None
        self._exec_network = None
        self._input_blob = None
        self._output_blob = None
        if not _DEXTR_MODEL_DIR:
            raise Exception("DEXTR_MODEL_DIR is not defined")

    def handle(self, db_data, frame, points):
        # Lazy initialization
        if not self._plugin:
            self._plugin = make_plugin_or_core()
            self._network = make_network(os.path.join(_DEXTR_MODEL_DIR, 'dextr.xml'),
                os.path.join(_DEXTR_MODEL_DIR, 'dextr.bin'))
            self._input_blob = next(iter(self._network.inputs))
            self._output_blob = next(iter(self._network.outputs))
            if getattr(self._plugin, 'load_network', False):
                self._exec_network = self._plugin.load_network(self._network, 'CPU')
            else:
                self._exec_network = self._plugin.load(network=self._network)

        frame_provider = FrameProvider(db_data)
        image = frame_provider.get_frame(frame, frame_provider.Quality.ORIGINAL)
        image = PIL.Image.open(image[0])
        numpy_image = np.array(image)
        points = np.asarray([[int(p["x"]), int(p["y"])] for p in points], dtype=int)

        # Padding mustn't be more than the closest distance to an edge of the image
        [height, width] = numpy_image.shape[:2]
        x_values = points[:, 0]
        y_values = points[:, 1]
        [min_x, max_x] = [np.min(x_values), np.max(x_values)]
        [min_y, max_y] = [np.min(y_values), np.max(y_values)]
        padding = min(min_x, min_y, width - max_x, height - max_y, _DEXTR_PADDING)
        bounding_box = (
            max(min_x - padding, 0),
            max(min_y - padding, 0),
            min(max_x + padding, width - 1),
            min(max_y + padding, height - 1)
        )

        # Prepare the image
        numpy_cropped = np.array(image.crop(bounding_box))
        resized = cv2.resize(numpy_cropped, (_DEXTR_SIZE, _DEXTR_SIZE),
            interpolation=cv2.INTER_CUBIC).astype(np.float32)

        # Make a heatmap
        points = points - [min_x, min_y] + [padding, padding]
        points = (points * [_DEXTR_SIZE / numpy_cropped.shape[1], _DEXTR_SIZE / numpy_cropped.shape[0]]).astype(int)
        heatmap = np.zeros(shape=resized.shape[:2], dtype=np.float64)
        for point in points:
            gaussian_x_axis = np.arange(0, _DEXTR_SIZE, 1, float) - point[0]
            gaussian_y_axis = np.arange(0, _DEXTR_SIZE, 1, float)[:, np.newaxis] - point[1]
            gaussian = np.exp(-4 * np.log(2) * ((gaussian_x_axis ** 2 + gaussian_y_axis ** 2) / 100)).astype(np.float64)
            heatmap = np.maximum(heatmap, gaussian)
        cv2.normalize(heatmap, heatmap, 0, 255, cv2.NORM_MINMAX)

        # Concatenate the image and the heatmap
        input_dextr = np.concatenate((resized, heatmap[:, :, np.newaxis].astype(resized.dtype)), axis=2)
        input_dextr = input_dextr.transpose((2, 0, 1))

        pred = self._exec_network.infer(inputs={self._input_blob: input_dextr[np.newaxis, ...]})[self._output_blob][0, 0, :, :]
        pred = cv2.resize(pred, tuple(reversed(numpy_cropped.shape[:2])), interpolation=cv2.INTER_CUBIC)
        result = np.zeros(numpy_image.shape[:2])
        result[bounding_box[1]:bounding_box[1] + pred.shape[0], bounding_box[0]:bounding_box[0] + pred.shape[1]] = pred > _DEXTR_TRESHOLD

        # Convert the mask to a polygon
        result = np.array(result, dtype=np.uint8)
        cv2.normalize(result, result, 0, 255, cv2.NORM_MINMAX)
        contours = None
        if int(cv2.__version__.split('.')[0]) > 3:
            contours = cv2.findContours(result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)[0]
        else:
            contours = cv2.findContours(result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)[1]

        contours = max(contours, key=lambda arr: arr.size)
        if contours.shape.count(1):
            contours = np.squeeze(contours)
        if contours.size < 3 * 2:
            raise Exception('Fewer than three points have been detected. Cannot build a polygon.')

        result = ""
        for point in contours:
            result += "{},{} ".format(int(point[0]), int(point[1]))
        result = result[:-1]

        return result

    def __del__(self):
        if self._exec_network:
            del self._exec_network
        if self._network:
            del self._network
        if self._plugin:
            del self._plugin
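The deleted handler above clamps the crop padding to the closest image border before building the bounding box. The same arithmetic in isolation, as a plain-list sketch (not the original implementation, which operates on NumPy arrays):

```python
def crop_box(points, width, height, padding=50):
    """Return an (x0, y0, x1, y1) crop around extreme points.

    The padding is clamped to the smallest distance between the
    points and any image border, so the crop never leaves the image.
    `points` is a list of (x, y) tuples; 50 mirrors _DEXTR_PADDING.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Effective padding: at most `padding`, at most the closest edge distance
    pad = min(min(xs), min(ys), width - max(xs), height - max(ys), padding)
    return (max(min(xs) - pad, 0), max(min(ys) - pad, 0),
            min(max(xs) + pad, width - 1), min(max(ys) + pad, height - 1))
```

For points already near an edge, the padding silently shrinks instead of producing out-of-range coordinates.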
@@ -1,14 +0,0 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#

version: "2.3"

services:
  cvat:
    build:
      context: .
      args:
        WITH_DEXTR: "yes"
@@ -1,214 +0,0 @@
/*
 * Copyright (C) 2018 Intel Corporation
 *
 * SPDX-License-Identifier: MIT
 */

/* global
    AREA_TRESHOLD:false
    PolyShapeModel:false
    ShapeCreatorModel:true
    ShapeCreatorView:true
    showMessage:false
*/

/* eslint no-underscore-dangle: 0 */

window.addEventListener('DOMContentLoaded', () => {
    $('<option value="auto_segmentation" class="regular"> Auto Segmentation </option>').appendTo('#shapeTypeSelector');

    const dextrCancelButtonId = 'dextrCancelButton';
    const dextrOverlay = $(`
        <div class="modal hidden force-modal">
            <div class="modal-content" style="width: 300px; height: 70px;">
                <center> <label class="regular h2"> Segmentation request is being processed </label> </center>
                <center style="margin-top: 5px;">
                    <button id="${dextrCancelButtonId}" class="regular h2" style="width: 250px;"> Cancel </button>
                </center>
            </div>
        </div>`).appendTo('body');

    const dextrCancelButton = $(`#${dextrCancelButtonId}`);
    dextrCancelButton.on('click', () => {
        dextrCancelButton.prop('disabled', true);
        $.ajax({
            url: `/dextr/cancel/${window.cvat.job.id}`,
            type: 'GET',
            error: (errorData) => {
                const message = `Can not cancel segmentation. Code: ${errorData.status}.`
                    + ` Message: ${errorData.responseText || errorData.statusText}`;
                showMessage(message);
            },
            complete: () => {
                dextrCancelButton.prop('disabled', false);
            },
        });
    });

    function ShapeCreatorModelWrapper(OriginalClass) {
        // The constructor patches some properties of a created instance
        function constructorDecorator(...args) {
            const instance = new OriginalClass(...args);

            // Decorator for the defaultType property
            Object.defineProperty(instance, 'defaultType', {
                get: () => instance._defaultType,
                set: (type) => {
                    if (!['box', 'box_by_4_points', 'points', 'polygon',
                        'polyline', 'auto_segmentation', 'cuboid'].includes(type)) {
                        throw Error(`Unknown shape type found: ${type}`);
                    }
                    instance._defaultType = type;
                },
            });

            // Decorator for the finish() method
            const decoratedFinish = instance.finish;
            instance.finish = (result) => {
                if (instance._defaultType === 'auto_segmentation') {
                    try {
                        instance._defaultType = 'polygon';
                        decoratedFinish.call(instance, result);
                    } finally {
                        instance._defaultType = 'auto_segmentation';
                    }
                } else {
                    decoratedFinish.call(instance, result);
                }
            };

            return instance;
        }

        constructorDecorator.prototype = OriginalClass.prototype;
        constructorDecorator.prototype.constructor = constructorDecorator;
        return constructorDecorator;
    }


    function ShapeCreatorViewWrapper(OriginalClass) {
        // The constructor patches some properties of each instance
        function constructorDecorator(...args) {
            const instance = new OriginalClass(...args);

            // Decorator for the _create() method.
            // We save the original _create() and use it if type != 'auto_segmentation'
            const decoratedCreate = instance._create;
            instance._create = () => {
                if (instance._type !== 'auto_segmentation') {
                    decoratedCreate.call(instance);
                    return;
                }

                instance._drawInstance = instance._frameContent.polyline().draw({ snapToGrid: 0.1 }).addClass('shapeCreation').attr({
                    'stroke-width': 0,
                    z_order: Number.MAX_SAFE_INTEGER,
                });
                instance._createPolyEvents();

                /* The _createPolyEvents() method has added a "drawdone"
                 * event handler which is invalid for this case.
                 * For that reason we remove the handler and
                 * attach a valid handler instead.
                 */
                instance._drawInstance.off('drawdone').on('drawdone', (e) => {
                    let actualPoints = window.cvat.translate.points.canvasToActual(e.target.getAttribute('points'));
                    actualPoints = PolyShapeModel.convertStringToNumberArray(actualPoints);

                    if (actualPoints.length < 4) {
                        showMessage('It is necessary to specify at least four extreme points for an object');
                        instance._controller.switchCreateMode(true);
                        return;
                    }

                    const { frameWidth } = window.cvat.player.geometry;
                    const { frameHeight } = window.cvat.player.geometry;
                    for (let idx = 0; idx < actualPoints.length; idx += 1) {
                        const point = actualPoints[idx];
                        point.x = Math.clamp(point.x, 0, frameWidth);
                        point.y = Math.clamp(point.y, 0, frameHeight);
                    }

                    e.target.setAttribute('points',
                        window.cvat.translate.points.actualToCanvas(
                            PolyShapeModel.convertNumberArrayToString(actualPoints),
                        ));

                    const polybox = e.target.getBBox();
                    const area = polybox.width * polybox.height;

                    if (area > AREA_TRESHOLD) {
                        $.ajax({
                            url: `/dextr/create/${window.cvat.job.id}`,
                            type: 'POST',
                            data: JSON.stringify({
                                frame: window.cvat.player.frames.current,
                                points: actualPoints,
                            }),
                            contentType: 'application/json',
                            success: () => {
                                function intervalCallback() {
                                    $.ajax({
                                        url: `/dextr/check/${window.cvat.job.id}`,
                                        type: 'GET',
                                        success: (jobData) => {
                                            if (['queued', 'started'].includes(jobData.status)) {
                                                if (jobData.status === 'queued') {
                                                    dextrCancelButton.prop('disabled', false);
                                                }
                                                setTimeout(intervalCallback, 1000);
                                            } else {
                                                dextrOverlay.addClass('hidden');
                                                if (jobData.status === 'finished') {
                                                    if (jobData.result) {
                                                        instance._controller.finish({ points: jobData.result }, 'polygon');
                                                    }
                                                } else if (jobData.status === 'failed') {
                                                    const message = `Segmentation has failed. Error: '${jobData.stderr}'`;
                                                    showMessage(message);
                                                } else {
                                                    let message = `Check segmentation request returned "${jobData.status}" status.`;
                                                    if (jobData.stderr) {
                                                        message += ` Error: ${jobData.stderr}`;
                                                    }
                                                    showMessage(message);
                                                }
                                            }
                                        },
                                        error: (errorData) => {
                                            dextrOverlay.addClass('hidden');
                                            const message = `Can not check segmentation. Code: ${errorData.status}.`
                                                + ` Message: ${errorData.responseText || errorData.statusText}`;
                                            showMessage(message);
                                        },
                                    });
                                }

                                dextrCancelButton.prop('disabled', true);
                                dextrOverlay.removeClass('hidden');
                                setTimeout(intervalCallback, 1000);
                            },
                            error: (errorData) => {
                                const message = `Can not create a segmentation request. Code: ${errorData.status}.`
                                    + ` Message: ${errorData.responseText || errorData.statusText}`;
                                showMessage(message);
                            },
                        });
                    }

                    instance._controller.switchCreateMode(true);
                }); // end of "drawdone" handler
            }; // end of _create() method

            return instance;
        } // end of constructorDecorator()

        constructorDecorator.prototype = OriginalClass.prototype;
        constructorDecorator.prototype.constructor = constructorDecorator;
        return constructorDecorator;
    } // end of ShapeCreatorViewWrapper

    // Apply the patch to the classes
    ShapeCreatorModel = ShapeCreatorModelWrapper(ShapeCreatorModel);
    ShapeCreatorView = ShapeCreatorViewWrapper(ShapeCreatorView);
});
@@ -1,13 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.urls import path
from . import views

urlpatterns = [
    path('create/<int:jid>', views.create),
    path('cancel/<int:jid>', views.cancel),
    path('check/<int:jid>', views.check),
    path('enabled', views.enabled)
]
@@ -1,128 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.http import HttpResponse, HttpResponseBadRequest, JsonResponse
from cvat.apps.authentication.decorators import login_required
from rules.contrib.views import permission_required, objectgetter

from cvat.apps.engine.models import Job
from cvat.apps.engine.log import slogger
from cvat.apps.dextr_segmentation.dextr import DEXTR_HANDLER

import django_rq
import json
import rq

__RQ_QUEUE_NAME = "default"
__DEXTR_HANDLER = DEXTR_HANDLER()

def _dextr_thread(db_data, frame, points):
    job = rq.get_current_job()
    job.meta["result"] = __DEXTR_HANDLER.handle(db_data, frame, points)
    job.save_meta()


@login_required
@permission_required(perm=["engine.job.change"],
    fn=objectgetter(Job, "jid"), raise_exception=True)
def create(request, jid):
    try:
        data = json.loads(request.body.decode("utf-8"))

        points = data["points"]
        frame = int(data["frame"])
        username = request.user.username

        slogger.job[jid].info("create dextr request for the JOB: {} ".format(jid)
            + "by the USER: {} on the FRAME: {}".format(username, frame))

        db_data = Job.objects.select_related("segment__task__data").get(id=jid).segment.task.data

        queue = django_rq.get_queue(__RQ_QUEUE_NAME)
        rq_id = "dextr.create/{}/{}".format(jid, username)
        job = queue.fetch_job(rq_id)

        if job is not None and (job.is_started or job.is_queued):
            if "cancel" not in job.meta:
                raise Exception("A segmentation process is already running for the "
                    + "JOB: {} and the USER: {}".format(jid, username))
            else:
                job.delete()

        queue.enqueue_call(func=_dextr_thread,
            args=(db_data, frame, points),
            job_id=rq_id,
            timeout=15,
            ttl=30)

        return HttpResponse()
    except Exception as ex:
        slogger.job[jid].error("can't create a dextr request for the job {}".format(jid), exc_info=True)
        return HttpResponseBadRequest(str(ex))


@login_required
@permission_required(perm=["engine.job.change"],
    fn=objectgetter(Job, "jid"), raise_exception=True)
def cancel(request, jid):
    try:
        username = request.user.username
        slogger.job[jid].info("cancel dextr request for the JOB: {} ".format(jid)
            + "by the USER: {}".format(username))

        queue = django_rq.get_queue(__RQ_QUEUE_NAME)
        rq_id = "dextr.create/{}/{}".format(jid, username)
        job = queue.fetch_job(rq_id)

        if job is None or job.is_finished or job.is_failed:
            raise Exception("Segmentation isn't running now")
        elif "cancel" not in job.meta:
            job.meta["cancel"] = True
            job.save_meta()

        return HttpResponse()
    except Exception as ex:
        slogger.job[jid].error("can't cancel a dextr request for the job {}".format(jid), exc_info=True)
        return HttpResponseBadRequest(str(ex))


@login_required
@permission_required(perm=["engine.job.change"],
    fn=objectgetter(Job, "jid"), raise_exception=True)
def check(request, jid):
    try:
        username = request.user.username
        slogger.job[jid].info("check dextr request for the JOB: {} ".format(jid)
            + "by the USER: {}".format(username))

        queue = django_rq.get_queue(__RQ_QUEUE_NAME)
        rq_id = "dextr.create/{}/{}".format(jid, username)
        job = queue.fetch_job(rq_id)
        data = {}

        if job is None:
            data["status"] = "unknown"
        else:
            if "cancel" in job.meta:
                data["status"] = "finished"
            elif job.is_queued:
                data["status"] = "queued"
            elif job.is_started:
                data["status"] = "started"
            elif job.is_finished:
                data["status"] = "finished"
                data["result"] = job.meta["result"]
                job.delete()
            else:
                data["status"] = "failed"
                data["stderr"] = job.exc_info
                job.delete()

        return JsonResponse(data)
    except Exception as ex:
        slogger.job[jid].error("can't check a dextr request for the job {}".format(jid), exc_info=True)
        return HttpResponseBadRequest(str(ex))


def enabled(request):
    return HttpResponse()
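The `check()` view above maps rq job states to a small status payload. That mapping can be sketched as a pure function tested against a stub object (`StubJob` is a hypothetical stand-in for `rq.job.Job`; the real view also deletes finished jobs, which is omitted here):

```python
class StubJob:
    """Minimal stand-in for rq.job.Job, just enough for the mapping."""
    def __init__(self, meta=None, exc_info=None,
                 is_queued=False, is_started=False, is_finished=False):
        self.meta = meta or {}
        self.exc_info = exc_info
        self.is_queued = is_queued
        self.is_started = is_started
        self.is_finished = is_finished

def job_status(job):
    """Map an rq-like job to the payload returned by the check endpoint."""
    if job is None:
        return {"status": "unknown"}
    if "cancel" in job.meta:
        # A cancelled job is reported as finished to the client
        return {"status": "finished"}
    if job.is_queued:
        return {"status": "queued"}
    if job.is_started:
        return {"status": "started"}
    if job.is_finished:
        return {"status": "finished", "result": job.meta.get("result")}
    return {"status": "failed", "stderr": job.exc_info}
```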