Auto annotation using models in OpenVINO format (#233)

main
Andrey Zhavoronkov 7 years ago committed by Nikita Manovich
parent 1300d230b6
commit 4387fdaf46

@ -4,6 +4,25 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- OpenVINO auto annotation: it is possible to upload a custom model and annotate images automatically.
### Changed
-
### Deprecated
-
### Removed
-
### Fixed
-
### Security
-
## [0.3.0] - 2018-12-29
### Added
- Ability to copy Object URL and Frame URL via object context menu and player context menu respectively.

@ -126,6 +126,13 @@ volumes:
```
You can change the share device path to your actual share. For user convenience we have defined the environment variable $CVAT_SHARE_URL. This variable contains text (a URL, for example) which will be shown in the client share browser.
### Additional optional components
- [Support for Intel OpenVINO: auto annotation](components/openvino/README.md)
- [Analytics: management and monitoring of data annotation team](components/analytics/README.md)
- [TF Object Detection API: auto annotation](components/tf_annotation/README.md)
- [Support for NVIDIA GPUs](components/cuda/README.md)
## Questions
CVAT usage-related questions or unclear concepts can be posted in our [Gitter chat](https://gitter.im/opencv-cvat) for **quick replies** from contributors and other users.

@ -1,6 +0,0 @@
### There are some additional components for CVAT
* [NVIDIA CUDA](cuda/README.md)
* [OpenVINO](openvino/README.md)
* [Tensorflow Object Detector](tf_annotation/README.md)
* [Analytics](analytics/README.md)

@ -1,5 +1,7 @@
## Analytics for Computer Vision Annotation Tool (CVAT)
![](/cvat/apps/documentation/static/documentation/images/image097.jpg)
It is possible to proxy annotation logs from the client to ELK. To do that, run the command below:
### Build docker image

@ -0,0 +1,163 @@
## Auto annotation
### Description
The application will be enabled automatically if the OpenVINO™ component is
installed. It allows you to use custom models for auto annotation. Only models in
the OpenVINO™ toolkit format are supported. If you would like to annotate a
task with a custom model, please convert it to the intermediate representation
(IR) format via the Model Optimizer tool. See the [OpenVINO documentation](https://software.intel.com/en-us/articles/OpenVINO-InferEngine) for details.
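Converting a model can be scripted; below is a minimal sketch (the `mo.py` path and the file names are assumptions that depend on your OpenVINO installation and model):

```python
# Illustrative sketch: convert a TensorFlow frozen graph into IR format
# (*.xml and *.bin) by invoking the OpenVINO Model Optimizer.
# The mo.py path and file names are assumptions -- adjust them as needed.
import subprocess

subprocess.check_call([
    "python3", "/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py",
    "--input_model", "frozen_inference_graph.pb",  # trained model to convert
    "--output_dir", "ir_model",                    # model.xml and model.bin land here
])
```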
### Usage
To annotate a task with a custom model, you need to prepare 4 files:
1. __Model config__ (*.xml) - a text file with network configuration.
1. __Model weights__ (*.bin) - a binary file with trained weights.
1. __Label map__ (*.json) - a simple JSON file with a `label_map` dictionary-like
object that maps label numbers to string values. Values in `label_map` should
exactly match the labels of the annotation task; otherwise objects with
mismatched labels will be ignored.
Example:
```json
{
  "label_map": {
    "0": "background",
    "1": "aeroplane",
    "2": "bicycle",
    "3": "bird",
    "4": "boat",
    "5": "bottle",
    "6": "bus",
    "7": "car",
    "8": "cat",
    "9": "chair",
    "10": "cow",
    "11": "diningtable",
    "12": "dog",
    "13": "horse",
    "14": "motorbike",
    "15": "person",
    "16": "pottedplant",
    "17": "sheep",
    "18": "sofa",
    "19": "train",
    "20": "tvmonitor"
  }
}
```
1. __Interpretation script__ (*.py) - a file used to convert the network output layer
to a predefined structure which can be processed by CVAT. This code will be run
inside a restricted Python environment, but it is possible to use some
builtin functions like __str, int, float, max, min, range__.
Also, two variables are available in the scope:
- __detections__ - a list of dictionaries with detections for each frame:
* __frame_id__ - frame number
* __frame_height__ - frame height
* __frame_width__ - frame width
* __detections__ - output np.ndarray (See [ExecutableNetwork.infer](https://software.intel.com/en-us/articles/OpenVINO-InferEngine#inpage-nav-11-6-3) for details).
- __results__ - an instance of a Python class that accumulates the converted results.
The following methods should be used to add shapes:
```python
# xtl, ytl, xbr, ybr - expected values are float or int
# label - expected value is int
# frame_number - expected value is int
# attributes - dictionary of attribute_name: attribute_value pairs, for example {"confidence": "0.83"}
add_box(self, xtl, ytl, xbr, ybr, label, frame_number, attributes=None)

# points - list of (x, y) pairs of float or int, for example [(57.3, 100), (67, 102.7)]
# label - expected value is int
# frame_number - expected value is int
# attributes - dictionary of attribute_name: attribute_value pairs, for example {"confidence": "0.83"}
add_points(self, points, label, frame_number, attributes=None)
add_polygon(self, points, label, frame_number, attributes=None)
add_polyline(self, points, label, frame_number, attributes=None)
```
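For example, a polygon can be added like this (a minimal sketch; the coordinates and label id are made up):

```python
# Illustrative only: add one triangle polygon on frame 0.
# Points are absolute pixel coordinates; label 1 is a made-up id.
results.add_polygon(
    points=[(10.0, 10.0), (100.0, 10.0), (55.0, 90.0)],
    label=1,
    frame_number=0,
)
```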
### Examples
#### [Person-vehicle-bike-detection-crossroad-0078](https://github.com/opencv/open_model_zoo/blob/2018/intel_models/person-vehicle-bike-detection-crossroad-0078/description/person-vehicle-bike-detection-crossroad-0078.md) (OpenVINO toolkit)
__Note__: Model configuration (*.xml) and weights (*.bin) are available in the OpenVINO redistributable package.
__Task labels__: person vehicle non-vehicle
__label_map.json__:
```json
{
  "label_map": {
    "1": "person",
    "2": "vehicle",
    "3": "non-vehicle"
  }
}
```
__Interpretation script for SSD-based networks__:
```python
def clip(value):
    return max(min(1.0, value), 0.0)

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]

    for i in range(frame_results["detections"].shape[2]):
        confidence = frame_results["detections"][0, 0, i, 2]
        if confidence < 0.5:
            continue

        results.add_box(
            xtl=clip(frame_results["detections"][0, 0, i, 3]) * frame_width,
            ytl=clip(frame_results["detections"][0, 0, i, 4]) * frame_height,
            xbr=clip(frame_results["detections"][0, 0, i, 5]) * frame_width,
            ybr=clip(frame_results["detections"][0, 0, i, 6]) * frame_height,
            label=int(frame_results["detections"][0, 0, i, 1]),
            frame_number=frame_number,
            attributes={
                "confidence": "{:.2f}".format(confidence),
            },
        )
```
#### [Landmarks-regression-retail-0009](https://github.com/opencv/open_model_zoo/blob/2018/intel_models/landmarks-regression-retail-0009/description/landmarks-regression-retail-0009.md) (OpenVINO toolkit)
__Note__: Model configuration (*.xml) and weights (*.bin) are available in the OpenVINO redistributable package.
__Task labels__: left_eye right_eye tip_of_nose left_lip_corner right_lip_corner
__label_map.json__:
```json
{
  "label_map": {
    "0": "left_eye",
    "1": "right_eye",
    "2": "tip_of_nose",
    "3": "left_lip_corner",
    "4": "right_lip_corner"
  }
}
```
__Interpretation script__:
```python
def clip(value):
    return max(min(1.0, value), 0.0)

for frame_results in detections:
    frame_height = frame_results["frame_height"]
    frame_width = frame_results["frame_width"]
    frame_number = frame_results["frame_id"]

    for i in range(0, frame_results["detections"].shape[1], 2):
        x = frame_results["detections"][0, i, 0, 0]
        y = frame_results["detections"][0, i + 1, 0, 0]

        results.add_points(
            points=[(clip(x) * frame_width, clip(y) * frame_height)],
            label=i // 2,  # see the label map and the model output specification
            frame_number=frame_number,
        )
```

@ -0,0 +1,8 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from cvat.settings.base import JS_3RDPARTY
JS_3RDPARTY['dashboard'] = JS_3RDPARTY.get('dashboard', []) + ['auto_annotation/js/auto_annotation.js']

@ -0,0 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -0,0 +1,11 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.apps import AppConfig

class AutoAnnotationConfig(AppConfig):
    name = "auto_annotation"

@ -0,0 +1,24 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import cv2

class ImageLoader:
    def __init__(self, image_list):
        self.image_list = image_list

    def __getitem__(self, i):
        return self.image_list[i]

    def __iter__(self):
        for imagename in self.image_list:
            yield imagename, self._load_image(imagename)

    def __len__(self):
        return len(self.image_list)

    @staticmethod
    def _load_image(path_to_image):
        return cv2.imread(path_to_image)
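# Usage sketch (an illustrative addition, not part of the original file):
# the loader reports its length eagerly but reads frames lazily, one per
# iteration step, so a large task is never held in memory at once.
if __name__ == "__main__":
    loader = ImageLoader(["/data/0.jpg", "/data/1.jpg"])  # hypothetical paths
    print(len(loader))
    for path, image in loader:
        # cv2.imread returns None if a file is missing or unreadable
        print(path, None if image is None else image.shape)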

@ -0,0 +1,5 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -0,0 +1,59 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import json
import cv2
import os
import subprocess
from openvino.inference_engine import IENetwork, IEPlugin

class ModelLoader:
    def __init__(self, model, weights):
        self._model = model
        self._weights = weights

        IE_PLUGINS_PATH = os.getenv("IE_PLUGINS_PATH")
        if not IE_PLUGINS_PATH:
            raise OSError("The IE_PLUGINS_PATH environment variable is not set.")

        plugin = IEPlugin(device="CPU", plugin_dirs=[IE_PLUGINS_PATH])
        if self._check_instruction("avx2"):
            plugin.add_cpu_extension(os.path.join(IE_PLUGINS_PATH, "libcpu_extension_avx2.so"))
        elif self._check_instruction("sse4"):
            plugin.add_cpu_extension(os.path.join(IE_PLUGINS_PATH, "libcpu_extension_sse4.so"))
        else:
            raise Exception("Inference Engine requires support of AVX2 or SSE4.")

        network = IENetwork.from_ir(model=self._model, weights=self._weights)
        supported_layers = plugin.get_supported_layers(network)
        not_supported_layers = [l for l in network.layers.keys() if l not in supported_layers]
        if len(not_supported_layers) != 0:
            raise Exception("The following layers are not supported by the plugin for the specified device {}:\n {}".
                format(plugin.device, ", ".join(not_supported_layers)))

        self._input_blob_name = next(iter(network.inputs))
        self._output_blob_name = next(iter(network.outputs))

        self._net = plugin.load(network=network, num_requests=2)
        input_type = network.inputs[self._input_blob_name]
        self._input_layout = input_type if isinstance(input_type, list) else input_type.shape

    def infer(self, image):
        _, _, h, w = self._input_layout
        in_frame = image if image.shape[:-1] == (h, w) else cv2.resize(image, (w, h))
        in_frame = in_frame.transpose((2, 0, 1))  # Change data layout from HWC to CHW
        return self._net.infer(inputs={self._input_blob_name: in_frame})[self._output_blob_name].copy()

    @staticmethod
    def _check_instruction(instruction):
        return instruction == str.strip(
            subprocess.check_output(
                "lscpu | grep -o \"{}\" | head -1".format(instruction), shell=True
            ).decode("utf-8"))

def load_label_map(labels_path):
    with open(labels_path, "r") as f:
        return json.load(f)["label_map"]
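# Usage sketch (an illustrative addition, not part of the original file;
# the file names are hypothetical): load an IR model, run it on one frame,
# and read the label map prepared for the task.
if __name__ == "__main__":
    labels = load_label_map("label_map.json")      # e.g. {"1": "person", ...}
    model = ModelLoader(model="model.xml", weights="model.bin")
    output = model.infer(cv2.imread("frame.jpg"))  # raw output blob (np.ndarray)
    print(labels, output.shape)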

@ -0,0 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -0,0 +1,184 @@
/*
* Copyright (C) 2018 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
"use strict";
window.cvat = window.cvat || {};
window.cvat.dashboard = window.cvat.dashboard || {};
window.cvat.dashboard.uiCallbacks = window.cvat.dashboard.uiCallbacks || [];
window.cvat.dashboard.uiCallbacks.push(function(newElements) {
    let tids = [];
    for (let el of newElements) {
        tids.push(el.id.split("_")[1]);
    }

    $.ajax({
        type: "POST",
        url: "/auto_annotation/meta/get",
        data: JSON.stringify(tids),
        contentType: "application/json; charset=utf-8",
        success: (data) => {
            newElements.each(function() {
                let elem = $(this);
                let tid = +elem.attr("id").split("_")[1];

                const autoAnnoButton = $("<button> Run auto annotation </button>").addClass("semiBold dashboardButtonUI dashboardAutoAnno");
                autoAnnoButton.appendTo(elem.find("div.dashboardButtonsUI")[0]);

                if (tid in data && data[tid].active) {
                    autoAnnoButton.text("Cancel auto annotation");
                    autoAnnoButton.addClass("autoAnnotationProcess");
                    window.cvat.autoAnnotation.checkAutoAnnotationRequest(tid, autoAnnoButton);
                }

                autoAnnoButton.on("click", () => {
                    if (autoAnnoButton.hasClass("autoAnnotationProcess")) {
                        $.post(`/auto_annotation/cancel/task/${tid}`).fail((data) => {
                            let message = `An error occurred during the cancel auto annotation request. Code: ${data.status}. Message: ${data.responseText || data.statusText}`;
                            showMessage(message);
                            throw Error(message);
                        });
                    }
                    else {
                        let dialogWindow = $(`#${window.cvat.autoAnnotation.modalWindowId}`);
                        dialogWindow.attr("current_tid", tid);
                        dialogWindow.removeClass("hidden");
                    }
                });
            });
        },
        error: (data) => {
            let message = `Cannot get auto annotation meta info. Code: ${data.status}. Message: ${data.responseText || data.statusText}`;
            window.cvat.autoAnnotation.badResponse(message);
        }
    });
});
window.cvat.autoAnnotation = {
    modalWindowId: "autoAnnotationWindow",
    autoAnnoFormId: "autoAnnotationForm",
    autoAnnoModelFieldId: "autoAnnotationModelField",
    autoAnnoWeightsFieldId: "autoAnnotationWeightsField",
    autoAnnoConfigFieldId: "autoAnnotationConfigField",
    autoAnnoConvertFieldId: "autoAnnotationConvertField",
    autoAnnoCloseButtonId: "autoAnnoCloseButton",
    autoAnnoSubmitButtonId: "autoAnnoSubmitButton",

    checkAutoAnnotationRequest: (tid, autoAnnoButton) => {
        function timeoutCallback() {
            $.get(`/auto_annotation/check/task/${tid}`).done((data) => {
                if (data.status === "started" || data.status === "queued") {
                    let progress = Math.round(data.progress) || 0;
                    autoAnnoButton.text(`Cancel auto annotation (${progress}%)`);
                    setTimeout(timeoutCallback, 1000);
                }
                else {
                    autoAnnoButton.text("Run auto annotation");
                    autoAnnoButton.removeClass("autoAnnotationProcess");
                }
            }).fail((data) => {
                let message = "An error occurred while checking the auto annotation status. " +
                    `Code: ${data.status}, text: ${data.responseText || data.statusText}`;
                window.cvat.autoAnnotation.badResponse(message);
            });
        }

        setTimeout(timeoutCallback, 1000);
    },

    badResponse(message) {
        showMessage(message);
        throw Error(message);
    },
};
function submitButtonOnClick() {
    const annoWindow = $(`#${window.cvat.autoAnnotation.modalWindowId}`);
    const tid = annoWindow.attr("current_tid");
    const modelInput = $(`#${window.cvat.autoAnnotation.autoAnnoModelFieldId}`);
    const weightsInput = $(`#${window.cvat.autoAnnotation.autoAnnoWeightsFieldId}`);
    const configInput = $(`#${window.cvat.autoAnnotation.autoAnnoConfigFieldId}`);
    const convFileInput = $(`#${window.cvat.autoAnnotation.autoAnnoConvertFieldId}`);

    const modelFile = modelInput.prop("files")[0];
    const weightsFile = weightsInput.prop("files")[0];
    const configFile = configInput.prop("files")[0];
    const convFile = convFileInput.prop("files")[0];

    if (!modelFile || !weightsFile || !configFile || !convFile) {
        showMessage("All files must be selected");
        return;
    }

    let taskData = new FormData();
    taskData.append("model", modelFile);
    taskData.append("weights", weightsFile);
    taskData.append("config", configFile);
    taskData.append("conv_script", convFile);

    $.ajax({
        url: `/auto_annotation/create/task/${tid}`,
        type: "POST",
        data: taskData,
        contentType: false,
        processData: false,
    }).done(() => {
        annoWindow.addClass("hidden");
        const autoAnnoButton = $(`#dashboardTask_${tid} div.dashboardButtonsUI button.dashboardAutoAnno`);
        autoAnnoButton.addClass("autoAnnotationProcess");
        window.cvat.autoAnnotation.checkAutoAnnotationRequest(tid, autoAnnoButton);
    }).fail((data) => {
        let message = "An error occurred during the run auto annotation request. " +
            `Code: ${data.status}, text: ${data.responseText || data.statusText}`;
        window.cvat.autoAnnotation.badResponse(message);
    });
}
document.addEventListener("DOMContentLoaded", () => {
    $(`<div id="${window.cvat.autoAnnotation.modalWindowId}" class="modal hidden">
        <form id="${window.cvat.autoAnnotation.autoAnnoFormId}" class="modal-content" autocomplete="on" onsubmit="return false" style="width: 700px;">
            <center>
                <label class="semiBold h1"> Auto annotation setup </label>
            </center>
            <table style="width: 100%; text-align: left;">
                <tr>
                    <td style="width: 25%"> <label class="regular h2"> Model </label> </td>
                    <td> <input id="${window.cvat.autoAnnotation.autoAnnoModelFieldId}" type="file" name="model" /> </td>
                </tr>
                <tr>
                    <td style="width: 25%"> <label class="regular h2"> Weights </label> </td>
                    <td> <input id="${window.cvat.autoAnnotation.autoAnnoWeightsFieldId}" type="file" name="weights" /> </td>
                </tr>
                <tr>
                    <td style="width: 25%"> <label class="regular h2"> Label map </label> </td>
                    <td> <input id="${window.cvat.autoAnnotation.autoAnnoConfigFieldId}" type="file" name="config" accept=".json" /> </td>
                </tr>
                <tr>
                    <td style="width: 25%"> <label class="regular h2"> Conversion script </label> </td>
                    <td> <input id="${window.cvat.autoAnnotation.autoAnnoConvertFieldId}" type="file" name="convert" /> </td>
                </tr>
            </table>
            <div>
                <button id="${window.cvat.autoAnnotation.autoAnnoCloseButtonId}" class="regular h2"> Close </button>
                <button id="${window.cvat.autoAnnotation.autoAnnoSubmitButtonId}" class="regular h2"> Submit </button>
            </div>
        </form>
    </div>`).appendTo("body");

    const annoWindow = $(`#${window.cvat.autoAnnotation.modalWindowId}`);
    const closeWindowButton = $(`#${window.cvat.autoAnnotation.autoAnnoCloseButtonId}`);
    const submitButton = $(`#${window.cvat.autoAnnotation.autoAnnoSubmitButtonId}`);

    closeWindowButton.on("click", () => {
        annoWindow.addClass("hidden");
    });

    submitButton.on("click", () => {
        submitButtonOnClick();
    });
});

@ -0,0 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -0,0 +1,14 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.urls import path
from . import views

urlpatterns = [
    path("create/task/<int:tid>", views.create),
    path("check/task/<int:tid>", views.check),
    path("cancel/task/<int:tid>", views.cancel),
    path("meta/get", views.get_meta_info),
]

@ -0,0 +1,401 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.http import HttpResponse, JsonResponse, HttpResponseBadRequest
from rules.contrib.views import permission_required, objectgetter
from cvat.apps.authentication.decorators import login_required
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine import annotation
import django_rq
import fnmatch
import json
import os
import rq
from cvat.apps.engine.log import slogger
from .model_loader import ModelLoader, load_label_map
from .image_loader import ImageLoader
import os.path as osp
def get_image_data(path_to_data):
    def get_image_key(item):
        return int(osp.splitext(osp.basename(item))[0])

    image_list = []
    for root, _, filenames in os.walk(path_to_data):
        for filename in fnmatch.filter(filenames, "*.jpg"):
            image_list.append(osp.join(root, filename))

    image_list.sort(key=get_image_key)
    return ImageLoader(image_list)
def create_anno_container():
    return {
        "boxes": [],
        "polygons": [],
        "polylines": [],
        "points": [],
        "box_paths": [],
        "polygon_paths": [],
        "polyline_paths": [],
        "points_paths": [],
    }
class Results:
    def __init__(self):
        self._results = create_anno_container()

    def add_box(self, xtl, ytl, xbr, ybr, label, frame_number, attributes=None):
        self.get_boxes().append({
            "label": label,
            "frame": frame_number,
            "xtl": xtl,
            "ytl": ytl,
            "xbr": xbr,
            "ybr": ybr,
            "attributes": attributes or {},
        })

    def add_points(self, points, label, frame_number, attributes=None):
        self.get_points().append(
            self._create_polyshape(points, label, frame_number, attributes)
        )

    def add_polygon(self, points, label, frame_number, attributes=None):
        self.get_polygons().append(
            self._create_polyshape(points, label, frame_number, attributes)
        )

    def add_polyline(self, points, label, frame_number, attributes=None):
        self.get_polylines().append(
            self._create_polyshape(points, label, frame_number, attributes)
        )

    def get_boxes(self):
        return self._results["boxes"]

    def get_polygons(self):
        return self._results["polygons"]

    def get_polylines(self):
        return self._results["polylines"]

    def get_points(self):
        return self._results["points"]

    def get_box_paths(self):
        return self._results["box_paths"]

    def get_polygon_paths(self):
        return self._results["polygon_paths"]

    def get_polyline_paths(self):
        return self._results["polyline_paths"]

    def get_points_paths(self):
        return self._results["points_paths"]

    def _create_polyshape(self, points, label, frame_number, attributes=None):
        return {
            "label": label,
            "frame": frame_number,
            "points": " ".join("{},{}".format(pair[0], pair[1]) for pair in points),
            "attributes": attributes or {},
        }
def process_detections(detections, path_to_conv_script):
    results = Results()
    global_vars = {
        "__builtins__": {
            "str": str,
            "int": int,
            "float": float,
            "max": max,
            "min": min,
            "range": range,
        },
    }
    local_vars = {
        "detections": detections,
        "results": results,
    }
    # Run the user-provided interpretation script in a restricted environment:
    # only the builtins above plus the two variables are visible to it.
    with open(path_to_conv_script) as conv_script:
        exec(conv_script.read(), global_vars, local_vars)
    return results
def run_inference_engine_annotation(path_to_data, model_file, weights_file,
        labels_mapping, attribute_spec, convertation_file, job, update_progress):
    def process_attributes(shape_attributes, label_attr_spec):
        attributes = []
        for attr_text, attr_value in shape_attributes.items():
            if attr_text in label_attr_spec:
                attributes.append({
                    "id": label_attr_spec[attr_text],
                    "value": attr_value,
                })

        return attributes

    def add_polyshapes(shapes, target_container):
        for shape in shapes:
            if shape["label"] not in labels_mapping:
                continue

            db_label = labels_mapping[shape["label"]]
            target_container.append({
                "label_id": db_label,
                "frame": shape["frame"],
                "points": shape["points"],
                "z_order": 0,
                "group_id": 0,
                "occluded": False,
                "attributes": process_attributes(shape["attributes"], attribute_spec[db_label]),
            })

    def add_boxes(boxes, target_container):
        for box in boxes:
            if box["label"] not in labels_mapping:
                continue

            db_label = labels_mapping[box["label"]]
            target_container.append({
                "label_id": db_label,
                "frame": box["frame"],
                "xtl": box["xtl"],
                "ytl": box["ytl"],
                "xbr": box["xbr"],
                "ybr": box["ybr"],
                "z_order": 0,
                "group_id": 0,
                "occluded": False,
                "attributes": process_attributes(box["attributes"], attribute_spec[db_label]),
            })

    result = {
        "create": create_anno_container(),
        "update": create_anno_container(),
        "delete": create_anno_container(),
    }

    data = get_image_data(path_to_data)
    data_len = len(data)

    model = ModelLoader(model=model_file, weights=weights_file)

    frame_counter = 0
    detections = []
    for _, frame in data:
        orig_rows, orig_cols = frame.shape[:2]

        detections.append({
            "frame_id": frame_counter,
            "frame_height": orig_rows,
            "frame_width": orig_cols,
            "detections": model.infer(frame),
        })

        frame_counter += 1
        if not update_progress(job, frame_counter * 100 / data_len):
            return None

    processed_detections = process_detections(detections, convertation_file)

    add_boxes(processed_detections.get_boxes(), result["create"]["boxes"])
    add_polyshapes(processed_detections.get_points(), result["create"]["points"])
    add_polyshapes(processed_detections.get_polygons(), result["create"]["polygons"])
    add_polyshapes(processed_detections.get_polylines(), result["create"]["polylines"])

    return result
def update_progress(job, progress):
    job.refresh()
    # The cancel view sets the "cancel" flag in job.meta; consume it here and
    # tell the caller to stop the annotation loop.
    if "cancel" in job.meta:
        del job.meta["cancel"]
        job.save()
        return False

    job.meta["progress"] = progress
    job.save_meta()
    return True
def create_thread(tid, model_file, weights_file, labels_mapping, attributes, convertation_file):
    try:
        job = rq.get_current_job()
        job.meta["progress"] = 0
        job.save_meta()
        db_task = TaskModel.objects.get(pk=tid)

        result = None
        slogger.glob.info("auto annotation with openvino toolkit for task {}".format(tid))
        result = run_inference_engine_annotation(
            path_to_data=db_task.get_data_dirname(),
            model_file=model_file,
            weights_file=weights_file,
            labels_mapping=labels_mapping,
            attribute_spec=attributes,
            convertation_file=convertation_file,
            job=job,
            update_progress=update_progress,
        )

        if result is None:
            slogger.glob.info("auto annotation for task {} canceled by user".format(tid))
            return

        annotation.clear_task(tid)
        annotation.save_task(tid, result)
        slogger.glob.info("auto annotation for task {} done".format(tid))
    except Exception:
        try:
            slogger.task[tid].exception("an exception occurred during auto annotation of the task", exc_info=True)
        except Exception as ex:
            slogger.glob.exception("an exception occurred during auto annotation of task {}: {}".format(tid, str(ex)), exc_info=True)
@login_required
def get_meta_info(request):
    try:
        queue = django_rq.get_queue("low")
        tids = json.loads(request.body.decode("utf-8"))
        result = {}
        for tid in tids:
            job = queue.fetch_job("auto_annotation.create/{}".format(tid))
            if job is not None:
                result[tid] = {
                    "active": job.is_queued or job.is_started,
                    "success": not job.is_failed
                }

        return JsonResponse(result)
    except Exception as ex:
        slogger.glob.exception("an exception occurred during the auto annotation meta request", exc_info=True)
        return HttpResponseBadRequest(str(ex))
@login_required
@permission_required(perm=["engine.task.change"],
    fn=objectgetter(TaskModel, "tid"), raise_exception=True)
def create(request, tid):
    slogger.glob.info("auto annotation create request for task {}".format(tid))

    def write_file(path, file_obj):
        with open(path, "wb") as upload_file:
            for chunk in file_obj.chunks():
                upload_file.write(chunk)

    try:
        db_task = TaskModel.objects.get(pk=tid)
        upload_dir = db_task.get_upload_dirname()

        queue = django_rq.get_queue("low")
        job = queue.fetch_job("auto_annotation.create/{}".format(tid))
        if job is not None and (job.is_started or job.is_queued):
            raise Exception("The process is already running")

        model_file = request.FILES["model"]
        model_file_path = os.path.join(upload_dir, model_file.name)
        write_file(model_file_path, model_file)

        weights_file = request.FILES["weights"]
        weights_file_path = os.path.join(upload_dir, weights_file.name)
        write_file(weights_file_path, weights_file)

        config_file = request.FILES["config"]
        config_file_path = os.path.join(upload_dir, config_file.name)
        write_file(config_file_path, config_file)

        convertation_file = request.FILES["conv_script"]
        convertation_file_path = os.path.join(upload_dir, convertation_file.name)
        write_file(convertation_file_path, convertation_file)

        db_labels = db_task.label_set.prefetch_related("attributespec_set").all()
        db_attributes = {db_label.id:
            {db_attr.get_name(): db_attr.id for db_attr in db_label.attributespec_set.all()}
            for db_label in db_labels}
        db_labels = {db_label.id: db_label.name for db_label in db_labels}

        class_names = load_label_map(config_file_path)

        labels_mapping = {}
        for db_key, db_label in db_labels.items():
            for key, label in class_names.items():
                if label == db_label:
                    labels_mapping[int(key)] = db_key

        if not labels_mapping:
            raise Exception("No labels found for annotation")

        queue.enqueue_call(func=create_thread,
            args=(
                tid,
                model_file_path,
                weights_file_path,
                labels_mapping,
                db_attributes,
                convertation_file_path),
            job_id="auto_annotation.create/{}".format(tid),
            timeout=604800)  # 7 days

        slogger.task[tid].info("auto annotation job enqueued")
    except Exception as ex:
        try:
            slogger.task[tid].exception("an exception occurred during the annotation request", exc_info=True)
        except Exception as logger_ex:
            slogger.glob.exception("an exception occurred during the create auto annotation request for task {}: {}".format(tid, str(logger_ex)), exc_info=True)
        return HttpResponseBadRequest(str(ex))

    return HttpResponse()
@login_required
@permission_required(perm=["engine.task.access"],
    fn=objectgetter(TaskModel, "tid"), raise_exception=True)
def check(request, tid):
    try:
        queue = django_rq.get_queue("low")
        job = queue.fetch_job("auto_annotation.create/{}".format(tid))
        if job is not None and "cancel" in job.meta:
            return JsonResponse({"status": "finished"})

        data = {}
        if job is None:
            data["status"] = "unknown"
        elif job.is_queued:
            data["status"] = "queued"
        elif job.is_started:
            data["status"] = "started"
            data["progress"] = job.meta["progress"]
        elif job.is_finished:
            data["status"] = "finished"
            job.delete()
        else:
            data["status"] = "failed"
            data["stderr"] = job.exc_info
            job.delete()
    except Exception:
        # Guard against "data" being unbound if the queue lookup itself failed.
        data = {"status": "unknown"}

    return JsonResponse(data)
@login_required
@permission_required(perm=["engine.task.change"],
    fn=objectgetter(TaskModel, "tid"), raise_exception=True)
def cancel(request, tid):
    try:
        queue = django_rq.get_queue("low")
        job = queue.fetch_job("auto_annotation.create/{}".format(tid))
        if job is None or job.is_finished or job.is_failed:
            raise Exception("Task is not being annotated currently")
        elif "cancel" not in job.meta:
            job.meta["cancel"] = True
            job.save()
    except Exception as ex:
        try:
            slogger.task[tid].exception("cannot cancel auto annotation for task #{}".format(tid), exc_info=True)
        except Exception as logger_ex:
            slogger.glob.exception("an exception occurred during the cancel auto annotation request for task {}: {}".format(tid, str(logger_ex)), exc_info=True)
        return HttpResponseBadRequest(str(ex))

    return HttpResponse()

@ -60,6 +60,9 @@ INSTALLED_APPS = [
if 'yes' == os.environ.get('TF_ANNOTATION', 'no'):
INSTALLED_APPS += ['cvat.apps.tf_annotation']
if 'yes' == os.environ.get('OPENVINO_TOOLKIT', 'no'):
INSTALLED_APPS += ['cvat.apps.auto_annotation']
if os.getenv('DJANGO_LOG_VIEWER_HOST'):
INSTALLED_APPS += ['cvat.apps.log_viewer']

@ -37,8 +37,11 @@ urlpatterns = [
if apps.is_installed('cvat.apps.tf_annotation'):
urlpatterns.append(path('tensorflow/annotation/', include('cvat.apps.tf_annotation.urls')))
if apps.is_installed('cvat.apps.auto_annotation'):
urlpatterns.append(path('auto_annotation/', include('cvat.apps.auto_annotation.urls')))
if apps.is_installed('cvat.apps.log_viewer'):
urlpatterns.append(path('analytics/', include('cvat.apps.log_viewer.urls')))
if apps.is_installed('silk'):
urlpatterns.append(path('profiler/', include('silk.urls')))
