Release 0.3 (#260)

* Bug has been fixed: impossible to lock/occlude an object in AAM
* Bug has been fixed: invisible points were actually visible
* Bug has been fixed: impossible to close points after editing (#98)
* doc: grammatical cleanup of README.md (#107)
* Add info about development environment into CONTRIBUTING.md (#110)
* Now we store a virtual URL instead of updating it in the browser address bar (#112)
* Copy URL, Frame URL and object URL functionality in a context menu
* Bug has been fixed: label UIs don't update after a label change (#109)
* Common Esc shortcut to exit from creating/grouping/merging/pasting/AAM modes
* Shortcuts to switch outside/keyframe properties
* Fix django vulnerability (#121)
* Add analytics component (#118)
* Incremental save of annotations (#120)
* Create task timeout 1h -> 4h. (#136)
* OpenVINO integration (#134)
* Update README.md (#138)
* Add an extra field into meta section of a dump file (#149)
* Job status was implemented (#153)
* Back link to task from annotation view (#156)
* Change a task with labels and attributes in admin panel (#157)
* Permissions per tasks and jobs (#185)
* Fix context menu, text visibility for small images (#202)
* Fixed: both context menus were opened simultaneously
* Fixed: a shape could be unavailable behind text
* Fixed: invisible text outside the frame
* Fix uploading big XML files for tasks (#199)
* Add Questions section to Readme.md (#226)
* Fixed labels order (#242)
* Propagation behaviour has been updated for frames with a different resolution (#246)
* Updated the guide and images (#241)
* Fix number attribute for float numbers. (#258)

.gitignore

@ -6,6 +6,8 @@
/.env
/keys
/logs
/components/openvino/*.tgz
/profiles
# Ignore temporary files
docker-compose.override.yml

@ -9,6 +9,7 @@
"type": "python",
"request": "launch",
"stopOnEntry": false,
"debugStdLib": true,
"pythonPath": "${config:python.pythonPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
@ -23,7 +24,6 @@
"DjangoDebugging"
],
"cwd": "${workspaceFolder}",
"env": {},
"envFile": "${workspaceFolder}/.env",
},
{
@ -44,6 +44,7 @@
"type": "python",
"request": "launch",
"stopOnEntry": false,
"debugStdLib": true,
"pythonPath": "${config:python.pythonPath}",
"program": "${workspaceRoot}/manage.py",
"args": [
@ -65,6 +66,7 @@
"name": "CVAT RQ - low",
"type": "python",
"request": "launch",
"debugStdLib": true,
"stopOnEntry": false,
"pythonPath": "${config:python.pythonPath}",
"program": "${workspaceRoot}/manage.py",

@ -4,6 +4,46 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.3.0] - 2018-12-29
### Added
- Ability to copy Object URL and Frame URL via object context menu and player context menu respectively.
- Ability to change the opacity of the selected shape with the help of the "Selected Fill Opacity" slider.
- Ability to remove polyshape points by double click.
- Ability to draw/change polyshapes (except for points) by the slip method: just press ENTER and move the cursor.
- Ability to switch lock/hide properties via the label UI element (in the right menu) for all objects with the same label.
- Shortcuts for outside/keyframe properties
- Support of Intel OpenVINO for accelerated model inference
- Tensorflow annotation now works without CUDA. It can use CPU only. OpenVINO and CUDA are supported optionally.
- Incremental saving of annotations.
- Tutorial for using polygons (screencast)
- Silk profiler to improve development process
- Admin panel can be used to edit labels and attributes for annotation tasks
- Analytics component to manage a data annotation team, monitor exceptions, collect client and server logs
- Changeable job and task statuses (annotation, validation, completed). A job status can be changed manually; a task status is computed automatically based on job statuses, as sketched after this list (#153)
- Backlink to a task from its job annotation view (#156)
- Buttons lock/hide for labels. They work for all objects with the same label on a current frame (#116)
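
For illustration only, a minimal sketch of the idea behind the automatic task status (hypothetical logic, not the actual CVAT implementation):

```python
# Hypothetical sketch: a task is only as far along as its least advanced job.
def compute_task_status(job_statuses):
    order = ["annotation", "validation", "completed"]
    if not job_statuses:
        return "annotation"
    return min(job_statuses, key=order.index)
```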
### Changed
- The polyshape editing method has been improved. You can redraw a part of a shape instead of cloning points.
- Unified shortcut (Esc) to close any mode instead of different shortcuts (Alt+N, Alt+G, Alt+M, etc.).
- Dump file contains information about data source (e.g. video name, archive name, ...)
- Update requests library due to https://nvd.nist.gov/vuln/detail/CVE-2018-18074
- Per task/job permissions to create/access/change/delete tasks and annotations
- Documentation was improved
- Timeout for creating tasks was increased (from 1h to 4h) (#136)
- Drawing has become more convenient. Now it is possible to draw outside an image; shapes will be automatically truncated after the drawing process (#202)
### Fixed
- Performance bottleneck during creation of new objects (draw, copy, merge, etc.).
- Label UI elements were not updated after a label change.
- Attribute annotation mode could use an invalid shape position after resizing or moving shapes.
- Labels order is preserved now (#242)
- Uploading large XML files (#123)
- Django vulnerability (#121)
- Grammatical cleanup of README.md (#107)
- Dashboard loading has been accelerated (#156)
- Text drawing outside of a frame in some cases (#202)
## [0.2.0] - 2018-09-28
### Added
- New annotation shapes: polygons, polylines, points

@ -1,4 +1,196 @@
# How to contribute to Computer Vision Annotation Tool (CVAT)
# Contributing to this project
When contributing to this repository, please first discuss the change you wish to make via issue,
email, or any other method with the owners of this repository before making a change.
Please take a moment to review this document in order to make the contribution
process easy and effective for everyone involved.
Following these guidelines helps to communicate that you respect the time of
the developers managing and developing this open source project. In return,
they should reciprocate that respect in addressing your issue or assessing
patches and features.
## Development environment
The following steps should work on a clean Ubuntu 18.04 installation.
- Install necessary dependencies:
```sh
$ sudo apt-get install -y curl redis-server python3-dev python3-pip python3-venv libldap2-dev libsasl2-dev
```
- Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/linux#_debian-and-ubuntu-based-distributions) for development
- Install CVAT on your local host:
```sh
$ git clone https://github.com/opencv/cvat
$ cd cvat && mkdir logs keys
$ python3 -m venv .env
$ . .env/bin/activate
$ pip install -U pip wheel
$ pip install -r cvat/requirements/development.txt
$ python manage.py migrate
$ python manage.py collectstatic
```
- Create a super user for CVAT:
```sh
$ python manage.py createsuperuser
Username (leave blank to use 'django'): ***
Email address: ***
Password: ***
Password (again): ***
```
- Run Visual Studio Code from the virtual environment
```
$ code .
```
- Inside Visual Studio Code install [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome) and [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) extensions
- Reload Visual Studio Code
- Select `CVAT Debugging` configuration and start debugging (F5)
You are done! Now you can set breakpoints and debug the server and the client of the tool.
## Branching model
The project uses [a successful Git branching model](https://nvie.com/posts/a-successful-git-branching-model).
Thus it has a couple of branches. Some of them are described below:
- `origin/master` to be the main branch where the source code of
HEAD always reflects a production-ready state.
- `origin/develop` to be the main branch where the source code of
HEAD always reflects a state with the latest delivered development
changes for the next release. Some would call this the “integration branch”.
## Using the issue tracker
The issue tracker is the preferred channel for [bug reports](#bugs),
[feature requests](#features) and [submitting pull
requests](#pull-requests), but please respect the following restrictions:
* Please **do not** use the issue tracker for personal support requests (use
[Stack Overflow](http://stackoverflow.com)).
* Please **do not** derail or troll issues. Keep the discussion on topic and
respect the opinions of others.
<a name="bugs"></a>
## Bug reports
A bug is a _demonstrable problem_ that is caused by the code in the repository.
Good bug reports are extremely helpful - thank you!
Guidelines for bug reports:
1. **Use the GitHub issue search** &mdash; check if the issue has already been
reported.
2. **Check if the issue has been fixed** &mdash; try to reproduce it using the
latest `develop` branch in the repository.
3. **Isolate the problem** &mdash; ideally create a reduced test case.
A good bug report shouldn't leave others needing to chase you up for more
information. Please try to be as detailed as possible in your report. What is
your environment? What steps will reproduce the issue? What browser(s) and OS
experience the problem? What would you expect to be the outcome? All these
details will help people to fix any potential bugs.
Example:
> Short and descriptive example bug report title
>
> A summary of the issue and the browser/OS environment in which it occurs. If
> suitable, include the steps required to reproduce the bug.
>
> 1. This is the first step
> 2. This is the second step
> 3. Further steps, etc.
>
>
> Any other information you want to share that is relevant to the issue being
> reported. This might include the lines of code that you have identified as
> causing the bug, and potential solutions (and your opinions on their
> merits).
<a name="features"></a>
## Feature requests
Feature requests are welcome. But take a moment to find out whether your idea
fits with the scope and aims of the project. It's up to *you* to make a strong
case to convince the project's developers of the merits of this feature. Please
provide as much detail and context as possible.
<a name="pull-requests"></a>
## Pull requests
Good pull requests - patches, improvements, new features - are a fantastic
help. They should remain focused in scope and avoid containing unrelated
commits.
**Please ask first** before embarking on any significant pull request (e.g.
implementing features, refactoring code, porting to a different language),
otherwise you risk spending a lot of time working on something that the
project's developers might not want to merge into the project.
Please adhere to the coding conventions used throughout a project (indentation,
accurate comments, etc.) and any other requirements (such as test coverage).
Follow this process if you'd like your work considered for inclusion in the
project:
1. [Fork](http://help.github.com/fork-a-repo/) the project, clone your fork,
and configure the remotes:
```bash
# Clone your fork of the repo into the current directory
git clone https://github.com/<your-username>/<repo-name>
# Navigate to the newly cloned directory
cd <repo-name>
# Assign the original repo to a remote called "upstream"
git remote add upstream https://github.com/<upstream-owner>/<repo-name>
```
2. If you cloned a while ago, get the latest changes from upstream:
```bash
git checkout <dev-branch>
git pull upstream <dev-branch>
```
3. Create a new topic branch (off the main project development branch) to
contain your feature, change, or fix:
```bash
git checkout -b <topic-branch-name>
```
4. Commit your changes in logical chunks. Please adhere to these [git commit
message guidelines](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)
or your code is unlikely to be merged into the main project. Use Git's
[interactive rebase](https://help.github.com/articles/interactive-rebase)
feature to tidy up your commits before making them public.
5. Locally merge (or rebase) the upstream development branch into your topic branch:
```bash
git pull [--rebase] upstream <dev-branch>
```
6. Push your topic branch up to your fork:
```bash
git push origin <topic-branch-name>
```
7. [Open a Pull Request](https://help.github.com/articles/using-pull-requests/)
with a clear title and description.
**IMPORTANT**: By submitting a patch, you agree to allow the project owner to
license your work under the same license as that used by the project.

@ -13,8 +13,6 @@ ENV LANG='C.UTF-8' \
LC_ALL='C.UTF-8'
ARG USER
ARG TF_ANNOTATION
ENV TF_ANNOTATION=${TF_ANNOTATION}
ARG DJANGO_CONFIGURATION
ENV DJANGO_CONFIGURATION=${DJANGO_CONFIGURATION}
@ -42,6 +40,8 @@ RUN apt-get update && \
unrar \
p7zip-full \
vim && \
add-apt-repository --remove ppa:mc3man/gstffmpeg-keep -y && \
add-apt-repository --remove ppa:mc3man/xerus-media -y && \
rm -rf /var/lib/apt/lists/*
# Add a non-root user
@ -50,13 +50,28 @@ ENV HOME /home/${USER}
WORKDIR ${HOME}
RUN adduser --shell /bin/bash --disabled-password --gecos "" ${USER}
# Install tf annotation if need
COPY cvat/apps/tf_annotation/docker_setup_tf_annotation.sh /tmp/tf_annotation/
COPY cvat/apps/tf_annotation/requirements.txt /tmp/tf_annotation/
ENV TF_ANNOTATION_MODEL_PATH=${HOME}/rcnn/frozen_inference_graph.pb
COPY components /tmp/components
# OpenVINO toolkit support
ARG OPENVINO_TOOLKIT
ENV OPENVINO_TOOLKIT=${OPENVINO_TOOLKIT}
RUN if [ "$OPENVINO_TOOLKIT" = "yes" ]; then \
/tmp/components/openvino/install.sh; \
fi
# CUDA support
ARG CUDA_SUPPORT
ENV CUDA_SUPPORT=${CUDA_SUPPORT}
RUN if [ "$CUDA_SUPPORT" = "yes" ]; then \
/tmp/components/cuda/install.sh; \
fi
# Tensorflow annotation support
ARG TF_ANNOTATION
ENV TF_ANNOTATION=${TF_ANNOTATION}
ENV TF_ANNOTATION_MODEL_PATH=${HOME}/rcnn/inference_graph
RUN if [ "$TF_ANNOTATION" = "yes" ]; then \
/tmp/tf_annotation/docker_setup_tf_annotation.sh; \
bash -i /tmp/components/tf_annotation/install.sh; \
fi
ARG WITH_TESTS

@ -1,5 +1,7 @@
# Computer Vision Annotation Tool (CVAT)
[![Gitter chat](https://badges.gitter.im/opencv-cvat/gitter.png)](https://gitter.im/opencv-cvat)
CVAT is a completely re-designed and re-implemented version of the [Video Annotation Tool from Irvine, California](http://carlvondrick.com/vatic/). It is a free, online, interactive video and image annotation tool for computer vision. It is used by our team to annotate millions of objects with different properties. Many UI and UX decisions are based on feedback from a professional data annotation team.
![CVAT screenshot](cvat/apps/documentation/static/documentation/images/cvat.jpg)
@ -8,6 +10,8 @@ CVAT is completely re-designed and re-implemented version of [Video Annotation T
- [User's guide](cvat/apps/documentation/user_guide.md)
- [XML annotation format](cvat/apps/documentation/xml_format.md)
- [AWS Deployment Guide](cvat/apps/documentation/AWS-Deployment-Guide.md)
- [Questions](#questions)
## Screencasts
@ -15,6 +19,7 @@ CVAT is completely re-designed and re-implemented version of [Video Annotation T
- [Interpolation mode](https://youtu.be/U3MYDhESHo4)
- [Attribute mode](https://youtu.be/UPNfWl8Egd8)
- [Segmentation mode](https://youtu.be/6IJ0QN7PBKo)
- [Tutorial for polygons](https://www.youtube.com/watch?v=XTwfXDh4clI)
## LICENSE
@ -22,32 +27,12 @@ Code released under the [MIT License](https://opensource.org/licenses/MIT).
## INSTALLATION
These instructions below should work for Ubuntu 16.04. Probably it will work on other OSes as well with minor modifications.
The instructions below should work for `Ubuntu 16.04`. It will probably work on other Operating Systems such as `Windows` and `macOS`, but may require minor modifications.
### Install [Docker CE](https://www.docker.com/community-edition) or [Docker EE](https://www.docker.com/enterprise-edition) from official site
Please read official manual [here](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/).
### Install the latest driver for your graphics card
The step is necessary only to run tf_annotation app. If you don't have a Nvidia GPU you can skip the step.
```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-cache search nvidia-* # find latest nvidia driver
sudo apt-get install nvidia-* # install the nvidia driver
sudo apt-get install mesa-common-dev
sudo apt-get install freeglut3-dev
sudo apt-get install nvidia-modprobe
```
Reboot your PC and verify installation by `nvidia-smi` command.
### Install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker)
The step is necessary only to run tf_annotation app. If you don't have a Nvidia GPU you can skip the step. See detailed installation instructions on repository page.
### Install docker-compose (1.19.0 or newer)
```bash
@ -58,25 +43,28 @@ sudo pip install docker-compose
To build all necessary docker images run the `docker-compose build` command. By default, in production mode the tool uses PostgreSQL as a database and Redis for caching.
### Run containers without tf_annotation app
### Run docker containers
To start all containers run `docker-compose up -d` command. Go to [localhost:8080](http://localhost:8080/). You should see a login page.
To start the default containers run the `docker-compose up -d` command. Go to [localhost:8080](http://localhost:8080/). You should see a login page.
### Run containers with tf_annotation app
### You can include any additional components. Just add the corresponding docker-compose file to the build or run command:
If you would like to enable tf_annotation app first of all be sure that nvidia-driver, nvidia-docker and docker-compose>=1.19.0 are installed properly (see instructions above) and `docker info | grep 'Runtimes'` output contains `nvidia`.
Run following command:
```bash
docker-compose -f docker-compose.yml -f docker-compose.nvidia.yml up -d --build
# Build image with CUDA and OpenVINO support
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml -f components/openvino/docker-compose.openvino.yml build
# Run containers with CUDA and OpenVINO support
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml -f components/openvino/docker-compose.openvino.yml up -d
```
For details please see [components section](components/README.md).
### Create superuser account
You can [register a user](http://localhost:8080/auth/register) but by default it will not have rights even to view the list of tasks. Thus you should create a superuser. The superuser can use the admin panel to assign correct groups to the user. Please use the command below:
```bash
docker exec -it cvat sh -c '/usr/bin/python3 ~/manage.py createsuperuser'
docker exec -it cvat bash -ic '/usr/bin/python3 ~/manage.py createsuperuser'
```
Type your login/password for the superuser [on the login page](http://localhost:8080/auth/login) and press the **Login** button. Now you should be able to create a new annotation task. Please read the documentation for more details.
@ -106,19 +94,13 @@ services:
```
### Annotation logs
It is possible to proxy annotation logs from client to another server over http. For examlpe you can use Logstash.
To do that set DJANGO_LOG_SERVER_URL environment variable in cvat section of docker-compose.yml
file (or add this variable to docker-compose.override.yml).
It is possible to proxy annotation logs from the client to ELK. To do that, run the command below:
```yml
version: "2.3"
services:
cvat:
environment:
DJANGO_LOG_SERVER_URL: https://annotation.example.com:5000
```bash
docker-compose -f docker-compose.yml -f components/analytics/docker-compose.analytics.yml up -d --build
```
### Share path
You can use shared storage for uploading data while creating a task. To do that, mount it into the CVAT docker container. Example of docker-compose.override.yml for this purpose:
@ -131,7 +113,7 @@ services:
environment:
CVAT_SHARE_URL: "Mounted from /mnt/share host directory"
volumes:
cvat_share:/home/django/share:ro
- cvat_share:/home/django/share:ro
volumes:
cvat_share:
@ -141,3 +123,11 @@ volumes:
o: bind
```
You can change the share device path to your actual share. For user convenience we have defined the environment variable $CVAT_SHARE_URL. This variable contains text (a URL, for example) which will be shown in the client share browser.
## Questions
CVAT usage-related questions or unclear concepts can be posted in our [Gitter chat](https://gitter.im/opencv-cvat) for **quick replies** from contributors and other users.
However, if you have a feature request or a bug report that can be reproduced, feel free to open an issue (with steps to reproduce the bug if it's a bug report).
If you are not sure or just want to browse other users' common questions, the [Gitter chat](https://gitter.im/opencv-cvat) is the way to go.

@ -0,0 +1,6 @@
### There are some additional components for CVAT
* [NVIDIA CUDA](cuda/README.md)
* [OpenVINO](openvino/README.md)
* [Tensorflow Object Detector](tf_annotation/README.md)
* [Analytics](analytics/README.md)

@ -0,0 +1,104 @@
## Analytics for Computer Vision Annotation Tool (CVAT)
It is possible to proxy annotation logs from the client to ELK. To do that, run the commands below:
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/analytics/docker-compose.analytics.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/analytics/docker-compose.analytics.yml up -d
```
At the moment it is not possible to save advanced settings. The values below should be specified manually.
## Time picker default
```json
{
"from": "now/d",
"to": "now/d",
"display": "Today",
"section": 0
}
```
## Time picker quick ranges
```json
[
{
"from": "now/d",
"to": "now/d",
"display": "Today",
"section": 0
},
{
"from": "now/w",
"to": "now/w",
"display": "This week",
"section": 0
},
{
"from": "now/M",
"to": "now/M",
"display": "This month",
"section": 0
},
{
"from": "now/y",
"to": "now/y",
"display": "This year",
"section": 0
},
{
"from": "now/d",
"to": "now",
"display": "Today so far",
"section": 2
},
{
"from": "now/w",
"to": "now",
"display": "Week to date",
"section": 2
},
{
"from": "now/M",
"to": "now",
"display": "Month to date",
"section": 2
},
{
"from": "now/y",
"to": "now",
"display": "Year to date",
"section": 2
},
{
"from": "now-1d/d",
"to": "now-1d/d",
"display": "Yesterday",
"section": 1
},
{
"from": "now-1w/w",
"to": "now-1w/w",
"display": "Previous week",
"section": 1
},
{
"from": "now-1m/m",
"to": "now-1m/m",
"display": "Previous month",
"section": 1
},
{
"from": "now-1y/y",
"to": "now-1y/y",
"display": "Previous year",
"section": 1
}
]
```

@ -0,0 +1,69 @@
version: '2.3'
services:
cvat_elasticsearch:
container_name: cvat_elasticsearch
image: cvat_elasticsearch
networks:
default:
aliases:
- elasticsearch
build:
context: ./components/analytics/elasticsearch
args:
ELK_VERSION: 6.4.0
volumes:
- cvat_events:/usr/share/elasticsearch/data
restart: always
cvat_kibana:
container_name: cvat_kibana
image: cvat_kibana
networks:
default:
aliases:
- kibana
build:
context: ./components/analytics/kibana
args:
ELK_VERSION: 6.4.0
depends_on: ['cvat_elasticsearch']
restart: always
cvat_kibana_setup:
container_name: cvat_kibana_setup
image: cvat
volumes: ['./components/analytics/kibana:/home/django/kibana:ro']
depends_on: ['cvat']
working_dir: '/home/django'
entrypoint: ['bash', 'wait-for-it.sh', 'elasticsearch:9200', '-t', '0', '--',
'/bin/bash', 'wait-for-it.sh', 'kibana:5601', '-t', '0', '--',
'/usr/bin/python3', 'kibana/setup.py', 'kibana/export.json']
environment:
no_proxy: elasticsearch,kibana,${no_proxy}
cvat_logstash:
container_name: cvat_logstash
image: cvat_logstash
networks:
default:
aliases:
- logstash
build:
context: ./components/analytics/logstash
args:
ELK_VERSION: 6.4.0
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
depends_on: ['cvat_elasticsearch']
restart: always
cvat:
environment:
DJANGO_LOG_SERVER_HOST: logstash
DJANGO_LOG_SERVER_PORT: 5000
DJANGO_LOG_VIEWER_HOST: kibana
DJANGO_LOG_VIEWER_PORT: 5601
no_proxy: kibana,logstash,${no_proxy}
volumes:
cvat_events:

@ -0,0 +1,4 @@
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:${ELK_VERSION}
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/

@ -0,0 +1,3 @@
http.host: 0.0.0.0
script.painless.regex.enabled: true
path.repo: ["/usr/share/elasticsearch/data/backup"]

@ -0,0 +1,5 @@
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana-oss:${ELK_VERSION}
COPY kibana.yml /usr/share/kibana/config/

@ -0,0 +1,198 @@
[
{
"_id": "7e8996e0-c23d-11e8-8e1b-758ef07f6de8",
"_type": "dashboard",
"_source": {
"panelsJSON": "[{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":21,\"w\":48,\"h\":13,\"i\":\"1\"},\"id\":\"3ade53d0-c23e-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"1\",\"type\":\"visualization\",\"version\":\"6.4.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":34,\"w\":48,\"h\":27,\"i\":\"2\"},\"id\":\"9397f350-c23e-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"2\",\"type\":\"search\",\"version\":\"6.4.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":0,\"w\":24,\"h\":21,\"i\":\"3\"},\"id\":\"1ec6a660-c244-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"3\",\"type\":\"visualization\",\"version\":\"6.4.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":24,\"y\":0,\"w\":24,\"h\":21,\"i\":\"4\"},\"id\":\"65918380-c244-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"4\",\"type\":\"visualization\",\"version\":\"6.4.0\"}]",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[]}"
},
"timeRestore": false,
"description": "",
"title": "Monitoring",
"optionsJSON": "{\"darkTheme\":false,\"hidePanelTitles\":false,\"useMargins\":true}",
"version": 1
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "d92524b0-c25c-11e8-8e1b-758ef07f6de8",
"_type": "visualization",
"_source": {
"visState": "{\"title\":\"Activity of users\",\"type\":\"metrics\",\"params\":{\"id\":\"61ca57f0-469d-11e7-af02-69e470af7417\",\"type\":\"timeseries\",\"series\":[{\"id\":\"61ca57f1-469d-11e7-af02-69e470af7417\",\"color\":\"#68BC00\",\"split_mode\":\"terms\",\"metrics\":[{\"id\":\"61ca57f2-469d-11e7-af02-69e470af7417\",\"type\":\"count\"}],\"separate_axis\":0,\"axis_position\":\"right\",\"formatter\":\"number\",\"chart_type\":\"line\",\"line_width\":1,\"point_size\":1,\"fill\":0.5,\"stacked\":\"none\",\"label\":\"User\",\"terms_field\":\"userid.keyword\",\"terms_size\":\"100\"}],\"time_field\":\"@timestamp\",\"index_pattern\":\"cvat*\",\"interval\":\"auto\",\"axis_position\":\"left\",\"axis_formatter\":\"number\",\"axis_scale\":\"normal\",\"show_legend\":1,\"show_grid\":1},\"aggs\":[]}",
"uiStateJSON": "{}",
"description": "",
"title": "Activity of users",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
},
"version": 1
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "9397f350-c23e-11e8-8e1b-758ef07f6de8",
"_type": "search",
"_source": {
"hits": 0,
"sort": [
"@timestamp",
"desc"
],
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"highlightAll\":true,\"version\":true,\"query\":{\"language\":\"lucene\",\"query\":\"event:\\\"Send exception\\\"\"},\"filter\":[]}"
},
"columns": [
"task",
"type",
"userid",
"stack"
],
"description": "",
"title": "Table with exceptions",
"version": 1
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "3ade53d0-c23e-11e8-8e1b-758ef07f6de8",
"_type": "visualization",
"_source": {
"title": "Timeline for exceptions",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"1\",\"params\":{\"customBucket\":{\"enabled\":true,\"id\":\"1-bucket\",\"params\":{\"filters\":[{\"input\":{\"query\":\"event:\\\"Send exception\\\"\"},\"label\":\"\"}]},\"schema\":{\"aggFilter\":[],\"deprecate\":false,\"editor\":false,\"group\":\"none\",\"max\":null,\"min\":0,\"name\":\"bucketAgg\",\"params\":[],\"title\":\"Bucket Agg\"},\"type\":\"filters\"},\"customLabel\":\"Exceptions\",\"customMetric\":{\"enabled\":true,\"id\":\"1-metric\",\"params\":{\"customLabel\":\"Exceptions\"},\"schema\":{\"aggFilter\":[\"!top_hits\",\"!percentiles\",\"!percentile_ranks\",\"!median\",\"!std_dev\",\"!sum_bucket\",\"!avg_bucket\",\"!min_bucket\",\"!max_bucket\",\"!derivative\",\"!moving_avg\",\"!serial_diff\",\"!cumulative_sum\"],\"deprecate\":false,\"editor\":false,\"group\":\"none\",\"max\":null,\"min\":0,\"name\":\"metricAgg\",\"params\":[],\"title\":\"Metric Agg\"},\"type\":\"count\"}},\"schema\":\"metric\",\"type\":\"sum_bucket\"},{\"enabled\":true,\"id\":\"2\",\"params\":{\"customInterval\":\"2h\",\"customLabel\":\"Time\",\"extended_bounds\":{},\"field\":\"@timestamp\",\"interval\":\"auto\",\"min_doc_count\":1},\"schema\":\"segment\",\"type\":\"date_histogram\"}],\"params\":{\"addLegend\":true,\"addTimeMarker\":true,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{},\"type\":\"category\"}],\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"legendPosition\":\"right\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Exceptions\"},\"drawLinesBetweenPoints\":true,\"mode\":\"stacked\",\"show\":\"true\",\"showCircles\":true,\"type\":\"histogram\",\"valueAxis\":\"ValueAxis-1\"}],\"times\":[],\"type\":\"histogram\",\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Exceptions\"},\"type\":\"value\"}]},\"title\":\"Timeline for exceptions\",\"type\":\"histogram\"}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "1ec6a660-c244-11e8-8e1b-758ef07f6de8",
"_type": "visualization",
"_source": {
"title": "Duration of events",
"visState": "{\"title\":\"Duration of events\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMetricsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"event.keyword\",\"size\":1000,\"order\":\"desc\",\"orderBy\":\"_key\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Action\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"duration\",\"customLabel\":\"\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"min\",\"schema\":\"metric\",\"params\":{\"field\":\"duration\",\"customLabel\":\"\"}},{\"id\":\"5\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"duration\"}}]}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[{\"$state\":{\"store\":\"appState\"},\"exists\":{\"field\":\"duration\"},\"meta\":{\"alias\":null,\"disabled\":false,\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"key\":\"duration\",\"negate\":false,\"type\":\"exists\",\"value\":\"exists\"}}]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "ec510550-c238-11e8-8e1b-758ef07f6de8",
"_type": "index-pattern",
"_source": {
"fields": "[{\"name\":\"@timestamp\",\"type\":\"date\",\"count\":2,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"@version\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"@version.keyword\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"_id\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_index\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_score\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_source\",\"type\":\"_source\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"application\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"application.keyword\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"box count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"duration\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"event\",\"type\":\"string\",\"count\":2,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"event.keyword\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"frame count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"object count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"points count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"polygon count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"polyline count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"task\",\"type\":\"string\",\"count\":2,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"task.keyword\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"timestamp\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"track 
count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"userid\",\"type\":\"string\",\"count\":2,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"userid.keyword\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"working time\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}]",
"title": "cvat*",
"timeFieldName": "@timestamp",
"fieldFormatMap": "{\"duration\":{\"id\":\"duration\",\"params\":{\"inputFormat\":\"milliseconds\",\"outputFormat\":\"asSeconds\"}},\"working time\":{\"id\":\"duration\",\"params\":{\"inputFormat\":\"milliseconds\",\"outputFormat\":\"asHours\"}}}"
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "65918380-c244-11e8-8e1b-758ef07f6de8",
"_type": "visualization",
"_source": {
"title": "Number of events",
"visState": "{\"title\":\"Number of events\",\"type\":\"table\",\"params\":{\"perPage\":20,\"showMetricsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"event.keyword\",\"size\":1000,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Action\"}}]}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "543f6260-c25c-11e8-8e1b-758ef07f6de8",
"_type": "visualization",
"_source": {
"title": "Working day",
"visState": "{\"title\":\"Working day\",\"type\":\"table\",\"params\":{\"perPage\":20,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"min\",\"schema\":\"metric\",\"params\":{\"field\":\"@timestamp\",\"customLabel\":\"Start\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"userid.keyword\",\"size\":1000,\"order\":\"asc\",\"orderBy\":\"_key\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"User\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"@timestamp\",\"customLabel\":\"End\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"split\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"d\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"customLabel\":\"day\",\"row\":true}}]}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "31ac2d60-c25b-11e8-8e1b-758ef07f6de8",
"_type": "visualization",
"_source": {
"title": "List of users",
"visState": "{\"aggs\":[{\"enabled\":true,\"id\":\"2\",\"params\":{\"customLabel\":\"User\",\"field\":\"userid.keyword\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"order\":\"asc\",\"orderBy\":\"_key\",\"otherBucket\":true,\"otherBucketLabel\":\"Other\",\"size\":1000},\"schema\":\"bucket\",\"type\":\"terms\"},{\"enabled\":true,\"id\":\"3\",\"params\":{\"customBucket\":{\"enabled\":true,\"id\":\"3-bucket\",\"params\":{\"customInterval\":\"2h\",\"extended_bounds\":{},\"field\":\"@timestamp\",\"interval\":\"auto\",\"min_doc_count\":1},\"schema\":{\"aggFilter\":[],\"deprecate\":false,\"editor\":false,\"group\":\"none\",\"max\":null,\"min\":0,\"name\":\"bucketAgg\",\"params\":[],\"title\":\"Bucket Agg\"},\"type\":\"date_histogram\"},\"customLabel\":\"Activity\",\"customMetric\":{\"enabled\":true,\"id\":\"3-metric\",\"params\":{},\"schema\":{\"aggFilter\":[\"!top_hits\",\"!percentiles\",\"!percentile_ranks\",\"!median\",\"!std_dev\",\"!sum_bucket\",\"!avg_bucket\",\"!min_bucket\",\"!max_bucket\",\"!derivative\",\"!moving_avg\",\"!serial_diff\",\"!cumulative_sum\"],\"deprecate\":false,\"editor\":false,\"group\":\"none\",\"max\":null,\"min\":0,\"name\":\"metricAgg\",\"params\":[],\"title\":\"Metric Agg\"},\"type\":\"count\"}},\"schema\":\"metric\",\"type\":\"sum_bucket\"},{\"enabled\":true,\"id\":\"1\",\"params\":{\"customLabel\":\"Working Time (h)\",\"field\":\"working time\"},\"schema\":\"metric\",\"type\":\"sum\"}],\"params\":{\"perPage\":20,\"showMetricsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"title\":\"List of users\",\"type\":\"table\"}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "7f637200-d068-11e8-9320-a3c87be2b433",
"_type": "visualization",
"_source": {
"title": "List of tasks",
"visState": "{\"title\":\"List of tasks\",\"type\":\"table\",\"params\":{\"perPage\":20,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":2,\"direction\":\"desc\"},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"sum\",\"schema\":\"metric\",\"params\":{\"field\":\"working time\",\"customLabel\":\"Working time (h)\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"task.keyword\",\"size\":1000,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Task\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"userid.keyword\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"_key\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"User\"}}]}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":2,\"direction\":\"desc\"}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"ec510550-c238-11e8-8e1b-758ef07f6de8\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
},
{
"_id": "22250a40-c25d-11e8-8e1b-758ef07f6de8",
"_type": "dashboard",
"_source": {
"title": "Managment",
"hits": 0,
"description": "",
"panelsJSON": "[{\"embeddableConfig\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":1,\"direction\":\"desc\"}}}},\"gridData\":{\"x\":0,\"y\":0,\"w\":24,\"h\":33,\"i\":\"1\"},\"id\":\"31ac2d60-c25b-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"1\",\"type\":\"visualization\",\"version\":\"6.4.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":33,\"w\":48,\"h\":33,\"i\":\"2\"},\"id\":\"543f6260-c25c-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"2\",\"type\":\"visualization\",\"version\":\"6.4.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":66,\"w\":48,\"h\":33,\"i\":\"3\"},\"id\":\"d92524b0-c25c-11e8-8e1b-758ef07f6de8\",\"panelIndex\":\"3\",\"type\":\"visualization\",\"version\":\"6.4.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":24,\"y\":0,\"w\":24,\"h\":33,\"i\":\"4\"},\"id\":\"7f637200-d068-11e8-9320-a3c87be2b433\",\"panelIndex\":\"4\",\"type\":\"visualization\",\"version\":\"6.4.0\"}]",
"optionsJSON": "{\"darkTheme\":false,\"hidePanelTitles\":false,\"useMargins\":true}",
"version": 1,
"timeRestore": false,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[]}"
}
},
"_meta": {
"savedObjectVersion": 2
}
}
]

@ -0,0 +1,5 @@
server.host: 0.0.0.0
elasticsearch.url: http://elasticsearch:9200
elasticsearch.requestHeadersWhitelist: [ "cookie", "authorization", "x-forwarded-user" ]
kibana.defaultAppId: "discover"
server.basePath: /analytics

@ -0,0 +1,40 @@
#!/usr/bin/env python
import os
import argparse
import requests
import json
def import_resources(host, port, cfg_file):
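# Load the exported objects from the JSON file and import them one by one.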
with open(cfg_file, 'r') as f:
for saved_object in json.load(f):
_id = saved_object["_id"]
_type = saved_object["_type"]
_doc = saved_object["_source"]
import_saved_object(host, port, _type, _id, _doc)
def import_saved_object(host, port, _type, _id, data):
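# Check via the Kibana saved objects API whether the object exists: create it with POST if not, otherwise update it with PUT.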
saved_objects_api = "http://{}:{}/api/saved_objects/{}/{}".format(
host, port, _type, _id)
request = requests.get(saved_objects_api)
if request.status_code == 404:
print("Creating {} as {}".format(_type, _id))
request = requests.post(saved_objects_api, json={"attributes": data},
headers={'kbn-xsrf': 'true'})
else:
print("Updating {} named {}".format(_type, _id))
request = requests.put(saved_objects_api, json={"attributes": data},
headers={'kbn-xsrf': 'true'})
request.raise_for_status()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='import Kibana 6.x resources',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('export_file', metavar='FILE',
help='JSON export file with resources')
parser.add_argument('-p', '--port', metavar='PORT', default=5601, type=int,
help='port of Kibana instance')
parser.add_argument('-H', '--host', metavar='HOST', default='kibana',
help='host of Kibana instance')
args = parser.parse_args()
import_resources(args.host, args.port, args.export_file)

@ -0,0 +1,7 @@
ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}
RUN logstash-plugin install logstash-input-http logstash-filter-aggregate \
logstash-filter-prune logstash-output-email
COPY logstash.conf /usr/share/logstash/pipeline/
EXPOSE 5000

@ -0,0 +1,98 @@
input {
tcp {
port => 5000
codec => json
}
}
filter {
if [logger_name] =~ /cvat.client/ {
# 1. Decode the event from json in 'message' field
# 2. Remove unnecessary field from it
# 3. Type it as client
json {
source => "message"
}
date {
match => ["timestamp", "UNIX", "UNIX_MS"]
remove_field => "timestamp"
}
if [event] == "Send exception" {
aggregate {
task_id => "%{userid}_%{application}_%{message}_%{filename}_%{line}"
code => "
require 'time'
map['userid'] ||= event.get('userid');
map['application'] ||= event.get('application');
map['error'] ||= event.get('message');
map['filename'] ||= event.get('filename');
map['line'] ||= event.get('line');
map['task'] ||= event.get('task');
map['error_count'] ||= 0;
map['error_count'] += 1;
map['aggregated_stack'] ||= '';
map['aggregated_stack'] += event.get('stack') + '\n\n\n';"
timeout => 3600
timeout_tags => ['aggregated_exception']
push_map_as_event_on_timeout => true
}
}
prune {
blacklist_names => ["level", "host", "logger_name", "message", "path",
"port", "stack_info"]
}
mutate {
replace => { "type" => "client" }
}
} else if [logger_name] =~ /cvat.server/ {
# 1. Remove 'logger_name' field and create 'task' field
# 2. Remove unnecessary field from it
# 3. Type it as server
if [logger_name] =~ /cvat\.server\.task_[0-9]+/ {
mutate {
rename => { "logger_name" => "task" }
gsub => [ "task", "cvat.server.task_", "" ]
}
# Need to split the mutate because otherwise the conversion
# doesn't work.
mutate {
convert => { "task" => "integer" }
}
}
prune {
blacklist_names => ["host", "port", "stack_info"]
}
mutate {
replace => { "type" => "server" }
}
}
}
output {
stdout {
codec => rubydebug
}
if [type] == "client" {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "cvat.client"
}
} else if [type] == "server" {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "cvat.server"
}
}
}
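
For reference, a minimal sketch of a client that feeds this pipeline, assuming the TCP/JSON input above (port 5000) and the field names used in the filter section; the `send_client_event` helper is hypothetical and not part of CVAT:

```python
# Hypothetical sender for the Logstash tcp/json input configured above.
# logger_name routes the record into the 'client' branch of the filter;
# the event payload is JSON-encoded inside the 'message' field, as expected
# by the json { source => "message" } filter.
import json
import socket
import time

def send_client_event(event, userid, task, host="logstash", port=5000):
    record = {
        "logger_name": "cvat.client",
        "message": json.dumps({
            "event": event,        # e.g. "Send exception"
            "userid": userid,
            "task": task,
            "timestamp": time.time(),
        }),
    }
    with socket.create_connection((host, port)) as sock:
        # One JSON document per line is enough for the json codec.
        sock.sendall((json.dumps(record) + "\n").encode("utf-8"))

# Example: send_client_event("Send exception", userid=1, task=42)
```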

@ -0,0 +1,41 @@
## [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)
### Requirements
* NVIDIA GPU with a compute capability [3.0 - 7.2]
* Latest GPU driver
### Installation
#### Install the latest driver for your graphics card
```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-cache search nvidia-* # find latest nvidia driver
sudo apt-get install nvidia-* # install the nvidia driver
sudo apt-get install mesa-common-dev
sudo apt-get install freeglut3-dev
sudo apt-get install nvidia-modprobe
```
#### Reboot your PC and verify installation by `nvidia-smi` command.
#### Install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker)
Please make sure that the installation was successful.
```bash
docker info | grep 'Runtimes' # output should contain 'nvidia'
```
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/cuda/docker-compose.cuda.yml up -d
```

@ -10,7 +10,7 @@ services:
build:
context: .
args:
TF_ANNOTATION: "yes"
CUDA_SUPPORT: "yes"
runtime: "nvidia"
environment:
NVIDIA_VISIBLE_DEVICES: all

@ -18,8 +18,8 @@ CUDA_VERSION=9.0.176
NCCL_VERSION=2.1.15
CUDNN_VERSION=7.0.5.15
CUDA_PKG_VERSION="9-0=${CUDA_VERSION}-1"
echo "export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}" >> ${HOME}/.bashrc
echo "export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}" >> ${HOME}/.bashrc
echo 'export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}' >> ${HOME}/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}' >> ${HOME}/.bashrc
apt-get update && apt-get install -y --no-install-recommends --allow-unauthenticated \
libprotobuf-dev \
@ -32,11 +32,3 @@ apt-get update && apt-get install -y --no-install-recommends --allow-unauthentic
ln -s cuda-9.0 /usr/local/cuda && \
rm -rf /var/lib/apt/lists/* \
/etc/apt/sources.list.d/nvidia-ml.list /etc/apt/sources.list.d/cuda.list
pip3 install --no-cache-dir -r "$(cd `dirname $0` && pwd)/requirements.txt"
cd ${HOME}
wget -O model.tar.gz http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017.tar.gz
tar -xzf model.tar.gz
rm model.tar.gz
mv faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017 rcnn

@ -0,0 +1,23 @@
## [Intel OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit)
### Requirements
* 6th generation Intel Core CPUs or higher, or Intel Xeon CPUs.
### Preparation
* Download the latest [OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit) installer (offline or online) for the Linux platform. It should be a .tgz archive. The minimum required version is 2018 R3.
* Put the downloaded file into ```components/openvino```.
* Accept the EULA in the eula.cfg file.
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/openvino/docker-compose.openvino.yml up -d
```

@ -0,0 +1,13 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
OPENVINO_TOOLKIT: "yes"

@ -0,0 +1,3 @@
# Accept actual EULA from openvino installation archive. Valid values are: {accept, decline}
ACCEPT_EULA=accept

@ -0,0 +1,34 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e
if [[ `lscpu | grep -o "GenuineIntel"` != "GenuineIntel" ]]; then
echo "OpenVINO supports only Intel CPUs"
exit 1
fi
if [[ `lscpu | grep -o "sse4" | head -1` != "sse4" ]] && [[ `lscpu | grep -o "avx2" | head -1` != "avx2" ]]; then
echo "You Intel CPU should support sse4 or avx2 instruction if you want use OpenVINO"
exit 1
fi
cd /tmp/components/openvino
tar -xzf `ls | grep "openvino_toolkit"`
cd `ls -d */ | grep "openvino_toolkit"`
apt-get update && apt-get install -y sudo cpio && \
./install_cv_sdk_dependencies.sh && SUDO_FORCE_REMOVE=yes apt-get remove -y sudo
cat ../eula.cfg >> silent.cfg
./install.sh -s silent.cfg
cd /tmp/components && rm openvino -r
echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ${HOME}/.bashrc
echo -e '\nexport IE_PLUGINS_PATH=${IE_PLUGINS_PATH}' >> /opt/intel/computer_vision_sdk/bin/setupvars.sh

@ -0,0 +1,41 @@
## [Tensorflow Object Detector](https://github.com/tensorflow/models/tree/master/research/object_detection)
### What is it?
* This application allows you to automatically annotate many different objects on images.
* It uses [Faster RCNN Inception Resnet v2 Atrous Coco Model](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz) from [tensorflow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md)
* It can work on CPU (with Tensorflow or OpenVINO) or GPU (with Tensorflow GPU).
* It supports the following classes (just specify them in the "labels" row):
```
'surfboard', 'car', 'skateboard', 'boat', 'clock',
'cat', 'cow', 'knife', 'apple', 'cup', 'tv',
'baseball_bat', 'book', 'suitcase', 'tennis_racket',
'stop_sign', 'couch', 'cell_phone', 'keyboard',
'cake', 'tie', 'frisbee', 'truck', 'fire_hydrant',
'snowboard', 'bed', 'vase', 'teddy_bear',
'toaster', 'wine_glass', 'traffic_light',
'broccoli', 'backpack', 'carrot', 'potted_plant',
'donut', 'umbrella', 'parking_meter', 'bottle',
'sandwich', 'motorcycle', 'bear', 'banana',
'person', 'scissors', 'elephant', 'dining_table',
'toothbrush', 'toilet', 'skis', 'bowl', 'sheep',
'refrigerator', 'oven', 'microwave', 'train',
'orange', 'mouse', 'laptop', 'bench', 'bicycle',
'fork', 'kite', 'zebra', 'baseball_glove', 'bus',
'spoon', 'horse', 'handbag', 'pizza', 'sports_ball',
'airplane', 'hair_drier', 'hot_dog', 'remote',
'sink', 'dog', 'bird', 'giraffe', 'chair'.
```
* The component adds a "Run TF Annotation" button to the dashboard.
### Build docker image
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml build
```
### Run docker container
```bash
# From project root directory
docker-compose -f docker-compose.yml -f components/tf_annotation/docker-compose.tf_annotation.yml up -d
```

@ -0,0 +1,13 @@
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat:
build:
context: .
args:
TF_ANNOTATION: "yes"

@ -0,0 +1,32 @@
#!/bin/bash
#
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
set -e
cd ${HOME} && \
wget -O model.tar.gz http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz && \
tar -xzf model.tar.gz && rm model.tar.gz && \
mv faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28 ${HOME}/rcnn && cd ${HOME} && \
mv rcnn/frozen_inference_graph.pb rcnn/inference_graph.pb
if [[ "$CUDA_SUPPORT" = "yes" ]]
then
pip3 install --no-cache-dir tensorflow-gpu==1.7.0
else
if [[ "$OPENVINO_TOOLKIT" = "yes" ]]
then
pip3 install -r ${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer/requirements.txt && \
cd ${HOME}/rcnn/ && \
${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer/mo.py --framework tf \
--data_type FP32 --input_shape [1,600,600,3] \
--input image_tensor --output detection_scores,detection_boxes,num_detections \
--tensorflow_use_custom_operations_config ${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config --input_model inference_graph.pb && \
rm inference_graph.pb
else
pip3 install --no-cache-dir tensorflow==1.7.0
fi
fi

@ -5,3 +5,13 @@
default_app_config = 'cvat.apps.authentication.apps.AuthenticationConfig'
from enum import Enum
class AUTH_ROLE(Enum):
ADMIN = 'admin'
USER = 'user'
ANNOTATOR = 'annotator'
OBSERVER = 'observer'
def __str__(self):
return self.value

@ -4,6 +4,24 @@
# SPDX-License-Identifier: MIT
from django.contrib import admin
from django.contrib.auth.models import Group, User
from django.contrib.auth.admin import GroupAdmin, UserAdmin
from django.utils.translation import ugettext_lazy as _
# Register your models here.
class CustomUserAdmin(UserAdmin):
fieldsets = (
(None, {'fields': ('username', 'password')}),
(_('Personal info'), {'fields': ('first_name', 'last_name', 'email')}),
(_('Permissions'), {'fields': ('is_active', 'is_staff', 'is_superuser',
'groups',)}),
(_('Important dates'), {'fields': ('last_login', 'date_joined')}),
)
class CustomGroupAdmin(GroupAdmin):
fieldsets = ((None, {'fields': ('name',)}),)
admin.site.unregister(User)
admin.site.unregister(Group)
admin.site.register(User, CustomUserAdmin)
admin.site.register(Group, CustomGroupAdmin)

@ -4,20 +4,11 @@
# SPDX-License-Identifier: MIT
from django.apps import AppConfig
from django.db.models.signals import post_migrate, post_save
from .settings.authentication import DJANGO_AUTH_TYPE
class AuthenticationConfig(AppConfig):
name = 'cvat.apps.authentication'
def ready(self):
from . import signals
from django.contrib.auth.models import User
from .auth import register_signals
post_migrate.connect(signals.create_groups)
if DJANGO_AUTH_TYPE == 'SIMPLE':
post_save.connect(signals.create_user, sender=User, dispatch_uid="create_user")
import django_auth_ldap.backend
django_auth_ldap.backend.populate_user.connect(signals.update_ldap_groups)
register_signals()

@ -0,0 +1,80 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
from django.conf import settings
import rules
from . import AUTH_ROLE
def register_signals():
from django.db.models.signals import post_migrate, post_save
from django.contrib.auth.models import User, Group
def create_groups(sender, **kwargs):
for role in AUTH_ROLE:
db_group, _ = Group.objects.get_or_create(name=role)
db_group.save()
post_migrate.connect(create_groups, weak=False)
if settings.DJANGO_AUTH_TYPE == 'BASIC':
from .auth_basic import create_user
post_save.connect(create_user, sender=User)
elif settings.DJANGO_AUTH_TYPE == 'LDAP':
import django_auth_ldap.backend
from .auth_ldap import create_user
django_auth_ldap.backend.populate_user.connect(create_user)
# AUTH PREDICATES
has_admin_role = rules.is_group_member(str(AUTH_ROLE.ADMIN))
has_user_role = rules.is_group_member(str(AUTH_ROLE.USER))
has_annotator_role = rules.is_group_member(str(AUTH_ROLE.ANNOTATOR))
has_observer_role = rules.is_group_member(str(AUTH_ROLE.OBSERVER))
@rules.predicate
def is_task_owner(db_user, db_task):
# If owner is None (null) the task can be accessed/changed/deleted
# only by admin. At the moment each task has an owner.
return db_task.owner == db_user
@rules.predicate
def is_task_assignee(db_user, db_task):
return db_task.assignee == db_user
@rules.predicate
def is_task_annotator(db_user, db_task):
from functools import reduce
db_segments = list(db_task.segment_set.prefetch_related('job_set__assignee').all())
return any([is_job_annotator(db_user, db_job)
for db_segment in db_segments for db_job in db_segment.job_set.all()])
@rules.predicate
def is_job_owner(db_user, db_job):
return is_task_owner(db_user, db_job.segment.task)
@rules.predicate
def is_job_annotator(db_user, db_job):
db_task = db_job.segment.task
# A job can be annotated by any user if the task's assignee is None.
has_rights = db_task.assignee is None or is_task_assignee(db_user, db_task)
if db_job.assignee is not None:
has_rights |= (db_user == db_job.assignee)
return has_rights
# AUTH PERMISSIONS RULES
rules.add_perm('engine.task.create', has_admin_role | has_user_role)
rules.add_perm('engine.task.access', has_admin_role | has_observer_role |
is_task_owner | is_task_annotator)
rules.add_perm('engine.task.change', has_admin_role | is_task_owner |
is_task_assignee)
rules.add_perm('engine.task.delete', has_admin_role | is_task_owner)
rules.add_perm('engine.job.access', has_admin_role | has_observer_role |
is_job_owner | is_job_annotator)
rules.add_perm('engine.job.change', has_admin_role | is_job_owner |
is_job_annotator)
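A hedged sketch of how these object-level permissions are consumed: the dashboard view later in this commit calls ``request.user.has_perm('engine.task.access', task)``. The helper name below is illustrative, and it assumes the django-rules object permission backend is enabled in ``AUTHENTICATION_BACKENDS``.
```python
# Hedged sketch: checking the permissions registered above with django-rules.
# Assumes rules' object permission backend is enabled in AUTHENTICATION_BACKENDS.
import rules

def can_user_open_task(db_user, db_task):
    # Equivalent to db_user.has_perm('engine.task.access', db_task)
    # when the backend is configured; db_task is an engine Task instance.
    return rules.has_perm('engine.task.access', db_user, db_task)
```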

@ -0,0 +1,12 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from . import AUTH_ROLE
from django.conf import settings
def create_user(sender, instance, created, **kwargs):
from django.contrib.auth.models import Group
if instance.is_superuser and instance.is_staff:
db_group = Group.objects.get(name=AUTH_ROLE.ADMIN)
instance.groups.add(db_group)

@ -0,0 +1,33 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.conf import settings
from . import AUTH_ROLE
AUTH_LDAP_GROUPS = {
AUTH_ROLE.ADMIN: settings.AUTH_LDAP_ADMIN_GROUPS,
AUTH_ROLE.ANNOTATOR: settings.AUTH_LDAP_ANNOTATOR_GROUPS,
AUTH_ROLE.USER: settings.AUTH_LDAP_USER_GROUPS,
AUTH_ROLE.OBSERVER: settings.AUTH_LDAP_OBSERVER_GROUPS
}
def create_user(sender, user=None, ldap_user=None, **kwargs):
from django.contrib.auth.models import Group
user_groups = []
for role in AUTH_ROLE:
db_group = Group.objects.get(name=role)
for ldap_group in AUTH_LDAP_GROUPS[role]:
if ldap_group.lower() in ldap_user.group_dns:
user_groups.append(db_group)
if role == AUTH_ROLE.ADMIN:
user.is_staff = user.is_superuser = True
# It is important to save the user before adding groups. Please read
# https://django-auth-ldap.readthedocs.io/en/latest/users.html#populating-users
# The user instance will be saved automatically after the signal handler
# is run.
user.save()
user.groups.set(user_groups)
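For context, a hedged sketch of the settings this handler reads through ``AUTH_LDAP_GROUPS`` (the new settings module isn't part of this excerpt; the group DNs follow the example style of the removed LDAP settings further below and must be adapted to your directory).
```python
# Hedged sketch of the group settings consumed by AUTH_LDAP_GROUPS above.
# The DNs are placeholders; the cvat_annotators group name is hypothetical.
AUTH_LDAP_ADMIN_GROUPS = ['cn=cvat_admins,ou=Groups,dc=example,dc=com']
AUTH_LDAP_USER_GROUPS = ['cn=cvat_users,ou=Groups,dc=example,dc=com']
AUTH_LDAP_ANNOTATOR_GROUPS = ['cn=cvat_annotators,ou=Groups,dc=example,dc=com']
AUTH_LDAP_OBSERVER_GROUPS = []
```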

@ -3,16 +3,16 @@
#
# SPDX-License-Identifier: MIT
from functools import wraps
from urllib.parse import urlparse
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.shortcuts import resolve_url, reverse
from django.http import JsonResponse
from urllib.parse import urlparse
from django.contrib.auth.views import redirect_to_login
from functools import wraps
from django.conf import settings
def login_required(function=None, redirect_field_name=REDIRECT_FIELD_NAME, login_url=None, redirect_methods=['GET']):
def login_required(function=None, redirect_field_name=REDIRECT_FIELD_NAME,
login_url=None, redirect_methods=['GET']):
def decorator(view_func):
@wraps(view_func)
def _wrapped_view(request, *args, **kwargs):

@ -1,5 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -1,56 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.conf import settings
import ldap
from django_auth_ldap.config import LDAPSearch, NestedActiveDirectoryGroupType
# Baseline configuration.
settings.AUTH_LDAP_SERVER_URI = ""
# Credentials for LDAP server
settings.AUTH_LDAP_BIND_DN = ""
settings.AUTH_LDAP_BIND_PASSWORD = ""
# Set up basic user search
settings.AUTH_LDAP_USER_SEARCH = LDAPSearch("dc=example,dc=com",
ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
# Set up the basic group parameters.
settings.AUTH_LDAP_GROUP_SEARCH = LDAPSearch("dc=example,dc=com",
ldap.SCOPE_SUBTREE, "(objectClass=group)")
settings.AUTH_LDAP_GROUP_TYPE = NestedActiveDirectoryGroupType()
# # Simple group restrictions
settings.AUTH_LDAP_REQUIRE_GROUP = "cn=cvat,ou=Groups,dc=example,dc=com"
# Populate the Django user from the LDAP directory.
settings.AUTH_LDAP_USER_ATTR_MAP = {
"first_name": "givenName",
"last_name": "sn",
"email": "mail",
}
settings.AUTH_LDAP_ALWAYS_UPDATE_USER = True
# Cache group memberships for an hour to minimize LDAP traffic
settings.AUTH_LDAP_CACHE_GROUPS = True
settings.AUTH_LDAP_GROUP_CACHE_TIMEOUT = 3600
settings.AUTH_LDAP_AUTHORIZE_ALL_USERS = True
# Keep ModelBackend around for per-user permissions and maybe a local
# superuser.
settings.AUTHENTICATION_BACKENDS.append('django_auth_ldap.backend.LDAPBackend')
AUTH_LDAP_ADMIN_GROUPS = [
"cn=cvat_admins,ou=Groups,dc=example,dc=com"
]
AUTH_LDAP_DATA_ANNOTATORS_GROUPS = [
]
AUTH_LDAP_DEVELOPER_GROUPS = [
"cn=cvat_users,ou=Groups,dc=example,dc=com"
]

@ -1,8 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
# Specify groups that new users will have
AUTH_SIMPLE_DEFAULT_GROUPS = []

@ -1,58 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.conf import settings
import os
settings.LOGIN_URL = 'login'
settings.LOGIN_REDIRECT_URL = '/'
settings.AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.ModelBackend',
]
AUTH_LDAP_DEVELOPER_GROUPS = []
AUTH_LDAP_DATA_ANNOTATORS_GROUPS = []
AUTH_LDAP_ADMIN_GROUPS = []
DJANGO_AUTH_TYPE = 'LDAP' if os.environ.get('DJANGO_AUTH_TYPE', '') == 'LDAP' else 'SIMPLE'
if DJANGO_AUTH_TYPE == 'LDAP':
from .auth_ldap import *
else:
from .auth_simple import *
# Definition of CVAT groups with permissions for task and annotation objects
# Annotator - can modify annotation for task, but cannot add, change and delete tasks
# Developer - can create tasks and modify (delete) owned tasks and any actions with annotation
# Admin - can any actions for task and annotation, can login to admin area and manage groups and users
cvat_groups_definition = {
'user': {
'description': '',
'permissions': {
'task': ['view', 'add', 'change', 'delete'],
'annotation': ['view', 'change'],
},
'ldap_groups': AUTH_LDAP_DEVELOPER_GROUPS,
},
'annotator': {
'description': '',
'permissions': {
'task': ['view'],
'annotation': ['view', 'change'],
},
'ldap_groups': AUTH_LDAP_DATA_ANNOTATORS_GROUPS,
},
'admin': {
'description': '',
'permissions': {
'task': ['view', 'add', 'change', 'delete'],
'annotation': ['view', 'change'],
},
'ldap_groups': AUTH_LDAP_ADMIN_GROUPS,
},
}

@ -1,62 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.db import models
from django.conf import settings
from .settings import authentication
from django.contrib.auth.models import User, Group
def setup_group_permissions(group):
from cvat.apps.engine.models import Task
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
def append_permissions_for_model(model):
content_type = ContentType.objects.get_for_model(model)
for perm_target, actions in authentication.cvat_groups_definition[group.name]['permissions'].items():
for action in actions:
codename = '{}_{}'.format(action, perm_target)
try:
perm = Permission.objects.get(codename=codename, content_type=content_type)
group_permissions.append(perm)
except:
pass
group_permissions = []
append_permissions_for_model(Task)
group.permissions.set(group_permissions)
group.save()
def create_groups(sender, **kwargs):
for cvat_role, _ in authentication.cvat_groups_definition.items():
Group.objects.get_or_create(name=cvat_role)
def update_ldap_groups(sender, user=None, ldap_user=None, **kwargs):
user_groups = []
for cvat_role, role_settings in authentication.cvat_groups_definition.items():
group_instance, _ = Group.objects.get_or_create(name=cvat_role)
setup_group_permissions(group_instance)
for ldap_group in role_settings['ldap_groups']:
if ldap_group.lower() in ldap_user.group_dns:
user_groups.append(group_instance)
user.save()
user.groups.set(user_groups)
user.is_staff = user.is_superuser = user.groups.filter(name='admin').exists()
def create_user(sender, instance, created, **kwargs):
if instance.is_superuser and instance.is_staff:
admin_group, _ = Group.objects.get_or_create(name='admin')
admin_group.user_set.add(instance)
if created:
for cvat_role, _ in authentication.cvat_groups_definition.items():
group_instance, _ = Group.objects.get_or_create(name=cvat_role)
setup_group_permissions(group_instance)
if cvat_role in authentication.AUTH_SIMPLE_DEFAULT_GROUPS:
instance.groups.add(group_instance)

@ -23,5 +23,7 @@
{% endblock %}
{% block note%}
<p>Have not registered yet? <a href="{% url 'register' %}">Register here</a>.</p>
{% autoescape off %}
{{ note }}
{% endautoescape %}
{% endblock %}

@ -1,27 +0,0 @@
<!--
Copyright (C) 2018 Intel Corporation
SPDX-License-Identifier: MIT
-->
{% extends "auth_base.html" %}
{% block title %}Login{% endblock %}
{% block content %}
<h1>Login</h1>
{% if form.errors %}
<small>Your username and password didn't match. Please try again.</small>
{% endif %}
<form method="post" action="{% url 'login' %}">
{% csrf_token %}
{% for field in form %}
{{ field }}
{% endfor %}
<input type="hidden" name="next" value="{{ next }}" />
<button type="submit" class="btn btn-primary btn-block btn-large">Login</button>
</form>
{% endblock %}
{% block note %}
{% include "note.html" %}
{% endblock %}

@ -1,7 +0,0 @@
<!--
Copyright (C) 2018 Intel Corporation
SPDX-License-Identifier: MIT
-->
<p>
</p>

@ -4,17 +4,20 @@
# SPDX-License-Identifier: MIT
from django.urls import path
import os
from django.contrib.auth import views as auth_views
from django.conf import settings
from . import forms
from . import views
from .settings.authentication import DJANGO_AUTH_TYPE
login_page = 'login{}.html'.format('_ldap' if DJANGO_AUTH_TYPE == 'LDAP' else '')
urlpatterns = [
path('login', auth_views.LoginView.as_view(form_class=forms.AuthForm, template_name=login_page), name='login'),
path('login', auth_views.LoginView.as_view(form_class=forms.AuthForm,
template_name='login.html', extra_context={'note': settings.AUTH_LOGIN_NOTE}),
name='login'),
path('logout', auth_views.LogoutView.as_view(next_page='login'), name='logout'),
path('register', views.register_user, name='register'),
]
if settings.DJANGO_AUTH_TYPE == 'BASIC':
urlpatterns += [
path('register', views.register_user, name='register'),
]

@ -3,13 +3,12 @@
#
# SPDX-License-Identifier: MIT
from django.shortcuts import render
from django.contrib.auth.views import LoginView
from django.http import HttpResponseRedirect
from django.shortcuts import render, redirect
from django.conf import settings
from django.contrib.auth import login, authenticate
from . import forms
from django.contrib.auth import login, authenticate
from django.shortcuts import render, redirect
def register_user(request):
if request.method == 'POST':
@ -20,7 +19,7 @@ def register_user(request):
raw_password = form.cleaned_data.get('password1')
user = authenticate(username=username, password=raw_password)
login(request, user)
return redirect('/')
return redirect(settings.LOGIN_REDIRECT_URL)
else:
form = forms.NewUserForm()
return render(request, 'register.html', {'form': form})

@ -3,3 +3,6 @@
#
# SPDX-License-Identifier: MIT
from cvat.settings.base import JS_3RDPARTY
JS_3RDPARTY['engine'] = JS_3RDPARTY.get('engine', []) + ['dashboard/js/enginePlugin.js']
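This one-liner is the generic plugin hook: the dashboard app injects its ``enginePlugin.js`` into the engine page, and the dashboard view below passes ``JS_3RDPARTY.get('dashboard', [])`` to its template. As an illustration only (the app and file names are hypothetical), another CVAT app could register a dashboard script the same way:
```python
# Hypothetical cvat/apps/my_plugin/__init__.py, illustrating the same hook.
from cvat.settings.base import JS_3RDPARTY

# Append this app's client-side script to the list rendered on the dashboard page.
JS_3RDPARTY['dashboard'] = JS_3RDPARTY.get('dashboard', []) + ['my_plugin/js/myPlugin.js']
```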

@ -524,12 +524,16 @@ function uploadAnnotationRequest() {
$.ajax({
url: '/get/task/' + window.cvat.dashboard.taskID,
success: function(data) {
let annotationParser = new AnnotationParser({
start: 0,
stop: data.size,
image_meta_data: data.image_meta_data,
flipped: data.flipped
}, new LabelsInfo(data.spec));
let annotationParser = new AnnotationParser(
{
start: 0,
stop: data.size,
image_meta_data: data.image_meta_data,
flipped: data.flipped
},
new LabelsInfo(data.spec),
new ConstIdGenerator(-1)
);
let asyncParse = function() {
let parsed = null;
@ -538,28 +542,64 @@ function uploadAnnotationRequest() {
}
catch(error) {
overlay.remove();
showMessage("Parsing errors was occured. " + error);
showMessage("Parsing errors occurred. " + error);
return;
}
let asyncSave = function() {
$.ajax({
url: '/save/annotation/task/' + window.cvat.dashboard.taskID,
type: 'POST',
data: JSON.stringify(parsed),
contentType: 'application/json',
url: '/delete/annotation/task/' + window.cvat.dashboard.taskID,
type: 'DELETE',
success: function() {
let message = 'Annotation successfully uploaded';
showMessage(message);
asyncSaveChunk(0);
},
error: function(response) {
let message = 'Annotation uploading errors was occured. ' + response.responseText;
let message = 'Previous annotations cannot be deleted: ' +
response.responseText;
showMessage(message);
overlay.remove();
},
complete: () => overlay.remove()
});
};
let asyncSaveChunk = function(start) {
const CHUNK_SIZE = 100000;
let end = start + CHUNK_SIZE;
let chunk = {};
let next = false;
for (let prop in parsed) {
if (parsed.hasOwnProperty(prop)) {
chunk[prop] = parsed[prop].slice(start, end);
next |= chunk[prop].length > 0;
}
}
if (next) {
let exportData = createExportContainer();
exportData.create = chunk;
$.ajax({
url: '/save/annotation/task/' + window.cvat.dashboard.taskID,
type: 'POST',
data: JSON.stringify(exportData),
contentType: 'application/json',
success: function() {
asyncSaveChunk(end);
},
error: function(response) {
let message = 'Annotation uploading errors occurred: ' +
response.responseText;
showMessage(message);
overlay.remove();
},
});
} else {
let message = 'Annotations were uploaded successfully';
showMessage(message);
overlay.remove();
}
};
overlay.setMessage('Annotation is being saved..');
setTimeout(asyncSave);
};

@ -0,0 +1,15 @@
/*
* Copyright (C) 2018 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
"use strict";
window.addEventListener('DOMContentLoaded', () => {
$(`<button class="menuButton semiBold h2"> Open Task </button>`).on('click', () => {
let win = window.open(`${window.location.origin }/dashboard/?jid=${window.cvat.job.id}`, '_blank');
win.focus();
}).prependTo('#engineMenuButtons');
});

@ -3,22 +3,22 @@
SPDX-License-Identifier: MIT
-->
<div class="dashboardTaskUI" id="dashboardTask_{{item.task_id}}">
<div class="dashboardTaskUI" id="dashboardTask_{{item.id}}">
<center class="dashboardTitleWrapper">
<label class="semiBold h1 dashboardTaskNameLabel selectable"> {{ item.name }} </label>
</center>
<center class="dashboardTitleWrapper">
<label class="regular dashboardStatusLabel"> {{ item.status }} </label>
</center>
<div class="dashboardTaskIntro" style='background-image: url("/get/task/{{item.task_id}}/frame/0")'> </div>
<div class="dashboardTaskIntro" style='background-image: url("/get/task/{{item.id}}/frame/0")'> </div>
<div class="dashboardButtonsUI">
<button class="dashboardDumpAnnotation semiBold dashboardButtonUI"> Dump Annotation </button>
<button class="dashboardUploadAnnotation semiBold dashboardButtonUI"> Upload Annotation </button>
<button class="dashboardUpdateTask semiBold dashboardButtonUI"> Update Task </button>
<button class="dashboardDeleteTask semiBold dashboardButtonUI"> Delete Task </button>
{%if item.has_bug_tracker %}
{%if item.bug_tracker %}
<button class="dashboardOpenTrackerButton semiBold dashboardButtonUI"> Open Bug Tracker </button>
<a class="dashboardBugTrackerLink" href='{{item.bug_tracker_link}}' style="display: none;"> </a>
<a class="dashboardBugTrackerLink" href='{{item.bug_tracker}}' style="display: none;"> </a>
{% endif %}
</div>
<div class="dashboardJobsUI">
@ -26,10 +26,12 @@
<label class="regular h1"> Jobs </label>
</center>
<table class="dashboardJobList regular">
{% for segment in item.segments %}
<tr>
<td> <a href="{{segment.url}}"> {{segment.url}} </a> </td>
</tr>
{% for segm in item.segment_set.all %}
{% for job in segm.job_set.all %}
<tr>
<td> <a href="{{base_url}}?id={{job.id}}"> {{base_url}}?id={{job.id}} </a> </td>
</tr>
{% endfor %}
{% endfor %}
</table>
</div>

@ -7,10 +7,9 @@ from django.http import HttpResponse, JsonResponse, HttpResponseBadRequest
from django.shortcuts import redirect
from django.shortcuts import render
from django.conf import settings
from django.contrib.auth.decorators import permission_required
from cvat.apps.authentication.decorators import login_required
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine.models import Task as TaskModel, Job as JobModel
from cvat.settings.base import JS_3RDPARTY
import os
@ -40,7 +39,6 @@ def ScanNode(directory):
return result
@login_required
@permission_required('engine.add_task', raise_exception=True)
def JsTreeView(request):
node_id = None
if 'id' in request.GET:
@ -56,63 +54,27 @@ def JsTreeView(request):
json_dumps_params=dict(ensure_ascii=False))
def MainTaskInfo(task, dst_dict):
dst_dict["status"] = task.status
dst_dict["num_of_segments"] = task.segment_set.count()
dst_dict["mode"] = task.mode.capitalize()
dst_dict["name"] = task.name
dst_dict["task_id"] = task.id
dst_dict["created_date"] = task.created_date
dst_dict["updated_date"] = task.updated_date
dst_dict["bug_tracker_link"] = task.bug_tracker
dst_dict["has_bug_tracker"] = len(task.bug_tracker) > 0
dst_dict["owner"] = 'undefined'
dst_dict["id"] = task.id
dst_dict["segments"] = []
def DetailTaskInfo(request, task, dst_dict):
scheme = request.scheme
host = request.get_host()
dst_dict['segments'] = []
for segment in task.segment_set.all():
for job in segment.job_set.all():
segment_url = "{0}://{1}/?id={2}".format(scheme, host, job.id)
dst_dict["segments"].append({
'id': job.id,
'start': segment.start_frame,
'stop': segment.stop_frame,
'url': segment_url
})
db_labels = task.label_set.prefetch_related('attributespec_set').all()
attributes = {}
for db_label in db_labels:
attributes[db_label.id] = {}
for db_attrspec in db_label.attributespec_set.all():
attributes[db_label.id][db_attrspec.id] = db_attrspec.text
dst_dict['labels'] = attributes
@login_required
@permission_required('engine.view_task', raise_exception=True)
def DashboardView(request):
filter_name = request.GET['search'] if 'search' in request.GET else None
tasks_query_set = list(TaskModel.objects.prefetch_related('segment_set').order_by('-created_date').all())
if filter_name is not None:
tasks_query_set = list(filter(lambda x: filter_name.lower() in x.name.lower(), tasks_query_set))
data = []
for task in tasks_query_set:
task_info = {}
MainTaskInfo(task, task_info)
DetailTaskInfo(request, task, task_info)
data.append(task_info)
query_name = request.GET['search'] if 'search' in request.GET else None
query_job = int(request.GET['jid']) if 'jid' in request.GET and request.GET['jid'].isdigit() else None
task_list = None
if query_job is not None and JobModel.objects.filter(pk = query_job).exists():
task_list = [JobModel.objects.select_related('segment__task').get(pk = query_job).segment.task]
else:
task_list = list(TaskModel.objects.prefetch_related('segment_set__job_set').order_by('-created_date').all())
if query_name is not None:
task_list = list(filter(lambda x: query_name.lower() in x.name.lower(), task_list))
task_list = list(filter(lambda task: request.user.has_perm(
'engine.task.access', task), task_list))
return render(request, 'dashboard/dashboard.html', {
'data': data,
'data': task_list,
'max_upload_size': settings.LOCAL_LOAD_MAX_FILES_SIZE,
'max_upload_count': settings.LOCAL_LOAD_MAX_FILES_COUNT,
'base_url': "{0}://{1}/".format(request.scheme, request.get_host()),
'share_path': os.getenv('CVAT_SHARE_URL', default=r'${cvat_root}/share'),
'js_3rdparty': JS_3RDPARTY.get('dashboard', [])
'js_3rdparty': JS_3RDPARTY.get('dashboard', []),
})

@ -0,0 +1,9 @@
### AWS-Deployment Guide
There are two ways of deploying CVAT.
1. **On Nvidia GPU Machine:** The Tensorflow annotation feature depends on GPU hardware. One of the easiest ways to launch CVAT with the tf-annotation app is to use AWS P3 instances, which provide NVIDIA GPUs. Read more about [P3 instances here.](https://aws.amazon.com/about-aws/whats-new/2017/10/introducing-amazon-ec2-p3-instances/)
The overall setup is explained in the [main readme file](https://github.com/opencv/cvat/), except for installing the Nvidia drivers, so we need to download and install them ourselves. For Amazon P3 instances, download the Nvidia drivers from the Nvidia website. For more details check the [Installing the NVIDIA Driver on Linux Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html) link.
2. **On Any other AWS Machine:** We can follow the same instruction guide mentioned in the [Readme file](https://github.com/opencv/cvat/). The additional step is to add a [security group and rule to allow incoming connections](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).
For any of the above, don't forget to add the exposed AWS public IP address to `docker-compose.override.yml`.

Binary files not shown (documentation images added or updated).

@ -5,7 +5,7 @@
*/
Mousetrap.bind(window.cvat.config.shortkeys["open_help"].value, function() {
window.location.href = "/documentation/user_guide.html";
window.open("/documentation/user_guide.html");
return false;
});

@ -32,7 +32,7 @@ There you can:
![](static/documentation/images/image005.jpg)
__Labels__. Use the following schema to create labels: ``label_name <prefix>input_type=attribute_name:attribute_value1,attribute_value2``. You can specify multiple labels and multiple attributes separated by space. Attributes belong to previous label.
__Labels__. Use the following scheme to create labels: ``label_name <prefix>input_type=attribute_name:attribute_value1,attribute_value2``. You can specify multiple labels and multiple attributes separated by spaces. Attributes belong to the previous label.
Example:
- ``vehicle @select=type:__undefined__,car,truck,bus,train ~radio=quality:good,bad ~checkbox=parked:false`` -
@ -43,8 +43,8 @@ There you can:
``label_name``: for example *vehicle, person, face etc.*
``<prefix>``:
- Use ``@`` for unique attributes which cannot be changed from frame to frame *(e.g. age, gender, color, etc)*
- Use ``~`` for temporary attributes which can be changed on any frame *(e.g. quality, pose, truncated, etc)*
- Use ``@`` for unique attributes which cannot be changed from frame to frame *(e.g. age, gender, color, etc.)*
- Use ``~`` for temporary attributes which can be changed on any frame *(e.g. quality, pose, truncated, etc.)*
``input_type``: the following input types are available ``select``, ``checkbox``, ``radio``, ``number``, ``text``.
@ -63,15 +63,15 @@ There you can:
__Flip images__. All selected files will be rotated 180 degrees.
__Z-Order__. Defines the order on drawn polygons. Check the box for enable layered dislaying.
__Z-Order__. Defines the order of drawn polygons. Check the box to enable layered displaying.
__Overlap Size__. Use this option to make overlapped segments. The option makes tracks continuous from one segment into another. Use it for interpolation mode. There are several use cases for the parameter:
- For an interpolation task (video sequence), if an object exists on overlapped segments it will be automatically merged into one track if overlap is greater than zero and annotation is good enough on adjacent segments. If overlap equals zero or annotation is poor on adjacent segments, inside a dumped annotation file you will have several tracks, one for each segment, which correspond to the object.
- For an annotation task (independent images), if an object exists on overlapped segments bounding boxes will be automatically merged into one if overlap is greater than zero and annotation is good enough on adjacent segments. If overlap equals zero or annotation is poor on adjacent segments, inside a dumped annotation file you will have several bounding boxes for the same object.
Thus you annotate an object on first segment. You annotate the same object on second segment and if you do it right you will have one track inside your annotation file. If annotations on different segments (on overlapped frames) are very different or overlap is zero you will have two tracks for the same object. The functionality only works for bounding boxes. Polygon, polyline, points don't support automatic merge on overlapped segments even the overlap parameter isn't zero and match between corresponding shapes on adjacent segments is perfect.
Thus you annotate an object on the first segment. You annotate the same object on the second segment, and if you do it right you will have one track inside your annotation file. If annotations on different segments (on overlapped frames) are very different or overlap is zero, you will have two tracks for the same object. This functionality works only for bounding boxes. Polygon, polyline and points don't support automatic merge on overlapped segments even if the overlap parameter isn't zero and the match between corresponding shapes on adjacent segments is perfect.
__Segment size__. Use this option to divide huge dataset by a few less size segments.
__Segment size__. Use this option to divide a huge dataset into a few smaller segments.
__Image Quality__. Use this option to specify the quality of uploaded images. The option makes it faster to load high-quality datasets. Use a value from ``1`` (completely compressed images) to ``95`` (almost uncompressed images).
@ -85,13 +85,13 @@ There you can:
### Basic navigation
1. Use arrows below to move on next/previous frame. Mostly every button is covered by a shortcut. To get a hint about the shortcut just put your mouse pointer over an UI element.
1. Use the arrows below to move to the next/previous frame. Almost every button is covered by a shortcut. To get a hint about the shortcut just put your mouse pointer over a UI element.
![](static/documentation/images/image008.jpg)
2. An image can be zoomed in/out using the mouse wheel. The image will be zoomed relative to your current cursor position. Thus if you point at an object it will stay under your mouse during the zooming process.
3. An image can be moved/shifted by holding left mouse button inside some area without annotated objects. If ``Shift`` key is pressed then all annotated objects are ignored otherwise a highlighted bounding box will be moved instead of the image itself. Usually the functionality is used together with zoom to precisely locate an object of interest.
3. An image can be moved/shifted by holding the left mouse button inside an area without annotated objects. If ``Mouse Wheel`` is pressed, then all annotated objects are ignored; otherwise a highlighted bounding box will be moved instead of the image itself.
### Types of Shapes (basic)
There are four shapes you can annotate your images with:
@ -100,7 +100,7 @@ There are four shapes you can annotate your images with:
- ``polyline``
- ``points``
And here how they all look like:
And here is how they all look:
![](static/documentation/images/image038.jpg) ![](static/documentation/images/image033.jpg)
@ -111,13 +111,13 @@ Usage examples:
- Create new annotations for a set of images.
- Add/modify/delete objects for existing annotations.
1. Before start need to check that ``Annotation`` is selected:
1. Before starting, check that ``Annotation`` is selected:
![](static/documentation/images/image082.jpg) ![](static/documentation/images/image081.jpg)
2. Create a new annotation:
- Choose right ``Shape`` (e.g. box) and ``Label`` (was specified by you while creating the task) beforehand:
- Choose the right ``Shape`` (box, etc.) and ``Label`` (specified by you while creating the task) beforehand:
![](static/documentation/images/image080.jpg) ![](static/documentation/images/image083.jpg)
@ -125,9 +125,9 @@ Usage examples:
![](static/documentation/images/image011.jpg)
- It is possible to adjust boundaries and location of the bounding box using mouse. In the top right corner size of the box is shown. You can also undo/redo your actions by using ``Ctrl+Z`` / ``Shift+Ctrl+Z Ctrl+Y``.
- It is possible to adjust the boundaries and location of the bounding box using the mouse. In the top right corner the box's size is shown; you can check it by clicking one of the box's points. You can also undo your actions by using ``Ctrl+Z`` and redo them with ``Shift+Ctrl+Z`` or ``Ctrl+Y``.
3. In the list of objects you can see the labeled car. In the side panel you can perform basic operations under the object.
3. In the list of objects you can see the labeled car. In the side panel you can perform basic operations on the object — choose attributes, change the label or delete the box.
![](static/documentation/images/image012.jpg)
@ -141,19 +141,18 @@ Usage examples:
- Add/modify/delete objects for existing annotations.
- Edit tracks, merge many bounding boxes into one track.
1. Before start need to be sure that ``Interpolation`` is selected.
1. Before starting, make sure that ``Interpolation`` is selected.
![](static/documentation/images/image014.jpg)
2. Create a track for an object (look at the selected car as an example):
- Annotate a bounding box on first frame for the object.
- Annotate a bounding box on the first frame for the object.
- In ``Interpolation`` mode the bounding box will be interpolated on next frames automatically.
![](static/documentation/images/image015.jpg)
3. If the object starts to change its position you need to modify bounding boxes where it happens. Changing of bounding boxes on each frame isn't necessary. It is enough to update several key frames and frames between them will be interpolated automatically. See an example below:
- The car starts moving on frame #70. Let's mark the frame as a key frame.
- The car starts moving on frame #70. Let's mark the frame as a key frame. You can press ``K`` for that.
![](static/documentation/images/image016.jpg)
@ -165,7 +164,7 @@ Usage examples:
![](static/documentation/images/image018.jpg)
4. When the annotated object disappears or becomes too small, you need to finish the track. To do that you need to choose ``Outside Property``.
4. When the annotated object disappears or becomes too small, you need to finish the track. To do that you need to choose ``Outside Property``, shortcut ``O``.
![](static/documentation/images/image019.jpg)
@ -181,7 +180,7 @@ Usage examples:
![](static/documentation/images/gif002.gif)
- Press ``Merge Tracks`` button and click on any bounding box of first track and on any bounding box of second track.
- Press ``Merge Tracks`` button and click on any bounding box of the first track and on any bounding box of the second track.
![](static/documentation/images/image021.jpg)
@ -193,17 +192,17 @@ Usage examples:
![](static/documentation/images/gif003.gif)
### Attribute Annotation mode (basics)
### Attribute annotation mode (basics)
- In this Mode you can edit attributes with fast navigation between objects and frames using keyboard. Press ``Shift+Enter`` shortcut to enter AAMode. After that it is possible to change attributes using keyboard.
- In this mode you can edit attributes with fast navigation between objects and frames using keyboard. Press ``Shift+Enter`` shortcut to enter AAMode. After that it is possible to change attributes using keyboard.
![](static/documentation/images/image023.jpg)
- The active attribute will be red. In this case it is ``Gender``. Look at the bottom side panel to see all possible shortcuts to change the attribute. Press ``2`` key on your keyboard to assign ``female`` value for the attribute.
- The active attribute will be red. In this case it is ``gender``. Look at the bottom side panel to see all possible shortcuts to change the attribute. Press ``2`` key on your keyboard to assign ``female`` value for the attribute.
![](static/documentation/images/image024.jpg) ![](static/documentation/images/image025.jpg)
- Press ``Up Arrow``/``Down Arrow`` keys on your keyboard to go to next attribute. In this case after pressing ``Down Arrow`` you will be able to edit ``Age`` attribute.
- Press ``Up Arrow``/``Down Arrow`` on your keyboard to go to next/previous attribute. In this case after pressing ``Down Arrow`` you will be able to edit ``Age`` attribute.
![](static/documentation/images/image026.jpg) ![](static/documentation/images/image027.jpg)
@ -211,21 +210,21 @@ Usage examples:
### Downloading annotations
1. To download latest annotations save all changes first. Press ``Open Menu`` and then ``Save Work`` button. There is ``Ctrl+s`` shortcut to save annotations quickly.
1. To download latest annotations save all changes first. Press ``Open Menu`` and then ``Save Work`` button. There is ``Ctrl+S`` shortcut to save annotations quickly.
2. After that press ``Open Menu`` and then ``Dump Annotation`` button.
![](static/documentation/images/image028.jpg)
3. The annotation will be written into **.xml** file. To find the annotation file go to the directory where your browser saves downloaded files by default. For more information visit [.xml format page](/documentation/xml_format.html).
3. The annotation will be written into an **.xml** file. To find the annotation file go to the directory where your browser saves downloaded files by default. For more information visit the [.xml format page](./documentation/xml_format.html).
![](static/documentation/images/image029.jpg)
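If you want to sanity-check a downloaded dump programmatically, here is a minimal sketch (the file name is hypothetical; see the .xml format page above for the authoritative schema):
```python
# Minimal sketch: peek at the structure of a dumped annotation file without
# assuming specific tag names; consult the .xml format page for the schema.
import xml.etree.ElementTree as ET

root = ET.parse('task_annotation.xml').getroot()  # hypothetical file name
print(root.tag)
for child in list(root)[:5]:
    print(child.tag, dict(child.attrib))
```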
### Vocabulary
**Bounding box** is an area which defines boundaries of an object. To specify it you need to define top left and bottom right points.
**Bounding box** is an area which defines boundaries of an object. To specify it you need to define two opposite corners.
**Tight bounding box** is a bounding box where margin between the object inside and boundaries of the box is absent. By default the type of bounding box is used in most tasks but precision completely depends on an annotation task.
**Tight bounding box** is a bounding box where the margin between the object inside and the boundaries of the box is absent. This type of bounding box is used in most tasks by default, but precision completely depends on the annotation task.
| Bounding box |Tight bounding box|
| ------------ |:----------------:|
@ -240,7 +239,7 @@ Usage examples:
**Attribute** is a property of an annotated object (e.g. color, model, quality, etc.). There are two types of attributes:
- __Unique__: immutable and isn't changed from frame to frame (e.g age, gender, color, etc.)
- __Unique__: immutable and can't be changed from frame to frame (e.g. age, gender, color, etc.)
![](static/documentation/images/image073.jpg)
@ -249,7 +248,7 @@ Usage examples:
![](static/documentation/images/image072.jpg)
---
**Track** is a set of shapes on different frames which corresponds to one object. Tracks are created in ``Interpolation mode`` mode.
**Track** is a set of shapes on different frames which corresponds to one object. Tracks are created in ``Interpolation mode``.
![](static/documentation/images/gif004.gif)
@ -264,18 +263,38 @@ Usage examples:
The tool is composed of:
- ``Workspace`` — where images are shown;
- ``Bottom panel`` (under workspace) — for navigation, filtering annotation and accessing tools' menu;
- ``Side Panel`` — contain two lists: of Objects (on the frame) and Labels (of Objects on the frame);
- ``Bottom Side Panel`` — for choosing types of/creating/merging/grouping annotation;
- ``Side panel`` — contains two lists: of objects (on the frame) and labels (of objects on the frame);
- ``Bottom side panel`` — for choosing types of/creating/merging/grouping annotation;
![](static/documentation/images/image034.jpg)
There are also:
- ``Settings`` (F2) — contains different parameters which can be adjusted by the user needs
There is also:
- ``Settings`` (F2) — pop-up in the Bottom panel, contains different parameters which can be adjusted to the user's needs
- ``Context menu`` — available on right mouse button.
---
### Workspace — Context menu
The context menu opens on right mouse click.
By clicking inside a bounding box the following options are available:
- ``Copy Object URL`` — copies to the clipboard the address of the object on this frame of the task
- ``Change Color``
- ``Remove Shape``
- ``Switch Occluded``
- ``Switch Lock``
- ``Enable Dragging`` — (only for polygons) allows adjusting the polygon's position
- ``Context menu`` — click right mouse button inside of a shape or at a point (only in poly-shapes)
![](static/documentation/images/image089.jpg) ![](static/documentation/images/image090.jpg)
![](static/documentation/images/image070.jpg) ![](static/documentation/images/image071.jpg)
By clicking on the points of poly-shapes, the ``Remove`` option is available.
![](static/documentation/images/image092.jpg)
By clicking outside any shape you can copy either the ``Frame URL`` (a link to the current frame) or the ``Job URL`` (the link from the address bar)
![](static/documentation/images/image091.jpg)
---
### Settings
@ -284,7 +303,7 @@ Click ``F2`` to access settings menu.
![](static/documentation/images/image067.jpg)
There is ``Player Settings`` which adjusting ``Workspace`` and ``Other Settings``.
There is ``Player Settings`` which adjusts ``Workspace`` and ``Other Settings``.
In ``Player Settings`` you can:
- Control step of ``C`` and ``V`` shortcuts
@ -295,7 +314,7 @@ In ``Player Settings`` you can:
![](static/documentation/images/image068.jpg)
- Adjust ``Brightness``/``Contrast``/``Saturation`` of too expose or too dark images using ``F2`` — color settings (changes displaying and not the image itself).
- Adjust ``Brightness``/``Contrast``/``Saturation`` of overexposed or too dark images using ``F2`` — color settings (this changes the displaying only, not the image itself).
Shortcuts:
- ``Shift+B``/``Alt+B`` for brightness
- ``Shift+C``/``Alt+C`` for contrast
@ -304,12 +323,12 @@ Shortcuts:
![](static/documentation/images/image069.jpg)
``Other Settings`` contain:
- ``Show All Interpolation Tracks`` checkbox — shows hidden object on side panel for every interpolated object (turned off by default)
``Other Settings`` contains:
- ``Show All Interpolation Tracks`` checkbox — shows hidden object on the side panel for every interpolated object (turned off by default)
- ``AAM Zoom Margin`` slider — defines margins for shape in attribute annotation mode
- ``Enable AutoSaving`` checkbox — turned off by default
- ``AutoSaving Interval (Min)`` input box — 15 minutes by default
- ``Propagate Frames`` input box — allow to choose on how many frames selected object will be copied in by ``Ctrl+B`` (50 by default)
- ``AutoSaving Interval (min)`` input box — 15 minutes by default
- ``Propagate Frames`` input box — allows choosing how many frames the selected object will be copied onto by ``Ctrl+B`` (50 by default)
---
### Bottom Panel
@ -344,40 +363,85 @@ Go to specified frame. Press ``~`` to highlight element.
---
__Open Menu__ button
It is the main menu for the annotation tool. It can be used to download, upload and remove annotations. As well it shows statistics about the current annotation task.
It is the main menu for the annotation tool. It can be used to download, upload and remove annotations.
![](static/documentation/images/image051.jpg)
As well it shows statistics about the current task, such as:
- task name
- type of performance on the task: ``annotation``, ``validation`` or ``completed task``
- technical information about task
- number of created bounding boxes, sorted by labels (e.g. vehicle, person) and type of annotation (polygons, boxes, etc.)
---
__Filter__ input box
How to use filters is described in the Advanced guide (below).
The way to use filters is described in the advanced guide (below).
![](static/documentation/images/image059.jpg)
---
__History / Undo-Redo panel__
__History / Undo-redo panel__
Use shortcuts for undo/redo actions ``Ctrl+Z`` __/__ ``Ctrl+Shift+Z``/``Ctrl+Y``
Use ``Ctrl+Z`` for undo actions and ``Ctrl+Shift+Z`` or ``Ctrl+Y`` to redo them.
![](static/documentation/images/image061.jpg)
---
__Fill Opacity slider__
Change opacity of every bounding box in the annotation.
![](static/documentation/images/image086.jpg)
Opacity can be changed from 0% to 100%, and boxes can be colored randomly or white. If any white option is chosen, the ``Color By`` scheme won't work.
__Selected Fill Opacity slider__
Change opacity of bounding box under mouse pointer.
![](static/documentation/images/image087.jpg)
Opacity can be changed from 0% to 100%.
__Black Stroke checkbox__
Change bounding box border from white/colored to black.
![](static/documentation/images/image088.jpg)
__Color By options__
Change color scheme of annotation:
- ``Instance`` — every bounding box has random color
![](static/documentation/images/image095.jpg)
- ``Group`` — every group of boxes has its own random color, ungrouped boxes are white
![](static/documentation/images/image094.jpg)
- ``Label`` — every label (e.g. vehicle, pedestrian, roadmark) has its own random color
![](static/documentation/images/image093.jpg)
You can change any random color by pointing at the needed box on a frame or on the side panel and pressing ``Enter``.
---
### Side panel
#### Objects
In the Side Panel you can see the list of available objects on the current frame. An example how the list can look like below:
In the side panel you can see the list of available objects on the current frame. An example of how the list can look is below:
|Annotation mode|Interpolation mode|
|--|--|
|![](static/documentation/images/image044.jpg)|![](static/documentation/images/image045.jpg)|
#### Labels
You also can see all labels that used on this frame and highlight them by clicking needed label.
You can also see all the labels used on this frame and highlight them by clicking the needed label.
![](static/documentation/images/image062.jpg)
---
__Objects' card__
@ -398,17 +462,17 @@ A shape can be **Occluded**. Shortcut: ``q``. Such shapes have dashed boundaries
![](static/documentation/images/image049.jpg)
---
You can copy and paste this object on this or other frame. ``Ctrl+C``/``Ctrl+V`` shortcuts works under mouse point.
You can copy and paste this object on this or another frame. The ``Ctrl+C``/``Ctrl+V`` shortcuts work under the mouse pointer.
![](static/documentation/images/image052.jpg)
---
You can propagate this object on next X frames. ``Ctrl+B`` shortcut works under mouse point. ``F2`` for change on how many frames to propagate this object.
You can propagate this object to the next X frames. The ``Ctrl+B`` shortcut works under the mouse pointer. Press ``F2`` to change how many frames to propagate this object to.
![](static/documentation/images/image053.jpg)
---
You can change how this objects' annotation is displayed on this frame. It could be Hide, Shows Only Box, Shows Box and Title. ``H`` is for this object, ``T+H`` for all objects on this frame.
You can change the way this object's annotation is displayed on this frame: hidden, box only, or box and title. ``H`` is for this object, ``T+H`` for all objects on this frame.
![](static/documentation/images/image055.jpg)
@ -421,13 +485,13 @@ To change a type of a highlighted shape using keyboard you need to press ``Shift
### Bottom side panel
- ``Create Shape`` (``N``) — start/stop draw new shape mode
- ``Merge Shapes`` (``M``) — start/stop merge boxes mode
- ``Create Shape`` (``N``) — start/stop drawing new shape mode
- ``Merge Shapes`` (``M``) — start/stop merging boxes mode
- ``Group Shapes`` (``G``) — start/stop grouping boxes mode
- ``Label Type`` — (e.g. Face, Person, Vehicle)
- ``Working Mode`` — Annotation or Interpolation modes. You can't interpolate Polygons/Polylines/Points, but you can propagate them using ``Ctrl+B`` or merge into a track
- ``Shape type`` — (e.g. Box, Polygon, Polyline, Points)
- ``Poly Shape Size`` — (optional) hard number of dots for creating Polygon/Polyline shapes
- ``Label Type`` — (e.g. face, person, vehicle)
- ``Working Mode`` — Annotation or Interpolation modes. You can't interpolate polygons/polylines/points, but you can propagate them using ``Ctrl+B`` or merge into a track
- ``Shape Type`` — (e.g. box, polygon, polyline, points)
- ``Poly Shape Size`` — (optional) hard number of dots for creating polygon/polyline shapes
![](static/documentation/images/image082.jpg)
That is how it looks.
## Annotation mode (advanced)
Basic operations in the mode was described above.
Basic operations in the mode were described above.
__occluded__ attribute is used if an object is occluded by another object or it isn't fully visible on the frame. Use ``Q`` shortcut to set the property quickly.
@ -452,13 +516,13 @@ Example: both cars on the figure below should be labeled as __occluded__.
![](static/documentation/images/image054.jpg)
If a frame contains too many objects and it is difficult to annotate them due to many shapes are placed mostly in the same place when it makes sense to lock them. Shapes for locked objects are transparent and it is easy to annotate new objects. Also it will not be possible to change previously annotated objects by an accident. Shortcut: ``L``.
If a frame contains too many objects and it is difficult to annotate them due to many shapes which are placed mostly in the same place then it makes sense to lock them. Shapes for locked objects are transparent and it is easy to annotate new objects. Also it will not be possible to change previously annotated objects by an accident. Shortcut: ``L``.
![](static/documentation/images/image066.jpg)
## Interpolation mode (advanced)
Basic operations in the mode was described above.
Basic operations in the mode were described above.
Bounding boxes created in the mode have extra navigation buttons.
- These buttons help to jump to previous/next key frame.
@ -470,7 +534,7 @@ Bounding boxes created in the mode have extra navigation buttons.
![](static/documentation/images/image057.jpg)
## Attribute Annotation mode (advanced)
## Attribute annotation mode (advanced)
Basic operations in the mode were described above.
@ -478,9 +542,7 @@ It is possible to handle many objects on the same frame in the mode.
![](static/documentation/images/image058.jpg)
It is more convenient to annotate objects of the same type. For the purpose
it is possible to specify a corresponding filter. For example, the following
filter will hide all objects except pedestrians: ``pedestrian``.
It is more convenient to annotate objects of the same type. For the purpose it is possible to specify a corresponding filter. For example, the following filter will hide all objects except pedestrians: ``pedestrian``.
To navigate between objects (pedestrians in the case) use the following shortcuts:
- ``Tab`` — go to the next object
@ -493,45 +555,46 @@ By default in the mode objects are zoomed in to full screen. Check
It is used for semantic / instance segmentation.
Be sure ``Z-Order`` flag in ``Create task`` dialog is enabled if you want annotate polygons. Z-Order flag defines order of drawing. It is necessary to get right annotation mask without extra work (additional drawing of borders). Z-order can be changed by `+`/`-` which set maximum/minimum z-order respectively.
Be sure the ``Z-Order`` flag in the ``Create task`` dialog is enabled if you want to annotate polygons. The Z-Order flag defines the order of drawing. It is necessary to get a correct annotation mask without extra work (additional drawing of borders). Z-Order can be changed by `+`/`-`, which set the maximum/minimum z-order respectively.
![](static/documentation/images/image074.jpg)
Before start need to be sure that ``Polygon`` is selected.
Before starting, make sure that ``Polygon`` is selected.
![](static/documentation/images/image084.jpg)
Click ``N`` for entering drawing mode. Now you can start your polygon.
You can zoom in/out (on mouse wheel scroll) and move (on mouse wheel press
and mouse move) while drawing. Click ``N`` again for completing the shape.
Also you can set fixed number of points in the field "Poly Shape Size", then
drawing will be stopped automatically. You can drag object after it was drawn
and fix a position of an individual points after finishing the object. You
can add/delete points after finishing.
Click ``N`` to enter drawing mode. There are two ways to draw a polygon — you either create points by clicking or by dragging the mouse on the screen while holding ``Shift``.
|Clicking points|Holding Shift+Dragging|
|--|--|
|![](static/documentation/images/gif005.gif)|![](static/documentation/images/gif006.gif)|
When ``Shift`` isn't pressed, you can zoom in/out (on mouse wheel scroll) and move (on mouse wheel press and mouse move), and you can delete the previous point by clicking the right mouse button. Click ``N`` again to complete the shape. You can move points or delete them by double-clicking. A double click with ``Shift`` pressed will open a polygon editor. In it you can create new points (by clicking or dragging) or delete part of a polygon by closing the red line on another point. Press ``Esc`` to cancel editing.
![](static/documentation/images/gif007.gif)
Also you can set a fixed number of points in the "Poly Shape Size" field; then drawing will be stopped automatically.
To enable dragging, right-click inside the polygon and choose ``Enable Dragging``.
![](static/documentation/images/gif005.gif)
Below you can see results with opacity and black stroke:
![](static/documentation/images/image064.jpg)
Also if you need annotate small objects, increase ``Image Quality`` to ``95`` in ``Create task`` dialog for annotators convenience.
Also if you need to annotate small objects, increase ``Image Quality`` to ``95`` in the ``Create task`` dialog for the annotators' convenience.
## Annotation with polylines
It is used for road markup annotation etc.
Before start need to be sure that ``Polyline`` is selected.
Before starting you have to be sure that ``Polyline`` is selected.
![](static/documentation/images/image085.jpg)
Click ``N`` for entering drawing mode. Now you can start your polyline.
You can zoom in/out (on mouse wheel scroll) and move (on mouse wheel press and
mouse move) while drawing. Click ``N`` again for completing the shape. Also
you can set fixed number of points in the field "Poly Shape Size", then drawing
will be stopped automatically. You can drag object after it was drawn and fix
a position of an individual points after finishing the object. You can
add/delete points after finishing.
Click ``N`` to enter drawing mode. There are two ways to draw a polyline — you either create points by clicking or by dragging the mouse on the screen while holding ``Shift``.
When ``Shift`` isn't pressed, you can zoom in/out (on mouse wheel scroll) and move (on mouse wheel press and mouse move), and you can delete the previous point by clicking the right mouse button. Click ``N`` again to complete the shape. You can delete points by double-clicking them. A double click with ``Shift`` pressed will open a polyline editor. In it you can create new points (by clicking or dragging) or delete part of a polyline by closing the red line on another point. Press ``Esc`` to cancel editing. Also you can set a fixed number of points in the "Poly Shape Size" field; then drawing will be stopped automatically.
You can adjust the polyline after it is drawn.
![](static/documentation/images/image039.jpg)
@ -539,18 +602,12 @@ you can set fixed number of points in the field "Poly Shape Size", then drawing
It is used for face landmarks annotation etc.
Before start need to be sure that ``Points`` is selected.
Before starting you have to be sure that ``Points`` is selected.
![](static/documentation/images/image042.jpg)
Click ``N`` for entering drawing mode. Now you can start marking a needed area.
Click ``N`` again for finishing marking an area. Also you can set fixed number
of points in the field "Poly Shape Size", then drawing will be stopped
automatically. Points are automatically grouped — between individual start
and finish all points will be considered linked. You can zoom in/out (on mouse
wheel scroll) and move (on mouse wheel press and mouse move) while drawing.
You can drag object after it was drawn and fix a position of an individual
points after finishing the object. You can add/delete points after finishing.
Click ``N`` again to finish marking the area. You can delete points by double-clicking them. A double click with ``Shift`` pressed will open a points shape editor. In it you can create new points in the existing shape. Also you can set a fixed number of points in the "Poly Shape Size" field; then drawing will be stopped automatically. Points are automatically grouped — between an individual start and finish all points will be considered linked. You can zoom in/out (on mouse wheel scroll) and move (on mouse wheel press and mouse move) while drawing. You can drag the object after it is drawn and fix the position of individual points after finishing the object. You can add/delete points after finishing.
![](static/documentation/images/image063.jpg)
@ -563,11 +620,11 @@ You may use ``Group Shapes`` button or shortcuts:
- ``Alt+G`` — close group mode
- ``Shift+G`` — reset group for selected shapes
You may select shapes by clicking or by area selection.
Grouped shapes will have a ``group_id`` field in the dumped annotation.
Also you may switch the color distribution from by instance (default) to by group. To do this, enable the ``Color By Group`` checkbox.
Shapes which don't have a ``group_id`` will be highlighted in white.
@ -579,9 +636,9 @@ Shapes which haven't ``group_id`` will be highlighted with white color.
![](static/documentation/images/image076.jpg)
There are several reasons for using the feature:
1. When a filter is used, objects which don't correspond to the filter will be hidden.
2. Fast navigation between frames which have an object of interest. Use the ``Left Arrow``/``Right Arrow`` keys for this purpose. If the filter is empty, the arrows will go to the previous/next frames which contain any objects.
To use the functionality, it is enough to specify a value inside the ``Filter`` text box and defocus the text box (for example, by clicking on the image). After that the filter will be applied.
@ -589,7 +646,7 @@ To use the functionality it is enough to specify a value inside ``Filter`` text
---
In a trivial case a correct filter should correspond to the template: ``label[prop operator "value"]``
``label`` is a type of an object (e.g. _person, car, face_, etc.). If the type isn't important, you can use ``*``.
``prop`` is a property which should be filtered. The following items are available:
@ -627,10 +684,22 @@ Example | Description
``face[attr/glass="sunglass" or attr/glass="no"]`` | faces with sunglasses or without glasses at all.
```person[attr/race="asian"] | car[attr/model="bmw" or attr/model="mazda"]``` | asian persons or bmw or mazda cars.
## Analytics
If your CVAT instance is built with [analytics](/components/analytics) support, you can press ``F3`` in the dashboard to open a new tab with analytics and logs.
It allows you to see how much working time every user spent on each task and how much work they did, over any time range.
![](static/documentation/images/image097.jpg)
It also has an activity graph, which can be adjusted by the number of users shown and by the timeframe.
![](static/documentation/images/image096.jpg)
## Shortcuts
Many UI elements have shortcut hints. Hover your pointer over a required element to see them.
![](static/documentation/images/image075.jpg)
@ -643,7 +712,7 @@ Many UI elements have shortcut hints. Put your pointer to an interesting element
``L+T`` | lock/unlock all shapes on the current frame
``Q`` or ``Num/`` | set occluded property for an active shape
``N`` | start/stop draw mode
``Esc`` | close draw mode without creating a shape
``Ctrl+<number>`` | change type of an active shape
``Shift+<number>`` | change type of new shape by default
``Enter`` | change color of active shape
@ -666,22 +735,24 @@ Many UI elements have shortcut hints. Put your pointer to an interesting element
``Shift+S``/``Alt+S`` | increase/decrease saturation on an image
``Ctrl+S`` | save job
``Ctrl+B`` | propagate active shape
``+``/``-`` | change relative order of highlighted box (if Z-Order is enabled)
| | __Interpolation__ |
``M`` | enter/apply merge mode
``Esc`` | close merge mode without applying the merge
``R`` | go to the next key frame of an active shape
``E`` | go to the previous key frame of an active shape
``O`` | change attribute of an active shape to "Outside the frame"
``K`` | mark current frame as key frame on an active shape
| | __Attribute annotation mode__ |
``Shift+Enter`` | enter/leave Attribute Annotation mode
``Up Arrow`` | go to the next attribute (up)
``Down Arrow`` | go to the next attribute (down)
``Tab`` | go to the next annotated object
``Shift+Tab`` | go to the previous annotated object
``<number>`` | assign a corresponding value to the current attribute
| | __Grouping__ |
``G`` | switch group mode
``Esc`` | close group mode
``Shift+G`` | reset group for selected shapes
| | __Filter__ |
``Left Arrow`` | go to the previous frame which corresponds to the specified filter value

@ -4,6 +4,74 @@
# SPDX-License-Identifier: MIT
from django.contrib import admin
from .models import Task, Segment, Job, Label, AttributeSpec
# Register your models here.
class JobInline(admin.TabularInline):
model = Job
can_delete = False
# Don't show extra lines to add an object
def has_add_permission(self, request, object=None):
return False
class SegmentInline(admin.TabularInline):
model = Segment
show_change_link = True
readonly_fields = ('start_frame', 'stop_frame')
can_delete = False
# Don't show extra lines to add an object
def has_add_permission(self, request, object=None):
return False
class AttributeSpecInline(admin.TabularInline):
model = AttributeSpec
extra = 0
max_num = None
class LabelInline(admin.TabularInline):
model = Label
show_change_link = True
extra = 0
max_num = None
class LabelAdmin(admin.ModelAdmin):
# Don't show on admin index page
def has_module_permission(self, request):
return False
inlines = [
AttributeSpecInline
]
class SegmentAdmin(admin.ModelAdmin):
# Don't show on admin index page
def has_module_permission(self, request):
return False
inlines = [
JobInline
]
class TaskAdmin(admin.ModelAdmin):
date_hierarchy = 'updated_date'
readonly_fields = ('size', 'path', 'created_date', 'updated_date',
'overlap', 'flipped')
list_display = ('name', 'mode', 'owner', 'assignee', 'created_date', 'updated_date')
search_fields = ('name', 'mode', 'owner__username', 'owner__first_name',
'owner__last_name', 'owner__email', 'assignee__username', 'assignee__first_name',
'assignee__last_name')
inlines = [
SegmentInline,
LabelInline
]
# Don't allow to add a task because it isn't trivial operation
def has_add_permission(self, request):
return False
admin.site.register(Task, TaskAdmin)
admin.site.register(Segment, SegmentAdmin)
admin.site.register(Label, LabelAdmin)

File diff suppressed because it is too large

@ -0,0 +1,100 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
import logging
from cvat.settings.base import LOGGING
from .models import Job, Task
def _get_task(tid):
try:
return Task.objects.get(pk=tid)
except Exception:
raise Exception('{} key must be a task identifier'.format(tid))
def _get_job(jid):
try:
return Job.objects.select_related("segment__task").get(id=jid)
except Exception:
raise Exception('{} key must be a job identifier'.format(jid))
class TaskLoggerStorage:
def __init__(self):
self._storage = dict()
def __getitem__(self, tid):
if tid not in self._storage:
self._storage[tid] = self._create_task_logger(tid)
return self._storage[tid]
def _create_task_logger(self, tid):
task = _get_task(tid)
logger = logging.getLogger('cvat.server.task_{}'.format(tid))
server_file = logging.FileHandler(filename=task.get_log_path())
formatter = logging.Formatter(LOGGING['formatters']['standard']['format'])
server_file.setFormatter(formatter)
logger.addHandler(server_file)
return logger
class JobLoggerStorage:
def __init__(self):
self._storage = dict()
def __getitem__(self, jid):
if jid not in self._storage:
self._storage[jid] = self._get_task_logger(jid)
return self._storage[jid]
def _get_task_logger(self, jid):
job = _get_job(jid)
return slogger.task[job.segment.task.id]
class TaskClientLoggerStorage:
def __init__(self):
self._storage = dict()
def __getitem__(self, tid):
if tid not in self._storage:
self._storage[tid] = self._create_client_logger(tid)
return self._storage[tid]
def _create_client_logger(self, tid):
task = _get_task(tid)
logger = logging.getLogger('cvat.client.task_{}'.format(tid))
client_file = logging.FileHandler(filename=task.get_client_log_path())
logger.addHandler(client_file)
return logger
class JobClientLoggerStorage:
def __init__(self):
self._storage = dict()
def __getitem__(self, jid):
if jid not in self._storage:
self._storage[jid] = self._get_task_logger(jid)
return self._storage[jid]
def _get_task_logger(self, jid):
job = _get_job(jid)
return clogger.task[job.segment.task.id]
class dotdict(dict):
"""dot.notation access to dictionary attributes"""
__getattr__ = dict.get
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
clogger = dotdict({
'task': TaskClientLoggerStorage(),
'job': JobClientLoggerStorage()
})
slogger = dotdict({
'task': TaskLoggerStorage(),
'job': JobLoggerStorage(),
'glob': logging.getLogger('cvat.server'),
})
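# Editor's note - a minimal usage sketch of the storages above (the ids are
# hypothetical; each storage creates and caches its logger on first access):
#
#   slogger.glob.info("server-wide message")           # plain 'cvat.server' logger
#   slogger.task[42].info("task message")              # appended to the task's log file
#   slogger.job[7].info("job messages reuse the parent task's logger")
#   clogger.task[42].info("client-side task message")  # appended to the client log file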

@ -1,75 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
import inspect
import logging
from . import models
from cvat.settings.base import LOGGING
class TaskLoggerStorage:
def __init__(self):
self._storage = dict()
self._formatter = logging.getLogger('task')
def __getitem__(self, tid):
if tid not in self._storage:
self._storage[tid] = self._create_task_logger(tid)
return self._storage[tid]
def _create_task_logger(self, tid):
task = self._get_task(tid)
if task is not None:
configuration = LOGGING.copy()
handler_configuration = configuration['handlers']['file']
handler_configuration['filename'] = task.get_log_path()
configuration['handlers'] = {
'file_{}'.format(tid): handler_configuration
}
configuration['loggers'] = {
'task_{}'.format(tid): {
'handlers': ['file_{}'.format(tid)],
'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
}
}
logging.config.dictConfig(configuration)
logger = logging.getLogger('task_{}'.format(tid))
return logger
else:
raise Exception('Key must be task indentificator')
def _get_task(self, tid):
try:
return models.Task.objects.get(pk=tid)
except Exception:
return None
class JobLoggerStorage:
def __init__(self):
self._storage = dict()
def __getitem__(self, jid):
if jid not in self._storage:
self._storage[jid] = self._get_task_logger(jid)
return self._storage[jid]
def _get_task_logger(self, jid):
job = self._get_job(jid)
if job is not None:
return task_logger[job.segment.task.id]
else:
raise Exception('Key must be job identificator')
def _get_job(self, jid):
try:
return models.Job.objects.select_related("segment__task").get(id=jid)
except Exception:
return None
task_logger = TaskLoggerStorage()
job_logger = JobLoggerStorage()

@ -0,0 +1,38 @@
# Generated by Django 2.0.9 on 2018-10-11 12:17
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('engine', '0009_auto_20180917_1424'),
]
operations = [
migrations.AddField(
model_name='labeledbox',
name='client_id',
field=models.BigIntegerField(default=-1),
),
migrations.AddField(
model_name='labeledpoints',
name='client_id',
field=models.BigIntegerField(default=-1),
),
migrations.AddField(
model_name='labeledpolygon',
name='client_id',
field=models.BigIntegerField(default=-1),
),
migrations.AddField(
model_name='labeledpolyline',
name='client_id',
field=models.BigIntegerField(default=-1),
),
migrations.AddField(
model_name='objectpath',
name='client_id',
field=models.BigIntegerField(default=-1),
),
]

@ -0,0 +1,74 @@
# Generated by Django 2.0.9 on 2018-10-24 10:50
import cvat.apps.engine.models
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('engine', '0010_auto_20181011_1517'),
]
operations = [
migrations.AddField(
model_name='task',
name='source',
field=cvat.apps.engine.models.SafeCharField(default='unknown', max_length=256),
),
migrations.AlterField(
model_name='label',
name='name',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='labeledboxattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='labeledpointsattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='labeledpolygonattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='labeledpolylineattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='objectpathattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='task',
name='name',
field=cvat.apps.engine.models.SafeCharField(max_length=256),
),
migrations.AlterField(
model_name='trackedboxattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='trackedpointsattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='trackedpolygonattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
migrations.AlterField(
model_name='trackedpolylineattributeval',
name='value',
field=cvat.apps.engine.models.SafeCharField(max_length=64),
),
]

@ -0,0 +1,24 @@
# Generated by Django 2.0.9 on 2018-10-25 13:18
import cvat.apps.engine.models
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('engine', '0011_add_task_source_and_safecharfield'),
]
operations = [
migrations.AddField(
model_name='job',
name='status',
field=models.CharField(default=cvat.apps.engine.models.StatusChoice('annotation'), max_length=32),
),
migrations.AlterField(
model_name='task',
name='status',
field=models.CharField(default=cvat.apps.engine.models.StatusChoice('annotation'), max_length=32),
),
]

@ -0,0 +1,118 @@
# Generated by Django 2.0.9 on 2018-11-07 12:25
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('engine', '0012_auto_20181025_1618'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.AlterModelOptions(
name='attributespec',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='job',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='label',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='labeledboxattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='labeledpointsattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='labeledpolygonattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='labeledpolylineattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='objectpathattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='segment',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='task',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedbox',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedboxattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedpoints',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedpointsattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedpolygon',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedpolygonattributeval',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedpolyline',
options={'default_permissions': ()},
),
migrations.AlterModelOptions(
name='trackedpolylineattributeval',
options={'default_permissions': ()},
),
migrations.RenameField(
model_name='job',
old_name='annotator',
new_name='assignee',
),
migrations.AddField(
model_name='task',
name='assignee',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='assignees', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='task',
name='owner',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='owners', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='job',
name='assignee',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='task',
name='bug_tracker',
field=models.CharField(blank=True, default='', max_length=2000),
),
migrations.AlterField(
model_name='task',
name='owner',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='owners', to=settings.AUTH_USER_MODEL),
),
]

@ -0,0 +1,18 @@
# Generated by Django 2.1.3 on 2018-11-23 10:07
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('engine', '0013_auth_no_default_permissions'),
]
operations = [
migrations.AddField(
model_name='job',
name='max_shape_id',
field=models.BigIntegerField(default=-1),
),
]

@ -1,4 +1,3 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
@ -8,34 +7,54 @@ from django.conf import settings
from django.contrib.auth.models import User
from io import StringIO
from enum import Enum
import shlex
import csv
from io import StringIO
import re
import os
class StatusChoice(Enum):
ANNOTATION = 'annotation'
VALIDATION = 'validation'
COMPLETED = 'completed'
@classmethod
def choices(self):
return tuple((x.name, x.value) for x in self)
def __str__(self):
return self.value
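# Editor's note (illustrative): __str__ returns the plain value, so the enum can
# be used directly as a CharField default below and ends up in the database as
# the string 'annotation' rather than the enum's repr.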
class SafeCharField(models.CharField):
def get_prep_value(self, value):
value = super().get_prep_value(value)
if value:
return value[:self.max_length]
return value
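# Editor's note (illustrative): SafeCharField silently truncates values that
# exceed max_length instead of raising an error, e.g. a 300-character task name
# would be cut to 256 characters before it reaches the database.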
class Task(models.Model):
name = models.CharField(max_length=256)
name = SafeCharField(max_length=256)
size = models.PositiveIntegerField()
path = models.CharField(max_length=256)
mode = models.CharField(max_length=32)
owner = models.ForeignKey(User, null=True, on_delete=models.SET_NULL)
bug_tracker = models.CharField(max_length=2000, default="")
owner = models.ForeignKey(User, null=True, blank=True,
on_delete=models.SET_NULL, related_name="owners")
assignee = models.ForeignKey(User, null=True, blank=True,
on_delete=models.SET_NULL, related_name="assignees")
bug_tracker = models.CharField(max_length=2000, blank=True, default="")
created_date = models.DateTimeField(auto_now_add=True)
updated_date = models.DateTimeField(auto_now_add=True)
status = models.CharField(max_length=32, default="annotate")
overlap = models.PositiveIntegerField(default=0)
z_order = models.BooleanField(default=False)
flipped = models.BooleanField(default=False)
source = SafeCharField(max_length=256, default="unknown")
status = models.CharField(max_length=32, default=StatusChoice.ANNOTATION)
# Extend default permission model
class Meta:
permissions = (
("view_task", "Can see available tasks"),
("view_annotation", "Can see annotation for the task"),
("change_annotation", "Can modify annotation for the task"),
)
default_permissions = ()
def get_upload_dirname(self):
return os.path.join(self.path, ".upload")
@ -71,18 +90,29 @@ class Segment(models.Model):
start_frame = models.IntegerField()
stop_frame = models.IntegerField()
class Meta:
default_permissions = ()
class Job(models.Model):
segment = models.ForeignKey(Segment, on_delete=models.CASCADE)
annotator = models.ForeignKey(User, null=True, on_delete=models.SET_NULL)
# TODO: add sub-issue number for the task
assignee = models.ForeignKey(User, null=True, blank=True, on_delete=models.SET_NULL)
status = models.CharField(max_length=32, default=StatusChoice.ANNOTATION)
max_shape_id = models.BigIntegerField(default=-1)
class Meta:
default_permissions = ()
class Label(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE)
name = models.CharField(max_length=64)
name = SafeCharField(max_length=64)
def __str__(self):
return self.name
class Meta:
default_permissions = ()
def parse_attribute(text):
match = re.match(r'^([~@])(\w+)=(\w+):(.+)?$', text)
prefix = match.group(1)
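    # Editor's note (illustrative): the regex above accepts strings of the shape
    # '<~ or @><word>=<word>:<optional values>', e.g. the hypothetical
    # '@select=model:bmw,mazda'; how each group is interpreted is defined by the
    # rest of this function, which the diff truncates here.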
@ -99,6 +129,9 @@ class AttributeSpec(models.Model):
label = models.ForeignKey(Label, on_delete=models.CASCADE)
text = models.CharField(max_length=1024)
class Meta:
default_permissions = ()
def get_attribute(self):
return parse_attribute(self.text)
@ -122,31 +155,38 @@ class AttributeSpec(models.Model):
attr = self.get_attribute()
return attr['values']
def __str__(self):
return self.get_attribute()['name']
class AttributeVal(models.Model):
# TODO: add a validator here to be sure that it corresponds to self.label
id = models.BigAutoField(primary_key=True)
spec = models.ForeignKey(AttributeSpec, on_delete=models.CASCADE)
value = models.CharField(max_length=64)
value = SafeCharField(max_length=64)
class Meta:
abstract = True
default_permissions = ()
class Annotation(models.Model):
job = models.ForeignKey(Job, on_delete=models.CASCADE)
label = models.ForeignKey(Label, on_delete=models.CASCADE)
frame = models.PositiveIntegerField()
group_id = models.PositiveIntegerField(default=0)
client_id = models.BigIntegerField(default=-1)
class Meta:
abstract = True
class Shape(models.Model):
occluded = models.BooleanField(default=False)
z_order = models.IntegerField(default=0)
class Meta:
abstract = True
default_permissions = ()
class BoundingBox(Shape):
id = models.BigAutoField(primary_key=True)
@ -154,14 +194,18 @@ class BoundingBox(Shape):
ytl = models.FloatField()
xbr = models.FloatField()
ybr = models.FloatField()
class Meta:
abstract = True
default_permissions = ()
class PolyShape(Shape):
id = models.BigAutoField(primary_key=True)
points = models.TextField()
class Meta:
abstract = True
default_permissions = ()
class LabeledBox(Annotation, BoundingBox):
pass
@ -200,6 +244,7 @@ class TrackedObject(models.Model):
outside = models.BooleanField(default=False)
class Meta:
abstract = True
default_permissions = ()
class TrackedBox(TrackedObject, BoundingBox):
pass

@ -1,14 +1,14 @@
From 5eeb1092c64865c555671ed585da18f974c9c10c Mon Sep 17 00:00:00 2001
From d44089dfc96b56d427d5631442d6587f876f43b6 Mon Sep 17 00:00:00 2001
From: Boris Sekachev <boris.sekachev@intel.com>
Date: Tue, 18 Sep 2018 15:58:20 +0300
Date: Mon, 19 Nov 2018 12:09:48 +0300
Subject: [PATCH] tmp
---
.../engine/static/engine/js/3rdparty/svg.draggable.js | 1 +
cvat/apps/engine/static/engine/js/3rdparty/svg.draw.js | 17 +++++++++++++++--
.../apps/engine/static/engine/js/3rdparty/svg.resize.js | 5 +++--
.../apps/engine/static/engine/js/3rdparty/svg.resize.js | 6 ++++--
.../apps/engine/static/engine/js/3rdparty/svg.select.js | 5 ++++-
4 files changed, 23 insertions(+), 5 deletions(-)
4 files changed, 24 insertions(+), 5 deletions(-)
diff --git a/cvat/apps/engine/static/engine/js/3rdparty/svg.draggable.js b/cvat/apps/engine/static/engine/js/3rdparty/svg.draggable.js
index d88abf5..aba474c 100644
@ -78,7 +78,7 @@ index 68dbf2a..20a6917 100644
}
diff --git a/cvat/apps/engine/static/engine/js/3rdparty/svg.resize.js b/cvat/apps/engine/static/engine/js/3rdparty/svg.resize.js
index 0c3b63d..fb5dc26 100644
index 0c3b63d..dceede5 100644
--- a/cvat/apps/engine/static/engine/js/3rdparty/svg.resize.js
+++ b/cvat/apps/engine/static/engine/js/3rdparty/svg.resize.js
@@ -34,8 +34,8 @@
@ -92,7 +92,15 @@ index 0c3b63d..fb5dc26 100644
};
};
@@ -343,6 +343,7 @@
@@ -98,6 +98,7 @@
};
ResizeHandler.prototype.resize = function (event) {
+ if (event.detail.event.button) return; // only left mouse button
var _this = this;
@@ -343,6 +344,7 @@
}
return;
}

@ -1,280 +0,0 @@
/*
* JavaScript MD5
* https://github.com/blueimp/JavaScript-MD5
*
* Copyright 2011, Sebastian Tschan
* https://blueimp.net
*
* Licensed under the MIT license:
* https://opensource.org/licenses/MIT
*
* Based on
* A JavaScript implementation of the RSA Data Security, Inc. MD5 Message
* Digest Algorithm, as defined in RFC 1321.
* Version 2.2 Copyright (C) Paul Johnston 1999 - 2009
* Other contributors: Greg Holt, Andrew Kepert, Ydnar, Lostinet
* Distributed under the BSD License
* See http://pajhome.org.uk/crypt/md5 for more info.
*/
/* global define */
;(function ($) {
'use strict'
/*
* Add integers, wrapping at 2^32. This uses 16-bit operations internally
* to work around bugs in some JS interpreters.
*/
function safeAdd (x, y) {
var lsw = (x & 0xffff) + (y & 0xffff)
var msw = (x >> 16) + (y >> 16) + (lsw >> 16)
return (msw << 16) | (lsw & 0xffff)
}
/*
* Bitwise rotate a 32-bit number to the left.
*/
function bitRotateLeft (num, cnt) {
return (num << cnt) | (num >>> (32 - cnt))
}
/*
* These functions implement the four basic operations the algorithm uses.
*/
function md5cmn (q, a, b, x, s, t) {
return safeAdd(bitRotateLeft(safeAdd(safeAdd(a, q), safeAdd(x, t)), s), b)
}
function md5ff (a, b, c, d, x, s, t) {
return md5cmn((b & c) | (~b & d), a, b, x, s, t)
}
function md5gg (a, b, c, d, x, s, t) {
return md5cmn((b & d) | (c & ~d), a, b, x, s, t)
}
function md5hh (a, b, c, d, x, s, t) {
return md5cmn(b ^ c ^ d, a, b, x, s, t)
}
function md5ii (a, b, c, d, x, s, t) {
return md5cmn(c ^ (b | ~d), a, b, x, s, t)
}
/*
* Calculate the MD5 of an array of little-endian words, and a bit length.
*/
function binlMD5 (x, len) {
/* append padding */
x[len >> 5] |= 0x80 << (len % 32)
x[((len + 64) >>> 9 << 4) + 14] = len
var i
var olda
var oldb
var oldc
var oldd
var a = 1732584193
var b = -271733879
var c = -1732584194
var d = 271733878
for (i = 0; i < x.length; i += 16) {
olda = a
oldb = b
oldc = c
oldd = d
a = md5ff(a, b, c, d, x[i], 7, -680876936)
d = md5ff(d, a, b, c, x[i + 1], 12, -389564586)
c = md5ff(c, d, a, b, x[i + 2], 17, 606105819)
b = md5ff(b, c, d, a, x[i + 3], 22, -1044525330)
a = md5ff(a, b, c, d, x[i + 4], 7, -176418897)
d = md5ff(d, a, b, c, x[i + 5], 12, 1200080426)
c = md5ff(c, d, a, b, x[i + 6], 17, -1473231341)
b = md5ff(b, c, d, a, x[i + 7], 22, -45705983)
a = md5ff(a, b, c, d, x[i + 8], 7, 1770035416)
d = md5ff(d, a, b, c, x[i + 9], 12, -1958414417)
c = md5ff(c, d, a, b, x[i + 10], 17, -42063)
b = md5ff(b, c, d, a, x[i + 11], 22, -1990404162)
a = md5ff(a, b, c, d, x[i + 12], 7, 1804603682)
d = md5ff(d, a, b, c, x[i + 13], 12, -40341101)
c = md5ff(c, d, a, b, x[i + 14], 17, -1502002290)
b = md5ff(b, c, d, a, x[i + 15], 22, 1236535329)
a = md5gg(a, b, c, d, x[i + 1], 5, -165796510)
d = md5gg(d, a, b, c, x[i + 6], 9, -1069501632)
c = md5gg(c, d, a, b, x[i + 11], 14, 643717713)
b = md5gg(b, c, d, a, x[i], 20, -373897302)
a = md5gg(a, b, c, d, x[i + 5], 5, -701558691)
d = md5gg(d, a, b, c, x[i + 10], 9, 38016083)
c = md5gg(c, d, a, b, x[i + 15], 14, -660478335)
b = md5gg(b, c, d, a, x[i + 4], 20, -405537848)
a = md5gg(a, b, c, d, x[i + 9], 5, 568446438)
d = md5gg(d, a, b, c, x[i + 14], 9, -1019803690)
c = md5gg(c, d, a, b, x[i + 3], 14, -187363961)
b = md5gg(b, c, d, a, x[i + 8], 20, 1163531501)
a = md5gg(a, b, c, d, x[i + 13], 5, -1444681467)
d = md5gg(d, a, b, c, x[i + 2], 9, -51403784)
c = md5gg(c, d, a, b, x[i + 7], 14, 1735328473)
b = md5gg(b, c, d, a, x[i + 12], 20, -1926607734)
a = md5hh(a, b, c, d, x[i + 5], 4, -378558)
d = md5hh(d, a, b, c, x[i + 8], 11, -2022574463)
c = md5hh(c, d, a, b, x[i + 11], 16, 1839030562)
b = md5hh(b, c, d, a, x[i + 14], 23, -35309556)
a = md5hh(a, b, c, d, x[i + 1], 4, -1530992060)
d = md5hh(d, a, b, c, x[i + 4], 11, 1272893353)
c = md5hh(c, d, a, b, x[i + 7], 16, -155497632)
b = md5hh(b, c, d, a, x[i + 10], 23, -1094730640)
a = md5hh(a, b, c, d, x[i + 13], 4, 681279174)
d = md5hh(d, a, b, c, x[i], 11, -358537222)
c = md5hh(c, d, a, b, x[i + 3], 16, -722521979)
b = md5hh(b, c, d, a, x[i + 6], 23, 76029189)
a = md5hh(a, b, c, d, x[i + 9], 4, -640364487)
d = md5hh(d, a, b, c, x[i + 12], 11, -421815835)
c = md5hh(c, d, a, b, x[i + 15], 16, 530742520)
b = md5hh(b, c, d, a, x[i + 2], 23, -995338651)
a = md5ii(a, b, c, d, x[i], 6, -198630844)
d = md5ii(d, a, b, c, x[i + 7], 10, 1126891415)
c = md5ii(c, d, a, b, x[i + 14], 15, -1416354905)
b = md5ii(b, c, d, a, x[i + 5], 21, -57434055)
a = md5ii(a, b, c, d, x[i + 12], 6, 1700485571)
d = md5ii(d, a, b, c, x[i + 3], 10, -1894986606)
c = md5ii(c, d, a, b, x[i + 10], 15, -1051523)
b = md5ii(b, c, d, a, x[i + 1], 21, -2054922799)
a = md5ii(a, b, c, d, x[i + 8], 6, 1873313359)
d = md5ii(d, a, b, c, x[i + 15], 10, -30611744)
c = md5ii(c, d, a, b, x[i + 6], 15, -1560198380)
b = md5ii(b, c, d, a, x[i + 13], 21, 1309151649)
a = md5ii(a, b, c, d, x[i + 4], 6, -145523070)
d = md5ii(d, a, b, c, x[i + 11], 10, -1120210379)
c = md5ii(c, d, a, b, x[i + 2], 15, 718787259)
b = md5ii(b, c, d, a, x[i + 9], 21, -343485551)
a = safeAdd(a, olda)
b = safeAdd(b, oldb)
c = safeAdd(c, oldc)
d = safeAdd(d, oldd)
}
return [a, b, c, d]
}
/*
* Convert an array of little-endian words to a string
*/
function binl2rstr (input) {
var i
var output = ''
var length32 = input.length * 32
for (i = 0; i < length32; i += 8) {
output += String.fromCharCode((input[i >> 5] >>> (i % 32)) & 0xff)
}
return output
}
/*
* Convert a raw string to an array of little-endian words
* Characters >255 have their high-byte silently ignored.
*/
function rstr2binl (input) {
var i
var output = []
output[(input.length >> 2) - 1] = undefined
for (i = 0; i < output.length; i += 1) {
output[i] = 0
}
var length8 = input.length * 8
for (i = 0; i < length8; i += 8) {
output[i >> 5] |= (input.charCodeAt(i / 8) & 0xff) << (i % 32)
}
return output
}
/*
* Calculate the MD5 of a raw string
*/
function rstrMD5 (s) {
return binl2rstr(binlMD5(rstr2binl(s), s.length * 8))
}
/*
* Calculate the HMAC-MD5, of a key and some data (raw strings)
*/
function rstrHMACMD5 (key, data) {
var i
var bkey = rstr2binl(key)
var ipad = []
var opad = []
var hash
ipad[15] = opad[15] = undefined
if (bkey.length > 16) {
bkey = binlMD5(bkey, key.length * 8)
}
for (i = 0; i < 16; i += 1) {
ipad[i] = bkey[i] ^ 0x36363636
opad[i] = bkey[i] ^ 0x5c5c5c5c
}
hash = binlMD5(ipad.concat(rstr2binl(data)), 512 + data.length * 8)
return binl2rstr(binlMD5(opad.concat(hash), 512 + 128))
}
/*
* Convert a raw string to a hex string
*/
function rstr2hex (input) {
var hexTab = '0123456789abcdef'
var output = ''
var x
var i
for (i = 0; i < input.length; i += 1) {
x = input.charCodeAt(i)
output += hexTab.charAt((x >>> 4) & 0x0f) + hexTab.charAt(x & 0x0f)
}
return output
}
/*
* Encode a string as utf-8
*/
function str2rstrUTF8 (input) {
return unescape(encodeURIComponent(input))
}
/*
* Take string arguments and return either raw or hex encoded strings
*/
function rawMD5 (s) {
return rstrMD5(str2rstrUTF8(s))
}
function hexMD5 (s) {
return rstr2hex(rawMD5(s))
}
function rawHMACMD5 (k, d) {
return rstrHMACMD5(str2rstrUTF8(k), str2rstrUTF8(d))
}
function hexHMACMD5 (k, d) {
return rstr2hex(rawHMACMD5(k, d))
}
function md5 (string, key, raw) {
if (!key) {
if (!raw) {
return hexMD5(string)
}
return rawMD5(string)
}
if (!raw) {
return hexHMACMD5(key, string)
}
return rawHMACMD5(key, string)
}
if (typeof define === 'function' && define.amd) {
define(function () {
return md5
})
} else if (typeof module === 'object' && module.exports) {
module.exports = md5
} else {
$.md5 = md5
}
})(this)

@ -8,13 +8,14 @@
"use strict";
class AnnotationParser {
constructor(job, labelsInfo) {
constructor(job, labelsInfo, idGenerator) {
this._parser = new DOMParser();
this._startFrame = job.start;
this._stopFrame = job.stop;
this._flipped = job.flipped;
this._im_meta = job.image_meta_data;
this._labelsInfo = labelsInfo;
this._idGen = idGenerator;
}
_xmlParseError(parsedXML) {
@ -131,7 +132,7 @@ class AnnotationParser {
let result = [];
for (let track of tracks) {
let label = track.getAttribute('label');
let group_id = track.getAttribute('group_id') || "0";
let group_id = track.getAttribute('group_id') || '0';
let labelId = this._labelsInfo.labelIdOf(label);
if (labelId === null) {
throw Error(`An unknown label found in the annotation file: ${label}`);
@ -224,6 +225,7 @@ class AnnotationParser {
ybr: ybr,
z_order: z_order,
attributes: attributeList,
id: this._idGen.next(),
});
}
else {
@ -236,6 +238,7 @@ class AnnotationParser {
occluded: occluded,
z_order: z_order,
attributes: attributeList,
id: this._idGen.next(),
});
}
}
@ -255,7 +258,7 @@ class AnnotationParser {
let tracks = xml.getElementsByTagName('track');
for (let track of tracks) {
let labelId = this._labelsInfo.labelIdOf(track.getAttribute('label'));
let groupId = track.getAttribute('group_id') || "0";
let groupId = track.getAttribute('group_id') || '0';
if (labelId === null) {
throw Error('An unknown label found in the annotation file: ' + name);
}
@ -307,7 +310,8 @@ class AnnotationParser {
group_id: +groupId,
frame: +parsed[type][0].getAttribute('frame'),
attributes: [],
shapes: []
shapes: [],
id: this._idGen.next(),
};
for (let shape of parsed[type]) {

@ -4,7 +4,7 @@
* SPDX-License-Identifier: MIT
*/
/* exported callAnnotationUI translateSVGPos blurAllElements drawBoxSize */
/* exported callAnnotationUI blurAllElements drawBoxSize copyToClipboard */
"use strict";
function callAnnotationUI(jid) {
@ -40,6 +40,7 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
// Setup some API
window.cvat = {
labelsInfo: new LabelsInfo(job),
translate: new CoordinateTranslator(),
player: {
geometry: {
scale: 1,
@ -53,24 +54,69 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
mode: null,
job: {
z_order: job.z_order,
id: job.jobid
id: job.jobid,
images: job.image_meta_data,
},
search: {
value: window.location.search,
set: function(name, value) {
let searchParams = new URLSearchParams(this.value);
if (typeof value === 'undefined' || value === null) {
if (searchParams.has(name)) {
searchParams.delete(name);
}
}
else searchParams.set(name, value);
this.value = `${searchParams.toString()}`;
},
get: function(name) {
try {
let decodedURI = decodeURIComponent(this.value);
let urlSearchParams = new URLSearchParams(decodedURI);
if (urlSearchParams.has(name)) {
return urlSearchParams.get(name);
}
else return null;
}
catch (error) {
showMessage('Bad URL has been found');
this.value = window.location.href;
return null;
}
},
toString: function() {
return `${window.location.origin}/?${this.value}`;
}
}
};
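// Editor's note (illustrative): window.cvat.search keeps a "virtual" copy of the
// query string in memory; set()/get() operate on that copy and toString() builds
// a shareable URL from it, so the real address bar is rewritten only once below
// via history.replaceState.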
// Remove external search parameters from url
window.history.replaceState(null, null, `${window.location.origin}/?id=${job.jobid}`);
window.cvat.config = new Config();
// Setup components
let annotationParser = new AnnotationParser(job, window.cvat.labelsInfo);
let idGenerator = new IncrementIdGenerator(job.max_shape_id + 1);
let annotationParser = new AnnotationParser(job, window.cvat.labelsInfo, idGenerator);
let shapeCollectionModel = new ShapeCollectionModel().import(shapeData).updateHash();
let shapeCollectionModel = new ShapeCollectionModel(idGenerator).import(shapeData, true);
let shapeCollectionController = new ShapeCollectionController(shapeCollectionModel);
let shapeCollectionView = new ShapeCollectionView(shapeCollectionModel, shapeCollectionController);
// In case of old tasks that dont provide max saved shape id properly
if (job.max_shape_id === -1) {
idGenerator.reset(shapeCollectionModel.maxId + 1);
}
window.cvat.data = {
get: () => shapeCollectionModel.export(),
get: () => shapeCollectionModel.exportAll(),
set: (data) => {
shapeCollectionModel.empty();
shapeCollectionModel.import(data);
shapeCollectionModel.import(data, false);
shapeCollectionModel.update();
},
clear: () => shapeCollectionModel.empty(),
@ -85,6 +131,13 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
let shapeCreatorController = new ShapeCreatorController(shapeCreatorModel);
let shapeCreatorView = new ShapeCreatorView(shapeCreatorModel, shapeCreatorController);
let polyshapeEditorModel = new PolyshapeEditorModel();
let polyshapeEditorController = new PolyshapeEditorController(polyshapeEditorModel);
let polyshapeEditorView = new PolyshapeEditorView(polyshapeEditorModel, polyshapeEditorController);
// Add static member for class. It will be used by all polyshapes.
PolyShapeView.editor = polyshapeEditorModel;
let shapeMergerModel = new ShapeMergerModel(shapeCollectionModel);
let shapeMergerController = new ShapeMergerController(shapeMergerModel);
new ShapeMergerView(shapeMergerModel, shapeMergerController);
@ -95,6 +148,8 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
let aamModel = new AAMModel(shapeCollectionModel, (xtl, xbr, ytl, ybr) => {
playerModel.focus(xtl, xbr, ytl, ybr);
}, () => {
playerModel.fit();
});
let aamController = new AAMController(aamModel);
new AAMView(aamModel, aamController);
@ -129,7 +184,8 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
playerModel.subscribe(shapeCreatorView);
playerModel.subscribe(shapeBufferView);
playerModel.subscribe(shapeGrouperView);
playerModel.shift(0);
playerModel.subscribe(polyshapeEditorView);
playerModel.shift(window.cvat.search.get('frame') || 0, true);
let shortkeys = window.cvat.config.shortkeys;
@ -137,7 +193,14 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
setupSettingsWindow();
setupMenu(job, shapeCollectionModel, annotationParser, aamModel, playerModel, historyModel);
setupFrameFilters();
setupShortkeys(shortkeys);
setupShortkeys(shortkeys, {
aam: aamModel,
shapeCreator: shapeCreatorModel,
shapeMerger: shapeMergerModel,
shapeGrouper: shapeGrouperModel,
shapeBuffer: shapeBufferModel,
shapeEditor: polyshapeEditorModel
});
$(window).on('click', function(event) {
Logger.updateUserActivityTimer();
@ -177,6 +240,16 @@ function buildAnnotationUI(job, shapeData, loadJobEvent) {
});
}
function copyToClipboard(text) {
let tempInput = $("<input>");
$("body").append(tempInput);
tempInput.prop('value', text).select();
document.execCommand("copy");
tempInput.remove();
}
function setupFrameFilters() {
let brightnessRange = $('#playerBrightnessRange');
let contrastRange = $('#playerContrastRange');
@ -248,7 +321,7 @@ function setupFrameFilters() {
}
function setupShortkeys(shortkeys) {
function setupShortkeys(shortkeys, models) {
let annotationMenu = $('#annotationMenu');
let settingsWindow = $('#settingsWindow');
let helpWindow = $('#helpWindow');
@ -291,9 +364,34 @@ function setupShortkeys(shortkeys) {
return false;
});
let cancelModeHandler = Logger.shortkeyLogDecorator(function() {
switch (window.cvat.mode) {
case 'aam':
models.aam.switchAAMMode();
break;
case 'creation':
models.shapeCreator.switchCreateMode(true);
break;
case 'merge':
models.shapeMerger.cancel();
break;
case 'groupping':
models.shapeGrouper.cancel();
break;
case 'paste':
models.shapeBuffer.switchPaste();
break;
case 'poly_editing':
models.shapeEditor.finish();
break;
}
return false;
});
Mousetrap.bind(shortkeys["open_help"].value, openHelpHandler, 'keydown');
Mousetrap.bind(shortkeys["open_settings"].value, openSettingsHandler, 'keydown');
Mousetrap.bind(shortkeys["save_work"].value, saveHandler, 'keydown');
Mousetrap.bind(shortkeys["cancel_mode"].value, cancelModeHandler, 'keydown');
}
@ -423,12 +521,23 @@ function setupMenu(job, shapeCollectionModel, annotationParser, aamModel, player
})();
$('#statTaskName').text(job.slug);
$('#statTaskStatus').text(job.status);
$('#statFrames').text(`[${job.start}-${job.stop}]`);
$('#statOverlap').text(job.overlap);
$('#statZOrder').text(job.z_order);
$('#statFlipped').text(job.flipped);
$('#statTaskStatus').prop("value", job.status).on('change', (e) => {
$.ajax({
type: 'POST',
url: 'save/status/job/' + window.cvat.job.id,
data: JSON.stringify({
status: e.target.value
}),
contentType: "application/json; charset=utf-8",
error: (data) => {
showMessage(`Cannot change job status. Code: ${data.status}. Message: ${data.responseText || data.statusText}`);
}
});
});
let shortkeys = window.cvat.config.shortkeys;
$('#helpButton').on('click', () => {
@ -459,13 +568,15 @@ function setupMenu(job, shapeCollectionModel, annotationParser, aamModel, player
});
$('#removeAnnotationButton').on('click', () => {
hide();
confirm('Do you want to remove all annotations? The action cannot be undone!',
() => {
historyModel.empty();
shapeCollectionModel.empty();
}
);
if (!window.cvat.mode) {
hide();
confirm('Do you want to remove all annotations? The action cannot be undone!',
() => {
historyModel.empty();
shapeCollectionModel.empty();
}
);
}
});
$('#saveButton').on('click', () => {
@ -561,7 +672,7 @@ function uploadAnnotation(shapeCollectionModel, historyModel, annotationParser,
try {
historyModel.empty();
shapeCollectionModel.empty();
shapeCollectionModel.import(data);
shapeCollectionModel.import(data, false);
shapeCollectionModel.update();
}
finally {
@ -599,11 +710,12 @@ function saveAnnotation(shapeCollectionModel, job) {
'points count': totalStat.points.annotation + totalStat.points.interpolation,
});
let exportedData = shapeCollectionModel.export();
let annotationLogs = Logger.getLogs();
const exportedData = shapeCollectionModel.export();
shapeCollectionModel.updateExportedState();
const annotationLogs = Logger.getLogs();
const data = {
annotation: exportedData,
annotation: JSON.stringify(exportedData),
logs: JSON.stringify(annotationLogs.export()),
};
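// Editor's note (illustrative): export(), updateExportedState() and
// confirmExportedState() appear to implement the incremental save flow - the
// collection tracks what changed since the last confirmed save and only that
// delta is serialized here instead of re-sending the whole annotation.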
@ -612,7 +724,7 @@ function saveAnnotation(shapeCollectionModel, job) {
saveJobRequest(job.jobid, data, () => {
// success
shapeCollectionModel.updateHash();
shapeCollectionModel.confirmExportedState();
saveButton.text('Success!');
setTimeout(() => {
saveButton.prop('disabled', false);
@ -628,26 +740,7 @@ function saveAnnotation(shapeCollectionModel, job) {
});
}
function translateSVGPos(svgCanvas, clientX, clientY) {
let pt = svgCanvas.createSVGPoint();
pt.x = clientX;
pt.y = clientY;
pt = pt.matrixTransform(svgCanvas.getScreenCTM().inverse());
let pos = {
x: pt.x,
y: pt.y
};
if (platform.name.toLowerCase() == 'firefox') {
pos.x /= window.cvat.player.geometry.scale;
pos.y /= window.cvat.player.geometry.scale;
}
return pos;
}
function blurAllElements() {
document.activeElement.blur();
}
}

@ -10,10 +10,11 @@
const AAMUndefinedKeyword = '__undefined__';
class AAMModel extends Listener {
constructor(shapeCollection, focus) {
constructor(shapeCollection, focus, fit) {
super('onAAMUpdate', () => this);
this._shapeCollection = shapeCollection;
this._focus = focus;
this._fit = fit;
this._activeAAM = false;
this._activeIdx = null;
this._active = null;
@ -91,7 +92,10 @@ class AAMModel extends Listener {
for (let shape of this._shapeCollection.currentShapes) {
let labelAttributes = window.cvat.labelsInfo.labelAttributes(shape.model.label);
if (Object.keys(labelAttributes).length && !shape.model.removed && !shape.interpolation.position.outside) {
this._currentShapes.push(shape);
this._currentShapes.push({
model: shape.model,
interpolation: shape.model.interpolate(window.cvat.player.frames.current),
});
}
}
@ -164,6 +168,7 @@ class AAMModel extends Listener {
// Notify for remove aam UI
this.notify();
this._fit();
}
}
@ -182,7 +187,7 @@ class AAMModel extends Listener {
}
this._deactivate();
if (Math.sign(direction) > 0) {
if (Math.sign(direction) < 0) {
// next
this._activeIdx ++;
if (this._activeIdx >= this._currentShapes.length) {

@ -4,7 +4,16 @@
* SPDX-License-Identifier: MIT
*/
/* exported confirm showMessage showOverlay dumpAnnotationRequest */
/* exported
ExportType
confirm
createExportContainer
dumpAnnotationRequest
getExportTargetContainer
showMessage
showOverlay
*/
"use strict";
Math.clamp = function(x, min, max) {
@ -160,6 +169,79 @@ function dumpAnnotationRequest(dumpButton, taskID) {
}
}
const ExportType = Object.freeze({
'create': 0,
'update': 1,
'delete': 2,
});
function createExportContainer() {
const container = {};
Object.keys(ExportType).forEach( action => {
container[action] = {
"boxes": [],
"box_paths": [],
"points": [],
"points_paths": [],
"polygons": [],
"polygon_paths": [],
"polylines": [],
"polyline_paths": [],
};
});
return container;
}
function getExportTargetContainer(export_type, shape_type, container) {
let shape_container_target = undefined;
let export_action_container = undefined;
switch (export_type) {
case ExportType.create:
export_action_container = container.create;
break;
case ExportType.update:
export_action_container = container.update;
break;
case ExportType.delete:
export_action_container = container.delete;
break;
default:
throw Error('Unexpected export type');
}
switch (shape_type) {
case 'annotation_box':
shape_container_target = export_action_container.boxes;
break;
case 'interpolation_box':
shape_container_target = export_action_container.box_paths;
break;
case 'annotation_points':
shape_container_target = export_action_container.points;
break;
case 'interpolation_points':
shape_container_target = export_action_container.points_paths;
break;
case 'annotation_polygon':
shape_container_target = export_action_container.polygons;
break;
case 'interpolation_polygon':
shape_container_target = export_action_container.polygon_paths;
break;
case 'annotation_polyline':
shape_container_target = export_action_container.polylines;
break;
case 'interpolation_polyline':
shape_container_target = export_action_container.polyline_paths;
break;
default:
throw Error('Undefined shape type');
}
return shape_container_target;
}
/* These HTTP methods do not require CSRF protection */
function csrfSafeMethod(method) {
@ -178,7 +260,7 @@ $.ajaxSetup({
$(document).ready(function(){
$('body').css({
width: window.screen.width * 0.95 + 'px',
width: window.screen.width + 'px',
height: window.screen.height * 0.95 + 'px'
});
});

@ -0,0 +1,106 @@
/*
* Copyright (C) 2018 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
/* exported CoordinateTranslator */
"use strict";
class CoordinateTranslator {
constructor() {
this._boxTranslator = {
_playerOffset: 0,
_convert: function(box, sign) {
for (let prop of ["xtl", "ytl", "xbr", "ybr", "x", "y"]) {
if (prop in box) {
box[prop] += this._playerOffset * sign;
}
}
return box;
},
actualToCanvas: function(actualBox) {
let canvasBox = {};
for (let key in actualBox) {
canvasBox[key] = actualBox[key];
}
return this._convert(canvasBox, 1);
},
canvasToActual: function(canvasBox) {
let actualBox = {};
for (let key in canvasBox) {
actualBox[key] = canvasBox[key];
}
return this._convert(actualBox, -1);
},
};
this._pointsTranslator = {
_playerOffset: 0,
_convert: function(points, sign) {
if (typeof(points) === 'string') {
return points.split(' ').map((coord) => coord.split(',')
.map((x) => +x + this._playerOffset * sign).join(',')).join(' ');
}
else if (typeof(points) === 'object') {
let result = [];
for (let point of points) {
result.push({
x: point.x + this._playerOffset * sign,
y: point.y + this._playerOffset * sign,
});
}
return result;
}
else {
throw Error('Unknown points type was found');
}
},
actualToCanvas: function(actualPoints) {
return this._convert(actualPoints, 1);
},
canvasToActual: function(canvasPoints) {
return this._convert(canvasPoints, -1);
}
},
this._pointTranslator = {
clientToCanvas: function(targetCanvas, clientX, clientY) {
let pt = targetCanvas.createSVGPoint();
pt.x = clientX;
pt.y = clientY;
pt = pt.matrixTransform(targetCanvas.getScreenCTM().inverse());
return pt;
},
canvasToClient: function(sourceCanvas, canvasX, canvasY) {
let pt = sourceCanvas.createSVGPoint();
pt.x = canvasX;
pt.y = canvasY;
pt = pt.matrixTransform(sourceCanvas.getScreenCTM());
return pt;
}
};
}
get box() {
return this._boxTranslator;
}
get points() {
return this._pointsTranslator;
}
get point() {
return this._pointTranslator;
}
set playerOffset(value) {
this._boxTranslator._playerOffset = value;
this._pointsTranslator._playerOffset = value;
}
}

@ -0,0 +1,36 @@
/*
* Copyright (C) 2018 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
/* exported
IncrementIdGenerator
ConstIdGenerator
*/
"use strict";
class IncrementIdGenerator {
constructor(startId=0) {
this._startId = startId;
}
next() {
return this._startId++;
}
reset(startId=0) {
this._startId = startId;
}
}
class ConstIdGenerator {
constructor(startId=-1) {
this._startId = startId;
}
next() {
return this._startId;
}
}
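// Editor's note - an illustrative usage sketch based on the annotation UI code
// earlier in this diff:
//   const gen = new IncrementIdGenerator(job.max_shape_id + 1);
//   shape.id = gen.next();  // every imported/created shape gets a fresh client id
// ConstIdGenerator always returns the same id; its callers are not shown in this diff.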

@ -80,7 +80,7 @@ var LoggerHandler = function(applicationName, jobId)
return new Promise( (resolve, reject) => {
let xhr = new XMLHttpRequest();
xhr.open('POST', '/logs/exception/' + this._jobId);
xhr.open('POST', '/save/exception/' + this._jobId);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.setRequestHeader("X-CSRFToken", Cookies.get('csrftoken'));
@ -202,25 +202,31 @@ var LoggerHandler = function(applicationName, jobId)
/*
Log message has simple json format - each message is set of "key" : "value" pairs inside curly braces - {"key1" : "string_value", "key2" : number_value, ...}
Value may be string or number (see json spec)
required fields for all event types:
Log message has simple json format - each message is set of "key" : "value"
pairs inside curly braces - {"key1" : "string_value", "key2" : number_value,
...} Value may be string or number (see json spec) required fields for all event
types:
NAME TYPE DESCRIPTION
"event" string see EventType enum description of possible values.
"timestamp" number timestamp in UNIX format - the number of seconds or milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970
"timestamp" number timestamp in UNIX format - the number of seconds
or milliseconds that have elapsed since 00:00:00
Thursday, 1 January 1970
"application" string application name
"userid" string Unique userid
"task" string Unique task id. (Is expected corresponding Jira task id)
"count" is requiered field for "Add object", "Delete object", "Copy track", "Propagate object", "Merge objecrs", "Undo action" and "Redo action"
events with number value.
"count" is requiered field for "Add object", "Delete object", "Copy track",
"Propagate object", "Merge objecrs", "Undo action" and "Redo action" events with
number value.
Example : { "event" : "Add object", "timestamp" : 1486040342867, "application" : "CVAT", "duration" : 4200, "userid" : "ESAZON1X-MOBL", "count" : 1, "type" : "bounding box" }
Example : { "event" : "Add object", "timestamp" : 1486040342867, "application" :
"CVAT", "duration" : 4200, "userid" : "ESAZON1X-MOBL", "count" : 1, "type" :
"bounding box" }
Types of supported events.
Minimum subset of events to generate simple report are Logger.EventType.addObject, Logger.EventType.deleteObject and Logger.EventType.sendTaskInfo.
Value of "count" property should be a number.
Types of supported events. Minimum subset of events to generate simple report
are Logger.EventType.addObject, Logger.EventType.deleteObject and
Logger.EventType.sendTaskInfo. Value of "count" property should be a number.
*/
var Logger = {
@ -276,50 +282,67 @@ var Logger = {
EventType: {
// dumped as "Paste object". There are no additional required fields.
pasteObject: 0,
// dumped as "Change attribute". There are no additional required fields.
// dumped as "Change attribute". There are no additional required
// fields.
changeAttribute: 1,
// dumped as "Drag object". There are no additional required fields.
dragObject: 2,
// dumped as "Delete object". "count" is required field, value of deleted objects should be positive number.
// dumped as "Delete object". "count" is required field, value of
// deleted objects should be positive number.
deleteObject: 3,
// dumped as "Press shortcut". There are no additional required fields.
pressShortcut: 4,
// dumped as "Resize object". There are no additional required fields.
resizeObject: 5,
// dumped as "Send logs". It's expected that event has "duration" field, but it isn't necessary.
// dumped as "Send logs". It's expected that event has "duration" field,
// but it isn't necessary.
sendLogs: 6,
// dumped as "Save job". It's expected that event has "duration" field, but it isn't necessary.
// dumped as "Save job". It's expected that event has "duration" field,
// but it isn't necessary.
saveJob: 7,
// dumped as "Jump frame". There are no additional required fields.
jumpFrame: 8,
// dumped as "Draw object". It's expected that event has "duration" field, but it isn't necessary.
// dumped as "Draw object". It's expected that event has "duration"
// field, but it isn't necessary.
drawObject: 9,
// dumped as "Change label".
changeLabel: 10,
// dumped as "Send task info". "track count", "frame count", "object count" are required fields. It's expected that event has "current_frame" field.
// dumped as "Send task info". "track count", "frame count", "object
// count" are required fields. It's expected that event has
// "current_frame" field.
sendTaskInfo: 11,
// dumped as "Load job". "track count", "frame count", "object count" are required fields. It's expected that event has "duration" field, but it isn't necessary.
// dumped as "Load job". "track count", "frame count", "object count"
// are required fields. It's expected that event has "duration" field,
// but it isn't necessary.
loadJob: 12,
// dumped as "Move image". It's expected that event has "duration" field, but it isn't necessary.
// dumped as "Move image". It's expected that event has "duration"
// field, but it isn't necessary.
moveImage: 13,
// dumped as "Zoom image". It's expected that event has "duration" field, but it isn't necessary.
// dumped as "Zoom image". It's expected that event has "duration"
// field, but it isn't necessary.
zoomImage: 14,
// dumped as "Lock object". There are no additional required fields.
lockObject: 15,
// dumped as "Merge objects". "count" is required field with positive or negative number value.
// dumped as "Merge objects". "count" is required field with positive or
// negative number value.
mergeObjects: 16,
// dumped as "Copy object". "count" is required field with number value.
copyObject: 17,
// dumped as "Propagate object". "count" is required field with number value.
// dumped as "Propagate object". "count" is required field with number
// value.
propagateObject: 18,
// dumped as "Undo action". "count" is required field with positive or negative number value.
// dumped as "Undo action". "count" is required field with positive or
// negative number value.
undoAction: 19,
// dumped as "Redo action". "count" is required field with positive or negative number value.
// dumped as "Redo action". "count" is required field with positive or
// negative number value.
redoAction: 20,
// dumped as "Send user activity". "working_time" is required field with positive number value.
// dumped as "Send user activity". "working_time" is required field with
// positive number value.
sendUserActivity: 21,
// dumped as "Send exception". Use to send any exception events to the server.
// "message", "filename", "line" are mandatory fields. "stack" and "column" are optional.
// dumped as "Send exception". Use to send any exception events to the
// server. "message", "filename", "line" are mandatory fields. "stack"
// and "column" are optional.
sendException: 22,
// dumped as "Change frame". There are no additional required fields.
changeFrame: 23,
@ -356,10 +379,12 @@ var Logger = {
/**
* Logger.addContinuedEvent Use to add log event with duration field.
* Duration will be calculated automatically when LogEvent.close() method of returned Object will be called.
* Note: in case of LogEvent.close() method will not be callsed event will not be sended to server
* Duration will be calculated automatically when LogEvent.close() method of
* the returned Object will be called. Note: if the LogEvent.close() method
* is not called, the event will not be sent to the server
* @param {Logger.EventType} type Event Type
* @param {Object} values Any event values, for example {count: 1, label: 'vehicle'}
* @param {Object} values Any event values, for example {count: 1, label:
* 'vehicle'}
* @return {LogEvent} instance of LogEvent
* @static
*/
@ -370,7 +395,8 @@ var Logger = {
/**
* Logger.shortkeyLogDecorator use for decorating the shortkey handlers.
* This decorator just create appropriate log event and close it when decored function will performed.
* This decorator just create appropriate log event and close it when
* decored function will performed.
* @param {Function} decoredFunc is function for decorating
* @return {Function} is decorated decoredFunc
* @static
@ -387,7 +413,7 @@ var Logger = {
},
/**
* Logger.sendLogs Try to send exception logs to the server immediatly.
* Logger.sendLogs Try to send exception logs to the server immediately.
* @return {Promise}
* @param {LogEvent} exceptionEvent
* @static
@ -414,7 +440,8 @@ var Logger = {
},
/**
* Logger.setUsername just set username property which will be added to all log messages
* Logger.setUsername just set username property which will be added to all
* log messages
* @param {String} username
* @static
*/
@ -423,7 +450,8 @@ var Logger = {
this._logger.setUsername(username);
},
/** Logger.updateUserActivityTimer method updates internal timer for working time calculation logic
/** Logger.updateUserActivityTimer method updates internal timer for working
* time calculation logic
* @static
*/
updateUserActivityTimer: function()
@ -431,11 +459,12 @@ var Logger = {
this._logger.updateTimer();
},
/** Logger.setTimeThreshold set time threshold in ms for EventType.
* If time interval betwwen incoming log events less than threshold events will be collapsed.
* Note that result event will have timestamp of first event,
* In case of time threshold used for continued event duration will be difference between
* first and last event timestamps and other fields from last event.
/** Logger.setTimeThreshold set time threshold in ms for EventType. If time
* interval between incoming log events is less than the threshold, the events will be
* collapsed. Note that the resulting event will have the timestamp of the first event. In
* case the time threshold is used for a continued event, the duration will be the
* difference between the first and last event timestamps, and the other fields come from
* the last event.
* @static
* @param {Logger.EventType} eventType
* @param {Number} threshold
@ -445,7 +474,8 @@ var Logger = {
this._logger.setTimeThreshold(eventType, threshold);
},
/** Logger._eventTypeToString private method to transform Logger.EventType to string
/** Logger._eventTypeToString private method to transform Logger.EventType
* to string
* @param {Logger.EventType} type Event Type
* @return {String} string representation of Logger.EventType
* @static

Some files were not shown because too many files have changed in this diff
