Website with documentation (#3039)

main
Timur Osmanov 5 years ago committed by GitHub
parent a2df499f50
commit 9615436ecc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23

@ -0,0 +1,38 @@
name: Github pages
on:
  push:
    branches:
      - develop

jobs:
  deploy:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: '0.83.1'
          extended: true
      - name: Setup Node
        uses: actions/setup-node@v2
        with:
          node-version: '14.x'
      - name: Build docs
        working-directory: ./site
        run: |
          npm ci
          hugo --baseURL "/cvat/" --minify
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site/public
          force_orphan: true
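For reference, the `Build docs` step above can be reproduced on a local machine (a sketch; it assumes Node.js and the extended Hugo release pinned in the workflow are already installed):

```bash
# Same commands the workflow runs, executed from the repository root
cd site
npm ci                              # install PostCSS and other build tooling
hugo --baseURL "/cvat/" --minify    # generate the site into site/public
```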

6
.gitignore vendored

@ -41,3 +41,9 @@ yarn-error.log*
/helm-chart/values.*.yaml
/helm-chart/*.values.yaml
/helm-chart/charts/*
#Ignore website temp files
/site/public/
/site/resources/
/site/node_modules/
/site/tech-doc-hugo

3
.gitmodules vendored

@ -0,0 +1,3 @@
[submodule "site/themes/docsy"]
  path = site/themes/docsy
  url = https://github.com/google/docsy

@ -17,14 +17,14 @@ annotation team. Try it online [cvat.org](https://cvat.org).
## Documentation
- [Installation guide](cvat/apps/documentation/installation.md)
- [User's guide](cvat/apps/documentation/user_guide.md)
- [Installation guide](site/content/en/docs/for-users/installation.md)
- [User's guide](https://openvinotoolkit.github.io/cvat/docs/for-users/user-guide/)
- [Django REST API documentation](#rest-api)
- [Datumaro dataset framework](https://github.com/openvinotoolkit/datumaro/blob/develop/README.md)
- [Command line interface](utils/cli/)
- [XML annotation format](cvat/apps/documentation/xml_format.md)
- [AWS Deployment Guide](cvat/apps/documentation/AWS-Deployment-Guide.md)
- [Frequently asked questions](cvat/apps/documentation/faq.md)
- [Command line interface](site/content/en/docs/for-developers/cli.md)
- [XML annotation format](site/content/en/docs/for-developers/xml_format.md)
- [AWS Deployment Guide](site/content/en/docs/for-developers/AWS-Deployment-Guide.md)
- [Frequently asked questions](site/content/en/docs/for-users/faq.md)
- [Questions](#questions)
## Screencasts
@ -97,7 +97,7 @@ are visible to users.
Disabled features:
- [Analytics: management and monitoring of data annotation team](/components/analytics/README.md)
- [Analytics: management and monitoring of data annotation team](site/content/en/docs/for-developers/analytics.md)
Limitations:

@ -1,4 +1,4 @@
// Copyright (C) 2020 Intel Corporation
// Copyright (C) 2020-2021 Intel Corporation
//
// SPDX-License-Identifier: MIT
@ -232,7 +232,11 @@ function HeaderContainer(props: Props): JSX.Element {
About
</Menu.Item>
{renderChangePasswordItem && (
<Menu.Item className='cvat-header-menu-change-password' onClick={(): void => switchChangePasswordDialog(true)} disabled={changePasswordFetching}>
<Menu.Item
className='cvat-header-menu-change-password'
onClick={(): void => switchChangePasswordDialog(true)}
disabled={changePasswordFetching}
>
{changePasswordFetching ? <LoadingOutlined /> : <EditOutlined />}
Change password
</Menu.Item>
@ -320,12 +324,12 @@ function HeaderContainer(props: Props): JSX.Element {
<Button
className='cvat-header-button'
type='link'
href={`${tool.server.host}/documentation/user_guide.html`}
href='https://openvinotoolkit.github.io/cvat/docs'
onClick={(event: React.MouseEvent): void => {
event.preventDefault();
// false positive
// eslint-disable-next-line
window.open(`${tool.server.host}/documentation/user_guide.html`, '_blank');
window.open('https://openvinotoolkit.github.io/cvat/docs');
}}
>
<QuestionCircleOutlined />

File diff suppressed because it is too large

@ -1,22 +0,0 @@
### AWS-Deployment Guide
There are two ways of deploying CVAT.
1. **On an Nvidia GPU machine:** The TensorFlow annotation feature depends on GPU hardware. One of the easiest ways to launch CVAT with the tf-annotation app is to use AWS P3 instances, which provide NVIDIA GPUs. Read more about [P3 instances here.](https://aws.amazon.com/about-aws/whats-new/2017/10/introducing-amazon-ec2-p3-instances/)
The overall setup is explained in the [main readme file](https://github.com/opencv/cvat/), except for installing the NVIDIA drivers, so we need to download and install them separately. For Amazon P3 instances, download the NVIDIA drivers from the NVIDIA website. For more details, see [Installing the NVIDIA Driver on Linux Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html).
2. **On Any other AWS Machine:** We can follow the same instruction guide mentioned in the
[installation instructions](https://github.com/opencv/cvat/blob/master/cvat/apps/documentation/installation.md).
The additional step is to add a [security group and rule to allow incoming connections](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).
For either of the above, don't forget to add the exposed AWS public IP address or hostname to `docker-compose.override.yml`:
```
version: "2.3"
services:
cvat_proxy:
environment:
CVAT_HOST: your-instance.amazonaws.com
```
In case of problems with using the hostname, you can also use the public IPv4 instead. For AWS or any cloud-based machines where the instances need to be terminated or stopped, the public IPv4 and hostname change with every stop and reboot. To address this efficiently, avoid using spot instances that cannot be stopped, since copying the EBS to an AMI and restarting it causes problems. On the other hand, when a regular instance is stopped and restarted, the new hostname/IPv4 can be used in the `CVAT_HOST` variable in `docker-compose.override.yml`, and the build can happen instantly, with CVAT tasks becoming available through the new IPv4.

@ -1,4 +0,0 @@
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT

@ -1,11 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.apps import AppConfig
class DocumentationConfig(AppConfig):
    name = 'cvat.apps.documentation'

@ -1,385 +0,0 @@
- [Mounting cloud storage](#mounting-cloud-storage)
- [AWS S3 bucket](#aws-s3-bucket-as-filesystem)
- [Ubuntu 20.04](#aws_s3_ubuntu_2004)
- [Mount](#aws_s3_mount)
- [Automatically mount](#aws_s3_automatically_mount)
- [Using /etc/fstab](#aws_s3_using_fstab)
- [Using systemd](#aws_s3_using_systemd)
- [Check](#aws_s3_check)
- [Unmount](#aws_s3_unmount_filesystem)
- [Azure container](#microsoft-azure-container-as-filesystem)
- [Ubuntu 20.04](#azure_ubuntu_2004)
- [Mount](#azure_mount)
- [Automatically mount](#azure_automatically_mount)
- [Using /etc/fstab](#azure_using_fstab)
- [Using systemd](#azure_using_systemd)
- [Check](#azure_check)
- [Unmount](#azure_unmount_filesystem)
- [Google Drive](#google-drive-as-filesystem)
- [Ubuntu 20.04](#google_drive_ubuntu_2004)
- [Mount](#google_drive_mount)
- [Automatically mount](#google_drive_automatically_mount)
- [Using /etc/fstab](#google_drive_using_fstab)
- [Using systemd](#google_drive_using_systemd)
- [Check](#google_drive_check)
- [Unmount](#google_drive_unmount_filesystem)
# Mounting cloud storage
## AWS S3 bucket as filesystem
### <a name="aws_s3_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="aws_s3_mount">Mount</a>
1. Install s3fs:
```bash
sudo apt install s3fs
```
1. Enter your credentials in a file `${HOME}/.passwd-s3fs` and set owner-only permissions:
```bash
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
```
1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Run s3fs, replace `bucket_name`, `mount_point`:
```bash
s3fs <bucket_name> <mount_point> -o allow_other
```
For more details see [here](https://github.com/s3fs-fuse/s3fs-fuse).
#### <a name="aws_s3_automatically_mount">Automatically mount</a>
Follow the first 3 mounting steps above.
##### <a name="aws_s3_using_fstab">Using fstab</a>
1. Create a bash script named `aws_s3_fuse` (e.g. in /usr/bin, as root) with this content
(replace `user_name` with the user on whose behalf the disk will be mounted, as well as `bucket_name`, `mount_point` and `/path/to/.passwd-s3fs`):
```bash
#!/bin/bash
sudo -u <user_name> s3fs <bucket_name> <mount_point> -o passwd_file=/path/to/.passwd-s3fs -o allow_other
exit 0
```
1. Give it the execution permission:
```bash
sudo chmod +x /usr/bin/aws_s3_fuse
```
1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):
```bash
/absolute/path/to/aws_s3_fuse <mount_point> fuse allow_other,user,_netdev 0 0
```
##### <a name="aws_s3_using_systemd">Using systemd</a>
1. Create unit file `sudo nano /etc/systemd/system/s3fs.service`
(replace `user_name`, `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):
```bash
[Unit]
Description=FUSE filesystem over AWS S3 bucket
After=network.target
[Service]
Environment="MOUNT_POINT=<mount_point>"
User=<user_name>
Group=<user_name>
ExecStart=s3fs <bucket_name> ${MOUNT_POINT} -o passwd_file=/path/to/.passwd-s3fs -o allow_other
ExecStop=fusermount -u ${MOUNT_POINT}
Restart=always
Type=forking
[Install]
WantedBy=multi-user.target
```
1. Reload the systemd configuration, enable the unit to start at boot, and mount the bucket:
```bash
sudo systemctl daemon-reload
sudo systemctl enable s3fs.service
sudo systemctl start s3fs.service
```
#### <a name="aws_s3_check">Check</a>
The file `/etc/mtab` contains records of currently mounted filesystems.
```bash
cat /etc/mtab | grep 's3fs'
```
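Alternatively, `df` shows the mounted bucket together with its filesystem type (assuming the mount succeeded):

```bash
df -hT | grep s3fs
```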
#### <a name="aws_s3_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```
If you used [systemd](#aws_s3_using_systemd) to mount a bucket:
```bash
sudo systemctl stop s3fs.service
sudo systemctl disable s3fs.service
```
## Microsoft Azure container as filesystem
### <a name="azure_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="azure_mount">Mount</a>
1. Set up the Microsoft package repository (more details [here](https://docs.microsoft.com/en-us/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software#configuring-the-repositories)):
```bash
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
```
1. Install `blobfuse` and `fuse`:
```bash
sudo apt-get install blobfuse fuse
```
For more details see [here](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation)
1. Create environment variables (replace `account_name`, `account_key`, `mount_point`):
```bash
export AZURE_STORAGE_ACCOUNT=<account_name>
export AZURE_STORAGE_ACCESS_KEY=<account_key>
MOUNT_POINT=<mount_point>
```
1. Create a folder for cache:
```bash
sudo mkdir -p /mnt/blobfusetmp
```
1. Make sure the folder is owned by the user who will mount the container:
```bash
sudo chown <user> /mnt/blobfusetmp
```
1. Create the mount point if it doesn't exist:
```bash
mkdir -p ${MOUNT_POINT}
```
1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the container (replace `your_container`):
```bash
blobfuse ${MOUNT_POINT} --container-name=<your_container> --tmp-path=/mnt/blobfusetmp -o allow_other
```
#### <a name="azure_automatically_mount">Automatically mount</a>
Follow the first 7 mounting steps above.
##### <a name="azure_using_fstab">Using fstab</a>
1. Create a configuration file `connection.cfg` with the content below; change `accountName`,
choose either `accountKey` or `sasToken`, and replace it with your value (see the note on file permissions after this list):
```bash
accountName <account-name-here>
# Please provide either an account key or a SAS token, and delete the other line.
accountKey <account-key-here-delete-next-line>
#change authType to specify only 1
sasToken <shared-access-token-here-delete-previous-line>
authType <MSI/SAS/SPN/Key/empty>
containerName <insert-container-name-here>
```
1. Create a bash script named `azure_fuse` (e.g. in /usr/bin, as root) with the content below
(replace `user_name` with the user on whose behalf the disk will be mounted, as well as `mount_point`, `/path/to/blobfusetmp` and `/path/to/connection.cfg`):
```bash
#!/bin/bash
sudo -u <user_name> blobfuse <mount_point> --tmp-path=/path/to/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
exit 0
```
1. Give it the execution permission:
```bash
sudo chmod +x /usr/bin/azure_fuse
```
1. Edit `/etc/fstab` to use the blobfuse script. Add the following line (replace the paths):
```bash
/absolute/path/to/azure_fuse </path/to/desired/mountpoint> fuse allow_other,user,_netdev
```
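Because `connection.cfg` contains an account key or SAS token, it is worth restricting access to it (a general hardening step, not from the original guide; adjust the path):

```bash
sudo chmod 600 /path/to/connection.cfg   # readable only by its owner
```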
##### <a name="azure_using_systemd">Using systemd</a>
1. Create unit file `sudo nano /etc/systemd/system/blobfuse.service`.
(replace `user_name`, `mount_point`, `container_name`,`/path/to/connection.cfg`):
```bash
[Unit]
Description=FUSE filesystem over Azure container
After=network.target
[Service]
Environment="MOUNT_POINT=<mount_point>"
User=<user_name>
Group=<user_name>
ExecStart=blobfuse ${MOUNT_POINT} --container-name=<container_name> --tmp-path=/mnt/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
ExecStop=fusermount -u ${MOUNT_POINT}
Restart=always
Type=forking
[Install]
WantedBy=multi-user.target
```
1. Reload the systemd configuration, enable the unit to start at boot, and mount the container:
```bash
sudo systemctl daemon-reload
sudo systemctl enable blobfuse.service
sudo systemctl start blobfuse.service
```
Or for more detail [see here](https://github.com/Azure/azure-storage-fuse/tree/master/systemd)
#### <a name="azure_check">Check</a>
The file `/etc/mtab` contains records of currently mounted filesystems.
```bash
cat /etc/mtab | grep 'blobfuse'
```
#### <a name="azure_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```
If you used [systemd](#azure_using_systemd) to mount a container:
```bash
sudo systemctl stop blobfuse.service
sudo systemctl disable blobfuse.service
```
If you have any mounting problems, check out the [answers](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ)
to common problems.
## Google Drive as filesystem
### <a name="google_drive_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="google_drive_mount">Mount</a>
To mount a Google Drive as a filesystem in user space (FUSE),
you can use [google-drive-ocamlfuse](https://github.com/astrada/google-drive-ocamlfuse).
To do this, follow the instructions below:
1. Install google-drive-ocamlfuse:
```bash
sudo add-apt-repository ppa:alessandro-strada/ppa
sudo apt-get update
sudo apt-get install google-drive-ocamlfuse
```
1. Run `google-drive-ocamlfuse` without parameters:
```bash
google-drive-ocamlfuse
```
This command will create the default application directory (`~/.gdfuse/default`),
containing the configuration file `config` (see the [wiki](https://github.com/astrada/google-drive-ocamlfuse/wiki)
page for more details about configuration),
and it will start a web browser to obtain authorization to access your Google Drive.
This will let you modify the default configuration before mounting the filesystem.
Then you can choose a local directory to mount your Google Drive (e.g. `~/GoogleDrive`).
1. Create the mount point if it doesn't exist (replace `mount_point`):
```bash
mountpoint="<mount_point>"
mkdir -p $mountpoint
```
1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the filesystem:
```bash
google-drive-ocamlfuse -o allow_other $mountpoint
```
#### <a name="google_drive_automatically_mount">Automatically mount</a>
Follow the first 4 mounting steps above.
##### <a name="google_drive_using_fstab">Using fstab</a>
1. Create a bash script named `gdfuse` (e.g. in /usr/bin, as root) with this content
(replace `user_name` with the user on whose behalf the disk will be mounted, as well as `label` and `mount_point`):
```bash
#!/bin/bash
sudo -u <user_name> google-drive-ocamlfuse -o allow_other -label <label> <mount_point>
exit 0
```
1. Give it the execution permission:
```bash
sudo chmod +x /usr/bin/gdfuse
```
1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):
```bash
/absolute/path/to/gdfuse <mount_point> fuse allow_other,user,_netdev 0 0
```
For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)
##### <a name="google_drive_using_systemd">Using systemd</a>
1. Create unit file `sudo nano /etc/systemd/system/google-drive-ocamlfuse.service`.
(replace `user_name`, `label`(default `label=default`), `mount_point`):
```bash
[Unit]
Description=FUSE filesystem over Google Drive
After=network.target
[Service]
Environment="MOUNT_POINT=<mount_point>"
User=<user_name>
Group=<user_name>
ExecStart=google-drive-ocamlfuse -label <label> ${MOUNT_POINT}
ExecStop=fusermount -u ${MOUNT_POINT}
Restart=always
Type=forking
[Install]
WantedBy=multi-user.target
```
1. Reload the systemd configuration, enable the unit to start at boot, and mount the drive:
```bash
sudo systemctl daemon-reload
sudo systemctl enable google-drive-ocamlfuse.service
sudo systemctl start google-drive-ocamlfuse.service
```
For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)
#### <a name="google_drive_check">Check</a>
The file `/etc/mtab` contains records of currently mounted filesystems.
```bash
cat /etc/mtab | grep 'google-drive-ocamlfuse'
```
#### <a name="google_drive_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```
If you used [systemd](#google_drive_using_systemd) to mount a drive:
```bash
sudo systemctl stop google-drive-ocamlfuse.service
sudo systemctl disable google-drive-ocamlfuse.service
```

Binary file not shown.

Before: 214 KiB

@ -1,138 +0,0 @@
// Extension loading compatible with AMD and CommonJs
(function(extension) {
'use strict';
if (typeof showdown === 'object') {
// global (browser or nodejs global)
showdown.extension('toc', extension());
} else if (typeof define === 'function' && define.amd) {
// AMD
define('toc', extension());
} else if (typeof exports === 'object') {
// Node, CommonJS-like
module.exports = extension();
} else {
// showdown was not found so we throw
throw Error('Could not find showdown library');
}
}(function() {
function getHeaderEntries(sourceHtml) {
if (typeof window === 'undefined') {
return getHeaderEntriesInNodeJs(sourceHtml);
} else {
return getHeaderEntriesInBrowser(sourceHtml);
}
}
function getHeaderEntriesInNodeJs(sourceHtml) {
var cheerio = require('cheerio');
var $ = cheerio.load(sourceHtml);
var headers = $('h1, h2, h3, h4, h5, h6');
var headerList = [];
for (var i = 0; i < headers.length; i++) {
var el = headers[i];
headerList.push(new TocEntry(el.name, $(el).text(), $(el).attr('id')));
}
return headerList;
}
function getHeaderEntriesInBrowser(sourceHtml) {
// Generate dummy element
var source = document.createElement('div');
source.innerHTML = sourceHtml;
// Find headers
var headers = source.querySelectorAll('h1, h2, h3, h4, h5, h6');
var headerList = [];
for (var i = 0; i < headers.length; i++) {
var el = headers[i];
headerList.push(new TocEntry(el.tagName, el.textContent, el.id));
}
return headerList;
}
function TocEntry(tagName, text, anchor) {
this.tagName = tagName;
this.text = text;
this.anchor = anchor;
this.children = [];
}
TocEntry.prototype.childrenToString = function() {
if (this.children.length === 0) {
return "";
}
var result = "<ul>\n";
for (var i = 0; i < this.children.length; i++) {
result += this.children[i].toString();
}
result += "</ul>\n";
return result;
};
TocEntry.prototype.toString = function() {
var result = "<li>";
if (this.text) {
result += "<a href=\"#" + this.anchor + "\">" + this.text + "</a>";
}
result += this.childrenToString();
result += "</li>\n";
return result;
};
function sortHeader(tocEntries, level) {
level = level || 1;
var tagName = "H" + level,
result = [],
currentTocEntry;
function push(tocEntry) {
if (tocEntry !== undefined) {
if (tocEntry.children.length > 0) {
tocEntry.children = sortHeader(tocEntry.children, level + 1);
}
result.push(tocEntry);
}
}
for (var i = 0; i < tocEntries.length; i++) {
var tocEntry = tocEntries[i];
if (tocEntry.tagName.toUpperCase() !== tagName) {
if (currentTocEntry === undefined) {
currentTocEntry = new TocEntry();
}
currentTocEntry.children.push(tocEntry);
} else {
push(currentTocEntry);
currentTocEntry = tocEntry;
}
}
push(currentTocEntry);
return result;
}
return {
type: 'output',
filter: function(sourceHtml) {
var headerList = getHeaderEntries(sourceHtml);
// No header found
if (headerList.length === 0) {
return sourceHtml;
}
// Sort header
headerList = sortHeader(headerList);
// Build result and replace all [toc]
var result = '<div class="toc">\n<ul>\n' + headerList.join("") + '</ul>\n</div>\n';
return sourceHtml.replace(/\[toc\]/gi, result);
}
};
}));

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@ -1,34 +0,0 @@
<!--
Copyright (C) 2018-2020 Intel Corporation
SPDX-License-Identifier: MIT
-->
<!DOCTYPE html>
{% load static compress %}
<head>
<title>{% block title %} {% endblock %}</title>
{% compress js file thirdparty %}
<script type="text/javascript" src="{% static 'documentation/js/3rdparty/showdown.js' %}"></script>
<script type="text/javascript" src="{% static 'documentation/js/3rdparty/showdown-toc.js' %}"></script>
{% endcompress %}
</head>
<body>
<xmp id="content" style="display: none">
{% autoescape off %}
{% block content %}
{% endblock %}
{% endautoescape %}
</xmp>
<script type="text/javascript">
var converter = new showdown.Converter({ extensions: ['toc'] });
converter.setFlavor('github');
var user_guide = document.getElementById('content').innerHTML;
// For GitHub documentation we need to have relative links without
// leading slash. Let's just add the leading slash here to have correct
// links inside online documentation.
user_guide = user_guide.replace(/!\[\]\(static/g, '![](/static');
document.body.innerHTML = converter.makeHtml(user_guide);
</script>
</body>

@ -1,14 +0,0 @@
<!--
Copyright (C) 2018-2020 Intel Corporation
SPDX-License-Identifier: MIT
-->
{% extends 'documentation/base_page.html' %}
{% block title %}
CVAT User Guide
{% endblock %}
{% block content %}
{{ user_guide }}
{% endblock %}

@ -1,8 +0,0 @@
<!--
Copyright (C) 2018-2020 Intel Corporation
SPDX-License-Identifier: MIT
-->
{% extends 'documentation/base_page.html' %}
{% block title %} CVAT XML format {% endblock %}
{% block content %} {{ xml_format }} {% endblock %}

@ -1,13 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.urls import path
from . import views
urlpatterns = [
    path('user_guide.html', views.UserGuideView),
    path('xml_format.html', views.XmlFormatView),
]

File diff suppressed because it is too large

@ -1,21 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
from django.shortcuts import render
import os
def UserGuideView(request):
    module_dir = os.path.dirname(__file__)
    doc_path = os.path.join(module_dir, 'user_guide.md')
    return render(request, 'documentation/user_guide.html',
                  context={"user_guide": open(doc_path, "r").read()})

def XmlFormatView(request):
    module_dir = os.path.dirname(__file__)
    doc_path = os.path.join(module_dir, 'xml_format.md')
    return render(request, 'documentation/xml_format.html',
                  context={"xml_format": open(doc_path, "r").read()})

@ -104,7 +104,6 @@ INSTALLED_APPS = [
'django.contrib.messages',
'django.contrib.staticfiles',
'cvat.apps.authentication',
'cvat.apps.documentation',
'cvat.apps.dataset_manager',
'cvat.apps.engine',
'cvat.apps.dataset_repo',

@ -26,7 +26,6 @@ urlpatterns = [
path('admin/', admin.site.urls),
path('', include('cvat.apps.engine.urls')),
path('django-rq/', include('django_rq.urls')),
path('documentation/', include('cvat.apps.documentation.urls')),
]
if apps.is_installed('cvat.apps.dataset_repo'):

@ -0,0 +1,63 @@
## Basic manual for website editing
### Edit or add documentation pages
To edit and/or add documentation, you need a [GitHub](https://github.com/login) account.
To change documentation files or add a documentation page,
simply click `Edit this page` on the page you would like to edit.
If you need to add a child page, click `Create child page`.
If you need to edit text that uses [Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) markup,
click the `Fork this repository` button.
Read how to edit files on GitHub ([GitHub docs](https://docs.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)).
Please note that files include front matter for correct display on the site: the title, the link title,
the weight (which affects the display order of files in the sidebar) and an optional description:
---
title: "Title"
linkTitle: "Link Title"
weight: 1
description: >
Description
---
### Start the site locally
To start the site locally, you need a recent [extended version of Hugo](https://github.com/gohugoio/hugo/releases)
(version 0.75.0 or later is recommended).
Open the most recent release and scroll down until you find a list of Extended versions. [Read more](https://gohugo.io/getting-started/installing/#quick-install)
Add the path to `hugo` to the `Path` environment variable.
Clone a repository branch containing the site. For example, using a git command:
`git clone --branch <branchname> <remote-repo-url>`
If you want to build and/or serve your site locally, you also need to get local copies of the theme's own submodules:
`git submodule update --init --recursive`
To build and preview your site locally, use:
`cd <your local directory>/cvat/site/`
`hugo server`
By default, your site will be available at http://localhost:1313/
Instead of the `hugo server` command, you can use the `hugo` command, which generates the site into a `public` folder.
To build or update your site's CSS resources, you will need [PostCSS](https://postcss.org/) to create the final assets.
To install it, you must have a recent version of [NodeJS](https://nodejs.org/en/) installed on your machine,
so you can use npm, the Node package manager.
By default, npm installs tools under the directory where you run [npm install](https://docs.npmjs.com/cli/v6/commands/npm-install#description):
`cd <your local directory>/cvat/site/`
`npm ci`
Then you can build the website into the `public` folder:
`hugo`
[Read more](https://www.docsy.dev/docs/getting-started/)
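Putting the steps above together, a minimal local preview could look like this (a sketch; it assumes git, Node.js and the extended Hugo build are installed, and uses the upstream repository URL and the `develop` branch purely for illustration):

```bash
git clone --branch develop https://github.com/openvinotoolkit/cvat.git
cd cvat/site
git submodule update --init --recursive   # fetch the Docsy theme
npm ci                                    # install PostCSS and other tooling
hugo server                               # serve at http://localhost:1313/
```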

@ -0,0 +1 @@
<svg width="98" height="27" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><defs><path d="M101 0v29l-52.544.001C44.326 35.511 35.598 40 25.5 40 11.417 40 0 31.27 0 20.5S11.417 1 25.5 1c4.542 0 8.807.908 12.5 2.5V0h63z" id="logoA"/></defs><g transform="translate(-2 -1)" fill="none" fill-rule="evenodd"><mask id="logoB" fill="#fff"><use xlink:href="#logoA"/></mask><path d="M48.142 1c4.736 0 6.879 3.234 6.879 5.904v2.068h-4.737V6.904c0-.79-.789-2.144-2.142-2.144-1.654 0-2.368 1.354-2.368 2.144v15.192c0 .79.714 2.144 2.368 2.144 1.353 0 2.142-1.354 2.142-2.144v-2.068h4.737v2.068c0 2.67-2.143 5.904-6.88 5.904C42.956 28 41 24.766 41 22.134V6.904C41 4.234 42.955 1 48.142 1zM19-6c9.389 0 17 7.611 17 17s-7.611 17-17 17S2 20.389 2 11 9.611-6 19-6zm42.256 7.338l3.345 19.48h.075l3.42-19.48h5l-6.052 26.324h-5L56.22 1.338h5.037zm20.706 0l5.413 26.324h-4.699l-.94-6.13h-4.548l-.902 6.13h-4.435l5.413-26.324h4.698zm18.038 0v3.723h-4.849v22.6h-4.699v-22.6h-4.81V1.338H100zM19 4a7 7 0 100 14 7 7 0 000-14zm60.557 4.295h-.113l-1.466 9.439h3.007l-1.428-9.439z" fill="#fff" fill-rule="nonzero" mask="url(#logoB)"/></g></svg>

After: 1.1 KiB

@ -0,0 +1,110 @@
// Copyright (C) 2021 Intel Corporation
//
// SPDX-License-Identifier: MIT
/* Increased left padding on the sidebar of documentation */
.td-sidebar-nav__section-title .td-sidebar-nav__section {
padding-left: 0.3rem;
}
/* Main documentation page */
#docs section {
padding-top: 2rem;
padding-bottom: 7rem;
}
#docs .row div {
margin-top: 1rem;
}
/* Footer */
.footer-disclaimer {
font-size: 0.83rem;
line-height: 1.25;
margin-top: 0.5rem;
margin-bottom: 0.5rem;
}
.container-fluid footer {
min-height: inherit;
padding-bottom: 0.5rem !important;
padding-top: 2rem !important;
}
/* Icon color for temporary page */
#temporary-page i {
color: lightgrey;
}
/* About page */
.logo-2 {
opacity: 0.8;
}
.history #year h2 {
text-shadow: 0 0 3px rgb(27, 27, 27);
}
.avatar:hover img {
box-shadow: 0 0 15px gray;
}
.developer-info-list-item {
min-width: 15rem !important;
}
.location {
width: 70%;
}
.marker-location i {
color: lightgray;
}
/* World map block "the team" */
.team-container {
margin: auto;
max-width: 1200px;
}
.world-map-container {
width: 100%;
}
#world-map {
z-index: 1;
width: 100%;
height: 100%;
}
#world-map-marker {
z-index: 2;
position: absolute;
border-radius: 50%;
border: 2px white solid;
box-shadow: 2px 2px 1px gray;
max-height: 25px;
}
#world-map-marker:hover {
border: 4px white solid;
}
#world-map-marker:hover #tooltip div {
visibility: visible;
}
#tooltip {
background: white;
color: #000;
padding: 4px 8px;
font-size: 13px;
border-radius: 8px;
visibility: hidden;
}

@ -0,0 +1,17 @@
// Copyright (C) 2021 Intel Corporation
//
// SPDX-License-Identifier: MIT
/*
Add styles or override variables from the theme here.
*/
@import 'custom';
$enable-gradients: false;
$enable-rounded: true;
$enable-shadows: true;
$info: #f1f1f1;

@ -0,0 +1,196 @@
baseURL = "/"
title = "CVAT"
relativeURLs = true
enableRobotsTXT = true
# Hugo allows theme composition (and inheritance). The precedence is from left to right.
theme = ["docsy"]
# Will give values to .Lastmod etc.
enableGitInfo = true
# Language settings
contentDir = "content/en"
defaultContentLanguage = "en"
defaultContentLanguageInSubdir = false
# Useful when translating.
enableMissingTranslationPlaceholders = true
disableKinds = ["taxonomy", "taxonomyTerm"]
# Highlighting config
pygmentsCodeFences = true
pygmentsUseClasses = false
# Use the new Chroma Go highlighter in Hugo.
pygmentsUseClassic = false
#pygmentsOptions = "linenos=table"
# See https://help.farbox.com/pygments.html
pygmentsStyle = "tango"
# Configure how URLs look like per section.
[permalinks]
blog = "/:section/:year/:month/:day/:slug/"
## Configuration for BlackFriday markdown parser: https://github.com/russross/blackfriday
[blackfriday]
plainIDAnchors = true
hrefTargetBlank = true
angledQuotes = false
latexDashes = true
# Image processing configuration.
[imaging]
resampleFilter = "CatmullRom"
quality = 75
anchor = "smart"
[[menu.main]]
name = "Try it now"
weight = 50
url = "https://cvat.org"
[services]
[services.googleAnalytics]
# Comment out the next line to disable GA tracking. Also disables the feature described in [params.ui.feedback].
id = "UA-00000000-0"
# Language configuration
[languages]
[languages.en]
title = ""
description = ""
languageName ="English"
# Weight used for sorting.
weight = 1
[markup]
[markup.goldmark]
[markup.goldmark.renderer]
unsafe = true
[markup.highlight]
# See a complete list of available styles at https://xyproto.github.io/splash/docs/all.html
style = "tango"
# Uncomment if you want your chosen highlight style used for code blocks without a specified language
# guessSyntax = "true"
# Everything below this are Site Params
# Comment out if you don't want the "print entire section" link enabled.
[outputs]
section = ["HTML", "print"]
[params]
intel_terms_of_use = "https://www.intel.com/content/www/us/en/legal/terms-of-use.html"
intel_privacy_notice = "https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html"
cvat_terms_of_use = "https://cvat.org/api/v1/restrictions/terms-of-use"
# First one is picked as the Twitter card image if not set on page.
# images = ["images/project-illustration.png"]
# Menu title if your navbar has a versions selector to access old versions of your site.
# This menu appears only if you have at least one [params.versions] set.
version_menu = "Releases"
# Flag used in the "version-banner" partial to decide whether to display a
# banner on every page indicating that this is an archived version of the docs.
# Set this flag to "true" if you want to display the banner.
archived_version = false
# The version number for the version of the docs represented in this doc set.
# Used in the "version-banner" partial to display a version number for the
# current doc set.
version = "0.0"
# A link to latest version of the docs. Used in the "version-banner" partial to
# point people to the main doc site.
url_latest_version = "https://example.com"
# Repository configuration (URLs for in-page links to opening issues and suggesting changes)
github_repo = "https://github.com/openvinotoolkit/cvat"
# An optional link to a related project repo. For example, the sibling repository where your product code lives.
github_project_repo = "https://github.com/openvinotoolkit/cvat"
# Specify a value here if your content directory is not in your repo's root directory
# github_subdir = ""
# Uncomment this if you have a newer GitHub repo with "main" as the default branch,
# or specify a new value if you want to reference another branch in your GitHub links
github_branch = "develop"
# Google Custom Search Engine ID. Remove or comment out to disable search.
# gcs_engine_id = "011737558837375720776:fsdu1nryfng"
# Enable Algolia DocSearch
algolia_docsearch = false
# Enable Lunr.js offline search
offlineSearch = true
# Enable syntax highlighting and copy buttons on code blocks with Prism
prism_syntax_highlighting = false
# User interface configuration
[params.ui]
# Enable to show the side bar menu in its compact state.
sidebar_menu_compact = true
ul_show = 2
# Set to true to disable breadcrumb navigation.
breadcrumb_disable = false
# Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled)
sidebar_search_disable = true
# Set to false if you don't want to display a logo (/assets/icons/logo.svg) in the top nav bar
navbar_logo = true
# Set to true to disable the About link in the site footer
footer_about_disable = false
# Adds a H2 section titled "Feedback" to the bottom of each doc. The responses are sent to Google Analytics as events.
# This feature depends on [services.googleAnalytics] and will be disabled if "services.googleAnalytics.id" is not set.
# If you want this feature, but occasionally need to remove the "Feedback" section from a single page,
# add "hide_feedback: true" to the page's front matter.
[params.ui.feedback]
enable = false
# The responses that the user sees after clicking "yes" (the page was helpful) or "no" (the page was not helpful).
yes = 'Glad to hear it! Please <a href="https://github.com/USERNAME/REPOSITORY/issues/new">tell us how we can improve</a>.'
no = 'Sorry to hear that. Please <a href="https://github.com/USERNAME/REPOSITORY/issues/new">tell us how we can improve</a>.'
# Adds a reading time to the top of each doc.
# If you want this feature, but occasionally need to remove the Reading time from a single page,
# add "hide_readingtime: true" to the page's front matter
[params.ui.readingtime]
enable = false
[params.links]
# End user relevant links. These will show up on left side of footer and in the community page if you have one.
[[params.links.user]]
name ="Gitter public chat"
url = "https://gitter.im/opencv-cvat/public"
icon = "fab fa-gitter"
desc = "Join our Gitter channel for community support."
[[params.links.user]]
name = "Stack Overflow"
url = "https://stackoverflow.com/search?q=%23cvat"
icon = "fab fa-stack-overflow"
desc = "Practical questions and curated answers"
[[params.links.user]]
name = "YouTube"
url = "https://www.youtube.com/user/nmanovic"
icon = "fab fa-youtube"
desc = "Practical questions and curated answers"
# Developer relevant links. These will show up on right side of footer and in the community page if you have one.
[[params.links.developer]]
name = "GitHub"
url = "https://github.com/openvinotoolkit/cvat"
icon = "fab fa-github"
desc = "Development takes place here!"
[[params.links.developer]]
name = "Forum on Intel Developer Zone"
url = "https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit"
icon = "fas fa-envelope"
desc = "Development takes place here!"
[[params.links.developer]]
name ="Gitter developers chat"
url = "https://gitter.im/opencv-cvat/dev"
icon = "fab fa-gitter"
desc = "Join our Gitter channel for community support."

@ -0,0 +1,22 @@
+++
title = "CVAT"
linkTitle = "CVAT"
+++
{{< blocks/section height="full" color="docs" >}}
<section id="temporary-page" class="mx-auto text-center py-5">
<div class="py-4">
<i class="fas fa-tools fa-7x"></i>
</div>
<div class="py-4">
<h1 class="text-center">This page is in development.</h1>
</div>
<div class="py-4">
<h3 class="text-center">
Visit our <a href="https://github.com/openvinotoolkit/cvat">GitHub</a> repository.
</h3>
</div>
</section>
{{< /blocks/section >}}

@ -0,0 +1,180 @@
---
title: "About CVAT"
linkTitle: "About"
menu:
main:
weight: 50
---
{{< blocks/cover image_anchor="center" height="min" >}}
<div>
<img class="mb-5 logo-2" src="/images/logo2.png">
<h3 class="mb-4">About Us</h3>
<p>CVAT was designed to provide users with a set of convenient instruments for annotating digital images and videos. <br/> CVAT supports supervised machine learning tasks pertaining to object detection, image classification, image segmentation and 3D data annotation. It allows users to annotate images with four types of shapes: boxes, polygons (both generally and for segmentation tasks), polylines (e.g., for annotation of markings on roads), <br/> and points (e.g., for annotation of face landmarks or pose estimation).</p>
</div>
{{< /blocks/cover >}}
{{< blocks/section height="auto" color="info" >}}
<div class="history col-12 mx-auto text-left">
<p>Data scientists need annotated data (and lots of it) to train the deep neural networks (DNNs) at the core of AI workflows. Obtaining annotated data or annotating data yourself is a challenging and time-consuming process. <br/> For example, it took about 3,100 total hours for members of Intel's own data annotation team to annotate more than 769,000 objects for just one of our algorithms. To help solve this challenge, Intel is conducting research to find better methods of data annotation and deliver tools that help developers do the same.</p>
<div class="col mt-5">
<div class="row">
<div id="year" class="col-lg-2 text-left col-lg-2 mt-2">
<h2 class="mt-2">2017
<img class="ml-2" alt="" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjMuMTc5bW0iIGhlaWdodD0iMi45NjQ5bW0iIHZlcnNpb249IjEuMSIgdmlld0JveD0iMCAwIDIzLjE3OSAyLjk2NDkiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8Zz4KICA8cGF0aCBkPSJtMjEuMDMgMCAyLjE0OCAxLjQ4MjQtMi4xNDggMS40ODI0eiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9Ii42NyIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuMSIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjMuNjEiIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjIiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSI2LjU1IiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii41IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iOS40OSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuNiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjEyLjQzIiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii43IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iMTUuMzciIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjgiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSIxOC4zMSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuOSIgb3BhY2l0eT0iLjk5NSIvPgogPC9nPgo8L3N2Zz4K" />
</h2>
<small class="text-left">First version of CVAT was created and open sourced.</small>
</div>
<div id="year" class="col-lg-2 text-left col-lg-2 mt-2">
<h2 class="mt-2">2018
<img class="ml-2" alt="" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjMuMTc5bW0iIGhlaWdodD0iMi45NjQ5bW0iIHZlcnNpb249IjEuMSIgdmlld0JveD0iMCAwIDIzLjE3OSAyLjk2NDkiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8Zz4KICA8cGF0aCBkPSJtMjEuMDMgMCAyLjE0OCAxLjQ4MjQtMi4xNDggMS40ODI0eiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9Ii42NyIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuMSIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjMuNjEiIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjIiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSI2LjU1IiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii41IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iOS40OSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuNiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjEyLjQzIiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii43IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iMTUuMzciIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjgiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSIxOC4zMSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuOSIgb3BhY2l0eT0iLjk5NSIvPgogPC9nPgo8L3N2Zz4K" />
</h2>
<small class="text-left">Publication on GitHub.</small>
</div>
<div id="year" class="col-lg-2 text-left col-lg-2 mt-2">
<h2 class="mt-2">2020
<img class="ml-2" alt="" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjMuMTc5bW0iIGhlaWdodD0iMi45NjQ5bW0iIHZlcnNpb249IjEuMSIgdmlld0JveD0iMCAwIDIzLjE3OSAyLjk2NDkiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8Zz4KICA8cGF0aCBkPSJtMjEuMDMgMCAyLjE0OCAxLjQ4MjQtMi4xNDggMS40ODI0eiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9Ii42NyIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuMSIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjMuNjEiIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjIiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSI2LjU1IiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii41IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iOS40OSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuNiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjEyLjQzIiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii43IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iMTUuMzciIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjgiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSIxOC4zMSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuOSIgb3BhY2l0eT0iLjk5NSIvPgogPC9nPgo8L3N2Zz4K" />
</h2>
<small class="text-left">Release version 1.0.0. Major update. <br/>Opening public demo-server cvat.org.</small>
</div>
<div id="year" class="col-lg-2 text-left col-lg-2 mt-2">
<h2 class="mt-2">2020
<img class="ml-2" alt="" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjMuMTc5bW0iIGhlaWdodD0iMi45NjQ5bW0iIHZlcnNpb249IjEuMSIgdmlld0JveD0iMCAwIDIzLjE3OSAyLjk2NDkiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8Zz4KICA8cGF0aCBkPSJtMjEuMDMgMCAyLjE0OCAxLjQ4MjQtMi4xNDggMS40ODI0eiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9Ii42NyIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuMSIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjMuNjEiIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjIiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSI2LjU1IiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii41IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iOS40OSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuNiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjEyLjQzIiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii43IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iMTUuMzciIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjgiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSIxOC4zMSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuOSIgb3BhY2l0eT0iLjk5NSIvPgogPC9nPgo8L3N2Zz4K" />
</h2>
<small class="text-left">Release version 1.1.0. <br/>Adding DL models.</small>
</div>
<div id="year" class="col-lg-2 text-left col-lg-2 mt-2">
<h2 class="mt-2">2021
<img class="ml-2" alt="" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjMuMTc5bW0iIGhlaWdodD0iMi45NjQ5bW0iIHZlcnNpb249IjEuMSIgdmlld0JveD0iMCAwIDIzLjE3OSAyLjk2NDkiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8Zz4KICA8cGF0aCBkPSJtMjEuMDMgMCAyLjE0OCAxLjQ4MjQtMi4xNDggMS40ODI0eiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9Ii42NyIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuMSIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjMuNjEiIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjIiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSI2LjU1IiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii41IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iOS40OSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuNiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjEyLjQzIiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii43IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iMTUuMzciIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjgiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSIxOC4zMSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuOSIgb3BhY2l0eT0iLjk5NSIvPgogPC9nPgo8L3N2Zz4K" />
</h2>
<small class="text-left">Release version 1.3.0. <br/> Adding CVAT-3D.</small>
</div>
<div id="year" class="col-lg-2 text-left col-lg-2 mt-2">
<h2 class="mt-2">2022
<img class="ml-2" alt="" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjMuMTc5bW0iIGhlaWdodD0iMi45NjQ5bW0iIHZlcnNpb249IjEuMSIgdmlld0JveD0iMCAwIDIzLjE3OSAyLjk2NDkiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8Zz4KICA8cGF0aCBkPSJtMjEuMDMgMCAyLjE0OCAxLjQ4MjQtMi4xNDggMS40ODI0eiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9Ii42NyIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuMSIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjMuNjEiIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjIiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSI2LjU1IiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii41IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iOS40OSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuNiIgb3BhY2l0eT0iLjk5NSIvPgogIDxjaXJjbGUgY3g9IjEyLjQzIiBjeT0iMS40OCIgcj0iLjY2ODE3IiBmaWxsLW9wYWNpdHk9Ii43IiBvcGFjaXR5PSIuOTk1Ii8+CiAgPGNpcmNsZSBjeD0iMTUuMzciIGN5PSIxLjQ4IiByPSIuNjY4MTciIGZpbGwtb3BhY2l0eT0iLjgiIG9wYWNpdHk9Ii45OTUiLz4KICA8Y2lyY2xlIGN4PSIxOC4zMSIgY3k9IjEuNDgiIHI9Ii42NjgxNyIgZmlsbC1vcGFjaXR5PSIuOSIgb3BhY2l0eT0iLjk5NSIvPgogPC9nPgo8L3N2Zz4K" />
</h2>
<small class="text-left">Further development...</small>
</div>
</div>
</div>
</div>
{{< /blocks/section >}}
{{< blocks/section height="auto" color="info" >}}
<h3 class="col-12 text-center">The Team</h3>
<br/>
<div class="team-container">
<div class="world-map-container">
<img id="world-map" src="/images/world-map.png">
<img tabindex="0" id="world-map-marker" style="top: 31%; left: 62.5%" alt="Boris Sekachev" src="https://www.intel.com/content/dam/www/public/us/en/ai/bios/Boris.jpg.rendition.intel.web.336.252.jpg">
<img tabindex="0" id="world-map-marker" style="top: 27%; left: 61.5%" alt="Nikita Manovich" src="https://www.intel.com/content/dam/www/public/us/en/ai/bios/nikita-manovich-DSC_3075.jpg.rendition.intel.web.336.252.jpg">
<img tabindex="0" id="world-map-marker" style="top: 35%; left: 59.5%" alt="Andrey Zhavoronkov" src="https://www.intel.com/content/dam/www/public/us/en/ai/bios/Andrey-Zhavoronkov.jpg.rendition.intel.web.336.252.jpg">
<img tabindex="0" id="world-map-marker" style="top: 31%; left: 60.5%" alt="Maxim Zhiltsov" src="https://avatars.githubusercontent.com/u/13832349?v=4">
<img tabindex="0" id="world-map-marker" style="top: 27%; left: 59.5%" alt="Andrey Chernov" src="https://avatars.githubusercontent.com/u/45849884?v=4">
<img tabindex="0" id="world-map-marker" style="top: 35%; left: 61.5%" alt="Timur Osmanov" src="https://avatars.githubusercontent.com/u/54434686?v=4">
<div class="marker-info">
</div>
</div>
{{< /blocks/section >}}
{{< blocks/section height="min" color="info" >}}
<div class="col-12 mx-auto"><h3 class="text-center">Leadership:</h3></div>
<div class="col">
<div class="row developers-info text-center px-auto">
<div class="col-2 col-xl-2 offset-md-6 mt-5 mx-auto avatar developer-info-list-item">
<div class="avatar">
<a href="https://github.com/nmanovic"><img class="image" style="border-radius: 50%" src="https://www.intel.com/content/dam/www/public/us/en/ai/bios/nikita-manovich-DSC_3075.jpg.rendition.intel.web.336.252.jpg" height=80px></a>
</div>
<div class="developer-info-text-container">
<h4 class="mt-4">Nikita Manovich</h4>
<small class="text-left">Deep Learning Manager</small>
</div>
</div>
<div class="col-2 col-xl-2 offset-md-6 col-xl-2 mt-5 mx-auto avatar developer-info-list-item">
<div class="avatar">
<a href="https://github.com/azhavoro"><img class="image" style="border-radius: 50%" src="https://www.intel.com/content/dam/www/public/us/en/ai/bios/Andrey-Zhavoronkov.jpg.rendition.intel.web.336.252.jpg" height=80px></a>
</div>
<div class="developer-info-text-container">
<h4 class="mt-4">Andrey Zhavoronkov</h4>
<small class="text-left">Deep Learning Software Engineer</small>
</div>
</div>
<div class="col-2 col-xl-2 offset-md-6 col-xl-2 mt-5 mx-auto avatar developer-info-list-item">
<div class="avatar">
<a href="https://github.com/bsekachev"><img class="image" style="border-radius: 50%" src="https://www.intel.com/content/dam/www/public/us/en/ai/bios/Boris.jpg.rendition.intel.web.336.252.jpg" height=80px></a>
</div>
<div class="developer-info-text-container">
<h4 class="mt-4">Boris Sekachev</h4>
<small class="text-left">Deep Learning Software Engineer</small>
</div>
</div>
<div class="col-2 col-xl-2 offset-md-6 mt-5 mx-auto avatar developer-info-list-item">
<div class="avatar">
<a href="https://github.com/zhiltsov-max"><img class="image" style="border-radius: 50%" src="https://avatars.githubusercontent.com/u/13832349?v=4" height=80px></a></div>
<div class="developer-info-text-container">
<h4 class="mt-4">Maxim Zhiltsov</h4>
<small class="text-left">Deep Learning Software Engineer</small>
</div>
</div>
<div class="col-2 col-xl-2 offset-md-6 mt-5 mx-auto avatar developer-info-list-item">
<div class="avatar">
<a href="https://github.com/aschernov"><img class="image" style="border-radius: 50%" src="https://avatars.githubusercontent.com/u/45849884?v=4" height=80px></a></div>
<div class="developer-info-text-container">
<h4 class="mt-4">Andrey Chernov</h4>
<small class="text-left">Program/Project Manager</small>
</div>
</div>
<div class="col-2 col-xl-2 offset-md-6 mt-5 mx-auto avatar developer-info-list-item">
<div class="avatar">
<a href="https://github.com/TOsmanov"><img class="image" style="border-radius: 50%" src="https://avatars.githubusercontent.com/u/54434686?v=4" height=80px></a></div>
<div class="developer-info-text-container">
<h4 class="mt-4">Timur Osmanov</h4>
<small class="text-left">Data Analyst</small>
</div>
</div>
</div>
{{< /blocks/section >}}
{{< blocks/section height="auto" color="docs" >}}
<div class="col-12 mx-auto mb-3"><h3 class="text-center">Contact Us:</h3></div>
<div class="location mx-auto text-center mb-3">
<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d9081756.161048459!2d24.815223265281535!3d56.23961364271714!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x4151d4f600916a31%3A0x2603763a741e437!2sIntel!5e0!3m2!1sen!2sru!4v1617960069860!5m2!1sen!2sru" width="100%" height="250" style="border:0;" allowfullscreen="" loading="lazy"></iframe>
</div>
<div class="contact col-12 mx-auto text-left">
<span class="text-center mb-3 marker-location">
<p class="ml-3" ><i class="fas fa-map-marker-alt fa-2x mr-3"></i>Russia, Nizhny Novgorod, Turgeneva street 30 (campus TGV)</p>
</span>
<p class="px-5">
Feedback from users helps Intel determine the future direction of CVAT's development. We hope to improve the tool's user experience, feature set, stability, automation features and ability to be integrated with other services, and we encourage members of the community to take an active part in CVAT's development.
</p>
<div class="row">
<ul class="col-lg-6 text-left">
<li>
You can ask questions anytime in <a href="https://gitter.im/opencv-cvat/public">public Gitter chat</a>.
</li>
<li>
You can find answers to your questions on <a href="https://stackoverflow.com/search?q=%23cvat">Stack Overflow</a>.
</li>
</ul>
<ul class="col-lg-6 text-left">
<li>
You can ask questions anytime in <a href="https://gitter.im/opencv-cvat/dev">Gitter chat for developers</a>.
</li>
<li>
Visit our <a href="https://github.com/openvinotoolkit/cvat">GitHub</a> repository.
</li>
</ul>
</div>
</div>
{{< /blocks/section >}}

Binary file not shown.

After: 620 KiB

@ -0,0 +1,59 @@
---
title: 'CVAT Documentation'
linkTitle: 'Documentation'
no_list: true
menu:
main:
weight: 20
---
CVAT is a free, online, interactive video and image annotation tool for computer vision.
It is being developed and used by Intel to annotate millions of objects with different properties.
Many UI and UX decisions are based on feedback from a professional data annotation team.
Try it online [cvat.org](https://cvat.org).
<section id="docs">
{{< blocks/section color="docs" >}}
{{% blocks/feature icon="fa-server" title="[Installation Guide](/docs/for-users/installation/)" %}}
CVAT installation guide for different operating systems.
{{% /blocks/feature %}}
{{% blocks/feature icon="fa-book" title="[User's Guide](/docs/for-users/user-guide/)" %}}
This multipage document contains information on how to work with the CVAT user interface.
{{% /blocks/feature %}}
{{% blocks/feature icon="fa-question" title="[FAQ](/docs/for-users/faq/)" %}}
Answers to frequently asked questions.
{{% /blocks/feature %}}
<!--lint disable maximum-line-length-->
{{% blocks/feature icon="fa-magic" title="[Installation Auto Annotation](/docs/for-users/installation_automatic_annotation/)" %}}
This page provides information about the installation of components needed for semi-automatic and automatic annotation.
{{% /blocks/feature %}}
{{% blocks/feature icon="fa-terminal" title="[For Developers](/docs/for-developers/)" %}}
This section contains documents for system administrators, AI researchers and any other advanced users.
{{% /blocks/feature %}}
{{% blocks/feature icon="fab fa-github" title="[GitHub Repository](https://github.com/openvinotoolkit/cvat)" %}}
Computer Vision Annotation Tool GitHub repository.
{{% /blocks/feature %}}
{{< /blocks/section >}}
</section>

@ -0,0 +1,41 @@
---
title: 'AWS-Deployment Guide'
linkTitle: 'AWS-Deployment Guide'
weight: 4
---
There are two ways of deploying CVAT.
1. **On an Nvidia GPU machine:** The TensorFlow annotation feature depends on GPU hardware.
One of the easiest ways to launch CVAT with the tf-annotation app is to use AWS P3 instances,
which provide NVIDIA GPUs.
Read more about [P3 instances here.](https://aws.amazon.com/about-aws/whats-new/2017/10/introducing-amazon-ec2-p3-instances/)
The overall setup is explained in the [main readme file](https://github.com/opencv/cvat/),
except for installing the NVIDIA drivers,
so we need to download and install them separately.
For Amazon P3 instances, download the NVIDIA drivers from the NVIDIA website.
For more details, see [Installing the NVIDIA Driver on Linux Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html).
2. **On Any other AWS Machine:** We can follow the same instruction guide mentioned in the
[installation instructions](/docs/for-users/installation/).
The additional step is to add a [security group and rule to allow incoming connections](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).
For either of the above, don't forget to add the exposed AWS public IP address or hostname to `docker-compose.override.yml`:
```
version: "2.3"
services:
cvat_proxy:
environment:
CVAT_HOST: your-instance.amazonaws.com
```
In case of problems with using the hostname, you can also use the public IPv4 instead.
For AWS or any cloud-based machines where the instances need to be terminated or stopped,
the public IPv4 and hostname change with every stop and reboot.
To address this efficiently, avoid using spot instances that cannot be stopped,
since copying the EBS to an AMI and restarting it causes problems.
On the other hand, when a regular instance is stopped and restarted,
the new hostname/IPv4 can be used in the `CVAT_HOST` variable in `docker-compose.override.yml`,
and the build can happen instantly, with CVAT tasks becoming available through the new IPv4.
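For instance, a minimal override using a public IPv4 address could be created like this (the address is a placeholder from the documentation range; the file layout follows the snippet above):

```bash
# Run from the CVAT repository root; creates/overwrites docker-compose.override.yml
cat > docker-compose.override.yml <<'EOF'
version: "2.3"
services:
  cvat_proxy:
    environment:
      CVAT_HOST: 203.0.113.10   # replace with the instance's current public IPv4
EOF
```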

@ -0,0 +1,11 @@
<!--lint disable maximum-heading-length-->
---
title: 'For Developers'
linkTitle: 'For Developers'
weight: 3
description: 'This section contains documents for system administrators, AI researchers and any other advanced users.'
hide_feedback: true
---

@ -1,6 +1,17 @@
## Analytics for Computer Vision Annotation Tool (CVAT)
<!--lint disable maximum-heading-length-->
![](/cvat/apps/documentation/static/documentation/images/image097.jpg)
---
title: 'Analytics for Computer Vision Annotation Tool (CVAT)'
linkTitle: 'Analytics'
weight: 2
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/components/analytics)
---
<!--lint disable heading-style-->
![](/images/image097.jpg)
It is possible to proxy annotation logs from the client to ELK. To do that, run the command below:

@ -1,3 +1,11 @@
---
title: 'Backup guide'
linkTitle: 'Backup guide'
weight: 11
---
<!--lint disable heading-style-->
## About CVAT data volumes
Docker volumes are used to store all CVAT data:
@ -8,13 +16,13 @@ Docker volumes are used to store all CVAT data:
- `cvat_data`: used to store uploaded and prepared media data.
Mounted into `cvat` container by `/home/django/data` path.
- `cvat_keys`: used to store user ssh keys needed for [synchronization with a remote Git repository](user_guide.md#task-synchronization-with-a-repository).
- `cvat_keys`: used to store user ssh keys needed for [synchronization with a remote Git repository](/docs/for-users/user-guide/task-synchronization/).
Mounted into `cvat` container by `/home/django/keys` path.
- `cvat_logs`: used to store logs of CVAT backend processes managed by supervisord.
Mounted into `cvat` container by `/home/django/logs` path.
- `cvat_events`: this is an optional volume that is used only when [Analytics component](../../../components/analytics)
- `cvat_events`: this is an optional volume that is used only when [Analytics component](/docs/for-developers/analytics/)
is enabled and is used to store Elasticsearch database files.
Mounted into `cvat_elasticsearch` container by `/usr/share/elasticsearch/data` path.
@ -48,7 +56,7 @@ cvat_data.tar.bz2 cvat_db.tar.bz2 cvat_events.tar.bz2
## How to restore CVAT from backup
Note: CVAT containers must exist (if no, please follow the [installation guide](installation.md#quick-installation-guide)).
Note: CVAT containers must exist (if not, please follow the [installation guide](/docs/for-users/installation/#quick-installation-guide)).
Stop all CVAT containers:
```console

@ -1,4 +1,9 @@
# Command line interface (CLI)
---
title: "Command line interface (CLI)"
linkTitle: "CLI"
weight: 3
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/utils/cli)
---
**Description**
A simple command line interface for working with CVAT tasks. At the moment it

@ -1,4 +1,10 @@
# Data preparation on the fly
---
title: 'Data preparation on the fly'
linkTitle: 'Data preparation on the fly'
weight: 9
---
<!--lint disable heading-style-->
## Description
@ -23,4 +29,4 @@ during task creation, which may take some time.
#### Uploading a manifest with data
When creating a task, you can upload a `manifest.jsonl` file along with the video or dataset with images.
You can see how to prepare it [here](/utils/dataset_manifest/README.md).
You can see how to prepare it [here](/docs/for-developers/dataset_manifest/).

@ -1,4 +1,15 @@
## Simple command line to prepare dataset manifest file
<!--lint disable maximum-heading-length-->
---
title: 'Simple command line to prepare dataset manifest file'
linkTitle: 'Dataset manifest'
weight: 10
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/utils/dataset_manifest)
---
<!--lint disable heading-style-->
### Steps before use

@ -0,0 +1,395 @@
---
title: 'Mounting cloud storage'
linkTitle: 'Mounting cloud storage'
weight: 10
---
<!--lint disable heading-style-->
## AWS S3 bucket as filesystem
### <a name="aws_s3_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="aws_s3_mount">Mount</a>
1. Install s3fs:
```bash
sudo apt install s3fs
```
1. Enter your credentials in a file `${HOME}/.passwd-s3fs` and set owner-only permissions:
```bash
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
```
1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Run s3fs, replace `bucket_name`, `mount_point`:
```bash
s3fs <bucket_name> <mount_point> -o allow_other
```
For more details see [here](https://github.com/s3fs-fuse/s3fs-fuse).
#### <a name="aws_s3_automatically_mount">Automatically mount</a>
Follow the first 3 mounting steps above.
##### <a name="aws_s3_using_fstab">Using fstab</a>
1. Create a bash script named aws_s3_fuse (e.g. in /usr/bin, as root) with this content
(replace `user_name` with the user on whose behalf the disk will be mounted, and set `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):
```bash
#!/bin/bash
sudo -u <user_name> s3fs <bucket_name> <mount_point> -o passwd_file=/path/to/.passwd-s3fs -o allow_other
exit 0
```
1. Give it the execution permission:
```bash
sudo chmod +x /usr/bin/aws_s3_fuse
```
1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):
```bash
/absolute/path/to/aws_s3_fuse <mount_point> fuse allow_other,user,_netdev 0 0
```
##### <a name="aws_s3_using_systemd">Using systemd</a>
1. Create unit file `sudo nano /etc/systemd/system/s3fs.service`
(replace `user_name`, `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):
```bash
[Unit]
Description=FUSE filesystem over AWS S3 bucket
After=network.target
[Service]
Environment="MOUNT_POINT=<mount_point>"
User=<user_name>
Group=<user_name>
ExecStart=s3fs <bucket_name> ${MOUNT_POINT} -o passwd_file=/path/to/.passwd-s3fs -o allow_other
ExecStop=fusermount -u ${MOUNT_POINT}
Restart=always
Type=forking
[Install]
WantedBy=multi-user.target
```
1. Reload the systemd configuration, enable the unit to start on boot, and mount the bucket:
```bash
sudo systemctl daemon-reload
sudo systemctl enable s3fs.service
sudo systemctl start s3fs.service
```
#### <a name="aws_s3_check">Check</a>
A file `/etc/mtab` contains records of currently mounted filesystems.
```bash
cat /etc/mtab | grep 's3fs'
```
#### <a name="aws_s3_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```
If you used [systemd](#aws_s3_using_systemd) to mount a bucket:
```bash
sudo systemctl stop s3fs.service
sudo systemctl disable s3fs.service
```
## Microsoft Azure container as filesystem
### <a name="azure_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="azure_mount">Mount</a>
1. Set up the Microsoft package repository (more details [here](https://docs.microsoft.com/en-us/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software#configuring-the-repositories)).
```bash
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
```
1. Install `blobfuse` and `fuse`:
```bash
sudo apt-get install blobfuse fuse
```
For more details see [here](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation)
1. Create environment variables (replace `account_name`, `account_key`, `mount_point`):
```bash
export AZURE_STORAGE_ACCOUNT=<account_name>
export AZURE_STORAGE_ACCESS_KEY=<account_key>
MOUNT_POINT=<mount_point>
```
1. Create a folder for cache:
```bash
sudo mkdir -p /mnt/blobfusetmp
```
1. Make sure the folder is owned by the user who mounts the container:
```bash
sudo chown <user> /mnt/blobfusetmp
```
1. Create the mount point, if it doesn't exist:
```bash
mkdir -p ${MOUNT_POINT}
```
1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the container (replace `your_container`):
```bash
blobfuse ${MOUNT_POINT} --container-name=<your_container> --tmp-path=/mnt/blobfusetmp -o allow_other
```
#### <a name="azure_automatically_mount">Automatically mount</a>
Follow the first 7 mounting steps above.
##### <a name="azure_using_fstab">Using fstab</a>
1. Create a configuration file `connection.cfg` with the content below; change `accountName`,
keep either `accountKey` or `sasToken` (delete the other line), and replace the placeholders with your values:
```bash
accountName <account-name-here>
# Please provide either an account key or a SAS token, and delete the other line.
accountKey <account-key-here-delete-next-line>
#change authType to specify only 1
sasToken <shared-access-token-here-delete-previous-line>
authType <MSI/SAS/SPN/Key/empty>
containerName <insert-container-name-here>
```
1. Create a bash script named `azure_fuse` (e.g. in /usr/bin, as root) with the content below
(replace `user_name` with the user on whose behalf the disk will be mounted, and set `mount_point`, `/path/to/blobfusetmp`, `/path/to/connection.cfg`):
```bash
#!/bin/bash
sudo -u <user_name> blobfuse <mount_point> --tmp-path=/path/to/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
exit 0
```
1. Give it the execution permission:
```bash
sudo chmod +x /usr/bin/azure_fuse
```
1. Edit `/etc/fstab` with the blobfuse script. Add the following line (replace the paths):
```bash
/absolute/path/to/azure_fuse </path/to/desired/mountpoint> fuse allow_other,user,_netdev
```
##### <a name="azure_using_systemd">Using systemd</a>
1. Create a unit file: `sudo nano /etc/systemd/system/blobfuse.service`
(replace `user_name`, `mount_point`, `container_name`, `/path/to/connection.cfg`):
```bash
[Unit]
Description=FUSE filesystem over Azure container
After=network.target
[Service]
Environment="MOUNT_POINT=<mount_point>"
User=<user_name>
Group=<user_name>
ExecStart=blobfuse ${MOUNT_POINT} --container-name=<container_name> --tmp-path=/mnt/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
ExecStop=fusermount -u ${MOUNT_POINT}
Restart=always
Type=forking
[Install]
WantedBy=multi-user.target
```
1. Reload the systemd configuration, enable the unit to start on boot, and mount the container:
```bash
sudo systemctl daemon-reload
sudo systemctl enable blobfuse.service
sudo systemctl start blobfuse.service
```
For more details, [see here](https://github.com/Azure/azure-storage-fuse/tree/master/systemd).
#### <a name="azure_check">Check</a>
A file `/etc/mtab` contains records of currently mounted filesystems.
```bash
cat /etc/mtab | grep 'blobfuse'
```
#### <a name="azure_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```
If you used [systemd](#azure_using_systemd) to mount a container:
```bash
sudo systemctl stop blobfuse.service
sudo systemctl disable blobfuse.service
```
If you have any mounting problems, check out the [answers](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ)
to common problems.
## Google Drive as filesystem
### <a name="google_drive_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="google_drive_mount">Mount</a>
To mount a Google Drive as a filesystem in user space (FUSE),
you can use [google-drive-ocamlfuse](https://github.com/astrada/google-drive-ocamlfuse).
To do this, follow the instructions below:
1. Install google-drive-ocamlfuse:
```bash
sudo add-apt-repository ppa:alessandro-strada/ppa
sudo apt-get update
sudo apt-get install google-drive-ocamlfuse
```
1. Run `google-drive-ocamlfuse` without parameters:
```bash
google-drive-ocamlfuse
```
This command will create the default application directory (`~/.gdfuse/default`),
containing the configuration file `config` (see the [wiki](https://github.com/astrada/google-drive-ocamlfuse/wiki)
page for more details about configuration),
and it will start a web browser to obtain authorization to access your Google Drive.
This lets you modify the default configuration before mounting the filesystem.
Then you can choose a local directory to mount your Google Drive (e.g. `~/GoogleDrive`).
1. Create the mount point, if it doesn't exist (replace `mount_point`):
```bash
mountpoint="<mount_point>"
mkdir -p $mountpoint
```
1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the filesystem:
```bash
google-drive-ocamlfuse -o allow_other $mountpoint
```
#### <a name="google_drive_automatically_mount">Automatically mount</a>
Follow the first 4 mounting steps above.
##### <a name="google_drive_using_fstab">Using fstab</a>
1. Create a bash script named gdfuse (e.g. in /usr/bin, as root) with this content
(replace `user_name` with the user on whose behalf the disk will be mounted, and set `label`, `mount_point`):
```bash
#!/bin/bash
sudo -u <user_name> google-drive-ocamlfuse -o allow_other -label <label> <mount_point>
exit 0
```
1. Give it the execution permission:
```bash
sudo chmod +x /usr/bin/gdfuse
```
1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):
```bash
/absolute/path/to/gdfuse <mount_point> fuse allow_other,user,_netdev 0 0
```
For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)
##### <a name="google_drive_using_systemd">Using systemd</a>
1. Create a unit file: `sudo nano /etc/systemd/system/google-drive-ocamlfuse.service`
(replace `user_name`, `label` (default `label=default`), `mount_point`):
```bash
[Unit]
Description=FUSE filesystem over Google Drive
After=network.target
[Service]
Environment="MOUNT_POINT=<mount_point>"
User=<user_name>
Group=<user_name>
ExecStart=google-drive-ocamlfuse -label <label> ${MOUNT_POINT}
ExecStop=fusermount -u ${MOUNT_POINT}
Restart=always
Type=forking
[Install]
WantedBy=multi-user.target
```
1. Reload the systemd configuration, enable the unit to start on boot, and mount the drive:
```bash
sudo systemctl daemon-reload
sudo systemctl enable google-drive-ocamlfuse.service
sudo systemctl start google-drive-ocamlfuse.service
```
For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)
#### <a name="google_drive_check">Check</a>
A file `/etc/mtab` contains records of currently mounted filesystems.
```bash
cat /etc/mtab | grep 'google-drive-ocamlfuse'
```
#### <a name="google_drive_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```
If you used [systemd](#google_drive_using_systemd) to mount a drive:
```bash
sudo systemctl stop google-drive-ocamlfuse.service
sudo systemctl disable google-drive-ocamlfuse.service
```

@ -1,10 +1,24 @@
# XML annotation format
---
title: 'XML annotation format'
linkTitle: 'XML annotation format'
weight: 3
---
When you want to download annotations from Computer Vision Annotation Tool (CVAT) you can choose one of several data formats. The document describes XML annotation format. Each format has X.Y version (e.g. 1.0). In general the major version (X) is incremented when the data format has incompatible changes and the minor version (Y) is incremented when the data format is slightly modified (e.g. it has one or several extra fields inside meta information). The document will describe all changes for all versions of XML annotation format.
<!--lint disable heading-style-->
When you want to download annotations from Computer Vision Annotation Tool (CVAT)
you can choose one of several data formats. The document describes XML annotation format.
Each format has X.Y version (e.g. 1.0). In general the major version (X) is incremented when the data format has
incompatible changes and the minor version (Y) is incremented when the data format is slightly modified
(e.g. it has one or several extra fields inside meta information).
The document will describe all changes for all versions of XML annotation format.
## Version 1.1
There are two different formats for annotation and interpolation modes at the moment. Both formats have a common part which is described below. From previous version `flipped` tag was added. Also `original_size` tag was added for interpolation mode to specify frame size. In annotation mode each image tag has `width` and `height` attributes for the same purpose.
There are two different formats for image and video tasks at the moment.
Both formats have a common part, which is described below. Compared to the previous version, the `flipped` tag was added.
Also, the `original_size` tag was added for interpolation mode to specify the frame size.
In annotation mode, each image tag has `width` and `height` attributes for the same purpose.
```xml
<?xml version="1.0" encoding="utf-8"?>
@ -62,8 +76,12 @@ ex. value 3</values>
### Annotation
Below you can find description of the data format for annotation mode. In this mode images are annotated. On each image it is possible to have many different objects. Each object can have multiple attributes. If an annotation task has been
created with `z_order` flag then each object will have `z_order` attribute which is used to draw objects properly when they are intersected (if `z_order` is bigger the object is closer to camera). In previous versions of the format only `box` shape was available. In later releases `polygon`, `polyline`, and `points` were added. Please see below for more details:
Below you can find a description of the data format for image tasks.
On each image it is possible to have many different objects. Each object can have multiple attributes.
If an annotation task is created with the `z_order` flag, then each object will have a `z_order` attribute, which is used
to draw objects properly when they intersect (the bigger the `z_order`, the closer the object is to the camera).
In previous versions of the format only the `box` shape was available.
In later releases `polygon`, `polyline`, and `points` were added. Please see below for more details:
```xml
<?xml version="1.0" encoding="utf-8"?>
@ -172,7 +190,11 @@ Example:
### Interpolation
Below you can find description of the data format for interpolation mode. In the mode frames are annotated. The annotation contains tracks. Each track corresponds to an object which can be presented on multiple frames. The same object cannot be presented on the same frame in multiple locations. Each location of the object can have multiple attributes even if an attribute is immutable for the object it will be cloned for each location (a known redundancy).
Below you can find a description of the data format for video tasks.
The annotation contains tracks. Each track corresponds to an object, which can be present on multiple frames.
The same object cannot be present on the same frame in multiple locations.
Each location of the object can have multiple attributes; even if an attribute is immutable for the object, it will be
cloned for each location (a known redundancy).
```xml
<?xml version="1.0" encoding="utf-8"?>
@ -265,7 +287,8 @@ Example:
## Version 1
There are two different formats for annotation and interpolation modes at the moment. Both formats has a common part which is described below:
There are two different formats for image and video tasks at the moment.
Both formats have a common part, which is described below:
```xml
<?xml version="1.0" encoding="utf-8"?>
@ -310,7 +333,9 @@ There are two different formats for annotation and interpolation modes at the mo
### Annotation
Below you can find description of the data format for annotation mode. In the mode images are annotated. On each image it is possible to have many different objects. Each object can have multiple attributes.
Below you can find a description of the data format for image tasks.
On each image it is possible to have many different objects. Each object can have multiple attributes.
```xml
<?xml version="1.0" encoding="utf-8"?>
@ -395,7 +420,11 @@ Example:
### Interpolation
Below you can find description of the data format for interpolation mode. In this mode frames are annotated. The annotation contains tracks. Each track corresponds to an object which can be presented on multiple frames. The same object cannot be presented on the same frame in multiple locations. Each location of the object can have multiple attributes even if an attribute is immutable for the object it will be cloned for each location (a known redundancy).
Below you can find a description of the data format for video tasks.
The annotation contains tracks. Each track corresponds to an object, which can be present on multiple frames.
The same object cannot be present on the same frame in multiple locations.
Each location of the object can have multiple attributes; even if an attribute is immutable for the object,
it will be cloned for each location (a known redundancy).
```xml
<?xml version="1.0" encoding="utf-8"?>

@ -0,0 +1,11 @@
<!--lint disable heading-style-->
---
title: 'For Users'
linkTitle: 'For Users'
weight: 2
description: 'This section contains documents for CVAT users'
hide_feedback: true
---

@ -1,23 +1,16 @@
# Frequently asked questions
- [How to update CVAT](#how-to-update-cvat)
- [Kibana app works, but no logs are displayed](#kibana-app-works-but-no-logs-are-displayed)
- [How to change default CVAT hostname or port](#how-to-change-default-cvat-hostname-or-port)
- [How to configure connected share folder on Windows](#how-to-configure-connected-share-folder-on-windows)
- [How to make unassigned tasks not visible to all users](#how-to-make-unassigned-tasks-not-visible-to-all-users)
- [Where are uploaded images/videos stored](#where-are-uploaded-imagesvideos-stored)
- [Where are annotations stored](#where-are-annotations-stored)
- [How to mark job/task as completed](#how-to-mark-jobtask-as-completed)
- [How to install CVAT on Windows 10 Home](#how-to-install-cvat-on-windows-10-home)
- [I do not have the Analytics tab on the header section. How can I add analytics](#i-do-not-have-the-analytics-tab-on-the-header-section-how-can-i-add-analytics)
- [How to upload annotations to an entire task from UI when there are multiple jobs in the task](#how-to-upload-annotations-to-an-entire-task-from-ui-when-there-are-multiple-jobs-in-the-task)
- [How to specify multiple hostnames for CVAT_HOST](#how-to-specify-multiple-hostnames-for-cvat_host)
- [How to create a task with multiple jobs](#how-to-create-a-task-with-multiple-jobs)
- [How to transfer CVAT to another machine](#how-to-transfer-cvat-to-another-machine)
---
title: 'Frequently asked questions'
linkTitle: 'FAQ'
weight: 20
description: 'Answers to frequently asked questions'
---
<!--lint disable heading-style-->
## How to update CVAT
Before upgrading, please follow the [backup guide](backup_guide.md) and backup all CVAT volumes.
Before updating, please follow the [backup guide](/docs/for-developers/backup_guide/)
and back up all CVAT volumes.
To update CVAT, you should clone or download the new version of CVAT and rebuild the CVAT docker images as usual.
@ -96,7 +89,8 @@ volumes:
## How to make unassigned tasks not visible to all users
Set [reduce_task_visibility](../../settings/base.py#L424) variable to `True`.
Set [reduce_task_visibility](https://github.com/openvinotoolkit/cvat/blob/develop/cvat/settings/base.py#L424)
variable to `True`.
## Where are uploaded images/videos stored
@ -118,17 +112,18 @@ volumes:
## How to mark job/task as completed
The status is set by the user in the [Info window](user_guide.md#info) of the job annotation view.
The status is set by the user in the [Info window](/docs/for-users/user-guide/top-panel/#info)
of the job annotation view.
There are three types of status: annotation, validation or completed.
The status of the job changes the progress bar of the task.
## How to install CVAT on Windows 10 Home
Follow this [guide](installation.md#windows-10).
Follow this [guide](/docs/for-users/installation/#windows-10).
## I do not have the Analytics tab on the header section. How can I add analytics
You should build CVAT images with ['Analytics' component](../../../components/analytics).
You should build CVAT images with ['Analytics' component](https://github.com/openvinotoolkit/cvat/tree/develop/components/analytics).
## How to upload annotations to an entire task from UI when there are multiple jobs in the task
@ -147,8 +142,9 @@ services:
## How to create a task with multiple jobs
Set the segment size when you create a new task, this option is available in the
[Advanced configuration](user_guide.md#advanced-configuration) section.
[Advanced configuration](/docs/for-users/user-guide/creating_an_annotation_task/#advanced-configuration)
section.
## How to transfer CVAT to another machine
Follow the [backup/restore guide](backup_guide.md#how-to-backup-all-cvat-data).
Follow the [backup/restore guide](/docs/for-developers/backup_guide/#how-to-backup-all-cvat-data).

@ -0,0 +1,145 @@
---
title: 'Dataset and annotation formats'
linkTitle: 'Formats'
weight: 6
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats)
---
<!-- lint disable heading-style -->
## How to add a new annotation format support<a id="how-to-add"></a>
1. Add a Python script to `dataset_manager/formats`.
1. Add an import statement to [registry.py](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats/registry.py).
1. Implement some importers and exporters as the format requires.
Each format is supported by an importer and exporter.
It can be a function or a class decorated with
`importer` or `exporter` from [registry.py](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats/registry.py).
Examples:
```python
@importer(name="MyFormat", version="1.0", ext="ZIP")
def my_importer(file_object, task_data, **options):
    ...

@importer(name="MyFormat", version="2.0", ext="XML")
class my_importer:
    def __call__(self, file_object, task_data, **options):
        ...

@exporter(name="MyFormat", version="1.0", ext="ZIP")
def my_exporter(file_object, task_data, **options):
    ...
```
Each decorator defines format parameters such as:
- _name_
- _version_
- _file extension_. For the `importer` it can be a comma-separated list.
These parameters are combined to produce a visible name. It can be
set explicitly by the `display_name` argument.
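For instance, a minimal sketch of a registration that sets the visible name explicitly; the `display_name` keyword is the argument mentioned above, and the exact signature should be checked in `registry.py`:
```python
# Sketch: override the auto-generated visible name of the format.
# The decorator comes from registry.py; signature details may differ.
@exporter(name="MyFormat", version="1.0", ext="ZIP", display_name="My Format 1.0")
def my_exporter(file_object, task_data, **options):
    ...
```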
Importer arguments:
- _file_object_ - a file with annotations or dataset
- _task_data_ - an instance of `TaskData` class.
Exporter arguments:
- _file_object_ - a file for annotations or dataset
- _task_data_ - an instance of `TaskData` class.
- _options_ - format-specific options. `save_images` is the option to
distinguish if dataset or just annotations are requested.
[`TaskData`](https://github.com/openvinotoolkit/cvat/blob/develop/cvat/apps/dataset_manager/bindings.py) provides
many task properties and interfaces to add and read task annotations.
Public members:
- **TaskData. Attribute** - class, `namedtuple('Attribute', 'name, value')`
- **TaskData. LabeledShape** - class, `namedtuple('LabeledShape', 'type, frame, label, points, occluded, attributes, group, z_order')`
- **TrackedShape** - `namedtuple('TrackedShape', 'type, points, occluded, frame, attributes, outside, keyframe, z_order')`
- **Track** - class, `namedtuple('Track', 'label, group, shapes')`
- **Tag** - class, `namedtuple('Tag', 'frame, label, attributes, group')`
- **Frame** - class, `namedtuple('Frame', 'frame, name, width, height, labeled_shapes, tags')`
- **TaskData. shapes** - property, an iterator over `LabeledShape` objects
- **TaskData. tracks** - property, an iterator over `Track` objects
- **TaskData. tags** - property, an iterator over `Tag` objects
- **TaskData. meta** - property, a dictionary with task information
- **TaskData. group_by_frame()** - method, returns
an iterator over `Frame` objects, which groups annotation objects by frame.
Note that `TrackedShape`s will be represented as `LabeledShape`s.
- **TaskData. add_tag(tag)** - method,
tag should be an instance of the `Tag` class
- **TaskData. add_shape(shape)** - method,
shape should be an instance of the `Shape` class
- **TaskData. add_track(track)** - method,
track should be an instance of the `Track` class
Sample exporter code:
```python
...
# dump meta info if necessary
...
# iterate over all frames
for frame_annotation in task_data.group_by_frame():
# get frame info
image_name = frame_annotation.name
image_width = frame_annotation.width
image_height = frame_annotation.height
# iterate over all shapes on the frame
for shape in frame_annotation.labeled_shapes:
label = shape.label
xtl = shape.points[0]
ytl = shape.points[1]
xbr = shape.points[2]
ybr = shape.points[3]
# iterate over shape attributes
for attr in shape.attributes:
attr_name = attr.name
attr_value = attr.value
...
# dump annotation code
file_object.write(...)
...
```
Sample importer code:
```python
...
# read file_object
...
for parsed_shape in parsed_shapes:
shape = task_data.LabeledShape(
type="rectangle",
points=[0, 0, 100, 100],
occluded=False,
attributes=[],
label="car",
frame=99,
)
task_data.add_shape(shape)
```

@ -0,0 +1,22 @@
---
title: "Format specifications:"
linkTitle: "Format specifications"
weight: 1
no_list: true
---
- [CVAT](format-cvat)
- [Datumaro](format-datumaro)
- [LabelMe](format-labelme)
- [MOT](format-mot)
- [MOTS](format-mots)
- [COCO](format-coco)
- [PASCAL VOC and mask](format-voc)
- [YOLO](format-yolo)
- [TF detection API](format-tfrecord)
- [ImageNet](format-imagenet)
- [CamVid](format-camvid)
- [WIDER Face](format-widerface)
- [VGGFace2](format-vggface2)
- [Market-1501](format-market1501)
- [ICDAR13/15](format-icdar)

@ -0,0 +1,42 @@
---
linkTitle: "CamVid"
weight: 10
---
### [CamVid](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/)<a id="camvid" />
#### CamVid export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── labelmap.txt # optional, required for non-CamVid labels
├── <any_subset_name>/
| ├── image1.png
| └── image2.png
├── <any_subset_name>annot/
| ├── image1.png
| └── image2.png
└── <any_subset_name>.txt
# labelmap.txt
# color (RGB) label
0 0 0 Void
64 128 64 Animal
192 0 128 Archway
0 128 192 Bicyclist
0 128 64 Bridge
```
Mask is a `png` image with 1 or 3 channels where each pixel
has its own color, which corresponds to a label.
`(0, 0, 0)` is used for background by default.
- supported annotations: Rectangles, Polygons
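As a small illustration (not part of CVAT), the `labelmap.txt` layout shown above can be read with a few lines of Python; the file name is the one from the archive structure:
```python
# Minimal sketch: read the CamVid-style labelmap.txt shown above
# into a {label_name: (r, g, b)} dictionary.
labelmap = {}
with open("labelmap.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        r, g, b, label = line.split(maxsplit=3)
        labelmap[label] = (int(r), int(g), int(b))

print(labelmap.get("Void"))  # (0, 0, 0)
```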
#### CamVid import
Uploaded file: a zip archive of the structure above
- supported annotations: Polygons

@ -0,0 +1,72 @@
---
linkTitle: 'MS COCO'
weight: 5
---
### [MS COCO Object Detection](http://cocodataset.org/#format-data)<a id="coco" />
- [Format specification](http://cocodataset.org/#format-data)
#### COCO export
Downloaded file: a zip archive with following structure:
```bash
archive.zip/
├── images/
│ ├── <image_name1.ext>
│ ├── <image_name2.ext>
│ └── ...
└── annotations/
└── instances_default.json
```
- supported annotations: Polygons, Rectangles
- supported attributes:
- `is_crowd` (checkbox or integer with values 0 and 1) -
specifies that the instance (an object group) should have an
RLE-encoded mask in the `segmentation` field. All the grouped shapes
are merged into a single mask, the largest one defines all
the object properties
- `score` (number) - the annotation `score` field
- arbitrary attributes - will be stored in the `attributes` annotation section
_Note_: there is also [support for COCO keypoints over Datumaro](https://github.com/openvinotoolkit/cvat/issues/2910#issuecomment-726077582):
1. Install [Datumaro](https://github.com/openvinotoolkit/datumaro)
`pip install datumaro`
1. Export the task in the `Datumaro` format, unzip
1. Export the Datumaro project in `coco` / `coco_person_keypoints` formats
`datum export -f coco -p path/to/project [-- --save-images]`
This way, one can export CVAT points as single keypoints or
keypoint lists (without the `visibility` COCO flag).
#### COCO import
Uploaded file: a single unpacked `*.json` or a zip archive with the structure above (without images).
- supported annotations: Polygons, Rectangles (if the `segmentation` field is empty)
#### How to create a task from MS COCO dataset
1. Download the [MS COCO dataset](http://cocodataset.org/#download).
For example [2017 Val images](http://images.cocodataset.org/zips/val2017.zip)
and [2017 Train/Val annotations](http://images.cocodataset.org/annotations/annotations_trainval2017.zip).
1. Create a CVAT task with the following labels:
```bash
person bicycle car motorcycle airplane bus train truck boat "traffic light" "fire hydrant" "stop sign" "parking meter" bench bird cat dog horse sheep cow elephant bear zebra giraffe backpack umbrella handbag tie suitcase frisbee skis snowboard "sports ball" kite "baseball bat" "baseball glove" skateboard surfboard "tennis racket" bottle "wine glass" cup fork knife spoon bowl banana apple sandwich orange broccoli carrot "hot dog" pizza donut cake chair couch "potted plant" bed "dining table" toilet tv laptop mouse remote keyboard "cell phone" microwave oven toaster sink refrigerator book clock vase scissors "teddy bear" "hair drier" toothbrush
```
1. Select val2017.zip as data
(See [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/)
guide for details)
1. Unpack `annotations_trainval2017.zip`
1. Click the `Upload annotation` button,
choose `COCO 1.1` and select the `instances_val2017.json`
annotation file. It can take some time.

@ -0,0 +1,48 @@
---
linkTitle: "CVAT"
weight: 1
---
### CVAT<a id="cvat" />
This is the native CVAT annotation format. It supports all CVAT annotation
features, so it can be used to make data backups.
- supported annotations: Rectangles, Polygons, Polylines,
Points, Cuboids, Tags, Tracks
- attributes are supported
- [Format specification](/docs/for-developers/xml_format/)
#### CVAT for images export
Downloaded file: a ZIP file of the following structure:
```bash
taskname.zip/
├── images/
| ├── img1.png
| └── img2.jpg
└── annotations.xml
```
- tracks are split by frames
#### CVAT for videos export
Downloaded file: a ZIP file of the following structure:
```bash
taskname.zip/
├── images/
| ├── frame_000000.png
| └── frame_000001.png
└── annotations.xml
```
- shapes are exported as single-frame tracks
#### CVAT loader
Uploaded file: an XML file or a ZIP file of the structures above

@ -0,0 +1,15 @@
---
linkTitle: "Datumaro"
weight: 1.5
---
### Datumaro format <a id="datumaro" />
[Datumaro](https://github.com/openvinotoolkit/datumaro/) is a tool, which can
help with complex dataset and annotation transformations, format conversions,
dataset statistics, merging, custom formats, etc. It is used as a provider
of dataset support in CVAT, so basically, everything possible in CVAT
is possible in Datumaro too, while Datumaro also offers extra dataset operations.
- supported annotations: any 2D shapes, labels
- supported attributes: any

@ -0,0 +1,73 @@
---
linkTitle: "ICDAR13/15"
weight: 14
---
### [ICDAR13/15](https://rrc.cvc.uab.es/?ch=2)<a id="icdar" />
#### ICDAR13/15 export
Downloaded file: a zip archive of the following structure:
```bash
# word recognition task
taskname.zip/
└── word_recognition/
└── <any_subset_name>/
├── images
| ├── word1.png
| └── word2.png
└── gt.txt
# text localization task
taskname.zip/
└── text_localization/
└── <any_subset_name>/
├── images
| ├── img_1.png
| └── img_2.png
├── gt_img_1.txt
└── gt_img_2.txt
# text segmentation task
taskname.zip/
└── text_localization/
└── <any_subset_name>/
├── images
| ├── 1.png
| └── 2.png
├── 1_GT.bmp
├── 1_GT.txt
├── 2_GT.bmp
└── 2_GT.txt
```
**Word recognition task**:
- supported annotations: Label `icdar` with attribute `caption`
**Text localization task**:
- supported annotations: Rectangles and Polygons with label `icdar`
and attribute `text`
**Text segmentation task**:
- supported annotations: Rectangles and Polygons with label `icdar`
and attributes `index`, `text`, `color`, `center`
#### ICDAR13/15 import
Uploaded file: a zip archive of the structure above
**Word recognition task**:
- supported annotations: Label `icdar` with attribute `caption`
**Text localization task**:
- supported annotations: Rectangles and Polygons with label `icdar`
and attribute `text`
**Text segmentation task**:
- supported annotations: Rectangles and Polygons with label `icdar`
and attributes `index`, `text`, `color`, `center`

@ -0,0 +1,36 @@
---
linkTitle: "ImageNet"
weight: 9
---
### [ImageNet](http://www.image-net.org)<a id="imagenet" />
#### ImageNet export
Downloaded file: a zip archive of the following structure:
```bash
# if we save images:
taskname.zip/
├── label1/
| ├── label1_image1.jpg
| └── label1_image2.jpg
└── label2/
├── label2_image1.jpg
├── label2_image3.jpg
└── label2_image4.jpg
# if we keep only annotation:
taskname.zip/
├── <any_subset_name>.txt
└── synsets.txt
```
- supported annotations: Labels
#### ImageNet import
Uploaded file: a zip archive of the structure above
- supported annotations: Labels

@ -0,0 +1,34 @@
---
linkTitle: "LabelMe"
weight: 2
---
### [LabelMe](http://labelme.csail.mit.edu/Release3.0)<a id="labelme" />
#### LabelMe export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── img1.jpg
└── img1.xml
```
- supported annotations: Rectangles, Polygons (with attributes)
#### LabelMe import
Uploaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── Masks/
| ├── img1_mask1.png
| └── img1_mask2.png
├── img1.xml
├── img2.xml
└── img3.xml
```
- supported annotations: Rectangles, Polygons, Masks (as polygons)

@ -0,0 +1,40 @@
---
linkTitle: "Market-1501"
weight: 13
---
### [Market-1501](https://www.aitribune.com/dataset/2018051063)<a id="market1501" />
#### Market-1501 export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── bounding_box_<any_subset_name>/
│ └── image_name_1.jpg
└── query
├── image_name_2.jpg
└── image_name_3.jpg
# if we keep only annotation:
taskname.zip/
└── images_<any_subset_name>.txt
# images_<any_subset_name>.txt
query/image_name_1.jpg
bounding_box_<any_subset_name>/image_name_2.jpg
bounding_box_<any_subset_name>/image_name_3.jpg
# image_name = 0001_c1s1_000015_00.jpg
0001 - person id
c1 - camera id (there are 6 cameras in total)
s1 - sequence
000015 - frame number in the sequence
00 - means that this bounding box is the first one among several
```
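As a small illustration (not part of CVAT), the image name convention described above can be parsed like this:
```python
import re

# Sketch: parse the Market-1501 image name convention shown above,
# e.g. 0001_c1s1_000015_00.jpg.
name = "0001_c1s1_000015_00.jpg"
m = re.match(r"(?P<person>\d{4})_c(?P<camera>\d)s(?P<sequence>\d)_(?P<frame>\d{6})_(?P<bbox>\d{2})\.jpg", name)
if m:
    print(m.group("person"), m.group("camera"), m.group("sequence"),
          m.group("frame"), m.group("bbox"))
# -> 0001 1 1 000015 00
```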
- supported annotations: Label `market-1501` with attributes (`query`, `person_id`, `camera_id`)
#### Market-1501 import
Uploaded file: a zip archive of the structure above
- supported annotations: Label `market-1501` with attributes (`query`, `person_id`, `camera_id`)

@ -0,0 +1,47 @@
---
linkTitle: "MOT"
weight: 3
---
### [MOT sequence](https://arxiv.org/pdf/1906.04567.pdf)<a id="mot" />
#### MOT export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── img1/
| ├── image1.jpg
| └── image2.jpg
└── gt/
├── labels.txt
└── gt.txt
# labels.txt
cat
dog
person
...
# gt.txt
# frame_id, track_id, x, y, w, h, "not ignored", class_id, visibility, <skipped>
1,1,1363,569,103,241,1,1,0.86014
...
```
- supported annotations: Rectangle shapes and tracks
- supported attributes: `visibility` (number), `ignored` (checkbox)
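A minimal sketch (not part of CVAT) of reading one `gt.txt` line, using the column order shown in the comment above:
```python
# Sketch: parse a single gt.txt line; columns follow the comment above.
line = "1,1,1363,569,103,241,1,1,0.86014"
fields = line.split(",")
frame_id, track_id = int(fields[0]), int(fields[1])
x, y, w, h = map(float, fields[2:6])
not_ignored, class_id = int(fields[6]), int(fields[7])
visibility = float(fields[8])
print(frame_id, track_id, (x, y, w, h), class_id, visibility)
```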
#### MOT import
Uploaded file: a zip archive of the structure above or:
```bash
taskname.zip/
├── labels.txt # optional, mandatory for non-official labels
└── gt.txt
```
- supported annotations: Rectangle tracks

@ -0,0 +1,36 @@
---
linkTitle: "MOTS"
weight: 4
---
### [MOTS PNG](https://www.vision.rwth-aachen.de/page/mots)<a id="mots" />
#### MOTS PNG export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
└── <any_subset_name>/
| images/
| ├── image1.jpg
| └── image2.jpg
└── instances/
├── labels.txt
├── image1.png
└── image2.png
# labels.txt
cat
dog
person
...
```
- supported annotations: Rectangle and Polygon tracks
#### MOTS PNG import
Uploaded file: a zip archive of the structure above
- supported annotations: Polygon tracks

@ -0,0 +1,197 @@
---
linkTitle: "TFRecord"
weight: 8
---
### [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord)<a id="tfrecord" />
TFRecord is a very flexible format, but we try to follow the
format used in
[TF object detection](https://github.com/tensorflow/models/tree/master/research/object_detection)
with minimal modifications.
Used feature description:
```python
image_feature_description = {
'image/filename': tf.io.FixedLenFeature([], tf.string),
'image/source_id': tf.io.FixedLenFeature([], tf.string),
'image/height': tf.io.FixedLenFeature([], tf.int64),
'image/width': tf.io.FixedLenFeature([], tf.int64),
# Object boxes and classes.
'image/object/bbox/xmin': tf.io.VarLenFeature(tf.float32),
'image/object/bbox/xmax': tf.io.VarLenFeature(tf.float32),
'image/object/bbox/ymin': tf.io.VarLenFeature(tf.float32),
'image/object/bbox/ymax': tf.io.VarLenFeature(tf.float32),
'image/object/class/label': tf.io.VarLenFeature(tf.int64),
'image/object/class/text': tf.io.VarLenFeature(tf.string),
}
```
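As an illustration (not part of CVAT itself), a record written with this description can be read back with standard TensorFlow calls, reusing the `image_feature_description` dictionary above; the file name is assumed to match the export described below:
```python
import tensorflow as tf

# Sketch: read an exported record back; image_feature_description is the
# dictionary defined above, default.tfrecord is the exported file.
raw_dataset = tf.data.TFRecordDataset("default.tfrecord")

def parse_example(example_proto):
    return tf.io.parse_single_example(example_proto, image_feature_description)

for record in raw_dataset.map(parse_example).take(1):
    print(record["image/filename"].numpy(), record["image/width"].numpy())
```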
#### TFRecord export
Downloaded file: a zip archive with following structure:
```bash
taskname.zip/
├── default.tfrecord
└── label_map.pbtxt
# label_map.pbtxt
item {
id: 1
name: 'label_0'
}
item {
id: 2
name: 'label_1'
}
...
```
- supported annotations: Rectangles, Polygons (as masks, manually over [Datumaro](https://github.com/openvinotoolkit/datumaro/blob/develop/docs/user_manual.md))
How to export masks:
1. Export annotations in `Datumaro` format
1. Apply `polygons_to_masks` and `boxes_to_masks` transforms
```bash
datum transform -t polygons_to_masks -p path/to/proj -o ptm
datum transform -t boxes_to_masks -p ptm -o btm
```
1. Export in the `TF Detection API` format
```bash
datum export -f tf_detection_api -p btm [-- --save-images]
```
#### TFRecord import
Uploaded file: a zip archive of following structure:
```bash
taskname.zip/
└── <any name>.tfrecord
```
- supported annotations: Rectangles
#### How to create a task from TFRecord dataset (from VOC2007 for example)
1. Create `label_map.pbtxt` file with the following content:
```js
item {
id: 1
name: 'aeroplane'
}
item {
id: 2
name: 'bicycle'
}
item {
id: 3
name: 'bird'
}
item {
id: 4
name: 'boat'
}
item {
id: 5
name: 'bottle'
}
item {
id: 6
name: 'bus'
}
item {
id: 7
name: 'car'
}
item {
id: 8
name: 'cat'
}
item {
id: 9
name: 'chair'
}
item {
id: 10
name: 'cow'
}
item {
id: 11
name: 'diningtable'
}
item {
id: 12
name: 'dog'
}
item {
id: 13
name: 'horse'
}
item {
id: 14
name: 'motorbike'
}
item {
id: 15
name: 'person'
}
item {
id: 16
name: 'pottedplant'
}
item {
id: 17
name: 'sheep'
}
item {
id: 18
name: 'sofa'
}
item {
id: 19
name: 'train'
}
item {
id: 20
name: 'tvmonitor'
}
```
1. Use [create_pascal_tf_record.py](https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_pascal_tf_record.py)
to convert VOC2007 dataset to TFRecord format.
As example:
```bash
python create_pascal_tf_record.py --data_dir <path to VOCdevkit> --set train --year VOC2007 --output_path pascal.tfrecord --label_map_path label_map.pbtxt
```
1. Zip train images
```bash
cat <path to VOCdevkit>/VOC2007/ImageSets/Main/train.txt | while read p; do echo <path to VOCdevkit>/VOC2007/JPEGImages/${p}.jpg ; done | zip images.zip -j -@
```
1. Create a CVAT task with the following labels:
```bash
aeroplane bicycle bird boat bottle bus car cat chair cow diningtable dog horse motorbike person pottedplant sheep sofa train tvmonitor
```
Select images.zip as data.
See [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/)
guide for details.
1. Zip `pascal.tfrecord` and `label_map.pbtxt` files together
```bash
zip anno.zip -j <path to pascal.tfrecord> <path to label_map.pbtxt>
```
1. Click `Upload annotation` button, choose `TFRecord 1.0` and select the zip file
with labels from the previous step. It may take some time.

@ -0,0 +1,35 @@
---
linkTitle: "VGGFace2"
weight: 12
---
### [VGGFace2](https://github.com/ox-vgg/vgg_face2)<a id="vggface2" />
#### VGGFace2 export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── labels.txt # optional
├── <any_subset_name>/
| ├── label0/
| | └── image1.jpg
| └── label1/
| └── image2.jpg
└── bb_landmark/
├── loose_bb_<any_subset_name>.csv
└── loose_landmark_<any_subset_name>.csv
# labels.txt
# n000001 car
label0 <class0>
label1 <class1>
```
- supported annotations: Rectangles, Points (landmarks - groups of 5 points)
#### VGGFace2 import
Uploaded file: a zip archive of the structure above
- supported annotations: Rectangles, Points (landmarks - groups of 5 points)

@ -0,0 +1,171 @@
---
linkTitle: "Pascal VOC"
weight: 6
---
### [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/)<a id="voc" />
- [Format specification](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/devkit_doc.pdf)
- supported annotations:
- Rectangles (detection and layout tasks)
- Tags (action- and classification tasks)
- Polygons (segmentation task)
- supported attributes:
- `occluded` (both UI option and a separate attribute)
- `truncated` and `difficult` (should be defined for labels as `checkbox`-es)
- action attributes (import only, should be defined as `checkbox`-es)
- arbitrary attributes (in the `attributes` section of XML files)
#### Pascal VOC export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── JPEGImages/
│ ├── <image_name1>.jpg
│ ├── <image_name2>.jpg
│ └── <image_nameN>.jpg
├── Annotations/
│ ├── <image_name1>.xml
│ ├── <image_name2>.xml
│ └── <image_nameN>.xml
├── ImageSets/
│ └── Main/
│ └── default.txt
└── labelmap.txt
# labelmap.txt
# label : color_rgb : 'body' parts : actions
background:::
aeroplane:::
bicycle:::
bird:::
```
#### Pascal VOC import
Uploaded file: a zip archive of the structure declared above or the following:
```bash
taskname.zip/
├── <image_name1>.xml
├── <image_name2>.xml
└── <image_nameN>.xml
```
It must be possible for CVAT to match the frame name and the file name
from the annotation `.xml` file (the `filename` tag, e.g.
`<filename>2008_004457.jpg</filename>`).
There are 2 options:
1. full match between the frame name and the file name from the annotation `.xml`
(in cases when the task was created from images or an image archive).
1. match by frame number. The file name should be `<number>.jpg`
or `frame_000000.jpg`. It should be used when the task was created from a video.
#### Segmentation mask export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── labelmap.txt # optional, required for non-VOC labels
├── ImageSets/
│ └── Segmentation/
│ └── default.txt # list of image names without extension
├── SegmentationClass/ # merged class masks
│ ├── image1.png
│ └── image2.png
└── SegmentationObject/ # merged instance masks
├── image1.png
└── image2.png
# labelmap.txt
# label : color (RGB) : 'body' parts : actions
background:0,128,0::
aeroplane:10,10,128::
bicycle:10,128,0::
bird:0,108,128::
boat:108,0,100::
bottle:18,0,8::
bus:12,28,0::
```
Mask is a `png` image with 1 or 3 channels where each pixel
has its own color, which corresponds to a label.
Colors are generated following the Pascal VOC [algorithm](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:voclabelcolormap).
`(0, 0, 0)` is used for background by default.
- supported shapes: Rectangles, Polygons
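For reference, a short sketch (assuming the standard Pascal VOC colormap generation linked above) that reproduces the default colors:
```python
# Sketch of the Pascal VOC label colormap generation referenced above:
# color index i -> (r, g, b), e.g. 0 -> (0, 0, 0), 1 -> (128, 0, 0).
def voc_colormap(n=256):
    def bitget(value, idx):
        return (value >> idx) & 1

    colormap = []
    for i in range(n):
        r = g = b = 0
        c = i
        for j in range(8):
            r |= bitget(c, 0) << (7 - j)
            g |= bitget(c, 1) << (7 - j)
            b |= bitget(c, 2) << (7 - j)
            c >>= 3
        colormap.append((r, g, b))
    return colormap

print(voc_colormap()[:3])  # [(0, 0, 0), (128, 0, 0), (0, 128, 0)]
```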
#### Segmentation mask import
Uploaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── labelmap.txt # optional, required for non-VOC labels
├── ImageSets/
│ └── Segmentation/
│ └── <any_subset_name>.txt
├── SegmentationClass/
│ ├── image1.png
│ └── image2.png
└── SegmentationObject/
├── image1.png
└── image2.png
```
It is also possible to import grayscale (1-channel) PNG masks.
For grayscale masks provide a list of labels with the number of lines equal
to the maximum color index on images. The lines must be in the right order
so that line index is equal to the color index. Lines can have arbitrary,
but different, colors. If there are gaps in the used color
indices in the annotations, they must be filled with arbitrary dummy labels.
Example:
```
q:0,128,0:: # color index 0
aeroplane:10,10,128:: # color index 1
_dummy2:2,2,2:: # filler for color index 2
_dummy3:3,3,3:: # filler for color index 3
boat:108,0,100:: # color index 4
...
_dummy198:198,198,198:: # filler for color index 198
_dummy199:199,199,199:: # filler for color index 199
...
the last label:12,28,0:: # color index 200
```
- supported shapes: Polygons
#### How to create a task from Pascal VOC dataset
1. Download the Pascal VOC dataset (it can be downloaded from the
[PASCAL VOC website](http://host.robots.ox.ac.uk/pascal/VOC/))
1. Create a CVAT task with the following labels:
```bash
aeroplane bicycle bird boat bottle bus car cat chair cow diningtable
dog horse motorbike person pottedplant sheep sofa train tvmonitor
```
You can add `~checkbox=difficult:false ~checkbox=truncated:false`
attributes for each label if you want to use them.
Select the image files you are interested in (see the [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/) guide for details).
1. Zip the corresponding annotation files.
1. Click the `Upload annotation` button, choose `Pascal VOC ZIP 1.1`
and select the zip file with annotations from the previous step.
It may take some time.

@ -0,0 +1,36 @@
---
linkTitle: "Wider Face"
weight: 9
---
### [WIDER Face](http://shuoyang1213.me/WIDERFACE/)<a id="widerface" />
#### WIDER Face export
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── labels.txt # optional
├── wider_face_split/
│ └── wider_face_<any_subset_name>_bbx_gt.txt
└── WIDER_<any_subset_name>/
└── images/
├── 0--label0/
│ └── 0_label0_image1.jpg
└── 1--label1/
└── 1_label1_image2.jpg
```
- supported annotations: Rectangles (with attributes), Labels
- supported attributes:
- `blur`, `expression`, `illumination`, `pose`, `invalid`
- `occluded` (both the annotation property & an attribute)
#### WIDER Face import
Uploaded file: a zip archive of the structure above
- supported annotations: Rectangles (with attributes), Labels
- supported attributes:
- `blur`, `expression`, `illumination`, `occluded`, `pose`, `invalid`

@ -0,0 +1,126 @@
---
linkTitle: "YOLO"
weight: 7
---
### [YOLO](https://pjreddie.com/darknet/yolo/)<a id="yolo" />
- [Format specification](https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects)
- supported annotations: Rectangles
#### YOLO export
Downloaded file: a zip archive with following structure:
```bash
archive.zip/
├── obj.data
├── obj.names
├── obj_<subset>_data
│ ├── image1.txt
│ └── image2.txt
└── train.txt # list of subset image paths
# the only valid subsets are: train, valid
# train.txt and valid.txt:
obj_<subset>_data/image1.jpg
obj_<subset>_data/image2.jpg
# obj.data:
classes = 3 # optional
names = obj.names
train = train.txt
valid = valid.txt # optional
backup = backup/ # optional
# obj.names:
cat
dog
airplane
# image_name.txt:
# label_id - id from obj.names
# cx, cy - relative coordinates of the bbox center
# rw, rh - relative size of the bbox
# label_id cx cy rw rh
1 0.3 0.8 0.1 0.3
2 0.7 0.2 0.3 0.1
```
Each annotation `*.txt` file has a name that corresponds to the name of
the image file (e.g. `frame_000001.txt` is the annotation
for the `frame_000001.jpg` image).
The `*.txt` file structure: each line describes a label and a bounding box
in the format `label_id cx cy rw rh` (see the comment in the example above).
`obj.names` contains the ordered list of label names.
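As a small illustration (not part of CVAT), relative YOLO coordinates can be converted back to pixel corners when the image size is known:
```python
# Sketch: convert one YOLO annotation line (label_id cx cy rw rh, see above)
# to absolute pixel coordinates for an image of a known size.
def yolo_line_to_bbox(line, img_w, img_h):
    label_id, cx, cy, rw, rh = line.split()
    cx, cy, rw, rh = map(float, (cx, cy, rw, rh))
    xtl = (cx - rw / 2) * img_w  # top-left x
    ytl = (cy - rh / 2) * img_h  # top-left y
    xbr = (cx + rw / 2) * img_w  # bottom-right x
    ybr = (cy + rh / 2) * img_h  # bottom-right y
    return int(label_id), xtl, ytl, xbr, ybr

print(yolo_line_to_bbox("1 0.3 0.8 0.1 0.3", 640, 480))
```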
#### YOLO import
Uploaded file: a zip archive of the same structure as above
It must be possible to match the CVAT frame (image name)
and the annotation file name. There are 2 options:
1. full match between the image name and the name of the annotation `*.txt` file
(in cases when a task was created from images or an archive of images).
1. match by frame number (if CVAT cannot match by name). The file name
should be in the format `<number>.jpg`.
It should be used when the task was created from a video.
#### How to create a task from YOLO formatted dataset (from VOC for example)
1. Follow the official [guide](https://pjreddie.com/darknet/yolo/) (see the Training YOLO on VOC section)
and prepare the YOLO formatted annotation files.
1. Zip train images
```bash
zip images.zip -j -@ < train.txt
```
1. Create a CVAT task with the following labels:
```bash
aeroplane bicycle bird boat bottle bus car cat chair cow diningtable dog
horse motorbike person pottedplant sheep sofa train tvmonitor
```
Select images.zip as data. Most likely you should use the `share`
functionality because the size of images.zip is more than 500Mb.
See [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/)
guide for details.
1. Create `obj.names` with the following content:
```bash
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor
```
1. Zip all label files together (we need to add only label files that correspond to the train subset)
```bash
cat train.txt | while read p; do f=${p##*/}; echo ${p%/*/*}/labels/${f%%.*}.txt; done | zip labels.zip -j -@ obj.names
```
1. Click `Upload annotation` button, choose `YOLO 1.1` and select the zip
file with labels from the previous step.

@ -1,23 +1,15 @@
- [Quick installation guide](#quick-installation-guide)
- [Ubuntu 18.04 (x86_64/amd64)](#ubuntu-1804-x86_64amd64)
- [Windows 10](#windows-10)
- [Mac OS Mojave](#mac-os-mojave)
- [Advanced Topics](#advanced-topics)
- [Deploying CVAT behind a proxy](#deploying-cvat-behind-a-proxy)
- [Additional components](#additional-components)
- [Semi-automatic and automatic annotation](#semi-automatic-and-automatic-annotation)
- [Stop all containers](#stop-all-containers)
- [Advanced settings](#advanced-settings)
- [Share path](#share-path)
- [Email verification](#email-verification)
- [Deploy CVAT on the Scaleway public cloud](#deploy-cvat-on-the-scaleway-public-cloud)
- [Deploy secure CVAT instance with HTTPS](#deploy-secure-cvat-instance-with-https)
- [Prerequisites](#prerequisites)
- [Roadmap](#roadmap)
- [Step-by-step instructions](#step-by-step-instructions)
- [1. Make the proxy listen on 80 and 443 ports](#1-make-the-proxy-listen-on-80-and-443-ports)
- [2. Issue a certificate and run HTTPS versions with `acme.sh` helper](#2-issue-a-certificate-and-run-https-versions-with-acmesh-helper)
- [Create certificate files using an ACME challenge on docker host](#create-certificate-files-using-an-acme-challenge-on-docker-host)
<!--lint disable maximum-heading-length-->
---
title: 'Installation'
linkTitle: 'Installation'
weight: 1
description: 'CVAT installation guide for different operating systems'
---
<!--lint disable heading-style-->
# Quick installation guide
@ -123,7 +115,7 @@ server. Proxy is an advanced topic and it is not covered by the guide.
- Open the installed Google Chrome browser and go to [localhost:8080](http://localhost:8080).
Type your login/password for the superuser on the login page and press the _Login_
button. Now you should be able to create a new annotation task. Please read the
[CVAT user's guide](/cvat/apps/documentation/user_guide.md) for more details.
[CVAT user's guide](/docs/for-users/user-guide/) for more details.
## Windows 10
@ -186,7 +178,7 @@ server. Proxy is an advanced topic and it is not covered by the guide.
- Open the installed Google Chrome browser and go to [localhost:8080](http://localhost:8080).
Type your login/password for the superuser on the login page and press the _Login_
button. Now you should be able to create a new annotation task. Please read the
[CVAT user's guide](/cvat/apps/documentation/user_guide.md) for more details.
[CVAT user's guide](/docs/for-users/user-guide) for more details.
## Mac OS Mojave
@ -253,7 +245,7 @@ server. Proxy is an advanced topic and it is not covered by the guide.
- Open the installed Google Chrome browser and go to [localhost:8080](http://localhost:8080).
Type your login/password for the superuser on the login page and press the _Login_
button. Now you should be able to create a new annotation task. Please read the
[CVAT user's guide](/cvat/apps/documentation/user_guide.md) for more details.
[CVAT user's guide](/docs/for-users/user-guide) for more details.
## Advanced Topics
@ -282,7 +274,7 @@ Please see the [Docker documentation](https://docs.docker.com/network/proxy/) fo
### Additional components
- [Analytics: management and monitoring of data annotation team](/components/analytics/README.md)
- [Analytics: management and monitoring of data annotation team](/docs/for-developers/analytics/)
```bash
# Build and run containers with Analytics component support:
@ -292,7 +284,7 @@ docker-compose -f docker-compose.yml \
### Semi-automatic and automatic annotation
Please follow this [guide](/cvat/apps/documentation/installation_automatic_annotation.md).
Please follow this [guide](/docs/for-users/installation_automatic_annotation/).
### Stop all containers
@ -350,13 +342,14 @@ You can change the share device path to your actual share. For user convenience
we have defined the environment variable \$CVAT_SHARE_URL. This variable
contains a text string (a URL, for example) which is shown in the client share browser.
You can [mount](/cvat/apps/documentation/mounting_cloud_storages.md)
You can [mount](/docs/for-developers/mounting_cloud_storages/)
your cloud storage as a FUSE and use it later as a share.
### Email verification
You can enable email verification for newly registered users.
Specify these options in the [settings file](../../settings/base.py) to configure Django allauth
Specify these options in the
[settings file](https://github.com/openvinotoolkit/cvat/blob/develop/cvat/settings/base.py) to configure Django allauth
to enable email verification (ACCOUNT_EMAIL_VERIFICATION = 'mandatory').
Access is denied until the user's email address is verified.
@ -377,7 +370,8 @@ for details.
### Deploy CVAT on the Scaleway public cloud
Please follow [this tutorial](https://blog.scaleway.com/smart-data-annotation-for-your-computer-vision-projects-cvat-on-scaleway/) to install and set up remote access to CVAT on a Scaleway cloud instance with data in a mounted object storage bucket.
Please follow [this tutorial](https://blog.scaleway.com/smart-data-annotation-for-your-computer-vision-projects-cvat-on-scaleway/)
to install and set up remote access to CVAT on a Scaleway cloud instance with data in a mounted object storage bucket.
### Deploy secure CVAT instance with HTTPS
@ -454,7 +448,8 @@ services:
ALLOWED_HOSTS: '*'
```
Update a CVAT site proxy template `$HOME/cvat/cvat_proxy/conf.d/cvat.conf.template` on docker(system) host. Site config updates from this template each time `cvat_proxy` container start.
Update the CVAT site proxy template `$HOME/cvat/cvat_proxy/conf.d/cvat.conf.template` on the docker (system) host.
The site config is updated from this template each time the `cvat_proxy` container starts.
Add a location to the server with `server_name ${CVAT_HOST};` ahead of the others:
@ -480,7 +475,9 @@ Your server should be available (and unsecured) at `http://CVAT.example.com`
Something went wrong? The most common cause is a cache of containers and images that were built earlier.
This will enable serving `http://CVAT.example.com/.well-known/acme-challenge/`
route from `/var/tmp/letsencrypt-webroot` directory on the container's filesystem which is bind mounted from docker host `$HOME/cvat/letsencrypt-webroot`. That volume needed for issue and renewing certificates only.
route from the `/var/tmp/letsencrypt-webroot` directory on the container's filesystem,
which is bind-mounted from the docker host path `$HOME/cvat/letsencrypt-webroot`.
That volume is needed only for issuing and renewing certificates.
Another volume, `/etc/ssl/private`, should be used within the web server according to the [acme.sh](https://github.com/acmesh-official/acme.sh#3-install-the-cert-to-apachenginx-etc) documentation.
@ -494,7 +491,9 @@ At this point your deployment is up and running, ready for run acme-challenge fo
Point your shell to the cvat repository directory, usually `cd $HOME/cvat` on the docker host.
Lets Encrypt provides rate limits to ensure fair usage by as many people as possible. They recommend utilize their staging environment instead of the production API during testing. So first try to get a test certificate.
Let's Encrypt provides rate limits to ensure fair usage by as many people as possible.
They recommend using their staging environment instead of the production API during testing.
So first try to get a test certificate.
```bash
~/.acme.sh/acme.sh --issue --staging -d CVAT.example.com -w $HOME/cvat/letsencrypt-webroot --debug

@ -1,4 +1,16 @@
### Semi-automatic and Automatic Annotation
<!--lint disable maximum-heading-length-->
---
title: 'Semi-automatic and Automatic Annotation'
linkTitle: 'Semi-automatic and Automatic Annotation'
weight: 5
description: 'This page provides information about the installation of components needed for
semi-automatic and automatic annotation'
---
<!--lint disable maximum-line-length-->
> **⚠ WARNING: Do not use `docker-compose up`**
> If you did, make sure all containers are stopped by `docker-compose down`.
@ -20,7 +32,7 @@
- You have to install the `nuctl` command-line tool to build and deploy serverless
functions. Download [version 1.5.16](https://github.com/nuclio/nuclio/releases/tag/1.5.16).
It is important that the version you download matches the version in
[docker-compose.serverless.yml](/components/serverless/docker-compose.serverless.yml)
[docker-compose.serverless.yml](https://github.com/openvinotoolkit/cvat/blob/develop/components/serverless/docker-compose.serverless.yml)
After downloading nuclio, give it executable permission and create a symlink:
```
@ -28,7 +40,9 @@
sudo ln -sf $(pwd)/nuctl-<version>-linux-amd64 /usr/local/bin/nuctl
```
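For instance, with the 1.5.16 release mentioned above the full sequence might look like this (the binary name is an assumption and depends on the release you downloaded):

```bash
# Assumed binary name for the nuctl 1.5.16 Linux x86_64 release
chmod +x nuctl-1.5.16-linux-amd64
sudo ln -sf $(pwd)/nuctl-1.5.16-linux-amd64 /usr/local/bin/nuctl
nuctl version  # verify the tool is available on PATH
```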
- Create `cvat` project inside nuclio dashboard where you will deploy new serverless functions and deploy a couple of DL models. Commands below should be run only after CVAT has been installed using `docker-compose` because it runs nuclio dashboard which manages all serverless functions.
- Create the `cvat` project inside the nuclio dashboard, where you will deploy new serverless functions
and a couple of DL models. The commands below should be run only after CVAT has been installed
using `docker-compose`, because it starts the nuclio dashboard which manages all serverless functions.
```bash
nuctl create project cvat
@ -50,9 +64,11 @@
**Note:**
- See [deploy_cpu.sh](/serverless/deploy_cpu.sh) for more examples.
- See [deploy_cpu.sh](https://github.com/openvinotoolkit/cvat/blob/develop/serverless/deploy_cpu.sh)
for more examples.
#### GPU Support
You will need to install [Nvidia Container Toolkit](https://www.tensorflow.org/install/docker#gpu_support).
Also you will need to add `--resource-limit nvidia.com/gpu=1 --triggers '{"myHttpTrigger": {"maxWorkers": 1}}'` to
the nuclio deployment command. You can increase `maxWorkers` if you have enough GPU memory.
@ -69,12 +85,15 @@
```
**Note:**
- The number of deployed GPU functions is limited by your GPU memory.
- See [deploy_gpu.sh](/serverless/deploy_gpu.sh) script for more examples.
- See [deploy_gpu.sh](https://github.com/openvinotoolkit/cvat/blob/develop/serverless/deploy_gpu.sh)
script for more examples.
**Troubleshooting Nuclio Functions:**
- You can open nuclio dashboard at [localhost:8070](http://localhost:8070). Make sure status of your functions are up and running without any error.
- You can open the nuclio dashboard at [localhost:8070](http://localhost:8070).
Make sure your functions are up and running without any errors.
- Test your deployed DL model as a serverless function. The command below should work on Linux and Mac OS.
```bash
@ -115,12 +134,14 @@
}
]
```
</details>
- To check for internal server errors, run `docker ps -a` to see the list of containers.
Find the container that you are interested in, e.g., `nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu`.
Then check its logs with `docker logs <name of your container>`,
e.g.:
```bash
docker logs nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu
```

@ -0,0 +1,15 @@
---
title: "User's guide"
linkTitle: "User's guide"
weight: 1
description: "This multipage document contains information on how to work with the CVAT user interface"
---
Computer Vision Annotation Tool (CVAT) is a web-based tool which helps to
annotate videos and images for Computer Vision algorithms. It was inspired
by [Vatic](http://carlvondrick.com/vatic/), a free, online, interactive video
annotation tool. CVAT has many powerful features: _interpolation of bounding
boxes between key frames, automatic annotation using deep learning models,
shortcuts for most critical actions, a dashboard with a list of annotation
tasks, LDAP and basic authorization, etc..._ It was created for and is used by
a professional data annotation team. UX and UI were optimized by our team
especially for computer vision tasks.

@ -0,0 +1,5 @@
---
title: "Advanced"
linkTitle: "Advanced"
weight: 30
---

@ -0,0 +1,52 @@
---
title: "AI Tools"
linkTitle: "AI Tools"
weight: 5
---
The tool is designed for semi-automatic and automatic annotation using DL models.
The tool is available only if there is a corresponding model.
For more details about DL models read the [Models](/docs/for-users/user-guide/models/) section.
### Interactors
Interactors are used to create a polygon semi-automatically.
Supported DL models are not bound to the label and can be used for any objects.
To create a polygon usually you need to use regular or positive points.
For some kinds of segmentation negative points are available.
Positive points are the points related to the object.
Negative points should be placed outside the boundary of the object.
In most cases specifying positive points alone is enough to build a polygon.
- Before you start, select the magic wand on the controls sidebar and go to the `Interactors` tab.
Then select a label for the polygon and a required DL model.
![](/images/image114.jpg)
- Click `Interact` to enter the interaction mode. Now you can place positive and/or negative points.
Left click creates a positive point and right click creates a negative point.
`Deep extreme cut` model requires a minimum of 4 points. After you set 4 positive points,
a request will be sent to the server and when the process is complete a polygon will be created.
If you are not satisfied with the result, you can set additional points or remove a point by left-clicking on it.
If you want to postpone the request and create a few more points, hold down `Ctrl` and continue,
the request will be sent after the key is released.
![](/images/image188_detrac.jpg)
- To finish interaction, click on the icon on the controls sidebar or press `N` on your keyboard.
- When the object is finished, you can edit it like a polygon.
You can read about editing polygons in the [Annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/) section.
### Detectors
Detectors are used to automatically annotate one frame. Supported DL models are suitable only for certain labels.
- Before you start, click the magic wand on the controls sidebar and select the Detectors icon tab.
You need to match the labels of the DL model (left column) with the labels in your task (right column).
Then click `Annotate`.
![](/images/image187.jpg)
- This action will automatically annotate one frame.
In the [Automatic annotation](/docs/for-users/user-guide/advanced/automatic-annotation/) section you can read how to automatically annotate all frames.

@ -0,0 +1,19 @@
---
title: "Analytics"
linkTitle: "Analytics"
weight: 1
---
If your CVAT instance was created with analytics support, you can press the `Analytics` button in the dashboard
and analytics and journals will open in a new tab.
![](/images/image113.jpg)
The analytics allows you to see how much time every user spends on each task
and how much work they did over any time range.
![](/images/image097.jpg)
It also has an activity graph which can be adjusted by the number of users shown and the timeframe.
![](/images/image096.jpg)

@ -0,0 +1,9 @@
---
title: "Annotation with cuboids"
linkTitle: "Annotation with cuboids"
weight: 11
---
It is used to annotate 3-dimensional objects such as cars, boxes, etc.
Currently the feature supports one-point perspective and has the constraint
that the vertical edges are exactly parallel to the sides.

@ -0,0 +1,31 @@
---
title: "Creating the cuboid"
linkTitle: "Creating the cuboid"
weight: 1
---
Before you start, you have to make sure that Cuboid is selected
and choose a drawing method: “from rectangle” or “by 4 points”.
![](/images/image091.jpg)
### Drawing cuboid by 4 points
Choose a drawing method “by 4 points” and click Shape to enter the drawing mode. There are many ways to draw a cuboid.
You can draw the cuboid by placing 4 points, after that the drawing will be completed automatically.
The first 3 points determine the plane of the cuboid while the last point determines the depth of that plane.
For the first 3 points, it is recommended to only draw the 2 closest side faces, as well as the top and bottom face.
A few examples:
![](/images/image177_mapillary_vistas.jpg)
### Drawing cuboid from rectangle
Choose a drawing method “from rectangle” and click Shape to enter the drawing mode.
When you draw using the rectangle method, you must select the frontal plane of the object using the bounding box.
The depth and perspective of the resulting cuboid can be edited.
Example:
![](/images/image182_mapillary_vistas.jpg)

@ -0,0 +1,41 @@
---
title: "Editing the cuboid"
linkTitle: "Editing the cuboid"
weight: 2
---
![](/images/image178_mapillary_vistas.jpg)
The cuboid can be edited in multiple ways: by dragging points, by dragging certain faces or by dragging planes.
First notice that there is a face that is painted with gray lines only, let us call it the front face.
You can move the cuboid by simply dragging the shape behind the front face.
The cuboid can be extended by dragging on the point in the middle of the edges.
The cuboid can also be extended up and down by dragging the point at the vertices.
![](/images/gif017_mapillary_vistas.gif)
To draw with perspective effects it should be assumed that the front face is the closest to the camera.
To begin simply drag the points on the vertices that are not on the gray/front face while holding `Shift`.
The cuboid can then be edited as usual.
![](/images/gif018_mapillary_vistas.gif)
If you wish to reset perspective effects, you may right click on the cuboid,
and select `Reset perspective` to return to a regular cuboid.
![](/images/image180_mapillary_vistas.jpg)
The location of the gray face can be swapped with the adjacent visible side face.
You can do it by right clicking on the cuboid and selecting `Switch perspective orientation`.
Note that this will also reset the perspective effects.
![](/images/image179_mapillary_vistas.jpg)
Certain faces of the cuboid can also be edited,
these faces are: the left, right and dorsal faces, relative to the gray face.
Simply drag the faces to move them independently from the rest of the cuboid.
![](/images/gif020_mapillary_vistas.gif)
You can also use cuboids in track mode, similar to rectangles in track mode ([basics](/docs/for-users/user-guide/basics/track-mode-basics/) and [advanced](/docs/for-users/user-guide/advanced/track-mode-advanced/)) or [Track mode with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/track-mode-with-polygons/)

@ -0,0 +1,5 @@
---
title: "Annotation with points"
linkTitle: "Annotation with points"
weight: 10
---

@ -0,0 +1,28 @@
---
title: "Linear interpolation with one point"
linkTitle: "Linear interpolation with one point"
weight: 2
---
You can use linear interpolation for points to annotate a moving object:
1. Before you start, select `Points`.
1. Linear interpolation works only with one point, so you need to set `Number of points` to 1.
1. After that select `Track`.
![](/images/image122.jpg)
1. Click `Track` to enter the drawing mode, then left-click to create a point; after that the shape will be completed automatically.
![](/images/image163_detrac.jpg)
1. Move forward a few frames and move the point to the desired position,
this way you will create a keyframe and intermediate frames will be drawn automatically.
You can work with this object as with an interpolated track: you can hide it using the `Outside`,
move around keyframes, etc.
![](/images/image165_detrac.jpg)
1. This way you'll get linear interpolation using `Points`.
![](/images/gif013_detrac.gif)

@ -0,0 +1,22 @@
---
title: "Points in shape mode"
linkTitle: "Points in shape mode"
weight: 1
---
It is used for face and landmark annotation, etc.
Before you start you need to select the `Points`. If necessary you can set a fixed number of points
in the `Number of points` field, then drawing will be stopped automatically.
![](/images/image042.jpg)
Click `Shape` to enter the drawing mode. Now you can start annotation of the necessary area.
Points are automatically grouped — all points will be considered linked between each start and finish.
Press `N` again to finish marking the area. You can delete a point by clicking with pressed `Ctrl`
or right-clicking on a point and selecting `Delete point`. Clicking with pressed `Shift` will open the points
shape editor. There you can add new points into an existing shape. You can zoom in/out (when scrolling the mouse wheel)
and move (when clicking the mouse wheel and moving the mouse) while drawing. You can drag an object after
it has been drawn and change the position of individual points after finishing an object.
![](/images/image063_affectnet.jpg)

@ -0,0 +1,5 @@
---
title: "Annotation with polygons"
linkTitle: "Annotation with polygons"
weight: 8
---

@ -0,0 +1,44 @@
---
title: "Drawing using automatic borders"
linkTitle: "Automatic borders"
weight: 2
---
![](/images/gif025_mapillary_vistas.gif)
You can use auto borders when drawing a polygon. Using automatic borders allows you to automatically trace
the outline of polygons existing in the annotation.
- To do this, go to settings -> workspace tab and enable `Automatic Bordering`
or press `Ctrl` while drawing a polygon.
![](/images/image161.jpg)
- Start drawing / editing a polygon.
- Points of other shapes will be highlighted, which means that the polygon can be attached to them.
- Define the part of the polygon path that you want to repeat.
![](/images/image157_mapillary_vistas.jpg)
- Click on the first point of the contour part.
![](/images/image158_mapillary_vistas.jpg)
- Then click on any point located on part of the path. The selected point will be highlighted in purple.
![](/images/image159_mapillary_vistas.jpg)
- Click on the last point and the outline up to this point will be built automatically.
![](/images/image160_mapillary_vistas.jpg)
Besides, you can set a fixed number of points in the `Number of points` field, then
drawing will be stopped automatically. To enable dragging you should right-click
inside the polygon and choose `Switch pinned property`.
Below you can see results with opacity and black stroke:
![](/images/image064_mapillary_vistas.jpg)
If you need to annotate small objects, increase `Image Quality` to
`95` in `Create task` dialog for your convenience.

@ -0,0 +1,67 @@
---
title: "Creating masks"
linkTitle: "Creating masks"
weight: 6
---
### Cutting holes in polygons
Currently, CVAT does not support cutting transparent holes in polygons. However,
it is possible to generate holes in exported instance and class masks.
To do this, one needs to define a background class in the task and draw holes
with it as additional shapes above the shapes that need holes:
The editor window:
![The editor](/images/mask_export_example1_editor.png)
Remember to use z-axis ordering for shapes by \[\-\] and \[\+\, \=\] keys.
Exported masks:
![A class mask](/images/mask_export_example1_cls_mask.png) ![An instance mask](/images/mask_export_example1_inst_mask.png)
Notice that it is currently impossible to have a single instance number for
internal shapes (they will be merged into the largest one and then covered by
"holes").
### Creating masks
There are several formats in CVAT that can be used to export masks:
- `Segmentation Mask` (PASCAL VOC masks)
- `CamVid`
- `MOTS`
- `ICDAR`
- `COCO` (RLE-encoded instance masks, [guide](/docs/for-users/formats/format-specifications/format-coco))
- `TFRecord` ([over Datumaro](https://github.com/openvinotoolkit/datumaro/blob/develop/docs/user_manual.md), [guide](/docs/for-users/formats/format-specifications/format-tfrecord)):
- `Datumaro`
An example of exported masks (in the `Segmentation Mask` format):
![A class mask](/images/exported_cls_masks_example.png) ![An instance mask](/images/exported_inst_masks_example.png)
Important notices:
- Both boxes and polygons are converted into masks
- Grouped objects are considered as a single instance and exported as a single
mask (label and attributes are taken from the largest object in the group)
#### Class colors
All the labels have associated colors, which are used in the generated masks.
These colors can be changed in the task label properties:
![](/images/label_color_picker.jpg)
Label colors are also displayed in the annotation window on the right panel,
where you can show or hide specific labels
(only the presented labels are displayed):
![](/images/label_panel_anno_window.jpg)
A background class can be:
- A default class, which is implicitly-added, of black color (RGB 0, 0, 0)
- `background` class with any color (has a priority, name is case-insensitive)
- Any class of black color (RGB 0, 0, 0)
To change the background color in generated masks (default is black),
change the `background` class color to the desired one.

@ -0,0 +1,21 @@
---
title: "Edit polygon"
linkTitle: "Edit polygon"
weight: 4
---
To edit a polygon you have to click on it while holding `Shift`, it will open the polygon editor.
- In the editor you can create new points or delete part of a polygon by closing the line on another point.
- When the `Intelligent polygon cropping` option is activated in the settings, CVAT considers two criteria to decide which part of a polygon should be cut off during automatic editing:
- The first criterion is the number of cut points.
- The second criterion is the length of the cut curve.
If both criteria recommend cutting the same part, the algorithm works automatically; if not, the user has to make the decision.
If you want to choose manually which part of a polygon should be cut off, disable `Intelligent polygon cropping` in the settings. In this case, after closing the polygon, you can select the part of the polygon you want to leave.
![](/images/image209.jpg)
- You can press `Esc` to cancel editing.
![](/images/gif007_mapillary_vistas.gif)

@ -0,0 +1,25 @@
---
title: "Manual drawing"
linkTitle: "Manual drawing"
weight: 1
---
It is used for semantic / instance segmentation.
Before starting, you need to select `Polygon` on the controls sidebar and choose the correct Label.
![](/images/image084.jpg)
- Click `Shape` to enter drawing mode.
There are two ways to draw a polygon: either create points by clicking or
by dragging the mouse on the screen while holding `Shift`.
| Clicking points | Holding Shift+Dragging |
| -------------------------------------------------- | -------------------------------------------------- |
| ![](/images/gif005_detrac.gif) | ![](/images/gif006_detrac.gif) |
- When `Shift` isn't pressed, you can zoom in/out (when scrolling the mouse
wheel) and move (when clicking the mouse wheel and moving the mouse), you can also
delete the previous point by right-clicking on it.
- Press `N` again for completing the shape.
- After creating the polygon, you can move the points or delete them by right-clicking and selecting `Delete point`
in the context menu, or by clicking on a point with the `Alt` key pressed.

@ -0,0 +1,33 @@
---
title: "Track mode with polygons"
linkTitle: "Track mode with polygons"
weight: 5
---
Polygons in the track mode allow you to mark moving objects more accurately than using a rectangle
([Tracking mode (basic)](/docs/for-users/user-guide/basics/track-mode-basics/); [Tracking mode (advanced)](/docs/for-users/user-guide/advanced/track-mode-advanced/)).
1. To create a polygon in the track mode, click the `Track` button.
![](/images/image184.jpg)
1. Create a polygon the same way as in the case of [Annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/).
Press `N` to complete the polygon.
1. Pay attention to the fact that the created polygon has a starting point and a direction,
these elements are important for annotation of the following frames.
1. After going a few frames forward press `Shift+N`, the old polygon will disappear and you can create a new polygon.
The new starting point should match the starting point of the previously created polygon
(in this example, the top of the left mirror). The direction must also match (in this example, clockwise).
After creating the polygon, press `N` and the intermediate frames will be interpolated automatically.
![](/images/image185_detrac.jpg)
1. If you need to change the starting point, right-click on the desired point and select `Set starting point`.
To change the direction, right-click on the desired point and select switch orientation.
![](/images/image186_detrac.jpg)
There is no need to redraw the polygon every time using `Shift+N`,
instead you can simply move the points or edit a part of the polygon by pressing `Shift+Click`.

@ -0,0 +1,23 @@
---
title: "Annotation with polylines"
linkTitle: "Annotation with polylines"
weight: 9
---
It is used for road markup annotation, etc.
Before starting, you need to select the `Polyline`. You can set a fixed number of points
in the `Number of points` field, then drawing will be stopped automatically.
![](/images/image085.jpg)
Click `Shape` to enter drawing mode. There are two ways to draw a polyline —
you either create points by clicking or by dragging a mouse on the screen while holding `Shift`.
When `Shift` isn't pressed, you can zoom in/out (when scrolling the mouse wheel)
and move (when clicking the mouse wheel and moving the mouse); you can delete the
previous point by right-clicking on it. Press `N` again to complete the shape.
You can delete a point by clicking on it with pressed `Ctrl` or right-clicking on a point
and selecting `Delete point`. Click with pressed `Shift` will open a polyline editor.
There you can create new points (by clicking or dragging) or delete part of a polyline by closing
the red line on another point. Press `Esc` to cancel editing.
![](/images/image039_mapillary_vistas.jpg)

@ -0,0 +1,18 @@
---
title: "Annotation with rectangle by 4 points"
linkTitle: "Annotation with rectangle by 4 points"
weight: 7
---
It is an efficient method of bounding box annotation, proposed
[here](https://arxiv.org/pdf/1708.02750.pdf).
Before starting, you need to make sure that the drawing method by 4 points is selected.
![](/images/image134.jpg)
Press `Shape` or `Track` to enter drawing mode. Click on four extreme points:
the top, bottom, left- and right-most physical points on the object.
Drawing will be automatically completed right after clicking the fourth point.
Press `Esc` to cancel editing.
![](/images/gif016_mapillary_vistas.gif)

@ -0,0 +1,19 @@
---
title: "Annotation with Tags"
linkTitle: "Annotation with Tags"
weight: 12
---
It is used to annotate frames, tags are not displayed in the workspace.
Before you start, open the drop-down list in the top panel and select `Tag annotation`.
![](/images/image183.jpg)
The objects sidebar will be replaced with a special panel for working with tags.
Here you can select a label for a tag and add it by clicking on the `Add tag` button.
You can also customize hotkeys for each label.
![](/images/image181.jpg)
If you need to use only one label per frame, enable the `Automatically go to the next frame`
checkbox; then, after you add the tag, the workspace will automatically switch to the next frame.

@ -0,0 +1,28 @@
---
title: "Attribute annotation mode (advanced)"
linkTitle: "Attribute annotation mode"
weight: 3
---
Basic operations in the mode were described in section [attribute annotation mode (basics)](/docs/for-users/user-guide/basics/attribute-annotation-mode-basics/).
In this mode it is possible to handle many objects on the same frame.
![](/images/image058_detrac.jpg)
It is more convenient to annotate objects of the same type. In this case you can apply
the appropriate filter. For example, the following filter will
hide all objects except person: `label=="Person"`.
To navigate between objects (person in this case),
use the following buttons `switch between objects in the frame` on the special panel:
![](/images/image026.jpg)
or shortcuts:
- `Tab` — go to the next object
- `Shift+Tab` — go to the previous object.
In order to change the zoom level, go to settings (press `F3`),
open the workspace tab and set the `Attribute annotation mode (AAM) zoom margin` value in px.

@ -0,0 +1,51 @@
---
title: "Automatic annotation"
linkTitle: "Automatic annotation"
weight: 14
---
Automatic Annotation is used for creating preliminary annotations.
To use Automatic Annotation you need a DL model. You can use primary models or models uploaded by a user.
You can find the list of available models in the `Models` section.
1. To launch automatic annotation, you should open the dashboard and find a task which you want to annotate.
Then click the `Actions` button and choose option `Automatic Annotation` from the dropdown menu.
![](/images/image119_detrac.jpg)
1. In the dialog window, select the model you need. DL models are created for specific labels, e.g.
the Crossroad model was trained using footage from cameras located above the highway, and it is best to
use this model for tasks with similar camera angles.
If necessary, select the `Clean old annotations` checkbox.
Adjust the labels so that the task labels will correspond to the labels of the DL model.
For example, let's consider a task where you have to annotate labels “car” and “person”.
You should connect the “person” label from the model to the “person” label in the task.
As for the “car” label, you should choose the most fitting label available in the model - the “vehicle” label.
The task requires to annotate cars only and choosing the “vehicle” label implies annotation of all vehicles,
in this case using auto annotation will help you complete the task faster.
Click `Submit` to begin the automatic annotation process.
![](/images/image120.jpg)
1. At runtime you can see the percentage of completion.
You can cancel the automatic annotation by clicking the `Cancel` button.
![](/images/image121_detrac.jpg)
1. The end result of automatic annotation is an annotation with separate rectangles (or other shapes).
![](/images/gif014_detrac.gif)
1. You can combine separate bounding boxes into tracks using the `Person reidentification` model.
To do this, click on the automatic annotation item in the action menu again and select the model
of the `ReID` type (in this case the `Person reidentification` model).
You can set the following parameters:
- Model `Threshold` is the maximum cosine distance between object embeddings (see the note after this list).
- `Maximum distance` defines the maximum radius within which an object can move between adjacent frames.
![](/images/image133.jpg)
1. You can remove false positives and edit tracks using `Split` and `Merge` functions.
![](/images/gif015_detrac.gif)
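As a reminder (assuming the usual definition), the cosine distance used by the `Threshold` parameter above is

$$ d_{\cos}(u, v) = 1 - \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}, $$

where `u` and `v` are the embedding vectors of two detections; a value of 0 means identical direction, and larger values mean less similar embeddings.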

@ -0,0 +1,86 @@
---
title: "Filter"
linkTitle: "Filter"
weight: 16
---
There are some reasons to use the feature:
1. When you use a filter, objects that don't match the filter will be hidden.
1. Fast navigation between frames which have an object of interest.
Use the `Left Arrow` / `Right Arrow` keys for this purpose,
or customize the UI buttons by right-clicking and selecting `switching by filter`.
If there are no objects which correspond to the filter,
you will go to the previous / next frame which contains any annotated objects.
To apply filters you need to click on the button on the top panel.
![](/images/image059.jpg)
It will open a window for filter input. Here you will find two buttons: `Add rule` and `Add group`.
![](/images/image202.jpg)
### Rules
The "Add rule" button adds a rule for objects display. A rule may use the following properties:
![](/images/image204.jpg)
**Supported properties:**
| Properties | Supported values | Description |
| ----------- | ------------------------------------------------------ | --------------------------------------------|
| `Label` | all the label names that are in the task | label name |
| `Type` | shape, track or tag | type of object |
| `Shape` | all shape types | type of shape |
| `Occluded` | true or false | occluded ([read more](/docs/for-users/user-guide/advanced/shape-mode-advanced/))|
| `Width` | number of px or field | shape width |
| `Height` | number of px or field | shape height |
| `ServerID` | number or field | ID of the object on the server <br>(You can find out by forming a link to the object through the Action menu)|
| `ObjectID` | number or field | ID of the object in your client <br>(indicated on the objects sidebar)|
| `Attributes`| some other fields including attributes with a <br>similar type or a specific attribute value| any fields specified by a label |
**Supported operators for properties:**
`==` - Equally; `!=` - Not equal; `>` - More; `>=` - More or equal; `<` - Less; `<=` - Less or equal;
`Any in`; `Not in` - these operators allow you to set multiple values in one rule;
![](/images/image203.jpg)
`Is empty`; `Is not empty` - these operators don't require you to input a value.
`Between`; `Not between` - these operators allow you to choose a range between two values.
Some properties support two types of values that you can choose:
![](/images/image205.jpg)
You can add multiple rules; to do so, click the `Add rule` button and set another rule. Once you've set a new rule, you'll be able to choose which operator they will be connected by: `And` or `Or`.
![](/images/image206.jpg)
All subsequent rules will be joined by the chosen operator. Click `Submit` to apply the filter, or use groups if you want multiple rules to be connected by different operators.
### Groups
To add a group, click the "add group" button. Inside the group you can create rules or groups.
![](/images/image207.jpg)
If there is more than one rule in the group, they can be connected by `And` or `Or` operators.
A rule group works like a separate rule outside the group and will be joined by an
operator outside the group.
You can create groups within other groups; to do so, click the `Add group` button within the group.
You can move rules and groups. To move the rule or group, drag it by the button.
To remove the rule or group, click on the `Delete` button.
![](/images/image208.jpg)
If you activate the `Not` button, objects that don't match the group will be filtered out.
Click `Submit` to apply the filter.
The "Cancel" button undoes the filter. The `Clear filter` button removes the filter.
Once applied, a filter automatically appears in the `Recent used` list. The maximum length of the list is 10.

@ -0,0 +1,36 @@
---
title: "OpenCV tools"
linkTitle: "OpenCV tools"
weight: 6
---
The tool is based on the [OpenCV](https://opencv.org/) computer vision library, an open-source product that includes many CV algorithms. Some of these algorithms can be used to simplify the annotation process.
The first step in working with OpenCV is to load it into CVAT. Click on the toolbar icon, then click `Load OpenCV`.
![](/images/image198.jpg)
Once it is loaded, the tool's functionality will be available.
### Intelligent scissors
Intelligent scissors is a CV method of creating a polygon by placing points, with a line drawn automatically between them.
The distance between the adjacent points is limited by the threshold of action,
displayed as a red square which is tied to the cursor.
- First, select the label and then click on the `intelligent scissors` button.
![](/images/image199.jpg)
- Create the first point on the boundary of the allocated object.
You will see a line repeating the outline of the object.
- Place the second point, so that the previous point is within the restrictive threshold.
After that a line repeating the object boundary will be automatically created between the points.
![](/images/image200_detrac.jpg)
To increase or lower the action threshold, hold `Ctrl` and scroll the mouse wheel.
Increasing the action threshold will affect performance.
During the drawing process you can remove the last point by clicking on it with the left mouse button.
- Once all the points are placed, you can complete the creation of the object by clicking on the icon or pressing `N`.
As a result, a polygon will be created (read more about polygons in [annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/)).

@ -0,0 +1,36 @@
---
title: "Review"
linkTitle: "Review"
weight: 13
---
A special mode to check the annotation allows you to point to an object or area in the frame containing an error.
To go into review mode, you need to select `Request a review` in the menu and assign the user to run a check.
![](/images/image194.jpg)
After that, the job status will be changed to `validation`
and the reviewer will be able to open the task in review mode.
Review mode is a UI mode with a special "issue" tool which you can use to identify objects
or areas in the frame and describe the problem.
- To do this, first click `open an issue` icon on the controls sidebar:
![](/images/image195.jpg)
- Then click on an object in the frame to highlight the object or highlight the area by holding the left mouse button
and describe the problem. The object or area will be shaded in red.
- The created issue will appear in the workspace and in the `issues` tab on the objects sidebar.
- After you save the annotation, other users will be able to see the problem, comment on each issue
and change the status of the problem to `resolved`.
- You can use the arrows on the issues tab to navigate the frames that contain problems.
![](/images/image196_detrac.jpg)
- Once all the problems are marked, save the annotation, open the menu and select `Submit the review`. After that you'll see a form containing the verification statistics; here you can give an assessment of the job and choose further actions:
- Accept - changes the status of the job to `completed`.
- Review next - passes the job to another user for re-review.
- Reject - changes the status of the job to `annotation`.
![](/images/image197.jpg)

@ -0,0 +1,26 @@
---
title: "Shape grouping"
linkTitle: "Shape grouping"
weight: 15
---
This feature allows us to group several shapes.
You may use the `Group Shapes` button or shortcuts:
- `G` — start selection / end selection in group mode
- `Esc` — close group mode
- `Shift+G` — reset group for selected shapes
You may select shapes clicking on them or selecting an area.
Grouped shapes will have a `group_id` field in the dumped annotation.
You may also switch the color distribution from per-instance (default) to per-group.
Enable the `Color By Group` checkbox for that.
Shapes that don't have a `group_id` will be highlighted in white.
![](/images/image078_detrac.jpg)
![](/images/image077_detrac.jpg)

@ -0,0 +1,26 @@
---
title: "Shape mode (advanced)"
linkTitle: "Shape mode"
weight: 1
---
Basic operations in the mode were described in section [shape mode (basics)](/docs/for-users/user-guide/basics/shape-mode-basics/).
**Occluded**
Occlusion is an attribute used if an object is occluded by another object or
isn't fully visible on the frame. Use `Q` shortcut to set the property
quickly.
![](/images/image065.jpg)
Example: the three cars on the figure below should be labeled as **occluded**.
![](/images/image054_mapillary_vistas.jpg)
If a frame contains too many objects and it is difficult to annotate them
due to many shapes placed mostly in the same place, it makes sense
to lock them. Shapes for locked objects are transparent, and it is easy to
annotate new objects. Besides, you can't change previously annotated objects
by accident. Shortcut: `L`.
![](/images/image066.jpg)

@ -0,0 +1,76 @@
---
title: "Shortcuts"
linkTitle: "Shortcuts"
weight: 18
---
Many UI elements have shortcut hints. Put your pointer to a required element to see it.
![](/images/image075.jpg)
| Shortcut | Common |
| -------------------------- | -------------------------------------------------------------------------------------------------------- |
| | _Main functions_ |
| `F1` | Open/hide the list of available shortcuts |
| `F2` | Go to the settings page or go back |
| `Ctrl+S`                   | Save the job                                                                                               |
| `Ctrl+Z`                   | Cancel the latest action related to objects                                                                |
| `Ctrl+Shift+Z` or `Ctrl+Y` | Redo the latest undone action                                                                              |
| Hold `Mouse Wheel` | To move an image frame (for example, while drawing) |
| | _Player_ |
| `F` | Go to the next frame |
| `D` | Go to the previous frame |
| `V` | Go forward with a step |
| `C` | Go backward with a step |
| `Right`                    | Search for the next frame that satisfies the filters <br> or the next frame which contains any objects    |
| `Left`                     | Search for the previous frame that satisfies the filters <br> or the previous frame which contains any objects |
| `Space` | Start/stop automatic changing frames |
| `` ` `` or `~` | Focus on the element to change the current frame |
| | _Modes_ |
| `N` | Repeat the latest procedure of drawing with the same parameters |
| `M`                        | Activate or deactivate the mode to merge shapes                                                            |
| `Alt+M`                    | Activate or deactivate the mode to split shapes                                                            |
| `G`                        | Activate or deactivate the mode to group shapes                                                            |
| `Shift+G` | Reset group for selected shapes (in group mode) |
| `Esc` | Cancel any active canvas mode |
| | _Image operations_ |
| `Ctrl+R` | Change image angle (add 90 degrees) |
| `Ctrl+Shift+R`             | Change image angle (subtract 90 degrees)                                                                   |
| `Shift+B+=` | Increase brightness level for the image |
| `Shift+B+-` | Decrease brightness level for the image |
| `Shift+C+=` | Increase contrast level for the image |
| `Shift+C+-` | Decrease contrast level for the image |
| `Shift+S+=` | Increase saturation level for the image |
| `Shift+S+-`                | Decrease saturation level for the image                                                                    |
| `Shift+G+=` | Make the grid more visible |
| `Shift+G+-` | Make the grid less visible |
| `Shift+G+Enter` | Set another color for the image grid |
| | _Operations with objects_ |
| `Ctrl` | Switch automatic bordering for polygons and polylines during drawing/editing |
| Hold `Ctrl`                | Hold while a shape is active to fix it in place                                                            |
| `Alt+Click` on point | Deleting a point (used when hovering over a point of polygon, polyline, points) |
| `Shift+Click` on point | Editing a shape (used when hovering over a point of polygon, polyline or points) |
| `Right-Click` on shape | Display of an object element from objects sidebar |
| `T+L` | Change locked state for all objects in the sidebar |
| `L` | Change locked state for an active object |
| `T+H` | Change hidden state for objects in the sidebar |
| `H` | Change hidden state for an active object |
| `Q` or `/` | Change occluded property for an active object |
| `Del` or `Shift+Del` | Delete an active object. Use shift to force delete of locked objects |
| `-` or `_` | Put an active object "farther" from the user (decrease z axis value) |
| `+` or `=` | Put an active object "closer" to the user (increase z axis value) |
| `Ctrl+C` | Copy shape to CVAT internal clipboard |
| `Ctrl+V` | Paste a shape from internal CVAT clipboard |
| Hold `Ctrl` while pasting  | Paste the shape from the buffer multiple times                                                             |
| `Ctrl+B`                   | Make a copy of the object on the following frames                                                          |
| | _Operations are available only for track_ |
| `K` | Change keyframe property for an active track |
| `O` | Change outside property for an active track |
| `R` | Go to the next keyframe of an active track |
| `E` | Go to the previous keyframe of an active track |
| | _Attribute annotation mode_ |
| `Up Arrow` | Go to the next attribute (up) |
| `Down Arrow` | Go to the next attribute (down) |
| `Tab` | Go to the next annotated object in current frame |
| `Shift+Tab` | Go to the previous annotated object in current frame |
| `<number>` | Assign a corresponding value to the current attribute |

@ -0,0 +1,21 @@
---
title: "Track mode (advanced)"
linkTitle: "Track mode"
weight: 2
---
Basic operations in the mode were described in the section [track mode (basics)](/docs/for-users/user-guide/basics/track-mode-basics/).
Shapes that were created in the track mode have extra navigation buttons.
- These buttons help to jump to the previous/next keyframe.
![](/images/image056.jpg)
- The button helps to jump to the initial frame and to the last keyframe.
![](/images/image057.jpg)
You can use the `Split` function to split one track into two tracks:
![](/images/gif010_detrac.gif)

@ -0,0 +1,5 @@
---
title: "Basics"
linkTitle: "Basics"
weight: 8
---

@ -0,0 +1,29 @@
---
title: "Attribute annotation mode (basics)"
linkTitle: "Attribute annotation mode"
weight: 6
---
- In this mode you can edit attributes with fast navigation between objects and frames using a keyboard.
Open the drop-down list in the top panel and select Attribute annotation Mode.
![](/images/image023_affectnet.jpg)
- In this mode the objects panel changes to a special panel:
![](/images/image026.jpg)
- The active attribute will be red. In this case it is `gender`. Look at the bottom side panel to see all possible
shortcuts for changing the attribute. Press the `2` key on your keyboard to assign the value (female) to the attribute
or select it from the drop-down list.
![](/images/image024_affectnet.jpg)
- Press `Up Arrow`/`Down Arrow` on your keyboard or click the buttons in the UI to go to the next/previous
attribute. In this case, after pressing `Down Arrow` you will be able to edit the `Age` attribute.
![](/images/image025_affectnet.jpg)
- Use the `Right Arrow`/`Left Arrow` keys to move to the next/previous image with annotation.
To see all the hotkeys available in the attribute annotation mode, press `F2`.
Read more in the section [attribute annotation mode (advanced)](/docs/for-users/user-guide/advanced/attribute-annotation-mode-advanced/).

@ -0,0 +1,26 @@
---
title: "Basic navigation"
linkTitle: "Basic navigation"
weight: 1
---
1. Use arrows below to move to the next/previous frame.
Use the scroll bar slider to scroll through frames.
Almost every button has a shortcut.
To get a hint about a shortcut, just move your mouse pointer over a UI element.
![](/images/image008.jpg)
1. To navigate the image, use the button on the controls sidebar.
Another way an image can be moved/shifted is by holding the left mouse button inside
an area without annotated objects.
If the `Mouse Wheel` is pressed, then all annotated objects are ignored. Otherwise
a highlighted bounding box will be moved instead of the image itself.
![](/images/image136.jpg)
1. You can use the button on the sidebar controls to zoom on a region of interest.
Use the button `Fit the image` to fit the image in the workspace.
You can also use the mouse wheel to scale the image
(the image will be zoomed relative to your current cursor position).
![](/images/image137.jpg)

@ -0,0 +1,46 @@
---
title: "Shape mode (basics)"
linkTitle: "Shape mode"
weight: 3
---
Usage examples:
- Create new annotations for a set of images.
- Add/modify/delete objects for existing annotations.
1. You need to select `Rectangle` on the controls sidebar:
![](/images/image082.jpg)
Before you start, select the correct `Label` (should be specified by you when creating the task)
and `Drawing Method` (by 2 points or by 4 points):
![](/images/image080.jpg)
1. Creating a new annotation in `Shape mode`:
- Create a separate `Rectangle` by clicking on `Shape`.
![](/images/image081.jpg)
- Choose the opposite points. Your first rectangle is ready!
![](/images/image011_detrac.jpg)
- To learn about creating a rectangle using the by 4 point drawing method, ([read here](/docs/for-users/user-guide/advanced/annotation-with-rectangle-by-4-points/)).
- It is possible to adjust boundaries and location of the rectangle using a mouse.
The rectangle's size is shown in the top right corner; you can check it by clicking on any point of the shape.
You can also undo your actions using `Ctrl+Z` and redo them with `Shift+Ctrl+Z` or `Ctrl+Y`.
1. You can see the `Object card` in the objects sidebar or open it by right-clicking on the object.
You can change the attributes in the details section.
You can perform basic operations or delete an object by clicking on the action menu button.
![](/images/image012.jpg)
1. The following figure is an example of a fully annotated frame with separate shapes.
![](/images/image013_detrac.jpg)
Read more in the section [shape mode (advanced)](/docs/for-users/user-guide/advanced/shape-mode-advanced/).

@ -0,0 +1,69 @@
---
title: "Track mode (basics)"
linkTitle: "Track mode"
weight: 4
---
Usage examples:
- Create new annotations for a sequence of frames.
- Add/modify/delete objects for existing annotations.
- Edit tracks, merge several rectangles into one track.
1. Like in the `Shape mode`, you need to select a `Rectangle` on the sidebar,
in the appearing form, select the desired `Label` and the `Drawing method`.
![](/images/image083.jpg)
1. Creating a track for an object (look at the selected car as an example):
- Create a `Rectangle` in `Track mode` by clicking on `Track`.
![](/images/image014.jpg)
- In `Track mode` the rectangle will be automatically interpolated on the next frames.
- The cyclist starts moving on frame #2270. Let's mark the frame as a key frame.
You can press `K` for that or click the `star` button (see the screenshot below).
![](/images/image016.jpg)
- If the object starts to change its position, you need to modify the rectangle where it happens.
It isn't necessary to change the rectangle on each frame, simply update several keyframes
and the frames between them will be interpolated automatically.
- Let's jump 30 frames forward and adjust the boundaries of the object. See an example below:
![](/images/image017_detrac.jpg)
- After that the rectangle of the object will be changed automatically on frames 2270 to 2300:
![](/images/gif019_detrac.gif)
1. When the annotated object disappears or becomes too small, you need to
finish the track. You have to choose `Outside Property`, shortcut `O`.
![](/images/image019.jpg)
1. If the object isn't visible on a couple of frames and then appears again,
you can use the `Merge` feature to merge several individual tracks
into one.
![](/images/image020.jpg)
- Create tracks for moments when the cyclist is visible:
![](/images/gif001_detrac.gif)
- Click `Merge` button or press key `M` and click on any rectangle of the first track
and on any rectangle of the second track and so on:
![](/images/image162_detrac.jpg)
- Click `Merge` button or press `M` to apply changes.
![](/images/image020.jpg)
- The final annotated sequence of frames in `Interpolation` mode can
look like the clip below:
![](/images/gif003_detrac.gif)
Read more in the section [track mode (advanced)](/docs/for-users/user-guide/advanced/track-mode-advanced/).

@ -0,0 +1,45 @@
---
title: "Controls sidebar"
linkTitle: "Controls sidebar"
weight: 15
---
**Navigation block** - contains tools for moving and rotating images.
|Icon |Description |
|-- |-- |
|![](/images/image148.jpg)|`Cursor` (`Esc`) - a basic annotation editing tool. |
|![](/images/image149.jpg)|`Move the image` - a tool for moving around the image without<br/> the possibility of editing.|
|![](/images/image102.jpg)|`Rotate` - two buttons to rotate the current frame<br/> clockwise (`Ctrl+R`) and anticlockwise (`Ctrl+Shift+R`).<br/> You can enable `Rotate all images` in the settings to rotate all the images in the job.|
---
**Zoom block** - contains tools for image zoom.
|Icon |Description |
|-- |-- |
|![](/images/image151.jpg)|`Fit image` - fits the image into the workspace size.<br/> Shortcut - double click on an image|
|![](/images/image166.jpg)|`Select a region of interest` - zooms in on a selected region.<br/> You can use this tool to quickly zoom in on a specific part of the frame.|
---
**Shapes block** - contains all the tools for creating shapes.
|Icon |Description |Links to section |
|-- |-- |-- |
|![](/images/image189.jpg)|`AI Tools` |[AI Tools](/docs/for-users/user-guide/advanced/ai-tools/)|
|![](/images/image201.jpg)|`OpenCV` |[OpenCV](/docs/for-users/user-guide/advanced/opencv-tools/)|
|![](/images/image167.jpg)|`Rectangle`|[Shape mode](/docs/for-users/user-guide/basics/shape-mode-basics/); [Track mode](/docs/for-users/user-guide/basics/track-mode-basics/);<br/> [Drawing by 4 points](/docs/for-users/user-guide/advanced/annotation-with-rectangle-by-4-points/)|
|![](/images/image168.jpg)|`Polygon` |[Annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/); [Track mode with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/track-mode-with-polygons/) |
|![](/images/image169.jpg)|`Polyline` |[Annotation with polylines](/docs/for-users/user-guide/advanced/annotation-with-polylines/)|
|![](/images/image170.jpg)|`Points` |[Annotation with points](/docs/for-users/user-guide/advanced/annotation-with-points/) |
|![](/images/image176.jpg)|`Cuboid` |[Annotation with cuboids](/docs/for-users/user-guide/advanced/annotation-with-cuboids/) |
|![](/images/image171.jpg)|`Tag` |[Annotation with tags](/docs/for-users/user-guide/advanced/annotation-with-tags/) |
|![](/images/image195.jpg)|`Open an issue` |[Review](/docs/for-users/user-guide/advanced/review/) (available only in review mode) |
---
**Edit block** - contains tools for editing tracks and shapes.
|Icon |Description |Links to section |
|-- |-- |-- |
|![](/images/image172.jpg)|`Merge Shapes`(`M`) — starts/stops the merging shapes mode. |[Track mode (basics)](/docs/for-users/user-guide/basics/track-mode-basics/)|
|![](/images/image173.jpg)|`Group Shapes` (`G`) — starts/stops the grouping shapes mode.|[Shape grouping](/docs/for-users/user-guide/advanced/shape-grouping/)|
|![](/images/image174.jpg)|`Split` — splits a track. |[Track mode (advanced)](/docs/for-users/user-guide/advanced/track-mode-advanced/)|
---

@ -0,0 +1,219 @@
---
title: "Creating an annotation task"
linkTitle: "Creating an annotation task"
weight: 2
---
1. Create an annotation task by pressing the `Create new task` button on the tasks page or on the project page.
![](/images/image004.jpg)
1. Specify parameters of the task:
#### Basic configuration
**Name**. The name of the task to be created.
![](/images/image005.jpg)
**Projects**. The project that this task will be related to.
![](/images/image193.jpg)
**Labels**. There are two ways of working with labels (available only if the task is not related to the project):
- The `Constructor` is a simple way to add and adjust labels. To add a new label click the `Add label` button.
![](/images/image123.jpg)
You can set a name of the label in the `Label name` field and choose a color for each label.
![](/images/image124.jpg)
If necessary you can add an attribute and set its properties by clicking `Add an attribute`:
![](/images/image125.jpg)
The following actions are available here:
1. Set the attribute's name.
1. Choose the way to display the attribute:
- Select — drop-down list of values
- Radio — is used when it is necessary to choose just one option out of a few suggested.
- Checkbox — is used when it is necessary to choose any number of options out of the suggested ones.
- Text — is used when an attribute is entered as text.
- Number — is used when an attribute is entered as a number.
1. Set values for the attribute. The values could be separated by pressing `Enter`.
The entered value is displayed as a separate element which could be deleted
by pressing `Backspace` or clicking the close button (x).
If the specified way of displaying the attribute is Text or Number,
the entered value will be displayed as text by default (e.g. you can specify the text format).
1. The `Mutable` checkbox determines whether an attribute can change from frame to frame.
1. You can delete the attribute by clicking the close button (x).
Click the `Continue` button to add more labels.
If you need to cancel adding a label - press the `Cancel` button.
After all the necessary labels are added click the `Done` button.
After clicking `Done` the added labels will be displayed as separate elements of different colors.
You can edit or delete labels by clicking `Update attributes` or `Delete label`.
- `Raw` is a way of working with labels for advanced users.
It presents label data in _json_ format with an option of editing and copying labels as text.
The `Done` button applies the changes and the `Reset` button cancels the changes.
![](/images/image126.jpg)
In `Raw` and `Constructor` mode, you can press the `Copy` button to copy the list of labels.
**Select files**. Press tab `My computer` to choose some files for annotation from your PC.
If you select tab `Connected file share` you can choose files for annotation from your network.
If you select `Remote source`, you'll see a field where you can enter a list of URLs (one URL per line).
If you upload a video or dataset with images and select `Use cache` option, you can attach a `manifest.jsonl` file.
You can find how to prepare it [here](/docs/for-developers/dataset_manifest/).
![](/images/image127.jpg)
#### Advanced configuration
![](/images/image128_use_cache.jpg)
**Use zip chunks**. Forces the use of zip chunks as compressed data. Relevant for videos only.
**Use cache**. Defines how to work with data. Select the checkbox to switch to the "on-the-fly data processing",
which will reduce the task creation time (by preparing chunks when requests are received)
and store data in a cache of limited size with a policy of evicting less popular items.
See more [here](/docs/for-developers/data_on_fly/).
**Image Quality**. Use this option to specify quality of uploaded images.
The option helps to load high resolution datasets faster.
Use the value from `5` (almost completely compressed images) to `100` (not compressed images).
**Overlap Size**. Use this option to make overlapped segments.
The option makes tracks continuous from one segment into another.
Use it for interpolation mode. There are several options for using the parameter:
- For an interpolation task (video sequence).
If you annotate a bounding box on two adjacent segments they will be merged into one bounding box.
If overlap equals to zero or annotation is poor on adjacent segments inside a dumped annotation file,
you will have several tracks, one for each segment, which corresponds to the object.
- For an annotation task (independent images).
If an object exists on overlapped segments, the overlap is greater than zero
and the annotation is good enough on adjacent segments, it will be automatically merged into one object.
If overlap equals to zero or annotation is poor on adjacent segments inside a dumped annotation file,
you will have several bounding boxes for the same object.
Thus, you annotate an object on the first segment.
You annotate the same object on the second segment, and if you do it right, you
will have one track inside the annotations.
If annotations on different segments (on overlapped frames)
are very different, you will have two shapes for the same object.
This functionality works only for bounding boxes.
Polygons, polylines, and points don't support automatic merging on overlapped segments
even if the overlap parameter isn't zero and the match between corresponding shapes on adjacent segments is perfect.
**Segment size**. Use this option to divide a huge dataset into a few smaller segments.
For example, one job cannot be annotated by several labelers (it isn't supported).
Thus, using "segment size" you can create several jobs for the same annotation task,
which helps you parallelize the data annotation process (see the sketch below for how
segment size and overlap split frames into jobs).
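The following is an illustrative sketch only (not CVAT's internal code) of how a frame range could be split into overlapping segments, i.e. how the `Segment size` and `Overlap Size` parameters relate to the jobs that get created:

```python
# Illustrative sketch: splits a frame range into overlapping segments.
# This models the idea of segment size and overlap, it is not CVAT code.
def split_into_segments(total_frames, segment_size, overlap):
    step = segment_size - overlap
    if step <= 0:
        raise ValueError("overlap must be smaller than segment size")
    segments = []
    start = 0
    while start < total_frames:
        stop = min(start + segment_size, total_frames)
        segments.append((start, stop - 1))  # inclusive frame range of one job
        if stop == total_frames:
            break
        start += step
    return segments

# Example: 100 frames, segment size 40, overlap 5
# -> [(0, 39), (35, 74), (70, 99)]
print(split_into_segments(100, 40, 5))
```

With an overlap of 5, each pair of adjacent jobs shares five frames, which is what allows annotations to be merged across segment boundaries.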
**Start frame**. The frame from which the video in the task begins.
**Stop frame**. The frame at which the video in the task ends.
**Frame Step**. Use this option to filter video frames.
For example, enter `25` to keep every twenty-fifth frame of the video or every twenty-fifth image
(see the sketch below).
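As a quick illustration of the three options above (a sketch of the idea, not CVAT source code), the frames that end up in the task can be thought of as a simple slice of the original frame numbers:

```python
# Rough sketch: which original frame numbers end up in the task
# for a given start frame, stop frame and frame step.
start_frame, stop_frame, frame_step = 0, 100, 25

selected = list(range(start_frame, stop_frame + 1, frame_step))
print(selected)  # [0, 25, 50, 75, 100] -> every twenty-fifth frame is kept
```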
**Chunk size**. Defines the number of frames to be packed into a chunk when sent from the client to the server.
The server determines the value automatically if the field is left empty.
Recommended values:
- 1080p or less: 36
- 2k or less: 8 - 16
- 4k or less: 4 - 8
- More: 1 - 4
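The list above boils down to "the higher the resolution, the smaller the chunk". A tiny helper like the hypothetical one below could pick a starting value; the thresholds and returned values simply mirror one reading of the recommendations:

```python
# Hypothetical helper mirroring the recommendations above:
# pick a starting chunk size from the frame height in pixels.
def recommended_chunk_size(frame_height):
    if frame_height <= 1080:   # 1080p or less
        return 36
    if frame_height <= 1440:   # 2k or less
        return 16
    if frame_height <= 2160:   # 4k or less
        return 8
    return 4                   # larger resolutions

print(recommended_chunk_size(1080))  # 36
print(recommended_chunk_size(2160))  # 8
```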
**Dataset Repository**. The URL of the repository; it optionally specifies the path in the repository used for storage
(`default: annotation / <dump_file_name> .zip`).
The `.zip` and `.xml` annotation file extensions are supported.
Field format: `URL [PATH]`, example: `https://github.com/project/repos.git [1/2/3/4/annotation.xml]`
Supported URL formats:
- `https://github.com/project/repos[.git]`
- `github.com/project/repos[.git]`
- `git@github.com:project/repos[.git]`
The task will be highlighted in red after creation if the annotation isn't synchronized with the repository.
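Since the field format is `URL [PATH]`, splitting the field into its two parts might look like the following sketch (purely illustrative; CVAT performs its own validation server-side, and the default path shown here is a placeholder for `annotation/<dump_file_name>.zip`):

```python
# Illustrative parser for the "URL [PATH]" repository field.
# Not CVAT's own validation logic, just a sketch of the format.
def parse_repository_field(value, default_path="annotation/annotation.zip"):
    parts = value.strip().split(maxsplit=1)
    url = parts[0]
    path = parts[1] if len(parts) > 1 else default_path  # PATH is optional
    return url, path

url, path = parse_repository_field(
    "https://github.com/project/repos.git 1/2/3/4/annotation.xml"
)
print(url)   # https://github.com/project/repos.git
print(path)  # 1/2/3/4/annotation.xml
```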
**Use LFS**. If the annotation file is large, you can create a repository with
[LFS](https://git-lfs.github.com/) support.
**Issue tracker**. Specify the full URL of the issue tracker if necessary.
Press the `Submit` button and the task will be added to the list of annotation tasks.
Then, the created task will be displayed on a tasks page:
![](/images/image006_detrac.jpg)
1. The tasks page contains elements, each of which relates to a separate task. They are sorted in creation order.
Each element contains: the task name, a preview, a progress bar, an `Open` button, and an `Actions` menu.
Each item in the `Actions` menu is responsible for a specific function:
- `Dump Annotation` and `Export as a dataset` — download annotations or
annotations and images in a specific format. The following formats are available:
- [CVAT for video](/docs/for-developers/xml_format/#interpolation)
is highlighted if a task has the interpolation mode.
- [CVAT for images](/docs/for-developers/xml_format/#annotation)
is highlighted if a task has the annotation mode.
- [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/)
- [(VOC) Segmentation mask](http://host.robots.ox.ac.uk/pascal/VOC/) —
archive contains class and instance masks for each frame in the png
format and a text file with the value of each color.
- [YOLO](https://pjreddie.com/darknet/yolo/)
- [COCO](http://cocodataset.org/#format-data)
- [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord)
- [MOT](https://motchallenge.net/)
- [LabelMe 3.0](http://labelme.csail.mit.edu/Release3.0/)
- [Datumaro](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats/datumaro)
- `Upload annotation` is available in the same formats as in `Dump annotation`.
- [CVAT](/docs/for-developers/xml_format/) accepts both video and image sub-formats.
- `Automatic Annotation` — automatic annotation with the OpenVINO toolkit.
Its presence depends on how the CVAT instance was built.
- `Delete` — delete task.
Press the `Open` button to go to the task details.
1. Task details is a task page which contains a preview, a progress bar,
the details of the task (specified when the task was created) and the jobs section.
![](/images/image131_detrac.jpg)
- The following actions are available on this page:
1. Change the task's title.
1. Open `Actions` menu.
1. Change issue tracker or open issue tracker if it is specified.
1. Change labels (available only if the task is not related to the project).
You can add new labels or add attributes for the existing labels in the Raw mode or the Constructor mode.
By clicking `Copy` you will copy the labels to the clipboard.
1. Assigned to — is used to assign a task to a person. Start typing an assignee's name and/or
choose the right person out of the dropdown list.
- `Jobs` — a list of all jobs for a particular task. Here you can find the following data:
- The job's name with a hyperlink to it.
- Frames — the frame interval.
- The status of the job. The status is specified by the user in the menu inside the job.
There are three types of status: annotation, validation or completed.
The status of the job changes the progress bar of the task.
- Started on — start date of this job.
- Duration — the amount of time the job has been worked on.
- Assignee — the user who is working on the job.
You can start typing an assignee's name and/or choose the right person out of the dropdown list.
- Reviewer — a user assigned to carry out the review; read more in the [review](/docs/for-users/user-guide/advanced/review/) section.
- `Copy`. By clicking `Copy` you will copy the job list to the clipboard.
The job list contains direct links to jobs.
You can filter or sort jobs by status, assignee or reviewer.
1. Follow a link inside the `Jobs` section to start the annotation process.
In some cases, you can have several links. It depends on the size of your
task and the `Overlap Size` and `Segment Size` parameters. To improve
the UX, only the first chunk of several frames will be loaded, and you will be able
to annotate the first images. Other frames will be loaded in the background.
![](/images/image007_detrac.jpg)
@ -0,0 +1,31 @@
---
title: "Downloading annotations"
linkTitle: "Downloading annotations"
weight: 9
---
1. To download the latest annotations, you have to save all changes first:
click the `Save` button. There is a `Ctrl+S` shortcut to save annotations quickly.
1. After that, click the `Menu` button.
1. Press the `Dump Annotation` button.
![](/images/image028.jpg)
1. Choose the format of the annotation dump file. Dumped annotations are available in several formats:
- [CVAT for video](/docs/for-developers/xml_format/#interpolation)
is highlighted if a task has the interpolation mode.
- [CVAT for images](/docs/for-developers/xml_format/#annotation)
is highlighted if a task has the annotation mode.
![](/images/image029.jpg 'Example XML format')
- [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/)
- [(VOC) Segmentation mask](http://host.robots.ox.ac.uk/pascal/VOC/) —
archive contains class and instance masks for each frame in the png
format and a text file with the value of each color.
- [YOLO](https://pjreddie.com/darknet/yolo/)
- [COCO](http://cocodataset.org/#format-data)
- [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord)
- [MOT](https://motchallenge.net/)
- [LabelMe 3.0](http://labelme.csail.mit.edu/Release3.0/)
- [Datumaro](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats/datumaro)
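Annotations can also be dumped over the REST API instead of the UI. The sketch below only assumes the typical workflow (authenticate, ask the server to prepare the dump, poll, then download); the endpoint path, query parameters and status codes used here are assumptions and should be verified against your instance's `/api/swagger` page before relying on them.

```python
import time
import requests

# Assumed endpoint and parameters -- verify against /api/swagger
# on your CVAT server before using this sketch.
SERVER = "http://localhost:8080"
TASK_ID = 1
EXPORT_FORMAT = "CVAT for images 1.1"  # format name as shown in the UI dropdown

session = requests.Session()
session.auth = ("admin", "password")  # basic auth; token auth is also possible

url = f"{SERVER}/api/v1/tasks/{TASK_ID}/annotations"
params = {"format": EXPORT_FORMAT}

# Ask the server to prepare the dump, then poll until it is ready.
while True:
    response = session.get(url, params=params)
    response.raise_for_status()
    if response.status_code == 201:  # assumed "dump is ready" status
        break
    time.sleep(1)

# Download the prepared archive.
response = session.get(url, params={**params, "action": "download"})
response.raise_for_status()
with open("annotations.zip", "wb") as f:
    f.write(response.content)
```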
@ -0,0 +1,37 @@
---
title: "Getting started"
linkTitle: "Getting started"
weight: 1
---
### Authorization
- First of all, you have to log in to the CVAT tool.
![](/images/image001.jpg)
- To register a new user, press "Create an account"
![](/images/image002.jpg)
- You can register a user, but by default it will not have rights even to view the
list of tasks. Thus you should create a superuser. The superuser can use the
[Django administration panel](http://localhost:8080/admin) to assign correct
groups to the user. Please use the command below to create an admin account:
`docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'`
- If you want to create a non-admin account, you can do that using the link below
on the login page. Don't forget to modify permissions for the new user in the
administration panel. There are several groups (aka roles): admin, user,
annotator, observer.
![](/images/image003.jpg)
### Administration panel
Go to the [Django administration panel](http://localhost:8080/admin). There you can:
- Create / edit / delete users
- Control permissions of users and access to the tool.
![](/images/image115.jpg)
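If you prefer to script this instead of clicking through the panel, groups can also be assigned from the Django shell. The snippet below is a sketch assuming the standard Django auth models and the group names listed earlier (admin, user, annotator, observer); the username is hypothetical and the exact group names on your instance may differ.

```python
# Run inside the Django shell, for example:
#   docker exec -it cvat bash -ic 'python3 ~/manage.py shell'
# Sketch only: assumes the standard Django auth models and the
# group names mentioned above (admin, user, annotator, observer).
from django.contrib.auth.models import Group, User

user = User.objects.get(username="new_annotator")   # hypothetical username
annotator_group = Group.objects.get(name="annotator")

user.groups.add(annotator_group)   # grant the "annotator" role
user.save()
print(list(user.groups.values_list("name", flat=True)))
```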
@ -0,0 +1,16 @@
---
title: "Interface of the annotation tool"
linkTitle: "Interface"
weight: 7
---
The tool consists of:
- `Header` — pinned header used to navigate CVAT sections and account settings;
- `Top panel` — contains navigation buttons, main functions and menu access;
- `Workspace` — space where images are shown;
- `Controls sidebar` — contains tools for navigating the image, zooming,
creating shapes and editing tracks (merge, split, group);
- `Objects sidebar` — contains a label filter, two lists
(objects on the frame and labels of objects on the frame) and appearance settings.
![](/images/image034_detrac.jpg)
@ -0,0 +1,25 @@
---
title: "Models"
linkTitle: "Models"
weight: 5
---
### Models
The Models page contains a list of deep learning (DL) models deployed for semi-automatic and automatic annotation.
To open the Models page, click the Models button on the navigation bar.
The list of models is presented in the form of a table. The parameters indicated for each model are the following:
- `Framework` the model is based on
- model `Name`
- model `Type`:
- `detector` - used for automatic annotation (available in [detectors](/docs/for-users/user-guide/advanced/ai-tools/#detectors) and [automatic annotation](/docs/for-users/user-guide/advanced/automatic-annotation/))
- `interactor` - used for semi-automatic shape annotation (available in [interactors](/docs/for-users/user-guide/advanced/ai-tools/#interactors))
- `tracker` - used for semi-automatic track annotation (available in [trackers](/docs/for-users/user-guide/advanced/ai-tools/#trackers))
- `reid` - used to combine individual objects into a track (available in [automatic annotation](/docs/for-users/user-guide/advanced/automatic-annotation/))
- `Description` - brief description of the model
- `Labels` - list of the supported labels (only for models of the `detector` type)
![](/images/image099.jpg)
Read how to install your model [here](/docs/for-users/installation/#semi-automatic-and-automatic-annotation).