Website with documentation (#3039)
parent
a2df499f50
commit
9615436ecc
@ -0,0 +1,38 @@
name: Github pages
on:
  push:
    branches:
      - develop

jobs:
  deploy:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: '0.83.1'
          extended: true

      - name: Setup Node
        uses: actions/setup-node@v2
        with:
          node-version: '14.x'

      - name: Build docs
        working-directory: ./site
        run: |
          npm ci
          hugo --baseURL "/cvat/" --minify

      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site/public
          force_orphan: true
@ -0,0 +1,3 @@
[submodule "site/themes/docsy"]
	path = site/themes/docsy
	url = https://github.com/google/docsy
File diff suppressed because it is too large
@ -1,22 +0,0 @@
### AWS-Deployment Guide

There are two ways of deploying CVAT.

1. **On an Nvidia GPU machine:** The TensorFlow annotation feature depends on GPU hardware. One easy way to launch CVAT with the tf-annotation app is to use an AWS P3 instance, which provides an NVIDIA GPU. Read more about [P3 instances here.](https://aws.amazon.com/about-aws/whats-new/2017/10/introducing-amazon-ec2-p3-instances/)
   The overall setup is explained in the [main readme file](https://github.com/opencv/cvat/), except for installing the Nvidia drivers, which you need to download and install yourself. For Amazon P3 instances, download the drivers from the Nvidia website; for details see [Installing the NVIDIA Driver on Linux Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html).

2. **On any other AWS machine:** Follow the same guide as in the
   [installation instructions](https://github.com/opencv/cvat/blob/master/cvat/apps/documentation/installation.md).
   The additional step is to add a [security group and rule to allow incoming connections](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).

For either option, don't forget to add the exposed AWS public IP address or hostname to `docker-compose.override.yml`:

```yml
version: "2.3"
services:
  cvat_proxy:
    environment:
      CVAT_HOST: your-instance.amazonaws.com
```

If the hostname causes problems, you can use the public IPv4 address instead. On AWS (or any cloud) machines that are stopped and restarted, the public IPv4 address and hostname change with every stop and reboot. To handle this efficiently, avoid spot instances, which cannot be stopped: copying their EBS volume to an AMI and relaunching it causes problems. When a regular instance is stopped and restarted, put the new hostname or IPv4 address into the `CVAT_HOST` variable in `docker-compose.override.yml` and rebuild; CVAT tasks then become available through the new address.
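The hostname swap after a stop/start cycle can be scripted; a minimal sketch, assuming the override file lives in the current directory (the hostnames below are hypothetical, and on a live EC2 instance the fresh one could instead be read from the instance metadata service):

```shell
# Recreate the sample override file from above, for demonstration only
cat > docker-compose.override.yml <<'EOF'
version: "2.3"
services:
  cvat_proxy:
    environment:
      CVAT_HOST: old-instance.amazonaws.com
EOF

# On a live EC2 instance the new public hostname could be fetched with:
#   NEW_HOST=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
NEW_HOST="new-instance.amazonaws.com"  # hypothetical value

# Rewrite the CVAT_HOST value in place, then show the result
sed -i "s|CVAT_HOST:.*|CVAT_HOST: ${NEW_HOST}|" docker-compose.override.yml
grep 'CVAT_HOST' docker-compose.override.yml
```

After this, rebuilding the containers picks up the new address.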
@ -1,4 +0,0 @@

# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
@ -1,11 +0,0 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.apps import AppConfig


class DocumentationConfig(AppConfig):
    name = 'cvat.apps.documentation'
@ -1,385 +0,0 @@
- [Mounting cloud storage](#mounting-cloud-storage)
  - [AWS S3 bucket](#aws-s3-bucket-as-filesystem)
    - [Ubuntu 20.04](#aws_s3_ubuntu_2004)
      - [Mount](#aws_s3_mount)
      - [Automatically mount](#aws_s3_automatically_mount)
        - [Using /etc/fstab](#aws_s3_using_fstab)
        - [Using systemd](#aws_s3_using_systemd)
      - [Check](#aws_s3_check)
      - [Unmount](#aws_s3_unmount_filesystem)
  - [Azure container](#microsoft-azure-container-as-filesystem)
    - [Ubuntu 20.04](#azure_ubuntu_2004)
      - [Mount](#azure_mount)
      - [Automatically mount](#azure_automatically_mount)
        - [Using /etc/fstab](#azure_using_fstab)
        - [Using systemd](#azure_using_systemd)
      - [Check](#azure_check)
      - [Unmount](#azure_unmount_filesystem)
  - [Google Drive](#google-drive-as-filesystem)
    - [Ubuntu 20.04](#google_drive_ubuntu_2004)
      - [Mount](#google_drive_mount)
      - [Automatically mount](#google_drive_automatically_mount)
        - [Using /etc/fstab](#google_drive_using_fstab)
        - [Using systemd](#google_drive_using_systemd)
      - [Check](#google_drive_check)
      - [Unmount](#google_drive_unmount_filesystem)
# Mounting cloud storage
## AWS S3 bucket as filesystem
### <a name="aws_s3_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="aws_s3_mount">Mount</a>

1. Install s3fs:

   ```bash
   sudo apt install s3fs
   ```

1. Enter your credentials in the file `${HOME}/.passwd-s3fs` and set owner-only permissions:

   ```bash
   echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
   chmod 600 ${HOME}/.passwd-s3fs
   ```

1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Run s3fs, replacing `bucket_name` and `mount_point`:

   ```bash
   s3fs <bucket_name> <mount_point> -o allow_other
   ```

For more details see [here](https://github.com/s3fs-fuse/s3fs-fuse).

#### <a name="aws_s3_automatically_mount">Automatically mount</a>
Follow the first 3 mounting steps above.
##### <a name="aws_s3_using_fstab">Using fstab</a>

1. Create a bash script named `aws_s3_fuse` (e.g. in /usr/bin, as root) with this content
   (replace `user_name` with the user on whose behalf the disk will be mounted, plus `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):

   ```bash
   #!/bin/bash
   sudo -u <user_name> s3fs <bucket_name> <mount_point> -o passwd_file=/path/to/.passwd-s3fs -o allow_other
   exit 0
   ```

1. Give it execute permission:

   ```bash
   sudo chmod +x /usr/bin/aws_s3_fuse
   ```

1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):

   ```bash
   /absolute/path/to/aws_s3_fuse <mount_point> fuse allow_other,user,_netdev 0 0
   ```
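For reference, the fstab line above follows the standard six-field fstab layout (a sketch; the script path and mount point are placeholders as before):

```
# <file system>                <mount point>  <type>  <options>                 <dump> <pass>
/absolute/path/to/aws_s3_fuse  <mount_point>  fuse    allow_other,user,_netdev  0      0
```

Here `_netdev` delays mounting until the network is up, `user` lets a non-root user mount the entry, and the two trailing zeros disable dump backups and boot-time fsck for this entry.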
##### <a name="aws_s3_using_systemd">Using systemd</a>

1. Create a unit file `sudo nano /etc/systemd/system/s3fs.service`
   (replace `user_name`, `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):

   ```bash
   [Unit]
   Description=FUSE filesystem over AWS S3 bucket
   After=network.target

   [Service]
   Environment="MOUNT_POINT=<mount_point>"
   User=<user_name>
   Group=<user_name>
   ExecStart=s3fs <bucket_name> ${MOUNT_POINT} -o passwd_file=/path/to/.passwd-s3fs -o allow_other
   ExecStop=fusermount -u ${MOUNT_POINT}
   Restart=always
   Type=forking

   [Install]
   WantedBy=multi-user.target
   ```

1. Reload the systemd configuration, enable the unit so it starts on boot, and mount the bucket:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl enable s3fs.service
   sudo systemctl start s3fs.service
   ```
#### <a name="aws_s3_check">Check</a>
The file `/etc/mtab` contains records of currently mounted filesystems.

```bash
cat /etc/mtab | grep 's3fs'
```
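Grepping `/proc/mounts` works the same way and does not rely on `/etc/mtab` being a real file; a minimal sketch (`/proc` is used here only because it is guaranteed to be mounted on Linux — substitute your own mount point):

```shell
# Substitute the mount point you want to verify
MOUNT_POINT="/proc"

# Each line in /proc/mounts is "<source> <mount point> <type> <options> ...",
# so the mount point appears surrounded by spaces
if grep -qs " ${MOUNT_POINT} " /proc/mounts; then
    echo "mounted"
else
    echo "not mounted"
fi
```

The same check applies unchanged to the blobfuse and google-drive-ocamlfuse mounts later in this guide.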
#### <a name="aws_s3_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```

If you used [systemd](#aws_s3_using_systemd) to mount the bucket:

```bash
sudo systemctl stop s3fs.service
sudo systemctl disable s3fs.service
```
## Microsoft Azure container as filesystem
### <a name="azure_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="azure_mount">Mount</a>
1. Set up the Microsoft package repository (more [here](https://docs.microsoft.com/en-us/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software#configuring-the-repositories)):

   ```bash
   wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
   sudo dpkg -i packages-microsoft-prod.deb
   sudo apt-get update
   ```

1. Install `blobfuse` and `fuse`:

   ```bash
   sudo apt-get install blobfuse fuse
   ```

   For more details see [here](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation)

1. Set environment variables (replace `account_name`, `account_key`, `mount_point`):

   ```bash
   export AZURE_STORAGE_ACCOUNT=<account_name>
   export AZURE_STORAGE_ACCESS_KEY=<account_key>
   MOUNT_POINT=<mount_point>
   ```

1. Create a folder for the cache:

   ```bash
   sudo mkdir -p /mnt/blobfusetmp
   ```

1. Make sure this folder is owned by the user who mounts the container:

   ```bash
   sudo chown <user> /mnt/blobfusetmp
   ```

1. Create the mount point, if it doesn't exist:

   ```bash
   mkdir -p ${MOUNT_POINT}
   ```

1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the container (replace `your_container`):

   ```bash
   blobfuse ${MOUNT_POINT} --container-name=<your_container> --tmp-path=/mnt/blobfusetmp -o allow_other
   ```
#### <a name="azure_automatically_mount">Automatically mount</a>
Follow the first 7 mounting steps above.
##### <a name="azure_using_fstab">Using fstab</a>

1. Create a configuration file `connection.cfg` with the content below: change `accountName`,
   keep either `accountKey` or `sasToken` (deleting the other line), and replace the values with your own:

   ```bash
   accountName <account-name-here>
   # Please provide either an account key or a SAS token, and delete the other line.
   accountKey <account-key-here-delete-next-line>
   # Change authType to specify only one.
   sasToken <shared-access-token-here-delete-previous-line>
   authType <MSI/SAS/SPN/Key/empty>
   containerName <insert-container-name-here>
   ```

1. Create a bash script named `azure_fuse` (e.g. in /usr/bin, as root) with the content below
   (replace `user_name` with the user on whose behalf the disk will be mounted, plus `mount_point`, `/path/to/blobfusetmp`, `/path/to/connection.cfg`):

   ```bash
   #!/bin/bash
   sudo -u <user_name> blobfuse <mount_point> --tmp-path=/path/to/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
   exit 0
   ```

1. Give it execute permission:

   ```bash
   sudo chmod +x /usr/bin/azure_fuse
   ```

1. Edit `/etc/fstab`, adding the following line (replace the paths):

   ```bash
   /absolute/path/to/azure_fuse </path/to/desired/mountpoint> fuse allow_other,user,_netdev
   ```
##### <a name="azure_using_systemd">Using systemd</a>

1. Create a unit file `sudo nano /etc/systemd/system/blobfuse.service`
   (replace `user_name`, `mount_point`, `container_name`, `/path/to/connection.cfg`):

   ```bash
   [Unit]
   Description=FUSE filesystem over Azure container
   After=network.target

   [Service]
   Environment="MOUNT_POINT=<mount_point>"
   User=<user_name>
   Group=<user_name>
   ExecStart=blobfuse ${MOUNT_POINT} --container-name=<container_name> --tmp-path=/mnt/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
   ExecStop=fusermount -u ${MOUNT_POINT}
   Restart=always
   Type=forking

   [Install]
   WantedBy=multi-user.target
   ```

1. Reload the systemd configuration, enable the unit so it starts on boot, and mount the container:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl enable blobfuse.service
   sudo systemctl start blobfuse.service
   ```

   For more details [see here](https://github.com/Azure/azure-storage-fuse/tree/master/systemd)
#### <a name="azure_check">Check</a>
The file `/etc/mtab` contains records of currently mounted filesystems.

```bash
cat /etc/mtab | grep 'blobfuse'
```
#### <a name="azure_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```

If you used [systemd](#azure_using_systemd) to mount the container:

```bash
sudo systemctl stop blobfuse.service
sudo systemctl disable blobfuse.service
```

If you have any mounting problems, check out the [answers](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ)
to common problems.
## Google Drive as filesystem
### <a name="google_drive_ubuntu_2004">Ubuntu 20.04</a>
#### <a name="google_drive_mount">Mount</a>
To mount a Google Drive as a filesystem in user space (FUSE),
you can use [google-drive-ocamlfuse](https://github.com/astrada/google-drive-ocamlfuse).
To do this, follow the instructions below:

1. Install google-drive-ocamlfuse:

   ```bash
   sudo add-apt-repository ppa:alessandro-strada/ppa
   sudo apt-get update
   sudo apt-get install google-drive-ocamlfuse
   ```

1. Run `google-drive-ocamlfuse` without parameters:

   ```bash
   google-drive-ocamlfuse
   ```

   This command creates the default application directory (~/.gdfuse/default),
   containing the configuration file `config` (see the [wiki](https://github.com/astrada/google-drive-ocamlfuse/wiki)
   page for more details about configuration),
   and starts a web browser to obtain authorization to access your Google Drive.
   This lets you modify the default configuration before mounting the filesystem.

   Then you can choose a local directory to mount your Google Drive (e.g.: ~/GoogleDrive).

1. Create the mount point, if it doesn't exist (replace `mount_point`):

   ```bash
   mountpoint="<mount_point>"
   mkdir -p $mountpoint
   ```

1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the filesystem:

   ```bash
   google-drive-ocamlfuse -o allow_other $mountpoint
   ```
#### <a name="google_drive_automatically_mount">Automatically mount</a>
Follow the first 4 mounting steps above.
##### <a name="google_drive_using_fstab">Using fstab</a>

1. Create a bash script named `gdfuse` (e.g. in /usr/bin, as root) with this content
   (replace `user_name` with the user on whose behalf the disk will be mounted, plus `label`, `mount_point`):

   ```bash
   #!/bin/bash
   sudo -u <user_name> google-drive-ocamlfuse -o allow_other -label <label> <mount_point>
   exit 0
   ```

1. Give it execute permission:

   ```bash
   sudo chmod +x /usr/bin/gdfuse
   ```

1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):

   ```bash
   /absolute/path/to/gdfuse <mount_point> fuse allow_other,user,_netdev 0 0
   ```

For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)
##### <a name="google_drive_using_systemd">Using systemd</a>

1. Create a unit file `sudo nano /etc/systemd/system/google-drive-ocamlfuse.service`
   (replace `user_name`, `label` (default `label=default`), `mount_point`):

   ```bash
   [Unit]
   Description=FUSE filesystem over Google Drive
   After=network.target

   [Service]
   Environment="MOUNT_POINT=<mount_point>"
   User=<user_name>
   Group=<user_name>
   ExecStart=google-drive-ocamlfuse -label <label> ${MOUNT_POINT}
   ExecStop=fusermount -u ${MOUNT_POINT}
   Restart=always
   Type=forking

   [Install]
   WantedBy=multi-user.target
   ```

1. Reload the systemd configuration, enable the unit so it starts on boot, and mount the drive:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl enable google-drive-ocamlfuse.service
   sudo systemctl start google-drive-ocamlfuse.service
   ```

For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)
#### <a name="google_drive_check">Check</a>
The file `/etc/mtab` contains records of currently mounted filesystems.

```bash
cat /etc/mtab | grep 'google-drive-ocamlfuse'
```
#### <a name="google_drive_unmount_filesystem">Unmount filesystem</a>
```bash
fusermount -u <mount_point>
```

If you used [systemd](#google_drive_using_systemd) to mount the drive:

```bash
sudo systemctl stop google-drive-ocamlfuse.service
sudo systemctl disable google-drive-ocamlfuse.service
```
Binary file not shown.
Before Width: | Height: | Size: 214 KiB |
@ -1,138 +0,0 @@
// Extension loading compatible with AMD and CommonJs
(function(extension) {
    'use strict';

    if (typeof showdown === 'object') {
        // global (browser or nodejs global)
        showdown.extension('toc', extension());
    } else if (typeof define === 'function' && define.amd) {
        // AMD
        define('toc', extension());
    } else if (typeof exports === 'object') {
        // Node, CommonJS-like
        module.exports = extension();
    } else {
        // showdown was not found so we throw
        throw Error('Could not find showdown library');
    }

}(function() {

    function getHeaderEntries(sourceHtml) {
        if (typeof window === 'undefined') {
            return getHeaderEntriesInNodeJs(sourceHtml);
        } else {
            return getHeaderEntriesInBrowser(sourceHtml);
        }
    }

    function getHeaderEntriesInNodeJs(sourceHtml) {
        var cheerio = require('cheerio');
        var $ = cheerio.load(sourceHtml);
        var headers = $('h1, h2, h3, h4, h5, h6');

        var headerList = [];
        for (var i = 0; i < headers.length; i++) {
            var el = headers[i];
            headerList.push(new TocEntry(el.name, $(el).text(), $(el).attr('id')));
        }

        return headerList;
    }

    function getHeaderEntriesInBrowser(sourceHtml) {
        // Generate dummy element
        var source = document.createElement('div');
        source.innerHTML = sourceHtml;

        // Find headers
        var headers = source.querySelectorAll('h1, h2, h3, h4, h5, h6');
        var headerList = [];
        for (var i = 0; i < headers.length; i++) {
            var el = headers[i];
            headerList.push(new TocEntry(el.tagName, el.textContent, el.id));
        }

        return headerList;
    }

    function TocEntry(tagName, text, anchor) {
        this.tagName = tagName;
        this.text = text;
        this.anchor = anchor;
        this.children = [];
    }

    TocEntry.prototype.childrenToString = function() {
        if (this.children.length === 0) {
            return "";
        }
        var result = "<ul>\n";
        for (var i = 0; i < this.children.length; i++) {
            result += this.children[i].toString();
        }
        result += "</ul>\n";
        return result;
    };

    TocEntry.prototype.toString = function() {
        var result = "<li>";
        if (this.text) {
            result += "<a href=\"#" + this.anchor + "\">" + this.text + "</a>";
        }
        result += this.childrenToString();
        result += "</li>\n";
        return result;
    };

    function sortHeader(tocEntries, level) {
        level = level || 1;
        var tagName = "H" + level,
            result = [],
            currentTocEntry;

        function push(tocEntry) {
            if (tocEntry !== undefined) {
                if (tocEntry.children.length > 0) {
                    tocEntry.children = sortHeader(tocEntry.children, level + 1);
                }
                result.push(tocEntry);
            }
        }

        for (var i = 0; i < tocEntries.length; i++) {
            var tocEntry = tocEntries[i];
            if (tocEntry.tagName.toUpperCase() !== tagName) {
                if (currentTocEntry === undefined) {
                    currentTocEntry = new TocEntry();
                }
                currentTocEntry.children.push(tocEntry);
            } else {
                push(currentTocEntry);
                currentTocEntry = tocEntry;
            }
        }

        push(currentTocEntry);
        return result;
    }

    return {
        type: 'output',
        filter: function(sourceHtml) {
            var headerList = getHeaderEntries(sourceHtml);

            // No header found
            if (headerList.length === 0) {
                return sourceHtml;
            }

            // Sort header
            headerList = sortHeader(headerList);

            // Build result and replace all [toc]
            var result = '<div class="toc">\n<ul>\n' + headerList.join("") + '</ul>\n</div>\n';
            return sourceHtml.replace(/\[toc\]/gi, result);
        }
    };
}));
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -1,34 +0,0 @@
<!--
Copyright (C) 2018-2020 Intel Corporation

SPDX-License-Identifier: MIT
-->
<!DOCTYPE html>
{% load static compress %}

<head>
    <title>{% block title %} {% endblock %}</title>
    {% compress js file thirdparty %}
    <script type="text/javascript" src="{% static 'documentation/js/3rdparty/showdown.js' %}"></script>
    <script type="text/javascript" src="{% static 'documentation/js/3rdparty/showdown-toc.js' %}"></script>
    {% endcompress %}
</head>

<body>
    <xmp id="content" style="display: none">
        {% autoescape off %}
        {% block content %}
        {% endblock %}
        {% endautoescape %}
    </xmp>
    <script type="text/javascript">
        var converter = new showdown.Converter({ extensions: ['toc'] });
        converter.setFlavor('github');
        var user_guide = document.getElementById('content').innerHTML;
        // For GitHub documentation we need to have relative links without
        // leading slash. Let's just add the leading slash here to have correct
        // links inside online documentation.
        user_guide = user_guide.replace(/!\[\]\(static/g, '![](/static');
        document.body.innerHTML = converter.makeHtml(user_guide);
    </script>
</body>
@ -1,14 +0,0 @@
<!--
Copyright (C) 2018-2020 Intel Corporation

SPDX-License-Identifier: MIT
-->
{% extends 'documentation/base_page.html' %}

{% block title %}
CVAT User Guide
{% endblock %}

{% block content %}
{{ user_guide }}
{% endblock %}
@ -1,8 +0,0 @@
<!--
Copyright (C) 2018-2020 Intel Corporation

SPDX-License-Identifier: MIT
-->
{% extends 'documentation/base_page.html' %}
{% block title %} CVAT XML format {% endblock %}
{% block content %} {{ xml_format }} {% endblock %}
@ -1,13 +0,0 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.urls import path
from . import views

urlpatterns = [
    path('user_guide.html', views.UserGuideView),
    path('xml_format.html', views.XmlFormatView),
]
File diff suppressed because it is too large
@ -1,21 +0,0 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.shortcuts import render
import os


def UserGuideView(request):
    module_dir = os.path.dirname(__file__)
    doc_path = os.path.join(module_dir, 'user_guide.md')

    return render(request, 'documentation/user_guide.html',
        context={"user_guide": open(doc_path, "r").read()})


def XmlFormatView(request):
    module_dir = os.path.dirname(__file__)
    doc_path = os.path.join(module_dir, 'xml_format.md')

    return render(request, 'documentation/xml_format.html',
        context={"xml_format": open(doc_path, "r").read()})
@ -0,0 +1 @@
<svg width="98" height="27" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><defs><path d="M101 0v29l-52.544.001C44.326 35.511 35.598 40 25.5 40 11.417 40 0 31.27 0 20.5S11.417 1 25.5 1c4.542 0 8.807.908 12.5 2.5V0h63z" id="logoA"/></defs><g transform="translate(-2 -1)" fill="none" fill-rule="evenodd"><mask id="logoB" fill="#fff"><use xlink:href="#logoA"/></mask><path d="M48.142 1c4.736 0 6.879 3.234 6.879 5.904v2.068h-4.737V6.904c0-.79-.789-2.144-2.142-2.144-1.654 0-2.368 1.354-2.368 2.144v15.192c0 .79.714 2.144 2.368 2.144 1.353 0 2.142-1.354 2.142-2.144v-2.068h4.737v2.068c0 2.67-2.143 5.904-6.88 5.904C42.956 28 41 24.766 41 22.134V6.904C41 4.234 42.955 1 48.142 1zM19-6c9.389 0 17 7.611 17 17s-7.611 17-17 17S2 20.389 2 11 9.611-6 19-6zm42.256 7.338l3.345 19.48h.075l3.42-19.48h5l-6.052 26.324h-5L56.22 1.338h5.037zm20.706 0l5.413 26.324h-4.699l-.94-6.13h-4.548l-.902 6.13h-4.435l5.413-26.324h4.698zm18.038 0v3.723h-4.849v22.6h-4.699v-22.6h-4.81V1.338H100zM19 4a7 7 0 100 14 7 7 0 000-14zm60.557 4.295h-.113l-1.466 9.439h3.007l-1.428-9.439z" fill="#fff" fill-rule="nonzero" mask="url(#logoB)"/></g></svg>
After Width: | Height: | Size: 1.1 KiB |
@ -0,0 +1,110 @@
// Copyright (C) 2021 Intel Corporation
//
// SPDX-License-Identifier: MIT

/* Increased left padding on the sidebar of documentation */

.td-sidebar-nav__section-title .td-sidebar-nav__section {
    padding-left: 0.3rem;
}

/* Main documentation page */

#docs section {
    padding-top: 2rem;
    padding-bottom: 7rem;
}

#docs .row div {
    margin-top: 1rem;
}

/* Footer */

.footer-disclaimer {
    font-size: 0.83rem;
    line-height: 1.25;
    margin-top: 0.5rem;
    margin-bottom: 0.5rem;
}

.container-fluid footer {
    min-height: inherit;
    padding-bottom: 0.5rem !important;
    padding-top: 2rem !important;
}

/* Icon color for temporary page */

#temporary-page i {
    color: lightgrey;
}

/* About page */

.logo-2 {
    opacity: 0.8;
}

.history #year h2 {
    text-shadow: 0 0 3px rgb(27, 27, 27);
}

.avatar:hover img {
    box-shadow: 0 0 15px gray;
}

.developer-info-list-item {
    min-width: 15rem !important;
}

.location {
    width: 70%;
}

.marker-location i {
    color: lightgray;
}

/* World map block "the team" */

.team-container {
    margin: auto;
    max-width: 1200px;
}

.world-map-container {
    width: 100%;
}

#world-map {
    z-index: 1;
    width: 100%;
    height: 100%;
}

#world-map-marker {
    z-index: 2;
    position: absolute;
    border-radius: 50%;
    border: 2px white solid;
    box-shadow: 2px 2px 1px gray;
    max-height: 25px;
}

#world-map-marker:hover {
    border: 4px white solid;
}

#world-map-marker:hover #tooltip div {
    visibility: visible;
}

#tooltip {
    background: white;
    color: #000;
    padding: 4px 8px;
    font-size: 13px;
    border-radius: 8px;
    visibility: hidden;
}
@ -0,0 +1,17 @@
// Copyright (C) 2021 Intel Corporation
//
// SPDX-License-Identifier: MIT

/*

Add styles or override variables from the theme here.

*/

@import 'custom';

$enable-gradients: false;
$enable-rounded: true;
$enable-shadows: true;

$info: #f1f1f1;
@ -0,0 +1,196 @@
baseURL = "/"
title = "CVAT"
relativeURLs = true

enableRobotsTXT = true

# Hugo allows theme composition (and inheritance). The precedence is from left to right.
theme = ["docsy"]

# Will give values to .Lastmod etc.
enableGitInfo = true

# Language settings
contentDir = "content/en"
defaultContentLanguage = "en"
defaultContentLanguageInSubdir = false
# Useful when translating.
enableMissingTranslationPlaceholders = true

disableKinds = ["taxonomy", "taxonomyTerm"]

# Highlighting config
pygmentsCodeFences = true
pygmentsUseClasses = false
# Use the new Chroma Go highlighter in Hugo.
pygmentsUseClassic = false
#pygmentsOptions = "linenos=table"
# See https://help.farbox.com/pygments.html
pygmentsStyle = "tango"

# Configure how URLs look per section.
[permalinks]
blog = "/:section/:year/:month/:day/:slug/"

## Configuration for BlackFriday markdown parser: https://github.com/russross/blackfriday
[blackfriday]
plainIDAnchors = true
hrefTargetBlank = true
angledQuotes = false
latexDashes = true

# Image processing configuration.
[imaging]
resampleFilter = "CatmullRom"
quality = 75
anchor = "smart"

[[menu.main]]
    name = "Try it now"
    weight = 50
    url = "https://cvat.org"

[services]
[services.googleAnalytics]
# Comment out the next line to disable GA tracking. Also disables the feature described in [params.ui.feedback].
id = "UA-00000000-0"

# Language configuration

[languages]
[languages.en]
title = ""
description = ""
languageName = "English"
# Weight used for sorting.
weight = 1

[markup]
  [markup.goldmark]
    [markup.goldmark.renderer]
      unsafe = true
  [markup.highlight]
      # See a complete list of available styles at https://xyproto.github.io/splash/docs/all.html
      style = "tango"
      # Uncomment if you want your chosen highlight style used for code blocks without a specified language
      # guessSyntax = "true"

# Everything below this are Site Params

# Comment out if you don't want the "print entire section" link enabled.
[outputs]
section = ["HTML", "print"]

[params]
intel_terms_of_use = "https://www.intel.com/content/www/us/en/legal/terms-of-use.html"
intel_privacy_notice = "https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html"
cvat_terms_of_use = "https://cvat.org/api/v1/restrictions/terms-of-use"

# First one is picked as the Twitter card image if not set on page.
# images = ["images/project-illustration.png"]

# Menu title if your navbar has a versions selector to access old versions of your site.
# This menu appears only if you have at least one [params.versions] set.
version_menu = "Releases"

# Flag used in the "version-banner" partial to decide whether to display a
# banner on every page indicating that this is an archived version of the docs.
# Set this flag to "true" if you want to display the banner.
archived_version = false

# The version number for the version of the docs represented in this doc set.
# Used in the "version-banner" partial to display a version number for the
# current doc set.
version = "0.0"

# A link to latest version of the docs. Used in the "version-banner" partial to
# point people to the main doc site.
url_latest_version = "https://example.com"

# Repository configuration (URLs for in-page links to opening issues and suggesting changes)
github_repo = "https://github.com/openvinotoolkit/cvat"
# An optional link to a related project repo. For example, the sibling repository where your product code lives.
github_project_repo = "https://github.com/openvinotoolkit/cvat"

# Specify a value here if your content directory is not in your repo's root directory
# github_subdir = ""

# Uncomment this if you have a newer GitHub repo with "main" as the default branch,
# or specify a new value if you want to reference another branch in your GitHub links
github_branch = "develop"

# Google Custom Search Engine ID. Remove or comment out to disable search.
# gcs_engine_id = "011737558837375720776:fsdu1nryfng"

# Enable Algolia DocSearch
algolia_docsearch = false

# Enable Lunr.js offline search
offlineSearch = true

# Enable syntax highlighting and copy buttons on code blocks with Prism
|
||||
prism_syntax_highlighting = false
|
||||
|
||||
# User interface configuration
|
||||
[params.ui]
|
||||
# Enable to show the side bar menu in its compact state.
|
||||
sidebar_menu_compact = true
|
||||
ul_show = 2
|
||||
# Set to true to disable breadcrumb navigation.
|
||||
breadcrumb_disable = false
|
||||
# Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled)
|
||||
sidebar_search_disable = true
|
||||
# Set to false if you don't want to display a logo (/assets/icons/logo.svg) in the top nav bar
|
||||
navbar_logo = true
|
||||
# Set to true to disable the About link in the site footer
|
||||
footer_about_disable = false
|
||||
|
||||
# Adds a H2 section titled "Feedback" to the bottom of each doc. The responses are sent to Google Analytics as events.
|
||||
# This feature depends on [services.googleAnalytics] and will be disabled if "services.googleAnalytics.id" is not set.
|
||||
# If you want this feature, but occasionally need to remove the "Feedback" section from a single page,
|
||||
# add "hide_feedback: true" to the page's front matter.
|
||||
[params.ui.feedback]
|
||||
enable = false
|
||||
# The responses that the user sees after clicking "yes" (the page was helpful) or "no" (the page was not helpful).
|
||||
yes = 'Glad to hear it! Please <a href="https://github.com/USERNAME/REPOSITORY/issues/new">tell us how we can improve</a>.'
|
||||
no = 'Sorry to hear that. Please <a href="https://github.com/USERNAME/REPOSITORY/issues/new">tell us how we can improve</a>.'
|
||||
|
||||
# Adds a reading time to the top of each doc.
|
||||
# If you want this feature, but occasionally need to remove the Reading time from a single page,
|
||||
# add "hide_readingtime: true" to the page's front matter
|
||||
[params.ui.readingtime]
|
||||
enable = false
|
||||
|
||||
[params.links]
|
||||
# End user relevant links. These will show up on left side of footer and in the community page if you have one.
|
||||
[[params.links.user]]
|
||||
name ="Gitter public chat"
|
||||
url = "https://gitter.im/opencv-cvat/public"
|
||||
icon = "fab fa-gitter"
|
||||
desc = "Join our Gitter channel for community support."
|
||||
[[params.links.user]]
|
||||
name = "Stack Overflow"
|
||||
url = "https://stackoverflow.com/search?q=%23cvat"
|
||||
icon = "fab fa-stack-overflow"
|
||||
desc = "Practical questions and curated answers"
|
||||
[[params.links.user]]
|
||||
name = "YouTube"
|
||||
url = "https://www.youtube.com/user/nmanovic"
|
||||
icon = "fab fa-youtube"
|
||||
desc = "Practical questions and curated answers"
|
||||
# Developer relevant links. These will show up on right side of footer and in the community page if you have one.
|
||||
[[params.links.developer]]
|
||||
name = "GitHub"
|
||||
url = "https://github.com/openvinotoolkit/cvat"
|
||||
icon = "fab fa-github"
|
||||
desc = "Development takes place here!"
|
||||
[[params.links.developer]]
|
||||
name = "Forum on Intel Developer Zone"
|
||||
url = "https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit"
|
||||
icon = "fas fa-envelope"
|
||||
desc = "Development takes place here!"
|
||||
[[params.links.developer]]
|
||||
name ="Gitter developers chat"
|
||||
url = "https://gitter.im/opencv-cvat/dev"
|
||||
icon = "fab fa-gitter"
|
||||
desc = "Join our Gitter channel for community support."
|
||||
@ -0,0 +1,22 @@
+++
title = "CVAT"
linkTitle = "CVAT"
+++

{{< blocks/section height="full" color="docs" >}}

<section id="temporary-page" class="mx-auto text-center py-5">
  <div class="py-4">
    <i class="fas fa-tools fa-7x"></i>
  </div>
  <div class="py-4">
    <h1 class="text-center">This page is in development.</h1>
  </div>
  <div class="py-4">
    <h3 class="text-center">
      Visit our <a href="https://github.com/openvinotoolkit/cvat">GitHub</a> repository.
    </h3>
  </div>
</section>

{{< /blocks/section >}}
Binary file not shown.
After Width: | Height: | Size: 620 KiB |
@ -0,0 +1,59 @@
---
title: 'CVAT Documentation'
linkTitle: 'Documentation'
no_list: true
menu:
  main:
    weight: 20
---

CVAT is a free, online, interactive video and image annotation tool for computer vision.
It is being developed and used by Intel to annotate millions of objects with different properties.
Many UI and UX decisions are based on feedback from a professional data annotation team.
Try it online at [cvat.org](https://cvat.org).

<section id="docs">

{{< blocks/section color="docs" >}}

{{% blocks/feature icon="fa-server" title="[Installation Guide](/docs/for-users/installation/)" %}}

CVAT installation guide for different operating systems.

{{% /blocks/feature %}}

{{% blocks/feature icon="fa-book" title="[User's Guide](/docs/for-users/user-guide/)" %}}

This multipage document contains information on how to work with the CVAT user interface.

{{% /blocks/feature %}}

{{% blocks/feature icon="fa-question" title="[FAQ](/docs/for-users/faq/)" %}}

Answers to frequently asked questions.

{{% /blocks/feature %}}

<!--lint disable maximum-line-length-->

{{% blocks/feature icon="fa-magic" title="[Installation Auto Annotation](/docs/for-users/installation_automatic_annotation/)" %}}

This page provides information about the installation of components needed for semi-automatic and automatic annotation.

{{% /blocks/feature %}}

{{% blocks/feature icon="fa-terminal" title="[For Developers](/docs/for-developers/)" %}}

This section contains documents for system administrators, AI researchers and any other advanced users.

{{% /blocks/feature %}}

{{% blocks/feature icon="fab fa-github" title="[GitHub Repository](https://github.com/openvinotoolkit/cvat)" %}}

Computer Vision Annotation Tool GitHub repository.

{{% /blocks/feature %}}

{{< /blocks/section >}}

</section>
@ -0,0 +1,41 @@
---
title: 'AWS-Deployment Guide'
linkTitle: 'AWS-Deployment Guide'
weight: 4
---

There are two ways to deploy CVAT.

1. **On an Nvidia GPU machine:** The TensorFlow annotation feature depends on GPU hardware.
   One of the easiest ways to launch CVAT with the tf-annotation app is to use AWS P3 instances,
   which provide NVIDIA GPUs.
   Read more about [P3 instances here.](https://aws.amazon.com/about-aws/whats-new/2017/10/introducing-amazon-ec2-p3-instances/)
   The overall setup is explained in the [main readme file](https://github.com/opencv/cvat/),
   except for installing the Nvidia drivers, which have to be downloaded and installed separately.
   For Amazon P3 instances, download the drivers from the Nvidia website.
   For details, see [Installing the NVIDIA Driver on Linux Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html).

2. **On any other AWS machine:** Follow the same
   [installation instructions](/docs/for-users/installation/).
   The additional step is to add a [security group and rule to allow incoming connections](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).

For either option, don't forget to add the exposed AWS public IP address or hostname to `docker-compose.override.yml`:

```yaml
version: "2.3"
services:
  cvat_proxy:
    environment:
      CVAT_HOST: your-instance.amazonaws.com
```

If the hostname causes problems, you can use the public IPv4 address instead.
On AWS, or on any cloud machine whose instances are stopped or terminated,
the public IPv4 address and hostname change with every stop and reboot.
To handle this efficiently, avoid spot instances, which cannot be stopped:
copying the EBS volume to an AMI and relaunching it causes problems.
When a regular instance is stopped and restarted, on the other hand,
the new hostname/IPv4 address can be put into the `CVAT_HOST` variable in `docker-compose.override.yml`,
and the build can happen instantly, with CVAT tasks available through the new address.
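When the instance comes back with a new address, the override file has to be regenerated. A minimal sketch is below; the placeholder hostname and the metadata-service lookup mentioned in the comment are assumptions about your setup, not part of CVAT:

```shell
#!/bin/sh
# Hypothetical helper: rewrite docker-compose.override.yml after an instance
# restart. On a real EC2 instance, PUBLIC_HOST could instead be fetched with:
#   PUBLIC_HOST=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
PUBLIC_HOST="ec2-203-0-113-10.compute-1.amazonaws.com"  # placeholder value

# Regenerate the override file with the current public hostname.
cat > docker-compose.override.yml <<EOF
version: "2.3"
services:
  cvat_proxy:
    environment:
      CVAT_HOST: ${PUBLIC_HOST}
EOF

echo "CVAT_HOST set to ${PUBLIC_HOST}"
```

Running this before `docker-compose up` keeps `CVAT_HOST` in sync with the instance's current address.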
@ -0,0 +1,11 @@
<!--lint disable maximum-heading-length-->

---

title: 'For Developers'
linkTitle: 'For Developers'
weight: 3
description: 'This section contains documents for system administrators, AI researchers and any other advanced users.'
hide_feedback: true

---
@ -1,6 +1,17 @@
## Analytics for Computer Vision Annotation Tool (CVAT)
<!--lint disable maximum-heading-length-->



---

title: 'Analytics for Computer Vision Annotation Tool (CVAT)'
linkTitle: 'Analytics'
weight: 2
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/components/analytics)

---

<!--lint disable heading-style-->



It is possible to proxy annotation logs from the client to ELK. To do that, run the command below:
@ -1,4 +1,9 @@
# Command line interface (CLI)
---
title: "Command line interface (CLI)"
linkTitle: "CLI"
weight: 3
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/utils/cli)
---

**Description**
A simple command line interface for working with CVAT tasks. At the moment it
@ -1,4 +1,15 @@
## Simple command line to prepare dataset manifest file
<!--lint disable maximum-heading-length-->

---

title: 'Simple command line to prepare dataset manifest file'
linkTitle: 'Dataset manifest'
weight: 10
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/utils/dataset_manifest)

---

<!--lint disable heading-style-->

### Steps before use
@ -0,0 +1,395 @@
---
title: 'Mounting cloud storage'
linkTitle: 'Mounting cloud storage'
weight: 10
---

<!--lint disable heading-style-->

## AWS S3 bucket as filesystem

### <a name="aws_s3_ubuntu_2004">Ubuntu 20.04</a>

#### <a name="aws_s3_mount">Mount</a>

1. Install s3fs:

   ```bash
   sudo apt install s3fs
   ```

1. Enter your credentials in a file `${HOME}/.passwd-s3fs` and set owner-only permissions:

   ```bash
   echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
   chmod 600 ${HOME}/.passwd-s3fs
   ```

1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Run s3fs, replacing `bucket_name` and `mount_point`:

   ```bash
   s3fs <bucket_name> <mount_point> -o allow_other
   ```

For more details see [here](https://github.com/s3fs-fuse/s3fs-fuse).

#### <a name="aws_s3_automatically_mount">Automatically mount</a>

Follow the first 3 mounting steps above.

##### <a name="aws_s3_using_fstab">Using fstab</a>

1. Create a bash script named `aws_s3_fuse` (e.g. in `/usr/bin`, as root) with this content
   (replace `user_name` on whose behalf the disk will be mounted, `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):

   ```bash
   #!/bin/bash
   sudo -u <user_name> s3fs <bucket_name> <mount_point> -o passwd_file=/path/to/.passwd-s3fs -o allow_other
   exit 0
   ```

1. Give it the execution permission:

   ```bash
   sudo chmod +x /usr/bin/aws_s3_fuse
   ```

1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):

   ```bash
   /absolute/path/to/aws_s3_fuse <mount_point> fuse allow_other,user,_netdev 0 0
   ```

##### <a name="aws_s3_using_systemd">Using systemd</a>

1. Create a unit file `sudo nano /etc/systemd/system/s3fs.service`
   (replace `user_name`, `bucket_name`, `mount_point`, `/path/to/.passwd-s3fs`):

   ```bash
   [Unit]
   Description=FUSE filesystem over AWS S3 bucket
   After=network.target

   [Service]
   Environment="MOUNT_POINT=<mount_point>"
   User=<user_name>
   Group=<user_name>
   ExecStart=s3fs <bucket_name> ${MOUNT_POINT} -o passwd_file=/path/to/.passwd-s3fs -o allow_other
   ExecStop=fusermount -u ${MOUNT_POINT}
   Restart=always
   Type=forking

   [Install]
   WantedBy=multi-user.target
   ```

1. Update the system configurations, enable unit autorun when the system boots, and mount the bucket:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl enable s3fs.service
   sudo systemctl start s3fs.service
   ```

#### <a name="aws_s3_check">Check</a>

The file `/etc/mtab` contains records of currently mounted filesystems.

```bash
cat /etc/mtab | grep 's3fs'
```

#### <a name="aws_s3_unmount_filesystem">Unmount filesystem</a>

```bash
fusermount -u <mount_point>
```

If you used [systemd](#aws_s3_using_systemd) to mount a bucket:

```bash
sudo systemctl stop s3fs.service
sudo systemctl disable s3fs.service
```

## Microsoft Azure container as filesystem

### <a name="azure_ubuntu_2004">Ubuntu 20.04</a>

#### <a name="azure_mount">Mount</a>

1. Set up the Microsoft package repository (more details [here](https://docs.microsoft.com/en-us/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software#configuring-the-repositories)):

   ```bash
   wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
   sudo dpkg -i packages-microsoft-prod.deb
   sudo apt-get update
   ```

1. Install `blobfuse` and `fuse`:

   ```bash
   sudo apt-get install blobfuse fuse
   ```

   For more details see [here](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation)

1. Create environment variables (replace `account_name`, `account_key`, `mount_point`):

   ```bash
   export AZURE_STORAGE_ACCOUNT=<account_name>
   export AZURE_STORAGE_ACCESS_KEY=<account_key>
   MOUNT_POINT=<mount_point>
   ```

1. Create a folder for the cache:

   ```bash
   sudo mkdir -p /mnt/blobfusetmp
   ```

1. Make sure the folder is owned by the user who mounts the container:

   ```bash
   sudo chown <user> /mnt/blobfusetmp
   ```

1. Create the mount point, if it doesn't exist:

   ```bash
   mkdir -p ${MOUNT_POINT}
   ```

1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the container (replace `your_container`):

   ```bash
   blobfuse ${MOUNT_POINT} --container-name=<your_container> --tmp-path=/mnt/blobfusetmp -o allow_other
   ```

#### <a name="azure_automatically_mount">Automatically mount</a>

Follow the first 7 mounting steps above.

##### <a name="azure_using_fstab">Using fstab</a>

1. Create a configuration file `connection.cfg` with the content below; change `accountName`,
   keep either `accountKey` or `sasToken` (delete the other line), and replace the placeholders with your values:

   ```bash
   accountName <account-name-here>
   # Please provide either an account key or a SAS token, and delete the other line.
   accountKey <account-key-here-delete-next-line>
   #change authType to specify only 1
   sasToken <shared-access-token-here-delete-previous-line>
   authType <MSI/SAS/SPN/Key/empty>
   containerName <insert-container-name-here>
   ```

1. Create a bash script named `azure_fuse` (e.g. in `/usr/bin`, as root) with the content below
   (replace `user_name` on whose behalf the disk will be mounted, `mount_point`, `/path/to/blobfusetmp`, `/path/to/connection.cfg`):

   ```bash
   #!/bin/bash
   sudo -u <user_name> blobfuse <mount_point> --tmp-path=/path/to/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
   exit 0
   ```

1. Give it the execution permission:

   ```bash
   sudo chmod +x /usr/bin/azure_fuse
   ```

1. Edit `/etc/fstab` with the blobfuse script. Add the following line (replace the paths):

   ```bash
   /absolute/path/to/azure_fuse </path/to/desired/mountpoint> fuse allow_other,user,_netdev
   ```

##### <a name="azure_using_systemd">Using systemd</a>

1. Create a unit file `sudo nano /etc/systemd/system/blobfuse.service`
   (replace `user_name`, `mount_point`, `container_name`, `/path/to/connection.cfg`):

   ```bash
   [Unit]
   Description=FUSE filesystem over Azure container
   After=network.target

   [Service]
   Environment="MOUNT_POINT=<mount_point>"
   User=<user_name>
   Group=<user_name>
   ExecStart=blobfuse ${MOUNT_POINT} --container-name=<container_name> --tmp-path=/mnt/blobfusetmp --config-file=/path/to/connection.cfg -o allow_other
   ExecStop=fusermount -u ${MOUNT_POINT}
   Restart=always
   Type=forking

   [Install]
   WantedBy=multi-user.target
   ```

1. Update the system configurations, enable unit autorun when the system boots, and mount the container:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl enable blobfuse.service
   sudo systemctl start blobfuse.service
   ```

   Or for more detail [see here](https://github.com/Azure/azure-storage-fuse/tree/master/systemd)

#### <a name="azure_check">Check</a>

The file `/etc/mtab` contains records of currently mounted filesystems.

```bash
cat /etc/mtab | grep 'blobfuse'
```

#### <a name="azure_unmount_filesystem">Unmount filesystem</a>

```bash
fusermount -u <mount_point>
```

If you used [systemd](#azure_using_systemd) to mount a container:

```bash
sudo systemctl stop blobfuse.service
sudo systemctl disable blobfuse.service
```

If you have any mounting problems, check out the [answers](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ)
to common problems.

## Google Drive as filesystem

### <a name="google_drive_ubuntu_2004">Ubuntu 20.04</a>

#### <a name="google_drive_mount">Mount</a>

To mount a Google Drive as a filesystem in user space (FUSE),
you can use [google-drive-ocamlfuse](https://github.com/astrada/google-drive-ocamlfuse).
To do this, follow the instructions below:

1. Install google-drive-ocamlfuse:

   ```bash
   sudo add-apt-repository ppa:alessandro-strada/ppa
   sudo apt-get update
   sudo apt-get install google-drive-ocamlfuse
   ```

1. Run `google-drive-ocamlfuse` without parameters:

   ```bash
   google-drive-ocamlfuse
   ```

   This command will create the default application directory (`~/.gdfuse/default`),
   containing the configuration file `config` (see the [wiki](https://github.com/astrada/google-drive-ocamlfuse/wiki)
   page for more details about configuration),
   and it will start a web browser to obtain authorization to access your Google Drive.
   This will let you modify the default configuration before mounting the filesystem.

   Then you can choose a local directory to mount your Google Drive (e.g. `~/GoogleDrive`).

1. Create the mount point, if it doesn't exist (replace `mount_point`):

   ```bash
   mountpoint="<mount_point>"
   mkdir -p $mountpoint
   ```

1. Uncomment `user_allow_other` in the `/etc/fuse.conf` file: `sudo nano /etc/fuse.conf`
1. Mount the filesystem:

   ```bash
   google-drive-ocamlfuse -o allow_other $mountpoint
   ```

#### <a name="google_drive_automatically_mount">Automatically mount</a>

Follow the first 4 mounting steps above.

##### <a name="google_drive_using_fstab">Using fstab</a>

1. Create a bash script named `gdfuse` (e.g. in `/usr/bin`, as root) with this content
   (replace `user_name` on whose behalf the disk will be mounted, `label`, `mount_point`):

   ```bash
   #!/bin/bash
   sudo -u <user_name> google-drive-ocamlfuse -o allow_other -label <label> <mount_point>
   exit 0
   ```

1. Give it the execution permission:

   ```bash
   sudo chmod +x /usr/bin/gdfuse
   ```

1. Edit `/etc/fstab`, adding a line like this (replace `mount_point`):

   ```bash
   /absolute/path/to/gdfuse <mount_point> fuse allow_other,user,_netdev 0 0
   ```

   For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)

##### <a name="google_drive_using_systemd">Using systemd</a>

1. Create a unit file `sudo nano /etc/systemd/system/google-drive-ocamlfuse.service`
   (replace `user_name`, `label` (default `label=default`), `mount_point`):

   ```bash
   [Unit]
   Description=FUSE filesystem over Google Drive
   After=network.target

   [Service]
   Environment="MOUNT_POINT=<mount_point>"
   User=<user_name>
   Group=<user_name>
   ExecStart=google-drive-ocamlfuse -label <label> ${MOUNT_POINT}
   ExecStop=fusermount -u ${MOUNT_POINT}
   Restart=always
   Type=forking

   [Install]
   WantedBy=multi-user.target
   ```

1. Update the system configurations, enable unit autorun when the system boots, and mount the drive:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl enable google-drive-ocamlfuse.service
   sudo systemctl start google-drive-ocamlfuse.service
   ```

   For more details see [here](https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting)

#### <a name="google_drive_check">Check</a>

The file `/etc/mtab` contains records of currently mounted filesystems.

```bash
cat /etc/mtab | grep 'google-drive-ocamlfuse'
```

#### <a name="google_drive_unmount_filesystem">Unmount filesystem</a>

```bash
fusermount -u <mount_point>
```

If you used [systemd](#google_drive_using_systemd) to mount a drive:

```bash
sudo systemctl stop google-drive-ocamlfuse.service
sudo systemctl disable google-drive-ocamlfuse.service
```
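Beyond grepping `/etc/mtab`, a quick way to confirm that any of the FUSE mounts above actually serves files is to write a probe file and read it back. This is only a sketch; the default `MOUNT_POINT` value is a stand-in, and you would pass your real mount point:

```shell
#!/bin/sh
# Smoke test for a mounted filesystem: write a probe file and read it back.
MOUNT_POINT="${1:-/tmp/demo_mount}"   # pass your real mount point as $1
mkdir -p "$MOUNT_POINT"
probe="$MOUNT_POINT/.cvat_mount_probe"

if echo ok > "$probe" && [ "$(cat "$probe")" = "ok" ]; then
    echo "mount at $MOUNT_POINT is writable"
    rm -f "$probe"
else
    echo "mount at $MOUNT_POINT is NOT usable" >&2
    exit 1
fi
```

A failing probe usually points at permissions (`user_allow_other`, `-o allow_other`) or expired credentials rather than the FUSE daemon itself.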
@ -0,0 +1,11 @@
<!--lint disable heading-style-->

---

title: 'For Users'
linkTitle: 'For Users'
weight: 2
description: 'This section contains documents for CVAT users'
hide_feedback: true

---
@ -0,0 +1,145 @@
---
title: 'Dataset and annotation formats'
linkTitle: 'Formats'
weight: 6
description: This section on [GitHub](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats)
---

<!-- lint disable heading-style -->

## How to add a new annotation format support<a id="how-to-add"></a>

1. Add a Python script to `dataset_manager/formats`.
1. Add an import statement to [registry.py](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats/registry.py).
1. Implement importers and exporters as the format requires.

Each format is supported by an importer and an exporter.

It can be a function or a class decorated with
`importer` or `exporter` from [registry.py](https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/dataset_manager/formats/registry.py).
Examples:

```python
@importer(name="MyFormat", version="1.0", ext="ZIP")
def my_importer(file_object, task_data, **options):
    ...

@importer(name="MyFormat", version="2.0", ext="XML")
class my_importer:
    def __call__(self, file_object, task_data, **options):
        ...

@exporter(name="MyFormat", version="1.0", ext="ZIP")
def my_exporter(file_object, task_data, **options):
    ...
```

Each decorator defines format parameters such as:

- _name_

- _version_

- _file extension_. For the `importer` it can be a comma-separated list.
  These parameters are combined to produce a visible name. It can be
  set explicitly by the `display_name` argument.

Importer arguments:

- _file_object_ - a file with annotations or a dataset
- _task_data_ - an instance of the `TaskData` class.

Exporter arguments:

- _file_object_ - a file for annotations or a dataset

- _task_data_ - an instance of the `TaskData` class.

- _options_ - format-specific options. `save_images` is the option that
  distinguishes whether a full dataset or just annotations are requested.

[`TaskData`](https://github.com/openvinotoolkit/cvat/blob/develop/cvat/apps/dataset_manager/bindings.py) provides
many task properties and interfaces for adding and reading task annotations.

Public members:

- **TaskData.Attribute** - class, `namedtuple('Attribute', 'name, value')`

- **TaskData.LabeledShape** - class, `namedtuple('LabeledShape', 'type, frame, label, points, occluded, attributes, group, z_order')`

- **TrackedShape** - class, `namedtuple('TrackedShape', 'type, points, occluded, frame, attributes, outside, keyframe, z_order')`

- **Track** - class, `namedtuple('Track', 'label, group, shapes')`

- **Tag** - class, `namedtuple('Tag', 'frame, label, attributes, group')`

- **Frame** - class, `namedtuple('Frame', 'frame, name, width, height, labeled_shapes, tags')`

- **TaskData.shapes** - property, an iterator over `LabeledShape` objects

- **TaskData.tracks** - property, an iterator over `Track` objects

- **TaskData.tags** - property, an iterator over `Tag` objects

- **TaskData.meta** - property, a dictionary with task information

- **TaskData.group_by_frame()** - method, returns
  an iterator over `Frame` objects, which groups annotation objects by frame.
  Note that `TrackedShape`s will be represented as `LabeledShape`s.

- **TaskData.add_tag(tag)** - method,
  tag should be an instance of the `Tag` class

- **TaskData.add_shape(shape)** - method,
  shape should be an instance of the `Shape` class

- **TaskData.add_track(track)** - method,
  track should be an instance of the `Track` class

Sample exporter code:

```python
...
# dump meta info if necessary
...
# iterate over all frames
for frame_annotation in task_data.group_by_frame():
    # get frame info
    image_name = frame_annotation.name
    image_width = frame_annotation.width
    image_height = frame_annotation.height
    # iterate over all shapes on the frame
    for shape in frame_annotation.labeled_shapes:
        label = shape.label
        xtl = shape.points[0]
        ytl = shape.points[1]
        xbr = shape.points[2]
        ybr = shape.points[3]
        # iterate over shape attributes
        for attr in shape.attributes:
            attr_name = attr.name
            attr_value = attr.value
...
# dump annotation code
file_object.write(...)
...
```

Sample importer code:

```python
...
# read file_object
...
for parsed_shape in parsed_shapes:
    shape = task_data.LabeledShape(
        type="rectangle",
        points=[0, 0, 100, 100],
        occluded=False,
        attributes=[],
        label="car",
        outside=False,
        frame=99,
    )
    task_data.add_shape(shape)
```
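The registration mechanism described above can be mocked in a few lines to show how the decorator parameters combine into a visible name. This is an illustrative sketch only; CVAT's real `registry.py` differs in its details:

```python
# Minimal mock of a decorator-based format registry (illustrative only).
FORMATS = {}

def importer(name, version, ext, display_name=None):
    """Register a callable under a visible name built from its parameters."""
    def wrap(func):
        visible = display_name or f"{name} {version} ({ext})"
        FORMATS[visible] = func
        return func
    return wrap

@importer(name="MyFormat", version="1.0", ext="ZIP")
def my_importer(file_object, task_data, **options):
    ...

print(sorted(FORMATS))  # ['MyFormat 1.0 (ZIP)']
```

The same decorator-factory pattern explains why forgetting the import statement in `registry.py` makes a format silently invisible: the decorator only runs when the module is imported.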
@ -0,0 +1,22 @@
---
title: "Format specifications:"
linkTitle: "Format specifications"
weight: 1
no_list: true
---

- [CVAT](format-cvat)
- [Datumaro](format-datumaro)
- [LabelMe](format-labelme)
- [MOT](format-mot)
- [MOTS](format-mots)
- [COCO](format-coco)
- [PASCAL VOC and mask](format-voc)
- [YOLO](format-yolo)
- [TF detection API](format-tfrecord)
- [ImageNet](format-imagenet)
- [CamVid](format-camvid)
- [WIDER Face](format-widerface)
- [VGGFace2](format-vggface2)
- [Market-1501](format-market1501)
- [ICDAR13/15](format-icdar)
@ -0,0 +1,42 @@
---
linkTitle: "CamVid"
weight: 10
---

### [CamVid](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/)<a id="camvid" />

#### CamVid export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── labelmap.txt # optional, required for non-CamVid labels
├── <any_subset_name>/
|   ├── image1.png
|   └── image2.png
├── <any_subset_name>annot/
|   ├── image1.png
|   └── image2.png
└── <any_subset_name>.txt

# labelmap.txt
# color (RGB) label
0 0 0 Void
64 128 64 Animal
192 0 128 Archway
0 128 192 Bicyclist
0 128 64 Bridge
```

Mask is a `png` image with 1 or 3 channels where each pixel
has its own color which corresponds to a label.
`(0, 0, 0)` is used for background by default.

- supported annotations: Rectangles, Polygons

#### CamVid import

Uploaded file: a zip archive of the structure above

- supported annotations: Polygons
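The `labelmap.txt` entries shown above have a simple `<r> <g> <b> <label>` layout, so they are straightforward to read; a minimal sketch (the `parse_camvid_labelmap` helper below is hypothetical, not part of CVAT):

```python
def parse_camvid_labelmap(text):
    """Parse CamVid labelmap.txt lines of the form '<r> <g> <b> <label>'."""
    labelmap = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):  # skip comments and blanks
            continue
        r, g, b, label = line.split(maxsplit=3)
        labelmap[label] = (int(r), int(g), int(b))
    return labelmap

sample = """# color (RGB) label
0 0 0 Void
64 128 64 Animal
"""
print(parse_camvid_labelmap(sample))  # {'Void': (0, 0, 0), 'Animal': (64, 128, 64)}
```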
@ -0,0 +1,72 @@
---
linkTitle: 'MS COCO'
weight: 5
---

### [MS COCO Object Detection](http://cocodataset.org/#format-data)<a id="coco" />

- [Format specification](http://cocodataset.org/#format-data)

#### COCO export

Downloaded file: a zip archive with the following structure:

```bash
archive.zip/
├── images/
│   ├── <image_name1.ext>
│   ├── <image_name2.ext>
│   └── ...
└── annotations/
    └── instances_default.json
```

- supported annotations: Polygons, Rectangles
- supported attributes:
  - `is_crowd` (checkbox or integer with values 0 and 1) -
    specifies that the instance (an object group) should have an
    RLE-encoded mask in the `segmentation` field. All the grouped shapes
    are merged into a single mask, the largest one defines all
    the object properties
  - `score` (number) - the annotation `score` field
  - arbitrary attributes - will be stored in the `attributes` annotation section

_Note_: there is also support for [COCO keypoints over Datumaro](https://github.com/openvinotoolkit/cvat/issues/2910#issuecomment-726077582)

1. Install [Datumaro](https://github.com/openvinotoolkit/datumaro)
   `pip install datumaro`
1. Export the task in the `Datumaro` format, unzip
1. Export the Datumaro project in `coco` / `coco_person_keypoints` formats
   `datum export -f coco -p path/to/project [-- --save-images]`

This way, one can export CVAT points as single keypoints or
keypoint lists (without the `visibility` COCO flag).

#### COCO import

Uploaded file: a single unpacked `*.json` or a zip archive with the structure above (without images).

- supported annotations: Polygons, Rectangles (if the `segmentation` field is empty)

#### How to create a task from MS COCO dataset

1. Download the [MS COCO dataset](http://cocodataset.org/#download).

   For example [2017 Val images](http://images.cocodataset.org/zips/val2017.zip)
   and [2017 Train/Val annotations](http://images.cocodataset.org/annotations/annotations_trainval2017.zip).

1. Create a CVAT task with the following labels:

   ```bash
   person bicycle car motorcycle airplane bus train truck boat "traffic light" "fire hydrant" "stop sign" "parking meter" bench bird cat dog horse sheep cow elephant bear zebra giraffe backpack umbrella handbag tie suitcase frisbee skis snowboard "sports ball" kite "baseball bat" "baseball glove" skateboard surfboard "tennis racket" bottle "wine glass" cup fork knife spoon bowl banana apple sandwich orange broccoli carrot "hot dog" pizza donut cake chair couch "potted plant" bed "dining table" toilet tv laptop mouse remote keyboard "cell phone" microwave oven toaster sink refrigerator book clock vase scissors "teddy bear" "hair drier" toothbrush
   ```

1. Select `val2017.zip` as data
   (See [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/)
   guide for details)

1. Unpack `annotations_trainval2017.zip`

1. Click the `Upload annotation` button,
   choose `COCO 1.1` and select the `instances_val2017.json`
   annotation file. It can take some time.
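An exported `instances_default.json` follows the standard COCO layout, so it can be inspected with the standard library alone. The JSON below is a hand-written minimal stand-in for illustration, not a real export:

```python
import json

# hand-written minimal stand-in for annotations/instances_default.json
coco = json.loads("""
{
  "categories": [{"id": 1, "name": "car"}],
  "images": [{"id": 1, "file_name": "image1.jpg", "width": 640, "height": 480}],
  "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                   "bbox": [10, 20, 100, 50], "iscrowd": 0}]
}
""")

names = {c["id"]: c["name"] for c in coco["categories"]}
for ann in coco["annotations"]:
    x, y, w, h = ann["bbox"]  # COCO bbox is [x, y, width, height]
    print(names[ann["category_id"]], x, y, x + w, y + h)  # car 10 20 110 70
```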
@ -0,0 +1,48 @@
---
linkTitle: "CVAT"
weight: 1
---

### CVAT<a id="cvat" />

This is the native CVAT annotation format. It supports all CVAT annotation
features, so it can be used to make data backups.

- supported annotations: Rectangles, Polygons, Polylines,
  Points, Cuboids, Tags, Tracks

- attributes are supported

- [Format specification](/docs/for-developers/xml_format/)

#### CVAT for images export

Downloaded file: a ZIP file of the following structure:

```bash
taskname.zip/
├── images/
|   ├── img1.png
|   └── img2.jpg
└── annotations.xml
```

- tracks are split by frames

#### CVAT for videos export

Downloaded file: a ZIP file of the following structure:

```bash
taskname.zip/
├── images/
|   ├── frame_000000.png
|   └── frame_000001.png
└── annotations.xml
```

- shapes are exported as single-frame tracks

#### CVAT loader

Uploaded file: an XML file or a ZIP file of the structures above
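An exported `annotations.xml` can be inspected with the standard library. The XML below is a hand-written minimal stand-in sketching the "CVAT for images" box layout, not a real export:

```python
import xml.etree.ElementTree as ET

# hand-written minimal stand-in for annotations.xml (CVAT for images)
xml = """
<annotations>
  <image id="0" name="img1.png" width="640" height="480">
    <box label="car" xtl="10.0" ytl="20.0" xbr="110.0" ybr="120.0" occluded="0"/>
  </image>
</annotations>
"""

root = ET.fromstring(xml)
for image in root.iter('image'):
    for box in image.iter('box'):
        # box corners are stored as string attributes on each <box> element
        print(image.get('name'), box.get('label'),
              box.get('xtl'), box.get('ytl'), box.get('xbr'), box.get('ybr'))
# img1.png car 10.0 20.0 110.0 120.0
```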
@ -0,0 +1,15 @@
---
linkTitle: "Datumaro"
weight: 1.5
---

### Datumaro format <a id="datumaro" />

[Datumaro](https://github.com/openvinotoolkit/datumaro/) is a tool which can
help with complex dataset and annotation transformations, format conversions,
dataset statistics, merging, custom formats etc. It is used as the provider
of dataset support in CVAT, so everything possible in CVAT is also possible
in Datumaro, while Datumaro additionally offers standalone dataset operations.

- supported annotations: any 2D shapes, labels
- supported attributes: any
@ -0,0 +1,73 @@
---
linkTitle: "ICDAR13/15"
weight: 14
---

### [ICDAR13/15](https://rrc.cvc.uab.es/?ch=2)<a id="icdar" />

#### ICDAR13/15 export

Downloaded file: a zip archive of the following structure:

```bash
# word recognition task
taskname.zip/
└── word_recognition/
    └── <any_subset_name>/
        ├── images
        |   ├── word1.png
        |   └── word2.png
        └── gt.txt
# text localization task
taskname.zip/
└── text_localization/
    └── <any_subset_name>/
        ├── images
        |   ├── img_1.png
        |   └── img_2.png
        ├── gt_img_1.txt
        └── gt_img_2.txt
# text segmentation task
taskname.zip/
└── text_segmentation/
    └── <any_subset_name>/
        ├── images
        |   ├── 1.png
        |   └── 2.png
        ├── 1_GT.bmp
        ├── 1_GT.txt
        ├── 2_GT.bmp
        └── 2_GT.txt
```

**Word recognition task**:

- supported annotations: Label `icdar` with attribute `caption`

**Text localization task**:

- supported annotations: Rectangles and Polygons with label `icdar`
  and attribute `text`

**Text segmentation task**:

- supported annotations: Rectangles and Polygons with label `icdar`
  and attributes `index`, `text`, `color`, `center`

#### ICDAR13/15 import

Uploaded file: a zip archive of the structure above

**Word recognition task**:

- supported annotations: Label `icdar` with attribute `caption`

**Text localization task**:

- supported annotations: Rectangles and Polygons with label `icdar`
  and attribute `text`

**Text segmentation task**:

- supported annotations: Rectangles and Polygons with label `icdar`
  and attributes `index`, `text`, `color`, `center`
@ -0,0 +1,36 @@
---
linkTitle: "ImageNet"
weight: 9
---

### [ImageNet](http://www.image-net.org)<a id="imagenet" />

#### ImageNet export

Downloaded file: a zip archive of the following structure:

```bash
# if we save images:
taskname.zip/
├── label1/
|   ├── label1_image1.jpg
|   └── label1_image2.jpg
└── label2/
    ├── label2_image1.jpg
    ├── label2_image3.jpg
    └── label2_image4.jpg

# if we keep only annotation:
taskname.zip/
├── <any_subset_name>.txt
└── synsets.txt
```

- supported annotations: Labels

#### ImageNet import

Uploaded file: a zip archive of the structure above

- supported annotations: Labels
@ -0,0 +1,34 @@
---
linkTitle: "LabelMe"
weight: 2
---

### [LabelMe](http://labelme.csail.mit.edu/Release3.0)<a id="labelme" />

#### LabelMe export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── img1.jpg
└── img1.xml
```

- supported annotations: Rectangles, Polygons (with attributes)

#### LabelMe import

Uploaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── Masks/
|   ├── img1_mask1.png
|   └── img1_mask2.png
├── img1.xml
├── img2.xml
└── img3.xml
```

- supported annotations: Rectangles, Polygons, Masks (as polygons)
@ -0,0 +1,40 @@
---
linkTitle: "Market-1501"
weight: 13
---

### [Market-1501](https://www.aitribune.com/dataset/2018051063)<a id="market1501" />

#### Market-1501 export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── bounding_box_<any_subset_name>/
│   └── image_name_1.jpg
└── query
    ├── image_name_2.jpg
    └── image_name_3.jpg
# if we keep only annotation:
taskname.zip/
└── images_<any_subset_name>.txt
# images_<any_subset_name>.txt
query/image_name_1.jpg
bounding_box_<any_subset_name>/image_name_2.jpg
bounding_box_<any_subset_name>/image_name_3.jpg
# image_name = 0001_c1s1_000015_00.jpg
0001 - person id
c1 - camera id (there are 6 cameras in total)
s1 - sequence
000015 - frame number in sequence
00 - means that this bounding box is the first one among the several
```

- supported annotations: Label `market-1501` with attributes (`query`, `person_id`, `camera_id`)

#### Market-1501 import

Uploaded file: a zip archive of the structure above

- supported annotations: Label `market-1501` with attributes (`query`, `person_id`, `camera_id`)
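The `image_name` convention above (`0001_c1s1_000015_00.jpg`) can be decomposed with a regular expression; a minimal sketch (the `parse_market1501_name` helper is hypothetical, not part of CVAT):

```python
import re

# person id, camera id, sequence, frame number, bounding box index
NAME_RE = re.compile(
    r"^(?P<person_id>\d{4})_c(?P<camera_id>\d)s(?P<sequence>\d+)"
    r"_(?P<frame>\d{6})_(?P<bbox>\d{2})")

def parse_market1501_name(filename):
    """Parse a Market-1501 image name like '0001_c1s1_000015_00.jpg'."""
    m = NAME_RE.match(filename)
    if m is None:
        raise ValueError(f"not a Market-1501 image name: {filename}")
    return {k: int(v) for k, v in m.groupdict().items()}

print(parse_market1501_name("0001_c1s1_000015_00.jpg"))
# {'person_id': 1, 'camera_id': 1, 'sequence': 1, 'frame': 15, 'bbox': 0}
```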
@ -0,0 +1,47 @@
---
linkTitle: "MOT"
weight: 3
---

### [MOT sequence](https://arxiv.org/pdf/1906.04567.pdf)<a id="mot" />

#### MOT export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── img1/
|   ├── image1.jpg
|   └── image2.jpg
└── gt/
    ├── labels.txt
    └── gt.txt

# labels.txt
cat
dog
person
...

# gt.txt
# frame_id, track_id, x, y, w, h, "not ignored", class_id, visibility, <skipped>
1,1,1363,569,103,241,1,1,0.86014
...
```

- supported annotations: Rectangle shapes and tracks
- supported attributes: `visibility` (number), `ignored` (checkbox)

#### MOT import

Uploaded file: a zip archive of the structure above or:

```bash
taskname.zip/
├── labels.txt # optional, mandatory for non-official labels
└── gt.txt
```

- supported annotations: Rectangle tracks
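The `gt.txt` line layout documented in the comment above can be split into named fields; a minimal sketch (the `parse_mot_gt_line` helper is hypothetical, not part of CVAT):

```python
def parse_mot_gt_line(line):
    """Parse one gt.txt line:
    frame_id, track_id, x, y, w, h, "not ignored", class_id, visibility."""
    fields = line.strip().split(',')
    frame_id, track_id = int(fields[0]), int(fields[1])
    x, y, w, h = (float(v) for v in fields[2:6])
    not_ignored, class_id = int(fields[6]), int(fields[7])
    visibility = float(fields[8])
    return dict(frame_id=frame_id, track_id=track_id, x=x, y=y, w=w, h=h,
                not_ignored=not_ignored, class_id=class_id,
                visibility=visibility)

print(parse_mot_gt_line("1,1,1363,569,103,241,1,1,0.86014"))
```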
@ -0,0 +1,36 @@
---
linkTitle: "MOTS"
weight: 4
---

### [MOTS PNG](https://www.vision.rwth-aachen.de/page/mots)<a id="mots" />

#### MOTS PNG export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
└── <any_subset_name>/
    ├── images/
    |   ├── image1.jpg
    |   └── image2.jpg
    └── instances/
        ├── labels.txt
        ├── image1.png
        └── image2.png

# labels.txt
cat
dog
person
...
```

- supported annotations: Rectangle and Polygon tracks

#### MOTS PNG import

Uploaded file: a zip archive of the structure above

- supported annotations: Polygon tracks
@ -0,0 +1,197 @@
---
linkTitle: "TFRecord"
weight: 8
---

### [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord)<a id="tfrecord" />

TFRecord is a very flexible format, but we try to follow the format
used in
[TF object detection](https://github.com/tensorflow/models/tree/master/research/object_detection)
with minimal modifications.

Used feature description:

```python
image_feature_description = {
    'image/filename': tf.io.FixedLenFeature([], tf.string),
    'image/source_id': tf.io.FixedLenFeature([], tf.string),
    'image/height': tf.io.FixedLenFeature([], tf.int64),
    'image/width': tf.io.FixedLenFeature([], tf.int64),
    # Object boxes and classes.
    'image/object/bbox/xmin': tf.io.VarLenFeature(tf.float32),
    'image/object/bbox/xmax': tf.io.VarLenFeature(tf.float32),
    'image/object/bbox/ymin': tf.io.VarLenFeature(tf.float32),
    'image/object/bbox/ymax': tf.io.VarLenFeature(tf.float32),
    'image/object/class/label': tf.io.VarLenFeature(tf.int64),
    'image/object/class/text': tf.io.VarLenFeature(tf.string),
}
```

#### TFRecord export

Downloaded file: a zip archive with the following structure:

```bash
taskname.zip/
├── default.tfrecord
└── label_map.pbtxt

# label_map.pbtxt
item {
  id: 1
  name: 'label_0'
}
item {
  id: 2
  name: 'label_1'
}
...
```

- supported annotations: Rectangles, Polygons (as masks, manually over [Datumaro](https://github.com/openvinotoolkit/datumaro/blob/develop/docs/user_manual.md))

How to export masks:

1. Export annotations in the `Datumaro` format
1. Apply the `polygons_to_masks` and `boxes_to_masks` transforms

   ```bash
   datum transform -t polygons_to_masks -p path/to/proj -o ptm
   datum transform -t boxes_to_masks -p ptm -o btm
   ```

1. Export in the `TF Detection API` format

   ```bash
   datum export -f tf_detection_api -p btm [-- --save-images]
   ```

#### TFRecord import

Uploaded file: a zip archive of the following structure:

```bash
taskname.zip/
└── <any name>.tfrecord
```

- supported annotations: Rectangles

#### How to create a task from TFRecord dataset (from VOC2007 for example)

1. Create a `label_map.pbtxt` file with the following content:

   ```js
   item {
     id: 1
     name: 'aeroplane'
   }
   item {
     id: 2
     name: 'bicycle'
   }
   item {
     id: 3
     name: 'bird'
   }
   item {
     id: 4
     name: 'boat'
   }
   item {
     id: 5
     name: 'bottle'
   }
   item {
     id: 6
     name: 'bus'
   }
   item {
     id: 7
     name: 'car'
   }
   item {
     id: 8
     name: 'cat'
   }
   item {
     id: 9
     name: 'chair'
   }
   item {
     id: 10
     name: 'cow'
   }
   item {
     id: 11
     name: 'diningtable'
   }
   item {
     id: 12
     name: 'dog'
   }
   item {
     id: 13
     name: 'horse'
   }
   item {
     id: 14
     name: 'motorbike'
   }
   item {
     id: 15
     name: 'person'
   }
   item {
     id: 16
     name: 'pottedplant'
   }
   item {
     id: 17
     name: 'sheep'
   }
   item {
     id: 18
     name: 'sofa'
   }
   item {
     id: 19
     name: 'train'
   }
   item {
     id: 20
     name: 'tvmonitor'
   }
   ```

1. Use [create_pascal_tf_record.py](https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_pascal_tf_record.py)
   to convert the VOC2007 dataset to TFRecord format.
   As example:

   ```bash
   python create_pascal_tf_record.py --data_dir <path to VOCdevkit> --set train --year VOC2007 --output_path pascal.tfrecord --label_map_path label_map.pbtxt
   ```

1. Zip train images

   ```bash
   cat <path to VOCdevkit>/VOC2007/ImageSets/Main/train.txt | while read p; do echo <path to VOCdevkit>/VOC2007/JPEGImages/${p}.jpg ; done | zip images.zip -j -@
   ```

1. Create a CVAT task with the following labels:

   ```bash
   aeroplane bicycle bird boat bottle bus car cat chair cow diningtable dog horse motorbike person pottedplant sheep sofa train tvmonitor
   ```

   Select images.zip as data.
   See the [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/)
   guide for details.

1. Zip the `pascal.tfrecord` and `label_map.pbtxt` files together

   ```bash
   zip anno.zip -j <path to pascal.tfrecord> <path to label_map.pbtxt>
   ```

1. Click the `Upload annotation` button, choose `TFRecord 1.0` and select the zip file
   with labels from the previous step. It may take some time.
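The `label_map.pbtxt` files shown above have a structure simple enough to read with a regular expression; a minimal sketch (the `parse_label_map` helper is hypothetical, not a TF Object Detection API call):

```python
import re

def parse_label_map(text):
    """Parse a minimal label_map.pbtxt into {id: name}."""
    items = {}
    for item in re.finditer(r"item\s*\{([^}]*)\}", text):
        body = item.group(1)
        id_m = re.search(r"id:\s*(\d+)", body)
        name_m = re.search(r"name:\s*'([^']*)'", body)
        if id_m and name_m:
            items[int(id_m.group(1))] = name_m.group(1)
    return items

sample = """
item {
  id: 1
  name: 'label_0'
}
item {
  id: 2
  name: 'label_1'
}
"""
print(parse_label_map(sample))  # {1: 'label_0', 2: 'label_1'}
```

For production use, the TF Object Detection API ships its own protobuf-based label map utilities; this sketch only covers the simple layout shown above.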
@ -0,0 +1,35 @@
---
linkTitle: "VGGFace2"
weight: 12
---

### [VGGFace2](https://github.com/ox-vgg/vgg_face2)<a id="vggface2" />

#### VGGFace2 export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── labels.txt # optional
├── <any_subset_name>/
|   ├── label0/
|   |   └── image1.jpg
|   └── label1/
|       └── image2.jpg
└── bb_landmark/
    ├── loose_bb_<any_subset_name>.csv
    └── loose_landmark_<any_subset_name>.csv
# labels.txt
# n000001 car
label0 <class0>
label1 <class1>
```

- supported annotations: Rectangles, Points (landmarks - groups of 5 points)

#### VGGFace2 import

Uploaded file: a zip archive of the structure above

- supported annotations: Rectangles, Points (landmarks - groups of 5 points)
@ -0,0 +1,171 @@
---
linkTitle: "Pascal VOC"
weight: 6
---

### [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/)<a id="voc" />

- [Format specification](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/devkit_doc.pdf)

- supported annotations:

  - Rectangles (detection and layout tasks)
  - Tags (action- and classification tasks)
  - Polygons (segmentation task)

- supported attributes:

  - `occluded` (both UI option and a separate attribute)
  - `truncated` and `difficult` (should be defined for labels as `checkbox`-es)
  - action attributes (import only, should be defined as `checkbox`-es)
  - arbitrary attributes (in the `attributes` section of XML files)

#### Pascal VOC export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── JPEGImages/
│   ├── <image_name1>.jpg
│   ├── <image_name2>.jpg
│   └── <image_nameN>.jpg
├── Annotations/
│   ├── <image_name1>.xml
│   ├── <image_name2>.xml
│   └── <image_nameN>.xml
├── ImageSets/
│   └── Main/
│       └── default.txt
└── labelmap.txt

# labelmap.txt
# label : color_rgb : 'body' parts : actions
background:::
aeroplane:::
bicycle:::
bird:::
```

#### Pascal VOC import

Uploaded file: a zip archive of the structure declared above or the following:

```bash
taskname.zip/
├── <image_name1>.xml
├── <image_name2>.xml
└── <image_nameN>.xml
```

It must be possible for CVAT to match the frame name and the file name
from the annotation `.xml` file (the `filename` tag, e.g.
`<filename>2008_004457.jpg</filename>`).

There are 2 options:

1. full match between frame name and file name from annotation `.xml`
   (in cases when the task was created from images or an image archive).

1. match by frame number. File name should be `<number>.jpg`
   or `frame_000000.jpg`. It should be used when the task was created from a video.

#### Segmentation mask export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── labelmap.txt # optional, required for non-VOC labels
├── ImageSets/
│   └── Segmentation/
│       └── default.txt # list of image names without extension
├── SegmentationClass/ # merged class masks
│   ├── image1.png
│   └── image2.png
└── SegmentationObject/ # merged instance masks
    ├── image1.png
    └── image2.png

# labelmap.txt
# label : color (RGB) : 'body' parts : actions
background:0,128,0::
aeroplane:10,10,128::
bicycle:10,128,0::
bird:0,108,128::
boat:108,0,100::
bottle:18,0,8::
bus:12,28,0::
```

Mask is a `png` image with 1 or 3 channels where each pixel
has its own color which corresponds to a label.
Colors are generated following the Pascal VOC [algorithm](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:voclabelcolormap).
`(0, 0, 0)` is used for background by default.

- supported shapes: Rectangles, Polygons

#### Segmentation mask import

Uploaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── labelmap.txt # optional, required for non-VOC labels
├── ImageSets/
│   └── Segmentation/
│       └── <any_subset_name>.txt
├── SegmentationClass/
│   ├── image1.png
│   └── image2.png
└── SegmentationObject/
    ├── image1.png
    └── image2.png
```

It is also possible to import grayscale (1-channel) PNG masks.
For grayscale masks provide a list of labels with the number of lines equal
to the maximum color index on images. The lines must be in the right order
so that line index is equal to the color index. Lines can have arbitrary,
but different, colors. If there are gaps in the used color
indices in the annotations, they must be filled with arbitrary dummy labels.
Example:

```
q:0,128,0:: # color index 0
aeroplane:10,10,128:: # color index 1
_dummy2:2,2,2:: # filler for color index 2
_dummy3:3,3,3:: # filler for color index 3
boat:108,0,100:: # color index 4
...
_dummy198:198,198,198:: # filler for color index 198
_dummy199:199,199,199:: # filler for color index 199
...
the last label:12,28,0:: # color index 200
```

- supported shapes: Polygons

#### How to create a task from Pascal VOC dataset

1. Download the Pascal VOC dataset (can be downloaded from the
   [PASCAL VOC website](http://host.robots.ox.ac.uk/pascal/VOC/))

1. Create a CVAT task with the following labels:

   ```bash
   aeroplane bicycle bird boat bottle bus car cat chair cow diningtable
   dog horse motorbike person pottedplant sheep sofa train tvmonitor
   ```

   You can add `~checkbox=difficult:false ~checkbox=truncated:false`
   attributes for each label if you want to use them.

   Select interesting image files (See the [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/) guide for details)

1. Zip the corresponding annotation files

1. Click the `Upload annotation` button, choose `Pascal VOC ZIP 1.1`
   and select the zip file with annotations from the previous step.
   It may take some time.
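The `labelmap.txt` layout documented above (`label : color (RGB) : 'body' parts : actions`) can be read with a few lines of standard Python; a minimal sketch (the `parse_voc_labelmap` helper is hypothetical, not part of CVAT):

```python
def parse_voc_labelmap(text):
    """Parse labelmap.txt lines 'label:color:parts:actions' -> {label: (r, g, b) or None}."""
    labels = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):  # skip comments and blanks
            continue
        label, color, parts, actions = line.split(':', maxsplit=3)
        rgb = tuple(int(c) for c in color.split(',')) if color else None
        labels[label] = rgb
    return labels

sample = """# label : color (RGB) : 'body' parts : actions
background:0,128,0::
aeroplane:10,10,128::
bicycle:::
"""
print(parse_voc_labelmap(sample))
# {'background': (0, 128, 0), 'aeroplane': (10, 10, 128), 'bicycle': None}
```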
@ -0,0 +1,36 @@
---
linkTitle: "Wider Face"
weight: 9
---

### [WIDER Face](http://shuoyang1213.me/WIDERFACE/)<a id="widerface" />

#### WIDER Face export

Downloaded file: a zip archive of the following structure:

```bash
taskname.zip/
├── labels.txt # optional
├── wider_face_split/
│   └── wider_face_<any_subset_name>_bbx_gt.txt
└── WIDER_<any_subset_name>/
    └── images/
        ├── 0--label0/
        │   └── 0_label0_image1.jpg
        └── 1--label1/
            └── 1_label1_image2.jpg
```

- supported annotations: Rectangles (with attributes), Labels
- supported attributes:
  - `blur`, `expression`, `illumination`, `pose`, `invalid`
  - `occluded` (both the annotation property & an attribute)

#### WIDER Face import

Uploaded file: a zip archive of the structure above

- supported annotations: Rectangles (with attributes), Labels
- supported attributes:
  - `blur`, `expression`, `illumination`, `occluded`, `pose`, `invalid`
@ -0,0 +1,126 @@
---
linkTitle: "YOLO"
weight: 7
---

### [YOLO](https://pjreddie.com/darknet/yolo/)<a id="yolo" />

- [Format specification](https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects)
- supported annotations: Rectangles

#### YOLO export

Downloaded file: a zip archive with the following structure:

```bash
archive.zip/
├── obj.data
├── obj.names
├── obj_<subset>_data
│   ├── image1.txt
│   └── image2.txt
└── train.txt # list of subset image paths

# the only valid subsets are: train, valid
# train.txt and valid.txt:
obj_<subset>_data/image1.jpg
obj_<subset>_data/image2.jpg

# obj.data:
classes = 3 # optional
names = obj.names
train = train.txt
valid = valid.txt # optional
backup = backup/ # optional

# obj.names:
cat
dog
airplane

# image_name.txt:
# label_id - id from obj.names
# cx, cy - relative coordinates of the bbox center
# rw, rh - relative size of the bbox
# label_id cx cy rw rh
1 0.3 0.8 0.1 0.3
2 0.7 0.2 0.3 0.1
```

Each annotation `*.txt` file has a name that corresponds to the name of
the image file (e.g. `frame_000001.txt` is the annotation
for the `frame_000001.jpg` image).
The `*.txt` file structure: each line describes a label and a bounding box
in the following format: `label_id cx cy rw rh`.
`obj.names` contains the ordered list of label names.

#### YOLO import

Uploaded file: a zip archive of the same structure as above.
It must be possible to match the CVAT frame (image name)
and the annotation file name. There are 2 options:

1. full match between image name and name of annotation `*.txt` file
   (in cases when a task was created from images or an archive of images).

1. match by frame number (if CVAT cannot match by name). File name
   should be in the following format: `<number>.jpg`.
   It should be used when the task was created from a video.

#### How to create a task from YOLO formatted dataset (from VOC for example)

1. Follow the official [guide](https://pjreddie.com/darknet/yolo/) (see the "Training YOLO on VOC" section)
   and prepare the YOLO formatted annotation files.

1. Zip train images

   ```bash
   zip images.zip -j -@ < train.txt
   ```

1. Create a CVAT task with the following labels:

   ```bash
   aeroplane bicycle bird boat bottle bus car cat chair cow diningtable dog
   horse motorbike person pottedplant sheep sofa train tvmonitor
   ```

   Select images.zip as data. Most likely you should use the `share`
   functionality because images.zip is larger than 500 MB.
   See the [Creating an annotation task](/docs/for-users/user-guide/creating_an_annotation_task/)
   guide for details.

1. Create `obj.names` with the following content:

   ```bash
   aeroplane
   bicycle
   bird
   boat
   bottle
   bus
   car
   cat
   chair
   cow
   diningtable
   dog
   horse
   motorbike
   person
   pottedplant
   sheep
   sofa
   train
   tvmonitor
   ```

1. Zip all label files together (we need to add only the label files that correspond to the train subset)

   ```bash
   cat train.txt | while read p; do b=${p##*/}; echo ${p%/*/*}/labels/${b%%.*}.txt; done | zip labels.zip -j -@ obj.names
   ```

1. Click the `Upload annotation` button, choose `YOLO 1.1` and select the zip
   file with labels from the previous step.
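The relative-coordinate convention documented above (`cx`, `cy` are the box center and `rw`, `rh` the box size, all relative to the image) converts to absolute pixel corners as follows; a minimal sketch (the `yolo_to_xyxy` helper is hypothetical, not part of CVAT):

```python
def yolo_to_xyxy(cx, cy, rw, rh, img_w, img_h):
    """Convert a YOLO bbox (relative center/size) to absolute pixel corners."""
    w, h = rw * img_w, rh * img_h
    x1 = cx * img_w - w / 2
    y1 = cy * img_h - h / 2
    return x1, y1, x1 + w, y1 + h

# the line "1 0.3 0.8 0.1 0.3" on a 640x480 image
print(yolo_to_xyxy(0.3, 0.8, 0.1, 0.3, 640, 480))  # (160.0, 312.0, 224.0, 456.0)
```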
@ -0,0 +1,15 @@
---
title: "User's guide"
linkTitle: "User's guide"
weight: 1
description: "This multipage document contains information on how to work with the CVAT user interface"
---
Computer Vision Annotation Tool (CVAT) is a web-based tool which helps to
annotate videos and images for Computer Vision algorithms. It was inspired
by [Vatic](http://carlvondrick.com/vatic/), a free, online, interactive video
annotation tool. CVAT has many powerful features: _interpolation of bounding
boxes between key frames, automatic annotation using deep learning models,
shortcuts for most critical actions, a dashboard with a list of annotation
tasks, LDAP and basic authorization, etc._ It was created for and is used by
a professional data annotation team; the UX and UI were optimized by our team
especially for computer vision tasks.
@ -0,0 +1,5 @@
---
title: "Advanced"
linkTitle: "Advanced"
weight: 30
---
@ -0,0 +1,52 @@
---
title: "AI Tools"
linkTitle: "AI Tools"
weight: 5
---

The tool is designed for semi-automatic and automatic annotation using DL models.
The tool is available only if there is a corresponding model.
For more details about DL models read the [Models](/docs/for-users/user-guide/models/) section.

### Interactors

Interactors are used to create a polygon semi-automatically.
The supported DL models are not bound to a label and can be used for any objects.
Usually, to create a polygon you only need regular or positive points.
For some kinds of segmentation negative points are also available.
Positive points are points related to the object.
Negative points should be placed outside the boundary of the object.
In most cases specifying positive points alone is enough to build a polygon.

- Before you start, select the magic wand on the controls sidebar and go to the `Interactors` tab.
  Then select a label for the polygon and the required DL model.

- Click `Interact` to enter the interaction mode. Now you can place positive and/or negative points.
  Left-click creates a positive point and right-click creates a negative point.
  The `Deep extreme cut` model requires a minimum of 4 points. After you set 4 positive points,
  a request will be sent to the server, and when the process is complete a polygon will be created.
  If you are not satisfied with the result, you can set additional points or remove points by left-clicking on them.
  If you want to postpone the request and create a few more points, hold down `Ctrl` and continue;
  the request will be sent after the key is released.

- To finish the interaction, click on the icon on the controls sidebar or press `N` on your keyboard.

- When the object is finished, you can edit it like a polygon.
  You can read about editing polygons in the [Annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/) section.

### Detectors

Detectors are used to automatically annotate one frame. The supported DL models are suitable only for certain labels.

- Before you start, click the magic wand on the controls sidebar and select the `Detectors` tab.
  You need to match the labels of the DL model (left column) with the labels in your task (right column).
  Then click `Annotate`.

- This action will automatically annotate one frame.
  In the [Automatic annotation](/docs/for-users/user-guide/advanced/automatic-annotation/) section you can read how to make automatic annotation of all frames.
@ -0,0 +1,19 @@
---
title: "Analytics"
linkTitle: "Analytics"
weight: 1
---

If your CVAT instance was created with analytics support, you can press the `Analytics` button in the dashboard,
and the analytics and journals will be opened in a new tab.

The analytics allows you to see how much time every user spends on each task
and how much work they did over any time range.

It also has an activity graph, which can be adjusted by the number of users shown and the timeframe.
@ -0,0 +1,9 @@
---
title: "Annotation with cuboids"
linkTitle: "Annotation with cuboids"
weight: 11
---

It is used to annotate 3-dimensional objects such as cars, boxes, etc.
Currently the feature supports one-point perspective and has the constraint
that the vertical edges are exactly parallel to the sides.
@ -0,0 +1,31 @@
---
title: "Creating the cuboid"
linkTitle: "Creating the cuboid"
weight: 1
---

Before you start, you have to make sure that `Cuboid` is selected
and choose a drawing method: "from rectangle" or "by 4 points".

### Drawing a cuboid by 4 points

Choose the drawing method "by 4 points" and click `Shape` to enter the drawing mode. There are many ways to draw a cuboid.
You can draw the cuboid by placing 4 points; after that the drawing will be completed automatically.
The first 3 points determine the plane of the cuboid, while the last point determines the depth of that plane.
For the first 3 points, it is recommended to only draw the 2 closest side faces, as well as the top and bottom face.

A few examples:

### Drawing a cuboid from rectangle

Choose the drawing method "from rectangle" and click `Shape` to enter the drawing mode.
When you draw using the rectangle method, you must select the frontal plane of the object using the bounding box.
The depth and perspective of the resulting cuboid can be edited.

Example:
@ -0,0 +1,41 @@
---
title: "Editing the cuboid"
linkTitle: "Editing the cuboid"
weight: 2
---

The cuboid can be edited in multiple ways: by dragging points, by dragging certain faces or by dragging planes.
First notice that there is a face that is painted with gray lines only; let us call it the front face.

You can move the cuboid by simply dragging the shape behind the front face.
The cuboid can be extended by dragging the points in the middle of the edges.
The cuboid can also be extended up and down by dragging the points at the vertices.

To draw with perspective effects it should be assumed that the front face is the closest to the camera.
To begin, simply drag the points on the vertices that are not on the gray/front face while holding `Shift`.
The cuboid can then be edited as usual.

If you wish to reset the perspective effects, you may right-click on the cuboid
and select `Reset perspective` to return to a regular cuboid.

The location of the gray face can be swapped with the adjacent visible side face.
You can do it by right-clicking on the cuboid and selecting `Switch perspective orientation`.
Note that this will also reset the perspective effects.

Certain faces of the cuboid can also be edited;
these faces are the left, right and dorsal faces, relative to the gray face.
Simply drag the faces to move them independently from the rest of the cuboid.

You can also use cuboids in track mode, similar to rectangles in track mode ([basics](/docs/for-users/user-guide/basics/track-mode-basics/) and [advanced](/docs/for-users/user-guide/advanced/track-mode-advanced/)) or [Track mode with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/track-mode-with-polygons/).
@ -0,0 +1,5 @@
---
title: "Annotation with points"
linkTitle: "Annotation with points"
weight: 10
---
@ -0,0 +1,28 @@
---
title: "Linear interpolation with one point"
linkTitle: "Linear interpolation with one point"
weight: 2
---

You can use linear interpolation for points to annotate a moving object:

1. Before you start, select `Points`.
1. Linear interpolation works only with one point, so you need to set `Number of points` to 1.
1. After that, select `Track`.

1. Click `Track` to enter the drawing mode, then left-click to create a point; after that the shape will be completed automatically.

1. Move forward a few frames and move the point to the desired position;
   this way you will create a keyframe, and the intermediate frames will be drawn automatically.
   You can work with this object as with an interpolated track: you can hide it using the `Outside` property,
   move between keyframes, etc.

1. This way you'll get linear interpolation using `Points`.
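Conceptually, the interpolated point on an intermediate frame is a linear blend of the two surrounding keyframes. A minimal sketch of that idea (an illustration, not CVAT's actual implementation):

```python
def interpolate_point(kf0, kf1, frame):
    """Linearly interpolate an (x, y) point between two keyframes.

    kf0 and kf1 are (frame_number, (x, y)) pairs, kf0 first;
    frame is an intermediate frame number between them.
    """
    f0, (x0, y0) = kf0
    f1, (x1, y1) = kf1
    t = (frame - f0) / (f1 - f0)  # 0.0 at kf0, 1.0 at kf1
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# Halfway between frames 0 and 10 the point is halfway along the path:
print(interpolate_point((0, (0.0, 0.0)), (10, (10.0, 20.0)), 5))  # → (5.0, 10.0)
```

Moving the point on a new frame simply adds another keyframe, and the blend is recomputed between each adjacent pair.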
@ -0,0 +1,22 @@
---
title: "Points in shape mode"
linkTitle: "Points in shape mode"
weight: 1
---

It is used for face and landmark annotation, etc.

Before you start, you need to select `Points`. If necessary, you can set a fixed number of points
in the `Number of points` field; then drawing will be stopped automatically.

Click `Shape` to enter the drawing mode. Now you can start annotating the necessary area.
Points are automatically grouped — all points will be considered linked between each start and finish.
Press `N` again to finish marking the area. You can delete a point by clicking on it with `Ctrl` pressed,
or by right-clicking on a point and selecting `Delete point`. Clicking with `Shift` pressed will open the points
shape editor, where you can add new points into an existing shape. You can zoom in/out (when scrolling the mouse wheel)
and move (when clicking the mouse wheel and moving the mouse) while drawing. You can drag an object after
it has been drawn and change the position of individual points after finishing the object.
@ -0,0 +1,5 @@
---
title: "Annotation with polygons"
linkTitle: "Annotation with polygons"
weight: 8
---
@ -0,0 +1,67 @@
---
title: "Creating masks"
linkTitle: "Creating masks"
weight: 6
---

### Cutting holes in polygons

Currently, CVAT does not support cutting transparent holes in polygons. However,
it is possible to generate holes in exported instance and class masks.
To do this, one needs to define a background class in the task and draw holes
with it as additional shapes above the shapes that need to have holes.

The editor window:

Remember to set the z-axis ordering of shapes with the `-` and `+`/`=` keys.

Exported masks:

Notice that it is currently impossible to have a single instance number for
internal shapes (they will be merged into the largest one and then covered by
"holes").

### Creating masks

There are several formats in CVAT that can be used to export masks:

- `Segmentation Mask` (PASCAL VOC masks)
- `CamVid`
- `MOTS`
- `ICDAR`
- `COCO` (RLE-encoded instance masks, [guide](/docs/for-users/formats/format-specifications/format-coco))
- `TFRecord` ([over Datumaro](https://github.com/openvinotoolkit/datumaro/blob/develop/docs/user_manual.md), [guide](/docs/for-users/formats/format-specifications/format-tfrecord))
- `Datumaro`

An example of exported masks (in the `Segmentation Mask` format):

Important notices:

- Both boxes and polygons are converted into masks
- Grouped objects are considered as a single instance and exported as a single
  mask (the label and attributes are taken from the largest object in the group)

#### Class colors

All the labels have associated colors, which are used in the generated masks.
These colors can be changed in the task label properties:

Label colors are also displayed in the annotation window on the right panel,
where you can show or hide specific labels
(only the presented labels are displayed):

A background class can be:

- An implicitly added default class of black color (RGB 0, 0, 0)
- A `background` class with any color (it has priority; the name is case-insensitive)
- Any class of black color (RGB 0, 0, 0)

To change the background color in generated masks (the default is black),
change the `background` class color to the desired one.
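The hole-cutting behaviour described above can be pictured as painting shapes onto the mask in z-order, so that a background-colored shape drawn above an object erases part of it. A toy sketch with axis-aligned boxes standing in for polygons (illustrative only, not the actual export code):

```python
def render_mask(width, height, shapes, background=(0, 0, 0)):
    """Paint shapes onto a mask in z-order, later shapes over earlier ones.

    shapes: list of (color, (x0, y0, x1, y1)) axis-aligned boxes, lowest
    z first. A box with the background color drawn above an object box
    produces a "hole", which is how holes appear in exported masks.
    """
    mask = [[background] * width for _ in range(height)]
    for color, (x0, y0, x1, y1) in shapes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = color
    return mask

# A red object covering the frame, with a background-colored hole inside it:
mask = render_mask(4, 4, [((255, 0, 0), (0, 0, 4, 4)),
                          ((0, 0, 0), (1, 1, 3, 3))])
```

Because painting happens strictly in z-order, the `-`/`+` ordering keys mentioned above decide whether a shape cuts a hole or is itself covered.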
@ -0,0 +1,25 @@
---
title: "Manual drawing"
linkTitle: "Manual drawing"
weight: 1
---

It is used for semantic / instance segmentation.

Before starting, you need to select `Polygon` on the controls sidebar and choose the correct label.

- Click `Shape` to enter the drawing mode.
  There are two ways to draw a polygon: either create points by clicking, or
  drag the mouse on the screen while holding `Shift`.

  | Clicking points | Holding Shift+Dragging |
  | --------------- | ---------------------- |
  |                 |                        |

- When `Shift` isn't pressed, you can zoom in/out (when scrolling the mouse
  wheel) and move (when clicking the mouse wheel and moving the mouse); you can also
  delete the previous point by right-clicking on it.
- Press `N` again to complete the shape.
- After creating the polygon, you can move its points, or delete a point by right-clicking on it
  and selecting `Delete point` in the context menu, or by clicking on it with the `Alt` key pressed.
@ -0,0 +1,33 @@
---
title: "Track mode with polygons"
linkTitle: "Track mode with polygons"
weight: 5
---

Polygons in the track mode allow you to mark moving objects more accurately than using a rectangle
([Tracking mode (basic)](/docs/for-users/user-guide/basics/track-mode-basics/); [Tracking mode (advanced)](/docs/for-users/user-guide/advanced/track-mode-advanced/)).

1. To create a polygon in the track mode, click the `Track` button.

1. Create a polygon the same way as in the case of [Annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/).
   Press `N` to complete the polygon.

1. Pay attention to the fact that the created polygon has a starting point and a direction;
   these elements are important for the annotation of the following frames.

1. After going a few frames forward, press `Shift+N`; the old polygon will disappear and you can create a new polygon.
   The new starting point should match the starting point of the previously created polygon
   (in this example, the top of the left mirror). The direction must also match (in this example, clockwise).
   After creating the polygon, press `N` and the intermediate frames will be interpolated automatically.

1. If you need to change the starting point, right-click on the desired point and select `Set starting point`.
   To change the direction, right-click on the desired point and select `Switch orientation`.

There is no need to redraw the polygon every time using `Shift+N`;
instead you can simply move the points or edit a part of the polygon by pressing `Shift+Click`.
@ -0,0 +1,23 @@
---
title: "Annotation with polylines"
linkTitle: "Annotation with polylines"
weight: 9
---

It is used for road markup annotation, etc.

Before starting, you need to select `Polyline`. You can set a fixed number of points
in the `Number of points` field; then drawing will be stopped automatically.

Click `Shape` to enter the drawing mode. There are two ways to draw a polyline:
you either create points by clicking, or drag the mouse on the screen while holding `Shift`.
When `Shift` isn't pressed, you can zoom in/out (when scrolling the mouse wheel)
and move (when clicking the mouse wheel and moving the mouse); you can delete
the previous point by right-clicking on it. Press `N` again to complete the shape.
You can delete a point by clicking on it with `Ctrl` pressed, or by right-clicking on a point
and selecting `Delete point`. Clicking with `Shift` pressed will open the polyline editor,
where you can create new points (by clicking or dragging) or delete a part of the polyline by closing
the red line on another point. Press `Esc` to cancel editing.
@ -0,0 +1,18 @@
---
title: "Annotation with rectangle by 4 points"
linkTitle: "Annotation with rectangle by 4 points"
weight: 7
---

It is an efficient method of bounding box annotation, proposed
[here](https://arxiv.org/pdf/1708.02750.pdf).
Before starting, you need to make sure that the "by 4 points" drawing method is selected.

Press `Shape` or `Track` to enter the drawing mode. Click on four extreme points:
the top-, bottom-, left- and right-most physical points on the object.
Drawing will be completed automatically right after clicking the fourth point.
Press `Esc` to cancel editing.
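Whatever the drawing method, the stored rectangle is simply the tight box around the clicked points. From four extreme points it can be recovered like this (an illustrative sketch, not CVAT code):

```python
def bbox_from_extreme_points(points):
    """Bounding box from four extreme points (top-, bottom-, left-, right-most).

    points: iterable of (x, y) clicks. Returns (xtl, ytl, xbr, ybr) —
    the top-left and bottom-right corners of the tight axis-aligned box.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Four extreme points of some object outline:
print(bbox_from_extreme_points([(5, 0), (3, 9), (0, 4), (8, 6)]))  # → (0, 0, 8, 9)
```

This is why clicking extreme points is faster than dragging two corners: each click only has to be accurate along one axis.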
@ -0,0 +1,19 @@
---
title: "Annotation with Tags"
linkTitle: "Annotation with Tags"
weight: 12
---

It is used to annotate frames; tags are not displayed in the workspace.
Before you start, open the drop-down list in the top panel and select `Tag annotation`.

The objects sidebar will be replaced with a special panel for working with tags.
Here you can select a label for a tag and add it by clicking on the `Add tag` button.
You can also customize hotkeys for each label.

If you need to use only one label per frame, enable the `Automatically go to the next frame`
checkbox; then, after you add a tag, the player will automatically switch to the next frame.
@ -0,0 +1,28 @@
---
title: "Attribute annotation mode (advanced)"
linkTitle: "Attribute annotation mode"
weight: 3
---

Basic operations in this mode were described in the section [attribute annotation mode (basics)](/docs/for-users/user-guide/basics/attribute-annotation-mode-basics/).

It is possible to handle lots of objects on the same frame in this mode.

It is more convenient to annotate objects of the same type. In this case you can apply
the appropriate filter. For example, the following filter will
hide all objects except persons: `label=="Person"`.

To navigate between objects (persons in this case),
use the `switch between objects in the frame` buttons on the special panel:

or the shortcuts:

- `Tab` — go to the next object
- `Shift+Tab` — go to the previous object.

In order to change the zoom level, go to settings (press `F3`)
in the workspace tab and set the `Attribute annotation mode (AAM) zoom margin` value in px.
@ -0,0 +1,36 @@
---
title: "OpenCV tools"
linkTitle: "OpenCV tools"
weight: 6
---

The tool is based on the [OpenCV](https://opencv.org/) computer vision library, an open-source product that includes many CV algorithms. Some of these algorithms can be used to simplify the annotation process.

The first step in working with OpenCV is to load it into CVAT. Click on the toolbar icon, then click `Load OpenCV`.

Once it is loaded, the tool's functionality will be available.

### Intelligent scissors

Intelligent scissors is a CV method of creating a polygon by placing points with automatic drawing of a line between them.
The distance between adjacent points is limited by the action threshold,
displayed as a red square which is tied to the cursor.

- First, select the label and then click on the `intelligent scissors` button.

- Create the first point on the boundary of the allocated object.
  You will see a line repeating the outline of the object.
- Place the second point so that the previous point is within the restrictive threshold.
  After that, a line repeating the object boundary will be automatically created between the points.

To increase or lower the action threshold, hold `Ctrl` and scroll the mouse wheel.
Increasing the action threshold will affect the performance.
During the drawing process you can remove the last point by clicking on it with the left mouse button.

- Once all the points are placed, you can complete the creation of the object by clicking on the icon or pressing `N`.
  As a result, a polygon will be created (read more about polygons in [annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/)).
@ -0,0 +1,26 @@
---
title: "Shape grouping"
linkTitle: "Shape grouping"
weight: 15
---

This feature allows us to group several shapes.

You may use the `Group Shapes` button or the shortcuts:

- `G` — start selection / end selection in group mode
- `Esc` — close group mode
- `Shift+G` — reset the group for selected shapes

You may select shapes by clicking on them or by selecting an area.

Grouped shapes will have a `group_id` field in the dumped annotation.

Also, you may switch the color distribution from an instance (default) to a group.
You have to enable the `Color By Group` checkbox for that.

Shapes that don't have a `group_id` will be highlighted in white.
@ -0,0 +1,26 @@
---
title: "Shape mode (advanced)"
linkTitle: "Shape mode"
weight: 1
---

Basic operations in this mode were described in the section [shape mode (basics)](/docs/for-users/user-guide/basics/shape-mode-basics/).

**Occluded**
Occlusion is an attribute used if an object is occluded by another object or
isn't fully visible on the frame. Use the `Q` shortcut to set the property
quickly.

Example: the three cars in the figure below should be labeled as **occluded**.

If a frame contains too many objects and it is difficult to annotate them
due to many shapes placed mostly in the same place, it makes sense
to lock them. Shapes for locked objects are transparent, and it is easy to
annotate new objects. Besides, you can't change previously annotated objects
by accident. Shortcut: `L`.
@ -0,0 +1,76 @@
---
title: "Shortcuts"
linkTitle: "Shortcuts"
weight: 18
---

Many UI elements have shortcut hints. Move your pointer over a required element to see them.

| Shortcut | Common |
| -------------------------- | -------------------------------------------------------------------------------------------------------- |
| | _Main functions_ |
| `F1` | Open/hide the list of available shortcuts |
| `F2` | Go to the settings page or go back |
| `Ctrl+S` | Save the job |
| `Ctrl+Z` | Cancel the latest action related to objects |
| `Ctrl+Shift+Z` or `Ctrl+Y` | Redo the undone action |
| Hold `Mouse Wheel` | Move an image frame (for example, while drawing) |
| | _Player_ |
| `F` | Go to the next frame |
| `D` | Go to the previous frame |
| `V` | Go forward with a step |
| `C` | Go backward with a step |
| `Right` | Search the next frame that satisfies the filters <br> or the next frame which contains any objects |
| `Left` | Search the previous frame that satisfies the filters <br> or the previous frame which contains any objects |
| `Space` | Start/stop automatic changing of frames |
| `` ` `` or `~` | Focus on the element to change the current frame |
| | _Modes_ |
| `N` | Repeat the latest procedure of drawing with the same parameters |
| `M` | Activate or deactivate the mode to merge shapes |
| `Alt+M` | Activate or deactivate the mode to split shapes |
| `G` | Activate or deactivate the mode to group shapes |
| `Shift+G` | Reset the group for selected shapes (in group mode) |
| `Esc` | Cancel any active canvas mode |
| | _Image operations_ |
| `Ctrl+R` | Change the image angle (add 90 degrees) |
| `Ctrl+Shift+R` | Change the image angle (subtract 90 degrees) |
| `Shift+B+=` | Increase the brightness level of the image |
| `Shift+B+-` | Decrease the brightness level of the image |
| `Shift+C+=` | Increase the contrast level of the image |
| `Shift+C+-` | Decrease the contrast level of the image |
| `Shift+S+=` | Increase the saturation level of the image |
| `Shift+S+-` | Decrease the saturation level of the image |
| `Shift+G+=` | Make the grid more visible |
| `Shift+G+-` | Make the grid less visible |
| `Shift+G+Enter` | Set another color for the image grid |
| | _Operations with objects_ |
| `Ctrl` | Switch automatic bordering for polygons and polylines during drawing/editing |
| Hold `Ctrl` | Fix the active shape so that it isn't edited accidentally |
| `Alt+Click` on point | Delete a point (used when hovering over a point of a polygon, polyline or points shape) |
| `Shift+Click` on point | Edit a shape (used when hovering over a point of a polygon, polyline or points shape) |
| `Right-Click` on shape | Display the object's element in the objects sidebar |
| `T+L` | Change the locked state for all objects in the sidebar |
| `L` | Change the locked state for an active object |
| `T+H` | Change the hidden state for objects in the sidebar |
| `H` | Change the hidden state for an active object |
| `Q` or `/` | Change the occluded property for an active object |
| `Del` or `Shift+Del` | Delete an active object. Use `Shift` to force deletion of locked objects |
| `-` or `_` | Put an active object "farther" from the user (decrease the z-axis value) |
| `+` or `=` | Put an active object "closer" to the user (increase the z-axis value) |
| `Ctrl+C` | Copy a shape to the CVAT internal clipboard |
| `Ctrl+V` | Paste a shape from the internal CVAT clipboard |
| Hold `Ctrl` while pasting | Paste the shape from the buffer multiple times |
| `Ctrl+B` | Make a copy of the object on the following frames |
| | _Operations available only for tracks_ |
| `K` | Change the keyframe property for an active track |
| `O` | Change the outside property for an active track |
| `R` | Go to the next keyframe of an active track |
| `E` | Go to the previous keyframe of an active track |
| | _Attribute annotation mode_ |
| `Up Arrow` | Go to the next attribute (up) |
| `Down Arrow` | Go to the next attribute (down) |
| `Tab` | Go to the next annotated object in the current frame |
| `Shift+Tab` | Go to the previous annotated object in the current frame |
| `<number>` | Assign the corresponding value to the current attribute |
@ -0,0 +1,21 @@
---
title: "Track mode (advanced)"
linkTitle: "Track mode"
weight: 2
---

Basic operations in this mode were described in the section [track mode (basics)](/docs/for-users/user-guide/basics/track-mode-basics/).

Shapes that were created in the track mode have extra navigation buttons.

- These buttons help to jump to the previous/next keyframe.

- This button helps to jump to the initial frame or to the last keyframe.

You can use the `Split` function to split one track into two tracks.
@ -0,0 +1,5 @@
---
title: "Basics"
linkTitle: "Basics"
weight: 8
---
@ -0,0 +1,29 @@
---
title: "Attribute annotation mode (basics)"
linkTitle: "Attribute annotation mode"
weight: 6
---

- In this mode you can edit attributes with fast navigation between objects and frames using a keyboard.
  Open the drop-down list in the top panel and select `Attribute annotation Mode`.

- In this mode the objects panel changes to a special panel:

- The active attribute will be red. In this case it is `gender`. Look at the bottom side panel to see all possible
  shortcuts for changing the attribute. Press key `2` on your keyboard to assign a value (female) to the attribute,
  or select it from the drop-down list.

- Press `Up Arrow`/`Down Arrow` on your keyboard or click the buttons in the UI to go to the next/previous
  attribute. In this case, after pressing `Down Arrow` you will be able to edit the `Age` attribute.

- Use the `Right Arrow`/`Left Arrow` keys to move to the previous/next image with annotation.

To see all the hotkeys available in the attribute annotation mode, press `F2`.
Read more in the section [attribute annotation mode (advanced)](/docs/for-users/user-guide/advanced/attribute-annotation-mode-advanced/).
@ -0,0 +1,26 @@
---
title: "Basic navigation"
linkTitle: "Basic navigation"
weight: 1
---

1. Use the arrows below to move to the next/previous frame.
   Use the scroll bar slider to scroll through frames.
   Almost every button has a shortcut.
   To get a hint about a shortcut, just move your mouse pointer over a UI element.

1. To navigate the image, use the button on the controls sidebar.
   Another way an image can be moved/shifted is by holding the left mouse button inside
   an area without annotated objects.
   If the `Mouse Wheel` is pressed, then all annotated objects are ignored. Otherwise
   a highlighted bounding box will be moved instead of the image itself.

1. You can use the button on the sidebar controls to zoom in on a region of interest.
   Use the `Fit the image` button to fit the image in the workspace.
   You can also use the mouse wheel to scale the image
   (the image will be zoomed relative to your current cursor position).
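Zooming "relative to your current cursor position" means the image point under the cursor stays fixed on screen while the scale changes. A small sketch of the underlying arithmetic (an illustration of the idea, not the actual canvas code):

```python
def zoom_about_cursor(offset, scale, cursor, factor):
    """Zoom a canvas about the cursor position.

    offset: (ox, oy) image origin in screen coordinates; scale: current zoom;
    cursor: (cx, cy) in screen coordinates; factor: e.g. 1.1 per wheel step.
    Keeps the image point under the cursor stationary on screen.
    """
    ox, oy = offset
    cx, cy = cursor
    new_scale = scale * factor
    # The screen position of the point under the cursor must not move,
    # so the origin shifts toward/away from the cursor by the zoom factor:
    new_ox = cx - (cx - ox) * factor
    new_oy = cy - (cy - oy) * factor
    return (new_ox, new_oy), new_scale
```

For example, doubling the zoom with the cursor at (20, 20) over an image whose origin is (10, 10) moves the origin to (0, 0), so the pixel that was under the cursor stays under it.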
@ -0,0 +1,46 @@
---
title: "Shape mode (basics)"
linkTitle: "Shape mode"
weight: 3
---

Usage examples:

- Create new annotations for a set of images.
- Add/modify/delete objects for existing annotations.

1. You need to select `Rectangle` on the controls sidebar:

   Before you start, select the correct `Label` (should be specified by you when creating the task)
   and the `Drawing Method` (by 2 points or by 4 points):

1. Creating a new annotation in `Shape mode`:

   - Create a separate `Rectangle` by clicking on `Shape`.

   - Choose the opposite points. Your first rectangle is ready!

   - To learn about creating a rectangle using the "by 4 points" drawing method, [read here](/docs/for-users/user-guide/advanced/annotation-with-rectangle-by-4-points/).

   - It is possible to adjust the boundaries and location of the rectangle using the mouse.
     The rectangle's size is shown in the top right corner; you can check it by clicking on any point of the shape.
     You can also undo your actions using `Ctrl+Z` and redo them with `Shift+Ctrl+Z` or `Ctrl+Y`.

1. You can see the `Object card` in the objects sidebar or open it by right-clicking on the object.
   You can change the attributes in the details section.
   You can perform basic operations or delete an object by clicking on the action menu button.

1. The following figure is an example of a fully annotated frame with separate shapes.

Read more in the section [shape mode (advanced)](/docs/for-users/user-guide/advanced/shape-mode-advanced/).
@ -0,0 +1,69 @@
---
title: "Track mode (basics)"
linkTitle: "Track mode"
weight: 4
---
Usage examples:

- Create new annotations for a sequence of frames.
- Add/modify/delete objects for existing annotations.
- Edit tracks, merge several rectangles into one track.

1. Like in the `Shape mode`, you need to select a `Rectangle` on the sidebar;
   in the appearing form, select the desired `Label` and the `Drawing method`.

   

1. Creating a track for an object (look at the selected car as an example):

   - Create a `Rectangle` in `Track mode` by clicking on `Track`.

     

   - In `Track mode` the rectangle will be automatically interpolated on the next frames.
   - The cyclist starts moving on frame #2270. Let's mark the frame as a key frame.
     You can press `K` for that or click the `star` button (see the screenshot below).

     

   - If the object starts to change its position, you need to modify the rectangle where it happens.
     It isn't necessary to change the rectangle on each frame; simply update several keyframes
     and the frames between them will be interpolated automatically.
   - Let's jump 30 frames forward and adjust the boundaries of the object. See an example below:

     

   - After that the rectangle of the object will be changed automatically on frames 2270 to 2300:

     

1. When the annotated object disappears or becomes too small, you need to
   finish the track by choosing the `Outside` property (shortcut `O`).

   

1. If the object isn't visible on a couple of frames and then appears again,
   you can use the `Merge` feature to merge several individual tracks
   into one.

   

   - Create tracks for the moments when the cyclist is visible:

     

   - Click the `Merge` button or press `M`, then click on any rectangle of the first track
     and on any rectangle of the second track, and so on:

     

   - Click the `Merge` button or press `M` to apply the changes.

     

   - The final annotated sequence of frames in `Interpolation` mode can
     look like the clip below:

     

Read more in the section [track mode (advanced)](/docs/for-users/user-guide/advanced/track-mode-advanced/).
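The automatic filling of frames between keyframes described above can be pictured as linear interpolation of the box coordinates. The sketch below is purely illustrative (the `(x1, y1, x2, y2)` box format and the helper name are assumptions for this example, not CVAT's actual implementation):

```python
# Illustrative sketch only: conceptually, each coordinate of the box is
# interpolated linearly between the two surrounding keyframes.

def interpolate_box(box_a, box_b, frame, frame_a, frame_b):
    """Linearly interpolate an (x1, y1, x2, y2) box between two keyframes."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# Keyframes at frames 2270 and 2300, as in the example above; every frame
# in between gets a box without any manual annotation.
start = (100.0, 100.0, 200.0, 180.0)  # box adjusted on frame 2270
end = (160.0, 110.0, 260.0, 190.0)    # box adjusted on frame 2300
middle = interpolate_box(start, end, 2285, 2270, 2300)
# → (130.0, 105.0, 230.0, 185.0), halfway between the two keyframes
```

This is why only the keyframes where the motion changes need manual adjustment.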
@ -0,0 +1,45 @@
---
title: "Controls sidebar"
linkTitle: "Controls sidebar"
weight: 15
---
**Navigation block** - contains tools for moving and rotating images.

|Icon |Description |
|-- |-- |
||`Cursor` (`Esc`) - a basic annotation editing tool. |
||`Move the image` - a tool for moving around the image without<br/> the possibility of editing.|
||`Rotate` - two buttons to rotate the current frame<br/> clockwise (`Ctrl+R`) and anticlockwise (`Ctrl+Shift+R`).<br/> You can enable `Rotate all images` in the settings to rotate all the images in the job.|

---

**Zoom block** - contains tools for image zoom.

|Icon |Description |
|-- |-- |
||`Fit image` - fits the image to the workspace size.<br/> Shortcut - double click on the image.|
||`Select a region of interest` - zooms in on a selected region.<br/> You can use this tool to quickly zoom in on a specific part of the frame.|

---

**Shapes block** - contains all the tools for creating shapes.

|Icon |Description |Links to section |
|-- |-- |-- |
||`AI Tools` |[AI Tools](/docs/for-users/user-guide/advanced/ai-tools/)|
||`OpenCV` |[OpenCV](/docs/for-users/user-guide/advanced/opencv-tools/)|
||`Rectangle` |[Shape mode](/docs/for-users/user-guide/basics/shape-mode-basics/); [Track mode](/docs/for-users/user-guide/basics/track-mode-basics/);<br/> [Drawing by 4 points](/docs/for-users/user-guide/advanced/annotation-with-rectangle-by-4-points/)|
||`Polygon` |[Annotation with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/); [Track mode with polygons](/docs/for-users/user-guide/advanced/annotation-with-polygons/track-mode-with-polygons/) |
||`Polyline` |[Annotation with polylines](/docs/for-users/user-guide/advanced/annotation-with-polylines/)|
||`Points` |[Annotation with points](/docs/for-users/user-guide/advanced/annotation-with-points/) |
||`Cuboid` |[Annotation with cuboids](/docs/for-users/user-guide/advanced/annotation-with-cuboids/) |
||`Tag` |[Annotation with tags](/docs/for-users/user-guide/advanced/annotation-with-tags/) |
||`Open an issue` |[Review](/docs/for-users/user-guide/advanced/review/) (available only in review mode) |

---

**Edit block** - contains tools for editing tracks and shapes.

|Icon |Description |Links to section |
|-- |-- |-- |
||`Merge Shapes` (`M`) — starts/stops the merging shapes mode. |[Track mode (basics)](/docs/for-users/user-guide/basics/track-mode-basics/)|
||`Group Shapes` (`G`) — starts/stops the grouping shapes mode.|[Shape grouping](/docs/for-users/user-guide/advanced/shape-grouping/)|
||`Split` — splits a track. |[Track mode (advanced)](/docs/for-users/user-guide/advanced/track-mode-advanced/)|

---
@ -0,0 +1,37 @@
---
title: "Getting started"
linkTitle: "Getting started"
weight: 1
---
### Authorization

- First of all, you have to log in to the CVAT tool.

  

- To register a new user, press "Create an account":

  

- You can register a user, but by default it will not have rights even to view
  the list of tasks. Thus you should create a superuser. The superuser can use the
  [Django administration panel](http://localhost:8080/admin) to assign correct
  groups to the user. Please use the command below to create an admin account:

  `docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'`

- If you want to create a non-admin account, you can do that using the link below
  on the login page. Don't forget to modify the permissions for the new user in the
  administration panel. There are several groups (aka roles): admin, user,
  annotator, observer.

  

### Administration panel

Go to the [Django administration panel](http://localhost:8080/admin). There you can:

- Create / edit / delete users
- Control permissions of users and access to the tool.


@ -0,0 +1,16 @@
---
title: "Interface of the annotation tool"
linkTitle: "Interface"
weight: 7
---
The tool consists of:

- `Header` — pinned header used to navigate CVAT sections and account settings;
- `Top panel` — contains navigation buttons, main functions and menu access;
- `Workspace` — space where images are shown;
- `Controls sidebar` — contains tools for navigating the image, zooming,
  creating shapes and editing tracks (merge, split, group);
- `Objects sidebar` — contains a label filter, two lists
  (objects on the frame and labels of objects on the frame), and appearance settings.


@ -0,0 +1,25 @@
---
title: "Models"
linkTitle: "Models"
weight: 5
---

### Models

The Models page contains a list of deep learning (DL) models deployed for semi-automatic and automatic annotation.
To open the Models page, click the Models button on the navigation bar.
The list of models is presented in the form of a table. The parameters indicated for each model are the following:

- `Framework` the model is based on
- model `Name`
- model `Type`:
  - `detector` - used for automatic annotation (available in [detectors](/docs/for-users/user-guide/advanced/ai-tools/#detectors) and [automatic annotation](/docs/for-users/user-guide/advanced/automatic-annotation/))
  - `interactor` - used for semi-automatic shape annotation (available in [interactors](/docs/for-users/user-guide/advanced/ai-tools/#interactors))
  - `tracker` - used for semi-automatic track annotation (available in [trackers](/docs/for-users/user-guide/advanced/ai-tools/#trackers))
  - `reid` - used to combine individual objects into a track (available in [automatic annotation](/docs/for-users/user-guide/advanced/automatic-annotation/))
- `Description` - a brief description of the model
- `Labels` - the list of supported labels (only for models of the `detector` type)



Read how to install your model [here](/docs/for-users/installation/#semi-automatic-and-automatic-annotation).
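To give an intuition for what a `reid` model does, the toy sketch below links per-frame detections into tracks with a greedy IoU match. This is only an illustration under an assumed `(x1, y1, x2, y2)` box format; it is not the algorithm CVAT actually ships:

```python
# Toy illustration of a "reid"-style association step (not CVAT's code):
# link per-frame detections into tracks by greedy IoU matching.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def link_frame(tracks, detections, threshold=0.5):
    """Append each detection to the best-overlapping track, or start a new one."""
    for det in detections:
        best = max(tracks, key=lambda t: iou(t[-1], det), default=None)
        if best is not None and iou(best[-1], det) >= threshold:
            best.append(det)  # same object, continued
        else:
            tracks.append([det])  # no good match: a new object enters the scene
    return tracks
```

Real re-identification models rely on appearance features rather than pure box overlap, which is what lets them bridge gaps where an object temporarily disappears.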