[Datumaro] Update documentation (#2059)

* Update docs

* fix indent

* update design
main
Maxim Zhiltsov 6 years ago committed by GitHub
parent 082fc7a4dd
commit 3fb3f677eb

@ -129,7 +129,7 @@ Plugins reside in plugin directories:
- `<project_dir>/.datumaro/plugins` for project-specific components
A plugin is a Python file or package with any name, which exports some symbols.
To export a symbol put it to `exports` list of the module like this:
To export a symbol, put it into the `exports` list of the module like this:
``` python
class MyComponent1: ...
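# (sketch) more components can be defined the same way; listing them in the
# module-level `exports` makes them visible to Datumaro:
class MyComponent2: ...

exports = [MyComponent1, MyComponent2]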

@ -1,13 +1,13 @@
# Dataset Framework (Datumaro)
# Dataset Management Framework (Datumaro)
A framework to build, transform, and analyze datasets.
<!--lint disable fenced-code-flag-->
```
CVAT annotations  --                       ---> Annotation tool
       ...          \                     /
COCO-like dataset -----> Datumaro ---> dataset ------> Model training
       ...          /                     \
VOC-like dataset  --                       ---> Publication etc.
```
<!--lint enable fenced-code-flag-->
@ -55,12 +55,12 @@ VOC-like dataset -- ---> Publication etc.
- Dataset building operations:
- Merging multiple datasets into one
- Dataset filtering with custom conditions, for instance:
- remove all annotations except polygons of a certain class
- remove polygons of a certain class
- remove images without a specific class
- remove occluded annotations from images
- remove `occluded` annotations from images
- keep only vertically-oriented images
- remove small area bounding boxes from annotations
- Annotation conversions, for instance
- Annotation conversions, for instance:
- polygons to instance masks and vice versa
- apply a custom colormap for mask annotations
- rename or remove dataset labels
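For illustration, a couple of these operations as CLI calls (a sketch based on the
commands documented in the user manual; the project paths are hypothetical):
``` bash
# keep only large 'cat' annotations and any non-'person' annotations
datum project extract -p my_project/ -o filtered/ \
    --mode annotations -e '/item/annotation[(label="cat" and area > 99.5) or label!="person"]'

# rename labels: 'person' to 'car' and 'cat' to 'dog', keep 'bus', remove others
datum project transform -p my_project/ -t remap_labels -- \
    -l person:car -l bus:bus -l cat:dog --default delete
```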

@ -579,7 +579,7 @@ def build_transform_parser(parser_ctor=argparse.ArgumentParser):
|n
Examples:|n
- Convert instance polygons to masks:|n
|s|stransform -n polygons_to_masks
|s|stransform -t polygons_to_masks
""" % ', '.join(builtins),
formatter_class=MultilineFormatter)

@ -409,6 +409,13 @@ class Rename(Transform, CliPlugin):
.format(item=item))
class RemapLabels(Transform, CliPlugin):
"""
Changes labels in the dataset.|n
Examples:|n
- Rename 'person' to 'car' and 'cat' to 'dog', keep 'bus', remove others:|n
|s|sremap_labels -l person:car -l bus:bus -l cat:dog --default delete
"""
DefaultAction = Enum('DefaultAction', ['keep', 'delete'])
@staticmethod
@ -428,7 +435,7 @@ class RemapLabels(Transform, CliPlugin):
parser.add_argument('--default',
choices=[a.name for a in cls.DefaultAction],
default=cls.DefaultAction.keep.name,
help="Action for unspecified labels")
help="Action for unspecified labels (default: %(default)s)")
return parser
def __init__(self, extractor, mapping, default=None):

@ -49,6 +49,8 @@ Datumaro is:
- Versioning (for images, annotations, subsets, sources etc., comparison)
- Documentation generation
- Provision of iterators for user code
- Dataset downloading
- Dataset generation
- Dataset building (export in a specific format, indexation, statistics, documentation)
- Dataset exporting to other formats
- Dataset debugging (run inference, generate dataset slices, compute statistics)
@ -111,17 +113,17 @@ can be downloaded by user to be operated on with Datumaro CLI.
- [ ] with TensorBoard
- Calculation of statistics for datasets
- [ ] Pixel mean, std
- [ ] Object counts (detection scenario)
- [ ] Image-Class distribution (classification scenario)
- [ ] Pixel-Class distribution (segmentation scenario)
- [ ] Image clusters
- [x] Pixel mean, std
- [x] Object counts (detection scenario)
- [x] Image-Class distribution (classification scenario)
- [x] Pixel-Class distribution (segmentation scenario)
- [ ] Image similarity clusters
- [ ] Custom statistics
- Dataset building
- [x] Composite dataset building
- [ ] Annotation remapping
- [ ] Subset splitting
- [x] Class remapping
- [x] Subset splitting
- [x] Dataset filtering (`extract`)
- [x] Dataset merging (`merge`)
- [ ] Dataset item editing (`edit`)
@ -129,7 +131,7 @@ can be downloaded by user to be operated on with Datumaro CLI.
- Dataset comparison (`diff`)
- [x] Annotation-annotation comparison
- [x] Annotation-inference comparison
- [ ] Annotation quality estimation (for CVAT)
- [x] Annotation quality estimation (for CVAT)
- Provide a simple method to check
annotation quality with a model and generate summary
@ -142,9 +144,9 @@ can be downloaded by user to be operated on with Datumaro CLI.
- [x] Task export
- [x] Datumaro project export
- [x] Dataset export
- [ ] Original raw data (images, a video file) can be downloaded (exported)
- [x] Original raw data (images, a video file) can be downloaded (exported)
together with annotations or just have links
on CVAT server (in the future support S3, etc)
on CVAT server (in the future, support S3, etc.)
- [x] Be able to use local files instead of remote links
- [ ] Specify cache directory
- [x] Use case "annotate for model training"
@ -154,7 +156,7 @@ can be downloaded by user to be operated on with Datumaro CLI.
- convert to a training format
- train a DL model
- [x] Use case "annotate - reannotate problematic images - merge"
- [ ] Use case "annotate and estimate quality"
- [x] Use case "annotate and estimate quality"
- create a task
- annotate
- estimate quality of annotations

@ -1,22 +1,28 @@
# Quick start guide
# User manual
## Contents
- [Installation](#installation)
- [Interfaces](#interfaces)
- [Supported dataset formats and annotations](#formats-support)
- [Supported dataset formats and annotations](#supported-formats)
- [Command line workflow](#command-line-workflow)
- [Command reference](#command-reference)
- [Convert datasets](#convert-datasets)
- [Create a project](#create-project)
- [Add and remove data](#add-and-remove-data)
- [Import a project](#import-project)
- [Extract a subproject](#extract-subproject)
- [Merge projects](#merge-project)
- [Update project (merge)](#update-project)
- [Merge projects](#merge-projects)
- [Export a project](#export-project)
- [Compare projects](#compare-projects)
- [Get project info](#get-project-info)
- [Obtaining project info](#get-project-info)
- [Obtaining project statistics](#get-project-statistics)
- [Register a model](#register-model)
- [Run inference](#run-inference)
- [Run inference explanation](#explain-inference)
- [Transform a project](#transform-project)
- [Extending](#extending)
- [Links](#links)
## Installation
@ -36,7 +42,7 @@ python -m virtualenv venv
. venv/bin/activate
```
Install Datumaro:
Install:
``` bash
pip install 'git+https://github.com/opencv/cvat#egg=datumaro&subdirectory=datumaro'
```
@ -68,57 +74,93 @@ As a python library:
import datumaro
```
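For illustration, a minimal sketch of library usage, assuming the `Project` API of this
version (`Project.load`, `make_dataset`); treat the import path and method names as
assumptions and verify them against the sources:
``` python
# a sketch, not a verified snippet: load an existing project and iterate its items
from datumaro.components.project import Project  # assumed module path

project = Project.load('path/to/project')  # hypothetical project directory
dataset = project.make_dataset()           # the project's combined dataset

for item in dataset:
    print(item.id, len(item.annotations))
```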
## Formats support
## Supported Formats
List of supported formats:
- COCO (`image_info`, `instances`, `person_keypoints`, `captions`, `labels`*)
- MS COCO (`image_info`, `instances`, `person_keypoints`, `captions`, `labels`*)
- [Format specification](http://cocodataset.org/#format-data)
- [Dataset example](../tests/assets/coco_dataset)
- `labels` are our extension - like `instances` with only `category_id`
- PASCAL VOC (`classification`, `detection`, `segmentation` (class, instances), `action_classification`, `person_layout`)
- [Format specification](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/index.html)
- [Dataset example](../tests/assets/voc_dataset)
- YOLO (`bboxes`)
- [Format specification](https://github.com/AlexeyAB/darknet#how-to-train-pascal-voc-data)
- [Dataset example](../tests/assets/yolo_dataset)
- TF Detection API (`bboxes`, `masks`)
- Format specifications: [bboxes](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/using_your_own_dataset.md), [masks](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md)
- [Dataset example](../tests/assets/tf_detection_api_dataset)
- MOT sequences
- [Format specification](https://arxiv.org/pdf/1906.04567.pdf)
- [Dataset example](../tests/assets/mot_dataset)
- CVAT
- [Format specification](https://github.com/opencv/cvat/blob/develop/cvat/apps/documentation/xml_format.md)
- [Dataset example](../tests/assets/cvat_dataset)
- LabelMe
- [Format specification](http://labelme.csail.mit.edu/Release3.0)
- [Dataset example](../tests/assets/labelme_dataset)
List of supported annotation types:
- Labels
- Bounding boxes
- Polygons
- Polylines
- (Segmentation) Masks
- (Key-)Points
- Captions
- Masks
## Command line workflow
> **Note**: command invocation syntax is subject to change,
> **always refer to command --help output**
The key object is the Project. The Project is a combination of
a Project's own dataset, a number of external data sources and an environment.
The key object is a project, so most CLI commands operate on projects. However, there
are a few commands that operate on datasets directly. A project is a combination of
a project's own dataset, a number of external data sources and an environment.
An empty Project can be created with the `project create` command,
and an existing dataset can be imported with the `project import` command.
A typical way to obtain projects is to export tasks from the CVAT UI.
If you want to interact with models, you need to add them to the project first.
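For example, a typical sequence might look like this (a sketch; exact options are
described per command below, always check `--help`):
``` bash
datum project create -o my_project/              # start an empty project
# ... attach data sources or import an existing dataset (see the commands below) ...
datum project extract -p my_project/ -o my_project-cats/ \
    -e '/item/annotation[label="cat"]'           # keep only matching items/annotations
datum project export -p my_project-cats/ -o out/ -f voc -- --save-images
```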
## Command reference
> **Note**: command invocation syntax is subject to change,
> **always refer to command --help output**
Available CLI commands:
![CLI design doc](images/cli_design.png)
If you want to interact with models, you need to add them to the project first.
### Convert datasets
This command converts a dataset from one format into another. In fact, this
command is a combination of `project import` and `project export`; it just provides a simpler
way to obtain the same result when no extra options are needed. A list of supported
formats can be found in the `--help` output of this command.
Usage:
``` bash
datum convert --help
datum convert \
-i <input path> \
-if <input format> \
-o <output path> \
-f <output format> \
-- [extra parameters for output format]
```
Example: convert a VOC-like dataset to a COCO-like one:
``` bash
datum convert --input-format voc --input-path <path/to/voc/> \
--output-format coco
```
### Import project
This command creates a Project from an existing dataset.
Supported formats are listed in the command help.
In Datumaro dataset formats are supported by Extractors and Importers.
An Extractor produces a list of dataset items corresponding
to the dataset. An Importer creates a Project from the
data source location. It is possible to add a custom Extractor and Importer.
To do this, you need to put an Extractor and Importer implementation scripts to
`<project_dir>/.datumaro/extractors` and `<project_dir>/.datumaro/importers`.
Supported formats are listed in the command help. Check [extending tips](#extending)
for information on extra format support.
Usage:
@ -178,7 +220,7 @@ datum project create -o my_dataset/
### Add and remove data
A Project can be attached to a number of external Data Sources. Each Source
A Project can contain a number of external Data Sources. Each Data Source
describes a way to produce dataset items. A Project combines dataset items from
all the sources and its own dataset into one composite dataset. You can manage
project sources by commands in the `source` command line context.
@ -195,11 +237,8 @@ is used in COCO format:
```
<!--lint enable fenced-code-flag-->
In Datumaro dataset formats are supported by Extractors.
An Extractor produces a list of dataset items corresponding
to the dataset. It is possible to add a custom Extractor.
To do this, you need to put an Extractor
definition script to `<project_dir>/.datumaro/extractors`.
Supported formats are listed in the command help. Check [extending tips](#extending)
for information on extra format support.
Usage:
@ -237,16 +276,16 @@ This command allows to create a sub-Project from a Project. The new project
includes only items satisfying some condition. [XPath](https://devhints.io/xpath)
is used as the query format.
There are several filtering modes available ('-m/--mode' parameter).
There are several filtering modes available (`-m/--mode` parameter).
Supported modes:
- 'i', 'items'
- 'a', 'annotations'
- 'i+a', 'a+i', 'items+annotations', 'annotations+items'
- `i`, `items`
- `a`, `annotations`
- `i+a`, `a+i`, `items+annotations`, `annotations+items`
When filtering annotations, use the 'items+annotations'
When filtering annotations, use the `items+annotations`
mode to indicate that annotation-less dataset items should be
removed. To select an annotation, write an XPath that
returns 'annotation' elements (see examples).
returns `annotation` elements (see examples).
Usage:
@ -259,7 +298,7 @@ datum project extract \
-e '<xpath filter expression>'
```
Example: extract a dataset with only images which width < height
Example: extract a dataset with only images whose `width` < `height`
``` bash
datum project extract \
@ -274,7 +313,7 @@ Example: extract a dataset with only large annotations of class `cat` and any no
datum project extract \
-p test_project \
-o test_project-extract \
--mode annotations -e '/item/annotation[(label="cat" and area > 999.5) or label!="person"]'
--mode annotations -e '/item/annotation[(label="cat" and area > 99.5) or label!="person"]'
```
Example: extract a dataset with only occluded annotations, remove empty images
@ -321,9 +360,9 @@ Item representations are available with `--dry-run` parameter:
</item>
```
### Merge projects
### Update project
This command combines multiple Projects into one.
This command updates items in a project from another one (check [Merge Projects](#merge-projects) for complex merging).
Usage:
@ -346,16 +385,40 @@ datum project merge \
second_project
```
### Merge projects
This command merges items from 2 or more projects and checks annotations for errors.
Spatial annotations are compared by distance and intersected, labels and attributes
are selected by voting.
Merge conflicts, missing items and annotations, and other errors are saved into a `.json` file.
Usage:
``` bash
datum merge --help
datum merge <project dirs>
```
Example: merge 4 (partially) intersecting projects:
- consider a vote successful when there are 3+ identical votes
- consider shapes intersecting when IoU >= 0.6
- check that annotation groups contain `person`, `hand`, `head` and `foot` (`?` marks optional labels)
``` bash
datum merge project1/ project2/ project3/ project4/ \
--quorum 3 \
-iou 0.6 \
--groups 'person,hand?,head,foot?'
```
### Export project
This command exports a Project in some format.
This command exports a Project as a dataset in some format.
Supported formats are listed in the command help.
In Datumaro dataset formats are supported by Converters.
A Converter produces a dataset of a specific format
from dataset items. It is possible to add a custom Converter.
To do this, you need to put a Converter
definition script to <project_dir>/.datumaro/converters.
Supported formats are listed in the command help. Check [extending tips](#extending)
for information on extra format support.
Usage:
@ -366,17 +429,17 @@ datum project export \
-p <project dir> \
-o <output dir> \
-f <format> \
[-- <additional format parameters>]
-- [additional format parameters]
```
Example: save project as VOC-like dataset, include images
Example: save project as VOC-like dataset, include images, convert images to `PNG`
``` bash
datum project export \
-p test_project \
-o test_project-export \
-f voc \
-- --save-images
-- --save-images --image-ext='.png'
```
### Get project info
@ -398,7 +461,7 @@ Example:
datum project info -p /test_project
Project:
name: test_project2
name: test_project
location: /test_project
Sources:
source 'instances_minival2014':
@ -419,6 +482,282 @@ Dataset:
labels: person, bicycle, car, motorcycle (and 76 more)
```
### Get project statistics
This command computes various project statistics, such as:
- image mean and std. dev.
- class and attribute balance
- mask pixel balance
- segment area distribution
Usage:
``` bash
datum project stats --help
datum project stats \
-p <project dir>
```
Example:
<details>
``` bash
datum project stats -p /test_project
{
"annotations": {
"labels": {
"attributes": {
"gender": {
"count": 358,
"distribution": {
"female": [
149,
0.41620111731843573
],
"male": [
209,
0.5837988826815642
]
},
"values count": 2,
"values present": [
"female",
"male"
]
},
"view": {
"count": 340,
"distribution": {
"__undefined__": [
4,
0.011764705882352941
],
"front": [
54,
0.1588235294117647
],
"left": [
14,
0.041176470588235294
],
"rear": [
235,
0.6911764705882353
],
"right": [
33,
0.09705882352941177
]
},
"values count": 5,
"values present": [
"__undefined__",
"front",
"left",
"rear",
"right"
]
}
},
"count": 2038,
"distribution": {
"car": [
340,
0.16683022571148184
],
"cyclist": [
194,
0.09519136408243375
],
"head": [
354,
0.17369970559371933
],
"ignore": [
100,
0.04906771344455348
],
"left_hand": [
238,
0.11678115799803729
],
"person": [
358,
0.17566241413150147
],
"right_hand": [
77,
0.037782139352306184
],
"road_arrows": [
326,
0.15996074582924436
],
"traffic_sign": [
51,
0.025024533856722278
]
}
},
"segments": {
"area distribution": [
{
"count": 1318,
"max": 11425.1,
"min": 0.0,
"percent": 0.9627465303140978
},
{
"count": 1,
"max": 22850.2,
"min": 11425.1,
"percent": 0.0007304601899196494
},
{
"count": 0,
"max": 34275.3,
"min": 22850.2,
"percent": 0.0
},
{
"count": 0,
"max": 45700.4,
"min": 34275.3,
"percent": 0.0
},
{
"count": 0,
"max": 57125.5,
"min": 45700.4,
"percent": 0.0
},
{
"count": 0,
"max": 68550.6,
"min": 57125.5,
"percent": 0.0
},
{
"count": 0,
"max": 79975.7,
"min": 68550.6,
"percent": 0.0
},
{
"count": 0,
"max": 91400.8,
"min": 79975.7,
"percent": 0.0
},
{
"count": 0,
"max": 102825.90000000001,
"min": 91400.8,
"percent": 0.0
},
{
"count": 50,
"max": 114251.0,
"min": 102825.90000000001,
"percent": 0.036523009495982466
}
],
"avg. area": 5411.624543462382,
"pixel distribution": {
"car": [
13655,
0.0018431496518735067
],
"cyclist": [
939005,
0.12674674030446592
],
"head": [
0,
0.0
],
"ignore": [
5501200,
0.7425510702956085
],
"left_hand": [
0,
0.0
],
"person": [
954654,
0.12885903974805205
],
"right_hand": [
0,
0.0
],
"road_arrows": [
0,
0.0
],
"traffic_sign": [
0,
0.0
]
}
}
},
"annotations by type": {
"bbox": {
"count": 548
},
"caption": {
"count": 0
},
"label": {
"count": 0
},
"mask": {
"count": 0
},
"points": {
"count": 669
},
"polygon": {
"count": 821
},
"polyline": {
"count": 0
}
},
"annotations count": 2038,
"dataset": {
"image mean": [
107.06903686941979,
79.12831698580979,
52.95829558185416
],
"image std": [
49.40237673503467,
43.29600731496902,
35.47373007603151
],
"images count": 100
},
"images count": 100,
"subsets": {},
"unannotated images": [
"img00051",
"img00052",
"img00053",
"img00054",
"img00055",
],
"unannotated images count": 5
}
```
</details>
### Register model
Supported models:
@ -557,6 +896,85 @@ datum explain \
-s 1000 --progressive
```
### Transform Project
This command allows modifying images or annotations in a project all at once.
``` bash
datum project transform --help
datum project transform \
-p <project_dir> \
-o <output_dir> \
-t <transform_name> \
-- [extra transform options]
```
Example: split a dataset randomly into `train` and `test` subsets with a 2:1 ratio
``` bash
datum project transform -t random_split -- --subset train:.67 --subset test:.33
```
Example: convert polygons to masks, masks to boxes etc.:
``` bash
datum project transform -t boxes_to_masks
datum project transform -t masks_to_polygons
datum project transform -t polygons_to_masks
datum project transform -t shapes_to_boxes
```
Example: remap dataset labels, `person` to `car` and `cat` to `dog`, keep `bus`, remove others
``` bash
datum project transform -t remap_labels -- \
-l person:car -l bus:bus -l cat:dog \
--default delete
```
Example: rename dataset items by a regular expression
- Replace `pattern` with `replacement`
- Remove `frame_` from item ids
``` bash
datum project transform -t rename -- -e '|pattern|replacement|'
datum project transform -t rename -- -e '|frame_(\d+)|\\1|'
```
## Extending
There are a few ways to extend and customize Datumaro's behaviour, all of which rely on plugins.
Check [our contribution guide](../CONTRIBUTING.md) for details on plugin implementation.
In general, a plugin is a Python code file. It must be put into a plugin directory:
- `<project_dir>/.datumaro/plugins` for project-specific plugins
- `<datumaro_dir>/plugins` for global plugins
### Dataset Formats
Dataset reading is supported by Extractors and Importers.
An Extractor produces a list of dataset items corresponding
to the dataset. An Importer creates a project from the data source location.
It is possible to add custom Extractors and Importers. To do this, you need
to put Extractor and Importer implementation scripts into a plugin directory.
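As an illustration, a custom reading plugin might look roughly like this (a sketch:
the `datumaro.components.extractor` module path and the `Extractor`/`DatasetItem`
API are assumptions for this version, verify them against the bundled sources):
``` python
# sketch of a plugin file placed into a plugin directory
from datumaro.components.extractor import Extractor, DatasetItem  # assumed imports

class TxtListExtractor(Extractor):
    """Hypothetical format: a .txt file with one item id per line."""
    def __init__(self, path):
        super().__init__()
        with open(path) as f:
            self._items = [DatasetItem(id=line.strip())
                for line in f if line.strip()]

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

exports = [TxtListExtractor]
```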
Dataset writing is supported by Converters.
A Converter produces a dataset of a specific format from dataset items.
It is possible to add custom Converters. To do this, you need to put a Converter
implementation script into a plugin directory.
### Dataset Conversions ("Transforms")
A Transform is a function for altering a dataset and producing a new one. It can update
dataset items, annotations, classes, and other properties.
A list of available transforms for dataset conversions can be extended by adding a Transform
implementation script into a plugin directory.
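For example, a custom transform plugin could be sketched like this (the method and
helper names follow the built-in transforms such as `Rename` and `RemapLabels`;
treat the import path and the `transform_item`/`wrap_item` API as assumptions):
``` python
from datumaro.components.extractor import Transform  # assumed import path

class PrefixIds(Transform):
    """Hypothetical transform: prepend a prefix to every item id."""
    def transform_item(self, item):
        return self.wrap_item(item, id='my_' + item.id)

exports = [PrefixIds]
```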
### Model launchers
A list of available launchers for model execution can be extended by adding a Launcher
implementation script into a plugin directory.
## Links
- [TensorFlow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md)
- [How to convert model to OpenVINO format](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html)
