Add CamVid format (#2559)

* Update Datumaro to 0.1.4

* Add CamVid format

* Add CamVid to documentation

* Update changelog.

* Add information about ImageNet and CamVid to README.md

Co-authored-by: Nikita Manovich <nikita.manovich@intel.com>
Anastasia Yasakova committed by GitHub
parent 86d3f92999
commit 916dd6dff7

@@ -10,11 +10,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- Manual review pipeline: issues/comments/workspace (<https://github.com/openvinotoolkit/cvat/pull/2357>)
- Basic projects implementation (<https://github.com/openvinotoolkit/cvat/pull/2255>)
- Documentation on how to mount cloud storage (AWS S3 bucket, Azure container, Google Drive) as FUSE (<https://github.com/openvinotoolkit/cvat/pull/2377>)
- Ability to work with share files without copying inside (<https://github.com/openvinotoolkit/cvat/pull/2377>)
- Tooltips in label selectors (<https://github.com/openvinotoolkit/cvat/pull/2509>)
- Page redirect after login using `next` query parameter (<https://github.com/openvinotoolkit/cvat/pull/2527>)
- [ImageNet](http://www.image-net.org) format support (<https://github.com/openvinotoolkit/cvat/pull/2376>)
- [CamVid](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/) format support (<https://github.com/openvinotoolkit/cvat/pull/2559>)
### Changed

@@ -40,11 +40,13 @@ annotation team. Try it online [cvat.org](https://cvat.org).
## Supported annotation formats
Format selection is possible after clicking on the Upload annotation and Dump
annotation buttons. The [Datumaro](https://github.com/openvinotoolkit/datumaro)
dataset framework allows additional dataset transformations via its command
line tool and Python library.
For more information about supported formats, see the
[documentation](cvat/apps/dataset_manager/formats/README.md#formats).
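As a rough, illustrative sketch only (not part of this commit), the Python side of such a transformation could look like the block below; the `Environment` import path and the `'cvat'` converter name are assumptions about this Datumaro version, while the `make_importer`, `transforms`, and `converters` calls mirror the CamVid code added later in this commit.

```python
# Hedged sketch of a Datumaro transformation pipeline (assumed API for 0.1.x).
from datumaro.components.project import Environment  # assumption: import path

env = Environment()

# Load a CamVid dataset from disk ('camvid' is the importer name used in this commit).
dataset = env.make_importer('camvid')('path/to/camvid_dataset').make_dataset()

# Convert masks into polygons, the same transform the CVAT importer applies.
dataset = dataset.transform(env.transforms.get('masks_to_polygons'))

# Re-export the dataset in another format ('cvat' converter name is assumed).
env.converters.get('cvat').convert(dataset, save_dir='path/to/output')
```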
| Annotation format | Import | Export |
| ----------------------------------------------------------------------------- | ------ | ------ |
@@ -58,6 +60,8 @@ via its command line tool and Python library.
| [TFrecord](https://www.tensorflow.org/tutorials/load_data/tf_records) | X | X |
| [MOT](https://motchallenge.net/) | X | X |
| [LabelMe 3.0](http://labelme.csail.mit.edu/Release3.0) | X | X |
| [ImageNet](http://www.image-net.org) | X | X |
| [CamVid](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/) | X | X |
## Deep learning models for automatic labeling

@@ -19,6 +19,7 @@
- [YOLO](#yolo)
- [TF detection API](#tfrecord)
- [ImageNet](#imagenet)
- [CamVid](#camvid)
## How to add a new annotation format support<a id="how-to-add"></a>
@@ -835,3 +836,41 @@ taskname.zip/
Uploaded file: a zip archive of the structure above
- supported annotations: Labels
### [CamVid](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/)<a id="camvid" />
#### CamVid Dumper
Downloaded file: a zip archive of the following structure:
```bash
taskname.zip/
├── labelmap.txt # optional, required for non-CamVid labels
├── <any_subset_name>/
│   ├── image1.png
│   └── image2.png
├── <any_subset_name>annot/
│   ├── image1.png
│   └── image2.png
└── <any_subset_name>.txt

# labelmap.txt
# color (RGB) label
0 0 0 Void
64 128 64 Animal
192 0 128 Archway
0 128 192 Bicyclist
0 128 64 Bridge
```
A mask is a `png` image with 1 or 3 channels where each pixel has its own color
that corresponds to a label. `(0, 0, 0)` is used for the background by default.
- supported annotations: Rectangles, Polygons
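For illustration only, here is a minimal sketch (not part of CVAT or Datumaro) of how a `labelmap.txt` like the one above could be parsed and matched against a 3-channel color mask; the function names and the use of numpy/Pillow are assumptions.

```python
import numpy as np
from PIL import Image

def parse_labelmap(path):
    # Each non-comment line is "R G B label", e.g. "64 128 64 Animal".
    colors = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            r, g, b, name = line.split(maxsplit=3)
            colors[(int(r), int(g), int(b))] = name
    return colors

def count_label_pixels(mask_path, colors):
    # Count how many pixels of each labeled color appear in a 3-channel mask.
    mask = np.array(Image.open(mask_path).convert('RGB'))
    return {name: int(np.all(mask == color, axis=-1).sum())
            for color, name in colors.items()}
```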
#### CamVid Loader
Uploaded file: a zip archive of the structure above
- supported annotations: Polygons

@@ -0,0 +1,42 @@
# Copyright (C) 2020 Intel Corporation
#
# SPDX-License-Identifier: MIT

from tempfile import TemporaryDirectory

from datumaro.components.project import Dataset
from pyunpack import Archive

from cvat.apps.dataset_manager.bindings import (CvatTaskDataExtractor,
    import_dm_annotations)
from cvat.apps.dataset_manager.util import make_zip_archive

from .registry import dm_env, exporter, importer
from .utils import make_colormap


@exporter(name='CamVid', ext='ZIP', version='1.0')
def _export(dst_file, task_data, save_images=False):
    extractor = CvatTaskDataExtractor(task_data, include_images=save_images)
    envt = dm_env.transforms
    # CamVid is a segmentation format, so rectangles and polygons are first
    # converted to instance masks and merged per image.
    extractor = extractor.transform(envt.get('polygons_to_masks'))
    extractor = extractor.transform(envt.get('boxes_to_masks'))
    extractor = extractor.transform(envt.get('merge_instance_segments'))
    extractor = Dataset.from_extractors(extractor) # apply lazy transforms
    label_map = make_colormap(task_data)
    with TemporaryDirectory() as temp_dir:
        dm_env.converters.get('camvid').convert(extractor,
            save_dir=temp_dir, save_images=save_images, apply_colormap=True,
            label_map={label: label_map[label][0] for label in label_map})

        make_zip_archive(temp_dir, dst_file)


@importer(name='CamVid', ext='ZIP', version='1.0')
def _import(src_file, task_data):
    with TemporaryDirectory() as tmp_dir:
        Archive(src_file.name).extractall(tmp_dir)

        # Parse the archive with Datumaro's camvid importer and turn the masks
        # back into polygons, which CVAT stores as shape annotations.
        dataset = dm_env.make_importer('camvid')(tmp_dir).make_dataset()
        masks_to_polygons = dm_env.transforms.get('masks_to_polygons')
        dataset = dataset.transform(masks_to_polygons)
        import_dm_annotations(dataset, task_data)
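For context, the `label_map` comprehension in `_export` above only passes the color part of each entry to Datumaro's converter; a hedged illustration follows, where the exact structure returned by `make_colormap` is an assumption inferred from the `[0]` indexing.

```python
# Hypothetical make_colormap()-style result: label name -> (RGB color, ...).
colormap = {
    'Void': ((0, 0, 0),),
    'Animal': ((64, 128, 64),),
}

# Keep only the color, as the exporter does before calling the camvid converter.
label_map = {label: colormap[label][0] for label in colormap}
assert label_map == {'Void': (0, 0, 0), 'Animal': (64, 128, 64)}
```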

@@ -91,4 +91,5 @@ import cvat.apps.dataset_manager.formats.mots
import cvat.apps.dataset_manager.formats.pascal_voc
import cvat.apps.dataset_manager.formats.tfrecord
import cvat.apps.dataset_manager.formats.yolo
import cvat.apps.dataset_manager.formats.imagenet
import cvat.apps.dataset_manager.formats.camvid

@@ -270,6 +270,7 @@ class TaskExportTest(_DbTestBase):
'TFRecord 1.0',
'YOLO 1.1',
'ImageNet 1.0',
'CamVid 1.0',
})
def test_import_formats_query(self):
@@ -287,6 +288,7 @@ class TaskExportTest(_DbTestBase):
'TFRecord 1.0',
'YOLO 1.1',
'ImageNet 1.0',
'CamVid 1.0',
})
def test_exports(self):
@@ -323,6 +325,7 @@ class TaskExportTest(_DbTestBase):
('TFRecord 1.0', 'tf_detection_api'),
('YOLO 1.1', 'yolo'),
('ImageNet 1.0', 'imagenet_txt'),
('CamVid 1.0', 'camvid'),
]:
    with self.subTest(format=format_name):
        if not dm.formats.registry.EXPORT_FORMATS[format_name].ENABLED:

@@ -3751,6 +3751,10 @@ class TaskAnnotationAPITestCase(JobAnnotationAPITestCase):
elif annotation_format == "ImageNet 1.0":
    annotations["tags"] = tags_wo_attrs
elif annotation_format == "CamVid 1.0":
    annotations["shapes"] = rectangle_shapes_wo_attrs \
        + polygon_shapes_wo_attrs
else:
    raise Exception("Unknown format {}".format(annotation_format))
@@ -3847,7 +3851,7 @@ class TaskAnnotationAPITestCase(JobAnnotationAPITestCase):
self.assertEqual(response.status_code, HTTP_201_CREATED)

# 7. check annotation
if import_format in {"Segmentation mask 1.1", "MOTS PNG 1.0", "CamVid 1.0"}:
    continue # can't really predict the result to check
response = self._get_api_v1_tasks_id_annotations(task["id"], annotator)
self.assertEqual(response.status_code, HTTP_200_OK)
