Huge feature (200+ commits from different developers). It completely changes the layout of data (expect a very long DB migration if you have a lot of tasks). The primary idea is to send data as zip chunks (e.g. 36 images in one chunk) or encoded video chunks and decode them on the client side. This solves the latency problem when you try to quickly view an individual frame in the UI (play mode).
Another important feature of the patch is access to the original images. For annotation the client uses compressed chunks, but when you export a dataset Datumaro uses the original chunks (video, however, is decoded at its original quality and re-encoded at maximum/optimal quality in any case).
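To make the chunk idea concrete, below is a minimal client-side sketch. It is illustrative only: the endpoint path, query parameters, and chunk size are assumptions for the example, not CVAT's actual API.

```python
# Illustrative sketch of the chunk scheme, NOT CVAT's actual API: the client
# downloads one compressed zip chunk and decodes every frame in it locally,
# so stepping through nearby frames needs no further server round trips.
import io
import zipfile

import requests  # assumed HTTP client

CHUNK_SIZE = 36  # e.g. 36 images per chunk, as in the description above


def chunk_of(frame_number):
    """Map a frame index to the chunk that contains it."""
    return frame_number // CHUNK_SIZE


def fetch_chunk(base_url, task_id, chunk_number, session=requests):
    """Download one zip chunk and return its frames as raw image bytes.

    The URL layout and query parameters here are hypothetical.
    """
    url = f"{base_url}/tasks/{task_id}/data"
    params = {"type": "chunk", "number": chunk_number, "quality": "compressed"}
    response = session.get(url, params=params)
    response.raise_for_status()
    with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
        return [archive.read(name) for name in sorted(archive.namelist())]
```

With a small cache of recently fetched chunks, play mode becomes smooth: consecutive frames usually live in the same chunk, so no extra round trips are needed.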
* Replaced wget with curl
* Moved CI-specific setup into Dockerfile.ci
* Use docker-compose to run commands inside docker (environment variables are needed)
* Added patool again to support different archive formats (see the sketch right after this list)
* Rolled back the TensorFlow version: 1.15 -> 1.13.1
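For context on the patool item above: the library exposes a single extraction call that dispatches to the right backend per archive format. A minimal sketch (the file names are made up for the example):

```python
# One call handles many archive formats (zip, tar.gz, rar, 7z and more)
# by dispatching to the appropriate extraction backend.
import patoolib

for archive in ("images.zip", "frames.tar.gz", "dataset.rar"):
    patoolib.extract_archive(archive, outdir="extracted")
```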
Fixed https://github.com/opencv/cvat/issues/982
Fixed https://github.com/opencv/cvat/issues/1017
* Datumaro now installs TensorFlow 2.x, which breaks automatic annotation using TF (see the illustration after this list).
* Follow redirects in curl (auto_segmentation)
* Slightly enhanced the command line interface feature: added a README.md, tests now run on Travis, and CLI tests can be run from VS Code.
* Removed a formatted string literal (f-string) due to a limitation of the Python version inside the container (see the example after this list).
* Added information about the command line interface to the main page.
* Run tests for REST API
* Added DJANGO_CONFIGURATION with value "testing" (see the sketch after this list)
* Fixed a crash in the Python tests
* Update .travis.yml
* Update CHANGELOG.md
* Removed --settings option
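To illustrate the TensorFlow 2.x breakage mentioned in the Datumaro item above: TF 1.x graph-and-session code fails under TF 2.x unless routed through the compatibility layer. This is a generic illustration, not the project's fix.

```python
# Generic illustration of the TF 1.x -> 2.x breakage (not CVAT's fix):
# tf.placeholder and tf.Session are gone from the default TF 2.x
# namespace; the compat.v1 layer restores the 1.x execution model.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # opt back into graph/session execution

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    y = x * 2.0

with tf.Session(graph=graph) as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]
```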
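Regarding the formatted-string item: f-strings are a Python 3.6+ syntax and raise a SyntaxError at parse time on older interpreters, which is presumably the container limitation; str.format() is the portable equivalent.

```python
# f-strings need Python >= 3.6; str.format() works on older versions too.
name = "cvat"

# modern form, a SyntaxError on Python < 3.6:
# message = f"hello {name}"

# portable form with identical output:
message = "hello {}".format(name)
print(message)  # hello cvat
```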
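For the DJANGO_CONFIGURATION item, assuming it refers to the django-configurations package (whose DJANGO_CONFIGURATION environment variable selects a settings class by name), a test configuration might look like the sketch below; the class body is illustrative, not CVAT's actual settings.

```python
# A minimal sketch assuming the django-configurations package; the
# DJANGO_CONFIGURATION environment variable names the Configuration
# subclass to load. Values below are placeholders, not CVAT's settings.
from configurations import Configuration


class Testing(Configuration):
    DEBUG = False
    SECRET_KEY = "test-only-key"  # placeholder for the example
    # In-memory SQLite keeps test runs fast and isolated
    # (an illustrative choice).
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": ":memory:",
        }
    }
```

Selecting settings through the environment this way would also make a separate --settings command-line option redundant, which may be why it was removed.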