Data streaming using chunks (#1007)

Huge feature (200+ commits from different developers). It completely changes the data layout (expect a very long DB migration if you have a lot of tasks). The primary idea is to send data as zip chunks (e.g. 36 images per chunk) or as encoded video chunks and decode them on the client side. This solves the latency problem when you quickly view a separate frame in the UI (play mode).
Another important feature of the patch is access to the original images: for annotation the client uses compressed chunks, but when you export a dataset, Datumaro uses the original chunks (video, however, is decoded at original quality and re-encoded at maximum/optimal quality in any case).
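The chunk arithmetic implied above can be sketched as follows (hypothetical helpers for illustration, not code from the patch):

```javascript
// Sketch of the frame-to-chunk mapping described above; the helper
// names are illustrative and not part of the patch.
// chunkSize is the number of frames packed into one chunk (e.g. 36).
function chunkIndexFor(frameNumber, chunkSize) {
    // Index of the chunk that contains the given frame
    return Math.floor(frameNumber / chunkSize);
}

function frameOffsetInChunk(frameNumber, chunkSize) {
    // Position of the frame inside its chunk
    return frameNumber % chunkSize;
}
```

With 36 frames per chunk, frame 40 lives in chunk 1 at offset 4, so seeking to a nearby frame costs one chunk download instead of one request per frame.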
main
Andrey Zhavoronkov 6 years ago committed by GitHub
parent ecad0231c9
commit e7808cfb03
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23

@ -1,4 +1,5 @@
exclude_paths:
- '**/3rdparty/**'
- '**/engine/js/cvat-core.min.js'
- '**/engine/js/unzip_imgs.js'
- CHANGELOG.md

@ -14,4 +14,4 @@ before_script:
script:
- docker-compose -f docker-compose.yml -f docker-compose.ci.yml run cvat_ci /bin/bash -c 'python3 manage.py test cvat/apps utils/cli'
- docker-compose -f docker-compose.yml -f docker-compose.ci.yml run cvat_ci /bin/bash -c 'python3 manage.py test datumaro/'
- docker-compose -f docker-compose.yml -f docker-compose.ci.yml run cvat_ci /bin/bash -c 'cd cvat-core && npm install && npm run test && npm run coveralls'
- docker-compose -f docker-compose.yml -f docker-compose.ci.yml run cvat_ci /bin/bash -c 'cd cvat-data && npm install && cd ../cvat-core && npm install && npm run test && npm run coveralls'

@ -12,52 +12,60 @@ patches and features.
The following steps should work on a clean Ubuntu 18.04 installation.
- Install necessary dependencies:
```sh
$ sudo apt-get update && sudo apt-get --no-install-recommends install -y ffmpeg build-essential nodejs npm curl redis-server python3-dev python3-pip python3-venv libldap2-dev libsasl2-dev
```
Also, please make sure that ffmpeg is installed with all the necessary libav* libraries and the pkg-config package.
```sh
# General dependencies
sudo apt-get install -y pkg-config
# Library components
sudo apt-get install -y \
libavformat-dev libavcodec-dev libavdevice-dev \
libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
```
See [PyAV Dependencies installation guide](http://docs.mikeboers.com/pyav/develop/overview/installation.html#dependencies)
for details.
- Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/linux#_debian-and-ubuntu-based-distributions)
for development
- Install CVAT on your local host:
```sh
git clone https://github.com/opencv/cvat
cd cvat && mkdir logs keys
python3 -m venv .env
. .env/bin/activate
pip install -U pip wheel setuptools
pip install -r cvat/requirements/development.txt
pip install -r datumaro/requirements.txt
python manage.py migrate
python manage.py collectstatic
```
- Create a superuser for CVAT:
```sh
$ python manage.py createsuperuser
Username (leave blank to use 'django'): ***
Email address: ***
Password: ***
Password (again): ***
```
- Install npm packages for the UI and start the UI debug server (run the following commands from the CVAT root directory):
```sh
cd cvat-core && npm install && \
cd ../cvat-canvas && npm install && \
cd ../cvat-data && npm install && \
cd ../cvat-ui && npm install && npm start
```
- Open a new terminal (Ctrl + Shift + T) and run Visual Studio Code from the virtual environment:
```sh
cd .. && source .env/bin/activate && code
```
- Install the following VS Code extensions:
- [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome)

@ -35,8 +35,17 @@ RUN apt-get update && \
supervisor \
ffmpeg \
gstreamer0.10-ffmpeg \
libavcodec-dev \
libavdevice-dev \
libavfilter-dev \
libavformat-dev \
libavutil-dev \
libldap2-dev \
libswresample-dev \
libswscale-dev \
libsasl2-dev \
pkg-config \
python3-dev \
python3-pip \
tzdata \
@ -57,7 +66,8 @@ RUN apt-get update && \
dpkg-reconfigure -f noninteractive tzdata && \
add-apt-repository --remove ppa:mc3man/gstffmpeg-keep -y && \
add-apt-repository --remove ppa:mc3man/xerus-media -y && \
rm -rf /var/lib/apt/lists/*
rm -rf /var/lib/apt/lists/* && \
echo 'application/wasm wasm' >> /etc/mime.types
# Add a non-root user
ENV USER=${USER}
@ -123,6 +133,7 @@ COPY ssh ${HOME}/.ssh
COPY utils ${HOME}/utils
COPY cvat/ ${HOME}/cvat
COPY cvat-core/ ${HOME}/cvat-core
COPY cvat-data/ ${HOME}/cvat-data
COPY tests ${HOME}/tests
COPY datumaro/ ${HOME}/datumaro

@ -17,6 +17,11 @@ ENV TERM=xterm \
COPY cvat-core/package*.json /tmp/cvat-core/
COPY cvat-canvas/package*.json /tmp/cvat-canvas/
COPY cvat-ui/package*.json /tmp/cvat-ui/
COPY cvat-data/package*.json /tmp/cvat-data/
# Install cvat-data dependencies
WORKDIR /tmp/cvat-data/
RUN npm install
# Install cvat-core dependencies
WORKDIR /tmp/cvat-core/
@ -31,6 +36,7 @@ WORKDIR /tmp/cvat-ui/
RUN npm install
# Build source code
COPY cvat-data/ /tmp/cvat-data/
COPY cvat-core/ /tmp/cvat-core/
COPY cvat-canvas/ /tmp/cvat-canvas/
COPY cvat-ui/ /tmp/cvat-ui/
@ -38,5 +44,6 @@ RUN npm run build
FROM nginx:stable-alpine
# Replace default.conf configuration to remove unnecessary rules
RUN sed -i "s/}/application\/wasm wasm;\n}/g" /etc/nginx/mime.types
COPY cvat-ui/react_nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=cvat-ui /tmp/cvat-ui/dist /usr/share/nginx/html/
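The `application/wasm` MIME-type tweaks above (in `/etc/mime.types` and in the nginx config) exist so the browser can stream-compile the WebAssembly decoder: `WebAssembly.instantiateStreaming()` rejects a response served with any other Content-Type. A common defensive pattern (an illustrative sketch, not code from this patch) falls back to ArrayBuffer instantiation:

```javascript
// Illustrative fallback: streaming compilation requires the server to send
// Content-Type: application/wasm; if it fails, compile from an ArrayBuffer.
async function loadWasm(url, imports) {
    if (WebAssembly.instantiateStreaming) {
        try {
            return await WebAssembly.instantiateStreaming(fetch(url), imports);
        } catch (e) {
            // e.g. wrong MIME type -- fall back to the buffer path below
        }
    }
    const response = await fetch(url);
    const bytes = await response.arrayBuffer();
    return WebAssembly.instantiate(bytes, imports);
}
```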

@ -337,6 +337,189 @@
"@babel/plugin-syntax-async-generators": "^7.2.0"
}
},
"@babel/plugin-proposal-class-properties": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/plugin-proposal-class-properties/-/plugin-proposal-class-properties-7.8.3.tgz",
"integrity": "sha512-EqFhbo7IosdgPgZggHaNObkmO1kNUe3slaKu54d5OWvy+p9QIKOzK1GAEpAIsZtWVtPXUHSMcT4smvDrCfY4AA==",
"dev": true,
"requires": {
"@babel/helper-create-class-features-plugin": "^7.8.3",
"@babel/helper-plugin-utils": "^7.8.3"
},
"dependencies": {
"@babel/code-frame": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.8.3.tgz",
"integrity": "sha512-a9gxpmdXtZEInkCSHUJDLHZVBgb1QS0jhss4cPP93EW7s+uC5bikET2twEF3KV+7rDblJcmNvTR7VJejqd2C2g==",
"dev": true,
"requires": {
"@babel/highlight": "^7.8.3"
}
},
"@babel/generator": {
"version": "7.8.7",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.8.7.tgz",
"integrity": "sha512-DQwjiKJqH4C3qGiyQCAExJHoZssn49JTMJgZ8SANGgVFdkupcUhLOdkAeoC6kmHZCPfoDG5M0b6cFlSN5wW7Ew==",
"dev": true,
"requires": {
"@babel/types": "^7.8.7",
"jsesc": "^2.5.1",
"lodash": "^4.17.13",
"source-map": "^0.5.0"
}
},
"@babel/helper-create-class-features-plugin": {
"version": "7.8.6",
"resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.8.6.tgz",
"integrity": "sha512-klTBDdsr+VFFqaDHm5rR69OpEQtO2Qv8ECxHS1mNhJJvaHArR6a1xTf5K/eZW7eZpJbhCx3NW1Yt/sKsLXLblg==",
"dev": true,
"requires": {
"@babel/helper-function-name": "^7.8.3",
"@babel/helper-member-expression-to-functions": "^7.8.3",
"@babel/helper-optimise-call-expression": "^7.8.3",
"@babel/helper-plugin-utils": "^7.8.3",
"@babel/helper-replace-supers": "^7.8.6",
"@babel/helper-split-export-declaration": "^7.8.3"
}
},
"@babel/helper-function-name": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.8.3.tgz",
"integrity": "sha512-BCxgX1BC2hD/oBlIFUgOCQDOPV8nSINxCwM3o93xP4P9Fq6aV5sgv2cOOITDMtCfQ+3PvHp3l689XZvAM9QyOA==",
"dev": true,
"requires": {
"@babel/helper-get-function-arity": "^7.8.3",
"@babel/template": "^7.8.3",
"@babel/types": "^7.8.3"
}
},
"@babel/helper-get-function-arity": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.8.3.tgz",
"integrity": "sha512-FVDR+Gd9iLjUMY1fzE2SR0IuaJToR4RkCDARVfsBBPSP53GEqSFjD8gNyxg246VUyc/ALRxFaAK8rVG7UT7xRA==",
"dev": true,
"requires": {
"@babel/types": "^7.8.3"
}
},
"@babel/helper-member-expression-to-functions": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.8.3.tgz",
"integrity": "sha512-fO4Egq88utkQFjbPrSHGmGLFqmrshs11d46WI+WZDESt7Wu7wN2G2Iu+NMMZJFDOVRHAMIkB5SNh30NtwCA7RA==",
"dev": true,
"requires": {
"@babel/types": "^7.8.3"
}
},
"@babel/helper-optimise-call-expression": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.8.3.tgz",
"integrity": "sha512-Kag20n86cbO2AvHca6EJsvqAd82gc6VMGule4HwebwMlwkpXuVqrNRj6CkCV2sKxgi9MyAUnZVnZ6lJ1/vKhHQ==",
"dev": true,
"requires": {
"@babel/types": "^7.8.3"
}
},
"@babel/helper-plugin-utils": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.8.3.tgz",
"integrity": "sha512-j+fq49Xds2smCUNYmEHF9kGNkhbet6yVIBp4e6oeQpH1RUs/Ir06xUKzDjDkGcaaokPiTNs2JBWHjaE4csUkZQ==",
"dev": true
},
"@babel/helper-replace-supers": {
"version": "7.8.6",
"resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.8.6.tgz",
"integrity": "sha512-PeMArdA4Sv/Wf4zXwBKPqVj7n9UF/xg6slNRtZW84FM7JpE1CbG8B612FyM4cxrf4fMAMGO0kR7voy1ForHHFA==",
"dev": true,
"requires": {
"@babel/helper-member-expression-to-functions": "^7.8.3",
"@babel/helper-optimise-call-expression": "^7.8.3",
"@babel/traverse": "^7.8.6",
"@babel/types": "^7.8.6"
}
},
"@babel/helper-split-export-declaration": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.8.3.tgz",
"integrity": "sha512-3x3yOeyBhW851hroze7ElzdkeRXQYQbFIb7gLK1WQYsw2GWDay5gAJNw1sWJ0VFP6z5J1whqeXH/WCdCjZv6dA==",
"dev": true,
"requires": {
"@babel/types": "^7.8.3"
}
},
"@babel/highlight": {
"version": "7.8.3",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.8.3.tgz",
"integrity": "sha512-PX4y5xQUvy0fnEVHrYOarRPXVWafSjTW9T0Hab8gVIawpl2Sj0ORyrygANq+KjcNlSSTw0YCLSNA8OyZ1I4yEg==",
"dev": true,
"requires": {
"chalk": "^2.0.0",
"esutils": "^2.0.2",
"js-tokens": "^4.0.0"
}
},
"@babel/parser": {
"version": "7.8.7",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.8.7.tgz",
"integrity": "sha512-9JWls8WilDXFGxs0phaXAZgpxTZhSk/yOYH2hTHC0X1yC7Z78IJfvR1vJ+rmJKq3I35td2XzXzN6ZLYlna+r/A==",
"dev": true
},
"@babel/template": {
"version": "7.8.6",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.8.6.tgz",
"integrity": "sha512-zbMsPMy/v0PWFZEhQJ66bqjhH+z0JgMoBWuikXybgG3Gkd/3t5oQ1Rw2WQhnSrsOmsKXnZOx15tkC4qON/+JPg==",
"dev": true,
"requires": {
"@babel/code-frame": "^7.8.3",
"@babel/parser": "^7.8.6",
"@babel/types": "^7.8.6"
}
},
"@babel/traverse": {
"version": "7.8.6",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.8.6.tgz",
"integrity": "sha512-2B8l0db/DPi8iinITKuo7cbPznLCEk0kCxDoB9/N6gGNg/gxOXiR/IcymAFPiBwk5w6TtQ27w4wpElgp9btR9A==",
"dev": true,
"requires": {
"@babel/code-frame": "^7.8.3",
"@babel/generator": "^7.8.6",
"@babel/helper-function-name": "^7.8.3",
"@babel/helper-split-export-declaration": "^7.8.3",
"@babel/parser": "^7.8.6",
"@babel/types": "^7.8.6",
"debug": "^4.1.0",
"globals": "^11.1.0",
"lodash": "^4.17.13"
}
},
"@babel/types": {
"version": "7.8.7",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.8.7.tgz",
"integrity": "sha512-k2TreEHxFA4CjGkL+GYjRyx35W0Mr7DP5+9q6WMkyKXB+904bYmG40syjMFV0oLlhhFCwWl0vA0DyzTDkwAiJw==",
"dev": true,
"requires": {
"esutils": "^2.0.2",
"lodash": "^4.17.13",
"to-fast-properties": "^2.0.0"
}
},
"debug": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
"integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
"dev": true,
"requires": {
"ms": "^2.1.1"
}
},
"ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==",
"dev": true
}
}
},
"@babel/plugin-proposal-dynamic-import": {
"version": "7.5.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-proposal-dynamic-import/-/plugin-proposal-dynamic-import-7.5.0.tgz",
@ -1567,9 +1750,9 @@
"dev": true
},
"aws4": {
"version": "1.9.0",
"resolved": "https://registry.npmjs.org/aws4/-/aws4-1.9.0.tgz",
"integrity": "sha512-Uvq6hVe90D0B2WEnUqtdgY1bATGz3mw33nH9Y+dmA+w5DHvUmBgkr5rM/KCHpCsiFNRUfokW/szpPPgMK2hm4A==",
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/aws4/-/aws4-1.9.1.tgz",
"integrity": "sha512-wMHVg2EOHaMRxbzgFJ9gtjOOCrI80OHLG14rxi28XwOW8ux6IiEbRCGGGqCtdAIg4FQCbW20k9RsT4y3gJlFug==",
"dev": true
},
"babel-loader": {
@ -1939,28 +2122,6 @@
"integrity": "sha1-0ygVQE1olpn4Wk6k+odV3ROpYEg=",
"dev": true
},
"cacache": {
"version": "11.3.3",
"resolved": "https://registry.npmjs.org/cacache/-/cacache-11.3.3.tgz",
"integrity": "sha512-p8WcneCytvzPxhDvYp31PD039vi77I12W+/KfR9S8AZbaiARFBCpsPJS+9uhWfeBfeAtW7o/4vt3MUqLkbY6nA==",
"dev": true,
"requires": {
"bluebird": "^3.5.5",
"chownr": "^1.1.1",
"figgy-pudding": "^3.5.1",
"glob": "^7.1.4",
"graceful-fs": "^4.1.15",
"lru-cache": "^5.1.1",
"mississippi": "^3.0.0",
"mkdirp": "^0.5.1",
"move-concurrently": "^1.0.1",
"promise-inflight": "^1.0.1",
"rimraf": "^2.6.3",
"ssri": "^6.0.1",
"unique-filename": "^1.1.1",
"y18n": "^4.0.0"
}
},
"cache-base": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/cache-base/-/cache-base-1.0.1.tgz",
@ -2628,9 +2789,9 @@
}
},
"css-loader": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/css-loader/-/css-loader-3.2.0.tgz",
"integrity": "sha512-QTF3Ud5H7DaZotgdcJjGMvyDj5F3Pn1j/sC6VBEOVp94cbwqyIBdcs/quzj4MC1BKQSrTpQznegH/5giYbhnCQ==",
"version": "3.4.2",
"resolved": "https://registry.npmjs.org/css-loader/-/css-loader-3.4.2.tgz",
"integrity": "sha512-jYq4zdZT0oS0Iykt+fqnzVLRIeiPWhka+7BqPn+oSIpWJAHak5tmB/WZrJ2a21JhCeFyNnnlroSl8c+MtVndzA==",
"dev": true,
"requires": {
"camelcase": "^5.3.1",
@ -2638,24 +2799,41 @@
"icss-utils": "^4.1.1",
"loader-utils": "^1.2.3",
"normalize-path": "^3.0.0",
"postcss": "^7.0.17",
"postcss": "^7.0.23",
"postcss-modules-extract-imports": "^2.0.0",
"postcss-modules-local-by-default": "^3.0.2",
"postcss-modules-scope": "^2.1.0",
"postcss-modules-scope": "^2.1.1",
"postcss-modules-values": "^3.0.0",
"postcss-value-parser": "^4.0.0",
"schema-utils": "^2.0.0"
"postcss-value-parser": "^4.0.2",
"schema-utils": "^2.6.0"
},
"dependencies": {
"postcss": {
"version": "7.0.27",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz",
"integrity": "sha512-WuQETPMcW9Uf1/22HWUWP9lgsIC+KEHg2kozMflKjbeUtw9ujvFX6QmIfozaErDkmLWS9WEnEdEe6Uo9/BNTdQ==",
"dev": true,
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
"supports-color": "^6.1.0"
}
},
"schema-utils": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.2.0.tgz",
"integrity": "sha512-5EwsCNhfFTZvUreQhx/4vVQpJ/lnCAkgoIHLhSpp4ZirE+4hzFvdJi0FMub6hxbFVBJYSpeVVmon+2e7uEGRrA==",
"version": "2.6.4",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.6.4.tgz",
"integrity": "sha512-VNjcaUxVnEeun6B2fiiUDjXXBtD4ZSH7pdbfIu1pOFwgptDPLMo/z9jr4sUfsjFVPqDCEin/F7IYlq7/E6yDbQ==",
"dev": true,
"requires": {
"ajv": "^6.10.2",
"ajv-keywords": "^3.4.1"
}
},
"source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
"integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==",
"dev": true
}
}
},
@ -4755,13 +4933,13 @@
}
},
"globule": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/globule/-/globule-1.3.0.tgz",
"integrity": "sha512-YlD4kdMqRCQHrhVdonet4TdRtv1/sZKepvoxNT4Nrhrp5HI8XFfc8kFlGlBn2myBo80aGp8Eft259mbcUJhgSg==",
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/globule/-/globule-1.3.1.tgz",
"integrity": "sha512-OVyWOHgw29yosRHCHo7NncwR1hW5ew0W/UrvtwvjefVJeQ26q4/8r8FmPsSF1hJ93IgWkyv16pCTz6WblMzm/g==",
"dev": true,
"requires": {
"glob": "~7.1.1",
"lodash": "~4.17.10",
"lodash": "~4.17.12",
"minimatch": "~3.0.2"
}
},
@ -5149,6 +5327,12 @@
"integrity": "sha1-8w9xbI4r00bHtn0985FVZqfAVgc=",
"dev": true
},
"infer-owner": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/infer-owner/-/infer-owner-1.0.4.tgz",
"integrity": "sha512-IClj+Xz94+d7irH5qRyfJonOdfTzuDaifE6ZPWfx0N0+/ATZCbuTPq2prFl526urkQd90WyUKIh1DfBQ2hMz9A==",
"dev": true
},
"inflight": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz",
@ -5378,13 +5562,10 @@
"dev": true
},
"is-finite": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/is-finite/-/is-finite-1.0.2.tgz",
"integrity": "sha1-zGZ3aVYCvlUO8R6LSqYwU0K20Ko=",
"dev": true,
"requires": {
"number-is-nan": "^1.0.0"
}
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/is-finite/-/is-finite-1.1.0.tgz",
"integrity": "sha512-cdyMtqX/BOqqNBBiKlIVkytNHm49MtMlYyn1zxzvJKWmFMlGzm+ry5BBfYyeY9YmNKbRSo/o7OX9w9ale0wg3w==",
"dev": true
},
"is-fullwidth-code-point": {
"version": "2.0.0",
@ -5584,9 +5765,9 @@
"dev": true
},
"js-base64": {
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/js-base64/-/js-base64-2.5.1.tgz",
"integrity": "sha512-M7kLczedRMYX4L8Mdh4MzyAMM9O5osx+4FcOQuTvr3A9F2D9S5JXheN0ewNbrvK2UatkTRhL5ejGmGSjNMiZuw==",
"version": "2.5.2",
"resolved": "https://registry.npmjs.org/js-base64/-/js-base64-2.5.2.tgz",
"integrity": "sha512-Vg8czh0Q7sFBSUMWWArX/miJeBWYBPpdU/3M/DKSaekLMqrqVPaedp+5mZhie/r0lgrcaYBfwXatEew6gwgiQQ==",
"dev": true
},
"js-levenshtein": {
@ -6363,9 +6544,9 @@
}
},
"node-sass": {
"version": "4.13.0",
"resolved": "https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz",
"integrity": "sha512-W1XBrvoJ1dy7VsvTAS5q1V45lREbTlZQqFbiHb3R3OTTCma0XBtuG6xZ6Z4506nR4lmHPTqVRwxT6KgtWC97CA==",
"version": "4.13.1",
"resolved": "https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz",
"integrity": "sha512-TTWFx+ZhyDx1Biiez2nB0L3YrCZ/8oHagaDalbuBSlqXgUPsdkUSzJsVxeDO9LtPB49+Fh3WQl3slABo6AotNw==",
"dev": true,
"requires": {
"async-foreach": "^0.1.3",
@ -7381,9 +7562,9 @@
}
},
"postcss-modules-scope": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/postcss-modules-scope/-/postcss-modules-scope-2.1.0.tgz",
"integrity": "sha512-91Rjps0JnmtUB0cujlc8KIKCsJXWjzuxGeT/+Q2i2HXKZ7nBUeF9YQTZZTNvHVoNYj1AthsjnGLtqDUE0Op79A==",
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/postcss-modules-scope/-/postcss-modules-scope-2.1.1.tgz",
"integrity": "sha512-OXRUPecnHCg8b9xWvldG/jUpRIGPNRka0r4D4j0ESUU2/5IOnpsjfPPmDprM3Ih8CgZ8FXjWqaniK5v4rWt3oQ==",
"dev": true,
"requires": {
"postcss": "^7.0.6",
@ -7633,9 +7814,9 @@
"dev": true
},
"psl": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/psl/-/psl-1.6.0.tgz",
"integrity": "sha512-SYKKmVel98NCOYXpkwUqZqh0ahZeeKfmisiLIcEZdsb+WbLv02g/dI5BUmZnIyOe7RzZtLax81nnb2HbvC2tzA==",
"version": "1.7.0",
"resolved": "https://registry.npmjs.org/psl/-/psl-1.7.0.tgz",
"integrity": "sha512-5NsSEDv8zY70ScRnOTn7bK7eanl2MvFrOrS/R6x+dBt5g1ghnj9Zv90kO8GwT8gxcu2ANyFprnFYB85IogIJOQ==",
"dev": true
},
"pstree.remy": {
@ -8010,9 +8191,9 @@
}
},
"request": {
"version": "2.88.0",
"resolved": "https://registry.npmjs.org/request/-/request-2.88.0.tgz",
"integrity": "sha512-NAqBSrijGLZdM0WZNsInLJpkJokL72XYjUpnB0iwsRgxh7dB6COrHnTBNwN0E+lHDAJzu7kLAkDeY08z2/A0hg==",
"version": "2.88.2",
"resolved": "https://registry.npmjs.org/request/-/request-2.88.2.tgz",
"integrity": "sha512-MsvtOrfG9ZcrOwAW+Qi+F6HbD0CWXEh9ou77uOb7FM2WPhwT7smM833PzanhJLsgXjN89Ir6V2PczXNnMpwKhw==",
"dev": true,
"requires": {
"aws-sign2": "~0.7.0",
@ -8022,7 +8203,7 @@
"extend": "~3.0.2",
"forever-agent": "~0.6.1",
"form-data": "~2.3.2",
"har-validator": "~5.1.0",
"har-validator": "~5.1.3",
"http-signature": "~1.2.0",
"is-typedarray": "~1.0.0",
"isstream": "~0.1.2",
@ -8032,7 +8213,7 @@
"performance-now": "^2.1.0",
"qs": "~6.5.2",
"safe-buffer": "^5.1.2",
"tough-cookie": "~2.4.3",
"tough-cookie": "~2.5.0",
"tunnel-agent": "^0.6.0",
"uuid": "^3.3.2"
},
@ -8402,22 +8583,22 @@
}
},
"sass-loader": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/sass-loader/-/sass-loader-8.0.0.tgz",
"integrity": "sha512-+qeMu563PN7rPdit2+n5uuYVR0SSVwm0JsOUsaJXzgYcClWSlmX0iHDnmeOobPkf5kUglVot3QS6SyLyaQoJ4w==",
"version": "8.0.2",
"resolved": "https://registry.npmjs.org/sass-loader/-/sass-loader-8.0.2.tgz",
"integrity": "sha512-7o4dbSK8/Ol2KflEmSco4jTjQoV988bM82P9CZdmo9hR3RLnvNc0ufMNdMrB0caq38JQ/FgF4/7RcbcfKzxoFQ==",
"dev": true,
"requires": {
"clone-deep": "^4.0.1",
"loader-utils": "^1.2.3",
"neo-async": "^2.6.1",
"schema-utils": "^2.1.0",
"schema-utils": "^2.6.1",
"semver": "^6.3.0"
},
"dependencies": {
"schema-utils": {
"version": "2.6.1",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.6.1.tgz",
"integrity": "sha512-0WXHDs1VDJyo+Zqs9TKLKyD/h7yDpHUhEFsM2CzkICFdoX1av+GBq/J2xRTFfsQO5kBfhZzANf2VcIm84jqDbg==",
"version": "2.6.4",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.6.4.tgz",
"integrity": "sha512-VNjcaUxVnEeun6B2fiiUDjXXBtD4ZSH7pdbfIu1pOFwgptDPLMo/z9jr4sUfsjFVPqDCEin/F7IYlq7/E6yDbQ==",
"dev": true,
"requires": {
"ajv": "^6.10.2",
@ -8523,12 +8704,6 @@
}
}
},
"serialize-javascript": {
"version": "1.7.0",
"resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.7.0.tgz",
"integrity": "sha512-ke8UG8ulpFOxO8f8gRYabHQe/ZntKlcig2Mp+8+URDP1D8vJZ0KUt7LYo07q25Z/+JVSgpr/cui9PIp5H6/+nA==",
"dev": true
},
"serve-index": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/serve-index/-/serve-index-1.9.1.tgz",
@ -9401,28 +9576,66 @@
}
},
"terser-webpack-plugin": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.3.0.tgz",
"integrity": "sha512-W2YWmxPjjkUcOWa4pBEv4OP4er1aeQJlSo2UhtCFQCuRXEHjOFscO8VyWHj9JLlA0RzQb8Y2/Ta78XZvT54uGg==",
"version": "1.4.3",
"resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.4.3.tgz",
"integrity": "sha512-QMxecFz/gHQwteWwSo5nTc6UaICqN1bMedC5sMtUc7y3Ha3Q8y6ZO0iCR8pq4RJC8Hjf0FEPEHZqcMB/+DFCrA==",
"dev": true,
"requires": {
"cacache": "^11.3.2",
"find-cache-dir": "^2.0.0",
"cacache": "^12.0.2",
"find-cache-dir": "^2.1.0",
"is-wsl": "^1.1.0",
"loader-utils": "^1.2.3",
"schema-utils": "^1.0.0",
"serialize-javascript": "^1.7.0",
"serialize-javascript": "^2.1.2",
"source-map": "^0.6.1",
"terser": "^4.0.0",
"webpack-sources": "^1.3.0",
"terser": "^4.1.2",
"webpack-sources": "^1.4.0",
"worker-farm": "^1.7.0"
},
"dependencies": {
"cacache": {
"version": "12.0.3",
"resolved": "https://registry.npmjs.org/cacache/-/cacache-12.0.3.tgz",
"integrity": "sha512-kqdmfXEGFepesTuROHMs3MpFLWrPkSSpRqOw80RCflZXy/khxaArvFrQ7uJxSUduzAufc6G0g1VUCOZXxWavPw==",
"dev": true,
"requires": {
"bluebird": "^3.5.5",
"chownr": "^1.1.1",
"figgy-pudding": "^3.5.1",
"glob": "^7.1.4",
"graceful-fs": "^4.1.15",
"infer-owner": "^1.0.3",
"lru-cache": "^5.1.1",
"mississippi": "^3.0.0",
"mkdirp": "^0.5.1",
"move-concurrently": "^1.0.1",
"promise-inflight": "^1.0.1",
"rimraf": "^2.6.3",
"ssri": "^6.0.1",
"unique-filename": "^1.1.1",
"y18n": "^4.0.0"
}
},
"serialize-javascript": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-2.1.2.tgz",
"integrity": "sha512-rs9OggEUF0V4jUSecXazOYsLfu7OGK2qIn3c7IPBiffz32XniEp/TX9Xmc9LQfK2nQ2QKHvZ2oygKUGU0lG4jQ==",
"dev": true
},
"source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
"integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==",
"dev": true
},
"webpack-sources": {
"version": "1.4.3",
"resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-1.4.3.tgz",
"integrity": "sha512-lgTS3Xhv1lCOKo7SA5TjKXMjpSM4sBjNV5+q2bqesbSPs5FjGmU6jjtBSkX9b4qW87vDIsCIlUPOEhbZrMdjeQ==",
"dev": true,
"requires": {
"source-list-map": "^2.0.0",
"source-map": "~0.6.1"
}
}
}
},
@ -9548,21 +9761,13 @@
}
},
"tough-cookie": {
"version": "2.4.3",
"resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.4.3.tgz",
"integrity": "sha512-Q5srk/4vDM54WJsJio3XNn6K2sCG+CQ8G5Wz6bZhRZoAe/+TxjWB/GlFAnYEbkYVlON9FMk/fE3h2RLpPXo4lQ==",
"version": "2.5.0",
"resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.5.0.tgz",
"integrity": "sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==",
"dev": true,
"requires": {
"psl": "^1.1.24",
"punycode": "^1.4.1"
},
"dependencies": {
"punycode": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/punycode/-/punycode-1.4.1.tgz",
"integrity": "sha1-wNWmOycYgArY4esPpSachN1BhF4=",
"dev": true
}
"psl": "^1.1.28",
"punycode": "^2.1.1"
}
},
"trim-newlines": {

@ -19,23 +19,24 @@
"devDependencies": {
"@babel/cli": "^7.5.5",
"@babel/core": "^7.5.5",
"@babel/plugin-proposal-class-properties": "^7.8.3",
"@babel/preset-env": "^7.5.5",
"@babel/preset-typescript": "^7.3.3",
"@types/node": "^12.6.8",
"@typescript-eslint/eslint-plugin": "^1.13.0",
"@typescript-eslint/parser": "^1.13.0",
"babel-loader": "^8.0.6",
"css-loader": "^3.2.0",
"css-loader": "^3.4.2",
"dts-bundle-webpack": "^1.0.2",
"eslint": "^6.1.0",
"eslint-config-airbnb-typescript": "^4.0.1",
"eslint-config-typescript-recommended": "^1.4.17",
"eslint-plugin-import": "^2.18.2",
"node-sass": "^4.13.0",
"node-sass": "^4.13.1",
"nodemon": "^1.19.1",
"postcss-loader": "^3.0.0",
"postcss-preset-env": "^6.7.0",
"sass-loader": "^8.0.0",
"sass-loader": "^8.0.2",
"style-loader": "^1.0.0",
"typescript": "^3.5.3",
"webpack": "^4.36.1",

@ -9,6 +9,12 @@ export interface Size {
height: number;
}
export interface Image {
renderWidth: number;
renderHeight: number;
imageData: ImageData | CanvasImageSource;
}
export interface Position {
x: number;
y: number;
@ -110,7 +116,7 @@ export enum Mode {
}
export interface CanvasModel {
readonly image: HTMLImageElement | null;
readonly image: Image | null;
readonly objects: any[];
readonly zLayer: number | null;
readonly gridSize: Size;
@ -153,7 +159,7 @@ export class CanvasModelImpl extends MasterImpl implements CanvasModel {
activeElement: ActiveElement;
angle: number;
canvasSize: Size;
image: HTMLImageElement | null;
image: Image | null;
imageID: number | null;
imageOffset: number;
imageSize: Size;
@ -310,7 +316,7 @@ export class CanvasModelImpl extends MasterImpl implements CanvasModel {
this.data.image = null;
this.notify(UpdateReasons.IMAGE_CHANGED);
},
).then((data: HTMLImageElement): void => {
).then((data: Image): void => {
if (frameData.number !== this.data.imageID) {
// another frame was requested in the meantime
return;
@ -516,7 +522,7 @@ export class CanvasModelImpl extends MasterImpl implements CanvasModel {
return this.data.zLayer;
}
public get image(): HTMLImageElement | null {
public get image(): Image | null {
return this.data.image;
}

@ -285,7 +285,7 @@ export class CanvasViewImpl implements CanvasView, Listener {
}
private moveCanvas(): void {
for (const obj of [this.background, this.grid, this.loadingAnimation]) {
for (const obj of [this.background, this.grid]) {
obj.style.top = `${this.geometry.top}px`;
obj.style.left = `${this.geometry.left}px`;
}
@ -303,7 +303,7 @@ export class CanvasViewImpl implements CanvasView, Listener {
private transformCanvas(): void {
// Transform canvas
for (const obj of [this.background, this.grid, this.loadingAnimation, this.content]) {
for (const obj of [this.background, this.grid, this.content]) {
obj.style.transform = `scale(${this.geometry.scale}) rotate(${this.geometry.angle}deg)`;
}
@ -358,7 +358,7 @@ export class CanvasViewImpl implements CanvasView, Listener {
}
private resizeCanvas(): void {
for (const obj of [this.background, this.grid, this.loadingAnimation]) {
for (const obj of [this.background, this.grid]) {
obj.style.width = `${this.geometry.image.width}px`;
obj.style.height = `${this.geometry.image.height}px`;
}
@ -709,10 +709,21 @@ export class CanvasViewImpl implements CanvasView, Listener {
} else {
this.loadingAnimation.classList.add('cvat_canvas_hidden');
const ctx = this.background.getContext('2d');
this.background.setAttribute('width', `${image.width}px`);
this.background.setAttribute('height', `${image.height}px`);
this.background.setAttribute('width', `${image.renderWidth}px`);
this.background.setAttribute('height', `${image.renderHeight}px`);
if (ctx) {
ctx.drawImage(image, 0, 0);
if (image.imageData instanceof ImageData) {
ctx.scale(image.renderWidth / image.imageData.width,
image.renderHeight / image.imageData.height);
ctx.putImageData(image.imageData, 0, 0);
// The transformation matrix must not affect the putImageData() method,
// so the image has to be redrawn to apply the scale:
// https://www.w3.org/TR/2dcontext/#dom-context-2d-putimagedata
ctx.drawImage(this.background, 0, 0);
} else {
ctx.drawImage(image.imageData, 0, 0);
}
}
this.moveCanvas();
this.resizeCanvas();
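The draw-after-put trick above can be isolated into a small helper (the name `drawScaled` is hypothetical, for illustration only): `putImageData()` ignores the current transformation matrix, so the pixels are written first and the canvas is then drawn onto itself with `scale()` applied.

```javascript
// Sketch of the scaling approach used above (helper name is illustrative).
// ctx is a CanvasRenderingContext2D; imageData carries the decoded pixels.
function drawScaled(ctx, imageData, renderWidth, renderHeight) {
    const sx = renderWidth / imageData.width;
    const sy = renderHeight / imageData.height;
    ctx.scale(sx, sy);
    // putImageData() is unaffected by the transformation matrix...
    ctx.putImageData(imageData, 0, 0);
    // ...so redraw the canvas onto itself to apply the scale
    ctx.drawImage(ctx.canvas, 0, 0);
    return [sx, sy];
}
```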

@ -23,10 +23,12 @@ const nodeConfig = {
},
module: {
rules: [{
test: /\.ts$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
options: {
plugins: ['@babel/plugin-proposal-class-properties'],
presets: [
['@babel/preset-env'],
['@babel/typescript'],
@ -35,14 +37,20 @@ const nodeConfig = {
},
},
}, {
test: /\.css$/,
use: ['style-loader', 'css-loader']
test: /\.(css|scss)$/,
exclude: /node_modules/,
use: ['style-loader', {
loader: 'css-loader',
options: {
importLoaders: 2,
},
}, 'postcss-loader', 'sass-loader']
}],
},
plugins: [
new DtsBundleWebpack({
name: 'cvat-canvas.node',
main: 'dist/declaration/canvas.d.ts',
main: 'dist/declaration/src/typescript/canvas.d.ts',
out: '../cvat-canvas.node.d.ts',
}),
]
@ -70,10 +78,12 @@ const webConfig = {
},
module: {
rules: [{
test: /\.ts$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
options: {
plugins: ['@babel/plugin-proposal-class-properties'],
presets: [
['@babel/preset-env', {
targets: '> 2.5%', // https://github.com/browserslist/browserslist
@ -97,7 +107,7 @@ const webConfig = {
plugins: [
new DtsBundleWebpack({
name: 'cvat-canvas',
main: 'dist/declaration/canvas.d.ts',
main: 'dist/declaration/src/typescript/canvas.d.ts',
out: '../cvat-canvas.d.ts',
}),
]

@ -0,0 +1,33 @@
/*
* Copyright (C) 2019 Intel Corporation
* SPDX-License-Identifier: MIT
*/
/* global
require:false
*/
const Axios = require('axios');
Axios.defaults.withCredentials = true;
Axios.defaults.xsrfHeaderName = 'X-CSRFTOKEN';
Axios.defaults.xsrfCookieName = 'csrftoken';
onmessage = (e) => {
Axios.get(e.data.url, e.data.config)
.then((response) => {
postMessage({
responseData: response.data,
id: e.data.id,
isSuccess: true,
});
})
.catch((error) => {
postMessage({
id: e.data.id,
error,
isSuccess: false,
});
});
};
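A caller of the worker above might look like the following sketch (names are illustrative; the real wiring lives elsewhere in cvat-core). The `id` field lets multiple in-flight downloads share one worker, since responses can arrive in any order:

```javascript
// Hypothetical caller-side bookkeeping for the download worker above.
const requests = {};
let nextId = 0;

function fetchViaWorker(worker, url, config) {
    return new Promise((resolve, reject) => {
        const id = nextId++;
        requests[id] = { resolve, reject };
        worker.postMessage({ id, url, config });
    });
}

// Intended to be attached as worker.onmessage
function onWorkerMessage(e) {
    const pending = requests[e.data.id];
    if (!pending) return;
    delete requests[e.data.id];
    if (e.data.isSuccess) {
        pending.resolve(e.data.responseData);
    } else {
        pending.reject(e.data.error);
    }
}
```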

@ -9,14 +9,14 @@
*/
(() => {
const cvatData = require('../../cvat-data');
const PluginRegistry = require('./plugins');
const serverProxy = require('./server-proxy');
const { ArgumentError } = require('./exceptions');
const { isBrowser, isNode } = require('browser-or-node');
const { Exception, ArgumentError, DataError } = require('./exceptions');
// This is the frames storage
const frameDataCache = {};
const frameCache = {};
/**
* Class provides meta information about specific frame and frame itself
@ -24,8 +24,28 @@
* @hideconstructor
*/
class FrameData {
constructor(width, height, tid, number) {
constructor({
width,
height,
name,
taskID,
frameNumber,
startFrame,
stopFrame,
decodeForward,
}) {
Object.defineProperties(this, Object.freeze({
/**
* @name filename
* @type {string}
* @memberof module:API.cvat.classes.FrameData
* @readonly
* @instance
*/
filename: {
value: name,
writable: false,
},
/**
* @name width
* @type {integer}
@ -49,7 +69,7 @@
writable: false,
},
tid: {
value: tid,
value: taskID,
writable: false,
},
/**
@ -60,7 +80,19 @@
* @instance
*/
number: {
value: number,
value: frameNumber,
writable: false,
},
startFrame: {
value: startFrame,
writable: false,
},
stopFrame: {
value: stopFrame,
writable: false,
},
decodeForward: {
value: decodeForward,
writable: false,
},
}));
@ -86,42 +118,419 @@
}
FrameData.prototype.data.implementation = async function (onServerRequest) {
return new Promise(async (resolve, reject) => {
try {
if (this.number in frameCache[this.tid]) {
resolve(frameCache[this.tid][this.number]);
} else {
onServerRequest();
const frame = await serverProxy.frames.getData(this.tid, this.number);
if (isNode) {
frameCache[this.tid][this.number] = global.Buffer.from(frame, 'binary').toString('base64');
resolve(frameCache[this.tid][this.number]);
} else if (isBrowser) {
const reader = new FileReader();
reader.onload = () => {
const image = new Image(frame.width, frame.height);
image.onload = () => {
frameCache[this.tid][this.number] = image;
resolve(frameCache[this.tid][this.number]);
};
image.src = reader.result;
};
reader.readAsDataURL(frame);
return new Promise((resolve, reject) => {
const resolveWrapper = (data) => {
this._data = {
imageData: data,
renderWidth: this.width,
renderHeight: this.height,
};
return resolve(this._data);
};
if (this._data) {
resolve(this._data);
return;
}
const { provider } = frameDataCache[this.tid];
const { chunkSize } = frameDataCache[this.tid];
const start = parseInt(this.number / chunkSize, 10) * chunkSize;
const stop = Math.min(
this.stopFrame,
(parseInt(this.number / chunkSize, 10) + 1) * chunkSize - 1,
);
const chunkNumber = Math.floor(this.number / chunkSize);
const onDecodeAll = async (frameNumber) => {
if (frameDataCache[this.tid].activeChunkRequest
&& chunkNumber === frameDataCache[this.tid].activeChunkRequest.chunkNumber) {
const callbackArray = frameDataCache[this.tid].activeChunkRequest.callbacks;
for (let i = callbackArray.length - 1; i >= 0; --i) {
if (callbackArray[i].frameNumber === frameNumber) {
const callback = callbackArray[i];
callbackArray.splice(i, 1);
callback.resolve(await provider.frame(callback.frameNumber));
}
}
if (callbackArray.length === 0) {
frameDataCache[this.tid].activeChunkRequest = null;
}
}
};
const rejectRequestAll = () => {
if (frameDataCache[this.tid].activeChunkRequest
&& chunkNumber === frameDataCache[this.tid].activeChunkRequest.chunkNumber) {
for (const r of frameDataCache[this.tid].activeChunkRequest.callbacks) {
r.reject(r.frameNumber);
}
frameDataCache[this.tid].activeChunkRequest = null;
}
} catch (exception) {
reject(exception);
};
const makeActiveRequest = () => {
const taskDataCache = frameDataCache[this.tid];
const activeChunk = taskDataCache.activeChunkRequest;
activeChunk.request = serverProxy.frames.getData(this.tid,
activeChunk.chunkNumber).then((chunk) => {
frameDataCache[this.tid].activeChunkRequest.completed = true;
if (!taskDataCache.nextChunkRequest) {
provider.requestDecodeBlock(chunk,
taskDataCache.activeChunkRequest.start,
taskDataCache.activeChunkRequest.stop,
taskDataCache.activeChunkRequest.onDecodeAll,
taskDataCache.activeChunkRequest.rejectRequestAll);
}
}).catch((exception) => {
if (exception instanceof Exception) {
reject(exception);
} else {
reject(new Exception(exception.message));
}
}).finally(() => {
if (taskDataCache.nextChunkRequest) {
if (taskDataCache.activeChunkRequest) {
for (const r of taskDataCache.activeChunkRequest.callbacks) {
r.reject(r.frameNumber);
}
}
taskDataCache.activeChunkRequest = taskDataCache.nextChunkRequest;
taskDataCache.nextChunkRequest = null;
makeActiveRequest();
}
});
};
if (isNode) {
resolve('Dummy data');
} else if (isBrowser) {
provider.frame(this.number).then((frame) => {
if (frame === null) {
onServerRequest();
const activeRequest = frameDataCache[this.tid].activeChunkRequest;
if (!provider.isChunkCached(start, stop)) {
if (!activeRequest
|| (activeRequest
&& activeRequest.completed
&& activeRequest.chunkNumber !== chunkNumber)) {
if (activeRequest && activeRequest.rejectRequestAll) {
activeRequest.rejectRequestAll();
}
frameDataCache[this.tid].activeChunkRequest = {
request: null,
chunkNumber,
start,
stop,
onDecodeAll,
rejectRequestAll,
completed: false,
callbacks: [{
resolve: resolveWrapper,
reject,
frameNumber: this.number,
}],
};
makeActiveRequest();
} else if (activeRequest.chunkNumber === chunkNumber) {
if (!activeRequest.onDecodeAll
&& !activeRequest.rejectRequestAll) {
activeRequest.onDecodeAll = onDecodeAll;
activeRequest.rejectRequestAll = rejectRequestAll;
}
activeRequest.callbacks.push({
resolve: resolveWrapper,
reject,
frameNumber: this.number,
});
} else {
if (frameDataCache[this.tid].nextChunkRequest) {
const { callbacks } = frameDataCache[this.tid].nextChunkRequest;
for (const r of callbacks) {
r.reject(r.frameNumber);
}
}
frameDataCache[this.tid].nextChunkRequest = {
request: null,
chunkNumber,
start,
stop,
onDecodeAll,
rejectRequestAll,
completed: false,
callbacks: [{
resolve: resolveWrapper,
reject,
frameNumber: this.number,
}],
};
}
} else {
activeRequest.callbacks.push({
resolve: resolveWrapper,
reject,
frameNumber: this.number,
});
provider.requestDecodeBlock(null, start, stop,
onDecodeAll, rejectRequestAll);
}
} else {
if (this.number % chunkSize > chunkSize / 4
&& provider.decodedBlocksCacheSize > 1
&& this.decodeForward
&& !provider.isNextChunkExists(this.number)) {
const nextChunkNumber = Math.floor(this.number / chunkSize) + 1;
if (nextChunkNumber * chunkSize < this.stopFrame) {
provider.setReadyToLoading(nextChunkNumber);
const nextStart = nextChunkNumber * chunkSize;
const nextStop = (nextChunkNumber + 1) * chunkSize - 1;
if (!provider.isChunkCached(nextStart, nextStop)) {
if (!frameDataCache[this.tid].activeChunkRequest) {
frameDataCache[this.tid].activeChunkRequest = {
request: null,
chunkNumber: nextChunkNumber,
start: nextStart,
stop: nextStop,
onDecodeAll: null,
rejectRequestAll: null,
completed: false,
callbacks: [],
};
makeActiveRequest();
}
} else {
provider.requestDecodeBlock(null, nextStart, nextStop,
null, null);
}
}
}
resolveWrapper(frame);
}
}).catch((exception) => {
if (exception instanceof Exception) {
reject(exception);
} else {
reject(new Exception(exception.message));
}
});
}
});
};
function getFrameMeta(taskID, frame) {
const { meta, mode } = frameDataCache[taskID];
let size = null;
if (mode === 'interpolation') {
[size] = meta.frames;
} else if (mode === 'annotation') {
if (frame >= meta.size) {
throw new ArgumentError(
`Meta information about frame ${frame} can't be received from the server`,
);
} else {
size = meta.frames[frame];
}
} else {
throw new DataError(
`Invalid mode is specified ${mode}`,
);
}
return size;
}
class FrameBuffer {
constructor(size, chunkSize, stopFrame, taskID) {
this._size = size;
this._buffer = {};
this._requestedChunks = {};
this._chunkSize = chunkSize;
this._stopFrame = stopFrame;
this._activeFillBufferRequest = false;
this._taskID = taskID;
}
getFreeBufferSize() {
let requestedFrameCount = 0;
for (const chunk of Object.values(this._requestedChunks)) {
requestedFrameCount += chunk.requestedFrames.size;
}
return this._size - Object.keys(this._buffer).length - requestedFrameCount;
}
requestOneChunkFrames(chunkIdx) {
return new Promise((resolve, reject) => {
this._requestedChunks[chunkIdx] = {
...this._requestedChunks[chunkIdx],
resolve,
reject,
};
for (const frame of this._requestedChunks[chunkIdx].requestedFrames.entries()) {
const requestedFrame = frame[1];
const frameMeta = getFrameMeta(this._taskID, requestedFrame);
const frameData = new FrameData({
...frameMeta,
taskID: this._taskID,
frameNumber: requestedFrame,
startFrame: frameDataCache[this._taskID].startFrame,
stopFrame: frameDataCache[this._taskID].stopFrame,
decodeForward: false,
});
frameData.data().then(() => {
if (!(chunkIdx in this._requestedChunks)
|| !this._requestedChunks[chunkIdx].requestedFrames.has(requestedFrame)) {
reject(chunkIdx);
} else {
this._requestedChunks[chunkIdx].requestedFrames.delete(requestedFrame);
this._requestedChunks[chunkIdx].buffer[requestedFrame] = frameData;
if (this._requestedChunks[chunkIdx].requestedFrames.size === 0) {
const bufferedframes = Object.keys(
this._requestedChunks[chunkIdx].buffer,
).map((f) => +f);
this._requestedChunks[chunkIdx].resolve(new Set(bufferedframes));
}
}
}).catch(() => {
reject(chunkIdx);
});
}
});
}
fillBuffer(startFrame, frameStep = 1, count = null) {
const freeSize = this.getFreeBufferSize();
const requestedFrameCount = count ? count * frameStep : freeSize * frameStep;
const stopFrame = Math.min(startFrame + requestedFrameCount, this._stopFrame + 1);
for (let i = startFrame; i < stopFrame; i += frameStep) {
const chunkIdx = Math.floor(i / this._chunkSize);
if (!(chunkIdx in this._requestedChunks)) {
this._requestedChunks[chunkIdx] = {
requestedFrames: new Set(),
resolve: null,
reject: null,
buffer: {},
};
}
this._requestedChunks[chunkIdx].requestedFrames.add(i);
}
let bufferedFrames = new Set();
// Need to decode chunks in sequence
// eslint-disable-next-line no-async-promise-executor
return new Promise(async (resolve, reject) => {
for (const chunkIdx in this._requestedChunks) {
if (Object.prototype.hasOwnProperty.call(this._requestedChunks, chunkIdx)) {
try {
const chunkFrames = await this.requestOneChunkFrames(chunkIdx);
if (chunkIdx in this._requestedChunks) {
bufferedFrames = new Set([...bufferedFrames, ...chunkFrames]);
this._buffer = {
...this._buffer,
...this._requestedChunks[chunkIdx].buffer,
};
delete this._requestedChunks[chunkIdx];
if (Object.keys(this._requestedChunks).length === 0) {
resolve(bufferedFrames);
}
} else {
reject(chunkIdx);
break;
}
} catch (error) {
reject(error);
break;
}
}
}
});
}
async makeFillRequest(start, step, count = null) {
if (!this._activeFillBufferRequest) {
this._activeFillBufferRequest = true;
try {
await this.fillBuffer(start, step, count);
this._activeFillBufferRequest = false;
} catch (error) {
if (typeof (error) === 'number' && error in this._requestedChunks) {
this._activeFillBufferRequest = false;
}
throw error;
}
}
}
async require(frameNumber, taskID, fillBuffer, frameStep) {
for (const frame in this._buffer) {
if (frame < frameNumber
|| frame >= frameNumber + this._size * frameStep) {
delete this._buffer[frame];
}
}
this._required = frameNumber;
const frameMeta = getFrameMeta(taskID, frameNumber);
let frame = new FrameData({
...frameMeta,
taskID,
frameNumber,
startFrame: frameDataCache[taskID].startFrame,
stopFrame: frameDataCache[taskID].stopFrame,
decodeForward: !fillBuffer,
});
if (frameNumber in this._buffer) {
frame = this._buffer[frameNumber];
delete this._buffer[frameNumber];
const cachedFrames = this.cachedFrames();
if (fillBuffer && !this._activeFillBufferRequest
&& this._size > this._chunkSize
&& cachedFrames.length < (this._size * 3) / 4) {
const maxFrame = cachedFrames ? Math.max(...cachedFrames) : frameNumber;
if (maxFrame < this._stopFrame) {
this.makeFillRequest(maxFrame + 1, frameStep).catch((e) => {
if (e !== 'not needed') {
throw e;
}
});
}
}
} else if (fillBuffer) {
this.clear();
await this.makeFillRequest(frameNumber, frameStep, fillBuffer ? null : 1);
frame = this._buffer[frameNumber];
} else {
this.clear();
}
return frame;
}
clear() {
for (const chunkIdx in this._requestedChunks) {
if (Object.prototype.hasOwnProperty.call(this._requestedChunks, chunkIdx)
&& this._requestedChunks[chunkIdx].reject) {
this._requestedChunks[chunkIdx].reject('not needed');
}
}
this._activeFillBufferRequest = false;
this._requestedChunks = {};
this._buffer = {};
}
cachedFrames() {
return Object.keys(this._buffer).map((f) => +f);
}
}
async function getPreview(taskID) {
return new Promise(async (resolve, reject) => {
try {
// Just go to the server and get the preview (no caching)
const result = await serverProxy.frames.getPreview(taskID);
return new Promise((resolve, reject) => {
// Just go to the server and get the preview (no caching)
serverProxy.frames.getPreview(taskID).then((result) => {
if (isNode) {
resolve(global.Buffer.from(result, 'binary').toString('base64'));
} else if (isBrowser) {
@ -131,48 +540,75 @@
};
reader.readAsDataURL(result);
}
} catch (error) {
}).catch((error) => {
reject(error);
}
});
});
}
async function getFrame(taskID, mode, frame) {
async function getFrame(taskID, chunkSize, chunkType, mode, frame,
startFrame, stopFrame, isPlaying, step) {
if (!(taskID in frameDataCache)) {
const blockType = chunkType === 'video' ? cvatData.BlockType.MP4VIDEO
: cvatData.BlockType.ARCHIVE;
const meta = await serverProxy.frames.getMeta(taskID);
const mean = meta.frames.reduce((a, b) => a + b.width * b.height, 0)
/ meta.frames.length;
const stdDev = Math.sqrt(meta.frames.map(
(x) => Math.pow(x.width * x.height - mean, 2),
).reduce((a, b) => a + b) / meta.frames.length);
// limit the decoded frames cache to 2 GB
const decodedBlocksCacheSize = Math.floor(2147483648 / (mean + stdDev) / 4 / chunkSize)
|| 1;
frameDataCache[taskID] = {
meta: await serverProxy.frames.getMeta(taskID),
meta,
chunkSize,
mode,
startFrame,
stopFrame,
provider: new cvatData.FrameProvider(
blockType, chunkSize, Math.max(decodedBlocksCacheSize, 9),
decodedBlocksCacheSize, 1,
),
frameBuffer: new FrameBuffer(
Math.min(180, decodedBlocksCacheSize * chunkSize),
chunkSize,
stopFrame,
taskID,
),
decodedBlocksCacheSize,
activeChunkRequest: null,
nextChunkRequest: null,
};
frameCache[taskID] = {};
const frameMeta = getFrameMeta(taskID, frame);
// relevant only for video chunks
frameDataCache[taskID].provider.setRenderSize(frameMeta.width, frameMeta.height);
}
if (!(frame in frameDataCache[taskID])) {
let size = null;
if (mode === 'interpolation') {
[size] = frameDataCache[taskID].meta;
} else if (mode === 'annotation') {
if (frame >= frameDataCache[taskID].meta.length) {
throw new ArgumentError(
`Meta information about frame ${frame} can't be received from the server`,
);
} else {
size = frameDataCache[taskID].meta[frame];
}
} else {
throw new ArgumentError(
`Invalid mode is specified ${mode}`,
);
}
return frameDataCache[taskID].frameBuffer.require(frame, taskID, isPlaying, step);
}
frameDataCache[taskID][frame] = new FrameData(size.width, size.height, taskID, frame);
function getRanges(taskID) {
if (!(taskID in frameDataCache)) {
return {
decoded: [],
buffered: [],
};
}
return frameDataCache[taskID][frame];
return {
decoded: frameDataCache[taskID].provider.cachedFrames,
buffered: frameDataCache[taskID].frameBuffer.cachedFrames(),
};
}
module.exports = {
FrameData,
getFrame,
getRanges,
getPreview,
};
})();

@ -14,6 +14,7 @@
} = require('./exceptions');
const store = require('store');
const config = require('./config');
const DownloadWorker = require('./download.worker');
function generateError(errorData) {
if (errorData.response) {
@ -26,12 +27,66 @@
return new ServerError(message, 0);
}
class WorkerWrappedAxios {
constructor() {
const worker = new DownloadWorker();
const requests = {};
let requestId = 0;
worker.onmessage = (e) => {
if (e.data.id in requests) {
if (e.data.isSuccess) {
requests[e.data.id].resolve(e.data.responseData);
} else {
requests[e.data.id].reject(e.data.error);
}
delete requests[e.data.id];
}
};
worker.onerror = (e) => {
if (e.data.id in requests) {
requests[e.data.id].reject(e);
delete requests[e.data.id];
}
};
function getRequestId() {
return requestId++;
}
async function get(url, requestConfig) {
return new Promise((resolve, reject) => {
const newRequestId = getRequestId();
requests[newRequestId] = {
resolve,
reject,
};
worker.postMessage({
url,
config: requestConfig,
id: newRequestId,
});
});
}
Object.defineProperties(this, Object.freeze({
get: {
value: get,
writable: false,
},
}));
}
}
class ServerProxy {
constructor() {
const Axios = require('axios');
Axios.defaults.withCredentials = true;
Axios.defaults.xsrfHeaderName = 'X-CSRFTOKEN';
Axios.defaults.xsrfCookieName = 'csrftoken';
const workerAxios = new WorkerWrappedAxios();
let token = store.get('token');
if (token) {
@ -275,7 +330,7 @@
});
}
async function createTask(taskData, files, onUpdate) {
async function createTask(taskSpec, taskDataSpec, onUpdate) {
const { backendAPI } = config;
async function wait(id) {
@ -315,12 +370,14 @@
});
}
const batchOfFiles = new FormData();
for (const key in files) {
if (Object.prototype.hasOwnProperty.call(files, key)) {
for (let i = 0; i < files[key].length; i++) {
batchOfFiles.append(`${key}[${i}]`, files[key][i]);
}
const taskData = new FormData();
for (const [key, value] of Object.entries(taskDataSpec)) {
if (Array.isArray(value)) {
value.forEach((element, idx) => {
taskData.append(`${key}[${idx}]`, element);
});
} else {
taskData.set(key, value);
}
}
@ -328,7 +385,7 @@
onUpdate('The task is being created on the server..');
try {
response = await Axios.post(`${backendAPI}/tasks`, JSON.stringify(taskData), {
response = await Axios.post(`${backendAPI}/tasks`, JSON.stringify(taskSpec), {
proxy: config.proxy,
headers: {
'Content-Type': 'application/json',
@ -340,7 +397,7 @@
onUpdate('The data is being uploaded to the server..');
try {
await Axios.post(`${backendAPI}/tasks/${response.data.id}/data`, batchOfFiles, {
await Axios.post(`${backendAPI}/tasks/${response.data.id}/data`, taskData, {
proxy: config.proxy,
});
} catch (errorData) {
@ -435,8 +492,7 @@
let response = null;
try {
// TODO: change 0 frame to preview
response = await Axios.get(`${backendAPI}/tasks/${tid}/frames/0`, {
response = await Axios.get(`${backendAPI}/tasks/${tid}/data?type=preview`, {
proxy: config.proxy,
responseType: 'blob',
});
@ -451,20 +507,23 @@
return response.data;
}
async function getData(tid, frame) {
async function getData(tid, chunk) {
const { backendAPI } = config;
let response = null;
try {
response = await Axios.get(`${backendAPI}/tasks/${tid}/frames/${frame}`, {
proxy: config.proxy,
responseType: 'blob',
});
response = await workerAxios.get(
`${backendAPI}/tasks/${tid}/data?type=chunk&number=${chunk}&quality=compressed`,
{
proxy: config.proxy,
responseType: 'arraybuffer',
},
);
} catch (errorData) {
throw generateError(errorData);
}
return response.data;
return response;
}
async function getMeta(tid) {
@ -472,7 +531,7 @@
let response = null;
try {
response = await Axios.get(`${backendAPI}/tasks/${tid}/frames/meta`, {
response = await Axios.get(`${backendAPI}/tasks/${tid}/data/meta`, {
proxy: config.proxy,
});
} catch (errorData) {

@ -11,7 +11,7 @@
const PluginRegistry = require('./plugins');
const loggerStorage = require('./logger-storage');
const serverProxy = require('./server-proxy');
const { getFrame, getPreview } = require('./frames');
const { getFrame, getRanges, getPreview } = require('./frames');
const { ArgumentError } = require('./exceptions');
const { TaskStatus } = require('./enums');
const { Label } = require('./labels');
@ -113,9 +113,14 @@
}),
frames: Object.freeze({
value: {
async get(frame) {
async get(frame, isPlaying = false, step = 1) {
const result = await PluginRegistry
.apiWrapper.call(this, prototype.frames.get, frame);
.apiWrapper.call(this, prototype.frames.get, frame, isPlaying, step);
return result;
},
async ranges() {
const result = await PluginRegistry
.apiWrapper.call(this, prototype.frames.ranges);
return result;
},
async preview() {
@ -416,8 +421,10 @@
* @async
* @throws {module:API.cvat.exceptions.PluginError}
* @throws {module:API.cvat.exceptions.ServerError}
* @throws {module:API.cvat.exceptions.DataError}
* @throws {module:API.cvat.exceptions.ArgumentError}
*/
/**
* Get the first frame of a task for preview
* @method preview
@ -430,6 +437,15 @@
* @throws {module:API.cvat.exceptions.ArgumentError}
*/
/**
* Returns the ranges of cached frames
* @method ranges
* @memberof Session.frames
* @returns {Array{string}}
* @instance
* @async
*/
/**
* Namespace is used for an interaction with logs
* @namespace logger
@ -692,6 +708,7 @@
this.frames = {
get: Object.getPrototypeOf(this).frames.get.bind(this),
ranges: Object.getPrototypeOf(this).frames.ranges.bind(this),
preview: Object.getPrototypeOf(this).frames.preview.bind(this),
};
@ -755,6 +772,10 @@
start_frame: undefined,
stop_frame: undefined,
frame_filter: undefined,
data_chunk_size: undefined,
data_compressed_chunk_type: undefined,
data_original_chunk_type: undefined,
use_zip_chunks: undefined,
};
for (const property in data) {
@ -992,6 +1013,24 @@
data.image_quality = quality;
},
},
/**
* @name useZipChunks
* @type {boolean}
* @memberof module:API.cvat.classes.Task
* @instance
* @throws {module:API.cvat.exceptions.ArgumentError}
*/
useZipChunks: {
get: () => data.use_zip_chunks,
set: (useZipChunks) => {
if (typeof (useZipChunks) !== 'boolean') {
throw new ArgumentError(
'Value must be a boolean',
);
}
data.use_zip_chunks = useZipChunks;
},
},
/**
* After task has been created value can be appended only.
* @name labels
@ -1173,6 +1212,21 @@
data.frame_filter = filter;
},
},
dataChunkSize: {
get: () => data.data_chunk_size,
set: (chunkSize) => {
if (typeof (chunkSize) !== 'number' || chunkSize < 1) {
throw new ArgumentError(
`Chunk size must be a positive number, but got "${chunkSize}".`,
);
}
data.data_chunk_size = chunkSize;
},
},
dataChunkType: {
get: () => data.data_compressed_chunk_type,
},
}));
// When we call a function, for example: task.annotations.get()
@ -1206,6 +1260,7 @@
this.frames = {
get: Object.getPrototypeOf(this).frames.get.bind(this),
ranges: Object.getPrototypeOf(this).frames.ranges.bind(this),
preview: Object.getPrototypeOf(this).frames.preview.bind(this),
};
@ -1297,7 +1352,7 @@
);
};
Job.prototype.frames.get.implementation = async function (frame) {
Job.prototype.frames.get.implementation = async function (frame, isPlaying, step) {
if (!Number.isInteger(frame) || frame < 0) {
throw new ArgumentError(
`Frame must be a positive integer. Got: "${frame}"`,
@ -1310,13 +1365,25 @@
);
}
const frameData = await getFrame(this.task.id, this.task.mode, frame);
const frameData = await getFrame(
this.task.id,
this.task.dataChunkSize,
this.task.dataChunkType,
this.task.mode,
frame,
this.startFrame,
this.stopFrame,
isPlaying,
step,
);
return frameData;
};
Job.prototype.frames.preview.implementation = async function () {
const frameData = await getPreview(this.task.id);
return frameData;
Job.prototype.frames.ranges.implementation = async function () {
const rangesData = await getRanges(
this.task.id,
);
return rangesData;
};
// TODO: Check filter for annotations
@ -1473,39 +1540,44 @@
return this;
}
const taskData = {
const taskSpec = {
name: this.name,
labels: this.labels.map((el) => el.toJSON()),
image_quality: this.imageQuality,
z_order: Boolean(this.zOrder),
};
if (typeof (this.bugTracker) !== 'undefined') {
taskData.bug_tracker = this.bugTracker;
taskSpec.bug_tracker = this.bugTracker;
}
if (typeof (this.segmentSize) !== 'undefined') {
taskData.segment_size = this.segmentSize;
taskSpec.segment_size = this.segmentSize;
}
if (typeof (this.overlap) !== 'undefined') {
taskData.overlap = this.overlap;
taskSpec.overlap = this.overlap;
}
const taskDataSpec = {
client_files: this.clientFiles,
server_files: this.serverFiles,
remote_files: this.remoteFiles,
image_quality: this.imageQuality,
use_zip_chunks: this.useZipChunks,
};
if (typeof (this.startFrame) !== 'undefined') {
taskData.start_frame = this.startFrame;
taskDataSpec.start_frame = this.startFrame;
}
if (typeof (this.stopFrame) !== 'undefined') {
taskData.stop_frame = this.stopFrame;
taskDataSpec.stop_frame = this.stopFrame;
}
if (typeof (this.frameFilter) !== 'undefined') {
taskData.frame_filter = this.frameFilter;
taskDataSpec.frame_filter = this.frameFilter;
}
if (typeof (this.dataChunkSize) !== 'undefined') {
taskDataSpec.chunk_size = this.dataChunkSize;
}
const taskFiles = {
client_files: this.clientFiles,
server_files: this.serverFiles,
remote_files: this.remoteFiles,
};
const task = await serverProxy.tasks.createTask(taskData, taskFiles, onUpdate);
const task = await serverProxy.tasks.createTask(taskSpec, taskDataSpec, onUpdate);
return new Task(task);
};
@ -1514,7 +1586,7 @@
return result;
};
Task.prototype.frames.get.implementation = async function (frame) {
Task.prototype.frames.get.implementation = async function (frame, isPlaying, step) {
if (!Number.isInteger(frame) || frame < 0) {
throw new ArgumentError(
`Frame must be a positive integer. Got: "${frame}"`,
@ -1527,10 +1599,32 @@
);
}
const result = await getFrame(this.id, this.mode, frame);
const result = await getFrame(
this.id,
this.dataChunkSize,
this.dataChunkType,
this.mode,
frame,
0,
this.size - 1,
isPlaying,
step,
);
return result;
};
Job.prototype.frames.preview.implementation = async function () {
const frameData = await getPreview(this.task.id);
return frameData;
};
Task.prototype.frames.ranges.implementation = async function () {
const rangesData = await getRanges(
this.id,
);
return rangesData;
};
Task.prototype.frames.preview.implementation = async function () {
const frameData = await getPreview(this.id);
return frameData;

@ -2522,78 +2522,126 @@ const taskAnnotationsDummyData = {
const jobAnnotationsDummyData = JSON.parse(JSON.stringify(taskAnnotationsDummyData));
const frameMetaDummyData = {
1: [{
"width": 1920,
"height": 1080
}, {
"width": 1600,
"height": 1143
}, {
"width": 1600,
"height": 859
}, {
"width": 3840,
"height": 2160
}, {
"width": 2560,
"height": 1920
}, {
"width": 1920,
"height": 1080
}, {
"width": 1920,
"height": 1080
}, {
"width": 700,
"height": 453
}, {
"width": 1920,
"height": 1200
}],
2: [{
"width": 1920,
"height": 1080
}],
3: [{
"width": 1888,
"height": 1408
}],
100: [{
"width": 1920,
"height": 1080
}, {
"width": 1600,
"height": 1143
}, {
"width": 1600,
"height": 859
}, {
"width": 3840,
"height": 2160
}, {
"width": 2560,
"height": 1920
}, {
"width": 1920,
"height": 1080
}, {
"width": 1920,
"height": 1080
}, {
"width": 700,
"height": 453
}, {
"width": 1920,
"height": 1200
}],
101: [{
"width": 1888,
"height": 1408
}],
102: [{
"width":1920,
"height":1080
}],
1: {
"chunk_size": 36,
"size": 9,
"image_quality": 95,
"start_frame": 0,
"stop_frame": 8,
"frame_filter": "",
"frames":[{
"width": 1920,
"height": 1080
}, {
"width": 1600,
"height": 1143
}, {
"width": 1600,
"height": 859
}, {
"width": 3840,
"height": 2160
}, {
"width": 2560,
"height": 1920
}, {
"width": 1920,
"height": 1080
}, {
"width": 1920,
"height": 1080
}, {
"width": 700,
"height": 453
}, {
"width": 1920,
"height": 1200
}],
},
2: {
"chunk_size": 36,
"size": 75,
"image_quality": 50,
"start_frame": 0,
"stop_frame": 74,
"frame_filter": "",
"frames": [{
"width": 1920,
"height": 1080
}],
},
3: {
"chunk_size": 36,
"size": 5002,
"image_quality": 50,
"start_frame": 0,
"stop_frame": 5001,
"frame_filter": "",
"frames": [{
"width": 1888,
"height": 1408
}],
},
100: {
"chunk_size": 36,
"size": 9,
"image_quality": 50,
"start_frame": 0,
"stop_frame": 8,
"frame_filter": "",
"frames": [{
"width": 1920,
"height": 1080
}, {
"width": 1600,
"height": 1143
}, {
"width": 1600,
"height": 859
}, {
"width": 3840,
"height": 2160
}, {
"width": 2560,
"height": 1920
}, {
"width": 1920,
"height": 1080
}, {
"width": 1920,
"height": 1080
}, {
"width": 700,
"height": 453
}, {
"width": 1920,
"height": 1200
}],
},
101: {
"chunk_size": 36,
"size": 5002,
"image_quality": 50,
"start_frame": 0,
"stop_frame": 5001,
"frame_filter": "",
"frames": [{
"width": 1888,
"height": 1408
}],
},
102: {
"chunk_size": 36,
"size": 1,
"image_quality": 50,
"start_frame": 0,
"stop_frame": 0,
"frame_filter": "",
"frames": [{
"width":1920,
"height":1080
}],
},
}
module.exports = {
@ -2606,3 +2654,4 @@ module.exports = {
frameMetaDummyData,
formatsDummyData,
}

@ -46,13 +46,33 @@ const webConfig = {
options: {
presets: [
['@babel/preset-env', {
targets: '> 2.5%', // https://github.com/browserslist/browserslist
targets: '> 2.5%',
}],
],
sourceType: 'unambiguous',
},
},
}],
}, {
test: /3rdparty\/.*\.worker\.js$/,
use: {
loader: 'worker-loader',
options: {
publicPath: '/static/engine/js/3rdparty/',
name: '[name].js',
},
},
}, {
test: /\.worker\.js$/,
exclude: /3rdparty/,
use: {
loader: 'worker-loader',
options: {
publicPath: '/static/engine/js/',
name: '[name].js',
},
},
},
],
},
};

@ -0,0 +1 @@
**/3rdparty/*.js

@ -0,0 +1,56 @@
/*
* Copyright (C) 2018-2020 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
module.exports = {
"env": {
"node": false,
"browser": true,
"es6": true,
"jquery": true,
"qunit": true,
},
"parserOptions": {
"parser": "babel-eslint",
"sourceType": "module",
"ecmaVersion": 2018,
},
"plugins": [
"security",
"no-unsanitized",
"no-unsafe-innerhtml",
],
"extends": [
"eslint:recommended",
"plugin:security/recommended",
"plugin:no-unsanitized/DOM",
"airbnb-base",
],
"rules": {
"no-await-in-loop": [0],
"global-require": [0],
"no-new": [0],
"class-methods-use-this": [0],
"no-restricted-properties": [0, {
"object": "Math",
"property": "pow",
}],
"no-plusplus": [0],
"no-param-reassign": [0],
"no-underscore-dangle": ["error", { "allowAfterThis": true }],
"no-restricted-syntax": [0, {"selector": "ForOfStatement"}],
"no-continue": [0],
"no-unsafe-innerhtml/no-unsafe-innerhtml": 1,
// This rule is mainly relevant for user input data in the Node.js environment.
"security/detect-object-injection": 0,
"indent": ["warn", 4],
"no-useless-constructor": 0,
"func-names": [0],
"valid-typeof": [0],
"no-console": [0], // this rule deprecates console.log, console.warn etc. because "it is not good in production code"
"max-classes-per-file": [0],
"quotes": ["warn", "single"],
},
};

@ -0,0 +1 @@
dist

@ -0,0 +1,7 @@
# cvat-data module
```bash
npm run build # build with minification
npm run build -- --mode=development # build without minification
npm run server # run debug server
```
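The module's central export is a `FrameProvider` that decodes server chunks into frames (it is consumed by `cvat-core/src/frames.js`). The chunk arithmetic behind that mapping can be sketched as below; the helper names are illustrative, not part of the module's public API:

```javascript
// Illustrative sketch of the chunk arithmetic used when requesting frame data.
// With the default chunk size of 36, frame 40 lives in chunk 1 (frames 36..71).
function frameToChunk(frameNumber, chunkSize) {
    return Math.floor(frameNumber / chunkSize);
}

// Returns the [start, stop] frame range covered by the chunk that contains
// frameNumber, clamped so the last chunk does not run past the final frame.
function chunkRange(frameNumber, chunkSize, stopFrame) {
    const start = Math.floor(frameNumber / chunkSize) * chunkSize;
    const stop = Math.min(stopFrame, start + chunkSize - 1);
    return [start, stop];
}
```

For a 75-frame task (`stopFrame = 74`), frame 74 falls into chunk 2, which covers only frames 72..74 rather than a full 36-frame block.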

File diff suppressed because it is too large Load Diff

@ -0,0 +1,34 @@
{
"name": "cvat-data",
"version": "0.1.0",
"description": "",
"main": "src/js/cvat-data.js",
"devDependencies": {
"@babel/cli": "^7.4.4",
"@babel/core": "^7.4.4",
"@babel/preset-env": "^7.4.4",
"babel-loader": "^8.0.6",
"copy-webpack-plugin": "^5.0.5",
"eslint": "^6.4.0",
"eslint-config-airbnb-base": "^14.0.0",
"eslint-plugin-import": "^2.18.2",
"eslint-plugin-no-unsafe-innerhtml": "^1.0.16",
"eslint-plugin-no-unsanitized": "^3.0.2",
"eslint-plugin-security": "^1.4.0",
"nodemon": "^1.19.2",
"webpack": "^4.39.3",
"webpack-cli": "^3.3.7",
"worker-loader": "^2.0.0"
},
"dependencies": {
"async-mutex": "^0.1.4",
"jszip": "3.1.5"
},
"scripts": {
"patch": "cd src/js && patch --dry-run --forward -p0 < 3rdparty_patch.diff >> /dev/null && patch -p0 < 3rdparty_patch.diff; true",
"build": "npm run patch; webpack --config ./webpack.config.js",
"server": "npm run patch; nodemon --watch config --exec 'webpack-dev-server --config ./webpack.config.js --mode=development --open'"
},
"author": "Intel",
"license": "MIT"
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@ -0,0 +1,88 @@
## 3rdparty components
These files are from the [Broadway.js](https://github.com/mbebenita/Broadway) repository:
- Decoder.js
- mp4.js
- avc.wasm
### Why do we store them here?
The authors don't provide an npm package, so we store these components in our repository.
We use this dependency to decode video chunks from a server and split them into frames on the client side.
We also need to run this package in a Node.js environment (for example, for debugging or running unit tests).
However, there is no way to do that out of the box (even with a synthetic browser environment such as the one provided by the ``browser-env`` package).
For example, there are issues with canvas usage (webpack doesn't work with the binary canvas package for Node.js), among others.
So we decided to maintain a patch file for this library.
It slightly modifies the source code to support our use case.
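The decoder consumes raw H.264 NAL units. As an illustration of the data it works with, here is a small standalone sketch (not part of the library) of splitting an AVC sample into NAL units by their 4-byte big-endian length prefixes, the same layout that `getSampleNALUnits` in `mp4.js` parses:

```javascript
// Each NAL unit in an AVC sample is prefixed with its length as a
// 4-byte big-endian integer; return the units without the prefixes.
function splitNALUnits(bytes) {
    const units = [];
    let offset = 0;
    while (offset + 4 <= bytes.length) {
        const length = ((bytes[offset] << 24) | (bytes[offset + 1] << 16)
            | (bytes[offset + 2] << 8) | bytes[offset + 3]) >>> 0;
        units.push(bytes.subarray(offset + 4, offset + 4 + length));
        offset += 4 + length;
    }
    return units;
}
```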
### How to build avc.wasm and Decoder.js
1. Clone Emscripten SDK, install and activate the latest fastcomp SDK:
```sh
git clone https://github.com/emscripten-core/emsdk.git && cd emsdk
```
```sh
./emsdk install latest-fastcomp
```
```sh
./emsdk activate latest-fastcomp
```
1. Clone Broadway.js
```sh
git clone https://github.com/mbebenita/Broadway.git && cd Broadway/Decoder
```
1. Edit `make.py`:
- Remove or comment the following options:
`'-s', 'NO_BROWSER=1',`\
`'-s', 'PRECISE_I64_MATH=0',`
- Remove `"HEAP8", "HEAP16", "HEAP32"` from the `EXPORTED_FUNCTIONS` list.
- Increase the total memory to make it possible to decode 4K videos
(or try to enable `ALLOW_MEMORY_GROWTH`, but this option has not been tested):\
`'-s', 'TOTAL_MEMORY=' + str(100*1024*1024),`
- Add the following options:\
`'-s', "ENVIRONMENT='worker'",`\
`'-s', 'WASM=1',`
1. Activate emsdk environment and build Broadway.js:
```sh
. /tmp/emsdk/emsdk_env.sh
```
```sh
python2 make.py
```
1. Copy the following files to the cvat-data 3rdparty source folder:
```sh
cd ..
```
```sh
cp Player/avc.wasm Player/Decoder.js Player/mp4.js <CVAT_FOLDER>/cvat-data/src/js/3rdparty
```
### How to work with the patch file
```bash
# from cvat-data/src/js
cp -r 3rdparty 3rdparty_edited
# edit the files in 3rdparty_edited as needed
diff -u 3rdparty 3rdparty_edited/ > 3rdparty_patch.diff
patch -p0 < 3rdparty_patch.diff # apply patch from cvat-data/src/js
```
These files have also been marked so that git ignores their changes in all future revisions:
```bash
# from cvat-data dir
git update-index --skip-worktree src/js/3rdparty/*.js
```
This behaviour can be reset with:
```bash
# from cvat-data dir
git update-index --no-skip-worktree src/js/3rdparty/*.js
```
[Stackoverflow issue](https://stackoverflow.com/questions/4348590/how-can-i-make-git-ignore-future-revisions-to-a-file)

Binary file not shown.

@ -0,0 +1,977 @@
module.exports = (function(){
'use strict';
function assert(condition, message) {
if (!condition) {
error(message);
}
};
/**
* Represents a 2-dimensional size value.
*/
var Size = (function size() {
function constructor(w, h) {
this.w = w;
this.h = h;
}
constructor.prototype = {
toString: function () {
return "(" + this.w + ", " + this.h + ")";
},
getHalfSize: function() {
return new Size(this.w >>> 1, this.h >>> 1);
},
length: function() {
return this.w * this.h;
}
};
return constructor;
})();
var Bytestream = (function BytestreamClosure() {
function constructor(arrayBuffer, start, length) {
this.bytes = new Uint8Array(arrayBuffer);
this.start = start || 0;
this.pos = this.start;
this.end = (start + length) || this.bytes.length;
}
constructor.prototype = {
get length() {
return this.end - this.start;
},
get position() {
return this.pos;
},
get remaining() {
return this.end - this.pos;
},
readU8Array: function (length) {
if (this.pos > this.end - length)
return null;
var res = this.bytes.subarray(this.pos, this.pos + length);
this.pos += length;
return res;
},
readU32Array: function (rows, cols, names) {
cols = cols || 1;
if (this.pos > this.end - (rows * cols) * 4)
return null;
if (cols == 1) {
var array = new Uint32Array(rows);
for (var i = 0; i < rows; i++) {
array[i] = this.readU32();
}
return array;
} else {
var array = new Array(rows);
for (var i = 0; i < rows; i++) {
var row = null;
if (names) {
row = {};
for (var j = 0; j < cols; j++) {
row[names[j]] = this.readU32();
}
} else {
row = new Uint32Array(cols);
for (var j = 0; j < cols; j++) {
row[j] = this.readU32();
}
}
array[i] = row;
}
return array;
}
},
read8: function () {
return this.readU8() << 24 >> 24;
},
readU8: function () {
if (this.pos >= this.end)
return null;
return this.bytes[this.pos++];
},
read16: function () {
return this.readU16() << 16 >> 16;
},
readU16: function () {
if (this.pos >= this.end - 1)
return null;
var res = this.bytes[this.pos + 0] << 8 | this.bytes[this.pos + 1];
this.pos += 2;
return res;
},
read24: function () {
return this.readU24() << 8 >> 8;
},
readU24: function () {
var pos = this.pos;
var bytes = this.bytes;
if (pos > this.end - 3)
return null;
var res = bytes[pos + 0] << 16 | bytes[pos + 1] << 8 | bytes[pos + 2];
this.pos += 3;
return res;
},
peek32: function (advance) {
var pos = this.pos;
var bytes = this.bytes;
if (pos > this.end - 4)
return null;
var res = bytes[pos + 0] << 24 | bytes[pos + 1] << 16 | bytes[pos + 2] << 8 | bytes[pos + 3];
if (advance) {
this.pos += 4;
}
return res;
},
read32: function () {
return this.peek32(true);
},
readU32: function () {
return this.peek32(true) >>> 0;
},
read4CC: function () {
var pos = this.pos;
if (pos > this.end - 4)
return null;
var res = "";
for (var i = 0; i < 4; i++) {
res += String.fromCharCode(this.bytes[pos + i]);
}
this.pos += 4;
return res;
},
readFP16: function () {
return this.read32() / 65536;
},
readFP8: function () {
return this.read16() / 256;
},
readISO639: function () {
var bits = this.readU16();
var res = "";
for (var i = 0; i < 3; i++) {
var c = (bits >>> (2 - i) * 5) & 0x1f;
res += String.fromCharCode(c + 0x60);
}
return res;
},
readUTF8: function (length) {
var res = "";
for (var i = 0; i < length; i++) {
res += String.fromCharCode(this.readU8());
}
return res;
},
readPString: function (max) {
var len = this.readU8();
assert (len <= max);
var res = this.readUTF8(len);
this.reserved(max - len - 1, 0);
return res;
},
skip: function (length) {
this.seek(this.pos + length);
},
reserved: function (length, value) {
for (var i = 0; i < length; i++) {
assert (this.readU8() == value);
}
},
seek: function (index) {
if (index < 0 || index > this.end) {
error("Index out of bounds (bounds: [0, " + this.end + "], index: " + index + ").");
}
this.pos = index;
},
subStream: function (start, length) {
return new Bytestream(this.bytes.buffer, start, length);
}
};
return constructor;
})();
var PARANOID = true; // Heavy-weight assertions.
/**
* Reads an mp4 file and constructs a object graph that corresponds to the box/atom
* structure of the file. Mp4 files are based on the ISO Base Media format, which in
* turn is based on the Apple Quicktime format. The Quicktime spec is available at:
* http://developer.apple.com/library/mac/#documentation/QuickTime/QTFF. An mp4 spec
* also exists, but I cannot find it freely available.
*
* Mp4 files contain a tree of boxes (or atoms in Quicktime). The general structure
* is as follows (in a pseudo regex syntax):
*
* Box / Atom Structure:
*
* [size type [version flags] field* box*]
* <32> <4C> <--8--> <24-> <-?-> <?>
* <------------- box size ------------>
*
* The box size indicates the entire size of the box and its children, we can use it
* to skip over boxes that are of no interest. Each box has a type indicated by a
* four character code (4C), this describes how the box should be parsed and is also
* used as an object key name in the resulting box tree. For example, the expression:
* "moov.trak[0].mdia.minf" can be used to access individual boxes in the tree based
* on their 4C name. If two or more boxes with the same 4C name exist in a box, then
* an array is built with that name.
*
*/
var MP4Reader = (function reader() {
var BOX_HEADER_SIZE = 8;
var FULL_BOX_HEADER_SIZE = BOX_HEADER_SIZE + 4;
function constructor(stream) {
this.stream = stream;
this.tracks = {};
}
constructor.prototype = {
readBoxes: function (stream, parent) {
while (stream.peek32()) {
var child = this.readBox(stream);
if (child.type in parent) {
var old = parent[child.type];
if (!(old instanceof Array)) {
parent[child.type] = [old];
}
parent[child.type].push(child);
} else {
parent[child.type] = child;
}
}
},
readBox: function readBox(stream) {
var box = { offset: stream.position };
function readHeader() {
box.size = stream.readU32();
box.type = stream.read4CC();
}
function readFullHeader() {
box.version = stream.readU8();
box.flags = stream.readU24();
}
function remainingBytes() {
return box.size - (stream.position - box.offset);
}
function skipRemainingBytes () {
stream.skip(remainingBytes());
}
var readRemainingBoxes = function () {
var subStream = stream.subStream(stream.position, remainingBytes());
this.readBoxes(subStream, box);
stream.skip(subStream.length);
}.bind(this);
readHeader();
switch (box.type) {
case 'ftyp':
box.name = "File Type Box";
box.majorBrand = stream.read4CC();
box.minorVersion = stream.readU32();
box.compatibleBrands = new Array((box.size - 16) / 4);
for (var i = 0; i < box.compatibleBrands.length; i++) {
box.compatibleBrands[i] = stream.read4CC();
}
break;
case 'moov':
box.name = "Movie Box";
readRemainingBoxes();
break;
case 'mvhd':
box.name = "Movie Header Box";
readFullHeader();
assert (box.version == 0);
box.creationTime = stream.readU32();
box.modificationTime = stream.readU32();
box.timeScale = stream.readU32();
box.duration = stream.readU32();
box.rate = stream.readFP16();
box.volume = stream.readFP8();
stream.skip(10);
box.matrix = stream.readU32Array(9);
stream.skip(6 * 4);
box.nextTrackId = stream.readU32();
break;
case 'trak':
box.name = "Track Box";
readRemainingBoxes();
this.tracks[box.tkhd.trackId] = new Track(this, box);
break;
case 'tkhd':
box.name = "Track Header Box";
readFullHeader();
assert (box.version == 0);
box.creationTime = stream.readU32();
box.modificationTime = stream.readU32();
box.trackId = stream.readU32();
stream.skip(4);
box.duration = stream.readU32();
stream.skip(8);
box.layer = stream.readU16();
box.alternateGroup = stream.readU16();
box.volume = stream.readFP8();
stream.skip(2);
box.matrix = stream.readU32Array(9);
box.width = stream.readFP16();
box.height = stream.readFP16();
break;
case 'mdia':
box.name = "Media Box";
readRemainingBoxes();
break;
case 'mdhd':
box.name = "Media Header Box";
readFullHeader();
assert (box.version == 0);
box.creationTime = stream.readU32();
box.modificationTime = stream.readU32();
box.timeScale = stream.readU32();
box.duration = stream.readU32();
box.language = stream.readISO639();
stream.skip(2);
break;
case 'hdlr':
box.name = "Handler Reference Box";
readFullHeader();
stream.skip(4);
box.handlerType = stream.read4CC();
stream.skip(4 * 3);
var bytesLeft = box.size - 32;
if (bytesLeft > 0) {
box.name = stream.readUTF8(bytesLeft);
}
break;
case 'minf':
box.name = "Media Information Box";
readRemainingBoxes();
break;
case 'stbl':
box.name = "Sample Table Box";
readRemainingBoxes();
break;
case 'stsd':
box.name = "Sample Description Box";
readFullHeader();
box.sd = [];
var entries = stream.readU32();
readRemainingBoxes();
break;
case 'avc1':
stream.reserved(6, 0);
box.dataReferenceIndex = stream.readU16();
assert (stream.readU16() == 0); // Version
assert (stream.readU16() == 0); // Revision Level
stream.readU32(); // Vendor
stream.readU32(); // Temporal Quality
stream.readU32(); // Spatial Quality
box.width = stream.readU16();
box.height = stream.readU16();
box.horizontalResolution = stream.readFP16();
box.verticalResolution = stream.readFP16();
assert (stream.readU32() == 0); // Reserved
box.frameCount = stream.readU16();
box.compressorName = stream.readPString(32);
box.depth = stream.readU16();
assert (stream.readU16() == 0xFFFF); // Color Table Id
readRemainingBoxes();
break;
case 'mp4a':
stream.reserved(6, 0);
box.dataReferenceIndex = stream.readU16();
box.version = stream.readU16();
stream.skip(2);
stream.skip(4);
box.channelCount = stream.readU16();
box.sampleSize = stream.readU16();
box.compressionId = stream.readU16();
box.packetSize = stream.readU16();
box.sampleRate = stream.readU32() >>> 16;
// TODO: Parse other version levels.
assert (box.version == 0);
readRemainingBoxes();
break;
case 'esds':
box.name = "Elementary Stream Descriptor";
readFullHeader();
// TODO: Do we really need to parse this?
skipRemainingBytes();
break;
case 'avcC':
box.name = "AVC Configuration Box";
box.configurationVersion = stream.readU8();
box.avcProfileIndication = stream.readU8();
box.profileCompatibility = stream.readU8();
box.avcLevelIndication = stream.readU8();
box.lengthSizeMinusOne = stream.readU8() & 3;
assert (box.lengthSizeMinusOne == 3, "TODO");
var count = stream.readU8() & 31;
box.sps = [];
for (var i = 0; i < count; i++) {
box.sps.push(stream.readU8Array(stream.readU16()));
}
var count = stream.readU8() & 31;
box.pps = [];
for (var i = 0; i < count; i++) {
box.pps.push(stream.readU8Array(stream.readU16()));
}
skipRemainingBytes();
break;
case 'btrt':
box.name = "Bit Rate Box";
box.bufferSizeDb = stream.readU32();
box.maxBitrate = stream.readU32();
box.avgBitrate = stream.readU32();
break;
case 'stts':
box.name = "Decoding Time to Sample Box";
readFullHeader();
box.table = stream.readU32Array(stream.readU32(), 2, ["count", "delta"]);
break;
case 'stss':
box.name = "Sync Sample Box";
readFullHeader();
box.samples = stream.readU32Array(stream.readU32());
break;
case 'stsc':
box.name = "Sample to Chunk Box";
readFullHeader();
box.table = stream.readU32Array(stream.readU32(), 3,
["firstChunk", "samplesPerChunk", "sampleDescriptionId"]);
break;
case 'stsz':
box.name = "Sample Size Box";
readFullHeader();
box.sampleSize = stream.readU32();
var count = stream.readU32();
if (box.sampleSize == 0) {
box.table = stream.readU32Array(count);
}
break;
case 'stco':
box.name = "Chunk Offset Box";
readFullHeader();
box.table = stream.readU32Array(stream.readU32());
break;
case 'smhd':
box.name = "Sound Media Header Box";
readFullHeader();
box.balance = stream.readFP8();
stream.reserved(2, 0);
break;
case 'mdat':
box.name = "Media Data Box";
assert (box.size >= 8, "Cannot parse large media data yet.");
box.data = stream.readU8Array(remainingBytes());
break;
default:
skipRemainingBytes();
break;
};
return box;
},
read: function () {
var start = (new Date).getTime();
this.file = {};
this.readBoxes(this.stream, this.file);
console.info("Parsed stream in " + ((new Date).getTime() - start) + " ms");
},
traceSamples: function () {
var video = this.tracks[1];
var audio = this.tracks[2];
console.info("Video Samples: " + video.getSampleCount());
console.info("Audio Samples: " + audio.getSampleCount());
var vi = 0;
var ai = 0;
for (var i = 0; i < 100; i++) {
var vo = video.sampleToOffset(vi);
var ao = audio.sampleToOffset(ai);
var vs = video.sampleToSize(vi, 1);
var as = audio.sampleToSize(ai, 1);
if (vo < ao) {
console.info("V Sample " + vi + " Offset : " + vo + ", Size : " + vs);
vi ++;
} else {
console.info("A Sample " + ai + " Offset : " + ao + ", Size : " + as);
ai ++;
}
}
}
};
return constructor;
})();
var Track = (function track () {
function constructor(file, trak) {
this.file = file;
this.trak = trak;
}
constructor.prototype = {
getSampleSizeTable: function () {
return this.trak.mdia.minf.stbl.stsz.table;
},
getSampleCount: function () {
return this.getSampleSizeTable().length;
},
/**
* Computes the size of a range of samples, returns zero if length is zero.
*/
sampleToSize: function (start, length) {
var table = this.getSampleSizeTable();
var size = 0;
for (var i = start; i < start + length; i++) {
size += table[i];
}
return size;
},
/**
* Computes the chunk that contains the specified sample, as well as the offset of
* the sample in the computed chunk.
*/
sampleToChunk: function (sample) {
/* Samples are grouped in chunks which may contain a variable number of samples.
* The sample-to-chunk table in the stsc box describes how samples are arranged
* in chunks. Each table row corresponds to a set of consecutive chunks with the
* same number of samples and description ids. For example, the following table:
*
* +-------------+-------------------+----------------------+
* | firstChunk | samplesPerChunk | sampleDescriptionId |
* +-------------+-------------------+----------------------+
* | 1 | 3 | 23 |
* | 3 | 1 | 23 |
* | 5 | 1 | 24 |
* +-------------+-------------------+----------------------+
*
* describes 5 chunks with a total of (2 * 3) + (2 * 1) + (1 * 1) = 9 samples,
* each chunk containing samples 3, 3, 1, 1, 1 in chunk order, or
* chunks 1, 1, 1, 2, 2, 2, 3, 4, 5 in sample order.
*
* This function determines the chunk that contains a specified sample by iterating
* over every entry in the table. It also returns the position of the sample in the
* chunk which can be used to compute the sample's exact position in the file.
*
* TODO: Determine if we should memoize this function.
*/
var table = this.trak.mdia.minf.stbl.stsc.table;
if (table.length === 1) {
var row = table[0];
assert (row.firstChunk === 1);
return {
index: Math.floor(sample / row.samplesPerChunk),
offset: sample % row.samplesPerChunk
};
}
var totalChunkCount = 0;
for (var i = 0; i < table.length; i++) {
var row = table[i];
if (i > 0) {
var previousRow = table[i - 1];
var previousChunkCount = row.firstChunk - previousRow.firstChunk;
var previousSampleCount = previousRow.samplesPerChunk * previousChunkCount;
if (sample >= previousSampleCount) {
sample -= previousSampleCount;
if (i == table.length - 1) {
return {
index: totalChunkCount + previousChunkCount + Math.floor(sample / row.samplesPerChunk),
offset: sample % row.samplesPerChunk
};
}
} else {
return {
index: totalChunkCount + Math.floor(sample / previousRow.samplesPerChunk),
offset: sample % previousRow.samplesPerChunk
};
}
totalChunkCount += previousChunkCount;
}
}
assert(false);
},
chunkToOffset: function (chunk) {
var table = this.trak.mdia.minf.stbl.stco.table;
return table[chunk];
},
sampleToOffset: function (sample) {
var res = this.sampleToChunk(sample);
var offset = this.chunkToOffset(res.index);
return offset + this.sampleToSize(sample - res.offset, res.offset);
},
/**
* Computes the sample at the specified time.
*/
timeToSample: function (time) {
/* In the time-to-sample table samples are grouped by their duration. The count field
* indicates the number of consecutive samples that have the same duration. For example,
* the following table:
*
* +-------+-------+
* | count | delta |
* +-------+-------+
* | 4 | 3 |
* | 2 | 1 |
* | 3 | 2 |
* +-------+-------+
*
* describes 9 samples with a total time of (4 * 3) + (2 * 1) + (3 * 2) = 20.
*
* This function determines the sample at the specified time by iterating over every
* entry in the table.
*
* TODO: Determine if we should memoize this function.
*/
var table = this.trak.mdia.minf.stbl.stts.table;
var sample = 0;
for (var i = 0; i < table.length; i++) {
var delta = table[i].count * table[i].delta;
if (time >= delta) {
time -= delta;
sample += table[i].count;
} else {
return sample + Math.floor(time / table[i].delta);
}
}
},
/**
* Gets the total time of the track.
*/
getTotalTime: function () {
if (PARANOID) {
var table = this.trak.mdia.minf.stbl.stts.table;
var duration = 0;
for (var i = 0; i < table.length; i++) {
duration += table[i].count * table[i].delta;
}
assert (this.trak.mdia.mdhd.duration == duration);
}
return this.trak.mdia.mdhd.duration;
},
getTotalTimeInSeconds: function () {
return this.timeToSeconds(this.getTotalTime());
},
getTimeScale: function () {
return this.trak.mdia.mdhd.timeScale;
},
/**
* Converts time units to real time (seconds).
*/
timeToSeconds: function (time) {
return time / this.getTimeScale();
},
/**
* Converts real time (seconds) to time units.
*/
secondsToTime: function (seconds) {
return seconds * this.getTimeScale();
},
foo: function () {
/*
for (var i = 0; i < this.getSampleCount(); i++) {
var res = this.sampleToChunk(i);
console.info("Sample " + i + " -> " + res.index + " % " + res.offset +
" @ " + this.chunkToOffset(res.index) +
" @@ " + this.sampleToOffset(i));
}
console.info("Total Time: " + this.timeToSeconds(this.getTotalTime()));
var total = this.getTotalTimeInSeconds();
for (var i = 50; i < total; i += 0.1) {
// console.info("Time: " + i.toFixed(2) + " " + this.secondsToTime(i));
console.info("Time: " + i.toFixed(2) + " " + this.timeToSample(this.secondsToTime(i)));
}
*/
},
/**
* AVC samples contain one or more NAL units each of which have a length prefix.
* This function returns an array of NAL units without their length prefixes.
*/
getSampleNALUnits: function (sample) {
var bytes = this.file.stream.bytes;
var offset = this.sampleToOffset(sample);
var end = offset + this.sampleToSize(sample, 1);
var nalUnits = [];
while(end - offset > 0) {
var length = (new Bytestream(bytes.buffer, offset)).readU32();
nalUnits.push(bytes.subarray(offset + 4, offset + length + 4));
offset = offset + length + 4;
}
return nalUnits;
}
};
return constructor;
})();
// Only add setZeroTimeout to the window object, and hide everything
// else in a closure. (http://dbaron.org/log/20100309-faster-timeouts)
(function() {
var timeouts = [];
var messageName = "zero-timeout-message";
// Like setTimeout, but only takes a function argument. There's
// no time argument (always zero) and no arguments (you have to
// use a closure).
function setZeroTimeout(fn) {
timeouts.push(fn);
window.postMessage(messageName, "*");
}
function handleMessage(event) {
if (event.source == window && event.data == messageName) {
event.stopPropagation();
if (timeouts.length > 0) {
var fn = timeouts.shift();
fn();
}
}
}
window.addEventListener("message", handleMessage, true);
// Add the one thing we want added to the window object.
window.setZeroTimeout = setZeroTimeout;
})();
var MP4Player = (function reader() {
var defaultConfig = {
filter: "original",
filterHorLuma: "optimized",
filterVerLumaEdge: "optimized",
getBoundaryStrengthsA: "optimized"
};
function constructor(stream, useWorkers, webgl, render) {
this.stream = stream;
this.useWorkers = useWorkers;
this.webgl = webgl;
this.render = render;
this.statistics = {
videoStartTime: 0,
videoPictureCounter: 0,
windowStartTime: 0,
windowPictureCounter: 0,
fps: 0,
fpsMin: 1000,
fpsMax: -1000,
webGLTextureUploadTime: 0
};
this.onStatisticsUpdated = function () {};
this.avc = new Player({
useWorker: useWorkers,
reuseMemory: true,
webgl: webgl,
size: {
width: 640,
height: 368
}
});
this.webgl = this.avc.webgl;
var self = this;
this.avc.onPictureDecoded = function(){
updateStatistics.call(self);
};
this.canvas = this.avc.canvas;
}
function updateStatistics() {
var s = this.statistics;
s.videoPictureCounter += 1;
s.windowPictureCounter += 1;
var now = Date.now();
if (!s.videoStartTime) {
s.videoStartTime = now;
}
var videoElapsedTime = now - s.videoStartTime;
s.elapsed = videoElapsedTime / 1000;
if (videoElapsedTime < 1000) {
return;
}
if (!s.windowStartTime) {
s.windowStartTime = now;
return;
} else if ((now - s.windowStartTime) > 1000) {
var windowElapsedTime = now - s.windowStartTime;
var fps = (s.windowPictureCounter / windowElapsedTime) * 1000;
s.windowStartTime = now;
s.windowPictureCounter = 0;
if (fps < s.fpsMin) s.fpsMin = fps;
if (fps > s.fpsMax) s.fpsMax = fps;
s.fps = fps;
}
var fps = (s.videoPictureCounter / videoElapsedTime) * 1000;
s.fpsSinceStart = fps;
this.onStatisticsUpdated(this.statistics);
return;
}
constructor.prototype = {
readAll: function(callback) {
console.info("MP4Player::readAll()");
this.stream.readAll(null, function (buffer) {
this.reader = new MP4Reader(new Bytestream(buffer));
this.reader.read();
var video = this.reader.tracks[1];
this.size = new Size(video.trak.tkhd.width, video.trak.tkhd.height);
console.info("MP4Player::readAll(), length: " + this.reader.stream.length);
if (callback) callback();
}.bind(this));
},
play: function() {
var reader = this.reader;
if (!reader) {
this.readAll(this.play.bind(this));
return;
};
var video = reader.tracks[1];
var audio = reader.tracks[2];
var avc = reader.tracks[1].trak.mdia.minf.stbl.stsd.avc1.avcC;
var sps = avc.sps[0];
var pps = avc.pps[0];
/* Decode Sequence & Picture Parameter Sets */
this.avc.decode(sps);
this.avc.decode(pps);
/* Decode Pictures */
var pic = 0;
setTimeout(function foo() {
var avc = this.avc;
video.getSampleNALUnits(pic).forEach(function (nal) {
avc.decode(nal);
});
pic ++;
if (pic < 3000) {
setTimeout(foo.bind(this), 1);
};
}.bind(this), 1);
}
};
return constructor;
})();
var Broadway = (function broadway() {
function constructor(div) {
var src = div.attributes.src ? div.attributes.src.value : undefined;
var width = div.attributes.width ? div.attributes.width.value : 640;
var height = div.attributes.height ? div.attributes.height.value : 480;
var controls = document.createElement('div');
controls.setAttribute('style', "z-index: 100; position: absolute; bottom: 0px; background-color: rgba(0,0,0,0.8); height: 30px; width: 100%; text-align: left;");
this.info = document.createElement('div');
this.info.setAttribute('style', "font-size: 14px; font-weight: bold; padding: 6px; color: lime;");
controls.appendChild(this.info);
div.appendChild(controls);
var useWorkers = div.attributes.workers ? div.attributes.workers.value == "true" : false;
var render = div.attributes.render ? div.attributes.render.value == "true" : false;
var webgl = "auto";
if (div.attributes.webgl){
if (div.attributes.webgl.value == "true"){
webgl = true;
};
if (div.attributes.webgl.value == "false"){
webgl = false;
};
};
var infoStrPre = "Click canvas to load and play - ";
var infoStr = "";
if (useWorkers){
infoStr += "worker thread ";
}else{
infoStr += "main thread ";
};
this.player = new MP4Player(new Stream(src), useWorkers, webgl, render);
this.canvas = this.player.canvas;
this.canvas.onclick = function () {
this.play();
}.bind(this);
div.appendChild(this.canvas);
infoStr += " - webgl: " + this.player.webgl;
this.info.innerHTML = infoStrPre + infoStr;
this.score = null;
this.player.onStatisticsUpdated = function (statistics) {
if (statistics.videoPictureCounter % 10 != 0) {
return;
}
var info = "";
if (statistics.fps) {
info += " fps: " + statistics.fps.toFixed(2);
}
if (statistics.fpsSinceStart) {
info += " avg: " + statistics.fpsSinceStart.toFixed(2);
}
var scoreCutoff = 1200;
if (statistics.videoPictureCounter < scoreCutoff) {
this.score = scoreCutoff - statistics.videoPictureCounter;
} else if (statistics.videoPictureCounter == scoreCutoff) {
this.score = statistics.fpsSinceStart.toFixed(2);
}
// info += " score: " + this.score;
this.info.innerHTML = infoStr + info;
}.bind(this);
}
constructor.prototype = {
play: function () {
this.player.play();
}
};
return constructor;
})();
return {
Size,
Track,
MP4Reader,
MP4Player,
Bytestream,
Broadway,
}
})();

File diff suppressed because one or more lines are too long

@ -0,0 +1,350 @@
/*
* Copyright (C) 2019 Intel Corporation
* SPDX-License-Identifier: MIT
*/
/* global
require:true
*/
const { Mutex } = require('async-mutex');
// eslint-disable-next-line max-classes-per-file
const { MP4Reader, Bytestream } = require('./3rdparty/mp4');
const ZipDecoder = require('./unzip_imgs.worker');
const H264Decoder = require('./3rdparty/Decoder.worker');
const BlockType = Object.freeze({
MP4VIDEO: 'mp4video',
ARCHIVE: 'archive',
});
class FrameProvider {
constructor(blockType, blockSize, cachedBlockCount,
decodedBlocksCacheSize = 5, maxWorkerThreadCount = 2) {
this._frames = {};
this._cachedBlockCount = Math.max(1, cachedBlockCount); // number of stored blocks
this._decodedBlocksCacheSize = decodedBlocksCacheSize;
this._blocksRanges = [];
this._blocks = {};
this._blockSize = blockSize;
this._running = false;
this._blockType = blockType;
this._currFrame = -1;
this._requestedBlockDecode = null;
this._width = null;
this._height = null;
this._decodingBlocks = {};
this._decodeThreadCount = 0;
this._timerId = setTimeout(this._worker.bind(this), 100);
this._mutex = new Mutex();
this._promisedFrames = {};
this._maxWorkerThreadCount = maxWorkerThreadCount;
}
async _worker() {
if (this._requestedBlockDecode !== null
&& this._decodeThreadCount < this._maxWorkerThreadCount) {
await this.startDecode();
}
this._timerId = setTimeout(this._worker.bind(this), 100);
}
isChunkCached(start, end) {
return (`${start}:${end}` in this._blocksRanges);
}
/* This method removes extra data from the cache when it overflows */
async _cleanup() {
if (this._blocksRanges.length > this._cachedBlockCount) {
const shifted = this._blocksRanges.shift(); // get the oldest block
const [start, end] = shifted.split(':').map((el) => +el);
delete this._blocks[start / this._blockSize];
for (let i = start; i <= end; i++) {
delete this._frames[i];
}
}
// delete frames that are not in the area around the current frame
const distance = Math.floor(this._decodedBlocksCacheSize / 2);
for (let i = 0; i < this._blocksRanges.length; i++) {
const [start, end] = this._blocksRanges[i].split(':').map((el) => +el);
if (end < this._currFrame - distance * this._blockSize
|| start > this._currFrame + distance * this._blockSize) {
for (let j = start; j <= end; j++) {
delete this._frames[j];
}
}
}
}
async requestDecodeBlock(block, start, end, resolveCallback, rejectCallback) {
const release = await this._mutex.acquire();
try {
if (this._requestedBlockDecode !== null) {
if (start === this._requestedBlockDecode.start
&& end === this._requestedBlockDecode.end) {
this._requestedBlockDecode.resolveCallback = resolveCallback;
this._requestedBlockDecode.rejectCallback = rejectCallback;
} else if (this._requestedBlockDecode.rejectCallback) {
this._requestedBlockDecode.rejectCallback();
}
}
if (!(`${start}:${end}` in this._decodingBlocks)) {
this._requestedBlockDecode = {
block: block || this._blocks[Math.floor(start / this._blockSize)],
start,
end,
resolveCallback,
rejectCallback,
};
} else {
this._decodingBlocks[`${start}:${end}`].rejectCallback = rejectCallback;
this._decodingBlocks[`${start}:${end}`].resolveCallback = resolveCallback;
}
} finally {
release();
}
}
isRequestExist() {
return this._requestedBlockDecode !== null;
}
setRenderSize(width, height) {
this._width = width;
this._height = height;
}
/* Returns the frame from the collection if it is available, otherwise resolves with null */
async frame(frameNumber) {
this._currFrame = frameNumber;
return new Promise((resolve, reject) => {
if (frameNumber in this._frames) {
if (this._frames[frameNumber] !== null) {
resolve(this._frames[frameNumber]);
} else {
this._promisedFrames[frameNumber] = {
resolve,
reject,
};
}
} else {
resolve(null);
}
});
}
isNextChunkExists(frameNumber) {
const nextChunkNum = Math.floor(frameNumber / this._blockSize) + 1;
if (this._blocks[nextChunkNum] === 'loading') {
return true;
}
return nextChunkNum in this._blocks;
}
/*
Starts asynchronous decoding of a block of data
@param block - data from the server as is (a video chunk or an archive)
@param start {number} - the first frame of the block
@param end {number} - the last frame of the block
@param callback - callback
*/
setReadyToLoading(chunkNumber) {
this._blocks[chunkNumber] = 'loading';
}
static cropImage(imageBuffer, imageWidth, imageHeight, xOffset, yOffset, width, height) {
if (xOffset === 0 && width === imageWidth
&& yOffset === 0 && height === imageHeight) {
return new ImageData(new Uint8ClampedArray(imageBuffer), width, height);
}
const source = new Uint32Array(imageBuffer);
const bufferSize = width * height * 4;
const buffer = new ArrayBuffer(bufferSize);
const rgbaInt32 = new Uint32Array(buffer);
const rgbaInt8Clamped = new Uint8ClampedArray(buffer);
if (imageWidth === width) {
return new ImageData(
new Uint8ClampedArray(imageBuffer, yOffset * 4, bufferSize),
width,
height,
);
}
let writeIdx = 0;
for (let row = yOffset; row < height; row++) {
const start = row * imageWidth + xOffset;
rgbaInt32.set(source.subarray(start, start + width), writeIdx);
writeIdx += width;
}
return new ImageData(rgbaInt8Clamped, width, height);
}
async startDecode() {
const release = await this._mutex.acquire();
try {
const height = this._height;
const width = this._width;
const { start, end, block } = this._requestedBlockDecode;
this._blocksRanges.push(`${start}:${end}`);
this._decodingBlocks[`${start}:${end}`] = this._requestedBlockDecode;
this._requestedBlockDecode = null;
this._blocks[Math.floor((start + 1) / this._blockSize)] = block;
for (let i = start; i <= end; i++) {
this._frames[i] = null;
}
this._cleanup();
if (this._blockType === BlockType.MP4VIDEO) {
const worker = new H264Decoder();
let index = start;
worker.onmessage = (e) => {
if (e.data.consoleLog) { // ignore initialization message
return;
}
const scaleFactor = Math.ceil(this._height / e.data.height);
this._frames[index] = FrameProvider.cropImage(
e.data.buf, e.data.width, e.data.height, 0, 0,
Math.floor(width / scaleFactor), Math.floor(height / scaleFactor),
);
if (this._decodingBlocks[`${start}:${end}`].resolveCallback) {
this._decodingBlocks[`${start}:${end}`].resolveCallback(index);
}
if (index in this._promisedFrames) {
this._promisedFrames[index].resolve(this._frames[index]);
delete this._promisedFrames[index];
}
if (index === end) {
this._decodeThreadCount--;
delete this._decodingBlocks[`${start}:${end}`];
worker.terminate();
}
index++;
};
worker.onerror = (e) => {
worker.terminate();
this._decodeThreadCount--;
for (let i = index; i <= end; i++) {
if (i in this._promisedFrames) {
this._promisedFrames[i].reject();
delete this._promisedFrames[i];
}
}
if (this._decodingBlocks[`${start}:${end}`].rejectCallback) {
this._decodingBlocks[`${start}:${end}`].rejectCallback(Error(e));
}
delete this._decodingBlocks[`${start}:${end}`];
};
worker.postMessage({
type: 'Broadway.js - Worker init',
options: {
rgb: true,
reuseMemory: false,
},
});
const reader = new MP4Reader(new Bytestream(block));
reader.read();
const video = reader.tracks[1];
const avc = reader.tracks[1].trak.mdia.minf.stbl.stsd.avc1.avcC;
const sps = avc.sps[0];
const pps = avc.pps[0];
/* Decode Sequence & Picture Parameter Sets */
worker.postMessage({ buf: sps, offset: 0, length: sps.length });
worker.postMessage({ buf: pps, offset: 0, length: pps.length });
/* Decode Pictures */
for (let sample = 0; sample < video.getSampleCount(); sample++) {
video.getSampleNALUnits(sample).forEach((nal) => {
worker.postMessage({ buf: nal, offset: 0, length: nal.length });
});
}
this._decodeThreadCount++;
} else {
const worker = new ZipDecoder();
let index = start;
worker.onerror = (e) => {
for (let i = start; i <= end; i++) {
if (i in this._promisedFrames) {
this._promisedFrames[i].reject();
delete this._promisedFrames[i];
}
}
if (this._decodingBlocks[`${start}:${end}`].rejectCallback) {
this._decodingBlocks[`${start}:${end}`].rejectCallback(Error(e));
}
this._decodeThreadCount--;
worker.terminate();
};
worker.onmessage = (event) => {
this._frames[event.data.index] = event.data.data;
if (this._decodingBlocks[`${start}:${end}`].resolveCallback) {
this._decodingBlocks[`${start}:${end}`].resolveCallback(event.data.index);
}
if (event.data.index in this._promisedFrames) {
this._promisedFrames[event.data.index].resolve(
this._frames[event.data.index],
);
delete this._promisedFrames[event.data.index];
}
if (index === end) {
worker.terminate();
delete this._decodingBlocks[`${start}:${end}`];
this._decodeThreadCount--;
}
index++;
};
worker.postMessage({ block, start, end });
this._decodeThreadCount++;
}
} finally {
release();
}
}
get decodeThreadCount() {
return this._decodeThreadCount;
}
get decodedBlocksCacheSize() {
return this._decodedBlocksCacheSize;
}
/*
Method returns a list of cached frame ranges
as an array of strings like "start:end"
*/
get cachedFrames() {
return [...this._blocksRanges].sort(
(a, b) => a.split(':')[0] - b.split(':')[0],
);
}
}
module.exports = {
FrameProvider,
BlockType,
};

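The prefetch check in `isNextChunkExists` above reduces to simple chunk arithmetic: a frame belongs to chunk `floor(frame / blockSize)`, and prefetch looks at the chunk after that one. A minimal standalone sketch (function names are illustrative, not from the codebase):

```javascript
// Illustrative sketch of the chunk arithmetic used by FrameProvider.
function chunkNumber(frameNumber, blockSize) {
    return Math.floor(frameNumber / blockSize);
}

// The provider prefetches the chunk following the current frame's chunk.
function nextChunkNumber(frameNumber, blockSize) {
    return chunkNumber(frameNumber, blockSize) + 1;
}

// With the default of 36 frames per chunk, frames 0..35 share chunk 0.
console.log(chunkNumber(35, 36)); // 0
console.log(nextChunkNumber(35, 36)); // 1
```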
@ -0,0 +1,35 @@
/*
* Copyright (C) 2019 Intel Corporation
* SPDX-License-Identifier: MIT
*/
/* global
require:true
*/
const JSZip = require('jszip');
onmessage = (e) => {
const zip = new JSZip();
if (e.data) {
const { start, end, block } = e.data;
zip.loadAsync(block).then((_zip) => {
let index = start;
_zip.forEach((relativePath) => {
const fileIndex = index++;
if (fileIndex <= end) {
_zip.file(relativePath).async('blob').then((fileData) => {
createImageBitmap(fileData).then((img) => {
postMessage({
fileName: relativePath,
index: fileIndex,
data: img,
});
});
});
}
});
});
}
};

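The unzip worker above assigns frame indices by archive entry order: the i-th entry gets index `start + i`, and entries whose index exceeds `end` are skipped. That bookkeeping can be sketched in isolation (function name is illustrative, not from the codebase):

```javascript
// Illustrative sketch: map zip entry names to frame indices the same
// way the worker's forEach loop does, dropping entries past `end`.
function assignIndices(entryNames, start, end) {
    const assigned = [];
    let index = start;
    for (const name of entryNames) {
        const fileIndex = index++;
        if (fileIndex <= end) {
            assigned.push({ name, index: fileIndex });
        }
    }
    return assigned;
}

console.log(assignIndices(['000.jpg', '001.jpg'], 36, 37));
// [ { name: '000.jpg', index: 36 }, { name: '001.jpg', index: 37 } ]
```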
@ -0,0 +1,64 @@
/* global
require:true,
__dirname:true,
*/
const path = require('path');
const CopyPlugin = require('copy-webpack-plugin');
const cvatData = {
target: 'web',
mode: 'production',
entry: './src/js/cvat-data.js',
output: {
path: path.resolve(__dirname, 'dist'),
filename: 'cvat-data.min.js',
library: 'cvatData',
libraryTarget: 'window',
},
module: {
rules: [
{
test: /\.js$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
options: {
presets: [
['@babel/preset-env', {
targets: '> 2.5%', // https://github.com/browserslist/browserslist
}],
],
sourceType: 'unambiguous',
},
},
}, {
test: /\.worker\.js$/,
exclude: /3rdparty/,
use: {
loader: 'worker-loader',
options: {
publicPath: '/',
name: '[name].js',
},
},
}, {
test: /3rdparty\/.*\.worker\.js$/,
use: {
loader: 'worker-loader',
options: {
publicPath: '/3rdparty/',
name: '3rdparty/[name].js',
},
},
},
],
},
plugins: [
new CopyPlugin([
'./src/js/3rdparty/avc.wasm',
]),
],
};
module.exports = cvatData;

@ -1609,9 +1609,9 @@
"dev": true
},
"readable-stream": {
"version": "2.3.6",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.6.tgz",
"integrity": "sha512-tQtKA9WIAhBF3+VLAseyMqZeBjW0AHJoxOtYqSUZNJxauErmLbVm2FW1y+J/YA9dUrAC39ITejlZWhVIwawkKw==",
"version": "2.3.7",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz",
"integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==",
"dev": true,
"requires": {
"core-util-is": "~1.0.0",
@ -2929,6 +2929,65 @@
"toggle-selection": "^1.0.6"
}
},
"copy-webpack-plugin": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/copy-webpack-plugin/-/copy-webpack-plugin-5.1.1.tgz",
"integrity": "sha512-P15M5ZC8dyCjQHWwd4Ia/dm0SgVvZJMYeykVIVYXbGyqO4dWB5oyPHp9i7wjwo5LhtlhKbiBCdS2NvM07Wlybg==",
"dev": true,
"requires": {
"cacache": "^12.0.3",
"find-cache-dir": "^2.1.0",
"glob-parent": "^3.1.0",
"globby": "^7.1.1",
"is-glob": "^4.0.1",
"loader-utils": "^1.2.3",
"minimatch": "^3.0.4",
"normalize-path": "^3.0.0",
"p-limit": "^2.2.1",
"schema-utils": "^1.0.0",
"serialize-javascript": "^2.1.2",
"webpack-log": "^2.0.0"
},
"dependencies": {
"globby": {
"version": "7.1.1",
"resolved": "https://registry.npmjs.org/globby/-/globby-7.1.1.tgz",
"integrity": "sha1-+yzP+UAfhgCUXfral0QMypcrhoA=",
"dev": true,
"requires": {
"array-union": "^1.0.1",
"dir-glob": "^2.0.0",
"glob": "^7.1.2",
"ignore": "^3.3.5",
"pify": "^3.0.0",
"slash": "^1.0.0"
}
},
"ignore": {
"version": "3.3.10",
"resolved": "https://registry.npmjs.org/ignore/-/ignore-3.3.10.tgz",
"integrity": "sha512-Pgs951kaMm5GXP7MOvxERINe3gsaVjUWFm+UZPSq9xYriQAksyhg0csnS0KXSNRD5NmNdapXEpjxG49+AKh/ug==",
"dev": true
},
"pify": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz",
"integrity": "sha1-5aSs0sEB/fPZpNB/DbxNtJ3SgXY=",
"dev": true
},
"schema-utils": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz",
"integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==",
"dev": true,
"requires": {
"ajv": "^6.1.0",
"ajv-errors": "^1.0.0",
"ajv-keywords": "^3.1.0"
}
}
}
},
"core-js": {
"version": "2.6.10",
"resolved": "https://registry.npmjs.org/core-js/-/core-js-2.6.10.tgz",
@ -3455,6 +3514,32 @@
"randombytes": "^2.0.0"
}
},
"dir-glob": {
"version": "2.2.2",
"resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-2.2.2.tgz",
"integrity": "sha512-f9LBi5QWzIW3I6e//uxZoLBlUt9kcp66qo0sSCxL6YZKc75R1c4MFCoe/LaZiBGmgujvQdxc5Bn3QhfyvK5Hsw==",
"dev": true,
"requires": {
"path-type": "^3.0.0"
},
"dependencies": {
"path-type": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/path-type/-/path-type-3.0.0.tgz",
"integrity": "sha512-T2ZUsdZFHgA3u4e5PfPbjd7HDDpxPnQb5jN0SrDsjNSuVXHJqtwTnWqG0B1jZrgmJ/7lj1EmVIByWt1gxGkWvg==",
"dev": true,
"requires": {
"pify": "^3.0.0"
}
},
"pify": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz",
"integrity": "sha1-5aSs0sEB/fPZpNB/DbxNtJ3SgXY=",
"dev": true
}
}
},
"dns-equal": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/dns-equal/-/dns-equal-1.0.0.tgz",
@ -5756,13 +5841,13 @@
"dev": true
},
"globule": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/globule/-/globule-1.2.1.tgz",
"integrity": "sha512-g7QtgWF4uYSL5/dn71WxubOrS7JVGCnFPEnoeChJmBnyR9Mw8nGoEwOgJL/RC2Te0WhbsEUCejfH8SZNJ+adYQ==",
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/globule/-/globule-1.3.1.tgz",
"integrity": "sha512-OVyWOHgw29yosRHCHo7NncwR1hW5ew0W/UrvtwvjefVJeQ26q4/8r8FmPsSF1hJ93IgWkyv16pCTz6WblMzm/g==",
"dev": true,
"requires": {
"glob": "~7.1.1",
"lodash": "~4.17.10",
"lodash": "~4.17.12",
"minimatch": "~3.0.2"
}
},
@ -6538,13 +6623,10 @@
"dev": true
},
"is-finite": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/is-finite/-/is-finite-1.0.2.tgz",
"integrity": "sha1-zGZ3aVYCvlUO8R6LSqYwU0K20Ko=",
"dev": true,
"requires": {
"number-is-nan": "^1.0.0"
}
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/is-finite/-/is-finite-1.1.0.tgz",
"integrity": "sha512-cdyMtqX/BOqqNBBiKlIVkytNHm49MtMlYyn1zxzvJKWmFMlGzm+ry5BBfYyeY9YmNKbRSo/o7OX9w9ale0wg3w==",
"dev": true
},
"is-fullwidth-code-point": {
"version": "2.0.0",
@ -6702,9 +6784,9 @@
"dev": true
},
"js-base64": {
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/js-base64/-/js-base64-2.5.1.tgz",
"integrity": "sha512-M7kLczedRMYX4L8Mdh4MzyAMM9O5osx+4FcOQuTvr3A9F2D9S5JXheN0ewNbrvK2UatkTRhL5ejGmGSjNMiZuw==",
"version": "2.5.2",
"resolved": "https://registry.npmjs.org/js-base64/-/js-base64-2.5.2.tgz",
"integrity": "sha512-Vg8czh0Q7sFBSUMWWArX/miJeBWYBPpdU/3M/DKSaekLMqrqVPaedp+5mZhie/r0lgrcaYBfwXatEew6gwgiQQ==",
"dev": true
},
"js-levenshtein": {
@ -7662,9 +7744,9 @@
}
},
"node-sass": {
"version": "4.13.0",
"resolved": "https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz",
"integrity": "sha512-W1XBrvoJ1dy7VsvTAS5q1V45lREbTlZQqFbiHb3R3OTTCma0XBtuG6xZ6Z4506nR4lmHPTqVRwxT6KgtWC97CA==",
"version": "4.13.1",
"resolved": "https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz",
"integrity": "sha512-TTWFx+ZhyDx1Biiez2nB0L3YrCZ/8oHagaDalbuBSlqXgUPsdkUSzJsVxeDO9LtPB49+Fh3WQl3slABo6AotNw==",
"dev": true,
"requires": {
"async-foreach": "^0.1.3",
@ -10822,6 +10904,12 @@
"integrity": "sha1-tf3AjxKH6hF4Yo5BXiUTK3NkbG0=",
"dev": true
},
"slash": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/slash/-/slash-1.0.0.tgz",
"integrity": "sha1-xB8vbDn8FtHNF61LXYlhFK5HDVU=",
"dev": true
},
"slice-ansi": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/slice-ansi/-/slice-ansi-2.1.0.tgz",
@ -11202,9 +11290,9 @@
"dev": true
},
"readable-stream": {
"version": "2.3.6",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.6.tgz",
"integrity": "sha512-tQtKA9WIAhBF3+VLAseyMqZeBjW0AHJoxOtYqSUZNJxauErmLbVm2FW1y+J/YA9dUrAC39ITejlZWhVIwawkKw==",
"version": "2.3.7",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz",
"integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==",
"dev": true,
"requires": {
"core-util-is": "~1.0.0",
@ -11614,6 +11702,12 @@
"ajv-keywords": "^3.1.0"
}
},
"serialize-javascript": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-2.1.2.tgz",
"integrity": "sha512-rs9OggEUF0V4jUSecXazOYsLfu7OGK2qIn3c7IPBiffz32XniEp/TX9Xmc9LQfK2nQ2QKHvZ2oygKUGU0lG4jQ==",
"dev": true
},
"source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
@ -12671,6 +12765,28 @@
"errno": "~0.1.7"
}
},
"worker-loader": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/worker-loader/-/worker-loader-2.0.0.tgz",
"integrity": "sha512-tnvNp4K3KQOpfRnD20m8xltE3eWh89Ye+5oj7wXEEHKac1P4oZ6p9oTj8/8ExqoSBnk9nu5Pr4nKfQ1hn2APJw==",
"dev": true,
"requires": {
"loader-utils": "^1.0.0",
"schema-utils": "^0.4.0"
},
"dependencies": {
"schema-utils": {
"version": "0.4.7",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz",
"integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==",
"dev": true,
"requires": {
"ajv": "^6.1.0",
"ajv-keywords": "^3.1.0"
}
}
}
},
"wrap-ansi": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-5.1.0.tgz",

@ -23,6 +23,7 @@
"@typescript-eslint/parser": "^2.19.2",
"babel-loader": "^8.0.6",
"babel-plugin-import": "^1.12.2",
"copy-webpack-plugin": "^5.1.1",
"css-loader": "^3.2.0",
"eslint": "^6.8.0",
"eslint-config-airbnb-typescript": "^7.0.0",
@ -42,7 +43,8 @@
"typescript": "^3.7.3",
"webpack": "^4.42.1",
"webpack-cli": "^3.3.8",
"webpack-dev-server": "^3.8.0"
"webpack-dev-server": "^3.8.0",
"worker-loader": "^2.0.0"
},
"dependencies": {
"@types/react": "^16.9.2",

@ -657,7 +657,7 @@ export function switchPlay(playing: boolean): AnyAction {
};
}
export function changeFrameAsync(toFrame: number):
export function changeFrameAsync(toFrame: number, fillBuffer?: boolean, frameStep?: number):
ThunkAction<Promise<void>, {}, {}, AnyAction> {
return async (dispatch: ActionCreator<Dispatch>): Promise<void> => {
const state: CombinedState = getStore().getState();
@ -675,7 +675,13 @@ ThunkAction<Promise<void>, {}, {}, AnyAction> {
payload: {
number: state.annotation.player.frame.number,
data: state.annotation.player.frame.data,
filename: state.annotation.player.frame.filename,
delay: state.annotation.player.frame.delay,
changeTime: state.annotation.player.frame.changeTime,
states: state.annotation.annotations.states,
minZ: state.annotation.annotations.zLayer.min,
maxZ: state.annotation.annotations.zLayer.max,
curZ: state.annotation.annotations.zLayer.cur,
},
});
@ -694,7 +700,7 @@ ThunkAction<Promise<void>, {}, {}, AnyAction> {
to: toFrame,
},
);
const data = await job.frames.get(toFrame);
const data = await job.frames.get(toFrame, fillBuffer, frameStep);
const states = await job.annotations.get(toFrame, showAllInterpolationTracks, filters);
const [minZ, maxZ] = computeZRange(states);
const currentTime = new Date().getTime();
@ -720,21 +726,25 @@ ThunkAction<Promise<void>, {}, {}, AnyAction> {
payload: {
number: toFrame,
data,
filename: data.filename,
states,
minZ,
maxZ,
curZ: maxZ,
changeTime: currentTime + delay,
delay,
},
});
} catch (error) {
dispatch({
type: AnnotationActionTypes.CHANGE_FRAME_FAILED,
payload: {
number: toFrame,
error,
},
});
if (error !== 'not needed') {
dispatch({
type: AnnotationActionTypes.CHANGE_FRAME_FAILED,
payload: {
number: toFrame,
error,
},
});
}
}
};
}
@ -945,6 +955,9 @@ export function getJobAsync(
const frameNumber = Math.max(Math.min(job.stopFrame, initialFrame), job.startFrame);
const frameData = await job.frames.get(frameNumber);
// fetch the first frame data before rendering the interface
// so that the first chunk is loaded and decoded
await frameData.data();
const states = await job.annotations
.get(frameNumber, showAllInterpolationTracks, filters);
const [minZ, maxZ] = computeZRange(states);
@ -958,6 +971,7 @@ export function getJobAsync(
job,
states,
frameNumber,
frameFilename: frameData.filename,
frameData,
colors,
filters,
@ -965,6 +979,7 @@ export function getJobAsync(
maxZ,
},
});
dispatch(changeFrameAsync(frameNumber, false));
} catch (error) {
dispatch({
type: AnnotationActionTypes.GET_JOB_FAILED,

@ -389,6 +389,7 @@ ThunkAction<Promise<void>, {}, {}, AnyAction> {
labels: data.labels,
z_order: data.advanced.zOrder,
image_quality: 70,
use_zip_chunks: data.advanced.useZipChunks,
};
if (data.advanced.bugTracker) {
@ -412,6 +413,9 @@ ThunkAction<Promise<void>, {}, {}, AnyAction> {
if (data.advanced.imageQuality) {
description.image_quality = data.advanced.imageQuality;
}
if (data.advanced.dataChunkSize) {
description.data_chunk_size = data.advanced.dataChunkSize;
}
const taskInstance = new cvat.classes.Task(description);
taskInstance.clientFiles = data.files.local;

@ -36,6 +36,7 @@ interface Props {
annotations: any[];
frameData: any;
frameAngle: number;
frameFetching: boolean;
frame: number;
opacity: number;
colorBy: ColorBy;
@ -125,6 +126,7 @@ export default class CanvasWrapperComponent extends React.PureComponent<Props> {
contrastLevel,
saturationLevel,
workspace,
frameFetching,
} = this.props;
if (prevProps.sidebarCollapsed !== sidebarCollapsed) {
@ -199,6 +201,15 @@ export default class CanvasWrapperComponent extends React.PureComponent<Props> {
canvasInstance.rotate(frameAngle);
}
const loadingAnimation = window.document.getElementById('cvat_canvas_loading_animation');
if (loadingAnimation && frameFetching !== prevProps.frameFetching) {
if (frameFetching) {
loadingAnimation.classList.remove('cvat_canvas_hidden');
} else {
loadingAnimation.classList.add('cvat_canvas_hidden');
}
}
this.activateOnCanvas();
}

@ -119,6 +119,7 @@
overflow: hidden;
text-overflow: ellipsis;
user-select: none;
word-break: break-all;
}
.cvat-player-frame-url-icon {

@ -17,6 +17,7 @@ interface Props {
startFrame: number;
stopFrame: number;
frameNumber: number;
frameFilename: string;
inputFrameRef: React.RefObject<InputNumber>;
onSliderChange(value: SliderValue): void;
onInputChange(value: number): void;
@ -28,6 +29,7 @@ function PlayerNavigation(props: Props): JSX.Element {
startFrame,
stopFrame,
frameNumber,
frameFilename,
inputFrameRef,
onSliderChange,
onInputChange,
@ -58,8 +60,8 @@ function PlayerNavigation(props: Props): JSX.Element {
</Row>
<Row type='flex' justify='center'>
<Col className='cvat-player-filename-wrapper'>
<Tooltip title='filename.png'>
<Text type='secondary'>filename.png</Text>
<Tooltip title={frameFilename}>
<Text type='secondary'>{frameFilename}</Text>
</Tooltip>
</Col>
<Col offset={1}>

@ -19,6 +19,7 @@ interface Props {
saving: boolean;
savingStatuses: string[];
frameNumber: number;
frameFilename: string;
inputFrameRef: React.RefObject<InputNumber>;
startFrame: number;
stopFrame: number;
@ -50,6 +51,7 @@ export default function AnnotationTopBarComponent(props: Props): JSX.Element {
redoAction,
playing,
frameNumber,
frameFilename,
inputFrameRef,
startFrame,
stopFrame,
@ -98,6 +100,7 @@ export default function AnnotationTopBarComponent(props: Props): JSX.Element {
startFrame={startFrame}
stopFrame={stopFrame}
frameNumber={frameNumber}
frameFilename={frameFilename}
inputFrameRef={inputFrameRef}
onSliderChange={onSliderChange}
onInputChange={onInputChange}

@ -29,6 +29,8 @@ export interface AdvancedConfiguration {
frameFilter?: string;
lfs: boolean;
repository?: string;
useZipChunks: boolean;
dataChunkSize?: number;
}
type Props = FormComponentProps & {
@ -36,6 +38,52 @@ type Props = FormComponentProps & {
installedGit: boolean;
};
function isPositiveInteger(_: any, value: any, callback: any): void {
if (!value) {
callback();
return;
}
const intValue = +value;
if (Number.isNaN(intValue)
|| !Number.isInteger(intValue) || intValue < 1) {
callback('Value must be a positive integer');
return;
}
callback();
}
function isNonNegativeInteger(_: any, value: any, callback: any): void {
if (!value) {
callback();
return;
}
const intValue = +value;
if (Number.isNaN(intValue)
|| !Number.isInteger(intValue) || intValue < 0) {
callback('Value must be a non-negative integer');
return;
}
callback();
}
function isIntegerRange(min: number, max: number, _: any, value: any, callback: any): void {
if (!value) {
callback();
return;
}
const intValue = +value;
if (Number.isNaN(intValue)
|| !Number.isInteger(intValue)
|| intValue < min || intValue > max
) {
callback(`Value must be an integer in the range [${min}, ${max}]`);
return;
}
callback();
}
class AdvancedConfigurationForm extends React.PureComponent<Props> {
public submit(): Promise<void> {
return new Promise((resolve, reject) => {
@ -49,6 +97,16 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
const filteredValues = { ...values };
delete filteredValues.frameStep;
if (values.overlapSize && +values.segmentSize <= +values.overlapSize) {
reject(new Error('Segment size must be more than overlap size'));
}
if (typeof (values.startFrame) !== 'undefined' && typeof (values.stopFrame) !== 'undefined'
&& +values.stopFrame < +values.startFrame
) {
reject(new Error('Stop frame must be greater than or equal to start frame'));
}
onSubmit({
...values,
frameFilter: values.frameStep ? `step=${values.frameStep}` : undefined,
@ -94,14 +152,14 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
initialValue: 70,
rules: [{
required: true,
message: 'This field is required',
message: 'The field is required.',
}, {
validator: isIntegerRange.bind(null, 5, 100),
}],
})(
<Input
size='large'
type='number'
min={5}
max={100}
suffix={<Icon type='percentage' />}
/>,
)}
@ -116,7 +174,11 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
return (
<Form.Item label={<span>Overlap size</span>}>
<Tooltip title='Defines the number of overlapping frames between adjacent segments'>
{form.getFieldDecorator('overlapSize')(
{form.getFieldDecorator('overlapSize', {
rules: [{
validator: isNonNegativeInteger,
}],
})(
<Input size='large' type='number' />,
)}
</Tooltip>
@ -130,7 +192,11 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
return (
<Form.Item label={<span>Segment size</span>}>
<Tooltip title='Defines the number of frames in a segment'>
{form.getFieldDecorator('segmentSize')(
{form.getFieldDecorator('segmentSize', {
rules: [{
validator: isPositiveInteger,
}],
})(
<Input size='large' type='number' />,
)}
</Tooltip>
@ -143,7 +209,11 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
return (
<Form.Item label={<span>Start frame</span>}>
{form.getFieldDecorator('startFrame')(
{form.getFieldDecorator('startFrame', {
rules: [{
validator: isNonNegativeInteger,
}],
})(
<Input
size='large'
type='number'
@ -160,7 +230,11 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
return (
<Form.Item label={<span>Stop frame</span>}>
{form.getFieldDecorator('stopFrame')(
{form.getFieldDecorator('stopFrame', {
rules: [{
validator: isNonNegativeInteger,
}],
})(
<Input
size='large'
type='number'
@ -177,7 +251,11 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
return (
<Form.Item label={<span>Frame step</span>}>
{form.getFieldDecorator('frameStep')(
{form.getFieldDecorator('frameStep', {
rules: [{
validator: isPositiveInteger,
}],
})(
<Input
size='large'
type='number'
@ -289,6 +367,60 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
);
}
private renderUzeZipChunks(): JSX.Element {
const { form } = this.props;
return (
<Form.Item help='Force the use of zip chunks as compressed data. Relevant for videos only.'>
{form.getFieldDecorator('useZipChunks', {
initialValue: true,
valuePropName: 'checked',
})(
<Checkbox>
<Text className='cvat-text-color'>
Use zip chunks
</Text>
</Checkbox>,
)}
</Form.Item>
);
}
private renderChunkSize(): JSX.Element {
const { form } = this.props;
return (
<Form.Item label={<span>Chunk size</span>}>
<Tooltip
title={(
<>
Defines the number of frames packed into
a chunk when sent from client to server.
The server chooses it automatically if left empty.
<br />
Recommended values:
<br />
1080p or less: 36
<br />
2k or less: 8 - 16
<br />
4k or less: 4 - 8
<br />
More: 1 - 4
</>
)}
>
{form.getFieldDecorator('dataChunkSize', {
rules: [{
validator: isPositiveInteger,
}],
})(
<Input size='large' type='number' />,
)}
</Tooltip>
</Form.Item>
);
}
public render(): JSX.Element {
const { installedGit } = this.props;
@ -300,6 +432,12 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
</Col>
</Row>
<Row>
<Col>
{this.renderUzeZipChunks()}
</Col>
</Row>
<Row type='flex' justify='start'>
<Col span={7}>
{this.renderImageQuality()}
@ -324,6 +462,12 @@ class AdvancedConfigurationForm extends React.PureComponent<Props> {
</Col>
</Row>
<Row type='flex' justify='start'>
<Col span={7}>
{this.renderChunkSize()}
</Col>
</Row>
{ installedGit ? this.renderGit() : null}
<Row>

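The chunk-size tooltip above recommends fewer frames per chunk as resolution grows. That guidance can be expressed as a small helper; the cutoffs below are illustrative assumptions read off the tooltip ("1080p or less: 36, 2k: 8-16, 4k: 4-8, more: 1-4"), not constants from the codebase:

```javascript
// Hypothetical helper mirroring the tooltip's recommendations,
// using the upper end of each suggested range. The height cutoffs
// are illustrative assumptions, not CVAT constants.
function recommendedChunkSize(frameHeight) {
    if (frameHeight <= 1080) return 36;
    if (frameHeight <= 1440) return 16; // "2k or less"
    if (frameHeight <= 2160) return 8;  // "4k or less"
    return 4;                           // anything larger
}

console.log(recommendedChunkSize(1080)); // 36
console.log(recommendedChunkSize(2160)); // 8
```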
@ -43,6 +43,7 @@ const defaultState = {
advanced: {
zOrder: false,
lfs: false,
useZipChunks: true,
},
labels: [],
files: {
@ -141,10 +142,10 @@ export default class CreateTaskContent extends React.PureComponent<Props, State>
}).then((): void => {
const { onCreate } = this.props;
onCreate(this.state);
}).catch((): void => {
}).catch((error: Error): void => {
notification.error({
message: 'Could not create a task',
description: 'Please, check configuration you specified',
description: error.toString(),
});
});
};

@ -44,6 +44,8 @@ interface State {
export default class DetailsComponent extends React.PureComponent<Props, State> {
private mounted: boolean;
private previewImageElement: HTMLImageElement;
private previewWrapperRef: React.RefObject<HTMLDivElement>;
constructor(props: Props) {
super(props);
@ -51,6 +53,8 @@ export default class DetailsComponent extends React.PureComponent<Props, State>
const { taskInstance } = props;
this.mounted = false;
this.previewImageElement = new Image();
this.previewWrapperRef = React.createRef<HTMLDivElement>();
this.state = {
name: taskInstance.name,
bugTracker: taskInstance.bugTracker,
@ -60,9 +64,25 @@ export default class DetailsComponent extends React.PureComponent<Props, State>
}
public componentDidMount(): void {
const { taskInstance } = this.props;
const { taskInstance, previewImage } = this.props;
const { previewImageElement, previewWrapperRef } = this;
this.mounted = true;
previewImageElement.onload = () => {
const { height, width } = previewImageElement;
if (width > height) {
previewImageElement.style.width = '100%';
} else {
previewImageElement.style.height = '100%';
}
};
previewImageElement.src = previewImage;
previewImageElement.alt = 'Preview';
if (previewWrapperRef.current) {
previewWrapperRef.current.appendChild(previewImageElement);
}
getReposData(taskInstance.id)
.then((data): void => {
if (data !== null && this.mounted) {
@ -135,11 +155,11 @@ export default class DetailsComponent extends React.PureComponent<Props, State>
}
private renderPreview(): JSX.Element {
const { previewImage } = this.props;
const { previewWrapperRef } = this;
// The image is appended on mount, once its width and height are known, so it fits into the wrapper
return (
<div className='cvat-task-preview-wrapper'>
<img alt='Preview' className='cvat-task-preview' src={previewImage} />
</div>
<div ref={previewWrapperRef} className='cvat-task-preview-wrapper' />
);
}

@ -76,15 +76,14 @@
}
.cvat-task-preview-wrapper {
display: flex;
justify-content: flex-start;
overflow: hidden;
margin-bottom: 20px;
> .cvat-task-preview {
max-width: 252px;
max-height: 144px;
}
width: 252px;
height: 144px;
display: table-cell;
text-align: center;
vertical-align: middle;
background-color: $background-color-2;
}
.cvat-user-selector {

@ -55,6 +55,7 @@ interface StateToProps {
annotations: any[];
frameData: any;
frameAngle: number;
frameFetching: boolean;
frame: number;
opacity: number;
colorBy: ColorBy;
@ -129,6 +130,7 @@ function mapStateToProps(state: CombinedState): StateToProps {
frame: {
data: frameData,
number: frame,
fetching: frameFetching,
},
frameAngles,
},
@ -175,6 +177,7 @@ function mapStateToProps(state: CombinedState): StateToProps {
jobInstance,
frameData,
frameAngle: frameAngles[frame - jobInstance.startFrame],
frameFetching,
frame,
activatedStateID,
activatedAttributeID,

@ -32,6 +32,7 @@ import { CombinedState, FrameSpeed, Workspace } from 'reducers/interfaces';
interface StateToProps {
jobInstance: any;
frameNumber: number;
frameFilename: string;
frameStep: number;
frameSpeed: FrameSpeed;
frameDelay: number;
@ -47,7 +48,7 @@ interface StateToProps {
}
interface DispatchToProps {
onChangeFrame(frame: number): void;
onChangeFrame(frame: number, fillBuffer?: boolean, frameStep?: number): void;
onSwitchPlay(playing: boolean): void;
onSaveAnnotation(sessionInstance: any): void;
showStatistics(sessionInstance: any): void;
@ -63,6 +64,7 @@ function mapStateToProps(state: CombinedState): StateToProps {
player: {
playing,
frame: {
filename: frameFilename,
number: frameNumber,
delay: frameDelay,
},
@ -103,6 +105,7 @@ function mapStateToProps(state: CombinedState): StateToProps {
saving,
savingStatuses,
frameNumber,
frameFilename,
jobInstance,
undoAction: history.undo.length ? history.undo[history.undo.length - 1][0] : undefined,
redoAction: history.redo.length ? history.redo[history.redo.length - 1][0] : undefined,
@ -114,8 +117,8 @@ function mapStateToProps(state: CombinedState): StateToProps {
function mapDispatchToProps(dispatch: any): DispatchToProps {
return {
onChangeFrame(frame: number): void {
dispatch(changeFrameAsync(frame));
onChangeFrame(frame: number, fillBuffer?: boolean, frameStep?: number): void {
dispatch(changeFrameAsync(frame, fillBuffer, frameStep));
},
onSwitchPlay(playing: boolean): void {
dispatch(switchPlay(playing));
@ -208,7 +211,10 @@ class AnnotationTopBarContainer extends React.PureComponent<Props> {
setTimeout(() => {
const { playing: stillPlaying } = this.props;
if (stillPlaying) {
onChangeFrame(frameNumber + 1 + framesSkiped);
onChangeFrame(
frameNumber + 1 + framesSkiped,
stillPlaying, framesSkiped + 1,
);
}
}, frameDelay);
} else {
@ -451,6 +457,7 @@ class AnnotationTopBarContainer extends React.PureComponent<Props> {
stopFrame,
},
frameNumber,
frameFilename,
undoAction,
redoAction,
workspace,
@ -623,6 +630,7 @@ class AnnotationTopBarContainer extends React.PureComponent<Props> {
startFrame={startFrame}
stopFrame={stopFrame}
frameNumber={frameNumber}
frameFilename={frameFilename}
inputFrameRef={this.inputFrameRef}
undoAction={undoAction}
redoAction={redoAction}

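During playback the top bar above advances past any dropped frames (`frameNumber + 1 + framesSkiped`) and asks the core to fill the buffer with a step covering them (`framesSkiped + 1`). A standalone sketch of that arithmetic (names are illustrative, not from the codebase):

```javascript
// Illustrative: compute the next frame request during playback when
// `skipped` frames were dropped; the buffer-fill step spans the
// dropped frames plus the one being shown.
function nextPlaybackRequest(frameNumber, skipped) {
    return {
        frame: frameNumber + 1 + skipped,
        fillBuffer: true,
        frameStep: skipped + 1,
    };
}

console.log(nextPlaybackRequest(10, 2));
// { frame: 13, fillBuffer: true, frameStep: 3 }
```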
@ -43,6 +43,7 @@ const defaultState: AnnotationState = {
player: {
frame: {
number: 0,
filename: '',
data: null,
fetching: false,
delay: 0,
@ -114,6 +115,7 @@ export default (state = defaultState, action: AnyAction): AnnotationState => {
job,
states,
frameNumber: number,
frameFilename: filename,
colors,
filters,
frameData: data,
@ -148,6 +150,7 @@ export default (state = defaultState, action: AnyAction): AnnotationState => {
...state.player,
frame: {
...state.player.frame,
filename,
number,
data,
},
@ -195,9 +198,11 @@ export default (state = defaultState, action: AnyAction): AnnotationState => {
const {
number,
data,
filename,
states,
minZ,
maxZ,
curZ,
delay,
changeTime,
} = action.payload;
@ -212,6 +217,7 @@ export default (state = defaultState, action: AnyAction): AnnotationState => {
...state.player,
frame: {
data,
filename,
number,
fetching: false,
changeTime,
@ -225,7 +231,7 @@ export default (state = defaultState, action: AnyAction): AnnotationState => {
zLayer: {
min: minZ,
max: maxZ,
cur: maxZ,
cur: curZ,
},
},
};

@ -323,6 +323,7 @@ export interface AnnotationState {
player: {
frame: {
number: number;
filename: string;
data: any | null;
fetching: boolean;
delay: number;

@ -8,6 +8,7 @@ const path = require('path');
const HtmlWebpackPlugin = require("html-webpack-plugin");
const TsconfigPathsPlugin = require('tsconfig-paths-webpack-plugin');
const Dotenv = require('dotenv-webpack');
const CopyPlugin = require('copy-webpack-plugin');
module.exports = {
target: 'web',
@ -73,7 +74,26 @@ module.exports = {
},
}
]
}],
}, {
test: /3rdparty\/.*\.worker\.js$/,
use: {
loader: 'worker-loader',
options: {
publicPath: '/',
name: '3rdparty/[name].js',
},
},
}, {
test: /\.worker\.js$/,
exclude: /3rdparty/,
use: {
loader: 'worker-loader',
options: {
publicPath: '/',
name: '[name].js',
},
},
},],
},
plugins: [
new HtmlWebpackPlugin({
@ -83,6 +103,12 @@ module.exports = {
new Dotenv({
systemvars: true,
}),
new CopyPlugin([
{
from: '../cvat-data/src/js/3rdparty/avc.wasm',
to: '3rdparty/',
},
]),
],
node: { fs: 'empty' },
};

@ -120,7 +120,7 @@ class Annotation:
self._MAX_ANNO_SIZE=30000
self._frame_info = {}
self._frame_mapping = {}
self._frame_step = db_task.get_frame_step()
self._frame_step = db_task.data.get_frame_step()
db_labels = self._db_task.label_set.all().prefetch_related('attributespec_set').order_by('pk')
@ -177,20 +177,20 @@ class Annotation:
return self._get_attribute_id(label_id, attribute_name, 'immutable')
def _init_frame_info(self):
if self._db_task.mode == "interpolation":
if hasattr(self._db_task.data, 'video'):
self._frame_info = {
frame: {
"path": "frame_{:06d}".format(frame),
"width": self._db_task.video.width,
"height": self._db_task.video.height,
} for frame in range(self._db_task.size)
"width": self._db_task.data.video.width,
"height": self._db_task.data.video.height,
} for frame in range(self._db_task.data.size)
}
else:
self._frame_info = {db_image.frame: {
"path": db_image.path,
"width": db_image.width,
"height": db_image.height,
} for db_image in self._db_task.image_set.all()}
} for db_image in self._db_task.data.images.all()}
self._frame_mapping = {
self._get_filename(info["path"]): frame for frame, info in self._frame_info.items()
@ -202,15 +202,15 @@ class Annotation:
("task", OrderedDict([
("id", str(self._db_task.id)),
("name", self._db_task.name),
("size", str(self._db_task.size)),
("size", str(self._db_task.data.size)),
("mode", self._db_task.mode),
("overlap", str(self._db_task.overlap)),
("bugtracker", self._db_task.bug_tracker),
("created", str(timezone.localtime(self._db_task.created_date))),
("updated", str(timezone.localtime(self._db_task.updated_date))),
("start_frame", str(self._db_task.start_frame)),
("stop_frame", str(self._db_task.stop_frame)),
("frame_filter", self._db_task.frame_filter),
("start_frame", str(self._db_task.data.start_frame)),
("stop_frame", str(self._db_task.data.stop_frame)),
("frame_filter", self._db_task.data.frame_filter),
("z_order", str(self._db_task.z_order)),
("labels", [
@ -250,13 +250,13 @@ class Annotation:
("dumped", str(timezone.localtime(timezone.now())))
])
if self._db_task.mode == "interpolation":
if hasattr(self._db_task.data, "video"):
self._meta["task"]["original_size"] = OrderedDict([
("width", str(self._db_task.video.width)),
("height", str(self._db_task.video.height))
("width", str(self._db_task.data.video.width)),
("height", str(self._db_task.data.video.height))
])
# Add source to dumped file
self._meta["source"] = str(os.path.basename(self._db_task.video.path))
self._meta["source"] = str(os.path.basename(self._db_task.data.video.path))
def _export_attributes(self, attributes):
exported_attributes = []
@ -271,7 +271,7 @@ class Annotation:
def _export_tracked_shape(self, shape):
return Annotation.TrackedShape(
type=shape["type"],
frame=self._db_task.start_frame + shape["frame"] * self._frame_step,
frame=self._db_task.data.start_frame + shape["frame"] * self._frame_step,
points=shape["points"],
occluded=shape["occluded"],
outside=shape.get("outside", False),
@ -284,7 +284,7 @@ class Annotation:
return Annotation.LabeledShape(
type=shape["type"],
label=self._get_label_name(shape["label_id"]),
frame=self._db_task.start_frame + shape["frame"] * self._frame_step,
frame=self._db_task.data.start_frame + shape["frame"] * self._frame_step,
points=shape["points"],
occluded=shape["occluded"],
z_order=shape.get("z_order", 0),
@ -294,7 +294,7 @@ class Annotation:
def _export_tag(self, tag):
return Annotation.Tag(
frame=self._db_task.start_frame + tag["frame"] * self._frame_step,
frame=self._db_task.data.start_frame + tag["frame"] * self._frame_step,
label=self._get_label_name(tag["label_id"]),
group=tag.get("group", 0),
attributes=self._export_attributes(tag["attributes"]),
@ -303,16 +303,11 @@ class Annotation:
def group_by_frame(self):
def _get_frame(annotations, shape):
db_image = self._frame_info[shape["frame"]]
frame = self._db_task.start_frame + shape["frame"] * self._frame_step
rpath = db_image['path'].split(os.path.sep)
if len(rpath) != 1:
rpath = os.path.sep.join(rpath[rpath.index(".upload")+1:])
else:
rpath = rpath[0]
frame = self._db_task.data.start_frame + shape["frame"] * self._frame_step
if frame not in annotations:
annotations[frame] = Annotation.Frame(
frame=frame,
name=rpath,
name=db_image['path'],
height=db_image["height"],
width=db_image["width"],
labeled_shapes=[],
@ -322,7 +317,7 @@ class Annotation:
annotations = {}
data_manager = DataManager(self._annotation_ir)
for shape in sorted(data_manager.to_shapes(self._db_task.size), key=lambda s: s.get("z_order", 0)):
for shape in sorted(data_manager.to_shapes(self._db_task.data.size), key=lambda shape: shape.get("z_order", 0)):
_get_frame(annotations, shape).labeled_shapes.append(self._export_labeled_shape(shape))
for tag in self._annotation_ir.tags:
@ -338,7 +333,7 @@ class Annotation:
@property
def tracks(self):
for track in self._annotation_ir.tracks:
tracked_shapes = TrackManager.get_interpolated_shapes(track, 0, self._db_task.size)
tracked_shapes = TrackManager.get_interpolated_shapes(track, 0, self._db_task.data.size)
for tracked_shape in tracked_shapes:
tracked_shape["attributes"] += track["attributes"]
@ -360,7 +355,7 @@ class Annotation:
def _import_tag(self, tag):
_tag = tag._asdict()
label_id = self._get_label_id(_tag.pop('label'))
_tag['frame'] = (int(_tag['frame']) - self._db_task.start_frame) // self._frame_step
_tag['frame'] = (int(_tag['frame']) - self._db_task.data.start_frame) // self._frame_step
_tag['label_id'] = label_id
_tag['attributes'] = [self._import_attribute(label_id, attrib) for attrib in _tag['attributes']
if self._get_attribute_id(label_id, attrib.name)]
@ -375,7 +370,7 @@ class Annotation:
def _import_shape(self, shape):
_shape = shape._asdict()
label_id = self._get_label_id(_shape.pop('label'))
_shape['frame'] = (int(_shape['frame']) - self._db_task.start_frame) // self._frame_step
_shape['frame'] = (int(_shape['frame']) - self._db_task.data.start_frame) // self._frame_step
_shape['label_id'] = label_id
_shape['attributes'] = [self._import_attribute(label_id, attrib) for attrib in _shape['attributes']
if self._get_attribute_id(label_id, attrib.name)]
@ -385,12 +380,12 @@ class Annotation:
_track = track._asdict()
label_id = self._get_label_id(_track.pop('label'))
_track['frame'] = (min(int(shape.frame) for shape in _track['shapes']) - \
self._db_task.start_frame) // self._frame_step
self._db_task.data.start_frame) // self._frame_step
_track['label_id'] = label_id
_track['attributes'] = []
_track['shapes'] = [shape._asdict() for shape in _track['shapes']]
for shape in _track['shapes']:
shape['frame'] = (int(shape['frame']) - self._db_task.start_frame) // self._frame_step
shape['frame'] = (int(shape['frame']) - self._db_task.data.start_frame) // self._frame_step
_track['attributes'] = [self._import_attribute(label_id, attrib) for attrib in shape['attributes']
if self._get_immutable_attribute_id(label_id, attrib.name)]
shape['attributes'] = [self._import_attribute(label_id, attrib) for attrib in shape['attributes']

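The hunks above repeatedly rewrite the same conversion between a task-relative frame index and the absolute frame in the source media (`data.start_frame` plus the frame-filter step). A minimal standalone sketch of the two-way mapping — the function names here are illustrative, not CVAT API:

```python
def to_absolute_frame(internal_frame, start_frame, frame_step):
    # Export direction: task-relative index -> absolute source frame,
    # as in `start_frame + shape["frame"] * frame_step` above.
    return start_frame + internal_frame * frame_step

def to_internal_frame(absolute_frame, start_frame, frame_step):
    # Import direction: inverse mapping used by _import_tag/_import_shape.
    return (absolute_frame - start_frame) // frame_step
```

The integer floor division makes the import direction tolerant of absolute frames that fall between sampled steps.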
@ -3,12 +3,10 @@
# SPDX-License-Identifier: MIT
from cvat.apps.annotation import models
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from cvat.apps.annotation.serializers import AnnotationFormatSerializer
from django.core.files import File
import os
from copy import deepcopy
def register_format(format_file):

@ -4,21 +4,19 @@
# SPDX-License-Identifier: MIT
import cv2
import numpy as np
class ImageLoader():
def __init__(self, image_list):
self.image_list = image_list
def __getitem__(self, i):
return self.image_list[i]
def __init__(self, frame_provider):
self._frame_provider = frame_provider
def __iter__(self):
for imagename in self.image_list:
yield self._load_image(imagename)
for frame in self._frame_provider.get_frames(self._frame_provider.Quality.ORIGINAL):
yield self._load_image(frame)
def __len__(self):
return len(self.image_list)
return len(self._frame_provider)
@staticmethod
def _load_image(path_to_image):
return cv2.imread(path_to_image)
def _load_image(image):
return cv2.imdecode(np.fromstring(image.read(), np.uint8), cv2.IMREAD_COLOR)

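The new `_load_image` decodes an in-memory buffer instead of reading a path from disk. Note that `np.fromstring` (used above) is deprecated in modern NumPy; `np.frombuffer` is the equivalent. A sketch of the byte-to-array step, assuming the frame object exposes `read()` as in the diff:

```python
import numpy as np

def buffer_to_array(raw_bytes):
    # np.fromstring is deprecated for binary input; np.frombuffer is the
    # zero-copy replacement with the same uint8 semantics.
    return np.frombuffer(raw_bytes, dtype=np.uint8)

# With OpenCV installed, the decoded image would then be obtained via:
#   cv2.imdecode(buffer_to_array(frame.read()), cv2.IMREAD_COLOR)
```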
@ -3,7 +3,6 @@
# SPDX-License-Identifier: MIT
import django_rq
import fnmatch
import numpy as np
import os
import rq
@ -19,6 +18,7 @@ from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.authentication.auth import has_admin_role
from cvat.apps.engine.serializers import LabeledDataSerializer
from cvat.apps.engine.annotation import put_task_data, patch_task_data
from cvat.apps.engine.frame_provider import FrameProvider
from .models import AnnotationModel, FrameworkChoice
from .model_loader import load_labelmap
@ -208,19 +208,6 @@ def delete(dl_model_id):
else:
raise Exception("Requested DL model {} doesn't exist".format(dl_model_id))
def get_image_data(path_to_data):
def get_image_key(item):
return int(os.path.splitext(os.path.basename(item))[0])
image_list = []
for root, _, filenames in os.walk(path_to_data):
for filename in fnmatch.filter(filenames, "*.jpg"):
image_list.append(os.path.join(root, filename))
image_list.sort(key=get_image_key)
return ImageLoader(image_list)
def run_inference_thread(tid, model_file, weights_file, labels_mapping, attributes, convertation_file, reset, user, restricted=True):
def update_progress(job, progress):
job.refresh()
@ -241,7 +228,7 @@ def run_inference_thread(tid, model_file, weights_file, labels_mapping, attribut
result = None
slogger.glob.info("auto annotation with openvino toolkit for task {}".format(tid))
result = run_inference_engine_annotation(
data=get_image_data(db_task.get_data_dirname()),
data=ImageLoader(FrameProvider(db_task.data)),
model_file=model_file,
weights_file=weights_file,
labels_mapping=labels_mapping,

@ -11,10 +11,9 @@ from cvat.apps.authentication.decorators import login_required
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine.serializers import LabeledDataSerializer
from cvat.apps.engine.annotation import put_task_data
from cvat.apps.engine.frame_provider import FrameProvider
import django_rq
import fnmatch
import json
import os
import rq
@ -26,13 +25,7 @@ import sys
import skimage.io
from skimage.measure import find_contours, approximate_polygon
def load_image_into_numpy(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)
def run_tensorflow_auto_segmentation(image_list, labels_mapping, treshold):
def run_tensorflow_auto_segmentation(frame_provider, labels_mapping, treshold):
def _convert_to_int(boolean_mask):
return boolean_mask.astype(np.uint8)
@ -88,16 +81,17 @@ def run_tensorflow_auto_segmentation(image_list, labels_mapping, treshold):
## RUN OBJECT DETECTION
result = {}
for image_num, image_path in enumerate(image_list):
frames = frame_provider.get_frames(frame_provider.Quality.ORIGINAL)
for image_num, image_bytes in enumerate(frames):
job.refresh()
if 'cancel' in job.meta:
del job.meta['cancel']
job.save()
return None
job.meta['progress'] = image_num * 100 / len(image_list)
job.meta['progress'] = image_num * 100 / len(frame_provider)
job.save_meta()
image = skimage.io.imread(image_path)
image = skimage.io.imread(image_bytes)
# for multiple image detection, "batch size" must be equal to number of images
r = model.detect([image], verbose=1)
@ -117,20 +111,6 @@ def run_tensorflow_auto_segmentation(image_list, labels_mapping, treshold):
return result
def make_image_list(path_to_data):
def get_image_key(item):
return int(os.path.splitext(os.path.basename(item))[0])
image_list = []
for root, _, filenames in os.walk(path_to_data):
for filename in fnmatch.filter(filenames, '*.jpg'):
image_list.append(os.path.join(root, filename))
image_list.sort(key=get_image_key)
return image_list
def convert_to_cvat_format(data):
result = {
"tracks": [],
@ -166,12 +146,12 @@ def create_thread(tid, labels_mapping, user):
# Get job indexes and segment length
db_task = TaskModel.objects.get(pk=tid)
# Get image list
image_list = make_image_list(db_task.get_data_dirname())
frame_provider = FrameProvider(db_task.data)
# Run auto segmentation by tf
result = None
slogger.glob.info("auto segmentation with tensorflow framework for task {}".format(tid))
result = run_tensorflow_auto_segmentation(image_list, labels_mapping, TRESHOLD)
result = run_tensorflow_auto_segmentation(frame_provider, labels_mapping, TRESHOLD)
if result is None:
slogger.glob.info('auto segmentation for task {} canceled by user'.format(tid))

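The segmentation loop above interleaves frame processing with progress updates written to `job.meta['progress']`. The same pattern, reduced to a generic sketch (the names are illustrative):

```python
def iter_with_progress(frames, total, report):
    # Yield frames while reporting percent complete before each one,
    # mirroring the `image_num * 100 / len(frame_provider)` update above.
    for num, frame in enumerate(frames):
        report(num * 100 / total)
        yield frame
```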
@ -4,56 +4,47 @@
# SPDX-License-Identifier: MIT
from collections import OrderedDict
import os
import os.path as osp
from django.db import transaction
from cvat.apps.annotation.annotation import Annotation
from cvat.apps.engine.annotation import TaskAnnotation
from cvat.apps.engine.models import Task, ShapeType, AttributeType
from cvat.apps.engine.models import ShapeType, AttributeType
import datumaro.components.extractor as datumaro
from datumaro.util.image import Image
class CvatImagesDirExtractor(datumaro.Extractor):
_SUPPORTED_FORMATS = ['.png', '.jpg']
def __init__(self, url):
class CvatImagesExtractor(datumaro.Extractor):
def __init__(self, url, frame_provider):
super().__init__()
items = []
for (dirpath, _, filenames) in os.walk(url):
for name in filenames:
path = osp.join(dirpath, name)
if self._is_image(path):
item_id = Task.get_image_frame(path)
item = datumaro.DatasetItem(id=item_id, image=path)
items.append((item.id, item))
items = sorted(items, key=lambda e: int(e[0]))
items = OrderedDict(items)
self._items = items
self._frame_provider = frame_provider
self._subsets = None
def __iter__(self):
for item in self._items.values():
yield item
frames = self._frame_provider.get_frames(
self._frame_provider.Quality.ORIGINAL,
self._frame_provider.Type.NUMPY_ARRAY)
for item_id, image in enumerate(frames):
yield datumaro.DatasetItem(
id=item_id,
image=Image(image),
)
def __len__(self):
return len(self._items)
return len(self._frame_provider)
def subsets(self):
return self._subsets
def _is_image(self, path):
for ext in self._SUPPORTED_FORMATS:
if osp.isfile(path) and path.endswith(ext):
return True
return False
def get(self, item_id, subset=None, path=None):
if path or subset:
raise KeyError()
return datumaro.DatasetItem(
id=item_id,
image=self._frame_provider[item_id].getvalue()
)
class CvatAnnotationsExtractor(datumaro.Extractor):
def __init__(self, url, cvat_annotations):
@ -170,7 +161,6 @@ class CvatAnnotationsExtractor(datumaro.Extractor):
return item_anno
class CvatTaskExtractor(CvatAnnotationsExtractor):
def __init__(self, url, db_task, user):
cvat_annotations = TaskAnnotation(db_task.id, user)
@ -254,4 +244,4 @@ def import_dm_annotations(dm_dataset, cvat_task_anno):
group=group_map.get(ann.group, 0),
attributes=[cvat_task_anno.Attribute(name=n, value=str(v))
for n, v in ann.attributes.items()],
))
))

@ -45,7 +45,7 @@ class cvat_rest_api_task_images(datumaro.SourceExtractor):
self._connect()
os.makedirs(self._cache_dir, exist_ok=True)
self._cvat_cli.tasks_frame(task_id=self._config.task_id,
frame_ids=[item_id], outdir=self._cache_dir)
frame_ids=[item_id], outdir=self._cache_dir, quality='original')
def _connect(self):
if self._session is not None:
@ -126,6 +126,7 @@ class cvat_rest_api_task_images(datumaro.SourceExtractor):
def __len__(self):
return len(self._items)
# pylint: disable=no-self-use
def subsets(self):
return None

@ -16,6 +16,7 @@ import django_rq
from cvat.apps.engine.log import slogger
from cvat.apps.engine.models import Task
from cvat.apps.engine.frame_provider import FrameProvider
from .util import current_function_name, make_zip_archive
_CVAT_ROOT_DIR = __file__[:__file__.rfind('cvat/')]
@ -23,7 +24,7 @@ _DATUMARO_REPO_PATH = osp.join(_CVAT_ROOT_DIR, 'datumaro')
sys.path.append(_DATUMARO_REPO_PATH)
from datumaro.components.project import Project, Environment
import datumaro.components.extractor as datumaro
from .bindings import CvatImagesDirExtractor, CvatTaskExtractor
from .bindings import CvatImagesExtractor, CvatTaskExtractor
_MODULE_NAME = __package__ + '.' + osp.splitext(osp.basename(__file__))[0]
@ -77,11 +78,11 @@ class TaskProject:
def _create(self):
self._project = Project.generate(self._project_dir)
self._project.add_source('task_%s' % self._db_task.id, {
'url': self._db_task.get_data_dirname(),
'format': _TASK_IMAGES_EXTRACTOR,
})
self._project.env.extractors.register(_TASK_IMAGES_EXTRACTOR,
CvatImagesDirExtractor)
lambda url: CvatImagesExtractor(url,
FrameProvider(self._db_task.data)))
self._init_dataset()
self._dataset.define_categories(self._generate_categories())
@ -91,18 +92,19 @@ class TaskProject:
def _load(self):
self._project = Project.load(self._project_dir)
self._project.env.extractors.register(_TASK_IMAGES_EXTRACTOR,
CvatImagesDirExtractor)
lambda url: CvatImagesExtractor(url,
FrameProvider(self._db_task.data)))
def _import_from_task(self, user):
self._project = Project.generate(self._project_dir,
config={'project_name': self._db_task.name})
self._project.add_source('task_%s_images' % self._db_task.id, {
'url': self._db_task.get_data_dirname(),
'format': _TASK_IMAGES_EXTRACTOR,
})
self._project.env.extractors.register(_TASK_IMAGES_EXTRACTOR,
CvatImagesDirExtractor)
lambda url: CvatImagesExtractor(url,
FrameProvider(self._db_task.data)))
self._project.add_source('task_%s_anno' % self._db_task.id, {
'format': _TASK_ANNO_EXTRACTOR,
@ -173,9 +175,9 @@ class TaskProject:
images_meta = {
'images': items,
}
db_video = getattr(self._db_task, 'video', None)
db_video = getattr(self._db_task.data, 'video', None)
if db_video is not None:
for i in range(self._db_task.size):
for i in range(self._db_task.data.size):
frame_info = {
'id': i,
'width': db_video.width,
@ -183,7 +185,7 @@ class TaskProject:
}
items.append(frame_info)
else:
for db_image in self._db_task.image_set.all():
for db_image in self._db_task.data.images.all():
frame_info = {
'id': db_image.frame,
'name': osp.basename(db_image.path),
@ -345,4 +347,4 @@ def get_export_formats():
if fmt['tag'] in available_formats:
public_formats.append(fmt)
return public_formats
return public_formats

@ -56,7 +56,7 @@ class SegmentAdmin(admin.ModelAdmin):
class TaskAdmin(admin.ModelAdmin):
date_hierarchy = 'updated_date'
readonly_fields = ('size', 'created_date', 'updated_date', 'overlap')
readonly_fields = ('created_date', 'updated_date', 'overlap')
list_display = ('name', 'mode', 'owner', 'assignee', 'created_date', 'updated_date')
search_fields = ('name', 'mode', 'owner__username', 'owner__first_name',
'owner__last_name', 'owner__email', 'assignee__username', 'assignee__first_name',

@ -656,7 +656,7 @@ class JobAnnotation:
class TaskAnnotation:
def __init__(self, pk, user):
self.user = user
self.db_task = models.Task.objects.prefetch_related("image_set").get(id=pk)
self.db_task = models.Task.objects.prefetch_related("data__images").get(id=pk)
# Postgres doesn't guarantee an order by default without explicit order_by
self.db_jobs = models.Job.objects.select_related("segment").filter(segment__task_id=pk).order_by('id')

@ -1,3 +1,7 @@
# Copyright (C) 2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
import copy
import numpy as np

@ -0,0 +1,149 @@
# Copyright (C) 2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
import math
from io import BytesIO
from enum import Enum
import numpy as np
from PIL import Image
from cvat.apps.engine.media_extractors import VideoReader, ZipReader
from cvat.apps.engine.models import DataChoice
from cvat.apps.engine.mime_types import mimetypes
class FrameProvider():
class Quality(Enum):
COMPRESSED = 0
ORIGINAL = 100
class Type(Enum):
BUFFER = 0
PIL = 1
NUMPY_ARRAY = 2
def __init__(self, db_data):
self._db_data = db_data
if db_data.compressed_chunk_type == DataChoice.IMAGESET:
self._compressed_chunk_reader_class = ZipReader
elif db_data.compressed_chunk_type == DataChoice.VIDEO:
self._compressed_chunk_reader_class = VideoReader
else:
raise Exception('Unsupported chunk type')
if db_data.original_chunk_type == DataChoice.IMAGESET:
self._original_chunk_reader_class = ZipReader
elif db_data.original_chunk_type == DataChoice.VIDEO:
self._original_chunk_reader_class = VideoReader
else:
raise Exception('Unsupported chunk type')
self._extracted_compressed_chunk = None
self._compressed_chunk_reader = None
self._extracted_original_chunk = None
self._original_chunk_reader = None
def __len__(self):
return self._db_data.size
def _validate_frame_number(self, frame_number):
frame_number_ = int(frame_number)
if frame_number_ < 0 or frame_number_ >= self._db_data.size:
raise Exception('Incorrect requested frame number: {}'.format(frame_number_))
chunk_number = frame_number_ // self._db_data.chunk_size
frame_offset = frame_number_ % self._db_data.chunk_size
return frame_number_, chunk_number, frame_offset
def _validate_chunk_number(self, chunk_number):
chunk_number_ = int(chunk_number)
if chunk_number_ < 0 or chunk_number_ >= math.ceil(self._db_data.size / self._db_data.chunk_size):
raise Exception('requested chunk does not exist')
return chunk_number_
@staticmethod
def _av_frame_to_png_bytes(av_frame):
pil_img = av_frame.to_image()
buf = BytesIO()
pil_img.save(buf, format='PNG')
buf.seek(0)
return buf
def _get_frame(self, frame_number, chunk_path_getter, extracted_chunk, chunk_reader, reader_class):
_, chunk_number, frame_offset = self._validate_frame_number(frame_number)
chunk_path = chunk_path_getter(chunk_number)
if chunk_number != extracted_chunk:
extracted_chunk = chunk_number
chunk_reader = reader_class([chunk_path])
frame, frame_name = chunk_reader[frame_offset]
if reader_class is VideoReader:
return (self._av_frame_to_png_bytes(frame), 'image/png')
return (frame, mimetypes.guess_type(frame_name))
def _get_frames(self, chunk_path_getter, reader_class, out_type):
for chunk_idx in range(math.ceil(self._db_data.size / self._db_data.chunk_size)):
chunk_path = chunk_path_getter(chunk_idx)
chunk_reader = reader_class([chunk_path])
for frame, _ in chunk_reader:
if out_type == self.Type.BUFFER:
yield self._av_frame_to_png_bytes(frame) if reader_class is VideoReader else frame
elif out_type == self.Type.PIL:
yield frame.to_image() if reader_class is VideoReader else Image.open(frame)
elif out_type == self.Type.NUMPY_ARRAY:
if reader_class is VideoReader:
image = np.array(frame.to_image())
else:
image = np.array(Image.open(frame))
if len(image.shape) == 3 and image.shape[2] in {3, 4}:
image[:, :, :3] = image[:, :, 2::-1] # RGB to BGR
yield image
else:
raise Exception('unsupported output type')
def get_preview(self):
return self._db_data.get_preview_path()
def get_chunk(self, chunk_number, quality=Quality.ORIGINAL):
chunk_number = self._validate_chunk_number(chunk_number)
if quality == self.Quality.ORIGINAL:
return self._db_data.get_original_chunk_path(chunk_number)
elif quality == self.Quality.COMPRESSED:
return self._db_data.get_compressed_chunk_path(chunk_number)
def get_frame(self, frame_number, quality=Quality.ORIGINAL):
if quality == self.Quality.ORIGINAL:
return self._get_frame(
frame_number=frame_number,
chunk_path_getter=self._db_data.get_original_chunk_path,
extracted_chunk=self._extracted_original_chunk,
chunk_reader=self._original_chunk_reader,
reader_class=self._original_chunk_reader_class,
)
elif quality == self.Quality.COMPRESSED:
return self._get_frame(
frame_number=frame_number,
chunk_path_getter=self._db_data.get_compressed_chunk_path,
extracted_chunk=self._extracted_compressed_chunk,
chunk_reader=self._compressed_chunk_reader,
reader_class=self._compressed_chunk_reader_class,
)
def get_frames(self, quality=Quality.ORIGINAL, out_type=Type.BUFFER):
if quality == self.Quality.ORIGINAL:
return self._get_frames(
chunk_path_getter=self._db_data.get_original_chunk_path,
reader_class=self._original_chunk_reader_class,
out_type=out_type,
)
elif quality == self.Quality.COMPRESSED:
return self._get_frames(
chunk_path_getter=self._db_data.get_compressed_chunk_path,
reader_class=self._compressed_chunk_reader_class,
out_type=out_type,
)

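`FrameProvider` locates a frame inside the chunked layout by plain integer arithmetic. A standalone sketch of the math behind `_validate_frame_number` and `_validate_chunk_number` (helper names here are illustrative):

```python
import math

def locate_frame(frame_number, size, chunk_size):
    # Map a task frame number to (chunk_number, offset inside the chunk),
    # raising on out-of-range input like _validate_frame_number does.
    frame_number = int(frame_number)
    if frame_number < 0 or frame_number >= size:
        raise ValueError('Incorrect requested frame number: {}'.format(frame_number))
    return frame_number // chunk_size, frame_number % chunk_size

def chunk_count(size, chunk_size):
    # Total number of chunks, as used by _validate_chunk_number.
    return math.ceil(size / chunk_size)
```

For example, with the 36-image chunks mentioned in the commit message, frame 36 is the first frame of the second chunk.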
@ -1,18 +1,22 @@
# Copyright (C) 2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
import tempfile
import shutil
import numpy as np
import zipfile
from io import BytesIO
import itertools
from abc import ABC, abstractmethod
from ffmpy import FFmpeg
import av
import av.datasets
import numpy as np
from pyunpack import Archive
from PIL import Image
import mimetypes
_SCRIPT_DIR = os.path.realpath(os.path.dirname(__file__))
MEDIA_MIMETYPES_FILES = [
os.path.join(_SCRIPT_DIR, "media.mimetypes"),
]
mimetypes.init(files=MEDIA_MIMETYPES_FILES)
from cvat.apps.engine.mime_types import mimetypes
def get_mime(name):
for type_name, type_def in MEDIA_TYPES.items():
@ -21,110 +25,85 @@ def get_mime(name):
return 'unknown'
class MediaExtractor:
def __init__(self, source_path, dest_path, image_quality, step, start, stop):
self._source_path = source_path
self._dest_path = dest_path
self._image_quality = image_quality
class IMediaReader(ABC):
def __init__(self, source_path, step, start, stop):
self._source_path = sorted(source_path)
self._step = step
self._start = start
self._stop = stop
def get_source_name(self):
return self._source_path
@staticmethod
def create_tmp_dir():
return tempfile.mkdtemp(prefix='cvat-', suffix='.data')
# Note: step, start, stop have no effect
class ImageListExtractor(MediaExtractor):
def __init__(self, source_path, dest_path, image_quality, step=1, start=0, stop=0):
if not source_path:
raise Exception('No image found')
super().__init__(
source_path=sorted(source_path),
dest_path=dest_path,
image_quality=image_quality,
step=1,
start=0,
stop=0,
)
@staticmethod
def delete_tmp_dir(tmp_dir):
if tmp_dir:
shutil.rmtree(tmp_dir)
@abstractmethod
def __iter__(self):
return iter(self._source_path)
pass
@abstractmethod
def __getitem__(self, k):
return self._source_path[k]
def __len__(self):
return len(self._source_path)
pass
@abstractmethod
def save_preview(self, preview_path):
pass
def slice_by_size(self, size):
# stopFrame should be included
it = itertools.islice(self, self._start, self._stop + 1 if self._stop else None)
frames = list(itertools.islice(it, 0, size * self._step, self._step))
while frames:
yield frames
frames = list(itertools.islice(it, 0, size * self._step, self._step))
@property
@abstractmethod
def image_names(self):
pass
@abstractmethod
def get_image_size(self):
pass
def save_image(self, k, dest_path):
image = Image.open(self[k])
# Ensure image data fits into 8bit per pixel before RGB conversion as PIL clips values on conversion
if image.mode == "I":
# Image mode is 32bit integer pixels.
# Autoscale pixels by factor 2**8 / im_data.max() to fit into 8bit
im_data = np.array(image)
im_data = im_data * (2**8 / im_data.max())
image = Image.fromarray(im_data.astype(np.int32))
image = image.convert('RGB')
image.save(dest_path, quality=self._image_quality, optimize=True)
height = image.height
width = image.width
image.close()
return width, height
class PDFExtractor(MediaExtractor):
def __init__(self, source_path, dest_path, image_quality, step=1, start=0, stop=0):
# Note: step, start, stop have no effect
class ImageListReader(IMediaReader):
def __init__(self, source_path, step=1, start=0, stop=0):
if not source_path:
raise Exception('No PDF found')
from pdf2image import convert_from_path
self._temp_directory = tempfile.mkdtemp(prefix='cvat-')
raise Exception('No image found')
super().__init__(
source_path=source_path[0],
dest_path=dest_path,
image_quality=image_quality,
source_path=source_path,
step=1,
start=0,
stop=0,
)
self._dimensions = []
file_ = convert_from_path(self._source_path)
self._basename = os.path.splitext(os.path.basename(self._source_path))[0]
for page_num, page in enumerate(file_):
output = os.path.join(self._temp_directory, self._basename + str(page_num) + '.jpg')
self._dimensions.append(page.size)
page.save(output, 'JPEG')
self._length = len(os.listdir(self._temp_directory))
def _get_imagepath(self, k):
img_path = os.path.join(self._temp_directory, self._basename + str(k) + '.jpg')
return img_path
def __iter__(self):
i = 0
while os.path.exists(self._get_imagepath(i)):
yield self._get_imagepath(i)
i += 1
def __del__(self):
if self._temp_directory:
shutil.rmtree(self._temp_directory)
return zip(self._source_path, self.image_names)
def __getitem__(self, k):
return self._get_imagepath(k)
return (self._source_path[k], self.image_names[k])
def __len__(self):
return self._length
return len(self._source_path)
def save_preview(self, preview_path):
shutil.copyfile(self._source_path[0], preview_path)
@property
def image_names(self):
return self._source_path
def save_image(self, k, dest_path):
shutil.copyfile(self[k], dest_path)
return self._dimensions[k]
def get_image_size(self):
img = Image.open(self._source_path[0])
return img.width, img.height
# Note: step, start, stop have no effect
class DirectoryExtractor(ImageListExtractor):
def __init__(self, source_path, dest_path, image_quality, step=1, start=0, stop=0):
class DirectoryReader(ImageListReader):
def __init__(self, source_path, step=1, start=0, stop=0):
image_paths = []
for source in source_path:
for root, _, files in os.walk(source):
@ -132,89 +111,302 @@ class DirectoryExtractor(ImageListExtractor):
paths = filter(lambda x: get_mime(x) == 'image', paths)
image_paths.extend(paths)
super().__init__(
source_path=sorted(image_paths),
dest_path=dest_path,
image_quality=image_quality,
source_path=image_paths,
step=1,
start=0,
stop=0,
)
# Note: step, start, stop have no effect
class ArchiveExtractor(DirectoryExtractor):
def __init__(self, source_path, dest_path, image_quality, step=1, start=0, stop=0):
Archive(source_path[0]).extractall(dest_path)
class ArchiveReader(DirectoryReader):
def __init__(self, source_path, step=1, start=0, stop=0):
self._tmp_dir = self.create_tmp_dir()
self._archive_source = source_path[0]
Archive(self._archive_source).extractall(self._tmp_dir)
super().__init__(
source_path=[self._tmp_dir],
step=1,
start=0,
stop=0,
)
def __del__(self):
if (self._tmp_dir):
self.delete_tmp_dir(self._tmp_dir)
@property
def image_names(self):
return [os.path.join(os.path.dirname(self._archive_source), os.path.relpath(p, self._tmp_dir)) for p in super().image_names]
# Note: step, start, stop have no effect
class PdfReader(DirectoryReader):
def __init__(self, source_path, step=1, start=0, stop=0):
if not source_path:
raise Exception('No PDF found')
from pdf2image import convert_from_path
self._pdf_source = source_path[0]
self._tmp_dir = self.create_tmp_dir()
file_ = convert_from_path(self._pdf_source)
basename = os.path.splitext(os.path.basename(self._pdf_source))[0]
for page_num, page in enumerate(file_):
output = os.path.join(self._tmp_dir, '{}{:09d}.jpeg'.format(basename, page_num))
page.save(output, 'JPEG')
super().__init__(
source_path=[dest_path],
dest_path=dest_path,
image_quality=image_quality,
source_path=[self._tmp_dir],
step=1,
start=0,
stop=0,
)
class VideoExtractor(MediaExtractor):
def __init__(self, source_path, dest_path, image_quality, step=1, start=0, stop=0):
from cvat.apps.engine.log import slogger
_dest_path = tempfile.mkdtemp(prefix='cvat-', suffix='.data')
def __del__(self):
if (self._tmp_dir):
self.delete_tmp_dir(self._tmp_dir)
@property
def image_names(self):
return [os.path.join(os.path.dirname(self._pdf_source), os.path.relpath(p, self._tmp_dir)) for p in super().image_names]
class ZipReader(IMediaReader):
def __init__(self, source_path, step=1, start=0, stop=0):
self._zip_source = zipfile.ZipFile(source_path[0], mode='r')
file_list = [f for f in self._zip_source.namelist() if get_mime(f) == 'image']
super().__init__(file_list, step, start, stop)
def __iter__(self):
for f in zip(self._source_path, self.image_names):
yield (BytesIO(self._zip_source.read(f[0])), f[1])
def __len__(self):
return len(self._source_path)
def __getitem__(self, k):
return (BytesIO(self._zip_source.read(self._source_path[k])), self.image_names[k])
def __del__(self):
self._zip_source.close()
def save_preview(self, preview_path):
with open(preview_path, 'wb') as f:
f.write(self._zip_source.read(self._source_path[0]))
def get_image_size(self):
img = Image.open(BytesIO(self._zip_source.read(self._source_path[0])))
return img.width, img.height
@property
def image_names(self):
return [os.path.join(os.path.dirname(self._zip_source.filename), p) for p in self._source_path]
class VideoReader(IMediaReader):
def __init__(self, source_path, step=1, start=0, stop=0):
self._output_fps = 25
super().__init__(
source_path=source_path[0],
dest_path=_dest_path,
image_quality=image_quality,
source_path=source_path,
step=step,
start=start,
stop=stop,
)
# translate the inverted quality range 1:95 into ffmpeg's 2:31 scale
translated_quality = 96 - self._image_quality
translated_quality = round((((translated_quality - 1) * (31 - 2)) / (95 - 1)) + 2)
self._tmp_output = tempfile.mkdtemp(prefix='cvat-', suffix='.data')
target_path = os.path.join(self._tmp_output, '%d.jpg')
output_opts = '-start_number 0 -b:v 10000k -vsync 0 -an -y -q:v ' + str(translated_quality)
filters = ''
if self._stop > 0:
filters = 'between(n,' + str(self._start) + ',' + str(self._stop) + ')'
elif self._start > 0:
filters = 'gte(n,' + str(self._start) + ')'
if self._step > 1:
filters += ('*' if filters else '') + 'not(mod(n-' + str(self._start) + ',' + str(self._step) + '))'
if filters:
output_opts += " -vf select=\"'" + filters + "'\""
ff = FFmpeg(
inputs = {self._source_path: None},
outputs = {target_path: output_opts})
slogger.glob.info("FFMpeg cmd: {} ".format(ff.cmd))
ff.run()
def _getframepath(self, k):
return "{0}/{1}.jpg".format(self._tmp_output, k)
)
def __iter__(self):
i = 0
while os.path.exists(self._getframepath(i)):
yield self._getframepath(i)
i += 1
def decode_frames(container):
for packet in container.demux():
if packet.stream.type == 'video':
for frame in packet.decode():
yield frame
def __del__(self):
if self._tmp_output:
shutil.rmtree(self._tmp_output)
container = self._get_av_container()
source_video_stream = container.streams.video[0]
source_video_stream.thread_type = 'AUTO'
image_names = self.image_names
def __getitem__(self, k):
return self._getframepath(k)
return itertools.zip_longest(decode_frames(container), image_names, fillvalue=image_names[0])
def __len__(self):
return len(os.listdir(self._tmp_output))
container = self._get_av_container()
# Not all containers report the real frame count here
length = container.streams.video[0].frames
return length
def __getitem__(self, k):
return next(itertools.islice(self, k, k + 1))
def _get_av_container(self):
return av.open(av.datasets.curated(self._source_path[0]))
def save_preview(self, preview_path):
container = self._get_av_container()
stream = container.streams.video[0]
preview = next(container.decode(stream))
preview.to_image().save(preview_path)
@property
def image_names(self):
return self._source_path
def get_image_size(self):
image = (next(iter(self)))[0]
return image.width, image.height
class IChunkWriter(ABC):
def __init__(self, quality):
self._image_quality = quality
@staticmethod
def _compress_image(image_path, quality):
image = image_path.to_image() if isinstance(image_path, av.VideoFrame) else Image.open(image_path)
# Ensure image data fits into 8bit per pixel before RGB conversion as PIL clips values on conversion
if image.mode == "I":
# Image mode is 32bit integer pixels.
# Autoscale pixels by factor 2**8 / im_data.max() to fit into 8bit
im_data = np.array(image)
im_data = im_data * (2**8 / im_data.max())
image = Image.fromarray(im_data.astype(np.int32))
converted_image = image.convert('RGB')
image.close()
buf = BytesIO()
converted_image.save(buf, format='JPEG', quality=quality, optimize=True)
buf.seek(0)
width, height = converted_image.size
converted_image.close()
return width, height, buf
@abstractmethod
def save_as_chunk(self, images, chunk_path):
pass
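The `_compress_image` helper above special-cases 32-bit integer (`mode == "I"`) frames before JPEG encoding. A standalone sketch of that path for plain PIL images (the `av.VideoFrame` branch is omitted; this mirrors the arithmetic, it is not the class method itself):

```python
from io import BytesIO

import numpy as np
from PIL import Image

def compress_image(pil_image, quality):
    # Mirrors IChunkWriter._compress_image for plain PIL images:
    # 32-bit integer images are rescaled into the 8-bit range before
    # RGB conversion, since PIL clips out-of-range values on convert().
    if pil_image.mode == "I":
        data = np.array(pil_image)
        data = data * (2**8 / data.max())
        pil_image = Image.fromarray(data.astype(np.int32))
    rgb = pil_image.convert('RGB')
    buf = BytesIO()
    rgb.save(buf, format='JPEG', quality=quality, optimize=True)
    buf.seek(0)
    return rgb.size, buf

# A synthetic 8x8 image with 16-bit-range values, stored as mode "I".
img = Image.fromarray(np.arange(0, 65536, 1024, dtype=np.int32).reshape(8, 8))
(w, h), buf = compress_image(img, 80)
assert (w, h) == (8, 8)
assert buf.getvalue().startswith(b'\xff\xd8')  # JPEG magic bytes
```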
class ZipChunkWriter(IChunkWriter):
def save_as_chunk(self, images, chunk_path):
with zipfile.ZipFile(chunk_path, 'x') as zip_chunk:
for idx, (image, image_name) in enumerate(images):
arcname = '{:06d}{}'.format(idx, os.path.splitext(image_name)[1])
if isinstance(image, BytesIO):
zip_chunk.writestr(arcname, image.getvalue())
else:
zip_chunk.write(filename=image, arcname=arcname)
# Return an empty list because ZipChunkWriter writes files as is
# and does not decode them to get image sizes.
return []
class ZipCompressedChunkWriter(IChunkWriter):
def save_as_chunk(self, images, chunk_path):
image_sizes = []
with zipfile.ZipFile(chunk_path, 'x') as zip_chunk:
for idx, (image, _) in enumerate(images):
w, h, image_buf = self._compress_image(image, self._image_quality)
image_sizes.append((w, h))
arcname = '{:06d}.jpeg'.format(idx)
zip_chunk.writestr(arcname, image_buf.getvalue())
return image_sizes
class Mpeg4ChunkWriter(IChunkWriter):
def __init__(self, _):
super().__init__(17)
self._output_fps = 25
@staticmethod
def _create_av_container(path, w, h, rate, pix_format, options):
container = av.open(path, 'w')
video_stream = container.add_stream('libx264', rate=rate)
video_stream.pix_fmt = pix_format
video_stream.width = w
video_stream.height = h
video_stream.options = options
return container, video_stream
def save_as_chunk(self, images, chunk_path):
if not images:
raise Exception('no images to save')
input_w = images[0][0].width
input_h = images[0][0].height
pix_format = images[0][0].format.name
output_container, output_v_stream = self._create_av_container(
path=chunk_path,
w=input_w,
h=input_h,
rate=self._output_fps,
pix_format=pix_format,
options={
"crf": str(self._image_quality),
"preset": "ultrafast",
},
)
def save_image(self, k, dest_path):
shutil.copyfile(self[k], dest_path)
self._encode_images(images, output_container, output_v_stream)
output_container.close()
return [(input_w, input_h)]
@staticmethod
def _encode_images(images, container, stream):
for frame, _ in images:
# let libav set the correct pts and time_base
frame.pts = None
frame.time_base = None
for packet in stream.encode(frame):
container.mux(packet)
# Flush streams
for packet in stream.encode():
container.mux(packet)
class Mpeg4CompressedChunkWriter(Mpeg4ChunkWriter):
def __init__(self, quality):
# translate the inverted range [1:100] to [0:51]
self._image_quality = round(51 * (100 - quality) / 99)
self._output_fps = 25
def save_as_chunk(self, images, chunk_path):
if not images:
raise Exception('no images to save')
input_w = images[0][0].width
input_h = images[0][0].height
downscale_factor = 1
while input_h / downscale_factor >= 1080:
downscale_factor *= 2
output_h = input_h // downscale_factor
output_w = input_w // downscale_factor
# width and height must be divisible by 2
if output_h % 2:
output_h += 1
if output_w % 2:
output_w += 1
output_container, output_v_stream = self._create_av_container(
path=chunk_path,
w=output_w,
h=output_h,
rate=self._output_fps,
pix_format='yuv420p',
options={
'profile': 'baseline',
'coder': '0',
'crf': str(self._image_quality),
'wpredp': '0',
'flags': '-loop'
},
)
self._encode_images(images, output_container, output_v_stream)
output_container.close()
return [(input_w, input_h)]
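The CRF translation in `Mpeg4CompressedChunkWriter.__init__` and the downscale loop in its `save_as_chunk` are pure arithmetic, restated here as standalone helpers for a quick sanity check:

```python
def quality_to_crf(quality):
    # Map the UI quality range [1:100] (higher is better) onto
    # libx264's CRF range [0:51] (lower is better).
    return round(51 * (100 - quality) / 99)

def output_size(input_w, input_h):
    # Halve the resolution until the height drops below 1080, then
    # round each dimension up to the nearest even number (required
    # by the yuv420p pixel format).
    factor = 1
    while input_h / factor >= 1080:
        factor *= 2
    w, h = input_w // factor, input_h // factor
    return w + w % 2, h + h % 2

assert quality_to_crf(100) == 0   # best quality -> lossless-leaning CRF
assert quality_to_crf(1) == 51    # worst quality -> maximum CRF
assert output_size(3840, 2160) == (960, 540)   # 4K is halved twice
assert output_size(1920, 1080) == (960, 540)   # 1080p is halved once
```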
def _is_archive(path):
mime = mimetypes.guess_type(path)
mime_type = mime[0]
encoding = mime[1]
supportedArchives = ['application/zip', 'application/x-rar-compressed',
supportedArchives = ['application/x-rar-compressed',
'application/x-tar', 'application/x-7z-compressed', 'application/x-cpio',
'gzip', 'bzip2']
return mime_type in supportedArchives or encoding in supportedArchives
@ -236,6 +428,13 @@ def _is_pdf(path):
mime = mimetypes.guess_type(path)
return mime[0] == 'application/pdf'
def _is_zip(path):
mime = mimetypes.guess_type(path)
mime_type = mime[0]
encoding = mime[1]
supportedArchives = ['application/zip']
return mime_type in supportedArchives or encoding in supportedArchives
# 'has_mime_type': function receives 1 argument - path to file.
# Should return True if file has specified media type.
# 'extractor': class that extracts images from specified media.
@ -247,32 +446,38 @@ def _is_pdf(path):
MEDIA_TYPES = {
'image': {
'has_mime_type': _is_image,
'extractor': ImageListExtractor,
'extractor': ImageListReader,
'mode': 'annotation',
'unique': False,
},
'video': {
'has_mime_type': _is_video,
'extractor': VideoExtractor,
'extractor': VideoReader,
'mode': 'interpolation',
'unique': True,
},
'archive': {
'has_mime_type': _is_archive,
'extractor': ArchiveExtractor,
'extractor': ArchiveReader,
'mode': 'annotation',
'unique': True,
},
'directory': {
'has_mime_type': _is_dir,
'extractor': DirectoryExtractor,
'extractor': DirectoryReader,
'mode': 'annotation',
'unique': False,
},
'pdf': {
'has_mime_type': _is_pdf,
'extractor': PDFExtractor,
'extractor': PdfReader,
'mode': 'annotation',
'unique': True,
},
'zip': {
'has_mime_type': _is_zip,
'extractor': ZipReader,
'mode': 'annotation',
'unique': True,
}
}
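`MEDIA_TYPES` drives extractor selection: the first entry whose `has_mime_type` probe accepts the uploaded path wins. A minimal illustration of the same dispatch pattern, with simplified `mimetypes`-based probes standing in for the real `_is_*` helpers:

```python
import mimetypes

MEDIA_PROBES = {
    'image': lambda path: (mimetypes.guess_type(path)[0] or '').startswith('image/'),
    'video': lambda path: (mimetypes.guess_type(path)[0] or '').startswith('video/'),
    'zip':   lambda path: mimetypes.guess_type(path)[0] == 'application/zip',
}

def detect_media_type(path):
    # Return the first media type whose probe accepts the path.
    for media_type, probe in MEDIA_PROBES.items():
        if probe(path):
            return media_type
    return None

assert detect_media_type('frame.png') == 'image'
assert detect_media_type('clip.mp4') == 'video'
assert detect_media_type('chunk.zip') == 'zip'
```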

@ -3,13 +3,54 @@
from django.db import migrations
from django.conf import settings
from cvat.apps.engine.task import get_image_meta_cache
from cvat.apps.engine.models import Job, ShapeType
from cvat.apps.engine.media_extractors import get_mime
from PIL import Image
from ast import literal_eval
import os
def make_image_meta_cache(db_task):
with open(db_task.get_image_meta_cache_path(), 'w') as meta_file:
cache = {
'original_size': []
}
if db_task.mode == 'interpolation':
image = Image.open(db_task.get_frame_path(0))
cache['original_size'].append({
'width': image.size[0],
'height': image.size[1]
})
image.close()
else:
filenames = []
for root, _, files in os.walk(db_task.get_upload_dirname()):
fullnames = map(lambda f: os.path.join(root, f), files)
images = filter(lambda x: get_mime(x) == 'image', fullnames)
filenames.extend(images)
filenames.sort()
for image_path in filenames:
image = Image.open(image_path)
cache['original_size'].append({
'width': image.size[0],
'height': image.size[1]
})
image.close()
meta_file.write(str(cache))
def get_image_meta_cache(db_task):
try:
with open(db_task.get_image_meta_cache_path()) as meta_cache_file:
return literal_eval(meta_cache_file.read())
except Exception:
make_image_meta_cache(db_task)
with open(db_task.get_image_meta_cache_path()) as meta_cache_file:
return literal_eval(meta_cache_file.read())
def _flip_shape(shape, size):
if shape.type == ShapeType.RECTANGLE:

@ -0,0 +1,461 @@
# Generated by Django 2.2.4 on 2019-10-23 10:25
import os
import re
import shutil
import glob
import logging
import sys
import traceback
import itertools
import multiprocessing
import time
from django.db import migrations, models
import django.db.models.deletion
from django.conf import settings
from cvat.apps.engine.media_extractors import (VideoReader, ArchiveReader, ZipReader,
PdfReader, ImageListReader, Mpeg4ChunkWriter,
ZipChunkWriter, ZipCompressedChunkWriter, get_mime)
from cvat.apps.engine.models import DataChoice
MIGRATION_THREAD_COUNT = 2
def fix_path(path):
ind = path.find('.upload')
if ind != -1:
path = path[ind + len('.upload') + 1:]
return path
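`fix_path` strips the legacy `.upload` prefix so old task file paths become relative to the new `raw` directory. Restated with a couple of checks:

```python
def fix_path(path):
    # Old task layouts stored uploads under a ".upload" directory;
    # drop everything up to and including that component.
    ind = path.find('.upload')
    if ind != -1:
        path = path[ind + len('.upload') + 1:]
    return path

assert fix_path('/old/tasks/5/.upload/images/0.png') == 'images/0.png'
assert fix_path('images/0.png') == 'images/0.png'  # already relative: unchanged
```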
def get_frame_step(frame_filter):
match = re.search(r"step\s*=\s*([1-9]\d*)", frame_filter)
return int(match.group(1)) if match else 1
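`get_frame_step` parses the optional `step=N` clause out of a task's frame filter, defaulting to 1. The same logic with a raw-string pattern and a few checks:

```python
import re

def get_frame_step(frame_filter):
    # Extract N from a filter string like "step=5"; default to 1.
    match = re.search(r"step\s*=\s*([1-9]\d*)", frame_filter)
    return int(match.group(1)) if match else 1

assert get_frame_step("step=5") == 5
assert get_frame_step("step = 10") == 10  # whitespace around '=' is allowed
assert get_frame_step("") == 1            # no filter -> every frame
```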
def get_task_on_disk():
folders = [os.path.relpath(f, settings.DATA_ROOT)
for f in glob.glob(os.path.join(settings.DATA_ROOT, '*'), recursive=False)]
return set(int(f) for f in folders if f.isdigit())
def get_frame_path(task_data_dir, frame):
d1 = str(int(frame) // 10000)
d2 = str(int(frame) // 100)
path = os.path.join(task_data_dir, d1, d2,
str(frame) + '.jpg')
return path
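`get_frame_path` reproduces the old on-disk layout, which shards frames across two directory levels so no single directory grows too large:

```python
import os

def get_frame_path(task_data_dir, frame):
    # Frame 12345 lands in <dir>/1/123/12345.jpg: the first level is
    # frame // 10000, the second is frame // 100.
    d1 = str(int(frame) // 10000)
    d2 = str(int(frame) // 100)
    return os.path.join(task_data_dir, d1, d2, str(frame) + '.jpg')

assert get_frame_path('data', 12345) == os.path.join('data', '1', '123', '12345.jpg')
assert get_frame_path('data', 7) == os.path.join('data', '0', '0', '7.jpg')
```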
def slice_by_size(frames, size):
it = itertools.islice(frames, 0, None)
frames = list(itertools.islice(it, 0, size, 1))
while frames:
yield frames
frames = list(itertools.islice(it, 0, size, 1))
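The `slice_by_size` generator above batches an iterable into fixed-size chunks (the migration uses it to group frame ids into chunk-sized lists). A self-contained restatement with a quick check:

```python
import itertools

def slice_by_size(frames, size):
    # Yield successive lists of at most `size` items from `frames`.
    it = iter(frames)
    batch = list(itertools.islice(it, size))
    while batch:
        yield batch
        batch = list(itertools.islice(it, size))

# 10 frames split into chunks of 4: two full chunks and one remainder.
assert list(slice_by_size(range(10), 4)) == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```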
def migrate_task_data(db_task_id, db_data_id, original_video, original_images, size, start_frame,
stop_frame, frame_filter, image_quality, chunk_size, return_dict):
try:
db_data_dir = os.path.join(settings.MEDIA_DATA_ROOT, str(db_data_id))
compressed_cache_dir = os.path.join(db_data_dir, 'compressed')
original_cache_dir = os.path.join(db_data_dir, 'original')
old_db_task_dir = os.path.join(settings.DATA_ROOT, str(db_task_id))
old_task_data_dir = os.path.join(old_db_task_dir, 'data')
if os.path.exists(old_task_data_dir) and size != 0:
if original_video:
if os.path.exists(original_video):
reader = VideoReader([original_video], get_frame_step(frame_filter), start_frame, stop_frame)
original_chunk_writer = Mpeg4ChunkWriter(100)
compressed_chunk_writer = ZipCompressedChunkWriter(image_quality)
for chunk_idx, chunk_images in enumerate(reader.slice_by_size(chunk_size)):
original_chunk_path = os.path.join(original_cache_dir, '{}.mp4'.format(chunk_idx))
original_chunk_writer.save_as_chunk(chunk_images, original_chunk_path)
compressed_chunk_path = os.path.join(compressed_cache_dir, '{}.zip'.format(chunk_idx))
compressed_chunk_writer.save_as_chunk(chunk_images, compressed_chunk_path)
reader.save_preview(os.path.join(db_data_dir, 'preview.jpeg'))
else:
original_chunk_writer = ZipChunkWriter(100)
for chunk_idx, chunk_image_ids in enumerate(slice_by_size(range(size), chunk_size)):
chunk_images = []
for image_id in chunk_image_ids:
image_path = get_frame_path(old_task_data_dir, image_id)
chunk_images.append((image_path, image_path))
original_chunk_path = os.path.join(original_cache_dir, '{}.zip'.format(chunk_idx))
original_chunk_writer.save_as_chunk(chunk_images, original_chunk_path)
compressed_chunk_path = os.path.join(compressed_cache_dir, '{}.zip'.format(chunk_idx))
os.symlink(original_chunk_path, compressed_chunk_path)
shutil.copyfile(get_frame_path(old_task_data_dir, image_id), os.path.join(db_data_dir, 'preview.jpeg'))
else:
reader = None
if os.path.exists(original_images[0]): # task created from images
reader = ImageListReader(original_images)
else: # task created from archive or pdf
archives = []
pdfs = []
zips = []
for p in glob.iglob(os.path.join(db_data_dir, 'raw', '**', '*'), recursive=True):
mime_type = get_mime(p)
if mime_type == 'archive':
archives.append(p)
elif mime_type == 'pdf':
pdfs.append(p)
elif mime_type == 'zip':
zips.append(p)
if archives:
reader = ArchiveReader(archives, get_frame_step(frame_filter), start_frame, stop_frame)
elif zips:
reader = ZipReader(zips, get_frame_step(frame_filter), start_frame, stop_frame)
elif pdfs:
reader = PdfReader(pdfs, get_frame_step(frame_filter), start_frame, stop_frame)
if not reader:
original_chunk_writer = ZipChunkWriter(100)
for chunk_idx, chunk_image_ids in enumerate(slice_by_size(range(size), chunk_size)):
chunk_images = []
for image_id in chunk_image_ids:
image_path = get_frame_path(old_task_data_dir, image_id)
chunk_images.append((image_path, image_path))
original_chunk_path = os.path.join(original_cache_dir, '{}.zip'.format(chunk_idx))
original_chunk_writer.save_as_chunk(chunk_images, original_chunk_path)
compressed_chunk_path = os.path.join(compressed_cache_dir, '{}.zip'.format(chunk_idx))
os.symlink(original_chunk_path, compressed_chunk_path)
shutil.copyfile(get_frame_path(old_task_data_dir, image_id), os.path.join(db_data_dir, 'preview.jpeg'))
else:
original_chunk_writer = ZipChunkWriter(100)
compressed_chunk_writer = ZipCompressedChunkWriter(image_quality)
for chunk_idx, chunk_images in enumerate(reader.slice_by_size(chunk_size)):
compressed_chunk_path = os.path.join(compressed_cache_dir, '{}.zip'.format(chunk_idx))
compressed_chunk_writer.save_as_chunk(chunk_images, compressed_chunk_path)
original_chunk_path = os.path.join(original_cache_dir, '{}.zip'.format(chunk_idx))
original_chunk_writer.save_as_chunk(chunk_images, original_chunk_path)
reader.save_preview(os.path.join(db_data_dir, 'preview.jpeg'))
shutil.rmtree(old_db_task_dir)
return_dict[db_task_id] = (True, '')
except Exception as e:
traceback.print_exc(file=sys.stderr)
return_dict[db_task_id] = (False, str(e))
return 0
def migrate_task_schema(db_task, Data, log):
log.info('Start schema migration of task ID {}.'.format(db_task.id))
try:
# create folders
new_task_dir = os.path.join(settings.TASKS_ROOT, str(db_task.id))
os.makedirs(new_task_dir, exist_ok=True)
os.makedirs(os.path.join(new_task_dir, 'artifacts'), exist_ok=True)
new_task_logs_dir = os.path.join(new_task_dir, 'logs')
os.makedirs(new_task_logs_dir, exist_ok=True)
# create Data object
db_data = Data.objects.create(
size=db_task.size,
image_quality=db_task.image_quality,
start_frame=db_task.start_frame,
stop_frame=db_task.stop_frame,
frame_filter=db_task.frame_filter,
compressed_chunk_type = DataChoice.IMAGESET,
original_chunk_type = DataChoice.VIDEO if db_task.mode == 'interpolation' else DataChoice.IMAGESET,
)
db_data.save()
db_task.data = db_data
db_data_dir = os.path.join(settings.MEDIA_DATA_ROOT, str(db_data.id))
os.makedirs(db_data_dir, exist_ok=True)
compressed_cache_dir = os.path.join(db_data_dir, 'compressed')
os.makedirs(compressed_cache_dir, exist_ok=True)
original_cache_dir = os.path.join(db_data_dir, 'original')
os.makedirs(original_cache_dir, exist_ok=True)
old_db_task_dir = os.path.join(settings.DATA_ROOT, str(db_task.id))
# move logs
for log_file in ('task.log', 'client.log'):
task_log_file = os.path.join(old_db_task_dir, log_file)
if os.path.isfile(task_log_file):
shutil.move(task_log_file, new_task_logs_dir)
if hasattr(db_task, 'video'):
db_task.video.data = db_data
db_task.video.path = fix_path(db_task.video.path)
db_task.video.save()
for db_image in db_task.image_set.all():
db_image.data = db_data
db_image.path = fix_path(db_image.path)
db_image.save()
old_raw_dir = os.path.join(old_db_task_dir, '.upload')
new_raw_dir = os.path.join(db_data_dir, 'raw')
for client_file in db_task.clientfile_set.all():
client_file.file = client_file.file.path.replace(old_raw_dir, new_raw_dir)
client_file.save()
for server_file in db_task.serverfile_set.all():
server_file.file = server_file.file.replace(old_raw_dir, new_raw_dir)
server_file.save()
for remote_file in db_task.remotefile_set.all():
remote_file.file = remote_file.file.replace(old_raw_dir, new_raw_dir)
remote_file.save()
db_task.save()
# move old raw data
if os.path.exists(old_raw_dir):
shutil.move(old_raw_dir, new_raw_dir)
return (db_task.id, db_data.id)
except Exception as e:
log.error('Cannot migrate schema for the task: {}'.format(db_task.id))
log.error(str(e))
traceback.print_exc(file=sys.stderr)
def create_data_objects(apps, schema_editor):
migration_name = os.path.splitext(os.path.basename(__file__))[0]
migration_log_file = '{}.log'.format(migration_name)
stdout = sys.stdout
stderr = sys.stderr
# redirect all stdout to the file
log_file_object = open(os.path.join(settings.MIGRATIONS_LOGS_ROOT, migration_log_file), 'w')
sys.stdout = log_file_object
sys.stderr = log_file_object
log = logging.getLogger(migration_name)
log.addHandler(logging.StreamHandler(stdout))
log.addHandler(logging.StreamHandler(log_file_object))
log.setLevel(logging.INFO)
disk_tasks = get_task_on_disk()
Task = apps.get_model('engine', 'Task')
Data = apps.get_model('engine', 'Data')
db_tasks = Task.objects
task_count = db_tasks.count()
log.info('\nStart schema migration...')
migrated_db_tasks = []
for counter, db_task in enumerate(db_tasks.all().iterator()):
res = migrate_task_schema(db_task, Data, log)
log.info('Schema migration for the task {} completed. Progress {}/{}'.format(db_task.id, counter+1, task_count))
if res:
migrated_db_tasks.append(res)
log.info('\nSchema migration is finished...')
log.info('\nStart data migration...')
manager = multiprocessing.Manager()
return_dict = manager.dict()
def create_process(db_task_id, db_data_id):
db_data = Data.objects.get(pk=db_data_id)
db_data_dir = os.path.join(settings.MEDIA_DATA_ROOT, str(db_data_id))
new_raw_dir = os.path.join(db_data_dir, 'raw')
original_video = None
original_images = None
if hasattr(db_data, 'video'):
original_video = os.path.join(new_raw_dir, db_data.video.path)
else:
original_images = [os.path.realpath(os.path.join(new_raw_dir, db_image.path)) for db_image in db_data.images.all()]
args = (db_task_id, db_data_id, original_video, original_images, db_data.size,
db_data.start_frame, db_data.stop_frame, db_data.frame_filter, db_data.image_quality, db_data.chunk_size, return_dict)
return multiprocessing.Process(target=migrate_task_data, args=args)
results = {}
task_idx = 0
while True:
for res_idx in list(results.keys()):
res = results[res_idx]
if not res.is_alive():
del results[res_idx]
if res.exitcode == 0:
ret_code, message = return_dict[res_idx]
if ret_code:
counter = (task_idx - len(results))
progress = (100 * counter) / task_count
log.info('Data migration for the task {} completed. Progress: {:.02f}% | {}/{}.'.format(res_idx, progress, counter, task_count))
else:
log.error('Cannot migrate data for the task: {}'.format(res_idx))
log.error(str(message))
if res_idx in disk_tasks:
disk_tasks.remove(res_idx)
else:
log.error('Cannot migrate data for the task {}: migration process exited abnormally.'.format(res_idx))
while task_idx < len(migrated_db_tasks) and len(results) < MIGRATION_THREAD_COUNT:
log.info('Start data migration for the task {}, data ID {}'.format(migrated_db_tasks[task_idx][0], migrated_db_tasks[task_idx][1]))
results[migrated_db_tasks[task_idx][0]] = create_process(*migrated_db_tasks[task_idx])
results[migrated_db_tasks[task_idx][0]].start()
task_idx += 1
if len(results) == 0:
break
time.sleep(5)
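The polling loop above keeps at most `MIGRATION_THREAD_COUNT` migration processes alive at a time, reaping finished ones on each pass before starting new ones. A stripped-down sketch of the same pattern (the `worker` here is a stand-in, not the real `migrate_task_data`):

```python
import multiprocessing
import time

def worker(task_id, return_dict):
    # Stand-in for migrate_task_data: record a result keyed by task id.
    return_dict[task_id] = task_id * task_id

def run_bounded(task_ids, max_workers=2):
    # Keep at most `max_workers` processes alive; reap finished ones
    # on every pass, like the migration's polling loop.
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    running = {}
    idx = 0
    while True:
        for tid in list(running):
            if not running[tid].is_alive():
                del running[tid]
        while idx < len(task_ids) and len(running) < max_workers:
            tid = task_ids[idx]
            p = multiprocessing.Process(target=worker, args=(tid, return_dict))
            p.start()
            running[tid] = p
            idx += 1
        if not running:
            break
        time.sleep(0.05)
    return dict(return_dict)

if __name__ == '__main__':
    assert run_bounded([1, 2, 3, 4]) == {1: 1, 2: 4, 3: 9, 4: 16}
```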
if disk_tasks:
suspicious_tasks_dir = os.path.join(settings.DATA_ROOT, 'suspicious_tasks')
os.makedirs(suspicious_tasks_dir, exist_ok=True)
for tid in disk_tasks:
suspicious_task_path = os.path.join(settings.DATA_ROOT, str(tid))
try:
shutil.move(suspicious_task_path, suspicious_tasks_dir)
except Exception as e:
log.error('Cannot move data for the suspicious task {}, '
'which is not represented in the database.'.format(suspicious_task_path))
log.error(str(e))
# DL models migration
if apps.is_installed('auto_annotation'):
DLModel = apps.get_model('auto_annotation', 'AnnotationModel')
for db_model in DLModel.objects.all():
try:
old_location = os.path.join(settings.BASE_DIR, 'models', str(db_model.id))
new_location = os.path.join(settings.BASE_DIR, 'data', 'models', str(db_model.id))
if os.path.isdir(old_location):
shutil.move(old_location, new_location)
db_model.model_file.name = db_model.model_file.name.replace(old_location, new_location)
db_model.weights_file.name = db_model.weights_file.name.replace(old_location, new_location)
db_model.labelmap_file.name = db_model.labelmap_file.name.replace(old_location, new_location)
db_model.interpretation_file.name = db_model.interpretation_file.name.replace(old_location, new_location)
db_model.save()
except Exception as e:
log.error('Cannot migrate data for the DL model: {}'.format(db_model.id))
log.error(str(e))
log_file_object.close()
sys.stdout = stdout
sys.stderr = stderr
class Migration(migrations.Migration):
dependencies = [
('engine', '0023_auto_20200113_1323'),
]
operations = [
migrations.CreateModel(
name='Data',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('chunk_size', models.PositiveIntegerField(default=36)),
('size', models.PositiveIntegerField(default=0)),
('image_quality', models.PositiveSmallIntegerField(default=50)),
('start_frame', models.PositiveIntegerField(default=0)),
('stop_frame', models.PositiveIntegerField(default=0)),
('frame_filter', models.CharField(blank=True, default='', max_length=256)),
('compressed_chunk_type', models.CharField(choices=[('video', 'VIDEO'), ('imageset', 'IMAGESET'), ('list', 'LIST')], default=DataChoice('imageset'), max_length=32)),
('original_chunk_type', models.CharField(choices=[('video', 'VIDEO'), ('imageset', 'IMAGESET'), ('list', 'LIST')], default=DataChoice('imageset'), max_length=32)),
],
options={
'default_permissions': (),
},
),
migrations.AddField(
model_name='task',
name='data',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='tasks', to='engine.Data'),
),
migrations.AddField(
model_name='image',
name='data',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='images', to='engine.Data'),
),
migrations.AddField(
model_name='video',
name='data',
field=models.OneToOneField(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='video', to='engine.Data'),
),
migrations.AddField(
model_name='clientfile',
name='data',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='client_files', to='engine.Data'),
),
migrations.AddField(
model_name='remotefile',
name='data',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='remote_files', to='engine.Data'),
),
migrations.AddField(
model_name='serverfile',
name='data',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='server_files', to='engine.Data'),
),
migrations.RunPython(
code=create_data_objects
),
migrations.RemoveField(
model_name='image',
name='task',
),
migrations.RemoveField(
model_name='remotefile',
name='task',
),
migrations.RemoveField(
model_name='serverfile',
name='task',
),
migrations.RemoveField(
model_name='task',
name='frame_filter',
),
migrations.RemoveField(
model_name='task',
name='image_quality',
),
migrations.RemoveField(
model_name='task',
name='size',
),
migrations.RemoveField(
model_name='task',
name='start_frame',
),
migrations.RemoveField(
model_name='task',
name='stop_frame',
),
migrations.RemoveField(
model_name='video',
name='task',
),
migrations.AlterField(
model_name='image',
name='path',
field=models.CharField(default='', max_length=1024),
),
migrations.AlterField(
model_name='video',
name='path',
field=models.CharField(default='', max_length=1024),
),
migrations.AlterUniqueTogether(
name='clientfile',
unique_together={('data', 'file')},
),
migrations.RemoveField(
model_name='clientfile',
name='task',
),
]

@ -0,0 +1,18 @@
# Generated by Django 2.2.10 on 2020-03-24 12:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('engine', '0024_auto_20191023_1025'),
]
operations = [
migrations.AlterField(
model_name='data',
name='chunk_size',
field=models.PositiveIntegerField(null=True),
),
]

@ -0,0 +1,13 @@
# Copyright (C) 2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
import mimetypes
_SCRIPT_DIR = os.path.realpath(os.path.dirname(__file__))
MEDIA_MIMETYPES_FILES = [
os.path.join(_SCRIPT_DIR, "media.mimetypes"),
]
mimetypes.init(files=MEDIA_MIMETYPES_FILES)

@ -1,11 +1,9 @@
# Copyright (C) 2018 Intel Corporation
# Copyright (C) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
from enum import Enum
import re
import shlex
import os
from django.db import models
@ -27,12 +25,102 @@ class StatusChoice(str, Enum):
COMPLETED = 'completed'
@classmethod
def choices(self):
return tuple((x.value, x.name) for x in self)
def choices(cls):
return tuple((x.value, x.name) for x in cls)
def __str__(self):
return self.value
class DataChoice(str, Enum):
VIDEO = 'video'
IMAGESET = 'imageset'
LIST = 'list'
@classmethod
def choices(cls):
return tuple((x.value, x.name) for x in cls)
def __str__(self):
return self.value
class Data(models.Model):
chunk_size = models.PositiveIntegerField(null=True)
size = models.PositiveIntegerField(default=0)
image_quality = models.PositiveSmallIntegerField(default=50)
start_frame = models.PositiveIntegerField(default=0)
stop_frame = models.PositiveIntegerField(default=0)
frame_filter = models.CharField(max_length=256, default="", blank=True)
compressed_chunk_type = models.CharField(max_length=32, choices=DataChoice.choices(),
default=DataChoice.IMAGESET)
original_chunk_type = models.CharField(max_length=32, choices=DataChoice.choices(),
default=DataChoice.IMAGESET)
class Meta:
default_permissions = ()
def get_frame_step(self):
match = re.search(r"step\s*=\s*([1-9]\d*)", self.frame_filter)
return int(match.group(1)) if match else 1
def get_data_dirname(self):
return os.path.join(settings.MEDIA_DATA_ROOT, str(self.id))
def get_upload_dirname(self):
return os.path.join(self.get_data_dirname(), "raw")
def get_compressed_cache_dirname(self):
return os.path.join(self.get_data_dirname(), "compressed")
def get_original_cache_dirname(self):
return os.path.join(self.get_data_dirname(), "original")
@staticmethod
def _get_chunk_name(chunk_number, chunk_type):
if chunk_type == DataChoice.VIDEO:
ext = 'mp4'
elif chunk_type == DataChoice.IMAGESET:
ext = 'zip'
else:
ext = 'list'
return '{}.{}'.format(chunk_number, ext)
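`_get_chunk_name` maps a chunk storage type to its on-disk extension. The same logic as a table lookup, checked against the `DataChoice` values (`'video'`, `'imageset'`, `'list'`) defined earlier in this file:

```python
def get_chunk_name(chunk_number, chunk_type):
    # Chunk files are named <number>.<ext>; the extension encodes how
    # the chunk is stored: 'video' -> mp4, 'imageset' -> zip, else list.
    ext = {'video': 'mp4', 'imageset': 'zip'}.get(chunk_type, 'list')
    return '{}.{}'.format(chunk_number, ext)

assert get_chunk_name(0, 'video') == '0.mp4'
assert get_chunk_name(3, 'imageset') == '3.zip'
assert get_chunk_name(7, 'list') == '7.list'
```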
def _get_compressed_chunk_name(self, chunk_number):
return self._get_chunk_name(chunk_number, self.compressed_chunk_type)
def _get_original_chunk_name(self, chunk_number):
return self._get_chunk_name(chunk_number, self.original_chunk_type)
def get_original_chunk_path(self, chunk_number):
return os.path.join(self.get_original_cache_dirname(),
self._get_original_chunk_name(chunk_number))
def get_compressed_chunk_path(self, chunk_number):
return os.path.join(self.get_compressed_cache_dirname(),
self._get_compressed_chunk_name(chunk_number))
def get_preview_path(self):
return os.path.join(self.get_data_dirname(), 'preview.jpeg')
class Video(models.Model):
data = models.OneToOneField(Data, on_delete=models.CASCADE, related_name="video", null=True)
path = models.CharField(max_length=1024, default='')
width = models.PositiveIntegerField()
height = models.PositiveIntegerField()
class Meta:
default_permissions = ()
class Image(models.Model):
data = models.ForeignKey(Data, on_delete=models.CASCADE, related_name="images", null=True)
path = models.CharField(max_length=1024, default='')
frame = models.PositiveIntegerField()
width = models.PositiveIntegerField()
height = models.PositiveIntegerField()
class Meta:
default_permissions = ()
class Project(models.Model):
name = SafeCharField(max_length=256)
owner = models.ForeignKey(User, null=True, blank=True,
@ -54,7 +142,6 @@ class Task(models.Model):
null=True, blank=True, related_name="tasks",
related_query_name="task")
name = SafeCharField(max_length=256)
size = models.PositiveIntegerField()
mode = models.CharField(max_length=32)
owner = models.ForeignKey(User, null=True, blank=True,
on_delete=models.SET_NULL, related_name="owners")
@ -67,52 +154,28 @@ class Task(models.Model):
# Zero means that there are no limits (default)
segment_size = models.PositiveIntegerField(default=0)
z_order = models.BooleanField(default=False)
image_quality = models.PositiveSmallIntegerField(default=50)
start_frame = models.PositiveIntegerField(default=0)
stop_frame = models.PositiveIntegerField(default=0)
frame_filter = models.CharField(max_length=256, default="", blank=True)
status = models.CharField(max_length=32, choices=StatusChoice.choices(),
default=StatusChoice.ANNOTATION)
data = models.ForeignKey(Data, on_delete=models.CASCADE, null=True, related_name="tasks")
# Extend default permission model
class Meta:
default_permissions = ()
def get_frame_path(self, frame):
d1 = str(int(frame) // 10000)
d2 = str(int(frame) // 100)
path = os.path.join(self.get_data_dirname(), d1, d2,
str(frame) + '.jpg')
return path
@staticmethod
def get_image_frame(image_path):
assert image_path.endswith('.jpg')
index = os.path.splitext(os.path.basename(image_path))[0]
return int(index)
def get_frame_step(self):
match = re.search("step\s*=\s*([1-9]\d*)", self.frame_filter)
return int(match.group(1)) if match else 1
def get_upload_dirname(self):
return os.path.join(self.get_task_dirname(), ".upload")
def get_data_dirname(self):
return os.path.join(self.get_task_dirname(), "data")
def get_task_dirname(self):
return os.path.join(settings.TASKS_ROOT, str(self.id))
def get_log_path(self):
return os.path.join(self.get_task_dirname(), "task.log")
def get_task_logs_dirname(self):
return os.path.join(self.get_task_dirname(), 'logs')
def get_client_log_path(self):
return os.path.join(self.get_task_dirname(), "client.log")
return os.path.join(self.get_task_logs_dirname(), "client.log")
def get_image_meta_cache_path(self):
return os.path.join(self.get_task_dirname(), "image_meta.cache")
def get_log_path(self):
return os.path.join(self.get_task_logs_dirname(), "task.log")
def get_task_dirname(self):
return os.path.join(settings.DATA_ROOT, str(self.id))
def get_task_artifacts_dirname(self):
return os.path.join(self.get_task_dirname(), 'artifacts')
def __str__(self):
return self.name
@ -129,21 +192,21 @@ class MyFileSystemStorage(FileSystemStorage):
return name
def upload_path_handler(instance, filename):
return os.path.join(instance.task.get_upload_dirname(), filename)
return os.path.join(instance.data.get_upload_dirname(), filename)
# For client files which the user has uploaded
class ClientFile(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE)
data = models.ForeignKey(Data, on_delete=models.CASCADE, null=True, related_name='client_files')
file = models.FileField(upload_to=upload_path_handler,
max_length=1024, storage=MyFileSystemStorage())
class Meta:
default_permissions = ()
unique_together = ("task", "file")
unique_together = ("data", "file")
# For server files on the mounted share
class ServerFile(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE)
data = models.ForeignKey(Data, on_delete=models.CASCADE, null=True, related_name='server_files')
file = models.CharField(max_length=1024)
class Meta:
@@ -151,31 +214,12 @@ class ServerFile(models.Model):
# For URLs
class RemoteFile(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE)
data = models.ForeignKey(Data, on_delete=models.CASCADE, null=True, related_name='remote_files')
file = models.CharField(max_length=1024)
class Meta:
default_permissions = ()
class Video(models.Model):
task = models.OneToOneField(Task, on_delete=models.CASCADE)
path = models.CharField(max_length=1024)
width = models.PositiveIntegerField()
height = models.PositiveIntegerField()
class Meta:
default_permissions = ()
class Image(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE)
path = models.CharField(max_length=1024)
frame = models.PositiveIntegerField()
width = models.PositiveIntegerField()
height = models.PositiveIntegerField()
class Meta:
default_permissions = ()
class Segment(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE)
start_frame = models.IntegerField()
@@ -212,8 +256,8 @@ class AttributeType(str, Enum):
SELECT = 'select'
@classmethod
def choices(self):
return tuple((x.value, x.name) for x in self)
def choices(cls):
return tuple((x.value, x.name) for x in cls)
def __str__(self):
return self.value
@@ -252,8 +296,8 @@ class ShapeType(str, Enum):
CUBOID = 'cuboid'
@classmethod
def choices(self):
return tuple((x.value, x.name) for x in self)
def choices(cls):
return tuple((x.value, x.name) for x in cls)
def __str__(self):
return self.value

@@ -80,7 +80,7 @@ class ClientFileSerializer(serializers.ModelSerializer):
# pylint: disable=no-self-use
def to_representation(self, instance):
if instance:
upload_dir = instance.task.get_upload_dirname()
upload_dir = instance.data.get_upload_dirname()
return instance.file.path[len(upload_dir) + 1:]
else:
return instance
@@ -116,38 +116,6 @@ class RqStatusSerializer(serializers.Serializer):
"Queued", "Started", "Finished", "Failed"])
message = serializers.CharField(allow_blank=True, default="")
class TaskDataSerializer(serializers.ModelSerializer):
client_files = ClientFileSerializer(many=True, source='clientfile_set',
default=[])
server_files = ServerFileSerializer(many=True, source='serverfile_set',
default=[])
remote_files = RemoteFileSerializer(many=True, source='remotefile_set',
default=[])
class Meta:
model = models.Task
fields = ('client_files', 'server_files', 'remote_files')
# pylint: disable=no-self-use
def update(self, instance, validated_data):
client_files = validated_data.pop('clientfile_set')
server_files = validated_data.pop('serverfile_set')
remote_files = validated_data.pop('remotefile_set')
for file in client_files:
client_file = models.ClientFile(task=instance, **file)
client_file.save()
for file in server_files:
server_file = models.ServerFile(task=instance, **file)
server_file.save()
for file in remote_files:
remote_file = models.RemoteFile(task=instance, **file)
remote_file.save()
return instance
class WriteOnceMixin:
"""Adds support for write once fields to serializers.
@@ -193,36 +161,93 @@ class WriteOnceMixin:
return extra_kwargs
class TaskSerializer(WriteOnceMixin, serializers.ModelSerializer):
labels = LabelSerializer(many=True, source='label_set', partial=True)
segments = SegmentSerializer(many=True, source='segment_set', read_only=True)
class DataSerializer(serializers.ModelSerializer):
image_quality = serializers.IntegerField(min_value=0, max_value=100)
use_zip_chunks = serializers.BooleanField(default=False)
client_files = ClientFileSerializer(many=True, default=[])
server_files = ServerFileSerializer(many=True, default=[])
remote_files = RemoteFileSerializer(many=True, default=[])
class Meta:
model = models.Task
fields = ('url', 'id', 'name', 'size', 'mode', 'owner', 'assignee',
'bug_tracker', 'created_date', 'updated_date', 'overlap',
'segment_size', 'z_order', 'status', 'labels', 'segments',
'image_quality', 'start_frame', 'stop_frame', 'frame_filter',
'project')
read_only_fields = ('size', 'mode', 'created_date', 'updated_date',
'status')
write_once_fields = ('overlap', 'segment_size', 'image_quality')
ordering = ['-id']
model = models.Data
fields = ('chunk_size', 'size', 'image_quality', 'start_frame', 'stop_frame', 'frame_filter',
'compressed_chunk_type', 'original_chunk_type', 'client_files', 'server_files', 'remote_files', 'use_zip_chunks')
# pylint: disable=no-self-use
def validate_frame_filter(self, value):
match = re.search(r"step\s*=\s*([1-9]\d*)", value)
if not match:
raise serializers.ValidationError("Invalid frame filter expression")
return value
# pylint: disable=no-self-use
def validate_chunk_size(self, value):
if not value > 0:
raise serializers.ValidationError('Chunk size must be a positive integer')
return value
# pylint: disable=no-self-use
def validate(self, data):
if 'start_frame' in data and 'stop_frame' in data \
and data['start_frame'] > data['stop_frame']:
raise serializers.ValidationError('Stop frame must be greater than or equal to start frame')
return data
# pylint: disable=no-self-use
def create(self, validated_data):
client_files = validated_data.pop('client_files')
server_files = validated_data.pop('server_files')
remote_files = validated_data.pop('remote_files')
validated_data.pop('use_zip_chunks')
db_data = models.Data.objects.create(**validated_data)
data_path = db_data.get_data_dirname()
if os.path.isdir(data_path):
shutil.rmtree(data_path)
os.makedirs(db_data.get_compressed_cache_dirname())
os.makedirs(db_data.get_original_cache_dirname())
for f in client_files:
client_file = models.ClientFile(data=db_data, **f)
client_file.save()
for f in server_files:
server_file = models.ServerFile(data=db_data, **f)
server_file.save()
for f in remote_files:
remote_file = models.RemoteFile(data=db_data, **f)
remote_file.save()
db_data.save()
return db_data
class TaskSerializer(WriteOnceMixin, serializers.ModelSerializer):
labels = LabelSerializer(many=True, source='label_set', partial=True)
segments = SegmentSerializer(many=True, source='segment_set', read_only=True)
data_chunk_size = serializers.ReadOnlyField(source='data.chunk_size')
data_compressed_chunk_type = serializers.ReadOnlyField(source='data.compressed_chunk_type')
data_original_chunk_type = serializers.ReadOnlyField(source='data.original_chunk_type')
size = serializers.ReadOnlyField(source='data.size')
image_quality = serializers.ReadOnlyField(source='data.image_quality')
data = serializers.ReadOnlyField(source='data.id')
class Meta:
model = models.Task
fields = ('url', 'id', 'name', 'mode', 'owner', 'assignee',
'bug_tracker', 'created_date', 'updated_date', 'overlap',
'segment_size', 'z_order', 'status', 'labels', 'segments',
'project', 'data_chunk_size', 'data_compressed_chunk_type', 'data_original_chunk_type', 'size', 'image_quality', 'data')
read_only_fields = ('mode', 'created_date', 'updated_date', 'status', 'data_chunk_size',
'data_compressed_chunk_type', 'data_original_chunk_type', 'size', 'image_quality', 'data')
write_once_fields = ('overlap', 'segment_size')
ordering = ['-id']
# pylint: disable=no-self-use
def create(self, validated_data):
labels = validated_data.pop('label_set')
db_task = models.Task.objects.create(size=0, **validated_data)
db_task.start_frame = validated_data.get('start_frame', 0)
db_task.stop_frame = validated_data.get('stop_frame', 0)
db_task.frame_filter = validated_data.get('frame_filter', '')
db_task = models.Task.objects.create(**validated_data)
for label in labels:
attributes = label.pop('attributespec_set')
db_label = models.Label.objects.create(task=db_task, **label)
@@ -233,11 +258,10 @@ class TaskSerializer(WriteOnceMixin, serializers.ModelSerializer):
if os.path.isdir(task_path):
shutil.rmtree(task_path)
upload_dir = db_task.get_upload_dirname()
os.makedirs(upload_dir)
output_dir = db_task.get_data_dirname()
os.makedirs(output_dir)
os.makedirs(db_task.get_task_logs_dirname())
os.makedirs(db_task.get_task_artifacts_dirname())
db_task.save()
return db_task
# pylint: disable=no-self-use
@@ -248,11 +272,6 @@ class TaskSerializer(WriteOnceMixin, serializers.ModelSerializer):
instance.bug_tracker = validated_data.get('bug_tracker',
instance.bug_tracker)
instance.z_order = validated_data.get('z_order', instance.z_order)
instance.image_quality = validated_data.get('image_quality',
instance.image_quality)
instance.start_frame = validated_data.get('start_frame', instance.start_frame)
instance.stop_frame = validated_data.get('stop_frame', instance.stop_frame)
instance.frame_filter = validated_data.get('frame_filter', instance.frame_filter)
instance.project = validated_data.get('project', instance.project)
labels = validated_data.get('label_set', [])
for label in labels:
@@ -346,9 +365,35 @@ class AboutSerializer(serializers.Serializer):
description = serializers.CharField(max_length=2048)
version = serializers.CharField(max_length=64)
class ImageMetaSerializer(serializers.Serializer):
class FrameMetaSerializer(serializers.Serializer):
width = serializers.IntegerField()
height = serializers.IntegerField()
name = serializers.CharField(max_length=1024)
class DataMetaSerializer(serializers.ModelSerializer):
frames = FrameMetaSerializer(many=True, allow_null=True)
image_quality = serializers.IntegerField(min_value=0, max_value=100)
class Meta:
model = models.Data
fields = (
'chunk_size',
'size',
'image_quality',
'start_frame',
'stop_frame',
'frame_filter',
'frames',
)
read_only_fields = (
'chunk_size',
'size',
'image_quality',
'start_frame',
'stop_frame',
'frame_filter',
'frames',
)
class AttributeValSerializer(serializers.Serializer):
spec_id = serializers.IntegerField()


@@ -342,10 +342,12 @@ class AnnotationSaverController {
this._autoSaveInterval = null;
const { shortkeys } = window.cvat.config;
Mousetrap.bind(shortkeys.save_work.value, () => {
this.save();
return false;
}, 'keydown');
Mousetrap.bind(shortkeys.save_work.value, Logger.shortkeyLogDecorator(
() => {
this.save();
return false;
},
), 'keydown');
}
autoSave(enabled, time) {

@@ -96,7 +96,7 @@ function setupFrameFilters() {
const brightnessRange = $('#playerBrightnessRange');
const contrastRange = $('#playerContrastRange');
const saturationRange = $('#playerSaturationRange');
const frameBackground = $('#frameBackground');
const canvasBackground = $('#canvasBackground');
const reset = $('#resetPlayerFilterButton');
let brightness = 100;
let contrast = 100;
@@ -105,7 +105,7 @@ function setupFrameFilters() {
const { shortkeys } = window.cvat.config;
function updateFilterParameters() {
frameBackground.css('filter', `contrast(${contrast}%) brightness(${brightness}%) saturate(${saturation}%)`);
canvasBackground.css('filter', `contrast(${contrast}%) brightness(${brightness}%) saturate(${saturation}%)`);
}
brightnessRange.attr('title', `
@@ -488,12 +488,15 @@ function setupMenu(job, task, shapeCollectionModel,
}
function buildAnnotationUI(jobData, taskData, imageMetaData, annotationData, annotationFormats,
loadJobEvent) {
function buildAnnotationUI(
jobData, taskData, imageMetaData,
annotationData, annotationFormats, loadJobEvent,
) {
// Setup some API
window.cvat = {
labelsInfo: new LabelsInfo(taskData.labels),
translate: new CoordinateTranslator(),
frozen: true,
player: {
geometry: {
scale: 1,
@@ -511,6 +514,7 @@ function buildAnnotationUI(jobData, taskData, imageMetaData, annotationData, ann
task_id: taskData.id,
mode: taskData.mode,
images: imageMetaData,
chunk_size: taskData.data_chunk_size,
},
search: {
value: window.location.search,
@@ -646,7 +650,6 @@ function buildAnnotationUI(jobData, taskData, imageMetaData, annotationData, ann
playerModel.shift(window.cvat.search.get('frame') || 0, true);
const { shortkeys } = window.cvat.config;
setupHelpWindow(shortkeys);
setupSettingsWindow();
setupMenu(jobData, taskData, shapeCollectionModel,
@@ -708,12 +711,14 @@ function callAnnotationUI(jid) {
$.get(`/api/v1/jobs/${jid}`).done((jobData) => {
$.when(
$.get(`/api/v1/tasks/${jobData.task_id}`),
$.get(`/api/v1/tasks/${jobData.task_id}/frames/meta`),
$.get(`/api/v1/tasks/${jobData.task_id}/data/meta`),
$.get(`/api/v1/jobs/${jid}/annotations`),
$.get('/api/v1/server/annotation/formats'),
).then((taskData, imageMetaData, annotationData, annotationFormats) => {
$('#loadingOverlay').remove();
setTimeout(() => {
setTimeout(async () => {
window.cvat.config.backendAPI = `${window.location.origin}/api/v1`;
[window.cvatTask] = (await window.cvat.tasks.get({ id: taskData[0].id }));
buildAnnotationUI(jobData, taskData[0],
imageMetaData[0], annotationData[0], annotationFormats[0], loadJobEvent);
});


@@ -451,6 +451,9 @@ var Logger = {
shortkeyLogDecorator: function(decoredFunc) {
let self = this;
return function(e, combo) {
if (window.cvat.frozen) {
return;
}
let pressKeyEvent = self.addContinuedEvent(self.EventType.pressShortcut, {key: combo});
let returnValue = decoredFunc(e, combo);
pressKeyEvent.close();

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2018 Intel Corporation
* Copyright (C) 2018-2019 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/
@@ -14,122 +14,78 @@
Mousetrap:false
*/
'use strict';
class FrameProvider extends Listener {
constructor(stop, tid) {
class FrameProviderWrapper extends Listener {
constructor(stop) {
super('onFrameLoad', () => this._loaded);
this._MAX_LOAD = 500;
this._stack = [];
this._loadInterval = null;
this._required = null;
this._loaded = null;
this._loadAllowed = true;
this._preloadRunned = false;
this._loadCounter = this._MAX_LOAD;
this._frameCollection = {};
this._stop = stop;
this._tid = tid;
}
require(frame) {
if (frame in this._frameCollection) {
this._preload(frame);
return this._frameCollection[frame];
}
this._required = frame;
this._loadCounter = this._MAX_LOAD;
this._load();
return null;
this._loaded = null;
this._result = null;
this._required = null;
}
_onImageLoad(image, frame) {
const next = frame + 1;
if (next <= this._stop && this._loadCounter > 0) {
this._stack.push(next);
async require(frameNumber, isPlaying, step) {
if (frameNumber === this._loaded && this._result && !isPlaying) {
return this._result;
}
this._loadCounter--;
this._loaded = frame;
this._frameCollection[frame] = image;
this._loadAllowed = true;
image.onload = null;
image.onerror = null;
this.notify();
}
_preload(frame) {
if (this._preloadRunned) {
return;
}
const loadFrame = (frameData) => {
frameData.data().then((data) => {
this._loaded = frameNumber;
this._result = data;
this.notify();
}).catch(() => {
this._loaded = { frameNumber };
this.notify();
});
};
const last = Math.min(this._stop, frame + Math.ceil(this._MAX_LOAD / 2));
if (!(last in this._frameCollection)) {
for (let idx = frame + 1; idx <= last; idx++) {
if (!(idx in this._frameCollection)) {
this._loadCounter = this._MAX_LOAD - (idx - frame);
this._stack.push(idx);
this._preloadRunned = true;
this._load();
return;
this._required = frameNumber;
const ranges = await window.cvatTask.frames.ranges();
if (!isPlaying) {
const frameData = await window.cvatTask.frames.get(frameNumber, isPlaying, step);
for (const range of ranges.decoded) {
const [start, stop] = range.split(':').map((el) => +el);
if (frameNumber >= start && frameNumber <= stop) {
const data = await frameData.data();
this._loaded = frameNumber;
this._result = data;
return this._result;
}
}
loadFrame(frameData);
return null;
}
}
_load() {
if (!this._loadInterval) {
this._loadInterval = setInterval(() => {
if (!this._loadAllowed) {
return;
}
if (this._loadCounter <= 0) {
this._stack = [];
}
if (!this._stack.length && this._required == null) {
clearInterval(this._loadInterval);
this._preloadRunned = false;
this._loadInterval = null;
return;
}
if (this._required != null) {
this._stack.push(this._required);
this._required = null;
}
if (ranges.buffered.includes(frameNumber)) {
const frameData = await window.cvatTask.frames.get(frameNumber, isPlaying, step);
const data = await frameData.data();
this._loaded = frameNumber;
this._result = data;
return this._result;
}
const frame = this._stack.pop();
if (frame in this._frameCollection) {
this._loadCounter--;
const next = frame + 1;
if (next <= this._stop && this._loadCounter > 0) {
this._stack.push(frame + 1);
}
return;
}
// Fetch from the server.
// We don't want to wait for it here,
// but we promise to notify the player when the frame is loaded.
setTimeout(async () => {
const frameData = await window.cvatTask.frames.get(frameNumber, isPlaying, step);
if (frameData) {
loadFrame(frameData);
}
}, 0);
return null;
}
// If loaded up to the last frame, there is no need to load previous frames from the stack
if (frame === this._stop) {
this._stack = [];
}
get loaded() {
return this._loaded;
}
this._loadAllowed = false;
const image = new Image();
image.onload = this._onImageLoad.bind(this, image, frame);
image.onerror = () => {
this._loadAllowed = true;
image.onload = null;
image.onerror = null;
};
image.src = `/api/v1/tasks/${this._tid}/frames/${frame}`;
}, 25);
}
get result() {
return this._result;
}
}
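`frames.ranges()` reports which frames are already available on the client: `decoded` holds `"start:stop"` strings and `buffered` holds frame numbers. The range membership check used in `require()` can be sketched in Python (the shape of the data is inferred from the code above):

```python
def in_decoded_ranges(frame_number, decoded_ranges):
    # decoded_ranges looks like ["0:35", "72:107"]; a frame can be
    # served immediately when it falls inside any already-decoded range.
    for range_str in decoded_ranges:
        start, stop = (int(value) for value in range_str.split(':'))
        if start <= frame_number <= stop:
            return True
    return False
```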
const MAX_PLAYER_SCALE = 10;
const MIN_PLAYER_SCALE = 0.1;
@@ -140,6 +96,8 @@ class PlayerModel extends Listener {
start: window.cvat.player.frames.start,
stop: window.cvat.player.frames.stop,
current: window.cvat.player.frames.current,
requested: new Set(),
chunkSize: window.cvat.job.chunk_size,
previous: null,
};
@@ -150,11 +108,16 @@
resetZoom: task.mode === 'annotation',
};
this._playing = false;
this._playInterval = null;
this._pauseFlag = null;
this._frameProvider = new FrameProvider(this._frame.stop, task.id);
this._chunkSize = window.cvat.job.chunk_size;
this._frameProvider = new FrameProviderWrapper(this._frame.stop);
this._continueAfterLoad = false;
this._continueTimeout = null;
this._image = null;
this._activeBufrequest = false;
this._step = 1;
this._timeout = 1000 / this._settings.fps;
this._geometry = {
scale: 1,
@@ -175,6 +138,10 @@
this._frameProvider.subscribe(this);
}
get bufferSize() {
return this._bufferSize;
}
get frames() {
return {
start: this._frame.start,
@@ -192,11 +159,11 @@
}
get playing() {
return this._playInterval != null;
return this._playing;
}
get image() {
return this._frameProvider.require(this._frame.current);
return this._image;
}
get resetZoom() {
@@ -241,119 +208,162 @@
onFrameLoad(last) { // callback for FrameProvider instance
if (last === this._frame.current) {
if (this._continueTimeout) {
clearTimeout(this._continueTimeout);
this._continueTimeout = null;
}
// If we need to continue playing after load, set a timeout for an additional frame download
if (this._continueAfterLoad) {
this._continueTimeout = setTimeout(() => {
// If you still need to play, start it
this._continueTimeout = null;
if (this._continueAfterLoad) {
this._continueAfterLoad = false;
this.play();
} else { // Else update the frame
this.shift(0);
}
}, 5000);
} else { // Just update the frame if there is no need to play
this._continueAfterLoad = false;
// play starts from the next frame, but we need to show the currently requested frame
this._frame.current = this._frame.previous;
this.play();
} else {
this.shift(0);
}
}
}
play() {
this._pauseFlag = false;
this._playInterval = setInterval(() => {
if (this._pauseFlag) { // pause method without notify (for frame downloading)
if (this._playInterval) {
clearInterval(this._playInterval);
this._playInterval = null;
}
return;
async _playFunction() {
if (this._pauseFlag) { // pause method without notify (for frame downloading)
if (this._playInterval) {
clearInterval(this._playInterval);
this._playInterval = null;
}
return;
}
const skip = Math.max(Math.floor(this._settings.fps / 25), 1);
if (!this.shift(skip)) this.pause(); // if not changed, pause
}, 1000 / this._settings.fps);
const res = await this.shift(this._step);
if (!res) {
this.pause(); // if not changed, pause
} else if (this._frame.requested.size === 0 && !this._playInterval) {
this._playInterval = setInterval(() => this._playFunction(), this._timeout);
}
}
play() {
this._step = Math.max(Math.floor(this._settings.fps / 25), 1);
this._pauseFlag = false;
this._playing = true;
this._timeout = 1000 / this._settings.fps;
this._frame.requested.clear();
this._playFunction();
}
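`play()` derives the per-tick frame skip and the timer interval from the configured FPS; above 25 FPS, whole frames are skipped on every tick. The same arithmetic as a standalone Python sketch:

```python
import math

def playback_params(fps):
    # Advance at least one frame per tick; above 25 FPS, skip extra
    # frames rather than rendering every single one.
    step = max(math.floor(fps / 25), 1)
    timeout_ms = 1000 / fps
    return step, timeout_ms
```

At 50 FPS the timer fires every 20 ms and each tick advances 2 frames.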
pause() {
this._pauseFlag = true;
this._playing = false;
if (this._playInterval) {
clearInterval(this._playInterval);
this._playInterval = null;
this._pauseFlag = true;
this._frame.requested.clear();
this.notify();
}
}
updateGeometry(geometry) {
this._geometry.width = geometry.width;
this._geometry.height = geometry.height;
}
shift(delta, absolute) {
async shift(delta, absolute, isLoadFrame = true) {
if (['resize', 'drag'].indexOf(window.cvat.mode) !== -1) {
return false;
}
this._continueAfterLoad = false; // default reset continue
this._frame.current = Math.clamp(absolute ? delta : this._frame.current + delta,
const requestedFrame = Math.clamp(absolute ? delta : this._frame.current + delta,
this._frame.start,
this._frame.stop);
const frame = this._frameProvider.require(this._frame.current);
if (!frame) {
if (this._frame.requested.has(requestedFrame)) {
return false;
}
if (absolute) {
this._frame.requested.clear();
}
if (!isLoadFrame) {
this._image = null;
this._continueAfterLoad = this.playing;
this._pauseFlag = true;
this.notify();
return false;
}
window.cvat.player.frames.current = this._frame.current;
window.cvat.player.geometry.frameWidth = frame.width;
window.cvat.player.geometry.frameHeight = frame.height;
if (requestedFrame === this._frame.current && this._image !== null) {
return false;
}
this._frame.requested.add(requestedFrame);
Logger.addEvent(Logger.EventType.changeFrame, {
from: this._frame.previous,
to: this._frame.current,
});
try {
const frame = await this._frameProvider.require(requestedFrame,
this._playing, this._step);
if (!this._frame.requested.has(requestedFrame)) {
return false;
}
this._frame.requested.delete(requestedFrame);
this._frame.current = requestedFrame;
if (!frame) {
this._image = null;
this._continueAfterLoad = this.playing;
this._pauseFlag = true;
this.notify();
return false;
}
const changed = this._frame.previous !== this._frame.current;
const curFrameRotation = this._framewiseRotation[this._frame.current];
const prevFrameRotation = this._framewiseRotation[this._frame.previous];
const differentRotation = curFrameRotation !== prevFrameRotation;
// fit if tool is in the annotation mode or frame loading is first in the interpolation mode
if (this._settings.resetZoom || this._frame.previous === null || differentRotation) {
this._frame.previous = this._frame.current;
this.fit(); // notify() inside the fit()
} else {
this._frame.previous = this._frame.current;
this.notify();
window.cvat.player.frames.current = requestedFrame;
window.cvat.player.geometry.frameWidth = frame.renderWidth;
window.cvat.player.geometry.frameHeight = frame.renderHeight;
this._image = frame;
Logger.addEvent(Logger.EventType.changeFrame, {
from: this._frame.previous,
to: this._frame.current,
});
const changed = this._frame.previous !== this._frame.current;
const curFrameRotation = this._framewiseRotation[this._frame.current];
const prevFrameRotation = this._framewiseRotation[this._frame.previous];
const differentRotation = curFrameRotation !== prevFrameRotation;
// fit if tool is in the annotation mode or frame loading is first
// in the interpolation mode
if (this._settings.resetZoom || this._frame.previous === null || differentRotation) {
this._frame.previous = requestedFrame;
this.fit(); // notify() inside the fit()
} else {
this._frame.previous = requestedFrame;
this.notify();
}
return changed;
} catch (error) {
if (typeof (error) === 'number') {
this._frame.requested.delete(error);
} else {
throw error;
}
}
return false;
}
return changed;
updateGeometry(geometry) {
this._geometry.width = geometry.width;
this._geometry.height = geometry.height;
}
fit() {
const img = this._frameProvider.require(this._frame.current);
if (!img) return;
if (!this._image) {
return;
}
const { rotation } = this.geometry;
if ((rotation / 90) % 2) {
// 90, 270, ..
this._geometry.scale = Math.min(this._geometry.width / img.height,
this._geometry.height / img.width);
this._geometry.scale = Math.min(this._geometry.width / this._image.renderHeight,
this._geometry.height / this._image.renderWidth);
} else {
// 0, 180, ..
this._geometry.scale = Math.min(this._geometry.width / img.width,
this._geometry.height / img.height);
this._geometry.scale = Math.min(this._geometry.width / this._image.renderWidth,
this._geometry.height / this._image.renderHeight);
}
this._geometry.top = (this._geometry.height - img.height * this._geometry.scale) / 2;
this._geometry.left = (this._geometry.width - img.width * this._geometry.scale) / 2;
this._geometry.top = (this._geometry.height
- this._image.renderHeight * this._geometry.scale) / 2;
this._geometry.left = (this._geometry.width
- this._image.renderWidth * this._geometry.scale) / 2;
window.cvat.player.rotation = rotation;
window.cvat.player.geometry.scale = this._geometry.scale;
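`fit()` computes a letterbox scale between the viewport and the rendered frame, swapping the frame's sides for 90°/270° rotations, and then centers the frame. The same geometry as a standalone Python sketch (names are illustrative):

```python
def fit_geometry(view_w, view_h, frame_w, frame_h, rotation=0):
    # For 90/270 degree rotations the frame's width and height are
    # swapped relative to the viewport when picking the scale.
    if (rotation // 90) % 2:
        scale = min(view_w / frame_h, view_h / frame_w)
    else:
        scale = min(view_w / frame_w, view_h / frame_h)
    # Center the scaled frame inside the viewport.
    top = (view_h - frame_h * scale) / 2
    left = (view_w - frame_w * scale) / 2
    return scale, top, left
```

For a 400x300 frame in an 800x600 viewport this yields scale 2 with zero offsets.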
@@ -361,10 +371,12 @@ class PlayerModel extends Listener {
}
focus(xtl, xbr, ytl, ybr) {
const img = this._frameProvider.require(this._frame.current);
if (!img) return;
const fittedScale = Math.min(this._geometry.width / img.width,
this._geometry.height / img.height);
if (!this._image) {
return;
}
const fittedScale = Math.min(this._geometry.width / this._image.renderWidth,
this._geometry.height / this._image.renderHeight);
const boxWidth = xbr - xtl;
const boxHeight = ybr - ytl;
@@ -376,26 +388,37 @@
if (this._geometry.scale < fittedScale) {
this._geometry.scale = fittedScale;
this._geometry.top = (this._geometry.height - img.height * this._geometry.scale) / 2;
this._geometry.left = (this._geometry.width - img.width * this._geometry.scale) / 2;
this._geometry.top = (this._geometry.height
- this._image.renderHeight * this._geometry.scale) / 2;
this._geometry.left = (this._geometry.width
- this._image.renderWidth * this._geometry.scale) / 2;
} else {
this._geometry.left = (this._geometry.width / this._geometry.scale - xtl * 2 - boxWidth) * this._geometry.scale / 2;
this._geometry.top = (this._geometry.height / this._geometry.scale - ytl * 2 - boxHeight) * this._geometry.scale / 2;
this._geometry.left = ((this._geometry.width / this._geometry.scale
- xtl * 2 - boxWidth) * this._geometry.scale) / 2;
this._geometry.top = ((this._geometry.height / this._geometry.scale
- ytl * 2 - boxHeight) * this._geometry.scale) / 2;
}
window.cvat.player.geometry.scale = this._geometry.scale;
this._frame.previous = this._frame.current; // fix infinite loop via playerUpdate->collectionUpdate*->AAMUpdate->playerUpdate->...
// fix infinite loop via playerUpdate->collectionUpdate*->AAMUpdate->playerUpdate->...
this._frame.previous = this._frame.current;
this.notify();
}
scale(point, value) {
if (!this._frameProvider.require(this._frame.current)) return;
if (!this._image) {
return;
}
const oldScale = this._geometry.scale;
const newScale = value > 0 ? this._geometry.scale * 6 / 5 : this._geometry.scale * 5 / 6;
const newScale = value > 0
? (this._geometry.scale * 6) / 5
: (this._geometry.scale * 5) / 6;
this._geometry.scale = Math.clamp(newScale, MIN_PLAYER_SCALE, MAX_PLAYER_SCALE);
this._geometry.left += (point.x * (oldScale / this._geometry.scale - 1)) * this._geometry.scale;
this._geometry.top += (point.y * (oldScale / this._geometry.scale - 1)) * this._geometry.scale;
this._geometry.left += this._geometry.scale
* (point.x * (oldScale / this._geometry.scale - 1));
this._geometry.top += this._geometry.scale
* (point.y * (oldScale / this._geometry.scale - 1));
window.cvat.player.geometry.scale = this._geometry.scale;
this.notify();
@@ -423,6 +446,7 @@
}
this.fit();
return true;
}
}
@@ -443,7 +467,7 @@ class PlayerController {
move: null,
};
function setupPlayerShortcuts(playerModel) {
function setupPlayerShortcuts() {
const nextHandler = Logger.shortkeyLogDecorator((e) => {
this.next();
e.preventDefault();
@@ -612,10 +636,12 @@ class PlayerController {
const { frames } = this._model;
const progressWidth = e.target.clientWidth;
const x = e.clientX + window.pageXOffset - e.target.offsetLeft;
const percent = x / progressWidth;
const percent = Math.clamp(x / progressWidth, 0, 1);
const targetFrame = Math.round((frames.stop - frames.start) * percent);
this._model.pause();
this._model.shift(targetFrame + frames.start, true);
if (targetFrame !== frames.current) {
this._model.pause();
this._model.shift(targetFrame + frames.start, true);
}
}
}
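The corrected progress handler clamps the relative click position into [0, 1] before mapping it onto the frame range, so a click slightly outside the bar snaps to the first or last frame instead of producing an out-of-range value. A Python sketch of that mapping:

```python
def click_to_frame(x, progress_width, start_frame, stop_frame):
    # Clamp the relative position so clicks past either end of the
    # progress bar resolve to the first/last frame.
    percent = min(max(x / progress_width, 0.0), 1.0)
    return start_frame + round((stop_frame - start_frame) * percent)
```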
@@ -650,34 +676,34 @@
this._model.pause();
}
next() {
this._model.shift(1);
this._model.pause();
async next() {
await this._model.shift(1);
await this._model.pause();
}
previous() {
this._model.shift(-1);
this._model.pause();
async previous() {
await this._model.shift(-1);
await this._model.pause();
}
first() {
this._model.shift(this._model.frames.start, true);
this._model.pause();
async first() {
await this._model.shift(this._model.frames.start, true);
await this._model.pause();
}
last() {
this._model.shift(this._model.frames.stop, true);
this._model.pause();
async last() {
await this._model.shift(this._model.frames.stop, true);
await this._model.pause();
}
forward() {
this._model.shift(this._model.multipleStep);
this._model.pause();
async forward() {
await this._model.shift(this._model.multipleStep);
await this._model.pause();
}
backward() {
this._model.shift(-this._model.multipleStep);
this._model.pause();
async backward() {
await this._model.shift(-this._model.multipleStep);
await this._model.pause();
}
seek(frame) {
@@ -704,6 +730,7 @@ class PlayerView {
this._controller = playerController;
this._playerUI = $('#playerFrame');
this._playerBackgroundUI = $('#frameBackground');
this._playerCanvasBackground = $('#canvasBackground');
this._playerContentUI = $('#frameContent');
this._playerGridUI = $('#frameGrid');
this._playerTextUI = $('#frameText');
@@ -729,6 +756,7 @@
this._rotationWrapperUI = $('#rotationWrapper');
this._rotatateAllImagesUI = $('#rotateAllImages');
this._latestDrawnImage = null;
this._clockwiseRotationButtonUI.on('click', () => {
this._controller.rotate(90);
});
@@ -758,7 +786,7 @@
this._playerContentUI.on('dblclick', () => this._controller.fit());
this._playerContentUI.on('mousemove', e => this._controller.frameMouseMove(e));
this._progressUI.on('mousedown', e => this._controller.progressMouseDown(e));
this._progressUI.on('mouseup', () => this._controller.progressMouseUp());
this._progressUI.on('mouseup', e => this._controller.progressMouseUp(e));
this._progressUI.on('mousemove', e => this._controller.progressMouseMove(e));
this._playButtonUI.on('click', () => this._controller.play());
this._pauseButtonUI.on('click', () => this._controller.pause());
@@ -913,10 +941,16 @@ class PlayerView {
$('.custom-menu').hide(100);
});
window.document.body.style.pointerEvents = 'none';
playerModel.subscribe(this);
}
onPlayerUpdate(model) {
if (!this._latestDrawnImage && model.image) {
window.document.body.style.pointerEvents = '';
window.cvat.frozen = false;
}
const { image } = model;
const { frames } = model;
const { geometry } = model;
@@ -928,8 +962,22 @@
}
this._loadingUI.addClass('hidden');
if (this._playerBackgroundUI.css('background-image').slice(5, -2) !== image.src) {
this._playerBackgroundUI.css('background-image', `url("${image.src}")`);
if (this._latestDrawnImage !== image) {
this._latestDrawnImage = image;
const ctx = this._playerCanvasBackground[0].getContext('2d');
this._playerCanvasBackground.attr('width', image.renderWidth);
this._playerCanvasBackground.attr('height', image.renderHeight);
if (window.cvatTask.dataChunkType === 'video') {
ctx.scale(image.renderWidth / image.imageData.width,
image.renderHeight / image.imageData.height);
ctx.putImageData(image.imageData, 0, 0);
// Transformation matrix must not affect the putImageData() method.
// For this reason, the image has to be redrawn for the scale to apply.
// https://www.w3.org/TR/2dcontext/#dom-context-2d-putimagedata
ctx.drawImage(this._playerCanvasBackground[0], 0, 0);
} else {
ctx.drawImage(image.imageData, 0, 0);
}
}
if (model.playing) {
@ -967,20 +1015,24 @@ class PlayerView {
this._rotationWrapperUI.css('transform', `rotate(${geometry.rotation}deg)`);
for (const obj of [this._playerBackgroundUI, this._playerGridUI]) {
obj.css('width', image.width);
obj.css('height', image.height);
obj.css('width', image.renderWidth);
obj.css('height', image.renderHeight);
obj.css('top', geometry.top);
obj.css('left', geometry.left);
obj.css('transform', `scale(${geometry.scale})`);
}
for (const obj of [this._playerContentUI, this._playerTextUI]) {
obj.css('width', image.width + geometry.frameOffset * 2);
obj.css('height', image.height + geometry.frameOffset * 2);
obj.css('width', image.renderWidth + geometry.frameOffset * 2);
obj.css('height', image.renderHeight + geometry.frameOffset * 2);
obj.css('top', geometry.top - geometry.frameOffset * geometry.scale);
obj.css('left', geometry.left - geometry.frameOffset * geometry.scale);
}
this._playerCanvasBackground.css('top', geometry.top);
this._playerCanvasBackground.css('left', geometry.left);
this._playerCanvasBackground.css('transform', `scale(${geometry.scale})`);
this._playerContentUI.css('transform', `scale(${geometry.scale})`);
this._playerTextUI.css('transform', `scale(10) rotate(${-geometry.rotation}deg)`);
this._playerGridPath.attr('stroke-width', 2 / geometry.scale);

@ -199,7 +199,7 @@ class ShapeBufferModel extends Listener {
let imageSizes = window.cvat.job.images;
let startFrame = window.cvat.player.frames.start;
let originalImageSize = imageSizes[object.frame - startFrame] || imageSizes[0];
let originalImageSize = imageSizes.frames[object.frame - startFrame] || imageSizes.frames[0];
// Getting normalized coordinates [0..1]
let normalized = {};
@ -225,7 +225,7 @@ class ShapeBufferModel extends Listener {
numOfFrames --;
object.z_order = this._collection.zOrder(object.frame).max;
let imageSize = imageSizes[object.frame - startFrame] || imageSizes[0];
let imageSize = imageSizes.frames[object.frame - startFrame] || imageSizes.frames[0];
let position = {};
if (this._shape.type === 'box') {
position.xtl = normalized.xtl * imageSize.width;
@ -310,9 +310,9 @@ class ShapeBufferController {
let imageSizes = window.cvat.job.images;
let message = `Propagate up to ${endFrame} frame. `;
let refSize = imageSizes[curFrame - startFrame] || imageSizes[0];
let refSize = imageSizes.frames[curFrame - startFrame] || imageSizes.frames[0];
for (let _frame = curFrame + 1; _frame <= endFrame; _frame ++) {
let size = imageSizes[_frame - startFrame] || imageSizes[0];
let size = imageSizes.frames[_frame - startFrame] || imageSizes.frames[0];
if ((size.width != refSize.width) || (size.height != refSize.height) ) {
message += 'Some covered frames have another resolution. Shapes in them can differ from reference. ';
break;

File diff suppressed because one or more lines are too long

@ -478,6 +478,13 @@
}
#frameBackground {
position: absolute;
z-index: -1;
background-repeat: no-repeat;
transform-origin: top left;
}
#canvasBackground {
position: absolute;
z-index: 0;
background-repeat: no-repeat;

@ -7,14 +7,13 @@ import os
import sys
import rq
import shutil
from PIL import Image
from traceback import print_exception
from ast import literal_eval
from urllib import error as urlerror
from urllib import parse as urlparse
from urllib import request as urlrequest
from cvat.apps.engine.media_extractors import get_mime, MEDIA_TYPES
from cvat.apps.engine.media_extractors import get_mime, MEDIA_TYPES, Mpeg4ChunkWriter, ZipChunkWriter, Mpeg4CompressedChunkWriter, ZipCompressedChunkWriter
from cvat.apps.engine.models import DataChoice
import django_rq
from django.conf import settings
@ -36,54 +35,17 @@ def create(tid, data):
def rq_handler(job, exc_type, exc_value, traceback):
splitted = job.id.split('/')
tid = int(splitted[splitted.index('tasks') + 1])
db_task = models.Task.objects.select_for_update().get(pk=tid)
with open(db_task.get_log_path(), "wt") as log_file:
print_exception(exc_type, exc_value, traceback, file=log_file)
try:
db_task = models.Task.objects.select_for_update().get(pk=tid)
with open(db_task.get_log_path(), "wt") as log_file:
print_exception(exc_type, exc_value, traceback, file=log_file)
except models.Task.DoesNotExist:
pass # the task may already be deleted; nothing to log
return False
############################# Internal implementation for server API
def make_image_meta_cache(db_task):
with open(db_task.get_image_meta_cache_path(), 'w') as meta_file:
cache = {
'original_size': []
}
if db_task.mode == 'interpolation':
image = Image.open(db_task.get_frame_path(0))
cache['original_size'].append({
'width': image.size[0],
'height': image.size[1]
})
image.close()
else:
filenames = []
for root, _, files in os.walk(db_task.get_upload_dirname()):
fullnames = map(lambda f: os.path.join(root, f), files)
images = filter(lambda x: get_mime(x) == 'image', fullnames)
filenames.extend(images)
filenames.sort()
for image_path in filenames:
image = Image.open(image_path)
cache['original_size'].append({
'width': image.size[0],
'height': image.size[1]
})
image.close()
meta_file.write(str(cache))
def get_image_meta_cache(db_task):
try:
with open(db_task.get_image_meta_cache_path()) as meta_cache_file:
return literal_eval(meta_cache_file.read())
except Exception:
make_image_meta_cache(db_task)
with open(db_task.get_image_meta_cache_path()) as meta_cache_file:
return literal_eval(meta_cache_file.read())
def _copy_data_from_share(server_files, upload_dir):
job = rq.get_current_job()
job.meta['status'] = 'Data are being copied from share...'
@ -108,7 +70,7 @@ def _save_task_to_db(db_task):
segment_size = db_task.segment_size
segment_step = segment_size
if segment_size == 0:
segment_size = db_task.size
segment_size = db_task.data.size
# Segment step must be more than segment_size + overlap in single-segment tasks
# Otherwise a task contains an extra segment
@ -121,9 +83,8 @@ def _save_task_to_db(db_task):
segment_step -= db_task.overlap
for x in range(0, db_task.size, segment_step):
start_frame = x
stop_frame = min(x + segment_size - 1, db_task.size - 1)
for start_frame in range(0, db_task.data.size, segment_step):
stop_frame = min(start_frame + segment_size - 1, db_task.data.size - 1)
slogger.glob.info("New segment for task #{}: start_frame = {}, \
stop_frame = {}".format(db_task.id, start_frame, stop_frame))
@ -138,9 +99,10 @@ def _save_task_to_db(db_task):
db_job.segment = db_segment
db_job.save()
db_task.data.save()
db_task.save()
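The segment loop in the hunk above walks the task's frames in steps of `segment_step` (segment size minus overlap) and clamps each segment's `stop_frame` to the last frame. A minimal standalone sketch of that logic, using a hypothetical `split_into_segments` helper rather than the actual `_save_task_to_db` code:

```python
def split_into_segments(task_size, segment_size, overlap):
    """Yield (start_frame, stop_frame) pairs, one per job (illustrative helper)."""
    if segment_size == 0:
        # A zero segment size means the whole task becomes a single segment.
        segment_size = task_size
    # Consecutive segments share `overlap` frames.
    segment_step = segment_size - overlap
    for start_frame in range(0, task_size, segment_step):
        yield (start_frame, min(start_frame + segment_size - 1, task_size - 1))
```

For example, a 100-frame task with `segment_size=40` and `overlap=10` yields segments starting at frames 0, 30, 60, and 90, each clamped to end no later than frame 99.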
def _validate_data(data):
def _count_files(data):
share_root = settings.SHARE_ROOT
server_files = []
@ -184,6 +146,9 @@ def _validate_data(data):
counter=counter,
)
return counter
def _validate_data(counter):
unique_entries = 0
multiple_entries = 0
for media_type, media_config in MEDIA_TYPES.items():
@ -203,7 +168,12 @@ def _validate_data(data):
if unique_entries == 0 and multiple_entries == 0:
raise ValueError('No media data found')
return counter
task_modes = [MEDIA_TYPES[media_type]['mode'] for media_type, media_files in counter.items() if media_files]
if not all(mode == task_modes[0] for mode in task_modes):
raise Exception('Could not combine different task modes for data')
return counter, task_modes[0]
def _download_data(urls, upload_dir):
job = rq.get_current_job()
@ -237,15 +207,17 @@ def _create_thread(tid, data):
slogger.glob.info("create task #{}".format(tid))
db_task = models.Task.objects.select_for_update().get(pk=tid)
if db_task.size != 0:
db_data = db_task.data
if db_task.data.size != 0:
raise NotImplementedError("Adding more data is not implemented")
upload_dir = db_task.get_upload_dirname()
upload_dir = db_data.get_upload_dirname()
if data['remote_files']:
data['remote_files'] = _download_data(data['remote_files'], upload_dir)
media = _validate_data(data)
media = _count_files(data)
media, task_mode = _validate_data(media)
if data['server_files']:
_copy_data_from_share(data['server_files'], upload_dir)
@ -255,58 +227,80 @@ def _create_thread(tid, data):
job.save_meta()
db_images = []
extractors = []
length = 0
extractor = None
for media_type, media_files in media.items():
if not media_files:
continue
extractor = MEDIA_TYPES[media_type]['extractor'](
source_path=[os.path.join(upload_dir, f) for f in media_files],
dest_path=upload_dir,
image_quality=db_task.image_quality,
step=db_task.get_frame_step(),
start=db_task.start_frame,
stop=db_task.stop_frame,
)
length += len(extractor)
db_task.mode = MEDIA_TYPES[media_type]['mode']
extractors.append(extractor)
for extractor in extractors:
for frame, image_orig_path in enumerate(extractor):
image_dest_path = db_task.get_frame_path(db_task.size)
dirname = os.path.dirname(image_dest_path)
if not os.path.exists(dirname):
os.makedirs(dirname)
if db_task.mode == 'interpolation':
extractor.save_image(frame, image_dest_path)
else:
width, height = extractor.save_image(frame, image_dest_path)
db_images.append(models.Image(
task=db_task,
path=image_orig_path,
frame=db_task.size,
width=width, height=height))
db_task.size += 1
progress = frame * 100 // length
job.meta['status'] = 'Images are being compressed... {}%'.format(progress)
job.save_meta()
if db_task.mode == 'interpolation':
image = Image.open(db_task.get_frame_path(0))
models.Video.objects.create(
task=db_task,
path=extractors[0].get_source_name(),
width=image.width, height=image.height)
image.close()
if db_task.stop_frame == 0:
db_task.stop_frame = db_task.start_frame + (db_task.size - 1) * db_task.get_frame_step()
else:
if media_files:
if extractor is not None:
raise Exception('Combined data types are not supported')
extractor = MEDIA_TYPES[media_type]['extractor'](
source_path=[os.path.join(upload_dir, f) for f in media_files],
step=db_data.get_frame_step(),
start=db_data.start_frame,
stop=db_data.stop_frame,
)
db_task.mode = task_mode
db_data.compressed_chunk_type = models.DataChoice.VIDEO if task_mode == 'interpolation' and not data['use_zip_chunks'] else models.DataChoice.IMAGESET
db_data.original_chunk_type = models.DataChoice.VIDEO if task_mode == 'interpolation' else models.DataChoice.IMAGESET
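The two chunk types above are chosen independently: original chunks keep video as video, while compressed chunks can be forced down to zipped images with `use_zip_chunks`. A sketch of that decision, using plain strings in place of the `DataChoice` enum values:

```python
def select_chunk_types(task_mode, use_zip_chunks):
    """Return (compressed_type, original_type) for a task (illustrative helper)."""
    # Original chunks preserve the source container: video stays video.
    original = 'video' if task_mode == 'interpolation' else 'imageset'
    # Compressed chunks may be downgraded to zipped images on request.
    compressed = ('video' if task_mode == 'interpolation' and not use_zip_chunks
                  else 'imageset')
    return compressed, original
```

This is why a video task created with `use_zip_chunks` serves zipped images for annotation but still exports from an original video chunk.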
def update_progress(progress):
job.meta['status'] = 'Images are being compressed... {}%'.format(round(progress * 100))
job.save_meta()
compressed_chunk_writer_class = Mpeg4CompressedChunkWriter if db_data.compressed_chunk_type == DataChoice.VIDEO else ZipCompressedChunkWriter
original_chunk_writer_class = Mpeg4ChunkWriter if db_data.original_chunk_type == DataChoice.VIDEO else ZipChunkWriter
compressed_chunk_writer = compressed_chunk_writer_class(db_data.image_quality)
original_chunk_writer = original_chunk_writer_class(100)
# calculate chunk size if it isn't specified
if db_data.chunk_size is None:
if isinstance(compressed_chunk_writer, ZipCompressedChunkWriter):
w, h = extractor.get_image_size()
area = h * w
db_data.chunk_size = max(2, min(72, 36 * 1920 * 1080 // area))
else:
db_data.chunk_size = 36
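The default chunk size computed above scales inversely with frame area, so a zip chunk stays roughly constant in bytes: 36 frames at Full HD, fewer for larger frames, clamped to the range [2, 72]. Video chunks always default to 36 frames. A sketch of the heuristic:

```python
def default_chunk_size(width, height, is_zip_writer):
    """Default frames-per-chunk when chunk_size is unspecified (illustrative)."""
    if not is_zip_writer:
        # Video chunks use a fixed frame count.
        return 36
    area = width * height
    # 36 frames at 1920x1080; scale down for bigger frames, clamp to [2, 72].
    return max(2, min(72, 36 * 1920 * 1080 // area))
```

For instance, 4K frames (3840x2160) have four times the Full HD area, so the default drops to 9 frames per chunk, while small 640x480 frames hit the upper clamp of 72.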
frame_counter = 0
total_len = len(extractor) or 100
image_names = []
image_sizes = []
for chunk_idx, chunk_images in enumerate(extractor.slice_by_size(db_data.chunk_size)):
for img in chunk_images:
image_names.append(img[1])
original_chunk_path = db_data.get_original_chunk_path(chunk_idx)
original_chunk_writer.save_as_chunk(chunk_images, original_chunk_path)
compressed_chunk_path = db_data.get_compressed_chunk_path(chunk_idx)
img_sizes = compressed_chunk_writer.save_as_chunk(chunk_images, compressed_chunk_path)
image_sizes.extend(img_sizes)
db_data.size += len(chunk_images)
update_progress(db_data.size / total_len)
if db_task.mode == 'annotation':
for image_name, image_size in zip(image_names, image_sizes):
db_images.append(models.Image(
data=db_data,
path=os.path.relpath(image_name, upload_dir),
frame=frame_counter,
width=image_size[0],
height=image_size[1],
))
frame_counter += 1
models.Image.objects.bulk_create(db_images)
else:
models.Video.objects.create(
data=db_data,
path=os.path.relpath(image_names[0], upload_dir),
width=image_sizes[0][0], height=image_sizes[0][1])
if db_data.stop_frame == 0:
db_data.stop_frame = db_data.start_frame + (db_data.size - 1) * db_data.get_frame_step()
extractor.save_preview(db_data.get_preview_path())
slogger.glob.info("Founded frames {} for task #{}".format(db_task.size, tid))
slogger.glob.info("Found frames {} for Data #{}".format(db_data.size, db_data.id))
_save_task_to_db(db_task)
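The chunking loop above relies on `extractor.slice_by_size()` from the media extractor API to group frames into fixed-size chunks before handing them to the chunk writers. A generic equivalent of that slicing (an illustrative sketch, not the extractor's actual implementation) could look like:

```python
def slice_by_size(frames, chunk_size):
    """Yield consecutive lists of at most chunk_size frames (illustrative only)."""
    chunk = []
    for frame in frames:
        chunk.append(frame)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        # The final chunk may be shorter than chunk_size.
        yield chunk
```

With `chunk_size=3`, seven frames produce chunk indices 0, 1, and 2, the last holding a single frame — matching how `chunk_idx` and `db_data.size` advance in the loop.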

@ -26,6 +26,7 @@
<script type="text/javascript" src="{% static 'engine/js/3rdparty/jquery-3.3.1.js' %}"></script>
<script type="text/javascript" src="{% static 'engine/js/3rdparty/js.cookie.js' %}"></script>
<script type="text/javascript" src="{% static 'engine/js/3rdparty/jquery.fullscreen.js' %}"></script>
<script type="text/javascript" src="{% static 'engine/js/cvat-core.min.js' %}"></script>
{% for js_file in js_3rdparty %}
<script type="text/javascript" src="{% static js_file %}"></script>
{% endfor %}
@ -80,6 +81,7 @@
<svg id="frameContent"> </svg>
<svg id="frameText"> </svg>
<svg id="frameBackground"> </svg>
<canvas id="canvasBackground"> </canvas>
<svg id="frameGrid" xmlns="http://www.w3.org/2000/svg">
<defs>
<pattern id="playerGridPattern" width="100" height="100" patternUnits="userSpaceOnUse">

@ -1,25 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os.path as osp
from django.test import TestCase
from cvat.apps.engine.models import Task
class TaskModelTest(TestCase):
def test_frame_id_path_conversions(self):
task_id = 1
task = Task(task_id)
for i in [10 ** p for p in range(6)]:
src_path_expected = osp.join(
str(i // 10000), str(i // 100), '%s.jpg' % i)
src_path = task.get_frame_path(i)
dst_frame = task.get_image_frame(src_path)
self.assertTrue(src_path.endswith(src_path_expected),
'%s vs. %s' % (src_path, src_path_expected))
self.assertEqual(i, dst_frame)

@ -6,14 +6,14 @@ import os
import shutil
from PIL import Image
from io import BytesIO
from enum import Enum
import random
from rest_framework.test import APITestCase, APIClient
from rest_framework import status
from django.conf import settings
from django.contrib.auth.models import User, Group
from cvat.apps.engine.models import (Task, Segment, Job, StatusChoice,
AttributeType, Project)
from cvat.apps.annotation.models import AnnotationFormat
AttributeType, Project, Data)
from unittest import mock
import io
import xml.etree.ElementTree as ET
@ -21,6 +21,8 @@ from collections import defaultdict
import zipfile
from pycocotools import coco as coco_loader
import tempfile
import av
import numpy as np
def create_db_users(cls):
(group_admin, _) = Group.objects.get_or_create(name="admin")
@ -50,14 +52,27 @@ def create_db_users(cls):
cls.user = cls.user5 = user_dummy
def create_db_task(data):
data_settings = {
"size": data.pop("size"),
"image_quality": data.pop("image_quality"),
}
db_data = Data.objects.create(**data_settings)
shutil.rmtree(db_data.get_data_dirname(), ignore_errors=True)
os.makedirs(db_data.get_data_dirname())
os.makedirs(db_data.get_upload_dirname())
db_task = Task.objects.create(**data)
shutil.rmtree(db_task.get_task_dirname(), ignore_errors=True)
os.makedirs(db_task.get_upload_dirname())
os.makedirs(db_task.get_data_dirname())
os.makedirs(db_task.get_task_dirname())
os.makedirs(db_task.get_task_logs_dirname())
os.makedirs(db_task.get_task_artifacts_dirname())
db_task.data = db_data
db_task.save()
for x in range(0, db_task.size, db_task.segment_size):
for x in range(0, db_task.data.size, db_task.segment_size):
start_frame = x
stop_frame = min(x + db_task.segment_size - 1, db_task.size - 1)
stop_frame = min(x + db_task.segment_size - 1, db_task.data.size - 1)
db_segment = Segment()
db_segment.task = db_task
@ -1051,7 +1066,7 @@ class TaskGetAPITestCase(APITestCase):
def _check_response(self, response, db_task):
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["name"], db_task.name)
self.assertEqual(response.data["size"], db_task.size)
self.assertEqual(response.data["size"], db_task.data.size)
self.assertEqual(response.data["mode"], db_task.mode)
owner = db_task.owner.id if db_task.owner else None
self.assertEqual(response.data["owner"], owner)
@ -1060,7 +1075,7 @@ class TaskGetAPITestCase(APITestCase):
self.assertEqual(response.data["overlap"], db_task.overlap)
self.assertEqual(response.data["segment_size"], db_task.segment_size)
self.assertEqual(response.data["z_order"], db_task.z_order)
self.assertEqual(response.data["image_quality"], db_task.image_quality)
self.assertEqual(response.data["image_quality"], db_task.data.image_quality)
self.assertEqual(response.data["status"], db_task.status)
self.assertListEqual(
[label.name for label in db_task.label_set.all()],
@ -1146,7 +1161,7 @@ class TaskUpdateAPITestCase(APITestCase):
self.assertEqual(response.status_code, status.HTTP_200_OK)
name = data.get("name", db_task.name)
self.assertEqual(response.data["name"], name)
self.assertEqual(response.data["size"], db_task.size)
self.assertEqual(response.data["size"], db_task.data.size)
mode = data.get("mode", db_task.mode)
self.assertEqual(response.data["mode"], mode)
owner = db_task.owner.id if db_task.owner else None
@ -1159,7 +1174,7 @@ class TaskUpdateAPITestCase(APITestCase):
self.assertEqual(response.data["segment_size"], db_task.segment_size)
z_order = data.get("z_order", db_task.z_order)
self.assertEqual(response.data["z_order"], z_order)
image_quality = data.get("image_quality", db_task.image_quality)
image_quality = data.get("image_quality", db_task.data.image_quality)
self.assertEqual(response.data["image_quality"], image_quality)
self.assertEqual(response.data["status"], db_task.status)
if data.get("labels"):
@ -1187,7 +1202,6 @@ class TaskUpdateAPITestCase(APITestCase):
data = {
"name": "new name for the task",
"owner": self.owner.id,
"image_quality": 60,
"labels": [{
"name": "non-vehicle",
"attributes": [{
@ -1204,7 +1218,6 @@ class TaskUpdateAPITestCase(APITestCase):
data = {
"name": "new name for the task",
"owner": self.assignee.id,
"image_quality": 63,
"labels": [{
"name": "car",
"attributes": [{
@ -1221,7 +1234,6 @@ class TaskUpdateAPITestCase(APITestCase):
def test_api_v1_tasks_id_observer(self):
data = {
"name": "new name for the task",
"image_quality": 61,
"labels": [{
"name": "test",
}]
@ -1231,7 +1243,6 @@ class TaskUpdateAPITestCase(APITestCase):
def test_api_v1_tasks_id_no_auth(self):
data = {
"name": "new name for the task",
"image_quality": 59,
"labels": [{
"name": "test",
}]
@ -1315,7 +1326,6 @@ class TaskCreateAPITestCase(APITestCase):
def _check_response(self, response, user, data):
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(response.data["name"], data["name"])
self.assertEqual(response.data["size"], 0)
self.assertEqual(response.data["mode"], "")
self.assertEqual(response.data["owner"], data.get("owner", user.id))
self.assertEqual(response.data["assignee"], data.get("assignee"))
@ -1323,7 +1333,6 @@ class TaskCreateAPITestCase(APITestCase):
self.assertEqual(response.data["overlap"], data.get("overlap", None))
self.assertEqual(response.data["segment_size"], data.get("segment_size", 0))
self.assertEqual(response.data["z_order"], data.get("z_order", False))
self.assertEqual(response.data["image_quality"], data.get("image_quality", 50))
self.assertEqual(response.data["status"], StatusChoice.ANNOTATION)
self.assertListEqual(
[label["name"] for label in data.get("labels")],
@ -1342,7 +1351,6 @@ class TaskCreateAPITestCase(APITestCase):
def test_api_v1_tasks_admin(self):
data = {
"name": "new name for the task",
"image_quality": 60,
"labels": [{
"name": "non-vehicle",
"attributes": [{
@ -1359,7 +1367,6 @@ class TaskCreateAPITestCase(APITestCase):
data = {
"name": "new name for the task",
"owner": self.assignee.id,
"image_quality": 63,
"labels": [{
"name": "car",
"attributes": [{
@ -1376,7 +1383,6 @@ class TaskCreateAPITestCase(APITestCase):
def test_api_v1_tasks_observer(self):
data = {
"name": "new name for the task",
"image_quality": 61,
"labels": [{
"name": "test",
}]
@ -1386,7 +1392,6 @@ class TaskCreateAPITestCase(APITestCase):
def test_api_v1_tasks_no_auth(self):
data = {
"name": "new name for the task",
"image_quality": 59,
"labels": [{
"name": "test",
}]
@ -1402,9 +1407,76 @@ def generate_image_file(filename):
f.name = filename
f.seek(0)
return f
return (width, height), f
def generate_image_files(*args):
images = []
image_sizes = []
for image_name in args:
img_size, image = generate_image_file(image_name)
image_sizes.append(img_size)
images.append(image)
return image_sizes, images
def generate_video_file(filename, width=1920, height=1080, duration=1, fps=25):
f = BytesIO()
total_frames = duration * fps
container = av.open(f, mode='w', format='mp4')
stream = container.add_stream('mpeg4', rate=fps)
stream.width = width
stream.height = height
stream.pix_fmt = 'yuv420p'
for frame_i in range(total_frames):
img = np.empty((stream.height, stream.width, 3))  # (H, W, C) layout expected by VideoFrame.from_ndarray
img[:, :, 0] = 0.5 + 0.5 * np.sin(2 * np.pi * (0 / 3 + frame_i / total_frames))
img[:, :, 1] = 0.5 + 0.5 * np.sin(2 * np.pi * (1 / 3 + frame_i / total_frames))
img[:, :, 2] = 0.5 + 0.5 * np.sin(2 * np.pi * (2 / 3 + frame_i / total_frames))
img = np.round(255 * img).astype(np.uint8)
img = np.clip(img, 0, 255)
frame = av.VideoFrame.from_ndarray(img, format='rgb24')
for packet in stream.encode(frame):
container.mux(packet)
# Flush stream
for packet in stream.encode():
container.mux(packet)
# Close the file
container.close()
f.name = filename
f.seek(0)
return [(width, height)] * total_frames, f
def generate_zip_archive_file(filename, count):
image_sizes = []
zip_buf = BytesIO()
with zipfile.ZipFile(zip_buf, 'w') as zip_chunk:
for idx in range(count):
image_name = "image_{:06d}.jpg".format(idx)
size, image_buf = generate_image_file(image_name)
image_sizes.append(size)
zip_chunk.writestr(image_name, image_buf.getvalue())
zip_buf.name = filename
zip_buf.seek(0)
return image_sizes, zip_buf
class TaskDataAPITestCase(APITestCase):
_image_sizes = {}
class ChunkType(str, Enum):
IMAGESET = 'imageset'
VIDEO = 'video'
def __str__(self):
return self.value
def setUp(self):
self.client = APIClient()
@ -1415,27 +1487,56 @@ class TaskDataAPITestCase(APITestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
path = os.path.join(settings.SHARE_ROOT, "test_1.jpg")
data = generate_image_file("test_1.jpg")
with open(path, 'wb') as image:
filename = "test_1.jpg"
path = os.path.join(settings.SHARE_ROOT, filename)
img_size, data = generate_image_file(filename)
with open(path, "wb") as image:
image.write(data.read())
cls._image_sizes[filename] = img_size
path = os.path.join(settings.SHARE_ROOT, "test_2.jpg")
data = generate_image_file("test_2.jpg")
with open(path, 'wb') as image:
filename = "test_2.jpg"
path = os.path.join(settings.SHARE_ROOT, filename)
img_size, data = generate_image_file(filename)
with open(path, "wb") as image:
image.write(data.read())
cls._image_sizes[filename] = img_size
path = os.path.join(settings.SHARE_ROOT, "test_3.jpg")
data = generate_image_file("test_3.jpg")
with open(path, 'wb') as image:
filename = "test_3.jpg"
path = os.path.join(settings.SHARE_ROOT, filename)
img_size, data = generate_image_file(filename)
with open(path, "wb") as image:
image.write(data.read())
cls._image_sizes[filename] = img_size
path = os.path.join(settings.SHARE_ROOT, "data", "test_3.jpg")
filename = os.path.join("data", "test_3.jpg")
path = os.path.join(settings.SHARE_ROOT, filename)
os.makedirs(os.path.dirname(path))
data = generate_image_file("test_3.jpg")
with open(path, 'wb') as image:
img_size, data = generate_image_file(filename)
with open(path, "wb") as image:
image.write(data.read())
cls._image_sizes[filename] = img_size
filename = "test_video_1.mp4"
path = os.path.join(settings.SHARE_ROOT, filename)
img_sizes, data = generate_video_file(filename, width=1280, height=720)
with open(path, "wb") as video:
video.write(data.read())
cls._image_sizes[filename] = img_sizes
filename = os.path.join("videos", "test_video_1.mp4")
path = os.path.join(settings.SHARE_ROOT, filename)
os.makedirs(os.path.dirname(path))
img_sizes, data = generate_video_file(filename, width=1280, height=720)
with open(path, "wb") as video:
video.write(data.read())
cls._image_sizes[filename] = img_sizes
filename = os.path.join("test_archive_1.zip")
path = os.path.join(settings.SHARE_ROOT, filename)
img_sizes, data = generate_zip_archive_file(filename, count=5)
with open(path, "wb") as zip_archive:
zip_archive.write(data.read())
cls._image_sizes[filename] = img_sizes
@classmethod
def tearDownClass(cls):
@ -1452,8 +1553,14 @@ class TaskDataAPITestCase(APITestCase):
path = os.path.join(settings.SHARE_ROOT, "data", "test_3.jpg")
os.remove(path)
path = os.path.join(settings.SHARE_ROOT, "test_video_1.mp4")
os.remove(path)
path = os.path.join(settings.SHARE_ROOT, "videos", "test_video_1.mp4")
os.remove(path)
def _run_api_v1_tasks_id_data(self, tid, user, data):
def _run_api_v1_tasks_id_data_post(self, tid, user, data):
with ForceLogin(user, self.client):
response = self.client.post('/api/v1/tasks/{}/data'.format(tid),
data=data)
@ -1463,59 +1570,285 @@ class TaskDataAPITestCase(APITestCase):
def _create_task(self, user, data):
with ForceLogin(user, self.client):
response = self.client.post('/api/v1/tasks', data=data, format="json")
return response
def _get_task(self, user, tid):
with ForceLogin(user, self.client):
return self.client.get("/api/v1/tasks/{}".format(tid))
def _run_api_v1_task_id_data_get(self, tid, user, data_type, data_quality=None, data_number=None):
url = '/api/v1/tasks/{}/data?type={}'.format(tid, data_type)
if data_quality is not None:
url += '&quality={}'.format(data_quality)
if data_number is not None:
url += '&number={}'.format(data_number)
with ForceLogin(user, self.client):
return self.client.get(url)
def _get_preview(self, tid, user):
return self._run_api_v1_task_id_data_get(tid, user, "preview")
def _get_compressed_chunk(self, tid, user, number):
return self._run_api_v1_task_id_data_get(tid, user, "chunk", "compressed", number)
def _get_original_chunk(self, tid, user, number):
return self._run_api_v1_task_id_data_get(tid, user, "chunk", "original", number)
def _get_compressed_frame(self, tid, user, number):
return self._run_api_v1_task_id_data_get(tid, user, "frame", "compressed", number)
def _get_original_frame(self, tid, user, number):
return self._run_api_v1_task_id_data_get(tid, user, "frame", "original", number)
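The helper methods above all hit the same data endpoint with different query parameters; the URL scheme they exercise can be sketched as a plain function (mirroring `_run_api_v1_task_id_data_get`):

```python
def build_data_url(tid, data_type, quality=None, number=None):
    """Build the /data URL queried by the tests (illustrative helper)."""
    url = '/api/v1/tasks/{}/data?type={}'.format(tid, data_type)
    if quality is not None:
        # 'compressed' or 'original'
        url += '&quality={}'.format(quality)
    if number is not None:
        # chunk or frame index
        url += '&number={}'.format(number)
    return url
```

So a preview needs only `type=preview`, while chunk and frame requests add `quality` and `number`, e.g. `type=chunk&quality=compressed&number=0` for the first compressed chunk.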
@staticmethod
def _extract_zip_chunk(chunk_buffer):
chunk = zipfile.ZipFile(chunk_buffer, mode='r')
return [Image.open(BytesIO(chunk.read(f))) for f in sorted(chunk.namelist())]
@staticmethod
def _extract_video_chunk(chunk_buffer):
container = av.open(chunk_buffer)
stream = container.streams.video[0]
return [f.to_image() for f in container.decode(stream)]
def _test_api_v1_tasks_id_data_spec(self, user, spec, data, expected_compressed_type, expected_original_type, image_sizes):
# create task
response = self._create_task(user, spec)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
task_id = response.data["id"]
# post data for the task
response = self._run_api_v1_tasks_id_data_post(task_id, user, data)
self.assertEqual(response.status_code, status.HTTP_202_ACCEPTED)
response = self._get_task(user, task_id)
expected_status_code = status.HTTP_200_OK
if user == self.user and "owner" in spec and spec["owner"] != user.id and \
"assignee" in spec and spec["assignee"] != user.id:
expected_status_code = status.HTTP_403_FORBIDDEN
self.assertEqual(response.status_code, expected_status_code)
if expected_status_code == status.HTTP_200_OK:
task = response.json()
self.assertEqual(expected_compressed_type, task["data_compressed_chunk_type"])
self.assertEqual(expected_original_type, task["data_original_chunk_type"])
self.assertEqual(len(image_sizes), task["size"])
# check preview
response = self._get_preview(task_id, user)
self.assertEqual(response.status_code, expected_status_code)
if expected_status_code == status.HTTP_200_OK:
preview = Image.open(io.BytesIO(b"".join(response.streaming_content)))
self.assertEqual(preview.size, image_sizes[0])
# check compressed chunk
response = self._get_compressed_chunk(task_id, user, 0)
self.assertEqual(response.status_code, expected_status_code)
if expected_status_code == status.HTTP_200_OK:
compressed_chunk = io.BytesIO(b"".join(response.streaming_content))
if task["data_compressed_chunk_type"] == self.ChunkType.IMAGESET:
images = self._extract_zip_chunk(compressed_chunk)
else:
images = self._extract_video_chunk(compressed_chunk)
self.assertEqual(len(images), min(task["data_chunk_size"], len(image_sizes)))
for image_idx, image in enumerate(images):
self.assertEqual(image.size, image_sizes[image_idx])
# check original chunk
response = self._get_original_chunk(task_id, user, 0)
self.assertEqual(response.status_code, expected_status_code)
if expected_status_code == status.HTTP_200_OK:
original_chunk = io.BytesIO(b"".join(response.streaming_content))
if task["data_original_chunk_type"] == self.ChunkType.IMAGESET:
images = self._extract_zip_chunk(original_chunk)
else:
images = self._extract_video_chunk(original_chunk)
for image_idx, image in enumerate(images):
self.assertEqual(image.size, image_sizes[image_idx])
self.assertEqual(len(images), min(task["data_chunk_size"], len(image_sizes)))
if task["data_original_chunk_type"] == self.ChunkType.IMAGESET:
server_files = [img for key, img in data.items() if key.startswith("server_files")]
client_files = [img for key, img in data.items() if key.startswith("client_files")]
if server_files:
source_files = [os.path.join(settings.SHARE_ROOT, f) for f in sorted(server_files)]
else:
source_files = [f for f in sorted(client_files, key=lambda e: e.name)]
source_images = []
for f in source_files:
if zipfile.is_zipfile(f):
source_images.extend(self._extract_zip_chunk(f))
else:
source_images.append(Image.open(f))
for img_idx, image in enumerate(images):
server_image = np.array(image)
source_image = np.array(source_images[img_idx])
self.assertTrue(np.array_equal(source_image, server_image))
def _test_api_v1_tasks_id_data(self, user):
data = {
task_spec = {
"name": "my task #1",
"owner": self.owner.id,
"assignee": self.assignee.id,
"overlap": 0,
"segment_size": 100,
"z_order": False,
"image_quality": 75,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
response = self._create_task(user, data)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
task_id = response.data["id"]
data = {
"client_files[0]": generate_image_file("test_1.jpg"),
"client_files[1]": generate_image_file("test_2.jpg"),
"client_files[2]": generate_image_file("test_3.jpg"),
image_sizes, images = generate_image_files("test_1.jpg", "test_2.jpg", "test_3.jpg")
task_data = {
"client_files[0]": images[0],
"client_files[1]": images[1],
"client_files[2]": images[2],
"image_quality": 75,
}
response = self._run_api_v1_tasks_id_data(task_id, user, data)
self.assertEqual(response.status_code, status.HTTP_202_ACCEPTED)
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.IMAGESET, self.ChunkType.IMAGESET, image_sizes)
data = {
task_spec = {
"name": "my task #2",
"overlap": 0,
"segment_size": 0,
"image_quality": 75,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
response = self._create_task(user, data)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
task_id = response.data["id"]
data = {
task_data = {
"server_files[0]": "test_1.jpg",
"server_files[1]": "test_2.jpg",
"server_files[2]": "test_3.jpg",
"server_files[3]": "data/test_3.jpg",
"server_files[3]": os.path.join("data", "test_3.jpg"),
"image_quality": 75,
}
image_sizes = [
self._image_sizes[task_data["server_files[3]"]],
self._image_sizes[task_data["server_files[0]"]],
self._image_sizes[task_data["server_files[1]"]],
self._image_sizes[task_data["server_files[2]"]],
]
response = self._run_api_v1_tasks_id_data(task_id, user, data)
self.assertEqual(response.status_code, status.HTTP_202_ACCEPTED)
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.IMAGESET, self.ChunkType.IMAGESET, image_sizes)
task_spec = {
"name": "my video task #1",
"overlap": 0,
"segment_size": 100,
"z_order": False,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
image_sizes, video = generate_video_file(filename="test_video_1.mp4", width=1280, height=720)
task_data = {
"client_files[0]": video,
"image_quality": 43,
}
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.VIDEO, self.ChunkType.VIDEO, image_sizes)
task_spec = {
"name": "my video task #2",
"overlap": 0,
"segment_size": 5,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
task_data = {
"server_files[0]": "test_video_1.mp4",
"image_quality": 57,
}
image_sizes = self._image_sizes[task_data["server_files[0]"]]
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.VIDEO, self.ChunkType.VIDEO, image_sizes)
task_spec = {
"name": "my video task #3",
"overlap": 0,
"segment_size": 0,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
task_data = {
"server_files[0]": os.path.join("videos", "test_video_1.mp4"),
"image_quality": 57,
}
image_sizes = self._image_sizes[task_data["server_files[0]"]]
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.VIDEO, self.ChunkType.VIDEO, image_sizes)
task_spec = {
"name": "my video task #4",
"overlap": 0,
"segment_size": 5,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
task_data = {
"server_files[0]": "test_video_1.mp4",
"image_quality": 12,
"use_zip_chunks": True,
}
image_sizes = self._image_sizes[task_data["server_files[0]"]]
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.IMAGESET, self.ChunkType.VIDEO, image_sizes)
task_spec = {
"name": "my archive task #6",
"overlap": 0,
"segment_size": 0,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
task_data = {
"server_files[0]": "test_archive_1.zip",
"image_quality": 88,
}
image_sizes = self._image_sizes[task_data["server_files[0]"]]
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.IMAGESET, self.ChunkType.IMAGESET, image_sizes)
task_spec = {
"name": "my archive task #7",
"overlap": 0,
"segment_size": 0,
"labels": [
{"name": "car"},
{"name": "person"},
]
}
image_sizes, archive = generate_zip_archive_file("test_archive_2.zip", 7)
task_data = {
"client_files[0]": archive,
"image_quality": 100,
}
self._test_api_v1_tasks_id_data_spec(user, task_spec, task_data, self.ChunkType.IMAGESET, self.ChunkType.IMAGESET, image_sizes)
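Taken together, the cases above pin down how the server picks a chunk type from the input media: image sets and archives always get zip chunks, video gets encoded video chunks, unless the client asks for zip chunks explicitly. A minimal sketch of that decision (the helper and enum are illustrative, not part of the test suite):

```python
from enum import Enum

class ChunkType(Enum):
    IMAGESET = "imageset"
    VIDEO = "video"

def expected_chunk_type(source_is_video, use_zip_chunks=False):
    # Videos are served as encoded video chunks unless the client
    # explicitly requests zip chunks; images/archives always use zips.
    if source_is_video and not use_zip_chunks:
        return ChunkType.VIDEO
    return ChunkType.IMAGESET

# Mirrors the cases above: image sets, plain video, video with zip chunks.
assert expected_chunk_type(source_is_video=False) == ChunkType.IMAGESET
assert expected_chunk_type(source_is_video=True) == ChunkType.VIDEO
assert expected_chunk_type(source_is_video=True, use_zip_chunks=True) == ChunkType.IMAGESET
```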
def test_api_v1_tasks_id_data_admin(self):
self._test_api_v1_tasks_id_data(self.admin)
@ -1534,7 +1867,6 @@ class TaskDataAPITestCase(APITestCase):
"overlap": 0,
"segment_size": 100,
"z_order": False,
"image_quality": 75,
"labels": [
{"name": "car"},
{"name": "person"},
@ -1577,7 +1909,6 @@ class JobAnnotationAPITestCase(APITestCase):
"overlap": 0,
"segment_size": 100,
"z_order": False,
"image_quality": 75,
"labels": [
{
"name": "car",
@ -1607,9 +1938,10 @@ class JobAnnotationAPITestCase(APITestCase):
tid = response.data["id"]
images = {
"client_files[0]": generate_image_file("test_1.jpg"),
"client_files[1]": generate_image_file("test_2.jpg"),
"client_files[2]": generate_image_file("test_3.jpg"),
"client_files[0]": generate_image_file("test_1.jpg")[1],
"client_files[1]": generate_image_file("test_2.jpg")[1],
"client_files[2]": generate_image_file("test_3.jpg")[1],
"image_quality": 75,
}
response = self.client.post("/api/v1/tasks/{}/data".format(tid), data=images)
assert response.status_code == status.HTTP_202_ACCEPTED
@ -2731,7 +3063,6 @@ class TaskAnnotationAPITestCase(JobAnnotationAPITestCase):
response = self._get_annotation_formats(annotator)
self.assertEqual(response.status_code, HTTP_200_OK)
if annotator is not None:
supported_formats = response.data
else:

@ -6,13 +6,12 @@ import os
import os.path as osp
import re
import traceback
from ast import literal_eval
import shutil
from datetime import datetime
from tempfile import mkstemp
from django.views.generic import RedirectView
from django.http import HttpResponseBadRequest, HttpResponseNotFound
from django.http import HttpResponse, HttpResponseNotFound
from django.shortcuts import render
from django.conf import settings
from sendfile import sendfile
@ -24,6 +23,7 @@ from rest_framework import viewsets
from rest_framework import serializers
from rest_framework.decorators import action
from rest_framework import mixins
from rest_framework.exceptions import APIException
from django_filters import rest_framework as filters
import django_rq
from django.db import IntegrityError
@ -36,8 +36,8 @@ from cvat.apps.authentication.decorators import login_required
from .log import slogger, clogger
from cvat.apps.engine.models import StatusChoice, Task, Job, Plugin
from cvat.apps.engine.serializers import (TaskSerializer, UserSerializer,
ExceptionSerializer, AboutSerializer, JobSerializer, ImageMetaSerializer,
RqStatusSerializer, TaskDataSerializer, LabeledDataSerializer,
ExceptionSerializer, AboutSerializer, JobSerializer, DataMetaSerializer,
RqStatusSerializer, DataSerializer, LabeledDataSerializer,
PluginSerializer, FileInfoSerializer, LogEventSerializer,
ProjectSerializer, BasicUserSerializer)
from cvat.apps.annotation.serializers import AnnotationFileSerializer, AnnotationFormatSerializer
@ -47,6 +47,7 @@ from cvat.apps.authentication import auth
from rest_framework.permissions import SAFE_METHODS
from cvat.apps.annotation.models import AnnotationDumper, AnnotationLoader
from cvat.apps.annotation.format import get_annotation_formats
from cvat.apps.engine.frame_provider import FrameProvider
import cvat.apps.dataset_manager.task as DatumaroTask
from drf_yasg.utils import swagger_auto_schema
@ -375,6 +376,9 @@ class TaskViewSet(auth.TaskGetQuerySetMixin, viewsets.ModelViewSet):
task_dirname = instance.get_task_dirname()
super().perform_destroy(instance)
shutil.rmtree(task_dirname, ignore_errors=True)
if instance.data and not instance.data.tasks.all():
shutil.rmtree(instance.data.get_data_dirname(), ignore_errors=True)
instance.data.delete()
@swagger_auto_schema(method='get', operation_summary='Returns a list of jobs for a specific task',
responses={'200': JobSerializer(many=True)})
@ -388,17 +392,79 @@ class TaskViewSet(auth.TaskGetQuerySetMixin, viewsets.ModelViewSet):
return Response(serializer.data)
@swagger_auto_schema(method='post', operation_summary='Method permanently attaches images or video to a task')
@action(detail=True, methods=['POST'], serializer_class=TaskDataSerializer)
@swagger_auto_schema(method='get', operation_summary='Method returns data for a specific task',
manual_parameters=[
openapi.Parameter('type', in_=openapi.IN_QUERY, required=True, type=openapi.TYPE_STRING,
enum=['chunk', 'frame', 'preview'],
description="Specifies the type of the requested data"),
openapi.Parameter('quality', in_=openapi.IN_QUERY, required=True, type=openapi.TYPE_STRING,
enum=['compressed', 'original'],
description="Specifies the quality level of the requested data, doesn't matter for 'preview' type"),
openapi.Parameter('number', in_=openapi.IN_QUERY, required=True, type=openapi.TYPE_NUMBER,
description="A unique number value identifying chunk or frame, doesn't matter for 'preview' type"),
]
)
@action(detail=True, methods=['POST', 'GET'])
def data(self, request, pk):
"""
This data cannot be changed later
"""
db_task = self.get_object() # call check_object_permissions as well
serializer = TaskDataSerializer(db_task, data=request.data)
if serializer.is_valid(raise_exception=True):
serializer.save()
task.create(db_task.id, serializer.data)
if request.method == 'POST':
db_task = self.get_object() # call check_object_permissions as well
serializer = DataSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
db_data = serializer.save()
db_task.data = db_data
db_task.save()
data = {k:v for k, v in serializer.data.items()}
data['use_zip_chunks'] = serializer.validated_data['use_zip_chunks']
task.create(db_task.id, data)
return Response(serializer.data, status=status.HTTP_202_ACCEPTED)
else:
data_type = request.query_params.get('type', None)
data_id = request.query_params.get('number', None)
data_quality = request.query_params.get('quality', 'compressed')
possible_data_type_values = ('chunk', 'frame', 'preview')
possible_quality_values = ('compressed', 'original')
if not data_type or data_type not in possible_data_type_values:
return Response(data='data type not specified or has wrong value', status=status.HTTP_400_BAD_REQUEST)
elif data_type == 'chunk' or data_type == 'frame':
if not data_id:
return Response(data='number not specified', status=status.HTTP_400_BAD_REQUEST)
elif data_quality not in possible_quality_values:
return Response(data='wrong quality value', status=status.HTTP_400_BAD_REQUEST)
try:
db_task = self.get_object()
frame_provider = FrameProvider(db_task.data)
if data_type == 'chunk':
data_id = int(data_id)
data_quality = FrameProvider.Quality.COMPRESSED \
if data_quality == 'compressed' else FrameProvider.Quality.ORIGINAL
path = os.path.realpath(frame_provider.get_chunk(data_id, data_quality))
# Follow symbolic links if the chunk is a link to a real image; otherwise
# mimetype detection inside sendfile will work incorrectly.
return sendfile(request, path)
elif data_type == 'frame':
data_id = int(data_id)
data_quality = FrameProvider.Quality.COMPRESSED \
if data_quality == 'compressed' else FrameProvider.Quality.ORIGINAL
buf, mime = frame_provider.get_frame(data_id, data_quality)
return HttpResponse(buf.getvalue(), content_type=mime)
elif data_type == 'preview':
return sendfile(request, frame_provider.get_preview())
else:
return Response(data='unknown data type {}.'.format(data_type), status=status.HTTP_400_BAD_REQUEST)
except APIException as e:
return Response(data=e.default_detail, status=e.status_code)
except Exception as e:
msg = 'cannot get requested data type: {}, number: {}, quality: {}'.format(data_type, data_id, data_quality)
slogger.task[pk].error(msg, exc_info=True)
return Response(data=msg + '\n' + str(e), status=status.HTTP_400_BAD_REQUEST)
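From the client side, the unified endpoint above is addressed purely through query parameters. A hedged sketch of the URL construction (the helper name is illustrative; the path and parameters are the ones declared in the swagger schema above):

```python
def task_data_url(task_id, data_type, number=None, quality='compressed'):
    # 'preview' ignores number and quality on the server side,
    # so they are simply omitted here.
    url = '/api/v1/tasks/{}/data?type={}'.format(task_id, data_type)
    if data_type in ('chunk', 'frame'):
        url += '&number={}&quality={}'.format(number, quality)
    return url

assert task_data_url(7, 'chunk', 0, 'original') == \
    '/api/v1/tasks/7/data?type=chunk&number=0&quality=original'
assert task_data_url(7, 'preview') == '/api/v1/tasks/7/data?type=preview'
```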
@swagger_auto_schema(method='get', operation_summary='Method returns annotations for a specific task')
@swagger_auto_schema(method='put', operation_summary='Method performs an update of all annotations in a specific task')
@ -477,7 +543,7 @@ class TaskViewSet(auth.TaskGetQuerySetMixin, viewsets.ModelViewSet):
raise serializers.ValidationError(
"Please specify a correct 'format' parameter for the request")
file_path = os.path.join(db_task.get_task_dirname(),
file_path = os.path.join(db_task.get_task_artifacts_dirname(),
"{}.{}.{}.{}".format(filename, username, timestamp, db_dumper.format.lower()))
queue = django_rq.get_queue("default")
@ -548,40 +614,30 @@ class TaskViewSet(auth.TaskGetQuerySetMixin, viewsets.ModelViewSet):
return response
@swagger_auto_schema(method='get', operation_summary='Method provides a list of sizes (width, height) of media files which are related with the task',
responses={'200': ImageMetaSerializer(many=True)})
@action(detail=True, methods=['GET'], serializer_class=ImageMetaSerializer,
url_path='frames/meta')
def data_info(self, request, pk):
try:
db_task = self.get_object() # call check_object_permissions as well
meta_cache_file = open(db_task.get_image_meta_cache_path())
except OSError:
task.make_image_meta_cache(db_task)
meta_cache_file = open(db_task.get_image_meta_cache_path())
@staticmethod
@swagger_auto_schema(method='get', operation_summary='Method provides meta information about media files related to the task',
responses={'200': DataMetaSerializer()})
@action(detail=True, methods=['GET'], serializer_class=DataMetaSerializer,
url_path='data/meta')
def data_info(request, pk):
db_task = models.Task.objects.prefetch_related('data__images').select_related('data__video').get(pk=pk)
if hasattr(db_task.data, 'video'):
media = [db_task.data.video]
else:
media = list(db_task.data.images.order_by('frame'))
data = literal_eval(meta_cache_file.read())
serializer = ImageMetaSerializer(many=True, data=data['original_size'])
if serializer.is_valid(raise_exception=True):
return Response(serializer.data)
frame_meta = [{
'width': item.width,
'height': item.height,
'name': item.path,
} for item in media]
@swagger_auto_schema(method='get', manual_parameters=[openapi.Parameter('frame', openapi.IN_PATH, required=True,
description="A unique integer value identifying this frame", type=openapi.TYPE_INTEGER)],
operation_summary='Method returns a specific frame for a specific task',
responses={'200': openapi.Response(description='frame')})
@action(detail=True, methods=['GET'], serializer_class=None,
url_path='frames/(?P<frame>\d+)')
def frame(self, request, pk, frame):
try:
# Follow symbol links if the frame is a link on a real image otherwise
# mimetype detection inside sendfile will work incorrectly.
db_task = self.get_object()
path = os.path.realpath(db_task.get_frame_path(frame))
return sendfile(request, path)
except Exception as e:
slogger.task[pk].error(
"cannot get frame #{}".format(frame), exc_info=True)
return HttpResponseBadRequest(str(e))
db_data = db_task.data
db_data.frames = frame_meta
serializer = DataMetaSerializer(db_data)
return Response(serializer.data)
@swagger_auto_schema(method='get', operation_summary='Export task as a dataset in a specific format',
manual_parameters=[openapi.Parameter('action', in_=openapi.IN_QUERY,

@ -7,13 +7,14 @@ import rq
import cv2
import math
import numpy
import fnmatch
import itertools
from openvino.inference_engine import IENetwork, IEPlugin
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import euclidean, cosine
from cvat.apps.engine.models import Job
from cvat.apps.engine.frame_provider import FrameProvider
class ReID:
@ -33,22 +34,20 @@ class ReID:
def __init__(self, jid, data):
self.__threshold = data["threshold"]
self.__max_distance = data["maxDistance"]
self.__frame_urls = {}
self.__frame_boxes = {}
db_job = Job.objects.select_related('segment__task').get(pk = jid)
db_segment = db_job.segment
db_task = db_segment.task
self.__frame_iter = itertools.islice(
FrameProvider(db_task.data).get_frames(FrameProvider.Quality.ORIGINAL),
db_segment.start_frame,
db_segment.stop_frame + 1,
)
self.__stop_frame = db_segment.stop_frame
for root, _, filenames in os.walk(db_task.get_data_dirname()):
for filename in fnmatch.filter(filenames, '*.jpg'):
frame = int(os.path.splitext(filename)[0])
if frame >= db_segment.start_frame and frame <= db_segment.stop_frame:
self.__frame_urls[frame] = os.path.join(root, filename)
for frame in self.__frame_urls:
for frame in range(db_segment.start_frame, db_segment.stop_frame + 1):
self.__frame_boxes[frame] = [box for box in data["boxes"] if box["frame"] == frame]
IE_PLUGINS_PATH = os.getenv('IE_PLUGINS_PATH', None)
@ -151,6 +150,7 @@ class ReID:
job = rq.get_current_job()
box_tracks = {}
next_image = cv2.imdecode(numpy.fromstring(next(self.__frame_iter).read(), numpy.uint8), cv2.IMREAD_COLOR)
for idx, (cur_frame, next_frame) in enumerate(list(zip(frames[:-1], frames[1:]))):
job.refresh()
if "cancel" in job.meta:
@ -171,8 +171,8 @@ class ReID:
if not (len(cur_boxes) and len(next_boxes)):
continue
cur_image = cv2.imread(self.__frame_urls[cur_frame], cv2.IMREAD_COLOR)
next_image = cv2.imread(self.__frame_urls[next_frame], cv2.IMREAD_COLOR)
cur_image = next_image
next_image = cv2.imdecode(numpy.fromstring(next(self.__frame_iter).read(), numpy.uint8), cv2.IMREAD_COLOR)
difference_matrix = self.__compute_difference_matrix(cur_boxes, next_boxes, cur_image, next_image)
cur_idxs, next_idxs = linear_sum_assignment(difference_matrix)
for idx, cur_idx in enumerate(cur_idxs):

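The ReID change above replaces per-frame disk reads with a sequential frame iterator, decoding each frame once: the previously decoded "next" image is reused as the new "current" image. That sliding-pair pattern can be sketched independently of OpenCV (frame values here are illustrative placeholders):

```python
def iter_frame_pairs(frames):
    # Decode each frame once: the previous 'next' frame is reused
    # as the new 'current' frame instead of being decoded again.
    it = iter(frames)
    nxt = next(it, None)
    for item in it:
        cur, nxt = nxt, item
        yield cur, nxt

pairs = list(iter_frame_pairs(['f0', 'f1', 'f2']))
assert pairs == [('f0', 'f1'), ('f1', 'f2')]
```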
@ -10,10 +10,9 @@ from cvat.apps.authentication.decorators import login_required
from cvat.apps.engine.models import Task as TaskModel
from cvat.apps.engine.serializers import LabeledDataSerializer
from cvat.apps.engine.annotation import put_task_data
from cvat.apps.engine.frame_provider import FrameProvider
import django_rq
import fnmatch
import json
import os
import rq
@ -91,7 +90,7 @@ def run_inference_engine_annotation(image_list, labels_mapping, treshold):
return result
def run_tensorflow_annotation(image_list, labels_mapping, treshold):
def run_tensorflow_annotation(frame_provider, labels_mapping, treshold):
def _normalize_box(box, w, h):
xmin = int(box[1] * w)
ymin = int(box[0] * h)
@ -117,17 +116,18 @@ def run_tensorflow_annotation(image_list, labels_mapping, treshold):
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(graph=detection_graph, config=config)
for image_num, image_path in enumerate(image_list):
frames = frame_provider.get_frames(frame_provider.Quality.ORIGINAL)
for image_num, image in enumerate(frames):
job.refresh()
if 'cancel' in job.meta:
del job.meta['cancel']
job.save()
return None
job.meta['progress'] = image_num * 100 / len(image_list)
job.meta['progress'] = image_num * 100 / len(frame_provider)
job.save_meta()
image = Image.open(image_path)
image = Image.open(image)
width, height = image.size
if width > 1920 or height > 1080:
image = image.resize((width // 2, height // 2), Image.ANTIALIAS)
@ -154,20 +154,6 @@ def run_tensorflow_annotation(image_list, labels_mapping, treshold):
del sess
return result
def make_image_list(path_to_data):
def get_image_key(item):
return int(os.path.splitext(os.path.basename(item))[0])
image_list = []
for root, dirnames, filenames in os.walk(path_to_data):
for filename in fnmatch.filter(filenames, '*.jpg'):
image_list.append(os.path.join(root, filename))
image_list.sort(key=get_image_key)
return image_list
def convert_to_cvat_format(data):
result = {
"tracks": [],
@ -202,7 +188,7 @@ def create_thread(tid, labels_mapping, user):
# Get job indexes and segment length
db_task = TaskModel.objects.get(pk=tid)
# Get image list
image_list = make_image_list(db_task.get_data_dirname())
image_list = FrameProvider(db_task.data)
# Run auto annotation by tf
result = None

@ -6,7 +6,6 @@ django-cacheops==4.0.6
django-compressor==2.2
django-rq==2.0.0
EasyProcess==0.2.3
ffmpy==0.2.2
Pillow==6.2.0
numpy==1.16.2
python-ldap==3.0.0
@ -46,6 +45,7 @@ h5py==2.9.0
imgaug==0.2.9
django-cors-headers==3.2.0
furl==2.0.0
av==6.2.0
# The package is used by pyunpack as a command line tool to support multiple
# archives. Don't use as a python module because it has GPL license.
patool==1.12

@ -19,6 +19,8 @@ import sys
import fcntl
import shutil
import subprocess
import mimetypes
mimetypes.add_type("application/wasm", ".wasm", True)
from pathlib import Path
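The `.wasm` registration above (needed so the browser-side decoder is served with the correct MIME type) can be checked directly with the standard library:

```python
import mimetypes

# Same registration as in settings; strict=True adds it to the official map.
mimetypes.add_type("application/wasm", ".wasm", True)
assert mimetypes.guess_type("decoder.wasm")[0] == "application/wasm"
```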
@ -323,6 +325,34 @@ USE_TZ = True
CSRF_COOKIE_NAME = "csrftoken"
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
os.makedirs(STATIC_ROOT, exist_ok=True)
DATA_ROOT = os.path.join(BASE_DIR, 'data')
os.makedirs(DATA_ROOT, exist_ok=True)
MEDIA_DATA_ROOT = os.path.join(DATA_ROOT, 'data')
os.makedirs(MEDIA_DATA_ROOT, exist_ok=True)
TASKS_ROOT = os.path.join(DATA_ROOT, 'tasks')
os.makedirs(TASKS_ROOT, exist_ok=True)
SHARE_ROOT = os.path.join(BASE_DIR, 'share')
os.makedirs(SHARE_ROOT, exist_ok=True)
MODELS_ROOT = os.path.join(DATA_ROOT, 'models')
os.makedirs(MODELS_ROOT, exist_ok=True)
LOGS_ROOT = os.path.join(BASE_DIR, 'logs')
os.makedirs(LOGS_ROOT, exist_ok=True)
MIGRATIONS_LOGS_ROOT = os.path.join(LOGS_ROOT, 'migrations')
os.makedirs(MIGRATIONS_LOGS_ROOT, exist_ok=True)
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
@ -381,22 +411,6 @@ if os.getenv('DJANGO_LOG_SERVER_HOST'):
LOGGING['loggers']['cvat.server']['handlers'] += ['logstash']
LOGGING['loggers']['cvat.client']['handlers'] += ['logstash']
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
os.makedirs(STATIC_ROOT, exist_ok=True)
DATA_ROOT = os.path.join(BASE_DIR, 'data')
os.makedirs(DATA_ROOT, exist_ok=True)
SHARE_ROOT = os.path.join(BASE_DIR, 'share')
os.makedirs(SHARE_ROOT, exist_ok=True)
MODELS_ROOT = os.path.join(BASE_DIR, 'models')
os.makedirs(MODELS_ROOT, exist_ok=True)
DATA_UPLOAD_MAX_MEMORY_SIZE = 100 * 1024 * 1024 # 100 MB
DATA_UPLOAD_MAX_NUMBER_FIELDS = None # this django check disabled
LOCAL_LOAD_MAX_FILES_COUNT = 500
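The settings hunk above reshapes the on-disk layout: chunked media now lives under `DATA_ROOT/data`, per-task artifacts under `DATA_ROOT/tasks`, and `MODELS_ROOT` moves from `BASE_DIR` into `DATA_ROOT`. A self-contained sketch of the resulting tree, using a temporary directory in place of `BASE_DIR`:

```python
import os
import tempfile

# Illustrative reconstruction of the new layout introduced by this patch.
BASE_DIR = tempfile.mkdtemp()
DATA_ROOT = os.path.join(BASE_DIR, 'data')
MEDIA_DATA_ROOT = os.path.join(DATA_ROOT, 'data')   # raw/chunked media
TASKS_ROOT = os.path.join(DATA_ROOT, 'tasks')       # per-task artifacts
MODELS_ROOT = os.path.join(DATA_ROOT, 'models')     # moved under DATA_ROOT
for path in (DATA_ROOT, MEDIA_DATA_ROOT, TASKS_ROOT, MODELS_ROOT):
    os.makedirs(path, exist_ok=True)

assert all(map(os.path.isdir, (MEDIA_DATA_ROOT, TASKS_ROOT, MODELS_ROOT)))
```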

@ -9,9 +9,20 @@ _temp_dir = tempfile.TemporaryDirectory(suffix="cvat")
DATA_ROOT = os.path.join(_temp_dir.name, 'data')
os.makedirs(DATA_ROOT, exist_ok=True)
SHARE_ROOT = os.path.join(_temp_dir.name, 'share')
os.makedirs(SHARE_ROOT, exist_ok=True)
MEDIA_DATA_ROOT = os.path.join(DATA_ROOT, 'data')
os.makedirs(MEDIA_DATA_ROOT, exist_ok=True)
TASKS_ROOT = os.path.join(DATA_ROOT, 'tasks')
os.makedirs(TASKS_ROOT, exist_ok=True)
MODELS_ROOT = os.path.join(DATA_ROOT, 'models')
os.makedirs(MODELS_ROOT, exist_ok=True)
# To avoid ERROR django.security.SuspiciousFileOperation:
# The joined path (...) is located outside of the base path component
MEDIA_ROOT = _temp_dir.name

@ -38,4 +38,4 @@ def import_tf():
except AttributeError:
pass
return tf
return tf

@ -4,7 +4,10 @@ import logging
import os
import requests
from io import BytesIO
import mimetypes
from PIL import Image
from .definition import ResourceType
log = logging.getLogger(__name__)
@ -18,7 +21,7 @@ class CLI():
def tasks_data(self, task_id, resource_type, resources):
""" Add local, remote, or shared files to an existing task. """
url = self.api.tasks_id_data(task_id)
data = None
data = {}
files = None
if resource_type == ResourceType.LOCAL:
files = {'client_files[{}]'.format(i): open(f, 'rb') for i, f in enumerate(resources)}
@ -26,6 +29,7 @@ class CLI():
data = {'remote_files[{}]'.format(i): f for i, f in enumerate(resources)}
elif resource_type == ResourceType.SHARE:
data = {'server_files[{}]'.format(i): f for i, f in enumerate(resources)}
data['image_quality'] = 50
response = self.session.post(url, data=data, files=files)
response.raise_for_status()
@ -56,7 +60,7 @@ class CLI():
data = {'name': name,
'labels': labels,
'bug_tracker': bug,
'image_quality': 50}
}
response = self.session.post(url, json=data)
response.raise_for_status()
response_json = response.json()
@ -77,15 +81,23 @@ class CLI():
else:
raise e
def tasks_frame(self, task_id, frame_ids, outdir='', **kwargs):
def tasks_frame(self, task_id, frame_ids, outdir='', quality='original', **kwargs):
""" Download the requested frame numbers for a task and save images as
task_<ID>_frame_<FRAME>.jpg."""
for frame_id in frame_ids:
url = self.api.tasks_id_frame_id(task_id, frame_id)
url = self.api.tasks_id_frame_id(task_id, frame_id, quality)
response = self.session.get(url)
response.raise_for_status()
im = Image.open(BytesIO(response.content))
outfile = 'task_{}_frame_{:06d}.jpg'.format(task_id, frame_id)
mime_type = im.get_format_mimetype() or 'image/jpeg'
im_ext = mimetypes.guess_extension(mime_type)
# FIXME It is better to use meta information from the server
# to determine the extension
# replace '.jpe' or '.jpeg' with the more common '.jpg'
if im_ext in ('.jpe', '.jpeg', None):
im_ext = '.jpg'
outfile = 'task_{}_frame_{:06d}{}'.format(task_id, frame_id, im_ext)
im.save(os.path.join(outdir, outfile))
def tasks_dump(self, task_id, fileformat, filename, **kwargs):
@ -149,8 +161,8 @@ class CVAT_API_V1():
def tasks_id_data(self, task_id):
return self.tasks_id(task_id) + '/data'
def tasks_id_frame_id(self, task_id, frame_id):
return self.tasks_id(task_id) + '/frames/{}'.format(frame_id)
def tasks_id_frame_id(self, task_id, frame_id, quality):
return self.tasks_id(task_id) + '/data?type=frame&number={}&quality={}'.format(frame_id, quality)
def tasks_id_annotations_format(self, task_id, fileformat):
return self.tasks_id(task_id) + '/annotations?format={}' \

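The extension normalization used by the CLI above can be sketched as a standalone helper and checked; the function name is illustrative, and the fallback MIME type is an assumption:

```python
import mimetypes

def frame_extension(mime_type):
    # Map the server-reported MIME type to a file extension,
    # normalizing the uncommon '.jpe'/'.jpeg' variants to '.jpg'.
    ext = mimetypes.guess_extension(mime_type or 'image/jpeg')
    if ext in ('.jpe', '.jpeg', None):
        ext = '.jpg'
    return ext

assert frame_extension('image/jpeg') == '.jpg'
assert frame_extension('image/png') == '.png'
```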
@ -180,7 +180,14 @@ frames_parser.add_argument(
'--outdir',
type=str,
default='',
help='directory to save images'
help='directory to save images (default: CWD)'
)
frames_parser.add_argument(
'--quality',
type=str,
choices=('original', 'compressed'),
default='original',
help='choose quality of images (default: %(default)s)'
)
#######################################################################

@ -37,7 +37,7 @@ class TestCLI(APITestCase):
def setUpClass(cls):
super().setUpClass()
cls.img_file = os.path.join(settings.SHARE_ROOT, 'test_cli.jpg')
data = generate_image_file(cls.img_file)
_, data = generate_image_file(cls.img_file)
with open(cls.img_file, 'wb') as image:
image.write(data.read())
@ -65,9 +65,15 @@ class TestCLI(APITestCase):
self.assertTrue(os.path.exists(path))
os.remove(path)
def test_tasks_frame_original(self):
path = os.path.join(settings.SHARE_ROOT, 'task_1_frame_000000.jpg')
self.cli.tasks_frame(1, [0], outdir=settings.SHARE_ROOT, quality='original')
self.assertTrue(os.path.exists(path))
os.remove(path)
def test_tasks_frame(self):
path = os.path.join(settings.SHARE_ROOT, 'task_1_frame_000000.jpg')
self.cli.tasks_frame(1, [0], outdir=settings.SHARE_ROOT)
self.cli.tasks_frame(1, [0], outdir=settings.SHARE_ROOT, quality='compressed')
self.assertTrue(os.path.exists(path))
os.remove(path)
