Frequently asked questions
- How to update CVAT
- Kibana app works, but no logs are displayed
- How to change default CVAT hostname or port
- How to configure connected share folder on Windows
- How to make unassigned tasks not visible to all users
- Can Nvidia GPU be used to run inference with my own model
How to update CVAT
Before upgrading, please follow the official Docker manual and back up all CVAT volumes.
To update CVAT, clone or download the new version of CVAT and rebuild the CVAT docker images as usual:
docker-compose build
and run containers:
docker-compose up -d
Sometimes the update process takes a long time due to changes in the database schema and data.
You can check the current status with docker logs cvat.
Please do not terminate the migration; wait until the process is complete.
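The update steps above can be sketched as a single shell sequence. The volume name cvat_cvat_data below is an example for a default compose setup and may differ in your deployment; adjust it before running.

```shell
# Back up a CVAT volume first (see the official Docker manual on volume backups).
# "cvat_cvat_data" is an assumed volume name; list yours with: docker volume ls
docker run --rm -v cvat_cvat_data:/data -v "$(pwd)":/backup ubuntu \
    tar czf /backup/cvat_data.tar.gz /data

# Fetch the new version and rebuild the images:
git pull                  # or download and unpack a release archive
docker-compose build

# Recreate the containers; database migrations run on startup:
docker-compose up -d

# Watch migration progress and wait until it finishes:
docker logs -f cvat
```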
Kibana app works, but no logs are displayed
Make sure there are no error messages from Elasticsearch:
docker logs cvat_elasticsearch
If you see errors like this:
flood stage disk watermark [95%] exceeded on [uMg9WI30QIOJxxJNDiIPgQ][uMg9WI3][/usr/share/elasticsearch/data/nodes/0] free: 116.5gb[4%], all indices on this node will be marked read-only
You should free up disk space or change the watermark thresholds; see the Elasticsearch documentation for details.
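As a sketch of changing the thresholds via the Elasticsearch cluster settings API: the commands below assume Elasticsearch is reachable on localhost:9200 (e.g. from inside the cvat_elasticsearch container), and the percentage values are illustrative, not recommendations.

```shell
# Raise the disk watermarks (example values; tune for your disk layout):
curl -X PUT "localhost:9200/_cluster/settings" \
     -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "93%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'

# After freeing disk space, clear the read-only block that the
# flood stage watermark placed on the indices:
curl -X PUT "localhost:9200/_all/_settings" \
     -H 'Content-Type: application/json' \
     -d '{"index.blocks.read_only_allow_delete": null}'
```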
How to change default CVAT hostname or port
The best way to do that is to create docker-compose.override.yml and override the host and port settings there:
version: "2.3"
services:
  cvat_proxy:
    environment:
      CVAT_HOST: example.com
    ports:
      - "80:80"
Please don't forget to include this file in docker-compose commands
using the -f option (if the file is named docker-compose.override.yml,
Compose picks it up automatically and -f can be omitted).
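For example, passing both files explicitly with -f looks like this (file order matters: later files override earlier ones):

```shell
docker-compose \
    -f docker-compose.yml \
    -f docker-compose.override.yml \
    up -d
```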
How to configure connected share folder on Windows
Follow the Docker manual and configure the directory that you want to use as a shared directory:
After that, it should be possible to use this directory as a CVAT share:
version: "2.3"
services:
  cvat:
    volumes:
      - cvat_share:/home/django/share:ro

volumes:
  cvat_share:
    driver_opts:
      type: none
      device: /d/my_cvat_share
      o: bind
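A quick way to confirm the bind mount works is to list the directory from inside the running container (the container name cvat matches the default compose setup used elsewhere in this FAQ):

```shell
# The shared files should be visible at the mount point:
docker exec -it cvat ls /home/django/share
```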
How to make unassigned tasks not visible to all users
Set reduce_task_visibility variable to True.
Can Nvidia GPU be used to run inference with my own model
An Nvidia GPU can be used to accelerate inference of the tf_annotation and auto_segmentation models.
OpenVINO doesn't support Nvidia cards, so you can run your own models only on CPU.