While deploying I got the following error:
```
./Miniconda3-latest-Linux-x86_64.sh: 438: [[: not found
./Miniconda3-latest-Linux-x86_64.sh: 444: [[: not found
CondaFileIOError: '/root/miniconda3/pkgs/envs/*/env.txt'. [Errno 2] No such file or directory: '/root/miniconda3/pkgs/envs/*/env.txt'
```
### Motivation and context
This is a very simple pull request. The type of the credentials
parameter of `make_client` is currently `Optional[Tuple[int, int]]`, but
it should be `Optional[Tuple[str, str]]` as used by `Client#login`. This
PR makes that change.
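For reference, the corrected signature would look roughly like this (a sketch; only the `credentials` annotation is the actual change, the rest is illustrative):
```
from typing import Optional, Tuple

def make_client(
    host: str,
    *,
    port: Optional[int] = None,
    # (username, password), matching what Client.login accepts:
    credentials: Optional[Tuple[str, str]] = None,
) -> "Client": ...
```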
### How has this been tested?
The typing does not affect the functionality of the code (just the
warnings I get in an IDE).
This PR adds an option to specify the file-to-job mapping explicitly during
task creation. This option is incompatible with most other job-related
parameters, such as `sorting_method` and `frame_step`.
- Added a new task creation parameter (`job_file_mapping`) to set a
custom file-to-job mapping during task creation (sketched below)
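A hypothetical usage sketch (assuming the high-level SDK forwards the new parameter with the other data parameters, e.g. via a `data_params` argument; names and values are illustrative):
```
from cvat_sdk import make_client
from cvat_sdk.api_client import models

# Hypothetical sketch: each inner list becomes one job, in order.
job_file_mapping = [
    ["frame_001.jpg", "frame_002.jpg"],  # job 1
    ["frame_003.jpg"],                   # job 2
]

with make_client("app.cvat.ai", credentials=("user", "password")) as client:
    client.tasks.create_from_data(
        spec=models.TaskWriteRequest(name="example", labels=[{"name": "cat"}]),
        resources=["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"],
        # assumption: the new parameter travels with the other data parameters
        data_params={"job_file_mapping": job_file_mapping},
    )
```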
Pull Request regarding Issue #2987
PIL.Image conversions from I;16 to L or RGB are currently unsuccessful.
See the corresponding issue in the Pillow GitHub repository (opened in
2018, so no changes are to be expected):
https://github.com/python-pillow/Pillow/issues/3011
The proposed changes fix this issue at least for the mode 'I;16' and
deliver a possible solution for other modes (e.g. I;16B/L/N).
This results in a correct calculation of the preview thumbnail and of
the actual image that the annotation will be performed on.
We have used this solution on our own dataset and created annotations
accordingly.
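For context, a common workaround for the 'I;16' case goes through NumPy rather than `Image.convert` (a sketch; the `// 256` rescale assumes the data uses the full 16-bit range):
```
import numpy as np
from PIL import Image

img = Image.open("frame.png")            # mode 'I;16'
arr = np.asarray(img, dtype=np.uint16)   # raw 16-bit pixel values
img8 = Image.fromarray((arr // 256).astype(np.uint8), mode="L")
rgb = img8.convert("RGB")                # further conversions now behave as expected
```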
Use the OpenCV contour function to extract contours instead of the
scikit-image function, to solve the problem of poorly drawn contours.
See issue #4660:
https://github.com/openvinotoolkit/cvat/issues/4660
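For reference, the replacement is roughly `skimage.measure.find_contours` → `cv2.findContours` (a sketch on a binary mask; variable names are illustrative):
```
import cv2
import numpy as np

mask = (probabilities > 0.5).astype(np.uint8)  # binary mask from the model output (illustrative)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# keep the largest contour as (x, y) points
contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
```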
This is pretty much the same fix applied to f-BRS in #5384. We've been
using HRNet for a while, and now and then we receive "500 errors", just
as reported in #5299, when someone forgets to drop the alpha channel from
our images.
### Motivation and context
The RuntimeError is a little bit different, but comes from the same
issue: RGBA instead of RGB images:
> RuntimeError: Given groups=1, weight of size [16, 3, 1, 1], expected
> input[*, 4, *, *] to have 3 channels, but got 4 channels instead.
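As in #5384, the fix boils down to dropping the alpha channel before the image reaches the network (a sketch; `image_path` is illustrative):
```
from PIL import Image

image = Image.open(image_path)
if image.mode != "RGB":
    image = image.convert("RGB")  # RGBA inputs lose their alpha channel here
```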
### How has this been tested?
I created a task with images with and without an alpha channel, and the
interactor now works on both.
Django REST Framework ignores the Content-Type on request body parts, so
it doesn't know that they are JSON-encoded. Instead, it just tries to
decode each part as if it were a `str()`-encoded value.
Change the encoding to match the decoding. The only type this matters
for is `str`, because `json.dumps` and `str` produce different encodings
for `str` values.
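Concretely, for `str` values the two encodings differ (standard-library behavior):
```
import json

json.dumps("hello")  # '"hello"' -- with quotes, not what DRF's decoder expects
str("hello")         # 'hello'   -- what DRF actually decodes parts as
```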
Remove `none_type` from the list of encodable types since, to my
knowledge, there's no way to encode a `None` value as a
`multipart/form-data` part in a way that DRF will understand.
### Motivation and context
Integration of YOLOv7 as a serverless Nuclio function that can be used
for auto-labeling. YOLOv7 is the SOTA at the time of this PR, so it
makes sense to support it in CVAT. The integration into CVAT is quite
simple: it is Docker-based, modeled on the Ultralytics YOLOv5
integration, with a COCO-pretrained model
(https://github.com/WongKinYiu/yolov7) and a Docker image
(https://hub.docker.com/r/ultralytics/yolov5).
related issue: #5548
### How has this been tested?
Automatic annotation was run using YOLOv7 on a custom dataset.
The serverless function was deployed using
```
nuctl deploy --project-name cvat \
--path serverless/onnx/WongKinYiu/yolov7/nuclio \
--volume `pwd`/serverless/common:/opt/nuclio/common \
--platform local
```
Then the function was tested using the 'Automatic annotation' action, and
the auto-generated labels were checked to verify that no coordinate
misfit occurs.
### Use custom model:
1. Export your model with NMS for image resolution of 640x640
(preferable).
2. Copy your custom model yolov7-custom.onnx to /serverless/common
3. Modify function.yaml file according to your labels.
4. Modify `model_handler.py` as follows:
```
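# assumption: the handler resolves this name inside /opt/nuclio/common,
# mounted via the --volume flag in the deploy command above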
self.model_path = "yolov7-custom.onnx"
```
Co-authored-by: Nikita Manovich <nikita@cvat.ai>
Co-authored-by: yasakova-anastasia <yasakova_anastasiya@mail.ru>
In my understanding of https://github.com/nuclio/nuclio/issues/1821, the
Nuctl (1.8.14) CLI is looking for a path that is only valid in a Linux
environment, which it does not find when running via Git Bash (even when
using the Windows version of Nuctl). However, installing CVAT onto a
Linux VM allows Nuctl to locate this path and operate normally. I
initially found this when setting up CVAT myself on Git Bash as per the
given instructions for Windows 10.
(I am still learning how to use GitHub as far as pull requests / forks /
etc work, sorry if this is not the right way to approach this change.
Please let me know if I've missed something important.)
### How has this been tested?
This is only a change to instructions, but I did test this on multiple
machines. As long as the machine is capable of running a Linux kernel,
it shouldn't run into any issues.
This will let users run their PyTorch code without network access,
provided that they have already cached the data.
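If the option is exposed like the rest of the PyTorch adapter, usage would look roughly like this (a sketch; the `UpdatePolicy.NEVER` spelling and import location are assumptions):
```
from cvat_sdk import make_client
from cvat_sdk.pytorch import TaskVisionDataset, UpdatePolicy  # UpdatePolicy name is an assumption

with make_client("app.cvat.ai", credentials=("user", "password")) as client:
    # reads previously cached data instead of downloading it again
    dataset = TaskVisionDataset(client, task_id=42, update_policy=UpdatePolicy.NEVER)
```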
### How has this been tested?
Unit tests.
Extracted from https://github.com/opencv/cvat/pull/5083
- Added a default arg for task data uploading
- Added an option to wait for the data processing in task data uploading
- Moved data splitting by requests for TUS closer to the point of use
It's possible to specify only the manifest file and a filename pattern
for creating a task with cloud storage data.
The special characters currently supported in the pattern are `*`, `?`,
`[seq]`, and `[!seq]`.
Please see
[here](8898a8b264/tests/python/rest_api/test_tasks.py (L686))
for some examples of how to use this functionality.
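The pattern syntax matches the usual fnmatch-style globbing, so (assuming the server follows fnmatch semantics):
```
from fnmatch import fnmatch

fnmatch("car_01.jpg", "*.jpg")          # True: '*' matches any run of characters
fnmatch("img_5.png", "img_?.png")       # True: '?' matches a single character
fnmatch("img_5.png", "img_[0-9].png")   # True: '[seq]' matches one character from the set
fnmatch("img_a.png", "img_[!0-9].png")  # True: '[!seq]' matches one character not in the set
```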
Co-authored-by: Maxim Zhiltsov <zhiltsov.max35@gmail.com>
This fixes #5369 by adding a keyboard shortcut [x] that deletes a frame
without a prompt.
Co-authored-by: Scotty Kwok <scottykwok@sebit.word>
Co-authored-by: Boris Sekachev <sekachev.bs@gmail.com>
You have to use the `import_status` action in order to query the import
status. Otherwise, the `/api/projects/{id}/dataset/` endpoint initiates
a dataset export. Currently, `import_dataset` inadvertently monitors the
status of that export, not the original import.
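A sketch of the distinction with plain `requests` (host, ID, and token are illustrative):
```
import requests

project_id = 42  # illustrative
headers = {"Authorization": "Token <token>"}  # illustrative

# With action=import_status this queries the import; without it,
# a GET on the same endpoint starts a dataset export instead.
resp = requests.get(
    f"https://cvat.example.com/api/projects/{project_id}/dataset",
    params={"action": "import_status"},
    headers=headers,
)
```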
For user-facing functions, keep accepting `str` paths to maintain
compatibility and flexibility, but add support for arbitrary path-like
objects. For internal functions (in `downloading.py` and
`uploading.py`), don't bother and require `pathlib.Path`.
The only code that isn't converted is build-time code (e.g. `setup.py`)
and code that came from openapi-generator.
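The usual shape of this pattern (a sketch; the function name is illustrative):
```
import os
from pathlib import Path
from typing import Union

StrPath = Union[str, os.PathLike]

def save_report(output_path: StrPath) -> None:
    # user-facing: accepts str or any path-like object for compatibility
    path = Path(output_path)
    # internal code (as in downloading.py / uploading.py) works with pathlib.Path only
    path.write_text("report")
```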
Fix https://github.com/opencv/cvat/issues/5245
The PR contains a simple fix: just return BAD REQUEST if somebody tries
to export a task without data, since that doesn't make sense. A more
complex fix would require changing a massive amount of code, and it
doesn't make sense to support such a weird scenario.