diff --git a/README.md b/README.md
index de2baca2..9721f67c 100644
--- a/README.md
+++ b/README.md
@@ -61,16 +61,16 @@ via its command line tool and Python library.
 
 ## Deep learning models for automatic labeling
 
-| Name | Type | Framework |
-| ---- | ---- | --------- |
-| [Deep Extreme Cut](/serverless/openvino/dextr/nuclio) | interactor | OpenVINO |
-| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow |
-| [Mask RCNN](/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | detector | OpenVINO |
-| [YOLO v3](/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | detector | OpenVINO |
-| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO |
-| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO |
-| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow |
-| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO |
+| Name | Type | Framework | CPU | GPU |
+| ---- | ---- | --------- | --- | --- |
+| [Deep Extreme Cut](/serverless/openvino/dextr/nuclio) | interactor | OpenVINO | X |   |
+| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow | X | X |
+| [Mask RCNN](/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | detector | OpenVINO | X |   |
+| [YOLO v3](/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | detector | OpenVINO | X |   |
+| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO | X |   |
+| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO | X |   |
+| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow | X |   |
+| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO | X |   |
 
 ## Online demo: [cvat.org](https://cvat.org)
 
diff --git a/components/serverless/docker-compose.serverless.yml b/components/serverless/docker-compose.serverless.yml
index 3309ab49..de94f616 100644
--- a/components/serverless/docker-compose.serverless.yml
+++ b/components/serverless/docker-compose.serverless.yml
@@ -2,7 +2,7 @@ version: '3.3'
 services:
   serverless:
     container_name: nuclio
-    image: quay.io/nuclio/dashboard:1.4.8-amd64
+    image: quay.io/nuclio/dashboard:1.5.8-amd64
     restart: always
     networks:
       default:
diff --git a/cvat/apps/documentation/installation.md b/cvat/apps/documentation/installation.md
index 8abd316d..9b23485c 100644
--- a/cvat/apps/documentation/installation.md
+++ b/cvat/apps/documentation/installation.md
@@ -290,32 +290,7 @@ docker-compose -f docker-compose.yml -f components/analytics/docker-compose.anal
 
 ### Semi-automatic and automatic annotation
 
-- You have to install `nuctl` command line tool to build and deploy serverless
-  functions. Download [the latest release](https://github.com/nuclio/nuclio/releases).
-- Create `cvat` project inside nuclio dashboard where you will deploy new
-  serverless functions and deploy a couple of DL models. Commands below should
-  be run only after CVAT has been installed using docker-compose because it
-  runs nuclio dashboard which manages all serverless functions.
-
-```bash
-nuctl create project cvat
-```
-
-```bash
-nuctl deploy --project-name cvat \
-    --path serverless/openvino/dextr/nuclio \
-    --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
-    --platform local
-```
-
-```bash
-nuctl deploy --project-name cvat \
-    --path serverless/openvino/omz/public/yolo-v3-tf/nuclio \
-    --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
-    --platform local
-```
-
-Note: see [deploy.sh](/serverless/deploy.sh) script for more examples.
+Please follow the [instructions](/cvat/apps/documentation/installation_automatic_annotation.md).
 
 ### Stop all containers
 
diff --git a/cvat/apps/documentation/installation_automatic_annotation.md b/cvat/apps/documentation/installation_automatic_annotation.md
new file mode 100644
index 00000000..e3343211
--- /dev/null
+++ b/cvat/apps/documentation/installation_automatic_annotation.md
@@ -0,0 +1,91 @@
+
+### Semi-automatic and Automatic Annotation
+
+> **⚠ WARNING: Do not use `docker-compose up`**
+> If you did, make sure all containers are stopped with `docker-compose down`.
+
+- To bring up CVAT with the auto annotation tool, run the following from the CVAT root directory:
+
+  ```bash
+  docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d
+  ```
+
+  If you made any changes to the docker-compose files, make sure to add `--build` at the end.
+
+  To stop the containers, simply run:
+
+  ```bash
+  docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml down
+  ```
+
+- You have to install the `nuctl` command line tool to build and deploy serverless
+  functions. Download [version 1.5.8](https://github.com/nuclio/nuclio/releases).
+  It is important that the version you download matches the version in
+  [docker-compose.serverless.yml](/components/serverless/docker-compose.serverless.yml).
+  After downloading nuctl, make it executable and create a symlink:
+
+  ```bash
+  sudo chmod +x nuctl-<version>-linux-amd64
+  sudo ln -sf $(pwd)/nuctl-<version>-linux-amd64 /usr/local/bin/nuctl
+  ```
+
+- Create the `cvat` project inside the nuclio dashboard, where you will deploy new serverless
+  functions, and deploy a couple of DL models. The commands below should be run only after CVAT
+  has been installed using `docker-compose`, because it runs the nuclio dashboard which manages
+  all serverless functions.
+
+  ```bash
+  nuctl create project cvat
+  ```
+
+  ```bash
+  nuctl deploy --project-name cvat \
+      --path serverless/openvino/dextr/nuclio \
+      --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
+      --platform local
+  ```
+
+  ```bash
+  nuctl deploy --project-name cvat \
+      --path serverless/openvino/omz/public/yolo-v3-tf/nuclio \
+      --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
+      --platform local
+  ```
+
+  **Note:**
+  - See [deploy_cpu.sh](/serverless/deploy_cpu.sh) for more examples.
+
+  #### GPU Support
+
+  You will need to install the Nvidia Container Toolkit and make sure your Docker setup supports GPUs.
+  Follow the [Nvidia docker instructions](https://www.tensorflow.org/install/docker#gpu_support).
+  You will also need to add `--resource-limit nvidia.com/gpu=1` to the nuclio deployment command.
+  As an example, the command below will deploy the function on the GPU:
+
+  ```bash
+  nuctl deploy tf-faster-rcnn-inception-v2-coco-gpu \
+      --project-name cvat --path "serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio" --platform local \
+      --base-image tensorflow/tensorflow:2.1.1-gpu \
+      --desc "Faster RCNN from Tensorflow Object Detection GPU API" \
+      --image cvat/tf.faster_rcnn_inception_v2_coco_gpu \
+      --resource-limit nvidia.com/gpu=1
+  ```
+
+  **Note:**
+  - Since the model is loaded during deployment, the number of GPU functions you can deploy is limited by your GPU memory.
+  - See the [deploy_gpu.sh](/serverless/deploy_gpu.sh) script for more examples.
+
+#### Debugging Nuclio Functions
+
+- You can open the nuclio dashboard at [localhost:8070](http://localhost:8070). Make sure the status
+  of your functions is running and that there are no errors.
+
+- To check for internal server errors, run `docker ps -a` to see the list of containers. Find the
+  container you are interested in, e.g. `nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu`, then
+  check its logs:
+
+  ```bash
+  docker logs <container name>
+  ```
+
+  e.g.,
+
+  ```bash
+  docker logs nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu
+  ```
+
+- If you would like to debug code inside a container, you can use VS Code to attach to the container
+  directly; see the [instructions](https://code.visualstudio.com/docs/remote/attach-container).
+  To apply your changes, make sure to restart the container.
+
+  ```bash
+  docker restart <container name>
+  ```
+
+  > **⚠ WARNING:**
+  > Do not use the nuclio dashboard to stop the container, because it rebuilds the container on any modification and you will lose your changes.
\ No newline at end of file
diff --git a/serverless/deploy.sh b/serverless/deploy_cpu.sh
similarity index 97%
rename from serverless/deploy.sh
rename to serverless/deploy_cpu.sh
index 20face66..a86149fe 100755
--- a/serverless/deploy.sh
+++ b/serverless/deploy_cpu.sh
@@ -1,4 +1,5 @@
 #!/bin/bash
+# Sample commands to deploy nuclio functions on CPU
 
 SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 
diff --git a/serverless/deploy_gpu.sh b/serverless/deploy_gpu.sh
new file mode 100755
index 00000000..f0b89649
--- /dev/null
+++ b/serverless/deploy_gpu.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Sample commands to deploy nuclio functions on GPU
+
+SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+nuctl create project cvat
+
+nuctl deploy --project-name cvat \
+    --path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \
+    --platform local --base-image tensorflow/tensorflow:2.1.1-gpu \
+    --desc "Faster RCNN from Tensorflow Object Detection GPU API" \
+    --image cvat/tf.faster_rcnn_inception_v2_coco_gpu \
+    --resource-limit nvidia.com/gpu=1
+
+nuctl get function
diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py b/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py
index 8158eee3..74aa85bc 100644
--- a/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py
+++ b/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py
@@ -35,7 +35,8 @@ class ModelLoader:
         width, height = image.size
         if width > 1920 or height > 1080:
             image = image.resize((width // 2, height // 2), Image.ANTIALIAS)
-        image_np = np.array(image.getdata()).reshape((image.height, image.width, 3)).astype(np.uint8)
+        image_np = np.array(image.getdata())[:, :3].reshape(
+            (image.height, image.width, -1)).astype(np.uint8)
         image_np = np.expand_dims(image_np, axis=0)
 
         return self.session.run(
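
The `model_loader.py` hunk above is easier to follow outside diff form. The sketch below is a minimal, standalone reproduction of that preprocessing step, assuming only Pillow and NumPy; the `preprocess` helper name and the small demo at the bottom are illustrative and not part of the patch. It shows why the slice to the first three channels matters: RGBA frames are reduced to RGB before the reshape, so the array always ends up with shape `(1, height, width, 3)`.

```python
# Standalone sketch of the preprocessing in model_loader.py.
# Assumptions: only Pillow and NumPy are needed; the helper name is made up.
import numpy as np
from PIL import Image


def preprocess(image: Image.Image) -> np.ndarray:
    width, height = image.size
    if width > 1920 or height > 1080:
        # Halve very large frames before inference, as the model handler does.
        # Image.ANTIALIAS mirrors the original code (deprecated in newer Pillow).
        image = image.resize((width // 2, height // 2), Image.ANTIALIAS)
    # getdata() yields N x 3 values for RGB images and N x 4 for RGBA images;
    # keeping only the first three channels makes the reshape to
    # (height, width, 3) safe in both cases.
    image_np = np.array(image.getdata())[:, :3].reshape(
        (image.height, image.width, -1)).astype(np.uint8)
    # Add the batch dimension expected by the TensorFlow session.
    return np.expand_dims(image_np, axis=0)


if __name__ == "__main__":
    rgba = Image.new("RGBA", (64, 48), (255, 0, 0, 128))
    print(preprocess(rgba).shape)  # -> (1, 48, 64, 3)
```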