CVAT has to be started with specific configuration files; in this case it is `docker-compose.serverless.yml`.
It contains the necessary instructions on how to build and deploy the Nuclio platform as a
Docker container and how to enable the corresponding support in CVAT.

```bash
docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f components/serverless/docker-compose.serverless.yml up -d --build
```

```bash
docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f components/serverless/docker-compose.serverless.yml ps
```

```
       Name                 Command                  State                            Ports
-------------------------------------------------------------------------------------------------------------
cvat         /usr/bin/supervisord     Up      8080/tcp
```
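
Besides the CVAT containers, the serverless configuration also brings up the Nuclio dashboard. A quick sanity check (a sketch; I assume the default setup where the dashboard container is named `nuclio` and publishes port 8070, as shown later in this tutorial):

```bash
# The dashboard container is named "nuclio" and publishes port 8070
# in the default serverless configuration; adjust if yours differs.
docker ps --filter name=nuclio
curl -I http://localhost:8070
```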

The `nuctl` command-line tool should already be installed on your operating system. Run the following
command to check that it works. In the beginning you should not have
any deployed serverless functions.

```bash
nuctl get functions
```

```
No functions found
```
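
All `nuctl` commands in this tutorial target the local Docker platform. If your `nuctl` is configured differently (for example, it complains about a missing Kubernetes configuration), you can pass the platform explicitly, just like the deploy commands below do:

```bash
# Explicitly target the local Docker platform
# (the same flag that the deploy commands below use).
nuctl get functions --platform local
```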

The first step is to deploy [SiamMask][siammask-serverless]. The deployment process
can depend on your operating system. On Linux you can use the `serverless/deploy_cpu.sh`
auxiliary script, but below we are using `nuctl` directly.

```bash
nuctl create project cvat
nuctl deploy --project-name cvat --path "./serverless/pytorch/foolwood/siammask/nuclio" --platform local
```

```
21.05.07 13:00:22.233 nuctl (I) Deploying function {"name": ""}
21.05.07 13:00:22.233 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
21.05.07 13:00:22.652 nuctl (I) Cleaning up before deployment {"functionName": "pth-foolwood-siammask"}
21.05.07 13:00:32.796 nuctl (I) Function deploy complete {"functionName": "pth-foolwood-siammask", "httpPort": 49155}
```

```bash
nuctl get functions
```

```
  NAMESPACE |         NAME          | PROJECT | STATE | NODE PORT | REPLICAS
  nuclio    | pth-foolwood-siammask | cvat    | ready |     49155 | 1/1
```
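
Under the hood the function runs as a regular Docker container, so the usual docker commands work for it too. A hedged example, assuming the `nuclio-nuclio-<function name>` naming convention that shows up later in this tutorial:

```bash
# The container name is derived from the function name
# (nuclio-nuclio-<function name>); adjust it if yours differs.
docker ps --filter name=nuclio-nuclio-pth-foolwood-siammask
```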

Inference of the serverless function is optimized for CPU using Intel OpenVINO.

<details>
<summary>

```bash
serverless/deploy_cpu.sh serverless/openvino/omz/public/yolo-v3-tf/
```

</summary>

```
Deploying serverless/openvino/omz/public/yolo-v3-tf function...
21.07.12 15:55:17.314 nuctl (I) Deploying function {"name": ""}
21.07.12 15:55:17.314 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
```

</details>

_Note: the Nuclio autoscaler does not support the local platform (docker)._

<details>
<summary>

```bash
serverless/deploy_gpu.sh serverless/tensorflow/matterport/mask_rcnn
```

</summary>

```
Deploying serverless/tensorflow/matterport/mask_rcnn function...
21.07.12 16:48:48.995 nuctl (I) Deploying function {"name": ""}
21.07.12 16:48:48.995 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
```

</details>
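
The GPU variant only works if Docker can actually reach your GPU. A minimal sanity check, assuming the NVIDIA container toolkit is what provides GPU access in your setup:

```bash
# Check that the NVIDIA runtime is registered with Docker and the driver responds.
docker info | grep -i runtimes
nvidia-smi
```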

Detectron2 supports a number of computer vision research projects and production applications at Facebook.
Clone the repository somewhere. I assume that all other experiments will be
run from the cloned `detectron2` directory.

```bash
git clone https://github.com/facebookresearch/detectron2
cd detectron2
```

### Run local experiments

In my case I have Ubuntu 20.04 with Python 3.8.5. I installed the CPU build of PyTorch into
a virtual environment. Follow the [opencv-python][opencv-python-github]
installation guide to get the library for demo and visualization.

```bash
python3 -m venv .detectron2
. .detectron2/bin/activate
pip install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install opencv-python
```

Install the detectron2 library from your local clone (you should be inside
the detectron2 directory).

```bash
python -m pip install -e .
```
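
To make sure the editable install actually worked, a quick import check is enough; this just prints the installed version (the `__version__` attribute is present in recent detectron2 releases):

```bash
# Verify that detectron2 is importable from the virtual environment.
python -c "import detectron2; print(detectron2.__version__)"
```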

After the library from Facebook AI Research is installed, we can run a couple
of experiments. See the [official tutorial][detectron2-tutorial] for more
examples. I decided to experiment with [RetinaNet][retinanet-model-zoo]. The first
step is to download the model weights.

```bash
curl -O https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_101_FPN_3x/190397697/model_final_971ab9.pkl
```

To run experiments, let's download an image with cats from Wikipedia.

```bash
curl -O https://upload.wikimedia.org/wikipedia/commons/thumb/0/0b/Cat_poster_1.jpg/1920px-Cat_poster_1.jpg
```

Finally, let's run the DL model inference on CPU. If everything is fine, you will see
a window with cats and bounding boxes around them with scores.

```bash
python demo/demo.py --config-file configs/COCO-Detection/retinanet_R_101_FPN_3x.yaml \
  --input 1920px-Cat_poster_1.jpg --opts MODEL.WEIGHTS model_final_971ab9.pkl MODEL.DEVICE cpu
```
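
If you are working on a headless machine, opening a window is not an option. As far as I remember, `demo.py` can save the visualization to a file instead; a hedged variant of the same command:

```bash
# Save the visualization to a file instead of opening a window
# (the --output flag of demo.py, if your detectron2 version provides it).
python demo/demo.py --config-file configs/COCO-Detection/retinanet_R_101_FPN_3x.yaml \
  --input 1920px-Cat_poster_1.jpg --output result.jpg \
  --opts MODEL.WEIGHTS model_final_971ab9.pkl MODEL.DEVICE cpu
```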


The actual deployment process is shown below.

<details>
<summary>

```bash
./serverless/deploy_cpu.sh ./serverless/pytorch/facebookresearch/detectron2/retinanet/
```

</summary>

```
21.07.21 15:20:31.011 nuctl (I) Deploying function {"name": ""}
21.07.21 15:20:31.011 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
21.07.21 15:20:31.407 nuctl (I) Cleaning up before deployment {"functionName": "pth.facebookresearch.detectron2.retinanet_r101"}
```

</details>

After these changes, deploy the serverless function once again. For
`serverless/pytorch/facebookresearch/detectron2/retinanet/nuclio/` you should
run the command below:

```bash
serverless/deploy_cpu.sh serverless/pytorch/facebookresearch/detectron2/retinanet
```

To debug Python code inside a container you have to publish the port (in this case 5678).

Run the commands below to get into the container, install ssh there, and open a reverse tunnel to the
IP address of your host (usually the IP address starts with `192.168.`). You will
need to confirm that you want to connect to your host computer and enter your
password. Keep the terminal open after that.

```bash
docker exec -it nuclio-nuclio-pth.facebookresearch.detectron2.retinanet_r101 /bin/bash
apt update && apt install -y ssh
ssh -R 5678:localhost:5678 user@ipaddress
```
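
While the tunnel is up, port 5678 on the host forwards into the container. An optional way to confirm it from another terminal on the host (assuming `ss` is available there):

```bash
# The reverse tunnel makes sshd on the host listen on port 5678 and
# forward connections back into the container.
ss -ltn | grep 5678
```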

Here is how the last command looks in my case:

```
root@2d6cceec8f70:/opt/nuclio# ssh -R 5678:localhost:5678 nmanovic@192.168.50.188
The authenticity of host '192.168.50.188 (192.168.50.188)' can't be established.
ECDSA key fingerprint is SHA256:0sD6IWi+FKAhtUXr2TroHqyjcnYRIGLLx/wkGaZeRuo.
```

First of all, you need to check that you are using the recommended version of the
Nuclio framework. In my case it is `1.5.16`, but you need to check [the
installation manual][cvat-auto-annotation-guide].

```bash
nuctl version
```

```
Client version:
"Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3"
```

Check that the Nuclio dashboard is running and its version corresponds to `nuctl`.

```bash
docker ps --filter name=^nuclio$
```

```
CONTAINER ID   IMAGE                                   COMMAND                  CREATED       STATUS                    PORTS                                               NAMES
7ab0c076c927   quay.io/nuclio/dashboard:1.5.16-amd64   "/docker-entrypoint.…"   6 weeks ago   Up 46 minutes (healthy)   80/tcp, 0.0.0.0:8070->8070/tcp, :::8070->8070/tcp   nuclio
```

Be sure that the model which doesn't work is healthy. In my case Inside-Outside
Guidance is not running.

```bash
docker ps --filter name=iog
```

```
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```

Let's run it. Go to the root of the CVAT repository and run the deployment command.

```bash
serverless/deploy_cpu.sh serverless/pytorch/shiyinzhang/iog
```

<details>

```
Deploying serverless/pytorch/shiyinzhang/iog function...
21.07.06 12:49:08.763 nuctl (I) Deploying function {"name": ""}
21.07.06 12:49:08.763 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
```

</details>

The port was originally assigned by Nuclio, but now it is used by `openvino-dextr`, as we can
see in the logs. To prove our hypothesis we just need to run a couple of docker
commands:

```bash
docker container ls -a | grep iog
```

```
eb0c1ee46630   cvat/pth.shiyinzhang.iog:latest   "conda run -n iog pr…"   9 minutes ago   Created             nuclio-nuclio-pth.shiyinzhang.iog
```

```bash
docker inspect eb0c1ee46630 | grep 49154
```

```
            "Error": "driver failed programming external connectivity on endpoint nuclio-nuclio-pth.shiyinzhang.iog (02384290f91b2216162b1603322dadee426afe7f439d3d090f598af5d4863b2d): Bind for 0.0.0.0:49154 failed: port is already allocated",
                        "HostPort": "49154"
```
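
Another way to see which container is holding the port right now (in my case it should point at the `openvino-dextr` function) is to filter containers by the published port:

```bash
# List the container that currently publishes port 49154.
docker ps --filter publish=49154
```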

To solve the problem, let's just remove the previous container for the function.
In this case it is `eb0c1ee46630`. After that the deployment command works as
expected.

```bash
docker container rm eb0c1ee46630
```

```
eb0c1ee46630
```

```bash
serverless/deploy_cpu.sh serverless/pytorch/shiyinzhang/iog
```

<details>

```
Deploying serverless/pytorch/shiyinzhang/iog function...
21.07.06 13:09:52.934 nuctl (I) Deploying function {"name": ""}
21.07.06 13:09:52.934 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
```

</details>

In case of any problems it can also be useful to look at the logs. Just run a couple of commands like the ones below:

<details>

```bash
docker logs cvat
```

```
2021-07-06 13:44:54,699 DEBG 'runserver' stderr output:
[Tue Jul 06 13:44:54.699431 2021] [wsgi:error] [pid 625:tid 140010969868032] [remote 172.28.0.3:40972] [2021-07-06 13:44:54,699] ERROR django.request: Internal Server Error: /api/v1/lambda/functions/pth.shiyinzhang.iog
2021-07-06 13:44:54,700 DEBG 'runserver' stderr output:
[Tue Jul 06 13:44:54.699712 2021] [wsgi:error] [pid 625:tid 140010969868032] [remote 172.28.0.3:40972] ERROR - 2021-07-06 13:44:54,699 - log - Internal Server Error: /api/v1/lambda/functions/pth.shiyinzhang.iog
```

```bash
docker container ls --filter name=iog
```

```
CONTAINER ID   IMAGE                             COMMAND                  CREATED       STATUS                 PORTS                                         NAMES
3b6ef9a9f3e2   cvat/pth.shiyinzhang.iog:latest   "conda run -n iog pr…"   4 hours ago   Up 4 hours (healthy)   0.0.0.0:49159->8080/tcp, :::49159->8080/tcp   nuclio-nuclio-pth.shiyinzhang.iog
```

```bash
docker logs nuclio-nuclio-pth.shiyinzhang.iog
```

</details>
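
When you reproduce the error from the CVAT UI, it is usually handier to follow the function logs live; a small variation of the last command:

```bash
# Stream the serverless function logs while reproducing the problem in the UI.
docker logs -f nuclio-nuclio-pth.shiyinzhang.iog
```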