Improving documentation about tracker SiamMask (#3807)

* update docs

* fix mistakes

* fix file name

* fix mistake
Timur Osmanov committed 483e4f6a58 (parent e3eeccb12a) via GitHub

@@ -91,7 +91,7 @@ and where is a background. Negative points are optional.
Detectors are used to automatically annotate one frame. Supported DL models are suitable only for certain labels.
- Before you start, click the `magic wand` on the controls sidebar and select the `Detectors` tab.
You need to match the labels of the DL model (left column) with the labels in your task (right column).
Then click `Annotate`.
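Conceptually, the matching the dialog asks for is just a mapping from the model's label names onto the task's label names; detections whose label has no match are dropped. A purely illustrative sketch (all names below are made up):

```python
# Made-up label names, only to illustrate the matching step:
# model labels (left column) are mapped onto task labels (right column),
# and detections without a matching task label are dropped.
model_to_task = {"person": "pedestrian", "car": "vehicle"}

detections = [
    {"label": "person", "points": [10, 20, 110, 220]},
    {"label": "dog", "points": [5, 5, 50, 60]},  # no matching task label
]

annotated = [
    {**det, "label": model_to_task[det["label"]]}
    for det in detections
    if det["label"] in model_to_task
]
print(annotated)  # only the mapped detection remains, relabelled "pedestrian"
```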
@@ -101,28 +101,48 @@ Detectors are used to automatically annotate one frame. Supported DL models are
In the [Automatic annotation](/docs/manual/advanced/automatic-annotation/) section you can read
how to make automatic annotation of all frames.
### Mask RCNN
The model generates polygons for each instance of an object in the image.
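For illustration only, here is a rough sketch of how such a model yields a polygon per instance, using a pre-trained Mask R-CNN from torchvision rather than the exact model or post-processing CVAT deploys:

```python
# Illustrative sketch: run a pre-trained torchvision Mask R-CNN and turn each
# instance mask into a polygon. "frame.jpg" is a placeholder image path.
import cv2
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    prediction = model([tensor])[0]  # dict with "boxes", "labels", "scores", "masks"

for mask, score in zip(prediction["masks"], prediction["scores"]):
    if score < 0.5:
        continue
    # Threshold the soft mask and trace its outline to get a polygon per instance.
    binary = (mask[0].numpy() > 0.5).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        polygon = contour.reshape(-1, 2)  # (x, y) vertices of the instance polygon
        print(len(polygon), "points")
```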
### Faster RCNN
The model generates bounding boxes for each instance of an object in the image. In this model,
RPN and Fast R-CNN are combined into a single network.
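Again purely for illustration, a torchvision Faster R-CNN (RPN proposals and the Fast R-CNN head inside one network) producing a bounding box per detected instance; this is not necessarily the exact model CVAT serves:

```python
import torch
import torchvision

# RPN and the Fast R-CNN head live inside this single network.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for a real frame scaled to [0, 1]
with torch.no_grad():
    prediction = model([image])[0]

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score >= 0.5:  # one bounding box per confidently detected instance
        print(int(label), [round(v, 1) for v in box.tolist()], float(score))

# A random image usually yields no confident detections; use a real frame to see boxes.
```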
## Trackers
Trackers are used to automatically annotate an object using a bounding box.
Supported DL models are not bound to a particular label and can be used for any object.
- Before you start, select the `magic wand` on the controls sidebar and go to the `Trackers` tab.
Then select a `Label` and a `Tracker` for the object and click `Track`.
Annotate the desired objects with a bounding box in the first frame.
![Start tracking an object](/images/trackers_tab.jpg)
- All annotated objects will be automatically tracked when you move to the next frame
(a rough stand-alone sketch of this tracking loop is shown after this list).
For tracking, use the `Next` button on the top panel or press `F` to move on to the next frame.
![Annotation using a tracker](/images/tracker_siammask_DETRAC.gif)
- You can enable or disable tracking using the `tracker switcher` on the sidebar.
![Tracker switcher](/images/tracker_switcher.jpg)
- Trackable objects have an indication on the canvas showing which model tracks them.
![Tracker indication](/images/tracker_indication.jpg)
- You can monitor the process by the messages appearing at the top.
If you change one or more objects before moving to the next frame, you will see a message that
the objects' states are being initialized. Objects that you do not change are already on the server
and therefore do not require initialization. After the objects are initialized, tracking will occur.
![Tracker pop-up window](/images/tracker_pop-up_window.jpg)
- A pop-up window displays information about the number of tracked objects.
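The workflow above boils down to a simple loop: initialize a tracker from the box drawn on the first frame, then let it predict the box on every following frame. A rough stand-alone sketch using an off-the-shelf OpenCV tracker (requires `opencv-contrib-python`; this is not the SiamMask model CVAT serves, which runs server-side):

```python
# Stand-alone illustration of the loop CVAT automates for you: init from the box
# drawn on the first frame, then update on every next frame.
import cv2

capture = cv2.VideoCapture("task_video.mp4")  # placeholder path
ok, first_frame = capture.read()

# Box drawn by the annotator on the first frame: x, y, width, height.
initial_box = (50, 50, 120, 80)

tracker = cv2.TrackerCSRT_create()
tracker.init(first_frame, initial_box)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    found, box = tracker.update(frame)  # predicted box for this frame
    if found:
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```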
### SiamMask
Fast online Object Tracking and Segmentation. The tracker is able to track multiple objects in a single server request.
A trackable object will be tracked automatically if the previous frame was
the latest keyframe for the object. It has a tracker indication on the canvas. The `SiamMask` tracker supports CUDA.
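A small sketch that spells out the keyframe rule above; the data structures and field names are hypothetical, not CVAT's internal representation:

```python
from dataclasses import dataclass

# Hypothetical structures, only to spell out the rule: an object is sent to the
# tracker for the new frame when the frame you are leaving is its latest keyframe.
@dataclass
class TrackedObject:
    client_id: int
    keyframes: list[int]  # frames where the user drew or corrected the box

def objects_to_track(objects: list[TrackedObject], previous_frame: int) -> list[TrackedObject]:
    return [o for o in objects if o.keyframes and max(o.keyframes) == previous_frame]

# Moving from frame 7 to frame 8: only object 1, whose latest keyframe is 7, is tracked.
objects = [TrackedObject(1, [0, 7]), TrackedObject(2, [0, 3])]
print([o.client_id for o in objects_to_track(objects, previous_frame=7)])  # -> [1]
```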

@@ -145,14 +145,18 @@ finish the process.
![Create a video annotation task](/images/create_video_task.png)
Open the task and use [AI tools][cvat-ai-tools-user-guide] to start tracking
an object. Draw a bounding box around an object, then step sequentially
through the frames and correct the bounding box if necessary.
![Start tracking an object](/images/trackers_tab.jpg)
Finally, you will get bounding boxes.
![SiamMask results](/images/siammask_results.gif)
The `SiamMask` model is optimized to work on Nvidia GPUs.
For more information about deploying the model for the GPU, [read on](#objects-segmentation-using-mask-rcnn).
### Object detection using YOLO-v3
First of all let's deploy the DL model. The deployment process is similar for

@@ -42,3 +42,10 @@ Tracks are created in `Track mode`
Can be used to reduce the annotation file and to facilitate editing polygons.
![](/images/approximation_accuracy.gif)
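Approximation accuracy amounts to simplifying each polygon so it keeps its shape with fewer points. A minimal sketch of the same idea using OpenCV's Douglas-Peucker implementation (not necessarily the exact algorithm or parameters CVAT uses):

```python
import numpy as np
import cv2

# A hand-made polygon with redundant, nearly collinear points.
polygon = np.array(
    [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [4, 2], [4, 4], [2, 4], [0, 4]],
    dtype=np.float32,
).reshape(-1, 1, 2)

# Larger epsilon -> fewer points kept; this is the trade-off the accuracy slider exposes.
simplified = cv2.approxPolyDP(polygon, 0.5, True)
print(polygon.shape[0], "->", simplified.shape[0], "points")
```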
---
A **trackable** object will be tracked automatically if the previous frame was
the latest keyframe for the object. More details in the [trackers](/docs/manual/advanced/ai-tools/#trackers) section.
![](/images/tracker_indication.jpg)
