Documentation for trackers updated (#5305)

main
Mariia Acoca 3 years ago committed by GitHub
parent ec3e1f34a4
commit 3c8ac1ef7d

---
title: 'OpenCV and AI Tools'
linkTitle: 'OpenCV and AI Tools'
weight: 14
description: 'Overview of semi-automatic and automatic annotation tools available in CVAT.'
---
Label and annotate your data in semi-automatic and automatic mode with the help of **AI** and **OpenCV** tools.
While [interpolation](/docs/manual/advanced/annotation-with-polygons/track-mode-with-polygons/)
works well for videos shot by stationary security cameras,
**AI** and **OpenCV** tools suit both:
videos where the camera is stable and videos where it
moves together with the object or the object's movements are chaotic.
See:
- [Interactors](#interactors)
- [AI tools: annotate with interactors](#ai-tools-annotate-with-interactors)
- [AI tools: add extra points](#ai-tools-add-extra-points)
- [AI tools: delete points](#ai-tools-delete-points)
- [OpenCV: intelligent scissors](#opencv-intelligent-scissors)
- [Settings](#settings)
- [Interactors models](#interactors-models)
- [Detectors](#detectors)
- [Labels matching](#labels-matching)
- [Annotate with detectors](#annotate-with-detectors)
- [Detectors models](#detectors-models)
- [Trackers](#trackers)
- [AI tools: annotate with trackers](#ai-tools-annotate-with-trackers)
- [OpenCV: annotate with trackers](#opencv-annotate-with-trackers)
- [When tracking](#when-tracking)
- [Trackers models](#trackers-models)
- [OpenCV: histogram equalization](#opencv-histogram-equalization)
## Interactors
Interactors are a part of **AI** and **OpenCV** tools.
Use interactors to label objects in images by
creating a polygon semi-automatically.
When creating a polygon, you can use positive points
or negative points (for some models):
- **Positive points** define the area in which the object is located.
- **Negative points** define the area in which the object is not located.
![](/images/image188_detrac.jpg)
### AI tools: annotate with interactors
To annotate with interactors, do the following:
1. Click **Magic wand** ![Magic wand](/images/image189.jpg), and go to the **Interactors** tab.
2. From the **Label** drop-down, select a label for the polygon.
3. From the **Interactor** drop-down, select a model (see [Interactors models](#interactors-models)).
<br>Click the **Question mark** to see information about each model:
<br>![](/images/image114_detrac.jpg)
4. (Optional) If the model returns masks, and you need to
convert masks to polygons, use the **Convert masks to polygons** toggle
(see the sketch after these steps).
5. Click **Interact**.
6. Use the left click to add positive points and the right click to add negative points.
<br>The number of points you can add depends on the model.
7. On the top menu, click **Done** (or **Shift+N**, **N**).
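The **Convert masks to polygons** toggle corresponds to a standard post-processing step:
tracing the contour of the returned mask and simplifying it into polygon vertices.
Below is a minimal sketch of one common way to do this with OpenCV
(the `epsilon_ratio` value is an illustrative assumption):

```python
import cv2
import numpy as np

def mask_to_polygon(mask: np.ndarray, epsilon_ratio: float = 0.01) -> list:
    """Convert a binary mask (HxW, values 0/255) to flat polygon points [x1, y1, ...]."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)            # keep the largest region
    epsilon = epsilon_ratio * cv2.arcLength(contour, True)  # simplification tolerance
    approx = cv2.approxPolyDP(contour, epsilon, True)
    return approx.reshape(-1, 2).ravel().tolist()
```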
### AI tools: add extra points
> **Note:** More points improve outline accuracy, but make shape editing harder.
Fewer points make shape editing easier, but reduce outline accuracy.
Each model has a minimum required number of points for annotation.
Once the required number of points is reached, the request
is automatically sent to the server.
The server processes the request and adds a polygon to the frame.
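In other words, the request simply carries the clicked coordinates to the model.
The payload below is purely hypothetical; the field names are illustrative
assumptions, not CVAT's actual serverless API:

```python
# Hypothetical interactor request: positive points are left clicks on the
# object, negative points are right clicks on the background.
request = {
    "frame": 12,
    "pos_points": [[310, 220], [355, 240]],
    "neg_points": [[50, 60]],
}
```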
For a more accurate outline, postpone the request
so you can finish adding extra points first:
1. Hold down the **Ctrl** key.
<br>On the top panel, the **Block** button will turn blue.
2. Add points to the image.
3. Release the **Ctrl** key, when ready.
If you used the **Convert masks to polygons** toggle, you can edit the
finished object like a polygon.
You can change the number of points in the
polygon with the slider:
![](/images/image224.jpg)
### AI tools: delete points
To delete a point, do the following:
1. With the cursor, hover over the point you want to delete.
2. If the point can be deleted, it will enlarge and the cursor will turn into a cross.
3. Left-click on the point.
### OpenCV: intelligent scissors
To use **Intelligent scissors**, do the following:
1. On the menu toolbar, click **OpenCV** ![OpenCV](/images/image201.jpg) and wait for the library to load.
<br>![](/images/image198.jpg)
2. Go to the **Drawing** tab, select the label, and click on the **Intelligent scissors** button.
![](/images/image199.jpg)
3. Add the first point on the boundary of the allocated object. <br> You will see a line repeating the outline of the object.
4. Add the second point, so that the previous point is within the restrictive threshold.
<br>After that a line repeating the object boundary will be automatically created between the points.
![](/images/image200_detrac.jpg)
5. To finish placing points, on the top menu click **Done** (or **N** on the keyboard).
As a result, a polygon will be created.
You can change the number of points in the
polygon with the slider:
![](/images/image224.jpg)
To increase or lower the action threshold, hold **Ctrl** and scroll the mouse wheel.
During the drawing process, you can remove the last point by clicking on it with the left mouse button.
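This tool corresponds to OpenCV's `IntelligentScissorsMB` implementation
(see the specification link in the table below). A minimal standalone
sketch, assuming an arbitrary test image and illustrative parameter values:

```python
import cv2

image = cv2.imread("frame.jpg")  # any BGR image

tool = cv2.segmentation.IntelligentScissorsMB()
tool.setEdgeFeatureCannyParameters(32, 100)  # Canny edge thresholds (illustrative)
tool.setGradientMagnitudeMaxLimit(200)
tool.applyImage(image)

tool.buildMap((120, 80))              # first point placed on the object boundary
contour = tool.getContour((200, 95))  # path to the second point hugs the boundary
print(contour.shape)                  # N x 1 x 2 array of polyline points
```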
### Settings
- On how to adjust the polygon,
see [Objects sidebar](/docs/manual/basics/objects-sidebar/#appearance).
- For more information about polygons in general, see
[Annotation with polygons](/docs/manual/advanced/annotation-with-polygons/).
### Interactors models
<!--lint disable maximum-line-length-->
| Model | Tool | Description | Example |
| --------------------------------------------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- |
| Deep extreme <br>cut (DEXTR) | AI Tool | This is an optimized version of the original model, <br>introduced at the end of 2017. It uses the <br>information about extreme points of an object <br>to get its mask. The mask is then converted to a polygon. <br>For now this is the fastest interactor on the CPU. <br><br>For more information, see: <li>[GitHub: DEXTR-PyTorch](https://github.com/scaelles/DEXTR-PyTorch) <li>[Site: DEXTR-PyTorch](https://cvlsegmentation.github.io/dextr)<li>[Paper: DEXTR-PyTorch](https://arxiv.org/pdf/1711.09081.pdf) | ![](/images/dextr_example.gif) |
| Feature backpropagating <br>refinement <br>scheme (f-BRS) | AI Tool | The model allows you to get a mask for an <br>object using positive points (should be <br>left-clicked on the foreground), <br>and negative points (should be right-clicked <br>on the background, if necessary). <br>It is recommended to run the model on a GPU, <br>if possible. <br><br>For more information, see: <li>[GitHub: f-BRS](https://github.com/saic-vul/fbrs_interactive_segmentation) <li>[Paper: f-BRS](https://arxiv.org/pdf/2001.10331.pdf) | ![](/images/fbrs_example.gif) |
| High Resolution <br>Net (HRNet) | AI Tool | The model allows you to get a mask for <br>an object using positive points (should <br>be left-clicked on the foreground), <br> and negative points (should be <br>right-clicked on the background, <br> if necessary). <br>It is recommended to run the model on a GPU, <br>if possible. <br><br>For more information, see: <li>[GitHub: HRNet](https://github.com/saic-vul/ritm_interactive_segmentation) <li>[Paper: HRNet](https://arxiv.org/pdf/2102.06583.pdf) | ![](/images/hrnet_example.gif) |
| Inside-Outside-Guidance<br>(IOG) | AI Tool | The model uses a bounding box and <br>inside/outside points to create a mask. <br>First, you need to create a bounding<br> box wrapping the object. <br>Then you need to use positive <br>and negative points to tell the <br>model where the <br>foreground is, and where the background is.<br>Negative points are optional. <br><br>For more information, see: <li>[GitHub: IOG](https://github.com/shiyinzhang/Inside-Outside-Guidance) <li>[Paper: IOG](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Interactive_Object_Segmentation_With_Inside-Outside_Guidance_CVPR_2020_paper.pdf) | ![](/images/iog_example.gif) |
| Intelligent scissors | OpenCV | Intelligent scissors is a CV method of creating <br>a polygon by placing points with the automatic <br>drawing of a line between them. The distance<br> between the adjacent points is limited by <br>the threshold of action, displayed as a <br>red square that is tied to the cursor. <br><br> For more information, see: <li>[Site: Intelligent Scissors Specification](https://docs.opencv.org/4.x/df/d6b/classcv_1_1segmentation_1_1IntelligentScissorsMB.html) | ![int scissors](/images/intelligent_scissors.gif) |
<!--lint enable maximum-line-length-->
## Detectors
Detectors are a part of **AI** tools.
Use detectors to automatically
identify and locate objects in images or videos.
### Labels matching
Each model is trained on a dataset and supports only the dataset's labels.
For example:
- DL model has the label `car`.
- Your task (or project) has the label `vehicle`.
To annotate, you need to match these two labels to give the
DL model a hint that, in this case, `car` = `vehicle`.
If you have a label that is not on the list
of DL labels, you will not be able to
match them.
For this reason, supported DL models are suitable only for certain labels.
<br>To check the list of labels for each model, see [Detectors models](#detectors-models).
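Conceptually, the matching is a mapping from model labels to task labels;
predictions without a match are skipped. A hypothetical sketch (the prediction
format here is an assumption made for illustration):

```python
# Model label -> task label; detections with unmatched labels are dropped.
label_mapping = {"car": "vehicle", "person": "pedestrian"}

detections = [
    {"label": "car", "box": [10, 20, 110, 90]},
    {"label": "dog", "box": [5, 5, 40, 40]},  # no matching task label -> skipped
]

matched = [
    {**det, "label": label_mapping[det["label"]]}
    for det in detections
    if det["label"] in label_mapping
]
print(matched)  # only the "car" detection remains, relabeled as "vehicle"
```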
### Annotate with detectors
To annotate with detectors, do the following:
1. Click **Magic wand** ![Magic wand](/images/image189.jpg), and go to the **Detectors** tab.
2. From the **Model** drop-down, select a model (see [Detectors models](#detectors-models)).
3. From the left drop-down, select the DL model label; from the right drop-down,
select the matching label of your task.
![](/images/image187_1.jpg)
![](/images/image187.jpg)
4. (Optional) If the model returns masks, and you
need to convert masks to polygons, use the **Convert masks to polygons** toggle.
5. Click **Annotate**.
This action will automatically annotate one frame.
For automatic annotation of multiple frames,
see [Automatic annotation](/docs/manual/advanced/automatic-annotation/).
### Detectors models
<!--lint disable maximum-line-length-->
| Model | Description |
| ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Mask RCNN | The model generates polygons for each instance of an object in the image. <br><br> For more information, see: <li>[Github: Mask RCNN](https://github.com/matterport/Mask_RCNN) <li>[Paper: Mask RCNN](https://arxiv.org/pdf/1703.06870.pdf) |
| Faster RCNN | The model generates bounding boxes for each instance of an object in the image. <br>In this model, RPN and Fast R-CNN are combined into a single network. <br><br> For more information, see: <li>[Github: Faster RCNN](https://github.com/ShaoqingRen/faster_rcnn) <li>[Paper: Faster RCNN](https://arxiv.org/pdf/1506.01497.pdf) |
| YOLO v3 | YOLO v3 is a family of object detection architectures and models pre-trained on the COCO dataset. <br><br> For more information, see: <li>[Github: YOLO v3](https://github.com/ultralytics/yolov3) <li>[Site: YOLO v3](https://docs.ultralytics.com/#yolov3) <li>[Paper: YOLO v3](https://arxiv.org/pdf/1804.02767v1.pdf) |
| YOLO v5 | YOLO v5 is a family of object detection architectures and models based on the Pytorch framework. <br><br> For more information, see: <li>[Github: YOLO v5](https://github.com/ultralytics/yolov5) <li>[Site: YOLO v5](https://docs.ultralytics.com/#yolov5) |
| Semantic segmentation for ADAS | This is a segmentation network to classify each pixel into 20 classes. <br><br> For more information, see: <li>[Site: ADAS](https://docs.openvino.ai/2019_R1/_semantic_segmentation_adas_0001_description_semantic_segmentation_adas_0001.html) |
| Mask RCNN with Tensorflow | Mask RCNN version with Tensorflow. The model generates polygons for each instance of an object in the image. <br><br> For more information, see: <li>[Github: Mask RCNN](https://github.com/matterport/Mask_RCNN) <li>[Paper: Mask RCNN](https://arxiv.org/pdf/1703.06870.pdf) |
| Faster RCNN with Tensorflow | Faster RCNN version with Tensorflow. The model generates bounding boxes for each instance of an object in the image. <br>In this model, RPN and Fast R-CNN are combined into a single network. <br><br> For more information, see: <li>[Site: Faster RCNN with Tensorflow](https://docs.openvino.ai/2021.4/omz_models_model_faster_rcnn_inception_v2_coco.html) <li>[Paper: Faster RCNN](https://arxiv.org/pdf/1506.01497.pdf) |
| RetinaNet | Pytorch implementation of RetinaNet object detection. <br> <br><br> For more information, see: <li>[Specification: RetinaNet](https://paperswithcode.com/lib/detectron2/retinanet) <li>[Paper: RetinaNet](https://arxiv.org/pdf/1708.02002.pdf)<li>[Documentation: RetinaNet](https://detectron2.readthedocs.io/en/latest/tutorials/training.html) |
| Face Detection | Face detector based on MobileNetV2 as a backbone for indoor and outdoor scenes shot by a front-facing camera. <br> <br><br> For more information, see: <li>[Site: Face Detection 0205](https://docs.openvino.ai/latest/omz_models_model_face_detection_0205.html) |
<!--lint enable maximum-line-length-->
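To get a feel for what such a detector produces, here is a minimal client-side
sketch using torchvision's Faster R-CNN. This is a stand-in, not the exact model
CVAT serves; the file name and score threshold are illustrative assumptions, and
`weights="DEFAULT"` requires torchvision >= 0.13:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Faster R-CNN as a stand-in for the server-side detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("frame.jpg").convert("RGB"))
with torch.no_grad():
    (pred,) = model([image])

# Each detection: a bounding box, a COCO label id, and a confidence score.
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:  # illustrative confidence threshold
        print(label.item(), [round(v) for v in box.tolist()], round(score.item(), 2))
```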
## Trackers
Trackers are part of **AI** and **OpenCV** tools.
Use trackers to identify and label
objects in a video or image sequence
that are moving or changing over time.
### AI tools: annotate with trackers
To annotate with trackers, do the following:
1. Click **Magic wand** ![Magic wand](/images/image189.jpg), and go to the **Trackers** tab.
<br>![Start tracking an object](/images/trackers_tab.jpg)
2. From the **Label** drop-down, select the label for the object.
3. From the **Tracker** drop-down, select a tracker.
4. Click **Track**, and annotate the objects with the bounding box in the first frame.
5. Go to the top menu and click **Next** (or press **F** on the keyboard)
to move to the next frame.
<br>All annotated objects will be automatically tracked.
### OpenCV: annotate with trackers
To annotate with trackers, do the following:
1. On the menu toolbar, click **OpenCV** ![OpenCV](/images/image201.jpg) and wait for the library to load.
<br>![](/images/image198.jpg)
2. Go to the **Tracker** tab, select the label, and click **Tracking**.
<br>![Start tracking an object](/images/image242.jpg)
3. From the **Label** drop-down, select the label for the object.
4. From the **Tracker** drop-down, select a tracker.
5. Click **Track**.
6. To move to the next frame, on the top menu click the **Next** button (or **F** on the keyboard).
<br>All annotated objects will be automatically tracked when you move to the next frame.
![Annotation using a tracker](/images/tracker_mil_detrac.gif)
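Of the trackers listed in [Trackers models](#trackers-models), TrackerMIL ships
with OpenCV itself, so its behavior can be reproduced outside CVAT. A minimal
sketch (the video path and the first-frame bounding box are illustrative):

```python
import cv2

cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()

tracker = cv2.TrackerMIL_create()
tracker.init(frame, (150, 60, 80, 120))  # x, y, width, height of the initial box

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)  # re-locate the object each frame
    if found:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("TrackerMIL", frame)
    if cv2.waitKey(30) == 27:  # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```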
### When tracking
- To enable/disable tracking, use **Tracker switcher** on the sidebar.
![Tracker switcher](/images/tracker_switcher.jpg)
- Trackable objects have an indication on the canvas with the model name.
![Tracker indication](/images/tracker_indication_detrac.jpg)
- You can follow the tracking by the messages appearing at the top.
![Tracker pop-up window](/images/tracker_pop-up_window.jpg)
### Trackers models
<!--lint disable maximum-line-length-->
| Model | Tool | Description | Example |
| ----------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
| TrackerMIL | OpenCV | TrackerMIL model is not bound to <br>labels and can be used for any <br>object. It is a fast client-side model <br>designed to track simple non-overlapping objects. <br><br>For more information, see: <li>[Article: Object Tracking using OpenCV](https://learnopencv.com/tag/mil/) | ![Annotation using a tracker](/images/tracker_mil_detrac.gif) |
| SiamMask | AI Tools | Fast online Object Tracking and Segmentation. The trackable object will <br>be tracked automatically if the previous frame <br>was the latest keyframe for the object. <br><br>For more information, see:<li> [Github: SiamMask](https://github.com/foolwood/SiamMask) <li> [Paper: SiamMask](https://arxiv.org/pdf/1812.05050.pdf) | ![Annotation using a tracker](/images/tracker_ai_tools.gif) |
| Transformer Tracking (TransT) | AI Tools | Simple and efficient online tool for object tracking and segmentation. <br>If the previous frame was the latest keyframe <br>for the object, the trackable object will be tracked automatically.<br>This is a modified version of the PyTracking <br> Python framework based on Pytorch<br> <br><br>For more information, see: <li> [Github: TransT](https://github.com/chenxin-dlut/TransT)<li> [Paper: TransT](https://arxiv.org/pdf/2103.15436.pdf) | ![Annotation using a tracker](/images/tracker_transit.gif) |
<!--lint enable maximum-line-length-->
## OpenCV: histogram equalization
**Histogram equalization** improves
the contrast by stretching the intensity range.
It increases the global contrast of an image
when its usable data is represented by close contrast values.
It is useful for images whose backgrounds
and foregrounds are both bright or both dark.
To improve the contrast of the image, do the following:
1. In the **OpenCV** menu, go to the **Image** tab.
2. Click the **Histogram equalization** button.
<br>![](/images/image221.jpg)
**Histogram equalization** will improve
contrast on current and following
frames.
Example of the result:
![](/images/image222.jpg)
To disable **Histogram equalization**, click the button again.
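The same operation is available in OpenCV directly, if you want to preprocess
frames outside CVAT. A minimal sketch that equalizes only the luminance channel
so colors are preserved (file names are illustrative):

```python
import cv2

frame = cv2.imread("frame.jpg")

# Equalize the luminance (Y) channel only, leaving chroma untouched.
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

cv2.imwrite("frame_equalized.jpg", equalized)
```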
