Merge pull request #25 from Burhan-Q/dev
Adds KWARGS and tracking examples
Burhan-Q committed Aug 3, 2024
2 parents bc9c7c3 + 5e51215 commit 1953d95
Showing 4 changed files with 305 additions and 10 deletions.
53 changes: 49 additions & 4 deletions README.md
@@ -210,6 +210,8 @@ The Example snippets are more "complete" blocks of code that can be used for boi
| `ultra.example-nas-predict` | Setup Ultralytics NAS to perform inference (simple). |
| `ultra.example-rtdetr-predict` | Setup Ultralytics RT-DETR to perform inference (simple). |
| `ultra.example-callback` | Example showing how to add a custom callback function. |
| `ultra.example-track-loop-persist` | Example of how to open a video, loop over its frames, and maintain tracked object IDs. |
| `ultra.example-track-kwords` | Example showing all keyword arguments available for track mode. |

### Snippet Example

@@ -231,6 +233,48 @@ for result in results:

</p></details>

## KWARGS

Use these to quickly insert a call to one of the model methods defined in [modes], complete with every keyword argument, its default value, and a commented description. Each snippet uses `model` as the default variable name, but this is an editable field you can change via tab stops.

| Prefix | Description | Reference |
| ---------------------- | ---------------------------------------------------------------------------------------- | ---------- |
| `ultra.kwargs-predict` | Snippet using model `predict` method, including all keyword arguments and defaults. | [predict] |
| `ultra.kwargs-train` | Snippet using model `train` method, including all keyword arguments and defaults. | [train] |
| `ultra.kwargs-track` | Snippet using model `track` method, including all keyword arguments and defaults. | [track] |
| `ultra.kwargs-val` | Snippet using model `val` method, including all keyword arguments and defaults. | [val] |

### Snippet Example

<details><summary><code>ultra.kwargs-predict</code></summary>
<p>

```py
model.predict(
    source=src,  # (str, optional) source directory for images or videos
    imgsz=640,  # (int | list) input image size as int or list [w, h] for predict
    conf=0.25,  # (float) minimum confidence threshold
    iou=0.7,  # (float) intersection over union (IoU) threshold for NMS
    vid_stride=1,  # (int) video frame-rate stride
    stream_buffer=False,  # (bool) buffer all streaming frames (True) or return the most recent frame (False)
    visualize=False,  # (bool) visualize model features
    augment=False,  # (bool) apply image augmentation to prediction sources
    agnostic_nms=False,  # (bool) class-agnostic NMS
    classes=None,  # (int | list[int], optional) filter results by class, e.g. classes=0 or classes=[0, 2, 3]
    retina_masks=False,  # (bool) use high-resolution segmentation masks
    embed=None,  # (list[int], optional) return feature vectors/embeddings from given layers
    show=False,  # (bool) show predicted images and videos if the environment allows
    save=True,  # (bool) save prediction results
    save_frames=False,  # (bool) save individual predicted video frames
    save_txt=False,  # (bool) save results as a .txt file
    save_conf=False,  # (bool) save results with confidence scores
    save_crop=False,  # (bool) save cropped images with results
    stream=False,  # (bool) return a generator to process long videos or many images with reduced memory usage
    verbose=True,  # (bool) enable/disable verbose inference logging in the terminal
)
```

</p></details>
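The `iou` threshold above controls non-maximum suppression (NMS), which decides whether two detections overlap enough to be duplicates by comparing their intersection over union. As a toy illustration (plain Python, not part of the snippets), IoU for two axis-aligned `(x1, y1, x2, y2)` boxes can be computed as:

```python
def box_iou(a: tuple, b: tuple) -> float:
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

print(box_iou((0, 0, 2, 2), (1, 0, 3, 2)))  # overlap area 2, union 6 -> 1/3
```

With the default `iou=0.7`, a lower-confidence box is suppressed only when its IoU with a higher-confidence box of the same class exceeds 0.7.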

## Use with `neovim`

@@ -248,10 +292,11 @@ Make sure that the path `"./ultralytics-snippets/"` is valid for your install lo
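For `neovim`, one common route (an assumption here, not something this commit prescribes) is the LuaSnip plugin, whose VS Code-style loader can pick up the extension's snippet JSON files. A minimal `init.lua` sketch, assuming the repository is cloned locally:

```lua
-- Load the VS Code-format snippet collection from a local clone.
-- The path is an assumption; point it at your actual clone location.
require("luasnip.loaders.from_vscode").lazy_load({
  paths = { "./ultralytics-snippets/" },
})
```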

[ann]: https://docs.ultralytics.com/usage/simple-utilities/#drawing-annotations
[models]: https://docs.ultralytics.com/models
- [_modes]: https://docs.ultralytics.com/modes
- [_predict]: https://docs.ultralytics.com/modes/predict
- [_train]: https://docs.ultralytics.com/modes/train
- [_val]: https://docs.ultralytics.com/modes/val
+ [modes]: https://docs.ultralytics.com/modes
+ [predict]: https://docs.ultralytics.com/modes/predict
+ [train]: https://docs.ultralytics.com/modes/train
+ [track]: https://docs.ultralytics.com/modes/track
+ [val]: https://docs.ultralytics.com/modes/val
[YOLOv8]: https://docs.ultralytics.com/models/yolov8
[YOLOv5]: https://docs.ultralytics.com/models/yolov5
[YOLOv9]: https://docs.ultralytics.com/models/yolov9
6 changes: 5 additions & 1 deletion package.json
@@ -2,7 +2,7 @@
"name": "ultralytics-snippets",
"displayName": "Ultralytics Snippets",
"description": "Snippets to use with the Ultralytics Python library.",
"version": "0.1.6",
"version": "0.1.7",
"publisher": "Ultralytics",
"repository": {
"type": "git",
@@ -47,6 +47,10 @@
{
"language": "python",
"path": "./snippets/results.json"
},
{
"language": "python",
"path": "./snippets/kwargs.json"
}
],
"qna": false
93 changes: 88 additions & 5 deletions snippets/examples.json
@@ -187,8 +187,8 @@
" save_txt=${19:False}, # (bool) save results as .txt file",
" save_conf=${20:False}, # (bool) save results with confidence scores",
" save_crop=${21:False}, # (bool) save cropped images with results",
" stream=${22:False} # (bool) for processing long videos or numerous images with reduced memory usage by returning a generator",
" verbose=${23:True} # (bool) enable/disable verbose inference logging in the terminal",
" stream=${22:False}, # (bool) for processing long videos or numerous images with reduced memory usage by returning a generator",
" verbose=${23:True}, # (bool) enable/disable verbose inference logging in the terminal",
")",
"# reference https://docs.ultralytics.com/modes/predict/"
],
@@ -206,7 +206,7 @@
"'''",
"model = YOLO(\"yolov${1|8,5,9,10|}${2|n,s,m,l,x,c,e|}${3|.,-cls.,-seg.,-obb.,-pose.,-world.,-worldv2.|}pt\")",
"results: list = model.train(",
" data=${4:\"coco8.yaml\"}, # (str, optional) path to data file, i.e. coco8.yaml",
" data=\"${4:coco8.yaml}\", # (str, optional) path to data file, i.e. coco8.yaml",
" epochs=${5:100}, # (int) number of epochs to train for",
" time=${6:None}, # (float, optional) number of hours to train for, overrides epochs if supplied",
" patience=${7:100}, # (int) epochs to wait for no observable improvement for early stopping of training",
@@ -272,8 +272,8 @@
" mixup=${64:0.0}, # (float) image mixup (probability)",
" copy_paste=${65:0.0}, # (float) segment copy-paste (probability)",
" auto_augment=\"${66|randaugment,autoaugment,augmix|}\", # (str) auto augmentation policy for classification (randaugment, autoaugment, augmix)",
" erasing=${67:0.4}, # (float) probability of random erasing during classification training (0-0.9), 0 means no erasing, must be less than 1.0.",
" crop_fraction=${68:1.0}, # (float) image crop fraction for classification (0.1-1), 1.0 means no crop, must be greater than 0.",
" erasing=${67:0.4}, # (float) probability of random erasing during classification training [0-0.9], 0 is no erasing, must be < 1.0.",
" crop_fraction=${68:1.0}, # (float) image crop fraction for classify [0.1-1], 1.0 is no cropping, must be > 0.",
")",
"# reference https://docs.ultralytics.com/modes/train/"
],
@@ -315,5 +315,88 @@
"# See docs page about SAM2 https://docs.ultralytics.com/models/sam-2 for more information"
],
"description": "Example showing use of SAM2 with bounding box and point prompts."
},

"Ultralytics Track Looping Frames with Persistence":{
"prefix":"ultra.example-track-loop-persist",
"body":[
"import cv2",
"",
"from ultralytics import YOLO",
"",
"# Load the YOLOv8 model",
"model = YOLO(\"yolov8${1|n,s,m,l,x|}.pt\", task=\"detect\")",
"",
"# Open the video file",
"video_path = \"${2:path/to/video.mp4}\"",
"cap = cv2.VideoCapture(video_path)",
"",
"# Loop through the video frames",
"while cap.isOpened():",
" # Read a frame from the video",
" success, frame = cap.read()",
"",
" if success:",
" # Run YOLOv8 tracking on the frame, persisting tracks between frames",
" results = model.track(frame, persist=True)",
"",
" # Visualize the results on the frame",
" annotated_frame = results[0].plot()",
"",
" # Display the annotated frame",
" cv2.imshow(\"YOLOv8 Tracking\", annotated_frame)",
"",
" # Break the loop if 'q' is pressed",
" if cv2.waitKey(1) & 0xFF == ord(\"q\"):",
" break",
" else:",
" # Break the loop if the end of the video is reached",
" break",
"",
"# Release the video capture object and close the display window",
"cap.release()",
"cv2.destroyAllWindows()",
"# reference https://docs.ultralytics.com/modes/track/",
"$0"
],
"description": "Example of how to open video, loop frames, and maintain tracked object IDs."
},

"Ultralytics Track with all Keywords":{
"prefix":"ultra.example-track-kwords",
"body":[
"from ultralytics import YOLO",
"",
"src=\"${1:https://youtu.be/LNwODJXcvt4}\"",
"model = YOLO(\"yolov8${2|n,s,m,l,x|}${3|.,-seg.,-obb.,-pose.|}pt\")",
"results = model.track(",
" source=src, # (str, optional) source directory for images or videos",
" imgsz=${5:640}, # (int | list) input images size as int or list[w,h] for predict",
" conf=${6:0.25}, # (float) minimum confidence threshold",
" iou=${7:0.7}, # (float) intersection over union (IoU) threshold for NMS",
" persist=${8:False}, # (bool) persist track-ids across frames",
" tracker=\"${9|botsort,bytetrack|}\", # (str) tracker type, choices=[botsort.yaml, bytetrack.yaml]",
" vid_stride=${10:1}, # (int) video frame-rate stride",
" stream_buffer=${11:False}, # (bool) buffer all streaming frames (True) or return the most recent frame (False)",
" visualize=${12:False}, # (bool) visualize model features",
" augment=${13:False}, # (bool) apply image augmentation to prediction sources",
" agnostic_nms=${14:False}, # (bool) class-agnostic NMS",
" classes=${15:None}, # (int | list[int], optional) filter results by class, i.e. classes=0, or classes=[0,2,3]",
" retina_masks=${16:False}, # (bool) use high-resolution segmentation masks",
" embed=${17:None}, # (list[int], optional) return feature vectors/embeddings from given layers",
" show=${18:False}, # (bool) show predicted images and videos if environment allows",
" save=${19:True}, # (bool) save prediction results",
" save_frames=${20:False}, # (bool) save predicted individual video frames",
" save_txt=${21:False}, # (bool) save results as .txt file",
" save_conf=${20:False}, # (bool) save results with confidence scores",
" save_crop=${21:False}, # (bool) save cropped images with results",
" stream=${22:False}, # (bool) for processing long videos or numerous images with reduced memory usage by returning a generator",
" verbose=${23:True}, # (bool) enable/disable verbose inference logging in the terminal",
")",
"# reference https://docs.ultralytics.com/modes/track/",
"# reference https://docs.ultralytics.com/modes/predict/ (tracking accepts same keyword arguments as predict)",
"$0"
],
"description": "Example showing all keyword arguments available for track mode."
}
}