Milestone Development said that they agree the documentation might be vague in this area. They will also consider whether the samples should be updated to include simple implementations of bounding shapes that are easy for all users to understand.
They noticed that your search agent will “search through saved metadata and get desired results”. Based on this, it is unclear to them whether you want to rely on the (ONVIF) metadata streams already associated with the cameras, or whether the quote refers to your own custom metadata storage.
If you are using the ONVIF metadata streams and the metadata contained in them tracks objects (i.e. objects retain their ObjectId throughout their lifetime), then approach 1 below should be followed. Otherwise, follow approach 2.
Approach 1 – Tracked objects already available in ONVIF metadata
Override the GetMetadataObjectsAsync(CancellationToken) method in MySearchResultData and return a Task which, in turn, returns a collection of SearchResultDataMetaDataObjects which correspond to the tracked objects in the ONVIF metadata.
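A minimal sketch of approach 1 is shown below. The base class name, return type, and the `LoadTrackedObjects` helper are assumptions for illustration; check the actual MIP SDK signatures when implementing.

```csharp
// Sketch only: assumes MySearchResultData derives from the SDK's search
// result data base class (name assumed here), and that LoadTrackedObjects
// is your own helper that reads the tracked objects out of the recorded
// ONVIF metadata streams.
public class MySearchResultData : SearchResultData
{
    public override Task<ICollection<SearchResultDataMetaDataObject>>
        GetMetadataObjectsAsync(CancellationToken cancellationToken)
    {
        // Return one SearchResultDataMetaDataObject per tracked object
        // (i.e. per ObjectId) found in the ONVIF metadata.
        ICollection<SearchResultDataMetaDataObject> objects =
            LoadTrackedObjects(cancellationToken); // hypothetical helper
        return Task.FromResult(objects);
    }
}
```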
Approach 2 – Bounding shape and metadata stored in custom storage
Override the GetBoundingShapesAsync(DateTime, CancellationToken) method in MySearchResultData and return a Task which, in turn, returns a collection of BoundingShape instances corresponding to a bounding box for the requested DateTime. For a bounding box to show up on the search result in the UI, this method must return a BoundingShape when the requested DateTime equals MySearchResultData.TriggerTime, because the thumbnail shown in the UI is an image captured at the TriggerTime. BoundingShapes returned for timestamps other than the TriggerTime are used in the preview player, which allows the search agent to provide bounding shapes that “follow” an object if that type of information is available in the source data.
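A minimal sketch of approach 2, under the same assumptions as above (base class name and return type assumed; `LookUpShapesInCustomStorage` is a hypothetical helper over your own storage):

```csharp
// Sketch only: exact base class and return types may differ in the SDK.
public class MySearchResultData : SearchResultData
{
    public override Task<ICollection<BoundingShape>>
        GetBoundingShapesAsync(DateTime time, CancellationToken cancellationToken)
    {
        // The thumbnail in the search result list is captured at TriggerTime,
        // so a shape must be returned when time equals TriggerTime for the
        // bounding box to appear on the result itself. Shapes returned for
        // other timestamps drive the preview player, letting the box
        // "follow" the object if your source data supports that.
        ICollection<BoundingShape> shapes =
            LookUpShapesInCustomStorage(time, cancellationToken); // hypothetical helper
        return Task.FromResult(shapes);
    }
}
```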
NOTE: The DateTimeKind on the provided DateTime objects is important as well.
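One reason DateTimeKind matters when matching the requested time against TriggerTime: DateTime equality compares the tick values only and ignores Kind, so a UTC and a local DateTime with the same ticks compare equal even though they denote different instants. A defensive pattern (a sketch, not from the SDK documentation) is to normalize both values to UTC before comparing:

```csharp
// DateTime '==' ignores Kind, so convert both sides to UTC first.
// 'time' is the DateTime passed to GetBoundingShapesAsync.
DateTime requested = time.Kind == DateTimeKind.Utc ? time : time.ToUniversalTime();
DateTime trigger = TriggerTime.ToUniversalTime();
bool isTriggerFrame = requested == trigger;
```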