How to store metadata in custom format on recording server

Hello everybody.

I’m sorry for such basic questions, but the situation is the following:

We need to integrate an analytics server with the Milestone VMS and implement a Search plugin for XProtect Smart Client for searching the stored metadata.

Currently, the analytics server produces metadata and can send it in a custom JSON format via HTTP.

So, video streams are provided by IP cameras, and metadata is provided by the analytics server, in a custom format, for multiple cameras.

Also, the metadata is generated (and sent via HTTP POST) by the analytics server when some event occurs (for example, license plate recognition).

I have a few questions:

What requirements should the analytics server meet to be successfully added as the metadata source for multiple camera devices?

What do you think we should focus on: an Event plugin or a MIP driver implementation?

What is the role of the MIP driver in storing the retrieved metadata on the recording server?

Should the stored metadata conform to the ONVIF format to be searchable by the Search plugin?

Can the conversion from the custom format to ONVIF be done on the driver side, so that the data stored on the recording server is suitable for searching from the Smart Client?

Do you have any code samples showing how to pass the retrieved metadata from the driver to the recorder?

Not a stupid question at all, and there are actually multiple ways to go, so you have good reason to be confused. :slight_smile:

First thing to note is that you cannot add your analytics server as a metadata channel on the actual hardware representing a camera. What you should do instead is add one or more new pieces of ‘hardware’ that point to your analytics server, and then have multiple metadata channels on this/these hardware instance(s). Finally, you can set the related metadata device of each camera to the appropriate metadata device.

With that in place you can either choose to use the old MIPDriver device driver that comes with the XProtect device packs or you can make your own driver using the driver framework.

I would recommend creating your own driver, as you can then leave your analytics server unchanged and do the conversion from your format to the ONVIF format inside the driver (as well as implementing the communication). The driver can have any number of metadata channels. You can look at the demo driver sample for inspiration, as it shows how to receive and pass on metadata on two channels: https://doc.developer.milestonesys.com/html/index.html?base=samples/demodriver.html&tree=tree_1.html
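To make the conversion step concrete, here is a minimal sketch of turning a custom JSON event into ONVIF metadata XML. The JSON field names (`timestamp`, `object_id`, `box`, and so on) are invented for illustration, since your custom format isn't shown; the XML follows the general `tt:MetadataStream`/`tt:VideoAnalytics` layout from the ONVIF schema. A real driver would do this in its own language and pass the resulting bytes to the recording server.

```python
# Sketch: convert a hypothetical custom JSON event from the analytics
# server into an ONVIF metadata XML document (tt:MetadataStream).
# The JSON field names here are assumptions for illustration; the XML
# follows the tt (http://www.onvif.org/ver10/schema) analytics layout.
import json
import xml.etree.ElementTree as ET

TT = "http://www.onvif.org/ver10/schema"
ET.register_namespace("tt", TT)


def custom_json_to_onvif(raw: bytes) -> bytes:
    event = json.loads(raw)

    stream = ET.Element(f"{{{TT}}}MetadataStream")
    analytics = ET.SubElement(stream, f"{{{TT}}}VideoAnalytics")

    # One Frame per event; UtcTime ties the metadata to the video timeline.
    frame = ET.SubElement(
        analytics, f"{{{TT}}}Frame",
        UtcTime=event["timestamp"],  # e.g. "2024-05-01T12:00:00.000Z"
    )

    obj = ET.SubElement(frame, f"{{{TT}}}Object",
                        ObjectId=str(event["object_id"]))
    appearance = ET.SubElement(obj, f"{{{TT}}}Appearance")

    # Normalized bounding box of the detection (e.g. a license plate).
    shape = ET.SubElement(appearance, f"{{{TT}}}Shape")
    ET.SubElement(
        shape, f"{{{TT}}}BoundingBox",
        left=str(event["box"]["left"]), top=str(event["box"]["top"]),
        right=str(event["box"]["right"]), bottom=str(event["box"]["bottom"]),
    )

    # Classification of the detected object.
    cls = ET.SubElement(appearance, f"{{{TT}}}Class")
    candidate = ET.SubElement(cls, f"{{{TT}}}ClassCandidate")
    ET.SubElement(candidate, f"{{{TT}}}Type").text = event["type"]
    ET.SubElement(candidate, f"{{{TT}}}Likelihood").text = str(event["confidence"])

    return ET.tostring(stream, xml_declaration=True, encoding="utf-8")
```

The key design point is the `UtcTime` attribute on each `Frame`: as discussed further down in this thread, that timestamp is what lets Search line the metadata up with the recorded video.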

Alternatively you can use the old MIPDriver, but in that case you will have to implement the conversion in your analytics server. You can have a look at the multi channel metadata provider sample: https://doc.developer.milestonesys.com/html/index.html?base=samples/multichannelmetadataprovider.html&tree=tree_2.html

To answer your remaining questions:

The role of the driver is primarily to facilitate communication with the external device/server and pass the data on to the recording server (which handles storage and sharing), but it can also perform any necessary data conversion.

The metadata has to be in the ONVIF format before being passed on to the recording server (either by driver or external server).

Search has support for certain types of ONVIF data, but you can also add your own search agent plugin for the Smart Client. For more info about Search have a look here: https://doc.developer.milestonesys.com/html/index.html?base=gettingstarted/intro_searchagent.html&tree=tree_4.html

Good luck! :slight_smile:

Hello Peter,

Thank you so much for your support.

Could you please also confirm whether my understanding is correct regarding the following:

The Search plugin in our case should search the stored metadata and display the search results in the Results area (video snapshots from when the required object was detected).

As I understand it, the metadata stored on the recorder should include some ID for the camera that detected the required object, and the frame time when the detection happened.

Basically, how can the stored metadata for a particular camera be bound to the stored video recording?

The metadata is bound to a camera through the “Related metadata” setting of the camera (see the Client tab for the camera in Management Client). You can also set this through the Configuration API if you want to do it in code.

The metadata itself should not contain the camera ID.

Yes, I understand, but we have a single metadata source (the analytics server) for multiple cameras. I suppose we should use the same “related metadata” setting for multiple cameras. It’s not completely clear how the search engine can distinguish the required frames for a particular camera.

Thanks,

Anton

Search cannot distinguish data from multiple sources within the same channel. I suggest that you make a device driver that has as many channels as you have cameras connected to your server, and then have the driver distribute the data between the different channels (even though all the data comes from the same server).

Peter,

Thank you for the comprehensive answer.