Hello,
I’m still developing with C++, and the component integration seems to provide an interesting toolkit: the Video Renderer.
If I understand correctly, this toolkit provides an asynchronous way to get a real-time video stream from any Milestone server source. Am I correct?
I would like to use it, but the documentation seems incomplete, especially for the XML configuration. Could you please provide more complete documentation, and/or perhaps a sample using this toolkit?
Kind regards.
Here is an example of the parameters the Smart Client uses to create a Video Renderer Toolkit and add a source to it:
Video Renderer Toolkit Constructor:
<?xml version='1.0' encoding="utf-8"?>
<vr/>
AddSources:
<?xml version="1.0" encoding="UTF-8" standalone="true"?>
<is>
<media_type>VIDEO</media_type>
<server_uri>http://yourServer.yourDomain.com:7563/</server_uri>
<device_id>96ba548b-929d-4dd1-9f7a-2ddccfcfcd17</device_id>
<token update_key="TOKEN_UPDATE_KEY">TOKEN#ac8777fe-b2cd-475d-8c70-b8a7f0b9bc0f#yourServer.yourDomain.com//ServerConnector#</token>
<video_stream_attributes>
<framerate update_key="framerate_VIDEO_STREAM_ATTTRIBUTE_UPDATE_KEY">full</framerate>
<motiononly update_key="motiononly_VIDEO_STREAM_ATTTRIBUTE_UPDATE_KEY">no</motiononly>
</video_stream_attributes>
<compression_rate update_key="COMPRESSION_RATE_ATTTRIBUTE_UPDATE_KEY">100</compression_rate>
</is>
Rendering Parameters:
DeinterlaceMode = NoFilter
RemovableMasksLifted = false
HardwareAcceleration = AutoNvidia
NumberOfDecodingThreads = -1
KeepAspectRatio = true
WindowWidth = 800
WindowHeight = 600
BufferTimeSpan = 200
Hello,
Thank you for this example. I can create the VideoRendererToolkit, add sources, connect them, and launch live. HandleRenderedData is also called, and I can get the ImD3D9SurfaceRenderingInformation. An image is available (according to GetDataAvailabilityAtRequestedTime), but I can’t see how to use it: write it to a file or display it. Could you please help me with that?
Regards.
I consulted a Milestone developer:
--
The returned surface is a collage of all the views that have an updated frame since the last v-sync. Each view comes with a set of coordinates (top-left corner, plus the width and height of a rectangle) that indicates where on the large surface the decoded image is located. Other names for this concept are “texture atlas” or “sprite sheet”, if you want to Google it. It is an optimization technique: it is faster to render several small portions of one large texture on screen than to render several small individual texture objects, because the context switch takes time.
It is correct to cast the pointer to an IDirect3DSurface9. I am not a C# developer, so I won’t try to explain how to use this unmanaged pointer in a managed environment, but I’m sure there are many good articles online that explain it well. If you want to keep the instance of the surface around for longer than it takes to process the callback, you can temporarily increase the reference count by calling AddRef() on the surface, and then call Release() when you are done using it.
--