Thank you for your answer.
I am a little confused. You say, “You cannot use a redistributable installer for 2018R1 for development with MIP SDK 2016R3.”
Perhaps I did not express myself clearly. In the end, I have to develop my application using the MIP documentation of 2016R3 and the 32-bit C++ SDK from x86_2018R1 (because I could not find the C++ SDK of x86_2016R3). I am only using the MIP documentation of 2016R3 and the demo from that documentation to guide my work, and I ran into the problem “… All worked well until I tried to get the property of “Next_Begin_Time” from the playback data (some other properties also have the problem) …”. Do you mean that the C++ SDK documentation (MultiMedia Toolkit documentation) from x86_2018R1 differs from the 2016R3 documentation? Can I not use the 2016R3 documentation to guide my development with the C++ SDK from x86_2018R1?
In addition, I want to make sure whether a higher version (e.g. 2016R3, 2018R1) of the 32-bit C++ SDK (MultiMedia Toolkit; I mainly use the Image Server Source Toolkit) is compatible with lower versions, because my application has to work on two versions of your VMS system (2014 and 2016). If it is not compatible, do I have to get the 32-bit C++ SDK of both the 2014 and 2016 versions? If it is compatible, things will be easier.
Last, I am not using .NET. I have read the MediaLiveService C++ code and the documents for the other available source toolkits, but I still have not found a method or demo that implements the “play faster” and “play slower” functions. Is it the Renderer Toolkit? I do the following:
•Set up call-back handlers.
•Set up the HandleRenderedData() call-back method.
You do this by providing an instance of ImRenderingHandler through the SetRenderingHandler() method. The HandleRenderedData() method will be called by the Video Renderer Toolkit every time an image is ready to be displayed for a given source. So in this call-back you would typically implement the code that will display the images somehow.
•Set up the HandleOutOfBandData() call-back method.
You do this by providing an instance of the ImOutOfBandHandler through the SetOutOfBandHandler() method. The HandleOutOfBandData() method is called by the Video Renderer Toolkit every time out-of-band data (non-video) is received from a source. If the source is an Image Server, the out-of-band data could for instance be live status packages with information about the currently connected camera (e.g. is the feed currently being recorded?). So in this call-back you would typically parse the out-of-band data and show it somehow together with the images, maybe as an overlay or maybe in a header above the image. Note that the format and content of the out-of-band data is completely source dependent.
•Set up the HandleSourceStateChanged() call-back method.
You do this by providing an instance of ImSourceStateHandler through the SetSourceStateHandler() method. The HandleSourceStateChanged() method will be called every time a source changes state. This state is typically also shown together with the displayed images. For example, while reconnecting, a "Reconnecting ..." overlay might be shown on top of the last displayed image.
•Now that the handlers have been set up, it is time to add some sources to the Video Renderer Toolkit. This is done using the AddSources() method. For each source to add you have to provide two things. The first is a source toolkit XML that defines how to retrieve data from a source. The second is a set of rendering parameters which defines how to render the images received from the source. From the AddSources() method you will get a list of unique source identifiers which will be used when making source-specific operations throughout the Video Renderer Toolkit interface. A newly added source will initially be in the Disconnected state and the HandleSourceStateChanged() call-back method will be called to reflect that.
•Once the sources are added, the next thing to do is to connect them to their endpoints (e.g. an Image Server). This is done by calling ConnectSources() using the source identifiers returned from the AddSources() method. Now the state of the sources will change to Connecting. When the connection has been established, it will change to Connected and data is ready to be retrieved from the source. If for some reason it is not possible to establish the connection, the source will enter the ConnectionFailed state instead. Here it will wait a few seconds before entering the Connecting state again. If the connection is established but at some point is lost, the source will enter the ConnectionLost state. Here it will likewise wait a few seconds before the connection is retried. When this happens the Reconnecting state is entered. All these state changes are communicated through the HandleSourceStateChanged() call-back method.
•With a number of connected sources, we can now use the navigation methods to control what is being rendered. The Video Renderer Toolkit can be in three modes: pause, playback, and live.
•In pause mode, a single image is rendered. Which image to render is controlled by the navigation methods starting with "Move..." (e.g. MoveTo()). The pause mode is automatically entered when using one of these methods.
•In playback mode, the toolkit will actively retrieve images from the source toolkits and render these at whatever speed is requested. To do this the ImPlaybackSourceToolkit interface must be implemented by the source. You enter playback mode by using the DoPlayback() method. The DoPlayback() method is typically also used to continuously synchronize the playback to a master clock.
•Finally, there is the live mode, which also actively retrieves images from the source toolkits and renders these in real time. Here the ImLiveSourceToolkit interface must be implemented by the source. To enter live mode, you must call the DoLive() method.
•When done with a source, it can be removed using the RemoveSources() method. It is perfectly legal to only remove some of the sources and maybe add a few more.
However, I cannot downcast the ImToolkit to an ImRendererToolkit. Did I miss something? Does the C++ Media Toolkit support playback at different speeds (e.g. play faster, play slower)?
utf8_string_t config = "<?xml version='1.0' encoding='utf-8'?>"
"<toolkit type='source'>"
" <provider>mmp</provider>"
" <config>"
" <jpeg_encoder quality='90' quality_update_key='qual'>"
" <video_decoder number_of_threads='4'>"
" <toolkit type='source'>"
" <provider>is</provider>"
" <config>"
" <server_uri>" + vmsRecorderUri + "</server_uri>"
" <device_id>" + cameraGuid + "</device_id>"
" <media_type>VIDEO</media_type>" + authorizationToken +
" <maximum_queue_size>5</maximum_queue_size>"
" </config>"
" </toolkit>"
" </video_decoder>"
" </jpeg_encoder>"
" </config>"
"</toolkit>";
...
ImToolkit *toolkit = factory->CreateInstance(config);
...
ImRendererToolkit *renderToolkit = dynamic_cast<ImRendererToolkit *>(toolkit);
My English is not good; I hope you understand me. Waiting for your answer! Thank you!
