Two questions: 1. Is the 32-bit C++ MIPSDK from MIPSDK_Redist_Installer_x86_2018R1.msi compatible with MIPSDK_Installer_2016R3.msi? 2. How can I play back video at different speeds with the 32-bit C++ MIPSDK (Image Server Source Toolkit)?

Hi, I am a C++ developer. I am using your C++ SDK to implement the live view and playback functions (via the Image Server Source Toolkit). For certain reasons I must use the 32-bit C++ SDK to develop my software. I now have two questions.

  1. In the beginning, I built my software with the C++ MIPSDK from MIPSDK_Installer_2016R3.msi. Then I found that this SDK does not support 32-bit, and unfortunately I could not find a 32-bit C++ SDK for 2016R3, only MIPSDK_Redist_Installer_x86_2018R1.msi. So I tried the 32-bit C++ SDK from 2018R1. Everything worked well until I tried to read the "Next_Begin_Time" property from the playback data (some other properties have the same problem):
const ImUTCTimeProperty *prevBeginTimeProperty =
    dynamic_cast<const ImUTCTimeProperty *>(record->GetProperty("Prev_Begin_Time"));
if (0 != prevBeginTimeProperty)
{
    cout << "Begin time stamp of previous record: " << prevBeginTimeProperty->GetValue() << endl;
}

prevBeginTimeProperty->GetValue() throws an exception in CoreToolkits.dll. I am sure all the pointers are valid, and the same code works fine with the 64-bit SDK from 2016R3. What is the problem? Is it a compatibility problem between 2016R3 and 2018R1? In the end, I replaced CoreToolkit.dll with the DLL of the same name found in the XProtect Smart Client 2016 R3 (x86) installation files, and everything worked again! Is it acceptable to do this? If this is indeed a compatibility issue between 2016R3 and 2018R1, other problems may be waiting for me, which would be unfortunate!

2. My second question: can I play back video at different speeds through the C++ MIPSDK? I read the MIP Documentation but did not find any method or demo that meets this need. How can I get variable-speed playback with the C++ SDK, like the feature in the Smart Client?

:blush: :blush: :blush: :blush: :blush: :blush:

Waiting for your answer! Thank you very much!

You cannot use a redistributable installer for 2018R1 for development with MIP SDK 2016R3.

Make sure you have MIP SDK 2018R1 and MIPSDK_Redist_Installer_x86_2018R1

https://developer.milestonesys.com/s/article/About-Milestone-software-development-Kit-SDK-download-link-For-MSP

If you are using managed C++ (.NET), you should be able to use the MIP Library samples. Otherwise, there is the MediaLiveService C++ sample:

http://doc.developer.milestonesys.com/html/index.html?base=samples/medialiveservice_cpp.html&tree=tree_2.html

And the documentation on the multimedia toolkit:

http://doc.developer.milestonesys.com/html/index.html?base=mmtkhelp/main.html&tree=tree_2.html

Thank you for your answer.

I am a little confused. You say, "You cannot use a redistributable installer for 2018R1 for development with MIP SDK 2016R3."

I may not have expressed it clearly. In the end, I have to develop my application using the MIP Documentation of 2016R3 together with the 32-bit C++ SDK from x86_2018R1 (because I could not find a C++ SDK for x86_2016R3). I am only using the 2016R3 MIP Documentation and its demos to guide my work, and I ran into the problem described above: "... All worked well until I tried to get the property "Next_Begin_Time" from the playback data (some other properties also have the problem) ...". Do you mean that the C++ SDK documentation (MultiMedia Toolkit Documentation) for x86_2018R1 differs from the 2016R3 documentation? Can I not use the 2016R3 documentation to guide my development with the C++ SDK from x86_2018R1?

In addition, I want to confirm whether a higher version (e.g. 2016R3, 2018R1) of the 32-bit C++ SDK (MultiMedia Toolkit; I mainly use the Image Server Source Toolkit) is compatible with lower versions, because my application has to work with two versions of your VMS (2014 and 2016). If it is not compatible, do I have to get the 32-bit C++ SDKs for the 2014 and 2016 versions? If it is compatible, things will be easier.

Lastly, I am not using .NET. I have read the MediaLiveService C++ code and the other available source toolkit documents, but I still did not find a method or demo that implements the "play faster" and "play slower" functions. Is it the Renderer Toolkit? I followed these steps:

•Setup call-back handlers
•Setup HandleRenderedData() call-back method.
 You do this by providing an instance of ImRenderingHandler through the SetRenderingHandler() method. The HandleRenderedData() method will be called by the Video Renderer Toolkit every time an image is ready to be displayed for a given source. So in this call-back you would typically implement the code that will display the images somehow.
•Setup HandleOutOfBandData() call-back method.
 You do this by providing an instance of the ImOutOfBandHandler through the SetOutOfBandHandler() method. The HandleOutOfBandData() method is called by the Video Renderer Toolkit every time out-of-band data (non-video) is received from a source. If the source is an Image Server, the out-of-band data could for instance be live status packages with information about the currently connected camera (e.g. is the feed currently being recorded?). So in this call-back you would typically parse the out-of-band data and show it somehow together with the images, maybe as an overlay or maybe in a header above the image. Note that the format and content of the out-of-band data is completely source dependent.
•Setup HandleSourceStateChanged() call-back method.
 You do this by providing an instance of ImSourceStateHandler through the SetSourceStateHandler() method. The HandleSourceStateChanged() method will be called every time a source changes state. This state is typically also shown together with the displayed images. For example, while reconnecting, a "Reconnecting ..." overlay might be shown on top of the last displayed image.
•Now that the handlers have been set up, it is time to add some sources to the Video Renderer Toolkit. This is done using the AddSources() method. For each source to add you have to provide two things. The first is a source toolkit XML that defines how to retrieve data from a source. The second is a set of rendering parameters which defines how to render the images received from the source. From the AddSources() method you will get a list of unique source identifiers which will be used when making source-specific operations throughout the Video Renderer Toolkit interface. A newly added source will initially be in the Disconnected state and the HandleSourceStateChanged() call-back method will be called to reflect that.
•Once the sources are added, the next thing to do is to connect them to their endpoints (e.g. an Image Server). This is done by calling ConnectSources() using the source identifiers returned from the AddSources() method. Now the state of the sources will change to Connecting. When the connection has been established, it will change to Connected and data is now ready to be retrieved from the source. If for some reason it is not possible to establish the connection, the source will enter the ConnectionFailed state instead. Here it will wait a few seconds before entering the Connecting state again. If the connection is established but at some point is lost, the source will enter the ConnectionLost state. Here it will likewise wait a few seconds before the connection is retried. When this happens the Reconnecting state is entered. All these state changes are communicated through the HandleSourceStateChanged() call-back method.
•With a number of connected sources, we can now use the navigation methods to control what is being rendered. The Video Renderer Toolkit can be in three modes; pause, playback and live.
•In pause mode, a single image is rendered. Which image to render is controlled by the navigation methods starting with "Move..." (e.g. MoveTo()). The pause mode is automatically entered when using one of these methods.
•In playback mode, the toolkit will actively retrieve images from the source toolkits and render these at whatever speed is requested. To do this the ImPlaybackSourceToolkit interface must be implemented by the source. You enter playback mode by using the DoPlayback() method. The DoPlayback() method is typically also used to continuously synchronize the playback to a master clock.
•Finally there is the live mode, which also actively retrieves images from the source toolkits and renders these in real time. Here the ImLiveSourceToolkit interface must be implemented by the source. To enter live mode, you must call the DoLive() method.
•When done with a source, it can be removed using the RemoveSources() method. It is perfectly legal to only remove some of the sources and maybe add a few more.

but I cannot downcast the ImToolkit to ImRendererToolkit. Did I miss something? Does the C++ Media Toolkit support different playback speeds (e.g. play faster, play slower)?

utf8_string_t config = "<?xml version='1.0' encoding='utf-8'?>"
			"<toolkit type='source'>"
			"  <provider>mmp</provider>"
			"  <config>"
			"    <jpeg_encoder quality='90' quality_update_key='qual'>"
			"      <video_decoder number_of_threads='4'>"
			"        <toolkit type='source'>"
			"          <provider>is</provider>"
			"          <config>"
			"            <server_uri>" + vmsRecorderUri + "</server_uri>"
			"            <device_id>" + cameraGuid + "</device_id>"
			"            <media_type>VIDEO</media_type>" + authorizationToken +
			"            <maximum_queue_size>5</maximum_queue_size>"
			"          </config>"
			"        </toolkit>"
			"      </video_decoder>"
			"    </jpeg_encoder>"
			"  </config>"
			"</toolkit>";
...
ImToolkit *toolkit = factory->CreateInstance(config);
...
ImRendererToolkit *renderToolkit = dynamic_cast<ImRendererToolkit *>(toolkit);

My English is not good; do you understand me? Waiting for your answer! Thank you!

:blush: :blush: :blush:

If you do not mix files from MIP SDK 2018R1 and MIP SDK 2016R3, all is good.

MIP SDK is backwards compatible. It is recommended to use the newest MIP SDK when developing (currently MIP SDK 2018R1).

https://developer.milestonesys.com/s/article/about-MIP-SDK-compatibility-with-XProtect-product-versions

OK, thank you very much. **But I did replace the CoreToolkit.dll of 2018R1 with the DLL of the same name from the 2016R3 Smart Client installation files, and that solved my problem when reading some properties from the playback data. I feel a bit uneasy about this and hope everything will keep working.** Maybe I should use the 2018R1 SDK and try to get the corresponding documentation (which is not easy because of the network).

From your link, I still do not know whether I can implement the "play faster" and "play slower" functions with the C++ SDK, and if so, how to do it.

If you are using the renderer toolkit, you must use an XML configuration that fits it.

Unfortunately that part of the documentation is not there yet: "to be written"

http://doc.developer.milestonesys.com/html/index.html?base=mmtkhelp/video_renderer_toolkit.html&tree=tree_2.html

I will make some inquiries here at Milestone Systems.

An alternative that requires more work: you could use a regular source toolkit and poll for the data more rapidly to implement faster playback.