I’ve been using the Mobile Server Protocol to find and export recorded sequences of video, and now I’m trying to stream live video.
I used the VideoStream sample as a starting point but it’s not obvious to me how it actually works. Its sequence of commands (gleaned via Wireshark) is:
- Connect
- LogIn
- GetAllViewsAndCameras
- RequestStream
It’s the last of these that I’m not clear about. I was hoping that I could literally get a “copy” of the raw RTSP stream from the camera that could then be viewed in (for example) VLC.
There are lots of parameters and it would take a long time to go through every combination so I’m hoping someone can provide some clarification for me please!
- SignalType - “Live”, presumably!
- MethodType - “Push” or “Pull” - intuitively I would choose “Pull”, since that’s how RTSP works, but the VideoStream sample uses “Push”.
- StreamType - “FragmentedMP4” or “Transcoded” - but I get error 14 if I omit the parameter or use “Transcoded” (the default). Also, the ByteOrder parameter refers to a “Native” stream type, but this also gives me error 14.
- ByteOrder - “LittleEndian” or “Network” - I’m on Linux so I assumed “Network” is the right option.
Is there a more detailed description of these somewhere that gives sensible combinations for different applications, perhaps?
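For what it’s worth, the ByteOrder choice seems less scary in Java than it would be in C: “Network” byte order means big-endian, and ByteBuffer lets the decode order be set explicitly regardless of the host CPU, so being on Linux/x86 shouldn’t matter by itself. A minimal sketch (no Mobile Server specifics assumed):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo {
    // Parse the same four bytes under each ByteOrder setting.
    static int parse(byte[] raw, ByteOrder order) {
        return ByteBuffer.wrap(raw).order(order).getInt();
    }

    public static void main(String[] args) {
        byte[] raw = {0x00, 0x00, 0x00, 0x01};
        // "Network" byte order is big-endian, which is also Java's
        // ByteBuffer default, independent of the host CPU.
        System.out.println(parse(raw, ByteOrder.BIG_ENDIAN));    // 1
        System.out.println(parse(raw, ByteOrder.LITTLE_ENDIAN)); // 16777216
    }
}
```

So whichever ByteOrder the stream is requested with, the client presumably just needs to set the matching order when parsing the headers.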
Hi,
Unfortunately, the Mobile Server does not provide an RTSP stream. If you need RTSP, you can try the Open Network Bridge Server.
Mobile Server provides either FragmentedMp4 or Transcoded (JPEGs), depending on the StreamType parameter. Depending on the StreamHeaders parameter, the video data is either raw or wrapped in a structure with some additional data. You can refer to the protocol documentation for more information on that.
Thanks Svetlana. That’s a shame but perhaps not a surprise.
And what about the parameters? Is there somewhere that describes them more fully than the link I mentioned?
Regarding the documentation, you are looking in the right place.
If you need any additional information, let us know.
I do need additional information - that’s why I asked this question! 
What I need to know is mostly covered in the original question, but it’s things like:
- What’s the difference between the “Push” and “Pull” MethodTypes?
- Is there a “Native” StreamType? (If so, why isn’t it documented? If not, why is it mentioned in the ByteOrder description?)
- Why might I need to choose “Network” over “LittleEndian” for ByteOrder?
- What combinations of settings are sensible for different playback cases? For example, what if I wanted to get a stream of JPEGs (like a MJPEG or Multipart MIME Replace)? What settings would I need to use then?
- And you said “…the video data is either raw or wrapped in a structure with some additional data. You can refer to the protocol for more information on that.” Where in the Mobile Server Protocol documentation is that…? I’ve seen such wrappers elsewhere, but not directly referring to the Mobile Server Protocol; is that what you’re referring to?
- MethodType - with “Push”, the client opens a connection and the server pushes the data continuously; with “Pull”, the client requests data chunk by chunk.
- Currently no “Native” StreamType is supported. Thanks for the remark about ByteOrder - it will be fixed.
- Playback does not support FragmentedMp4, only Transcoding. (No MJPEG.)
- For the video data, there is a detailed description of the video data headers in the MIPSDK Mobile documentation:
https://doc.developer.milestonesys.com/mipsdkmobile/index.html
Go to Getting started…, Protocols, Video Channel.
Thanks for the info and the link. (Here’s a shortcut link to the page you mentioned, in case anyone else needs it.)
I’ll have a read and experiment a bit.
Quick extra question relating to that link: in the “Header Extension sizes” section, all of the 4-byte fields are shown as INT rather than UINT (which is what’s used everywhere else except for altitude). Should the fields in this section all actually be UINT?
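Related to that: since Java has no unsigned 32-bit type, any 4-byte UINT field has to be widened to a long on the client side anyway. A minimal sketch of how I’d read one (no assumptions about the actual header layout):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class UIntRead {
    // Java ints are signed, so a 4-byte UINT must be widened to long
    // to avoid negative values for anything above 0x7FFFFFFF.
    static long readUInt32(byte[] buf, int offset, ByteOrder order) {
        int signed = ByteBuffer.wrap(buf, offset, 4).order(order).getInt();
        return Integer.toUnsignedLong(signed); // mask off sign extension
    }

    public static void main(String[] args) {
        byte[] max = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
        System.out.println(readUInt32(max, 0, ByteOrder.LITTLE_ENDIAN)); // 4294967295
    }
}
```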
A few additional comments about the existing documentation based upon the RequestStream call that the VideoStream sample uses…
- ItemId should be CameraId.
- FragmentedDurationMs is not documented.
- ResizeAvailable is not documented.
Also, with StreamHeaders set to “NoHeaders” (which makes it faster, requiring less parsing), how can I tell how many bytes are in the frame? I would expect that at the very least the frame size would somehow need to be supplied, else how do I know when I have finished reading the frame bytes?
EDIT:
I also just noticed that it says that ItemId, DestWidth, DestHeight, Fps, and ComprLevel are “mandatory for SignalType not Upload”.
This is not correct! I’ve just requested a live FragmentedMP4 stream and didn’t supply any of those parameters (it’s CameraId not ItemId, as I mentioned above) and it worked fine.
And something confusing about trying to get JPEGs.
If I provide just the CameraId and the SignalType (“Live”), I get error 14, which (according to this page) is “Wrong input parameter”. No clarification about what that means, which parameter is wrong, or what’s wrong about it.
I would expect error 13 (“Missing input parameter”) if I had missed one, so what’s wrong with SignalType being “Live”? (The CameraId is certainly fine, because if I specify “FragmentedMP4” for the StreamType, it works fine.)
I’ve tried “Push” and “Pull” for the MethodType in case one isn’t compatible with getting JPEGs. I’ve tried it with and without a ComprLevel and with and without KeyFramesOnly. I’ve no idea what combination of input parameters will allow me to get live JPEGs!
Hi Richard,
Thank you for your feedback.
The required parameters for starting a Transcoded stream are:
- SignalType
- MethodType
- ItemId
- DestWidth
- DestHeight
- Fps
- ComprLevel
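A sketch of those parameters collected into a map, purely illustrative - the transport encoding of RequestStream isn’t shown, the values are placeholders, and the exact parameter names and casing should be verified against the protocol documentation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TranscodedRequest {
    // Collect the parameters listed above for a Transcoded live stream.
    // Parameter names follow the list in this thread; the camera id value
    // is a placeholder (the real GUID comes from GetAllViewsAndCameras),
    // and the numeric values are arbitrary examples.
    static Map<String, String> transcodedParams(String cameraId) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("SignalType", "Live");
        p.put("MethodType", "Push");
        p.put("ItemId", cameraId);
        p.put("DestWidth", "640");
        p.put("DestHeight", "480");
        p.put("Fps", "8");
        p.put("ComprLevel", "70");
        return p;
    }
}
```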
By the way, what platform do you use for your development? We provide several SDKs for easier integration (.Net, iOS, Android, JavaScript).
Thanks, I managed to get a JPEG. 
When getting a transcoded stream, does push not work? I read the header and the data, but it was only one JPEG. Should I carry on reading from the stream and expect additional JPEGs to arrive? If so, I assume I should also expect headers so I know how many bytes to read? EDIT: I tried carrying on reading bytes and it does look like I get more headers and more images. I’m not yet sure if it carries on indefinitely though.
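The read loop I described amounts to something like this sketch, with a toy 4-byte big-endian length prefix standing in for the real header (the actual Mobile Server header has more fields - see the Video Channel documentation for the real layout):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class FrameReader {
    // Read header+payload pairs until the server closes the connection.
    // Toy header: a single 4-byte big-endian payload length; the real
    // protocol header carries more fields than this.
    static List<byte[]> readFrames(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        List<byte[]> frames = new ArrayList<>();
        try {
            while (true) {
                int size = data.readInt();   // payload length from header
                byte[] payload = new byte[size];
                data.readFully(payload);     // block until the whole frame arrives
                frames.add(payload);
            }
        } catch (EOFException end) {
            // connection closed; stop reading
        }
        return frames;
    }

    public static void main(String[] args) throws IOException {
        // Two fake frames: a 3-byte payload, then a 2-byte payload.
        byte[] stream = {0, 0, 0, 3, 1, 2, 3, 0, 0, 0, 2, 9, 9};
        System.out.println(readFrames(new ByteArrayInputStream(stream)).size()); // 2
    }
}
```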
I’m also still not clear how to work out how many bytes to read when headers are turned off (i.e. with StreamHeaders set to “NoHeaders”).
And regarding the platform, I’m using protocol integration on Linux. I’m using Java, so I would use an SDK if one were available.
Hi,
Sorry for the delay.
Yes, you should carry on reading when you are in push mode, and frames should keep arriving. You can use the JPEG starting sequence to separate frames when no headers are used, or the videoId when headers are sent.
Thanks Svetlana. When you say “jpeg starting sequence” do you mean I would need to look out for the sequence of bytes that starts all JPEG files?
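For the record, after checking: all JPEG data does start with the two-byte start-of-image (SOI) marker 0xFF 0xD8 (and ends with the end-of-image marker 0xFF 0xD9), so a NoHeaders stream can be split by scanning for it. A rough sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class JpegSplitter {
    // Find the offsets of the JPEG start-of-image marker (0xFF 0xD8)
    // in a byte stream; each offset marks the start of a new frame.
    static List<Integer> soiOffsets(byte[] stream) {
        List<Integer> offsets = new ArrayList<>();
        for (int i = 0; i + 1 < stream.length; i++) {
            if ((stream[i] & 0xFF) == 0xFF && (stream[i + 1] & 0xFF) == 0xD8) {
                offsets.add(i);
            }
        }
        return offsets;
    }
}
```

One caveat: scanning only for SOI can misfire if a segment embeds a thumbnail JPEG (e.g. in EXIF data), so tracking the EOI marker 0xFF 0xD9 as well is safer.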