We have been connecting to the Image/Recording Server directly and fetching transcoded JPEGs, which we then show to the user. However, my understanding is that this approach is slow and compute-intensive for the recording server. I know that we can fetch the raw H.264 data in the generic byte format from the recording server, but I also saw that there are other options.
When should I use a WebRTC integration or a MobileServer (web/js) integration?
What protocols/formats does the Mobile Server use under the hood for communication and media streaming?
Which is least expensive for the recording server to run per stream?
Are there any other advantages/disadvantages of one or the other that I should be thinking of?
Thank you for the help!
Yes, requesting JPEGs puts performance pressure on the Recording Server (RS), as the transcoding uses a lot of computation. It is strongly recommended to seek other solutions, so I am glad you mention that it is possible to get raw video.
Both WebRTC and the Mobile Server fetch the raw video from the RS; in this respect they are the same. For the RS there is no difference in how expensive they are: each stream costs the RS the same no matter whether the client is WebRTC, Mobile, or the Smart Client (or the Image Server protocol, when asking for raw video).
WebRTC passes the raw video through unaltered, which means that today the camera must be configured to deliver H.264 to the RS.
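To make the WebRTC path concrete, here is a minimal browser-side sketch of a receive-only viewer. The `RTCPeerConnection` calls are standard browser WebRTC; the signaling endpoint URL and the request/response field names (`cameraId`, `offerSDP`, `answerSDP`) are assumptions for illustration, not a documented API.

```javascript
// Build the signaling request body. The field names here are assumptions,
// not a documented Milestone API -- adapt to your actual signaling service.
function buildSignalingBody(cameraId, offerSdp) {
  return JSON.stringify({ cameraId, offerSDP: offerSdp });
}

// Receive-only WebRTC playback sketch: create a peer connection, send our
// SDP offer to a (hypothetical) signaling endpoint, apply the SDP answer,
// and attach the incoming H.264 track to a <video> element.
async function startWebRtcPlayback(signalingUrl, cameraId, videoElement) {
  const pc = new RTCPeerConnection();

  // Receive-only: we only want the camera's stream, no upstream media.
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hypothetical signaling exchange: POST our SDP offer, receive the answer.
  const response = await fetch(signalingUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildSignalingBody(cameraId, pc.localDescription.sdp),
  });
  const { answerSDP } = await response.json();
  await pc.setRemoteDescription({ type: 'answer', sdp: answerSDP });
  return pc;
}
```

Note that no decoding happens in your code at all: the browser decodes the H.264 stream natively, which is why this path is cheap for both the client and the RS.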
The Mobile Server does transcode (except for direct streaming), producing JPEG at the requested size. It is able to transcode all the formats/codecs supported across the many supported camera models.
In the playback scenario you have finer control with the Mobile Server, where you can even play backwards; with WebRTC you can seek to a time and then play forward.