We are developing video analytics (VA) software using the MIP SDK.
To get video streams from Milestone, we have been using the JPEGLiveSource class.
It worked well and automatically used NVDEC hardware acceleration.
In our software, we process video like this:
- get a single frame image from the JPEG video stream
- decode the JPEG to YUV or a bitmap
- process it (cut, mask, detect, etc.)
- encode the bitmap back to JPEG
- return to the first step for the next frame
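To make the per-frame workload concrete, here is a minimal sketch of the loop above. It uses Pillow as a stand-in for the real decode/encode path (an assumption for illustration; our actual application gets frames from the MIP SDK's JPEGLiveSource, and the `process_frame` name and the crop step are just placeholders for our cut/mask/detect stage):

```python
import io
from PIL import Image

def process_frame(jpeg_bytes):
    # step 1: decode the JPEG to a bitmap (RGB raster)
    img = Image.open(io.BytesIO(jpeg_bytes)).convert("RGB")
    # step 2: processing stage; a simple crop stands in for cut/mask/detect
    w, h = img.size
    img = img.crop((0, 0, w // 2, h // 2))
    # step 3: encode the bitmap back to JPEG for the next stage
    out = io.BytesIO()
    img.save(out, format="JPEG", quality=90)
    return out.getvalue()

# drive the loop once with a synthetic 64x48 frame
buf = io.BytesIO()
Image.new("RGB", (64, 48), "gray").save(buf, format="JPEG")
result = process_frame(buf.getvalue())
```

Every frame pays one JPEG decode and one JPEG encode on top of the processing itself, which is why the source class's internal pipeline matters so much to us.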
When I first saw BitmapLiveSource, I assumed the normal JPEG production process: from the H.264 video, decode H.264 to a bitmap, then encode that bitmap to JPEG.
We figured the MIP SDK must have these decoding and encoding steps hidden inside.
So we switched our stream class to BitmapLiveSource, expecting that getting bitmap images would put a lower load on the system than getting JPEG images. But we were wrong: the system load is higher than before, which strongly suggests the MIP SDK decodes the JPEG back to a bitmap again.
Anyway, here are my questions:
What are the internal pipelines for JPEGLiveSource and BitmapLiveSource?
Do they use hardware acceleration?
I imagine the following; is it correct?
JPEGLiveSource
- get the H.264 stream from the recording server
- decode H.264 to a bitmap
- encode the bitmap to JPEG
- return the JPEG image
BitmapLiveSource
- get the H.264 stream from the recording server
- decode H.264 to a bitmap
- encode the bitmap to JPEG
- decode the JPEG back to a bitmap
- return the bitmap image
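One way to put a number on this suspicion is to measure what a redundant JPEG encode-then-decode round-trip costs per frame, since that is exactly the pair of steps we believe BitmapLiveSource should be skipping. A sketch, again using Pillow as a stand-in rather than the MIP SDK's internals (the frame size and iteration count are arbitrary assumptions):

```python
import io
import time
from PIL import Image

# synthetic frame standing in for one decoded 1080p video frame
frame = Image.new("RGB", (1920, 1080), "gray")

def jpeg_roundtrip(img):
    # encode bitmap -> JPEG, then decode JPEG -> bitmap:
    # the two steps that would be pure waste if BitmapLiveSource
    # really re-encodes internally before handing back a bitmap
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=90)
    return Image.open(io.BytesIO(buf.getvalue())).convert("RGB")

start = time.perf_counter()
for _ in range(10):
    out = jpeg_roundtrip(frame)
elapsed_per_frame = (time.perf_counter() - start) / 10
# elapsed_per_frame approximates the extra CPU cost per frame
# that the hypothesized steps 3 and 4 would add
```

Multiplied across many cameras and full frame rates, even a few milliseconds of redundant round-trip per frame would match the load increase we observed after switching.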
This odd pipeline is what it looks like when we monitor the system load.
I would expect BitmapLiveSource to skip the third and fourth steps.
Can you explain how it actually works?