How do the system loads of BitmapLiveSource and JpegLiveSource compare?

We are developing video analytics (VA) software using the MIP SDK.

To get video streams from Milestone, we used the JpegLiveSource class.

It worked well, automatically using NVDEC hardware acceleration.

In our software, we process video like this:

  • get a single frame of image from the JPEG video stream
  • decode the JPEG to YUV or bitmap
  • process it (crop, mask, detect, etc.)
  • encode the bitmap back to JPEG
  • go back to the first step for the next frame
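The loop above could be sketched roughly like this. The frame source (`GetNextJpegFrame`) and the processing step (`Analyze`) are placeholders for our own code, not MIP SDK APIs, and the codec calls use System.Drawing purely as an illustration:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Hypothetical per-frame loop; GetNextJpegFrame() and Analyze() stand in
// for our stream source and our VA processing, respectively.
while (running)
{
    byte[] jpeg = GetNextJpegFrame();               // 1. one frame from the JPEG stream

    using (var input = new MemoryStream(jpeg))
    using (Bitmap frame = new Bitmap(input))        // 2. decode JPEG -> bitmap
    {
        Analyze(frame);                             // 3. crop, mask, detect, etc.

        using (var output = new MemoryStream())
        {
            frame.Save(output, ImageFormat.Jpeg);   // 4. encode bitmap -> JPEG
            byte[] result = output.ToArray();
        }
    }
}                                                   // 5. loop to the next frame
```

As the list shows, steps 2 and 4 are pure codec work, which is why we care where the SDK does its own decoding and encoding.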

When I first saw BitmapLiveSource, I thought of the normal JPEG-making process: from the H.264 video, decode H.264 to a bitmap, then encode the bitmap to JPEG.

We assumed the MIP SDK performs these decoding and encoding steps hidden from us.

So we changed the stream class to BitmapLiveSource, hoping that getting bitmap images would mean a lower system load than getting JPEG images. But we were wrong: the system load is higher than before. That suggests the MIP SDK decodes the JPEG back to a bitmap again.

Anyway, here is my question.

What are the internal processes of JpegLiveSource and BitmapLiveSource?

Do they use hardware acceleration?

I imagined them like this; is it correct?

JpegLiveSource

  • get the H.264 stream from the recording server
  • decode H.264 to bitmap
  • encode the bitmap to JPEG
  • return the JPEG image

BitmapLiveSource

  • get the H.264 stream from the recording server
  • decode H.264 to bitmap
  • encode the bitmap to JPEG
  • decode the JPEG back to bitmap
  • return the bitmap image

This weird process is what it looks like when monitoring the system load.

I think BitmapLiveSource should skip the 3rd and 4th steps.

Can you tell me how it works?

It is the other way around: the stream is decoded to a bitmap and then re-encoded to JPEG.

I have a tip from the developers that might improve performance:

Milestone uses a toolkit fork that seems to be unnecessary and adds significant delay to the stream. It can be disabled by inserting this line of code before creating the BitmapLiveSource object:

EnvironmentManager.Instance.EnvironmentOptions[EnvironmentOptions.ToolkitFork] = "No"; 
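For context, a minimal sketch of where that line would sit; this assumes a camera `Item` (here called `cameraItem`) has already been resolved, and the constructor arguments may differ by SDK version:

```csharp
// Disable the toolkit fork before the BitmapLiveSource is created.
EnvironmentManager.Instance.EnvironmentOptions[EnvironmentOptions.ToolkitFork] = "No";

// Assumed usage: cameraItem is an already-resolved VideoOS.Platform.Item.
var liveSource = new BitmapLiveSource(cameraItem, BitmapFormat.BGR24);
liveSource.LiveContentEvent += OnLiveContent;  // decoded bitmap frames arrive here
liveSource.Init();
```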

Thank you for the answer.

Then you mean, when I use BitmapLiveSource:

  • get the H.264 stream from the recording server
  • decode H.264 to bitmap
  • return bitmap images

And when I use JpegLiveSource:

  • get the H.264 stream from the recording server
  • decode H.264 to bitmap
  • encode the bitmap to JPEG
  • return JPEG images
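If that is right, the practical difference shows up in our event handlers: with JpegLiveSource we must decode the payload ourselves, while with BitmapLiveSource we should not need to. A rough sketch, where `GetJpegBytes` and `GetBitmap` are placeholder helpers for extracting the payload from the event args (the real member names should be checked against the MIP SDK documentation):

```csharp
using System;
using System.Drawing;
using System.IO;

// Assumed handler shapes; payload extraction is left as placeholders.
void OnJpegLiveContent(object sender, EventArgs e)
{
    byte[] jpeg = GetJpegBytes(e);         // placeholder: JPEG payload from event
    using (var ms = new MemoryStream(jpeg))
    using (var frame = new Bitmap(ms))     // extra decode step on our side
        Analyze(frame);
}

void OnBitmapLiveContent(object sender, EventArgs e)
{
    using (Bitmap frame = GetBitmap(e))    // placeholder: already-decoded frame
        Analyze(frame);                    // no JPEG decode needed
}
```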

Do I understand you correctly?

If so, when I use BitmapLiveSource the system load should be lower than with JpegLiveSource.

But the actual result wasn't like that.

Anyway, I'll try what you told me.

Thank you again.

The decoding and re-encoding will happen on the GPU if you have a graphics card supported by Milestone Hardware Acceleration. Measuring the CPU and memory footprint, JPEG is much better, but the GPU might be working harder. This is only a hunch, but it might explain what you have observed.