How practical would applying an Nvidia Jetson device be?

Regarding an idea to localize image processing at the camera:

Is there potential gain in processing the signal at the source (a physical connection between the camera and an Nvidia compute device) versus recalling it from the archive?

We are currently aiming for basic object detection on a surface that moves quickly through the field of view.

My interpretation is that the blurring is associated with the compression of the file format.

The ideal solution would be image processing from the archive as is; however, due to blur, a frame does not hold enough resolution to make a detection. I hear that storing in JPEG format may preserve more information content, giving a better detection opportunity.

Would storing in JPEG be about as effective as detecting in real time on a compute device attached to the camera?

Mostly, the ideal solution is to retrieve from the archive. Is there a technique to reduce blur when a surface moves through a camera's field of view quickly? And if not, would a Jetson device offer an opportunity, in the scope of writing detection instances or images to the archive?

Thomas
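For reference, one classical software technique for reducing linear motion blur is Wiener deconvolution with an estimated motion kernel. The sketch below is a minimal, generic illustration in NumPy; the kernel length, angle, and noise ratio are placeholder values that would need to be estimated from the actual footage, and nothing here uses the Milestone SDK.

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Linear motion-blur kernel: a normalized line of `length` pixels
    drawn at `angle_deg` in a size x size grid."""
    psf = np.zeros((size, size))
    c = size // 2
    ang = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, max(length * 4, 8)):
        x = int(round(c + t * np.cos(ang)))
        y = int(round(c + t * np.sin(ang)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deblur(img, psf, k=0.01):
    """Wiener deconvolution in the frequency domain.
    `k` is the assumed noise-to-signal ratio; larger k = gentler, safer."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

Deconvolution only helps when the kernel estimate is close to the real blur; compression artifacts in archived frames will further limit how much detail can be recovered, so it is a mitigation, not a cure.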

In Milestone XProtect LPR, detection is performed on every frame if the camera is set up to use JPEG, and only on key frames if the camera is set up to use H.264/H.265. This might not exactly answer your question, but it might be helpful. If it does not, please explain in other words and give me some details on the needed functionality.
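The practical effect of that difference can be sketched in a few lines of Python. The frame metadata and GOP length below are made-up illustrations, not the Milestone API:

```python
# Illustrative only: frame dictionaries are hypothetical, not Milestone SDK objects.
def detection_frames(frames, codec):
    """JPEG streams are all self-contained frames, so LPR can inspect every one.
    H.264/H.265 streams are inspected only at key frames."""
    if codec == "jpeg":
        return frames
    return [f for f in frames if f["keyframe"]]

# Three seconds at 30 fps with an assumed 30-frame GOP (one key frame per second):
stream = [{"id": i, "keyframe": i % 30 == 0} for i in range(90)]
print(len(detection_frames(stream, "jpeg")))  # 90 detection opportunities
print(len(detection_frames(stream, "h264")))  # 3 detection opportunities
```

For a car crossing the field of view in under a second, that gap in detection opportunities is exactly why the JPEG setup can matter.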

Yes, this is helpful. A goal is to understand how to reduce the blur effect that may occur, for example, between a fast-moving car and a slow one when reading the license numbers.

Are there software adjustments to trade off quality attributes in exchange for frames with minimal blur and a good detection opportunity? Retrieval and processing of frames is being tested with the SDK at the moment.

It would be possible to experiment with other camera settings. There will be a trade-off between quality and bandwidth/disk requirements.
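On the archive-retrieval side, one software-only mitigation worth testing is to score each stored frame for sharpness and run detection only on the least-blurred ones. A minimal sketch using the variance-of-Laplacian measure (plain NumPy on grayscale arrays; the function names are illustrative and nothing here calls the Milestone SDK):

```python
import numpy as np

def sharpness(gray):
    """Variance of a 4-neighbour Laplacian: motion blur suppresses high
    spatial frequencies, so blurrier frames score lower."""
    lap = (-4.0 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return float(lap.var())

def sharpest(frames, top_n=1):
    """Indices of the top_n sharpest frames, best first."""
    order = sorted(range(len(frames)),
                   key=lambda i: sharpness(frames[i]), reverse=True)
    return order[:top_n]
```

Since a fast-moving surface is only briefly in the field of view, ranking the frames from that interval and detecting on the top few may recover usable plates without changing camera settings at all.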