Dear Support,
Thanks to the documentation you provided, the GenericByte format is much clearer to me now. However, I still have a doubt about the payload structure; let me explain.
Audio sub-format
Let's consider the AAC/ADTS codec for the audio coming from the camera. In this case, MUST the payload of a single Audio Stream Packet be a complete ADTS frame? My idea would be to split the audio stream from the camera into generic chunks of bytes (for example, X bytes each, without caring about the ADTS frame structure) and then build Audio Stream sub-format Packets by adding the mandatory header. Is that possible?
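In case it helps to clarify the alternative I am asking about, here is a minimal sketch (my own illustration, not based on your SDK) of the frame-aligned option: splitting the byte stream on ADTS frame boundaries using the 13-bit frame_length field that every ADTS header carries, instead of cutting at arbitrary X-byte offsets.

```python
def split_adts(buf: bytes):
    """Split a byte stream into complete ADTS frames.

    The 13-bit frame_length field in the ADTS header (bytes 3..5)
    gives the total frame size, header included.
    """
    frames = []
    i = 0
    while i + 7 <= len(buf):  # 7 bytes = ADTS header without CRC
        # syncword: 12 bits, all ones (0xFFF)
        assert buf[i] == 0xFF and (buf[i + 1] & 0xF0) == 0xF0, "lost ADTS sync"
        length = ((buf[i + 3] & 0x03) << 11) | (buf[i + 4] << 3) | (buf[i + 5] >> 5)
        if i + length > len(buf):
            break  # incomplete trailing frame: keep it for the next read
        frames.append(buf[i:i + length])
        i += length
    return frames, buf[i:]  # complete frames + leftover bytes
```

If arbitrary chunks are allowed, none of this parsing would be needed in my Driver; that is exactly what I would like to confirm.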
Video sub-format
Similarly to the question above: must I build a Video Stream Packet from, for example, a well-formed GOP, or may I treat the video stream as a plain sequence of raw bytes and build Packets that are unaware of GOP boundaries at that level?
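Again, to show what the GOP-aware option would mean on my side (my own sketch, assuming the camera emits H.264 in Annex B byte-stream form): I would have to scan for NAL start codes and look at nal_unit_type to locate GOP boundaries (an IDR slice, type 5, typically preceded by SPS/PPS).

```python
import re

def split_nals(buf: bytes):
    """Split an Annex B H.264 byte stream on start codes
    (00 00 01 or 00 00 00 01) and report each NAL unit's type."""
    starts = [m.end() for m in re.finditer(b"\x00\x00\x01", buf)]
    nals = []
    for k, s in enumerate(starts):
        e = starts[k + 1] - 3 if k + 1 < len(starts) else len(buf)
        nal = buf[s:e].rstrip(b"\x00")  # trim the extra zero of a 4-byte start code
        nal_type = nal[0] & 0x1F        # 5 = IDR slice (GOP start), 7 = SPS, 8 = PPS
        nals.append((nal_type, nal))
    return nals
```

If Packets may be GOP-unaware, my Driver could skip this scanning entirely and just forward raw byte ranges.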
Audio/Video format
The camera sends a synchronized MPEG audio/video stream (H.264 and AAC). Do I need to separate audio and video in my Driver and send the Registration Server Multi-Packets with audio and video stream packets inside? Is there another way that avoids demultiplexing the audio and video data streams? If splitting is the only possibility, is there a specific API in the Driver Framework to demultiplex an MPEG A/V stream and obtain the audio and video frames from it?
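To be explicit about what I mean by demultiplexing: assuming the camera delivers an MPEG-TS mux (an assumption on my part; it may instead expose separate RTP streams), the separation would amount to grouping the fixed 188-byte transport packets by PID, since the PID is what distinguishes the audio and video elementary streams. A sketch:

```python
from collections import defaultdict

def split_ts_by_pid(buf: bytes):
    """Group 188-byte MPEG-TS packets by PID, the 13-bit field
    that tells the audio and video elementary streams apart."""
    by_pid = defaultdict(list)
    for i in range(0, len(buf) - 187, 188):
        pkt = buf[i:i + 188]
        assert pkt[0] == 0x47, "lost TS sync"
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        by_pid[pid].append(pkt)
    return by_pid
```

I would rather not reimplement this in my Driver if the Framework (or the Registration Server itself) can accept the multiplexed stream as-is.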
In general, what is the Registration Server able to take care of regarding the audio/video data coming from the camera?
Thanks a lot,
BR.