We have a video server that we hit via a reverse-proxy Docker container (running nginx). We noticed that if we successfully connect to the server and log in, but that Docker container then goes down or is stopped manually, numerous repeating requests are sent. So we searched through the library to track this down, specifically the files `XPMobileSDK.js`, `ConnectionRequest.js`, `Connection.js`, and `Ajax.js`.
What we observed is that the library uses a `setInterval` to regularly send a `LiveMessage` request for the established connection. After the Docker container goes down, the next interval iteration fires, the Ajax request is sent, and it fails with a 404 HTTP status. That failed request is handled by sending another request from the `onComplete` callback whenever the status is not 200 and the `readyState` is 4. Meanwhile, because the `LiveMessage` requests are driven by the `setInterval`, the timer eventually sends another one, which also fails and starts its own retry chain, so the number of requests being sent and resent keeps growing.
If I leave it sitting long enough with the Docker container down, the library eventually reaches a point where it is sending over 100 requests per second. I was wondering if anyone on the team has any thoughts on this, has observed it before, or whether it is a known issue, if an issue at all. Thank you in advance for your help.
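To make the compounding effect concrete, here is a minimal, purely illustrative model of the pattern described above (the names and structure are hypothetical, not the SDK's actual internals). Each interval tick starts a new `LiveMessage`; while the proxy is down, each failure becomes a self-perpetuating retry chain via `onComplete`, so every tick replays all existing chains plus one new request:

```javascript
// Hypothetical model of the runaway-retry pattern (illustrative only;
// not the real XPMobileSDK internals).
let activeRetryChains = 0; // each failed LiveMessage becomes one chain
let totalRequests = 0;     // total requests issued so far

// One setInterval tick while the server is unreachable:
// the timer fires a new LiveMessage (which fails and becomes a new
// retry chain), and every existing chain replays its failed request.
function tick() {
  activeRetryChains++;             // new LiveMessage -> new retry chain
  totalRequests += activeRetryChains; // all chains send this tick
}

// Simulate 10 ticks with the proxy down.
for (let i = 0; i < 10; i++) tick();
// Requests per tick grow linearly (1, 2, 3, ...), so the total grows
// quadratically: 1 + 2 + ... + 10 = 55 requests after 10 ticks.
```

Under this model the request rate climbs without bound, which matches the "over 100 requests per second" observation once enough interval iterations have elapsed.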
Hi Hunter,
What you described sounds very interesting.
We will take a look at it for sure.
Thanks!
You are right, Hunter!
We reproduced the issue in-house.
It is definitely not the desired behavior.
We will try to introduce a timeout between LiveMessage commands for the upcoming 2019 R1 release.
This will at least decrease the number of messages sent to one per second.
We are continuing to discuss what the best behavior in this case would be.
Most probably, after some interval of being unable to send LiveMessages, the SDK will disconnect and inform observers of the state/status.
I doubt, however, that this will be available in 2019 R1.
Most probably in 2019 R2.
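The planned R1 mitigation described above can be sketched as a simple throttle: enforce a minimum gap between LiveMessage sends so that failures cannot compound past one request per interval. This is a hypothetical sketch (function and constant names are my own, and the 1-second gap is assumed from the "one per second" figure above), not the SDK's actual code:

```javascript
// Hypothetical throttle sketch: at most one LiveMessage per minimum gap.
const LIVE_MESSAGE_MIN_GAP_MS = 1000; // assumed 1s, per the reply above
let lastSentAt = -Infinity;           // timestamp of the last send

// Returns true if the send was allowed, false if it was suppressed
// because it arrived inside the minimum gap.
function maybeSendLiveMessage(now, send) {
  if (now - lastSentAt < LIVE_MESSAGE_MIN_GAP_MS) return false; // throttled
  lastSentAt = now;
  send();
  return true;
}
```

With a guard like this in the send path, both the interval tick and any retry from `onComplete` funnel through the same rate limit, so the worst case stays at one request per second instead of growing without bound.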
Awesome, glad that it was reproducible for you; always great to hear. Thank you for the quick confirmation.
@Petar Vutov I saw there was a 2019 R1 release and was wondering whether it contained anything regarding this reported issue. I did not see anything in the release notes, but I wanted to check for any progress.
Hi Hunter,
For the 2019 R1 release we just increased the timeout between LiveMessage commands in the case of a failed connection to the Mobile Server.
You are right, though - the release notes were not updated with this particular fix.
You can give it a try.
@Petar Vutov
Saw there’s a 2019 R2; downloading it now. Just curious whether there have been any further developments on the issue reported in this thread. Also, is there a changelog that can be viewed for R2?
Hello @Hunter Adams ,
I am digging deeper into this problem. I can clearly see that what you describe is happening, but after 30 seconds all LiveMessages are stopped. Please check the image and your code to make sure you are not overriding some of the default behavior.
@Teodor Hadjistoyanov
I haven’t modified that code at all and do not have a failCallback defined there. That said, I am no longer observing the same behavior now that I am testing after updating to R2, so it seems to have been resolved since this question was created. We’ve been occasionally revisiting this post to see whether it had been updated since we reported it and your team reproduced it; thanks for pointing out its resolution!
@Hunter Adams
You are right, Hunter.
This protection timeout was implemented in R2. After it expires, live messages are stopped. The same happens in the Milestone Web Client: 30 seconds after the mobile server is stopped, for example, the web client logs out.
The actual value of this timeout is received from the Mobile Server and is the same as the watchdog timeout value configured in the server (30 seconds by default).
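The protection timeout described above can be sketched as follows. This is a hypothetical illustration (function and variable names are mine, not the SDK's): once LiveMessages have been failing continuously for the watchdog interval, the client stops sending and transitions to a disconnected state. The 30-second default comes from the reply above; in the real product the value is received from the Mobile Server's configuration:

```javascript
// Hypothetical sketch of the R2 protection timeout (illustrative names).
// In the real SDK this value reportedly comes from the Mobile Server's
// watchdog configuration; 30 seconds is the stated default.
const WATCHDOG_TIMEOUT_MS = 30000;

let firstFailureAt = null; // when the current failure streak began
let connected = true;      // once false, LiveMessages stop

// Called after each LiveMessage attempt with its outcome and a timestamp.
function onLiveMessageResult(ok, now) {
  if (ok) {
    firstFailureAt = null; // success resets the failure streak
    return;
  }
  if (firstFailureAt === null) firstFailureAt = now;
  if (now - firstFailureAt >= WATCHDOG_TIMEOUT_MS) {
    connected = false; // stop live messages and notify observers
  }
}
```

Resetting the streak on any success means a brief network blip does not log the client out; only a sustained outage of the full watchdog interval triggers the disconnect, matching the Web Client behavior described above.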