@danielamitay I'm assuming the fix was deployed around 10am Central time. We have not seen any new 503 Service Unavailable errors since then. Thank you for your attention and efforts on this!
@dustin.autery can you check again now? In our last fix we added a throttle that was too strict, which resulted in 30s timeouts and, ultimately, cascading failures due to excessive queuing. We are going to over-provision instead of throttling.
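For anyone curious how a too-strict throttle cascades like this, here is a minimal, self-contained sketch (all limits, timings, and names are hypothetical, not our actual configuration): once the concurrency limit saturates, new requests queue behind it, and anything queued past the client's 30s timeout fails, so a traffic burst turns into a burst of errors.

```python
# Hypothetical sketch: an overly strict throttle turns a burst of
# traffic into cascading 30s timeouts via excessive queuing.
import asyncio

MAX_CONCURRENT = 2   # hypothetical: the too-strict throttle
HANDLE_SECONDS = 4   # hypothetical per-request service time
CLIENT_TIMEOUT = 30  # the 30s timeout mentioned above

throttle = asyncio.Semaphore(MAX_CONCURRENT)

async def handle(request_id: int) -> str:
    async with throttle:                     # requests queue here once saturated
        await asyncio.sleep(HANDLE_SECONDS)  # stand-in for real work
        return f"request {request_id}: 200 OK"

async def client(request_id: int) -> str:
    try:
        # The caller gives up after CLIENT_TIMEOUT seconds.
        return await asyncio.wait_for(handle(request_id), CLIENT_TIMEOUT)
    except asyncio.TimeoutError:
        return f"request {request_id}: failed (timed out in queue)"

async def main() -> None:
    # 20 concurrent requests, but only 2 in flight at a time: throughput
    # is 2 every 4s, so only the first ~14 complete within 30s and the
    # rest time out while queued -- the cascading-failure pattern.
    results = await asyncio.gather(*(client(i) for i in range(20)))
    print("\n".join(results))

asyncio.run(main())
```

Over-provisioning (raising capacity so the queue drains faster than callers time out) avoids this pattern at the cost of headroom, which is the trade-off described above.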
@danielamitay Any update on this? We continue to experience 503 errors.
We are not seeing a recovery. We're still encountering 503 Service Unavailable errors.
@dustin.autery apologies for the inconvenience and unavailability. We had a bad deployment that wasn't caught by CI. We've rolled back and fixed the issue; you should be seeing recovery.