All Systems Operational

About This Site

The root cause of the reboots has been identified and resolved.
For details, please read Intel MDS Vulnerability - How It Affected AWS and Us:
https://magemojo.com/magento/ecommerce/intel-mds-vulnerability-how-it-affected-aws-and-us/

Mojo Stratus - Northern Virginia Operational
Mojo Stratus - London Operational
Mojo Stratus - Frankfurt Operational
Mojo Stratus - Sydney Operational
Past Incidents
Sep 22, 2019

No incidents reported today.

Sep 21, 2019

No incidents reported.

Sep 20, 2019
Postmortem posted
Sep 21, 17:28 EDT
Resolved - This incident has been resolved.
Sep 20, 04:51 EDT
Investigating - We are currently investigating this issue.
Sep 20, 03:48 EDT
Sep 19, 2019

No incidents reported.

Sep 18, 2019

No incidents reported.

Sep 17, 2019
Resolved - This incident has been resolved.
Sep 17, 00:17 EDT
Monitoring - A fix has been implemented and we are monitoring the results.
Sep 17, 00:10 EDT
Investigating - We are currently investigating this issue.
Sep 17, 00:06 EDT
Sep 16, 2019
Resolved - All stores have been verified to be functioning.

Due to this hardware failure, data had to be restored to an entirely new node. Files were restored from the 9-15-2019 10pm UTC backup, and the database from the 3:30pm UTC backup. Because the new node uses different IP addresses, this will affect any outbound static IP whitelisting, along with inbound custom admin URLs. (A quick way to look up the new outbound IP is sketched after this incident's timeline.)

An email has been sent to those affected by this hardware failure.
Sep 16, 19:44 EDT
Monitoring - All stores have been recovered. We are in the process of verifying that everything is back up and working correctly.
Sep 16, 18:14 EDT
Identified - AWS has confirmed that there is a hardware failure that affected this node. We are working to move everything from that node to new hardware as quickly as possible.
Sep 16, 16:45 EDT
Investigating - We are currently investigating this issue.
Sep 16, 15:40 EDT
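
For stores that whitelist our outbound IP with third-party services, the simplest check after a move to new hardware is to ask the server what public IP it now egresses from. Below is a minimal sketch in Python, assuming outbound HTTPS is allowed; api.ipify.org is used purely as an example IP echo service, not as part of our platform tooling.

# Minimal sketch: print the server's current outbound public IP so it can be
# re-added to any third-party firewall whitelists after the move to new hardware.
# api.ipify.org is only an example echo service; any equivalent endpoint works.
import urllib.request

def current_outbound_ip() -> str:
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print("Current outbound public IP:", current_outbound_ip())
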
Sep 15, 2019

No incidents reported.

Sep 14, 2019
Resolved - There was a disruption of the OS-level networking which caused the internal Kubernetes network to lose endpoints to individual store pods. We are still investigating whether this fault occurred in the operating system itself or was caused by a hardware issue or a temporary physical disconnect. This appears to have affected one node only. (A sketch of how Service endpoints can be inspected follows this incident's timeline.)
Sep 14, 17:51 EDT
Investigating - We are currently investigating a network disruption between CloudFront and a Virginia-based node, which is resulting in CloudFront 502 errors for some stores.
Sep 14, 15:36 EDT
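
For background on the failure mode above: when a node's networking drops, Kubernetes removes that node's pods from each Service's endpoint list, and upstream proxies (here, CloudFront origins) begin returning 502s. The snippet below is a rough illustration only, using the official kubernetes Python client; the namespace and service names are placeholders, and it assumes cluster credentials that customer stores do not have.

# Rough illustration: list the pod endpoints currently registered for a Service.
# An empty (or shrunken) address list is the symptom seen during this incident.
# "store-namespace" and "store-web" are placeholder names, not real identifiers.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

eps = v1.read_namespaced_endpoints(name="store-web", namespace="store-namespace")
addresses = [
    addr.ip
    for subset in (eps.subsets or [])
    for addr in (subset.addresses or [])
]
print(f"{len(addresses)} ready endpoint(s):", addresses)
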
Sep 13, 2019
Completed - The scheduled maintenance has been completed.
Sep 13, 19:00 EDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Sep 13, 18:00 EDT
Scheduled - We will be undergoing a scheduled maintenance to Frankfurt Production instances at this time.
Sep 13, 17:51 EDT
Sep 12, 2019

No incidents reported.

Sep 11, 2019

No incidents reported.

Sep 10, 2019
Resolved - This incident has been resolved.
Sep 10, 04:50 EDT
Investigating - We are currently investigating this issue.
Sep 10, 03:53 EDT
Sep 9, 2019

No incidents reported.

Sep 8, 2019

No incidents reported.