
Amazon, Fortnite suffer major AWS service outage


AWS outage

On October 20, 2025, AWS experienced a major infrastructure outage that disrupted dozens of apps and websites globally. Affected services included Amazon’s retail platform, Alexa, and Fortnite.

The outage underscored how much of the internet depends on a small number of cloud hubs: AWS confirmed the disruption originated in its us-east-1 (Northern Virginia) region and pointed to an internal DNS resolution failure affecting DynamoDB API endpoints.

It was a clear example of the systemic risk that arises when critical services concentrate in one region.


Major platforms knocked offline

Fortnite, Snapchat, Ring, Venmo, Coinbase, and Amazon’s own retail and streaming services all reported disruptions. The outage triggered thousands of user reports worldwide via Downdetector and other monitoring tools.

Even non-gaming platforms saw impact, underscoring broad cloud exposure. Many companies scrambled as core systems became inaccessible. Recovery took hours with some lingering delays.


Root cause traced to DNS

AWS identified the outage as originating in the US-EAST-1 region and linked to a Domain Name System (DNS) resolution error affecting the DynamoDB API endpoint. Because DNS is critical to routing web traffic, the failure cascaded across many services.

AWS’s internal load-balancer health-monitoring systems were also implicated. The disruption was not caused by an attack but by an internal fault.
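To see why a DNS fault cascades so widely, consider how it surfaces to clients: when an endpoint hostname fails to resolve, every API call depending on it fails before a single request reaches the service. The helper below is an illustrative sketch, not AWS code; the `.invalid` hostname is used because that top-level domain is reserved and never resolves.

```python
import socket

def resolve_endpoint(hostname):
    """Resolve a service endpoint to IP addresses; return [] on DNS failure."""
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        # DNS resolution failed: every request that depends on this
        # endpoint name fails before any traffic is sent.
        return []

# A name that cannot resolve behaves, for clients, much like the outage did:
print(resolve_endpoint("dynamodb.invalid"))  # -> []
```

The point of the sketch is that the failure mode is invisible to metrics that only watch server-side health: the request never arrives.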


Timeline of service disruption

The outage began at approximately 3:11 a.m. ET in the US-EAST-1 region. AWS announced the underlying issue was “fully mitigated” by 6:35 a.m., yet many services continued to recover throughout the day.

Some companies reported downstream effects lasting into the afternoon. For hours, multiple popular apps and websites were inaccessible or severely degraded. The speed of recovery was faster than some past outages, but the impact was still substantial.


US-EAST-1 region spotlight

AWS’s US-EAST-1 region, located in Northern Virginia, is one of its oldest and most heavily utilized data-center hubs. Because many services route through or rely on this region, a failure there leads to widespread global impact.

The region’s concentration of resources underscores why one regional fault had global consequences. AWS has previously experienced major outages in this region, raising questions about regional diversification.


Business and financial impact

Disruptions to Amazon’s own retail and cloud operations, as well as fintech apps like Venmo and Coinbase, triggered financial and reputational costs. For gaming and streaming platforms, downtime means lost revenue and frustrated users.

Companies dependent on cloud infrastructure must absorb both the cost of disruption and the expense of mitigation. The incident will likely influence vendor-risk and cloud-strategy decisions across enterprises.


Gaming industry hit hard

For the gaming industry, the outage was dramatic: Fortnite and other major titles were knocked offline for many players. Game sessions were interrupted, login systems failed, and in some cases, in-game economies were paused.

Players and streamers alike faced large-scale disruption, highlighting how tightly connected modern games are to cloud services. The event will prompt game studios to review cloud redundancy strategies.


Streaming and retail services affected

Amazon’s own retail site and streaming services (Prime Video) were disrupted, along with other consumer services relying on AWS. Smart home platforms (Alexa, Ring) also faced interruptions, impacting home automation and security.

Users worldwide reported an inability to access Amazon.com or operate connected devices. The consumer-impact dimension emphasizes that cloud outages are not just enterprise problems; they affect everyday life.


Supply-chain ripple effects

Because many businesses, from software vendors to content delivery networks, rely on AWS infrastructure, the outage caused cascading effects. Smaller companies with fewer contingency plans felt the impact strongly as the upstream cloud failure blocked their operations.

Interruptions in ticketing systems, learning platforms, and payment processors were reported. The event exposes the fragility and interconnectedness of the modern digital supply chain.


Single-vendor dependency risks

The outage highlights risks inherent in relying heavily on a single cloud provider. When major infrastructure fails, thousands of dependent services go with it.

Enterprises are now rethinking strategies: multi-cloud, cross-region redundancy, and vendor diversification are gaining priority. The incident will likely accelerate the shift toward hybrid and multi-cloud architectures to reduce systemic risk.


Regulatory and policy implications

Regulators are paying attention: the concentration of cloud infrastructure with a few providers (AWS, Azure, Google Cloud) raises questions around competition, critical infrastructure classification, and resilience standards.

Regulators in the UK have warned about the dangers of over-reliance on a single provider. Governments may push for stricter rules on cloud vendor risk management and mandatory incident reporting.


Lessons for IT and operations teams

IT teams must assume outages will occur and plan accordingly: build cross-region failover, maintain offline backups, test recovery, and avoid designs where a single region failure causes full service collapse.

Disaster-recovery plans must now include cloud-provider regional failures, not just data-center outages. Monitoring and alerting must detect production-impacting events quickly and trigger failover procedures.
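The failover idea can be sketched in a few lines. This is a minimal illustration, not production code: the region names, health probes, and request functions are hypothetical placeholders injected by the caller.

```python
# Sketch: route requests to the first healthy region, assuming
# hypothetical per-region health-check and request callables.
REGIONS = ["us-east-1", "us-west-2"]  # primary first, fallback second

def call_with_failover(regions, is_healthy, send_request):
    """Try each region in order; skip regions whose health probe fails."""
    errors = {}
    for region in regions:
        if not is_healthy(region):
            errors[region] = "health check failed"
            continue
        try:
            return send_request(region)
        except ConnectionError as exc:
            errors[region] = str(exc)
    raise RuntimeError(f"all regions failed: {errors}")

# Example: simulate us-east-1 being down during an outage.
result = call_with_failover(
    REGIONS,
    is_healthy=lambda r: r != "us-east-1",
    send_request=lambda r: f"served from {r}",
)
print(result)  # -> served from us-west-2
```

Even a simple pattern like this only works if it is exercised regularly; failover paths that are never tested tend to fail exactly when they are needed.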


Communication and customer-experience lessons

During the outage, many services lacked clear communication timelines. Users were left uncertain when systems would return.

Companies must prepare incident-response communication plans: update dashboards, provide expected recovery times, and manage user expectations. Transparency during outages builds trust and can mitigate reputational damage.


Technical takeaway

That the root cause was an internal DNS error emphasizes that infrastructure monitoring must cover not only high-level metrics but also deep chain dependencies (e.g., DNS resolution, load-balancer health, database endpoints).

Dependency graphs must incorporate lower-level services. Tools must alert on “service-within-service” failures, not only user-facing services.
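One way to express the “service-within-service” idea is to probe a service’s full dependency chain rather than only its front door. The dependency graph and probe results below are hypothetical, chosen to mirror the shape of this incident.

```python
# Sketch: walk a service's dependency graph and report every failing
# node, so a low-level fault (e.g. DNS) is surfaced by name rather
# than only as a user-facing symptom. The graph here is hypothetical.
DEPENDENCIES = {
    "storefront": ["api-gateway"],
    "api-gateway": ["dynamodb-endpoint"],
    "dynamodb-endpoint": ["dns-resolution"],
    "dns-resolution": [],
}

def failing_chain(service, probe, deps=DEPENDENCIES):
    """Return the failing nodes reachable from `service`."""
    failed, stack, seen = [], [service], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if not probe(node):
            failed.append(node)
        stack.extend(deps.get(node, []))
    return failed

# Simulate the DNS layer failing: the alert names the root dependency.
print(failing_chain("storefront", probe=lambda n: n != "dns-resolution"))
# -> ['dns-resolution']
```

An alert that says “DNS resolution for the DynamoDB endpoint is failing” shortens diagnosis considerably compared with “the storefront is down.”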


Impact on cloud market sentiment

Despite the outage, AWS’s market share and investor sentiment remained stable (Amazon’s shares rose modestly). But customer conversations are shifting: cloud SLAs, regional redundancy, and shared-responsibility models are under review.

The event may give competitors (Azure, Google Cloud) opportunities to highlight resilience differences. Cloud contracts, service credits, and audit rights may become more common negotiation items.



Key takeaways

The AWS outage of October 2025 serves as a wake-up call: even the largest cloud providers can fail, and when they do, the ripple effect touches millions. For businesses, the cloud has huge benefits, but also systemic risks.

Diversify regions and vendors, build redundancy, test fail-over, communicate clearly, and monitor deeply. The internet’s infrastructure depends on a few players, and resilience must be built, not assumed.


Does your organization rely on a single cloud provider or region, and would you consider switching to a multi-cloud or cross-region strategy after seeing this outage? Share your thoughts.


This slideshow was made with AI assistance and human editing.
