
AWS reveals it broke itself by exceeding OS thread limits, sysadmins weren’t familiar with some workarounds


steven36

First solution: run on bigger servers to reduce chatter in the Kinesis fleet

 


 

Amazon Web Services has revealed that adding capacity to an already complex system was the reason its US-EAST-1 region took an unplanned and rather inconvenient break last week.

 

The short version of the story is that the company’s Kinesis service, which is used directly by customers and underpins other parts of AWS’ own operations, added more capacity. Servers in the Kinesis fleet need to communicate with each other, and to do so create new threads for each of the other servers in the front-end fleet. AWS says there are “many thousands of servers” involved and that when new servers are added it can take up to an hour for news of additions to reach the entire fleet.
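AWS hasn't published its code, but the scaling behaviour it describes (each front-end server holding one communication thread per other server in the fleet) can be modelled in a few lines; all names and numbers here are illustrative:

```python
def threads_per_server(fleet_size: int) -> int:
    """Each front-end server keeps one communication thread per *other*
    server in the fleet, so per-server thread count grows linearly with
    fleet size (a model of the behaviour described, not AWS's code)."""
    return fleet_size - 1

# Adding capacity raises the thread count on every existing server at once:
for fleet in (5_000, 6_000, 7_000):
    print(f"{fleet} servers -> {threads_per_server(fleet)} threads each")
```

The point of the model: growing the fleet doesn't just add load to the new servers, it pushes every existing server closer to whatever thread ceiling the OS enforces.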

 

Adding capacity therefore “caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration.”
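The post doesn't name the exact limit that was hit, but on a Linux host the usual caps on thread creation can be inspected like this (a Linux-specific sketch, not AWS's actual configuration):

```python
import resource

# RLIMIT_NPROC caps processes *and* threads per user on Linux.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"per-user limit: soft={soft} hard={hard}")

# The kernel also enforces a system-wide cap on total threads.
with open("/proc/sys/kernel/threads-max") as f:
    print("system-wide thread cap:", f.read().strip())
```

Once a process runs into any of these, thread creation simply fails, which is consistent with a whole fleet falling over at once after crossing the same configured ceiling.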

 

AWS figured that out, but also learned that fixing the problem meant rebooting all of Kinesis.

 

But it was only possible to bring “a few hundred” servers back at a time, and as we’ve seen above Kinesis uses “many thousands of servers”.

 

Which explains why recovery from the outage was slow.

 

The whole sad story is explained in much greater detail in this AWS post, which also explains how the company plans to avoid such incidents in the future.

 

Plan one: use bigger servers.

 

“In the very short term, we will be moving to larger CPU and memory servers, reducing the total number of servers and, hence, threads required by each server to communicate across the fleet,” the post says, explaining that doing so “will provide significant headroom in thread count used as the total threads each server must maintain is directly proportional to the number of servers in the fleet.”
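The arithmetic behind that fix follows directly: since each server's thread count tracks the size of the fleet, consolidating onto fewer, larger boxes cuts the per-server thread load. With made-up numbers (AWS hasn't disclosed its fleet size beyond "many thousands"):

```python
def fleet_after_upsizing(servers: int, capacity_multiplier: int) -> int:
    """Same aggregate capacity on fewer, bigger servers (illustrative)."""
    return -(-servers // capacity_multiplier)  # ceiling division

current = 10_000  # hypothetical fleet size
for mult in (2, 4):
    new = fleet_after_upsizing(current, mult)
    print(f"{mult}x capacity: {current} -> {new} servers, "
          f"{current - 1} -> {new - 1} threads per server")
```

Doubling per-server capacity halves the fleet and, with it, roughly halves the threads every server must maintain, which is the "significant headroom" the post refers to.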

 

The company also plans new “fine-grained alarming for thread consumption in the service” and plans “an increase in thread count limits in our operating system configuration, which we believe will give us significantly more threads per server and give us significant additional safety margin there as well.”
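What "fine-grained alarming for thread consumption" might look like in the simplest case: compare the host's live task count against the kernel cap and fire when a threshold is crossed (a Linux-only sketch; AWS's monitoring internals aren't public):

```python
def thread_usage_fraction() -> float:
    """Fraction of the kernel-wide thread cap currently in use."""
    with open("/proc/sys/kernel/threads-max") as f:
        cap = int(f.read())
    # The fourth field of /proc/loadavg is "running/total" kernel tasks.
    with open("/proc/loadavg") as f:
        total_tasks = int(f.read().split()[3].split("/")[1])
    return total_tasks / cap

frac = thread_usage_fraction()
print(f"thread consumption: {frac:.1%} of system cap")
if frac > 0.8:  # illustrative alarm threshold
    print("ALARM: approaching OS thread limit")
```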

 

Also on the agenda: isolating in-demand services like CloudFront onto dedicated Kinesis servers.

Dashboard dashed by dependencies

The TIFU!-like post also outlines why Amazon's dashboards offered only scanty info about the incident: they, too, depend on a service that depends on Kinesis.

 

AWS has built a dependency-lite way to get info to the Service Health Dashboard it uses as a public status page. The post says it worked as expected, but “we encountered several delays during the earlier part of the event … as it is a more manual and less familiar tool for our support operators.”

 

The cloud therefore used the Personal Health Dashboard, visible to impacted customers only.

 

The post ends with an apology: “While we are proud of our long track record of availability with Amazon Kinesis, we know how critical this service, and the other AWS services that were impacted, are to our customers, their applications and end users, and their businesses.”

 

“We will do everything we can to learn from this event and use it to improve our availability even further.”

 

Source
