
Google Looks to Speed Up the Internet


nsane.forums


The search giant proposes enhancements for the Web's TCP transport layer to reduce latency.

Google technicians want an overhaul of the Web's TCP (Transmission Control Protocol) transport layer and are suggesting ways to reduce latency and make the Web faster.

The company's "Make the Web Faster" team is making several recommendations to improve TCP speed, including increasing the TCP initial congestion window. In a blog post on Monday, team member Yuchung Cheng called TCP "the workhorse of the Internet," designed to deliver Web content and operate over a range of network types. Web browsers, he said, typically open up parallel TCP connections ahead of making actual requests. "This strategy overcomes inherent TCP limitations but results in high latency in many situations and is not scalable," he said. "Our research shows that the key to reducing latency is saving round trips. We're experimenting with several improvements to TCP."

On the initial congestion window, Cheng explained: "The amount of data sent at the beginning of a TCP connection is currently three packets, implying three round trips to deliver a tiny, 15K-sized content. Our experiments indicate that IW10 [an initial congestion window of 10 packets] reduces the network latency of Web transfers by over 10 percent." Google also wants the initial timeout reduced from three seconds to one second. "An RTT [round-trip time] of three seconds was appropriate a couple of decades ago, but today's Internet requires a much smaller timeout," Cheng said.
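Cheng's arithmetic can be checked with a toy slow-start model. The sketch below is mine, not Google's: it assumes no losses, a congestion window that doubles every round trip, and 1,460-byte segments, and the function name is made up for illustration.

```python
import math

def round_trips(payload_bytes, initial_window, mss=1460):
    """Round trips needed to deliver a payload, assuming classic
    slow start (window doubles each RTT) and no packet loss."""
    packets = math.ceil(payload_bytes / mss)
    cwnd, sent, trips = initial_window, 0, 0
    while sent < packets:
        sent += cwnd   # one window's worth of packets per round trip
        cwnd *= 2      # slow start doubles the window each RTT
        trips += 1
    return trips
```

Under this model, a 15K response takes three round trips with an initial window of 3 packets but only two with IW10; the absolute saving grows with the RTT.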

Google's suggestions, said IDC analyst Al Hilwa, "appear to be well-researched recommendations and if implemented broadly will yield significant improvements in practically everyone's network performance and latency. The issue is that the capability has to be broadly implemented to achieve the desired performance gains. Of course new TCP/IP stacks would work with the old ones as they would now, but when two sides of a connection have the improvements, the benefits should surface."

Google also is encouraging use of the Google-developed TCP Fast Open protocol, which reduces application network latency, and proportional rate reduction (PRR) for TCP. "Packet losses indicate the network is in disorder or is congested. PRR, a new loss recovery algorithm, retransmits smoothly to recover losses during network congestion. The algorithm is faster than the current mechanism by adjusting the transmission rate according to the degree of losses. PRR is now part of the Linux kernel and is in the process of becoming part of the TCP standard," Cheng said.
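On Linux, TCP Fast Open can be enabled per listening socket with a socket option. A minimal sketch follows; the helper name is mine, and the `TCP_FASTOPEN` constant only exists where the platform supports it, hence the guard.

```python
import socket

def make_tfo_listener(port):
    """Create a listening TCP socket with TCP Fast Open enabled
    where the platform supports it (e.g. recent Linux kernels)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if hasattr(socket, "TCP_FASTOPEN"):
        # The option value is the queue length for pending TFO requests.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
    s.bind(("127.0.0.1", port))
    s.listen(8)
    return s
```

With TFO enabled, a client holding a valid Fast Open cookie can deliver request data in the SYN itself, saving a round trip on repeat connections.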

Also, Google is developing algorithms to recover faster on "noisy" mobile networks, said Cheng.

Google's TCP work is open source and disseminated through the Linux kernel, IETF standards proposals, and research publications to encourage industry involvement, Cheng noted.

View: Original Article

Link to comment
Share on other sites



Google works on Internet standards with TCP proposals, SPDY standardization


As part of Google's continuing quest to dole out Web pages ever more quickly, the search giant has proposed a number of changes to Transmission Control Protocol (TCP), the ubiquitous Internet protocol used to reliably deliver HTTP and HTTPS data (and much more besides) over the 'net.

Google's focus is on reducing latency between client machines and servers, and in particular on reducing the number of round trips (client to server and back, or vice versa) required. When data is sent over a TCP connection, its receipt must be acknowledged by the receiving end. The sending end can only send a certain number of packets before it must wait for an acknowledgement. The time taken to receive an acknowledgement is governed by the round-trip time (RTT). On high-bandwidth, high-latency connections, clients and servers can end up spending most of their time waiting for acknowledgements rather than sending packets.
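The effect described above is the classic window/RTT throughput bound: no matter how fast the link is, a sender cannot push more than one window of data per round trip. A quick back-of-the-envelope sketch (the function name is mine):

```python
def window_limited_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput when limited by the
    acknowledgement window: one window per round trip."""
    return window_bytes * 8 / rtt_seconds

# A 64 KiB window over a 100 ms RTT caps out around 5.2 Mbit/s,
# even on a gigabit link.
cap = window_limited_throughput_bps(65536, 0.1)
```

This is why reducing round trips (or growing the window faster) matters more than raw bandwidth for short Web transfers.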

When a new connection is made, a computer may initially send three packets before acknowledgement is required. Google wants to increase this to ten. With ten packets, a browser can typically deliver an entire HTTP request to a server before it has to stop and wait for a reply.

TCP connections require a certain amount of negotiation between client and server, requiring a round trip, before data can be sent. Google is proposing to modify TCP so that some data can be sent during that negotiation, so that the server will have it on-hand already, and can start processing it straight away.
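This modification became TCP Fast Open. On Linux, a client can request it by passing `MSG_FASTOPEN` to `sendto` on an unconnected TCP socket, which carries the payload in the SYN. The sketch below is mine (hypothetical helper name); it falls back to an ordinary connect where the flag is unavailable, and returns `None` on failure.

```python
import socket

def tfo_request(payload, addr):
    """Attempt a TCP Fast Open request (Linux); fall back to a
    normal connect elsewhere. Returns the reply bytes, or None
    if the connection fails."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        if hasattr(socket, "MSG_FASTOPEN"):
            # sendto on an unconnected TCP socket with MSG_FASTOPEN
            # performs the connect and puts the payload in the SYN.
            s.sendto(payload, socket.MSG_FASTOPEN, addr)
        else:
            s.connect(addr)
            s.sendall(payload)
        return s.recv(4096)
    except OSError:
        return None
    finally:
        s.close()
```

Note that the kernel only sends data in the SYN once it holds a Fast Open cookie from a previous handshake with that server; the first connection pays the usual round trip.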

TCP waits a predetermined time (the RTO, or retransmission timeout) for acknowledgements to arrive. If the RTO expires, unacknowledged packets are assumed lost and retransmitted. This ensures that, if data is lost in transmission, the sender is never left waiting for an acknowledgement that will never arrive. The timeout value varies according to network conditions and the RTT, with a default of 3 seconds. Google wants to reduce this default to 1 second, so that if data has been lost, neither end has to wait as long before having another go.
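Because the RTO doubles after each expiry (exponential backoff), the initial value compounds quickly when several retransmissions are needed. A small illustration, with a function name of my own choosing (real stacks also refine the RTO from measured RTTs, which this ignores):

```python
def total_wait(initial_rto, losses):
    """Total time spent waiting on expired timers before the
    (losses + 1)-th transmission attempt, with the RTO doubling
    after each loss."""
    return sum(initial_rto * 2 ** i for i in range(losses))

# Three consecutive losses: 3 + 6 + 12 = 21 s with a 3 s initial RTO,
# versus 1 + 2 + 4 = 7 s with a 1 s initial RTO.
```

So the proposed 3-second-to-1-second change cuts not just the first stall but every subsequent backoff step too.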

Finally, Google wants to use a new algorithm to adjust how TCP connections react to packet loss. Packet loss can indicate networks that are congested, and TCP reacts by reducing the rate at which data is sent when this congestion is detected. The company claims that the algorithms currently used to respond to this packet loss can exact too great a penalty, making connections slow down too much and for too long, and that its new algorithm is better.
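The new algorithm is proportional rate reduction (later specified in RFC 6937). Its core idea is to pace sending during loss recovery so that, by the end of recovery, the amount sent converges on the target window (`ssthresh`), in proportion to the data the receiver reports as delivered. Below is a simplified sketch of that proportional step only, using the RFC's variable names; it omits the slow-start-like branch the RFC uses when the pipe drains below `ssthresh`.

```python
import math

def prr_sndcnt(prr_delivered, prr_out, ssthresh, recover_fs):
    """How many new packets may be sent at this point in recovery.
    prr_delivered: bytes/packets delivered since recovery began
    prr_out:       packets sent since recovery began
    ssthresh:      target congestion window after recovery
    recover_fs:    flight size when recovery began"""
    return max(0, math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out)
```

For example, with 20 packets in flight at the start of recovery and a target of 10, roughly one packet may be sent for every two delivered, so transmission shrinks smoothly instead of halting and restarting.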

In addition to these proposed changes, Google is also suggesting other modifications, especially to make TCP recover better on mobile networks.

Changing TCP is not to be taken lightly. The protocol is already suffering due to buffer bloat undermining its built-in handling of network congestion. While Google's proposed changes are well-intentioned and might improve network performance, they come with the risk that an overlooked problem or a bad interaction with other traffic could cause widespread damage to the Internet.

The proposed changes to TCP to reduce latencies and start sending data sooner are a continuation of previous work Google has done to try to make Web serving, in particular, faster. The company has previously proposed other modifications to protocols such as SSL to similarly accelerate data transmission.

More far-reaching than these SSL tweaks is Google's proposed alternative to the HTTP protocol that underpins the Web: SPDY.

Initially, SPDY was a proprietary Google protocol implemented only in Google's Chrome browser. That's changing, however. Amazon's Silk browser includes SPDY support, and Firefox 11 will include preliminary SPDY support. Partially motivated by SPDY's uptake, the IETF's HTTPbis Working Group—the team of industry experts tasked with maintaining and developing the HTTP specification—is considering the development of a new specification, HTTP/2.0, with the goal of improving the performance of HTTP connections. The working group will solicit suggestions from the industry, and with two, soon to be three implementations already, SPDY is likely to be well-placed among those suggestions.

View: Original Article



This is great :)

I don't know what the average RTT on a mobile network is, but I don't expect it to be over a second. Actually, I just looked up some figures, and in general the ping times seem to be between 100 and 500 ms, so an initial timeout of 1 second should suffice.

As mentioned in the article, this really helps when your packet gets lost somewhere along the way (which happens especially often on mobile connections); whereas it currently takes 3 seconds for your phone's browser to re-send the packet, it will 'only' take 1 second once their suggested change is implemented.

Their other suggestion of increasing the initial congestion window is also a great idea. Again, it's especially helpful for mobile devices, which will greatly benefit from the fact that in most cases one fewer round trip (saving anywhere from 200 ms up to 1,000 ms) is required to get the necessary data.



Archived

This topic is now archived and is closed to further replies.
