How 1500 bytes became the MTU of the internet



10BASE ethernet card CC BY-SA 4.0 - Dmitry Nosachev

 

Ethernet is everywhere; tens of thousands of hardware vendors speak and implement it. Yet almost every Ethernet link has one number in common, the MTU:

$ ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

The MTU (Maximum Transmission Unit) states how big a single packet can be. Generally speaking, when you are talking to devices on your own LAN the MTU is around 1500 bytes, and the internet runs almost universally on 1500 as well. This does not mean, however, that these link layer technologies can't transmit bigger packets.
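As a back-of-the-envelope sketch of what fits inside one such packet (assuming IPv4 without options; the header sizes below are the standard minimums):

```python
# Maximum application payload that fits in one 1500-byte IP packet,
# assuming IPv4 with no options and no IP fragmentation.
MTU = 1500
IPV4_HEADER = 20   # bytes, without options
UDP_HEADER = 8     # bytes
TCP_HEADER = 20    # bytes, without options (this result is the TCP MSS)

max_udp_payload = MTU - IPV4_HEADER - UDP_HEADER
max_tcp_payload = MTU - IPV4_HEADER - TCP_HEADER

print(max_udp_payload)  # 1472
print(max_tcp_payload)  # 1460
```

This is why a `ping` with a 1472-byte payload is the biggest one that fits in a single unfragmented packet on a 1500-MTU link.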

 

For example, 802.11 (better known as WiFi) has an MTU of 2304 bytes, and if your network is using FDDI then you have an MTU of around 4352 bytes. Ethernet itself has the concept of “jumbo frames”, where the MTU can be set up to 9000 bytes (on NICs, switches, and routers that support it).

 

However, almost none of this matters on the internet. Since the backbone of the internet is now mostly made up of Ethernet links, the de facto maximum packet size is unofficially set to 1500 bytes, to avoid packets being fragmented along the path.

 

On the face of it, 1500 is a weird number; we would normally expect constants in computing to be built around mathematical constants, like powers of 2. 1500, however, fits none of those.

 

So where did 1500 come from, and why are we still using it?

A brief history of ethernet

Ethernet’s first major break into the world came in the form of 10BASE-2 (cheapernet) and 10BASE-5 (thicknet), the numbers indicating roughly how many hundreds of meters a single network segment could span.

 

These standards both ran over a coax cable shared between all machines in the segment (you could join segments together using repeaters).

 

The choice of physical cable presents a challenge: many data transmission standards use two kinds of signals, a data signal and a clock signal.

 

clock and data signal

 

The purpose of the clock signal is to tell the other side when the data signal line has been updated; it typically does this by toggling between on and off each time the data line advances to its next state. The receiving end watches the clock signal, and on each transition reads the state of the data signal to get the next bit.

 

This only works if you have signal lines to spare. Coax-based Ethernet does not, since it has only a single line: the center conductor of the coax cable. On top of that, this line is shared with other systems, so continuously transmitting a so-called carrier signal is not possible, since other systems may want to transmit.

 

To overcome this limitation, Ethernet starts every transmission with a short training sequence (or preamble) of 01010101.... This lets the other systems on the network measure the rate at which the data signal is changing and adjust their own internal clocks, so they don’t drift out of sync while reading and miss bits of the transmission.

 

ethernet preamble to train a PLL

 

Or, if you were looking at it on the wire directly (in this case, a 64 byte ping packet):

 

on-wire signal capture of a 64 byte ping packet

 

In this case, a Phase-Locked Loop (PLL) is used to produce a matching clock signal based on the training one. Once the training signal stops after 56 bits, the PLL continues in sync with the transmitter’s clock, and the network card can use it as a clock to read the bits on the wire without drifting out of sync with the transmitter.
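As a concrete illustration, the training sequence can be sketched in a few lines (a sketch only; 0x55 and 0xD5 are the standard Ethernet preamble and start-frame-delimiter bytes, transmitted least-significant bit first):

```python
# The Ethernet training sequence: 7 bytes of 0x55 (alternating bits),
# then the start-frame delimiter (SFD), whose final two 1-bits mark
# the point where real frame data begins.
PREAMBLE = bytes([0x55] * 7)  # 0b01010101, sent least-significant bit first
SFD = 0xD5                    # 0b11010101: same rhythm, but ends with 1,1

bits = "".join(f"{byte:08b}"[::-1] for byte in PREAMBLE + bytes([SFD]))
print(bits[:56])  # 56 alternating bits for the receiver's PLL to lock onto
print(bits[56:])  # 10101011 -> the trailing "11" breaks the pattern
```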

 

The problem is that PLLs were not so great back in 1988 [when people were deploying 10BASE5], so you could not go on for too long after the training signal without the recovered clock drifting away from what it was trained on at the beginning. If the recipient did desynchronize, the packet would have to be retransmitted, taking up more time on the line shared between all the computers on the segment.

 

The engineers at the time picked 1500 bytes, or 12,000 bits, as the best “safe” value.
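At 10 Mbit/s, that choice translates directly into how long the recovered clock has to stay in lock after only a 56-bit training burst; a quick sketch of the arithmetic:

```python
# How long a recovered clock must stay in sync for one maximum-size
# frame on 10 Mbit/s Ethernet.
MTU_BITS = 1500 * 8        # 12000 bits of payload
BITS_PER_MICROSECOND = 10  # 10 Mbit/s = 10 bits per microsecond

frame_time_us = MTU_BITS // BITS_PER_MICROSECOND
print(frame_time_us)  # 1200 microseconds (1.2 ms) of drift-free reading
```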

 


Post-publishing edit: after publishing this, another email surfaced, thanks to @yeled.

 

The email is from one of Ethernet's creators, who also points out that at the time 1500 reduced the amount of memory an Ethernet card needed in order to buffer a single packet; since this made cards cheaper, it likely helped adoption.

In retrospect, a longer maximum might have been better, but if it increased the cost of NICs during the early days it may have prevented the widespread acceptance of Ethernet, so I’m not really concerned.


 

Since then, various other transmission systems have come and gone, but the lowest common MTU among them has remained Ethernet's 1500 bytes. Sending packets larger than the lowest MTU on a path results in either IP fragmentation or the need to do path MTU discovery, both of which have their own sets of problems. Some large OS vendors have at times even dropped the default MTU lower still.
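To illustrate the IP fragmentation case: an IPv4 router splitting an oversized datagram must cut the payload on 8-byte boundaries, so one large datagram becomes several on-wire packets, each paying for its own header. A minimal sketch, assuming IPv4 with a 20-byte header and no options:

```python
# Sketch of IPv4 fragmentation of one datagram over a 1500-byte MTU link.
# Every fragment except the last must carry a multiple of 8 payload bytes.
MTU = 1500
IPV4_HEADER = 20

def fragment_sizes(payload_len: int) -> list[int]:
    per_fragment = (MTU - IPV4_HEADER) // 8 * 8  # 1480 bytes per fragment
    sizes = []
    while payload_len > 0:
        chunk = min(per_fragment, payload_len)
        sizes.append(chunk)
        payload_len -= chunk
    return sizes

# A 4000-byte payload becomes three packets, each with its own IP header:
print(fragment_sizes(4000))  # [1480, 1480, 1040]
```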

The efficiency factor

So now we know that the internet’s MTU is capped at 1500 mostly due to the poor PLLs of the 1980s. How bad is this for the efficiency of the internet?

 

AMS-IX breakdown of ethernet frame sizes

 

If we look at data from a major internet exchange point (AMS-IX), we see that at least 20% of packets transiting the exchange are at the maximum size. We can also see the total traffic of the exchange's LAN:

 

AMS-IX Traffic graph

 

If you combine these two graphs, you get something that roughly looks like this: an estimate of how much traffic each packet size bucket carries:

 

AMS-IX traffic by packet size bracket

 

Or, if we look at just the traffic that all of those Ethernet preambles and headers consume, we get the same graph at a different scale:

 

AMS-IX traffic by packet size overhead

 

This shows a great deal of bandwidth being spent on headers for the largest packet class. Since at peak traffic the biggest packet bucket shows around 246 GBit/s of overhead, we can estimate that had we all adopted jumbo frames while we had the chance, this overhead would be only around 41 GBit/s.
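That roughly 6x reduction falls out of the fixed per-frame cost of Ethernet framing. A back-of-the-envelope sketch (the constants below are the standard Ethernet framing overheads, not figures taken from AMS-IX):

```python
# Fixed per-frame Ethernet cost, independent of payload size:
PREAMBLE_SFD = 8      # preamble + start-frame delimiter
ETH_HEADER = 14       # dst MAC + src MAC + ethertype
FCS = 4               # frame check sequence
INTERPACKET_GAP = 12  # mandatory idle time, counted in byte times

overhead = PREAMBLE_SFD + ETH_HEADER + FCS + INTERPACKET_GAP
print(overhead)  # 38 bytes paid per frame, no matter how small the payload

# Moving the same payload in 9000-byte jumbo frames needs 6x fewer frames,
# so the total framing overhead drops by the same factor:
print(round(246 / (9000 / 1500)))  # 41 (GBit/s), matching the estimate above
```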

 

But I think at this point the ship has sailed for doing this on the wider internet. While some internet transport carriers operate at 9000 MTU, the vast majority don’t, and changing the internet’s mind collectively has been shown time and time again to be prohibitively difficult.

 


If you have more context on the history of 1500 bytes, please email it to [email protected]. Sadly, the manuals, mailing list posts, and other context around this are disappearing fast, without a trace.


 

If you liked this kind of stuff, you may like the rest of the blog, even if it is generally geared more towards modern-day abuses of standards :). If you want to stay up to date with what I do next, you can use my blog’s RSS feed or follow me on Twitter.

 

Until next time!

 
