Most boxes (Linux too) default to an MTU of 1500. If you have GBit Ethernet, you might want to try jumbo frames for higher transfer volumes;
during connection establishment, the hop with the smallest MTU is responsible for setting the MTU for that IP connection.
(I think that's called Path MTU Discovery)
so once the TCP connection is established, the maximum MTU for the path (the least common denominator, i.e. the smallest MTU along the way) should be discovered correctly.
if some device along the way is configured wrong, fragmentation won't work and packets get dropped.
That's what some guy at IBM told me 15 years ago, anyway :)
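As a rough illustration of the "smallest MTU wins" idea (my arithmetic, not anything from the thread): the MSS each end advertises at TCP connection setup is just its MTU minus the IP and TCP headers, and the connection uses the smaller of the two.

```python
# Back-of-the-envelope sketch (not actual kernel logic): the MSS a host
# advertises is its interface MTU minus the IPv4 (20 B) and TCP (20 B)
# header sizes; the connection then uses the smaller of the two sides.
IP_HEADER = 20
TCP_HEADER = 20

def advertised_mss(mtu):
    return mtu - IP_HEADER - TCP_HEADER

def effective_mss(mtu_a, mtu_b):
    return min(advertised_mss(mtu_a), advertised_mss(mtu_b))

print(advertised_mss(1500))       # standard Ethernet MTU -> 1460
print(effective_mss(9000, 1500))  # jumbo-frame host vs. a default one -> 1460
```

Note this only covers the two endpoints; a smaller MTU somewhere in the middle is what Path MTU Discovery itself has to find.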
Of course, what we know now is that every new network technology eventually gets superseded by the next generation of Ethernet, and everything migrates to IP (remember when it was called TCP/IP even if you weren't using TCP?).
From what I remember, ATM addressed latency and many other networking performance metrics in three major ways.
1. A small fixed-size cell of data instead of a (potentially) large, variable-sized packet. The idea here was that by having small fixed-length cells, the transmission delay at any node in the network would be small and predictable.
Reducing the transmit delay means less time a cell has to spend sitting in a queue at a node.
2. ATM cells use a hop-by-hop "tagging" mechanism to make forwarding decisions in each network node. In the header of each cell are two numbers, the virtual path identifier, or VPI, and the virtual channel or circuit identifier, or VCI. When a node receives a cell, it does a simple table lookup based on the VPI/VCI. The table entry says what to do with the cell: drop it, send it to the CPU, forward it using this new VPI/VCI pair, etc. The idea here is to make the lookup process and forwarding decision take less time and be more predictable.
3. ATM was one of the first technologies to really flesh out and enforce specific classes of service (CoS) in a network. The ATM standards include CoS profiles such as variable bit rate (VBR), constant bit rate (CBR), available bit rate (ABR), and unspecified bit rate (UBR). More importantly, quality of service (QoS) in an ATM network is designed to be end-to-end, meaning an ATM network can really truly honestly guarantee throughput, latency, and other performance requirements.
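To put rough numbers on point 1 (my arithmetic; 155.52 Mbps is the OC-3 line rate ATM was commonly run over):

```python
# Serialization (transmit) delay = frame size / line rate.
def tx_delay_us(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps * 1e6

OC3 = 155.52e6                   # bps
cell = tx_delay_us(53, OC3)      # one 53-byte ATM cell
packet = tx_delay_us(1500, OC3)  # a full-size IP packet
print(f"cell: {cell:.2f} us, packet: {packet:.2f} us")
```

A cell clears the wire in under 3 microseconds versus nearly 80 for a full-size packet, so the worst case a cell can get stuck behind is tiny and constant.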
There might be more to ATM about latency, but those are the features that I remember being the big hitters.
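The hop-by-hop lookup in point 2 can be sketched like this (table contents are made up for illustration; a real switch does this in hardware):

```python
# Toy ATM switching table: (in_port, vpi, vci) -> action.
table = {
    (1, 0, 32): ("forward", 2, 0, 48),  # out port 2, rewrite VPI/VCI to 0/48
    (1, 0, 33): ("to_cpu",),            # e.g. signalling traffic
}

def switch_cell(in_port, vpi, vci):
    entry = table.get((in_port, vpi, vci))
    if entry is None:
        return ("drop",)  # unknown circuit: drop the cell
    return entry

print(switch_cell(1, 0, 32))  # -> ('forward', 2, 0, 48)
print(switch_cell(1, 0, 99))  # -> ('drop',)
```

The point is that a single exact-match lookup, rather than a longest-prefix search, decides the cell's fate.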
Heh. I remember when PSInet was telling everyone that they could get you on and off of their network at any two points in the world in one hop. Turns out that they just connected all of their POPs together with ATM, so all you saw was the IP hops, not the ATM hops.
Speaking of IP ... I realized this week that no one calls it TCP/IP anymore. That's a good thing, because it was kind of stupid to call it TCP/IP even if you weren't using TCP.
So I gather the problem with ATM is that it doesn't scale well.
Small fixed-size packets are wonderful until you have a lot of data to move; then you spend all your time on per-packet overhead.
So you make the packets bigger, but I imagine everybody on the network has to agree on the new packet size at the same time, which means you can't change it in flight.
And if you could, well, then you have the same problems as what we have now.
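That overhead ("cell tax") is easy to quantify (my arithmetic; this assumes AAL5, the adaptation layer IP was usually carried over, with its 8-byte trailer). Even before padding, 5 header bytes per 53-byte cell puts a roughly 9.4% floor on the tax:

```python
import math

# Carrying an IP packet over ATM with AAL5: payload plus an 8-byte
# trailer, padded up to a whole number of 48-byte cell payloads, and
# each cell adds a 5-byte header on the wire.
def cells_needed(packet_bytes):
    return math.ceil((packet_bytes + 8) / 48)

def wire_bytes(packet_bytes):
    return cells_needed(packet_bytes) * 53

for size in (40, 1500):
    eff = size / wire_bytes(size)
    print(f"{size} B packet -> {cells_needed(size)} cells, {eff:.0%} efficient")
```

A full 1500-byte packet comes out around 88% efficient, and small packets (like a bare TCP ACK) fare considerably worse.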
Has anyone ever used Zebra or Quagga as a router in a high-traffic environment?
I'm interested in knowing how its performance stacks up against a "real" router.
The idea of being able to use a device on which usable amounts of memory are affordable (enough memory to hold the full BGP table in a Cisco is quite expensive) and whose components are off-the-shelf replaceable is appealing.
I suspect the sustained bandwidth will be good but the individual packet latency will be somewhat lacking. That's based on an educated guess. Any real-world experience would be interesting to hear about.
Forwarding performance will basically be that of Linux/FreeBSD/etc., since Quagga is just shoving routes into the kernel's routing table. There are some decent write-ups detailing the performance of various cards, etc.
If you're not shy of spending money, Vyatta may be a more polished
off-the-shelf option worth looking at that is still oodles cheaper than
something similar from brand C.
If your definition of high-traffic is more than a few Mpps, then
you're probably looking for something with ASICs and a TCAM.
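For a feel of the per-packet work the software path has to do (a toy longest-prefix-match sketch; real kernels use tries and the ASIC/TCAM hardware resolves it in a single lookup; interface names here are made up):

```python
import ipaddress

# Toy FIB: destination prefix -> outgoing interface (names invented).
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "eth0",
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    # The most specific (longest prefix) matching route wins.
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.1.2.3"))  # -> eth2 (most specific /16)
print(lookup("8.8.8.8"))   # -> eth0 (default route)
```

Doing this in software for every packet, millions of times per second, is exactly where the pps ceiling comes from.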
I suspect that with a 1 Gbps card the Linux kernel can do throughput in the hundreds of megabits with no problem, but if you start analyzing the latency of each individual hop, someone's going to point to it and tell us our network is broken.
And unfortunately, in the managed hosting business, when a customer says that something is wrong with your network, you are guilty until proven innocent.
Throwing newer hardware at it may improve the numbers, but he was able to achieve 8 Gbps forwarding performance at 1518-byte frames, which is significantly better than what you'd see, for example, on a Cisco 7206VXR w/NPE-G2. The ultimate bottleneck is PPS, which was dismal compared to the same platform (700,000 pps vs. 2,000,000).
If you turn on any features, such as stateful connection tracking which you're probably using in your firewall example, and possibly even dot1q tagging as you mentioned earlier, the numbers may take a dive.
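The pps-vs-bandwidth relationship behind those numbers is just multiplication (my arithmetic): 700k frames/s is over 8 Gbps at 1518-byte frames but under 400 Mbps at 64-byte minimum frames, which is why pps, not Gbps, is the number that matters:

```python
# Throughput is just pps times frame size; the forwarding limit is pps.
def gbps(pps, frame_bytes):
    return pps * frame_bytes * 8 / 1e9

print(f"{gbps(700_000, 1518):.2f} Gbps at 1518 B frames")  # ~8.50
print(f"{gbps(700_000, 64):.2f} Gbps at 64 B frames")      # ~0.36
```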
I think we're going to set one up on a non-critical link and see how it runs.