[#] Sun Aug 15 2010 11:38:23 EDT from IGnatius T Foobar @ Uncensored

Subject: Re:


All good stuff, but in the case I looked at on Friday, these people doubled the MTU *and* set the DF bit. And then called in with a "network problem."

[#] Sun Aug 15 2010 18:10:36 EDT from Nite*Star @ Uncensored

Subject: Re:


Isn't the default MTU for broadband 1492? Or something like that?

[#] Mon Aug 16 2010 09:59:59 EDT from IGnatius T Foobar @ Uncensored

Subject: Re:


For a Windows system it's 1500 for local networks and 576 for external networks.

[#] Mon Aug 16 2010 11:33:48 EDT from dothebart @ Uncensored

Subject: Re:


Most boxes (Linux too) default to 1500. If you have gigabit Ethernet, you might want to try jumbo frames for higher transfer volumes.

During connection establishment, the hop with the smallest MTU is responsible for setting the MTU for that IP connection.

(I think that's called Path MTU Discovery.)

So once the TCP connection is established, the maximum MTU for the path (the least common denominator, i.e. the smallest MTU along the way) should be discovered correctly.

If some device along the way is misconfigured, fragmentation won't work and packets get dropped.
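If you want to see what Path MTU Discovery has actually worked out for a destination, here's a minimal sketch, assuming Linux (the IP_MTU_DISCOVER / IP_MTU option numbers are hard-coded Linux values, and the target host and port are just placeholders):

import socket
import time

# Linux-specific IP socket options (hard-coded because the socket module
# doesn't expose them on every platform/version).
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2     # always set DF; never fragment locally
IP_MTU = 14            # read back the kernel's current path MTU estimate

def probe_path_mtu(host, port=33434, probe_size=1472):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((host, port))
    try:
        # A probe bigger than the path MTU either fails right away with
        # EMSGSIZE (if the kernel already knows the path MTU) or gets
        # dropped by a router, which answers with ICMP "fragmentation
        # needed"; the kernel caches that for later sends.
        s.send(b"\x00" * probe_size)
    except OSError:
        pass
    time.sleep(1)          # give any ICMP replies time to arrive
    mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    s.close()
    return mtu

print(probe_path_mtu("192.0.2.1"))     # hypothetical destination

If a broken device on the path filters those ICMP messages, the kernel never learns the real path MTU and big packets just vanish, which is exactly the failure described above.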



[#] Mon Aug 16 2010 16:02:10 EDT from Ford II @ Uncensored


I expect if everybody set their MTU to 1, a lot of MTU configuration problems would just go away.

[#] Tue Aug 17 2010 12:39:26 EDT from Spell Binder @ Uncensored


Might as well switch to ATM at that point.

[#] Wed Aug 18 2010 18:06:00 EDT from IGnatius T Foobar @ Uncensored


ATM is, after all, the future. Voice, data, video ... it's all going to move to ATM.

That's what some guy at IBM told me 15 years ago, anyway :)

[#] Wed Aug 18 2010 18:14:36 EDT from Spell Binder @ Uncensored


I wouldn't be surprised if I said that. From a latency point of view, ATM still has some unique advantages. It's just all the cruft that comes with ATM that ruins the whole thing. :)
ATM Binder

[#] Wed Aug 18 2010 20:58:18 EDT from IGnatius T Foobar @ Uncensored


s/cruft/complexity

Of course, what we know now is that every new network technology is eventually superseded by the next generation of ethernet, and eventually everything migrates to IP (remember when it was called TCP/IP even if you weren't using TCP?).

[#] Thu Aug 19 2010 10:15:47 EDT from Ford II @ Uncensored


'Splain to me the low-latency genius of ATM. Speed, I find, is less of an issue than latency, at least for what I do.

[#] Thu Aug 19 2010 15:44:54 EDT from Spell Binder @ Uncensored


It's been a while, so bear with me here.

From what I remember, ATM addressed latency, and many other networking performance metrics, in three major ways.

1. A small fixed-size cell of data instead of a (potentially) large, variable-sized packet. The idea here was that by having small fixed-length cells, the transmission delay at any node in the network would be small and predictable. Reducing the transmit delay means less time a cell has to spend sitting in a queue at a node.
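To put rough numbers on that (my own back-of-the-envelope arithmetic, not something from the ATM specs): the time a frame occupies the wire is just its size over the line rate, so on an OC-3 link a cell and a full-size data packet look like this:

def serialization_delay_us(frame_bytes, link_bps):
    # Time the frame occupies the link, in microseconds.
    return frame_bytes * 8 / link_bps * 1e6

OC3 = 155_520_000   # ~155 Mbps line rate

print(serialization_delay_us(53, OC3))     # ATM cell:         ~2.7 us
print(serialization_delay_us(1500, OC3))   # 1500-byte packet: ~77 us

So a voice cell that gets stuck behind one big data packet waits tens of microseconds instead of a couple, and with fixed 53-byte cells that worst case never gets any worse.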

2. ATM cells use a hop-by-hop "tagging" mechanism to make forwarding decisions in each network node. In the header of each cell are two numbers, the virtual path identifier, or VPI, and the virtual channel or circuit identifier, or VCI. When a node receives a cell, it does a simple table lookup based on the VPI/VCI. The table entry says what to do with the cell: drop it, send it to the CPU, forward it using this new VPI/VCI pair, etc. The idea here is to make the lookup process and forwarding decision take less time and be more predictable.
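A toy model of that lookup, purely to illustrate the idea (the ports and VPI/VCI values below are made up, and real switches do this in hardware):

# Forwarding table keyed on (input port, VPI, VCI).  Each entry says what
# to do with the cell and which labels it leaves with.
FORWARDING_TABLE = {
    (1, 0, 100): ("forward", (2, 0, 200)),   # rewrite 0/100 -> 0/200, out port 2
    (1, 5, 32):  ("cpu", None),              # signalling traffic -> control plane
}

def switch_cell(in_port, vpi, vci, payload):
    action, out_labels = FORWARDING_TABLE.get((in_port, vpi, vci), ("drop", None))
    if action == "forward":
        out_port, new_vpi, new_vci = out_labels
        # One constant-time exact-match lookup decides the cell's fate:
        # no longest-prefix match, no variable-length header to parse.
        return ("forward", out_port, new_vpi, new_vci, payload)
    return (action, payload)

print(switch_cell(1, 0, 100, b"48 bytes of payload"))
print(switch_cell(3, 9, 9, b"unknown circuit"))   # gets dropped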

3. ATM was one of the first technologies to really flesh out and enforce specific classes of service (CoS) in a network. The ATM standards include CoS profiles such as variable bit rate (VBR), constant bit rate (CBR), available bit rate (ABR), and unspecified bit rate (UBR). More importantly, quality of service (QoS) in an ATM network is designed to be end-to-end, meaning an ATM network can really truly honestly guarantee throughput, latency, and other performance requirements.
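For what it's worth, the policing side of those contracts is usually described as the Generic Cell Rate Algorithm (a leaky bucket). This is a rough sketch from memory of its "virtual scheduling" form, so treat the parameter names as approximate:

class GCRA:
    # increment: ideal spacing between cells (1 / contracted cell rate)
    # limit:     how far ahead of schedule a cell may arrive (burst tolerance)
    def __init__(self, increment, limit):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0          # theoretical arrival time of the next cell

    def conforming(self, arrival_time):
        if arrival_time < self.tat - self.limit:
            return False        # cell arrived too early: violates the contract
        # Conforming cell: push the schedule forward by one cell interval.
        self.tat = max(arrival_time, self.tat) + self.increment
        return True

# CBR contract of one cell every 10 microseconds with 2 microseconds of slack.
policer = GCRA(increment=10e-6, limit=2e-6)
for t in (0, 5e-6, 20e-6, 30e-6):
    print(t, policer.conforming(t))   # the 5 us cell is the only violator

Non-conforming cells get tagged or dropped at the edge, which is how the network can keep those end-to-end guarantees honest.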

There might be more to ATM about latency, but those are the features that I remember being the big hitters.
ATM Binder

[#] Thu Aug 19 2010 23:41:42 EDT from IGnatius T Foobar @ Uncensored


Heh.  I remember when PSInet was telling everyone that they could get you on and off of their network at any two points in the world in one hop.  Turns out that they just connected all of their POPs together with ATM, so all you saw was the IP hops, not the ATM hops.

Speaking of IP ... I realized this week that no one calls it TCP/IP anymore.  That's a good thing, because it was kind of stupid to call it TCP/IP even if you weren't using TCP.



[#] Thu Aug 26 2010 15:33:29 EDT from Ford II @ Uncensored


UDP over TCP/IP?


So I gather the problem with ATM is that it doesn't scale well.
Small fixed-size packets are wonderful until you have a lot of data to move; then you waste all your time on packet overhead.
So you make the packets bigger, but I imagine everybody on the network has to agree on the new packet size at the same time, which means you can't change it in flight.
And if you could, well, then you'd have the same problems we have now.
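The overhead part is easy to put a number on, for what it's worth. Every 53-byte cell carries 48 bytes of payload, and with AAL5 the last cell of a frame also carries an 8-byte trailer plus padding, so the "cell tax" works out roughly like this (my own arithmetic, ignoring the finer points of AAL5 framing):

import math

CELL = 53           # bytes on the wire per ATM cell
PAYLOAD = 48        # usable bytes per cell
AAL5_TRAILER = 8    # trailer carried in the final cell of each frame

def cell_tax(frame_bytes):
    # Fraction of wire bytes that is not user data for one AAL5 frame.
    cells = math.ceil((frame_bytes + AAL5_TRAILER) / PAYLOAD)
    return 1 - frame_bytes / (cells * CELL)

for size in (64, 576, 1500, 9000):
    print(size, round(cell_tax(size) * 100, 1), "%")   # 39.6, 16.4, 11.6, 9.7

Even on an arbitrarily large transfer the overhead never drops below the 5-byte header's ~9.4%, which is the cell tax people always complained about.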

[#] Thu Aug 26 2010 17:40:35 EDT from IGnatius T Foobar @ Uncensored


Actually, it was originally supposed to scale hugely, eventually replacing the worldwide telephone network. Over-complexity and a changing digital landscape eventually kept that from happening.

[#] Mon Dec 20 2010 15:00:15 EST from IGnatius T Foobar @ Uncensored



Has anyone ever used Zebra or Quagga as a router in a high-traffic environment?
I'm interested in knowing how its performance stacks up against a "real" router.

The idea of being able to use a device on which usable amounts of memory are affordable (enough memory to hold the full BGP table on a Cisco is quite expensive) and on which the components are off-the-shelf replaceable is appealing.

I suspect the sustained bandwidth will be good but the individual packet latency will be somewhat lacking. That's based on an educated guess. Any real-world experience would be interesting to hear about.

[#] Thu Dec 23 2010 00:20:52 EST from Ahff Rowe @ Uncensored


I think the underlying question is, what is the routing performance of
Linux/FreeBSD/etc., since Quagga is just shoving routes into the kernel's
routing table. There are some decent write-ups detailing performance
of various cards, etc.

If you're not shy of spending money, Vyatta may be a more polished
off-the-shelf option worth looking at that is still oodles cheaper than
something similar from brand C.

If your definition of high-traffic is more than a few Mpps, then
you're probably looking for something with ASICs and a TCAM.

[#] Sat Dec 25 2010 19:33:16 EST from IGnatius T Foobar @ Uncensored


I have a Linux firewall running in our datacenter with a single Ethernet connection that performs iptables services for about 70 different networks (using an 802.1Q tagged trunk back to the switch, with each subscriber on their own VLAN). Performance is incredibly good. I've seen it do upwards of 80 Mbps sustained throughput without breaking a sweat. And that's with a 100 Mbps Ethernet card.

I suspect that with a 1 Gbps card, the Linux kernel can do throughput in the hundreds of megabits with no problem, but if you start analyzing the latency of each individual hop, someone's going to point to it and tell us our network is broken.

And unfortunately, in the managed hosting business, when a customer says that something is wrong with your network, you are guilty until proven innocent.

[#] Mon Dec 27 2010 13:49:29 EST from Ahff Rowe @ Uncensored


Take a look at: http://docs.rodecker.nl/10-GE_Routing_on_Linux.pdf

Throwing newer hardware at it may improve the numbers, but he was able to achieve 8 Gbps of forwarding performance at 1518-byte frames, which is significantly better than what you'd see, for example, on a Cisco 7206VXR w/NPE-G2. The ultimate bottleneck is PPS, which was dismal compared to that platform (700,000 pps vs. 2,000,000).
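The pps-versus-Mbps relationship is just arithmetic, which is why the frame size in a benchmark matters so much (my own numbers below, not taken from that paper):

def throughput_gbps(pps, frame_bytes):
    # Wire throughput implied by a packets-per-second figure; the extra
    # 20 bytes per frame account for Ethernet preamble + inter-frame gap.
    return pps * (frame_bytes + 20) * 8 / 1e9

print(throughput_gbps(700_000, 1518))   # ~8.6 Gbps at full-size frames
print(throughput_gbps(700_000, 64))     # ~0.47 Gbps at minimum-size frames

Same box, same pps, wildly different Mbps, and the small-frame case is where the ASIC/TCAM platforms pull away from a software router.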

If you turn on any features, such as stateful connection tracking which you're probably using in your firewall example, and possibly even dot1q tagging as you mentioned earlier, the numbers may take a dive.

[#] Mon Dec 27 2010 15:02:05 EST from Ford II @ Uncensored


Doesn't that one machine serve as a rather large single point of failure if it's responsible for so many networks?

[#] Mon Dec 27 2010 15:03:20 EST from IGnatius T Foobar @ Uncensored


That's the kind of example I was looking for. Thanks for posting it. It does seem to confirm what I suspected -- the Linux router will provide exceptionally good throughput in terms of Mbps per dollar, at the expense of per-packet latency.

I think we're going to set one up on a non-critical link and see how it runs.
