[quagga-users 12135] Re: State of the Quagga?

tommy at the-coffeeshop.dk tommy at the-coffeeshop.dk
Thu Mar 3 11:00:41 GMT 2011

Citat af Alexis Rosen <quagga-users at alexis.users.panix.com>:

> Yeah. BTW, 6GB is more than enough unless you're running a   
> *ridiculous* number of full BGP views (you're not, that's what   
> route servers are for) or doing a lot of other non-routing things on  
>  the box.

True. I just didn't have a quote on hand with less than 12 GB,  
especially in these VM days where ~100 GB is the usual size on most  
of my new hardware.

> Make sure you can return the cards if you can't get them working the  
>  way you want. This isn't rocket science, but it's also apparently   
> not often done. YMMV and you should definitely lab this stuff up   
> before you put it into production.

Yup, makes sense (especially the test-it-first/YMMV part :)

> Yes. The 82576s also do multiqueue but they don't have as much   
> hardware assist as the 82599s. There are apparently non-Intel   
> adapters that also work well, but I don't know much about them.

Okay. I mostly buy Intel NICs, both because they usually work well  
and because they're always in stock at my local pusher.

> Depending on what kernel you wind up with, look into IRQ affinity,   
> that's a big problem if you get it wrong. You want to hardcode   
> things, not run irqbalanced, or at least that's what worked in our   
> setup. And did I already say YMMV?? I don't know if that's still   
> true if you're running a kernel with RFS or the transmit flow   
> patches (not even sure if those are in a released kernel yet).

Okay. I've previously just run irqbalance, since I usually presume  
that software purpose-built for a given task will do a better job of  
it than me :)
Do you provision a core per NIC (plus maybe one or two for the general  
routing stuff?)
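For reference, the hardcoded-affinity approach described above can be sketched roughly like this. The interface name (eth0) and the `eth0-TxRx-<n>` IRQ naming are assumptions that vary by driver (igb/ixgbe style here); check /proc/interrupts on the actual box:

```shell
#!/bin/sh
# Sketch: pin each NIC queue's IRQ to its own core, instead of
# letting irqbalance move them around.

cpu_mask() {
    # Hex affinity mask selecting a single CPU number.
    printf '%x' $((1 << $1))
}

pin_queue() {
    # pin_queue <irq-number> <cpu-number>  (needs root)
    echo "$(cpu_mask "$2")" > "/proc/irq/$1/smp_affinity"
}

# Example (commented out here): queues 0-3 of eth0 on cores 0-3.
# for cpu in 0 1 2 3; do
#     irq=$(awk -v n="eth0-TxRx-$cpu" \
#         '$NF == n { sub(":", "", $1); print $1 }' /proc/interrupts)
#     pin_queue "$irq" "$cpu"
# done
```

As to the core-per-NIC question: with multiqueue cards the usual unit is a core per *queue* rather than per NIC, leaving a core or two free for bgpd/zebra and the rest of userland, but that split is exactly the kind of thing to verify in the lab first.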

> BTW, doing anything stateful is a big hit to performance. If we want  
>  NAT on anything, we do it on a separate box. Firewall stuff can be   
> equally threatening.

Yup. Planning to do packet pushing (and eBGP) on one set of boxes and  
then the internal firewall stuff on internal pairs (running VRRP on  
the server-local IPs, etc).
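For illustration, the server-facing VRRP side of such an internal pair might look like this under keepalived (one common implementation; interface, VRID, priority and address are all placeholders):

```
vrrp_instance FW_INSIDE {
    state BACKUP            # let priority decide who is master
    interface eth1          # server-facing interface (placeholder)
    virtual_router_id 51    # must match on both boxes in the pair
    priority 100            # higher value wins the election
    advert_int 1
    virtual_ipaddress {
        10.0.0.1/24         # the server-local gateway IP
    }
}
```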

> Um, what? Do you mean on a cisco? Because I can pretty much   
> guarantee you couldn't route a million pps ten years ago on Intel   
> hardware.

Yeah, no. Not ten years, but around 2003 or so, ISTR people reporting  
that they were now able to do a million PPS using Linux and Intel GE  
NICs. It may just be my memory, though.

> I'm thinking of looking at it again when their next gen hits. There   
> is potential for a big price/performance win there, if they can tune  
>  the software right. It's mostly a NUMA thing. And it should be   
> doable, I think. The big question is, can you consistently assign   
> packets to the right CPU so you don't have a ridiculous amount of   
> cache-line bouncing on a many-cores server? We'll see.

Yeah, that does indeed sound interesting.
That's a thing that always annoyed me a bit about the whole OSS router  
stuff - there's always a potential leap around the corner :)
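For what it's worth, the receive-side flow steering mentioned above (RFS) landed in mainline around 2.6.35, and is just a couple of procfs/sysfs knobs; the values and the rx-0 queue path below are illustrative, not tuned (config fragment, requires root):

```shell
# Global table sizing the number of concurrently tracked flows.
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries

# Per-RX-queue flow count; repeat for each rx-N queue of the NIC.
echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```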

>> I wonder if maybe people are a bit afraid to speak of the known   
>> limits of their network in public?
> Maybe. Lame. It won't help.

No, not at all.
Security by obscurity has never helped much, but I still get the  
occasional request from a customer to disable all ICMP to their  
server, because it's more secure if "the hacker" can't ping it and  
thus won't think it exists (even though it responds to http...)
