[quagga-users 12582] Re: [quagga-dev 8955] Strange logs

Sami Halabi sodynet1 at gmail.com
Sun Nov 27 22:52:00 GMT 2011


Hi,
I don't have the "interrupts" file or the "irq" directory in my /proc.
Actually, /proc wasn't mounted at all; I mounted it manually:
mount -t procfs procfs /proc
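
(Note: /proc/interrupts and /proc/irq appear to be Linux-specific; FreeBSD's
procfs does not provide them even when mounted.  On FreeBSD the rough
equivalents should be vmstat(8) to inspect interrupts and cpuset(1) to bind
them, for example:

vmstat -i            # list interrupt sources with counts and rates
cpuset -l 2 -x 260   # bind IRQ 260 to CPU 2

Here 260 is a hypothetical IRQ number; the real ones appear as "irqNNN:"
entries in the vmstat -i output.)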

Sami


On Sun, Nov 27, 2011 at 10:26 PM, Stig <stig at ubnt.com> wrote:

> On Sun, Nov 27, 2011 at 2:39 AM, Sami Halabi <sodynet1 at gmail.com> wrote:
> > Hi Stig,
> > how do I do that?
>
> Well, my home router isn't the best example since it doesn't have
> multi-queue nics, but if you look at the nics in /proc/interrupts I
> have:
>
> root@v600:/home/vyatta# cat /proc/interrupts
>           CPU0       CPU1       CPU2       CPU3
>  43:          1          1        619          1   PCI-MSI-edge      eth1
>  44:          3     104898          1          2   PCI-MSI-edge      eth2
>  45:          0          0          1        576   PCI-MSI-edge      eth3
>  46:        575          1          1          1   PCI-MSI-edge      eth4
>  47:          7          9      16016          7   PCI-MSI-edge      eth5
>
>
> You can see the interrupts are spread across the 4 CPUs.  To see the
> smp_affinity for each interrupt:
>
> root@v600:/home/vyatta# cat /proc/irq/43/smp_affinity
> 4
> root@v600:/home/vyatta# cat /proc/irq/44/smp_affinity
> 2
> root@v600:/home/vyatta# cat /proc/irq/45/smp_affinity
> 8
> root@v600:/home/vyatta# cat /proc/irq/46/smp_affinity
> 1
> root@v600:/home/vyatta# cat /proc/irq/47/smp_affinity
> 4
>
> To change the SMP affinity, echo whatever CPU bitmask you want into
> the same proc entry.
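> The value is a hex CPU bitmask (1 = CPU0, 2 = CPU1, 4 = CPU2,
> 8 = CPU3), which is why the values above line up with the CPUs taking
> the interrupt counts.  For example, to move eth1's interrupt (IRQ 43
> above) from CPU2 to CPU3:
>
> echo 8 > /proc/irq/43/smp_affinity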
>
> stig
>
>
>
>
> > Thanks in advance,
> > Sami
> >
> >
> > On Sun, Nov 27, 2011 at 1:33 AM, Stig <stig at ubnt.com> wrote:
> >>
> >> If you have a multi-core system, you could change the interrupt
> >> smp-affinity so that some core/thread is not used for handling network
> >> interrupts.
> >>
> >> On Sat, Nov 26, 2011 at 11:35 AM, Sami Halabi <sodynet1 at gmail.com> wrote:
> >> > Hi,
> >> > Lately I've been seeing strange messages in my logs.  I run
> >> > FreeBSD 8.1-RELEASE with Quagga 0.99.17:
> >> > Nov 26 19:28:50 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:28:57 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 21313ms (cpu time 1ms)
> >> > Nov 26 19:28:57 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 16.449412 seconds
> >> > Nov 26 19:29:12 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:12 bgpServer watchquagga[1394]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:15 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 17260ms (cpu time 0ms)
> >> > Nov 26 19:29:15 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 13.707425 seconds
> >> > Nov 26 19:29:15 bgpServer bgpd[79850]: SLOW THREAD: task bgp_read (10675d0) ran for 17095ms (cpu time 5ms)
> >> > Nov 26 19:29:15 bgpServer watchquagga[1394]: bgpd: slow echo response finally received after 13.700411 seconds
> >> > Nov 26 19:29:30 bgpServer watchquagga[1394]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:30 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:34 bgpServer bgpd[79850]: SLOW THREAD: task bgp_read (10675d0) ran for 16281ms (cpu time 0ms)
> >> > Nov 26 19:29:34 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 16276ms (cpu time 0ms)
> >> > Nov 26 19:29:34 bgpServer watchquagga[1394]: bgpd: slow echo response finally received after 13.585467 seconds
> >> > Nov 26 19:29:34 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 13.496174 seconds
> >> > Nov 26 19:29:49 bgpServer watchquagga[1394]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:49 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:54 bgpServer watchquagga[1394]: Forked background command [pid 79871]: /usr/local/etc/quagga/service bgpd stop
> >> > Nov 26 19:30:00 bgpServer watchquagga[1394]: bgpd state -> down : read returned EOF
> >> > Nov 26 19:30:00 bgpServer watchquagga[1394]: Forked background command [pid 79875]: /usr/local/etc/quagga/service zebra restart
> >> > Nov 26 19:30:01 bgpServer watchquagga[1394]: Forked background command [pid 79879]: /usr/local/etc/quagga/service bgpd start
> >> > Nov 26 19:30:01 bgpServer watchquagga[1394]: Phased global restart has completed.
> >> > Nov 26 19:30:01 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 24850ms (cpu time 0ms)
> >> > Nov 26 19:30:01 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 21.684171 seconds
> >> > Nov 26 19:30:01 bgpServer bgpd[79885]: BGPd 0.99.17 starting: vty@2605, bgp@<all>:179
> >> > Nov 26 19:30:05 bgpServer watchquagga[1394]: bgpd state -> up : connect succeeded
> >> > Nov 26 19:30:06 bgpServer watchquagga[1394]: zebra state -> up : echo response received after 0.065503 seconds
> >> > Nov 26 19:30:21 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:30:35 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 24767ms (cpu time 43ms)
> >> > Nov 26 19:30:35 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 24.512305 seconds
> >> > Nov 26 19:30:40 bgpServer watchquagga[1394]: zebra state -> up : echo response received after 0.004419 seconds
> >> >
> >> > Lately I pass 2-3 GB of traffic daily; maybe that's related?  Moreover,
> >> > I have a dual-port 10G card based on the Intel 82599EB, connected to a
> >> > 10G switch.
> >> > Why do these messages appear?  Does that mean something is wrong?
> >> > My /etc/sysctl.conf:
> >> > net.inet.flowtable.enable=0
> >> > net.inet.ip.fastforwarding=1
> >> > kern.ipc.somaxconn=8192
> >> > kern.ipc.shmmax=2147483648
> >> > kern.ipc.maxsockets=204800
> >> > kern.ipc.maxsockbuf=262144
> >> > kern.maxfiles=256000
> >> > kern.maxfilesperproc=230400
> >> > net.inet.ip.dummynet.pipe_slot_limit=1000
> >> > net.inet.ip.dummynet.io_fast=1
> >> > #10Gb sysctls
> >> > hw.intr_storm_threshold=9000
> >> > kern.ipc.nmbclusters=262144
> >> > kern.ipc.nmbjumbop=262144
> >> > dev.ix.0.rx_processing_limit=4096
> >> > dev.ix.1.rx_processing_limit=4096
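> >> >
> >> > (For what it's worth: to check whether the ix interrupt threads are
> >> > starving zebra/bgpd of CPU time, something like "top -SHP" should
> >> > show the interrupt threads and per-CPU load together.)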
> >> >
> >> > Thanks in advance,
> >> > --
> >> > Sami Halabi
> >> > Information Systems Engineer
> >> > NMS Projects Expert
> >> >
> >> > _______________________________________________
> >> > Quagga-dev mailing list
> >> > Quagga-dev at lists.quagga.net
> >> > http://lists.quagga.net/mailman/listinfo/quagga-dev
> >> >
> >> >
> >
> >
> >
> >
> > --
> > Sami Halabi
> > Information Systems Engineer
> > NMS Projects Expert
> >
>



-- 
Sami Halabi
Information Systems Engineer
NMS Projects Expert

