[quagga-dev 12102] Re: Advise on implementation

Olivier Dugeon olivier.dugeon at orange.com
Wed Mar 4 16:57:47 GMT 2015


Yes, I agree that it is orthogonal to part of the problem. Let me
clarify my view.
IMHO, having a single central daemon supervising the dedicated protocol
is safer than delegating this to several processes, i.e. Zebra
controlling both ZAPI and the TED_API, versus Zebra with ZAPI plus
another daemon for the TED_API. The decision is now impacted once we
add to the equation the constraint of not overloading the existing ZAPI
communication channel. So, if we go for sockets, this constraint
implies a separate process, while a lighter mechanism could allow the
use of the same process.


On 04/03/2015 16:56, Lou Berger wrote:
> Olivier,
>      While I was suggesting another daemon, the IPC mechanism used can be
> a common one (obviously on a different socket). So, from my view the
> discussion on separate/which daemon is orthogonal from the discussion on
> (new/revised) common IPC approach.
> Lou
> On 3/4/2015 9:29 AM, Olivier Dugeon wrote:
>> Hello Lou, David,
>> Yes, I am considering this kind of common TED/topology database
>> shared between the various daemons.
>> In fact, for PCE and BGP-LS, we just need access to the OSPF and
>> IS-IS TEDs, and for PCE, the possibility to announce that the router
>> is PCE-capable. I also envisage using sockets for that purpose.
>> But if we want to avoid using ZAPI, this implies writing another
>> daemon, e.g. dbls (database link state), just to implement what
>> zebra does for ZAPI, i.e. listen on a socket for new data coming
>> from one daemon and push the incoming data to the listener daemons.
>> So, yes, it reuses part of ZAPI / OSPF-API, but it implies another
>> daemon in the Quagga framework that must be launched before the
>> routing daemons and just after the zebra one.
>> That's why I'm thinking about other ways to exchange data and
>> listing several possibilities, including putting the TED database in
>> shared memory, or using shared memory to communicate, to avoid
>> adding an extra daemon between processes.
>> If I try to summarize the exchange: shm and pthreads are modern
>> tools, but they require strict coding and verification to be sure
>> that there are no problems or bad side effects, in particular in
>> case of a crash. Thus, it is preferable to go with a socket-based
>> mechanism: even if it is seen as an old system, it is the safer one.
>> Now, can we extend ZAPI for PCE/BGP-LS? These use cases will not use
>> VPNs, which heavily solicit ZAPI, and in the VPN case, which demands
>> ZAPI performance, we don't use BGP-LS/PCE/... The scenarios are
>> exclusive.
>> Or do I need to implement a second bus, similar to ZAPI/OSPF-API,
>> for that purpose?
>> And, just to help with the decision: for me, this kind of
>> communication is internal to Quagga and is not meant to be exposed
>> outside (as OSPF-API is). If an external API is needed, e.g. for
>> monitoring purposes or whatever you have in mind, we would need to
>> implement such an interface on top of this new one.
>> Thanks for all your valuable comments.
>> Regards,
>> Olivier
>> On 03/03/2015 23:12, Lou Berger wrote:
>>> On 3/3/2015 4:50 PM, David Lamparter wrote:
>>>>>>> On Tue, Mar 3, 2015 at 7:19 AM, Greg Troxel <gdt at ir.bbn.com> wrote:
>>>>>>>> My basic opinion is that shm interfaces end up being painful for
>>>>>>>> various reasons, including portability but also leaving shm segments
>>>>>>>> around.  Since this is control plane, and sockets are fast anyway, I
>>>>>>>> don't see any reason to get wrapped up in shm.
>>>>>> On 03/03/2015 16:31, Dinesh Dutt wrote:
>>>>>>> +1, plus the issue that the various daemons may no longer be isolated
>>>>>>> from each other due to this shm,
>>>>> On 3/3/2015 12:37 PM, Olivier Dugeon wrote:
>>>>>> What do you mean by 'may no longer be isolated from each
>>>>>> other'? They are already linked to the zebra daemon.
>>>>>> In order to implement BGP-LS, we need some communication between
>>>>>> OSPF, IS-IS and BGP. Of course, this will break the isolation
>>>>>> principle, but that is true whatever the solution is.
>>>> I agree with Dinesh & Greg.  The problem is, with SHM, it is far easier
>>>> for invalid data to cause a crash in all the daemons from a bug in just
>>>> one of the daemons.  The problem isn't locking or anything, it's invalid
>>>> (pointer) data inside the SHM data.  It's possible to take precautions,
>>>> but ultimately you need to validate all incoming data and thus lose much
>>>> of the zerocopy advantage.
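David's concern above can be made concrete: data placed in a shared
segment must carry offsets rather than pointers, and every offset must
be bounds-checked before use, which is exactly the copy/check step that
erodes the zero-copy advantage. A minimal C sketch, with layout and
names invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* A record in the shared segment refers to its name by offset, never
 * by pointer, since pointers are meaningless across processes (and a
 * corrupt pointer from a buggy writer would crash every reader). */
struct shm_node {
    uint32_t name_off;   /* offset of the node name within the segment */
    uint32_t name_len;
};

/* Validate the offset/length pair against the segment bounds before
 * touching the bytes: refuse corrupt records instead of crashing. */
const char *shm_get_name(const uint8_t *seg, size_t seg_len,
                         const struct shm_node *n)
{
    if (n->name_off > seg_len || n->name_len > seg_len - n->name_off)
        return NULL;
    return (const char *)(seg + n->name_off);
}
```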
>>> I too am not a big fan of SHM, unless you have something really simple
>>> like a double buffered scheme with single writer.  For cases where IPC
>>> is too expensive (e.g., between a TED and path computation) I think a
>>> scheme that provides a common way to bind additional functionality into
>>> a (e.g., TED) daemon makes more sense.
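Lou's "double buffered scheme with single writer" can be sketched as
follows. For brevity the two buffers are ordinary static memory here;
a real deployment would place them in a shm_open()/mmap() segment
shared between the daemons, and the record layout and names are
invented for illustration.

```c
#include <stdatomic.h>
#include <string.h>

/* Hypothetical TED snapshot record published by the single writer. */
struct ted_snapshot {
    unsigned version;
    char data[64];
};

static struct ted_snapshot buf[2];
static atomic_uint active;      /* index of the buffer readers may use */

/* Single writer: fill the inactive buffer, then flip the index, so
 * readers never observe a half-written snapshot. */
void ted_publish(const char *data, unsigned version)
{
    unsigned next = 1u - atomic_load(&active);
    buf[next].version = version;
    strncpy(buf[next].data, data, sizeof(buf[next].data) - 1);
    buf[next].data[sizeof(buf[next].data) - 1] = '\0';
    atomic_store(&active, next);        /* readers switch atomically */
}

/* Readers copy out of the active buffer; no pointers cross the
 * process boundary. */
struct ted_snapshot ted_read(void)
{
    return buf[atomic_load(&active)];
}
```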
>>>>> On Tue, Mar 03, 2015 at 02:23:59PM -0500, Lou Berger wrote:
>>>>>       Have you considered introducing a generic TED (topology database)
>>>>> daemon in parallel to zebra?  I think this would more closely match what
>>>>> other implementations have done and there are many advantages over
>>>>> direct protocol exchanges.
>>>>> And yes, I know this question/topic is orthogonal to the current
>>>>> question/thread on shm vs ipc/rpc.
>>>> This sounds interesting.  The direction of this, as I understand it, is
>>>> to split off new functionality into a separate daemon to reduce
>>>> congestion in zebra.
>>> exactly.  It makes a lot of sense to have a common topology
>>> repository/DB just like it does for normal routes, but there's little if
>>> any intersection/overlap with current zebra so why put it into the same
>>> daemon...
>>> Lou
>>>> Jumping from this to details regarding socket-based protocols, there are
>>>> also some design questions.  I'd like to use some pseudostandard
>>>> extensible encoding, allowing us to add fields in a structured manner
>>>> and profit from code generators for various languages.  Apache Avro is
>>>> unfortunately under the Apache License, leaving mostly protobuf.  What
>>>> are people's opinions on this?
>>>> (I'd very much like to use something common here, for ZAPI, OSPFAPI, and
>>>> any new APIs including linkstate stuff and structured access to
>>>> configuration.)
>>>> -David
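For comparison with a schema-based encoder such as protobuf, the
extensibility property David is after (old readers tolerating fields
they do not know) can be sketched with a minimal TLV framing in C.
Type numbers and helper names are invented, and a real wire format
would also fix the byte order (htons()); this is only an illustration
of the skip-unknown-fields idea.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Append one type/length/value triple; host byte order for brevity. */
size_t tlv_put(uint8_t *out, uint16_t type, const void *val,
               uint16_t len)
{
    memcpy(out, &type, 2);
    memcpy(out + 2, &len, 2);
    memcpy(out + 4, val, len);
    return 4 + (size_t)len;
}

/* Return the value of the first TLV of type `want`, skipping unknown
 * types, so new fields can be added without breaking old readers. */
const uint8_t *tlv_find(const uint8_t *buf, size_t buflen,
                        uint16_t want, uint16_t *len_out)
{
    size_t off = 0;
    while (off + 4 <= buflen) {
        uint16_t type, len;
        memcpy(&type, buf + off, 2);
        memcpy(&len, buf + off + 2, 2);
        if (off + 4 + len > buflen)
            break;                      /* truncated: refuse to read */
        if (type == want) {
            *len_out = len;
            return buf + off + 4;
        }
        off += 4 + len;                 /* unknown type: just skip it */
    }
    return NULL;
}
```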

More information about the Quagga-dev mailing list