[Sis-dtn] Proposed registry (and registry changes) to address BP operations in CCSDS.

Vint Cerf vint at google.com
Wed Aug 29 13:04:06 UTC 2018


two short thoughts:

1. it may be very useful to have some "standard" services for DTN. Not all
nodes necessarily implement them, but
if they do, they all work the same way. Think TELNET, FTP, SMTP, PING, TCP,
UDP, in the Internet context.

2. One imagines that store and forward nodes in a DTN network would be
expected to have standard services
needed to participate in the network as relays, including network
management services.

v


On Wed, Aug 29, 2018 at 8:16 AM, Scott, Keith L. <kscott at mitre.org> wrote:

> Comments inline below but what I think is the high-order question pulled
> up to here:
>
>
>
> So maybe the zero order issue is:
>
> Who’s going to use the information in the DTN_Network_Services elements of
> the Site and Aperture Registry and *for what*?  E.g.:
>
>    - Find out if a particular site supports DTN forwarding or not.
>    - Find out who to talk to in order to arrange to route through a
>    particular site.
>       - Will require a discussion of the site’s SABR contact plan, but
>       OK.  Note: contact plans are NOT candidates for inclusion in registries.
>    - Find out the CBHE Node numbers of nodes associated with the site so
>    I can set up my routing to them.
>       - I’ll still need all the convergence layer information that Leigh
>       asserts is somewhat transient and not-well-suited to a registry.
>    - Know whether I can ping a particular spacecraft or ground station
>    (or send AMS messages to it, or send files to it with CFDP).
>       - I personally think the idea of registering one level higher in
>       the stack, i.e. the message / file formats, is a bridge too far, and while
>       ping is a well-understood thing and a sometimes-useful diagnostic, it will
>       probably not be operationally very interesting.
>
>
>
> For each of the above activities that makes sense (not asserting all of
> them do, or that the list above is complete):
>
>    - Why is it useful for someone to do this (whether in reference to a
>    ground or space node)?[sort of gets into use cases maybe, but in a
>    backwards sort of way]
>    - Is it reasonable / feasible for someone to do this (or what are the
>    constraints, like very remote nodes will need to have this info updated
>    periodically rather than pull-on-demand)?
>       - Or are there other / better ways to accomplish the task, such as
>       subscribing to a multicast stream?  Caution: this assumes that multicast
>       streams are used, known, and distributed.
>    - What are the security implications of making this information
>    available (gets at Leigh’s email)?
>
>
>
>
>
>
>
> *From: *"Shames, Peter M (312B)" <Peter.M.Shames at jpl.nasa.gov>
> *Date: *Tuesday, August 28, 2018 at 5:03 PM
> *To: *Keith Scott <kscott at mitre.org>, Scott Burleigh <
> scott.c.burleigh at jpl.nasa.gov>, "sis-dtn at mailman.ccsds.org" <
> sis-dtn at mailman.ccsds.org>
> *Cc: *SANA Group <ssg at mailman.ccsds.org>
> *Subject: *Re: Proposed registry (and registry changes) to address BP
> operations in CCSDS.
>
>
>
> I think this whole discussion would benefit from having a few Use Cases
> (or maybe even more than a few).  We have the possibilities of completely
> terrestrial deployments, completely space deployments, and a mixture of the
> two.  We have fixed nodes, and orbiting nodes, and roving / human nodes,
> and nodes fixed (or orbiting) in some remote location.  We have nodes that
> may be streaming fountains of data and others that are demand only and
> others that use a client / server model.  And we currently have registries
> here on Earth, but we may well want to have cached registry information at
> other locations where there is a cluster of user (and server) nodes.  Right
> now we have no way of supporting anything like that, but we should, IMHO,
> be thinking about it.
>
>
>
> Aside from limited deployment situations where the nodes, and the services
> they provide, are "baked in" to the implementation, using registries to
> manage this makes sense to me.  The proposal is to use the existing, and
> planned, SANA registries to provide such a framework, to extend them as
> needed for DTN's particular requirements, and to think later (but not too
> much later) about how to expand this registry model out into the Solar
> System.
>
>
>
> My assumptions are the following, and pardon me if I do not get the DTN
> terminology exact:
>
>
>
>    1. DTN services live on host systems
>
> DTN services are provided by DTN Nodes and the applications that access
> them.  Those entities are often instantiated on a particular host … system
> (?) (in particular hardware) but could move between hosts (which would
> totally screw up my notion of registering CL addresses).  I admit that I
> tend to think along the lines of your definition, but Scott often attempts
> to educate me otherwise.
>
>    2. Host systems may be (fixed) service sites, or user sites, or a
>    combo of both
>
> I don’t think DTN has a real position on this – see above.  I sort of
> THINK the answer to what I think you’re asking is ‘yes’ but given that I
> think the binding between a DTN node and a particular piece of hardware is
> not necessarily fixed, the question might not make much sense.
>
>    3. Sites (which may be S/C, ground stations, mission sites, user
>    sites, or other) are owned and operated by some organization
>
> IMHO I think this is a definition of ‘site’ that should be managed by the
> site definition of the RMP.  Same for 4 and 5 below.
>
>    4. Sites have various identifiers, including location, ownership, and
>    services
>    5. Each organization that operates a site will have at least one Point
>    of Contact (PoC) for the services it provides
>    6. DTN Bundle Agents (BA) will be hosted at these Sites
>
> Again with the caveat of 1 above, DTN nodes will in many cases be
> instantiated at Sites.
>
>    7. Each BA will have one (or more) associated, unique, Bundle Node
>    Number, tied to the Site
>
> Bundle Node Number → CBHE Node Number, ok.
>
>    8. Each BA may offer one (or more) services, identified by CBHE
>    Service Numbers
>
> OK.
>
>    9. In addition to the AMS and CFDP application data transfer services
>    (message and file) there will need to be application data oriented services
>    that specify, in some way, the contents of these messages and files.
>
> Yeah, the missions/applications’ problems, not mine ☺
>
>
>
> All of this stuff could be registered, and most of the necessary
> registries exist already (items 3, 4, 5, 6) or enough exists that what is
> needed is to add the "DTN hooks".  Items 7 & 8 are DTN specific.  Which of
> these make sense in registries depends on how you see this information
> evolving.  If growth of the SSI is slow and implementation driven you can
> probably (continue to) get away with local tables.  As soon as the DTN
> grows significantly, has multiple implementations, and many different
> agencies involved, I argue that you will need some registries, and that
> those will need to be deployed "around the SSI" and locally cached for
> efficiency.
>
> I don’t think a registry for #6 exists yet, or even understand what such a
> registry would be unless it’s the proposed ‘DTN Network Services’ element
> of the Site Registry (containing a CBHE Node number; really a list of CBHE
> node numbers now that I think about it, since a DTN Node might register in
> multiple endpoints).
>
>
>
> It may be that some of these services are of the "well known" variety,
> similar to HTTP, SMTP, DNS, ARP, for others, like how we handle SLE now,
> you need to have a private agreement about address, port, and access
> credentials, all arranged by contacting the PoC for that service.  For the
> SLE services there is even the notion of a service agreement and possibly a
> cost.
>
> For a set of ‘well-known’ CBHE service numbers I’d tend to agree that
> those should be known, and MAYBE it makes sense to include which services
> are supported by a given DTN Node.
>
>
>
> I think including costs in such a registry is folly – costs change,
> exchange rates change.
>
>
>
>
>
> If all of the registry info was just in the SANA registries then I agree
> with Scott, you could burn a lot of time just finding out what services
> were available and how to access them.  Lots of round-trip query / response
> traffic, just like happens now in the Internet with DNS, ARP, etc.  But
> that is why I think in the future that we will need some way to distribute
> these registries, cache them "local" to clusters of users, and keep them in
> synch.  That could use some sort of pub/sub or multi-cast approach between
> registries.
>
>
>
> I hope this helps the dialogue.  Does anyone have a good set of Use Cases
> that we could leverage?
>
> I don’t think the whole panoply of nodes (fixed, orbiting, roving,
> streaming, sensor) really plays into the use cases.  If we just want to
> flesh out the ideas and compare strawmen, I’d say a small collection of
> mission control/mission science center nodes, a couple ground station
> nodes, and a pair of spacecraft nodes would suffice.
>
>
>
> I see the registries as being used primarily by people on the ground with
> access to them, with information relevant to a particular mission being
> copied to that mission before launch and updated as needed so the mission
> isn’t trying to query SANA (which I don’t think is the purpose, correct?)
> FWIW I didn’t interpret Scott’s discussion as being about remote things
> querying the registries for info.
>
>
>
>
>
>
>
> Cheers, Peter
>
>
>
>
>
> *From: *Keith Scott <kscott at mitre.org>
> *Date: *Tuesday, August 28, 2018 at 8:27 AM
> *To: *Scott Burleigh <Scott.C.Burleigh at jpl.nasa.gov>, "
> sis-dtn at mailman.ccsds.org" <sis-dtn at mailman.ccsds.org>
> *Cc: *Peter Shames <Peter.M.Shames at jpl.nasa.gov>, "SANA Steering Group
> (SSG)" <ssg at mailman.ccsds.org>
> *Subject: *Re: Proposed registry (and registry changes) to address BP
> operations in CCSDS.
>
>
>
> Interesting, I was interpreting ‘services’ as ‘CBHE service numbers’
> pretty literally.  Your first comment (about CBHE service numbers for data
> SOURCES) was not how I interpreted SEA’s comments; I perceived they were
> looking for the CBHE service #s that a node was … willing to receive on?
> (plus maybe a flag of ‘I’m willing to forward data’?)  Sort of like ‘what
> are the open (IP) ports on this machine’.  Maybe such a registry would only
> contain the ‘well-known’ CBHE services (things like echo (for those unwise
> enough to try to use it), AMS, CFDP, …) and not
> application/mission-specific services (which might use
> ‘private/experimental’ CBHE service # space anyway)?  I think my
> interpretation aligns more with your question below about *destinations*
> of data.  I think I’d leave the (application/mission-specific) data
> services out of the registry altogether (your notion of the available
> application data sources) and just address them as you suggest below.
>
>
>
> Peter – can you clarify?
>
>
>
>                                 --keith
>
>
>
>
>
> *From: *Scott Burleigh <Scott.C.Burleigh at jpl.nasa.gov>
> *Date: *Tuesday, August 28, 2018 at 11:13 AM
> *To: *Keith Scott <kscott at mitre.org>, "sis-dtn at mailman.ccsds.org" <
> sis-dtn at mailman.ccsds.org>
> *Cc: *"Shames, Peter M (312B)" <Peter.M.Shames at jpl.nasa.gov>, SANA Group <
> ssg at mailman.ccsds.org>
> *Subject: *RE: Proposed registry (and registry changes) to address BP
> operations in CCSDS.
>
>
>
> I think there’s a little bit of a conceptual disconnect here.
>
>
>
> As I understand it, the reason you would want to know what services – that
> is, what *sources* of data – are supported at a given BP node is that you
> want to obtain some of that data.  To do so, you would send a bundle,
> requesting the desired data, to the endpoint formed by the ID of the node
> and the ID of the service that is the data source, and the node would send
> the data back to you.
>
>
>
> But this sort of client/server data flow requires round-trip communication
> that can take a long time; it is innately non-delay-tolerant.  That is the
> central point we started with in 1998.  The sort of registry we are talking
> about here might be valuable, but I don’t think it has anything to do with
> DTN.
>
>
>
> The delay-tolerant way to obtain data is simply to receive it when it is
> generated by the source; to accomplish this, you join the corresponding
> multicast group to which the source node publishes the new data.  (And, in
> the long run, I think you pick up previously published data after the fact
> by joining persistent multicast groups that act like information-centric
> networking stores.)
>
>
>
> Multicast bundles have sources that are identified by node/service, but of
> course the sources know their own identities; no need for a registry.  The
> destinations of these bundles are “imc” endpoints identified by multicast
> group number and, as relevant, service number within multicast group.  So a
> registry of multicast groups would be a very helpful element of DTN
> infrastructure, but I wouldn’t expect a registry of node/service pairs – or
> even, really, a registry of nodes – to be of much utility.
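[As a concrete illustration of the two endpoint forms discussed above — unicast node/service pairs versus multicast group/service pairs — here is a minimal parsing sketch. The "ipn" scheme is the CBHE unicast form (RFC 6260); "imc" is the multicast form. The parser and its names are illustrative only, not a normative EID implementation.]

```python
# Minimal sketch of the two endpoint-ID forms discussed above:
#   ipn:<node>.<service>   -- unicast endpoint (CBHE node/service pair)
#   imc:<group>.<service>  -- multicast endpoint (group/service pair)
# Illustrative only; see RFC 6260 for the real ipn-scheme syntax.

def parse_eid(eid):
    """Split an ipn:/imc: endpoint ID into (scheme, first, second)."""
    scheme, sep, ssp = eid.partition(":")
    if not sep or scheme not in ("ipn", "imc"):
        raise ValueError("unsupported EID scheme: %r" % eid)
    a, sep, b = ssp.partition(".")
    if not sep:
        raise ValueError("malformed EID: %r" % eid)
    return scheme, int(a), int(b)

print(parse_eid("ipn:101.64"))  # unicast: node 101, service 64
print(parse_eid("imc:42.1"))    # multicast: group 42, service 1
```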
>
>
>
> What about knowing which *destinations* of data are operating at which BP
> nodes, do we need a registry for that?
>
>
>
> Certainly it is the case that non-multicast messages have sources and
> destinations that are identified by node/service.  But again the source
> endpoints are known by the sources themselves, and I am doubtful that any
> application is going to need a registry of the node/service pairs that
> identify potential destinations of non-multicast bundles.  The scalable and
> responsive way to provide that information, I think, is for the
> applications to manage it themselves.  E.g.:
>
> 1.      Node A sends a bundle saying “You can get data X from me” to
> multicast group Q.
>
> 2.      Node B, a member of multicast group Q, receives that bundle and
> sends a non-multicast bundle to node A (the source of the original
> multicast) saying “Great, please send me X, encrypted.”
>
> 3.      Node A receives that bundle, uses the public key of B to encrypt
> X, and sends encrypted X in a non-multicast bundle to B (the source of the
> request bundle).
>
> 4.      Node B receives that bundle and uses its private key to decrypt X.
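[The four steps above can be modeled end-to-end in a few lines. This is a toy sketch only: the Network and Node classes are an invented in-memory stand-in for BP multicast/unicast delivery, and encrypt_for/decrypt are marker functions standing in for real public-key cryptography.]

```python
# Toy model of the advertise/request/deliver exchange described above.
from collections import defaultdict

def encrypt_for(node_id, plaintext):
    # Stand-in for public-key encryption: tag the payload with the intended
    # recipient so only that recipient can "decrypt" it below.
    return ("enc", node_id, plaintext)

def decrypt(node_id, ciphertext):
    tag, recipient, plaintext = ciphertext
    assert tag == "enc" and recipient == node_id
    return plaintext

class Network:
    def __init__(self):
        self.groups = defaultdict(list)  # multicast group -> member nodes
        self.nodes = {}                  # node id -> Node

    def multicast(self, group, bundle):
        for member in self.groups[group]:
            member.deliver(bundle)

    def unicast(self, node_id, bundle):
        self.nodes[node_id].deliver(bundle)

class Node:
    def __init__(self, net, node_id):
        self.net, self.id = net, node_id
        net.nodes[node_id] = self
        self.data = {}

    def join(self, group):
        self.net.groups[group].append(self)

    def deliver(self, bundle):
        kind = bundle["kind"]
        if kind == "advert":     # step 2: ask the advertiser for the item
            self.net.unicast(bundle["src"], {"kind": "request",
                                             "src": self.id,
                                             "item": bundle["item"]})
        elif kind == "request":  # step 3: encrypt for the requester, reply
            payload = encrypt_for(bundle["src"], self.data[bundle["item"]])
            self.net.unicast(bundle["src"], {"kind": "data",
                                             "src": self.id,
                                             "payload": payload})
        elif kind == "data":     # step 4: decrypt with our own private key
            self.data["received"] = decrypt(self.id, bundle["payload"])

net = Network()
a, b = Node(net, "A"), Node(net, "B")
a.data["X"] = "the data"
b.join("Q")
# Step 1: A advertises X to multicast group Q -- no registry lookup needed.
net.multicast("Q", {"kind": "advert", "src": "A", "item": "X"})
print(b.data["received"])  # -> the data
```

Note that B never consults a registry: the advertisement itself carries the source endpoint, which is the point of the argument above.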
>
>
>
> We could have skipped step 1 by providing this information in a SANA
> registry; node B could have learned that X was at A by querying the
> registry.  But that would require another round-trip data exchange between
> node B and the registry; I am skeptical that there is an advantage.
>
>
>
> That said, I don’t object to creating the proposed SANA registries in the
> near term.  For the relatively small-scale and stable application
> structures we are likely to see over the next few years they may serve us
> well.
>
>
>
> Scott
>
>
>
> *From:* Scott, Keith L. <kscott at mitre.org>
> *Sent:* Tuesday, August 28, 2018 5:39 AM
> *To:* sis-dtn at mailman.ccsds.org
> *Cc:* Burleigh, Scott C (312B) <Scott.C.Burleigh at jpl.nasa.gov>; Shames,
> Peter M (312B) <Peter.M.Shames at jpl.nasa.gov>; SANA Group <
> ssg at mailman.ccsds.org>
> *Subject:* Proposed registry (and registry changes) to address BP
> operations in CCSDS.
>
>
>
> Greetings,
>
>
>
> Peter Shames noted that while there are registries for SANA CBHE Node and
> Service numbers that will allow us to deconflict node EIDs and have a
> uniform interpretation of the service numbers, there is nothing that says
> WHICH services are running at WHICH node, OR anything that associates BP
> Nodes (in our case identified by CBHE Node IDs) and the services they’re
> running with ‘Sites’ (using the Service and Site Aperture Registry
> definition) such as ground stations, spacecraft, etc.  This kind of
> information, while not technically required to make BP work, is part of the
> administration/operation of the network, and would be expected to change
> infrequently (and so at least feasible to maintain in a registry).
>
>
>
> The desire is for the DTN WG to identify changes / augmentations to the
> SANA registries to provide the information above.  The attached Registry
> Management Policy deck includes an overview of the Service Site & Aperture
> structure on slides 21—23.
>
>
>
> I propose the following strawman for discussion:
>
>
>
> Change the Network Services under Site Service Info on slide 22 to be
> ‘mayInclude (0..*)’ – i.e., allow for possibly multiple network services.  This would
> allow sites that have multiple BP routers, which I’ll admit might be
> unusual but certainly possible.
>
>
>
> Define a BP_Network_Service element (registry) that is a logical subclass
> of the Network_Services listed on slide 22 as:
>
> ·         *CBHE Node #,* the CBHE Node Number of the BP Node, hyperlinked
> to the node number allocation range in the CBHE Node Number Registry
>
> ·         *POC*: a link to the appropriate SANA registry entry for who to
> talk to about connecting with this node (e.g. routing through it) [Could be
> a person or an organization?]
>
> ·         *List of CBHE Service #s*: A list of CBHE service numbers that
> can be expected to be running on the node (e.g. CFDP) – hyperlinked to
> their corresponding entries in the CBHE Service # registry.
>
> ·         *List of convergence layers and their information*: so for
> example, if the node is running a TCPCL on IP address 10.1.2.3:1234, a
> UDP CL on 10.1.2.6:5678, and an LTP/Encap CL on virtual
> channel 3, those would be listed.
>
> o   Maybe listing the VC isn’t appropriate here?  That’s more a function
> of the mission configuration (e.g. each mission could use a different VC
> even if all share the same aperture and site, I think)
>
> o   I’d vote for the CL info being a free text field with a convention
> for entries like TCP:1234:4556 rather than trying to generate a whole
> sub-structure for CL entries – thoughts?
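[One possible concrete shape for the proposed BP_Network_Service element, sketched as a Python dataclass. The field names, example values, and the PROTO:addr:port free-text convention for CL entries are all assumptions for discussion, not a defined SANA schema.]

```python
# Strawman shape for a BP_Network_Service registry entry; everything here
# (field names, values, the free-text CL convention) is hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class BPNetworkService:
    cbhe_node_number: int             # links into the CBHE Node Number registry
    poc: str                          # link to the SANA PoC entry (person or org)
    cbhe_service_numbers: List[int]   # services expected to run on the node
    convergence_layers: List[str]     # free-text entries, e.g. "TCP:10.1.2.3:1234"

entry = BPNetworkService(
    cbhe_node_number=101,                       # illustrative value
    poc="sana-registry/contacts/example-org",   # hypothetical link
    cbhe_service_numbers=[64, 65],              # illustrative service numbers
    convergence_layers=[
        "TCP:10.1.2.3:1234",   # TCPCL on an IP address/port
        "UDP:10.1.2.6:5678",   # UDP CL
        "LTP-ENCAP:VC3",       # LTP over Encap on virtual channel 3
    ],
)
print(entry.cbhe_node_number, entry.convergence_layers[0])
```

The flat free-text CL list follows the convention suggested above; a structured sub-schema would replace those strings with typed records if the group decides it is worth the complexity.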
>
>
>
> Somebody should probably also define an IP_Network_Service element.  That
> may fall to us as well but could pretty much mirror the BP one (but without
> CL info).
>
>
>
> Thoughts on this?
>
>
>
>
>
>                                                 v/r,
>
>
>
>                                                 --keith
>
>
>
>
>
> *Dr. Keith Scott                                   Office: +1.703.983.6547*
>
> *Chief Engineer, Communications Network Engineering & Analysis   Fax: +1.703.983.7142*
>
> *Advanced Data Transport Capability Area Lead   Email: kscott at mitre.org*
>
>
>
> *The MITRE Corporation* <http://www.mitre.org/>
>
> *M/S J500*
>
> *7515 Colshire Drive*
>
> *McLean, VA 22102*
>
>
>
> *MITRE self-signs its own certificates.  Information about the MITRE PKI
> Certificate Chain is available from https://www.mitre.org/tech/mii/pki/
> <https://www.mitre.org/tech/mii/pki/>*
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> SIS-DTN mailing list
> SIS-DTN at mailman.ccsds.org
> https://mailman.ccsds.org/cgi-bin/mailman/listinfo/sis-dtn
>
>


-- 
New postal address:
Google
1875 Explorer Street, 10th Floor
Reston, VA 20190
