[Sis-dtn] Proposed registry (and registry changes) to address BP operations in CCSDS.
Shames, Peter M (312B)
Peter.M.Shames at jpl.nasa.gov
Tue Aug 28 21:03:10 UTC 2018
I think this whole discussion would benefit from having a few Use Cases (or maybe even more than a few). We have the possibilities of completely terrestrial deployments, completely space deployments, and a mixture of the two. We have fixed nodes, and orbiting nodes, and roving / human nodes, and nodes fixed (or orbiting) in some remote location. We have nodes that may be streaming fountains of data and others that are demand only and others that use a client / server model. And we currently have registries here on Earth, but we may well want to have cached registry information at other locations where there is a cluster of user (and server) nodes. Right now we have no way of supporting anything like that, but we should, IMHO, be thinking about it.
Aside from limited deployment situations where the nodes, and the services they provide, are "baked in" to the implementation, using registries to manage this makes sense to me. The proposal is to use the existing, and planned, SANA registries to provide such a framework, to extend them as needed for DTN's particular requirements, and to think later (but not too much later) about how to expand this registry model out into the Solar System.
My assumptions are the following, and pardon me if I do not get the DTN terminology exact:
1. DTN services live on host systems
2. Host systems may be (fixed) service sites, or user sites, or a combo of both
3. Sites (which may be S/C, ground stations, mission sites, user sites, or other) are owned and operated by some organization
4. Sites have various identifiers, including location, ownership, and services
5. Each organization that operates a site will have at least one Point of Contact (PoC) for the services it provides
6. DTN Bundle Agents (BA) will be hosted at these Sites
7. Each BA will have one (or more) associated, unique Bundle Node Numbers, tied to the Site
8. Each BA may offer one (or more) services, identified by CBHE Service Numbers
9. In addition to the AMS and CFDP application data transfer services (message and file), there will need to be application-data-oriented services that specify, in some way, the contents of these messages and files.
All of this stuff could be registered, and most of the necessary registries already exist (items 3, 4, 5, 6), or enough exists that what is needed is to add the "DTN hooks". Items 7 & 8 are DTN specific. Which of these make sense in registries depends on how you see this information evolving. If growth of the SSI is slow and implementation driven, you can probably (continue to) get away with local tables. As soon as the DTN grows significantly, has multiple implementations, and involves many different agencies, I argue that you will need some registries, and that those will need to be deployed "around the SSI" and locally cached for efficiency.
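Just to make that concrete, here is a minimal sketch (in Python, purely illustrative; the class and field names are mine, not from any existing SANA schema) of how items 2 through 8 might hang together as registry records:

# Illustrative only: hypothetical field names, not an actual SANA schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Organization:                    # item 3: the owner/operator of a site
    name: str
    poc: str                           # item 5: point of contact for its services

@dataclass
class Site:                            # items 2-4: S/C, ground station, mission site, user site, ...
    site_id: str
    location: str
    operator: Organization

@dataclass
class BundleAgent:                     # item 6: a BP node hosted at a site
    cbhe_node_number: int              # item 7: unique node number, tied to the Site
    site: Site
    cbhe_service_numbers: List[int] = field(default_factory=list)   # item 8: services offered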
It may be that some of these services are of the "well known" variety, similar to HTTP, SMTP, DNS, or ARP. For others, like how we handle SLE now, you need to have a private agreement about address, port, and access credentials, all arranged by contacting the PoC for that service. For the SLE services there is even the notion of a service agreement and possibly a cost.
If all of the registry info were just in the SANA registries then I agree with Scott: you could burn a lot of time just finding out what services were available and how to access them. Lots of round-trip query / response traffic, just as happens now in the Internet with DNS, ARP, etc. But that is why I think that, in the future, we will need some way to distribute these registries, cache them "local" to clusters of users, and keep them in sync. That could use some sort of pub/sub or multicast approach between registries.
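As a very rough illustration of the kind of local caching I have in mind (the names and interface here are hypothetical, not any existing SANA or DTN API):

# Hypothetical sketch: a locally cached registry kept in sync by pushed updates
# (e.g. received over a pub/sub or multicast channel), so that lookups need no
# round trip back to the authoritative registry.
class CachedRegistry:
    def __init__(self):
        self.entries = {}                       # CBHE node number -> registry record

    def apply_update(self, update):
        """Apply one registry change pushed from the authoritative registry."""
        if update["op"] == "delete":
            self.entries.pop(update["node_number"], None)
        else:                                   # "add" or "modify"
            self.entries[update["node_number"]] = update["record"]

    def lookup(self, node_number):
        """Answer a query locally; returns None if the node is unknown here."""
        return self.entries.get(node_number)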
I hope this helps the dialogue. Does anyone have a good set of Use Cases that we could leverage?
Cheers, Peter
From: Keith Scott <kscott at mitre.org>
Date: Tuesday, August 28, 2018 at 8:27 AM
To: Scott Burleigh <Scott.C.Burleigh at jpl.nasa.gov>, "sis-dtn at mailman.ccsds.org" <sis-dtn at mailman.ccsds.org>
Cc: Peter Shames <Peter.M.Shames at jpl.nasa.gov>, "SANA Steering Group (SSG)" <ssg at mailman.ccsds.org>
Subject: Re: Proposed registry (and registry changes) to address BP operations in CCSDS.
Interesting, I was interpreting ‘services’ as ‘CBHE service numbers’ pretty literally. Your first comment (about CBHE service numbers for data SOURCES) was not how I interpreted SEA’s comments; I perceived they were looking for the CBHE service #s that a node was … willing to receive on? (plus maybe a flag of ‘I’m willing to forward data’?) Sort of like ‘what are the open (IP) ports on this machine’. Maybe such a registry would only contain the ‘well-known’ CBHE services (things like echo (for those unwise enough to try to use it), AMS, CFDP, …) and not application/mission-specific services (which might use ‘private/experimental’ CBHE service # space anyway)? I think my interpretation aligns more with your question below about destinations of data. I think I’d leave the (application/mission-specific) data services out of the registry altogether (your notion of the available application data sources) and just address them as you suggest below.
Peter – can you clarify?
--keith
From: Scott Burleigh <Scott.C.Burleigh at jpl.nasa.gov>
Date: Tuesday, August 28, 2018 at 11:13 AM
To: Keith Scott <kscott at mitre.org>, "sis-dtn at mailman.ccsds.org" <sis-dtn at mailman.ccsds.org>
Cc: "Shames, Peter M (312B)" <Peter.M.Shames at jpl.nasa.gov>, SANA Group <ssg at mailman.ccsds.org>
Subject: RE: Proposed registry (and registry changes) to address BP operations in CCSDS.
I think there’s a little bit of a conceptual disconnect here.
As I understand it, the reason you would want to know what services – that is, what sources of data – are supported at a given BP node is that you want to obtain some of that data. To do so, you would send a bundle, requesting the desired data, to the endpoint formed by the ID of the node and the ID of the service that is the data source, and the node would send the data back to you.
But this sort of client/server data flow requires round-trip communication that can take a long time; it is innately non-delay-tolerant. That is the central point we started with in 1998. The sort of registry we are talking about here might be valuable, but I don’t think it has anything to do with DTN.
The delay-tolerant way to obtain data is simply to receive it when it is generated by the source; to accomplish this, you join the corresponding multicast group to which the source node publishes the new data. (And, in the long run, I think you pick up previously published data after the fact by joining persistent multicast groups that act like information-centric networking stores.)
Multicast bundles have sources that are identified by node/service, but of course the sources know their own identities; no need for a registry. The destinations of these bundles are “imc” endpoints identified by multicast group number and, as relevant, service number within multicast group. So a registry of multicast groups would be a very helpful element of DTN infrastructure, but I wouldn’t expect a registry of node/service pairs – or even, really, a registry of nodes – to be of much utility.
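For concreteness, the two kinds of endpoint IDs being contrasted look roughly like this (the numbers are made up for illustration):

# Unicast endpoint ("ipn" scheme): node number and service number.
# The source node already knows its own identity, so no registry is needed for this.
ipn_endpoint = "ipn:87.42"     # node 87, service 42

# Multicast endpoint ("imc" scheme): multicast group number and service number.
# Group numbers are the sort of thing a registry could usefully record.
imc_endpoint = "imc:19.1"      # multicast group 19, service 1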
What about knowing which destinations of data are operating at which BP nodes? Do we need a registry for that?
Certainly it is the case that non-multicast messages have sources and destinations that are identified by node/service. But again the source endpoints are known by the sources themselves, and I am doubtful that any application is going to need a registry of the node/service pairs that identify potential destinations of non-multicast bundles. The scalable and responsive way to provide that information, I think, is for the applications to manage it themselves. E.g.:
1. Node A sends a bundle saying “You can get data X from me” to multicast group Q.
2. Node B, a member of multicast group Q, receives that bundle and sends a non-multicast bundle to node A (the source of the original multicast) saying “Great, please send me X, encrypted.”
3. Node A receives that bundle, uses the public key of B to encrypt X, and sends encrypted X in a non-multicast bundle to B (the source of the request bundle).
4. Node B receives that bundle and uses its private key to decrypt X.
We could have skipped step 1 by providing this information in a SANA registry; node B could have learned that X was at A by querying the registry. But that would require another round-trip data exchange between node B and the registry; I am skeptical that there is an advantage.
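Spelled out as a toy sketch (not real BP code; the "network" here is just a list, the EIDs and payloads are invented, and the cryptography is elided):

# Toy walk-through of the four steps above; a real implementation would use an
# actual BP stack and real public-key cryptography.
network = []   # stand-in for the DTN: (destination EID, payload) tuples

# Step 1: node A (ipn:1.100) tells multicast group Q (imc:19.100) that it has data X.
network.append(("imc:19.100", "You can get data X from ipn:1.100"))

# Step 2: node B (ipn:2.100), a member of group Q, asks A for X by unicast bundle.
network.append(("ipn:1.100", "Please send me X, encrypted; reply to ipn:2.100"))

# Step 3: node A encrypts X with B's public key and sends it to B by unicast bundle.
network.append(("ipn:2.100", "<data X, encrypted under B's public key>"))

# Step 4: node B receives the bundle and decrypts X with its private key.
destination, payload = network[-1]
assert destination == "ipn:2.100"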
That said, I don’t object to creating the proposed SANA registries in the near term. For the relatively small-scale and stable application structures we are likely to see over the next few years they may serve us well.
Scott
From: Scott, Keith L. <kscott at mitre.org>
Sent: Tuesday, August 28, 2018 5:39 AM
To: sis-dtn at mailman.ccsds.org
Cc: Burleigh, Scott C (312B) <Scott.C.Burleigh at jpl.nasa.gov>; Shames, Peter M (312B) <Peter.M.Shames at jpl.nasa.gov>; SANA Group <ssg at mailman.ccsds.org>
Subject: Proposed registry (and registry changes) to address BP operations in CCSDS.
Greetings,
Peter Shames noted that while there are registries for SANA CBHE Node and Service numbers that will allow us to deconflict node EIDs and have a uniform interpretation of the service numbers, there is nothing that says WHICH services are running at WHICH node, OR anything that associates BP Nodes (in our case identified by CBHE Node IDs) and the services they’re running with ‘Sites’ (using the Service and Site Aperture Registry definition) such as ground stations, spacecraft, etc. This kind of information, while not technically required to make BP work, is part of the administration/operation of the network, and would be expected to change infrequently (and so at least feasible to maintain in a registry).
The desire is for the DTN WG to identify changes / augmentations to the SANA registries to provide the information above. The attached Registry Management Policy deck includes an overview of the Service Site & Aperture structure on slides 21-23.
I propose the following strawman for discussion:
Change the Network Services under Site Service Info on slide 22 to be a 'mayInclude (0..*)' – i.e., allow for possibly multiple network services. This would allow sites that have multiple BP routers, which I'll admit might be unusual but is certainly possible.
Define a BP_Network_Service element (registry) that is a logical subclass of the Network_Services listed on slide 22 as:
· CBHE Node #, the CBHE Node Number of the BP Node, hyperlinked to the node number allocation range in the CBHE Node Number Registry
· POC: a link to the appropriate SANA registry entry for who to talk to about connecting with this node (e.g. routing through it) [Could be a person or an organization?]
· List of CBHE Service #s: A list of CBHE service numbers that can be expected to be running on the node (e.g. CFDP) – hyperlinked to their corresponding entries in the CBHE Service # registry.
· List of convergence layers and their information: so for example, if the node is running a TCPCL at IP address 10.1.2.3:1234, a UDP CL at 10.1.2.6:5678, and an LTP/Encap CL on virtual channel 3, those would be listed.
o Maybe listing the VC isn’t appropriate here? That’s more a function of the mission configuration (e.g. each mission could use a different VC even if all share the same aperture and site, I think)
o I’d vote for the CL info being a free text field with a convention for entries like TCP:1234:4556 rather than trying to generate a whole sub-structure for CL entries – thoughts?
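For concreteness, a single BP_Network_Service entry under this strawman might look something like the following (field names and values are invented for illustration, not an actual SANA schema):

# Hypothetical BP_Network_Service registry entry (all names/values illustrative).
bp_network_service_entry = {
    "cbhe_node_number": 268484,              # would hyperlink into the CBHE Node Number registry
    "poc": "Example Agency DTN Operations",  # would link to the SANA person/organization entry
    "cbhe_service_numbers": [10, 64],        # placeholder values; would link to the Service # registry
    "convergence_layers": [                  # free-text convention, one string per CL
        "TCPCL:10.1.2.3:1234",
        "UDPCL:10.1.2.6:5678",
        "LTP/ENCAP:VC3",
    ],
}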
Somebody should probably also define an IP_Network_Service element. That may fall to us as well but could pretty much mirror the BP one (but without CL info).
Thoughts on this?
v/r,
--keith
Dr. Keith Scott Office: +1.703.983.6547
Chief Engineer, Communications Network Engineering & Analysis Fax: +1.703.983.7142
Advanced Data Transport Capability Area Lead Email: kscott at mitre.org
The MITRE Corporation <http://www.mitre.org/>
M/S J500
7515 Colshire Drive
McLean, VA 22102
MITRE self-signs its own certificates. Information about the MITRE PKI Certificate Chain is available from https://www.mitre.org/tech/mii/pki/