[Sis-csi] Text for Transport Layer Section

Lee.Neitzel at EmersonProcess.com
Mon May 1 12:51:59 EDT 2006


In a previous email I described how the process control industry analyzed its data flows to define a set of three interaction models.

It appears to me that the description below identifies a data flow (the SAR transfer) that may not be ideally suited to UDP. It would be good to see a follow-up that describes the full behavior required for SAR transfers without reference to specific protocols. From that, it could be determined where the reliability mechanisms have to live (at which layer, based on where the real endpoints are).

This could be used to define a common high-rate protocol that sits on top of a connection-oriented (e.g. TCP) or a connectionless (e.g. UDP) service.  


-----Original Message-----
From: sis-csi-bounces at mailman.ccsds.org [mailto:sis-csi-bounces at mailman.ccsds.org] On Behalf Of Scott, Keith L.
Sent: Monday, May 01, 2006 9:54 AM
To: Keith Hogie; sis-csi at mailman.ccsds.org
Subject: RE: [Sis-csi] Text for Transport Layer Section

Keith,

I'm not trying to bash UDP here, or to downplay its utility.  The
examples you cite, DNS and NTP, which tolerate loss and consume very
little bandwidth, are perfectly well-suited to UDP.  What I think
requires careful consideration is attempting to craft custom mechanisms
for larger data transfers over UDP.  I am concerned in particular about
each application deciding that the services it needs are not supported
by any of [TCP, SCTP, MDP, NORM, FLUTE, CFDP, and DTN] and then simply
rolling its own over UDP.  In particular:

  o For 'reliable' data transfer, I fear the mechanisms designed as
'one-offs' would be expensive to build and maintain, would not work as
planned (it actually takes a while and a good bit of testing to prove
out such mechanisms), and would not be well-documented, so that anyone
wanting to interoperate with one of these applications in the future
would face a difficult task or have to reverse-engineer the code.

  o For applications that involve large amounts of data, I fear that
'home-grown' UDP-based applications will simply blast out data in the
naïve belief that this is the best way for them to fill the bottleneck
link, whose bandwidth the application may not have any way of knowing
without something even more heinous such as RSVP.  This is why both
FLUTE and NORM have congestion control mechanisms that can be invoked
when running in shared Internet environments.  (A sketch of what even
a minimally 'regulated' UDP sender entails follows this list.)
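
To make that concrete, here is a minimal sketch (Python; the
destination address, the 64 kbit/s rate, and the 4-byte sequence-number
header are all hypothetical choices for illustration) of the two most
basic pieces of a disciplined UDP sender: sequence numbers so the
receiver can at least detect loss, and token-bucket pacing so the
sender cannot exceed a configured rate.  Note how much is still
missing -- acknowledgments, retransmission, reordering, and any way to
discover the bottleneck rate -- which is exactly the expensive part of
every 'one-off':

    import socket
    import struct
    import time

    DEST = ("203.0.113.10", 5005)    # hypothetical receiver
    RATE_BPS = 64_000 // 8           # 64 kbit/s, expressed in bytes/second
    BUCKET_MAX = 1500                # burst allowance: one MTU-sized packet

    def send_paced(payloads):
        """Send each payload, paced to RATE_BPS by a token bucket."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tokens, last = float(BUCKET_MAX), time.monotonic()
        for seq, data in enumerate(payloads):
            # A sequence number makes loss at least *detectable* downstream.
            pkt = struct.pack("!I", seq) + data
            assert len(pkt) <= BUCKET_MAX  # else the pacing loop never completes
            while True:
                now = time.monotonic()
                tokens = min(BUCKET_MAX, tokens + (now - last) * RATE_BPS)
                last = now
                if tokens >= len(pkt):
                    break
                time.sleep((len(pkt) - tokens) / RATE_BPS)  # wait for refill
            tokens -= len(pkt)
            sock.sendto(pkt, DEST)
        # Still missing: ACKs, retransmission, reordering, flow control,
        # and any way to *learn* the bottleneck rate.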

This is not an argument against application protocols running over UDP.
I am not advocating scrapping UDP, or rewriting DNS or NTP to use TCP,
or telling space missions that they can't use UDP.  For telemetry
applications that want to transmit a few kBytes of housekeeping every
second or so, have at it; UDP is the right way to go.  For a SAR that's
going to generate 600 MBytes of data in one go that has to be
transmitted reliably, I don't think UDP is appropriate (especially if
unregulated).  For the area in between, we're going to have to sort
that out over time, and only once we know the applications and
requirements involved.  Maybe QoS marking or RED with penalty boxes or
some such are hammers that can be applied to this problem, maybe not.
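
To put rough numbers on the SAR case (the link rates here are purely
illustrative): 600 MBytes is about 4.8 Gbits, so even a perfectly
efficient transfer occupies a dedicated 10 Mbit/s link for roughly
eight minutes, or a 1 Mbit/s link for roughly 80 minutes.  A packet
loss rate of just 0.1% over that transfer leaves on the order of
600 KBytes missing, and an unregulated UDP sender has no mechanism to
even find out which 600 KBytes.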

Your arguments that congestion should not be a consideration seem
fragile to me.  We're talking here about moving towards a large network
of spacecraft, including both the numerous robotic Low Earth Orbit
sensor/comm spacecraft and crewed missions, and possibly building up a
large fixed infrastructure on the surface of the moon.  Arguing that
UDP data flows are limited by their RF link bandwidths is an argument
FOR careful use of UDP -- if the UDP flow is limited by its RF link
bandwidth, it has successfully consumed the entire link (and starved
out any other traffic).  Two such flows, originating several hops away
in the network and converging on a bottleneck link, will just cause
both to lose data and, without QoS, starve out other streams.  If
they're transmitting at high enough rates to congest intermediate
links, I suspect they're going to want to get all that data they just
lost back.
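
The arithmetic is unforgiving (hypothetical numbers): two flows, each
rate-limited only by a 1 Mbit/s originating RF link, converging on a
1.5 Mbit/s bottleneck offer 2 Mbit/s of load; 0.5 Mbit/s has to be
dropped, so each flow loses roughly 25% of its packets if drops are
shared evenly -- and a low-rate telemetry stream sharing that queue
sees the same loss rate with no bandwidth of its own to show for it.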

Careful management of all the link bandwidths and application data
rates works great, but that's exactly what we're trying to get away
from.  For single space missions this approach has worked well in the
past, but it simply doesn't scale to include all of the elements we
foresee in the cislunar environment.  Link bandwidth management only
gets more difficult in multi-hop environments, which are, I think, the
domain of this WG.  Simply re-wickering the stacks of individual
spacecraft to include IP while maintaining the current management
overhead is not the goal.

Will we have such a network in the near future: no.  Can we start out
now with applications essentially taking their application data units
that are currently wrapped in CCSDS packets and wrapping them instead
in UDP: yes, that would work.  But in the long run this would be the
wrong approach.  The Internet suite is not just IP, nor is it just IP
and TCP.


The CHIPSat experience supports the need for DTN-like and PEP-like
functionalities.  I view the 'hop-by-hop FTP' approach taken by CHIPSat
(and CFDP's store-and-forward overlay procedures) as precursors to
DTN's store-and-forward overlay, which can use TCP, UDP, FTP, or other
mechanisms (independently) for each hop.  The differences are that
hop-by-hop FTP is oriented solely towards files and may not be
particularly well-suited to smaller, more ephemeral transmissions such
as telemetry, and (I suspect) it has to be routed by hand.  One could
craft some sort of standardized, general-purpose service on top of the
CHIPSat FTP approach and incorporate the ability to do automated
routing -- that's what DTN is doing.
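
For the flavor of it, here is a minimal sketch of the store-and-forward
pattern both approaches share (Python; the spool directory, ports, and
framing are hypothetical, and this is emphatically not the DTN Bundle
Protocol itself): each hop takes custody of a complete data unit on
disk, then forwards it whenever the next hop happens to be reachable.

    import os
    import socket

    SPOOL = "/var/spool/bundles"    # hypothetical staging directory

    def receive_and_store(listen_port):
        """Accept one complete data unit from the previous hop; spool it."""
        srv = socket.create_server(("", listen_port))
        conn, _ = srv.accept()
        data = b""
        while chunk := conn.recv(65536):
            data += chunk
        path = os.path.join(SPOOL, "bundle-%d" % len(os.listdir(SPOOL)))
        with open(path, "wb") as f:   # custody: survives link outages
            f.write(data)
        return path

    def forward_when_contact(path, next_hop):
        """Push a spooled unit to the next hop; keep it if there's no contact."""
        try:
            with socket.create_connection(next_hop, timeout=10) as conn:
                with open(path, "rb") as f:
                    conn.sendall(f.read())
            os.remove(path)           # the next custodian has it now
        except OSError:
            pass                      # no contact yet: data waits in the spool

Hop-by-hop FTP in effect hard-codes next_hop; DTN's contribution is to
standardize the unit of exchange and compute next_hop automatically.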

CHIPSat's experience is also a good example of the benefits PEPs can
bring by breaking end-to-end TCP connections.  You mention that the
'space link' FTP is decoupled from the terrestrial TCP, which is
exactly what TCP PEPs do, with the added advantage that PEPs can
forward data onward before the entire file has accumulated at the
staging point.  In addition, a PEP-based approach might be able to
increase performance even further over an FTP-based one in that the
transfer across the dedicated space link could use different
parameters, including no congestion control and a rate limitation to
keep from 'auto-congesting' the link.  For dedicated bandwidth
connections, DTN over a rate-controlled UDP convergence layer (maybe
with a bit of FEC to combat the occasional error, maybe not) might be a
good solution.
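
A toy illustration of the connection-splitting idea (Python; the hosts,
ports, and the choice of plain TCP on the 'space side' are hypothetical
stand-ins -- a real PEP would run something rate-controlled across the
space link): the terrestrial TCP connection terminates at the proxy,
and bytes flow onward immediately rather than accumulating into a
complete file first.

    import socket
    import threading

    SPACE_SIDE = ("space-gw.example", 9000)  # hypothetical space-link endpoint

    def pipe(src, dst):
        """Relay bytes in one direction until the source closes."""
        while chunk := src.recv(65536):
            dst.sendall(chunk)
        dst.shutdown(socket.SHUT_WR)

    def pep_proxy(listen_port):
        srv = socket.create_server(("", listen_port))
        while True:
            ground, _ = srv.accept()             # terrestrial TCP ends HERE
            space = socket.create_connection(SPACE_SIDE)  # separate hop,
                                                          # its own parameters
            # Forwarding starts immediately -- no waiting for the whole
            # file to accumulate at the staging point.
            threading.Thread(target=pipe, args=(ground, space),
                             daemon=True).start()
            threading.Thread(target=pipe, args=(space, ground),
                             daemon=True).start()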

I think your points about the resource requirements of PEPs and DTN are
well-taken and I'll incorporate them.


=========================


We need to close this out, and our main points of contention seem to be
competing fears about what might or might not happen in the future.  I
can easily see this WG recommending some or all of FLUTE, NORM, and
CFDP-over-[UDP or DTN] as the 'recommended' set of approaches to data
transfer.  That said, I'm going to press forward with the document we
have and try to incorporate your concerns and fears as well as my own.

Our next round of work, recommending a suite of data transfer
mechanisms, is going to need some serious requirements injected into it
by people from both the science and spacecraft operations areas.
Without that, I fear we would be doomed to repeat this cycle ad
nauseam.


		--keith
