[Sis-csi] Text for Transport Layer Section

Dave Israel dave.israel at nasa.gov
Fri Apr 28 09:30:37 EDT 2006


I don't understand how we've devolved into a "one-size-fits-all" 
debate.  If the infrastructure supports IP, it will also support UDP (and, 
under some well-documented conditions, TCP/SCPS-TP).  In those situations, 
DTN could run over UDP/TCP/SCPS-TP and happily co-exist with anybody's 
UDP-based reliable data transfer application.  I do not agree that we are 
15 years away from needing "overlay networks" -- they are not just useful 
for lunar missions.

Let's not hold up the completion of this Green Book over issues that will 
be future mission- and application-specific decisions.  I believe this Green 
Book is a first step toward getting network-based comm infrastructure in 
place and should not be delayed over "above the network layer" debates.

Regards,
Dave


At 01:43 AM 4/28/2006, Keith Hogie wrote:
>Keith Scott,
>
>   I am still concerned that this misses what most missions will be
>looking for and may do more to chase off users than attract them.
>If we are trying to introduce the concept of using Internet protocols
>to new missions, I think we need to speak more to things they
>understand and feel comfortable with.  The current content
>seems to go into lots more detail on concepts like "overlay
>networks" and DTN.  These are concepts that will not apply to
>any missions for many years to come.  They may be of some use
>15 years from now, when there is more of a mesh network available
>in space, but right now they are still very experimental.
>
>See more specific comments below.  I can get you an alternate
>version of this section next week if you are interested.
>
>Keith Hogie
>
>
>Scott, Keith L. wrote:
>>Below is the new text for the transport layer section that includes UDP 
>>and a generalization/expansion of the overlay section.  I am in the 
>>process of uploading a new version that includes Howie's security changes 
>>(thanks!) to CWE.
>>         --keith
>>
>>
>>1.1 Transport Layer
>>The standard transport protocols used with the Internet protocol suite 
>>are TCP, which provides a reliable bytestream service, and UDP, which 
>>provides an unreliable datagram service. Other services such as overlay 
>>services that provide reliability without the requirement for 
>>bidirectional end-to-end paths, or that provide reliable multicast, can 
>>be built on top of TCP and UDP.
>>
>>1.1.1 UDP
>>UDP provides an unreliable data delivery service comparable to standard 
>>TDM and CCSDS packet delivery systems currently used for space 
>>communication.  UDP packets may be lost or duplicated in the network, and 
>>no feedback of such events is provided to the sender.  Because UDP does 
>>not implement reliability or require any signaling from the recipient to 
>>the sender, it can function over simplex paths and over paths with 
>>arbitrary delays.  UDP is commonly used for data delivery where completeness 
>>is not required, such as cyclic telemetry.  If a UDP packet containing 
>>one or a set of telemetry measurements is lost, it may be enough to 
>>simply wait for the next packet, which will contain more up-to-date 
>>information.
>
>This starts out OK but then shifts into making it sound like you
>can't or don't want to try doing reliable delivery with UDP.  If you
>look at your own laptop you will find lots of things using UDP all
>the time.  Actually, before you use TCP for most any activity your
>laptop starts by using UDP to do a DNS lookup.  If you don't get
>a "reliable" UDP response then your TCP connection never even
>gets started.
>
>You probably also have NTP running over UDP to keep your clock on time.
>Also any audio or video streaming you use is over UDP.  Most network
>management systems use SNMP to reliably manage thousands of devices
>and that's all over UDP.
>
>For reliable file transfers, NFS was developed over UDP to allow
>better adjustment of retransmission mechanisms.  The MDP (predecessor
>to NORM) file transfer protocol has been used for years by groups like
>the US Postal Service and Kmart to reliably distribute files to
>thousands of users using UDP over IP multicast over the open Internet.
>We have also heard about the DMC satellites all using a UDP-based
>file transfer protocol to move their image files over 8 Mbps downlinks.
>
>The main message is that UDP is used for all sorts of packet
>delivery and file delivery applications and those are exactly the
>sort of things that missions want to do.
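>
>For example, here is a minimal, purely illustrative sketch (Python;
>the host, port, and message format are hypothetical) of the same
>send/timeout/retry pattern a DNS lookup uses to get a "reliable"
>answer over UDP:
>
>    # Illustrative only: application-layer reliability over UDP.
>    import socket
>
>    def udp_request(host, port, payload, retries=3, timeout=2.0):
>        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>        sock.settimeout(timeout)
>        try:
>            for _ in range(retries):
>                sock.sendto(payload, (host, port))      # fire the datagram
>                try:
>                    reply, _addr = sock.recvfrom(2048)  # wait briefly for a response
>                    return reply                        # got one -- done
>                except socket.timeout:
>                    continue                            # lost or late: retransmit
>            raise TimeoutError("no reply after %d attempts" % retries)
>        finally:
>            sock.close()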
>
>>While it is possible to use UDP for data transport and to implement 
>>reliability at the application layer, care should be taken in doing so, 
>>especially in a network that concurrently carries other Internet protocol 
>>traffic such as TCP.  Applications using UDP would need (in a mixed 
>>network) to ensure that they did not congest the network, either by 
>>implementing some sort of congestion control mechanism or by careful 
>>management of all link volumes.  Note that this is not considered a 
>>problem if the application sends small amounts of data such as small 
>>telemetry samples.  It becomes an issue only when a UDP-based 
>>application might want to send large amounts of data that could, if sent 
>>all at once, overwhelm a router in the middle of the network.  The IETF 
>>is currently investigating the Datagram Congestion Control Protocol
>>(DCCP) for such applications, though DCCP requires bidirectional 
>>communications.
>
>This whole paragraph seems to focus on "congestion" as such a major
>issue.  Actually, TCP creates "congestion" issues on current NASA
>operational networks.  The UDP data flows are limited by their RF
>link bandwidths.  However, users running TCP based applications
>over operational networks can cause much worse peaks in traffic.
>As TCP starts up, it ramps up its data rate until it realizes it
>needs to ease off due to congestion.  In the process it may
>actually interfere with some constant rate UDP flows.
>
>Also, congestion is something that missions and their flight software
>have always dealt with.  Spacecraft have multiple possible streams
>of data and their RF downlink is slower than their onboard data bus.
>The flight software and hardware have been designed to make sure
>that each data flow only uses its allocated amount of bandwidth.
>
>When we connect a fast PC to the Internet with gigabit Ethernet,
>we do create a potential for congestion.  But even
>if you have a gig Ethernet connection to your PC, there is
>probably a slower link leaving your building.  So if you try to
>flood UDP data at 1 Gbps, most of it will get dropped before
>getting out of your building.
>
>When space missions get designed, the end-to-end links are all
>considered, and packet data rates get set to make sure that the
>mission doesn't clog up the links it is using.  For the next 5-10
>years missions will continue to be carefully designed to make
>sure they fully utilize their links and don't congest them.
>Also, any missions with one-way links or very long delays will
>always have to control their sending data rate since they cannot
>get any TCP-like feedback for rate control information.
>
>At some point many years in the future we may actually have
>multiple end nodes flowing data over space routers and sharing
>space links.  But for now we need to focus on how to get basic
>infrastructure in place and get missions to start using basic
>UDP/TCP/IP capabilities that they are comfortable with.
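>
>To make the rate-control point concrete, here is a purely illustrative
>sketch (Python; the 64 kbit/s allocation and datagram size are made-up
>numbers) of open-loop pacing, so a UDP flow never exceeds its allocated
>share of the link even though there is no feedback from the receiver:
>
>    # Illustrative only: pace a UDP sender to a fixed bandwidth allocation.
>    import socket, time
>
>    def paced_send(data, host, port, rate_bps=64000, chunk=1024):
>        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>        interval = (chunk * 8.0) / rate_bps    # seconds per datagram at the allocation
>        for i in range(0, len(data), chunk):
>            sock.sendto(data[i:i + chunk], (host, port))
>            time.sleep(interval)               # open-loop pacing, no TCP-like feedback
>        sock.close()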
>
>>1.1.2 TCP
>>TCP provides a reliable, in-order bytestream delivery service without 
>>duplication.  This means that when applications send data using TCP, the 
>>sending TCP endpoint will attempt to detect lost data and will retransmit 
>>data until it is acknowledged by the receiver.  TCP also provides 
>>congestion control to attempt to keep from overloading the network with 
>>too much traffic.  Because of the way reliability and congestion control 
>>are implemented within the TCP protocol, TCP performance can suffer in 
>>stressed environments characterized by large bandwidth*delay products, 
>>high bit error rates, and significant asymmetries in data rate.  The 
>>round trip light time from the Earth to the Moon is on the order of 3 
>>seconds, and an overall round trip time including intermediate relays of 
>>on the order of 5 seconds will probably be more typical.  This is enough 
>>to cause degradation in TCP performance, especially if 'stock' end 
>>systems are used.
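>>As a rough illustration (assuming the classic 64 KiB receive window of 
>>an un-tuned TCP stack and a 5 second round trip): the sender can have at 
>>most one window of data in flight per round trip, so throughput is 
>>bounded by roughly 65,535 bytes / 5 s, or about 13 KB/s (around 100 
>>kbit/s), no matter how fast the underlying link is.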
>
>
>
>>In the 1990s, CCSDS developed the Space Communications Protocol Standards 
>>Transport Protocol (SCPS-TP) extensions to TCP to attempt to extend the 
>>operating range over which TCP can perform efficiently.
>>While SCPS-TP provides a compatible application programming interface to 
>>TCP, deploying the SCPS-TP extensions in every end host is often 
>>impractical.  Instead, Performance Enhancing Proxies, or PEPs, are often 
>>used to isolate the high bandwidth*delay links that can lower TCP performance.
>
>At this point it seems that UDP has been discouraged, standard TCP
>has problems and all future missions need to get into PEPs, overlay
>networks, and eventually DTN.  Using IP is starting to sound really
>complex and risky for new missions, and they may stop reading and
>go back to their traditional ways.
>
>
>>1.1.2.1 Using Performance Enhancing Proxies to Improve TCP Performance
>>Some of the performance problems of end-to-end TCP can be ameliorated 
>>with the use of performance enhancing proxies, or PEPs.  For TCP traffic, 
>>a PEP is a device that is in the network but that interacts with the 
>>end-to-end TCP flow in order to improve its performance.  There are a 
>>number of different kinds of PEPs discussed in [RFC3135], but one of the 
>>most common types is the split-connection PEP.  Split-connection PEPs break 
>>the TCP connection into multiple connections, with the connections across 
>>the stressed portions of the network using technologies that are 
>>specifically designed and/or tuned for performance.
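>>As a purely illustrative sketch (not a real PEP; both legs use plain 
>>TCP rather than SCPS-TP, only one direction of the flow is shown, and 
>>the host names and ports are hypothetical), the split-connection 
>>structure looks like this:
>>
>>    # Illustrative only: terminate one TCP connection and open a second
>>    # one toward the destination, forwarding bytes between them.
>>    import socket
>>
>>    def relay(listen_port, next_hop_host, next_hop_port):
>>        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>        srv.bind(("", listen_port))
>>        srv.listen(1)
>>        conn, _addr = srv.accept()       # terminate the sender's TCP connection here
>>        out = socket.create_connection((next_hop_host, next_hop_port))
>>        while True:                      # separate connection across the stressed link
>>            buf = conn.recv(4096)
>>            if not buf:
>>                break
>>            out.sendall(buf)             # copy bytes from one connection to the other
>>        out.close()
>>        conn.close()
>>        srv.close()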
>
>Current missions like CHIPSat use standard FTP/TCP and work fine.  They
>do their connection splitting by having the spacecraft first move a file
>to an FTP server at the ground station; later, in a 
>separate TCP connection, the file gets moved to the control center.
>The main thing is that the space link TCP connection is not disturbed by
>any ground Internet congestion.  It is running over its fixed rate,
>private, point-to-point RF link as designed.
>
>Many current and future spacecraft will continue to use this simple
>approach to avoid end-to-end TCP complications.  It's really about
>determining where the "end-points" for any particular connection are.
>In one scenario the "ends" would actually be the instrument in space
>and the scientist on the ground.  However, since we don't have 24 hr.
>end-to-end connectivity, there are actually multiple "ends" as data
>moves in a store-and-forward fashion from instrument to scientist.
>1 - data from instrument to onboard data store with some protocol
>2 - later, files move from spacecraft to ground file store
>3 - files move from ground storage to control center, LZP, or scientist
>These scenarios can be either TCP or UDP based on mission
>designers' preferences and link characteristics, or UDP on some hops
>and TCP on other hops.  That's what mission communication design
>is about.
>
>There are also cases where the RF downlink is actually faster than
>the ground network links.  When the DMCs do their 8 Mbps dumps, the
>links out of the ground stations are not that fast so the data
>must be stored and forwarded later at lower rates.  Many NASA
>satellites also do this with 75 or 150 Mbps dumps to stations
>that don't have ground network links anywhere near that fast.
>
>My point is that we have been dealing with intermittent links,
>long delay links, noisy links, and non END-to-END links forever.
>We can use UDP/IP in place of traditional frames and packets
>in all these scenarios and we can use TCP/IP in carefully
>selected scenarios.  We don't need to go pushing missions
>into more exotic things like PEPs, overlay networks, and DTN
>to support their basic packet and file delivery needs.  Right
>now we need to focus on showing missions simple ways to use
>UDP/TCP/IP to do their traditional data delivery operations.
>If we ever get that in place, there will still be plenty of
>time to add fancier features if we ever need them.
>
>
>>
>>Figure 7: Split-connection PEPs break TCP connections into three parts.
>>Figure 7 illustrates a pair of split-connection PEPs bracketing a 
>>stressed link.  The PEP on the left terminates the TCP connection from 
>>the left-hand host and uses a separate transport connection (in this 
>>case, SCPS-TP) across the stressed link to the right-hand PEP, which in 
>>turn uses a standard TCP connection to reach the right-hand host.
>>Note that in order to terminate TCP connections, the PEPs must be able to 
>>see and modify the TCP headers.  This requires that the TCP headers be 
>>'in the clear' as they pass through the PEP, and not encrypted.
>>Network security mechanisms such as IP security (IPSEC) encrypt the 
>>transport (TCP) headers, preventing the use of performance enhancing 
>>proxies.  It is worth noting that most PEPs will pass IPSEC traffic, but 
>>it will not benefit from the PEP's enhancement.  This means that IPSEC 
>>can still be used if the security benefits it provides outweigh the 
>>performance degradation.
>>It is also worth mentioning that most of the benefits of IPSEC can be 
>>obtained from transport layer security (TLS) mechanisms.  TLS encrypts 
>>the payload user data at the application/transport layer boundary, 
>>leaving the transport layer headers in the clear.  This allows PEPs in 
>>the middle of the path to do their jobs by manipulating the headers 
>>and/or terminating TCP connections.
>
>As mentioned above, current missions already do their flavor of
>"connection splitting" by using simple store-and-forward concepts.
>
>>
>>1.1.3 Overlay Network Services
>>While TCP and UDP provide enough services to cover most terrestrial 
>>communications needs, there are times when neither is in itself 
>>particularly well-suited to an environment or application.  Perhaps the 
>>two most common situations that require more support are reliable 
>>multicast and communication when no end-to-end path exists.  TCP's 
>>control loops that provide reliability and congestion control are 
>>necessarily peer relationships between a single sender and a single 
>>receiver.  Thus TCP is not suited to multicast traffic.  While UDP can 
>>support multicast traffic, it does not provide any reliability or 
>>congestion control.  Finally, both TCP and UDP rely on IP.  IP assumes 
>>that network paths run uninterrupted from sender to receiver.  While this 
>>is a good assumption in most terrestrial environments, it may not hold 
>>for space applications, as spacecraft pointing, antenna/comm. scheduling, 
>>and obscurations may conspire to interrupt communications.
>
>As mentioned above, MDP has been doing reliable file transfers using
>UDP/IP multicast for years with thousands of simultaneous users.
>
>Also, the comment about IP assuming uninterrupted paths from sender to
>receiver needs to be careful about how the "ends" are identified, as
>mentioned earlier.  If the "ends" are the instrument and scientist, then you
>won't normally have an end-to-end path.  But if you add in
>intermediate store-and-forward "ends", IP works just fine.
>
>In fact when I send this email, I'll bet that I don't have an
>end-to-end IP path all the way from my computer to yours.  That
>does not mean that I can't use IP to send my email.  What happens
>is that I add some application-level information (email addresses)
>and send my message over a single hop to my postoffice.  Then
>that postoffice uses the higher-level information to forward
>my email over one or more other end-to-end IP connections
>from postoffice to postoffice.  The messages end up at
>destination postoffices, and whenever you connect your computer
>you can collect them.
>
>I can't support HTTP or SSH connections over this type of
>file store-and-forward environment but science satellites
>don't care about that.  Their primary goal is to collect
>data and eventually move it to its destinations.  This can be,
>and is being, done today using simple TCP and UDP techniques
>over IP on space links.
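>
>A purely illustrative sketch (Python; the spool directory and the
>link_is_up/send_file hooks are hypothetical placeholders) of that
>hop-by-hop pattern -- each hop is its own complete transfer, and the
>data just waits on disk until the next link is available:
>
>    # Illustrative only: store-and-forward relay for one hop.
>    import os, time
>
>    SPOOL = "/data/spool"    # hypothetical local store for files awaiting the next hop
>
>    def relay_loop(link_is_up, send_file):
>        # send_file(path) is whatever single-hop transfer the mission chose
>        # (FTP, a UDP-based file protocol, etc.); True means confirmed delivery.
>        while True:
>            if link_is_up():
>                for name in sorted(os.listdir(SPOOL)):
>                    path = os.path.join(SPOOL, name)
>                    if send_file(path):      # this hop's transfer completed
>                        os.remove(path)      # safe to free local storage
>            time.sleep(10)                   # otherwise the data simply waits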
>
>>The common approach to providing enhanced services such as reliable 
>>multicast or communication without an end-to-end path is to create a new 
>>layer of protocol on top of either TCP or UDP.  This new layer of 
>>protocol defines an overlay network as shown in Figure 8.  It may be the 
>>case that only the end systems, some nodes in the network (as shown in 
>>the figure), or all nodes implement the overlay protocol.  Nodes in the 
>>overlay then use TCP, UDP, or data-link level communications to exchange 
>>data.  The overlay may provide a reliable file replication service, a 
>>reliable (unicast) file delivery service over intermittently-connected 
>>links, or it may look like a transport protocol itself.
>>
>Is email considered an "overlay network"?  And what's wrong with
>using it, with some possible substitutions for message transfer
>agent protocols, to avoid TCP-based issues?  I think the Mars folks
>at Devon Island have already run tests of this approach.  Email is
>something that missions can relate to much better than some abstract
>concept like "overlay networks".
>
>>Figure 8: An overlay network (larger, dark circles) sparsely deployed in 
>>an underlying network (smaller, white circles).
>>The Asynchronous Layered Coding (ALC) protocol [REF_RFC3450] forms the 
>>basis for a number of overlay protocols, including Nack-Oriented Reliable 
>>Multicast (NORM) [REF_RFC3940], a general-purpose reliable data 
>>distribution protocol, and File Delivery over Unidirectional Transport 
>>(FLUTE) [REF_RFC3926], a file delivery protocol.
>>The CCSDS File Delivery Protocol (CFDP) with its store-and-forward 
>>overlay (SFO) procedures also implements an overlay network focused on 
>>file delivery.  CFDP can run over TCP or UDP, or can be configured to run 
>>directly over data link protocols such as AOS and Prox-1.
>
>ALC, FLUTE, NORM, CFDP, etc. are all UDP-based and are the
>types of simple protocols that can work well on space links.
>These relate much more closely to how missions operate today.
>
>The discussion about CFDP over TCP really seems to be gross
>overkill and overhead.  If CFDP really can't handle out-of-order
>packets, then it seems that CFDP or its software implementation
>needs fixing.  As Dai mentioned, delayed ACKs and retransmissions
>are really the ultimate out-of-order packet arrival case.
>
>Another issue that doesn't seem to be addressed is how these
>various protocols affect things like memory and non-volatile
>storage on space systems.  When you use TCP, PEPs, and DTN you
>need to add more memory and storage on space nodes and that is
>not at all popular with missions.  When you use UDP-based
>file transfer protocols the file itself is your retransmission
>window.  You don't need to keep data in memory for retransmission
>since you can always go back to the sending application and
>ask for a specific chunk of data again.  When you try to add
>retransmission mechanisms like TCP and make things look
>transparent to applications, then the intermediate layer
>must buffer data, and as delays increase, the amount of
>buffering increases.  With simple UDP file transfers you
>don't need to add any memory as delays increase because
>the original file is always sitting there.  UDP-based file
>delivery mechanisms are highly independent of delays,
>data rates, and file sizes.
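>
>A purely illustrative sketch (Python; this is not CFDP, MDP, or NORM,
>and the one-request/one-chunk wire format is made up) of the "file as
>retransmission window" idea on the sending side -- any requested chunk
>is simply re-read from the file, so no extra retransmission buffer is
>needed no matter how long the delay:
>
>    # Illustrative only: answer chunk requests (NACKs) straight from the file.
>    import socket, struct
>
>    CHUNK = 1024
>
>    def serve_chunks(filename, port):
>        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>        sock.bind(("", port))
>        with open(filename, "rb") as f:
>            while True:
>                req, addr = sock.recvfrom(8)         # request is an 8-byte file offset
>                (offset,) = struct.unpack("!Q", req)
>                f.seek(offset)
>                data = f.read(CHUNK)                 # the file itself is the "window"
>                sock.sendto(struct.pack("!Q", offset) + data, addr)
>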
>>A slightly different type of overlay network is Delay/Disruption 
>>Tolerant Networking (DTN) [REF_XXXXX].  DTN provides an 
>>optionally-reliable datagram delivery service to applications, regardless 
>>of whether end-to-end paths exist or not.  Reliable message delivery is 
>>accomplished by a sequence of custody transfers from node to node in the 
>>overlay rather than with end-to-end reliability as with TCP.  Custody 
>>transfers are a function of the overlay protocol and don't depend on 
>>bidirectional paths between overlay nodes.  Thus a DTN node might 
>>transmit a message on Tuesday using UDP over AOS and receive an 
>>indication that some other node has taken custody of the message on 
>>Wednesday, with that indication coming by way of a TCP connection over Prox-1.
>>
>>Unlike the overlays above, DTN is designed to accommodate changing 
>>connectivity in addition to intermittency.  The DTN overlay is designed 
>>to run its own routing protocol(s) independent of the underlying 
>>network.  These DTN routing protocols can account for things the 
>>underlying network does not, such as scheduled future periods of 
>>connectivity.  Thus a DTN node might decide to break a message it is 
>>currently forwarding into two parts, one to be sent now over UDP and 
>>another to be sent over a future scheduled Prox-1 connection.  The 
>>various pieces are then reassembled at the destination (or can be 
>>reassembled at another intermediate node if they happen to meet).
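>>As a purely illustrative sketch (not the actual DTN bundle protocol; the 
>>storage path and the try_send/custody_acked hooks are hypothetical), the 
>>custody idea reduces to holding a message in persistent storage until 
>>some other node confirms it has taken responsibility for it:
>>
>>    # Illustrative only: keep retrying until custody is confirmed, however long it takes.
>>    import os, time
>>
>>    STORE = "/data/custody"    # hypothetical persistent message store
>>
>>    def custody_forward(msg_id, try_send, custody_acked):
>>        path = os.path.join(STORE, msg_id)
>>        while not custody_acked(msg_id):   # the ack may arrive much later, over a different link
>>            try_send(path)                 # attempt whenever any contact is available
>>            time.sleep(60)
>>        os.remove(path)                    # another node has custody; local copy can be freed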
>>To illustrate how overlay services can improve performance in 
>>intermittently-connected environments, Figure 9 shows two views of a 
>>notional four-hop network path.  The top view uses end-to-end networking 
>>such as IP between the source at the top and the destination at the 
>>bottom.  Time in the figure progresses to the right, and up/down 
>>timelines for each link are shown.  A heavy bar centered on the thin line 
>>for a link indicates that a particular link is up at a particular time, 
>>and a thin line without a bar indicates that the link is down.
>>Data is represented by the heavy boxes that are above link connectivity 
>>indicators, and the source is assumed to always have data to send.
>>
>>Figure 9: End-to-end networking requires a full path between source 
>>and destination before any data can be sent.  A long-term 
>>store-and-forward system can use individual links as they are available.
>>In the end-to-end (top) portion of the figure, the source has to wait 
>>until there is a complete path to the destination before any data can be 
>>sent, which increases the latency and reduces throughput.  The 
>>message-based store-and-forward system, on the other hand, gets the first 
>>bit to the destination much faster, and has a higher overall throughput.
>We have always done this sort of store-and-forward data delivery to
>deal with intermittent links.  It's possible that a more elaborate
>solution than something like basic email might be needed in the
>future, but that's still to be determined.  Right now DTN is still
>very experimental and needs lots more use and real space-based
>investigation before we go pushing it as a solution for everything.
>
>I still remember how the Xpress Transport Protocol (XTP) was
>going to provide better performance, reliable multicast, and
>all sorts of other neat features.  However, after a few years
>of serious development, and even the forming of a company to implement
>it in silicon (Protocol Engines, Inc.), it became clear that it was
>not going to meet all its grand visions or scale up
>for large-scale deployment.  DTN also needs more time
>to mature and to do a better job of defining how it provides
>significant benefits for space communication that can't be
>achieved by much simpler approaches.
>
>
>--
>----------------------------------------------------------------------
>   Keith Hogie                   e-mail: Keith.Hogie at gsfc.nasa.gov
>   Computer Sciences Corp.       office: 301-794-2999  fax: 301-794-9480
>   7700 Hubble Dr.
>   Lanham-Seabrook, MD 20706  USA        301-286-3203 @ NASA/Goddard
>----------------------------------------------------------------------
>
>

______________________________________________________________
Dave Israel
Leader, Advanced Technology Development Group
Microwave & Communication Systems Branch
NASA Goddard Space Flight Center  Code 567.3
Greenbelt, MD 20771
Phone: (301) 286-5294      Fax:   (301) 286-1769
E-mail: dave.israel at nasa.gov

"Without deviation from the norm, progress is not possible."  -Frank Zappa 



