[Sis-csi] Green book thoughts

Peter Shames peter.shames at jpl.nasa.gov
Tue Apr 18 18:17:14 EDT 2006


UDP with NACKs is NOT UDP, it's some other protocol.   The best thing  
to do is to document and describe it as some other protocol and feed  
it into the RFC (or CCSDS) mix for evaluation.  Making believe that  
this protocol is UDP just because you use the UDP PDU structure is  
false advertising.

Peter



On Apr 18, 2006, at 11:15 AM, <L.Wood at surrey.ac.uk> wrote:

> Resending the below, now that I've subscribed as  
> L.Wood at surrey.ac.uk rather than L.Wood at eim.surrey.ac.uk, so admin  
> approval of all my messages should no longer be required.
>
> -----Original Message-----
> From: Wood L Dr (Electronic Eng)
> Sent: Tue 2006-04-18 18:58
> To: Scott, Keith L.; Keith Hogie; sis-csi at mailman.ccsds.org
> Subject: RE: [Sis-csi] Green book thoughts
>
> My take:
>
> It's not a question of space link capacities being relatively  
> speaking too low (though those will always lag terrestrial link  
> capacities).
>
> High-rate UDP-based flows from space have the ability to cause  
> congestion, but won't congest the (permanently-connected,  
> terrestrial) network because they'll always be deliberately set to  
> deliver to the edges of that network. This is a sensible design  
> choice.
>
> Example: SSTL's Saratoga rate-based, UDP-based transfer protocol  
> with NACKs, which fills an 8.1 Mbps downlink from the five DMC  
> satellites and delivers the flow to a host in the groundstation on  
> the edge of the Internet. That's sensible engineering: congestion  
> is not a problem on links you own (and want to get the most from),  
> while congestion on the internetwork is avoided through deliberate  
> selection of endhost locations.
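The open-loop, rate-based pattern described above can be sketched as follows. This is a hypothetical Python illustration, not Saratoga's actual wire format; the link rate and packet size are example parameters, and `send` stands in for something like `sock.sendto`:

```python
import time

LINK_RATE_BPS = 8_100_000      # example: an 8.1 Mbps downlink, fixed at mission design
PACKET_SIZE = 1024             # assumed payload bytes per UDP datagram

def pacing_interval(link_rate_bps: int, packet_size: int) -> float:
    """Seconds to wait between datagrams so the link is filled but not exceeded."""
    return (packet_size * 8) / link_rate_bps

def send_file(data: bytes, send) -> None:
    """Blast the data at the fixed link rate; reliability comes later via NACKs.

    `send(seq, chunk)` is a caller-supplied callback (e.g. wrapping sock.sendto).
    There are no ACKs, no backoff, and no probing -- the rate is set by the
    link the mission owns, not by congestion feedback.
    """
    interval = pacing_interval(LINK_RATE_BPS, PACKET_SIZE)
    for seq, off in enumerate(range(0, len(data), PACKET_SIZE)):
        send(seq, data[off:off + PACKET_SIZE])
        time.sleep(interval)   # open-loop pacing between datagrams
```

The design point is that the sender never slows down on loss: holes are repaired afterwards from a NACK list rather than by throttling the blast.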
>
> (Described in our 'Using Internet nodes and routers onboard  
> satellites' paper at:
> ftp://ftp-eng.cisco.com/lwood/cleo/README.html
> which we recently updated thanks to another six months of in-orbit  
> use. I'd like to give another example as well, but the DMC  
> satellites are the only ones I know that generate large amounts of  
> data and move it at high rates using IP.)
>
> Your point 2) -- real-time data with loss, reliable data with no  
> loss -- we've done both of these using UDP streams. I'd want to  
> carefully word the discussion to avoid the 'it has to be reliable  
> so we must use TCP or SCTP' mindset.
>
> There's nothing stopping you from filling your own links or your  
> own network; given sensible engineering choices, it's only where  
> you're peering traffic across the terrestrial Internet and crossing  
> different use models that congestion comes into play on the path.
>
> It's likely that any use of shared space links will be based on a  
> coarse-grained scheduling model where payloads get given  
> coordinated access to the links for efficiency reasons -- still  
> based on IP, but the congestion models and contention assumptions  
> of the terrestrial Internet won't be immediately applicable. And  
> the main reason they're not applicable is that, for a long time to  
> come, agencies won't be ISPs with competing users contending in  
> real-time for shared link capacity; any contention will be for  
> slots in a coarse scheduling model, likely well in advance.
>
> You can run IP across a single link too. If you're running IP from  
> a payload across a single point-to-point link to an endhost on the  
> other end of that link (and you own the lot), internetwork  
> congestion is not your problem.
>
> When you talk about 'affecting the network' you need to state what  
> that network *is*. You're really worried about affecting the  
> Internet as a whole, right? I think we need to state that the  
> Internet is both protocols and conventional assumptions of use  
> (congestion, peering sharing) and that we can reuse many of the  
> protocols without also having to reuse the assumptions. We have our  
> own use models. TCP is built around a number of assumptions that  
> don't fit our use (congestion, backoff, slow start presuming a  
> shared path), so we avoid those limiting assumptions by building  
> upon UDP for the space-based Internet.
>
> L.
>
> UDP forever.
>
> -----Original Message-----
> From: sis-csi-bounces at mailman.ccsds.org on behalf of Scott, Keith L.
> Sent: Tue 2006-04-18 18:10
> To: Keith Hogie; sis-csi at mailman.ccsds.org
> Subject: RE: [Sis-csi] Green book thoughts
>
> It seems to me that the points of contention here center on whether or
> not an application using UDP might cause network congestion and hence
> lose packets, and whether/how building reliability on top of UDP is a
> Good Idea.  The arguments seem to oscillate between high-bandwidth
> downlinks where we want to use all of the available capacity and the
> assertion that UDP flows from space (there's a B-movie title in there
> somewhere...) simply can't congest the "network" because the space
> link bandwidth is too low.
>
> I would assert that for some (possibly many?) future missions,
> bandwidths will be such that pure rate-controlled streams coming from
> some space applications would have the ability to congest shared
> links (space and/or ground).  A single HDTV stream competing with
> other appreciable flows in the ground or space portions of the
> network could do this, for example.  I also don't think we can
> assert that streams will not cross some portion of a shared network,
> especially if there is inter-agency cross-support.  One must
> consider the possibility of commercial ground stations, and also the
> possibility of shared in-space crosslinks.
>
>
>
> That said, do we all agree (at least among ourselves -- the case will
> need to be made to external audiences) that:
>   1) Moving to IP provides a large benefit to missions in that:
>         o it decouples applications from the data links
>         o it facilitates multi-hop routing over heterogeneous data links
>         o it provides an efficient multiplexing mechanism for numerous
>           data types
>         o traffic can be directly routed from ground stations over
>           closed networks or the Internet to its destination(s) on the
>           ground with commercial network equipment
>
>   2) We don't really know how operators will want to use a networked
>      capability, except that they will probably want some mix of
>      real-time data that can take loss and reliable data that wants
>      no loss.  These are supportable in continuously connected
>      environments by TCP, UDP, and NORM (the latter two supporting
>      simplex environments, to some extent); and in disconnected
>      environments by overlays like CFDP and DTN.
>
>   3) Building application-specific reliability mechanisms on top of
>      UDP is an option, but *in general*, new applications should
>      first look to standard transport mechanisms (exact list TBD from
>      Red Books from this WG) to fulfill their needs.
>      Non-congestion-controlled flows that might cause significant
>      network congestion are discouraged, but not prohibited if
>      circumstances require their use and they can be designed to
>      'not-too-adversely' affect the network.  Note that
>      'not-too-adversely' here is an overall system design trade -- a
>      particular application might need to simply blast bits without
>      regard to the rest of the network.  Note also that the overlays
>      mentioned above may be part of the recommended set of standard
>      transports.
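As a concrete illustration of the application-level reliability that point 3) tolerates, the core of a NACK-based scheme is small: the receiver tracks holes and asks only for those. This is a hypothetical Python sketch, not a specific standardized encoding (CFDP's actual PDU formats differ):

```python
def missing_blocks(received_seqs, total_blocks):
    """Compute the NACK list: sequence numbers the receiver has not yet seen.

    In a NACK-based UDP transfer, the receiver periodically reports these
    holes and the sender retransmits only those blocks -- there are no
    per-packet ACKs, so a one-way or long-delay link costs nothing during
    the initial blast.
    """
    return sorted(set(range(total_blocks)) - set(received_seqs))

def apply_retransmissions(store, retransmitted):
    """Fill holes in the receiver's reassembly buffer from retransmitted blocks."""
    for seq, chunk in retransmitted:
        store.setdefault(seq, chunk)   # keep the first copy if a duplicate arrives
    return store
```

The two endpoints are the only parties that need to agree on this exchange, which is the layering point made elsewhere in the thread.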
>
>
>         --keith
>
>
>
>
>
>
> >-----Original Message-----
> >From: sis-csi-bounces at mailman.ccsds.org
> >[mailto:sis-csi-bounces at mailman.ccsds.org] On Behalf Of Keith Hogie
> >Sent: Tuesday, April 18, 2006 2:33 AM
> >To: sis-csi at mailman.ccsds.org
> >Subject: Re: [Sis-csi] Green book thoughts
> >
> >More thoughts on Keith Scott's and Scott Burleigh's comments
> >inline below.
> >
> >Scott Burleigh wrote:
> >> Scott, Keith L. wrote:
> >>
> >>>Keith,
> >>>
> >>>I've integrated most of this into the document, but I have a
> >few issues
> >>>inline.
> >>>
> >>>
> >>>>-----Original Message-----
> >>>>From: sis-csi-bounces at mailman.ccsds.org
> >>>>[mailto:sis-csi-bounces at mailman.ccsds.org] On Behalf Of Keith Hogie
> >>>>Sent: Thursday, March 30, 2006 12:48 PM
> >>>>To: sis-csi at mailman.ccsds.org
> >>>>Subject: [Sis-csi] Green book thoughts
> >>>>
> >>>>All,
> >>>>
> >>>>  Sorry for the late input but here are a few thoughts
> >>>>and inputs for this afternoon's telecon.  I couldn't see
> >>>>shipping the whole 6 MB around to convey
> >>>>this little bit of info.
> >>>>
> >>>>  These are based on Draft_15.doc
> >>>>
> >>>>2.2.2  Telepresence/Telescience
> >>>>
> >>>>....  At Cislunar (and Cismartian) distances there will be
> >>>>significant degradation of such capabilities (telepresence) ...
> >>>>
> >>>>  This is true at the outer limits of these environments, but
> >>>>Cislunar also includes everything from low-earth orbit out to
> >>>>beyond the moon.
> >>>>I think we often focus on Cislunar being only lunar missions and
> >>>>forget about all of the other missions like all of the current
> >>>>"Cislunar satellites" already in orbit around the Earth.
> >>>>
> >>>>  Section 2.1 just got done mentioning that Cislunar covers
> >>>>everything from LEO to L2.  Somehow we need to be careful about
> >>>>making broad statements about Cislunar.  Do we need to break
> >>>>down Cislunar into short, medium, and long, or something like that?
> >>>>
> >>>>
> >>>I'm trying to work this in conjunction with your comment on
> >>>scenarios below.
> >>>
> >>>
> >> I'll insert a brief plea for conservation of language here.
> >> "cislunar" means "lying between the earth and the moon or the moon's
> >> orbit" (Webster's 9th New Collegiate Dictionary, 1991).  If L2 is not
> >> in cislunar space, but we specifically want this architecture to
> >> extend to L2, then we should adopt a different name.
> >>
> >> Alternatively, I think it would be reasonable to say that this
> >> architecture is being *designed* for communications in cislunar
> >> space but that in practice it might also be usable in other
> >> environments: cis-L2 space, for example, or the space between Mars
> >> and its moons.
> >
> >True, L2 is beyond Cislunar.  The main thought was to not
> >sound like only Lunar but make it speak to a wider range of missions
> >including LEO out to L2.  There are still lots of missions coming
> >up in those domains that have nothing to do with Exploration
> >Initiative.  Also this is a CCSDS document so it needs to address
> >non-NASA missions in all environments.
> >
> >>
> >>>>4.4 Automated Data Forwarding
> >>>>
> >>>>  Not sure if the SLE reference really fits here.  That is a
> >>>>reference to the old circuit switch concept and not routing
> >>>>based on addresses in packets.  SLE doesn't do IP packet
> >>>>forwarding and doesn't apply in an end-to-end IP architecture,
> >>>>does it?
> >>>>
> >>>>
> >>>I think the data forwarding part of SLE here (as opposed to the
> >>>service management part) is just a tunneling mechanism like GRE.
> >>>Its inclusion allows the space (data) link protocol to be
> >>>terminated somewhere else besides the ground station.  That's how
> >>>many agencies are set up to operate now, and they'll probably
> >>>continue with it until they can get equipment at the ground
> >>>stations to terminate the space data links AND something to provide
> >>>reliability, or at least comprehensive data accounting.
> >>>
> >>>
> >
> >SLE is not really like GRE.  GRE operates at the network layer
> >as part of the packet routing process and simply forwards packets
> >one-by-one.  SLE is a tunnel operating above the transport layer
> >and has TCP acknowledgments.  It also is related to the legacy
> >"circuit" concept where there is only one hop beyond the ground
> >station.  At the last SCAWG there was a discussion about being
> >able to forward data received with errors.  You can do that with
> >SLE but it only applies to the first hop from the ground station.
> >If packets are forwarded across multiple space links based on
> >IP addresses, you cannot forward errored data.  At some point we
> >need to decide if we are moving to packet forwarding or staying with
> >the circuit model.
> >
> >There are already many stations that forward IP packets directly
> >and things like "reliability" and "data accounting" are not a
> >problem.  The concept of "networking" in space means that some
> >of the legacy notions of moving data over a single link from
> >the control center to a spacecraft need to change.  A packet
> >forwarding model has some significant differences from the
> >current circuit model.
>
> Exactly.  We want to move to the state where the space link is
> terminated at the ground station and packets are forwarded over an IP
> network from there.  But we're not there yet, and SLE is part of the
> existing infrastructure.  Hopefully missions will see benefits from
> going to a more fully routed infrastructure and move away from SLE as
> time passes, but mentioning that it's supported under this architecture
> so as to provide a smooth transition path will help get us over the
> (probably inevitable) initial resistance to 'new stuff'.
>
> >> Yes.  I think the best way to think of SLE is simply as an
> >> extension of the space link, as the name suggests.  SLE doesn't do
> >> packet forwarding any more than AOS or TM does; the SLE engine at
> >> the ground station is a repeater, not a router.  It's underneath
> >> the end-to-end IP architecture in exactly the same way that AOS or
> >> TM or Prox-1 is underneath the end-to-end IP architecture, forming
> >> what is functionally a single point-to-point link between the
> >> spacecraft and the mission operations center where IP is terminated.
> >>
> >>>>Should this mention things like standard automated routing
> >>>>protocols such as RIP, OSPF, Mobile IP, MANET stuff, etc., along
> >>>>with static routing?
> >>>>
> >>>>4.4.1 Forwarding, Routing
> >>>>
> >>>>  This mentions that "This will allow mission operations
> >>>>personnel to maintain absolute control over the forwarding
> >>>>process..."  They don't really have that control today and
> >>>>won't have it in the future.  Today they may indicate that
> >>>>they want their commands sent to a particular antenna but
> >>>>they don't control the exact path from their control center,
> >>>>across all links, to the antenna.
> >>>>
> >>>>  I don't think we want to give them the impression that
> >>>>they can control all routers in NASA's operational networks.
> >>>>
> >>>>
> >>>Right, this was referring mainly to the space portion of the
> >>>network.
> >>>
> >>>
> >> I'm uneasy about this too.  It may be something we've got to say
> >> for political reasons in the near term, but if we've got mission
> >> operations personnel acting in the capacity of IP packet routers
> >> in the interplanetary network of the future then I think we are
> >> going to look pretty silly.  Not quite as bad as using avian
> >> carriers to convey the packets, but close.
> >>
> >>>>4.6 Transport layer
> >>>>
> >>>>  --- insert after first paragraph ---
> >>>>
> >>>>  UDP provides data delivery very similar to the standard
> >>>>TDM and CCSDS packet delivery systems that are currently
> >>>>used for space communication.  Its unreliable-delivery
> >>>>attribute means that it does not utilize any process,
> >>>>such as acknowledgments, for determining if the data
> >>>>gets to its destination.  Reliable delivery can be
> >>>>implemented over UDP in the application layer if desired
> >>>>with applications such as CFDP.
> >>>>
> >>>>
> >>>This is true, but I don't want to give mission software people the
> >>>impression that they can just hack up reliability on a
> >>>per-application basis.
> >>>
> >
> >But mission people can do whatever reliability they want on
> >a per-application basis.  That is one of the great features
> >of layers and the Internet.  The only people that need to agree
> >on an end-to-end reliable transfer protocol are the two end systems
> >(e.g. satellite and control center).  On the current Internet there
> >are all sorts of different reliable data delivery options that
> >co-exist.  They are both TCP and UDP based.
>
> Yes, however, if every mission spends all its time redesigning
> retransmission schemes, it will:
>
> 1) lose any benefit of standardization above IP (increased development
> time/cost)
> 2) possibly cause those flows to be penalized (classified as
> nonresponsive under RED, e.g.)
>
> 2) is very dangerous, IMHO, as it would cause the system to behave
> poorly, interact adversely with existing deployed Internet
> infrastructure, and cause the missions to simply blame 'that new
> networking stuff'.
>
> >> Absolutely.  That would be a serious blow to interoperability and
> >> low-cost mission software development, just as abandoning TCP in
> >> the terrestrial Internet would be.  A huge step backward.
> >>
> >
> >The big interoperability win comes from using IP everywhere and does
> >not require TCP at all.  The Internet has both TCP and UDP
> >traffic flowing over it all the time.  Things like voice and video
> >and my son's Game Boy use UDP for all sorts of streaming operations.
> >TCP is the wrong answer for many data flows.  We don't want to
> >abandon TCP but we also cannot mandate only TCP.  All current
> >space missions operate in a UDP-like mode and there are many good
> >reasons for that.
> >
>
> 75% of Internet flows are TCP-based; 95% of the bytes are.  I'm not
> advocating for using TCP if you don't need reliability; by all means
> use UDP.  But for applications that want 100% data return and may
> traverse a large shared network like the Internet, I think there
> needs to be a Really Good Reason (TM) before a mission ditches TCP
> and writes their own.  I would soften my position somewhat for
> missions with existing widely-deployed tools that do "CFDP-like"
> things and that coexist peacefully in the Internet at large.
>
> >
> >>>  CFDP is sort of a special case and I'm not sure how well even
> >>>it would play in a mixed environment (that is, I'm not sure if CFDP
> >>>implements TCP-friendly congestion control).
> >>>
> >> It does not.  There is no congestion control in the CFDP design.
> >>
> >
> >I don't see CFDP as a special case.  That is exactly the mode that
> >lots of missions want to use and will continue to use.  It works
> >very well for space communications across a dedicated RF link.  I
> >realize that UDP file transfer mechanisms have potential for
> >flooding a network and that CFDP does not specify any flow control.
> >But missions do have a very strong flow control in that they
> >have a fixed upper limit on their RF transmission rate.
> >
> >When a satellite turns on its power hungry transmitter,
> >it wants to be able to fill the RF link with data and does not
> >want to worry about flow control by any protocol.  Especially
> >any protocol that might require a two-way link.  The bandwidth
> >of the link has been set during mission design and the software
> >wants to allocate it among things like housekeeping data and
> >science data.  Satellites do not operate in a highly interactive
> >or whimsical nature like people surfing the Internet.  They
> >have much more highly planned and predictable data transfers.
> >
> >Also, as propagation delays get longer, it becomes even more
> >important to use UDP based protocols because you cannot get
> >any interactive flow control information.  Also with the current
> >high downlink rates and very low uplink rates UDP works much
> >better to allow missions to shove data down and not need to
> >have ACKs coming back up.  Things like TDRSS demand access are
> >another reason for using UDP.  DAS only gives you a one-way
> >link so you have to use something like CFDP over UDP.
>
> Wouldn't this lead to each mission simply blasting at the maximum
> downlink rate and losing a bunch of packets at some bottleneck along
> the path (especially if that path traverses the Internet)?  TDRSS
> demand access or other paths with simplex links will require UDP and
> (hopefully) some notion of buffering, rate-limiting, or otherwise
> protecting the data once it makes it to the ground and before it is
> released into a network where it may get lost.
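The "buffering, rate-limiting, or otherwise protecting the data" idea above is classically done with a token bucket at the point where the full-rate space blast meets the shared ground network. A minimal sketch, with assumed (not operational) parameters:

```python
class TokenBucket:
    """Rate-limit release of downlinked data into a shared ground network.

    Hypothetical sketch: the ground station buffers the full-rate blast
    from the space link and meters it out at a rate the terrestrial path
    can absorb, allowing short bursts up to `burst_bytes`.
    """

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s     # sustained release rate
        self.capacity = burst_bytes      # maximum burst size
        self.tokens = burst_bytes        # bucket starts full
        self.last = 0.0                  # time of the previous check

    def allow(self, size: int, now: float) -> bool:
        """Return True if `size` bytes may be released at time `now`."""
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False      # caller should keep the data buffered and retry later
```

Data that fails the check stays in the ground-station buffer rather than being dropped, which is the protection the paragraph asks for.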
>
> >For the next 10 to 15 years I see UDP as the primary means
> >of data delivery for space missions.  There are some areas
> >where TCP is OK but the majority of data will be delivered
> >using UDP based techniques.  We won't have any large mesh
> >network in space for quite awhile and issues of multiple
> >missions data merging over shared links will not be an
> >issue anytime soon.  Right now we need to get basic
> >IP packet routing available so missions can benefit from
> >using standard communication techniques with widely
> >available hardware and software support.
>
> The problem isn't the space network, it's once the flow hits the  
> ground
> network (Internet or closed multi-agency data distribution network).
>
> >>>Think about the conditions under which TCP underperforms a
> >>>UDP-based scheme -- the win seems to be in those cases where you
> >>>have enough loss to build an out-of-sequence queue at the receiver
> >>>and then can't fill it until the end of the communications
> >>>session.  In such cases a UDP-based application could get
> >>>immediate access to the 'out-of-sequence' data, but presumably
> >>>would still have the same set of holes.  Given the low (relative
> >>>to interplanetary) distances, power levels, and data rates people
> >>>talk about for crewed lunar exploration, is this really going to
> >>>be a problem?
> >>>be a problem?
> >>>
> >
> >Any data where timely delivery is more important than complete
> >delivery must use UDP.  This is things like voice, video, and
> >realtime telemetry.  For the last 30 years that I have been
> >processing satellite data, we have lived with hardly any
> >retransmission protocols.  You get the data that you get and you
> >deal with it.  Moving to files onboard is a major change, and then
> >doing reliable delivery is another change.  Using files onboard a
> >spacecraft and then some sort of UDP-based file transfer with NACKs
> >is the mode that missions relate to and will be using for many
> >years.  I think TCP-based operations will only have limited usage.
> >>>
> >>>>  One advantage of UDP is that since it does not require
> >>>>any acknowledgments, its operation is not affected by
> >>>>propagation delay and it can also function over one-way
> >>>>links.  UDP provides an alternative to TCP in scenarios
> >>>>where TCP performance suffers due to delays and errors.
> >>>>Since UDP doesn't require acknowledgments, it allows
> >>>>end-system application implementors to design their
> >>>>own retransmission schemes to meet their particular
> >>>>environment.
> >>>>
> >>>>
> >>>Again true, but I think this needs to be said very carefully, lest
> >>>everyone decide that it would be more fun to design their own
> >>>retransmission schemes than to work the application protocols.  The
> >>>hidden assumption here is that if TCP performance suffers due to
> >>>delays and errors, I can do a better job with my retransmission
> >>>protocol over UDP.  While there are alternate congestion control
> >>>schemes (e.g. Steven Low's FAST TCP, maybe some aspects of SCTP)
> >>>that we could end up recommending, I doubt most flight software
> >>>programs' ability to correctly design and implement something with
> >>>wide applicability.  That's not meant as a knock on the flight
> >>>software people -- the task of writing a new transport layer that
> >>>works well under the full range of conditions is huge -- much
> >>>bigger than a single mission -- and they've got N other things to
> >>>do that are mission-specific and higher priority.
> >>>
> >>>
> >> And a way better use of taxpayer money.
> >
> >Flow control is implemented by the limited RF bandwidth of
> >spacecraft and that is not going to change anytime soon.
> >What is the problem with one mission using CFDP, while others
> >use MDP, NORM, Digital Fountain, NFS, or any other file delivery
> >mechanism they want.  The "network" is there to forward packets
> >and support whatever the users want to do.
>
> The only issue is how these various protocols interact with each
> other
> and other traffic on the same network.  One of the main points of
> moving to an IP-based infrastructure is to allow a standard network
> service to applications and to provide efficient and flexible
> multiplexing of multiple traffic types onto (constrained) links.
>
> >>
> >>>>  UDP is also commonly used for data delivery
> >>>>scenarios where timeliness of delivery is more important
> >>>>than completeness.  Common uses include streaming data
> >>>>such as voice and video.  It is also used for multicast
> >>>>delivery where the same data is sent to multiple
> >>>>destinations and it is not desirable to have
> >>>>multiple acknowledgments returning from all the
> >>>>destinations.
> >>>>
> >>>>
> >> Sure, UDP is wholly appropriate where there's no need for
> >> fine-grained retransmission.
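The multicast pattern in the passage above -- one sender, many receivers, no returning acknowledgments -- looks like this with standard sockets. The group address and port are arbitrary examples, not mission assignments:

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007   # example multicast group and port

def open_receiver() -> socket.socket:
    """Join the multicast group; many receivers can do this independently."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Tell the kernel to join GROUP on any interface.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(2.0)
    return sock

def send_once(payload: bytes) -> None:
    """Send one datagram to the group; no ACKs come back from any receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # stay local
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)  # loop back for local receivers
    sock.sendto(payload, (GROUP, PORT))
    sock.close()
```

The sender's cost is identical whether one receiver or a hundred are listening, which is why the quoted draft text singles multicast out as a UDP use case.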
> >>
> >>>>4.8 Store-and-Forward for Disrupted Environments
> >>>>
> >>>>Seems to jump to DTN rather quickly.  We have survived
> >>>>for many years doing store-and-forward based on files
> >>>>or big chunks of data and that will still work fine for
> >>>>many future missions.  We don't need to tell them that
> >>>>they need to completely change what they are used to
> >>>>just because we are putting in IP.
> >>>>
> >>>>
> >>>I moved the DTN section to after TCP transport, which I think flows
> >>>pretty well.  I like your idea of tying the file-based
> >>>store-and-forward to current mission operations procedures.
> >>>
> >>>
> >>>>How about something like
> >>>>
> >>>>A critical component of the cislunar architecture for dealing
> >>>>with cases where there is not an end-to-end path between source
> >>>>and destination is the use of store-and-forward mechanisms.  The
> >>>>store-and-forward function can occur at either the packet level
> >>>>or file level.  Traditional store-and-forward space communication
> >>>>occurs at the file or mass-storage partition level, with files
> >>>>being stored at each hop and forwarded later.  Another option is
> >>>>to do store-and-forward at the packet level with approaches such
> >>>>as Delay/Disruption Tolerant Networking (DTN).
> >>>>
> >>>Mostly correct, except that DTN is in fact store-and-forward at the
> >>>file level.  One could think of DTN as the refactoring of CFDP's
> >>>multi-hop transport mechanisms (as distinct from its file system
> >>>manipulation capabilities) married to a general-purpose API.
> >>>
> >>>
> >> I completely agree with the second sentence, but not with the
> >> first.  DTN is store-and-forward at the file level if CFDP
> >> packages each file in a single bundle; I personally don't see that
> >> happening, though.  The understanding I've been working under is
> >> that scientists are perfectly happy with the out-of-order and
> >> incremental arrival of the individual records of files as in CFDP,
> >> so waiting for the entire file to arrive before delivering it
> >> would be a retreat from the CFDP design.  What's more, the
> >> scenarios for using multiple ground stations in parallel to
> >> transmit a single file rely on individual records being separately
> >> routable over multiple parallel reliable links (nominally
> >> LTP-enabled).  Both concepts amount to fine-grained transmission
> >> of file data, which I think means packaging each record (or PDU,
> >> with maybe some aggregation to optimize performance) in a single
> >> bundle.
> >
> >I think we need to have more discussions on relevant satellite
> >operations scenarios.  Scientists have always wanted some data
> >in realtime and then wait for processed data.  30 years ago it
> >would often take anywhere from 9 months to 2 years for scientists
> >to get their processed data.  Now it gets there quicker but there
> >are still delays.  A common mode is for a level-zero processing
> >system to gather realtime science data and playback data, merge
> >it all together in a timewise sequence to get the most complete
> >set of data, and then create data products consisting of 24 hours
> >of data from midnight to midnight.
> >
> >Some of the DTN scenarios I have seen don't seem to relate to
> >spacecraft data transfer needs.  Things like HTTP GET requests
> >and web page caching make lots of sense in a DoD environment but
> >I don't see them in satellite operations.  If we are going to mention
> >DTN, we need a much better description on what it is and how it
> >fits into unmanned, science satellite operations.  Web page
> >caching may have some application 15 years from now in a manned
> >environment but we have lots more unmanned missions coming up
> >now and they have much different data transfer needs.
> >
> >
> >>
> >> Which is why I've been pounding so hard on bandwidth efficiency
> >> in the bundle protocol.  I'm expecting DTN in space to *not* be
> >> all about multi-megabyte bundles where blowing 460 bytes on a
> >> header is a non-issue.  I'm expecting it to be about fairly small
> >> bundles, on the order of 64KB, which can't tolerate big headers.
> >>
> >> Scott
> >>
> >>
> >>
> >------------------------------------------------------------------------
> >>
> >> _______________________________________________
> >> Sis-CSI mailing list
> >> Sis-CSI at mailman.ccsds.org
> >> http://mailman.ccsds.org/cgi-bin/mailman/listinfo/sis-csi
> >
> >
> >--
> >----------------------------------------------------------------------
> >   Keith Hogie                   e-mail: Keith.Hogie at gsfc.nasa.gov
> >   Computer Sciences Corp.       office: 301-794-2999  fax: 301-794-9480
> >   7700 Hubble Dr.
> >   Lanham-Seabrook, MD 20706  USA        301-286-3203 @ NASA/Goddard
> >----------------------------------------------------------------------
> >
> >_______________________________________________
> >Sis-CSI mailing list
> >Sis-CSI at mailman.ccsds.org
> >http://mailman.ccsds.org/cgi-bin/mailman/listinfo/sis-csi
> >
>
> _______________________________________________
> Sis-CSI mailing list
> Sis-CSI at mailman.ccsds.org
> http://mailman.ccsds.org/cgi-bin/mailman/listinfo/sis-csi
>
>
>

________________________________________________________

Peter Shames
CCSDS System Engineering Area Director

Jet Propulsion Laboratory, MS 301-265
California Institute of Technology
Pasadena, CA 91109 USA

Telephone: +1 818 354-5740,  Fax: +1 818 393-1333

Internet:  Peter.Shames at jpl.nasa.gov
________________________________________________________

We must recognize the strong and undeniable influence that our  
language exerts on our ways of thinking and, in fact, delimits the  
abstract space in which we can formulate - give form to - our thoughts.

							Niklaus Wirth



