[Sois-sig1] Rough summary of prior sois-sig1 discussions

Adrian J. Hooke adrian.j.hooke@jpl.nasa.gov
Fri, 22 Aug 2003 13:39:42 -0700



BEGIN SUMMARY OF PRIOR DISCUSSIONS:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
From: Joseph F. Smith [mailto:Joseph.F.Smith@jpl.nasa.gov]
Sent: 29 July 2003 23:34

For quite some time, Adrian Hooke has had a PowerPoint slide that
shows the different CCSDS protocols and how they inter-relate, sort
of like a protocol stack.

I've always liked this slide, but I've had some issues with the way
that Adrian showed the SOIF protocols that I couldn't quite put my
finger on.  The easiest way to address my issues was to modify his
slide.  Having done that, I am attaching the modified Hooke slide
for your comments.

You'll see the SOIF protocols off to the left.  Notice that I put
the Onboard Application Services above that, and the Time Constrained
Application Services above everything.

So, how does this look?  If this isn't quite right, that's OK.  The
reason to send this out is to get comments, so that it's a better
product.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At 12:58 PM +0100 8/14/03, Steve Parkes wrote:
Joe,

What happened to the Time Critical Network Services from SOIF in your
diagram?

Please do not assume that these will be straight TCP/IP. There is a growing
concern about using TCP/IP as an onboard transport/network protocol due to
the overhead and recovery mechanisms.  Since the last SOIF meeting, the
Time Critical Network Services working group (GSFC and UoD) has been
working towards a protocol set for onboard networks which can carry TCP/IP
and other protocols but which has significantly less overhead.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Thu, 14 Aug 2003 08:19:39 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
>At 04:58 AM 8/14/2003, Steve Parkes wrote:
>Please do not assume that these will be straight TCP/IP. There is a 
>growing concern about using TCP/IP as an onboard transport/network 
>protocol due to the overhead and recovery mechanisms.  Since the last 
>SOIF meeting, the Time Critical Network Services working group (GSFC and 
>UoD) has been working towards a protocol set for onboard networks which 
>can carry TCP/IP and other protocols but which has significantly less 
>overhead.

I'm having trouble parsing your note. Is the problem the overhead of IP, or 
the recovery mechanisms and overhead of TCP, or all of the above, or 
something else? Somehow, encapsulating "TCP/IP and other protocols" in yet 
another protocol set does not seem to be a way to get "significantly less 
overhead".

It does seem to me that this is a classic case of why we need cross-Area 
coordination. I've added Dai and Durst to this reply for their take on the 
issues.
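
As a rough illustration of the overhead question (a back-of-the-envelope
sketch for this summary; all header and payload sizes below are nominal
assumptions, not figures from the thread):

    PAYLOAD = 64          # bytes of application data in a small onboard frame
    TCP_IP = 20 + 20      # minimal IPv4 header + minimal TCP header
    ENCAP = 8             # hypothetical extra encapsulation header
    COMPRESSED = 8        # nominal compressed network header (an NP-style goal)

    for label, hdr in [("TCP/IP", TCP_IP),
                       ("TCP/IP + encapsulation", TCP_IP + ENCAP),
                       ("compressed header", COMPRESSED)]:
        overhead = hdr / (hdr + PAYLOAD)
        print(f"{label:22s}: {hdr:2d} header bytes -> {overhead:.0%} of the frame")

On these assumptions, wrapping full TCP/IP in another header raises the
overhead from 38% to 43% of the frame; only removing or compressing the
inner headers brings it down (to about 11% here), which is the crux of the
question above.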

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

>At 07:55 AM 08/15/03, Joseph F. Smith wrote:
>>  I thought that the TCP/IP and SCPS protocols were over on the right 
>> side of the diagram,

Guys, could we please try to get the terminology right? To summarize, the 
"IP suite" maps as follows:

Applications: Lots
Transport:    TCP/UDP
Security:     IPSec
Network:      IP

The current SCPS profile of the IP suite is:

Applications: FTP  (the SCPS-FP *is* FTP)
Transport:    TCP/UDP (the SCPS-TP *is* TCP)
Security:     IPSec or SCPS-SP (the SCPS-SP maps to IPSec)
Network:      IP or SCPS-NP (the SCPS-NP maps to IP)

So could someone please clarify what is meant by "TCP/IP and SCPS 
protocols", because to me they are one and the same thing? In reality, 
isn't the only discussion that's germane to onboard networking the 
discussion about whether to use IP or its compressed version (NP) in a 
constrained environment?

///adrian

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Fri, 15 Aug 2003 12:54:14 -0400
From: "Richard G. Schnurr" <Rick.Schnurr@nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.2.20030815080316.025c3830@mail1.jpl.nasa.gov>

I agree with your statement of the layers as far as it goes.  All of what 
Steve and I are talking about is below the network layer.  In general, many 
link layers are considered sub-networks by some of us.  For me this is 
based on years of using Ethernet, PCI, USB, 1553 and other bus sub-networks 
to solve real-world problems.  Often the "quality of service" issues and 
fault tolerance issues must be solved at this level.  The fact that IP can 
ride over any of these links or sub-networks is the beauty of agreeing on 
using IP in one of its forms as a unifying force.  I completely agree that 
the SCPS suite is equivalent and can be used with appropriate gateway 
services and may provide benefits.  However, the onboard links/sub-networks 
need to meet "flight requirements".

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Mon, 18 Aug 2003 10:19:26 -0700
From: Peter Shames <peter.shames@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.2.20030815080316.025c3830@mail1.jpl.nasa.gov>

There's a real danger here, since I find myself agreeing with Adrian twice 
in the same week.  As he correctly points out, there is a direct 
relationship and/or mapping between TCP/IP (the standard Internet Protocol 
Suite, or IPS) and SCPS.  The main difference, for on-board purposes, is 
the ability to use compressed addresses, for a significant savings in 
overhead.  The main difference for link purposes is that SCPS comes tuned 
for space use with some valuable extensions beyond what basic IPS provides.

There does seem to be another thing going on here, and that is an attempt 
to build some Spacewire-specific network and transport services.  At least 
that is how I read the comments from Steve Parkes and Rick Schnurr.  There 
are, of course, a few issues here that need to be addressed.  My summary is:

- how does this relate to any of the existing standardized approaches (e.g. 
IPS or SCPS)?
- what advantages, if any, does it offer in terms of performance, 
reliability, overhead, implementation cost?
- how do you achieve interoperability across any data link that is not 
Spacewire?
- why aren't you being clear about this being new network layer and 
transport layer functionality instead of calling it parts of the Spacewire 
data link protocol?
- how do you justify doing this instead of evaluating what already exists 
for suitability?

In Rick's note he said the following:
Honestly, I have not compared what we came up with to SCPS, but in some 
ways SCPS and this transport layer are at different levels of the protocol 
stack.  In reality this exchange shows that we still cannot come to grips 
with link layers providing transport services.  Good examples include 
1394/USB.

It's not clear to me why anyone would go off and invent something new 
without looking at what has been done, and used successfully in space, 
first.  And this issue of SCPS (which includes transport) and this new 
"transport" layer being at different layers is just bogus.  The term 
"transport" means something in ISO, as does the term "network".  It is not 
useful to play fast and loose with these terms as it just confuses 
everyone.  Link layers do not provide transport services.  Specs like 1394 
and USB are not just link layer specs.  Instead, these specs include 
transport and application layer functionality like "plug and play".  These 
are not link layer functions and it is specious to describe them as such.

In the end we are either about trying to define the SOIF architecture, with 
its support for multiple protocols, busses, components, and applications or 
we are working as a Spacewire industry group.  We are very interested in 
the former and will work to support standards that accomplish this.  We 
will make use of Spacewire and other busses as they fit various mission 
goals, but have commitments to other bus standards as well.  Developing 
specs that only work with one bus is antithetical to what I understood we 
were all trying to accomplish.

Regards, Peter Shames

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Mon, 18 Aug 2003 11:01:56 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <p0521066dbb66b623d291@[128.149.8.95]>

At 10:19 AM 8/18/2003, Peter Shames wrote:
>It's not clear to me why anyone would go off and invent something new 
>without looking at what has been done, and used successfully in space, 
>first.  And this issue of SCPS (which includes transport) and this new 
>"transport" layer being at different layers is just bogus.  The term 
>"transport" means something in ISO, as does the term "network".  It is not 
>useful to play fast and loose with these terms as it just confuses 
>everyone.  Link layers do not provide transport services.  Specs like 1394 
>and USB are not just link layer specs.  Instead, these specs include 
>transport and application layer functionality like "plug and play".  These 
>are not link layer functions and it is specious to describe them as such.

I too am trying to parse the [apparently loose] use of "Transport" and 
"Network". What I thought was going on was that:

a) Onboard buses and LANs are at the Link layer.

b) Onboard Applications may run over general purpose onboard Transport and 
Network Services, but in time constrained circumstances they may [for 
reasons that are both unclear and possibly undocumented] want to bypass 
these general purpose services and run directly over the bus/LAN Link layer.

c) So what SOIS is trying to do is to write a universal convergence layer 
("Kevin") that can allow time constrained Applications to interface 
directly with a variety of underlying buses/LANs and yet still get robust 
reliability and routing services.

d) So the top side of "Kevin" is trying to provide very thin and robust 
special-purpose Transport and Network services that sit directly below the 
time constrained Application.

e) The bottom side of "Kevin" contains the bus/LAN-specific drivers that 
allow it to run over various technologies.

If a) through e) are correct, then Kevin would in fact have Transport and 
Network capability, and it would be designed to run over more than just 
Spacewire. But what's missing here is an understanding of why general 
purpose "TCP-UDP/IP" reliability and routing won't work, and why special 
purpose capabilities must be designed. And in particular, if someone thinks 
that the Applications are running native TCP/IP and that this *then* gets 
encapsulated in "Kevin" to achieve performance, then someone needs to think 
about performance some more.
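
As a concrete (and purely illustrative) reading of a) through e), here is a
minimal sketch of what such a "Kevin" convergence layer could look like;
every name and signature is hypothetical, since the thread defines no API:

    from abc import ABC, abstractmethod

    class LinkDriver(ABC):
        """Bottom side of "Kevin": one driver per bus/LAN technology, per e)."""
        @abstractmethod
        def send_frame(self, dest: int, frame: bytes) -> None: ...
        @abstractmethod
        def recv_frame(self, timeout_s: float) -> bytes: ...

    class SpaceWireDriver(LinkDriver):
        """Stub driver; a real one would talk to SpaceWire hardware."""
        def send_frame(self, dest: int, frame: bytes) -> None:
            pass
        def recv_frame(self, timeout_s: float) -> bytes:
            return b""

    class Kevin:
        """Top side: thin Transport/Network services for time-constrained
        Applications, per c) and d)."""
        def __init__(self, driver: LinkDriver):
            self.driver = driver

        def send_reliable(self, dest: int, data: bytes, retries: int = 2) -> bool:
            """Bounded-retry delivery: reliability without full TCP machinery."""
            for _ in range(1 + retries):
                self.driver.send_frame(dest, data)
                if self.driver.recv_frame(timeout_s=0.01):  # wait for an ack frame
                    return True
            return False  # prompt failure notification to the application

Swapping SpaceWireDriver for a 1553 or Ethernet driver would leave the top
side unchanged, which is the portability claim in c).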

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

At 05:48 PM 8/18/2003, David Stanton wrote:
It would seem to me that the important thing to recognise is that the 
network and transport layers should be defined by their architectural 
significance rather than the functionality of typical protocols which 
inhabit these layers. Network layer provides homogeneous communications 
capability over a heterogeneous collection of subnets and, to accomplish 
this, exists in all data systems which are at subnet boundaries (it also 
needs to provide global means of, for instance, addressing routing and 
relaying which leads to the common functionality of typical network layers. 
The transport layer provides the communication service at the end systems 
and different functionalities are possible depending on whether reliability 
is required, the only common functioanlity being a multipexing one.

If the functionalities required by SOIS are those of relaying (and maybe 
routing) through switching nodes in a subnetwork and providing reliability 
within a subnetwork, these functionalities can be embedded in the subnet 
and should not be termed network or transport layer functionality. 
Respective examples are Bluetooth's piconet interconnectivity and our very 
own COP-1 and COP-P.

If, however, the onboard systems have requirements at network layer and at 
transport layer which cannot be met by the existing globally adopted 
TCP/UDP/IP protocols then we have a different story. However, I'd be 
astonished if this was the case given the (in comms terms) benign nature of 
the onboard networking environment and its similarity to the terrestrial 
environment.

I think I've just agreed with both Adrian and Peter. It's getting scary in 
here.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Mon, 18 Aug 2003 19:34:22 -0400
From: "Richard G. Schnurr" <Rick.Schnurr@nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <002601c365d2$7daec890$0a01a8c0@keltik>

Sorry to respond piecemeal.  I agree with most of what you say.  I will 
disagree with one point.  The onboard network that runs the spacecraft is 
not particularly like most ground networks.  The spacecraft network has 
significant amounts of synchronous/scheduled traffic, mixed with scheduled 
periods of asynchronous data.  The reliability of the synchronous data is 
usually assured using some form of bounded retries with timely 
notification.  How one schedules/allocates the network is usually a 
significant problem: MIL-STD-1553, IEEE-1394, and USB all provide services 
to support such transfers, but they are not uniform.  Mapping to our 
application is rarely unique.  If we have no agreed-to mappings, wire-level 
compatibility, one of our goals within SOIS, cannot be achieved.  As stated 
in the previous e-mail, SpaceWire is not encumbered by a pre-existing 
method of handling this, so "we" are inventing one.  I have no real desire 
to re-invent the wheel, so help us not do that.
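
A minimal sketch of "bounded retries with timely notification" as described
here (illustrative only; the callables and timing values are assumptions,
not an agreed SOIS mechanism):

    import time

    def send_with_deadline(send, await_ack, data: bytes,
                           deadline_s: float, retry_interval_s: float) -> bool:
        """Retry only while the delivery deadline allows, then report promptly.
        `send` transmits one frame; `await_ack` blocks up to its timeout and
        returns True if an acknowledgement arrived."""
        start = time.monotonic()
        while time.monotonic() - start < deadline_s:
            send(data)
            if await_ack(timeout_s=retry_interval_s):
                return True   # delivered within the time budget
        return False          # timely notification: the deadline expired

In a 1553-style schedule, deadline_s would correspond to the slot allocated
to this transfer in the minor frame.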

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Tue, 19 Aug 2003 01:45:03 +0100
From: David Stanton <dstanton@keltik.co.uk>
Subject: Re: CCSDS Protocol Diagram

I think we're probably in agreement. Your solution seems to be SpaceWire 
oriented and there is an obvious analogy with terrestrial high-rate subnets 
such as ATM. The subnet serves both the packet (i.e. IP) traffic, which can 
support both anisochronous and isochronous traffic (though the latter only 
with RTP and a comfortable bandwidth margin), and circuit-switched (with 
guaranteed isochronous capability) traffic. In the latter case we're not in 
the packet-handling (with vocabulary defined by ISO 7498) universe, and so 
the application plugs straight into the subnet, with no need for network or 
transport references. Your case of a halfway house, where the ephemeral 
nature of the data means that there is a time limit on the number of 
retries, is interesting. I'm trying hard to think of a similar terrestrial 
analogy and I believe that looking at UDP/RTP might be appropriate (or even 
TCP/HTTP/404 :-))

Interestingly, Gorry Fairhurst (Aberdeen?) has advocated limited retries in 
the space link subnet as a supplement to end-to-end TCP for some time. 
Durst does not agree.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Tue, 19 Aug 2003 08:28:00 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.0.20030818191917.01ca70b8@pop500.gsfc.nasa.gov>

At 04:34 PM 8/18/2003, Richard G. Schnurr wrote:
>.... I agree with most of what you say.  I will disagree with one 
>point.  The onboard network that runs the Spacecraft is not particularly 
>like most ground networks.  The spacecraft network has significant amounts 
>of synchronous/scheduled traffic, mixed with scheduled periods of 
>asynchronous data.

That's an interesting thought. A recognition that there is a difference in 
networking technology between spacecraft and ground networks in the 
dimensions that you describe would appear to lead to the conclusion that 
the onboard networks are not purely IP-based. In fact, they seem to more 
closely resemble a process control environment, so perhaps the Neitzels of 
the world need to be brought into the "Kevin" discussion?

But doesn't that recognition also run counter to the "all-IP-all-the-time" 
crowd, who advocate changing the whole end-to-end system to match their 
religion, no matter what the cost?

///adrian

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Tue, 19 Aug 2003 20:26:53 +0200
From: Chris Plummer <c.plummer@skynet.be>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.2.20030819080254.02489e60@mail1.jpl.nasa.gov>

Yep, the "all-IP-all-the-time" paradigm hinges on the "no matter what the 
cost", and that cost is grossly underestimated for onboard systems. The 
fairly modest hard real-time requirements of the synchronous data transfers 
on the onboard bus could conceivably be met using an all-IP solution, but 
it would imply a prohibitive cost in terms of processing power and 
available bandwidth.

To a certain extent, the fieldbus community swings too far the other way by 
ripping out all of the middle layers and planting applications straight on 
top of the data link. The tricky path that we will have to tread in the 
SOIF area is to allow the user community to use whatever is appropriate for 
each case, i.e. IP etc. when it makes sense, e.g. for non time critical 
asynchronous transfers, and lean-and-mean fieldbus type protocols when we 
are dealing with the synchronous or time critical command and control.

One potential solution of course would be to use separate onboard buses for 
communication, and command and control functions. However, this is both a 
costly and limiting solution that doesn't scale well to different missions. 
Therefore, we are left with the problem of supporting this mixed traffic 
capability on a single bus. And I'm afraid that the "all-IP-all-the-time" 
advocates are going to have to listen to, and accept, the views of the 
onboard practitioners who have been dealing with this problem for many years.
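
One way to picture the mixed-traffic problem described above is a framed
schedule that reserves slots for synchronous command-and-control and fills
the rest with asynchronous (e.g. IP) traffic. A toy sketch, with an entirely
made-up slot layout:

    from collections import deque

    SLOTS_PER_FRAME = 10
    SYNC_SLOTS = {0, 5}   # reserved for time-critical traffic

    def run_frame(sync_queue: deque, async_queue: deque) -> list:
        """Build one frame's schedule: sync slots are honored first, and
        asynchronous traffic takes whatever is left."""
        schedule = []
        for slot in range(SLOTS_PER_FRAME):
            if slot in SYNC_SLOTS and sync_queue:
                schedule.append(("sync", sync_queue.popleft()))
            elif async_queue:
                schedule.append(("async", async_queue.popleft()))
            else:
                schedule.append(("idle", None))
        return schedule

    print(run_frame(deque(["att-cmd", "gyro-poll"]), deque(["file-chunk"] * 5)))

The synchronous traffic keeps its guarantees however much asynchronous load
is offered, which is exactly what a shared bus has to provide.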

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Tue, 19 Aug 2003 12:53:00 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <p0521068fbb6827886de0@[128.149.8.95]>

At 12:16 PM 8/19/2003, Peter Shames wrote:
>In fact, I would argue (and have argued) that this is exactly why we ought to be 
>coming up with the right lean and mean MTS solution that can run over all 
>sorts of underlying busses (or queues), including over IP if you happen to 
>have it.

Yes, but doesn't this one go ... ahh ... offboard? If so, why discuss it in 
SOIS?

>At 8:26 PM +0200 8/19/03, Chris Plummer wrote:
>>To a certain extent, the fieldbus community swings too far the other way 
>>by ripping out all of the middle layers and planting applications 
>>straight on top of the data link. The tricky path that we will have to 
>>tread in the SOIF area is to allow the user community to use whatever is 
>>appropriate for each case, i.e. IP etc. when it makes sense, e.g. for non 
>>time critical asynchronous transfers, and lean-and-mean fieldbus type 
>>protocols when we are dealing with the synchronous or time critical 
>>command and control.

So how does "the right lean and mean MTS" differ from a "lean-and-mean 
fieldbus", when we are dealing with the synchronous or time critical 
command and control? [Which, after all, is what this thread is all about, 
isn't it?] And if it's an application planted straight on top of the data 
link, what plants it there? And how does it implement reliability? And why 
aren't we discussing this on a proper mailing list, so other people can 
share the joy?

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Wed, 20 Aug 2003 09:04:19 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <p0521066dbb66b623d291@[128.149.8.95]>

At 10:19 AM 8/18/2003, Peter Shames wrote:
>In the end we are either about trying to define the SOIF architecture, 
>with its support for multiple protocols, busses, components, and 
>applications or we are working as a Spacewire industry group.  We are very 
>interested in the former and will work to support standards that 
>accomplish this.  We will make use of Spacewire and other busses as they 
>fit various mission goals, but have commitments to other bus standards as 
>well.  Developing specs that only work with one bus is antithetical to 
>what I understood we were all trying to accomplish.

If one bus turned out to be able to meet the majority of mission needs, and 
was widely supported by industrial suppliers, would you still take that 
position?

Has the SOIS team actually produced a document that compares the various 
candidates head-to-head in a nice clear format that anyone can understand? 
["Ah, I see now, for this set of requirements I would probably select x and 
y"]. While "support for multiple protocols, busses, components, and 
applications" *may* be the inevitable conclusion, are we sure that we are 
there yet? Isn't standardization a quest for convergence?

///adrian

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Wed, 20 Aug 2003 08:08:46 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <002101c3667f$772810a0$0424fea9@Cotec8>

Since Rick said "forward this as you like", I'm sending this back to the 
rest of you for comment since there appear to be some differences at work 
here. Comments, people? I have one, which I will send separately.

///adrian
+++++++++

>Date: Tue, 19 Aug 2003 17:31:17 -0400
>From: "Richard G. Schnurr" <Rick.Schnurr@nasa.gov>
>Subject: Re: CCSDS Protocol Diagram
>To: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
>Cc: Jane Marquart <Jane.Marquart@gsfc.nasa.gov>
>
>Hi Adrian,
>
>I think we have converged.  We do think we can use the IP/SCPS on the long 
>haul and many instrument applications.  We also think we can map certain 
>IP/SCPS connections to underlying links that have the appropriate 
>properties to allow the spacecraft to function using IP/SCPS (This has a 
>big test advantage for us since we create a custom box that performs this 
>function repeatedly).  We also recognize that many links exist and some 
>unifying standard must be used when one is going from one link to another.
>
>I agree with the model where IP data traverses an ATM or other packet-
>switched network.  One must have a preordained mapping of connections and 
>BW; otherwise someone who is not paying very much might use more of the 
>connection than he/she is allowed.  This same model can be applied to 
>spacecraft.  In general our concept for the onboard network would allow 
>IP/SCPS networks/connections to be pushed across the SpaceWire network 
>much as IP is pushed across ATM/Frame-Relay links.  Yes, other dedicated 
>traffic might co-exist on the link if needed.
>
>As far as the IP stuff goes, maybe others' noise has drowned out my 
>personal position, so let me restate.  I think all the protocols/standards 
>we come up with (above physical) should have a standard mapping to IP.  
>All of the protocol data units should be self-describing and all 
>management commands and parameters should be defined in terms of an XML 
>markup.  All gateway functions should be fully defined and automatic.
>
>An example: if I open a UDP/TCP stream on the ground to an address that 
>exists on a spacecraft at the end point of a SpaceWire network, the data 
>unit should be delivered and should be understood by that end point.
>
>Conversely, if the device at the end of a SpaceWire link sends a packet 
>intended for an address on another sub-net, that packet should reach its 
>destination.
>
>None of this should interfere with the critical real-time traffic, which 
>might or might not be IP/SCPS based.  I.e., the SpaceWire bus should 
>allocate BW and perform retries on alternate networks to ensure seamless 
>and timely delivery of time-critical data.
>
>If this requires standard "Kevins" for the different data links then so 
>be it.  For GPM we have no RT "Kevin" on the Ethernet.  Thus we cannot 
>mix time-critical and non-time-critical data on the Ethernet.  Our 
>solution: keep a 1553 bus for the time-critical stuff and do all the 
>asynchronous housekeeping and data traffic over the Ethernet.  Thus we 
>eliminate the problem by segmenting our data on two links.  The 1553 
>real-time link is not IP/SCPS, but it could be if the overhead were low 
>enough and fault-tolerance issues were addressed.  It's the segmentation 
>of the traffic/reliability classes which is of note (and is really the 
>hard part in my view).
>
>The current baseline for the SpaceWire Packet switched network assumes 
>that the connections are split.  If one is entirely within the SpaceWire 
>network one is free to use the channel number directly.  Otherwise this 
>channel must be associated with some higher level network/transport connection.
>
>Rick
>
>Forward this as you like.
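
Rick's "preordained mapping of connections and BW" is analogous to carrying
IP over ATM virtual circuits. A toy sketch of such a gateway table (the
addresses, ports, and channel numbers are invented for illustration):

    CHANNEL_TABLE = {
        ("10.0.0.7", 2001): 3,   # a ground UDP stream -> SpaceWire channel 3
        ("10.0.0.9", 2002): 4,
    }

    def gateway_forward(dst_ip: str, dst_port: int, payload: bytes):
        """Map an IP flow onto its preallocated SpaceWire channel; flows with
        no preordained channel are refused rather than allowed to steal
        bandwidth from paying traffic."""
        channel = CHANNEL_TABLE.get((dst_ip, dst_port))
        if channel is None:
            raise LookupError("no preordained channel for this flow")
        return (channel, payload)  # would be queued onto the SpaceWire link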

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Wed, 20 Aug 2003 08:13:17 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.0.20030819165022.01d379f8@pop500.gsfc.nasa.gov>

At 02:31 PM 8/19/2003, Richard G. Schnurr wrote:
>If this requires standard "Kevins" for the different data links then so 
>be it.  For GPM we have no RT "Kevin" on the Ethernet.  Thus we cannot 
>mix time-critical and non-time-critical data on the Ethernet.  Our 
>solution: keep a 1553 bus for the time-critical stuff and do all the 
>asynchronous housekeeping and data traffic over the Ethernet.  Thus we 
>eliminate the problem by segmenting our data on two links.

At 11:26 AM 8/19/2003, Chris Plummer wrote:
>One potential solution of course would be to use separate onboard buses 
>for communication, and command and control functions. However, this is 
>both a costly and limiting solution that doesn't scale well to different 
>missions. Therefore, we are left with the problem of supporting this mixed 
>traffic capability on a single bus.

Umm, there seems to be a strong difference of opinion here. Rick: the ESA 
folk seem to represent the "views of the onboard practitioners who have 
been dealing with this problem for many years". What would the GPM (Global 
Precipitation Mission) people have to say about the assertion that they may 
be adopting "a costly and limiting solution that doesn't scale well to 
different missions"?

///adrian

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Wed, 20 Aug 2003 18:33:46 -0400
From: "Richard G. Schnurr" <Rick.Schnurr@nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.2.20030820080323.024fb858@mail1.jpl.nasa.gov>

We actually agree with Chris Plummer's position, but as good little flight 
engineers we chose to walk before we run.  Our goal is to develop a single 
seamless network that can be used for command and control and for data.  
We think the SpaceWire network developed can support this.  As far as any 
particular mission flying a costly bus goes, everything is relative.  On 
GPM we were going to fly SpaceWire and 1553 for the spacecraft bus, so a 
single Ethernet supporting both was actually less expensive.

In any event we also wanted to avoid implementing something in this area 
for flight until we get agreement from the community.  I think you can 
imagine that GSFC is capable of implementing something that might become a 
local de facto standard.  We have no interest in doing so as it might not 
lead to the convergence we desire.

I agree with Adrian's point: I think we should write down our requirements 
to make sure that any proposal can be objectively measured against some 
criteria.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Wed, 20 Aug 2003 16:02:14 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <p052106adbb6945e390df@[128.149.8.95]>

At 09:20 AM 8/20/2003, Peter Shames wrote:
>(Rick)
>>>I think we have converged.  We do think we can use the IP/SCPS on the 
>>>long haul and many instrument applications.
>(Peter)
>We do not believe that you can use IP/SCPS on the "long haul" as we 
>understand that term.  SCPS can be used on the "short haul" out to Lunar 
>distances, if the right options are chosen.

I still have NO IDEA what Rick means by "IP/SCPS". If you can take the hit 
of the overhead, you can run IP-over-CCSDS anywhere you want. If you want 
to run any flavor of TCP (including the SCPS flavor; repeat after me 
"SCPS-TP *is* TCP; SCPS-TP *is* TCP") over IP then as you note things get 
really goopy after about Lunar distance. The point is that the IP suite is 
primarily useful in low delay environments with rock solid connectivity.
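
The distance point is easy to quantify (simple light-time arithmetic, with
round numbers for the distances):

    C_KM_S = 299_792   # speed of light, km/s
    for name, km in [("LEO ground link", 2_000),
                     ("Moon", 384_400),
                     ("Mars at closest approach", 54_600_000)]:
        rtt_s = 2 * km / C_KM_S
        print(f"{name:24s}: RTT ~ {rtt_s:8.2f} s")

A lunar round trip is already about 2.6 seconds, and Mars at its closest is
roughly six minutes, so any protocol that needs several round trips per
transaction, as TCP does, stops being workable well before deep space.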

>As discussions in this thread have demonstrated, there are many who do not 
>believe that IP is of any use, onboard or otherwise.  They would rather 
>run apps right down to the link layer,

Which is, as Dai pointed out, because all they want is point-to-point 
transfer across a single homogeneous link. If you want to transfer across 
multiple and possibly heterogeneous links then you need a networking 
protocol, which is what IP (and the SCPS-NP) is all about.

>(Rick)
>>>An example: if I open a UDP/TCP stream on the ground to an address that 
>>>exists on a spacecraft at the end point of a SpaceWire network the data 
>>>unit should be delivered and should be understood by that end point.
>(Peter)
>This is an assertion about end to end delivery of data between addressable 
>end points within an IP domain.  Where IP protocols work (Lunar distances 
>as less), and where network connectivity makes sense, this is a reasonable 
>assertion to make.

Again, let's be clear: it's not "IP protocols" that go toes-up beyond Lunar 
distances, it's the *chatty* protocols (like TCP) that sit on top of IP 
that get the vapors.

>There are a number of open questions to be discussed here, like:
>- is such end to end connectivity needed?

That continues to be the $64K question. If you ask an end user "Do you want 
to interface with the mission systems on the ground using the IP suite, and 
do you want your onboard instrument to interface with the spacecraft 
systems using the IP suite", you will probably get a "Yes". But translating 
that into a universal user demand to run the IP suite end-to-end really 
takes some mushrooms!

///adrian
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Tue, 19 Aug 2003 14:19:07 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <001201c36690$37e1bb10$0a01a8c0@keltik>

At 01:26 PM 8/19/2003, David Stanton wrote:
>Sounds like we need a joint SIS/SOIS session to thrash this stuff out. 
>Maybe the day before the CESG meeting in October?
--------------------------

Yes, if the problem space involves Applications and end-to-end services and 
has any significance beyond the onboard system, the solution space clearly 
involves SOIS, SIS and MOIMS and so a cross-Area meeting seems in order. 
Perhaps Peter could convene it under the SES umbrella? Right now:
- the MOIMS Area meetings are Oct 29 - Nov 3.
- the SES Area meetings are Oct 23 - 29.
- the SOIS Area meetings are Oct 27 - 29.
- the SIS Area does not plan to meet.

Sounds like Oct 26 would be a possibility, if Peter could yield some time? 
I think most of these meetings are clustered around the Goddard area.

However, before firmly scheduling a meeting I would like to see an archived 
cross-Area mailing list set up right away to cover this Special Interest 
Group, technical discussions initiated on that list (including a summary of 
what's been discussed already on this ad-hoc group) and a draft agenda 
developed which identifies and frames the issues that cannot be resolved 
electronically.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

END SUMMARY OF PRIOR DISCUSSIONS.
--=====================_629188375==_.ALT
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

<html>
BEGIN SUMMARY OF PRIOR DISCUSSIONS:<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
From: Joseph F. Smith
[<a href=3D"mailto:Joseph.F.Smith@jpl.nasa.gov"=
 eudora=3D"autourl">mailto:Joseph.F.Smith@jpl.nasa.gov</a>]<br>
Sent: 29 July 2003 23:34<br><br>
For quite some time, Adrian Hooke has had a Powerpoint slide that<br>
shows the different CCSDS protocols, and how they inter-relate,=20
sort<br>
of like a protocol stack.<br><br>
I've always liked this slide, but I've had some issues with the way<br>
that Adrian showed the SOIF protocols, that I couldn't quite put my<br>
finger on.&nbsp; The easiest way to address my issues, was to modify
his<br>
slide.&nbsp; Having done that, I am attaching the&nbsp; modified Hooke
slide<br>
for your comments.<br><br>
You'll see the SOIF protocols off to the left.&nbsp; Notice, that I
put<br>
the Onboard Application Services above that, and the Time
Constrained<br>
Applications Services above everything.<br><br>
So, how does this look.&nbsp; If this isn't quite right, that's OK.&nbsp;
The<br>
reason to send this out is to get comments, so that its a better<br>
product.<br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
At 12:58 PM +0100 8/14/03, Steve Parkes wrote:<br>
Joe,<br><br>
What happened to the Time Critical Network Services from SOIF in
your<br>
diagram?<br><br>
Please do not assume that these will be straight TCP/IP. There is a
growing<br>
concern about using TCP/IP as an onboard transport/network protocol due
to<br>
the overhead and recovery mechanisms.&nbsp; The Time Critical Network
Services<br>
working group (GSFC and UoD) following the last SOIF meeting have
been<br>
working towards a protocol set for onboard networks which is can
carry<br>
TCP/IP and other protocols but which has significantly less
overhead.<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
Date: Thu, 14 Aug 2003 08:19:39 -0700<br>
From: &quot;Adrian J. Hooke&quot;
&lt;Adrian.J.Hooke@jpl.nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
<blockquote type=3Dcite class=3Dcite cite>At 04:58 AM 8/14/2003, Steve Parke=
s
wrote:<br>
Please do not assume that these will be straight TCP/IP. There is a
growing concern about using TCP/IP as an onboard transport/network
protocol due to the overhead and recovery mechanisms.&nbsp; The Time
Critical Network Services working group (GSFC and UoD) following the last
SOIF meeting have been working towards a protocol set for onboard
networks which is can carry TCP/IP and other protocols but which has
significantly less overhead.</blockquote><br>
I'm having trouble parsing your note. Is the problem the overhead of IP,
or the recovery mechanisms and overhead of TCP, or all of the above, or
something else? Somehow, encapsulating &quot;TCP/IP and other
protocols&quot; in yet another protocol set does not seems to be a way to
get &quot;significantly less overhead&quot;.<br><br>
It does seem to me that here is a classic case of why we need cross-Area
coordination. I've added Dai and Durst to this reply for their take on
the issues.<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><br>
<blockquote type=3Dcite class=3Dcite cite>A<font color=3D"#0000FF">t 07:55 A=
M
08/15/03, Joseph F. Smith wrote:<br>
<blockquote type=3Dcite class=3Dcite cite>&nbsp;I thought that the TCP/IP an=
d
SCPS protocols were over on the right side of the
diagram,</font></blockquote></blockquote><br>
Guys, could we please try to get the terminology right? To summarize, the
&quot;IP suite&quot; maps as follows:<br><br>
Applications: Lots<br>
Transport:&nbsp;&nbsp;&nbsp; TCP/UDP <br>
Security:&nbsp;&nbsp;&nbsp;&nbsp; IPSec<br>
Network:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; IP<br><br>
The current SCPS profile of the IP suite is:<br><br>
Applications: FTP&nbsp; (the SCPS-FP *is* FTP)<br>
Transport:&nbsp;&nbsp;&nbsp; TCP/UDP (the SCPS-TP *is* TCP)<br>
Security:&nbsp;&nbsp;&nbsp;&nbsp; IPSec or SCPS-SP (the SCPS-SP maps to
IPSec)<br>
Network:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; IP or SCPS-NP (the SCPS-NP maps to
IP)<br><br>
So could someone please clarify what is meant by &quot;TCP/IP and SCPS
protocols&quot;, because to me they are one and the same thing? In
reality, isn't the only discussion that's germane to onboard networking
the discussion about whether to use IP or it's compressed version (NP) in
a constrained environment?<br><br>
///adrian<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><br>
Date: Fri, 15 Aug 2003 12:54:14 -0400<br>
From: &quot;Richard G. Schnurr&quot; &lt;Rick.Schnurr@nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to:
&lt;5.1.0.14.2.20030815080316.025c3830@mail1.jpl.nasa.gov&gt;<br><br>
I agree with your statement of the layers as far at it goes.&nbsp; All of
what Steve and I are talking about are below the network layer.&nbsp; In
general many link layers are considered sub-networks by some of us.&nbsp;
For me this is based in years of using Ethernet, PCI, USB, 1553 and other
bus sub networks to solve real world problems.&nbsp; Often the
&quot;quality of service issues&quot; and fault tolerance issues must be
solved at this level.&nbsp; The fact that IP can ride over any of these
links or sub-networks is the beauty of agreeing on using IP in one of its
forms as a unifying force.&nbsp; I completely agree that the SCPS suit is
equivalent and can be used with appropriate gateway services and may
provide benefits.&nbsp; However the onboard links/sub-networks need to
meet &quot;flight requirements&quot;.<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><br>
Date: Mon, 18 Aug 2003 10:19:26 -0700<br>
From: Peter Shames &lt;peter.shames@jpl.nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to:
&lt;5.1.0.14.2.20030815080316.025c3830@mail1.jpl.nasa.gov&gt;<br><br>
There's a real danger here, since I find myself agreeing with Adrian
twice in the same one week period.&nbsp; As he correctly points out,
there is a direct relationship and/or mapping between TCP/IP (standard
Internet Protocol Suite or IPS) and SCPS.&nbsp; The main difference, for
on-board purposes, is the ability to use compressed addresses, for a
significant savings in overhead.&nbsp; The main different for link
purposes is that SCPS comes tuned for space use with some valuable
extensions beyond what basic IPS provides.<br><br>
There does seem to be another thing going on here, and that is an attempt
to try and build some Spacewire specific network and transport
services.&nbsp; At least that is how I read the comments from Steve
Parkes and Rick Schnurr.&nbsp; There are, of course, a few issues here
that need to be addressed.&nbsp; My summary is;<br><br>
- how does this relate to any of the existing standardized approaches
(e.g. IPS or SCPS?<br>
- what advantages, if any, does it offer in terms of performance,
reliability, overhead, implementation cost?<br>
- how do you achieve interoperability across any data link that is not
Spacewire?<br>
- why aren't you being clear about this being new network layer and
transport layer functionality instead of calling it parts of the
Spacewire data link protocol?<br>
- how do you justify doing this instead of evaluating what already exists
for suitability?<br><br>
In Rick's note he said the following;<br>

<dl><font color=3D"#0000FF">
<dd>Honestly, I have not compared what we came up to with SCPS but in
some ways SCPS and this transport layer are at different levels of the
protocol stack.&nbsp; In reality this exchange shows that we still cannot
come to grips with link layers providing transport services.&nbsp;&nbsp;
Good examples include 1394/USB.<br><br>
</font>
</dl>It's not clear to me why anyone would go off and invent something
new without looking at what has been done, and used successfully in
space, first.&nbsp; And this issues of SCPS, which includes transport and
this new &quot;transport&quot; layer being at different layers is just
bogus.&nbsp; The term &quot;transport&quot; means something in ISO, as
does the term &quot;network&quot;.&nbsp; It is not useful to play fast
and loose with these terms as it just confuses everyone.&nbsp; Link
layers do not provide transport services.&nbsp; Specs like 1394 and USB
are not just link layer specs.&nbsp; Instead, these specs include
transport and application layer functionality like &quot;plug and
play&quot;.&nbsp; These are not link layer functions and it is specious
to describe them as such.<br><br>
In the end we are either about trying to define the SOIF architecture,
with its support for multiple protocols, busses, components, and
applications or we are working as a Spacewire industry group.&nbsp; We
are very interested in the former and will work to support standards that
accomplish this.&nbsp; We will make use of Spacewire and other busses as
they fit various mission goals, but have commitments to other bus
standards as well.&nbsp; Developing specs that only work with one bus are
antithetical to what i understood we were all trying to
accomplish.<br><br>
Regards, Peter Shames<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
Date: Mon, 18 Aug 2003 11:01:56 -0700<br>
From: &quot;Adrian J. Hooke&quot;
&lt;Adrian.J.Hooke@jpl.nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to: &lt;p0521066dbb66b623d291@[128.149.8.95]&gt;<br><br>
<font color=3D"#0000FF">At 10:19 AM 8/18/2003, Peter Shames wrote:<br>
</font><blockquote type=3Dcite class=3Dcite cite>It's not clear to me why
anyone would go off and invent something new without looking at what has
been done, and used successfully in space, first.&nbsp; And this issues
of SCPS, which includes transport and this new &quot;transport&quot;
layer being at different layers is just bogus.&nbsp; The term
&quot;transport&quot; means something in ISO, as does the term
&quot;network&quot;.&nbsp; It is not useful to play fast and loose with
these terms as it just confuses everyone.&nbsp; Link layers do not
provide transport services.&nbsp; Specs like 1394 and USB are not just
link layer specs.&nbsp; Instead, these specs include transport and
application layer functionality like &quot;plug and play&quot;.&nbsp;
These are not link layer functions and it is specious to describe them as
such.</blockquote><br>
I too am trying to parse the [apparently loose] use of
&quot;Transport&quot; and &quot;Network&quot;. What I thought was going
on was that:<br><br>
a) Onboard buses and LANs are at the Link layer.<br><br>
b) Onboard Applications may run over general purpose onboard Transport
and Network Services, but in time constrained circumstances they may [for
reasons that are both unclear and possibly undocumented] want to bypass
these general purpose services and run directly over the bus/LAN Link
layer.<br><br>
c) So what SOIS is trying to do is to write a universal convergence layer
(&quot;Kevin&quot;) that can allow time constrained Applications to
interface directly with a variety of underlying buses/LANs and yet still
get robust reliability and routing services.<br><br>
d) So the top side of &quot;Kevin&quot; is trying to provide very thin
and robust special-purpose Transport and Network services that sit
directly below the time constrained Application.<br><br>
e) The bottom-side of &quot;Kevin&quot; contains the bus/LAN-specific
drivers that allow it to run over various technologies.<br><br>
If a) through e) are correct, then Kevin would in fact have Transport and
Network capability, and it would be designed to run over more than just
Spacewire. But what's missing here is an understanding of why general
purpose &quot;TCP-UDP/IP&quot; reliability and routing won't work, and
why special purpose capabilities must be designed. And in particular, if
someone thinks that the Applications are running native TCP/IP and that
this *then* gets encapsulated in &quot;Kevin&quot; to achieve
performance, then someone needs to think about performance some
more.<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><br>
At 05:48 PM 8/18/2003, David Stanton wrote:<br>
<font face=3D"arial" size=3D2>It would seem to me that the important thing t=
o
recognise is that the network and transport layers should be defined by
their architectural significance rather than the functionality of typical
protocols which inhabit these layers. Network layer provides homogeneous
communications capability over a heterogeneous collection of subnets and,
to accomplish this, exists in all data systems which are at subnet
boundaries (it also needs to provide global means of, for instance,
addressing routing and relaying which leads to the common functionality
of typical network layers. The transport layer provides the communication
service at the end systems and different functionalities are possible
depending on whether reliability is required, the only common
functioanlity being a multipexing one.<br>
</font>&nbsp;<br>
<font face=3D"arial" size=3D2>If the functionalities required by SOIS are
those of relaying (and maybe routing) through switching nodes in a
subnetwork and providing reliability within a subnetwork, these
functionailities can be embedded in the subnet and should not be termed
network or transport layer functionailty. Respective examples are
Bluetooth's piconet interconnectivity and our very own COP-1 and COP-P
.<br>
</font>&nbsp;<br>
<font face=3D"arial" size=3D2>If, however, the onboard systems have
requirements at network layer and at transport layer which cannot be met
by the existing globally adopted TCP/UDP/IP protocols then we have a
different story. However, I'd be astonished if this was the case given
the (in comms terms) benign nature of the onboard networking environment
and its similarity to the terrestrial environment.<br>
</font>&nbsp;<br>
<font face=3D"arial" size=3D2>I think I've just agreed with both Adrian and
Peter. It's getting scary in here.<br><br>
</font>++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><=
br>
Date: Mon, 18 Aug 2003 19:34:22 -0400<br>
From: &quot;Richard G. Schnurr&quot; &lt;Rick.Schnurr@nasa.gov&gt;<br>
Subject: Re: CCSDS Protocol Diagram<br>
In-reply-to: &lt;002601c365d2$7daec890$0a01a8c0@keltik&gt;<br><br>
Sorry to respond in pies meal.&nbsp; I agree with most of what you
say.&nbsp; I will disagree with one point.&nbsp; The onboard network that
runs the Spacecraft is not particularly like most ground networks.&nbsp;
The spacecraft network has significant amounts of synchronous/scheduled
traffic, mixed with scheduled periods of asynchronous data.&nbsp;&nbsp;
The reliability of the synchronous data is usually assured using some
form of bounded retries with timely notification.&nbsp; How one
schedules/allocates the network is usually a significant problem:
Mil-STD-1553, IEEE-1394, and USB all provide services to support such
transfers but they are not uniform.&nbsp; Mapping to our application is
rarely unique.&nbsp; If we have no agreed to mappings wire level
compatibility cannot be achieved, one of our goals within SOIS.&nbsp; As
stated in the previous E-Mail the SpaceWire is not encumbered by a
pre-existing method of handling this so &quot;we&quot; are inventing
one.&nbsp; I have no real desire to re-invent the wheel so help us not do
that.<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><br>
Date: Tue, 19 Aug 2003 01:45:03 +0100<br>
From: David Stanton &lt;dstanton@keltik.co.uk&gt;<br>
Subject: Re: CCSDS Protocol Diagram<br>
&nbsp;<br>
<font face=3D"arial" size=3D2>I think we're probably in agreement. Your
solution seems to be spacewire oriented and there is an obvious analogy
with terrestrial high rate subnets such as ATM. The subnet serves both
the packet (i.e IP) traffic which can support both anisochronous and
isochronous traffic (though the latter only with RTP and a comfortable
bandwidth margin) and circuit switched (with guaranteed isochronous
capability) traffic. In the latter case we're not in the packet handling
(with vocabulary defined by ISO 7498) universe and so the application
plugs straight into the subnet, no need for network or transport
references. Your case of a halfway house where the ephemeris of data
means that there is a time limit on the the number of retries is
interesting. I'm trying hard to think of a similar terrestrial analogy
and I believe that looking at UDP/RTP might be appropriate (or even
TCP/HTTP/404 :-))<br>
</font>&nbsp;<br>
<font face=3D"arial" size=3D2>Interestingly, Gorrie Fairhust at York(?) has
advocated limited retries in the space link subnet as a supplement to
end-to-end TCP for some time. Durst does not agree.<br>
</font>&nbsp;<br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
Date: Tue, 19 Aug 2003 08:28:00 -0700<br>
From: &quot;Adrian J. Hooke&quot;
&lt;Adrian.J.Hooke@jpl.nasa.gov&gt;<br>
Subject: Re: CCSDS Protocol Diagram<br>
In-reply-to:
&lt;5.1.0.14.0.20030818191917.01ca70b8@pop500.gsfc.nasa.gov&gt;<br><br>
<font color=3D"#0000FF">At 04:34 PM 8/18/2003, Richard G. Schnurr
wrote:<br>
<blockquote type=3Dcite class=3Dcite cite>.... I agree with most of what you
say.&nbsp; I will disagree with one point.&nbsp; The onboard network that
runs the Spacecraft is not particularly like most ground networks.&nbsp;
The spacecraft network has significant amounts of synchronous/scheduled
traffic, mixed with scheduled periods of asynchronous data.&nbsp;
</font></blockquote><br>
That's an interesting thought. A recognition that there is a difference
in networking technology between spacecraft and ground network in the
dimensions that you describe would appear to lead to the conclusion that
the onboard networks are not purely IP-based. In fact, they seem to more
closely resemble a process control environment, so perhaps the Neitzels
of the world need to be brought into the &quot;Kevin&quot; discussion?
<br><br>
But doesn't that recognition also run counter to the
&quot;all-IP-all-the-time&quot; crowd, who advocate changing the whole
end-to-end system to match their religion, no matter what the
cost?<br><br>
///adrian<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
Date: Tue, 19 Aug 2003 20:26:53 +0200<br>
From: Chris Plummer &lt;c.plummer@skynet.be&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to:
&lt;5.1.0.14.2.20030819080254.02489e60@mail1.jpl.nasa.gov&gt;<br><br>
<font face=3D"arial" size=3D2 color=3D"#0000FF">Yep, the
&quot;all-IP-all-the-time&quot; paradigm hinges on the &quot;no matter
what the cost&quot;, and that cost is grossly underestimated for onboard
systems. To meet the fairly modest hard real-time requirements of the
synchronous data transfers on the onboard bus could conceivably be met
using an all IP solution, but it would imply a prohibitive cost in terms
of processing power and available bandwidth.<br>
</font>&nbsp;<br>
<font face=3D"arial" size=3D2 color=3D"#0000FF">To a certain extent, the
fieldbus community swings too far the other way by ripping out all of the
middle layers and planting applications straight on top of the data link.
The tricky path that we will have to tread in the SOIF area is to allow
the user community to use whatever is appropriate for each case, i.e. IP
etc. when it makes sense, e.g. for non time critical asynchronous
transfers, and lean-and-mean fieldbus type protocols when we are dealing
with the synchronous or time critical command and control.<br>
</font>&nbsp;<br>
<font face=3D"arial" size=3D2 color=3D"#0000FF">One potential solution of
course would be to use separate onboard buses for communication, and
command and control functions. However, this is both a costly and
limiting solution that doesn't scale well to different missions.
Therefore, we are left with the problem of supporting this mixed traffic
capability on a single bus. And I'm afraid that the
&quot;all-IP-all-the-time&quot; advocates are going to have to listen to,
and accept, the views of the onboard practitioners who have been dealing
with this problem for many years.<br>
</font>&nbsp;<br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
Date: Tue, 19 Aug 2003 12:53:00 -0700<br>
From: &quot;Adrian J. Hooke&quot;
&lt;Adrian.J.Hooke@jpl.nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to: &lt;p0521068fbb6827886de0@[128.149.8.95]&gt;<br><br>
<font color=3D"#000080">At 12:16 PM 8/19/2003, Peter Shames wrote:<br>
<blockquote type=3Dcite class=3Dcite cite>In fact, i would (and have) argue
that this is exactly why we ought to be coming up with the right lean and
mean MTS solution that can run over all sorts of underlying busses (or
queues), including over IP if you happen to have
it.</font></blockquote><br>
Yes, but doesn't this one go ... ahh ... offboard? If so, why discuss it
in SOIS?<br><br>
<blockquote type=3Dcite class=3Dcite cite><font color=3D"#0000FF">At 8:26 PM
+0200 8/19/03, Chris Plummer wrote:</font><br>
<blockquote type=3Dcite class=3Dcite cite><font face=3D"Arial, Helvetica"=
 size=3D2 color=3D"#0000FF">To
a certain extent, the fieldbus community swings too far the other way by
ripping out all of the middle layers and planting applications straight
on top of the data link. The tricky path that we will have to tread in
the SOIF area is to allow the user community to use whatever is
appropriate for each case, i.e. IP etc. when it makes sense, e.g. for non
time critical asynchronous transfers, and lean-and-mean fieldbus type
protocols when we are dealing with the synchronous or time critical
command and control.</font></blockquote></blockquote><br>
So how does &quot;the right lean and mean MTS&quot; differ from a
&quot;lean-and-mean fieldbus&quot;, when we are dealing with the
synchronous or time critical command and control? [Which, after all, is
what this&nbsp; thread is all about, isn't it?] And if it's an
application planted straight on top of the data link, what plants it
there? And how does it implement reliability? And why aren't we
discussing this on a proper mailing list, so other people can share the
joy? <br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
Date: Wed, 20 Aug 2003 09:04:19 -0700<br>
From: &quot;Adrian J. Hooke&quot;
&lt;Adrian.J.Hooke@jpl.nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to: &lt;p0521066dbb66b623d291@[128.149.8.95]&gt;<br><br>
<font color=3D"#0000FF">At 10:19 AM 8/18/2003, Peter Shames wrote:<br>
</font><blockquote type=3Dcite class=3Dcite cite>In the end we are either
about trying to define the SOIF architecture, with its support for
multiple protocols, busses, components, and applications or we are
working as a Spacewire industry group.&nbsp; We are very interested in
the former and will work to support standards that accomplish this.&nbsp;
We will make use of Spacewire and other busses as they fit various
mission goals, but have commitments to other bus standards as well.&nbsp;
Developing specs that only work with one bus are antithetical to what i
understood we were all trying to accomplish.</blockquote><br>
If one bus turned out to be able to meet the majority of mission needs,
and was widely supported by industrial suppliers, would you still take
that position?<br><br>
Has the SOIS team actually produced a document that compares the various
candidates head-to-head in a nice clear format that anyone can
understand? [&quot;Ah, I see now, for this set of requirements I would
probably select x and y&quot;]. While &quot;support for multiple
protocols, busses, components, and applications&quot; *may* be the
inevitable conclusion, are we sure that we are there yet? Isn't
standardization a quest for convergence?<br><br>
///adrian<br><br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><br>
Date: Wed, 20 Aug 2003 08:08:46 -0700<br>
From: &quot;Adrian J. Hooke&quot;
&lt;Adrian.J.Hooke@jpl.nasa.gov&gt;<br>
Subject: RE: CCSDS Protocol Diagram<br>
In-reply-to: &lt;002101c3667f$772810a0$0424fea9@Cotec8&gt;<br><br>
<font color=3D"#0000FF">Since Rick said &quot;forward this as you
like&quot;, I'm sending this back to the rest of you for comment since
there appear to be some differences at work here. Comments, people? I
have one, which I will send separately.<br><br>
///adrian<br>
+++++++++<br><br>
Date: Tue, 19 Aug 2003 17:31:17 -0400
From: "Richard G. Schnurr" <Rick.Schnurr@nasa.gov>
Subject: Re: CCSDS Protocol Diagram
To: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Cc: Jane Marquart <Jane.Marquart@gsfc.nasa.gov>

Hi Adrian,

I think we have converged. We do think we can use IP/SCPS on the long
haul and for many instrument applications. We also think we can map
certain IP/SCPS connections to underlying links that have the
appropriate properties to allow the spacecraft to function using
IP/SCPS (this has a big test advantage for us, since we create a custom
box that performs this function repeatedly). We also recognize that
many links exist, and some unifying standard must be used when one is
going from one link to another.

I agree with the model where IP data traverses an ATM or other
packet-switched network. One must have a preordained mapping of
connections and bandwidth; otherwise someone who is not paying very
much might use more of the connection than he/she is allowed. This same
model can be applied to spacecraft. In general, our concept for the
onboard network would allow IP/SCPS networks/connections to be pushed
across the SpaceWire network much as IP is pushed across
ATM/Frame-Relay links. Yes, other dedicated traffic might co-exist on
the link if needed.

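For concreteness, that "preordained mapping" amounts to a provisioning
table consulted per flow, much like an ATM PVC table. A minimal sketch
in C follows; every name and number in it is invented for illustration
and comes from no SOIS or SpaceWire document:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical provisioning table: each ground-configured IP/SCPS
     * flow is pinned to a SpaceWire channel with a fixed bandwidth cap,
     * so a user cannot consume more of the link than was allocated. */
    struct flow_map_entry {
        uint32_t ip_dest;         /* destination IP address of the flow */
        uint16_t port;            /* transport port identifying it      */
        uint8_t  spw_channel;     /* pre-assigned SpaceWire channel     */
        uint32_t max_bytes_per_s; /* enforced bandwidth allocation      */
    };

    static const struct flow_map_entry flow_table[] = {
        { 0x0A000105, 5001, 3,  50000 },  /* made-up housekeeping flow */
        { 0x0A000106, 5002, 4, 200000 },  /* made-up instrument flow   */
    };

    /* Return the pre-assigned channel, or -1 if the flow was never
     * provisioned (i.e. no preordained mapping exists for it). */
    int spw_channel_for(uint32_t ip_dest, uint16_t port)
    {
        for (size_t i = 0; i < sizeof flow_table / sizeof flow_table[0]; i++)
            if (flow_table[i].ip_dest == ip_dest &&
                flow_table[i].port == port)
                return flow_table[i].spw_channel;
        return -1;
    }
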
As far as the IP stuff goes, maybe others' noise has drowned out my
personal position, so let me restate it. I think all the
protocols/standards we come up with (above the physical layer) should
have a standard mapping to IP. All of the protocol data units should be
self-describing, all management commands and parameters should be
defined in terms of an XML markup, and all gateway functions should be
fully defined and automatic.

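One minimal way to make protocol data units "self-describing" in the
sense Rick asks for is a common type-length header ahead of every
payload. A sketch, with hypothetical field names:

    #include <stdint.h>

    /* Illustrative self-describing PDU header: a gateway that knows
     * only this header can still route, log or discard a PDU without
     * understanding the payload, which is what makes fully automatic
     * gateway functions plausible. */
    struct sois_pdu_header {
        uint16_t pdu_type; /* registered identifier naming the payload */
        uint16_t version;  /* format version of that payload type      */
        uint32_t length;   /* payload length in bytes                  */
    };

    /* The management side could then be XML, per Rick's suggestion,
     * e.g. (hypothetical markup): <set parameter="heater_1" value="on"/> */
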
An example: if I open a UDP/TCP stream on the ground to an address that
exists on a spacecraft at the end point of a SpaceWire network, the
data unit should be delivered and should be understood by that end
point.

Conversely, if the device at the end of a SpaceWire link sends a packet
intended for an address on another sub-net, that packet should reach
its destination.

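The force of the example and its converse is that the ground side stays
completely ordinary. A minimal sketch of the sender in C, with the
spacecraft address and port made up for illustration (the sender
neither knows nor cares that a SpaceWire network sits behind them):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open an ordinary UDP socket on the ground. */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        /* Hypothetical address of an end point on the spacecraft. */
        struct sockaddr_in sc = { 0 };
        sc.sin_family = AF_INET;
        sc.sin_port   = htons(5001);
        inet_pton(AF_INET, "10.0.1.5", &sc.sin_addr);

        /* The data unit that should arrive, and be understood. */
        const char cmd[] = "NOOP";
        sendto(s, cmd, sizeof cmd, 0, (struct sockaddr *)&sc, sizeof sc);
        close(s);
        return 0;
    }
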
None of this should interfere with the critical real-time traffic,
which might or might not be IP/SCPS based. I.e., the SpaceWire bus
should allocate bandwidth and perform retries on alternate networks to
ensure seamless and timely delivery of time critical data.

If this requires standard "Kevins" for the different data links, then
so be it. For GPM we have no RT "Kevin" on the Ethernet, thus we cannot
mix time critical and non time critical data on the Ethernet. Our
solution: keep a 1553 bus for the time critical stuff and do all the
asynchronous housekeeping and data traffic over the Ethernet. Thus we
eliminate the problem by segmenting our data on two links. The 1553
real-time link is not IP/SCPS, but it could be if the overhead were low
enough and fault tolerance issues were addressed. It's the segmentation
of the traffic/reliability classes which is of note (and is really the
hard part, in my view).

The current baseline for the SpaceWire packet-switched network assumes
that the connections are split. If a connection is entirely within the
SpaceWire network, one is free to use the channel number directly.
Otherwise the channel must be associated with some higher-level
network/transport connection.

Rick

Forward this as you like.

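To make the split in that baseline concrete, here is a hedged sketch of
the dispatch it seems to imply; the channel numbers, table and function
names are all invented, not taken from the SpaceWire baseline itself:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Channels provisioned as purely SpaceWire-internal (hypothetical). */
    static const uint8_t local_channels[] = { 1, 2, 3 };

    static bool channel_is_local(uint8_t ch)
    {
        for (size_t i = 0; i < sizeof local_channels; i++)
            if (local_channels[i] == ch)
                return true;
        return false;
    }

    /* Stubs standing in for the two real delivery paths. */
    static int deliver_on_channel(uint8_t ch, const uint8_t *p, uint32_t n)
    { (void)ch; (void)p; (void)n; return 0; }
    static int hand_to_network_layer(uint8_t ch, const uint8_t *p, uint32_t n)
    { (void)ch; (void)p; (void)n; return 0; }

    int route_spw_packet(uint8_t ch, const uint8_t *pdu, uint32_t len)
    {
        /* Wholly within the SpaceWire network: the channel number
         * alone identifies the connection and can be used directly. */
        if (channel_is_local(ch))
            return deliver_on_channel(ch, pdu, len);

        /* Otherwise the channel must have been associated at setup
         * time with a higher-level network/transport connection. */
        return hand_to_network_layer(ch, pdu, len);
    }
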
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Wed, 20 Aug 2003 08:13:17 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.0.20030819165022.01d379f8@pop500.gsfc.nasa.gov>

<font color=3D"#800080">At 02:31 PM 8/19/2003, Richard G. Schnurr
wrote:<br>
<blockquote type=3Dcite class=3Dcite cite>If this requires standard
&quot;Kevins&quot; for the different data links then so be
it.&nbsp;&nbsp; For GPM we have no RT &quot;kevin&quot; on the
Ethernet.&nbsp; Thus we cannot mix time critical and non time critical
data on the Ethernet.&nbsp; <b>Our solution:&nbsp; Keep a 1553 bus for
the time critical stuff and do all the asynchronous housekeeping and data
traffic over the Ethernet.</b>&nbsp; Thus we eliminate the problem by
segmenting our data on two links.&nbsp; </font></blockquote><br>
<font color=3D"#0000FF">At 11:26 AM 8/19/2003, Chris Plummer wrote:<br>
</font><blockquote type=3Dcite class=3Dcite cite><font face=3D"arial" size=
=3D2 color=3D"#0000FF">One
potential solution of course would be to <b>use separate onboard buses
for communication, and command and control functions. However, this is
both a costly and limiting solution that doesn't scale well to different
missions. </b>Therefore, we are left with the problem of supporting this
mixed traffic capability on a single bus.</font></blockquote><br>
Umm, there seems to be a strong difference of opinion here. Rick: the
ESA folk seem to represent the "views of the onboard practitioners who
have been dealing with this problem for many years". What would the GPM
(Global Precipitation Measurement) people have to say about the
assertion that they may be adopting "a costly and limiting solution
that doesn't scale well to different missions"?

///adrian

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Wed, 20 Aug 2003 18:33:46 -0400
From: "Richard G. Schnurr" <Rick.Schnurr@nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <5.1.0.14.2.20030820080323.024fb858@mail1.jpl.nasa.gov>

We actually agree with Chris Plummer's position, but as good little
flight engineers we chose to walk before we run. Our goal is to develop
a single seamless network that can be used for command and control and
for data. We think the SpaceWire network developed can support this. As
far as any particular mission flying a costly bus goes, everything is
relative. On GPM we were going to fly SpaceWire and 1553 for the
spacecraft bus, so a single Ethernet supporting both was actually less
expensive.

In any event, we also wanted to avoid implementing something in this
area for flight until we get agreement from the community. I think you
can imagine that GSFC is capable of implementing something that might
become a local de facto standard. We have no interest in doing so, as
it might not lead to the convergence we desire.

I agree with Adrian's point: I think we should write down our
requirements, to make sure that any proposal can be objectively
measured against some criteria.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date: Wed, 20 Aug 2003 16:02:14 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: RE: CCSDS Protocol Diagram
In-reply-to: <p052106adbb6945e390df@[128.149.8.95]>

<font color=3D"#0000FF">At 09:20 AM 8/20/2003, Peter Shames wrote:<br>
</font><blockquote type=3Dcite class=3Dcite cite><font=
 color=3D"#800080">(Rick)<br>
<blockquote type=3Dcite class=3Dcite cite><blockquote type=3Dcite class=3Dci=
te cite>I
think we have converged.&nbsp; We do think we can use the IP/SCPS on the
long haul and many instrument
applications.</font></blockquote></blockquote><font=
 color=3D"#0000FF">(Peter)<br>
We do not believe that you can use IP/SCPS on the &quot;long haul&quot;
as we understand that term.&nbsp; SCPS can be used on the &quot;short
haul&quot; out to Lunar distances, if the right options are chosen.&nbsp;
</font></blockquote><br>
I still have NO IDEA what Rick means by "IP/SCPS". If you can take the
hit of the overhead, you can run IP-over-CCSDS anywhere you want. If
you want to run any flavor of TCP (including the SCPS flavor; repeat
after me: "SCPS-TP *is* TCP; SCPS-TP *is* TCP") over IP, then as you
note things get really goopy after about Lunar distance. The point is
that the IP suite is primarily useful in low-delay environments with
rock-solid connectivity.

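To put rough numbers on "goopy": a window-based protocol such as TCP
can move at most one window of data per round trip, so throughput is
bounded by window/RTT. A back-of-the-envelope sketch, with illustrative
figures only:

    #include <stdio.h>

    int main(void)
    {
        const double window   = 65535.0; /* classic TCP window, no scaling */
        const double rtt_moon = 2.6;     /* ~2 x 384,400 km at light speed */
        const double rtt_mars = 1200.0;  /* ~20 min round trip, mid-range  */

        printf("Moon: <= %.0f bytes/s\n", window / rtt_moon); /* ~25 KB/s */
        printf("Mars: <= %.0f bytes/s\n", window / rtt_mars); /* ~55 B/s  */
        return 0;
    }
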
>As discussions in this thread have demonstrated, there are many who do
>not believe that IP is of any use, onboard or otherwise. They would
>rather run apps right down to the link layer,

Which is, as Dai pointed out, because all they want is point-to-point
transfer across a single homogeneous link. If you want to transfer
across multiple and possibly heterogeneous links, then you need a
networking protocol, which is what IP (and the SCPS-NP) is all about.

>(Rick)
>>An example: if I open a UDP/TCP stream on the ground to an address
>>that exists on a spacecraft at the end point of a SpaceWire network,
>>the data unit should be delivered and should be understood by that
>>end point.
>(Peter)
>This is an assertion about end-to-end delivery of data between
>addressable end points within an IP domain. Where IP protocols work
>(Lunar distances or less), and where network connectivity makes sense,
>this is a reasonable assertion to make.

Again, let's be clear: it's not "IP protocols" that go toes-up beyond
Lunar distances, it's the *chatty* protocols (like TCP) that sit on top
of IP that get the vapors.

>There are a number of open questions to be discussed here, like:
>- is such end-to-end connectivity needed?

That continues to be the $64K question. If you ask an end user "Do you
want to interface with the mission systems on the ground using the IP
suite, and do you want your onboard instrument to interface with the
spacecraft systems using the IP suite?", you will probably get a "Yes".
But translating that into a universal user demand to run the IP suite
end-to-end really takes some mushrooms!

///adrian

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Date: Tue, 19 Aug 2003 14:19:07 -0700
From: "Adrian J. Hooke" <Adrian.J.Hooke@jpl.nasa.gov>
Subject: Re: CCSDS Protocol Diagram
In-reply-to: <001201c36690$37e1bb10$0a01a8c0@keltik>

<font color=3D"#0000FF">At 01:26 PM 8/19/2003, David Stanton wrote:<br>
</font><blockquote type=3Dcite class=3Dcite cite><font face=3D"arial" size=
=3D2 color=3D"#0000FF">Sounds
like we need a joint SIS/SOIS session to thrash this stuff out. Maybe the
day before the CESG meeting in
October?</font></blockquote><font=
 color=3D"#0000FF">--------------------------<br><br>
</font>Yes, if the problem space involves Applications and end-to-end
services and has any significance beyond the onboard system, the solution
space clearly involves SOIS, SIS and MOIMS and so a cross-Area meeting
seems in order. Perhaps Peter could convene it under the SES umbrella?
Right now:<br>
<pre>- the MOIMS Area meetings are Oct 29 - Nov 3.
- the SES Area meetings are Oct 23 - 29.
- the SOIS Area meetings are Oct 27-29
</pre>- the SIS Area does not plan to meet<br><br>
Sounds like Oct 26 would be a possibility, if Peter could yield some
time? I think most of these meetings are clustered around the Goddard
area.

However, before firmly scheduling a meeting I would like to see an
archived cross-Area mailing list set up right away to cover this
Special Interest Group; technical discussions initiated on that list
(including a summary of what has been discussed already on this ad-hoc
group); and a draft agenda developed which identifies and frames the
issues that cannot be resolved electronically.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

END SUMMARY OF PRIOR DISCUSSIONS.
