[Sois-tcons] RE: reference model (2 levels)
Rick.Schnurr at nasa.gov
Tue Mar 8 17:57:52 EST 2005
Don't know if this thread is dead, but I am sending out the newly minted
OBL executive summary to fuel more discussion.
Basically, Greg is correct: the OBL simply provides a normalized packet
delivery service with a normalized address space, MTU, and link
reliability. The MTU is supported by fragmentation when necessary. Link
reliability is bus dependent, so some normalization is required.
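To illustrate the kind of normalization involved, here is a minimal sketch of MTU fragmentation (the MTU value, type names, and header layout are hypothetical, not taken from the OBL spec):

```c
#include <stddef.h>

/* Hypothetical per-bus MTU; real values come from the underlying
 * data link (e.g. 1553 vs. SpaceWire) and are not specified here. */
#define BUS_MTU 64

typedef struct {
    size_t offset;   /* byte offset of this fragment in the PDU */
    size_t len;      /* payload bytes carried by this fragment  */
    int    last;     /* nonzero on the final fragment           */
} fragment_t;

/* Split a PDU of pdu_len bytes into MTU-sized fragments.
 * Returns the number of fragments written to frags. */
static size_t fragment_pdu(size_t pdu_len, fragment_t *frags, size_t max_frags)
{
    size_t n = 0, off = 0;
    while (off < pdu_len && n < max_frags) {
        size_t chunk = pdu_len - off;
        if (chunk > BUS_MTU)
            chunk = BUS_MTU;
        frags[n].offset = off;
        frags[n].len    = chunk;
        off += chunk;
        frags[n].last   = (off == pdu_len);
        n++;
    }
    return n;
}
```

The point is only that the service above OBL always sees the normalized MTU, regardless of the per-bus limit underneath.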
OBL has no schedule, since many of the underlying buses do not have
schedules. In reality, the 1553 bus schedule can be handled by the HW or the
real-time operating system.
The work for OBL will now be to get prototypes moving for the supported buses
to support the specified service interface.
Keith, you seemed to be under the impression that the OBL would provide some
unifying force. In reality, OBL uses the TCONS OSA/ODA 16-bit address
scheme. If people decided that 32-bit IPv4 addresses were better, that
would be OK. The real work for OBL is to convert some uniform address into
the address class and "next hop" address for use by the underlying data link.
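To make that conversion concrete, a minimal sketch (the field split and class names are purely illustrative, not the TCONS OSA/ODA encoding):

```c
#include <stdint.h>

/* Illustrative address classes; the real OBL/TCONS class set
 * is defined by the spec, not here. */
typedef enum { ADDR_LOCAL, ADDR_BUS_1553, ADDR_BUS_SPW } addr_class_t;

typedef struct {
    addr_class_t cls;      /* which underlying data link to use        */
    uint8_t      next_hop; /* bus-specific address (e.g. a 1553 RT id) */
} link_addr_t;

/* Hypothetical split: high byte selects the bus segment,
 * low byte is the node address on that segment. */
static link_addr_t resolve(uint16_t uniform)
{
    link_addr_t out;
    uint8_t segment = (uint8_t)(uniform >> 8);
    out.next_hop = (uint8_t)(uniform & 0xFF);
    switch (segment) {
    case 0x00: out.cls = ADDR_LOCAL;    break;
    case 0x01: out.cls = ADDR_BUS_1553; break;
    default:   out.cls = ADDR_BUS_SPW;  break;
    }
    return out;
}
```

The same lookup would work for 32-bit IPv4 addresses; only the table driving the mapping changes.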
Personally, you know that I am fond of IP and am not fond of reinventing the
wheel, but I do agree with Greg that if IP is used we will need some better
stacks. This will not be hard, since most stacks are built for multi-user
environments where they are trying to stay out of the way, as opposed to a
real-time environment where "network" activity drives the show.
In any event, please provide comments, and if you feel that OBL is not
providing the correct functionality, let me know.
PS: It would have been fun to be in the thread in real time, but I was
away on vacation.
At 12:49 PM 3/8/2005, gregory.menke at gsfc.nasa.gov wrote:
>Keith Scott writes:
> > [Greg wrote]:
> > >We did simplistic measurements of selected UDP rx and tx transit time
> > >from app layer to hardware on a 233mhz PowerPC. Packet traffic
> > >experienced delays up to 5ms when ambient traffic loads approached
> > >several megabits/second. The latency SD was quite large, though the
> > >minimum was acceptable. On a flight processor running 100mhz or so,
> > >possibly without L2 cache and with limited memory, the situation will
> > >be much worse.
> > [Keith Scott]
> > Why is it that we believe that TCONS is going to be able to meet much
> > tighter timing requirements, while at the same time providing a more
> > complete service spec, as ambient traffic loads pass several megabits per
> > second, in a box with significantly fewer resources? Do we claim that the
> > TCONS stack will be much more tightly integrated than TCP/UDP/IP, and
> > that that integration will decrease latencies?
> > In testing UDP transmission delays through the stack, it would seem likely
> > that the latency and jitter are introduced by interrupts (there's not a lot
> > to UDP processing, especially if you turn the checksums off). Interrupts
> > will be present in the OS(s) running TCONS as well, right? How is it that
> > TCONS is going to address this? Do we say that to meet the latency
> > requirements, TCONS will be implemented in hardware/firmware and thus not
> > be subject to resource usage on the 'host' system? Is the argument that we
> > can control the interrupt rate by limiting the rate at which traffic is
> > allowed to impact the box? This might be difficult to do with, say, IP
> > (since it could involve limiting the aggregate rate of a number of
> > sources impinging on a single destination).
>Interrupts are clearly part of the question; however, the IP stack is
>wholly asynchronous, so there is no schedule to test latency against,
>or any way to manage latency short of regulating application access to
>the stack. To limit per-PDU latency, some stacks will give priority (ours
>didn't), so our QoS and scheduling activities have to be done at the
>app layer, with the app layer strictly regulating access to the IP
>stack. And once that's dealt with, then buffer utilization has to be
>addressed. Different stacks implement it differently: some have
>per-socket pools, some have one pool shared by all.
>TCONS is intended to set and track a schedule for realtime traffic,
>defer buffering to the app layer so those apps which need special
>buffering can manage it themselves, and it offers a well-defined
>packet drop model so misbehaving apps cannot disrupt the ambient
>communications of others.
> > >The previous generation of GSFC spacecraft have packet transit latency
> > >requirements of less than 4ms. System designers are not going to
> > >accept an increase in maximum packet latency even after clock rates
> > >increase by a factor of 10.
> > >
> > >Once again, if a spacecraft does not need or have realtime networking
> > >requirements, then there is no reason to use TCONS.
> > [Keith Scott]
> > Right, and I think that the only realtime TCONS service is the
> > scheduled one.
> > I think that the best-effort and guaranteed services TCONS specifies will
> > end up similar to TCP/UDP (since they have essentially the same service
> > requirements), and that most applications will, for non-scheduled service,
> > use those instead.
>However, the non-scheduled traffic STILL must be managed so it doesn't
>disrupt the scheduled traffic. Generally that means it's sent whenever
>scheduled traffic isn't being sent. However, the decision of when
>scheduled traffic is present or not is made by TCONS, and thus it
>must control when other data units are transmitted.
> > >For a general measure, anything we implement should offer no worse
> > >realtime properties than existing systems, so you can view a
> > >worst-case device-to-application latency of 4ms as "sufficiently
> > >realtime". "Realistically realtime" should offer latency of no more
> > >than 1ms. On a 100mhz PPC, running a substantial pile of software and
> > >moving lots of data, an IP stack cannot reliably meet either limit.
> > [Keith Scott]
> > And neither will TCONS (intentionally inflammatory statement). The point
> > is, without some benchmarks of what 'a pile of software' and 'lots of
> > data' are, we could argue this forever. My fear is that we will continue
> > to reject alternate solutions because they are "too heavy, too slow under
> > network load, too slow under high CPU utilization, ..." without having any
> > firm notions of what these are. When we have finally developed TCONS to
> > the point that it can be tested, if we find that it fails one or more
> > performance tests used to reject other approaches, proponents of those
> > approaches are likely to be (justifiably) upset. How do we know when
> > we've won?
>TCONS can make realtime guarantees because the scheduled service
>maintains a well-defined comms schedule. If datacomms slop off the
>end of their assigned slots, then it is a bug in the slot
>configuration or a bug in TCONS. Such an event is immediately
>detected by TCONS and action can be taken. An IP stack has no way at
>all to approach that sort of control or monitoring.
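To make Greg's slot-accounting point concrete, a minimal sketch of overrun detection (slot sizes, units, and names are purely illustrative, not from the TCONS spec):

```c
#include <stdint.h>

/* A comms slot in the schedule: start time and length in microseconds. */
typedef struct {
    uint32_t start_us;
    uint32_t len_us;
} slot_t;

/* Return nonzero if a transmission running from tx_start_us to
 * tx_end_us stayed within its assigned slot. A zero result means
 * datacomms slopped off the end of the slot, i.e. a bug in the slot
 * configuration or in the scheduler, and it can be flagged at once. */
static int within_slot(const slot_t *s, uint32_t tx_start_us, uint32_t tx_end_us)
{
    return tx_start_us >= s->start_us &&
           tx_end_us   <= s->start_us + s->len_us;
}
```

An asynchronous IP stack has no equivalent check, because there is no assigned slot to compare against.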
> > >You <might> be able to tweak some IP stacks on some processors to do
> > >so, and limit application access to the IP stack to ensure packets are
> > >not lost and buffers not overloaded, but such a solution is inevitably
> > >idiosyncratic.
> > [Keith Scott]
> > I suppose TCONS can prevent packet loss in the stack by applying
> > backpressure on the application. I'm not sufficiently familiar with the
> > lower-layer interface to OBL to know how it applies backpressure to the
> > network layer.
>The user supplies the buffers in both directions, TCONS will not drop
>packets and congestion will cause the app layer to delay while it
Richard G Schnurr Jr
Electrical Systems Center
Applied Engineering and Technology Directorate
NASA GSFC Code 560.0
Greenbelt MD 20771
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 60928 bytes
Desc: not available
Url : http://mailman.ccsds.org/pipermail/sois-tcons/attachments/20050308/eef88e29/OBL_Executive_summary_rgs_3_8-0001.obj