Tutorials

Data Recording Using Commercial Servers – Tutorial

Introduction to Data Recording Using Commercial Servers

Legacy data recorders were originally implemented as custom hardware-based systems. Many of these systems have since migrated to an architecture that combines an industrial PC with data acquisition modules and custom front panels. There are now new software-defined data recorders on the market that leverage low-cost commercial server platforms for the underlying computing/networking/storage hardware.

Satellite ground systems and range data systems record the telemetry data at one or more points in the downlink processing stream. For satellite systems, command data may also be recorded. Data may be recorded at the antenna site so that the data is not lost in the event of a network outage. Data may also be recorded at the antenna site if the downlink rate exceeds the network capacity. In addition, data is nearly always recorded after decryption and placed into data archives that can be used to support detailed analysis.

Data Recording Basics

Data recorders capture the data streams on each of their input channels first in memory and then write this data to files stored on disk drives. Typically, each channel is written to a separate file. A high rate channel running at 1 Gbps quickly creates large raw data files! To avoid overly large files, long duration recordings may be split across multiple physical files on the disk with the data recorder closing one file and starting another during the recording. This happens under the covers without data loss.

Data recorders store meta data along with the raw input data. This meta data is usually stored in a separate set of files. Meta data allows the data recorder to replay the raw data. For each block of data written to the raw data file, the corresponding meta data would include information such as the arrival time of the first bit, the number of bits in that segment, and the duration of those bits.

This meta data allows the recorder to replay the raw data stream file with the same data rate and timing as when the data was originally recorded. It also allows the replay to start and stop at any point in the recording.
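To make the replay mechanics concrete, here is a minimal Python sketch of a per-block meta data record and how the original data rate falls out of it. The field names are illustrative, not any particular recorder's format.

    from dataclasses import dataclass

    @dataclass
    class BlockMeta:
        # Illustrative per-block meta data (field names are hypothetical)
        first_bit_time: float  # arrival time of the block's first bit (seconds)
        bit_count: int         # number of bits in this block
        duration: float        # time span of those bits (seconds)

    def replay_rate_bps(meta: BlockMeta) -> float:
        # The original data rate is recovered directly from the meta data
        return meta.bit_count / meta.duration

    # A block of 1,000,000 bits that spanned one second replays at 1 Mbps
    assert replay_rate_bps(BlockMeta(1000.0, 1_000_000, 1.0)) == 1_000_000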


Data Storage

Commercial servers offer a wide range of disk storage options, both in performance (recording speed) and reliability. With a software-defined recorder, the storage subsystem can be sized and configured to specific needs.

RAID storage can be used for applications that require a near 100% guarantee against data loss.

The RAID controllers in commercial servers also allow the data files to be mirrored (RAID 1) or striped (RAID 0) across multiple disks to greatly increase the data recording bandwidth when compared to a single disk drive. With RAID 5, parity information can also be striped across the disks to automatically handle a drive failure without data loss.

File Management

Data recorders include file management functions that enable the user to assign names to each recording and allow files to be Exported and Imported. Exporting a file from one data recorder and Importing it to another allows the second recorder to replay the recorded data.

File management functions also include what to do in the event that the disk storage is becoming too full. For example, the data recorder can be configured to delete the oldest recordings first if necessary to make room for the current recording. Data recorders allow the user to “protect” specific recordings so that they are never deleted as part of the file management process.
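A sketch of that delete-oldest-first policy, assuming a simple recording record with a created time, a size, and a protected flag (the schema is hypothetical):

    def free_space(recordings, bytes_needed):
        # Delete oldest unprotected recordings until enough space is freed
        freed = 0
        for rec in sorted(recordings, key=lambda r: r["created"]):
            if freed >= bytes_needed:
                break
            if rec["protected"]:
                continue  # protected recordings are never deleted
            freed += rec["size"]
            recordings.remove(rec)
        return freed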

With a commercial server, files can also be moved using the operating system’s capabilities to copy, delete, and relocate files. Recordings can be copied to removable media (e.g. CD, removable drive) or transferred across the network (e.g. FTP).

Playback

Playback from the data recorder is used to regenerate and output data streams. The user can select all or a subset of the channels captured in a particular recording to be included in the playback. Typically playback is performed at the original data rates, but faster or slower playback rates can be set. This allows a low data rate stream to be processed in less time.

An important feature of the meta data files is that they allow the data recorder to accurately reproduce the data stream, including dead clock periods, static data, and data rate shifts.

Software-Defined Data Recorders

Software-defined data recorders offer several advantages over point solutions. Other processing functions are easily added to the software-defined data recorder. One example is the ability to monitor data quality during the recording. If the telemetry data is unencrypted, the data recorder can include a frame synchronizer that provides a real time indication of whether or not the data is good.

Custom data quality checks can also be implemented. Data quality statistics can be displayed and stored with the meta data.

And this works both ways. Data recording can be added to other software-defined systems such as modems, gateways, and front end processors. The modem becomes a modem/data recorder. The front end processor becomes a FEP with integrated data recording.

Other Advantages of Commercial Servers

Commercial servers have multiple advantages over industrial PCs for applications and environments that do not require a ruggedized data recorder. These include:

Current Technology: The commercial server market dwarfs both the industrial PC and the data recorder markets. That market size drives a much higher level of capital investment in staying competitive by offering the latest technology. Commercial servers are always one or two technology generations ahead of industrial PCs.

System Management: Commercial servers from companies such as Dell, HP, and IBM have systems management tools that run diagnostics, report issues, perform monitoring, etc.

Lower Cost: The market size again creates an advantage in that commercial servers are lower in initial cost. They also have worldwide support systems in place that provide service convenience and reduce the long-term cost of maintenance.

Technology Refresh: Software-defined recorders running on commercial servers afford a simpler approach to technology refresh when the existing system has grown tired. The data recording application and any signal acquisition modules are simply migrated to a current server platform.

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

A Quick Synopsis of Space Link Extension – Tutorial

Introduction to Space Link Extension SLE

CCSDS Space Link Extension (SLE) defines a standardized set of services that allow ground antenna sites and control centers to send spacecraft data back and forth. This spacecraft data includes the data channels in the return link (spacecraft to ground) and the forward link (ground to spacecraft).

SLE Services

SLE is a CCSDS standard. Whereas many of the CCSDS standards define the data content, format, and protocol of the forward and return links, the SLE standards define a set of services for the transfer of the data between the nodes of a ground network. The SLE standards also define a set of management services that are used to make and manage the connections between nodes.

These “services” are implemented in software that runs on the computer systems that are part of the ground communications network. The SLE standards help ensure software written by different organizations can interoperate over the ground network.

SLE uses lots of terms and acronyms so that the standard is generic and not specific to any implementation. This App Note tries to describe SLE in more general terms, but there’s no avoiding having to relay some of the SLE terminology.

SLE differentiates the two sides of the ground network as the SLE Provider and the SLE User.



Ground Communications Network Using SLE

SLE Providers and SLE Users

Typically, the SLE Provider is the ground antenna side and the SLE User is the control center side. The SLE User requests a service, like “give me the data from my instrument on the spacecraft,” and the SLE Provider does its best to comply with the request.

Data channels in the return link flow from the SLE Provider to one or more SLE Users. Multiple SLE Users can connect to the same SLE Provider and receive some or all of the return link data. The SLE Provider acts as the distribution point on the ground for the spacecraft data.

Forward link data flows in the other direction, from the SLE User to the SLE Provider. Since this is the data that is subsequently modulated and transmitted to the spacecraft, there are usually fewer ground nodes producing the forward link data and the nodes must coordinate who has control.

Multiple SLE Users can connect to an SLE Provider, and an SLE User can have forward links being sent to more than one SLE Provider.

Making the Connection

There are SLE Management Services to establish the connection and validate that connection. These connections are known as Service Instances. An SLE User requests an SLE Bind operation when it wants to establish a connection with an SLE Provider. There's authentication, with user names, passwords, and credentials being exchanged. If all is good, the connection is made and the SLE User can Start the flow of data. A Stop and Unbind operation requests shutdown of this particular connection between SLE User and SLE Provider.
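A toy Python state machine captures the ordering of those operations. This is only a sketch of the Bind/Start/Stop/Unbind sequence, not an implementation of the SLE protocol:

    from enum import Enum, auto

    class SIState(Enum):
        UNBOUND = auto()
        READY = auto()   # bound, data flow stopped
        ACTIVE = auto()  # bound, data flowing

    class ServiceInstance:
        def __init__(self):
            self.state = SIState.UNBOUND

        def bind(self, credentials_ok):
            # Authentication happens here; the bind fails if credentials are bad
            if self.state is SIState.UNBOUND and credentials_ok:
                self.state = SIState.READY

        def start(self):
            if self.state is SIState.READY:
                self.state = SIState.ACTIVE  # data begins to flow

        def stop(self):
            if self.state is SIState.ACTIVE:
                self.state = SIState.READY

        def unbind(self):
            if self.state is SIState.READY:
                self.state = SIState.UNBOUND  # connection shut down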

SLE Return Link Services

CCSDS allows for multiple data sources on the spacecraft to be combined into a single downlink. These data sources might be data from multiple instruments on the satellite, state-of-health data, image data, even voice data if it’s a manned mission. Each data source builds its own CCSDS frames with a unique channel ID. The downlink multiplexes these channels together transmitting them one after another based on which channels have frames to send, while accounting for their assigned priority. All channels come down the RF link together, but likely need to end up in different places for processing.

RAF: The Return All Frames (RAF) service does just that. The SLE User who requests this service gets all the frames. A good example might be a system that is recording the return link and wants to get every channel.

RCF: The Return Channel Frames (RCF) service lets the SLE User get a single channel that’s of interest (or that they are allowed to have access to). For example, a research organization that has an instrument on-board might use the RCF service to get the data from their instrument and not any of the others. A separate connection is required to request each separate channel.

RSP: The Return Space Packets (RSP) service delivers a set of Space Packet channels. A Space Packet is a channel of variable sized data packets within the frame.

There are another half-dozen or more Return Link services within the SLE standard, but those three are the most commonly used.
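The difference between RAF and RCF amounts to a channel filter. A minimal sketch, with frames represented as dicts carrying a channel ID (the representation is ours, not the standard's):

    def raf(frames):
        # Return All Frames: every frame, regardless of channel
        return list(frames)

    def rcf(frames, channel_id):
        # Return Channel Frames: only the requested channel
        return [f for f in frames if f["channel_id"] == channel_id]

    downlink = [{"channel_id": 1, "data": b"instrument-A"},
                {"channel_id": 2, "data": b"state-of-health"},
                {"channel_id": 1, "data": b"instrument-A"}]
    assert len(raf(downlink)) == 3
    assert len(rcf(downlink, channel_id=1)) == 2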

SLE Forward Link Services

CCSDS systems commonly use a Communications Link Transmission Unit (CLTU) for the forward link data. A CLTU is pretty much just a formatted block of data that gets put into one or more CCSDS Transfer Frames. This is the data stream that is sent to the SLE Provider location and then modulated and transmitted to the spacecraft or satellite. At the spacecraft, each CLTU has a specific destination where it is processed.

FCLTU: The Forward CLTU service enables the user of the service to send an asynchronous stream of CLTUs. The service allows the SLE User to specify a window of time for sending the data out the antenna.

EFCLTU: A newer service is the Enhanced Forward CLTU service. EFCLTU adds the ability to send a synchronous stream of CLTUs.

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

Private Cloud Architectures and the KS-252 – Tutorial

Introduction to Private Cloud Architectures and the KS-252

The KS-252 is a modern NSA Type 1 COMSEC device that natively supports network-based systems. Legacy COMSEC devices in many control centers use serial interfaces for the plain-text and cipher-text telemetry and command data streams. The KS-252, with its Ethernet interfaces, enables the COMSEC devices to be networked with the other control center equipment and deployed in a private cloud architecture.

COMSEC Pooling

Cloud architectures deploy suites of Applications across low-cost commercial servers, often running the various Apps on Virtual Machines. The KS-252 is well suited for cloud architectures. Architecturally, the KS-252 can be thought of as a COMSEC App on the network, creating a “pool” of COMSEC devices.

A KS-252 can be configured with a commanding algorithm or a telemetry algorithm. It supports either an encrypt channel or a decrypt channel for that algorithm. For a given telemetry or command/command echo stream, a KS-252 from the pool is dynamically configured and connected to Red Front End and Black Front End Apps.

Security considerations do impact the flexibility. The KS-252 exchanges security credentials with each connecting device, requiring each Front End Processor App that may connect with the KS-252 to have these credentials. There is also the requirement to have the proper keys loaded and many control centers choose to limit the number of keys on an individual COMSEC device. These two constraints may cause the deployed architecture to have sub-pools of both KS-252s and the Front End Processor Apps, with a sub-pool assigned to a particular set of satellites.

Key Factors

UDP Interfaces: The KS-252’s network protocol is UDP. With UDP, delivery of packets over the network is not guaranteed. The KS-252 should be directly connected to a dedicated Ethernet network within the control center and not extended over a wide area network where packet loss can be expected.

Throughput: The KS-252 traffic interfaces operate at Ethernet’s 100 Mbps data rate. With packet overhead, the effective maximum throughput for telemetry is on the order of 90 Mbps. Front End Processor Apps that use multiple KS-252s in parallel achieve higher throughput rates.
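A quick sizing check for that parallel arrangement, using the ~90 Mbps effective per-unit throughput noted above (the helper itself is just illustrative arithmetic):

    from math import ceil

    def ks252_units_required(stream_rate_mbps, per_unit_mbps=90):
        # Number of pooled KS-252 channels needed to carry one stream
        return ceil(stream_rate_mbps / per_unit_mbps)

    assert ks252_units_required(60) == 1    # fits on a single device
    assert ks252_units_required(300) == 4   # four devices in parallel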

User Interface: Currently, both the security administration and the monitor and control interfaces for the KS-252 are browser-based. Most deployments use an agent on the red-side Front End Processor to interact with the KS-252. A future software release is slated to provide a set of GEMS messages for control and monitor of the KS-252, and this will simplify operations with the device.

Timing Considerations

Legacy serial devices provide a deterministic rate at which telemetry and command data flow. The KS-252 transfers variable length packets over an Ethernet network, and this causes “jitter” in the end-to-end data flow. Front End Processors on both the plain-text and cipher-text side of the KS-252 employ timing algorithms to give the KS-252 serial-like timing.

An example timing diagram is shown below. The KS-252 accepts multiple commands in each packet, with each command being separated by one or more S-bits. There is a one-to-one corresponding output packet. If the input packet contains two commands, the output packet contains two commands. In this example, the black side Front End Processor or modem generates a continuous serial output, filling the spacing between command packets with S-tones.




KS-252 Command Packets and Timing

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

WAN Forward Error Correction for Real Time Data Streams – Tutorial

Introduction to WAN Forward Error Correction for Real Time Data Streams

Running a forward error correction algorithm over a wide area network with long path delays can improve packet throughput and reduce latency by eliminating the need to retransmit lost packets.

The network protocol between two devices operating over a WAN must balance latency against error recovery from packet loss, while fitting within the available bandwidth and still achieving high data quality. It’s the classic “Iron Triangle.”

Error recovery from packet loss can be improved, but at some point this results in too much latency and/or consumes too much bandwidth.

Real Time Data Streams

Transporting real time data streams over a WAN differs from “normal” network traffic, where packet flow is of the request-response form. Continuous data streams, such as telemetry data, flow continuously in one direction. The data is sequentially generated in time. There’s no ability to “get ahead” on the transmitting side. On the receiving side, the data must remain ordered, requiring buffering to account for out-of-order arrival and the delay associated with retransmitted packets.

This buffering directly increases the end-to-end latency. In many systems, particularly those with a long round-trip time on the WAN, running TCP protocols may not be feasible.

WAN Forward Error Correction (WAN FEC)

UDP is a viable alternative to TCP, provided the system implements a WAN FEC to recover from lost packets. To handle lost packets, data streams are aggregated, encoded, and interleaved for WAN transmission. Performing error detection and recovery on the aggregated set of data streams reduces latency for each individual data stream. The aggregation also improves the error recovery.

Encoding the data streams adds check bits to the data, and interleaving “spreads out” the source data bits and their check bits so that when packets are lost, the impact on any one data stream is less concentrated. The ratio of check bits to source data bits and the spread of the interleave determines whether or not the content of the lost packet(s) can be recovered.
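As a toy illustration of check bits, here is a single-erasure code in Python: one XOR parity packet per group lets the receiver rebuild any one lost packet in that group. A real WAN FEC also interleaves (spreads) the groups so a burst of lost packets lands in different groups; that part is omitted here.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def add_parity(group):
        # Append one XOR parity packet to a group of equal-length packets
        parity = group[0]
        for pkt in group[1:]:
            parity = xor_bytes(parity, pkt)
        return group + [parity]

    def recover_missing(received):
        # XOR of all surviving packets (including parity) rebuilds the lost one
        rebuilt = bytes(len(received[0]))
        for pkt in received:
            rebuilt = xor_bytes(rebuilt, pkt)
        return rebuilt

    coded = add_parity([b"aaaa", b"bbbb", b"cccc"])
    arrived = [coded[0], coded[2], coded[3]]  # packet 1 lost in transit
    assert recover_missing(arrived) == b"bbbb"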

The optimal FEC technique to manage system performance in a constrained environment (i.e. do the best that can be done) depends on the channel data rate, available bandwidth, latency requirements, and network Quality of Service. Regarding QoS, burst losses are the most difficult to handle, particularly when there are requirements for low latency and the network’s bandwidth is limited.

Data Quality Constraints

Data Quality Curve

With AMERGINT’s WAN FEC, there are three primary tuning parameters. The Collection Interval (how much data is accumulated before being packetized and sent) is one component of the network latency. This also affects packet overhead on the network. The Spread of these packets is the primary tuning parameter associated with correction of burst errors. Finally, the FEC Algorithm and the resulting ratio of check bits to data bits impacts bandwidth.

Data quality with a WAN FEC can be managed (obtained) in the region above the curve in the notional data quality diagram.

The coded/packetized data rate determines the bandwidth asymptote. The available bandwidth must exceed this rate. Low latency requirements drive higher coding/packet overhead and this increases the necessary bandwidth.

In addition, the allowable latency must exceed the recoverable burst error period plus the processing latency. Otherwise, data quality degrades during periods of burst packet loss.

The graph shows that small changes in the required maximum latency can quickly require high bandwidth to achieve a specific data quality over the network.
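Those constraints can be written down directly. A notional feasibility check (our own simplification, not AMERGINT’s tuning model):

    def fec_plan_feasible(coded_rate_mbps, available_bw_mbps,
                          burst_period_ms, processing_latency_ms,
                          allowable_latency_ms):
        # Bandwidth must exceed the coded/packetized rate, and the latency
        # budget must cover the recoverable burst period plus processing time
        return (available_bw_mbps > coded_rate_mbps and
                allowable_latency_ms > burst_period_ms + processing_latency_ms)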

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

A Really Simple Introduction to Satellite Links – Tutorial

Introduction to A Really Simple Introduction to Satellite Links

Satellite links are the means of communications between a spacecraft and the control systems here on earth. By definition, these are RF (radio frequency) links, and how they work is very much a mystery to many of us. A satellite waveform is defined by its modulation, and in many cases the modulation is designed specifically to ensure the signal can be reliably demodulated.

We certainly cannot make you an expert on satellite communications in this short tutorial. That takes a degree or two in electrical engineering and likely a matching number of years of practical experience.

But it can be helpful to understand the lingo and how these links work. In addition to defining the terms, we’ll attempt to describe the aspects of satellite communications that constrain and influence the waveforms that are used.


Radio Frequency Links

This App Note focuses on non-transponded satellite links. These are the RF links that transmit telemetry or payload data from the satellite to the ground and the links that transmit command or uplink data from the ground to the satellite. There are modems (short for modulator/demodulator) on both sides of these links, one on the satellite and one at the ground antenna.

When transmitting, the modem first generates a waveform and then amplifies it before sending it to the antenna. The antenna radiates the wave and usually focuses the RF energy (power) in the direction of the receiving antenna/modem combination. Each RF link is a “channel” that connects the two modems.

The RF links for satellite communications have five key implementation drivers:

  1. The link covers a long distance. The power radiated from an antenna decreases with the square of the distance. The signals coming from a geosynchronous satellite about 22,300 miles away are very faint (literally billionths of a watt), and the signal takes about 1/8 second to reach earth.
  2. The link can have Doppler effects. Satellites in low earth orbit have high velocities as they fly over a ground antenna. This generates Doppler on the waveform, distorting the signal’s frequency and phase.
  3. The link is asymmetric. The ground side antenna is capable of generating high power when transmitting while the satellite side has a relatively small antenna aperture for receiving that power. Conversely, the satellite side has limited transmit power, while the ground antenna can be quite large.
  4. The link has continuous data. Unlike network-based communications where packets are sent only when there’s data to send, each satellite link always contains some data to maintain synchronization. The data may be fill or idle data, but the link is never empty.
  5. The link’s QoS is important. There’s no do over. Satellite communications waveforms are designed to have essentially no transmission errors, as they do not support retransmission for lost or corrupted data because of the long delay.

RF Bands


RF links use a carrier signal which is simply a sine wave at a fixed frequency. The RF spectrum is divided up into established frequency bands, and each band defines a range of frequencies.

For example, L-Band is the 1 to 2 GHz range of the radio spectrum, and S-Band is the 2 to 4 GHz range. Satellite systems operate in the frequency bands from L-Band to Ka-Band, ranging from 1 GHz to 40 GHz. It’s important to note that these same frequency bands are used for other links. Your home’s wireless network operates in S-Band.

Every satellite link is assigned a range within an RF Band, and it’s this portion of the spectrum that the signal occupies when it is transmitted. The signals are filtered so they do not interfere with adjacent links. A satellite signal at L-band might be assigned to operate at 1.35 to 1.36 GHz.
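For reference, the band edges mentioned above can be captured in a few lines (these are the IEEE radar-band ranges in GHz):

    BANDS_GHZ = {"L": (1, 2), "S": (2, 4), "C": (4, 8), "X": (8, 12),
                 "Ku": (12, 18), "K": (18, 27), "Ka": (27, 40)}

    def band_of(freq_ghz):
        for name, (lo, hi) in BANDS_GHZ.items():
            if lo <= freq_ghz < hi:
                return name

    assert band_of(1.35) == "L"  # the example link above sits in L-Band
    assert band_of(2.4) == "S"   # home wireless network territory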

Modulation

Modulation is the process of changing the pure carrier signal so as to convey the data that is being transmitted. The ground modem is modulating the uplink data onto the carrier signal. The modulation process increases the bandwidth of the signal. There are multiple ways to create this modulation:

  • The modem can vary the amplitude of the sine wave, known as amplitude modulation.
  • The modem can vary the frequency of the sine wave, known as frequency modulation.
  • The modem can vary the phase of the sine wave, known as phase modulation.

Binary phase shift keying is one of the simplest techniques to visualize and is shown below. Each time the bit being transmitted changes logic level (0 to 1 or 1 to 0), the phase of the carrier is shifted 180 degrees.

BPSK Modulation
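In code, the idea is only a few lines. A minimal numpy sketch that maps each bit to a 0 or 180 degree carrier phase, so the phase flips whenever the bit stream changes logic level (sample counts are arbitrary):

    import numpy as np

    def bpsk(bits, cycles_per_bit=4, samples_per_cycle=16):
        # Each bit selects a carrier phase of 0 or 180 degrees
        t = np.arange(cycles_per_bit * samples_per_cycle) / samples_per_cycle
        return np.concatenate(
            [np.cos(2 * np.pi * t + (np.pi if b else 0.0)) for b in bits])

    waveform = bpsk([0, 1, 1, 0])  # four bit periods of modulated carrier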

Satellite systems use much more sophisticated techniques to communicate large amounts of data in a small amount of spectrum over very long distances. Many satellite systems use phase modulation or even a combination of phase and amplitude modulation.

Demodulation

The demodulator has the harder job to do. It has to receive this modulated carrier and extract the modulating waveform to recover the information that was transmitted. There are several complicating factors that corrupt the received signal, which all good receivers are designed to handle.

The first is noise, which comes primarily from the input amplifier and the background noise the antenna is looking at. The antenna must collect and focus enough of the extremely weak signal to overcome the noise, and the input amplifier must be designed to add as little noise as possible. All electronics adds some noise to the signal, but since the signal is weakest at this point, it tends to set the signal to noise ratio.

There’s interference too, in that there is power from every other transmitter in sight of the antenna. The motion of the satellite relative to ground also changes the phase and frequency of the received signal (the Doppler effect). The receiver filters noise and interference that are outside the band of the desired signal. The demodulator tracks the movement of the signal (Doppler), and extracts the data from the remaining noise and interference.

Satellite communications engineers have cleverly designed ways to resolve all of these issues, but how that’s done is well beyond this Tutorial.

BER and Channel Coding

An important aspect of satellite links is the channel coding that is used to improve the bit error rate performance.

Bit error rate is a complicated topic in and of itself. A simplistic way to think of BER is as the probability of a bit error at a given signal to noise ratio. As a signal degrades (a lower signal to noise ratio), the number of bit errors increases.

Channel coding is also known as forward error correction. The modulator performs the channel coding which adds redundant information to the transmitted data. This “encoding” is “decoded” back to the original data by the demodulator.

We all have performed channel encoding leaving a phone message for someone when we give them our number up front, record our message, and then repeat the number before hanging up. You’ve transmitted the same data twice in hopes of them getting it right as they write it down.

Channel coding reduces the required signal level needed to achieve a particular bit error rate. This is known as the coding gain. Coding gain allows transmitters to operate at lower power or to transmit at higher bit rates. The complexity in channel coding is on the decoding side. Most satellite downlinks use channel coding to conserve transmit power at the satellite. Satellite uplinks do not typically use channel coding because of the increased receiver complexity (and thus receiver power) that would be required on the satellite. Instead the ground transmits at higher power. The command decryptor on the satellite provides an effective guard against bit errors by rejecting commands that contain bit errors.

There are a number of coding techniques used, and they are usually named after their creators or given a name that describes the technique.

Block codes expand some number of input bits to a larger number of coded output bits. For example, CCSDS uses a Reed-Solomon (255, 223) block code which adds 32 check symbols to each block of 223 symbols, creating a code block that has 255 symbols. Interleaved block codes work well in correcting burst errors where several contiguous bits are corrupted.

Convolutional codes encode the bits in a sliding manner where the output bits are defined by some number of previous input bits. These forward error codes work well even with very noisy signals. Viterbi decoding is an optimal method of decoding a convolutionally encoded signal.

Modern systems use channel coding that is iteratively decoded. One that’s now commonly used in satellite communications is the Low Density Parity Check (LDPC).

Concatenated coding involves using both block codes and convolutional codes to improve bit error rate performance. The modem first block encodes the data, puts this into frames, and then convolutionally encodes that data stream.

All this coding comes at a price in that the number of transmitted bits (usually referred to as symbols) is increased, and this decreases the effective data rate.
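To see that price in numbers, multiply the channel rate by each code’s ratio of data symbols to total symbols. Assuming the concatenation described above, Reed-Solomon (255, 223) plus a rate-1/2 convolutional code:

    def information_rate_mbps(channel_rate_mbps, code_rates):
        # Each code multiplies the rate by its data-to-total symbol ratio
        rate = channel_rate_mbps
        for r in code_rates:
            rate *= r
        return rate

    # A 10 Mbps channel carries only ~4.37 Mbps of user data after coding
    print(information_rate_mbps(10.0, [223 / 255, 1 / 2]))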

Bit Conditioning

There’s one more set of techniques, referred to as bit conditioning, that help out the demodulator/bit sync.

Pulse code modulation is a form of encoding the data bits. The common ones are NRZ-L, NRZ-M, and NRZ-S. What helps is remembering that NRZ stands for Non-Return to Zero, L stands for Level, M stands for Mark (Logic Level 1), and S stands for Space (Logic Level 0).

NRZ-L is the one that makes sense to most of us as it looks like how digital data appears on an oscilloscope. With NRZ-M, every ‘1’ in the bit stream changes the logic level. With NRZ-S, every ‘0’ in the bit stream changes the logic level.
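Those two rules are easy to express directly. A minimal sketch with logic levels as 0/1 integers:

    def nrz_m(bits, level=0):
        # NRZ-M: every '1' toggles the output level; a '0' holds it
        out = []
        for b in bits:
            if b == 1:
                level ^= 1
            out.append(level)
        return out

    def nrz_s(bits, level=0):
        # NRZ-S: every '0' toggles the output level; a '1' holds it
        return nrz_m([1 - b for b in bits], level)

    assert nrz_m([1, 1, 0, 1]) == [1, 0, 0, 1]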

A bit synchronizer’s job is made more difficult when there are long strings of bits without changes in the logic level. Bi-phase PCM codes guarantee at least one symbol transition for every bit. Another technique that removes long strings of ‘1’s and ‘0’s is to run the data through a randomizer. There are defined randomization polynomials that are used.

Finally, there’s preamble insertion. A good example of this is the commanding uplink. Commands are sent intermittently, but the modulated data link has to remain continuous. The ground system transmits an idle pattern (such as alternating ‘1’s and ‘0’s) between commands, and then prefaces each command sequence with a preamble.

This preamble is a known sequence of bits that the spacecraft’s demodulator can detect and then know that what follows is command data, which it needs to pass along. Preambles are usually Barker Codes, a short sequence of bits that are easy to correlate on (i.e. find).
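A sketch of how a receiver finds such a preamble, using the classic 13-chip Barker code: slide the code across the incoming bits (mapped to +/-1) and look for the correlation peak. Noise and soft decisions are ignored here.

    BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

    def correlate(bits, code=BARKER_13):
        # Sliding correlation; a value of len(code) marks a perfect match
        s = [1 if b else -1 for b in bits]
        return [sum(c * x for c, x in zip(code, s[i:]))
                for i in range(len(s) - len(code) + 1)]

    idle = [1, 0] * 8                                  # alternating idle pattern
    preamble = [1 if c > 0 else 0 for c in BARKER_13]
    peaks = correlate(idle + preamble)
    assert max(peaks) == 13 and peaks.index(13) == len(idle)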

Summary

Link or Channel – the RF connection between a modulator and a demodulator

Bandwidth – the amount of RF spectrum the signal occupies (but be aware that’s only one of many definitions)

Modulation – changing the carrier signal so that it contains the information being sent

Demodulation – receiving the modulated signal and recovering the information

Bit Error Rate – the probability of bit error at a specific signal-to-noise ratio

Coding – adding redundant data bits (or symbols) to improve the bit error rate performance

Conditioning – changing the logic levels before modulation to improve the bit synchronizer’s performance

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

Time-Data Correlation of Serial Data Streams – Tutorial

Introduction to Time-Data Correlation of Serial Data Streams

Many telemetry systems require that multiple serial data streams be transported across a ground network. If the data streams must be regenerated as the original serial data streams, this regeneration must be done such that they have the same time correlation to each other. Oftentimes, the time stream itself must also be regenerated.

Telemetry data is often produced as serial data streams from the receivers/bit synchronizers. These streams may be generated at a test site, but processed at a separate telemetry processing center. In the past, long-distance serial circuits were used to make the connection. Wide area networks have replaced these circuits.

With WANs, there is network packetization of the data. The time correlation of multiple telemetry data streams and time data stream must be maintained. A range of techniques is used to accomplish collection, transport, and regeneration of serial data streams with time-data correlation.

Overview Diagram

The diagram below shows two sites where telemetry streams are generated and then sent over a WAN to another site where the data is regenerated and then processed.

Regeneration of Serial Data Streams with Time-Data Correlation

Network Transport of Serial Data Streams

Network transport requires packetization of the data, and this packetization necessarily adds some delay (or latency). With packetization, data bits are buffered at the remote network gateway over each packet period. During this packet period, the remote gateway collects data bits for each serial data stream and combines those bits, along with time information, into the next packet sent across the network.

The WAN induces some additional latency along with jitter. Jitter is the variation in the receipt of the packets on the other end. This jitter requires the local gateway to buffer the data to avoid a data underrun condition. In other words, it must hold up output of the serial data streams to ensure it doesn’t run out of data before the next packet is received.
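A minimal model of that hold-up buffer: packets are queued by their capture time and released only after a fixed playout delay has elapsed. This is a sketch that ignores clock drift and lost packets:

    import heapq
    from itertools import count

    class JitterBuffer:
        def __init__(self, playout_delay_s):
            self.delay = playout_delay_s
            self.heap = []      # ordered by original capture time
            self.seq = count()  # tie-breaker for equal timestamps

        def push(self, capture_time, packet):
            heapq.heappush(self.heap, (capture_time, next(self.seq), packet))

        def pop_ready(self, now):
            # Release packets captured at least `delay` seconds ago; the
            # hold-back absorbs network jitter so the output never underruns
            ready = []
            while self.heap and self.heap[0][0] <= now - self.delay:
                ready.append(heapq.heappop(self.heap)[2])
            return ready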

Real time systems may use a WAN FEC (Forward Error Correction) technique to allow UDP to be used as the network protocol. TCP reorders packets and retransmits lost packets, which leads to higher network latency and jitter. UDP is more efficient in that packets are transmitted once (aka fire and forget). A WAN FEC allows for data recovery in the event that packets are dropped.

Time-Division Multiplexing

One approach is to time-division multiplex each of the data streams at the remote gateway along with an IRIG signal. Each data-time stream occupies a pro-rata portion of the aggregate data stream. Synchronization markers are added to the TDM’d stream to create frames and this data is packetized and sent across the network to the local gateway.

The local gateway receives the packets, performs the master frame synchronization, and then extracts (demultiplexes) each of the data streams. The serial data/clock and time signals are regenerated in hardware.

This approach is very hardware/firmware centric. Any TDM multiplexing scheme introduces some sliding skew in the data as data is “held up” waiting for its slot to come around. It works best when the various serial data rates are integer multiples of each other, and more importantly, sourced from the same clock. Otherwise, there’s some amount of jitter in the time-data correlation.

Time-Based Regeneration

Time-Tagging of the Bits in a Packet

Another approach is to capture the time information necessary to regenerate the data, along with the data in each packet. For example, the time information might include the starting and ending time for the packet, along with the time offsets for the first and last bit of each stream. Another approach captures the absolute time for the first bit of each stream in the packet along with the duration for the number of bits in that packet. The time information for each serial data stream is independently captured.
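Given the first-bit-time-plus-duration scheme, the capture time of any bit in a packet follows by simple division. A sketch:

    def bit_times(first_bit_time, duration, bit_count):
        # Reconstruct each bit's capture time from the packet's time information
        bit_period = duration / bit_count
        return [first_bit_time + i * bit_period for i in range(bit_count)]

    # 8 bits captured over 8 ms starting at t = 100.000 s: one bit per millisecond
    assert abs(bit_times(100.0, 0.008, 8)[3] - 100.003) < 1e-9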

The local gateway uses hardware-level time-release capabilities and is configured to output the serial data streams with a time offset that accounts for the packetization and network delays. The data streams can be accurately regenerated along with any timing streams.

The network bandwidth needed to transmit the information for each data-time stream is slightly greater than that needed for a TDM approach, since the packets contain each data stream’s time information. The key benefit is that data from multiple sites can be regenerated with time correlation between them.

When properly implemented, this method can achieve very accurate time-data correlation, on the order of 20 microseconds.

Software-Based Time-Data Correlation

The serial data streams being regenerated are subsequently processed by systems/software that performs functions such as frame synchronization, time-tagging, decommutation, EU conversion, etc. In this process, the time information associated with each telemetry measurand is calculated and associated with the data.

Modern systems can perform the necessary time-data correlation without having to regenerate the serial data streams. This eliminates hardware and cost in the telemetry processing center. These modern software-based systems can effectively re-associate the time information in the processing.

Processing of Serial Data Streams with Time-Data Correlation in Software

For example, one of the first steps in processing is to time-tag the telemetry frames. The time information in the network packets from the remote gateway associates time with specific bits. Knowing the time hacks associated with a specific bit that precedes the sync pattern and with a bit that follows the sync pattern, the frame synchronizer can establish the time of the sync pattern by interpolating between these two bit times. With each telemetry frame time-tagged, the decommutation software can process and accurately time-tag each telemetry measurand.
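That interpolation is a one-liner once the two time hacks are in hand (the bit positions and times below are invented for illustration):

    def sync_time(bit_a, time_a, bit_b, time_b, sync_bit):
        # Interpolate between two time-hacked bit positions bracketing the sync
        bit_period = (time_b - time_a) / (bit_b - bit_a)
        return time_a + (sync_bit - bit_a) * bit_period

    # Hacks at bit 0 (t=50.0000 s) and bit 10000 (t=50.0100 s); a sync
    # pattern starting at bit 2500 is tagged t=50.0025 s
    assert abs(sync_time(0, 50.0, 10000, 50.01, 2500) - 50.0025) < 1e-9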

Since the data flows through the software processing as it arrives, these software-based approaches can reduce system latency by eliminating some of the buffering at the local gateway. Software-based approaches also afford greater flexibility.

The potential downside is interoperability with other systems. Serial data lines provide an easy (lowest common denominator) way to connect systems. Software-based systems may require data translation. Most software-based systems can be configured to regenerate physical outputs as serial data streams.

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

High Rate Processing: 10 Gbps Links – Tutorial

Introduction to High Rate Processing: 10 Gbps Links

Satellite downlinks are real time data streams. A real time data stream is one that is continuous and often has no means for retransmission. Fall behind in the receipt and processing of the data and there’s no catching back up because there’s never a break or gap in the data.

In the late 1970s, 1 Mbps was high speed for a real time data stream. Over the next thirty years, it was 5 Mbps, then 10 Mbps, then 100 Mbps, then 400 Mbps. You get the idea. Today downlink data rates in excess of 1 Gbps are common, and they are moving towards 10 Gbps.

The processing of high rate real time data streams (>1 Gbps) can be particularly challenging. Catching all of the data, decoding it, processing it, and getting it moved from point A to point B hinges on parallelism, transaction rates, and latency.

High Rate Data Flow

High Rate Processing of a 1-10 Gbps Satellite Downlink

Ingest

High rate downlinks rely on phase shift keying to split the data across multiple channels in the RF transmission. The demodulator often has a separate physical output for each channel. For example, a downlink that is QPSK modulated would have an I channel output and a Q channel output. Many high rate demodulators have four separate outputs. For example, a 3.2 Gbps downlink might arrive on four 800 Mbps channels.

On some legacy demodulators, these outputs are serial data/clock. At these data rates, the electrical protocol is differential ECL, usually Negative ECL (or NECL). There is a migration to using the UDP protocol over dedicated point-to-point Ethernets. There is a dedicated Ethernet connection for each channel.

The first step in the high rate processing is to ingest these data streams without any data loss. Serial streams are parallelized in hardware (e.g. turned into 32-bit words), blocked into data frames, and then DMA’d into memory. A network stack handles the UDP protocol.

One important design element for ingest is to have larger rather than smaller packets. Smaller packets require more network transactions per second. There is both header overhead and transaction overhead for each packet. For example, transmitting a 2 Gbps data stream in UDP packets with a packet payload size of 1,024 bytes requires ~244,000 packets per second. The same 2 Gbps data stream with a maximum UDP payload size of 9,000 bytes requires ~28,000 packets per second.
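The arithmetic behind those packet rates:

    def packets_per_second(stream_bps, payload_bytes):
        # UDP packets needed per second to carry a continuous stream
        return stream_bps / (payload_bytes * 8)

    print(round(packets_per_second(2e9, 1024)))   # ~244,000 with 1 KB payloads
    print(round(packets_per_second(2e9, 9000)))   # ~28,000 with jumbo payloads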

Data Recording

High rate processing systems immediately archive the raw data. If there’s a drop-out downstream, the data can be processed after the fact.

Data recording also relies on parallelism. The data is stored to disk arrays using RAID controllers. A RAID controller stripes data across multiple disks. There is no single low cost disk drive that sustains continuous writes at 10 Gbps, but an array of 24 disks only requires each disk drive to support writes at ~500 Mbps (accounting for some recording overhead).
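The same sizing arithmetic, with a notional 20% factor for the recording overhead mentioned above (the exact overhead value is our assumption):

    def per_disk_mbps(stream_gbps, disk_count, overhead=0.20):
        # Sustained write rate each drive must support when striping the array
        return stream_gbps * 1000 * (1 + overhead) / disk_count

    print(per_disk_mbps(10, 24))  # 500.0 Mbps per drive for a 10 Gbps stream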

Decoding and Data Merging

The spacecraft performs data encoding/interleaving of the high rate downlink to support decoding and error recovery on the ground. Encoding techniques include Convolutional Encoding, interleaving, Reed-Solomon, and Low-Density Parity Check (LDPC).

Data encoding techniques have asymmetric processing requirements—the processing needed to decode the data is much higher than the processing needed to encode the data. Given the constraints on processing in space, this asymmetry is a favorable one. Data decoding and error recovery on the ground side can consume significant processing resources. In many cases, the demodulator performs some or all of the work. When that is not the case, the high rate processor gets to fill in.

Here too, parallelism makes the processing possible. High rate systems can use low-cost Graphics Processing Units (GPUs) for block-oriented processing. These GPUs have thousands of processors and the decoding/error recovery algorithms work on blocks of data. Multiple blocks are processed in parallel on the GPU. This does induce latency in the system throughput as multiple data blocks must be collected so they can be sent to the GPU in groups.

Data merging may be required across the separate data channels, putting the separate data streams back into a composite data stream. The decoded and merged data may be stored (recorded to disk) in parallel with it being sent downstream for processing.

Network Transport

The processing of the data itself, turning it into useful information, often takes place away from the antenna. The high rate processor can rate buffer the stored data across a wide area network if the network bandwidth is constrained. In addition, the high rate processor can implement a forward error correction (FEC) technique over the WAN to reduce the network latency induced by network transmission errors.

Post Processing

Playback processing pulls the raw or partially processed data from the disks and performs the required post-processing to include any image or signal processing and the corresponding data analysis. With the data captured and archived, this post-processing can take place in non-real time.

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

Digital IF and Software Modems – Tutorial

Introduction to Digital IF and Software Modems

Modems for RF communications have traditionally performed their analog and signal processing in custom hardware and firmware. New technologies are enabling software modems to replace these legacy point-solution modems.

In the 1970s and 1980s, modems for satellite telemetry, ranging, and commanding were implemented as multiple hardware boxes. There were separate physical units for the RF converters, receiver, demodulator, bit synchronizer, frame synchronizer, command formatter, baseband modulator, and IF modulator.

In the 1990s and 2000s, the 70 MHz Intermediate Frequency became standard, and modems consolidated many of these functions into a single box. Hardware boards replaced the hardware boxes. They performed the signal processing with analog circuit cards, backed by FPGAs that implemented the digital signal processing. The modem’s software provided little more than the user interface and the network connection for data, monitor, and control.

As more and more of the functionality migrated to the FPGAs under control of the software, these modems became known as software-defined modems.

Today digital IF, waveform-agnostic signal converters, multi-core processors, and waveform-specific applications are making possible true software modems.

Digital IF

Digital IF allows the majority of that hardware and FPGA firmware processing to be replaced with software. This has key advantages, but first let’s explore what Digital IF is.

IF is short for Intermediate Frequency. Rather than having an RF or 70 MHz analog signal as the intermediate frequency, Digital IF provides a digitized sample representation of that same signal. The digitized samples can then be processed entirely in software.

In addition, transport of the digitized samples can be over a much longer distance than a traditional RF or baseband analog signal. A Digital IF interface can flow via Ethernet, whether that be a local area network or possibly even a wide area network.

There are some important reality checks. The digitized samples must be at a frequency and resolution sufficient to reliably perform the digital signal processing. For example, 40 Msamples/second at 12 bits each may be required to process a 5 Mbps telemetry downlink with 10 MHz of bandwidth. So, there’s high throughput and high network loading that most often requires the network connection carrying the Digital IF be dedicated.
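The network load follows directly from the sample rate and resolution (payload only, ignoring packet overhead):

    def digital_if_mbytes_per_sec(msamples_per_sec, bits_per_sample):
        # Raw sample throughput of a Digital IF stream
        return msamples_per_sec * bits_per_sample / 8

    print(digital_if_mbytes_per_sec(40, 12))  # 60.0 Mbytes/s for the example above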

Signal Conversion

With Digital IF, there’s still hardware that performs the analog signal to digital sample conversion for received links and the digital sample to analog signal conversion for transmitted links. The Signal Converter has RF or IF signals on one side and Ethernet interfaces on the other.

With a true software modem, the Signal Converter is “waveform agnostic.” Waveform agnostic means that it requires no knowledge of the type of modulation and demodulation. To it, all signals are the same. Any waveform-specific processing is performed in the software that’s across the network.

Signal converters that are waveform agnostic can be deployed in any satellite ground system, supporting the waveforms of current and future unknown satellites.

Multi-Core Processors

Processing 60 Mbytes of sample data every second and doing the complex computations needed for a software modem is no small task. It’s one that would have swamped most computer systems until a few years ago. But the multi-core processors in today’s high-performance, low-cost servers have changed all that.

Multi-core processors are advantageous for signal processing in several areas. First, there’s their sheer performance when doing floating-point arithmetic. Second, the multi-core processors include instruction primitives that speed up the sophisticated math even more. Third, processing tasks can be assigned to a core and that core dedicated to that task. This is important for tasks that require near real time execution and/or have high throughput requirements.

Getting all of this power and functionality in a commercial, low-cost server has enabled the software modem.

Waveform-Specific Applications

Many previous generation modems combined FPGA firmware and software. These modems are configurable in that the modem’s processing can be configured for the waveform, data rates, data formats, and other parameters. But they are essentially a monolithic firmware/software implementation that has all of the features and functions in one hard-to-manage configuration. These are software-defined modems, and they necessitate a long regression test period with each new release.

True software modems are implemented with separate software applications for each unique waveform. In other words, there’s an application specific to a type or family of spacecraft. This has the advantage of allowing customers to only deploy the waveform applications they need and update them as needed. An App is started and stopped for each contact.

These “smaller” Apps are independent, and thus more reliable, more mature, and easier to manage. Plus, adding a new waveform-specific App does not invalidate any of the existing Apps.

Reduced Life Cycle Costs

True software modems are dramatically reducing the up front and on-going costs of the modems used for satellite telemetry, ranging, and commanding.

Hosting the Apps on commercial servers has two cost benefits. First, the initial cost is much lower than the custom industrial computers used with point solution modems. Second, the software Apps can be easily migrated to new server platforms, eliminating the need to replace/repurchase the full modem.

The up-front costs are also lower because the vendor’s cost to produce and support the products is lower. They may also be lower because the satellite operator only needs to purchase the waveform Apps specific to their satellites.

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802

Satellite Telemetry Time-Tagging – Tutorial

Introduction to Satellite Telemetry Time-Tagging

Time-tagging of satellite telemetry assigns a timestamp to each minor frame or transfer frame. Depending on the accuracy requirements, a range of techniques can be used to correlate time with each telemetry frame.

Mission control center software uses the ground arrival time of downlink telemetry to archive telemetry measurands, detect missing frames, support anomaly resolution, and in some cases, calculate the on-orbit time for the telemetry points. Most satellite systems embed on-orbit time within the downlink data, and this time information is used for the payload and state-of-health processing.

Depending on the operational use and required accuracy, the time-tagging of satellite telemetry can range from simple to sophisticated. Simple schemes assign a time to when the decrypted and synchronized telemetry frame is received by the T&C software. More sophisticated methods assign a timestamp that matches the ground arrival time of the frame’s first bit with microsecond precision.

Telemetry Flow

The flow of telemetry data from spacecraft to T&C software follows a path with multiple processing steps, transport protocols, and format conversions. Understanding each of these helps with understanding telemetry time-tagging.

Telemetry data originates on the spacecraft. The telemetry frames that contain this data are contiguous and of a fixed length, with each frame having a synchronization pattern at the beginning. The telemetry may be encrypted and then modulated for RF transmission.

The RF signal is received by a ground antenna where it is frequency converted and input to the modem. The modem demodulates the signal and bit synchronizes the data stream to recover the telemetry data bits.

Key issue -- If the downlink is encrypted, the modem has no ability to detect where each frame starts/ends in the steady stream of bits. The modem outputs the telemetry data in network packets at regular intervals or as a bit stream that is then formatted into network packets. These network packets pass over a wide area network to the control center. Here the data is passed through a decryptor and then to the front end processor (FEP).

With the data now decrypted, the FEP performs frame synchronization for each frame. These frames, along with their time-tags, are passed in network packets to the T&C software.

Simple Time-Tagging

A simple approach to time-tagging is sufficient for many satellite systems. The FEP’s system time is synchronized via NTP (Network Time Protocol), and the FEP assigns the time to each minor frame as the frames are synchronized and sent to the T&C software. This timestamp is the time associated with the frame being processed. It does not represent the ground arrival time, which might be 1-2 seconds earlier in time.

The time tag does provide an accurate way to assign a relative time to each telemetry frame and to the telemetry measurands as they are repeatedly received in subsequent frames.

Legacy Time-Tagging

Many legacy systems transport the IRIG time data stream from the antenna site with the serial telemetry data. These streams are multiplexed together and then demultiplexed at the control center with some degree of time-data correlation. Time-data correlation means that the regenerated data streams and IRIG time data streams have the same time relationship to each other as when they were first multiplexed.

The IRIG time stream is sent to the FEP in parallel with the telemetry data flowing through the decryptor and then to the FEP’s frame synchronizer. The IRIG time stream is decoded in the FEP and used as the time source for time-tagging the telemetry frames.

This time-tagging approach more closely approximates the ground arrival time, although it is skewed in time by the processing delay and jitter associated with the decryption and frame synchronization. The deterministic delays can be removed in the ground processing software.

Time Sources

The two most commonly used time sources are IRIG-B and NTP.

IRIG stands for Inter-Range Instrumentation Group, and the IRIG standard defines various time formats for transferring timing information. IRIG-B is the format most often used in satellite ground systems. IRIG-B time generators are synchronized to the GPS timing signal, allowing them to effectively be synchronized across multiple physical locations. The IRIG-B output is an amplitude-modulated signal that must be decoded by an IRIG Time Processor in the FEP. The IRIG signal provides an accurate indicator at one second intervals, and the FEP’s Time Processor runs a high stability internal oscillator and synchronizes its internal time counters to this time hack to provide sub-microsecond accuracies.

NTP is the acronym for Network Time Protocol. NTP enables clock synchronization between computer systems on a common network, synchronizing their system clocks to Coordinated Universal Time (UTC). NTP enables the various servers in the satellite ground system to maintain their system time to within a few milliseconds of each other.

High Precision, High Accuracy Time-Tagging

Modern ground architectures may require a very precise timestamp, taking out the skew and jitter associated with transport and processing delay. The time-tagging approach must determine the exact arrival time at the remote antenna site of the leading edge of the first bit of each telemetry frame.

The modem outputs snippets of the telemetry data in packets and provides a precise timestamp for the first bit in each telemetry packet and the duration of the data in that packet. This time and duration information traverses the ground network in the same packet as the telemetry data snippet.

 

Telemetry Processing Timing Diagram

The network packets arrive at the control center after transport across the WAN. The Network Gateway accounts for the WAN delay, jitter in the packet arrival rate, and the potential for lost packets needing to be re-transmitted.

At the control center, the network gateway decouples the first-bit time and packet duration information from the telemetry data. The telemetry data is decrypted and then re-joined in the FEP with the timing information, thanks to synchronization between the Network Gateway and FEP.

As part of the frame synchronization process, the FEP interpolates between two time-stamps to determine the offset, and therefore, the time that corresponds to the first bit of the actual telemetry frame. This time-stamp is then sent with the telemetry frame to the T&C software.

Note that time-tag accuracies on the order of 50 microseconds and frame-to-frame variation of less than 1 microsecond can be achieved with this approach.

Can we help? AMERGINT’s expertise is available to assist in your systems engineering and design

Randy Culver
randy@amergint.com
719-522-2802