North American Vs. European Digital Transmission Signaling Standards:

A Technical and Historical Perspective

 

Sean Butler 

ICSA 855 Telecommunications Policy and Standards

 

May 1997

 

 

 

 

 

 

(c) Sean Butler, 1997

 

 

 

Introduction

There are two major standards in the international realm of digital transmission: one used in North America and one used in Europe and most of the rest of the world. (Japan has a standard that differs from both, though it is very similar to the North American one; it will not be discussed further in this paper.) The existence of multiple standards causes interconnection problems at international borders. Conversions from one standard to the other must occur, so it naturally seems that it would be simpler, and therefore more economical, if there were just one standard. Nevertheless, both standards persist and will most likely exist for a long time to come.

 

This paper will review the two major standards of North America and Europe, first by exploring the technical details and implementations of each, then discussing how and why they were developed, why the differences originally came into being, and why they still exist.

 

In order to review the historical reasons for the existence of two different standards, a detailed technical review of each is necessary. Following the technical review, the advantages and disadvantages of each implementation will be given. Then there will be a discussion of why, when one standard is generally considered better than the other, the lesser standard remains in use. Finally, there is a short exploration of the future possibilities of a single standard.

 

 

 

Technical Aspects of both Standards

A technical review of both the North American signaling standard, commonly called T1, and the European standard, commonly called E1, follows. The review is quite detailed in its technical analysis, but this is necessary for an understanding of the discussion of advantages and disadvantages of each standard that follows. Most of the following technical information was found in the Larscom technical manuals for two of that company’s products (1 and 2).

 

 

Technical Aspects of North American Standards

T1 Networks

The T1 digital transmission system, also known as Digital Signal 1 (DS1) and created by AT&T, is the primary telecommunications system used in North America. A T1 facility provides full-duplex transmission at 1.544 megabits per second (Mbps). This bandwidth is divided into 8 kilobits per second (kbps) of overhead and 1.536 Mbps of user information. For digitized voice applications, the information bandwidth typically consists of 24 multiplexed 64 kbps channels. For the transmission of data, a T1 facility may be channelized the same as it is for voice, or it may carry from one to several hundred multiplexed signals on an unchannelized basis.

 

Today’s T1 networks generally intermix T1 termination points at customer premises and at the carrier’s Central Office, which creates a hybrid public/private environment. The high bandwidth of T1 lines is suitable for integrating voice, data, facsimile, and other such services that were previously on separate networks into a single network.

 

 

Customer Premise Equipment

Many types of equipment are used at customer premises to enable interconnections to a T1 carrier. Digital Terminating Equipment (DTE) provides the source for the transmitted signal and the destination for the received signal, and includes equipment such as PBXs, multiplexers, computers, and channel banks.

 

The customer site needs a Data Service Unit (DSU), which converts signals transmitted at rates less than 1.544 Mbps (also known as subrate signals) to a T1 signal. The DSU interfaces with a Channel Service Unit (CSU), which provides termination and interface functions such as electrical interfacing, surge protection, keep-alive signals, and loopback facilities. The CSU connects to the public T1 network.

 

 

T1 Signal Characteristics

The T1 signal is bipolar, meaning alternating pulses are of opposite polarity. The digitized signal is encoded using pulse code modulation (PCM) and time-division multiplexing (TDM). Each time slot of the signal is 648 nanoseconds, which gives 1,544,000 time slots per second. Data is encoded by the presence or absence of a pulse. The pulses have one half the duration of the time slot and an amplitude of 3 volts. When a pulse is present, the time slot’s data is a ONE; when it is absent, the data is a ZERO, thus implementing the binary encoding scheme.

 

T1 signals use Alternate Mark Inversion (AMI) line coding, which means that consecutive pulses on the signal are of opposite polarity. If two consecutive pulses with the same polarity are detected by any equipment in the circuit, a transmission error has occurred.
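The AMI rules just described can be sketched in a few lines of Python. This is purely an illustration of the coding logic, not part of any standard implementation; +1 and -1 stand for pulses of opposite polarity and 0 for no pulse.

```python
def ami_encode(bits):
    """Encode a list of bits with AMI: each ONE alternates pulse polarity,
    each ZERO sends no pulse."""
    polarity = 1
    pulses = []
    for b in bits:
        if b == 1:
            pulses.append(polarity)
            polarity = -polarity   # the next ONE flips polarity
        else:
            pulses.append(0)
    return pulses

def has_bipolar_violation(pulses):
    """True if two consecutive pulses share the same polarity,
    which under plain AMI indicates a transmission error."""
    last = 0
    for p in pulses:
        if p != 0:
            if p == last:
                return True
            last = p
    return False
```

For example, `ami_encode([1, 0, 1, 1])` yields `[1, 0, -1, 1]`, and that sequence passes the violation check, while `[1, 0, 1]` (two positive pulses in a row) fails it.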

 

 

Transmission Facilities

T1 signals are generally transmitted over twisted pair copper wire, which gives a signal loss of approximately 5 to 6 dB per 1,000 feet. This signal loss requires repeaters every 6,000 feet along the wire to compensate for the degraded signal and ensure an adequate signal level at the T1 termination points. T1 signals can also be transmitted via fiber optic systems, microwave, satellite, and other media.

Pulse Density

In order for T1 equipment (such as repeaters, CSUs, etc.) to interpret and regenerate the signal, it must be able to determine the time slots based on the pulses received. But since pulses occur only when a ONE is transmitted, too many consecutive ZEROs can cause timing and synchronization problems. Therefore, AT&T defined a standard specifying the number of ONEs that must be received for various numbers of ZEROs. This is known as a pulse density requirement, since a certain density of pulses (ONEs) over a given number of time slots is guaranteed.

 

The formula developed by AT&T specifies how many ONEs are needed for a given number of ZEROs, and one implication of the formula is that no more than 15 consecutive ZEROs may be received without a ONE. This requirement was very important in the past, when repeaters would become hung and need to be manually reset. Although today’s repeaters can generally handle many more consecutive ZEROs before having problems, the AT&T specification is still in effect.
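The 15-zero limit is easy to state as a check on a bit stream. The sketch below is illustrative only; real equipment enforces the rule in hardware as part of line coding.

```python
def violates_pulse_density(bits, max_zeros=15):
    """Return True if the bit stream contains a run of more than
    `max_zeros` consecutive ZEROs (the T1 limit is 15)."""
    run = 0
    for b in bits:
        run = run + 1 if b == 0 else 0
        if run > max_zeros:
            return True
    return False
```

A stream with exactly 15 consecutive ZEROs is still legal; a 16th consecutive ZERO violates the requirement.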

 

This pulse density requirement may be implemented by the CSU/DSU or the DTE, and can be implemented in various ways. For example, a certain bit position may be reserved for the transmission of a pulse (a ONE), or a ZERO may be overwritten with a pulse in the middle of a data stream (a technique known as bit robbing).

 

 

Clear Channel Capability

The problem with the bit robbing solution described above is that it corrupts the original user data: by overwriting a ZERO with a ONE, the actual data portion of the transmission is changed. In digitized voice transmissions, this change is not noticeable, due to the sampling rate (8,000 samples per second) and the fact that very few bits need to be changed; when the digitized voice is converted back into analog form, the changes are insignificant and cannot be detected by the human ear. In data communications, however, any change to the user information is devastating. To eliminate all data corruption, bit robbing cannot be used. One alternative is to dedicate part of the bandwidth to a pulse; for example, every eighth bit may be used.

 

Another possible solution is a method of data encoding that ensures pulse density while allowing the entire bandwidth to be used for data. This is termed clear channel capability. The method most often used to enable clear channel capability in T1 networks is a data encoding scheme called Bipolar 8-Zero Substitution (B8ZS). This is a slight modification of the AMI encoding scheme that replaces any eight consecutive ZEROs with a fixed code containing two bipolar violations, that is, two places within the eight-bit slot where consecutive pulses share the same polarity (normally a coding violation). The sending equipment encodes the violations into its transmission whenever it would otherwise send eight consecutive ZEROs, and the receiving equipment recognizes the code and changes it back to eight ZEROs.
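A simplified B8ZS encoder may make the substitution concrete. The sketch assumes the conventional 000VB0VB substitution pattern, where V is a violation pulse repeating the previous pulse's polarity and B is a normally alternating pulse, and it assumes, purely as a starting convention, that the pulse preceding the stream was negative. A real implementation would track line state continuously.

```python
def b8zs_encode(bits):
    """AMI-encode a list of bits, replacing each run of eight ZEROs
    with the 000VB0VB substitution code (two bipolar violations)."""
    out = []
    polarity = 1          # polarity of the next normal pulse
    i = 0
    while i < len(bits):
        if bits[i:i + 8] == [0] * 8:
            last = -polarity     # polarity of the most recent pulse sent
            # 000VB0VB: each V repeats the preceding pulse's polarity
            # (a violation); each B alternates normally.
            out += [0, 0, 0, last, -last, 0, -last, last]
            polarity = -last     # normal alternation resumes after the last B
            i += 8
        elif bits[i] == 1:
            out.append(polarity)
            polarity = -polarity
            i += 1
        else:
            out.append(0)
            i += 1
    return out
```

Encoding a ONE, eight ZEROs, then a ONE produces two deliberate violations (at the first V and the second V), which a B8ZS-aware receiver maps back to eight ZEROs, while a plain-AMI receiver would flag them as errors.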

 

More on such schemes will follow below, as they become quite important when discussing advantages and disadvantages of T1 vs. E1.

 

Framing Synchronization

The data in T1 signals is grouped into frames of 192 data bits plus 1 framing bit, for a total of 193 bits. The framing bits occur in a fixed pattern of ONEs and ZEROs, which network equipment uses to synchronize to the data stream. If two or more of five consecutive framing bits are in error, an Out of Frame error occurs, in which case the CSU will issue an alarm. The alarms are recorded for the circuit provider to help troubleshoot such problems.

 

 

 

Framing Formats

The framing bits discussed above are used to create the framing formats. Since there are 192 information bits preceded by a single framing bit, 8 kbps of the total 1.544 Mbps is overhead. Therefore the actual rate left for user information is 1.536 Mbps.

 

The first main framing format is called D4, or Super Frame (SF). One super frame is defined as twelve 193-bit frames (2,316 bits), and all 12 overhead bits are used for frame synchronization. Within each of the 12 frames, each of the 24 channels of the T1 is allotted 8 consecutive bits (24 x 8 = 192), and these 192 bits are preceded by the frame synchronization bit.
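The D4/Super Frame figures above all follow from simple arithmetic, reproduced in this small illustrative sketch:

```python
# Arithmetic behind the T1 D4 / Super Frame figures quoted above.
BITS_PER_FRAME = 193            # 24 channels x 8 bits + 1 framing bit
FRAMES_PER_SEC = 8000           # one frame per voice sample

line_rate = BITS_PER_FRAME * FRAMES_PER_SEC   # total T1 rate: 1,544,000 bps
overhead = 1 * FRAMES_PER_SEC                 # framing bits: 8,000 bps
user_rate = 24 * 8 * FRAMES_PER_SEC           # information rate: 1,536,000 bps
superframe_bits = 12 * BITS_PER_FRAME         # one super frame: 2,316 bits
```

The 8 kbps of overhead is exactly the difference between the 1.544 Mbps line rate and the 1.536 Mbps available for user information.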

 

A primary drawback of the D4 standard was that it was not possible to monitor a circuit live: all traffic had to be halted and test signals sent across the circuit. However, as synchronization techniques improved, the framing requirements became considerably less than the 8 kbps used with D4, and the Extended Super Frame (ESF) format was developed. ESF allows monitoring of a live circuit without loss of information bandwidth.

 

ESF extends the super frame to 24 single frames. The 8 kbps of overhead is then divided into three channels: a 2 kbps channel reserved for framing, a 2 kbps channel reserved for a Cyclic Redundancy Check (CRC), used to detect transmission errors, and a 4 kbps channel used for diagnostics.
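The three-way split of the overhead falls out of how the 24 framing bits in each extended super frame are allocated. The sketch below assumes the conventional ESF allocation of 6 framing-pattern bits, 6 CRC bits, and 12 data-link bits per 24-frame super frame:

```python
# How the 8 kbps of ESF overhead divides into the three channels
# described above, assuming the conventional 6/6/12 allocation of the
# 24 F-bits in each extended super frame.
F_BIT_RATE = 8000                       # one F-bit per 193-bit frame

framing_rate = F_BIT_RATE * 6 // 24     # framing pattern: 2,000 bps
crc_rate = F_BIT_RATE * 6 // 24         # CRC: 2,000 bps
datalink_rate = F_BIT_RATE * 12 // 24   # diagnostics/data link: 4,000 bps
```

The three channels together still account for the full 8 kbps of overhead, so no information bandwidth is lost relative to D4.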


The T1 Hierarchy

The T1 standard has a hierarchy of data rates: channels from T1 circuits can be multiplexed to create various speeds for a transmission facility. The following table, from Digital and Analog Communication Systems (4), shows this hierarchy:

 

 

Digital Signal Number   Bit Rate (Mbps)   No. of 64 kbps Voice Channels   Transmission Media

DS-0          0.064        1       wire pairs
DS-1          1.544       24       wire pairs
DS-1C         3.152       48       wire pairs
DS-2          6.312       96       wire pairs, fiber
DS-3         44.736      672       coax, radio, fiber
DS-3C        90.254    1,344      radio, fiber
DS-4E       139.264    2,016      radio, fiber, coax
DS-4        274.176    4,032      coax, fiber
DS-432      432.000    6,048      fiber
DS-5        560.160    8,064      coax, fiber
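As a quick sanity check on the voice-channel column, each level of the hierarchy carries a whole number of DS-1 equivalents. The small illustrative script below is not from the cited source:

```python
# Voice-channel counts from the T1 hierarchy table, expressed as
# DS-1 equivalents (each DS-1 carries 24 voice channels).
channels = {"DS-1": 24, "DS-1C": 48, "DS-2": 96, "DS-3": 672, "DS-4": 4032}

ds1_equivalents = {name: count // 24 for name, count in channels.items()}
# e.g. a DS-3 multiplexes the equivalent of 28 DS-1 circuits (672 / 24)
```
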

 

 

 

Technical Aspects of European Standards

E1 Networks

The E1 digital transmission system provides full-duplex transmission at 2.048 Mbps. The bandwidth is divided into 32 multiplexed 64 kbps channels, of which either one or two are reserved for framing and signaling, depending on the frame format. For digitized voice, the remaining channels each carry one 64 kbps voice signal, whereas for data, the facility may be channelized the same as with voice, or may carry from one to several hundred signals on an unchannelized basis.

 

 

Customer Premise Equipment

The Customer Premise Equipment used for E1 networks is of the same types as that discussed above for T1 networks. However, though the devices perform the same functions, they are in fact different, since they must operate on a different signaling standard.

 

 

 

 

 

E1 Signal Characteristics

Just as with T1, E1 signals are bipolar, meaning alternating pulses are of opposite polarity, and E1 also uses Pulse Code Modulation (PCM) and Time Division Multiplexing (TDM). E1 uses a smaller time slot, though: 488 nanoseconds (ns) vs. 648 ns for T1, which gives 2,048,000 time slots per second vs. 1,544,000 for T1. And just as with T1, pulses have one half the duration of the time slot and indicate a ONE in the binary transmission. E1 also incorporates Alternate Mark Inversion (AMI) line coding, as does T1.

 

 

Transmission Facilities

Transmission facilities for E1 signals are identical to those for T1: standard twisted pair wire in most cases, with 5 to 6 dB of loss per 1,000 feet. There is a discrepancy in the literature on the use of repeaters. The Larscom Technical Manual (2) states that repeaters are used every 6,000 feet for E1, identical to the specification for T1, while the Dictionary of PC Hardware and Data Communications Terms (3) states that, because of E1’s higher speed, repeaters on copper links are required more often than every 6,000 feet. The Dictionary is most likely accurate, as attenuation of electrical and optical signals increases at higher signaling rates. Unfortunately, the CCITT documentation containing the original and official specifications could not be obtained, due to the high price the CCITT charges for it.

 

E1 signals can also be carried by satellites, microwaves, fiber optics, coaxial cable, etc., just as with T1.

 

 

Pulse Density

As with T1, repeaters and other network terminal equipment must be able to determine time slots based on the pulses received in the signal. However, E1 has no explicit pulse density requirement like T1’s, because ones density is automatically maintained by High Density Bipolar 3 (HDB3) coding.

 

HDB3 replaces each string of four ZEROs with a special sequence containing an intentional bipolar violation, similar to the B8ZS solution discussed above for T1 signaling: two consecutive pulses have the same polarity, in violation of the normal alternation of pulses. The originating network equipment inserts the special sequence, and the receiving equipment removes it. The main difference between B8ZS and HDB3 is that HDB3 is automatically used by all E1 equipment, whereas B8ZS is used only on certain T1 circuits, so the equipment on those circuits must be capable of it and configured for it.
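As with B8ZS above, a simplified encoder may make HDB3 concrete. The sketch assumes the conventional substitution rule: four ZEROs become 000V or B00V, with the form chosen so that successive violation pulses alternate in polarity (avoiding a DC offset on the line). It is an illustration, not a production line coder.

```python
def hdb3_encode(bits):
    """AMI-encode a list of bits, replacing each run of four ZEROs with
    000V or B00V so that successive V pulses alternate in polarity."""
    out = []
    polarity = 1          # polarity of the next normal (B) pulse
    pulses_since_v = 0    # normal pulses sent since the last violation
    i = 0
    while i < len(bits):
        if bits[i:i + 4] == [0] * 4:
            if pulses_since_v % 2 == 1:
                # odd count: 000V, where V repeats the last pulse's polarity
                v = -polarity
                out += [0, 0, 0, v]
            else:
                # even count: B00V, where B is a normal pulse and V repeats it
                v = polarity
                out += [v, 0, 0, v]
            polarity = -v            # normal alternation resumes after V
            pulses_since_v = 0
            i += 4
        elif bits[i] == 1:
            out.append(polarity)
            polarity = -polarity
            pulses_since_v += 1
            i += 1
        else:
            out.append(0)
            i += 1
    return out
```

For instance, a single ONE followed by four ZEROs yields 000V (one violation), while two ONEs followed by four ZEROs yields B00V, keeping the violations balanced.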


E1 Framing Synchronization

With E1 signaling, data is grouped into frames of 256 bits. Each frame consists of 32 8-bit time slots, and 8,000 frames are transmitted each second (8,000 x 256 = 2,048,000 bits, or 2.048 Mbps). This provides 32 64-kbps channels. (It is interesting to note that the 64 kbps channel of E1 is the same as the DS0 signal of the North American T1 hierarchy. This is because both evolved from the requirements of digitized voice standards.)

 

Framing information is carried in time slot 0 (TS0) while signaling information is carried in time slot 16 (TS16). The remaining 30 time slots are for user information. A group of 16 frames is a multiframe.
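The E1 framing figures follow from the same kind of arithmetic as the T1 figures earlier; this illustrative sketch reproduces them:

```python
# Arithmetic behind the E1 framing figures quoted above.
E1_BITS_PER_FRAME = 256          # 32 time slots x 8 bits
E1_FRAMES_PER_SEC = 8000         # one frame per voice sample

e1_line_rate = E1_BITS_PER_FRAME * E1_FRAMES_PER_SEC   # 2,048,000 bps
slot_rate = 8 * E1_FRAMES_PER_SEC                      # 64,000 bps per slot
e1_user_rate = 30 * slot_rate    # 1,920,000 bps with TS0 and TS16 reserved
```

With TS0 carrying framing and TS16 carrying signaling, 30 of the 32 time slots (1.920 Mbps) remain for user information.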

 

 

Framing Formats

There are two main framing formats used in E1 signaling, TS0 and TS16. As the names imply, time slot 0 or time slot 16 is used in each to provide the framing pattern which allows the receiving E1 equipment to synchronize on the signal correctly.

 

TS16 was designed to provide signaling information to a public switched network, in which case individual 64 kbps time slots can be routed independently through the network.

 

TS0 has two main forms, one with a 4-bit CRC check and one without. Since frame synchronization does not require all 8 bits of every TS0 in every frame, the extra bits are used for other functions, such as frame loss alarms and data links that transmit control and status information.

 

 

 

The E1 Hierarchy

The E1 standard also has a hierarchy of data rates, as channels from individual E1 circuits can be multiplexed to create various speeds for a transmission facility. The E1 table below was compiled from both Digital Transmission Hierarchies (5) and Digital and Analog Communication Systems (4), as a complete table with all of this information was not found in a single resource:

 

Digital Signal Number   Bit Rate (Mbps)   No. of 64 kbps Voice Channels   Transmission Media

DS-0          0.064        1       wire pairs
E1            2.048       30       wire pairs
E2            8.448      120       wire pairs, fiber
E3           34.368      480       coax, radio, fiber
E4          139.264    1,920      coax, radio, fiber
E5          565.148    7,680      coax, fiber
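Unlike the somewhat irregular T1 ladder, the E1 hierarchy quadruples cleanly at each level, which the following illustrative check makes explicit:

```python
# The E1 hierarchy is a clean x4 ladder in voice channels: each level
# multiplexes four circuits of the level below.
e_channels = [30, 120, 480, 1920, 7680]   # E1 through E5
assert all(e_channels[i + 1] == 4 * e_channels[i]
           for i in range(len(e_channels) - 1))
```
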

 

 

 

Advantages and Disadvantages of Each Standard

The E1 standard appears to have several advantages over T1. The most obvious is that there are more channels per circuit, so less wire or fiber needs to be laid to obtain the same number of channels.

 

Also, 32 channels is a power of two, while 24 is not, so E1 generally interfaces better with computer equipment. Since the standards are based on binary digitization (i.e., ones and zeros), this matters, as much of the equipment in the circuit is computer based.

 

Another advantage E1 has over T1 is that, because there is a separate time slot for signaling, a single channel runs clear at 64 kbps, vs. 56 kbps in the US due to the robbed signaling bit. Besides the additional 8 kbps available per channel, E1 has no issues with maintaining pulse density. In the US, B8ZS encoding must be used for data connections, and since it is not used on the majority of circuits, there are often misconfigurations during initial setup. With E1, pulse density is built into every circuit via HDB3 encoding, so there are fewer chances for errors in circuit engineering.
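The per-channel difference described above comes down to one unusable bit per sample. With robbed-bit signaling, the least significant bit of each 8-bit sample cannot be trusted for data, leaving 7 usable bits; an E1 time slot keeps all 8. A quick illustrative calculation:

```python
# Per-channel data rates implied by robbed-bit T1 vs. clear-channel E1.
SAMPLES_PER_SEC = 8000

t1_robbed_rate = 7 * SAMPLES_PER_SEC    # 56,000 bps usable per channel
e1_clear_rate = 8 * SAMPLES_PER_SEC     # 64,000 bps per channel
per_channel_gain = e1_clear_rate - t1_robbed_rate   # 8,000 bps per channel
```
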

 

 

 

Historical Aspects of the Two Standards

Since the E1 standard seems to be a better method of digital transmission, why does T1 even exist? This section will look at how and why the standards developed as they did, and why the North American standard still exists in spite of its apparent inferiority.

 

 

 

How the Standards Developed

The T1 standard was developed in the United States in the early 1960’s. There are many reasons for the technical choices behind the standards, and only a few major ones will be discussed here.

 

First, the American system in use prior to T1’s development employed analog frequency division multiplexing (FDM), and any new technology deployed had to interface seamlessly with it. The FDM scheme in use required that the channel count be a multiple of 12. Twelve channels were too few to be economically viable, and 36 channels were not physically possible, because digitized voice standards required 8,000 samples per second. To carry 36 channels at that rate, the speed of the signals on the wire would have been much higher, and the wire standards and repeater spacing requirements of the time would have meant laying new wire and installing new repeaters at closer intervals. Neither requirement was economically viable given the installed base of wire and the existing manhole spacing for repeaters. Therefore, 24 was the only possible choice.

 

In addition, the T1 pulse density requirement was developed before data transmission began in earnest. Therefore, no one realized that the bit robbing scheme employed by T1, used to ensure that too many consecutive ZEROs did not throw a repeater out of synchronization, would become a problem (as it is in data communications). By the time E1 was being developed, data transmission had become popular, so methods other than bit robbing could be explored to ensure a given pulse density.

 

 

Why T1 is still in use in North America

The US aggressively deployed the T1 standard, so the 1.544 Mbps rate was incorporated into repeaters quickly. By 1965 there were 100,000 active channels, and over 1,000,000 by 1972. The CCITT studied the PCM standards for over eight years before settling, in 1972, on two standards (it chose both T1 and E1!). By that time, with over 1,000,000 circuits, there was no economical way for North America to change, as all of its repeaters would have had to be replaced. The CCITT did require that the 24-channel T1 system convert to 30 channels for international connections, so Europe could essentially ignore the North American standard: the burden for interconnection fell on North America, not Europe.

 

It should be noted that it was not easy for all of Europe to adopt the E1 standard. The UK, and Italy to a lesser extent, already had a fair amount of 24-channel T1 equipment in place before the 1972 CCITT agreement to use the 32-channel (30 voice) E1 standard throughout Europe. As part of the E1 agreement, the UK had to remove nearly 7,000 channels of T1 equipment prior to 1978.

 

 

The Future

The future is clearly not bright for a single standard. As technology has advanced and higher speeds have become necessary, each standard has created a hierarchy of its own, in which circuits of lower speeds are multiplexed together to form higher speed connections. Because of this, the installed base of each standard has continued to grow, and the investment in the networks is now too great for one standard to be converted to the other.

 

Also, even if a new, better standard were developed, it would most likely never be used to replace either existing standard, because of the same economic roadblocks. Although new, higher-level international standards are being developed in telecommunications (such as ATM), the physical-layer circuits for long-distance transmission have yet to be addressed. Their installation and maintenance differ greatly from the higher layers: the distances involved in circuits are long, the installed wire and fiber base is vast, and the number of repeaters already installed and working is high. Whereas computer or telephony equipment at the higher layers often needs to be upgraded, the installed physical base does not. In practice, higher speeds can be obtained by using more of the fiber, wire, and repeaters already installed in anticipation of future growth, rather than by installing new wire and faster repeaters.

 

 

 

 

 

Conclusions

This paper reviewed the technical aspects of the North American T1 and European E1 digital transmission standards. It then reviewed the advantages of E1 signaling over T1 signaling, and discussed why a lesser technology (T1) still exists even after the development of a better one (E1). The primary reason is that the E1 standard came into being nearly ten years after North America began deploying T1, and therefore any change in North America would have had great financial repercussions. As a result, two standards exist, and interconnections at international borders are not straightforward. The prospects for a single standard in the future are also bleak, because of the same economic roadblocks: the installed base of equipment using each technology is too great to change from one standard to the other, or even to a new standard, given that multiplexing of the current circuits can always provide faster speeds (and because there is an abundance of wire and fiber that was installed in anticipation of future growth, so supply will not run out for many years to come).

 

 

 

 

References and Resources

(1) “Access-T: Multiport DSU/CSU System (Technical Manual).” Larscom Incorporated,

4600 Patrick Henry Drive, Santa Clara CA 95054.

 

(2) “E1 NSM Product Technical Manual.” Larscom Incorporated, 4600 Patrick Henry Drive,

Santa Clara CA 95054.

 

(3) “Dictionary of PC Hardware and Data Communications Terms.” 1996. URL available at:

http://www.ora.com/reference/dictionary/terms/E/E1.htm

 

(4) Couch, Leon W. “Digital and Analog Communication Systems,” 4th ed. Macmillan

Publishing Company. 1993.

 

(5) “Digital Transmission Hierarchies.” 1997. URL available at

http://www.tbi.net/~jhall/dighier1.html.

 

(6) Stamper, David A. “Business Data Communications,” 4th ed. The Benjamin / Cummings

Publishing Company, Inc. 1994.

 

(7) “T1 Products and Information Page.” Certified Consultants and Systems. URL available at:

http://superstore.com/~ccst1.html.

 

(8) “The CCITT and the Development of Telephony since 1956.” J. Lalou. Telecommunications

Journal, Vol. 44, March 1976.

 

(9) “Transmission Performance of Telephone Networks containing PCM Links.” D. L. Richards.

Proc. IEE, No. 115, 1968.
