Thursday, December 16, 2010

Time Division Multiplexing (TDM)

Network Bulls
www.networkbulls.com
Best Institute for CCNA CCNP CCSP CCIP CCIE Training in India
M-44, Old Dlf, Sector-14 Gurgaon, Haryana, India
Call: +91-9654672192

TDM is the primary technology used in traditional digital voice; it is also extensively used in data circuits. The basic
premise is to take pieces of multiple streams of digital data and interleave them on a single transmission medium.
T1 Circuits
On a T1 circuit, there are up to 24 channels available for voice. 64 kbps from conversation 1 is loaded into the first T1
channel, then 64 kbps from conversation 2 is loaded into the second channel, and so on. If not enough conversations exist
to fill the available channels, they are padded with null values. The 24 channels are grouped together as a frame.
Depending on the implementation, either 12 frames are grouped together as a larger frame (called SuperFrame or SF), or
24 frames are grouped together (called Extended SuperFrame or ESF). T1s are typically full duplex, with two wires
sending and the other two wires receiving.
© 2008 Cisco Systems Inc. All rights reserved. This publication is protected by copyright. Please see page 147 for more details.
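The interleaving described above can be sketched in a few lines of Python. This is a hypothetical illustration of the framing idea, not any real T1 framer; the function name and values are invented for the example.

```python
# Hypothetical sketch of T1-style TDM framing: one 8-bit sample (1 byte)
# from each of 24 channels is interleaved into a 24-byte frame. Channels
# without an active conversation are padded with null values.

CHANNELS = 24

def build_frame(samples):
    """Interleave one sample per channel; pad unused channels with 0x00."""
    padded = list(samples) + [0x00] * (CHANNELS - len(samples))
    return bytes(padded[:CHANNELS])

# Three active conversations; the remaining 21 channels carry null padding.
frame = build_frame([0xA1, 0xB2, 0xC3])
```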
E1 Circuits
An E1 is very similar to a T1. There are 32 channels, of which 30 can be used for voice. (The other two are used for
framing and signaling, respectively.) The 32 channels are grouped together as a frame, and 16 frames are grouped
together as a multiframe. E1 circuits are common in Europe and Mexico, with some E1 services becoming available in
the United States.
Channel Associated Signaling (CAS)—T1
Although the 64 kbps channels of a T1 are intended to carry digitized voice, we must also be able to transmit signaling information,
such as on-hook and off-hook, addressing, and so forth. In CAS circuits, the least significant bit of each channel
in every sixth frame is "stolen" to generate signaling bit strings. SF implementation takes 12 frames and creates a
SuperFrame. Using one bit per channel in every sixth frame gives two 12-bit signaling strings (known as A and B) per
SuperFrame. The A and B strings are used to signal basic status, addressing, and supervisory messages. In ESF, 24 frames
make up an Extended SuperFrame, which gives A, B, C, and D signaling strings. These can be used to signal more
advanced supervisory functions.
Because CAS takes one bit from each channel in every sixth frame, it is known as Robbed Bit Signaling (RBS). Using
RBS means that a slight degradation occurs in voice quality because every sixth frame has only 7 instead of 8 bits to
represent the sample; however, this is not generally a perceptible degradation.
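The bit-robbing operation can be sketched as follows. This is a hypothetical illustration of the idea (overwrite the least significant bit of each channel sample in a robbed frame), not a real channel bank implementation; the values are invented.

```python
# Hypothetical sketch of Robbed Bit Signaling: in every sixth frame, the
# least significant bit of each 8-bit channel sample is overwritten with a
# signaling bit, leaving only 7 bits to represent the voice sample.

def rob_bits(frame, signaling_bits):
    """Replace the LSB of each channel sample with its signaling bit."""
    return bytes((sample & 0xFE) | (bit & 0x01)
                 for sample, bit in zip(frame, signaling_bits))

frame = bytes([0b10101011] * 24)   # one "sixth" frame, 24 channels
a_bits = [0] * 24                  # an all-zero "A" signaling string
robbed = rob_bits(frame, a_bits)
```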
Channel Associated Signaling (CAS)—E1
E1 signaling is slightly different. In an E1 CAS circuit, the first channel (channel 0 or timeslot 1) is reserved for framing
information. The 17th channel (channel 16 or timeslot 17) contains signaling information; no bits are robbed from the
individual channels. Timeslots 2-16 and 18-32 carry the voice data. Each channel has specific bits in timeslot 17 for
signaling. This means that although E1 CAS does not use RBS, it is still considered CAS; however, the signaling is
out-of-band in its own timeslot.
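The E1 CAS timeslot layout described above maps neatly to a small lookup, shown here as a hypothetical sketch using the document's 1-indexed timeslot numbering (the function name is invented for the example):

```python
# Sketch of the E1 CAS timeslot layout: timeslot 1 carries framing,
# timeslot 17 carries signaling, and the remaining 30 timeslots carry voice.

def e1_timeslot_role(timeslot):
    if timeslot == 1:
        return "framing"
    if timeslot == 17:
        return "signaling"
    if 2 <= timeslot <= 32:
        return "voice"
    raise ValueError("E1 has timeslots 1-32")

roles = [e1_timeslot_role(ts) for ts in range(1, 33)]
```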
Common Channel Signaling (CCS)
CCS provides a completely out-of-band signaling channel. This is the function of the D channel in an ISDN PRI or
BRI implementation. The full 64 kbps of bandwidth per channel is available for voice; instead of generating ABCD bits, a
protocol known as Q.931 is used out-of-band in a separate channel for signaling. An ISDN PRI T1 provides 23 voice
channels of 64 kbps each (called Bearer or B channels) and one 64 kbps D (for Data) channel (timeslot 24) for signaling. An
ISDN PRI E1 provides 30 B channels and 1 D channel (timeslot 17); an ISDN BRI circuit provides two 64 kbps B channels
and one 16 kbps D channel.
Understanding VoIP
The elements of traditional telephony (status, address and supervisory signaling, digitization, and so on) must have
functional parallels in the VoIP world for systems to function as people expect them to and, more importantly, for VoIP to
interact with the PSTN properly.
This section examines packetizing digital voice, signaling and transport protocols, the components of a VoIP network,
and the factors that can cause problems in VoIP networks and how they can be mitigated.
Understanding Packetization
IP networks move data around in small pieces known as packets. Because we know how to digitize our voice, it now
becomes just another binary payload to move around in a packet. VoIP uses Digital Signal Processors (DSPs) for the codec
functions. The digitized voice is then packaged in an appropriate protocol structure to move it through the IP infrastructure.
DSPs
DSPs are specialized chips that perform high-speed codec functions. DSPs are found in IP phones to encode the
analog speech of the user and to decode the digitized contents of the packets arriving from the other end of the call. DSPs
are also used on IOS gateways at the interface to PSTN circuits, to change from a digital or analog circuit to packetized
voice. DSPs also transcode from one codec to another and enable features such as conferencing and call park. DSPs are
a vital component of a VoIP system. Different chip types have varying capacities, but the general rule is that you want as
many DSP resources available to you as possible. The DSP calculator on cisco.com will help you calculate what you
must have.
Real-Time Transport Protocol (RTP)
RTP was developed to better serve real-time traffic such as voice and video. Voice payloads are encapsulated by RTP, then
by UDP, then by IP. A Layer 2 header of the correct format is applied; the type obviously depends on the link technology
in use by each router interface. A single voice call generates two one-way RTP/UDP/IP packet streams. UDP provides
multiplexing and checksum capability; RTP provides payload identification, timestamps, and sequence numbering.
Payload identification allows us to treat voice traffic differently from video, for example, simply by looking for the RTP
header label, simplifying our configuration tasks. Timestamping and sequence numbering allow VoIP devices to reorder
RTP packets that arrived out of sequence and play them back with the same timing in which they were recorded, eliminating
delays or jerkiness. There is no provision for retransmission of a lost RTP packet.
Each RTP stream is accompanied by a Real-Time Transport Control Protocol (RTCP) stream. RTCP monitors the quality
of the RTP stream, allowing devices to record events such as packet count, delay, loss, and jitter (delay variation).
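The RTP encapsulation described above centers on a fixed 12-byte header. The sketch below packs a minimal header following the RFC 3550 layout; the field values are invented for illustration and this is not a complete RTP implementation (no CSRCs, extensions, or RTCP).

```python
import struct

# Minimal 12-byte RTP header (RFC 3550 layout): version 2, no padding,
# no extension, no CSRCs, then payload type, sequence number, timestamp,
# and SSRC. Values here are illustrative only.

def rtp_header(payload_type, seq, timestamp, ssrc):
    byte0 = 2 << 6                  # version=2, P=0, X=0, CC=0
    byte1 = payload_type & 0x7F     # M=0, 7-bit payload type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

# Payload type 0 is G.711 mu-law; the timestamp advances by 160 samples
# per 20 msec packet at an 8000 Hz sampling rate.
hdr = rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x1234)
```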
A single voice packet by default contains a payload of 20 msec of voice (either uncompressed or compressed). Because
sampling is occurring at 8000 times per second, 20 msec gives us 160 samples. If we divide 8000 by 160, we see that we
are generating 50 packets per second, each with 160 bytes of payload, for a one-way voice stream.
If we use compression, we can squeeze the 160-byte payload down to 20 bytes using the G.729 codec. We still have 160
samples, still 20 msec of audio, but reduced payload size.
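The sampling arithmetic above reduces to two lines of integer math, shown here as a simple sketch (not any real codec):

```python
# At 8000 samples per second and 20 msec of audio per packet, each packet
# carries 160 samples, and 50 packets are sent per second per direction.

SAMPLE_RATE = 8000    # samples per second
PACKET_MS = 20        # milliseconds of audio per packet

samples_per_packet = SAMPLE_RATE * PACKET_MS // 1000    # 160
packets_per_second = SAMPLE_RATE // samples_per_packet  # 50
```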
Codecs
The codecs supported by Cisco include the following:
• G.711 (64 kbps): Toll-quality voice, uncompressed.
• G.729 (8 kbps)
• Annex A variant: less processor-intensive, allows more voice channels encoded per DSP chip; lower audio
quality than G.729
• Annex B variant: Allows the use of Voice Activity Detection and Comfort Noise Generation; can be applied to
G.729 or G.729-A
The values for bandwidth shown do not include the Layer 3 and Layer 2 overhead; the actual bandwidth used by a single
(one-way) voice stream can be significantly larger. The following tables summarize the additional overhead added by
packetization and Layer 2 encapsulation, assuming 50 packets per second (pps):
Bandwidth Calculation, Without Layer 2
Codec                    G.711                  G.729
Voice Payload            160 Bytes              20 Bytes
RTP Header               12 Bytes               12 Bytes
UDP Header               8 Bytes                8 Bytes
IP Header                20 Bytes               20 Bytes
Total Before Layer 2     200 Bytes              60 Bytes
Total Bitrate @ 50 pps   80,000 bps (80 kbps)   24,000 bps (24 kbps)
Bandwidth Calculation, With Layer 2
Layer 2 Type         L2 Overhead   G.711 (200 Bytes/packet)            G.729 (60 Bytes/packet)
Ethernet             18 Bytes      218 Bytes; 87,200 bps (87.2 kbps)   78 Bytes; 31,200 bps (31.2 kbps)
Multilink PPP        6 Bytes       206 Bytes; 82,400 bps (82.4 kbps)   66 Bytes; 26,400 bps (26.4 kbps)
Frame Relay FRF.12   6 Bytes       206 Bytes; 82,400 bps (82.4 kbps)   66 Bytes; 26,400 bps (26.4 kbps)
Totals include Layer 2 overhead; bitrates assume 50 pps.
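The table's arithmetic can be reproduced with a short sketch: per-packet size is the voice payload plus the RTP/UDP/IP headers plus the Layer 2 overhead, and the bitrate is bytes per packet times 8 bits times 50 packets per second.

```python
# Bandwidth arithmetic from the table above: payload + RTP/UDP/IP
# (12 + 8 + 20 = 40 bytes) + Layer 2 overhead, at 50 packets per second.

RTP_UDP_IP = 12 + 8 + 20   # bytes of Layer 3/4 header per packet
PPS = 50                   # packets per second, one-way stream

def bitrate_bps(voice_payload, layer2_overhead):
    return (voice_payload + RTP_UDP_IP + layer2_overhead) * 8 * PPS

g711_ethernet = bitrate_bps(160, 18)   # 87,200 bps
g729_mlppp = bitrate_bps(20, 6)        # 26,400 bps
```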
When using G.729, the RTP/UDP/IP header of 40 bytes is twice the size of the 20-byte voice payload. This consumes significant
bandwidth just for header transmission on a slow link. The recommended solution is to use Compressed RTP
(cRTP) on slow WAN links. cRTP reduces the RTP/UDP/IP header to 2 bytes without checksums or 4 bytes with checksums.
The effect of using cRTP is illustrated in the following table. (Note: Ethernet is not included because it is not classified
as a slow link.)
Bandwidth Calculation, Using cRTP
Codec                                      G.711                 G.729
Voice Payload                              160 Bytes             20 Bytes
cRTP Header (with checksum)                4 Bytes               4 Bytes
cRTP Header (no checksum)                  2 Bytes               2 Bytes
Total Before Layer 2 (with / no checksum)  164 / 162 Bytes       24 / 22 Bytes
Multilink PPP or Frame Relay FRF.12        6 Bytes               6 Bytes
Total WAN Bitrate @ 50 pps incl. Layer 2   68,000 / 67,200 bps   12,000 / 11,200 bps
                                           (68 / 67.2 kbps)      (12 / 11.2 kbps)
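The cRTP figures follow from the same per-packet arithmetic, with the 40-byte RTP/UDP/IP header replaced by the 2- or 4-byte compressed header; this sketch reproduces the table's numbers.

```python
# cRTP bandwidth arithmetic: voice payload + compressed header (2 bytes
# without checksum, 4 with) + 6 bytes of Multilink PPP / FRF.12 overhead,
# at 50 packets per second.

PPS = 50
LAYER2 = 6   # Multilink PPP or Frame Relay FRF.12 overhead, bytes

def crtp_bitrate_bps(voice_payload, crtp_header):
    return (voice_payload + crtp_header + LAYER2) * 8 * PPS

g711_with_checksum = crtp_bitrate_bps(160, 4)   # 68,000 bps
g729_no_checksum = crtp_bitrate_bps(20, 2)      # 11,200 bps
```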
Voice Activity Detection (VAD)
Phone conversations on average include about 35% silence. In Cisco Unified Communications, by default silence is packetized
and transmitted, consuming the same bandwidth as speech. In situations where bandwidth is very scarce, the VAD
feature can be enabled, causing the voice stream to be stopped during periods of silence. The theory here is that the bandwidth
otherwise used for silence can be reclaimed for voice or data transmission. VAD also adds Comfort Noise
Generation (CNG), which fills in the dead silence created by the stopped voice flow with white noise. VAD should not be
taken into account during the network design bandwidth allocation process because its effectiveness varies with background
noise and speech patterns. VAD is also made ineffective by Music on Hold and fax features. In reality, VAD typically
causes more problems than it solves, and it is usually wiser to add the necessary bandwidth.
Additional DSP Functions
In addition to digitizing voice, DSP resources are used for the following:
• Conferencing: DSPs mix the audio streams from the conference participants and transmit the mix (minus their own)
to each participant.
• Transcoding and Media Termination Points (MTP): A transcoder changes a packetized audio stream from one
codec to another, perhaps for transit across a slow WAN link. MTPs provide a point for the stream to be terminated
while other services are set up.
• Echo Cancellation: DSPs provide the calculation power needed to analyze the audio stream and filter out the repetitive
patterns that indicate echo. Echo is a chief cause of perceived poor voice quality; echo cancellation is an important
function.
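The conferencing function described above (each participant hears the mix minus their own audio) can be sketched on plain sample lists. This is a hypothetical illustration of the mixing idea only; real DSPs work on encoded streams and handle clipping and scaling.

```python
# Sketch of DSP conference mixing: sum all participants' samples once,
# then give each participant the total minus their own contribution.
# Integer samples, no clipping or scaling, for illustration only.

def conference_mix(streams):
    """streams: list of equal-length sample lists, one per participant."""
    totals = [sum(samples) for samples in zip(*streams)]
    return [[t - own for t, own in zip(totals, stream)]
            for stream in streams]

# Three participants, one sample each: each hears the other two combined.
mixes = conference_mix([[10], [20], [30]])
```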
