Nokia Corporation v. Apple Inc.

Filing 379

NOTICE of Issuance of Amended Subpoena upon The Institute of Electrical and Electronics Engineers, Inc. ("IEEE") by Apple Inc. (Attachments: # 1 Exhibit 1 part 1, # 2 Exhibit 1 part 2, # 3 Exhibit 1 part 3, # 4 Exhibit 1 part 4, # 5 Exhibit 1 part 5, # 6 Exhibit 1 part 6, # 7 Exhibit 1 part 7, # 8 Exhibit 1 part 8)(Moore, David)

1064    IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. COM-25, NO. 10, OCTOBER 1977

Concatenated Coding Systems Employing a Unit-Memory Convolutional Code and a Byte-Oriented Decoding Algorithm

LIN-NAN LEE, MEMBER, IEEE

Abstract—Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

I. INTRODUCTION
THE COMPLEXITY of conventional coding systems grows exponentially with the block length for block codes (or with the constraint length for convolutional codes). To overcome the complexity of very long codes, the idea of cascading two or more codes of lesser complexity to achieve highly reliable communications was considered first by Elias [1], and later by Forney [2]. Forney's technique of using two or more block codes over different alphabets to obtain a very low error rate is known as concatenated coding.

Guided by the premise that a convolutional code generally performs better than a block code of the same complexity, Falconer [3], and later Jelinek and Cocke [4], considered cascading an outer block code with an inner convolutional code. Figure 1 shows a general representation of such a block-convolutional concatenated coding system. In both the Falconer and Jelinek-Cocke schemes, sequential decoding was used for the inner decoder, and the outer block coding system was used only to intervene when the sequential decoder experienced computational overflow.
Therefore, these systems can be regarded as primarily sequentially decoded convolutional coding systems.

Maximum-likelihood (i.e., Viterbi [5]) decoding of convolutional codes with a moderate constraint length can provide an error rate of less than 10^-2 at a rate slightly higher than

Paper approved by the Editor for Communication Theory of the IEEE Communications Society for publication after presentation in part at the IEEE International Symposium on Information Theory, Ronneby, Sweden, June 21-24, 1976. Manuscript received October 19, 1976; revised May 6, 1977. This work was supported by the National Aeronautics and Space Administration under NASA Grant NSG 5025 at the University of Notre Dame in liaison with the Goddard Space Flight Center. This paper forms part of a dissertation submitted to the Graduate School of the University of Notre Dame in partial fulfillment of the requirements for the Ph.D. degree.

The author was with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556.
He is now with the LINKABIT Corporation, San Diego, CA 92121.

Fig. 1. A concatenated coding system employing a convolutional code as the inner code and a block code as the outer code.

R_comp of the noisy channel. Forney's work [2] suggested that a concatenated coding system with a powerful outer code can perform reasonably well when its inner decoder is operated with a probability of error in the range between 10^-2 and 10^-3. It was natural then for Odenwalder [6] to choose a Viterbi decoder for the inner coding system in his block-convolutional concatenated coding system.

Because the output error patterns of Viterbi-type decoders for convolutional codes are bursty, block codes over a large alphabet, such that many bits of the inner code form one symbol of the outer code, appear very attractive for the outer coding system. The Reed-Solomon (RS) block codes are particularly appealing because they can be decoded by relatively simple procedures (such as the Berlekamp-Massey [7], [8] algorithm) and have optimum distance properties [17].
Because the lengths of the bursts of output errors made by Viterbi decoders are widely distributed, it is generally necessary to interleave the inner convolutional code so that errors in the individual RS-symbols of one block are independent; otherwise, a very long block code would be required to operate the system efficiently. Because the most likely lengths of the output error patterns made by the inner decoder are on the order of the constraint length, K, of the convolutional code, Odenwalder chose the RS symbol alphabet to be GF(2^K).

In a block-convolutional concatenated coding system such as Odenwalder's, employing a Viterbi decoder with conventional convolutional codes, it is very unlikely that the beginning of a decoding error burst is always aligned with the boundary between two RS symbols; in fact, a burst only two bits long may affect two RS symbols. This fact led us to consider using good convolutional codes which are symbol-oriented rather than bit-oriented. In [9], we reported a class of unit-memory convolutional codes for which k0-bit information segments are encoded into n0-bit encoded segments. It was shown there that an (n0, k0) convolutional

Authorized licensed use limited to: UNIVERSITY NOTRE DAME. Downloaded on January 6, 2010 at 14:14 from IEEE Xplore. Restrictions apply.
1065    LEE: CONCATENATED CODING SYSTEMS

code with unit memory always achieves the largest free distance possible for codes of the same rate k0/n0 and the same number 2^{M k0} of encoder states, where M is the encoder memory. The unit-memory codes are naturally byte-oriented with byte size equal to k0 information bits. It will be shown that the improved free distance and the symbol-oriented nature of these codes provide an improvement of approximately 0.3 dB in the overall performance of the concatenated coding system when these codes replace bit-oriented convolutional codes.

Another improvement is to modify the decoder for the convolutional code so that the decoder emits not only the most-likely estimated symbol, but also reliability information about the estimated symbol. The outer decoder may then use this reliability information to perform either "erasures-and-errors" decoding or "generalized-minimum-distance" (GMD) decoding as suggested by Forney [2]. Zeoli [10] and Jelinek [11] proposed to extract reliability information by annexing a long tail to the original convolutional code and using this added tail to provide an error detection capability for the estimate made by the Viterbi decoder for the original shorter convolutional code. This approach requires the feedback of
This approach symbols previously decoded by the Viterbi decoder and, more decoder symbols decoded more importantly, uses the output of the outer restart importantly, uses the output of the outer decoder to restart by the inner Viterbi decoder whenever an error is corrected by decoder error is the inner the outer decoder. It will be shown that the error detecting the outer It shown that the error detecting capability used with an "erasures-and-errors" outer decoder capability “erasures-and-errors” provides an improvement of 0.2 dB and thatthefeedback that the feedback an improvement 0.2 dB from the outer decoder further improves the performance by performance from the outer further 0.3 dB. 0.3 dB. An alternative approach to extracting reliability information approach from the inner decoder is compute the posteriori from the inner decoder is to compute the a posterion probability of for from the ability of correctness for each decoded symbol fromthe decoder for the short constraint length convolutional code and for the short constraint length and then use this probability as then use this probability as the reliability information provided to the outer coding system. It will be shown that, when used system. outer with an errors-and-erasures scheme decoder, with an errors-and-erasures outerdecoder, this scheme imdB 0.1 dB compared proves performance by only 0.05 dB to 0.1 dB compared to only is less Zeoli’s hard-decision decoding and hence is less powerful than Zeoli's tail annexation tail annexation scheme; yet itsperformance is undoubtedly its performance optimal among all schemes employing the short optimal among all schemesemploying onlytheshortcon- constraint length convolutional (with code straint length convolutional code (with no annexed tail). 
However, it will be shown that, in conjunction with the use of feedback from the outer decoder, the a posteriori probability inner decoder provides about 0.2 dB more improvement than does the Viterbi decoder aided by feedback. In fact, the a posteriori inner decoder, used with feedback from the outer decoder, offers a slight improvement over Zeoli's scheme; moreover, the inner encoder and the inner decoder have the same constraint length, so that the inner decoder generally and automatically returns to normal operation only a few branches after making an error.

The plan of this paper is as follows. In Section II, a "real time" decoding algorithm for unit-memory convolutional codes is developed which calculates the a posteriori probability for each value of the byte being decoded. In Sections III, IV, and V, the performances of several block-convolutional concatenated coding systems having unit-memory convolutional inner codes are compared with similar systems having conventional bit-oriented convolutional inner codes. In each case, we chose the (18, 6) unit-memory convolutional code as the inner code because it has practically minimum complexity in terms of decoder implementation, and because of its reasonably large free distance (d_free = 16).
We chose the Reed-Solomon codes over GF(2^6), with block length 63 symbols, as the outer codes so that the symbol size of the RS codes would be matched to the byte-size (six bits) of the unit-memory code. In Section VI, the degradation of performance when the rate 1/3 inner convolutional code is replaced by a rate 1/2 convolutional code is considered in order to demonstrate the tradeoff between bandwidth expansion and signal-energy-to-noise ratio. In Section VII, the 95% confidence intervals for the simulation results are obtained and interpreted.

II. REAL-TIME MINIMAL-BYTE-ERROR PROBABILITY DECODING OF UNIT-MEMORY CODES

We now develop an algorithm for real-time minimal-byte-error probability decoding of the unit-memory convolutional codes described in [9]. Let a_t (t = 1, 2, ...) denote the byte (or subblock) of k0 information bits to be encoded at time t, and let b_t (t = 1, 2, ...) be the corresponding encoded subblock of n0 bits. For a unit-memory code,

    b_t = a_t G_0 + a_{t-1} G_1    (1)

where G_0 and G_1 are k0 x n0 matrices and where, by way of convention, a_0 = 0. We assume that the sequence b_1, b_2, ... has been transmitted over a discrete memoryless channel and that r_t (t = 1, 2, ...) is the received subblock corresponding to the transmitted subblock b_t. We shall write a_[t,t'] to denote [a_t, a_{t+1}, ..., a_{t'}]; similarly for b_[t,t'] and r_[t,t'].
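To make the encoding rule (1) concrete, a minimal sketch over GF(2) follows; the tiny (n0 = 4, k0 = 2) generator matrices are hypothetical placeholders chosen for illustration, not a code from the paper.

```python
# Sketch of unit-memory encoding per (1): b_t = a_t*G0 + a_{t-1}*G1 over GF(2).
# The (n0 = 4, k0 = 2) matrices below are illustrative placeholders only.

def gf2_matvec(a_bits, G):
    """Multiply a k0-bit row vector by a k0 x n0 binary matrix over GF(2)."""
    out = [0] * len(G[0])
    for i, bit in enumerate(a_bits):
        if bit:
            out = [o ^ g for o, g in zip(out, G[i])]
    return out

def unit_memory_encode(info_bytes, G0, G1):
    """Encode a sequence of k0-bit bytes a_1, a_2, ...; a_0 = 0 by convention."""
    prev = [0] * len(G0)                 # a_{t-1}, initialized to the zero byte
    encoded = []
    for a in info_bytes:
        b = [x ^ y for x, y in zip(gf2_matvec(a, G0), gf2_matvec(prev, G1))]
        encoded.append(b)
        prev = a
    return encoded

G0 = [[1, 0, 1, 1], [0, 1, 1, 0]]        # hypothetical generator matrices
G1 = [[1, 1, 0, 0], [0, 0, 1, 1]]
print(unit_memory_encode([[1, 0], [0, 1]], G0, G1))  # → [[1, 0, 1, 1], [1, 0, 1, 0]]
```

Each output subblock depends only on the current and previous information bytes, which is exactly the unit-memory property exploited in the decoding algorithm below.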
By real-time decoding with delay Δ, we mean that the decoding decision for a_t is made from the observation of r_[1,t+Δ]. The real-time minimal-byte-error probability (RTMBEP) decoding rule then is that which chooses its estimate of a_t as that value of a_t which maximizes P(a_t | r_[1,t+Δ]) for t = 1, 2, .... To find a recursive algorithm for this decoding rule, we begin by noting that

    P(a_t | r_[1,t+Δ]) = P(a_t, r_[1,t+Δ]) / Σ_α P(α, r_[1,t+Δ])    (2)

where we have used α to denote a running variable for a_t. It suffices then to find a recursive method for calculating P(a_t, r_[1,t+Δ]). We next observe that

    P(a_t, r_[1,t+Δ]) = P(a_t, r_[1,t]) P(r_[t+1,t+Δ] | a_t, r_[1,t])
                      = P(a_t, r_[1,t]) P(r_[t+1,t+Δ] | a_t)    (3)

where we have used the facts that the channel is memoryless and that the code has unit memory. It remains to find recursive rules for obtaining the two probabilities on the right in (3).

Obtaining the recursion for P(a_t, r_[1,t]) is quite standard [12]-[14]:

    P(a_t, r_[1,t]) = Σ_{a_{t-1}} P(a_[t-1,t], r_[1,t])
                    = Σ_{a_{t-1}} P(a_{t-1}, r_[1,t-1]) P(a_t, r_t | a_{t-1}, r_[1,t-1]).    (4)

But also

    P(a_t, r_t | a_{t-1}, r_[1,t-1]) = P(a_t | a_{t-1}, r_[1,t-1]) P(r_t | a_[t-1,t], r_[1,t-1])
                                     = 2^{-k0} P(r_t | b(a_[t-1,t]))    (5)

where we have written b(a_[t-1,t]) for the value of b_t determined by (1) from a_[t-1,t], and where we assume here and hereafter that all information sequences are equally likely (as corresponds to maximum-likelihood decoding). Substituting (5) into (4), we have our desired recursion

    P(a_t = α, r_[1,t]) = 2^{-k0} Σ_{α'} P(a_{t-1} = α', r_[1,t-1]) P(r_t | b(α', α)).    (6)

We now turn to the quantity P(r_[t+1,t+Δ] | a_t), which we note is the i = 1 value of

    P(r_[t+i,t+Δ] | a_{t+i-1}) = Σ_{a_{t+i}} P(a_{t+i}, r_[t+i,t+Δ] | a_{t+i-1}).    (7)

Proceeding in the same manner that (6) was obtained from (4), we find the desired recursion

    P(r_[t+i,t+Δ] | a_{t+i-1}) = 2^{-k0} Σ_{a_{t+i}} P(r_[t+i+1,t+Δ] | a_{t+i}) P(r_{t+i} | b(a_[t+i-1,t+i])).    (8)

This recursion is initialized with its i = Δ value

    P(r_{t+Δ} | a_{t+Δ-1}) = Σ_{a_{t+Δ}} P(a_{t+Δ}, r_{t+Δ} | a_{t+Δ-1})
                           = 2^{-k0} Σ_{a_{t+Δ}} P(r_{t+Δ} | b(a_[t+Δ-1,t+Δ]))    (9)

and evaluated with i = Δ - 1, Δ - 2, ..., 1. It should be noted that the recursion (8) requires less multiplication than the one previously reported in [14]; the reduction is particularly evident for unit-memory codes due to their fully connected trellis structure.

An algorithm to carry out the recursive rules given by (6) and (8) requires, for each byte (or "state" in the usual Viterbi decoding terminology) α, the storage of two real numbers f(α) and h(α); namely,

    f(α) = P(a_t = α, r_[1,t])    (10)

    h(α) = P(r_[t+i,t+Δ] | a_{t+i-1} = α)    (11)

where i will be decremented from Δ to 1 as the algorithm progresses. (Of course, the received segment r_[t+1,t+Δ] must also be stored so that P(r_{t+i} | b(a_[t+i-1,t+i])) can be found for i = Δ, Δ - 1, ..., 1.) We may now state:

The RTMBEP Decoding Algorithm for Unit-Memory Codes

Step 0: Set f(0) = 2^{k0} and set f(α) = 0 for α ≠ 0. Set t = 1.

Step 1: Make the replacement, for all states α,

    f(α) ← 2^{-k0} Σ_{α'} f(α') P(r_t | b(α', α)).

Step 2: Set i = Δ and, for all states α, set

    h(α) = 2^{-k0} Σ_{α'} P(r_{t+Δ} | b(α, α')).

Step 3: Decrease i by 1 and make the replacement, for all states α,

    h(α) ← 2^{-k0} Σ_{α'} h(α') P(r_{t+i} | b(α, α')).

If now i = 1, go to Step 4. Otherwise, return to Step 3.

Step 4: Emit, as the estimate of a_t, that byte α_0 which maximizes f(α)h(α), and emit, as the reliability indicator, the probability

    P(a_t = α_0 | r_[1,t+Δ]) = f(α_0)h(α_0) / Σ_α f(α)h(α).

Increase t by 1 and return to Step 1.

The only feature of the algorithm that should require any comment is the initialization of f(0) at 2^{k0}. This is required so that the first time Step 1 is performed one obtains the correct initial value f(α) = P(r_1 | b(0, α)). In fact, however, it makes no difference in the output from the algorithm if the f and h values are scaled by fixed positive constants, so that f(0) = 1 is permissible in Step 0 and the factors 2^{-k0} can be removed in Steps 1, 2, and 3.

Note that Step 3 of the algorithm, which has the same complexity as Step 1, is performed Δ - 1 times for each time that Step 1 is performed. It is clearly desirable then to keep Δ as small as possible. Table I shows the variation of the decoding byte-error probability, P_BE, with the decoding delay, Δ, for the (n0 = 18, k0 = 6) unit-memory code of [9] used on a simulated three-bit-quantized additive white Gaussian noise (AWGN) channel. We see that Δ = 8 gives virtually the same P_BE as the "optimum" choice Δ = ∞.
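The forward recursion (6), the backward recursions (8)-(9), and the decision rule of Step 4 can be sketched as follows. Everything here is an illustrative assumption: a toy (n0 = 4, k0 = 2) unit-memory code with hypothetical generator matrices and a binary symmetric channel, not the paper's (18, 6) code or its three-bit-quantized AWGN channel. The 2^{-k0} factors are dropped, as the scaling remark above permits.

```python
# Sketch of RTMBEP decoding (Steps 0-4) for a toy (n0 = 4, k0 = 2)
# unit-memory code on a binary symmetric channel (BSC).  Matrices and the
# crossover probability are illustrative assumptions, not from the paper.

K0, N0 = 2, 4
G0 = [[1, 0, 1, 1], [0, 1, 1, 0]]      # hypothetical k0 x n0 matrices
G1 = [[1, 1, 0, 0], [0, 0, 1, 1]]
P_CROSS = 0.1                          # assumed BSC crossover probability

def bits(x, width):
    """Integer byte value -> list of bits, most significant first."""
    return [(x >> (width - 1 - i)) & 1 for i in range(width)]

def codeword(a_prev, a_cur):
    """b_t = a_t G0 + a_{t-1} G1 over GF(2); bytes given as integers."""
    out = [0] * N0
    for i, bit in enumerate(bits(a_cur, K0)):
        if bit:
            out = [o ^ g for o, g in zip(out, G0[i])]
    for i, bit in enumerate(bits(a_prev, K0)):
        if bit:
            out = [o ^ g for o, g in zip(out, G1[i])]
    return out

def lik(r_t, a_prev, a_cur):
    """Channel likelihood P(r_t | b(a_prev, a_cur)) for the BSC."""
    d = sum(x != y for x, y in zip(r_t, codeword(a_prev, a_cur)))
    return (P_CROSS ** d) * ((1 - P_CROSS) ** (N0 - d))

def rtmbep_decode(r, delta):
    """Emit (estimated byte, reliability) for t = 1 .. len(r) - delta."""
    states = range(2 ** K0)
    f = [1.0 if a == 0 else 0.0 for a in states]   # Step 0, scaled so f(0) = 1
    out = []
    for t in range(1, len(r) - delta + 1):
        # Step 1: forward update of f with r_t, per recursion (6)
        f = [sum(f[ap] * lik(r[t - 1], ap, a) for ap in states) for a in states]
        # Step 2: initialize h with r_{t+delta}, per (9)
        h = [sum(lik(r[t + delta - 1], a, ap) for ap in states) for a in states]
        # Step 3: backward updates of h with r_{t+delta-1}, ..., r_{t+1}, per (8)
        for i in range(delta - 1, 0, -1):
            h = [sum(h[ap] * lik(r[t + i - 1], a, ap) for ap in states)
                 for a in states]
        # Step 4: choose the byte maximizing f(a)h(a); the normalized product
        # is the a posteriori probability emitted as the reliability indicator
        tot = sum(fa * ha for fa, ha in zip(f, h))
        a0 = max(states, key=lambda a: f[a] * h[a])
        out.append((a0, f[a0] * h[a0] / tot))
    return out

# Noiseless reception of the encoding of bytes 1, 2 followed by a zero byte:
received = [codeword(0, 1), codeword(1, 2), codeword(2, 0)]
print(rtmbep_decode(received, delta=1))
```

On a clean channel the decoder recovers the transmitted bytes with reliabilities near 1; unlike a Viterbi decoder, it also reports how sure it is of each byte, which is exactly the information the outer RS decoder can exploit.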
We now point out, however, that one can reduce the ratio of Step 3 operations to Step 1 operations to as close to unity as desired without any degradation in performance, but at the cost of additional storage. The "trick" is to use a variable decoding delay Δ. Each a_t is decoded from P(a_t | r_[1,t+Δ]), but Δ, depending on the value of t, takes some value in the range Δ_m ≤ Δ ≤ Δ_M. The minimum decoding delay, Δ_m, is chosen large enough to ensure negligible degradation, say Δ_m = 8, while the maximum decoding delay, Δ_M, is chosen small enough to keep the increased memory tolerable, as will soon become apparent.

TABLE I
VARIATION OF DECODING BYTE-ERROR PROBABILITY P_BE WITH DECODING DELAY Δ FOR RTMBEP DECODING OF THE (18, 6) UNIT-MEMORY CODE ON A SIMULATED AWGN CHANNEL WITH A SIGNAL ENERGY PER INFORMATION BIT TO ONE-SIDE NOISE POWER SPECTRAL DENSITY RATIO OF 1.25 DB (4000 BYTES DECODED FOR EACH Δ)
[Tabulated values are not reliably recoverable from the scan.]

In this variable real-time minimal-byte-error probability (VRTMBEP) decoding, one stores Δ_M - Δ_m + 2 real numbers for each state α, namely f_i(α) for i = 1, 2, ..., Δ_M - Δ_m + 1, where

    f_i(α) = P(a_{t+i-1} = α, r_[1,t+i-1])    (12)

and h(α), where h(α) is as in (11) with Δ replaced by Δ_M.
Observe now that, in the process of executing Step 2 of the RTMBEP algorithm with Δ = Δ_M, one would obtain sequentially the quantities

    P(r_[t+i,t+Δ_M] | a_{t+i-1} = α)    (13)

for i = Δ_M - 1, Δ_M - 2, ..., 1. But the product of the quantity in (13) with f_i(α) as in (12) is, according to (3), equal to P(a_{t+i-1} = α, r_[1,t+Δ_M]), which after normalization gives precisely the statistic P(a_{t+i-1} = α | r_[1,t+Δ_M]) needed to estimate a_{t+i-1} with a decoding delay of Δ = Δ_M - i + 1. Hence, if we had the foresight to perform Step 1 of the RTMBEP algorithm Δ_M - Δ_m + 1 times and to store the resulting f_i(α), then we could make Δ_M - Δ_m + 1 decoding decisions during the Δ_M - 1 times that Step 3 is performed. Thus, for each block of Δ_M - Δ_m + 1 forward recursions of Step 1, the backward recursion of Step 3 would be performed Δ_M - 1 times. The average ratio of Step 3 to Step 1 operations would be (Δ_M - 1)/(Δ_M - Δ_m + 1). For instance, with Δ_m = 8 and Δ_M = 13, we would perform Step 3 only twice for each time we performed Step 1; and we would be storing only Δ_M - Δ_m + 2 = 7 real numbers per state rather than 2 as in the original RTMBEP algorithm, in which Step 3 is performed Δ - 1 = 7 times for each time that Step 1 is performed.
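The cost and storage accounting just described reduces to two small formulas; the helper below is an illustrative restatement of that arithmetic, not code from the paper.

```python
# Illustrative restatement of the VRTMBEP cost/storage accounting: average
# Step 3 (backward) passes per Step 1 (forward) pass, and real numbers
# stored per state.  The function name is an assumption for this sketch.

def vrtmbep_costs(delta_min, delta_max):
    """Return (avg Step-3 executions per Step 1, reals stored per state)."""
    ratio = (delta_max - 1) / (delta_max - delta_min + 1)
    storage = delta_max - delta_min + 2      # the f_i values plus h
    return ratio, storage

# The example in the text: delta_m = 8, delta_M = 13 gives two backward
# passes per forward pass and 7 stored reals per state, versus 7 backward
# passes and 2 reals for plain RTMBEP with delta = 8.
print(vrtmbep_costs(8, 13))  # → (2.0, 7)
```

Pushing delta_max higher drives the ratio toward 1 backward pass per forward pass, at the price of linearly growing per-state storage, which is the tradeoff the text describes.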
It should now be obvious that the following algorithm is the necessary modification to the RTMBEP decoding algorithm for obtaining reduced computation at the price of additional storage, as has just been described.

The VRTMBEP Decoding Algorithm for Unit-Memory Codes

Step 0: Set f_{Δ_M-Δ_m+1}(0) = 2^{k0} and set f_{Δ_M-Δ_m+1}(α) = 0 for α ≠ 0. Set t = 1.

Step 1: Set

    f_1(α) = 2^{-k0} Σ_{α'} f_{Δ_M-Δ_m+1}(α') P(r_t | b(α', α))

and set

    f_{i+1}(α) = 2^{-k0} Σ_{α'} f_i(α') P(r_{t+i} | b(α', α))

for i = 1, 2, ..., Δ_M - Δ_m in order.

Step 2: Set i = Δ_M and, for all states α, set

    h(α) = 2^{-k0} Σ_{α'} P(r_{t+Δ_M} | b(α, α')).

Step 3: Decrease i by 1 and make the replacement, for all states α,

    h(α) ← 2^{-k0} Σ_{α'} h(α') P(r_{t+i} | b(α, α')).

If now i ≤ Δ_M - Δ_m + 1, go to Step 4. Otherwise, return to Step 3.

Step 4: Emit, as the estimate of a_{t+i-1}, that byte α_0 which maximizes f_i(α)h(α), and emit, as the reliability indicator, the probability P(a_{t+i-1} = α_0 | r_[1,t+Δ_M]). If i = 1, increase t by Δ_M - Δ_m + 1 and return to Step 1. Otherwise, decrease i by 1 and return to Step 3.

It is satisfying to note that the VRTMBEP decoding algorithm reduces to the RTMBEP algorithm when Δ_M = Δ_m. It should be pointed out that when only a finite number, L, of information bytes are encoded and one takes Δ_M = L, the largest possible choice, then the VRTMBEP algorithm reduces to that given by Bahl et al. [12] (when the latter algorithm is specialized to unit-memory codes) and does about twice the computation of the usual Viterbi decoder; but this case also maximizes the memory requirements. The chief advantage which both RTMBEP and VRTMBEP decoding of unit-memory codes have over Viterbi decoding is in their providing reliability information about the decoding decisions, information of considerable value to the outer decoder in a concatenated coding system. One may argue that the Viterbi algorithm can be implemented in the logarithm domain, thus resulting in a much simpler implementation. However, it is interesting to note that both RTMBEP and VRTMBEP can also be implemented in the logarithm domain with the assistance of a ROM storing the operation in the logarithm domain corresponding to normal addition [18].

Because the resulting performances of the RTMBEP and VRTMBEP algorithms are indistinguishable when Δ = Δ_m is chosen large enough for negligible degradation compared to Δ = ∞, say Δ_m = 8, we will not hereafter distinguish between the two algorithms in our discussion of concatenated coding systems.

III. ODENWALDER'S CONCATENATED CODING SYSTEM AND SOFT-DECISION MODIFICATION WITH THE RTMBEP DECODING ALGORITHM

The concatenated coding system proposed by Odenwalder [6], which we shall call System I, is as shown in Figure 1 where the inner decoder is a hard-decision Viterbi decoder and
TABLE II TABLE I1 BYTE-ERROR PROBABILITY, p,, FOR VITERBI DECODING OF BYTE-ERROR PROBABILITY,p VITERBI THREE R =kO/no = 1/3 CONVOLUTIONAL CODES ON A THREE R = ko/no = 113 CONVOLUTIONAL SIMULATED AWGN CHANNEL (8000 BYTES DECODED SIMULATED AWGN CHANNEL FOR EACH POINT SHOWN, DECODING DELAY L1 IN SHOWN, A BITS OF 48 IN ALL CASES) BITS CASES) , 4 P (bytes) P P 6 6 8 a .0248 ,0248 .0193 .ou3 = i=t+l ( t = 6 RS t = 6 RS code o t 0 t = = :s a code RS c o d e .0193 ,0133 where the outer decoder is a t-error correcting decoder for the t-error correcting decoder for the RS outer block code. Here and hereafter, we assume that the code. the RS the symbols interleaving is "perfect," i.e., thatthesymbolsin in each RS is “perfect,” Le., block at the output of the interleaver have been independently block at the output of the independently then decoded by theinnerViterbi inner Viterbi decoder. Thus, we canthen upperbound the probability of a decoding error in an RS block, the probability a decoding in block, P ERSS, ,as P E R as PER8 t = 4 RS t = 4 ns code /I. A 16 16 .0285 .was " (bytes) Concatenated with: Concatenated with: o o "- System I wi th S y s t e m I with f.1 = 6, ( 3 , 1) code 1.1 = 6, (3.1) code '\ "- '\ System I with System I with M 7, ( 3 . 1) code M = 7, 1 3 , 1 ) code '\ '\ "- "- \ \ y)pi(l - p ~ (14) , where n is the RS block length (in bytes) and p is the bytelength and byteRS error probability at the Viterbi decoder output. Further, since error probability at decoder output. Further, almost all the incorrectly decoded RS codewords are d min = incorrectly decoded dmin= almost 2tt + 1 symbols away from the correct codeword (where d min 2 1 symbols from the correct codeword dmin is the minimum distance of the RS code), the byte-error probis distance of the probability, PBE , of the concatenated coding system is given ability, P B E concatenated coding given closely by + 2 + 2tt + 1 PBE = PBE = - - PERS ' PERS. 
For a byte size of 6 bits, as will be assumed hereafter, the RS code has length n = 2^6 − 1 = 63 bytes. For convenient reference, we give in Table II the byte-error probability of a Viterbi decoder for the three different convolutional codes of rate R_CON = k_0/n_0 = 1/3 that will be used in our subsequent comparisons when used on four different AWGN channels; these data are taken from [9]. The AWGN channels are specified by the ratio of channel energy per encoder input bit to one-sided noise power spectral density, Eb'/No. Note that the energy per channel input bit (decoder output bit), Es, is given by Es = R_CON Eb'. But also Eb' = R_RS Eb, where R_RS is the rate of the RS code and Eb is the channel energy per information bit entering the RS encoder. Thus, the channel energy per information bit to one-sided noise power spectral density ratio for the overall concatenated coding system, Eb/No, is given by

    Eb/No = (1/(R_RS R_CON)) (Es/No).   (16)

Using the results of Table II together with (14) and (15), we can calculate the byte-error probability for Odenwalder's System I for various RS outer codes. The results of this calculation are shown in Fig.
2 for the three different R_CON = 1/3 convolutional codes, namely (i) the conventional (3, 1) code with M = 6, i.e., K = 7; (ii) the conventional (3, 1) code with M = 7, i.e., K = 8; and (iii) the (18, 6) unit-memory code. Codes (i), (ii) and (iii) have free distances of 15, 16 and 16, respectively, and their corresponding Viterbi decoders have 64, 128 and 64 states, respectively.

Fig. 2. The performance of Concatenated Coding Systems I and II with RS codes over GF(2^6) on a simulated AWGN channel with Eb'/No = 1.25 dB.

We see, from Fig. 2, that the use of the unit-memory code provides an advantage of about 0.3 dB over the conventional code with the same state complexity, part of which gain is attributable to the larger free distance of the unit-memory code. But the unit-memory code is also about 0.1 dB superior to the conventional code with the same free distance (and doubled number of decoder states); this gain is attributable entirely to the byte-oriented structure of the unit-memory code. It should be mentioned that gains of 0.1 dB are not insignificant in concatenated coding systems. As can be seen from Fig.
2, a gain of 0.1 dB corresponds to a reduction of P_BE by nearly an order of magnitude, such steepness of the P_BE vs. Eb/No curves being characteristic of well-designed concatenated coding systems.

The inner decoder, i.e., the Viterbi decoder, in System I makes "hard decisions" on the decoded bytes. The system performance can be improved by using a "soft decision" decoder which passes along to the outer decoder a reliability indicator for each decoded byte. Such a system, in which the inner decoder is an RTMBEP decoder and the outer decoder is an errors-and-erasures decoder for the RS code, will be called System II. (For ease of reference, we summarize in Table III the characteristics of each of the six concatenated coding systems that will be considered in this paper.) When the reliability indicator, P(û_t | r_[1, t+Δ]), for a decoded byte is less than some specified threshold T, the outer decoder treats the byte as having been "erased." The erasures-and-errors decoder for the RS code can correct t errors and e erasures whenever 2t + e < d_min. Thus, the block error probability for the outer decoder is upperbounded by

    P_ERS ≤ Σ_(2t+e ≥ d_min) C(n; t, e) p^t q^e (1 − p − q)^(n−t−e),
where p is again the byte-error probability for the inner decoder, where q is the byte-erasure probability for the inner decoder, and where

    C(n; t, e) = n! / (t! e! (n − t − e)!).

TABLE III
THE SIX BLOCK-CONVOLUTIONAL CONCATENATED CODING SYSTEMS STUDIED (EO = ERRORS-ONLY DECODER, E+E = ERRORS-AND-ERASURES DECODER, FBTID = FEEDBACK TO INNER DECODER)

System   Inner Decoder Type        Outer Decoder Type   Inner Code Tail Annexation
I        Viterbi, hard-decision    EO                   No
II       RTMBEP, soft-decision     E+E                  No
III      Viterbi, hard-decision    EO with FBTID        No
IV       RTMBEP, hard-decision     EO with FBTID        No
V        RTMBEP, soft-decision     E+E with FBTID       No
VI       Viterbi, soft-decision    E+E with FBTID       Yes

TABLE IV
VARIATION OF INNER DECODER BYTE-ERROR PROBABILITY p AND BYTE-ERASURE PROBABILITY q AND OF OUTER DECODER BYTE-ERROR PROBABILITY P_BE WITH THE ERASURE THRESHOLD T FOR THE (18, 6) UNIT-MEMORY CODE ON A SIMULATED AWGN CHANNEL AND WITH THE MINIMUM DISTANCE d_min OF THE OUTER RS CODE
[table entries, for T = 0.50, 0.70 and 0.80 and d_min = 9, 13 and 17, not reliably legible]

The byte-error probability of the overall system is again obtained from (15). The byte-error probability, p, and the erasure probability, q, depend on the particular threshold, T, specified. The optimal threshold is a function of Eb'/No and the minimum distance, d_min, of the Reed-Solomon code. Roughly speaking, for a given block length n, as d_min gets larger, the overall block error probability is minimized at a higher erasure rate.
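The errors-and-erasures bound can also be evaluated directly. The sketch below is our own illustration, with the multinomial coefficient written out as in the text; the example p and q values are merely illustrative:

```python
from math import factorial

def multinomial(n, t, e):
    # C(n; t, e) = n! / (t! e! (n - t - e)!)
    return factorial(n) // (factorial(t) * factorial(e) * factorial(n - t - e))

def p_block_error_ee(p, q, n, d_min):
    """Decoding fails only for patterns of t byte errors and e byte erasures
    from the inner decoder with 2t + e >= d_min (i.e., not 2t + e < d_min)."""
    total = 0.0
    for t in range(n + 1):
        for e in range(n - t + 1):
            if 2 * t + e >= d_min:
                total += multinomial(n, t, e) * p**t * q**e * (1 - p - q)**(n - t - e)
    return total

# Example: n = 63, a threshold giving p = .005, q = .016 (illustrative values)
print(p_block_error_ee(0.005, 0.016, 63, 13))
```

Repeating the calculation for the (p, q) pair measured at each candidate threshold T locates the best threshold, as done in Table IV; (15) again converts the block-error probability to a byte-error probability.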
We have found no simple way to determine the optimal threshold analytically. Instead, we have found p and q for T = 0.5, 0.7 and 0.8 by simulation and have used these values of p and q to calculate the byte-error probability of the coding system. In Table IV, we show the result of this calculation. We see, for Eb'/No in the range from 1.25 dB to 1.75 dB, that T = 0.7 is the best threshold among the three candidates.

The performance of System II with T = 0.7 is also plotted in Figure 2. The improvement of System II over System I due to the erasure scheme, as observed from Figure 2, is dependent on the error-correcting capability of the outer coding system as well as on Eb'/No and is approximately 0.1 dB. This slight improvement is probably not significant enough to justify the increased complexity of the RTMBEP decoder over the Viterbi decoder. However, as we shall soon
However, as we shall soon see, the RTMBEP decoder coupled with an "erasures-andsee, the RTMBEP decoder coupledwith an "erasures-anderror" block decoder performs much better than Viterbi error" block decoder performs much better than thethe Viterbi decoder when feedback from the outer decoderis utilized. decoder when feedback from the outer decoder is utilized. 1069 1069 .00400 .00400 .00250 5 .336 X 10, 3 3 6 X 10c5 8 .816 x 10.816 X lo-' ,00250 11 .927 X 10,927 X 10-l' .249 X ,249 X lo+ Fig. 3. A block/convolutional concatenated coding system with Fig. 3. A block/convolutional concatenated coding system with feedback from the outer decoder to the inner decoder. feedback from the outer decoder to the inner decoder. the general concept of such aablock-convolutional concatenated the general concept of such block-convolutional coding system. coding system. To study the gain provided by feedbackfromtheouter feedback from the outer To study the gain decoder, we first implemented a software Viterbi decoder and decoder decoder, we first implemented software a software RTMBEP decoder which can be restarted with feeda software RTMBEP can be with feedback. Assuming that the outer decoder always makes correct the outer back. Assuming always decisions, ajustifiable assumption since theprobability of probability decisions, a justifiable assumption byte-error at the outer decoder output is at least several orders byte-error at decoder output is several of magnitude less than that at the inner decoder's output, we the inner of magnitude less than obtained the results shown in Table V for the (18, 6) unitobtained the results shown for the (18, unitmemory convolutionalcodeon on asimulated AWGN channel simulated AWGN memory convolutional code the with an E b 'INo of 1.25 dB. From Table V, we see that the an Eb'/No 1.25 dB. V, IV. FEEDBACK FROM THE OUTER DECODER IV. 
FEEDBACK FROM THE OUTER DECODER RTMBEP decoder receives a considerably greater benefit from RTMBEP receives considerably greater TO THE INNER DECODER TO THEINNER DECODER the feedback than does the Viterbi decoder. We then considered the feedback thandoes the decoder. Because of the nature of convolutional code and the Viterbi the following block-convolutional Because of the nature of convolutional code and the Viterbi the following block-convolutional concatenated coding decoding algorithm, once an "error event" occurs the decoding algorithm, once an "error event" occurs the decoder systems. systems. System III: A hard-decisionViterbi inner decoder with Viterbi often makes number of closely spaced erroneous estimations System III: with often makes a number of closely spaced erroneous estimations before it recovers to correct operation. Since the outer feedback from RS decoder, i.e., before it recovers to correct operation. Since the outer feedback from the errors-only RS outer decoder, Le., System I decoder of a concatenated coding is designed in such decoder of a concatenated coding system is designed in such a with feedback. way that it is able to detect and correct almost the errors System IV: A hard-decision RTMBEP inner decoder with hard-decision RTMBEP decoder with System IV: way that it is able to detect and correct almost all of the errors made is then of significant feedback from the RS decoder. made by the inner decoder, itit is then of significant advantage feedback from the errors-only RS outer decoder. 
the inner decoder, if the corrected estimates of the outer decoder are fed back to System V: RTMBEP System V: A soft-decision RTMBEP innerdecoderwith decoder with if the corrected estimates of the outer decoder are fed back to restart the decoder from the point where first erred in feedback from the Le., restart the inner decoder from the point where it first erred in feedback from the erasures-and-errors RS outer decoder, i.e., order eliminate the "burst" of errors. Figure 3 System I1 feedback. order to eliminate the "burst" of errors. Figure 3 illustrates System II with feedback. Authorized licensed use limited to: UNIVERSITY NOTRE DAME. Downloaded on January 6, 2010 at 14:14 from IEEE Xplore. Restrictions apply. AppDel0008213 1070 IEEE ON TRANSACTIONS ON COMMUNICATIONS, VOL. COM-25, NO. 10, OCTOBER 1977 TRANSACTIONS 10, TABLE V TABLE THE EFFECT OF FEEDBACK FROM THE OUTER DECODER ON THE EFFECT OF OUTER THE BYTE-ERROR PROBABILITY FOR A VITERBI DECODER BYTE-ERROR PROBABILITY FOR AND AN RTMBEP DECODER ON A SIMULATED AWGN AN RTMBEP CHANNEL WITH AN Eb'/NO OF 1.25 DB (8000 BYTES (8000 ANEb’fNo OF DECODED FOR EACH POINT SHOWN, DECODING EACH POINT DELAy .... OF 48 BITS IN EACH CASE) DELAY A O F CASE) p for code (18,6) unit memory pforM=7, (3,1) (95% confidence) (95% confidence) NO feedback wi th feedback Viterbi Decoder .0200 (.:..0032) .0110 (~.0023) RTl1BEP Decoder .0193 (.:..0031) .0075 code No feedback Ni th feedback .0225 .0133 (.:..0025) (~.0019) (.:..0034} inner decoder without feedback from an errors-only RS outer decoder without errors-only of Systems decoder. Hence, by comparing the performances of Systems I decoder. and IVin in Fig. 4,, we can conclude that feedback fromthe IV the 4 conclude that outer decoder improves the system by a full 0.5 dB for a harda hard· decision RTMBEP inner decoder. By comparing the performdecoder. performV in ances of Systems IV and V in Fig. 
4, we can further conclude further conclude Systems that, when feedback from the outer decoder is used, an from outer the decoder an additional 0.1 dB improvement can be gained by using a softby 0.1 decision RTMBEP inner decoder rather than decoder rather than a hard-decision hard-decision one-the one-the same improvement as was observed in the previous the feedback from the outer decoder. section when there was no feedback from the outer decoder. P BE ’t\ 10--3 lo- Concatenated w i t h with o 0 t = t ~ 4 RS code t o 10-4 = 6 RS code RS t = a RS 8 RS code 10-5 10-6 System I System I 1 lo-’ 10-8 10-10 2 .1.9 . 2 . 1 . 0 1.9 4 .3 2 2 2 2.0 2.1 2.2 2.3 2.4 2.5 2.5 2.6 2.6 Eb/No(db) Eb/No{db) Fig. 4. Performance of Concatenated Coding Systems I-V employing 4. Performance Concatenated Systems (18, 6) unit-memory convolutional the (18, 6) unit-memory convolutional code and RS codes over 6 ) on a simulated AWGN channel with Eb’/No = 1.25 dB. CF(26) GF(2 Eb'/No = 1.25 111, The performance of Systems III, IV and V when used with unit-memory code on the(18, 6) unit-memorycodeonthe the AWGN channel are (18, comparison, the corresponding shown in Fig. 4. For ease ofcomparison, the corresponding For 11, Systems performances of Systems I and II, given in Fig. 2, are repeated between Systems in Fig. 4. By comparing performances between Systems I and 111, feedback from the outer decoder III, we see from Fig. 4 that feedback from the outer decoder 0.3 for hard-decision improves thesystembyaboutabout 0.3 dBfor a hard-decision system by V, perViterbi inner decoder. As can be seen from Table V, the perinner decoder. formance of hard-decision decoder vir· formance of a hard-decision RTMBEP innerdecoder is virtually indistinguishable from that ofaa Viterbi inner decoder that inner decoder for a unit-memory code; thus, the performance of System I in the performance of Fig. 4 is also the performance of the system with an RTMBEP performance V. 
ZEOLI'S TAIL ANNEXATION SCHEME APPLIED TO A UNIT-MEMORY CONVOLUTIONAL CODE

In [10], Zeoli proposed a concatenated coding system that employed a rather long constraint length (K = 32, i.e., M = 31) convolutional code obtained by annexing a long tail to the M = 7, (3, 1) convolutional code. The longer code is then decoded by the same Viterbi decoder as for the short code, with the exception that the information sequence along the best path to each state is treated as correct and used to "cancel" the effect of the longer tail from the encoded sequence. Thus, the decoder state complexity remains the same as that for the original code, and the annexed tail has absolutely no effect on the hard-decision decoding error probability until after an error has been made. But the tail provides excellent "error-detection" once the Viterbi decoder starts to make mistakes. Because the tail is not canceled when a decoding error is made, the state metrics become extremely ominous after a few decoded branches and can be used as the basis for excellent erasure rules for the output of the inner decoder. However, feedback from the outer decoder is no longer an option, but now a necessity, in order to reset the decoder to the correct state and thus to terminate the very "error propagation" used to trigger the erasure alarm.
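The cancellation step can be illustrated with a toy example, which is entirely our own: a tiny (4, 2) code with made-up matrices, not the paper's (18, 6) code. The long-code branch is the short-code branch XOR-ed with tail contributions from older information blocks, so a decoder that trusts its survivor's old decisions can subtract the tail and proceed exactly as the short-code Viterbi decoder would:

```python
def gf2_matvec(rows, u):
    """Multiply a GF(2) matrix by info vector u; 'rows' holds one input bit-mask
    per output bit, u is an integer bit-mask; returns the output bit-mask."""
    out = 0
    for i, row in enumerate(rows):
        out |= (bin(row & u).count("1") & 1) << i
    return out

# Toy (4, 2) unit-memory code (G0, G1) with an annexed two-branch tail (G2, G3);
# all four matrices are randomly chosen for illustration only.
G = [
    [0b11, 0b01, 0b10, 0b11],  # G0: acts on u_t
    [0b10, 0b11, 0b01, 0b01],  # G1: acts on u_{t-1}
    [0b01, 0b10, 0b11, 0b10],  # G2: annexed tail, acts on u_{t-2}
    [0b11, 0b11, 0b10, 0b01],  # G3: annexed tail, acts on u_{t-3}
]

def encode_branch(history):
    """history[d] = u_{t-d}; the branch output is the XOR of all contributions."""
    v = 0
    for d, Gd in enumerate(G):
        v ^= gf2_matvec(Gd, history[d])
    return v

def cancel_tail(branch, decided_history):
    """XOR out the tail's contribution, computed from the survivor's older
    (assumed-correct) decisions, leaving a short-code branch for metric use."""
    for d in (2, 3):
        branch ^= gf2_matvec(G[d], decided_history[d])
    return branch
```

As long as the decisions fed to cancel_tail are correct, the state complexity and error behavior are those of the short code; once a decision is wrong, the uncanceled tail corrupts every subsequent branch metric, which is precisely the "ominous" metric behavior exploited for erasures.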
To study the improvement resulting from Zeoli's scheme, we annexed, to the (18, 6) unit-memory convolutional code, a three-branch-long "random tail" such that the resultant code is actually an M = 4, (18, 6) convolutional code. The encoding matrices of this latter convolutional code are shown in Table VI. The length of the tail was chosen to be comparable in memory to the M = 31, (3, 1) code used in [10]. (Because the decoder is intended to make mistakes continually after its first error, it makes no difference whether the annexed M = 4, (18, 6) code is catastrophic [15] or not.) The last of the systems to be considered in this paper, System VI, is that of Zeoli [10], namely a soft-decision Viterbi inner decoder with feedback from an errors-and-erasures RS outer decoder, with the M = 4, (18, 6) code replacing his conventional M = 31, (3, 1) code.

TABLE VI
THE ENCODING MATRICES OF THE M = 4, (18, 6) CONVOLUTIONAL CODE OBTAINED BY ANNEXING A RANDOMLY CHOSEN TAIL TO THE (18, 6) UNIT-MEMORY CODE
[the binary matrices G0 through G3 are not reliably legible]

The state metric used in the "real-time Viterbi decoder" [14] of System VI, namely μ(t + Δ) = log P(σ̂_[1, t+Δ] | r_[1, t+Δ]), where σ̂_[1, t+Δ] is the "best path" at time t + Δ, can be used as the basis for an effective erasure rule as follows. The difference, μ(t + Δ) − μ(t), is, along the correct encoded path, the sum of Δn_0 statistically independent random variables, each corresponding to one encoded bit. Note that, for System VI, Δn_0 = 8(18) = 144. The central-limit theorem can thus be invoked to assert that μ(t + Δ) − μ(t) is approximately Gaussian. Letting m and σ be the (easily calculable) mean and standard deviation of μ(t + Δ) − μ(t), it is natural to use the erasure rule: erase û_t whenever μ(t + Δ) − μ(t) is more than λ standard deviations above m. In Table VII, we give the performance of System VI using this erasure rule for λ = 1.5, 1.8 and 2.0; the value λ = 1.8 is seen to give the best performance.

TABLE VII
VARIATION OF INNER DECODER BYTE-ERROR PROBABILITY p AND BYTE-ERASURE PROBABILITY q AND OF THE OUTER DECODER BYTE-ERROR PROBABILITY P_BE WITH THE ERASURE PARAMETER λ FOR THE M = 4, (18, 6) CODE OBTAINED BY ANNEXING A TAIL TO THE (18, 6) UNIT-MEMORY CODE ON A SIMULATED AWGN CHANNEL WITH AN Eb'/No OF 1.25 DB AND WITH THE MINIMUM DISTANCE d_min OF THE OUTER RS CODE

λ       p        q
1.50    .00125   .03788
1.80    .00263   .02088
2.00    .00425   .01450
[the corresponding P_BE entries for d_min = 9, 13 and 17 are only partially legible]
Note that if μ(t + Δ) − μ(t) were truly Gaussian, the probability that it would exceed m + 1.8σ (i.e., the probability of an erasure in the Viterbi decoder output) would be .036; the observed value of .021 given in Table VII is rough confirmation of the appropriateness of the Gaussian approximation.

The performance of System VI on the AWGN channel is shown in Fig. 5; for comparison, the performance of Zeoli's original system, taken from [10], is also shown. The performances of Systems III and V, given in Fig. 4, are also repeated in Fig. 5 to indicate how System VI compares to the systems previously considered. By comparing the performance of Systems III and VI, we see that Zeoli's tail annexation scheme (and the resulting erasure capability) has improved the performance of the feedback system with a Viterbi inner decoder by about 0.2 dB.

Fig. 5. Performance of Zeoli's tail annexation scheme (System VI) on a simulated AWGN channel with Eb'/No = 1.25 dB, and comparison with other concatenated coding systems.

VI. DEGRADATION OF PERFORMANCE FOR EMPLOYING HIGHER RATE INNER CODES

We have studied, rather extensively, block-convolutional concatenated coding systems employing rate 1/3 convolutional codes and Reed-Solomon codes over GF(2^6). However, it is sometimes desired in practice to operate the inner convolutional codes at a higher rate (i.e., narrower bandwidth), rate 1/2 in particular, in order to ease the burden imposed on the phase-lock loops in the receiver. We now describe an heuristic approach to estimate the performance of similar concatenated coding systems with rate 1/2 inner codes from the rate 1/3 results.

From past experience [16], it has been observed that the performance of a rate 1/2 convolutional coding system is about 0.5 dB inferior to that of a rate 1/3 convolutional coding system of the same complexity. To verify the general applicability of this rule-of-thumb, we used a hard-decision Viterbi decoder (without feedback) for an M = 6, (2, 1) convolutional code on a simulated AWGN channel at Eb'/No = 1.75 dB or, equivalently, Es/No ≈ −1.25 dB. The results of this simulation, and the calculated overall byte-error probability when this decoder is used with errors-only RS outer decoders for various Reed-Solomon codes, are given in Figure 6. For comparison, the performance of the similar R = 1/3 system employing the M = 6, (3, 1) code is also shown. We see from Fig. 6 that the latter system is about 0.5 dB superior to the former. It seems reasonable then to conclude
Fig. 6. Performance of Concatenated Coding System I on a simulated AWGN channel with Eb'/No = 1.75 dB when a rate 1/2, M = 6, (2, 1) convolutional code is used, and with Eb'/No = 1.25 dB when a rate 1/3, M = 6, (3, 1) convolutional code is used. (Curves are shown for concatenation with t = 4, t = 6, t = 8, and t = 10 RS outer codes.)

that a concatenated block-convolutional coding system with a rate 1/2 inner code may be about 0.5 dB inferior to one with a rate 1/3 inner code for the same number of states in the Viterbi inner decoder, though we should be a little cautious, since the performances of rate 1/2 and rate 1/3 inner coding systems with soft-decision and errors-and-erasures outer decoders have not been compared.

VII. CONFIDENCE INTERVALS FOR THE SIMULATION RESULTS

In the preceding, we have reported the performances of numerous block-convolutional concatenated coding systems. The overall byte-error rate was calculated from the byte-error rate of the inner decoder as obtained by simulation. The rather large values of PBE for the inner decoding imply that the simulations require only a modest sample size. The predominant single-byte error events indicate quite independent byte-decisions.
Assuming that the decoder makes an error independently with probability PBE for each byte-decision, the number of byte errors in L decisions is a binomial random variable with parameters L and PBE. The mean value of this random variable is L*PBE, and the standard deviation is sqrt(L*PBE(1 - PBE)). L is sufficiently large for this binomial random variable to be well approximated by a Gaussian random variable with the same mean and variance. Since 95.4% of the samples of a Gaussian random variable lie within the interval specified by the mean plus and minus twice the standard deviation, we can be 95% confident that the actual byte-error rate PBE for the inner decoder is in the interval PBE +/- 2*sqrt(PBE(1 - PBE)/L). Such 95% confidence intervals are indicated in Tables II and V.

The performances of System I for the M = 6, (3, 1) inner code and for the (18, 6) unit-memory inner code are shown in Fig. 7 together with their corresponding confidence intervals. We conclude that we may be 95% confident that the actual performance of the concatenated coding system deviates no more than about 0.1 dB from our simulation results. Moreover, since all the simulation results are obtained through the same pseudorandom number sequence, the relative differences in performance among the various systems are, in fact, much more accurate than the 0.1 dB confidence interval alone would indicate.
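The interval just described is easy to compute. The helper below is our own sketch (not from the paper) of the normal-approximation 95% interval for a byte-error rate estimated from L simulated byte decisions:

```python
import math

def byte_error_ci(errors: int, decisions: int):
    """95% (more precisely, 95.4%) confidence interval for the true
    byte-error rate, using the Gaussian approximation to the binomial:
    p_hat +/- 2*sqrt(p_hat*(1 - p_hat)/L)."""
    p_hat = errors / decisions
    half_width = 2.0 * math.sqrt(p_hat * (1.0 - p_hat) / decisions)
    return (max(0.0, p_hat - half_width), p_hat + half_width)

# e.g., 400 byte errors observed in 10,000 byte decisions:
lo, hi = byte_error_ci(400, 10_000)
print(f"PBE in [{lo:.4f}, {hi:.4f}]")  # roughly [0.0361, 0.0439]
```

The "rather large values of PBE" remark above is visible here: the relative width of the interval shrinks as p_hat grows, so a noisy inner channel needs fewer simulated decisions for a given accuracy.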
VIII. SUMMARY AND CONCLUSIONS

We have extensively studied block-convolutional concatenated coding systems with various modifications. We have found that employing unit-memory convolutional codes rather than conventional codes can improve the performance by nearly 0.3 dB. Feedback from the outer decoder to restart a Viterbi inner decoder also contributes an improvement of about 0.3 dB. But, surprisingly, feedback from the outer decoder to restart an RTMBEP inner decoder provides an approximately 0.5 dB advantage; this might be the principal occasion where the use of RTMBEP decoding rather than Viterbi decoding is justified. Another unexpected result is that soft decisions by the inner decoder, in conjunction with an erasures-and-errors outer decoder, improve the overall performance by only about 0.1 dB for RTMBEP decoding. Even with
Fig. 7. 95% confidence intervals for the performance of System I with the M = 6, (3, 1) convolutional code and with the (18, 6) unit-memory convolutional code.

Zeoli's modification, which provides an excellent erasure capability, soft decisions in conjunction with an erasures-and-errors outer decoder improve performance by only about 0.2 dB.

In Fig. 8, we summarize the effects of each feature discussed above on the performance of block-convolutional concatenated coding systems. The figure is drawn in terms of a dB scale. As a communications engineer starts to choose a concatenated coding system, the first question he faces is whether he is willing to trade the increased cost of modems operating at lower signal energy per channel bit for coding gains; if he decides to choose a rate 1/3 convolutional inner code instead of a rate 1/2 code, he gains about 0.5 dB. Then, he decides which inner code to employ; to choose the M = 7, (3, 1) code gives a 0.2 dB advantage over the M = 6, (3, 1) code but requires twice the number of states in the decoder, whereas to choose the M = 1, (18, 6) code gives a 0.3 dB advantage with the same number of states, but more branch connections required in the inner decoder. The third question is whether he will allow the decisions of the outer decoder to be fed back to the inner decoder; if not, the obvious choice is Viterbi decoding.
Otherwise, he can gain 0.3 dB or 0.5 dB, depending on whether a Viterbi decoder or an RTMBEP decoder is utilized. And finally, if a soft-decision inner decoder is used, he can gain 0.2 dB through Zeoli's erasure scheme if he uses a Viterbi decoder, or gain about 0.05 dB if an RTMBEP decoder is employed.

The leading contenders for a good concatenated system are Zeoli's annexation scheme with the unit-memory code (System VI), or either hard-decision (System IV) or soft-decision (System V) RTMBEP decoding of the unit-memory code with

Fig. 8. The relative dB gains among the concatenated coding systems studied (Systems II, III, IV, V, and VI with the M = 7, (3, 1) code and the (18, 6) unit-memory code).

feedback from the outer decoder. Among them, the soft-decision RTMBEP decoder with feedback performs the best. In terms of hardware implementation, Zeoli's modification with the unit-memory code and the hard-decision RTMBEP decoder are of approximately the same complexity.
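Treating the incremental gains quoted above as additive, as Fig. 8's dB scale suggests, a small tally illustrates how the best configuration (System V) stacks up against a rate 1/2, M = 6 Viterbi baseline; the individual numbers are the paper's, the additivity assumption and bookkeeping are ours:

```python
# Incremental coding gains (dB) quoted in the text, each relative to the
# previous design choice, starting from a rate 1/2, M = 6 system with
# Viterbi decoding and no outer-decoder feedback.
gains_db = {
    "rate 1/3 inner code instead of rate 1/2": 0.5,
    "(18,6) unit-memory code instead of M = 6, (3,1)": 0.3,
    "outer-decoder feedback with RTMBEP inner decoding": 0.5,
    "soft-decision RTMBEP inner decoding": 0.05,
}

total = sum(gains_db.values())
print(f"System V advantage over the baseline: ~{total:.2f} dB")  # ~1.35 dB
```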
However, since the operation of the Viterbi decoder for Zeoli's system depends on correct feedback from the outer decoder, there is always a slim chance that the outer decoder fails to provide correct decisions to the Viterbi decoder. Since the encoder constraint length is much larger than the decoder constraint length, this can cause endless errors, as if a catastrophic convolutional code were used. Thus, it is necessary to send synchronization signals periodically in the Zeoli scheme to reset the Viterbi decoder and guarantee restoration of normal operation. The RTMBEP decoder has the same constraint length as that of the encoder; therefore, the decoder is able to recover from errors in a few branches by itself without feedback. The feedback from the outer decoder only speeds this process up; therefore, when an error is fed back, the most damage it can cause is for the RTMBEP decoder to make a few more errors before it recovers by itself. This is certainly a very desirable advantage for a concatenated coding system. Moreover, because the decoder can restore its normal operation quickly, the degree of interleaving required for this scheme is considerably less than the full Reed-Solomon block-length interleaving required for the Zeoli scheme.
Finally, as a remark to information theorists, we note that for System III (the RTMBEP inner decoder for the rate 1/3, (18, 6) unit-memory code concatenated with the (63, 51), 6-error-correcting RS code, with feedback from the errors-only RS decoder) we can achieve a byte-error probability of 10^-6 at Eb/No of 2.17 dB or, equivalently, at Es/No of -3.52 dB. The cut-off rate, Rcomp, of this 8-level quantized AWGN channel is 0.275, whereas its channel capacity is 0.44. The overall rate of the concatenated coding system is 0.27. It seems that the cut-off rate, rather than the channel capacity, is still the practical limit of rate for reliable communications, even for a very sophisticated concatenated coding system, just as it is in a conventional convolutional coding system employing sequential decoding [16]. The advantage of the concatenated coding system resides only in the elimination of "deleted data" such as is always present in a sequential decoding system because of the latter's highly variable computation.
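As a back-of-the-envelope check on these figures (ours, not the paper's): for the unquantized soft-decision binary-input AWGN channel, the cut-off rate is R0 = 1 - log2(1 + e^(-Es/N0)) with Es/N0 in linear units, and 8-level quantization shaves a little off this value.

```python
import math

def cutoff_rate_bpsk_awgn(esno_db: float) -> float:
    """Cut-off rate R0 (bits/channel use) of the unquantized binary-input
    AWGN channel: R0 = 1 - log2(1 + exp(-Es/N0)), with the Bhattacharyya
    parameter exp(-Es/N0) for antipodal signaling."""
    esno = 10.0 ** (esno_db / 10.0)
    return 1.0 - math.log2(1.0 + math.exp(-esno))

# At Es/No = -3.52 dB the unquantized R0 is about 0.285; the paper's
# 0.275 for the 8-level quantized channel sits just below it, and the
# overall system rate 0.27 sits just below that.
print(round(cutoff_rate_bpsk_awgn(-3.52), 3))  # 0.285
```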
ACKNOWLEDGMENT

The author would like to express his gratitude to his Dissertation Advisor, Professor James L. Massey, for his patient and generous guidance of this research and for his suggestions to match byte-oriented convolutional codes with Reed-Solomon codes and to develop the "real-time minimal-byte-error probability (RTMBEP) decoding algorithm."

REFERENCES

[1] P. Elias, "Error-Free Coding," IRE Transactions on Information Theory, Vol. IT-4, pp. 29-37, Sept. 1954.
[2] G. D. Forney, Jr., Concatenated Codes. Cambridge, Mass.: M.I.T. Press, 1966.
[3] D. D. Falconer, "A Hybrid Sequential and Algebraic Decoding Scheme," Ph.D. Dissertation, Dept. of Electrical Engineering, M.I.T., Cambridge, Mass., 1966.
[4] F. Jelinek and J. Cocke, "Bootstrap Hybrid Decoding for Symmetrical Binary Input Channels," Information and Control, Vol. 18, pp. 261-298, March 1971.
[5] A. J. Viterbi, "Error Bounds for Convolutional Codes and an Asymptotically Optimal Decoding Algorithm," IEEE Transactions on Information Theory, Vol. IT-13, pp. 260-269, April 1967.
[6] J. P. Odenwalder, "Optimal Decoding of Convolutional Codes," Ph.D. Dissertation, School of Engineering and Applied Sciences, University of California, Los Angeles, 1970.
[7] E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968.
[8] J. L. Massey, "Shift-Register Synthesis and BCH Decoding," IEEE Transactions on Information Theory, Vol. IT-15, pp. 122-127, Jan. 1969.
[9] L. N. Lee, "Short, Unit-Memory, Byte-Oriented, Binary Convolutional Codes Having Maximal Free Distance," IEEE Transactions on Information Theory, Vol. IT-22, No. 3, pp. 349-352, May 1976.
[10] G. W. Zeoli, "Coupled Decoding of Block-Convolutional Concatenated Codes," IEEE Transactions on Communications, Vol. COM-21, pp. 219-226, March 1973.
[11] F. Jelinek, "Bootstrap Trellis Decoding," IEEE Transactions on Information Theory, Vol. IT-21, pp. 318-325, May 1975.
[12] L. R. Bahl, J. Cocke, F. Jelinek and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, Vol. IT-20, pp. 284-288, March 1974.
[13] P. L. McAdam, L. R. Welch and C. L. Weber, "M.A.P. Bit Decoding of Convolutional Codes," presented at the 1972 International Symposium on Information Theory, Asilomar, California, January 1972.
[14] L. N. Lee, "Real-Time Minimal-Bit-Error Probability Decoding of Convolutional Codes," IEEE Transactions on Communications, Vol. COM-22, pp. 146-151, Feb. 1974.
[15] J. L. Massey and M. K. Sain, "Inverses of Linear Sequential Circuits," IEEE Transactions on Computers, Vol. C-17, pp. 330-337, April 1968.
[16] I. M. Jacobs, "Sequential Decoding for Efficient Communications from Deep Space," IEEE Transactions on Communication Technology, Vol. COM-15, pp. 492-501, Aug. 1967.
[17] W. W. Peterson and E. J. Weldon, Error Correcting Codes, Second Edition. Cambridge, Mass.: MIT Press, 1972.
[18] L. N. Lee, "Concatenated Coding Systems Employing Unit-Memory Convolutional Codes and Byte-Oriented Decoding Algorithms," Ph.D. Dissertation, Department of Electrical Engineering, University of Notre Dame, June 1976.

Lin-nan Lee (S'73-M'76) was born in Kaohsiung, Taiwan, China, on February 24, 1949. He received the B.S. degree from National Taiwan University, Taipei, Taiwan, China, in 1970, and the M.S. and Ph.D. degrees from the University of Notre Dame, Notre Dame, IN, in 1973 and 1976, respectively, all in Electrical Engineering. From 1970 to 1971, he was a Communications Officer in the Chinese Air Force.
Between 1971 and 1975, he was a research assistant at the University of Notre Dame engaged in graduate studies on digital communications with emphasis on information and coding theories. In 1975, he joined the LINKABIT Corporation, where he is presently engaged in the study, research and development of coding, modulation and multiple access techniques for satellite communication systems.

Exhibit 7

GSM ENHANCED FULL RATE SPEECH CODEC

K. Jarvinen, J. Vainio, P. Kapanen, T. Honkanen, P. Haavisto
Nokia Research Center, Tampere, Finland
kari.jarvinen@research.nokia.com

R. Salami, C. Laflamme, J-P. Adoul
University of Sherbrooke, Sherbrooke, Canada

ABSTRACT

This paper describes the GSM enhanced full rate (EFR) speech codec that has been standardised for the GSM mobile communication system. The GSM EFR codec has been jointly developed by Nokia and the University of Sherbrooke. It provides speech quality at least equivalent to that of a wireline telephony reference (32 kbit/s ADPCM). The EFR codec uses 12.2 kbit/s for speech coding and 10.6 kbit/s for error protection. Speech coding is based on the ACELP algorithm (Algebraic Code Excited Linear Prediction). The codec provides substantial quality improvement compared to the existing GSM full rate and half rate codecs. The old GSM codecs lag behind wireline quality even in error-free channel conditions, while the EFR codec provides wireline quality not only for error-free conditions but also for the most typical error conditions.
With the EFR codec, wireline quality is also sustained in the presence of background noise and in tandem connections (mobile to mobile calls).

1. INTRODUCTION

The background for introducing wireline speech quality to GSM is the increasing use of the GSM system in communications environments where it competes with fixed or cordless systems. To be competitive also with respect to speech quality, GSM must provide wireline speech quality which is robust to typical usage conditions such as background noise and transmission errors.

The standardisation of an enhanced full rate (EFR) codec for GSM started in the European Telecommunications Standards Institute (ETSI) in 1994 with a pre-study phase. The pre-study phase was undertaken to set essential requirements for the EFR codec and to assess the technical feasibility of meeting them. During the pre-study phase, wireline speech quality was set as a development target for the EFR codec [1]. Wireline quality (with ITU-T G.728 16 kbit/s LD-CELP as a reference codec) was required not only for error-free transmission, but also in low error-rate conditions (C/I = 13 dB) as well as in background noise (error-free conditions). Wireline performance was required also for speaker independence and for speaker recognisability. For more severe error conditions (C/I = 10 dB and C/I = 7 dB), significant improvement over the existing GSM full rate (FR) codec was required. In extreme error conditions (C/I < 7 dB) the requirement was to provide graceful degradation without annoying effects. In error-free self-tandem (mobile to mobile calls), the EFR codec should perform equal to G.728 in tandem. In erroneous tandem at C/I = 10 dB, the EFR codec was required to perform significantly better than the FR codec.
0-8186-7919-0/97 $10.00 (c) 1997 IEEE

Table I: Bit allocation of the 244-bit, 20 ms speech frame.

Parameter          1st and 3rd subframe   2nd and 4th subframe   Total per frame
2 LSP sets                  -                      -                   38
ACB index (lag)             9                      6                   30
ACB gain                    4                      4                   16
FCB pulses                 35                     35                  140
FCB gain                    5                      5                   20
Total                                                                 244

Figure 1: Block diagram of the GSM EFR encoder.

2.1 Linear Prediction

A 10th order linear prediction (LP) analysis is carried out twice for each 20 ms frame using two different asymmetric windows of length 30 ms. Both LP analyses are performed for the same set of speech samples without using any samples from future frames (no lookahead). The two sets of LP parameters are converted into line spectrum pairs (LSP). First order moving average prediction is used for both LSP sets. The LSP residual vectors are jointly quantised using split matrix quantisation (SMQ) with 5 submatrices of dimension 2x2 (two elements from both sets). The submatrices are quantised with 7, 8, 9, 8, and 6 bits, respectively. A total of 38 bits is used for LSP quantisation. For the 1st and 3rd subframes, LP parameters interpolated from the adjacent subframes are used in the codec.

2.2 Pitch Analysis

The adaptive codebook is searched over the lag range [17 3/6, 143] with a combined open-loop/closed-loop search [2]. A fractional lag with 1/6th resolution is used for lag values below 95 in the 1st and 3rd subframes and for all lag values in the 2nd and 4th subframes. The codebook search consists of the following steps:

- An open-loop search for integer lag values is carried out once every 10 ms from the weighted original speech. Small lag values are preferred to avoid pitch multiples.
- A closed-loop search for integer lag values is performed on a subframe basis. For the 1st and 3rd subframes the search is carried out in the vicinity of the found open-loop lag [Tol - 3, Tol + 3], and for the 2nd and 4th subframes in the vicinity of the lag found for the previous subframe [Tps - 5, Tps + 4].
- Fractions are searched around the closed-loop lag if it is less than 95 (and always in the 2nd and 4th subframes).

The lag is quantised with 9 bits for the 1st and 3rd subframes and with 6 bits for the other two subframes, where the lags are coded differentially. The codebook gain is quantised to 4 bits.

2.3 Fixed Codebook

An algebraic codebook with 35 bits is used as the fixed codebook. Each excitation vector contains 10 non-zero pulses, with amplitudes +1 or -1. The 40 positions in each subframe are divided into 5 tracks where each track contains two pulses. In the design, the two pulses for each track may overlap, resulting in a single pulse with amplitude +2 or -2. The allowed positions for pulses are shown in Table II.

Table II: Allowed pulse positions for each track.

The optimal pulse positions are determined using a non-exhaustive analysis-by-synthesis search: For each of the five tracks, the pulse positions with maximum absolute values of the sum of the normalised backward filtered target and the normalised long-term prediction residual are searched. From these, the global maximum value over all the pulse positions is selected. The first pulse p0 is always set in the position corresponding to the global maximum value. Five iterations are carried out in which the position of pulse p1 is set to one of the five track maxima. The rest of the pulses are searched in pairs by sequentially searching each of the pulse pairs (p2, p3), (p4, p5), (p6, p7), (p8, p9) in nested loops. For each iteration, all 9 initial pulse positions are cyclically shifted so that the pulse pairs are changed and the pulse p1 is placed at a local maximum of a different track. The rest of the pulses are searched also for the other positions in the tracks. In the search, at least one pulse position is located corresponding to the global maximum and one pulse is located in a position corresponding to one of the 5 local maxima.
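Table II's contents did not survive extraction in this copy. In the interleaved track layout that this family of ACELP codecs uses, track k holds positions k, k+5, ..., k+35 of the 40-sample subframe; the sketch below generates that layout under this assumption and checks the bit count against the 35-bit codebook size stated above:

```python
SUBFRAME_LEN = 40  # samples per 5 ms subframe
NUM_TRACKS = 5     # two pulses are placed on each track

# Assumed interleaved track layout: track k = {k, k+5, ..., k+35}.
tracks = {k: list(range(k, SUBFRAME_LEN, NUM_TRACKS)) for k in range(NUM_TRACKS)}

for k, positions in tracks.items():
    print(f"track {k}: {positions}")

# Each track offers 8 positions, so one position takes 3 bits and the
# two positions per track take 6 bits; with 1 sign bit per track this
# gives 5 * (6 + 1) = 35 bits, matching the codebook size above.
assert all(len(p) == 8 for p in tracks.values())
```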
For subframes with a lag less than the subframe length, an adaptive pitch sharpening filter is used.

The two pulse positions in each track are encoded with 6 bits and the sign of the first pulse in each track is encoded with one bit. The fixed codebook gain is coded using moving average prediction and quantised to 5 bits.

2.4 Error Concealment

Error concealment in the EFR codec is based on partially replacing the parameters of the received bad frame with values extrapolated from the previous good frames: The LSPs of the previous good frame are used but shifted towards their mean values. The codebook gains are replaced by attenuated values derived from the previous subframes using median filtering. The amount of attenuation is different for the two codebooks and depends on which state the error concealment is in. The lag values are replaced by the lag value from the 4th subframe of the previous good frame. The received excitation pulses of the fixed codebook are used as such. In case a good frame is preceded by a bad frame, the codebook gains for the good frame are limited below the values used for the last good subframe.

3. CHANNEL CODEC

The EFR channel codec is almost the same as the FR channel codec because the design aim was to keep it as unchanged as possible. During the GSM EFR codec standardisation, the use of the existing FR channel codec (or any existing GSM generator polynomials) was encouraged since this minimises hardware changes in the GSM base stations and speeds up the introduction of the EFR codec. In the PCS 1900 EFR codec standardisation, the use of the existing FR channel codec was an essential requirement. Therefore, the FR channel codec was included in the EFR channel codec as a module together with additional error protection. The additional error protection consists of an 8-bit CRC parity check and a repetition code.
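The numbers in Sections 2 and 3 can be cross-checked against the 12.2 + 10.6 = 22.8 kbit/s split quoted in the abstract. The sketch below sums the per-parameter bits to the 244-bit frame and then applies the familiar GSM full-rate channel structure (182 protected class-1 bits with a 3-bit CRC, 4 tail bits and a rate-1/2 convolutional code, 78 unprotected class-2 bits); the preliminary-stage sizes (8 CRC bits plus 8 repetition bits padding the frame to the FR codec's 260-bit input) are our reading of the text rather than figures stated in it:

```python
FRAME_MS = 20  # one speech frame (four 5 ms subframes)

# Source-coder bits per frame (Section 2): LSPs once per frame,
# the remaining parameters once per subframe.
lsp_bits = 38
lag_bits = [9, 6, 9, 6]      # adaptive codebook lag (differential in 2nd/4th)
acb_gain_bits = [4] * 4      # adaptive codebook gain
fcb_pulse_bits = [35] * 4    # algebraic codebook pulses
fcb_gain_bits = [5] * 4      # fixed codebook gain

frame_bits = lsp_bits + sum(lag_bits + acb_gain_bits + fcb_pulse_bits + fcb_gain_bits)
assert frame_bits == 244
print(frame_bits / FRAME_MS, "kbit/s net speech rate")  # 12.2

# Channel coding (Section 3): assumed preliminary stage pads the frame
# with the 8-bit CRC and 8 repetition bits to the FR codec's 260 bits.
fr_input = frame_bits + 8 + 8
assert fr_input == 260

coded_bits = 2 * (182 + 3 + 4) + 78  # rate-1/2 conv. code on class 1, class 2 uncoded
assert coded_bits == 456
print(coded_bits / FRAME_MS, "kbit/s gross channel rate")               # 22.8
print((coded_bits - frame_bits) / FRAME_MS, "kbit/s error protection")  # 10.6
```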
The FR channel codec module protects the 182 most important bits with a 1/2-rate convolutional code, and it uses a 3-bit CRC that covers the 50 most important bits. The bits in the EFR codec are divided into protected and unprotected bits according to their subjective importance to speech quality. Only 66 bits are left unprotected. These consist of the least significant bits of pulse positions in the algebraic code. The 8-bit CRC covers the 65 most important bits. It was included in the EFR codec to achieve reliable bad frame detection and, consequently, to reduce the number of undetected bad frames. These are a major source of audible degradations in current digital cellular systems.

4. VAD/DTX

The EFR codec also contains the functions of discontinuous transmission (DTX) and voice activity detection (VAD). DTX allows the radio transmitter to be switched off during speech pauses in order to save power in the mobile station and also to reduce the overall interference level over the air interface. VAD is used on the transmit side to detect speech pauses, during which characteristic parameters of the background acoustic noise are transmitted to the receive side, where similar noise, referred to as comfort noise, is then generated. The comfort noise parameters in the EFR codec consist of averaged LSP parameters and an averaged fixed codebook gain. Locally generated random numbers are used on the receive side as excitation pulses. During comfort noise generation, the adaptive codebook is switched off. The estimated average speech channel activity for the EFR codec is 64% [3].

5. COMPLEXITY AND DELAY

The complexity of the EFR codec has been estimated from C code implemented with a fixed-point function library in which each operation has been assigned a weight representative of performing the operation on a typical DSP [3]. The theoretical worst-case complexity of the EFR codec has been estimated during the ETSI EFR verification phase to be 18.1 WMOPS (weighted million operations per second) [3].
This is below that of the GSM half rate codec (21.2 WMOPS) [3], [4]. Memory consumption estimates for data RAM (4.7k 16-bit words), data ROM (5.9k 16-bit words) and program ROM are each below those of the GSM half rate codec.

The delay of the EFR codec is approximately the same as that of the FR codec. Both codecs have a buffering delay of 20 ms without any lookahead. The round-trip delay (uplink delay + downlink delay) for EFR, taking into account all system and processing delays of the GSM network, is 191.0 ms, while for the FR codec it is estimated to be 188.5 ms [3]. The difference is unnoticeable.

6. SUBJECTIVE TEST RESULTS

The most extensive subjective tests of the performance of the EFR codec are from the PCS 1900 EFR codec validation tests [5]. These were carried out to characterise the performance of the EFR codec after selection for PCS 1900. The standardisation process in ETSI also included pre-selection tests which were carried out in six laboratories, but no common analysis averaging the scores over all laboratories exists [3]. The results from both tests are well in line with each other. Both show that the EFR codec has basic speech quality at least equal to that of a wireline reference (G.721 32 kbit/s ADPCM in the PCS 1900 tests and G.728 16 kbit/s LD-CELP in the ETSI tests).

The PCS 1900 EFR codec validation test results are discussed first. The EFR codec was tested in three channel error conditions with C/I ratios of 13, 10 and 7 dB. The channel bit error-rates for these are approximately 2%, 5% and 8%, respectively. The two lowest error-rate conditions correspond to operating well inside a cell, while the 7 dB C/I condition corresponds to operating at a cell boundary. Figure 2 shows the results from the channel error test. These show that the EFR codec performs much better than the FR codec in the error-free case and in all the tested error conditions.
In error-free and low error-rate conditions (C/I = 13 dB), the performance of the EFR codec is statistically better (based upon 95% confidence intervals) than the performance of the error-free reference ADPCM codec. At medium error-rate conditions (C/I = 10 dB), the EFR codec performs equally well to (error-free) ADPCM. Only at medium to high error rates (C/I < 10 dB) does the EFR performance fall below wireline quality. Figure 3 shows the results from the background noise test (home noise at 20 dB, car noise at 15 dB and 25 dB, street noise at 10 dB, and office noise at 20 dB). For all of these, the EFR codec performs clearly better than the FR codec and equal to or better than ADPCM. For scores averaged over all background noise conditions, the EFR codec performs statistically better than ADPCM. Figure 4 shows test results from the tandem test for self-tandem and for tandeming with either the FR or the ADPCM codec. The EFR codec performs statistically equivalent to ADPCM in tandems with FR and ADPCM, and statistically better than ADPCM for self-tandem. Figure 5 shows the results from the talker dependency test (with 12 talkers). The

Figure 2: Results from the channel error test (ACR, MOS; error-free and C/I = 13, 10, 7 dB; GSM EFR, GSM FR, and error-free ADPCM).
Figure 3: Results from the background noise test (DCR, DMOS).
Figure 4: Results from the tandem test (ACR, MOS).
Figure 5: Results from the talker dependency test (DCR, DMOS).

EFR codec was found statistically better than ADPCM. The ETSI pre-selection tests consisted of five experiments: transmission errors (C/I = 10 and 7 dB), tandeming (C/I = 10 dB), background noise (music 20 dB and vehicle 10 dB), talker dependency, and high error conditions (C/I = 4 dB).
The performance of the EFR codec was found equal to G.728 for error-free transmission, speech in background noise (for both noise types) and talker dependency. No testing was carried out for the low error-rate condition C/I=13 dB. In erroneous transmission at C/I=10 dB and 7 dB, the EFR codec was found clearly better than the FR codec. At C/I=10 dB, the EFR codec performed equal to or better than MNRU 24 dB in half of the tests. For the outside-a-cell error condition of C/I=4 dB, the results show that the EFR codec has approximately the same performance as the FR codec. The EFR codec was tested in error-free self-tandem in one laboratory and was found equivalent to G.728. In self-tandem at C/I=10 dB, the EFR codec performed clearly better than the FR codec, and performance equal to the FR codec in single encoding at C/I=10 dB was demonstrated in all laboratories except one.

Verification tests complementing the pre-selection tests were carried out in ETSI for such items as DTMF and network information tones, frequency response, complexity, delay, different languages, music and special input signals (e.g., different input levels, sine waves, noise signals, etc.). In all the verification tests, the EFR codec performed well [3]. For DTMF tones, the EFR codec was found 100% transparent.

7. CONCLUSIONS

The GSM EFR codec has met and even exceeded the essential requirements set for the development of the EFR codec. It provides substantial improvement in speech quality compared to the existing GSM full rate codec and brings high speech quality, associated previously only with fixed networks, to the end users of mobile communication systems. The GSM EFR specifications were completed in 1996 and the codec is expected to be deployed in GSM, DCS 1800 and PCS 1900 networks in 1997-1998.

ACKNOWLEDGEMENTS

The authors acknowledge the efforts of M. Fatini and J. Hagqvist in PCS 1900 EFR codec standardisation, the algorithmic contribution of B. Bessette and the efforts of members of ETSI SMG2 SEC, especially those who have carried out pre-selection and verification tests for the EFR codec.

REFERENCES

[1] "Enhanced Full-Rate Speech Codec - Study Phase Report," Report of the ETSI SMG1 sub-group on the EFR codec, Version 0.5.3, December 19, 1994.
[2] R. Salami, C. Laflamme, J-P. Adoul, and D. Massaloux, "A toll quality 8 kb/s speech codec for the personal communications system (PCS)," IEEE Trans. Veh. Technol., vol. 43, no. 3, pp. 808-816, Aug. 1994.
[3] "Digital cellular telecommunications system; Performance characterization of the GSM enhanced full rate speech codec (GSM 06.55)," ETSI Technical Report (in preparation), Draft ETR 305, version 1.0.0.
[4] Complexity Evaluation Subgroup of ETSI TCH-HS (Alcatel Radiotelephone, Analog Devices, Italtel-SIT, Nokia), "Complexity Evaluation of the Full-Rate, ANT and Motorola Codecs for the Selection of the GSM Half-Rate Codec," Proceedings of the 1994 DSPx Exposition & Symposium, June 13-15, San Francisco (USA), 1994.
[5] "COMSAT Test Report on the PCS1900 EFR Codec," Source: Nokia, Temporary Document 47/95 of ETSI SMG2 SEC, June 1995.

Exhibit 8

A GOOD JOB WELL DONE: THE LATEST WIRELESS CODECS DELIVER WIRELINE QUALITY

Leigh A. Thorpe & Paul V. Coverdale
Nortel, Ottawa, Canada

Abstract

The results of a listening test conducted at Nortel on behalf of the CDMA Development Group (CDG) suggest that the new crop of wireless codecs are all able to deliver voice quality comparable to wireline with single encoding over a clean wireless transmission channel. The codecs were tested with clean speech, varying input level, background noise, and tandem encoding. Given these findings, we hope to provoke discussion among the audience about the next challenges facing speech coding researchers and codec designers. Potential issues: codec robustness to errors, noise reduction, and very-low bit rate codecs for a variety of applications.
Introduction

How good is the general quality of the latest codecs intended for the three major wireless technologies? How does that quality compare to the quality of wireline equipment? Listening tests were conducted to evaluate the performance of the current standard codecs for the three major wireless technologies. The codecs tested are the latest standards for CDMA, North American TDMA, and GSM (PCS1900). The tests examined performance with clean speech and robustness to input impairments and tandem encoding.

Method

Four codecs intended for wireless and two codecs used in wireline networks were evaluated. Clean source speech and speech with input impairments was processed through each codec in a non-real-time software simulation. Ratings were gathered from listeners in an absolute category rating (ACR) procedure [1].

Test and reference codecs

The codecs listed below were used in the tests. µ-law PCM and ADPCM were included to represent the range of wireline quality. ADPCM at 32 kb/s is generally taken to represent the lower boundary of wireline quality.

Test codecs:
TDMA EFRC: 8 kb/s enhanced full-rate codec (EFRC) for TDMA (IS-641).
CDMA EVRC: the new 8 kb/s variable-rate coding standard for CDMA systems. (This codec includes a noise reduction algorithm applied to the input speech signal.)
CDMA Q13: the 13 kb/s variable-rate codec proposed by the CDG for CDMA systems.
GSM EFRC: 13 kb/s enhanced full-rate codec (EFR) for GSM and PCS 1900.

Reference codecs:
µ-law PCM: G.711 at 64 kb/s. Used in North American digital wireline switching and transmission.
ADPCM: G.726 at 32 kb/s. Used on many international wireline calls and private networks; also in CT2 and DECT wireless standards.

Test cases

The test cases examined the contribution of varying input level, background noise level and degradation from asynchronous tandem operation.
(Because of the difficulty of defining comparable channel degradation for the various wireless systems, we did not try to compare the effects of transmission impairments across wireless platforms.) The test cases included the following:

Clean speech: SNR > 45 dB; -20 dBm0 input level.
Input levels: SNR > 45 dB; -10 dBm0 and -30 dBm0.
Background noise: car interior noise at 10 and 20 dB SNR; street noise at 15 dB SNR; babble at 20 dB SNR; and Hoth noise at 20 dB SNR (white noise filtered to model long-term average room noise).
Asynchronous tandem: clean speech input; nominal level. The source file is encoded, decoded, converted to analog, re-digitized, and encoded and decoded a second time.

Additional reference cases processed through the modulated noise reference unit [2] were also included in the session. Samples with dB Q values of 5, 10, 15, 20, 25, 30 and 35 were used.

Preparation of source files and test samples

Source speech was prepared from high-quality 16-bit linear PCM recordings from each of four talkers (two men and two women). To maintain equivalent speech levels for each sample, the filtered source files were equalized for speech power as determined by the P.56 algorithm (SV6) [3]. The samples were then filtered with the modified IRS transmit filter [1] and processed through µ-law PCM before processing through the test codecs to simulate the effects of the wireline portion of a connection. For speech-plus-noise samples, the filtered, equalized speech was mixed with filtered noise scaled to the appropriate level to achieve the intended signal-to-noise ratio.

Figure 1: Mean ratings given to the clean speech case for each of the test and reference codecs (µ-law PCM, ADPCM, EVRC, Q13, TDMA EFRC, GSM EFRC; rating scale 1 = bad to 5 = excellent). The results for all of the wireless codecs fall between those for µ-law PCM and ADPCM, often taken as defining the range of wireline quality.

The chart shows the results for the clean speech/clean transmission test case.
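The noise mixing and MNRU reference conditions described above follow standard recipes, sketched below under stated simplifications: power is plain mean-square rather than the P.56 active-speech-level measure actually used, the MNRU follows the basic multiplicative-noise definition of Recommendation P.81, and the function names are illustrative, not from any published tool set.

```python
import math
import random

def mix_to_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio is `snr_db`, then mix.

    Uses mean-square power; a real harness would measure active speech
    level per P.56 instead.
    """
    p_s = sum(x * x for x in speech) / len(speech)
    p_n = sum(x * x for x in noise) / len(noise)
    gain = math.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return [s + gain * n for s, n in zip(speech, noise)]

def mnru(speech, q_db, rng=None):
    """Basic MNRU sketch: add speech-correlated noise at Q dB (cf. P.81).

    y[i] = x[i] * (1 + 10^(-Q/20) * N(0, 1)): louder speech gets
    proportionally louder modulated noise.
    """
    rng = rng or random.Random(0)
    a = 10 ** (-q_db / 20)
    return [x * (1 + a * rng.gauss(0.0, 1.0)) for x in speech]

# Example: a toy 400 Hz "speech" tone at 8 kHz, mixed at 15 dB SNR,
# plus an MNRU reference condition at Q = 20 dB.
speech = [math.sin(2 * math.pi * 400 * t / 8000) for t in range(8000)]
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(8000)]
noisy = mix_to_snr(speech, noise, 15.0)
ref_q20 = mnru(speech, 20.0)
```

The MNRU's distinguishing property is that its distortion is multiplicative: during silence it is silent, which makes it a better perceptual anchor for codec distortion than additive noise of fixed level.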
All of these codecs were found to provide voice quality better than ADPCM. In addition, Q13, EVRC, and GSM EFRC showed voice quality equivalent to or nearly equivalent to µ-law PCM with clean input speech.

Results for cases with background noise showed that all the test codecs performed as well as or better than ADPCM with all the noise types tested. Because of its on-board noise reduction algorithm, the EVRC performed better than any other codec in the noise cases. Finally, voice quality remains acceptable even with asynchronous tandem processing.

These data demonstrate clearly that the latest codecs for each of the major wireless standards (Q13 and IS-127 for CDMA, IS-641 for TDMA, and EFRC for GSM) can deliver voice quality equivalent to wireline under good transmission conditions. Given these findings, we hope to provoke discussion among the audience about the next challenges facing speech coding researchers and codec designers. These perhaps include: robustness to transmission impairments, noise reduction, and the development of half-rate codecs with wireline-equivalent quality for wireless and other applications.

Source files were processed through each codec in a non-real-time software simulation. Asynchronous tandem processing was done in digital simulation using up-sampling, applying an all-pass filter, and down-sampling again [4].

Listening test procedure

Speech samples for each test case were rated by 60 typical telephone users in a carefully controlled listening test. Listeners rated the overall audio quality of the sample on a 5-point scale, defined as 1-bad, 2-poor, 3-fair, 4-good, and 5-excellent. All listeners rated every test sample once in a classic repeated-measures design. Such a design allows variation due to differences in subject rating criteria to be partialled out in an analysis of variance.

The test samples were played back to listeners over one channel of high-fidelity headphones.
The playback signal was filtered to simulate an ideal handset receiver response, and was presented at 79 dB SPL. Listeners were told to wear the headphones with the live speaker at their telephone ear.

Listeners were run in groups of three. Each group received a different randomization of the test samples. Randomization was done using a randomized block design, with each test case presented once in each of four blocks. This controls both for effects of order of presentation and for criterion shifts due to effects of familiarization and fatigue. The randomization was also constrained so that samples from the same talker were never presented consecutively.

Listeners were given written instructions to read at the beginning of the listening session, and completed a short practice session before the test session began. The whole session took about one hour to complete.

Results

Listener ratings for each test case were averaged to obtain the Mean Opinion Score (MOS). Data were collapsed over talkers in computing these means. In an experiment with 60 listeners and 240 judgments per test case, differences of greater than about 0.1 MOS are statistically reliable.

References

[1] Recommendation P.830 (1996). Subjective performance assessment of telephone-band and wideband digital codecs. Geneva: International Telecommunication Union.
[2] Recommendation P.81 (1993). Modulated Noise Reference Unit (MNRU). Geneva: International Telecommunication Union.
[3] Recommendation G.191 (1993). Software tools for speech and audio coding standardization. Geneva: International Telecommunication Union.
[4] Recommendation P.56 (1993). Objective measures of active speech level. Geneva: International Telecommunication Union.

Exhibit 9

Exhibit 10


