Apple Computer Inc. v. Burst.com, Inc.

Filing 108

Declaration of Allen Gersho in Support of 107 Response of Burst.com, Inc.'s Opposition to Plaintiff Apple Computer, Inc.'s Motion for Summary Judgment on Invalidity Based on Kramer and Kepley Patents filed by Burst.com, Inc. (Attachments: # 1 Exhibit A to A. Gersho Declaration # 2 Exhibit B to A. Gersho Declaration # 3 Exhibit C to A. Gersho Declaration # 4 Exhibit D to A. Gersho Declaration # 5 Exhibit E to A. Gersho Declaration # 6 Exhibit F to A. Gersho Declaration (Part 1) # 7 Exhibit F to A. Gersho Declaration (Part 2) # 8 Exhibit G to A. Gersho Declaration # 9 Exhibit H to A. Gersho Declaration # 10 Exhibit I to A. Gersho Declaration (Part 1) # 11 Exhibit I to A. Gersho Declaration (Part 2) # 12 Exhibit J to A. Gersho Declaration # 13 Exhibit K to A. Gersho Declaration # 14 Exhibit L to A. Gersho Declaration # 15 Exhibit M to A. Gersho Declaration # 16 Exhibit N to A. Gersho Declaration # 17 Exhibit O to A. Gersho Declaration) (Related document(s) 107) (Crosby, Ian) (Filed on 6/7/2007)

topic of this section. The name follows because the encoder output is "fed back" for use in selecting the new codebook. A feedback vector quantizer can be viewed as the vector extension of a scalar adaptive quantizer with backward estimation (AQB) [3]. The second approach is the vector extension of a scalar adaptive quantizer with forward estimation (AQF) and is called simply adaptive vector quantization. Adaptive VQ will be considered in a later section. Observe that systems can combine the two techniques and use both feedback and side information. We also point out that unlike most scalar AQB and AQF systems, the vector analogs considered here involve no explicit estimation of the underlying densities.

It should be emphasized that the results of information theory imply that VQ's with memory can do no better than memoryless VQ's in the sense of minimizing average distortion for a given rate constraint. In fact, the basic mathematical model for a data compression system in information theory is exactly a memoryless VQ, and such codes can perform arbitrarily close to the optimal performance achievable using any data compression system. The exponential growth of computation and memory with rate, however, may result in nonimplementable VQ's. A VQ with memory may yield the desired distortion with practicable complexity.

A general feedback VQ can be described as follows [22]: Suppose that we have a space S whose members we shall call states and that for each state s in S we have a separate quantizer: an encoder γ_s, a decoder β_s, and a codebook C_s. The channel codeword space M is assumed to be the same for all of the VQ's. Consider a data compression system consisting of a sequential machine such that if the machine is in state s, then it uses the quantizer with encoder γ_s and decoder β_s.
It then selects its next state by a mapping called a next-state function or state transition function f such that, given a state s and a channel symbol v, f(v, s) is the new state of the machine. More precisely, given a sequence of input vectors {x_n; n = 0, 1, 2, ...} and an initial state s_0, the subsequent state sequence s_n, channel symbol sequence v_n, and reproduction sequence x̂_n are defined recursively for n = 0, 1, 2, ... as

v_n = γ_{s_n}(x_n),  x̂_n = β_{s_n}(v_n),  s_{n+1} = f(v_n, s_n).  (5)

[Figure 10. Feedback VQ. At time n both encoder and decoder are in a common state s_n. The encoder uses a state VQ γ_{s_n} to encode the input vector and then selects a new state for the next input vector. Knowing the VQ used and the resulting channel symbol, the decoder can produce the correct reproduction.]

Note that the state VQ's may be computed at each time from some rule or, if they are small in number, simply stored separately. Since the next state depends only on the current state and the channel codeword, the decoder can track the state if it knows the initial state and the channel sequence. A general feedback vector quantizer is depicted in Fig. 10. The freedom to use different quantizers based on the past without increasing the rate should permit the code to perform better than a memoryless quantizer of the same dimension and rate.

An important drawback of all feedback quantizers is that channel errors can accumulate and cause disastrous reconstruction errors. As with scalar feedback quantizer systems, this must be handled by periodic resetting, by error control, or by a combination of the two. If the state space is finite, then we shall call the resulting system a finite-state vector quantizer or FSVQ. For an FSVQ, all of the codebooks and the next-state transition table can be stored in ROM, making the general FSVQ structure amenable to LSI or VLSI implementation [43].
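As a concrete illustration, the recursion of Eq. (5) can be sketched in a few lines of Python. This is a minimal sketch under the squared-error distortion; the function names (`fsvq_encode`, `fsvq_decode`) and the toy two-state code in the usage example are our own illustrative assumptions, not from the article.

```python
import numpy as np

def nearest(codebook, x):
    """Minimum squared-error encoder: index of the closest codeword."""
    d = ((codebook - x) ** 2).sum(axis=1)
    return int(np.argmin(d))

def fsvq_encode(x_seq, codebooks, next_state, s0=0):
    """Encode a sequence of input vectors; only channel symbols are emitted."""
    s, symbols = s0, []
    for x in x_seq:
        v = nearest(codebooks[s], x)      # v_n = gamma_{s_n}(x_n)
        symbols.append(v)
        s = next_state[s][v]              # s_{n+1} = f(v_n, s_n)
    return symbols

def fsvq_decode(symbols, codebooks, next_state, s0=0):
    """The decoder tracks the state from s0 and the channel symbols alone."""
    s, recon = s0, []
    for v in symbols:
        recon.append(codebooks[s][v])     # xhat_n = beta_{s_n}(v_n)
        s = next_state[s][v]
    return np.array(recon)
```

Because `next_state` depends only on the current state and the channel symbol, encoder and decoder stay synchronized with no side information, exactly as the text argues.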
Observe that a memoryless vector quantizer is simply a feedback vector quantizer or finite-state vector quantizer with only a single state. The general FSVQ is a special case of a tracking finite state source coding system [44] where the encoder is a minimum distortion mapping.

[APRIL 1984 IEEE ASSP MAGAZINE]

Three design algorithms for feedback vector quantizers using variations on the generalized Lloyd algorithm have been developed recently. The remainder of this section is devoted to brief descriptions of these techniques.

Vector predictive quantization

Cuperman and Gersho [45,46] proposed a vector predictive coder or vector predictive quantizer (VPQ), which is a vector generalization of DPCM or predictive quantization. A VPQ is sketched in Fig. 11. For a fixed predictor, the VQ design algorithm is used to design a VQ for the prediction error sequence. Cuperman and Gersho considered several variations on the basic algorithm, some of which will be mentioned later.

Chang [47] developed an extension to Cuperman and Gersho's algorithm which begins with their system and then uses a stochastic gradient algorithm to iteratively improve the vector linear predictor coefficients, that is, to better match the predictor to the quantizer. The stochastic gradient algorithm is used only in the design of the system, not as an on-line adaptation mechanism as in the adaptive gradient algorithms of, e.g., Gibson et al. [48] and Dunn [49]. A scalar version of this algorithm for improving the predictor for the quantizer was developed in unpublished work of Y. Linde.

[Figure 11. Vector Predictive Quantization. A linear vector predictor for the next input vector of a process given the previous input vector is applied to the previous reproduction of the input vector. The resulting prediction is subtracted from the current input vector to form an error vector, which is vector quantized. The decoder uses a copy of the encoder and the received encoded error vectors to construct the reproduction.]

Product/multistep FVQ

A second basic approach for designing feedback vector quantizers which is quite simple and works quite well is to use a product multistep VQ such as the gain/shape VQ or the separating mean VQ and use a simple feedback quantizer on the scalar portion and an ordinary memoryless VQ on the remaining vector. This approach was developed in [10] for gain/shape VQ of LPC parameters and in [37] for separating mean VQ of images. Both efforts used simple scalar predictive quantization for the feedback quantization of the scalar terms.

The first general design technique for finite-state vector quantizers was reported by Foster and Gray [50,51]. There are two principal design components:

1. Design an initial set of state codebooks and a next-state function using an ad hoc algorithm.
2. Given the next-state function, use a variation of the basic algorithm to attempt to improve the state codebooks.

The second component is accomplished by a slight extension of the basic algorithm that is similar to the extension of [52] for the design of trellis encoders: Encode the data using the FSVQ and then replace all of the reproduction vectors by the centroids of the training vectors which map into those vectors; now, however, the centroids are conditioned on both the channel symbol and the state. While such conditional averages are likely impossible to compute analytically, they are easily computed for a training sequence. For example, in the case of a squared error distortion one simply forms the Euclidean centroid of all input vectors which correspond to an encoding of the state s and channel symbol v in the training sequence.
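The conditional centroid step just described can be sketched directly from a training run. The sketch below assumes squared error, so the centroid is the Euclidean mean; `conditional_centroids` is an invented name for illustration.

```python
import numpy as np
from collections import defaultdict

def conditional_centroids(train, states, symbols, codebooks):
    """Replace each codeword beta_s(v) by the Euclidean centroid of all
    training vectors that were encoded while in state s with symbol v."""
    groups = defaultdict(list)
    for x, s, v in zip(train, states, symbols):
        groups[(s, v)].append(x)
    new_books = {s: cb.copy() for s, cb in codebooks.items()}
    for (s, v), vecs in groups.items():
        new_books[s][v] = np.mean(vecs, axis=0)   # conditional centroid
    return new_books
```

Codewords for (state, symbol) pairs that never occur in the training sequence are simply left unchanged, one reasonable convention among several.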
As with ordinary VQ, replacing the old decoder or codebook by centroids cannot yield a code with larger distortion. Unlike memoryless VQ, however, replacing the old encoder by a minimum distortion rule for the new decoder can in principle cause an increase in distortion, and hence the iteration is now somewhat different: Replace the old encoder (which is a minimum distortion rule for the old decoder) by a minimum distortion rule for the new decoder. If the distortion goes down, then continue the iteration and find the new centroids. If the distortion goes up, then quit with the encoder being a quantizer for the previous codebook and the decoder being the centroids for the encoder. By construction this algorithm can only improve performance. It turns out, however, that in practice it is a good idea not to stop the algorithm if the distortion increases slightly, but to let it continue: it will almost always eventually drop back down in distortion and converge to something better.

The first design component is more complicated. We here describe one of the more promising approaches of [51], called the omniscient design approach. Say that we wish to design an FSVQ with K states and rate R bits per vector. For simplicity we label the states as 0 through K-1. First use the training sequence to design a memoryless VQ with K codewords, one for each state. We shall call these codewords state labels and this VQ the state quantizer. We call the output of the state VQ the "ideal next state" instead of a channel symbol. Next break up the training sequence into subsequences as follows: Encode the training sequence using the state VQ and for each state s collect all of the training vectors which follow the occurrence of this state label. Thus for s the corresponding training subsequence consists of all input vectors that occur when the current ideal state is s. Use the basic algorithm to design a rate R codebook C_s for the corresponding training subsequence for each s.

The resulting state VQ and the collection of codebooks for each state have been designed to yield good performance in the following communication system: The encoder is in an ideal state s chosen by using the state VQ on the last input vector. The encoder uses the corresponding VQ encoder γ_s described by the codebook C_s. The output of γ_s is the channel symbol. In order to decode the channel symbol, the decoder must also know the ideal state. Unfortunately, however, this ideal state cannot be determined from knowledge of the initial state and all of the received channel symbols.
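The subsequence-splitting step of the omniscient design can be sketched as follows. We assume squared error for the state quantizer; the helper names (`nearest`, `split_by_ideal_state`) are ours.

```python
import numpy as np

def nearest(codebook, x):
    """Minimum squared-error index into a codebook."""
    d = ((codebook - x) ** 2).sum(axis=1)
    return int(np.argmin(d))

def split_by_ideal_state(train, state_labels):
    """Assign the training vector at time n+1 to the subsequence of the
    'ideal next state' chosen by encoding the vector at time n with the
    state VQ (whose codewords are the state labels)."""
    subseqs = {s: [] for s in range(len(state_labels))}
    for prev, cur in zip(train[:-1], train[1:]):
        s = nearest(state_labels, prev)   # ideal next state from prev input
        subseqs[s].append(cur)
    return subseqs
```

Each subsequence is then handed to the basic (generalized Lloyd) algorithm to produce the rate R codebook C_s for its state.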
Thus the decoder must be omniscient in the sense of knowing this additional side information in order to be able to decode. In particular, this system is not an FSVQ by our definition. We can use the state quantizer and the various codebooks, however, to construct an FSVQ by approximating the omniscient system: Instead of forming the ideal next state by using the state VQ on the actual input vector (as we did in the design procedure), use the state VQ on the current reproduction vector in order to choose the next state. This encoder will yield a state sequence depending only on encoder outputs and the original state and hence will be trackable by the decoder. This is analogous to the scalar practice of building a predictive coder by choosing the predictor as if it knew the past inputs, but in fact applying it to past reproductions.

Combining the previously described steps of (1) initial (state label) codebook design, (2) state codebook and next-state function design, and (3) iterative improvement of the code for a given next-state function provides a complete design algorithm.

[Figure 12. Decoder trellis for a two-state 1 bit per vector two-dimensional waveform coder. The trellis depicts the possible state transitions for the given next-state function. The transitions are labeled by the corresponding decoder output (in parentheses) and the channel symbol produced by the encoder.]

In addition to the above design approach, techniques have been developed for iterating on (2) and (3) above in the sense of optimizing the next-state function for a given collection of codebooks.
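Because the reproduction β_s(v) is known to both encoder and decoder, the trackable next-state function of this approximation can be tabulated once at design time. A minimal sketch (our names; squared error assumed):

```python
import numpy as np

def nearest(codebook, x):
    """Minimum squared-error index into a codebook."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def build_next_state(state_labels, codebooks):
    """f(v, s) = the state label nearest the reproduction beta_s(v).
    Since it depends only on (s, v), the decoder can track the state."""
    return {s: {v: nearest(state_labels, cb[v]) for v in range(len(cb))}
            for s, cb in codebooks.items()}
```

The resulting table is exactly the kind of next-state ROM contents mentioned earlier for LSI/VLSI implementation.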
These algorithms, however, are more complicated and require ideas from the theory of adaptive stochastic automata. The reader is referred to [53] for a discussion of these improvement algorithms.

VECTOR TREE AND TRELLIS ENCODERS

As with scalar feedback quantizers, the actions of the decoder of a feedback VQ can be depicted as a directed graph or tree. A simple example is depicted in Fig. 12, where a merged tree or trellis can be drawn since the feedback VQ has only a finite number of states. Instead of using the ordinary VQ encoder, which is only permitted to look at the current input vector in order to decide on a channel symbol, one could use algorithms such as the Viterbi algorithm, M-algorithm or (M,L)-algorithm, Fano algorithm, or stack algorithm for a minimum cost search through a directed graph and search several levels ahead into the tree or trellis before choosing a channel symbol. This introduces additional delay into the encoding of several vectors, but it ensures better long run average distortion behavior. This technique is called tree or trellis encoding and is also referred to as look-ahead coding, delayed decision coding, and multipath search coding. (See, e.g., [54,52] for surveys.) We point out that a tree encoding system uses a tree to denote the operation on successive vectors by the decoder at successive times, while a tree-searched VQ uses a tree to construct a fast search for a single vector at a single time.

A natural variation of the basic algorithm for designing FSVQ's can be used to design trellis encoding systems: Simply replace the FSVQ encoder, which finds the minimum distortion reproduction for a single input vector, by a Viterbi or other search algorithm which searches the decoder trellis to some fixed depth to find a good long term minimum distortion path. The centroid computation is accomplished exactly as with an FSVQ: each branch or transition label is replaced by the centroid of all training vectors causing that transition, that is, the centroid conditioned on the decoder state and channel symbol. Scalar and simple two dimensional vector trellis encoding systems were designed in [52] using this approach.

Trellis encoding systems are not really vector quantization systems as we have defined them, since the encoder is permitted to search ahead to determine the effect on the decoder output of several input vectors while a vector quantizer is restricted to search only a single vector ahead. The two systems are intimately related, however, and a trellis encoder can always be used to improve the performance of a feedback vector quantizer. Very little work has yet been done on vector trellis encoding systems.
ADAPTIVE VQ

As a final class of VQ we consider systems that use one VQ to adapt a waveform coder, which might be another VQ. The adaptation information is communicated to the receiver via a low rate side information channel.

The various forms of vector quantization using the Itakura-Saito family of distortion measures can be considered as model classifiers, that is, they fit an all-pole model to an observed sequence of sampled speech. When used alone in an LPC VQ system, the model is used to synthesize the speech at the receiver. Alternatively, one could use the model selected to choose a waveform coder designed to be good for sampled waveforms that produce that model. For example, analogous to the omniscient design of FSVQ, one could design separate VQ's for the subsequences of the training sequence encoding into common models. Both the model index and the waveform encoding index are then sent to the receiver. Thus LPC VQ can be used to adapt a waveform coder, possibly also a VQ or related system. This will typically yield a system of much higher rate, but potentially of much better quality, since the codebooks can be matched to the local behavior of the data. The general structure is shown in Fig. 13. The model VQ typically operates on a much larger vector of samples and at a much lower rate in bits per sample than does the waveform coder, and hence the bits spent on specifying the model through the side channel are typically much fewer than those devoted to the waveform coder.

There are a variety of such possible systems since both the model quantizer and the waveform quantizer can take on many of the structures so far considered. In addition, as in speech recognition applications [55], the gain-independent variations of the Itakura-Saito distortion measure which either normalize or optimize gain may be better suited for the model quantization than the usual form. Few such systems have yet been studied in detail. We here briefly describe some systems of this type that have appeared in the literature to exemplify some typical combinations. All of them use some form of memoryless VQ for the model quantization, but a variety of waveform coders are used.
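One possible rendering of this common structure is sketched below, substituting squared error for the Itakura-Saito distortion purely for brevity; all names (`adaptive_encode`, the codebook layout) are hypothetical.

```python
import numpy as np

def nearest(codebook, x):
    """Minimum squared-error index into a codebook."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def adaptive_encode(frame, model_codebook, waveform_codebooks, veclen):
    """Classify the whole frame with the model VQ (index goes out on the
    side channel), then encode its sample vectors with the waveform VQ
    matched to the selected model."""
    m = nearest(model_codebook, frame)            # model index, side info
    wcb = waveform_codebooks[m]
    symbols = [nearest(wcb, frame[i:i + veclen])
               for i in range(0, len(frame), veclen)]
    return m, symbols
```

The side-channel cost is one model index per frame, while the waveform symbols dominate the rate, matching the rate split described in the text.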
The first application of VQ to adaptive coding was by Adoul, Debray, and Dalle [32], who used an LPC VQ to choose a predictor for use in a scalar predictive waveform coder. Vector quantization was used only for the adaptation and not for the waveform coding. An adaptive VQ generalization of this system was later developed by Cuperman and Gersho [45,46], who used an alternative classification technique to pick one of three vector predictors and then used those predictors in a predictive vector quantizer. The predictive vector quantizer design algorithm previously described was used, except now the training sequence was broken up into subsequences corresponding to the selected predictor and a quantizer was designed for each resulting error sequence. Chang [47] used a similar scheme with an ordinary LPC VQ as the classifier and with a stochastic gradient algorithm run on each of the vector predictive quantizers in order to improve the prediction coefficients for the corresponding codebooks.

[Figure 13. Adaptive VQ. The model VQ uses the Itakura-Saito distortion to select an LPC model to fit the input frame of many sample vectors. This selection in turn determines the waveform coder used to digitize the sample vectors. A side channel then informs the receiver which decoder to use on the channel symbols produced by the waveform coder.]

Rebolledo et al. [56] and Adoul and Mabilleau [57] developed vector residual excited linear predictive (RELP) systems. (See Fig. 14.) A similar system employing either a scalar or a simple vector trellis encoder for the waveform coder was developed by Stewart et al. [52]. Both of these systems used the basic algorithm to design both the model VQ and the waveform coders. The RELP VQ systems yielded disappointingly poor performance at low bit rates. Significantly better performance was achieved by using the residual codebooks produced in the RELP design to construct codebooks for the original waveform, that is, instead of coding the model and the residual, code the model and use the selected model to construct a waveform coder for the original waveform as depicted in Fig. 15 [52]. For lack of a better name, this system might be called an inverted RELP because it uses residual codebooks to drive an inverse model filter in order to get a codebook for the original waveform.

[Figure 14. RELP VQ. An LPC VQ is used for model selection and a single VQ to waveform encode the residuals formed by passing the original waveform through the inverse filter A/σ. The side information specifies to the decoder which of the model filters σ/A should be used for synthesis.]

Yet another use of LPC VQ to adapt a waveform coder was reported by Heron, Crochiere, and Cox [58], who used a subband/transform coder for the waveform coding and used the side information to adapt the bit allocation for the scalar parameter quantizers. Many other variations on the general theme are possible, and the structure is a promising one for processes such as speech that exhibit local stationarity, that is, slowly varying short term statistical behavior. The use of one VQ to partition a training sequence in order to design good codes for the resulting distinct subsequences is an intuitive approach to the computer-aided design of adaptive data compression systems.
[Figure 15. Inverted RELP. An LPC VQ is used to select a model filter σ/A. A waveform codebook is then formed by driving the model filter with all possible residual codewords from a RELP VQ design. Thus, unlike a RELP system, the original waveform (and not a residual) is matched by possible reproduction codewords.]

EXAMPLES

We next consider the performance of various forms of vector quantizers on three popular guinea pigs: Gauss Markov sources, speech waveforms, and images. For the speech coding example we consider both waveform coders using the squared error distortion measure and vocoders using the Itakura-Saito distortion. The caveats of the introduction should be kept in mind when interpreting the results.

The performance of the systems is given by SNR's for squared error and by an analogous quantity for the Itakura-Saito distortion: In both cases we measure normalized average distortion on a logarithmic scale, where the normalization is by the average distortion of the optimum zero rate code, that is, the average distortion between the input sequence and the centroid of the entire input sequence. This quantity reduces to an SNR in the squared error case and provides a useful dimensionless normalized average distortion in general. We call this quantity the SNR in both cases. The SNR is given in tables instead of graphs in order to facilitate quantitative comparisons among the coding schemes.

Gauss Markov sources

We first consider the popular guinea pig of a Gauss Markov source. This source is useful as a mathematical model for some real data sources, and its information theoretic optimal performance bounds as described by the distortion-rate function are known. For this example we consider only the squared error distortion. A Gauss Markov source or first order Gauss autoregressive source {X_n} is defined by the difference equation X_{n+1} = aX_n + W_n, where {W_n} is a zero mean, unit variance, independent and identically distributed Gaussian source. We here consider the highly correlated case of a = 0.9 and vector quantizers of 1 bit/sample.
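The source and the normalized SNR just defined can be sketched as follows. We assume squared error; the function names are ours, not from the article.

```python
import numpy as np

def gauss_markov(n, a=0.9, seed=0):
    """X_{n+1} = a * X_n + W_n with W_n i.i.d. N(0, 1)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + w[i - 1]
    return x

def snr_db(x, xhat):
    """Average distortion normalized by the distortion of the optimum
    zero-rate code (the centroid of the input), on a log scale."""
    d0 = np.mean((x - x.mean()) ** 2)
    d = np.mean((x - xhat) ** 2)
    return 10.0 * np.log10(d0 / d)
```

In the squared error case this is the usual SNR; for other distortion measures the same ratio gives the dimensionless normalized distortion the text describes.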
The maximum achievable SNR as given by Shannon's distortion-rate function for this source and rate is 13.2 dB [7]. Various design algorithms were used to design vector quantizers of several dimensions for this source. Table I describes the results of designing several memoryless vector quantizers for a training sequence of 60,000 samples. Given are the design SNR (code performance on the training sequence), the number of multiplications per sample required by the encoder, and the number of real scalars that must be stored for the encoder codebook. The number of multiplications is used as a measure of encoder complexity because it is usually the dominant computation and because the number of additions required is usually comparable.

[Table I. Memoryless VQ for a Gauss Markov source. Signal to Noise Ratios (SNR), number of multiplications per sample (n), and storage requirements (M) of memoryless vector quantizers: full search memoryless VQ (VQ), binary tree-searched VQ (TSVQ), binary multistage VQ (MVQ), and gain/shape VQ (G/SVQ). Rate = 1 bit/sample, k = vector dimension. Training sequence = 60,000 samples from a Gauss Markov source with correlation coefficient 0.9.]
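The complexity accounting used in the tables (multiplications per sample equal to the number of codewords searched) can be made concrete. The sketch below assumes a full-search encoder with squared error; the helper names are ours.

```python
import numpy as np

def full_search_encode(codebook, x):
    """Full-search VQ: N codewords of dimension k cost about N*k
    multiplications per input vector. Returns (index, mult count)."""
    d = ((codebook - x) ** 2).sum(axis=1)
    return int(np.argmin(d)), codebook.shape[0] * codebook.shape[1]

def mults_per_sample(rate_bits, k):
    """N*k multiplications per vector over k samples = N = 2**(R*k)
    codewords searched per sample for a full-search VQ at rate R."""
    return 2 ** (rate_bits * k)
```

At 1 bit/sample this gives n = 2^k, which is why the full-search complexity column doubles with each added dimension while tree-searched and product codes grow far more slowly.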
It is given by n = (the number of codewordssearched) X (dimension)/(dimension) = the number of codewords searched.Theactualstoragerequired dependson the number bytes usedto store of each floating point number. Many (but not all) of the final codes were subsequently tested on different test sequences of 60,000 samples. In all cases the opentest SNR's were within -25 dB of the design distortion.The systems considered are full search VQ's [251, binary tree-searched VQ`s [591, binary multistageVQ's [47], and gainishape VQ`s [36]. The gain and codebook sizes for the gainishape codes were experimentally optimized. As expected, the full search VQ yields the best peris formance foreach dimension, but the tree-searched VQ not much worse and has a much lower complexity. The 1 multistage VQ i s noticeably inferior, losing more than dB at the higher dimensions, but i t s memory requirements aresmall.Thegainishape VQ compares poorly on the it basis of performance vs. rate for a fixed dimension, but is the best code in the sense of providing the minimum distortion for a fixed complexity and rate. For larger rates and lower distortion the relative merits may be quite different. For example, the multistage VQ is then capable of better performance relative to the ordinary VQ since the quantization errors in the various stages do notaccumulate so rapidly. (See, e.g., [341.)Thus in this 20 IEEE ASSP MAGAZINE APRIL 1984 Case 3:06-cv-00019-MHP TABLE I l l Document 108-8 Filed 06/07/2007 Page 7 of 15 MEMORYLESS VQ DF SAMPLEDSPEECH. 
9 k SNRin n SNRout 1 2.0 2.1 2 5.2 5.3 3 6.1 6.0 4 7.1 7.0 5 7.9 7.6 6 8.5 8.1 7 9.1 8.4 8 9.7 8.8 2 4 8 16 32 64 128 256 M SNRin n SNRout 2 2.0 2.1 8 5.1 5.1 24 5.5 5.5 64 6.4 6.4 160 7.1 6.9 384 7.9 7.5 896 8.3 7.8 2048 8.9 8.0 2 4 6 8 10 12 14 16 M 2 12 42 120 310 756 1778 4080 ' 1 MVQ k SNRin SNRout n 2 1 2.0 2.1 2 4.3 4.4 4 6 34.4 4.3 4 4.4 4.5 8 5 5.0 5.0 10 12 6 5.0 4.9 14 7 5.3 5.1 8.8 8 5.6 5.5 12816 9 M WVQ SNRin SNRout n M 2 8 18 4.6 32 50 72 98 4.5 6.0 7.2 7.7 8.2 11 IO 9.3 9.8 10.4 4 14 6.1 4 20 6.9 844 7.4 16 100 7.7 16 120 8.1 264 32 8.5 584 64 8.9 128 1288 9.3 2824 256 Signal t o Noise Ratios inside training sequence ISNRin3of 640000 speech samples, Signal t o Noise Ratios outside training sequence [SNRoutI of 76800 speech samples, number of multiplications per sample [nl, and storage requirements of memoryless vector quantizers: full search memoryless VQ [VQI, binarytree-searched [TSVQI, binarymultistage V Q [MVQI, and gain/shape V Q (G/SVQI. Rate = 1 bit/sample. k = vector dimension, of the trainingsequence for these codes was significantly inferior, by 1 to 2 dB for the larger dimensions. From the discussion of average distortion, this suggests thatthe training sequence was too short. Hence the secondFSVQ design (FSVQ2) was run witha larger trainingsequence of 128,000 samples and fewer states. The test sequence for these codes always yielded performance within .3 dB of the design value. The VPQ test performance with within .I dB of the designperformance. The scalar predictive quantizer performance and the codebook for the prediction error quantizer are the same as the analytically optimized predictive quantization system of Arnstein [60] run on the same data. Observe that the scalarFSVQ in the first experiment with 64 states yielded performance quite close to that of the scalar VPQ, which does not have a finite number of states. Intuitively the FSVQ is trying to approximate the infinite state machine by using a large number of states. 
less The VPQ, however, i s less complexandrequires memory and hence for this application is superior. For comparison, the best I bit/sample scalar trellis en11.25 dB for this coding system for this source yields system uses a block source [52]. Thetrellisencoding Viterbi algorithm with a search depth of 1000 samples for the encoder. It is perhaps surprising that in this example the VPQ and the FSVQ with the shortdelay of only4 samples can outperform a Viterbi algorithm with a delay of 1000 samples. It points out, however, two advantages of feedback VQ over scalar trellis encoding systems: 1 . The decoder is permitted to be a more general form of finitestate machine than the shift-registerbased nonlinear filter usually used in trellis encoding systems; and 2. the encoder performs asinglefullsearchofasmallvector codebook instead of a Viterbi algorithm consisting of a tree search of a sequence of scalar codebooks. In other words, single short vector searches may yield better performance than a "look ahead" sequence of searches of scalar codebooks. Speech waveform coding The second set of results considers a training sequence of 640,000 samples of ordinary speech from four different case multistage VQ may be far better because if its much male speakers sampledat 6.5 kHz. The reader is reminded smaller computational requirements. is not generallyasubjectivelygood thatsquarederror Table II presents results for three feedbackVQ's for the distortion measure for speech. Better subjective quality same source. In .addition to the parameters of Table I, may be obtained by using more complicated distortion the number of states for the FSVQ`s are given. The first measures such as the general quadratic distortion meaFSVQ and the VPQ were designed for the same training sures with input dependent weighting such as the arithsequence of 60,000 samples. Because of the extensive metic segmented distortions. 
The VQ design techniques computation required and the shortness of the training extend tosuch distortion measures, but the centroid comsequenceforafeedbackquantizer,onlydimensions putations are more complicated. (See [301 for the theory 1 through 4 were considered. The first FSVQ was designed and [45,461 for the application of input-weighted quadratic 1 bit per using the omniscient design approach for distortion measures,) sample, dimensions 1 through 4, and a variety of numbers Tables Ill and IV are the counterparts of Tables I and I I of states. For the first example, the number of states was for this source. Now, however, the SNR's of the codes on chosenbydesigning FSVQ`s for more and more states testsequences of samples outside of the training seuntilfirther increases yieldednegligibleimprovements quence(and bya differentspeaker) are presentedfor [SI]. It was found, however, that the performance outside comparison. In addition, some larger dimensions conare APRIL 1984 IEEE ASSP MAGAZINE 21 Case 3:06-cv-00019-MHP dimension in comparison to about Document 108-8 Filed 06/07/2007 Page IdB 15 the Gauss 8 of for FEEDBACK V Q OF SAMPLEDSPEECH. I SNRin SNRout K n M SNRin n SNRout M 2 8 2 4 8 24 16 64 I 2.0 32 ! 7.8 7.5 64!8.3 9.0 I 10.9 ; 12.2 2 2.0 2 2 512 9.4 10.8 2560 32 512 2.6 2.1 6.2 46.4 64 6.87.3 10 192 16 2048 , 7.6 8.0 Table V presents a comparisonof VQ and FSVQ for vecusing the ltakura-Saito distorquantizationofspeech tortion measure or, equivalently, vector quantization of idered because the longer training sequence made them LPC speech models [16,14,53]. The training sequence and more trustworthy. Again for comparison, the best known (nonadaptive) scalar trellisencodingsystemfor this source yields a performance of 9 dB [521. Here the trellis TABLE VI encoder uses the M-algorithm with asearch depth of ADAPTIVEVPQ. 31 samples. The general comparisons are similar t o those of the previous source, but thereare several differences. 
The tree-searched VQ is now more degraded in comparison to the full search VQ, and the multistage VQ is even worse, about 3 dB below the full search at the largest dimension, in comparison to about 1 dB for the Gauss-Markov case. The complexity and storage requirements are the same except for the shape/gain VQ, where different optimum selections of gain and shape codebook size yield different complexity and storage requirements. The VPQ of dimension 4 is inferior to the trellis encoder and the FSVQ of the same dimension. The four-dimensional FSVQ, however, still outperforms the scalar trellis encoder. Observe that an FSVQ of dimension 4 provides better performance inside and outside the training sequence than does a full search memoryless vector quantizer of dimension 8, achieving better performance with 16 4-dimensional distortion evaluations than with 512 8-dimensional distortion computations. The cost, of course, is a large increase in memory. This, however, is a basic point of FSVQ design: to use more memory but less computation.

Table IV caption: Signal to Noise Ratios inside the training sequence (SNRin) of 640,000 speech samples, Signal to Noise Ratios outside the training sequence (SNRout) of 76,800 speech samples, number of states (K), number of multiplications per sample (n), and storage (M) for feedback quantizers. Rate = 1 bit/sample; k = vector dimension.

LPC VQ (vocoding)

The training sequence and test sequence are as above, but now the input dimension is 128 samples and the output vectors are 10th order all-pole models. The training sequence is now effectively shorter since it contains only 5000 input vectors of this dimension. As a result the test results are noticeably different from the design results. Because of the shortness of the training sequence, only FSVQ's of small dimension and few states were considered. The table summarizes memoryless VQ and two FSVQ designs: the first FSVQ design was a straightforward application of the design technique outlined previously, and the second used the stochastic iteration next-state improvement algorithm of [53]. Observe that the next-state function improvement yields codes that perform better outside of the training sequence than do the ordinary FSVQ codes.

TABLE V. LPC VQ AND FSVQ WITH AND WITHOUT NEXT-STATE FUNCTION IMPROVEMENT. Signal to Noise Ratios inside the training sequence (SNRin) of 5000 vectors of 128 samples each and outside the training sequence (SNRout) of 600 vectors of 128 samples each: memoryless VQ, omniscient FSVQ design (FSVQ1), and omniscient FSVQ design with next-state function improvement (FSVQ2). K = number of states in FSVQ, R = rate in bits/vector, r = rate in bits/sample. Itakura-Saito distortion measure.

Gain/shape VQ's for this application are developed in [16] and [36]. Tree-searched LPC VQ is considered for binary and nonbinary trees in combination with gain/shape codes in [16] and [10].

Adaptive coding

Table VI presents the results of a simple example of an adaptive VQ, here consisting of an LPC VQ with 8 codewords every 128 samples combined with VPQ's of dimensions 1 through 4.

TABLE VI. ADAPTIVE VPQ.

k    SNRin    SNRout
1    4.34     4.12
2    7.47     7.17
3    8.10     7.67
4    8.87     8.30

Signal to Noise Ratios inside the training sequence (SNRin) of 5000 vectors and in the test sequence (SNRout) of 600 vectors; rate = 1.023 bits/sample.
Each of the 8 VPQ's is designed for the subsequence of training vectors mapping into the corresponding LPC VQ model [47]. The rate of this system is 1 + 3/128 = 1.023 bits/sample. The performance is significantly worse than the 10 dB achieved by a hybrid scalar trellis encoder of the same rate [52], but it improves on the nonadaptive VPQ by about 3/4 dB. Adaptive vector quantizers are still quite new, however, and relatively little work on the wide variety of possible systems has yet been done.

Image coding

In 1980-1982 four separate groups developed successful applications of VQ techniques to image coding [61,62,63,64,65,66,67,37]. The only real difference from waveform coding is that the VQ now operates on small rectangular blocks of from 9 to 16 pixels; that is, the vectors are really 2-dimensional subblocks of images, typically squares with 3 or 4 pixels on a side or 3 by 4 rectangles. We here consider both the basic technique and one variation. We consider only small codebooks of 6 bits per 4 x 3 block of 12 pixels for purposes of demonstration. Better quality pictures could be obtained at the same rate of 1/2 bit per pixel by using larger block sizes and hence larger rates of, say, 8 to 10 bits per block. Better quality could also likely be achieved with more complicated distortion measures than the simple squared error used here.

Fig. 16 gives the training sequence of five images. Fig. 17a shows a small portion of the fifth image, an eye, magnified. Fig. 17b is a picture of the 2^6 = 64 codewords. Fig. 17c shows the decoded eye. Fig. 18 shows the original, decoded image, and error image for the complete picture. The error image is useful for highlighting the problems encountered with the ordinary memoryless VQ. In particular, edges are poorly reproduced and the codeword edges make the picture appear "blocky."
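The blocking step described above, viewing an image as a sequence of 4 x 3 subblocks treated as 12-dimensional vectors, can be sketched as follows. The function name is ours, and the image dimensions are assumed divisible by the block dimensions.

```python
def image_to_blocks(img, bh, bw):
    """Split an image (a list of pixel rows) into non-overlapping
    bh x bw subblocks, each flattened row-by-row into a vector for VQ.
    Image height and width are assumed divisible by bh and bw."""
    H, W = len(img), len(img[0])
    blocks = []
    for r in range(0, H, bh):
        for c in range(0, W, bw):
            blocks.append([img[r + i][c + j]
                           for i in range(bh) for j in range(bw)])
    return blocks

# A 6 bit codebook on 12-pixel (4 x 3) blocks gives 6/12 = 1/2 bit per pixel.
bits_per_pixel = 6 / 12
```

Each flattened block is then encoded exactly like any other vector: a full search of the codebook for the minimum distortion codeword.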
This problem was attacked by Ramamurthi and Gersho [62,67] by constructing segmented (or union or composite) codes: separate codebooks for the edge information and the texture information, where a simple classifier was used to distinguish the two in design. In [37] a feedback vector quantizer was developed by using a separating mean VQ with a predictive scalar quantizer to track the mean. Fig. 19 shows the original eye, ordinary VQ, and the feedback VQ. The improved ability to track edges is clearly discernible. Fig. 20 shows the full decoded image for feedback VQ together with the error pattern.

Although image coding using VQ is still in its infancy, these preliminary experiments using only fairly simple VQ techniques with small memoryless and feedback codebooks demonstrate that the general approach holds considerable promise for such applications.

Figure 16. Image Training Sequence. The training sequence consisted of the sequence of 3 x 4 subblocks of the five 256 x 256 images shown.

Figure 17. Basic Image VQ Example at 1/2 bit per pixel. (a) Original Eye Magnified. (b) 6 bit VQ codebook for 4 x 3 blocks. (c) Decoded Image.

Figure 18. Full Image for Basic Example. (a) Original. (b) Decoded Image. (c) Error Image.

Figure 19. VQ vs. Separating Mean VQ at Rate 1/2 bit per pixel. (a) Original Eye Magnified. (b) VQ Decoded Image. (c) Separating Mean VQ with DPCM Mean Coding Decoded Image.

Figure 20. Full Image for Separating Mean Example. (a) Decoded Image using Separating Mean VQ with DPCM Mean Coding. (b) Error Image.

COMMENTS

We have described Lloyd's basic iterative algorithm and how it can be used to improve the performance of a variety of vector quantization systems, ranging from the fundamental memoryless full search VQ that serves as the basic model for data compression in information theory to a variety of feedback and adaptive systems that can be viewed as vector extensions of popular scalar compression systems. By a variety of examples of systems and code design simulations we have tried to illustrate some of the tradeoffs among performance, rate, complexity, and storage for these codes.

The basic structure of all the VQ systems is well suited to VLSI implementation: a minimum distortion search algorithm on a chip communicating with off-board storage for codebooks and next-state-transition functions. As new and better design algorithms are developed, the chips can be updated by simply reburning the codebook and transition ROM's.

The basic approach can also be incorporated into the design of some traditional scalar data compression schemes, an approach which Gersho calls "imbedded VQ" [11]. Such schemes typically enforce additional structure on the code, however, such as preprocessing, transforming, splitting into subbands, and scalar quantization, and hence the algorithms may not have the freedom to do as well as the more unconstrained structures considered here. Even if the traditional schemes prove more useful because of existing DSP chips or intuitive variations well matched to particular data sources, the vector quantization systems can provide a useful benchmark for comparison.

Recently VQ has also been successfully used in isolated word recognition systems without dynamic time warping by using either separate codebooks for each utterance or by mapping trajectories through one or more codebooks [68,69,70,71,55,72].
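The generalized Lloyd iteration summarized above alternates a minimum distortion partition of the training sequence with a centroid update of each cell. A minimal squared-error sketch follows; the function names and the empty-cell rule (an empty cell keeps its old codeword) are our own illustrative choices, not from the article.

```python
def nearest(codebook, x):
    """Index of the codeword minimizing squared error to vector x
    (the full-search minimum distortion encoder)."""
    return min(range(len(codebook)),
               key=lambda j: sum((c - xi) ** 2
                                 for c, xi in zip(codebook[j], x)))

def generalized_lloyd(train, codebook, iters=10):
    """Alternate the two optimality conditions: partition the training
    sequence by minimum distortion, then replace each codeword by the
    centroid (mean) of its cell. Average distortion cannot increase
    from one pass to the next."""
    codebook = [list(c) for c in codebook]
    dim = len(codebook[0])
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for x in train:
            cells[nearest(codebook, x)].append(x)
        for j, cell in enumerate(cells):
            if cell:  # an empty cell keeps its old codeword
                codebook[j] = [sum(x[d] for x in cell) / len(cell)
                               for d in range(dim)]
    return codebook
```

For distortion measures other than squared error, only the two steps change: the search uses the chosen distortion and the centroid is its generalized minimizer, which is why complicated measures cost design time but not implementation time.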
Vector quantization has also been used as a front end acoustic processor to isolated utterance and continuous speech recognition systems which then do approximately maximum likelihood linguistic decoding based on probabilities estimated using "hidden Markov" models for the VQ output data [73,74,75].

Variations of the basic VQ design algorithm have been tried for several distortion measures, including the squared error, weighted squared error, the Itakura-Saito distortion, and an (arithmetic) segmented signal to noise ratio. (See, e.g., [30,45,46].) Other distortion measures are currently under study.

The algorithm has not yet been extended to some of the more complicated distortion measures implicit in noise masking techniques for enhancing the subjective performance of scalar quantization speech coding systems. Whether scalar systems designed by sophisticated techniques matched to subjective distortion measures will sound or look better than vector systems designed for mathematically tractable distortion measures remains to be seen. Whenever the subjective distortion measures can be quantified and a means found to compute centroids, however, the vector systems will yield better quantitative performance. Since the centroid computation is only done in design and not in implementation, it can be quite complicated and still yield useful results.

The generalized Lloyd algorithm is essentially a clustering algorithm, and we have attempted to demonstrate its applicability to the design of a variety of data compression systems. Other clustering algorithms may yield better codes in some applications. For example, Freeman [76] proposed a design algorithm for scalar trellis encoding systems using the squared error distortion measure which replaced the Lloyd procedure by a conjugate gradient procedure for minimizing the average distortion for a long training sequence. He found that for a memoryless Gaussian source the resulting codes were superior to those obtained by the Lloyd procedure. It would be interesting to characterize the reasons for this superiority, e.g., the procedure may find a better local minimum or it may simply be numerically better suited for finding a continuous local minimum on a digital computer. It would also be interesting to consider variations of this approach for the design of some of the other systems considered here.

A survey article with many topics cannot provide complete descriptions or exhaustive studies of any of the systems sketched. It is hoped, however, that these examples impart the flavor of vector quantizer design algorithms and that they may interest some readers to further delve into the recent and current work in the area.

ACKNOWLEDGMENT

The author gratefully acknowledges the many helpful comments from students and colleagues that aided the preparation of this paper. Portions of the research described here were supported by the Army Research Office, the Air Force Office of Scientific Research, the National Science Foundation, the John Simon Guggenheim Memorial Foundation, and the Joint Services Electronics Program at Stanford University.

REFERENCES

[1] Davisson, L. D. and Gray, R. M., Data Compression, Dowden, Hutchinson & Ross, Inc., Stroudsburg, PA (1976). Benchmark Papers in Electrical Engineering and Computer Science, Volume 14.
[2] Jayant, N. S., Editor, Waveform Quantization and Coding, IEEE Press, NY (1976).
[3] Jayant, N. S. and Noll, P., Digital Coding of Waveforms, Prentice-Hall, Englewood Cliffs, NJ (1984).
[4] Shannon, C. E., "A mathematical theory of communication," Bell Systems Technical Journal 27, pp. 379-423, 623-656 (1948).
[5] Shannon, C. E., "Coding theorems for a discrete source with a fidelity criterion," IRE National Convention Record, Part 4, pp. 142-163 (1959).
[6] Gallager, R. G., Information Theory and Reliable Communication, John Wiley & Sons, NY (1968).
[7] Berger, T., Rate Distortion Theory, Prentice-Hall Inc., Englewood Cliffs, NJ (1971).
[8] Viterbi, A. J. and Omura, J. K., Principles of Digital Communication and Coding, McGraw-Hill Book Company, New York (1979).
[9] Lloyd, S. P., Least squares quantization in PCM, Bell Laboratories Technical Note (1957). (Published in the March 1982 special issue on quantization.)
[10] Wong, D., Juang, B.-H., and Gray, A. H., Jr., "An 800 bit/s vector quantization LPC vocoder," IEEE Transactions on Acoustics, Speech and Signal Processing ASSP-30, pp. 770-779 (October 1982).
[11] Gersho, A. and Cuperman, V., "Vector quantization: A pattern-matching technique for speech coding," IEEE Communications Magazine (December 1983).
[12] Itakura, F. and Saito, S., "Analysis synthesis telephony based on the maximum likelihood method," Proceedings of the 6th International Congress of Acoustics, pp. C-17-C-20 (August 1968).
[13] Kullback, S., Information Theory and Statistics, Dover, New York (1969).
[14] Gray, R. M., Gray, A. H., Jr., Rebolledo, G., and Shore, J. E., "Rate distortion speech coding with a minimum discrimination information distortion measure," IEEE Transactions on Information Theory IT-27 (6), pp. 708-721 (Nov. 1981).
[15] Shore, J. E. and Gray, R. M., "Minimum-cross-entropy pattern classification and cluster analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-4, pp. 11-17 (Jan. 1982).
[16] Buzo, A., Gray, A. H., Jr., Gray, R. M., and Markel, J. D., "Speech coding based upon vector quantization," IEEE Transactions on Acoustics, Speech and Signal Processing ASSP-28, pp. 562-574 (October 1980).
[17] Gray, R. M., Buzo, A., Gray, A. H., Jr., and Matsuyama, Y., "Distortion measures for speech processing," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-28, pp. 367-376 (August 1980).
[18] Gray, R. M., Kieffer, J. C., and Linde, Y., "Locally optimal block quantizer design," Information and Control 45, pp. 178-198 (1980).
[51] Foster, J., Gray, R. M., and Ostendorf, M., "Finite-state vector quantization for waveform coding," IEEE Transactions on Information Theory, to appear.
[52] Stewart, L. C., Gray, R. M., and Linde, Y., "The design of trellis waveform coders," IEEE Transactions on Communications COM-30, pp. 702-710 (April 1982).
[53] Ostendorf, M. and Gray, R. M., "An algorithm for the design of labeled-transition finite-state vector quantizers," submitted for publication (1983).
Fehn, H. G. and Noll, P., "Multipath search coding of stationary signals with applications to speech," IEEE Transactions on Communications COM-30, pp. 687-701 (April 1982).
Shore, J. E. and Burton, D. K., "Discrete utterance speech recognition without time alignment," IEEE Transactions on Information Theory IT-29, pp. 473-491 (July 1983).
Rebolledo, G., Gray, R. M., and Burg, J. P., "A multirate voice digitizer based upon vector quantization," IEEE Transactions on Communications COM-30, pp. 721-727 (April 1982).
Adoul, J.-P. and Mabilleau, P., "4800 bps RELP vocoder using vector quantization for both filter and residual representation," Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, p. 601 (April 1982).
Heron, C. D., Crochiere, R. E., and Cox, R. V., "A 32-band subband/transform coder incorporating vector quantization for dynamic bit allocation," Proceedings ICASSP, pp. 1276-1279 (April 1983).
Gray, R. M. and Linde, Y., "Vector quantizers and predictive quantizers for Gauss-Markov sources," IEEE Transactions on Communications COM-30, pp. 381-389 (Feb. 1982).
Arnstein, D. S., "Quantization error in predictive coders," IEEE Transactions on Communications COM-23, pp. 423-429 (April 1975).
[61] Yamada, Y., Fujita, K., and Tazaki, S., "Vector quantization of video signals," Proceedings of the Annual Conference of IECE, p. 1031 (1980).
[62] Gersho, A. and Ramamurthi, B., "Image coding using vector quantization," Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 428-431 (April 1982).
[63] Baker, R. L. and Gray, R. M., "Image compression using non-adaptive spatial vector quantization," Conference Record of the Sixteenth Asilomar Conference on Circuits, Systems and Computers (October 1982).
[64] Murakami, T., Asai, K., and Yamazaki, E., "Vector quantizer of video signals," Electronics Letters 18, pp. 1005-1006 (Nov. 1982).
[65] Yamada, Y. and Tazaki, S., "Vector quantizer design for video signals," IECE Transactions J66-B, pp. 965-972 (1983). (in Japanese)
[66] Yamada, Y. and Tazaki, S., "A method for constructing successive approximation vector quantizers for video signals," Proceedings of the Annual Conference of the Institute of Television Engineers of Japan, pp. 6-2 (1983).
[67] Ramamurthi, B. and Gersho, A., "Image coding using segmented codebooks," Proceedings International Picture Coding Symposium (Mar. 1983).
[68] Hamabe, R., Yamada, Y., Murata, M., and Namekawa, T., "A speech recognition system using inverse filter matching technique," Proceedings of the Annual Conference of the Institute of Television Engineers (June 1981). (in Japanese)
[69] Shore, J. E. and Burton, D. K., "Discrete utterance speech recognition without time alignment," Proceedings 1982 IEEE International Conference on Acoustics, Speech and Signal Processing, p. 907 (May 1982).
[70] Martinez, H. G., Riviera, C., and Buzo, A., "Discrete utterance recognition based upon source coding techniques," Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 539-542 (May 1982).
[71] Rabiner, L., Levinson, S. E., and Sondhi, M. M., "On the application of vector quantization and hidden Markov models to speaker-independent isolated word recognition," Bell System Technical Journal 62, pp. 1075-1106 (April 1983).
[72] Shore, J. E., Burton, D., and Buck, J., "A generalization of isolated word recognition using vector quantization," Proceedings 1983 International Conference on Acoustics, Speech and Signal Processing, pp. 1021-1024 (April 1983).
[73] Jelinek, F., Mercer, R. L., and Bahl, L. R., "Continuous speech recognition: statistical methods," in Handbook of Statistics, Vol. 2, P. R. Krishnaiah and L. N. Kanal, eds., North-Holland, pp. 549-573 (1982).
[74] Billi, R., "Vector quantization and Markov models applied to speech recognition," Proc. ICASSP 82, pp. 574-577, Paris (May 1982).
[75] Rabiner, L. R., Levinson, S. E., and Sondhi, M. M., "On the application of vector quantization and hidden Markov models to speaker-independent isolated word recognition," BSTJ, Vol. 62, pp. 1075-1105 (April 1983).
[76] Freeman, G. H., "The design of time-invariant trellis source codes," Abstracts of the 1983 IEEE International Symposium on Information Theory, pp. 42-43 (September 1983).
[77] Haoui, A. and Messerschmidt, D., "Predictive vector quantization," Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (1984).

Robert M. Gray was born in San Diego, CA, on November 1, 1943. He received the B.S. and M.S. degrees from M.I.T. in 1966 and the Ph.D. degree from U.S.C. in 1969, all in Electrical Engineering. Since 1969 he has been with the Information Systems Laboratory and the Electrical Engineering Department of Stanford University, CA, where he is currently a Professor engaged in teaching and research in communication and information theory with an emphasis on data compression. He was Associate Editor (1977-1980) and Editor (1980-1983) of the IEEE Transactions on Information Theory and was a member of the IEEE Information Theory Group Board of Governors (1974-1980). He was corecipient with Lee D. Davisson of the 1976 IEEE Information Theory Group Paper Award. He has been a fellow of the Japan Society for the Promotion of Science (1981) and the John Simon Guggenheim Memorial Foundation (1981-1982). He is a fellow of the IEEE and a member of Sigma Xi, Eta Kappa Nu, SIAM, IMS, AAAS, and the Societe des Ingenieurs et Scientifiques de France. He holds an Advanced Class Amateur Radio License (KB6XQ).

Notes added in proof: A similar design technique for FSVQ was independently developed by Haoui and Messerschmidt [77]. It should be pointed out that the FSVQ design algorithm described is incomplete in that it does not describe the methods used to avoid a noncommunicating collection of states and wasted states. These issues are discussed in [51].
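The finite-state structure discussed in the notes above, a sequential machine with one codebook per state and a next-state function driven by the encoder output, can be sketched as a toy scalar encoder. The data layout, names, and squared-error distortion are our own illustrative assumptions, and the state-reachability issues mentioned in the notes are ignored here.

```python
def fsvq_encode(samples, codebooks, next_state, s0=0):
    """Feedback VQ encoder sketch: in state s, do a full search of
    codebooks[s], emit the winning channel index, and feed that index
    back through next_state to select the next codebook. A decoder
    started in the same initial state tracks the state sequence from
    the indices alone, with no side information."""
    s, indices = s0, []
    for x in samples:
        i = min(range(len(codebooks[s])),
                key=lambda j: (codebooks[s][j] - x) ** 2)
        indices.append(i)
        s = next_state[s][i]
    return indices
```

A matching decoder simply replays codebooks[s][i] for each received index i while updating s through the same next-state function.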
