Apple Inc. v. Samsung Electronics Co. Ltd. et al
Filing
991
Administrative Motion to File Under Seal Documents Re Apples Opposition To Samsungs Motion To Exclude Opinions Of Certain Of Apple Experts filed by Apple Inc.. (Attachments: #1 Declaration Of Cyndi Wheeler In Support Of Apples Administrative Motion To File Under Seal Documents Re Apples Opposition to Exclude Apple Experts Opinions, #2 [Proposed] Order Granting Apples Administrative Motion To File Under Seal, #3 Apples Opposition To Samsungs Motion To Exclude Opinions Of Certain Of Apples Experts, #4 Declaration Of Mia Mazza In Support Of Apples Opposition To Samsungs Motion To Exclude Opinions Of Certain Of Apples Experts, #5 Exhibit Mazza Decl. Ex. D, #6 Exhibit Mazza Decl. Ex. F, #7 Exhibit Mazza Decl. Ex. G, #8 Exhibit Mazza Decl. Ex. J, #9 Exhibit Mazza Decl. Ex. K, #10 Exhibit Mazza Decl. Ex. L, #11 Exhibit Mazza Decl. Ex. R, #12 Exhibit Mazza Decl. Ex. S, #13 Exhibit Mazza Decl. Ex. T, #14 Exhibit Mazza Decl. Ex. U, #15 Exhibit Mazza Decl. Ex. V, #16 Exhibit Hauser Decl. Ex. B, #17 Exhibit Hauser Decl. Ex. C, #18 Exhibit Hauser Decl. Ex. D, #19 Exhibit Hauser Decl. Ex. E, #20 Exhibit Musika Decl. Ex. S, #21 Exhibit Musika Decl. Ex. T, #22 Exhibit Musika Decl. Ex. U, #23 [Proposed] Order Denying Samsungs Motion To Exclude Opinions Of Apples Experts)(Jacobs, Michael) (Filed on 5/31/2012) Modified on 6/3/2012 attachment #1 Sealed pursuant to General Order No. 62 (dhm, COURT STAFF).
Exhibit D
Commercial Use of Conjoint Analysis: An Update
Author(s): Dick R. Wittink and Philippe Cattin
Reviewed work(s):
Source: Journal of Marketing, Vol. 53, No. 3 (Jul., 1989), pp. 91-96
Published by: American Marketing Association
Stable URL: http://www.jstor.org/stable/1251345
Accessed: 30/05/2012 13:51
Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp
JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of
content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms
of scholarship. For more information about JSTOR, please contact support@jstor.org.
American Marketing Association is collaborating with JSTOR to digitize, preserve and extend access to
Journal of Marketing.
http://www.jstor.org
Dick R. Wittink & Philippe Cattin

Commercial Use of Conjoint Analysis: An Update
The authors report results of a survey conducted to update a previous one on the commercial use of conjoint analysis. They document an extensive number of applications and show systematic changes in their characteristics consistent with research results reported in the literature. Issues relevant to the options available to analysts involved in the conduct of conjoint analysis are identified and discussed.
A survey of conjoint analysis research suppliers was conducted to update a previous study (Cattin and Wittink 1982). A comparison of the results from the two surveys shows systematic changes in how studies are conducted. These changes tend to be consistent with the implications from conjoint research reported in the marketing literature. Many issues related to the conduct and implementation of a conjoint study warrant further examination.

Sampling of Commercial Users

As the method's popularity has grown and changes in data collection or analysis have been shown to be acceptable, the conjoint supplier population has grown as well. For the survey, we concentrated on these research suppliers to learn about commercial applications. We started with an American Marketing Association directory listing of 156 firms providing
Dick R. Wittink is Professor of Marketing and Quantitative Methods, Johnson Graduate School of Management, Cornell University, and a Visiting Professor of Marketing, Kellogg Graduate School of Management, Northwestern University, during the 1988-89 academic year. Philippe Cattin is a marketing consultant based in Paris, France.
services on "all market research" techniques. We expected a relatively small number of the firms from this listing to be active in conjoint analysis and received 26 completed questionnaires for a return rate of 17%.1
We identified 13 other research suppliers from advertisements in Marketing News. These firms either mentioned conjoint analysis as one of the services offered or the information suggested that conjoint analysis might be offered. From this group eight completed questionnaires were received, a response rate of 62%. We also used a listing of researchers who had requested information about a new conjoint software package introduced in 1985 to identify 57 additional firms.2 From this group we received 15 completed
1 This response rate appears to be small. However, at the time the survey was conducted most of the firms included in the listing would not have offered conjoint analysis as a service. We believe that the 26 respondents represent at least 50% of the firms providing the service.
2 Though the use of a list from one particular source may bias our survey results in the direction of features favored by that firm, such a bias should be slight or nonexistent for several reasons. First, our intent was to include firms that for whatever reason were excluded from the first two lists. Second, the listing included competitors of the firm that requested information to understand the competitive threat. Third, the software provided by the firm was not available for commercial use until the second half of 1985, the last year of the five-year period covered by our survey.
Journal of Marketing, Vol. 53 (July 1989), 91-96
questionnaires, a 26% return rate.3 Finally, we used
a list of 47 individuals who had attended a multivariate analysis seminar and were associated with other
research suppliers. From this group 17 completed
questionnaires were obtained, a 36% completion rate.
On the basis of our prior knowledge of which firms
definitely were providing conjoint services, we believe that the survey participants had responsibility for
a large proportion of all commercial projects completed during the 1981-1985 period used for this survey.
The survey respondents together carried out 1062
projects during the five-year period in comparison with
a total of 698 documented applications prior to 1981.
Though we cannot be sure that our coverage of commercial applications is equal across the two surveys,
the annual commercial use in the early 1980s appears
to have exceeded the annual use during the 1970s.
Part of this growth was due to additional suppliers entering the field. For example, approximately 30% of
the respondent firms had started offering the service
after 1980. To obtain independent judgments about
the total number of commercial projects, we contacted
several leading suppliers. Their estimates of the actual
market varied greatly, ranging from 200 to 2000 a year.
As we documented 1062 projects over a five-year period, the actual number is clearly greater than 200.
The upper bound of the range of estimates may be
more representative of usage in the late 1980s. For
example, most of the software that facilitates the commercial use of conjoint first was introduced in 1985.
As a consequence, the number of research suppliers
offering conjoint analysis may have grown exponentially after 1985. In the early 1980s the annual commercial usage should have been closer to the lower
bound. Our judgment is that this number may have
been about 400 a year during the period of the survey.
Survey Results
Frequency of Usage by Product Category
We show in Table 1 that during 1981-1985 almost
60% of the applications were for consumer goods and
less than 20% were for industrial goods. The largest
change in relative frequency is for the service categories, which together account for 18% in 1981-1985
but 13% in the earlier survey. In general, however,
the distributions of relative frequencies for the categories are very similar.
3 The low response rate must be interpreted against the fact that these firms were not included in either of the first two lists. In many cases the firm was considering the opportunity to offer conjoint analysis as a new service, given the recent availability of conjoint software packages. Such a firm would have had no experience to report at the time the survey was conducted.
TABLE 1
Commercial Use of Conjoint Analysis

                                          Percentage of Applications^a
                                          1981-1985    1971-1980
Product/Service Category
  Consumer goods                              59            61
  Industrial goods                            18            20
  Financial services                           9             8
  Other services                               9             5
  Other                                        5             6
                                             100           100
Purpose^b
  New product/concept identification          47            72
  Competitive analysis                        40             C
  Pricing                                     38            61
  Market segmentation                         33            48
  Repositioning                               33            NA
  Advertising                                 18            39
  Distribution                                 5             7
Means of Data Collection
  Personal interview                          64
  Computer-interactive method                 12             C
  Mail questionnaire                           9
  Telephone interview                          8
  Combination                                  7
                                             100           100
Stimulus Construction
  Full profile (concept evaluation)           61            56
  Paired comparisons                          10             C
  Tradeoff matrices                            6            27
  Combination                                 10            14
  Other                                       13             3
                                             100           100
Response Scale
  Rating scale                                49            34
  Rank order                                  36            45
  Paired choice                                9            11
  Other^d                                      6            10
                                             100           100
Estimation Procedure^e
  Least squares                               54            16
  MONANOVA                                    11            55
  Logit                                       11
  LINMAP                                      18            24
  Other^f                                      6            10
                                             100           105

a The results reported are weighted by the number of projects completed by each supplier.
b A given study may involve multiple purposes.
c This category was not included in the first survey.
d In the 1986 survey, this category was specifically defined as "constant sum."
e The percentages reported for 1971-1980 reflect the use of multiple procedures by some suppliers.
f This category includes PREFMAP and monotone regression for 1971-1980.
Project Purpose
One commercial project may serve multiple purposes. To determine the percentage of studies involving specified purposes, we identified seven different, but not mutually exclusive, categories. The results show that an average of slightly more than two identified purposes were served by a given study. Results from both surveys are reported in Table 1. The ordering of the categories common to both surveys according to frequency is identical across the two surveys. Interestingly, one of the new categories, competitive analysis, was the second most frequent purpose in the 1981-1985 time period. Competitive analysis is now a very common use of conjoint analysis, undoubtedly because of the opportunity to conduct market simulations.4
Means of Data Collection
We show in Table 1 that almost two thirds of the commercial applications were done by personal interview. The second most frequent means was computer-interactive procedures. The relative frequency during the 1981-1985 period for this means of data collection was only 12%. The use of mail questionnaires and telephone interviews was relatively infrequent. However, these means are particularly important if a probability sample is needed from a large geographic area. Mail surveys tend to have relatively low cooperation rates and the extent of cooperation will be lower still if the survey instrument requires additional explanations (see also Cerro 1988 and Stahl 1988).
Stimulus Construction
During the 1981-1985 period, the full-profile procedure was used in almost two thirds of the commercial applications, a slight increase in relative use in comparison with the first survey. Tradeoff matrices account for only 6% of the applications5 in contrast to 27% in the first survey. Thus, dramatic changes occurred in the relative popularity of alternative data collection methods, as documented in Table 1. Several reasons can be suggested for the decline in popularity of tradeoff matrices. First, respondents participating in a conjoint survey object to the tradeoff matrix format (e.g., Currim, Weinberg, and Wittink 1981, p. 70). Second, the matrix format is more artificial than the full-profile method. Third, the analysis of rank-order preferences is complicated when the matrices differ, as they usually do, in dimensionality. This complication requires users to have access to and knowledge about special algorithms.
A preference rank order of the cells in a tradeoff matrix can be obtained indirectly, however, by using paired comparisons. One also can construct object pairs from a full-profile design, using more than two attributes at a time. In general, paired comparisons account for 10% of the commercial applications.

4 In more than half of the applications, market or preference shares were predicted.
5 According to Johnson (1987, p. 257), the tradeoff matrices method "... has become nearly obsolete."
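For concreteness, the full-profile and paired-comparison formats discussed above can be sketched in a few lines of Python. The attributes, levels, and counts below are hypothetical illustrations, not data from the survey:

```python
# Hypothetical attributes and levels (illustrative only, not survey data).
from itertools import combinations, product

attributes = {
    "brand": ["A", "B", "C"],
    "price": ["$2.49", "$2.99", "$3.49"],
    "size": ["small", "large"],
}

# Full-profile stimuli: one level from every attribute per card.
names = list(attributes)
profiles = [dict(zip(names, combo)) for combo in product(*attributes.values())]
print(len(profiles))  # 3 * 3 * 2 = 18 cards

# Object pairs for paired-comparison tasks, built from the same profiles.
pairs = list(combinations(profiles, 2))
print(len(pairs))  # 18 * 17 / 2 = 153 pairs
```

In commercial practice a fractional factorial design would typically replace the full factorial to keep the number of cards each respondent must evaluate manageable.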
Response Scale
Traditionally, conjoint data are collected on a nonmetric scale. Ranked input data also are expected to be more reliable (Green and Srinivasan 1978). Interestingly, however, the relative popularity of rank-order response scales was lower during 1981-1985 than in the 1971-1980 period. Rating scales now account for almost half of the commercial applications in comparison with slightly more than a third in the first survey. Several reasons may account for this change. One is that with rank-order data, the maximum difference in parameter estimates for the best and worst levels of an attribute depends on the number of intermediate levels. Both part-worth values and inferred importances may not be comparable across attributes with varying numbers of attribute levels (Wittink, Krishnamurthi, and Nutter 1982).
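The inferred importances mentioned here are conventionally computed as the range of an attribute's part-worths, normalized across attributes, which is why ranges inflated by extra levels distort comparisons. A minimal sketch with invented part-worth values:

```python
# Hypothetical part-worths (illustrative only). Derived importance of an
# attribute = its range (best minus worst part-worth), normalized to 100%.
partworths = {
    "price": [0.0, 0.8, 1.6],  # three levels
    "brand": [0.0, 1.2],       # two levels
    "size":  [0.0, 0.4],
}
ranges = {a: max(v) - min(v) for a, v in partworths.items()}
total = sum(ranges.values())
importances = {a: round(100 * r / total, 1) for a, r in ranges.items()}
print(importances)  # {'price': 50.0, 'brand': 37.5, 'size': 12.5}
```

If intermediate levels widen an attribute's estimated range, as can happen with rank-order data, its normalized importance rises mechanically; that is the comparability problem Wittink, Krishnamurthi, and Nutter (1982) describe.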
Estimation Method
During 1981-1985, least squares was used five times as often as MONANOVA, whereas MONANOVA was the more frequently used method during 1971-1980. This change is consistent with empirical and simulation findings about the relative performance of alternative estimation methods on rank-order data (Carmone, Green, and Jain 1978; Jain et al. 1979; Wittink and Cattin 1981). In addition, the increasing use of rating scales (see Table 1) strengthens the case for least squares. Still, a preference for nonmetric procedures is sometimes expressed (Johnson 1987), even though such procedures applied to ratings are likely to have lower predictive validity than metric procedures (e.g., Huber 1975).
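As a sketch of the metric approach (not the authors' code), part-worths can be estimated from rating-scale data by ordinary least squares on a dummy-coded design matrix. The design and ratings below are fabricated and noise-free, so the estimates recover the generating values exactly:

```python
# Least-squares part-worth estimation from ratings (illustrative sketch).
import numpy as np

# Two hypothetical attributes with three levels each: 9 full profiles.
profiles = [(a, b) for a in range(3) for b in range(3)]

def dummy_row(a, b):
    """Intercept plus 0/1 dummies; level 0 of each attribute is the base."""
    return [1.0,
            1.0 if a == 1 else 0.0, 1.0 if a == 2 else 0.0,
            1.0 if b == 1 else 0.0, 1.0 if b == 2 else 0.0]

X = np.array([dummy_row(a, b) for a, b in profiles])

# Ratings generated from an additive (main-effects) model, without noise.
true_beta = np.array([5.0, 1.0, 2.0, 0.5, 3.0])
ratings = X @ true_beta

beta_hat, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(np.allclose(beta_hat, true_beta))  # True: OLS recovers the part-worths
```

With real ratings the recovery is of course approximate, and nonmetric procedures such as MONANOVA or LINMAP would instead fit a monotone transformation of rank-order responses.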
Some estimation procedures can accommodate a variety of preference model specifications. The main-effects part-worth model is the most popular specification, yet for a continuous attribute (such as price6) a continuous function can often provide more efficient estimates. Researchers who care about the model specification validity and estimation efficiency will gather sufficient data to test models, at least at an aggregate level. Interestingly, such tests often favor a model with interaction effects. For example, Louviere (1988) has obtained considerable evidence that respondents treat attributes complementarily. For designs that accommodate specific interactions, see
6 Price was included as a separate attribute in almost two thirds of the commercial applications. For the estimated price sensitivity to be meaningful, price must be carefully labeled as the cost of the product. Also, study participants must understand that objects differ only in the characteristics explicitly listed and that a higher or lower price has no implications for characteristics not included in the study.
Carmone and Green (1981). Importantly, model comparison tests should reflect a study's purpose (Hagerty 1986).
Reliability
The reliability of conjoint results is partly a function of the number of respondents (e.g., for market simulations). The typical sample size reported by survey respondents has a median of 300. To determine the required sample size, analysts may use standard statistical inference formulas. However, these formulas assume probability sampling of respondents.
For the number of preference (tradeoff) judgments per respondent, we obtained a median value of 16 for the typical application. The reliability is also determined by the number of attributes used (a median of eight attributes) and the number of attribute levels (a median of three levels for the typical study). On the basis of this information, the reliability of results at the level of an individual respondent appears typically to be very low. Indeed, 16 judgments seems inadequate for the estimation of all parameters in a study using eight attributes and three levels per attribute in a part-worth model. Perhaps other information is combined with the preference judgments (e.g., Green 1984; Green, Goldberg, and Montemayor 1981). Still, these numbers underscore the importance of substituting continuous functions whenever possible.
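The arithmetic behind that judgment is simple and worth making explicit:

```python
# Median study from the survey: 8 attributes, 3 levels each, 16 judgments.
attributes, levels, judgments = 8, 3, 16

# A main-effects part-worth model needs (levels - 1) parameters per
# attribute under dummy coding, plus an intercept.
parameters = attributes * (levels - 1) + 1
print(parameters)              # 17
print(parameters > judgments)  # True: more unknowns than observations, so
                               # individual-level estimates are not
                               # identified without extra information
```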
More research is needed to assess systematic differences in results due to alternative data collection procedures. Reibstein, Bateson, and Boulding (1988) examined the reliability of individual-level parameter estimates for alternative stimuli and attribute configurations as well as data collection methods. Overall, their results suggest a respectable degree of reliability. However, conclusions about differences in reliability between alternative manipulations and data collection procedures may depend on the reliability measure adopted (Wittink et al. 1988).
Validity
The closest conjoint studies usually come to validation is by comparing predicted market shares from a simulation for the objects available in the marketplace with their actual market shares (e.g., Clarke 1987, p. 185). However, for this validation attempt to be meaningful, adjustments should be made for the extent to which respondents are aware of and have access to each of the brands. Such adjustments have been an important feature of simulated test-market model predictions (e.g., Silk and Urban 1978). Another key component of market share predictions is the choice rule assumed to apply to the respondents. Commonly a respondent is assumed to choose the object with the highest predicted preference (first-choice rule). However, more needs to be known about the (relative) performance of alternative choice rules7 (see, e.g., Finkbeiner 1988).
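A sketch of two common simulation choice rules, with fabricated utilities, makes the interval-scale concern concrete: the first-choice rule is invariant to rescaling the predicted preferences, while a probabilistic (logit-type) share rule is not:

```python
# Two choice rules for market simulations (all utilities hypothetical).
import numpy as np

# Rows: respondents; columns: three simulated products.
utils = np.array([[3.0, 2.0, 1.0],
                  [1.0, 2.5, 2.0],
                  [2.0, 2.0, 3.0]])

def first_choice_shares(u):
    """Each respondent 'buys' the product with the highest utility."""
    winners = u.argmax(axis=1)
    return np.bincount(winners, minlength=u.shape[1]) / len(u)

def logit_shares(u):
    """Probabilistic rule: shares from exponentiated utilities."""
    p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
    return p.mean(axis=0)

# Rescaling utilities (an admissible interval-scale transformation)
# leaves first-choice shares unchanged but alters logit shares.
print(np.allclose(first_choice_shares(utils), first_choice_shares(2 * utils)))  # True
print(np.allclose(logit_shares(utils), logit_shares(2 * utils)))                # False
```

This is one reason measuring preferences as probabilities of choice, as suggested later in the article, sidesteps the transformation problem for probabilistic rules.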
One of the most appealing characteristics of conjoint analysis is the option to simulate a variety of market scenarios and to make market (preference) share predictions. However, for projectability of these predictions to a target market, a probability sample is necessary. This condition is rarely satisfied. Instead, respondents tend to be selected purposively on the basis of demographic or socioeconomic characteristics. The validity of market simulation predictions depends also on the completeness of the set of attributes used to define objects, yet an analyst may focus on a reduced number of attributes to simplify the task for respondents. The increasing interest in and use of market simulators makes it important to use an extensive set of attributes, which places a premium on designs that can accommodate many attributes (e.g., by allowing the set of attributes and their levels to be respondent-specific). Analysts also can utilize computer programs that identify the characteristics of an "optimal" product for market share or profit maximization (e.g., Green, Carroll, and Goldberg 1981). Optimization algorithms are available for product lines as well (Green and Krieger 1985).
Postsurvey Developments
Toward the end of the survey period, conjoint software packages were introduced. As a result, the cost of conjoint applications has declined because the software can be thought of as a substitute for expert knowledge. We therefore expect an acceleration in the growth of conjoint applications. Some of the software is designed specifically for computer-interactive data collection. This approach may be favored for several reasons. First, respondent interest in and involvement with the computer-interactive tasks seem to be high (Johnson 1987, p. 263). Second, the flexibility of computer-interactive approaches affords substantial advantages. By using different attributes and levels for different respondents, one can include a larger number of attributes and levels in a study without overwhelming the respondents. Third, it is easy to include options for determining a respondent's consistency in providing preference judgments. Fourth, parameters can be estimated as soon as a sufficient number of judgments is obtained. The number and kind of additional preference judgments needed from a respondent can be made to depend on the change in the estimated precision of parameter estimates. Fifth, at the end of the exercise, results can be shown to the respondent. Also, as soon as the results are obtained from all respondents, market-level predictions can be made. Thus, the results can be communicated to managers much more rapidly, which is particularly important when conjoint is used at some stage in a time-constrained new product development process.

7 One difficulty is that the predicted values for objects are usually measured on at best an interval scale. Thus, admissible transformations can have dramatic effects on predicted market shares (with the exception of the first-choice rule). To get around this problem, preferences can be measured as probabilities of choice.
One of the attractive features of conjoint analysis is that it provides information about the influence ("importance") of attributes on the preference for objects. However, increasingly conjoint procedures are adapted to include direct attribute assessments. For example, in adaptive conjoint analysis (Johnson 1987), the parameter estimates are obtained by combining direct assessments of attribute levels and paired-comparison evaluations. However, the influence (weight) of the direct assessments on the parameter estimates is allowed to decrease as the number of paired-comparison judgments provided by a respondent increases.
Green has popularized the use of hybrid methods (e.g., Green 1984). In these procedures direct attribute assessments are combined with information from preference judgments about objects. The increasing interest in using direct assessments is also evident in Srinivasan's (1987) model of choice as a two-stage process. In his procedure, respondents are given the opportunity to eliminate unacceptable attribute levels. Subsequently, a compensatory model is applied to explain preferences for objects with acceptable levels. For this model, self-explicated weights are based on attribute importances and attribute-level desirabilities. Similar to the derived attribute importances inferred from conjoint results, the stated importances are defined in terms of the differences between the best and worst of the acceptable attribute levels. In an empirical application, Srinivasan obtained slightly higher predictive validity of 1982 MBA job choice data than was obtained by Wittink and Montgomery (1979) with tradeoff-matrix data on 1979 job choices.
We note that the elicitation of unacceptable attribute levels is a form of direct assessment, even if this information is used primarily to simplify the data collection task. Johnson (1987, p. 259) argues that the elimination of unacceptable levels should be included only when "the interview is otherwise too long." This word of caution appears to be consistent with the results of a recent study designed to investigate the validity of unacceptable level assessments (Green, Krieger, and Bansal 1988).
Conclusions
From a survey of research suppliers, we have documented 200 conjoint applications a year during 1981-1985, though we believe the actual average may be about twice that number. In addition, since 1985 use may have become more widespread because of the introduction of conjoint software. The availability of programs that provide customized study designs and analyses also has reduced the cost per study substantially.
We highlight differences between this survey and comparable results for the 1971-1980 period. The comparisons show a systematic reduction in the use of rank-order preferences relative to judgments obtained on a rating scale. In addition, data analysis is based on regression analysis in the majority of applications. The reported changes are directionally consistent with the results from studies reported in the literature.
During the period of the first survey, academic researchers placed great emphasis on the relative merits of alternative data collection and analysis methods. In the 1980s attention shifted to more refined data collection procedures, optimal combination of directly stated attribute evaluations and object preferences, flexibility in the preference tasks and the ability to accommodate many attributes, market simulation procedures, and choice rules. Additional research would be helpful to determine the extent to which rating scales provide interval-scaled preference judgments. Also, alternative functional forms, including allowances for attribute interactions, should be compared. Though conjoint analysis appears to be widely used and accepted, there is little documented evidence on the validity of market predictions made. More research is needed also on the applicability of alternative approaches, including software packages, for different product categories and types of applications.
REFERENCES
Carmone, Frank J. and Paul E. Green (1981), "Model Misspecification in Multiattribute Parameter Estimation," Journal of Marketing Research, 18 (February), 87-93.
Carmone, Frank J., Paul E. Green, and Arun K. Jain (1978), "Robustness of Conjoint Analysis: Some Monte Carlo Results," Journal of Marketing Research, 15 (May), 300-3.
Cattin, Philippe and Dick R. Wittink (1982), "Commercial Use of Conjoint Analysis: A Survey," Journal of Marketing, 46 (Summer), 44-53.
Cerro, Dan (1988), "Conjoint Analysis by Mail," Sawtooth Software Conference Proceedings, 139-44.
Clarke, Darral G. (1987), Marketing Analysis and Decision Making. Redwood City, CA: The Scientific Press.
Currim, Imran S., Charles B. Weinberg, and Dick R. Wittink (1981), "Design of Subscription Programs for a Performing Arts Series," Journal of Consumer Research, 8 (June), 67-75.
Finkbeiner, Carl T. (1988), "Comparison of Conjoint Choice Simulators," Sawtooth Software Conference Proceedings, 75-103.
Green, Paul E. (1984), "Hybrid Models for Conjoint Analysis: An Expository Review," Journal of Marketing Research, 21 (May), 155-69.
Green, Paul E., J. Douglas Carroll, and Stephen M. Goldberg (1981), "A General Approach to Product Design Optimization via Conjoint Analysis," Journal of Marketing, 45 (Summer), 17-37.
Green, Paul E., Stephen M. Goldberg, and Mila Montemayor (1981), "A Hybrid Utility Estimation Model for Conjoint Analysis," Journal of Marketing, 45 (Winter), 33-41.
Green, Paul E. and Abba M. Krieger (1985), "Models and Heuristics for Product Line Selection," Marketing Science, 4 (Winter), 1-19.
Green, Paul E., Abba M. Krieger, and Pradeep Bansal (1988), "Completely Unacceptable Levels in Conjoint Analysis: A Cautionary Note," Journal of Marketing Research, 25 (August), 293-300.
Green, Paul E. and V. Srinivasan (1978), "Conjoint Analysis in Consumer Research: Issues and Outlook," Journal of Consumer Research, 5 (September), 103-23.
Hagerty, Michael R. (1986), "The Cost of Simplifying Preference Models," Marketing Science, 5 (Fall), 298-319.
Huber, Joel (1975), "Predicting Preferences on Experimental Bundles of Attributes: A Comparison of Models," Journal of Marketing Research, 12 (August), 290-7.
Jain, Arun K., Franklin Acito, Naresh K. Malhotra, and Vijay Mahajan (1979), "A Comparison of Internal Validity of Alternative Parameter Estimation Methods in Decompositional Multiattribute Preference Models," Journal of Marketing Research, 16 (August), 313-22.
Johnson, Richard M. (1987), "Adaptive Conjoint Analysis," Sawtooth Software Conference Proceedings, 253-65.
Louviere, Jordan J. (1988), Analyzing Decision Making: Metric Conjoint Analysis. Beverly Hills, CA: Sage Publications, Inc.
Reibstein, David, John E. G. Bateson, and William Boulding (1988), "Conjoint Analysis Reliability: Empirical Findings," Marketing Science, 7 (Summer), 271-86.
Silk, Alvin J. and Glen L. Urban (1978), "Pre-Test-Market Evaluation of New Packaged Goods: A Model and Measurement Methodology," Journal of Marketing Research, 15 (May), 171-91.
Srinivasan, V. (1987), "A Conjunctive-Compensatory Approach to the Self-Explication of Multiattributed Preferences," working paper (March).
Stahl, Brent (1988), "Conjoint Analysis by Telephone," Sawtooth Software Conference Proceedings, 131-8.
Wittink, Dick R. and Philippe Cattin (1981), "Alternative Estimation Methods for Conjoint Analysis: A Monte Carlo Study," Journal of Marketing Research, 18 (February), 101-6.
Wittink, Dick R. and David B. Montgomery (1979), "Predictive Validity of Tradeoff Analysis for Alternative Segmentation Schemes," in AMA Educators' Conference Proceedings, 69-71.
Wittink, Dick R., Lakshman Krishnamurthi, and Julia B. Nutter (1982), "Comparing Derived Importance Weights Across Attributes," Journal of Consumer Research, 8 (March), 471-4.
Wittink, Dick R., David J. Reibstein, William Boulding, John E. G. Bateson, and John W. Walsh (1988), "Conjoint Reliability Approaches and Measures: A Cautionary Note," working paper (October).
Reprint No. JM533107