
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 23, NO. 10, OCTOBER 2011

Estimating the Helpfulness and Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics

Anindya Ghose and Panagiotis G. Ipeirotis, Member, IEEE

Abstract—With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes it harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability, and extent of spelling errors, to identify important text-based features. In addition, we also examine multiple reviewer-level features, such as the average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated as more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: "reviewer-related" features, "review subjectivity" features, and "review readability" features, and find that using any of the three feature sets results in statistically equivalent performance to using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.

Index Terms—Internet commerce, social media, user-generated content, text mining, word-of-mouth, product reviews, economics, sentiment analysis, online communities.

1 INTRODUCTION

With the rapid growth of the Internet, product-related word-of-mouth conversations have migrated to online markets, creating active electronic communities that provide a wealth of information. Reviewers contribute time and energy to generate reviews, enabling a social structure that provides benefits both for the users and the firms that host electronic markets. In such a context, "who" says "what" and "how" they say it, matters. On the flip side, a large number of reviews for a single product may also make it harder for individuals to track the gist of users' discussions and evaluate the true underlying quality of a product. Recent work has shown that the distribution of an overwhelming majority of reviews posted in online markets is bimodal [1]. Reviews are either allotted an extremely high rating or an extremely low rating. In such situations, the average numerical star rating assigned to a product may not convey a lot of information to a prospective buyer or to the manufacturer

The authors are with the Department of Information, Operations, and Management Sciences, Leonard N. Stern School of Business, New York University, New York, NY 10012. E-mail: {aghose, panos}@stern.nyu.edu.
Manuscript received 29 Aug. 2008; revised 30 June 2009; accepted 4 Jan. 2010; published online 24 Sept. 2010. Recommended for acceptance by B.C. Ooi.
For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TKDE-2008-08-0447. Digital Object Identifier no. 10.1109/TKDE.2010.188.
1041-4347/11/$26.00 © 2011 IEEE

who tries to understand what aspects of its product are important. Instead, the reader has to read the actual reviews to examine which of the positive and which of the negative attributes of a product are of interest. So far, the best effort for ranking reviews for consumers comes in the form of "peer reviewing" in review forums, where customers give "helpful" votes to other reviews in order to signal their informativeness. Unfortunately, helpful votes are not a useful feature for ranking recent reviews: they are accumulated over a long period of time, and hence cannot be used for review placement in a short- or medium-term time frame. Similarly, merchants need to know what aspects of reviews are the most informative from consumers' perspective. Such reviews are likely to be the most helpful for merchants, as they contain valuable information about what aspects of the product are driving the sales up or down. In this paper, we propose techniques for predicting the helpfulness and importance of a review so that we can have:

. a consumer-oriented mechanism, which can potentially rank the reviews according to their expected helpfulness (i.e., estimating the social impact), and
. a manufacturer-oriented ranking mechanism, which can potentially rank the reviews according to their expected influence on sales (i.e., estimating the economic impact).


To better understand what factors influence consumers' perception of usefulness and which factors affect consumers most, we conduct a two-level study. First, we perform an explanatory econometric analysis, trying to identify what aspects of a review (and of a reviewer) are important determinants of its usefulness and impact. Then, at the second level, we build a predictive model using Random Forests that offers significant predictive power and allows us to predict with high accuracy how peer consumers are going to rate a review and how sales will be affected by the posted review. Our algorithms are based on the idea that the writing style of the review plays an important role in determining the helpfulness perceived by other fellow customers and the degree to which the review influences purchase decisions.

In our work, we perform multiple levels of automatic text analysis to identify characteristics of the review that are important. We perform our analysis at the lexical, grammatical, semantic, and stylistic levels to identify text features that have high predictive power in identifying the perceived helpfulness and the economic impact of a review. Furthermore, we examine whether the past history and characteristics of a reviewer can be a useful predictor of the usefulness and impact of a review. We present an extensive experimental analysis using a real data set of 411 products, monitored over a 15-month period on Amazon.com. Our analysis indicates that we can accurately predict the helpfulness and influence of product reviews.

The rest of the paper is structured as follows: First, Section 2 discusses related work and provides the theoretical framework for generating the variables for our analysis. Then, in Section 3, we describe our data set and discuss how we extract the variables that we use to predict the usefulness and impact of a review. In Section 4, we present our explanatory econometric analysis for estimating the influence of the different variables, and in Section 5, we describe the experimental results of our predictive modeling that uses Random Forest classifiers. Finally, Section 6 provides some additional discussion and concludes the paper.

2 THEORETICAL FRAMEWORK AND RELATED LITERATURE

From a business perspective, consumer product reviews are most influential if they affect product sales and the online behavior of users of the word-of-mouth forum.

2.1 Sales Impact

The first relevant stream of literature assesses the effect of online product reviews on sales. Research in this direction has generally assumed that the primary reason that reviews influence sales is that they provide information about the product or the vendor to potential consumers. Prior research has demonstrated an association between numeric ratings of reviews (review valence) and subsequent sales of the book on that site [2], [3], [4], or between review volume and sales [5], [6], [7]. Indeed, to the extent that better products receive more positive reviews, there should be a positive relationship between review valence and sales. Research also demonstrated that reviews and sales may be positively related even when underlying product quality is controlled [3], [5].


However, prior work has not looked at how the textual characteristics of a review affect sales. Our hypothesis is that the text of product reviews affects sales even after taking into consideration numerical information such as review valence and volume. Intuitively, reviews of reasonable length that are easy to read and lack spelling and grammar errors should be, all else being equal, more helpful and influential than other reviews that are difficult to read and have errors. Reviewers also write "subjective opinions" that portray reviewers' emotions about product features, or more "objective statements" that portray factual data about product features, or a mix of both. Keeping these in mind, we formulate three potential constructs for text-based features that are likely to have an impact: 1) the average level of subjectivity and the range and mix of subjective and objective comments, 2) the extent to which the content is easy to read, and 3) the proportion of spelling errors in the review. In particular, we test the following hypotheses:

Hypothesis 1a. All else equal, a change in the subjectivity level and mixture of objective and subjective statements in reviews will be associated with a change in sales.

Hypothesis 1b. All else equal, a change in the readability score of reviews will be associated with a change in sales.

Hypothesis 1c. All else equal, a decrease in the proportion of spelling errors in reviews will be positively related to sales.

2.2 Helpfulness Votes and Peer Recognition

A second stream of related research on word-of-mouth suggests that perceived attributes of the reviewer may shape consumer response to reviews [5]. In the social psychology literature, message source characteristics have been found to influence judgment and behavior [8], [9], [10], [11], and it has often been suggested that source characteristics might shape product attitudes and purchase propensity. Indeed, Forman et al. [5] draw on the information processing literature to suggest that product sales will be affected by reviewer disclosure of identity-related information. Prior research on computer mediated communication (CMC) suggests that online community members communicate information about product evaluations with an intent to influence others' purchase decisions as well as provide social information about contributing members themselves [12], [13]. Research concerning the motivations of content creators in online contexts highlights the role of identity motives in defining why users provide social information about themselves (e.g., [14], [15], [16], [17]). Increasingly, we have seen that both identity-descriptive information about reviewers and product information are prominently displayed on the websites of online retailers. Prior research on self-verification in online contexts has pointed out the use of persistent labeling, defined as using a single, consistent way of identifying oneself, such as "real name" in the Amazon context, and self-presentation, defined as ways of presenting oneself online that may help others to identify one, such as posting geographic location or a personal profile in the Amazon context [17], as important phenomena in the online world. Indeed, information about product reviewers is often highly salient. Visitors to the site can see more professional aspects of


reviewers such as their badges (e.g., "top-50 reviewer," "top-100 reviewer" badges) and ranks ("reviewer rank"), as well as personal information about reviewers ranging from their real name to where they live, their nicknames, hobbies, professional interests, pictures, and other posted links. In addition, users have the opportunity to examine more "professional" aspects of a reviewer, such as the proportion of helpful votes given by other users not only for a given review but across all the reviews of all other products posted by a reviewer. Further, interested users can also read the actual content of all reviews generated by a reviewer across all products.

With regard to the benefits reviewers derive, work on online user-generated content has primarily focused on the consequences of peer recognition rather than on its antecedents [18], [19]. It is only recently that Forman et al. [5] evaluated the influence of reviewers' disclosure of information about themselves on the extent of peer recognition of reviewers and their interactions with the review valence, by drawing on the social psychology literature. We hypothesize that after controlling for features examined in prior work, such as reviewer disclosure of identity information and the valence of reviews, the actual text of the review matters in determining the extent to which users find the review useful. In particular, we focus on four constructs, namely, subjectiveness, informativeness, readability, and proportion of spelling errors. Our paper thus contributes to the existing stream of work by examining text-based antecedents of peer recognition in online word-of-mouth forums. In particular, we test the following hypotheses:

Hypothesis 2a. All else equal, a change in the subjectivity level and mixture of objective and subjective statements in a review will be associated with a change in the perceived helpfulness of that review.

Hypothesis 2b. All else equal, a change in the readability of a review will be associated with a change in the perceived helpfulness of that review.

Hypothesis 2c. All else equal, a decrease in the proportion of spelling errors in a review will be positively related to perceived helpfulness of that review.

Hypothesis 2d. All else equal, an increase in the average helpfulness of a reviewer's historical reviews will be positively related to perceived helpfulness of a review posted by that reviewer.

This paper builds on our previous work [20], [21], [22]. In [20] and [21], we examined just the effect of subjectivity, while in the current work, we expand our data to include more product categories and examine a significantly increased number of features, such as different readability metrics, information about the reviewer history, different features of reviewer disclosure, and so on. The present paper is unique in looking at how various additional features of the review text affect product sales and the perceived helpfulness of these reviews. In parallel with our work, researchers in the natural language processing field have examined the task of predicting review helpfulness [23], [24], [25], [26], [27], using reviews from Amazon.com or movie reviews as training and test data. Our work uses a superset of the


features used in the past for helpfulness prediction (e.g., reviewer history and disclosure, deviation of subjectivity in the review, and so on). Also, none of these studies attempts to predict the influence of reviews on product sales. A differentiating factor of our work is its two-pronged approach, which builds on methodologies from economics and from data mining, constructing both explanatory and predictive models to better understand the impact of different factors. Interestingly, all prior research uses support vector machines (in a binary classification and in regression mode), which we observed to perform worse than Random Forests (as we discuss in Section 5). Predicting the helpfulness of a review is also related to the task of evaluating the quality of web posts or the quality of answers to posted questions [28], [29], [30], [31], although there are more cues (e.g., clickstream data) that can be used to estimate the perceived quality of a posting. Recently, Hao et al. [32] also presented techniques for predicting whether a review will receive any votes about its helpfulness or whether it will stay unrated. Tsur and Rappoport [33] presented an unsupervised algorithm for ranking the reviews according to their expected helpfulness.

3 DATA SET AND VARIABLES

A major goal of this paper is to explore how the user-generated textual content of a review and the self-reported characteristics of the reviewer who generated the review can influence economic transactions (such as product sales) and online community and social behavior (such as peer recognition in the form of helpful votes). To examine this, we collected data about the economic transactions on Amazon.com and analyzed the associated review system. In this section, we describe the data that we collected from Amazon; furthermore, we discuss how we computed the variables to perform our analysis, based on the discussion of Section 2.

3.1 Product and Sales Data

To conduct our study, we created a panel data set of products belonging to three product categories:

1. Audio and video players (144 products),
2. Digital cameras (109 products), and
3. DVDs (158 products).

We picked the products by selecting all the items that appeared in the "Top-100" list of Amazon over a period of three months, from January 2005 to March 2005. We decided to use popular products, in order to have products in our study with a significant number of reviews. Then, using Amazon web services, from March 2005 until May 2006, we collected the information for these products described below. We collected various product-specific characteristics over time. Specifically, we collected the manufacturer-suggested list price of the product, its Amazon retail price, and its Amazon sales rank (which serves as a proxy for units of demand [34], as we will describe later). Together with sales and price data, we also collected other data that may influence the purchasing behavior of consumers. For example, we collected the date the product was released into the market, to compute the elapsed time from the date of product release, since products released a long time ago tend to see a decrease in sales over time. We


also collected the number of reviews and the average review rating of the product over time.

3.2 Individual Review Data

Beyond the product-specific data, we also collected all reviews of a product since the product was released into the market. For each review, we retrieved the actual textual content of the review and the review rating of the product given by the reviewer. The rating that a reviewer allocates to the reviewed product is denoted by a number of stars on a scale of 1 to 5. From the textual content, we generated a set of variables at the lexical, grammatical, and stylistic levels. We describe these variables in detail in Section 3.4, where we describe the textual analysis that we conducted.

Review helpfulness. Amazon has a voting system whereby community members provide helpful votes to rate the reviews of other community members. Previous peer ratings appear immediately above the posted review, in the form, "[number of helpful votes] out of [number of members who voted] found the following review helpful." These helpful and total votes enable us to compute the fraction of votes that evaluated the review as helpful. To have as accurate a representation as possible of the percentage of customers that found the review helpful, we collected the votes in December 2007, ensuring that a significant time period had passed since the review was posted and that a significant number of peer rating votes had accumulated for the review.

3.3 Reviewer Characteristics

3.3.1 Reviewer Disclosure

While review valence is likely to influence consumers, there is reason to believe that social information about reviewers themselves (rather than the product or vendor) is likely to be an important predictor of consumers' buying decisions [5]. On many sites, social information about the reviewer is at least as prominent as product information. For example, on sites such as Amazon, information about product reviewers is graphically depicted, highly salient, and sometimes more detailed and voluminous than information on the products they review: the "Top-1,000" reviewers have special tags displayed next to their names, the reviewers that disclose their real name1 are also highlighted, and so on. Given the extent and salience of available social information regarding product reviewers, it seems important to control for the impact of such information on online product sales and review helpfulness. Amazon has a procedure by which reviewers can disclose personal information about themselves. There are several types of information that users can disclose; we focus our analysis on the categories most commonly indicated by users: whether the user disclosed their real name, their location, nickname, and hobbies. With real name, we refer to a registration procedure that Amazon provides for users to indicate their actual name by providing verification with a credit card, as mentioned above. Reviewers may also post additional information in their profiles such as geographical location, disclose additional information (e.g., "Hobbies"), or use a nickname (e.g., "Gadget King"). We use these data to control for the impact

1. Amazon compares the name of the reviewer with the name listed in the credit card on file before assigning the "Real Name" tag.


of self-descriptive identity claims. We encode this information as binary variables. We also constructed an additional dummy variable, labeled “any disclosure;” this variable captures each instance where the reviewer has engaged in any one of the four kinds of self-disclosure. We also collected the reviewer rank of the reviewer as published on Amazon.

3.3.2 Reviewer History

Since one of our goals is to predict the future usefulness of a review, we wanted to examine whether the past history of a reviewer can be used to predict the usefulness of the future reviews written by the same reviewer. For this, we collected the past reviews for each reviewer, together with the helpful and total votes for each of the past reviews. Using this information, we constructed, for each reviewer and for each point in time, the past performance of the reviewer. Specifically, we created two variables, by microaveraging and macroaveraging the past votes on the reviews. The variable reviewer history macro is the ratio of all past helpful votes divided by the total number of votes. Similarly, we also created the variable reviewer history micro, in which we first computed the average helpfulness for each of the past reviews and then computed the average across all past reviews. The difference between the macro and micro versions is that the micro version gives equal weight to the helpfulness of all past reviews, while the macro version weights more heavily the reviews that received a large number of votes.

3.4 Textual Analysis of Reviews

Our approach is based on the hypothesis that the actual text of the review matters. Previous text mining approaches focused on extracting automatically the polarity of the review [35], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46]. In our setting, the numerical rating score already gives the (approximate) polarity of the review,2 so we look in the text to extract features that are not possible to observe using simple numeric ratings.

3.4.1 Readability Analysis

We are interested in examining what types of reviews affect sales the most and what types of reviews are most helpful to the users. For example, everything else being equal, a review that is easy to read will be more helpful than another that has spelling mistakes and is difficult to read. As a first, low-level variable, we measured the number of spelling mistakes within each review, and we normalized the number by dividing by the length of the review (in characters).3 To measure the spelling errors, we used an off-the-shelf spell checker, ignoring capitalized words and words with numbers in them. We also ignored the top-100 most frequent non-English words that appear in the reviews: most of them were brand names or terminology words that do not appear in the spell checker's list. Furthermore, to measure the cognitive effort that a user needs in order to read a review, we measured the length of a review in sentences, words, and characters.

2. We should note, though, that the numeric rating does not capture all the polarity information that appears in the review [19].
3. To be able to take the logarithm of the normalized variable for errorless reviews, we added one to the number of spelling errors before normalizing.
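The following is a minimal sketch of how the reviewer-history averages and the basic length and spelling features described above might be computed. The known_word predicate stands in for whatever off-the-shelf spell checker is actually used, and the input structures are illustrative assumptions rather than the authors' implementation.

```python
import math
import re

def reviewer_history(past_reviews):
    """past_reviews: list of (helpful_votes, total_votes) for a reviewer's prior reviews."""
    voted = [(h, t) for h, t in past_reviews if t > 0]
    if not voted:
        return None, None
    # Macro: pool all votes, so reviews with many votes weigh more heavily.
    macro = sum(h for h, _ in voted) / sum(t for _, t in voted)
    # Micro: average the per-review ratios, so every past review counts equally.
    micro = sum(h / t for h, t in voted) / len(voted)
    return macro, micro

def length_and_spelling_features(text, known_word):
    """known_word(w) -> bool is an assumed interface to an off-the-shelf spell checker."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9']+", text)
    # Ignore capitalized words and words containing digits, as described above.
    candidates = [w for w in words
                  if w and not w[0].isupper() and not any(c.isdigit() for c in w)]
    errors = sum(1 for w in candidates if not known_word(w.lower()))
    # Add one before normalizing so the logarithm is defined for errorless reviews.
    log_error_rate = math.log((errors + 1) / max(len(text), 1))
    return {
        "num_sentences": len(sentences),
        "num_words": len(words),
        "num_chars": len(text),
        "log_spelling_error_rate": log_error_rate,
    }
```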


Beyond these basic features, we also used the extensive results from research on readability. Past research has shown that easy-reading text improves comprehension, retention, and reading speed, and that the average reading level of the US adult population is at the eighth grade level [47]. Therefore, a review that can be read easily by a large number of users is also expected to be rated by more users. Today, there are numerous metrics for measuring the readability of a text, and while none of them is perfect, the computed measures correlate well with the actual difficulty of reading a text. To avoid idiosyncratic errors peculiar to a specific readability metric, we computed a set of metrics for each review. Specifically, we computed the following:

. Automated Readability Index,
. Coleman-Liau Index,
. Flesch Reading Ease,
. Flesch-Kincaid Grade Level,
. Gunning fog index, and
. SMOG.

(See [48] for a detailed description of how to compute each of these metrics.) Based on research in readability, these metrics are useful for measuring how easy it is for a user to read a review.
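As an illustration, a small sketch of three of these scores using their standard published formulas; the syllable counter is a crude vowel-group heuristic and the sentence/word tokenization is simplified, so the numbers are approximations rather than the exact values used in the paper.

```python
import re

def _syllables(word):
    # Crude heuristic: count groups of vowels; adequate for an aggregate score.
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def _counts(text):
    sentences = max(len([s for s in re.split(r"[.!?]+", text) if s.strip()]), 1)
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(len(w) for w in words)
    syllables = sum(_syllables(w) for w in words)
    return sentences, max(len(words), 1), chars, syllables

def readability_scores(text):
    s, w, c, syl = _counts(text)
    return {
        # Automated Readability Index
        "ari": 4.71 * (c / w) + 0.5 * (w / s) - 21.43,
        # Flesch Reading Ease (higher = easier to read)
        "flesch_reading_ease": 206.835 - 1.015 * (w / s) - 84.6 * (syl / w),
        # Flesch-Kincaid Grade Level
        "flesch_kincaid_grade": 0.39 * (w / s) + 11.8 * (syl / w) - 15.59,
    }
```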

3.4.2 Subjectivity Analysis

Beyond the lower level spelling and readability analysis, we also expect that there are stylistic choices that affect the perceived helpfulness of a review. We observed empirically that there are two types of listed information, from the stylistic point of view. There are reviews that list "objective" information, listing the characteristics of the product, and giving an alternate product description that confirms (or rejects) the description given by the merchant. The other type of review is the review with "subjective," sentimental information, in which the reviewer gives a very personal description of the product, and gives information that typically does not appear in the official description of the product. As a first step toward understanding the impact of the style of the reviews on helpfulness and product sales, we rely on the existing literature on subjectivity estimation from computational linguistics [41]. Specifically, Pang and Lee [41] described a technique that identifies which sentences in a text convey objective information, and which of them contain subjective elements. Pang and Lee applied their technique to a movie review data set, in which they considered the movie plot as objective information, and the information that appeared in the reviews as subjective. In our scenario, we follow the same paradigm. In particular, objective information is considered to be the information that also appears in the product description, and subjective is everything else. Using this definition, we then generated a training set with two classes of documents:

. A set of "objective" documents that contains the product descriptions of each of the products in our data set.
. A set of "subjective" documents that contains randomly retrieved reviews.

Since we deal with a rather diverse data set, we constructed separate subjectivity classifiers for each of our product categories. We trained the classifier using a Dynamic


Language Model classifier with n-grams (n = 8) from the LingPipe toolkit.4 The accuracy of the classifiers according to the Area under the ROC curve (AUC), measured using 10-fold cross-validation, was: 0.85 for audio and video players, 0.87 for digital cameras, and 0.82 for DVDs. After constructing the classifiers for each product category, we used the resulting classification models on the remaining, unseen reviews. Instead of classifying each review as subjective or objective, we instead classified each sentence in each review as either "objective" or "subjective," keeping the probability of being subjective, Pr_subj(s), for each sentence s. Hence, for each review, we have a "subjectivity" score for each of the sentences. Based on the classification scores for the sentences in each review, we derived the average probability AvgProb(r) of the review r being subjective, defined as the mean value of the Pr_subj(s_i) values for the sentences s_1, ..., s_n in the review r. Since the same review may be a mixture of objective and subjective sentences, we also kept the standard deviation DevProb(r) of the subjectivity scores Pr_subj(s_i) for the sentences in each review.5 The summary statistics of the data for audio-video players, digital cameras, and DVDs are given in Tables 2, 3, and 4, respectively.

4. http://www.alias-i.com/lingpipe/.
5. To examine the extent to which people cross-reference reviews (e.g., "I agree with Tom"), we did an additional study. We posted 2,000 product reviews on Amazon Mechanical Turk, asking workers there to examine the reviews and indicate whether the reviewer refers to some other review. We asked five workers on Mechanical Turk to annotate each review. If at least one worker indicated that the review refers to some other review or webpage, then we classified the review as "cross-referencing." The extent of cross-referencing was very small. Out of the 2,000 reviews, only 38 had at least one "cross-referencing" vote (1.9 percent), and only two reviews were judged as "cross-referencing" by all five workers (0.1 percent). This corresponds to a relatively limited source of errors and does not affect significantly the accuracy of the subjectivity classifier.
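A short sketch of how the per-review AvgProb and DevProb statistics can be derived from sentence-level subjectivity probabilities; sentence_subjectivity stands in for the trained per-category classifier (here, the LingPipe language-model classifier) and is assumed rather than shown.

```python
from statistics import mean, pstdev

def review_subjectivity_features(sentences, sentence_subjectivity):
    """sentences: list of sentence strings for one review.
    sentence_subjectivity(s) -> probability in [0, 1] that sentence s is subjective
    (an assumed interface to the trained per-category classifier)."""
    probs = [sentence_subjectivity(s) for s in sentences]
    if not probs:
        return {"AvgProb": 0.0, "DevProb": 0.0}
    return {
        "AvgProb": mean(probs),    # average subjectivity of the review
        "DevProb": pstdev(probs),  # spread: captures the mix of objective and subjective sentences
    }
```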

4 EXPLANATORY ECONOMETRIC ANALYSIS

So far, we have explained the different types of data that we collected, which have the potential, according to the various hypotheses, to affect the impact and usefulness of the reviews. In this section, we present the results of our explanatory econometric analysis, which examines the importance of each factor. Through our analysis, we aim to provide a better understanding of how customers are affected by the reviews. (In the next section, we will describe our predictive model, based on machine learning techniques.) In Section 4.1, we analyze the effect of different review and reviewer characteristics on product sales. Our results show what factors are important for a merchant to observe. Then, in Section 4.2, we present our analysis of how different factors affect the helpfulness of a review.

4.1 Effect on Product Sales

We first estimate the relationship between sales and stylistic elements of a review. Prior research in economics and in marketing (for instance, [49]) has associated sales ranks with demand levels for products such as software and electronics. The association is based on the experimentally observed fact that the distribution of demand in terms of sales rank follows a Pareto distribution (i.e., a power law). Based on this observation, it is possible to convert sales ranks into demand levels using the following Pareto relationship:


ln(D) = a + b · ln(S),    (1)

where D is the unobserved product demand, S is its observed sales rank, and a > 0, b < 0 are industry-specific parameters. Therefore, we can use the log of product sales rank on Amazon.com as a proxy for the log of product demand. Previous work has examined how price, number of reviews, and review valence influence product sales on Amazon and Barnes and Noble [3]. Recent work by Forman et al. [5] also describes how reviewer disclosure of identity-descriptive information (e.g., Real Name or Location) affects product sales. Hence, to be consistent with prior work, we control for all these factors but focus mainly on the textual aspects of the review to see how they affect sales.
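For concreteness, a minimal sketch of how (1) turns an observed sales rank into a log-demand proxy; the parameter values for a and b are illustrative placeholders, since the industry-specific constants are calibrated separately and not reported here.

```python
import math

def log_demand_proxy(sales_rank, a=1.0, b=-0.8):
    """Apply ln(D) = a + b * ln(S). The values of a and b are illustrative
    placeholders; in practice they are industry-specific calibration constants."""
    return a + b * math.log(sales_rank)

# Because b < 0, a lower (better) sales rank maps to a higher log-demand proxy.
```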

4.1.1 Model Specification

In order to test our Hypotheses 1a to 1c, we adopt a model similar to that used in [3] and [5], while incorporating measures for the quality and the content of the reviews. Chevalier and Mayzlin [3] and Forman et al. [5] define the book's sales rank as a function of a book fixed effect and other factors that may impact the sales of a book. The dependent variable is ln(SalesRank)_kt, the log of the sales rank of product k at time t, which is a linear transformation of the log of product demand, as discussed earlier. The unit of observation in our analysis is a product-date: since we only know the date that a review is posted (and not its time) and we observe changes in sales rank on a daily basis, we need to "collapse" multiple reviews posted on the same date into a single observation. Since we have a linear model, we use an additive approach to combine reviews published for the same product on the same date. To study the impact of reviews and the quality of reviews on sales, we estimate the following model:

log(SalesRank)_kt = α + β1 · log(AmazonPrice_kt) + β2 · AvgProb_k(t-1) + β3 · DevProb_k(t-1) + β4 · AverageReviewRating_k(t-1) + β5 · log(NumberOfReviews_k(t-1)) + β6 · Readability_k(t-1) + β7 · log(SpellingErrors_k(t-1)) + β8 · AnyDisclosure_k(t-1) + β9 · log(ElapsedDate_kt) + μ_k + ε_kt,    (2)

where μ_k is a product fixed effect that accounts for unobserved heterogeneity across products and ε_kt is the error term. (The other variables are described in Table 1 and Section 3.) To select the variables that are present in the regression, we follow the work in [3] and [5].6 Note that, as explained above, increases in sales rank mean lower sales, so a negative coefficient on a variable

6. During model building, we performed a large number of statistical significance tests; to avoid the accidental discovery of "important" variables, we used the well-known stepwise regression method as a systematic process of variable selection for our regressions. This is a sequential process for fitting the least-squares model, where at each step, a single explanatory variable is either added to or removed from the model in the next fit. The most commonly used criterion for the addition or deletion of variables in stepwise regression is based on the partial F-statistic for each of the regressions, which allows one to compare any reduced (or empty) model to the full model from which it is reduced.


implies that an increase in that variable increases sales. The control variables used in our model include the Amazon retail price, the difference between the date of data collection and the release date of the product (Elapsed Date), the average numeric rating of the product (Rating), and the log of the number of reviews posted for that product (Number of Reviews). This is consistent with prior work such as Chevalier and Mayzlin [3] and Forman et al. [5]. To account for potential nonlinearities and to smooth large values, we take the log of the dependent variable and of some of the control variables, such as Amazon Price, volume of reviews, and days elapsed, consistent with the literature [5], [34]. For these regressions, in which we examine the relationship between review sentiments and product sales, we aggregate data to the weekly level. By aggregating data in this way, we smooth potential day-to-day volatility in sales rank. (As a robustness check, we also ran regressions at the daily and fortnightly level, and find that the qualitative nature of most of our results remains the same.) We estimate product-level fixed effects to account for differences in average sales rank across products. These fixed effects are algebraically equivalent to including a dummy for every product in our sample, and so this enables us to control for differences in the average quality of products. Thus, any relationship between sales rank and review valence will not reflect differences in average quality across products, but rather will be identified off changes over time in sales rank and review valence within products, diminishing the possibility that our results reflect differences in average unobserved book quality rather than aspects of the reviews themselves [5]. Our primary interest is in examining the association between textual variables in user-generated reviews and sales. To maintain consistency with prior work, we also examine the association between average review valence and sales. However, prior work has shown that review valence may be correlated with product-level unobservables that may be correlated with sales. In our setting, though we control for differences in the average quality of products through our fixed effects, it is possible that changes in the popularity of the product over time may be correlated with changes in review valence. Thus, this parameter reflects not only the information content of reviews but also may reflect exogenous shocks that may influence product popularity [5]. Similarly, the variable Number of Reviews will also capture changes in product popularity or perceived product quality over time; thus, β5 may reflect the combined effects of a causal relationship between number of reviews and sales [7] and changes in unobserved book popularity over time.7

7. Note that prior work in this domain has often transformed the dependent variable (sales rank) into quantities using a specification similar to Ghose and Sundararajan [34]. That was usually done because those papers were interested in demand estimation and imputing price elasticities. However, in this case, we are not interested in estimating demand, and hence, we do not need to make the actual transformation. In this regard, our paper is more closely related to [5].
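A rough sketch of how a regression in the spirit of (2) could be estimated with product fixed effects; the column names and the pandas/statsmodels workflow are illustrative assumptions rather than the authors' estimation code, with the C(product_id) dummies playing the role of the fixed effects (algebraically equivalent, as noted above).

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per product-week, with the lagged review variables already merged in.
# All column names are illustrative placeholders.
df = pd.read_csv("panel.csv")

formula = (
    "log_salesrank ~ log_amazon_price + avg_prob_lag + dev_prob_lag + "
    "avg_rating_lag + log_num_reviews_lag + readability_lag + "
    "log_spelling_errors_lag + any_disclosure_lag + log_elapsed_date + "
    "C(product_id)"  # product dummies, i.e., product-level fixed effects
)

model = smf.ols(formula, data=df).fit(cov_type="HC1")  # heteroskedasticity-robust SEs
print(model.params.filter(regex="^(?!C\\()"))          # show coefficients, hide the dummies
```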

TABLE 1
The Variables Collected for Our Study
The panel data set contains data collected over a period of 15 months; we collected the variables daily, and we capture the variability over time for the variables that change over time (e.g., sales rank, price, reviewer characteristics, and so on).

4.1.2 Empirical Results

The sign on the coefficient of AvgProb suggests that an increase in the average subjectivity of reviews leads to an

increase in sales for products, although the estimate is statistically significant only for audio-video players and digital cameras (see Table 5). It is statistically insignificant for DVDs. Our conjecture is that customers prefer to read reviews that describe the individual experiences of other consumers, and that such (subjective) information encourages purchases only for search goods (such as cameras and audio-video players) but not for experience goods.8 The coefficient of DevProb has a positive and statistically significant relationship with sales rank for audio-video players and DVDs, but is statistically insignificant for digital cameras. In general, this suggests that a decrease in the deviation of the probability of subjective comments leads to a decrease in sales rank, i.e., an increase in product sales. This means that reviews that have a mixture of objective and highly subjective sentences have a negative effect on product sales, compared to reviews that tend to include only subjective or only objective information.

8. Search goods are those whose quality can be observed before buying the product (e.g., electronics), while for experience goods, the consumers have to consume/experience the product in order to determine its quality (e.g., books, movies).

TABLE 2
Descriptive Statistics of Audio and Video Players for Econometric Analysis

TABLE 3
Descriptive Statistics of Digital Cameras for Econometric Analysis

TABLE 5
These Are OLS Regressions with Product-Level Fixed Effects
The dependent variable is Log(SalesRank). Robust standard errors are listed in parentheses; ***, **, and * denote significance at 1, 5, and 10 percent, respectively. The R-square includes fixed effects in the R-square computation.

The coefficient of Readability is negative and statistically significant for digital cameras, suggesting that reviews that have higher Readability scores are associated with higher sales. This is likely to happen if such reviews are written in more authoritative and sophisticated language, which enhances the credibility and informativeness of such reviews. Our results are robust to the use of other Readability

metrics described in Table 1, such as ARI, Coleman-Liau index, Flesch Reading Ease, Flesch-Kincaid Grade Level, and the SMOG index. The coefficient of Spelling Errors is positive and statistically significant for DVDs, suggesting that an increase in the proportion of spelling mistakes in the content of the reviews decreases product sales for some products whose quality can be assessed only after purchase. However, for hedonic-like products such as audio-video players and digital cameras, whose quality can be assessed prior to purchase, the proportion of spelling errors in reviews does not have a statistically significant impact on sales. For all three categories, we find that this result is robust to different specifications of normalizing the number of spelling errors, such as normalizing by the number of characters, words, or sentences in a given review. In summary, our results provide support for Hypotheses 1a to 1c.

TABLE 4
Descriptive Statistics of DVDs for Econometric Analysis

As expected, our control variables suggest that sales decrease as Amazon's price increases. Further, even though the coefficient of Any Disclosure is statistically insignificant, the negative sign implies that the prevalence of reviewer disclosure of identity-descriptive information would be associated with higher subsequent sales. This is consistent with prior research in the information processing literature supporting a direct effect of source characteristics on product evaluations and purchase intentions when information is processed heuristically [5]. Our results are robust to the use of other disclosure variables in the above regression. For example, instead of "Any Disclosure," if

we were to use disclosures of the two most salient reviewer self-descriptive features (Real Name and Location), the results are generally consistent with the existing ones. We also find that an increase in the volume of reviews is positively associated with sales of DVDs and digital cameras. In contrast, average review valence has a statistically significant effect on sales only for audio-video players. These mixed findings are consistent with prior research that has found a statistically significant effect of review valence but not review volume on sales [3], and with others who have found a statistically significant effect of review volume but not valence on sales [5], [6], [7].9 Finally, we also ran regressions that included interaction terms between ratings and the textual variables like AvgProb, DevProb, Readability, and Spelling Errors. For brevity, we cannot include these results in the paper. However, a counterintuitive theme that emerged is that reviews that rate products negatively (ratings ≤ 2) can be associated with increased product sales when the review text is informative and detailed, based on its readability score, number of normalized spelling errors, or the mix of subjective and objective sentences. This is likely to occur when the reviewer clearly outlines the pros and cons of the product, thereby providing sufficient information to the consumer to make a purchase. If the negative attributes of the product do not concern the consumer as much as they did the reviewer, then such informative reviews can lead to increased sales. Using these results, it is now possible to generate a ranking scheme for presenting reviews to manufacturers of a product. The reviews that affect sales the most (either positively or negatively) are the reviews that should be presented first to the manufacturer. Such reviews tend to contain information that affects the perception of the customers for the product. Hence, the manufacturer can utilize such reviews, either by modifying future versions of the product or by modifying the existing marketing strategy (e.g., by emphasizing the good characteristics of the product). We should note that the reviews that affect sales

9. Note that we do not have other variables such as "Reviewer Rank" or "Helpfulness" in this regression because of a concern that these variables will lead to biased and inconsistent estimates. Said simply, it is entirely possible that "Reviewer Rank" or "Helpfulness" is correlated with other unobserved review-level attributes. Such correlations between regressors and error terms will lead to the well-known endogeneity bias in OLS regressions [50].


most are not necessarily the same as the ones that customers find useful and that typically get "spotlighted" in review forums, like that of Amazon. We present related evidence next.

4.2 Effect on Helpfulness

Next, we want to analyze the impact of review variables on the extent to which community members would rate reviews as helpful, after controlling for the presence of self-descriptive information. Recent work [5] describes how reviewer disclosure of identity-descriptive information and the extent of equivocality of reviews (based on the review valence) affect the perceived usefulness of reviews. Hence, to be consistent with prior work, we control for these factors but focus mainly on the textual aspects of the review and the reviewer history to see how they affect the usefulness of reviews.10

4.2.1 Model Specification

The dependent variable, Helpfulness_kr, is operationalized as the ratio of helpful votes to total votes received for a review r issued for product k. In order to test our Hypotheses 2a to 2d, we use a well-known linear specification for our helpfulness estimation [5]:

log(Helpfulness)_kr = α + β1 · AvgProb_kr + β2 · DevProb_kr + β3 · AnyDisclosure_kr + β4 · Readability_kr + β5 · ReviewerHistoryMacro_kr + β6 · log(SpellingErrors_kr) + β7 · Moderate_kr + β8 · log(NumberOfReviews_kr) + μ_k + ε_kr.    (3)

The unit of observation in our analysis is a product review, μ_k is a product fixed effect that controls for differences in the average helpfulness of reviews across products, and ε_kr is the error term. (The other variables are described in Table 1 and Section 3.) We also constructed a dummy variable to differentiate between extreme reviews, which are unequivocal and therefore provide a great deal of information to inform purchase decisions, and moderate reviews, which provide less information. Specifically, ratings of 3 were classified as Moderate reviews, while ratings nearer the endpoints of the scale (1, 2, 4, 5) were classified as unequivocal [5]. The above equation can be estimated using a simple panel data fixed effects model. However, one concern with this strategy is that the posting of personal identity information such as Real Name or location may be correlated with some unobservable reviewer-specific characteristics that may

10. We compared our work with the model used in [5], which estimated sales but without incorporating the impact of the review text variables (such as AvgProb, DevProb, Readability, and Spelling Errors) and without incorporating the reviewer-level variables (such as Reviewer History Macro and Reviewer History Micro). There is a significant improvement in R-squared for each model. Specifically, for the regressions used to estimate the impact on product sales (Equation (2)), our model increases R-squared by 9 percent for audio-video products, 14 percent for digital cameras, and 15 percent for DVDs.


influence review quality [5]. If some explanatory variables are correlated with the errors, then ordinary least-squares regression gives biased and inconsistent estimates. To control for this potential problem, we use a Two-Stage Least-Squares (2SLS) regression with instrumental variables [50]. Under the 2SLS approach, in the first stage, each endogenous variable is regressed on all valid instruments, including the full set of exogenous variables in the main regression. Since the instruments are exogenous, these approximations of the endogenous covariates will not be correlated with the error term. So, intuitively, they provide a way to analyze the relationship between the dependent variable and the endogenous covariates. In the second stage, each endogenous covariate is replaced with its approximation estimated in the first stage, and the regression is estimated as usual. The slope estimator thus obtained is consistent [50]. Specifically, we instrument for the endogenous variables in the above equation using lagged values of disclosure, subjectivity, and readability. We experimented with different combinations of these instruments and find that the qualitative nature of our results is generally robust. The intuition behind the use of these instrumental variables is that they are likely to be correlated with the relevant independent variables but uncorrelated with unobservable characteristics that may influence the dependent variable. For example, the use of a Real Name in prior reviews is likely to be correlated with the use of Real Name in subsequent reviews but uncorrelated with unobservables that determine perceived helpfulness for a given review. Similarly, the presence of subjectivity in prior reviews is likely to be correlated with the presence of subjectivity in subsequent reviews but unlikely to be correlated with the error term that determines perceived helpfulness for the current review. Hence, these are valid instruments in our 2SLS estimation. This is consistent with prior work [5]. To ensure that our instruments are valid, we conducted the Sargan test of overidentifying restrictions [50]. The joint null hypothesis is that the instruments are valid, i.e., uncorrelated with the error term, and that the excluded instruments are correctly excluded from the estimated equation. For the 2SLS estimator, the test statistic is typically calculated as N times the R-squared from a regression of the IV residuals on the full set of instruments. A rejection casts doubt on the validity of the instruments. Based on the p-values from these tests, we are unable to reject the null hypothesis for each of the three categories, thereby confirming the validity of our instruments.
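To make the two-stage procedure concrete, here is a minimal sketch of 2SLS written out as two explicit OLS stages; for simplicity only one endogenous regressor (disclosure) is shown instrumented, the column names are illustrative placeholders, and a dedicated IV estimator should be used in practice because the naive second-stage standard errors shown here are not correct.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reviews.csv")  # one row per review; column names are placeholders

# Stage 1: regress the endogenous covariate on the instruments (lagged values)
# plus the exogenous variables from the main equation, and keep the fitted values.
stage1 = smf.ols(
    "any_disclosure ~ any_disclosure_lag + avg_prob_lag + readability_lag + "
    "moderate + log_num_reviews + C(product_id)",
    data=df,
).fit()
df["any_disclosure_hat"] = stage1.fittedvalues

# Stage 2: estimate the helpfulness equation with the fitted value in place of
# the endogenous regressor.
stage2 = smf.ols(
    "log_helpfulness ~ avg_prob + dev_prob + any_disclosure_hat + readability + "
    "reviewer_history_macro + log_spelling_errors + moderate + "
    "log_num_reviews + C(product_id)",
    data=df,
).fit()
print(stage2.params.filter(regex="^(?!C\\()"))
# Note: the coefficients match 2SLS, but correct standard errors require a proper
# IV estimator rather than this two-step shortcut.
```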

4.2.2 Empirical Results

With regard to the usefulness of reviews, Table 6 contains the results of our analysis. Our findings reveal that for product categories such as audio and video equipment, digital cameras, and DVDs, the extent of subjectivity in a review has a statistically significant effect on the extent to which users perceive the review to be helpful. The coefficient of AvgProb is negative, suggesting that highly subjective reviews are rated as being less helpful. Although DevProb is statistically significant for audio-video products only, it always has a positive relationship with helpfulness votes. This result suggests that consumers find reviews that

have a wide range of subjectivity/objectivity scores across sentences to be more helpful. In other words, reviews that have a mixture of sentences with objective content and sentences with extreme, subjective content are rated highly by users. It is worthwhile to mention that we observed the opposite effect for product sales, indicating that helpful reviews are not necessarily the ones that lead to increases in sales.

TABLE 6
These Are 2SLS Regressions with Instrumental Variables
Fixed effects are at the product level. The dependent variable is Helpful. Robust standard errors are listed in parentheses; ***, **, and * denote significance at 1, 5, and 10 percent, respectively. The p-values from the Sargan test of overidentifying restrictions confirm the validity of the instruments.

The negative and statistically significant sign on the coefficient of the Moderate variable for two of the three product categories implies that as the content of the review becomes more moderate or equivocal, the review is considered less helpful by users. This result is consistent with the findings of Forman et al. [5], who analyze a panel of book reviews and find a similar negative relationship between equivocal reviews and perceived helpfulness. Increased disclosure of self-descriptive information (Disclosure) typically leads to more helpful votes, as can be seen for audio-video players and digital cameras. We also find that for audio-video players and DVDs, a higher readability score (Readability) is associated with a higher percentage of helpful votes. As with sales, these results are robust to the use of other Readability metrics described in Table 1, such as ARI, Coleman-Liau index, Flesch Reading Ease, Flesch-Kincaid Grade Level, and the SMOG index. In contrast, an increase in the proportion of spelling errors (Spelling Errors) is associated with a lower percentage of helpful votes for both audio-video players and DVDs. For all three categories, we find that this result is robust to different specifications of normalizing the number of spelling errors, such as normalizing by the number of characters, words, or sentences in a given review. Finally, the past historical information about reviewers (Reviewer History Macro) has a statistically significant effect on the perceived helpfulness of reviews of digital cameras and DVDs, but interestingly, the directional impact is quite mixed across these two categories. In summary, our results provide support for Hypotheses 2a to 2d.

Note that the within R-squared values of our models range between 0.02 and 0.08 across the four product categories. This is because these R-squared values are for the "within" (differenced) fixed effect estimator that estimates this regression by differencing out the average values across product sellers. The R-squared reported is obtained by only fitting a mean-deviated model where the effects of the groups (all of the dummy variables for


the products) are assumed to be fixed quantities. So, all of the effects for the groups are simply subtracted out of the model, and no attempt is made to quantify their overall effect on the fit of the model. This means that the calculated "within" R-squared values do not take into account the explanatory power of the fixed effects. If we estimated the fixed effects instead of differencing them out, the measured R-squared would be much higher. However, this becomes computationally unattractive. This is consistent with prior work [5].11

Our econometric analyses imply that we can quickly estimate the helpfulness of a review by performing an automatic stylistic analysis in terms of subjectivity, readability, and linguistic correctness. Hence, we can immediately identify reviews that are likely to have a significant impact on sales and are expected to be helpful to the customers. Therefore, we can immediately rank these reviews higher and display them first to the customers. This is similar to the "spotlight review" feature of Amazon, which relies on the number of helpful votes posted for a review. However, a key limitation of this existing feature is that, because it relies on a sufficient number of people voting on reviews, it requires a long time to elapse before identifying a helpful review.

11. As before, we compared our work with the model used in [5], which examined the drivers of review helpfulness but without incorporating the impact of the review text variables and without incorporating the reviewer history variables. Our model increases R-squared by 5 percent for audio-video products, 8 percent for digital cameras, and 5 percent for DVDs.

5 PREDICTIVE MODELING

The explanatory study that we described above revealed what factors influence the helpfulness and impact of a review. In this section, we switch from explanatory modeling to predictive modeling. In other words, the main goal now is not to explain which factors affect helpfulness and impact, but to examine how well we can predict the helpfulness and economic impact of an unseen review, i.e., of a review that was not included in the data used to train the predictive model.
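As a preview of the predictive setup used in the remainder of this section, the following is a minimal sketch of training a Random Forest classifier on the review-, reviewer-, and text-level features described in Section 3; the feature and label column names are illustrative assumptions rather than the authors' actual code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("review_features.csv")  # placeholder file of per-review features

feature_cols = [
    "avg_prob", "dev_prob",                                           # subjectivity features
    "ari", "flesch_kincaid", "log_spelling_error_rate", "num_words",  # readability features
    "reviewer_history_macro", "any_disclosure",                       # reviewer features
]
X = df[feature_cols]
y = df["helpful_binary"]  # 1 if the helpfulness ratio is at least 0.6, as defined in Section 5.2

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = RandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
proba = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))
```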

5.1 Predicting Helpfulness

The Helpfulness of each review in our data set is defined by the votes of peer customers, who decide whether a review is helpful or not. In our predictive framework, we could use a regression model, as in Section 4, or use a classification approach and build a binary prediction model that classifies a review as helpful or not. We attempted both approaches and the results were similar. Since we have already described a regression framework in Section 4, for brevity we focus here on a binary prediction model. In the rest of the section, we first describe our methodology for converting the continuous helpfulness variable into a binary one; then we describe the results of our experiments using various machine learning approaches.

5.2 Converting Continuous Helpfulness to Binary

Converting the continuous variable Helpfulness into a binary one is, in principle, a straightforward process. Since Helpfulness ranges from 0 to 1, we can simply select a threshold τ and mark all reviews with helpfulness ≥ τ as helpful and the others as not helpful.

11. As before, we compared our work with the model used in [5], which examined the drivers of review helpfulness but without incorporating the review text variables and the reviewer history variables. Our model increases R-squared by 5 percent for audio-video products, 8 percent for digital cameras, and 5 percent for DVDs.


TABLE 7 Accuracy and Area under the ROC Curve for the Helpfulness Classifiers

Fig. 1. Picking a decision threshold that minimizes error rates for converting the continuous helpfulness variable into a binary one.

However, selecting the proper value for the threshold τ is slightly trickier. What is the "best" value of τ for separating the "helpful" from the "not helpful" reviews? Setting τ too high would mean that helpful reviews would be classified as not helpful, and setting τ too low would have the opposite effect. In order to select a good value for τ, we asked two human coders to perform a content analysis on a sample of 1,000 reviews. The reviews were randomly chosen for each category. The main aim was to analyze whether the review was informative. For this, we asked the coders to read each review and answer the question "Is the review informative or not?" The coders did not have access to the helpful and total votes that were cast for the review, but could see the star rating and the product that the review was referring to. We measured the inter-rater agreement across the two coders using the kappa statistic. The analysis showed substantial agreement, with κ = 0.739. Our next step was to identify the optimal threshold (in terms of percentage of helpful votes) that separates the reviews that humans consider helpful from the nonhelpful ones. We performed an ROC analysis, trying to balance the false positive rate and the false negative rate. Our analysis indicated that if we set the separation threshold at 0.6, then the error rates are minimized. In other words, if more than 60 percent of the votes indicate that the review is helpful, then we classify the review as "helpful"; otherwise, the review is classified as "not helpful." This decision achieves a good balance between false positive errors and false negative errors. Our analysis is presented in Fig. 1. On the x-axis, we have the decision threshold τ, which is the percentage of useful votes out of all the votes received by a given review. Each review is marked as "useful" or "not useful" by our coders, independently of the peer votes actually posted on Amazon.com. Based on the coders' classification, we compute: 1) the percentage of useful reviews that have an Amazon helpfulness rating below τ, and 2) the percentage of not-useful reviews that have an Amazon helpfulness rating above τ. (These values are essentially the error rates for the two classes if we set the decision threshold at τ.) Furthermore, by considering the "useful" class as the positive class, we compute the precision and recall metrics. We can see that if we set the separation threshold at 0.6, then the classification error rate is minimized. For this reason, we pick 0.6 as the threshold for separating the reviews into "useful" and "nonuseful."
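The threshold analysis of Fig. 1 can be sketched as follows; this is our illustration under hypothetical file and column names, not the authors' code: for each candidate threshold τ, compute the two error rates against the coders' labels and keep the τ that balances them.

```python
# Minimal sketch of picking a decision threshold tau that minimizes the
# combined error rates of the two classes, as in Fig. 1.
import numpy as np
import pandas as pd

# Hypothetical file: "amazon_ratio" = helpful votes / total votes,
# "coder_useful" = 0/1 label assigned by the human coders.
coded = pd.read_csv("coded_reviews.csv")

best_tau, best_err = None, float("inf")
for tau in np.arange(0.05, 1.0, 0.05):
    useful = coded["coder_useful"] == 1
    # Useful reviews that fall below tau, and non-useful reviews that fall above tau.
    fn_rate = (coded.loc[useful, "amazon_ratio"] < tau).mean()
    fp_rate = (coded.loc[~useful, "amazon_ratio"] >= tau).mean()
    if fn_rate + fp_rate < best_err:
        best_tau, best_err = tau, fn_rate + fp_rate

print(best_tau)   # in the paper, this analysis points to a threshold of 0.6
```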

5.3 Building the Predictive Model

Once we are able to separate the reviews into two classes, we can use any supervised learning technique to learn a model that classifies an unseen review as helpful or not. We experimented with Support Vector Machines [51] and Random Forests [52]. Support Vector Machines have been reported to work well in the past for the problem of predicting review helpfulness. However, in all our experiments, SVMs consistently performed worse than Random Forests, both for our techniques and for the existing baselines, such as the algorithm of Zhang and Varadarajan [23] that we used as a baseline for comparison. Furthermore, training time was significantly higher for SVMs than for Random Forests. This empirical finding is consistent with recent comparative experiments [53], [54] indicating that Random Forests are robust and perform better than SVMs for a variety of learning tasks. Therefore, in this experimental section, we report only the results that we obtained using Random Forests. In our experiments with Random Forests, we use 20 trees and we generate a different classifier for each product category. Our evaluation results are based on stratified 10-fold cross validation, and we use as evaluation metrics the classification accuracy and the area under the ROC curve.

5.3.1 Using all Available Features

In our first experiment, we used all the features that we had available to build the classifiers. The resulting performance of the classifiers was quite high, as seen in Table 7. One interesting result is the relatively lower predictive performance of the classifier that we constructed for the DVD data set. This can be explained by the nature of the goods: DVDs are experience goods, whose quality is difficult to estimate in advance but can be ascertained after consumption. In contrast, digital cameras and audio-video players are search goods, i.e., products with features and characteristics easily evaluated before purchase. Therefore, the notion of helpfulness is more subjective for experience goods, as what constitutes a helpful review for one customer is not necessarily helpful for another. This contrasts with search goods, for which a good review is one that allows customers to better evaluate, before the purchase, the quality of the underlying good. Going beyond the aggregate results, we examined what kinds of reviews have helpfulness scores that are most difficult to predict. Interestingly, we observed a high correlation of classification error with the distribution of the underlying review ratings. Reviews for products that have received widely fluctuating ratings also have reviews of widely fluctuating helpfulness.
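The training and evaluation protocol described in Section 5.3 (20 trees, stratified 10-fold cross validation, accuracy and AUC) can be sketched as follows; this is a minimal illustration assuming scikit-learn and hypothetical feature files, not the authors' original implementation.

```python
# Minimal sketch: train a Random Forest helpfulness classifier and evaluate it
# with stratified 10-fold cross-validation, reporting accuracy and AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.load("review_features.npy")   # hypothetical matrix of review/reviewer features
y = np.load("review_helpful.npy")    # hypothetical 0/1 labels from the 0.6 threshold

clf = RandomForestClassifier(n_estimators=20, random_state=0)  # 20 trees, as in the paper
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"accuracy = {acc.mean():.3f}, AUC = {auc.mean():.3f}")
```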


TABLE 8 Accuracy and Area under the ROC Curve for the Helpfulness Classifiers

TABLE 9 Accuracy and Area under the ROC Curve for the Sales Impact Classifiers

However, the different helpfulness scores do not necessarily correspond well with reviews of different "inherent" quality. Rather, in such cases, customers tend to vote not on the merits of the review per se, but rather to convey their approval or disapproval of the rating of the review. In such cases, an otherwise detailed and helpful review may receive a bad helpfulness score. This effect was more pronounced in the DVD category, but also appeared in the digital camera and audio-video categories. Even though this observation did not help us improve the predictive accuracy of our model, it is a good heuristic for estimating a priori the predictive accuracy of our models for reviews of such products.
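The diagnostic described above can be sketched as follows (our illustration, with hypothetical column names): compute, per product, the spread of its review ratings and the classifier's error rate, and correlate the two.

```python
# Minimal sketch: test whether per-product classification error tracks how
# widely the product's review ratings fluctuate.
import pandas as pd

# Hypothetical file with one row per review: product_id, star_rating,
# y_true (observed helpful/not helpful), y_pred (classifier prediction).
preds = pd.read_csv("helpfulness_predictions.csv")
preds["error"] = (preds["y_true"] != preds["y_pred"]).astype(int)

per_product = preds.groupby("product_id").agg(
    rating_std=("star_rating", "std"),   # spread of the product's review ratings
    error_rate=("error", "mean"),        # classifier error on that product's reviews
)
print(per_product["rating_std"].corr(per_product["error_rate"]))
```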

5.3.2 Examining the Predictive Power of Features

The next step was to examine the predictive power of the different features that we have generated. As can be seen from Table 1, we have three broad feature categories: 1) reviewer features, which include both reviewer history and reviewer characteristics, 2) review subjectivity features, and 3) review readability features. To examine their importance, we built classifiers using only subsets of the features. As a comparison, we also list the results that we obtained using the features of Zhang and Varadarajan [23], which we refer to as "Baseline." We evaluated each classifier in the same way as above, using stratified 10-fold cross validation, and reporting the accuracy and the area under the ROC curve. Table 8 contains the results of our experimental evaluation. The first result that we observed is that our techniques clearly outperformed the existing baseline from [23]: the increased predictive performance of our models was rather anticipated, given the difference in R-squared values in the explanatory regression models. The R-squared values for the regressions in [23] were around 0.3-0.4, while our explanatory econometric models achieved R-squared values in the 0.7-0.9 range. This difference in performance on the training set was also visible in the predictive performance of the models. Another interesting result is that using any one of the feature sets resulted in only a modest decrease in performance compared to the case of using all available features.

To explore this puzzling result further, we conducted an additional experiment: we examined whether we could predict the value of the features in one set using the features from the other two feature sets (e.g., predict review subjectivity using the reviewer-related and review readability features). We conducted the tests for all combinations. Surprisingly, the results indicated that the three feature sets are interchangeable. In other words, the information in the review readability and review subjectivity sets is enough to predict the value of the variables in the reviewer set, and vice versa. Reviewers who have historically generated helpful reviews tend to post reviews of specific readability levels, and with specific subjectivity mixtures in the reviews. Even though this may seem counterintuitive at first, it simply indicates that there is correlation between these variables, not causation. Identifying causality is rather difficult and is beyond the scope of this paper. What is of interest in our case is that the three feature sets are roughly equivalent in terms of predictive power.
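One way to run the interchangeability check described above, sketched here under hypothetical feature names rather than as the authors' exact procedure, is to predict a reviewer-history variable from the other two feature sets and inspect the cross-validated R-squared.

```python
# Minimal sketch: predict a variable from one feature set (reviewer history)
# using only features from the other two sets (readability + subjectivity).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("review_features.csv")              # hypothetical per-review feature table
reviewer_target = "past_helpfulness"                  # hypothetical reviewer-history variable
other_features = ["readability", "spelling_errors",   # hypothetical readability features
                  "avg_subjectivity", "dev_subjectivity"]  # hypothetical subjectivity features

reg = RandomForestRegressor(n_estimators=20, random_state=0)
r2 = cross_val_score(reg, df[other_features], df[reviewer_target], cv=10, scoring="r2")
print(r2.mean())   # a high R-squared indicates the feature sets carry overlapping information
```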

5.4 Predicting Impact on Sales

Our analysis so far indicated that we can successfully predict whether a review is going to be rated as helpful by peer customers. The next task that we wanted to examine was whether we can predict the impact of a review on sales. Specifically, we examine whether the review characteristics can be used to predict whether the (comparative) sales of a product will go up or down after a review is published. So, we examine whether the difference $SalesRank_{t(r)+T} - SalesRank_{t(r)}$, where t(r) is the time the review is posted, is positive or negative. Since the effect of a review is not immediate, we examine variants of the problem for T = 1, T = 3, T = 7, and T = 14 (in days). By having different time intervals, we wanted to examine how far into the future we can extend our prediction and still get reasonable results. As we can see from the results in Table 9, the prediction accuracy is high, demonstrating that we can predict the direction of sales given the review information. While it is hard, at this point, to claim causality (it is unclear whether the reviews influence sales, or whether the reviews are just a manifestation of the underlying sales trend), it is definitely possible to show a strong correlation between the two.


We also observed that the predictive power increases slightly as T increases, indicating that the influence of a review is not immediate. We also performed experiments with subsets of features, as in the case of helpfulness. The results were very similar: training models with subsets of the features results in similar predictive power. Given the helpfulness results discussed above, this should not be a surprise.
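For concreteness, a minimal sketch of how the sales-direction label defined in Section 5.4 can be constructed is shown below; the file and column names are hypothetical, and this is our illustration rather than the authors' pipeline.

```python
# Minimal sketch: build the sign of SalesRank_{t(r)+T} - SalesRank_{t(r)} for
# a horizon T (in days); a classifier then predicts this label from review features.
import pandas as pd

reviews = pd.read_csv("reviews.csv", parse_dates=["date"])     # hypothetical: product_id, date, features
ranks = pd.read_csv("sales_ranks.csv", parse_dates=["date"])   # hypothetical: product_id, date, sales_rank

def rank_on(product_id, date):
    """Sales rank of a product on a given day (assumes one observation per day)."""
    row = ranks[(ranks["product_id"] == product_id) & (ranks["date"] == date)]
    return row["sales_rank"].iloc[0] if len(row) else None

T = 7  # horizon in days; the paper also uses T = 1, 3, and 14
labels = []
for _, r in reviews.iterrows():
    before = rank_on(r["product_id"], r["date"])
    after = rank_on(r["product_id"], r["date"] + pd.Timedelta(days=T))
    if before is not None and after is not None:
        labels.append(1 if after < before else 0)  # a lower rank number means higher sales

print(f"{sum(labels)} of {len(labels)} reviews are followed by a sales-rank improvement")
```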

6 CONCLUSIONS AND FUTURE WORK

In this paper, we build on our previous work [20], [21] by expanding our data to include multiple product categories and multiple textual features, such as different readability metrics, information about the reviewer history, different features of reviewer disclosure, and so on. The present paper is unique in looking at how subjectivity levels, readability, and spelling errors in the text of reviews affect product sales and the perceived helpfulness of these reviews. Our key results from both the econometric regressions and the predictive models can be summarized as follows:

. Based on Hypothesis 1a, we find that an increase in the average subjectivity of reviews is associated with an increase in sales for products. Further, a decrease in the deviation of the probability of subjective comments is associated with an increase in product sales. This means that reviews that have a mixture of objective and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information.

. Based on Hypothesis 1b, we find that for some products, like digital cameras, reviews that have higher Readability scores are associated with higher sales.

. Based on Hypothesis 1c, we find that an increase in the proportion of spelling mistakes in the content of reviews decreases product sales for some experience products, like DVDs, whose quality can be assessed only after purchase. However, for search products such as audio-video players and digital cameras, the proportion of spelling errors in reviews does not have a statistically significant impact on sales. Further, reviews that rate products negatively can be associated with increased product sales when the review text is informative and detailed. This is likely to occur when the reviewer clearly outlines the pros and cons of the product, thereby providing sufficient information for the consumer to make a purchase.

. Based on Hypothesis 2a, we find that, in general, reviews that include a mixture of subjective and objective elements are considered more informative (or helpful) by users. In terms of subjectivity and its effect on helpfulness, we observe that for feature-based goods, such as electronics, users prefer reviews that contain mainly objective information with only a few subjective sentences, and rate those higher. In other words, users prefer reviews that mainly confirm the validity of the product description, giving a small number of comments (not giving comments decreases the usefulness of the review). For experience goods, such as DVDs, the marginally significant coefficient on subjectivity suggests that while users do prefer to see a brief description of the "objective" elements of the good (e.g., the plot), they also expect to see a personalized, highly sentimental positioning, describing aspects of the movie that are not captured by the product description provided by the producers.


. Based on Hypotheses 2b through 2d, we find that an increase in the readability of reviews has a positive and statistically significant impact on review helpfulness, while an increase in the proportion of spelling errors has a negative and statistically significant impact on review helpfulness for audio-video products and DVDs. While the past historical information about reviewers has a statistically significant effect on the perceived helpfulness of reviews, interestingly enough, the directional impact is quite mixed across product categories.

. Using Random Forest classifiers, we find that for experience goods like DVDs, the classifiers have lower performance when predicting the helpfulness of reviews than for search goods like electronic products. Furthermore, we observe a high correlation of classification error with the distribution of the underlying review ratings. Reviews for products that have received widely fluctuating ratings also have reviews with widely fluctuating helpfulness votes. In particular, we found evidence that highly detailed and readable reviews can receive low helpfulness votes when users vote negatively not because they disapprove of the review quality (extent of helpfulness) but rather to convey their disapproval of the rating provided by the reviewer for that review.

. Finally, we examined the relative importance of the three broad feature categories: "reviewer-related" features, "review subjectivity" features, and "review readability" features. We found that using any of the three feature sets resulted in a statistically equivalent performance as in the case of using all available features. Further, we find that the three feature sets are interchangeable. In other words, the information in the "readability" and "subjectivity" sets is sufficient to predict the value of variables in the "reviewer" set, and vice versa. Experiments with classifiers for predicting sales yield similar results in terms of the interchangeability of the three broad feature sets.

Based on our findings, we can quickly identify reviews that are expected to be helpful to users and display them first, significantly improving the usefulness of the reviewing mechanism for users of the electronic marketplace. While we have taken a first step toward examining the economic value of textual content in word-of-mouth forums, we acknowledge that our approach has several limitations, many of which are borne by the nature of the data itself. Some of the variables in our data are proxies for the actual measures that one would need for more advanced empirical modeling. For example, we use sales rank as a proxy for demand, in accordance with prior work. Future work can look at real demand data. Our sample is also restricted in that our analysis focuses on the sales at one e-commerce retailer.


The actual magnitude of the impact of textual information on sales may be different for a different retailer. Additional work in other online contexts will be needed to evaluate whether review text information has explanatory power similar to what we have obtained here. There are many other interesting directions to follow when analyzing online reviews. For example, in our work, we analyzed each review independently of the other existing reviews. A recent stream of research indicates that the helpfulness of a review is also a function of the other submitted reviews [55] and that temporal dynamics can play a role in the perceived helpfulness of a review [25], [56], [57] (e.g., early reviews, everything else being equal, get higher helpfulness scores). Furthermore, the helpfulness of a review may be influenced by the way that reviews are presented to different types of users [58] and by the context in which a user evaluates a given review [59]. Overall, we consider this work a significant first step in understanding the factors that affect the perceived quality and economic impact of reviews, and we believe that there are many interesting problems that remain to be addressed in this area.


ACKNOWLEDGMENTS

The authors would like to thank Rong Zheng, Ashley Tyrrel, and Leslie Culpepper for assistance in data collection. They want to thank the participants in WITS 2006, SCECR 2007, and ICEC 2007 and the reviewers for very helpful comments. This work was partially supported by NET Institute (http://www.NETinst.org), a Microsoft Live Labs Search Award, a Microsoft Virtual Earth Award, New York University Research Challenge Fund grants N-6011 and R-8637, and by US National Science Foundation (NSF) grants IIS-0643847, IIS-0643846, and DUE-0911982. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the Microsoft Corporation or of the US National Science Foundation (NSF).

REFERENCES

[1] N. Hu, P.A. Pavlou, and J. Zhang, "Can Online Reviews Reveal a Product's True Quality? Empirical Findings and Analytical Modeling of Online Word-of-Mouth Communication," Proc. Seventh ACM Conf. Electronic Commerce (EC '06), pp. 324-330, 2006.
[2] C. Dellarocas, N.F. Awad, and X.M. Zhang, "Exploring the Value of Online Product Ratings in Revenue Forecasting: The Case of Motion Pictures," Working Paper, Robert H. Smith School Research Paper, 2007.
[3] J.A. Chevalier and D. Mayzlin, "The Effect of Word of Mouth on Sales: Online Book Reviews," J. Marketing Research, vol. 43, no. 3, pp. 345-354, Aug. 2006.
[4] D. Reinstein and C.M. Snyder, "The Influence of Expert Reviews on Consumer Demand for Experience Goods: A Case Study of Movie Critics," J. Industrial Economics, vol. 53, no. 1, pp. 27-51, Mar. 2005.
[5] C. Forman, A. Ghose, and B. Wiesenfeld, "Examining the Relationship between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets," Information Systems Research, vol. 19, no. 3, pp. 291-313, Sept. 2008.
[6] Y. Liu, "Word of Mouth for Movies: Its Dynamics and Impact on Box Office Revenue," J. Marketing, vol. 70, no. 3, pp. 74-89, July 2006.
[7] W. Duan, B. Gu, and A.B. Whinston, "The Dynamics of Online Word-of-Mouth and Product Sales: An Empirical Investigation of the Movie Industry," J. Retailing, vol. 84, no. 2, pp. 233-242, 2008.
[8] R.G. Hass, "Effects of Source Characteristics on Cognitive Responses and Persuasion," Cognitive Responses in Persuasion, R.E. Petty, T.M. Ostrom, and T.C. Brock, eds., pp. 1-18, Lawrence Erlbaum Assoc., 1981.
[9] S. Chaiken, "Heuristic versus Systematic Information Processing and the Use of Source versus Message Cues in Persuasion," J. Personality and Social Psychology, vol. 39, no. 5, pp. 752-766, 1980.
[10] S. Chaiken, "The Heuristic Model of Persuasion," Proc. Social Influence: The Ontario Symp., M.P. Zanna, J.M. Olson, and C.P. Herman, eds., vol. 5, pp. 3-39, 1987.
[11] J.J. Brown and P.H. Reingen, "Social Ties and Word-of-Mouth Referral Behavior," J. Consumer Research, vol. 14, no. 3, pp. 350-362, Dec. 1987.
[12] R. Spears and M. Lea, "Social Influence and the Influence of the 'Social' in Computer-Mediated Communication," Contexts of Computer-Mediated Communication, M. Lea, ed., pp. 30-65, Harvester Wheatsheaf, June 1992.
[13] S.L. Jarvenpaa and D.E. Leidner, "Communication and Trust in Global Virtual Teams," J. Interactive Marketing, vol. 10, no. 6, pp. 791-815, Nov./Dec. 1999.
[14] K.Y.A. McKenna and J.A. Bargh, "Causes and Consequences of Social Interaction on the Internet: A Conceptual Framework," Media Psychology, vol. 1, no. 3, pp. 249-269, Sept. 1999.
[15] U.M. Dholakia, R.P. Bagozzi, and L.K. Pearo, "A Social Influence Model of Consumer Participation in Network- and Small-Group-Based Virtual Communities," Int'l J. Research in Marketing, vol. 21, no. 3, pp. 241-263, Sept. 2004.
[16] T. Hennig-Thurau, K.P. Gwinner, G. Walsh, and D.D. Gremler, "Electronic Word-of-Mouth via Consumer-Opinion Platforms: What Motivates Consumers to Articulate Themselves on the Internet?," J. Interactive Marketing, vol. 18, no. 1, pp. 38-52, 2004.
[17] M. Ma and R. Agarwal, "Through a Glass Darkly: Information Technology Design, Identity Verification, and Knowledge Contribution in Online Communities," Information Systems Research, vol. 18, no. 1, pp. 42-67, Mar. 2007.
[18] A. Ghose, P.G. Ipeirotis, and A. Sundararajan, "Opinion Mining Using Econometrics: A Case Study on Reputation Systems," Proc. 44th Ann. Meeting of the Assoc. for Computational Linguistics (ACL '07), pp. 416-423, 2007.
[19] N. Archak, A. Ghose, and P.G. Ipeirotis, "Show Me the Money! Deriving the Pricing Power of Product Features by Mining Consumer Reviews," Proc. 12th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD '07), pp. 56-65, 2007.
[20] A. Ghose and P.G. Ipeirotis, "Designing Ranking Systems for Consumer Reviews: The Impact of Review Subjectivity on Product Sales and Review Quality," Proc. Workshop Information Technology and Systems, 2006.
[21] A. Ghose and P.G. Ipeirotis, "Designing Novel Review Ranking Systems: Predicting the Usefulness and Impact of Reviews," Proc. Ninth Int'l Conf. Electronic Commerce (ICEC '07), pp. 303-310, 2007.
[22] A. Ghose and P.G. Ipeirotis, "Estimating the Socio-Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics," Center for Digital Economy Research, Technical Report CeDER-08-06, New York Univ., Sept. 2008.
[23] Z. Zhang and B. Varadarajan, "Utility Scoring of Product Reviews," Proc. ACM Int'l Conf. Information and Knowledge Management (CIKM '06), pp. 51-57, 2006.
[24] S.-M. Kim, P. Pantel, T. Chklovski, and M. Pennacchiotti, "Automatically Assessing Review Helpfulness," Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP '06), pp. 423-430, 2006.
[25] J. Liu, Y. Cao, C.-Y. Lin, Y. Huang, and M. Zhou, "Low-Quality Product Review Detection in Opinion Summarization," Proc. Joint Conf. Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 334-342, 2007.
[26] J. Otterbacher, "Helpfulness in Online Communities: A Measure of Message Quality," Proc. 27th Int'l Conf. Human Factors in Computing Systems (CHI '09), pp. 955-964, 2009.
[27] Y. Liu, X. Huang, A. An, and X. Yu, "Modeling and Predicting the Helpfulness of Online Reviews," Proc. Eighth IEEE Int'l Conf. Data Mining (ICDM '08), pp. 443-452, 2008.
[28] M. Weimer, I. Gurevych, and M. Mühlhäuser, "Automatically Assessing the Post Quality in Online Discussions on Software," Proc. 44th Ann. Meeting of the Assoc. for Computational Linguistics (ACL '07), pp. 125-128, 2007.


[29] M. Weimer and I. Gurevych, "Predicting the Perceived Quality of Web Forum Posts," Proc. Conf. Recent Advances in Natural Language Processing (RANLP '07), 2007.
[30] J. Jeon, W.B. Croft, J.H. Lee, and S. Park, "A Framework to Predict the Quality of Answers with Non-Textual Features," Proc. 29th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR '06), pp. 228-235, 2006.
[31] L. Hoang, J.-T. Lee, Y.-I. Song, and H.-C. Rim, "A Model for Evaluating the Quality of User-Created Documents," Proc. Fourth Asia Information Retrieval Symp. (AIRS '08), pp. 496-501, 2008.
[32] Y.Y. Hao, Y.J. Li, and P. Zou, "Why Some Online Product Reviews Have No Usefulness Rating?," Proc. Pacific Asia Conf. Information Systems (PACIS '09), 2009.
[33] O. Tsur and A. Rappoport, "Revrank: A Fully Unsupervised Algorithm for Selecting the Most Helpful Book Reviews," Proc. Third Int'l AAAI Conf. Weblogs and Social Media (ICWSM '09), 2009.
[34] A. Ghose and A. Sundararajan, "Evaluating Pricing Strategy Using E-Commerce Data: Evidence and Estimation Challenges," Statistical Science, vol. 21, no. 2, pp. 131-142, May 2006.
[35] V. Hatzivassiloglou and K.R. McKeown, "Predicting the Semantic Orientation of Adjectives," Proc. 38th Ann. Meeting of the Assoc. for Computational Linguistics (ACL '97), pp. 174-181, 1997.
[36] M. Hu and B. Liu, "Mining and Summarizing Customer Reviews," Proc. 10th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD '04), pp. 168-177, 2004.
[37] S.-M. Kim and E. Hovy, "Determining the Sentiment of Opinions," Proc. 20th Int'l Conf. Computational Linguistics (COLING '04), pp. 1367-1373, 2004.
[38] P.D. Turney, "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews," Proc. 40th Ann. Meeting of the Assoc. for Computational Linguistics (ACL '02), pp. 417-424, 2002.
[39] B. Pang and L. Lee, "Thumbs Up? Sentiment Classification Using Machine Learning Techniques," Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP '02), 2002.
[40] K. Dave, S. Lawrence, and D.M. Pennock, "Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews," Proc. 12th Int'l Conf. World Wide Web (WWW '03), pp. 519-528, 2003.
[41] B. Pang and L. Lee, "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts," Proc. 42nd Ann. Meeting of the Assoc. for Computational Linguistics (ACL '04), pp. 271-278, 2004.
[42] H. Cui, V. Mittal, and M. Datar, "Comparative Experiments on Sentiment Classification for Online Product Reviews," Proc. 21st Nat'l Conf. Artificial Intelligence (AAAI '06), 2006.
[43] B. Pang and L. Lee, "Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales," Proc. 43rd Ann. Meeting of the Assoc. for Computational Linguistics (ACL '05), 2005.
[44] T. Wilson, J. Wiebe, and R. Hwa, "Recognizing Strong and Weak Opinion Clauses," Computational Intelligence, vol. 22, no. 2, pp. 73-99, May 2006.
[45] K. Nigam and M. Hurst, "Towards a Robust Metric of Opinion," Proc. AAAI Spring Symp. Exploring Attitude and Affect in Text, pp. 598-603, 2004.
[46] B. Snyder and R. Barzilay, "Multiple Aspect Ranking Using the Good Grief Algorithm," Proc. Human Language Technology Conf. North Am. Chapter of the Assoc. of Computational Linguistics (HLT-NAACL '07), 2007.
[47] S. White, "The 2003 National Assessment of Adult Literacy (NAAL)," Center for Education Statistics (NCES), Technical Report NCES 2003495rev, US Dept. of Education, Inst. of Education Sciences, http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2003495rev, Mar. 2003.
[48] W.H. DuBay, The Principles of Readability, Impact Information, http://www.nald.ca/library/research/readab/readab.pdf, 2004.
[49] J.A. Chevalier and A. Goolsbee, "Measuring Prices and Price Competition Online: Amazon.com and BarnesandNoble.com," Quantitative Marketing and Economics, vol. 1, no. 2, pp. 203-222, 2003.
[50] J.M. Wooldridge, Econometric Analysis of Cross Section and Panel Data. The MIT Press, 2001.
[51] C.J.C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, June 1998.


[52] L. Breiman, "Random Forests," Machine Learning, vol. 45, no. 1, pp. 5-32, Oct. 2001.
[53] R. Caruana and A. Niculescu-Mizil, "An Empirical Comparison of Supervised Learning Algorithms," Proc. 23rd Int'l Conf. Machine Learning (ICML '06), pp. 161-168, 2006.
[54] R. Caruana, N. Karampatziakis, and A. Yessenalina, "An Empirical Evaluation of Supervised Learning in High Dimensions," Proc. 25th Int'l Conf. Machine Learning (ICML '08), 2008.
[55] C. Danescu-Niculescu-Mizil, G. Kossinets, J. Kleinberg, and L. Lee, "How Opinions Are Received by Online Communities: A Case Study on Amazon.com Helpfulness Votes," Proc. 18th Int'l Conf. World Wide Web (WWW '09), pp. 141-150, 2009.
[56] W. Shen, "Essays on Online Reviews: The Strategic Behaviors of Online Reviewers to Compete for Attention, and the Temporal Pattern of Online Reviews," PhD proposal, Krannert Graduate School of Management, Purdue Univ., 2008.
[57] Q. Miao, Q. Li, and R. Dai, "Amazing: A Sentiment Mining and Retrieval System," Expert Systems with Applications, vol. 36, no. 3, pp. 7192-7198, 2009.
[58] P. Victor, C. Cornelis, M. De Cock, and A. Teredesai, "Trust- and Distrust-Based Recommendations for Controversial Reviews," Proc. WebSci '09: Soc. On-Line, 2009.
[59] S.A. Yahia, A.Z. Broder, and A. Galland, "Reviewing the Reviewers: Characterizing Biases and Competencies Using Socially Meaningful Attributes," Proc. Assoc. for the Advancement of Artificial Intelligence (AAAI) Spring Symp., 2008.

Anindya Ghose is an associate professor of Information, Operations, and Management Sciences and Robert L. and Dale Atkins Rosen Faculty Fellow at New York University's Leonard N. Stern School of Business. He is an expert in building econometric models to quantify the economic value from user-generated content in social media; estimating the impact of search engine advertising; modeling consumer behavior in mobile media and mobile Internet; and measuring the welfare impact of the Internet. He has worked on product reviews, reputation and rating systems, sponsored search advertising, mobile Internet, mobile social networks, and online used-good markets. His research has received best paper awards and nominations at leading journals and conferences such as ICIS, WITS, and ISR. In 2007, he received the prestigious US National Science Foundation (NSF) CAREER Award. He is also a winner of an ACM SIGMIS Doctoral Dissertation Award, a Microsoft Live Labs Award, a Microsoft Virtual Earth Award, a Marketing Science Institute grant, several Wharton Interactive Media Initiative (WIMI) Awards, a US National Science Foundation (NSF) SFS Award, a US National Science Foundation (NSF) IGERT Award, and a Google-WPP Marketing Research Award. He serves as an associate editor of Management Science and Information Systems Research.

Panagiotis G. Ipeirotis received the PhD degree in computer science from Columbia University in 2004, with distinction. He is an associate professor and George A. Kellner Faculty Fellow at the Department of Information, Operations, and Management Sciences at the Leonard N. Stern School of Business of New York University. His recent research interests focus on crowdsourcing and on mining user-generated content on the Internet. He has received two "Best Paper" Awards (IEEE ICDE 2005, ACM SIGMOD 2006), two "Best Paper Runner Up" Awards (JCDL 2002, ACM KDD 2008), and is also a recipient of a CAREER Award from the US National Science Foundation (NSF). He is a member of the IEEE.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/publications/dlib.
