Influencing the Influencers, Marketing Science, 41(3):593–615 (2022)
Abstract: Social media influencers are category enthusiasts who often post product recommendations. Firms sometimes pay influencers to skew their product reviews in the firm’s favor. We ask two research questions. First, what is the optimal level of affiliation (if any) from the firm’s perspective? Affiliation introduces positive bias into the influencer’s review but also decreases the review’s persuasiveness. Second, since affiliated reviews are often biased in favor of the firm, what is the impact of affiliation on consumer welfare? We find that the affiliation decision depends on the cost of information acquisition, the consumer’s prior belief and awareness, and the disclosure regime. When the consumer’s prior belief is low, the firm affiliates less closely, or not at all, in order to preserve the influencer’s persuasiveness, that is, the change in the consumer’s belief following the influencer’s review. In contrast, when the consumer’s prior belief is high, the firm fully affiliates with the influencer to both maximize awareness and prevent a negative review. We also show that the firm’s involvement can be Pareto-improving when the information acquisition cost is relatively high, and that a partial disclosure rule may increase consumer welfare.
Can Curation Algorithms Amplify the Effect of Trolls? (with Dina Mayzlin)
Abstract: Social media platforms use curation algorithms to automate the selection of a personalized subset of posts for each user. We study the effect of “trolls,” interested parties with a hidden agenda, on the effectiveness of curation algorithms. The direct effect occurs when users are exposed to the trolls’ content. However, curation enables trolls to affect users even when they are not exposed to such content by allowing trolls to upvote friends’ content in a way that is consistent with their agenda. We show that if the platform has perfect information about the user’s preferences, curation increases the user’s welfare even in the presence of trolls. However, if the platform does not have perfect information on the user, curation may inadvertently amplify the effect of trolls, making the user worse off. Our results suggest that successful curation in the presence of trolls requires the platform to acquire precise information about the user’s preferences. We also discuss alternative algorithms that do not amplify the effect of trolls and suggest interventions for reducing the effect of trolls under curation.
Work in progress
The Effect of Firm-Provided Information on Consumer Word of Mouth
Abstract: Firms often seek to strategically manage and respond to consumer reviews, since reviews are an important source of information that helps consumers make purchase decisions. However, review information does not always resolve all uncertainty about a product. Does providing official answers to consumers’ questions (official Q&A) reduce product-fit uncertainty and increase positive word of mouth? On the one hand, official Q&A may increase the valence of subsequent reviews by improving the match between products and buyers. On the other hand, it may have the unintended effect of lowering the incentive to write reviews, leading to a lower volume of reviews. In addition, buyers who experience a poor match in the presence of official Q&A may be more likely to leave reviews than those who experience a good match: since the information provided through official Q&A is mostly positive or neutral, positive reviews provide less additional information than negative reviews. I empirically examine the effect of official Q&A on consumer word of mouth by comparing differences in review valence and volume for a given product between two comparable cosmetics retail sites, Sephora (reviews only) and Ulta (reviews and official Q&A), and across time.
Optimal Topic Selection on Social Media in the Presence of Trolls
Abstract: Social media contains a large volume of information on many different topics. However, users are likely to see correlated information that offers little new insight, since the same posts on popular topics are often cross-shared by many users. Moreover, curation algorithms that select popular topics among a user’s close connections may exacerbate these filter bubbles. However, selecting topics ideologically distant from the user may expose her to content promoted by trolls. For example, a curation algorithm may increase the diversity of a left-wing user’s content consumption at the expense of exposing her to right-wing misinformation sent by trolls. How should platforms select topics and posts given this trade-off between content diversity and trolls’ influence? I hypothesize that the platform’s optimal topic selection depends on the infiltration level of trolls, the platform’s knowledge of the user’s beliefs, and the structure of the user’s network. I characterize the effect analytically on a simple network and use simulations to show how the algorithm performs on more complex networks.