A REVIEW OF HYPOTHETICAL BIAS IN ENVIRONMENTAL ECONOMICS
ABSTRACT
Environmental economics is an applied subdiscipline of economics, and a major portion of the subject deals with the valuation of goods and services. Hypothetical bias arises as a byproduct when one tries to conduct a valuation exercise using stated preference techniques. The following essay summarizes the challenges researchers encounter when dealing with stated preference techniques and sheds light on the bias mitigation strategies that are currently used and continue to evolve to account for the bias.
1. Introduction
Surveys are a major catalyst for empirical research: they help draw out respondents' viewpoints about themselves and the world in general. However, considerable uncertainty remains when it comes to analyzing and interpreting the responses participants give. Environmental economics is an applied economics discipline, and researchers often rely on revealed preference techniques to value environmental dimensions. When it comes to public good valuation, however, behavioural trends can often not be observed, so revealed preference techniques cannot be employed. In such scenarios, economists resort to stated preference approaches, which economists and policymakers use to conduct non-market valuation exercises.
Here, both the good in question and the payment mechanism are hypothetical, so researchers are often faced with the issue of hypothetical bias, a common problem encountered when dealing with stated preference approaches. As per Mjelde et al. (2012), hypothetical bias is the difference between reported willingness to pay (WTP) values and the "revealed" WTP. The main reason it arises is that respondents consider the survey inconsequential, and this presumption of inconsequentiality makes people compromise on their honesty. As Levitt and List (2005) assert, dishonest responses and behaviour result in hypothetical bias. This practice of acting differently from one's stated intentions is an area of interest studied in multiple disciplines and is not unique to environmental economics. For example, LaPiere's (1934) research on racial prejudice is one of the first empirical studies to show the disparity between actual and stated intentions.
It becomes imperative to account for hypothetical bias when conducting research, as incorrect estimates can lead to discrepancies in the policy-making process (Buckell et al. 2020).
2. Hypothetical Bias in Stated Preference Techniques
Stated preference techniques are a set of methodologies that come in handy when one wishes to understand or determine the preferences people have for various products. This toolbox is applied across allied fields such as applied economics, psychology, and marketing (Murphy et al. 2005). In economics, these techniques are widely used for empirical research in environmental economics.
When the amount people say they are willing to pay in a survey does not match the amount they would pay in an experiment that actually involved their own money, the resulting discrepancy is known as hypothetical bias (Loomis 2011).
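As a rough illustration (the notation below is ours, not taken from any of the cited studies), the bias can be expressed either as a difference or as a ratio between stated and actual values:

\[
\text{Hypothetical bias} \;=\; WTP_{\text{stated}} - WTP_{\text{actual}},
\qquad
\text{bias ratio} \;=\; \frac{WTP_{\text{stated}}}{WTP_{\text{actual}}}.
\]

A positive difference (equivalently, a ratio greater than one) indicates overstatement in the hypothetical setting; the ratio form is in the spirit of the bias ratio examined by Mjelde et al. (2012), although their exact construction may differ.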
2.1 Hypothetical Bias in Contingent Valuation
Hypothetical bias manifests itself in contingent valuation when respondents state a willingness to pay for a commodity or service that is higher than what they would actually pay. When such overstatement occurs, it puts the accuracy of the study's results in question.
List & Gallet (2001) ( as cited in Champ 2009) have argued that the prevalence of hypothetical bias is less when the good in question is a private good vis-a-vis when it's a public good. If respondents are unable to perceive the attributes of the good as described by the researcher, hypothetical bias is more likely to arise when using contingent valuation (Whittington et al. 1990).
2.2 Hypothetical Bias in Choice Experiment
Although hypothetical bias is less prevalent in choice modelling (Hoyos 2010; Mjelde et al. 2012), it is not completely absent from the model. To overcome this challenge, researchers have resorted to a variation of the choice experiment known as the real choice experiment (RCE). Most studies have found a significant difference between the results of hypothetical and real choice experiments (Ding et al. 2005; Alfnes et al. 2006; Lusk et al. 2008; Chang et al. 2009; Loomis et al. 2009); only Carlsson and Martinsson (2001) do not conclude that the two differ. However, a couple of bottlenecks may arise when one wishes to resort to an RCE. As argued by Magistris et al. (2012), conducting RCEs can be tedious and time-consuming. Additionally, it requires that the researchers be familiar with all the profile sets of the product shown to survey participants, and some product variants may not yet be available and so cannot be tested.
3. Causes of Hypothetical Bias
Literature on the origin and causes of hypothetical bias has evolved, and much of the recent work points to personal attributes exhibited by individuals. Mjelde et al. (2012) have shown that characteristics like age and the level of educational attainment had mitigating effects on the impact of hypothetical bias in their study. Individual personality traits are also said to influence decision-making strategy in the context of bidding behaviour. While it is commonly held that respondents overstate their WTP because they do not take the survey seriously, a lack of incentives also means that people do not put in enough cognitive effort to deliver true responses. Harrison (2006) supports this, asserting that consumers lack the motivation to reveal their true willingness to pay, which contributes to the bias as well. This happens because the goals of the researcher are not aligned with the respondents' incentives to give an honest answer. Mitani and Flores (2010) assert that one plausible reason for hypothetical bias is that in many surveys respondents are not informed about the actual likelihood of payment for the provision of the good or service, or about the rule governing the decision to provide it. This makes them uncertain about the good or service in question and therefore causes hypothetical bias.
Dohmen et al. (2011), as cited in Weisser (2016), argue that a person's actual risk behaviour is only somewhat connected to their hypothetical risk behaviour. Uncertainty regarding the good in question often manifests itself in the form of hypothetical bias. Weisser (2016) also suggests that if respondents are asked questions outside the realm of their daily lives, this can lead to biased statements and therefore biased results.
4. Mitigating Factors
There is no single clear-cut strategy for reducing hypothetical bias; it has to be dealt with on a case-by-case basis. Both ex-ante and ex-post strategies can be used to reduce potential bias. Initially, only a handful of techniques were used, but researchers have discovered various limitations of them over time. Ex-ante techniques include methods like cheap talk, honesty priming, inferred valuation, incentive compatibility, and pivot designs. As for ex-post techniques, the certainty calibration approach and the revealed preference calibration approach can be employed as mitigating methods.
Some of these mitigating strategies are discussed below.
4.1 Cheap Talk
This strategy was first employed by Cummings and Taylor (1999), as cited in Champ et al. (2009). They prepared a script and asked the participants of the experiment to respond the way they would if they were actually making a financial decision, and the results indicated that a CV treatment implemented with a cheap talk script produced values remarkably close to the actual treatment values. Since then, cheap talk scripts have been widely used as an experimental instrument to reduce the effects of hypothetical bias (List 2001; Brown, Ajzen and Hrubes 2003). This is not to say, however, that cheap talk completely eliminates hypothetical bias from these experiments.
The paper by Murphy et al. (2005) showed that using a cheap talk script mitigated the effect of hypothetical bias but did not completely remove it. Blumenschein et al. (2008), however, showed that using a cheap talk script did not reduce the bias at all; the literature on the tool is thus contradictory.
4.2 Certainty Scale
Certainty scale calibration is an ex-post method used in dichotomous choice modelling. A question that seeks to assess the respondents' level of certainty is asked after the WTP question. There are two ways to structure these follow-up questions: one takes the form of "definitely sure/probably sure" questions, while the other resembles a Likert scale, where a score of 10 denotes "extremely certain" and a score of 1 denotes "very doubtful" (Alfnes et al. 2010). There is evidence suggesting that an uncalibrated certainty scale may not help reduce the bias, so it is important that the scale be calibrated. In the study by Champ et al. (1997), only responses rated 10 on the certainty scale were counted as positive responses; other studies have taken ratings of 7 or 8 as the cutoff threshold to achieve the same reduction or removal of hypothetical bias (Ethier et al. 2000; Poe et al. 2002; Champ & Bishop 2001).
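As an illustration of how such an ex-post recoding might look in practice, the sketch below keeps a dichotomous-choice "yes" only when the follow-up certainty rating meets a chosen cutoff; the data, variable names, and the cutoff of 8 are hypothetical and not taken from any of the cited studies.

```python
# Illustrative ex-post certainty-scale calibration (hypothetical data and cutoff).
# A "yes" to the WTP question is kept only if the 1-10 certainty rating
# reaches the chosen threshold; otherwise it is recoded as "no".

responses = [
    {"said_yes": True,  "certainty": 9},   # confident yes  -> kept as yes
    {"said_yes": True,  "certainty": 5},   # unsure yes     -> recoded to no
    {"said_yes": False, "certainty": 7},   # a no stays a no regardless of certainty
    {"said_yes": True,  "certainty": 10},
]

def calibrated_yes_share(responses, cutoff=8):
    """Share of 'yes' responses after recoding uncertain yeses as noes."""
    kept = [r["said_yes"] and r["certainty"] >= cutoff for r in responses]
    return sum(kept) / len(kept)

raw_share = sum(r["said_yes"] for r in responses) / len(responses)
print(f"Raw 'yes' share:        {raw_share:.2f}")                        # 0.75
print(f"Calibrated 'yes' share: {calibrated_yes_share(responses):.2f}")  # 0.50
```

Lowering the calibrated "yes" share in this way is what, in the studies above, brings hypothetical responses closer to those observed under actual payment.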
4.3 Real Talk
The paper by Alfnes et al. (2010) analyses a novel strategy, based on cognitive dissonance, that can be used to mitigate hypothetical bias. The strategy is known as "real talk": the experimental subjects are informed that they will be followed up with a non-hypothetical study of similar or identical commodities. It is predicated on the assumption that consumers prefer to respond consistently and will not want to pay more than they think a product is worth. Veisten and Navrud (2006), as cited in Alfnes et al. (2010), show that the number of positive responses decreased after the receipt for payment of the good was forwarded to the survey respondents.
4.4 Dissonance Minimizing Format
The dissonance minimizing format is another tool used in the literature to minimize the impact of hypothetical bias in a survey setting (Murphy et al. 2005). It tries to capture respondents who protest the payment vehicle but nonetheless support the cause at hand. This option lets respondents support a cause without having to say whether they would pay the sum specified in the dichotomous choice question.
4.5 Solemn Oath
There exists literature (Stevens et al. 2013) suggesting that even a tool as novel as real talk, much like cheap talk and certainty scale calibration, may not always work.
One approach, first proposed by Harrison and Kristrom (1995), attempts to form a contractual agreement between the investigator and the subjects. The underlying concept of this alternative stems from social psychology (Kulik and Carlino 1987), where it has been observed that when people promise something to one another, they are very likely to honour that promise. Mapped to the experimental setting, if survey respondents make a promise in a hypothetical scenario, they are more likely to give a truthful, unbiased response. Although research with this tool has been fairly limited, the work done so far shows that using an oath as a mitigating tactic can be a good strategy to reduce the prevalence of the bias. Jacquemet et al. (2009) employed this technique in a second price auction where respondents were asked to swear to give honest answers, borrowing the concept from real-world court settings where witnesses take an oath before testifying before the judge. The results showed that it reduced the hypothetical bias. Despite appearing promising, the method has a number of drawbacks, the main one being that the "heavy-handed" nature of this commitment approach may deter participants (Magistris et al. 2012). Apart from this, researchers still cannot guarantee that survey respondents take the oath seriously.
4.6 Honesty Priming
The limitations encountered with oath-taking have given rise to this technique, which originates from the social psychology literature. "Priming" can alter one's behaviour and choice patterns (Bargh et al. 2001). The underlying theme of priming is that certain words and specific jargon have the power to activate stimuli, and if consumers or survey respondents are exposed to them, their decisions can be influenced by the activated stimuli. The technique is being incorporated more and more in experimental economics to understand people's behavioural patterns. Honesty priming differs from the solemn oath in that the latter requires the respondent's direct consent. Drouvelis et al. (2010) have illustrated that priming can be extremely beneficial in a social dilemma game, as it reinforces cooperation and raises contribution levels. The technique has also been borrowed in work on the interface of religion and economics: Benjamin, Choi, and Fisher (2016) show how priming religious identities can increase contributions to public goods. The study by Magistris et al. (2012) showed that while honesty priming was not able to reduce or eliminate the effects of hypothetical bias in a non-hypothetical choice experiment, it was successful in doing so in a hypothetical choice experiment. This result is important because non-hypothetical CEs are more time-consuming and expensive, and it therefore sets the stage for future research.
4.7 Inferred Valuation
In this technique, respondents are questioned indirectly: they are asked how much they believe somebody else would be willing to pay for the good, rather than how much they themselves would pay. The approach is desirable because a person might overstate their willingness-to-pay estimate in order to be liked by the surveyors or to give a socially desirable answer; when questioned indirectly, they are no longer on the spot and have no incentive to give a false answer (Norwood & Lusk 2011).
4.8 Incentive Compatibility Approach
Incentive compatibility is a concept rooted in experimental economics and mechanism design. The point of this approach is to make respondents feel that expressing their true preferences to the surveyors or researchers is in their own interest. Taylor (1998) remarks that an elicitation procedure is incentive compatible if respondents have no motivation to misrepresent their actual preferences. Taylor (1998) illustrates this with a voting example: if individuals have to choose between two candidates, A and B, and the rule says that whichever candidate gets over 50% of the votes wins, then by expressing anything other than their true preference an individual will not further their desired result. Carson and Groves (2007) explain why binary choices (as in the voting example) are conducive to revealing true preferences: consumers or respondents will choose the alternative that increases their utility.
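The voting intuition can be made concrete with a small sketch: under a simple majority referendum, a voter's report only matters when it is pivotal, and in every pivotal case reporting the true preference delivers the preferred outcome. The enumeration below is a stylized illustration under our own assumptions, not a model taken from the cited papers.

```python
# Stylized illustration of incentive compatibility in a binary majority referendum.
# We enumerate every possible split of the OTHER voters and check whether
# misreporting can ever give our voter a better outcome than voting truthfully.

def outcome(votes_for_a, total_voters):
    """Option A wins if it receives a strict majority of all votes cast."""
    return "A" if votes_for_a > total_voters / 2 else "B"

def truthful_voting_is_weakly_dominant(true_preference="A", n_others=10):
    total = n_others + 1
    for others_for_a in range(n_others + 1):
        truthful = outcome(others_for_a + (true_preference == "A"), total)
        misreport = outcome(others_for_a + (true_preference != "A"), total)
        # If lying ever secured the preferred outcome when truth-telling did not,
        # the rule would not be incentive compatible.
        if misreport == true_preference and truthful != true_preference:
            return False
    return True

print(truthful_voting_is_weakly_dominant())  # True: misreporting never helps
```

The same logic is what Carson and Groves (2007) appeal to when arguing that binary choices encourage respondents to pick the alternative that genuinely raises their utility.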
Cummings (1995), in his study, showed that "homegrown values" elicited via a real dichotomous choice question that was incentive compatible differed significantly from those elicited hypothetically, thereby demonstrating the prevalence of a bias. However, there have been scenarios where the incentive compatibility approach has not worked. Buckell et al. (2020) used the incentive compatibility approach to mitigate the impact of hypothetical bias in smokers' choice patterns and found that although it incentivized choosing e-cigarettes, it did not have any effect on the characteristics that influenced the willingness to pay values.
In present-day research, it is very common for scholars to use more than one technique to quantify and eliminate the bias. For example, in the study by Champ et al. (2009), three treatments were considered to reduce the effect of hypothetical bias: actual payments that serve as a benchmark to quantify the presence of any hypothetical bias, a CV treatment with a follow-up certainty question, and a CV treatment with a cheap talk script as a prologue to the question. The certainty question used in the study asked respondents to rate on a scale of 1-10 how likely they were to pay the specified amount, with 1 being very uncertain and 10 very certain. The results demonstrated that using a certainty scale is preferable to a short cheap talk script.
5. Conclusion and scope for future research
Although a great deal of research is under way in environmental and experimental economics, researchers still have not been able to pinpoint one definite cause of hypothetical bias. Most of the factors affecting hypothetical bias are plausible causes that exhibit logical consistency, but we still do not have a definitive answer. It is also imperative to explore new ways of mitigating hypothetical bias, as no method has a perfect success rate. All of the methods mentioned above respond to hypothetical bias originating from various sources, and there is therefore no simple rule of thumb or guideline to follow (Loomis 2011). There is very little evidence supporting the idea that the bias is a function of individual characteristics, so more research needs to be conducted on that front (Mjelde et al. 2012). There is also very limited literature relating time preferences to hypothetical bias. Another vital point is that survey questions need to be structured properly, and respondents need to be aware of the scenarios and the problem in question, so as to avoid biased results. While there is a plethora of research in this domain, more fieldwork with various combinations of ex-ante and ex-post strategies is required to create a template of ready-to-go tactics for biases originating from various potential causes.
References
Alfnes, F., C. Yue, and H. H. Jensen. “Cognitive Dissonance as a Means of Reducing Hypothetical Bias.” European Review of Agricultural Economics 37, no. 2 (2010): 147-63.
https://doi.org/10.1093/erae/jbq012.
Bargh, John A., Peter M. Gollwitzer, Annette Lee-Chai, Kimberly Barndollar, and Roman Trötschel. “The Automated Will: Nonconscious Activation and Pursuit of Behavioral Goals.” Journal of Personality and Social Psychology 81, no. 6 (2001): 1014-27.
https://doi.org/10.1037/0022-3514.81.6.1014.
Benjamin, Daniel J., James J. Choi, and Geoffrey Fisher. “Religious Identity and Economic Behavior.” Review of Economics and Statistics 98, no. 4 (2016): 617-37.
https://doi.org/10.1162/rest_a_00586.
Blumenschein, Karen, Glenn C. Blomquist, Magnus Johannesson, Nancy Horn, and Patricia Freeman. “Eliciting Willingness to Pay without Bias: Evidence from a Field Experiment.” The Economic Journal 118, no. 525 (2007): 114-37. https://doi.org/10.1111/j.1468-0297.2007.02106.x.
Buckell, John, Justin S. White, and Ce Shang. “Can Incentive-Compatibility Reduce Hypothetical Bias in Smokers' Experimental Choice Behavior? A Randomized Discrete Choice Experiment.” Journal of Choice Modelling 37 (2020): 100255. https://doi.org/10.1016/j.jocm.2020.100255.
Carson, Richard T., and Theodore Groves. “Incentive and Informational Properties of Preference Questions.” Environmental and Resource Economics 37, no. 1 (2007): 181-210.
https://doi.org/10.1007/s10640-007-9124-5.
Champ, Patricia A., Rebecca Moore, and Richard C. Bishop. “A Comparison of Approaches to Mitigate Hypothetical Bias.” Agricultural and Resource Economics Review 38, no. 2 (2009): 166-80. https://doi.org/10.1017/s106828050000318x.
Champ, Patricia A., Richard C. Bishop, Thomas C. Brown, and Daniel W. McCollum. “Using Donation Mechanisms to Value Nonuse Benefits from Public Goods.” Journal of Environmental Economics and Management 33, no. 2 (1997): 151-62. https://doi.org/10.1006/jeem.1997.0988.
Chang, Jae Bong, Jayson L. Lusk, and F. Bailey Norwood. “How Closely Do Hypothetical Surveys and Laboratory Experiments Predict Field Behavior?” American Journal of Agricultural Economics 91, no. 2 (2009): 518-34. https://doi.org/10.1111/j.1467-8276.2008.01242.x.
Cummings, Ronald G, and Laura O Taylor. “Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method.” American Economic Review 89, no. 3 (1999): 649-65. https://doi.org/10.1257/aer.89.3.649.
De-Magistris, Tiziana, Azucena Gracia, and Rodolfo M. Nayga. “On the Use of Honesty Priming Tasks to Mitigate Hypothetical Bias in Choice Experiments.” American Journal of Agricultural Economics 95, no. 5 (2013): 1136-54. https://doi.org/10.1093/ajae/aat052.
Ding, Min, Rajdeep Grewal, and John Liechty. “Incentive-Aligned Conjoint Analysis.” Journal of Marketing Research 42, no. 1 (2005): 67-82. https://doi.org/10.1509/jmkr.42.1.67.56890.
Drouvelis, Michalis, Robert Metcalfe, and Nattavudh Powdthavee. “Priming Cooperation in Social Dilemma Games.” SSRN Electronic Journal, 2010. https://doi.org/10.2139/ssrn.1631098.
Harrison, Glenn W. “Experimental Evidence on Alternative Environmental Valuation Methods.” Environmental and Resource Economics 34, no. 1 (2006): 125-62.
https://doi.org/10.1007/s10640-005-3792-9.
Hoyos, David. “The State of the Art of Environmental Valuation with Discrete Choice Experiments.” Ecological Economics 69, no. 8 (2010): 1595-1603.
https://doi.org/10.1016/j.ecolecon.2010.04.011.
Jacquemet, Nicolas, Alexander James, Stéphane Luchini, and Jason F. Shogren. “Referenda under Oath.” Environmental and Resource Economics 67, no. 3 (2016): 479-504.
https://doi.org/10.1007/s10640-016-0023-5.
Kulik, James A., and Patricia Carlino. “The Effect of Verbal Commitment and Treatment Choice on Medication Compliance in a Pediatric Setting.” Journal of Behavioral Medicine 10, no. 4 (1987): 367-76. https://doi.org/10.1007/bf00846476.
LaPiere, R. T. “Attitudes vs. Actions.” Social Forces 13, no. 2 (1934): 230-37. https://doi.org/10.2307/2570339.
List, John A. Using Experimental Methods in Environmental and Resource Economics. Cheltenham: Edward Elgar, 2006.
List, John A., Paramita Sinha, and Michael H. Taylor. “Using Choice Experiments to Value Non-Market Goods and Services: Evidence from Field Experiments.” The B.E. Journal of Economic Analysis & Policy 6, no. 2 (2006). https://doi.org/10.2202/1538-0637.1132.
Loomis, John, Paul Bell, Helen Cooney, and Cheryl Asmus. “A Comparison of Actual and Hypothetical Willingness to Pay of Parents and Non-Parents for Protecting Infant Health: The Case of Nitrates in Drinking Water.” Journal of Agricultural and Applied Economics 41, no. 3 (2009): 697-712. https://doi.org/10.1017/s1074070800003163.
Loomis, John. “What's to Know about Hypothetical Bias in Stated Preference Valuation Studies?” Journal of Economic Surveys 25, no. 2 (2011): 363-70. https://doi.org/10.1111/j.1467-6419.2010.00675.x.
Lusk, J. L., and F. B. Norwood. “An Inferred Valuation Method.” Land Economics 85, no. 3 (2009): 500-514. https://doi.org/10.3368/le.85.3.500.
Mitani, Yohei, and Nicholas E. Flores. “Demand Revelation, Hypothetical Bias, and Threshold Public Goods Provision.” Environmental and Resource Economics 44, no. 2 (2009): 231-43. https://doi.org/10.1007/s10640-009-9281-9.
Mjelde, James W., Yanhong H. Jin, Choong-Ki Lee, Tae-Kyun Kim, and Sang-Yoel Han. “Development of a Bias Ratio to Examine Factors Influencing Hypothetical Bias.” Journal of Environmental Management 95, no. 1 (2012): 39-48.
https://doi.org/10.1016/j.jenvman.2011.10.001.
Murphy, James J., P. Geoffrey Allen, Thomas Stevens, and Darryl A. Weatherhead. “A Meta-Analysis of Hypothetical Bias in Stated Preference Valuation.” SSRN Electronic Journal, 2003. https://doi.org/10.2139/ssrn.437620.
Murphy, James J., Thomas H. Stevens, and Lava Yadav. “A Comparison of Induced Value and Home-Grown Value Experiments to Test for Hypothetical Bias in Contingent Valuation.” Environmental and Resource Economics 47, no. 1 (2010): 111-23.
https://doi.org/10.1007/s10640-010-9367-4.
Norwood, F. Bailey, and Jayson L. Lusk. “Social Desirability Bias in Real, Hypothetical, and Inferred Valuation Experiments.” American Journal of Agricultural Economics 93, no. 2 (2011): 528-34. https://doi.org/10.1093/ajae/aaq142.
Stevens, T.H., Maryam Tabatabaei, and Daniel Lass. “Oaths and Hypothetical Bias.” Journal of Environmental Management 127 (2013): 135-41. https://doi.org/10.1016/j.jenvman.2013.04.038.
Taylor, Laura O. “Incentive Compatible Referenda and the Valuation of Environmental Goods.” Agricultural and Resource Economics Review 27, no. 2 (1998): 132-39.
https://doi.org/10.1017/s1068280500006456.
Weisser, Reinhard A. How Real Is 'Hypothetical Bias' in the Context of Risk and Time Preference Elicitation? Institute of Labour Economics, 2016.
Whittington, Dale, John Briscoe, Xinming Mu, and William Barron. “Estimating the Willingness to Pay for Water Services in Developing Countries: A Case Study of the Use of Contingent Valuation Surveys in Southern Haiti.” Economic Development and Cultural Change 38, no. 2 (1990): 293-311. https://doi.org/10.1086/451794.