80 pages, grade: 1.3
List of abbreviations
Index of figures
1. Introduction: Renaissance of protectionist developments
1.1 Protectionism – a Jack-in-the-box?
1.2 Motivation and objective
2. Theoretical approach: A concise overview of game theory
2.1 Determinants of decision making: Strategic choices
2.2 Typical situations in a 2-player-game: The Prisoner’s dilemma
2.3 Iterated 2-by-2-bimatrix games and solution possibilities for the Prisoner’s dilemma
2.3.1 Discovering the cheaters
2.3.2 Punishing the cheaters
2.3.3 Characteristics of effective punishments
2.3.4 An optimal strategy to solve the prisoner’s dilemma: Tit for Tat
2.3.5 An additional option: Threats and promises
2.4 Is game theory an appropriate model to explain decision making in World Trade?
2.4.1 Is international trade a non-cooperative game?
2.4.2 Payoff structure in the prisoner’s dilemma and in international trade
2.4.3 International trade as an iterated prisoner’s dilemma?
3. Linking Theory to the real world: World Trade policy as an Iterative Prisoner’s Dilemma
3.1 Decision making in World Trade Policy: Who is in charge?
3.1.1 Responsible organs in the United States of America
3.1.2 Responsible organs in the European Union
3.1.3 International Organizations: WTO and GATT
3.2 Historic perspective: Development of World Trade in the 20th century
3.2.1 The United States of America
3.2.2 The European Union
3.3 Current developments in World Trade Policy explained
3.3.1 The United States of America
3.3.2 The European Union
3.4 Prospects for future development: Problems and solutions in World Trade Policy
3.4.1 Changing the rules of the game
3.4.2 Empowerment and increase in effectiveness of the liberal trading system
3.4.2.1 National solutions: incorporation of WTO/GATT rules into domestic legislation
3.4.2.2 Multilateral approach: reduction of complexity of the WTO dispute settlement system
3.4.2.3 Increased independence: an institutional change of the WTO
4. Summary and Conclusion
Declaration pursuant to §31 para. 5 RaPO
Figure 1: The Payoff matrix
Figure 2: The Payoff matrix for a prisoner’s dilemma
Figure 3: Threats and promises
Figure 4: Effects of a tariff in a big country under perfect competition
Figure 5: The Payoff matrix in international trade according to neoclassical theory
Figure 6: The modified Payoff matrix in international trade including political pressure and other non-monetary factors
"Progress, far from consisting in change, depends on retentiveness. Those who cannot remember the past are condemned to repeat it."
George Santayana (The Life of Reason)
Over the past few years, and especially between 2001 and 2004, world trade has experienced a renaissance of protectionist tendencies. Despite the promising compromises and the negotiating successes of the World Trade Organisation (WTO) round in Geneva in July 2004 and of previous rounds, bilateral relationships between countries often seem to undermine the idea of a world of free trade. Protectionism, though, is no new phenomenon at all: its roots can be traced back to the 16th century, when the so-called mercantilists tried to achieve a positive balance of trade by imposing import tariffs and quotas. Since then, protectionism has not only proven to be a popular measure for developing countries to shelter their own infant industries, but has repeatedly been used by major industrialized countries to stay ahead of competing nations. Today, protectionist measures can be observed especially in economic downturns, when countries resort to import restrictions in order to cushion the negative effects of recessions.
Economic theory shows that every country, irrespective of its stage of development, benefits from free trade. Why, then, does decision making in world trade still rely on the mercantilist idea, now more than 500 years old, and why does ever-recurring protectionism hinder the optimization of the wealth of nations?
The main motivation for this paper is the recent renaissance of protectionism outlined above. The following examination shall not be limited to the last few years, though: the aim is to study the iterative decision making in world trade policy from 1947 until today and to link the underlying rationale to traditional game theory.
Observations of recent trade politics in the major trading blocs, mainly the United States of America (USA) and the European Union (EU), show that protectionism is still in vogue, especially among the industrialized nations. Apparently, these countries are oscillating between trade liberalization on the one hand and protectionism on the other – but what determines decision making in world trade policy? Why do nations strive for trade liberalization at one moment, only to turn around completely and shield their economies from foreign threats? To answer these questions, this paper links the theoretical basics of game theory and the prisoner’s dilemma to decision making processes in international trade.
The first section therefore gives a concise synopsis of the basic ideas and concepts of game theory. The presentation focuses on the determinants of strategic decision making and on non-cooperative games, especially the prisoner’s dilemma. Solution possibilities for the single-shot and the iterated prisoner’s dilemma are then pointed out, and the appropriateness of game theory as a model for international trade is tested.
The second section then links the theoretical approach to the real world: based on a historical review of trade policy and decision making, current issues in world trade policy – concentrating on the protectionist behaviour of two of the major global trading blocs, the United States of America and the European Union – are explained using the theoretical model outlined in the preceding chapter. This section discusses the prospects and problems of future development in world trade and tries to offer feasible solutions to some pressing issues.
Wrapping up the results, the summary and conclusion reviews the question of how – if possible – finally to overcome the prisoner’s dilemma of international trade, and provides a brief overview of the necessary changes in the world trade system that have to be made in order to achieve this goal.
What is game theory all about? Basically, it is a tool to analyse competitive situations involving decision problems and to develop optimal and adequate strategies, using mathematical and cybernetic techniques. Game theory, thus, “[…] formally [is] a branch of mathematics developed to deal with conflict-of-interest situations in social science”. Game theory actually has its roots in the study of parlour games, but what makes it really interesting is its vast applicability in distinct fields, for instance operations research, collective action, political science, psychology, and biology. The most fascinating application, though, and the one this paper is based upon, is economics and international relations: “On a high level of aggregation, game theory can be used in international economics to set up models in which countries compete in choosing tariffs and other trade policies”, says Robert Gibbons in his book “A Primer in Game Theory”.
Game theory seeks to find rational strategies in “[…] situations of strategic interdependence: the implication of one player’s choice (or: strategy) depends on other rational and purposive players’ decisions”, and all players take these interdependencies into account when deciding on their respective actions. Decisions, therefore, are not isolated, but involve social interaction between the players.
In the sense of game theory, strategic thinking is the art of outmanoeuvring an opponent who is trying to do exactly the same to you. Everyone has to think and act strategically in order to “survive” in competitive environments. Game theory provides a handy toolset to improve competitive behaviour by offering some basic strategic principles.
The strategic alternatives open to the players of a game are (in a simplified model) quite straightforward: either to cooperate or to defect. Cooperation can then be reached in two different ways. On the one hand, players can communicate and may enter into binding contracts which provide the certainty necessary to establish cooperation. This approach is examined by the theory of cooperative games. On the other hand, cooperation can be achieved without agreements or even communication between the players if the results of cooperation maximize the individual payoff of each player.
The focus of this paper lies on the so-called non-cooperative games, which “[…] are characterized by missing communication between players. Binding agreements are not possible in this form of games. Without binding agreements, players can only strive for isolated optimization of the game result”. Players interact solely through the medium of their moves, which means that their strategic behaviour is a kind of “stylised language”. The only facts players know about their opponents are that they are – like the players themselves – rational actors, and that they play the same game. Rational behaviour in the sense of game theory means that each player tries to maximize his own individual payoff from the game.
The main reason for preferring non-cooperative game theory in this paper is as follows: non-cooperative games without real communication and binding contracts may seem quite abstract, and sometimes absurd, in contrast to cooperative games; in reality, though, they are the prevalent phenomenon. Cooperative game theory assumes that players always stick to binding contracts, while in non-cooperative theory players can choose whether or not to adhere to them. Non-cooperative theory thus depicts the whole negotiation process and stresses the relevance of social interaction, while cooperative theory represents only parts of that process.
In non-cooperative games, players come to their respective decisions simultaneously: they do not see which strategy the other players choose, but only experience the outcome related to these decisions. These games generally lead to stable outcomes, since the resulting strategies are based upon individual utility maximization: if there is a set of strategies for a game with the property that no player can benefit by changing his strategy while the other players keep their respective strategies unchanged, then that set of strategies and the corresponding payoffs constitute a Nash equilibrium. Interestingly, these equilibria seldom establish efficient (or Pareto-optimal) outcomes. If both players defect in the stable solution, individually rational behaviour can lead to a jointly suboptimal result, since mutual cooperation would often provide a superior outcome.
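The Nash-equilibrium condition described above can be checked mechanically for a 2-by-2 game. The following Python sketch uses illustrative payoffs with a prisoner’s-dilemma structure (the numbers and the 0/1 encoding of strategies are assumptions for the example, not taken from the text):

```python
# Finding pure-strategy Nash equilibria in a 2-by-2 bimatrix game.
# Payoffs are illustrative: mutual cooperation pays (3, 3), but a
# unilateral defector earns 5 at the cooperator's expense.

from itertools import product

# payoffs[(row, col)] = (payoff to Player 1, payoff to Player 2)
# strategies: 0 = cooperate, 1 = defect
payoffs = {
    (0, 0): (3, 3),   # both cooperate
    (0, 1): (0, 5),   # Player 1 cooperates, Player 2 defects
    (1, 0): (5, 0),
    (1, 1): (1, 1),   # both defect
}

def is_nash(row, col):
    """No player can benefit by unilaterally changing his strategy."""
    p1, p2 = payoffs[(row, col)]
    best_p1 = max(payoffs[(r, col)][0] for r in (0, 1))
    best_p2 = max(payoffs[(row, c)][1] for c in (0, 1))
    return p1 == best_p1 and p2 == best_p2

equilibria = [cell for cell in product((0, 1), repeat=2) if is_nash(*cell)]
print(equilibria)  # [(1, 1)]: mutual defection is the only equilibrium
```

Note that the unique equilibrium (1, 1) yields (1, 1) in payoffs, although (0, 0) with payoffs (3, 3) would be jointly superior – exactly the inefficiency discussed above.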
Dixit/Nalebuff have established a simple set of rules which enables players to solve non-cooperative games in a coordinated way and to find the optimal answer to the opponent’s strategy:
1. In non-cooperative games with simultaneous moves, there is a logical circle of argumentation: “I think that the other player thinks that I think that he thinks etc.” To anticipate the other player’s actions, draft a matrix which displays all possible outcomes and payoffs and use it to determine individual strategies.
A payoff matrix in the “normal form” (the presentation of a game using matrices, in contrast to the “extensive form”, which uses decision trees) usually looks as follows:
[Figure 1: The Payoff matrix – illustration not visible in this excerpt]
The payoffs in the lower left corners correspond to Player 1, while the numbers in the upper right corners apply to Player 2. Payoffs represent the total utility players can draw from the game; this does not necessarily have to be money, but can be any quantifiable utility, such as reputation or protection of the environment. Thus, payoffs are only a measure of how desirable the different outcomes are for the players. It is important to distinguish between zero-sum games, in which the gain of one player exactly equals the loss of the other, and non-zero-sum games, in which joint benefits are possible (as in this example).
2. After drafting the matrix, search for dominant strategies. Of two alternative strategies A and B for one player, strategy A is dominant if its payoff is superior to the payoff of strategy B for each strategy the other players could choose. In the example above, the strategy “defect” is dominant for both players: irrespective of the opponent’s behaviour, each player achieves a higher utility if he does not cooperate.
If there is a dominant strategy, then use it. If only the opponent has a dominant strategy, be sure that he will use it and choose an adequate answer strategy.
3. If none of the players has a dominant strategy, check whether there are any dominated strategies, i.e. strategies that are inferior to some other strategy regardless of what the opponents do. If this is the case, eliminate all dominated strategies from your decision matrix. The underlying idea is that a rational player will never play a strictly dominated strategy. When all players have eliminated their respective dominated strategies, it can happen that further strategies to which this criterion did not apply before become dominated. In that case, successively eliminate these newly dominated strategies from the payoff matrix as well. Even if there is no definite result after the elimination of all dominated strategies, this process reduces the size of the game and makes it handier.
4. If there is no dominant or dominated strategy, or if you have simplified the game by eliminating dominated strategies, then search for game equilibria: these are strategy pairs in which every player’s strategy is the best response to the other player’s decision. If there is a unique equilibrium, there is an argument for all players to choose the related strategy: rationality requires choosing the individually optimal outcome. If there are multiple equilibria, further agreement on rules and conventions is needed to prefer one equilibrium over the others. In any other case, use mixed strategies to optimize outcomes in situations without equilibria.
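Steps 2 to 4 above can be sketched in code. The following Python fragment performs the successive elimination of strictly dominated strategies for a two-player game in normal form; the function names and the small payoff tables are hypothetical illustrations, not part of the original text:

```python
# Iterated elimination of strictly dominated strategies, sketched for
# a two-player game given by two payoff tables in normal form.

def strictly_dominated(payoff, rows, cols, candidate, by, for_row_player):
    """True if `candidate` is strictly worse than `by` against every
    remaining strategy of the opponent."""
    if for_row_player:
        return all(payoff[candidate][c] < payoff[by][c] for c in cols)
    return all(payoff[r][candidate] < payoff[r][by] for r in rows)

def eliminate(p1, p2):
    """Successively delete strictly dominated rows and columns."""
    rows, cols = set(range(len(p1))), set(range(len(p1[0])))
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            if any(strictly_dominated(p1, rows, cols, r, other, True)
                   for other in rows if other != r):
                rows.discard(r); changed = True
        for c in list(cols):
            if any(strictly_dominated(p2, rows, cols, c, other, False)
                   for other in cols if other != c):
                cols.discard(c); changed = True
    return sorted(rows), sorted(cols)

# Row player's payoffs p1[r][c] and column player's payoffs p2[r][c];
# row/column 1 ("defect") strictly dominates row/column 0 ("cooperate"):
p1 = [[3, 0], [5, 1]]
p2 = [[3, 5], [0, 1]]
print(eliminate(p1, p2))  # ([1], [1]): only mutual defection survives
```

The surviving strategy pair is exactly the dominant-strategy outcome from step 2, illustrating how elimination shrinks a game to its essential core.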
The examination in this paper simplifies the concept of game theory to 2-by-2-bimatrix games. They have the minimal configuration necessary for a game: two players, each with two decision possibilities (which are, as described above, cooperation and defection). These games can easily be examined and displayed in a matrix. Nevertheless, they are complex enough to explain and model the major concepts of game theory. At the same time, it is remarkable how many real-world situations can be modelled using this basic game form.
As shown in the previous section, playing the dominant strategy does not mean that a jointly superior result will be the outcome of a non-cooperative game: instead, there are several constellations in which the game equilibrium presents an inferior result. The most famous 2-by-2-bimatrix example in this context is the prisoner’s dilemma. It is one of the most interesting games game theory has to offer and, at the same time, deals with one of the most interesting questions of the social sciences: the interaction of individual and society.
The underlying story of the prisoner’s dilemma is widely known:
Two criminals commit a bank robbery and are arrested by the police. The police, though, have insufficient evidence for a conviction, and having separated the two suspects, visit each of them and offer the same deal: If you confess and your accomplice remains silent, he gets the full 5-year sentence and you go free as principal witness. If he confesses and you remain silent, you get the full 5-year sentence and he goes free. If you both stay silent, all we can do is give you both 1 year for a minor charge, which is illicit possession of firearms. If you both confess, you each get 4 years.
Analysing the individual payoffs (where a longer jail sentence means a lower payoff to the imprisoned criminals), the matrix looks as follows:
[Figure 2: The Payoff matrix for a prisoner’s dilemma – illustration not visible in this excerpt]
As in every non-cooperative game, it is assumed that each individual player tries to maximise his own advantage, without concern for the well-being of the other player. However, the outcome of the prisoner’s dilemma depends on the decision of the other suspect. Confessing is a dominant strategy for both players here: given the other player’s choice, each player can in either case reduce his sentence by confessing. Unfortunately for the prisoners, this leads to an inferior outcome in which both confess and both get heavy jail sentences. Thus, the Nash equilibrium does not lead to a jointly optimal solution in the prisoner’s dilemma; the joint payoff of the players would be higher in the case of cooperation. The heart of the dilemma is that each player has an individual incentive to cheat (even if there were the possibility of promising to cooperate), but the result of both players applying their dominant strategies is far from efficient. It is interesting to see how individual rationality overrules collective rationality here, since both players know that they would be better off cooperating.
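The dominance argument can be verified directly from the jail terms given in the story (0, 1, 4 or 5 years), treating the negated sentence length as the payoff. A minimal Python check:

```python
# The prisoner's dilemma from the story: payoffs are negated jail
# terms, so a longer sentence is a lower payoff.
# Strategies: "silent" (cooperate) or "confess" (defect).

years = {  # (choice of prisoner 1, choice of prisoner 2) -> jail terms
    ("silent", "silent"): (1, 1),
    ("silent", "confess"): (5, 0),
    ("confess", "silent"): (0, 5),
    ("confess", "confess"): (4, 4),
}
payoff = {cell: (-a, -b) for cell, (a, b) in years.items()}

# "Confess" is dominant for prisoner 1: better against either choice
# of prisoner 2 (and symmetrically for prisoner 2).
for other in ("silent", "confess"):
    assert payoff[("confess", other)][0] > payoff[("silent", other)][0]

# Yet the equilibrium (confess, confess) is jointly inferior to
# mutual silence:
print(sum(payoff[("confess", "confess")]))  # -8
print(sum(payoff[("silent", "silent")]))    # -2
```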
One might argue that, if communication between the prisoners were possible, the outcome would be mutual cooperation (that is, remaining silent), yielding a superior result. The general problem in the prisoner’s dilemma, though, is not rooted in a lack of communication, but in the impossibility of entering into binding contracts. Even if the prisoners had committed themselves not to confess, they would not have abided by their agreement. If there is no possibility of entering into binding agreements, the solution to the game has to be designed in such a way that cooperation lies in every player’s self-interest. This is the basic requirement for any non-cooperative game.
The next section describes the iterated prisoner’s dilemma and leads on to solution possibilities that can outperform the inferior single-shot results presented in this chapter.
The 2-by-2-bimatrix games introduced above were presented as single-shot games, without the possibility of repetition. To be more realistic, however, repeated games should be incorporated into the examination. It is important to realise that completely new strategic options arise in iterated games, and that it is not enough simply to add up the payoffs of the separate iterated base games.
The reason for this distinction lies in the chance to observe the opponent’s behaviour after each execution of one base game. The feedback which every player receives on his decisions allows him to adapt his strategy flexibly in the following rounds. In this context, the necessary prerequisite for the adaptation of strategies is perfect and complete information. Perfect information means that every player can remember every previous game, while complete information means that every player knows the payoffs to each player and all other relevant game information.
Does the general characteristic of iterated games – to achieve better mutual outcomes – apply to the prisoner’s dilemma as well? As shown in the previous chapter, there is only one stable Nash equilibrium in the single-shot prisoner’s dilemma: both prisoners defect and thus choose the inferior mutual payoff. In the (infinitely) iterated form of the game, however, cooperation may arise as an equilibrium outcome: Since the game is repeated, each player is afforded an opportunity to punish the other player for previous non-cooperative play. Thus, the incentive to cheat may be overcome by the threat of punishment, leading to a superior, cooperative outcome. The implication of this solution possibility then is to ask how cheating/defecting can be discovered, and which punishments effectively discourage defecting.
Dixit/Nalebuff have developed a sensible approach to answering these questions, which consists of the following parts: the problem of discovering the cheaters, the prerequisites for effective punishment in the prisoner’s dilemma, and the characteristics of effective punishments. Subsequently, they present a game strategy which includes effective punishments, Tit for Tat, and eventually modify this strategy to deal with its shortcomings.
Generally, it is easy to discover that someone has cheated – but not who exactly defected. In 2-by-2 games, the outcome clearly indicates to one player whether the other player has defected; in reality, though, there may be additional factors which negatively influence the outcome although nobody has cheated. If defection and other external factors cannot be distinguished, the only possibility to prevent defection is to establish a critical frontier: if the outcome falls below this decisive line, the community (all players) assumes that somebody has cheated and imposes punishments. The punishments then have to be quite undifferentiated and hit guilty and innocent players alike.
International trade exemplifies the problem of discovering the cheaters: tariffs are the most obvious instruments of trade restriction, and restriction of trade is considered defection. The General Agreement on Tariffs and Trade (GATT) and the following WTO rounds achieved substantial reductions in tariff levels in all major developed countries. There is still domestically induced pressure by powerful lobby groups to restrict imports, though. Step by step, countries began to strive for other, less visible import restrictions which were not originally covered by the GATT regulations: voluntary export restraints, customs clearance requirements, bureaucratic procedures, complicated quota regulations etc.
The example shows that agreements tend to concentrate on visible decision dimensions, while competition itself relocates into less observable areas. Discovering defection can be quite difficult in this case.
Every strategy which strives to foster cooperation includes mechanisms to punish the cheaters. Punishments generally occur in one of two forms: either they are added ex post to the original game (for instance the threat that the defector/confessor in the prisoner’s dilemma will suffer revenge by friends of the prisoner who remained in jail), or they emerge from the game itself.
In the latter case, punishments usually evolve through repetition of the game. Benefits from defection in the first game are followed by losses in future games. If the prospective losses are high enough to discourage defection, then cooperation can be achieved over a long period of time. In a single-shot game, by contrast, there is no solution which guarantees mutual cooperation. Only continuous relationships include the possibility of punishment and thus a stick to motivate cooperation.
This simple principle of repeated games is subject to some important constraints: the first limitation is that some relationships have natural ends, for example the end of a legislative period. This means that the prisoner’s dilemma is repeated for a known, constant, finite number of turns. Then, the Nash equilibrium is to defect every time. This can easily be proved by backwards induction: a player might well defect on the last turn, since his opponent will have no chance to punish him afterwards. Therefore, both will defect on the last turn. But then a player might as well defect on the second-to-last turn, since his opponent will defect on the last turn no matter what he does. This reasoning continues back to the first round, and cooperation will not be reached at all. For cooperation to remain appealing, the future must therefore be indeterminate for both players.
The second constraint arises from the fact that cheating yields its benefits before the costs of the breakdown of cooperation are incurred. The relative importance of costs and benefits then depends on the relative importance of the present compared to the future. In economics and business, discount factors are usually used to allow for the reduced importance of future payoffs. In politics, by contrast, the valuation of the future is more subjective: it seems as if the time after the next election did not count at all. Cooperation is then hard to reach. For the purposes of this paper’s examination, however, discounting is assumed to be sufficiently weak that the threat of future losses creates an incentive to cooperate.
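How a discount factor decides between cooperating and cheating can be illustrated with a small calculation. The sketch below assumes a standard trigger-style setup with illustrative payoffs T > R > P (the specific numbers are not from the text): cheating yields T once but only P in every later round, while cooperation yields R forever.

```python
# Present values in the infinitely repeated prisoner's dilemma under a
# discount factor delta in (0, 1). Illustrative payoffs:
# T = temptation, R = reward for mutual cooperation, P = mutual defection.

T, R, P = 5.0, 3.0, 1.0

def value_cooperate(delta):
    """Cooperate forever: R + delta*R + delta^2*R + ... = R / (1 - delta)."""
    return R / (1 - delta)

def value_cheat(delta):
    """Cheat once for T, then earn only P after cooperation collapses."""
    return T + delta * P / (1 - delta)

# Cooperation pays iff delta >= (T - R) / (T - P), i.e. 0.5 here:
for delta in (0.3, 0.5, 0.8):
    print(delta, value_cooperate(delta) >= value_cheat(delta))
```

Only a player who values the future highly enough (delta at or above the threshold) finds cooperation worthwhile, which is exactly the assumption made for the rest of this paper.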
When there are different punishments that can prevent cheating and maintain cooperation, which one should be chosen? Several criteria play a relevant role in determining the optimal counterstrike to defection.
Certainly the most important criteria are simplicity and clarity. In order to deter defection, punishments have to be easy to understand: every player who contemplates cheating should be able to calculate the consequences easily and exactly.
The next criterion is certainty. The players should be able to rely on guaranteed punishment for defectors, while cooperation will be rewarded. In international trade, this requirement constitutes a major problem. Due to long periods of investigation and the influence of politics and diplomacy on the judgements of the WTO, the punishment is quite ineffective. The real cause for the punishment (the actual defection) only plays a minor role afterwards, and repetition of the cheating will not be effectively prevented.
The last characteristic to be considered is the severity of the punishment. Most people will tend to impose “adequate” punishments, but these might lack deterrent effect. The most effective way to deter defection is to impose punishments as severe as possible: if cooperation is then maintained, the massive punishment never has to be exercised. Sheer severity ignores the problem of wrong decisions, though. If no player has actually cheated, but as a consequence of a misperception one is deemed to have defected, errors can have expensive implications. In order to reduce the costs of such errors, the severity of punishments has to be limited to a level just sufficient to deter cheating. Minimal deterrence reaches its goal without incurring excessive costs when the inevitable mistakes take place.
The list of characteristics of effective punishment procedures looks quite demanding. Yet there is a simple strategy which fulfils every requirement quite well: Tit for Tat, a variant of the old biblical rule “an eye for an eye, a tooth for a tooth”, which implies equivalent retaliation. Its rationale is simple: the player who plays Tit for Tat cooperates in the first game and responds in kind to the opponent’s behaviour in each of the following rounds. If the opponent was previously cooperative, the player is cooperative. If not, the player is not.
Tit for Tat has four characteristics which have made it the most prevalent strategy for the prisoner’s dilemma:
Firstly, Tit for Tat presents a clear, straightforward behaviour, as clear as can be. This means that the other player can easily understand the logic behind Tit for Tat’s actions, and can therefore figure out how to work alongside it productively.
Secondly, Tit for Tat is nice. It starts off cooperating and only will defect in response to the other player’s defection, and thus never initiates the cycle of mutual defections.
Thirdly, it is provocable. It never leaves defection unpunished as it perfectly copies the opponent’s actions. At the same time, and this is the fourth characteristic, Tit for Tat is forgiving. It immediately responds in kind if the other player starts to cooperate again.
Tit for Tat was originally submitted by the mathematician Anatol Rapoport, a professor at the University of Toronto, to a tournament organized by the political scientist Robert Axelrod. Axelrod modelled a contest of 2-by-2 prisoner’s dilemmas in which several computer programs competed against each other over 150 rounds. The key to Tit for Tat’s success was that it reacted responsively to the opponent’s behaviour. Thus, it fostered cooperation wherever possible and at the same time managed not to be exploited. It is no coincidence that most of the worst-performing strategies in the tournament were ones that were not designed to be responsive to the other player’s choices; against such a player, the best strategy is simply to defect every time, because one can never be sure whether reliable mutual cooperation will be established.
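The behaviour described above can be reproduced in a few lines of code. The following Python sketch implements Tit for Tat and plays it against two unresponsive strategies over 150 rounds, as in the tournament described; the payoff numbers (3/5/1/0) are illustrative assumptions, not Axelrod’s original values.

```python
# A minimal Tit for Tat and an Axelrod-style iterated prisoner's dilemma.
# Moves: "C" = cooperate, "D" = defect.

def tit_for_tat(own_history, opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(own_history, opp_history):
    return "D"

def always_cooperate(own_history, opp_history):
    return "C"

# Illustrative payoffs per round: (to player 1, to player 2).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat1, strat2, rounds=150):
    """Play an iterated game and return the two cumulative scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, always_cooperate))  # (450, 450): stable cooperation
print(play(tit_for_tat, always_defect))     # (149, 154): exploited only once
```

Against a cooperator, Tit for Tat sustains mutual cooperation for all 150 rounds; against a permanent defector, it loses only the first round and retaliates thereafter, which is precisely the responsiveness credited with its tournament success.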
The problem which arises when using Tit for Tat, though, is that errors and misunderstandings are punished as well; when this happens, cooperation can only be re-established when the next error occurs: if cooperation is misperceived as defection, a player using Tit for Tat will retaliate. If the other player applies Tit for Tat as well, he will respond with defection in turn. This starts a chain reaction which will not end until one player misperceives defection as cooperation and cooperates again in the next round. Dixit/Nalebuff reckon that Tit for Tat is too provocable, and therefore propose including a mechanism in Tit for Tat which forgives exceptional defection. They recommend starting off with cooperation and continuing to cooperate irrespective of the other player’s behaviour, while observing the number of defections that occur while the first player himself cooperates. If the number of defections exceeds a critical level, then the original Tit for Tat should be applied. This way, Tit for Tat is not used to reward cooperative behaviour, as in the pure strategy, but serves as punishment for defection. In order to find a sensible critical frontier, the opponent’s behaviour has to be observed in the short run, the medium run and the long run, respectively.
An example for the modified Tit for Tat according to Dixit/Nalebuff would look as follows:
- First impression: If the opponent cooperates in the first round, cooperate as well. Defection in the first round, by contrast, is not acceptable. In this case apply Tit for Tat.
- Short run: Observe the opponent’s behaviour in the first three rounds. Keep on cooperating, except if the opponent defects in two out of three rounds. Return to Tit for Tat in this case.
- Medium run: Opponent’s defection in three out of the last 20 rounds is not acceptable. Return to Tit for Tat, otherwise keep on cooperating.
- Long run: If the opponent has defected in five out of the last 100 games, quit cooperation and apply Tit for Tat.
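The four rules above can be sketched as a single decision function. The implementation below is one possible reading of the Dixit/Nalebuff thresholds (it counts defections over the most recent rounds); the function name and details are illustrative, not taken from the text.

```python
# A sketch of the modified Tit for Tat described above: keep cooperating
# despite occasional defections, but fall back to plain Tit for Tat once
# the opponent's defections cross any of the listed thresholds.

def modified_tit_for_tat(own_history, opp_history):
    """Forgiving variant: tolerate rare defections, punish patterns."""
    if not opp_history:
        return "C"                      # be nice in the first round

    def defections(last_n):
        return opp_history[-last_n:].count("D")

    triggered = (
        opp_history[0] == "D"           # first impression
        or defections(3) >= 2           # short run: 2 of the last 3
        or defections(20) >= 3          # medium run: 3 of the last 20
        or defections(100) >= 5         # long run: 5 of the last 100
    )
    if triggered:
        return opp_history[-1]          # plain Tit for Tat as punishment
    return "C"                          # otherwise keep cooperating

# A single slip is forgiven ...
print(modified_tit_for_tat([], ["C", "C", "D", "C", "C"]))  # "C"
# ... but repeated defection triggers Tit for Tat:
print(modified_tit_for_tat([], ["C", "D", "D"]))            # "D"
```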
This form of punishment through Tit for Tat does not have to last forever, though. Having observed the opponent’s previous behaviour, the player can return to cooperative behaviour after a certain critical number of mutual defection rounds and give the other side a new chance. The most important principle of this strategy consists in not punishing every defection. The player has to decide whether there is a misunderstanding, be it on his own or the opponent’s side. By being lenient, the player enables his opponent to cheat to a somewhat higher extent than before. But at the same time, the opponent erodes this advance of trust each time he defects. When misunderstandings eventually arise, the first player will no longer be willing to forgive, and will subsequently punish the opponent by applying Tit for Tat.
According to Dixit/Nalebuff, strategic moves have two characteristics: they embody an action plan (what should be done in each possible situation) and self-commitment, which makes this plan credible to the other players. If a player is first to take an action, then an unconditional move, i.e. a move which will not be influenced by the opponent’s behaviour, can yield a strategic advantage to this player. If, for instance, the first player has a dominant strategy for his first move, he will certainly choose this strategy and discard the other one.
But even if a player moves second, a strategic advantage can be achieved by establishing an answer rule: such rules specify a binding (if only informally so) response to each of the opponent’s possible moves. Answer rules have to be set up and communicated before the opponent chooses his move.
This announcement of rules before the game itself is played (since in non-cooperative games, no further communication is possible once one player has chosen his first move) is called preplay communication. Players can announce their answer rules, though these are not binding on them in the way contracts would be.
Threats, in this context, are answer rules according to which the opponent will be punished if he does not cooperate. There are two main categories of threats: enforcing and deterring. Enforcing threats exert pressure on the other player to do something (e.g. terrorists who threaten to kill hostages if their demands are not fulfilled), while deterring threats are meant to keep the opponent from doing something (e.g. NATO’s threat during the Cold War to use nuclear weapons against the Soviet Union if it attacked a NATO member state).
Promises, by contrast, reward the other player if he cooperates. Again, there are enforcing and deterring promises. An example of an enforcing promise is the state’s-evidence (leniency) rule, which promises a reduced penalty if the suspect confesses, as is the case in the prisoner’s dilemma.
The general problem with answer rules is that the first player, who set up the rule, has no incentive to stick to it and may obtain a higher payoff by deviating from the rule once the other player has fulfilled his obligation.
Rules change a player’s behaviour in such a way that decisions are taken which rational behaviour without rules would not have produced. They can include moves against the player’s own interests and therefore constitute genuine strategic moves. Answer rules can turn a simultaneous game into a sequential one, since the player who announced the rule has effectively chosen his move before the opponent can choose his respective action. Outcomes may change dramatically, even if the overall payoff structure is unchanged.
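A small numeric example can make the last point concrete. The payoff numbers and the “matching” answer rule below are illustrative assumptions, using the standard prisoner’s-dilemma structure: played simultaneously, the game ends in mutual defection; but if one player credibly commits to the answer rule “I will match your move”, the other player effectively chooses between (C,C) and (D,D) and prefers to cooperate, although the payoff matrix itself never changes.

```python
# (row move, col move) -> (row payoff, col payoff); values are assumed
# for illustration and follow the usual prisoner's-dilemma ordering.
PAYOFF = {
    ('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0), ('D', 'D'): (1, 1),
}

def best_reply(col_move):
    """Row's best reply to a fixed column move."""
    return max('CD', key=lambda m: PAYOFF[(m, col_move)][0])

# Simultaneous play: defection is dominant, so the outcome is (D, D).
print(best_reply('C'), best_reply('D'))  # prints D D

# Column announces the answer rule "match row's move": row now compares
# PAYOFF[('C','C')] with PAYOFF[('D','D')] and cooperates.
row_choice = max('CD', key=lambda m: PAYOFF[(m, m)][0])
print(row_choice)  # prints C
```

The payoffs are identical in both cases; only the announced rule, turning the simultaneous game into a sequential one, changes the outcome.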
For a rule to be credible (and thus to fulfil the criterion of effective self-commitment), threats and promises should be limited to the necessary minimum. Excessive threats have several disadvantages:
1. Excessive threats, whose execution would often run against the threatener’s own interests, are not credible and therefore do not work.
2. Even if an excessive threat or promise has succeeded, the other party will weigh the benefits of cooperation against the damage done by the answer rule. If the result is negative, there will be no sustainable cooperation.
3. If a threat has international implications, the rest of the world may impose sanctions on the threatening nation once the threat is carried out. If, by contrast, the threat is not executed, the nation’s reputation suffers.
4. If threats mix different fields of action, for instance if military action is threatened in response to trade defection, the original conflict loses clarity and cooperation becomes harder to achieve.
In the prisoner’s dilemma of international trade, too, threats and promises can be used to change payoffs and thereby the opponent’s behaviour: if a big country – in terms of trade volume and importance as a trade partner – threatens to impose tariffs or other trade restrictions in case the other nation does not cooperate, this answer rule can deter the potentially defecting nation from an uncooperative strategy. To recapitulate: the paramount point to bear in mind is that answer rules only work in an environment where binding contracts are not possible. The principle of reciprocity, which will be discussed in more detail in chapter 2.4.3, serves as a relevant example of how to incorporate threats and promises into international negotiations.
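The trade example above can be sketched numerically. All payoff values here are invented for illustration: without the threat, the smaller country gains from unilateral protectionism; a credible tariff threat by the big country lowers the defection payoff below the cooperation payoff and deters defection.

```python
# Illustrative payoffs for the smaller country (assumed values):
FREE_TRADE_PAYOFF = 3   # payoff from cooperating (open markets)
DEFECT_PAYOFF = 5       # short-run gain from unilateral protectionism
TARIFF_DAMAGE = 4       # loss if the big country retaliates with tariffs

def small_country_move(threat_is_credible):
    """The smaller country's rational choice, given whether the big
    country's tariff threat (the announced answer rule) is credible."""
    defect_value = DEFECT_PAYOFF - (TARIFF_DAMAGE if threat_is_credible else 0)
    return 'defect' if defect_value > FREE_TRADE_PAYOFF else 'cooperate'

print(small_country_move(False))  # prints defect:    5     > 3
print(small_country_move(True))   # prints cooperate: 5 - 4 < 3
```

As in the general case, the threat works precisely because it changes the opponent’s effective payoffs before any move is made; if it ever has to be executed, it has already failed its purpose.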
 de Santayana, The life of reason (1955), p. 82.
 see Gianaris, The European Community and the United States (1991), p.17.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), p. 1f.
 Zagare, Game theory – Concepts and applications (1991), p. 7.
 see Zagare, Game theory – Concepts and applications (1991), p. 7f.
 Gibbons, A primer in Game Theory (1992), p. xi.
 Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), p. 84.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), p. 1.
 see Rieck, Spieltheorie (1993), p. 28.
 Rometsch, Internationale Konversion in spieltheoretischer und politökonomischer Sicht (2002), p. 112.
 see Rieck, Spieltheorie (1993), p. 28.
 see Rieck, Spieltheorie (1993), pp. 28ff.
 see Rometsch, Internationale Konversion in spieltheoretischer und politökonomischer Sicht (2002), p. 113.
 see Holler/Illing, Einführung in die Spieltheorie (2000), p. 6f.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 84ff.
 see Rieck, Spieltheorie (1993), pp. 20ff.
 see Gibbons, A primer in Game Theory (1992), pp. 4ff.
 see Rieck, Spieltheorie (1993), p. 35.
 ibid, p. 36.
 ibid, p. 36.
 see Rieck, Spieltheorie (1993), pp. 38ff.
 see Holler/Illing, Einführung in die Spieltheorie (2000), p. 20.
 see Rieck, Spieltheorie (1993), pp. 117ff.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 94ff.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 95ff.
 ibid, pp. 97f.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 99ff.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 104f.
 ibid, pp. 94ff.
 see Axelrod, Die Evolution der Kooperation (1991), pp. 28ff.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 108ff.
 ibid, pp. 112f.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 122f.
 see Rieck, Spieltheorie (1993), p. 220.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), pp. 122ff.
 see Dixit/Nalebuff, Spieltheorie für Einsteiger (1995), p. 134.