A primal condition for approachability with partial monitoring


Journal of Dynamics and Games

July 2014, vol. 1, no. 3, pp. 447-469

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: Approachability theory, Online learning, Imperfect monitoring, Partial monitoring, Signals

In approachability with full monitoring there are two types of conditions that are known to be equivalent for convex sets: a primal and a dual condition. The primal one is of the form: a set C is approachable if and only if all containing half-spaces are approachable in the one-shot game. The dual condition is of the form: a convex set C is approachable if and only if it intersects all payoff sets of a certain form. We consider approachability in games with partial monitoring. In previous works [5,7] we provided a dual characterization of approachable convex sets and we also exhibited efficient strategies in the case where C is a polytope. In this paper we provide primal conditions for a convex set to be approachable with partial monitoring. They depend on a modified reward function and lead to approachability strategies that are based on modified payoff functions and proceed by projections, similarly to Blackwell's (1956) strategy. This is in contrast with previously studied strategies in this context, which relied mostly on the signaling structure and aimed at accurately estimating the distributions of the signals received. Our results generalize classical results by Kohlberg [3] (see also [6]) and apply to games with arbitrary signaling structure as well as to arbitrary convex sets.
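
As an illustrative aside: in the classical full-monitoring setting the abstract refers to, Blackwell's projection-based strategy has a well-known concrete instance, regret matching, which approaches the nonpositive-regret orthant by playing each action with probability proportional to the positive part of its cumulative regret. The sketch below uses a toy zero-sum game and a fixed opponent, both purely illustrative assumptions; it shows the classical baseline, not the paper's partial-monitoring construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Row player's payoff matrix for rock-paper-scissors (an arbitrary toy game).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

def regret_matching(A, opp_strategy, T=50_000):
    """Blackwell-style projection strategy: approach the nonpositive-regret
    orthant by playing actions with probability proportional to the
    positive part of cumulative regret (regret matching)."""
    n = A.shape[0]
    cum_regret = np.zeros(n)
    for _ in range(T):
        pos = np.maximum(cum_regret, 0.0)
        p = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        i = rng.choice(n, p=p)                 # our (randomized) action
        j = rng.choice(n, p=opp_strategy)      # opponent's action
        # regret of each alternative action versus the action we played
        cum_regret += A[:, j] - A[i, j]
    return cum_regret / T

avg_regret = regret_matching(A, opp_strategy=np.array([0.5, 0.3, 0.2]))
# as T grows, the average-regret vector approaches the nonpositive orthant
```

The target set here is the orthant {x : x ≤ 0}; the "projection" step reduces to taking the positive part of the average-regret vector, which makes the general geometric idea computable in one line.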

Analogies and Theories: The Role of Simplicity and the Emergence of Norms


Games and Economic Behavior

January 2014, vol. 83, pp. 267-283

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: Case-based reasoning, Rule-based reasoning, Model selection, Social norms, Equilibrium selection

We consider the dynamics of reasoning by general rules (theories) and by specific cases (analogies). When an agent faces an exogenous process, we show that, under mild conditions, if reality happens to be simple, the agent will converge to adopt a theory and discard analogical thinking. If, however, reality is complex, analogical reasoning is unlikely to disappear. By contrast, when the agent is a player in a large population coordination game, and the process is generated by all players' predictions, convergence to a theory is much more likely. This may explain how a large population of players selects an equilibrium in such a game, and how social norms emerge. Mixed cases, involving noisy endogenous processes, are likely to give rise to complex dynamics of reasoning, switching between theories and analogies.

Beware of black swans: Taking stock of the description-experience gap in decision under uncertainty


Marketing Letters

September 2014, vol. 25, no. 3, pp. 269-280

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: Ambiguity, Black swans, Description-based decision making, Fourfold pattern, Probabilistic choices, Risk

Uncertainty pervades most aspects of life. From selecting a new technology to choosing a career, decision makers rarely know in advance the exact outcomes of their decisions. Whereas the consequences of decisions in standard decision theory are explicitly described (the decision from description (DFD) paradigm), the consequences of decisions in the recent decision from experience (DFE) paradigm are learned from experience. In DFD, decision makers typically overrespond to rare events. That is, rare events have more impact on decisions than their objective probabilities warrant (overweighting). In DFE, decision makers typically exhibit the opposite pattern, underresponding to rare events. That is, rare events may have less impact on decisions than their objective probabilities warrant (underweighting). In extreme cases, rare events are completely neglected, a pattern known as the 'Black Swan effect.' This contrast between DFD and DFE is known as a description-experience gap. In this paper, we discuss several tentative interpretations arising from our interdisciplinary examination of this gap. First, while one source of underweighting of rare events in DFE may be sampling error, we observe that a robust description-experience gap remains even when sampling error is not at play. Second, the residual description-experience gap is not only about experience per se but also about the way in which information concerning the probability distribution over the outcomes is learned in DFE. Econometric error theories may reveal that different assumed error structures in DFD and DFE also contribute to the gap.

Economic Models as Analogies


Economic Journal

August 2014, vol. 124, no. 578, pp. 513-533

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

People often wonder why economists analyze models whose assumptions are known to be false, while economists feel that they learn a great deal from such exercises. We suggest that part of the knowledge generated by academic economists is case-based rather than rule-based. That is, instead of offering general rules or theories that should be contrasted with data, economists often analyze models that are "theoretical cases", which help us understand economic problems by drawing analogies between the model and the problem. According to this view, economic models, empirical data, experimental results and other sources of knowledge are all on equal footing; that is, they all provide cases to which a given problem can be compared. We offer complexity arguments that explain why case-based reasoning may sometimes be the method of choice and why economists prefer simple cases.

Eliciting Prospect Theory When Consequences Are Measured in Time Units: "Time Is Not Money"


Management Science

July 2014, vol. 60, no. 7, pp. 1844-1859

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: time risk; expected utility; prospect theory; reference point; utility; probability weighting; decision weights; loss aversion

We elicited the prospect theory components (utility, probability weighting, and loss aversion) when consequences are expressed as the time dedicated to a specific task or activity. A similar elicitation was performed for monetary consequences to allow an across-attribute (time/money) comparison of the elicited components (at the individual level). We obtained less concave utility and smaller loss aversion for time than for money. Moreover, while probability weighting was predominantly inverse S-shaped for both attributes, it was less sensitive to probabilities and more elevated for time than for money. This finding implies more optimism for gains and more pessimism for losses.

Entry, trade costs, and international business cycles


Journal of International Economics

November 2014, vol. 94, no. 2, pp. 224-238

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: International business cycle; Extensive margin; Entry; Fixed export costs

Are firm entry and fixed exporting costs relevant for understanding the international transmission of business cycles? We revisit this question using a model that includes entry, selection into exporting activity, physical capital accumulation and endogenous labor supply. We find that once the stochastic process for exogenous productivity is calibrated to account for the endogenous dynamics in TFP created by the number of firms, and the time-series volatility of entry is calibrated to the data, our model yields minimal departures from the Backus et al. (1992) benchmark. The richer model shares all of the successes of the previous model in terms of the volatilities of aggregate quantities, as well as its failures in terms of replicating patterns of international co-movement and the volatility of international relative prices.

Expectation Propagation for Likelihood-Free Inference


Journal of the American Statistical Association

March 2014, vol. 109, no. 505, pp. 315-333

Departments: Economie et Sciences de la décision

Keywords: Approximate Bayesian computation, Approximate inference, Composite likelihood, Quasi-Monte Carlo

Many models of interest in the natural and social sciences have no closed-form likelihood function, which means that they cannot be treated using the usual techniques of statistical inference. In the case where such models can be efficiently simulated, Bayesian inference is still possible thanks to the approximate Bayesian computation (ABC) algorithm. Although many refinements have been suggested, ABC inference is still far from routine. ABC is often excruciatingly slow due to very low acceptance rates. In addition, ABC requires introducing a vector of "summary statistics" s(y), the choice of which is relatively arbitrary and often requires some trial and error, making the whole process laborious for the user. We introduce in this work the EP-ABC algorithm, which is an adaptation to the likelihood-free context of the variational approximation algorithm known as expectation propagation. The main advantage of EP-ABC is that it is faster by a few orders of magnitude than standard algorithms, while producing an overall approximation error that is typically negligible. A second advantage of EP-ABC is that it replaces the usual global ABC constraint ‖s(y) − s(y*)‖ ≤ ε, where s(y*) is the vector of summary statistics computed on the whole dataset, by n local constraints of the form ‖s_i(y_i) − s_i(y*_i)‖ ≤ ε that apply separately to each data point. In particular, it is often possible to take s_i(y_i) = y_i, making it possible to do away with summary statistics entirely. In that case, EP-ABC makes it possible to approximate directly the evidence (marginal likelihood) of the model. Comparisons are performed in three real-world applications that are typical of likelihood-free inference, including one application in neuroscience that is novel, and possibly too challenging for standard ABC techniques.
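
For readers unfamiliar with the baseline, the plain rejection-ABC scheme that EP-ABC accelerates can be sketched in a few lines. The toy Gaussian model, prior, and tolerance below are illustrative assumptions, not taken from the paper; this is the standard global-constraint algorithm, not EP-ABC itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative toy model (not from the paper):
# data y_i ~ Normal(theta, 1), prior theta ~ Normal(0, 5).
y_obs = rng.normal(1.0, 1.0, size=100)

def rejection_abc(y_obs, eps=0.05, n_draws=50_000):
    """Plain rejection ABC with one global constraint
    |s(y_sim) - s(y_obs)| <= eps, using the sample mean as s."""
    s_obs = y_obs.mean()
    thetas = rng.normal(0.0, np.sqrt(5.0), size=n_draws)
    # one simulated dataset per prior draw (row i uses thetas[i])
    y_sim = rng.normal(thetas[:, None], 1.0, size=(n_draws, len(y_obs)))
    keep = np.abs(y_sim.mean(axis=1) - s_obs) <= eps
    return thetas[keep]

samples = rejection_abc(y_obs)
# accepted draws approximate the posterior of theta; note the low
# acceptance rate even in this easy conjugate example
```

Even in this deliberately easy setting, only a small fraction of prior draws is accepted, which illustrates the acceptance-rate problem the abstract describes; EP-ABC instead processes the n per-datapoint constraints one site at a time.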

La controverse sur l'entreprise (1940-1950) et la formation de l'"irréalisme méthodologique"


Economies et Sociétés

November-December 2014, no. 51, pp. 1805-1860

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: Enterprise, Firm, Firms, Marginalism, Methodological, Neo-classical, Philosophy, Realism

During the 1940s and 1950s, the neoclassical theory of the firm sometimes appeared to be under threat. Some Oxford economists developed a "full cost" doctrine that they presented as antagonistic to marginalism, while the American Economic Review published heterodox work on the average-cost curve and on the decision rules actually applied by business managers. We analyze these various objections, as well as the response strategies devised by the defenders of the traditional approach (notably Machlup). We show that the "antimarginalists'" criticisms proved ineffective, partly because of their logical and technical weaknesses, and partly because of the twofold reformulation that the "marginalists" imposed on the theory of the firm during the controversy. On the one hand, the theory managed to evade certain objections by choosing to define itself through the abstract hypothesis of profit maximization, rather than through the existence or properties of marginal curves. On the other hand, and more fundamentally, the adoption of a principle of "methodological unrealism" shielded the theory from any psychologizing objection, and even from certain difficulties revealed by static and comparative-static analysis. The article takes particular care to specify this methodological reformulation, which anticipates Friedman's famous 1953 thesis and a position that has since become commonplace. It analyzes the notion of "unrealism" in its historical context, showing how it functions as an ad hoc protective device that is difficult to justify from the standpoint of a rigorous philosophy of science.

No-Betting Pareto Dominance



July 2014, vol. 82, no. 4, pp. 1405-1442

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: Pareto efficiency; betting; speculation; Pareto dominance; beliefs

We argue that the notion of Pareto dominance is not as compelling in the presence of uncertainty as it is under certainty. In particular, voluntary trade based on differences in tastes is commonly accepted as desirable, because tastes cannot be wrong. By contrast, voluntary trade based on incompatible beliefs may indicate that at least one agent entertains mistaken beliefs. We propose and characterize a weaker, No-Betting, notion of Pareto domination which requires, on top of unanimity of preference, the existence of shared beliefs that can rationalize such preference for each agent.

On the Limit Perfect Public Equilibrium Payoff Set in Repeated and Stochastic Games


Games and Economic Behavior

May 2014, vol. 85, no. 1, pp. 70-83

Departments: Economie et Sciences de la décision, GREGHEC (CNRS)

Keywords: Stochastic games, Repeated games, Folk theorem

This paper provides a characterization, dual to the existing ones, of the limit set of perfect public equilibrium payoffs in a class of finite stochastic games (in particular, repeated games) as the discount factor tends to one. As a first corollary, the folk theorems of Fudenberg et al. (1994), Kandori and Matsushima (1998) and Hörner et al. (2011) obtain. As a second corollary, it is shown that this limit set of payoffs is a convex polytope when attention is restricted to perfect public equilibria in pure strategies. This result fails for mixed strategies, even when attention is restricted to two-player repeated games.