Articles

La crise financière : enseignements et perspectives

O. KLEIN

Revue d'Économie Financière

March 2015, n°117, pp.277-293

Departments: Economics and Decision Sciences


The latest financial crisis, whose consequences are still felt today in the zero or slowed growth experienced across the different regions of the world, was of a violence unmatched since the Second World War. Drawing its lessons, and sketching the still uncertain perspectives it opens, requires returning to the causes of the great financial and economic crisis of 2007-2009, and then to the idiosyncratic causes of the euro area crisis. Finally, we attempt to draw a few lessons and ask whether this crisis has been resolved or whether it could flare up again.

Preferences with grades of indecisiveness

S. MINARDI, A. SAVOCHKIN

Journal of Economic Theory

January 2015, vol. 155, pp.300-331

Departments: Economics and Decision Sciences

Keywords: Incomplete preferences; Knightian uncertainty; Graded preferences; Confidence; Decisiveness

http://dx.doi.org/10.1016/j.jet.2014.11.009


Departing from the traditional approach of modeling indecisiveness based on the weakening of the completeness axiom, we introduce the notion of graded preferences: The agent is characterized by a binary relation over (ordered) pairs of alternatives, which allows her to express her inclination to prefer one alternative over another and her confidence in the relative superiority of the indicated alternative. In the classical Anscombe-Aumann framework, we derive a representation of a graded preference by a measure of the set of beliefs that rank one option better than the other. Our model is a refinement of Bewley's [6] model of Knightian uncertainty: It is based on the same object of representation — the set of beliefs — but provides more information about how the agent compares alternatives.
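As a purely schematic illustration of the kind of representation described above (the notation is assumed here, not taken from the paper): writing Π for the set of beliefs, u for a utility index and μ for a measure over beliefs, a graded preference between acts f and g could be rendered as

\[ \rho(f,g) \;=\; \mu\big(\{\, p \in \Pi \;:\; \mathbb{E}_p[u \circ f] \ge \mathbb{E}_p[u \circ g] \,\}\big), \]

so that ρ(f,g) measures how large the set of beliefs under which f is weakly better than g is, i.e. the agent's confidence in preferring f over g.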

Ranking multidimensional alternatives and uncertain prospects

P. MONGIN, M. PIVATO

Journal of Economic Theory

May 2015, vol. 157, pp.146-171

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Multiattribute utility, Separability, Subjective probability, Harsanyi, Koopmans, Ex ante versus ex post welfare

http://dx.doi.org/10.1016/j.jet.2014.12.013


We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans's (1960) [25] axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi's (1955) [21] Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann's (1963) [1] finite case by merely assuming that there are two epistemically independent sources of uncertainty.
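As an illustrative sketch only (the notation is assumed here, and the paper's exact domains and axioms are not reproduced): for a matrix alternative x = (x_{ij}) with rows i and columns j, a ranking that is additively separable in rows and columns can be represented by a sum of the form

\[ V(x) \;=\; \sum_{i}\sum_{j} \phi_{ij}(x_{ij}), \]

with continuous, increasing component functions φ_{ij}; rows and columns play the role of, e.g., goods and dates in application (1), or individuals and states of nature in applications (2) and (3).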

Rationality and the Bayesian paradigm

I. GILBOA

Journal of Economic Methodology

September 2015, vol. 22, n°3, pp.312-334

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Rationality, Probability, Reasoning


It is argued that, contrary to a rather prevalent view within economic theory, rationality does not imply Bayesianism. The note begins by defining these terms and justifying the choice of these definitions, proceeds to survey the main justification for this prevalent view, and concludes by highlighting its weaknesses.

Representation theorems and the semantics of decision-theoretic concepts

M. COZIC, B. HILL

Journal of Economic Methodology

2015, vol. 22, n°3, pp.292-311

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Decision theory, Axiomatization, Theoretical terms, Utility, Probability

http://dx.doi.org/10.1080/1350178X.2015.1071503


Contemporary decision theory places crucial emphasis on a family of mathematical results called representation theorems, which relate criteria for evaluating the available options (such as the expected utility criterion) to axioms pertaining to the decision-maker’s preferences (for example, the transitivity axiom). Various claims have been made concerning the reasons for the importance of these results. The goal of this article is to assess their semantic role: representation theorems are purported to provide definitions of the decision-theoretic concepts involved in the evaluation criteria (such as those of utility or subjective probability that feature in the subjective expected utility criterion). In particular, this claim shall be examined from the perspective of philosophical theories of the meaning of theoretical terms.
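For concreteness, the kind of result at stake can be sketched as follows (a generic subjective expected utility theorem, stated loosely with notation assumed here, not as in any particular source in the article): a preference relation ≿ over acts f, g mapping states to outcomes satisfies a given list of axioms (completeness, transitivity, independence, continuity, ...) if and only if there exist a probability measure p over states and a utility function u over outcomes such that

\[ f \succsim g \;\iff\; \sum_{s} p(s)\, u\big(f(s)\big) \;\ge\; \sum_{s} p(s)\, u\big(g(s)\big), \]

with p unique and u unique up to positive affine transformation. The semantic question the article examines is whether such a theorem thereby defines the theoretical terms p and u.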

Stochastic games

E. SOLAN, N. VIEILLE

Proceedings of the National Academy of Sciences of the United States of America (PNAS)

November 2015, vol. 112, n°45, pp.13743-13746

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Game theory, Stochastic games


In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which was the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution.
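As a reminder of the object involved (a standard textbook formulation, with notation assumed here rather than drawn from the Perspective): in a two-player zero-sum stochastic game with state space S, stage payoff r(s, a, b), transition probabilities q(s' | s, a, b) and discount factor λ in (0, 1), Shapley's result yields a unique value function v_λ satisfying

\[ v_\lambda(s) \;=\; \operatorname{val}_{a,b}\Big[ (1-\lambda)\, r(s,a,b) \;+\; \lambda \sum_{s' \in S} q(s' \mid s,a,b)\, v_\lambda(s') \Big], \]

where val denotes the value of the one-shot matrix game in brackets, and stationary optimal strategies can be read off from the optimal mixed actions of these auxiliary games.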

The Poisson transform for unnormalised statistical models

N. CHOPIN, S. BARTHELMÉ

Statistics and Computing

July 2015, vol. 25, n°4, pp.767-780

Departments: Economics and Decision Sciences


Contrary to standard statistical models, unnormalised statistical models only specify the likelihood function up to a constant. While such models are natural and popular, the lack of normalisation makes inference much more difficult. Extending classical results on the multinomial-Poisson transform (Baker, 1994), we show that inferring the parameters of an unnormalised model on a space Ω can be mapped onto an equivalent problem of estimating the intensity of a Poisson point process on Ω. The unnormalised statistical model now specifies an intensity function that does not need to be normalised. Effectively, the normalisation constant may now be inferred as just another parameter, at no loss of information. The result can be extended to cover non-IID models, which include, for example, unnormalised models for sequences of graphs (dynamical graphs) or for sequences of binary vectors. As a consequence, we prove that unnormalised parametric inference in non-IID models can be turned into a semi-parametric estimation problem. Moreover, we show that the noise-contrastive estimation method of Gutmann and Hyvärinen (2012) can be understood as an approximation of the Poisson transform, and extended to non-IID settings. We use our results to fit spatial Markov chain models of eye movements, where the Poisson transform allows us to turn a highly non-standard model into vanilla semi-parametric logistic regression.

Unnormalised statistical models are a core tool in modern machine learning, especially deep learning (Salakhutdinov and Hinton 2009), computer vision (Markov random fields, Wang et al. 2013) and statistics for point processes (Gu and Zhu 2001), network models (Caimo and Friel 2011) and directional data (Walker 2011). They appear naturally whenever one can best describe data as having to conform to certain features: we may then define an energy function that measures how well the data conform to these constraints. While this way of formulating statistical models is extremely general and useful, immense technical difficulties may arise whenever the energy function involves some unknown parameters which have to be estimated from data. The reason is that the normalisation constant (which ensures that the distribution integrates to one) is in most cases impossible to compute. This prevents direct application of classical methods of maximum likelihood or Bayesian inference, which all depend on the unknown normalisation constant.

Many techniques have been developed in recent years for such problems, including contrastive divergence (Hinton 2002; Bengio and Delalleau 2009), noise-contrastive estimation (Gutmann and Hyvärinen 2012) and various forms of MCMC for Bayesian inference (Møller et al. 2006; Murray et al. 2006; Girolami et al. 2013). The difficulty is compounded when unnormalised models are used for non-IID data, either sequential data or data that include covariates. If the data form a sequence of length n, there are now n normalisation constants to approximate. In our application we look at models of spatial Markov chains, where the transition density of the chain is specified up to a normalisation constant, and again one normalisation constant needs to be estimated per observation.

In the first section, we show that unnormalised estimation is tightly related to the estimation of point process intensities, and formulate a Poisson transform that maps the log-likelihood of a model L(θ) into an equivalent cost function M(θ, ν) defined in an expanded space, where the latent variables ν effectively estimate the normalisation constants. In the case of non-IID unnormalised models we show further that optimisation of M(θ, ν) can be turned into a semi-parametric problem and addressed using standard kernel methods. In the second section, we show that the noise-contrastive divergence described in Gutmann and Hyvärinen (2012) arises naturally as a tractable approximation of the Poisson transform, and that this new interpretation lets us extend its use to non-IID models. (Gutmann and Hyvärinen (2012) call the technique “noise-contrastive estimation”, but we use the term noise-contrastive divergence to designate the corresponding cost function.) Finally, we apply these results to a class of unnormalised spatial Markov chains that are natural descriptions of eye movement sequences.
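To fix ideas, here is a simplified IID sketch of the transform described above; the notation and normalisation conventions are assumptions made here and may differ from the paper's. For an unnormalised model with density proportional to exp f_θ(x) on Ω and data x_1, ..., x_n, the log-likelihood is

\[ L(\theta) \;=\; \sum_{i=1}^{n} f_\theta(x_i) \;-\; n \log \int_\Omega e^{f_\theta(x)}\,\mathrm{d}x . \]

Introducing a free scalar ν and the Poisson-type objective

\[ M(\theta,\nu) \;=\; \sum_{i=1}^{n} \big( f_\theta(x_i) + \nu \big) \;-\; e^{\nu} \int_\Omega e^{f_\theta(x)}\,\mathrm{d}x , \]

one checks that the maximum over ν is attained at e^{ν} = n / ∫_Ω e^{f_θ}, and that max_ν M(θ, ν) = L(θ) + n log n − n. Maximising M jointly in (θ, ν) therefore reproduces maximum likelihood in θ, while the latent variable ν tracks the negative log normalisation constant, which is the sense in which the constant becomes just another parameter.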

Truthful Equilibria in Dynamic Bayesian Games

J. HÖRNER, S. TAKAHASHI, N. VIEILLE

Econometrica

September 2015, vol. 83, n°5, pp.1795-1848

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Bayesian games, Repeated games, Folk theorem

http://dx.doi.org/10.2139/ssrn.2552709


This paper characterizes an equilibrium payoff subset for dynamic Bayesian games as discounting vanishes. Monitoring is imperfect, transitions may depend on actions, types may be correlated and values may be interdependent. The focus is on equilibria in which players report truthfully. The characterization generalizes that for repeated games, reducing the analysis to static Bayesian games with transfers. With independent private values, the restriction to truthful equilibria is without loss, except for the punishment level; if players withhold their information during punishment-like phases, a folk theorem obtains.

Wrong-way driving crashes on French divided roads

E. KEMEL

Accident Analysis and Prevention

February 2015, vol. 75, pp.69-76

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Wrong-way driving, Logistic regression, Elderly driver, Drunk driving


Context: The objective of divided roads is to increase users’ safety by posting unidirectional traffic flows. It happens, however, that drivers proceed in the wrong direction, endangering themselves as well as other users. The crashes caused by wrong-way drivers are generally spotlighted by the media and call for public intervention.

Objectives: This paper proposes a characterization of wrong-way driving crashes occurring on French divided roads over the 2008-2012 period. The objective is to identify the factors that distinguish wrong-way driving crashes from other crashes.

Method: Building on the national injury road crash database, 266 crashes involving a wrong-way driver were identified. Their characteristics (related to timing, location, vehicle and driver) are compared to those of the 22,120 other crashes that occurred on the same roads over the same period. The comparison relies on descriptive statistics, completed by a logistic regression.

Results: Wrong-way driving crashes are rare but severe. They are more likely than other crashes to occur during night hours and on non-freeway roads. Wrong-way drivers are older, and more likely to be intoxicated, to be locals, and to drive older vehicles, mainly passenger cars without passengers, than other drivers.

Perspectives: The differences observed across networks can help prioritize public intervention. Most of the identified wrong-way driving factors deal with cognitive impairment. Therefore, specific countermeasures such as alternative road signs should be designed for and tested on cognitively impaired drivers. Nevertheless, wrong-way driving factors are also risk factors for other types of crashes (e.g. elderly driving, drunk driving and vehicle age). This suggests that, instead of (or in addition to) developing countermeasures specific to wrong-way driving, managing these risk factors would help reduce a larger number of crashes.
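As an illustration of the kind of analysis described in the Method paragraph (a minimal sketch only; the file name and column names below are hypothetical placeholders, not taken from the study), such a logistic regression could be run in Python as follows:

# Minimal sketch: logistic regression distinguishing wrong-way driving crashes
# from other crashes on the same divided roads. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per crash; 'wrong_way' equals 1 for wrong-way crashes, 0 otherwise.
crashes = pd.read_csv("crashes_divided_roads_2008_2012.csv")  # hypothetical file

model = smf.logit(
    "wrong_way ~ night + non_freeway + driver_age + alcohol"
    " + local_driver + vehicle_age + no_passenger",
    data=crashes,
).fit()

print(model.summary())  # odds ratios: exponentiate model.params

Each estimated coefficient then indicates how a given factor shifts the odds that a crash involves a wrong-way driver rather than being an ordinary crash on the same network.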

A primal condition for approachability with partial monitoring

S. MANNOR, V. PERCHET, G. STOLTZ

Journal of Dynamics and Games

July 2014, vol. 1, n°3, pp.447-469

Departments: Economics and Decision Sciences, GREGHEC (CNRS)

Keywords: Approachability theory, Online learning, Imperfect monitoring, Partial monitoring, Signals


In approachability with full monitoring there are two types of conditions that are known to be equivalent for convex sets: a primal and a dual condition. The primal one is of the form: a set C is approachable if and only if all half-spaces containing it are approachable in the one-shot game. The dual condition is of the form: a convex set C is approachable if and only if it intersects all payoff sets of a certain form. We consider approachability in games with partial monitoring. In previous works [5,7] we provided a dual characterization of approachable convex sets and we also exhibited efficient strategies in the case where C is a polytope. In this paper we provide primal conditions for a convex set to be approachable with partial monitoring. They depend on a modified reward function and lead to approachability strategies that are based on modified payoff functions and proceed by projections, similarly to Blackwell's (1956) strategy. This is in contrast with previously studied strategies in this context, which relied mostly on the signaling structure and aimed at accurately estimating the distributions of the signals received. Our results generalize classical results by Kohlberg [3] (see also [6]) and apply to games with arbitrary signaling structure as well as to arbitrary convex sets.
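For reference, the full-monitoring benchmark that these conditions refine can be sketched as follows (standard notation assumed here, not taken from the paper): with vector payoff function r(p, q) for mixed actions p and q, a half-space H = { z : ⟨a, z⟩ ≤ b } is approachable in the one-shot game if and only if

\[ \min_{p} \max_{q} \ \langle a,\, r(p,q) \rangle \;\le\; b, \]

and Blackwell's primal characterization then states that a closed convex set C is approachable if and only if every half-space containing C satisfies this condition. The paper's primal condition plays the analogous role when only signals, rather than payoffs, are observed.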

