**Gustav Karreskog**

Ph.D. Student in Economics

Stockholm School of Economics

gustav.karreskog@phdstudent.hhs.se

+46 (0) 762 10 42 20

**Research Fields:** Microeconomic Theory, Behavioral Economics, Experimental Economics

**Topics:** Learning in Games, Bounded Rationality, Machine Learning

I am a Ph.D. student in economics at the Stockholm School of Economics. I am on the academic job market 2020/2021.

My research lies primarily at the intersection of microeconomic theory and experimental economics. In particular, it aims to understand boundedly rational decision making: how incentives and experience guide human decisions via learning and heuristics, and how this shapes population behavior and economic outcomes.

*with Frederick Callaway and Thomas L. Griffiths (PDF)*

**Abstract:**
Work in behavioral economics suggests that perfect rationality is an insufficient model of human decision making. However, the empirically observed deviations or biases vary substantially between environments. There is, therefore, a need for theories that can tell us when and how we should expect deviations from rational behavior. We suggest that such a theory can be found by assuming optimal use of limited cognitive resources. In this paper, we present a theory of human behavior in one-shot interactions based on the rational use of heuristics. We test our theory by defining a broad family of heuristics for one-shot games and associated cognitive cost functions. In a large, preregistered experiment, we find that behavior is well predicted by our theory, which yields better predictions than existing models. We find that the participants’ actions depend on their environment and previous experiences, in the way predicted by the rational use of heuristics.

*with Drew Fudenberg (PDF, Online Appendix)*

**Abstract:**
We predict cooperation rates across treatments in the experimental play of the indefinitely repeated prisoner’s dilemma using simulations of a simple learning model. We suppose that learning and the game parameters only influence play in the initial round of each supergame. Using data from 17 papers, we find that our model predicts out-of-sample cooperation at least as well as more complicated models with more parameters and machine learning algorithms. Our results let us predict how cooperation rates change with longer experimental sessions, and explain and sharpen past findings on the role of strategic uncertainty.

*with Alexander Aurell (arXiv, PDF)*

**Abstract:**
It is common to model learning in games so that either a deterministic process or a finite-state Markov chain describes the evolution of play. Such processes can, however, produce undesired outputs in which the players' behavior is heavily influenced by the modeling choices. In simulations, we see how the assumptions in Young (1993), a well-studied model of stochastic stability, lead to unexpected behavior in games without strict equilibria, such as Matching Pennies. This behavior should be considered a modeling artifact. In this paper, we propose a continuous state space model for learning in games that can converge to mixed Nash equilibria, the Recency Weighted Sampler (RWS). The RWS is similar in spirit to Young's model but introduces a notion of best response in which the players sample from a recency-weighted history of interactions. We derive properties of the RWS that are known to hold for finite state space models of adaptive play, such as the existence of and convergence to a unique invariant distribution of the process, and the concentration of that distribution on minimal CURB blocks. We then establish conditions under which the RWS process concentrates on mixed Nash equilibria inside minimal CURB blocks. In deriving these results, we develop a methodology that is relevant for a larger class of continuous state space learning models.
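The sample-and-best-respond idea behind the RWS can be sketched in a few lines. This is a toy illustration only, not the paper's model: the window length, sample size, and geometric decay rate below are my own assumed parameter choices, applied to Matching Pennies.

```python
import numpy as np

# Row player's Matching Pennies payoffs: she wins when the actions match.
ROW_PAYOFF = np.array([[1.0, -1.0], [-1.0, 1.0]])

def rws_response(opponent_history, rng, payoff=ROW_PAYOFF,
                 sample_size=10, decay=0.9, window=50):
    """Best respond to a sample drawn from a recency-weighted history."""
    recent = np.asarray(opponent_history[-window:])
    # geometric weights: the most recent interaction gets the largest weight
    weights = decay ** np.arange(len(recent) - 1, -1, -1)
    weights = weights / weights.sum()
    sample = rng.choice(recent, size=sample_size, p=weights)
    freq = np.bincount(sample, minlength=2) / sample_size
    # best respond to the empirical mixture in the sample
    return int(np.argmax(payoff @ freq))

def simulate(n_rounds=2000, seed=0):
    rng = np.random.default_rng(seed)
    hist_row, hist_col = [rng.integers(2)], [rng.integers(2)]
    for _ in range(n_rounds):
        a_row = rws_response(hist_col, rng)                      # row samples column's past play
        a_col = rws_response(hist_row, rng, payoff=-ROW_PAYOFF)  # zero-sum opponent
        hist_row.append(a_row)
        hist_col.append(a_col)
    # long-run action frequencies; the mixed equilibrium is 50/50
    return np.mean(hist_row), np.mean(hist_col)
```

Because each player best responds to a noisy, recency-weighted sample rather than to the full deterministic history, long-run play can hover around the mixed equilibrium instead of locking into the cycling artifact discussed above.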

*with Isak Trygg Kupersmidt and Pavel Kurasov*

*Proceedings of the American Mathematical Society* 144.3 (2016): 1197-1207. (Journal, PDF)

**Abstract:**
Spectral properties of the Schrödinger operator on a finite compact metric graph with delta-type vertex conditions are discussed. Explicit estimates for the lowest eigenvalue (ground state) are obtained using two different methods: Eulerian cycle and symmetrization techniques.

*with Benjamin Mandl*

**Description**
We seek to understand context effects, such as default and decoy effects, from the perspective of adaptive heuristics. The fundamental insight is that when a decision maker (DM) faces a decision problem in which she is uncertain about the values of the different alternatives, context and cues can affect the conditional expectations of those values, even if they do not directly influence the value of the options. If a default is set because someone with good intentions and better information recommends it, conditioning on that information should affect the decision of an uncertain but rational DM. If, on the other hand, the default is set randomly, even an uncertain DM should ignore it. We seek to test whether this can explain known decoy effects by comparing situations where the conditional expectations should and should not change based on the cues, in otherwise identical settings.
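The conditioning logic can be made concrete with a small simulation. The numbers and setup below are illustrative assumptions of my own, not the project's design: two options with i.i.d. unknown values, and a recommender who sets the default to the better option with probability q. Conditioning on which option is the default is then informative when q > 0, and uninformative when the default is random (q = 0).

```python
import numpy as np

def expected_value_given_default(q, n=200_000, seed=0):
    """Monte Carlo estimate of E[value of option 0 | option 0 is the default],
    where the recommender knows the better option with probability q."""
    rng = np.random.default_rng(seed)
    values = rng.normal(size=(n, 2))          # latent values of options 0 and 1
    better = np.argmax(values, axis=1)        # index of the better option
    informed = rng.random(n) < q              # recommender informed w.p. q
    default = np.where(informed, better, rng.integers(2, size=n))
    mask = default == 0                       # condition on option 0 being the default
    return values[mask, 0].mean()

uninformative = expected_value_given_default(q=0.0)  # purely random default
informative = expected_value_given_default(q=0.9)    # mostly informed default
```

Under these toy assumptions, a rational but uncertain DM should revise her expectation of the defaulted option upward only in the informed case, which is exactly the contrast the project proposes to test experimentally.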

An important question is how best to estimate learning models from experimental data. Common approaches, which estimate individual parameters from the exact sequence of decisions made, are known to suffer from problems such as low power and biased estimates (Salmon, 2001; Wilcox, 2006). In this project, I suggest that instead of focusing on each decision taken by individuals, we should search for learning models that are likely to reproduce the time-path of the population's behavior. Using data simulated under different assumptions, I show that applying Approximate Bayesian Computation to find the learning models most likely to reproduce the population's time-path yields more reliable estimates. Furthermore, this approach ensures that we capture the aspects of learning models with the most important implications. Lastly, I apply the method to existing data to derive new conclusions.
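A minimal sketch of the estimation idea, using rejection-ABC on a toy reinforcement-learning model. The model, its single learning-rate parameter, the payoffs, and the tolerance are all illustrative assumptions, not the specification used in the project; the point is only that the distance is computed between population time-paths, not between individual decision sequences.

```python
import numpy as np

def simulate_population(learning_rate, n_agents=200, n_rounds=50,
                        payoffs=(1.0, 0.6), seed=0):
    """Simulate a population of simple reinforcement learners choosing
    between two actions; return the time-path of the share choosing action 0."""
    rng = np.random.default_rng(seed)
    attractions = np.ones((n_agents, 2))   # initial propensities, per agent
    path = np.empty(n_rounds)
    for t in range(n_rounds):
        probs = attractions / attractions.sum(axis=1, keepdims=True)
        choices = (rng.random(n_agents) > probs[:, 0]).astype(int)
        path[t] = np.mean(choices == 0)
        # reinforce the chosen action with its payoff, scaled by the learning rate
        rewards = np.array(payoffs)[choices]
        attractions[np.arange(n_agents), choices] += learning_rate * rewards
    return path

def abc_estimate(observed_path, n_draws=500, tolerance=0.1):
    """Rejection-ABC: draw candidate learning rates from a uniform prior and
    keep those whose simulated population time-path is close to the data."""
    rng = np.random.default_rng(1)
    accepted = []
    for i in range(n_draws):
        candidate = rng.uniform(0.01, 1.0)
        sim_path = simulate_population(candidate, seed=i)
        # distance between population time-paths, not individual sequences
        if np.mean(np.abs(sim_path - observed_path)) < tolerance:
            accepted.append(candidate)
    return np.array(accepted)

# "Observed" data generated from a known learning rate, for illustration
observed = simulate_population(learning_rate=0.5, seed=42)
posterior = abc_estimate(observed)
```

The accepted draws approximate a posterior over the learning rate; because acceptance depends only on the aggregate time-path, the procedure targets the aspects of the learning model that matter for population behavior.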