In search of a dispassionate tribunal

Some thoughts on Experimental Economics on the occasion of the 2002 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel

Nicholas Theocarakis, University of Athens
Yanis Varoufakis, University of Athens and University of Sydney

The Lure of the Laboratory

The shape of every theory is the product of a particular moral idea. But the combination of moral engagement with strong, independent, and sometimes original judgement helps a theory transcend its moral origins to become universally relevant. The moral idea at the heart of early political economy was that the free interaction of economic agents within the institution of the market would lead to an ‘equilibrium’ with certain desirable qualities. Classical thinkers, from Boisguilbert to Adam Smith, were content to utilise powerful prose, impressive logic and the occasional allusion to metaphysical notions (such as the ‘invisible hand’) to make their point. Things changed drastically with the marginalist (or neo-classical) ‘revolution’ that occurred in the 1870s and has shaped mainstream economics ever since.

The great shift was that, quite suddenly, economists sought to distance themselves from the realm of politics, ethics, and history and threw in their lot with the natural sciences. Instead of striving to persuade in the manner of an orator, of a philosopher, or of a public intellectual, the emergent neo-classical economists courted acceptance by emulating the scientific method of seventeenth century physics. Just as Newton studied objects (for example, bodies, planets) characterised by mass and location, and ruled over by external gravitational forces, the neo-classics focused on social atoms (individuals and firms) defined in terms of preferences/technologies, and ruled over by market forces. Where Newton had placed his Principle of Energy Conservation, neo-classicists inserted their very own Principle of Utility Maximisation (Mirowski 1989). And just as Newton had extracted from his Principle famous mathematical relations (force equals mass times acceleration), the neo-classics contrived their own (marginal gain equals marginal loss).

This is, however, where the analogy loses momentum. For the physicists’ next step was to take Newton’s equation to the laboratory and subject it to remorseless scrutiny at the tribunal of ‘sense experience’. Alas, economists had no equivalent tribunal to turn to. The laboratory weeds out all sorts of false theories and forces physicists to disagree only when they debate phenomena either too large or too small for the laboratory (for example, quanta, the nature of black holes, or other exotic phenomena that our senses have no access to). Economists, on the other hand, manage to disagree even on the most tangible aspects of everyday life (for example, the measurement of unemployment, the meaning of probability in social life). Naturally, a laboratory in which the facts knock economists like us into shape would be greatly appreciated by all (especially by hapless students and policy-makers).

Is econometrics the answer? With all due respect to our econometrician friends, the most they can offer economists is the empirical status of astronomy (Smith 1987a, p. 241). At best, they help us observe phenomena (in the manner of astronomers) through an imperfect lens. Then we try to guess whether these observations could have emerged out of a theory different to ours. Why guess? Because unless the ‘phenomenon’ under study can be ‘re-run’ under different conditions, it is doubtful whether we can have something definitive to say about the correspondence between theory and observation. Just as we cannot re-run the Big Bang under different conditions, it is impossible to do likewise with the Great Depression. Consequently, the causes of unemployment are as much out of reach of scientific certainty as the origins of the universe. Econometrics is, of course, useful. But it offers little relief from the lack of a proper laboratory.

With these thoughts in mind, one may rejoice at the good news from Stockholm: it seems that this black hole plaguing economic science has been plugged. At least this seems to have been the thought in the Nobel committee’s collective mind when they decided to award the coveted prize to two people, Vernon L. Smith (b. 1927) and Daniel Kahneman (b. 1934), hailed as the pillars of the new sub-discipline of experimental economics. Some extra research reveals that experimental economics is the fastest growing field in economics departments around the world. A handbook of experimental economics was published recently comprising more than 720 dense pages (Kagel & Roth 1995). Indeed, today there are whole journals devoted to laboratory economic experiments while the established journals host an increasing number of similar papers. So, does this all mean that economists have at long last acquired access to the coveted laboratory?

That there are now many laboratories, in which exciting economic experiments take place, is beyond doubt. What remains unclear is the precise function of these laboratories and their relation to economic theory. Is economics becoming an experimental science, in the mould of physics? Or are we economists acquiring expensive playgrounds in which to continue the familiar disputes under new guises? A brief look at the work of the two experimentalists recently elevated to Nobel status might offer useful hints.

Testing the Invisible Hand: Vernon Smith’s Mission

Vernon Smith attempted to make the invisible hand visible (Coursey 2002). Unlike many of his colleagues, Smith realised early on the centrality of institutional settings in exploring the workings of market mechanisms. The most illustrious examples of abstract approaches to the latter were the Walrasian tâtonnement (Walras 1954) and Edgeworth’s (1881) recontracting process. Smith used laboratory experiments in order to derive empirical laws regarding these processes. He craved insights denied us by theory: under what particular settings is convergence to equilibrium more likely than not? The laboratory allowed him to vary the parameters characterising agent interaction and to study convergence to equilibrium in a manner that reflected the Hayekian ‘circumstances of time and place’ (Smith 1987a).

Smith started his experiments as an assistant to Edward Chamberlin, the Harvard Professor and theorist of monopolistic competition (Chamberlin 1933). Chamberlin (1948) used classroom experiments in order to demonstrate that competitive market behaviour is not characteristic of actual interactions among economic agents. Smith thought that his mentor was being unfair to the model of competitive pricing. In his 1956 classroom experiments (Smith 1962), he divided a class of 22 students randomly and equally between ‘buyers’ and ‘sellers’. Each agent was randomly assigned a ‘reservation price’ for a unit of a notional good: each buyer a different ‘resale’ price, and each seller a different ‘production’ cost, for the same good. A buyer’s gain from trade was the difference between his or her resale price and the price paid; a seller’s, the difference between the price received and the production cost. The experimental setting was that of a ‘double oral auction’, in which buyers and sellers call out bid and ask prices until there is a match.
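The competitive prediction for such an induced-value market can be computed directly: sort the buyers’ resale values into a demand schedule, the sellers’ costs into a supply schedule, and find where they cross. A minimal sketch, with hypothetical numbers rather than Smith’s actual parameters:

```python
# Hypothetical induced-value market (the numbers are illustrative, not Smith's):
# each buyer can resell one unit at 'value', each seller produce one at 'cost'.

def competitive_equilibrium(values, costs):
    """Return (quantity, price_low, price_high) predicted by competitive theory."""
    v = sorted(values, reverse=True)          # demand: highest resale values first
    c = sorted(costs)                         # supply: lowest costs first
    q = 0
    while q < min(len(v), len(c)) and v[q] >= c[q]:
        q += 1                                # the q-th trade is mutually profitable
    if q == 0:
        return 0, None, None                  # no gains from trade at all
    # Any price in [lo, hi] induces exactly q buyers and q sellers to trade.
    lo = c[q - 1] if q == min(len(v), len(c)) else max(c[q - 1], v[q])
    hi = v[q - 1] if q == min(len(v), len(c)) else min(v[q - 1], c[q])
    return q, lo, hi

values = [130, 120, 110, 100, 90, 80]         # buyers' induced resale prices
costs = [60, 70, 80, 90, 100, 110]            # sellers' induced production costs
print(competitive_equilibrium(values, costs))  # -> (4, 90, 100)
```

Four units trade, and any price between 90 and 100 clears the market; the double oral auction is predicted to converge into that band.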

These experiments demonstrated that the competitive outcome was quite robust to information asymmetries (that is, to the fact that each agent knew only her/his reservation price) and that the competitive price still came through even when the number of participants was relatively small. The influence of Smith’s findings can be felt today in the large body of interesting (computer-assisted) experiments (for example, Plott 2000 and Lei, Noussair & Plott 2001).

As anyone who has played a parlour game realises—and parlour games have an illustrious history as prototypes in science from Huygens (1657) to Waldegrave (1713) to von Neumann (1928)—rules are paramount. And although Kriegspiel is not a perfect substitute for war, results do reflect the rules of the game. Smith used experiments to bring to light differences in outcomes caused by the different settings of market institutions. For once, the news that institutions matter was taken seriously, for it came not from an ‘institutionalist’ economist (a species that the mainstream scorns), but from a ‘true blue’ neo-classic who had learnt to respect institutions in his University of Arizona laboratory.

Smith made considerable contributions to experimental methodology. First, he insisted that the gains made by experimental subjects ought to be significant and real, albeit small. Second, he conducted repeated experiments to allow the subjects to acquire familiarity with the rules of the game and to observe the results of this familiarity on the outcome. Third, he experimented with the flexibility of the agents to change the posted prices. Fourth, he developed the induced-value method to control for his subjects’ preferences. This method consists of devising a reward structure that overrides the idiosyncratic evaluations of gains and losses by the subjects and forces them to behave as if they follow a given demand function. By controlling for preferences, the experimental setting can focus on other aspects that reflect the dispersion of private information among the subjects.

Smith (1987b) used these settings experimentally to assess the outcomes of different types of auction. Building on the seminal work of Vickrey (1961), he experimented with four types: the English, the Dutch, the sealed-bid first-price and the sealed-bid second-price auction. His results showed that the predictions of the theory were largely borne out by the experiments, with one exception: Dutch and sealed-bid first-price auctions proved not to be equivalent, leading him to theorise that elements unknown to economists (beyond risk-aversion profiles and expected utility) do in fact enter into an agent’s actions.

Having demonstrated that theory and experiment might diverge in auctions, Smith effectively started an important trend: researchers are now using laboratory experiments in order to argue against some of the assumptions theorists habitually make in order to come up with determinate predictions. Often the experimental data expose the arbitrariness of these assumptions. At the more constructive level, Smith uses the laboratory as aeronautic engineers use the wind tunnel: to assess the aerodynamic characteristics of a prototype (instead of calculating them theoretically). Two examples of the ‘wind tunnel’ approach are the design of auctions for airport landing slots (where passengers’ preferred flight combinations are interrelated) and electricity pricing models (in lieu of standard regulation methods). Indeed, the electricity pricing models applied to real markets in Australia and New Zealand were tested experimentally using Smith’s method.

To recap, Smith’s experimental work has broken new ground for microeconomic research. Instead of a nomo-theoretical approach, where one compares theory and observation, Smith has launched an alternative nomo-empirical approach according to which ‘… one compares the effect of different institutions and/or environments as a means of documenting replicable empirical “laws” that may stimulate modelling energy in new directions’ (Smith 1987a, p. 249).

Testing the Bounds of Reason: Daniel Kahneman’s Psychological Insights

Had Amos Tversky not died in 1996, the 2002 Nobel Prize would have been shared three ways. For Daniel Kahneman and Amos Tversky had collaborated tirelessly for decades to convince economists that the psychology of homo economicus was systematically different from that of homo sapiens; and that by studying these differences in the laboratory, we would gain important insights into economic and social behaviour. They made their mark (and Kahneman continues to do so) by focusing on a number of ways in which the cognitive processes of most people clash with the economists’ assumptions. Before mentioning some milestones in their work, a few words on the origins of homo economicus are in order.

The homo oeconomicus (as it was originally spelt) of nineteenth century neo-classical theory was a thoroughly utilitarian beast. Jevons, and more so Edgeworth, believed that, with the help of German psychophysics, they would eventually pin down the ‘elusive utility’ by means of some hedonimeter. The idea that utility is potentially measurable, and thus comparable across persons (or cardinal, as economists call it), was later abandoned (mainly through the work of Pareto and Hicks). Later it resurfaced in probabilistic garb (without, however, recovering its interpersonal comparability) in the concept of expected utility (von Neumann & Morgenstern 1944, Savage 1954). The resulting Expected Utility Theory (EUT) made additional demands on the cognitive capacities of the already omniscient and hyper-rational homo economicus. Distinctly male in character (Charusheela 2001), homo economicus was reduced to behaviour consistent with (a) his preferences towards risk, and (b) a statistician’s assessment of the games played by Lady Luck.

To give a flavour of EUT: it proffers no view as to whether you ought to have a preference between ‘lotteries’ A and B; where lottery A, if selected, would give you $3 000 with probability 25 per cent (and nothing with probability 75 per cent), while B would pay you $4 000 with probability 20 per cent (and nothing with probability 80 per cent). If you loathe risk, you will tend towards lottery A (since, compared to B, A offers a higher probability of winning an admittedly lower sum). On the other hand, a gambler’s disposition would cause you to prefer B over A. EUT has no opinion on what is best for you. In the economist’s mindset, a taste for risk or safety is just as much a sovereign preference as a craving for jam or honey. However, what EUT insists upon unbendingly is that if you prefer A over B, you must also prefer C over D; where C is the safe option of $3 000 no-questions-asked and D the lottery that gives you $4 000 with probability 80 per cent (and nothing with probability 20 per cent). And vice versa: EUT insists that a preference for B over A must indicate that one also prefers D to C.

Why does EUT demand this? The reason is that EUT is founded on the thought that rational persons choose between risky prospects using two criteria: (a) the actual outcomes that may emerge as a result of one’s risky choice, and (b) the relative probabilities of the ‘good’ versus the ‘bad’ outcomes. In the above example, the actual outcomes are the same regardless of whether you are choosing between A and B or C and D (the potential outcomes are $4 000, $3 000 or $0 in both cases). Moreover, the relative probabilities are also identical (note that in the choice between A and B, the ratio of the probabilities of winning something equals 25 per cent to 20 per cent, or 1.25; the same as in the choice between C and D, where it equals 100 per cent to 80 per cent, again 1.25). As both actual outcomes and relative probabilities are indistinguishable, EUT is forced to claim that the two dilemmas coincide and, therefore, that if you like A more than B, you must (if rational) also like C more than D.
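The arithmetic behind this claim is easy to verify. For any utility function u with u(0) = 0, EU(A) - EU(B) = 0.25u($3 000) - 0.20u($4 000) = 0.25(EU(C) - EU(D)), so the two preferences can never come apart. A quick check, using a handful of sample utility functions (chosen by us purely for illustration):

```python
import math

def eu(lottery, u):
    """Expected utility of a lottery given as [(prize, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

A = [(3000, 0.25), (0, 0.75)]
B = [(4000, 0.20), (0, 0.80)]
C = [(3000, 1.00)]
D = [(4000, 0.80), (0, 0.20)]

# Sample utility functions with u(0) = 0: risk-averse, risk-neutral, risk-loving.
for u in (math.sqrt, lambda x: x, lambda x: x ** 2, lambda x: math.log1p(x)):
    prefers_A = eu(A, u) > eu(B, u)
    prefers_C = eu(C, u) > eu(D, u)
    assert prefers_A == prefers_C  # under EUT the two choices can never come apart
```

Whatever the curvature of u, the A/B ranking and the C/D ranking always agree; the only way to break the link is to abandon EUT itself.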

EUT lies at the heart of economic theories of competition, games, consumer choice, finance and so on. So when Allais (1953, 1987), an earlier Nobel winner, demonstrated that EUT systematically fails to predict actual choices (for example, that most people prefer C to D while also preferring B to A), alarm bells sounded. Kahneman and Tversky (K&T) alarmed the profession further by tabling a theoretical take on EUT’s predictive failure. Consider their famous ‘framing effect’: they demonstrated in the laboratory that people behave differently depending on whether pay-offs are described (or ‘framed’) in terms of gains or of losses (see K&T 1973 and T&K 1973, 1974, 1986). Their hunch was that the predictive failure of EUT in the context of the A/B and C/D choices above must have something to do with the ‘frame’ through which subjects evaluate ‘lotteries’ A, B, C and D.

Not content with having established an empirical regularity antagonistic to EUT, they proceeded to formulate a psychological theory at odds with it. They called it ‘prospect theory’ (K&T 1979, T&K 1991, 1992) and argued that our relative valuation of gains and losses depends on some reference point the origin of which is psychological and irreducible to utility considerations. The result is that we dislike losses much more than we like equal-sized gains with respect to that reference point.

Why is this inconsistent with EUT—and therefore with most of modern economics? As we saw in our A/B versus C/D dilemmas, EUT cannot distinguish between the two. Why? Because it assumes that people are either risk-averse (preferring A and C respectively) or risk-loving (opting for B and D), and that this depends on the ‘shape’ of their preference (or utility) ordering. K&T argue that people lack a single preference ordering between risky situations (or lotteries) such as A, B, C and D. Preferences are predicated upon some reference point that relates to the overall ‘prospects’ facing the subject. To make this concrete, when choosing between C and D in our example, the reference point is the availability of a certain $3 000 payoff (offered by ‘lottery’ C). Most likely, argue K&T, you would be reluctant to select D. The prospect of losing the certain $3 000 (if you choose D and end up winning nothing; something that will happen with probability 20 per cent if you take this risk) overshadows the prospect of winning an extra $1 000 (if you do choose D and things work out nicely). For this reason, when facing the choice between C and D, most people tend to be risk averse.

However, when choosing between A and B, things are very different. Here, the prospects of winning something are bad anyway (a chance of at most 25 per cent of winning anything). When the ‘prospects’ are poor, as in this case, why not throw caution to the wind and go for broke? Thus, K&T continue, you are more likely to ‘gamble’ when presented with the choice between A and B. And this is the main difference between EUT and K&T’s prospect theory. Whereas the former assumes that one’s attitude towards risk is fixed, K&T argue that people tend to be risk-averse when the prospects are good (wary of the possibility of gambling away near-certain benefits) and risk-loving when they are bad. Preferences are, hence, context-dependent rather than exogenous.
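Prospect theory makes this reversal computable. A sketch using the value and probability-weighting functions of T&K (1992) with their published parameter estimates (alpha = 0.88, lambda = 2.25, gamma = 0.61); the single-prize shortcut below is our simplification of the full cumulative form:

```python
def value(x, alpha=0.88, lam=2.25):
    """T&K (1992) value function: concave over gains, lam-times steeper over losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """T&K (1992) weighting function: overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect(prize, prob):
    """Prospect value of 'prize with probability prob, otherwise nothing'."""
    return weight(prob) * value(prize)

# A: $3 000 at 25%; B: $4 000 at 20%; C: $3 000 for sure; D: $4 000 at 80%.
print(prospect(4000, 0.20) > prospect(3000, 0.25))  # -> True  (B preferred to A)
print(prospect(3000, 1.00) > prospect(4000, 0.80))  # -> True  (C preferred to D)
```

With these parameters the model gambles when the prospects are poor (B over A) yet plays safe when a near-certain gain is at stake (C over D): exactly the pattern that EUT rules out.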

Though not universally acclaimed, K&T’s commitment to new theoretical perspectives has sparked off a whole new theoretical agenda (Starmer 2000). Other examples of the deviations from the predictions of EUT that K&T brought to light are the so-called ‘law of small numbers’ (see T&K 1971) and ‘representativeness’ (K&T 1972, 1982). According to the former, laboratory subjects tend to ignore the effect sample size has on expected outcomes. They think that the probability of a sample mean deviating from the population mean is the same for small and large samples. Thus, they believe that the probability that 60 per cent of births on a given day are boys is the same in a small hospital as in a large one. They tend to over-infer from a small sample by judging, for example, the performance of a fund manager who fared better than the market in two consecutive years as much better than the statistics warrant.
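The hospital question can be settled with the exact binomial distribution. A sketch, assuming (as in K&T’s original question) roughly 15 births a day in the small hospital, 45 in the large one, and boys and girls equally likely:

```python
from math import ceil, comb

def prob_share_boys(n_births, share=0.6, p=0.5):
    """Exact binomial probability that at least `share` of the births are boys."""
    k_min = ceil(share * n_births)
    return sum(comb(n_births, k) * p ** k * (1 - p) ** (n_births - k)
               for k in range(k_min, n_births + 1))

small = prob_share_boys(15)   # a day in the small hospital
large = prob_share_boys(45)   # a day in the large hospital
print(round(small, 3), small > large)  # -> 0.304 True
```

The small hospital records such a lopsided day on roughly three days in ten, the large one far less often; sample size matters, whatever laboratory subjects believe.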

An example of representativeness is when people are asked whether a randomly chosen individual who shows interest in politics, and likes to appear in the media, is more likely to be a member of parliament or a salesman. They tend to answer that this person is more likely to be an MP, disregarding the fact that the far larger proportion of salesmen in the general population makes it more likely that he is a salesman. This observation helps explain certain inefficiencies, as well as instances of irrational exuberance or over-pessimism, typical of financial markets.
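The base-rate logic the subjects ignore is simply Bayes’ rule in odds form. A sketch with hypothetical numbers (the base rates and the likelihood ratio below are our own, chosen purely for illustration):

```python
# Hypothetical figures: 1 adult in 100 000 is an MP, 1 in 100 a salesman,
# and the 'media-loving politico' profile fits an MP 50 times better.

def posterior_odds(prior_mp, prior_salesman, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return (prior_mp / prior_salesman) * likelihood_ratio

odds = posterior_odds(1 / 100_000, 1 / 100, 50)
print(odds)  # -> ~0.05: the salesman is still about 20 times more likely
```

Even granting that the profile fits an MP fifty times better, the overwhelming base rate of salesmen leaves the salesman about twenty times the more likely answer.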

Is Economics Catching Up With the Experimental Sciences?

Our brief foray into the work of this year’s Nobelists leaves one in no doubt about the value of economic laboratory experiments. It does not, however, dispel doubts about the plausibility of economics as one of the experimental sciences. The latter are founded on the empiricist dictum that to predict is to explain, or that truth is what works. If the theory fails to predict the motion of some planet, there is but a single option: reject it! But what if Jill does not act in the laboratory as we predicted? There is always the possibility that she failed to see what was in her interest; in which case, it would be unfair to reject the theory. But the very fact that we can always blame the subject for the theory’s failure to predict blunts the edge of the experimental approach.

On a different but related note, physicists and chemists can go to sleep safe in the thought that the phenomena under observation in their labs are independent of their theory’s predictions. Unfortunately, this is a luxury not available to us economists. Astronomical phenomena happened long before we observed them and thus remain oblivious to the fact that we are watching them. The behaviour of chemicals or bacteria is utterly independent of what we expect of them. This independence between the observed facts of Nature and the human theory that tries to explain them allows the former to be used as a test for the latter.

Unfortunately, in economics there is no such independence: Our ideas influence heavily our actions and thus social theory is seldom independent of observed socio-economic phenomena. For Jill has the bad habit of being influenced by what we, and others, expect of her. Indeed, if she is truly rational, there are occasions when she ought to dissemble by deviating systematically from the theoretical predictions (for example, when convincing a competitor that she is acting irrationally might force him to yield more to her in, say, negotiations).

Experimental subjects may well behave differently depending on the current conventional wisdom regarding how they ought to behave. When running economic experiments we go to considerable lengths to ensure that subjects have had no training in economics. We cannot, however, control the extent to which economics has influenced their milieu indirectly. To the extent that the theory under testing is correlated with conventional ideas, people may deviate from the theoretical predictions not only because they are less rational than homo economicus, but also because they are much smarter than him. Atoms and chemicals have no such proclivities. They always behave according to the laws that govern their behaviour. They lack the fickleness and majesty of the human spirit, which can violate even the cleverest theory about it. In short, the social world (unlike Nature) is one in which theory (or, more generally, ‘ideas’) and (social) facts are too intertwined to separate cleanly inside the laboratory.

It would be irresponsibly premature to throw parties celebrating economic laboratories as the missing link that renders economics a ‘real science’. Rather than equipping our discipline with a ‘dispassionate tribunal’ for competing economic theories, the brilliant work of experimentalists like Smith and Kahneman has enriched mainstream debate with concerns hitherto confined to the discipline’s margins. Laboratory experimentation may not have turned economics into the social physics that many have dreamt of, but it has created important space for methodological considerations that have so far been scandalously sidelined.


Allais, M. 1953, ‘Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine’, Econometrica, vol. 21, no. 4, pp. 503–546.

Allais, M. 1987, ‘Allais Paradox’, The New Palgrave: A Dictionary of Economics, vol. 1, eds J. Eatwell, M. Millgate & P. Newman, Macmillan, London, pp. 80–2.

Chamberlin, E. H. 1933, The Theory of Monopolistic Competition, Harvard University Press, Cambridge.

Chamberlin, E.H. 1948, ‘An Experimental Imperfect Market’, Journal of Political Economy, vol. 56, no. 2, pp. 95–108.

Charusheela, S. 2001, ‘Women’s choices and the Ethnocentrism/Relativism Dilemma’, Postmodernism, Economics and Knowledge, eds S. Cullenberg, J. Amariglio & D. Ruccio, Routledge, London and New York.

Coursey, D. 2002, ‘Vernon Smith, Economic Experiments and the Visible Hand’, Contributors Forum, The Library of Economics and Liberty, October 28, [Online], Available: [2002, Nov. 10].

Edgeworth, F.Y. 1881, Mathematical Psychics: an essay on the application of mathematics to the moral sciences, C. Kegan Paul, London.

Huygens, C. 1657, Libellus de Ratiociniis in Ludo Aleae [1st English edition 1714], [Online], Available: [2002, Nov. 10].

Kagel, J. H. & Roth, A. E. (eds) 1995, The Handbook of Experimental Economics, Princeton University Press, Princeton.

Kahneman, D. & Tversky, A. 1972, ‘Subjective probability: A judgment of representativeness’, Cognitive Psychology, vol. 3, no. 3, pp. 430–454.

Kahneman, D. & Tversky, A. 1973, ‘On the psychology of prediction’, Psychological Review, vol. 80, no. 4, pp. 237–251.

Kahneman, D. & Tversky, A. 1979, ‘Prospect theory: An analysis of decision under risk’, Econometrica, vol. 47, no. 2, pp. 263–291.

Lei, V., Noussair, C.N. & Plott, C., 2001, ‘Non-Speculative Bubbles in Experimental Asset Markets: Lack of Common Knowledge of Rationality vs. Actual Irrationality’, Econometrica, vol. 69, no. 4, pp. 831–859.

Mirowski, P. 1989, More Heat Than Light: Economics as Social Physics, Cambridge University Press, New York.

Plott, C. 2000, ‘Markets as Information Gathering Tools’, Southern Economic Journal, vol. 67, no. 1, pp. 1–15.

Royal Swedish Academy of Sciences 2002, Advanced Information on the Prize in Economic Sciences 2002, [Online], Available: [2002, Nov. 10]

Savage, L. 1954, The Foundations of Statistics, John Wiley and Sons, New York.

Smith, V.L. 1962, ‘An Experimental Study of Competitive Market Behavior’, Journal of Political Economy, vol. 70, no. 2, pp. 111–137.

Smith, V.L. 1987a, ‘Experimental Methods in Economics’, The New Palgrave: A Dictionary of Economics, vol. 2, eds J. Eatwell, M. Millgate & P. Newman, Macmillan, London, pp. 241–249.

Smith, V.L. 1987b, ‘Auctions’ in The New Palgrave: A Dictionary of Economics, vol. 1, eds J. Eatwell, M. Millgate & P. Newman, Macmillan, London, pp. 138–144.

Starmer, C. 2000, ‘Developments in Non-Expected Utility Theory: The hunt for a descriptive theory of choice under risk’, Journal of Economic Literature, vol. 38, no. 2, pp. 332–82.

Tversky, A. & Kahneman, D. 1971, ‘Belief in the law of small numbers’, Psychological Bulletin, vol. 76, no. 2, pp. 105–110.

Tversky, A. & Kahneman, D. 1973, ‘Availability: A heuristic for judging frequency and probability’, Cognitive Psychology, vol. 5, no. 2, pp. 207–232.

Tversky, A. & Kahneman, D. 1974, ‘Judgment under uncertainty: Heuristics and biases’, Science, vol. 185, no. 4157, pp. 1124–1131.

Tversky, A. & Kahneman, D. 1982, ‘Judgment of and by representativeness’, Judgment Under Uncertainty: Heuristics and Biases, eds D. Kahneman, P. Slovic & A. Tversky, Cambridge University Press, Cambridge.

Tversky, A. & Kahneman, D. 1986, ‘Rational choice and framing of decisions’, Journal of Business, vol. 59, no. 4, pp. S252–278.

Tversky, A. & Kahneman, D. 1991, ‘Loss aversion in riskless choice: A reference-dependent model’, Quarterly Journal of Economics, vol. 106, no. 4, pp. 1039–1061.

Tversky, A. & Kahneman, D. 1992, ‘Advances in prospect theory: Cumulative representation under uncertainty’, Journal of Risk and Uncertainty, vol. 5, pp. 297–323.

Vickrey, W. 1961, ‘Counter-speculation, auctions and competitive sealed tenders’, Journal of Finance, vol. 16, no. 1, pp. 8–37.

von Neumann, J. 1928, ‘Zur Theorie der Gesellschaftsspiele’, Mathematische Annalen, vol. 100, pp. 295–320. [Translated as ‘On the Theory of Games of Strategy’, Contributions to the Theory of Games Volume IV, eds A. W. Tucker & R. D. Luce, (Annals of Mathematics Studies, vol. 40), pp. 13–42].

von Neumann, J. & Morgenstern, O. 1944, Theory of Games and Economic Behavior, Princeton University Press, Princeton.

Waldegrave, J. 1713, ‘Waldegrave’s Comments: Excerpt from Montmort’s Letter to Nicholas Bernoulli’, translated and reprinted in Precursors in Mathematical Economics: An Anthology (Series of Reprints of Scarce Works on Political Economy, 19), eds W. J. Baumol & S. M. Goldfeld, 1968. London School of Economics and Political Science, London, pp. 7–9.

Walras, L. 1954, Elements of Pure Economics, trans. W. Jaffé, Irwin, Homewood, Ill. [French text, Éléments d’économie politique pure ou théorie de la richesse sociale, Edition définitive, 1902, Librairie Générale de Droit et de Jurisprudence, Paris].

Nicholas Theocarakis studied economics at Athens University before completing his Doctorate at Cambridge University. He teaches history of economic thought and microeconomics at Athens University. Yanis Varoufakis teaches game theory, microeconomics and political economy at the Universities of Athens and Sydney.