Computational Model Library

A Model of Iterated Ultimatum game (1.0.0)

The agents implemented in the simulation are divided into two groups according to their behavior. The amount of money to divide in the ultimatum game is fixed at 100.00 €. The first group acts in a selfish way: their proposals are always generated randomly below 50% of the amount of money to split (more precisely, their proposals range from 35.00 € to 45.00 €). Conversely, the second group behaves in the opposite way: altruistic agents bid higher, even above half of the initial amount of money (specifically, their proposals range randomly from 45.00 € to 55.00 €). Each agent is endowed with a random willingness to pay (WTP), according to its behavior, and a minimum amount of money it would accept in the ultimatum game (willingness to accept, WTA). The latter is fixed equally for all agents and can be changed by the user through the interface tab of the program. The interface tab also allows changing the percentage of altruistic agents to be generated (from 0% to 100%).
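
This setup can be illustrated with a minimal Python sketch. The original model's source code is not reproduced here, so the names Agent, make_agents, and altruistic_share are hypothetical stand-ins for the model's own parameters and interface settings.

import random
from dataclasses import dataclass

# Hypothetical sketch of the agent setup described above; names and
# structure are illustrative, not taken from the original model's code.
POT = 100.00  # fixed amount of money to divide in each ultimatum game

@dataclass
class Agent:
    altruistic: bool   # behavioral group
    proposal: float    # willingness to pay (WTP): the offer this agent makes
    wta: float         # willingness to accept: minimum offer the agent accepts

def make_agents(n, altruistic_share, wta):
    """Create n agents; altruistic_share lies in [0, 1], wta is fixed for all agents."""
    agents = []
    for _ in range(n):
        altruistic = random.random() < altruistic_share
        if altruistic:
            proposal = random.uniform(45.00, 55.00)  # bids at or above half the pot
        else:
            proposal = random.uniform(35.00, 45.00)  # selfish: bids below half the pot
        agents.append(Agent(altruistic, proposal, wta))
    return agents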
The program does not take into account learning behaviors based on players' experience or changes to the initial amount of money to split, given that, as reported by Camerer (2003), studies that used the UG found only small effects for these factors. However, the agents' behavior can be found in the procedure called "initBehav": this is the part of the program that could most readily be rewritten to give agents different, more complex behaviors, while leaving unaltered the "engine" of the simulation, which works on the iteration of the ultimatum game.
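
A hedged sketch of such an iteration engine, building on the hypothetical Agent/make_agents sketch above, could look like the following. It does not reproduce the model's actual initBehav procedure or its real code.

import random

def play_round(proposer, responder, pot=100.00):
    """One ultimatum game: the responder accepts iff the offer meets its WTA."""
    offer = proposer.proposal
    if offer >= responder.wta:
        return pot - offer, offer   # (proposer payoff, responder payoff)
    return 0.0, 0.0                 # rejection: both earn nothing

def run(agents, rounds):
    """Iterate the game over randomly paired agents; behaviors stay fixed."""
    totals = {id(a): 0.0 for a in agents}
    for _ in range(rounds):
        proposer, responder = random.sample(agents, 2)
        p_pay, r_pay = play_round(proposer, responder)
        totals[id(proposer)] += p_pay
        totals[id(responder)] += r_pay
    return totals

# Example: 100 agents, 50% altruistic, WTA fixed at 40.00 € for all.
# totals = run(make_agents(100, 0.5, 40.00), rounds=1000)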

Model snapshot: UG_Model_Snapshot.png

Release Notes

1.4 [Latest]

Associated Publications

Camerer, C.F.: Behavioral Game Theory: Experiments in Strategic Interaction. Russell Sage Foundation, New York, NY (2003).

This release is out-of-date. The latest version is 1.1.0


Version | Submitter | First published | Last modified | Status
1.1.0 | Andrea Scalco | Mon Mar 9 16:13:23 2015 | Mon Feb 19 22:00:18 2018 | Published
1.0.0 | Andrea Scalco | Tue Feb 24 16:08:20 2015 | Tue Feb 20 18:40:10 2018 | Published

