Our mission is to help computational modelers at all levels engage in the establishment and adoption of community standards and good practices for developing and sharing computational models. Model authors can freely publish their model source code in the Computational Model Library alongside narrative documentation and open science metadata, following emerging open science norms that facilitate software citation, reproducibility, interoperability, and reuse. Model authors can also request peer review of their computational models to receive a DOI.
All users of models published in the library must cite model authors when they use and benefit from their code.
Please check out our model publishing tutorial and contact us if you have any questions or concerns about publishing your model(s) in the Computational Model Library.
We also maintain a curated database of over 7,500 publications of agent-based and individual-based models, with detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, that we welcome you to explore.
Displaying results for "crowdsourcing"
The purpose of this agent-based model is to compare different variants of crowdworking in a general way, so that the results obtained are independent of the specific details of any particular crowdworking platform. It features many adjustable parameters that can be used to calibrate the model to empirical data, but even without calibration it yields essential results about crowdworking in general.
Agents compete for contracts on a virtual crowdworking platform. Each agent is defined by properties such as qualification and income expectation. Agents that are unable to turn a profit have a chance to quit the platform, and new crowdworkers can replace them. The model thus has features of an evolutionary process: it filters out ill-suited agents and generates a realistic distribution of agents from an initially random one. To simulate a stable system, the number of contracts issued per day can be held constant, as can the number of crowdworkers. If one is interested in a dynamically changing platform, the simulation can also be initialized so that the number of crowdworkers or the number of contracts increases or decreases over time. A large variety of scenarios can thus be investigated.
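The turnover mechanism described above can be illustrated in a few lines. The following is only a minimal Python sketch of that mechanism, not the published model's code: the contract allocation rule, the flat payout, the quit probability, and all parameter values are assumptions made for the example.

```python
import random

NUM_WORKERS = 100        # held constant: quitters are replaced (assumed size)
CONTRACTS_PER_DAY = 40   # held constant for a stable system (assumed)
DAYS = 365

def new_worker():
    """A crowdworker with randomly drawn properties."""
    return {
        "qualification": random.random(),             # ability to win contracts
        "income_expectation": random.uniform(5, 50),  # daily income target
        "balance": 0.0,
    }

workers = [new_worker() for _ in range(NUM_WORKERS)]

for day in range(DAYS):
    # Simplified allocation: contracts go to the most qualified agents.
    bidders = sorted(workers, key=lambda w: w["qualification"], reverse=True)
    for w in bidders[:CONTRACTS_PER_DAY]:
        w["balance"] += 30.0  # flat payout per contract (assumed value)
    for w in workers:
        w["balance"] -= w["income_expectation"]  # daily cost of expectations

    # Unprofitable agents may quit and are replaced by fresh entrants,
    # which filters out ill-suited agents over time.
    for i, w in enumerate(workers):
        if w["balance"] < 0 and random.random() < 0.05:
            workers[i] = new_worker()

print(sum(w["qualification"] for w in workers) / NUM_WORKERS)
```

Under these assumptions, repeated replacement of unprofitable agents gradually raises the average qualification of the population, which is the evolutionary filtering the description refers to.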
A series of studies demonstrates the applicability of the NK model to crowdsourcing research, but it also exposes a problem: the NK model has not been tightly integrated with the crowdsourcing process, so a basic crowdsourcing simulation model has been lacking. Accordingly, this agent-based model simulates the problem-solving process of tournament-based crowdsourcing by combining NK fitness landscapes with the "Task-Crowd-Process-Evaluation" crowdsourcing framework. Interaction relationships among task decisions are introduced to define three task structures: local tasks, small-world tasks, and random tasks. Bounded rationality is taken into account along two dimensions: bounded rationality level, used to distinguish industry types, and bounded rationality bias, used to differentiate professional users from ordinary users.
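For readers unfamiliar with NK fitness landscapes, the sketch below shows the basic construction and one way boundedly rational tournament search on such a landscape might look. The landscape itself follows the standard NK model (N binary decisions, each interacting with K others); the Gaussian noise standing in for bounded rationality bias, the hill-climbing rule, and all parameter values are illustrative assumptions, and only the random task structure is shown. It is not the published model's implementation.

```python
import itertools
import random

N, K = 10, 2
random.seed(1)

# Each decision i depends on itself and K randomly chosen other decisions
# (this random wiring corresponds to the "random task" structure).
deps = [tuple([i] + random.sample([j for j in range(N) if j != i], K))
        for i in range(N)]
# Random fitness contribution for every state of each dependency set.
tables = [{bits: random.random()
           for bits in itertools.product((0, 1), repeat=K + 1)}
          for _ in range(N)]

def fitness(sol):
    """True fitness: mean of the N decision contributions."""
    return sum(tables[i][tuple(sol[j] for j in deps[i])] for i in range(N)) / N

def perceived(sol, bias):
    # Bounded rationality bias modeled as evaluation noise (assumption).
    return fitness(sol) + random.gauss(0, bias)

def search(bias, steps=50):
    """Noisy hill climbing: flip one decision, keep it if it seems better."""
    sol = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        cand = sol[:]
        cand[random.randrange(N)] ^= 1
        if perceived(cand, bias) > perceived(sol, bias):
            sol = cand
    return sol

# Tournament: the seeker keeps only the best of the submitted solutions.
crowd = [search(bias=0.05) for _ in range(30)]  # small bias ~ professional users
print(max(fitness(s) for s in crowd))
```

In this reading, a smaller bias means an agent's perceived fitness tracks true fitness more closely (professional users), while a larger bias yields noisier search (ordinary users); the tournament step keeps only the best submission from the crowd.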