Our mission is to help computational modelers at all levels engage in the establishment and adoption of community standards and good practices for developing and sharing computational models. Model authors can freely publish their model source code in the Computational Model Library alongside narrative documentation and open science metadata, following emerging open science norms that facilitate software citation, reproducibility, interoperability, and reuse. Model authors can also request peer review of their computational models to receive a DOI.
All users of models published in the library must cite model authors when they use and benefit from their code.
We also maintain a curated database of over 7,500 publications of agent-based and individual-based models, with detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, which we welcome you to explore.
We consider scientific communities where each scientist employs one of two characteristic methods: an “adequate” method (A) and a “superior” method (S). The quality of methodology is relevant to the epistemic products of these scientists and generates credit for their users. Higher-credit methods tend to be imitated, allowing us to explore whether communities will adopt one method or the other. We use the model to examine the effects of (1) bias for existing methods, (2) competence to assess the relative value of competing methods, and (3) two forms of interdisciplinarity: (a) the tendency for members of a scientific community to receive meaningful credit assignment from those outside their community, and (b) the tendency to consider new methods used outside their community. The model can be used to show how interdisciplinarity can overcome the effects of bias and incompetence for the spread of superior methods.
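The credit-and-imitation dynamic described above can be sketched in a few lines. This is a minimal illustrative simulation, not the published model: the payoff values and the `bias` and `competence` parameters are assumptions introduced here for demonstration, and the interdisciplinarity mechanisms are omitted for brevity.

```python
import random

# Illustrative payoffs for the "adequate" (A) and "superior" (S) methods.
# These values are assumptions, not taken from the published model.
QUALITY = {"A": 1.0, "S": 1.5}

def step(methods, bias=0.2, competence=0.9, rng=random):
    """One imitation round: each scientist earns noisy credit proportional
    to method quality, observes a random peer, and may copy the peer's
    method depending on competence and bias for the existing method."""
    credit = {i: QUALITY[m] + rng.random() * 0.5 for i, m in enumerate(methods)}
    new = list(methods)
    for i, m in enumerate(methods):
        j = rng.randrange(len(methods))
        if j == i or methods[j] == m:
            continue
        peer_better = credit[j] > credit[i]
        # With probability `competence` the comparison is assessed correctly;
        # otherwise the scientist effectively guesses at random.
        judged_better = peer_better if rng.random() < competence else rng.random() < 0.5
        # Bias for the existing method suppresses switching even when the
        # peer's method is judged better.
        if judged_better and rng.random() > bias:
            new[i] = methods[j]
    return new

def run(n=100, rounds=200, seed=1, **kwargs):
    """Seed the community with one innovator using S and return the final
    fraction of scientists using the superior method."""
    rng = random.Random(seed)
    methods = ["A"] * (n - 1) + ["S"]
    for _ in range(rounds):
        methods = step(methods, rng=rng, **kwargs)
    return methods.count("S") / n
```

With low bias and high competence, the superior method tends to spread through the community; raising `bias` toward 1 freezes the population on its existing methods, which is the effect the model's interdisciplinarity mechanisms are meant to overcome.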