Computational Model Library Peer Review


Authors who submit their computational models to the CoMSES Net Computational Model Library can request peer review of their models. If a model passes review, it will be granted a peer-reviewed badge and a DOI.

Models can remain private during peer review, and we recommend keeping them private so that you can continue to adjust your computational model files and metadata to address any concerns raised during the review process. Publishing a codebase release locks the files associated with that release (but not the metadata), so once a release is published you would need to draft a new release to address any reviewer concerns about the files it contains.

Review Criteria

The CoMSES Net Computational Model Peer Review process is not intended to be time-intensive. It consists of a simple checklist that verifies that a computational model’s source code and documentation meet baseline standards derived from “good enough practices” in the software engineering and scientific communities we serve. Through this process we hope to foster higher-quality models shared in the community for reuse, reproducibility, and advancement of the field, in addition to supporting the emerging practice of software citation.

Reviewers should evaluate the computational model according to the following criteria:

  1. Can the model be run with a reasonable amount of effort? This may involve compiling the model into an executable or resolving input data and software library dependencies, all of which should be clearly documented by the author(s).
  2. Is the model accompanied by detailed narrative documentation, such as the ODD protocol or an equivalent? Narrative documentation should present a cogent high-level overview of how the model works as well as essential internal details and assumptions, and should ideally be complete enough for other computational modelers to replicate the model and its results without referring to the source code.
  3. Is the model source code well-structured, well-formatted, and “clean,” with relevant comments and documented inputs and expected outputs? Unused or duplicated code, overuse of global variables, and other code smells are examples of issues to consider. Clean, well-documented code makes it easier for others to reuse, review, and improve the code.

We do not ask reviewers to assess whether the model is theoretically sound, has scientific merit, or produces correct outputs. That said, reviewers are free to raise any concerns in their private correspondence with the review editors if they detect “red flags” in the code.
