CoMSES Net maintains cyberinfrastructure to foster FAIR data principles for access to and (re)use of computational models. Model authors can publish their model code in the Computational Model Library with documentation, metadata, and data dependencies, supporting these FAIR principles as well as best practices for software citation. Model authors can also request peer review of their model code to receive a DOI. All users of models published in the library must cite the model authors when they use and benefit from their code.
CoMSES Net also maintains a curated database of over 7,500 publications on agent-based and individual-based models, with additional metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, which we welcome you to explore.
This is a phenomenon-based model plan. Classrooms in schools are places where students are supposed to learn, and most often they do. But things can go awry: students can play up, the class can become unruly, and learning can suffer. This model aims to examine how much students learn depending on how good the teacher is at classroom control and how good he or she is at teaching per se.
This model was developed to test the usability of evolutionary computing and reinforcement learning by extending a well-known agent-based model. Sugarscape (Epstein & Axtell, 1996) has been used to demonstrate migration, trade, wealth inequality, disease processes, sex, culture, and conflict. This model focuses on conflict to demonstrate how machine learning methodologies can be applied.
The code is based on the Sugarscape 2 Constant Growback model, available in the NetLogo Models Library. New code was added to the existing model, code that was not needed was removed, and existing code was modified to support the changes. Support for the original movement rule was retained, while evolutionary computing, Q-Learning, and SARSA learning were added.
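The two reinforcement learning variants differ only in their bootstrap target: Q-Learning evaluates the greedy action in the next state, while SARSA evaluates the action actually taken. A minimal Python sketch of the two tabular updates (variable names are illustrative assumptions, not taken from the model's NetLogo code):

```python
# Tabular update rules for the two learning methods added to the model.
# Q maps state -> {action: value}; alpha is the learning rate and gamma
# the discount factor. All names here are illustrative assumptions.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Off-policy: bootstrap on the best available action in s_next."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    """On-policy: bootstrap on the action a_next actually taken in s_next."""
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])
```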
The agent-based simulation is set to work on information that is either (a) functional, (b) pseudo-functional, (c) dysfunctional, or (d) irrelevant. The idea is that the judgment on whether information falls into one of these four categories depends on the agent and its network. In other words, it is the agent who interprets a particular piece of information as being (a), (b), (c), or (d), in a decision based on exchanges with co-workers. This makes the judgment a socially-grounded cognitive exercise. The uFUNK 1.0.2 model is set in an organization where agent-employees work on agent-tasks.
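As a purely illustrative sketch of such a socially-grounded judgment (the four category labels come from the description above; the majority-vote mechanism is an assumption, not uFUNK's actual decision rule), an agent could poll its co-worker network and adopt the most common label:

```python
from collections import Counter
import random

CATEGORIES = ["functional", "pseudo-functional", "dysfunctional", "irrelevant"]

def judge(own_label, coworker_labels):
    """Pool the agent's own view with its co-workers' and adopt the
    most common category, breaking ties at random."""
    assert own_label in CATEGORIES
    votes = Counter([own_label] + coworker_labels)
    top = max(votes.values())
    return random.choice([c for c, n in votes.items() if n == top])

# An agent leaning "functional" defers to a network that mostly disagrees.
print(judge("functional", ["dysfunctional", "dysfunctional", "irrelevant"]))
```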
This repository contains the multi-agent simulation software for the paper “Comparison of Competing Market Mechanisms with Reinforcement Learning in a CarPooling Scenario”. It is a multithreaded Java application.
This model simulates an economy with stable macro signals that serves as a benchmark for studying the effects of agent activities, e.g. extortion, in support of the design of public policies.
The TERROIR agent-based model was built for the multi-level analysis of biomass and nutrient flows within agro-sylvo-pastoral villages in West Africa. It explicitly takes into account both the human organization and the spatial extent of such flows.
We model the epistemic dynamics preceding a political uprising. Before deciding whether to start protests, agents need to estimate the amount of discontent with the regime. This model simulates the dynamics of group knowledge about general discontent.
The aim of this model is to explore and understand the factors driving the adoption of treatment strategies for ecological disturbances, considering payoff signals, learning strategies, and social-ecological network structure.
This model implements a classic reinforcement learning scenario, the “Cliff Walking” problem. Consider the gridworld described in (SUTTON; BARTO, 2018): a standard undiscounted, episodic task, with start and goal states, and the usual actions causing movement up, down, right, and left. Reward is -1 on all transitions except those into the region marked “The Cliff”; stepping into this region incurs a reward of -100 and sends the agent instantly back to the start (SUTTON; BARTO, 2018).
The model solves this problem with the Q-Learning algorithm, implemented with the support of the NetLogo Q-Learning Extension.
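For readers without NetLogo, the task and its Q-Learning solution can be sketched in plain Python (a generic tabular implementation using the standard 4×12 grid from Sutton & Barto; the hyperparameters are illustrative and this is not the extension's API):

```python
import random

ROWS, COLS = 4, 12                            # standard cliff-walking grid
START, GOAL = (3, 0), (3, 11)
CLIFF = {(3, c) for c in range(1, 11)}        # "The Cliff" between start and goal
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}

def step(state, a):
    """Apply an action; moves off the grid are clipped to the border."""
    r, c = state
    r = min(max(r + ACTIONS[a][0], 0), ROWS - 1)
    c = min(max(c + ACTIONS[a][1], 0), COLS - 1)
    if (r, c) in CLIFF:                       # fell off: -100, back to start
        return START, -100
    return (r, c), -1                         # every other transition costs -1

alpha, epsilon = 0.5, 0.1                     # undiscounted task, so gamma = 1
for episode in range(500):
    s = START
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(4) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2, reward = step(s, a)
        # Q-Learning update with gamma = 1
        Q[s][a] += alpha * (reward + max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy traces the shortest route along the edge of the cliff, the behavior Q-Learning is known to converge to on this task.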
This is a re-implementation of the NetLogo model Maze (ROOP, 2006).
This re-implementation uses the Q-Learning NetLogo Extension to implement Q-Learning, which in the original model was done with native NetLogo code only.