Computational Model Library

Peer reviewed Personnel decisions in the hierarchy

Smarzhevskiy Ivan | Published Fri Aug 19 08:23:17 2022

This is a model of organizational behavior in a hierarchy in which personnel decisions are made.
The idea of the model is that a hierarchy, busy with operations, is described by characteristics such as its structure (the number of positions and their interrelations) and the material filling those positions (persons with their individual performance). A particular hierarchy is under a certain external pressure (a required performance level) and is characterized by the internal state of its material (the distribution of perceptions of others across the ensemble of persons).
The World of the model is a four-level hierarchical structure consisting of the position of the top manager (level zero of the hierarchy), first-level managers who are subordinate to the top manager, second-level managers (subordinate to the first-level managers), and positions of employees (the third level of the hierarchy) subordinate to the second-level managers. Such a hierarchy is a tree, i.e. each position, with the exception of the top manager's position, has a single boss.
Agents in the model are persons occupying the specified positions; the number of persons is set by a slider (HumansQty). Persons have an operational performance (harisma, an unfortunate attribute name left over from the first edition of the model) and a perception of the other persons. Performance values are distributed over the ensemble of persons according to a normal law with some mean value and variance.
The perception agents have of each other is either positive or negative (implemented in the model as the numerical values +1 and -1). The distribution of perceptions over the ensemble of persons is implemented as a random variable specified by the probability of negative perception, whose value is set by the controls of the model interface. A probability of 0 corresponds to the case in which all persons perceive each other positively (the random variable takes the value +1, which corresponds to a positive perception of the other person).
The hierarchy is occupied with operational activity, whose intensity is set by the external parameter Difficulty. The productivity level OAIndex of each manager is equal to the productivity level of the department he leads, and is the ratio of the summed productivity of the employees subordinate to that manager to the level of work complexity, Difficulty. Increasing the numerical value of Difficulty decreases the OAIndex of all subdivisions of the hierarchy. The managerial meaning of the OAIndex indicator is the percentage of completion of the load specified for the hierarchy as a whole, i.e. the ratio of the actual performance of the structural subdivisions of the hierarchy to the required performance, whose level is specified by the value of the Difficulty parameter.
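
The setup described above can be summarized in a short computational sketch. The Python snippet below is not the NetLogo implementation; the parameter values and the five-subordinate example department are illustrative assumptions, while the names HumansQty, Difficulty, harisma and OAIndex come from the description.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (names follow the description; values are arbitrary)
humans_qty = 40          # HumansQty slider
difficulty = 25.0        # required performance level per department (Difficulty)
p_negative = 0.2         # probability that one person perceives another negatively

# Individual performance ("harisma") drawn from a normal law
performance = rng.normal(loc=1.0, scale=0.2, size=humans_qty)

# Pairwise perceptions: +1 (positive) or -1 (negative)
perception = rng.choice([-1, 1], size=(humans_qty, humans_qty),
                        p=[p_negative, 1 - p_negative])

def oa_index(subordinate_performance, difficulty):
    """OAIndex of a manager: total performance of the subordinates
    divided by the required level of work complexity (Difficulty)."""
    return sum(subordinate_performance) / difficulty

# Example: a second-level manager with five subordinate employees
print(oa_index(performance[:5], difficulty))
```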

This project was developed during the Santa Fe Institute course Introduction to Agent-Based Modeling 2022. The origin is a Cellular Automata (CA) model that simulates human interactions happening in the real world, from Rubens and Oliveira (2009). These authors conducted market research with real people at two different times: one at time zero and a second at time zero plus 4 months (longitudinal market research). They developed an agent-based model whose initial condition was inherited from the response values of the first market research and evolved it to simulate, without explicitly imposing rules, the human interactions that led to the values of the second market research. They then compared the results of the model with the second market research. The model reached 73.80% accuracy.
In the same way, this project is an exploratory ABM that models individuals in a closed society whose behavior depends upon the result of interacting with two neighbors within a radius of interaction, one on the relative “right” and the other on the relative “left”. According to the states (colors) of these neighbors, a given cellular automaton rule is applied, selected by the value set in the Chooser. Five states are used here, defined as levels of quality perception: red (states 0 and 1) means unhappy, state 2 is neutral, and green (states 3 and 4) means happy.
There is also a message-passing algorithm on the social network, used to analyze the flow and spread of information among nodes. Both the cellular automaton and the message-passing algorithm were developed using the Python extension. The model also uses the csv and arduino extensions.
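
As an illustration of the update step described above, the sketch below implements a five-state, two-neighbor cellular automaton in Python. The rule-table construction from an integer "rule number" and the ring topology are assumptions made for the sketch; the original model selects its rule via the interface Chooser and uses a radius of interaction, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5            # 0-1 unhappy (red), 2 neutral, 3-4 happy (green)
n_agents = 100

def make_rule(rule_number):
    """Illustrative rule table: for every (left, self, right) triple, pick a
    next state, seeded by an integer so different choices give different dynamics."""
    table_rng = np.random.default_rng(rule_number)
    return {(l, s, r): int(table_rng.integers(N_STATES))
            for l in range(N_STATES) for s in range(N_STATES) for r in range(N_STATES)}

def step(states, rule):
    """Apply the rule to every agent using its left and right neighbors (ring)."""
    left, right = np.roll(states, 1), np.roll(states, -1)
    return np.array([rule[(l, s, r)] for l, s, r in zip(left, states, right)])

rule = make_rule(rule_number=30)
states = rng.integers(N_STATES, size=n_agents)
for _ in range(10):
    states = step(states, rule)
print(np.bincount(states, minlength=N_STATES))  # how many agents end up in each state
```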

Peer reviewed BAM: The Bottom-up Adaptive Macroeconomics Model

Alejandro Platas López, Alejandro Guerra-Hernández | Published Tue Jan 14 17:04:32 2020 | Last modified Sun Jul 26 00:26:21 2020

Overview

Purpose

Modeling an economy with stable macro signals that works as a benchmark for studying the effects of agent activities, e.g. extortion, in the service of the elaboration of public policies.

The purpose of this model is to explore how “friend-of-friend” link recommendations, which are commonly used on social networking sites, impact online social network structure. Specifically, this model generates online social networks by connecting individuals based upon varying proportions of a) connections from the real world and b) link recommendations. Links formed by recommendation mimic mutual-connection, or friend-of-friend, algorithms. Generated networks can then be analyzed by the included scripts to assess the influence that different proportions of link recommendations have on network properties, specifically: clustering, modularity, path length, eccentricity, diameter, and degree distribution.
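
A rough sketch of the generation procedure described above is given below in Python with networkx, rather than the model's own code. The function name, the Watts-Strogatz stand-in for the "real world" network, and the triangle-closing recommendation step are all assumptions made for illustration.

```python
import random
import networkx as nx

def generate_online_network(real_world, n_links, p_recommend, seed=None):
    """Grow an online network by adding edges one at a time.

    With probability p_recommend a new edge is a friend-of-friend
    recommendation (closing a triangle); otherwise it copies a
    not-yet-used tie from the real-world network."""
    rng = random.Random(seed)
    online = nx.Graph()
    online.add_nodes_from(real_world.nodes())
    remaining = list(real_world.edges())
    rng.shuffle(remaining)

    attempts = 0
    while online.number_of_edges() < n_links and attempts < 20 * n_links:
        attempts += 1
        if rng.random() < p_recommend and online.number_of_edges() > 0:
            # Friend-of-friend: pick a node, then a friend of one of its friends
            u = rng.choice([n for n in online if online.degree(n) > 0])
            friend = rng.choice(list(online.neighbors(u)))
            candidates = [w for w in online.neighbors(friend)
                          if w != u and not online.has_edge(u, w)]
            if candidates:
                online.add_edge(u, rng.choice(candidates))
        elif remaining:
            online.add_edge(*remaining.pop())   # fall back to a real-world tie
    return online

real_world = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)   # stand-in "real world"
g = generate_online_network(real_world, n_links=500, p_recommend=0.5, seed=1)
print(nx.average_clustering(g), nx.density(g))
```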

00b SimEvo_V5.08 NetLogo

Garvin Boyle | Published Sat Oct 5 08:29:38 2019

In 1985 Dr Michael Palmiter, a high school teacher, first built a very innovative agent-based model called “Simulated Evolution”, which he used for teaching the dynamics of evolution. In his model, students can see the visual effects of evolution as it proceeds right in front of their eyes. Using his schema, small linear changes in an agent’s genotype have an exponential effect on the agent’s phenotype, so natural selection happens quickly and effectively. I have used his approach to managing the evolution of competing agents in a variety of models that I have built to study the fundamental dynamics of sustainable economic systems. For example, here is a brief list of some of my models that use “Palmiter Genes” (a small illustrative sketch follows the list):
- ModEco - Palmiter genes are used to encode negotiation strategies for setting prices;
- PSoup - Palmiter genes are used to control both motion and metabolic evolution;
- TpLab - Palmiter genes are used to study the evolution of belief systems;
- EffLab - Palmiter genes are used to study Jevons’ Paradox, EROI and other things.
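
The essential trick of a "Palmiter gene" is the mapping described above: the genotype drifts in small linear steps while the phenotype depends on it exponentially. The minimal Python sketch below illustrates that idea only; the base value, the ±1 mutation and the names are assumptions for illustration, not Palmiter's or the author's actual encoding.

```python
import random

rng = random.Random(7)

# Genotype: a small integer that drifts by -1, 0 or +1 at each reproduction.
# Phenotype: an exponential function of the genotype, so linear genetic
# changes produce multiplicative changes in behaviour.
BASE = 1.5

def phenotype(genotype):
    return BASE ** genotype

def mutate(genotype):
    return genotype + rng.choice([-1, 0, 1])

g = 0
for generation in range(10):
    g = mutate(g)
    print(generation, g, round(phenotype(g), 3))
```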

06b EiLab_Model_I_V5.00 NL

Garvin Boyle | Published Sat Oct 5 08:27:46 2019

EiLab - Model I - is a capital exchange model, that is, a type of economic model used to study the dynamics of modern money, which, strangely, are very similar to the dynamics of energetic systems. It is a variation on the BDY models first described in the 2000 paper by Dragulescu and Yakovenko entitled “Statistical Mechanics of Money”. This model demonstrates the ability of capital exchange models to produce a distribution of wealth that does not have a preponderance of poor agents and a small number of exceedingly wealthy agents.

This is a re-implementation of a model first built in the C++ application called Entropic Index Laboratory, or EiLab. The first eight models in that application were labeled A through H, and are the BDY models. The BDY models all have a single constraint - a limit on how poor agents can be. That is to say, the wealth distribution is bounded on the left. This ninth model is a variation on the BDY models that has an added constraint limiting how wealthy an agent can be: it is bounded on both the left and the right.
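
To make the double bound concrete, here is a minimal capital exchange sketch in Python: agents repeatedly trade one unit of cash at random, and a transfer is rejected if it would push either agent below the lower bound or above the upper bound. The agent count, the bound values and the unit transfer size are assumptions made for illustration, not the parameters of Model I.

```python
import random

rng = random.Random(3)

N_AGENTS = 500
W_MIN, W_MAX = 0, 40           # lower and upper bounds on individual wealth
wealth = [10] * N_AGENTS        # every agent starts with the same amount

def exchange_step():
    """One random pairwise exchange of a single unit of cash.
    The transfer is rejected if it would push either agent past a bound."""
    a, b = rng.sample(range(N_AGENTS), 2)
    giver, taker = (a, b) if rng.random() < 0.5 else (b, a)
    if wealth[giver] - 1 >= W_MIN and wealth[taker] + 1 <= W_MAX:
        wealth[giver] -= 1
        wealth[taker] += 1

for _ in range(200_000):
    exchange_step()
print(min(wealth), max(wealth))   # the distribution stays within both bounds
```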

EiLab demonstrates the inevitable role of entropy in such capital exchange models, and can be used to examine the connections between changing entropy and changes in wealth distributions at a very minute level.

There is a new type of economic model called a capital exchange model, in which the biophysical economy is abstracted away and the interaction of units of money is studied. Benatti, Drăgulescu and Yakovenko described at least eight capital exchange models – now referred to collectively as the BDY models – which are replicated as models A through H in EiLab. In recent writings, Yakovenko goes on to show that the entropy of these monetarily isolated systems rises to the maximal possible value as the model approaches steady state, and remains there, in analogy with the 2nd law of thermodynamics. EiLab demonstrates this behaviour. However, it must be noted that we are NOT talking about thermodynamic entropy: heat is not being modeled – only simple exchanges of cash. But the same statistical formulae apply.
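
The entropy in question is the statistical entropy of the money histogram. The short self-contained Python sketch below shows one way to compute it; the binning by integer wealth and the two sample distributions are assumptions for illustration, not EiLab's entropic-index formula.

```python
import math
import random
from collections import Counter

rng = random.Random(1)

def entropy_per_agent(wealth):
    """Shannon-style entropy of the wealth histogram, S = -sum(p_w * ln p_w),
    where p_w is the fraction of agents holding wealth w."""
    n = len(wealth)
    return -sum((c / n) * math.log(c / n) for c in Counter(wealth).values())

# A degenerate distribution (everyone holds the same amount) has zero entropy...
print(entropy_per_agent([10] * 500))
# ...while a spread-out distribution, like the steady state of a capital
# exchange model, has much higher entropy.
print(entropy_per_agent([rng.randint(0, 40) for _ in range(500)]))
```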

In three unpublished papers and a collection of diary notes and conference presentations (all available with this model), the concept of an “entropic index” is defined for use in agent-based models (ABMs), with a particular interest in sustainable economics. Models I and J of EiLab are variations of the BDY models especially designed to study the Maximum Entropy Principle (MEP – Model I) and the Maximum Entropy Production Principle (MEPP – Model J) in ABMs. Both the MEPP and H.T. Odum’s Maximum Power Principle (MPP) have been proposed as organizing principles for complex adaptive systems. The MEPP and the MPP are two sides of the same coin, and an understanding of their implications is key, I believe, to understanding economic sustainability. Both of these proposed (and not widely accepted) principles describe the role of entropy in non-isolated systems in which complexity is generated and flourishes, such as ecosystems and economies.

EiLab is one of several models exploring the dynamics of sustainable economics – PSoup, ModEco, EiLab, OamLab, MppLab, TpLab, and CmLab.

06 EiLab V1.36 – Entropic Index Laboratory

Garvin Boyle | Published Sat Jan 31 15:44:18 2015 | Last modified Fri Apr 14 21:29:47 2017

EiLab explores the role of entropy in simple economic models. EiLab is one of several models exploring the dynamics of sustainable economics – PSoup, ModEco, EiLab, OamLab, MppLab, TpLab, and CmLab.

We develop an individual-based model (IBM) that predicts how interactions between elephants, poachers, and law enforcement affect poaching levels within a virtual protected area. The model is theoretical at this stage and is not meant to provide a realistic depiction of poaching, but instead to demonstrate how IBMs can expand upon the existing modelling work done in this field, and to provide a framework for future research. The model could be further developed into a useful management support tool to predict the outcomes of various poaching mitigation strategies at real-world locations. The model was implemented in NetLogo version 6.1.0.

We first compared a scenario in which poachers have prescribed, non-adaptive decision-making and move randomly across the landscape to one in which poachers adaptively respond to their memories of elephant locations and of where other poachers have been caught by law enforcement. We then compared a situation in which ranger effort is distributed unevenly across the protected area to one in which rangers patrol by adaptively following elephant matriarchal herds.

The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted in the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skill, smartness, effort, willfulness, hard work or risk taking. Sometimes we are willing to admit that a certain degree of luck could also play a role in achieving significant material success, but as a matter of fact it is rather common to underestimate the importance of external forces in individual success stories. It is very well known that intelligence (or, more generally, talent and personal qualities) exhibits a Gaussian distribution in the population, whereas the distribution of wealth - often considered a proxy of success - typically follows a power law (Pareto law), with a large majority of poor people and a very small number of billionaires. Such a discrepancy between a normal distribution of inputs, with a typical scale (the average talent or intelligence), and the scale-invariant distribution of outputs suggests that some hidden ingredient is at work behind the scenes.

In a recent paper, with the help of this very simple agent-based model realized with NetLogo, we suggest that such an ingredient is simply randomness. In particular, we show that, while some degree of talent is necessary to be successful in life, the most talented people almost never reach the highest peaks of success, being overtaken by mediocre but considerably luckier individuals. To the best of our knowledge, this counterintuitive result - although implicitly suggested between the lines in a vast literature - is quantified here for the first time. It sheds new light on the effectiveness of assessing merit on the basis of the level of success reached, and underlines the risks of distributing excessive honors or resources to people who, at the end of the day, could simply have been luckier than others. With the help of this model, several policy hypotheses are also addressed and compared to identify the most efficient strategies for public funding of research, in order to improve meritocracy, diversity and innovation.
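
For readers who want the mechanism in miniature, the Python sketch below assumes the commonly cited talent-versus-luck rule: talent is drawn from a clipped Gaussian, and at each step a random lucky event doubles an agent's capital only with probability equal to its talent, while an unlucky event halves it unconditionally. All parameter values (population size, event probabilities, starting capital) are assumptions and need not match the NetLogo model.

```python
import random

rng = random.Random(2023)

N_AGENTS = 1000
STEPS = 80                 # e.g. 40 years in 6-month steps
P_EVENT = 0.2              # chance of experiencing any event in a step
P_LUCKY = 0.5              # an event is lucky or unlucky with equal probability

# Gaussian talent, clipped to [0, 1]; everyone starts with the same capital
talent = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(N_AGENTS)]
capital = [10.0] * N_AGENTS

for _ in range(STEPS):
    for i in range(N_AGENTS):
        if rng.random() < P_EVENT:
            if rng.random() < P_LUCKY:
                # A lucky event pays off only if talent lets the agent exploit it
                if rng.random() < talent[i]:
                    capital[i] *= 2
            else:
                capital[i] /= 2

richest = max(range(N_AGENTS), key=lambda i: capital[i])
print("talent of the richest agent:", round(talent[richest], 2))
print("maximum talent in the population:", round(max(talent), 2))
```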
