Variable Index
A
 agent variables, agent
 agent-age, agent: the agent’s age, measured in ticks.
 agent-capacity-value, agent: holds the capacity value calculated when the agent last entered a node or edge.
 agent-degree-long-time, agent: ordered list; holds the degree (calculated from limits and pay-offs) for the long time range.
 agent-degree-short-time, agent: ordered list; holds the degree (calculated from limits and pay-offs) for the short time range.
 agent-destination, agent: the next node an agent wants to go to.
 agent-destination-condition-once, agent: points to a function that returns true if the next node to visit is a node that shall still be visited with respect to agent-destinations-once.
 agent-destination-condition-repetitive, agent: points to a function that returns true if the next node to visit is the node that shall be visited next with respect to agent-destinations-repetitive.
 agent-destinations-once, agent: nodes an agent tries to reach once.
 agent-destinations-repetitive, agent: nodes an agent tries to reach again and again, in the given order.
 agent-destinations-repetitive-position, agent: indicates the position in agent-destinations-repetitive of the most recently visited node.
 agent-edge-delay, agent: the time the agent still needs to pass the edge.
 agent-history, agent: holds the edge history (everything that may change in length and thus cannot be handled by the stats extension).
 agent-id, agent: separate agent id; NetLogo’s “who” is used for all breeds, thus for nodes as well as for agents.
 agent-limits-long-time, agent: ordered list; holds limits for the long time range. Each dimension may have an upper and a lower limit; “don’t care” is indicated by “e”, Euler’s number.
 agent-limits-short-time, agent: ordered list; holds limits for the short time range. Each dimension may have an upper and a lower limit; “don’t care” is indicated by “e”, Euler’s number.
 agent-location, agent: the node or edge the agent is currently on.
 agent-metrics, agent: metrics used to calculate p-values dynamically.
 agent-pay-offs, agent: represents the actual values of the agent (one per dimension), calculated from node/edge-costs-benefits-agents.
 agent-pay-offs-long-time, agent: represents the actual values of the agent (one per dimension), calculated from node/edge-costs-benefits-agents.
 agent-pay-offs-short-time, agent: represents the actual values of the agent (one per dimension), calculated from node/edge-costs-benefits-agents.
 agent-repetitive-node, agent: points to the node the agent is currently located on; used in agent-destination-condition-repetitive.
 agent-repetitive-value, agent: set by the function get-agent-destination-repetitive-value.
 agent-seu-history, agent: history of all actions taken into consideration in each decision situation, sorted per tick and, within ticks, by SEU value.
 agent-seu-history-information, agent: helper variable used within the SEU calculation to memorize information until it is written to agent-seu-history.
 agent-seu-value, agent: holds the SEU value of the action chosen in the last decision.
 agent-stats, agent: holds past degree values.
 agent-stuck?, agent: true if the agent was not able to choose a new destination in the last step.
 agent-technologies-available, agent: all technologies the agent owns and thus may use.
 agent-technology-in-use, agent: the technology the agent currently uses.
 agent-technology-switch-condition, agent: points to a function that returns true if the agent is allowed to switch technology in the present situation.
 agent-type, agent: string used to distinguish types of agents with regard to their aims/goals.
 agent-type-count-vector, global: vector counting the total number of agents of each agent-type.
 agent-types, global: a list containing all types of agents loaded with the present scenario.
 agent-u, agent: u-values for the SEU calculation.
 agents-file, global: if the scenario is given by separate .txt files, this variable holds the agents file.
 automated-control-soft-decrease-factor, global: factor used by the soft-control algorithm to decrease node-costs-benefits-agents-control-factor-per-technology or edge-costs-benefits-agents-control-factor-per-technology (a bounded-adjustment sketch follows this section).
 automated-control-soft-increase-factor, global: factor used by the soft-control algorithm to increase node-costs-benefits-agents-control-factor-per-technology or edge-costs-benefits-agents-control-factor-per-technology.
 automated-control-soft-max-value, global: maximal value allowed when the soft-control algorithm alters node-costs-benefits-agents-control-factor-per-technology or edge-costs-benefits-agents-control-factor-per-technology.
 automated-control-soft-min-value, global: minimal value allowed when the soft-control algorithm alters node-costs-benefits-agents-control-factor-per-technology or edge-costs-benefits-agents-control-factor-per-technology.
 automated-run?, global: true if SimCo is run in automated mode.
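The automated-control-soft-* entries above describe a bounded adjustment: a control factor is scaled up or down and kept between a minimum and a maximum. The following is a minimal, hypothetical NetLogo sketch of that pattern, not the SimCo source; the reporter name soft-adjust, the example values, and the way the result would be applied are assumptions.

  ;; Hypothetical sketch: multiply a control factor by the increase or decrease
  ;; factor and clamp the result between min-value and max-value.
  globals [
    automated-control-soft-increase-factor
    automated-control-soft-decrease-factor
    automated-control-soft-max-value
    automated-control-soft-min-value
  ]

  to setup-soft-control            ;; example values, purely illustrative
    set automated-control-soft-increase-factor 1.1
    set automated-control-soft-decrease-factor 0.9
    set automated-control-soft-max-value 2
    set automated-control-soft-min-value 0.5
  end

  to-report soft-adjust [ current-factor increase? ]
    let adjusted ifelse-value increase?
      [ current-factor * automated-control-soft-increase-factor ]
      [ current-factor * automated-control-soft-decrease-factor ]
    ;; keep the factor inside the configured bounds
    set adjusted min (list adjusted automated-control-soft-max-value)
    set adjusted max (list adjusted automated-control-soft-min-value)
    report adjusted
  end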
C
 change-agents-upper-limit-costs-long-time, global: used to alter agents’ upper-limit long-time values in the cost dimension after setup and before starting experiments.
 change-agents-upper-limit-costs-short-time, global: used to alter agents’ upper-limit short-time values in the cost dimension after setup and before starting experiments.
 change-edge-duration, global: used to alter edge-duration values after setup and before starting experiments (the common pattern is sketched after this section).
 change-nodes-mean-cost-benefits-agents, global: used to alter node-cost-benefits-agent values in the cost dimension after setup and before starting experiments.
 change-p-cost-threshold, global: used to alter agents’ p-cost-threshold value (used within the p-calculation for “costs”) after setup and before starting experiments.
 change-restrict-technology, global: used to delete all technologies of a type indicated by technology-name after setup and before starting experiments.
 change-technology-factor-bike, global: used to alter “bike” technologies’ correction-factor values after setup and before starting experiments.
 change-technology-factor-car, global: used to alter “car” technologies’ correction-factor values after setup and before starting experiments.
 change-technology-factor-ev, global: used to alter “ev” technologies’ correction-factor values after setup and before starting experiments.
 change-technology-factor-pt, global: used to alter “pt” technologies’ correction-factor values after setup and before starting experiments.
 change-technology-speed-bike, global: used to alter “bike” technologies’ speed-factor after setup and before starting experiments.
 change-technology-speed-car, global: used to alter “car” technologies’ speed-factor after setup and before starting experiments.
 change-technology-speed-ev, global: used to alter “ev” technologies’ speed-factor after setup and before starting experiments.
 change-technology-speed-pt, global: used to alter “pt” technologies’ speed-factor after setup and before starting experiments.
 change-u-value-reach-node, global: used to alter agents’ u-values for reaching nodes meeting the repetitive condition after setup and before starting experiments.
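All change-* globals follow the same pattern: they modify model values once, after setup and before the experiments start. Below is a minimal, hypothetical sketch of that pattern; the representation of edges as links, the edge-duration variable on them, the procedure name, and the assumption that the change acts as a multiplicative factor are illustrative and not taken from the SimCo source.

  ;; Hypothetical sketch of the "alter after setup, before experiments" pattern.
  globals [ change-edge-duration ]
  links-own [ edge-duration ]

  to apply-scenario-changes
    ;; scale every edge's duration by the configured change value
    ask links [
      set edge-duration edge-duration * change-edge-duration
    ]
  end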
D
 deaths, global: number of deaths, counted per agent-type.
 debug-level, global: int; indicates which level of information shall be used by the “debug” function.
 debugging?, global: bool; shall “debug” output be printed?
 description, global: holds an optional scenario description.
 destination-conditions-once-default, global: table containing, for each type of agent, a condition to check whether the node of interest equals a node in agent-destinations-once (see the sketch after this section).
 destination-conditions-repetitive-default, global: table containing, for each type of agent, a condition to check whether the node of interest equals a node in agent-destinations-repetitive.
 destinations-once-default, global: table containing a list of destinations (nodes to reach once) for each type of agent.
 destinations-repetitive-default, global: table containing a list of destinations (nodes to reach again and again) for each type of agent.
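The *-default entries above are tables keyed by agent type. A minimal, hypothetical sketch using NetLogo’s table extension is given below; the agent-type key, the node-id list, and the idea of storing the condition as an anonymous reporter are assumptions made for illustration only.

  ;; Hypothetical sketch of the per-agent-type default tables: one table maps an
  ;; agent type to its list of destinations, the other to a reporter that checks
  ;; whether a candidate node is still a wanted destination.
  extensions [ table ]
  globals [ destinations-once-default destination-conditions-once-default ]

  to setup-defaults
    set destinations-once-default table:make
    table:put destinations-once-default "commuter" [ 3 7 ]   ;; node ids, illustrative
    set destination-conditions-once-default table:make
    table:put destination-conditions-once-default "commuter"
      [ [ candidate remaining ] -> member? candidate remaining ]
  end

  ;; look up and evaluate the condition stored for one agent type
  to-report still-wanted? [ a-type candidate remaining ]
    report (runresult (table:get destination-conditions-once-default a-type)
                      candidate remaining)
  end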