Doubly-bounded rationality

Long-term decision-related activities, such as bottom-up and top-down policy development, analysis, and planning, stand to benefit from the creation and application of agent-based models capable of representing the real-world spatiotemporal social behavior of humans in local contexts. However, there is a significant risk that artificial agents will misrepresent the behavior of the real-world decision-makers they were developed to represent, and such misrepresentation reduces the usefulness of these models.

This article explores this risk through two frameworks that establish a comprehensive, mathematically grounded relationship among (a) a decision situation, (b) a decision-maker making a decision within that situation, (c) a modeler modeling the decision-maker, and (d) an artificial agent designed by the modeler to represent the decision-maker. The established relationship sheds light on how and where additional, undesirable bounds on an artificial agent's rationality can enter the process of modeling decision-making, and it underlines the importance of recognizing and understanding the specific discrepancies between the rationality of decision-makers and that of the artificial agents used to model their behavior. By the same token, the two frameworks can be used to test how well an artificial agent represents the behavior of a decision-maker and to identify where its representation of human decision-making can be improved.
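To make the idea of testing an agent against a decision-maker concrete, the following is a minimal, purely illustrative Python sketch; it is not the article's frameworks or formalism. All names here (DecisionSituation, agreement_rate, the example situations) are hypothetical, and the sketch simply measures how often an agent reproduces a decision-maker's observed choices across a set of decision situations, one crude way such discrepancies might be quantified.

```python
# Hypothetical sketch (not the article's formalism): comparing an artificial
# agent's choices to a decision-maker's observed choices across decision
# situations, as one rough way to locate where the agent's behavior diverges.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass(frozen=True)
class DecisionSituation:
    """A decision situation (a) offering a finite set of options."""
    name: str
    options: tuple[str, ...]


def agreement_rate(
    situations: Sequence[DecisionSituation],
    decision_maker: Callable[[DecisionSituation], str],  # (b) observed choices
    agent: Callable[[DecisionSituation], str],           # (d) the modeler's agent
) -> float:
    """Fraction of situations in which the agent reproduces the decision-maker's choice."""
    if not situations:
        return 0.0
    matches = sum(decision_maker(s) == agent(s) for s in situations)
    return matches / len(situations)


if __name__ == "__main__":
    situations = [
        DecisionSituation("commute", ("car", "bus", "bike")),
        DecisionSituation("heating", ("gas", "heat pump")),
    ]
    # Stand-ins for real observed behavior and for the agent designed by the modeler (c).
    observed = {"commute": "bus", "heating": "heat pump"}
    decision_maker = lambda s: observed[s.name]
    agent = lambda s: s.options[0]  # a deliberately crude agent, for illustration only
    print(f"agreement: {agreement_rate(situations, decision_maker, agent):.2f}")
```

A low agreement rate in a sketch like this would only signal that discrepancies exist; locating how and where bounds on the agent's rationality enter the modeling process is what the article's two frameworks are intended to support.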