On Solvability of Urban Complexity
Whether complexity can be solved completely is a fundamental question for urban planning that is worth pursuing. In this paper, I offer some thoughts on this question in the context of urban complexity by viewing urban development as computation and planning as intelligence. In particular, I introduce a preliminary axiomatic system for an insolvability theorem of urban complexity and discuss some implications of the theorem.
1 Introduction
Complexity in time and space pervades the world in which we find ourselves. It is not clear, however, whether complexity is completely solvable. There are two fundamental questions related to this quest: Is complexity completely solvable? If so, how? We now know that plans can partially solve complexity, in that plans are a sufficient condition for complexity, implying that plans are not the only mode of coordinating decisions (Lai, 2018). Other modes might be needed in order to solve complexity completely, which is the research question this paper attempts to address. We argue that complexity cannot be solved globally, but can only be approached locally. In other words, we can aim at maximizing planning order, but only subject to spontaneous complexity.
Complexity implies that decisions are partially interrelated, rather than completely independent. Consider a decision maker facing a complex set of decisions, each of which is composed of a subset of actions to choose from. How would the decision maker plan to cope with these complex systems of decisions to attain a prespecified goal? A natural response is to prioritize the decisions independently before making the first move. When the decisions are interrelated through the consequences of actions and through budgetary limitations, however, prioritization among decisions may not yield the best move. It would be more effective to evaluate the interrelated decisions simultaneously before taking the first action. But there is no analytic backing for this evaluation, or it is at best slighted in seemingly logical frameworks (e.g., Friend and Hickling's Strategic Choice Approach (Friend and Hickling, 2005)).
Most decision theories consider situations in which decisions are given, regardless of whether these decisions are interrelated (e.g., Keeney and Raiffa, 1976). Interrelatedness is one of the sufficient conditions for complexity. Considering interrelated decisions in sequence is only slighted in most decision theory texts (e.g., Hammond, Keeney, and Raiffa, 1999). There is a need to guide decision makers or planners normatively on how sets of interrelated decisions should be considered within a particular scope. I set aside for now how the dynamics of decisions evolve, that is, how decisions are made and become interrelated over time.
Before I proceed, some commonly used terms need to be carefully defined. An action is a move to act, a commitment of a decision. Actions may or may not be related: two actions are related when the activation of one affects the values of the consequences of the other. A decision is composed of a set of actions to choose from. Two decisions may or may not be related, depending on whether their actions are related. A consequence is the result of an action in combination with the effect of the complex system, a collective outcome of enacting related decisions. A plan is a set of related decisions enacted collectively in order to make progress, i.e., to obtain desired consequences.
2 Planning as computational intelligence
Planning is a set of activities for acquiring information and making contingent decisions about the future to yield a desired outcome. It can also be considered a set of procedures for taking actions as defined in the previous section. Such a definition of planning is consistent with that of intelligence (e.g., Ghallab et al., 2004; LaValle, 2006). Intelligence in the context of planning connotes different meanings (e.g., Mandelbaum, 2008), but we define intelligence here as computation, so the two terms are used interchangeably. Can these procedures of planning as intelligence be reduced to steps similar to computer algorithms? Systems in which planning takes place are complex in that numerous elements interact with each other, forming a coherent whole. Can such systems be described as complex systems capable of universal computation? If the answers to both questions are yes, then it becomes possible to ask whether a complex network is solvable in the sense of Definition 3.
The paper is grounded on two assumptions: that a city is a discrete dynamical system, and that it is capable of universal computation. These assumptions are based on the fact that there have been an increasing number of attempts to model urban spatial evolution through simulations (e.g., White and Engelen, 1993) and on the hypothesis that systems showing some level of complexity are computationally equivalent (Wolfram, 2002). It is well known that systems capable of universal computation are computationally irreducible, so that predicting their behavior is impossible; the only way to study such systems is through direct evolution. The two assumptions thus imply that planning based on forecasts is impossible, or at least difficult, because there is no way we can predict what would happen and act on it in advance. On the other hand, Hopkins (2001) argues that under the conditions of the four I's of decisions in a complex system, that is, interdependence, indivisibility, irreversibility, and imperfect foresight, making plans should lead to different, beneficial outcomes. The former view treats planning as aiming for a desired target state, whereas the latter treats it as coordinating decisions.
3 City as a complex system capable of universal computation
That much has been done recently in simulating urban spatial change suggests that the spatial system of a city can at least be viewed as many agents, fixed in location or floating in a space, interacting with each other to form a complex system. Such work goes by different names, including cellular automata (e.g., White and Engelen, 1993), agent-based modeling (e.g., Axelrod, 1997), and artificial life (e.g., Langton, 1989). The common theme is that a city can be viewed from the bottom up, so that whole spatial phenomena can be simulated by interacting agents whose simple rules produce complex outcomes. This is equivalent to saying that the city can compute: given an initial configuration, or data, the results can be traced deterministically through the rules. These models of the city are therefore also deterministic dynamical systems.
I am mainly interested here in cellular automata, in particular the elementary cellular automata, because of their simplicity in construction and complexity in outcomes. However, most urban spatial simulations based on cellular automata deviate from the original construction. In the original cellular automata (e.g., Wolfram, 1994), there is a single set of transition rules throughout the simulation, while in most urban spatial simulations there may be more than one set of transition rules, and the rules become complicated (e.g., Webster and Wu, 1999a, 1999b). Regardless, these models seem to assume that urban spatial systems are capable of computation. Lai (2003) investigated deductively the characteristics of urban spatial evolution using the elementary cellular automata and found that, among the 256 transition rules, only eight can result in complex structures with semi-lattices in the transition graph. He further argued that these deterministic transition rules could give rise to the seemingly stochastic phenomena of urban spatial evolution we observe in daily life. In addition, Lai (2019) argued for a binary approach to modeling complex urban systems and explained how mixed-use patterns come about through spatial games of elementary cellular automata.
4 Principle of computational equivalence
In his provocative book, Wolfram (2002) conducted numerous simulations of simple programs, including the elementary cellular automata, to explain many natural phenomena, including natural evolution and thermodynamics. His interpretation of these simple programs was also extended to explain some social phenomena, such as fluctuations of stock prices. The validity of his theory is yet to be proved, but he proposed an interesting hypothesis, the principle of computational equivalence: in the natural world, many phenomena of complexity are capable of universal computation and are thus equivalent. I take this hypothesis as the basis on which I explore the solvability of the city. In particular, I argue that since some elementary cellular automaton rules, such as rule 110, are capable of universal computation, and urban spatial systems are capable of computation, then, according to Wolfram's principle of computational equivalence, the two systems are computationally equivalent. Put differently, ignoring the substance of different systems, the elementary cellular automata can be seen as microcosms of a simplified world capable of emulating other complex systems of computation, including urban spatial systems. By simulating the elementary cellular automata, we should be able to gain insights into the characteristics of the transition rules underlying the evolution of real urban spatial systems.
Consider each cell as a decision situation in which there are only two choices, represented by two colors or binary values. A one-dimensional cellular automaton with two colors and nearest neighbors can be constructed to represent the evolution of the choices in spacetime. The color of a particular cell at the next time step t + 1 is determined by its color at the current time step t and the colors of its two nearest neighbors. Assume that the two colors are represented by 0 and 1, forming the two possible choices of the decision situation. This simple formulation implies that the choice of a particular cell at the next time step depends on a set of three decision situations. The transition rules specify how the choices of the three decision situations determine the choice of the central cell at the next time step. This simple construct can create very complex structures in the evolution of the spacetime. According to the principle of computational equivalence, a city, viewed as a discrete dynamical system, can also evolve into spatial patterns that can be emulated by the elementary cellular automata.
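To make this construction concrete, the two-color, nearest-neighbor update can be sketched in a few lines of code. The sketch is illustrative, not part of the argument; the function names and the wrap-around boundary are my assumptions, while the rule numbering follows Wolfram's standard convention.

```python
# Sketch of an elementary cellular automaton: two colors (0/1),
# nearest neighbors, standard Wolfram rule numbering.

def make_rule(rule_number):
    """Map each neighborhood (left, center, right) to the next state."""
    bits = [(rule_number >> i) & 1 for i in range(8)]
    return {(l, c, r): bits[(l << 2) | (c << 1) | r]
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step(cells, rule):
    """One synchronous update; the row wraps around (periodic boundary)."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

rule30 = make_rule(30)
row = [0, 0, 0, 1, 0, 0, 0]          # a single "on" cell in the center
print(step(row, rule30))             # -> [0, 0, 1, 1, 1, 0, 0]
```

Each of the eight neighborhoods corresponds to one bit of the rule number, which is why there are exactly 256 elementary rules.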
5 Planning as computation in a universal system
Most, if not all, plans are made within particular scopes. Let X be a set of decisions and R a dependence relation on X, where xRy means that decision x affects decision y. A scope of making plans is the transitive closure C of R on X, where

C = R ∪ RR ∪ RRR ∪ RRRR ∪ …

C is a strict partial order because, by definition, not xRx, and xRy implies not yRx; therefore, not xCx. Any decision outside the scope is not worth considering in a plan because it can be evaluated independently of other decisions. We are only interested in dependent decisions. This immediately implies that planning as a computational algorithm is solvable.
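As an illustrative sketch of the construction above, the scope C can be computed as the transitive closure of a finite dependence relation R. The decision names and the example relation here are hypothetical, chosen only to show the mechanics:

```python
# Sketch: compute the scope C = R ∪ RR ∪ RRR ∪ ... as the transitive
# closure of a dependence relation R on a finite set of decisions.

def transitive_closure(pairs):
    """Repeatedly chain pairs (a,b),(b,d) -> (a,d) until nothing is added."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# xRy: decision x affects decision y (hypothetical decisions)
R = {("road", "zoning"), ("zoning", "housing")}
C = transitive_closure(R)
print(C)   # contains the derived pair ("road", "housing")
```

Every pair in C marks two decisions that belong in the same plan; any decision reachable from none of the others falls outside the scope and can be evaluated independently.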
Any scope implies a set of solvable plans, each of which is composed of a set of dependent decisions. A choice function Φ on a scope selects the plan that yields the best outcome within that scope. Thus the choice of a scope and the choice of a plan confound each other. We cannot expect to make an optimal plan in a complex system, but we can at least set up appropriate search strategies by considering a scope and its associated solvable plans at the same time. Lai (2002) derived the optimal search strategy for planning in terms of information gathering in a one-person organization: information structures that are degarbling and accurate should be sought by the planner in order to yield the best outcome. We focus here, however, on interdependence among decisions.
Three structural distributions of decisions can be distinguished: independent decisions, a uniform distribution of dependent decisions, and a clustered distribution of dependent decisions. In the first, all decisions in the finite set are independent of each other, and there is no gain from making plans. In the uniform distribution, all decisions are dependent on each other, so a plan must be complete, covering all the decisions, for plan making to be likely to gain. The question of how plans should be made in order to gain under such a distribution can be addressed directly in the proposed analytic framework. Evidence shows that the distribution of interdependence among decisions is likely to be hierarchical or complex in space and time (e.g., Wolfram, 1994; Lai, forthcoming; Simon, 1998), which implies that making plans should be a continuing activity.
Making plans requires mental, analytic investments. Considering contingent, related actions, guessing moves of others, measuring uncertainty, and making forecasts all require investments in gathering information. We assume for the present purposes that the underlying mechanisms of these mental investments are indistinguishable from the notion of computation, and that the planner does not know the rules based on which the system of interest evolves. Put differently, the planner makes plans according to a logic different from the rules underlying the system evolution. This assumption is plausible because in reality planners do not have the complete knowledge of how the spatial change of the city takes place. They make forecasts and plans and act accordingly depending on the information gathered, not the complete knowledge of how the system works. Therefore, there are two types of logic: the logic of how plans are made and the logic of how the system evolves.
The value of a plan depends on the cost of making it and the benefit it might yield compared to not having it (Hopkins, 2001). The benefit of a plan can be calculated, through a decision tree, as the difference between the expected utility of an optimal path with the plan and that without it. This notion is equivalent to the calculation of the value of sample information in any management science text (e.g., Anderson et al., 2003). Schaeffer and Hopkins (1987) applied the Bayesian approach to describe how a land developer manipulates rights during the investment process to yield profit. This approach can also be used for present purposes to describe the logic of making plans in the elementary cellular automata. Consider a spacetime plot of ten cells and ten time steps based on rule 30 with nearest neighbors, where the initial state is given randomly and 0 and 1 represent the two possible states of a cell.
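Such a spacetime plot can be generated with a short sketch. The Boolean form of rule 30 is standard; the random seed and the periodic boundary are my assumptions for reproducibility:

```python
import random

def rule30(l, c, r):
    """Rule 30 in Boolean form: new state = left XOR (center OR right)."""
    return l ^ (c | r)

random.seed(0)                                    # arbitrary, for reproducibility
row = [random.randint(0, 1) for _ in range(10)]   # random initial state
for t in range(10):
    print("".join(str(c) for c in row))           # one line per time step
    n = len(row)
    row = [rule30(row[(i - 1) % n], row[i], row[(i + 1) % n])
           for i in range(n)]
```

Each printed line is one state of the ten-cell system; the ten lines together form the spacetime plot.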
In order to apply the Bayesian approach to this simple system, the state of the system at each time step is given a benefit index, which could be a mapping between states and a set of integers. The greater the index, the more self-organized, and the more preferred, the state. This index of self-organization can be measured by entropy (Wolfram, 1994). The probability that the state of a cell is zero or one at a particular time step can be evaluated through Bayes' theorem based on the values of the states at previous time steps. Given the planning logic, we can set the number of time steps backward from the current time step used for calculating the Bayesian probabilities as the information gathering scope, and the number of time steps forward from the current time step over which the probabilities are predicted as the planning horizon. Making a plan, under this definition, is equivalent to changing the values of the cell states based on the results calculated from Bayes' theorem. In this way, the logic of planning is blended into the system evolution.
Similar to the self-organization index for a particular state of the system, the effects of planning can be measured by comparing the global index, as defined by entropy, for the spacetime plot with planning to that without planning. The information gathering scope and the planning horizon are parameters in the simulation design that we can manipulate in order to evaluate the sensitivity of the system behavior to the amount of planning investment. The greater the information gathering scope or the planning horizon, the greater the amount of planning investment. All this can be done on the Mathematica platform (Wolfram, 1999).
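A global entropy index of the kind described can be sketched as follows. The choice of three-cell blocks and the reading of lower entropy as greater self-organization are my assumptions for illustration:

```python
# Sketch (my construction): Shannon entropy of 3-cell patterns in a
# spacetime plot, as a global index; lower entropy is read here as a
# more self-organized plot.

import math
from collections import Counter

def entropy_index(plot, block=3):
    counts = Counter()
    for row in plot:
        n = len(row)
        for i in range(n):
            counts[tuple(row[(i + j) % n] for j in range(block))] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

uniform = [[0] * 8 for _ in range(8)]
print(entropy_index(uniform))   # 0.0: a fully ordered plot
```

Comparing this index for a run with planning interventions against a run without them gives one concrete measure of the effect of planning.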
6 Insolvability theorem for urban complexity
Definition 1 Target States
Given a network of decisions, a target state is an assignment of problems, decision makers, solutions, and locations to strongly connected decisions in order to yield the maximum expected utility for the planner.
Definition 2 Plans
Given a network of decisions, a plan is a sequence of actions that will achieve a specified target state.
In other words, a plan is a set of interdependent decisions, a contingent path in a decision tree (Hopkins, 2001) and making plans is equivalent to making multiple, linked decisions (Han and Lai, 2011; 2012).
Definition 3 Solvability
Given a complex network of decisions, the complex network is solvable if and only if any target state derived from the complex network is attainable through a plan.
For example, we could ask: Is the city, or urban complexity, solvable? That is, can the city be rigorously modeled by mathematics or computer simulations? By "rigorous" I mean that these methodologies, whether qualitative or quantitative, must be tightly logical and aim at internal depth and completeness. On the other hand, we should explain how plans for urban development are made, used, and interact with each other, and justify how they should be made, used, and interact. For example, we could ask: Is the plan solvable? That is, given a plan and a set of individuals, does there exist a "policy" of price or rule setting that brings about the plan? In particular, can we completely solve, through plans, urban problems that are deemed "wicked" (Rittel and Webber, 1973)?
Lemma 1: Non-additivity of solvable and insolvable algorithms
If there are two algorithms, one solvable and one insolvable, then an algorithm derived from the combination of the two is insolvable.
The immediate implication of the lemma is that, given two algorithms, solvable planning and insolvable urban complexity, the algorithm that blends the two into one is insolvable.
Theorem: Insolvability Dominance
Given a set of algorithms, if at least one of them is insolvable, then an algorithm derived from the combination of the algorithms is insolvable.
7 Conclusions
What constitutes the building blocks of the computational city and its plans? My tentative answer is frames. A frame is a mental construct on the basis of which a decision is made. Activities in the computational city are the outcomes of a series of frames, within and between agents, on the basis of which sequential decisions are made. Viewed this way, both the urban development process and the associated planning can be considered computation with different underlying logics. In this paper, I have provided a preliminary axiomatic system for an insolvability theorem of urban complexity and drawn the tentative conclusion that urban complexity cannot be solved globally through planning, but can be dealt with locally.
References
Anderson, D. R., D. J. Sweeney, and T. A. Williams. 2003. An Introduction to Management Science. Mason, OH: South-Western.
Axelrod, R. 1997. The Complexity of Cooperation. Princeton, NJ: Princeton University Press.
Friend, J. and A. Hickling. 2005. Planning under Pressure: The Strategic Choice Approach. Amsterdam, The Netherlands: Elsevier.
Ghallab, M., D. Nau, and P. Traverso. 2004. Automated Planning: Theory and Practice. San Francisco, CA: Morgan Kaufmann Publishers.
Hammond, J. S., R. L. Keeney, and H. Raiffa. 1999. Smart Choices: A Practical Guide to Making Better Decisions. Boston, MA: Harvard Business School Press.
Hopkins, L. D. 2001. Urban Development: The Logic of Making Plans. London: Island Press.
Keeney, R. L. and H. Raiffa. 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: John Wiley & Sons.
Lai, S-K. 2002. "Information structures exploration as planning for a single person organization." Planning and Markets 5: 32-41. <http://www-pam.usc.edu>.
Lai, S-K. 2003. “On transition rules of complex structure in one-dimensional cellular automata: Some implications for urban change.” Annals of Regional Science 37: 337-352.
Lai, S-K. 2018. “Why plans matter for cities.” Cities 73: 91-95.
Lai, S-K. 2019. “A binary approach to modeling complex urban systems.” International Journal of Sino-Western Studies 16: 59-67.
Lai, S-K. forthcoming. Planning for Urban Complexity. Singapore: Routledge.
LaValle, S. M. 2006. Planning Algorithms. Cambridge, England: Cambridge University Press.
Langton, C. 1989. Artificial Life. Reading, MA: Addison-Wesley.
Mandelbaum, S. J. 2008. "Planning intelligence." Planning Theory 7(3): 318-322.
Rittel, H. W. and M. M. Webber. 1973. “Dilemmas in a general theory of planning.” Policy Sciences 4(2): 155-169.
Schaeffer, P. V. and L. D. Hopkins. 1987. “Planning behavior: the economics of information and land development.” Environment and Planning A 19: 1221-1232.
Simon, H. A. 1998. The Sciences of the Artificial. Cambridge, MA: The MIT Press.
Webster, C. J. and F. Wu. 1999a. "Regulation, land-use mix, and urban performance. Part 1: theory." Environment and Planning A 31: 1433-1442.
Webster, C. J. and F. Wu. 1999b. “Regulation, land-use mix, and urban performance. Part 2: simulation.” Environment and Planning A 31: 1529-1545.
White, R. and G. Engelen. 1993. "Cellular automata and fractal urban form: a cellular modelling approach to the evolution of urban land-use patterns." Environment and Planning A 25: 1175-1199.
Wolfram, S. 1994. Cellular Automata and Complexity. New York: Addison-Wesley Publishing Company.
Wolfram, S. 1999. The Mathematica Book, 4th ed. Champaign, IL: Wolfram Media, Inc.
Wolfram, S. 2002. A New Kind of Science. Champaign, IL: Wolfram Media, Inc.
This article is a working paper and has not been officially published.