What are complex systems?
Large collections of interconnected components whose interactions lead to macroscopic behaviors, such as the Internet and the distributed software systems built on it.
What is the problem?

No one understands how to measure, predict, or control macroscopic behavior in complex information systems.
“[Despite] society’s profound dependence on networks, fundamental
knowledge about them is primitive. [G]lobal communication … networks have
quite advanced technological implementations but their behavior under
stress still cannot be predicted reliably.… There is no science today that
offers the fundamental knowledge necessary to design large complex
networks [so] that their behaviors can be predicted prior to building
them.”

What is the new idea?

Leverage models and mathematics from the physical sciences to define a systematic method to measure, understand, predict, and control macroscopic behavior in the Internet and in distributed software systems built on the Internet.

What are the technical objectives?

Establish models and analysis methods that (1) are computationally tractable, (2) reveal macroscopic behavior, and (3) establish causality. Characterize distributed control techniques, including (1) economic mechanisms to elicit desired behaviors and (2) biological mechanisms to organize components.

Why is this hard?

Valid, computationally tractable models that exhibit macroscopic behavior and reveal causality are difficult to devise. Phase transitions are difficult to predict and control.

Who would care?

All designers and users of networks and distributed systems, which have a 25-year history of unexpected failures:
Businesses and customers who rely on today's information systems.

Designers and users of tomorrow's information systems that will adopt dynamic adaptation as a design principle.
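The economic mechanisms named in the technical objectives can be illustrated with a toy price-feedback loop: a hypothetical resource provider adjusts its unit price until client demand matches its capacity. This is a minimal sketch, not a mechanism from the project; the linear demand curve, capacity, and adjustment gain are all illustrative assumptions.

```python
def demand(price, base=100.0, slope=2.0):
    """Hypothetical client demand (requests/s) at a given unit price."""
    return max(0.0, base - slope * price)

CAPACITY = 60.0   # provider capacity in requests/s (assumed)
GAIN = 0.05       # price-adjustment gain (assumed)

# Tatonnement-style feedback: raise the price when demand exceeds capacity,
# lower it when capacity sits idle. No central controller is needed; the
# provider reacts only to its own observed load.
price = 1.0
for _ in range(200):
    excess = demand(price) - CAPACITY
    price = max(0.0, price + GAIN * excess)

print(f"equilibrium price ~ {price:.2f}, demand ~ {demand(price):.1f} req/s")
```

With these assumed parameters the loop settles where demand equals capacity (here, price 20.0 and 60 requests/s); the point is only that local price feedback can steer aggregate load without global coordination.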
Hard Issues & Plausible Approaches

Model scale – Systems of interest (e.g., the Internet and compute grids) extend over large spatiotemporal extents, have global reach, consist of millions of components, and interact through many adaptive mechanisms over various timescales. Which computational models can achieve sufficient spatiotemporal scaling properties? Micro-scale models are not computable at large spatiotemporal scale. Macro-scale models are computable and might exhibit global behavior, but can they reveal causality? Meso-scale models might exhibit global behavior and reveal causality, but are they computable? One plausible approach is to investigate abstract models from the physical sciences, e.g., fluid flows (from hydrodynamics), lattice automata (from gas chemistry), Boolean networks (from biology), and agent automata (from geography). We can apply parallel computing to scale to millions of components and days of simulated time.

Model validation – Scalable models from the physical sciences (e.g., differential equations, cellular automata, nk-Boolean nets) tend to be highly abstract. Can sufficient fidelity be obtained to convince domain experts of the value of insights gained from such abstract models? We can conduct key comparisons along three complementary paths: (1) comparing model data against existing traffic measurements and analyses, (2) comparing results from subsets of macro/meso-scale models against micro-scale models, and (3) comparing simulations of distributed control regimes against results from implementations in test facilities, such as the Global Environment for Network Innovations.

Tractable analysis – The volume of potential measurement data is expected to be very large – O(10^15) data points – with millions of elements, tens of variables, and millions of seconds of simulated time. How can measurement data be analyzed tractably? We could use homogeneous models, which allow one (or a few) elements to be sampled as representative of all.
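The sampling strategy just described can be sketched in a few lines: record one element's time series as representative of the whole homogeneous model, then summarize it with one of the named statistics, power-spectral density. This is a stdlib-only sketch; the synthetic queue-length series (a cosine oscillation standing in for one sampled element) is an assumption for illustration, and the naive DFT is used only to keep the example dependency-free.

```python
import cmath
import math

def sample_element(n_steps=128):
    """Toy stand-in for sampling one element of a homogeneous model:
    a queue length oscillating around 10 with a period of 16 steps."""
    return [10.0 + 4.0 * math.cos(2.0 * math.pi * 8 * t / n_steps)
            for t in range(n_steps)]

def power_spectrum(x):
    """Naive DFT power spectrum: O(n^2), adequate for short sampled series."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

series = sample_element()          # one element instead of millions
spec = power_spectrum(series)
peak = max(range(1, len(spec)), key=lambda k: spec[k])  # skip the DC bin
print("dominant frequency bin:", peak)  # the injected period-16 oscillation sits in bin 8
```

The same idea scales: analyzing one sampled series of 10^6 points is tractable where the full O(10^15)-point record is not, and the spectrum exposes periodic structure (here, the assumed period-16 oscillation) that raw traces hide.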
Sampling in this way reduces data volume to 10^6 – 10^7 points, which is amenable to statistical analyses (e.g., power-spectral density, wavelets, entropy, Kolmogorov complexity) and to visualization.

Causal analysis – Tractable analysis strategies yield coarse data with limited granularity of timescales, variables, and spatial extents. Coarseness may reveal macroscopic behavior that is not explainable from the data. For example, an unexpected collapse in the probability density function of job completion times in a computing grid was unexplainable without more detailed data and analysis. Multidimensional analysis can represent system state as a multidimensional space and depict system dynamics through various projections (e.g., slicing, aggregation, scaling). State-space analysis can segment system dynamics into an attractor-basin field and then monitor trajectories.

Controlling behavior – Large distributed systems and networks cannot be subjected to centralized control regimes because they consist of too many elements, too many parameters, too much change, and too many policies. Can models and analysis methods be used to determine how well decentralized control regimes stimulate desirable system-wide behaviors? One approach is price feedback (e.g., auctions, present-value analysis, or commodity markets) to modulate supply and demand for resources or services. Another is biological processes that differentiate function based on environmental feedback, e.g., morphogen gradients, chemotaxis, local and lateral inhibition, polarity inversion, quorum sensing, energy exchange, and reinforcement.

Related Publications
Related Software Tools
Related Presentations
Related Demonstrations
www.antd.nist.gov
Web site owner: The National Institute of Standards and Technology
Send comments or suggestions to webmaster@antd.nist.gov. The National Institute of Standards and Technology is an agency of the U.S. Commerce Department's Technology Administration. Created, maintained and owned by ANTD's webmaster. Last updated: May 2006. Date created: May 2001.