Kristen Sparrow • April 05, 2014
I'm working on my handout for the upcoming AAMA conference in Denver and wanted to revisit Andrew Ahn's paper on complexity and alternative medicine. The image is of a coastline, a classic fractal.
Heart Rate Variability is a way of evaluating a complex system (i.e., all the inputs to the heart). Ahn expresses it much more coherently than I can. An excerpt follows.
Complexity theory provides a theoretical framework for evaluating and analyzing complex systems. These systems are “complex,” because they exhibit global properties not made obvious from the properties of the individual components, and they are “systems,” because they are composed of interconnected parts. Historically, complexity theory borrows concepts and tools from a range of disciplines, including chaos theory (physics), control theory (engineering), cybernetics (mathematics), and General Systems Theory (biology). More broadly, these disciplines share a common theme of nonlinearity—a concept maintaining that the size of an output is not proportional to the size of an input. Because of this shared theme, these disciplines may be categorized within a broader field of nonlinear dynamics.
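A quick aside from me, not from Ahn's paper: "nonlinearity" just means that superposition fails, so scaling or adding inputs does not scale or add the outputs. A toy Python sketch (the functions are invented purely for illustration):

```python
# For a linear response, the output scales with the input and
# superposition holds; for a nonlinear response it does not.

def linear(x):
    return 2.0 * x       # output proportional to input

def nonlinear(x):
    return x ** 2        # output not proportional to input

a, b = 1.0, 3.0
print(linear(a + b), linear(a) + linear(b))           # 8.0 8.0   -> superposition holds
print(nonlinear(a + b), nonlinear(a) + nonlinear(b))  # 16.0 10.0 -> superposition fails
```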
Complex systems are commonly dynamic and contain interacting components whereby feedback and feedforward loops can be formed. The need to characterize the observed properties stemming from these dynamic interactions has led complexity theory to develop concepts that are unique and distinct from traditional reductionist sciences. These concepts include, but are not limited to: emergence—the concept that patterns or properties arise or emerge from the interactions of multiple simple parts; fractal characteristics—the presence of recursive and “self-similar” patterns over multiple spatial or temporal scales; and sensitivity to initial conditions—the idea that small perturbations can have large, unpredictable effects. Most of the models proposed for complex systems are nonlinear, which means that the system's response to a sum of inputs is not simply the sum of its responses to the separate inputs.
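Sensitivity to initial conditions is easiest to see in the logistic map, the standard textbook example (my illustration, not Ahn's). Two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen iterations:

```python
# Sensitivity to initial conditions in the chaotic logistic map
# x -> r*x*(1-x) with r = 4. The gap between two nearly identical
# starting points roughly doubles each step until it saturates.

r = 4.0
x, y = 0.3, 0.3 + 1e-9   # starting points one part in a billion apart

for step in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.2e}")
```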
This conceptual departure from reductionism results in a heuristic approach that is also noticeably different from traditional methods. Problems are evaluated at the global systems level, and numerous factors are assessed at many time points and/or spatial conditions. The goal is to identify “patterns” that reflect global behavior rather than to identify a singular, distinguishing marker or variable. In addition, because complex systems are frequently sensitive to initial conditions, and thus are often unpredictable, the analyses and their resulting solutions are frequently stochastic—in other words, more probabilistic than deterministic. To accomplish these analytical tasks, sophisticated computational and mathematical tools are commonly used and incorporate a mix of linear algebra, differential calculus, statistics, information theory, and/or computational science.
As a scientific discipline, complexity science is young and continually evolving. Its applications to biology and medicine have a particularly short history and did not become broadly relevant until the postgenomic era. The completion of the human genome project, the development of high throughput tools, and improvements in computer software/hardware were confluent factors that led to the rise of complexity sciences in biology and to the important recognition that reductionist approaches were inadequate for addressing biological complexity. Molecular biology, biochemistry, and biophysics were highly proficient in characterizing individual molecules but did not have the means to describe and capture systemwide behavior effectively. The increased importance of complexity science is reflected by the growth of systems-biology divisions in academic institutions and pharmaceutical industries across the world. The National Institutes of Health (NIH) Roadmap is another testament to the increased importance placed on complexity science and interdisciplinary research.1
Applying complexity-based analytical methods to medicine has a number of theoretical advantages over the application of traditional reductionist methods. First, complexity-based analytical methods offer the means to analyze multivariate and/or time-varying data “holistically.” The methods help identify distinguishing patterns that exist within a disease condition or between individuals who share a common diagnosis. Second, these methods can be used to extract hidden information from clinical data. One of many examples used throughout the meeting was the use of nonlinear dynamic analyses to show that heart rate over time is predictive of cardiac mortality and arrhythmias, even though the means or variances of heart rate may not differ significantly across individuals.2–4 Third, complexity-based tools may present a revolutionary bridge between qualitative and quantitative measures. Terms such as adaptability, robustness, or health were previously considered qualitative and thus quantitatively intractable, yet complexity science has identified analytical methods that can help assess these features.5 Given that “quality” is a global, emergent property not readily traced to a single variable, complexity science appears ideally suited to evaluate and explain it.6 Finally, complexity science offers a conceptual framework that better reflects reality. In the real world, small inputs can have large effects, processes are dynamic, interactive effects can span many temporal and spatial scales, and transformations from one state to another can happen gradually or precipitously.
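To make the heart rate example concrete, here is a toy sketch of my own (not the analyses cited above): shuffling a time series preserves its mean and variance exactly but destroys its temporal structure, and a simple dynamic measure such as the lag-1 autocorrelation detects the difference immediately.

```python
# Two series with identical means and variances but different dynamics:
# a smoothly varying signal and a random shuffle of the same values.
# Static moments cannot tell them apart; a dynamic measure can.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
ordered = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(1000)
shuffled = rng.permutation(ordered)   # same values, order destroyed

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

for name, s in [("ordered", ordered), ("shuffled", shuffled)]:
    print(f"{name:8s} mean={s.mean():+.3f}  var={s.var():.3f}  "
          f"lag-1 autocorr={lag1_autocorr(s):+.3f}")
# Means and variances match; the autocorrelation is ~0.97 vs ~0.00.
```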
While used relatively sparingly in medicine, complexity-based analytical methods have become increasingly important tools for examining the relationships among genes, proteins, RNA, and other molecules involved with the immune response.7 Use of these techniques is expected to increase as the analytical toolbox for systems-based approaches expands and awareness of these techniques grows. For studies focused on complexity, the tools that have been used can be categorized into static and dynamic methods. Static analytical methods evaluate many variables at once, assess their interactions, and/or identify patterns that may emerge from them. Examples of these methods include clustering methods8 (agglomerative, hierarchical, disjoint, k-means clustering, Bayesian mixture models, and latent class analysis), factor analyses,9 structural equation modeling,10 and neural networks,11 among others. Dynamic analytical methods evaluate one or a few variables over numerous time points (i.e., a time series) and assess for patterns across many temporal scales. Examples include correlation dimensions (a measure of the dimensionality of fractal objects or time series),12 detrended fluctuation analysis (a measure of the statistical self-affinity of a signal),13 and multiscale entropy (an assessment of sample entropy over multiple time scales).14 This categorization of methods is a simplification, as analytical tools can combine both the temporal and multivariate aspects of a process and can incorporate spatial dimensions as well.
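As a concrete taste of one dynamic method, here is a bare-bones detrended fluctuation analysis (DFA) sketch, assuming evenly sampled data and first-order (linear) detrending; a real analysis would use a dedicated, validated implementation.

```python
# Bare-bones DFA-1: estimate the scaling exponent alpha of a series.
# Roughly, alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise, and
# ~ 1.5 for Brownian motion (a random walk).
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))              # integrated "profile"
    fluctuations = []
    for n in scales:
        rms = []
        for w in range(len(y) // n):           # non-overlapping windows
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # the slope of log F(n) versus log n is the DFA exponent alpha
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(1)
scales = [8, 16, 32, 64, 128]
print("white noise alpha ~", round(dfa(rng.standard_normal(4096), scales), 2))
print("random walk alpha ~", round(dfa(np.cumsum(rng.standard_normal(4096)), scales), 2))
```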