# Embedology

**Aliases**

Takens' Theorem, Phase Space Reconstruction, Attractor Reconstruction

**Intent**

*Describes in a single concise sentence the meaning of the pattern.*

**Motivation**

*This section describes why this pattern is needed in practice. Other pattern languages call this the Problem. In our pattern language, we express it as one or more questions and then provide further explanation behind each question.*

**Sketch**

*This section provides alternative descriptions of the pattern in the form of an illustration or alternative formal expression. By looking at the sketch a reader may quickly understand the essence of the pattern.*

**Discussion**

*This is the main section of the pattern, explaining it in greater detail. We leverage a vocabulary that we describe in the theory section of this book. We don't provide detailed proofs but rather reference their sources. This section expounds on how the motivation is addressed. We also include additional questions that may be interesting topics for future research.*

**Known Uses**

*Here we review several projects or papers that have used this pattern.*

**Related Patterns**
*In this section we describe in a diagram how this pattern is conceptually related to other patterns. The relationships may be precise or fuzzy, so we provide further explanation of the nature of each relationship. We also describe other patterns that may not be conceptually related but work well in combination with this pattern.*

**Pattern is related to these Canonical Patterns:**

- Entropy (Joint, Cross) Information Theoretic Feature Selection
- Structured Factorization (may need to remove this pattern)
- Hamilton-Jacobi-Bellman Equation (Out of scope)

**Pattern is cited in:**

*Relationship to other Patterns*

**Further Reading**

*We provide here some additional external material that will help in exploring this pattern in more detail.*

**References**

https://www.youtube.com/watch?v=6i57udsPKms

http://www.csee.wvu.edu/~xinl/library/papers/math/geometry/embedology.pdf Embedology

http://www.scholarpedia.org/article/Attractor_reconstruction Attractor reconstruction

https://arxiv.org/pdf/1609.03971v1.pdf

http://eprints.ma.man.ac.uk/599/01/covered/MIMS_ep2006_369.pdf Embedding Theorems for Non-uniformly Sampled Dynamical Systems

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004537 Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding

https://arxiv.org/abs/1609.06347 Stabilizing Embedology: Geometry-Preserving Delay-Coordinate Maps

> Abstract: Delay-coordinate mapping is an effective and widely used technique for reconstructing and analyzing the dynamics of a nonlinear system based on time-series outputs. The efficacy of delay-coordinate mapping has long been supported by Takens' embedding theorem, which guarantees that delay-coordinate maps use the time-series output to provide a reconstruction of the hidden state space that is a one-to-one embedding of the system's attractor. While this topological guarantee ensures that distinct points in the reconstruction correspond to distinct points in the original state space, it does not characterize the quality of this embedding or illuminate how the specific parameters affect the reconstruction. In this paper, we extend Takens' result by establishing conditions under which delay-coordinate mapping is guaranteed to provide a stable embedding of a system's attractor. Beyond only preserving the attractor topology, a stable embedding preserves the attractor geometry by ensuring that distances between points in the state space are approximately preserved. In particular, we find that delay-coordinate mapping stably embeds an attractor of a dynamical system if the stable rank of the system is large enough to be proportional to the dimension of the attractor. The stable rank reflects the relation between the sampling interval and the number of delays in delay-coordinate mapping. Our theoretical findings give guidance to choosing system parameters, echoing the trade-off between irrelevancy and redundancy that has been heuristically investigated in the literature. Our initial result is stated for attractors that are smooth submanifolds of Euclidean space, with extensions provided for the case of strange attractors.
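The delay-coordinate map described in the abstract above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `delay_embed` and the parameter choices (`dim=3`, `tau=25`, a sine-wave test signal) are assumptions made for the example, and in practice the embedding dimension and delay are chosen via heuristics such as false nearest neighbors and the first minimum of mutual information.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar time series x to delay-coordinate vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy example: a sampled sine wave (a circle in its true phase space).
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)

emb = delay_embed(x, dim=3, tau=25)
print(emb.shape)  # (1950, 3): 2000 - (3 - 1) * 25 delay vectors
```

Each row of `emb` is one point of the reconstructed trajectory; plotting the three columns against each other recovers a closed loop, the geometric signature of the sine's periodic attractor.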

https://arxiv.org/abs/1711.09072 Entropy-based Generating Markov Partitions for Complex Systems