Please first read our post Introduction to Nakamoto Terminal, where I outline our cryptofinancial data aggregation & analytics platform. As mentioned in that post, Yupana is one of the projects building on, and expanding, NTerminal.

This post serves as a conceptual foundation for the Yupana project. Please look out for future posts to better understand how we are testing specific methodologies or to learn more about the architectural considerations for the project.

Simplified Diagram of Yupana Integrated into NTerminal’s CDC

Concept

Complex systems like biological ecosystems or the human brain are difficult to represent and properly understand with current tools. Existing processing techniques are effective for tasks like natural language processing (Cambria & White, 2014), anomaly detection (Lane & Brodley, 1997), or recognition/categorization (Bishop, 2006); these techniques, however, must be applied to specific uses and require a fundamental understanding of the parameters and desired outcomes.
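As a toy illustration of how such techniques depend on task-specific knowledge, consider a minimal z-score anomaly detector (a hypothetical example, not part of NTerminal): it only becomes useful once a detection threshold appropriate to the domain is chosen, which is exactly the kind of parameter understanding the paragraph above refers to.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag points whose z-score exceeds a chosen threshold.

    The threshold is a task-specific parameter: too low and normal
    variation is flagged, too high and real anomalies are missed.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing can stand out
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [10, 11, 9, 10, 12, 10, 95, 11]
print(zscore_anomalies(readings))  # → [95]
```

Even this trivial detector encodes assumptions (roughly symmetric noise, a single mode) that must match the data before its output means anything.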

On the other hand, micro-scale modeling techniques like Agent Based Modeling (ABM) are capable of representing systems of interconnected entities to elucidate emergent principles (Bonabeau, 2002), but require detailed definitions of relational properties. Yupana attempts to combine these principles into a cohesive and modular framework; by adapting a modeling technique like ABM with expandable processor types, we envision that more dynamic correlative analysis and predictive modeling of complex systems can be achieved.

Understanding Complex Systems

Our minds do not interface directly with the physical world. Instead we create internal representations using data from the world to construct a meaningful and functional model. We draw upon learned patterns from seemingly random inputs to determine attributes and properties, which form distinct pieces from which we can generate a relational representation. Through integration of these abstracted objects and rules, we are able to decipher many layers of complexity within the same system; this internal representation is constantly updated by integration of information from various sensory inputs.

To properly represent and interact with the world, we must both understand its components and their interactions. To accomplish this, our brains work to balance between simplifying the noisy input into an orderly configuration and capturing as much information about the environment as possible.

Complex systems are difficult to understand due to their nature; non-linearities, emergent properties, self-organizing principles, agonistic or antagonistic effects, and feed-forward or feed-back mechanisms make it difficult to disentangle governing rules and foundational properties. We often depend on external tools to help us see patterns that are otherwise non-salient. These tools can help us better understand narrow relationships, aggregate and filter large sets of data, run simple analytics, and simplify complicated networks and processes. By exporting these functionalities, we can then better understand properties of complex systems.

Modeling and Simulations

Agent-based models are effective for modeling distributed systems of autonomous actors at multiple levels of abstraction (Borshchev & Filippov, 2004). To build an ABM properly, however, one must start with a firm understanding of the system being represented and a clear goal for the model. Fixed relations, rules, and end goals limit the ability of such models to map adaptive and chaotic systems. There are multiple methods of generalizing ABMs to make them more adaptive to complex systems; some of these include combining principles from complex-network-based models (Kurve, Kotobi, & Kesidis, 2013; Niazi, 2011; Niazi & Hussain, 2012; Kurve et al., 2015), mixing ABM with multi-agent systems (MAS) (Dignum et al., 2016; Chliaoutakis & Chalkiadakis, 2016), or using more complex environments (Ch’ng, 2012; Simon, 1996; Niazi & Hussain, 2009).
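To make the "fixed relations and rules" point concrete, here is a deliberately tiny ABM sketch (a generic illustration, not Yupana's model): agents hold a scalar state and follow one hard-coded interaction rule, pulling toward a random peer each tick. Consensus emerges from the local rule, but the rule itself had to be specified up front.

```python
import random

class Agent:
    """An agent with a scalar state and a fixed interaction rule."""
    def __init__(self, state):
        self.state = state

    def interact(self, other, rate=0.1):
        # Fixed relational rule: move a fraction toward the other's state.
        self.state += rate * (other.state - self.state)

def step(agents, rng):
    # Each tick, every agent interacts with one randomly chosen peer.
    for agent in agents:
        other = rng.choice(agents)
        if other is not agent:
            agent.interact(other)

rng = random.Random(0)
agents = [Agent(rng.uniform(0, 1)) for _ in range(50)]
for _ in range(200):
    step(agents, rng)

spread = max(a.state for a in agents) - min(a.state for a in agents)
print(f"state spread after 200 steps: {spread:.4f}")
```

The emergent convergence was never written anywhere explicitly; it falls out of the interaction rule. Changing that one rule changes the macro behavior entirely, which is why rigid rules pigeonhole ABMs applied to adaptive systems.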

Machine and Deep Learning

Computational heuristic intelligence has developed significantly in recent years, with particular focus on machine and deep learning. Machine learning has been refined for highly tuned pattern detection, with advancements in generative models (Bishop, 2006; Zhang et al., 2006). Yet generalization with these techniques is hard to achieve because they require highly directed data sets.

Deep learning has extended these strengths through feature classification, allowing for contextualized processing and deeper recognition of emergent principles. The problem remains, however, that specific use cases become the focus of architectural design for these algorithms.

Researchers have begun combining various machine and deep learning algorithms to approach larger problems, which has produced additional progress in the field. Building synergistic heuristic architectures has advanced the generalizability problem by optimizing multiple tasks towards a single goal (Shi et al., 2005). The techniques clearly exist to produce powerful computational results, and one promising direction for the field is to integrate these tools into synergistic systems.
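A minimal sketch of "multiple components optimized toward a single goal" is an error-weighted blend of simple forecasters (the predictors and error values here are hypothetical placeholders, not any method cited above): each component keeps its own specialty, and the combination weights the ones that have recently performed best.

```python
def last_value(history):
    # Naive forecaster: predict the most recent observation.
    return history[-1]

def moving_average(history, window=3):
    # Smoother forecaster: predict the mean of the last few points.
    recent = history[-window:]
    return sum(recent) / len(recent)

def blended_forecast(history, predictors, errors):
    # Weight each predictor by the inverse of its accumulated error,
    # so better-performing components dominate the shared goal.
    weights = [1.0 / (e + 1e-9) for e in errors]
    preds = [p(history) for p in predictors]
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

series = [1.0, 2.0, 3.0, 4.0, 5.0]
predictors = [last_value, moving_average]
errors = [0.5, 1.0]  # hypothetical running absolute errors per predictor
print(round(blended_forecast(series, predictors, errors), 3))  # → 4.667
```

The same pattern scales conceptually: swap the toy forecasters for trained models and the error list for a validation metric, and the blend remains a single shared objective served by heterogeneous components.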

Yupana

Complex systems and their underlying mechanisms are non-obvious in principle and require external modeling and analytics. Typical heuristic solutions need highly directed training sets and generally cannot function flexibly; similarly, simple modeling techniques require rigid rules and have difficulty integrating real-time updates. Our concept attempts to employ various processing modules, fed by real-world data updates, to inform a flexible agent-based modeling structure.

Yupana seeks to channel the power of machine learning and ABM towards mapping a data ecosystem without directly relying on end-goal considerations. Instead, this project seeks to represent the data consumed as a function of units with fixed receptive fields mapped to data streams, in order to represent relational attributes among content producers.

An important feature of Yupana is the continuous nature of the model’s data input. Units within the model represent live streams of data that expect regular updates. Initially, Yupana will begin in a semi-randomized state for the relationships between model structures. Training will consist of integrating historical records into the agent-based structures to train processors and recognize patterns of agent interactions. This leads directly into deployment: the same train of inputs simply continues, with regular updates arriving from the real environment.
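The paragraph above can be sketched as code. Everything here is hypothetical scaffolding (the `Unit` and `Model` classes and stream names are illustrative, not Yupana's actual implementation): each unit has a fixed receptive field bound to one named stream, inter-unit relationships start semi-randomized, and the same `update` loop serves both historical replay and live deployment.

```python
import random

class Unit:
    """A model unit with a fixed receptive field: one named data stream."""
    def __init__(self, stream_name):
        self.stream_name = stream_name
        self.level = 0.0  # running estimate of the stream's activity

    def ingest(self, value, rate=0.2):
        # Exponential moving update: each observation shifts the estimate.
        self.level += rate * (value - self.level)

class Model:
    def __init__(self, stream_names, rng):
        self.units = {name: Unit(name) for name in stream_names}
        # Semi-randomized initial state for inter-unit relationships.
        self.links = {
            (a, b): rng.uniform(-1, 1)
            for a in stream_names for b in stream_names if a != b
        }

    def update(self, stream_name, value):
        self.units[stream_name].ingest(value)

rng = random.Random(42)
model = Model(["market", "social", "code_audits"], rng)

# Phase 1: replay historical records. Phase 2 (deployment) would run the
# same loop on live updates, so training flows directly into deployment.
history = [("market", 1.0), ("social", 0.5), ("market", 1.2)]
for stream, value in history:
    model.update(stream, value)

print(round(model.units["market"].level, 3))  # → 0.4
```

The key design point the sketch captures is that nothing distinguishes "training" input from "live" input at the interface: the model only ever sees a stream of (stream, value) updates.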

Yupana explained schematically

Application to Cryptofinance

With the publication of Satoshi Nakamoto’s white paper, “Bitcoin: A peer-to-peer electronic cash system” (2008), which used Adam Back’s proof-of-work system, and the subsequent open-source software release of Bitcoin in 2009, the door to distributed ledger technology was opened. Bitcoin paved the way for new cryptographic systems to be created at will through forking or development of a new system (Poelstra, 2014). Now, these systems proliferate faster than anyone can keep up with.

Some projects are academically or industrially interesting for introducing specific functionalities, but many have little merit or are outright scams. Some of these digital assets are designed to be used as currencies, some as stores of value, others function more like securities. These projects, and their value, usability, and security, change rapidly in response to regulatory, developer, mining, or community decisions. These factors, in combination with the overall recency and complexity of the networks, make cryptofinancial ecosystems prone to misunderstanding; regulators, investors, and financial institutions alike need assistance to properly interact with this new asset class.

Given the enormity of the data produced by and about cryptocurrency projects, it is nearly impossible to manually follow the ecosystem as a whole. Data aggregation and analytics platforms like Nakamoto Terminal (NTerminal.com) provide filtered and contextualized data streams, allowing for custom searching and informative tracking across heterogeneous sources. These sources include market participants, security professionals, regulatory bodies, source code audits, mining pools, and traditional and social media outlets. Yupana will supplement such platforms by consuming these diverse data streams to inform an adapted agent-based model.

Sources

(You can also find our presentation at the Splunk .conf19 event here)

  • Aditya Kurve; Khashayar Kotobi; George Kesidis (2013). “An agent-based framework for performance modeling of an optimistic parallel discrete event simulator”. Complex Adaptive Systems Modeling. 1: 12. doi:10.1186/2194-3206-1-12.
  • Niazi, Muaz A. K. (2011–06–30). “Towards A Novel Unified Framework for Developing Formal, Network and Validated Agent-Based Simulation Models of Complex Adaptive Systems”. hdl:18933365. (PhD Thesis)
  • Niazi, M. A., and Hussain, A. (2012). “Cognitive Agent-based Computing-I: A Unified Framework for Modeling Complex Adaptive Systems using Agent-based & Complex Network-based Methods.”
  • Kurve, Aditya, et al. “Optimizing cluster formation in super-peer networks via local incentive design.” Peer-to-Peer Networking and Applications 8.1 (2015): 1–21.
  • Ch’ng, E. (2012) Macro and Micro Environment for Diversity of Behaviour in Artificial Life Simulation, Artificial Life Session, November 20–24, 2012, Kobe, Japan.
  • Simon, Herbert A. The sciences of the artificial. MIT press, 1996.
  • Niazi, Muaz; Hussain, Amir (March 2009). “Agent based Tools for Modeling and Simulation of Self-Organization in Peer-to-Peer, Ad-Hoc and other Complex Networks” (PDF). IEEE Communications Magazine.
  • Dignum, Virginia, Nigel Gilbert, and Michael P. Wellman. “Introduction to the special issue on autonomous agents for agent-based modeling.” Autonomous Agents and Multi-Agent Systems 30.6 (2016): 1021–1022.
  • Chliaoutakis, Angelos, and Georgios Chalkiadakis. “Agent-based modeling of ancient societies and their organization structure.” Autonomous Agents and Multi-Agent Systems 30.6 (2016): 1072–1116.
  • Lamperti, Francesco, Andrea Roventini, and Amir Sani. “Agent-based model calibration using machine learning surrogates.” Journal of Economic Dynamics and Control 90 (2018): 366–389.
  • Bonabeau, Eric. “Agent-based modeling: Methods and techniques for simulating human systems.” Proceedings of the National Academy of Sciences 99.suppl 3 (2002): 7280–7287.
  • Cambria, Erik, and Bebo White. “Jumping NLP curves: A review of natural language processing research.” IEEE Computational intelligence magazine 9.2 (2014): 48–57.
  • Lane, Terran, and Carla E. Brodley. “An application of machine learning to anomaly detection.” Proceedings of the 20th National Information Systems Security Conference. Vol. 377. Baltimore, USA, 1997.
  • Borshchev, Andrei, and Alexei Filippov. “From system dynamics and discrete event to practical agent based modeling: reasons, techniques, tools.” Proceedings of the 22nd international conference of the system dynamics society. Vol. 22. Oxford: System Dynamics Society, 2004.
  • Hao Zhang, A. C. Berg, M. Maire and J. Malik. “SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition,” (2006).
  • Shi, J., Murray-Smith, R. & Titterington, D. “Hierarchical Gaussian process mixtures for regression” (2005).
  • Bishop, Christopher. “Pattern Recognition and Machine Learning” (2006).
  • Nakamoto, Satoshi. “Bitcoin: A peer-to-peer electronic cash system.” (2008).
  • Poelstra, Andrew. “A treatise on altcoins.” (2014).

Comments or questions? Join the discussion at BlockShop!