Manifesting Storage, Computation, and Communications
How are Ethereum, IPFS/Filecoin, and BigchainDB complementary? What about Golem, Polkadot, or Interledger? I often get questions like this. So, I decided to write about how I answer those questions, via a first-principles framing.
The quick answer: there’s no single magic system called “blockchain” that does everything. Rather, there are really good building blocks of computing that can be used together to create effective decentralized applications. Ethereum can play a role, BigchainDB can play a role, and many more as well. Let’s explore…
The elements of computing are storage, compute, and communications. Mainframes, PCs, mobile, and cloud all manifest these elements in their own unique ways. Specialized building blocks emerge to reconcile tradeoffs within a given element.
For example, in the storage element we have both file systems and databases: file systems store blobs like mp3s in a hierarchy of directories and files, while databases store structured metadata behind a query interface like SQL. In the centralized cloud, we might use Amazon S3 for blob storage, MongoDB Atlas for databases, and Amazon EC2 for processing.
This article focuses on the blockchain landscape: the blocks for each element of computing, and some examples of systems manifesting each block. For each block, I focus on being illustrative rather than thorough.
Blockchain Building Blocks
Here is each element of computing, with related decentralized building blocks:
- Storage: token storage, database, file system / blobs
- Processing: stateful business logic, stateless business logic, high performance compute
- Communications: connect networks of data, of value, and of state
Blockchain Infrastructure Landscape
Blockchain technology is manifesting in each block, as this image shows:
The fundamental computing element of storage has the following building blocks.
Token storage. Tokens are stores of value (e.g. assets, securities), whether Bitcoin, air miles, or digital art copyrights. The main actions in a token storage system are to issue and transfer tokens (with many variants), while preventing double-spends and the like.
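Those core actions can be sketched as a toy ledger in a few lines of Python. This is an illustrative model only (the class and method names are hypothetical); real token systems add signatures, consensus, and replication on top:

```python
# Toy token ledger: issue and transfer tokens while preventing double-spends.
# Illustrative sketch only -- real systems add signatures, consensus, replication.

class TokenLedger:
    def __init__(self):
        self.balances = {}  # owner -> token count

    def issue(self, owner, amount):
        self.balances[owner] = self.balances.get(owner, 0) + amount

    def transfer(self, sender, receiver, amount):
        # Reject any transfer that would spend tokens the sender doesn't hold.
        # This balance check is the essence of double-spend prevention.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance: would be a double-spend")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = TokenLedger()
ledger.issue("alice", 10)
ledger.transfer("alice", "bob", 4)  # alice: 6, bob: 4
```

Everything a token storage system does beyond this sketch is about making that balance check hold without a trusted operator.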
Bitcoin and Zcash are two prominent “pure play” systems focusing solely on tokens. Ethereum uses tokens in service of its mission to be a world computer. In all these examples, the tokens are internal incentives to run the network infrastructure.
Other tokens aren’t internal to a network to power the network itself, but are used for incentives in a higher-level network where the lower-level infrastructure actually stores the tokens. One example is ERC20 tokens like Golem (GNT) running on top of the Ethereum mainnet. Another example is Envoke’s IP licensing tokens, running on the IPDB network.
Finally, I have listed a “.*” to illustrate that most blockchain systems have a mechanism for token storage.
Database. Databases specialize in storing structured metadata, for example as tables (relational DB), document stores (e.g. JSON), key-value stores, time series, or graphs; and then rapidly retrieving that data via queries (e.g. SQL).
Traditional distributed (but centralized) databases like MongoDB and Cassandra routinely store hundreds of terabytes, even petabytes, of data, with throughput that can exceed one million writes per second.
Query languages like SQL are profound because they separate implementation from specification, and are therefore not bound to any particular application. SQL has been a standard for decades. This is why the same database system can be used across many different industries.
Put another way: to generalize beyond Bitcoin to more applications without any application-specific code, you don’t need to go all the way to Turing completeness. You just need a database. This has corresponding benefits in simplicity and scale. There are still great reasons to have Turing completeness in some places; we discuss this further in the “decentralized processing” section.
BigchainDB is decentralized database software; specifically a document store. Being built on MongoDB (or RethinkDB), it inherits the querying and scale of Mongo. But it also has blockchain-y characteristics like decentralized control, tamper-resistance, and token support. IPDB is a public net instance of BigchainDB, with governance.
Also in the blockchain space, we can think of IOTA as a time-series database, if we squint a bit.
File system / data blob storage. These are systems to store large files (movies, mp3s, large datasets), organized in a hierarchy of directories and files.
IPFS and Tahoe-LAFS are decentralized file systems that wrap decentralized or centralized blob storage. FileCoin, Storj, Sia, and Tierion do decentralized blob storage. So does good old BitTorrent, though it uses a tit-for-tat scheme rather than tokens. Ethereum Swarm, Dat, and Swarm-JS basically do both.
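The idea underlying these systems, content addressing (a blob is retrieved by the hash of its content rather than by its location), can be sketched as follows. This is a minimal single-machine model, not the actual IPFS or Swarm API:

```python
# Content-addressed blob store: the retrieval key is the hash of the content,
# so any node holding the blob can serve it, and tampering is self-evident.
# Minimal sketch; real systems (IPFS, Swarm) add chunking, DHTs, replication.
import hashlib

class BlobStore:
    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # content identifier
        self.blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self.blobs[cid]
        # Integrity check: the content must hash back to its own address.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = BlobStore()
cid = store.put(b"my song.mp3 bytes")
```

Because the address is derived from the content, replicas can be fetched from anyone, and a bad replica is detected by re-hashing.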
Data marketplace. These systems connect the data owners (e.g. enterprises) with data consumers (e.g. AI startups). While they’re higher-level than databases and file systems, they are nonetheless core infrastructure because the countless applications that need data (e.g. anything AI) will depend on such services. Ocean is an example protocol & network, on which data marketplaces can be built. There are also application-specific marketplaces: Enigma Catalyst for crypto markets, Datum for personal data, and DataBroker DAO for IoT streams.
Let’s discuss the fundamental computing element of processing.
“Smart contracts” is the popular label for systems that do processing in a decentralized fashion. The label actually covers two subsets with very different properties: stateless (combinational) business logic and stateful (sequential) business logic. Stateless vs. stateful makes for radical differences in complexity, verifiability, and more. There’s a third decentralized processing building block: high-performance compute (HPC).
Stateless (combinational) business logic. This is any arbitrary logic that does not retain state internally. In electrical engineering terms, it can be framed as a combinational digital logic circuit. The logic can be represented as a truth table, a schematic diagram, or code with conditional statements (combinations of if/then, and, or, not). Because there is no state, it’s tractable to verify even large stateless smart contracts, and therefore to build large verified / secure systems: N Boolean inputs and one output require O(2^N) computations to verify exhaustively.
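To make the O(2^N) claim concrete, here is a sketch that exhaustively verifies a stateless rule against its specification. The 3-input escrow-release rule is invented for illustration:

```python
# Exhaustively verify a stateless (combinational) rule: with N Boolean inputs
# there are only 2**N cases, so we can simply check every one.
from itertools import product

def release_funds(signed_by_a, signed_by_b, timed_out):
    # Hypothetical escrow rule: release if both parties signed, or on timeout.
    return (signed_by_a and signed_by_b) or timed_out

def spec(a, b, t):
    # The specification we want the implementation to match.
    return (a and b) or t

n_inputs = 3
cases = list(product([False, True], repeat=n_inputs))
assert len(cases) == 2 ** n_inputs          # 8 cases: full verification is cheap
assert all(release_funds(*c) == spec(*c) for c in cases)
```

With no internal state, the truth table is the whole behavior; this is why stateless contracts are so much easier to trust than stateful ones.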
Bitshares and EOS are among the systems supporting stateless business logic.
Since stateful logic is a superset of stateless logic, systems that support stateful logic also support stateless logic (at the cost of additional complexity and verifiability challenges).
BigchainDB, Bitshares, and EOS also support events. This gives a level of persistence, edging the functionality closer to stateful business logic (thanks to Ian Grigg for pointing this out).
Stateful (sequential) business logic. This is any arbitrary logic that does retain state internally; that is, it has memory. Equivalently, it’s a combinational logic circuit with at least one feedback loop (and a clock). For example, a microprocessor has an internal register that gets updated according to the machine-code instructions sent to it. More generally, stateful business logic is a Turing machine that takes in a sequence of inputs and returns a sequence of outputs. Systems that manifest (a practical approximation of) this are called Turing-complete systems.
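The contrast with stateless logic can be made concrete with a minimal sketch: the same call returns different results depending on an internal register that persists between calls. The counter “contract” below is hypothetical, not any particular chain’s API:

```python
# Stateful (sequential) business logic: the output depends on retained state,
# not just the inputs. Minimal sketch of a counter "contract".

class Counter:
    def __init__(self):
        self.count = 0  # internal register: persists across calls

    def increment(self, step=1):
        self.count += step
        return self.count

c = Counter()
c.increment()   # returns 1
c.increment(2)  # returns 3: same method, different result, because of state
```

A stateless function with the same inputs always returns the same output; here the second call’s result depends on the first call ever having happened. That history-dependence is exactly what makes verification so much harder.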
Ethereum is the best-known blockchain system that manifests stateful business logic / smart contracts running directly on-chain. Lisk, RChain, DFINITY, Aeternity, Tezos, Fabric, Sawtooth, and many more also implement it. Running code that’s “just out there, somewhere” is a powerful concept, with many use cases. This helps explain why Ethereum took off, why its ecosystem has grown such that it’s almost a platform in its own right, and why so much competition has arisen in this building block.
Because sequential logic is a superset of combinational logic, these systems also support combinational logic.
Small mistakes in code can have grave consequences, as The DAO hack showed. Formal verification can help, just as it helped the chip industry; the Ethereum Foundation is working on this. But it has scale limitations. For a combinational circuit, the number of input combinations is 2^(number of inputs). For a sequential circuit, the number of internal states is 2^(number of internal state variables), if the internal variables are all Boolean. For example, a 3-input combinational circuit has 2³ = 8 possible cases to verify. But a sequential circuit with a 32-bit register has 2³² ≈ 4.3 billion states to check for full verification. This restricts the complexity of sequential circuits (if you want to trust them). “Correct-by-construction” is another approach to trusting stateful smart contracts; RChain takes it, using rho calculus.
High-Performance Compute (HPC). This is processing to do “heavy lifting” compute for things like rendering, machine learning, circuit simulation, weather forecasting, protein folding, and more. A compute job here might take hours or even weeks on a cluster of machines (CPUs, GPUs, even TPUs).
I see these approaches to decentralized HPC:
- Golem and iEx.ec frame it as a decentralized supercomputer plus associated apps.
- Nyriad frames it as storage processing. Basically, the processing sits next to decentralized storage (for which Nyriad also has a solution).
- TrueBit lets third parties compute, then does post-compute checking (implicit checking when possible; explicit checking when challenges are raised).
- Some folks simply run heavy computations on VMs or Docker containers, and put the results (final VM state, or just the computed outputs) into blob storage with restricted access. Then they sell access to these containers using, for example, tokenized read permissions. This approach asks more of clients in verifying results, but the good news is that all this tech is possible today. It will naturally combine with TrueBit as TrueBit matures.
Here, we will cover the third and final fundamental computing element, of communications. There are many ways to frame communications; I will focus on connecting networks. It comes in three levels: data, value, and state.
Data. In the 60s we got the ARPAnet. Its success spawned several similar networks like NPL and CYCLADES. A new problem arose: they didn’t talk to each other. Cerf and Kahn invented TCP/IP in the 70s to connect them, creating a network of networks, which we now call the internet. TCP/IP is now the de facto standard for connecting networks. OSI was a competing set of protocols, but it’s long faded; though, ironically, its model has proved useful. So, despite its age, TCP/IP is nonetheless a decentralized building block, for connecting networks of data.
The Tor Project can be seen as a TCP/IP overlay that helps protect users’ privacy. However, it has points of centralization, not to mention funding from the DoD, which raises eyebrows. Tokenized Tor-like projects are emerging; stay tuned.
Value. TCP/IP only connects networks at the data level. You can double-spend packets (send the same packet to more than one destination at once) and it doesn’t care. But what about connecting networks so you can send value across them? For example, from Bitcoin to Ethereum, or even from the SWIFT payments network to, say, Ripple’s XRP network. You want a token to be able to go to only one destination at a time. One way to connect networks while preventing double-spends is to use an exchange, but that’s traditionally pretty heavy. You can strip an exchange to its essence and remove the need for a trusted middleman by using cryptographic escrow: Alice can send money to Bob via Mallory, where Mallory passes on the funds but cannot spend them (and there’s a timeout so that Mallory can’t stall things forever). This is the essence of the Interledger Protocol (ILP). It’s the same conceptual idea as two-way pegs (think sidechains) and state channels (think Lightning & Raiden), but the focus is 100% on connecting networks with respect to value. Besides ILP, there’s also Cosmos, which adds a bit more complexity for more convenience.
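The cryptographic-escrow idea can be sketched as a hash-plus-timeout lock: funds release only to the intended receiver who presents a secret, or back to the sender after a timeout, so the middleman can neither spend nor stall. This is a simplified single-machine model with invented names, not the actual ILP wire protocol:

```python
# Hash time-locked escrow sketch: Mallory can hold funds she cannot spend.
# Funds release only to the named receiver who presents the hash preimage,
# or back to the sender after a timeout. Simplified single-machine model.
import hashlib

class Escrow:
    def __init__(self, amount, receiver, secret_hash, timeout_at):
        self.amount = amount
        self.receiver = receiver
        self.secret_hash = secret_hash
        self.timeout_at = timeout_at
        self.settled = False

    def claim(self, claimant, preimage, now):
        # Only the intended receiver, with the secret, before the timeout.
        if (not self.settled and claimant == self.receiver
                and now < self.timeout_at
                and hashlib.sha256(preimage).hexdigest() == self.secret_hash):
            self.settled = True
            return self.amount
        return 0

    def refund(self, now):
        # After the timeout the sender recovers the funds: no stalling forever.
        if not self.settled and now >= self.timeout_at:
            self.settled = True
            return self.amount
        return 0

secret = b"bob's secret"
h = hashlib.sha256(secret).hexdigest()
escrow = Escrow(amount=5, receiver="bob", secret_hash=h, timeout_at=100)
assert escrow.claim("mallory", secret, now=10) == 0  # Mallory can't spend
assert escrow.claim("bob", secret, now=10) == 5      # Bob can, with the secret
```

Chain two of these escrows together (Alice→Mallory, Mallory→Bob, sharing one secret hash) and you get the essence of relaying value across networks without trusting the relay.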
State. Can we go beyond connecting networks of value? Imagine a computer virus with its own Bitcoin wallet that can hop from one network to another. Or a smart contract in Ethereum mainnet that can move its state to another Ethereum net, or another compatible net? Or, why restrict an AI DAO to just one net?
We’ve now reviewed the three elements of computing (storage, processing, communications), the decentralized building blocks for each, and example projects within each building block.
People are starting to build systems that manifest combinations. There are many combinations of two blocks at once, usually IPFS + Ethereum or IPFS + IPDB. But some folks are even using three or more blocks. Here are a couple of leading-edge examples:
- Ujo uses IPFS|Swarm + IPDB + Ethereum for decentralized music, just as envisioned here. IPFS or Swarm are for file system and blob storage. IPDB (with BigchainDB) is used for metadata storage and querying. Ethereum is used for token storage and stateful business logic.
- Innogy uses IPFS + IPDB + IOTA for supply chain / IoT applications. IPFS is used for file system and blob storage. IPDB (with BigchainDB) is used for metadata storage and querying. IOTA is used for time-series data.
Here are related framings by others in the blockchain community, all of whom I’ve had the pleasure of great conversations with.
Joel Monegro’s “Fat Protocols” framing emphasizes each building block as a protocol. I think it’s a cool framing, though it constrains the building blocks to talk to each other via a network protocol. There’s another way: a block could simply be one “import” statement or library call away.
Reasons for using an import could be: (a) lower latency: a network call takes time, which could hurt or kill usability; (b) simplicity: using a library (or even embedded code) is usually just simpler than connecting over the network, paying tokens, etc.; and (c) maturity: the protocol stack is only emerging now, whereas we have awesome Unix libraries going back decades, and even Python and JS blocks going back 15+ years.
Fred Ehrsam’s “Dapp Developer Stack” has an emphasis on web business models. While it’s also very helpful, it does not aim to make a fine-grained distinction among blocks for a given element of computing (e.g. file system versus database).
Figure 1 of the BigchainDB whitepaper (first released Feb 2016) gave an earlier version of this post’s stack. For convenience, here it is:
It focused on the building blocks of processing, file system, and database. It did not frame things from the perspective of the “elements of computing”, and did not distinguish among the types of decentralized processing. What I’ve written in this post is an evolution of my thinking from that paper over the past year and a half, with continual updates in talks such as my May 22 talk at Consensus 2017, which is very similar to this article. (Part of my reason to write this post is that I’ve received many requests to put it in writing:)
The image also emphasized that there’s a spectrum from fully centralized (left) to fully decentralized (right). This is helpful for updating existing software systems to be more decentralized over time, focusing on updating the blocks where decentralization helps the most.
Stephan Tual’s “Web 3.0 Revisited” stack is spiritually similar to this post, though with a bigger focus on Ethereum. It does a good service to the community by trying to make a map that groups many projects into similar building blocks. I was happily surprised by how similar the thinking was to my own. However, its layer of blocks to serve applications (blocks for messaging, storage, consensus, governance, ..) is actually mixing three things: apps, the “what”, and the “how”. To me, blocks should be the “what”. So, messaging is an app (should be at the application level); storage needs to be more fine-grained; consensus is part of the “how” (hidden within some blocks); and governance is also part of the “how” (therefore also hidden). It also has [network] protocols as a separate lower-level block, though I see those as one of the possible ways that blocks can talk to each other, alongside library calls. Nonetheless, I think this is an excellent article and stack:)
Alexander Ruppert’s “Mapping the decentralized world” has about 20 groupings of organizations, with the x-axis giving four higher-level groupings from infrastructure layer to application layer, but with middleware and liquidity as intermediate levels. This is a great piece too; I’m happy to have helped Alex map it out. It has less emphasis on core infrastructure and more on broader trends; whereas this piece is all about core infrastructure from a first-principles framing.
Systems like Ujo combine many blocks together, such as IPFS or Swarm (for blobs) + Ethereum (for tokens and business logic) + IPDB & BigchainDB (for database with fast queries), and therefore leverage the benefits of all of these systems.
I expect that this trend will accelerate as folks get a better understanding of how the building blocks relate. It’s also more productive than framing everything into one monolith called “blockchain”.
I expect this stack to continually evolve, as the decentralization ecosystem evolves. AWS started out as just one service: S3 for blob storage. Then it got processing: EC2. And it kept going; here’s the full timeline. AWS now has more than 50 blocks; though of course a small handful remain the most important. Below is a picture of all the AWS services.
I envision something similar happening in the decentralization space. As a first cut, one could imagine a decentralized version of every single AWS block. However, there will be differences, since each ecosystem (cloud vs mobile vs decentralized) has its own special blocks, such as token storage for decentralization. It will be a fun ride!
 You can actually put further hierarchy into these building blocks. E.g. databases sit on top of file systems, which sit on raw data (blob) storage. And distributed databases involve communication. For example, most modern databases talk to the underlying storage via a file system like Ext4, XFS or GridFS. The framing I give in this article is that of an applications programmer: what’s the UX for a file system, the UX for a database, etc.
 I added some new content here in Sept 2017.
 I’ve never really liked the label “smart contracts”. They’re not really smart in any AI-ish sense of the word. And they usually have nothing to do with “contract” in any legal sense of the word. If they do include legals, they usually state so, e.g. with Ricardian contracts. The labels “decentralized processing” and, within it, “decentralized business logic” make more sense. However, given that “smart contract” now has widespread use, so be it. I have better things to focus on than fighting over labels:)
 I say “Turing complete” here in a practical sense, not in a theoretically pure sense. That is: the machine returns a string of outgoing bits as a function of the incoming bits and its current internal state; but practical in the sense of not running infinitely long or claiming to solve the “when does the machine stop” problem (halting problem).
Thanks to the countless folks who have given me feedback on this stack over the last couple of years. And thanks to Carly Sheridan, Troy McConaghy, and Dimi de Jonghe for thorough editing. Finally, thanks to everyone in the space who continues to improve the building blocks and build ever more interesting applications:)