Vitalik conceived Ethereum as the world computer: a single, composable, open, permissionless state machine that could run trust-minimized code. And while Ethereum was a breakthrough on many fronts—P2P layer, deterministic state machines, composable smart contracts and more—it was lacking in many others. These limitations—most notably, the lack of throughput, high latency, expensive transactions, and the use of a custom programming language in a lackluster virtual machine—have prevented Ethereum from fulfilling its original promise.
Solana offers all the properties that developers of trust-minimized apps need:
For developers, the value of properties #1 – 5 is clear. I’d like to highlight the importance of #6: having a single, global state that supports composable smart contracts. Given the nature of discourse in crypto developer communities over the last few years, the value of #6 cannot be overstated.
Developers building smart contracts don’t want to deal with layer 2 and sharding. Or cross-shard application state and logic. Or cross-shard latency. Or security models in side chains. Or liquidity routing in state channel networks. Or how they might run computations off-chain using zero-knowledge proofs.
The entire point of having a smart contract chain is that the chain itself abstracts all of the lower-level complexities and economic systems necessary to deliver trust-minimized computation, allowing application developers to focus on application logic. Indeed, when Vitalik unveiled Ethereum to the world in Miami in January 2014, this is precisely what he emphasized: the point of the world computer is to abstract everything that is not application-specific!
While there are many types of scaling solutions being worked on, each of them creates idiosyncratic forms of complexity for application developers, users, and the ecosystem as a whole. The last of these forms of complexity – what I call “creating ecosystem baggage” – is particularly challenging to deal with. For example, wallets need to know where user assets are across many chains and state channels; users need watchtowers; liquidity providers must spread liquidity across venues, leaving pools fragmented; latency is introduced in all kinds of weird places; etc.
Or said another way: all of these heterogeneous scaling solutions break the elegance and simplicity of a single, logically centralized (but architecturally and politically decentralized) system, replacing it with something bespoke, non-uniform, and logically fragmented. Logical fragmentation increases complexity and friction for users, developers, and service providers.
All of the heterogeneous scaling solutions are responses to the fact that, until now, no one has figured out how to scale layer 1 while also preserving sufficient architectural and political decentralization. When I tell people that Solana has figured out how to scale layer 1, they assume that the architecture must be experimental and risky. They typically also assume that betting the farm on heterogeneous layer 2 scaling is much less risky, largely because that’s what the crypto community has discussed since 2014.
Ironically, this is the opposite of reality. None of the layer 2 or sharding solutions is operating beyond proof-of-concept scale, and no one has successfully addressed the second- and third-order problems that come from scaling in a heterogeneous way (e.g. bridging side chains via ILP, dealing with crowded shards, requiring application logic to consider exogenous state, etc.).
Meanwhile, developers – both crypto and non-crypto developers – already know how to build and deploy code for layer 1: deploy a smart contract on the chain, and then users send signed messages to the chain. That’s it.
It’s impossible to provide simple abstractions without a logically centralized interface.
This is not to say that layer 2 is bad, or that developers won’t build successful layer 2 products. Rather, the case for Solana is that developers don’t have to depend on these bespoke scaling solutions (developers will certainly deploy layer 2 systems on top of Solana, and they’ll be able to because Solana is permissionless). For the vast majority of use cases, developers building on Solana just don’t have to think about scaling at all, because the entire point of Solana’s layer 1 is to abstract complexity.
Solana’s guiding principle is that software shall not get in the way of hardware.
Let me repeat that.
Solana’s guiding principle is that software shall not get in the way of hardware.
This has three major implications:
First, the Solana network as a whole operates at the same speed as a single validator. This is actually intuitive: if software doesn’t get in the way of hardware, the network will perform at the same speed as a single machine, assuming bandwidth is not the bottleneck (it’s not; more on this in the Turbine section below).
Second, aggregate network performance scales alongside bandwidth and the number of GPU cores. Bandwidth continues to double every 18 – 24 months, and modern internet connections are many orders of magnitude away from saturating the physical limits of fiber. And while single-threaded CPU performance is no longer improving in line with Moore’s Law, GPUs continue to double the number of cores every 18 – 24 months with no end in sight (Solana leverages massively parallel GPUs with 4,000+ cores for transaction processing; more on this in the Pipeline section below).
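Performance can only scale with core count if transactions touching disjoint state actually execute in parallel. Solana does this on GPUs inside Pipeline; purely as an illustration of the scheduling idea (the transaction representation and greedy batching here are hypothetical, not Solana's implementation), one can group transactions whose account sets don't overlap:

```python
# Toy sketch: batch transactions with disjoint account sets so every
# transaction within a batch can safely execute in parallel.
# The transaction shape ({"id": ..., "accounts": [...]}) is hypothetical.
def schedule_batches(txs):
    batches = []  # list of (batch, locked_accounts) pairs
    for tx in txs:
        accounts = set(tx["accounts"])
        for batch, locked in batches:
            if accounts.isdisjoint(locked):  # no conflict: join this batch
                batch.append(tx)
                locked |= accounts
                break
        else:  # conflicts with every existing batch: start a new one
            batches.append(([tx], accounts))
    return [batch for batch, _ in batches]

txs = [
    {"id": 1, "accounts": ["alice", "bob"]},
    {"id": 2, "accounts": ["carol"]},           # disjoint: runs alongside tx 1
    {"id": 3, "accounts": ["alice", "carol"]},  # conflicts with both: next batch
]
batches = schedule_batches(txs)  # two batches: [tx1, tx2] then [tx3]
```

The point of the sketch is only the invariant: within a batch, no two transactions share an account, so adding cores adds throughput.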
And third, because Solana’s aggregate network performance grows linearly with the underlying hardware, Solana creates abundance where there is currently scarcity: trust-minimized computation. The overarching theme of technology over the last few hundred years has been making previously scarce resources abundant. The idea of abundance is most clearly captured by Moore’s Law, but abundance is not just about sheer computational ability. The impacts of abundance have been felt in almost every industry as software continues to eat the world.
While abundance is generally a good thing, there is one area in which abundance is obviously a bad thing: the money supply. While every permissionless chain comes with scarcity guarantees about the money supply because of permissionless BFT consensus, each chain also forces scarcity of trust-minimized computation. By creating a network in which software does not get in the way of hardware—allowing network performance to scale with hardware—Solana makes trust-minimized computation an abundant rather than scarce resource, while still offering strong guarantees about the money supply.
Scarcity of money supply and scarcity of trust-minimized computation have previously been bundled. Solana unbundles these.
The world computer must offer abundant computation, but be powered by scarce money.
There are seven major technical breakthroughs making Solana possible. I’ll provide just a brief overview of each. The section headers link to detailed explanations from the Solana team. In order going up the stack:
Time is the foundation for everything in distributed systems, and Solana takes a fundamentally new approach to the notion of time in a permissionless, distributed system.
POH also provides one other nice benefit. One of the most common criticisms of POS systems is that they are not objective, but rather weakly subjective. Because of POH, Solana becomes objective. Because the passage of time is encoded into the blocks themselves, and because verifiers can, via parallelization, verify the POH at least 1,000x faster than it was originally produced, a new node can verify the integrity of the chain from genesis to the present without any out-of-band information.
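To make the asymmetry concrete, here is a minimal sketch of the idea in plain Python (SHA-256 from the standard library; this simplified version omits recording events into the hash stream and is not Solana's implementation): generation must hash sequentially, one tick at a time, but verification can split the chain into slices and check every slice independently.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def poh_generate(seed: bytes, n: int):
    """Sequentially hash n times, recording each (index, hash) tick.
    Generation is inherently serial: tick i depends on tick i-1."""
    ticks = []
    h = seed
    for i in range(n):
        h = hashlib.sha256(h).digest()
        ticks.append((i, h))
    return ticks

def verify_slice(prev_hash: bytes, ticks_slice):
    """Re-hash one contiguous slice and compare against the recorded hashes."""
    h = prev_hash
    for _, expected in ticks_slice:
        h = hashlib.sha256(h).digest()
        if h != expected:
            return False
    return True

def poh_verify_parallel(seed: bytes, ticks, chunk: int = 1000):
    """Slices are independent given their starting hash, so verification
    parallelizes across cores (threads here only illustrate the structure)."""
    jobs = []
    with ThreadPoolExecutor() as pool:
        for start in range(0, len(ticks), chunk):
            prev = seed if start == 0 else ticks[start - 1][1]
            jobs.append(pool.submit(verify_slice, prev, ticks[start:start + chunk]))
        return all(j.result() for j in jobs)

ticks = poh_generate(b"genesis", 5000)
ok = poh_verify_parallel(b"genesis", ticks)  # True for an untampered chain
```

Each slice's starting point is the recorded hash just before it, which is what lets a verifier with many cores check the whole history far faster than the single-threaded producer created it.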
Moreover, Pipeline leverages Berkeley Packet Filter (BPF) bytecode, meaning that transactions are compiled down to and executed directly on the hardware (as opposed to being interpreted inside a virtual machine), improving performance even further.
Although Pipeline doesn’t rely on WASM byte code, developers can take code written for WASM compilers and re-compile using the Pipeline compiler with very few changes. This allows Solana to easily support apps being written for WASM-based chains like EOS, Dfinity, Polkadot, and Ethereum 2.0. The flagship language for Pipeline is Rust, in addition to support for C, C++, and Libra’s new Move language (more on that below).
Yes, Solana is so fast that the Solana team had to create a new database structure from the ground up so that disk I/O wouldn’t be the bottleneck.
The common theme among these innovations can be summed in a word: optimization. Solana is the clearest example I’ve seen of first principles-based engineering at every layer of the stack. The team systematically identified every point at which other chains slow down (e.g. consensus overhead, single-threaded computation, and disk I/O) and designed unique solutions to address every problem.
Facebook’s Libra team created a new VM and programming language called Move. Although Libra will not be programmable at the time of mainnet launch in 2020, the Libra team has already open sourced the Move code base. And it turns out that Move and Solana’s Pipeline VM are more similar than different.
Solana natively supports Move, including BPF and parallel transaction processing on GPUs. This means that developers can trivially port applications written for the permissioned Libra chain to the permissionless Solana chain and receive all of the performance that Solana has to offer.
This is an incredible catalyst for Solana as Solana benefits from Libra’s distribution while still operating in an entirely permissionless fashion.
Based on Solana’s projected mainnet launch in October 2019, Solana is likely to be the first chain to actually support Move-based applications.
Solana is so performant that it enables entirely new classes of applications that were previously impossible. An example:
Solana can validate the block headers of the entire history of Bitcoin from genesis through the tip of the chain. The same is true for Bitcoin forks like Litecoin and Zcash, and Ethereum as well. Because Solana can validate the current state of other chains natively, Solana does not need to rely on an oracle (e.g. Cosmos IBC) to understand external state.
This means that Solana can power a non-custodial cross-chain DEX; trades take place on Solana, and settlement happens on the native chain of the asset.
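As a sketch of what that header validation involves (plain Python; the function names are my own, and real validation also enforces difficulty retargeting and timestamp rules, omitted here): each 80-byte Bitcoin header must commit to its predecessor's double-SHA-256 hash, and its own hash must meet the difficulty target encoded in the header's compact nBits field.

```python
import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def bits_to_target(bits: int) -> int:
    """Decode the compact nBits difficulty encoding into a 256-bit target."""
    exponent, mantissa = bits >> 24, bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def check_header(header: bytes, expected_prev: bytes) -> bool:
    """Verify one 80-byte header: prev-hash linkage plus proof of work.
    Layout: version[0:4] prev[4:36] merkle[36:68] time[68:72] bits[72:76] nonce[76:80]."""
    if len(header) != 80 or header[4:36] != expected_prev:
        return False
    bits = struct.unpack("<I", header[72:76])[0]
    return int.from_bytes(dsha256(header), "little") <= bits_to_target(bits)

def check_chain(headers, genesis_prev=b"\x00" * 32) -> bool:
    """Walk the header chain from genesis, checking every link."""
    prev = genesis_prev
    for h in headers:
        if not check_header(h, prev):
            return False
        prev = dsha256(h)
    return True
```

Since verifying a header is just two SHA-256 passes and a comparison, a chain with Solana's throughput can churn through the entire history of Bitcoin's (or Litecoin's, or Zcash's) headers on-chain.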
And because POH acts as a clock intra-block (and not just inter-block), Solana offers much stronger guarantees in terms of intra-block transaction ordering. Coupled with Solana’s incredible throughput, the network can support an on-chain orderbook. This is the holy grail of DEXs.
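To ground what an on-chain orderbook actually does, here is an illustrative sketch of price-time priority matching in plain Python (not Solana code; the structure and field names are my own): bids and asks are priority queues keyed first by price and then by arrival order, and an incoming order crosses the book before any remainder rests.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

_arrival = count()  # global arrival counter: the "time" in price-time priority

@dataclass
class Order:
    side: str   # "buy" or "sell"
    price: int
    qty: int
    seq: int = field(default_factory=lambda: next(_arrival))

class OrderBook:
    def __init__(self):
        self.bids = []  # max-heap via negated price: (-price, seq, order)
        self.asks = []  # min-heap: (price, seq, order)

    def submit(self, order: Order):
        """Match against the opposite side, then rest any remainder.
        Returns a list of (price, qty) fills."""
        fills = []
        if order.side == "buy":
            while order.qty and self.asks and self.asks[0][2].price <= order.price:
                _, _, ask = heapq.heappop(self.asks)
                traded = min(order.qty, ask.qty)
                fills.append((ask.price, traded))
                order.qty -= traded
                ask.qty -= traded
                if ask.qty:  # partial fill keeps its original time priority
                    heapq.heappush(self.asks, (ask.price, ask.seq, ask))
            if order.qty:
                heapq.heappush(self.bids, (-order.price, order.seq, order))
        else:
            while order.qty and self.bids and self.bids[0][2].price >= order.price:
                _, _, bid = heapq.heappop(self.bids)
                traded = min(order.qty, bid.qty)
                fills.append((bid.price, traded))
                order.qty -= traded
                bid.qty -= traded
                if bid.qty:
                    heapq.heappush(self.bids, (-bid.price, bid.seq, bid))
            if order.qty:
                heapq.heappush(self.asks, (order.price, order.seq, order))
        return fills
```

The `seq` tiebreaker is the whole game: it only yields fair, deterministic matching if the chain itself provides a trustworthy ordering of arrivals, which is exactly what POH's intra-block ordering guarantees supply.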
In the latter part of 2017, Anatoly began studying blockchains. He recognized that the core problem underlying consensus is the clock problem. Specifically, that there was not a universal, globally available, trust-minimized clock that all validators could use to timestamp transactions. He realized that a computer can encode the passage of time using simple SHA-256 looping, and that this data structure can be used as a way to synchronize the clocks among a network of distrusting computers. This core innovation has come to be known as Proof Of History (POH), which acts as a global clock before consensus. Having a global clock that operates separately from consensus is a subtle but profound shift that has major implications for everything built on top of POH, including consensus itself.
Anatoly has assembled one of the best engineering teams in crypto. Most of the core engineering team has worked together for 10 years, previously at Qualcomm. The team has expertise at every layer of the stack from wireless networking to CPU/GPU/DSP design, kernel design, embedded systems, OS, SDKs, and more. Some team highlights:
Anatoly Yakovenko designed high performance DSP software that powered Google Tango, the first smartphone to support augmented reality.
Rob Walker was Senior Director for Brew, the OS that powered more than 500M CDMA-based phones before the iPhone launched.
Greg Fitzgerald worked on LLVM in the Office of the Chief Scientist at Qualcomm.
Pankaj Garg helped define the LTE Standard and build ARM TrustZone.
Stephen Akridge was a GPU lead at Qualcomm, focused on GPU compilers and drivers.
Eric Williams, PhD, was a particle physicist at CERN.
Solana is only possible because of the team’s technical depth and breadth. This team has the depth to go all the way down to the metal, and they’ve leveraged that depth to leave no assumption untested. Every layer of the stack is optimized.
Over the next few months leading into projected mainnet launch in October, the Solana team is embarking on a global tour to meet developers around the world, answer questions, and show the system in action. They’ll be at Web3 Summit in August in Berlin, Wanxiang Blockchain week in Shanghai in September, and Devcon5 in Japan in October, in addition to smaller events around the world. If you’re going to be at any of those events, please reach out to the Solana team and say hello!
There is a unique opportunity to launch a new chain correctly. That means one with key management solutions, exchange and custodian integrations, developer tooling like Truffle, query and API layers, debugging tools, and more. If you’re building Web3 infrastructure or high performance applications and would like to integrate with or build on Solana, you can contact the Solana team.
We are incredibly fortunate to back Anatoly and the Solana team, and are looking forward to seeing the applications that are uniquely enabled by Solana!
Disclosures: Multicoin Capital holds long positions in SOLs. Multicoin Capital abides by a “No Trade Policy” for the assets listed in this report for 72 hours (“No Trade Period”) following its public release. No officer, director or employee shall purchase or sell any of the aforementioned assets during the No Trade Period. This post is for informational purposes only, you should not construe any of the information or other material as investment or financial advice. Nothing in this post constitutes a solicitation, recommendation, endorsement, or offer by Multicoin to buy or sell any tokens or other financial instruments.