#27: Is Solana Decentralized Enough? w/ Kyle Samani, Su Zhu, and Hasu [+transcript]

In this episode, Su Zhu and Hasu invited Kyle Samani from Multicoin Capital, one of the most successful venture funds of this cycle. Kyle is a day-one supporter of Solana, a smart-contract platform optimized for high throughput. They talked about:

  • Multicoin’s approach to investing
  • How will the winning blockchain scale?
  • What is enough decentralization?
  • Win conditions for Solana
  • What is Serum?

Enjoy!

Transcript

Note: The timestamps are about one minute off because they do not include the episode intro.

Hasu 0:00
Hey, welcome to the show, Su.

Su Zhu 0:02
Hey Hasu.

Hasu 0:03
Our guest today is Kyle Samani, general partner at Multicoin Capital. Our topic is the two major ways to scale a Layer 1 blockchain, and really, how much decentralization is the winning blockchain going to have?

Before we dive into this, Kyle, can you give us a quick intro, both of yourself and of Multicoin, your approach to investing, and your time horizon? So we can get some context on your portfolio and so on.

Kyle Samani 0:35
Sure. Hi everyone, it's a pleasure to be on the show; I'm a longtime listener, and Uncommon Core is one of my favorite podcasts. You both do a great job unpacking the fun, meaty debates. I launched Multicoin in October of 2017, along with my co-founder Tushar. In 2017 we launched our hedge fund, and we added our first venture fund in July of 2018. That fund is fully deployed, and we're now deploying out of our second venture fund.

Today, we manage a few billion across those vehicles, and we invest in all things crypto. Our strategy is pretty straightforward: we are a fundamentals-focused fund. Our hedge fund's time horizon is measured in 6 to 24 months; that's typically the timeframe we underwrite when we put on positions. For many things we own, like Solana, which we'll talk about today, our time horizon is longer than that, but at a minimum we have to underwrite it to 6 to 24 months. Our venture funds are obviously buy-and-hold for 5 to 10 years.

In terms of thesis formation and what we invest in, we've invested in every layer of the stack, all the way from core technical primitives through core financial primitives, through middleware, and all the way up to applications.

We are generally comfortable with all forms of risk: technical risk, product risk, timing risk, team risk, whatever. I can think of deals where we've had serious risk in at least one of those categories.

We generally get more uncomfortable when you’ve got two or three of those that are compounding.

But in fact, some of the best returns are the ones where you do compound those risks. So on rare occasions we will compound those risks, but we prefer to say we're really underwriting one specific form of risk that we think is the core question at hand, and then try not to compound too many other forms of risk. Again, it's impossible to do that perfectly, the world is not that neatly cut up, but that's usually how we like to think about risk.

Hasu 2:44
Interesting. Do you have an example of something that you would define as a risk and that you would try to avoid compounding?

Kyle Samani 2:52
Yeah, so for example, we invested in zero-knowledge stuff. We invested in Mina, we invested in StarkWare, and we've been investors in both of those since 2018. Looking at zero knowledge, the conclusion we came to was: you can prove to someone that you've done a computation, and demonstrate the integrity of that computation, without them having to redo it. If you look at blockchains, the way blockchains handle this problem is simply through redundancy and replication: just have as many people as possible replicate the same thing over and over. So zero knowledge, in a very abstract sense, represents one of the most disruptive, fundamental changes to the nature of trust minimization that is out there.

In 2018, we looked at StarkWare and Coda (now Mina), which were really the only two credible zero-knowledge plays, and StarkWare and Mina are very different things by all accounts. And we said, look, if anything is ever going to kill crypto, it's probably this.

Our ability to reason about Layer 1 versus Layer 2 at that time was almost non-existent. Thinking about zero-knowledge programming environments and those things, we had no idea how to reason about any of it. But we said, okay, look, there's a real good chance zero knowledge can break all the assumptions we have right now. We invested in both of those things at that time, and our biggest risk, in our mind, was timing. I had a pretty strong suspicion, which I think has mostly been borne out, that it was too early at the time.

But we underwrote it saying we don't care that we're too early, because in the event that we're wrong and it's not too early, this can have crazy impacts through the rest of our portfolio. So it's both a hedge on the rest of our portfolio and, in and of itself, an asymmetric opportunity.

And so, the biggest risk there was timing. We weren’t worried about team or math or anything else, like those guys were the world’s experts in this stuff, right? Like we’re not going to underwrite correctness there. Our biggest question was, is zero knowledge three to seven years too early?

I personally have experienced that pain of being too early with my last startup, called Pristine. We built software for Google Glass, for surgeons. In hindsight, it's been eight years since Google Glass launched in 2013, and it's obvious now that it was too early; the hardware just wasn't there. And if you look at the Snapchat summit they had a few days ago, it's clear that it's still too early. This stuff still doesn't really work in a consumer-friendly package. So I spent two and a half years of my life doing something that was at least eight years too early, probably twelve years too early.

I remember looking at zero knowledge in 2018 and thinking the same thing: okay, is this too early? And I thought there was probably an 85-90% probability it was too early. But we went ahead and pulled the trigger anyway.

Hasu 6:03
Makes sense. And this sets up the question of today, which is basically: will the winning blockchain scale in layers and logical sharding, or rather horizontally within a single shard?

The first of those breaks the nice synchronous composability that we are used to, where all applications can interact with each other atomically. But it has the major benefit that users only have to verify the small part of the state that they care about. In the second approach, you keep the entire state in one huge blob, which retains the nice composability that we have gotten used to, but at the expense of ballooning verification costs for users. I used to have a pretty strong stance on this, and I would say I still do, but reading some of your work and seeing the early success of Solana has made me wonder if I'm personally diversified enough on this. As you said, maybe my risk in this area is too compounded, and maybe there's more than one approach. So, where do you stand on this question?

Kyle Samani 7:19
Yes, one quick clarification I want to make on your comment about users in the layered approach being able to verify the part of the state they care about. I'm not sure that's strictly true, even in a maximalist sense. You may hold your assets on one shard or whatever, but if you end up having to interface with three, five, ten other shards, right now, as a user, it's actually not clear how you will verify for yourself that things executed correctly on the other shards. In a theoretical world where statelessness works, you can get there, but that is still an undefined, unsolved problem space.

Hasu 8:02
Maybe to interject there just for a second. I didn't mean sharding as in the Ethereum 2.0 roadmap. I'd say the rollup-centric roadmap of Ethereum also has logical sharding, because if you don't use a rollup, then you don't have to verify it, but it still scales Ethereum as a whole.

Kyle Samani 8:22
Right, okay, yeah. So, a slightly different definition; I just want to make sure we're very clear. I'd actually argue that sharding maintains logical centralization. The difference between sharding and rollups is that in a sharded system like Near or Polkadot or Cosmos or whatever, theoretically, if you say 'hey, go interact with this transaction on this other shard', the inter-shard protocol will figure it out and just do it for you. The issuing transaction does not need to know or care which other shards the pieces of state are on; the sharding protocol itself handles that magically. Rollups by definition break that, because the Layer 1 system does not know that the Layer 2 even exists, so you cannot automatically route that through the logic of the shard itself. So rollups break logical centralization; sharding theoretically maintains it. Sorry, I know this is very nuanced technical wizardry. So, to answer the original question: yeah, the layered approach versus the horizontally-scaled single-shard approach.

I think answering this question is a question of what trade-offs you are making and what you are prioritizing for. I think the right thing to prioritize is some minimum level of decentralization, which primarily gives you censorship resistance; that's the property you're really getting. Have some minimum threshold of that, and then beyond that threshold, do not optimize for decentralization any further, and instead optimize for developer experience and user experience.

That is how I think about it. The problem is that as you go further and further down the decentralization curve toward maximum decentralization, you create engineering problems. You create developer experience problems and user experience problems. The theoretical solutions to those problems are things like sharding and rollups. And I'm not convinced that you need to go that far down the decentralization spectrum to make these things sufficiently censorship resistant, to achieve the kinds of properties you want out of these systems.

Hasu 10:50
Yeah, so maybe a good question to ask is: what is enough decentralization? Lots of people probably have lots of different opinions on this, and there's a lack of first principles for how to approach it.

Kyle Samani 11:03
I think probably the best thing I've seen written on this is a blog post Balaji wrote back in, I want to say, 2017, on quantifying decentralization. You can quantify it across lots of metrics. Probably the most obvious ones are stake or hash-power distribution, the number of clients, the number of major applications (and the stake or hash distribution across those), the number of validators in the consensus group, and the number of validators or miners who can impact liveness (that's 1/3 of stake in proof of stake or 51% of hash power in proof of work). Those are probably the metrics that matter, and then maybe general wealth concentration, for general egalitarian, equality purposes. I don't think there are any others that seriously matter; those are probably the five or six that do, and I think you can stack-rank them to some degree. If you ask Anatoly from Solana, he'll tell you that the one that matters is the number of consensus validators that can get you to 1/3 of the stake, because that's how you impact censorship resistance, how you can theoretically roll back the chain, create liveness problems, and make the system fundamentally less usable for its intended purpose, which is DeFi. And that, to me, feels like a very clear and reasonably objective way to think about it. It may not be the correct way, but it is at least a cogent view of the problem.
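For readers who want to make that metric concrete, here is a minimal sketch (not from the episode, and not anyone's production code) of counting how many of the largest validators it takes to cross 1/3 of total stake, sometimes called the superminority or Nakamoto coefficient. The stake figures and the function name are hypothetical.

```rust
// A minimal sketch of the metric discussed above: how many of the largest validators
// does it take to control more than 1/3 of total stake (enough to halt finality)?
// The stake figures below are hypothetical, not real network data.
fn validators_to_one_third(mut stakes: Vec<u64>) -> usize {
    let total: u64 = stakes.iter().sum();
    stakes.sort_unstable_by(|a, b| b.cmp(a)); // biggest stakers first
    let mut cumulative = 0u64;
    for (i, stake) in stakes.iter().enumerate() {
        cumulative += stake;
        // Strictly more than 1/3 of stake is enough to block finalization (liveness).
        if 3 * cumulative > total {
            return i + 1;
        }
    }
    stakes.len()
}

fn main() {
    // Hypothetical stake distribution, in arbitrary units.
    let stakes = vec![400, 300, 250, 200, 150, 100, 100, 50, 50, 25];
    println!(
        "validators needed to reach >1/3 of stake: {}",
        validators_to_one_third(stakes)
    );
}
```

On a real network you would feed in the live stake distribution from an explorer or RPC endpoint rather than hard-coded numbers.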

Su Zhu 12:39
Yeah, to add to what you're saying there, I totally agree. I also think that decentralization is often a very emotional word for people, because a lot of people, when they come into crypto, think that there needs to be a certain amount of decentralization for it to have any value or any meaning at all. In one of our earliest Uncommon Core podcasts I mentioned the idea of a spectrum; at that time I was talking about decentralized exchanges, comparing CME futures trading against FTX, against Deribit, against BitMEX, and these kinds of concepts, and saying that people think too much in terms of absolutes, like 'this will kill that'. I think the reality is that it's all inter-subjective. If the market demands a very high standard of decentralization for a specific task, then that may make sense for that task. But for a lot of what people currently do in DeFi, and a lot of what people might use blockchains and blockspace for, there's definitely a constant level of overkill, I think. And I think you guys have been very smart about this thesis that doing things on-chain is fundamentally useful, and that if you go a little bit further along the spectrum, you can get a lot more done, basically.

Kyle Samani 14:07
Yeah, it's obviously a spectrum. A year and a half ago, that discourse was non-existent; today it's reasonably established, and I think a lot of people have real open questions about how much decentralization is enough and which vectors really matter. Definitely one that matters is the number of validators that get you to a third of the stake weight. If you look at Eth 2.0, because there's no native delegation, you have to suss that out between the stated number of validators and the number of validators who are controlled by Coinbase, or by Lido, or by Kraken, or Binance, or whatever. So that's an interesting sub-point. That metric is specifically important as you think about liveness thresholds and what number of people can collude to impact the liveness of the system. The other real fundamental property that matters is censorship resistance, and there basically all you need is more validators validating the system. As you go from 1,000 to 10,000 to 100,000 validators, you're just getting more censorship resistance, because if anyone tries to block a transaction or insert an invalid transaction, you need more and more people watching them and more people in the consensus group, such that transactions will get included and verified. So those are probably the two most important ones.

Let's talk about the second one first: censorship resistance. I think that's probably the more important one from a grand human-history perspective; just make sure you can't be censored. How many nodes need to be in the consensus group such that collusion is sufficiently difficult and you can get your transactions included? My intuition is that the number is probably 10,000. Look, it's very subjective, but if there are more than 10,000 nodes around the world, and you know they are physically distributed, or you have reasonable reason to believe that, then what's the probability that half of them, or two thirds of them, are colluding so that you're not going to get your transaction included in a block? It just seems very, very hard to foresee that kind of large-scale collusion.

Hasu 16:33
Is it really realistic that any system, whether it's proof of work or, even more so, proof of stake, will ever have 10,000 distinct participants in the validator set, given all the economies of scale that are involved with staking and mining?

Kyle Samani 16:53
Proof of work? Well, if you're distinguishing individual people mining from the hash pools, and you look at individual miners, I'm fairly certain there are a lot more than 10,000 people who mine today.

Hasu 17:04
Right, but they don't make their own blocks, right?

Kyle Samani 17:07
Correct.

Hasu 17:07
They outsource this to the mining pools, and this is unlikely to ever change.

Kyle Samani 17:12
Agreed, there's basically a 0% probability that will change in proof of work systems. In proof of stake systems today, if you look at Polkadot, I think it has something like 800 validators or thereabouts on mainnet, and Kusama is at like 1,200 or so, somewhere in that range. Solana has around 600 validators on mainnet and around 1,200 on testnet. Cosmos and Tezos and Algorand are all in the 1,000-ish range right now, maybe 1,500. But none of them are at 8,000, to my knowledge.

Hasu 17:41
But validators on those systems don’t say anything about who owns the stake, right? A validator just represents one fixed amount of stake, and they are all the same size. Why is this different in Solana?

Kyle Samani 17:53
Ah, so I'm not sure yet. I know on Solana there are 600 nodes participating in consensus today. So they have stake, they've staked it, and they're participating in consensus.

Hasu 18:02
Right, but they could be, and probably are, controlled by a smaller number of people.

Kyle Samani 18:07
Yeah, it's possible there are individuals running multiple nodes. What's nice is that basically all of these systems other than Eth 2.0 have native delegation; Eth 2.0 does not, but the other ones do. And the motivation for having many nodes represent a single piece of stake is reduced substantially when you have native delegation.

Hasu 18:32
Yeah.

Kyle Samani 18:34
So for most of these proof of stake systems you're in that range today, the 1,000 range plus or minus. And getting to 10,000 doesn't seem very hard to me; a 10x is pretty reasonable on a three to five year time horizon. I'd say the probability in my mind is like 85-90% that these things have more than 10,000 individual consensus validators in three to five years' time.

Hasu 19:03
And how does that work? They may be in the consensus set, right, but do they really participate in, let's say, the making of the next 100 blocks? Because I remember that in BFT-based proof of stake, for example, you have this hard upper cap of something like 100 validators who can participate, because the communication overhead between them is so large.

Kyle Samani 19:31
Well, there are a few things to unpack here. One is: assuming you have 10,000 consensus validators with a perfectly even distribution of stake, then obviously you're only producing, on average, 1 out of every 10,000 blocks.

So if your threshold is participating in the next 100 blocks, then that doesn't really work. Second, on your BFT comment: it's not quite correct. In all these BFT systems, you have to trade off liveness against safety. You have to choose between prioritizing liveness and prioritizing safety, right? If the network splits, do you stall, or do you keep making blocks and then eventually re-merge somehow? That is the fundamental question at hand.

Tendermint, which I'd say is probably considered the gold standard of proof of stake systems, prioritizes safety over liveness, so it does in fact halt. What that means is that on a block-by-block basis, every single node has to communicate with every other node so that they can finalize that block before moving on to the next one, which is the messaging overhead you just alluded to. Systems like Solana, as well as Eth 2.0, prefer liveness to safety, so that messaging overhead does not have to happen block by block; you can make more blocks into the future even if a block isn't finalized. Solana does this, and I think most of the other liveness-focused proof of stake systems do it as well: that communication overhead can afford to fall behind. The impact is just a time delay. Latency to finality may increase, maybe one second goes to three seconds or five seconds, who knows, depending on network conditions, but it doesn't slow the rate of block production. And therefore it also means that you can increase the validator set and keep block production going at the same pace. What that will increase is latency to finalization.
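To make that trade-off concrete, here is a toy sketch (not the actual consensus code of Tendermint, Solana, or Eth 2.0) of a liveness-favoring chain: one block is produced per slot, while finalization, which here requires more than 2/3 of total stake to have voted on a block, is allowed to trail behind. All numbers are invented.

```rust
// Toy model only: a liveness-favoring chain keeps producing one block per slot,
// while finalization (a block gathering more than 2/3 of total stake in votes)
// is allowed to lag. A safety-favoring system like Tendermint would instead halt
// at the first block that cannot yet be finalized.
fn main() {
    let total_stake = 100u32;
    // Hypothetical stake weight that has voted for each block by the end of the run.
    let votes = [70u32, 70, 68, 55, 20, 5];
    let mut finalized = 0usize;

    for slot in 0..votes.len() {
        println!("slot {slot}: block produced (production does not wait for finality)");
        // Finalize the oldest unfinalized blocks whose vote weight exceeds 2/3 of stake.
        while finalized <= slot && 3 * votes[finalized] > 2 * total_stake {
            println!("  block {finalized} finalized");
            finalized += 1;
        }
    }
    println!("blocks produced: {}, blocks finalized: {}", votes.len(), finalized);
}
```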

Hasu 21:27
I see. Yeah, I wasn’t sure if Solana favors safety or liveness. So that answers it for me. Thanks.

Kyle Samani 21:34
Yep.

Hasu 21:34
Yeah, you were talking about decentralization of the validator set.

Kyle Samani 21:37
Right, so most of the non-Eth 2.0 proof of stake systems have 1,000-ish validators, plus or minus a few hundred. So the question is: can that grow? There's no theoretical reason why it can't; it's just a question of whether more people want to run nodes, basically. And my intuition is that as these systems grow, they decentralize. If you look at Bitcoin and Ethereum, the two oldest ones, they have continued to decentralize over time in basically every dimension, and that's been a relatively monotonic process: in terms of political control of the governance of these systems, in terms of the number of applications built on them, the number of nodes, even just who makes the ASICs. In every way, these things have decentralized over time, because as the aggregate dollar value of the system grows, there's more and more incentive for random people to get involved in some way, shape, or form. So a growing market cap, I would argue, generally increases decentralization, and I think that trend will continue; I don't really see why it wouldn't. Even things like stake distribution, right? A lot of people who invested early in Ethereum owned a huge percentage of it. Joe Lubin obviously owned a massive percentage of Ethereum; I don't know if he still does, but he certainly did at one point in time, because ConsenSys was burning something like $100 million a month, and so he had to own a huge amount of Ether to underwrite that. Or look at […] Solana: 'oh well, Multicoin and Alameda own too much'. Okay, but we are forced sellers at some point; literally our fund has a life, we have to return the money. So even things like stake distribution have to decentralize over time. And as long as market cap is growing, I think basically all metrics of decentralization have to move in the right direction.

Hasu 23:45
Yeah, I generally agree with your comment that decentralization increases over time, and that it's a function of how many people care about the protocol; that's the biggest driver, ahead of any technical properties. There are two things you touched on in your long explanation but didn't mention explicitly. I'd say that the political governance of these systems is definitely very important: who decides the roadmap, and how difficult is it to change the consensus rules? And the second, which I think is also at the heart of the debate between these two approaches, is the culture of validation among users. It's true that you need to pass a certain threshold of malicious block producers in the block producer set in order to corrupt liveness and safety in these systems. But even if that threshold is reached, if many users validate the state transitions of these networks, then the evil those block producers can do is much more strictly limited. So my question to you would be: is this something that you're willing to sacrifice, or how much do we have to sacrifice this property of non-block-producers also validating the chain and keeping the block producers in check?

Kyle Samani 25:27
Yeah, so again, this varies a little between proof of work and proof of stake. You mean in Bitcoin, right? Bitcoin is particularly weird, because you had something like four mining pools controlling more than half the hash power. And if I recall, there was an episode, I want to say in 2015 or 2016, where the miners started producing invalid blocks as some sort of shortcut to increase their hash rate or something, and the full nodes ended up catching them. So that fundamental need for more validating nodes is fundamentally important. What's interesting is that in proof of stake that dynamic exists to a much lesser degree, because you don't have this massive capex spend where your goal is just to juice your hardware as much as possible at the expense of other people. In proof of stake, you just have probabilistic rotation based on stake weight. So those fixed-sum dynamics of 'I increase my hash power with some game at the expense of everyone else' exist to some degree in proof of stake, but to a substantially lesser degree. The other comment is: does it even matter if you assume no one is verifying other than the consensus validators? And it's not clear to me the answer is yes. If you've got 20,000 nodes in consensus, or 50,000, or even, let's say, 10,000 on the low end, can you assume enough of them are honest that it keeps the system in check? The good thing is that if anyone produces an invalid block, that's easily slashable. Censorship is just a function of node count, and liveness is just a function of stake weight up to 1/3. So if your focus is 'as a user, I know my transaction will be included', then you just need more nodes in the system to maximize that probability. If your concern is someone screwing with the system, again you just need more nodes; whether they're in consensus or not is actually not super relevant, as long as there is slashing built in and as long as some nodes can identify the invalid block and submit the proof to the rest of the nodes. And then the third is: will there in fact be some sort of liveness attack, where a large amount of history gets rewritten? That's actually the hardest to solve, the highest bar of all of these, because it's hard to force stake distribution, especially among the top validators.

Hasu 28:16
Okay, so on the one hand we said that all these systems started as completely centralized and decentralized over time. But at the same time, we're describing a counter-force at the meta level, not inside the individual projects but between them, where newer projects come online that erode the ideas of decentralization that have emerged in the community, in order to get something out of it: better user experience, better developer experience, being a better platform for DeFi. So where does it stop? Two years from now, is there an even more user-friendly, more scalable Solana that, instead of supporting 1,000 validators, just says: 'Okay, we decided that 12 validators, geographically distributed, sort of like Libra, is enough'? At what point do users say 'this far and no further' in terms of eroding decentralization?

Kyle Samani 29:29
Yes. So, a few comments around this. The first is: should a protocol prescribe a level of decentralization? Eth 2.0 does prescribe a level of decentralization: there are 64 shards, and they prescribe the hardware requirements per shard. There is ideological dogma built into the protocol, represented in the shard count as well as the hardware requirements per shard. BSC is the same thing, obviously,

Hasu 30:11
Every blockchain does that.

Kyle Samani 30:12
Opposite direction, but the same thing, right?

Hasu 30:15
Yeah.

Kyle Samani 30:16
Interestingly, Solana actually does not prescribe anything at the protocol layer at all. Solana does not prescribe the hardware requirements; Solana does not prescribe node counts or anything. The Solana protocol lets all of that fall to the market itself. Now, the protocol does happen to be optimized for GPUs, which happen to run around 4,000 concurrent cores, and I'd say the one thing Solana assumes is that you have a reasonably high-bandwidth computer, just so that the proof of history and all the messages related to it can go in and out. But beyond that, it really assumes nothing about the node count or the hardware requirements. All of those decisions, what degree of hardware you need to keep up with parallel transaction execution, and what degree of hardware you need to keep up with the proof of history and its hashing cycles (those are the two most important questions in actually answering 'how decentralized is it?'), are not prescribed in the protocol whatsoever. They are exclusively decided by the users, by the market, where that's some combination of non-staking users, people who stake to validators, and the validators themselves. And then I guess there's obviously the soft social power of what the Solana core team says they recommend, or what Sam or what Kyle has to say about those things. So there's all that soft social discourse, but the protocol itself says nothing. Now, that's a technocratic answer, but I think it's worth noting. The more realistic answer is that, in practice, the Solana Foundation has a recommended computer spec on the website, and most of the validators today do in fact adhere to those specs; if you try to join with a lesser computer, you won't keep up. So there's obviously some practical reality here. But it's worth noting that all of these dynamics around whether it decentralizes or centralizes over time are not in any way dictated by the protocol. It's always dictated by the market.

Hasu 32:31
Yeah. But there are reasons why all the other protocols have these caps on things like throughput, state growth, and bandwidth requirements. One is to protect the non-mining, non-staking users, because their private benefit from validating the state transition is quite low, right? It's just 'I want to make sure that I'm on the right chain', and that's basically it. For them the incentive to do this is quite low as long as enough other people do it, and that's why you have this verifier's dilemma on all Layer 1 and Layer 2 blockchains. The second reason, and I think this is a bigger deal in proof of work than in proof of stake, but I might be wrong, is to also protect the weakest of the miners and stakers. In proof of work we have seen this theoretical attack vector where larger miners want to mine blocks that are as large as possible, because larger blocks have longer propagation times and that hurts the smaller miners more. So if you leave the block size to the free market, the steady state is that blocks just keep growing, propagation times keep growing, and this is basically an automatic selfish-mining attack vector. Do you see any of those risks in Solana?

Kyle Samani 34:12
So I'm not worried about the selfish mining kind of thing; proof of stake naturally solves that with guaranteed timing of rotation between nodes.

Hasu 34:23
I mean, validators can miss that. They have slots, right? In these liveness-favoring proof of stake systems, there are slots where your node has, let's say, one second or half a second in Solana's case to produce a block. And if they miss that slot, then they don't get the reward, and maybe get some sort of micro-slashing or something.

Kyle Samani 34:43
So they don't penalize you for liveness failures, at least not on an individual basis like that, and the same is true in Eth as well. But yeah, conceptually, if you're not online and ready to go with the required hardware as a validator, you're going to miss your block reward.

Hasu 35:01
Can blocks get so large that the smaller nodes fail to produce a block in time, or is that totally outlandish?

Kyle Samani 35:10
No, it's absolutely a real thing. In fact, if you go to Solana Beach, which is the main Solana block explorer and network overview site, and you go to the validators tab and click on a validator, one of the key metrics you'll see is, I think it's called slot uptime or something, I forget the exact name. Basically, it means: for what percentage of the time is that validator hitting its slots, responding in time to the rest of the network with transactions from its slots? From what I recall, the median today is something like 85%.

Hasu 35:44
Wow.

Kyle Samani 35:45
And that number has been growing.

Hasu 35:48
That's quite low. Why do you think it is that low?

Kyle Samani 35:50
Yeah, it's because the slot times are about 400 or 500 milliseconds, and you rotate validators every four slots, so let's say you're rotating on average every two seconds. With communication overhead around the world, some people are just missing that. But that's okay, it doesn't matter much. It does reduce throughput, because obviously you just have empty slots, so it does impact performance. But beyond that, it doesn't really matter.

Hasu 36:24
I mean, it's a strong centralizing force in the validator set, because those who miss their slots will just lose money over time as validators and then stop validating.

Kyle Samani 36:36
Yeah, I mean, do they all collude, and that kind of thing? I'm generally pretty skeptical of large-scale collusion among lots of independent parties.

Hasu 36:46
Oh, I didn't mean collusion or anything like that, just that large block producers have a strong incentive to mine larger blocks. But actually, I might just be wrong, and this doesn't apply here at all, in the sense that you can't affect this as a validator, because the slot is the slot, right? You don't need to wait for someone else's block to build on it. Unlike…

Kyle Samani 37:12
Correct.

Hasu 37:13
…I think, unlike in proof of work.

Kyle Samani 37:15
Correct. In Solana specifically, the point of the whole proof of history system is that everyone is maintaining an independent clock, which is the repeated hash. So if someone misses their slot and you're the next guy, you just don't care that the last guy missed their slot; you can make sure you're ready to go.
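As a rough illustration of that 'clock from repeated hashing' idea (not Solana's actual implementation, which uses SHA-256 and many further optimizations), here is a minimal sketch:

```rust
// Minimal sketch of the proof-of-history idea: a sequential hash chain acts as a
// verifiable clock, and mixing an event into the chain shows it happened before
// every later tick. Solana uses SHA-256; std's DefaultHasher is a dependency-free
// stand-in here and is NOT cryptographically secure.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn next_tick(prev: u64, event: Option<&str>) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    if let Some(data) = event {
        data.hash(&mut h); // record the event inside the clock
    }
    h.finish()
}

fn main() {
    let mut state = 0u64;
    for i in 0..5 {
        // Each tick depends on the previous output, so the chain cannot be computed
        // in parallel: the tick count itself is evidence that time has passed.
        let event = if i == 2 { Some("tx: A pays B") } else { None };
        state = next_tick(state, event);
        println!("tick {i}: {state:016x}");
    }
}
```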

Hasu 37:33
Yeah, okay. But nonetheless, the fact that validators miss 15% of their slots on a consistent basis, I think that shows there is a centralizing force there in the block producer set.

Kyle Samani 37:50
Yes, potentially; directionally that's obviously true. But I also think the countervailing force is that system optimization still has a long way to go. There are a lot of known things the Solana team wants to do to improve redundancy in the system and make that better. I would suspect that general consistency of performance, call it, will improve; over the next 12 to 24 months you'll probably see a 2x or 3x reduction in missed slots. With all new systems like this, it just takes a long time to optimize them, and the Solana team is very open about that. In fact, they still call this a beta for that reason: they know there are so many optimizations they haven't done yet that they're unwilling to take the beta tag off of it.

Su Zhu 38:45
Backing up to your point about supply decentralization, or supply distributing over time: I think people underestimate how quickly supply can distribute if the protocol is actually being used and is useful for people. Think about Ethereum in 2016; supply was incredibly centralized. It just took one year, ICOs, a lot of activity, and suddenly everyone in the world knew what Ethereum was. Today, relatively few people talk about Ethereum's supply being controlled by only a few people. So I do think that utility solves all, actually, when it comes to supply decentralization, because if people want to get their hands on it and it's useful to them, then it will just happen, even if it starts out less decentralized. Also, if you look at the way Solana, and also Polkadot and Kusama, did their listings, and then the price history, and how they allowed normal individuals to access those assets relatively early on, there's clearly a relatively broad holder set, broader than what people would have assumed when these projects were in their seed phase. I remember during the seed phase of a lot of these, Kyle came to us and asked us to join them in one of the rounds, and we ended up passing because we didn't look closely enough. Later on we realized we had made a mistake, and we bought a lot of it OTC and did a lot more research into the thesis. But back then, the main criticism of Solana was that only a few people would own a lot of it, and that kind of stuff. And I truly think this is one of the biggest red herrings in investing, because at the end of the day it's about technology and community. If they have a way to create a community, and if they have legitimate technology, then distribution is not a problem, right? People forget that all these things started relatively centralized. Bitcoin, when Satoshi mined the first block, he had it all. Everything starts from that. And from that point of view, these newer proof of stake chains are ultimately a little better engineered, in a sense, because they think very critically about how they want to give out supply, how they want to bring people in, and how they want to reward early adopters and people coming in. Like with Mina and the CoinList sale, I think tens of thousands of people were able to buy it on CoinList. There's an advantage of modernity, in a way, with some of the newer chains that have launched, because they've been able to see the history of a lot of other chains, and they can say: 'Despite starting relatively centralized, because we need actual cash to fund the technology, how do we decentralize supply over time while making sure that there's still a lot of activity going on?'. So I just think that's a complete red herring in investing in crypto.

Kyle Samani 42:07
The other point I would make, building on that: if you look at the pace of decentralization of Bitcoin versus Ethereum, Ethereum obviously decentralized a lot faster, and it's because no one was paying attention to Bitcoin in 2009. No one knew what any of these things were; there was a lot of education that had to happen. If you then look at where Solana is today versus where Ethereum was: Solana is about one year old, about 13 or 14 months. If you look at Ethereum 13 or 14 months after it launched in July 2015, they had just gone through the DAO hard fork, and there was nothing on the chain other than the DAO and the hard fork of the DAO. So it's obvious that the pace at which the ecosystem is growing is just a lot faster now than it was then. I don't mean that as a criticism of Ethereum; there was just no one paying attention to crypto back then, and a whole bunch of things have changed. But time is compressing: the pace at which these things can decentralize is a lot faster now than it was then.

If you look at Solana today, this is crazy to think about. Commissioner Hinman from the SEC gave a speech in June of 2018 saying Ethereum, or Eth, is not a security. If you look at the state of the Ethereum network at that time: Uniswap did not exist. I think Compound had not yet launched; they had raised money but hadn't launched a product. Maker did exist, 0x did exist, I think Kyber had maybe just launched v1 or was about to, EtherDelta was around, and that was about it. There were a few hundred million dollars in stablecoins on the system, not that much even in stablecoins. And if you look at Solana today, there's a billion in stablecoins, there's 1.6 billion in TVL, and Serum is doing nine figures in trading volume a day. It's kind of crazy to think about the non-linearity of how fast these things grow. So that's the backwards-facing comment on the red herring you alluded to.

If you project that forward, the non-linearity gets even more interesting. Obviously a lot of people around the world are paying attention to this permissionless DeFi crypto thing right now, and they're all trying to figure out what it means for their business. That's true for finance companies as well as banks. But you have to imagine every tech company and every social media company in the world right now is thinking about this stuff, asking 'what's here?'. They're all looking at BitClout, they're looking at social tokens. There are obviously a lot of other cool ideas here; it's a very interesting design space. But none of them have done anything yet on a public chain. Facebook tried to go their own way with Libra/Diem, and it doesn't appear that's working; I'm not sure why, but for whatever reason they have problems. My point is that no one has actually done anything on a public chain yet. Most interestingly, the one company that got close was Reddit. They got really excited about doing a points system thing and ran this big public bake-off last summer.

And their conclusion was: none of these things are ready, we're going to do something permissioned and private ourselves. That's the biggest, most empirical demonstration of how 'not ready' these systems are for scaling to large numbers of users. And that was about nine months ago. When I think about what can happen over the next 9 to 24 months, I know all these companies are looking at doing crypto things, and the number one thing they're all worried about is scale. They don't want things to break, they don't want bad user experience, they want to make sure it's going to work, they want to make sure the fees are low, and all that stuff. And I'd actually argue that if a real company with, let's say, 50 million plus daily users says 'hey, we're going to move our users onto a blockchain' for some core operation that's native to the application, something you'd expect those 50 million people to use multiple times per day, then the first time that happens, that blockchain becomes the most likely to become the largest blockchain in the world. Because once that happens, most other companies in the world are going to say, 'okay, let's watch and see what happens to these guys'. There's a lot of technical risk involved, a lot of product risk, a lot of operational execution risk, in so many ways, to pull this off. So everyone is going to sit back and say, 'okay, let's see if these guys fall on their face or not'. And assuming it works, you're going to get a convergence of perspective among global engineering leadership around the world that this is the least risky way to scale these things to 50, 100, 200 million users. That perception is going to change very fast. So I don't generally hold dogma here; back to Su's point, perceptions can change, the time it takes for them to change can be extraordinarily small, and the momentum can shift. I don't think any of these core debates are over, because these types of announcements are going to come. I don't think they're imminent in the next six months; that's probably a little premature. But I'm optimistic that within 18 months you're going to see at least one major tech company do something that's fundamental to its business that incorporates a public chain.

Hasu 48:18
I don't have a good track record of predicting these things, but I would say I would be very surprised if that's true. Maybe I just lack the creativity to see what using a public blockchain could do for the other products that these businesses offer, and why they wouldn't rather use an experience that they control, like just using a regular database. Do you have any example in mind? Even when Reddit made that announcement, I thought it was stupid and didn't make any sense.

Kyle Samani 48:52
So the right frame here, I think, is not one of control; it's actually the opposite, it's one of liability. Vitalik wrote a really good blog post about this, I don't know, six, nine, twelve months ago, I forget; it was phenomenal. He basically said that, because of GDPR and because of all these hacks, the conversation inside of companies is increasingly: is data an asset or a liability? Look, if you're Google or Facebook, you have a big machine learning business, so okay, you have some data requirements. But for most other businesses that are not doing large-scale ML stuff, the question stands. You keep seeing these hacks, you keep seeing all this stuff happening, like Cambridge Analytica; these things constantly keep happening. And I think with a little bit more regulatory push from various jurisdictions, it's not hard to see a world in which a lot of companies start to view data as a liability instead of data as an asset.

Hasu 49:57
And how do blockchains solve that?

Kyle Samani 50:00
Yeah, so what blockchains provide is the substrate such that you can design applications where users own the data, they own the state, whether that state represents money in the form of a social token or something else, or whether it represents your Telegram messages or whatever. Now, is Telegram going to move over to some decentralized system soon? No, because the scale of messaging is too large; that's the highest-order engineering problem, since trillions of messages are sent per day. But there are a lot of intermediate things between here and there that are much lower volume, say 100 million messages per day. Can you get that over in a decentralized capacity? I think Vitalik is probably right that the longer your horizon, the more that data is a liability, not an asset. So what do companies do? I think the obvious design spaces are financial-inclusion types of things, where the companies can say, look, we're not a money transmitter, we're not liable. And then anything related to social tokens and creator-economy stuff; all of that feels very ripe for this kind of thing. I do think the social media and socialization companies are probably the most interesting for this design space. Relatedly, look at Reddit. I'm not a Reddit user, I've always kind of thought Reddit was stupid, it's messy and hard to filter through. But there's some interesting design space here around karma points, or credibility points, or whatever you want to call them, on a per-subreddit, per-forum basis. People want to embed value into that, but for Reddit to imbue value into it themselves makes them a money transmitter and a lot of other things they don't want to deal with. So this is just a clever regulatory arbitrage for Reddit to imbue value into their systems without becoming a money transmitter. I think all these vectors are very interesting for big companies starting to engage with this stuff. The other comment I would make generally is that most new technologies that are orthogonal to a lot of existing things tend to seep into the world in ways that are not very predictable, the Internet being the best example of this. There's a good Marc Andreessen quote about this; he says, 'I now assume every entrepreneur that comes into the investment committee to pitch us is right', meaning whatever their core thesis is, is correct, and the only question is timing. That's a somewhat extremist view, and he's obviously being a little hyperbolic, but directionally there's a nugget of truth in there. Doing this for a few years now has made me a lot more open to that general line of reasoning. I just assume all these weird things people pitch me, even if I think they're dumb, are right, and the question is just a function of timing.

Su Zhu 53:14
I think the interesting thing that happened yesterday, to that point, is GameStop announcing that they're going to do this NFT thing on Ethereum. Remember, when GameStop first came into the mainstream, there was a lot of talk that GameStop should put Bitcoin on their balance sheet and that would be a great way to play crypto. And then they went and did their research, and they decided that what they actually want to do is put NFTs on Ethereum. I think there are a few conclusions we can draw from that. One is that for businesses that are Gen Z native, internet native, millennial native, there is a huge growing interest in the idea of a social collectible, Internet-of-value thesis. When you explain this stuff to these types of owners of these businesses, it truly excites them. And the idea that they should control a centralized database of this stuff: if you're working with that company, you can't tell them with a straight face that they should control that database themselves, right? Because what advantage do they get? If GameStop went and said, 'You know what, I'm going to make a database of collectibles, and you can now collect these things in my database', if you said that in the boardroom of GameStop, you'd be laughed at. It just sounds insane. So I think we're getting to the point now where, in those discussions, the person advocating a centralized database will soon seem like the craziest guy in the room, and then the question will only be: which chain do we deploy to? Do we use a newer chain? What features do we want, what user experience do we want? The skepticism about companies using a decentralized database assumes several things about what companies want that aren't really true. It assumes that they aren't able to do the research and figure out what could make their business work; they've now already seen the growth of DeFi, they've seen Top Shot, they've seen NFTs. Once that imagination is sparked, and with the reflexivity of this entire adoption phase, we're entering the stage now where, assuming crypto is able to serve as this credibly neutral settlement layer, internet-native companies find it incredibly interesting and want to deploy their biggest ideas as soon as possible.

Kyle Samani 56:05
Yeah, I agree. It's hard to remember now, but a lot of companies ran a lot of experiments with the Internet back in the day, and it took a lot of people a long time to figure out what to do. But a lot of the core ideas were there from day one: things like forums, things like chat, even things like CRMs and databases. All those core ideas have been present since the early 90s. I look today and the obvious ones are DeFi and digital collectibles; those are the two really big obvious ones, and I think there will be more that iterate from there. But the number of places you can insert those core primitives is measured in billions of daily active users. So I'm optimistic this stuff will happen, and the companies that already have distribution will in fact be the primary distribution for this stuff. I don't think this is going to be like the Internet, where basically a whole bunch of new companies came in and disrupted the old guys because the old guys didn't figure it out. I'm a lot more optimistic that internet-native companies will see the pattern, see the trends going on. Not all of them will adapt correctly; for sure a number of them will fail. But for the leading social media companies specifically, any company that has deep social roots, I think it's very probable that they figure out how to embed these new primitives into their product and service in a compelling way. They're not asleep at the wheel. In fact, most of them are founder-led, and those companies are likely to rejigger their products.

Hasu 57:49
Yeah, personally I think the vision of decentralized social media is very compelling, in the sense that the state is public and uncensorable, and users can choose between different interfaces that may have different levels of moderation, and those interfaces can then be regulated. I agree that it's definitely a question of timing. Within the Web 3.0 vision, this part seems the most compelling to me, but I don't know how many years out it is. And about what you said earlier, I think this is maybe one of the most interesting aspects of DeFi: removing liability from financial service providers. There are two reasons why regulation exists, right? One is for the incumbents to keep competitors out, but the other is to protect consumers and to protect the economy itself from moral hazard and contagion effects. And I feel like at least the second part you can really get around by using smart contracts, this general concept of a company tying its own hands. If you can do that, then all of a sudden a lot of the need for cumbersome regulation disappears.

Kyle Samani 59:18
The ‘can’t be evil’, right? Not ‘Don’t be evil’.

Hasu 59:20
Yes, exactly. Okay, so we talked about what it takes for developers to adopt this, and user experience, transaction costs, and some minimum level of decentralization are definitely big parts of that. But another very potent network effect we have seen is the execution environment and programming language for developers, and all the tooling around that. I would say that the EVM and Solidity are huge leaders right now in that area, and Solana uses, I think, a Rust-based execution environment. Can you talk a bit about that, and about why you think the EVM's advantage is not already insurmountable?

Kyle Samani 1:00:13
Yeah, Solana has a custom runtime called Sealevel. Programs go through LLVM and compile to an instruction set called eBPF, the extended Berkeley Packet Filter. That stuff is well below my level of understanding of how circuits switch, how memory is stored, and how processors execute things, so there's a level of technical depth there that I'm not qualified to speak about. What I do know is that on Solana, the Rust goes through LLVM and compiles down to native code, so you're getting native execution, as opposed to some intermediate layer. The EVM acts as a virtual machine – it's in the name. You'll note that Sealevel is not called a VM; it's called a runtime. So with the EVM you have this abstraction of a virtual machine – and again, the technical depth of virtual machines is beyond me, and probably beyond the scope of this podcast.

Hasu 1:01:19
Yeah.

Kyle Samani 1:01:20
But suffice to say, it's generally understood that a virtual machine is going to have a bottleneck in terms of processing efficiency. So one of the core insights the Solana team had was this: one of the really interesting things to think about in these networks is that you have a fixed amount of computational space. That space is partly physical bandwidth, but it's also the physical processors and graphics cards. And you have, theoretically, billions of people trying to share that fixed amount of resource space. Given that, you need to squeeze every ounce of performance out of the system, to make sure you can run at the limits of what the hardware can actually do.

So one of the core things the Solana team recognized early on was that the EVM was not in any way optimized to take advantage of hardware – both in terms of efficiency on a per-instruction basis and, even more importantly, in terms of parallelism. This is probably the most important difference between the EVM and Sealevel: Solana natively supports parallel transaction execution. If you think about DeFi, or even just payment flows – if I pay Su, and then you, Hasu, pay someone else, there are no dependencies between those two things, so they should execute in parallel. The problem in a blockchain is that you have this completely open state: anyone can submit any transaction at any time, and any transaction can theoretically modify any part of the state – you don't know in advance what it's going to modify. So if you enable concurrent transaction execution, the thing you have to make sure of is that you don't have two transactions reading from or writing to the same piece of memory at the same time.
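
To make that concrete, here is a rough sketch of how a Solana instruction declares up front every account it may touch, using the solana_sdk types. The program ID and account addresses are placeholders generated for illustration, and the conflict check at the end is a simplified model of the rule described above, not Solana's actual scheduler code.

```rust
use solana_sdk::{
    instruction::{AccountMeta, Instruction},
    pubkey::Pubkey,
};

fn main() {
    // Placeholder addresses, generated only for illustration.
    let token_program = Pubkey::new_unique();
    let (alice, bob) = (Pubkey::new_unique(), Pubkey::new_unique());
    let (carol, dave) = (Pubkey::new_unique(), Pubkey::new_unique());

    // "Alice pays Bob": declares Alice's and Bob's accounts as writable.
    let ix_a = Instruction {
        program_id: token_program,
        accounts: vec![
            AccountMeta::new(alice, true), // writable, signer
            AccountMeta::new(bob, false),  // writable, not a signer
        ],
        data: vec![],
    };

    // "Carol pays Dave": a completely disjoint set of accounts.
    let ix_b = Instruction {
        program_id: token_program,
        accounts: vec![
            AccountMeta::new(carol, true),
            AccountMeta::new(dave, false),
        ],
        data: vec![],
    };

    // Simplified conflict rule: two transactions conflict if either one
    // writes an account that the other touches at all. Here neither does,
    // so a scheduler is free to run them at the same time.
    let touches = |ix: &Instruction, key: &Pubkey| ix.accounts.iter().any(|m| m.pubkey == *key);
    let conflict = ix_a.accounts.iter().any(|m| m.is_writable && touches(&ix_b, &m.pubkey))
        || ix_b.accounts.iter().any(|m| m.is_writable && touches(&ix_a, &m.pubkey));
    println!("conflict between the two payments: {conflict}"); // prints: false
}
```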

Hasu 1:03:11
Yeah, like two trades in the same Uniswap pool, or whatever.

Kyle Samani 1:03:15
Exactly – there are plenty of examples you could come up with. On a technical basis, the constraint is specifically about address space, memory itself – that's really the core technical constraint. The EVM solves this problem by not solving it: it just forces everything to run serially. That is a solution, I suppose, but obviously you forfeit parallelism. Interestingly, of all the other major chains, the only one that even attempts to solve this problem within the context of a single shard is Solana.

The way they solve it is that every transaction has a transaction header, and the header specifies all the parts of the state that the transaction can modify. Not that it will modify – there may be some branching if-logic in the transaction – but that it could modify, across all potential permutations of the if-statements in the transaction. So you basically lock all of those pieces of state and say: I'm getting monopolistic rights over these parts of the state for the course of this block. By doing that, the system knows what every transaction is going to touch, so it can parallelize all transactions that don't touch overlapping state.

And the benefit is that you can actually use the hardware. Modern graphics cards have about 4,000 cores, so you get roughly 4,000 lanes of parallelism; in a year or so Nvidia is going to release cards with 8,000 cores, and you just double the throughput. When I think about the nature of these blockchain systems – if you assume there are going to be social media applications from Snapchat and from BitClout, and there's going to be DeFi stuff, and people are going to be trading tokenized securities – these are all largely different things, and by definition these different categories of applications are non-overlapping in what they do. So it's only natural that you should be able to parallelize these transactions. And it seems relatively clear to me that if you assume you've got 100 million or a billion users doing all kinds of social media things and DeFi things, then on a block-by-block basis – where a block here, let's say for simplicity, is one second – the percentage of those transactions that are actually demanding overlapping state is, my intuition says, probably under 1%, and maybe under 0.1%. Maybe it's 2 or 3%. But I'm pretty sure it's not over 10% of transactions.
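
As a toy model of that scheduling idea – not Sealevel's actual implementation – here is a short Rust sketch in which each transaction declares the state it may write, and a greedy scheduler packs transactions with non-overlapping write sets into batches that could each run across cores in parallel. All names and the batching strategy are invented for illustration.

```rust
use std::collections::HashSet;

// A transaction that declares, up front, every piece of state it may write.
struct Tx {
    id: u32,
    writes: HashSet<&'static str>,
}

/// Greedily pack transactions into batches whose write sets don't overlap;
/// every batch could then be executed in parallel across cores.
fn schedule(txs: Vec<Tx>) -> Vec<Vec<Tx>> {
    // Each batch carries the set of state keys it has "locked".
    let mut batches: Vec<(HashSet<&'static str>, Vec<Tx>)> = Vec::new();
    for tx in txs {
        // Find the first batch whose locks don't collide with this transaction.
        let slot = batches
            .iter()
            .position(|(locked, _)| locked.is_disjoint(&tx.writes));
        match slot {
            Some(i) => {
                let (locked, batch) = &mut batches[i];
                locked.extend(tx.writes.iter().copied());
                batch.push(tx);
            }
            None => batches.push((tx.writes.iter().copied().collect(), vec![tx])),
        }
    }
    batches.into_iter().map(|(_, batch)| batch).collect()
}

fn main() {
    let txs = vec![
        Tx { id: 1, writes: ["alice", "bob"].into_iter().collect() },
        Tx { id: 2, writes: ["carol", "dave"].into_iter().collect() },
        Tx { id: 3, writes: ["bob", "erin"].into_iter().collect() }, // collides with tx 1
    ];
    for (i, batch) in schedule(txs).iter().enumerate() {
        let ids: Vec<u32> = batch.iter().map(|t| t.id).collect();
        // Batch 0 holds transactions 1 and 2; transaction 3 waits for batch 1.
        println!("parallel batch {i}: transactions {ids:?}");
    }
}
```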

Hasu 1:05:55
In Ethereum, it’s probably a lot more.

Kyle Samani 1:05:59
What I'm saying is that as you increase the array of applications, that percentage has to drop. So my intuition is it's probably on the order of 1% – maybe it's two or three – and it's hard for me to believe it's higher than that. And if you don't parallelize, you're just forfeiting massive amounts of throughput per unit of time. I think that's super important for these things, and it's one of the fundamental differences between Solana and Ethereum.

The downside, of course, is that you lose backwards compatibility with the EVM, which obviously has a fair bit of infrastructure built up around it. I've always felt that the EVM hadn't achieved escape velocity, for all the reasons we just talked about with big companies and all these other things. So it's obviously something you have to overcome, and it was not to be taken for granted that it would be overcome. But at this point, if you look at the state of the Solana ecosystem and at the things Ethereum has that Solana doesn't have today – things like Dune, things like The Graph, which was announced, a few more DeFi primitives – it's pretty hard not to imagine all of those getting built out in the next three or so months, maybe six. You'd then basically be at feature parity for what I'll call all the middleware and DeFi-primitive stuff. If you assume that's the case in three to six months, then the question is: what advantage is the EVM actually providing now? And that starts to become very, very insignificant pretty quickly.

Hasu 1:07:34
Right, okay. I mean, I agree about those tools, but in general it's not possible to just port something over. Not that you would want to have Uniswap on Solana, because of course Solana would support more efficient orderbook-based exchanges. But if there's an application in DeFi that users like, what are the steps to porting it over? I assume it requires a full rewrite, yeah?

Kyle Samani 1:08:00
Yeah, a full rewrite of the smart contracts, for sure. One thing I observed – and I was wrong about this – was back in August, September of last year, Serum had been announced and there was a little bit of volume starting to happen, and the Solana team reached out to all the major DeFi protocols on Ethereum and said, 'Hey, are you guys interested in rebuilding on Solana?' They all said, 'Oh, this is interesting and cool', and then none of them did anything. It's been nine months now, and you can see that none of them have relaunched on Solana. So the question is, why? That they didn't think it was a priority is kind of implicit in that. But if you dig beyond that, one thing I observed, having interfaced with a fair number of Solidity-based EVM engineering organizations, is that the Ethereum developer teams have very little, if any, expertise building in Rust – writing Rust, deploying Rust. Not that Rust is a weird niche language; Rust is one of the most popular languages in the world now. These teams just don't have that experience in-house. And as an engineering leader, if you don't understand this other technology base and you're now tasked to go build some first-class application on it, that's a very hard thing to do – and then you also have to find and recruit the team to do it. So there's a lot of organizational momentum in existing Ethereum-based DeFi teams that's not easy to overcome.

So the Solana team, I think, realized late last year that those efforts to get people to port over were failing – I think the failure rate was 100% – and realized that, okay, then we have to build everything new from the ground up with new teams. That feels like a risky strategy, and it is obviously riskier, but I don't think the amount of risk is actually that high on a relative basis. And if you look today, you've got multiple teams building money markets, things like Compound and Aave – Jet and Oxygen, I think, are the two that I'm aware of; there may be others.
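
For context on what that rewrite target looks like, here is a minimal sketch of an on-chain Solana program in Rust using the solana_program crate. It is a do-nothing program, not any particular DeFi protocol: porting from Solidity means re-expressing the contract logic as a function with this shape, operating on the accounts the transaction explicitly passes in.

```rust
use solana_program::{
    account_info::AccountInfo,
    entrypoint,
    entrypoint::ProgramResult,
    msg,
    pubkey::Pubkey,
};

// Register `process_instruction` as the program's single entrypoint.
entrypoint!(process_instruction);

pub fn process_instruction(
    program_id: &Pubkey,      // this program's on-chain address
    accounts: &[AccountInfo], // every account the instruction may touch
    instruction_data: &[u8],  // opaque bytes; programs define their own encoding
) -> ProgramResult {
    // Just log and succeed; real programs would decode the data and mutate accounts.
    msg!(
        "program {} called with {} accounts and {} bytes of data",
        program_id,
        accounts.len(),
        instruction_data.len()
    );
    Ok(())
}
```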

Hasu 1:10:25
What are they called?

Kyle Samani 1:10:26
Jet and Oxygen are the money markets. You've got a team building margin trading – Mango. You've got multiple teams working on perpetual contracts and quarterly futures, and multiple teams working on options. So those are all the most important core primitives, and you have teams already working on all of them. I think most of those market segments will be reasonably competitive – there'll probably be two or three major players in most of them, which is healthy, right? You don't want there to be a single protocol for each core primitive; it's healthy for the market to have two or three. And most of the teams I just named are venture-backable. I'm not saying that we are necessarily investors in them, but they are all venture-backable teams.

Hasu 1:11:12
Can you say something about Serum?

Kyle Samani 1:11:14
Yeah. Sam was running FTX, and it seems like DeFi clicked for them around May of last year. They said, 'Aha, this is important', and they started to do stuff on Ethereum and hit this whole set of constraints. They were like, we just can't do this here; we're not going to be able to build the product we want to build. So they started looking around. I remember I had a call with Sam and Anatoly – I want to say it was around July 7th of last year, something like that. The call started at 10 o'clock for me in Texas, eight o'clock for Anatoly, 11am for Sam in Hong Kong. It was set for thirty minutes and went for two and a half hours. We dove really deep into what Sam wanted to build. I remember we had a really existential debate about the nature of financial markets – information theory, the speed at which things propagate, literally the speed of light – but then also, more importantly, what is the timescale that matters for prices to update? Is it okay if the price updates aren't measured in nanoseconds, but in milliseconds? And what are the inefficiencies that creates? Very existential questions about the nature of these things. We reasoned through all this stuff, and it was obvious the wheels were turning in Sam's head. He was like, 'Yes, a 400-millisecond to one-second timescale is sufficiently low that you can make this thing work'. Fifteen seconds is too slow. It's unclear where exactly the threshold is between one second and 15 seconds, but somewhere in there is the threshold to make this stuff work. And he understood quickly that you need parallelism for this to work, because he was like, yeah, I'm going to have a bunch of Serum markets, and obviously you need these things to execute in parallel, not serially. So he realized all that stuff pretty quickly over the phone. I remember I went to bed, and the next day I woke up to a text from him: 'Dude, someone is spamming the Solana network'. I said, I bet you it's the FTX engineers – and yeah, they had started spamming the network overnight to test it. And Sam got underway building Serum from there.

Unknown Speaker 1:13:29
Is Serum an application or a suite of applications?

Kyle Samani 1:13:32
Serum is a protocol. Serum actually does not have a frontend today – at least not one that's officially endorsed by Sam and the Serum team. If you go to projectserum.com, there's a list of third-party frontends. I think probably the most interesting thing about Serum is that it is the opposite of FTX in so many ways. FTX is obviously a full-stack experience: the UI is glorious, customer support, fiat on-ramps, all those things. FTX has been widely recognized for their product execution over the last two years, and they control the full stack. It's been very interesting to watch the Serum team do the opposite with Serum, which is to say: hey guys, here's a protocol, a protocol that enables you to have orderbooks and markets on-chain and to cross the spread and complete a transaction. And it only does that for spot today, although you can theoretically use the orderbook infrastructure for any asset, whether it's leverage or derivatives or something else. But here's this infrastructure – please go build other stuff around it: build frontends, build margin trading, build perpetual contracts, quarterly futures, all these other things.

At first I was kind of confused watching them do this. FTX is a very full-stack, tightly controlled thing, and Serum is not, and I had just assumed they were going to build Serum the same way – that was my default assumption. But if you look at the communications from the Serum Telegram, the Serum Medium, the Serum Twitter, you'll see this continuous, repeated focus on Serum as its own platform for third parties, where there is no official frontend, and they're really focused on enabling other developers to build DeFi primitives. That's not particularly novel thinking in crypto DeFi-land, obviously. But I think it's very interesting that you've got a single entrepreneur who is known for controlling the full-stack experience on the centralized side, also having the wherewithal to say: we're going to engage DeFi in a DeFi-native way and not try to control the whole thing. It's been super interesting to see that dichotomy play out.
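
To make "orderbooks on-chain and crossing the spread" a bit more concrete, here is a toy central-limit orderbook sketch in Rust. This is not Serum's actual data layout or matching engine – Serum keeps its book in dedicated on-chain accounts and matches inside an on-chain program – but the basic price-time matching idea is the same. All names and numbers here are invented.

```rust
#[derive(Clone)]
struct Order {
    owner: &'static str,
    price: u64, // quote lots per base lot
    size: u64,  // base lots remaining
}

#[derive(Default)]
struct Book {
    asks: Vec<Order>, // resting sell orders, sorted best (lowest) price first
}

impl Book {
    /// A taker buy: walk the asks from the best price up, filling until the
    /// requested size is exhausted or the remaining asks sit above the limit.
    fn buy(&mut self, limit_price: u64, mut size: u64) -> Vec<(Order, u64)> {
        let mut fills = Vec::new();
        while size > 0 && !self.asks.is_empty() && self.asks[0].price <= limit_price {
            let filled = size.min(self.asks[0].size);
            fills.push((self.asks[0].clone(), filled));
            self.asks[0].size -= filled;
            size -= filled;
            if self.asks[0].size == 0 {
                self.asks.remove(0); // the resting order is fully consumed
            }
        }
        fills
    }
}

fn main() {
    let mut book = Book::default();
    book.asks = vec![
        Order { owner: "maker-1", price: 100, size: 5 },
        Order { owner: "maker-2", price: 101, size: 10 },
    ];
    // A taker willing to pay up to 100 crosses the spread and fills 5 lots;
    // the rest of the order doesn't trade because 101 is above the limit.
    for (maker, qty) in book.buy(100, 8) {
        println!("filled {qty} lots against {} at price {}", maker.owner, maker.price);
    }
}
```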

Hasu 1:16:01
Yeah, thank you, that is indeed very interesting. We kind of moved away from the original question – we barely talked about the synchronous versus asynchronous experience of DeFi – and instead talked about how much decentralization is enough, how to measure decentralization, and what you get in return. That was also very interesting, so I'd say: thanks, guys, for the discussion.

Kyle Samani 1:16:28
Hey Hasu, Su, thank you for having me on. Pleasure to be on – I've been listening for a long time, and yeah, it's cool to dive into this stuff. I love the really deep, first-principles approach, peeling back the onion one layer at a time.

Su Zhu 1:16:40
Yeah. Really awesome. Thanks for coming on.

Kyle Samani 1:16:42
Thanks Su, thanks Hasu.

