Web3: Shifting Our Focus From the 1st to 2nd Floor

Tea Project Blog
7 min read · Sep 16, 2022


This post was written by the TEA Project's founder Kevin Zhang. Kevin touches on why the race to be the next layer-1 is already over, and why current layer-2 approaches aren't sustainable paths to gaining notable network effects.

Since the birth of BTC more than ten years ago, becoming the dominant public chain has been the focus of crypto competitors the world over. From the early altcoin forks to the recent cross-chain developments, public chain technology has undergone tremendous development.

But all is not well in the land of altcoins. The public is absorbed in the Ethereum merge while almost all other projects lack the spotlight to attract attention. My explanation for this phenomenon is that the era of public-chain competition has passed. Now that the dust has settled, we can examine more closely what's truly going on.

A Lesson From the Past

This situation reminds me of the PC OS battles of the 1990s and the mobile OS battles of the 2010s. Take Microsoft, the winner of the first war and the loser of the second. When a market already has one or two dominant competitors, as long as there’s no disruptive improvement from the user’s perspective, there’s no chance of a new winner no matter how powerful the new entrant is. Because of this reality, everyone’s attention then shifts to new areas of competition.

I know that every project trying to challenge Ethereum has unique advantages and competitive points, and they rightly cite many shortcomings of Ethereum, the currently predominant smart-contract blockchain. But as long as every challenger is essentially either a competing blockchain or just another layer on the main chain, it'll be very difficult to pull developers and users away from Ethereum now that the dust has settled. The win rate is very low for new crypto projects aiming to do what Ethereum already does, only better.

So if a crypto project wants to increase its market share, it'll have to think outside the box. What problem is blockchain trying to solve? Are there better solutions than using the existing blockchains? Only by asking these questions can a project become a disruptive competitor, and only then can it win.

Some example cases that exemplify this principle:

  • It’s not Unicom SMS that beats China Mobile SMS, but WeChat.
  • It’s not the big supermarket that beats the small store on the street corner, but the online store that takes both their market share.
  • It’s not Ford or GM that beats Toyota or Honda, but Tesla which doesn’t burn gas at all.

This type of disruption is also referred to as a dimensionality reduction strike.

So for a main chain — the technology stack generally called layer-1 — can a layer-2 built on it eventually rival the size and network effect of the layer-1 underneath?

Generally speaking, the answer is no. Although there are already a large number of layer-2 extensions that improve throughput, most of them execute complicated calculations off-chain and then verify and upload the results to the main chain. In other words, these layer-2s can only be regarded as accelerators that cannot be separated from the layer-1 they run on top of. They always rely on the existence of a primary main chain. Obviously, there's no possibility of dimensionality reduction when you'd be striking at your own foundation.

Therefore, any new competitor keen on gaining market and mindshare must be a new type of layer-2. Although its bottom layer would still rely on certain main chains for layer-1 functionality, it must be able to independently provide a complete operating environment for decentralized applications (dApps). These transactions wouldn't roll up to the layer-1; instead they'd clear separately, with no result verification required. Such a layer-2 wouldn't be a blockchain, but it would better solve the problem that blockchain is trying to solve. This is the premise of dimensionality reduction.

What is the Fundamental Problem Blockchain is Trying to Solve?

A thousand people have a thousand different answers to the question of what blockchain is trying to solve. My answer is this: decentralized trust. On the basis of decentralized trust, various solutions can evolve for different fields. But the most important point is that blockchain solved the problem of decentralized trust for the first time in history.

Of course, blockchain solves this problem at a great price. No matter which consensus algorithm is used, it must consume or pledge scarce resources to raise the cost of any actor in the system acting maliciously. The side effect is severe inefficiency. If we have a way to bypass the traditional blockchain consensus algorithm and still achieve decentralized trust, then we can achieve dimensionality reduction.

What is Trust?

So let’s focus on what trust is. If you have read the science fiction novel “The Three-Body Problem”, you might still be confused about that distant Trisolaran galaxy and its inhabitants, the Trisolarans. Especially the scene of “Romance of the Three Kingdoms” that’s confounded many people made me think deeply. They’re a transparent race that cannot lie, so there’s no worry of trust in their world. This basis in deep truth is why they can far exceed the people of earth with their rapid development of civilization. If we use a certain technology to build a network on the earth in which nodes cannot deceive and can only think transparently, then trust will be achieved. And because it’s a peer-to-peer network, it’s inherently decentralized trust, which addresses the fundamental problem that blockchain is trying to solve.

Human-based vs Silicon-based Nodes

First of all, using humans as nodes isn't workable from the get-go: absent restrictions imposed by a consensus agreement, people are too fond of lying. If the carbon-based civilization of humans is excluded, what remains is so-called silicon-based civilization, i.e. a network of semiconductors and computer chips.

But can we say that semiconductors don't lie? At present — with AI not yet ruling human beings and quantum computing not yet practical — semiconductors will not lie. If a semiconductor appears to lie, it must be the software operating it that lies intentionally. Software is written by human beings, so it's still humans behind the scenes who are responsible.

So let’s use the following preliminary definition: because semiconductors don’t lie, then the code running inside of it is correct. And if the input data is correct (i.e. hasn’t been altered post-issuance), then a correct result will be obtained. Conversely, if the semiconductor is real (not faked), as long as the correctness of the input code and data can be verified, then we can unconditionally trust the calculation results. You see, verifying the correctness of a computing environment is usually far easier than verifying the calculation result.

If You Trust the Environment, You Can Trust the Result

To make this concept clearer, let me give an example. Let's calculate the factorial of an integer, say 60. Very few people will do the math by hand; it'll be done by a calculator or some software running on a semiconductor chip. And how many people actually hand-calculate afterward to verify whether the calculator was wrong? Very few. That's because you believe the following things:

  1. You use a genuine calculator or software (at the very least, one that has never failed you before).
  2. You believe that you’ve entered the correct instructions (keys) by hand.
  3. You believe that the result you see with your eyes is indeed what's shown on the calculator's LCD screen.

Given these conditions, you trust the result. In the same way, imagine a series of such calculators forming a network, with algorithms between them that can verify whether other nodes are in a "trusted operating state."
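The calculator example above amounts to trusting the environment rather than re-checking the output. A minimal sketch in Python (the language choice is mine, not the post's):

```python
import math
from functools import reduce
from operator import mul

# The "calculator's" answer for 60!, which we accept because we trust
# the device and the keys we pressed -- not because we re-derive it.
result = math.factorial(60)

# The by-hand cross-check that almost nobody actually performs:
by_hand = reduce(mul, range(1, 61))
assert result == by_hand  # 60! is an 82-digit number
```

The point isn't the arithmetic; it's that once the environment (here, the `math` library and the interpreter) is trusted, the by-hand verification step becomes unnecessary.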

We can trust this network to do what the original blockchain does: form a "decentralized trust" network. Crucially, there's no need to verify the calculation results (as the current mainstream layer-2 solutions must); we only need to verify the computing environment. This is the most fundamental philosophical difference from extant layer-2 solutions.
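As a hypothetical illustration — the measurement scheme and names below are my own assumptions, not the TEA Project's actual protocol — verifying an environment can be as simple as comparing a hash "measurement" of a peer's code against a known-good value, after which that peer's results are accepted without re-verification:

```python
import hashlib

# Known-good measurement of the approved node software, published in
# advance (e.g. recorded on a layer-1 chain).
KNOWN_GOOD = hashlib.sha256(b"node-software-v1.0").hexdigest()

def in_trusted_state(peer_code: bytes) -> bool:
    """Verify the peer's computing environment, not its results."""
    return hashlib.sha256(peer_code).hexdigest() == KNOWN_GOOD

honest = b"node-software-v1.0"
tampered = b"node-software-v1.0-patched"
print(in_trusted_state(honest))    # True: accept its results as-is
print(in_trusted_state(tampered))  # False: reject the node entirely
```

Real trusted-computing schemes rely on hardware-rooted attestation (e.g. a TPM or TEE signing its measurement) rather than self-reported hashes; this sketch only shows the shape of the check — cheap environment verification replacing expensive result verification.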

Validating a computing environment is a complex but tractable technical problem that we don't need to expand upon here. Electronic devices we use every day already employ similar technology to ensure integrity; it's just that very few people currently use it alongside blockchain. But if trusted computing is combined with blockchain, the result is a trusted computing network — and then the so-called dimensionality reduction strike can be carried out on the blockchain.

Once such a network is built, ordinary internet applications can run in a decentralized trusted environment. On the one hand, the constraints of an inefficient consensus mechanism are eliminated; on the other, no resources are wasted on computing-power competitions whose sole purpose is consensus. All computing resources go toward providing services to customers in exchange for the fees those customers pay. More importantly, this new type of trust network is still decentralized. That is to say, there are no longer tech giants who harvest all of your data to make money. Instead, the public owns their data, earns revenue from it, and enjoys privacy protection.

Data privacy and self-sovereign data ownership should be the future everyone looks forward to in the Web3 era. I implore you to take your eyes off the main-chain competition that has long been settled. Looking above that limited perspective will reveal a wonderful new world!