TEA Hosting Nodes: The Backend for TApps

Tea Project Blog
Sep 28, 2022

One of the primary aims of the TEA Project is to recreate traditional web2-style cloud computing on the blockchain. We envision this diversification beyond centralized hosting to be one of the main appeals for companies moving to web3. But the promise of unstoppable dApps running at full speed on a decentralized cloud requires a compute infrastructure that’s built differently from traditional cloud computing.

Since the TEA Project uses multiple decentralized nodes that interact with a state machine, these parts have to lock together like puzzle pieces to form a larger picture. This post will examine the role that TEA hosting nodes play in the TEA Project’s decentralized compute infrastructure.

The TEA Project’s Three Compute Tiers

To fully bring decentralized cloud computing to the blockchain, we need to recreate all three tiers that comprise traditional cloud computing.

Visualization of Three-tier Architecture
  • The front-end or presentation tier is where the user interface is presented to the end-user for interaction.
  • The back-end tier is where business logic is executed.
  • The database tier is where changes to the app state and end-user accounts are stored.

Let’s look at each of these three tiers in turn and the role that TEA hosting nodes play in each.

The Front-end Tier

The front-end of any TApp is simply a set of static files (HTML / CSS / JS / images, etc.) that are pulled from IPFS and loaded into a TEA hosting node. This is basic functionality for a hosting node to provide, but it’s still an important design decision. Given an IPFS content ID (CID) for a TApp’s front-end, the user can load that CID from any hosting node on the TEA network. This is 100% decentralized, in contrast to the many smart contract-based platforms that have to rely on centralized cloud hosting for their front-ends. dApps that still rely on centralized hosting are best considered “hybrid” dApps.
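To illustrate what “load by CID from any hosting node” means in practice, here is a minimal Rust sketch that fetches a front-end file through a hosting node’s IPFS gateway. The gateway URL format, the node address, and the CID are all placeholder assumptions for illustration, not TEA’s actual API.

```rust
// Minimal sketch, assuming a hosting node exposes an HTTP gateway for IPFS
// content. The URL format, node address, and CID below are placeholders.
// Requires the `reqwest` crate with its "blocking" feature.
use std::error::Error;

fn fetch_front_end(hosting_node: &str, cid: &str) -> Result<String, Box<dyn Error>> {
    // Any hosting node can serve the same CID, so which node we pick is arbitrary.
    let url = format!("https://{hosting_node}/ipfs/{cid}/index.html");
    let body = reqwest::blocking::get(&url)?.text()?;
    Ok(body)
}

fn main() -> Result<(), Box<dyn Error>> {
    // Placeholder values for illustration only.
    let html = fetch_front_end("hosting-node.example", "EXAMPLE_FRONT_END_CID")?;
    println!("fetched {} bytes of front-end assets", html.len());
    Ok(())
}
```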

The Back-end Tier

After the end-user logs in to the static TApp, they’ll begin interacting with the user interface. This entails sending requests and listening for the replies before updating the app. Here’s how the loop looks in full (with a branch point depending on whether the function call is a query or a mutation request):

• After the user clicks something in the front-end, the front-end sends the user request on to the hosting nodes.
• The request (a query or mutation) is handled as a lambda function call. These lambda functions live in WASM module binaries (*.wasm files) known as actors in the TEA ecosystem.

The next step depends on whether the function call is a simple query (reads the state) or a mutation (changes the state); a short code sketch of this split follows the list below.

1. If it’s a simple query, then the front-end listens for the return result and displays it to the end-user. The loop is complete at this point.

2. If it’s a mutation, then the lambda function being executed in the hosting node will generate a transaction and send it to a state machine node. A mutation function call triggers the following actions:

  • The state machine will order the transactions coming in from its various state maintainer nodes.
  • The state machine continuously updates the state and broadcasts it to all TEA hosting nodes.
  • The TApp front-end is constantly querying the hosting nodes for any new updates and will display them to the end-user when available.
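To make the query/mutation split above concrete, here is a minimal Rust sketch of how an actor might dispatch the two kinds of calls. The request/response types and the `send_to_state_machine` helper are hypothetical stand-ins, not the actual TEA actor API; the point is only that queries read a local cache while mutations become transactions for the state machine.

```rust
// Illustrative sketch only: the types and helpers below are assumptions, not TEA's API.
use std::collections::HashMap;

enum Request {
    Query { key: String },
    Mutation { key: String, value: String },
}

enum Response {
    Value(Option<String>),
    TxSubmitted { tx_id: u64 },
}

fn handle_request(req: Request, local_cache: &HashMap<String, String>) -> Response {
    match req {
        // A query reads the locally cached state and returns immediately.
        Request::Query { key } => Response::Value(local_cache.get(&key).cloned()),
        // A mutation never touches state directly; it becomes a transaction
        // sent to the state machine, and the front-end later polls for the result.
        Request::Mutation { key, value } => {
            let tx_id = send_to_state_machine(&key, &value);
            Response::TxSubmitted { tx_id }
        }
    }
}

// Hypothetical stand-in for the hosting node's channel to a state maintainer node.
fn send_to_state_machine(_key: &str, _value: &str) -> u64 {
    42 // placeholder transaction id
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert("greeting".to_string(), "hello".to_string());
    match handle_request(Request::Query { key: "greeting".to_string() }, &cache) {
        Response::Value(v) => println!("query result: {:?}", v),
        Response::TxSubmitted { tx_id } => println!("transaction submitted: {}", tx_id),
    }
}
```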

Developers upload the code for each TApp to their GitHub repo, and a GitHub action compiles the code to *.wasm binary files (note that a single TApp could be composed of multiple *.wasm actor binaries). Each binary is signed by the developer, with the signature stored in the binary file. These *.wasm files are all stored on IPFS, and the TAppStore keeps a list of the official CIDs of the binary files for every TApp. Any attempt to tamper with these binaries would change their CID and make them inaccessible on the TEA network.
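The content addressing described above is what makes tampering detectable: since an IPFS CID is derived from the bytes it points to, any change to an actor binary yields a different CID than the one listed in the TAppStore. The sketch below shows the same idea with a plain SHA-256 digest; real CIDs are multihash-encoded values rather than raw hex digests, and the file name and expected digest are placeholders.

```rust
// Simplified integrity check: if an actor's bytes change, its digest (and hence
// its IPFS CID) changes, so a tampered binary no longer matches the CID listed
// in the TAppStore. Real CIDs are multihash-encoded, not raw SHA-256 hex; this
// is only an illustration. Requires the `sha2` and `hex` crates.
use sha2::{Digest, Sha256};

fn wasm_matches_expected(wasm_bytes: &[u8], expected_hex_digest: &str) -> bool {
    let digest = Sha256::digest(wasm_bytes);
    hex::encode(digest) == expected_hex_digest
}

fn main() {
    // Placeholder file name and digest for illustration only.
    let wasm_bytes = std::fs::read("my_actor.wasm").unwrap_or_default();
    let expected = "digest published alongside the TApp";
    println!("binary intact: {}", wasm_matches_expected(&wasm_bytes, expected));
}
```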

Developers who consider GitHub a point of centralization can achieve the same result by using TEA hosting nodes to run the compilation tasks instead.

The Database Tier

Besides a local database cache stored on IPFS, there’s no actual database kept on the hosting nodes. The TEA hosting nodes can’t directly modify the state. The database instead resides on the state machine that’s maintained by the separate state maintainer nodes. When a hosting node has an update, it requests that the state machine update the state.
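One way to picture this separation is that a hosting node holds only a read-only cache that gets replaced by state broadcasts coming down from the state machine; it exposes no write path of its own. The sketch below is a simplified illustration under that assumption; the `StateBroadcast` type and the update flow are not the actual TEA protocol messages.

```rust
// Sketch under assumptions: the hosting node's view of state is a read-only
// cache overwritten by broadcasts from the state machine. There is no local
// write method; mutations go out as transactions instead (see the earlier sketch).
use std::collections::HashMap;

struct StateBroadcast {
    height: u64,
    entries: HashMap<String, String>,
}

struct HostingNodeCache {
    height: u64,
    entries: HashMap<String, String>,
}

impl HostingNodeCache {
    // Queries are served from the local cache...
    fn get(&self, key: &str) -> Option<&String> {
        self.entries.get(key)
    }

    // ...and the only way the cache changes is by applying a broadcast
    // produced by the state machine.
    fn apply_broadcast(&mut self, b: StateBroadcast) {
        if b.height > self.height {
            self.height = b.height;
            self.entries = b.entries;
        }
    }
}

fn main() {
    let mut cache = HostingNodeCache { height: 0, entries: HashMap::new() };
    let mut entries = HashMap::new();
    entries.insert("balance:alice".to_string(), "100".to_string());
    cache.apply_broadcast(StateBroadcast { height: 1, entries });
    println!("alice's balance: {:?}", cache.get("balance:alice"));
}
```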

We can say that there’s a very specific delineation of responsibilities between the TEA Project’s node types:

  • TEA hosting nodes run function code (actors) for TApps. Some hosting nodes will also host the TApp’s front-end.
  • State maintainer nodes keep the global database and execute the transactions sent by the TApps that change the state.

We’ll dive into the database tier of TEA’s decentralized tech stack in a future article. If you’d like to learn how to run a TEA hosting node (or deploy a TApp that uses them), join our Telegram and we’ll point you in the right direction.
