Polkadot's scalability breakthrough: performance and security without compromise
Blockchain Scalability Challenge: Polkadot's Solution
As the blockchain industry pushes for ever higher efficiency, a key question has come to the fore: how can scalability be improved without sacrificing security and system resilience? This is not only an engineering challenge but a fundamental architectural choice. For the Web3 ecosystem, a faster system built on weakened trust and security rarely supports genuinely sustainable innovation.
This article examines the trade-offs behind Polkadot's scalability design, compares it with the approaches of other mainstream public chains, and explores the different paths they take among performance, security, and decentralization.
Challenges in Polkadot's Scaling Design
The balance between elasticity and decentralization
Polkadot's architecture relies on a relay chain and a network of validators. Rollups operate through sequencers that connect to the relay chain and communicate via the collator protocol. This protocol is entirely permissionless and trustless: anyone with a network connection can use it, connect to a handful of relay-chain nodes, and submit state transition requests for a rollup. These requests are then verified on one of the relay chain's cores, with one precondition: they must be valid state transitions, otherwise the rollup's state does not advance.
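To make this flow concrete, the sketch below models the permissionless submission path in simplified Rust. The types and the validity rule are hypothetical stand-ins, not the actual Polkadot collator-protocol API: anyone may submit a candidate, but the rollup's state only advances if the transition checks out.

```rust
// Simplified model of permissionless candidate submission
// (hypothetical types; not the real Polkadot collator-protocol API).

#[derive(Clone, Debug, PartialEq)]
struct RollupState(u64);

struct Candidate {
    parent_state: RollupState,
    new_state: RollupState,
}

/// Stand-in for the state transition check that relay-chain validators
/// re-execute; here "valid" simply means the state advances by one.
fn is_valid_transition(c: &Candidate) -> bool {
    c.new_state.0 == c.parent_state.0 + 1
}

/// Anyone may call this: the submission is permissionless and trustless.
/// The rollup state only advances if the transition is verified as valid.
fn submit_candidate(current: &mut RollupState, c: Candidate) -> Result<(), &'static str> {
    if c.parent_state != *current {
        return Err("candidate does not build on the current state");
    }
    if !is_valid_transition(&c) {
        return Err("invalid state transition: state not advanced");
    }
    *current = c.new_state;
    Ok(())
}

fn main() {
    let mut state = RollupState(0);
    // A valid submission advances the rollup.
    assert!(submit_candidate(
        &mut state,
        Candidate { parent_state: RollupState(0), new_state: RollupState(1) }
    ).is_ok());
    // An invalid submission is rejected; the state does not advance.
    assert!(submit_candidate(
        &mut state,
        Candidate { parent_state: RollupState(1), new_state: RollupState(5) }
    ).is_err());
    println!("rollup state after submissions: {:?}", state);
}
```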
Trade-offs of vertical scaling
Rollups can scale vertically by using multiple Polkadot cores at once, a capability introduced by the "elastic scaling" feature. During its design, however, it became clear that because the validation of a rollup block is not fixed to a specific core, this flexibility could undermine the rollup's resilience.
Since the protocol for submitting blocks to the relay chain is permissionless and trustless, anyone can submit blocks to any core assigned to the rollup for verification. An attacker may exploit this by repeatedly submitting previously validated legitimate blocks to different cores, maliciously consuming resources and thereby reducing the overall throughput and efficiency of the rollup.
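The toy simulation below (illustrative only; the core count and block hash are invented) shows why this matters: if a block is not bound to a particular core, the same already-validated block can be replayed on every core assigned to the rollup, and each replay burns core time that could have advanced the chain.

```rust
use std::collections::HashSet;

/// Toy model: each core spends one unit of validation work per candidate.
/// Without binding blocks to cores, a replayed block wastes work on every core.
fn main() {
    let cores_assigned = 3u32;          // cores assigned to the rollup (assumed)
    let mut seen_blocks = HashSet::new();
    let mut useful_work = 0u32;
    let mut wasted_work = 0u32;

    // An attacker replays the same valid block on every core.
    let replayed_block_hash = 0xABCDu64;
    for _core in 0..cores_assigned {
        if seen_blocks.insert(replayed_block_hash) {
            useful_work += 1;           // first validation actually advances the rollup
        } else {
            wasted_work += 1;           // replays only consume resources
        }
    }
    // With 3 cores, 2 of the 3 validations are wasted: throughput drops accordingly.
    println!("useful validations: {useful_work}, wasted validations: {wasted_work}");
}
```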
Polkadot's goal is to maintain the elasticity of rollups and the effective utilization of relay chain resources without compromising the key characteristics of the system.
The trust issue of the Sequencer
A simple solution would be to make the protocol "permissioned": for example, adopting a whitelist, or trusting that the default sequencers will not act maliciously, thereby preserving the rollup's liveness.
However, in the design philosophy of Polkadot, we cannot make any trust assumptions about the sequencer, as it is essential to maintain the system's "trustless" and "permissionless" characteristics. Anyone should be able to use the collator protocol to submit state transition requests for the rollup.
Polkadot: An Uncompromising Solution
The solution Polkadot ultimately chose is to delegate the decision entirely to the rollup's state transition function (the Runtime). The Runtime is the sole trusted source of all consensus information, so it must state explicitly in its output on which Polkadot core the validation should be performed.
This design delivers both flexibility and security. Polkadot re-executes the rollup's state transitions during the availability process and guarantees the correctness of the core assignment through the ELVES cryptoeconomic protocol.
Before any rollup block is written to Polkadot's data availability layer, a group of roughly five validators first verifies its legitimacy. They receive the candidate receipts and validity proofs submitted by the sequencer, which contain the rollup block and the corresponding storage proofs. This information is processed by the parachain validation function and re-executed by the relay-chain validators.
The validation output contains a core selector that specifies the core on which the block should be validated. Each validator checks whether that index matches the core it is responsible for; if not, the block is discarded.
This mechanism keeps the system trustless and permissionless, prevents malicious actors such as sequencers from manipulating where validation happens, and ensures that rollups remain resilient even when they use multiple cores.
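A minimal sketch of this check follows. The field names (such as core_selector) and function shapes are simplified illustrations, not the exact Polkadot or RFC103 data structures: the runtime's output commits to a core, and a validator assigned to a different core simply discards the candidate.

```rust
/// Simplified validation-output and core check (illustrative names,
/// not the exact Polkadot / RFC103 structures).

struct ValidationOutput {
    /// Committed by the rollup's runtime (state transition function):
    /// the index of the core this block may be validated on.
    core_selector: u32,
}

/// Re-execution of the rollup block by relay-chain validators, modelled
/// as a function returning the runtime's output on success.
fn execute_pvf(block_valid: bool, committed_core: u32) -> Option<ValidationOutput> {
    block_valid.then(|| ValidationOutput { core_selector: committed_core })
}

/// A validator responsible for `my_core` accepts a candidate only if the
/// block is valid AND the runtime committed to exactly that core.
fn accept_candidate(my_core: u32, block_valid: bool, committed_core: u32) -> bool {
    match execute_pvf(block_valid, committed_core) {
        Some(out) => out.core_selector == my_core,
        None => false, // invalid state transition: never accepted
    }
}

fn main() {
    // Honest path: block committed to core 1, checked on core 1.
    assert!(accept_candidate(1, true, 1));
    // Replay attempt: the same valid block resubmitted to core 2 is discarded,
    // because the runtime output pins it to core 1.
    assert!(!accept_candidate(2, true, 1));
    println!("core-selector check behaves as expected");
}
```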
Security
In its pursuit of scalability, Polkadot has not compromised on security. Rollup security is guaranteed by the relay chain, and only a single honest sequencer is required to maintain liveness.
With the ELVES protocol, Polkadot completely extends its security to all rollups, verifying all computations on the core without any restrictions or assumptions on the number of cores used.
Therefore, Polkadot's rollup can achieve true scalability without sacrificing security.
Universality
Elastic scaling does not limit the programmability of rollups. Polkadot's rollup model supports executing Turing-complete computations in a WebAssembly environment, as long as each execution completes within 2 seconds. With elastic scaling, the total amount of computation per 6-second cycle increases, but the kinds of computation that can be performed are unchanged.
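As a back-of-the-envelope illustration, using only the 2-second-per-execution and 6-second-cycle figures above and ignoring all overheads, the execution time available to a rollup per cycle grows linearly with the number of cores, while each individual execution stays capped at 2 seconds:

```rust
/// Back-of-the-envelope throughput under elastic scaling
/// (uses the 2 s / 6 s figures from the text; ignores all overheads).
const MAX_EXECUTION_SECS_PER_BLOCK: u32 = 2; // cap per individual execution
const RELAY_CYCLE_SECS: u32 = 6;             // relay-chain cycle length

fn execution_secs_per_cycle(cores: u32) -> u32 {
    // Each core contributes one capped execution per cycle; the per-execution
    // cap is unchanged, only the total grows with the number of cores.
    cores * MAX_EXECUTION_SECS_PER_BLOCK
}

fn main() {
    for cores in 1..=4 {
        println!(
            "{cores} core(s): up to {} s of Wasm execution per {} s cycle",
            execution_secs_per_cycle(cores),
            RELAY_CYCLE_SECS
        );
    }
}
```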
Complexity
Higher throughput and lower latency inevitably bring added complexity, and this is the only trade-off the design accepts.
Rollups can dynamically adjust resources through the Agile Coretime interface to maintain a consistent level of security. They also need to meet certain requirements of RFC103 to accommodate different usage scenarios.
The specific complexity depends on the rollup's resource-management strategy, which may rely on on-chain or off-chain variables. An automated strategy is more efficient than manual adjustment, but its implementation and testing costs rise significantly.
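As one hedged illustration of what an automated strategy might look like, the function and thresholds below are hypothetical and do not correspond to an existing Agile Coretime API: a rollup could decide how many cores to acquire for the next period based on how full its recent blocks were.

```rust
/// Hypothetical automated coretime strategy (illustrative only; not an
/// existing Agile Coretime API). The rollup acquires more cores when recent
/// blocks are consistently full and releases them when demand drops.
fn cores_to_order(recent_block_fullness: &[f32], current_cores: u32, max_cores: u32) -> u32 {
    let avg: f32 =
        recent_block_fullness.iter().sum::<f32>() / recent_block_fullness.len().max(1) as f32;

    // Thresholds are assumptions for the sketch, not protocol constants.
    if avg > 0.85 {
        (current_cores + 1).min(max_cores)      // sustained congestion: scale up
    } else if avg < 0.30 {
        current_cores.saturating_sub(1).max(1)  // mostly idle: scale down, keep one core
    } else {
        current_cores
    }
}

fn main() {
    let busy = [0.95, 0.90, 0.88, 0.97];
    let quiet = [0.10, 0.20, 0.15, 0.05];
    println!("busy period  -> order {} core(s)", cores_to_order(&busy, 2, 4));
    println!("quiet period -> order {} core(s)", cores_to_order(&quiet, 2, 4));
}
```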
Interoperability
Polkadot supports interoperability between different rollups, and elastic scaling does not affect the throughput of message passing.
Cross-rollup message communication is implemented by the underlying transport layer, and the communication block space of each rollup is fixed, regardless of the number of cores allocated to it.
In the future, Polkadot will also support off-chain messaging, with the relay chain serving as the control layer rather than the data layer. Together with elastic scaling, this upgrade will enhance inter-rollup communication and further strengthen the system's vertical scalability.
Trade-offs of Other Protocols
It is widely accepted that performance gains come at the cost of decentralization and security. Yet judged by the Nakamoto coefficient, even Polkadot's less decentralized competitors have not delivered satisfactory performance.
Solana
Solana does not use a sharded architecture like Polkadot or Ethereum; instead it pursues scalability through a single-layer, high-throughput design that relies on Proof of History, parallel CPU execution, and a leader-based consensus mechanism, with a theoretical TPS of up to 65,000.
A key design choice is Solana's publicly disclosed, verifiable leader schedule: upcoming block producers are known in advance. In practice, this high-throughput, parallel design also imposes extremely high hardware requirements, pushing validators toward centralization. The more stake a node holds, the more leader slots it receives, while small nodes get almost none, further concentrating the network and increasing the risk of paralysis if the scheduled leaders are attacked.
Solana sacrifices decentralization and resistance to attacks in pursuit of TPS, with its Nakamoto coefficient being only 20, far lower than Polkadot's 172.
TON
TON claims a TPS of 104,715, but that figure was achieved on a private testnet of 256 nodes under ideal network and hardware conditions. By contrast, Polkadot has already reached 128K TPS on its decentralized public network.
TON's consensus mechanism has a security weakness: the identities of a shard's validators are revealed in advance. TON's own white paper acknowledges that while this optimizes bandwidth, it can also be maliciously exploited. Because there is no "gambler's ruin" style deterrent, an attacker can simply wait until a shard is entirely controlled by its own validators, or knock out honest validators with DDoS attacks, and then tamper with that shard's state.
By contrast, Polkadot's validators are randomly assigned and only revealed after a delay, so an attacker cannot know the validators' identities in advance. To succeed, the attacker must gamble everything on controlling the entire group; as long as a single honest validator raises a dispute, the attack fails and the attacker bears the losses.
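To see why delayed, random assignment matters, here is an illustrative calculation (the adversary fraction is an assumed example value, not a protocol parameter, and the group sizes are chosen for illustration): with random assignment, an attacker controlling a fraction f of validators fully captures a randomly drawn group of k validators with probability roughly f^k, and a single honest member is enough to raise a dispute and defeat the attack.

```rust
/// Illustrative probability that a randomly assigned validator group is fully
/// malicious: roughly f^k for adversary fraction f and group size k.
/// (f and the group sizes below are assumed example values.)
fn capture_probability(adversary_fraction: f64, group_size: u32) -> f64 {
    adversary_fraction.powi(group_size as i32)
}

fn main() {
    let f = 0.30; // assume the attacker controls 30% of validators
    for k in [1u32, 5, 10, 30] {
        println!(
            "group of {:>2}: capture probability ≈ {:.2e}",
            k,
            capture_probability(f, k)
        );
    }
    // Because assignments are revealed only after a delay, the attacker also
    // cannot pre-target the assigned validators with a DDoS attack.
}
```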
Avalanche
Avalanche adopts a mainnet + subnet architecture for scalability, where the mainnet consists of X-Chain (transfers, ~4,500 TPS), C-Chain (smart contracts, ~100-200 TPS), and P-Chain (managing validators and subnets).
Each subnet's theoretical TPS can reach ~5,000, an approach broadly similar to Polkadot's: reduce the load on any single shard to scale out. However, Avalanche lets validators freely choose which subnets to join, and subnets may impose extra requirements such as geographic or KYC restrictions, sacrificing decentralization and security.
In Polkadot, all rollups share a unified security guarantee, whereas Avalanche's subnets have no default security guarantee, and some are entirely centralized. Improving a subnet's security still means compromising on performance, and it is difficult to offer deterministic security commitments.
Ethereum
Ethereum's scaling strategy bets on the scalability of the rollup layer rather than solving the problem at the base layer. In essence, this does not resolve the issue; it merely pushes it up to the next layer of the stack.
Optimistic Rollup
Most Optimistic rollups today are centralized (some have only a single sequencer), which brings problems such as weaker security, isolation from one another, and high latency (withdrawals must wait out the fraud-proof window, typically several days).
ZK Rollup
The implementation of ZK rollup is limited by the amount of data that can be processed per transaction. The computational requirements for generating zero-knowledge proofs are extremely high, and the "winner takes all" mechanism can easily lead to system centralization. To ensure TPS, ZK rollup often restricts the transaction volume per batch, which can cause network congestion and gas price increases during high demand, affecting user experience.
In comparison, the cost of Turing-complete ZK rollups is approximately 2×10⁶ times that of Polkadot's core cryptoeconomic security protocol.
In addition, the data availability issues of ZK rollups will also exacerbate their disadvantages. To ensure that anyone can verify transactions, complete transaction data still needs to be provided. This often relies on additional data availability solutions, further increasing costs and user fees.
Conclusion
Scalability should not end in compromise.
Compared with other public chains, Polkadot has not taken the path of trading decentralization for performance or presupposing trust for efficiency. Instead, it achieves a multidimensional balance of security, decentralization, and high performance through elastic scaling, permissionless protocol design, a unified security layer, and flexible resource management.
As the industry pursues ever larger-scale applications, Polkadot's insistence on "zero-trust scalability" may be the real answer capable of supporting Web3's long-term development.