Over the past two years, it has become clear that increasing TPS alone no longer solves blockchain scalability. The critical bottleneck is increasingly shifting from the consensus layer to the execution layer, where transactions compete for shared state and create contention under parallel processing.
Recent 2025 research reinforces this shift. The NEMO (2025) study shows that even high-throughput networks face performance degradation caused by write conflicts, and that parallelism without conflict-management mechanisms does not translate into real scalability. *Efficient Parallel Execution of Blockchain Transactions Leveraging Conflict Specifications* (2025) demonstrates that meaningful throughput gains are only achieved when read/write sets are explicitly defined and controlled at the transaction level.
In practice, this means that blockchain scalability depends less on consensus and far more on execution architecture — from state design and access optimization to workload scheduling and conflict resolution under peak load. We discussed these trends, constraints, and the engineering responses emerging in high-load environments with Alexander Kalankhodzhaev, Core Engineer Lead at Raiku.
1. Why is the conversation about blockchain scalability shifting from network speed to deeper questions of execution architecture?
High network throughput has long been a baseline requirement across many industries — from streaming to large-scale distributed computing — and blockchains directly benefit from these advances. There are blockchain-specific attempts to push networking further, such as DoubleZero, which is building a high-performance physical network for validators. However, such approaches are expensive, operationally complex, and difficult to scale globally.
If “network” is understood at the application layer — consensus algorithms and P2P communication — this space is also relatively mature. It has evolved for decades and continues to improve, but mostly through incremental optimization. A recent example is the Alpenglow consensus work by Solana, which shows that meaningful gains are still possible, though no longer transformative for scalability.
Execution, by contrast, remains comparatively young. Early blockchains were designed around strictly serial transaction processing, with no assumptions about parallelism. More recent systems, such as Solana, were built with parallel execution as a core principle. Even so, many execution-layer algorithms proved suboptimal in practice and have been repeatedly redesigned. Parallelism also introduces new constraints — state contention, hot accounts, conflicts, scheduling complexity, and synchronization overhead.
As a result, scalability is increasingly determined by execution architecture: state layout, access patterns, transaction scheduling, and conflict resolution, rather than by raw network speed.
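The cost of a hot account can be seen in a toy model: greedily pack transactions into batches whose write sets are pairwise disjoint, so batches run sequentially but members run in parallel. If every transaction writes the same hot account, every one lands in its own batch and parallelism collapses. A minimal sketch (account names and the batching rule are invented for illustration, not any specific runtime's algorithm):

```python
def batch_disjoint_writes(txs):
    """Greedily pack transactions into batches with pairwise-disjoint write sets.

    txs: list of sets, each the accounts one transaction writes.
    Batches execute sequentially; members of a batch execute in parallel.
    """
    batches = []  # each entry: (accounts written by the batch, tx indices)
    for i, writes in enumerate(txs):
        for written, members in batches:
            if written.isdisjoint(writes):  # no write conflict with this batch
                written |= writes
                members.append(i)
                break
        else:
            batches.append((set(writes), [i]))  # conflicts everywhere: new batch
    return [members for _, members in batches]

# Six transactions over six distinct accounts: one batch, 6-way parallelism.
spread = [{f"acct{i}"} for i in range(6)]
# Six transactions that all write one hot account: six batches, fully serial.
hot = [{"hot", f"acct{i}"} for i in range(6)]

print(len(batch_disjoint_writes(spread)))  # 1
print(len(batch_disjoint_writes(hot)))     # 6
```

The point of the toy model is that the serialization is a property of the state layout, not of the scheduler: no amount of scheduling cleverness recovers parallelism once every transaction writes the same account.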
2. Which aspects of the execution layer have the greatest impact on user experience — latency, fees, or predictability?
All three shape user experience, but under peak load, the decisive factor is predictability — the ability of the system to produce clear and consistent outcomes. Users primarily want to know whether a transaction will be included within an expected timeframe or rejected under well-defined rules.
Predictability answers the core user question: “Will my transaction execute, or will it start failing — getting stuck, conflicting, or repeatedly repriced?” Under congestion, uncertainty is perceived as more damaging than moderately higher fees.
Latency is critical for trading, gaming, and responsive interfaces, but average latency is a weak signal on its own. What matters is tail latency: when blocks are congested, a small subset of transactions can experience extreme delays, degrading UX even if average performance looks acceptable.
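The gap between average and tail latency is easy to demonstrate with synthetic numbers: a handful of stuck transactions barely move the mean but dominate the p99. A quick illustration (the latency figures are invented):

```python
def percentile(values, p):
    """Nearest-rank percentile: smallest value covering p% of the sample."""
    ordered = sorted(values)
    rank = -(-len(ordered) * p // 100)  # ceil(n * p / 100)
    return ordered[max(rank - 1, 0)]

# 95 transactions confirm in ~400 ms; 5 get stuck for 30 s under congestion.
latencies_ms = [400] * 95 + [30_000] * 5

mean = sum(latencies_ms) / len(latencies_ms)
print(f"mean = {mean:.0f} ms, "
      f"p50 = {percentile(latencies_ms, 50)} ms, "
      f"p99 = {percentile(latencies_ms, 99)} ms")
# The mean (1880 ms) hides that 1 user in 20 waited 30 seconds.
```

This is why production dashboards track p95/p99 rather than the mean: the tail is where congestion shows up first.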
Fees are the most visible cost, yet users are often willing to pay more when transaction inclusion is reliable and execution outcomes are stable and repeatable.
Ultimately, contention at the execution layer — competition for hot state, read/write conflicts, and inefficient scheduling — is what turns a blockchain from “fast in benchmarks” into “chaotic at peak demand.”
3. Are we entering a new competitive race — not for TPS, but for execution models and state management approaches?
TPS in blockchains remains largely a marketing metric. In isolated, controlled environments, it is possible to demonstrate almost any level of throughput, which makes headline TPS figures poorly representative of real-world performance.
As systems operate under increasing load, attention is shifting away from nominal TPS toward more meaningful measures, such as effective throughput under state contention and system behavior at peak demand. These metrics better capture how a network performs in production conditions rather than in laboratory benchmarks.
The real competition between blockchains today is therefore defined by execution-layer architecture, in particular:
- how much parallelism the system can retain after conflicts are resolved;
- how predictably and gracefully it degrades as contention over popular state increases;
- what guarantees it provides to users and developers — fairness, determinism, and transparent transaction inclusion rules.
These properties are where true scalability becomes visible, not in headline TPS numbers.
4. What architectural decisions around state design help not only increase performance but also maintain stability under extreme load?
Several architectural choices consistently improve both throughput and system stability:
- State sharding and partitioning — reducing hot spots by design, so load is distributed across independent segments of state.
- Minimizing shared global state — avoiding patterns like “one counter everyone touches,” which inevitably become bottlenecks under contention.
- Deterministic conflict handling — predictable ordering or deterministic retries reduce chaotic behavior when conflicts arise.
- Workload-aware scheduling — prioritizing transactions that unblock others; the more downstream work a transaction enables, the higher its priority. This approach is clearly visible in Firedancer.
- Bounded execution and backpressure mechanisms — protecting both the system and users from cascading failures during overload.
This list is far from exhaustive. Execution-layer architecture and state design remain rapidly evolving areas, continuously producing new ideas and solutions as blockchains are pushed toward real-world limits.
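The "prioritize transactions that unblock others" idea above can be sketched as a topological sort whose tie-breaker is how much downstream work each ready transaction releases. The dependency graph and scoring below are invented for illustration; this is not Firedancer's actual scheduler:

```python
from collections import defaultdict

def schedule_by_unblocked(deps):
    """Order transactions so those unblocking the most work run first.

    deps maps tx -> set of txs that must complete before it can run.
    Returns one valid execution order (Kahn's algorithm, ties broken by
    the number of direct dependents each ready transaction would release).
    """
    dependents = defaultdict(set)
    for tx, prereqs in deps.items():
        for p in prereqs:
            dependents[p].add(tx)
    remaining = {tx: set(prereqs) for tx, prereqs in deps.items()}
    order = []
    while remaining:
        ready = [tx for tx, prereqs in remaining.items() if not prereqs]
        # Among runnable txs, pick the one that directly unblocks the most.
        tx = max(ready, key=lambda t: len(dependents[t]))
        order.append(tx)
        del remaining[tx]
        for d in dependents[tx]:
            remaining[d].discard(tx)
    return order

# "hub" writes state that three later txs depend on; "solo" touches nothing
# shared. Scheduling the hub first releases the most downstream work.
deps = {"hub": set(), "solo": set(),
        "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
print(schedule_by_unblocked(deps))  # 'hub' comes first
```

A fee-only scheduler might run "solo" first if it paid more; a workload-aware one weighs what each transaction unlocks for the rest of the queue.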
5. How is responsibility for scalability shifting between protocol-level design and application-level engineering?
Responsibility is increasingly shared. Application builders are no longer focused solely on business logic or surface-level security. Modern protocols expose primitives and tools designed for safe, scalable execution — but using them effectively requires a deeper understanding of how the system actually works.
Application engineers now need to reason about the protocol at a lower level: what execution guarantees it provides, how state is accessed, and how contention is handled. This represents a meaningful shift in responsibility. Scalability can no longer be treated as something the protocol “solves” on its own. Poor state design cannot be parallelized away, and applications that ignore access patterns will become bottlenecks — even on the most advanced execution engines.
6. Which execution models are most likely to set industry-wide standards in the next phase of Web3?
Parallel execution is no longer optional — modern hardware makes it a necessity rather than an optimization. Among the emerging approaches, the most promising models are those based on explicit read/write sets or declared conflict specifications.
By requiring transactions to state what they read, write, or may conflict with, the system gains advance visibility into execution dependencies. This enables deterministic scheduling, avoids wasted execution, and preserves stable throughput even under heavy contention. The trade-off is that more responsibility shifts to application developers, which is why we’re likely to see new abstractions and tooling emerge to make these models safer and easier to use without exposing every low-level detail.
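Declared access sets make the conflict rule mechanical: two transactions conflict only if one writes an account the other reads or writes, so disjoint writers and concurrent readers can be batched before anything executes. A sketch of that rule (the `Tx` shape and account names are invented, not any particular chain's transaction format):

```python
from dataclasses import dataclass

@dataclass
class Tx:
    """A transaction with an explicitly declared read/write set."""
    name: str
    reads: frozenset = frozenset()
    writes: frozenset = frozenset()

def conflicts(a: Tx, b: Tx) -> bool:
    """Write-write, write-read, or read-write overlap is a conflict."""
    return bool(a.writes & b.writes or a.writes & b.reads or a.reads & b.writes)

def plan_batches(txs):
    """Greedily group transactions into batches safe to run in parallel."""
    batches = []
    for tx in txs:
        for batch in batches:
            if not any(conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

txs = [
    Tx("swap1", reads=frozenset({"pool"}), writes=frozenset({"pool"})),
    Tx("swap2", reads=frozenset({"pool"}), writes=frozenset({"pool"})),
    Tx("query", reads=frozenset({"oracle"})),
    Tx("pay",   writes=frozenset({"alice", "bob"})),
]
print([[t.name for t in b] for b in plan_batches(txs)])
# swap1 and swap2 both write "pool", so they serialize into separate
# batches; query and pay touch unrelated state and ride along in batch 1.
```

Because the plan is computed from declarations alone, no transaction is speculatively executed and rolled back, which is exactly the wasted work the conflict-specification approach avoids.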
7. Could a redefined execution layer reshape blockchain economics — from fee markets to transaction prioritization to validator incentives?
Yes — and this is already starting to happen. As execution becomes more deterministic and conflict-aware, it affects not just performance but the economic structure of blockchains, including transaction prioritization and MEV extraction.
With insight into conflicts and execution impact, validators are no longer constrained to prioritize transactions purely by fee. They can instead favor transactions that reduce contention or unlock more parallelism, maximizing total throughput and revenue. From the user side, fees increasingly reflect not just compute usage, but the cost of contention — effectively paying for access to scarce or hot state.
In that sense, execution architecture becomes an economic primitive, influencing who gets included, what users pay for, and how fairness is defined at the system level.