Policy

The legal case for open blockchain networks

Author

Candace Kelly

The outage: October 20, 2025

On October 20, 2025, a DNS configuration error at Amazon Web Services cascaded into a 15-hour outage affecting 113 cloud services and over a thousand companies. For most businesses, this meant inconvenience. For institutions running critical financial operations on affected systems, it meant something more serious.

The Base blockchain—a Layer 2 network processing billions in transactions—saw block finalization times spike from 14 minutes to 78 minutes. Throughput dropped 40%. Users couldn't complete transactions.

During the same window, distributed blockchain networks continued operating without interruption. No degradation. No delays. The difference wasn't luck. It was architecture.

For institutions evaluating blockchain technology for regulated assets, this incident crystallized a question that deserves more attention than it typically receives: What are the legal and risk management implications of your technology choice?

The question regulators will ask

Today, many institutions are choosing blockchain technology based on relationships.

Trust is obviously a critical component of any decision-making process, so it makes sense to start with known entities and trusted individuals when exploring new solutions. The architecture of decentralized systems shifts that trust analysis: instead of assessing an individual company or service provider, institutions assess the technology itself.

But regulators will ask harder questions. Not "who do you have a relationship with?" but "how did you assess the technology risk?" Not "is this blockchain popular?" but "what are the points of failure, and have you stress-tested them?"

Financial regulators have spent decades pushing institutions to reduce concentration risk in critical systems. They will expect the same discipline applied to blockchain technology choices.

The five questions institutions should be prepared to answer:

  1. How concentrated is power and risk on your chosen network?
  2. Who controls the network, and what happens if their interests diverge from yours?
  3. Can regulators and auditors verify on-chain data independently?
  4. What are the potential points of failure?
  5. How will you meet regulatory requirements and enforcement orders?

These questions—drawn from standard risk management frameworks—all point toward the same architectural conclusion.

The concentration risk problem

The October AWS outage exposed a specific vulnerability: Base operates with a single sequencer—the entity that orders transactions and proposes blocks. That sequencer is operated by Coinbase. Its systems run on AWS. When AWS failed, Base degraded. The same happened with other Layer 2 networks, like Optimism and Arbitrum.

This is concentration risk in its purest form. A single operator. A single cloud provider. A single point of failure.

Layer 2 networks are often described as "open" because they settle to public blockchains like Ethereum. But the transaction processing layer—where users actually interact—frequently recentralizes around a single sequencer. What starts as open technology ends with a configuration that reintroduces the concentration risks blockchain was supposed to eliminate.

Metrika's post-mortem was direct: the incident "underlines the significant single point of failure risk inherent to L2 blockchains that rely on a centralized entity model."

Private blockchains present a different form of concentration risk. When a network is controlled by a consortium or a dominant company, institutions building on that network are subject to the unilateral decisions of someone else. That company might become a competitor. That consortium might change its rules. And critically: regulators can only verify what the controllers allow them to see.

For regulated institutions, concentration risk isn't just an operational concern. It's a governance question. Boards and risk committees will increasingly need to understand—and document—how technology choices align with their risk management frameworks.

The true definition of "open"

The instinct to prefer "private" or "permissioned" over open networks is understandable. It maps to familiar corporate IT thinking: control access, limit exposure, own your environment.

But this framing conflates two distinct ideas: control over the network and control over the assets.

Open doesn't mean uncontrolled. Open means no single controller.

On open networks with distributed validators, no individual party can unilaterally change network rules. The network is neutral because no one owns it.

But—and this is the point many observers miss—asset issuers retain full control over their assets. They can determine who holds them, freeze them when required, and execute clawbacks for fraud or regulatory enforcement. The openness is at the network layer; the controls exist at the asset layer.

This distinction matters legally. Open networks provide the resilience and neutrality benefits of decentralization. Asset-level controls provide the compliance capabilities regulators require. These aren't in tension—they're complementary.
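The two-layer split can be made concrete with a short sketch. This is an illustrative toy, not any real network's API: the `Asset` class, its `freeze` and `clawback` methods, and the issuer name are all hypothetical. The point it illustrates is structural—transfer rules are uniform for everyone (the open network layer), while freeze and clawback authority is reserved to the issuer (the asset layer).

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """Illustrative issuer-controlled asset on an open network.

    The network layer applies the same transfer rules to every
    participant; the issuer's compliance controls live in the
    asset itself, not in a network gatekeeper.
    """
    issuer: str
    balances: dict = field(default_factory=dict)
    frozen: set = field(default_factory=set)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # Network-layer rule: uniform for all participants.
        if sender in self.frozen or recipient in self.frozen:
            raise PermissionError("account frozen by issuer")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    # Asset-layer controls: only the issuer may invoke these.
    def freeze(self, caller: str, account: str) -> None:
        assert caller == self.issuer, "only the issuer may freeze"
        self.frozen.add(account)

    def clawback(self, caller: str, account: str, amount: int) -> None:
        assert caller == self.issuer, "only the issuer may claw back"
        taken = min(amount, self.balances.get(account, 0))
        self.balances[account] = self.balances.get(account, 0) - taken
        self.balances[self.issuer] = self.balances.get(self.issuer, 0) + taken
```

Nothing here requires a permissioned network: the transfer logic never asks who the sender's counterparty is or who approved their access, yet the issuer can still satisfy a freeze order or execute a clawback.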

The institutions that have moved furthest on tokenization—BlackRock, Franklin Templeton, Fidelity, U.S. Bank—have concluded that this combination represents the superior architecture for regulated assets.

Three legal advantages of open networks

1. Superior auditability and regulatory access

On open networks, every transaction is permanently recorded and independently verifiable. Regulators don't need to request access from a private network operator—they can observe directly. Auditors can verify without relying on what a private network's controllers choose to disclose.

This inverts the common compliance concern. Open networks don't evade regulatory oversight—they enable more comprehensive oversight than closed systems. The question isn't whether regulators can see what's happening. It's whether they're limited to seeing only what a private operator allows.

For institutions subject to examination, this distinction has practical implications. Independent verifiability is a feature, not a bug.

2. Competitive neutrality

Private blockchains create gatekeeping by design—that's their purpose. But gatekeeping in financial technology raises questions that legal and compliance teams should consider carefully.

Who decides which institutions can access the network? On what terms? What happens when the consortium's interests diverge from the interests of the institutions building on their chain? What if the network operator becomes a competitor?

Open networks sidestep these concerns structurally. When no one party controls access, no party can leverage that control.

For institutions building critical financial operations, dependency on a competitor's technology—or a consortium where competitors have influence—creates exposures that merit board-level and regulatory attention.

3. Operational resilience

The events of October 20, 2025, made the operational case concrete. Distributed validator networks—where multiple independent organizations run nodes across different providers and geographies—continued operating when centralized systems failed.

Networks with distributed validators maintained normal operations during the AWS outage. Networks dependent on single sequencers or concentrated cloud providers degraded or failed.

For financial operations that must run continuously, architectural resilience isn't optional. The standard isn't "usually available." It's the 99.99% uptime that serious financial operations require—achieved through distributed architecture, not through hoping your cloud provider doesn't have a bad day.

The regulatory trajectory

The regulatory landscape is moving toward technology neutrality. In the United States, recent executive action explicitly protects access to open public blockchains and mandates technology-neutral rulemaking.

The Basel Committee's guidance on crypto exposures—which imposed higher capital requirements for assets on permissionless blockchains—is increasingly contested. Industry bodies have argued the framework fails to distinguish between networks where anonymous validators can buy their way in and networks where validators are known, trusted entities operating without financial incentives to manipulate transactions.

And such networks—with known, trusted validators and no financial incentive to manipulate transactions—exist today on open blockchains that use neither proof-of-work nor proof-of-stake.

The direction of travel is clear: regulators are converging on frameworks that assess technology choices by their outcomes—how well the technology mitigates risk—rather than by how closely it resembles familiar legacy systems. Institutions building on open networks today are aligned with that trajectory.

The real question

Regulated assets worth billions of dollars settle on open networks daily. The SEC has approved funds operating on public blockchains. Major banks are piloting stablecoin issuance on open networks. The mechanisms work.

The real question for legal and risk teams is different:

Given today's technology choices, can you continue to justify building critical financial operations on technology controlled by a single operator? Dependent on a single cloud provider? Subject to a competitor's decisions?

When the next outage occurs—and it will—what will you tell your board and regulators?

The shift already in progress

The shift to open blockchain technology is underway. The largest asset managers have moved. Regulatory frameworks are adapting. The October outage underscored the operational stakes.

What remains is helping institutions understand how open networks work in practice—how distributed validation provides resilience, how asset-level controls enable compliance, and how the architecture answers the questions regulators will ask.

Failing to evaluate open networks for operational resilience and compliance is no longer mere complacency; it is a strategic risk. The case for open networks isn't just technical. It's legal, operational, and strategic.

It's a powerful case—and one that institutions overlook at their peril.