Decentralized AI Superclouds: Comparing Akash, io.net and Gonka’s DePIN Compute Models
Context: From Cloud Oligopoly to DePIN Superclouds
A new class of decentralized physical infrastructure networks (DePIN) is attempting to turn the world’s fragmented GPU stock into a coherent “AI supercloud.” Akash Network, io.net, and Gonka pursue the same strategic objective from three different angles:
- Aggregate idle or underutilized GPUs (and broader compute) from data centers, miners, and individuals.
- Expose this capacity through a programmable, permissionless marketplace.
- Use crypto-economic incentives and on-chain accounting to coordinate supply and demand at scale.
- Compete with AWS, Google Cloud, Azure, and CoreWeave on price, availability, and censorship resistance.
The core research question is not “who is cheapest today,” but:
Which of these three has the most sustainable combination of (1) on-chain accounting, (2) off-chain GPU orchestration, and (3) token economics to withstand long-term price competition with centralized clouds without sacrificing decentralization or developer UX?
This article synthesizes available research on:
- Akash Network (Cosmos-based, general-purpose cloud + AI, reverse auction).
- io.net (Solana-native DePIN, AI-focused, mesh GPU network).
- Gonka (compute-native L1, Proof-of-Compute, AI inference–centric).
We focus on fundamentals, on-chain and market metrics where available, competitive positioning, risks, and scenario analysis. Where data is incomplete or inconsistent, that is highlighted explicitly rather than filled with assumptions.
1. Market Context: Why DePIN for AI Compute?
1.1 Structural Problem in Centralized Cloud
Centralized cloud providers (AWS, GCP, Azure, CoreWeave) control access to high-end GPUs via:
- Long-term contracts and opaque pricing.
- Centralized allocation and waitlists.
- Terms of service that can restrict “controversial” workloads.
At the same time:
- There is substantial underutilized GPU capacity in independent data centers, ex-mining operations, and even residential setups.
- AI demand has exploded: model training, fine-tuning, inference, and experimentation all require large, often bursty, GPU clusters.
- Many AI startups face hardware scarcity (multi-month waitlists) and inflated spot-market pricing.
This creates a classic arbitrage opportunity:
- Supply side: idle GPUs with low marginal cost but no direct access to AI customers.
- Demand side: AI teams willing to pay for compute but locked out of centralized capacity.
DePIN networks aim to bridge this gap by:
- Providing transparent, programmable marketplaces for compute.
- Using blockchains for settlement, accounting, and incentives.
- Coordinating heterogeneous, geographically distributed GPUs into usable clusters.
1.2 Economic Promise and Trade-offs
The theoretical edge of DePIN compute networks:
- No need to build new data centers; they aggregate existing hardware.
- Lower fixed costs and more granular pricing.
- Potentially 70–80% cheaper than hyperscalers for equivalent GPU hours (as some project metrics suggest).
But they face structural trade-offs:
- Latency and reliability: heterogeneous hardware, variable network conditions, and less control over uptime than centralized data centers.
- Operational complexity: orchestrating thousands of independent providers into production-grade clusters.
- Token economics vs. real yield: balancing token emissions and incentives with the need to offer sustainable, non-subsidized pricing.
Within this context, Akash, io.net, and Gonka represent three distinct design points.
2. Akash Network: Cosmos-Based General-Purpose Compute Marketplace
2.1 Positioning and Architecture
Akash is one of the earliest decentralized compute marketplaces, started in 2018 with its Cosmos-based mainnet going live in 2020. Its core characteristics:
- General-purpose cloud: supports both AI/ML and broader workloads (web services, databases, generic containers).
- Reverse-auction model:
- Users define resource requirements (GPU type, CPU, RAM, duration, budget).
- Providers submit bids.
- Lowest qualified bid wins the lease.
- Cosmos-native: leverages IBC and a modular stack, with its own validator set and security model.
This design makes Akash more akin to a decentralized AWS/Hetzner hybrid than a pure AI network.
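To make the reverse-auction flow concrete, here is a minimal sketch. The data structures and field names are hypothetical (Akash's actual order and bid types live in its Cosmos SDK modules); the point is only the matching rule: lowest qualified bid wins.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price_per_hour: float   # USD-equivalent, for illustration
    meets_specs: bool       # provider satisfies GPU type, RAM, region, etc.

def match_lease(bids: list[Bid]) -> Bid:
    """Reverse auction: the lowest bid that satisfies the deployment's
    resource requirements wins the lease."""
    qualified = [b for b in bids if b.meets_specs]
    if not qualified:
        raise ValueError("no qualified bids for this deployment")
    return min(qualified, key=lambda b: b.price_per_hour)

# A user requesting one H100-class GPU might see bids like:
bids = [
    Bid("provider-a", 1.65, True),
    Bid("provider-b", 1.40, True),
    Bid("provider-c", 1.10, False),  # cheaper, but fails the spec check
]
print(match_lease(bids))  # provider-b wins at $1.40/hr
```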
2.2 GPU Capacity, Utilization, and Pricing
Research indicates:
- GPU capacity grew more than 600% in 2024, reaching on the order of hundreds of high-performance GPUs (A100/H100 class).
- Utilization rates in the 40–60% range for GPUs, which is relatively healthy for a marketplace still in growth mode.
- Pricing (illustrative):
- H100 on Akash: about $1.40/hour.
- Versus AWS: around $4.33/hour.
- Versus Google Cloud: around $3.72/hour.
- Versus CoreWeave: around $6.50/hour.
On raw price, Akash is positioned at roughly 60–80% cheaper than major centralized clouds for high-end GPUs.
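The "60–80%" range follows directly from the hourly rates quoted above; a quick arithmetic check:

```python
akash_h100 = 1.40  # $/hr, as quoted above
for cloud, price in {"AWS": 4.33, "Google Cloud": 3.72, "CoreWeave": 6.50}.items():
    discount = 1 - akash_h100 / price
    print(f"{cloud}: {discount:.0%} cheaper")
# AWS: 68% cheaper; Google Cloud: 62% cheaper; CoreWeave: 78% cheaper
```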
Mechanically:
- Lower prices are enabled by tapping underutilized capacity and lower overhead.
- The reverse-auction mechanism pushes prices down via competition.
- Providers must still cover electricity, depreciation, and opportunity cost, so the long-term sustainability of such discounts depends on network effects and demand growth rather than transient token subsidies.
2.3 Onboarding and Orchestration
Historically, Akash’s barrier to entry for providers was non-trivial:
- Providers needed to manage Kubernetes clusters and complex configuration.
- This limited participation to more sophisticated operators.
The introduction of the Akash Provider Console significantly reduces this friction:
- Automates Kubernetes installation.
- Provides dashboards for leases, earnings, and utilization.
- Offers pricing management tools and competitive benchmarks.
- Simplifies persistent storage configuration.
This is a crucial step for scaling supply, especially from mid-sized data centers without deep DevOps teams.
On the orchestration side:
- Akash coordinates workloads via leases and container deployments.
- It is less specialized around AI-specific cluster orchestration (e.g., Ray, model-parallel training) than io.net, but more generalized for any containerized workload.
2.4 Tokenomics and On-Chain Accounting
Akash’s economic design revolves around the AKT token, with several important evolutions:
- Original model:
- AKT was the primary payment and staking token.
- Users had to acquire AKT to pay for compute.
- This created friction for non-crypto-native users.
- Stablecoin integration (AEP-23):
- Allowed USDC payments, improving UX.
- But reduced direct demand for AKT, weakening its role as a medium of exchange.
- Burn Mint Equilibrium (BME) and ACT stablecoin:
- Users can pay via fiat/credit card.
- The system buys AKT on the market and burns it to mint a native stablecoin (ACT).
- Providers are paid in ACT.
- This creates structural buy pressure on AKT as network usage grows.
- If AKT appreciates, burning becomes more “expensive,” potentially introducing deflationary dynamics.
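Based on the description above, the BME flow can be sketched as follows. This is a simplification under stated assumptions (ACT targets $1 and the full payment value passes through to the provider); the actual module logic and parameters are more involved.

```python
def settle_payment_bme(fiat_usd: float, akt_price_usd: float) -> dict:
    """Burn Mint Equilibrium, simplified: a fiat payment buys AKT on the
    market, burns it, and mints an equal USD value of ACT for the provider."""
    akt_bought_and_burned = fiat_usd / akt_price_usd
    act_minted_for_provider = fiat_usd  # assumes ACT holds a $1 peg
    return {"akt_burned": akt_bought_and_burned,
            "act_to_provider": act_minted_for_provider}

# The same $100 of usage burns more AKT when the token is cheap,
# and less when it is expensive: the deflationary dynamic noted above.
print(settle_payment_bme(100.0, akt_price_usd=2.00))  # burns 50 AKT
print(settle_payment_bme(100.0, akt_price_usd=4.00))  # burns 25 AKT
```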
Recent financial performance:
- In Q2 2025, USD-denominated revenue fell ~27%, largely due to AKT price decline.
- However, fee revenue in AKT terms grew ~13% QoQ, indicating real usage growth despite token volatility.
This illustrates a key challenge:
- Network usage and token price are partially decoupled.
- For providers and users, stable pricing matters more than token speculation.
- BME attempts to align token demand with real economic activity without forcing users into volatility.
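The two Q2 2025 figures are mutually consistent under a simple decomposition (USD revenue equals AKT-denominated fees times average AKT price). Treating the quoted quarter-over-quarter changes as multiplicative implies roughly a 35% decline in average AKT price:

```python
usd_revenue_change = -0.27   # ~-27% QoQ, as reported
akt_fee_change = 0.13        # ~+13% QoQ in AKT terms
implied_akt_price_change = (1 + usd_revenue_change) / (1 + akt_fee_change) - 1
print(f"{implied_akt_price_change:.1%}")  # ≈ -35.4%
```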
2.5 Partnerships and Ecosystem
Akash has built a broad Web3 ecosystem presence:
- Partnerships with Kava Labs, Polygon, CertiK, HashQuark, and others.
- Integrations with decentralized storage (e.g., Filecoin/IPFS) to provide more complete cloud stacks.
- “Akash at Home” initiatives to tap residential compute.
- Roadmap items around a “services economy” and AI agent infrastructure.
This breadth is a double-edged sword:
- Positive: diversified demand; Akash is not solely dependent on AI cycles.
- Negative: less focused on being the best-in-class AI platform; network effects are diluted across many use cases.
2.6 Strengths and Weaknesses
Strengths
- Early mover with multi-year operational history.
- General-purpose design: can capture many workload types beyond AI.
- Competitive GPU pricing vs. centralized clouds.
- Reverse-auction mechanism for dynamic price discovery.
- BME tokenomics that tie AKT demand to real usage.
- Improved provider UX via Provider Console.
Weaknesses
- Reverse auctions introduce UX friction and latency for users needing instant provisioning.
- Less AI-specialized orchestration than io.net.
- Revenue in USD terms is sensitive to AKT price volatility.
- Fragmented target audience may slow deep specialization in AI.
3. io.net: Solana-Native AI Compute Mesh
3.1 Positioning and Focus
io.net is a Solana-based DePIN focused almost exclusively on AI and machine learning workloads:
- “Internet of GPUs” narrative: a global mesh of heterogeneous GPUs.
- Emphasis on low-latency clustering, suitable for training and inference.
- Strong focus on transparent on-chain payments and dynamic routing of GPU clusters.
Where Akash is a general-purpose decentralized cloud, io.net is closer to a decentralized CoreWeave optimized for AI.
3.2 Scale, Revenue, and Growth
io.net’s growth metrics are among the most aggressive in the space:
- 30,000+ GPUs across 130+ countries within ~18 months of launch.
- Over $20 million in verified on-chain revenue by October 2024.
- Q4 2024:
- Quarterly revenue of about $3.1 million.
- 565% quarter-over-quarter growth.
- Annualized run rate of about $12.5 million.
Notably:
- These are on-chain, verifiable revenue figures, not TVL or speculative volume.
- They demonstrate that io.net has found product–market fit with AI workloads.
GPU utilization appears to be somewhat lower (around 30–40% in some references) than Akash, which is typical for a rapidly scaling network that is aggressively onboarding supply ahead of demand.
3.3 Pricing and Workload Types
io.net offers a range of GPU options:
- From consumer-grade GPUs (e.g., RTX 4090) to datacenter-grade (e.g., H200).
- Example pricing:
- RTX 4090 at around $0.25/hour for Ray Cluster deployments.
- H200 around $2.39–$2.49/hour.
Relative to centralized clouds:
- io.net claims ~70% cost reductions vs. AWS for comparable GPUs.
- This is similar in magnitude to Akash’s discount, but with stronger AI specialization.
Workload deployment models:
- Ray Cluster:
- For distributed training and parallel workloads.
- Integrates with Ray, a widely used Python framework for distributed computing.
- Container-as-a-Service (CaaS):
- For containerized ML apps and services.
- Bare Metal:
- For specialized or performance-sensitive workloads.
This triad covers most AI use cases: training, fine-tuning, and production inference.
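Because io.net clusters advertise Ray compatibility, a team's existing Ray code should carry over with little change. Below is a minimal example of the kind of GPU task Ray distributes; the connection address is a placeholder (on an io.net Ray Cluster the actual address would come from the deployed cluster's dashboard):

```python
import ray

# "auto" connects to an existing cluster when run on its head node;
# a deployed cluster would supply its own address.
ray.init(address="auto")

@ray.remote(num_gpus=1)
def run_inference(batch: list[str]) -> list[str]:
    # Placeholder for real model code; Ray schedules this onto a GPU worker.
    return [f"processed:{x}" for x in batch]

futures = [run_inference.remote([f"sample-{i}"]) for i in range(4)]
print(ray.get(futures))  # four results, computed in parallel across workers
```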
3.4 Case Studies and Real-World Usage
Several case studies illustrate io.net’s traction:
- Wondera:
- Reduced AI training costs by ~75%.
- Scaled to ~200,000 users in four months.
- Leonardo.AI:
- Scaled from ~14,000 to ~19 million users.
- Cut GPU costs by ~50%.
- KayOS:
- Achieved ~5x developer productivity.
- Reduced compute costs from ~$2,500 to ~$1,000 per month (~60% reduction).
- Vistara Labs:
- Built ~5,600 applications in two months.
- Cut compute costs ~3x.
- Reported zero infrastructure failures in the case study period.
These examples suggest that:
- io.net is not just a speculative network; it is already powering large-scale AI services.
- Cost savings are meaningful and translate into real business outcomes (user growth, faster iteration).
3.5 Architecture: Mesh Networking and io.intelligence
io.net’s technical differentiators include:
- Solana base layer:
- High throughput and low transaction costs.
- Enables per-minute billing and instant settlement.
- Mesh architecture:
- Nodes organized in a mesh rather than strict hub-and-spoke.
- IO Mesh Technology reportedly achieves ~47% latency improvements via optimized routing and resource allocation.
- Better suited to distributed training and inference where cross-node communication matters.
- Ray compatibility:
- Seamless integration with existing Ray-based ML workflows.
- Reduces switching costs for teams already using Ray on centralized clouds.
- io.intelligence (unified inference API):
- Routes inference requests across optimal models and hardware.
- Claims up to 70% additional cost savings for LLM inference vs. traditional clouds.
- Abstracts away hardware details, presenting a single API endpoint.
This combination targets both power users (who want low-level control over clusters) and developers (who just want a cheap, reliable inference API).
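The available research does not document io.intelligence's concrete API, so the following is purely illustrative: a hypothetical HTTP call showing what "a single endpoint that abstracts away hardware" typically looks like. The URL, payload fields, and model name are all invented for this sketch; consult io.net's documentation for the real interface.

```python
import requests

# Hypothetical endpoint and payload; not io.net's actual API.
resp = requests.post(
    "https://api.example-io-intelligence.invalid/v1/inference",
    json={
        "model": "llama-3-70b",  # the router picks the hardware, not the user
        "input": "Summarize DePIN compute in one sentence.",
        "max_tokens": 64,
    },
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=30,
)
print(resp.json())
```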
3.6 Tokenomics and Payment Flows
The IO token is central to io.net’s economic design:
- Users can pay in USDC or fiat, but payments are converted into IO under the hood.
- Providers can receive IO or immediately convert to USDC.
- Fee structure:
- Payments in USDC incur a 2% facilitation fee.
- Payments in IO have 0% fee.
This creates:
- A demand sink for IO as network usage grows.
- An economic incentive for heavy users to hold and pay in IO.
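The fee differential is easy to quantify: for any given spend, settling in USDC costs 2% more than settling in IO (before accounting for the price risk of holding IO):

```python
def settlement_cost(spend_usd: float, pay_in_io: bool) -> float:
    """2% facilitation fee on USDC payments; 0% when paying in IO."""
    fee_rate = 0.0 if pay_in_io else 0.02
    return spend_usd * (1 + fee_rate)

monthly_gpu_spend = 10_000.0
print(settlement_cost(monthly_gpu_spend, pay_in_io=False))  # 10200.0
print(settlement_cost(monthly_gpu_spend, pay_in_io=True))   # 10000.0, saving $200/month
```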
On-chain accounting:
- Every GPU lease and payment is settled on Solana.
- This provides transparent revenue metrics and a clear link between token utility and real economic activity.
Incentives:
- More than 101,000 unique workers have been rewarded.
- More than 49 million IO tokens distributed to contributors.
- This wide distribution supports decentralization on the supply side but also raises questions about long-term emission schedules and dilution.
3.7 Funding, Partnerships, and Ecosystem
io.net has attracted significant venture and strategic backing:
- $30 million Series A led by Hack VC.
- Participation from Multicoin Capital, Delphi Digital, Animoca Brands, OKX, Solana Labs, and founders of Solana and Aptos.
- Strategic partnerships:
- Dell Technologies: bridges enterprise hardware with decentralized GPU compute.
- Leonardo.AI, Wondera, Vistara Labs: high-profile AI users.
- Zerebro: Ethereum validator operations integration.
- OpenLedgerHQ: combining datasets with decentralized compute.
These relationships help io.net move beyond purely crypto-native users and into enterprise and large-scale AI platforms.
3.8 Strengths and Weaknesses
Strengths
- Strong AI specialization and clear product–market fit.
- Rapid growth in GPU count and on-chain revenue.
- Mesh architecture and Ray integration tailored to distributed AI workloads.
- Transparent, granular billing and instant settlement.
- Compelling case studies with real-world AI platforms.
- Well-capitalized and backed by prominent VCs and strategics.
Weaknesses
- Heavy dependence on the Solana ecosystem (technical and economic concentration risk).
- Young network (launched mid-2024), limited track record through full market cycles.
- Token emissions and incentive sustainability still unproven over longer horizons.
- Lower utilization in some periods, indicating potential oversupply or uneven demand.
4. Gonka: Compute-Native L1 with Proof-of-Compute
4.1 Positioning and Design Philosophy
Gonka is the most radical of the three in terms of protocol design. It is:
- A compute-native L1 blockchain, not just a marketplace on an existing chain.
- Built around a Proof-of-Compute (PoC) consensus mechanism.
- Optimized for high-efficiency AI inference and “meaningful workloads.”
The core idea:
Instead of burning electricity on useless hashes (PoW) or rewarding idle capital (PoS), use consensus to direct GPU cycles toward productive AI computation.
4.2 Proof-of-Compute and “PoW 2.0”
Gonka’s PoC mechanism addresses an often-cited inefficiency:
- In PoS systems, the majority of rewards go to stakers who contribute no real compute.
- In PoW systems like Bitcoin, all compute is used for hash puzzles with no extrinsic value.
Gonka introduces:
- Transformer-based Proof-of-Work (“PoW 2.0”):
- Consensus tasks are AI-relevant (e.g., training or inference tasks).
- “Sprint” competitions where hosts compete to solve complex AI tasks within a time window.
- The output is both:
- Verifiable computational evidence for consensus.
- Potentially useful AI artifacts (models, inference results).
This design aims to:
- Align network security with productive compute.
- Create dual revenue streams for hosts:
- Block rewards (GNK emissions).
- Payments from AI developers using the network’s inference/training capabilities.
4.3 Network Scale and Funding
Gonka is much younger than Akash or io.net:
- Launched in August 2025.
- Achieved 5,000+ H100-equivalent units in under six months.
- “H100-equivalent” is a metric Gonka uses to normalize network power across heterogeneous GPUs.
Funding and backing:
- $50 million funding round from Bitfury in December 2025.
- Incubated by Product Science Inc., founded by former Snap product directors.
- Early investors include Coatue Management, Slow Ventures, K5, Insight, and Benchmark partners.
This level of capital and pedigree is notable for such a young network and suggests strong belief in the PoC thesis.
4.4 Tokenomics and Incentives
Key elements of Gonka’s token design (based on available research):
- GNK token:
- Fixed supply of 1 billion GNK.
- 80% allocated to network hosts (providers) over time.
- Emission schedule:
- Bitcoin-inspired with exponential decay.
- Initial epochs mint around 323,000 GNK per epoch.
- Rewards decline over time, similar to Bitcoin halvings.
- Reward distribution:
- Proportional to Proof-of-Compute weight.
- Hosts with more verifiable compute contributions earn more GNK.
- Governance:
- PoC-weighted voting: voting power tied to compute contribution, not just token balance.
- This aligns governance with active infrastructure operators.
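A sketch of how emission and reward distribution might compose, under two explicit assumptions: the decay factor is chosen so that a geometric series starting at ~323,000 GNK per epoch sums to the 800M host allocation, and each epoch's emission is split pro rata by Proof-of-Compute weight. Neither the epoch length nor the decay constant is specified in the available research.

```python
TOTAL_SUPPLY = 1_000_000_000
HOST_ALLOCATION = int(0.80 * TOTAL_SUPPLY)   # 800M GNK to hosts over time
INITIAL_EPOCH_REWARD = 323_000               # approximate initial emission

# If emissions decay geometrically, sum = initial / (1 - decay) = HOST_ALLOCATION,
# which pins down the per-epoch decay factor (an assumption, not a documented value).
DECAY = 1 - INITIAL_EPOCH_REWARD / HOST_ALLOCATION   # ≈ 0.9995963

def epoch_reward(epoch: int) -> float:
    return INITIAL_EPOCH_REWARD * DECAY ** epoch

def distribute(epoch: int, poc_weights: dict[str, float]) -> dict[str, float]:
    """Split an epoch's emission pro rata by verified Proof-of-Compute weight."""
    total_weight = sum(poc_weights.values())
    reward = epoch_reward(epoch)
    return {host: reward * w / total_weight for host, w in poc_weights.items()}

print(distribute(0, {"host-a": 1200.0, "host-b": 400.0}))
# host-a earns 75% of ~323,000 GNK; host-b earns 25%
```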
Pricing and fee mechanism:
- Dynamic per-model pricing, modeled after EIP-1559:
- Base fee that adjusts with demand.
- Potential burn component, though details are not fully specified in the available research.
- Per-model pricing suggests that each AI model or workload type can have its own dynamic price curve.
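An EIP-1559-style adjustment, applied per model, could look roughly like the sketch below. The adjustment step, target utilization, and any burn split are assumptions; the research leaves them unspecified.

```python
def next_base_fee(base_fee: float, utilization: float,
                  target: float = 0.5, max_step: float = 0.125) -> float:
    """EIP-1559-style update: raise the base fee when a model's compute
    demand exceeds target utilization, lower it when demand falls short."""
    # Scale the adjustment by how far utilization deviates from target.
    delta = max_step * (utilization - target) / target
    return base_fee * (1 + delta)

fee = 0.010  # illustrative starting price per inference for some model, in GNK
for util in [0.9, 0.9, 0.3]:   # two hot epochs, then demand cools
    fee = next_base_fee(fee, util)
    print(f"utilization {util:.0%} -> base fee {fee:.5f} GNK")
```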
This design attempts to:
- Ensure that GNK is earned by productive work, not just capital.
- Make compute the primary economic unit, with GNK as the tokenized representation of that compute economy.
- Provide a predictable, diminishing emission schedule that reduces long-term inflation.
4.5 Focus on AI Inference and “Meaningful Workloads”
Gonka emphasizes:
- AI inference as the primary workload.
- “Meaningful workloads”:
- Tasks that have intrinsic value beyond consensus.
- This contrasts with generic PoW hashing or even some speculative DePIN tasks.
The “H100-equivalent” metric:
- Provides a standardized measure of network capacity.
- Helps developers and investors understand the effective power of the network, regardless of the exact GPU mix.
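The normalization itself is straightforward: weight each GPU model by its relative throughput against an H100 and sum. The conversion factors below are illustrative guesses, not Gonka's published weights.

```python
# Relative throughput vs. an H100 on a reference AI workload (illustrative).
H100_EQUIV_FACTOR = {"H100": 1.00, "A100": 0.55, "RTX 4090": 0.35}

def h100_equivalents(fleet: dict[str, int]) -> float:
    """Normalize a heterogeneous GPU fleet to a single capacity number."""
    return sum(count * H100_EQUIV_FACTOR[gpu] for gpu, count in fleet.items())

print(h100_equivalents({"H100": 2_000, "A100": 3_000, "RTX 4090": 4_000}))
# 2000*1.0 + 3000*0.55 + 4000*0.35 = 5050 H100-equivalent units
```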
However, detailed metrics on:
- actual utilization,
- real-world customers, and
- revenue and pricing benchmarks vs. AWS/GCP/CoreWeave
are not present in the available research, so Gonka's current commercial traction is less clear than Akash's or io.net's.
4.6 Strengths and Weaknesses
Strengths
- Novel consensus mechanism that ties security directly to productive AI compute.
- Fixed supply, host-heavy token allocation, and Bitcoin-like emission schedule.
- Strong early backing from Bitfury and top-tier investors.
- Clear focus on AI inference and compute economics.
- Governance linked to active compute contribution.
Weaknesses / Unknowns
- Very young network; operational track record is minimal.
- Limited publicly available metrics on revenue, utilization, and customer adoption.
- Complexity of PoC and verification overhead could introduce latency or UX challenges.
- How per-model EIP-1559-style pricing behaves under real demand shocks is unclear.
- Competes not only with DePIN peers but also with specialized AI inference clouds and centralized providers.
5. Comparative Analysis: Models, Metrics, and Trade-offs
5.1 High-Level Comparison
| Dimension | Akash Network | io.net | Gonka |
|---|---|---|---|
| Base chain / L1 | Cosmos-based L1 | Solana-based protocol | Custom compute-native L1 |
| Primary focus | General-purpose cloud + AI | AI/ML training & inference | High-efficiency AI inference |
| Core mechanism | Reverse-auction marketplace | Mesh GPU network, Ray integration | Proof-of-Compute consensus |
| GPU scale (approx.) | Hundreds of high-end GPUs; 600% growth 2024 | 30,000+ GPUs in 130+ countries | 5,000+ H100-equivalent units |
| Pricing vs. hyperscalers | H100 ~$1.40/hr vs. AWS ~$4.33/hr | ~70% cheaper than AWS for comparable GPUs | Not fully disclosed; dynamic per-model |
| On-chain revenue (directional) | Growing AKT-denominated fees; USD revenue down in Q2 2025 due to token price | >$20M cumulative by Oct 2024; $3.1M in Q4 2024; 565% QoQ growth | Not disclosed in research |
| Token | AKT (plus ACT stablecoin via BME) | IO | GNK |
| Token supply & distribution | Utility + staking; BME ties usage to AKT burns | IO used for payments; incentives to pay in IO | 1B fixed supply; 80% to hosts; PoC-weighted |
| Payment UX | AKT, USDC, fiat → ACT | USDC, fiat → converted to IO | GNK-based with dynamic pricing |
| Governance | Token-based (AKT) | Token-based (IO) | PoC-weighted (compute contribution) |
| Main differentiator | General-purpose, reverse auctions, BME | AI-native, mesh, strong revenue traction | Consensus-as-compute, PoC |
| Maturity | Oldest; multi-year track record | Launched mid-2024; rapid growth | Launched 2025; early stage |
5.2 On-Chain Accounting vs. Off-Chain Orchestration
All three must balance:
- On-chain accounting (payments, incentives, governance).
- Off-chain orchestration (GPU scheduling, clustering, networking).
Akash
- On-chain: leases, payments, staking in AKT; BME/ACT for stable settlements.
- Off-chain: Kubernetes-based orchestration, reverse-auction matching.
- Trade-off: more generalized stack; less specialized AI clustering.
io.net
- On-chain: per-minute billing, IO-based settlement on Solana.
- Off-chain: IO Mesh, Ray-based cluster orchestration, container and bare-metal management.
- Trade-off: deeper AI specialization; relies heavily on Solana’s performance and reliability.
Gonka
- On-chain: PoC consensus, GNK emissions, PoC-weighted governance.
- Off-chain: AI workload execution, verification of PoC tasks.
- Trade-off: consensus and compute are tightly coupled; design is elegant in theory but complex in practice.
5.3 Token Economics and Sustainability
The key question: can these networks offer sustained discounts vs. AWS/GCP without relying on unsustainable token subsidies?
Akash
- BME introduces structural AKT buy pressure as usage grows.
- Stablecoin-like ACT stabilizes provider income.
- Risk: if AKT price falls, USD revenue drops, potentially pressuring providers; if it rises too fast, compute may become expensive relative to centralized clouds.
io.net
- IO demand is tied to real usage via conversion from USDC/fiat.
- Fee differential (2% vs. 0%) nudges users to IO.
- Large early token distributions to workers help bootstrap supply but must taper to avoid long-term inflation.
- Sustainability depends on:
- Maintaining real yield for providers after incentives decline.
- Keeping GPU prices competitive without heavy subsidies.
Gonka
- Fixed 1B supply, 80% to hosts, Bitcoin-like emission schedule.
- PoC ensures that GNK is primarily earned by productive compute, not idle capital.
- Dynamic per-model pricing (EIP-1559 style) could:
- Efficiently allocate scarce compute under high demand.
- Burn fees to offset emissions (if designed that way).
- Unknowns: how quickly emissions decline relative to network adoption; whether GNK price can support competitive provider economics.
5.4 Developer UX and Adoption Friction
From the AI developer’s perspective, three factors dominate:
- Time-to-first-cluster: how fast can I get GPUs running?
- Predictability: are prices and performance stable?
- Integration: does it plug into my existing tooling?
Akash
- Reverse auctions can add delay before provisioning.
- Strong fit for containerized workloads, but less out-of-the-box AI tooling.
- Provider Console improves supply-side UX; demand-side AI UX is still more generic.
io.net
- Near-instant provisioning via mesh clusters.
- Ray integration and io.intelligence API reduce friction for AI teams.
- Pricing is more predictable (fixed hourly rates) than auctions.
Gonka
- Still early; developer UX is less documented in the research.
- PoC and per-model pricing are conceptually powerful but could be complex to expose via simple APIs.
- If Gonka abstracts PoC complexity behind a straightforward inference API, it could be compelling; if not, adoption may skew toward more crypto-native teams.
6. Competitive Positioning vs. Centralized Cloud and Among Peers
6.1 Against AWS, GCP, CoreWeave
All three DePINs claim substantial cost advantages:
- Akash: H100 at ~$1.40/hr vs. AWS ~$4.33/hr, GCP ~$3.72/hr, CoreWeave ~$6.50/hr.
- io.net: ~70% cheaper than AWS for comparable GPUs.
- Gonka: no explicit price benchmarks in the research, but dynamic per-model pricing implies competitive targeting.
However, centralized clouds still dominate on:
- Enterprise-grade SLAs (uptime, support, compliance).
- Integrated services (managed databases, storage, networking, monitoring).
- Regulatory and compliance frameworks (SOC 2, HIPAA, etc.).
DePIN networks counter with:
- Censorship resistance and permissionless access.
- Transparent pricing and on-chain settlement.
- Ability to tap non-traditional supply (miners, smaller data centers, home rigs).
In AI specifically:
- CoreWeave and similar players already behave like semi-decentralized aggregators of GPU supply, but with centralized control and enterprise wrappers.
- DePIN networks must show that they can match or approach reliability and UX while retaining decentralization and cost advantages.
6.2 Among Akash, io.net, and Gonka
At a high level:
- Akash is the generalist: a decentralized AWS-like marketplace with AI as one important vertical.
- io.net is the AI specialist: a decentralized CoreWeave with strong early traction.
- Gonka is the protocol radical: redesigning consensus itself around AI compute.
Their competitive edges:
- Akash:
- Diversified workloads.
- Longest operational history.
- Innovative BME tokenomics.
- io.net:
- Strongest real-world AI adoption and revenue metrics.
- Deep integration with AI tooling (Ray, inference APIs).
- Clear narrative and focus.
- Gonka:
- Most ambitious attempt to align blockchain security with productive compute.
- Strong early institutional backing.
- Compute-native governance and rewards.
Their vulnerabilities:
- Akash:
- Could be outcompeted on AI specialization by io.net and centralized AI clouds.
- Reverse-auction UX may be less appealing for time-sensitive AI workloads.
- io.net:
- Solana concentration risk.
- Needs to show sustainability once token incentives normalize.
- Gonka:
- Needs to prove that PoC is robust, efficient, and easy to use.
- Lacks publicly visible adoption metrics relative to peers.
7. Risk Analysis and Negative Scenarios
7.1 Technical and Operational Risks
- Network reliability:
- Heterogeneous providers may have variable uptime and performance.
- For training large models, even brief interruptions can be costly.
- Latency and bandwidth:
- Distributed GPUs may not match the low-latency interconnects (e.g., NVLink, InfiniBand) of centralized data centers.
- This particularly affects large-scale training more than inference.
Project-specific:
- Akash:
- Reverse auctions could lead to underbidding and unstable provider economics.
- Kubernetes complexity on the provider side, even with the console, may still be a barrier for smaller operators.
- io.net:
- Mesh routing and Ray orchestration add complexity; bugs or misconfigurations could lead to failures at scale.
- Dependence on Solana’s uptime and performance.
- Gonka:
- PoC verification overhead could be significant.
- If consensus tasks are too complex, block times and UX may suffer.
- If they are too simple, they may not be truly “meaningful workloads.”
7.2 Economic and Token Risks
- Token price volatility:
- Impacts USD-denominated revenue (as seen with Akash in Q2 2025).
- Can make provider income unpredictable.
- Incentive sustainability:
- Early high emissions attract providers, but long-term viability requires:
- Real user demand.
- Stable or appreciating token value.
- Reasonable fee structures.
- Race to the bottom:
- If DePIN networks compete primarily on price, margins may compress to unsustainable levels.
- Providers may churn if they can earn more in centralized markets or alternative uses.
Project-specific:
- Akash:
- BME relies on robust demand; if usage stagnates, AKT buy pressure weakens.
- ACT stablecoin mechanism must maintain trust and solvency.
- io.net:
- Heavy early IO distributions to workers could create sell pressure.
- If fee incentives (0% vs. 2%) are insufficient, IO may not achieve strong monetary premium.
- Gonka:
- Fixed supply is attractive, but if GNK price does not appreciate with network growth, hosts may find rewards insufficient after emission decay.
- Per-model EIP-1559-like pricing must be carefully tuned to avoid fee shocks.
7.3 Regulatory and Market Risks
- Regulatory scrutiny:
- DePIN networks may be viewed as unregulated cloud providers.
- Jurisdictions could impose data residency, KYC, or other constraints.
- Enterprise adoption hurdles:
- Enterprises may hesitate to run critical workloads on permissionless networks without strong SLAs and compliance guarantees.
- Competition from centralized DePIN-like players:
- CoreWeave and others already aggregate third-party GPUs; they could adopt some DePIN-like features (transparent pricing, flexible contracts) without decentralizing governance.
7.4 Adoption and UX Risks
- Developer friction:
- Complex onboarding, tooling gaps, or poor documentation can slow adoption.
- Perceived reliability gap:
- Even if uptime is good, a perception that DePIN is “experimental” can deter conservative users.
- Fragmentation:
- With multiple DePIN options, developers may face decision fatigue or fear of platform risk, slowing adoption across the board.
8. Scenario Analysis: Bull, Base, and Bear Cases
Instead of price targets, we consider qualitative scenarios for each project.
8.1 Akash Network
Bull Case
- Akash solidifies itself as the default decentralized cloud, not just for AI but for a broad range of Web3 and Web2 workloads.
- BME and ACT stabilize token economics:
- AKT demand grows with usage.
- ACT provides smooth fiat-like UX.
- GPU capacity continues to expand, with utilization staying high.
- Reverse-auction UX is refined (e.g., via instant-fulfill mechanisms or better tooling), making it competitive on time-to-provision.
- Enterprises and large Web3 projects adopt Akash for cost-sensitive workloads, while AI remains an important but not exclusive vertical.
Base Case
- Akash remains a relevant but niche decentralized cloud provider.
- It captures a stable share of Web3 infrastructure and some AI workloads.
- BME works reasonably well, but AKT price remains volatile.
- It competes on price with centralized clouds but struggles to match their UX and SLAs for large enterprises.
- io.net and specialized AI clouds dominate the AI segment, while Akash thrives in broader compute niches.
Bear Case
- Reverse-auction complexity and generic positioning limit adoption.
- Token volatility undermines provider trust; ACT fails to gain wide usage.
- Centralized clouds lower prices or offer targeted discounts, eroding Akash’s cost advantage.
- DePIN competitors with more specialized AI stacks (io.net, Gonka, others) capture most AI demand.
- Akash remains a small, specialized network with limited impact on the broader cloud market.
8.2 io.net
Bull Case
- io.net becomes the de facto decentralized AI infrastructure layer.
- It consistently onboards high-profile AI platforms (like Leonardo.AI, Wondera, Vistara Labs) and many more.
- On-chain revenue continues to grow rapidly, and utilization stabilizes at high levels.
- Mesh architecture and Ray integration make io.net the best platform for distributed training and inference.
- Enterprises begin to route non-critical but cost-sensitive AI workloads through io.net, attracted by transparent billing and substantial savings.
- IO token gains strong monetary premium as usage grows; incentives can be tapered without harming provider economics.
Base Case
- io.net remains a leading AI DePIN with strong but not dominant market share.
- It is widely used by crypto-native AI projects and cost-sensitive startups.
- Centralized clouds still dominate enterprise AI workloads, but io.net is a credible alternative for many use cases.
- Revenue growth moderates but remains positive; tokenomics are stable but not exceptional.
- Competition from new DePINs and specialized AI clouds prevents io.net from fully capturing the “AI supercloud” narrative.
Bear Case
- Solana experiences sustained technical or reputational issues, impacting io.net’s reliability.
- Token incentives prove unsustainable; as emissions decline, providers leave for better-paying alternatives.
- Centralized providers aggressively cut prices for AI workloads, narrowing io.net’s cost advantage.
- Regulatory or compliance concerns limit enterprise adoption.
- io.net remains used by a subset of crypto-native projects but fails to break into mainstream AI infrastructure.
8.3 Gonka
Bull Case
- Gonka’s Proof-of-Compute becomes a new standard for tying blockchain security to productive work.
- AI developers flock to Gonka for inference workloads, attracted by:
- High efficiency.
- Fair, per-model dynamic pricing.
- Governance aligned with active compute providers.
- GNK’s fixed supply and host-heavy allocation create strong alignment between network growth and token value.
- Major AI labs and platforms integrate Gonka as a core inference backend, and other blockchains adopt PoC-like mechanisms inspired by Gonka.
Base Case
- Gonka finds a niche as a specialized AI inference L1.
- It attracts a loyal base of providers and developers but does not displace major DePINs or centralized clouds.
- PoC works technically but remains complex; most users interact via higher-level APIs that abstract it away.
- GNK value is stable enough to support providers, but not spectacular; emissions decline as planned.
Bear Case
- PoC proves too complex or inefficient in practice:
- Verification overhead slows the chain.
- Consensus tasks are hard to design as genuinely “meaningful” without gaming.
- Developer UX lags behind competitors; AI teams prefer simpler, more mature platforms like io.net or centralized clouds.
- GNK fails to gain traction; emissions decline before strong demand emerges, leaving providers undercompensated.
- Gonka remains an interesting research experiment with limited commercial impact.
9. Which Model Looks Most Sustainable Today?
Based strictly on the available research:
- io.net appears to have the strongest near-term traction:
- Largest GPU count.
- Clear AI focus.
- Verifiable on-chain revenue with strong growth.
- Compelling case studies and partnerships.
- Well-designed payment flows that tie IO demand to real usage.
- Akash offers resilience through diversification:
- Not solely dependent on AI cycles.
- Longest operational history.
- Innovative BME mechanism to align token value with network usage.
- Competitive pricing vs. centralized clouds.
- But may be less optimized for AI-specific UX and orchestration.
- Gonka is the most conceptually ambitious:
- Attempts to solve the “wasted work” problem of PoW/PoS.
- Aligns consensus with productive AI compute.
- Strong early funding and design, but limited real-world metrics so far.
- Its long-term success hinges on proving that PoC is both technically robust and developer-friendly.
In terms of the original research question (who has the most sustainable combination of on-chain accounting, off-chain orchestration, and token economics to withstand centralized price competition without sacrificing decentralization and UX?):
- Today, io.net looks closest to a practical answer for AI workloads:
- On-chain accounting is transparent and tightly coupled to usage.
- Off-chain orchestration is tailored to AI (mesh + Ray + inference API).
- Tokenomics are usage-linked, with clear incentives to pay in IO while preserving stablecoin-like UX.
- Akash may prove more resilient across cycles:
- Broader workload base.
- BME gives it a sophisticated monetary mechanism.
- If AI demand cools or central clouds become more competitive, Akash’s general-purpose nature may be an asset.
- Gonka is a high-variance bet:
- If PoC works as intended and adoption follows, it could redefine how blockchains and AI infrastructure interact.
- If not, it may remain a niche or experimental network.
10. Conclusion
Decentralized AI superclouds are moving from theory to practice. Akash, io.net, and Gonka represent three distinct strategies:
- Akash: a Cosmos-based, reverse-auction marketplace evolving toward a token-economic design (BME) that tightly couples AKT demand with real compute usage, while serving a broad spectrum of workloads beyond AI.
- io.net: a Solana-native AI infrastructure mesh with strong early adoption, significant on-chain revenue, and a developer experience tailored to distributed training and inference.
- Gonka: a compute-native L1 that attempts to fuse blockchain consensus with productive AI computation via Proof-of-Compute, backed by substantial early capital but still early in its lifecycle.
None of these models is guaranteed to win. Each must navigate technical, economic, regulatory, and UX challenges, while competing not only with each other but also with rapidly evolving centralized clouds and semi-decentralized aggregators.
What is clear from the available data is that:
- There is genuine, growing demand for cheaper, more open AI compute.
- DePIN networks can already deliver meaningful cost savings and real-world impact.
- The long-term winners will be those that can sustain these advantages without relying on unsustainable token subsidies, while delivering a developer experience that feels as reliable and simple as the best centralized clouds.
The race to build a decentralized AI supercloud is underway; Akash, io.net, and Gonka are three of its most important early experiments, each illuminating a different path toward that goal.