Over 100,000 GPUs from data centers and private clusters are set to plug into a new decentralized physical infrastructure network (DePIN) beta launched by io.net.
As Cointelegraph previously reported, the startup has developed a decentralized network that sources GPU computing power from various geographically diverse data centers, cryptocurrency miners and decentralized storage providers to power machine learning and AI computing.
The company announced the launch of its beta platform during the Solana Breakpoint conference in Amsterdam, which coincided with a newly formed partnership with Render Network.
Tory Green, chief operating officer of io.net, spoke exclusively to Cointelegraph after a keynote speech alongside business development head Angela Yi. The pair outlined the critical differentiators between io.net’s DePIN and the broader cloud and GPU computing market.
Green describes cloud providers like AWS and Azure as entities that own their GPU supplies and rent them out. Peer-to-peer GPU aggregators, meanwhile, were created to solve GPU shortages but “quickly ran into the same problems,” the executive explained.
The wider Web2 industry continues to look for ways to tap GPU computing from underutilized sources. Still, Green contends that none of these existing infrastructure providers cluster GPUs in the way io.net founder Ahmad Shadid has pioneered.
“The problem is that they don’t really cluster. They’re primarily single-instance, and while they do have a cluster option on their websites, it’s likely that a salesperson will call up all of their different data centers to see what’s available,” Green adds.
Meanwhile, Web3 firms like Render, Filecoin and Storj offer decentralized services that are not focused on machine learning. This is part of io.net’s potential value to Web3: it could serve as a primer for these services to tap into the machine learning space.
Green points to AI-focused solutions like Akash Network, which clusters an average of eight to 32 GPUs, and GenSyn as the closest service providers in terms of functionality. The latter is building its own machine learning compute protocol to provide a peer-to-peer “supercluster” of computing resources.
With an overview of the industry established, Green believes io.net’s solution is novel in its ability to cluster over different geographic locations in minutes. This statement was tested by Yi, who created a cluster of GPUs from different networks and locations during a live demo on stage at Breakpoint.
As for its use of the Solana blockchain to facilitate payments to GPU computing providers, Green and Yi note that the sheer scale of transactions and inferences that io.net will facilitate would not be processable by any other network.
“If you’re a generative art platform and you have a user base that’s giving you prompts, every single time those inferences are made, [there are] micro-transactions behind it,” Yi explains.
“So now you can imagine just the sheer size and the scale of transactions that are being made there. And so that’s why we felt like Solana would be the best partner for us.”
The partnership with Render, an established DePIN network of distributed GPU suppliers, provides computing resources already deployed on its platform to io.net. Render’s network is primarily aimed at sourcing GPU rendering computing at lower costs and faster speeds than centralized cloud solutions.
Yi described the partnership as a win-win: Render gains access to io.net’s clustering capabilities, putting to work GPU computing it has access to but is unable to use for rendering applications.
Io.net will run a $700,000 incentive program for GPU resource providers, while Render nodes can expand their existing GPU capacity from graphical rendering to AI and machine learning applications. The program targets users with consumer-grade GPUs, categorized as Nvidia RTX 4090 cards and below.
As for the wider market, Yi highlights that many data centers worldwide are sitting on significant percentages of underused GPU capacity. A number of these locations have “tens of thousands of top-end GPUs” that are idle:
“They’re only utilizing 12 to 18% of their GPU capacity and they didn’t really have a way to leverage their idle capacity. It’s a very inefficient market.”
Io.net’s infrastructure will primarily cater to machine learning engineers and businesses through a highly modular user interface that lets users select how many GPUs they need, along with location, security parameters and other metrics.