AI compute orchestration company Run:AI has raised a new $30 million Series B round from several investors, including Insight Partners and existing investors S Capital and TLV Partners.
Run:AI has built an orchestration and virtualization software layer tailored to the unique needs of AI workloads running on GPUs and similar chipsets. The platform is the first to bring OS-level virtualization software to workloads running on GPUs, an approach inspired by the virtualization and management of CPUs that revolutionized computing in the 1990s.
Run:AI is designed to work with GPUs from any vendor, but so far it supports only GPUs from Nvidia, the market leader. Support for other vendors' GPUs is expected in the future, said Hain.
“Tomorrow’s industry leaders will be those companies that master both hardware and software within the AI data centers of the future,” said Omri Geller, co-founder and CEO of Run:AI. “The more experiments a team runs, and the faster it runs them, the sooner a company can bring AI solutions to market. Every time GPU resources are sitting idle, that’s an experiment that another team member could have been running or a prediction that could have been made, slowing down critical AI initiatives.”
So far, most clients come from the finance, manufacturing, defense, automotive, and healthcare industries.
With the new investment, Run:AI will triple the size of its team and plans to provide data science training to software developers joining the company. In addition, Lonne Jaffe of Insight Partners will join the board.
About Run:AI
Run:AI helps companies execute their AI initiatives quickly while keeping budgets under control, by virtualizing and orchestrating AI compute resources in order to pool, share, and allocate resources efficiently.
Consolidating computational workloads yields greater server utilization, lowering TCO and speeding delivery of AI initiatives. Data science teams have automatic access to as many resources as they need and can utilize compute resources across sites – whether on-premises or in the cloud.
The Run:AI platform is built on top of Kubernetes, enabling simple integration with existing IT and data science workflows.
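To make the Kubernetes integration concrete: a rough, generic sketch (not Run:AI-specific) of how GPU workloads are typically expressed on Kubernetes is a pod spec that requests GPUs through the standard NVIDIA device plugin resource `nvidia.com/gpu`. The job name, container image, and entrypoint below are illustrative assumptions:

```yaml
# Generic Kubernetes pod requesting one GPU via the standard
# NVIDIA device plugin resource (nvidia.com/gpu). All names and
# the image tag are illustrative; an orchestration layer such as
# Run:AI adds pooling, sharing, and quota logic on top of
# requests like this.
apiVersion: v1
kind: Pod
metadata:
  name: training-job            # hypothetical job name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:23.10-py3   # illustrative image
      command: ["python", "train.py"]           # hypothetical entrypoint
      resources:
        limits:
          nvidia.com/gpu: 1     # request one GPU from the cluster
```

Because the platform plugs into this familiar workflow, data science teams can keep submitting jobs the way they already do, while the scheduling layer decides which pooled compute resource actually serves each request.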