AMPD Ventures (AMPD) announced a ‘Machine Learning Cloud’ initiative designed to meet the requirements of academic institutions and companies in the artificial intelligence, machine learning, and deep learning sectors. The platform, featuring AMD Instinct accelerators along with the AMD ROCm open software platform, will initially be hosted at AMPD’s DC1 data center in Vancouver, British Columbia, with expansion to additional regions expected over the coming months.
AMD provides AMPD with the power of accelerators
AMD is pairing its AMD ROCm open software platform with its AMD Instinct accelerators. Built on an open-source toolset, AMD ROCm is designed to provide a rich foundation for advanced computing, and it is the first open-source, exascale-class platform for accelerated computing that is also programming-language independent.
Brad McCredie, corporate vice president, Data Center GPU and Accelerated Processing, AMD, said,
“It’s clear that AMD and AMPD share the same commitment to the open-source community and open-source technologies that are the driving force behind ROCm platform innovations. We are pleased to provide AMPD with the power of these accelerators with the aim of facilitating access for academic institutions and researchers as cost-effectively as possible.”
Announced in November 2020, the AMD Instinct MI100 is built on AMD’s all-new AMD CDNA architecture and was the industry’s first data center GPU to exceed 10 teraflops of FP64 performance.
Additionally, both AMPD and AMD will support the community via the ROCm GitHub forums, including guidance and support for HIP (Heterogeneous-Computing Interface for Portability). Around HIP, AMD has developed a suite of tools designed to ease the conversion of CUDA applications into portable C++ code.