The latest proprietary Power servers from IBM, armed with the long-awaited IBM Power9 processors, are looking for relevance among next-generation enterprise workloads, but the company will need some help from its friends to take on its biggest market challenger.
IBM emphasizes increased speed and bandwidth with its AC922 Power Systems to better take on high-performance computing tasks, such as building models for AI and machine learning training. The company said it plans to pursue mainstream commercial applications, such as supply chain management and medical diagnostics, but those broader-based opportunities may take longer to materialize.
“Most big enterprises are doing research and development on machine learning, with some even deploying such projects in niche areas,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “But it will be 12 to 18 months before enterprises can even start driving serious volume in that space.”
The IBM Power9-based systems’ best chance for short-term commercial success is at the high end of the market.
“Power9 as a platform for AI is the focus over the next year or two,” said Charles King, principal analyst at Pund-IT Research Inc. “We are still a ways from seeing this sort of technology come down further into the commercial markets.”
But IBM may need to rely on its most important business partner and customer to drive the Power9’s commercial acceptance.
Google, which co-founded the OpenPower Foundation with IBM and Nvidia, contributed work around Power8 and ported its applications to run on IBM's Power-based systems. Google executives have declined to say how the company would deploy the Power9 internally and for what applications, but broadly deploying the IBM Power9 processor in servers for its data centers could seed confidence among corporate users, Moorhead said.
“To gain share at the macro level they need a Google deployment,” he said. “This could inspire others to deploy Power9 who are actually running large amounts of their production workloads.”
Under the hood of Power9
At the heart of the IBM AC922 system's architecture are PCI-Express 4.0, Nvidia's NVLink 2.0 and OpenCAPI, which together improve speed and bandwidth, according to the company. NVLink 2.0, developed jointly by IBM and Nvidia, is claimed to transport data between the IBM Power9 CPU and Nvidia's GPU seven to 10 times faster than an earlier version of the technology. The systems are also tuned to take advantage of popular AI frameworks: TensorFlow, a Google-developed open source software library for numerical computation using data flow graphs; Chainer, a framework supporting neural networks; and Caffe, a deep learning framework developed by Berkeley AI Research.
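The "data flow graph" model behind TensorFlow can be illustrated with a toy sketch in plain Python: computations are described as a graph of operation nodes first, and values flow through the graph only when it is evaluated. (This is a conceptual illustration only, not the TensorFlow API; all names here are hypothetical.)

```python
# Toy data-flow graph: nodes are operations, edges carry values.
# Illustrates the concept behind TensorFlow; not TensorFlow itself.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable that produces this node's value
        self.inputs = inputs  # upstream nodes whose outputs feed op

    def run(self):
        # Evaluate upstream nodes first, then apply this node's op.
        return self.op(*(n.run() for n in self.inputs))

def constant(v):
    return Node(lambda: v)

def add(x, y):
    return Node(lambda a, b: a + b, (x, y))

def mul(x, y):
    return Node(lambda a, b: a * b, (x, y))

# Build a graph for (2 + 3) * 4; nothing is computed until run().
graph = mul(add(constant(2), constant(3)), constant(4))
print(graph.run())  # 20
```

Separating graph construction from execution is what lets a framework schedule the operation nodes onto accelerators such as GPUs.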
These “accelerators” are part of the evolving IBM Power9 hardware architecture, and are designed to solidify the system’s competitive footing in the cloud computing market.
“We have seen how aggressive the compute requirements have grown in the Linux space, especially as AI workloads were added to the mix,” said Stefanie Chiras, vice president of IBM’s Power Systems. “It now requires a different level of infrastructure underneath to support that level of data transport.”
IBM faces off with Intel
Some of IBM’s server competitors have pledged to deliver systems built to handle AI workloads, and analysts believe Intel will be Big Blue’s most serious competitor. Intel unveiled its AI processor, called Nervana, late last year and promised a finished product by the end of this year.
Intel’s advantage in the budding competition for AI processors is the overwhelming market share of its server-based Xeon processors, compared to that of proprietary chips such as IBM’s. Nervana could prove a formidable competitor to IBM in the AI market, but the Power9 with its accompanying accelerator technologies has the edge right now, Moorhead said.
“Intel will point out they have about 95% of the processors and their Nervana accelerator, but IBM is the only one out there with NVLink that has the highest bandwidth connection you can have between a CPU and GPU,” Moorhead said. “Intel would have to significantly change its architecture to support something like NVLink, and they won’t do that any time soon.”