Microsoft takes step towards delivering FPGAs as a service
- 09 May, 2018 15:12
Microsoft has taken another step towards allowing its Azure customers to leverage the capabilities of the field-programmable gate arrays (FPGAs) that the company has installed in its cloud data centres.
Microsoft announced this week at its Build developer conference that it would launch a preview of its FPGA-powered Project Brainwave architecture for deep neural net processing. Brainwave has been “fully integrated” with Azure Machine Learning, Microsoft said.
Even before launching the Brainwave preview, Microsoft had already been using FPGAs to accelerate software-defined networking (SDN) in its Azure data centres — in fact, all Azure compute servers deployed since late 2015 have been fitted with FPGAs.
According to Mark Russinovich, the chief technology officer of Azure, it is likely that Microsoft has more FPGAs in a production environment than any other tech company in the world.
FPGAs can offer significant advantages over GPUs and CPUs when it comes to AI, according to Russinovich.
GPUs are often used for training deep neural networks because it’s a very batch-oriented process, according to the CTO: “You feed the models a bunch of data, you have to iterate over that data for typically hours or days, refine your models and you come out with one that you deploy in production.”
However, as the batch size gets smaller, GPUs become less efficient. FPGAs offer advantages over GPUs in speed, power efficiency and latency, regardless of batch size. They’re ideal for what Russinovich describes as “real-time AI” because they can be far more “bursty”.
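Russinovich’s point can be illustrated with a toy cost model: a batch-oriented accelerator amortises a large fixed per-launch overhead across many requests, so its per-request efficiency collapses at small batch sizes, while individual requests wait longer for a batch to fill. The function and all of the numbers below are illustrative assumptions for the sketch, not benchmarks of any real GPU or FPGA.

```python
def per_request_latency_ms(batch_size, fixed_overhead_ms, per_item_ms,
                           arrival_gap_ms):
    """Average latency seen by one request: mean time spent waiting for
    the batch to fill, plus the time to run the whole batch."""
    avg_wait = arrival_gap_ms * (batch_size - 1) / 2  # mean fill wait
    compute = fixed_overhead_ms + per_item_ms * batch_size
    return avg_wait + compute

# Hypothetical parameters: the "GPU" has a big fixed launch cost but
# cheap per-item compute; the "FPGA" streams each request immediately.
for batch in (1, 8, 64):
    gpu = per_request_latency_ms(batch, fixed_overhead_ms=5.0,
                                 per_item_ms=0.05, arrival_gap_ms=1.0)
    print(f"batch={batch:3d}  GPU latency/request ~ {gpu:6.2f} ms")

fpga = per_request_latency_ms(1, fixed_overhead_ms=0.2,
                              per_item_ms=0.5, arrival_gap_ms=0.0)
print(f"batch=  1  FPGA latency/request ~ {fpga:6.2f} ms")
```

Under these made-up numbers, the GPU’s per-request cost is dominated by the fixed overhead at batch size 1, while at large batches the wait to fill the batch dominates — the “real-time AI” gap the article describes.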
Microsoft is “absolutely” looking at more ways of making FPGA-based services available to its customers, Russinovich told Computerworld in an interview conducted last month during a visit to Australia.
“We’ve been looking at it for infrastructure acceleration in general, accelerated networking being one example. But there’s the possibility to accelerate algorithms – a large variety of algorithms on FPGAs, not just machine learning with deep neural networks.”
“Our vision is to create an FPGA platform where customers can come, write distributed FPGA applications, deploy them on to our FPGA fabric and then operate them at scale,” the CTO said. “And really we believe that the types of workloads that are applicable to this are kind of open ended.”
“We believe that there’s always this space, this area between hard ASICs and fully general purpose where if you can take advantage of the programmability of the FPGA, and the parallelism of an FPGA, that you can get a much more efficient, much more performant computation off an FPGA,” Russinovich said.
“In some cases those algorithms might get what we called ‘hardened’, which is after you’ve evolved them enough you say, ‘Okay this is the optimal type of algorithm and at least for a few years I want to have that burned into silicon, where I can actually drive down the costs even further.
“But in the exploration and innovation phase of things like networking, where software-defined networking continues to evolve, FPGAs make a fantastic spot to implement that.”