LLVM 8 shines on WebAssembly, machine learning workloads
- 21 March, 2019 06:26
This latest release moves WebAssembly code generation out of experimental status and enables it by default. Compilers have been using LLVM's WebAssembly back end provisionally for some time; Rust, for instance, can compile to WebAssembly, although deploying the result to run takes some extra fiddling.
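As a sketch of the Rust workflow, the commands below add the WebAssembly target and build a crate for it (the crate name `hello` is a placeholder, and the crate would need `crate-type = ["cdylib"]` in its Cargo.toml):

```shell
# Install the WebAssembly target for the active Rust toolchain
rustup target add wasm32-unknown-unknown

# Build the crate for WebAssembly instead of the host machine
cargo build --release --target wasm32-unknown-unknown

# The module lands under target/wasm32-unknown-unknown/release/hello.wasm;
# actually running it in a browser still needs JavaScript glue to load
# and instantiate the module -- the "extra fiddling" mentioned above.
```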
Also new to LLVM 8 is support for compiling to Intel's Cascade Lake processors, enabled by way of a command-line flag. It's essentially the same as the existing support for Intel's Skylake server targets, but adds support for emitting Vector Neural Network Instructions (VNNI), an extension to the AVX-512 instruction set found in Intel Xeon Phi and Xeon Scalable processors. VNNI, as the name implies, is intended to boost the speed of deep-learning workloads on Intel systems in circumstances where GPU acceleration isn't available.
LLVM code generation isn't limited to CPUs. LLVM 8 also improves the AMDGPU back end, which generates code for AMD GPUs as used by the open source Radeon graphics stack. Newer AMD GPUs, like the Vega series, benefit most from the AMDGPU improvements.
Other changes include improved code generation for IBM Power processor targets, particularly Power9; support for LLVM’s just-in-time compiler (JIT) for MIPS/MIPS64 processors; cache prefetching by way of debug information gleaned from software profiles; and improved support for OpenCL and OpenMP 5.0 in the Clang (C/C++ compiler) project.