BigDL allows developers to write deep learning applications as standard Spark programmes that run on top of existing Spark or Hadoop clusters.
Intel has launched BigDL, an open-source deep learning library for the Apache Spark cluster-computing framework.
BigDL, which is already running in the Databricks Spark Platform, allows users to write their deep learning applications as standard Spark programmes that can directly run on top of existing Spark or Hadoop clusters.
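As an illustration, a BigDL application is structured like any other Spark job. The sketch below uses BigDL's `Engine.createSparkConf` and `Engine.init` entry points; the application name and structure are illustrative rather than taken from Intel's announcement:

```scala
import com.intel.analytics.bigdl.utils.Engine
import org.apache.spark.SparkContext

object BigDLApp {
  def main(args: Array[String]): Unit = {
    // BigDL extends a standard Spark configuration with its own settings
    val conf = Engine.createSparkConf().setAppName("BigDL example")
    val sc = new SparkContext(conf)
    Engine.init // initialise BigDL's execution engine on the cluster

    // ... build a model and train it on RDDs, as in any Spark job ...

    sc.stop()
  }
}
```

Because the programme is an ordinary Spark application, it can be submitted with `spark-submit` to an existing Spark or Hadoop (YARN) cluster without any cluster-side changes.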
It puts artificial intelligence expertise within reach of data scientists who currently work across several applications in various fields.
BigDL is modelled after Torch, an open-source deep learning framework used in scientific computing.
It provides comprehensive support for deep learning, including numeric computing through its Tensor library and high-level neural network layers.
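A minimal sketch of these two facets, using BigDL's `Tensor` class and `Sequential` container (the layer sizes here are arbitrary, not from the announcement):

```scala
import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.tensor.Tensor

// Numeric computing: a 2x3 single-precision tensor filled with random values
val t = Tensor[Float](2, 3).rand()

// High-level neural networks: a small multi-layer perceptron
val model = Sequential[Float]()
model.add(Linear(784, 100)) // fully connected layer, 784 inputs -> 100 outputs
model.add(ReLU())           // non-linearity
model.add(Linear(100, 10))
model.add(LogSoftMax())     // log-probabilities over 10 classes
```

The Torch lineage shows in the API: containers, layers and tensors mirror Torch's `nn` and tensor libraries.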
By using BigDL, users can load pre-trained Caffe or Torch models into Spark programmes.
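BigDL exposes loaders for such pre-trained models. The method names below (`Module.loadTorch` and `Module.loadCaffe`) have varied slightly across BigDL releases, and all file paths are placeholders, so this should be checked against the version in use:

```scala
import com.intel.analytics.bigdl.nn.{Module, Sequential}

// Load a serialised Torch model from a .t7 file (placeholder path)
val torchModel = Module.loadTorch[Float]("/path/to/model.t7")

// Load Caffe weights into a matching BigDL architecture.
// `hostModel` stands in for a network you have defined to mirror
// the Caffe prototxt; it is a placeholder, not a complete model.
val hostModel = Sequential[Float]()
val caffeModel = Module.loadCaffe[Float](hostModel,
  "/path/to/deploy.prototxt", "/path/to/weights.caffemodel")
```

Once loaded, the model behaves like any other BigDL module and can be used for inference or fine-tuning inside a Spark programme.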
BigDL leverages the Intel Math Kernel Library (MKL) and multi-threaded programming in each Spark task to achieve high performance.
BigDL can also let Hadoop/Spark serve as a unified data analytics platform for data storage, processing and mining, feature engineering, and machine and deep learning workloads, Intel said.
Intel senior vice president and general manager of the Software and Services Group Doug Fisher said: “BigDL is an open-source project, and we encourage all developers to connect with us on the BigDL Github, sample the code and contribute to the project.”
The deep learning library is part of Intel’s AI strategy, which was unveiled in November 2016 and aims to drive breakthrough performance and democratise access.
The strategy outlined the company’s work to make AI training and tools accessible to developers via the Intel Nervana AI Academy.
Built for speed and ease of use, the Intel Nervana portfolio serves as the foundation for highly optimised AI solutions, allowing more data professionals to tackle major challenges using industry-standard technology.
The semiconductor company intends to deliver up to a 100x reduction in the time to train a deep learning model, compared with GPU solutions.