HPE launches platforms to simplify enterprise adoption of AI


Hewlett Packard Enterprise (HPE) has announced new purpose-built platforms and services capabilities it says can help companies simplify the adoption of artificial intelligence, with an initial focus on deep learning.

Inspired by the human brain, deep learning is typically implemented for challenging tasks such as image and facial recognition, image classification and voice recognition. To take advantage of deep learning, enterprises need a high-performance compute infrastructure to build and train learning models that can manage large volumes of data to recognize patterns in audio, images, videos, text and sensor data.

Many organizations lack several integral requirements to implement deep learning, including expertise and resources; sophisticated and tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems.

To help customers overcome these challenges and realize the potential of AI, HPE is announcing the following offerings:

  • HPE’s Rapid Software Development for AI: HPE introduced an integrated hardware and software solution, purpose-built for high-performance computing and deep learning applications. Based on the HPE Apollo 6500 system and developed in collaboration with Bright Computing to enable rapid deep learning application development, the solution includes pre-configured deep learning software frameworks, libraries, automated software updates and cluster management optimized for deep learning, and supports Nvidia Tesla V100 GPUs.
  • HPE Deep Learning Cookbook: Built by the AI Research team at Hewlett Packard Labs, the deep learning cookbook is a set of tools to guide customers in selecting the best hardware and software environment for different deep learning tasks. These tools help enterprises estimate performance of various hardware platforms, characterize the most popular deep learning frameworks, and select the ideal hardware and software stacks to fit their individual needs. The Deep Learning Cookbook can also be used to validate the performance and tune the configuration of already purchased hardware and software stacks.
  • HPE AI Innovation Center: Designed for longer-term research projects, the innovation center will serve as a platform for research collaboration between universities, enterprises on the cutting edge of AI research, and HPE researchers. The centers, located in Houston, Palo Alto, and Grenoble, will give researchers from academia and enterprises access to infrastructure and tools to continue their research initiatives.
  • Enhanced HPE Centers of Excellence (CoE): Designed to assist IT departments and data scientists who are looking to accelerate their deep learning applications and realize better ROI from their deep learning deployments in the near term, the HPE CoEs offer select customers access to the latest technology and expertise, including the latest NVIDIA GPUs on HPE systems. The current CoEs are spread across five locations: Houston, Palo Alto, Tokyo, Bangalore, and Grenoble.

HPE also offers flexible consumption services for HPE infrastructure, which avoid over-provisioning, increase cost savings, and scale up and down as needed to accommodate the demands of deep learning deployments.

“We live in a world today where we’re generating copious amounts of data, and deep learning can help unleash intelligence from this data,” said Pankaj Goyal, vice president, Artificial Intelligence Business, Hewlett Packard Enterprise. “However, a ‘one size fits all’ solution doesn’t work. Each enterprise has unique needs that require a distinct approach to get started, scale and optimize its infrastructure for deep learning.”
