Artificial intelligence is widely seen as one of the key technologies for making applications learn from behavior and interact intelligently with end users. More and more companies are exploring AI and want to embed intelligent services in their overall enterprise architecture landscapes. Oracle provides a new AI platform as part of the Oracle Cloud services. The images below showcase the main components of this new service.
Among the most desirable features for anyone getting started with the adoption and integration of AI is the ability to develop new functionality and services without having to work on the infrastructure side of things. Developing new services and bringing them to market quickly is vital to staying competitive. For this reason, Oracle underpinned its AI cloud service with a rich set of high-performance components, including:
Oracle has provided NVIDIA Tesla P100 and NVIDIA Tesla V100 GPU capabilities as part of the infrastructure portfolio of the Oracle Cloud for some time. This capability now provides a ready-made, high-performance solution for deep learning and other AI technologies. Bare metal GPU servers, without the hypervisor overhead, can be used within the Oracle cloud to support AI needs. With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep-learning performance.
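The 100+ TFLOPS figure follows directly from the Tensor Core design: each Tensor Core performs a 4×4×4 matrix multiply-accumulate per clock, which is 64 multiply-adds, or 128 floating-point operations. A quick back-of-envelope check in Python (the ~1.53 GHz boost clock is NVIDIA's published figure for the V100):

```python
# Back-of-envelope estimate of Tesla V100 Tensor Core peak throughput.
tensor_cores = 640               # Tensor Cores on a V100
flops_per_core_per_clock = 128   # 4x4x4 matrix FMA = 64 multiply-adds = 128 FLOPs
boost_clock_hz = 1.53e9          # NVIDIA's published boost clock, ~1530 MHz

peak_flops = tensor_cores * flops_per_core_per_clock * boost_clock_hz
print(f"Peak mixed-precision throughput: {peak_flops / 1e12:.0f} TFLOPS")
# ~125 TFLOPS, comfortably past the 100 TFLOPS barrier
```

Note that this is a theoretical peak for mixed-precision matrix math; real deep-learning workloads achieve some fraction of it depending on how well they keep the Tensor Cores fed.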
The use of GPU power instead of CPU power is already a mainstream technology for many companies that adopt AI. Most modern AI frameworks use GPU power and, in some cases, are already moving toward FPGA-based processing.
For fast access to persistent storage, Oracle equipped the AI cloud with flash storage and flash accelerators. This is not surprising, since flash was already adopted in Oracle Engineered Systems. With the acquisition of Sun Microsystems, Oracle gained a large group of hardware engineers who are now pushing for flash and flash-only systems. The same trend is apparent in the Oracle cloud, where Oracle favors flash for high-performance offerings such as the Oracle AI cloud.
25 Gb Ethernet
For networking, Oracle uses its own products, including the Oracle Dual Port 25 Gb Ethernet Adapter. This adapter converges network and storage traffic, dramatically expands resources for server virtualization, supports network overlays for virtualizing data center L2 network infrastructure, and enables RDMA to accelerate clustered applications in Oracle servers and storage systems.
Oracle’s strategy of building its cloud from its own products lets Oracle control all hardware and software assets and provide customers the cloud performance they need. It also gives the development teams constant feedback, resulting in better on-premises products.
As network speed is critically important in distributed systems, the 25 Gb Ethernet connectivity between AI nodes is a distinct benefit of using the Oracle Cloud.
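To see why link speed matters, consider synchronizing gradients between training nodes. For a hypothetical 100-million-parameter model (the model size is an assumption for illustration; FP32 gradients come to 400 MB), a rough lower bound on the transfer time, ignoring protocol overhead and all-reduce algorithm details, can be sketched as:

```python
# Rough lower bound on the time to ship one set of FP32 gradients
# between two training nodes, ignoring protocol and algorithm overhead.
params = 100_000_000        # hypothetical model size (assumption)
bytes_per_param = 4         # FP32 gradients
payload_bits = params * bytes_per_param * 8

for gbps in (10, 25):       # common Ethernet speeds, in Gb/s
    seconds = payload_bits / (gbps * 1e9)
    print(f"{gbps} Gb Ethernet: {seconds * 1000:.0f} ms per gradient exchange")
```

Since this exchange can happen on every training step, moving from 10 Gb to 25 Gb links cuts the communication floor by 2.5x, which directly shortens synchronous training iterations.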
The Oracle AI cloud is underpinned by the high-performance components of the Oracle Cloud. While Oracle's IaaS services let you choose between standard and high-performance components, in the Oracle AI cloud high performance is the default.
This blog series was co-authored by Léon Smiers and Johan Louwers. Léon Smiers is an Oracle ACE and a thought leader on Oracle cloud within Capgemini. Johan Louwers is an Oracle ACE director and global chief architect for Oracle technology. Both can be contacted for more information about this and other topics via email: Leon.Smiers@capgemini.com and Johan.Louwers@capgemini.com