AMAX Deep Learning Solutions Upgraded With NVIDIA Tesla V100 GPU Accelerators
FREMONT, Calif., Sept. 28, 2017 /PRNewswire/ -- AMAX, a leading provider of Deep Learning, HPC, Cloud/IaaS servers and appliances, today announced that its GPU solutions, including Deep Learning platforms, are now available with the latest NVIDIA® Tesla® V100 GPU accelerator. Solutions featuring the V100 GPUs are expected to begin shipping in Q4 2017.
Powered by the new NVIDIA Volta architecture, AMAX's V100-based computing solutions are the most powerful GPU solutions on the market for accelerating HPC, Deep Learning, and data analytics workloads. The solutions combine the latest Intel® Xeon® Scalable processors with Tesla V100 GPUs to deliver 6x the Tensor FLOPS for DL inference compared with previous-generation NVIDIA Pascal™ GPUs.
"We are thrilled about the biggest breakthrough we've ever seen in data center GPUs," said James Huang, Product Marketing Manager, AMAX. "It will deliver dramatic performance gains and cost-savings opportunities for HPC and the AI industry, and we cannot wait to see the results."
NVIDIA Tesla V100 GPU accelerators are the most advanced data center GPUs ever built to accelerate AI, HPC and graphics applications. Equipped with 640 Tensor Cores, a single V100 GPU offers the performance of up to 100 CPUs, enabling data scientists, researchers, and engineers to tackle challenges that were once thought to be impossible. The V100 features six major technology breakthroughs:
- New Volta Architecture: By pairing CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and Deep Learning.
- Tensor Core: Equipped with 640 Tensor Cores, Tesla V100 delivers 125 TeraFLOPS of deep learning performance. That's 12X Tensor FLOPS for Deep Learning training, and 6X Tensor FLOPS for DL inference when compared to NVIDIA Pascal™ GPUs.
- Next-Generation NVIDIA NVLink™ Interconnect Technology: NVLink in Tesla V100 delivers 2X higher throughput compared to the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server.
- Maximum Efficiency Mode: The new maximum efficiency mode allows data centers to achieve up to 40 percent higher compute capacity per rack within the existing power budget. In this mode, Tesla V100 runs at peak processing efficiency, providing up to 80 percent of the performance at half the power consumption.
- HBM2: With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95 percent, Tesla V100 delivers 1.5X higher memory bandwidth than Pascal GPUs as measured on the STREAM benchmark.
- Programmability: Tesla V100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grained synchronization and improves GPU utilization by sharing resources among small jobs (see the brief code sketch after this list).
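To make the Tensor Core and programmability points above concrete, here is a minimal, illustrative sketch in PyTorch. It is not from the press release, and it assumes a CUDA-enabled PyTorch build running on a Volta-class GPU such as the Tesla V100: when both operands of a matrix multiply are FP16, the underlying cuBLAS GEMM can be dispatched to Tensor Cores with FP32 accumulation, which is the execution path behind the quoted deep learning TFLOPS figures.

```python
# Minimal illustrative sketch (assumptions: PyTorch with CUDA support, Volta-class GPU).
# On a Tesla V100, an FP16 matrix multiply like this can be routed by cuBLAS to
# Tensor Cores, which multiply in FP16 and accumulate in FP32.
import torch

assert torch.cuda.is_available(), "This sketch expects a CUDA-capable GPU"

# Two half-precision matrices on the GPU; the sizes are arbitrary and chosen
# only so the GEMM is large enough to be Tensor Core friendly.
a = torch.randn(4096, 4096).half().cuda()
b = torch.randn(4096, 4096).half().cuda()

# FP16 GEMM; on Volta this is eligible for Tensor Core (tensor-op) execution,
# subject to cuBLAS alignment requirements on the matrix dimensions.
c = torch.matmul(a, b)

print(c.type(), c.size())  # torch.cuda.HalfTensor, 4096 x 4096
```

The same Tensor Core path is reachable from other frameworks and directly through cuBLAS or CUDA; the sketch only illustrates that no special application-level changes are required to benefit from it.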
AMAX solutions that will feature the V100 include:
- MATRIX DL-in-a-Box Solutions — The MATRIX Deep-Learning-in-a-Box solutions provide everything a data scientist needs for Deep Learning development. Powered by Bitfusion Flex, the product line encompasses powerful dev workstations, high-compute-density servers, and rack-scale clusters featuring pre-installed Docker containers with the latest DL frameworks, and GPU virtualization technology to attach local and remote GPUs. The MATRIX solutions can be used as standalone platforms or combined to create the perfect infrastructure for on-premises AI clouds or elastic DL-as-a-Service platforms.
- [SMART]Rack AI — [SMART]Rack AI is a turnkey Machine Learning cluster for training and inference at scale. The solution features up to 96x NVIDIA® Tesla® GPU accelerators to deliver up to 1344 TFLOPS of compute power when populated with Tesla V100 PCIe cards. Delivered plug-and-play, the solution also features an All-Flash data repository, 25G high-speed networking, the [SMART]DC Data Center Manager, and an in-rack battery for graceful shutdown during a power-loss scenario.
- ServMax G480 — The G480 is a robust 4U 8x GPU platform for HPC and Deep Learning workloads, delivering 56 TFLOPS of double-precision or 112 TFLOPS of single-precision performance when populated with Tesla V100 PCIe cards (a quick arithmetic check follows below).
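The server- and rack-level figures quoted above follow directly from NVIDIA's published per-card peaks for the Tesla V100 PCIe part (roughly 7 TFLOPS double precision and 14 TFLOPS single precision). A minimal sketch of that arithmetic, using those datasheet peaks as assumptions rather than AMAX measurements:

```python
# Back-of-the-envelope aggregate compute for V100 PCIe configurations.
# Per-card peaks are NVIDIA's published datasheet figures for the PCIe SKU
# (assumption; real application throughput will be lower than peak).
V100_PCIE_FP64_TFLOPS = 7.0
V100_PCIE_FP32_TFLOPS = 14.0

def aggregate_tflops(num_gpus: int, per_gpu_tflops: float) -> float:
    """Peak aggregate throughput for a uniform multi-GPU configuration."""
    return num_gpus * per_gpu_tflops

# ServMax G480: 8x Tesla V100 PCIe
print(aggregate_tflops(8, V100_PCIE_FP64_TFLOPS))   # 56.0 TFLOPS double precision
print(aggregate_tflops(8, V100_PCIE_FP32_TFLOPS))   # 112.0 TFLOPS single precision

# [SMART]Rack AI: up to 96x Tesla V100 PCIe
print(aggregate_tflops(96, V100_PCIE_FP32_TFLOPS))  # 1344.0 TFLOPS single precision
```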
As an Elite member of the NVIDIA Partner Network Program, AMAX is committed to providing cutting-edge technologies that deliver enhanced, energy-efficient performance for the Deep Learning and HPC industries, featuring NVIDIA Tesla V100, P100 and P40 GPU accelerators, and NVIDIA DGX™ systems. AMAX is now accepting pre-orders, quotes and consultations for Tesla V100-based systems. To learn more about AMAX and its GPU solutions, please visit www.amax.com or contact AMAX.
About AMAX
AMAX is an award-winning global leader in application-tailored data center, HPC and Deep Learning solutions designed for the highest efficiency and optimal performance. Recognized by several industry awards, including first place at the ImageNet Large Scale Visual Recognition Challenge, AMAX aims to provide cutting-edge solutions that meet specific customer requirements. Whether you are a Fortune 1000 company seeking significant cost savings through better efficiency for your global data centers, or a software startup seeking an experienced manufacturing partner to design and launch your flagship product, AMAX is your trusted solutions provider, delivering the results you need to meet your specific metrics for success. To learn more or request a quote, contact AMAX.
View original content: http://www.prnewswire.com/news-releases/amax-deep-learning-solutions-upgraded-with-nvidia-tesla-v100-gpu-accelerators-300527427.html
SOURCE AMAX