High Performance Computing

ALSO CALLED: HPC, High-Performance Clusters, High-Performance Computing, High Performance Clusters
DEFINITION: High-performance computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second. The term HPC is occasionally used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near … 
Definition continues below.
High Performance Computing Reports
1 - 25 of 144 Matches
The Future-Defined Data Center and the Pursuit of Better
sponsored by Lenovo and Intel
RESOURCE: Download this e-book to explore the characteristics that define a future-proof data center, namely one that is software-defined, and learn how an SDDC can deliver maximum scalability with purpose-built technology for the intensive demands of high-performance computing, artificial intelligence, and more.
Posted: 21 Jul 2017 | Published: 21 Jul 2017

Lenovo and Intel

The New Digital Outpost: Why We Must Rethink Remote Infrastructure
sponsored by Raritan Inc.
WHITE PAPER: The average enterprise data center supports 55 branches. It's time to rethink your approach to remote computing and infrastructure. Access this white paper to learn about the six imperatives determining your remote computing capabilities, how to meet emerging requirements, and much more.
Posted: 20 Jul 2017 | Published: 20 Jul 2017

Raritan Inc.

High-performance File System Solutions for Hybrid Cloud Infrastructures
sponsored by AWS - Avere
WHITE PAPER: Download this resource to discover how to take advantage of hybrid cloud NAS to drive innovation with high-performance storage access and the flexibility to store data where it makes the most sense for your business, via file system and caching technologies.
Posted: 26 Jun 2017 | Published: 26 Jun 2017

AWS - Avere

Virtual Workstation 101
sponsored by NVIDIA
WHITE PAPER: The virtual workstation is gaining a good reputation among professionals who work with high-performance graphical computing. Access this white paper to learn about virtual workstations and how they'll be utilized over the next few years.
Posted: 05 Jun 2017 | Published: 31 Mar 2017

NVIDIA

HPE Synergy - Single Infrastructure. Untapped Cost Savings.
sponsored by Hewlett Packard Enterprise
RESOURCE CENTER: Composable infrastructure is positioned to become the next significant data center model. It bridges the gap between traditional systems and cloud-native apps, delivers greater compute power, and more. Access this resource center to gain a comprehensive view of what composable infrastructure can deliver to your data center.
Posted: 15 May 2017 | Published: 15 May 2017

Hewlett Packard Enterprise

Achieving High Availability Starts with Strong Architecture
sponsored by NetApp
WHITE PAPER: Open this white paper to learn how the seamless scalability, failure prevention, and data protection offered in this all-flash system can help you achieve the high availability a modern storage system needs.
Posted: 12 May 2017 | Published: 12 May 2017

NetApp

Market Analysis: Enterprise High-Productivity Application Platform as a Service
sponsored by Mendix
ANALYST REPORT: High-productivity PaaS supports declarative, model-driven design and one-step deployment. This Gartner report analyzes the top 14 vendors in the high-productivity application platform as a service (hpaPaaS) market.
Posted: 08 May 2017 | Published: 27 Apr 2017

Mendix

Expect More From Your Primary Storage
sponsored by Veeam Software
EBOOK: Today's customers demand 24-7 data availability from their storage systems. In this e-book, take a look at why traditional backup systems can't quite provide this convenience, and how one program can give you the maximum value from your virtual storage.
Posted: 13 Apr 2017 | Published: 13 Apr 2017

Veeam Software

3 Elements for Efficient Cloud Implementation
sponsored by Red Hat
WHITE PAPER: There's a right way to combine the people, processes, and technology needed to deploy an efficient cloud, and this white paper explores just how to do it. Learn about the 3 elements you need to get started inside.
Posted: 11 Apr 2017 | Published: 11 Apr 2017

Red Hat

6 Storage Pros Talk About Getting a Grip on Machine Data With Scale-Out NAS Storage
sponsored by Qumulo
RESEARCH CONTENT: In this Taneja Group field report, learn how a scale-out NAS system can help you get a grip on your machine data. Read on to see how 6 customers in a variety of industries used their NAS system to store, manage, and curate mission-critical data.
Posted: 07 Apr 2017 | Published: 31 Oct 2016

Qumulo

FlashStack Converged Infrastructure for SAP
sponsored by Pure Storage
WHITE PAPER: Access this white paper to learn about an affordable, scalable all-flash converged infrastructure solution that will accelerate and protect SAP deployments.
Posted: 04 Aug 2016 | Published: 31 Dec 2015

Pure Storage

Dell Storage for HPC with Intel Enterprise Edition 2.3 for Lustre
sponsored by Dell EMC and Intel®
WHITE PAPER: In high performance computing, the efficient delivery of data to and from compute nodes is often complicated to execute. Discover how to deliver the high performance and availability your HPC system craves by deploying a scale-out storage appliance complete with an open source parallel file system.
Posted: 22 Dec 2016 | Published: 31 Oct 2015

Dell EMC and Intel®

The Evolution of HPC Management Software
sponsored by IBM
VIDEO: This webcast examines some of the latest developments in workload management platforms for high performance computing (HPC). Discover how to enable high-throughput resource scheduling, computation workflow automation, and more.
Posted: 05 Oct 2016 | Premiered: 17 Jun 2016

IBM

Managing Complex Computational Workflows With A Process Manager
sponsored by IBM
WEBCAST: Today, many engineering and analytic workflows are controlled by brittle, complex scripts. Data process managers can help increase the reliability of automated sequences of diverse workloads and make workflows more manageable. Find out more about the capabilities of a data process manager in this demonstration on managing complex workflows; a simple illustration of the idea follows this entry.
Posted: 03 Oct 2016 | Premiered: Jan 14, 2016

IBM
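
The webcast above does not describe the internals of IBM's process manager, so the sketch below is only a rough, hypothetical illustration of what such tools automate: it runs a handful of made-up, dependent workloads in topological order and retries failures, the kind of logic that brittle hand-written scripts tend to bury. All task names and shell commands are invented for the example.

    # Hypothetical sketch of what a data process manager automates:
    # running dependent workloads in order, with simple retry on failure.
    from graphlib import TopologicalSorter  # standard library, Python 3.9+
    import subprocess

    # Each (made-up) task maps to the command it runs and the tasks it depends on.
    WORKFLOW = {
        "extract_raw_data": {"cmd": "echo extracting", "deps": []},
        "clean_data":       {"cmd": "echo cleaning",   "deps": ["extract_raw_data"]},
        "run_simulation":   {"cmd": "echo simulating", "deps": ["clean_data"]},
        "build_report":     {"cmd": "echo reporting",  "deps": ["run_simulation"]},
    }

    def run_task(name, cmd, retries=2):
        # Run one workload, retrying a fixed number of times before giving up.
        for attempt in range(1, retries + 1):
            if subprocess.run(cmd, shell=True).returncode == 0:
                print(f"[ok] {name} (attempt {attempt})")
                return True
            print(f"[retry] {name} failed on attempt {attempt}")
        return False

    def main():
        # Topological order guarantees every task runs only after its dependencies.
        graph = {name: task["deps"] for name, task in WORKFLOW.items()}
        for name in TopologicalSorter(graph).static_order():
            if not run_task(name, WORKFLOW[name]["cmd"]):
                raise SystemExit(f"workflow stopped: {name} failed")

    if __name__ == "__main__":
        main()

A real process manager layers parallel execution, monitoring, and restart-from-checkpoint on top of this, but the explicit dependency graph is the core idea that replaces the brittle script.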

Faster Results and Less Investment with Data-Centric HPC
sponsored by IBM
WHITE PAPER: The rapid growth of structured and unstructured data in high-performance computing (HPC) environments requires a better approach capable of addressing massive data scale. This white paper details the advantages of taking a data-centric approach that emphasizes moving compute to data and actively manages data workflows in the HPC clusters.
Posted: 19 Sep 2016 | Published: 30 Sep 2016

IBM

Case Study: Wellcome Trust Sanger Institute Accelerates World-Leading Research
sponsored by IBM
CASE STUDY: A world-leading research institute sought to give its employees the tools they needed to collaborate effectively with other teams around the world. Access this case study to see how the Wellcome Trust Sanger Institute was able to utilize an analytics platform to design more efficient jobs, deliver on HPC utilization requirements, and optimize resources.
Posted: 15 Sep 2016 | Published: 28 Nov 2014

IBM

Hyper Convergence in the Era of Always-On Business
sponsored by HPE and Veeam
WEBCAST: In this expert webcast, discover why hyper convergence remains such a big deal for virtualized environments—and separate fact from vendor hype. Tune in to examine how hyper convergence-based data center modernization will help you set the stage for high availability and rapid recoverability.
Posted: 06 Apr 2016 | Premiered: Mar 23, 2016

HPE and Veeam

Outsmarting the Weather: Met Office Redefines Forecasting
sponsored by IBM
WHITE PAPER: This brief infographic describes the mainframe that allowed the Met Office to rapidly generate reports from hundreds of thousands of weather observations, enabling them to empower industries like aviation to use wind to their advantage, provide critical early warnings to help cities prepare for natural disasters, and more.
Posted: 29 Feb 2016 | Published: 06 Mar 2014

IBM

Case Study: Met Office Delivers Life-Saving Weather Data Quickly & Accurately
sponsored by IBM
CASE STUDY: Access this case study to learn how a supercomputer allows the Met Office to deliver 10 million weather observations to customers every day. Discover how this mainframe enabled the Met Office to ensure that users can access weather forecasts with mobile devices, save employees' time by automating the analytics process, and more.
Posted: 29 Feb 2016 | Published: 29 Feb 2016

IBM

Computer Weekly – 24 November 2015: Technology helping to deliver aid for refugees
sponsored by ComputerWeekly.com
EZINE: In this week's Computer Weekly, we find out how technology is helping to deliver vital aid to Syrian refugees in the Middle East. We look at the barriers to achieving the government's aim of a paperless NHS. And the CTO of special effects studio Framestore talks about the IT challenges behind hit movies such as Gravity. Read the issue now.
Posted: 20 Nov 2015 | Published: 20 Nov 2015

ComputerWeekly.com

Making Complex Storage Systems Simpler
sponsored by IBM
WHITE PAPER: Access this white paper to learn about an easy-to-use storage system that simplifies complex environments while increasing capacity, performance, and scalability.
Posted: 01 Apr 2015 | Published: 01 Apr 2015

IBM

GPFS Design Best Practices by Scott Denham
sponsored by IBM
WEBCAST: Access this webcast and learn how to effectively design GPFS so that you can best manage your storage.
Posted: 01 Apr 2015 | Premiered: Apr 1, 2015

IBM

An Introduction to IBM Spectrum Scale
sponsored by IBM
WHITE PAPER: Access this white paper to learn about a software-defined storage system that successfully manages big data and solves storage complexities.
Posted: 01 Apr 2015 | Published: 28 Feb 2015

IBM

A Clustered NFS to Increase Network Efficiency
sponsored by IBM
WHITE PAPER: In this white paper, discover a clustered NFS that integrates with Linux for ultimate efficiency.
Posted: 31 Mar 2015 | Published: 31 Dec 2012

IBM

Choose a storage platform that can handle big data and analytics
sponsored by IBM
WHITE PAPER: Access this white paper to discover a storage platform whose capacity is powerful enough to handle the massive demand of big data analytic workloads now and in the future.
Posted: 17 Mar 2015 | Published: 17 Mar 2015

IBM
 
HIGH PERFORMANCE COMPUTING DEFINITION (continued): … the highest operational rate currently achieved by computers. Some supercomputers work at more than a petaflop, or 10^15 floating-point operations per second. The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government agencies, particularly the military, also rely on HPC for complex applications. High-performance systems often use custom-made components in addition to so-called commodity components. As demand for processing power and speed grows, HPC will likely interest businesses of all sizes, particularly for transaction processing and data warehouses. … 
High Performance Computing definition sponsored by SearchEnterpriseLinux.com, powered by WhatIs.com, an online computer dictionary.
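
As a quick worked example of the teraflop and petaflop thresholds used in the definition above, the short Python sketch below estimates the theoretical peak of a hypothetical cluster; every hardware figure in it is an illustrative assumption, not a measurement of any system mentioned on this page.

    # Back-of-the-envelope peak-performance estimate for a hypothetical cluster.
    # All hardware figures below are illustrative assumptions.
    NODES = 64               # compute nodes in the cluster
    CORES_PER_NODE = 32      # CPU cores per node
    CLOCK_HZ = 2.5e9         # 2.5 GHz clock
    FLOPS_PER_CYCLE = 16     # e.g. wide-vector fused multiply-add units

    peak_flops = NODES * CORES_PER_NODE * CLOCK_HZ * FLOPS_PER_CYCLE

    TERAFLOP = 1e12  # 10^12 floating-point operations per second
    PETAFLOP = 1e15  # 10^15 floating-point operations per second

    print(f"Theoretical peak: {peak_flops / TERAFLOP:.1f} teraflops")
    print(f"Above the HPC teraflop threshold: {peak_flops > TERAFLOP}")
    print(f"Petaflop-class supercomputer:     {peak_flops > PETAFLOP}")

Sustained performance on real applications is normally only a fraction of such a theoretical peak, which is why supercomputer rankings rely on measured benchmarks rather than raw arithmetic like this.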
