How 50 TB per blade makes this the right choice for Spark storage

An In-Depth Discussion on Big Data Analytics for Apache Spark


Development teams everywhere are adopting Apache Spark for its rapid big data analytics capabilities, but many of those same teams are running into capacity issues when they try to run this performance-intensive software on spinning-disk storage.

We sat down with Brian Gold, R&D director at Pure Storage, to better understand the storage challenges Spark presents and how one blade-based flash array can help solve them.

Read on to find out how efficient erasure coding, 50 TB of effective capacity per blade, and more help make this system well suited to running your Spark workloads.
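To get a feel for what "effective capacity per blade" means, the sketch below shows the generic relationship between raw flash, erasure-coding overhead, and data reduction. All of the figures and parameter choices in it are hypothetical illustrations, not numbers taken from this paper or from Pure Storage's specifications; the whitepaper covers the vendor's actual details.

```python
# Back-of-the-envelope sketch (all figures hypothetical, not from the paper):
# effective capacity contributed by one blade depends on its raw flash,
# the erasure-coding layout, and any data reduction the array applies.

def effective_capacity_tb(raw_tb, data_shards, parity_shards, data_reduction):
    """Approximate usable capacity contributed by one blade, in TB."""
    ec_efficiency = data_shards / (data_shards + parity_shards)  # fraction of stripe holding data
    return raw_tb * ec_efficiency * data_reduction

# Illustrative example only: 17 TB of raw flash per blade, a 13+2
# erasure-coded stripe, and a 3.5:1 data-reduction ratio.
print(round(effective_capacity_tb(raw_tb=17, data_shards=13, parity_shards=2,
                                  data_reduction=3.5), 1))  # ~51.6 TB
```

The takeaway is that the more efficient the erasure coding (small parity overhead spread across many blades) and the better the data reduction, the closer effective capacity gets to, or beyond, the raw flash on each blade.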

Vendor: Pure Storage
Posted: 13 Sep 2017
Published: 13 Sep 2017
Format: PDF
Length: 8 pages
Type: Resource
Language: English

Download this Resource!
