Traditional monolithic storage doesn’t align with today’s open-source scale-out databases and big data frameworks. But building scalable open compute clusters from the latest storage building blocks demands expertise in balancing capacity, I/O, throughput, latency and quality of service against budgetary constraints.
In the early days of open computing, simple rules of thumb like a 1:1 core:spindle ratio made it easy to design commodity-based infrastructures. But in our quest to build vanity-free servers, well-intentioned software mechanisms like MySQL/NoSQL sharding and Hadoop replication have compounded server/storage sprawl, driving up space, power and cooling costs (OpEx). This unintended consequence has restricted the CapEx budgets available for the new equipment needed to tackle the next big thing.
With rapid improvements in SSD price/performance and relentless gains in HDD capacity and reliability, a 1:1 ratio is no longer the right metric for open compute designs.
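As a rough illustration of why the old ratio breaks down, consider a back-of-the-envelope sketch of random I/O per core; all device figures below are illustrative assumptions, not numbers from the presentation.

```python
# Back-of-the-envelope sizing sketch (illustrative assumptions only):
# compare random-read IOPS available per CPU core under a classic
# 1:1 core:spindle HDD design versus a small number of SSDs.

CORES_PER_NODE = 24            # assumed commodity 2-socket server
HDD_RANDOM_IOPS = 150          # assumed 7.2K RPM HDD, random 4K reads
SSD_RANDOM_IOPS = 80_000       # assumed mainstream SATA/NVMe SSD

def iops_per_core(num_devices: int, iops_per_device: int,
                  cores: int = CORES_PER_NODE) -> float:
    """Aggregate device IOPS divided across the node's cores."""
    return num_devices * iops_per_device / cores

# Classic 1:1 core:spindle design: 24 HDDs for 24 cores.
hdd_design = iops_per_core(24, HDD_RANDOM_IOPS)

# SSD design: even 2 SSDs dwarf the HDD array on random I/O.
ssd_design = iops_per_core(2, SSD_RANDOM_IOPS)

print(f"1:1 HDD design : {hdd_design:,.0f} IOPS/core")   # ~150
print(f"2-SSD design   : {ssd_design:,.0f} IOPS/core")   # ~6,667
```

Under these assumptions, random I/O per core is set by device type rather than spindle count, which is why capacity, throughput and latency must be balanced explicitly instead of pinned to a fixed ratio.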
This presentation will provide an unbiased view of hot, cool and cold data storage trends, along with benchmarking approaches and new metrics for addressing budget realities. We will then share construction blueprints for open compute solutions that use the latest SSD, HDD and software technologies to optimize efficiency and fidelity at scale.