In this blog post I'm going to discuss some of the unique features of Spectrum Scale and also review some of the key enhancements in Spectrum Scale 5.0.
So what is Spectrum Scale? It's a high-performance clustered file system developed by IBM that can be deployed in three models:
- As software on your choice of industry-standard x86, POWER or IBM Z systems.
- As pre-built systems, known as Elastic Storage Server (ESS), that run Spectrum Scale RAID.
- As a cloud service via IBM Cloud or Amazon Web Services (AWS).
Enterprises and organisations are creating, analysing and keeping more data than ever before. 2.5 billion gigabytes of data is currently being created per day and 90% of data was created in the last 2 years!
Organisations that can deliver insights faster while managing rapid infrastructure growth have a commercial advantage over competitors and typically become leaders within their industry.
To deliver those insights an organisation's underlying storage must support both new-era big data and traditional applications with security, reliability and high performance. IBM Spectrum Scale meets these challenges as a high-performance solution for managing data at scale with the distinctive ability to perform archive and analytics in place.
Some of the unique features of Spectrum Scale that enable it to manage data at scale are:
- Extreme Scalability
Leverage a no-bottleneck architecture to scale performance for extreme throughput and low-latency access beyond what you would expect from network-attached storage. This enables the delivery of phenomenal performance and scalability using commodity-based hardware.
- Global collaboration
Enable data-anywhere access that spans storage systems and locations to accelerate applications across the data centre or around the world.
- Data-aware intelligence via policy-driven data placement
Policy-driven data placement delivers data-aware intelligence and information lifecycle management (ILM) efficiencies through powerful, automated tiered storage management. Using the capabilities within Spectrum Scale you can automatically determine where to physically store your data regardless of its placement in the logical directory structure.
This is brilliant as it lets you match the cost of your storage resources to the value of your data and ensure file and object data sits on the optimal tier within your environment.
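To give a flavour of what this looks like in practice, here is a minimal sketch of a tiering policy. The pool names ('system', 'data', 'nearline'), the file pattern and the thresholds are purely illustrative assumptions, and exact syntax can vary between releases, so treat it as an outline rather than a drop-in configuration.

```
# Illustrative sketch only: pool names, patterns and thresholds are assumptions.
# Write a small policy file containing placement and migration rules.
cat > /tmp/tiering.policy <<'EOF'
/* Place new log files on the (hypothetical) fast 'system' pool */
RULE 'logs_on_flash' SET POOL 'system' WHERE LOWER(NAME) LIKE '%.log'

/* Default placement for everything else */
RULE 'default' SET POOL 'data'

/* Once 'system' hits 80% full, migrate files not accessed for 30 days
   down to the cheaper 'nearline' pool until usage drops to 60% */
RULE 'age_out' MIGRATE FROM POOL 'system' THRESHOLD(80,60)
     TO POOL 'nearline'
     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
EOF

# Install the placement rules and run the migration rules against file system 'fs1'
mmchpolicy fs1 /tmp/tiering.policy
mmapplypolicy fs1 -P /tmp/tiering.policy
```

The nice part of the design is that the rules key off file attributes (name, size, access time, fileset and so on) rather than directory paths, which is exactly what lets Spectrum Scale place data independently of where it sits in the logical directory tree.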
If you are running any of the following workloads and are experiencing issues with scalability, feel you should be delivering faster insights from your data, or have a planned refresh coming up, then take a look at Spectrum Scale and get in contact with us at Tectrade:
- High-Performance Back-up / Restore
- Archive of data - across disk and tape
- Information Life Cycle Management
- “Data Ocean” Unified Storage
- Data-intensive (HPC) Technical Computing
- Big Data and Analytics
- ISV Solutions – SAS Grid, SAP HANA
- Healthcare, Genomics
- Video, streaming media, media & entertainment
The overall goal of the product, from my perspective, is to provide a single scale-out data plane for the entire data centre, and it has certainly delivered that in all the deployments we have completed.
Let's take a look at what the key enhancements in Spectrum Scale 5.0 bring to the table.
A new level of storage performance and efficiency resulting in dramatic improvements in I/O performance
Support for the newest low-latency, high-bandwidth hardware such as NVM, which means significantly reduced communication latency between nodes.
Improved performance and space efficiency for mixed workloads: small and large block size workloads can run simultaneously in the same file system. Large-block performance is optimised via the new 4MB default block size, which means you can fit 512 8k files in a single 4MB block, while small-file space efficiency is simultaneously optimised with a variable subblock size (a brief example follows below).
Improved IOPS and metadata performance: IOPS can improve 3x to 5x over previous releases!
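As a rough sketch of how you would pick up the larger block size on a new file system, the commands below assume a file system called 'fs1' and an NSD stanza file you have already prepared; both names are placeholders and option details can differ by release.

```
# Hypothetical example: 'fs1' and /tmp/nsd.stanza are placeholder names.
# Create a file system with a 4M block size (the new default in 5.0);
# the matching subblock size is chosen automatically (8K at a 4M block size).
mmcrfs fs1 -F /tmp/nsd.stanza -B 4M

# Check the block size and subblock (fragment) size actually in effect
mmlsfs fs1 -B
mmlsfs fs1 -f
```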
Simpler, more powerful system administration
A faster and simpler out-of-the-box experience.
Enhanced GUI features for many capabilities, including Active File Management (AFM), Transparent Cloud Tiering (TCT), and performance and capacity management.
Enhanced security and compliance
Integrated file audit logging capabilities within the Data Management edition, enabling you to:
- Track user access to the file system and file system events
- Capture events including Open, Close, Destroy (Delete), Rename, Unlink, Remove Directory, Extended Attribute Change and Access Control List (ACL) change
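As a quick sketch, file audit logging is switched on per file system with the mmaudit command; 'fs1' below is a placeholder name and the exact options (such as the audit fileset to log into) vary by release, so check the documentation for your level.

```
# Hypothetical example: 'fs1' is a placeholder file system name.
# Enable file audit logging on the file system (Data Management edition feature)
mmaudit fs1 enable

# List the file systems that currently have audit logging enabled
mmaudit all list
```

As I understand it, the audit records land as JSON entries in a dedicated fileset inside the file system, which makes them straightforward to feed into reporting or SIEM tooling.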
I can't wait to get our next Spectrum Scale deployment completed in the near future. If you want to learn more about Spectrum Scale please feel free to get in contact with me via william.bush@tectrade.com