Mark III Systems Blog
IBM’s $1B investment in Flash-based R&D, announced early in 2013, is really starting to pay dividends, in our opinion. Not only on the server side with the recent IBM X6 eXFlash DIMM announcements, but also with a new model in IBM’s FlashSystem line (of which Mark III is a huge fan, as many of you know).
What are our favorite things about the *NEW* FlashSystem 840?
1) MORE! – Quite simply, the new FlashSystem 840 can store up to 40TB usable (RAID5), double the capacity of the previous FlashSystem 820. If you combine this ultra-fast storage capacity with a unique IBM capacity-optimizing technology like Real-time Compression, you can really boost the performance of your critical applications in an ultra-compact form factor, all without breaking the bank.
2) New GUI – This was a somewhat predictable move by IBM, but the FlashSystem management GUI has been re-skinned to match IBM’s SVC/V7000 (and XIV before that) GUI look and feel. The ease-of-use is obviously incredible and IBM’s consistency across products has been one of the best moves they’ve made from a storage perspective in the last 10 years, in our opinion.
3) Excellent RAS – Everything on the 840 is designed to be hot-swappable, easy to service, and very supportive of concurrent code loads. In short, everything you’d expect an IBM storage system to support. If you’re ultra-paranoid, you can always combine the 840 with SVC vDisk Mirroring (and Real-time Compression) to create a leading-edge and truly unique storage infrastructure in every aspect.
IBM today announced its new X6 architecture for high-end, scale-up servers in its System x line of x86 compute platforms. According to IBM’s press release, this means a refresh for the x3850/x3950 4-8 socket rack server models, as well as a NEW x880 Flex node, which will most likely be a counterpart to these servers but specifically designed for a Flex System chassis.
With a large number of our clients having already invested in a Flex System strategy, we are particularly excited about the idea of applying X6 to a Flex-based strategy, specifically in the areas of high-throughput applications (like backups), highly consolidated virtualization environments, and very high performance database servers. Combined with the extreme investment protection and backend bandwidth potential of the current Flex technology, we think the Flex compute portfolio is now complete, with something leading-edge and unique for every client and every type of application.
Since I know you know how to use Google, I won’t re-hash the exact word-for-word details of the announcement, but here are some of the announcement highlights we, personally, find interesting:
- X6 in Flex: No exact details yet, but the implied assumption with this announcement is that there will be an 8-socket Flex node (x880, according to the announcement) that will support 12TB of memory and a significant amount of backend bandwidth for network and SAN to the integrated Flex switches. This will enable applications like massive TSM or other backup servers, which previously had to be standalone due to their I/O requirements, to be part of a consolidated Flex strategy.
- eXFlash DIMMs: Flash-based, memory-like DIMMs that are installed natively on the memory channel itself, which represents an advantage over PCIe-based Flash deployments today. The bottom line here is extreme performance and the lowest latency possible. If your application is very performance sensitive from an IOPS perspective, this will greatly benefit you (think massive VM farms or large Oracle/SQL deployments).
- Compute books: Each of the 8 possible CPUs and its corresponding memory are modular and can be added and removed as needed. This means you can start small, if you’d like, and grow exactly as you need. We’re really interested to see what entry-level pricing looks like for the smallest configuration, but this strategy looks very promising for those clients that may want to grow into X6 slowly.
As we hear more specifics on the rollout of actual products, I’m sure we’ll be adding more thoughts to this blog!
Those of you that know us well know that we have a lab in our Houston office that serves as a demo and testing center with most of the same systems and software that we help our clients implement and maintain. In the world of IBM, this lab is referred to as a BPIC, or IBM Business Partner Innovation Center.
For those of our clients that aren’t local to Houston and don’t have the luxury of easily traveling to our lab/briefing center, we typically offer remote demos or even remote access, depending on the exact use case and requirements.
Lately, we’ve found ourselves doing a lot of Storwize V3700 demos, so I thought I’d put together a quick synopsis of one of these demos, as I’ve noticed that there aren’t that many good walkthroughs available online… Consider it “Part I” in a new “Mark III Lab Tour” Series of posts.
The Storwize V3700 is a modular storage system with an incredible price point that trades off capacity scalability (limited to 120 2.5” SAS drives) and some of the uniquely powerful features in the Storwize V7000 for that lower price. It runs essentially the same code as the enterprise-proven SVC and V7000, so if you can live with fewer drives per controller, you’re getting an incredible deal on a very mature, high-end code base that’s powering the Tier-1 storage environments of some of the largest enterprises on the planet.
Not surprisingly, the management experience of a V3700 is pretty much exactly the same as the SVC and V7000. I’ve included some screenshots below to give you a general idea of what the management experience (and our typical demo) looks like. You may need to actually click on each screenshot if you’re interested in seeing more detail.
Overview Screen (after you log in):
Built-in Tutorial Videos for each aspect of the system (storage pools, volumes, hosts, etc.):
Intuitive “bubbly” navigation pane on the left minimizes clicks and speeds up tasks:
System view shows how the V3700 is physically installed and how much capacity has been allocated overall (we only have 21 300GB drives in the base control enclosure):
Performance at a glance (not much going on in our lab right now):
View Storage Pools and MDisks (RAID Arrays) within each Storage Pool (we have (2) 8+2P RAID6 arrays in our single storage pool with 1 global spare):
More granular view of how individual physical drives map into these storage pools, if you wanted to sort and organize by physical drives:
Map the volume via FC, iSCSI, or SAS and you’re done (FC in this case):
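As a quick sanity check on the lab configuration shown in the storage pool screenshot, here is a small sketch of the usable-capacity math for two 8+2P RAID6 arrays of 300GB drives plus one global spare. This is an illustrative back-of-the-envelope calculation, not an IBM sizing tool, and it ignores formatting overhead and decimal/binary conversion.

```python
# Illustrative sketch (not an IBM tool): usable capacity of the lab's V3700
# config described above -- two 8+2P RAID6 arrays of 300GB drives + 1 spare.

DRIVE_GB = 300

def raid6_usable_gb(data_drives: int, drive_gb: int = DRIVE_GB) -> int:
    """In an 8+2P RAID6 array, only the 8 data drives contribute usable space."""
    return data_drives * drive_gb

arrays = 2
data_per_array = 8      # the "8" in 8+2P
parity_per_array = 2    # the "2P" (two parity drives per array)
spares = 1

total_drives = arrays * (data_per_array + parity_per_array) + spares
usable_gb = arrays * raid6_usable_gb(data_per_array)

print(total_drives)  # 21 drives, matching the base control enclosure
print(usable_gb)     # 4800 GB usable, before any compression or thin provisioning
```

The two prints line up with the screenshots: 21 physical drives in the enclosure, with roughly 4.8TB of raw usable space across the single storage pool.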
Of course, this was probably the most basic demo we can do, but it’s all I have time for this evening.
If you’d like to see a more detailed live demo via WebEx (or in person if you’re in Houston), including advanced features such as FlashCopy, Remote Replication, Easy Tier, System Migration, and more, feel free to reach out!
Not to steal the thunder of Chris (one of our resident TSM gurus) on this, but TSM v7.1 is now available for download (we’ve been installing it in our lab this week)!
We’ve been involved in this beta program with IBM for a number of months now so it feels good to be able to finally talk about it. Feel free to ask us about our unfiltered experiences or to see what it looks like in our environment!
As a recap, here are some of the top highlights:
- TSM VE Support for vSphere 5.5
- Instant restores of incremental full VM backups (!!)
- TSM Operations Center 7.1 now supports administrative commands; in other words, the new GUI now both reports and allows actions
- Exchange 2013 Support
- Item level recovery for Exchange and SQL from VM backups (no more separate TDP schedule needed!!)
- Database enhancements to support up to 10x performance boost for internal deduplication
- More reports, and a SQL column in the TSM database can be queried for VM backup success/failure
- Lots of GUI enhancements: updates to the BA client, Operations Center, and VE client, to name a few
IBM announced in early September the innovative new NeXtScale system, which is the culmination of years of experience with the iDataPlex and BladeCenter platforms. Much like how Flex Systems (Pure) have completely redefined density for virtualization and more general-purpose application workloads, NeXtScale is doing the same for HPC, Cloud, and compute-intensive workloads.
In short, if you need threads and raw CPU speed for your workload (and lots of it), NeXtScale provides a platform that’s currently unrivaled in terms of density and ease of integration (i.e., everything shows up ready to go). Normally I don’t like to refer to other blogs as a primary source of information, but IBM published a couple of pretty good blog posts that sum up the NeXtScale platform and announcement.
If you didn’t click on the links above, here are some highlights:
- NeXtScale uses the new Intel Ivy Bridge processors (E5-2600 v2)
- (12) nx360 M4 servers can fit into a 6U chassis, which essentially means that you can put 24 CPUs into a 6U form-factor and 168 CPUs into a 42U rack
- Support for high-bandwidth applications with PCIe 3.0 and an extra mezzanine slot on each server that can be used for FDR Infiniband or 10Gb Ethernet
In short, if you need a lot of cores/CPUs and high-bandwidth on the backend, you really can’t beat NeXtScale from a density and price-performance standpoint.
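The density figures in the highlights above are easy to verify: 12 two-socket nx360 M4 servers per 6U chassis, and seven 6U chassis per standard 42U rack. A quick arithmetic sketch:

```python
# Quick sanity check of the NeXtScale density figures quoted above.
SERVERS_PER_CHASSIS = 12   # nx360 M4 servers per 6U chassis
SOCKETS_PER_SERVER = 2     # two E5-2600 v2 CPUs per nx360 M4
CHASSIS_U = 6
RACK_U = 42

cpus_per_chassis = SERVERS_PER_CHASSIS * SOCKETS_PER_SERVER
chassis_per_rack = RACK_U // CHASSIS_U
cpus_per_rack = cpus_per_chassis * chassis_per_rack

print(cpus_per_chassis)  # 24 CPUs in a 6U form-factor
print(chassis_per_rack)  # 7 chassis per 42U rack
print(cpus_per_rack)     # 168 CPUs in a full rack
```

That works out to 24 CPUs per chassis and 168 per rack, matching the announcement.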
With that said, as most of you know, IBM is a platinum member of the OpenStack Foundation and has fully embraced OpenStack as its open standards strategy for cloud. OpenStack has numerous projects and initiatives within its broad scope, but one of the most popular projects is OpenStack Compute (codenamed “Nova”). Nova enables users to provision and manage large networks of virtual machines across various hypervisors (KVM, XenServer, PowerVM, etc.). These VMs can be easily spun up, spun down, and manipulated through a centralized, open source Nova cloud controller.
In its current form, most Nova users are utilizing OpenStack for “newer” web-centric applications, and not so much older legacy applications, which typically do a little bit better in “traditional” IT environments. Most applications that thrive with Nova tend to need a lot of high-speed threads to run optimally (think web applications), and will lean on Nova to easily spin up and spin down these VMs as they are needed. Accordingly, OpenStack Compute (Nova) will really thrive in a NeXtScale environment, where there is some degree of scale needed. Additionally, NeXtScale can easily be ordered pre-configured and pre-integrated, which really helps to eliminate one of the biggest pains of maintaining and growing an OpenStack-powered compute farm.
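The spin-up/spin-down pattern described above can be sketched with a toy pool manager. To be clear, this is NOT the real python-novaclient API; `ToyNovaPool` and its methods are hypothetical stand-ins that just illustrate the elastic pattern Nova enables (the comments note where real `nova boot`/`nova delete` calls would go).

```python
# A toy sketch (not the real OpenStack Nova API) of the elastic pattern
# described above: spin worker VMs up as load rises and down as it falls.

class ToyNovaPool:
    """Hypothetical stand-in for an OpenStack Nova-backed VM pool."""

    def __init__(self):
        self.vms = []

    def scale_to(self, demand_units: int, vms_per_unit: int = 2) -> int:
        """Keep enough VMs running for the current demand; returns pool size."""
        target = demand_units * vms_per_unit
        while len(self.vms) < target:
            self.vms.append(f"web-vm-{len(self.vms)}")  # where "nova boot" would run
        while len(self.vms) > target:
            self.vms.pop()                              # where "nova delete" would run
        return len(self.vms)

pool = ToyNovaPool()
print(pool.scale_to(3))  # 6 VMs spun up for peak load
print(pool.scale_to(1))  # 2 VMs left after scaling back down
```

On a NeXtScale farm, the point is that `scale_to` has plenty of dense, pre-integrated compute headroom to grow into.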
Bottom line? NeXtScale and OpenStack are a really good match for one another, giving OpenStack users something unique from IBM from an x86 perspective that hasn’t been there in the past.
I thought I’d start off our blog with a post about storage, which continues to be one of our top areas of interest from our clients. If you are even remotely tied into the IBM storage ecosystem, you know all about Real-time Compression by now. If you don’t, Real-time Compression (or RTC for short) is a unique IBM technology, backed by 35+ patents, that allows a user to compress primary storage with no performance impact to hosts and applications.
Because there is no performance impact, it allows users to use RTC technology on the most performance intensive applications and, in doing so, use it with the most expensive backend storage available within the enterprise (“Tier 1”). By utilizing RTC with the most expensive backend storage, you really have the ability to build an incredible financial case to contain capital spending on physical storage, all without impacting performance (which is the key). Being in this industry, we’re so used to trade-offs when discussing technology that RTC is always somewhat of a shocker when presented to someone for the first time.
What kind of data can RTC be used to compress?
You can deploy RTC as a NAS-based appliance to compress unstructured, file-based data, OR you can use RTC natively with the IBM SAN Volume Controller virtualization engine or Storwize V7000 to compress block-based storage. As you’re probably aware, the fact that SVC/V7000 can virtualize external storage means you can use RTC with almost any kind of existing SAN storage you want, as long as it’s on the SVC/V7000’s large interoperability matrix.
What kind of applications will generate the most compelling financial case?
In my experience, RTC will almost always bring you a positive ROI, but the most compelling cases are generated when using RTC with highly transactional applications, like Oracle, VMware, SQL, etc. This is because the storage associated with these applications is typically the most expensive within the overall storage infrastructure, so the cost case will look that much more appealing.
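The cost case above is straightforward to model: if your data compresses by some percentage, you only buy the compressed footprint of Tier-1 storage. The $/TB figure and compression ratio below are illustrative assumptions for the sketch, not IBM list pricing or a guaranteed compression result.

```python
# Back-of-the-envelope ROI sketch for RTC on Tier-1 storage.
# The compression ratio and $/TB below are illustrative assumptions only.

def rtc_savings(raw_tb_needed: float, compression_pct: float,
                cost_per_tb: float) -> float:
    """Capital avoided by purchasing only the compressed physical footprint."""
    physical_tb = raw_tb_needed * (1 - compression_pct)
    return (raw_tb_needed - physical_tb) * cost_per_tb

# e.g. 100TB of Oracle/VMware data, 60% compression, $5,000 per usable Tier-1 TB
print(round(rtc_savings(100, 0.60, 5000)))  # 300000 -- roughly $300K deferred
```

The more expensive the backend tier, the larger the avoided spend, which is exactly why the transactional Tier-1 workloads make the most compelling case.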
Can I project how well my data will compress before I actually purchase RTC?
Yes! You can use the Comprestimator to estimate how well your data will compress (within a very small margin of error) before you actually decide to invest in RTC. This is something that we recommend all our potential RTC clients run (with our assistance, if requested), just to make sure that they are happy with the results before jumping in.
Comprestimator Utility Download:
Below, I’ve pasted in a real-world Comprestimator output that was run against an actual Oracle on RHEL deployment on the IBM Storwize V7000 storage system. This Oracle deployment had not yet been compressed, and we were looking to understand what kind of impact turning on RTC would make. As you can see, Comprestimator projected that there would likely be greater than 80% capacity savings, which was well beyond our conservative expectations!
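For reference, a savings percentage like the one Comprestimator reports is just the compressed size relative to the original. The volume sizes below are hypothetical numbers chosen to illustrate the arithmetic, not figures from the actual output.

```python
# Hypothetical numbers illustrating how a Comprestimator-style savings
# percentage is derived: savings % = (1 - compressed / original) * 100.

def capacity_savings_pct(original_gb: float, compressed_gb: float) -> float:
    """Percent of capacity saved if the data lands at compressed_gb."""
    return round((1 - compressed_gb / original_gb) * 100, 1)

# e.g. a 500GB Oracle volume projected to store in roughly 95GB
print(capacity_savings_pct(500, 95))  # 81.0 -> "greater than 80% savings"
```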