
StorageIO Out and About Update - VMworld 2014

 

Here is a quick video montage (or mash-up if you prefer) that Cory Peden (aka the Server and StorageIO Intern @Studentof_IT) put together using some video that we recorded while at VMworld 2014 in San Francisco. In this YouTube video we take a quick tour around the expo hall to see who as well as what we run into while out and about.

 

VMworld 2014 StorageIO Update

 

For those of you who were at VMworld 2014 this will give you a quick déjà vu of the sights and sounds, while those who were not there can see what you missed and plan for next year. Watch for appearances from Gina Minks (@Gminks) aka Gina Rosenthal (of BackupU) and Michael (not Dell) of Dell Data Protection, and Luigi Danakos (@Nerdblurt) of HP Data Protection, who lost his voice (tweet Luigi if you can help him find it). With Luigi we were able to get in a quick game of buzzword bingo before catching up with Marc Farley (@Gofarley) and John Howarth of Quaddra Software. Marc and John talk about their new solution from Quaddra, which will enable searching and discovering data across different storage systems and technologies.

 

Other visits include a quick look at an EVO:RAIL from Dell, along with a Docker for Smarties overview with Nathan LeClaire (@upthecyberpunks) of Docker (click here to watch the extended interview with Nathan).

Docker for smarties

 

Check out the conversation with Max Kolomyeytsev of StarWind Software (@starwindsan) before we get interrupted by a salesperson. During our walkabout, we also bump into Mark Peters (@englishmdp) of ESG, facing off video camera to video camera.

 

Watch for other things, including rack cabinets that look like compute servers yet have large video screens so they can be "software defined" for different demo purposes.

virtual software defined server

 

Watch for more Server and StorageIO Industry Trend Perspective podcasts, videos and out-and-about updates soon; meanwhile, check out others here.

 

Ok, nuff said (for now)

 

Cheers gs


Lenovo ThinkServer TD340 Server and StorageIO lab Review

Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIO lab review looks at the Lenovo ThinkServer TD340 Tower Server, which besides having a larger model number than the TS140 also has a lot more capability (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options; however, at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

Lenovo TD340

The TD340 is one of the servers Lenovo had prior to its acquisition of IBM's x86 server business, which you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

The Lenovo TD340 Experience

Let's start with the overall experience, which was very easy other than deciding what make and model to try. The process went from first answering some questions to get things moving, to agreeing to keep the equipment safe, secure and insured as well as not damaging anything. Part of the process also involved answering some configuration-related questions, and shortly thereafter a large box from Lenovo arrived.

TD340 is ready for use
TD340 with keyboard and mouse (monitor not included)

 

One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment, similar to what I did with the TS140, as Lenovo claimed it would be quiet enough to do so. I was not surprised; indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. Being so quiet makes the TD340 a good fit for environments that need a server in an office setting as opposed to a server or networking room.

Welcome to the TD340
Lenovo ThinkServer Setup

 

TD340 Setup
  Lenovo TD340 as tested in BIOS setup; note the dual Intel Xeon E5-2420 v2 processors

TD340 as tested

TD340 selfie of what's inside
  TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

 

TD340 disk drive bays
  TD340 internal drive hot-swap bays

Speeds and Feeds

The TD340 that I tested was machine type 7087, model 002RUX, which included 4 x 16GB DIMMs and had both processor sockets occupied.

 

You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

  • Operating system support includes various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
  • Form factor is a 5U tower, with weight starting at 62 pounds depending on how configured
  • Processors include support for up to two (2) Intel Xeon E5-2400 v2 series
  • Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB
  • Expansion slots vary depending on whether one or two CPU sockets are occupied. With a single CPU installed: 1 x PCIe Gen3 FH/HL (x8 mechanical, x4 electrical), 1 x PCIe Gen3 FH/HL (x16 mechanical, x16 electrical) and a single 32-bit/33 MHz FH/HL PCI slot. With two CPUs installed, extra PCIe slots are enabled. These include: 1 x PCIe Gen3 FH/HL (x8 mechanical, x4 electrical), 1 x PCIe Gen3 FH/HL (x16 mechanical, x16 electrical), 3 x PCIe Gen3 FH/HL (x8 mechanical, x8 electrical) and a single 5V 32-bit/33 MHz FH/HL PCI slot
  • Two 5.25” media bays for CD or DVD drives or other devices
  • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
  • Internal storage varies depending on model, including up to eight (8) x 3.5” hot-swap drives or 16 x 2.5” hot-swap drives (HDDs or SSDs); storage space capacity varies by the type and size of the drives being used
  • Networking interfaces include two (2) x GbE
  • Power supply options include a single 625 watt or 800 watt, or 1+1 redundant hot-swap 800 watt; cooling is via five fixed fans
  • Management tools include ThinkServer Management Module and diagnostics

Lenovo TD340

What Did I Do with the TD340

After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

 

Some of those activities included using Windows Server 2012 Essentials along with associated admin activities, as well as installing VMware ESXi 5.5.

TD340 is ready for use
TD340 with keyboard and mouse (monitor not included)

What I liked

Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. On the other hand, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice-to-have consideration ;). Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must-have, along with multi-core processors (pretty much standard these days) and Intel VT and EPT support for VMware (these were disabled in the BIOS, however that was an easy fix).

What I did not like

The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. The adapters worked; however, I was not able to get the full performance out of them compared to other systems, including my slower Lenovo TS140s.

Summary

Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation, or had I been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise, Lenovo service and support helped to improve the experience. On the other hand, if you are simply going to use the TD340 in a normal out-of-the-box mode without customizing it to add your own adapters or install your own operating systems or hypervisors (beyond those supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

 

Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

 

Would I buy a TD340 for myself? Maybe, if that is the size and type of system I need; however, I have my eye on something bigger. On the other hand, for those who need a good value server for an SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.


Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands-on test experience, including covering the cost of shipping both ways (the unit should now be back in your possession). This is not a sponsored post: Lenovo is not paying for it (they did loan the server and cover two-way shipping), nor am I paying them; however, I have bought some of their servers in the past for the StorageIO Lab environment as companions to some Dell and HP servers that I have also purchased.

 

Ok, nuff said

Cheers
  Gs

What does server storage I/O scaling mean to you?

 

Scaling means different things to various people depending on the context or what it is referring to.

 

For example, scaling can mean having or doing more of something (or less), as well as referring to how more, or less, of something is implemented.

 

Scaling occurs across a few different dimensions and ways:

  • Application workload attributes - Performance, Availability, Capacity, Economics (PACE)
  • Stability without compromise or increased complexity
  • Dimension and direction - Scaling-up (vertical), scaling-out (horizontal), scaling-down

Scaling PACE - Performance Availability Capacity Economics

Often I hear people talk about scaling only in the context of space capacity. However, there are other aspects including performance and availability, as well as scaling-up or scaling-out. Scaling from an application workload perspective involves four main themes: performance, availability, capacity and economics (as well as energy).

  • Performance - Transactions, IOPS, bandwidth, response time, errors, quality of service
  • Availability - Accessibility, durability, reliability, HA, BC, DR, backup/restore (BR), data protection, security
  • Capacity - Space to store information or place for workloads to run on a server, connectivity ports for networks
  • Economics - Capital and operating expenses; buy, rent, lease, subscription

Scaling with Stability

The stability item above should be thought of more as a by-product, result or goal of implementing scaling. Scaling should not result in a compromise of some other attribute, such as increasing performance at the loss of capacity, or with increased complexity. Scaling with stability also means that as you scale in some direction, or across some attribute (e.g. PACE), there should not be a corresponding increase in management complexity, or loss of performance and availability. To use a popular buzz-term, scaling with stability means performance, availability, capacity and economics should scale linearly with their capabilities, or perhaps cost less.
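To make that linear-scaling goal concrete, here is a minimal sketch (Python, with hypothetical example numbers of my own, not measurements) of how you might gauge whether a scaled configuration is holding the line, by comparing measured aggregate throughput against an ideal linear projection from a single unit:

```python
# Minimal sketch: gauge how close scaling is to linear (hypothetical numbers).
# An efficiency near 1.0 suggests scaling with stability; well below 1.0
# suggests complexity or contention is eating the added resources.

def scaling_efficiency(baseline_throughput, units, measured_throughput):
    """Compare measured aggregate throughput to an ideal linear scale-out."""
    ideal = baseline_throughput * units
    return measured_throughput / ideal

# Example: one server does 10,000 transactions/sec; four servers together
# measure 34,000 TPS instead of the ideal 40,000 TPS.
eff = scaling_efficiency(baseline_throughput=10_000, units=4,
                         measured_throughput=34_000)
print(f"Scaling efficiency: {eff:.0%}")  # -> 85%
```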

Scaling directions: Scaling-up, scaling-down, scaling-out

server and storage i/o scale options

 

Some examples of scaling in different directions include:

  • Scaling-up (vertical scaling with bigger or faster)
  • Scaling-down (vertical scaling with less)
  • Scaling-out (horizontal scaling with more of whatever is being scaled)
  • Scaling-up and out (combines vertical and horizontal)

 

Of course you can combine the above in various ways, such as the example of scaling up and out, as well as apply different names and nomenclature to suit your needs or preferences. The following is a closer look at the above with some simple examples.

server and storage i/o scale up
Example of scaling up (vertically)

 

server and storage i/o scale down
Example of scaling-down (e.g. for smaller scenarios)

server and storage i/o scale out
Example of scaling-out (horizontally)

 

server and storage i/o scale out
Example of scaling-out and up (horizontally and vertically)

Summary and what this means

There are many aspects to scaling, as well as side-effects or impacts as a result of scaling.

 

Scaling can refer to different workload attributes as well as how to support those applications.

 

Regardless of what you view scaling as meaning, keep in mind the context of where and when it is used, and that others might have a different view of scale.

 

Ok, nuff said (for now)...

Cheers gs

Is Computer Data Storage Complex? It Depends

 

I often get asked, or told, that computer data storage is complex, with so many options to choose from and apples-to-oranges comparisons, among other things.

 

On a recent trip to Europe, while being interviewed by a Dutch journalist in Nijkerk, Holland at a Brouwer Storage Consultancy event I was presenting at, the question came up again about storage complexity. Btw, you can read the article on data storage industry trends here (it's in Dutch).

 

I hesitated and thought for a moment, and responded that in some ways it's not as complex as some make it seem, although there is more to data storage than just cost per capacity. As I usually do when asked or told how complex data storage is, my response is a mixed one: yes, storage, data and information infrastructures are complex; however, let's put it in perspective: is storage any more complex than other things?

 

Our conversation then evolved with an example: I find shopping for an automobile complex unless I know exactly what I'm looking for. After all, there are cars, trucks and SUVs; used or new; buy or lease; different manufacturers, makes and models; speeds, cargo capacities, management tools and interfaces, not to mention metrics and fuel.

 

This is where I usually mention how complex, IMHO, buying a new car or vehicle is with all the different options, that is unless you know what you want, or know your selection criteria and options. Same with selecting a new laptop computer, tablet or smart phone, not to mention a long list of other things that to outsiders can also seem complex, intimidating or overwhelming. However, let's take a step back to look at storage, then return to compare some other things that may be confusing to those who are not focused on them.

Stepping back looking at storage

Similar to other technologies, there are different types of data storage to meet various needs from performance to space capacity as well as support various forms of scaling.

server and storage I/O flow
Server and storage I/O fundamentals

Storage options
Various types of storage devices including HDDs, SSHDs/HHDDs and SSDs

Storage type options
Various types of storage devices

Storage I/O decision making
Storage options, block, file, object, ssd, hdd, primary, secondary, local and cloud

Shopping for other things can be complex

During my return trip to the US from the Dutch event, I had a layover at London Heathrow (LHR), and walking the concourse it occurred to me that while there are complexities involved with different technologies, including storage, data and information infrastructures, there are plenty of other complexities all around us.

 

Same thing with shoes: so many different options, not to mention cell phones, laptops and tablets, or how about TVs?

 

I want to go on a trip: do I book based on the lowest cost for airfare, then hotel and car rental, or do I purchase a package? For the airfare, is it the cheapest fare, the one that takes all day to get from point A to point B via plane changes at points C, D and E, not to mention paying extra fees, vs. paying a higher price for a direct flight with extra amenities?

 

Getting hungry, so what to do for dinner: what type of cuisine or food?

Hand Baggage options
How about a new handbag or perhaps shoes?

Baggage options
How about a new backpack, brief case or luggage?

Beverage options
What to drink for a beverage, so many options unless you know what you want.

PDA options
Complexity of choosing what cell phone, PDA or other electronics

What to read options
How about what to read including print vs. online accessible content?

How about auto parts complexity

Once I got home from my European trip I had some mechanical things to tend to including replacing some spark plugs.

Auto part options
How about automobile parts from tires, to windshield wiper blades to spark plugs?

 

Sure, if you know the exact part number, and assuming that part number has not changed, then you can start shopping for the part. However, recently I had a part number based on a vehicle serial number (e.g. make, model, year, etc.) only to receive the wrong part. Sure, the part numbers were correct; however, somewhere along the line the manufacturer made a change and not all downstream vendors knew about it. Granted, I eventually received the correct part.

 

Back to tech and data infrastructures

Ok, hopefully you got the point from the above examples, among many others, that we live in a world full of options and those options can bring complexity.

 

What type of network or server? How about operating system,  browser, database, programming or development language as there are different  needs and options?

 

Sure there are many storage options as not everything is the  same.

 

Likewise, while there can be a simple answer, with a tendency to prescribe what to use before the question is understood (perhaps due to a preference) or explained, the best or most applicable answer may be "it depends." However, saying "it depends" may seem complex to those who just want a simple answer.

Closing Comments

So is storage more complex than other technologies, tools, products or services?

 

What say you?

 

Ok, nuff said, for now...

 

Cheers
  Gs

Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

 

This is the second post of a two part series, read the first post here.

 

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD's as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

 

The Server Storage I/O Blender Effect Bottleneck

The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.
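To illustrate what "write-through" means in practice, here is a minimal toy sketch (my own illustration in Python, not Virtunet's actual implementation): reads are served from and fill the fast tier, while every write also goes straight to the backing store, so the cache never holds data the backing store lacks and can safely accelerate reads for shared storage:

```python
from collections import OrderedDict

class WriteThroughReadCache:
    """Toy write-through read cache: reads are served from (and fill) the
    cache; writes always go to the backing store and refresh the cache,
    so the cache never holds data the backing store does not."""

    def __init__(self, backing_store, capacity=1024):
        self.backing = backing_store   # e.g. dict of block -> data (the HDD)
        self.capacity = capacity       # max cached blocks (LRU eviction)
        self.cache = OrderedDict()     # the SSD tier in this toy model

    def read(self, block):
        if block in self.cache:        # cache hit: fast path (SSD)
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]     # cache miss: slow path (HDD)
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backing[block] = data     # write-through: backing store first
        self._insert(block, data)      # keep the cache coherent with it

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
```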

traditional server storage I/O
Non-virtualized servers with dedicated storage and I/O paths.

 

Aggregation causes aggravation with I/O bottlenecks because of consolidation using server virtualization. The following figure shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

virtual server storage I/O blender
Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

 

The figure above shows aggregation causing aggravation with the result being I/O bottlenecks as various applications performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD in addition to as a target device for storing data is as a cache to cut bottlenecks for traditional spinning HDD.
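A quick way to see the blender effect for yourself is to simulate it (a sketch of my own with made-up numbers, not data from these proof-points): each VM below issues a perfectly sequential stream of block addresses, yet once the hypervisor interleaves them, most back-to-back requests reaching the physical device are no longer sequential:

```python
import random

# Each VM issues a perfectly sequential stream within its own disk region.
vm_streams = {vm: iter(range(vm * 1_000_000, vm * 1_000_000 + 500))
              for vm in range(8)}

mixed = []                      # request order as seen by the physical device
active = list(vm_streams)
while active:
    vm = random.choice(active)  # hypervisor services whichever VM is up next
    try:
        mixed.append(next(vm_streams[vm]))
    except StopIteration:
        active.remove(vm)

# Fraction of back-to-back requests still sequential at the device.
seq = sum(1 for a, b in zip(mixed, mixed[1:]) if b == a + 1)
print(f"Sequential at the device: {seq / (len(mixed) - 1):.0%}")
# Eight fully sequential guest streams blend to roughly 1-in-8 (~12%).
```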

 

In the following figure a solution is shown introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravation.

Creating a server storage I/O blender bottleneck

Addressing the VMware Server Storage I/O blender with cache

Addressing server storage I/O blender and other bottlenecks

For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs, including a SQL Server 2012 database and an Exchange server, shared the same underlying storage I/O infrastructure including HDDs. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

Server storage I/O with virtualization proof-point configuration topology

 

The following figure shows two sets of proof-points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache with the balance used as a regular storage target, should you want to do so.

 

If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B and TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

storage I/O blender solved
Solving the VMware Server Storage I/O blender with cache

 

The cached and non-cached mixed workloads shown above prove how an SSD-based read cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization. For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

 

The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours, with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database. For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

 

During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read-cache enabled. All caching was disabled for the non-cached workloads. Note that this was only a read cache, which has the side benefit of off-loading read activity, enabling the HDD to focus on writes or read-ahead. Also note that the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data; there was also the combined space and thus cache impact of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDDs.
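To put rough, hypothetical numbers on that read-offload side benefit (these are assumed ratios of my own, not measurements from the proof-points): if reads make up 70% of the mix and the cache absorbs 80% of them, the HDD sees well under half of its former request load, leaving more of its limited IOPS for writes:

```python
# Hypothetical mix: illustrate how a read cache off-loads a shared HDD.
read_fraction = 0.70     # assumed share of I/Os that are reads
read_hit_rate = 0.80     # assumed share of reads served from the SSD cache

# Share of all I/Os that still reach the HDD (missed reads plus all writes).
hdd_load = read_fraction * (1 - read_hit_rate) + (1 - read_fraction)
print(f"HDD sees {hdd_load:.0%} of the original request load")  # -> 44%
```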

 

Seagate 6TB 12Gbs SAS high-capacity HDD

While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However, the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However, for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

 

This opens the door for a great combination: leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available in either 12Gbs SAS, 6Gbs SAS or 6Gbs SATA, also has enhanced durability with a read bit error rate of 10^15 (e.g. one nonrecoverable read error per 10^15 bits read on average) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large-capacity HDDs and having them fail, make sure you go with those that have a high read bit error rate specification and a low AFR, which are more common with enterprise class vs. lower cost commodity or workstation drives. Note that these high-capacity enterprise HDDs are also available with Self-Encrypting Drive (SED) options.
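To see why that bit error rate specification matters at this capacity, here is my own back-of-the-envelope arithmetic (a worked example, not figures from the Seagate spec sheet): a full read of a 6TB drive touches about 4.8 x 10^13 bits, so the expected number of unrecoverable errors per full-drive read is simply that divided by the rate:

```python
# Expected unrecoverable read errors for one full read of a 6TB drive.
drive_bits = 6e12 * 8                  # 6TB is roughly 4.8e13 bits
for ber_exponent in (15, 14):          # enterprise vs. commodity-class spec
    expected = drive_bits / 10**ber_exponent
    print(f"1 error per 10^{ber_exponent} bits read: "
          f"~{expected:.2f} expected errors per full-drive read")
# 10^15 -> ~0.05 (a few percent chance); 10^14 -> ~0.48 (nearly even odds)
```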

Summary

Read more in this StorageIO Industry Trends and Perspective (ITP) white paper, compliments of Seagate 1200 12Gbs SAS SSDs, and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

 

Key themes to keep in mind include:

  • Aggregation can cause aggravation, which SSD can alleviate
  • A relatively small amount of flash SSD in the right place can go a long way
  • Fast flash storage needs fast server storage I/O access hardware and software
  • Locality of reference with data close to applications is a performance enabler
  • Flash SSD everywhere does not mean everything has to be SSD based
  • Having some amount of flash in different places is important for flash everywhere
  • Different applications have various performance characteristics
  • SSD as a storage device or persistent cache can speed up IOPS and bandwidth

 

Flash and SSD are in your future; this comes back to the questions of how much flash SSD you need, along with where to put it, how to use it and when.

 

Ok, nuff said (for now).

Cheers gs

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

 

This is the first post of a two part series, read the second post here.

 

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD's as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

 

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead, the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, is like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative, aka hybrid, way. For example, NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

Seagate 1200 SSD
Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

 

Another place where NAND flash can be found, and where it complements SSD devices, is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost. The Enterprise TurboBoost drives (view the companion StorageIO Lab TurboBoost review white paper here) were previously known as Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD). Read more about TurboBoost here and here.

 

The best server and storage I/O is the one you do not have to do

Keep in mind that the best server or storage I/O is the one that you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible; however, only a finite amount of storage memory can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage, balancing performance and availability with cost and architectural limits.

 

Also shown is locality of reference, which refers to how close data is to where it is being used, and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. Now if you can afford it, install as much DRAM along with flash storage as possible; however, if you are like most organizations with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.

flash cache locality of reference
Server memory storage I/O hierarchy, locality of reference
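That locality-of-reference benefit can be expressed with the classic average access time relationship; here is a sketch using assumed, illustrative device times rather than measured numbers from these proof-points:

```latex
% Average access time with cache hit ratio h:
T_{avg} = h \cdot T_{cache} + (1 - h) \cdot T_{backing}

% Assumed illustrative times: T_cache = 0.1 ms (flash SSD),
% T_backing = 8 ms (7.2K HDD), hit ratio h = 0.9:
T_{avg} = 0.9 (0.1\,\mathrm{ms}) + 0.1 (8\,\mathrm{ms}) = 0.89\,\mathrm{ms}
```

In other words, if nine out of ten accesses are resolved in the cache close to the application, the average access time drops by roughly 9x versus going to the HDD every time, which is why a relatively small, well-placed cache can punch far above its capacity.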

 

Seagate 1200 12Gbs Enterprise SAS SSD's

Back to the Seagate 1200 12Gbs Enterprise SAS SSD, which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise class SSDs and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, and traditional Little and Big Data infrastructure environments.

Seagate 1200 Enterprise SSD

This includes providing proof-points running various workloads, including database TPC-B, TPC-E and Microsoft Exchange, in the StorageIO Labs along with cache software, comparing SSD, SSHD and different HDDs including 12Gbs SAS 6TB near-line high-capacity drives.

 

Seagate 1200 Enterprise SSD Proof Points

The proof-points in this white paper are from an application-focused perspective, representing more of an end-to-end real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench, among others, for various I/O sizes, mixed, random, sequential, reads, writes, along with "hot-band" across different numbers of threads (concurrent users). "Hot-band" is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about various server and storage I/O benchmarking tools and techniques here.
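For a feel of what those building-block tools do under the covers, here is a toy random-read microbenchmark (a simplified Python stand-in of my own for tools like Iometer or vdbench; the file path is an assumption, and unlike the real tools it does not bypass the OS page cache or manage queue depth):

```python
import os, random, time

PATH = "/tmp/testfile"     # assumed pre-created test file (e.g. a few GB)
IO_SIZE = 8 * 1024         # 8KB transfers, a common small-block size
DURATION = 10              # seconds to run

fd = os.open(PATH, os.O_RDONLY)   # note: page cache will inflate results;
size = os.fstat(fd).st_size       # real tools bypass it with O_DIRECT
ops = 0
start = time.time()
while time.time() - start < DURATION:
    offset = random.randrange(0, size - IO_SIZE)
    os.pread(fd, IO_SIZE, offset)      # one random 8KB read
    ops += 1
os.close(fd)

elapsed = time.time() - start
print(f"{ops / elapsed:,.0f} IOPS, {ops * IO_SIZE / elapsed / 1e6:.1f} MB/s")
```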

 

For the following series of proof-points (TPC-B, TPC-E and Exchange), a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi along with guest virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of the TPC workloads as application transaction requesters to drive the SQL Server database and resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) supporting applications including database, email among others. For the proof-point scenarios, the SUT along with the storage system device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

Server Storage I/O config
Server Storage I/O configuration for proof-points

Microsoft Exchange Email proof-point configuration

For this proof-point, Microsoft Jetstress Exchange performance workloads were placed (e.g. Exchange Database - EDB file) on each of the different devices under test, with various metrics shown including activity rates and response times for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

 

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and 3TB 7.2K SATA HDD. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft SBS2011 Service Pack 1 64-bit. Guest VM (VMware vSphere 5.5) was on an SSD-based datastore on a physical machine (host) with 14GB DRAM, quad-core CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, running Jetstress 2010. All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD-based datastore from the devices being tested. Log file I/Os were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

Microsoft Exchange VMware SSD performance
  Microsoft Exchange proof-points comparing various storage devices

TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

SSDs are a good fit for both transactional database activity with reads and writes, as well as query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof-points of SSD capabilities for database activity. In addition to supporting database table files and objects, along with transaction journal logs, other uses include metadata, import/export or other high-I/O and write-intensive scenarios. Two database workload profiles were tested: batch update (write-intensive) and transactional. Activity involved running Transaction Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transactional/OLTP, simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

 

The TPC-B (write-intensive) results below show how the TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left, where lower is better). Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. traditional HDDs.

 

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad-core CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

 

VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

TPC-B sql server database SSD performance
TPC-B SQL Server database proof-points comparing various storage devices

TPC-E (Database, Financial Trading) proof-point configuration

The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry-standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users (10, 20, 50 and 100) to determine transactions per second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across the different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown on the top in the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the other traditional HDDs.

 

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad-core CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

 

VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

TPC-E sql server database SSD performance
TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

 

Continue reading part two of this two-part series here, including the virtual server storage I/O blender effect and solution.

 

Ok, nuff said (for now).

Cheers gs