Pure Storage to present at Auckland VMUG

I'm happy to say that I will be presenting alongside Craig Waters (@CSWATERS) at the next Auckland VMware User Group (VMUG) conference.

http://www.vmug.com/p/cm/ld/fid=10438

Craig is the very vocal leader of the Melbourne VMUG and will be able to provide a very insightful view into where VMware is going with version 6, and how a data-reducing All Flash Array provides the best location for your datastores.


More information to follow on the agenda.


#PAINTITORANGE

Pure Storage 2015

I have just returned from our 2015 sales kick-off in Santa Clara and would like to say I am pumped.

We had four days running through, among many other things, technology updates.

What makes a great company great?
It’s the willingness of the management to openly share with their staff information that is pertinent to their success.
Pure Storage is a company like no other I have worked for. They share so much information that it is easier for us to make informed decisions, and it allows us to better do what is right for our partners, customers and, eventually, shareholders.

Our roadmap is simply amazing and I look forward to sharing what's happening with people as soon as I can.

We had a lot of fun at #PureSKO2015 and I can say it's going to be a great year.


To Cache or not to Cache?

Cache can be your best friend or your worst nightmare. What do I mean by that?

There are several options today for how you choose to implement cache.

  • Server Side (think Fusion IO cards or similar)
  • Storage Read Cache
  • Storage Write Cache

All types of caching are susceptible to sizing. Once the cache is full, you still need to go to disk.
Hitachi found this with TrueCopy, their old enterprise replication product. As replication requirements got larger, arrays became prone to cache punctures: during a network outage, or if the replication links were not sized correctly, the cache that was holding the outstanding replication writes would fill up.
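The sizing point is easy to show with a toy model (a deliberately simplistic sketch, not any vendor's implementation): give a cache a fixed number of slots, cycle a working set through it that is bigger than the cache, and every read still ends up going to disk.

```python
from collections import OrderedDict

# Toy LRU read cache: a fixed number of slots, nothing more.
class ToyCache:
    def __init__(self, slots):
        self.slots = slots
        self.data = OrderedDict()
        self.hits = self.disk_reads = 0

    def read(self, block):
        if block in self.data:
            self.hits += 1
            self.data.move_to_end(block)        # keep recently used blocks hot
        else:
            self.disk_reads += 1                # cache can't help: go to disk
            if len(self.data) >= self.slots:
                self.data.popitem(last=False)   # evict the least recently used
            self.data[block] = True

cache = ToyCache(slots=1000)
for _ in range(2):                   # two passes over a 5,000-block working set
    for block in range(5000):
        cache.read(block)
print(cache.hits, cache.disk_reads)  # 0 hits, 10000 disk reads: LRU just thrashes
```

Give the toy cache 5,000 or more slots and the second pass is all hits; a working set even one block bigger and it gives you nothing. Real caches are much smarter than this, but the cliff when the working set outgrows the cache is very real.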

Server Side:

In a shared storage environment you will have many hosts connecting to one or more storage controllers. These may boot from the storage controller, or they may have internal disk but then mount LUNs or volumes from the shared storage.

With server-side cache, you typically install either one or more PCIe DRAM-based cards or some SSDs within each physical host.

Depending on what you choose to implement, these will be read- and/or write-capable and can be tuned quite well to your requirements.

Server Side cache has some challenges however.
Imagine you have a large VM farm with multiple varied workloads which is typical to most enterprises.
If you don't add the same amount of cache to each server, then in the event of a host failure the workloads pinned to the accelerated (cached) hosts may not be able to provide the required level of service once they are moved to a non-accelerated host (by DRS, for example).

Server-side cache is a good fit if you want to pin specific workloads, such as VDI or a particular database, to specific hosts.

Storage Read Cache:

At NetApp we sold a lot of this, because until NetApp had Hybrid Aggregates this was the best way to accelerate VM workloads. If you incorporate data reduction techniques like de-duplication you can use a relatively small amount of cache to accelerate a lot of servers.

The problem is, it's a read cache, and it was also quite small.

With that said, however, NetApp's Flash Cache was great, as it could be tuned for things like metadata lookups. That lets you accelerate tasks such as indexing large-block video, something you wouldn't normally associate with cache (it is the video files themselves, not the index, that don't benefit).

NetApp had a great product called Flash Accel that would coordinate between the server-side and storage-side caches to determine the best places to accelerate, but it was sadly pulled from the market.

The downside of most read caches like Flash Cache is that they use volatile, non-persistent memory to store the cached reads, so if you happen to restart your controller you have to re-warm the cache. If time is of the essence, that can be a big problem.

Read Caches are also almost always best used for small block random IO.

Storage Write Cache:

This is where it gets interesting, as a write cache is almost always also used as a read cache, and it most certainly has to be non-volatile so it can survive a power outage, so it will likely be an SSD.

Write caches will also typically be used for small-block random overwrites (blocks that have recently been written to HDD), so not all write IO will be accelerated.

Write caches are also typically larger than read caches and a lot more flexible; however, they still suffer from sizing limits.

The problem with arrays that rely on a write cache for acceleration, like some of the hybrids out there, is that once that cache is full you're back to disk, and if you have a tendency to use slow disk like SATA you go from hero to zero very quickly.

If you have non-uniform IO sizes that don't fit nicely into the stripe size, you can rapidly eat up cache and be down to disk before you know it.
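As a rough illustration of that point, the sketch below assumes, purely for the sake of the example, a cache that allocates space in fixed stripe-sized units, so an IO that doesn't fill a stripe still consumes a whole unit:

```python
import math

STRIPE_KB = 16  # assumed cache allocation unit for this illustration

def cache_consumed_kb(io_sizes_kb):
    # Every IO is rounded up to a whole number of stripe-sized units.
    return sum(math.ceil(size / STRIPE_KB) * STRIPE_KB for size in io_sizes_kb)

uniform = [16] * 1000               # 16,000 KB of nicely aligned 16 KB writes
awkward = [5, 9, 13, 17] * 250      # 11,000 KB of non-uniform writes

print(cache_consumed_kb(uniform))   # 16000 KB of cache for 16000 KB written
print(cache_consumed_kb(awkward))   # 20000 KB of cache for 11000 KB written
```

Under those assumptions the awkward workload burns almost twice as much cache per KB written, which is exactly how you end up back on disk sooner than you expected.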

What's best:

Caching was introduced to fix a physics problem: disk.

If you don't have disk, but instead use a form of non-volatile persistent storage like SSD, you are far less likely to need a cache, because SSDs are typically the very technology storage vendors are using as cache anyway.

A lot does come down to the storage operating environment and how it is implemented as some are more efficient than others.

So, think about what and where you need to accelerate or look to an All Flash Array like Pure Storage where you don’t have to think as much about how you architect your data storage needs.

How a good RESTful API can benefit storage management

How a good RESTful API integration can change your world! I'm not a developer, and yet within a few hours I wrote an app with the Pure Storage PowerShell Toolkit to generate a Visio diagram from live arrays.

I work for Pure Storage, and I have previously worked for NetApp and HDS as a pre-sales engineer. Recently Pure released our PowerShell toolkit, and I took it upon myself to see how easy it was to use, so I built an application that connects to live controllers and generates Visio diagrams.
Within a couple of hours I had a working application, which I have since built into quite a solid tool in my downtime over the last week or so.
Over the years I have tried many different tools and integration points, including SMI-S, and this is by far the easiest mechanism I have used.
The toolkit in its simplest form provides a shell for you to build applications around. My application is a read-only tool, but you can also use it to perform snapshots, clones and the like with very little effort.
Take a look at the blog posts that Barkz, the author of the toolkit, has written to show you just how easy it is.
http://www.purestorage.com/blog/pure-storage-rest-api-windows-powershell-part-1/
http://www.purestorage.com/blog/pure-storage-rest-api-windows-powershell-part-2/
I also had to have .NET installed, obviously, as well as the VisioAutomation PowerShell module.
https://visioautomation.codeplex.com/

Not only do you have GET access but also PUT access, so you can create snapshots and clones, eradicate LUNs and so on, and it is just so easy to do.
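For anyone who would rather see it than read about it, here is a minimal read-only sketch in Python rather than PowerShell. The /api/1.4/... paths, the api_token exchange and the response fields are my assumptions for illustration, so check them against the REST API guide for your array and code version:

```python
import requests

ARRAY = "https://flasharray.example.local"   # hypothetical array address
API_TOKEN = "your-api-token-here"            # placeholder, not a real token

session = requests.Session()
session.verify = False  # arrays often ship with self-signed certs; fix this properly in production

# Exchange the API token for a session cookie (assumed endpoint and payload).
session.post(f"{ARRAY}/api/1.4/auth/session", json={"api_token": API_TOKEN})

# Read-only call: list volumes and print a couple of (assumed) fields.
for vol in session.get(f"{ARRAY}/api/1.4/volume").json():
    print(vol["name"], vol["size"])
```

The same session can then be pointed at the host and port information mentioned below, which is essentially what the Visio generator does: a handful of GETs and some drawing.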

I hope to have this published as a tool shortly, so if you have some Pure Storage in your DC and want to gather LUN, host and port data and present it in a Visio diagram, you will easily be able to do it.

The next step is to look into developing something for a mobile platform using REST and JSON.


Location, Location, Location

When you decided where to live you had to make some conscious decisions. You had to decide where your house was in relation to things that are important to you – schools, the office, transport hubs, freeways or motorways.

You had to decide based on your method of transport how close you needed to be to those things so that your experience was acceptable or great. If you worked in New York, it would not be practical to live in beautiful New Zealand as the commute just wouldn’t work unless you could work remotely. So location is important for things that are important to you.

Storage works in much the same way. A traditional HDD has multiple platters, and where a block physically sits on those platters matters. Physics is physics: some regions of a platter move more data past the head per revolution than others, and the further the heads have to seek to reach a block, the longer you wait, so parts of the drive are measurably quicker to read and write than others.

Back in the old days, DBAs really had to think about data placement, whether by specific spindle and RAID allocation or even by block location on the disks themselves. The fastest regions of the platters were where high-performing tables and things like tempdb needed to live.

Memory has really helped with that, as have other forms of cache, so long as the blocks that need to be cached fit into the allocated memory or cache; if they don't, you hit a latency wall because you have to go off and seek from disk. When they do fit, your reads are essentially free, but writes still carry some overhead and an I/O tax.

SSDs really do go a long way towards resolving those issues. SSDs and other flash media are free of locality-of-reference constraints. Essentially they are just charges switched on and off, with no heads to move and no platters to spin, so reads and writes are effectively placement-free, which is why IO latencies are typically sub-millisecond.
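To put some rough numbers on that, here is a back-of-the-envelope comparison using typical published figures rather than measurements from any particular drive (a 7,200 RPM nearline disk and a generic flash device are assumed):

```python
# Typical, assumed figures; real devices vary.
rpm = 7200
avg_rotational_ms = 0.5 * 60_000 / rpm   # half a revolution on average, about 4.2 ms
avg_seek_ms = 8.0                        # common figure for a 7.2K RPM nearline drive
hdd_random_read_ms = avg_seek_ms + avg_rotational_ms

ssd_random_read_ms = 0.2                 # flash reads are typically 0.1 to 0.5 ms

print(f"HDD random read: ~{hdd_random_read_ms:.1f} ms")
print(f"SSD random read: ~{ssd_random_read_ms:.1f} ms")
print(f"Roughly {hdd_random_read_ms / ssd_random_read_ms:.0f}x faster on flash")
```

Around 12 ms per random read on spinning disk against a fraction of a millisecond on flash is the whole story of why data placement used to matter and now largely doesn't.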

DBAs no longer need to worry about where and how they place their data, or even whether they use shared storage. DBAs will always worry, of course, but now they can worry about things further up the stack.

Imagine if you could live in beautiful New Zealand, work in New York and holiday in the Caribbean without the challenges and cost overheads of mechanical travel. That is what locality-free data access with SSDs and the right storage OE can give you.

So many mistruths

I have been in the vendor world for many years now and it's great; I really enjoy my job and the companies that I have worked for.

What really gets me is some of the blatant lies and mistruths that get reported, especially when they come from so-called analysts or the like. I recently read an Edison article on HP Thin Deduplication and laughed out loud more than once at the claims in this incredibly biased piece.

Let's start with “Post processing removes any direct performance impact”. It may remove it at the time of ingest, but it does not remove the impact on CPU and memory altogether. Normally post-process deduplication is a scheduled event that will run when you schedule it, so what happens if you run a busy workload during that process? It definitely impacts performance.

Flash helps enable deduplication to happen inline, as long as you also have a decent amount of cache and an OE that can efficiently handle metadata. As capacities get bigger, so do your metadata table needs. NetApp was the first to bring deduplication to market, and it was, and still is, a post process. I have seen first hand the impact of this going wrong when it hasn't been managed correctly.
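As a quick back-of-the-envelope illustration of how fast those metadata tables grow (the 4 KiB granularity and the 32-byte per-block entry are round assumptions, not any vendor's actual numbers):

```python
# Assumed numbers purely for scale: 4 KiB dedup granularity and 32 bytes of
# fingerprint/index metadata per block.
capacity_tib = 100
block_kib = 4
entry_bytes = 32

blocks = capacity_tib * 1024**3 // block_kib      # capacity in KiB / block size in KiB
metadata_gib = blocks * entry_bytes / 1024**3

print(f"{capacity_tib} TiB at {block_kib} KiB blocks is about {blocks:,} entries,")
print(f"or roughly {metadata_gib:.0f} GiB of metadata to keep fast and consistent.")
```

Hundreds of gigabytes of metadata for a 100 TiB array is why the efficiency of the metadata engine, and how much of it can be cached, matters as much as the flash itself.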

Next, let's cover the cost of flash. Data reduction is an integral part of moving to flash, and an integral part of the TCO of flash, so it should be mentioned. However, what's key when you talk to customers about this is that you validate your data. The company I work for, Pure Storage, will quote you an average data reduction of 3:1 for database, 6:1 for VSI and 10:1 for VDI workloads. This information is gathered from our cloud assist portal, which collects data from all of our arrays in the wild and reports on the non-thin-provisioned capacity. The article in question repeatedly states that deduplication ratios are easily 10:1, whereas HP's own site states that their average data reduction is 4:1. Why does that matter? If you base your cost per GB on data reduction, then 10:1 makes your cost per GB look a lot better than 4:1, but which of the two figures is accurate? The Pure information, like the NetApp ticker information, is consistent messaging across the board. It is key that the information you use to make your decision is consistent and verifiable.
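To see why the quoted ratio matters so much, here is a quick calculation; the raw $/GB figure is a made-up placeholder, not anyone's price list:

```python
raw_cost_per_gb = 10.0   # hypothetical raw flash price, $/GB

for ratio in (4, 10):
    effective = raw_cost_per_gb / ratio
    print(f"{ratio}:1 data reduction -> effective ${effective:.2f} per usable GB")

# 4:1 gives $2.50/GB, 10:1 gives $1.00/GB: the same hardware looks 2.5x cheaper
# per usable GB simply by quoting the bigger ratio, so verify the ratio.
```

Same tin, very different business case, which is exactly why the number you are given needs to be consistent and verifiable.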

Take a look at the competitive comparison in which they rate everyone and claim HP is the best. All four vendors rated have very different data reduction methodologies: some, like Pure, include deduplication and compression but don't count thin provisioning; HP counts thin provisioning and deduplication; XtremIO and SolidFire are different again. How can they say that theirs is the best?

The report says HP looked at telemetry data from tens of thousands of systems. It states that an analysis was done comparing 16KiB and 4KiB block sizes, that there was little difference between the two beyond modest savings of around 15%, and that this was derived from telemetry data from their phone-home system. I have seen similar data sent from NetApp, Hitachi and Pure, and I would be very surprised if this were true: the data is extremely dense and takes masses of processing power just to manage fault calls, let alone to do deep statistical analysis. I would like to see more information on this. IMHO, changing the block size is simply a very expensive exercise. Look at what happened with EMC XtremIO recently, where going from XIOS 2.4 to 3.0 was a destructive upgrade because of the move from 4KB to 8KB block sizes.


That's enough of a rant for now, but I encourage you to do your homework before investing in any new technology. When you look at analyst reports, remember they are all paid for, but some are more biased than others.

My advice: ask your vendor to prove it. Put a controller on the floor and run some real-world workloads, not synthetic ones, against your own data.


SPAM

Well, I learnt my lesson. I had my settings set up to allow anyone to comment, which was a bad mistake, as I got spammed.
So now if you want to comment on this blog, you will need to register.

What was interesting is that they were all hitting the same blog entry, the one about XtremeIOMG, and not even the latest one.

Must be a Google bot thing!

EMC Merger – CLARiiON-powered inkjets

Up and down the web we are seeing posts like this one at The Register talking about EMC and HP possibly merging to create some kind of super behemoth mega IT company – really!

67-year-old Joe Tucci is set to retire after 13 years as head of mega storage company EMC, and all of a sudden they are talking to HP; that doesn't make any sense to me. Some punters are saying talks have been going on for 12 months, so they began before Tucci was set to retire last time. Is that why he held out?

HP is a full-stack provider. They have compute, network and storage already. HP has invested millions in acquiring new technologies like LeftHand and 3PAR to replace their dying EVA brand, so why would they look at EMC? That's right: they don't have a hypervisor.

Quite simply, they want VMware.

Let's face it, HP doesn't need another storage company. EMC, though, could do with a compute and network company; Cisco would make more sense, or even building out with Lenovo and maybe Brocade, two partial-stack vendors in need of completing the family.

There is also talk that Dell wants part of the company. Dell used to OEM EMC storage, and did well with it, before making the poor decision to invest in EqualLogic, the fledgling iSCSI-only array.

So how long do we have to wait before the magpies pick EMC apart once Tucci retires? Or will he simply not retire again, as happened the last time he was set to retire?
