Data Center Journal

VOLUME 39 | AUGUST 2015


26 | THE DATA CENTER JOURNAL www.datacenterjournal.com

This strategy is very timely, as cloud adoption has taken off and many companies are moving their applications to the cloud. In fact, according to RightScale's fourth annual State of the Cloud Survey of 930 IT professionals, 13% of enterprises have more than 1,000 public cloud-based virtual machines and 22% have more than 1,000 private cloud-based virtual machines.

A study by MIT and Caltech suggested that the best way to manage and optimize the performance of multiple cloud virtual machines is to "continuously monitor and benchmark" the performance of the various resources that a cloud service provider offers. Then, on the basis of a review of the compute, memory, storage and network statistics, identify the underperforming 10 to 15 percent and continuously move the workload over to a set of new virtual machines. This approach has become known in the industry as "pruning" or "churning," and it helps companies eliminate "shelfware": underperforming cloud instances that are still up but not doing anything. Pruning or churning machines is the best approach a company can employ to identify, eliminate and return underperforming machines, reclaiming value that would otherwise remain stranded in the cloud.

This approach raises two important issues: how do you measure which virtual machines are underperforming, and how do you prove the value of jettisoning them?

What is needed to make this strategy effective is platform-performance-management (PPM) software. PPM is a new generation of performance management and service analytics. It is built on the principles of benchmarking, baselining and continuous data collection against a simulated application workload to determine overall cloud-service performance.
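The selection step at the heart of pruning can be sketched in a few lines. This is an illustrative Python sketch, not the article's PPM software: the names (`VmScore`, `select_underperformers`, `prune_fraction`) and the sample scores are assumptions introduced here, and the benchmark metric is simplified to a single calculations-per-second number.

```python
from dataclasses import dataclass

@dataclass
class VmScore:
    """One VM's benchmark result (hypothetical structure)."""
    name: str
    calcs_per_sec: float  # e.g. output of a calculation-rate stress test

def select_underperformers(scores, prune_fraction=0.15):
    """Return the lowest-scoring VMs, up to ~prune_fraction of the fleet,
    mirroring the 'underperforming 10 to 15 percent' rule of thumb."""
    n_prune = max(1, int(len(scores) * prune_fraction))
    ranked = sorted(scores, key=lambda s: s.calcs_per_sec)
    return ranked[:n_prune]

# Synthetic fleet of 10 VMs with made-up benchmark scores:
fleet = [VmScore(f"vm-{i:02d}", s) for i, s in
         enumerate([812, 790, 455, 803, 798, 610, 821, 795, 788, 801])]
print([vm.name for vm in select_underperformers(fleet)])
```

In a real pruning loop, the flagged instances would be deprovisioned and replaced, then the fleet re-benchmarked, which is exactly the test described in the next section.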
What's different about this type of technology, compared with traditional performance measures, is that it measures the output of the cloud service, in terms of analytic transactions per second (TPS), web pages served or database transactions executed, rather than tracking the individual components constituting the cloud service (which the consumer has no control over or visibility into).

putting it to the test

We wanted to test this approach to determine whether there is real value in developing and running a pruning process. To find out, we outlined a test in which 20 machines were turned up. We measured the performance of each machine, then pruned out the underperforming machines (if any) and provisioned replacement machines. Next, we tested again to see whether there was any significant improvement. Although a test of only 20 machines cannot be deemed statistically significant, it was designed to be a "real-world" example of customer experience and a good indicator of whether cloud pruning brings the value expected.

We kicked off the test with 20 virtual machines provisioned on a single cloud provider, with 10 instances in the eastern U.S., nine in the western U.S. and one in Asia to get a good sampling. All of the machines were provisioned with two cores and 3.5GB of RAM and were loaded with the Ubuntu 14.04 operating system.

Once we provisioned the machines, we loaded our CloudQoS monitoring and test agent and stressed the compute, memory, storage and network capabilities concurrently while generating a simple calculation-operations measure. We ran the stress test on all 20 virtual machines for 12 hours to generate a good sample of each machine's overall "capability." Figure 1 below outlines the results; we graphed the data with calculations per second on the y axis.

Figure 1: Machine performance before the churn
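The test protocol above (benchmark a 20-VM fleet, replace the bottom ~15%, re-benchmark, compare) can be simulated end to end. This is a toy simulation under stated assumptions, not the actual CloudQoS test: `provision_vm` stands in for spinning up a two-core instance and running the 12-hour stress test, and its synthetic Gaussian scores are invented here for illustration.

```python
import random
import statistics

random.seed(7)  # make the synthetic run repeatable

def provision_vm():
    """Stand-in for provisioning an instance and benchmarking it;
    returns a synthetic capability score in calculations per second."""
    return random.gauss(800, 120)

# Benchmark the initial fleet of 20 VMs.
fleet = [provision_vm() for _ in range(20)]
before = statistics.mean(fleet)

# Churn: replace the bottom ~15% (3 of 20) with freshly provisioned VMs.
cutoff = sorted(fleet)[max(1, int(0.15 * len(fleet)))]
churned = [provision_vm() if score < cutoff else score for score in fleet]
after = statistics.mean(churned)

print(f"fleet mean before churn: {before:.0f}, after: {after:.0f}")
```

Because the replacements here are drawn from the same synthetic distribution, the simulated churn may or may not improve the mean on any given run; the article's point is that with real providers, persistently slow instances exist and replacing them lifts the fleet average.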
