Data Center Journal

VOLUME 39 | AUGUST 2015

…and variability on the x axis. Of the 20 virtual machines that we provisioned, only one (East Asia) ran at an average of 10,500 calculations per second for the given 12-hour period; the other 19 ran between 6,500 and 7,000 calculations per second, averaging 6,830 calculations per second over the same 12-hour period.

When we first considered this study and reviewed the literature in the public domain, we expected the bottom 5 to 10 percent would be churned. In this case, however, given the low performance and wide variability of the U.S.-based virtual machines, we decided to churn all 19 of the lowest-performing ones. We de-provisioned these virtual machines via the cloud portal and provisioned 19 new virtual machines in the same data centers as the de-provisioned ones. Once again, we deployed the monitoring and test agent and ran the test on the new virtual machines, while continuing to test the one virtual machine that had yet to be pruned.

The test results were enlightening, as Figure 2 shows. After churning the 19 virtual machines, five (or 25 percent) of them remained at levels previously found to be unsatisfactory. But the average calculations per second across all virtual machines in each set went from 7,020 to 9,410, a 34 percent improvement on the output measured in this test.

Figure 2: Machine performance after the churn

The impact of this result could be significant. If the workloads running on these servers needed the higher performance levels, it might mean upgrading the cloud service to more cores or more memory, at a higher cost. But getting better performance from existing services avoids this added cost.

It is important to note that this was a one-time test, but it followed the normal methodology for deploying new services. The same trend has been seen in both public and private clouds based on different architectures, provisioned services, storage types and network demands. The test methodology, a simple blind study designed to duplicate the average customer deployment, showed that there is significant opportunity to increase overall performance if the cloud consumer baselines the service performance before deploying an application.

Given the variability of service capability in the cloud, it is prudent to measure performance before deployment and to continue monitoring it against that baseline over time, to ensure that the platform the cloud-service provider offers is of the highest performance, consistency and quality on day one and throughout the lifetime of the service.

About the author: Clinton France is founder and CEO of Krystallize Technologies. He has 20+ years of experience in IT operations and engineering, with experience at Hewlett-Packard, BP, National City Bank, Microsoft, MSN and Akzo Nobel.
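To make the baselining step concrete, here is a minimal sketch in Python of a throughput probe. It is a stand-in for the monitoring and test agent described above, not the actual tooling used in the study; the arithmetic loop, sample count and interval length are illustrative assumptions.

    import statistics
    import time

    def calculations_per_second(duration_s: float = 10.0) -> float:
        """Run a CPU-bound arithmetic loop for duration_s seconds and
        return throughput in calculations per second."""
        count = 0
        x = 1.0001
        deadline = time.perf_counter() + duration_s
        while time.perf_counter() < deadline:
            x = (x * x) % 97.0  # arbitrary floating-point work
            count += 1
        return count / duration_s

    def baseline(samples: int = 72, duration_s: float = 10.0):
        """Repeat the probe and report mean throughput and its spread.
        The study sampled over 12 hours; shorten for a quick check."""
        rates = [calculations_per_second(duration_s) for _ in range(samples)]
        return statistics.mean(rates), statistics.pstdev(rates)

Running a probe like this on each freshly provisioned virtual machine, before any workload is deployed, yields the mean and variability figures against which later measurements can be compared.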
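The churn step can likewise be expressed as a simple rule: de-provision any machine whose measured rate falls below a floor and provision a replacement in the same data center. In the sketch below, the deprovision and provision callables, the fleet contents and the floor value are hypothetical placeholders for whatever API or portal a given provider exposes.

    def churn_underperformers(results, floor, deprovision, provision):
        """results maps vm_id -> (region, mean_rate). Machines below
        `floor` are replaced in the same data center, mirroring the
        churn step in the study."""
        replacements = []
        for vm_id, (region, rate) in results.items():
            if rate < floor:
                deprovision(vm_id)
                replacements.append(provision(region))
        return replacements

    # Hypothetical fleet: one fast machine, one that should be churned.
    fleet = {"vm-east-asia": ("eastasia", 10_500.0),
             "vm-us-1": ("eastus", 6_830.0)}
    new_vms = churn_underperformers(
        fleet, floor=9_000.0,
        deprovision=lambda vm: print("de-provisioning", vm),
        provision=lambda region: "new-vm-in-" + region)

Each replacement should then be baselined again: as the results above show, a fraction of replacements may still fall short and need another pass.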
