Statistical Machine Learning Makes Automatic Control Practical for Internet Datacenters

Peter Bodík, Rean Griffith, Charles Sutton, Armando Fox, Michael Jordan, David Patterson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Horizontally-scalable Internet services on clusters of commodity computers appear to be a great fit for automatic control: there is a target output (service-level agreement), observed output (actual latency), and gain controller (adjusting the number of servers). Yet few datacenters are automated this way in practice, due in part to well-founded skepticism about whether the simple models often used in the research literature can capture complex real-life workload/performance relationships and keep up with changing conditions that might invalidate the models. We argue that these shortcomings can be fixed by importing modeling, control, and analysis techniques from statistics and machine learning. In particular, we apply rich statistical models of the application's performance, simulation-based methods for finding an optimal control policy, and change-point methods to find abrupt changes in performance. Preliminary results from running a Web 2.0 benchmark application driven by real workload traces on Amazon's EC2 cloud show that our method can effectively control the number of servers, even in the face of performance anomalies.
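
The abstract names three ingredients: a statistical model of application performance, a policy for choosing how many servers to run, and change-point detection to catch abrupt shifts that would invalidate the model. The sketch below is a minimal illustration of how such a control loop might fit together, not the paper's implementation: the linear latency model, the SLA threshold, the z-score change-point test, and all function names are assumptions made here for clarity.

```python
# Illustrative sketch only: a toy performance model, a greedy server-count
# policy, and a crude change-point check. Not the authors' method.
import numpy as np
from sklearn.linear_model import LinearRegression


def fit_performance_model(workload, servers, latency):
    """Fit a simple model: latency as a function of requests per server.

    The paper uses richer statistical models; a linear fit on load per
    server is only a stand-in to make the loop concrete.
    """
    X = (workload / servers).reshape(-1, 1)
    return LinearRegression().fit(X, latency)


def choose_servers(model, predicted_workload, sla_latency, max_servers=50):
    """Return the smallest server count whose predicted latency meets the SLA."""
    for n in range(1, max_servers + 1):
        predicted = model.predict(np.array([[predicted_workload / n]]))[0]
        if predicted <= sla_latency:
            return n
    return max_servers


def latency_change_point(recent, baseline, z_threshold=3.0):
    """Flag an abrupt shift if recent mean latency deviates from the baseline
    mean by more than z_threshold standard errors (a crude stand-in for the
    change-point methods mentioned in the abstract)."""
    recent, baseline = np.asarray(recent), np.asarray(baseline)
    se = baseline.std(ddof=1) / np.sqrt(len(recent)) + 1e-9
    return abs(recent.mean() - baseline.mean()) / se > z_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic history: latency grows with requests per server, plus noise.
    workload = rng.uniform(100, 1000, size=200)            # requests/sec
    servers = rng.integers(2, 20, size=200).astype(float)  # server counts
    latency = 20 + 0.8 * (workload / servers) + rng.normal(0, 5, size=200)

    model = fit_performance_model(workload, servers, latency)
    n = choose_servers(model, predicted_workload=600.0, sla_latency=100.0)
    print("servers to provision:", n)

    baseline = latency[:150]
    anomalous = latency[150:] + 60.0   # simulate an abrupt performance shift
    print("change point detected:", latency_change_point(anomalous, baseline))
```

Note one deliberate simplification: the sketch greedily picks the cheapest server count predicted to meet the SLA, whereas the abstract describes simulation-based methods for finding an optimal control policy, which would evaluate candidate policies against workload traces rather than a single prediction.
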
Original language: English
Title of host publication: Proceedings of the 2009 Conference on Hot Topics in Cloud Computing (HotCloud 2009)
Place of publication: Berkeley, CA, USA
Publisher: USENIX Association
Number of pages: 5
Publication status: Published - 2009
