The abundance of poorly optimized mobile applications, coupled with their increasing centrality in our digital lives, makes a framework for mobile app optimization an imperative. While tuning strategies for desktop and server applications have a long history, they are difficult to adapt for use on mobile devices. Reference inputs that trigger behavior similar to a mobile application's typical use are hard to construct. For many classes of applications the very concept of typical behavior is nonexistent, with each user interacting with the application in very different ways. In such contexts, optimization strategies need to evaluate their effectiveness against real user input, but doing so online risks user dissatisfaction when sub-optimal optimizations are evaluated.

In this paper we present an iterative compiler which employs a novel capture and replay technique to collect real user test cases and use them later to evaluate different transformations offline. The proposed mechanism identifies and stores only the set of memory pages needed to replay the most heavily used functions of the application. During idle and charging periods, this minimal state is combined with different binaries of the application, each one built with a different set of optimizations enabled. Replaying the targeted functions allows us to evaluate the effectiveness of each set of optimizations against the actual way the user interacts with the application. For the BEEBS benchmark suite, our approach improved the performance of hot functions by up to 57%, while keeping the slowdown experienced by the user at 0.8% on average. By focusing only on heavily used functions, we conserve storage space by two to three orders of magnitude compared to typical capture and replay implementations.
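The offline evaluation loop the abstract describes — replaying captured state against several differently optimized builds and keeping the best — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the variant names, the `evaluate_variants` helper, and the use of in-process callables as stand-ins for separate binaries are all assumptions for clarity.

```python
import time

def evaluate_variants(variants, captured_inputs):
    """Replay every captured input against each candidate build and
    return the name of the variant with the lowest total replay time.
    (Illustrative only; the paper replays real binaries on stored
    memory pages rather than Python callables.)"""
    best_name, best_time = None, float("inf")
    for name, hot_fn in variants.items():
        start = time.perf_counter()
        for state in captured_inputs:
            hot_fn(state)  # replay the hot function on captured state
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name

# Two stand-ins for the same hot function built with different flags.
variants = {
    "O1-build": lambda xs: sum(x * x for x in xs),
    "O3-build": lambda xs: sum(x * x for x in xs[::-1]),
}
# Captured user inputs, replayed identically against every build.
captured = [list(range(1000)) for _ in range(50)]
print(evaluate_variants(variants, captured))
```

Because the same captured inputs are replayed against every build, the comparison reflects the user's actual workload rather than a synthetic benchmark, which is the key idea behind the approach.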
Title of host publication: Proceedings of the 6th International Workshop on Adaptive Self-tuning Computing Systems (ADAPT 2016)
Number of pages: 8
Publication status: Published - 18 Jan 2016