Guiding Inlining Decisions by Identifying Post-Inlining Transformations
Wed 7 Nov 2018 18:02 - 18:04 at Georgian - Poster & SRC
Function inlining is a compiler transformation that replaces a function call with the body of the called function. This transformation eliminates the overhead of the call and enlarges the scope for method-level compiler optimizations. However, inlining increases method size, which negatively impacts Just-In-Time (JIT) compilers by increasing the time spent in compilation and optimization routines. To decide which callsite-method pairs will minimize runtime, an inlining strategy, i.e., a criterion for discriminating callsite-method pairs [6], must weigh the advantages and disadvantages of inlining.
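As a minimal illustration (a hypothetical example, not taken from the paper), the sketch below shows a callsite before and after inlining, and the constant folding that inlining can expose:

```python
# Hypothetical illustration of function inlining and an optimization it unlocks.

def square(x):
    return x * x

def caller_before():
    # Original callsite: pays call overhead and hides x * x from the optimizer.
    return square(3) + 1

def caller_after_inlining():
    # After inlining, the callee body is substituted at the callsite:
    #   return (3 * 3) + 1
    # and constant folding can then reduce the expression to a literal:
    return 10

assert caller_before() == caller_after_inlining()  # same result, less work at runtime
```

The behavior is unchanged; only the cost of computing it differs, which is why such unlocked optimizations are a key input to inlining decisions.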
The need to balance the advantages and disadvantages of inlining led researchers to initially reduce this problem to the Knapsack problem [7]. This reduction requires a non-trivial formulation of a weight and a value for each callsite-method pair. Researchers have proposed different definitions of weight (e.g., estimated code size) and value (e.g., method invocation or call frequency) for inlining strategies [1, 7]. However, Chang et al. [3] have shown that, in the presence of nested inlining, inline expansion is a harder problem than the Knapsack problem. Inlining strategies therefore often supplement the Knapsack reduction with heuristics to provide better inlining decisions [1, 5, 7, 9, 10]. Analyses that determine whether inlining a function will enlarge the scope for method-level compiler optimizations often encode their result as a reduction in the weight of a callsite-method pair [8]. This reduction, coupled with inlining thresholds on method sizes, makes inlining strategies harder to understand and tune.
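The Knapsack reduction can be sketched as follows. The weights, values, and budget below are illustrative placeholders, not figures from the cited strategies, and the sketch deliberately ignores the nested-inlining interactions that make the real problem harder:

```python
# Sketch of the Knapsack reduction for (non-nested) inlining decisions.
# Each callsite-method pair carries a weight (estimated code-size increase)
# and a value (e.g., call frequency); the budget models a size threshold.

def select_inlines(pairs, budget):
    """0/1 knapsack over (weight, value) pairs; returns (chosen indices, total value)."""
    n = len(pairs)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i, (weight, value) in enumerate(pairs, 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if weight <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - weight] + value)
    # Recover the selected pairs by walking the table backwards.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= pairs[i - 1][0]
    return sorted(chosen), best[n][budget]

# Three candidate callsite-method pairs under a size budget of 6.
pairs = [(4, 10), (3, 4), (2, 7)]
print(select_inlines(pairs, budget=6))  # -> ([0, 2], 17)
```

Under nested inlining, inlining one pair changes the weights and values of others, which is precisely why this independent-item model is insufficient on its own [3].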
To overcome these limitations, we propose a prototype inlining strategy that uses abstract interpretation to identify the optimization opportunities that inlining would unlock. The strategy takes nested inlining scenarios into account [4]. Our prototype relies on method summaries that describe the benefit value assigned to a callsite-method pair; this design makes the compilation time and analysis effort parameterizable. We argue that this approach suits JIT compilers well, because the effort spent by our abstract interpreter can be scaled to different optimization levels. Our initial experiments with the DaCapo benchmark suite (version 9.12 Bach MR1) [2] show that the generated code is of quality comparable to that of traditional inlining techniques.
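To convey the flavor of summary-based benefit scoring, the following sketch uses hypothetical names and benefit weights (they are not the paper's actual summaries or values): a summary records, per callee parameter, the benefit available if the argument at a callsite is a known constant, so each callsite-method pair can be scored without re-analyzing the callee body.

```python
# Hypothetical sketch of summary-driven benefit scoring for inlining.
# Benefit weights are illustrative placeholders.
CONSTANT_FOLD_BENEFIT = 5   # a constant argument folds arithmetic on it
BRANCH_FOLD_BENEFIT = 10    # a constant argument folds a branch condition

def summarize(params_in_branches, params_in_arith):
    """Build a method summary: parameter index -> benefit if that argument is constant."""
    summary = {}
    for p in params_in_arith:
        summary[p] = summary.get(p, 0) + CONSTANT_FOLD_BENEFIT
    for p in params_in_branches:
        summary[p] = summary.get(p, 0) + BRANCH_FOLD_BENEFIT
    return summary

def benefit_at_callsite(summary, constant_args):
    """Score a callsite-method pair by summing benefits for constant arguments."""
    return sum(summary.get(p, 0) for p in constant_args)

# Callee uses parameter 0 in a branch condition and parameter 1 in arithmetic.
summary = summarize(params_in_branches={0}, params_in_arith={1})
print(benefit_at_callsite(summary, constant_args={0}))     # -> 10
print(benefit_at_callsite(summary, constant_args={0, 1}))  # -> 15
```

Because the summary is computed once per method and consulted per callsite, the depth and precision of the analysis that builds it can be tuned, which is what makes the analysis effort parameterizable across optimization levels.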