Sunday, August 18, 2019

Essays --

Analysis and Critique of Reading Assignment 1 Paper: "Limits of Instruction-Level Parallelism"

In this report the author provides quantifiable results showing how much instruction-level parallelism is actually available. The report clearly defines the terminology it uses: instruction-level parallelism, dependencies, branch prediction, jump prediction, data-cache latency, memory-address alias analysis, and so on. A total of eighteen test programs are examined under seven machine models, and the results show that variations on the standard models have significant effects. The seven models reflect the parallelism made available by different compiler and architecture techniques such as branch prediction and register renaming.

In the weakest model, the lack of branch prediction means that only intra-block parallelism can be found, and the lack of renaming and alias analysis means that little even of that is exploited. The Good model roughly doubles the parallelism, mostly because it introduces some register renaming. Parallelism increases as the models become more aggressive, but a model that adds more advanced features without perfect branch prediction still cannot reach even half of the Perfect model's parallelism.

All measurements were taken over entire program executions, which avoided the question of what constitutes a "representative" interval: selecting a particular interval where the program happens to be at its most parallel would be misleading.

Widening the cycle also helps improve parallelism. Doubling the cycle width improves parallelism appreciably under the Perfect model, yet most programs do not benefit from wide cycles even there. The paper also depicts how parallelism behaves under different window techniques; discrete window widening evidently tends to result in lower levels of parallelism than a continuously managed window, and without branch prediction and jump prediction the negative effect of misprediction can be greater than the positive effect of multiple issue.

Some alias analysis is better than none, though it rarely increased parallelism by more than a quarter; a 75% improvement was achieved under "alias analysis by compiler" on the programs that actually use the heap. Renaming did not improve parallelism much and even degraded it in a few cases: with few real registers, hardware dynamic renaming offers little over a reasonable static allocator. A few programs show parallelism either increasing or decreasing under large latencies.

The basics of instruction-level parallelism are well explained, and the paper argues that pipelining matters more than the size of the program. ILP is increased by branch-prediction and loop-unrolling techniques, but the cycles lost to misprediction and to handling memory aliases at compile time have not been taken into account. The short C sketches below illustrate, at the source level, the kinds of false dependencies, alias constraints, and unrolling opportunities that these techniques address.
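To make the renaming point concrete, here is a minimal C sketch (my own example, not from the paper; the variable names are hypothetical) of a false, name-only dependency that register renaming, whether done by a compiler's allocator or by renaming hardware, removes so that two independent computations can overlap.

#include <stdio.h>

int main(void) {
    int a = 3, b = 4, c = 5, d = 6;

    /* Without renaming: 't' is reused, so the second write to 't'
     * must wait for the earlier read (a write-after-read hazard on
     * the name 't', not a true data dependency). */
    int t;
    t = a + b;       /* write t */
    int x = t * 2;   /* read t  */
    t = c + d;       /* write t again: WAR hazard on t */
    int y = t * 3;

    /* With renaming: a second name removes the hazard, so the two
     * independent chains can issue in parallel on a wide machine. */
    int t1 = a + b;
    int u = t1 * 2;
    int t2 = c + d;
    int v = t2 * 3;

    printf("%d %d %d %d\n", x, y, u, v);
    return 0;
}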
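The alias-analysis results can be illustrated the same way. In this hypothetical loop (not one of the paper's eighteen benchmarks), the compiler must assume that dst and src may overlap and therefore keep the loads and stores in order; declaring the pointers restrict is one source-level way of giving the alias analysis the guarantee it needs to overlap iterations.

#include <stddef.h>

/* Without alias information the compiler must assume dst and src may
 * overlap, so a store to dst[i] could feed a later load of src[j],
 * forcing iterations to stay in order. */
void scale_maybe_alias(float *dst, const float *src, size_t n, float k) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* With 'restrict' the programmer asserts non-aliasing, so alias
 * analysis can treat iterations as independent and overlap them
 * (vectorize, unroll, or software-pipeline the loop). */
void scale_no_alias(float *restrict dst, const float *restrict src,
                    size_t n, float k) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}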
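Finally, a sketch of the loop-unrolling point: unrolling exposes several independent additions per iteration and removes three of every four loop-back branches, which is exactly the kind of gain the critique notes is limited in practice by branch misprediction. The function names and the unroll factor of four are illustrative choices, not the paper's.

/* Rolled loop: one add, one compare, one branch per element. */
long sum_rolled(const int *a, long n) {
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by four with separate accumulators: four independent adds
 * per loop-back branch, so a multiple-issue machine can overlap them.
 * Leftover elements are handled in a short epilogue loop. */
long sum_unrolled4(const int *a, long n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    long i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    long s = s0 + s1 + s2 + s3;
    for (; i < n; i++)
        s += a[i];
    return s;
}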
