Performance Evaluation and Optimization
Our goal here is to improve the performance of our simulation core and of our models, from both a CPU and a memory perspective. Since this is a very broad project, interested students will have to start by defining more precisely the scope of what they would like to achieve.

We expect, however, all proposals to be based on the following workflow:

 * define reference application-level scenarios and establish reference benchmarks for them (a minimal scenario sketch follows this list)
 * perform CPU and memory profiling of the reference scenarios. Observe the hotspots and explain them.
 * propose and implement a solution to decrease the impact of the observed hotspots.
 * re-run the reference scenarios: if there is no improvement, the hotspot explanation or the implementation is most likely buggy, so understand why the proposed solution did not help. If there is an improvement, go back to profiling and iterate.
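
As an illustration of the first step, here is a minimal sketch of a reference scenario: a single point-to-point link carrying a constant-rate UDP flow, whose runtime and memory footprint can serve as a baseline. It assumes the current ns-3 helper API (module headers, PointToPointHelper, OnOffHelper, PacketSinkHelper); the exact headers, helper names, and attribute strings may differ across ns-3 versions, and the file name, rates, and durations are purely illustrative.

 // scratch/p2p-baseline.cc -- hypothetical file name, not part of the ns-3 tree.
 #include "ns3/core-module.h"
 #include "ns3/network-module.h"
 #include "ns3/internet-module.h"
 #include "ns3/point-to-point-module.h"
 #include "ns3/applications-module.h"
 
 using namespace ns3;
 
 int main (int argc, char *argv[])
 {
   CommandLine cmd;
   cmd.Parse (argc, argv);
 
   // Two nodes connected by a single point-to-point link.
   NodeContainer nodes;
   nodes.Create (2);
 
   PointToPointHelper p2p;
   p2p.SetDeviceAttribute ("DataRate", StringValue ("100Mbps"));
   p2p.SetChannelAttribute ("Delay", StringValue ("2ms"));
   NetDeviceContainer devices = p2p.Install (nodes);
 
   InternetStackHelper internet;
   internet.Install (nodes);
 
   Ipv4AddressHelper ipv4;
   ipv4.SetBase ("10.1.1.0", "255.255.255.0");
   Ipv4InterfaceContainer interfaces = ipv4.Assign (devices);
 
   // Constant-rate UDP flow from node 0 to node 1; rate and packet size are
   // illustrative and should be chosen to stress the code paths of interest.
   OnOffHelper onoff ("ns3::UdpSocketFactory",
                      InetSocketAddress (interfaces.GetAddress (1), 9));
   onoff.SetAttribute ("DataRate", StringValue ("50Mbps"));
   onoff.SetAttribute ("PacketSize", UintegerValue (1024));
   ApplicationContainer source = onoff.Install (nodes.Get (0));
   source.Start (Seconds (1.0));
   source.Stop (Seconds (60.0));
 
   PacketSinkHelper sinkHelper ("ns3::UdpSocketFactory",
                                InetSocketAddress (Ipv4Address::GetAny (), 9));
   ApplicationContainer sink = sinkHelper.Install (nodes.Get (1));
   sink.Start (Seconds (0.0));
 
   Simulator::Stop (Seconds (61.0));
   Simulator::Run ();
   Simulator::Destroy ();
   return 0;
 }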

Typical profiling tools on Linux include:

 * oprofile
 * sysprof
 * gprof
 * valgrind/kcachegrind
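
Before (or in addition to) running the profilers listed above, it can help to record a coarse baseline directly from the benchmark program itself: total wall-clock time and peak memory use. The following is a minimal, self-contained sketch using std::chrono and the POSIX getrusage call; RunScenario() is a hypothetical placeholder for the body of the reference scenario (e.g. topology setup followed by Simulator::Run () in an ns-3 program), and ru_maxrss is reported in kilobytes on Linux.

 #include <sys/resource.h>
 #include <chrono>
 #include <iostream>
 
 // Hypothetical placeholder: the reference scenario (e.g. topology setup
 // followed by Simulator::Run ()) would go here.
 static void RunScenario ()
 {
 }
 
 int main ()
 {
   auto start = std::chrono::steady_clock::now ();
   RunScenario ();
   auto stop = std::chrono::steady_clock::now ();
 
   struct rusage usage;
   getrusage (RUSAGE_SELF, &usage);
 
   std::chrono::duration<double> elapsed = stop - start;
   std::cout << "wall-clock time: " << elapsed.count () << " s\n";
   std::cout << "peak RSS:        " << usage.ru_maxrss << " kB\n";
   return 0;
 }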

A good proposal will most likely integrate results from a very early reference scenario definition, profiling pass, and hotspot explanation; i.e., it should contain a simple, rough first iteration of the above workflow.