benchmark / tools / gbench / Inputs / test3_run1.json
[Tooling] Rewrite generate_difference_report(). (#678) · aad33aab
Roman Lebedev authored Sep 19, 2018
My knowledge of Python is not great, so this is kinda horrible.
    
    Two things:
1. If there were repetitions, for the RHS (i.e. the new value) we were always using the first repetition,
    which naturally results in incorrect change reports for the second and following repetitions.
    And what is even worse, that completely broke the U test. :(
2. Better support for differing repetition counts in the U test was missing.
    It's important if we are to be able to report 'iteration as repetition',
    since it is rather likely that the iteration counts will mismatch.
    
Now, the rough idea of how this is implemented. I think this is the right solution.
1. Get all benchmark names (in order) from the lhs benchmark.
2. While preserving the order, keep only the unique names.
3. Get all benchmark names (in order) from the rhs benchmark.
4. While preserving the order, keep only the unique names.
5. Intersect `2.` and `4.` to get the list of unique benchmark names that exist on both sides.
    6. Now, we want to group (partition) all the benchmarks with the same name.
       ```
       BM_FOO:
           [lhs]: BM_FOO/repetition0 BM_FOO/repetition1
           [rhs]: BM_FOO/repetition0 BM_FOO/repetition1 BM_FOO/repetition2
       ...
       ```
       We also drop mismatches in `time_unit` here.
       _(whose bright idea was it to store arbitrarily scaled timers in json **?!** )_
7. Iterate over each partition:
7.1. Conditionally, diff the overlapping repetitions (the repetition counts may differ).
7.2. Conditionally, do the U test:
7.2.1. Get **all** the values of the `"real_time"` field from the lhs benchmark.
7.2.2. Get **all** the values of the `"cpu_time"` field from the lhs benchmark.
7.2.3. Get **all** the values of the `"real_time"` field from the rhs benchmark.
7.2.4. Get **all** the values of the `"cpu_time"` field from the rhs benchmark.
          NOTE: the repetition count may be different, but we want *all* the values!
7.2.5. Do the rest of the U test computation.
7.2.6. Print the U test results.
    8. ???
    9. **PROFIT**!
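Steps 1–6 amount to an order-preserving dedupe, an intersection, and a grouping pass. A minimal Python sketch of that idea (function and field names here are illustrative, not the actual gbench code; only `"name"` and `"time_unit"` are assumed from the benchmark JSON format):

```python
def ordered_unique(names):
    # Keep the first occurrence of each name, preserving order (steps 2 and 4).
    # dict preserves insertion order in Python 3.7+.
    return list(dict.fromkeys(names))

def partition_benchmarks(lhs, rhs):
    # lhs/rhs are lists of benchmark dicts as found in the JSON output,
    # e.g. {"name": "BM_FOO", "time_unit": "ns", ...}.
    lhs_names = ordered_unique(b["name"] for b in lhs)  # steps 1-2
    rhs_names = ordered_unique(b["name"] for b in rhs)  # steps 3-4
    # Step 5: names present on both sides, in lhs order.
    common = [n for n in lhs_names if n in rhs_names]
    # Step 6: group all repetitions sharing a name;
    # drop partitions whose time_unit does not agree on both sides.
    partitions = []
    for name in common:
        lhs_group = [b for b in lhs if b["name"] == name]
        rhs_group = [b for b in rhs if b["name"] == name]
        units = {b["time_unit"] for b in lhs_group + rhs_group}
        if len(units) == 1:
            partitions.append((name, lhs_group, rhs_group))
    return partitions
```

Note that the sides may contribute different numbers of repetitions to a partition; that is exactly the situation step 7 has to cope with.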
    
    Fixes #677
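Step 7.2 works because the Mann–Whitney U test does not require the two samples to be the same size, which is why gathering *all* values from each side is safe even when repetition counts mismatch. The tooling delegates the actual test to SciPy; below is a dependency-free sketch of just the value gathering and the U statistic, with illustrative function names (`"real_time"` and `"cpu_time"` are real fields of the benchmark JSON):

```python
def u_test_inputs(lhs_group, rhs_group):
    # Steps 7.2.1-7.2.4: collect every real_time / cpu_time value from
    # all repetitions on each side; the two counts need not match.
    lhs_real = [b["real_time"] for b in lhs_group]
    lhs_cpu  = [b["cpu_time"]  for b in lhs_group]
    rhs_real = [b["real_time"] for b in rhs_group]
    rhs_cpu  = [b["cpu_time"]  for b in rhs_group]
    return lhs_real, lhs_cpu, rhs_real, rhs_cpu

def mann_whitney_u(xs, ys):
    # U statistic for two samples of possibly different sizes:
    # count the pairs (x, y) with x > y, counting ties as one half.
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

In practice one would use `scipy.stats.mannwhitneyu` to also get the p-value; the sketch only illustrates why unequal repetition counts are not a problem for this test.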