1. 05 Jan, 2018 3 commits
    • Merge pull request #509 from efcs/fix-gtest-install · e1c3a83b
      Eric authored
      Prevent GTest and GMock from being installed with Google Benchmark.
    • Prevent GTest and GMock from being installed with Google Benchmark. · 778b85a7
      Eric Fiselier authored
      When users satisfy the GTest dependency by placing a googletest
      directory in the project, the targets from GTest and GMock incorrectly
      get installed alongside this library. We shouldn't be installing
      our test dependencies.
      
      This patch forces the options that control installation for googletest
      to OFF.
    • Updated documentation. (#503) · 052421c8
      Winston Du authored
      For people who, like me, get this library via CMake's AddExternalProject.
      I would like a long-term tutorial from someone who really understands CMake on how to actually link an external project's dependencies to another added external project.
  2. 14 Dec, 2017 1 commit
  3. 13 Dec, 2017 2 commits
    • Add support for GTest based unit tests. (#485) · 7db02be2
      Eric authored
      * Add support for GTest based unit tests.
      
      As Dominic and I have previously discussed, there is some
      need/desire to improve the testing situation in Google Benchmark.
      
      One step to fixing this problem is to make it easier to write
      unit tests by adding support for GTest, which is what this patch does.
      
      By default it looks for an installed version of GTest. However, the
      user can specify -DBENCHMARK_BUILD_EXTERNAL_GTEST=ON to instead
      download, build, and use a copy of GTest from source. This is
      quite useful when Benchmark is being built in non-standard configurations,
      such as against libc++ or in 32-bit mode.
    • Document new 'v2' branch meant for unstable development. · de725e5a
      Eric Fiselier authored
      This patch documents the newly added v2 branch, which will
      be used to stage, test, and receive feedback on upcoming
      features, most of which will be breaking changes which can't
      be directly applied to master.
  4. 07 Dec, 2017 1 commit
  5. 04 Dec, 2017 1 commit
  6. 30 Nov, 2017 2 commits
  7. 29 Nov, 2017 2 commits
  8. 27 Nov, 2017 1 commit
    • Console reporter: properly account for the length of custom counter names (#484) · ec5684ed
      Roman Lebedev authored
      Old output example:
      ```
      Benchmark                                                 Time           CPU Iterations  CPUTime,s   Pixels/s ThreadingFactor
      ------------------------------------------------------------------------------------------------------------------------------
      20170525_0036TEST.RAF/threads:8/real_time                45 ms         45 ms         16   0.718738 79.6277M/s   0.999978   2.41419GB/s    22.2613 items/s FileSize,MB=111.050781; MPix=57.231360
      ```
      
      New output example:
      ```
      Benchmark                                                 Time           CPU Iterations  CPUTime,s   Pixels/s ThreadingFactor
      ------------------------------------------------------------------------------------------------------------------------------
      20170525_0036TEST.RAF/threads:8/real_time                45 ms         45 ms         16   0.713575 80.1713M/s        0.999571   2.43067GB/s    22.4133 items/s FileSize,MB=111.050781; MPix=57.231360
      ```
  9. 26 Nov, 2017 2 commits
    • Improve BENCHMARK_UNREACHABLE() implementation. · 2ec7399c
      Eric Fiselier authored
      This patch primarily changes the BENCHMARK_UNREACHABLE()
      implementation under MSVC to use __assume(false) instead
      of being a NORETURN function, which ironically caused
      unreachable code warnings.
      
      Second, since the NORETURN function attempt generated the
      warnings we meant to avoid, it has been replaced with a dummy
      null statement.
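      The pattern described above can be sketched as follows. This is a hedged,
      self-contained approximation, not the library's actual code: the macro name
      mirrors BENCHMARK_UNREACHABLE but is renamed to make clear it is illustrative.
      ```
      #include <cstdio>

      // Sketch: __assume(false) tells the MSVC optimizer a path is dead without
      // declaring a NORETURN function (which ironically triggered unreachable-code
      // warnings); GCC/Clang use __builtin_unreachable(); otherwise fall back to
      // a dummy null statement.
      #if defined(_MSC_VER)
      #define UNREACHABLE_SKETCH() __assume(false)
      #elif defined(__GNUC__) || defined(__clang__)
      #define UNREACHABLE_SKETCH() __builtin_unreachable()
      #else
      #define UNREACHABLE_SKETCH() ((void)0)  // dummy null statement fallback
      #endif

      int sign(int x) {
        if (x > 0) return 1;
        if (x < 0) return -1;
        if (x == 0) return 0;
        UNREACHABLE_SKETCH();  // every integer falls into one of the cases above
      }

      int main() {
        std::printf("%d %d %d\n", sign(5), sign(-3), sign(0));
        return 0;
      }
      ```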
    • Improve CPU Cache info reporting -- Add Windows support. (#486) · 11dc3682
      Eric authored
      * Improve CPU Cache info reporting -- Add Windows support.
      
      This patch does a couple of things regarding CPU cache reporting.
      
      First, it adds an implementation on Windows. Second, it fixes
      the JSONReporter to correctly (and actually) output the CPU
      configuration information.
      
      And finally, third, it detects and reports the number of
      physical CPUs that share the same cache.
  10. 22 Nov, 2017 1 commit
    • Refactor System information collection -- Add CPU Cache Info (#483) · 27e0b439
      Eric authored
      * Refactor System information collection.
      
      This patch refactors the system information collection,
      and in particular information about the target CPU. The
      motivation is to make it easier to access CPU information,
      and easier to add new information as need be.
      
      This patch additionally adds information about the cache
      sizes of the CPU.
      
      * Address review comments: Clean up integer types.
      
      This commit cleans up the integer types used in ValueUnion to
      follow the Google style guide.
      
      Additionally it adds a BENCHMARK_UNREACHABLE macro to assist
      in documenting/catching unreachable code paths.
      
      * Rename ValueUnion accessors.
  11. 17 Nov, 2017 1 commit
    • Add NetBSD support (#482) · aad6a5fa
      Kamil Rytarowski authored
      Define BENCHMARK_OS_NETBSD for NetBSD.
      
      Add detection of cpuinfo_cycles_per_second and cpuinfo_num_cpus.
      This code shares the detection of these properties with FreeBSD.
  12. 15 Nov, 2017 1 commit
  13. 13 Nov, 2017 1 commit
  14. 07 Nov, 2017 4 commits
    • [Tools] A new, more versatile benchmark output compare tool (#474) · 5e66248b
      Roman Lebedev authored
      * [Tools] A new, more versatile benchmark output compare tool
      
      Sometimes, there is more than one implementation of some functionality,
      and the obvious use-case is to benchmark them: which one is better?
      
      Currently, there is no easy way to compare the benchmarking results
      in that case:
          The obvious solution is to have multiple binaries, each one
      containing/running one implementation. And each binary must use
      exactly the same benchmark family name, which is super bad,
      because now the binary name should contain all the info about
      benchmark family...
      
      What if I told you that is not the solution?
      What if we could avoid producing one binary per benchmark family,
      with the same family name used in each binary,
      but instead could keep all the related families in one binary,
      with their proper names, AND still be able to compare them?
      
      There are three modes of operation:
      1. Just compare two benchmarks, what `compare_bench.py` did:
      ```
      $ ../tools/compare.py benchmarks ./a.out ./a.out
      RUNNING: ./a.out --benchmark_out=/tmp/tmprBT5nW
      Run on (8 X 4000 MHz CPU s)
      2017-11-07 21:16:44
      ------------------------------------------------------
      Benchmark               Time           CPU Iterations
      ------------------------------------------------------
      BM_memcpy/8            36 ns         36 ns   19101577   211.669MB/s
      BM_memcpy/64           76 ns         76 ns    9412571   800.199MB/s
      BM_memcpy/512          84 ns         84 ns    8249070   5.64771GB/s
      BM_memcpy/1024        116 ns        116 ns    6181763   8.19505GB/s
      BM_memcpy/8192        643 ns        643 ns    1062855   11.8636GB/s
      BM_copy/8             222 ns        222 ns    3137987   34.3772MB/s
      BM_copy/64           1608 ns       1608 ns     432758   37.9501MB/s
      BM_copy/512         12589 ns      12589 ns      54806   38.7867MB/s
      BM_copy/1024        25169 ns      25169 ns      27713   38.8003MB/s
      BM_copy/8192       201165 ns     201112 ns       3486   38.8466MB/s
      RUNNING: ./a.out --benchmark_out=/tmp/tmpt1wwG_
      Run on (8 X 4000 MHz CPU s)
      2017-11-07 21:16:53
      ------------------------------------------------------
      Benchmark               Time           CPU Iterations
      ------------------------------------------------------
      BM_memcpy/8            36 ns         36 ns   19397903   211.255MB/s
      BM_memcpy/64           73 ns         73 ns    9691174   839.635MB/s
      BM_memcpy/512          85 ns         85 ns    8312329   5.60101GB/s
      BM_memcpy/1024        118 ns        118 ns    6438774   8.11608GB/s
      BM_memcpy/8192        656 ns        656 ns    1068644   11.6277GB/s
      BM_copy/8             223 ns        223 ns    3146977   34.2338MB/s
      BM_copy/64           1611 ns       1611 ns     435340   37.8751MB/s
      BM_copy/512         12622 ns      12622 ns      54818   38.6844MB/s
      BM_copy/1024        25257 ns      25239 ns      27779   38.6927MB/s
      BM_copy/8192       205013 ns     205010 ns       3479    38.108MB/s
      Comparing ./a.out to ./a.out
      Benchmark                 Time             CPU      Time Old      Time New       CPU Old       CPU New
      ------------------------------------------------------------------------------------------------------
      BM_memcpy/8            +0.0020         +0.0020            36            36            36            36
      BM_memcpy/64           -0.0468         -0.0470            76            73            76            73
      BM_memcpy/512          +0.0081         +0.0083            84            85            84            85
      BM_memcpy/1024         +0.0098         +0.0097           116           118           116           118
      BM_memcpy/8192         +0.0200         +0.0203           643           656           643           656
      BM_copy/8              +0.0046         +0.0042           222           223           222           223
      BM_copy/64             +0.0020         +0.0020          1608          1611          1608          1611
      BM_copy/512            +0.0027         +0.0026         12589         12622         12589         12622
      BM_copy/1024           +0.0035         +0.0028         25169         25257         25169         25239
      BM_copy/8192           +0.0191         +0.0194        201165        205013        201112        205010
      ```
      
      2. Compare two different filters of one benchmark:
      (for simplicity, the benchmark is executed twice)
      ```
      $ ../tools/compare.py filters ./a.out BM_memcpy BM_copy
      RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmpBWKk0k
      Run on (8 X 4000 MHz CPU s)
      2017-11-07 21:37:28
      ------------------------------------------------------
      Benchmark               Time           CPU Iterations
      ------------------------------------------------------
      BM_memcpy/8            36 ns         36 ns   17891491   211.215MB/s
      BM_memcpy/64           74 ns         74 ns    9400999   825.646MB/s
      BM_memcpy/512          87 ns         87 ns    8027453   5.46126GB/s
      BM_memcpy/1024        111 ns        111 ns    6116853    8.5648GB/s
      BM_memcpy/8192        657 ns        656 ns    1064679   11.6247GB/s
      RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpAvWcOM
      Run on (8 X 4000 MHz CPU s)
      2017-11-07 21:37:33
      ----------------------------------------------------
      Benchmark             Time           CPU Iterations
      ----------------------------------------------------
      BM_copy/8           227 ns        227 ns    3038700   33.6264MB/s
      BM_copy/64         1640 ns       1640 ns     426893   37.2154MB/s
      BM_copy/512       12804 ns      12801 ns      55417   38.1444MB/s
      BM_copy/1024      25409 ns      25407 ns      27516   38.4365MB/s
      BM_copy/8192     202986 ns     202990 ns       3454   38.4871MB/s
      Comparing BM_memcpy to BM_copy (from ./a.out)
      Benchmark                               Time             CPU      Time Old      Time New       CPU Old       CPU New
      --------------------------------------------------------------------------------------------------------------------
      [BM_memcpy vs. BM_copy]/8            +5.2829         +5.2812            36           227            36           227
      [BM_memcpy vs. BM_copy]/64          +21.1719        +21.1856            74          1640            74          1640
      [BM_memcpy vs. BM_copy]/512        +145.6487       +145.6097            87         12804            87         12801
      [BM_memcpy vs. BM_copy]/1024       +227.1860       +227.1776           111         25409           111         25407
      [BM_memcpy vs. BM_copy]/8192       +308.1664       +308.2898           657        202986           656        202990
      ```
      
      3. Compare filter one from benchmark one to filter two from benchmark two:
      (for simplicity, the benchmark is executed twice)
      ```
      $ ../tools/compare.py benchmarksfiltered ./a.out BM_memcpy ./a.out BM_copy
      RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmp_FvbYg
      Run on (8 X 4000 MHz CPU s)
      2017-11-07 21:38:27
      ------------------------------------------------------
      Benchmark               Time           CPU Iterations
      ------------------------------------------------------
      BM_memcpy/8            37 ns         37 ns   18953482   204.118MB/s
      BM_memcpy/64           74 ns         74 ns    9206578   828.245MB/s
      BM_memcpy/512          91 ns         91 ns    8086195   5.25476GB/s
      BM_memcpy/1024        120 ns        120 ns    5804513   7.95662GB/s
      BM_memcpy/8192        664 ns        664 ns    1028363   11.4948GB/s
      RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpDfL5iE
      Run on (8 X 4000 MHz CPU s)
      2017-11-07 21:38:32
      ----------------------------------------------------
      Benchmark             Time           CPU Iterations
      ----------------------------------------------------
      BM_copy/8           230 ns        230 ns    2985909   33.1161MB/s
      BM_copy/64         1654 ns       1653 ns     419408   36.9137MB/s
      BM_copy/512       13122 ns      13120 ns      53403   37.2156MB/s
      BM_copy/1024      26679 ns      26666 ns      26575   36.6218MB/s
      BM_copy/8192     215068 ns     215053 ns       3221   36.3283MB/s
      Comparing BM_memcpy (from ./a.out) to BM_copy (from ./a.out)
      Benchmark                               Time             CPU      Time Old      Time New       CPU Old       CPU New
      --------------------------------------------------------------------------------------------------------------------
      [BM_memcpy vs. BM_copy]/8            +5.1649         +5.1637            37           230            37           230
      [BM_memcpy vs. BM_copy]/64          +21.4352        +21.4374            74          1654            74          1653
      [BM_memcpy vs. BM_copy]/512        +143.6022       +143.5865            91         13122            91         13120
      [BM_memcpy vs. BM_copy]/1024       +221.5903       +221.4790           120         26679           120         26666
      [BM_memcpy vs. BM_copy]/8192       +322.9059       +323.0096           664        215068           664        215053
      ```
      
      * [Docs] Document tools/compare.py
      
      * [docs] Document how the change is calculated
    • Reorder inline to avoid warning on MSVC (#469) · 90aa8665
      Dominic Hamon authored
      Fixes #467
    • Fix #382 - MinGW often reports negative CPU times. (#475) · 72a4581c
      Eric authored
      When stopping a timer, the current time is subtracted
      from the start time. However, when the times are identical,
      or sufficiently close together, the subtraction can result
      in a negative number.
      
      For some reason MinGW is the only platform where this problem
      manifests. I suspect it's due to MinGW specific behavior in either
      the CPU timing code, floating point model, or printf formatting.
      
      Either way, the fix for MinGW should be correct across all platforms.
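      The clamping approach described above can be sketched like this. The
      function name and surrounding scaffolding are hypothetical, not the
      library's actual timer code; only the clamp-at-zero idea is from the commit.
      ```
      #include <algorithm>
      #include <cstdio>

      // Sketch: when start and end timestamps are identical or nearly so,
      // floating-point subtraction can yield a tiny negative value, so the
      // result is clamped to zero.
      double ElapsedSeconds(double start, double end) {
        return std::max(end - start, 0.0);
      }

      int main() {
        // Near-identical timestamps: raw subtraction would be slightly negative.
        double start = 1000.000000001;
        double end = 1000.0;
        std::printf("%.1f\n", ElapsedSeconds(start, end));
        return 0;
      }
      ```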
  15. 06 Nov, 2017 1 commit
  16. 03 Nov, 2017 2 commits
  17. 02 Nov, 2017 1 commit
  18. 31 Oct, 2017 1 commit
    • Improve BM_SetInsert example (#465) · fa341e51
      Leo Koppel authored
      * Fix BM_SetInsert example
      
      Move declaration of `std::set<int> data` outside the timing loop, so that the
      destructor is not timed.
      
      * Speed up BM_SetInsert test
      
      The time taken by ConstructRandomSet() is large compared to the time to
      insert one element, but only the latter is used to determine the number of
      iterations, so this benchmark now takes an extremely long time to run in
      benchmark_test.
      
      Speed it up in two ways:
        - Increase the Ranges() parameters
        - Cache the ConstructRandomSet() result (it's not random anyway), and do only
          an O(N) copy every iteration
      
      * Fix same issue in BM_MapLookup test
      
      * Make BM_SetInsert test consistent with README
      
      - Use the same Ranges everywhere, but increase the 2nd range
      - Change the order of Args() calls in the README to more closely match the result of Ranges
      - Don't cache ConstructRandomSet, since it doesn't make sense in the README
      - Get a smaller optimization inside it, by giving a hint to insert()
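      The caching idea can be sketched outside the benchmark framework as
      follows. The names and contents are hypothetical stand-ins; only the
      pattern (build once, O(N) copy per iteration) is from the commit.
      ```
      #include <cassert>
      #include <cstdio>
      #include <set>

      // Sketch: construct the expensive set exactly once, then take an O(N)
      // copy each iteration instead of rebuilding it from scratch.
      static const std::set<int>& CachedBaseSet() {
        static const std::set<int> base = [] {
          std::set<int> s;
          for (int i = 0; i < 1000; ++i) s.insert(i * 2);  // deterministic contents
          return s;
        }();
        return base;
      }

      int main() {
        for (int iter = 0; iter < 3; ++iter) {
          std::set<int> data = CachedBaseSet();  // O(N) copy, not a rebuild
          data.insert(1);  // odd value, guaranteed not already present
          assert(data.size() == CachedBaseSet().size() + 1);
        }
        std::printf("ok\n");
        return 0;
      }
      ```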
  19. 20 Oct, 2017 1 commit
  20. 17 Oct, 2017 3 commits
    • Refactor most usages of KeepRunning to use the preferred range-based for. (#459) · 25acf220
      Eric authored
      Recently the library added a new ranged-for variant of the KeepRunning
      loop that is much faster. For this reason it should be preferred in all
      new code.
      
      Because a library, its documentation, and its tests should all embody
      the best practices of using the library, this patch changes all but a
      few usages of KeepRunning() into for (auto _ : state).
      
      The remaining usages in the tests and documentation persist only
      to document and test behavior that is different between the two formulations.
      
      Also note that because the range-for loop requires C++11, the KeepRunning
      variant has not been deprecated at this time.
    • Improve KeepRunning loop performance to be similar to the range-based for. (#460) · a37fc0c4
      Eric authored
      This patch improves the performance of the KeepRunning loop in two ways:
      
      (A) It removes the dependency on the max_iterations variable, preventing
      it from being loaded every iteration.
      
      (B) It loops down to zero instead of up to an upper bound. This allows a single
      decrement instruction to be used instead of an arithmetic op followed by a
      comparison.
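      A toy illustration of optimization (B), using a simplified stand-in for
      the real State class (the class here is hypothetical; the library's
      actual KeepRunning differs):
      ```
      #include <cstddef>
      #include <cstdio>

      // Sketch: cache the remaining-iterations counter and decrement it toward
      // zero, so each pass costs one decrement plus a branch rather than an
      // increment, a load of max_iterations, and a comparison.
      class MiniState {
       public:
        explicit MiniState(std::size_t max_iterations) : remaining_(max_iterations) {}
        bool KeepRunning() {
          if (remaining_ > 0) {
            --remaining_;  // single decrement, loops toward zero
            return true;
          }
          return false;
        }
       private:
        std::size_t remaining_;
      };

      int main() {
        int count = 0;
        MiniState state(5);
        while (state.KeepRunning()) ++count;
        std::printf("%d\n", count);
        return 0;
      }
      ```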
  21. 16 Oct, 2017 1 commit
  22. 13 Oct, 2017 1 commit
  23. 10 Oct, 2017 1 commit
    • Add C++11 Ranged For loop alternative to KeepRunning (#454) · 05267559
      Eric authored
      * Add C++11 Ranged For loop alternative to KeepRunning
      
      As pointed out by @astrelni and @dominichamon, the KeepRunning
      loop requires a bunch of memory loads and stores every iteration,
      which affects the measurements.
      
      The main reason for these additional loads and stores is that the
      State object is passed in by reference, making its contents externally
      visible memory, and the compiler doesn't know it hasn't been changed
      by non-visible code.
      
      It's also possible the large size of the State struct is hindering
      optimizations.
      
      This patch allows the `State` object to be iterated over using
      a range-based for loop. Example:
      
      ```
      void BM_Foo(benchmark::State& state) {
        for (auto _ : state) {
          [...]
        }
      }
      ```
      
      This formulation is much more efficient, because the variable counting
      the loop index is stored in the iterator produced by `State::begin()`,
      which itself is stored in function-local memory and is therefore not accessible
      by code outside of the function. The compiler therefore knows the iterator
      cannot have been changed between iterations by outside code.
      
      This initial patch and idea was from Alex Strelnikov.
      
      * Fix null pointer initialization in C++03
  24. 09 Oct, 2017 3 commits
  25. 27 Sep, 2017 2 commits