1. 31 Aug, 2015 1 commit
  2. 28 Aug, 2015 1 commit
  3. 25 Aug, 2015 2 commits
  4. 21 Aug, 2015 2 commits
  5. 20 Aug, 2015 3 commits
    • Use separate random number generator for each randomization pass · aee5fa8d
      Qining Lu authored
      This removes the random number generator from the GlobalContext class and decouples the different randomization passes.
      
      1. Add a new constructor for the random number generator which merges three arguments into one seed for the underlying random number generator implementation.
      
      RandomNumberGenerator(uint64_t Seed, RandomizationPassesEnum RandomizationPassID, uint64_t Salt=0)
      
      param Seed: Should be the global random number seed passed through command line.
      param RandomizationPassID: Should be the ID for different randomization passes.
      param Salt: Should be an additional integer salt, default to be 0.
      
      2. Move the creation of random number generators to the call sites of randomization passes. Each randomization pass creates its own random number generator with a specific salt value.
      
      Function reordering:		Salt = 0 (default)
      Basic Block reordering:		Salt = Function Sequence Number
      Global Variable reordering:	Salt = 0 (default)
      Pooled Constants reordering:	Salt = Constants' Kind value (return of getKind())
                     *Jump Tables:	Salt = 0
      Nop Insertion:			Salt = Function Sequence Number
      Register Alloc Randomization:	Salt = (Function Sequence Number << 1) ^ (Kind == RAK_Phi ? 0u : 1u)
      Constants Blinding:		Salt = Function Sequence Number
      
      *Jump tables are treated as pooled constants, but without Kind value as salt.
      
      BUG=
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1300993002.
    • Inline memmove for small constant sizes and refactor memcpy and memset. · cfa628b5
      Andrew Scull authored
      The memory intrinsics are only optimized at -O1 and higher unless the
      -fmem-intrin-opt flag is set to force the optimization to take place.
      
      This change also introduces the xchg instruction for two register operands. This
      is no longer used in the memory intrinsic lowering (or by anything else) but the
      implementation is left for future use.
      
      BUG=
      R=jvoung@chromium.org, stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1278173009.
    • Change to use arena allocation for function-local data in parser. · 209318af
      Karl Schimpf authored
      Changes the function parser to use the arena allocator of the CFG
      associated with the function for its vectors.
      
      BUG=None
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1293343003 .
  6. 17 Aug, 2015 1 commit
    • Restore function-local variables to use a vector. · 7a99327d
      Karl Schimpf authored
      CL 1282523002 changed the bitcode parser from using a vector to using
      an unordered map. This was done because one could forward-reference a
      local variable with a very large index, which would freeze the computer
      trying to allocate a vector large enough to contain the index.
      
      This patch goes back to using vectors. To fix the forward variable
      reference, we use the number of bytes in the function to determine if
      the index is possible. This stops very large (problematic) vector
      resizes from happening.
      
      BUG=None
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1293923002 .
  7. 14 Aug, 2015 1 commit
    • Change tracking of basic blocks (within function) to use a vector. · 98ed4464
      Karl Schimpf authored
      Changing the code to "preallocate" basic blocks in a vector, rather
      than dynamically creating them on demand. This has the advantage of not
      requiring basic blocks to be sorted after the bitcode is parsed.
      
      This also means that the names of the basic blocks remain constant,
      even during parsing, making debugging easier.
      
      The drawback is that the DECLAREBLOCKS bitcode record of a function
      block can define a very large number of basic blocks. To control this,
      we look at the function block size (within the bitstream) to determine
      the maximal number of basic blocks that could be defined. If the
      DECLAREBLOCKS record specifies a number larger than this, we generate
      an error and recover (if applicable).
      
      We also add a cleanup test that confirms the number of declared basic
      blocks corresponds to the number of basic blocks defined in the
      function.
      
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4261
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1297433002 .
  8. 12 Aug, 2015 2 commits
  9. 10 Aug, 2015 2 commits
    • Subzero: Misc fixes/cleanup. · 992f91dd
      Jim Stichnoth authored
      1. Fix MINIMAL build.
        (a) Add a void cast to a var only used in asserts.
        (b) Use "REQUIRES:" instead of "REQUIRES" in a .ll file.
      2. Use StrError instead of StrDump for errors.
      3. Use a lambda instead of a functor because C++11.
      4. Explicit check for -filetype=obj in a non-dump-enabled build, to avoid cryptic downstream error messages.
      5. Run "make format" which was neglected earlier.
      
      BUG= none
      R=kschimpf@google.com
      
      Review URL: https://codereview.chromium.org/1284493003.
    • Fix processing of local variable indices in function blocks. · c6acf08f
      Karl Schimpf authored
      The previous code used a vector to hold local values associated with
      indices in the bitcode file. The problem was that the vector would be
      expanded to match the index of a "variable index forward reference".
      If the index was very large, the program would freeze the computer
      trying to allocate an array large enough to contain the index.
      
      This patch fixes this by using a local unordered map instead of a
      vector.  Hence, forward index references just add a single entry into
      the map.
      
      Note that this fix doesn't have a corresponding issue. However, the
      problem was made apparent from the problems noted in issues 4257 and
      4261.
      
      BUG=None
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1282523002 .
  10. 09 Aug, 2015 1 commit
    • Add the ARM32 FP register table entries, simple arith, and args. · 86ebec12
      Jan Voung authored
      Lower some instructions, without much guarantee of
      correctness. *Running* generated code will be risky
      because the register allocator isn't aware of register
      aliasing.
      
      Fill in v{add,div,mul,sub}.f{32,64}, vmov, vldr
      and vsqrt.f{32,64}. I tried to make the nacl-other-intrinsics
      test not explode, so added vsqrt too. That was pretty
      easy for sqrt, but then fabs tests also exploded. Those are not
      truly fixed but are currently "fixed" by adding a FakeDef to
      satisfy liveness.
      
      Propagate float/double arguments to the right register
      in lowerArguments, lowerCall, and propagate to s0/d0/q0
      for lowerReturn. May need to double check the calling convention.
      Currently can't test call-ret because vpush/vpop for prologues
      and epilogues isn't done.
      
      Legalize FP immediates to make the nacl-other-intrinsics sqrt
      test happy. Use the correct type of load (vldr (.32 and .64 are
      optional) instead of ldr{b,h,,d}).
      
      Whether or not the float/vector instructions can be
      predicated is a bit interesting. The float/double ones
      can, but the SIMD versions cannot. E.g.
      
      vadd<cond>.f32 s0, s0, s1 is okay
      vadd<cond>.f32 q0, q0, q1 is not okay.
      
      For now, just omit conditions from instructions that may
      end up being reused for SIMD.
      
      Split up the fp.pnacl.ll test into multiple ones so that
      parts of lowering can be tested incrementally.
      
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4076
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1266263003 .
  11. 08 Aug, 2015 1 commit
  12. 07 Aug, 2015 3 commits
    • Fix processing of global variable indices in the global vars block. · aa0ce790
      Karl Schimpf authored
      The code used to use a vector to hold global variables associated with
      indices. The problem was that the count record in the global vars
      block would generate variables for the given count (even if very
      large).
      
      To fix this, we created a local unordered map to associate indices
      with defined/referenced globals. After processing the global vars
      block, this unordered map is used to verify that the size makes sense,
      and the recognized global variables are then installed into the
      (top-level) contents.
      
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4257
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1278793006 .
    • Inline memcpy for small constant sizes. · 9df4a379
      Andrew Scull authored
      Combined with memset inlining, this has shown an improvement of over 11% on
      the eon benchmark. This is the only C++ program in spec2k, and it seems C++
      programs have a significantly larger number of memset/memcpy calls. Other
      benchmarks also showed improvement of ~5% (perlbmk, parser) while most had
      a 1-2% improvement.
      
      This commit also refactors lowerMemset to be much more readable, and
      removes the fake use of the destination pointer register.
      
      BUG=
      R=jvoung@chromium.org, stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1279833005.
    • Subzero: Completely remove tracking of stack pointer live range. · f9df4523
      Jim Stichnoth authored
      Specifically, if a variable is marked with IgnoreLiveness=true, then:
        1. Completely avoid adding any segments to its live range
        2. Assert that no one tries to add segments to its live range
      
      This is done in part by incorporating Variable::IgnoreLiveness into Liveness::RangeMask.
      
      Also, change a functor into a lambda because C++11.
      
      BUG= none
      R=jvoung@chromium.org
      
      Review URL: https://codereview.chromium.org/1273823003.
  13. 06 Aug, 2015 4 commits
  14. 05 Aug, 2015 5 commits
    • Subzero: Fix an Om1 crash from memset lowering. · f6f9825e
      Jim Stichnoth authored
      With a certain combination of memset arguments, legalizeToReg() is called but its result is unused.  Om1 register allocation does not like this because it sees an infinite-weight variable with a definition but no uses.  The simplest fix is to add a fake use.
      
      The problem shows up when building spec2k with -Om1.
      
      BUG= none
      R=ascull@google.com
      
      Review URL: https://codereview.chromium.org/1272823004.
    • Subzero: Slight improvement to phi lowering. · 552490c2
      Jim Stichnoth authored
      When doing post phi lowering register allocation, the original code limited register allocation to pre-colored or infinite-weight variables with a non-empty live range within the new edge-split nodes.  This limitation ends up missing some opportunities.  Specifically, when a temporary is introduced to break a dependency cycle, e.g.:
        // a = phi(b)
        // b = phi(a)
        t = a
        a = b
        b = t
      then t was always stack-allocated, even if a physical register was available.
      
      In the new design, the RangeMask bitvector specifies which variables should have their live ranges tracked and computed.  For normal liveness analysis, all variables are tracked.  For post phi lowering liveness analysis, all variables created from phi lowering, plus all pre-colored variables, plus all infinite-weight variables, are tracked.
      
      The result is slightly better code quality, and sometimes the frame size is 1 or 2 words smaller.
      
      The hope was to narrow the 10% translation-time degradation in pnacl-llc.pexe compared to the old, hackish way of phi lowering, but that goal still proves to be elusive.
      
      BUG= none
      R=jvoung@chromium.org
      
      Review URL: https://codereview.chromium.org/1271923002.
    • Subzero. Implements x86-64 lowerCall. · e0d9afa8
      John Porto authored
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4077
      R=jvoung@chromium.org, stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1266673003.
    • Clarify which type "Label" refers to (generic vs X86) · c2ec5817
      Jan Voung authored
      There is a generic Label that can be used as-is for ARM
      and MIPS, and there is an x86 derived class Label which
      adds the concept of near and far (for 8-bit vs 32-bit
      jumps). Previously, one method "getOrCreateCfgNodeLabel"
      would say that it returns a Label when it means the generic
      one in some cases and the x86 one in other cases.
      
      Split that into getCfgNodeLabel and getOrCreateCfgNodeLabel
      where getCfgNodeLabel returns the generic one and
      getOrCreateCfgNodeLabel is part of the x86 code and
      returns the x86 one.
      
      BUG=none
      R=ascull@google.com, stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1265023003 .
    • Order jump tables for deterministic or randomized emission. · 1eda90a1
      Andrew Scull authored
      BUG=
      R=stichnot@chromium.org, jvoung, stichnot
      
      Review URL: https://codereview.chromium.org/1260183008.
  15. 04 Aug, 2015 2 commits
  16. 02 Aug, 2015 1 commit
    • Subzero: Expand the liveness consistency check. · b3bfcbcb
      Jim Stichnoth authored
      Liveness analysis includes a consistency check on each node, to verify that variables referenced in only one block do not appear to be live coming into a block (and are therefore live across multiple blocks).  This check was disabled in the entry block because there might be function arguments that are referenced only in the entry block but are still live coming in.
      
      It seems that this entry-block exclusion has been largely unnecessary for some time.  This is because input arguments and other special variables are now pre-marked as multi-block.  The exclusion masks problems in some single-block lit tests, so it's best if it can be removed.
      
      This CL removes the exclusion, and fixes some minor issues uncovered in the MIPS and ARM target lowering.  A key issue is that when implementing a new target lowering and using --skip-unimplemented to make progress with existing tests, it may be necessary to add FakeDef instructions to avoid liveness inconsistency errors.
      
      Note that when this patch is applied to 448c16f0, it correctly identifies the liveness consistency error (as shown by a "make check-lit" failure) that was fixed in 59f2d925.
      
      BUG= none
      R=jpp@chromium.org
      
      Review URL: https://codereview.chromium.org/1265093002.
  17. 31 Jul, 2015 5 commits
    • Subzero. Buildable, non-functional TargetLoweringX8664. · 453660ff
      John Porto authored
      This CL adds a TargetLoweringX8664 that inherits from TargetX86Base, but
      other than that it does nothing to generate runnable code.
      
      Things that need to be addressed in follow up CLs:
      1) lowerCall
      2) lowerArguments
      3) lowerRet
      4) addPrologue
      5) addEpilogue
      6) Native 64-bit arithmetic
      7) 32- to 64-bit addressing
      
      (7) will be particularly interesting. Pointers in Pexes are always
      32-bit wide, so pexes have a de facto 32-bit address space. In
      Sandboxed mode that's solved by using RZP (i.e., r15) as a base
      register. For native codegen, we still need to decide what to do
      -- very likely we will start targeting X32.
      
      NOTE: This CL also
      
      s/IceType_ForceRexW/RexTypeForceRexW/g
      
      because I forgot to do it in the X8664 assembler cl.
      
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4077
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1257643004.
    • Subzero. Misc fixes. · 59f2d925
      John Porto authored
      This CL disables the X86 assembler tests by default. They take too long
      to compile, so there's very little point in running them with the other
      unittests.
      
      This CL fixes a bug introduced in
      https://codereview.chromium.org/1260163003/ that caused liveness
      analysis to assert due to a uninitialized Variable.
      
      BUG=
      R=jvoung@chromium.org, stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1266863002.
    • ARM: Add a postRA pass to legalize stack offsets. Greedy approach (reserve IP). · 28068adb
      Jan Voung authored
      Make a post-register allocation and post-addProlog pass to
      go through variables with stack offsets and legalize them
      in case the offsets are not encodeable. The naive approach
      is to reserve IP, and use IP to movw/movt the offset, then
      add/sub the frame/stack pointer to IP and use IP as the new
      base instead of the frame/stack pointer. We do some amount
      of CSE within a basic block, and share the IP base pointer
      when it is (a) within range for later stack references,
      and (b) IP hasn't been clobbered (e.g., by a function call).
      I chose to do this greedy approach for both Om1 and O2,
      since it should just be a linear pass, and it reduces the
      amount of variables/instructions created compared to the
      super-naive peephole approach (so might be faster?).
      
      Introduce a test-only flag and use that to artificially
      bloat the stack frame so that spill offsets are out
      of range for ARM. Use that flag for cross tests to
      stress this new code a bit more (than would have been
      stressed by simply doing a lit test + FileCheck).
      
      Also, the previous version of emitVariable() was using the
      Var's type to determine the range (only +/- 255 for i16,
      vs +/- 4095 for i32), even though mov's emit() always
      uses a full 32-bit "ldr" instead of a 16-bit "ldrh".
      Use a common legality check, which uses the stackSlotType
      instead of the Var's type. This previously caused the
      test_bitmanip to spuriously complain, even though the
      offsets for Om1 were "only" in the 300 byte range. With this
      fixed, we can then enable the test_bitmanip test too.
      
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4076
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1241763002 .
    • Add -reorder-basic-blocks option and fix nop insertion · 969f6a33
      Qining Lu authored
      1. Basic block reordering can be enabled with the -reorder-basic-blocks option.
      
      Blocks are sorted in reverse post-order, but the next node to visit
      among the candidate successor nodes is selected 'randomly'.
      
      Example:
        A
       / \
      B   C
       \ /
        D
      
      This CFG can generate two possible layouts:
      A-B-C-D or A-C-B-D
      
      2. Fix nop insertion
      
      Add checks to avoid insertions in empty basic blocks (dead blocks) and in bundle-locked instructions.
      
      BUG=
      R=jpp@chromium.org, jvoung@chromium.org, stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1255303004.
    • Fix a -Wcovered-switch-default warning in emitJumpTables. · c2648c2d
      Jan Voung authored
      The Subzero build inside of the LLVM build system turns this on.
      
      BUG=none
      R=ascull@google.com
      
      Review URL: https://codereview.chromium.org/1264913005 .
  18. 30 Jul, 2015 2 commits
    • Iasm and obj lowering for advanced switch lowering. · 86df4e9e
      Andrew Scull authored
      Jump table emission is delayed until offsets are known. X86 local jumps can be
      near or far. Sandboxing is applied to indirect jumps from the jump table.
      
      BUG=
      R=stichnot@chromium.org, jvoung
      
      Review URL: https://codereview.chromium.org/1257283004.
    • Subzero: Cleanly implement register allocation after phi lowering. · a3f57b9a
      Jim Stichnoth authored
      After finding a valid linearization of phi assignments, the old approach calls a complicated target-specific method that lowers the phi assignments and register-allocates them in an ad-hoc way.
      
      In the new approach, we use existing target lowering to lower assignments into mov/whatever instructions, and enhance the register allocator to be able to forcibly spill and reload a register if one is needed but none are available.
      
      The new approach incrementally updates liveness and live ranges for newly added nodes and variable uses, to avoid having to expensively recompute it all.
      
      Advanced phi lowering is enabled now on ARM, and constant blinding no longer needs to be disabled during phi lowering.
      
      Some of the metadata regarding which CfgNode a local variable belongs to, needed to be made non-const, in order to add spill/fill instructions to a CfgNode during register allocation.
      
      Most of the testing came from spec2k.  There are some minor differences in the output regarding stack frame offsets, probably related to the order that new nodes are phi-lowered.  The changes related to constant blinding were tested by running spec with "-randomize-pool-immediates=randomize -randomize-pool-threshold=8".
      
      Unfortunately, this appears to add about 10% to the translation time for 176.gcc.  The cost is clear in the -timing output so it can be investigated later.  There is a TODO suggesting the possible cause and solution, for later investigation.
      
      BUG= none
      R=jvoung@chromium.org
      
      Review URL: https://codereview.chromium.org/1253833002.
  19. 28 Jul, 2015 1 commit
    • Removes references to ah. · 448c16f0
      John Porto authored
      AH is a thorn in the flesh for our X86-64 backend. The assembler was
      designed to always encode the low 8-bit registers, so %ah would become
      %spl. While it is true we **could** force %spl to always be encoded as
      %ah, that would not work if the instruction has a rex prefix.
      
      This CL removes references to %ah from TargetX86Base. There used to be
      2 uses of ah in the target lowering:
      
      1) To zero-extend %al before an unsigned div:
          mov <<src0>>, %al
          mov 0, %ah
          div <<src1>>
      
      This pattern has been changed to
          xor %eax, %eax
          mov <<src0>>, %al
          div <<src1>>
      
      2) To access the 8-bit remainder for 8-bit division:
          mov %ah, <<dest>>
      
      This pattern has been changed to
          shr $8, %eax
          mov %al, <<dest>>
      
      BUG= https://code.google.com/p/nativeclient/issues/detail?id=4077
      R=stichnot@chromium.org
      
      Review URL: https://codereview.chromium.org/1260163003.