LCOV - code coverage report
Current view: top level - gcc - tree-ssa-ccp.cc (source / functions)
Test: gcc.info          Test Date: 2026-02-28 14:20:25
Coverage:  Lines: 94.7 % (1479 of 1562)    Functions: 98.0 % (50 of 51)

            Line data    Source code
       1              : /* Conditional constant propagation pass for the GNU compiler.
       2              :    Copyright (C) 2000-2026 Free Software Foundation, Inc.
       3              :    Adapted from original RTL SSA-CCP by Daniel Berlin <dberlin@dberlin.org>
       4              :    Adapted to GIMPLE trees by Diego Novillo <dnovillo@redhat.com>
       5              : 
       6              : This file is part of GCC.
       7              : 
       8              : GCC is free software; you can redistribute it and/or modify it
       9              : under the terms of the GNU General Public License as published by the
      10              : Free Software Foundation; either version 3, or (at your option) any
      11              : later version.
      12              : 
      13              : GCC is distributed in the hope that it will be useful, but WITHOUT
      14              : ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
      15              : FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
      16              : for more details.
      17              : 
      18              : You should have received a copy of the GNU General Public License
      19              : along with GCC; see the file COPYING3.  If not see
      20              : <http://www.gnu.org/licenses/>.  */
      21              : 
      22              : /* Conditional constant propagation (CCP) is based on the SSA
      23              :    propagation engine (tree-ssa-propagate.cc).  Constant assignments of
      24              :    the form VAR = CST are propagated from the assignments into uses of
      25              :    VAR, which in turn may generate new constants.  The simulation uses
       26              :    a four-level lattice to keep track of constant values associated
      27              :    with SSA names.  Given an SSA name V_i, it may take one of the
      28              :    following values:
      29              : 
      30              :         UNINITIALIZED   ->  the initial state of the value.  This value
      31              :                             is replaced with a correct initial value
      32              :                             the first time the value is used, so the
      33              :                             rest of the pass does not need to care about
      34              :                             it.  Using this value simplifies initialization
      35              :                             of the pass, and prevents us from needlessly
      36              :                             scanning statements that are never reached.
      37              : 
      38              :         UNDEFINED       ->  V_i is a local variable whose definition
      39              :                             has not been processed yet.  Therefore we
      40              :                             don't yet know if its value is a constant
      41              :                             or not.
      42              : 
      43              :         CONSTANT        ->  V_i has been found to hold a constant
      44              :                             value C.
      45              : 
      46              :         VARYING         ->  V_i cannot take a constant value, or if it
      47              :                             does, it is not possible to determine it
      48              :                             at compile time.
      49              : 
      50              :    The core of SSA-CCP is in ccp_visit_stmt and ccp_visit_phi_node:
      51              : 
      52              :    1- In ccp_visit_stmt, we are interested in assignments whose RHS
      53              :       evaluates into a constant and conditional jumps whose predicate
      54              :       evaluates into a boolean true or false.  When an assignment of
      55              :       the form V_i = CONST is found, V_i's lattice value is set to
      56              :       CONSTANT and CONST is associated with it.  This causes the
       57              :       propagation engine to add all the SSA edges coming out of the
       58              :       assignment into the worklists, so that statements that use V_i
      59              :       can be visited.
      60              : 
      61              :       If the statement is a conditional with a constant predicate, we
      62              :       mark the outgoing edges as executable or not executable
      63              :       depending on the predicate's value.  This is then used when
      64              :       visiting PHI nodes to know when a PHI argument can be ignored.
      65              : 
      66              : 
      67              :    2- In ccp_visit_phi_node, if all the PHI arguments evaluate to the
      68              :       same constant C, then the LHS of the PHI is set to C.  This
      69              :       evaluation is known as the "meet operation".  Since one of the
      70              :       goals of this evaluation is to optimistically return constant
       71              :       values as often as possible, it uses two main shortcuts:
      72              : 
      73              :       - If an argument is flowing in through a non-executable edge, it
      74              :         is ignored.  This is useful in cases like this:
      75              : 
      76              :                         if (PRED)
      77              :                           a_9 = 3;
      78              :                         else
      79              :                           a_10 = 100;
      80              :                         a_11 = PHI (a_9, a_10)
      81              : 
      82              :         If PRED is known to always evaluate to false, then we can
      83              :         assume that a_11 will always take its value from a_10, meaning
       84              :         that instead of considering it VARYING (a_9 and a_10 have
      85              :         different values), we can consider it CONSTANT 100.
      86              : 
      87              :       - If an argument has an UNDEFINED value, then it does not affect
      88              :         the outcome of the meet operation.  If a variable V_i has an
      89              :         UNDEFINED value, it means that either its defining statement
      90              :         hasn't been visited yet or V_i has no defining statement, in
      91              :         which case the original symbol 'V' is being used
      92              :         uninitialized.  Since 'V' is a local variable, the compiler
      93              :         may assume any initial value for it.
      94              : 
      95              : 
      96              :    After propagation, every variable V_i that ends up with a lattice
      97              :    value of CONSTANT will have the associated constant value in the
      98              :    array CONST_VAL[i].VALUE.  That is fed into substitute_and_fold for
      99              :    final substitution and folding.
     100              : 
     101              :    This algorithm uses wide-ints at the max precision of the target.
     102              :    This means that, with one uninteresting exception, variables with
     103              :    UNSIGNED types never go to VARYING because the bits above the
     104              :    precision of the type of the variable are always zero.  The
     105              :    uninteresting case is a variable of UNSIGNED type that has the
     106              :    maximum precision of the target.  Such variables can go to VARYING,
      107              :    but this causes no loss of information since these variables will
     108              :    never be extended.
     109              : 
     110              :    References:
     111              : 
     112              :      Constant propagation with conditional branches,
     113              :      Wegman and Zadeck, ACM TOPLAS 13(2):181-210.
     114              : 
     115              :      Building an Optimizing Compiler,
     116              :      Robert Morgan, Butterworth-Heinemann, 1998, Section 8.9.
     117              : 
     118              :      Advanced Compiler Design and Implementation,
     119              :      Steven Muchnick, Morgan Kaufmann, 1997, Section 12.6  */
     120              : 
     121              : #include "config.h"
     122              : #include "system.h"
     123              : #include "coretypes.h"
     124              : #include "backend.h"
     125              : #include "target.h"
     126              : #include "tree.h"
     127              : #include "gimple.h"
     128              : #include "tree-pass.h"
     129              : #include "ssa.h"
     130              : #include "gimple-pretty-print.h"
     131              : #include "fold-const.h"
     132              : #include "gimple-iterator.h"
     133              : #include "gimple-fold.h"
     134              : #include "tree-eh.h"
     135              : #include "gimplify.h"
     136              : #include "tree-cfg.h"
     137              : #include "tree-ssa-propagate.h"
     138              : #include "dbgcnt.h"
     139              : #include "builtins.h"
     140              : #include "cfgloop.h"
     141              : #include "stor-layout.h"
     142              : #include "optabs-query.h"
     143              : #include "tree-ssa-ccp.h"
     144              : #include "tree-dfa.h"
     145              : #include "diagnostic-core.h"
     146              : #include "stringpool.h"
     147              : #include "attribs.h"
     148              : #include "tree-vector-builder.h"
     149              : #include "cgraph.h"
     150              : #include "alloc-pool.h"
     151              : #include "symbol-summary.h"
     152              : #include "ipa-utils.h"
     153              : #include "sreal.h"
     154              : #include "ipa-cp.h"
     155              : #include "ipa-prop.h"
     156              : #include "internal-fn.h"
     157              : #include "gimple-range.h"
     158              : #include "tree-ssa-strlen.h"
     159              : 
     160              : /* Possible lattice values.  */
     161              : typedef enum
     162              : {
     163              :   UNINITIALIZED,
     164              :   UNDEFINED,
     165              :   CONSTANT,
     166              :   VARYING
     167              : } ccp_lattice_t;
     168              : 
     169    773705447 : class ccp_prop_value_t {
     170              : public:
     171              :     /* Lattice value.  */
     172              :     ccp_lattice_t lattice_val;
     173              : 
     174              :     /* Propagated value.  */
     175              :     tree value;
     176              : 
     177              :     /* Mask that applies to the propagated value during CCP.  For X
     178              :        with a CONSTANT lattice value X & ~mask == value & ~mask.  The
     179              :        zero bits in the mask cover constant values.  The ones mean no
     180              :        information.  */
     181              :     widest_int mask;
     182              : };
     183              : 
     184      5537665 : class ccp_propagate : public ssa_propagation_engine
     185              : {
     186              :  public:
     187              :   enum ssa_prop_result visit_stmt (gimple *, edge *, tree *) final override;
     188              :   enum ssa_prop_result visit_phi (gphi *) final override;
     189              : };
     190              : 
     191              : /* Array of propagated constant values.  After propagation,
     192              :    CONST_VAL[I].VALUE holds the constant value for SSA_NAME(I).  If
     193              :    the constant is held in an SSA name representing a memory store
     194              :    (i.e., a VDEF), CONST_VAL[I].MEM_REF will contain the actual
     195              :    memory reference used to store (i.e., the LHS of the assignment
     196              :    doing the store).  */
     197              : static ccp_prop_value_t *const_val;
     198              : static unsigned n_const_val;
     199              : 
     200              : static void canonicalize_value (ccp_prop_value_t *);
     201              : static void ccp_lattice_meet (ccp_prop_value_t *, ccp_prop_value_t *);
     202              : 
     203              : /* Dump constant propagation value VAL to file OUTF prefixed by PREFIX.  */
     204              : 
     205              : static void
     206           55 : dump_lattice_value (FILE *outf, const char *prefix, ccp_prop_value_t val)
     207              : {
     208           55 :   switch (val.lattice_val)
     209              :     {
     210            0 :     case UNINITIALIZED:
     211            0 :       fprintf (outf, "%sUNINITIALIZED", prefix);
     212            0 :       break;
     213            1 :     case UNDEFINED:
     214            1 :       fprintf (outf, "%sUNDEFINED", prefix);
     215            1 :       break;
     216           15 :     case VARYING:
     217           15 :       fprintf (outf, "%sVARYING", prefix);
     218           15 :       break;
     219           39 :     case CONSTANT:
     220           39 :       if (TREE_CODE (val.value) != INTEGER_CST
     221           39 :           || val.mask == 0)
     222              :         {
     223           36 :           fprintf (outf, "%sCONSTANT ", prefix);
     224           36 :           print_generic_expr (outf, val.value, dump_flags);
     225              :         }
     226              :       else
     227              :         {
     228            3 :           widest_int cval = wi::bit_and_not (wi::to_widest (val.value),
     229            3 :                                              val.mask);
     230            3 :           fprintf (outf, "%sCONSTANT ", prefix);
     231            3 :           print_hex (cval, outf);
     232            3 :           fprintf (outf, " (");
     233            3 :           print_hex (val.mask, outf);
     234            3 :           fprintf (outf, ")");
     235            3 :         }
     236              :       break;
     237            0 :     default:
     238            0 :       gcc_unreachable ();
     239              :     }
     240           55 : }
     241              : 
     242              : 
     243              : /* Print lattice value VAL to stderr.  */
     244              : 
     245              : void debug_lattice_value (ccp_prop_value_t val);
     246              : 
     247              : DEBUG_FUNCTION void
     248            0 : debug_lattice_value (ccp_prop_value_t val)
     249              : {
     250            0 :   dump_lattice_value (stderr, "", val);
     251            0 :   fprintf (stderr, "\n");
     252            0 : }
     253              : 
     254              : /* Extend NONZERO_BITS to a full mask, based on sgn.  */
     255              : 
     256              : static widest_int
     257     51459027 : extend_mask (const wide_int &nonzero_bits, signop sgn)
     258              : {
     259     51459027 :   return widest_int::from (nonzero_bits, sgn);
     260              : }
     261              : 
     262              : /* Compute a default value for variable VAR and store it in the
     263              :    CONST_VAL array.  The following rules are used to get default
     264              :    values:
     265              : 
     266              :    1- Global and static variables that are declared constant are
     267              :       considered CONSTANT.
     268              : 
     269              :    2- Any other value is considered UNDEFINED.  This is useful when
     270              :       considering PHI nodes.  PHI arguments that are undefined do not
     271              :       change the constant value of the PHI node, which allows for more
     272              :       constants to be propagated.
     273              : 
     274              :    3- Variables defined by statements other than assignments and PHI
     275              :       nodes are considered VARYING.
     276              : 
     277              :    4- Initial values of variables that are not GIMPLE registers are
     278              :       considered VARYING.  */
     279              : 
     280              : static ccp_prop_value_t
     281     10420436 : get_default_value (tree var)
     282              : {
     283     10420436 :   ccp_prop_value_t val = { UNINITIALIZED, NULL_TREE, 0 };
     284     10420436 :   gimple *stmt;
     285              : 
     286     10420436 :   stmt = SSA_NAME_DEF_STMT (var);
     287              : 
     288     10420436 :   if (gimple_nop_p (stmt))
     289              :     {
     290              :       /* Variables defined by an empty statement are those used
     291              :          before being initialized.  If VAR is a local variable, we
     292              :          can assume initially that it is UNDEFINED, otherwise we must
     293              :          consider it VARYING.  */
     294      9631511 :       if (!virtual_operand_p (var)
     295      9631511 :           && SSA_NAME_VAR (var)
     296     19262924 :           && VAR_P (SSA_NAME_VAR (var)))
     297      1560974 :         val.lattice_val = UNDEFINED;
     298              :       else
     299              :         {
     300      8070537 :           val.lattice_val = VARYING;
     301      8070537 :           val.mask = -1;
     302      8070537 :           if (flag_tree_bit_ccp && !VECTOR_TYPE_P (TREE_TYPE (var)))
     303              :             {
     304      7633244 :               wide_int nonzero_bits = get_nonzero_bits (var);
     305      7633244 :               tree value;
     306      7633244 :               widest_int mask;
     307              : 
     308      7633244 :               if (SSA_NAME_VAR (var)
     309      7633146 :                   && TREE_CODE (SSA_NAME_VAR (var)) == PARM_DECL
     310      7570990 :                   && ipcp_get_parm_bits (SSA_NAME_VAR (var), &value, &mask))
     311              :                 {
     312        83989 :                   val.lattice_val = CONSTANT;
     313        83989 :                   val.value = value;
     314        83989 :                   widest_int ipa_value = wi::to_widest (value);
     315              :                   /* Unknown bits from IPA CP must be equal to zero.  */
     316        83989 :                   gcc_assert (wi::bit_and (ipa_value, mask) == 0);
     317        83989 :                   val.mask = mask;
     318        83989 :                   if (nonzero_bits != -1)
     319        67145 :                     val.mask &= extend_mask (nonzero_bits,
     320        67145 :                                              TYPE_SIGN (TREE_TYPE (var)));
     321        83989 :                 }
     322      7549255 :               else if (nonzero_bits != -1)
     323              :                 {
     324         1249 :                   val.lattice_val = CONSTANT;
     325         1249 :                   val.value = build_zero_cst (TREE_TYPE (var));
     326         1249 :                   val.mask = extend_mask (nonzero_bits,
     327         1249 :                                           TYPE_SIGN (TREE_TYPE (var)));
     328              :                 }
     329      7633363 :             }
     330              :         }
     331              :     }
     332       788925 :   else if (is_gimple_assign (stmt))
     333              :     {
     334       684197 :       tree cst;
     335       684197 :       if (gimple_assign_single_p (stmt)
     336       362942 :           && DECL_P (gimple_assign_rhs1 (stmt))
     337       696622 :           && (cst = get_symbol_constant_value (gimple_assign_rhs1 (stmt))))
     338              :         {
     339           97 :           val.lattice_val = CONSTANT;
     340           97 :           val.value = cst;
     341              :         }
     342              :       else
     343              :         {
     344              :           /* Any other variable defined by an assignment is considered
     345              :              UNDEFINED.  */
     346       684100 :           val.lattice_val = UNDEFINED;
     347              :         }
     348              :     }
     349       104728 :   else if ((is_gimple_call (stmt)
     350        23733 :             && gimple_call_lhs (stmt) != NULL_TREE)
     351       104728 :            || gimple_code (stmt) == GIMPLE_PHI)
     352              :     {
     353              :       /* A variable defined by a call or a PHI node is considered
     354              :          UNDEFINED.  */
     355       104671 :       val.lattice_val = UNDEFINED;
     356              :     }
     357              :   else
     358              :     {
     359              :       /* Otherwise, VAR will never take on a constant value.  */
     360           57 :       val.lattice_val = VARYING;
     361           57 :       val.mask = -1;
     362              :     }
     363              : 
     364     10420436 :   return val;
     365              : }
     366              : 
     367              : 
     368              : /* Get the constant value associated with variable VAR.  */
     369              : 
     370              : static inline ccp_prop_value_t *
     371   3002561136 : get_value (tree var)
     372              : {
     373   3002561136 :   ccp_prop_value_t *val;
     374              : 
     375   3002561136 :   if (const_val == NULL
     376   6005122272 :       || SSA_NAME_VERSION (var) >= n_const_val)
     377              :     return NULL;
     378              : 
     379   3002552219 :   val = &const_val[SSA_NAME_VERSION (var)];
     380   3002552219 :   if (val->lattice_val == UNINITIALIZED)
     381     10420436 :     *val = get_default_value (var);
     382              : 
     383   3002552219 :   canonicalize_value (val);
     384              : 
     385   3002552219 :   return val;
     386              : }
     387              : 
     388              : /* Return the constant tree value associated with VAR.  */
     389              : 
     390              : static inline tree
     391   2339476990 : get_constant_value (tree var)
     392              : {
     393   2339476990 :   ccp_prop_value_t *val;
     394   2339476990 :   if (TREE_CODE (var) != SSA_NAME)
     395              :     {
     396         1179 :       if (is_gimple_min_invariant (var))
     397              :         return var;
     398              :       return NULL_TREE;
     399              :     }
     400   2339475811 :   val = get_value (var);
     401   2339475811 :   if (val
     402   2339467354 :       && val->lattice_val == CONSTANT
     403   2785082245 :       && (TREE_CODE (val->value) != INTEGER_CST
     404   2299584525 :           || val->mask == 0))
     405     61557018 :     return val->value;
     406              :   return NULL_TREE;
     407              : }
     408              : 
     409              : /* Sets the value associated with VAR to VARYING.  */
     410              : 
     411              : static inline void
     412     58459370 : set_value_varying (tree var)
     413              : {
     414     58459370 :   ccp_prop_value_t *val = &const_val[SSA_NAME_VERSION (var)];
     415              : 
     416     58459370 :   val->lattice_val = VARYING;
     417     58459370 :   val->value = NULL_TREE;
     418     58459370 :   val->mask = -1;
     419     58459370 : }
     420              : 
     421              : /* For integer constants, make sure to drop TREE_OVERFLOW.  */
     422              : 
     423              : static void
     424   3414475448 : canonicalize_value (ccp_prop_value_t *val)
     425              : {
     426   3414475448 :   if (val->lattice_val != CONSTANT)
     427              :     return;
     428              : 
     429   1212056567 :   if (TREE_OVERFLOW_P (val->value))
     430           60 :     val->value = drop_tree_overflow (val->value);
     431              : }
     432              : 
     433              : /* Return whether the lattice transition is valid.  */
     434              : 
     435              : static bool
     436    263417981 : valid_lattice_transition (ccp_prop_value_t old_val, ccp_prop_value_t new_val)
     437              : {
     438              :   /* Lattice transitions must always be monotonically increasing in
     439              :      value.  */
     440    263417981 :   if (old_val.lattice_val < new_val.lattice_val)
     441              :     return true;
     442              : 
     443    165059155 :   if (old_val.lattice_val != new_val.lattice_val)
     444              :     return false;
     445              : 
     446    165059155 :   if (!old_val.value && !new_val.value)
     447              :     return true;
     448              : 
     449              :   /* Now both lattice values are CONSTANT.  */
     450              : 
     451              :   /* Allow arbitrary copy changes as we might look through PHI <a_1, ...>
     452              :      when only a single copy edge is executable.  */
     453    165022996 :   if (TREE_CODE (old_val.value) == SSA_NAME
     454        37378 :       && TREE_CODE (new_val.value) == SSA_NAME)
     455              :     return true;
     456              : 
     457              :   /* Allow transitioning from a constant to a copy.  */
     458    164985618 :   if (is_gimple_min_invariant (old_val.value)
     459    164985618 :       && TREE_CODE (new_val.value) == SSA_NAME)
     460              :     return true;
     461              : 
     462              :   /* Allow transitioning from PHI <&x, not executable> == &x
     463              :      to PHI <&x, &y> == common alignment.  */
     464    164780108 :   if (TREE_CODE (old_val.value) != INTEGER_CST
     465       425656 :       && TREE_CODE (new_val.value) == INTEGER_CST)
     466              :     return true;
     467              : 
     468              :   /* Bit-lattices have to agree in the still valid bits.  */
     469    164381368 :   if (TREE_CODE (old_val.value) == INTEGER_CST
     470    164354452 :       && TREE_CODE (new_val.value) == INTEGER_CST)
     471    328708904 :     return (wi::bit_and_not (wi::to_widest (old_val.value), new_val.mask)
     472    493063356 :             == wi::bit_and_not (wi::to_widest (new_val.value), new_val.mask));
     473              : 
     474              :   /* Otherwise constant values have to agree.  */
     475        26916 :   if (operand_equal_p (old_val.value, new_val.value, 0))
     476              :     return true;
     477              : 
     478              :   /* At least the kinds and types should agree now.  */
     479            0 :   if (TREE_CODE (old_val.value) != TREE_CODE (new_val.value)
     480            0 :       || !types_compatible_p (TREE_TYPE (old_val.value),
     481            0 :                               TREE_TYPE (new_val.value)))
     482            0 :     return false;
     483              : 
     484              :   /* For floats and !HONOR_NANS allow transitions from (partial) NaN
     485              :      to non-NaN.  */
     486            0 :   tree type = TREE_TYPE (new_val.value);
     487            0 :   if (SCALAR_FLOAT_TYPE_P (type)
     488            0 :       && !HONOR_NANS (type))
     489              :     {
     490            0 :       if (REAL_VALUE_ISNAN (TREE_REAL_CST (old_val.value)))
     491              :         return true;
     492              :     }
     493            0 :   else if (VECTOR_FLOAT_TYPE_P (type)
     494            0 :            && !HONOR_NANS (type))
     495              :     {
     496            0 :       unsigned int count
     497            0 :         = tree_vector_builder::binary_encoded_nelts (old_val.value,
     498            0 :                                                      new_val.value);
     499            0 :       for (unsigned int i = 0; i < count; ++i)
     500            0 :         if (!REAL_VALUE_ISNAN
     501              :                (TREE_REAL_CST (VECTOR_CST_ENCODED_ELT (old_val.value, i)))
     502            0 :             && !operand_equal_p (VECTOR_CST_ENCODED_ELT (old_val.value, i),
     503            0 :                                  VECTOR_CST_ENCODED_ELT (new_val.value, i), 0))
     504              :           return false;
     505              :       return true;
     506              :     }
     507            0 :   else if (COMPLEX_FLOAT_TYPE_P (type)
     508            0 :            && !HONOR_NANS (type))
     509              :     {
     510            0 :       if (!REAL_VALUE_ISNAN (TREE_REAL_CST (TREE_REALPART (old_val.value)))
     511            0 :           && !operand_equal_p (TREE_REALPART (old_val.value),
     512            0 :                                TREE_REALPART (new_val.value), 0))
     513              :         return false;
     514            0 :       if (!REAL_VALUE_ISNAN (TREE_REAL_CST (TREE_IMAGPART (old_val.value)))
     515            0 :           && !operand_equal_p (TREE_IMAGPART (old_val.value),
     516            0 :                                TREE_IMAGPART (new_val.value), 0))
     517              :         return false;
     518            0 :       return true;
     519              :     }
     520              :   return false;
     521              : }
     522              : 
     523              : /* Set the value for variable VAR to NEW_VAL.  Return true if the new
     524              :    value is different from VAR's previous value.  */
     525              : 
     526              : static bool
     527    263417981 : set_lattice_value (tree var, ccp_prop_value_t *new_val)
     528              : {
     529              :   /* We can deal with old UNINITIALIZED values just fine here.  */
     530    263417981 :   ccp_prop_value_t *old_val = &const_val[SSA_NAME_VERSION (var)];
     531              : 
     532    263417981 :   canonicalize_value (new_val);
     533              : 
     534              :   /* We have to be careful to not go up the bitwise lattice
     535              :      represented by the mask.  Instead of dropping to VARYING
     536              :      use the meet operator to retain a conservative value.
     537              :      Missed optimizations like PR65851 makes this necessary.
     538              :      It also ensures we converge to a stable lattice solution.  */
     539    263417981 :   if (old_val->lattice_val != UNINITIALIZED
     540              :       /* But avoid using meet for constant -> copy transitions.  */
     541    171246280 :       && !(old_val->lattice_val == CONSTANT
     542    171167525 :            && CONSTANT_CLASS_P (old_val->value)
     543    168269375 :            && new_val->lattice_val == CONSTANT
     544    164568954 :            && TREE_CODE (new_val->value) == SSA_NAME))
     545    171040770 :     ccp_lattice_meet (new_val, old_val);
     546              : 
     547    526835962 :   gcc_checking_assert (valid_lattice_transition (*old_val, *new_val));
     548              : 
     549              :   /* If *OLD_VAL and NEW_VAL are the same, return false to inform the
     550              :      caller that this was a non-transition.  */
     551    526835962 :   if (old_val->lattice_val != new_val->lattice_val
     552    263417981 :       || (new_val->lattice_val == CONSTANT
     553    165022996 :           && (TREE_CODE (new_val->value) != TREE_CODE (old_val->value)
     554    164418746 :               || (TREE_CODE (new_val->value) == INTEGER_CST
     555    164354452 :                   && (new_val->mask != old_val->mask
     556     46488190 :                       || (wi::bit_and_not (wi::to_widest (old_val->value),
     557              :                                            new_val->mask)
     558    309841877 :                           != wi::bit_and_not (wi::to_widest (new_val->value),
     559              :                                               new_val->mask))))
     560     15538926 :               || (TREE_CODE (new_val->value) != INTEGER_CST
     561        64294 :                   && !operand_equal_p (new_val->value, old_val->value, 0)))))
     562              :     {
     563              :       /* ???  We would like to delay creation of INTEGER_CSTs from
      564              :          partially constant values here.  */
     565              : 
     566    247842896 :       if (dump_file && (dump_flags & TDF_DETAILS))
     567              :         {
     568           51 :           dump_lattice_value (dump_file, "Lattice value changed to ", *new_val);
     569           51 :           fprintf (dump_file, ".  Adding SSA edges to worklist.\n");
     570              :         }
     571              : 
     572    247842896 :       *old_val = *new_val;
     573              : 
     574    247842896 :       gcc_assert (new_val->lattice_val != UNINITIALIZED);
     575              :       return true;
     576              :     }
     577              : 
     578              :   return false;
     579              : }
     580              : 
     581              : static ccp_prop_value_t get_value_for_expr (tree, bool);
     582              : static ccp_prop_value_t bit_value_binop (enum tree_code, tree, tree, tree);
     583              : void bit_value_binop (enum tree_code, signop, int, widest_int *, widest_int *,
     584              :                       signop, int, const widest_int &, const widest_int &,
     585              :                       signop, int, const widest_int &, const widest_int &);
     586              : 
     587              : /* Return a widest_int that can be used for bitwise simplifications
     588              :    from VAL.  */
     589              : 
     590              : static widest_int
     591    314991721 : value_to_wide_int (ccp_prop_value_t val)
     592              : {
     593    314991721 :   if (val.value
     594    250251315 :       && TREE_CODE (val.value) == INTEGER_CST)
     595    250251315 :     return wi::to_widest (val.value);
     596              : 
     597     64740406 :   return 0;
     598              : }
     599              : 
     600              : /* Return the value for the address expression EXPR based on alignment
     601              :    information.  */
     602              : 
     603              : static ccp_prop_value_t
     604      8579835 : get_value_from_alignment (tree expr)
     605              : {
     606      8579835 :   tree type = TREE_TYPE (expr);
     607      8579835 :   ccp_prop_value_t val;
     608      8579835 :   unsigned HOST_WIDE_INT bitpos;
     609      8579835 :   unsigned int align;
     610              : 
     611      8579835 :   gcc_assert (TREE_CODE (expr) == ADDR_EXPR);
     612              : 
     613      8579835 :   get_pointer_alignment_1 (expr, &align, &bitpos);
     614      8579835 :   val.mask = wi::bit_and_not
     615     17159670 :     (POINTER_TYPE_P (type) || TYPE_UNSIGNED (type)
     616      8579835 :      ? wi::mask <widest_int> (TYPE_PRECISION (type), false)
     617            0 :      : -1,
     618     17159670 :      align / BITS_PER_UNIT - 1);
     619      8579835 :   val.lattice_val
     620     14549825 :     = wi::sext (val.mask, TYPE_PRECISION (type)) == -1 ? VARYING : CONSTANT;
     621      8579835 :   if (val.lattice_val == CONSTANT)
     622      5969990 :     val.value = build_int_cstu (type, bitpos / BITS_PER_UNIT);
     623              :   else
     624      2609845 :     val.value = NULL_TREE;
     625              : 
     626      8579835 :   return val;
     627              : }
     628              : 
     629              : /* Return the value for the tree operand EXPR.  If FOR_BITS_P is true
     630              :    return constant bits extracted from alignment information for
     631              :    invariant addresses.  */
     632              : 
     633              : static ccp_prop_value_t
     634    487249365 : get_value_for_expr (tree expr, bool for_bits_p)
     635              : {
     636    487249365 :   ccp_prop_value_t val;
     637              : 
     638    487249365 :   if (TREE_CODE (expr) == SSA_NAME)
     639              :     {
     640    304746735 :       ccp_prop_value_t *val_ = get_value (expr);
     641    304746735 :       if (val_)
     642    304746505 :         val = *val_;
     643              :       else
     644              :         {
     645          230 :           val.lattice_val = VARYING;
     646          230 :           val.value = NULL_TREE;
     647          230 :           val.mask = -1;
     648              :         }
     649    304746735 :       if (for_bits_p
     650    207030218 :           && val.lattice_val == CONSTANT)
     651              :         {
     652    142827753 :           if (TREE_CODE (val.value) == ADDR_EXPR)
     653       209770 :             val = get_value_from_alignment (val.value);
     654    142617983 :           else if (TREE_CODE (val.value) != INTEGER_CST)
     655              :             {
     656      7712275 :               val.lattice_val = VARYING;
     657      7712275 :               val.value = NULL_TREE;
     658      7712275 :               val.mask = -1;
     659              :             }
     660              :         }
     661              :       /* Fall back to a copy value.  */
     662     97716517 :       if (!for_bits_p
     663     97716517 :           && val.lattice_val == VARYING
     664      9586810 :           && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (expr))
     665              :         {
     666      9581844 :           val.lattice_val = CONSTANT;
     667      9581844 :           val.value = expr;
     668      9581844 :           val.mask = -1;
     669              :         }
     670              :     }
     671    182502630 :   else if (is_gimple_min_invariant (expr)
     672    182502630 :            && (!for_bits_p || TREE_CODE (expr) == INTEGER_CST))
     673              :     {
     674    148505248 :       val.lattice_val = CONSTANT;
     675    148505248 :       val.value = expr;
     676    148505248 :       val.mask = 0;
     677    148505248 :       canonicalize_value (&val);
     678              :     }
     679     33997382 :   else if (TREE_CODE (expr) == ADDR_EXPR)
     680      8370065 :     val = get_value_from_alignment (expr);
     681              :   else
     682              :     {
     683     25627317 :       val.lattice_val = VARYING;
     684     25627317 :       val.mask = -1;
     685     25627317 :       val.value = NULL_TREE;
     686              :     }
     687              : 
     688    487249365 :   if (val.lattice_val == VARYING
     689     99997149 :       && INTEGRAL_TYPE_P (TREE_TYPE (expr))
     690    555299916 :       && TYPE_UNSIGNED (TREE_TYPE (expr)))
     691     33861307 :     val.mask = wi::zext (val.mask, TYPE_PRECISION (TREE_TYPE (expr)));
     692              : 
     693    487249365 :   return val;
     694              : }
     695              : 
     696              : /* Return the likely CCP lattice value for STMT.
     697              : 
     698              :    If STMT has no operands, then return CONSTANT.
     699              : 
      700              :    Else if undefinedness of operands of STMT causes its value to be
     701              :    undefined, then return UNDEFINED.
     702              : 
     703              :    Else if any operands of STMT are constants, then return CONSTANT.
     704              : 
     705              :    Else return VARYING.  */
     706              : 
     707              : static ccp_lattice_t
     708    237775741 : likely_value (gimple *stmt)
     709              : {
     710    237775741 :   bool has_constant_operand, has_undefined_operand, all_undefined_operands;
     711    237775741 :   bool has_nsa_operand;
     712    237775741 :   tree use;
     713    237775741 :   ssa_op_iter iter;
     714    237775741 :   unsigned i;
     715              : 
     716    237775741 :   enum gimple_code code = gimple_code (stmt);
     717              : 
     718              :   /* This function appears to be called only for assignments, calls,
     719              :      conditionals, and switches, due to the logic in visit_stmt.  */
     720    237775741 :   gcc_assert (code == GIMPLE_ASSIGN
     721              :               || code == GIMPLE_CALL
     722              :               || code == GIMPLE_COND
     723              :               || code == GIMPLE_SWITCH);
     724              : 
     725              :   /* If the statement has volatile operands, it won't fold to a
     726              :      constant value.  */
     727    433241092 :   if (gimple_has_volatile_ops (stmt))
     728              :     return VARYING;
     729              : 
     730              :   /* .DEFERRED_INIT produces undefined.  */
     731    237775676 :   if (gimple_call_internal_p (stmt, IFN_DEFERRED_INIT))
     732              :     return UNDEFINED;
     733              : 
     734              :   /* Arrive here for more complex cases.  */
     735    237748254 :   has_constant_operand = false;
     736    237748254 :   has_undefined_operand = false;
     737    237748254 :   all_undefined_operands = true;
     738    237748254 :   has_nsa_operand = false;
     739    487889201 :   FOR_EACH_SSA_TREE_OPERAND (use, stmt, iter, SSA_OP_USE)
     740              :     {
     741    250140947 :       ccp_prop_value_t *val = get_value (use);
     742              : 
     743    250140947 :       if (val && val->lattice_val == UNDEFINED)
     744              :         has_undefined_operand = true;
     745              :       else
     746    249763932 :         all_undefined_operands = false;
     747              : 
     748    250140717 :       if (val && val->lattice_val == CONSTANT)
     749    160150061 :         has_constant_operand = true;
     750              : 
     751    250140947 :       if (SSA_NAME_IS_DEFAULT_DEF (use)
     752    250140947 :           || !prop_simulate_again_p (SSA_NAME_DEF_STMT (use)))
     753              :         has_nsa_operand = true;
     754              :     }
     755              : 
     756              :   /* There may be constants in regular rhs operands.  For calls we
      757              :      have to ignore the lhs, fndecl and static chain; for other
      758              :      statements only the lhs.  */
     759    475496508 :   for (i = (is_gimple_call (stmt) ? 2 : 0) + gimple_has_lhs (stmt);
     760    719043799 :        i < gimple_num_ops (stmt); ++i)
     761              :     {
     762    481295545 :       tree op = gimple_op (stmt, i);
     763    481295545 :       if (!op || TREE_CODE (op) == SSA_NAME)
     764    313808633 :         continue;
     765    167486912 :       if (is_gimple_min_invariant (op))
     766              :         has_constant_operand = true;
     767     32096593 :       else if (TREE_CODE (op) == CONSTRUCTOR)
     768              :         {
     769              :           unsigned j;
     770              :           tree val;
     771    481806551 :           FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (op), j, val)
     772       511006 :             if (CONSTANT_CLASS_P (val))
     773              :               {
     774              :                 has_constant_operand = true;
     775              :                 break;
     776              :               }
     777              :         }
     778              :     }
     779              : 
     780    237748254 :   if (has_constant_operand)
     781    187917622 :     all_undefined_operands = false;
     782              : 
     783    237748254 :   if (has_undefined_operand
     784    237748254 :       && code == GIMPLE_CALL
     785    237748254 :       && gimple_call_internal_p (stmt))
     786        20149 :     switch (gimple_call_internal_fn (stmt))
     787              :       {
     788              :         /* These 3 builtins use the first argument just as a magic
      789              :            way to find out a decl uid.  */
     790              :       case IFN_GOMP_SIMD_LANE:
     791              :       case IFN_GOMP_SIMD_VF:
     792              :       case IFN_GOMP_SIMD_LAST_LANE:
     793              :         has_undefined_operand = false;
     794              :         break;
     795              :       default:
     796              :         break;
     797              :       }
     798              : 
     799              :   /* If the operation combines operands like COMPLEX_EXPR make sure to
     800              :      not mark the result UNDEFINED if only one part of the result is
     801              :      undefined.  */
     802    237728199 :   if (has_undefined_operand && all_undefined_operands)
     803              :     return UNDEFINED;
     804    237657585 :   else if (code == GIMPLE_ASSIGN && has_undefined_operand)
     805              :     {
     806        62460 :       switch (gimple_assign_rhs_code (stmt))
     807              :         {
     808              :         /* Unary operators are handled with all_undefined_operands.  */
     809              :         case PLUS_EXPR:
     810              :         case MINUS_EXPR:
     811              :         case POINTER_PLUS_EXPR:
     812              :         case BIT_XOR_EXPR:
     813              :           /* Not MIN_EXPR, MAX_EXPR.  One VARYING operand may be selected.
     814              :              Not bitwise operators, one VARYING operand may specify the
     815              :              result completely.
     816              :              Not logical operators for the same reason, apart from XOR.
     817              :              Not COMPLEX_EXPR as one VARYING operand makes the result partly
     818              :              not UNDEFINED.  Not *DIV_EXPR, comparisons and shifts because
     819              :              the undefined operand may be promoted.  */
     820              :           return UNDEFINED;
     821              : 
     822              :         case ADDR_EXPR:
     823              :           /* If any part of an address is UNDEFINED, like the index
     824              :              of an ARRAY_EXPR, then treat the result as UNDEFINED.  */
     825              :           return UNDEFINED;
     826              : 
     827              :         default:
     828              :           ;
     829              :         }
     830              :     }
      831              :   /* If there was an UNDEFINED operand but the result may not be
      832              :      UNDEFINED, fall back to CONSTANT.  During iteration UNDEFINED may
      833              :      still drop to CONSTANT.  */
     834    237615410 :   if (has_undefined_operand)
     835              :     return CONSTANT;
     836              : 
      837              :   /* We do not consider virtual operands here -- a load from read-only
     838              :      memory may have only VARYING virtual operands, but still be
     839              :      constant.  Also we can combine the stmt with definitions from
     840              :      operands whose definitions are not simulated again.  */
     841    237424366 :   if (has_constant_operand
     842    237424366 :       || has_nsa_operand
     843    237424366 :       || gimple_references_memory_p (stmt))
     844              :     return CONSTANT;
     845              : 
     846              :   return VARYING;
     847              : }
     848              : 
     849              : /* Returns true if STMT cannot be constant.  */
     850              : 
     851              : static bool
     852    317815077 : surely_varying_stmt_p (gimple *stmt)
     853              : {
     854              :   /* If the statement has operands that we cannot handle, it cannot be
     855              :      constant.  */
     856    451058514 :   if (gimple_has_volatile_ops (stmt))
     857              :     return true;
     858              : 
     859              :   /* If it is a call and does not return a value or is not a
      860              :      builtin and not an indirect call or a call to a function with
      861              :      an assume_aligned/alloc_align attribute, it is varying.  */
     862    308894456 :   if (is_gimple_call (stmt))
     863              :     {
     864     16547221 :       tree fndecl, fntype = gimple_call_fntype (stmt);
     865     16547221 :       if (!gimple_call_lhs (stmt)
     866     16547221 :           || ((fndecl = gimple_call_fndecl (stmt)) != NULL_TREE
     867      6709654 :               && !fndecl_built_in_p (fndecl)
     868      3979354 :               && !lookup_attribute ("assume_aligned",
     869      3979354 :                                     TYPE_ATTRIBUTES (fntype))
     870      3979312 :               && !lookup_attribute ("alloc_align",
     871      3979312 :                                     TYPE_ATTRIBUTES (fntype))))
     872     12907772 :         return true;
     873              :     }
     874              : 
     875              :   /* Any other store operation is not interesting.  */
     876    400122830 :   else if (gimple_vdef (stmt))
     877              :     return true;
     878              : 
     879              :   /* Anything other than assignments and conditional jumps are not
     880              :      interesting for CCP.  */
     881    266790021 :   if (gimple_code (stmt) != GIMPLE_ASSIGN
     882              :       && gimple_code (stmt) != GIMPLE_COND
     883              :       && gimple_code (stmt) != GIMPLE_SWITCH
     884              :       && gimple_code (stmt) != GIMPLE_CALL)
     885              :     return true;
     886              : 
     887              :   return false;
     888              : }
     889              : 
     890              : /* Initialize local data structures for CCP.  */
     891              : 
     892              : static void
     893      5537665 : ccp_initialize (void)
     894              : {
     895      5537665 :   basic_block bb;
     896              : 
     897      5537665 :   n_const_val = num_ssa_names;
     898      5537665 :   const_val = XCNEWVEC (ccp_prop_value_t, n_const_val);
     899              : 
     900              :   /* Initialize simulation flags for PHI nodes and statements.  */
     901     51884071 :   FOR_EACH_BB_FN (bb, cfun)
     902              :     {
     903     46346406 :       gimple_stmt_iterator i;
     904              : 
     905    443540580 :       for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
     906              :         {
     907    350847768 :           gimple *stmt = gsi_stmt (i);
     908    350847768 :           bool is_varying;
     909              : 
      910              :           /* If the statement is a control insn, then we must make
      911              :              sure it is simulated at least once.  Failing to do so
      912              :              means that its outgoing edges will never get added.  */
     913    350847768 :           if (stmt_ends_bb_p (stmt))
     914              :             is_varying = false;
     915              :           else
     916    317815077 :             is_varying = surely_varying_stmt_p (stmt);
     917              : 
     918    317815077 :           if (is_varying)
     919              :             {
     920    235774859 :               tree def;
     921    235774859 :               ssa_op_iter iter;
     922              : 
     923              :               /* If the statement will not produce a constant, mark
     924              :                  all its outputs VARYING.  */
     925    289598579 :               FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
     926     53823720 :                 set_value_varying (def);
     927              :             }
     928    350847768 :           prop_set_simulate_again (stmt, !is_varying);
     929              :         }
     930              :     }
     931              : 
     932              :   /* Now process PHI nodes.  We never clear the simulate_again flag on
     933              :      phi nodes, since we do not know which edges are executable yet,
     934              :      except for phi nodes for virtual operands when we do not do store ccp.  */
     935     51884071 :   FOR_EACH_BB_FN (bb, cfun)
     936              :     {
     937     46346406 :       gphi_iterator i;
     938              : 
     939     63854532 :       for (i = gsi_start_phis (bb); !gsi_end_p (i); gsi_next (&i))
     940              :         {
     941     17508126 :           gphi *phi = i.phi ();
     942              : 
     943     35016252 :           if (virtual_operand_p (gimple_phi_result (phi)))
     944      7934335 :             prop_set_simulate_again (phi, false);
     945              :           else
     946      9573791 :             prop_set_simulate_again (phi, true);
     947              :         }
     948              :     }
     949      5537665 : }
     950              : 
      951              : /* Debug count support.  Reset the values of ssa names to
      952              :    VARYING when the total number of ssa names analyzed is
      953              :    beyond the debug count specified.  */
     954              : 
     955              : static void
     956      5537665 : do_dbg_cnt (void)
     957              : {
     958      5537665 :   unsigned i;
     959    223832208 :   for (i = 0; i < num_ssa_names; i++)
     960              :     {
     961    218294543 :       if (!dbg_cnt (ccp))
     962              :         {
     963            0 :           const_val[i].lattice_val = VARYING;
     964            0 :           const_val[i].mask = -1;
     965            0 :           const_val[i].value = NULL_TREE;
     966              :         }
     967              :     }
     968      5537665 : }
     969              : 
     970              : 
     971              : /* We want to provide our own GET_VALUE and FOLD_STMT virtual methods.  */
     972     22150660 : class ccp_folder : public substitute_and_fold_engine
     973              : {
     974              :  public:
     975              :   tree value_of_expr (tree, gimple *) final override;
     976              :   bool fold_stmt (gimple_stmt_iterator *) final override;
     977              : };
     978              : 
     979              : /* This method just wraps GET_CONSTANT_VALUE for now.  Over time
     980              :    naked calls to GET_CONSTANT_VALUE should be eliminated in favor
     981              :    of calling member functions.  */
     982              : 
     983              : tree
     984    297469182 : ccp_folder::value_of_expr (tree op, gimple *)
     985              : {
     986    297469182 :   return get_constant_value (op);
     987              : }
     988              : 
     989              : /* Do final substitution of propagated values, cleanup the flowgraph and
     990              :    free allocated storage.  If NONZERO_P, record nonzero bits.
     991              : 
     992              :    Return TRUE when something was optimized.  */
     993              : 
     994              : static bool
     995      5537665 : ccp_finalize (bool nonzero_p)
     996              : {
     997      5537665 :   bool something_changed;
     998      5537665 :   unsigned i;
     999      5537665 :   tree name;
    1000              : 
    1001      5537665 :   do_dbg_cnt ();
    1002              : 
    1003              :   /* Derive alignment and misalignment information from partially
    1004              :      constant pointers in the lattice or nonzero bits from partially
    1005              :      constant integers.  */
    1006    218294543 :   FOR_EACH_SSA_NAME (i, name, cfun)
    1007              :     {
    1008    177895402 :       ccp_prop_value_t *val;
    1009    177895402 :       unsigned int tem, align;
    1010              : 
    1011    324197728 :       if (!POINTER_TYPE_P (TREE_TYPE (name))
    1012    321225353 :           && (!INTEGRAL_TYPE_P (TREE_TYPE (name))
    1013              :               /* Don't record nonzero bits before IPA to avoid
    1014              :                  using too much memory.  */
    1015     63098469 :               || !nonzero_p))
    1016     82065402 :         continue;
    1017              : 
    1018     95830000 :       val = get_value (name);
    1019    176958302 :       if (val->lattice_val != CONSTANT
    1020     26928227 :           || TREE_CODE (val->value) != INTEGER_CST
    1021    114019549 :           || val->mask == 0)
    1022     81128302 :         continue;
    1023              : 
    1024     14701698 :       if (POINTER_TYPE_P (TREE_TYPE (name)))
    1025              :         {
    1026              :           /* Trailing mask bits specify the alignment, trailing value
    1027              :              bits the misalignment.  */
    1028      1387115 :           tem = val->mask.to_uhwi ();
    1029      1387115 :           align = least_bit_hwi (tem);
    1030      1387115 :           if (align > 1)
    1031      1327508 :             set_ptr_info_alignment (get_ptr_info (name), align,
    1032      1327508 :                                     (TREE_INT_CST_LOW (val->value)
    1033      1327508 :                                      & (align - 1)));
    1034              :         }
    1035              :       else
    1036              :         {
    1037     13314583 :           unsigned int precision = TYPE_PRECISION (TREE_TYPE (val->value));
    1038     13314583 :           wide_int value = wi::to_wide (val->value);
    1039     13314583 :           wide_int mask = wide_int::from (val->mask, precision, UNSIGNED);
    1040     13314863 :           value = value & ~mask;
    1041     13314583 :           set_bitmask (name, value, mask);
    1042     13314863 :         }
    1043              :     }
    1044              : 
    1045              :   /* Perform substitutions based on the known constant values.  */
    1046      5537665 :   class ccp_folder ccp_folder;
    1047      5537665 :   something_changed = ccp_folder.substitute_and_fold ();
    1048              : 
    1049      5537665 :   free (const_val);
    1050      5537665 :   const_val = NULL;
    1051      5537665 :   return something_changed;
    1052      5537665 : }
    1053              : 
    1054              : 
    1055              : /* Compute the meet operator between *VAL1 and *VAL2.  Store the result
    1056              :    in VAL1.
    1057              : 
    1058              :                 any  M UNDEFINED   = any
    1059              :                 any  M VARYING     = VARYING
    1060              :                 Ci   M Cj          = Ci         if (i == j)
    1061              :                 Ci   M Cj          = VARYING    if (i != j)
    1062              :    */
    1063              : 
    1064              : static void
    1065    236008274 : ccp_lattice_meet (ccp_prop_value_t *val1, ccp_prop_value_t *val2)
    1066              : {
    1067    236008274 :   if (val1->lattice_val == UNDEFINED
     1068              :       /* For UNDEFINED M SSA we can't always use the SSA name because its
     1069              :          definition may not dominate the PHI node.  Doing optimistic copy
     1070              :          propagation also causes a lot of gcc.dg/uninit-pred*.c FAILs.  */
    1071       169855 :       && (val2->lattice_val != CONSTANT
    1072        96519 :           || TREE_CODE (val2->value) != SSA_NAME))
    1073              :     {
    1074              :       /* UNDEFINED M any = any   */
    1075        94155 :       *val1 = *val2;
    1076              :     }
    1077    235914119 :   else if (val2->lattice_val == UNDEFINED
    1078              :            /* See above.  */
    1079       102856 :            && (val1->lattice_val != CONSTANT
    1080        69110 :                || TREE_CODE (val1->value) != SSA_NAME))
    1081              :     {
    1082              :       /* any M UNDEFINED = any
    1083              :          Nothing to do.  VAL1 already contains the value we want.  */
    1084              :       ;
    1085              :     }
    1086    235834693 :   else if (val1->lattice_val == VARYING
    1087    229867166 :            || val2->lattice_val == VARYING)
    1088              :     {
    1089              :       /* any M VARYING = VARYING.  */
    1090      5984545 :       val1->lattice_val = VARYING;
    1091      5984545 :       val1->mask = -1;
    1092      5984545 :       val1->value = NULL_TREE;
    1093              :     }
    1094    229850148 :   else if (val1->lattice_val == CONSTANT
    1095    229774448 :            && val2->lattice_val == CONSTANT
    1096    229751018 :            && TREE_CODE (val1->value) == INTEGER_CST
    1097    224686825 :            && TREE_CODE (val2->value) == INTEGER_CST)
    1098              :     {
    1099              :       /* Ci M Cj = Ci           if (i == j)
    1100              :          Ci M Cj = VARYING      if (i != j)
    1101              : 
    1102              :          For INTEGER_CSTs mask unequal bits.  If no equal bits remain,
    1103              :          drop to varying.  */
    1104    445417786 :       val1->mask = (val1->mask | val2->mask
    1105    445417786 :                     | (wi::to_widest (val1->value)
    1106    668126679 :                        ^ wi::to_widest (val2->value)));
    1107    222708893 :       if (wi::sext (val1->mask, TYPE_PRECISION (TREE_TYPE (val1->value))) == -1)
    1108              :         {
    1109       519085 :           val1->lattice_val = VARYING;
    1110       519085 :           val1->value = NULL_TREE;
    1111              :         }
    1112              :     }
    1113      7141255 :   else if (val1->lattice_val == CONSTANT
    1114      7065555 :            && val2->lattice_val == CONSTANT
    1115     14183380 :            && operand_equal_p (val1->value, val2->value, 0))
    1116              :     {
    1117              :       /* Ci M Cj = Ci           if (i == j)
    1118              :          Ci M Cj = VARYING      if (i != j)
    1119              : 
    1120              :          VAL1 already contains the value we want for equivalent values.  */
    1121              :     }
    1122      6705924 :   else if (val1->lattice_val == CONSTANT
    1123      6630224 :            && val2->lattice_val == CONSTANT
    1124      6606794 :            && (TREE_CODE (val1->value) == ADDR_EXPR
    1125      6400626 :                || TREE_CODE (val2->value) == ADDR_EXPR))
    1126              :     {
    1127              :       /* When not equal addresses are involved try meeting for
    1128              :          alignment.  */
    1129       733264 :       ccp_prop_value_t tem = *val2;
    1130       733264 :       if (TREE_CODE (val1->value) == ADDR_EXPR)
    1131       206168 :         *val1 = get_value_for_expr (val1->value, true);
    1132       733264 :       if (TREE_CODE (val2->value) == ADDR_EXPR)
    1133       631063 :         tem = get_value_for_expr (val2->value, true);
    1134       733264 :       ccp_lattice_meet (val1, &tem);
    1135       733264 :     }
    1136              :   else
    1137              :     {
    1138              :       /* Any other combination is VARYING.  */
    1139      5972660 :       val1->lattice_val = VARYING;
    1140      5972660 :       val1->mask = -1;
    1141      5972660 :       val1->value = NULL_TREE;
    1142              :     }
    1143    236008274 : }
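The INTEGER_CST case of the meet above can be checked outside the compiler. A minimal sketch (hypothetical helper names; plain 64-bit integers standing in for `widest_int`, full 64-bit precision assumed): a constant is a (value, mask) pair whose mask bits are unknown, and meeting two constants keeps only the bits on which both agree.

```cpp
#include <cassert>
#include <cstdint>

/* Hypothetical stand-in for ccp_lattice_meet's CONSTANT x CONSTANT
   case: mask bits mark unknown positions; meeting keeps only the
   bits on which the two constants agree.  */
struct bit_lattice { uint64_t value; uint64_t mask; bool varying; };

static bit_lattice
meet_constants (bit_lattice a, bit_lattice b)
{
  bit_lattice r;
  /* A bit is unknown if it is unknown in either input, or if the
     known values disagree on it.  */
  r.mask = a.mask | b.mask | (a.value ^ b.value);
  r.value = a.value & ~r.mask;      /* keep only the still-known bits */
  r.varying = (r.mask == ~0ull);    /* no agreeing bit left: VARYING */
  return r;
}
```

For example, meeting the constants 12 (0b1100) and 13 (0b1101) leaves only bit 0 unknown, while meeting 0 with all-ones drops to VARYING.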
    1144              : 
    1145              : 
    1146              : /* Loop through the PHI_NODE's parameters for BLOCK and compare their
    1147              :    lattice values to determine PHI_NODE's lattice value.  The value of a
    1148              :    PHI node is determined by calling ccp_lattice_meet with all the arguments
    1149              :    of the PHI node that are incoming via executable edges.  */
    1150              : 
    1151              : enum ssa_prop_result
    1152     67952630 : ccp_propagate::visit_phi (gphi *phi)
    1153              : {
    1154     67952630 :   unsigned i;
    1155     67952630 :   ccp_prop_value_t new_val;
    1156              : 
    1157     67952630 :   if (dump_file && (dump_flags & TDF_DETAILS))
    1158              :     {
    1159            1 :       fprintf (dump_file, "\nVisiting PHI node: ");
    1160            1 :       print_gimple_stmt (dump_file, phi, 0, dump_flags);
    1161              :     }
    1162              : 
    1163     67952630 :   new_val.lattice_val = UNDEFINED;
    1164     67952630 :   new_val.value = NULL_TREE;
    1165     67952630 :   new_val.mask = 0;
    1166              : 
    1167     67952630 :   bool first = true;
    1168     67952630 :   bool non_exec_edge = false;
    1169    199765556 :   for (i = 0; i < gimple_phi_num_args (phi); i++)
    1170              :     {
    1171              :       /* Compute the meet operator over all the PHI arguments flowing
    1172              :          through executable edges.  */
    1173    138146027 :       edge e = gimple_phi_arg_edge (phi, i);
    1174              : 
    1175    138146027 :       if (dump_file && (dump_flags & TDF_DETAILS))
    1176              :         {
    1177            6 :           fprintf (dump_file,
    1178              :               "\tArgument #%d (%d -> %d %sexecutable)\n",
    1179            3 :               i, e->src->index, e->dest->index,
    1180            3 :               (e->flags & EDGE_EXECUTABLE) ? "" : "not ");
    1181              :         }
    1182              : 
    1183              :       /* If the incoming edge is executable, compute the meet operator for
    1184              :          the existing value of the PHI node and the current PHI argument.  */
    1185    138146027 :       if (e->flags & EDGE_EXECUTABLE)
    1186              :         {
    1187    132186870 :           tree arg = gimple_phi_arg (phi, i)->def;
    1188    132186870 :           ccp_prop_value_t arg_val = get_value_for_expr (arg, false);
    1189              : 
    1190    132186870 :           if (first)
    1191              :             {
    1192     67952630 :               new_val = arg_val;
    1193     67952630 :               first = false;
    1194              :             }
    1195              :           else
    1196     64234240 :             ccp_lattice_meet (&new_val, &arg_val);
    1197              : 
    1198    132186870 :           if (dump_file && (dump_flags & TDF_DETAILS))
    1199              :             {
    1200            3 :               fprintf (dump_file, "\t");
    1201            3 :               print_generic_expr (dump_file, arg, dump_flags);
    1202            3 :               dump_lattice_value (dump_file, "\tValue: ", arg_val);
    1203            3 :               fprintf (dump_file, "\n");
    1204              :             }
    1205              : 
    1206    132186870 :           if (new_val.lattice_val == VARYING)
    1207              :             break;
    1208    132186870 :         }
    1209              :       else
    1210              :         non_exec_edge = true;
    1211              :     }
    1212              : 
    1213              :   /* In case there were non-executable edges and the value is a copy
    1214              :      make sure its definition dominates the PHI node.  */
    1215     67952630 :   if (non_exec_edge
    1216      5579801 :       && new_val.lattice_val == CONSTANT
    1217      5461999 :       && TREE_CODE (new_val.value) == SSA_NAME
    1218      1468661 :       && ! SSA_NAME_IS_DEFAULT_DEF (new_val.value)
    1219     69280917 :       && ! dominated_by_p (CDI_DOMINATORS, gimple_bb (phi),
    1220      1328287 :                            gimple_bb (SSA_NAME_DEF_STMT (new_val.value))))
    1221              :     {
    1222        79688 :       new_val.lattice_val = VARYING;
    1223        79688 :       new_val.value = NULL_TREE;
    1224        79688 :       new_val.mask = -1;
    1225              :     }
    1226              : 
    1227     67952630 :   if (dump_file && (dump_flags & TDF_DETAILS))
    1228              :     {
    1229            1 :       dump_lattice_value (dump_file, "\n    PHI node value: ", new_val);
    1230            1 :       fprintf (dump_file, "\n\n");
    1231              :     }
    1232              : 
    1233              :   /* Make the transition to the new value.  */
    1234     67952630 :   if (set_lattice_value (gimple_phi_result (phi), &new_val))
    1235              :     {
    1236     65972930 :       if (new_val.lattice_val == VARYING)
    1237              :         return SSA_PROP_VARYING;
    1238              :       else
    1239     59472044 :         return SSA_PROP_INTERESTING;
    1240              :     }
    1241              :   else
    1242              :     return SSA_PROP_NOT_INTERESTING;
    1243     67952630 : }
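The meet loop in visit_phi can be sketched in miniature. This is a simplified model (all names hypothetical) with only fully-known integer constants and no bit masks: arguments on non-executable edges are skipped, the first executable argument seeds the result, later ones are met into it, and the loop stops early once the result is VARYING.

```cpp
#include <cassert>
#include <vector>

/* Simplified sketch of visit_phi's meet loop: integer constants only,
   no bit masks.  All names here are hypothetical.  */
enum lattice { UNDEFINED, CONSTANT, VARYING };
struct value { lattice lv; long cst; };
struct phi_arg { bool executable; value val; };

static value
meet_phi_args (const std::vector<phi_arg> &args)
{
  value result{UNDEFINED, 0};
  bool first = true;
  for (const phi_arg &a : args)
    {
      if (!a.executable)
        continue;                  /* ignore non-executable edges */
      if (first)
        {
          result = a.val;          /* seed with the first argument */
          first = false;
        }
      else if (result.lv == CONSTANT && a.val.lv == CONSTANT
               && result.cst == a.val.cst)
        ;                          /* equal constants meet to themselves */
      else if (a.val.lv == UNDEFINED)
        ;                          /* UNDEFINED is the meet identity */
      else if (result.lv == UNDEFINED)
        result = a.val;
      else
        result = value{VARYING, 0};
      if (result.lv == VARYING)
        break;                     /* cannot get any lower */
    }
  return result;
}
```

With arguments 5 (executable), 7 (non-executable) and 5 (executable) the PHI result stays the constant 5; making the 7 executable drops it to VARYING.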
    1244              : 
    1245              : /* Return the constant value for OP, or OP otherwise.  */
    1246              : 
    1247              : static tree
    1248    286483584 : valueize_op (tree op)
    1249              : {
    1250    286483584 :   if (TREE_CODE (op) == SSA_NAME)
    1251              :     {
    1252    274752298 :       tree tem = get_constant_value (op);
    1253    274752298 :       if (tem)
    1254              :         return tem;
    1255              :     }
    1256              :   return op;
    1257              : }
    1258              : 
    1259              : /* Return the constant value for OP, but signal to not follow SSA
    1260              :    edges if the definition may be simulated again.  */
    1261              : 
    1262              : static tree
    1263   3283779233 : valueize_op_1 (tree op)
    1264              : {
    1265   3283779233 :   if (TREE_CODE (op) == SSA_NAME)
    1266              :     {
    1267              :       /* If the definition may be simulated again we cannot follow
    1268              :          this SSA edge as the SSA propagator does not necessarily
    1269              :          re-visit the use.  */
    1270   3283779233 :       gimple *def_stmt = SSA_NAME_DEF_STMT (op);
    1271   3283779233 :       if (!gimple_nop_p (def_stmt)
    1272   3283779233 :           && prop_simulate_again_p (def_stmt))
    1273              :         return NULL_TREE;
    1274   1711639404 :       tree tem = get_constant_value (op);
    1275   1711639404 :       if (tem)
    1276              :         return tem;
    1277              :     }
    1278              :   return op;
    1279              : }
    1280              : 
    1281              : /* CCP specific front-end to the non-destructive constant folding
    1282              :    routines.
    1283              : 
    1284              :    Attempt to simplify the RHS of STMT knowing that one or more
    1285              :    operands are constants.
    1286              : 
    1287              :    If simplification is possible, return the simplified RHS,
    1288              :    otherwise return the original RHS or NULL_TREE.  */
    1289              : 
    1290              : static tree
    1291    237505211 : ccp_fold (gimple *stmt)
    1292              : {
    1293    237505211 :   switch (gimple_code (stmt))
    1294              :     {
    1295       101304 :     case GIMPLE_SWITCH:
    1296       101304 :       {
    1297              :         /* Return the constant switch index.  */
    1298       101304 :         return valueize_op (gimple_switch_index (as_a <gswitch *> (stmt)));
    1299              :       }
    1300              : 
    1301    237403907 :     case GIMPLE_COND:
    1302    237403907 :     case GIMPLE_ASSIGN:
    1303    237403907 :     case GIMPLE_CALL:
    1304    237403907 :       return gimple_fold_stmt_to_constant_1 (stmt,
    1305    237403907 :                                              valueize_op, valueize_op_1);
    1306              : 
    1307            0 :     default:
    1308            0 :       gcc_unreachable ();
    1309              :     }
    1310              : }
    1311              : 
    1312              : /* Determine the minimum and maximum values, *MIN and *MAX respectively,
    1313              :    represented by the value/mask pair VAL and MASK with signedness SGN and
    1314              :    precision PRECISION.  */
    1315              : 
    1316              : static void
    1317     28793222 : value_mask_to_min_max (widest_int *min, widest_int *max,
    1318              :                        const widest_int &val, const widest_int &mask,
    1319              :                        signop sgn, int precision)
    1320              : {
    1321     28793222 :   *min = wi::bit_and_not (val, mask);
    1322     28793222 :   *max = val | mask;
    1323     28793222 :   if (sgn == SIGNED && wi::neg_p (mask))
    1324              :     {
    1325      6857542 :       widest_int sign_bit = wi::lshift (1, precision - 1);
    1326      6857542 :       *min ^= sign_bit;
    1327      6857542 :       *max ^= sign_bit;
    1328              :       /* MAX is zero extended, and MIN is sign extended.  */
    1329      6857542 :       *min = wi::ext (*min, precision, sgn);
    1330      6857590 :       *max = wi::ext (*max, precision, sgn);
    1331      6857542 :     }
    1332     28793222 : }
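The range recovery above can be illustrated at a fixed 8-bit precision (hypothetical helper name, plain integers instead of `widest_int`): the minimum clears every unknown bit and the maximum sets them, and when the sign bit itself is unknown under a signed view, flipping it in both bounds maps the candidates onto a consistent ordering, as the function does.

```cpp
#include <cassert>
#include <cstdint>

/* Hypothetical 8-bit model of value_mask_to_min_max.  Mask bits are
   unknown, so the minimum clears them and the maximum sets them.  */
static void
min_max_8 (int64_t *min, int64_t *max,
           uint64_t val, uint64_t mask, bool is_signed)
{
  const unsigned prec = 8;
  uint64_t lo = val & ~mask;    /* all unknown bits cleared */
  uint64_t hi = val | mask;     /* all unknown bits set */
  if (is_signed && ((mask >> (prec - 1)) & 1))
    {
      /* Sign bit unknown: flip it in both bounds so that the smaller
          8-bit pattern really is the smaller signed value.  */
      lo ^= 1u << (prec - 1);
      hi ^= 1u << (prec - 1);
    }
  /* Sign-extend from 8 bits for the signed view, zero-extend else.  */
  auto ext = [&] (uint64_t x) -> int64_t
  {
    x &= 0xff;
    return (is_signed && (x & 0x80)) ? (int64_t) x - 0x100 : (int64_t) x;
  };
  *min = ext (lo);
  *max = ext (hi);
}
```

For the signed pair value 0, mask 0x81 (sign bit and bit 0 unknown) the candidates are {0, 1, -128, -127}, so the recovered range is [-128, 1].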
    1333              : 
    1334              : /* Apply the operation CODE in type TYPE to the value, mask pair
    1335              :    RVAL and RMASK representing a value of type RTYPE and set
    1336              :    the value, mask pair *VAL and *MASK to the result.  */
    1337              : 
    1338              : void
    1339     85052329 : bit_value_unop (enum tree_code code, signop type_sgn, int type_precision,
    1340              :                 widest_int *val, widest_int *mask,
    1341              :                 signop rtype_sgn, int rtype_precision,
    1342              :                 const widest_int &rval, const widest_int &rmask)
    1343              : {
    1344     85057706 :   switch (code)
    1345              :     {
    1346       712890 :     case BIT_NOT_EXPR:
    1347       712890 :       *mask = rmask;
    1348       712890 :       *val = ~rval;
    1349       712890 :       break;
    1350              : 
    1351       305537 :     case NEGATE_EXPR:
    1352       305537 :       {
    1353       305537 :         widest_int temv, temm;
    1354              :         /* Return ~rval + 1.  */
    1355       305537 :         bit_value_unop (BIT_NOT_EXPR, type_sgn, type_precision, &temv, &temm,
    1356              :                         type_sgn, type_precision, rval, rmask);
    1357       305537 :         bit_value_binop (PLUS_EXPR, type_sgn, type_precision, val, mask,
    1358              :                          type_sgn, type_precision, temv, temm,
    1359       611074 :                          type_sgn, type_precision, 1, 0);
    1360       305537 :         break;
    1361       305537 :       }
    1362              : 
    1363     83937126 :     CASE_CONVERT:
    1364     83937126 :       {
    1365              :         /* First extend mask and value according to the original type.  */
    1366     83937126 :         *mask = wi::ext (rmask, rtype_precision, rtype_sgn);
    1367     83937126 :         *val = wi::ext (rval, rtype_precision, rtype_sgn);
    1368              : 
    1369              :         /* Then extend mask and value according to the target type.  */
    1370     83937126 :         *mask = wi::ext (*mask, type_precision, type_sgn);
    1371     83937126 :         *val = wi::ext (*val, type_precision, type_sgn);
    1372     83937126 :         break;
    1373              :       }
    1374              : 
    1375       102150 :     case ABS_EXPR:
    1376       102150 :     case ABSU_EXPR:
    1377       102150 :       if (wi::sext (rmask, rtype_precision) == -1)
    1378              :         {
    1379        86824 :           *mask = -1;
    1380        86824 :           *val = 0;
    1381              :         }
    1382        15326 :       else if (wi::neg_p (rmask))
    1383              :         {
    1384              :           /* Result is either rval or -rval.  */
    1385          376 :           widest_int temv, temm;
    1386          376 :           bit_value_unop (NEGATE_EXPR, rtype_sgn, rtype_precision, &temv,
    1387              :                           &temm, type_sgn, type_precision, rval, rmask);
    1388          376 :           temm |= (rmask | (rval ^ temv));
    1389              :           /* Extend the result.  */
    1390          376 :           *mask = wi::ext (temm, type_precision, type_sgn);
    1391          376 :           *val = wi::ext (temv, type_precision, type_sgn);
    1392          376 :         }
    1393        14950 :       else if (wi::neg_p (rval))
    1394              :         {
    1395              :           bit_value_unop (NEGATE_EXPR, type_sgn, type_precision, val, mask,
    1396              :                           type_sgn, type_precision, rval, rmask);
    1397              :         }
    1398              :       else
    1399              :         {
    1400         9573 :           *mask = rmask;
    1401         9573 :           *val = rval;
    1402              :         }
    1403              :       break;
    1404              : 
    1405            3 :     default:
    1406            3 :       *mask = -1;
    1407            3 :       *val = 0;
    1408            3 :       break;
    1409              :     }
    1410     85052329 : }
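The BIT_NOT_EXPR and NEGATE_EXPR rules above can be sketched at 8 bits (hypothetical helper names): NOT flips the known bits and leaves the mask unchanged, and NEGATE is NOT followed by +1, where the increment's carry may disturb any bit at or above an unknown one, so those positions are added to the mask.

```cpp
#include <cassert>
#include <cstdint>

/* 8-bit sketch of bit_value_unop's BIT_NOT/NEGATE rules (names are
   hypothetical).  */
static void
bit_not_8 (uint64_t *val, uint64_t *mask, uint64_t rval, uint64_t rmask)
{
  *mask = rmask;             /* unknown bits stay unknown */
  *val = ~rval & 0xff;       /* known bits flip */
}

static void
bit_negate_8 (uint64_t *val, uint64_t *mask, uint64_t rval, uint64_t rmask)
{
  uint64_t nval, nmask;
  bit_not_8 (&nval, &nmask, rval, rmask);
  /* Add 1: LO assumes every unknown bit is 0, HI assumes every
     unknown bit is 1; positions where they disagree may be changed
     by the carry and become unknown.  */
  uint64_t lo = ((nval & ~nmask) + 1) & 0xff;
  uint64_t hi = ((nval | nmask) + 1) & 0xff;
  *mask = nmask | (lo ^ hi);
  *val = lo & ~*mask;
}
```

Negating the exact constant 5 yields 0xFB (-5 modulo 256) with an empty mask; negating value 4 with mask 1 (the set {4, 5}) conservatively marks the low three bits unknown, which covers both -4 (0xFC) and -5 (0xFB).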
    1411              : 
    1412              : /* Determine the mask pair *VAL and *MASK from multiplying the
    1413              :    argument mask pair RVAL, RMASK by the unsigned constant C.  */
    1414              : static void
    1415     28762365 : bit_value_mult_const (signop sgn, int width,
    1416              :                       widest_int *val, widest_int *mask,
    1417              :                       const widest_int &rval, const widest_int &rmask,
    1418              :                       widest_int c)
    1419              : {
    1420     28762365 :   widest_int sum_mask = 0;
    1421              : 
    1422              :   /* Ensure rval_lo only contains known bits.  */
    1423     28762365 :   widest_int rval_lo = wi::bit_and_not (rval, rmask);
    1424              : 
    1425     28762365 :   if (rval_lo != 0)
    1426              :     {
    1427              :       /* General case (some bits of multiplicand are known set).  */
    1428       764333 :       widest_int sum_val = 0;
    1429      1776377 :       while (c != 0)
    1430              :         {
    1431              :           /* Determine the lowest bit set in the multiplier.  */
    1432      1012044 :           int bitpos = wi::ctz (c);
    1433      1012044 :           widest_int term_mask = rmask << bitpos;
    1434      1012044 :           widest_int term_val = rval_lo << bitpos;
    1435              : 
    1436              :           /* sum += term.  */
    1437      1012044 :           widest_int lo = sum_val + term_val;
    1438      1012044 :           widest_int hi = (sum_val | sum_mask) + (term_val | term_mask);
    1439      1012044 :           sum_mask |= term_mask | (lo ^ hi);
    1440      1012044 :           sum_val = lo;
    1441              : 
    1442              :           /* Clear this bit in the multiplier.  */
    1443      1012044 :           c ^= wi::lshift (1, bitpos);
    1444      1012044 :         }
    1445              :       /* Correctly extend the result value.  */
    1446       764333 :       *val = wi::ext (sum_val, width, sgn);
    1447       764333 :     }
    1448              :   else
    1449              :     {
    1450              :       /* Special case (no bits of multiplicand are known set).  */
    1451     73924715 :       while (c != 0)
    1452              :         {
    1453              :           /* Determine the lowest bit set in the multiplier.  */
    1454     45926683 :           int bitpos = wi::ctz (c);
    1455     45926683 :           widest_int term_mask = rmask << bitpos;
    1456              : 
    1457              :           /* sum += term.  */
    1458     45926683 :           widest_int hi = sum_mask + term_mask;
    1459     45926683 :           sum_mask |= term_mask | hi;
    1460              : 
    1461              :           /* Clear this bit in the multiplier.  */
    1462     45926692 :           c ^= wi::lshift (1, bitpos);
    1463     45926737 :         }
    1464     27998032 :       *val = 0;
    1465              :     }
    1466              : 
    1467              :   /* Correctly extend the result mask.  */
    1468     28762374 :   *mask = wi::ext (sum_mask, width, sgn);
    1469     28762365 : }
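The general case above can be sketched at 8 bits (hypothetical helper name): the constant multiplier is decomposed into its set bits, each contributing a shifted copy of the (value, mask) pair, and the per-term addition widens the mask wherever an uncertain carry could change the sum.

```cpp
#include <cassert>
#include <cstdint>

/* 8-bit sketch of bit_value_mult_const's general case (name is
   hypothetical): shift-add over the set bits of the multiplier C,
   tracking carry uncertainty in the mask.  */
static void
mult_const_8 (uint64_t *val, uint64_t *mask,
              uint64_t rval, uint64_t rmask, uint64_t c)
{
  uint64_t sum_val = 0, sum_mask = 0;
  uint64_t rval_lo = rval & ~rmask;        /* known-set bits only */
  while (c != 0)
    {
      int bitpos = __builtin_ctzll (c);    /* lowest set bit of C */
      uint64_t term_val = rval_lo << bitpos;
      uint64_t term_mask = rmask << bitpos;
      /* sum += term: LO is the all-unknowns-zero sum, HI the
         all-unknowns-one sum; bits where they differ may be changed
         by a carry.  */
      uint64_t lo = sum_val + term_val;
      uint64_t hi = (sum_val | sum_mask) + (term_val | term_mask);
      sum_mask |= term_mask | (lo ^ hi);
      sum_val = lo;
      c &= c - 1;                          /* clear that multiplier bit */
    }
  *mask = sum_mask & 0xff;                 /* truncate to the width */
  *val = (sum_val & ~sum_mask) & 0xff;
}
```

Multiplying value 2, mask 1 (the set {2, 3}) by 3 yields mask 0xF: the true products 6 and 9 differ in the low four bits, and both are consistent with the result. A fully-known input stays exact, e.g. 3 * 5 = 15 with an empty mask.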
    1470              : 
    1471              : /* Fill up to MAX values in the BITS array with values representing
    1472              :    each of the non-zero bits in the value X.  Returns the number of
    1473              :    bits in X (capped at the maximum value MAX).  For example, an X
    1474              :    value of 11 places 1, 2 and 8 in BITS and returns the value 3.
    1475              : 
    1476              : static unsigned int
    1477       342228 : get_individual_bits (widest_int *bits, widest_int x, unsigned int max)
    1478              : {
    1479       342228 :   unsigned int count = 0;
    1480      1363795 :   while (count < max && x != 0)
    1481              :     {
    1482      1021567 :       int bitpos = wi::ctz (x);
    1483      1021567 :       bits[count] = wi::lshift (1, bitpos);
    1484      1021567 :       x ^= bits[count];
    1485      1021567 :       count++;
    1486              :     }
    1487       342228 :   return count;
    1488              : }
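The decomposition can be reproduced on a plain `uint64_t` (hypothetical helper name): peel off set bits from the least-significant end, up to MAX of them, using the classic `x & -x` lowest-set-bit trick in place of a count-trailing-zeros shift.

```cpp
#include <cassert>
#include <cstdint>

/* Sketch of get_individual_bits on uint64_t (hypothetical name).  */
static unsigned
decompose_bits (uint64_t *bits, uint64_t x, unsigned max)
{
  unsigned count = 0;
  while (count < max && x != 0)
    {
      bits[count] = x & -x;   /* isolate the lowest set bit */
      x ^= bits[count];       /* clear it */
      count++;
    }
  return count;
}
```

As in the comment above, decomposing 11 produces the bits 1, 2 and 8 and returns 3.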
    1489              : 
    1490              : /* Array of 2^N - 1 values representing the bits flipped between
    1491              :    consecutive Gray codes.  This is used to efficiently enumerate
    1492              :    all 2^N values of N bits using XOR.  */
    1493              : static const unsigned char gray_code_bit_flips[63] = {
    1494              :   0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
    1495              :   0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5,
    1496              :   0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
    1497              :   0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0
    1498              : };
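The table drives a Gray-code walk: starting from a base value, XOR-ing in one individual bit per step visits every one of the 2^N combinations of N independent bits. A sketch with N = 3 and the bit values {1, 2, 8} (the bits get_individual_bits extracts from the mask 11); the first 2^3 - 1 = 7 table entries are enough:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

/* First 7 entries of the Gray-code flip table above, enough to
   enumerate all 2^3 combinations of three independent bits.  */
static const unsigned char flips[7] = { 0, 1, 0, 2, 0, 1, 0 };

static std::set<uint64_t>
enumerate_gray (const uint64_t bits[3])
{
  std::set<uint64_t> seen;
  uint64_t x = 0;              /* start from the base value */
  seen.insert (x);
  for (unsigned i = 0; i < 7; i++)
    {
      x ^= bits[flips[i]];     /* flip exactly one bit per step */
      seen.insert (x);
    }
  return seen;
}
```

For bits {1, 2, 8} this visits exactly {0, 1, 2, 3, 8, 9, 10, 11}, i.e. every subset of the mask 11, which is how the shift and rotate cases below cover all feasible shift counts.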
    1499              : 
    1500              : /* Apply the operation CODE in type TYPE to the value, mask pairs
    1501              :    R1VAL, R1MASK and R2VAL, R2MASK representing values of type R1TYPE
    1502              :    and R2TYPE and set the value, mask pair *VAL and *MASK to the result.  */
    1503              : 
    1504              : void
    1505    254676115 : bit_value_binop (enum tree_code code, signop sgn, int width,
    1506              :                  widest_int *val, widest_int *mask,
    1507              :                  signop r1type_sgn, int r1type_precision,
    1508              :                  const widest_int &r1val, const widest_int &r1mask,
    1509              :                  signop r2type_sgn, int r2type_precision ATTRIBUTE_UNUSED,
    1510              :                  const widest_int &r2val, const widest_int &r2mask)
    1511              : {
    1512    254676115 :   bool swap_p = false;
    1513              : 
    1514              :   /* Assume we'll get a constant result.  Use an initial non varying
    1515              :      value, we fall back to varying in the end if necessary.  */
    1516    254676115 :   *mask = -1;
    1517              :   /* Ensure that VAL is initialized (to any value).  */
    1518    254676115 :   *val = 0;
    1519              : 
    1520    254676115 :   switch (code)
    1521              :     {
    1522      9209755 :     case BIT_AND_EXPR:
    1523              :       /* The mask is constant where there is a known not
    1524              :          set bit, (m1 | m2) & ((v1 | m1) & (v2 | m2)) */
    1525      9209755 :       *mask = (r1mask | r2mask) & (r1val | r1mask) & (r2val | r2mask);
    1526      9209755 :       *val = r1val & r2val;
    1527      9209755 :       break;
    1528              : 
    1529      3042975 :     case BIT_IOR_EXPR:
    1530              :       /* The mask is constant where there is a known
    1531              :          set bit, (m1 | m2) & ~((v1 & ~m1) | (v2 & ~m2)).  */
    1532      6085950 :       *mask = wi::bit_and_not (r1mask | r2mask,
    1533      6085950 :                                wi::bit_and_not (r1val, r1mask)
    1534     12171900 :                                | wi::bit_and_not (r2val, r2mask));
    1535      3042975 :       *val = r1val | r2val;
    1536      3042975 :       break;
    1537              : 
    1538       224830 :     case BIT_XOR_EXPR:
    1539              :       /* m1 | m2  */
    1540       224830 :       *mask = r1mask | r2mask;
    1541       224830 :       *val = r1val ^ r2val;
    1542       224830 :       break;
    1543              : 
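The three bitwise mask rules above can be checked on plain integers (hypothetical helper names): a result bit stays known exactly when the operator forces it, i.e. AND by a known 0, IOR by a known 1, while XOR needs both input bits known.

```cpp
#include <cassert>
#include <cstdint>

/* Hypothetical uint64_t model of the BIT_AND/BIT_IOR/BIT_XOR mask
   rules; mask bits are unknown.  */
struct vm { uint64_t val, mask; };

static vm
bit_and (vm a, vm b)
{
  /* Unknown where some input is unknown and no input is known 0.  */
  return { a.val & b.val,
           (a.mask | b.mask) & (a.val | a.mask) & (b.val | b.mask) };
}

static vm
bit_ior (vm a, vm b)
{
  /* Unknown where some input is unknown and no input is known 1.  */
  return { a.val | b.val,
           (a.mask | b.mask)
           & ~((a.val & ~a.mask) | (b.val & ~b.mask)) };
}

static vm
bit_xor (vm a, vm b)
{
  /* Unknown wherever either input is unknown.  */
  return { a.val ^ b.val, a.mask | b.mask };
}
```

For a = value 0b1100 with mask 0b0011 (the set {12..15}) and the exact constant b = 0b1010: the AND keeps only bit 1 unknown (b's zero bits force zeros), the IOR keeps only bit 0 unknown (b's set bit 1 forces a one), and the XOR propagates both unknown bits.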
    1544        24199 :     case LROTATE_EXPR:
    1545        24199 :     case RROTATE_EXPR:
    1546        24199 :       if (r2mask == 0)
    1547              :         {
    1548        15234 :           widest_int shift = r2val;
    1549        15234 :           if (shift == 0)
    1550              :             {
    1551           14 :               *mask = r1mask;
    1552           14 :               *val = r1val;
    1553              :             }
    1554              :           else
    1555              :             {
    1556        15220 :               if (wi::neg_p (shift, r2type_sgn))
    1557              :                 {
    1558            4 :                   shift = -shift;
    1559            4 :                   if (code == RROTATE_EXPR)
    1560              :                     code = LROTATE_EXPR;
    1561              :                   else
    1562              :                     code = RROTATE_EXPR;
    1563              :                 }
    1564        15216 :               if (code == RROTATE_EXPR)
    1565              :                 {
    1566        14909 :                   *mask = wi::rrotate (r1mask, shift, width);
    1567        14909 :                   *val = wi::rrotate (r1val, shift, width);
    1568              :                 }
    1569              :               else
    1570              :                 {
    1571          311 :                   *mask = wi::lrotate (r1mask, shift, width);
    1572          311 :                   *val = wi::lrotate (r1val, shift, width);
    1573              :                 }
    1574        15220 :               *mask = wi::ext (*mask, width, sgn);
    1575        15220 :               *val = wi::ext (*val, width, sgn);
    1576              :             }
    1577        15234 :         }
    1578        17930 :       else if (wi::ltu_p (r2val | r2mask, width)
    1579        28350 :                && wi::popcount (r2mask) <= 4)
    1580              :         {
    1581        29709 :           widest_int bits[4];
    1582         3301 :           widest_int res_val, res_mask;
    1583         3301 :           widest_int tmp_val, tmp_mask;
    1584         3301 :           widest_int shift = wi::bit_and_not (r2val, r2mask);
    1585         3301 :           unsigned int bit_count = get_individual_bits (bits, r2mask, 4);
    1586         3301 :           unsigned int count = (1 << bit_count) - 1;
    1587              : 
    1588              :           /* Initialize result to rotate by smallest value of shift.  */
    1589         3301 :           if (code == RROTATE_EXPR)
    1590              :             {
    1591         1584 :               res_mask = wi::rrotate (r1mask, shift, width);
    1592         1584 :               res_val = wi::rrotate (r1val, shift, width);
    1593              :             }
    1594              :           else
    1595              :             {
    1596         1717 :               res_mask = wi::lrotate (r1mask, shift, width);
    1597         1717 :               res_val = wi::lrotate (r1val, shift, width);
    1598              :             }
    1599              : 
    1600              :           /* Iterate through the remaining values of shift.  */
    1601        38704 :           for (unsigned int i=0; i<count; i++)
    1602              :             {
    1603        35403 :               shift ^= bits[gray_code_bit_flips[i]];
    1604        35403 :               if (code == RROTATE_EXPR)
    1605              :                 {
    1606        17320 :                   tmp_mask = wi::rrotate (r1mask, shift, width);
    1607        17320 :                   tmp_val = wi::rrotate (r1val, shift, width);
    1608              :                 }
    1609              :               else
    1610              :                 {
    1611        18083 :                   tmp_mask = wi::lrotate (r1mask, shift, width);
    1612        18083 :                   tmp_val = wi::lrotate (r1val, shift, width);
    1613              :                 }
    1614              :               /* Accumulate the result.  */
    1615        35403 :               res_mask |= tmp_mask | (res_val ^ tmp_val);
    1616              :             }
    1617         3301 :           *val = wi::ext (wi::bit_and_not (res_val, res_mask), width, sgn);
    1618         3301 :           *mask = wi::ext (res_mask, width, sgn);
    1619        16505 :         }
    1620              :       break;
    1621              : 
    1622      6698442 :     case LSHIFT_EXPR:
    1623      6698442 :     case RSHIFT_EXPR:
    1624              :       /* ???  We can handle partially known shift counts if we know
    1625              :          its sign.  That way we can tell that (x << (y | 8)) & 255
    1626              :          is zero.  */
    1627      6698442 :       if (r2mask == 0)
    1628              :         {
    1629      5703181 :           widest_int shift = r2val;
    1630      5703181 :           if (shift == 0)
    1631              :             {
    1632         7499 :               *mask = r1mask;
    1633         7499 :               *val = r1val;
    1634              :             }
    1635              :           else
    1636              :             {
    1637      5695682 :               if (wi::neg_p (shift, r2type_sgn))
    1638              :                 break;
    1639      5695504 :               if (code == RSHIFT_EXPR)
    1640              :                 {
    1641      5307622 :                   *mask = wi::rshift (wi::ext (r1mask, width, sgn), shift, sgn);
    1642      5307589 :                   *val = wi::rshift (wi::ext (r1val, width, sgn), shift, sgn);
    1643              :                 }
    1644              :               else
    1645              :                 {
    1646       387921 :                   *mask = wi::ext (r1mask << shift, width, sgn);
    1647       387915 :                   *val = wi::ext (r1val << shift, width, sgn);
    1648              :                 }
    1649              :             }
    1650      5703181 :         }
    1651       995261 :       else if (wi::ltu_p (r2val | r2mask, width))
    1652              :         {
    1653       913448 :           if (wi::popcount (r2mask) <= 4)
    1654              :             {
    1655      3050343 :               widest_int bits[4];
    1656       338927 :               widest_int arg_val, arg_mask;
    1657       338927 :               widest_int res_val, res_mask;
    1658       338927 :               widest_int tmp_val, tmp_mask;
    1659       338927 :               widest_int shift = wi::bit_and_not (r2val, r2mask);
    1660       338927 :               unsigned int bit_count = get_individual_bits (bits, r2mask, 4);
    1661       338927 :               unsigned int count = (1 << bit_count) - 1;
    1662              : 
    1663              :               /* Initialize result to shift by smallest value of shift.  */
    1664       338927 :               if (code == RSHIFT_EXPR)
    1665              :                 {
    1666       127051 :                   arg_mask = wi::ext (r1mask, width, sgn);
    1667       127051 :                   arg_val = wi::ext (r1val, width, sgn);
    1668       127051 :                   res_mask = wi::rshift (arg_mask, shift, sgn);
    1669       127051 :                   res_val = wi::rshift (arg_val, shift, sgn);
    1670              :                 }
    1671              :               else
    1672              :                 {
    1673       211876 :                   arg_mask = r1mask;
    1674       211876 :                   arg_val = r1val;
    1675       211876 :                   res_mask = arg_mask << shift;
    1676       211876 :                   res_val = arg_val << shift;
    1677              :                 }
    1678              : 
    1679              :               /* Iterate through the remaining values of shift.  */
    1680      3239098 :               for (unsigned int i=0; i<count; i++)
    1681              :                 {
    1682      2900171 :                   shift ^= bits[gray_code_bit_flips[i]];
    1683      2900171 :                   if (code == RSHIFT_EXPR)
    1684              :                     {
    1685      1134299 :                       tmp_mask = wi::rshift (arg_mask, shift, sgn);
    1686      1134299 :                       tmp_val = wi::rshift (arg_val, shift, sgn);
    1687              :                     }
    1688              :                   else
    1689              :                     {
    1690      1765872 :                       tmp_mask = arg_mask << shift;
    1691      1765872 :                       tmp_val = arg_val << shift;
    1692              :                     }
    1693              :                   /* Accumulate the result.  */
    1694      2900171 :                   res_mask |= tmp_mask | (res_val ^ tmp_val);
    1695              :                 }
    1696       338927 :               res_mask = wi::ext (res_mask, width, sgn);
    1697       338927 :               res_val = wi::ext (res_val, width, sgn);
    1698       338927 :               *val = wi::bit_and_not (res_val, res_mask);
    1699       338927 :               *mask = res_mask;
    1700      1694635 :             }
    1701       574521 :           else if ((r1val | r1mask) == 0)
    1702              :             {
    1703              :               /* Handle shifts of zero to avoid undefined wi::ctz below.  */
    1704            0 :               *mask = 0;
    1705            0 :               *val = 0;
    1706              :             }
    1707       574521 :           else if (code == LSHIFT_EXPR)
    1708              :             {
    1709       388971 :               widest_int tmp = wi::mask <widest_int> (width, false);
    1710       388971 :               tmp <<= wi::ctz (r1val | r1mask);
    1711       388971 :               tmp <<= wi::bit_and_not (r2val, r2mask);
    1712       388971 :               *mask = wi::ext (tmp, width, sgn);
    1713       388971 :               *val = 0;
    1714       388971 :             }
    1715       185550 :           else if (!wi::neg_p (r1val | r1mask, sgn))
    1716              :             {
    1717              :               /* Logical right shift, or zero sign bit.  */
    1718       168286 :               widest_int arg = r1val | r1mask;
    1719       168286 :               int lzcount = wi::clz (arg);
    1720       168286 :               if (lzcount)
    1721       168278 :                 lzcount -= wi::get_precision (arg) - width;
    1722       168286 :               widest_int tmp = wi::mask <widest_int> (width, false);
    1723       168286 :               tmp = wi::lrshift (tmp, lzcount);
    1724       168286 :               tmp = wi::lrshift (tmp, wi::bit_and_not (r2val, r2mask));
    1725       168286 :               *mask = wi::ext (tmp, width, sgn);
    1726       168286 :               *val = 0;
    1727       168286 :             }
    1728        17264 :           else if (!wi::neg_p (r1mask))
    1729              :             {
    1730              :               /* Arithmetic right shift with set sign bit.  */
    1731         1111 :               widest_int arg = wi::bit_and_not (r1val, r1mask);
    1732         1111 :               int sbcount = wi::clrsb (arg);
    1733         1111 :               sbcount -= wi::get_precision (arg) - width;
    1734         1111 :               widest_int tmp = wi::mask <widest_int> (width, false);
    1735         1111 :               tmp = wi::lrshift (tmp, sbcount);
    1736         1111 :               tmp = wi::lrshift (tmp, wi::bit_and_not (r2val, r2mask));
    1737         1111 :               *mask = wi::sext (tmp, width);
    1738         1111 :               tmp = wi::bit_not (tmp);
    1739         1111 :               *val = wi::sext (tmp, width);
    1740         1111 :             }
    1741              :         }
    1742              :       break;
    1743              : 
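The shift case above enumerates every shift count consistent with the known bits of the shift operand (using a gray-code walk to reuse work) and ORs any disagreement between candidate results into the mask. The same idea can be sketched outside GCC with plain 8-bit values; this is a simplified illustration, not the wide-int implementation: `BitValue` and `lshift_known_bits` are hypothetical names, and the shifted argument is assumed fully known.

```cpp
#include <cassert>
#include <cstdint>

/* (val, mask) pair in the CCP encoding: mask has a 1 for each unknown bit,
   and val holds the known bits (val & mask == 0).  */
struct BitValue { uint8_t val; uint8_t mask; };

/* Left-shift a fully known 8-bit ARG by a partially known shift amount.
   Try every shift count consistent with (shift_val, shift_mask); any result
   bit on which two candidates disagree becomes unknown.  */
BitValue
lshift_known_bits (uint8_t arg, uint8_t shift_val, uint8_t shift_mask)
{
  uint8_t known = (uint8_t) ~shift_mask;   /* known bits of the shift count */
  bool first = true;
  uint8_t res_val = 0, res_mask = 0;
  for (unsigned s = 0; s < 8; s++)
    {
      /* Skip counts that contradict the known bits of the shift amount.  */
      if ((s & known) != (unsigned) (shift_val & known))
        continue;
      uint8_t cand = (uint8_t) (arg << s);
      if (first)
        {
          res_val = cand;
          first = false;
        }
      else
        res_mask |= res_val ^ cand;   /* Accumulate disagreements.  */
    }
  res_val &= (uint8_t) ~res_mask;     /* Keep only the known bits.  */
  return { res_val, res_mask };
}
```

With `arg = 1` and a shift that is known to be 0 or 1, the candidates 0x01 and 0x02 disagree on the low two bits, so exactly those bits end up in the mask while all higher bits stay known zero.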
    1744    130178333 :     case PLUS_EXPR:
    1745    130178333 :     case POINTER_PLUS_EXPR:
    1746    130178333 :       {
    1747              :         /* Do the addition with unknown bits set to zero, to give carry-ins of
    1748              :            zero wherever possible.  */
    1749    260356666 :         widest_int lo = (wi::bit_and_not (r1val, r1mask)
    1750    260356666 :                          + wi::bit_and_not (r2val, r2mask));
    1751    130178333 :         lo = wi::ext (lo, width, sgn);
    1752              :         /* Do the addition with unknown bits set to one, to give carry-ins of
    1753              :            one wherever possible.  */
    1754    130178576 :         widest_int hi = (r1val | r1mask) + (r2val | r2mask);
    1755    130178333 :         hi = wi::ext (hi, width, sgn);
    1756              :         /* Each bit in the result is known if (a) the corresponding bits in
    1757              :            both inputs are known, and (b) the carry-in to that bit position
    1758              :            is known.  We can check condition (b) by seeing if we got the same
    1759              :            result with minimised carries as with maximised carries.  */
    1760    130178823 :         *mask = r1mask | r2mask | (lo ^ hi);
    1761    130178333 :         *mask = wi::ext (*mask, width, sgn);
    1762              :         /* It shouldn't matter whether we choose lo or hi here.  */
    1763    130178333 :         *val = lo;
    1764    130178333 :         break;
    1765    130178412 :       }
    1766              : 
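The carry-in trick in the addition case is easy to check with ordinary machine integers. The sketch below (hypothetical helper name, 8-bit values instead of `widest_int`) adds once with every unknown bit forced to 0 and once with every unknown bit forced to 1; a result bit on which the two sums disagree had an unknown carry-in and must be marked unknown.

```cpp
#include <cassert>
#include <cstdint>

/* (val, mask) pair: mask has a 1 for each unknown bit, val holds the
   known bits (val & mask == 0).  */
struct BitValue { uint8_t val; uint8_t mask; };

/* Known-bits addition via minimised and maximised carries.  */
BitValue
add_known_bits (uint8_t v1, uint8_t m1, uint8_t v2, uint8_t m2)
{
  uint8_t lo = (uint8_t) ((v1 & ~m1) + (v2 & ~m2));  /* carries minimised */
  uint8_t hi = (uint8_t) ((v1 | m1) + (v2 | m2));    /* carries maximised */
  uint8_t mask = (uint8_t) (m1 | m2 | (lo ^ hi));
  return { (uint8_t) (lo & ~mask), mask };
}
```

For example, adding an operand with two unknown low bits to the constant 4 keeps bit 2 known (no carry can reach it), whereas adding the constant 1 to an operand with an unknown bit 0 makes bit 1 unknown too, because the carry out of bit 0 is unknown.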
    1767     23251361 :     case MINUS_EXPR:
    1768     23251361 :     case POINTER_DIFF_EXPR:
    1769     23251361 :       {
    1770              :         /* Subtraction is derived from the addition algorithm above.  */
    1771     23251361 :         widest_int lo = wi::bit_and_not (r1val, r1mask) - (r2val | r2mask);
    1772     23251361 :         lo = wi::ext (lo, width, sgn);
    1773     23251423 :         widest_int hi = (r1val | r1mask) - wi::bit_and_not (r2val, r2mask);
    1774     23251361 :         hi = wi::ext (hi, width, sgn);
    1775     23251485 :         *mask = r1mask | r2mask | (lo ^ hi);
    1776     23251361 :         *mask = wi::ext (*mask, width, sgn);
    1777     23251361 :         *val = lo;
    1778     23251361 :         break;
    1779     23251461 :       }
    1780              : 
    1781     32421267 :     case MULT_EXPR:
    1782     32421267 :       if (r2mask == 0
    1783     28789725 :           && !wi::neg_p (r2val, sgn)
    1784     63831129 :           && (flag_expensive_optimizations || wi::popcount (r2val) < 8))
    1785     28676035 :         bit_value_mult_const (sgn, width, val, mask, r1val, r1mask, r2val);
    1786      3745232 :       else if (r1mask == 0
    1787        87992 :                && !wi::neg_p (r1val, sgn)
    1788      3846281 :                && (flag_expensive_optimizations || wi::popcount (r1val) < 8))
    1789        86330 :         bit_value_mult_const (sgn, width, val, mask, r2val, r2mask, r1val);
    1790              :       else
    1791              :         {
    1792              :           /* Just track trailing zeros in both operands and transfer
    1793              :              them to the other.  */
    1794      3658902 :           int r1tz = wi::ctz (r1val | r1mask);
    1795      3658902 :           int r2tz = wi::ctz (r2val | r2mask);
    1796      3658902 :           if (r1tz + r2tz >= width)
    1797              :             {
    1798           12 :               *mask = 0;
    1799           12 :               *val = 0;
    1800              :             }
    1801      3658890 :           else if (r1tz + r2tz > 0)
    1802              :             {
    1803       901584 :               *mask = wi::ext (wi::mask <widest_int> (r1tz + r2tz, true),
    1804       450792 :                                width, sgn);
    1805       450792 :               *val = 0;
    1806              :             }
    1807              :         }
    1808              :       break;
    1809              : 
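The trailing-zero fallback in the multiplication case rests on ctz(a*b) >= ctz(a) + ctz(b): the factors of two in the operands multiply together. A minimal sketch of the guaranteed-trailing-zeros count (helper name is mine, 32-bit values):

```cpp
#include <cassert>
#include <cstdint>

/* Guaranteed trailing zeros of a (val, mask) operand: count them on
   val | mask, the pattern of every bit that might be 1.  */
unsigned
known_trailing_zeros (uint32_t val, uint32_t mask)
{
  uint32_t possible = val | mask;
  unsigned tz = 0;
  while (tz < 32 && ((possible >> tz) & 1) == 0)
    tz++;
  return tz;
}
```

A multiple of 4 times a multiple of 2 is therefore always a multiple of 8, which is exactly the set of known-zero low bits the case above records.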
    1810     30072302 :     case EQ_EXPR:
    1811     30072302 :     case NE_EXPR:
    1812     30072302 :       {
    1813     30072302 :         widest_int m = r1mask | r2mask;
    1814     30072302 :         if (wi::bit_and_not (r1val, m) != wi::bit_and_not (r2val, m))
    1815              :           {
    1816      2312490 :             *mask = 0;
    1817      2312490 :             *val = ((code == EQ_EXPR) ? 0 : 1);
    1818              :           }
    1819              :         else
    1820              :           {
    1821              :             /* We know the result of a comparison is always one or zero.  */
    1822     27759812 :             *mask = 1;
    1823     27759812 :             *val = 0;
    1824              :           }
    1825     30072302 :         break;
    1826     30072302 :       }
    1827              : 
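The equality case above can be restated compactly: two values are provably unequal exactly when they disagree on some bit that both know. A hedged sketch (the function name and tri-state return convention are mine, not GCC's):

```cpp
#include <cassert>
#include <cstdint>

/* Tri-state equality on (val, mask) pairs: 1 = definitely equal,
   0 = definitely unequal, -1 = unknown.  */
int
known_equal (uint8_t v1, uint8_t m1, uint8_t v2, uint8_t m2)
{
  uint8_t m = m1 | m2;
  if ((v1 & ~m) != (v2 & ~m))
    return 0;       /* A commonly known bit differs.  */
  if (m == 0)
    return 1;       /* Everything is known and agrees.  */
  return -1;        /* An unknown bit could break the tie either way.  */
}
```

In the GCC code the fully-known-and-equal case rarely matters, since such operands would normally have been folded to constants before reaching the bit-value machinery.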
    1828      7565996 :     case GE_EXPR:
    1829      7565996 :     case GT_EXPR:
    1830      7565996 :       swap_p = true;
    1831      7565996 :       code = swap_tree_comparison (code);
    1832              :       /* Fall through.  */
    1833     12413992 :     case LT_EXPR:
    1834     12413992 :     case LE_EXPR:
    1835     12413992 :       {
    1836     12413992 :         widest_int min1, max1, min2, max2;
    1837     12413992 :         int minmax, maxmin;
    1838              : 
    1839     12413992 :         const widest_int &o1val = swap_p ? r2val : r1val;
    1840      4847996 :         const widest_int &o1mask = swap_p ? r2mask : r1mask;
    1841      4847996 :         const widest_int &o2val = swap_p ? r1val : r2val;
    1842      4847996 :         const widest_int &o2mask = swap_p ? r1mask : r2mask;
    1843              : 
    1844     12413992 :         value_mask_to_min_max (&min1, &max1, o1val, o1mask,
    1845              :                                r1type_sgn, r1type_precision);
    1846     12413992 :         value_mask_to_min_max (&min2, &max2, o2val, o2mask,
    1847              :                                r1type_sgn, r1type_precision);
    1848              : 
    1849              :         /* For comparisons the signedness is in the comparison operands.  */
    1850              :         /* Do a cross comparison of the max/min pairs.  */
    1851     12413992 :         maxmin = wi::cmp (max1, min2, r1type_sgn);
    1852     12413992 :         minmax = wi::cmp (min1, max2, r1type_sgn);
    1853     19321873 :         if (maxmin < (code == LE_EXPR ? 1 : 0))  /* o1 < or <= o2.  */
    1854              :           {
    1855      3002837 :             *mask = 0;
    1856      3002837 :             *val = 1;
    1857              :           }
    1858     12004539 :         else if (minmax > (code == LT_EXPR ? -1 : 0))  /* o1 >= or > o2.  */
    1859              :           {
    1860       452396 :             *mask = 0;
    1861       452396 :             *val = 0;
    1862              :           }
    1863      8958759 :         else if (maxmin == minmax)  /* o1 and o2 are equal.  */
    1864              :           {
     1865              :             /* This case should not be reachable, as fully constant
     1866              :                operands would have been folded already.  */
    1867            0 :             *mask = 0;
    1868            0 :             *val = (code == LE_EXPR ? 1 : 0);
    1869              :           }
    1870              :         else
    1871              :           {
    1872              :             /* We know the result of a comparison is always one or zero.  */
    1873      8958759 :             *mask = 1;
    1874      8958759 :             *val = 0;
    1875              :           }
    1876     12413992 :         break;
    1877     12414078 :       }
    1878              : 
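value_mask_to_min_max turns a (val, mask) pair into a range. In the unsigned case the possible values are exactly { val | s : s a subset of mask }, so the minimum is val and the maximum is val | mask, and the cross comparison of the two ranges decides the ordering. A sketch of the strict less-than decision (unsigned only; names are mine):

```cpp
#include <cassert>
#include <cstdint>

/* Tri-state unsigned o1 < o2 on (val, mask) pairs: 1 = always true,
   0 = always false, -1 = unknown.  */
int
known_less_than (uint8_t v1, uint8_t m1, uint8_t v2, uint8_t m2)
{
  uint8_t min1 = v1, max1 = (uint8_t) (v1 | m1);
  uint8_t min2 = v2, max2 = (uint8_t) (v2 | m2);
  if (max1 < min2)
    return 1;      /* Even the largest o1 is below the smallest o2.  */
  if (min1 >= max2)
    return 0;      /* Even the smallest o1 reaches the largest o2.  */
  return -1;       /* The ranges overlap; the known bits cannot decide.  */
}
```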
    1879      1982619 :     case MIN_EXPR:
    1880      1982619 :     case MAX_EXPR:
    1881      1982619 :       {
    1882      1982619 :         widest_int min1, max1, min2, max2;
    1883              : 
    1884      1982619 :         value_mask_to_min_max (&min1, &max1, r1val, r1mask, sgn, width);
    1885      1982619 :         value_mask_to_min_max (&min2, &max2, r2val, r2mask, sgn, width);
    1886              : 
    1887      1982619 :         if (wi::cmp (max1, min2, sgn) <= 0)  /* r1 is less than r2.  */
    1888              :           {
    1889         6655 :             if (code == MIN_EXPR)
    1890              :               {
    1891         5893 :                 *mask = r1mask;
    1892         5893 :                 *val = r1val;
    1893              :               }
    1894              :             else
    1895              :               {
    1896          762 :                 *mask = r2mask;
    1897          762 :                 *val = r2val;
    1898              :               }
    1899              :           }
    1900      1975964 :         else if (wi::cmp (min1, max2, sgn) >= 0)  /* r2 is less than r1.  */
    1901              :           {
    1902       110756 :             if (code == MIN_EXPR)
    1903              :               {
    1904         1707 :                 *mask = r2mask;
    1905         1707 :                 *val = r2val;
    1906              :               }
    1907              :             else
    1908              :               {
    1909       109049 :                 *mask = r1mask;
    1910       109049 :                 *val = r1val;
    1911              :               }
    1912              :           }
    1913              :         else
    1914              :           {
    1915              :             /* The result is either r1 or r2.  */
    1916      1865212 :             *mask = r1mask | r2mask | (r1val ^ r2val);
    1917      1865208 :             *val = r1val;
    1918              :           }
    1919      1982619 :         break;
    1920      1982623 :       }
    1921              : 
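When neither range dominates, the MIN/MAX case falls back to "the result is one of the two operands", with mask = r1mask | r2mask | (r1val ^ r2val). The brute-force check below (my own helper, 8-bit unsigned MIN) confirms that this mask covers every result bit that can actually vary, by enumerating all concrete pairs consistent with the inputs.

```cpp
#include <cassert>
#include <cstdint>

/* Check that mask = m1 | m2 | (v1 ^ v2) is a sound unknown-bits mask for
   unsigned MIN: every bit the result can actually take two values on must
   lie inside the formula's mask.  */
bool
min_mask_is_sound (uint8_t v1, uint8_t m1, uint8_t v2, uint8_t m2)
{
  uint8_t formula_mask = (uint8_t) (m1 | m2 | (v1 ^ v2));
  uint8_t known1 = (uint8_t) ~m1, known2 = (uint8_t) ~m2;
  bool first = true;
  uint8_t ref = 0, vary = 0;
  for (unsigned s1 = 0; s1 < 256; s1++)
    {
      if ((s1 & known1) != v1)
        continue;                       /* inconsistent with (v1, m1) */
      for (unsigned s2 = 0; s2 < 256; s2++)
        {
          if ((s2 & known2) != v2)
            continue;                   /* inconsistent with (v2, m2) */
          uint8_t r = (uint8_t) (s1 < s2 ? s1 : s2);
          if (first)
            {
              ref = r;
              first = false;
            }
          else
            vary |= ref ^ r;            /* bits that actually vary */
        }
    }
  return (vary & ~formula_mask) == 0;   /* formula must cover them all */
}
```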
    1922      1741666 :     case TRUNC_MOD_EXPR:
    1923      1741666 :       {
    1924      1741666 :         widest_int r1max = r1val | r1mask;
    1925      1741666 :         widest_int r2max = r2val | r2mask;
    1926      1741666 :         if (r2mask == 0)
    1927              :           {
    1928       504804 :             widest_int shift = wi::exact_log2 (r2val);
    1929       504804 :             if (shift != -1)
    1930              :               {
     1931              :                 /* Handle modulo by a power of 2 as a bitwise AND.  */
    1932        87342 :                 widest_int tem_val, tem_mask;
    1933        87342 :                 bit_value_binop (BIT_AND_EXPR, sgn, width, &tem_val, &tem_mask,
    1934              :                                  r1type_sgn, r1type_precision, r1val, r1mask,
    1935              :                                  r2type_sgn, r2type_precision,
    1936        87342 :                                  r2val - 1, r2mask);
    1937        87342 :                 if (sgn == UNSIGNED
    1938        87188 :                     || !wi::neg_p (r1max)
    1939       134444 :                     || (tem_mask == 0 && tem_val == 0))
    1940              :                   {
    1941        40272 :                     *val = tem_val;
    1942        40272 :                     *mask = tem_mask;
    1943        40272 :                     return;
    1944              :                   }
    1945        87342 :               }
    1946       504804 :           }
    1947      1701394 :         if (sgn == UNSIGNED
    1948      1701394 :             || (!wi::neg_p (r1max) && !wi::neg_p (r2max)))
    1949              :           {
    1950              :             /* Confirm R2 has some bits set, to avoid division by zero.  */
    1951       856707 :             widest_int r2min = wi::bit_and_not (r2val, r2mask);
    1952       856707 :             if (r2min != 0)
    1953              :               {
    1954              :                 /* R1 % R2 is R1 if R1 is always less than R2.  */
    1955       319220 :                 if (wi::ltu_p (r1max, r2min))
    1956              :                   {
    1957        15957 :                     *mask = r1mask;
    1958        15957 :                     *val = r1val;
    1959              :                   }
    1960              :                 else
    1961              :                   {
    1962              :                     /* R1 % R2 is always less than the maximum of R2.  */
    1963       303263 :                     unsigned int lzcount = wi::clz (r2max);
    1964       303263 :                     unsigned int bits = wi::get_precision (r2max) - lzcount;
    1965       303263 :                     if (r2max == wi::lshift (1, bits))
    1966            0 :                       bits--;
    1967       303263 :                     *mask = wi::mask <widest_int> (bits, false);
    1968       303263 :                     *val = 0;
    1969              :                   }
     1970              :               }
     1971       856707 :           }
    1972      1741666 :         }
    1973      1701394 :       break;
    1974              : 
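The rewrite in the TRUNC_MOD case leans on the identity x % 2^k == x & (2^k - 1) for non-negative x, which is what lets a power-of-two modulo reuse the BIT_AND_EXPR bit-value rules. A quick check of the identity (helper name is mine):

```cpp
#include <cassert>
#include <cstdint>

/* Modulo by a power of two, rewritten as a bitwise AND.  Valid only when
   POW2 is a power of two and X is interpreted as non-negative, which is
   why the case above falls back for possibly negative signed operands.  */
uint32_t
mod_pow2 (uint32_t x, uint32_t pow2)
{
  return x & (pow2 - 1);
}
```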
    1975      3395076 :     case EXACT_DIV_EXPR:
    1976      3395076 :     case TRUNC_DIV_EXPR:
    1977      3395076 :       {
    1978      3395076 :         widest_int r1max = r1val | r1mask;
    1979      3395076 :         widest_int r2max = r2val | r2mask;
    1980      4764286 :         if (r2mask == 0
    1981      3395076 :             && (code == EXACT_DIV_EXPR
    1982      2541740 :                 || sgn == UNSIGNED
    1983       707531 :                 || !wi::neg_p (r1max)))
    1984              :           {
    1985      2025866 :             widest_int shift = wi::exact_log2 (r2val);
    1986      2025866 :             if (shift != -1)
    1987              :               {
     1988              :                 /* Handle division by a power of 2 as a right shift.  */
    1989      1513331 :                 bit_value_binop (RSHIFT_EXPR, sgn, width, val, mask,
    1990              :                                  r1type_sgn, r1type_precision, r1val, r1mask,
    1991              :                                  r2type_sgn, r2type_precision, shift, r2mask);
    1992      1513331 :                 return;
    1993              :               }
    1994      2025866 :           }
    1995      1881745 :         if (sgn == UNSIGNED
    1996      1881745 :             || (!wi::neg_p (r1max) && !wi::neg_p (r2max)))
    1997              :           {
    1998              :             /* Confirm R2 has some bits set, to avoid division by zero.  */
    1999       748845 :             widest_int r2min = wi::bit_and_not (r2val, r2mask);
    2000       748845 :             if (r2min != 0)
    2001              :               {
    2002              :                 /* R1 / R2 is zero if R1 is always less than R2.  */
    2003       384654 :                 if (wi::ltu_p (r1max, r2min))
    2004              :                   {
    2005         2705 :                     *mask = 0;
    2006         2705 :                     *val = 0;
    2007              :                   }
    2008              :                 else
    2009              :                   {
    2010       381949 :                     widest_int upper
    2011       381949 :                       = wi::udiv_trunc (wi::zext (r1max, width), r2min);
    2012       381949 :                     unsigned int lzcount = wi::clz (upper);
    2013       381949 :                     unsigned int bits = wi::get_precision (upper) - lzcount;
    2014       381949 :                     *mask = wi::mask <widest_int> (bits, false);
    2015       381949 :                     *val = 0;
    2016       381949 :                   }
     2017              :               }
     2018       748845 :           }
    2019      3395080 :         }
    2020      1881745 :       break;
    2021              : 
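Two facts drive the division case: a power-of-two divisor turns the division into a right shift, and in the general unsigned case r1 / r2 can never exceed r1max / r2min, so the quotient fits in the bit-width of that upper bound and all higher bits become known zeros. A sketch of that bit count (hypothetical helper, 32-bit):

```cpp
#include <cassert>
#include <cstdint>

/* Number of result bits that may be nonzero in R1 / R2, given the largest
   possible dividend and the smallest possible (nonzero) divisor.  The
   corresponding bit-value mask is (1 << bits) - 1 with all value bits 0.  */
unsigned
quotient_known_bits (uint32_t r1max, uint32_t r2min)
{
  uint32_t upper = r1max / r2min;   /* largest possible quotient */
  unsigned bits = 0;
  while (bits < 32 && (upper >> bits))
    bits++;                         /* bit-width of that bound */
  return bits;
}
```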
    2022    254676115 :     default:;
    2023              :     }
    2024              : }
    2025              : 
    2026              : /* Return the propagation value when applying the operation CODE to
    2027              :    the value RHS yielding type TYPE.  */
    2028              : 
    2029              : static ccp_prop_value_t
    2030     30051909 : bit_value_unop (enum tree_code code, tree type, tree rhs)
    2031              : {
    2032     30051909 :   ccp_prop_value_t rval = get_value_for_expr (rhs, true);
    2033     30051909 :   widest_int value, mask;
    2034     30051909 :   ccp_prop_value_t val;
    2035              : 
    2036     30051909 :   if (rval.lattice_val == UNDEFINED)
    2037            0 :     return rval;
    2038              : 
    2039     37027111 :   gcc_assert ((rval.lattice_val == CONSTANT
    2040              :                && TREE_CODE (rval.value) == INTEGER_CST)
    2041              :               || wi::sext (rval.mask, TYPE_PRECISION (TREE_TYPE (rhs))) == -1);
    2042     60103818 :   bit_value_unop (code, TYPE_SIGN (type), TYPE_PRECISION (type), &value, &mask,
    2043     30051909 :                   TYPE_SIGN (TREE_TYPE (rhs)), TYPE_PRECISION (TREE_TYPE (rhs)),
    2044     60103818 :                   value_to_wide_int (rval), rval.mask);
    2045     30052105 :   if (wi::sext (mask, TYPE_PRECISION (type)) != -1)
    2046              :     {
    2047     23891953 :       val.lattice_val = CONSTANT;
    2048     23891953 :       val.mask = mask;
    2049              :       /* ???  Delay building trees here.  */
    2050     23891953 :       val.value = wide_int_to_tree (type, value);
    2051              :     }
    2052              :   else
    2053              :     {
    2054      6159956 :       val.lattice_val = VARYING;
    2055      6159956 :       val.value = NULL_TREE;
    2056      6159956 :       val.mask = -1;
    2057              :     }
    2058     30051909 :   return val;
    2059     30052324 : }
    2060              : 
    2061              : /* Return the propagation value when applying the operation CODE to
    2062              :    the values RHS1 and RHS2 yielding type TYPE.  */
    2063              : 
    2064              : static ccp_prop_value_t
    2065    142622388 : bit_value_binop (enum tree_code code, tree type, tree rhs1, tree rhs2)
    2066              : {
    2067    142622388 :   ccp_prop_value_t r1val = get_value_for_expr (rhs1, true);
    2068    142622388 :   ccp_prop_value_t r2val = get_value_for_expr (rhs2, true);
    2069    142622388 :   widest_int value, mask;
    2070    142622388 :   ccp_prop_value_t val;
    2071              : 
    2072    142622388 :   if (r1val.lattice_val == UNDEFINED
    2073    142468603 :       || r2val.lattice_val == UNDEFINED)
    2074              :     {
    2075       159694 :       val.lattice_val = VARYING;
    2076       159694 :       val.value = NULL_TREE;
    2077       159694 :       val.mask = -1;
    2078       159694 :       return val;
    2079              :     }
    2080              : 
    2081    187351858 :   gcc_assert ((r1val.lattice_val == CONSTANT
    2082              :                && TREE_CODE (r1val.value) == INTEGER_CST)
    2083              :               || wi::sext (r1val.mask,
    2084              :                            TYPE_PRECISION (TREE_TYPE (rhs1))) == -1);
    2085    155331942 :   gcc_assert ((r2val.lattice_val == CONSTANT
    2086              :                && TREE_CODE (r2val.value) == INTEGER_CST)
    2087              :               || wi::sext (r2val.mask,
    2088              :                            TYPE_PRECISION (TREE_TYPE (rhs2))) == -1);
    2089    284925388 :   bit_value_binop (code, TYPE_SIGN (type), TYPE_PRECISION (type), &value, &mask,
    2090    142462694 :                    TYPE_SIGN (TREE_TYPE (rhs1)), TYPE_PRECISION (TREE_TYPE (rhs1)),
    2091    284925931 :                    value_to_wide_int (r1val), r1val.mask,
    2092    142462694 :                    TYPE_SIGN (TREE_TYPE (rhs2)), TYPE_PRECISION (TREE_TYPE (rhs2)),
    2093    284925388 :                    value_to_wide_int (r2val), r2val.mask);
    2094              : 
    2095              :   /* (x * x) & 2 == 0.  */
    2096    142462694 :   if (code == MULT_EXPR && rhs1 == rhs2 && TYPE_PRECISION (type) > 1)
    2097              :     {
    2098       169328 :       widest_int m = 2;
    2099       169328 :       if (wi::sext (mask, TYPE_PRECISION (type)) != -1)
    2100          525 :         value = wi::bit_and_not (value, m);
    2101              :       else
    2102       168803 :         value = 0;
    2103       169328 :       mask = wi::bit_and_not (mask, m);
    2104       169328 :     }
    2105              : 
    2106    142462716 :   if (wi::sext (mask, TYPE_PRECISION (type)) != -1)
    2107              :     {
    2108    120250091 :       val.lattice_val = CONSTANT;
    2109    120250091 :       val.mask = mask;
    2110              :       /* ???  Delay building trees here.  */
    2111    120250091 :       val.value = wide_int_to_tree (type, value);
    2112              :     }
    2113              :   else
    2114              :     {
    2115     22212603 :       val.lattice_val = VARYING;
    2116     22212603 :       val.value = NULL_TREE;
    2117     22212603 :       val.mask = -1;
    2118              :     }
    2119              :   return val;
    2120    142622691 : }
    2121              : 
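The special case guarded in the wrapper above encodes a classic fact: x*x mod 4 is always 0 or 1 (even x squares to 0 mod 4, odd x to 1 mod 4), so bit 1 of any square is zero even when nothing at all is known about x. A brute-force confirmation over one byte of inputs (the function name is mine):

```cpp
#include <cassert>
#include <cstdint>

/* Verify that bit 1 of x*x is clear for every 8-bit x; by periodicity
   mod 4 this extends to all wider types with precision > 1.  */
bool
square_bit1_always_clear ()
{
  for (uint32_t x = 0; x < 256; x++)
    if (((x * x) & 2) != 0)
      return false;
  return true;
}
```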
    2122              : /* Return the propagation value for __builtin_assume_aligned
    2123              :    and functions with assume_aligned or alloc_aligned attribute.
    2124              :    For __builtin_assume_aligned, ATTR is NULL_TREE,
    2125              :    for assume_aligned attribute ATTR is non-NULL and ALLOC_ALIGNED
    2126              :    is false, for alloc_aligned attribute ATTR is non-NULL and
    2127              :    ALLOC_ALIGNED is true.  */
    2128              : 
    2129              : static ccp_prop_value_t
    2130         7630 : bit_value_assume_aligned (gimple *stmt, tree attr, ccp_prop_value_t ptrval,
    2131              :                           bool alloc_aligned)
    2132              : {
    2133         7630 :   tree align, misalign = NULL_TREE, type;
    2134         7630 :   unsigned HOST_WIDE_INT aligni, misaligni = 0;
    2135         7630 :   ccp_prop_value_t alignval;
    2136         7630 :   widest_int value, mask;
    2137         7630 :   ccp_prop_value_t val;
    2138              : 
    2139         7630 :   if (attr == NULL_TREE)
    2140              :     {
    2141         2696 :       tree ptr = gimple_call_arg (stmt, 0);
    2142         2696 :       type = TREE_TYPE (ptr);
    2143         2696 :       ptrval = get_value_for_expr (ptr, true);
    2144              :     }
    2145              :   else
    2146              :     {
    2147         4934 :       tree lhs = gimple_call_lhs (stmt);
    2148         4934 :       type = TREE_TYPE (lhs);
    2149              :     }
    2150              : 
    2151         7630 :   if (ptrval.lattice_val == UNDEFINED)
    2152            0 :     return ptrval;
    2153        14828 :   gcc_assert ((ptrval.lattice_val == CONSTANT
    2154              :                && TREE_CODE (ptrval.value) == INTEGER_CST)
    2155              :               || wi::sext (ptrval.mask, TYPE_PRECISION (type)) == -1);
    2156         7630 :   if (attr == NULL_TREE)
    2157              :     {
    2158              :       /* Get aligni and misaligni from __builtin_assume_aligned.  */
    2159         2696 :       align = gimple_call_arg (stmt, 1);
    2160         2696 :       if (!tree_fits_uhwi_p (align))
    2161           47 :         return ptrval;
    2162         2649 :       aligni = tree_to_uhwi (align);
    2163         2649 :       if (gimple_call_num_args (stmt) > 2)
    2164              :         {
    2165           36 :           misalign = gimple_call_arg (stmt, 2);
    2166           36 :           if (!tree_fits_uhwi_p (misalign))
    2167            2 :             return ptrval;
    2168           34 :           misaligni = tree_to_uhwi (misalign);
    2169              :         }
    2170              :     }
    2171              :   else
    2172              :     {
    2173              :       /* Get aligni and misaligni from assume_aligned or
    2174              :          alloc_align attributes.  */
    2175         4934 :       if (TREE_VALUE (attr) == NULL_TREE)
    2176            0 :         return ptrval;
    2177         4934 :       attr = TREE_VALUE (attr);
    2178         4934 :       align = TREE_VALUE (attr);
    2179         4934 :       if (!tree_fits_uhwi_p (align))
    2180            0 :         return ptrval;
    2181         4934 :       aligni = tree_to_uhwi (align);
    2182         4934 :       if (alloc_aligned)
    2183              :         {
    2184         4892 :           if (aligni == 0 || aligni > gimple_call_num_args (stmt))
    2185            0 :             return ptrval;
    2186         4892 :           align = gimple_call_arg (stmt, aligni - 1);
    2187         4892 :           if (!tree_fits_uhwi_p (align))
    2188          215 :             return ptrval;
    2189         4677 :           aligni = tree_to_uhwi (align);
    2190              :         }
    2191           42 :       else if (TREE_CHAIN (attr) && TREE_VALUE (TREE_CHAIN (attr)))
    2192              :         {
    2193           21 :           misalign = TREE_VALUE (TREE_CHAIN (attr));
    2194           21 :           if (!tree_fits_uhwi_p (misalign))
    2195            0 :             return ptrval;
    2196           21 :           misaligni = tree_to_uhwi (misalign);
    2197              :         }
    2198              :     }
    2199         7366 :   if (aligni <= 1 || (aligni & (aligni - 1)) != 0 || misaligni >= aligni)
    2200          154 :     return ptrval;
    2201              : 
    2202         7212 :   align = build_int_cst_type (type, -aligni);
    2203         7212 :   alignval = get_value_for_expr (align, true);
    2204        14424 :   bit_value_binop (BIT_AND_EXPR, TYPE_SIGN (type), TYPE_PRECISION (type), &value, &mask,
    2205        14424 :                    TYPE_SIGN (type), TYPE_PRECISION (type), value_to_wide_int (ptrval), ptrval.mask,
    2206        14424 :                    TYPE_SIGN (type), TYPE_PRECISION (type), value_to_wide_int (alignval), alignval.mask);
    2207              : 
    2208         7212 :   if (wi::sext (mask, TYPE_PRECISION (type)) != -1)
    2209              :     {
    2210         7212 :       val.lattice_val = CONSTANT;
    2211         7212 :       val.mask = mask;
    2212         7212 :       gcc_assert ((mask.to_uhwi () & (aligni - 1)) == 0);
    2213         7212 :       gcc_assert ((value.to_uhwi () & (aligni - 1)) == 0);
    2214         7212 :       value |= misaligni;
    2215              :       /* ???  Delay building trees here.  */
    2216         7212 :       val.value = wide_int_to_tree (type, value);
    2217              :     }
    2218              :   else
    2219              :     {
    2220            0 :       val.lattice_val = VARYING;
    2221            0 :       val.value = NULL_TREE;
    2222            0 :       val.mask = -1;
    2223              :     }
    2224         7212 :   return val;
    2225         7630 : }
    2226              : 
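The AND with -aligni above works because every pointer satisfying p % align == misalign has identical low log2(align) bits, namely misalign itself; those are exactly the bits the lattice can record as known. A small check under that assumption (helper name is mine; align must be a power of two):

```cpp
#include <assert.h>
#include <stdint.h>

/* Every address congruent to MISALIGN modulo a power-of-two ALIGN has the
   same low bits, so CCP may treat them as known.  Returns false if any
   address in a sample range violates that.  */
bool
low_bits_pinned (uint64_t align, uint64_t misalign)
{
  for (uint64_t p = misalign; p < 4096; p += align)
    if ((p & (align - 1)) != misalign)
      return false;
  return true;
}
```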
    2227              : /* Evaluate statement STMT.
     2228              :    Valid only for assignments, calls, conditionals, and switches.  */
    2229              : 
    2230              : static ccp_prop_value_t
    2231    237775741 : evaluate_stmt (gimple *stmt)
    2232              : {
    2233    237775741 :   ccp_prop_value_t val;
    2234    237775741 :   tree simplified = NULL_TREE;
    2235    237775741 :   ccp_lattice_t likelyvalue = likely_value (stmt);
    2236    237775741 :   bool is_constant = false;
    2237    237775741 :   unsigned int align;
    2238    237775741 :   bool ignore_return_flags = false;
    2239              : 
    2240    237775741 :   if (dump_file && (dump_flags & TDF_DETAILS))
    2241              :     {
    2242           66 :       fprintf (dump_file, "which is likely ");
    2243           66 :       switch (likelyvalue)
    2244              :         {
    2245           65 :         case CONSTANT:
    2246           65 :           fprintf (dump_file, "CONSTANT");
    2247           65 :           break;
    2248            1 :         case UNDEFINED:
    2249            1 :           fprintf (dump_file, "UNDEFINED");
    2250            1 :           break;
    2251            0 :         case VARYING:
    2252            0 :           fprintf (dump_file, "VARYING");
    2253            0 :           break;
    2254           66 :         default:;
    2255              :         }
    2256           66 :       fprintf (dump_file, "\n");
    2257              :     }
    2258              : 
    2259              :   /* If the statement is likely to have a CONSTANT result, then try
    2260              :      to fold the statement to determine the constant value.  */
    2261              :   /* FIXME.  This is the only place that we call ccp_fold.
    2262              :      Since likely_value never returns CONSTANT for calls, we will
    2263              :      not attempt to fold them, including builtins that may profit.  */
    2264    237775741 :   if (likelyvalue == CONSTANT)
    2265              :     {
    2266    237505211 :       fold_defer_overflow_warnings ();
    2267    237505211 :       simplified = ccp_fold (stmt);
    2268    237505211 :       if (simplified
    2269     33029159 :           && TREE_CODE (simplified) == SSA_NAME)
    2270              :         {
    2271              :           /* We may not use values of something that may be simulated again,
    2272              :              see valueize_op_1.  */
    2273     16966997 :           if (SSA_NAME_IS_DEFAULT_DEF (simplified)
    2274     16966997 :               || ! prop_simulate_again_p (SSA_NAME_DEF_STMT (simplified)))
    2275              :             {
    2276     12367643 :               ccp_prop_value_t *val = get_value (simplified);
    2277     12367643 :               if (val && val->lattice_val != VARYING)
    2278              :                 {
    2279       617311 :                   fold_undefer_overflow_warnings (true, stmt, 0);
    2280       617311 :                   return *val;
    2281              :                 }
    2282              :             }
    2283              :           else
    2284              :             /* We may also not place a non-valueized copy in the lattice
    2285              :                as that might become stale if we never re-visit this stmt.  */
    2286              :             simplified = NULL_TREE;
    2287              :         }
    2288     27812494 :       is_constant = simplified && is_gimple_min_invariant (simplified);
    2289    236887900 :       fold_undefer_overflow_warnings (is_constant, stmt, 0);
    2290    236887900 :       if (is_constant)
    2291              :         {
    2292              :           /* The statement produced a constant value.  */
    2293     13257698 :           val.lattice_val = CONSTANT;
    2294     13257698 :           val.value = simplified;
    2295     13257698 :           val.mask = 0;
    2296     13257698 :           return val;
    2297              :         }
    2298              :     }
    2299              :   /* If the statement is likely to have a VARYING result, then do not
    2300              :      bother folding the statement.  */
    2301       270530 :   else if (likelyvalue == VARYING)
    2302              :     {
    2303       110264 :       enum gimple_code code = gimple_code (stmt);
    2304       110264 :       if (code == GIMPLE_ASSIGN)
    2305              :         {
    2306          770 :           enum tree_code subcode = gimple_assign_rhs_code (stmt);
    2307              : 
    2308              :           /* Other cases cannot satisfy is_gimple_min_invariant
    2309              :              without folding.  */
    2310          770 :           if (get_gimple_rhs_class (subcode) == GIMPLE_SINGLE_RHS)
    2311          770 :             simplified = gimple_assign_rhs1 (stmt);
    2312              :         }
    2313       109494 :       else if (code == GIMPLE_SWITCH)
    2314            0 :         simplified = gimple_switch_index (as_a <gswitch *> (stmt));
    2315              :       else
    2316              :         /* These cannot satisfy is_gimple_min_invariant without folding.  */
    2317       109494 :         gcc_assert (code == GIMPLE_CALL || code == GIMPLE_COND);
    2318          770 :       is_constant = simplified && is_gimple_min_invariant (simplified);
    2319            0 :       if (is_constant)
    2320              :         {
    2321              :           /* The statement produced a constant value.  */
    2322            0 :           val.lattice_val = CONSTANT;
    2323            0 :           val.value = simplified;
    2324            0 :           val.mask = 0;
    2325              :         }
    2326              :     }
    2327              :   /* If the statement result is likely UNDEFINED, make it so.  */
    2328       160266 :   else if (likelyvalue == UNDEFINED)
    2329              :     {
    2330       160266 :       val.lattice_val = UNDEFINED;
    2331       160266 :       val.value = NULL_TREE;
    2332       160266 :       val.mask = 0;
    2333       160266 :       return val;
    2334              :     }
    2335              : 
    2336              :   /* Resort to simplification for bitwise tracking.  */
    2337    223740466 :   if (flag_tree_bit_ccp
    2338    223657000 :       && (likelyvalue == CONSTANT || is_gimple_call (stmt)
    2339          770 :           || (gimple_assign_single_p (stmt)
    2340          770 :               && gimple_assign_rhs_code (stmt) == ADDR_EXPR))
    2341    447396730 :       && !is_constant)
    2342              :     {
    2343    223656264 :       enum gimple_code code = gimple_code (stmt);
    2344    223656264 :       val.lattice_val = VARYING;
    2345    223656264 :       val.value = NULL_TREE;
    2346    223656264 :       val.mask = -1;
    2347    223656264 :       if (code == GIMPLE_ASSIGN)
    2348              :         {
    2349    179312440 :           enum tree_code subcode = gimple_assign_rhs_code (stmt);
    2350    179312440 :           tree rhs1 = gimple_assign_rhs1 (stmt);
    2351    179312440 :           tree lhs = gimple_assign_lhs (stmt);
    2352    358177456 :           if ((INTEGRAL_TYPE_P (TREE_TYPE (lhs))
    2353     33932003 :                || POINTER_TYPE_P (TREE_TYPE (lhs)))
    2354    352079884 :               && (INTEGRAL_TYPE_P (TREE_TYPE (rhs1))
    2355     29304559 :                   || POINTER_TYPE_P (TREE_TYPE (rhs1))))
    2356    172875245 :             switch (get_gimple_rhs_class (subcode))
    2357              :               {
    2358     38753738 :               case GIMPLE_SINGLE_RHS:
    2359     38753738 :                 val = get_value_for_expr (rhs1, true);
    2360     38753738 :                 break;
    2361              : 
    2362     30051909 :               case GIMPLE_UNARY_RHS:
    2363     30051909 :                 val = bit_value_unop (subcode, TREE_TYPE (lhs), rhs1);
    2364     30051909 :                 break;
    2365              : 
    2366    104052698 :               case GIMPLE_BINARY_RHS:
    2367    104052698 :                 val = bit_value_binop (subcode, TREE_TYPE (lhs), rhs1,
    2368    104052698 :                                        gimple_assign_rhs2 (stmt));
    2369    104052698 :                 break;
    2370              : 
    2371              :               default:;
    2372              :               }
    2373              :         }
    2374     44343824 :       else if (code == GIMPLE_COND)
    2375              :         {
    2376     39995121 :           enum tree_code code = gimple_cond_code (stmt);
    2377     39995121 :           tree rhs1 = gimple_cond_lhs (stmt);
    2378     39995121 :           tree rhs2 = gimple_cond_rhs (stmt);
    2379     79414075 :           if (INTEGRAL_TYPE_P (TREE_TYPE (rhs1))
    2380     47793700 :               || POINTER_TYPE_P (TREE_TYPE (rhs1)))
    2381     38569690 :             val = bit_value_binop (code, TREE_TYPE (rhs1), rhs1, rhs2);
    2382              :         }
    2383      4348703 :       else if (gimple_call_builtin_p (stmt, BUILT_IN_NORMAL))
    2384              :         {
    2385      2372303 :           tree fndecl = gimple_call_fndecl (stmt);
    2386      2372303 :           switch (DECL_FUNCTION_CODE (fndecl))
    2387              :             {
    2388       226846 :             case BUILT_IN_MALLOC:
    2389       226846 :             case BUILT_IN_REALLOC:
    2390       226846 :             case BUILT_IN_GOMP_REALLOC:
    2391       226846 :             case BUILT_IN_CALLOC:
    2392       226846 :             case BUILT_IN_STRDUP:
    2393       226846 :             case BUILT_IN_STRNDUP:
    2394       226846 :               val.lattice_val = CONSTANT;
    2395       226846 :               val.value = build_int_cst (TREE_TYPE (gimple_get_lhs (stmt)), 0);
    2396       229125 :               val.mask = ~((HOST_WIDE_INT) MALLOC_ABI_ALIGNMENT
    2397       226846 :                            / BITS_PER_UNIT - 1);
    2398       226846 :               break;
    2399              : 
    2400        53168 :             CASE_BUILT_IN_ALLOCA:
    2401        91576 :               align = (DECL_FUNCTION_CODE (fndecl) == BUILT_IN_ALLOCA
    2402        38412 :                        ? BIGGEST_ALIGNMENT
    2403        14756 :                        : TREE_INT_CST_LOW (gimple_call_arg (stmt, 1)));
    2404        53168 :               val.lattice_val = CONSTANT;
    2405        53168 :               val.value = build_int_cst (TREE_TYPE (gimple_get_lhs (stmt)), 0);
    2406        53168 :               val.mask = ~((HOST_WIDE_INT) align / BITS_PER_UNIT - 1);
    2407        53168 :               break;
    2408              : 
    2409         2484 :             case BUILT_IN_ASSUME_ALIGNED:
    2410         2484 :               val = bit_value_assume_aligned (stmt, NULL_TREE, val, false);
    2411         2484 :               ignore_return_flags = true;
    2412         2484 :               break;
    2413              : 
    2414          130 :             case BUILT_IN_ALIGNED_ALLOC:
    2415          130 :             case BUILT_IN_GOMP_ALLOC:
    2416          130 :               {
    2417          130 :                 tree align = get_constant_value (gimple_call_arg (stmt, 0));
    2418          130 :                 if (align
    2419          122 :                     && tree_fits_uhwi_p (align))
    2420              :                   {
    2421          122 :                     unsigned HOST_WIDE_INT aligni = tree_to_uhwi (align);
    2422          122 :                     if (aligni > 1
    2423              :                         /* align must be power-of-two.  */
    2424          106 :                         && (aligni & (aligni - 1)) == 0)
    2425              :                       {
    2426          106 :                         val.lattice_val = CONSTANT;
    2427          106 :                         val.value = build_int_cst (ptr_type_node, 0);
    2428          106 :                         val.mask = -aligni;
    2429              :                       }
    2430              :                   }
    2431              :                 break;
    2432              :               }
    2433              : 
    2434         5157 :             case BUILT_IN_BSWAP16:
    2435         5157 :             case BUILT_IN_BSWAP32:
    2436         5157 :             case BUILT_IN_BSWAP64:
    2437         5157 :             case BUILT_IN_BSWAP128:
    2438         5157 :               val = get_value_for_expr (gimple_call_arg (stmt, 0), true);
    2439         5157 :               if (val.lattice_val == UNDEFINED)
    2440              :                 break;
    2441         5157 :               else if (val.lattice_val == CONSTANT
    2442         2471 :                        && val.value
    2443         2471 :                        && TREE_CODE (val.value) == INTEGER_CST)
    2444              :                 {
    2445         2471 :                   tree type = TREE_TYPE (gimple_call_lhs (stmt));
    2446         2471 :                   int prec = TYPE_PRECISION (type);
    2447         2471 :                   wide_int wval = wi::to_wide (val.value);
    2448         2471 :                   val.value
    2449         2471 :                     = wide_int_to_tree (type,
    2450         4942 :                                         wi::bswap (wide_int::from (wval, prec,
    2451              :                                                                    UNSIGNED)));
    2452         2471 :                   val.mask
    2453         4942 :                     = widest_int::from (wi::bswap (wide_int::from (val.mask,
    2454              :                                                                    prec,
    2455              :                                                                    UNSIGNED)),
    2456         2471 :                                         UNSIGNED);
    2457         2471 :                   if (wi::sext (val.mask, prec) != -1)
    2458              :                     break;
    2459         2471 :                 }
    2460         2686 :               val.lattice_val = VARYING;
    2461         2686 :               val.value = NULL_TREE;
    2462         2686 :               val.mask = -1;
    2463         2686 :               break;
    2464              : 
    2465            0 :             default:;
    2466              :             }
    2467              :         }
    2468    223656264 :       if (is_gimple_call (stmt) && gimple_call_lhs (stmt))
    2469              :         {
    2470      4257299 :           tree fntype = gimple_call_fntype (stmt);
    2471      4257299 :           if (fntype)
    2472              :             {
    2473      3778711 :               tree attrs = lookup_attribute ("assume_aligned",
    2474      3778711 :                                              TYPE_ATTRIBUTES (fntype));
    2475      3778711 :               if (attrs)
    2476           42 :                 val = bit_value_assume_aligned (stmt, attrs, val, false);
    2477      3778711 :               attrs = lookup_attribute ("alloc_align",
    2478      3778711 :                                         TYPE_ATTRIBUTES (fntype));
    2479      3778711 :               if (attrs)
    2480         4892 :                 val = bit_value_assume_aligned (stmt, attrs, val, true);
    2481              :             }
    2482      4257299 :           int flags = ignore_return_flags
    2483      4257299 :                       ? 0 : gimple_call_return_flags (as_a <gcall *> (stmt));
    2484      4254815 :           if (flags & ERF_RETURNS_ARG
    2485      4254815 :               && (flags & ERF_RETURN_ARG_MASK) < gimple_call_num_args (stmt))
    2486              :             {
    2487       157292 :               val = get_value_for_expr
    2488       314584 :                          (gimple_call_arg (stmt,
    2489       157292 :                                            flags & ERF_RETURN_ARG_MASK), true);
    2490              :             }
    2491              :         }
    2492    223656264 :       is_constant = (val.lattice_val == CONSTANT);
    2493              :     }
    2494              : 
    2495    223740466 :   tree lhs = gimple_get_lhs (stmt);
    2496    223740466 :   if (flag_tree_bit_ccp
    2497    223657000 :       && lhs && TREE_CODE (lhs) == SSA_NAME && !VECTOR_TYPE_P (TREE_TYPE (lhs))
    2498    405418843 :       && ((is_constant && TREE_CODE (val.value) == INTEGER_CST)
    2499              :           || !is_constant))
    2500              :     {
    2501    181678377 :       tree lhs = gimple_get_lhs (stmt);
    2502    181678377 :       wide_int nonzero_bits = get_nonzero_bits (lhs);
    2503    181678377 :       if (nonzero_bits != -1)
    2504              :         {
    2505     51391055 :           if (!is_constant)
    2506              :             {
    2507      3443635 :               val.lattice_val = CONSTANT;
    2508      3443635 :               val.value = build_zero_cst (TREE_TYPE (lhs));
    2509      3443635 :               val.mask = extend_mask (nonzero_bits, TYPE_SIGN (TREE_TYPE (lhs)));
    2510      3443635 :               is_constant = true;
    2511              :             }
    2512              :           else
    2513              :             {
    2514     47947530 :               if (wi::bit_and_not (wi::to_wide (val.value), nonzero_bits) != 0)
    2515        62322 :                 val.value = wide_int_to_tree (TREE_TYPE (lhs),
    2516              :                                               nonzero_bits
    2517       124644 :                                               & wi::to_wide (val.value));
    2518     47947420 :               if (nonzero_bits == 0)
    2519          422 :                 val.mask = 0;
    2520              :               else
    2521     95894095 :                 val.mask = val.mask & extend_mask (nonzero_bits,
    2522     95893996 :                                                    TYPE_SIGN (TREE_TYPE (lhs)));
    2523              :             }
    2524              :         }
    2525    181678377 :     }
    2526              : 
    2527              :   /* The statement produced a nonconstant value.  */
    2528    223740466 :   if (!is_constant)
    2529              :     {
    2530              :       /* The statement produced a copy.  */
    2531     14085436 :       if (simplified && TREE_CODE (simplified) == SSA_NAME
    2532     83842408 :           && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (simplified))
    2533              :         {
    2534     11731915 :           val.lattice_val = CONSTANT;
    2535     11731915 :           val.value = simplified;
    2536     11731915 :           val.mask = -1;
    2537              :         }
    2538              :       /* The statement is VARYING.  */
    2539              :       else
    2540              :         {
    2541     60376881 :           val.lattice_val = VARYING;
    2542     60376881 :           val.value = NULL_TREE;
    2543     60376881 :           val.mask = -1;
    2544              :         }
    2545              :     }
    2546              : 
    2547    223740466 :   return val;
    2548    237775741 : }
    2549              : 
    2550              : typedef hash_table<nofree_ptr_hash<gimple> > gimple_htab;
    2551              : 
    2552              : /* Given a BUILT_IN_STACK_SAVE value SAVED_VAL, insert a clobber of VAR before
    2553              :    each matching BUILT_IN_STACK_RESTORE.  Mark visited phis in VISITED.  */
    2554              : 
    2555              : static void
    2556          507 : insert_clobber_before_stack_restore (tree saved_val, tree var,
    2557              :                                      gimple_htab **visited)
    2558              : {
    2559          507 :   gimple *stmt;
    2560          507 :   gassign *clobber_stmt;
    2561          507 :   tree clobber;
    2562          507 :   imm_use_iterator iter;
    2563          507 :   gimple_stmt_iterator i;
    2564          507 :   gimple **slot;
    2565              : 
    2566         1018 :   FOR_EACH_IMM_USE_STMT (stmt, iter, saved_val)
    2567          511 :     if (gimple_call_builtin_p (stmt, BUILT_IN_STACK_RESTORE))
    2568              :       {
    2569          498 :         clobber = build_clobber (TREE_TYPE (var), CLOBBER_STORAGE_END);
    2570          498 :         clobber_stmt = gimple_build_assign (var, clobber);
    2571              :         /* Manually update the vdef/vuse here. */
    2572          996 :         gimple_set_vuse (clobber_stmt, gimple_vuse (stmt));
    2573          498 :         gimple_set_vdef (clobber_stmt, make_ssa_name (gimple_vop (cfun)));
    2574          996 :         gimple_set_vuse (stmt, gimple_vdef (clobber_stmt));
    2575          996 :         SSA_NAME_DEF_STMT (gimple_vdef (clobber_stmt)) = clobber_stmt;
    2576          498 :         update_stmt (stmt);
    2577          498 :         i = gsi_for_stmt (stmt);
    2578          498 :         gsi_insert_before (&i, clobber_stmt, GSI_SAME_STMT);
    2579              :       }
    2580           13 :     else if (gimple_code (stmt) == GIMPLE_PHI)
    2581              :       {
    2582           12 :         if (!*visited)
    2583           12 :           *visited = new gimple_htab (10);
    2584              : 
    2585           12 :         slot = (*visited)->find_slot (stmt, INSERT);
    2586           12 :         if (*slot != NULL)
    2587            0 :           continue;
    2588              : 
    2589           12 :         *slot = stmt;
    2590           12 :         insert_clobber_before_stack_restore (gimple_phi_result (stmt), var,
    2591              :                                              visited);
    2592              :       }
    2593            1 :     else if (gimple_assign_ssa_name_copy_p (stmt))
    2594            0 :       insert_clobber_before_stack_restore (gimple_assign_lhs (stmt), var,
    2595          507 :                                            visited);
    2596          507 : }
    2597              : 
    2598              : /* Advance the iterator to the previous non-debug gimple statement in the same
    2599              :    or dominating basic block.  */
    2600              : 
    2601              : static inline void
    2602        11469 : gsi_prev_dom_bb_nondebug (gimple_stmt_iterator *i)
    2603              : {
    2604        11469 :   basic_block dom;
    2605              : 
    2606        11469 :   gsi_prev_nondebug (i);
    2607        23480 :   while (gsi_end_p (*i))
    2608              :     {
    2609          550 :       dom = get_immediate_dominator (CDI_DOMINATORS, gsi_bb (*i));
    2610          550 :       if (dom == NULL || dom == ENTRY_BLOCK_PTR_FOR_FN (cfun))
    2611              :         return;
    2612              : 
    2613         1084 :       *i = gsi_last_bb (dom);
    2614              :     }
    2615              : }
    2616              : 
    2617              : /* Find a BUILT_IN_STACK_SAVE dominating gsi_stmt (I), and insert
    2618              :    a clobber of VAR before each matching BUILT_IN_STACK_RESTORE.
    2619              : 
    2620              :    It is possible that BUILT_IN_STACK_SAVE cannot be found in a dominator when
    2621              :    a previous pass (such as DOM) duplicated it along multiple paths to a BB.
    2622              :    In that case the function gives up without inserting the clobbers.  */
    2623              : 
    2624              : static void
    2625          503 : insert_clobbers_for_var (gimple_stmt_iterator i, tree var)
    2626              : {
    2627          503 :   gimple *stmt;
    2628          503 :   tree saved_val;
    2629          503 :   gimple_htab *visited = NULL;
    2630              : 
    2631        11972 :   for (; !gsi_end_p (i); gsi_prev_dom_bb_nondebug (&i))
    2632              :     {
    2633        11964 :       stmt = gsi_stmt (i);
    2634              : 
    2635        11964 :       if (!gimple_call_builtin_p (stmt, BUILT_IN_STACK_SAVE))
    2636        11469 :         continue;
    2637              : 
    2638          495 :       saved_val = gimple_call_lhs (stmt);
    2639          495 :       if (saved_val == NULL_TREE)
    2640            0 :         continue;
    2641              : 
    2642          495 :       insert_clobber_before_stack_restore (saved_val, var, &visited);
    2643          495 :       break;
    2644              :     }
    2645              : 
    2646          503 :   delete visited;
    2647          503 : }
    2648              : 
    2649              : /* Detects a __builtin_alloca_with_align with a constant size argument.
    2650              :    If found, declares a fixed-size array and returns its address;
    2651              :    otherwise returns NULL_TREE.  */
    2652              : 
    2653              : static tree
    2654        12059 : fold_builtin_alloca_with_align (gimple *stmt)
    2655              : {
    2656        12059 :   unsigned HOST_WIDE_INT size, threshold, n_elem;
    2657        12059 :   tree lhs, arg, block, var, elem_type, array_type;
    2658              : 
    2659              :   /* Get lhs.  */
    2660        12059 :   lhs = gimple_call_lhs (stmt);
    2661        12059 :   if (lhs == NULL_TREE)
    2662              :     return NULL_TREE;
    2663              : 
    2664              :   /* Detect constant argument.  */
    2665        12059 :   arg = get_constant_value (gimple_call_arg (stmt, 0));
    2666        12059 :   if (arg == NULL_TREE
    2667         1057 :       || TREE_CODE (arg) != INTEGER_CST
    2668         1057 :       || !tree_fits_uhwi_p (arg))
    2669              :     return NULL_TREE;
    2670              : 
    2671         1057 :   size = tree_to_uhwi (arg);
    2672              : 
    2673              :   /* Heuristic: don't fold large allocas.  */
    2674         1057 :   threshold = (unsigned HOST_WIDE_INT)param_large_stack_frame;
    2675              :   /* In case the alloca is located at function entry, it has the same lifetime
    2676              :      as a declared array, so we allow a larger size.  */
    2677         1057 :   block = gimple_block (stmt);
    2678         1057 :   if (!(cfun->after_inlining
    2679          696 :         && block
    2680          668 :         && TREE_CODE (BLOCK_SUPERCONTEXT (block)) == FUNCTION_DECL))
    2681          669 :     threshold /= 10;
    2682         1057 :   if (size > threshold)
    2683              :     return NULL_TREE;
    2684              : 
    2685              :   /* We have to be able to move points-to info.  We used to assert
    2686              :      that we can but IPA PTA might end up with two UIDs here
    2687              :      as it might need to handle more than one instance being
    2688              :      live at the same time.  Instead of trying to detect this case
    2689              :      (using the first UID would be OK) just give up for now.  */
    2690          509 :   struct ptr_info_def *pi = SSA_NAME_PTR_INFO (lhs);
    2691          509 :   unsigned uid = 0;
    2692          509 :   if (pi != NULL
    2693          337 :       && !pi->pt.anything
    2694          652 :       && !pt_solution_singleton_or_null_p (&pi->pt, &uid))
    2695              :     return NULL_TREE;
    2696              : 
    2697              :   /* Declare array.  */
    2698          503 :   elem_type = build_nonstandard_integer_type (BITS_PER_UNIT, 1);
    2699          503 :   n_elem = size * 8 / BITS_PER_UNIT;
    2700          503 :   array_type = build_array_type_nelts (elem_type, n_elem);
    2701              : 
    2702          503 :   if (tree ssa_name = SSA_NAME_IDENTIFIER (lhs))
    2703              :     {
    2704              :       /* Give the temporary a name derived from the name of the VLA
    2705              :          declaration so it can be referenced in diagnostics.  */
    2706          464 :       const char *name = IDENTIFIER_POINTER (ssa_name);
    2707          464 :       var = create_tmp_var (array_type, name);
    2708              :     }
    2709              :   else
    2710           39 :     var = create_tmp_var (array_type);
    2711              : 
    2712          503 :   if (gimple *lhsdef = SSA_NAME_DEF_STMT (lhs))
    2713              :     {
    2714              :       /* Set the temporary's location to that of the VLA declaration
    2715              :          so it can be pointed to in diagnostics.  */
    2716          503 :       location_t loc = gimple_location (lhsdef);
    2717          503 :       DECL_SOURCE_LOCATION (var) = loc;
    2718              :     }
    2719              : 
    2720          503 :   SET_DECL_ALIGN (var, TREE_INT_CST_LOW (gimple_call_arg (stmt, 1)));
    2721          503 :   if (uid != 0)
    2722          137 :     SET_DECL_PT_UID (var, uid);
    2723              : 
    2724              :   /* Fold alloca to the address of the array.  */
    2725          503 :   return fold_convert (TREE_TYPE (lhs), build_fold_addr_expr (var));
    2726              : }
    2727              : 
    2728              : /* Fold the stmt at *GSI with CCP specific information that propagating
    2729              :    and regular folding does not catch.  */
    2730              : 
    2731              : bool
    2732    338132378 : ccp_folder::fold_stmt (gimple_stmt_iterator *gsi)
    2733              : {
    2734    338132378 :   gimple *stmt = gsi_stmt (*gsi);
    2735              : 
    2736    338132378 :   switch (gimple_code (stmt))
    2737              :     {
    2738     18677671 :     case GIMPLE_COND:
    2739     18677671 :       {
    2740     18677671 :         gcond *cond_stmt = as_a <gcond *> (stmt);
    2741     18677671 :         ccp_prop_value_t val;
    2742              :         /* Statement evaluation will handle type mismatches in constants
    2743              :            more gracefully than the final propagation.  This allows us to
    2744              :            fold more conditionals here.  */
    2745     18677671 :         val = evaluate_stmt (stmt);
    2746     18677671 :         if (val.lattice_val != CONSTANT
    2747     18677671 :             || val.mask != 0)
    2748     18230533 :           return false;
    2749              : 
    2750       447138 :         if (dump_file)
    2751              :           {
    2752           24 :             fprintf (dump_file, "Folding predicate ");
    2753           24 :             print_gimple_expr (dump_file, stmt, 0);
    2754           24 :             fprintf (dump_file, " to ");
    2755           24 :             print_generic_expr (dump_file, val.value);
    2756           24 :             fprintf (dump_file, "\n");
    2757              :           }
    2758              : 
    2759       447138 :         if (integer_zerop (val.value))
    2760       346467 :           gimple_cond_make_false (cond_stmt);
    2761              :         else
    2762       100671 :           gimple_cond_make_true (cond_stmt);
    2763              : 
    2764              :         return true;
    2765     18677671 :       }
    2766              : 
    2767     23281154 :     case GIMPLE_CALL:
    2768     23281154 :       {
    2769     23281154 :         tree lhs = gimple_call_lhs (stmt);
    2770     23281154 :         int flags = gimple_call_flags (stmt);
    2771     23281154 :         tree val;
    2772     23281154 :         tree argt;
    2773     23281154 :         bool changed = false;
    2774     23281154 :         unsigned i;
    2775              : 
    2776              :         /* If the call was folded into a constant make sure it goes
    2777              :            away even if we cannot propagate into all uses because of
    2778              :            type issues.  */
    2779     23281154 :         if (lhs
    2780      8875408 :             && TREE_CODE (lhs) == SSA_NAME
    2781      7448442 :             && (val = get_constant_value (lhs))
    2782              :             /* Don't optimize away calls that have side-effects.  */
    2783           15 :             && (flags & (ECF_CONST|ECF_PURE)) != 0
    2784     23281154 :             && (flags & ECF_LOOPING_CONST_OR_PURE) == 0)
    2785              :           {
    2786            0 :             tree new_rhs = unshare_expr (val);
    2787            0 :             if (!useless_type_conversion_p (TREE_TYPE (lhs),
    2788            0 :                                             TREE_TYPE (new_rhs)))
    2789            0 :               new_rhs = fold_convert (TREE_TYPE (lhs), new_rhs);
    2790            0 :             gimplify_and_update_call_from_tree (gsi, new_rhs);
    2791            0 :             return true;
    2792              :           }
    2793              : 
    2794              :         /* Internal calls provide no argument types, so the extra laxity
    2795              :            for normal calls does not apply.  */
    2796     23281154 :         if (gimple_call_internal_p (stmt))
    2797              :           return false;
    2798              : 
    2799              :         /* The heuristic of fold_builtin_alloca_with_align differs before and
    2800              :            after inlining, so we don't require the arg to be changed into a
    2801              :            constant for folding, but just to be constant.  */
    2802     22577277 :         if (gimple_call_builtin_p (stmt, BUILT_IN_ALLOCA_WITH_ALIGN)
    2803     22577277 :             || gimple_call_builtin_p (stmt, BUILT_IN_ALLOCA_WITH_ALIGN_AND_MAX))
    2804              :           {
    2805        12059 :             tree new_rhs = fold_builtin_alloca_with_align (stmt);
    2806        12059 :             if (new_rhs)
    2807              :               {
    2808          503 :                 gimplify_and_update_call_from_tree (gsi, new_rhs);
     2809          503 :                 tree var = TREE_OPERAND (TREE_OPERAND (new_rhs, 0), 0);
    2810          503 :                 insert_clobbers_for_var (*gsi, var);
    2811          503 :                 return true;
    2812              :               }
    2813              :           }
    2814              : 
     2815              :         /* If there's no extra info from an assume_aligned call,
     2816              :            drop it so it doesn't act as an otherwise useless
     2817              :            dataflow barrier.  */
    2818     22576774 :         if (gimple_call_builtin_p (stmt, BUILT_IN_ASSUME_ALIGNED))
    2819              :           {
    2820         2484 :             tree ptr = gimple_call_arg (stmt, 0);
    2821         2484 :             ccp_prop_value_t ptrval = get_value_for_expr (ptr, true);
    2822         2484 :             if (ptrval.lattice_val == CONSTANT
    2823          212 :                 && TREE_CODE (ptrval.value) == INTEGER_CST
    2824         2696 :                 && ptrval.mask != 0)
    2825              :               {
    2826          212 :                 ccp_prop_value_t val
    2827          212 :                   = bit_value_assume_aligned (stmt, NULL_TREE, ptrval, false);
    2828          212 :                 unsigned int ptralign = least_bit_hwi (ptrval.mask.to_uhwi ());
    2829          212 :                 unsigned int align = least_bit_hwi (val.mask.to_uhwi ());
    2830          212 :                 if (ptralign == align
    2831          212 :                     && ((TREE_INT_CST_LOW (ptrval.value) & (align - 1))
    2832          200 :                         == (TREE_INT_CST_LOW (val.value) & (align - 1))))
    2833              :                   {
    2834          200 :                     replace_call_with_value (gsi, ptr);
    2835          200 :                     return true;
    2836              :                   }
    2837          212 :               }
    2838         2484 :           }
    2839              : 
    2840              :         /* Propagate into the call arguments.  Compared to replace_uses_in
    2841              :            this can use the argument slot types for type verification
     2842              :            instead of the current argument type.  We can also safely
     2843              :            drop qualifiers here, as we are dealing with constants anyway.  */
    2844     22576574 :         argt = TYPE_ARG_TYPES (gimple_call_fntype (stmt));
    2845     62597315 :         for (i = 0; i < gimple_call_num_args (stmt) && argt;
    2846     40020741 :              ++i, argt = TREE_CHAIN (argt))
    2847              :           {
    2848     40020741 :             tree arg = gimple_call_arg (stmt, i);
    2849     40020741 :             if (TREE_CODE (arg) == SSA_NAME
    2850     15666643 :                 && (val = get_constant_value (arg))
    2851     40020766 :                 && useless_type_conversion_p
    2852           25 :                      (TYPE_MAIN_VARIANT (TREE_VALUE (argt)),
    2853           25 :                       TYPE_MAIN_VARIANT (TREE_TYPE (val))))
    2854              :               {
    2855           25 :                 gimple_call_set_arg (stmt, i, unshare_expr (val));
    2856           25 :                 changed = true;
    2857              :               }
    2858              :           }
    2859              : 
    2860              :         return changed;
    2861              :       }
    2862              : 
    2863    104588510 :     case GIMPLE_ASSIGN:
    2864    104588510 :       {
    2865    104588510 :         tree lhs = gimple_assign_lhs (stmt);
    2866    104588510 :         tree val;
    2867              : 
     2868              :         /* If we have a load that turned out to be constant, replace
     2869              :            it, as we cannot propagate into all uses in all cases.  */
    2870    104588510 :         if (gimple_assign_single_p (stmt)
    2871     69626475 :             && TREE_CODE (lhs) == SSA_NAME
    2872    137077342 :             && (val = get_constant_value (lhs)))
    2873              :           {
    2874         5139 :             tree rhs = unshare_expr (val);
    2875         5139 :             if (!useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
    2876            0 :               rhs = fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), rhs);
    2877         5139 :             gimple_assign_set_rhs_from_tree (gsi, rhs);
    2878         5139 :             return true;
    2879              :           }
    2880              : 
    2881              :         return false;
    2882              :       }
    2883              : 
    2884              :     default:
    2885              :       return false;
    2886              :     }
    2887              : }
    2888              : 
    2889              : /* Visit the assignment statement STMT.  Set the value of its LHS to the
    2890              :    value computed by the RHS and store LHS in *OUTPUT_P.  If STMT
    2891              :    creates virtual definitions, set the value of each new name to that
    2892              :    of the RHS (if we can derive a constant out of the RHS).
    2893              :    Value-returning call statements also perform an assignment, and
    2894              :    are handled here.  */
    2895              : 
    2896              : static enum ssa_prop_result
    2897    196806343 : visit_assignment (gimple *stmt, tree *output_p)
    2898              : {
    2899    196806343 :   ccp_prop_value_t val;
    2900    196806343 :   enum ssa_prop_result retval = SSA_PROP_NOT_INTERESTING;
    2901              : 
    2902    196806343 :   tree lhs = gimple_get_lhs (stmt);
    2903    196806343 :   if (TREE_CODE (lhs) == SSA_NAME)
    2904              :     {
    2905              :       /* Evaluate the statement, which could be
    2906              :          either a GIMPLE_ASSIGN or a GIMPLE_CALL.  */
    2907    195465351 :       val = evaluate_stmt (stmt);
    2908              : 
    2909              :       /* If STMT is an assignment to an SSA_NAME, we only have one
    2910              :          value to set.  */
    2911    195465351 :       if (set_lattice_value (lhs, &val))
    2912              :         {
    2913    181869966 :           *output_p = lhs;
    2914    181869966 :           if (val.lattice_val == VARYING)
    2915              :             retval = SSA_PROP_VARYING;
    2916              :           else
    2917    124498582 :             retval = SSA_PROP_INTERESTING;
    2918              :         }
    2919              :     }
    2920              : 
    2921    196806343 :   return retval;
    2922    196806343 : }
    2923              : 
    2924              : 
    2925              : /* Visit the conditional statement STMT.  Return SSA_PROP_INTERESTING
    2926              :    if it can determine which edge will be taken.  Otherwise, return
    2927              :    SSA_PROP_VARYING.  */
    2928              : 
    2929              : static enum ssa_prop_result
    2930     23632719 : visit_cond_stmt (gimple *stmt, edge *taken_edge_p)
    2931              : {
    2932     23632719 :   ccp_prop_value_t val;
    2933     23632719 :   basic_block block;
    2934              : 
    2935     23632719 :   block = gimple_bb (stmt);
    2936     23632719 :   val = evaluate_stmt (stmt);
    2937     23632719 :   if (val.lattice_val != CONSTANT
    2938     23632719 :       || val.mask != 0)
    2939     18199453 :     return SSA_PROP_VARYING;
    2940              : 
    2941              :   /* Find which edge out of the conditional block will be taken and add it
    2942              :      to the worklist.  If no single edge can be determined statically,
    2943              :      return SSA_PROP_VARYING to feed all the outgoing edges to the
    2944              :      propagation engine.  */
    2945      5433266 :   *taken_edge_p = find_taken_edge (block, val.value);
    2946      5433266 :   if (*taken_edge_p)
    2947              :     return SSA_PROP_INTERESTING;
    2948              :   else
    2949              :     return SSA_PROP_VARYING;
    2950     23632719 : }
    2951              : 
    2952              : 
    2953              : /* Evaluate statement STMT.  If the statement produces an output value and
    2954              :    its evaluation changes the lattice value of its output, return
    2955              :    SSA_PROP_INTERESTING and set *OUTPUT_P to the SSA_NAME holding the
    2956              :    output value.
    2957              : 
    2958              :    If STMT is a conditional branch and we can determine its truth
    2959              :    value, set *TAKEN_EDGE_P accordingly.  If STMT produces a varying
    2960              :    value, return SSA_PROP_VARYING.  */
    2961              : 
    2962              : enum ssa_prop_result
    2963    232280686 : ccp_propagate::visit_stmt (gimple *stmt, edge *taken_edge_p, tree *output_p)
    2964              : {
    2965    232280686 :   tree def;
    2966    232280686 :   ssa_op_iter iter;
    2967              : 
    2968    232280686 :   if (dump_file && (dump_flags & TDF_DETAILS))
    2969              :     {
    2970          100 :       fprintf (dump_file, "\nVisiting statement:\n");
    2971          100 :       print_gimple_stmt (dump_file, stmt, 0, dump_flags);
    2972              :     }
    2973              : 
    2974    232280686 :   switch (gimple_code (stmt))
    2975              :     {
    2976    191655138 :       case GIMPLE_ASSIGN:
    2977              :         /* If the statement is an assignment that produces a single
    2978              :            output value, evaluate its RHS to see if the lattice value of
    2979              :            its output has changed.  */
    2980    191655138 :         return visit_assignment (stmt, output_p);
    2981              : 
    2982     10450137 :       case GIMPLE_CALL:
    2983              :         /* A value-returning call also performs an assignment.  */
    2984     10450137 :         if (gimple_call_lhs (stmt) != NULL_TREE)
    2985      5151205 :           return visit_assignment (stmt, output_p);
    2986              :         break;
    2987              : 
    2988     23632719 :       case GIMPLE_COND:
    2989     23632719 :       case GIMPLE_SWITCH:
    2990              :         /* If STMT is a conditional branch, see if we can determine
     2991              :            which branch will be taken.  */
    2992              :         /* FIXME.  It appears that we should be able to optimize
    2993              :            computed GOTOs here as well.  */
    2994     23632719 :         return visit_cond_stmt (stmt, taken_edge_p);
    2995              : 
    2996              :       default:
    2997              :         break;
    2998              :     }
    2999              : 
    3000              :   /* Any other kind of statement is not interesting for constant
    3001              :      propagation and, therefore, not worth simulating.  */
    3002     11841624 :   if (dump_file && (dump_flags & TDF_DETAILS))
    3003           42 :     fprintf (dump_file, "No interesting values produced.  Marked VARYING.\n");
    3004              : 
    3005              :   /* Definitions made by statements other than assignments to
    3006              :      SSA_NAMEs represent unknown modifications to their outputs.
    3007              :      Mark them VARYING.  */
    3008     16477274 :   FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
    3009      4635650 :     set_value_varying (def);
    3010              : 
    3011              :   return SSA_PROP_VARYING;
    3012              : }
    3013              : 
    3014              : 
    3015              : /* Main entry point for SSA Conditional Constant Propagation.  If NONZERO_P,
    3016              :    record nonzero bits.  */
    3017              : 
    3018              : static unsigned int
    3019      5537665 : do_ssa_ccp (bool nonzero_p)
    3020              : {
    3021      5537665 :   unsigned int todo = 0;
    3022      5537665 :   calculate_dominance_info (CDI_DOMINATORS);
    3023              : 
    3024      5537665 :   ccp_initialize ();
    3025      5537665 :   class ccp_propagate ccp_propagate;
    3026      5537665 :   ccp_propagate.ssa_propagate ();
    3027     10970215 :   if (ccp_finalize (nonzero_p || flag_ipa_bit_cp))
    3028              :     {
    3029      1655417 :       todo = TODO_cleanup_cfg;
    3030              : 
    3031              :       /* ccp_finalize does not preserve loop-closed ssa.  */
    3032      1655417 :       loops_state_clear (LOOP_CLOSED_SSA);
    3033              :     }
    3034              : 
    3035      5537665 :   free_dominance_info (CDI_DOMINATORS);
    3036      5537665 :   return todo;
    3037      5537665 : }
    3038              : 
    3039              : 
    3040              : namespace {
    3041              : 
    3042              : const pass_data pass_data_ccp =
    3043              : {
    3044              :   GIMPLE_PASS, /* type */
    3045              :   "ccp", /* name */
    3046              :   OPTGROUP_NONE, /* optinfo_flags */
    3047              :   TV_TREE_CCP, /* tv_id */
    3048              :   ( PROP_cfg | PROP_ssa ), /* properties_required */
    3049              :   0, /* properties_provided */
    3050              :   0, /* properties_destroyed */
    3051              :   0, /* todo_flags_start */
    3052              :   TODO_update_address_taken, /* todo_flags_finish */
    3053              : };
    3054              : 
    3055              : class pass_ccp : public gimple_opt_pass
    3056              : {
    3057              : public:
    3058      1428610 :   pass_ccp (gcc::context *ctxt)
    3059      2857220 :     : gimple_opt_pass (pass_data_ccp, ctxt), nonzero_p (false)
    3060              :   {}
    3061              : 
    3062              :   /* opt_pass methods: */
    3063      1142888 :   opt_pass * clone () final override { return new pass_ccp (m_ctxt); }
    3064      1428610 :   void set_pass_param (unsigned int n, bool param) final override
    3065              :     {
    3066      1428610 :       gcc_assert (n == 0);
    3067      1428610 :       nonzero_p = param;
    3068      1428610 :     }
    3069      5539535 :   bool gate (function *) final override { return flag_tree_ccp != 0; }
    3070      5537665 :   unsigned int execute (function *) final override
    3071              :   {
    3072      5537665 :     return do_ssa_ccp (nonzero_p);
    3073              :   }
    3074              : 
    3075              :  private:
    3076              :   /* Determines whether the pass instance records nonzero bits.  */
    3077              :   bool nonzero_p;
    3078              : }; // class pass_ccp
    3079              : 
    3080              : } // anon namespace
    3081              : 
    3082              : gimple_opt_pass *
    3083       285722 : make_pass_ccp (gcc::context *ctxt)
    3084              : {
    3085       285722 :   return new pass_ccp (ctxt);
    3086              : }
    3087              : 
    3088              : /* A simple pass that emits some warnings post IPA.  */
    3089              : 
    3090              : namespace {
    3091              : 
    3092              : const pass_data pass_data_post_ipa_warn =
    3093              : {
    3094              :   GIMPLE_PASS, /* type */
    3095              :   "post_ipa_warn", /* name */
    3096              :   OPTGROUP_NONE, /* optinfo_flags */
    3097              :   TV_NONE, /* tv_id */
    3098              :   ( PROP_cfg | PROP_ssa ), /* properties_required */
    3099              :   0, /* properties_provided */
    3100              :   0, /* properties_destroyed */
    3101              :   0, /* todo_flags_start */
    3102              :   0, /* todo_flags_finish */
    3103              : };
    3104              : 
    3105              : class pass_post_ipa_warn : public gimple_opt_pass
    3106              : {
    3107              : public:
    3108       571444 :   pass_post_ipa_warn (gcc::context *ctxt)
    3109      1142888 :     : gimple_opt_pass (pass_data_post_ipa_warn, ctxt)
    3110              :   {}
    3111              : 
    3112              :   /* opt_pass methods: */
    3113       285722 :   opt_pass * clone () final override { return new pass_post_ipa_warn (m_ctxt); }
    3114      1044139 :   bool gate (function *) final override { return warn_nonnull != 0; }
    3115              :   unsigned int execute (function *) final override;
    3116              : 
     3117              : }; // class pass_post_ipa_warn
    3118              : 
    3119              : unsigned int
    3120       112264 : pass_post_ipa_warn::execute (function *fun)
    3121              : {
    3122       112264 :   basic_block bb;
    3123       112264 :   gimple_ranger *ranger = NULL;
    3124              : 
    3125      1105173 :   FOR_EACH_BB_FN (bb, fun)
    3126              :     {
    3127       992909 :       gimple_stmt_iterator gsi;
    3128     10670333 :       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
    3129              :         {
    3130      8684515 :           gimple *stmt = gsi_stmt (gsi);
    3131      8684515 :           if (!is_gimple_call (stmt) || warning_suppressed_p (stmt, OPT_Wnonnull))
    3132      8137940 :             continue;
    3133              : 
    3134       546575 :           tree fntype = gimple_call_fntype (stmt);
    3135       546575 :           if (!fntype)
    3136         5318 :             continue;
    3137       541257 :           bitmap nonnullargs = get_nonnull_args (fntype);
    3138              : 
    3139       541257 :           tree fndecl = gimple_call_fndecl (stmt);
    3140      1049740 :           const bool closure = fndecl && DECL_LAMBDA_FUNCTION_P (fndecl);
    3141              : 
    3142       793633 :           for (unsigned i = nonnullargs ? 0 : ~0U;
    3143       793633 :                i < gimple_call_num_args (stmt); i++)
    3144              :             {
    3145       252376 :               tree arg = gimple_call_arg (stmt, i);
    3146       252376 :               if (TREE_CODE (TREE_TYPE (arg)) != POINTER_TYPE)
    3147       252271 :                 continue;
    3148       157954 :               if (!integer_zerop (arg))
    3149       149789 :                 continue;
    3150         8165 :               if (i == 0 && closure)
    3151              :                 /* Avoid warning for the first argument to lambda functions.  */
    3152           18 :                 continue;
    3153         8147 :               if (!bitmap_empty_p (nonnullargs)
    3154         8147 :                   && !bitmap_bit_p (nonnullargs, i))
    3155         8027 :                 continue;
    3156              : 
    3157              :               /* In C++ non-static member functions argument 0 refers
    3158              :                  to the implicit this pointer.  Use the same one-based
    3159              :                  numbering for ordinary arguments.  */
    3160          120 :               unsigned argno = TREE_CODE (fntype) == METHOD_TYPE ? i : i + 1;
    3161          120 :               location_t loc = (EXPR_HAS_LOCATION (arg)
    3162            0 :                                 ? EXPR_LOCATION (arg)
    3163          120 :                                 : gimple_location (stmt));
    3164          120 :               auto_diagnostic_group d;
    3165          120 :               if (argno == 0)
    3166              :                 {
    3167           21 :                   if (warning_at (loc, OPT_Wnonnull,
    3168              :                                   "%qs pointer is null", "this")
    3169           15 :                       && fndecl)
    3170            9 :                     inform (DECL_SOURCE_LOCATION (fndecl),
    3171              :                             "in a call to non-static member function %qD",
    3172              :                             fndecl);
    3173           15 :                   continue;
    3174              :                 }
    3175              : 
    3176          105 :               if (!warning_at (loc, OPT_Wnonnull,
    3177              :                                "argument %u null where non-null "
    3178              :                                "expected", argno))
    3179            0 :                 continue;
    3180              : 
    3181          105 :               tree fndecl = gimple_call_fndecl (stmt);
    3182          105 :               if (fndecl && DECL_IS_UNDECLARED_BUILTIN (fndecl))
    3183           53 :                 inform (loc, "in a call to built-in function %qD",
    3184              :                         fndecl);
    3185           52 :               else if (fndecl)
    3186           52 :                 inform (DECL_SOURCE_LOCATION (fndecl),
    3187              :                         "in a call to function %qD declared %qs",
    3188              :                         fndecl, "nonnull");
    3189          120 :             }
    3190       541257 :           BITMAP_FREE (nonnullargs);
    3191              : 
    3192       541257 :           for (tree attrs = TYPE_ATTRIBUTES (fntype);
    3193       578378 :                (attrs = lookup_attribute ("nonnull_if_nonzero", attrs));
    3194        37121 :                attrs = TREE_CHAIN (attrs))
    3195              :             {
    3196        37121 :               tree args = TREE_VALUE (attrs);
    3197        37121 :               unsigned int idx = TREE_INT_CST_LOW (TREE_VALUE (args)) - 1;
    3198        37121 :               unsigned int idx2
    3199        37121 :                 = TREE_INT_CST_LOW (TREE_VALUE (TREE_CHAIN (args))) - 1;
    3200        37121 :               unsigned int idx3 = idx2;
    3201        37121 :               if (tree chain2 = TREE_CHAIN (TREE_CHAIN (args)))
    3202          885 :                 idx3 = TREE_INT_CST_LOW (TREE_VALUE (chain2)) - 1;
    3203        37121 :               if (idx < gimple_call_num_args (stmt)
    3204        37120 :                   && idx2 < gimple_call_num_args (stmt)
    3205        74240 :                   && idx3 < gimple_call_num_args (stmt))
    3206              :                 {
    3207        37119 :                   tree arg = gimple_call_arg (stmt, idx);
    3208        37119 :                   tree arg2 = gimple_call_arg (stmt, idx2);
    3209        37119 :                   tree arg3 = gimple_call_arg (stmt, idx3);
    3210        37119 :                   if (TREE_CODE (TREE_TYPE (arg)) != POINTER_TYPE
    3211        37030 :                       || !integer_zerop (arg)
    3212          290 :                       || !INTEGRAL_TYPE_P (TREE_TYPE (arg2))
    3213          290 :                       || !INTEGRAL_TYPE_P (TREE_TYPE (arg3))
    3214          290 :                       || integer_zerop (arg2)
    3215          214 :                       || integer_zerop (arg3)
    3216        37305 :                       || ((TREE_CODE (fntype) == METHOD_TYPE || closure)
    3217            0 :                           && (idx == 0 || idx2 == 0 || idx3 == 0)))
    3218        36989 :                     continue;
    3219          186 :                   if (!integer_nonzerop (arg2)
    3220          186 :                       && !tree_expr_nonzero_p (arg2))
    3221              :                     {
    3222           98 :                       if (TREE_CODE (arg2) != SSA_NAME || optimize < 2)
    3223           56 :                         continue;
    3224           98 :                       if (!ranger)
    3225           14 :                         ranger = enable_ranger (cfun);
    3226              : 
    3227           98 :                       int_range_max vr;
    3228          196 :                       get_range_query (cfun)->range_of_expr (vr, arg2, stmt);
    3229           98 :                       if (range_includes_zero_p (vr))
    3230           56 :                         continue;
    3231           98 :                     }
    3232          130 :                   if (idx2 != idx3
    3233           45 :                       && !integer_nonzerop (arg3)
    3234          153 :                       && !tree_expr_nonzero_p (arg3))
    3235              :                     {
    3236           20 :                       if (TREE_CODE (arg3) != SSA_NAME || optimize < 2)
    3237            0 :                         continue;
    3238           20 :                       if (!ranger)
    3239            0 :                         ranger = enable_ranger (cfun);
    3240              : 
    3241           20 :                       int_range_max vr;
    3242           40 :                       get_range_query (cfun)->range_of_expr (vr, arg3, stmt);
    3243           20 :                       if (range_includes_zero_p (vr))
    3244            0 :                         continue;
    3245           20 :                     }
    3246          130 :                   unsigned argno = idx + 1;
    3247          130 :                   unsigned argno2 = idx2 + 1;
    3248          130 :                   unsigned argno3 = idx3 + 1;
    3249          130 :                   location_t loc = (EXPR_HAS_LOCATION (arg)
    3250            0 :                                     ? EXPR_LOCATION (arg)
    3251          130 :                                     : gimple_location (stmt));
    3252          130 :                   auto_diagnostic_group d;
    3253              : 
    3254          130 :                   if (idx2 != idx3)
    3255              :                     {
    3256           45 :                       if (!warning_at (loc, OPT_Wnonnull,
    3257              :                                        "argument %u null where non-null "
    3258              :                                        "expected because arguments %u and %u "
    3259              :                                        "are nonzero", argno, argno2, argno3))
    3260            0 :                         continue;
    3261              :                     }
    3262           85 :                   else if (!warning_at (loc, OPT_Wnonnull,
    3263              :                                         "argument %u null where non-null "
    3264              :                                         "expected because argument %u is "
    3265              :                                         "nonzero", argno, argno2))
    3266            0 :                     continue;
    3267              : 
    3268          130 :                   tree fndecl = gimple_call_fndecl (stmt);
    3269          130 :                   if (fndecl && DECL_IS_UNDECLARED_BUILTIN (fndecl))
    3270           37 :                     inform (loc, "in a call to built-in function %qD",
    3271              :                             fndecl);
    3272           93 :                   else if (fndecl)
    3273           93 :                     inform (DECL_SOURCE_LOCATION (fndecl),
    3274              :                             "in a call to function %qD declared %qs",
    3275              :                             fndecl, "nonnull_if_nonzero");
    3276          130 :                 }
    3277              :             }
    3278              :         }
    3279              :     }
    3280       112264 :   if (ranger)
    3281           14 :     disable_ranger (cfun);
    3282       112264 :   return 0;
    3283              : }
    3284              : 
    3285              : } // anon namespace
    3286              : 
    3287              : gimple_opt_pass *
    3288       285722 : make_pass_post_ipa_warn (gcc::context *ctxt)
    3289              : {
    3290       285722 :   return new pass_post_ipa_warn (ctxt);
    3291              : }
        

Generated by: LCOV version 2.4-beta

The LCOV profile is generated on an x86_64 machine using the following configure options: configure --disable-bootstrap --enable-coverage=opt --enable-languages=c,c++,fortran,go,jit,lto,rust,m2 --enable-host-shared. The GCC test suite is run with the built compiler.