LCOV - code coverage report
Current view: top level - gcc - reorg.cc (source / functions)
Test:         gcc.info
Test Date:    2026-02-28 14:20:25

              Coverage   Total   Hit
Lines:           1.0 %    1548    16
Functions:       9.8 %      51     5

Legend: Lines: hit | not hit

            Line data    Source code
       1              : /* Perform instruction reorganizations for delay slot filling.
       2              :    Copyright (C) 1992-2026 Free Software Foundation, Inc.
       3              :    Contributed by Richard Kenner (kenner@vlsi1.ultra.nyu.edu).
       4              :    Hacked by Michael Tiemann (tiemann@cygnus.com).
       5              : 
       6              : This file is part of GCC.
       7              : 
       8              : GCC is free software; you can redistribute it and/or modify it under
       9              : the terms of the GNU General Public License as published by the Free
      10              : Software Foundation; either version 3, or (at your option) any later
      11              : version.
      12              : 
      13              : GCC is distributed in the hope that it will be useful, but WITHOUT ANY
      14              : WARRANTY; without even the implied warranty of MERCHANTABILITY or
      15              : FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
      16              : for more details.
      17              : 
      18              : You should have received a copy of the GNU General Public License
      19              : along with GCC; see the file COPYING3.  If not see
      20              : <http://www.gnu.org/licenses/>.  */
      21              : 
      22              : /* Instruction reorganization pass.
      23              : 
      24              :    This pass runs after register allocation and final jump
      25              :    optimization.  It should be the last pass to run before peephole.
      26              :    It serves primarily to fill delay slots of insns, typically branch
      27              :    and call insns.  Other insns typically involve more complicated
      28              :    interactions of data dependencies and resource constraints, and
      29              :    are better handled by scheduling before register allocation (by the
      30              :    function `schedule_insns').
      31              : 
      32              :    The Branch Penalty is the number of extra cycles that are needed to
      33              :    execute a branch insn.  On an ideal machine, branches take a single
      34              :    cycle, and the Branch Penalty is 0.  Several RISC machines approach
      35              :    branch delays differently:
      36              : 
      37              :    The MIPS has a single branch delay slot.  Most insns
      38              :    (except other branches) can be used to fill this slot.  When the
      39              :    slot is filled, two insns execute in two cycles, reducing the
      40              :    branch penalty to zero.
      41              : 
      42              :    The SPARC always has a branch delay slot, but its effects can be
       43              :    annulled when the branch is not taken.  This means that, failing to
       44              :    find other sources of insns, we can hoist an insn from the branch
      45              :    target that would only be safe to execute knowing that the branch
      46              :    is taken.
      47              : 
      48              :    The HP-PA always has a branch delay slot.  For unconditional branches
      49              :    its effects can be annulled when the branch is taken.  The effects
      50              :    of the delay slot in a conditional branch can be nullified for forward
      51              :    taken branches, or for untaken backward branches.  This means
      52              :    we can hoist insns from the fall-through path for forward branches or
      53              :    steal insns from the target of backward branches.
      54              : 
      55              :    The TMS320C3x and C4x have three branch delay slots.  When the three
      56              :    slots are filled, the branch penalty is zero.  Most insns can fill the
      57              :    delay slots except jump insns.
      58              : 
      59              :    Three techniques for filling delay slots have been implemented so far:
      60              : 
      61              :    (1) `fill_simple_delay_slots' is the simplest, most efficient way
      62              :    to fill delay slots.  This pass first looks for insns which come
      63              :    from before the branch and which are safe to execute after the
      64              :    branch.  Then it searches after the insn requiring delay slots or,
      65              :    in the case of a branch, for insns that are after the point at
      66              :    which the branch merges into the fallthrough code, if such a point
      67              :    exists.  When such insns are found, the branch penalty decreases
      68              :    and no code expansion takes place.
      69              : 
      70              :    (2) `fill_eager_delay_slots' is more complicated: it is used for
      71              :    scheduling conditional jumps, or for scheduling jumps which cannot
      72              :    be filled using (1).  A machine need not have annulled jumps to use
      73              :    this strategy, but it helps (by keeping more options open).
      74              :    `fill_eager_delay_slots' tries to guess the direction the branch
      75              :    will go; if it guesses right 100% of the time, it can reduce the
      76              :    branch penalty as much as `fill_simple_delay_slots' does.  If it
      77              :    guesses wrong 100% of the time, it might as well schedule nops.  When
      78              :    `fill_eager_delay_slots' takes insns from the fall-through path of
      79              :    the jump, usually there is no code expansion; when it takes insns
      80              :    from the branch target, there is code expansion if it is not the
      81              :    only way to reach that target.
      82              : 
      83              :    (3) `relax_delay_slots' uses a set of rules to simplify code that
      84              :    has been reorganized by (1) and (2).  It finds cases where
      85              :    conditional test can be eliminated, jumps can be threaded, extra
      86              :    insns can be eliminated, etc.  It is the job of (1) and (2) to do a
      87              :    good job of scheduling locally; `relax_delay_slots' takes care of
      88              :    making the various individual schedules work well together.  It is
      89              :    especially tuned to handle the control flow interactions of branch
      90              :    insns.  It does nothing for insns with delay slots that do not
      91              :    branch.  */
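The core idea behind technique (1) can be sketched with a toy model. This is not GCC's implementation (the `Insn` struct and `fill_simple_slot` name are invented for illustration): scan backward from a branch for an insn that does not write anything the branch condition reads, and move it into the delay slot; if nothing qualifies, a nop must be scheduled and the branch penalty remains. A real filler must also check memory, volatility, and intervening dependencies, which this sketch omits.

```cpp
#include <string>
#include <vector>

/* Toy instruction: an opcode plus the registers it writes and reads.  */
struct Insn
{
  std::string op;
  std::vector<std::string> writes, reads;
};

/* Hypothetical analog of `fill_simple_delay_slots': scan backward from the
   branch for an insn that is safe to execute after it, i.e. one that does
   not write a register the branch condition reads.  Return the scheduled
   sequence: body, then the branch, then its delay-slot insn (a nop if no
   filler was found).  */
std::vector<Insn>
fill_simple_slot (std::vector<Insn> body, const Insn &branch)
{
  for (int i = (int) body.size () - 1; i >= 0; --i)
    {
      bool safe = true;
      for (const std::string &w : body[i].writes)
        for (const std::string &r : branch.reads)
          if (w == r)
            safe = false;
      if (safe)
        {
          Insn slot = body[i];
          body.erase (body.begin () + i);
          body.push_back (branch);
          body.push_back (slot);   /* Executes in the delay slot.  */
          return body;
        }
    }
  body.push_back (branch);
  body.push_back ({"nop", {}, {}});  /* No filler: penalty stays.  */
  return body;
}
```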
      92              : 
      93              : #include "config.h"
      94              : #include "system.h"
      95              : #include "coretypes.h"
      96              : #include "backend.h"
      97              : #include "target.h"
      98              : #include "rtl.h"
      99              : #include "tree.h"
     100              : #include "predict.h"
     101              : #include "memmodel.h"
     102              : #include "tm_p.h"
     103              : #include "expmed.h"
     104              : #include "insn-config.h"
     105              : #include "emit-rtl.h"
     106              : #include "recog.h"
     107              : #include "insn-attr.h"
     108              : #include "resource.h"
     109              : #include "tree-pass.h"
     110              : 
     111              : 
     112              : /* First, some functions that were used before GCC got a control flow graph.
     113              :    These functions are now only used here in reorg.cc, and have therefore
     114              :    been moved here to avoid inadvertent misuse elsewhere in the compiler.  */
     115              : 
     116              : /* Return the last label to mark the same position as LABEL.  Return LABEL
     117              :    itself if it is null or any return rtx.  */
     118              : 
     119              : static rtx
     120            0 : skip_consecutive_labels (rtx label_or_return)
     121              : {
     122            0 :   rtx_insn *insn;
     123              : 
     124            0 :   if (label_or_return && ANY_RETURN_P (label_or_return))
     125              :     return label_or_return;
     126              : 
     127            0 :   rtx_insn *label = as_a <rtx_insn *> (label_or_return);
     128              : 
     129              :   /* __builtin_unreachable can create a CODE_LABEL followed by a BARRIER.
     130              : 
     131              :      Since reaching the CODE_LABEL is undefined behavior, we can return
     132              :      any code label and we're OK at run time.
     133              : 
     134              :      However, if we return a CODE_LABEL which leads to a shrink-wrapped
     135              :      epilogue, but the path does not have a prologue, then we will trip
     136              :      a sanity check in the dwarf2 cfi code which wants to verify that
     137              :      the CFIs are all the same on the traces leading to the epilogue.
     138              : 
     139              :      So we explicitly disallow looking through BARRIERS here.  */
     140            0 :   for (insn = label;
     141            0 :        insn != 0 && !INSN_P (insn) && !BARRIER_P (insn);
     142            0 :        insn = NEXT_INSN (insn))
     143            0 :     if (LABEL_P (insn))
     144            0 :       label = insn;
     145              : 
     146              :   return label;
     147              : }
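The walk above can be modeled on a cut-down insn chain. The `Node` type and `last_coincident_label` name below are invented for illustration; the point is the same as in `skip_consecutive_labels`: advance over a run of consecutive labels, remembering the last one, but never look through a real insn or a barrier (the barrier case matters for the `__builtin_unreachable` situation described in the comment above).

```cpp
/* Minimal insn-chain model: LABEL, BARRIER, or ordinary INSN nodes.  */
enum Kind { LABEL, BARRIER, INSN };
struct Node
{
  Kind kind;
  Node *next;
};

/* Analog of skip_consecutive_labels: starting at a label, return the last
   label marking the same position, stopping at the first real insn or
   barrier.  */
Node *
last_coincident_label (Node *label)
{
  Node *best = label;
  for (Node *p = label; p && p->kind != INSN && p->kind != BARRIER;
       p = p->next)
    if (p->kind == LABEL)
      best = p;
  return best;
}
```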
     148              : 
     149              : /* Insns which have delay slots that have not yet been filled.  */
     150              : 
     151              : static struct obstack unfilled_slots_obstack;
     152              : static rtx *unfilled_firstobj;
     153              : 
     154              : /* Define macros to refer to the first and last slot containing unfilled
     155              :    insns.  These are used because the list may move and its address
     156              :    should be recomputed at each use.  */
     157              : 
     158              : #define unfilled_slots_base     \
     159              :   ((rtx_insn **) obstack_base (&unfilled_slots_obstack))
     160              : 
     161              : #define unfilled_slots_next     \
     162              :   ((rtx_insn **) obstack_next_free (&unfilled_slots_obstack))
     163              : 
     164              : /* Points to the label before the end of the function, or before a
     165              :    return insn.  */
     166              : static rtx_code_label *function_return_label;
     167              : /* Likewise for a simple_return.  */
     168              : static rtx_code_label *function_simple_return_label;
     169              : 
     170              : /* Mapping between INSN_UID's and position in the code since INSN_UID's do
     171              :    not always monotonically increase.  */
     172              : static int *uid_to_ruid;
     173              : 
     174              : /* Highest valid index in `uid_to_ruid'.  */
     175              : static int max_uid;
     176              : 
     177              : static bool stop_search_p (rtx_insn *, bool);
     178              : static bool resource_conflicts_p (struct resources *, struct resources *);
     179              : static bool insn_references_resource_p (rtx, struct resources *, bool);
     180              : static bool insn_sets_resource_p (rtx, struct resources *, bool);
     181              : static rtx_code_label *find_end_label (rtx);
     182              : static rtx_insn *emit_delay_sequence (rtx_insn *, const vec<rtx_insn *> &,
     183              :                                       int);
     184              : static void add_to_delay_list (rtx_insn *, vec<rtx_insn *> *);
     185              : static rtx_insn *delete_from_delay_slot (rtx_insn *);
     186              : static void delete_scheduled_jump (rtx_insn *);
     187              : static void note_delay_statistics (int, int);
     188              : static int get_jump_flags (const rtx_insn *, rtx);
     189              : static int mostly_true_jump (rtx);
     190              : static rtx get_branch_condition (const rtx_insn *, rtx);
     191              : static bool condition_dominates_p (rtx, const rtx_insn *);
     192              : static bool redirect_with_delay_slots_safe_p (rtx_insn *, rtx, rtx);
     193              : static bool redirect_with_delay_list_safe_p (rtx_insn *, rtx,
     194              :                                              const vec<rtx_insn *> &);
     195              : static bool check_annul_list_true_false (bool, const vec<rtx_insn *> &);
     196              : static void steal_delay_list_from_target (rtx_insn *, rtx, rtx_sequence *,
     197              :                                           vec<rtx_insn *> *,
     198              :                                           struct resources *,
     199              :                                           struct resources *,
     200              :                                           struct resources *,
     201              :                                           int, int *, bool *,
     202              :                                           rtx *);
     203              : static void steal_delay_list_from_fallthrough (rtx_insn *, rtx, rtx_sequence *,
     204              :                                                vec<rtx_insn *> *,
     205              :                                                struct resources *,
     206              :                                                struct resources *,
     207              :                                                struct resources *,
     208              :                                                int, int *, bool *);
     209              : static void try_merge_delay_insns (rtx_insn *, rtx_insn *);
     210              : static rtx_insn *redundant_insn (rtx, rtx_insn *, const vec<rtx_insn *> &);
     211              : static bool own_thread_p (rtx, rtx, bool);
     212              : static void update_block (rtx_insn *, rtx_insn *);
     213              : static bool reorg_redirect_jump (rtx_jump_insn *, rtx);
     214              : static void update_reg_dead_notes (rtx_insn *, rtx_insn *);
     215              : static void fix_reg_dead_note (rtx_insn *, rtx);
     216              : static void update_reg_unused_notes (rtx_insn *, rtx);
     217              : static void fill_simple_delay_slots (bool);
     218              : static void fill_slots_from_thread (rtx_jump_insn *, rtx, rtx, rtx,
     219              :                                     bool, bool, bool, int,
     220              :                                     int *, vec<rtx_insn *> *);
     221              : static void fill_eager_delay_slots (void);
     222              : static void relax_delay_slots (rtx_insn *);
     223              : static void make_return_insns (rtx_insn *);
     224              : 
     225              : /* A wrapper around next_active_insn which takes care to return ret_rtx
     226              :    unchanged.  */
     227              : 
     228              : static rtx
     229            0 : first_active_target_insn (rtx insn)
     230              : {
     231            0 :   if (ANY_RETURN_P (insn))
     232              :     return insn;
     233            0 :   return next_active_insn (as_a <rtx_insn *> (insn));
     234              : }
     235              : 
     236              : /* Return true iff INSN is a simplejump, or any kind of return insn.  */
     237              : 
     238              : static bool
     239            0 : simplejump_or_return_p (rtx insn)
     240              : {
     241            0 :   return (JUMP_P (insn)
     242            0 :           && (simplejump_p (as_a <rtx_insn *> (insn))
     243            0 :               || ANY_RETURN_P (PATTERN (insn))));
     244              : }
     245              : 
      246              : /* Return TRUE if this insn should stop the search for insns to fill delay
     247              :    slots.  LABELS_P indicates that labels should terminate the search.
     248              :    In all cases, jumps terminate the search.  */
     249              : 
     250              : static bool
     251            0 : stop_search_p (rtx_insn *insn, bool labels_p)
     252              : {
     253            0 :   if (insn == 0)
     254              :     return true;
     255              : 
     256              :   /* If the insn can throw an exception that is caught within the function,
     257              :      it may effectively perform a jump from the viewpoint of the function.
     258              :      Therefore act like for a jump.  */
     259            0 :   if (can_throw_internal (insn))
     260              :     return true;
     261              : 
     262            0 :   switch (GET_CODE (insn))
     263              :     {
     264              :     case NOTE:
     265              :     case CALL_INSN:
     266              :     case DEBUG_INSN:
     267              :       return false;
     268              : 
     269              :     case CODE_LABEL:
     270              :       return labels_p;
     271              : 
     272              :     case JUMP_INSN:
     273              :     case BARRIER:
     274              :       return true;
     275              : 
     276            0 :     case INSN:
     277              :       /* OK unless it contains a delay slot or is an `asm' insn of some type.
     278              :          We don't know anything about these.  */
     279            0 :       return (GET_CODE (PATTERN (insn)) == SEQUENCE
     280            0 :               || GET_CODE (PATTERN (insn)) == ASM_INPUT
     281            0 :               || asm_noperands (PATTERN (insn)) >= 0);
     282              : 
     283            0 :     default:
     284            0 :       gcc_unreachable ();
     285              :     }
     286              : }
     287              : 
     288              : /* Return TRUE if any resources are marked in both RES1 and RES2 or if either
     289              :    resource set contains a volatile memory reference.  Otherwise, return FALSE.  */
     290              : 
     291              : static bool
     292            0 : resource_conflicts_p (struct resources *res1, struct resources *res2)
     293              : {
     294            0 :   if ((res1->cc && res2->cc) || (res1->memory && res2->memory)
     295            0 :       || res1->volatil || res2->volatil)
     296              :     return true;
     297              : 
     298            0 :   return hard_reg_set_intersect_p (res1->regs, res2->regs);
     299              : }
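The conflict test reduces to a few flag checks plus a register-set intersection. A minimal sketch (the `Res` struct below stands in for the real `struct resources`, with a single 64-bit mask in place of `HARD_REG_SET`):

```cpp
#include <cstdint>

/* Cut-down resource set: a register bitmask plus the cc / memory /
   volatile flags that the real `struct resources' carries.  */
struct Res
{
  uint64_t regs;   /* One bit per hard register.  */
  bool cc, memory, volatil;
};

/* Analog of resource_conflicts_p: two sets conflict if both touch the
   condition code, both touch memory, either contains a volatile
   reference, or their register sets intersect.  */
bool
conflicts_p (const Res &a, const Res &b)
{
  if ((a.cc && b.cc) || (a.memory && b.memory) || a.volatil || b.volatil)
    return true;
  return (a.regs & b.regs) != 0;
}
```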
     300              : 
     301              : /* Return TRUE if any resource marked in RES, a `struct resources', is
      302              :    referenced by INSN.  If INCLUDE_DELAYED_EFFECTS is set, also consider the
      303              :    resources used by a routine called by INSN.
     304              : 
     305              :    We compute this by computing all the resources referenced by INSN and
     306              :    seeing if this conflicts with RES.  It might be faster to directly check
     307              :    ourselves, and this is the way it used to work, but it means duplicating
     308              :    a large block of complex code.  */
     309              : 
     310              : static bool
     311            0 : insn_references_resource_p (rtx insn, struct resources *res,
     312              :                             bool include_delayed_effects)
     313              : {
     314            0 :   struct resources insn_res;
     315              : 
     316            0 :   CLEAR_RESOURCE (&insn_res);
     317            0 :   mark_referenced_resources (insn, &insn_res, include_delayed_effects);
     318            0 :   return resource_conflicts_p (&insn_res, res);
     319              : }
     320              : 
     321              : /* Return TRUE if INSN modifies resources that are marked in RES.
     322              :    INCLUDE_DELAYED_EFFECTS is set if the actions of that routine should be
     323              :    included.   */
     324              : 
     325              : static bool
     326            0 : insn_sets_resource_p (rtx insn, struct resources *res,
     327              :                       bool include_delayed_effects)
     328              : {
     329            0 :   struct resources insn_sets;
     330              : 
     331            0 :   CLEAR_RESOURCE (&insn_sets);
     332            0 :   mark_set_resources (insn, &insn_sets, 0,
     333              :                       (include_delayed_effects
     334              :                        ? MARK_SRC_DEST_CALL
     335              :                        : MARK_SRC_DEST));
     336            0 :   return resource_conflicts_p (&insn_sets, res);
     337              : }
     338              : 
     339              : /* Find a label before a RETURN.  If there is none, try to make one; if this
     340              :    fails, return 0.  KIND is either ret_rtx or simple_return_rtx, indicating
     341              :    which type of RETURN we're looking for.
     342              : 
     343              :    The property of the label is that it is placed just before a bare RETURN
     344              :    insn, so that another bare RETURN can be turned into a jump to the label
     345              :    unconditionally.  In particular, the label cannot be placed before a
     346              :    RETURN insn with a filled delay slot.
     347              : 
     348              :    ??? There may be a problem with the current implementation.  Suppose
     349              :    we start with a bare RETURN insn and call find_end_label.  It may set
     350              :    function_return_label just before the RETURN.  Suppose the machinery
     351              :    is able to fill the delay slot of the RETURN insn afterwards.  Then
     352              :    function_return_label is no longer valid according to the property
     353              :    described above and find_end_label will still return it unmodified.
     354              :    Note that this is probably mitigated by the following observation:
     355              :    once function_return_label is made, it is very likely the target of
     356              :    a jump, so filling the delay slot of the RETURN will be much more
     357              :    difficult.  */
     358              : 
     359              : static rtx_code_label *
     360            0 : find_end_label (rtx kind)
     361              : {
     362            0 :   rtx_insn *insn;
     363            0 :   rtx_code_label **plabel;
     364              : 
     365            0 :   if (kind == ret_rtx)
     366            0 :     plabel = &function_return_label;
     367              :   else
     368              :     {
     369            0 :       gcc_assert (kind == simple_return_rtx);
     370            0 :       plabel = &function_simple_return_label;
     371              :     }
     372              : 
     373              :   /* If we found one previously, return it.  */
     374            0 :   if (*plabel)
     375            0 :     return *plabel;
     376              : 
     377              :   /* Otherwise, scan the insns backward from the end of the function.  */
     378            0 :   insn = get_last_insn ();
     379            0 :   while (NOTE_P (insn)
     380            0 :          || (NONJUMP_INSN_P (insn)
     381            0 :              && (GET_CODE (PATTERN (insn)) == USE
     382            0 :                  || GET_CODE (PATTERN (insn)) == CLOBBER)))
     383            0 :     insn = PREV_INSN (insn);
     384              : 
     385              :   /* First, see if there is a RETURN at the end of the function.  If so,
     386              :      put the label before it.  */
     387            0 :   if (BARRIER_P (insn)
     388            0 :       && JUMP_P (PREV_INSN (insn))
     389            0 :       && PATTERN (PREV_INSN (insn)) == kind)
     390              :     {
     391            0 :       rtx_insn *temp = PREV_INSN (PREV_INSN (insn));
     392            0 :       rtx_code_label *label = gen_label_rtx ();
     393            0 :       LABEL_NUSES (label) = 0;
     394              : 
     395              :       /* Put the label before any USE insns that may precede the
     396              :          RETURN insn.  */
     397            0 :       while (GET_CODE (temp) == USE)
     398            0 :         temp = PREV_INSN (temp);
     399              : 
     400            0 :       emit_label_after (label, temp);
     401            0 :       *plabel = label;
     402              :     }
     403              : 
     404              :   /* If the basic block reordering pass has moved the return insn to some
     405              :      other place, try to locate it again and put the label there.  */
     406              :   else
     407              :     {
     408            0 :       rtx_code_label *label = gen_label_rtx ();
     409            0 :       LABEL_NUSES (label) = 0;
     410            0 :       while (insn && ! (JUMP_P (insn) && (PATTERN (insn) == kind)))
     411            0 :         insn = PREV_INSN (insn);
     412            0 :       if (insn)
     413              :         {
     414            0 :           insn = PREV_INSN (insn);
     415              : 
     416              :           /* Put the label before any USE insns that may precede the
     417              :              RETURN insn.  */
     418            0 :           while (GET_CODE (insn) == USE)
     419            0 :             insn = PREV_INSN (insn);
     420              : 
     421            0 :           emit_label_after (label, insn);
     422              :         }
     423              :       else
     424              :         {
     425            0 :           if (targetm.have_epilogue () && ! targetm.have_return ())
     426              :             /* The RETURN insn has its delay slot filled so we cannot
     427              :                emit the label just before it.  Since we already have
     428              :                an epilogue and cannot emit a new RETURN, we cannot
     429              :                emit the label at all.  */
     430              :             return NULL;
     431              : 
     432              :           /* Otherwise, make a new label and emit a RETURN and BARRIER,
     433              :              if needed.  */
     434            0 :           emit_label (label);
     435            0 :           if (targetm.have_return ())
     436              :             {
     437              :               /* The return we make may have delay slots too.  */
     438            0 :               rtx_insn *pat = targetm.gen_return ();
     439            0 :               rtx_insn *insn = emit_jump_insn (pat);
     440            0 :               set_return_jump_label (insn);
     441            0 :               emit_barrier ();
     442            0 :               if (num_delay_slots (insn) > 0)
     443            0 :                 obstack_ptr_grow (&unfilled_slots_obstack, insn);
     444              :             }
     445              :         }
     446            0 :       *plabel = label;
     447              :     }
     448              : 
     449              :   /* Show one additional use for this label so it won't go away until
     450              :      we are done.  */
     451            0 :   ++LABEL_NUSES (*plabel);
     452              : 
     453            0 :   return *plabel;
     454              : }
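The backward scan at the heart of find_end_label can be sketched on a toy insn stream of opcode strings (the `place_end_label` helper is invented for illustration and ignores the label-creation and have_epilogue/have_return fallbacks): skip trailing notes and USE/CLOBBER insns, and if the function ends in a bare return, insert the label before it, backing up over any USE insns that immediately precede it.

```cpp
#include <string>
#include <vector>

/* Toy version of the scan in find_end_label: skip trailing notes and
   USE/CLOBBER insns; if the stream then ends in a bare "return", insert
   a label before it (and before any USE insns preceding it).  Return the
   index of the inserted label, or -1 if no return was found.  */
int
place_end_label (std::vector<std::string> &insns)
{
  int i = (int) insns.size () - 1;
  while (i >= 0 && (insns[i] == "note" || insns[i] == "use"
                    || insns[i] == "clobber"))
    --i;
  if (i < 0 || insns[i] != "return")
    return -1;

  /* Back up over USE insns immediately preceding the return.  */
  int pos = i;
  while (pos > 0 && insns[pos - 1] == "use")
    --pos;
  insns.insert (insns.begin () + pos, "end_label");
  return pos;
}
```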
     455              : 
     456              : /* Put INSN and LIST together in a SEQUENCE rtx of LENGTH, and replace
     457              :    the pattern of INSN with the SEQUENCE.
     458              : 
     459              :    Returns the insn containing the SEQUENCE that replaces INSN.  */
     460              : 
     461              : static rtx_insn *
     462            0 : emit_delay_sequence (rtx_insn *insn, const vec<rtx_insn *> &list, int length)
     463              : {
     464              :   /* Allocate the rtvec to hold the insns and the SEQUENCE.  */
     465            0 :   rtvec seqv = rtvec_alloc (length + 1);
     466            0 :   rtx seq = gen_rtx_SEQUENCE (VOIDmode, seqv);
     467            0 :   rtx_insn *seq_insn = make_insn_raw (seq);
     468              : 
      469              :   /* If INSN has a location, use it for SEQ_INSN.  If INSN does not have
      470              :      a location, but one of the delayed insns does, we pick up a
      471              :      location from there later.  */
     472            0 :   INSN_LOCATION (seq_insn) = INSN_LOCATION (insn);
     473              : 
     474              :   /* Unlink INSN from the insn chain, so that we can put it into
     475              :      the SEQUENCE.   Remember where we want to emit SEQUENCE in AFTER.  */
     476            0 :   rtx_insn *after = PREV_INSN (insn);
     477            0 :   remove_insn (insn);
     478            0 :   SET_NEXT_INSN (insn) = SET_PREV_INSN (insn) = NULL;
     479              : 
     480              :   /* Build our SEQUENCE and rebuild the insn chain.  */
     481            0 :   start_sequence ();
     482            0 :   XVECEXP (seq, 0, 0) = emit_insn (insn);
     483              : 
     484            0 :   unsigned int delay_insns = list.length ();
     485            0 :   gcc_assert (delay_insns == (unsigned int) length);
     486            0 :   for (unsigned int i = 0; i < delay_insns; i++)
     487              :     {
     488            0 :       rtx_insn *tem = list[i];
     489            0 :       rtx note, next;
     490              : 
     491              :       /* Show that this copy of the insn isn't deleted.  */
     492            0 :       tem->set_undeleted ();
     493              : 
     494              :       /* Unlink insn from its original place, and re-emit it into
     495              :          the sequence.  */
     496            0 :       SET_NEXT_INSN (tem) = SET_PREV_INSN (tem) = NULL;
     497            0 :       XVECEXP (seq, 0, i + 1) = emit_insn (tem);
     498              : 
      499              :       /* The SPARC assembler, for instance, emits a warning when debug info
      500              :          is output into the delay slot.  */
     501            0 :       if (INSN_LOCATION (tem) && !INSN_LOCATION (seq_insn))
     502            0 :         INSN_LOCATION (seq_insn) = INSN_LOCATION (tem);
     503            0 :       INSN_LOCATION (tem) = 0;
     504              : 
     505            0 :       for (note = REG_NOTES (tem); note; note = next)
     506              :         {
     507            0 :           next = XEXP (note, 1);
     508            0 :           switch (REG_NOTE_KIND (note))
     509              :             {
     510            0 :             case REG_DEAD:
     511              :               /* Remove any REG_DEAD notes because we can't rely on them now
     512              :                  that the insn has been moved.  */
     513            0 :               remove_note (tem, note);
     514            0 :               break;
     515              : 
     516            0 :             case REG_LABEL_OPERAND:
     517            0 :             case REG_LABEL_TARGET:
     518              :               /* Keep the label reference count up to date.  */
     519            0 :               if (LABEL_P (XEXP (note, 0)))
     520            0 :                 LABEL_NUSES (XEXP (note, 0)) ++;
     521              :               break;
     522              : 
     523              :             default:
     524              :               break;
     525              :             }
     526              :         }
     527              :     }
     528            0 :   end_sequence ();
     529              : 
     530              :   /* Splice our SEQUENCE into the insn stream where INSN used to be.  */
     531            0 :   add_insn_after (seq_insn, after, NULL);
     532              : 
     533            0 :   return seq_insn;
     534              : }
     535              : 
      536              : /* Append INSN to DELAY_LIST.  The list must be in the order in which
      537              :    the insns are to be executed.  */
     538              : 
     539              : static void
     540            0 : add_to_delay_list (rtx_insn *insn, vec<rtx_insn *> *delay_list)
     541              : {
     542              :   /* If INSN has its block number recorded, clear it since we may
     543              :      be moving the insn to a new block.  */
     544            0 :   clear_hashed_info_for_insn (insn);
     545              : 
     546            0 :   delay_list->safe_push (insn);
     547            0 : }
     548              : 
     549              : /* Delete INSN from the delay slot of the insn that it is in, which may
     550              :    produce an insn with no delay slots.  Return the new insn.  */
     551              : 
     552              : static rtx_insn *
     553            0 : delete_from_delay_slot (rtx_insn *insn)
     554              : {
     555            0 :   rtx_insn *trial, *seq_insn, *prev;
     556            0 :   rtx_sequence *seq;
     557            0 :   bool had_barrier = false;
     558            0 :   int i;
     559              : 
     560              :   /* We first must find the insn containing the SEQUENCE with INSN in its
     561              :      delay slot.  Do this by finding an insn, TRIAL, where
     562              :      PREV_INSN (NEXT_INSN (TRIAL)) != TRIAL.  */
     563              : 
     564            0 :   for (trial = insn;
     565            0 :        PREV_INSN (NEXT_INSN (trial)) == trial;
     566              :        trial = NEXT_INSN (trial))
     567              :     ;
     568              : 
     569            0 :   seq_insn = PREV_INSN (NEXT_INSN (trial));
     570            0 :   seq = as_a <rtx_sequence *> (PATTERN (seq_insn));
     571              : 
     572            0 :   if (NEXT_INSN (seq_insn) && BARRIER_P (NEXT_INSN (seq_insn)))
     573              :     had_barrier = true;
     574              : 
     575              :   /* Create a delay list consisting of all the insns other than the one
     576              :      we are deleting (unless we were the only one).  */
     577            0 :   auto_vec<rtx_insn *, 5> delay_list;
     578            0 :   if (seq->len () > 2)
     579            0 :     for (i = 1; i < seq->len (); i++)
     580            0 :       if (seq->insn (i) != insn)
     581            0 :         add_to_delay_list (seq->insn (i), &delay_list);
     582              : 
     583              :   /* Delete the old SEQUENCE, re-emit the insn that used to have the delay
     584              :      list, and rebuild the delay list if non-empty.  */
     585            0 :   prev = PREV_INSN (seq_insn);
     586            0 :   trial = seq->insn (0);
     587            0 :   delete_related_insns (seq_insn);
     588            0 :   add_insn_after (trial, prev, NULL);
     589              : 
      590              :   /* If there was a barrier after the old SEQUENCE, re-emit it.  */
     591            0 :   if (had_barrier)
     592            0 :     emit_barrier_after (trial);
     593              : 
      594              :   /* If there are any delay insns, re-emit them.  Otherwise clear the
      595              :      annul flag.  */
     596            0 :   if (!delay_list.is_empty ())
     597            0 :     trial = emit_delay_sequence (trial, delay_list, XVECLEN (seq, 0) - 2);
     598            0 :   else if (JUMP_P (trial))
     599            0 :     INSN_ANNULLED_BRANCH_P (trial) = 0;
     600              : 
     601            0 :   INSN_FROM_TARGET_P (insn) = 0;
     602              : 
     603              :   /* Show we need to fill this insn again.  */
     604            0 :   obstack_ptr_grow (&unfilled_slots_obstack, trial);
     605              : 
     606            0 :   return trial;
     607            0 : }
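The container-finding loop in delete_from_delay_slot relies on a pointer invariant rather than a parent link: for an insn spliced normally into the chain, PREV_INSN (NEXT_INSN (trial)) round-trips back to trial, while for the last insn inside a SEQUENCE the round-trip lands on the insn holding the SEQUENCE. A minimal sketch of that search, using a plain struct in place of rtx_insn (the names `Insn` and `find_seq_container` are invented for illustration):

```cpp
#include <cassert>

// Minimal stand-in for an insn in a doubly linked chain.  Insns living
// inside a SEQUENCE share the container's NEXT pointer, so following
// next and then prev from the last inner insn reaches the container
// rather than the inner insn itself.
struct Insn
{
  Insn *prev = nullptr;
  Insn *next = nullptr;
};

// Walk forward while PREV_INSN (NEXT_INSN (trial)) == trial still
// holds; the first mismatch identifies the insn holding the SEQUENCE.
Insn *
find_seq_container (Insn *trial)
{
  while (trial->next->prev == trial)
    trial = trial->next;
  return trial->next->prev;
}
```

Starting from any insn inside the SEQUENCE, the walk terminates at the container, mirroring how the pass recovers `seq_insn` from a delay-slot insn.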
     608              : 
     609              : /* Delete INSN, a JUMP_INSN.  */
     610              : 
     611              : static void
     612            0 : delete_scheduled_jump (rtx_insn *insn)
     613              : {
     614            0 :   delete_related_insns (insn);
     615            0 : }
     616              : 
     617              : /* Counters for delay-slot filling.  */
     618              : 
     619              : #define NUM_REORG_FUNCTIONS 2
     620              : #define MAX_DELAY_HISTOGRAM 3
     621              : #define MAX_REORG_PASSES 2
     622              : 
     623              : static int num_insns_needing_delays[NUM_REORG_FUNCTIONS][MAX_REORG_PASSES];
     624              : 
     625              : static int num_filled_delays[NUM_REORG_FUNCTIONS][MAX_DELAY_HISTOGRAM+1][MAX_REORG_PASSES];
     626              : 
     627              : static int reorg_pass_number;
     628              : 
     629              : static void
     630            0 : note_delay_statistics (int slots_filled, int index)
     631              : {
     632            0 :   num_insns_needing_delays[index][reorg_pass_number]++;
     633            0 :   if (slots_filled > MAX_DELAY_HISTOGRAM)
     634              :     slots_filled = MAX_DELAY_HISTOGRAM;
     635            0 :   num_filled_delays[index][slots_filled][reorg_pass_number]++;
     636            0 : }
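note_delay_statistics clamps the slot count so that every fill larger than MAX_DELAY_HISTOGRAM accumulates in the final histogram bucket. A minimal sketch of just that bucketing step (the constant mirrors the macro above):

```cpp
#include <cassert>

// Stand-in for the MAX_DELAY_HISTOGRAM macro used by the pass.
const int MAX_DELAY_HISTOGRAM_SKETCH = 3;

// Clamp as in note_delay_statistics: fills beyond the histogram range
// are all counted in the last bucket.
int
histogram_bucket (int slots_filled)
{
  if (slots_filled > MAX_DELAY_HISTOGRAM_SKETCH)
    slots_filled = MAX_DELAY_HISTOGRAM_SKETCH;
  return slots_filled;
}
```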
     637              : 
     638              : /* Optimize the following cases:
     639              : 
     640              :    1.  When a conditional branch skips over only one instruction,
     641              :        use an annulling branch and put that insn in the delay slot.
      642              :        Either use a branch that annuls when the condition is true, or
      643              :        invert the test and use a branch that annuls when the condition
      644              :        is false.  This saves insns, since otherwise we must copy an insn
     645              :        from the L1 target.
     646              : 
     647              :         (orig)           (skip)         (otherwise)
     648              :         Bcc.n L1        Bcc',a L1       Bcc,a L1'
     649              :         insn            insn            insn2
     650              :       L1:             L1:             L1:
     651              :         insn2           insn2           insn2
     652              :         insn3           insn3         L1':
     653              :                                         insn3
     654              : 
     655              :    2.  When a conditional branch skips over only one instruction,
      656              :        and after that it unconditionally branches somewhere else,
      657              :        perform a similar optimization.  This saves executing the
     658              :        second branch in the case where the inverted condition is true.
     659              : 
     660              :         Bcc.n L1        Bcc',a L2
     661              :         insn            insn
     662              :       L1:             L1:
     663              :         Bra L2          Bra L2
     664              : 
     665              :    INSN is a JUMP_INSN.
     666              : 
     667              :    This should be expanded to skip over N insns, where N is the number
     668              :    of delay slots required.  */
     669              : 
     670              : static void
     671            0 : optimize_skip (rtx_jump_insn *insn, vec<rtx_insn *> *delay_list)
     672              : {
     673            0 :   rtx_insn *trial = next_nonnote_insn (insn);
     674            0 :   rtx_insn *next_trial = next_active_insn (trial);
     675            0 :   int flags;
     676              : 
     677            0 :   flags = get_jump_flags (insn, JUMP_LABEL (insn));
     678              : 
     679            0 :   if (trial == 0
     680            0 :       || !NONJUMP_INSN_P (trial)
     681            0 :       || GET_CODE (PATTERN (trial)) == SEQUENCE
     682            0 :       || recog_memoized (trial) < 0
     683            0 :       || (! eligible_for_annul_false (insn, 0, trial, flags)
     684            0 :           && ! eligible_for_annul_true (insn, 0, trial, flags))
     685            0 :       || RTX_FRAME_RELATED_P (trial)
     686            0 :       || can_throw_internal (trial))
     687            0 :     return;
     688              : 
     689              :   /* There are two cases where we are just executing one insn (we assume
     690              :      here that a branch requires only one insn; this should be generalized
     691              :      at some point):  Where the branch goes around a single insn or where
     692              :      we have one insn followed by a branch to the same label we branch to.
     693              :      In both of these cases, inverting the jump and annulling the delay
     694              :      slot give the same effect in fewer insns.  */
     695            0 :   if (next_trial == next_active_insn (JUMP_LABEL_AS_INSN (insn))
     696            0 :       || (next_trial != 0
     697            0 :           && simplejump_or_return_p (next_trial)
     698            0 :           && JUMP_LABEL (insn) == JUMP_LABEL (next_trial)))
     699              :     {
     700            0 :       if (eligible_for_annul_false (insn, 0, trial, flags))
     701              :         {
     702            0 :           if (invert_jump (insn, JUMP_LABEL (insn), 1))
     703            0 :             INSN_FROM_TARGET_P (trial) = 1;
     704            0 :           else if (! eligible_for_annul_true (insn, 0, trial, flags))
     705              :             return;
     706              :         }
     707              : 
     708            0 :       add_to_delay_list (trial, delay_list);
     709            0 :       next_trial = next_active_insn (trial);
     710            0 :       update_block (trial, trial);
     711            0 :       delete_related_insns (trial);
     712              : 
     713              :       /* Also, if we are targeting an unconditional
     714              :          branch, thread our jump to the target of that branch.  Don't
     715              :          change this into a RETURN here, because it may not accept what
     716              :          we have in the delay slot.  We'll fix this up later.  */
     717            0 :       if (next_trial && simplejump_or_return_p (next_trial))
     718              :         {
     719            0 :           rtx target_label = JUMP_LABEL (next_trial);
     720            0 :           if (ANY_RETURN_P (target_label))
     721            0 :             target_label = find_end_label (target_label);
     722              : 
     723            0 :           if (target_label)
     724              :             {
     725              :               /* Recompute the flags based on TARGET_LABEL since threading
     726              :                  the jump to TARGET_LABEL may change the direction of the
     727              :                  jump (which may change the circumstances in which the
     728              :                  delay slot is nullified).  */
     729            0 :               flags = get_jump_flags (insn, target_label);
     730            0 :               if (eligible_for_annul_true (insn, 0, trial, flags))
     731            0 :                 reorg_redirect_jump (insn, target_label);
     732              :             }
     733              :         }
     734              : 
     735            0 :       INSN_ANNULLED_BRANCH_P (insn) = 1;
     736              :     }
     737              : }
     738              : 
     739              : /*  Encode and return branch direction and prediction information for
     740              :     INSN assuming it will jump to LABEL.
     741              : 
      742              :     Unconditional branches return no direction information and
     743              :     are predicted as very likely taken.  */
     744              : 
     745              : static int
     746            0 : get_jump_flags (const rtx_insn *insn, rtx label)
     747              : {
     748            0 :   int flags;
     749              : 
     750              :   /* get_jump_flags can be passed any insn with delay slots, these may
     751              :      be INSNs, CALL_INSNs, or JUMP_INSNs.  Only JUMP_INSNs have branch
     752              :      direction information, and only if they are conditional jumps.
     753              : 
     754              :      If LABEL is a return, then there is no way to determine the branch
     755              :      direction.  */
     756            0 :   if (JUMP_P (insn)
     757            0 :       && (condjump_p (insn) || condjump_in_parallel_p (insn))
     758            0 :       && !ANY_RETURN_P (label)
     759            0 :       && INSN_UID (insn) <= max_uid
     760            0 :       && INSN_UID (label) <= max_uid)
     761            0 :     flags
     762            0 :       = (uid_to_ruid[INSN_UID (label)] > uid_to_ruid[INSN_UID (insn)])
     763            0 :          ? ATTR_FLAG_forward : ATTR_FLAG_backward;
     764              :   /* No valid direction information.  */
     765              :   else
     766              :     flags = 0;
     767              : 
     768            0 :   return flags;
     769              : }
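The direction test in get_jump_flags reduces to comparing the positions of the insn and its target in the uid_to_ruid ordering, i.e. insns numbered in order of appearance. A sketch with invented flag values standing in for ATTR_FLAG_forward and ATTR_FLAG_backward (the real values come from the generated insn-attr.h):

```cpp
#include <cassert>

// Invented stand-ins for the attribute flags used by the pass.
enum jump_dir { DIR_forward = 1, DIR_backward = 2 };

// A label whose ruid is greater than the jump's appears later in the
// insn stream, so the branch is forward; otherwise it is backward.
jump_dir
direction_flag (int insn_ruid, int label_ruid)
{
  return label_ruid > insn_ruid ? DIR_forward : DIR_backward;
}
```

As in the original comparison, a label at the same position as the insn is not strictly later and so classifies as backward.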
     770              : 
     771              : /* Return truth value of the statement that this branch
     772              :    is mostly taken.  If we think that the branch is extremely likely
     773              :    to be taken, we return 2.  If the branch is slightly more likely to be
     774              :    taken, return 1.  If the branch is slightly less likely to be taken,
     775              :    return 0 and if the branch is highly unlikely to be taken, return -1.  */
     776              : 
     777              : static int
     778            0 : mostly_true_jump (rtx jump_insn)
     779              : {
     780              :   /* If branch probabilities are available, then use that number since it
     781              :      always gives a correct answer.  */
     782            0 :   rtx note = find_reg_note (jump_insn, REG_BR_PROB, 0);
     783            0 :   if (note)
     784              :     {
     785            0 :       int prob = profile_probability::from_reg_br_prob_note (XINT (note, 0))
     786            0 :                         .to_reg_br_prob_base ();
     787              : 
     788            0 :       if (prob >= REG_BR_PROB_BASE * 9 / 10)
     789              :         return 2;
     790            0 :       else if (prob >= REG_BR_PROB_BASE / 2)
     791              :         return 1;
     792            0 :       else if (prob >= REG_BR_PROB_BASE / 10)
     793              :         return 0;
     794              :       else
     795              :         return -1;
     796              :     }
     797              : 
     798              :   /* If there is no note, assume branches are not taken.
     799              :      This should be rare.  */
      800              :   return 0;
     801              : }
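The thresholds in mostly_true_jump carve the REG_BR_PROB_BASE scale into four buckets. A self-contained sketch of that classification, with `PROB_BASE` as an assumed stand-in for REG_BR_PROB_BASE:

```cpp
#include <cassert>

// Stand-in for REG_BR_PROB_BASE; the value is assumed for the sketch.
const int PROB_BASE = 10000;

// Bucket a branch probability as mostly_true_jump does: 2 when taken
// at least 90% of the time, 1 at >= 50%, 0 at >= 10%, otherwise -1.
int
likelihood_bucket (int prob)
{
  if (prob >= PROB_BASE * 9 / 10)
    return 2;
  else if (prob >= PROB_BASE / 2)
    return 1;
  else if (prob >= PROB_BASE / 10)
    return 0;
  else
    return -1;
}
```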
     802              : 
     803              : /* Return the condition under which INSN will branch to TARGET.  If TARGET
     804              :    is zero, return the condition under which INSN will return.  If INSN is
     805              :    an unconditional branch, return const_true_rtx.  If INSN isn't a simple
     806              :    type of jump, or it doesn't go to TARGET, return 0.  */
     807              : 
     808              : static rtx
     809            0 : get_branch_condition (const rtx_insn *insn, rtx target)
     810              : {
     811            0 :   rtx pat = PATTERN (insn);
     812            0 :   rtx src;
     813              : 
     814            0 :   if (condjump_in_parallel_p (insn))
     815            0 :     pat = XVECEXP (pat, 0, 0);
     816              : 
     817            0 :   if (ANY_RETURN_P (pat) && pat == target)
     818            0 :     return const_true_rtx;
     819              : 
     820            0 :   if (GET_CODE (pat) != SET || SET_DEST (pat) != pc_rtx)
     821              :     return 0;
     822              : 
     823            0 :   src = SET_SRC (pat);
     824            0 :   if (GET_CODE (src) == LABEL_REF && label_ref_label (src) == target)
     825            0 :     return const_true_rtx;
     826              : 
     827            0 :   else if (GET_CODE (src) == IF_THEN_ELSE
     828            0 :            && XEXP (src, 2) == pc_rtx
     829            0 :            && ((GET_CODE (XEXP (src, 1)) == LABEL_REF
     830            0 :                 && label_ref_label (XEXP (src, 1)) == target)
     831            0 :                || (ANY_RETURN_P (XEXP (src, 1)) && XEXP (src, 1) == target)))
     832            0 :     return XEXP (src, 0);
     833              : 
     834            0 :   else if (GET_CODE (src) == IF_THEN_ELSE
     835            0 :            && XEXP (src, 1) == pc_rtx
     836            0 :            && ((GET_CODE (XEXP (src, 2)) == LABEL_REF
     837            0 :                 && label_ref_label (XEXP (src, 2)) == target)
     838            0 :                || (ANY_RETURN_P (XEXP (src, 2)) && XEXP (src, 2) == target)))
     839              :     {
     840            0 :       enum rtx_code rev;
     841            0 :       rev = reversed_comparison_code (XEXP (src, 0), insn);
     842            0 :       if (rev != UNKNOWN)
     843            0 :         return gen_rtx_fmt_ee (rev, GET_MODE (XEXP (src, 0)),
     844              :                                XEXP (XEXP (src, 0), 0),
     845              :                                XEXP (XEXP (src, 0), 1));
     846              :     }
     847              : 
     848              :   return 0;
     849              : }
     850              : 
     851              : /* Return true if CONDITION is more strict than the condition of
     852              :    INSN, i.e., if INSN will always branch if CONDITION is true.  */
     853              : 
     854              : static bool
     855            0 : condition_dominates_p (rtx condition, const rtx_insn *insn)
     856              : {
     857            0 :   rtx other_condition = get_branch_condition (insn, JUMP_LABEL (insn));
     858            0 :   enum rtx_code code = GET_CODE (condition);
     859            0 :   enum rtx_code other_code;
     860              : 
     861            0 :   if (rtx_equal_p (condition, other_condition)
     862            0 :       || other_condition == const_true_rtx)
     863              :     return true;
     864              : 
     865            0 :   else if (condition == const_true_rtx || other_condition == 0)
     866              :     return false;
     867              : 
     868            0 :   other_code = GET_CODE (other_condition);
     869            0 :   if (GET_RTX_LENGTH (code) != 2 || GET_RTX_LENGTH (other_code) != 2
     870            0 :       || ! rtx_equal_p (XEXP (condition, 0), XEXP (other_condition, 0))
     871            0 :       || ! rtx_equal_p (XEXP (condition, 1), XEXP (other_condition, 1)))
     872            0 :     return false;
     873              : 
     874            0 :   return comparison_dominates_p (code, other_code);
     875              : }
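condition_dominates_p ultimately asks whether the truth of one comparison implies the truth of another over the same operands (the question comparison_dominates_p answers from the rtx codes). For concrete integer comparisons the implication can be demonstrated by brute force over a small operand range — enough to illustrate the relation, though not a proof for all operands:

```cpp
#include <cassert>
#include <functional>

// True when C1 (a, b) implies C2 (a, b) for every pair tried: the
// sense in which EQ dominates LE, or LT dominates LE.  Checking a
// small range only illustrates the relation, it is not a general proof.
bool
dominates (const std::function<bool (int, int)> &c1,
           const std::function<bool (int, int)> &c2)
{
  for (int a = -3; a <= 3; a++)
    for (int b = -3; b <= 3; b++)
      if (c1 (a, b) && !c2 (a, b))
        return false;
  return true;
}
```

For example, equality dominates less-or-equal, but less-or-equal does not dominate strict less-than.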
     876              : 
     877              : /* Return true if redirecting JUMP to NEWLABEL does not invalidate
     878              :    any insns already in the delay slot of JUMP.  */
     879              : 
     880              : static bool
     881            0 : redirect_with_delay_slots_safe_p (rtx_insn *jump, rtx newlabel, rtx seq)
     882              : {
     883            0 :   int flags, i;
     884            0 :   rtx_sequence *pat = as_a <rtx_sequence *> (PATTERN (seq));
     885              : 
     886              :   /* Make sure all the delay slots of this jump would still
     887              :      be valid after threading the jump.  If they are still
      888              :      valid, then return true.  */
     889              : 
     890            0 :   flags = get_jump_flags (jump, newlabel);
     891            0 :   for (i = 1; i < pat->len (); i++)
     892            0 :     if (! (
     893              : #if ANNUL_IFFALSE_SLOTS
     894              :            (INSN_ANNULLED_BRANCH_P (jump)
     895              :             && INSN_FROM_TARGET_P (pat->insn (i)))
     896              :            ? eligible_for_annul_false (jump, i - 1, pat->insn (i), flags) :
     897              : #endif
     898              : #if ANNUL_IFTRUE_SLOTS
     899              :            (INSN_ANNULLED_BRANCH_P (jump)
     900              :             && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
     901              :            ? eligible_for_annul_true (jump, i - 1, pat->insn (i), flags) :
     902              : #endif
     903            0 :            eligible_for_delay (jump, i - 1, pat->insn (i), flags)))
     904              :       break;
     905              : 
     906            0 :   return (i == pat->len ());
     907              : }
     908              : 
     909              : /* Return true if redirecting JUMP to NEWLABEL does not invalidate
     910              :    any insns we wish to place in the delay slot of JUMP.  */
     911              : 
     912              : static bool
     913            0 : redirect_with_delay_list_safe_p (rtx_insn *jump, rtx newlabel,
     914              :                                  const vec<rtx_insn *> &delay_list)
     915              : {
     916              :   /* Make sure all the insns in DELAY_LIST would still be
     917              :      valid after threading the jump.  If they are still
     918              :      valid, then return true.  */
     919              : 
     920            0 :   int flags = get_jump_flags (jump, newlabel);
     921            0 :   unsigned int delay_insns = delay_list.length ();
     922            0 :   unsigned int i = 0;
     923            0 :   for (; i < delay_insns; i++)
     924            0 :     if (! (
     925              : #if ANNUL_IFFALSE_SLOTS
     926              :            (INSN_ANNULLED_BRANCH_P (jump)
     927              :             && INSN_FROM_TARGET_P (delay_list[i]))
     928              :            ? eligible_for_annul_false (jump, i, delay_list[i], flags) :
     929              : #endif
     930              : #if ANNUL_IFTRUE_SLOTS
     931              :            (INSN_ANNULLED_BRANCH_P (jump)
     932              :             && ! INSN_FROM_TARGET_P (delay_list[i]))
     933              :            ? eligible_for_annul_true (jump, i, delay_list[i], flags) :
     934              : #endif
     935            0 :            eligible_for_delay (jump, i, delay_list[i], flags)))
     936              :       break;
     937              : 
     938            0 :   return i == delay_insns;
     939              : }
     940              : 
     941              : /* DELAY_LIST is a list of insns that have already been placed into delay
     942              :    slots.  See if all of them have the same annulling status as ANNUL_TRUE_P.
     943              :    If not, return false; otherwise return true.  */
     944              : 
     945              : static bool
     946            0 : check_annul_list_true_false (bool annul_true_p,
     947              :                              const vec<rtx_insn *> &delay_list)
     948              : {
     949            0 :   rtx_insn *trial;
     950            0 :   unsigned int i;
     951            0 :   FOR_EACH_VEC_ELT (delay_list, i, trial)
     952            0 :     if ((annul_true_p && INSN_FROM_TARGET_P (trial))
     953            0 :         || (!annul_true_p && !INSN_FROM_TARGET_P (trial)))
     954              :       return false;
     955              : 
     956              :   return true;
     957              : }
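check_annul_list_true_false enforces that every insn already committed to a delay slot carries the complementary from-target status for the annulling direction under consideration. A sketch with a plain bool per slot standing in for the INSN_FROM_TARGET_P bit:

```cpp
#include <cassert>
#include <vector>

// Mirrors check_annul_list_true_false: when annulling on the true
// condition, no slot insn may come from the branch target; when
// annulling on the false condition, every slot insn must come from it.
bool
annul_list_consistent (bool annul_true_p,
                       const std::vector<bool> &from_target)
{
  for (bool ft : from_target)
    if ((annul_true_p && ft) || (!annul_true_p && !ft))
      return false;
  return true;
}
```

A single slot with the wrong status is enough to reject the whole list, just as the pass refuses to mix annulling statuses within one SEQUENCE.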
     958              : 
     959              : /* INSN branches to an insn whose pattern SEQ is a SEQUENCE.  Given that
     960              :    the condition tested by INSN is CONDITION and the resources shown in
     961              :    OTHER_NEEDED are needed after INSN, see whether INSN can take all the insns
     962              :    from SEQ's delay list, in addition to whatever insns it may execute
      963              :    (in DELAY_LIST).  SETS and NEEDED denote resources already set and
      964              :    needed while searching for delay slot insns.  On success, the stolen
      965              :    insns are appended to DELAY_LIST.
     966              : 
     967              :    SLOTS_TO_FILL is the total number of slots required by INSN, and
     968              :    PSLOTS_FILLED points to the number filled so far (also the number of
     969              :    insns in DELAY_LIST).  It is updated with the number that have been
     970              :    filled from the SEQUENCE, if any.
     971              : 
      972              :    PANNUL_P points to a flag that is true if we already know that we need
     973              :    to annul INSN.  If this routine determines that annulling is needed,
     974              :    it may set that value to true.
     975              : 
     976              :    PNEW_THREAD points to a location that is to receive the place at which
     977              :    execution should continue.  */
     978              : 
     979              : static void
     980            0 : steal_delay_list_from_target (rtx_insn *insn, rtx condition, rtx_sequence *seq,
     981              :                               vec<rtx_insn *> *delay_list,
     982              :                               struct resources *sets,
     983              :                               struct resources *needed,
     984              :                               struct resources *other_needed,
     985              :                               int slots_to_fill, int *pslots_filled,
     986              :                               bool *pannul_p, rtx *pnew_thread)
     987              : {
     988            0 :   int slots_remaining = slots_to_fill - *pslots_filled;
     989            0 :   int total_slots_filled = *pslots_filled;
     990            0 :   auto_vec<rtx_insn *, 5> new_delay_list;
     991            0 :   bool must_annul = *pannul_p;
     992            0 :   bool used_annul = false;
     993            0 :   int i;
     994            0 :   struct resources cc_set;
     995            0 :   rtx_insn **redundant;
     996              : 
     997              :   /* We can't do anything if there are more delay slots in SEQ than we
     998              :      can handle, or if we don't know that it will be a taken branch.
     999              :      We know that it will be a taken branch if it is either an unconditional
    1000              :      branch or a conditional branch with a stricter branch condition.
    1001              : 
    1002              :      Also, exit if the branch has more than one set, since then it is computing
    1003              :      other results that can't be ignored, e.g. the HPPA mov&branch instruction.
    1004              :      ??? It may be possible to move other sets into INSN in addition to
    1005              :      moving the instructions in the delay slots.
    1006              : 
     1007              :      We cannot steal the delay list if one of the instructions in the
     1008              :      current delay_list modifies the condition codes and the jump in the
     1009              :      sequence is a conditional jump.  In that case we cannot change the
     1010              :      direction of the jump, because the condition codes will affect the
     1011              :      direction of the jump in the sequence.  */
    1012              : 
    1013            0 :   CLEAR_RESOURCE (&cc_set);
    1014              : 
    1015              :   rtx_insn *trial;
    1016            0 :   FOR_EACH_VEC_ELT (*delay_list, i, trial)
    1017              :     {
    1018            0 :       mark_set_resources (trial, &cc_set, 0, MARK_SRC_DEST_CALL);
    1019            0 :       if (insn_references_resource_p (seq->insn (0), &cc_set, false))
    1020              :         return;
    1021              :     }
    1022              : 
    1023            0 :   if (XVECLEN (seq, 0) - 1 > slots_remaining
    1024            0 :       || ! condition_dominates_p (condition, seq->insn (0))
    1025            0 :       || ! single_set (seq->insn (0)))
    1026            0 :     return;
    1027              : 
    1028              :   /* On some targets, branches with delay slots can have a limited
    1029              :      displacement.  Give the back end a chance to tell us we can't do
    1030              :      this.  */
    1031            0 :   if (! targetm.can_follow_jump (insn, seq->insn (0)))
    1032              :     return;
    1033              : 
    1034            0 :   redundant = XALLOCAVEC (rtx_insn *, XVECLEN (seq, 0));
    1035            0 :   for (i = 1; i < seq->len (); i++)
    1036              :     {
    1037            0 :       rtx_insn *trial = seq->insn (i);
    1038            0 :       int flags;
    1039              : 
    1040            0 :       if (insn_references_resource_p (trial, sets, false)
    1041            0 :           || insn_sets_resource_p (trial, needed, false)
    1042            0 :           || insn_sets_resource_p (trial, sets, false)
    1043              :           /* If TRIAL is from the fallthrough code of an annulled branch insn
    1044              :              in SEQ, we cannot use it.  */
    1045            0 :           || (INSN_ANNULLED_BRANCH_P (seq->insn (0))
    1046            0 :               && ! INSN_FROM_TARGET_P (trial)))
    1047            0 :         return;
    1048              : 
    1049              :       /* If this insn was already done (usually in a previous delay slot),
    1050              :          pretend we put it in our delay slot.  */
    1051            0 :       redundant[i] = redundant_insn (trial, insn, new_delay_list);
    1052            0 :       if (redundant[i])
    1053            0 :         continue;
    1054              : 
    1055              :       /* We will end up re-vectoring this branch, so compute flags
    1056              :          based on jumping to the new label.  */
    1057            0 :       flags = get_jump_flags (insn, JUMP_LABEL (seq->insn (0)));
    1058              : 
    1059            0 :       if (! must_annul
    1060            0 :           && ((condition == const_true_rtx
    1061            0 :                || (! insn_sets_resource_p (trial, other_needed, false)
    1062            0 :                    && ! may_trap_or_fault_p (PATTERN (trial)))))
    1063            0 :           ? eligible_for_delay (insn, total_slots_filled, trial, flags)
    1064            0 :           : (must_annul || (delay_list->is_empty () && new_delay_list.is_empty ()))
    1065            0 :              && (must_annul = true,
    1066            0 :                  check_annul_list_true_false (false, *delay_list)
    1067            0 :                  && check_annul_list_true_false (false, new_delay_list)
    1068            0 :                  && eligible_for_annul_false (insn, total_slots_filled,
    1069              :                                               trial, flags)))
    1070              :         {
    1071            0 :           if (must_annul)
    1072              :             {
    1073              :               /* Frame-related instructions cannot go into annulled delay
    1074              :                  slots; they mess up the DWARF info.  */
    1075            0 :               if (RTX_FRAME_RELATED_P (trial))
    1076              :                 return;
    1077              :               used_annul = true;
    1078              :             }
    1079            0 :           rtx_insn *temp = copy_delay_slot_insn (trial);
    1080            0 :           INSN_FROM_TARGET_P (temp) = 1;
    1081            0 :           add_to_delay_list (temp, &new_delay_list);
    1082            0 :           total_slots_filled++;
    1083              : 
    1084            0 :           if (--slots_remaining == 0)
    1085              :             break;
    1086              :         }
    1087              :       else
    1088            0 :         return;
    1089              :     }
    1090              : 
    1091              :   /* Record the effect of the instructions that were redundant and which
    1092              :      we therefore decided not to copy.  */
    1093            0 :   for (i = 1; i < seq->len (); i++)
    1094            0 :     if (redundant[i])
    1095              :       {
    1096            0 :         fix_reg_dead_note (redundant[i], insn);
    1097            0 :         update_block (seq->insn (i), insn);
    1098              :       }
    1099              : 
    1100              :   /* Show the place to which we will be branching.  */
    1101            0 :   *pnew_thread = first_active_target_insn (JUMP_LABEL (seq->insn (0)));
    1102              : 
    1103              :   /* Add any new insns to the delay list and update the count of the
    1104              :      number of slots filled.  */
    1105            0 :   *pslots_filled = total_slots_filled;
    1106            0 :   if (used_annul)
    1107            0 :     *pannul_p = true;
    1108              : 
    1109              :   rtx_insn *temp;
    1110            0 :   FOR_EACH_VEC_ELT (new_delay_list, i, temp)
    1111            0 :     add_to_delay_list (temp, delay_list);
    1112            0 : }
    1113              : 
    1114              : /* Similar to steal_delay_list_from_target except that SEQ is on the
    1115              :    fallthrough path of INSN.  Here we only do something if the delay insn
    1116              :    of SEQ is an unconditional branch.  In that case we steal its delay slot
    1117              :    for INSN since unconditional branches are much easier to fill.  */
    1118              : 
    1119              : static void
    1120            0 : steal_delay_list_from_fallthrough (rtx_insn *insn, rtx condition,
    1121              :                                    rtx_sequence *seq,
    1122              :                                    vec<rtx_insn *> *delay_list,
    1123              :                                    struct resources *sets,
    1124              :                                    struct resources *needed,
    1125              :                                    struct resources *other_needed,
    1126              :                                    int slots_to_fill, int *pslots_filled,
    1127              :                                    bool *pannul_p)
    1128              : {
    1129            0 :   int i;
    1130            0 :   int flags;
    1131            0 :   bool must_annul = *pannul_p;
    1132            0 :   bool used_annul = false;
    1133              : 
    1134            0 :   flags = get_jump_flags (insn, JUMP_LABEL (insn));
    1135              : 
    1136              :   /* We can't do anything if SEQ's delay insn isn't an
    1137              :      unconditional branch.  */
    1138              : 
    1139            0 :   if (! simplejump_or_return_p (seq->insn (0)))
    1140              :     return;
    1141              : 
    1142            0 :   for (i = 1; i < seq->len (); i++)
    1143              :     {
    1144            0 :       rtx_insn *trial = seq->insn (i);
    1145            0 :       rtx_insn *prior_insn;
    1146              : 
    1147            0 :       if (insn_references_resource_p (trial, sets, false)
    1148            0 :           || insn_sets_resource_p (trial, needed, false)
    1149            0 :           || insn_sets_resource_p (trial, sets, false))
    1150              :         break;
    1151              : 
    1152              :       /* If this insn was already done, we don't need it.  */
    1153            0 :       if ((prior_insn = redundant_insn (trial, insn, *delay_list)))
    1154              :         {
    1155            0 :           fix_reg_dead_note (prior_insn, insn);
    1156            0 :           update_block (trial, insn);
    1157            0 :           delete_from_delay_slot (trial);
    1158            0 :           continue;
    1159              :         }
    1160              : 
    1161            0 :       if (! must_annul
    1162            0 :           && ((condition == const_true_rtx
    1163            0 :                || (! insn_sets_resource_p (trial, other_needed, false)
    1164            0 :                    && ! may_trap_or_fault_p (PATTERN (trial)))))
    1165            0 :           ? eligible_for_delay (insn, *pslots_filled, trial, flags)
    1166            0 :           : (must_annul || delay_list->is_empty ()) && (must_annul = true,
    1167            0 :              check_annul_list_true_false (true, *delay_list)
    1168            0 :              && eligible_for_annul_true (insn, *pslots_filled, trial, flags)))
    1169              :         {
    1170            0 :           if (must_annul)
    1171            0 :             used_annul = true;
    1172            0 :           delete_from_delay_slot (trial);
    1173            0 :           add_to_delay_list (trial, delay_list);
    1174              : 
    1175            0 :           if (++(*pslots_filled) == slots_to_fill)
    1176              :             break;
    1177              :         }
    1178              :       else
    1179              :         break;
    1180              :     }
    1181              : 
    1182            0 :   if (used_annul)
    1183            0 :     *pannul_p = true;
    1184              : }
    1185              : 
    1186              : /* Try merging insns starting at THREAD which match exactly the insns in
    1187              :    INSN's delay list.
    1188              : 
    1189              :    If all insns were matched and the insn was previously annulling, the
    1190              :    annul bit will be cleared.
    1191              : 
    1192              :    For each insn that is merged, if the branch is or will be non-annulling,
    1193              :    we delete the merged insn.  */
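The merge loop described above can be sketched in a simplified, hypothetical form. `Branch`, `try_merge_sketch`, and the use of plain pattern strings are illustrative stand-ins, not GCC's actual types; the real pass also tracks set/referenced resources and deletes or records each merged insn:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-in: a branch with an annul bit and the patterns
// currently sitting in its delay slots.
struct Branch {
  bool annulling;
  std::vector<std::string> delay_slots;
};

// Walk THREAD, matching each delay-slot insn in order.  If every slot
// matches, an annulling branch no longer needs to annul, so the flag
// is cleared.  Returns the number of slots matched.
static int try_merge_sketch (Branch &br, const std::vector<std::string> &thread)
{
  size_t next = 0;
  for (const std::string &insn : thread)
    {
      if (next == br.delay_slots.size ())
        break;
      // A matching insn fills the next slot; a non-matching insn would
      // update the tracked resource sets in the real pass (omitted here).
      if (insn == br.delay_slots[next])
        next++;
    }
  if (next == br.delay_slots.size ())
    br.annulling = false;      // all slots matched: no need to annul
  return (int) next;
}
```
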
    1194              : 
    1195              : static void
    1196            0 : try_merge_delay_insns (rtx_insn *insn, rtx_insn *thread)
    1197              : {
    1198            0 :   rtx_insn *trial, *next_trial;
    1199            0 :   rtx_insn *delay_insn = as_a <rtx_insn *> (XVECEXP (PATTERN (insn), 0, 0));
    1200            0 :   bool annul_p = JUMP_P (delay_insn) && INSN_ANNULLED_BRANCH_P (delay_insn);
    1201            0 :   int slot_number = 1;
    1202            0 :   int num_slots = XVECLEN (PATTERN (insn), 0);
    1203            0 :   rtx next_to_match = XVECEXP (PATTERN (insn), 0, slot_number);
    1204            0 :   struct resources set, needed, modified;
    1205            0 :   auto_vec<std::pair<rtx_insn *, bool>, 10> merged_insns;
    1206            0 :   int flags;
    1207              : 
    1208            0 :   flags = get_jump_flags (delay_insn, JUMP_LABEL (delay_insn));
    1209              : 
    1210            0 :   CLEAR_RESOURCE (&needed);
    1211            0 :   CLEAR_RESOURCE (&set);
    1212              : 
    1213              :   /* If this is not an annulling branch, take into account anything needed in
    1214              :      INSN's delay slot.  This prevents two increments from being incorrectly
    1215              :      folded into one.  If we are annulling, this would be the correct
    1216              :      thing to do.  (The alternative, looking at things set in NEXT_TO_MATCH,
    1217              :      will essentially disable this optimization.  This method is somewhat of
    1218              :      a kludge, but I don't see a better way.)  */
    1219            0 :   if (! annul_p)
    1220            0 :     for (int i = 1; i < num_slots; i++)
    1221            0 :       if (XVECEXP (PATTERN (insn), 0, i))
    1222            0 :         mark_referenced_resources (XVECEXP (PATTERN (insn), 0, i), &needed,
    1223              :                                    true);
    1224              : 
    1225            0 :   for (trial = thread; !stop_search_p (trial, true); trial = next_trial)
    1226              :     {
    1227            0 :       rtx pat = PATTERN (trial);
    1228            0 :       rtx oldtrial = trial;
    1229              : 
    1230            0 :       next_trial = next_nonnote_insn (trial);
    1231              : 
    1232              :       /* TRIAL must be a CALL_INSN or INSN.  Skip USE and CLOBBER.  */
    1233            0 :       if (NONJUMP_INSN_P (trial)
    1234            0 :           && (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER))
    1235            0 :         continue;
    1236              : 
    1237            0 :       if (GET_CODE (next_to_match) == GET_CODE (trial)
    1238            0 :           && ! insn_references_resource_p (trial, &set, true)
    1239            0 :           && ! insn_sets_resource_p (trial, &set, true)
    1240            0 :           && ! insn_sets_resource_p (trial, &needed, true)
    1241            0 :           && (trial = try_split (pat, trial, 0)) != 0
    1242              :           /* Update next_trial, in case try_split succeeded.  */
    1243            0 :           && (next_trial = next_nonnote_insn (trial))
    1244              :           /* Likewise THREAD.  */
    1245            0 :           && (thread = oldtrial == thread ? trial : thread)
    1246            0 :           && rtx_equal_p (PATTERN (next_to_match), PATTERN (trial))
    1247              :           /* Have to test this condition if annul condition is different
    1248              :              from (and less restrictive than) non-annulling one.  */
    1249            0 :           && eligible_for_delay (delay_insn, slot_number - 1, trial, flags))
    1250              :         {
    1251              : 
    1252            0 :           if (! annul_p)
    1253              :             {
    1254            0 :               update_block (trial, thread);
    1255            0 :               if (trial == thread)
    1256            0 :                 thread = next_active_insn (thread);
    1257              : 
    1258            0 :               delete_related_insns (trial);
    1259            0 :               INSN_FROM_TARGET_P (next_to_match) = 0;
    1260              :             }
    1261              :           else
    1262            0 :             merged_insns.safe_push (std::pair<rtx_insn *, bool> (trial, false));
    1263              : 
    1264            0 :           if (++slot_number == num_slots)
    1265              :             break;
    1266              : 
    1267            0 :           next_to_match = XVECEXP (PATTERN (insn), 0, slot_number);
    1268              :         }
    1269              : 
    1270            0 :       mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
    1271            0 :       mark_referenced_resources (trial, &needed, true);
    1272              :     }
    1273              : 
    1274              :   /* See if we stopped on a filled insn.  If we did, try to see if its
    1275              :      delay slots match.  */
    1276            0 :   if (slot_number != num_slots
    1277            0 :       && trial && NONJUMP_INSN_P (trial)
    1278            0 :       && GET_CODE (PATTERN (trial)) == SEQUENCE
    1279            0 :       && !(JUMP_P (XVECEXP (PATTERN (trial), 0, 0))
    1280            0 :            && INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (trial), 0, 0))))
    1281              :     {
    1282            0 :       rtx_sequence *pat = as_a <rtx_sequence *> (PATTERN (trial));
    1283            0 :       rtx filled_insn = XVECEXP (pat, 0, 0);
    1284              : 
    1285              :       /* Account for resources set/needed by the filled insn.  */
    1286            0 :       mark_set_resources (filled_insn, &set, 0, MARK_SRC_DEST_CALL);
    1287            0 :       mark_referenced_resources (filled_insn, &needed, true);
    1288              : 
    1289            0 :       for (int i = 1; i < pat->len (); i++)
    1290              :         {
    1291            0 :           rtx_insn *dtrial = pat->insn (i);
    1292              : 
    1293            0 :           CLEAR_RESOURCE (&modified);
    1294              :           /* Account for resources set by the insn following NEXT_TO_MATCH
    1295              :              inside INSN's delay list. */
    1296            0 :           for (int j = 1; slot_number + j < num_slots; j++)
    1297            0 :             mark_set_resources (XVECEXP (PATTERN (insn), 0, slot_number + j),
    1298              :                                 &modified, 0, MARK_SRC_DEST_CALL);
    1299              :           /* Account for resources set by the insn before DTRIAL and inside
    1300              :              TRIAL's delay list. */
    1301            0 :           for (int j = 1; j < i; j++)
    1302            0 :             mark_set_resources (XVECEXP (pat, 0, j),
    1303              :                                 &modified, 0, MARK_SRC_DEST_CALL);
    1304            0 :           if (! insn_references_resource_p (dtrial, &set, true)
    1305            0 :               && ! insn_sets_resource_p (dtrial, &set, true)
    1306            0 :               && ! insn_sets_resource_p (dtrial, &needed, true)
    1307            0 :               && rtx_equal_p (PATTERN (next_to_match), PATTERN (dtrial))
    1308              :               /* Check that DTRIAL and NEXT_TO_MATCH do not reference a
    1309              :                  resource modified between them (only DTRIAL is checked because
    1310              :                  NEXT_TO_MATCH and DTRIAL must be equal in order to reach
    1311              :                  this line).  */
    1312            0 :               && ! insn_references_resource_p (dtrial, &modified, true)
    1313            0 :               && eligible_for_delay (delay_insn, slot_number - 1, dtrial, flags))
    1314              :             {
    1315            0 :               if (! annul_p)
    1316              :                 {
    1317            0 :                   rtx_insn *new_rtx;
    1318              : 
    1319            0 :                   update_block (dtrial, thread);
    1320            0 :                   new_rtx = delete_from_delay_slot (dtrial);
    1321            0 :                   if (thread->deleted ())
    1322            0 :                     thread = new_rtx;
    1323            0 :                   INSN_FROM_TARGET_P (next_to_match) = 0;
    1324              :                 }
    1325              :               else
    1326            0 :                 merged_insns.safe_push (std::pair<rtx_insn *, bool> (dtrial,
    1327            0 :                                                                      true));
    1328              : 
    1329            0 :               if (++slot_number == num_slots)
    1330              :                 break;
    1331              : 
    1332            0 :               next_to_match = XVECEXP (PATTERN (insn), 0, slot_number);
    1333              :             }
    1334              :           else
    1335              :             {
    1336              :               /* Keep track of the set/referenced resources for the delay
    1337              :                  slots of any trial insns we encounter.  */
    1338            0 :               mark_set_resources (dtrial, &set, 0, MARK_SRC_DEST_CALL);
    1339            0 :               mark_referenced_resources (dtrial, &needed, true);
    1340              :             }
    1341              :         }
    1342              :     }
    1343              : 
    1344              :   /* If all insns in the delay slot have been matched and we were previously
    1345              :      annulling the branch, we need not do so any more.  In that case delete all the
    1346              :      merged insns.  Also clear the INSN_FROM_TARGET_P bit of each insn in
    1347              :      the delay list so that we know that it isn't only being used at the
    1348              :      target.  */
    1349            0 :   if (slot_number == num_slots && annul_p)
    1350              :     {
    1351            0 :       unsigned int len = merged_insns.length ();
    1352            0 :       for (unsigned int i = len - 1; i < len; i--)
    1353            0 :         if (merged_insns[i].second)
    1354              :           {
    1355            0 :             update_block (merged_insns[i].first, thread);
    1356            0 :             rtx_insn *new_rtx = delete_from_delay_slot (merged_insns[i].first);
    1357            0 :             if (thread->deleted ())
    1358            0 :               thread = new_rtx;
    1359              :           }
    1360              :         else
    1361              :           {
    1362            0 :             update_block (merged_insns[i].first, thread);
    1363            0 :             delete_related_insns (merged_insns[i].first);
    1364              :           }
    1365              : 
    1366            0 :       INSN_ANNULLED_BRANCH_P (delay_insn) = 0;
    1367              : 
    1368            0 :       for (int i = 0; i < XVECLEN (PATTERN (insn), 0); i++)
    1369            0 :         INSN_FROM_TARGET_P (XVECEXP (PATTERN (insn), 0, i)) = 0;
    1370              :     }
    1371            0 : }
    1372              : 
    1373              : /* See if INSN is redundant with an insn in front of TARGET.  Often this
    1374              :    is called when INSN is a candidate for a delay slot of TARGET.
    1375              :    DELAY_LIST contains insns that will be placed in delay slots of TARGET in front
    1376              :    of INSN.  Often INSN will be redundant with an insn in a delay slot of
    1377              :    some previous insn.  This happens when we have a series of branches to the
    1378              :    same label; in that case the first insn at the target might want to go
    1379              :    into each of the delay slots.
    1380              : 
    1381              :    If we are not careful, this routine can take up a significant fraction
    1382              :    of the total compilation time (4%), but only wins rarely.  Hence we
    1383              :    speed this routine up by making two passes.  The first pass goes back
    1384              :    until it hits a label and sees if it finds an insn with an identical
    1385              :    pattern.  Only in this (relatively rare) event does it check for
    1386              :    data conflicts.
    1387              : 
    1388              :    We do not split insns we encounter.  This could cause us not to find a
    1389              :    redundant insn, but the cost of splitting seems greater than the possible
    1390              :    gain in rare cases.  */
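The two-pass scheme described above (a cheap backwards pattern scan first, a full data-conflict check only on a hit) can be sketched with hypothetical stand-ins; `Insn`, `redundant_sketch`, and the register-write model below are illustrative simplifications, not GCC's actual representation:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-in for an insn: a pattern string plus the set of
// registers it writes (a real insn carries far more state).
struct Insn {
  std::string pattern;
  std::vector<int> writes;
};

// Conflict test used by the expensive second pass: do the two insns
// write any register in common?  (Reads are omitted for brevity.)
static bool conflicts (const Insn &a, const Insn &b)
{
  for (int r : a.writes)
    for (int s : b.writes)
      if (r == s)
        return true;
  return false;
}

// Two-pass redundancy check: scan backwards from POS for an insn with
// an identical pattern (cheap), and only when one is found re-scan the
// intervening insns for data conflicts (expensive, but rarely run).
static bool redundant_sketch (const std::vector<Insn> &insns,
                              size_t pos, const Insn &candidate)
{
  // Pass 1: cheap backwards pattern scan.
  size_t match = pos;
  for (size_t i = pos; i-- > 0; )
    if (insns[i].pattern == candidate.pattern)
      {
        match = i;
        break;
      }
  if (match == pos)
    return false;            // no identical pattern found

  // Pass 2: check every insn between the match and POS for conflicts.
  for (size_t i = match + 1; i < pos; i++)
    if (conflicts (insns[i], candidate))
      return false;
  return true;
}
```
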
    1391              : 
    1392              : static rtx_insn *
    1393            0 : redundant_insn (rtx insn, rtx_insn *target, const vec<rtx_insn *> &delay_list)
    1394              : {
    1395            0 :   rtx target_main = target;
    1396            0 :   rtx ipat = PATTERN (insn);
    1397            0 :   rtx_insn *trial;
    1398            0 :   rtx pat;
    1399            0 :   struct resources needed, set;
    1400            0 :   int i;
    1401            0 :   unsigned insns_to_search;
    1402              : 
    1403              :   /* If INSN has any REG_UNUSED notes, it can't match anything since we
    1404              :      are allowed to not actually assign to such a register.  */
    1405            0 :   if (find_reg_note (insn, REG_UNUSED, NULL_RTX) != 0)
    1406              :     return 0;
    1407              : 
    1408              :   /* Scan backwards looking for a match.  */
    1409            0 :   for (trial = PREV_INSN (target),
    1410            0 :          insns_to_search = param_max_delay_slot_insn_search;
    1411            0 :        trial && insns_to_search > 0;
    1412            0 :        trial = PREV_INSN (trial))
    1413              :     {
    1414              :       /* (use (insn))s can come immediately after a barrier if the
    1415              :          label that used to precede them has been deleted as dead.
    1416              :          See delete_related_insns.  */
    1417            0 :       if (LABEL_P (trial) || BARRIER_P (trial))
    1418              :         return 0;
    1419              : 
    1420            0 :       if (!INSN_P (trial))
    1421            0 :         continue;
    1422            0 :       --insns_to_search;
    1423              : 
    1424            0 :       pat = PATTERN (trial);
    1425            0 :       if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
    1426            0 :         continue;
    1427              : 
    1428            0 :       if (GET_CODE (trial) == DEBUG_INSN)
    1429            0 :         continue;
    1430              : 
    1431            0 :       if (rtx_sequence *seq = dyn_cast <rtx_sequence *> (pat))
    1432              :         {
    1433              :           /* Stop for a CALL and its delay slots because it is difficult to
    1434              :              track its resource needs correctly.  */
    1435            0 :           if (CALL_P (seq->element (0)))
    1436              :             return 0;
    1437              : 
    1438              :           /* Stop for an INSN or JUMP_INSN with delayed effects and its delay
    1439              :              slots because it is difficult to track its resource needs
    1440              :              correctly.  */
    1441              : 
    1442            0 :           if (INSN_SETS_ARE_DELAYED (seq->insn (0)))
    1443              :             return 0;
    1444              : 
    1445            0 :           if (INSN_REFERENCES_ARE_DELAYED (seq->insn (0)))
    1446              :             return 0;
    1447              : 
    1448              :           /* See if any of the insns in the delay slot match, updating
    1449              :              resource requirements as we go.  */
    1450            0 :           for (i = seq->len () - 1; i > 0; i--)
    1451            0 :             if (GET_CODE (seq->element (i)) == GET_CODE (insn)
    1452            0 :                 && rtx_equal_p (PATTERN (seq->element (i)), ipat)
    1453            0 :                 && ! find_reg_note (seq->element (i), REG_UNUSED, NULL_RTX))
    1454              :               break;
    1455              : 
    1456              :           /* If found a match, exit this loop early.  */
    1457            0 :           if (i > 0)
    1458              :             break;
    1459              :         }
    1460              : 
    1461            0 :       else if (GET_CODE (trial) == GET_CODE (insn) && rtx_equal_p (pat, ipat)
    1462            0 :                && ! find_reg_note (trial, REG_UNUSED, NULL_RTX))
    1463              :         break;
    1464              :     }
    1465              : 
    1466              :   /* If we didn't find an insn that matches, return 0.  */
    1467            0 :   if (trial == 0)
    1468              :     return 0;
    1469              : 
    1470              :   /* See what resources this insn sets and needs.  If they overlap, it
    1471              :      can't be redundant.  */
    1472              : 
    1473            0 :   CLEAR_RESOURCE (&needed);
    1474            0 :   CLEAR_RESOURCE (&set);
    1475            0 :   mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL);
    1476            0 :   mark_referenced_resources (insn, &needed, true);
    1477              : 
    1478              :   /* If TARGET is a SEQUENCE, get the main insn.  */
    1479            0 :   if (NONJUMP_INSN_P (target) && GET_CODE (PATTERN (target)) == SEQUENCE)
    1480            0 :     target_main = XVECEXP (PATTERN (target), 0, 0);
    1481              : 
    1482            0 :   if (resource_conflicts_p (&needed, &set)
    1483              :       /* The insn requiring the delay may not set anything needed or set by
    1484              :          INSN.  */
    1485            0 :       || insn_sets_resource_p (target_main, &needed, true)
    1486            0 :       || insn_sets_resource_p (target_main, &set, true))
    1487            0 :     return 0;
    1488              : 
    1489              :   /* Insns we pass may not set either NEEDED or SET, so merge them for
    1490              :      simpler tests.  */
    1491            0 :   needed.memory |= set.memory;
    1492            0 :   needed.regs |= set.regs;
    1493              : 
    1494              :   /* This insn isn't redundant if it conflicts with an insn that either is
    1495              :      or will be in a delay slot of TARGET.  */
    1496              : 
    1497              :   unsigned int j;
    1498              :   rtx_insn *temp;
    1499            0 :   FOR_EACH_VEC_ELT (delay_list, j, temp)
    1500            0 :     if (insn_sets_resource_p (temp, &needed, true))
    1501              :       return 0;
    1502              : 
    1503            0 :   if (NONJUMP_INSN_P (target) && GET_CODE (PATTERN (target)) == SEQUENCE)
    1504            0 :     for (i = 1; i < XVECLEN (PATTERN (target), 0); i++)
    1505            0 :       if (insn_sets_resource_p (XVECEXP (PATTERN (target), 0, i), &needed,
    1506              :                                 true))
    1507              :         return 0;
    1508              : 
    1509              :   /* Scan backwards until we reach a label or an insn that uses something
    1510              :      INSN sets or sets something INSN uses or sets.  */
    1511              : 
    1512            0 :   for (trial = PREV_INSN (target),
    1513            0 :          insns_to_search = param_max_delay_slot_insn_search;
    1514            0 :        trial && !LABEL_P (trial) && insns_to_search > 0;
    1515            0 :        trial = PREV_INSN (trial))
    1516              :     {
    1517            0 :       if (!INSN_P (trial))
    1518            0 :         continue;
    1519            0 :       --insns_to_search;
    1520              : 
    1521            0 :       pat = PATTERN (trial);
    1522            0 :       if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
    1523            0 :         continue;
    1524              : 
    1525            0 :       if (GET_CODE (trial) == DEBUG_INSN)
    1526            0 :         continue;
    1527              : 
    1528            0 :       if (rtx_sequence *seq = dyn_cast <rtx_sequence *> (pat))
    1529              :         {
    1530            0 :           bool annul_p = false;
    1531            0 :           rtx_insn *control = seq->insn (0);
    1532              : 
    1533              :           /* If this is a CALL_INSN and its delay slots, it is hard to track
    1534              :              the resource needs properly, so give up.  */
    1535            0 :           if (CALL_P (control))
    1536              :             return 0;
    1537              : 
    1538              :           /* If this is an INSN or JUMP_INSN with delayed effects, it
    1539              :              is hard to track the resource needs properly, so give up.  */
    1540              : 
    1541            0 :           if (INSN_SETS_ARE_DELAYED (control))
    1542              :             return 0;
    1543              : 
    1544            0 :           if (INSN_REFERENCES_ARE_DELAYED (control))
    1545              :             return 0;
    1546              : 
    1547            0 :           if (JUMP_P (control))
    1548            0 :             annul_p = INSN_ANNULLED_BRANCH_P (control);
    1549              : 
    1550              :           /* See if any of the insns in the delay slot match, updating
    1551              :              resource requirements as we go.  */
    1552            0 :           for (i = seq->len () - 1; i > 0; i--)
    1553              :             {
    1554            0 :               rtx_insn *candidate = seq->insn (i);
    1555              : 
    1556              :               /* If an insn will be annulled if the branch is false, it isn't
    1557              :                  considered as a possible duplicate insn.  */
    1558            0 :               if (rtx_equal_p (PATTERN (candidate), ipat)
    1559            0 :                   && ! (annul_p && INSN_FROM_TARGET_P (candidate)))
    1560              :                 {
    1561              :                   /* Show that this insn will be used in the sequel.  */
    1562            0 :                   INSN_FROM_TARGET_P (candidate) = 0;
    1563            0 :                   return candidate;
    1564              :                 }
    1565              : 
    1566              :               /* Unless this is an annulled insn from the target of a branch,
    1567              :                  we must stop if it sets anything needed or set by INSN.  */
    1568            0 :               if ((!annul_p || !INSN_FROM_TARGET_P (candidate))
    1569            0 :                   && insn_sets_resource_p (candidate, &needed, true))
    1570              :                 return 0;
    1571              :             }
    1572              : 
    1573              :           /* If the insn requiring the delay slot conflicts with INSN, we
    1574              :              must stop.  */
    1575            0 :           if (insn_sets_resource_p (control, &needed, true))
    1576              :             return 0;
    1577              :         }
    1578              :       else
    1579              :         {
    1580              :           /* See if TRIAL is the same as INSN.  */
    1581            0 :           pat = PATTERN (trial);
    1582            0 :           if (rtx_equal_p (pat, ipat))
    1583              :             return trial;
    1584              : 
    1585              :           /* Can't go any further if TRIAL conflicts with INSN.  */
    1586            0 :           if (insn_sets_resource_p (trial, &needed, true))
    1587              :             return 0;
    1588              :         }
    1589              :     }
    1590              : 
    1591              :   return 0;
    1592              : }
    1593              : 
    1594              : /* Return true if THREAD can only be executed in one way.  If LABEL is nonzero,
    1595              :    it is the target of the branch insn being scanned.  If ALLOW_FALLTHROUGH
    1596              :    is true, we are allowed to fall into this thread; otherwise, we are not.
    1597              : 
     1598              :    If LABEL is used more than once, or we pass a label other than LABEL before
    1599              :    finding an active insn, we do not own this thread.  */
    1600              : 
    1601              : static bool
    1602            0 : own_thread_p (rtx thread, rtx label, bool allow_fallthrough)
    1603              : {
    1604            0 :   rtx_insn *active_insn;
    1605            0 :   rtx_insn *insn;
    1606              : 
    1607              :   /* We don't own the function end.  */
    1608            0 :   if (thread == 0 || ANY_RETURN_P (thread))
    1609              :     return false;
    1610              : 
    1611              :   /* We have a non-NULL insn.  */
    1612            0 :   rtx_insn *thread_insn = as_a <rtx_insn *> (thread);
    1613              : 
    1614              :   /* Get the first active insn, or THREAD_INSN, if it is an active insn.  */
    1615            0 :   active_insn = next_active_insn (PREV_INSN (thread_insn));
    1616              : 
    1617            0 :   for (insn = thread_insn; insn != active_insn; insn = NEXT_INSN (insn))
    1618            0 :     if (LABEL_P (insn)
    1619            0 :         && (insn != label || LABEL_NUSES (insn) != 1))
    1620              :       return false;
    1621              : 
    1622            0 :   if (allow_fallthrough)
    1623              :     return true;
    1624              : 
    1625              :   /* Ensure that we reach a BARRIER before any insn or label.  */
    1626            0 :   for (insn = prev_nonnote_insn (thread_insn);
    1627            0 :        insn == 0 || !BARRIER_P (insn);
    1628            0 :        insn = prev_nonnote_insn (insn))
    1629            0 :     if (insn == 0
    1630            0 :         || LABEL_P (insn)
    1631            0 :         || (NONJUMP_INSN_P (insn)
    1632            0 :             && GET_CODE (PATTERN (insn)) != USE
    1633            0 :             && GET_CODE (PATTERN (insn)) != CLOBBER))
    1634              :       return false;
    1635              : 
    1636              :   return true;
    1637              : }
    1638              : 
    1639              : /* Called when INSN is being moved from a location near the target of a jump.
    1640              :    We leave a marker of the form (use (INSN)) immediately in front of WHERE
    1641              :    for mark_target_live_regs.  These markers will be deleted at the end.
    1642              : 
    1643              :    We used to try to update the live status of registers if WHERE is at
    1644              :    the start of a basic block, but that can't work since we may remove a
    1645              :    BARRIER in relax_delay_slots.  */
    1646              : 
    1647              : static void
    1648            0 : update_block (rtx_insn *insn, rtx_insn *where)
    1649              : {
    1650            0 :   emit_insn_before (gen_rtx_USE (VOIDmode, insn), where);
    1651              : 
    1652              :   /* INSN might be making a value live in a block where it didn't use to
    1653              :      be.  So recompute liveness information for this block.  */
    1654            0 :   incr_ticks_for_insn (insn);
    1655            0 : }
    1656              : 
    1657              : /* Similar to REDIRECT_JUMP except that we update the BB_TICKS entry for
    1658              :    the basic block containing the jump.  */
    1659              : 
    1660              : static bool
    1661            0 : reorg_redirect_jump (rtx_jump_insn *jump, rtx nlabel)
    1662              : {
    1663            0 :   incr_ticks_for_insn (jump);
    1664            0 :   return redirect_jump (jump, nlabel, 1);
    1665              : }
    1666              : 
    1667              : /* Called when INSN is being moved forward into a delay slot of DELAYED_INSN.
    1668              :    We check every instruction between INSN and DELAYED_INSN for REG_DEAD notes
    1669              :    that reference values used in INSN.  If we find one, then we move the
    1670              :    REG_DEAD note to INSN.
    1671              : 
    1672              :    This is needed to handle the case where a later insn (after INSN) has a
    1673              :    REG_DEAD note for a register used by INSN, and this later insn subsequently
    1674              :    gets moved before a CODE_LABEL because it is a redundant insn.  In this
    1675              :    case, mark_target_live_regs may be confused into thinking the register
    1676              :    is dead because it sees a REG_DEAD note immediately before a CODE_LABEL.  */
    1677              : 
    1678              : static void
    1679            0 : update_reg_dead_notes (rtx_insn *insn, rtx_insn *delayed_insn)
    1680              : {
    1681            0 :   rtx link, next;
    1682            0 :   rtx_insn *p;
    1683              : 
    1684            0 :   for (p = next_nonnote_insn (insn); p != delayed_insn;
    1685            0 :        p = next_nonnote_insn (p))
    1686            0 :     for (link = REG_NOTES (p); link; link = next)
    1687              :       {
    1688            0 :         next = XEXP (link, 1);
    1689              : 
    1690            0 :         if (REG_NOTE_KIND (link) != REG_DEAD
    1691            0 :             || !REG_P (XEXP (link, 0)))
    1692            0 :           continue;
    1693              : 
    1694            0 :         if (reg_referenced_p (XEXP (link, 0), PATTERN (insn)))
    1695              :           {
    1696              :             /* Move the REG_DEAD note from P to INSN.  */
    1697            0 :             remove_note (p, link);
    1698            0 :             XEXP (link, 1) = REG_NOTES (insn);
    1699            0 :             REG_NOTES (insn) = link;
    1700              :           }
    1701              :       }
    1702            0 : }
    1703              : 
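The note migration performed by update_reg_dead_notes can be sketched with plain containers: walk the insns between INSN and DELAYED_INSN, and move every REG_DEAD note whose register INSN references onto INSN's own note list. The types and names below (Note, Insn, move_reg_dead_notes) are illustrative stand-ins for GCC's REG_NOTES chains, not the real API.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Note { int kind; int regno; };   // kind 0 models REG_DEAD

struct Insn {
  std::vector<int> used_regs;           // registers the insn references
  std::vector<Note> notes;
};

static bool references_reg (const Insn &insn, int regno)
{
  return std::find (insn.used_regs.begin (), insn.used_regs.end (), regno)
         != insn.used_regs.end ();
}

// For each insn between INSN and the delayed insn, detach matching REG_DEAD
// notes and reattach them to INSN, mirroring the remove_note / REG_NOTES
// relinking in the source above.
void move_reg_dead_notes (Insn &insn, std::vector<Insn *> &between)
{
  for (Insn *p : between)
    for (size_t i = 0; i < p->notes.size (); )
      {
        Note n = p->notes[i];
        if (n.kind == 0 && references_reg (insn, n.regno))
          {
            insn.notes.push_back (n);             // move the note to INSN
            p->notes.erase (p->notes.begin () + i);
          }
        else
          i++;
      }
}
```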
    1704              : /* Called when an insn redundant with start_insn is deleted.  If there
    1705              :    is a REG_DEAD note for the target of start_insn between start_insn
    1706              :    and stop_insn, then the REG_DEAD note needs to be deleted since the
    1707              :    value no longer dies there.
    1708              : 
    1709              :    If the REG_DEAD note isn't deleted, then mark_target_live_regs may be
    1710              :    confused into thinking the register is dead.  */
    1711              : 
    1712              : static void
    1713            0 : fix_reg_dead_note (rtx_insn *start_insn, rtx stop_insn)
    1714              : {
    1715            0 :   rtx link, next;
    1716            0 :   rtx_insn *p;
    1717              : 
    1718            0 :   for (p = next_nonnote_insn (start_insn); p != stop_insn;
    1719            0 :        p = next_nonnote_insn (p))
    1720            0 :     for (link = REG_NOTES (p); link; link = next)
    1721              :       {
    1722            0 :         next = XEXP (link, 1);
    1723              : 
    1724            0 :         if (REG_NOTE_KIND (link) != REG_DEAD
    1725            0 :             || !REG_P (XEXP (link, 0)))
    1726            0 :           continue;
    1727              : 
    1728            0 :         if (reg_set_p (XEXP (link, 0), PATTERN (start_insn)))
    1729              :           {
    1730            0 :             remove_note (p, link);
    1731            0 :             return;
    1732              :           }
    1733              :       }
    1734              : }
    1735              : 
    1736              : /* Delete any REG_UNUSED notes that exist on INSN but not on OTHER_INSN.
    1737              : 
    1738              :    This handles the case of udivmodXi4 instructions which optimize their
    1739              :    output depending on whether any REG_UNUSED notes are present.  We must
    1740              :    make sure that INSN calculates as many results as OTHER_INSN does.  */
    1741              : 
    1742              : static void
    1743            0 : update_reg_unused_notes (rtx_insn *insn, rtx other_insn)
    1744              : {
    1745            0 :   rtx link, next;
    1746              : 
    1747            0 :   for (link = REG_NOTES (insn); link; link = next)
    1748              :     {
    1749            0 :       next = XEXP (link, 1);
    1750              : 
    1751            0 :       if (REG_NOTE_KIND (link) != REG_UNUSED
    1752            0 :           || !REG_P (XEXP (link, 0)))
    1753            0 :         continue;
    1754              : 
    1755            0 :       if (!find_regno_note (other_insn, REG_UNUSED, REGNO (XEXP (link, 0))))
    1756            0 :         remove_note (insn, link);
    1757              :     }
    1758            0 : }
     1759              : 
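At its core, update_reg_unused_notes keeps a REG_UNUSED note on INSN only when OTHER_INSN carries one for the same register, so INSN must compute at least as many results as OTHER_INSN. A minimal sketch of that filtering, using ordinary containers rather than the real note-chain API (the function name sync_unused_notes is invented):

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

// Drop from `insn_unused` every register number that `other_unused` lacks,
// mirroring the find_regno_note / remove_note loop in the source above.
void sync_unused_notes (std::vector<int> &insn_unused,
                        const std::set<int> &other_unused)
{
  insn_unused.erase (
    std::remove_if (insn_unused.begin (), insn_unused.end (),
                    [&] (int regno) { return !other_unused.count (regno); }),
    insn_unused.end ());
}
```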
    1760              : static vec <rtx> sibling_labels;
    1761              : 
    1762              : /* Return the label before INSN, or put a new label there.  If SIBLING is
    1763              :    non-zero, it is another label associated with the new label (if any),
    1764              :    typically the former target of the jump that will be redirected to
    1765              :    the new label.  */
    1766              : 
    1767              : static rtx_insn *
    1768            0 : get_label_before (rtx_insn *insn, rtx sibling)
    1769              : {
    1770            0 :   rtx_insn *label;
    1771              : 
    1772              :   /* Find an existing label at this point
    1773              :      or make a new one if there is none.  */
    1774            0 :   label = prev_nonnote_insn (insn);
    1775              : 
    1776            0 :   if (label == 0 || !LABEL_P (label))
    1777              :     {
    1778            0 :       rtx_insn *prev = PREV_INSN (insn);
    1779              : 
    1780            0 :       label = gen_label_rtx ();
    1781            0 :       emit_label_after (label, prev);
    1782            0 :       LABEL_NUSES (label) = 0;
    1783            0 :       if (sibling)
    1784              :         {
    1785            0 :           sibling_labels.safe_push (label);
    1786            0 :           sibling_labels.safe_push (sibling);
    1787              :         }
    1788              :     }
    1789            0 :   return label;
    1790              : }
    1791              : 
    1792              : /* Scan a function looking for insns that need a delay slot and find insns to
    1793              :    put into the delay slot.
    1794              : 
    1795              :    NON_JUMPS_P is true if we are to only try to fill non-jump insns (such
    1796              :    as calls).  We do these first since we don't want jump insns (that are
    1797              :    easier to fill) to get the only insns that could be used for non-jump insns.
     1798              :    When it is false, only try to fill JUMP_INSNs.
    1799              : 
    1800              :    When slots are filled in this manner, the insns (including the
    1801              :    delay_insn) are put together in a SEQUENCE rtx.  In this fashion,
    1802              :    it is possible to tell whether a delay slot has really been filled
    1803              :    or not.  `final' knows how to deal with this, by communicating
    1804              :    through FINAL_SEQUENCE.  */
    1805              : 
    1806              : static void
    1807            0 : fill_simple_delay_slots (bool non_jumps_p)
    1808              : {
    1809            0 :   rtx_insn *insn, *trial, *next_trial;
    1810            0 :   rtx pat;
    1811            0 :   int i;
    1812            0 :   int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base;
    1813            0 :   struct resources needed, set;
    1814            0 :   int slots_to_fill, slots_filled;
    1815            0 :   auto_vec<rtx_insn *, 5> delay_list;
    1816              : 
    1817            0 :   for (i = 0; i < num_unfilled_slots; i++)
    1818              :     {
    1819            0 :       int flags;
    1820              :       /* Get the next insn to fill.  If it has already had any slots assigned,
    1821              :          we can't do anything with it.  Maybe we'll improve this later.  */
    1822              : 
    1823            0 :       insn = unfilled_slots_base[i];
    1824            0 :       if (insn == 0
    1825            0 :           || insn->deleted ()
    1826            0 :           || (NONJUMP_INSN_P (insn)
    1827            0 :               && GET_CODE (PATTERN (insn)) == SEQUENCE)
    1828            0 :           || (JUMP_P (insn) && non_jumps_p)
    1829            0 :           || (!JUMP_P (insn) && ! non_jumps_p))
    1830            0 :         continue;
    1831              : 
    1832              :       /* It may have been that this insn used to need delay slots, but
    1833              :          now doesn't; ignore in that case.  This can happen, for example,
    1834              :          on the HP PA RISC, where the number of delay slots depends on
    1835              :          what insns are nearby.  */
    1836            0 :       slots_to_fill = num_delay_slots (insn);
    1837              : 
     1838              :       /* Some machine descriptions have defined instructions to have
    1839              :          delay slots only in certain circumstances which may depend on
    1840              :          nearby insns (which change due to reorg's actions).
    1841              : 
    1842              :          For example, the PA port normally has delay slots for unconditional
    1843              :          jumps.
    1844              : 
    1845              :          However, the PA port claims such jumps do not have a delay slot
    1846              :          if they are immediate successors of certain CALL_INSNs.  This
    1847              :          allows the port to favor filling the delay slot of the call with
    1848              :          the unconditional jump.  */
    1849            0 :       if (slots_to_fill == 0)
    1850            0 :         continue;
    1851              : 
    1852              :       /* This insn needs, or can use, some delay slots.  SLOTS_TO_FILL
    1853              :          says how many.  After initialization, first try optimizing
    1854              : 
    1855              :          call _foo              call _foo
    1856              :          nop                    add %o7,.-L1,%o7
    1857              :          b,a L1
    1858              :          nop
    1859              : 
    1860              :          If this case applies, the delay slot of the call is filled with
    1861              :          the unconditional jump.  This is done first to avoid having the
    1862              :          delay slot of the call filled in the backward scan.  Also, since
    1863              :          the unconditional jump is likely to also have a delay slot, that
    1864              :          insn must exist when it is subsequently scanned.
    1865              : 
    1866              :          This is tried on each insn with delay slots as some machines
    1867              :          have insns which perform calls, but are not represented as
    1868              :          CALL_INSNs.  */
    1869              : 
    1870            0 :       slots_filled = 0;
    1871            0 :       delay_list.truncate (0);
    1872              : 
    1873            0 :       if (JUMP_P (insn))
    1874            0 :         flags = get_jump_flags (insn, JUMP_LABEL (insn));
    1875              :       else
    1876            0 :         flags = get_jump_flags (insn, NULL_RTX);
    1877              : 
    1878            0 :       if ((trial = next_active_insn (insn))
    1879            0 :           && JUMP_P (trial)
    1880            0 :           && simplejump_p (trial)
    1881            0 :           && eligible_for_delay (insn, slots_filled, trial, flags)
    1882            0 :           && no_labels_between_p (insn, trial)
    1883            0 :           && ! can_throw_internal (trial))
    1884              :         {
    1885            0 :           rtx_insn **tmp;
    1886            0 :           slots_filled++;
    1887            0 :           add_to_delay_list (trial, &delay_list);
    1888              : 
    1889              :           /* TRIAL may have had its delay slot filled, then unfilled.  When
    1890              :              the delay slot is unfilled, TRIAL is placed back on the unfilled
    1891              :              slots obstack.  Unfortunately, it is placed on the end of the
    1892              :              obstack, not in its original location.  Therefore, we must search
    1893              :              from entry i + 1 to the end of the unfilled slots obstack to
    1894              :              try and find TRIAL.  */
    1895            0 :           tmp = &unfilled_slots_base[i + 1];
    1896            0 :           while (*tmp != trial && tmp != unfilled_slots_next)
    1897            0 :             tmp++;
    1898              : 
    1899              :           /* Remove the unconditional jump from consideration for delay slot
    1900              :              filling and unthread it.  */
    1901            0 :           if (*tmp == trial)
    1902            0 :             *tmp = 0;
    1903            0 :           {
    1904            0 :             rtx_insn *next = NEXT_INSN (trial);
    1905            0 :             rtx_insn *prev = PREV_INSN (trial);
    1906            0 :             if (prev)
    1907            0 :               SET_NEXT_INSN (prev) = next;
    1908            0 :             if (next)
    1909            0 :               SET_PREV_INSN (next) = prev;
    1910              :           }
    1911              :         }
    1912              : 
    1913              :       /* Now, scan backwards from the insn to search for a potential
    1914              :          delay-slot candidate.  Stop searching when a label or jump is hit.
    1915              : 
    1916              :          For each candidate, if it is to go into the delay slot (moved
    1917              :          forward in execution sequence), it must not need or set any resources
    1918              :          that were set by later insns and must not set any resources that
    1919              :          are needed for those insns.
    1920              : 
    1921              :          The delay slot insn itself sets resources unless it is a call
    1922              :          (in which case the called routine, not the insn itself, is doing
    1923              :          the setting).  */
    1924              : 
    1925            0 :       if (slots_filled < slots_to_fill)
    1926              :         {
    1927              :           /* If the flags register is dead after the insn, then we want to be
    1928              :              able to accept a candidate that clobbers it.  For this purpose,
    1929              :              we need to filter the flags register during life analysis, so
    1930              :              that it doesn't create RAW and WAW dependencies, while still
    1931              :              creating the necessary WAR dependencies.  */
    1932            0 :           bool filter_flags
    1933              :             = (slots_to_fill == 1
    1934            0 :                && targetm.flags_regnum != INVALID_REGNUM
    1935            0 :                && find_regno_note (insn, REG_DEAD, targetm.flags_regnum));
    1936            0 :           struct resources fset;
    1937            0 :           CLEAR_RESOURCE (&needed);
    1938            0 :           CLEAR_RESOURCE (&set);
    1939            0 :           mark_set_resources (insn, &set, 0, MARK_SRC_DEST);
    1940            0 :           if (filter_flags)
    1941              :             {
    1942            0 :               CLEAR_RESOURCE (&fset);
    1943            0 :               mark_set_resources (insn, &fset, 0, MARK_SRC_DEST);
    1944              :             }
    1945            0 :           mark_referenced_resources (insn, &needed, false);
    1946              : 
    1947            0 :           for (trial = prev_nonnote_insn (insn); ! stop_search_p (trial, true);
    1948            0 :                trial = next_trial)
    1949              :             {
    1950            0 :               next_trial = prev_nonnote_insn (trial);
    1951              : 
    1952              :               /* This must be an INSN or CALL_INSN.  */
    1953            0 :               pat = PATTERN (trial);
    1954              : 
    1955              :               /* Stand-alone USE and CLOBBER are just for flow.  */
    1956            0 :               if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
    1957            0 :                 continue;
    1958              : 
    1959              :               /* And DEBUG_INSNs never go into delay slots.  */
    1960            0 :               if (GET_CODE (trial) == DEBUG_INSN)
    1961            0 :                 continue;
    1962              : 
    1963              :               /* Check for resource conflict first, to avoid unnecessary
    1964              :                  splitting.  */
    1965            0 :               if (! insn_references_resource_p (trial, &set, true)
    1966            0 :                   && ! insn_sets_resource_p (trial,
    1967              :                                              filter_flags ? &fset : &set,
    1968              :                                              true)
    1969            0 :                   && ! insn_sets_resource_p (trial, &needed, true)
    1970            0 :                   && ! can_throw_internal (trial))
    1971              :                 {
    1972            0 :                   trial = try_split (pat, trial, 1);
    1973            0 :                   next_trial = prev_nonnote_insn (trial);
    1974            0 :                   if (eligible_for_delay (insn, slots_filled, trial, flags))
    1975              :                     {
    1976              :                       /* In this case, we are searching backward, so if we
    1977              :                          find insns to put on the delay list, we want
    1978              :                          to put them at the head, rather than the
    1979              :                          tail, of the list.  */
    1980              : 
    1981            0 :                       update_reg_dead_notes (trial, insn);
    1982            0 :                       delay_list.safe_insert (0, trial);
    1983            0 :                       update_block (trial, trial);
    1984            0 :                       delete_related_insns (trial);
    1985            0 :                       if (slots_to_fill == ++slots_filled)
    1986              :                         break;
    1987            0 :                       continue;
    1988              :                     }
    1989              :                 }
    1990              : 
    1991            0 :               mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
    1992            0 :               if (filter_flags)
    1993              :                 {
    1994            0 :                   mark_set_resources (trial, &fset, 0, MARK_SRC_DEST_CALL);
    1995              :                   /* If the flags register is set, then it doesn't create RAW
    1996              :                      dependencies any longer and it also doesn't create WAW
    1997              :                      dependencies since it's dead after the original insn.  */
    1998            0 :                   if (TEST_HARD_REG_BIT (fset.regs, targetm.flags_regnum))
    1999              :                     {
    2000            0 :                       CLEAR_HARD_REG_BIT (needed.regs, targetm.flags_regnum);
    2001            0 :                       CLEAR_HARD_REG_BIT (fset.regs, targetm.flags_regnum);
    2002              :                     }
    2003              :                 }
    2004            0 :               mark_referenced_resources (trial, &needed, true);
    2005              :             }
    2006              :         }
    2007              : 
    2008              :       /* If all needed slots haven't been filled, we come here.  */
    2009              : 
    2010              :       /* Try to optimize case of jumping around a single insn.  */
    2011            0 :       if ((ANNUL_IFTRUE_SLOTS || ANNUL_IFFALSE_SLOTS)
     2012              :           && slots_filled != slots_to_fill
    2013              :           && delay_list.is_empty ()
    2014              :           && JUMP_P (insn)
    2015              :           && (condjump_p (insn) || condjump_in_parallel_p (insn))
    2016              :           && !ANY_RETURN_P (JUMP_LABEL (insn)))
    2017              :         {
    2018              :           optimize_skip (as_a <rtx_jump_insn *> (insn), &delay_list);
    2019              :           if (!delay_list.is_empty ())
    2020              :             slots_filled += 1;
    2021              :         }
    2022              : 
    2023              :       /* Try to get insns from beyond the insn needing the delay slot.
     2024              :          These insns can neither set nor reference resources set in insns being
    2025              :          skipped, cannot set resources in the insn being skipped, and, if this
    2026              :          is a CALL_INSN (or a CALL_INSN is passed), cannot trap (because the
    2027              :          call might not return).
    2028              : 
    2029              :          There used to be code which continued past the target label if
    2030              :          we saw all uses of the target label.  This code did not work,
    2031              :          because it failed to account for some instructions which were
    2032              :          both annulled and marked as from the target.  This can happen as a
    2033              :          result of optimize_skip.  Since this code was redundant with
    2034              :          fill_eager_delay_slots anyways, it was just deleted.  */
    2035              : 
    2036            0 :       if (slots_filled != slots_to_fill
    2037              :           /* If this instruction could throw an exception which is
    2038              :              caught in the same function, then it's not safe to fill
    2039              :              the delay slot with an instruction from beyond this
    2040              :              point.  For example, consider:
    2041              : 
    2042              :                int i = 2;
    2043              : 
    2044              :                try {
    2045              :                  f();
    2046              :                  i = 3;
    2047              :                } catch (...) {}
    2048              : 
    2049              :                return i;
    2050              : 
    2051              :              Even though `i' is a local variable, we must be sure not
    2052              :              to put `i = 3' in the delay slot if `f' might throw an
    2053              :              exception.
    2054              : 
    2055              :              Presumably, we should also check to see if we could get
    2056              :              back to this function via `setjmp'.  */
    2057            0 :           && ! can_throw_internal (insn)
    2058            0 :           && !JUMP_P (insn))
    2059              :         {
    2060            0 :           bool maybe_never = false;
    2061            0 :           rtx pat, trial_delay;
    2062              : 
    2063            0 :           CLEAR_RESOURCE (&needed);
    2064            0 :           CLEAR_RESOURCE (&set);
    2065            0 :           mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL);
    2066            0 :           mark_referenced_resources (insn, &needed, true);
    2067              : 
    2068            0 :           if (CALL_P (insn))
    2069            0 :             maybe_never = true;
    2070              : 
    2071            0 :           for (trial = next_nonnote_insn (insn); !stop_search_p (trial, true);
    2072            0 :                trial = next_trial)
    2073              :             {
    2074            0 :               next_trial = next_nonnote_insn (trial);
    2075              : 
    2076              :               /* This must be an INSN or CALL_INSN.  */
    2077            0 :               pat = PATTERN (trial);
    2078              : 
    2079              :               /* Stand-alone USE and CLOBBER are just for flow.  */
    2080            0 :               if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
    2081            0 :                 continue;
    2082              : 
    2083              :               /* And DEBUG_INSNs do not go in delay slots.  */
    2084            0 :               if (GET_CODE (trial) == DEBUG_INSN)
    2085            0 :                 continue;
    2086              : 
    2087              :               /* If this already has filled delay slots, get the insn needing
    2088              :                  the delay slots.  */
    2089            0 :               if (GET_CODE (pat) == SEQUENCE)
    2090            0 :                 trial_delay = XVECEXP (pat, 0, 0);
    2091              :               else
    2092              :                 trial_delay = trial;
    2093              : 
    2094              :               /* Stop our search when seeing a jump.  */
    2095            0 :               if (JUMP_P (trial_delay))
    2096              :                 break;
    2097              : 
    2098              :               /* See if we have a resource problem before we try to split.  */
    2099            0 :               if (GET_CODE (pat) != SEQUENCE
    2100            0 :                   && ! insn_references_resource_p (trial, &set, true)
    2101            0 :                   && ! insn_sets_resource_p (trial, &set, true)
    2102            0 :                   && ! insn_sets_resource_p (trial, &needed, true)
    2103            0 :                   && ! (maybe_never && may_trap_or_fault_p (pat))
    2104            0 :                   && (trial = try_split (pat, trial, 0))
    2105            0 :                   && eligible_for_delay (insn, slots_filled, trial, flags)
    2106            0 :                   && ! can_throw_internal (trial))
    2107              :                 {
    2108            0 :                   next_trial = next_nonnote_insn (trial);
    2109            0 :                   add_to_delay_list (trial, &delay_list);
    2110              : 
    2111            0 :                   delete_related_insns (trial);
    2112            0 :                   if (slots_to_fill == ++slots_filled)
    2113              :                     break;
    2114            0 :                   continue;
    2115              :                 }
    2116              : 
    2117            0 :               mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
    2118            0 :               mark_referenced_resources (trial, &needed, true);
    2119              : 
    2120              :               /* Ensure we don't put insns between the setting of cc and the
    2121              :                  comparison by moving a setting of cc into an earlier delay
    2122              :                  slot since these insns could clobber the condition code.  */
    2123            0 :               set.cc = 1;
    2124              : 
    2125              :               /* If this is a call, we might not get here.  */
    2126            0 :               if (CALL_P (trial_delay))
    2127            0 :                 maybe_never = true;
    2128              :             }
    2129              : 
    2130              :           /* If there are slots left to fill and our search was stopped by an
    2131              :              unconditional branch, try the insn at the branch target.  We can
    2132              :              redirect the branch if it works.
    2133              : 
    2134              :              Don't do this if the insn at the branch target is a branch.  */
    2135            0 :           if (slots_to_fill != slots_filled
    2136            0 :               && trial
    2137            0 :               && jump_to_label_p (trial)
    2138            0 :               && simplejump_p (trial)
    2139            0 :               && (next_trial = next_active_insn (JUMP_LABEL_AS_INSN (trial))) != 0
    2140            0 :               && ! (NONJUMP_INSN_P (next_trial)
    2141            0 :                     && GET_CODE (PATTERN (next_trial)) == SEQUENCE)
    2142            0 :               && !JUMP_P (next_trial)
    2143            0 :               && ! insn_references_resource_p (next_trial, &set, true)
    2144            0 :               && ! insn_sets_resource_p (next_trial, &set, true)
    2145            0 :               && ! insn_sets_resource_p (next_trial, &needed, true)
    2146            0 :               && ! (maybe_never && may_trap_or_fault_p (PATTERN (next_trial)))
    2147            0 :               && (next_trial = try_split (PATTERN (next_trial), next_trial, 0))
    2148            0 :               && eligible_for_delay (insn, slots_filled, next_trial, flags)
    2149            0 :               && ! can_throw_internal (trial))
    2150              :             {
     2151              :               /* See the comment in relax_delay_slots about the necessity
     2152              :                  of using next_real_nondebug_insn here.  */
    2153            0 :               rtx_insn *new_label = next_real_nondebug_insn (next_trial);
    2154              : 
    2155            0 :               if (new_label != 0)
    2156            0 :                 new_label = get_label_before (new_label, JUMP_LABEL (trial));
    2157              :               else
    2158            0 :                 new_label = find_end_label (simple_return_rtx);
    2159              : 
    2160            0 :               if (new_label)
    2161              :                 {
    2162            0 :                   add_to_delay_list (copy_delay_slot_insn (next_trial),
    2163              :                                      &delay_list);
    2164            0 :                   slots_filled++;
    2165            0 :                   reorg_redirect_jump (as_a <rtx_jump_insn *> (trial),
    2166              :                                        new_label);
    2167              :                 }
    2168              :             }
    2169              :         }
    2170              : 
    2171              :       /* If this is an unconditional jump, then try to get insns from the
    2172              :          target of the jump.  */
    2173            0 :       rtx_jump_insn *jump_insn;
    2174            0 :       if ((jump_insn = dyn_cast <rtx_jump_insn *> (insn))
    2175            0 :           && simplejump_p (jump_insn)
    2176            0 :           && slots_filled != slots_to_fill)
    2177            0 :         fill_slots_from_thread (jump_insn, const_true_rtx,
    2178            0 :                                 next_active_insn (JUMP_LABEL_AS_INSN (insn)),
    2179            0 :                                 NULL, 1, 1, own_thread_p (JUMP_LABEL (insn),
    2180              :                                                  JUMP_LABEL (insn), false),
    2181              :                                 slots_to_fill, &slots_filled, &delay_list);
    2182              : 
    2183            0 :       if (!delay_list.is_empty ())
    2184            0 :         unfilled_slots_base[i]
    2185            0 :           = emit_delay_sequence (insn, delay_list, slots_filled);
    2186              : 
    2187            0 :       if (slots_to_fill == slots_filled)
    2188            0 :         unfilled_slots_base[i] = 0;
    2189              : 
    2190            0 :       note_delay_statistics (slots_filled, 0);
    2191              :     }
    2192            0 : }
    2193              : 
    2194              : /* Follow any unconditional jump at LABEL, for the purpose of redirecting JUMP;
    2195              :    return the ultimate label reached by any such chain of jumps.
    2196              :    Return a suitable return rtx if the chain ultimately leads to a
    2197              :    return instruction.
    2198              :    If LABEL is not followed by a jump, return LABEL.
     2199              :    If the chain loops or we can't find the end, return LABEL,
    2200              :    since that tells caller to avoid changing the insn.
    2201              :    If the returned label is obtained by following a crossing jump,
    2202              :    set *CROSSING to true, otherwise set it to false.  */
    2203              : 
    2204              : static rtx
    2205            0 : follow_jumps (rtx label, rtx_insn *jump, bool *crossing)
    2206              : {
    2207            0 :   rtx_insn *insn;
    2208            0 :   rtx_insn *next;
    2209            0 :   int depth;
    2210              : 
    2211            0 :   *crossing = false;
    2212            0 :   if (ANY_RETURN_P (label))
    2213              :     return label;
    2214              : 
    2215            0 :   rtx_insn *value = as_a <rtx_insn *> (label);
    2216              : 
    2217            0 :   for (depth = 0;
    2218              :        (depth < 10
    2219            0 :         && (insn = next_active_insn (value)) != 0
    2220            0 :         && JUMP_P (insn)
    2221            0 :         && JUMP_LABEL (insn) != NULL_RTX
    2222            0 :         && ((any_uncondjump_p (insn) && onlyjump_p (insn))
    2223            0 :             || ANY_RETURN_P (PATTERN (insn)))
    2224            0 :         && (next = NEXT_INSN (insn))
    2225            0 :         && BARRIER_P (next));
    2226              :        depth++)
    2227              :     {
    2228            0 :       rtx this_label_or_return = JUMP_LABEL (insn);
    2229              : 
    2230              :       /* If we have found a cycle, make the insn jump to itself.  */
    2231            0 :       if (this_label_or_return == label)
    2232              :         return label;
    2233              : 
    2234              :       /* Cannot follow returns and cannot look through tablejumps.  */
    2235            0 :       if (ANY_RETURN_P (this_label_or_return))
    2236              :         return this_label_or_return;
    2237              : 
    2238            0 :       rtx_insn *this_label = as_a <rtx_insn *> (this_label_or_return);
    2239            0 :       if (NEXT_INSN (this_label)
    2240            0 :           && JUMP_TABLE_DATA_P (NEXT_INSN (this_label)))
    2241              :         break;
    2242              : 
    2243            0 :       if (!targetm.can_follow_jump (jump, insn))
    2244              :         break;
    2245            0 :       if (!*crossing)
    2246            0 :         *crossing = CROSSING_JUMP_P (jump);
    2247            0 :       value = this_label;
    2248              :     }
    2249            0 :   if (depth == 10)
    2250              :     return label;
    2251              :   return value;
    2252              : }
    2253              : 
    2254              : /* Try to find insns to place in delay slots.
    2255              : 
    2256              :    INSN is the jump needing SLOTS_TO_FILL delay slots.  It tests CONDITION
    2257              :    or is an unconditional branch if CONDITION is const_true_rtx.
    2258              :    *PSLOTS_FILLED is updated with the number of slots that we have filled.
    2259              : 
    2260              :    THREAD is a flow-of-control, either the insns to be executed if the
    2261              :    branch is true or if the branch is false, THREAD_IF_TRUE says which.
    2262              : 
    2263              :    OPPOSITE_THREAD is the thread in the opposite direction.  It is used
    2264              :    to see if any potential delay slot insns set things needed there.
    2265              : 
    2266              :    LIKELY is true if it is extremely likely that the branch will be
    2267              :    taken and THREAD_IF_TRUE is set.  This is used for the branch at the
    2268              :    end of a loop back up to the top.
    2269              : 
    2270              :    OWN_THREAD is true if we are the only user of the thread, i.e. it is
     2271              :    the target of the jump and ours is the only jump going there.
    2272              : 
    2273              :    If OWN_THREAD is false, it must be the "true" thread of a jump.  In that
    2274              :    case, we can only take insns from the head of the thread for our delay
    2275              :    slot.  We then adjust the jump to point after the insns we have taken.  */
    2276              : 
    2277              : static void
    2278            0 : fill_slots_from_thread (rtx_jump_insn *insn, rtx condition,
    2279              :                         rtx thread_or_return, rtx opposite_thread, bool likely,
    2280              :                         bool thread_if_true, bool own_thread, int slots_to_fill,
    2281              :                         int *pslots_filled, vec<rtx_insn *> *delay_list)
    2282              : {
    2283            0 :   rtx new_thread;
    2284            0 :   struct resources opposite_needed, set, needed;
    2285            0 :   rtx_insn *trial;
    2286            0 :   bool lose = false;
    2287            0 :   bool must_annul = false;
    2288            0 :   int flags;
    2289              : 
    2290              :   /* Validate our arguments.  */
    2291            0 :   gcc_assert (condition != const_true_rtx || thread_if_true);
    2292            0 :   gcc_assert (own_thread || thread_if_true);
    2293              : 
    2294            0 :   flags = get_jump_flags (insn, JUMP_LABEL (insn));
    2295              : 
     2296              :   /* If our thread is the end of a subroutine, we can't get any delay
    2297              :      insns from that.  */
    2298            0 :   if (thread_or_return == NULL_RTX || ANY_RETURN_P (thread_or_return))
    2299            0 :     return;
    2300              : 
    2301            0 :   rtx_insn *thread = as_a <rtx_insn *> (thread_or_return);
    2302              : 
    2303              :   /* If this is an unconditional branch, nothing is needed at the
    2304              :      opposite thread.  Otherwise, compute what is needed there.  */
    2305            0 :   if (condition == const_true_rtx)
    2306            0 :     CLEAR_RESOURCE (&opposite_needed);
    2307              :   else
    2308            0 :     mark_target_live_regs (get_insns (), opposite_thread, &opposite_needed);
    2309              : 
    2310              :   /* If the insn at THREAD can be split, do it here to avoid having to
    2311              :      update THREAD and NEW_THREAD if it is done in the loop below.  Also
    2312              :      initialize NEW_THREAD.  */
    2313              : 
    2314            0 :   new_thread = thread = try_split (PATTERN (thread), thread, 0);
    2315              : 
    2316              :   /* Scan insns at THREAD.  We are looking for an insn that can be removed
    2317              :      from THREAD (it neither sets nor references resources that were set
     2318              :      ahead of it and it doesn't set anything needed by the insns ahead of
     2319              :      it) and that either can be placed in an annulling insn or isn't
     2320              :      needed at OPPOSITE_THREAD.  */
    2321              : 
    2322            0 :   CLEAR_RESOURCE (&needed);
    2323            0 :   CLEAR_RESOURCE (&set);
    2324              : 
    2325              :   /* Handle the flags register specially, to be able to accept a
    2326              :      candidate that clobbers it.  See also fill_simple_delay_slots.  */
    2327            0 :   bool filter_flags
    2328              :     = (slots_to_fill == 1
    2329            0 :        && targetm.flags_regnum != INVALID_REGNUM
    2330            0 :        && find_regno_note (insn, REG_DEAD, targetm.flags_regnum));
    2331            0 :   struct resources fset;
    2332            0 :   struct resources flags_res;
    2333            0 :   if (filter_flags)
    2334              :     {
    2335            0 :       CLEAR_RESOURCE (&fset);
    2336            0 :       CLEAR_RESOURCE (&flags_res);
    2337            0 :       SET_HARD_REG_BIT (flags_res.regs, targetm.flags_regnum);
    2338              :     }
    2339              : 
    2340              :   /* If we do not own this thread, we must stop as soon as we find
    2341              :      something that we can't put in a delay slot, since all we can do
    2342              :      is branch into THREAD at a later point.  Therefore, labels stop
    2343              :      the search if this is not the `true' thread.  */
    2344              : 
    2345            0 :   for (trial = thread;
    2346            0 :        ! stop_search_p (trial, ! thread_if_true) && (! lose || own_thread);
    2347            0 :        trial = next_nonnote_insn (trial))
    2348              :     {
    2349            0 :       rtx pat, old_trial;
    2350              : 
    2351              :       /* If we have passed a label, we no longer own this thread.  */
    2352            0 :       if (LABEL_P (trial))
    2353              :         {
    2354            0 :           own_thread = 0;
    2355            0 :           continue;
    2356              :         }
    2357              : 
    2358            0 :       pat = PATTERN (trial);
    2359            0 :       if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
    2360            0 :         continue;
    2361              : 
    2362            0 :       if (GET_CODE (trial) == DEBUG_INSN)
    2363            0 :         continue;
    2364              : 
    2365              :       /* If TRIAL conflicts with the insns ahead of it, we lose.  */
    2366            0 :       if (! insn_references_resource_p (trial, &set, true)
    2367            0 :           && ! insn_sets_resource_p (trial, filter_flags ? &fset : &set, true)
    2368            0 :           && ! insn_sets_resource_p (trial, &needed, true)
    2369              :           /* If we're handling sets to the flags register specially, we
     2370              :              only allow an insn into a delay slot if either:
     2371              :              - it doesn't set the flags register,
     2372              :              - its "set" of the flags register isn't used (it is a clobber), or
     2373              :              - the insns between the delay-slot insn and the trial insn,
     2374              :              as accounted for in "set", have not affected the flags register.  */
    2375            0 :           && (! filter_flags
    2376            0 :               || ! insn_sets_resource_p (trial, &flags_res, true)
    2377            0 :               || find_regno_note (trial, REG_UNUSED, targetm.flags_regnum)
    2378            0 :               || ! TEST_HARD_REG_BIT (set.regs, targetm.flags_regnum))
    2379            0 :           && ! can_throw_internal (trial))
    2380              :         {
    2381            0 :           rtx_insn *prior_insn;
    2382              : 
    2383              :           /* If TRIAL is redundant with some insn before INSN, we don't
    2384              :              actually need to add it to the delay list; we can merely pretend
    2385              :              we did.  */
    2386            0 :           if ((prior_insn = redundant_insn (trial, insn, *delay_list)))
    2387              :             {
    2388            0 :               fix_reg_dead_note (prior_insn, insn);
    2389            0 :               if (own_thread)
    2390              :                 {
    2391            0 :                   update_block (trial, thread);
    2392            0 :                   if (trial == thread)
    2393              :                     {
    2394            0 :                       thread = next_active_insn (thread);
    2395            0 :                       if (new_thread == trial)
    2396            0 :                         new_thread = thread;
    2397              :                     }
    2398              : 
    2399            0 :                   delete_related_insns (trial);
    2400              :                 }
    2401              :               else
    2402              :                 {
    2403            0 :                   update_reg_unused_notes (prior_insn, trial);
    2404            0 :                   new_thread = next_active_insn (trial);
    2405              :                 }
    2406              : 
    2407            0 :               continue;
    2408              :             }
    2409              : 
    2410              :           /* There are two ways we can win:  If TRIAL doesn't set anything
    2411              :              needed at the opposite thread and can't trap, or if it can
    2412              :              go into an annulled delay slot.  But we want neither to copy
    2413              :              nor to speculate frame-related insns.  */
    2414            0 :           if (!must_annul
    2415            0 :               && ((condition == const_true_rtx
    2416            0 :                    && (own_thread || !RTX_FRAME_RELATED_P (trial)))
    2417            0 :                   || (! insn_sets_resource_p (trial, &opposite_needed, true)
    2418            0 :                       && ! may_trap_or_fault_p (pat)
    2419            0 :                       && ! RTX_FRAME_RELATED_P (trial))))
    2420              :             {
    2421            0 :               old_trial = trial;
    2422            0 :               trial = try_split (pat, trial, 0);
    2423            0 :               if (new_thread == old_trial)
    2424            0 :                 new_thread = trial;
    2425            0 :               if (thread == old_trial)
    2426            0 :                 thread = trial;
    2427            0 :               pat = PATTERN (trial);
    2428            0 :               if (eligible_for_delay (insn, *pslots_filled, trial, flags))
    2429            0 :                 goto winner;
    2430              :             }
    2431            0 :           else if (!RTX_FRAME_RELATED_P (trial)
    2432              :                    && ((ANNUL_IFTRUE_SLOTS && ! thread_if_true)
    2433              :                         || (ANNUL_IFFALSE_SLOTS && thread_if_true)))
    2434              :             {
    2435              :               old_trial = trial;
    2436              :               trial = try_split (pat, trial, 0);
    2437              :               if (new_thread == old_trial)
    2438              :                 new_thread = trial;
    2439              :               if (thread == old_trial)
    2440              :                 thread = trial;
    2441              :               pat = PATTERN (trial);
    2442              :               if ((must_annul || delay_list->is_empty ()) && (thread_if_true
    2443              :                    ? check_annul_list_true_false (false, *delay_list)
    2444              :                      && eligible_for_annul_false (insn, *pslots_filled, trial, flags)
    2445              :                    : check_annul_list_true_false (true, *delay_list)
    2446              :                      && eligible_for_annul_true (insn, *pslots_filled, trial, flags)))
    2447              :                 {
    2448              :                   rtx_insn *temp;
    2449              : 
    2450              :                   must_annul = true;
    2451            0 :                 winner:
    2452              : 
    2453              :                   /* If we own this thread, delete the insn.  If this is the
    2454              :                      destination of a branch, show that a basic block status
    2455              :                      may have been updated.  In any case, mark the new
    2456              :                      starting point of this thread.  */
    2457            0 :                   if (own_thread)
    2458              :                     {
    2459            0 :                       rtx note;
    2460              : 
    2461            0 :                       update_block (trial, thread);
    2462            0 :                       if (trial == thread)
    2463              :                         {
    2464            0 :                           thread = next_active_insn (thread);
    2465            0 :                           if (new_thread == trial)
    2466            0 :                             new_thread = thread;
    2467              :                         }
    2468              : 
    2469              :                       /* We are moving this insn, not deleting it.  We must
    2470              :                          temporarily increment the use count on any referenced
    2471              :                          label lest it be deleted by delete_related_insns.  */
    2472            0 :                       for (note = REG_NOTES (trial);
    2473            0 :                            note != NULL_RTX;
    2474            0 :                            note = XEXP (note, 1))
    2475            0 :                         if (REG_NOTE_KIND (note) == REG_LABEL_OPERAND
    2476            0 :                             || REG_NOTE_KIND (note) == REG_LABEL_TARGET)
    2477              :                           {
    2478              :                             /* REG_LABEL_OPERAND could be
    2479              :                                NOTE_INSN_DELETED_LABEL too.  */
    2480            0 :                             if (LABEL_P (XEXP (note, 0)))
    2481            0 :                               LABEL_NUSES (XEXP (note, 0))++;
    2482              :                             else
    2483            0 :                               gcc_assert (REG_NOTE_KIND (note)
    2484              :                                           == REG_LABEL_OPERAND);
    2485              :                           }
    2486            0 :                       if (jump_to_label_p (trial))
    2487            0 :                         LABEL_NUSES (JUMP_LABEL (trial))++;
    2488              : 
    2489            0 :                       delete_related_insns (trial);
    2490              : 
    2491            0 :                       for (note = REG_NOTES (trial);
    2492            0 :                            note != NULL_RTX;
    2493            0 :                            note = XEXP (note, 1))
    2494            0 :                         if (REG_NOTE_KIND (note) == REG_LABEL_OPERAND
    2495            0 :                             || REG_NOTE_KIND (note) == REG_LABEL_TARGET)
    2496              :                           {
    2497              :                             /* REG_LABEL_OPERAND could be
    2498              :                                NOTE_INSN_DELETED_LABEL too.  */
    2499            0 :                             if (LABEL_P (XEXP (note, 0)))
    2500            0 :                               LABEL_NUSES (XEXP (note, 0))--;
    2501              :                             else
    2502            0 :                               gcc_assert (REG_NOTE_KIND (note)
    2503              :                                           == REG_LABEL_OPERAND);
    2504              :                           }
    2505            0 :                       if (jump_to_label_p (trial))
    2506            0 :                         LABEL_NUSES (JUMP_LABEL (trial))--;
    2507              :                     }
    2508              :                   else
    2509            0 :                     new_thread = next_active_insn (trial);
    2510              : 
    2511            0 :                   temp = own_thread ? trial : copy_delay_slot_insn (trial);
    2512            0 :                   if (thread_if_true)
    2513            0 :                     INSN_FROM_TARGET_P (temp) = 1;
    2514              : 
    2515            0 :                   add_to_delay_list (temp, delay_list);
    2516              : 
    2517            0 :                   if (slots_to_fill == ++(*pslots_filled))
    2518              :                     {
    2519              :                       /* Even though we have filled all the slots, we
    2520              :                          may be branching to a location that has a
    2521              :                          redundant insn.  Skip any if so.  */
    2522            0 :                       while (new_thread && ! own_thread
    2523            0 :                              && ! insn_sets_resource_p (new_thread, &set, true)
    2524            0 :                              && ! insn_sets_resource_p (new_thread, &needed,
    2525              :                                                         true)
    2526            0 :                              && ! insn_references_resource_p (new_thread,
    2527              :                                                               &set, true)
    2528            0 :                              && (prior_insn
    2529            0 :                                  = redundant_insn (new_thread, insn,
    2530              :                                                    *delay_list)))
    2531              :                         {
    2532              :                           /* We know we do not own the thread, so no need
    2533              :                              to call update_block and delete_insn.  */
    2534            0 :                           fix_reg_dead_note (prior_insn, insn);
    2535            0 :                           update_reg_unused_notes (prior_insn, new_thread);
    2536            0 :                           new_thread
    2537            0 :                             = next_active_insn (as_a<rtx_insn *> (new_thread));
    2538              :                         }
    2539              :                       break;
    2540              :                     }
    2541              : 
    2542            0 :                   continue;
    2543            0 :                 }
    2544              :             }
    2545              :         }
    2546              : 
    2547              :       /* This insn can't go into a delay slot.  */
    2548            0 :       lose = true;
    2549            0 :       mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
    2550            0 :       mark_referenced_resources (trial, &needed, true);
    2551            0 :       if (filter_flags)
    2552              :         {
    2553            0 :           mark_set_resources (trial, &fset, 0, MARK_SRC_DEST_CALL);
    2554              : 
    2555              :           /* Groups of flags-register setters with users should not
    2556              :              affect opportunities to move flags-register-setting insns
    2557              :              (clobbers) into the delay-slot.  */
    2558            0 :           CLEAR_HARD_REG_BIT (needed.regs, targetm.flags_regnum);
    2559            0 :           CLEAR_HARD_REG_BIT (fset.regs, targetm.flags_regnum);
    2560              :         }
    2561              : 
    2562              :       /* Ensure we don't put insns between the setting of cc and the comparison
    2563              :          by moving a setting of cc into an earlier delay slot since these insns
    2564              :          could clobber the condition code.  */
    2565            0 :       set.cc = 1;
    2566              : 
    2567              :       /* If this insn is a register-register copy and the next insn has
    2568              :          a use of our destination, change it to use our source.  That way,
    2569              :          it will become a candidate for our delay slot the next time
    2570              :          through this loop.  This case occurs commonly in loops that
    2571              :          scan a list.
    2572              : 
    2573              :          We could check for more complex cases than those tested below,
    2574              :          but it doesn't seem worth it.  It might also be a good idea to try
    2575              :          to swap the two insns.  That might do better.
    2576              : 
    2577              :          We can't do this if the next insn modifies our destination, because
    2578              :          that would make the replacement into the insn invalid.  We also can't
    2579              :          do this if it modifies our source, because it might be an earlyclobber
    2580              :          operand.  This latter test also prevents updating the contents of
    2581              :          a PRE_INC.  We also can't do this if there's overlap of source and
    2582              :          destination.  Overlap may happen for larger-than-register-size modes.  */
    2583              : 
    2584            0 :       if (NONJUMP_INSN_P (trial) && GET_CODE (pat) == SET
    2585            0 :           && REG_P (SET_SRC (pat))
    2586            0 :           && REG_P (SET_DEST (pat))
    2587            0 :           && !reg_overlap_mentioned_p (SET_DEST (pat), SET_SRC (pat)))
    2588              :         {
    2589            0 :           rtx_insn *next = next_nonnote_insn (trial);
    2590              : 
    2591            0 :           if (next && NONJUMP_INSN_P (next)
    2592            0 :               && GET_CODE (PATTERN (next)) != USE
    2593            0 :               && ! reg_set_p (SET_DEST (pat), next)
    2594            0 :               && ! reg_set_p (SET_SRC (pat), next)
    2595            0 :               && reg_referenced_p (SET_DEST (pat), PATTERN (next))
    2596            0 :               && ! modified_in_p (SET_DEST (pat), next))
    2597            0 :             validate_replace_rtx (SET_DEST (pat), SET_SRC (pat), next);
    2598              :         }
    2599              :     }
    2600              : 
    2601              :   /* If we stopped on a branch insn that has delay slots, see if we can
    2602              :      steal some of the insns in those slots.  */
    2603            0 :   if (trial && NONJUMP_INSN_P (trial)
    2604            0 :       && GET_CODE (PATTERN (trial)) == SEQUENCE
    2605            0 :       && JUMP_P (XVECEXP (PATTERN (trial), 0, 0)))
    2606              :     {
    2607            0 :       rtx_sequence *sequence = as_a <rtx_sequence *> (PATTERN (trial));
    2608              :       /* If this is the `true' thread, we will want to follow the jump,
    2609              :          so we can only do this if we have taken everything up to here.  */
    2610            0 :       if (thread_if_true && trial == new_thread)
    2611              :         {
    2612            0 :           steal_delay_list_from_target (insn, condition, sequence,
    2613              :                                         delay_list, &set, &needed,
    2614              :                                         &opposite_needed, slots_to_fill,
    2615              :                                         pslots_filled, &must_annul,
    2616              :                                         &new_thread);
    2617              :           /* If we owned the thread and are told that it branched
    2618              :              elsewhere, make sure we own the thread at the new location.  */
    2619            0 :           if (own_thread && trial != new_thread)
    2620            0 :             own_thread = own_thread_p (new_thread, new_thread, false);
    2621              :         }
    2622              :       else if (! thread_if_true)
    2623            0 :         steal_delay_list_from_fallthrough (insn, condition, sequence,
    2624              :                                            delay_list, &set, &needed,
    2625              :                                            &opposite_needed, slots_to_fill,
    2626              :                                            pslots_filled, &must_annul);
    2627              :     }
    2628              : 
    2629              :   /* If we haven't found anything for this delay slot and it is very
    2630              :      likely that the branch will be taken, see if the insn at our target
    2631              :      increments or decrements a register with an increment that does not
    2632              :      depend on the destination register.  If so, try to place the opposite
    2633              :      arithmetic insn after the jump insn and put the arithmetic insn in the
    2634              :      delay slot.  If we can't do this, return.  */
    2635            0 :   if (delay_list->is_empty () && likely
    2636            0 :       && new_thread
    2637            0 :       && !ANY_RETURN_P (new_thread)
    2638            0 :       && NONJUMP_INSN_P (new_thread)
    2639            0 :       && !RTX_FRAME_RELATED_P (new_thread)
    2640            0 :       && GET_CODE (PATTERN (new_thread)) != ASM_INPUT
    2641            0 :       && asm_noperands (PATTERN (new_thread)) < 0)
    2642              :     {
    2643            0 :       rtx dest;
    2644            0 :       rtx src;
    2645              : 
    2646              :       /* We know "new_thread" is an insn due to NONJUMP_INSN_P (new_thread)
    2647              :          above.  */
    2648            0 :       trial = as_a <rtx_insn *> (new_thread);
    2649            0 :       rtx pat = PATTERN (trial);
    2650              : 
    2651            0 :       if (!NONJUMP_INSN_P (trial)
    2652            0 :           || GET_CODE (pat) != SET
    2653            0 :           || ! eligible_for_delay (insn, 0, trial, flags)
    2654            0 :           || can_throw_internal (trial))
    2655            0 :         return;
    2656              : 
    2657            0 :       dest = SET_DEST (pat), src = SET_SRC (pat);
    2658            0 :       if ((GET_CODE (src) == PLUS || GET_CODE (src) == MINUS)
    2659            0 :           && rtx_equal_p (XEXP (src, 0), dest)
    2660            0 :           && (!FLOAT_MODE_P (GET_MODE (src))
    2661            0 :               || flag_unsafe_math_optimizations)
    2662            0 :           && ! reg_overlap_mentioned_p (dest, XEXP (src, 1))
    2663            0 :           && ! side_effects_p (pat))
    2664              :         {
    2665            0 :           rtx other = XEXP (src, 1);
    2666            0 :           rtx new_arith;
    2667            0 :           rtx_insn *ninsn;
    2668              : 
    2669              :           /* If this is a constant adjustment, use the same code with
    2670              :              the negated constant.  Otherwise, reverse the sense of the
    2671              :              arithmetic.  */
    2672            0 :           if (CONST_INT_P (other))
    2673            0 :             new_arith = gen_rtx_fmt_ee (GET_CODE (src), GET_MODE (src), dest,
    2674              :                                         negate_rtx (GET_MODE (src), other));
    2675              :           else
    2676            0 :             new_arith = gen_rtx_fmt_ee (GET_CODE (src) == PLUS ? MINUS : PLUS,
    2677              :                                         GET_MODE (src), dest, other);
    2678              : 
    2679            0 :           ninsn = emit_insn_after (gen_rtx_SET (dest, new_arith), insn);
    2680              : 
    2681            0 :           if (recog_memoized (ninsn) < 0
    2682            0 :               || (extract_insn (ninsn),
    2683            0 :                   !constrain_operands (1, get_preferred_alternatives (ninsn))))
    2684              :             {
    2685            0 :               delete_related_insns (ninsn);
    2686            0 :               return;
    2687              :             }
    2688              : 
    2689            0 :           if (own_thread)
    2690              :             {
    2691            0 :               update_block (trial, thread);
    2692            0 :               if (trial == thread)
    2693              :                 {
    2694            0 :                   thread = next_active_insn (thread);
    2695            0 :                   if (new_thread == trial)
    2696            0 :                     new_thread = thread;
    2697              :                 }
    2698            0 :               delete_related_insns (trial);
    2699              :             }
    2700              :           else
    2701            0 :             new_thread = next_active_insn (trial);
    2702              : 
    2703            0 :           ninsn = own_thread ? trial : copy_delay_slot_insn (trial);
    2704            0 :           if (thread_if_true)
    2705            0 :             INSN_FROM_TARGET_P (ninsn) = 1;
    2706              : 
    2707            0 :           add_to_delay_list (ninsn, delay_list);
    2708            0 :           (*pslots_filled)++;
    2709              :         }
    2710              :     }
    2711              : 
    2712            0 :   if (!delay_list->is_empty () && must_annul)
    2713            0 :     INSN_ANNULLED_BRANCH_P (insn) = 1;
    2714              : 
    2715              :   /* If we are to branch into the middle of this thread, find an appropriate
    2716              :      label or make a new one if none, and redirect INSN to it.  If we hit the
    2717              :      end of the function, use the end-of-function label.  */
    2718            0 :   if (new_thread != thread)
    2719              :     {
    2720            0 :       rtx label;
    2721            0 :       bool crossing = false;
    2722              : 
    2723            0 :       gcc_assert (thread_if_true);
    2724              : 
    2725            0 :       if (new_thread
    2726            0 :           && simplejump_or_return_p (new_thread)
    2727            0 :           && redirect_with_delay_list_safe_p (insn,
    2728              :                                               JUMP_LABEL (new_thread),
    2729              :                                               *delay_list))
    2730            0 :         new_thread = follow_jumps (JUMP_LABEL (new_thread), insn, &crossing);
    2731              : 
    2732            0 :       if (!new_thread)
    2733            0 :         label = find_end_label (simple_return_rtx);
    2734            0 :       else if (ANY_RETURN_P (new_thread))
    2735            0 :         label = find_end_label (new_thread);
    2736            0 :       else if (LABEL_P (new_thread))
    2737              :         label = new_thread;
    2738              :       else
    2739            0 :         label = get_label_before (as_a <rtx_insn *> (new_thread),
    2740              :                                   JUMP_LABEL (insn));
    2741              : 
    2742            0 :       if (label)
    2743              :         {
    2744            0 :           reorg_redirect_jump (insn, label);
    2745            0 :           if (crossing)
    2746            0 :             CROSSING_JUMP_P (insn) = 1;
    2747              :         }
    2748              :     }
    2749              : }
    2750              : 
    2751              : /* Make another attempt to find insns to place in delay slots.
    2752              : 
    2753              :    We previously looked for insns located in front of the delay insn
    2754              :    and, for non-jump delay insns, located behind the delay insn.
    2755              : 
     2756              :    Here we only try to schedule jump insns, moving insns from either the
     2757              :    target or the following insns into the delay slot.  If annulling is
     2758              :    supported, we are likely to succeed; otherwise, we can do this only
     2759              :    if it is safe.  */
    2760              : 
    2761              : static void
    2762            0 : fill_eager_delay_slots (void)
    2763              : {
    2764            0 :   rtx_insn *insn;
    2765            0 :   int i;
    2766            0 :   int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base;
    2767              : 
    2768            0 :   for (i = 0; i < num_unfilled_slots; i++)
    2769              :     {
    2770            0 :       rtx condition;
    2771            0 :       rtx target_label, insn_at_target;
    2772            0 :       rtx_insn *fallthrough_insn;
    2773            0 :       auto_vec<rtx_insn *, 5> delay_list;
    2774            0 :       rtx_jump_insn *jump_insn;
    2775            0 :       bool own_target;
    2776            0 :       bool own_fallthrough;
    2777            0 :       int prediction, slots_to_fill, slots_filled;
    2778              : 
    2779            0 :       insn = unfilled_slots_base[i];
    2780            0 :       if (insn == 0
    2781            0 :           || insn->deleted ()
    2782            0 :           || ! (jump_insn = dyn_cast <rtx_jump_insn *> (insn))
    2783            0 :           || ! (condjump_p (jump_insn) || condjump_in_parallel_p (jump_insn)))
    2784            0 :         continue;
    2785              : 
    2786            0 :       slots_to_fill = num_delay_slots (jump_insn);
     2787              :       /* Some machine descriptions have defined instructions to have
    2788              :          delay slots only in certain circumstances which may depend on
    2789              :          nearby insns (which change due to reorg's actions).
    2790              : 
    2791              :          For example, the PA port normally has delay slots for unconditional
    2792              :          jumps.
    2793              : 
    2794              :          However, the PA port claims such jumps do not have a delay slot
    2795              :          if they are immediate successors of certain CALL_INSNs.  This
    2796              :          allows the port to favor filling the delay slot of the call with
    2797              :          the unconditional jump.  */
    2798            0 :       if (slots_to_fill == 0)
    2799            0 :         continue;
    2800              : 
    2801            0 :       slots_filled = 0;
    2802            0 :       target_label = JUMP_LABEL (jump_insn);
    2803            0 :       condition = get_branch_condition (jump_insn, target_label);
    2804              : 
    2805            0 :       if (condition == 0)
    2806            0 :         continue;
    2807              : 
    2808              :       /* Get the next active fallthrough and target insns and see if we own
    2809              :          them.  Then see whether the branch is likely true.  We don't need
    2810              :          to do a lot of this for unconditional branches.  */
    2811              : 
    2812            0 :       insn_at_target = first_active_target_insn (target_label);
    2813            0 :       own_target = own_thread_p (target_label, target_label, false);
    2814              : 
    2815            0 :       if (condition == const_true_rtx)
    2816              :         {
    2817              :           own_fallthrough = false;
    2818              :           fallthrough_insn = 0;
    2819              :           prediction = 2;
    2820              :         }
    2821              :       else
    2822              :         {
    2823            0 :           fallthrough_insn = next_active_insn (jump_insn);
    2824            0 :           own_fallthrough = own_thread_p (NEXT_INSN (jump_insn),
    2825              :                                           NULL_RTX, true);
    2826            0 :           prediction = mostly_true_jump (jump_insn);
    2827              :         }
    2828              : 
    2829              :       /* If this insn is expected to branch, first try to get insns from our
    2830              :          target, then our fallthrough insns.  If it is not expected to branch,
    2831              :          try the other order.  */
    2832              : 
    2833            0 :       if (prediction > 0)
    2834              :         {
    2835            0 :           fill_slots_from_thread (jump_insn, condition, insn_at_target,
    2836              :                                   fallthrough_insn, prediction == 2, true,
    2837              :                                   own_target, slots_to_fill,
    2838              :                                   &slots_filled, &delay_list);
    2839              : 
    2840            0 :           if (delay_list.is_empty () && own_fallthrough)
    2841              :             {
    2842              :               /* Even though we didn't find anything for delay slots,
    2843              :                  we might have found a redundant insn which we deleted
    2844              :                  from the thread that was filled.  So we have to recompute
    2845              :                  the next insn at the target.  */
    2846            0 :               target_label = JUMP_LABEL (jump_insn);
    2847            0 :               insn_at_target = first_active_target_insn (target_label);
    2848              : 
    2849            0 :               fill_slots_from_thread (jump_insn, condition, fallthrough_insn,
    2850              :                                       insn_at_target, false, false,
    2851              :                                       own_fallthrough, slots_to_fill,
    2852              :                                       &slots_filled, &delay_list);
    2853              :             }
    2854              :         }
    2855              :       else
    2856              :         {
    2857            0 :           if (own_fallthrough)
    2858            0 :             fill_slots_from_thread (jump_insn, condition, fallthrough_insn,
    2859              :                                     insn_at_target, false, false,
    2860              :                                     own_fallthrough, slots_to_fill,
    2861              :                                     &slots_filled, &delay_list);
    2862              : 
    2863            0 :           if (delay_list.is_empty ())
    2864            0 :             fill_slots_from_thread (jump_insn, condition, insn_at_target,
    2865            0 :                                     next_active_insn (insn), false, true,
    2866              :                                     own_target, slots_to_fill,
    2867              :                                     &slots_filled, &delay_list);
    2868              :         }
    2869              : 
    2870            0 :       if (!delay_list.is_empty ())
    2871            0 :         unfilled_slots_base[i]
    2872            0 :           = emit_delay_sequence (jump_insn, delay_list, slots_filled);
    2873              : 
    2874            0 :       if (slots_to_fill == slots_filled)
    2875            0 :         unfilled_slots_base[i] = 0;
    2876              : 
    2877            0 :       note_delay_statistics (slots_filled, 1);
    2878            0 :     }
    2879            0 : }
    2880              : 
    2881              : static void delete_computation (rtx_insn *insn);
    2882              : 
    2883              : /* Recursively delete prior insns that compute the value (used only by INSN
    2884              :    which the caller is deleting) stored in the register mentioned by NOTE
    2885              :    which is a REG_DEAD note associated with INSN.  */
    2886              : 
    2887              : static void
    2888            0 : delete_prior_computation (rtx note, rtx_insn *insn)
    2889              : {
    2890            0 :   rtx_insn *our_prev;
    2891            0 :   rtx reg = XEXP (note, 0);
    2892              : 
    2893            0 :   for (our_prev = prev_nonnote_insn (insn);
    2894            0 :        our_prev && (NONJUMP_INSN_P (our_prev)
    2895            0 :                     || CALL_P (our_prev));
    2896            0 :        our_prev = prev_nonnote_insn (our_prev))
    2897              :     {
    2898            0 :       rtx pat = PATTERN (our_prev);
    2899              : 
    2900              :       /* If we reach a CALL which is not calling a const function
    2901              :          or the callee pops the arguments, then give up.  */
    2902            0 :       if (CALL_P (our_prev)
    2903            0 :           && (! RTL_CONST_CALL_P (our_prev)
    2904            0 :               || GET_CODE (pat) != SET || GET_CODE (SET_SRC (pat)) != CALL))
    2905              :         break;
    2906              : 
    2907              :       /* If we reach a SEQUENCE, it is too complex to try to
    2908              :          do anything with it, so give up.  We can be run during
    2909              :          and after reorg, so SEQUENCE rtl can legitimately show
    2910              :          up here.  */
    2911            0 :       if (GET_CODE (pat) == SEQUENCE)
    2912              :         break;
    2913              : 
    2914            0 :       if (GET_CODE (pat) == USE
    2915            0 :           && NONJUMP_INSN_P (XEXP (pat, 0)))
    2916              :         /* reorg creates USEs that look like this.  We leave them
    2917              :            alone because reorg needs them for its own purposes.  */
    2918              :         break;
    2919              : 
    2920            0 :       if (reg_set_p (reg, pat))
    2921              :         {
    2922            0 :           if (side_effects_p (pat) && !CALL_P (our_prev))
    2923              :             break;
    2924              : 
    2925            0 :           if (GET_CODE (pat) == PARALLEL)
    2926              :             {
    2927              :               /* If we find a SET of something else, we can't
    2928              :                  delete the insn.  */
    2929              : 
    2930              :               int i;
    2931              : 
    2932            0 :               for (i = 0; i < XVECLEN (pat, 0); i++)
    2933              :                 {
    2934            0 :                   rtx part = XVECEXP (pat, 0, i);
    2935              : 
    2936            0 :                   if (GET_CODE (part) == SET
    2937            0 :                       && SET_DEST (part) != reg)
    2938              :                     break;
    2939              :                 }
    2940              : 
    2941            0 :               if (i == XVECLEN (pat, 0))
    2942            0 :                 delete_computation (our_prev);
    2943              :             }
    2944            0 :           else if (GET_CODE (pat) == SET
    2945            0 :                    && REG_P (SET_DEST (pat)))
    2946              :             {
    2947            0 :               int dest_regno = REGNO (SET_DEST (pat));
    2948            0 :               int dest_endregno = END_REGNO (SET_DEST (pat));
    2949            0 :               int regno = REGNO (reg);
    2950            0 :               int endregno = END_REGNO (reg);
    2951              : 
    2952            0 :               if (dest_regno >= regno
    2953            0 :                   && dest_endregno <= endregno)
    2954            0 :                 delete_computation (our_prev);
    2955              : 
    2956              :               /* We may have a multi-word hard register and some, but not
    2957              :                  all, of the words of the register are needed in subsequent
    2958              :                  insns.  Write REG_UNUSED notes for those parts that were not
    2959              :                  needed.  */
    2960            0 :               else if (dest_regno <= regno
    2961            0 :                        && dest_endregno >= endregno)
    2962              :                 {
    2963            0 :                   int i;
    2964              : 
    2965            0 :                   add_reg_note (our_prev, REG_UNUSED, reg);
    2966              : 
    2967            0 :                   for (i = dest_regno; i < dest_endregno; i++)
    2968            0 :                     if (! find_regno_note (our_prev, REG_UNUSED, i))
    2969              :                       break;
    2970              : 
    2971            0 :                   if (i == dest_endregno)
    2972            0 :                     delete_computation (our_prev);
    2973              :                 }
    2974              :             }
    2975              : 
    2976              :           break;
    2977              :         }
    2978              : 
    2979              :       /* If PAT references the register that dies here, it is an
    2980              :          additional use.  Hence any prior SET isn't dead.  However, this
    2981              :          insn becomes the new place for the REG_DEAD note.  */
    2982            0 :       if (reg_overlap_mentioned_p (reg, pat))
    2983              :         {
    2984            0 :           XEXP (note, 1) = REG_NOTES (our_prev);
    2985            0 :           REG_NOTES (our_prev) = note;
    2986            0 :           break;
    2987              :         }
    2988              :     }
    2989            0 : }
    2990              : 
    2991              : /* Delete INSN and recursively delete insns that compute values used only
    2992              :    by INSN.  This uses the REG_DEAD notes computed during flow analysis.
    2993              : 
    2994              :    Look at all our REG_DEAD notes.  If a previous insn does nothing other
    2995              :    than set a register that dies in this insn, we can delete that insn
    2996              :    as well.  */
    2997              : 
    2998              : static void
    2999            0 : delete_computation (rtx_insn *insn)
    3000              : {
    3001            0 :   rtx note, next;
    3002              : 
    3003            0 :   for (note = REG_NOTES (insn); note; note = next)
    3004              :     {
    3005            0 :       next = XEXP (note, 1);
    3006              : 
    3007            0 :       if (REG_NOTE_KIND (note) != REG_DEAD
    3008              :           /* Verify that the REG_NOTE is legitimate.  */
    3009            0 :           || !REG_P (XEXP (note, 0)))
    3010            0 :         continue;
    3011              : 
    3012            0 :       delete_prior_computation (note, insn);
    3013              :     }
    3014              : 
    3015            0 :   delete_related_insns (insn);
    3016            0 : }
    3017              : 
    3018              : /* If all INSN does is set the pc, delete it,
    3019              :    and delete the insn that set the condition codes for it
    3020              :    if that's what the previous thing was.  */
    3021              : 
    3022              : static void
    3023            0 : delete_jump (rtx_insn *insn)
    3024              : {
    3025            0 :   rtx set = single_set (insn);
    3026              : 
    3027            0 :   if (set && GET_CODE (SET_DEST (set)) == PC)
    3028            0 :     delete_computation (insn);
    3029            0 : }
    3030              : 
                       : /* Return the label immediately preceding the next active insn after X,
                       :    or NULL if SCAN_LIMIT or the start of the insn chain is reached
                       :    before a label is found.  */
    3031              : static rtx_insn *
    3032            0 : label_before_next_insn (rtx_insn *x, rtx scan_limit)
    3033              : {
    3034            0 :   rtx_insn *insn = next_active_insn (x);
    3035            0 :   while (insn)
    3036              :     {
    3037            0 :       insn = PREV_INSN (insn);
    3038            0 :       if (insn == scan_limit || insn == NULL_RTX)
    3039              :         return NULL;
    3040            0 :       if (LABEL_P (insn))
    3041              :         break;
    3042              :     }
    3043              :   return insn;
    3044              : }
    3045              : 
    3046              : /* Return TRUE if there is a NOTE_INSN_SWITCH_TEXT_SECTIONS note in between
    3047              :    BEG and END.  */
    3048              : 
    3049              : static bool
    3050            0 : switch_text_sections_between_p (const rtx_insn *beg, const rtx_insn *end)
    3051              : {
    3052            0 :   const rtx_insn *p;
    3053            0 :   for (p = beg; p != end; p = NEXT_INSN (p))
    3054            0 :     if (NOTE_P (p) && NOTE_KIND (p) == NOTE_INSN_SWITCH_TEXT_SECTIONS)
    3055              :       return true;
    3056              :   return false;
    3057              : }
    3058              : 
    3059              : 
    3060              : /* Once we have tried two ways to fill a delay slot, make a pass over the
    3061              :    code to try to improve the results and to do such things as more jump
    3062              :    threading.  */
    3063              : 
    3064              : static void
    3065            0 : relax_delay_slots (rtx_insn *first)
    3066              : {
    3067            0 :   rtx_insn *insn, *next;
    3068            0 :   rtx_sequence *pat;
    3069            0 :   rtx_insn *delay_insn;
    3070            0 :   rtx target_label;
    3071              : 
    3072              :   /* Look at every JUMP_INSN and see if we can improve it.  */
    3073            0 :   for (insn = first; insn; insn = next)
    3074              :     {
    3075            0 :       rtx_insn *other, *prior_insn;
    3076            0 :       bool crossing;
    3077              : 
    3078            0 :       next = next_active_insn (insn);
    3079              : 
    3080              :       /* If this is a jump insn, see if it now jumps to a jump, jumps to
    3081              :          the next insn, or jumps to a label that is not the last of a
    3082              :          group of consecutive labels.  */
    3083            0 :       if (is_a <rtx_jump_insn *> (insn)
    3084            0 :           && (condjump_p (insn) || condjump_in_parallel_p (insn))
    3085            0 :           && !ANY_RETURN_P (target_label = JUMP_LABEL (insn)))
    3086              :         {
    3087            0 :           rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (insn);
    3088            0 :           target_label
    3089            0 :             = skip_consecutive_labels (follow_jumps (target_label, jump_insn,
    3090              :                                                      &crossing));
    3091            0 :           if (ANY_RETURN_P (target_label))
    3092            0 :             target_label = find_end_label (target_label);
    3093              : 
    3094            0 :           if (target_label
    3095            0 :               && next_active_insn (as_a<rtx_insn *> (target_label)) == next
    3096            0 :               && ! condjump_in_parallel_p (jump_insn)
    3097            0 :               && ! (next && switch_text_sections_between_p (jump_insn, next)))
    3098              :             {
    3099            0 :               rtx_insn *direct_label = as_a<rtx_insn *> (JUMP_LABEL (insn));
    3100            0 :               rtx_insn *prev = prev_nonnote_insn (direct_label);
    3101              : 
    3102              :               /* If the insn jumps over a BARRIER and is the only way to reach
    3103              :                  its target, then we need to delete the BARRIER before the jump
    3104              :                  because, otherwise, the target may end up being considered as
    3105              :                  unreachable and thus also deleted.  */
    3106            0 :               if (BARRIER_P (prev) && LABEL_NUSES (direct_label) == 1)
    3107              :                 {
    3108            0 :                   delete_related_insns (prev);
    3109              : 
    3110              :                   /* We have just removed a BARRIER, which means that the block
    3111              :                      number of the next insns has effectively been changed (see
    3112              :                      find_basic_block in resource.cc), so clear it.  */
    3113            0 :                   clear_hashed_info_until_next_barrier (direct_label);
    3114              :                 }
    3115              : 
    3116            0 :               delete_jump (jump_insn);
    3117            0 :               continue;
    3118            0 :             }
    3119              : 
    3120            0 :           if (target_label && target_label != JUMP_LABEL (jump_insn))
    3121              :             {
    3122            0 :               reorg_redirect_jump (jump_insn, target_label);
    3123            0 :               if (crossing)
    3124            0 :                 CROSSING_JUMP_P (jump_insn) = 1;
    3125              :             }
    3126              : 
    3127              :           /* See if this jump conditionally branches around an unconditional
    3128              :              jump.  If so, invert this jump and point it to the target of the
    3129              :              second jump.  Check if it's possible on the target.  */
    3130            0 :           if (next && simplejump_or_return_p (next)
    3131            0 :               && any_condjump_p (jump_insn)
    3132            0 :               && target_label
    3133            0 :               && (next_active_insn (as_a<rtx_insn *> (target_label))
    3134            0 :                   == next_active_insn (next))
    3135            0 :               && no_labels_between_p (jump_insn, next)
    3136            0 :               && targetm.can_follow_jump (jump_insn, next))
    3137              :             {
    3138            0 :               rtx label = JUMP_LABEL (next);
    3139              : 
    3140              :               /* Be careful how we do this to avoid deleting code or
    3141              :                  labels that are momentarily dead.  See similar optimization
    3142              :                  in jump.cc.
    3143              : 
    3144              :                  We also need to ensure we properly handle the case when
    3145              :                  invert_jump fails.  */
    3146              : 
    3147            0 :               ++LABEL_NUSES (target_label);
    3148            0 :               if (!ANY_RETURN_P (label))
    3149            0 :                 ++LABEL_NUSES (label);
    3150              : 
    3151            0 :               if (invert_jump (jump_insn, label, 1))
    3152              :                 {
    3153            0 :                   rtx_insn *from = delete_related_insns (next);
    3154              : 
    3155              :                   /* We have just removed a BARRIER, which means that the block
    3156              :                      number of the next insns has effectively been changed (see
    3157              :                      find_basic_block in resource.cc), so clear it.  */
    3158            0 :                   if (from)
    3159            0 :                     clear_hashed_info_until_next_barrier (from);
    3160              : 
    3161              :                   next = jump_insn;
    3162              :                 }
    3163              : 
    3164            0 :               if (!ANY_RETURN_P (label))
    3165            0 :                 --LABEL_NUSES (label);
    3166              : 
    3167            0 :               if (--LABEL_NUSES (target_label) == 0)
    3168            0 :                 delete_related_insns (target_label);
    3169              : 
    3170            0 :               continue;
    3171            0 :             }
    3172              :         }
    3173              : 
    3174              :       /* If this is an unconditional jump and the previous insn is a
    3175              :          conditional jump, try reversing the condition of the previous
    3176              :          insn and swapping our targets.  The next pass might be able to
    3177              :          fill the slots.
    3178              : 
    3179              :          Don't do this if we expect the conditional branch to be true, because
    3180              :          we would then be making the more common case longer.  */
    3181              : 
    3182            0 :       if (simplejump_or_return_p (insn)
    3183            0 :           && (other = prev_active_insn (insn)) != 0
    3184            0 :           && any_condjump_p (other)
    3185            0 :           && no_labels_between_p (other, insn)
    3186            0 :           && mostly_true_jump (other) < 0)
    3187              :         {
    3188            0 :           rtx other_target = JUMP_LABEL (other);
    3189            0 :           target_label = JUMP_LABEL (insn);
    3190              : 
    3191            0 :           if (invert_jump (as_a <rtx_jump_insn *> (other), target_label, 0))
    3192            0 :             reorg_redirect_jump (as_a <rtx_jump_insn *> (insn), other_target);
    3193              :         }
    3194              : 
    3195              :       /* Now look only at cases where we have a filled delay slot.  */
    3196            0 :       if (!NONJUMP_INSN_P (insn) || GET_CODE (PATTERN (insn)) != SEQUENCE)
    3197            0 :         continue;
    3198              : 
    3199            0 :       pat = as_a <rtx_sequence *> (PATTERN (insn));
    3200            0 :       delay_insn = pat->insn (0);
    3201              : 
    3202              :       /* See if the first insn in the delay slot is redundant with some
    3203              :          previous insn.  Remove it from the delay slot if so; then set up
    3204              :          to reprocess this insn.  */
    3205            0 :       if ((prior_insn = redundant_insn (pat->insn (1), delay_insn, vNULL)))
    3206              :         {
    3207            0 :           fix_reg_dead_note (prior_insn, insn);
    3208            0 :           update_block (pat->insn (1), insn);
    3209            0 :           delete_from_delay_slot (pat->insn (1));
    3210            0 :           next = prev_active_insn (next);
    3211            0 :           continue;
    3212              :         }
    3213              : 
    3214              :       /* See if we have a RETURN insn with a filled delay slot followed
     3215              :          by a RETURN insn with an unfilled delay slot.  If so, we can delete
    3216              :          the first RETURN (but not its delay insn).  This gives the same
    3217              :          effect in fewer instructions.
    3218              : 
    3219              :          Only do so if optimizing for size since this results in slower, but
    3220              :          smaller code.  */
    3221            0 :       if (optimize_function_for_size_p (cfun)
    3222            0 :           && ANY_RETURN_P (PATTERN (delay_insn))
    3223            0 :           && next
    3224            0 :           && JUMP_P (next)
    3225            0 :           && PATTERN (next) == PATTERN (delay_insn))
    3226              :         {
    3227              :           rtx_insn *after;
    3228              :           int i;
    3229              : 
    3230              :           /* Delete the RETURN and just execute the delay list insns.
    3231              : 
    3232              :              We do this by deleting the INSN containing the SEQUENCE, then
    3233              :              re-emitting the insns separately, and then deleting the RETURN.
    3234              :              This allows the count of the jump target to be properly
    3235              :              decremented.
    3236              : 
    3237              :              Note that we need to change the INSN_UID of the re-emitted insns
    3238              :              since it is used to hash the insns for mark_target_live_regs and
    3239              :              the re-emitted insns will no longer be wrapped up in a SEQUENCE.
    3240              : 
    3241              :              Clear the from target bit, since these insns are no longer
    3242              :              in delay slots.  */
    3243            0 :           for (i = 0; i < XVECLEN (pat, 0); i++)
    3244            0 :             INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0;
    3245              : 
    3246            0 :           rtx_insn *prev = PREV_INSN (insn);
    3247            0 :           delete_related_insns (insn);
    3248            0 :           gcc_assert (GET_CODE (pat) == SEQUENCE);
    3249            0 :           add_insn_after (delay_insn, prev, NULL);
    3250            0 :           after = delay_insn;
    3251            0 :           for (i = 1; i < pat->len (); i++)
    3252            0 :             after = emit_copy_of_insn_after (pat->insn (i), after);
    3253            0 :           delete_scheduled_jump (delay_insn);
    3254            0 :           continue;
    3255            0 :         }
    3256              : 
    3257              :       /* Now look only at the cases where we have a filled JUMP_INSN.  */
    3258            0 :       rtx_jump_insn *delay_jump_insn =
    3259            0 :                 dyn_cast <rtx_jump_insn *> (delay_insn);
    3260            0 :       if (! delay_jump_insn || !(condjump_p (delay_jump_insn)
    3261            0 :           || condjump_in_parallel_p (delay_jump_insn)))
    3262            0 :         continue;
    3263              : 
    3264            0 :       target_label = JUMP_LABEL (delay_jump_insn);
    3265            0 :       if (target_label && ANY_RETURN_P (target_label))
    3266            0 :         continue;
    3267              : 
    3268              :       /* If this jump goes to another unconditional jump, thread it, but
    3269              :          don't convert a jump into a RETURN here.  */
    3270            0 :       rtx trial = skip_consecutive_labels (follow_jumps (target_label,
    3271              :                                                          delay_jump_insn,
    3272              :                                                          &crossing));
    3273            0 :       if (ANY_RETURN_P (trial))
    3274            0 :         trial = find_end_label (trial);
    3275              : 
    3276            0 :       if (trial && trial != target_label
    3277            0 :           && redirect_with_delay_slots_safe_p (delay_jump_insn, trial, insn))
    3278              :         {
    3279            0 :           reorg_redirect_jump (delay_jump_insn, trial);
    3280            0 :           target_label = trial;
    3281            0 :           if (crossing)
    3282            0 :             CROSSING_JUMP_P (delay_jump_insn) = 1;
    3283              :         }
    3284              : 
    3285              :       /* If the first insn at TARGET_LABEL is redundant with a previous
    3286              :          insn, redirect the jump to the following insn and process again.
    3287              :          We use next_real_nondebug_insn instead of next_active_insn so we
    3288              :          don't skip USE-markers, or we'll end up with incorrect
    3289              :          liveness info.  */
    3290            0 :       trial = next_real_nondebug_insn (target_label);
    3291            0 :       if (trial && GET_CODE (PATTERN (trial)) != SEQUENCE
    3292            0 :           && redundant_insn (trial, insn, vNULL)
    3293            0 :           && ! can_throw_internal (trial))
    3294              :         {
    3295              :           /* Figure out where to emit the special USE insn so we don't
    3296              :              later incorrectly compute register live/death info.  */
    3297            0 :           rtx_insn *tmp = next_active_insn (as_a<rtx_insn *> (trial));
    3298            0 :           if (tmp == 0)
    3299            0 :             tmp = find_end_label (simple_return_rtx);
    3300              : 
    3301            0 :           if (tmp)
    3302              :             {
    3303              :               /* Insert the special USE insn and update dataflow info.
    3304              :                  We know "trial" is an insn here as it is the output of
    3305              :                  next_real_nondebug_insn () above.  */
    3306            0 :               update_block (as_a <rtx_insn *> (trial), tmp);
    3307              : 
    3308              :               /* Now emit a label before the special USE insn, and
    3309              :                  redirect our jump to the new label.  */
    3310            0 :               target_label = get_label_before (PREV_INSN (tmp), target_label);
    3311            0 :               reorg_redirect_jump (delay_jump_insn, target_label);
    3312            0 :               next = insn;
    3313            0 :               continue;
    3314              :             }
    3315              :         }
    3316              : 
    3317              :       /* Similarly, if it is an unconditional jump with one insn in its
    3318              :          delay list and that insn is redundant, thread the jump.  */
    3319            0 :       rtx_sequence *trial_seq =
    3320            0 :         trial ? dyn_cast <rtx_sequence *> (PATTERN (trial)) : NULL;
    3321            0 :       if (trial_seq
    3322            0 :           && trial_seq->len () == 2
    3323            0 :           && JUMP_P (trial_seq->insn (0))
    3324            0 :           && simplejump_or_return_p (trial_seq->insn (0))
    3325            0 :           && redundant_insn (trial_seq->insn (1), insn, vNULL))
    3326              :         {
    3327            0 :           rtx temp_label = JUMP_LABEL (trial_seq->insn (0));
    3328            0 :           if (ANY_RETURN_P (temp_label))
    3329            0 :             temp_label = find_end_label (temp_label);
    3330              : 
    3331            0 :           if (temp_label
    3332            0 :               && redirect_with_delay_slots_safe_p (delay_jump_insn,
    3333              :                                                    temp_label, insn))
    3334              :             {
    3335            0 :               update_block (trial_seq->insn (1), insn);
    3336            0 :               reorg_redirect_jump (delay_jump_insn, temp_label);
    3337            0 :               next = insn;
    3338            0 :               continue;
    3339              :             }
    3340              :         }
    3341              : 
    3342              :       /* See if we have a simple (conditional) jump that is useless.  */
    3343            0 :       if (!CROSSING_JUMP_P (delay_jump_insn)
    3344            0 :           && !INSN_ANNULLED_BRANCH_P (delay_jump_insn)
    3345            0 :           && !condjump_in_parallel_p (delay_jump_insn)
    3346            0 :           && prev_active_insn (as_a<rtx_insn *> (target_label)) == insn
    3347            0 :           && !BARRIER_P (prev_nonnote_insn (as_a<rtx_insn *> (target_label))))
    3348              :         {
    3349              :           rtx_insn *after;
    3350              :           int i;
    3351              : 
    3352              :           /* All this insn does is execute its delay list and jump to the
    3353              :              following insn.  So delete the jump and just execute the delay
    3354              :              list insns.
    3355              : 
    3356              :              We do this by deleting the INSN containing the SEQUENCE, then
    3357              :              re-emitting the insns separately, and then deleting the jump.
    3358              :              This allows the count of the jump target to be properly
    3359              :              decremented.
    3360              : 
    3361              :              Note that we need to change the INSN_UID of the re-emitted insns
    3362              :              since it is used to hash the insns for mark_target_live_regs and
    3363              :              the re-emitted insns will no longer be wrapped up in a SEQUENCE.
    3364              : 
    3365              :              Clear the from target bit, since these insns are no longer
    3366              :              in delay slots.  */
    3367            0 :           for (i = 0; i < XVECLEN (pat, 0); i++)
    3368            0 :             INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0;
    3369              : 
    3370            0 :           rtx_insn *prev = PREV_INSN (insn);
    3371            0 :           delete_related_insns (insn);
    3372            0 :           gcc_assert (GET_CODE (pat) == SEQUENCE);
    3373            0 :           add_insn_after (delay_jump_insn, prev, NULL);
    3374            0 :           after = delay_jump_insn;
    3375            0 :           for (i = 1; i < pat->len (); i++)
    3376            0 :             after = emit_copy_of_insn_after (pat->insn (i), after);
    3377            0 :           delete_scheduled_jump (delay_jump_insn);
    3378            0 :           continue;
    3379            0 :         }
    3380              : 
    3381              :       /* See if this is an unconditional jump around a single insn which is
    3382              :          identical to the one in its delay slot.  In this case, we can just
    3383              :          delete the branch and the insn in its delay slot.  */
    3384            0 :       if (next && NONJUMP_INSN_P (next)
    3385            0 :           && label_before_next_insn (next, insn) == target_label
    3386            0 :           && simplejump_p (insn)
    3387            0 :           && XVECLEN (pat, 0) == 2
    3388            0 :           && rtx_equal_p (PATTERN (next), PATTERN (pat->insn (1))))
    3389              :         {
    3390            0 :           delete_related_insns (insn);
    3391            0 :           continue;
    3392              :         }
    3393              : 
    3394              :       /* See if this jump (with its delay slots) conditionally branches
    3395              :          around an unconditional jump (without delay slots).  If so, invert
    3396              :          this jump and point it to the target of the second jump.  We cannot
    3397              :          do this for annulled jumps, though.  Again, don't convert a jump to
    3398              :          a RETURN here.  */
    3399            0 :       if (! INSN_ANNULLED_BRANCH_P (delay_jump_insn)
    3400            0 :           && any_condjump_p (delay_jump_insn)
    3401            0 :           && next && simplejump_or_return_p (next)
    3402            0 :           && (next_active_insn (as_a<rtx_insn *> (target_label))
    3403            0 :               == next_active_insn (next))
    3404            0 :           && no_labels_between_p (insn, next)
    3405            0 :           && !switch_text_sections_between_p (insn, next_active_insn (next)))
    3406              :         {
    3407            0 :           rtx label = JUMP_LABEL (next);
    3408            0 :           rtx old_label = JUMP_LABEL (delay_jump_insn);
    3409              : 
    3410            0 :           if (ANY_RETURN_P (label))
    3411            0 :             label = find_end_label (label);
    3412              : 
     3413              :           /* find_end_label can generate a new label.  Check this first.  */
    3414            0 :           if (label
    3415            0 :               && no_labels_between_p (insn, next)
    3416            0 :               && redirect_with_delay_slots_safe_p (delay_jump_insn,
    3417              :                                                    label, insn))
    3418              :             {
    3419              :               /* Be careful how we do this to avoid deleting code or labels
    3420              :                  that are momentarily dead.  See similar optimization in
     3421              :                  jump.cc.  */
    3422            0 :               if (old_label)
    3423            0 :                 ++LABEL_NUSES (old_label);
    3424              : 
    3425            0 :               if (invert_jump (delay_jump_insn, label, 1))
    3426              :                 {
    3427              :                   /* Must update the INSN_FROM_TARGET_P bits now that
    3428              :                      the branch is reversed, so that mark_target_live_regs
    3429              :                      will handle the delay slot insn correctly.  */
    3430            0 :                   for (int i = 1; i < XVECLEN (PATTERN (insn), 0); i++)
    3431              :                     {
    3432            0 :                       rtx slot = XVECEXP (PATTERN (insn), 0, i);
    3433            0 :                       INSN_FROM_TARGET_P (slot) = ! INSN_FROM_TARGET_P (slot);
    3434              :                     }
    3435              : 
    3436              :                   /* We have just removed a BARRIER, which means that the block
    3437              :                      number of the next insns has effectively been changed (see
    3438              :                      find_basic_block in resource.cc), so clear it.  */
    3439            0 :                   rtx_insn *from = delete_related_insns (next);
    3440            0 :                   if (from)
    3441            0 :                     clear_hashed_info_until_next_barrier (from);
    3442              : 
    3443              :                   next = insn;
    3444              :                 }
    3445              : 
    3446            0 :               if (old_label && --LABEL_NUSES (old_label) == 0)
    3447            0 :                 delete_related_insns (old_label);
    3448            0 :               continue;
    3449            0 :             }
    3450              :         }
    3451              : 
    3452              :       /* If we own the thread opposite the way this insn branches, see if we
    3453              :          can merge its delay slots with following insns.  */
    3454            0 :       if (INSN_FROM_TARGET_P (pat->insn (1))
    3455            0 :           && own_thread_p (NEXT_INSN (insn), 0, true))
    3456            0 :         try_merge_delay_insns (insn, next);
    3457            0 :       else if (! INSN_FROM_TARGET_P (pat->insn (1))
    3458            0 :                && own_thread_p (target_label, target_label, false))
    3459            0 :         try_merge_delay_insns (insn,
    3460              :                                next_active_insn (as_a<rtx_insn *> (target_label)));
    3461              : 
    3462              :       /* If we get here, we haven't deleted INSN.  But we may have deleted
    3463              :          NEXT, so recompute it.  */
    3464            0 :       next = next_active_insn (insn);
    3465              :     }
    3466            0 : }
    3467              : 
    3468              : 
    3469              : /* Look for filled jumps to the end of function label.  We can try to convert
    3470              :    them into RETURN insns if the insns in the delay slot are valid for the
    3471              :    RETURN as well.  */
    3472              : 
    3473              : static void
    3474            0 : make_return_insns (rtx_insn *first)
    3475              : {
    3476            0 :   rtx_insn *insn;
    3477            0 :   rtx_jump_insn *jump_insn;
    3478            0 :   rtx real_return_label = function_return_label;
    3479            0 :   rtx real_simple_return_label = function_simple_return_label;
    3480            0 :   int slots, i;
    3481              : 
    3482              :   /* See if there is a RETURN insn in the function other than the one we
    3483              :      made for END_OF_FUNCTION_LABEL.  If so, set up anything we can't change
    3484              :      into a RETURN to jump to it.  */
    3485            0 :   for (insn = first; insn; insn = NEXT_INSN (insn))
    3486            0 :     if (JUMP_P (insn) && ANY_RETURN_P (PATTERN (insn)))
    3487              :       {
    3488            0 :         rtx t = get_label_before (insn, NULL_RTX);
    3489            0 :         if (PATTERN (insn) == ret_rtx)
    3490              :           real_return_label = t;
    3491              :         else
    3492              :           real_simple_return_label = t;
    3493              :         break;
    3494              :       }
    3495              : 
    3496              :   /* Show an extra usage of REAL_RETURN_LABEL so it won't go away if it
    3497              :      was equal to END_OF_FUNCTION_LABEL.  */
    3498            0 :   if (real_return_label)
    3499            0 :     LABEL_NUSES (real_return_label)++;
    3500            0 :   if (real_simple_return_label)
    3501            0 :     LABEL_NUSES (real_simple_return_label)++;
    3502              : 
    3503              :   /* Clear the list of insns to fill so we can use it.  */
    3504            0 :   obstack_free (&unfilled_slots_obstack, unfilled_firstobj);
    3505              : 
    3506            0 :   for (insn = first; insn; insn = NEXT_INSN (insn))
    3507              :     {
    3508            0 :       int flags;
    3509            0 :       rtx kind, real_label;
    3510              : 
    3511              :       /* Only look at filled JUMP_INSNs that go to the end of function
    3512              :          label.  */
    3513            0 :       if (!NONJUMP_INSN_P (insn))
    3514            0 :         continue;
    3515              : 
    3516            0 :       if (GET_CODE (PATTERN (insn)) != SEQUENCE)
    3517            0 :         continue;
    3518              : 
    3519            0 :       rtx_sequence *pat = as_a <rtx_sequence *> (PATTERN (insn));
    3520              : 
    3521            0 :       if (!jump_to_label_p (pat->insn (0)))
    3522            0 :         continue;
    3523              : 
    3524            0 :       if (JUMP_LABEL (pat->insn (0)) == function_return_label)
    3525              :         {
    3526            0 :           kind = ret_rtx;
    3527            0 :           real_label = real_return_label;
    3528              :         }
    3529            0 :       else if (JUMP_LABEL (pat->insn (0)) == function_simple_return_label)
    3530              :         {
    3531            0 :           kind = simple_return_rtx;
    3532            0 :           real_label = real_simple_return_label;
    3533              :         }
    3534              :       else
    3535            0 :         continue;
    3536              : 
    3537            0 :       jump_insn = as_a <rtx_jump_insn *> (pat->insn (0));
    3538              : 
    3539              :       /* If we can't make the jump into a RETURN, try to redirect it to the best
    3540              :          RETURN and go on to the next insn.  */
    3541            0 :       if (!reorg_redirect_jump (jump_insn, kind))
    3542              :         {
    3543              :           /* Make sure redirecting the jump will not invalidate the delay
    3544              :              slot insns.  */
    3545            0 :           if (redirect_with_delay_slots_safe_p (jump_insn, real_label, insn))
    3546            0 :             reorg_redirect_jump (jump_insn, real_label);
    3547            0 :           continue;
    3548              :         }
    3549              : 
     3550              :       /* See if this RETURN can accept the insns currently in its delay
     3551              :          slot.  It can if it has at least as many slots and the contents
     3552              :          of each are valid.  */
    3553              : 
    3554            0 :       flags = get_jump_flags (jump_insn, JUMP_LABEL (jump_insn));
    3555            0 :       slots = num_delay_slots (jump_insn);
    3556            0 :       if (slots >= XVECLEN (pat, 0) - 1)
    3557              :         {
    3558            0 :           for (i = 1; i < XVECLEN (pat, 0); i++)
    3559            0 :             if (! (
    3560              : #if ANNUL_IFFALSE_SLOTS
    3561              :                    (INSN_ANNULLED_BRANCH_P (jump_insn)
    3562              :                     && INSN_FROM_TARGET_P (pat->insn (i)))
    3563              :                    ? eligible_for_annul_false (jump_insn, i - 1,
    3564              :                                                pat->insn (i), flags) :
    3565              : #endif
    3566              : #if ANNUL_IFTRUE_SLOTS
    3567              :                    (INSN_ANNULLED_BRANCH_P (jump_insn)
    3568              :                     && ! INSN_FROM_TARGET_P (pat->insn (i)))
    3569              :                    ? eligible_for_annul_true (jump_insn, i - 1,
    3570              :                                               pat->insn (i), flags) :
    3571              : #endif
    3572            0 :                    eligible_for_delay (jump_insn, i - 1,
    3573              :                                        pat->insn (i), flags)))
    3574              :               break;
    3575              :         }
    3576              :       else
    3577              :         i = 0;
    3578              : 
    3579            0 :       if (i == XVECLEN (pat, 0))
    3580            0 :         continue;
    3581              : 
    3582              :       /* We have to do something with this insn.  If it is an unconditional
    3583              :          RETURN, delete the SEQUENCE and output the individual insns,
    3584              :          followed by the RETURN.  Then set things up so we try to find
    3585              :          insns for its delay slots, if it needs some.  */
    3586            0 :       if (ANY_RETURN_P (PATTERN (jump_insn)))
    3587              :         {
    3588            0 :           rtx_insn *after = PREV_INSN (insn);
    3589              : 
    3590            0 :           delete_related_insns (insn);
    3591            0 :           insn = jump_insn;
    3592            0 :           for (i = 1; i < pat->len (); i++)
    3593            0 :             after = emit_copy_of_insn_after (pat->insn (i), after);
    3594            0 :           add_insn_after (insn, after, NULL);
    3595            0 :           emit_barrier_after (insn);
    3596              : 
    3597            0 :           if (slots)
    3598            0 :             obstack_ptr_grow (&unfilled_slots_obstack, insn);
    3599              :         }
    3600              :       else
    3601              :         /* It is probably more efficient to keep this with its current
    3602              :            delay slot as a branch to a RETURN.  */
    3603            0 :         reorg_redirect_jump (jump_insn, real_label);
    3604              :     }
    3605              : 
    3606              :   /* Now delete REAL_RETURN_LABEL if we never used it.  Then try to fill any
    3607              :      new delay slots we have created.  */
    3608            0 :   if (real_return_label != NULL_RTX && --LABEL_NUSES (real_return_label) == 0)
    3609            0 :     delete_related_insns (real_return_label);
    3610            0 :   if (real_simple_return_label != NULL_RTX
    3611            0 :       && --LABEL_NUSES (real_simple_return_label) == 0)
    3612            0 :     delete_related_insns (real_simple_return_label);
    3613              : 
    3614            0 :   fill_simple_delay_slots (true);
    3615            0 :   fill_simple_delay_slots (false);
    3616            0 : }
    3617              : 
    3618              : /* Try to find insns to place in delay slots.  */
    3619              : 
    3620              : static void
    3621            0 : dbr_schedule (rtx_insn *first)
    3622              : {
    3623            0 :   rtx_insn *insn, *next, *epilogue_insn = 0;
    3624            0 :   bool need_return_insns;
    3625            0 :   int i;
    3626              : 
    3627              :   /* If the current function has no insns other than the prologue and
    3628              :      epilogue, then do not try to fill any delay slots.  */
    3629            0 :   if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
    3630              :     return;
    3631              : 
    3632              :   /* Find the highest INSN_UID and allocate and initialize our map from
    3633              :      INSN_UID's to position in code.  */
    3634            0 :   for (max_uid = 0, insn = first; insn; insn = NEXT_INSN (insn))
    3635              :     {
    3636            0 :       if (INSN_UID (insn) > max_uid)
    3637            0 :         max_uid = INSN_UID (insn);
    3638            0 :       if (NOTE_P (insn)
    3639            0 :           && NOTE_KIND (insn) == NOTE_INSN_EPILOGUE_BEG)
    3640            0 :         epilogue_insn = insn;
    3641              :     }
    3642              : 
    3643            0 :   uid_to_ruid = XNEWVEC (int, max_uid + 1);
    3644            0 :   for (i = 0, insn = first; insn; i++, insn = NEXT_INSN (insn))
    3645            0 :     uid_to_ruid[INSN_UID (insn)] = i;
    3646              : 
    3647              :   /* Initialize the list of insns that need filling.  */
    3648            0 :   if (unfilled_firstobj == 0)
    3649              :     {
    3650            0 :       gcc_obstack_init (&unfilled_slots_obstack);
    3651            0 :       unfilled_firstobj = XOBNEWVAR (&unfilled_slots_obstack, rtx, 0);
    3652              :     }
    3653              : 
    3654            0 :   for (insn = next_active_insn (first); insn; insn = next_active_insn (insn))
    3655              :     {
    3656            0 :       rtx target;
    3657              : 
    3658              :       /* Skip vector tables.  We can't get attributes for them.  */
    3659            0 :       if (JUMP_TABLE_DATA_P (insn))
    3660            0 :         continue;
    3661              : 
    3662            0 :       if (JUMP_P (insn))
    3663            0 :         INSN_ANNULLED_BRANCH_P (insn) = 0;
    3664            0 :       INSN_FROM_TARGET_P (insn) = 0;
    3665              : 
    3666            0 :       if (num_delay_slots (insn) > 0)
    3667            0 :         obstack_ptr_grow (&unfilled_slots_obstack, insn);
    3668              : 
    3669              :       /* Ensure all jumps go to the last of a set of consecutive labels.  */
    3670            0 :       if (JUMP_P (insn)
    3671            0 :           && (condjump_p (insn) || condjump_in_parallel_p (insn))
    3672            0 :           && !ANY_RETURN_P (JUMP_LABEL (insn))
    3673            0 :           && ((target = skip_consecutive_labels (JUMP_LABEL (insn)))
    3674              :               != JUMP_LABEL (insn)))
    3675            0 :         redirect_jump (as_a <rtx_jump_insn *> (insn), target, 1);
    3676              :     }
    3677              : 
    3678            0 :   init_resource_info (epilogue_insn);
    3679              : 
    3680              :   /* Show we haven't computed an end-of-function label yet.  */
    3681            0 :   function_return_label = function_simple_return_label = NULL;
    3682              : 
    3683              :   /* Initialize the statistics for this function.  */
    3684            0 :   memset (num_insns_needing_delays, 0, sizeof num_insns_needing_delays);
    3685            0 :   memset (num_filled_delays, 0, sizeof num_filled_delays);
    3686              : 
    3687              :   /* Now do the delay slot filling.  Try everything twice in case earlier
    3688              :      changes make more slots fillable.  */
    3689              : 
    3690            0 :   for (reorg_pass_number = 0;
    3691            0 :        reorg_pass_number < MAX_REORG_PASSES;
    3692              :        reorg_pass_number++)
    3693              :     {
    3694            0 :       fill_simple_delay_slots (true);
    3695            0 :       fill_simple_delay_slots (false);
    3696            0 :       if (!targetm.no_speculation_in_delay_slots_p ())
    3697            0 :         fill_eager_delay_slots ();
    3698            0 :       relax_delay_slots (first);
    3699              :     }
    3700              : 
    3701              :   /* If we made an end of function label, indicate that it is now
    3702              :      safe to delete it by undoing our prior adjustment to LABEL_NUSES.
    3703              :      If it is now unused, delete it.  */
    3704            0 :   if (function_return_label && --LABEL_NUSES (function_return_label) == 0)
    3705            0 :     delete_related_insns (function_return_label);
    3706            0 :   if (function_simple_return_label
    3707            0 :       && --LABEL_NUSES (function_simple_return_label) == 0)
    3708            0 :     delete_related_insns (function_simple_return_label);
    3709              : 
    3710            0 :   need_return_insns = false;
    3711            0 :   need_return_insns |= targetm.have_return () && function_return_label != 0;
    3712            0 :   need_return_insns |= (targetm.have_simple_return ()
    3713            0 :                         && function_simple_return_label != 0);
    3714            0 :   if (need_return_insns)
    3715            0 :     make_return_insns (first);
    3716              : 
    3717              :   /* Delete any USE insns made by update_block; subsequent passes don't need
    3718              :      them or know how to deal with them.  */
    3719            0 :   for (insn = first; insn; insn = next)
    3720              :     {
    3721            0 :       next = NEXT_INSN (insn);
    3722              : 
    3723            0 :       if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == USE
    3724            0 :           && INSN_P (XEXP (PATTERN (insn), 0)))
    3725            0 :         next = delete_related_insns (insn);
    3726              :     }
    3727              : 
    3728            0 :   obstack_free (&unfilled_slots_obstack, unfilled_firstobj);
    3729              : 
    3730              :   /* It is not clear why the line below is needed, but it does seem to be.  */
    3731            0 :   unfilled_firstobj = XOBNEWVAR (&unfilled_slots_obstack, rtx, 0);
    3732              : 
    3733            0 :   if (dump_file)
    3734              :     {
    3735            0 :       int i, j, need_comma;
    3736            0 :       int total_delay_slots[MAX_DELAY_HISTOGRAM + 1];
    3737            0 :       int total_annul_slots[MAX_DELAY_HISTOGRAM + 1];
    3738              : 
    3739            0 :       for (reorg_pass_number = 0;
    3740            0 :            reorg_pass_number < MAX_REORG_PASSES;
    3741              :            reorg_pass_number++)
    3742              :         {
    3743            0 :           fprintf (dump_file, ";; Reorg pass #%d:\n", reorg_pass_number + 1);
    3744            0 :           for (i = 0; i < NUM_REORG_FUNCTIONS; i++)
    3745              :             {
    3746            0 :               need_comma = 0;
    3747            0 :               fprintf (dump_file, ";; Reorg function #%d\n", i);
    3748              : 
    3749            0 :               fprintf (dump_file, ";; %d insns needing delay slots\n;; ",
    3750              :                        num_insns_needing_delays[i][reorg_pass_number]);
    3751              : 
    3752            0 :               for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++)
    3753            0 :                 if (num_filled_delays[i][j][reorg_pass_number])
    3754              :                   {
    3755            0 :                     if (need_comma)
    3756            0 :                       fprintf (dump_file, ", ");
    3757            0 :                     need_comma = 1;
    3758            0 :                     fprintf (dump_file, "%d got %d delays",
    3759              :                              num_filled_delays[i][j][reorg_pass_number], j);
    3760              :                   }
    3761            0 :               fprintf (dump_file, "\n");
    3762              :             }
    3763              :         }
    3764            0 :       memset (total_delay_slots, 0, sizeof total_delay_slots);
    3765            0 :       memset (total_annul_slots, 0, sizeof total_annul_slots);
    3766            0 :       for (insn = first; insn; insn = NEXT_INSN (insn))
    3767              :         {
    3768            0 :           if (! insn->deleted ()
    3769            0 :               && NONJUMP_INSN_P (insn)
    3770            0 :               && GET_CODE (PATTERN (insn)) != USE
    3771            0 :               && GET_CODE (PATTERN (insn)) != CLOBBER)
    3772              :             {
    3773            0 :               if (GET_CODE (PATTERN (insn)) == SEQUENCE)
    3774              :                 {
    3775            0 :                   rtx control;
    3776            0 :                   j = XVECLEN (PATTERN (insn), 0) - 1;
    3777            0 :                   if (j > MAX_DELAY_HISTOGRAM)
    3778              :                     j = MAX_DELAY_HISTOGRAM;
    3779            0 :                   control = XVECEXP (PATTERN (insn), 0, 0);
    3780            0 :                   if (JUMP_P (control) && INSN_ANNULLED_BRANCH_P (control))
    3781            0 :                     total_annul_slots[j]++;
    3782              :                   else
    3783            0 :                     total_delay_slots[j]++;
    3784              :                 }
    3785            0 :               else if (num_delay_slots (insn) > 0)
    3786            0 :                 total_delay_slots[0]++;
    3787              :             }
    3788              :         }
    3789            0 :       fprintf (dump_file, ";; Reorg totals: ");
    3790            0 :       need_comma = 0;
    3791            0 :       for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++)
    3792              :         {
    3793            0 :           if (total_delay_slots[j])
    3794              :             {
    3795            0 :               if (need_comma)
    3796            0 :                 fprintf (dump_file, ", ");
    3797            0 :               need_comma = 1;
    3798            0 :               fprintf (dump_file, "%d got %d delays", total_delay_slots[j], j);
    3799              :             }
    3800              :         }
    3801            0 :       fprintf (dump_file, "\n");
    3802              : 
    3803            0 :       if (ANNUL_IFTRUE_SLOTS || ANNUL_IFFALSE_SLOTS)
    3804              :         {
    3805              :           fprintf (dump_file, ";; Reorg annuls: ");
    3806              :           need_comma = 0;
    3807              :           for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++)
    3808              :             {
    3809              :               if (total_annul_slots[j])
    3810              :                 {
    3811              :                   if (need_comma)
    3812              :                     fprintf (dump_file, ", ");
    3813              :                   need_comma = 1;
    3814              :                   fprintf (dump_file, "%d got %d delays", total_annul_slots[j], j);
    3815              :                 }
    3816              :             }
    3817              :           fprintf (dump_file, "\n");
    3818              :         }
    3819              : 
    3820            0 :       fprintf (dump_file, "\n");
    3821              :     }
    3822              : 
    3823            0 :   if (!sibling_labels.is_empty ())
    3824              :     {
    3825            0 :       update_alignments (sibling_labels);
    3826            0 :       sibling_labels.release ();
    3827              :     }
    3828              : 
    3829            0 :   free_resource_info ();
    3830            0 :   free (uid_to_ruid);
    3831            0 :   crtl->dbr_scheduled_p = true;
    3832              : }
    3833              : 
    3834              : /* Run delay slot optimization.  */
    3835              : static void
    3836            0 : rest_of_handle_delay_slots (void)
    3837              : {
    3838            0 :   if (DELAY_SLOTS)
    3839              :     dbr_schedule (get_insns ());
    3840            0 : }
    3841              : 
    3842              : namespace {
    3843              : 
    3844              : const pass_data pass_data_delay_slots =
    3845              : {
    3846              :   RTL_PASS, /* type */
    3847              :   "dbr", /* name */
    3848              :   OPTGROUP_NONE, /* optinfo_flags */
    3849              :   TV_DBR_SCHED, /* tv_id */
    3850              :   0, /* properties_required */
    3851              :   0, /* properties_provided */
    3852              :   0, /* properties_destroyed */
    3853              :   0, /* todo_flags_start */
    3854              :   0, /* todo_flags_finish */
    3855              : };
    3856              : 
    3857              : class pass_delay_slots : public rtl_opt_pass
    3858              : {
    3859              : public:
    3860       285722 :   pass_delay_slots (gcc::context *ctxt)
    3861       571444 :     : rtl_opt_pass (pass_data_delay_slots, ctxt)
    3862              :   {}
    3863              : 
    3864              :   /* opt_pass methods: */
    3865              :   bool gate (function *) final override;
    3866            0 :   unsigned int execute (function *) final override
    3867              :     {
    3868            0 :       rest_of_handle_delay_slots ();
    3869            0 :       return 0;
    3870              :     }
    3871              : 
    3872              : }; // class pass_delay_slots
    3873              : 
    3874              : bool
    3875      1471370 : pass_delay_slots::gate (function *)
    3876              : {
    3877              :   /* At -O0 dataflow info isn't updated after RA.  */
    3878      1471370 :   if (DELAY_SLOTS)
    3879              :     return optimize > 0 && flag_delayed_branch && !crtl->dbr_scheduled_p;
    3880              : 
    3881      1471370 :   return false;
    3882              : }
    3883              : 
    3884              : } // anon namespace
    3885              : 
    3886              : rtl_opt_pass *
    3887       285722 : make_pass_delay_slots (gcc::context *ctxt)
    3888              : {
    3889       285722 :   return new pass_delay_slots (ctxt);
    3890              : }
    3891              : 
    3892              : /* Machine dependent reorg pass.  */
    3893              : 
    3894              : namespace {
    3895              : 
    3896              : const pass_data pass_data_machine_reorg =
    3897              : {
    3898              :   RTL_PASS, /* type */
    3899              :   "mach", /* name */
    3900              :   OPTGROUP_NONE, /* optinfo_flags */
    3901              :   TV_MACH_DEP, /* tv_id */
    3902              :   0, /* properties_required */
    3903              :   0, /* properties_provided */
    3904              :   0, /* properties_destroyed */
    3905              :   0, /* todo_flags_start */
    3906              :   0, /* todo_flags_finish */
    3907              : };
    3908              : 
    3909              : class pass_machine_reorg : public rtl_opt_pass
    3910              : {
    3911              : public:
    3912       285722 :   pass_machine_reorg (gcc::context *ctxt)
    3913       571444 :     : rtl_opt_pass (pass_data_machine_reorg, ctxt)
    3914              :   {}
    3915              : 
    3916              :   /* opt_pass methods: */
    3917      1471370 :   bool gate (function *) final override
    3918              :     {
    3919      1471370 :       return targetm.machine_dependent_reorg != 0;
    3920              :     }
    3921              : 
    3922      1471363 :   unsigned int execute (function *) final override
    3923              :     {
    3924      1471363 :       targetm.machine_dependent_reorg ();
    3925      1471363 :       return 0;
    3926              :     }
    3927              : 
    3928              : }; // class pass_machine_reorg
    3929              : 
    3930              : } // anon namespace
    3931              : 
    3932              : rtl_opt_pass *
    3933       285722 : make_pass_machine_reorg (gcc::context *ctxt)
    3934              : {
    3935       285722 :   return new pass_machine_reorg (ctxt);
    3936              : }

Generated by: LCOV version 2.4-beta

The LCOV profile was generated on an x86_64 machine using the following configure options: configure --disable-bootstrap --enable-coverage=opt --enable-languages=c,c++,fortran,go,jit,lto,rust,m2 --enable-host-shared. The GCC test suite was run with the built compiler.