 
Instruction set summary
The processor implements a version of the Thumb instruction set. Table 3.1 lists the supported instructions.
In Table 3.1:
Angle brackets, <>, enclose alternative forms of the operand.
Braces, {}, enclose optional operands.
The Operands column is not exhaustive.
Op2 is a flexible second operand that can be either a register or a constant.
Most instructions can use an optional condition code suffix.
For more information on the instructions and operands, see the instruction descriptions.
Table 3.1. Cortex-M7 instructions
Mnemonic  Operands  Brief description  Flags  Page 

ADC, ADCS    Add with Carry  N,Z,C,V  ADD, ADC, SUB, SBC, and RSB
ADD, ADDS    Add  N,Z,C,V  ADD, ADC, SUB, SBC, and RSB
ADD, ADDW    Add    ADD, ADC, SUB, SBC, and RSB
ADR    Address to Register    ADR
AND, ANDS    Logical AND  N,Z,C  AND, ORR, EOR, BIC, and ORN
ASR, ASRS    Arithmetic Shift Right  N,Z,C  ASR, LSL, LSR, ROR, and RRX
B    Branch    B, BL, BX, and BLX
BFC    Bit Field Clear    BFC and BFI
BFI    Bit Field Insert    BFC and BFI
BIC, BICS    Bit Clear  N,Z,C  AND, ORR, EOR, BIC, and ORN
BKPT    Breakpoint    BKPT
BL    Branch with Link    B, BL, BX, and BLX
BLX    Branch indirect with Link and Exchange    B, BL, BX, and BLX
BX    Branch indirect and Exchange    B, BL, BX, and BLX
CBNZ    Compare and Branch if Non Zero    CBZ and CBNZ
CBZ    Compare and Branch if Zero    CBZ and CBNZ
CLREX    Clear Exclusive    CLREX
CLZ    Count Leading Zeros    CLZ
CMN    Compare Negative  N,Z,C,V  CMP and CMN
CMP    Compare  N,Z,C,V  CMP and CMN
CPSID, CPSIE  i  Change Processor State    CPS
CPY  Rd, Rn  Copy    CPY 
DMB    Data Memory Barrier    DMB
DSB    Data Synchronization Barrier    DSB
EOR, EORS    Exclusive OR  N,Z,C  AND, ORR, EOR, BIC, and ORN
ISB  {opt}  Instruction Synchronization Barrier    ISB 
IT    If‑Then condition block    IT 
LDM    Load Multiple registers    LDM and STM
LDMDB, LDMEA    Load Multiple registers, decrement before    LDM and STM
LDMIA, LDMFD    Load Multiple registers, increment after    LDM and STM
LDR, LDRT    Load Register with word (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
LDRH, LDRHT    Load Register with Halfword (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
LDRSH, LDRSHT    Load Register with Signed Halfword (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
LDRB, LDRBT    Load Register with Byte (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
LDRSB, LDRSBT    Load Register with Signed Byte (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
LDR  Rt, [Rn, Rm {, LSL #shift}]  Load Register with word (register offset)    LDR and STR, register offset 
LDRH  Rt, [Rn, Rm {, LSL #shift}]  Load Register with Halfword (register offset)    LDR and STR, register offset 
LDRSH  Rt, [Rn, Rm {, LSL #shift}]  Load Register with Signed Halfword (register offset)    LDR and STR, register offset 
LDRB  Rt, [Rn, Rm {, LSL #shift}]  Load Register with Byte (register offset)    LDR and STR, register offset 
LDRSB  Rt, [Rn, Rm {, LSL #shift}]  Load Register with Signed Byte (register offset)    LDR and STR, register offset 
LDR  Rt, label  Load Register with word (literal)    LDR, PC‑relative 
LDRH  Rt, label  Load Register with Halfword (literal)    LDR, PC‑relative 
LDRB  Rt, label  Load Register with Byte (literal)    LDR, PC‑relative 
LDRD    Load Register Dual with two words (immediate offset)    LDR and STR, immediate offset
LDRD  Rt, Rt2, label  Load Register Dual with two words (PC-relative)    LDR, PC‑relative
LDREX    Load Register Exclusive    LDREX and STREX
LDREXB    Load Register Exclusive with Byte    LDREX and STREX
LDREXH    Load Register Exclusive with Halfword    LDREX and STREX
LDRSB  Rt, label  Load Register with Signed Byte (PC-relative)    LDR, PC‑relative
LDRSH  Rt, label  Load Register with Signed Halfword (PC-relative)    LDR, PC‑relative
LSL, LSLS    Logical Shift Left  N,Z,C  ASR, LSL, LSR, ROR, and RRX
LSR, LSRS    Logical Shift Right  N,Z,C  ASR, LSL, LSR, ROR, and RRX
MLA    Multiply with Accumulate, 32-bit result  N,Z  MUL, MLA, and MLS
MLS    Multiply and Subtract, 32-bit result    MUL, MLA, and MLS
MOV, MOVS    Move  N,Z,C  MOV and MVN
MOV, MOVS  Rd, Rm  Move (register)  N,Z  MOV and MVN
MOVT    Move Top    MOVT
MOVW    Move 16-bit constant  N,Z,C  MOV and MVN
MRS    Move from Special Register to general register    MRS
MSR    Move from general register to Special Register    MSR
MUL, MULS    Multiply, 32-bit result  N,Z  MUL, MLA, and MLS
MVN, MVNS    Move NOT  N,Z,C  MOV and MVN
NEG  {Rd,} Rm  Negate    NEG 
NOP    No Operation    NOP 
ORN, ORNS    Logical OR NOT  N,Z,C  AND, ORR, EOR, BIC, and ORN
ORR, ORRS    Logical OR  N,Z,C  AND, ORR, EOR, BIC, and ORN
PKHTB, PKHBT  {Rd,} Rn, Rm {, Op2}  Pack Halfword    PKHBT and PKHTB
PLD    Preload Data    PLD
POP    Pop registers from stack    PUSH and POP
PUSH    Push registers onto stack    PUSH and POP
QADD  {Rd,} Rn, Rm  Saturating Add  Q  QADD and QSUB
QADD16  {Rd,} Rn, Rm  Saturating Add 16    QADD and QSUB
QADD8  {Rd,} Rn, Rm  Saturating Add 8    QADD and QSUB
QASX  {Rd,} Rn, Rm  Saturating Add and Subtract with Exchange    QASX and QSAX
QDADD  {Rd,} Rn, Rm  Saturating Double and Add  Q  QDADD and QDSUB
QDSUB  {Rd,} Rn, Rm  Saturating Double and Subtract  Q  QDADD and QDSUB
QSAX  {Rd,} Rn, Rm  Saturating Subtract and Add with Exchange    QASX and QSAX
QSUB  {Rd,} Rn, Rm  Saturating Subtract  Q  QADD and QSUB
QSUB16  {Rd,} Rn, Rm  Saturating Subtract 16    QADD and QSUB
QSUB8  {Rd,} Rn, Rm  Saturating Subtract 8    QADD and QSUB
RBIT    Reverse Bits    REV, REV16, REVSH, and RBIT
REV    Reverse byte order in a word    REV, REV16, REVSH, and RBIT
REV16    Reverse byte order in each halfword    REV, REV16, REVSH, and RBIT
REVSH    Reverse byte order in bottom halfword and sign extend    REV, REV16, REVSH, and RBIT
ROR, RORS    Rotate Right  N,Z,C  ASR, LSL, LSR, ROR, and RRX
RRX, RRXS    Rotate Right with Extend  N,Z,C  ASR, LSL, LSR, ROR, and RRX
RSB, RSBS    Reverse Subtract  N,Z,C,V  ADD, ADC, SUB, SBC, and RSB
SADD16  {Rd,} Rn, Rm  Signed Add 16  GE  SADD16 and SADD8
SADD8  {Rd,} Rn, Rm  Signed Add 8  GE  SADD16 and SADD8
SASX  {Rd,} Rn, Rm  Signed Add and Subtract with Exchange  GE  SASX and SSAX
SBC, SBCS    Subtract with Carry  N,Z,C,V  ADD, ADC, SUB, SBC, and RSB
SBFX    Signed Bit Field Extract    SBFX and UBFX
SDIV    Signed Divide    SDIV and UDIV
SEL    Select bytes  GE  SEL
SEV    Send Event    SEV
SHADD16  {Rd,} Rn, Rm  Signed Halving Add 16    SHADD16 and SHADD8
SHADD8  {Rd,} Rn, Rm  Signed Halving Add 8    SHADD16 and SHADD8
SHASX  {Rd,} Rn, Rm  Signed Halving Add and Subtract with Exchange    SHASX and SHSAX
SHSAX  {Rd,} Rn, Rm  Signed Halving Subtract and Add with Exchange    SHASX and SHSAX
SHSUB16  {Rd,} Rn, Rm  Signed Halving Subtract 16    SHSUB16 and SHSUB8
SHSUB8  {Rd,} Rn, Rm  Signed Halving Subtract 8    SHSUB16 and SHSUB8
SMLABB, SMLABT, SMLATB, SMLATT    Signed Multiply Accumulate halfwords  Q  SMLAWB, SMLAWT, SMLABB, SMLABT, SMLATB, and SMLATT
SMLAD, SMLADX    Signed Multiply Accumulate Dual  Q  SMLAD and SMLADX
SMLAL    Signed Multiply with Accumulate Long (32 × 32 + 64), 64-bit result    UMULL, UMLAL, SMULL, and SMLAL
SMLALBB, SMLALBT, SMLALTB, SMLALTT    Signed Multiply Accumulate Long, halfwords    SMLALD, SMLALDX, SMLALBB, SMLALBT, SMLALTB, and SMLALTT
SMLALD, SMLALDX    Signed Multiply Accumulate Long Dual    SMLALD, SMLALDX, SMLALBB, SMLALBT, SMLALTB, and SMLALTT
SMLAWB, SMLAWT  Rd, Rn, Rm, Ra  Signed Multiply Accumulate, word by halfword  Q  SMLAWB, SMLAWT, SMLABB, SMLABT, SMLATB, and SMLATT
SMLSD, SMLSDX  Rd, Rn, Rm, Ra  Signed Multiply Subtract Dual  Q  SMLSD and SMLSLD
SMLSLD, SMLSLDX    Signed Multiply Subtract Long Dual    SMLSD and SMLSLD
SMMLA, SMMLAR  Rd, Rn, Rm, Ra  Signed Most significant word Multiply Accumulate    SMMLA and SMMLS
SMMLS, SMMLSR    Signed Most significant word Multiply Subtract    SMMLA and SMMLS
SMMUL, SMMULR  {Rd,} Rn, Rm  Signed Most significant word Multiply    SMMUL
SMUAD, SMUADX  {Rd,} Rn, Rm  Signed Dual Multiply Add  Q  SMUAD and SMUSD
SMULBB, SMULBT, SMULTB, SMULTT  {Rd,} Rn, Rm  Signed Multiply (halfwords)    SMUL and SMULW
SMULL    Signed Multiply Long (32 × 32), 64-bit result    UMULL, UMLAL, SMULL, and SMLAL
SMULWB, SMULWT  {Rd,} Rn, Rm  Signed Multiply word by halfword    SMUL and SMULW
SMUSD, SMUSDX  {Rd,} Rn, Rm  Signed Dual Multiply Subtract    SMUAD and SMUSD
SSAT    Signed Saturate  Q  SSAT and USAT
SSAT16  Rd, #n, Rm  Signed Saturate 16  Q  SSAT16 and USAT16
SSAX  {Rd,} Rn, Rm  Signed Subtract and Add with Exchange  GE  SASX and SSAX
SSUB16    Signed Subtract 16  GE  SSUB16 and SSUB8
SSUB8  {Rd,} Rn, Rm  Signed Subtract 8  GE  SSUB16 and SSUB8
STM    Store Multiple registers    LDM and STM
STMDB, STMFD    Store Multiple registers, decrement before    LDM and STM
STMIA, STMEA    Store Multiple registers, increment after    LDM and STM
STR, STRT    Store Register word (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
STRH, STRHT    Store Register Halfword (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
STRB, STRBT    Store Register Byte (immediate offset, unprivileged)    LDR and STR, immediate offset, LDR and STR, unprivileged
STR  Rt, [Rn, Rm {, LSL #shift}]  Store Register word (register offset)    LDR and STR, register offset 
STRH  Rt, [Rn, Rm {, LSL #shift}]  Store Register Halfword (register offset)    LDR and STR, register offset 
STRB  Rt, [Rn, Rm {, LSL #shift}]  Store Register Byte (register offset)    LDR and STR, register offset 
STRD    Store Register Dual two words    LDR and STR, immediate offset
STREX    Store Register Exclusive    LDREX and STREX
STREXB    Store Register Exclusive Byte    LDREX and STREX
STREXH    Store Register Exclusive Halfword    LDREX and STREX
SUB, SUBS    Subtract  N,Z,C,V  ADD, ADC, SUB, SBC, and RSB
SUB, SUBW    Subtract    ADD, ADC, SUB, SBC, and RSB
SVC    Supervisor Call    SVC
SXTAB  {Rd,} Rn, Rm {,ROR #n}  Sign extend 8 bits to 32 and Add    SXTA and UXTA 
SXTAB16  {Rd,} Rn, Rm {,ROR #n}  Sign extend two 8-bit values to 16 and Add    SXTA and UXTA
SXTAH  {Rd,} Rn, Rm {,ROR #n}  Sign extend 16 bits to 32 and Add    SXTA and UXTA 
SXTB    Sign extend 8 bits to 32    SXT and UXT
SXTB16  {Rd,} Rm {,ROR #n}  Sign extend 8 bits to 16    SXT and UXT
SXTH    Sign extend a Halfword to 32    SXT and UXT
TBB    Table Branch Byte    TBB and TBH
TBH    Table Branch Halfword    TBB and TBH
TEQ    Test Equivalence  N,Z,C  TST and TEQ
TST    Test  N,Z,C  TST and TEQ
UADD16  {Rd,} Rn, Rm  Unsigned Add 16  GE  UADD16 and UADD8
UADD8  {Rd,} Rn, Rm  Unsigned Add 8  GE  UADD16 and UADD8
UASX  {Rd,} Rn, Rm  Unsigned Add and Subtract with Exchange  GE  UASX and USAX
UBFX  Rd, Rn, #lsb, #width  Unsigned Bit Field Extract    SBFX and UBFX
UDIV  {Rd,} Rn, Rm  Unsigned Divide    SDIV and UDIV
UHADD16  {Rd,} Rn, Rm  Unsigned Halving Add 16    UHADD16 and UHADD8
UHADD8  {Rd,} Rn, Rm  Unsigned Halving Add 8    UHADD16 and UHADD8
UHASX  {Rd,} Rn, Rm  Unsigned Halving Add and Subtract with Exchange    UHASX and UHSAX
UHSAX  {Rd,} Rn, Rm  Unsigned Halving Subtract and Add with Exchange    UHASX and UHSAX
UHSUB16  {Rd,} Rn, Rm  Unsigned Halving Subtract 16    UHSUB16 and UHSUB8
UHSUB8  {Rd,} Rn, Rm  Unsigned Halving Subtract 8    UHSUB16 and UHSUB8
UMAAL    Unsigned Multiply Accumulate Accumulate Long (32 × 32 + 32 + 32), 64-bit result    UMULL, UMAAL, and UMLAL
UMLAL    Unsigned Multiply with Accumulate Long (32 × 32 + 64), 64-bit result    UMULL, UMLAL, SMULL, and SMLAL
UMULL    Unsigned Multiply Long (32 × 32), 64-bit result    UMULL, UMLAL, SMULL, and SMLAL
UQADD16  {Rd,} Rn, Rm  Unsigned Saturating Add 16    UQADD and UQSUB
UQADD8  {Rd,} Rn, Rm  Unsigned Saturating Add 8    UQADD and UQSUB
UQASX  {Rd,} Rn, Rm  Unsigned Saturating Add and Subtract with Exchange    UQASX and UQSAX
UQSAX  {Rd,} Rn, Rm  Unsigned Saturating Subtract and Add with Exchange    UQASX and UQSAX
UQSUB16  {Rd,} Rn, Rm  Unsigned Saturating Subtract 16    UQADD and UQSUB
UQSUB8  {Rd,} Rn, Rm  Unsigned Saturating Subtract 8    UQADD and UQSUB
USAD8  {Rd,} Rn, Rm  Unsigned Sum of Absolute Differences    USAD8
USADA8  Rd, Rn, Rm, Ra  Unsigned Sum of Absolute Differences and Accumulate    USADA8
USAT    Unsigned Saturate  Q  SSAT and USAT
USAT16  Rd, #n, Rm  Unsigned Saturate 16  Q  SSAT16 and USAT16
USAX  {Rd,} Rn, Rm  Unsigned Subtract and Add with Exchange  GE  UASX and USAX
USUB16  {Rd,} Rn, Rm  Unsigned Subtract 16  GE  USUB16 and USUB8
USUB8  {Rd,} Rn, Rm  Unsigned Subtract 8  GE  USUB16 and USUB8
UXTAB  {Rd,} Rn, Rm {,ROR #n}  Rotate, unsigned extend 8 bits to 32 and Add    SXTA and UXTA
UXTAB16  {Rd,} Rn, Rm {,ROR #n}  Rotate, unsigned extend two 8-bit values to 16 and Add    SXTA and UXTA
UXTAH  {Rd,} Rn, Rm {,ROR #n}  Rotate, unsigned extend and Add Halfword    SXTA and UXTA
UXTB    Unsigned zero-extend a Byte    SXT and UXT
UXTB16    Unsigned zero-extend Byte 16    SXT and UXT
UXTH    Unsigned zero-extend a Halfword    SXT and UXT

VABS    Floating-point Absolute    VABS
VADD  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Add    VADD
VCMP  .F<32|64> <Sd|Dd>, <Sm|#0.0>  Compare two floating-point registers, or one floating-point register and zero  N,Z,C,V  VCMP and VCMPE
VCMPE  .F<32|64> <Sd|Dd>, <Sm|#0.0>  Compare two floating-point registers, or one floating-point register and zero with Invalid Operation check  N,Z,C,V  VCMP and VCMPE
VCVTA  .Tm.F<32|64> <Sd>, <Sm|Dm>  Convert from floating-point to integer with directed rounding to nearest with Ties Away    VCVTA, VCVTN, VCVTP and VCVTM
VCVTN  .Tm.F<32|64> <Sd>, <Sm|Dm>  Convert from floating-point to integer with directed rounding to nearest with Ties to even    VCVTA, VCVTN, VCVTP and VCVTM
VCVTP  .Tm.F<32|64> <Sd>, <Sm|Dm>  Convert from floating-point to integer with directed rounding towards Plus infinity    VCVTA, VCVTN, VCVTP and VCVTM
VCVTM  .Tm.F<32|64> <Sd>, <Sm|Dm>  Convert from floating-point to integer with directed rounding towards Minus infinity    VCVTA, VCVTN, VCVTP and VCVTM
VCVT  .F<32|64>.Tm <Sd>, <Sm|Dm>  Convert from floating-point to integer    VCVT and VCVTR between floating-point and integer
VCVTR  .Tm.F<32|64> <Sd>, <Sm|Dm>  Convert between floating-point and integer with rounding    VCVT and VCVTR between floating-point and integer
VCVT  .Td.F<32|64> <Sd|Dd>, <Sd|Dd>, #fbits  Convert from floating-point to fixed-point    VCVT between floating-point and fixed-point
VCVT  <B|T>.F<32|64>.F16 <Sd|Dd>, Sm  Convert half-precision value to single-precision or double-precision    VCVTB and VCVTT
VCVT  <B|T>.F16.F<32|64> Sd, <Sm|Dm>  Convert single-precision or double-precision register to half-precision    VCVTB and VCVTT
VDIV  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Divide    VDIV
VFMA  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Fused Multiply Accumulate    VFMA and VFMS
VFMS  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Fused Multiply Subtract    VFMA and VFMS
VFNMA  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Fused Negate Multiply Accumulate    VFNMA and VFNMS
VFNMS  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Fused Negate Multiply Subtract    VFNMA and VFNMS
VLDM  {mode}{.size} Rn{!}, list  Floating-point Load Multiple extension registers    VLDM
VLDR  .F<32|64> <Sd|Dd>, [<Rn> {, #offset}]  Floating-point Load an extension register from memory (immediate)    VLDR
VLDR  .F<32|64> <Sd|Dd>, <label>  Load an extension register from memory    VLDR
VLDR  .F<32|64> <Sd|Dd>, [PC, #0]  Load an extension register from memory    VLDR
VMAXNM  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Maximum of two floating-point numbers with IEEE 754-2008 NaN handling    VMAXNM and VMINNM
VMINNM  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Minimum of two floating-point numbers with IEEE 754-2008 NaN handling    VMAXNM and VMINNM
VMLA  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Floating-point Multiply Accumulate    VMLA and VMLS
VMLS  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Floating-point Multiply Subtract    VMLA and VMLS
VMRS  Rt, FPSCR  Move to Arm core register from floating-point Special Register  N,Z,C,V  VMRS
VMSR  FPSCR, Rt  Move to floating-point Special Register from Arm core register    VMSR
VMOV  <Sn|Rt>, <Rt|Sn>  Copy Arm core register to single-precision    VMOV Arm Core register to single-precision
VMOV  <Sm|Rt>, <Sm1|Rt2>, <Rt|Sm>, <Rt2|Sm1>  Copy two Arm core registers to two single-precision    VMOV two Arm Core registers to two single-precision registers
VMOV  {.size} Dd[x], Rt  Copy Arm core register to scalar    VMOV Arm Core register to scalar
VMOV  {.dt} Rt, Dn[x]  Copy scalar to Arm core register    VMOV Scalar to Arm Core register
VMOV  .F<32|64> <Sd|Dd>, #imm  Floating-point Move immediate    VMOV Immediate
VMOV  .F<32|64> <Sd|Dd>, <Sm|Dm>  Copies the contents of one register to another    VMOV Register
VMOV  <Dm|Rt>, <Rt|Rt2>, <Rt2|Dm>  Floating-point Move transfers two words between two Arm core registers and a doubleword register    VMOV two Arm core registers and a double-precision register
VMUL  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Multiply    VMUL
VNEG  .F<32|64> <Sd|Dd>, <Sm|Dm>  Floating-point Negate    VNEG
VNMLA  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Floating-point Multiply Accumulate and Negate    VNMLA, VNMLS and VNMUL
VNMLS  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Floating-point Multiply, Subtract and Negate    VNMLA, VNMLS and VNMUL
VNMUL  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Multiply and Negate    VNMLA, VNMLS and VNMUL
VPOP  {.size} list  Load multiple consecutive floating-point registers from the stack    VPOP
VPUSH  {.size} list  Store multiple consecutive floating-point registers to the stack    VPUSH
VRINTA  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with directed rounding to Nearest with Ties Away    VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTN  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with directed rounding to Nearest with Ties to even    VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTP  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with directed rounding to Plus infinity    VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTM  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with directed rounding to Minus infinity    VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTX  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with rounding specified in FPSCR    VRINTR and VRINTX
VRINTZ  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with rounding towards Zero    VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTR  .F<32|64> <Sd|Dd>, <Sm|Dm>  Float to integer in floating-point format conversion with rounding towards value specified in FPSCR    VRINTR and VRINTX
VSEL  .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm>  Select register, alternative to a pair of conditional VMOV    VSEL
VSQRT  .F<32|64> <Sd|Dd>, <Sm|Dm>  Calculates floating-point Square Root    VSQRT
VSTM  {mode}{.size} Rn{!}, list  Floating-point Store Multiple    VSTM
VSTR  .F<32|64> <Sd|Dd>, [Rn{, #offset}]  Floating-point Store Register stores an extension register to memory    VSTR
VSUB  .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm>  Floating-point Subtract    VSUB
WFE    Wait For Event    WFE 
WFI    Wait For Interrupt    WFI