3.1. Instruction set summary

The processor implements a version of the Thumb instruction set. Table 3.1 lists the supported instructions.

Note

In Table 3.1:

  • Angle brackets, <>, enclose alternative forms of the operand.

  • Braces, {}, enclose optional operands.

  • The Operands column is not exhaustive.

  • Op2 is a flexible second operand that can be either a register or a constant.

  • Most instructions can use an optional condition code suffix.

For more information on the instructions and operands, see the instruction descriptions.
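
For example, the following fragment (an illustrative sketch only, not part of Table 3.1; the register choices and immediate values are arbitrary) shows how the flexible second operand, the optional S suffix, and a condition code suffix are written:

    ADDS    r0, r1, #4            ; Op2 is a constant; the S suffix updates the N, Z, C, and V flags
    ADD     r0, r1, r2, LSL #2    ; Op2 is a register, with an optional shift
    CMP     r0, #10               ; CMP sets the condition flags
    IT      EQ                    ; in Thumb code, most conditional instructions must follow an IT instruction
    ADDEQ   r0, r0, #1            ; EQ suffix: executed only if the Z flag is set

Branch instructions can use a condition code suffix without a preceding IT instruction.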

Table 3.1. Cortex-M7 instructions

Mnemonic | Operands | Brief description | Flags | See
ADC, ADCS | {Rd,} Rn, Op2 | Add with Carry | N,Z,C,V | ADD, ADC, SUB, SBC, and RSB
ADD, ADDS | {Rd,} Rn, Op2 | Add | N,Z,C,V | ADD, ADC, SUB, SBC, and RSB
ADD, ADDW | {Rd,} Rn, #imm12 | Add | - | ADD, ADC, SUB, SBC, and RSB
ADR | Rd, label | Address to Register | - | ADR
AND, ANDS | {Rd,} Rn, Op2 | Logical AND | N,Z,C | AND, ORR, EOR, BIC, and ORN
ASR, ASRS | Rd, Rm, <Rs|#n> | Arithmetic Shift Right | N,Z,C | ASR, LSL, LSR, ROR, and RRX
B | label | Branch | - | B, BL, BX, and BLX
BFC | Rd, #lsb, #width | Bit Field Clear | - | BFC and BFI
BFI | Rd, Rn, #lsb, #width | Bit Field Insert | - | BFC and BFI
BIC, BICS | {Rd,} Rn, Op2 | Bit Clear | N,Z,C | AND, ORR, EOR, BIC, and ORN
BKPT | #imm8 | Breakpoint | - | BKPT
BL | label | Branch with Link | - | B, BL, BX, and BLX
BLX | Rm | Branch indirect with Link and Exchange | - | B, BL, BX, and BLX
BX | Rm | Branch indirect and Exchange | - | B, BL, BX, and BLX
CBNZ | Rn, label | Compare and Branch if Non Zero | - | CBZ and CBNZ
CBZ | Rn, label | Compare and Branch if Zero | - | CBZ and CBNZ
CLREX | - | Clear Exclusive | - | CLREX
CLZ | Rd, Rm | Count Leading Zeros | - | CLZ
CMN | Rn, Op2 | Compare Negative | N,Z,C,V | CMP and CMN
CMP | Rn, Op2 | Compare | N,Z,C,V | CMP and CMN
CPSID, CPSIE | i | Change Processor State (disable or enable interrupts) | - | CPS
CPY | Rd, Rn | Copy | - | CPY
DMB | {opt} | Data Memory Barrier | - | DMB
DSB | {opt} | Data Synchronization Barrier | - | DSB
EOR, EORS | {Rd,} Rn, Op2 | Exclusive OR | N,Z,C | AND, ORR, EOR, BIC, and ORN
ISB | {opt} | Instruction Synchronization Barrier | - | ISB
IT | - | If-Then condition block | - | IT
LDM | Rn{!}, reglist | Load Multiple registers | - | LDM and STM
LDMDB, LDMEA | Rn{!}, reglist | Load Multiple registers, decrement before | - | LDM and STM
LDMIA, LDMFD | Rn{!}, reglist | Load Multiple registers, increment after | - | LDM and STM
LDR, LDRT | Rt, [Rn, #offset] | Load Register with word (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
LDRH, LDRHT | Rt, [Rn, #offset] | Load Register with Halfword (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
LDRSH, LDRSHT | Rt, [Rn, #offset] | Load Register with Signed Halfword (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
LDRB, LDRBT | Rt, [Rn, #offset] | Load Register with Byte (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
LDRSB, LDRSBT | Rt, [Rn, #offset] | Load Register with Signed Byte (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
LDR | Rt, [Rn, Rm {, LSL #shift}] | Load Register with word (register offset) | - | LDR and STR, register offset
LDRH | Rt, [Rn, Rm {, LSL #shift}] | Load Register with Halfword (register offset) | - | LDR and STR, register offset
LDRSH | Rt, [Rn, Rm {, LSL #shift}] | Load Register with Signed Halfword (register offset) | - | LDR and STR, register offset
LDRB | Rt, [Rn, Rm {, LSL #shift}] | Load Register with Byte (register offset) | - | LDR and STR, register offset
LDRSB | Rt, [Rn, Rm {, LSL #shift}] | Load Register with Signed Byte (register offset) | - | LDR and STR, register offset
LDR | Rt, label | Load Register with word (literal) | - | LDR, PC-relative
LDRH | Rt, label | Load Register with Halfword (literal) | - | LDR, PC-relative
LDRB | Rt, label | Load Register with Byte (literal) | - | LDR, PC-relative
LDRD | Rt, Rt2, [Rn, #offset] | Load Register Dual with two words (immediate offset) | - | LDR and STR, immediate offset
LDRD | Rt, Rt2, label | Load Register Dual with two words (PC-relative) | - | LDR, PC-relative
LDREX | Rt, [Rn, #offset] | Load Register Exclusive | - | LDREX and STREX
LDREXB | Rt, [Rn] | Load Register Exclusive with Byte | - | LDREX and STREX
LDREXH | Rt, [Rn] | Load Register Exclusive with Halfword | - | LDREX and STREX
LDRSB | Rt, label | Load Register with Signed Byte (PC-relative) | - | LDR, PC-relative
LDRSH | Rt, label | Load Register with Signed Halfword (PC-relative) | - | LDR, PC-relative
LSL, LSLS | Rd, Rm, <Rs|#n> | Logical Shift Left | N,Z,C | ASR, LSL, LSR, ROR, and RRX
LSR, LSRS | Rd, Rm, <Rs|#n> | Logical Shift Right | N,Z,C | ASR, LSL, LSR, ROR, and RRX
MLA | Rd, Rn, Rm, Ra | Multiply with Accumulate, 32-bit result | N,Z | MUL, MLA, and MLS
MLS | Rd, Rn, Rm, Ra | Multiply and Subtract, 32-bit result | - | MUL, MLA, and MLS
MOV, MOVS | Rd, Op2 | Move | N,Z,C | MOV and MVN
MOV, MOVS | Rd, Rm | Move (register) | N,Z | MOV and MVN
MOVT | Rd, #imm16 | Move Top | - | MOVT
MOVW | Rd, #imm16 | Move 16-bit constant | N,Z,C | MOV and MVN
MRS | Rd, spec_reg | Move from Special Register to general register | - | MRS
MSR | spec_reg, Rn | Move from general register to Special Register | - | MSR
MUL, MULS | {Rd,} Rn, Rm | Multiply, 32-bit result | N,Z | MUL, MLA, and MLS
MVN, MVNS | Rd, Op2 | Move NOT | N,Z,C | MOV and MVN
NEG | {Rd,} Rm | Negate | - | NEG
NOP | - | No Operation | - | NOP
ORN, ORNS | {Rd,} Rn, Op2 | Logical OR NOT | N,Z,C | AND, ORR, EOR, BIC, and ORN
ORR, ORRS | {Rd,} Rn, Op2 | Logical OR | N,Z,C | AND, ORR, EOR, BIC, and ORN
PKHTB, PKHBT | {Rd,} Rn, Rm {, Op2} | Pack Halfword | - | PKHBT and PKHTB
PLD | [Rn {, #offset}] | Preload Data | - | PLD
POP | reglist | Pop registers from stack | - | PUSH and POP
PUSH | reglist | Push registers onto stack | - | PUSH and POP
QADD | {Rd,} Rn, Rm | Saturating Add | Q | QADD and QSUB
QADD16 | {Rd,} Rn, Rm | Saturating Add 16 | - | QADD and QSUB
QADD8 | {Rd,} Rn, Rm | Saturating Add 8 | - | QADD and QSUB
QASX | {Rd,} Rn, Rm | Saturating Add and Subtract with Exchange | - | QASX and QSAX
QDADD | {Rd,} Rn, Rm | Saturating Double and Add | Q | QDADD and QDSUB
QDSUB | {Rd,} Rn, Rm | Saturating Double and Subtract | Q | QDADD and QDSUB
QSAX | {Rd,} Rn, Rm | Saturating Subtract and Add with Exchange | - | QASX and QSAX
QSUB | {Rd,} Rn, Rm | Saturating Subtract | Q | QADD and QSUB
QSUB16 | {Rd,} Rn, Rm | Saturating Subtract 16 | - | QADD and QSUB
QSUB8 | {Rd,} Rn, Rm | Saturating Subtract 8 | - | QADD and QSUB
RBIT | Rd, Rn | Reverse Bits | - | REV, REV16, REVSH, and RBIT
REV | Rd, Rn | Reverse byte order in a word | - | REV, REV16, REVSH, and RBIT
REV16 | Rd, Rn | Reverse byte order in each halfword | - | REV, REV16, REVSH, and RBIT
REVSH | Rd, Rn | Reverse byte order in bottom halfword and sign extend | - | REV, REV16, REVSH, and RBIT
ROR, RORS | Rd, Rm, <Rs|#n> | Rotate Right | N,Z,C | ASR, LSL, LSR, ROR, and RRX
RRX, RRXS | Rd, Rm | Rotate Right with Extend | N,Z,C | ASR, LSL, LSR, ROR, and RRX
RSB, RSBS | {Rd,} Rn, Op2 | Reverse Subtract | N,Z,C,V | ADD, ADC, SUB, SBC, and RSB
SADD16 | {Rd,} Rn, Rm | Signed Add 16 | GE | SADD16 and SADD8
SADD8 | {Rd,} Rn, Rm | Signed Add 8 | GE | SADD16 and SADD8
SASX | {Rd,} Rn, Rm | Signed Add and Subtract with Exchange | GE | SASX and SSAX
SBC, SBCS | {Rd,} Rn, Op2 | Subtract with Carry | N,Z,C,V | ADD, ADC, SUB, SBC, and RSB
SBFX | Rd, Rn, #lsb, #width | Signed Bit Field Extract | - | SBFX and UBFX
SDIV | {Rd,} Rn, Rm | Signed Divide | - | SDIV and UDIV
SEL | {Rd,} Rn, Rm | Select bytes | GE | SEL
SEV | - | Send Event | - | SEV
SHADD16 | {Rd,} Rn, Rm | Signed Halving Add 16 | - | SHADD16 and SHADD8
SHADD8 | {Rd,} Rn, Rm | Signed Halving Add 8 | - | SHADD16 and SHADD8
SHASX | {Rd,} Rn, Rm | Signed Halving Add and Subtract with Exchange | - | SHASX and SHSAX
SHSAX | {Rd,} Rn, Rm | Signed Halving Subtract and Add with Exchange | - | SHASX and SHSAX
SHSUB16 | {Rd,} Rn, Rm | Signed Halving Subtract 16 | - | SHSUB16 and SHSUB8
SHSUB8 | {Rd,} Rn, Rm | Signed Halving Subtract 8 | - | SHSUB16 and SHSUB8
SMLABB, SMLABT, SMLATB, SMLATT | Rd, Rn, Rm, Ra | Signed Multiply Accumulate halfwords | Q | SMLAWB, SMLAWT, SMLABB, SMLABT, SMLATB, and SMLATT
SMLAD, SMLADX | Rd, Rn, Rm, Ra | Signed Multiply Accumulate Dual | Q | SMLAD and SMLADX
SMLAL | RdLo, RdHi, Rn, Rm | Signed Multiply with Accumulate Long (32 × 32 + 64), 64-bit result | - | UMULL, UMLAL, SMULL, and SMLAL
SMLALBB, SMLALBT, SMLALTB, SMLALTT | RdLo, RdHi, Rn, Rm | Signed Multiply Accumulate Long, halfwords | - | SMLALD, SMLALDX, SMLALBB, SMLALBT, SMLALTB, and SMLALTT
SMLALD, SMLALDX | RdLo, RdHi, Rn, Rm | Signed Multiply Accumulate Long Dual | - | SMLALD, SMLALDX, SMLALBB, SMLALBT, SMLALTB, and SMLALTT
SMLAWB, SMLAWT | Rd, Rn, Rm, Ra | Signed Multiply Accumulate, word by halfword | Q | SMLAWB, SMLAWT, SMLABB, SMLABT, SMLATB, and SMLATT
SMLSD, SMLSDX | Rd, Rn, Rm, Ra | Signed Multiply Subtract Dual | Q | SMLSD and SMLSLD
SMLSLD, SMLSLDX | RdLo, RdHi, Rn, Rm | Signed Multiply Subtract Long Dual | - | SMLSD and SMLSLD
SMMLA, SMMLAR | Rd, Rn, Rm, Ra | Signed Most significant word Multiply Accumulate | - | SMMLA and SMMLS
SMMLS, SMMLSR | Rd, Rn, Rm, Ra | Signed Most significant word Multiply Subtract | - | SMMLA and SMMLS
SMMUL, SMMULR | {Rd,} Rn, Rm | Signed Most significant word Multiply | - | SMMUL
SMUAD, SMUADX | {Rd,} Rn, Rm | Signed Dual Multiply Add | Q | SMUAD and SMUSD
SMULBB, SMULBT, SMULTB, SMULTT | {Rd,} Rn, Rm | Signed Multiply (halfwords) | - | SMUL and SMULW
SMULL | RdLo, RdHi, Rn, Rm | Signed Multiply Long (32 × 32), 64-bit result | - | UMULL, UMLAL, SMULL, and SMLAL
SMULWB, SMULWT | {Rd,} Rn, Rm | Signed Multiply word by halfword | - | SMUL and SMULW
SMUSD, SMUSDX | {Rd,} Rn, Rm | Signed Dual Multiply Subtract | - | SMUAD and SMUSD
SSAT | Rd, #n, Rm {,shift #s} | Signed Saturate | Q | SSAT and USAT
SSAT16 | Rd, #n, Rm | Signed Saturate 16 | Q | SSAT16 and USAT16
SSAX | {Rd,} Rn, Rm | Signed Subtract and Add with Exchange | GE | SASX and SSAX
SSUB16 | {Rd,} Rn, Rm | Signed Subtract 16 | GE | SSUB16 and SSUB8
SSUB8 | {Rd,} Rn, Rm | Signed Subtract 8 | GE | SSUB16 and SSUB8
STM | Rn{!}, reglist | Store Multiple registers | - | LDM and STM
STMDB, STMEA | Rn{!}, reglist | Store Multiple registers, decrement before | - | LDM and STM
STMIA, STMFD | Rn{!}, reglist | Store Multiple registers, increment after | - | LDM and STM
STR, STRT | Rt, [Rn, #offset] | Store Register word (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
STRH, STRHT | Rt, [Rn, #offset] | Store Register Halfword (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
STRB, STRBT | Rt, [Rn, #offset] | Store Register Byte (immediate offset, unprivileged) | - | LDR and STR, immediate offset; LDR and STR, unprivileged
STR | Rt, [Rn, Rm {, LSL #shift}] | Store Register word (register offset) | - | LDR and STR, register offset
STRH | Rt, [Rn, Rm {, LSL #shift}] | Store Register Halfword (register offset) | - | LDR and STR, register offset
STRB | Rt, [Rn, Rm {, LSL #shift}] | Store Register Byte (register offset) | - | LDR and STR, register offset
STRD | Rt, Rt2, [Rn, #offset] | Store Register Dual with two words (immediate offset) | - | LDR and STR, immediate offset
STREX | Rd, Rt, [Rn, #offset] | Store Register Exclusive | - | LDREX and STREX
STREXB | Rd, Rt, [Rn] | Store Register Exclusive Byte | - | LDREX and STREX
STREXH | Rd, Rt, [Rn] | Store Register Exclusive Halfword | - | LDREX and STREX
SUB, SUBS | {Rd,} Rn, Op2 | Subtract | N,Z,C,V | ADD, ADC, SUB, SBC, and RSB
SUB, SUBW | {Rd,} Rn, #imm12 | Subtract | - | ADD, ADC, SUB, SBC, and RSB
SVC | #imm | Supervisor Call | - | SVC
SXTAB | {Rd,} Rn, Rm {,ROR #n} | Sign extend 8 bits to 32 and Add | - | SXTA and UXTA
SXTAB16 | {Rd,} Rn, Rm {,ROR #n} | Sign extend two 8-bit values to 16 and Add | - | SXTA and UXTA
SXTAH | {Rd,} Rn, Rm {,ROR #n} | Sign extend 16 bits to 32 and Add | - | SXTA and UXTA
SXTB | Rd, Rm {,ROR #n} | Sign extend 8 bits to 32 | - | SXT and UXT
SXTB16 | {Rd,} Rm {,ROR #n} | Sign extend 8 bits to 16 | - | SXT and UXT
SXTH | {Rd,} Rm {,ROR #n} | Sign extend a Halfword to 32 | - | SXT and UXT
TBB | [Rn, Rm] | Table Branch Byte | - | TBB and TBH
TBH | [Rn, Rm, LSL #1] | Table Branch Halfword | - | TBB and TBH
TEQ | Rn, Op2 | Test Equivalence | N,Z,C | TST and TEQ
TST | Rn, Op2 | Test | N,Z,C | TST and TEQ
UADD16 | {Rd,} Rn, Rm | Unsigned Add 16 | GE | UADD16 and UADD8
UADD8 | {Rd,} Rn, Rm | Unsigned Add 8 | GE | UADD16 and UADD8
UASX | {Rd,} Rn, Rm | Unsigned Add and Subtract with Exchange | GE | UASX and USAX
UBFX | Rd, Rn, #lsb, #width | Unsigned Bit Field Extract | - | SBFX and UBFX
UDIV | {Rd,} Rn, Rm | Unsigned Divide | - | SDIV and UDIV
UHADD16 | {Rd,} Rn, Rm | Unsigned Halving Add 16 | - | UHADD16 and UHADD8
UHADD8 | {Rd,} Rn, Rm | Unsigned Halving Add 8 | - | UHADD16 and UHADD8
UHASX | {Rd,} Rn, Rm | Unsigned Halving Add and Subtract with Exchange | - | UHASX and UHSAX
UHSAX | {Rd,} Rn, Rm | Unsigned Halving Subtract and Add with Exchange | - | UHASX and UHSAX
UHSUB16 | {Rd,} Rn, Rm | Unsigned Halving Subtract 16 | - | UHSUB16 and UHSUB8
UHSUB8 | {Rd,} Rn, Rm | Unsigned Halving Subtract 8 | - | UHSUB16 and UHSUB8
UMAAL | RdLo, RdHi, Rn, Rm | Unsigned Multiply Accumulate Accumulate Long (32 × 32 + 32 + 32), 64-bit result | - | UMULL, UMAAL, and UMLAL
UMLAL | RdLo, RdHi, Rn, Rm | Unsigned Multiply with Accumulate Long (32 × 32 + 64), 64-bit result | - | UMULL, UMLAL, SMULL, and SMLAL
UMULL | RdLo, RdHi, Rn, Rm | Unsigned Multiply Long (32 × 32), 64-bit result | - | UMULL, UMLAL, SMULL, and SMLAL
UQADD16 | {Rd,} Rn, Rm | Unsigned Saturating Add 16 | - | UQADD and UQSUB
UQADD8 | {Rd,} Rn, Rm | Unsigned Saturating Add 8 | - | UQADD and UQSUB
UQASX | {Rd,} Rn, Rm | Unsigned Saturating Add and Subtract with Exchange | - | UQASX and UQSAX
UQSAX | {Rd,} Rn, Rm | Unsigned Saturating Subtract and Add with Exchange | - | UQASX and UQSAX
UQSUB16 | {Rd,} Rn, Rm | Unsigned Saturating Subtract 16 | - | UQADD and UQSUB
UQSUB8 | {Rd,} Rn, Rm | Unsigned Saturating Subtract 8 | - | UQADD and UQSUB
USAD8 | {Rd,} Rn, Rm | Unsigned Sum of Absolute Differences | - | USAD8
USADA8 | Rd, Rn, Rm, Ra | Unsigned Sum of Absolute Differences and Accumulate | - | USADA8
USAT | Rd, #n, Rm {,shift #s} | Unsigned Saturate | Q | SSAT and USAT
USAT16 | Rd, #n, Rm | Unsigned Saturate 16 | Q | SSAT16 and USAT16
USAX | {Rd,} Rn, Rm | Unsigned Subtract and Add with Exchange | GE | UASX and USAX
USUB16 | {Rd,} Rn, Rm | Unsigned Subtract 16 | GE | USUB16 and USUB8
USUB8 | {Rd,} Rn, Rm | Unsigned Subtract 8 | GE | USUB16 and USUB8
UXTAB | {Rd,} Rn, Rm {,ROR #n} | Rotate, unsigned extend 8 bits to 32 and Add | - | SXTA and UXTA
UXTAB16 | {Rd,} Rn, Rm {,ROR #n} | Rotate, unsigned extend two 8-bit values to 16 and Add | - | SXTA and UXTA
UXTAH | {Rd,} Rn, Rm {,ROR #n} | Rotate, unsigned extend and Add Halfword | - | SXTA and UXTA
UXTB | Rd, Rm {,ROR #n} | Unsigned zero-extend a Byte | - | SXT and UXT
UXTB16 | {Rd,} Rm {,ROR #n} | Unsigned zero-extend 8 bits to 16 | - | SXT and UXT
UXTH | Rd, Rm {,ROR #n} | Unsigned zero-extend a Halfword | - | SXT and UXT
VABS | .F<32|64> <Sd|Dd>, <Sm|Dm> | Floating-point Absolute | - | VABS
VADD | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Add | - | VADD
VCMP | .F<32|64> <Sd|Dd>, <Sm|Dm|#0.0> | Compare two floating-point registers, or one floating-point register and zero | N,Z,C,V | VCMP and VCMPE
VCMPE | .F<32|64> <Sd|Dd>, <Sm|Dm|#0.0> | Compare two floating-point registers, or one floating-point register and zero, with Invalid Operation check | N,Z,C,V | VCMP and VCMPE
VCVTA | .Tm.F<32|64> <Sd>, <Sm|Dm> | Convert from floating-point to integer with directed rounding to Nearest with Ties Away | - | VCVTA, VCVTN, VCVTP, and VCVTM
VCVTN | .Tm.F<32|64> <Sd>, <Sm|Dm> | Convert from floating-point to integer with directed rounding to Nearest with Ties to Even | - | VCVTA, VCVTN, VCVTP, and VCVTM
VCVTP | .Tm.F<32|64> <Sd>, <Sm|Dm> | Convert from floating-point to integer with directed rounding towards Plus infinity | - | VCVTA, VCVTN, VCVTP, and VCVTM
VCVTM | .Tm.F<32|64> <Sd>, <Sm|Dm> | Convert from floating-point to integer with directed rounding towards Minus infinity | - | VCVTA, VCVTN, VCVTP, and VCVTM
VCVT | .F<32|64>.Tm <Sd>, <Sm|Dm> | Convert between floating-point and integer | - | VCVT and VCVTR between floating-point and integer
VCVTR | .Tm.F<32|64> <Sd>, <Sm|Dm> | Convert between floating-point and integer with rounding | - | VCVT and VCVTR between floating-point and integer
VCVT | .Td.F<32|64> <Sd|Dd>, <Sd|Dd>, #fbits | Convert from floating-point to fixed-point | - | VCVT between floating-point and fixed-point
VCVT<B|T> | .F<32|64>.F16 <Sd|Dd>, Sm | Convert half-precision value to single-precision or double-precision | - | VCVTB and VCVTT
VCVT<B|T> | .F16.F<32|64> Sd, <Sm|Dm> | Convert single-precision or double-precision register to half-precision | - | VCVTB and VCVTT
VDIV | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Divide | - | VDIV
VFMA | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Fused Multiply Accumulate | - | VFMA and VFMS
VFMS | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Fused Multiply Subtract | - | VFMA and VFMS
VFNMA | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Fused Negate Multiply Accumulate | - | VFNMA and VFNMS
VFNMS | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Fused Negate Multiply Subtract | - | VFNMA and VFNMS
VLDM | {mode}{.size} Rn{!}, list | Floating-point Load Multiple extension registers | - | VLDM
VLDR | .F<32|64> <Sd|Dd>, [<Rn> {, #offset}] | Floating-point Load an extension register from memory (immediate offset) | - | VLDR
VLDR | .F<32|64> <Sd|Dd>, <label> | Load an extension register from memory | - | VLDR
VLDR | .F<32|64> <Sd|Dd>, [PC, #-0] | Load an extension register from memory | - | VLDR
VMAXNM | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Maximum of two floating-point numbers with IEEE 754-2008 NaN handling | - | VMAXNM and VMINNM
VMINNM | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Minimum of two floating-point numbers with IEEE 754-2008 NaN handling | - | VMAXNM and VMINNM
VMLA | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Floating-point Multiply Accumulate | - | VMLA and VMLS
VMLS | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Floating-point Multiply Subtract | - | VMLA and VMLS
VMRS | Rt, FPSCR | Move to Arm core register from floating-point Special Register | N,Z,C,V | VMRS
VMSR | FPSCR, Rt | Move to floating-point Special Register from Arm core register | - | VMSR
VMOV | <Sn|Rt>, <Rt|Sn> | Copy Arm core register to single-precision register | - | VMOV Arm Core register to single-precision
VMOV | <Sm|Rt>, <Sm1|Rt2>, <Rt|Sm>, <Rt2|Sm1> | Copy two Arm core registers to two single-precision registers | - | VMOV two Arm Core registers to two single-precision registers
VMOV | {.size} Dd[x], Rt | Copy Arm core register to scalar | - | VMOV Arm Core register to scalar
VMOV | {.dt} Rt, Dn[x] | Copy scalar to Arm core register | - | VMOV Scalar to Arm Core register
VMOV | .F<32|64> <Sd|Dd>, #imm | Floating-point Move immediate | - | VMOV Immediate
VMOV | .F<32|64> <Sd|Dd>, <Sm|Dm> | Copy the contents of one register to another | - | VMOV Register
VMOV | <Dm|Rt>, <Rt|Rt2>, <Rt2|Dm> | Floating-point Move, transfers two words between two Arm core registers and a doubleword register | - | VMOV two Arm core registers and a double-precision register
VMUL | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Multiply | - | VMUL
VNEG | .F<32|64> <Sd|Dd>, <Sm|Dm> | Floating-point Negate | - | VNEG
VNMLA | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Floating-point Multiply Accumulate and Negate | - | VNMLA, VNMLS, and VNMUL
VNMLS | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Floating-point Multiply, Subtract and Negate | - | VNMLA, VNMLS, and VNMUL
VNMUL | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Multiply and Negate | - | VNMLA, VNMLS, and VNMUL
VPOP | {.size} list | Load multiple consecutive floating-point registers from the stack | - | VPOP
VPUSH | {.size} list | Store multiple consecutive floating-point registers to the stack | - | VPUSH
VRINTA | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with directed rounding to Nearest with Ties Away | - | VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTN | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with directed rounding to Nearest with Ties to Even | - | VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTP | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with directed rounding towards Plus infinity | - | VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTM | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with directed rounding towards Minus infinity | - | VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTX | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with rounding specified in the FPSCR | - | VRINTR and VRINTX
VRINTZ | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with rounding towards Zero | - | VRINTA, VRINTN, VRINTP, VRINTM, and VRINTZ
VRINTR | .F<32|64> <Sd|Dd>, <Sm|Dm> | Round floating-point to integer in floating-point format with rounding towards the value specified in the FPSCR | - | VRINTR and VRINTX
VSEL | .F<32|64> <Sd|Dd>, <Sn|Dn>, <Sm|Dm> | Select register, alternative to a pair of conditional VMOV instructions | - | VSEL
VSQRT | .F<32|64> <Sd|Dd>, <Sm|Dm> | Floating-point Square Root | - | VSQRT
VSTM | {mode}{.size} Rn{!}, list | Floating-point Store Multiple | - | VSTM
VSTR | .F<32|64> <Sd|Dd>, [Rn {, #offset}] | Floating-point Store an extension register to memory | - | VSTR
VSUB | .F<32|64> {<Sd|Dd>,} <Sn|Dn>, <Sm|Dm> | Floating-point Subtract | - | VSUB
WFE | - | Wait For Event | - | WFE
WFI | - | Wait For Interrupt | - | WFI
