1.10.1. Extended ARM instruction set summary

Table 1.7 summarizes the extended ARM instruction set.

Table 1.7. ARM instruction set summary

Operation | Assembler

Arithmetic:
  Add | ADD{cond}{S} <Rd>, <Rn>, <operand2>
  Add with carry | ADC{cond}{S} <Rd>, <Rn>, <operand2>
  Subtract | SUB{cond}{S} <Rd>, <Rn>, <operand2>
  Subtract with carry | SBC{cond}{S} <Rd>, <Rn>, <operand2>
  Reverse subtract | RSB{cond}{S} <Rd>, <Rn>, <operand2>
  Reverse subtract with carry | RSC{cond}{S} <Rd>, <Rn>, <operand2>
  Multiply | MUL{cond}{S} <Rd>, <Rm>, <Rs>
  Multiply-accumulate | MLA{cond}{S} <Rd>, <Rm>, <Rs>, <Rn>
  Multiply unsigned long | UMULL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Multiply unsigned accumulate long | UMLAL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Multiply signed long | SMULL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Multiply signed accumulate long | SMLAL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Saturating add | QADD{cond} <Rd>, <Rm>, <Rn>
  Saturating add with double | QDADD{cond} <Rd>, <Rm>, <Rn>
  Saturating subtract | QSUB{cond} <Rd>, <Rm>, <Rn>
  Saturating subtract with double | QDSUB{cond} <Rd>, <Rm>, <Rn>
  Multiply 16 x 16 | SMULxy{cond} <Rd>, <Rm>, <Rs>
  Multiply-accumulate 16 x 16 + 32 | SMLAxy{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Multiply 32 x 16 | SMULWy{cond} <Rd>, <Rm>, <Rs>
  Multiply-accumulate 32 x 16 + 32 | SMLAWy{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Multiply signed accumulate long 16 x 16 + 64 | SMLALxy{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  Count leading zeros | CLZ{cond} <Rd>, <Rm>

Compare:
  Compare | CMP{cond} <Rn>, <operand2>
  Compare negative | CMN{cond} <Rn>, <operand2>

Logical:
  Move | MOV{cond}{S} <Rd>, <operand2>
  Move NOT | MVN{cond}{S} <Rd>, <operand2>
  Test | TST{cond} <Rn>, <operand2>
  Test equivalence | TEQ{cond} <Rn>, <operand2>
  AND | AND{cond}{S} <Rd>, <Rn>, <operand2>
  XOR | EOR{cond}{S} <Rd>, <Rn>, <operand2>
  OR | ORR{cond}{S} <Rd>, <Rn>, <operand2>
  Bit clear | BIC{cond}{S} <Rd>, <Rn>, <operand2>
  Copy | CPY{cond} <Rd>, <Rm>

Branch:
  Branch | B{cond} <label>
  Branch with link | BL{cond} <label>
  Branch and exchange | BX{cond} <Rm>
  Branch, link and exchange | BLX <label>
  Branch, link and exchange | BLX{cond} <Rm>
  Branch and exchange to Jazelle state | BXJ{cond} <Rm>

Status register handling:
  Move SPSR to register | MRS{cond} <Rd>, SPSR
  Move CPSR to register | MRS{cond} <Rd>, CPSR
  Move register to SPSR | MSR{cond} SPSR_{field}, <Rm>
  Move register to CPSR | MSR{cond} CPSR_{field}, <Rm>
  Move immediate to SPSR flags | MSR{cond} SPSR_{field}, #<immed_8r>
  Move immediate to CPSR flags | MSR{cond} CPSR_{field}, #<immed_8r>

Load:
  Word | LDR{cond} <Rd>, <a_mode2>
  Word with User mode privilege | LDR{cond}T <Rd>, <a_mode2P>
  PC as destination, branch and exchange | LDR{cond} R15, <a_mode2P>
  Byte | LDR{cond}B <Rd>, <a_mode2>
  Byte with User mode privilege | LDR{cond}BT <Rd>, <a_mode2P>
  Byte signed | LDR{cond}SB <Rd>, <a_mode3>
  Halfword | LDR{cond}H <Rd>, <a_mode3>
  Halfword signed | LDR{cond}SH <Rd>, <a_mode3>
  Doubleword | LDR{cond}D <Rd>, <a_mode3>
  Return from exception | RFE<a_mode4> <Rn>{!}

Load multiple:
  Stack operations | LDM{cond}<a_mode4L> <Rn>{!}, <reglist>
  Increment before | LDM{cond}IB <Rn>{!}, <reglist>{^}
  Increment after | LDM{cond}IA <Rn>{!}, <reglist>{^}
  Decrement before | LDM{cond}DB <Rn>{!}, <reglist>{^}
  Decrement after | LDM{cond}DA <Rn>{!}, <reglist>{^}
  Stack operations and restore CPSR | LDM{cond}<a_mode4> <Rn>{!}, <reglist+pc>^
  User registers | LDM{cond}<a_mode4> <Rn>{!}, <reglist>^

Soft preload:
  Memory system hint (in Non-secure state this instruction behaves like a NOP) | PLD <a_mode2>

Store:
  Word | STR{cond} <Rd>, <a_mode2>
  Word with User mode privilege | STR{cond}T <Rd>, <a_mode2P>
  Byte | STR{cond}B <Rd>, <a_mode2>
  Byte with User mode privilege | STR{cond}BT <Rd>, <a_mode2P>
  Halfword | STR{cond}H <Rd>, <a_mode3>
  Doubleword | STR{cond}D <Rd>, <a_mode3>
  Store return state | SRS<a_mode4> <mode>{!}

Store multiple:
  Stack operations | STM{cond}<a_mode4S> <Rn>{!}, <reglist>
  User registers | STM{cond}<a_mode4S> <Rn>, <reglist>^
  Increment before | STM{cond}IB <Rn>{!}, <reglist>{^}
  Increment after | STM{cond}IA <Rn>{!}, <reglist>{^}
  Decrement before | STM{cond}DB <Rn>{!}, <reglist>{^}
  Decrement after | STM{cond}DA <Rn>{!}, <reglist>{^}

Swap:
  Word | SWP{cond} <Rd>, <Rm>, [<Rn>]
  Byte | SWP{cond}B <Rd>, <Rm>, [<Rn>]

Change state:
  Change processor state | CPS<effect> <iflags>{, <mode>}
  Change processor mode | CPS <mode>
  Change endianness | SETEND <endian_specifier>

NOP-compatible hints:
  No operation | NOP{cond}
  Yield | YIELD{cond}

Byte-reverse:
  Byte-reverse word | REV{cond} <Rd>, <Rm>
  Byte-reverse halfword | REV16{cond} <Rd>, <Rm>
  Byte-reverse signed halfword | REVSH{cond} <Rd>, <Rm>

Synchronization primitives:
  Load exclusive | LDREX{cond} <Rd>, [<Rn>]
  Store exclusive | STREX{cond} <Rd>, <Rm>, [<Rn>]
  Load byte exclusive | LDREXB{cond} <Rd>, [<Rn>]
  Load halfword exclusive | LDREXH{cond} <Rd>, [<Rn>]
  Load doubleword exclusive | LDREXD{cond} <Rd>, [<Rn>]
  Store byte exclusive | STREXB{cond} <Rd>, <Rm>, [<Rn>]
  Store halfword exclusive | STREXH{cond} <Rd>, <Rm>, [<Rn>]
  Store doubleword exclusive | STREXD{cond} <Rd>, <Rm>, [<Rn>]
  Clear exclusive | CLREX

Coprocessor:
  Data operations | CDP{cond} <cp_num>, <op1>, <CRd>, <CRn>, <CRm>{, <op2>}
  Move to ARM reg from coproc | MRC{cond} <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move to coproc from ARM reg | MCR{cond} <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move double to ARM reg from coproc | MRRC{cond} <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Move double to coproc from ARM reg | MCRR{cond} <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Load | LDC{cond} <cp_num>, <CRd>, <a_mode5>
  Store | STC{cond} <cp_num>, <CRd>, <a_mode5>

Alternative coprocessor:
  Data operations | CDP2 <cp_num>, <op1>, <CRd>, <CRn>, <CRm>{, <op2>}
  Move to ARM reg from coproc | MRC2 <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move to coproc from ARM reg | MCR2 <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move double to ARM reg from coproc | MRRC2 <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Move double to coproc from ARM reg | MCRR2 <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Load | LDC2 <cp_num>, <CRd>, <a_mode5>
  Store | STC2 <cp_num>, <CRd>, <a_mode5>

Supervisor call | SVC{cond} <immed_24>
Secure Monitor call | SMC{cond} <immed_16>
Software breakpoint | BKPT <immed_16>

Parallel add/subtract:
  Signed add high 16 + 16, low 16 + 16, set GE flags | SADD16{cond} <Rd>, <Rn>, <Rm>
  Saturated add high 16 + 16, low 16 + 16 | QADD16{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 + 16, low 16 + 16, halved | SHADD16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + 16, low 16 + 16, set GE flags | UADD16{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 + 16, low 16 + 16 | UQADD16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + 16, low 16 + 16, halved | UHADD16{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 + low 16, low 16 - high 16, set GE flags | SADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 + low 16, low 16 - high 16 | QADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 + low 16, low 16 - high 16, halved | SHADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + low 16, low 16 - high 16, set GE flags | UADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 + low 16, low 16 - high 16 | UQADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + low 16, low 16 - high 16, halved | UHADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - low 16, low 16 + high 16, set GE flags | SSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 - low 16, low 16 + high 16 | QSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - low 16, low 16 + high 16, halved | SHSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - low 16, low 16 + high 16, set GE flags | USUBADDX{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 - low 16, low 16 + high 16 | UQSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - low 16, low 16 + high 16, halved | UHSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - 16, low 16 - 16, set GE flags | SSUB16{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 - 16, low 16 - 16 | QSUB16{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - 16, low 16 - 16, halved | SHSUB16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - 16, low 16 - 16, set GE flags | USUB16{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 - 16, low 16 - 16 | UQSUB16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - 16, low 16 - 16, halved | UHSUB16{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 + 8, set GE flags | SADD8{cond} <Rd>, <Rn>, <Rm>
  Four saturated 8 + 8 | QADD8{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 + 8, halved | SHADD8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 + 8, set GE flags | UADD8{cond} <Rd>, <Rn>, <Rm>
  Four saturated unsigned 8 + 8 | UQADD8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 + 8, halved | UHADD8{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 - 8, set GE flags | SSUB8{cond} <Rd>, <Rn>, <Rm>
  Four saturated 8 - 8 | QSUB8{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 - 8, halved | SHSUB8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 - 8, set GE flags | USUB8{cond} <Rd>, <Rn>, <Rm>
  Four saturated unsigned 8 - 8 | UQSUB8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 - 8, halved | UHSUB8{cond} <Rd>, <Rn>, <Rm>
  Sum of absolute differences | USAD8{cond} <Rd>, <Rm>, <Rs>
  Sum of absolute differences and accumulate | USADA8{cond} <Rd>, <Rm>, <Rs>, <Rn>

Sign/zero extend and add:
  Two low 8/16, sign extend to 16, + 16 | SXTAB16{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 8/32, sign extend to 32, + 32 | SXTAB{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 16/32, sign extend to 32, + 32 | SXTAH{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Two low 8/16, zero extend to 16, + 16 | UXTAB16{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 8/32, zero extend to 32, + 32 | UXTAB{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 16/32, zero extend to 32, + 32 | UXTAH{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Two low 8, sign extend to 16, packed 32 | SXTB16{cond} <Rd>, <Rm>{, <rotation>}
  Low 8, sign extend to 32 | SXTB{cond} <Rd>, <Rm>{, <rotation>}
  Low 16, sign extend to 32 | SXTH{cond} <Rd>, <Rm>{, <rotation>}
  Two low 8, zero extend to 16, packed 32 | UXTB16{cond} <Rd>, <Rm>{, <rotation>}
  Low 8, zero extend to 32 | UXTB{cond} <Rd>, <Rm>{, <rotation>}
  Low 16, zero extend to 32 | UXTH{cond} <Rd>, <Rm>{, <rotation>}

Signed multiply and multiply accumulate:
  Signed (high 16 x 16) + (low 16 x 16) + 32, set Q flag | SMLAD{cond} <Rd>, <Rm>, <Rs>, <Rn>
  As SMLAD, but high x low, low x high, set Q flag | SMLADX{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Signed (high 16 x 16) - (low 16 x 16) + 32 | SMLSD{cond} <Rd>, <Rm>, <Rs>, <Rn>
  As SMLSD, but high x low, low x high | SMLSDX{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Signed (high 16 x 16) + (low 16 x 16) + 64 | SMLALD{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  As SMLALD, but high x low, low x high | SMLALDX{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  Signed (high 16 x 16) - (low 16 x 16) + 64 | SMLSLD{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  As SMLSLD, but high x low, low x high | SMLSLDX{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  32 + truncated high 32 (32 x 32) | SMMLA{cond} <Rd>, <Rm>, <Rs>, <Rn>
  32 + rounded high 32 (32 x 32) | SMMLAR{cond} <Rd>, <Rm>, <Rs>, <Rn>
  32 - truncated high 32 (32 x 32) | SMMLS{cond} <Rd>, <Rm>, <Rs>, <Rn>
  32 - rounded high 32 (32 x 32) | SMMLSR{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Signed (high 16 x 16) + (low 16 x 16), set Q flag | SMUAD{cond} <Rd>, <Rm>, <Rs>
  As SMUAD, but high x low, low x high, set Q flag | SMUADX{cond} <Rd>, <Rm>, <Rs>
  Signed (high 16 x 16) - (low 16 x 16) | SMUSD{cond} <Rd>, <Rm>, <Rs>
  As SMUSD, but high x low, low x high | SMUSDX{cond} <Rd>, <Rm>, <Rs>
  Truncated high 32 (32 x 32) | SMMUL{cond} <Rd>, <Rm>, <Rs>
  Rounded high 32 (32 x 32) | SMMULR{cond} <Rd>, <Rm>, <Rs>
  Unsigned 32 x 32, + two 32, to 64 | UMAAL{cond} <RdLo>, <RdHi>, <Rm>, <Rs>

Saturate, select, and pack:
  Signed saturation at bit position n | SSAT{cond} <Rd>, #<immed_5>, <Rm>{, <shift>}
  Unsigned saturation at bit position n | USAT{cond} <Rd>, #<immed_5>, <Rm>{, <shift>}
  Two 16 signed saturation at bit position n | SSAT16{cond} <Rd>, #<immed_4>, <Rm>
  Two 16 unsigned saturation at bit position n | USAT16{cond} <Rd>, #<immed_4>, <Rm>
  Select bytes from Rn/Rm based on GE flags | SEL{cond} <Rd>, <Rn>, <Rm>
  Pack low 16/32, high 16/32 | PKHBT{cond} <Rd>, <Rn>, <Rm>{, LSL #<immed_5>}
  Pack high 16/32, low 16/32 | PKHTB{cond} <Rd>, <Rn>, <Rm>{, ASR #<immed_5>}
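
The fragment below is an informal illustration, not part of the architectural summary: it applies a few of the arithmetic instructions above, with arbitrary register choices and values.

    MOV     r0, #20             ; r0 = 20
    MOV     r1, #3              ; r1 = 3
    MLA     r2, r0, r1, r0      ; r2 = (r0 * r1) + r0 = 80
    QADD    r3, r2, r2          ; r3 = saturating r2 + r2 = 160
    CLZ     r4, r3              ; r4 = leading zeros in r3 = 24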

Table 1.8 summarizes addressing mode 2.

Table 1.8. Addressing mode 2

Addressing mode | Assembler

Offset:
  Immediate offset | [<Rn>, #+/-<immed_12>]
  Zero offset | [<Rn>]
  Register offset | [<Rn>, +/-<Rm>]
  Scaled register offset | [<Rn>, +/-<Rm>, LSL #<immed_5>]
    [<Rn>, +/-<Rm>, LSR #<immed_5>]
    [<Rn>, +/-<Rm>, ASR #<immed_5>]
    [<Rn>, +/-<Rm>, ROR #<immed_5>]
    [<Rn>, +/-<Rm>, RRX]

Pre-indexed offset:
  Immediate offset | [<Rn>, #+/-<immed_12>]!
  Zero offset | [<Rn>]
  Register offset | [<Rn>, +/-<Rm>]!
  Scaled register offset | [<Rn>, +/-<Rm>, LSL #<immed_5>]!
    [<Rn>, +/-<Rm>, LSR #<immed_5>]!
    [<Rn>, +/-<Rm>, ASR #<immed_5>]!
    [<Rn>, +/-<Rm>, ROR #<immed_5>]!
    [<Rn>, +/-<Rm>, RRX]!

Post-indexed offset:
  Immediate offset | [<Rn>], #+/-<immed_12>
  Zero offset | [<Rn>]
  Register offset | [<Rn>], +/-<Rm>
  Scaled register offset | [<Rn>], +/-<Rm>, LSL #<immed_5>
    [<Rn>], +/-<Rm>, LSR #<immed_5>
    [<Rn>], +/-<Rm>, ASR #<immed_5>
    [<Rn>], +/-<Rm>, ROR #<immed_5>
    [<Rn>], +/-<Rm>, RRX
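
As a non-normative sketch, the following statements show one load using each class of mode 2 addressing; the registers are arbitrary.

    LDR     r0, [r1, #4]         ; offset: load from r1 + 4, r1 unchanged
    LDR     r0, [r1, #4]!        ; pre-indexed: load from r1 + 4, then r1 = r1 + 4
    LDR     r0, [r1], #4         ; post-indexed: load from r1, then r1 = r1 + 4
    LDR     r0, [r1, r2, LSL #2] ; scaled register offset: load from r1 + (r2 * 4)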

Table 1.9 summarizes addressing mode 2P, post-indexed only.

Table 1.9. Addressing mode 2P, post-indexed only

Addressing mode | Assembler

Post-indexed offset:
  Immediate offset | [<Rn>], #+/-<immed_12>
  Zero offset | [<Rn>]
  Register offset | [<Rn>], +/-<Rm>
  Scaled register offset | [<Rn>], +/-<Rm>, LSL #<immed_5>
    [<Rn>], +/-<Rm>, LSR #<immed_5>
    [<Rn>], +/-<Rm>, ASR #<immed_5>
    [<Rn>], +/-<Rm>, ROR #<immed_5>
    [<Rn>], +/-<Rm>, RRX
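
Mode 2P is used by the T-suffixed instructions that perform their access with User mode privilege. An illustrative example, with arbitrary registers:

    LDRT    r0, [r1], #4        ; load word with User mode privilege, then r1 = r1 + 4
    STRBT   r2, [r3], #1        ; store byte with User mode privilege, then r3 = r3 + 1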

Table 1.10 summarizes addressing mode 3.

Table 1.10. Addressing mode 3

Addressing mode | Assembler

Immediate offset | [<Rn>, #+/-<immed_8>]
  Pre-indexed | [<Rn>, #+/-<immed_8>]!
  Post-indexed | [<Rn>], #+/-<immed_8>
Register offset | [<Rn>, +/-<Rm>]
  Pre-indexed | [<Rn>, +/-<Rm>]!
  Post-indexed | [<Rn>], +/-<Rm>
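
An illustrative use of mode 3 with the halfword and doubleword transfers it serves; the registers are arbitrary, and <Rd> for LDRD/STRD must be an even-numbered register:

    LDRH    r0, [r1, #2]        ; load halfword from r1 + 2
    LDRSB   r2, [r1], r3        ; load signed byte from r1, then r1 = r1 + r3
    STRD    r4, [r1, #8]!       ; store r4 and r5 as a doubleword at r1 + 8, writeback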

Table 1.11 summarizes addressing mode 4.

Table 1.11. Addressing mode 4

Addressing mode | Stack type

Block load, stack pop (LDM, RFE):
  IA, Increment after | FD, Full descending
  IB, Increment before | ED, Empty descending
  DA, Decrement after | FA, Full ascending
  DB, Decrement before | EA, Empty ascending

Block store, stack push (STM, SRS):
  IA, Increment after | EA, Empty ascending
  IB, Increment before | FA, Full ascending
  DA, Decrement after | ED, Empty descending
  DB, Decrement before | FD, Full descending
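
For example, the conventional full descending stack can be written either with the stack-oriented suffixes or with the equivalent block-transfer suffixes; the two pairs below are interchangeable (illustrative only):

    STMFD   sp!, {r4-r7, lr}    ; push onto a full descending stack
    LDMFD   sp!, {r4-r7, pc}    ; pop and return
    STMDB   sp!, {r4-r7, lr}    ; the same push, written as Decrement before
    LDMIA   sp!, {r4-r7, pc}    ; the same pop, written as Increment after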

Table 1.12 summarizes addressing mode 5.

Table 1.12. Addressing mode 5

Addressing mode | Assembler

Immediate offset | [<Rn>, #+/-<immed_8*4>]
Immediate pre-indexed | [<Rn>, #+/-<immed_8*4>]!
Immediate post-indexed | [<Rn>], #+/-<immed_8*4>
Unindexed | [<Rn>], <option>
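
An illustrative mode 5 transfer; the coprocessor number and registers are arbitrary and must correspond to a coprocessor present in the system, and immediate offsets are multiples of 4:

    LDC     p11, c0, [r1, #8]   ; load coprocessor register c0 from r1 + 8
    STC     p11, c0, [r1], #16  ; store c0 to address r1, then r1 = r1 + 16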

Table 1.13 summarizes Operand2 assembler.

Table 1.13. Operand2

Operation | Assembler

Immediate value | #<immed_8r>
Logical shift left | <Rm> LSL #<immed_5>
Logical shift right | <Rm> LSR #<immed_5>
Arithmetic shift right | <Rm> ASR #<immed_5>
Rotate right | <Rm> ROR #<immed_5>
Register | <Rm>
Logical shift left | <Rm> LSL <Rs>
Logical shift right | <Rm> LSR <Rs>
Arithmetic shift right | <Rm> ASR <Rs>
Rotate right | <Rm> ROR <Rs>
Rotate right extended | <Rm> RRX
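
For example, a shifted register as Operand2 folds the shift into the data-processing instruction itself; the following statements are illustrative:

    ADD     r0, r1, r2, LSL #2  ; r0 = r1 + (r2 << 2)
    MOV     r3, #0xFF00         ; immediate: 0xFF rotated right by 24, a valid <immed_8r>
    AND     r4, r4, r5, LSR r6  ; r4 = r4 AND (r5 shifted right by r6)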

Table 1.14 summarizes the MSR instruction fields.

Table 1.14. Fields

Suffix | Sets this bit in the MSR field_mask | MSR instruction bit number

c | Control field mask bit (bit 0) | 16
x | Extension field mask bit (bit 1) | 17
s | Status field mask bit (bit 2) | 18
f | Flags field mask bit (bit 3) | 19
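
For example, writing only the flags field leaves the control bits, and therefore the processor mode and interrupt masks, unchanged. Illustrative only; writing the c field requires a privileged mode:

    MSR     CPSR_f, #0xF0000000 ; set the N, Z, C, and V flags (bits [31:28])
    MRS     r0, CPSR            ; read the whole CPSR into r0
    MSR     CPSR_c, r0          ; write back only the control field from r0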

Table 1.15 summarizes condition codes.

Table 1.15. Condition codes

Suffix | Description

EQ | Equal
NE | Not equal
HS/CS | Unsigned higher or same, carry set
LO/CC | Unsigned lower, carry clear
MI | Negative, minus
PL | Positive or zero, plus
VS | Overflow
VC | No overflow
HI | Unsigned higher
LS | Unsigned lower or same
GE | Signed greater than or equal
LT | Signed less than
GT | Signed greater than
LE | Signed less than or equal
AL | Always
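
These suffixes can be appended to most ARM instructions. As a sketch, a compare followed by two conditionally executed moves computes a maximum without a branch:

    CMP     r0, r1              ; set the condition flags on r0 - r1
    MOVGT   r2, r0              ; if r0 > r1 (signed), r2 = r0
    MOVLE   r2, r1              ; otherwise r2 = r1, so r2 = max(r0, r1)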
