ARM Technical Support Knowledge Articles


Applies to: General Topics


I use the following #defines to create the constants I use in my program:

#define AAA 1042
#define BBB (1023 * AAA)

When I use BBB in my program, the result I get is around 17,000 instead of 1,065,966 which is what I expect. What is going on? Why does the compiler generate the wrong value?


This explanation requires reference to the ANSI C Standard. Note that this is not a bug in the compiler: this code yields the same result with any C compiler where the int type occupies 16 bits.

Here are some interesting things about the ANSI C specification as it applies to our compiler and the code above.

  1. The number 1042 is (by default) an integer value (16-bit).
  2. The number 1023 is (by default again) an integer value (16-bit).
  3. The result (1023 * 1042) is by default an integer value (16-bit).

These three points follow from the ANSI specification's "Usual Arithmetic Conversions" section, which states the following (paraphrased!):

Binary operators like +, -, *, and / need two operands (for example, A + B). Before the calculation is performed, the compiler implicitly converts the operands to the same type (according to some rules). The compiler then performs the calculation. The generated result is converted to the same type as the operands. For instance, if the operands are converted to float types, the result is a float.

The rules are...

A. If either operand has type long double, the other operand is converted to long double.

B. If either operand has type double, the other operand is converted to double.

C. If either operand has type float, the other operand is converted to float.


G. If either operand has type long int, the other operand is converted to long int. (THIS RULE IS IMPORTANT.)

H. If either operand has type unsigned int, the other operand is converted to unsigned int.

J. If both operands have type int, the result has type int. (THIS IS THE MOST IMPORTANT RULE FOR THIS SITUATION.)

Remember that the result has the SAME TYPE as the operands. So, if the compiler thinks the operands are ints (16-bit), the result is an int (16-bit). Even if the result of the operation overflows!

Now, we can see that 1023 * 1042 generates a value that cannot be represented in 16 bits. However, ANY C COMPILER that conforms to the ANSI standard will calculate this number as...

1042 * 1023 = 1065966 (0x001043EE)

(This is a 32-bit value which is truncated into a 16-bit int.)

Then, when the compiler correctly truncates this value to 16 bits, we get 0x43EE, or 17,390 in decimal. This is the value the engineer observes, and it is exactly what rule J above specifies.

The solution is to make at least one of the operands a long. The result is then a long, according to rule G listed above. You may make both constants long integers if that makes you more comfortable; rule G still applies.

In our example, to have the compiler evaluate BBB as a LONG, at least one of the integer constants (1042 or 1023) must be a long. In C, the letter L following a number indicates that the constant has long integer type.

For example, the following code generates the expected results.

#define AAA 1042
#define BBB (1023L * AAA)

If you run into this problem while porting existing code, the original target probably defined int to be 32 bits wide. If this is the problem, you should convert the numeric constants involved to long by adding the L suffix.


Article last edited on: 2007-12-18 00:42:46

Copyright © 2011 ARM Limited. All rights reserved. External (Open), Non-Confidential