Integer Data Type
Introduction
An integer (often shortened to "int") is a fundamental data type in computer programming used to store and manipulate whole numbers, both positive and negative, without any decimal or fractional component.
Integers are used for counting discrete items because they store exact values, unlike floating-point types (float or double), which store approximations of real numbers.
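To make the contrast concrete, here is a minimal Java sketch (the class name and loop are purely illustrative) that sums the same quantity ten times as an int and as a double:

```java
public class ExactnessDemo {
    public static void main(String[] args) {
        // Integer arithmetic is exact: adding 1 ten times always yields 10.
        int intTotal = 0;
        for (int i = 0; i < 10; i++) {
            intTotal += 1;
        }
        System.out.println(intTotal);     // 10

        // Doubles store approximations: adding 0.1 ten times drifts slightly.
        double doubleTotal = 0.0;
        for (int i = 0; i < 10; i++) {
            doubleTotal += 0.1;
        }
        System.out.println(doubleTotal);  // 0.9999999999999999
    }
}
```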
Key Characteristics
- Whole Numbers Only: Integers store values such as 1, 42, -100, 0, or 5000, but never 1.5 or 9.9.
- Exactness: Integer arithmetic is precise and does not suffer from the rounding errors that can occur with floating-point math.
- Fixed Range: An integer variable can only hold values within a specific range determined by the number of bits allocated to it in memory. A calculation that exceeds this range causes an overflow; depending on the language, the result may wrap around, be undefined, or raise an error (see the sketch after this list).
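As a concrete illustration, here is a short Java sketch of the fixed-range behaviour; Java is used because its int is defined to wrap around on overflow, while other languages may instead raise an error or leave the result undefined:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;      // 2,147,483,647, the largest 32-bit int
        System.out.println(max);

        // Adding 1 exceeds the 32-bit range. Java silently wraps around to the
        // most negative representable value instead of reporting an error.
        int wrapped = max + 1;
        System.out.println(wrapped);      // -2,147,483,648
    }
}
```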
Common Types and Sizes
Programming languages offer different integer types to balance memory usage against the range of values that must be stored (a short Java example follows the table):
| Type Name (Example in C/Java) | Size (Typical) | Approximate Range | Common Uses |
| --- | --- | --- | --- |
| byte | 8 bits | -128 to 127 | Small flags, raw data streams. |
| short | 16 bits | -32,768 to 32,767 | Short loop counters, array indices. |
| int | 32 bits | Approx. -2.1 billion to +2.1 billion | The most common default integer type. |
| long | 64 bits | Approx. -9.2 quintillion to +9.2 quintillion | Database IDs, timestamps, large sums. |
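Because Java defines these four types at exactly the sizes shown in the table, their limits can be printed directly from the standard wrapper-class constants; this sketch simply echoes the ranges above:

```java
public class IntegerRanges {
    public static void main(String[] args) {
        // Each wrapper class exposes the exact bounds of its primitive type.
        System.out.println("byte:  " + Byte.MIN_VALUE + " to " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.MIN_VALUE + " to " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.MIN_VALUE + " to " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.MIN_VALUE + " to " + Long.MAX_VALUE);
    }
}
```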
When to Use Integers
Integers are the preferred data type whenever you are counting items that naturally occur as whole units:
- Counting loop iterations.
- Storing a person's age.
- Tracking the score in a game.
- Managing array indices or item counts.
- Storing currency values that can be handled as whole units (e.g., storing dollars as "cents" to avoid floating-point errors), as shown in the sketch below.
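The cents approach in the last item can be sketched in a few lines of Java; the price, tax rate, and truncating rounding rule here are illustrative assumptions, not a prescription for real billing code:

```java
public class CurrencyAsCents {
    public static void main(String[] args) {
        // Store money as whole cents so every arithmetic step stays exact.
        long priceCents = 1999;                   // $19.99
        long taxCents = priceCents * 8 / 100;     // 8% tax, truncated to whole cents
        long totalCents = priceCents + taxCents;

        // Convert back to dollars and cents only when formatting for display.
        System.out.printf("Total: $%d.%02d%n", totalCents / 100, totalCents % 100);
    }
}
```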