### What Is Double in Computer Programming

Double is a fundamental data type built into the compiler. It is used to define numeric variables that hold numbers with decimal points, so it can represent fractional values. C, C++, C# and many other programming languages recognize double as a type. Other common numeric data types include float and int.

The double type can represent values ranging from approximately ±5.0 × 10^{−324} to ±1.7 × 10^{308}, with a precision of 15 to 16 significant digits.

Precision refers to a limit on significant digits, counted across the whole number, not just the digits following the decimal point. A double reliably holds about 15 to 16 significant digits in total.

### Uses for Double

At one time, the float type, which has a smaller range, was used because it was faster when dealing with thousands or millions of floating-point numbers. Because calculation speed has increased dramatically with new processors, the advantage of float over double is negligible. The double type is considered by many programmers to be the default type when working with numbers that require a decimal point.

### Double vs. Float and Int

Double and float are similar types. Float is a single-precision, 32-bit floating point data type; double is a double-precision, 64-bit floating point data type. The biggest differences are precision and range. Double accommodates 15 to 16 significant digits, compared with float's roughly seven. Float's range is also smaller, at approximately ±1.5 × 10^{−45} to ±3.4 × 10^{38}.

Int also stores numbers, but it serves a different purpose: it holds only whole numbers. Values that never need a fractional part or a decimal point can be stored as int. In exchange, int takes up less space, its arithmetic is usually faster, and it uses caches and data-transfer bandwidth more efficiently.