Definition of Float

A float variable can hold whole numbers as well as numbers with fractional parts.

What Is Float in Computer Programming?

Float is short for "floating point." It is a fundamental data type built into the compiler that is used to define numeric values with decimal points, which lets it store fractions. C, C++, C#, and many other programming languages recognize float as a data type. Other common numeric data types include int and double.

The float type can represent values ranging from approximately 1.5 × 10⁻⁴⁵ to 3.4 × 10³⁸ with a precision of seven digits.
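In C#, for example, these limits are exposed directly on the float type, so a short console-program sketch can print them:

```csharp
using System;

class FloatRange
{
    static void Main()
    {
        // Smallest positive value and largest finite value a float can hold,
        // corresponding to the approximate range quoted above.
        Console.WriteLine(float.Epsilon);   // on the order of 1.5 × 10⁻⁴⁵
        Console.WriteLine(float.MaxValue);  // on the order of 3.4 × 10³⁸
    }
}
```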

Precision refers to the number of significant digits a type can hold. A float can represent only about 7 digits in total, not just the digits after the decimal point, so a value such as 321.1234567, which has 10 digits, cannot be stored exactly in a float. If greater precision, meaning more digits, is necessary, the double type is used.
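To make the seven-digit limit concrete, here is a minimal C# sketch (the variable names are illustrative) that stores the same 10-digit value in a float and a double:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // A float keeps roughly 7 significant digits, so the trailing
        // digits of this 10-digit value are lost to rounding.
        float asFloat = 321.1234567f;
        double asDouble = 321.1234567;

        Console.WriteLine(asFloat.ToString("G9"));   // prints roughly 321.123444
        Console.WriteLine(asDouble.ToString("G15")); // prints 321.1234567
    }
}
```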

Uses for Float

Float is used mostly in graphics libraries because of their extremely high demand for processing power. Because float occupies half as much memory as double, it has historically been the faster choice when dealing with thousands or millions of floating-point numbers. Since calculation speed has increased dramatically with newer processors, however, the advantage of float over double is often negligible. Float is also used in situations that can tolerate the rounding errors that arise from its seven-digit precision.
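As a rough illustration of that kind of rounding error, the following C# sketch adds 0.1 ten thousand times in both types; the exact sums may vary slightly by platform, but the float total drifts noticeably:

```csharp
using System;

class RoundingDrift
{
    static void Main()
    {
        float floatSum = 0f;
        double doubleSum = 0.0;

        // Neither type stores 0.1 exactly, but float's 7-digit precision
        // lets the error accumulate much faster over many additions.
        for (int i = 0; i < 10_000; i++)
        {
            floatSum += 0.1f;
            doubleSum += 0.1;
        }

        Console.WriteLine(floatSum);  // noticeably off from 1000
        Console.WriteLine(doubleSum); // very close to 1000
    }
}
```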

Float vs. Double and Int

Float and double are similar types. Float is a single-precision, 32-bit floating-point data type; double is a double-precision, 64-bit floating-point data type.
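In C#, for instance, the size difference is easy to see; note the f suffix required for a float literal in this sketch:

```csharp
using System;

class SizeComparison
{
    static void Main()
    {
        float singlePrecision = 3.14159f;           // 32 bits, about 7 significant digits
        double doublePrecision = 3.141592653589793; // 64 bits, about 15-16 significant digits

        Console.WriteLine($"{singlePrecision} uses {sizeof(float)} bytes");
        Console.WriteLine($"{doublePrecision} uses {sizeof(double)} bytes");
    }
}
```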

The biggest differences are precision and range. A double accommodates 15 to 16 digits, compared with float's 7. The range of double is approximately 5.0 × 10⁻³²⁴ to 1.7 × 10³⁰⁸. Int is also a numeric type, but it serves a different purpose: numbers without fractional parts, or any need for a decimal point, are stored as int.

The int type holds only whole numbers, but it takes up less space than a double, its arithmetic is usually faster, and it uses caches and data-transfer bandwidth more efficiently.
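A small C# sketch contrasting int with the floating-point types; the division lines show the key behavioral difference, since int simply discards any fractional part:

```csharp
using System;

class IntVsFloat
{
    static void Main()
    {
        int wholeNumber = 7;
        float fractional = 7f;

        // Integer division truncates: 7 / 2 gives 3, not 3.5.
        Console.WriteLine(wholeNumber / 2); // 3
        Console.WriteLine(fractional / 2);  // 3.5

        // An int is 4 bytes, half the size of an 8-byte double.
        Console.WriteLine(sizeof(int));     // 4
        Console.WriteLine(sizeof(double));  // 8
    }
}
```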