All About Pixels

How Pixels Changed And Why

[Image: pixels magnified in an Adobe Photoshop image.]
Pixels are changing.

When it comes to working with graphics software, it never fails to surprise me that our raw material, the lowly pixel, is so misunderstood. Then again, it is quite understandable, because the term has developed a set of differing definitions. Let’s take a look at the lowly pixel and try to understand what happened.

In the Beginning There Was The Picture Element

Depending on whom you talk to, the original term for the pixel was “pel” or “picture element.”

Pel? This is a contraction of the term “picture cell,” which was used by graphic artists and video editors in the early 1970s. According to Wikipedia, the earliest reference to a “picture element” dates back to an 1888 patent of Paul Nipkow, one of the pioneers of television. The earliest publication of the term itself was in a 1927 issue of Wireless World magazine.

OK, where did the term “pixel” originate?

Though I tell my students it is a contraction of the term “picture element” (PicEl, changed to “pixel” for obvious reasons), that is not exactly true. The “pix” part can be traced back to Variety magazine in the 1930s, where movies were usually referred to as “pix” to save headline space. In fact, one of my favourite Variety headlines from that era, “Sticks Nix Hick Pix,” ran over an article explaining why films about farm life were unpopular in rural America.

The first published use of the word “pixel” can be attributed to Fred Billingsley of the Jet Propulsion Laboratory who, in 1965, used the term to describe elements of video images from space probes of Mars and the Moon.

When asked where the term came from, Billingsley made it clear he didn’t coin it. Instead, he said he got the word from Keith McFarland at the Link Division of General Precision in Palo Alto. McFarland, in turn, claimed not to know where it originated, saying simply that it was “in use at the time.”

What exactly is this thing we call a “pixel”?

In broad terms, a pixel is the smallest single element of a digital image. If you select the Zoom tool in Photoshop and magnify an image to its maximum zoom level, the image, as shown in the picture leading off this article, is reduced to a series of squares. Those squares are the pixels that make up the image.

The first thing you should notice is that they sit in neat rows. When digital imaging as we know it first arrived on the scene in the late 1980s, 72 pixels fit on a linear inch, and that became generally accepted as the base measurement for image resolution: pixels per inch, or ppi.

Of course the type guys disputed this, because the traditional typographer’s measure worked out to 72.27 points per inch. Legend has it that, when Steve Jobs was made aware of this, he shut the debate down by claiming, “We don’t do .27 pixels.”
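The arithmetic behind ppi is simple: physical size multiplied by resolution gives pixel dimensions. Here is a minimal TypeScript sketch of that calculation (the helper name is my own, purely for illustration):

```typescript
// Illustrative helper: physical size times resolution gives pixel dimensions.
function pixelDimensions(widthInches: number, heightInches: number, ppi: number) {
  return {
    width: Math.round(widthInches * ppi),
    height: Math.round(heightInches * ppi),
  };
}

// A 4 x 6 inch photo at the classic 72 ppi:
console.log(pixelDimensions(4, 6, 72)); // { width: 288, height: 432 }
```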

As we progressed from the 1980s to the present, hardware manufacturers moved from cathode ray tube (CRT) monitors to the LCD monitors we use today. Along the way they discovered they could pack more and more pixels into that linear inch and, in doing so, make images and text look sharper and more realistic. The famed Apple Retina display does just that.

By packing three to four times as many pixels into a linear inch as those early displays did (a measure now called “pixel density”), a Retina display makes it virtually impossible to pick out the individual pixels on the screen.

What about resolution?

Glad you asked.

Image detail, like energy, can’t be created out of thin air. Remember, pixels are lined up in neat rows at the time of creation. Let’s assume there are 100 pixels per inch (ppi) in a row. If you want to double the number of pixels in that linear inch, doubling the resolution, you will in essence be adding another 100 pixels to that line, a process known as “upsampling.” This is dangerous because the computer doesn’t have a clue which colors to use for the new pixels. Instead, it makes a “best guess” using a process called interpolation.
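To see why interpolation is only a guess, here is a simplified TypeScript sketch that doubles one row of grayscale values by averaging neighbouring pixels. This is my own illustration of the idea, not Photoshop’s actual algorithm; real editors use more sophisticated methods such as bilinear or bicubic interpolation, but the principle is the same: every new pixel is computed, not recovered.

```typescript
// Upsample one row of grayscale pixels (0-255) by a factor of two.
function upsampleRow(row: number[]): number[] {
  const result: number[] = [];
  for (let i = 0; i < row.length; i++) {
    result.push(row[i]); // keep the original pixel
    // Guess a new pixel: the average of this pixel and the next one
    // (or a copy of the last pixel at the end of the row).
    const next = i + 1 < row.length ? row[i + 1] : row[i];
    result.push(Math.round((row[i] + next) / 2));
  }
  return result;
}

console.log(upsampleRow([0, 100, 200])); // [0, 50, 100, 150, 200, 200]
```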

So why are there now two types of pixels?

Welcome to our hi-res world.

With all of the devices out there, the definition of the lowly pixel is becoming a bit clouded. A pixel can now be the smallest unit a screen can support (a hardware pixel), or it can be an optically consistent unit called a “reference pixel,” also known as a CSS pixel.

The W3C currently defines the reference pixel as the standard for all pixel-based measurements. Now, instead of every pixel-based measurement being based on the hardware pixel I have been talking about to this point, it is based on an optical reference unit that might be twice the size of a hardware pixel.

In the W3C standard for CSS, this unit is described as roughly 1/96 of an inch, assuming an ideal viewing distance. Although the definition may vary from platform to platform, the concept is the same: the reference pixel gives you a way to set the size of elements consistently, independent of the actual screen density.

Let’s use a practical example to understand this concept. Assume we have an image that is 100 pixels square. Shown on an older, lower-density phone such as the iPhone 3G, that image uses 100 x 100 hardware pixels. On a newer model, where the density doubles, that same image looks like it is half the size. Why? It is still using 100 x 100 hardware pixels, but there are now twice as many hardware pixels in every linear inch, so each pixel, and therefore the image, is physically smaller.
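The arithmetic bears this out. Here is a small TypeScript sketch; the density figures are Apple’s published specs of 163 ppi for the iPhone 3G and 326 ppi for the iPhone 4 Retina display:

```typescript
// Physical width of an image on screen: hardware pixels divided by density.
function physicalWidthInches(hardwarePixels: number, ppi: number): number {
  return hardwarePixels / ppi;
}

console.log(physicalWidthInches(100, 163).toFixed(2)); // "0.61" inches on an iPhone 3G
console.log(physicalWidthInches(100, 326).toFixed(2)); // "0.31" inches on an iPhone 4
```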

This is where the reference pixel comes into play: it applies a scaling factor to the image. This scaling factor is generally referred to as the device-pixel-ratio, the ratio of hardware pixels to reference pixels. If you are working with a mobile developer, he or she will be quite familiar with device-pixel-ratios and where they are applied.
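In the browser, that scaling factor is exposed as window.devicePixelRatio. Here is a sketch of the common pattern for keeping a canvas crisp on a high-density screen; devicePixelRatio and the canvas calls are standard web APIs, while the element id is an assumption made for the example:

```typescript
// Size a canvas in reference (CSS) pixels while matching its backing
// store to the hardware pixels. The "demo" id is hypothetical.
const canvas = document.getElementById("demo") as HTMLCanvasElement;
const dpr = window.devicePixelRatio || 1; // e.g. 2 on a Retina display

canvas.style.width = "100px";   // 100 reference pixels wide on every device
canvas.style.height = "100px";
canvas.width = 100 * dpr;       // 200 hardware pixels on a 2x display
canvas.height = 100 * dpr;

const ctx = canvas.getContext("2d")!;
ctx.scale(dpr, dpr); // draw in reference pixels; the browser maps them to hardware
```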