The proliferation of digital cameras (and smartphones), and the sharing of pictures, raises many questions about computer images and picture formats. What are they, and how do they work?
Most pictures on a computer fall into a common classification known as "raster images".
Your computer and television both scan from left to right across a line, then drop down to the next line (from top to bottom). That scan pattern is the "raster". To make the image you see, there are hundreds of little points on each line where the computer or TV sets the color for that one dot, or element. Each dot is called a pixel (short for "picture element").
It takes tens of thousands, or even millions, of these elements (pixels) to make an image.
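You can picture a raster image as a grid of pixels stored row by row. Here is a minimal sketch in Python; the tiny 2 x 2 image and its colors are made up purely for illustration:

```python
# A tiny 2x2 raster image, stored in raster order: left to right,
# top to bottom. Each pixel is an (R, G, B) triple with values
# from 0 (dark) to 255 (bright).
image = [
    [(255, 0, 0), (0, 255, 0)],      # top row: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # bottom row: a blue pixel, a white pixel
]

# Reading one pixel: row first, then column.
top_left = image[0][0]
print(top_left)  # (255, 0, 0)
```

A real photograph works the same way, just with millions of these triples instead of four.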
The "resolution" of an image is how many of these pixels there are across by how many there are vertically. Sometimes you see those numbers stated separately, like 1024 x 768 (1,024 elements across by 768 down), and sometimes you see the total count, like 1 megapixel (meaning roughly a million pixels make up the image).
The more detailed the picture, the more memory it takes. A lot more. Even a relatively undetailed picture takes a lot of memory to describe: a single average photograph's worth of memory could easily hold over a thousand pages of text. So "raw" (uncompressed) images are very large.
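You can check that "thousand pages of text" claim with back-of-the-envelope arithmetic. This sketch assumes 3 bytes per pixel (one byte each for red, green, and blue) and a rough figure of 2,000 characters per page of text; both are illustrative assumptions, not fixed standards:

```python
# Raw image size: every pixel stored in full.
width, height = 1024, 768
bytes_per_pixel = 3                  # one byte each for R, G, B (an assumption)
raw_bytes = width * height * bytes_per_pixel
print(raw_bytes)                     # 2359296 bytes, about 2.25 MB

# Compare with plain text, at a rough 2,000 characters per page.
chars_per_page = 2000                # an assumption for illustration
pages = raw_bytes // chars_per_page
print(pages)                         # 1179 -- over a thousand pages of text
```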
Computer people needed to make pictures take up less memory, so that more could be stored, and so that they would be easier to send across the Internet. So there are ways to "compress" an image, or make it take less memory.
There are two categories of compression: lossy and lossless.
- Lossless compression compresses the data (image), but only to the point where no true information is lost; it just describes every element in a more efficient way. Depending on the image and how it compresses, this can fit 4 to 10 images in the amount of memory used to hold a single uncompressed image.
- Lossy compression is much better at compressing. When you describe a picture pixel by pixel, there is a lot of detail that the eye can't see or filters out. If the computer eliminates that same information, it saves a lot of memory and you can barely perceive the difference. Some data is lost, but you usually won't be able to tell. Lossy compression can fit dozens or even hundreds of times as many pictures in the amount of memory used to hold just one uncompressed image. But if you look really closely, or blow the image up really large, there is some lost detail.
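One of the simplest lossless techniques is run-length encoding: instead of storing a long run of identical pixels one by one, store the pixel once plus a count. This sketch illustrates the idea only; real formats use far more sophisticated schemes, and the function names here are my own:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)  # extend the current run
        else:
            encoded.append((p, 1))                 # start a new run
    return encoded

def rle_decode(encoded):
    """Expand (value, count) pairs back into the original list."""
    return [value for value, count in encoded for _ in range(count)]

# A row of mostly-white pixels (255) with one black dot (0).
row = [255] * 7 + [0] + [255] * 4
packed = rle_encode(row)
print(packed)  # [(255, 7), (0, 1), (255, 4)] -- 3 pairs instead of 12 values

# "Lossless" means we get back exactly what we put in.
assert rle_decode(packed) == row
```

Notice the trade-off: run-length encoding shines on flat areas like logos and screenshots, but a noisy photograph has few repeated runs, which is one reason photographs need the lossy approach to shrink dramatically.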
Some of the most common formats and terms you might hear about are:
- TIFF (Tagged Image File Format). TIFF is usually a raw (uncompressed) format, and when it is compressed, it uses lossless compression. So these images take a lot of memory but retain all their detail. They are used by scanners and for professional work, but not much for consumer purposes.
- RAW is a camera's unprocessed (or minimally processed) sensor data, usually stored uncompressed or with lossless compression. In fact, it is often a different container built around TIFF, with extra metadata to describe the image or images.
- JPEG (Joint Photographic Experts Group) is a very common file format on the Internet and in many digital cameras. It uses "lossy" compression to make pictures much smaller. It works well for photographs, but if you have text, logos, or sharp lines, it can produce little halos, ghosts, or artifacts.
- GIF (Graphics Interchange Format) actually uses lossless compression, but it limits an image to 256 colors, so it works better for logos and text than JPEG (and worse for photographs with lots of detail). GIF also supports animation and transparent areas. So GIF became a very common format for websites, but it is not good for digital photographs.
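Each of these formats marks its files with a few fixed "magic" bytes at the very start, which is how software tells them apart regardless of the file extension. These signatures come from the formats' published specifications; the helper function itself is just a sketch:

```python
def guess_image_format(data: bytes) -> str:
    """Guess an image format from its leading 'magic' bytes."""
    if data[:3] == b"\xff\xd8\xff":                # JPEG start-of-image marker
        return "JPEG"
    if data[:6] in (b"GIF87a", b"GIF89a"):         # the two GIF versions
        return "GIF"
    if data[:4] in (b"II*\x00", b"MM\x00*"):       # little- and big-endian TIFF
        return "TIFF"
    return "unknown"

# Trying it on some hand-made headers (not real image files):
print(guess_image_format(b"GIF89a" + b"\x00" * 10))  # GIF
print(guess_image_format(b"\xff\xd8\xff\xe0..."))    # JPEG
```

This is also why renaming a `.gif` file to `.jpg` doesn't change what it is: the bytes inside still say GIF.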
That's it! You should now understand a lot more about picture formats than you did before.