
The Joy of High Tech

by

Rodford Edmiston



This being a collection of random thoughts on bits and pieces of information which should interest the technically oriented reader.

Please note that while I am an engineer (BSCE) and do my research, I am not a professional in this field. Do not take anything here as gospel; check the facts I give. And if you find a mistake, please let me know about it.



Imaging




     First, it is necessary to understand that no common, currently used method of imaging captures images in the same way as living eyes. Even moving images are actually a series of still images shown in rapid sequence, whereas eyes operate continuously. Moreover, there is an extraordinary amount of processing occurring within the living optic system before an image reaches the perception/storage medium; that is, the brain. Retinas and optic nerves perform an enormous amount of data manipulation on the way to the brain, and there the vision center does even more. For later comparison purposes, a human retina has over 120 million sensors in the outermost of its five layers of distinct cell types, in an area less than a centimeter across. Generally there are 6.5 million cones and 120 million rods, with the ratio of one type to the other varying with location on the retina. For instance, the fovea centralis - a small pit about .03 cm across located at the very center of the retina - is composed of some 30,000 cones with no rods. Note, however, that some parts of the retina detect motion or change in light, rather than contributing to resolution. Though the cones are far fewer in number than the rods, they actually provide more of the final resolution, and all of the color. The retina processes the equivalent of ten one-million-point images a second. Also, each optic nerve is a roughly million-fiber active cable. And, yes, there is data compression in there. By the time the data reaches the optic nerve a 6:1 compression has been done for cone vision, or 100:1 for rod vision, with more processing on the way to the brain. Finally, some of the other layers of the retina contribute to the images sent to the optic nerve, increasing sensitivity, dynamic range, and so forth.
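
     As a back-of-the-envelope illustration, the figures above can be turned into rough data rates. The short Python sketch below does just that; the one-sample-per-receptor-per-image assumption and the variable names are mine, chosen only to make the arithmetic visible, and are not measured values.

        # Rough data-rate arithmetic using the approximate figures quoted above.
        # One sample per receptor per "image" is an assumption for illustration.
        RODS = 120_000_000            # approximate rod count
        CONES = 6_500_000             # approximate cone count
        IMAGES_PER_SECOND = 10        # "ten one-million-point images a second"
        OPTIC_NERVE_FIBERS = 1_000_000

        raw_samples = (RODS + CONES) * IMAGES_PER_SECOND
        nerve_samples = OPTIC_NERVE_FIBERS * IMAGES_PER_SECOND

        print(f"Receptor samples per second:    {raw_samples:,}")
        print(f"Optic nerve samples per second: {nerve_samples:,}")
        print(f"Overall reduction: roughly {raw_samples / nerve_samples:.0f}:1")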

     Human vision images are also enhanced by having two simultaneous but independent views (one from each eye); by tiny movements of the eyes, which add to depth perception and increase resolution by providing slightly different views that are interpolated together (the fovea actually performs much like a 30,000 pixel scanner in this action); by using gross movement of the viewer, the target or both to acquire additional information; and through time. In other words, all these slightly different images are integrated together to create a gestalt. (To quote Hecht, "The actual perception of a scene is constructed by the eye-brain system in a continuous analysis of the time-varying retinal image.") Moreover, organic imaging is a self-adjusting process. Not only does the iris open or close according to ambient light level, but the retina also adjusts, both neurologically and chemically. Additionally, the perceived color balance shifts according to the ambient lighting, so that objects usually look the same color even when the color of the light bouncing off them changes. And so on.

     Given all that, it is very hard to create an objective method of evaluating artificial images which uses human vision as a model. Therefore, the objective measuring standards for images are generally taken from physics - specifically, optics - with only the standards for color accommodating the quirks of our visual sense. This means that much of the process of evaluating images is not directly applicable to the way humans actually see.

     Resolution is the simplest measure, though even this can have complications. Basically, it is a measure of how closely spaced the finest distinct details in an image can be, expressed as the number of dots per inch, or line pairs per millimeter, which the image records. How much detail do you need to reproduce a realistic image? Generally, human eyes at a typical reading or image viewing distance - say, around 20 centimeters - can distinguish about 6 to 9 line pairs per millimeter, with the very sharp-eyed capable of nearly twice that. So a printer which produces 300 dots per linear inch (about 120 dots per cm) or more can theoretically produce realistic images, though for close examination using twice that density or more is required. Theoretically. Of course, you still have to get the image data to the printer. Keep in mind that most recording media - whether film or electronic - are smaller than the usual final product. Therefore, to produce a realistic final image the actual recording device must exceed a density of 600 dpi. Most film is more than adequate for this task; even very grainy films can exceed this resolution when used well. An image on a 35mm frame from a roll of average quality ISO 100 film (more on film speeds below) has a resolution of over 4100 X 2700 (over 11 megapixels, in 16 million colors). (Top quality 35mm film can achieve the equivalent of over 24 million pixels per frame.) This works out to roughly 2600 lines per inch (lpi), which is more than good enough for a standard-size print or even moderate enlargements. The CCD chips in digital cameras approach this density, but are smaller than a 35mm frame, being no larger than a postage stamp. (Until the arrival of the 3+ megapixel chips the improvement principally involved cramming more sensors into the same area. Now they're getting larger as well.) High quality consumer digital cameras currently (as of early 2003) have chips with up to six million pixels (a pixel being one sensor element in the chip). Depending on how the array is proportioned, this would mean the image recording surface of a 6+ million pixel chip would be roughly 2840 X 2160, still well below a frame of average quality 35mm film. And there are other factors at work reducing the effective resolution of digital cameras.
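
     To make that arithmetic concrete, here is a small Python sketch converting line pairs per millimeter to dots per inch and estimating the pixel count of a 35mm frame. The 57 line-pairs-per-millimeter figure for average ISO 100 film is an assumption chosen to roughly reproduce the numbers quoted above, not a measured specification.

        MM_PER_INCH = 25.4

        def lp_per_mm_to_dpi(lp_per_mm):
            # One line pair needs at least two dots (one light, one dark).
            return lp_per_mm * 2 * MM_PER_INCH

        # Human acuity at reading distance: roughly 6 to 9 line pairs per mm.
        print(f"{lp_per_mm_to_dpi(6):.0f} to {lp_per_mm_to_dpi(9):.0f} dpi",
              "needed to look continuous at reading distance")

        # A 35mm frame is 36 x 24 mm; assume ~57 lp/mm (~114 pixels/mm)
        # for average ISO 100 film.
        px_per_mm = 57 * 2
        width_px, height_px = 36 * px_per_mm, 24 * px_per_mm
        print(f"35mm frame: {width_px} x {height_px} pixels",
              f"= {width_px * height_px / 1e6:.1f} megapixels")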

     Note that the theoretical limit of resolution for any imaging system is ultimately set by the nature of light itself: photons vary in wavelength and energy depending on their frequency, and shorter wavelengths can resolve finer detail. Neither film nor CCDs have reached this limit yet, and they are unlikely to; long before that point, limitations in other parts of the system come into play.

     Optically, there's no difference between film cameras and digital cameras. Indeed, some use the same interchangeable lenses, since the same company may make both digital and film cameras. So the differences are primarily in the way the different methods detect and record light.

     Contrast is the difference between the lightest and darkest portions of an image. The highest contrast would be between flat black and pure white. Most inkjet and laser printers can produce an adequate contrast range for an image. Digital and film images are roughly equal in this category. However, most CCDs and photographic films don't match the contrast range of human vision, so for a good print some tinkering - and some sacrifices - are usually necessary.

     Grayscale is contrast discrimination: that is, how many distinct shades of gray can be detected. The typical human eye can only perceive brightness differences of about 2% of full brightness, equivalent to about 50 levels of gray or - in digital terms - a 6-bit grayscale. Again, film and digital are roughly even in this property, and both are actually better than the human eye.
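
     The "6-bit" figure above is easy to check directly with a small sketch (the 2% step size is the one quoted above):

        import math

        smallest_step = 0.02                          # ~2% of full brightness
        levels = 1 / smallest_step                    # about 50 gray levels
        bits = math.ceil(math.log2(levels))           # bits needed to store them
        print(f"{levels:.0f} levels -> {bits} bits")  # 50 levels -> 6 bits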

     Sensitivity is how much light is needed to create a change in a sensor. The rods and cones in human eyes are theoretically capable of responding to the energy of a single photon. However, the noise in the system swamps such small signals. CCDs can be made very sensitive (some CCDs used for astrophotography are cryogenically cooled to reduce the noise, something you can't do with a living visual system... at least, not the ones we're familiar with) and are far superior to film for low-light exposures. That's because film has something known as reciprocity failure. Film grains require a minimum amount of energy to change state. Without enough photons with enough total energy striking it in a short-enough time, a film grain stays unchanged. This is why exposure times increase non-linearly with decreasing light. (In this respect, then, chemical photography is digital. However, above this threshold the change in the silver halide crystals is proportional to the amount and energy of the photons they trap during the exposure.) Film, however, can be treated (in a process called 'hypersensitizing', or 'hypering') to reduce this effect, and is still used for some astrophotography. This involves time-consuming and elaborate treatments far beyond the means of the average photographer, and is not really practical for anything but taking images of faint objects in the night sky.
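
     One common empirical description of reciprocity failure is the Schwarzschild law, in which the photographic effect is proportional to intensity times exposure time raised to a power less than one. The sketch below applies that idea; the exponent is a made-up illustrative value, since real films each need their own measured correction curve.

        # Illustrative reciprocity-failure correction using the Schwarzschild
        # law (effect ~ intensity * time**p, with p < 1 for long exposures).
        # The exponent below is invented for illustration, not a film spec.
        def corrected_exposure(metered_seconds, p=0.9):
            if metered_seconds <= 1:
                return metered_seconds    # failure is negligible for short exposures
            return metered_seconds ** (1 / p)

        for t in (1, 10, 60, 600):
            print(f"metered {t:>4} s -> expose about {corrected_exposure(t):.0f} s")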

     A note, here, on photographic film sensitivity. This is indicated by the ISO rating, which is roughly equivalent to the older ASA rating. The higher the number, the faster - or more sensitive - the film. As a general rule, faster films are grainier. However, film technology has made some major improvements over the past two decades. Today's ISO 400 film is as fine-grained and color-true as the ISO 100 films of 20 years ago, and it gets better every year. Top-quality 35mm color film in the ISO 25 to ISO 50 range has a maximum resolution of over 40 million pixels, though this is essentially never achieved, even on an optical bench, and is only approached under special circumstances. Chemical photography is generally seen as a mature technology, but remember that it is only about a century and a half old. There's lots left to do with it.

     Tonality is a measure of the range of illumination in one image. Here the organic eye is superior to both standard imaging technologies. Imagine looking at a scene with two identically dressed people, one standing in full sunlight, one standing in deep shade. Humans can generally see both at the same time, and surprisingly well (though never equally well at the same time). Film and digital imagers have great trouble doing this: to clearly show the person in the shade, the sunlit portion must be overexposed, and to show the person in the sun, the shaded portion must be underexposed. Producing an artificial image (on a print or a monitor) which allows this discrimination would be difficult, requiring special film and careful exposure and processing, whether in the computer or in the darkroom. Both technologies are getting better at this, but still have a long way to go.
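
     One approach to that problem, alluded to above under processing "in the computer," is to combine a short exposure (which holds the highlights) with a long exposure (which reveals the shadows). The following is only a toy sketch of the idea; the threshold and weighting are my own simplifications, not a description of any real camera or program.

        import numpy as np

        def blend_exposures(short_exp, long_exp, long_gain=8.0):
            # short_exp, long_exp: linear sensor values scaled to [0, 1].
            # long_gain: how many times longer the long exposure was.
            usable = long_exp < 0.95                 # long shot not blown out here
            return np.where(usable,
                            long_exp / long_gain,    # trust the long shot in shadows
                            short_exp)               # fall back to the short shot

        short = np.array([0.02, 0.10, 0.80])   # person in shade barely registers
        long_ = np.array([0.16, 0.80, 1.00])   # shade visible, sunlit area clips
        print(blend_exposures(short, long_))   # estimated scene brightnesses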

     Color fidelity is where things really start to get complicated. In principle human color vision is quite simple, since most folks see with just three types of color receptor. (Some people have fewer. A very few have a 4th color receptor. Evolution in action, here...) However, recording 3 distinct colors with film or digital cameras requires using 3 distinct filters, since neither silver halide grains nor CCDs can actually distinguish frequencies, only total energy. Within their range of sensitivity all frequencies are treated equally with respect to color. (There is some slight difference, due to shorter wavelength photons carrying more energy, but for visible light the effect of this is minor.) So color filters must be used. In color film this means putting 3 layers of silver halide crystals on each strip of film, with color filter layers between them. To get equal exposure, the bottom layer must be more sensitive than the middle, which must be more sensitive than the top. This layering reduces resolution (for a number of reasons) and introduces other problems, but for the most part these can be compensated for. Modern color photographs are generally of high resolution and surprisingly good color fidelity. (Note that this is the most commonly used method. Other techniques - such as three different pieces of monochrome film, exposed separately, each through a different color filter - have also been used, but are generally much clumsier.)

     CCDs record different colors by having a red, green or blue color filter placed over each individual sensor element (sort of the reverse of a color TV). This means that the resolution for each color is actually lower than the total resolution, since each sensor records only one of the three colors; in the common Bayer filter arrangement, half the sensors record green and one quarter each record red and blue. (Some digital cameras even leave some elements uncovered to record all the light for a grayscale value, but those generally are special-purpose or professional cameras. Interestingly, this is pretty much how a human retina does it.) However, by interpolating within and between neighboring clusters of sensors this can be compensated for, and little resolution is actually lost. (Note that the recently developed Foveon imaging chip actually uses a triple-layer color recording technology similar to that in photographic film, providing a higher effective resolution for the same number of pixels recorded, by recording each color for each pixel separately.)
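
     As an illustration of that interpolation step, here is a deliberately crude demosaicing sketch for a Bayer-pattern sensor. Real cameras use far more sophisticated algorithms; this only shows the basic idea of estimating each pixel's two missing colors from its neighbors.

        import numpy as np

        def bayer_mask(h, w):
            # 'G','R' on even rows, 'B','G' on odd rows (a GRBG-style layout).
            mask = np.empty((h, w), dtype='<U1')
            mask[0::2, 0::2] = 'G'; mask[0::2, 1::2] = 'R'
            mask[1::2, 0::2] = 'B'; mask[1::2, 1::2] = 'G'
            return mask

        def demosaic(raw, mask):
            # Estimate missing colors by averaging the known samples in each
            # pixel's 3 x 3 neighborhood (wrapping at the edges, for brevity).
            h, w = raw.shape
            rgb = np.zeros((h, w, 3))
            for idx, color in enumerate('RGB'):
                known = (mask == color)
                vals = np.where(known, raw, 0.0)
                total = np.zeros_like(raw)
                count = np.zeros_like(raw)
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        total += np.roll(np.roll(vals, dy, 0), dx, 1)
                        count += np.roll(np.roll(known.astype(float), dy, 0), dx, 1)
                rgb[..., idx] = total / np.maximum(count, 1)
            return rgb

        raw = np.random.rand(4, 6)                     # pretend sensor readout
        print(demosaic(raw, bayer_mask(4, 6)).shape)   # -> (4, 6, 3)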

     Color depth is how many distinct colors the sensor and storage medium can record. 24-bit color is also known as "true color" because it supposedly provides enough colors to exceed the limit of typical human color discrimination. Good-quality film exceeds this by a large margin. Good-quality CCD arrays also exceed it.
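
     The arithmetic behind "true color" is simple enough to check directly:

        bits_per_channel = 8              # 8 bits each for red, green and blue
        total_bits = bits_per_channel * 3
        print(f"{total_bits}-bit color = {2 ** total_bits:,} distinct colors")
        # -> 24-bit color = 16,777,216 distinct colors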

     So what does all this mean in terms of the final product? If all you want is a 3 X 5 or 4 X 6 snapshot, digital cameras can currently produce images very close in resolution and color fidelity to those produced with film cameras, though in some situations there will be noticeable differences in contrast and tonality. At 600 dpi a 3 X 5 print requires 1800 X 3000 = 5,400,000 pixels. (Greatly simplifying here by assuming a number of only roughly equivalent things are actually equivalent.) Interpolation from 3+ megapixels (inventing values for nonexistent pixels by averaging between the real pixels around them) will allow a digital camera/printer system to achieve and even exceed this. The 5+ megapixel cameras can do this with little or no interpolation, or, with interpolation, can create adequate images for larger prints.

     However, if you want a sizable enlargement, especially if you (like me) like to crop and enlarge a small section of an image to produce a 3 X 5 or larger print, film is still the best choice. At 600 dpi an image covering a standard 8 1/2 X 11 sheet of paper would be 5100 X 6600 = 33,660,000 pixels, which is within the range of high-quality 35mm film but not of current consumer-level CCD cameras. Additionally, photographic print paper is usually very fine-grained, which means that good quality 3 X 5 prints will have far more information than is visible to the naked eye. And if you like to record color-rich, detailed images of masquerade costumes, professional-quality 35mm film in a consumer-level camera is vastly better than a prosumer-level digital camera. Professional-level digital still cameras which equal 35mm film systems are roughly ten times as expensive as high-end 35mm rigs (though the prices get a bit lower every year). To equal medium- or large-format camera performance, you have to buy a medium- or large-format camera body and attach a scanning back. Even then, these take over a minute to make a high-resolution color scan. (Again, this gets better every year.) These backs are essentially flatbed or drum scanners mounted in a frame which attaches to the camera body. For the price ($10,000 at the very bottom level just for the scanning back) you could purchase a high-quality medium- or large-format camera with several lenses and a great deal of film. It seems, then, that the more demanding the image requirements, the stronger the case for using film over digital. For now.
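
     The pixel counts in the last two paragraphs come from straightforward multiplication, and a small helper makes it easy to run the same check for any print size. The 600 dpi target is the figure used above; the function name is mine.

        def pixels_for_print(width_in, height_in, dpi=600):
            # Pixels needed to cover a print of the given size at the given density.
            return int(width_in * dpi) * int(height_in * dpi)

        for w, h in ((3, 5), (4, 6), (8.5, 11)):
            mp = pixels_for_print(w, h) / 1e6
            print(f"{w} x {h} inch print at 600 dpi: {mp:.1f} megapixels")
        # 3 x 5 -> 5.4, 4 x 6 -> 8.6, 8.5 x 11 -> 33.7 megapixels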

     But there's more to imaging than just the image. Film is an amazingly dense storage medium. See the note above on the number of pixels in a good-quality 35mm film frame. There's also the matter of processing. Yes, you have to develop and print film. But you also have to move digital camera files into a computer (some cameras will connect directly to some printers) to print the image. Which is faster and more convenient depends on the circumstances. Do you have your vacation photos processed and printed during the trip, or when you get home? To achieve photographic print quality with digital images you need to send your files to a professional printer anyway, or buy an expensive photo-quality printer. And here we come to digital imaging's greatest weakness. A digital image equal to a good-quality 35mm film frame (such as would be made from a film frame scanned on a drum scanner) will be over 32 megabytes without compression. A frame of color 35mm film stores this much data in a paper-thin piece of material just 3.6 X 2.4 centimeters. Memory cartridges are getting smaller, but they ain't that small yet!
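
     That "over 32 megabytes" figure follows directly from the pixel count and color depth already discussed; here is a quick check using the frame dimensions quoted earlier:

        width, height = 4100, 2700     # pixel estimate for average ISO 100 film
        bytes_per_pixel = 3            # 24-bit color
        size_mb = width * height * bytes_per_pixel / 2**20
        print(f"about {size_mb:.0f} MB uncompressed")   # about 32 MB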

     Not only does all this data need a lot of storage room, but it needs time to store. Exposure times for film are generally in the hundredths to thousandths of a second, and once the shutter is closed the storage work is done. Yes, the film needs to be developed and fixed and printed, but exposed film kept in a dark, cool place can be successfully processed decades later. Processed film stored properly will last indefinitely. Exposure times for a digital camera are about the same, but once the image is taken it must be moved to a non-volatile storage medium, usually within the camera, and that takes time. With a modern 35mm camera, taking a shot every second is not difficult. Some higher-level 35mm cameras can take over 6 frames per second. In a digital camera the transfer to memory can take up to several seconds per shot, depending on the camera and on the image resolution and compression chosen. Moreover, a 35mm film camera can take 36 images or more in one loading. With consumer-level digital cameras, to even approach this number of images you must switch to a lower image quality (by reducing the resolution setting and/or by increasing the level of compression, though the second option means taking even more time to store the image). Even a medium-resolution image can be a hundred or more kilobytes, compressed. A maximum-resolution image, lightly compressed or uncompressed, could run to tens of megabytes, depending on such things as contrast range and color variation in the image. Yes, digital storage density is improving, but when will it be as convenient as tucking a roll of exposed film in your pocket every 36 high-resolution shots?
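
     To see why those per-shot delays add up, consider the following rough sketch. The card write speed and file sizes are assumed figures chosen purely for illustration, not specifications of any particular camera or memory card.

        # Assumed values for illustration only, not camera specifications.
        card_write_speed_mb_per_s = 1.0

        for label, file_size_mb in (("medium resolution, compressed", 0.3),
                                    ("full resolution, compressed", 2.0),
                                    ("full resolution, uncompressed", 18.0)):
            seconds = file_size_mb / card_write_speed_mb_per_s
            print(f"{label:30s}: about {seconds:.1f} s to store")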

     So, for now, film photography is still quite viable. But while film (as mentioned above) still has room for improvement, digital imaging is just taking off. Within 10 to 20 years digital imaging will be superior in all measures to consumer-level film photography, at greatly lower cost. Even then, though, chemical photography won't die. After all, there are still hobbyists making tintypes!


     This work is Copyright 2002 Rodford Edmiston Smith. Permission to reprint it must be obtained from the author, who can be reached at: stickmaker@usa.net