16-bit or 8-bit images for printing?
I make fine-art prints from digital files. My basic information sheet notes that the best prints are created from 16-bit images, like TIFFs, in a wide color space such as AdobeRGB or ProPhotoRGB.
One person who got my information sheet was kind enough to write back, and explained to me that my insistence on 16-bit image files was incorrect, and that all that was needed was a simple 8-bit JPEG. He suggested that “image files are reduced to 8-bit automatically before being sent to the printer,” and sourced that statement to a thread on DPReview and a technician at Bay Photo (a large company that prints photos). They also (apparently) said that Apple computers only have an 8-bit printing path.
Well, I’ve been printing digital files for more than two decades now (I began in 2001) and tried to assure him that those rumors were incorrect.
So let’s take a look:
1) Apple computers have had a 16-bit printing path since 2007. Epson printer drivers have a 16-bit path as well, and can encode and send that information to the printer.
2) Images are not sent to the printer. Images are sent to the printer driver, which resides on the computer. The job of that driver is to reduce the image data to a “Printer Command Language” that determines which of the 8 (or 10, or 12) CMYK inks will be squirted out of the print head, and when and where. The “image” never makes it to the printer. What is sent to the printer is a set of PCL instructions as ASCII text.
3) It is true that a final, ready-to-print 8-bit image (which theoretically offers over 16 million tones*) is all that is needed to make a color print. The human eye cannot tell the difference between a color image created from an 8-bit versus a 16-bit color file, especially with today’s modern dithering algorithms. (In some cases, flat tonal ranges such as a blue sky or deep shadows may show banding when the source is 8-bit, but as of 2022 that has mostly been eliminated.)
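Those dithering algorithms are doing real work. Here is a minimal Python/NumPy sketch of the idea, not how any particular printer driver implements it: a smooth grey ramp quantized to a deliberately tiny number of levels shows hard bands with plain rounding, but its local average stays smooth with simple error-diffusion dithering.

    import numpy as np

    ramp = np.linspace(0.0, 1.0, 4000)   # a perfectly smooth grey ramp
    levels = 8                           # deliberately coarse, to exaggerate banding

    # Plain rounding: the ramp collapses into 8 flat bands.
    banded = np.round(ramp * (levels - 1)) / (levels - 1)

    # 1-D error diffusion (the idea behind Floyd-Steinberg dithering): carry each
    # pixel's rounding error forward so neighbouring pixels alternate between
    # levels and their local average stays close to the true tone.
    dithered = np.empty_like(ramp)
    carry = 0.0
    for i, value in enumerate(ramp):
        q = np.round((value + carry) * (levels - 1)) / (levels - 1)
        dithered[i] = q
        carry = (value + carry) - q

    # Compare 100-pixel local averages against the original ramp.
    blocks = lambda x: x.reshape(-1, 100).mean(axis=1)
    print(np.abs(blocks(banded) - blocks(ramp)).max())    # ~0.06  -> visible steps
    print(np.abs(blocks(dithered) - blocks(ramp)).max())  # ~0.001 -> looks smooth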
But notice the caveats above: “color image” and “ready-to-print”.
Fine art printing (and in fact, almost any digital printing) requires that the original image be first altered – be “tuned up” to make the best print. Shadows may need to be opened; highlights may need to be adjusted; colors may need correction; flaws may need to be removed… the list of possible edits is long. In fact, adjustments may even need to be made based on the type of paper used, although this is usually reserved for expensive fine-art printing.
There simply is not enough headroom for much adjustment in an 8-bit original file. A 16-bit original can be edited, corrected and prepared for printing, but a JPEG (for practical purposes) cannot. JPEGs are pretty much “baked” already, so any attempt at adjustment is more likely to corrupt the image and make it worse.
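To make the headroom point concrete, here is a small Python/NumPy sketch with synthetic data and assumed numbers: the same aggressive shadow lift applied to 8-bit and 16-bit versions of a dark gradient leaves far fewer distinct tones in the 8-bit result, which is exactly what shows up as posterization or banding.

    import numpy as np

    # A smooth gradient covering only the darkest ~10% of the tonal range.
    shadow_ramp = np.linspace(0.0, 0.10, 2000)

    as_8bit  = np.round(shadow_ramp * 255).astype(np.uint8)     # 8-bit original
    as_16bit = np.round(shadow_ramp * 65535).astype(np.uint16)  # 16-bit original

    def lift_shadows(pixels, maxval):
        """Aggressive shadow lift (gamma 0.4), then requantize to 8 bits for output."""
        lifted = (pixels / maxval) ** 0.4
        return np.round(lifted * 255).astype(np.uint8)

    # Count how many distinct tones survive the edit in each case.
    print(len(np.unique(lift_shadows(as_8bit, 255))))     # ~26 tones  -> banding
    print(len(np.unique(lift_shadows(as_16bit, 65535))))  # ~100 tones -> smooth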
That said, once the adjustments are made and finalized in the 16-bit file, one could (if desired) save that image out as an 8-bit JPEG and successfully send it to the printer. (You’d be doing the switchover yourself; “image files are reduced to 8-bit automatically before being sent to the printer” is simply not true, and not what a driver does.)
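As a concrete sketch of that last step in Python (hypothetical filenames, using the tifffile and Pillow libraries): all the editing happens in the 16-bit master, and the 8-bit reduction is the very last thing you do before handing the file off.

    import numpy as np
    import tifffile
    from PIL import Image

    # Open the finished 16-bit master -- all editing is already done at 16 bits.
    master = tifffile.imread("final_master_16bit.tif")      # uint16, 0..65535

    # Only now, as the very last step, squeeze the data down to 8 bits (0..255).
    final_8bit = np.round(master / 257.0).astype(np.uint8)  # 65535 / 257 = 255

    # Save the delivery file; the 16-bit master stays untouched on disk.
    Image.fromarray(final_8bit).save("final_for_print.jpg", quality=95)

(If the lab asks for a particular color space, typically sRGB, that conversion would be a separate step not shown here.)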
That’s where the confusion comes in: it’s only the final, finished edit that can be sent to the printer as an 8-bit file.
In fact, Bay Photo uses industrial-size laser printers to output on plastic “paper” and such machines may (I don’t know for sure) require an 8-bit file as input. They may not even have a 16-bit path available.
If that is where the Bay Photo technician got his understanding, he was considering the needs of his industrial printer, and also assuming that the file was finalized and needed no adjustment: “ready-to-print”.
If the image is going to be edited and prepared for a fine-art print, however, a 16-bit file is required to work from. (Yes, you can edit and adjust a JPEG, but not with any quality.)
Second, there is the caveat of “color image” noted above. We are used to seeing color, and our eye/brain system fills in a lot of detail. That is partly why a color 8-bit print looks as good as a 16-bit one. (“Metamerism” is what lets us see just a very few ink colors (8–12 or so) of dots as a massive range of hues and saturations.*)
However, when you remove the color and all that remains is the luminosity information (in other words, a black-and-white image), the 256 tones of an 8-bit image are simply too few to prevent “banding,” and the problems (usually in the deeper shadows) can become visually obvious.
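Some rough, assumed numbers make the point. Suppose a smooth sky gradient runs across a 24-inch print at 300 ppi but only spans a quarter of the tonal range:

    # Back-of-the-envelope sketch with assumed numbers: how wide does each tonal
    # "step" become in a smooth B&W gradient?
    print_width_px = 24 * 300         # 24-inch print at 300 ppi = 7200 pixels
    gradient_span = 0.25              # the sky only covers 1/4 of the tonal range

    greys_8bit  = int(256 * gradient_span)      # 64 distinct greys available
    greys_16bit = int(65536 * gradient_span)    # 16384 distinct greys available

    print(print_width_px / greys_8bit)    # ~112 pixels per band -> visible stripes
    print(print_width_px / greys_16bit)   # ~0.4 pixels per band -> imperceptible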
Using a dedicated B&W printer with toned inks (such as Piezography) will make an 8-bit B&W image fail totally, even if it has been adjusted and finalized.
In other words, fine-art B&W images need the tonal range of 16 bits and must be sent to the printer driver as such. (Doing so will make it obvious that there is no “automatic reduction to 8-bit” going on, of course.)
So, there you are. I stand by my insistence on a 16-bit file, for the reasons listed above. Besides my own 22 years of experience, I verified all this by going directly to several technical experts, including one who has written printer drivers. If anyone knows what’s going on, it would be the guy who actually wrote the code.
————
*8 bits can represent 256 different numbers. An “8-bit image” refers to 8 bits of intensity in each of the red, green and blue channels, or 256 × 256 × 256, which is over 16 million combinations of R, G and B. Note that I didn’t say 16 million “colors,” since there are thousands of RGB combinations that appear to be the same color to the human eye due to metamerism. (https://www.sciencedirect.com/topics/engineering/metamerism)
In fact, it is generally accepted that most people can see “only” about 1 million different colors, so for both of those reasons, an 8-bit image is quite sufficient for a photographic print in almost all cases. You just cannot adjust one cleanly for the final print.
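The footnote’s arithmetic, spelled out as a tiny Python calculation:

    bits_per_channel = 8
    levels_per_channel = 2 ** bits_per_channel   # 256 intensity values per channel
    print(levels_per_channel ** 3)               # 16,777,216 RGB combinations

    # For comparison, a 16-bit file -- the extra values are editing headroom,
    # not extra visible colors.
    print((2 ** 16) ** 3)                        # 281,474,976,710,656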