Tips for Mac users

resolution, part 2

tvalleau

OK… continuing (this too will have to be modest, as I’ve just discovered an issue with one of my websites that needs fixing today.)

If you were really paying attention last night, you might have seen something wrong with the definition I signed off with:

Meanwhile, here’s a definition: A pixel is a picture element; the smallest “dot” of information. One of those numbers from that table of numbers we just downloaded from the camera. (Again, that’s not quite the proper definition, but it’s close enough for this discussion.)

The problem? When we get that file of numbers loaded into a program on our computer, each “pixel” isn’t red or green or blue: we see it as orange, or yellow, or tan, or some other color, one of perhaps millions or billions of shades or hues.

In fact, we have to separate out a true “picture element” from its components. Remember that I said you combine red, green and blue to make a color? (To save typing, it’s RGB from now on…) But each of those RGB numbers from the camera file is an individual, single number, from a different individual photo site.

So, what’s going on? Well, for each one of those different individual photo sites (which are either R or G or B), the computer looks at the sites around it, and decides from those surrounding sites what the likely amount of the other two colors must be. It’s called “demosaicing.”

Whoa! What? OK: think of a checkerboard again. Each square is one of three alternating colors: R, G, or B. Mentally impose the whole image over the whole checkerboard. You end up with an image made up of only three colors, resting alongside each other. Not remotely realistic.

Now, look at just one of those sites. Say it’s blue. Next to it is a green one, and a red one, on the sides and top. What the computer (either yours or the camera’s) does is look at those surrounding sites and extrapolates how much G and how much R must have been on the B site (based on how much is actually on the sites that surround the B site.)

Yep: in a sense, it “makes up” the missing colors for each site.
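To make that “makes up the missing colors” idea concrete, here’s a toy sketch in Python. Everything here is invented for illustration — the patch layout, the numbers, and the function name; a real sensor uses a Bayer pattern (with twice as many green sites as red or blue) and much smarter interpolation:

```python
# A 3x3 patch of single-color photo sites. The center site measured
# only Blue; we estimate its Red and Green from the neighbors:
#
#     G   R   G
#     B  [B]  R
#     G   R   G

patch = {
    (0, 0): ("G", 90), (0, 1): ("R", 200), (0, 2): ("G", 95),
    (1, 0): ("B", 40), (1, 1): ("B", 50),  (1, 2): ("R", 210),
    (2, 0): ("G", 88), (2, 1): ("R", 190), (2, 2): ("G", 92),
}

def estimate_center_rgb(patch):
    """Keep the color the center site actually measured; estimate the
    other two by averaging the surrounding sites that measured them."""
    center_color, center_value = patch[(1, 1)]
    neighbors = {"R": [], "G": [], "B": []}
    for pos, (color, value) in patch.items():
        if pos != (1, 1):
            neighbors[color].append(value)
    rgb = {}
    for color in ("R", "G", "B"):
        if color == center_color:
            rgb[color] = center_value  # measured directly
        else:
            rgb[color] = sum(neighbors[color]) // len(neighbors[color])
    return rgb

print(estimate_center_rgb(patch))  # the center site, now with all three colors
```

The center’s Blue value is kept as measured, while its Red comes from the three red neighbors and its Green from the four green corners — exactly the “look at the surrounding sites and extrapolate” step described above.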

So now, when you open one of those files of numbers in Photoshop, you can click on the “Channels” window and see, for any pixel on the screen, the Red channel, the Green channel and the Blue channel, and at the top, the combined RGB channel, in full living color (the combination of the three RGB channels).

When you get a jpg file from your camera, the calculation is done for you, so that each pixel comes ready with three numbers, each representing one of the three primary colors. If you use raw files, however, your desktop computer will do that computing, under your control. Either way, you end up with a point (one of those 3000 x 4000 points) that has three numbers attached to it.

As you can see, though, we’re still dealing with numbers. In fact, each of those channels (RGB) is just a list of numbers, which your computer can -interpret- as being red or green or blue. They are just numbers; you can tell the computer to interpret them however you like. In fact, when one “converts” a color image to Black and White, all that’s really going on is that you’re telling Photoshop “don’t interpret the channels as colors; just interpret them as bright / dark” (called “luminosity”).
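Here’s a small Python sketch of that “reinterpret the numbers” idea. The weights are the classic Rec. 601 luma coefficients, which count green heavily because the eye is most sensitive to it; Photoshop’s actual conversion offers more control, so treat this as the concept only:

```python
# Converting a color pixel to black and white: reinterpret the three
# channel numbers as a single brightness (luminosity) value.
# Weighted average using the Rec. 601 luma coefficients.

def to_luminosity(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_luminosity(255, 255, 255))  # pure white -> 255
print(to_luminosity(0, 0, 0))        # pure black -> 0
print(to_luminosity(255, 0, 0))      # pure red -> 76 (red alone isn't very bright)
```

Notice that a fully-on red pixel maps to a fairly dark gray — the three channels don’t contribute equally to perceived brightness.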

What do I mean by “interpret?” Pretty simple: if you are looking at say the Blue channel, and the number in the file for some pixel happens to be 30, the computer will turn on the Blue on your monitor to a level of 30. If the number is 255 instead, it will tell the monitor to blast out that blue pixel with full brightness; if the level is Zero, it will tell the monitor to turn off the blue. Just like turning up or down a dimmer lamp. The higher the number, the brighter.

Zero is off – black. 255 is fully on (either R, G, or B).

Colors other than pure red, green, or blue, however, are combinations of the RGB primaries at various individual levels. For example, 103, 50, 117 are the RGB values for violet; 255, 255, 0 are the numbers for yellow (red + green = yellow). 128, 128, 0 is also yellow, but it isn’t as bright. And any place where all three numbers are the same is a shade of gray.
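Those examples translate directly into code. Here’s a little Python sketch using the same numbers as above (the function name is just for illustration):

```python
# The RGB triples from the text, as (R, G, B) tuples in the 0-255 range:
violet = (103, 50, 117)
yellow = (255, 255, 0)       # red + green at full brightness
dim_yellow = (128, 128, 0)   # same mix, at half brightness

def is_gray(r, g, b):
    """Any pixel whose three channel values are equal is a shade of gray."""
    return r == g == b

print(is_gray(128, 128, 128))  # True: middle gray
print(is_gray(*violet))        # False: the channels differ, so it's a color
```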

So how does the computer monitor make those colors? Just like I said: if you look at your computer monitor with a magnifying glass, you’ll see that it’s pretty much the same thing as the sensor in your camera: there are little squares (or circles, or ribbons) which are either red, or green, or blue, all packed tightly together next to each other. So, if you have a photo which has a big red wall, then the pixels where it’s red have the RGB numbers 255, 0, 0 (full red brightness; no green and no blue), and the monitor will go through and turn on all the red squares, and turn off all the green and blue ones. Why does it look solid red to you when in fact 2/3 of the little squares are turned off? Because the squares are so little that your eye blends them together.

And that’s why if you have a red square at 103, next to a green square at 50, next to a blue square at 117, your eye sees a violet dot.

In short, if you skip over all the discussion, then it’s really like this: a red photo site on the sensor puts out a number value based on how bright the light was that hit it. That number is then used to turn on a red square on your computer screen to the same level of brightness.

Red(camera) -> 128 -> Red (monitor)
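That little diagram can be sketched as two toy functions (the names and the output string are invented for illustration; the point is simply that the same number flows from sensor to screen):

```python
# The whole pipeline in miniature: a red photo site measures brightness,
# that number is stored in the file, and the monitor uses the very same
# number to light its matching red square.

def camera_red_site(light_level):
    """The sensor records how bright the red light was (0-255)."""
    return light_level

def monitor_red_square(value):
    """The monitor turns its red sub-pixel on to that same level."""
    return f"red square at brightness {value}/255"

value = camera_red_site(128)       # sensor measures the scene
print(monitor_red_square(value))   # -> red square at brightness 128/255
```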

end of part 2

back later.

T
