<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>General info &#8211; Tracy Valleau</title>
	<atom:link href="https://valleau.art/blog/category/general-info/feed/" rel="self" type="application/rss+xml" />
	<link>https://valleau.art/blog</link>
	<description></description>
	<lastBuildDate>Fri, 20 Mar 2026 18:46:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>AI will not destroy humanity &#8211; our faith in it will.</title>
		<link>https://valleau.art/blog/ai-will-not-destroy-humanity-our-faith-in-it-will/</link>
					<comments>https://valleau.art/blog/ai-will-not-destroy-humanity-our-faith-in-it-will/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 18:28:32 +0000</pubDate>
				<category><![CDATA[Way Off Topic]]></category>
		<category><![CDATA[Just life tips]]></category>
		<category><![CDATA[General info]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=667</guid>

					<description><![CDATA[  AI will not destroy humanity &#8211; our faith in it is what will destroy us. Computer technology is amazing: it can do trillions of simple calculations in a second. But speed is not wisdom. Computers and technology do only dumb, blind, simple calculations. Being amazed by this is just like watching a magician &#8211; [&#8230;]]]></description>
										<content:encoded><![CDATA[<p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 13.0px 'Helvetica Neue';"> </p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;" data-pm-slice="0 0 []">AI will not destroy humanity &#8211; <em>our faith in it</em> is what will destroy us.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Computer technology is amazing: it can do trillions of simple calculations in a second. But speed is not wisdom. Computers and technology do only dumb, blind, simple calculations. Being amazed by this is just like watching a magician &#8211; there is no such thing as magic.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Computer &#8220;knowledge&#8221; is not a real thing either&#8230; it&#8217;s just speed. But we humans have put our faith in computers. AI is an amazing technology&#8230; but it&#8217;s still just a computer. About 50% of what it says in a general response is wrong (HLE). It can only get 80% of coding tests correct &#8211; 20% is wrong (SWE). Generalized reasoning (ARC-AGI-2) ranges from 0% to 80% correct (and thus from 20% to 100% wrong).</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Over the past 20-30 years we, as a society and culture, have come to believe that there is a technological solution to everything. We are just applying mathematical formulas to every question we ask. That&#8217;s what computers, including AI, do after all.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">That trust is misplaced. We should instead be thinking &#8220;here&#8217;s the AI advice: it&#8217;s somewhere between 20% and 50% incorrect, so we need to add in our own human experience and judgement to get the answer that will best serve our needs.&#8221;</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">As an example, look at how AI is being used right now: the attack on Iran is pretty much run by AI. The accuracy of the strikes has been very impressive. But sometimes, the 20% &#8220;errors&#8221; creep in and we bomb a school full of children. That&#8217;s us putting all our faith in AI, instead of modifying it with human experience and thought.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">And as a further example, AI flatly failed our faith in it when Iran started bombing its neighbors and closed the Strait of Hormuz. A simple application of the human condition could have foreseen both. AI did not.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">AI will not destroy humanity &#8211; <em>our blind faith in it</em> is what will destroy us.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Artificial General Intelligence will <em>never happen</em> with computers &#8211; <em>all you can get out of massive computing is a massive computer.</em></p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">AGI is simply magical thinking, used to raise money. That’s why “the bubble will burst.”</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Computers, regardless of size, will never be able to <em>experience reality</em> as humans do. They cannot feel emotion, experience aesthetics or, ultimately, compassion&#8230; and those change human thinking, reasoning and behavior in irreducible ways.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">The tool will never replace the carpenter, nor will the brush replace the artist.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">When I look at the Mona Lisa, or a crying baby, or listen to music, my emotional response, though it may be similar to millions of others, is uniquely my own, simply because all my previous experiences are uniquely my own.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Substituting chaos theory for emotion does not cover it. Not even quantum computing can get around the simple fact that space and time constrain unique experience. No computer-in-a-box, existing as we each do, at one time and one place, and regardless of how it calculates, can achieve more than that.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">AI will not destroy humanity. <em>Relying on it</em> as if vast knowledge alone were somehow wisdom, is what will destroy us.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">I hope we can wake up.</p>
<p style="caret-color: #000000; color: #000000; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-line: none; text-decoration-thickness: auto; text-decoration-style: solid;">Tracy Valleau</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 13.0px 'Helvetica Neue';"> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/ai-will-not-destroy-humanity-our-faith-in-it-will/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Fine art printing service award/certification</title>
		<link>https://valleau.art/blog/fine-art-printing-service-award-certification/</link>
					<comments>https://valleau.art/blog/fine-art-printing-service-award-certification/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Fri, 08 Nov 2024 21:19:37 +0000</pubDate>
				<category><![CDATA[printing]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=592</guid>

					<description><![CDATA[Dear Friends, As I push toward 80, I&#8217;ll admit I find it easier to stay home and make prints than to go out and take photos. And, as many of you know, one of my joys is making prints on paper, whether my own images or those of others. To that end, I&#8217;m pleased to [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Dear Friends,</p>
<p>As I push toward 80, I&#8217;ll admit I find it easier to stay home and make prints than to go out and take photos. And, as many of you know, one of my joys is making prints on paper, whether my own images or those of others.</p>
<p>To that end, I&#8217;m pleased to let you know that I was recently honored by Canson-Infinity with a Certified Print Lab seal, &#8220;&#8230;representing expertise in Fine Art printing.&#8221; Canson, founded in 1557, has produced some of the world&#8217;s finest papers, used by Picasso, Degas, Matisse, Cézanne, Van Gogh and Monet.</p>
<p>Canson says: &#8220;The Certified Print Labs are a network of ‘best in class’ studios and boutique labs based around the world. All the partners have completed a technical evaluation and offer the best print quality on Canson Infinity papers, combined with excellent service. The network offers services for photographers, printmakers &amp; artists looking for excellent quality and service.&#8221;</p>
<p>Near as I can tell, I&#8217;m the only such certified printer between San Francisco and Los Angeles.</p>
<p>Personally, I&#8217;m delighted because I have favored Canson Infinity and Arches papers for almost two decades now. I like the way their papers take the ink, and the gamut they can handle. Certainly there are other papers I use, but images just seem to look more elegant on Canson papers.</p>
<p>Prints are usually $85 each, and that includes paper, ink, and 30-60 minutes of cleanup, adjusting, and other prep for making an exceptional print.</p>
<p>Please feel free to pass this information along to others who might need my services, and keep me in mind for printing your next show or gallery pieces.</p>
<p>I&#8217;ve got a website for this service: https://itstheprint.com</p>
<p>Stay well!</p>
<p> </p>
<p>Tracy</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/fine-art-printing-service-award-certification/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is a camera raw file?</title>
		<link>https://valleau.art/blog/about-camera-raw-files-2/</link>
					<comments>https://valleau.art/blog/about-camera-raw-files-2/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Sun, 20 Oct 2024 19:39:07 +0000</pubDate>
				<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=568</guid>

					<description><![CDATA[I recently ran across someone who didn&#8217;t understand what a raw file is, and had it confused with an image file. So, let&#8217;s take a quick look at it. Perhaps you&#8217;re old enough to remember the Weston Lightmeter: It had a &#8220;photovoltaic cell&#8221; (don&#8217;t panic) which is simply some goop which when exposed to light [&#8230;]]]></description>
										<content:encoded><![CDATA[<p style="font-size: 16px;">I recently ran across someone who didn&#8217;t understand what a raw file is, and had it confused with an image file.</p>
<p style="font-size: 16px;">So, let&#8217;s take a quick look at it.</p>
<p style="font-size: 16px;">Perhaps you&#8217;re old enough to remember the Weston Lightmeter:</p>
<p style="font-size: 16px;"><img decoding="async" style="display: block; margin-left: auto; margin-right: auto;" title="Weston meter.jpg" src="https://valleau.art/blog/wp-content/uploads/2025/05/Weston-meter-1.jpg" alt="Weston meter." width="146" height="144" border="0" /></p>
<p style="font-size: 16px;">It had a &#8220;photovoltaic cell&#8221; (don&#8217;t panic), which is simply some goop which, when exposed to light, generates a very tiny electrical current. The brighter the light, the greater the current, and the greater the current, the farther the little needle on the display would swing to the right. Underneath the needle is a printed chart, with numbers, so your light meter reading was simply the number that was underneath the needle when it stopped moving.</p>
<p style="font-size: 16px;">In other words, the light meter effectively measured the &#8220;luminosity&#8221; (intensity/brightness of the light) in any given environment.</p>
<p style="font-size: 16px;">Now, if you were to cover the front of the meter, where the photovoltaic cell lives, with say blue cellophane, then only the blue light would enter, and you&#8217;d be measuring the luminosity of the blue light only. If you wanted to know the luminosity of only the green or red part of the spectrum, you&#8217;d just cover the cell with green (or red) cellophane.</p>
<p style="font-size: 16px;">OK: simple enough&#8230; but <em>that is exactly how your digital camera captures the data it needs to (later) make an image</em>.</p>
<p style="font-size: 16px;">The sensor in your camera, which gets exposed to light when you push the shutter release button, is (in the miracle of modern technology) covered in a checker-board pattern, with literally millions of &#8220;little Weston meters&#8221;&#8230; and each one has a piece of colored cellophane (aka a &#8220;filter&#8221;) on it, either red or green or blue. (The actual arrangement of those filters is called a Bayer pattern.)  Here&#8217;s what it looks like:</p>
<p style="font-size: 16px;"><img fetchpriority="high" decoding="async" style="display: block; margin-left: auto; margin-right: auto;" title="CleanShot 2025-05-23 at 10.38.50.jpg" src="https://valleau.art/blog/wp-content/uploads/2025/05/CleanShot-2025-05-23-at-10.38.50.jpg" alt="CleanShot 2025-05-23 at 10.38.50." width="346" height="225" border="0" /></p>
<p style="font-size: 16px;">So, when the sensor is exposed to light, each sensor cell records a single number proportional to the intensity of the light at that location on the sensor. In the image above, that would look like this:</p>
<p style="font-size: 16px;">(I&#8217;m making up the numbers, of course, for this example&gt;)</p>
<p style="font-size: 16px;">(Row 1)   green = 300    red = 55  green = 340   red = 66 </p>
<p style="font-size: 16px;">(Row 2)  blue = 4000   green = 421  blue  = 3980  green = 345</p>
<p style="font-size: 16px;">(Row 3)  green = 298    red = 66  green = 302   red = 75</p>
<p style="font-size: 16px;">(Row 4)  blue = 4100   green = 407  blue  = 4009  green = 301</p>
<p style="font-size: 16px;">If you were going to save those 16 cells (or all the millions of them) to memory or disk, the data for that group would be:</p>
<p style="font-size: 16px;">300 55 340 66 <br />4000 421 3980 345 <br />298 66 302 75 <br />4100 407 4009 301</p>
<p style="font-size: 16px;">or really more like this, all run together:</p>
<p style="font-size: 16px;">300 55 340 66 4000 421 3980 345 298 66 302 75 4100 407 4009 301</p>
<p style="font-size: 16px;">Those individual cells are called &#8220;photosites&#8221; or sometimes &#8220;sensels,&#8221; but you&#8217;ll notice that <em>each one records the intensity of only <strong>one</strong> color</em>. The whole thing is called a &#8220;mosaic,&#8221; since that&#8217;s what it looks like. They are NOT called &#8220;pixels.&#8221; A pixel belongs to an image (on your monitor or in a print) and <em>is a single cell</em> <em>with all three values</em> &#8211; red, green and blue &#8211; so that you get full color in each place, not just one color.</p>
<p style="font-size: 16px;">That is what a &#8216;raw&#8217; (which means not finished, or not cooked if you prefer) file is: a long string of numbers representing the luminosity intensity values straight from the camera sensor.</p>
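<p style="font-size: 16px;">To make that concrete, here is a small sketch (in Python, using the made-up example numbers from the table above) of how raw data is just one luminosity value per photosite:</p>

```python
# A tiny 4x4 Bayer-pattern mosaic, using the made-up example values above.
# Each entry is ONE number: the luminosity a single photosite recorded
# through its red, green, or blue filter.
mosaic = [
    [300, 55, 340, 66],      # row 1: G R G R
    [4000, 421, 3980, 345],  # row 2: B G B G
    [298, 66, 302, 75],      # row 3: G R G R
    [4100, 407, 4009, 301],  # row 4: B G B G
]

# Written to disk, the raw data is essentially all of this run together:
flat = [value for row in mosaic for value in row]
print(flat)
# [300, 55, 340, 66, 4000, 421, 3980, 345, 298, 66, 302, 75, 4100, 407, 4009, 301]
```

<p style="font-size: 16px;">A real raw file also carries a header and camera metadata, but the heart of it is that flat run of per-photosite numbers.</p>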
<p style="font-size: 16px;">&#8212;-</p>
<p style="font-size: 16px;">Obviously however, that&#8217;s not an image &#8211; it&#8217;s just a bunch of data. Even more apparent is that each cell as recorded is not &#8220;full color&#8221; but only the intensity of red, green or blue.</p>
<p style="font-size: 16px;">Where does the full color image (with a lot more colors than only red, green or blue) come from?</p>
<p style="font-size: 16px;">Computer magic: that mosaic is run through software which &#8220;demosaics&#8221; it. The software looks at all the surrounding cells &#8211; R, G and B &#8211; and extrapolates (figures out) what the full color of each pixel (now is the time to call it a pixel) <em>should</em> be, and saves each cell with three numbers: a value for red, a value for green, and a value for blue. A pixel is a &#8220;picture element,&#8221; and it carries all three RGB components.</p>
<p style="font-size: 16px;">The data might look like this:</p>
<p style="font-size: 16px;">(Row 1, Cell 1)     red = 255  green = 133   blue = 18</p>
<p style="font-size: 16px;">(Row 1, Cell 2)     red = 255  green = 131   blue = 19</p>
<p style="font-size: 16px;">(Row 1, Cell 3)     red = 254  green = 136   blue = 21</p>
<p style="font-size: 16px;">(Row 1, Cell 4)     red = 253  green = 140   blue = 22</p>
<p style="font-size: 16px;">or those 4 cells just from Row 1 above:</p>
<p style="font-size: 16px;">255 133 18 <br />255 131 19 <br />254 136 21 <br />253 140 22</p>
<p style="font-size: 16px;">aka</p>
<p style="font-size: 16px;">255 133 18 255 131 19 254 136 21 253 140 22</p>
<p style="font-size: 16px;">In the image file, instead of <em>one</em> number per cell,  there are now <em>three numbers per cell</em> representing the full R, G, B value of that single pixel.</p>
<p style="font-size: 16px;">After fully demosaicing the data from the sensor/raw file (<em>which itself remains unchanged</em>), those extrapolated values are <em>saved into a new, different and familiar &#8220;image&#8221; file</em>, such as a jpg or tif. Now you have two files: a raw file (in which the original data is unchanged) and a new image file (full of new data).</p>
<p style="font-size: 16px;">Let&#8217;s say you have a bag of groceries, including, flour, eggs, sugar and milk.</p>
<p style="font-size: 16px;">A &#8220;raw&#8221; file is more like the bag of uncooked groceries, while an &#8220;image&#8221; file is more like a finished cake.</p>
<p style="font-size: 16px;"> </p>
<p style="font-size: 16px;">Well, that&#8217;s the gist of it. Vastly oversimplified of course,  but that&#8217;s basically how it all works.</p>
<p style="font-size: 16px;">HTH</p>
<p style="font-size: 16px;"> </p>
<p style="font-size: 16px;">addendum:</p>
<p style="font-size: 16px;">About editing a raw file: you don&#8217;t. The raw file data remains the same. The changes are applied when you demosaic the raw file into a bitmap image file.</p>
<p style="font-size: 16px;">In digital photography, the &#8220;sidecar&#8221; file associated with a raw image file typically contains metadata and adjustments made to the image, including exposure adjustments, white balance, and other non-destructive edits. The sidecar file is often in XML format (commonly using the .xmp extension) and is separate from the original raw image file.</p>
<p style="font-size: 16px;">The sidecar file records the changes you&#8217;ve made to the image in your editing software without altering the original RAW data. When you open the raw file in the same or compatible software, these adjustments are applied according to the information stored in the sidecar file. This allows for flexibility, as you can adjust or revert changes without losing any original image data.</p>
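<p style="font-size: 16px;">For a sense of what that looks like, here is a heavily trimmed, illustrative sketch of an .xmp sidecar. The property names vary by converter &#8211; the crs:* names below follow Adobe Camera Raw&#8217;s convention &#8211; and the values here are invented:</p>

```xml
<!-- Illustrative sketch only: a trimmed .xmp sidecar with made-up values. -->
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
        crs:WhiteBalance="Custom"
        crs:Temperature="5400"
        crs:Exposure2012="+0.45"
        crs:Shadows2012="+20"/>
  </rdf:RDF>
</x:xmpmeta>
```

<p style="font-size: 16px;">The raw file itself never changes; the converter reads these settings and applies them when it demosaics and renders the image.</p>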
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/about-camera-raw-files-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Printing a correct color target for custom profiling</title>
		<link>https://valleau.art/blog/printing-a-correct-color-target-for-custom-profiling/</link>
					<comments>https://valleau.art/blog/printing-a-correct-color-target-for-custom-profiling/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Sun, 20 Oct 2024 19:37:58 +0000</pubDate>
				<category><![CDATA[printing]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=566</guid>

					<description><![CDATA[NOTE:  as of MacOS Tahoe 26.3, the part below related to using Colorsync no longer works for me.  Please use Print-Tool instead.   Printing a correct color target for custom profiling Here is how to print a target of patches, for use in creating a custom color profile. (Note: this requires that you either have [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><em><strong>NOTE:  as of MacOS Tahoe 26.3, the part below related to using Colorsync no longer works for me.  Please use Print-Tool instead.</strong></em></p>
<p> </p>
<p>Printing a correct color target for custom profiling</p>
<p>Here is how to print a target of patches, for use in creating a custom color profile. (Note: this requires that you either have your own spectrophotometer, or are printing a target sent to you by someone you hired to create the custom profile for you.)</p>
<p>A target image is composed of hundreds or thousands of little color patches. The profiling software knows exactly what those colors are. So if printed correctly (as in &#8220;unaltered&#8221;) then the spectrophotometer can read the printed value, compare it to the correct value, and create a profile. Obviously then, when you print that target on your computer, you do NOT want anything to change the colors accidentally! In other words, &#8220;color management&#8221; must be OFF.</p>
<p>On a Mac, it is notoriously difficult to print a &#8220;pure, unmanaged&#8221; color patch target without corrupting it.</p>
<p>The usual advice used to be to use Adobe&#8217;s Color Print Utility (CPU), but unfortunately, CPU is no longer supported on Catalina or later.</p>
<p>However, if you are printing from a Windows machine, you can still use the Adobe CPU:  (https://helpx.adobe.com/photoshop/kb/no-color-management-option-missing.html)</p>
<p>Most pros will say to use Print Tool from Roy Harrington. (http://www.quadtonerip.com/html/QTRprinttool.html)</p>
<p>[FWIW, I too recommend this product and use it for all my printing, but it&#8217;s not free.  Since it is also a RIP, it does FAR more than just print clean targets. IMHO it will be the best $50 you&#8217;ve spent lately.]</p>
<p>Or you can use the (free) software you already have: ColorSync Utility. It&#8217;s in your &#8220;Utilities&#8221; folder. It&#8217;s more fussy to use than Print Tool, but it works. </p>
<p>Here&#8217;s how to print an unmodified, clean target using Apple&#8217;s ColorSync.</p>
<p> </p>
<p>LATEST version of CS</p>
<p>1. run ColorSync and choose File/Open and load the target. (The target MUST NOT have an assigned profile!)<br />2. across the bottom of the window, you will see three popup menus. Set them to &#8220;Match to Profile&#8221;  &#8220;None&#8221; and &#8220;Relative Colorimetric (media relative)&#8221;<br />3. choose File &#8211; Print from the main menu<br />4. in the resulting dialog box, twirl down the arrow to see the contents of &#8220;Color Sync&#8221;<br />5. at &#8220;Color:&#8221; change the popup menu selection to &#8220;Print as color target&#8221; (If it&#8217;s grayed out, you likely have a profile assigned to the image. See the built-in ColorSync &#8220;help&#8221;.)<br />6. finally, select &#8220;Print&#8221;</p>
<p><img decoding="async" style="display: block; margin-left: auto; margin-right: auto;" title="CleanShot 2025-09-17 at 19.20.55.jpg" src="https://valleau.art/blog/wp-content/uploads/2025/09/CleanShot-2025-09-17-at-19.20.55.jpg" alt="CleanShot 2025-09-17 at 19.20.55." width="407" height="280" border="0"></p>
<p>OLDER version:</p>
<p>1. run colorsync and choose file/open and load the target<br />2. choose Print and select your desired printer<br />3. select &#8220;color matching&#8221; from the popup menu<br />4. choose any profile, except &#8220;automatic&#8221; &#8211; I use ARGB1998<br />5. from the same popup menu choose the top item: &#8220;colorsync utility&#8221;<br />6. from the &#8220;Color:&#8221; menu, choose &#8220;Print as Color Target&#8221;<br />7. Finally, select &#8220;Print&#8221;</p>
<p> </p>
<p>HTH</p>
<p>Tracy</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/printing-a-correct-color-target-for-custom-profiling/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why you should NOT use black backgrounds for editing photos</title>
		<link>https://valleau.art/blog/why-you-should-not-use-black-backgrounds-for-editing-photos/</link>
					<comments>https://valleau.art/blog/why-you-should-not-use-black-backgrounds-for-editing-photos/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Sun, 20 Oct 2024 19:37:18 +0000</pubDate>
				<category><![CDATA[printing]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=564</guid>

					<description><![CDATA[While dark environments, such as Apple&#8217;s Mojave, or the default settings for Photoshop &#38; Pixelmator Pro, may look fashionable, they are terrible for editing photos. Why? Because they screw up your ability to see tones properly. Using a dark background will trick your mind into producing a print that has clogged up shadows, and is [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>While dark environments, such as Apple&#8217;s Mojave, or the default settings for Photoshop &amp; Pixelmator Pro, may look fashionable, they are terrible for editing photos.</p>
<p>Why? Because they screw up your ability to see tones properly. Using a dark background will trick your mind into producing a print that has clogged up shadows, and is overall too dark.</p>
<p>Don&#8217;t believe me? Check out the image below.</p>
<p><img loading="lazy" decoding="async" style="display: block; margin-left: auto; margin-right: auto;" title="sample.png" src="https://valleau.art/blog/wp-content/uploads/2022/03/sample.png" alt="Sample" width="600" height="400" border="0" /><br />See that grey band in the middle? It is <em>exactly</em> the same shade of gray all the way across. The left end is <strong><em>NOT</em></strong> lighter than the right end.</p>
<p> </p>
<p>Take a look at squares A &amp; B, below.</p>
<p> </p>
<p><img loading="lazy" decoding="async" style="display: block; margin-left: auto; margin-right: auto;" title="1200px-Checker_shadow_illusion.svg_.png" src="https://valleau.art/blog/wp-content/uploads/2022/03/1200px-Checker_shadow_illusion.svg_.png" alt="1200px Checker shadow illusion svg" width="598" height="456" border="0" /></p>
<p> </p>
<p>A &amp; B are <em><strong>exactly</strong> the same shade</em> of gray (RGB 110, 110, 110).</p>
<p>This is built into our human perception. You can look at the A/B image above all day, and B will always look lighter to you than A.</p>
<p> </p>
<p>Upshot? Your editing environment <em>really does</em> have an effect on the work you produce. Don&#8217;t use dark backgrounds for editing photos.</p>
<p>Set your editing tool to as light an environment as you can, and change the background to white, to keep your brain from messing with you!</p>
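<p>If you&#8217;d like to prove it to yourself with numbers rather than eyeballs, you can rebuild the gray-band image from scratch. This is a minimal sketch (assuming NumPy is installed; the 600 x 400 size and the specific gray values are my own choices, not taken from the original image) that puts a constant mid-gray band over a left-to-right gradient and then confirms every pixel in the band is identical:</p>

```python
import numpy as np

# Background: luminance ramps from dark (left) to light (right).
W, H = 600, 400
img = np.tile(np.linspace(40, 215, W), (H, 1)).astype(np.uint8)

# The band through the middle is one constant mid-gray -- no gradient at all.
img[H // 3 : 2 * H // 3, :] = 128

# Sample the band: the same value all the way across.
band = img[H // 2]
print(band.min(), band.max())  # → 128 128
```

<p>Saved out as a PNG (for example with Pillow&#8217;s <code>Image.fromarray(img).save("band.png")</code>) and viewed on screen, the band will still look lighter at one end than the other, even though the array says otherwise.</p>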
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/why-you-should-not-use-black-backgrounds-for-editing-photos/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>When &#8220;good enough&#8221; isn&#8217;t: canned paper profiles (Tips for making your own)</title>
		<link>https://valleau.art/blog/when-good-enough-isnt-canned-paper-profiles-tips-for-making-your-own-2/</link>
					<comments>https://valleau.art/blog/when-good-enough-isnt-canned-paper-profiles-tips-for-making-your-own-2/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Sun, 20 Oct 2024 19:36:49 +0000</pubDate>
				<category><![CDATA[printing]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=562</guid>

					<description><![CDATA[  When &#8220;good enough&#8221; isn&#8217;t: canned paper profiles In my business (making prints for museums and galleries) the usual prebuilt paper/ink profile, often described as &#8220;good enough&#8221; really isn&#8217;t. Instead I make my own profiles using X-Rite&#8217;s i1Publish Pro 3. If that applies to you as well, here are some tips: Printing on expensive paper [&#8230;]]]></description>
										<content:encoded><![CDATA[<p> </p>
<p>When &#8220;good enough&#8221; isn&#8217;t: canned paper profiles</p>
<p>In my business (making prints for museums and galleries) the usual prebuilt paper/ink profile, often described as &#8220;good enough,&#8221; really isn&#8217;t. Instead I make my own profiles using X-Rite&#8217;s i1Publish Pro 3. If that applies to you as well, here are some tips:</p>
<p>Printing on expensive paper is, er, expensive, so I print the calibration target on a single sheet of 13 x 19 paper. I print 1586 patches because this number gives a chart with 30 shades of black, from white to darkest black. Choosing some other number of patches may only offer 10 or 12 luminosity values. The greater number helps your textures stand out.</p>
<p>The patches are 0.340&#8243; wide and 0.302&#8243; tall, allowing the full 1586 to be printed on a single sheet.</p>
<p>Also, at least with Epson printers, I print the chart using the same DPI (1440/2880) as my final prints. That&#8217;s because 1440 shows more paper-white than 2880, and thus the patches are less dense when read by the spectrophotometer. In other words, the resulting profile is different with different DPI.</p>
<p>I allow the print to dry for 24 hours before reading it. This is critical for matte paper in particular.</p>
<p>I do not have a mechanized reader, so I do the scanning by hand, using the supplied tools. I time a single pass of the scanner to take at least 4 seconds. Each pass crosses the chart&#8217;s 28 columns, so I&#8217;m reading 7 patches each second. The version 3 hardware scans at 400 samples per second, so each patch is getting about 60 samples. (This is about the same time that X-Rite&#8217;s mechanical arm takes on a single pass.)</p>
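<p>The numbers above check out with a few lines of arithmetic. This sketch uses only the figures already stated in this post (patch size, sheet size, patch count, and the 400-samples-per-second rate) to confirm that 1586 patches fit on one 13 x 19 sheet and that a 4-second pass yields roughly 57 samples per patch:</p>

```python
# Figures from the post: patch size, sheet size, patch count, scan rate.
patch_w, patch_h = 0.340, 0.302      # inches
sheet_w, sheet_h = 13.0, 19.0        # inches
patches, columns = 1586, 28
samples_per_sec, pass_seconds = 400, 4

# Layout check: 28 columns across the 13" side, rows down the 19" side.
rows_needed = -(-patches // columns)             # ceil(1586 / 28) = 57 rows
fits = (columns * patch_w <= sheet_w) and (rows_needed * patch_h <= sheet_h)
print(fits)                                      # → True

# Scan check: 28 patches per pass in 4 seconds = 7 patches per second,
# so 400 samples/second works out to about 57 samples per patch.
patches_per_sec = columns / pass_seconds
print(round(samples_per_sec / patches_per_sec))  # → 57
```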
<p>Also, I find it easier to maintain an even speed during a single pass by pushing or pulling the spectro unit (instead of swiping left or right), so I turn the table 90 degrees.</p>
<p>Finally (and this will depend on your printer) I add a bit of smoothing to the profile, slightly beyond the default 50%.</p>
<p>I hope these tips help my fellow i1Publish Pro users make better profiles.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/when-good-enough-isnt-canned-paper-profiles-tips-for-making-your-own-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to really see a color print: use bulbs with a high CRI (Color Rendering Index)</title>
		<link>https://valleau.art/blog/how-to-really-see-a-color-print-use-bulbs-with-a-high-cri-color-rendering-index/</link>
					<comments>https://valleau.art/blog/how-to-really-see-a-color-print-use-bulbs-with-a-high-cri-color-rendering-index/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Sun, 20 Oct 2024 19:35:58 +0000</pubDate>
				<category><![CDATA[printing]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<category><![CDATA[cri]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=560</guid>

					<description><![CDATA[Let&#8217;s say you&#8217;ve just made a print of your latest image, but how do you know what it -really- looks like? You would not take a flashlight and cover the end with blue cellophane, and shine it on the print, because it would trash all the other colors. To get a more rational view, you [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Let&#8217;s say you&#8217;ve just made a print of your latest image, but how do you know what it -really- looks like?</p>
<p>You would not take a flashlight and cover the end with blue cellophane, and shine it on the print, because it would trash all the other colors. To get a more rational view, you might take it outside and look at it in the sunshine, which has a balance of all the colors, right?</p>
<p>As a print-maker, you want to have a lightbulb you can use indoors that shows <em><strong>all</strong></em> the colors evenly (unlike the blue flashlight) and thus similar to sunshine.</p>
<p>The color temperature of sunshine is agreed to be about 5000K. Lower temperature is &#8220;warm&#8221; (making white paper look orange-ish) and higher is &#8220;cool&#8221; (making white paper look bluer).</p>
<p>But besides the color temperature, sunlight is also the reference for all the colors in balanced amounts. How close any lightbulb comes to that even balance is the bulb&#8217;s CRI, or Color Rendering Index. By definition, sunlight&#8217;s CRI is 100. Fluorescent bulbs usually have a CRI of 80 or less, while specialized bulbs can reach 95 or more. </p>
<p>Unlike sunlight, all bulbs have a spectrum where some colors have more energy than others. Fluorescents, for example, exaggerate the green and orange dramatically, and the emission graph looks like a sawtooth blade. Most LEDs peak in the dark blues and greens. Sunlight, however, has no peaks or valleys; its graph is smooth and nearly horizontal. </p>
<p><img loading="lazy" decoding="async" style="display: block; margin-left: auto; margin-right: auto;" title="CleanShot 2024-08-21 at 13.42.11.jpg" src="https://www.itstheprint.com/blog/wp-content/uploads/2024/08/CleanShot-2024-08-21-at-13.42.11.jpg" alt="CleanShot 2024-08-21 at 13.42.11." width="413" height="600" border="0" /></p>
<p>Fluorescents and LEDs have low CRI, and so you are seeing exaggerations of some parts of the spectrum and a muting of other parts. No good if you&#8217;re trying to analyze a print.</p>
<p>Generally speaking, any CRI above 93 or so is suitable for viewing photos, but the closer you get to 100, the better. Such bulbs are usually expensive, often in the $20-$40 range. Solux &#8220;museum&#8221; bulbs were 4700 K, about 94 CRI and $30 each.</p>
<p>All that leads here: I have found standard-base lightbulbs with a 5000K temperature and a CRI of 98 (which is amazing) that, furthermore, are LEDs, using less electricity than halogen or tungsten.</p>
<p>AND&#8230; they are less than $3 each.  <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>So I&#8217;m sharing what I use with all the photographers I know. You can buy them on Amazon. Here is the URL:</p>
<p><a href="https://www.amazon.com/gp/product/B0BNBN5TY4/">https://www.amazon.com/gp/product/B0BNBN5TY4/</a></p>
<p> </p>
<p>Finally, if you&#8217;ve never had such a light before, it will take 2 or 3 days for your brain to adjust to it. As my drill instructor used to say &#8220;Suck it up sweetheart. You&#8217;ll get used to it.&#8221; (For you cynics: no, I do not benefit from this recommendation. It&#8217;s entirely altruistic.)</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/how-to-really-see-a-color-print-use-bulbs-with-a-high-cri-color-rendering-index/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Alexa&#8217;s annoying &#8220;OK&#8221; (aka &#8220;brief mode&#8221;) finally fixed? (updated 12/3/24)</title>
		<link>https://valleau.art/blog/alexas-annoying-ok-aka-brief-mode-finally-fixed/</link>
					<comments>https://valleau.art/blog/alexas-annoying-ok-aka-brief-mode-finally-fixed/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Fri, 04 Oct 2024 00:36:31 +0000</pubDate>
				<category><![CDATA[Just life tips]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<category><![CDATA[Alexa]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=551</guid>

					<description><![CDATA[  The usual thing you find if you search the web looking for a way to keep Alexa from constantly saying &#8220;OK&#8221; to your commands, is &#8220;Oh, that&#8217;s easy: just turn on Brief Mode.&#8221; Thing is, that only works  (ahem)  briefly,  for one or two commands, and then the syrupy &#8220;OK&#8221; comes back.  What we [&#8230;]]]></description>
										<content:encoded><![CDATA[<p> </p>
<p>The usual thing you find if you search the web looking for a way to keep Alexa from constantly saying &#8220;OK&#8221; to your commands, is &#8220;Oh, that&#8217;s easy: just turn on Brief Mode.&#8221;</p>
<p>Thing is, that only works  (ahem)  briefly,  for one or two commands, and then the syrupy &#8220;OK&#8221; comes back.  What we want is for &#8220;brief mode&#8221; to <strong><em>stick forever</em></strong>, not for a few hours. This has been driving me, and thousands of others, crazy for years now.</p>
<p> </p>
<p>Try this &#8211; it worked for me:</p>
<p>First, turn on brief mode, with &#8220;Alexa, enable brief mode&#8221; on each Echo.</p>
<p>Then:</p>
<p>Run the Alexa app on your phone.</p>
<p>Select devices, and then filter by type: Echo &amp; Alexa</p>
<p><em>For <strong>each and every</strong> Echo device you have, do this:</em></p>
<p> </p>
<p>1) Click on the device in the list of devices</p>
<p>2) When the device panel comes up, click on the little gear in the upper-right corner</p>
<p>3) Scroll down to the General gray bar</p>
<p>4) Click on Sounds</p>
<p>5) Under Custom Sounds gray bar, set Notification to NONE</p>
<p>6) Under Request Sounds gray bar, set <em>both</em> Start of request <em>and</em> End of request to OFF</p>
<p> </p>
<p>Again, do this for every Echo you own.</p>
<p> </p>
<p>That should kill the annoying &#8220;OK.&#8221;</p>
<p> </p>
<p>7) Note: Based on a post I found on the web, I did, at one point, toggle Alexa&#8217;s Voice setting (back on the main device settings page) from American 1 to American 2 (female to male), and that -MAY- have helped finalize the new settings. My own Echoes now have a mixture of male and female voices, but none of them say &#8220;OK&#8221; any longer, much to my delight.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/alexas-annoying-ok-aka-brief-mode-finally-fixed/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Home Wi-Fi smart switches are crack for couch potatoes.</title>
		<link>https://valleau.art/blog/home-wi-fi-smart-switches-are-crack-for-couch-potatoes/</link>
					<comments>https://valleau.art/blog/home-wi-fi-smart-switches-are-crack-for-couch-potatoes/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Sat, 27 Jul 2024 07:13:44 +0000</pubDate>
				<category><![CDATA[Just life tips]]></category>
		<category><![CDATA[General info]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=539</guid>

					<description><![CDATA[  Being brief and cursory introduction to Smart Device plugs and switches. ! Here are some useful things you can do with them: 1. Turn lights or appliances on and off from anywhere using your smartphone or if you have a smart assistant like Amazon Alexa, Google Assistant, or Apple HomeKit, you can control your [&#8230;]]]></description>
										<content:encoded><![CDATA[<p> </p>
<p>Being a brief and cursory introduction to smart-device plugs and switches.</p>
<p>Here are some useful things you can do with them:</p>
<p>Turn lights or appliances on and off from anywhere using your smartphone, or, if you have a smart assistant like Amazon Alexa, Google Assistant, or Apple HomeKit, control your switches with voice commands. Lights can be controlled individually or in groups.</p>
<p>I can turn on lights when I go to the kitchen, turn them off when I leave, and change the central heating by voice.</p>
<p>Set schedules for your devices. For example, you can program your lights to turn on at sunset and off at bedtime, or set your coffee maker to start brewing at a specific time.</p>
<p>Some smart switches come with energy monitoring features that let you track how much power your devices are using. This can help you identify ways to reduce your energy consumption.</p>
<p>Create automated routines that integrate with other smart devices. For example, you can set your lights to turn on when your smart door lock is unlocked, or when a motion sensor detects movement. or simulate your presence at home by setting your lights to turn on and off in a random pattern while you’re away. This can help deter potential burglars.</p>
<p>You can create “scenes” that set multiple devices to specific states. For instance, you might set a “Movie Night” scene that dims the lights, turns on the TV, and adjusts the thermostat or receive notifications on your phone when a device is turned on or off. This can be useful for checking if you left something on when you’re away.</p>
<p>There are other things to control besides simple on/off:</p>
<p>HVAC controllers</p>
<p>Smart locks</p>
<p>Smart blinds and shades</p>
<p>Smart TVs</p>
<p>and more</p>
<p> </p>
<p>I&#8217;m barely scratching the surface and merely controlling lights, but I&#8217;ll tell you &#8211; after 6 years of the convenience of voice control, it&#8217;s rough to give it up. I have 6 lights in our living room. They are all in one group, so I just say &#8220;Alexa, turn on living room lights&#8221; instead of walking to 6 different places to flip a switch in each one. Later, as I curl up in bed, I say &#8220;Alexa, turn off everything.&#8221;</p>
<p>The switches come in two types: a wall switch, which requires you (or your electrician) to remove the current switch and replace it with the smart switch.</p>
<p>Here&#8217;s an example: <a href="https://www.amazon.com/dp/B0CBNX6Q51">https://www.amazon.com/dp/B0CBNX6Q51</a></p>
<p>The other kind is a simple no-tools-required plug.</p>
<p>Plug example: <a href="https://www.amazon.com/gp/product/B0BXMNJDW3">https://www.amazon.com/gp/product/B0BXMNJDW3</a></p>
<p>Unplug your lamp from the wall, plug in this unit, and then plug your lamp back into it.</p>
<p>To get these units recognized (added so you can use them), you use a free app on your iPhone.</p>
<p>The actual process depends on the type of connection your devices use: either direct Wi-Fi or hub-based.</p>
<p>Older hub-based systems may be closed or proprietary, and I never used them. But, coming full circle, the newest, easiest, fastest, and most compatible standard is called &#8220;Matter&#8221;.</p>
<p>If you are just getting started, you probably should choose &#8220;Matter&#8221; compatible devices.</p>
<p>Just to make the point however, here&#8217;s a brief overview of the wifi unit process, (Matter units will follow.)</p>
<p>Wi-Fi units are the most complex to add, involving a button-press (on the unit) until the unit&#8217;s LED signals that it&#8217;s ready. Then, in the app on your phone (SmartLife [SL] is a common such app), press the search button. Normally the unit will be recognized in 30 seconds, but it may take up to 2 minutes. (It may fail too, in which case, follow the secondary instructions in the box.)</p>
<p>Once it&#8217;s added successfully, give it a unique name such as &#8220;Desk Lamp&#8221; and hit save. At this point, you can control the light from your phone by tapping on its name in your list of devices.</p>
<p>If you want to use Alexa to voice control the unit, you will need to add the unit name to the Alexa app on your phone. This is done using the same app [SL] you just used to recognize the unit. Fortunately it&#8217;s as simple as finding and clicking the &#8220;Add to Alexa&#8221; button.</p>
<p>Finally, you can create Groups in either Alexa or SmartLife; Matter or wifi.  There are rules to creating groups, particularly if you want the same named unit in two different groups. Ask me for help if you need it.</p>
<p>Now on to Matter.</p>
<p>First, you don&#8217;t need SmartLife. </p>
<p>Next, you don&#8217;t need button presses nor LEDs flashing just so. All you need is the Alexa app (or an Apple TV, or a HomePod, or&#8230;</p>
<p>&#8230; well, there are too many to list). Visit here for the current state of Matter (2024):</p>
<p> </p>
<p><a href="https://www.theverge.com/23568091/matter-compatible-devices-accessories-apple-amazon-google-samsung">https://www.theverge.com/23568091/matter-compatible-devices-accessories-apple-amazon-google-samsung</a></p>
<p> </p>
<p>Adding a Matter device is very simple: plug it in and then run the Alexa app. It will ask you if you want to add it. Agree and say &#8220;yes&#8221; when it asks if you have a QR code. Then just scan the code and enter a name for the unit.</p>
<p>Done.  You can voice control right away.</p>
<p>(See? I said it was simple! )</p>
<p>I have four Alexa/Echo devices scattered around the house, and 15 plugs and switches, which amounts to about half the Wi-Fi nodes the router handles at any given time.</p>
<p> </p>
<p>Before you ask:</p>
<p>Smart devices run on 2.4 GHz Wi-Fi, not on the 5 or 6 GHz bands.</p>
<p>They use less than a watt if they are just waiting around for a command.</p>
<p>No they don&#8217;t slow down anything.</p>
<p>Finally, it&#8217;s not all a bed of roses &#8211; there are thorns now and then. Reliability is high but not perfect, and every now and then one may drop off. They are easy to pick back up. Plugs, in particular, will reset if you just unplug them and then plug them back in. Recent wall switches have a reset button.</p>
<p>Both switches and plugs have physical buttons, so in an emergency you can still turn things on and off.</p>
<p>Obviously there&#8217;s more to all this, but some friends asked me to write just enough to give them the gist of it. Well, there it is.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/home-wi-fi-smart-switches-are-crack-for-couch-potatoes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>When &#8220;good enough&#8221; isn&#8217;t: canned paper profiles  (Tips for making your own)</title>
		<link>https://valleau.art/blog/when-good-enough-isnt-canned-paper-profiles-tips-for-making-your-own/</link>
					<comments>https://valleau.art/blog/when-good-enough-isnt-canned-paper-profiles-tips-for-making-your-own/#respond</comments>
		
		<dc:creator><![CDATA[tvalleau]]></dc:creator>
		<pubDate>Fri, 19 Jul 2024 06:13:25 +0000</pubDate>
				<category><![CDATA[General info]]></category>
		<category><![CDATA[Photo]]></category>
		<category><![CDATA[Tips for Mac users]]></category>
		<guid isPermaLink="false">https://valleau.art/blog/?p=537</guid>

					<description><![CDATA[When &#8220;good enough&#8221; isn&#8217;t: canned paper profiles In my business (making prints for museums and galleries) the usual prebuilt paper/ink profile, often described as &#8220;good enough&#8221; really isn&#8217;t. Instead I make my own profiles using X-Rite&#8217;s i1Publish Pro 3. If that applies to you as well, here are some tips: Printing on expensive paper is, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>When &#8220;good enough&#8221; isn&#8217;t: canned paper profiles</p>
<p>In my business (making prints for museums and galleries) the usual prebuilt paper/ink profile, often described as &#8220;good enough,&#8221; really isn&#8217;t. Instead I make my own profiles using X-Rite&#8217;s i1Publish Pro 3. If that applies to you as well, here are some tips:</p>
<p>Printing on expensive paper is, er, expensive, so I print the calibration target on a single sheet of 13 x 19 paper. I print 1586 patches because this number gives a chart with 30 shades of black, from white to darkest black. Choosing some other number of patches may only offer 10 or 12 luminosity values. The greater number helps your textures stand out.</p>
<p>The patches are 0.340&#8243; wide and 0.302&#8243; tall, allowing the full 1586 to be printed on a single sheet.</p>
<p>Also, at least with Epson printers, I print the chart using the same DPI (1440/2880) as my final prints. That&#8217;s because 1440 shows more paper-white than 2880, and thus the patches are less dense when read by the spectrophotometer. In other words, the resulting profile is different with different DPI.</p>
<p>I allow the print to dry for 24 hours before reading it. This is critical for matte paper in particular.</p>
<p>I do not have a mechanized reader, so I do the scanning by hand, using the supplied tools. I time a single pass of the scanner to take at least 4 seconds. Each pass crosses the chart&#8217;s 28 columns, so I&#8217;m reading 7 patches each second. The version 3 hardware scans at 400 samples per second, so each patch is getting about 60 samples. (This is about the same time that X-Rite&#8217;s mechanical arm takes on a single pass.)</p>
<p>Also, I find it easier to maintain an even speed during a single pass by pushing or pulling the spectro unit (instead of swiping left or right), so I turn the table 90 degrees.</p>
<p>Finally (and this will depend on your printer) I add a bit of smoothing to the profile, slightly beyond the default 50%.</p>
<p>I hope these tips help my fellow i1Publish Pro users make better profiles.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://valleau.art/blog/when-good-enough-isnt-canned-paper-profiles-tips-for-making-your-own/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
