<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Some stuff &#187; image</title>
	<atom:link href="http://blog.yhuang.org/?feed=rss2&#038;tag=image" rel="self" type="application/rss+xml" />
	<link>https://blog.yhuang.org</link>
	<description>here.</description>
	<lastBuildDate>Wed, 27 Aug 2025 08:50:58 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.1</generator>
		<item>
		<title>plenoptic cameras and the meaning of photography</title>
		<link>https://blog.yhuang.org/?p=525</link>
		<comments>https://blog.yhuang.org/?p=525#comments</comments>
		<pubDate>Sun, 26 Jun 2011 00:28:24 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aperture size]]></category>
		<category><![CDATA[astronomy literature]]></category>
		<category><![CDATA[camera]]></category>
		<category><![CDATA[cartoon version]]></category>
		<category><![CDATA[conventional cameras]]></category>
		<category><![CDATA[fourier optics]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[object]]></category>
		<category><![CDATA[resolution]]></category>
		<category><![CDATA[spatial]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=525</guid>
		<description><![CDATA[Raytrix introduced the R11 Lightfield Camera not too long ago. It is still low-res and expensive, but improved versions of these should eventually catch on &#8212; they make too much sense not to. The idea of plenoptic cameras has been thrown around for quite a while. Instead of a conventional camera with a single lens [...]]]></description>
			<content:encoded><![CDATA[<p>Raytrix introduced the <a href="http://www.raytrix.de/index.php/r11.185.html">R11 Lightfield Camera</a> not too long ago. It is still low-res and expensive, but improved versions of these should eventually catch on &#8212; they make too much sense not to.</p>
<p>The idea of plenoptic cameras has been thrown around for quite a while. Instead of a conventional camera with a single lens focusing a single object plane onto the in-camera image plane (i.e. the sensor), a plenoptic camera attempts to capture enough additional information so as to be able to <a href="http://groups.csail.mit.edu/graphics/classes/CompPhoto06/html/lecturenotes/25_LightField_6.pdf">reconstruct &#8220;all possible&#8221; images</a> that can be obtained from the light entering the same aperture. The most talked-about application is subsequent refocusing; if it were just this, then multi-capture with mechanical focal-length sweeps using a conventional camera would suffice. Another is stereograms, but again, two spaced shots would suffice for that. A plenoptic camera does more in one shot and makes these merely special post-processing cases. The <a href="http://eces.colorado.edu/~pavani/Plenoptic.pdf">simplest conception of a plenoptic camera</a> is essentially an array of mini-cameras (microlens + micropixel array for each logical pixel) that separately captures light from all directions at each point in the image. In between conventional cameras and plenoptic cameras are perhaps smarter, sparser non-regular arrays like these <a href="http://www.digitalcamerainfo.com/content/Mitsubishi-Electric-Develops-Camera-to-Refocus-Photos.htm">coded aperture systems</a> that hark back to <a href="http://www.umiacs.umd.edu/~aagrawal/sig06/CodedApertureImaging.html">old radio-astronomy literature</a>. These have good signal processing properties from a deconvolution perspective, but the full-array plenoptic camera like the R11 seems fully general, and with some future industrial scaling, the saved expenses of a compromise may be inconsequential.</p>
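<p>To make the &#8220;reconstruct all possible images&#8221; idea concrete, here is a minimal shift-and-add refocusing sketch in pure Python. The lightfield layout, sizes, and function names are all invented for illustration; real pipelines resample far more carefully.</p>

```python
# Toy synthetic refocusing by "shift-and-add" over sub-aperture views.
# A plenoptic capture is modeled as a grid of sub-aperture images
# views[u][v][y][x]; refocusing to a different depth shifts each view
# by an amount proportional to its (u, v) offset, then averages.
# The lightfield, sizes, and names here are all made up for illustration.

def refocus(views, shift_per_view):
    """Average sub-aperture views, shifting view (u, v) by a per-view amount."""
    n_u, n_v = len(views), len(views[0])
    h, w = len(views[0][0]), len(views[0][0][0])
    out = [[0.0] * w for _ in range(h)]
    for u in range(n_u):
        for v in range(n_v):
            dy = round((u - n_u // 2) * shift_per_view)
            dx = round((v - n_v // 2) * shift_per_view)
            for y in range(h):
                for x in range(w):
                    ys, xs = (y + dy) % h, (x + dx) % w  # wrap at borders
                    out[y][x] += views[u][v][ys][xs]
    scale = 1.0 / (n_u * n_v)
    return [[p * scale for p in row] for row in out]

# A uniform 3x3 grid of identical 4x4 views: refocusing to any depth
# should reproduce the same flat view.
flat = [[[[1.0] * 4 for _ in range(4)] for _ in range(3)] for _ in range(3)]
print(round(refocus(flat, 1.0)[0][0], 6))  # 1.0
```

<p>Sweeping shift_per_view moves the synthetic focal plane, which is why mechanical focus sweeps and stereo pairs fall out as special post-processing cases of the same data.</p>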
<p>Fine, so a plenoptic camera may make clever use of its given aperture size, but do we really get something for nothing? To answer that, first a digression.<br />
<span id="more-525"></span><br />
Why does a conventional camera lose information? According to the cartoon version of Fourier optics, a lens is a spatially-variant phase transformer and space itself is an array of lightfield integrators. One can sort of see that light from the object arriving at farther off-axis locations on the lens will have greater (i.e. higher-frequency) phase variations corresponding to the various points of the object, the result being that the pattern at the front end of the lens is the Fourier Transform of the object. The lens multiplies its aperture shape onto this and, by its phase-transforming capability, sends the product out to the image plane, where the analogous thing happens. The pattern at the image plane is thus the Inverse Fourier Transform of the pattern coming out of the lens. The net result is that the image is the object convolved with the transform pair of the lens aperture shape, known as the Point Spread Function (PSF).</p>
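<p>The convolution cartoon above can be played with numerically. Below is a 1-D toy, with made-up arrays and an assumed incoherent-illumination setting in which the PSF is the squared magnitude of the transform of the aperture:</p>

```python
# 1-D cartoon of "image = object convolved with the PSF". Under
# incoherent illumination the PSF is the squared magnitude of the
# Fourier transform of the aperture; everything below (arrays, names)
# is a made-up toy, computed with a brute-force pure-Python DFT.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(n)) for f in range(n)]

def psf_from_aperture(aperture):
    p = [abs(s) ** 2 for s in dft(aperture)]
    total = sum(p)
    return [v / total for v in p]  # normalize to unit energy

def convolve_circular(obj, kernel):
    n = len(obj)
    return [sum(obj[(i - k) % n] * kernel[k] for k in range(n))
            for i in range(n)]

aperture = [1, 1, 1, 1, 0, 0, 0, 0]  # an open "slit" over half the pupil
obj = [0, 0, 0, 1, 0, 0, 0, 0]       # a point source
psf = psf_from_aperture(aperture)
image = convolve_circular(obj, psf)
print(round(sum(image), 6))  # 1.0 -- energy preserved, detail smeared
```

<p>Shrinking the open part of the aperture widens the PSF and smears the point source further, which is the low-passing described next.</p>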
<p>So right off the bat, the object has been low-passed through the finite aperture due to diffraction. Nothing can be done about finite bandwidth, but at least the focused image-plane distance is usually also where the PSF main lobe is narrowest for a given object-plane distance (coinciding with geometric-optics results). Away from the designated object-plane distance, we no longer get good approximations to Fourier Transforms, so the image is further distorted by non-invertible transformations that can never be uniquely decoded.</p>
<p>A plenoptic camera is supposed to get around this with its microlens array structure because (1) each small aperture microlens is a large depth-of-field subsystem operating in parallel with others, (2) total bandwidth is not sacrificed since the synthetic aperture is still large, (3) better than a <a href="http://www.biox.kth.se/kjellinternet/Pinhole.pdf">pinhole</a> array as microlenses still do your Inverse Fourier Transforms for you, no deconvolution of potentially non-invertible transformations involved. But for this, there is now a design tradeoff between higher spatial resolution per depth vs. depth resolution. So it&#8217;s not a free gain.</p>
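<p>The tradeoff can be put in back-of-the-envelope numbers. Assuming a basic layout where each microlens covers a k&#215;k patch of sensor pixels (all figures invented here, not Raytrix R11 specs):</p>

```python
# Back-of-the-envelope version of the spatial-vs-depth tradeoff for a
# basic microlens-array design: each microlens spends a k x k patch of
# sensor pixels on directional samples, so spatial resolution drops by
# a factor of k*k. All numbers are invented, not Raytrix R11 specs.

def plenoptic_budget(sensor_px, k):
    """Split a sensor of sensor_px pixels among k*k directional samples."""
    angular_samples = k * k                   # views per logical pixel
    spatial_px = sensor_px // angular_samples
    return spatial_px, angular_samples

spatial, angular = plenoptic_budget(sensor_px=40_000_000, k=10)
print(spatial, angular)  # 400000 100
```

<p>So a hypothetical 40-megapixel sensor with 10&#215;10 directional samples per microlens yields only 0.4 megapixels of spatial resolution per refocused image &#8212; the gain in depth flexibility is paid for in pixels.</p>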
<p>In particular, once the camera is built (at least the current type), we don&#8217;t get the option of choosing non-uniform tradeoffs across the image. We get some spatial resolution and some depth resolution that are characteristic of the camera, and then all we can do is degrade it computationally for effect, always throwing away the bulk of the data in each computation, but having the flexibility to choose what to throw away. In a conventional camera, the optical resources are deployed differently. We could obtain high spatial and depth resolution around one particular depth, and decaying spatial and depth resolution at depths away from that. Or we could obtain high spatial resolution at all depths, but no depth resolution. Or something in between. The choices are limited, but given that it is what we actually want, the data is combined fully in the desired way. It is only when we think we were mistaken that we regret that information has been &#8220;lost&#8221; (but really just combined in the &#8220;wrong&#8221; way). Because the conventional camera has evolved for 100+ years, in capable hands it is pretty well matched to exactly what you want to do in photography. Of course you can&#8217;t do weird things like having two depths at which things are focused, but that is also weird in a way that our (conventional) eyes are not used to.</p>
<p>Then this brings us to the question of what photography is. Is it a reproduction of the physics of radiation? Then a plenoptic camera isn&#8217;t enough; one would need a hyperspectral holographic recorder of some sort. But it isn&#8217;t that. Is it to reproduce what our eyes see? It isn&#8217;t that, either, because photographs are somewhat realistic but surely not real. With more and more in-camera and post-processing gimmicks, real may even be unacceptable. Is it just free painting with a really complicated and constrained brush &#8212; in other words, masochism? No, it isn&#8217;t that, because then no interaction with real-world input would be required. The meaning of photography, so far as I can see, is an art which, not unlike other arts, is the communication of an emotional state. But this emotional state is defined by that which the photographer experiences at the <em>acute moment</em> when a particular real-world phenomenon (the scene) is observed. It&#8217;s what the photographer imagines he sees that he tries his best to manipulate his instrument to reproduce, but this imagination is seeded irrevocably by the transient real phenomenon. Anything else, he could just patiently paint from scratch; but for <em>this</em>, the imagination seeded by the transient real phenomenon, he somehow <em>needs</em> photography.</p>
<p>But does he need post-processing? Does he need a fully capable plenoptic camera so he can do post-processing? Is that still true to the moment or is it okay to have the emotional state last a while (and possibly change) all the way back to the photoshop computer? Heck, why not have photoshop onboard? That would free the imagination. I lean towards yes, if only for the reason that more options can&#8217;t hurt. But there is unresolved tension on this question. A camera with low capabilities can have its quirks and constraints become part of the real phenomenon, if the photographer is presumed to feel the world through its viewfinder. Is this a bit recursive? Once the instrument has too much software, then what is to separate photography from virtual brush painting in its onboard photoshop?</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=525</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>watson v. mit</title>
		<link>https://blog.yhuang.org/?p=306</link>
		<comments>https://blog.yhuang.org/?p=306#comments</comments>
		<pubDate>Thu, 17 Feb 2011 02:52:42 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[common sense]]></category>
		<category><![CDATA[condition]]></category>
		<category><![CDATA[experiential knowledge]]></category>
		<category><![CDATA[general purpose]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[knowledge]]></category>
		<category><![CDATA[mit]]></category>
		<category><![CDATA[sense knowledge]]></category>
		<category><![CDATA[watson]]></category>

		<guid isPermaLink="false">http://allegro.mit.edu/~zong/wpress/?p=306</guid>
		<description><![CDATA[So being at the event captured in the image, I got to ask a question toward the end. Actually I asked two questions. The first was whether Watson would ring in and use the remaining 3 seconds or whatever to continue to compute. Gondek said it would if it helped. In actual competition it doesn&#8217;t [...]]]></description>
			<content:encoded><![CDATA[<p><img src="wp-content/uploads/images/539w.jpg" alt="http://cache.boston.com/resize/bonzai-fba/Globe_Photo/2011/02/14/1297740468_0202/539w.jpg" /></p>
<p>So, being at the event <a href="http://www.boston.com/business/technology/articles/2011/02/15/computer_holds_its_own_in_1st_jeopardy_contest/">captured in the image</a>, I got to ask a question toward the end. Actually, I asked two questions. The first was whether Watson would ring in and use the remaining 3 seconds or whatever to continue to compute. Gondek said it would if it helped. In actual competition that doesn&#8217;t appear to be the case, as the buzz-in thresholding condition ensured that further computation would not have been helpful. The second question was a follow-up on the identified weakness of Watson &#8212; learning &#8220;common sense&#8221; knowledge. I asked what path AI research would take to tackle such knowledge, which is, by its very definition, &#8220;not in the books.&#8221; Gondek said that IBM is building up semantic information (e.g. a &#8220;report&#8221; is something that can be &#8220;turned in&#8221; and &#8220;assessed,&#8221; etc.) from a corpus. That wasn&#8217;t exactly what I was asking, however.</p>
<p>My point was whether all &#8220;knowledge&#8221; is written down. There is such a thing as experiential &#8220;knowledge,&#8221; and humans take years to learn it/be trained in it through parenting (i.e., to &#8220;mature&#8221;). If only there were a handbook on life, or that life could be learned through reading a series of textbooks, then perhaps I&#8217;d believe that the kind of general-purpose AI that most people are probably imagining (rather than an expert/Q&#038;A system) can be achieved along the lines of current methods.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=306</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>How to (maybe) get WordPress Stats to accept the API key</title>
		<link>https://blog.yhuang.org/?p=88</link>
		<comments>https://blog.yhuang.org/?p=88#comments</comments>
		<pubDate>Tue, 15 Jan 2008 23:44:56 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Akismet]]></category>
		<category><![CDATA[code]]></category>
		<category><![CDATA[com]]></category>
		<category><![CDATA[course stats]]></category>
		<category><![CDATA[dashboard]]></category>
		<category><![CDATA[database index]]></category>
		<category><![CDATA[hidden image]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[stupid script]]></category>
		<category><![CDATA[Wordpress]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=88</guid>
		<description><![CDATA[WordPress stats is a nice little filter for WordPress, one of those devious hidden image IP trackers. Unfortunately, getting it to accept the API key doesn&#8217;t always work (even if it works for Akismet without complaint). You may keep getting Enter your WordPress.com API key to link this blog to your WordPress.com account. Be sure [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://wordpress.org/extend/plugins/stats/">WordPress stats</a> is a nice little filter for WordPress, one of those devious hidden image IP trackers. Unfortunately, getting it to accept the API key doesn&#8217;t always work (even if it works for Akismet without complaint).</p>
<p>You may keep getting</p>
<blockquote><p>Enter your WordPress.com API key to link this blog to your WordPress.com account. Be sure to use your own API key! Using any other key will lock you out of your stats.</p></blockquote>
<p>even if the API key is correct. Or, if you hard-code the API key:</p>
<blockquote><p>An API Key is present in the source code but it did not work.</p>
<p>The WordPress.com Stats Plugin is not working because it needs to be linked to a<br />
WordPress.com account.</p></blockquote>
<p>I didn&#8217;t find a solution in any of the forums, so I looked at the stupid script some more. Basically it uses the API key to get a &#8220;blog_id&#8221; (database index, most likely) from WordPress.com and can&#8217;t find one. So I made up a blog_id in the code. That shut up the plug-in, but of course stats aren&#8217;t tracked.</p>
<p>Finally, I went to <a href="http://dashboard.wordpress.com">http://dashboard.wordpress.com</a>, logged into the account, made a new garbage *.Wordpress.com blog, then a bit later took out the made-up blog_id from the code, de-activated and re-activated, and &#8230; everything works. The external blog shows up in the &#8220;Global dashboard.&#8221; Also, the real blog_id is now returned to the code. But if I take out the hard-coded API key, it stops working again.</p>
<p>This is definitely a WordPress.com problem with registering externally hosted blogs. So, to make it work: hard-code the API key, make sure there is at least one *.Wordpress.com blog, wait a little bit, then re-activate the stats plug-in.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=88</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>interactive sites, many of which i&#8217;ve never heard of</title>
		<link>https://blog.yhuang.org/?p=70</link>
		<comments>https://blog.yhuang.org/?p=70#comments</comments>
		<pubDate>Sat, 07 Apr 2007 07:00:02 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Hmm]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[marketing]]></category>
		<category><![CDATA[marketing image]]></category>
		<category><![CDATA[statistics]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=70</guid>
		<description><![CDATA[Some marketing image and statistics Hmm&#8230;]]></description>
			<content:encoded><![CDATA[<p>Some marketing image</p>
<p><img src="wp-content/uploads/images/web20_alexa_value_chart_3.jpg" alt="http://esnips.blogs.com/photos/uncategorized/2007/04/02/web20_alexa_value_chart_3.jpg"/></p>
<p>and <a href="http://movers20.esnips.com/TableStatAction.ns?reportId=100">statistics</a></p>
<p>Hmm&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=70</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>old problem about mirrors</title>
		<link>https://blog.yhuang.org/?p=68</link>
		<comments>https://blog.yhuang.org/?p=68#comments</comments>
		<pubDate>Fri, 23 Mar 2007 01:30:12 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[axis of rotation]]></category>
		<category><![CDATA[equivalence]]></category>
		<category><![CDATA[fact]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[image in the mirror]]></category>
		<category><![CDATA[mental rotation]]></category>
		<category><![CDATA[mirror]]></category>
		<category><![CDATA[waist]]></category>
		<category><![CDATA[wall mirror]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=68</guid>
		<description><![CDATA[Why do left and right flip but top and bottom do not in a mirror? After all, the plane of mirroring is neutral to all directions parallel to the plane. The answer of course is in the form of an axis of rotation preference. Rotation about the bodyline axis (left right flip) and rotation about [...]]]></description>
			<content:encoded><![CDATA[<p>Why do left and right flip but top and bottom do not in a mirror?<br />
After all, the plane of mirroring is neutral to all directions parallel to the plane.</p>
<p><span id="more-68"></span><br />
The answer of course is in the form of an axis of rotation preference. Rotation about the bodyline axis (left right flip) and rotation about waist axis (top bottom flip) are not equivalent. But let&#8217;s first be clear that the mirror does treat them equivalently in terms of optics, so that&#8217;s not where the non-equivalence comes from. In fact, it is futile to think too much about the mirror.</p>
<p>Instead, let&#8217;s think about the person. When you physically turn around to face the other way, you make a choice to rotate around one particular axis, and not the other. That&#8217;s a very obvious breaking of equivalence. When you look at somebody else or at your image in the mirror, you are likewise choosing a preferred axis &#8212; except in your mind. You make a mental rotation and that&#8217;s where the equivalence is broken. That&#8217;s why left and right flip but top and bottom do not, because your mind chooses to do the left-right flip and not the top-bottom flip.</p>
<p>But why does the mind choose one rotation over the other? Because one can be done, and the other can&#8217;t. Left and right in the body plan are symmetric, so the image seen in a mirror makes &#8220;sense&#8221; as a rotated image along the bodyline axis. It would not make &#8220;sense&#8221; as a rotated image along the waist axis, because head and feet are not symmetric.</p>
<p>On a related but different matter, if you look into a lake, you see a top-bottom flip. That&#8217;s perfectly normal because the plane of mirroring in this case is itself preferential. In this case, it is a purely optical equivalence breaking. So far so good. But you are somehow aware that the image in the lake is abnormal (i.e. upside down). Now if you face a wall mirror sideways, you have likewise a left-right optically flipped image. But nothing seems abnormal there. What is it about upside down that is abnormal? Well, it has to be due to the external reference of updown-ness known as gravity. In fact, if you stayed in the space shuttle long enough, I bet seeing an upside down person or image would no longer seem at all weird.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=68</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>ghost</title>
		<link>https://blog.yhuang.org/?p=51</link>
		<comments>https://blog.yhuang.org/?p=51#comments</comments>
		<pubDate>Sun, 14 Jan 2007 01:04:21 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[caveat emptor]]></category>
		<category><![CDATA[crap]]></category>
		<category><![CDATA[disk]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[image disks]]></category>
		<category><![CDATA[inkling]]></category>
		<category><![CDATA[norton antivirus]]></category>
		<category><![CDATA[older computers]]></category>
		<category><![CDATA[piece of crap]]></category>
		<category><![CDATA[Symantec]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=51</guid>
		<description><![CDATA[Ever since the makers of Ghost got bought by Symantec and Symantec got bought by Norton (or is it the other way around?), I have had an inkling of what Ghost might have become through the unfortunate experience of having used Symantec/Norton Antivirus (8.0, I believe it is that MIT offers?) I got a chance [...]]]></description>
			<content:encoded><![CDATA[<p>Ever since the makers of Ghost got bought by Symantec and Symantec got bought by Norton (or is it the other way around?), I have had an inkling of what Ghost might have become through the unfortunate experience of having used Symantec/Norton Antivirus (8.0, I believe it is that MIT offers?)</p>
<p>I got a chance to use Ghost again. Ghost 10.0 that is. Unbelievable! What a piece of crap! I just wanted to image a disk, but now you have to run the ugly yellow UI in Windows &#8212; wait, you have to first install it in Windows so it can &#8220;help&#8221; you &#8220;automatically&#8221; &#8220;define&#8221; &#8220;restore points&#8221; so you can &#8220;backup your computer.&#8221; What does that user-fuddy gibberish mean?! Oh look here, I can be an &#8220;advanced&#8221; user and make a straight disk-to-disk copy (no disk to image?) but every time I click the button it wants to install .NET Runtime 1.1 first, what the &#8230;? And it keeps wanting me to activate the product and get &#8220;LiveUpdates.&#8221; Umrghh! Booting the CD up by itself gives me a patchy &#8220;recovery console.&#8221; No option to image disks in sight. Needless to say I junked the CD.</p>
<p>Fortunately the package tucks in another CD called &#8220;Ghost 2003&#8221; for &#8220;older&#8221; computers. So it turns out Ghost 2003 is the Ghost that I remember. Man, thank goodness for older computers&#8230; Snorton has totally killed Ghost. Caveat emptor.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=51</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>hilly Seattle</title>
		<link>https://blog.yhuang.org/?p=50</link>
		<comments>https://blog.yhuang.org/?p=50#comments</comments>
		<pubDate>Thu, 11 Jan 2007 08:26:36 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[crown hill]]></category>
		<category><![CDATA[denny hill]]></category>
		<category><![CDATA[Gee]]></category>
		<category><![CDATA[hill]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[retreating glaciers]]></category>
		<category><![CDATA[seattle seattle]]></category>
		<category><![CDATA[south axis]]></category>
		<category><![CDATA[steep]]></category>
		<category><![CDATA[yesterday]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=50</guid>
		<description><![CDATA[Seattle is really long in the north-south axis. Not only that, anybody who has been in Seattle for a while will notice that lots of interesting geographic features run along the north-south axis, like, hills, peninsulas, and lakes. I&#8217;m just guessing that retreating glaciers had something to do with it. Hills, yes. Yesterday I drove [...]]]></description>
			<content:encoded><![CDATA[<p><img src="wp-content/uploads/images/seattle_elevation.png" align="right" />Seattle is really long in the north-south axis. Not only that, anybody who has been in Seattle for a while will notice that lots of interesting geographic features run along the north-south axis, like, hills, peninsulas, and lakes. I&#8217;m just guessing that retreating glaciers had something to do with it.</p>
<p>Hills, yes.</p>
<p>Yesterday I drove up a fairly steep hill called Phinney Ridge (really quite steep, but not super steep by Seattle standards). I was also going west, so suddenly I was reminded of this geographic and geologic fact of Seattle and thought &#8230; &#8220;Yeah, I must be hitting a large gradient against one of these hill-spines. Gee, it&#8217;s even called a &#8216;ridge.&#8217; Wonder if I can skirt around it,&#8221; and so on.</p>
<p>In fact there are lots of descriptive place names that didn&#8217;t register with me &#8212; like, what is &#8220;Capitol Hill&#8221; anyway? &#8212; until I saw this image. I cut it out of the <a href="http://seamless.usgs.gov">USGS data site</a>, where you can play around with such things to your heart&#8217;s content. (I also took two looks at <a href="http://www.topozone.com">topozone.com</a> but decided they sucked.)</p>
<p>Seattle is supposed to have been built originally on &#8220;seven&#8221; &#8220;hills&#8221; (cf. Rome), and I labeled them here in red numbers 1-7: First Hill, Capitol Hill, Queen Anne, Magnolia, Beacon Hill, Denny Hill (razed early on), and Crown Hill. I call BS on the seven-hills theory. Not only do some of these &#8220;hills&#8221; not look like distinct hills, some of them aren&#8217;t even that impressive. There are lots more hills around &#8230; and Crown Hill looks sorely out of place, like an afterthought to make the number 7.</p>
<p>I haven&#8217;t really been everywhere in Seattle, so I can&#8217;t say where the hills are the steepest &#8212; they probably keep records of this. Where I have been though &#8230; some parts of the eastern ridge of Capitol Hill (green 1), the Downtown (green 2), and Magnolia have made for hair-raising experiences in a manual transmission car. By comparison, Phinney Ridge (green 3) really isn&#8217;t so bad.</p>
<p>The only really flat part of Seattle is the industrial/stadium/international district along the shores of the <a href="http://en.wikipedia.org/wiki/Duwamish_River">Duwamish River</a>, which can be seen in the image, flanked by West Seattle and Beacon Hill. Seattle could have developed along the only river in the city: Seattle was almost called Duwamps, after all &#8230; but no, people had to go live on hills instead of flat land (yeah ok fine, there was too little of it and it was an Asian ghetto from the days of <a href="?p=14">Chin Chun Hock</a>).</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=50</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Today I can go no farther (part 6)</title>
		<link>https://blog.yhuang.org/?p=46</link>
		<comments>https://blog.yhuang.org/?p=46#comments</comments>
		<pubDate>Sun, 31 Dec 2006 03:53:21 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[certificate store]]></category>
		<category><![CDATA[ddrescue]]></category>
		<category><![CDATA[disk]]></category>
		<category><![CDATA[external storage device]]></category>
		<category><![CDATA[hard disk recovery]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[logical conclusion]]></category>
		<category><![CDATA[recovery]]></category>
		<category><![CDATA[Seagate]]></category>
		<category><![CDATA[seagate drive]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=46</guid>
		<description><![CDATA[This is part of the hard disk recovery documentation. Part 6. Today I can go no farther (so I stopped) The last days of this project are spent on two tiring tasks that do not gain me very much, but must be done to carry this project to its logical conclusion. One of these is [...]]]></description>
			<content:encoded><![CDATA[<p>This is part of the hard disk recovery documentation.</p>
<p>Part 6.</p>
<p><font color="#770033"><br />
Today I can go no farther (so I stopped)</p>
<p>The last days of this project are spent on two tiring tasks that do not gain me very much, but must be done to carry this project to its logical conclusion. One of these is to decrypt a few very small, but important NTFS-encrypted files. The other is to wring the last readable bits out of the broken Seagate drive by splitting all the error regions to isolate the unreadable regions as much as possible. These can proceed in parallel.</font><br />
<span id="more-46"></span><br />
<font color="#770033">I set Knoppix to slave away on the broken Seagate drive using ddrescue again. This ended up taking more than a day of constant running and made the disk really hot. I had to intervene manually from time to time just so it would get somewhere. This is where I wished ddrescue were smarter about skipping around unreadable regions.</p>
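<p>The region-splitting mentioned above can be sketched as a recursive bisection. The snippet below simulates the idea in Python with a fake disk; it is only the concept, not how ddrescue actually schedules its reads:</p>

```python
# Sketch of the "splitting" idea from a ddrescue-style retry phase:
# recursively bisect a failed region until good sectors are separated
# from the truly unreadable ones. The disk here is simulated; a real
# recovery would issue actual block reads (and handle far more cases).

BAD = {5, 6, 7}  # pretend these sector numbers are unreadable

def try_read(start, length):
    """Simulated block read: succeeds only if no sector in range is bad."""
    return not any(s in BAD for s in range(start, start + length))

def split_region(start, length, recovered, unreadable):
    if try_read(start, length):
        recovered.extend(range(start, start + length))
    elif length == 1:
        unreadable.append(start)          # isolated a single bad sector
    else:
        half = length // 2                # bisect and retry each half
        split_region(start, half, recovered, unreadable)
        split_region(start + half, length - half, recovered, unreadable)

recovered, unreadable = [], []
split_region(0, 16, recovered, unreadable)
print(sorted(unreadable))  # [5, 6, 7]
```

<p>Each failed read costs real time on a dying drive, which is why this last pass took over a day for only tens of megabytes.</p>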
<p>The other task was more interesting from a learning perspective. When NTFS encrypts a file, it generates a file-specific symmetric &#8220;file encryption key&#8221; and stores the key with the file. Of course the key isn&#8217;t stored in plaintext but is encrypted using the user&#8217;s public key, and is only decodable by the user&#8217;s private key. Normally the user should export the private key into a .pfx certificate file and store it on a smart card or USB pen drive or something, and more importantly, remove it from the certificate store on Windows. I did make a copy of the private key off the computer. I could just re-import the certificate, and then decrypting the data would be trivial. But the external storage device that holds the private key isn&#8217;t with me during the break, unfortunately. On the other hand, I didn&#8217;t bother to remove the private key from the computer either, because I wasn&#8217;t really serious about this particular piece of data &#8212; mostly just toying around with the NTFS encryption feature.</p>
<p>That made the encrypted file potentially recoverable without delay. Windows doesn&#8217;t store the user&#8217;s private key in plaintext either, of course. From what I understand, the user&#8217;s private key is encrypted somehow with user&#8217;s password hash, then maybe it is encrypted again with a master key belonging to the system. Somehow, the system&#8217;s master key is the weak link. I think it is stored in the SAM database of the system registry, which is in %SYSTEMROOT%\system32\config, though I may be mistaken on this point. The user&#8217;s various keys are stored in %USERPROFILE%\Application Data\Microsoft\Protect and %USERPROFILE%\Application Data\Microsoft\Crypto. Here is <a href="http://www.beginningtoseethelight.org/efsrecovery/index.php">another explanation</a> of this.</p>
<p>To make a long story short, if all of these files are available, and the user&#8217;s password is known, then it is possible to recover the encrypted files directly from the drive without an explicitly exported private key (hence encryption with these files around is ultimately pointless).</p>
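<p>As a concrete (and heavily simplified) picture of that key chain, here is a toy sketch in Python. This is <b>not</b> the real EFS/DPAPI cryptography: real EFS wraps the file encryption key with RSA, and DPAPI has its own derivation scheme. The XOR &#8220;cipher&#8221;, the password, and the key sizes below are made-up stand-ins, just to show why the on-disk key blobs plus the user&#8217;s password are enough to unwrap everything.</p>

```python
# Toy sketch of the EFS-style key chain -- NOT the real EFS/DPAPI
# algorithms, just the same layered structure, with XOR standing in
# for each encryption step.
import hashlib, os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "cipher": XOR data with a key stream derived by hashing the key.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The chain: a system master key and a password-derived key protect the
# user's private key; that key protects the per-file "file encryption
# key" (FEK); the FEK protects the data.
master_key = os.urandom(32)                         # lives in the registry
password_key = hashlib.sha256(b"hunter2").digest()  # from the login password
user_key = os.urandom(32)                           # the user's private key

# What is actually stored on disk: everything in wrapped form.
stored_user_key = xor(xor(user_key, password_key), master_key)
fek = os.urandom(32)
stored_fek = xor(fek, user_key)                     # kept alongside the file
ciphertext = xor(b"secret document", fek)

# Offline recovery: with the on-disk blobs plus the password, unwrap in reverse.
recovered_user_key = xor(xor(stored_user_key, master_key), password_key)
recovered_fek = xor(stored_fek, recovered_user_key)
plaintext = xor(ciphertext, recovered_fek)
```

<p>The point of the layering shows in the last three lines: every unwrap step needs only material sitting on the disk, plus the one secret the user actually remembers.</p>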
<p>I ended up using the <a href="http://www.elcomsoft.com/aefsdr.html">Elcomsoft Advanced EFS Data Recovery</a> tool (AEFSDR). They have a trial version, too. Unfortunately it&#8217;s crippleware, unlike Mount Image Pro. All I can say is, if you really want to decrypt files, you find a way around the &#8220;cripple&#8221; in crippleware. Enough said.</p>
<p>For the record, AEFSDR is pretty good and fast. However, I am also not completely satisfied with it. I haven&#8217;t found any bugs; the problem is its incompleteness. I don&#8217;t know what file system access method it uses, but it cannot see any removable drives, including removable USB drives or any mounted virtual disks (e.g. by Mount Image Pro). This is severely limiting, because encrypted files are not easily copied (Windows says Permission Denied), and I basically ran into a wall because of this. Please, AEFSDR, go work on this some more. You are selling a <b>recovery</b> tool, so your users are most likely in desperate straits with limited resources&#8230; they don&#8217;t want to deal with your programming kludges.</p>
<p>The way around it is to use NTBACKUP (part of Windows), which can archive encrypted files into a backup image that can then be restored onto a permanent physical disk. After also copying the security-related files from the Windows installation to a permanent physical disk, AEFSDR decrypts all files successfully.</p>
<p>RELOADING THE DDRESCUE IMAGE</p>
<p>After I got as much as I was ever going to get out of the Seagate disk using ddrescue (another few tens of megabytes), I retraced the steps on the new image. This time, the image loads as a valid NTFS partition and Windows will read it (apparently the last few bits recovered were pretty important). However, the Windows directory is still not readable, although almost all other directories are directly readable. NFI also works on the image &#8212; this is very good, as it means I can finally profile what I lost down to the last sector. I also ran chkdsk on it again &#8212; fewer errors to correct this time, and hopefully fewer zero-filled data issues to deal with at the end of the day. This chkdsk&#8217;d image is now the gold recovery image. I backed it up immediately.</p>
<p>That is the end of it. The story has a happy ending, but what a giant waste of time. Back up your data.</font></p>
<p>Lessons today:</p>
<ul>
<li>AEFSDR is nice. But I wouldn&#8217;t buy it, either.</li>
<li>Remove your private keys from the computer.</li>
<li>Near total data recovery is very possible from an apparently dead drive, but a delicate procedure needs to be followed.</li>
<li>The right tools are very important, and a complete disk recovery toolchain simply does not exist &#8212; you are left to find your own pieces.</li>
</ul>
<p>On to <a href="http://scripts.mit.edu/~zong/wpress/?p=47">the Appendix</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=46</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>The tide turns (part 5)</title>
		<link>https://blog.yhuang.org/?p=45</link>
		<comments>https://blog.yhuang.org/?p=45#comments</comments>
		<pubDate>Thu, 28 Dec 2006 15:13:21 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[disk]]></category>
		<category><![CDATA[ext]]></category>
		<category><![CDATA[ext2 partition]]></category>
		<category><![CDATA[external usb drive]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[knoppix linux]]></category>
		<category><![CDATA[letter]]></category>
		<category><![CDATA[mode]]></category>
		<category><![CDATA[ntfs partition]]></category>
		<category><![CDATA[seagate drive]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=45</guid>
		<description><![CDATA[This is part of the hard disk recovery documentation. Part 5. The tide turns (rather quickly) After the exceedingly annoying but ultimately inconsequential ext2 interlude, I&#8217;m back on track with the original problem of recovering data from the broken Seagate drive. After the disaster with file-copying using the Windows ext2ifs driver last time, I made [...]]]></description>
			<content:encoded><![CDATA[<p>This is part of the hard disk recovery documentation.</p>
<p>Part 5.</p>
<p><font color="#770033"><br />
The tide turns (rather quickly)</p>
<p>After the exceedingly annoying but ultimately inconsequential ext2 interlude, I&#8217;m back on track with the original problem of recovering data from the broken Seagate drive.</font><br />
<span id="more-45"></span><br />
<font color="#770033">After the disaster with file-copying using the Windows ext2ifs driver last time, I made sure to make a copy of the disk image while the external USB drive holding it was mounted under Knoppix Linux. The destination was another external drive formatted with ext2.</p>
<p>Then I took the copy of the image under Windows. This way I didn&#8217;t care what the ext2ifs driver wanted to do, and this time it didn&#8217;t mangle the partition (probably because this ext2 partition actually had valid physical formatting!).</p>
<p>Finally I can run NTFS tools on the broken disk, but I need to mount the (broken) NTFS image first. There is a free tool, <a href="http://www.acc.umu.se/~bosse/">FileDisk</a> by Bo Branten, to do this. Unfortunately, it gives a drive letter to the whole disk, instead of to the partitions. This is nearly impossible to work with because (1) there are 63 sectors of MBR, partition table, and filler at the front of the disk, and (2) there is the Dell diagnostic partition. And FileDisk doesn&#8217;t virtualize a physical disk device, which would be the more correct metaphor.</p>
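<p>To illustrate the 63-sector problem, here is a small Python sketch (mine, not part of FileDisk) that reads partition start offsets out of a raw image&#8217;s MBR, which is the arithmetic a whole-disk mount forces on you. The entry layout and type codes are standard MBR; the partition sizes below are invented for illustration.</p>

```python
# Parse the MBR partition table of a raw disk image to find where each
# partition's data actually starts, in bytes, within the image.
import struct

SECTOR = 512

def partition_table(mbr: bytes):
    """Return (type, start_lba, byte_offset, num_sectors) for each used entry."""
    assert mbr[510:512] == b"\x55\xaa", "missing MBR boot signature"
    entries = []
    for i in range(4):
        base = 446 + 16 * i                  # 4 entries of 16 bytes each
        ptype = mbr[base + 4]                # partition type code
        start_lba, num_sectors = struct.unpack_from("<II", mbr, base + 8)
        if ptype:
            entries.append((ptype, start_lba, start_lba * SECTOR, num_sectors))
    return entries

def entry(ptype, start_lba, num_sectors):
    # Status byte and CHS fields zeroed; only the type and LBA fields matter here.
    return bytes([0, 0, 0, 0, ptype, 0, 0, 0]) + struct.pack("<II", start_lba, num_sectors)

# A fake MBR resembling the layout described in the text: a small Dell
# diagnostic partition (type 0xDE) after the usual 63 reserved sectors,
# then the NTFS partition (type 0x07). Sizes are invented.
mbr = bytearray(512)
mbr[446:462] = entry(0xDE, 63, 80262)
mbr[462:478] = entry(0x07, 80325, 78140160)
mbr[510:512] = b"\x55\xaa"

for ptype, lba, offset, count in partition_table(bytes(mbr)):
    print(f"type {ptype:#04x}: starts at sector {lba}, byte offset {offset}")
```

<p>Seeking to that byte offset gets you to the start of the NTFS partition; a tool that only virtualizes the whole disk makes you account for this on every access, which is why a per-partition mount is so much more workable.</p>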
<p>A way around this is to use VMWare, and virtualize the disk image as a disk. But no, that&#8217;s too much trouble&#8230; although, after this incident, I&#8217;m really considering putting a VMWare image with Windows preinstalled onto a DVD or something &#8212; that would be the complement to Knoppix.</p>
<p>A somewhat more advanced tool called <a href="http://www.mountimage.com/">Mount Image Pro</a> (MIP) exists that does what I want. It isn&#8217;t free (<a href="#footnote">*</a>), but who cares when there is a one-month trial? (For the record, I&#8217;m not entirely happy with this thing, either. It works, but almost every other time it fails to load the system driver and requires a reboot and retry.)</p>
<p>With MIP, the NTFS partition comes up and is mounted under K:, but it is completely broken as expected. The file system cannot be read normally by Windows, nor will the diagnostic tool NFI (part of the <a href="http://support.microsoft.com/kb/253066">OEM Support Tools</a>) work on it. NFI reports which file a given sector belongs to and which sectors a given file occupies.</p>
<p>The next step is then to run chkdsk:</p>
<p>  > chkdsk /v K:</p>
<p>Very specific errors relating to the file table are immediately detected and chkdsk says it cannot continue in the (default) read-only mode.</p>
<p>So let&#8217;s try</p>
<p>  > chkdsk /v /r K:</p>
<p>Same complaint by chkdsk. What? It turns out MIP (at least the GUI) only mounts images in read-only mode, but it doesn&#8217;t specify this anywhere! Nor does it give an indication of how to get around this. Wow great.</p>
<p>After poking around, I noticed there is a command-line interface to MIP, too, and it is there that you can specify mounting in &#8220;read-write&#8221; mode, instead of &#8220;write-block&#8221; mode. (I would also call &#8220;write-block&#8221; mode &#8220;read-only&#8221; mode, but that&#8217;s just me, so what do I know!) Moving on:</p>
<p>  > mip mount rescue.image /rw /p:2 /l:K</p>
<p>Finally, the second partition in the disk image is mounted on the drive letter K: (hence the last two parameters). Unfortunately, MIP screws up again and says there is only &#8220;1&#8221; partition, and that the drive letter K is associated with the first, Dell diagnostic, partition&#8230; even though it clearly displays two partitions and shows that the drive letter K is in reality associated with the second partition. No matter, it still works, so I&#8217;ll leave the MIP people unflogged. But they should still go home and fix these.</p>
<p>Trying chkdsk again, it slaves away through a 5-step process, dumping output on all the files and directories on which it detected problems. The disk image is broken enough that chkdsk had to kill most of the NTFS file permissions. After about 2 or 3 hours, K: is miraculously readable again in Windows, with the basic directory structure at first glance intact. Very glaring, however, was that K:\windows wasn&#8217;t there. There is a new K:\found.000 folder where all the directory trees that lost their names (but were otherwise fairly intact) got re-rooted. There were 30+ such re-rooted directories, and it was trivial to find the one that should be K:\windows.</p>
<p>Running through the directories on the disk image that had been combed over by chkdsk, everything was readable and accessible. I&#8217;m more worried about the data parts that got zero-filled by ddrescue, which makes for a much more insidious type of data corruption. On the other hand, I am comforted by the high percentage of raw bits that ddrescue did recover, which bounds how much data can have been zero-filled. Also, anything that chkdsk had a problem with (in the file table) must have been due to zero-filling. Thus, the more problems chkdsk found, the fewer problems remain in the data portion, so statistically I am satisfied. With NFI, the extent of the damage at the file level can be ascertained precisely, but for all practical purposes, the recovery is near total. Certainly, all the files that I care about (defined as those not &#8220;easily&#8221; regenerated), which was maybe 20GB of the 40GB, are recovered, ranging from really small text files to fairly large (hundreds of MB) NTFS-compressed files.</p>
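<p>The statistical comfort can be made a bit more concrete. Under the (optimistic) assumption that unrecovered sectors are scattered independently, a file spanning n sectors is fully intact with probability p^n, where p is the fraction of sectors recovered. A quick Python sketch with invented numbers:</p>

```python
# Back-of-envelope model: p = fraction of sectors ddrescue recovered,
# n = number of 512-byte sectors a file spans. Independence is optimistic;
# in reality bad sectors cluster, which concentrates damage in fewer files.
def intact_probability(p: float, n_sectors: int) -> float:
    return p ** n_sectors

p = 0.9999                                # hypothetical recovery fraction
for size_kb in (4, 512, 50_000):          # small text file .. large compressed file
    n = size_kb * 1024 // 512
    print(f"{size_kb} KB file: {intact_probability(p, n):.3f} chance fully intact")
```

<p>Because bad sectors cluster rather than scatter, the real outcome tends to be better than this model suggests, which is consistent with the near-total recovery here.</p>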
<p>There is a lot more to do and this project is far from over, but I can see the end-game now, and so the recovery effort can be declared a success.</font></p>
<p>Lessons today:</p>
<ul>
<li>MIP is pretty useful, but I still won&#8217;t buy it.</li>
<li>NTFS is truly robust, and the NTFS version of chkdsk is remarkable.</li>
</ul>
<p>On to <a href="http://scripts.mit.edu/~zong/wpress/?p=46">Part 6</a>.</p>
<p><a name="footnote">*</a> Edit: I&#8217;ve since found a free tool called <a href="http://chitchat.at.infoseek.co.jp/vmware/vdk.html">vdk</a> that makes Mount Image Pro obsolete. Yes, it mounts ddrescue images. Yay!</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=45</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
