<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Some stuff &#187; camera</title>
	<atom:link href="http://blog.yhuang.org/?feed=rss2&#038;tag=camera" rel="self" type="application/rss+xml" />
	<link>https://blog.yhuang.org</link>
	<description>here.</description>
	<lastBuildDate>Wed, 27 Aug 2025 08:50:58 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.1</generator>
		<item>
		<title>the propellers go around (part 2)</title>
		<link>https://blog.yhuang.org/?p=551</link>
		<comments>https://blog.yhuang.org/?p=551#comments</comments>
		<pubDate>Wed, 06 Jul 2011 15:39:26 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[angular range]]></category>
		<category><![CDATA[camera]]></category>
		<category><![CDATA[discrete piece]]></category>
		<category><![CDATA[discrete pieces]]></category>
		<category><![CDATA[elliptical region]]></category>
		<category><![CDATA[phone]]></category>
		<category><![CDATA[propeller]]></category>
		<category><![CDATA[region]]></category>
		<category><![CDATA[scanlines]]></category>
		<category><![CDATA[series]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=551</guid>
		<description><![CDATA[&#8230; but not on camera. This is part of the Toronto visit series. So I inadvertently took a picture of the running propeller on my side of the plane, and it came out weird. And I mean, really weird (1): The propeller does not look like this. It has like six blades spaced out evenly [...]]]></description>
			<content:encoded><![CDATA[<p>&#8230; but not on camera.</p>
<p><em>This is part of the Toronto visit series.</em></p>
<p>So I inadvertently took a picture of the running propeller on my side of the plane, and it came out weird. And I mean, really weird (1):<br />
<img src="wp-content/uploads/images/prop1.jpg" width="600" /></p>
<p>The propeller does not look like this. It has like six blades spaced out evenly and all of them straight. So of course this is due to aliasing of the damned camera. But wait now, I just said not too long ago that <a href="?p=525">this is photography</a>, so indeed, I only took this and<br />
<span id="more-551"></span><br />
this (2)<br />
<img src="wp-content/uploads/images/prop2.jpg" width="600" /></p>
<p>and this (3)<br />
<img src="wp-content/uploads/images/prop3.jpg" width="600" /></p>
<p>along with this (4)<br />
<img src="wp-content/uploads/images/prop4.jpg" width="600" /></p>
<p>because I was strangely mesmerized. Not only is this pseudo-reality observable only through a cheap phone camera like this, but I&#8217;m measuring the camera as much as I&#8217;m measuring the propeller.</p>
<p>See, the effect is only possible because the phone camera has no mechanical shutter (so DSLRs shouldn&#8217;t exhibit this); it uses an electronic rolling shutter, so the image is literally read out line by line, top to bottom. By the looks of it, all pixels on one horizontal scanline are read out or replicated to memory in parallel, but different scanlines belong to different times, with a sampling time gap between lines. Meanwhile, the propeller moved on. Since the angle between attachment points of the blades is reduced in (1) and expanded in (4), the propeller is spinning clockwise.</p>
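<p>A minimal sketch of this line-by-line readout, in Python. All numbers here are hypothetical (blade count, frame size, turns per frame); the point is the mechanism: each row is sampled at a later time, and only the blade segments crossing that row at that instant get recorded.</p>

```python
import numpy as np

def rolling_shutter_propeller(height=400, width=400, n_blades=6,
                              rev_per_frame=2.0, blade_len=180):
    """Simulate a rolling-shutter photo of a spinning propeller.

    Each scanline (row) is sampled at a later time, during which the
    propeller has rotated further; rev_per_frame is how many full turns
    it makes while the frame is read out top to bottom.
    """
    img = np.zeros((height, width), dtype=np.uint8)
    cx, cy = width // 2, height // 2
    for y in range(height):
        t = y / height                      # scanline time, as frame fraction
        base = 2 * np.pi * rev_per_frame * t
        for b in range(n_blades):
            theta = base + 2 * np.pi * b / n_blades
            # where blade b lies at this scanline time; keep only the
            # points that fall on the row currently being read out
            r = np.arange(blade_len)
            xs = (cx + r * np.cos(theta)).astype(int)
            ys = (cy + r * np.sin(theta)).astype(int)
            on_row = (ys == y) & (xs >= 0) & (xs < width)
            img[y, xs[on_row]] = 255
    return img
```

The disconnected, curved "fins" in the resulting image come out of exactly this intersection of a moving blade with a sweeping scanline.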
<p>However, what appear to be the blades in these photos are not the blades! Otherwise how could some of them hang around in mid-air and also have two ends? Well, first note that the entire region where there are fin-like objects is circumscribed by the elliptical region that the real propeller traces out (from this viewpoint). This clue suggests that each discrete piece is a portion of an image of a single blade sweeping through a certain angular range. It&#8217;s only a portion, because the blade in question is captured only while it is within the current scanning region. Once it&#8217;s out, the camera doesn&#8217;t &#8220;see&#8221; it. There are gaps between these discrete pieces because there is an angular gap between adjacent blades on the propeller.</p>
<p>If the rotational speed of the propeller is known, then we can even figure out the rate at which image lines are scanned. As is, there are a dozen or so discrete pieces in a single image, so given that there are only six blades, the propeller makes nearly two turns during the shot.</p>
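<p>Written out as arithmetic (the propeller speed and scanline count below are assumed for illustration, not measured from the photos):</p>

```python
# Back-of-envelope: from blade-fragments counted in one frame, infer
# the readout speed. All numbers are illustrative, not measured.
n_blades = 6
n_pieces = 12          # blade-fragments counted in one image
rows = 480             # scanlines in the frame (assumed)
rpm = 1200             # assumed propeller speed, rev/min

turns_per_frame = n_pieces / n_blades          # ~2 turns during readout
frame_time = turns_per_frame / (rpm / 60)      # seconds to read the frame
line_time = frame_time / rows                  # seconds per scanline
print(f"{turns_per_frame:.1f} turns, {frame_time*1e3:.0f} ms/frame, "
      f"{line_time*1e6:.1f} us/line")
```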
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=551</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>plenoptic cameras and the meaning of photography</title>
		<link>https://blog.yhuang.org/?p=525</link>
		<comments>https://blog.yhuang.org/?p=525#comments</comments>
		<pubDate>Sun, 26 Jun 2011 00:28:24 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[aperture size]]></category>
		<category><![CDATA[astronomy literature]]></category>
		<category><![CDATA[camera]]></category>
		<category><![CDATA[cartoon version]]></category>
		<category><![CDATA[conventional cameras]]></category>
		<category><![CDATA[fourier optics]]></category>
		<category><![CDATA[image]]></category>
		<category><![CDATA[object]]></category>
		<category><![CDATA[resolution]]></category>
		<category><![CDATA[spatial]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=525</guid>
		<description><![CDATA[Raytrix introduced the R11 Lightfield Camera not too long ago. It is still low-res and expensive, but improved versions of these should eventually catch on &#8212; they make too much sense not to. The idea of plenoptic cameras has been thrown around for quite a while. Instead of a conventional camera with a single lens [...]]]></description>
			<content:encoded><![CDATA[<p>Raytrix introduced the <a href="http://www.raytrix.de/index.php/r11.185.html">R11 Lightfield Camera</a> not too long ago. It is still low-res and expensive, but improved versions of these should eventually catch on &#8212; they make too much sense not to.</p>
<p>The idea of plenoptic cameras has been thrown around for quite a while. Instead of a conventional camera with a single lens focusing a single object plane onto the in-camera image plane (i.e. the sensor), a plenoptic camera attempts to capture enough additional information so as to be able to <a href="http://groups.csail.mit.edu/graphics/classes/CompPhoto06/html/lecturenotes/25_LightField_6.pdf">reconstruct &#8220;all possible&#8221; images</a> that can be obtained from the light entering the same aperture. The most talked-about application is subsequent refocusing; if it were just this, then multi-capture with mechanical focal-length sweeps using a conventional camera would suffice. Another is stereograms, but again, two spaced shots would suffice for that. A plenoptic camera does more in one shot and makes these merely special post-processing cases. The <a href="http://eces.colorado.edu/~pavani/Plenoptic.pdf">simplest conception of a plenoptic camera</a> is essentially an array of mini-cameras (microlens + micropixel array for each logical pixel) that separately captures light from all directions at each point in the image. In between conventional cameras and plenoptic cameras are perhaps smarter, sparser non-regular arrays like these <a href="http://www.digitalcamerainfo.com/content/Mitsubishi-Electric-Develops-Camera-to-Refocus-Photos.htm">coded aperture systems</a> that hark back to <a href="http://www.umiacs.umd.edu/~aagrawal/sig06/CodedApertureImaging.html">old radio-astronomy literature</a>. These have good signal processing properties from a deconvolution perspective, but the full-array plenoptic camera like the R11 seems fully general, and with some future industrial scaling, the saved expenses of a compromise may be inconsequential.</p>
<p>Fine, so a plenoptic camera may make clever use of its given aperture size, but do we really get something for nothing? To answer that, first a digression.<br />
<span id="more-525"></span><br />
Why does a conventional camera lose information? According to the cartoon version of Fourier optics, a lens is a spatially-variant phase transformer and space itself is an array of lightfield integrators. One can sort of see that light from the object arriving at farther off-axis locations on the lens will have greater (i.e. higher-frequency) phase variations corresponding to various points of the object, the result being that the pattern at the front end of the lens is the Fourier Transform of the object. The lens multiplies its aperture shape onto this, and by its phase transforming capability, sends the product out to the image plane, where the analogous thing happens. The pattern at the image plane is thus the Inverse Fourier Transform of the pattern coming out of the lens. The net result is that the image is the object convolved with the transform pair of the lens aperture shape, known as the Point Spread Function (PSF).</p>
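<p>This cartoon (PSF = transform pair of the aperture shape) can be checked numerically. A sketch assuming an ideal circular aperture, computing the PSF as the squared magnitude of the Fourier transform of the pupil:</p>

```python
import numpy as np

def psf_from_aperture(n=256, radius=32):
    """PSF as the squared magnitude of the Fourier transform of the
    aperture shape -- the cartoon-Fourier-optics picture above."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    aperture = (x**2 + y**2 <= radius**2).astype(float)  # circular pupil
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
    psf = np.abs(field)**2
    return psf / psf.sum()  # normalize so total energy is 1

psf = psf_from_aperture()
# A larger aperture concentrates the (normalized) PSF into a narrower
# main lobe -- less diffraction blur, as the text says:
wide = psf_from_aperture(radius=64)
```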
<p>So right off the bat, the object has been low-passed through the finite aperture due to diffraction. Nothing can be done about finite bandwidth, but at least the focused image-plane distance is usually also where the PSF main lobe is narrowest for a given object-plane distance (this coincides with geometric-optics results). Away from the designated object-plane distance, we no longer get good approximations to Fourier Transforms, so the image is further distorted by non-invertible transformations that can never be uniquely decoded.</p>
<p>A plenoptic camera is supposed to get around this with its microlens array structure because (1) each small-aperture microlens is a large depth-of-field subsystem operating in parallel with the others, (2) total bandwidth is not sacrificed since the synthetic aperture is still large, and (3) it is better than a <a href="http://www.biox.kth.se/kjellinternet/Pinhole.pdf">pinhole</a> array since the microlenses still do your Inverse Fourier Transforms for you, with no deconvolution of potentially non-invertible transformations involved. But in exchange, there is now a design tradeoff between spatial resolution per depth and depth resolution. So it&#8217;s not a free gain.</p>
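<p>The tradeoff in that last sentence is just a pixel budget: sensor pixels are split between spatial samples (microlenses) and angular/depth samples (pixels behind each microlens). Illustrative numbers, not from the R11 spec:</p>

```python
# Plenoptic pixel budget: spatial resolution times angular resolution
# cannot exceed the raw sensor resolution. Numbers are illustrative.
sensor_px = 4000 * 3000            # a 12 MP sensor (assumed)
angular_samples = 5 * 5            # pixels behind each microlens (assumed)
spatial_px = sensor_px // angular_samples
print(spatial_px)                  # about 0.5 MP of spatial resolution left
```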
<p>In particular, once the camera is built (at least the current type), we don&#8217;t get the option of choosing non-uniform tradeoffs across the image. We get some spatial resolution and some depth resolution that are characteristic of the camera, and then all we can do is degrade it computationally for effect, always throwing away the bulk of the data in each computation, but having the flexibility to choose what to throw away. In a conventional camera, the optical resources are deployed differently. We could obtain high spatial and depth resolution around one particular depth and decaying spatial and depth resolution at depths away from that. Or we could obtain high spatial resolution at all depths, but no depth resolution. Or something in between. The choices are limited, but given that one of them is what we actually want, the data is combined fully in the desired way. It is only when we think we were mistaken that we regret that information has been &#8220;lost&#8221; (but really it was just combined in the &#8220;wrong&#8221; way). Because the conventional camera has evolved for 100+ years, in capable hands it is kind of matched to exactly what you want to do in photography. Of course you can&#8217;t do weird things like having two depths at which things are focused, but that is also weird in a way that our (conventional) eyes are not used to.</p>
<p>Then this brings us to the question of what photography is. Is it a reproduction of the physics of radiation? Then a plenoptic camera isn&#8217;t enough; one would need a hyperspectral holographic recorder of some sort. But it isn&#8217;t that. Is it to reproduce what our eyes see? It isn&#8217;t that, either, because photographs are somewhat realistic but surely not real. With more and more in-camera and post-processing gimmicks, real may even be unacceptable. Is it just free painting with a really complicated and constrained brush &#8212; in other words, masochism? No, it isn&#8217;t that, because then no interaction with real-world input would be required. The meaning of photography, so far as I can see, is an art which, not unlike other arts, is the communication of an emotional state. But this emotional state is defined by that which the photographer experiences at the <em>acute moment</em> when a particular real-world phenomenon (the scene) is observed. It&#8217;s what the photographer imagines is seen that he tries his best to manipulate his instrument to reproduce, but this imagination is seeded irrevocably by the transient real phenomenon. Anything else, he could just patiently paint from scratch; but for <em>this</em>, the imagination seeded by the transient real phenomenon, he somehow <em>needs</em> photography.</p>
<p>But does he need post-processing? Does he need a fully capable plenoptic camera so he can do post-processing? Is that still true to the moment or is it okay to have the emotional state last a while (and possibly change) all the way back to the photoshop computer? Heck, why not have photoshop onboard? That would free the imagination. I lean towards yes, if only for the reason that more options can&#8217;t hurt. But there is unresolved tension on this question. A camera with low capabilities can have its quirks and constraints become part of the real phenomenon, if the photographer is presumed to feel the world through its viewfinder. Is this a bit recursive? Once the instrument has too much software, then what is to separate photography from virtual brush painting in its onboard photoshop?</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=525</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>mosquito-shooting laser</title>
		<link>https://blog.yhuang.org/?p=244</link>
		<comments>https://blog.yhuang.org/?p=244#comments</comments>
		<pubDate>Wed, 24 Feb 2010 23:56:23 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[camera]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[Demonstrating]]></category>
		<category><![CDATA[ebay]]></category>
		<category><![CDATA[Maglite]]></category>
		<category><![CDATA[maglite flashlights]]></category>
		<category><![CDATA[myhrvold]]></category>
		<category><![CDATA[reflective material]]></category>
		<category><![CDATA[small shelf]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=244</guid>
		<description><![CDATA[So this video of a mosquito-shooting laser in the article here has been making the Youtube rounds recently. This isn&#8217;t really new, is it? It was already demonstrated last year, as described in this article. I was curious about the technology that supposedly you can assemble from eBay-acquired parts, and it turns out to be [...]]]></description>
			<content:encoded><![CDATA[<p>So <a href="http://www.youtube.com/watch?v=eYXPqrXZ1eU">this video</a> of a mosquito-shooting laser in the article <a href="http://www.popsci.com/technology/article/2010-02/video-laser-zaps-pesky-skeets">here</a> has been making the Youtube rounds recently. This isn&#8217;t really new, is it? It was already demonstrated last year, as described in <a href="http://online.wsj.com/article/SB123680870885500701.html">this article</a>. I was curious about the technology that supposedly you can assemble from eBay-acquired parts, and it turns out to be kind of lame&#8230;</p>
<blockquote><p>Demonstrating the technology recently, Dr. Kare, Mr. Myhrvold and other researchers stood below a small shelf mounted on the wall about 10 feet off the ground. On the shelf were five Maglite flashlights, a zoom lens from a 35mm camera, and the laser itself &#8212; a little black box with an assortment of small lenses and mirrors. On the floor below sat a Dell personal computer that is the laser&#8217;s brain.<br />
&#8230;<br />
To locate individual mosquitoes, light from the flashlights hits the tank across the room, creating tiny mosquito silhouettes on reflective material behind it. The zoom lens picks up the shadows and feeds the data to the computer, which controls the laser and fires it at the bug.</p></blockquote>
<p>I&#8217;m sorry, but having a screen behind to form an image for detection is cheating and makes this much less exciting. How is this going to work in the field (cheaply) and be something more relevant than a net?</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=244</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
