the propellers go around (part 2)

… but not on camera.

This is part of the Toronto visit series.

So I inadvertently took a picture of the running propeller on my side of the plane, and it came out weird. And I mean, really weird (1):

The propeller does not look like this. It has six blades, evenly spaced and all of them straight. So of course this is due to aliasing in the damned camera. But wait now, I just said not too long ago that this is photography, so indeed, I only took this and
(Read the article)
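
As an aside: the usual explanation for these bent-blade photos is the sensor's rolling shutter, a form of spatiotemporal aliasing in which each image row is read out at a slightly later instant. Here is a minimal numpy sketch of that model; the blade count matches the post, but the rotation speed and per-row readout time are assumed numbers, not measurements from this plane.

```python
import numpy as np

N = 400             # output image size in pixels (assumed)
BLADES = 6          # six straight, evenly spaced blades, as on the real prop
RPM = 2000.0        # assumed rotation speed
LINE_TIME = 30e-6   # assumed per-row readout time of the sensor

omega = RPM / 60.0 * 2.0 * np.pi            # rotation rate in rad/s
ys, xs = np.mgrid[0:N, 0:N]
x, y = xs - N / 2.0, ys - N / 2.0
r = np.hypot(x, y)
theta = np.arctan2(y, x)

# The rolling shutter: each row is exposed at its own instant, so the
# propeller has rotated by a different amount in every row of the image.
t = ys * LINE_TIME
phase = (theta - omega * t) * BLADES

# A blade exists wherever the phase is near a multiple of 2*pi.
image = ((np.cos(phase) > 0.95) & (r < N / 2.0)).astype(np.uint8) * 255
```

With LINE_TIME set to zero the same code draws six straight spokes, which is what a global-shutter capture would have shown.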

plenoptic cameras and the meaning of photography

Raytrix introduced the R11 Lightfield Camera not too long ago. It is still low-res and expensive, but improved versions of these should eventually catch on — they make too much sense not to.

The idea of plenoptic cameras has been thrown around for quite a while. Instead of a conventional camera with a single lens focusing a single object plane onto the in-camera image plane (i.e. the sensor), a plenoptic camera attempts to capture enough additional information to reconstruct “all possible” images obtainable from the light entering the same aperture. The most talked-about application is refocusing after the fact; if it were just this, then multi-capture with a mechanical focus sweep on a conventional camera would suffice. Another is stereograms, but again, two spaced shots would suffice for that. A plenoptic camera does more in one shot and makes these merely special post-processing cases.

The simplest conception of a plenoptic camera is essentially an array of mini-cameras (a microlens plus a micropixel array for each logical pixel) that separately captures light from every direction at each point in the image. In between conventional cameras and plenoptic cameras are perhaps smarter, sparser non-regular arrays like these coded aperture systems that hark back to old radio-astronomy literature. These have good signal processing properties from a deconvolution perspective, but the full-array plenoptic camera like the R11 seems fully general, and with some future industrial scaling, the cost savings of a compromise design may be inconsequential.
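
To make the refocusing case concrete, here is a toy shift-and-add refocuser, the textbook light-field maneuver rather than Raytrix's actual pipeline; the array layout, the `refocus` name, and the alpha parameterization are my assumptions for illustration.

```python
# A hedged sketch of synthetic refocusing from a plenoptic capture.
# Assume the raw capture has been resorted into sub-aperture images:
# lf[u, v] is the view through sub-aperture (u, v) of the main lens.
import numpy as np

def refocus(lf, alpha):
    """Synthesize a photo focused at a depth parameterized by alpha.

    lf    : array of shape (U, V, H, W), sub-aperture images
    alpha : pixel shift per unit of aperture offset; alpha = 0
            reproduces the in-focus plane of the original capture
    """
    U, V, H, W = lf.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its aperture offset,
            # then average: points at the chosen depth align and stay
            # sharp, everything else smears out, just like a lens's
            # depth of field.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# toy usage: a 5x5 grid of 64x64 views, refocused at two depths
views = np.random.rand(5, 5, 64, 64)
near, far = refocus(views, alpha=1.0), refocus(views, alpha=-1.0)
```

The point is that alpha picks a virtual focal plane after the fact, which is exactly what the mechanical focus sweep above would have needed a separate exposure for.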

Fine, so a plenoptic camera may make clever use of its given aperture size, but do we really get something for nothing? To answer that, first a digression.
(Read the article)

mosquito-shooting laser

So this video of a mosquito-shooting laser in the article here has been making the YouTube rounds recently. This isn’t really new, is it? It was already demonstrated last year, as described in this article. I was curious about the technology that you can supposedly assemble from eBay-acquired parts, and it turns out to be kind of lame…

Demonstrating the technology recently, Dr. Kare, Mr. Myhrvold and other researchers stood below a small shelf mounted on the wall about 10 feet off the ground. On the shelf were five Maglite flashlights, a zoom lens from a 35mm camera, and the laser itself — a little black box with an assortment of small lenses and mirrors. On the floor below sat a Dell personal computer that is the laser’s brain.

To locate individual mosquitoes, light from the flashlights hits the tank across the room, creating tiny mosquito silhouettes on reflective material behind it. The zoom lens picks up the shadows and feeds the data to the computer, which controls the laser and fires it at the bug.
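
Taken at face value, the detection step is nearly trivial: against a uniformly backlit screen, a mosquito is just a small dark blob. Here is a toy reconstruction of that step (my sketch, not Intellectual Ventures' code); the threshold and blob-size limits are made-up numbers.

```python
import numpy as np

def find_silhouettes(frame, background, thresh=40, min_px=3, max_px=200):
    """Return (row, col) centroids of dark blobs in a backlit frame.

    frame, background : 2-D uint8 grayscale images; `background` is a
    shot of the empty, flashlight-lit screen.
    """
    # Pixels much darker than the empty screen are candidate insects.
    dark = background.astype(np.int16) - frame.astype(np.int16) > thresh

    # Tiny hand-rolled flood fill, so the sketch needs nothing but numpy.
    visited = np.zeros(dark.shape, dtype=bool)
    blobs = []
    for r0, c0 in zip(*np.nonzero(dark)):
        if visited[r0, c0]:
            continue
        stack, px = [(r0, c0)], []
        visited[r0, c0] = True
        while stack:
            r, c = stack.pop()
            px.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < dark.shape[0] and 0 <= cc < dark.shape[1]
                        and dark[rr, cc] and not visited[rr, cc]):
                    visited[rr, cc] = True
                    stack.append((rr, cc))
        if min_px <= len(px) <= max_px:   # keep mosquito-sized blobs only
            ys, xs = zip(*px)
            blobs.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return blobs  # centroids to hand to whatever aims the laser
```

Note that the simple background subtraction only works because the screen guarantees a bright, static backdrop, which is the whole problem.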

I’m sorry, but having a screen behind the tank to form an image for detection is cheating, and it makes this much less exciting. How is this going to work in the field (cheaply), and how would it be any more practical than a net?