<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Some stuff &#187; technology</title>
	<atom:link href="http://blog.yhuang.org/?feed=rss2&#038;tag=technology" rel="self" type="application/rss+xml" />
	<link>https://blog.yhuang.org</link>
	<description>here.</description>
	<lastBuildDate>Wed, 27 Aug 2025 08:50:58 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.1</generator>
		<item>
		<title>mosquito-shooting laser</title>
		<link>https://blog.yhuang.org/?p=244</link>
		<comments>https://blog.yhuang.org/?p=244#comments</comments>
		<pubDate>Wed, 24 Feb 2010 23:56:23 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[camera]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[Demonstrating]]></category>
		<category><![CDATA[ebay]]></category>
		<category><![CDATA[Maglite]]></category>
		<category><![CDATA[maglite flashlights]]></category>
		<category><![CDATA[myhrvold]]></category>
		<category><![CDATA[reflective material]]></category>
		<category><![CDATA[small shelf]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=244</guid>
		<description><![CDATA[So this video of a mosquito-shooting laser in the article here has been making the Youtube rounds recently. This isn&#8217;t really new, is it? It was already demonstrated last year, as described in this article. I was curious about the technology that supposedly you can assemble from eBay-acquired parts, and it turns out to be [...]]]></description>
			<content:encoded><![CDATA[<p>So <a href="http://www.youtube.com/watch?v=eYXPqrXZ1eU">this video</a> of a mosquito-shooting laser in the article <a href="http://www.popsci.com/technology/article/2010-02/video-laser-zaps-pesky-skeets">here</a> has been making the Youtube rounds recently. This isn&#8217;t really new, is it? It was already demonstrated last year, as described in <a href="http://online.wsj.com/article/SB123680870885500701.html">this article</a>. I was curious about the technology that supposedly you can assemble from eBay-acquired parts, and it turns out to be kind of lame&#8230;</p>
<blockquote><p>Demonstrating the technology recently, Dr. Kare, Mr. Myhrvold and other researchers stood below a small shelf mounted on the wall about 10 feet off the ground. On the shelf were five Maglite flashlights, a zoom lens from a 35mm camera, and the laser itself &#8212; a little black box with an assortment of small lenses and mirrors. On the floor below sat a Dell personal computer that is the laser&#8217;s brain.<br />
&#8230;<br />
To locate individual mosquitoes, light from the flashlights hits the tank across the room, creating tiny mosquito silhouettes on reflective material behind it. The zoom lens picks up the shadows and feeds the data to the computer, which controls the laser and fires it at the bug.</p></blockquote>
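<p>As an aside, the detection half of that rig is easy to mimic in software: the backlit screen guarantees a bright background, so the bug is just a cluster of dark pixels, and a threshold plus a centroid gives aim coordinates. A minimal sketch, assuming a toy 2-D brightness grid in place of a real camera frame (all names are illustrative):</p>

```python
# Toy silhouette detector: backlit scene -> dark-blob centroid.
# The reflective screen makes the background bright (values near 255);
# a mosquito shows up as a small cluster of dark pixels.

def find_target(frame, threshold=64):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None when nothing casts a shadow."""
    dark = [(r, c)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v < threshold]
    if not dark:
        return None
    n = len(dark)
    return (sum(r for r, _ in dark) / n, sum(c for _, c in dark) / n)

# A 5x5 "frame": bright screen with a 2-pixel shadow near the center.
frame = [[255] * 5 for _ in range(5)]
frame[2][2] = frame[2][3] = 10

print(find_target(frame))  # -> (2.0, 2.5)
```

<p>A real system would of course still need calibration from pixel coordinates to laser angles, and some check (wing-beat frequency, say) before firing.</p>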
<p>I&#8217;m sorry, but having a screen behind the tank to form an image for detection is cheating and makes this much less exciting. How is this going to work in the field (cheaply) and be something more relevant than a net?</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=244</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>autotune and avatar</title>
		<link>https://blog.yhuang.org/?p=235</link>
		<comments>https://blog.yhuang.org/?p=235#comments</comments>
		<pubDate>Sun, 24 Jan 2010 02:32:20 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[automation technology]]></category>
		<category><![CDATA[autotune]]></category>
		<category><![CDATA[creative fields]]></category>
		<category><![CDATA[movie]]></category>
		<category><![CDATA[need]]></category>
		<category><![CDATA[perfect pitch]]></category>
		<category><![CDATA[singer]]></category>
		<category><![CDATA[study]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[true creativity]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=235</guid>
		<description><![CDATA[Although autotune is now used as a digital effect, it was originally used to correct pitch in songs. With its use, singers can sing in perfect pitch, so long as they are not too far off. Indeed, autotune does not need to work in real time, and at a high level, it is no different [...]]]></description>
			<content:encoded><![CDATA[<p>Although autotune is now used as a digital effect, it was originally used to correct pitch in songs. With its use, singers can sing in perfect pitch, so long as they are not too far off. Indeed, autotune does not need to work in real time, and at a high level, it is no different from an instrument synthesizer, but with the instrument sampled in real time. (Perhaps such a hybrid approach could render real acoustic instruments even more realistically, and make almost anybody a &#8220;great&#8221; musician.) As the automated portion of autotune&#8217;s capability improves, less and less of the singer&#8217;s input is needed, and one finds less and less need for the perfect singer, and more and more need for the perfect song and its performance <em>intention</em> &#8212; this is, after all, the essence of a creative work &#8212; not the much-valued virtuosity with which it is performed (for its &#8220;difficulty&#8221;).</p>
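<p>The pitch-correction core is just a frequency snap: measure the sung frequency, round its distance from a reference note to a whole number of semitones, and resynthesize at the snapped pitch. A minimal sketch of the snapping step, assuming equal temperament around A4 = 440 Hz (the pitch detector that produces the frequency is not shown):</p>

```python
import math

A4 = 440.0  # Hz, reference pitch

def snap_to_semitone(freq):
    """Snap a frequency to the nearest equal-tempered semitone.
    12 * log2(f / A4) is the (fractional) distance from A4 in
    semitones; rounding it picks the nearest note."""
    n = round(12 * math.log2(freq / A4))
    return A4 * 2 ** (n / 12)

# A singer aiming for A4 (440 Hz) but landing slightly flat:
print(snap_to_semitone(430.0))  # -> 440.0
```

<p>The &#8220;not too far off&#8221; condition is exactly the rounding: sing more than half a semitone away from the intended note and the snap lands on the wrong pitch.</p>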
<p>A similar thing has been taking place in motion picture production, with computer assisted graphics taking over for effects and stunts. Lately, the production process for the movie Avatar has pushed this process to a mini-plateau of some sort. Avatar, as you may recall, is produced by sampling the expressivity of the actors on a body grid, then re-rendering in a very different way. Much like the human-controlled machines in the movie, the actors are just giving input to a machine, which follows the director&#8217;s desires. Again, as the technology improves, less and less of the actor&#8217;s input is needed, and eventually, they, like the singers, will be unnecessary.</p>
<p>When it comes to the creative fields, as it does &#8212; I believe &#8212; in any field, the evolution of automation technology diminishes birth advantages, allows compartmentalization of skills, promotes specialization of skills, and therefore equalizes opportunities. The beneficiaries are people who engage in <em>true</em> creativity <u>of the mind</u>, both in the arts and in the engineering of the technology, while the losers are the human &#8220;performers&#8221;, save for the few truly great ones, who will be needed to go through the dehumanizing experience of being sampled as input for a machine.</p>
<p>So&#8230; study what a machine cannot do, or, study how to make a machine do that.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=235</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>chrome os, wave, collaboration</title>
		<link>https://blog.yhuang.org/?p=222</link>
		<comments>https://blog.yhuang.org/?p=222#comments</comments>
		<pubDate>Fri, 20 Nov 2009 02:10:17 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cash cow]]></category>
		<category><![CDATA[chrome]]></category>
		<category><![CDATA[collaboration]]></category>
		<category><![CDATA[critical pieces]]></category>
		<category><![CDATA[desktop programs]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[novel experience]]></category>
		<category><![CDATA[novel technology]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[today]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=222</guid>
		<description><![CDATA[Something in the news says Chrome OS got a demo today. I don&#8217;t even care, since I don&#8217;t think what&#8217;s being demonstrated &#8212; a glorified PDA with internet connection &#8212; is, by itself, very interesting. What&#8217;s important is what runs on it that can&#8217;t be run in another way or with as much ease. What [...]]]></description>
			<content:encoded><![CDATA[<p>Something in the news says <a href="http://www.pcworld.com/businesscenter/article/182711/googles_chrome_os_stands_to_make_waves.html">Chrome OS</a> got a demo today. I don&#8217;t even care, since I don&#8217;t think what&#8217;s being demonstrated &#8212; a glorified PDA with internet connection &#8212; is, by itself, very interesting. What&#8217;s important is what runs on it that can&#8217;t be run in another way or with as much ease. What might that be? It seems to me this &#8220;novel experience&#8221; (not necessarily novel technology) is in the roadmap of Google and other big companies &#8212; but only in pieces spread among them, with none of them seeming to have the entirety of it. And that is ridiculous&#8230;</p>
<p>So Google has the ideas. Microsoft has the delivery mechanism in the form of the installed base and the ready platform with the ability to propagate via a simple update. Apple has the hardware designs and marketing to get people to adopt. Yet, each is missing the critical pieces held by others. And so we stall in Year 2009 as each company tries to replicate some existing thing that another company is already good at.</p>
<p>(<a href="http://arstechnica.com/open-source/news/2009/11/chromeos-announcement.ars/1">This very good article</a> gives too much credit to Google, I believe. The situation is a lot more symmetrical and Google should not be elevated to a privileged position. The current Chrome OS for netbooks, I believe, is a clear misread of the market. People want a better phone, not a worse computer, and Google will likely fail with this if they make the latter without the former (Android?) catching on first. I think the &#8220;PC companies&#8221; are not that far behind either. It&#8217;s much harder for inexperienced Google to make a good cloud client than for, say, Microsoft to deliver good cloud integration. In some sense, Microsoft&#8217;s lack of execution on this front is due to politics, i.e. lack of willpower to lose a cash cow until it is inevitable, not due to technical barriers.)<br />
<span id="more-222"></span><br />
Which leads me to this envisioned usage scenario that (1) is actually useful/productive and (2) could really benefit from cloud integration rather than be a poor knock-off of existing desktop programs. It&#8217;s actually simple and not novel at all. In fact it has been talked about for decades, but it still cannot be done today (with commonplace hardware and software). Why? If something this simple, which should be achievable overnight (the technology is there), isn&#8217;t even available, then forget about real innovation.</p>
<p>So let me make it very concrete. I take out a piece of hardware &#8212; it is a tablet. I would like to write a paper. But I have coauthors. So I start a new document &#8212; maybe it&#8217;s for writing LaTeX. I start writing &#8212; with a pen &#8212; or I type, whatever. At some point I decide I want collaboration, so maybe I turn this into a &#8220;collaboration&#8221; mode. Think of this as going to a public space to write on the whiteboard. Now anybody can see this in progress, <em>in real time</em>, not as its autosaved version, if they open up the same program and I&#8217;m on their collaborator list or something. Maybe I tell them out of band that I want to collaborate or they already know. They can request to join, and I can let them, and even add them to the list of collaborators for this document (or project, as every document should be automatically versioned into a project tree). They can edit <em>at the same time</em> as I edit, and each person can see what the other person is doing, if they just went to the right part of the document to look. The part you are editing can be locked if you wish. Or you can lock other parts, doesn&#8217;t matter. You can make comments in the margins. You can view the current snapshot any time, without interfering with other editors. At any time, you can save a state as a satisfactory &#8220;version&#8221;. References can be added by dropping anything &#8212; PDF, URL, some search text &#8212; and the editor can go look for the reference and turn it into the right format, and cache it into your library of references. If you want to draw figures, you should be able to do it in place, with a pen, and it will be turned into nice figures (<a href="?p=204">discussed separately before</a>)&#8230; etc. etc.</p>
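<p>Stripped of transport and UI, the mechanics of that scenario (concurrent edits, per-section locks, saved versions) are tiny. A toy in-memory model, with all names illustrative; a real system would add networking, merging of concurrent edits, and persistence:</p>

```python
class SharedDoc:
    """Toy collaborative document: concurrent edits, per-section
    locks, and named versions. Purely in-memory; illustrative only."""

    def __init__(self):
        self.sections = {}  # section name -> current text
        self.locks = {}     # section name -> user holding the lock
        self.versions = {}  # version name -> frozen snapshot

    def edit(self, user, section, text):
        holder = self.locks.get(section)
        if holder is not None and holder != user:
            raise PermissionError(f"{section} is locked by {holder}")
        self.sections[section] = text

    def lock(self, user, section):
        self.locks.setdefault(section, user)  # first claimant wins

    def unlock(self, user, section):
        if self.locks.get(section) == user:
            del self.locks[section]

    def save_version(self, name):
        self.versions[name] = dict(self.sections)

doc = SharedDoc()
doc.edit("alice", "intro", "We study...")
doc.lock("alice", "results")
doc.edit("alice", "results", "Table 1 shows...")
doc.save_version("v1")                          # a satisfactory "version"
doc.edit("bob", "intro", "We investigate...")   # unlocked: allowed
try:
    doc.edit("bob", "results", "overwrite!")    # locked by alice: refused
except PermissionError as e:
    print(e)  # -> results is locked by alice
```

<p>The point of the sketch is how little of the hard part is in the data model; everything that is missing in practice is the seamless storage and transport underneath it.</p>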
<p>It would be even nicer if existing applications can acquire collaboration and versioning abilities simply by the operating system seamlessly supporting things like online storage, multiple inputs, and even more interesting file storage, tagging, linking, and presentation of such information.</p>
<p>Instead, what we get is <a href="http://shreevatsa.wordpress.com/2006/10/11/collaborative-work-with-latex/">this</a> and <a href="http://www.scribtex.com/pages/index">this</a> and <a href="https://coltex.inf.ed.ac.uk/">this</a> and the like.</p>
<p>Google has &#8220;wave&#8221;, which gets the &#8220;real-time collaboration&#8221; piece of the puzzle. Microsoft should be able to modify the OS to take care of existing applications and make online storage seamless (still no integrated network drive in 2009, incredibly). Apple should make a usable tablet (rumored as it is) as a form factor that people can like. But what we will get, I&#8217;m almost sure, will be half-baked versions of all of these and they won&#8217;t work together. Now, although Chrome OS seems to disown local hardware and software in favor of a crackling, slow, and flaky internet pipe, a decision that I do not believe is ideal at this time, I can at least see it work out eventually. But this depends on Chrome OS enabling collaboration &#8212; something so well matched to cloud integration &#8212; so a value proposition could be made for using it. If they do not even do this, I would consider them incompetent.</p>
<hr />
<p>As a (long) footnote, I would like to say that big companies these days have regressed in terms of being bold with innovation &#8212; understandable, given the environment &#8212; but still disappointing considering the pace of change in the 1990s. Look at the state of affairs:</p>
<p>- Microsoft has the vast reserves of technical expertise and vendor sway to do some great things, but it has already turned into another IBM &#8212; sucking on the teats of enterprise &#8212; and long forgotten the consumer. Innovation, if it can be called that, comes in the form of cheap me-too projects.</p>
<p>- Apple with its fashionable packaged goods and excellent marketing certainly has the ear of the consumer, but rip away the covers of any product and it doesn&#8217;t do anything that hasn&#8217;t already been done with uglier hardware. The proprietary locking of hardware and expense also funnel third-party development into mostly useless crapware and jokeware.</p>
<p>- Google has put out the most ideas with at least a modicum of novelty in recent years, but lately its resources have clearly been put into some kind of positioning game for some impending fight for becoming the platform of choice. Or perhaps the existing pieces have gotten so big or complex that it takes too much overhead to manage them, who knows. Its latest products have not been bold, Chrome OS included &#8212; I mean they are good, even better than what exists perhaps, but not outright bold and not convincing, always leaving a lingering feeling of what-if.</p>
<p>- Facebook and Amazon? Beyond their respective primary business, they are still nothing, though they are looking for ideas.</p>
<p>I&#8217;m actually hoping I am entirely wrong and this is the calm before the storm of massive transformation. But I&#8217;m waiting&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=222</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>apture&#8230;</title>
		<link>https://blog.yhuang.org/?p=219</link>
		<comments>https://blog.yhuang.org/?p=219#comments</comments>
		<pubDate>Wed, 11 Nov 2009 17:24:18 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[amazon]]></category>
		<category><![CDATA[brilliant business]]></category>
		<category><![CDATA[codemonkey]]></category>
		<category><![CDATA[college]]></category>
		<category><![CDATA[conception]]></category>
		<category><![CDATA[facebook]]></category>
		<category><![CDATA[funding]]></category>
		<category><![CDATA[manual fashion]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[web]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=219</guid>
		<description><![CDATA[I guess it happened like they say in the brochures, the guys in my college dorm started a company (called &#8216;apture&#8217;), now with millions of dollars of initial funding. This by itself is a great feat and I&#8217;m impressed that these guys, who could have been you or me in other respects &#8212; I know [...]]]></description>
			<content:encoded><![CDATA[<p>I guess it happened like they say in the brochures: the guys in my college dorm started a company (called <a href="http://www.apture.com">&#8216;apture&#8217;</a>), now with millions of dollars of initial funding. This by itself is a great feat and I&#8217;m impressed that these guys, who could have been you or me in other respects &#8212; I know them so I know &#8212; made it happen. (But please, this is not to say there is something wrong with being Joe Codemonkey or Bob Engineer.)</p>
<p>At first I wasn&#8217;t too impressed with the technology. In terms of conception, popping up content on a page isn&#8217;t too much different from what people naturally do by opening tabs or what designers do in a more laboriously manual fashion. Sure, it looks nicer and saves work, but is that really going to revolutionize the web? Does that befit the ridiculous moniker web 3.0? Of course not. If that&#8217;s all there is, it&#8217;s a flash in the pan and stupid. But when I realized that <a href="http://www.apture.com/builder/">server-side embedding is really what they&#8217;re doing</a>, I recognized the <a href="?p=30">Trojan potential</a> in this, and I must say it&#8217;s a brilliant business move and sets up for potentially much more exciting technologies once they get around to implementing them.<br />
<span id="more-219"></span><br />
Everybody is trying to sell platform these days, and they do this by taking over your browser and making it merely a passive window, like a television set. If Google, Yahoo, Microsoft, Amazon, or Facebook can just get you to surrender your browser to them by installing one of their &#8220;platform&#8221; tools, then they&#8217;re set. They can leisurely figure out on their end what they can deliver to you, which is: whatever they want. What happens on the server side they can control.</p>
<p>So apture can figure out what they want later. What they need is to hook you to them now, and each company hooks you with something useful, be it search, video, socializing, games, page presentation, or something else. Although I must say if you look at the situation this way, then in this race, the most compelling platforms remain Google and Facebook, whose hooks rise to some level of indispensability.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=219</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Wired on the Gaussian copula</title>
		<link>https://blog.yhuang.org/?p=164</link>
		<comments>https://blog.yhuang.org/?p=164#comments</comments>
		<pubDate>Wed, 25 Feb 2009 04:37:47 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[covariance matrix]]></category>
		<category><![CDATA[default correlation]]></category>
		<category><![CDATA[Gaussian]]></category>
		<category><![CDATA[gaussian copula]]></category>
		<category><![CDATA[heck]]></category>
		<category><![CDATA[marginal distributions]]></category>
		<category><![CDATA[paper]]></category>
		<category><![CDATA[pointless exercise]]></category>
		<category><![CDATA[structure]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=164</guid>
		<description><![CDATA[Because this article is spamming the internet today, I decided to read Li&#8217;s paper and learn what the heck this Gaussian copula is. For five years, Li&#8217;s formula, known as a Gaussian copula function, looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more [...]]]></description>
			<content:encoded><![CDATA[<p>Because <a href="http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all">this article</a> is spamming the internet today, I decided to read Li&#8217;s paper and learn what the heck this Gaussian copula is.</p>
<blockquote><p>For five years, Li&#8217;s formula, known as a Gaussian copula function, looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before. With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.</p></blockquote>
<p>And anyway, here is the <a href="http://www.defaultrisk.com/_pdf6j4/On%20Default%20Correlation-%20A%20Copula%20Function%20Approach.pdf">paper</a> referenced in the article.<br />
<span id="more-164"></span><br />
Firstly, so much for the sensationalism: so far as I can tell, the paper doesn&#8217;t say anything worthy of a Nobel Prize &#8212; but still it is mildly interesting. In fact, the whole point of the paper appears to be to introduce to the finance community an already known method for solving the inverse problem of distribution marginalization, that is, going (non-uniquely) from marginal distributions back to a joint distribution, by specifying a mediating copula that captures marginal-invariant joint structure. The technology is very straightforward, and Li didn&#8217;t invent it.</p>
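<p>For the curious, the recipe is short: draw correlated standard normals, push each through the normal CDF to get correlated uniforms (that step <em>is</em> the Gaussian copula), then push the uniforms through whatever inverse marginal CDFs you like. A stdlib-only sketch with exponential marginals, as one might use for survival times (the parameters are illustrative):</p>

```python
import math
import random
from statistics import NormalDist

def gaussian_copula_pair(rho, inv_cdf_x, inv_cdf_y, rng=random):
    """One draw from a bivariate distribution built the copula way:
    Gaussian dependence with correlation rho, arbitrary marginals."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
    u1 = NormalDist().cdf(z1)  # correlated uniforms: the Gaussian copula
    u2 = NormalDist().cdf(z2)
    return inv_cdf_x(u1), inv_cdf_y(u2)

def exp_inv(lam):
    """Inverse CDF of an exponential distribution with hazard rate lam."""
    return lambda u: -math.log(1 - u) / lam

random.seed(0)
samples = [gaussian_copula_pair(0.7, exp_inv(0.1), exp_inv(0.2))
           for _ in range(10_000)]
mean_x = sum(x for x, _ in samples) / len(samples)
print(round(mean_x, 1))  # should come out near 1/0.1 = 10
```

<p>The marginals stay exponential no matter what rho is; rho only shapes how the two survival times move together. That separation of marginal from joint structure is the whole trick.</p>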
<p>That aside, I did wonder: why go through the motions of constructing a Gaussian copula (as in the article) if you assume your marginals and joint are all Gaussian to begin with, and all you wanted to capture is the covariance matrix? You can just specify the joint Gaussian explicitly. It seems like a totally pointless exercise. After reading the paper though, I see that wasn&#8217;t really Li&#8217;s entire suggestion at all. He is describing, rather than prescribing, what his firm already did, casting it in the language of copulas &#8212; an interpretive generalization that allows for potentially more accurate modeling (of non-Gaussian marginals and complicated joint structure if so desired).</p>
<p>Now on to the accusations. The article says that Li tried to &#8220;model default correlation&#8221; using credit default swaps rather than ratings agency data. It turns out that wasn&#8217;t even a problem being solved in this paper. He suggested to use CDS market data to get implied <em>marginal</em> distribution, an established practice. As for how correlation is obtained from limited data, you&#8217;d have to blame one Greg Gupton:</p>
<blockquote><p>Having chosen a copula function, we need to compute the pairwise correlation of survival times. Using the CreditMetrics (Gupton et al. [1997]) asset correlation approach, we can obtain the default correlation of two discrete events over one year period.</p></blockquote>
<p>However, it is true that there is something funny going on with the concept of using market pricing to price other market instruments, when the only novel input for all of them must be what little information is collected from actual due diligence. A classic case of Garbage In Garbage Out in statistical modeling.</p>
<p>As somebody elsewhere wrote, this sort of thing would not pass muster in &#8220;real&#8221; engineering design. We&#8217;ve seen that dichotomy before between the absolutely error-free stricture of &#8220;hardware&#8221; design (chips and bridges) vs. the more lax attitude toward &#8220;software&#8221; design (operating systems and capital market systems). Maybe this dichotomy needs to go away.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=164</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>execution methods and consciousness</title>
		<link>https://blog.yhuang.org/?p=41</link>
		<comments>https://blog.yhuang.org/?p=41#comments</comments>
		<pubDate>Tue, 02 Jan 2007 22:46:13 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[body]]></category>
		<category><![CDATA[consciousness]]></category>
		<category><![CDATA[death]]></category>
		<category><![CDATA[development of modern technology]]></category>
		<category><![CDATA[eeg signals]]></category>
		<category><![CDATA[execution methods]]></category>
		<category><![CDATA[internal organs]]></category>
		<category><![CDATA[notion]]></category>
		<category><![CDATA[strange phenomenon]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=41</guid>
		<description><![CDATA[With the news of Sodamn Insane&#8217;s execution plastered in big letters over the front pages of the new year weekend dailies (a strange phenomenon in itself), I realized that most of the world&#8217;s ancient execution methods do indeed go straight for the head. There is hanging, which we have seen. That internally severs the head [...]]]></description>
			<content:encoded><![CDATA[<p>With the news of Sodamn Insane&#8217;s execution plastered in big letters over the front pages of the new year weekend dailies (a strange phenomenon in itself), I realized that most of the world&#8217;s ancient execution methods do indeed go straight for the head.<br />
<span id="more-41"></span></p>
<ul>
<li>There is hanging, which we have seen. That internally severs the head from the body with the aid of gravity.</li>
<li>There is decapitation, which of course completely severs the head from the body.</li>
<li>There is the firing squad, which severely damages the head.</li>
</ul>
<p>Interestingly, some of the more modern execution methods are exceptions.</p>
<ul>
<li>The electric chair, which burns the head but also the internal organs.</li>
<li>Lethal injection, which does not target the head at all.</li>
</ul>
<p>But those were developed only after we had acquired a better understanding of biology and developed modern technology. The central role that the head plays in sustaining life, though, must have been very ancient knowledge indeed. Even predator animals know this, it seems, so it doesn&#8217;t surprise me that executioners of the pre-modern era needed no neuroscience to help in the task.</p>
<p>Excepting methods designed to cause suffering, execution methods demand a quick, possibly instantaneous end to life. This brings up the question of what constitutes death. Currently, the lack of EEG signals from the brain is considered the criterion of death. There are people who argue that only EEG signals from specific parts of the brain like the neocortex should be considered signs of life. I note that there also exists this very ill-defined notion of <a href="http://en.wikipedia.org/wiki/Information_theoretic_death">information theoretical death</a>, which attempts to pin down yet another notion of life.</p>
<p>The difficulty appears to reside in finding out what is consciousness and where it resides. Does it reside in a particular part of the brain? Or perhaps in a particular state of the brain? And how complete must life be to be considered life? Some would argue that the body, as part of a connected system (whether neurologically or otherwise through various feedback mechanisms), is a part of consciousness as well. Yet clearly many parts of the body are not necessary for continued living, albeit in a disabled state; this is especially true with advancing medical technology, where replacements can be substituted for parts of the body.</p>
<p>Suppose the thought experiment goes further. Let&#8217;s start from the view of the pre-modern executioner who considers the head-body connection to be the last link of life. Let&#8217;s replace the entire interface from the head down with a life sustaining medical device, so you have a head-person. I didn&#8217;t come up with this idea, either. In any case, this is a fairly conceivable idea. But let&#8217;s go even further. People are doing <a href="http://domino.watson.ibm.com/comm/pr.nsf/pages/rsc.bluegene_cognitive.html">neuronal simulation</a> now. It is also conceivable that we can replace parts of the brain with machine-simulated parts. As long as the appropriate interface is preserved, there shouldn&#8217;t be a problem, right? So keep doing this, part by part. But wait a minute, at some point, there is so little biological material left that any notion of death &#8212; be it brain death, or information theoretic death, or whatever &#8212; must fail. And on the face of it, with mostly a machine simulation, the biological entity is dead, too, from an intuitive perspective. Yet, there should still be consciousness, should there not? There should not only be consciousness, but there should be the self-aware kind of consciousness that &#8220;I exist, I am still alive,&#8221; should there not? But <b>what</b> is the self and <b>what</b> is aware at this point? Is it not the machine?</p>
<p>This difficulty applies to natural death, too. We know neurons can live individually in the laboratory. Perhaps a rather large collection of them can even be kept alive with their internal connections. At what point does consciousness end at death? If a limb is chopped off, the severed neuron picks up the damage and passes the information to the brain, which acknowledges it. If a piece of the brain is chopped off, is not the remaining part of the brain aware of the damage if the two parts used to be connected? &#8220;Aware&#8221; in the most basic, objective sense, surely. If you chop off all parts of the brain, save for a few neurons which you then keep alive in the laboratory, do those neurons (or to put it in a more provocative way, do &#8220;I&#8221;) retain consciousness and the awareness that, ouch, I&#8217;m missing my other parts? Is the original person still alive in some sense?</p>
<p>When somebody is executed, it takes a few moments for neurons to die. In those few moments, there is no output from the body at all, because it is not possible, so as far as the interface with the outside world is concerned, the person is &#8220;dead,&#8221; &#8220;unconscious,&#8221; &#8220;not suffering any more.&#8221; But if the neurons are still alive for a few moments, and supposing they host the consciousness, do the neurons (or, again, do &#8220;I&#8221;) &#8220;suffer&#8221; even as there is no way to detect it except perhaps to measure their EEG signals?</p>
<p>Isn&#8217;t there a similar argument over whether plants feel pain, or at least the chemical equivalent of it? Plants certainly don&#8217;t express pain in ways we are familiar with. The problem with plant pain is the &#8220;feel&#8221; part, of course, not the &#8220;pain&#8221; part; which is to say, it is again the difficulty with consciousness. What is consciousness? Is it an emergent phenomenon of all complex systems, as <a href="http://www.mitpublicevent.org/">Kurzweil claims</a>? This is hand-waving, of course (what is &#8220;emergent&#8221; and what is &#8220;complex&#8221;?), but essentially it is a statement that consciousness doesn&#8217;t really exist: what we consider to be consciousness is just a description of the behavior of a complex system, so if we must say where consciousness exists, it must be in the collective state of the complex system, not in any particular part of it. That takes care of the outward interface and what appears from the outside to be consciousness. But Gelernter makes a very good counterpoint to Kurzweil, which is that &#8220;emergent phenomenon&#8221; does not address the self-awareness problem at all. I&#8217;m going to take a jab at it.</p>
<p>Self-awareness is also an emergent phenomenon of the complex system. Do this thought experiment. Break the brain into two halves, A and B. To half-brain A, half-brain B is a complex system with the emergent phenomenon of appearing to be perfectly conscious; to half-brain B, half-brain A is likewise such a system. By virtue of the connection between them, each half-brain recognizes the other as a conscious extension of its own system (biological plug-and-play, if you will), and is thus aware of it. But since to A, B is part of its own system, and to B, A is part of its own system, the awareness of the other part really appears to be self-awareness, even though it isn&#8217;t truly &#8220;self&#8221;-awareness. Now take this argument down, down, down to single neurons, single molecules, single atoms, and any cut-set you want&#8230;</p>
<p>A chilling thought, if you toss this around a little bit. Especially if you consider what it means for death, suffering, and execution&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=41</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>automatic parking</title>
		<link>https://blog.yhuang.org/?p=15</link>
		<comments>https://blog.yhuang.org/?p=15#comments</comments>
		<pubDate>Mon, 06 Nov 2006 23:35:17 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[car]]></category>
		<category><![CDATA[electronic dictionaries]]></category>
		<category><![CDATA[human resistance]]></category>
		<category><![CDATA[parking]]></category>
		<category><![CDATA[qwerty keyboard]]></category>
		<category><![CDATA[technological aid]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[test]]></category>
		<category><![CDATA[ti 89]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=15</guid>
		<description><![CDATA[This is interesting, this self-parking car thing. Well, it&#8217;s not completely automatic, but it is supposed to take care of the most difficult part. Now I know my driving test required a demonstration of parallel parking ability &#8211; not sure about other states, so it is natural to ask, if somebody had this car, should [...]]]></description>
			<content:encoded><![CDATA[<p>This is interesting, this <a href="http://blogs.cars.com/kickingtires/2006/09/ls_460_parking.html">self-parking car</a> thing. Well, it&#8217;s not completely automatic, but it is supposed to take care of the most difficult part. Now I know my driving test required a demonstration of parallel parking ability &#8211; not sure about other states, so it is natural to ask, if somebody had this car, should they turn this off for the test?</p>
<p>Are there existing conventions dealing with technological aid in various kinds of skills testing? I can think of some, and they generally seem to fall on the side of accepting technology, with restrictions that can sometimes be arbitrary. For example, on the SAT you could use a graphing calculator, but not one with a &#8220;QWERTY&#8221; keyboard, so a TI-89 was okay but the TI-92 was not, even though they ran the exact same firmware. Not sure if they changed this. Guess not. The College Board still appears to be living in the stone age with regard to <a href="http://www.collegeboard.com/prof/counselors/tests/sat/test_day/acceptable_calc.html">some of these banned calculators</a>: </p>
<ul>
<li>calculators with QWERTY (typewriter-like) keypads <em>arbitrary</em></li>
<li>calculators that contain electronic dictionaries <em>they all do now or can</em></li>
<li>calculators with paper tape or printers <em>cash register? lol</em></li>
<li>calculators that &#8220;talk&#8221; or make noise <em>right</em></li>
<li>calculators that require an electrical outlet <em>haha</em></li>
<li>etc&#8230;</li>
</ul>
<p>That aside, the car that drives itself has got to be the most popular civilian application touted by sensor-network people. So far, most automated components of the car are not fully automated; there is always some human element at some key part of the chain, unlike in airplanes. There is some human resistance toward giving up control on this matter. The trend toward more automation may be unstoppable, though, if it creeps in a bit at a time like this.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=15</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
