compactness

I swear the concept of compactness was invented to remedy the shortcomings of closedness. Compact sets are closed (in Hausdorff spaces and therefore metric spaces), so compactness is stricter than closedness. It evidently patches some feebleness in the definition of closedness to make it more useful.

Closedness of a set in a metric space (“includes all limit points”), by the sound of it, really wants to be something akin to “has solid boundaries.” But it isn’t. The problem is that the existence of limit points depends on the embedding space. If the embedding space lacks those limit points, then a set in it can be technically closed even though it isn’t really “like” other closed sets. For example, the set \(\mathbb R\) in space \((\mathbb R, d_{\text{Eucl.}})\) is closed, because the space has no point called \(\infty\).
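A standard illustration of this embedding-dependence (my example, not the only one): take the space \(X = (0,1]\) with the Euclidean metric. The set \(S = (0,1]\) is closed in \(X\), because its one missing limit point, \(0\), simply does not exist in \(X\). But \(S\) is not compact: the open cover

\[
\mathcal U = \left\{ \left(\tfrac{1}{n}, 1\right] : n = 1, 2, 3, \dots \right\}
\]

has no finite subcover, since any finite subfamily with largest index \(N\) misses all of \(\left(0, \tfrac{1}{N}\right]\). Compactness detects the “missing boundary” intrinsically, with no reference to a larger embedding space, which is exactly the feebleness of closedness it patches.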

watson v. mit

http://cache.boston.com/resize/bonzai-fba/Globe_Photo/2011/02/14/1297740468_0202/539w.jpg

So being at the event captured in the image, I got to ask a question toward the end. Actually I asked two questions. The first was whether Watson would ring in and then use the remaining 3 seconds or so to continue computing. Gondek said it would if it helped. In actual competition this doesn’t appear to be the case, as the buzz-in thresholding condition ensured that further computation would not have been helpful. The second question was a follow-up on the identified weakness of Watson — learning “common sense” knowledge. I asked what path AI research would take to tackle such knowledge, which is, by its very definition, “not in the books.” Gondek said that IBM is building up semantic information (e.g. a “report” is something that can be “turned in” and “assessed,” etc.) from corpora. That wasn’t exactly what I was asking, however.

My point was whether all “knowledge” is written down. There is such a thing as experiential “knowledge,” and humans take years to learn it/be trained in it through parenting (i.e., to “mature”). If only there were a handbook on life, or if life could be learned by reading a series of textbooks, then perhaps I’d believe that the kind of general-purpose AI most people are probably imagining (rather than an expert/Q&A system) could be achieved along the lines of current methods.

earthquake prediction

Nowadays a large amount of geological and seismic data is collected. When earthquakes occur, people analyze this data to see if there are predictors: foreshocks, changes in wave propagation, and so on. It seems to me that the next step beyond passive data collection would be to send active probe impulses to find the current condition of faults and whether they might fail soon. Is this done or not?

In any case, “earthquake prediction” may be a misnomer. One can never predict the precise time of an earthquake. But with more data and detection of ever smaller features, one can give more granular probabilistic predictions. So instead of saying there is a probability \(p\) of an earthquake in the next 30 years, we may be able to say, at any given moment, that there is either a probability \(p_1 \ll p\) or a probability \(p_2 \gg p\) of one within the next year.
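A toy sketch of what that refinement looks like, assuming a constant-hazard (Poisson) model; the baseline 30-year probability and the 50× precursor factor below are entirely made-up numbers for illustration:

```python
import math

def prob_within(annual_rate, years):
    # P(at least one event within `years`) for a Poisson process
    # with the given annual rate: 1 - exp(-rate * years).
    return 1 - math.exp(-annual_rate * years)

# Hypothetical baseline: a 30-year probability p = 0.3 fixes the annual rate.
p_30yr = 0.3
base_rate = -math.log(1 - p_30yr) / 30

# With no further information, the 1-year probability is small (p_1 << p):
p1 = prob_within(base_rate, 1)

# Suppose a (hypothetical) detected precursor multiplies the hazard 50x;
# the 1-year probability jumps well above the unconditional 30-year figure:
p2 = prob_within(50 * base_rate, 1)

print(f"baseline 1-year: {p1:.4f}, elevated 1-year: {p2:.4f}")
```

The point of the sketch is only the shape of the claim: finer-grained observation doesn’t pin down the time of the quake, it redistributes the same long-run probability into moments of low and high short-term hazard.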