Settled Science and the Munchhausen Trilemma

Chaos Manor View, Wednesday, August 26, 2015

Very hot today in Los Angeles. Pounding away on fiction, but it’s not easy. Typing continues difficult. I have dozens of suggestions regarding Dragon, and one day I’ll implement them and try, but the weather and time pressure both argue against trying a whole new way to “write”; the last time I tried dictation it was a flop; I got so concerned with the way I was composing sentences, and waiting for them to appear on the screen, that after a while I was worrying more about the writing details than about what I dictated. Of course that’s something of what is happening now.

As it happens, yesterday over in another conference (SFWA) where I spend too much time, a new member asked for advice on career management. I answered:

I don’t think there has ever been better advice given than that of Mr. Heinlein:

To be a writer you must write.

I will add, until you are established as a writer, you would do well not to spend a lot of time talking about writing or listening to others talk about writing in the hopes that you will learn some secret formulae. You won’t. Randall Garrett was fond of saying he knew no professional writers who got there through workshops or discussing writing with other beginners. I do, but not many.

To be a writer, you must finish what you write.

I will add that there is something sadly amusing about the “writer” who always has an unfinished manuscript to inflict on his friends.

Do not rewrite unless instructed to do so by someone who is going to buy it.

This was probably the most controversial, and most badly misunderstood, of Heinlein’s dicta. He did not mean write first draft and never rewrite; he meant that the rewrite is part of finishing and it should be done and over. Don’t rewrite finished work. You will do much better to work on something new.

Send your work to someone who can buy it, and start on something else. Keep that up. Keep writing, finishing, and sending to editors.

Basically that’s it.

The magic is in doing the writing. For story tellers it takes a while to make writing automatic so you can concentrate on the story, not on how you tell it.

And there are nine and sixty ways of constructing tribal lays…

As to career management, it used to be that you sold to the magazines, got the cover, graduated to a novel, etc. Now there are alternatives, many discussed here. But before you manage a career in writing you have to write, and the best way to learn that is to write, finish what you write, send it to someone who can buy it, and don’t rewrite unless someone who will buy it tells you to. Obviously there are stories that if rewritten can be made better, but a better investment is to do a new story. Then another. Then one more. Finishing each.

After a while the writing comes easier and you can concentrate on what you want to say, not on how to say it.

Of course you may be well past needing that advice.

Jerry Pournelle

The point being that if you have to think about what you are doing, rather than on what you are trying to say, you have a severe handicap; and that’s what I am trying to overcome. I’m getting there but it’s slower than I like. But then it took longer than I like just to feed myself…

For some reason I cannot fathom, the Word grammar program does not like the first sentence in that paragraph. I give up on why it thinks it is bad grammar. If it be, then so be it. Oh. I see. It wants a proper verb. Ah well, it’s clear enough.

Anyway we will continue the discussion of philosophy of science. To summarize my views, which are derived entirely from Sir Karl Popper and St. Thomas Aquinas:

Science has become a very useful way of discovering truth about the world. To most of the world, “reason” and “science” are essentially synonymous.

Science has strict rules. The most fundamental rule is that no theorem or hypothesis is scientific if it cannot be falsified. It does not mean that “I saw a man who wasn’t there” cannot be true, but it is not a scientific truth because there is no conceivable way to falsify it.

We may act as if scientific theories (those which can be falsified) were true, but always with the understanding that they may someday be falsified.

This can lead to conflicts of theories, and sometimes does. An example is the late Petr Beckmann’s theory of entrained aether, as opposed to Einstein’s Theory of Relativity; they both, as I understand it, “explain” all the relevant data; where they make different predictions, falsification of either requires experiments we cannot perform. That leads to wildly different possibilities, but we cannot choose among them given the present state of observations. There is an overwhelming consensus in favor of Einstein, but there is no crucial experiment to choose between them at this time.

When conflicting theories lead reasonably to disparate courses of action the situation becomes critical, in particular if the different actions have high cost; this is the situation in which we find ourselves regarding global warming, with the added problem that there are mutual assertions of falsifications of the different theories, as well as conflicting claims of the validity of certain evidence.

Some statements may be true, but are not scientific because there is no way to falsify them. My prediction that unrestricted capitalism will lead to the sale of human flesh in the market place is “scientific” in that it could be falsified, but it also rests on the non-scientific assumption that the sale of human flesh – or baby parts – is not morally acceptable. “Ethicists” and religious leaders may or may not agree on that assumption, but their disagreements cannot be settled by any scientific process I am aware of. At some point you are faced with “good” and “evil”, and it is meaningless to say that good is better than evil because good’s gooder. There are those (I am among them) who say that certain morality systems lead to a “better” way of life than others, and there are many examples, but this is not science; one reason why education needs to include the liberal arts, but that goes far afield of this discussion.

bubbles

Regarding philosophy of science

Jerry,

Just now catching up on the latest blog post. Last couple days were busy writing/recording/editing the weekly Osborn Cosmic Weather Report. So I want to respond to some talking points.

1) Astronomers certainly did NOT pounce upon Doppler shift uncritically after Hubble’s discovery — more like throwing a firecracker into an ants’ nest. I didn’t go into the details of the history because I could have written a book about it. Many books HAVE been written about it. And like it or not, the bulk of the demonstrable evidence that we have today lands on the side of large-scale expansion. Note I said LARGE-SCALE. It’s long been known that localized inhomogeneities were required even to develop the galaxies we see, let alone clusters, superclusters, and the other structures we’re still discovering, like “walls” and “bubbles.” So this is no new thing. I will say that we’re still working on how it all came about, but we know it did, because we see the results.

Hubble’s discovery and subsequent others produced an uproar in the community, with huge infighting about the validity of the results between the “Steady-State-ers” and the “Expansionist Universe-ers.” This is in fact closely parallel to a similar and more or less concurrent, long-running controversy in geology between the concepts of uniformitarianism and catastrophism, where uniformitarianism can be likened to steady state and catastrophism could be likened to big bang/expansion. Geologists now think that the reality seems to be a blending of the two, a kind of uniformitarianism punctuated by episodes of catastrophism; is it then so surprising that cosmology is proving to be the same?

Moreover in no wise are astronomers/cosmologists/astrophysicists favoring a particular model over another, as evidenced by the large number of theories/models that are put forward. (My friend, physicist Dr. W, and I have discussed the whole “dark matter/dark energy” concepts several times; neither of us is disposed to care for either one, and we are inclined to think that they will eventually be disproven. But right now they do seem to explain observations.) My entire point was that these things are indeed being considered, but just because we seem to find a data point that is in conflict with current theory does not mean we automatically throw out the baby with the bath water and start over from scratch.

Also note that I am not saying that any theories would be “knocked out if new theories were accepted.” Obviously Newtonian physics was not “knocked out” by relativity theories, nor quantum mechanics, nor any of the rest. In fact what we find is that Newtonian physics is what the others reduce to in the everyday world. Quantum mechanics devolves to Newtonian physics as the scale increases from subatomic to macro world. Relativity devolves into Newtonian physics at increasingly lower sublight speeds. Et cetera. This is what a proper “new theory” SHOULD do — reduce to the established, observable ways/models when “ordinary world” initial conditions are plugged in. What is happening, however, is that this thrust experiment is contradicting the “ordinary world” model, which has been demonstrably proven correct over centuries (and arguably millennia) of observation. And THAT is what experienced scientists take issue with.

2) I think some may be confusing the difference between the universe and the models we have of the universe. When new, unexplained data is discovered, obviously this is coming FROM the universe, and it is the MODELS that must be adjusted to try to see if the new data can be explained. It isn’t that we’re trying to shoehorn the universe to fit our theories. We are looking to see if this new evidence has uncovered something that needs to be added, something we didn’t know about before. It is a MODIFICATION of our theories/models, not changing the universe, that is occurring. This usually requires several iterations, and not infrequently does in fact require the model to be reduced to its basic components and rebuilt, or occasionally thrown out altogether and replaced.

Think about it like this: You want to race cars, and you want to win. You’re on a budget constrained by other factors — house payment, credit card payments, food bill, kid in college, etc. So which is easier and more economical, which fits into your budget better: Take the stock car already in your garage and modify it to juice it up, or throw out the stock car and start building an Indy race car from the pavement up? You’re going to start with your stock car and modify it, then you’re going to race it and see if you win. If you don’t win, you keep modifying the stock car until you’ve reached the limits of what the frame will handle. If you’re still not winning, you scrap the stock car and start work on an Indy car design.

In this analogy, your budget constraints are the body of existing observations. Your stock car is existing science and its models. Winning in this case means your model correctly predicts the observations; juicing up the stock car represents the modifications to existing theory you have to make to try to predict the data. The Indy car design is when you can’t get existing theory to match observation, so you scrap the theory and construct another. But you still have those budget constraints! The new model has to accurately predict, not just the new observations, but all the old ones too. It has to be “drivable on the road,” as it were. Sort of like a Transformer that goes from Indy car to your mom’s sedan and back.

3) String theories: there are in fact five basic string theories. (And while I’m about it, let me point out that there is a difference between a cosmic string and a superstring. Here I refer to superstrings.) Each theory was developed by a different researcher or group of researchers, and each one accurately predicts some of the observable data — but no one superstring theory predicts ALL of the observable data. Nor, so far, can they be made to do so.

This is a case where the scientists dropped back and punted. It wasn’t exactly that they scrapped the stock car, but they definitely were pulling Indy car concepts into the modifications! (To continue my racecar analogy, I’d say they kept the frame but put in a new engine and more aerodynamic body.)

Unable to get their superstring models to wrap around the whole problem, they made a fundamental realization that relates back to that “new theories should reduce to the older forms” comment I made earlier: They realized it was very likely that the five different superstring theories were actually special cases of an overarching theory. So they instead created a new theory/model, called M Theory. And this, so far, DOES accurately predict all of the observable data, though again it may possibly not be the simplest way to do so; Occam’s Razor and all. But it’s the best we’ve come up with so far.

(This is a case where Dr. W might be more up on the latest developments than I am, since it falls more into the realm of particle/quantum physics in which he specializes than the astronomy/astrophysics in which I specialized. I did study M theory and the related stuff in order to write both Extraction Point with Travis S. Taylor, and my Displaced Detective series. And I’ve tried to stay up on what’s going on with the theory — I get asked about it a lot at SF cons. I don’t claim to be an expert in M theory by any means.)

4) Quasars: given that, in recent years, we’ve been able to image the distant galaxies in which quasars are embedded, and we have been able to generate models of the mechanism that predict observational data, it’s going to be rather hard to argue away the notion that they are indeed embedded in galaxies.

As for proper motion, that is still in debate. Proper motion is not, contrary to what you might think, immediately obvious to the observer, especially when we are looking at extragalactic objects. Why? It’s complicated — because the Earth is making a truly spectacular gyration through the universe: it is spinning on its axis, revolving around the Sun, following the Sun in its orbit about the galactic center, and moving with the galaxy as it orbits the center of mass of the local cluster, which is in turn orbiting the center of mass of the local supercluster, which is experiencing linear motion through the universe…and then there is precessional motion of all of that, and more. All those motions have to be determined as accurately as possible, and then SUBTRACTED FROM THE MEASUREMENTS of the apparent proper motion of any given object. Only then can we say that the object MAY be experiencing true proper motion.
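The subtraction described above can be sketched numerically. This is a toy illustration with invented numbers, not real astrometric data; actual reductions involve many more motion terms and careful error propagation:

```python
# Toy illustration of why quasar "proper motion" is hard to establish:
# the observer's own motions must be modeled and subtracted before any
# residual can be attributed to the object itself. All numbers below
# are invented for illustration.
measured = 0.85  # apparent angular motion, milliarcsec/yr (hypothetical)

# Hypothetical modeled contributions from our own motion (mas/yr)
our_motions = {
    "orbital/parallax residual": 0.40,
    "solar motion about the galactic center": 0.30,
    "galactic + cluster + supercluster motion": 0.14,
}

residual = measured - sum(our_motions.values())
print(f"residual attributable to the quasar: {residual:+.2f} mas/yr")
# A residual comparable to the measurement error (say +/- 0.05 mas/yr)
# cannot be claimed as true proper motion.
```

The point of the sketch is that the residual is the small difference of large, uncertain quantities, which is exactly where a systematic error in our own motion model would masquerade as proper motion of the target.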

Current studies of quasar proper motion seem to be indicating that there is an inadvertent systematic error in the reduced measurements (as well as a couple of other things occurring within individual quasars) that, if corrected properly, will remove most if not all of the purported proper motion. Or to put it more simply, we may have an error in our estimate of the motions we ourselves are making, which is causing an apparent motion of the studied objects, when there really is little or none. The jury is still out on that, but legitimate research is ongoing.

Also consider that we currently have a nice spectrum of galaxy “types” or morphologies, ranging from “ordinary,” to interacting, to Seyfert/BL Lacertae/radio galaxies, to quasars. These in general range nicely from nearby, to a little farther out, to pretty far out, to way the hell over there. There’s a whole lot of observational evidence that quasars are embedded in galaxies, and that they have a lifetime that takes them through several morphologies; it’s going to be hard to disprove. Note I didn’t say impossible. There are arguments for other kinds of Doppler shifting, such as relativistic gravitational. But, “With great power comes great resp–” no, sorry, wrong quote. “Extraordinary claims require extraordinary evidence.”

Now, somebody reading all this is bound to be thinking that I’m just one of these “accepted science” conspirators who are trying to stifle anything new. Not so. I have a brain, I use it most days, and I am trained to be a skeptic. (Blonde hair notwithstanding.) If I were into “accepted science” then I would not be posting guest blogs like this:

http://accordingtohoyt.com/2015/05/30/solar-space-and-terrestrial-weather-some-reflections-by-stephanie-osborn/

No, I sit down and look at the data in the light of what I know. I look at the models and decide if they make sense, or if they are off in the weeds someplace. If it all lines up, and if I can take the data, feed it into the model, and predict more data, and that prediction is demonstrably correct by collecting the additional data, then I conclude that the model is correct insofar as we understand the science to this point in time. If it does not, I conclude that the model is wrong, and possibly the theory behind it as well, depending on whether I can determine if it was just a poorly-constructed model or if the problem with it is more fundamental.

This is not simply going along for the ride because someone else says so. And this is the way science is supposed to work. Does it always work like this? No, it doesn’t. Because scientists are human too, and we can get hidebound and attached to our pet theories. (Go read up on William Thomson, Lord Kelvin’s successes, as well as his failed predictions, if you don’t believe me. And he was as “established” as they come.) But it does so more often than not, and especially in my chosen fields, I’m pleased to say.

Stephanie Osborn

“The Interstellar Woman of Mystery”
http://www.Stephanie-Osborn.com




It is clear that we are at the edge of observational accuracy, and possibly many statements which appear to be falsifiable are in fact not so with present equipment. It would not be the first time.

And I will repeat my own view: the extraordinary claim of reactionless drive needs considerable evidence that it exists, since it falsifies a fundamental principle of Newtonian physics, as well as being incompatible with Relativity.

bubbles

More on Beckmann and Einstein
<<Jerry P I commend to you Petr Beckmann and his Einstein Plus Two…>>
And I commend to you Tom Bethell’s book, Questioning Einstein: Is Relativity Necessary? (2009), explicating Beckmann’s theory, and putting it into the whole historical context of the development and testing of relativity theory, and the wider and continuing question of the nature and existence of the “ether”.
Bethell has been a contributing columnist and/or editor of National Review, The American Spectator, Harper’s, and other intellectual periodicals, and he is a Hoover fellow. He specializes in whistle-blowing on politically correct orthodoxies, so of course he is persona non grata with the elite establishment, which in my book is one of his strongest credentials.
Among the many thoughtful and trenchant pieces of his I’ve clipped and saved was a two-part swipe (in the June and July/August issues of the American Spectator) at the cancer research mafia that has deflected so many tens of billions of dollars of taxpayer money into unproductive reinforcement of the established paradigms that retroviruses (and now faulty genes) cause cancer, while shunting aside the fact that virtually all solid tumors consist of cells that contain more than the normal two copies of each chromosome: this phenomenon is called aneuploidy and it has been known since the 1960s, yet practically no research has been done on the replication errors that must lie at the heart of it. How many lives have been cut short and/or blighted because of this waste of funds and scientific talent?
Bethell worked closely with Beckmann and with his colleague and collaborator, physicist Howard Hayden (who wrote the introduction to Bethell’s book), and Bethell did extensive research of his own, drawing on papers of Einstein that have only recently become available, and also on the papers of Nobel Laureate Albert Michelson, who designed the interferometer that was used in the classic Michelson-Morley experiment, and who went on to design and conduct the Michelson-Gale experiment in 1924 that conclusively established that there was indeed an ether – a gravitational ether detectable against the earth’s rotation – a finding that has been partially replicated in passing by the Brillet-Hall experiments of 1979, which ironically were focused on finding the same kind of ether (detectable against the frame of the earth’s orbital motion) that the Michelson-Morley experiment failed to find in the first place (Brillet-Hall predictably repeated that original failure).
Einstein himself was one of the chief encouragers to Michelson, then at the University of Chicago, to conduct the Michelson-Gale experiment, which involved constructing an apparatus that spanned an area of some 50 acres, and he traveled to Chicago and met with Michelson for that purpose. Einstein had also begun to recognize as early as 1911 that his General Theory of Relativity REQUIRED a gravitational ether, regardless of the fact that his Special Theory of 1905 had dispensed with it. Bethell quotes Einstein thus {p182}:
“In an article published in 1911, ‘On the Influence of Gravitation on the Propagation of Light,’ Einstein acknowledged that the constancy of the velocity of light is ‘not valid in the formulation which is usually taken as the basis for the ordinary [special] theory of relativity.’ The velocity of light in the gravitational field ‘is a function of the place,’ Einstein said. Light rays ‘propagated across a gravitational field undergo a deflexion.'”
Einstein may thus be said to have backtracked on his premature discarding in the Special Theory of Relativity of the ether principle that presumes that some medium is necessary for the propagation of waves, whether they are light quanta or gravitational quanta, and to have anticipated, not only Beckmann, but the Michelson-Gale experiment.
None of this casts any shadow of doubt on Einstein’s theory of General Relativity, except that it suggests that it ought to have been called Einstein’s Theory of Gravitation, dropping the relativity moniker altogether. However, it is clear from the body of evidence reviewed by Bethell that Einstein’s Special Theory is both irrelevant to practical modern physics and pernicious in its paradoxical implications. The Special Theory is irrelevant because it applies only to inertial (constant velocity) frames of reference, yet we live in a universe of accelerations. But for that, the Michelson-Gale experiment of 1924 would have falsified special relativity since light was found to travel at different speeds depending on the beam’s orientation with respect to the rotation of the earth.
The 1971 Hafele-Keating experiments, which transported atomic clocks around the world in opposite directions, also appear to contradict Special Relativity. Special Relativity asserts that time slows down for an object moving with respect to the observer, which would mean that the airplane clock would appear to run slower than the Naval Observatory clock on the ground; but the reverse would also have to be true if the airplane clock were taken to be the fixed observer. In the event, the interpretation of the results (which were consistent both with General Relativity and with Beckmann’s theory) required the postulation of an inertial clock at the center of the earth with which the times of the other clocks could be compared.
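For scale, the size of the effect at issue in Hafele-Keating-type experiments can be estimated from the low-speed expansion of the Lorentz factor. This is a back-of-the-envelope sketch with illustrative numbers (a generic airliner speed and flight duration, not the actual 1971 flight data), and it includes only the kinematic special-relativity term, not the gravitational one:

```python
# Approximate special-relativistic time dilation for an airliner,
# illustrating the tiny magnitudes involved in clock-transport
# experiments. Numbers are illustrative, not the 1971 data.
c = 299_792_458.0        # speed of light, m/s
v = 250.0                # typical airliner ground speed, m/s
flight_time = 40 * 3600  # a 40-hour circumnavigation, in seconds

# For v << c, the Lorentz factor expands as gamma ~ 1 + v^2 / (2 c^2)
gamma_minus_1 = v**2 / (2 * c**2)

# Accumulated offset between the moving clock and a reference clock
delta_t_ns = gamma_minus_1 * flight_time * 1e9  # nanoseconds

print(f"gamma - 1 ~ {gamma_minus_1:.3e}")
print(f"kinematic clock offset over the flight ~ {delta_t_ns:.0f} ns")
```

The answer comes out in the tens of nanoseconds, which is why the experiment needed atomic clocks, and why sorting such an effect out from the comparably sized gravitational term (and from the choice of reference frame) is the whole interpretive difficulty the letter describes.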
Because we live in a universe of accelerative forces such as gravity, the Special Theory may well be unfalsifiable, which would make it a metaphysical, not a scientific, hypothesis in Popperian terms. Certainly no one has ever observed the predicted dilations of space, or the mutual slowing of clocks from the points of view of two observers moving relative to each other, or the corresponding relative buildup of masses in both of the relative frames of reference as their relative velocity approached the speed of light. Science fiction has had fun with many of these paradoxes, but the Special Theory of Relativity, properly understood, gives us no reason to suspect that any of these phenomena are features of our universe.
The Special Theory of Relativity is also pernicious, not only because it gives rise to incomprehensible paradoxes that suggest that our whole conception of physics is wrong, but also because the second postulate of the Special Theory, that the speed of light is a constant independent not only of the source but also of the observer, permeates the thinking of modern physicists as a dogma, even though it is ignored in practice, and for good reason. For example, if the speed of light were always constant in our universe, the concept of simultaneity would dissolve into meaninglessness and there would be no way to synchronize clocks, nor could the GPS satellite system be made to work: it does work, of course, but only because a fixed temporal frame of reference is presumed.
The main problem with the Special Theory is that in order to preserve Einstein’s dogmatic postulate of the constancy of the speed of light independent of both source and observer, and its relativity implications, the mathematics of the General Theory of Relativity had to be unduly complicated. And, as you note, Beckmann’s work has demonstrated that the gravitational phenomena with which the General Theory is concerned can be accounted for in classical Newtonian terms, without all the mystification and paradoxes. However, Hayden notes in his introduction that the actual mathematics of overlapping gravitational fields (e.g. taking into consideration where the balance points lie between the gravitational fields of the earth, the moon, the sun, etc.) can still be quite complex, and Beckmann himself never got around to working those out.
John B. Robb

Thank you for the summary. I have many times recommended Tom Bethell’s book http://www.amazon.com/Questioning-Einstein-Is-Relativity-Necessary/dp/0971484597 and possibly should have done so again; there are other works on modern aether theory as well. Google “Is Einstein necessary”… Relativity was “confirmed” by the bending of light rays in a gravitational field; there are other explanations which do not require tensor calculus. Whether understanding the universe requires mathematics of that order of difficulty I cannot say; I confess that I hope not. Tom leaves out the math, which is perhaps wise.

bubbles

Subject: Epistemology and the Münchhausen trilemma

I had an email about this in my drafts. Since you’ll discuss epistemology, where do you stand on the Münchhausen trilemma? How do we overcome the balkanization of epistemology? Why don’t we teach epistemology in high school? I think that and general semantics (per Alfred Korzybski) would solve many problems.

◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

Probably but I have only so much time. What is the Munchhausen trilemma?

Jerry Pournelle

Chaos Manor

Epistemology and the Münchhausen trilemma

I’ll keep this very short, relatively speaking; this is an outlined response:

The Münchhausen trilemma is the crux of epistemology. Anyone who studies epistemology soon becomes aware that every tendency in epistemology has weaknesses — including science — making it fallible. No tendency (e.g. authority, faith, science, empiricism, logic, rationalism, idealism, constructivism) reveals truth, because you come to a point of infinite regress. John Pollock describes it best:

“… to justify a belief one must appeal to a further justified belief. This means that one of two things can be the case. Either there are some beliefs that we can be justified for holding, without being able to justify them on the basis of any other belief, or else for each justified belief there is an infinite regress of (potential) justification [the nebula theory]. On this theory there is no rock bottom of justification. Justification just meanders in and out through our network of beliefs, stopping nowhere.” You find no solid truth that everything is built upon; it becomes more like a ball of ants crossing a river. Now you have to make a choice; you have three options, all of them undesirable. You must face the Münchhausen trilemma.

The trilemma is named after Baron Münchhausen, who pulled himself out of quicksand by his own hair. The Münchhausen trilemma concerns how we answer the question “How do I know this is true?” When we ask ourselves this, we provide proof, but then we need proof that our proof is true, and so on with subsequent proofs. We can deal with this problem in three ways:

1. We create a circular, viz. coherent, argument where theory and proof support each other: X is true because of Y; Y is true because of X. This is dangerous because the arguments can be logically valid, i.e. their conclusions follow from their premises. It is an informal fallacy. The main problem is that one already believes the conclusion and, anyway, the premises do not prove the conclusion in this way. Therefore, the argument will not persuade — well, it might persuade non-critical thinkers. This is where guys like Hitler do really well. Coherentism is the approach.

2. We agree on axioms. Consider the conch shell in Lord of the Flies; the group agreed on the axiom that whoever held the conch had the right to speak. In epistemology, we might agree that a series of statements is true even though we cannot verify this. Most social organizations seem to run on this approach. Some more esoteric organizations even mention a “substitute” for something that was “lost” and this is a reference to the axiomatic tendency of the organization. Foundationalism is the approach.

3. We accept infinite regress. We realize that each proof requires further proof, ad infinitum. This view rejects the fallacies and weaknesses of the previous two choices, while accepting reality. Infinitism is the approach.

While most scientists I speak with dismiss this as an “exercise in abstract philosophy”, I think they’re wrong. I think they’re not comfortable with the weaknesses of their paradigm, and I confirmed this when discussing the weaknesses of empiricism and rationalism — the constituents of science. I normally get the “science is settled” or “if it isn’t science it’s not worth knowing” arguments, and we reach an impasse (their axioms). Despite their inability to prove their claims, that is the only criticism of the trilemma I’ve entertained.

◊ ◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

Having had only an undergraduate training in philosophy with some help from the Jesuits, I may have missed something, but I could have sworn that “the Münchhausen trilemma” was not commonly, or indeed at all, referred to in the late 40’s and early 50’s; it may be well known today by that name, but this has not always been so. As to the identity of the Baron, thank you, but I have had some previous exposure to the nobleman’s exploits. His name is also used in psychological diagnoses, although that is rather modern; before the need for appellations to use in insurance claims, we were content to use less colorful diagnostic terms than Munchausen By Proxy. But that’s a different story.

I was not aware that a common philosophical problem/criticism had been given the Baron’s name; I suspect this is due to the popularity of certain movies.

I am not ready to undertake the rather long task of providing instruction in philosophical principles; I simply have to make do with Sir Karl Popper, and the rather mundane notion that if two psychiatrists, a nurse, and the ward boys tell you that you are not covered with bees, you might as well stop brushing them off your coat. We can insist that statements about the world are not science if they cannot be falsified. As to what is truth: we can agree that statements that can be falsified but have not been may be acted on as if verified, even though full verification is not possible. Of course this can lead to having to treat two different views of reality as true if they do not generate falsifiable statements that conflict with each other. Rather like Beckmann and Einstein.

Excuse my brevity. It is painful to type while staring at the keyboard.

Jerry Pournelle

Chaos Manor

I appreciate you taking the time to respond at some length; especially considering that it’s not easy to type right now.

I did some research on it and the term was coined in 1968 in reference to Karl Popper’s trilemma of dogmatism vs. infinite regress vs. psychologism. Popper, in his 1935 publication, attributed the concept to Jakob Fries. However, the trilemma of Fries is slightly different from the Münchhausen trilemma. You can read more here if you’re interested: http://wiki.ironchariots.org/index.php?title=M%C3%BCnchhausen_trilemma

I understand that your time is limited; perhaps you could publish our exchange on the Münchhausen trilemma? My understanding of the trilemma is limited but it is important to me. For me, this is the bottom of the pile and it’s something I spent a good part of my life searching for and I want to popularize it as much as possible along with General Semantics, arete, Bloom’s Taxonomy, cheaper energy, and the Classical Trivium. =)

As an aside, under the Jesuits, you may know Agrippa’s trilemma — presented by Sextus Empiricus and attributed to Agrippa the Skeptic by Diogenes Laertius (not the Cynic). However, Agrippa’s trilemma has five — not three — choices.

◊ ◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

The map is not the territory. Science can provide us with better maps – if we follow the rules – but they are only maps.

bubbles

Hello Jerry,

Reader James had a nice comment on your post for 24 August re Settled Science about the uncanny precision of planetary temperature measurements, over century time frames, when an instrumentation calibration lab, under controlled conditions, would be hard put to duplicate it over 24 continuous hours.

Without critiquing every ‘a’, ‘and’, and ‘the’ of his post, it sounded spot on to me.

Coincidentally, the pooh-bahs of climate science from around the world made the following announcement this month: July 2015 was the hottest month ever, since records began in 1880.

Here is a quote from NOAA’s official announcement ( http://www.ncdc.noaa.gov/sotc/global/201507 ):

“The combined average temperature over global land and ocean surfaces for July 2015 was the highest for July in the 136-year period of record, at 0.81°C (1.46°F) above the 20th century average of 15.8°C (60.4°F), surpassing the previous record set in 1998 by 0.08°C (0.14°F). As July is climatologically the warmest month of the year globally, this monthly global temperature of 16.61°C (61.86°F) was also the highest among all 1627 months in the record that began in January 1880. The July temperature is currently increasing at an average rate of 0.65°C (1.17°F) per century.”

In the spirit of James’ comment, are the people producing such drivel stupid enough to believe it themselves?  Do they REALLY believe that we have had a planet wide instrumentation system in place since 1880 and a 135 year data base of its output that would allow us to list the 1627 months since 1880 in rank order of the temperature of the entire planet for each month?  In spite of the fact that there is AFAIK no universally agreed upon method of even CALCULATING the temperature of the planet for a given month?  And if there IS a cookbook procedure, do they really believe that the planetary instrumentation system provided sufficient coverage and precision over the entire 1627 months to justify their proclamation of an anomaly of 0.08 C for a specific month to be a ‘record’ ?

Back when I was a Navy tech and we were faced with some incredible feat of technological wizardry, our typical response was “Modern science knows no limitations!” That would appear to be especially applicable to ‘Modern Climate Science’.

Bob Ludwick

I am at a loss to explain why they cannot tell us the formula for “the temperature of the Earth”. I suspect they are afraid they would be laughed at.
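One can at least sketch what such a formula would be up against. Here is a toy calculation; every number in it is my assumption for illustration, not NOAA’s. Note that the 1/sqrt(N) shrinkage it relies on holds only if station errors are independent and unbiased and the stations sample the whole planet representatively, which is precisely what is in question for the 1880-era network.

```python
import math

# Illustrative only: all numbers below are assumptions, not NOAA's.
# Question: how precisely can a global monthly mean be known, given
# per-station measurement error and a finite number of stations?
station_error_c = 0.5      # assumed std. dev. of one station reading, degC
n_stations = 2000          # assumed number of independent stations
record_margin_c = 0.08     # the claimed record margin over 1998

# If station errors were independent and unbiased, the standard error
# of the mean would shrink as 1/sqrt(N):
standard_error = station_error_c / math.sqrt(n_stations)
print(f"standard error of mean: {standard_error:.3f} degC")

# A record margin is only meaningful if it comfortably exceeds the
# combined uncertainty of BOTH months being compared:
combined = math.sqrt(2) * standard_error
print(f"combined uncertainty:   {combined:.3f} degC")
print("margin exceeds 2x combined uncertainty?", record_margin_c > 2 * combined)
```

Under these generous assumptions the margin clears the bar; the argument of the letter is that the assumptions themselves, independence, lack of bias, and planet-wide coverage, are not remotely satisfied by the historical record.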

bubbles

if you want more of something, subsidize it
Dr. Pournelle,
A case demonstrating your point: How Carbon Credit Program Resulted In Even More Greenhouse Gas Emissions
http://www.csmonitor.com/Environment/2015/0825/How-carbon-credit-program-resulted-in-even-more-greenhouse-gas-emissions
-d

Surprise!

bubbles

Turning Atmospheric CO2 into Carbon Nanofibers
Dr. Pournelle,
Regardless of what one believes about Climate Change, an economic process for manufacturing carbon nanofibers from atmospheric CO2 is pretty cool stuff. Projected cost is $1000/ton of nanofibers.
http://www.nanodaily.com/reports/Diamonds_from_the_sky_approach_turns_CO2_into_valuable_products_999.html

Jeffrey

If the system produces a product worth more than the cost of making it, I would assume it will be capitalized soon enough. I’m too lazy to do the numbers. But I suspect the CO2 entering the atmosphere each year far exceeds the amount of carbon fiber you can sell, so if it be actually needful it may have to be subsidized, but that’s better than bankrupting ourselves.

bubbles

Security Theater, er Theatre

“Toddler’s Minions ‘fart blaster’ not allowed on flight as it has a trigger”

http://www.telegraph.co.uk/news/aviation/11807263/Toddlers-Minions-fart-blaster-not-allowed-on-flight-as-it-has-a-trigger.html

Well don’t we all feel SO much safer now?

“Will there ever again be an England?” -Anon

Cordially,

John

I dare not answer that…

bubbles

Celebrating George Orwell’s birthday

A group of Dutch artists celebrated George Orwell’s birthday on June 25th by putting party hats on surveillance cameras around the city of Utrecht.

http://front404.com/george-orwells-birthday-party/

“If you want any discipline to shape up, first get it laughed at.”

– Paul Harvey

Cordially,

John

bubbles

Solar Minimum as Dangerous as Solar Maximum

by Mitch Battros – Earth Changes Media

In a new study just published in the scientific journal Geophysical Research, charged particles from various sources are shown to be amplified near the Earth’s equator. Brett A. Carter, lead author from the Boston College Institute for Scientific Research, provides evidence indicating that smaller geomagnetic events occurring in equatorial regions are amplified by the equatorial electrojets.

The article is well beyond my expertise, but is very interesting. The climate modelers tend to secrecy about such matters.

bubbles

Freedom is not free. Free men are not equal. Equal men are not free.

bubbles

The Science is Settled

Chaos Manor View, Monday, August 24, 2015

“Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded—here and there, now and then—are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

“This is known as ‘bad luck’.”

– Robert A. Heinlein

bubbles

http://earthguide.ucsd.edu/virtualmuseum/climatechange2/01_1.shtml

After this great glaciation, a succession of smaller glaciations has followed, each separated by about 100,000 years from its predecessor, according to changes in the eccentricity of the Earth’s orbit (a fact first discovered by the astronomer Johannes Kepler, 1571-1630). These periods of time when large areas of the Earth are covered by ice sheets are called “ice ages.” The last of the ice ages in human experience (often referred to as the Ice Age) reached its maximum roughly 20,000 years ago, and then gave way to warming. Sea level rose in two major steps, one centered near 14,000 years and the other near 11,500 years. However, between these two periods of rapid melting there was a pause in melting and sea level rise, known as the “Younger Dryas” period. During the Younger Dryas the climate system went back into almost fully glacial conditions, after having offered balmy conditions for more than 1000 years. The reasons for these large swings in climate change are not yet well understood.

bubbles

I have been brooding over the mess with the Hugo Awards all weekend, and you do not need to tell me that thinking about the subject is a waste of time, particularly since I have no stake whatever in it. So the less said about it all, the better. My other excuse is that it’s been hot. And maybe I confess to a bit of laziness.

I’ve also been thinking about more important matters. The problem is that typing is painful. Less so with this Logitech K360 keyboard, but I still must use two-finger typing and stare at the keyboard rather than what I am typing, and I still look up to discover to my horror a flood of red wavy lines – words I have had to fix. At least I no longer often hit the alt key and the spacebar simultaneously (at least not very often) which causes me sometimes to lose everything I have written.

I need to write an essay on the philosophy of science as I understand it. That’s what they call epistemology in universities nowadays: the study of how we know what we know, and how well we know it. I “took” Philosophy of Science from Gustav Bergmann at the University of Iowa when I was an undergraduate there in its golden days. Bergmann was one of the former members of the Vienna Circle who fled to the United States before WW II, and had worked with Karl Popper.

I never met Karl Popper, although I wish I had. Popper seems to me to have made as concise a statement of how science works as has ever been done. You can never “prove” an empirical scientific (as opposed to a logical) statement or hypothesis. We can never know Truth as science. What we can do is falsify statements, or attempt to; those that have not been falsified may be treated as true, always reserving the possibility that they will some day be falsified. Statements that cannot be falsified by any means whatever are simply not scientific. In some philosophical realm they may be “true” but they are not scientific and are not the business of the scientists.

This does not seem radical today, but when first put forth by Popper it exposed Freudianism and other such “sciences” to the criticism that, since they could explain everything and thus could not be falsified, they in fact explained nothing and were not science.

This is a simplification of a rather complex subject. Those who want to know more will have little difficulty in finding discussions.

The essence is that if you cannot falsify a statement it is not science; and if an experiment gives evidence of the falsification of an hypothesis, the hypothesis is false. You may not diddle with it to make it fit the facts, you must make a new – and falsifiable – hypothesis that covers all the facts. Adding non-falsifiable modifications to your theory in order to cover the new facts is right out. That may seem obvious now, but it was not always accepted, and is not actually universally applied now.

With that introduction we consider the extraordinary evidence situation.

The Extraordinary Evidence Fallacy

Dear Jerry:
You wrote in your View for August 19, 2015:

“Extraordinary claims require extraordinary evidence.”

https://www.jerrypournelle.com/chaosmanor/extraordinary-claims-and-other-matters/

I’ve read that Carl Sagan popularized the saying. This philosophical claim turns out not to be true.
The criterion of extraordinary evidence is frequently raised by those arguing against the existence of God, or against the reality of miracles such as the resurrection of Christ. William Lane Craig discusses the fallacy regularly in his debates and podcasts.
To get the flavor of the counterargument, consider Craig’s response to Lawrence Krauss during their debate at North Carolina State University on March 30, 2011. Craig said:

Now he [Kraus] says, “Extraordinary claims require extraordinary evidence. David Hume’s argument against miracles is sound.” Here, what you need to understand is that that claim is demonstrably false. It is not true. Hume didn’t understand the probability calculus. It wasn’t yet developed in his day. His argument neglects the crucial probability that we would have the evidence which we do if the miracle in question had not occurred. And that factor can completely balance out any intrinsic improbability that you think might occur in a miracle. In any case, why think that a miracle like the resurrection is intrinsically improbable? I think what’s improbable is that Jesus rose naturally from the dead. But, of course, that’s not the hypothesis. The hypothesis is that God raised Jesus from the dead. And you can’t show that that’s intrinsically improbable unless you’re prepared to argue that the existence of God is improbable. And Dr. Krauss isn’t doing that tonight. That’s not the debate topic, as he explained. The topic tonight is, “Is there evidence for God?,” and so we’re not assessing the prior probabilities of whether or not God’s existence is intrinsically probable or not. And so I think the approach that I’m taking tonight is right in line with probability theory and does show that, given the facts that I’ve laid out, God’s existence is more probable than it would have been without them.

Read more: http://www.reasonablefaith.org/the-craig-krauss-debate-at-north-carolina-state-university

In a podcast on 8/3/2014 Craig elaborated:

So this slogan, I think, is simply demonstrably false. In fact, it is contradicted all the time when we believe highly improbable, perfectly natural events have occurred because we have good evidence for them – not miraculous or extraordinary evidence but ordinary evidence. But it would be very, very improbable that we would have this sort of evidence if the event had not taken place. So this first claim is nothing more than a slogan that the unbeliever can use to dismiss any evidence that you present. He can use it as a slogan and simply say that is not extraordinary enough for me to believe. It really tells us more about his personal psychology and skepticism than it does about the value of the evidence we are presenting.

Read more: http://www.reasonablefaith.org/top-10-debate-topics

Thus, one man’s extraordinary claim can be another man’s mundane assumption. Much depends on one’s criterion for incredulity.
Carl Sagan frequently claimed that “The Cosmos is all that is or was or ever will be.” To him this was a truism. Yet I consider Sagan’s claim an extraordinary philosophical leap of faith. I wonder “How does he know?” Our two worldviews were far apart.
Those interested in pursuing this issue will find much more on the Web, of course.
Best regards,
–Harry M.

To be clear, this concept was first published by Laplace, who said “The weight of evidence for an extraordinary claim must be proportioned to its strangeness.” Sagan popularized this concept, and it seems intuitively true. If I tell you that the sun will not rise tomorrow, that is a falsifiable statement, and therefore might be said to be scientific; but if I then tell you the sun did not rise, and I seem to be the only person to have noticed that, could I be said to have falsified the hypothesis that the sun will rise tomorrow and every day thereafter per omnia saecula saeculorum? Or would you demand more evidence?

If two psychiatrists, a nurse, and an orderly all tell you that you are not covered with bees, you may as well stop trying to brush them off your coat, to quote some psychiatric book I read fifty years ago.

Similarly for the various reactionless drives: if someone claims he has one, is that sufficient evidence? Obviously if he claims he is married we tend to believe it; if he claims he is happily married, we may accept that on his say-so; but if he claims he is happily married to a talking gorilla, most of us would require somewhat more evidence.

Similarly, the Apostles understood well that their claim to have seen the Master, and fed Him a bit of boiled fish, was extraordinary; and those who recorded it took care to identify the witnesses. It is not a scientific claim, and you may doubt the evidence for the Resurrection; people have done so for a thousand years. But then statements about miracles are not scientific hypotheses, and are not subject to the rules of science. By definition a miracle is exceptional and takes place outside science. That does not mean there are no miracles, or that no one has ever observed one. The keepers of Lourdes claim to have extensive documentation of a very great many of them.

Incidentally I have credentials from a major university that state that I do “understand the probability calculus”, but I’m having trouble understanding Craig’s point. Given the existence of God, all things are possible; but the calculus of probability still cannot predict miracles without a great deal of carefully gathered evidence suitably organized, and even then there is no reason to assume the conditions making your probability estimate possible will prevail.
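Craig’s talk of “the probability that we would have the evidence which we do if the miracle in question had not occurred” is, in modern terms, the likelihood-ratio term in Bayes’ theorem. A minimal sketch, with every number invented purely for illustration, shows how quite ordinary evidence can overwhelm a very small prior:

```python
# A minimal Bayes-factor sketch: an event with a very low prior can
# still end up probable if the evidence in hand would be very unlikely
# to exist had the event not occurred. All numbers are illustrative.

def posterior(prior, p_evidence_given_event, p_evidence_given_no_event):
    """Posterior probability of the event, via Bayes' theorem."""
    numerator = prior * p_evidence_given_event
    denominator = numerator + (1 - prior) * p_evidence_given_no_event
    return numerator / denominator

# Example: a lottery win reported in the newspaper.
# Assumed prior that this particular ticket won: 1 in 10 million.
# P(paper prints this number | it won): taken as ~1.
# P(paper prints this number | it did not win): assumed 1 in a billion.
p = posterior(1e-7, 1.0, 1e-9)
print(f"posterior: {p:.3f}")
```

The talking-gorilla example below works the same machinery in reverse: for claims where misreport or delusion is comparatively likely, the likelihood ratio is weak, and the low prior dominates no matter how sincere the witness.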

I once had this discussion with Marvin Minsky. I related an incident that changed my life. Marvin’s graduate student, Danny Hillis, immediately pointed out the probabilities, but ran into the problem that we were approaching the age of the universe in estimating the probable times between such events. It might have been fun to pursue the discussion but we were in a NASA weekend conference and had to go back to the session.

Claims of a reactionless drive are extraordinary. Evidence of the existence of such a drive needs to be “extraordinary” in the sense that the existence of such a working gadget is an improbable event, and thus it needs to be observed to work by a number of people.

bubbles

The unsettled science of the Big Bang hypothesis

With respect to Stephanie Osborne’s citation of Hubble’s Law:
“Steady State Universe. There were no galaxies, there was only the universe, and it had always been just like it is. Then Edwin Hubble realized that the unusual spectra he was getting from those peculiar stars could be explained if they were regular spectra with extreme redshifts, and he discovered that those peculiar stars are what we now call quasars, and they were far distant galaxies in their own right, speeding away from us at incredible velocities. And then astronomers began to realize that all those “spiral nebulae” and such were also galaxies, and they were also redshifted, but at an amount corresponding to their distance. And lo, Hubble’s law was born.”

Physicist Hilton Ratcliffe in The Static Universe: Exploding the Myth of Cosmic Expansion (2010) points out that what became known as Hubble’s Law started out as a tentative, speculative hypothesis advanced in 1929, based on very limited observational data: the galaxies beyond our own that Hubble was the first to observe had redshifts that correlated inversely, though very roughly, with their magnitudes. Other astronomers pounced eagerly on this hypothesis, adopting it more or less uncritically, and interpreted the redshifts as due to the Doppler Effect, which, if true, would give them a tool to estimate the distances to extragalactic stellar systems. And in short order other astronomers made a wild leap from this scant and dubious data, and from the unfounded assumption that these redshifts were instances of the Doppler Effect, to the staggering conclusion that the universe was flying apart at an accelerating rate and that therefore its history was analogous to an explosion.

Plots were made of the supposed recessional velocity of Hubble’s distant galaxies versus their distances, but these were analogous to circular arguments, since recession and distance were precisely the phenomena that the data were being invoked to establish. Plotting the same data as redshifts versus magnitude collapsed and scattered the same data to the extent that there no longer seemed to be any clear pattern. The situation only deteriorated from that point on.
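The circularity complained of here is easy to exhibit with nothing more than the textbook low-redshift relations, v = cz and Hubble’s law d = v/H0 (the H0 = 70 km/s/Mpc figure below is an assumed modern value, for illustration only):

```python
# Sketch of the standard inference chain the letter calls circular:
# measured redshift -> assumed Doppler velocity -> inferred distance.
C_KM_S = 299_792.458        # speed of light, km/s
H0 = 70.0                   # assumed Hubble constant, km/s per megaparsec

def recession_velocity(z):
    """Low-redshift Doppler approximation: v = c * z."""
    return C_KM_S * z

def hubble_distance_mpc(z):
    """Distance inferred by ASSUMING Hubble's law: d = v / H0."""
    return recession_velocity(z) / H0

z = 0.01                    # a nearby galaxy's measured redshift
v = recession_velocity(z)   # ~2998 km/s
d = hubble_distance_mpc(z)  # ~42.8 Mpc
print(f"v = {v:.0f} km/s, d = {d:.1f} Mpc")
# Plotting d (derived from z) against v (also derived from z) must
# always yield a perfect straight line of slope H0; that is the
# circularity described above. An independent test needs distances
# measured some other way (parallax, standard candles, overlap).
```
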

It was soon discovered that this expanding universe model didn’t apply “locally” (i.e. to observational objects within 100 megaparsecs of us) – this was rationalized away as due to local gravity, although no rationale for considering entire galaxies or clusters of galaxies as point objects was ever advanced. Nor was it explained why more distant galaxies shouldn’t also be included, or what the criteria for inclusion in the local field should be. If the universe were actually expanding like a physico-chemical explosion, every object ought to be accelerating away from every other object, or at least moving away at constant linear velocity. And if there were exceptions to this rule due to gravitational effects, that by itself ought to distort and complicate any inference of distance from redshifts of faint stellar objects.

The exemption of “local” objects from the theory (for which alone other means of determining distance exist, such as interpolation and overlap) was the first ad hoc adjustment to preserve “Hubble’s Law” and the Big Bang Theory, to which astronomers were already heavily committed (by the 1930’s), but it wouldn’t be the last. It also became necessary to postulate that the space between mutually recessionary objects was itself expanding, and then that this space was populated with still undetectable “dark matter”. The other, much weaker, pillar supporting the BBT, the supposed background radiation that is the residue of the original explosion, has been subjected to such convoluted and ever-changeable modeling as to become virtually a metaphysical substance itself, like the dark matter and the ether that the Michelson-Morley experiment failed to find. (Ratcliffe devotes a chapter to the torturing of background radiation data into evidence for the Big Bang hypothesis.)

Unfortunately, the creation of a local zone exempt from the BBT theory also had the effect of wiping out the data that supposedly undergirded it, as all of Hubble’s original observations, and most of those that had accumulated since, and that were likewise nebulous, fell within the area of localization. By 1935 Hubble and his colleague Richard Tolman were warning of “the possibility that red-shift may be due to some other cause, connected with the long time or distance involved in the passage of the light from the nebula to the observer”, and by 1947 Hubble was writing more broadly “it seems likely that red-shifts may not be due to an expanding universe, and much of the speculation on the structure of the universe may require re-examination.” (note Hubble’s word “speculation”).

Hubble had also argued, as early as 1942, that the fact that the redshift data were related linearly to magnitude argued for the nebulae being static, not recessionary, contrary to the assumption of the BBT theorists.

Hubble and Tolman also proposed a method of testing the Hubble hypothesis in their 1936 publication – a method that was applied in two studies published in 2006, both of which failed to confirm the theory that apparent luminosity is related to Doppler redshifting; in fact the few studies that have purported to confirm this relationship have all been flawed by the same kinds of circular assumptions of the relationships to be demonstrated. Meanwhile, plentiful disconfirming evidence has accumulated, including classical visual astronomical observations, examples of which Ratcliffe reproduces in his book. On page 83, he lists (as I count) 32 alternate hypotheses proposed by physicists and astrophysicists to explain the redshift data. He also devotes a whole chapter to the problems that observations of quasars have caused for the classic redshift theory, ten of which were identified in a 2009 paper by Martin López-Corredoira.

The most damning and disturbing aspects of Ratcliffe’s presentation are the parallels between the cultist BBT orthodoxy and the cultist AGW orthodoxy – both “settled sciences”. One doesn’t have to be a physical scientist to recognize the hallmarks of a cult: excommunication and persecution of heretics; extravagant claims of unanimity and certitude; and (not least) the vested financial and power interests at stake. One of the strengths of Ratcliffe’s book is that it amounts to a tutorial and exemplar in illustration of the Kuhnian analysis of the history of science. It is ever thus: great leaps forward must always await a prolonged period of futile and counterproductive attempts to uphold the old failing, anomaly-ridden paradigm. Thus, we are treated to the following quotation from premier philosopher of science Karl Popper:

“Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem that it was intended to solve”.

And this from Carl Sagan, laying out the two fundamental rules of science:
“First: There are no sacred truths; all assumptions must be critically examined; arguments from authority are worthless.
“Second: Whatever is inconsistent with facts must be discarded or revised. We must understand cosmos as it is and not confuse how it is with how we wish it to be. The obvious is sometimes false; the unexpected is sometimes true.”

Ms. Osborne refers to deep layers of physical scientific understanding that would be disrupted if the EM drive were to turn out to be real, but I think that you and most of your readers would agree that it’s best to keep an open mind about it nonetheless. OTOH the only edifice I see endangered by the final collapse of the BBT and its retinue of ad hoc additions (dark matter, perhaps black holes) would be that of an increasingly rickety and sterile cosmology. Astrophysicists would have to rethink many things no doubt, but I’d say that it was high time that were done anyway.

John B. Robb

I commend to you Petr Beckmann and his Einstein Plus Two http://rationalwiki.org/wiki/Petr_Beckmann which seeks at great and careful length to show that while there is no experiment to falsify Einstein’s Relativity, all the crucial experiments relied on to corroborate that theory can also be explained by Newtonian physics with several assumptions including a finite speed of propagation of the force of gravity. Beckmann proved to his satisfaction that there were no observations that falsified Newton, given that assumption; and that his theory of an aether entailed by planetary motion, consistent with Newton, was sufficient to explain the Michelson-Morley experiment that formed the basis for Special Relativity.

Note that he never claimed to falsify Relativity; only that in his theory the math was simpler, and explained all the data. He did note that there was something strange about spectroscopic binaries; I will let you find that for yourself. His premise is summarized on the first page of his book: http://www.stephankinsella.com/wp-content/uploads/texts/beckmann_einstein-dissident-physics-material.pdf

bubbles

The science is settled

Hello Jerry,

That the ‘science is settled’ is NOT confined to what is euphemistically known as ‘climate science’ but is now apparently the position of ALL science.  In particular, physics.

Here are two articles on the subject, both of which point out that nowadays, when observations of the behavior of the universe in action conflict with ‘settled science’ the universe is adjusted to fit theory.

Dark matter and dark energy are cases in point:  when large scale astronomical structures were observed to behave in ways not predicted by ‘settled science’, it was considered to be conclusive evidence that the universe was constructed largely (~95%) of ‘dark matter’ and ‘dark energy’ whose properties, quantities, and distribution could be deduced from the requirement that the universe conform to ‘settled science’.  In other words, since the theory was correct, the universe as observed wasn’t, so the universe was adjusted.

This article was precipitated by the reaction of the experts to the announcement of thrust from what are generically known as EmDrives, but includes references to dark matter and ‘cold fusion’:

http://www.digitaljournal.com/science/op-ed-emdrive-does-work-but-spectator-science-disagrees/article/441374#tab=comments&sc=0

I will be the first to admit that the existence of the ‘EmDrive’ effect is far from confirmed, but what the article is bemoaning is the immediate reaction of the experts:  the observations conflict with theory, therefore they are experimental error or deliberate hoax.  They may be right in this case, but is it necessary to trash the reputations of the apostates, personally and professionally (as they did with Pons and Fleischmann when they announced anomalous heat from their experiments) and as they are now doing with Dr. McCulloch with his MiHsC theory as he describes on his blog posting for 18 August:  http://physicsfromtheedge.blogspot.com ?

McCulloch claims (I certainly don’t have the ‘creds’ to either support or reject his theory) that his theory explains the observations from which the existence of dark matter/dark energy was confirmed (and quite a few other deviations of observations from theory) without requiring either.  The response by ‘settled science’ has not been to point out the error of his ways, but to make him a ‘physics non-person’ and to remove anything about his theory from common reference sources such as Wikipedia (ongoing) and arXiv.

As Dr. McCulloch says:  “It is possible for a paradigm to survive not because it is more successful, but because it deletes the alternatives, and this is what an unscientific minority of dark matter supporters are doing.”

That is the common practice in ‘Climate Science’, by the way.  Note how the reputation of Dr. Judith Curry has changed over the last 5 years or so. When she was enthusiastically on board with Catastrophic Global Warming driven by anthropogenic CO2 (ACO2), she was the respected climate scientist who was Chair of the School of Earth and Atmospheric Sciences at Georgia Tech; now that she has merely expressed doubts as to the certainty of the looming catastrophe, she is portrayed as an incompetent, anti-science shill of the Republicans and oil companies by her former comrades-in-arms.

The same goes for anyone with the temerity to engage in research into the existence of low energy nuclear reactions (generic cold fusion).  Even suggesting that research should be conducted in the field, never mind opining that it may be real, is a career killer for budding physicists.

I certainly can’t support OR reject LENR, EmDrives, or theories in conflict with general relativity using theoretical arguments, but as a layman I think that the ex cathedra rejection of experiments and the creation of an unobservable 95% of the universe because of conflict with EXISTING theory bodes ill for the advancement of science.

Bob Ludwick

I do not believe the science is settled. I have enough “creds” to have a right to an opinion on whether they actually have an engine that produces thrust without loss of mass.  I have not seen the device, so I cannot give an opinion; what I have seen goes to show that a number of the more usual explanations are not present; but the observations I have seen have been light on observations of the thrust, both in magnitude and time.  If they have a gadget that will operate for weeks producing thrust all the while, I think it would be simple to make sure there were no hidden means of introducing mass to the apparatus. The observation that there is reactionless drive has apparently not been falsified, but the observers seemed unsure.  I’d love to see this thing in operation.  A reactionless thrust would give us the solar system; we can leave it to future generations to give us the stars.
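To put numbers on that last remark: even a tiny constant acceleration, sustained indefinitely because no reaction mass is spent, makes the planets weeks away rather than years. A sketch using schoolbook kinematics; the milli-g figure and the 0.52 AU Earth-Mars separation are assumptions chosen for the example:

```python
import math

G = 9.81                    # standard gravity, m/s^2
AU = 1.496e11               # astronomical unit, meters

def brachistochrone_days(distance_m, accel_m_s2):
    """Flip-and-burn trip time: accelerate to the midpoint, then
    decelerate. Each half satisfies d/2 = (1/2) a (t/2)^2, so the
    total time is t = 2 * sqrt(d / a)."""
    return 2 * math.sqrt(distance_m / accel_m_s2) / 86_400

# Illustrative: Mars at a typical ~0.52 AU separation, constant 0.001 g.
days = brachistochrone_days(0.52 * AU, 0.001 * G)
print(f"Earth-Mars at one milli-g, flip-and-burn: {days:.0f} days")
```

A chemical rocket cannot sustain even a milli-g for more than minutes; a drive that needed no propellant could hold it for the whole trip, which is why the claim, if true, opens the solar system.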

The science is settled

Hi Jerry,

I read Stephanie’s commentary on the subject and found it reasonable.  As usual for her commentary.

It included the following:

“But when we start looking at cosmology and such like, we are looking at fundamental physics on many levels. And that physics does have many levels, starting with Newtonian physics, then adding special relativity, general relativity, quantum mechanics, the various string theories, M theory, et cetera. So if you encounter something that appears to knock out one of those levels, you have to realize that it doesn’t JUST knock out that level, it knocks out pretty much all the levels above it. The lower the level, the more fundamental and earth-shaking the result. We’re talking, in some cases, about scrapping pretty much the whole of physics and starting from scratch, or nearly so. This is Not A Good Thing, in many ways, because we have used established physics in so many ways in our world. (Engineering is largely physics applied to the real world — imagine if we found, e.g., that quantum mechanical fluctuations could readily occur on a macro scale, and affected a particular structure commonly used in architecture, say. Would you ever feel safe in a high-rise again? In your own house??) Consequently there is a strong urge to try to make the current levels fit observations, rather than immediately going back and saying, “Oh, physics is wrong, drop back and punt.” But this is not a new thing; it is the way it has ALWAYS been.

Example: Epicycles. An attempt to make the previously-known science fit observations of planetary motion. And then Kepler came along and there was a hullaballoo for awhile, and then it was found that his model fit observations better, and so now we have Kepler’s Laws of Planetary Motion.

“Would you ever feel safe in a high-rise again?  In your own house?”

Of course I would.  Regardless of the impact of the new theories at the esoteric limits of cosmology/physics, standard old pre-relativity/pre-quantum theory mechanics has been demonstrated to produce perfectly acceptable houses, cars, airplanes, and bridges.  Finding out that (for example) MiHsC replaced General Relativity when applied at extremely low accelerations on cosmological scales and explained galactic rotation without the requirement for either dark matter or dark energy wouldn’t impact my life a whit. Likewise, experimental confirmation that a frustum energized at resonance by microwaves would produce thrust (there is none, yet) would not affect any pre-existing, working technology AT ALL.  Everything that worked would continue to work.  It would certainly allow for activities that are not achievable by current technology, but it wouldn’t impact working devices at all.

She cites various string theories and M Theory as scientific ‘steps forward’.

Well, maybe.

The key is ‘various string theories’.  There are apparently lots.  AFAIK, no one has yet devised a test of string theories that could confirm OR reject them.  Does that make them science?  This article is about a press release a few years ago announcing the discovery of a test for string theory:  https://www.math.columbia.edu/~woit/wordpress/?p=6561.  The article says in effect, “Not so fast there, kemo sabe! The announced ‘test for string theory’ does no such thing.”

I’m not qualified to critique the theory OR the proposed test, but one of the  commenters on the article, Peter Woit, had this to say:

“I do think you’re both right: string theory predicts anything you want, either EP violation or no EP violation.”  So does ACO2-driven Catastrophic Anthropogenic Global Climate Change (née Anthropogenic Global Warming).  EVERY undesirable climatic event is instantly attributed, as predicted by the experts, to ACO2.

Ditto for ‘M Theory’.  No agreement within the ranks as to exactly what it is, what it implies, or how to test it.  But some think that some parts of it are at least mathematically consistent, however it may relate to our actual physical reality.  I suppose that is progress.

She cites this example of ‘accepted science’ that would be ‘knocked out’ if new theories were accepted:

  “Example: Steady State Universe. There were no galaxies, there was only the universe, and it had always been just like it is. Then Edwin Hubble realized that the unusual spectra he was getting from those peculiar stars could be explained if they were regular spectra with extreme blueshifts, and he discovered that those peculiar stars are what we now call quasars, and they were far distant galaxies in their own right, speeding away from us at incredible velocities.”

Funny she should mention the ‘accepted science’ that quasars are far distant galaxies speeding away from us at incredible velocities.  She is correct of course:  it IS accepted science.  And, per my original premise, anyone who suggests otherwise is drummed out of the cosmological physicist corps.  It is ‘accepted science’ that based on their observed energy bursts and considering their accepted distance,  computed from their red shift, individual quasars  are producing energy during the burst at a rate equal to that of most of the remainder of the observable universe.  Mechanism unknown.

A large number of known quasars exhibit proper motion.  This means that if they are at the ‘accepted Hubble red shift distance’ of billions of light years the component of their velocity perpendicular to our line of sight must be multiples of c.
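As a rough check on that point, the small-angle arithmetic is simple: transverse velocity is proper motion times distance. A sketch, with illustrative (not measured) numbers:

```python
C = 299_792_458.0           # speed of light, m/s
M_PER_LY = 9.4607e15        # metres per light-year
ARCSEC_PER_RAD = 206_265.0  # arcseconds per radian
SEC_PER_YR = 3.1557e7       # seconds per year

def transverse_velocity(mu_mas_per_yr, distance_ly):
    """Transverse velocity (m/s) implied by a proper motion mu at a given
    distance, using the small-angle relation v = mu * d."""
    mu_rad_per_yr = mu_mas_per_yr / 1000.0 / ARCSEC_PER_RAD
    return mu_rad_per_yr * distance_ly * M_PER_LY / SEC_PER_YR

# Illustrative: a proper motion of just 1 milliarcsecond/year, at a
# Hubble-law distance of 3 billion light-years, comes out at many
# multiples of c
print(transverse_velocity(1.0, 3e9) / C)
```

With those made-up figures the implied sideways speed is roughly 14 c, which is the contradiction the letter describes: either the proper motion measurements are wrong, or the distance is.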

A few formerly respectable astronomers (Halton Arp, for example) have noticed the seeming contradiction and written papers on it, proposing that quasars are NOT at cosmological distances and can therefore be expected to exhibit proper motion.  Their relative nearness, according to the alternate theory,  removes the requirement that their observed energy bursts are at the universe level.

Accepted science has (so far) been undeterred by the contradiction and has dutifully marched Dr. Arp and anyone who has exhibited the slightest sympathy for his ideas off the ‘cosmological physicist’ plank, with the result that the mention of Arp in a paper, except to deride him as a kook, will instantly relegate the paper to the ash heap of history, whatever its other merits.  The following paper discusses the apparent contradiction between the cosmological red shift (Stephanie said blue shift, but it was clearly a typo), which implies vast distance, and the observed proper motion, which implies that quasars are relatively near and vast energy is not required:

http://www.deceptiveuniverse.com/Quasars.htm .

Quasars and the Hubble Law

“A few astronomers have argued that quasars are not really that far away, and that the Hubble Law does not apply to them. Astronomer Halton Arp, for example, has spent much of his long and successful career providing evidence of associations between quasars and galaxies, suggesting that they may be at similar distances. He has also amassed a large number of photographs of galaxies with widely different redshifts which appear to be interacting, as if they were near each other. His discoveries, which have taken him out of mainstream astronomy, raise serious questions about the redshifts of galaxies being caused by recessional velocity.

Another persistent voice against cosmological distances for quasars is astronomer Tom Van Flandern, formerly of the U.S. Naval Observatory.

The problem with quasars is that using the Hubble Law to compute their distance leads to extreme distance estimates — to the edge of the universe, in fact. If quasars were not at the distances currently ascribed to them there would be no need for them to have extraordinary energy. Non-cosmological distances would also be consistent with the observed proper motion of many quasars.”

The author notes that Arp’s ‘discoveries’ have taken him out of mainstream astronomy.  Accepted science does not take kindly to non-acceptance of its catechism.

Possibly unrelated is the comment about sympathizer Tom Van Flandern: “…formerly of the U.S. Naval Observatory.”  Wonder if his support of theories contrary to ‘accepted science’ has anything to do with his being ‘former’?

The argument may be advanced:  “You’re an idiot; you’re not qualified to critique theoretical cosmology.”

The first is possible; the second undeniable.

I’m not critiquing theories.  I am critiquing the observable response of theoretical cosmologists/climate scientists/et al to those who, based on observations, question the accepted dogma of the applicable field.  That response is, rather than QUESTION the now axiomatic nature of what was formerly a theory, to use the conflicting data (cosmology) and the no-longer-questionable accepted science to announce the detection, properties, and distribution of the otherwise undetectable 95% of the universe, for which the only evidence (so far) is the conflict between observations and accepted science.  Or, when the observations of the climate do not match the accepted projections of the climate models, to adjust the data.  And to shun apostates within the field.  Accepted science is no longer falsifiable by observations, since the universe can be modified at will to make the observations match theory.

Bob Ludwick

I can only refer you to Beckmann. Orthodox theory keeps inventing new constructs, like dark matter and dark energy, not from observation but from theoretical necessity. The concepts keep multiplying, and apparently the principle of similarity (the principles that govern the solar system apply to the whole universe) needs to be abandoned. As to string theory, I am unaware of any falsifiable hypotheses it has generated.

bubbles

Cherry Picking, Black Swans and Falsifiability

Jerry –
You might enjoy this by Doug L. Hoffman:
http://theresilientearth.com/?q=content/cherry-picking-black-swans-and-falsifiability
– a sample:
“Whenever a skeptic points out a new paper or journal article refuting some claim made by the theory of anthropogenic global warming, climate change alarmists often shout “cherry picking!” Evidently, most climate change true believers do not understand how science works or how theories are tested. Scientific theories must make predictions by which they can be tested. Providing evidence that AGW has failed in its predictions is not cherry picking, it is refutation. Unfortunately, when confronted with failed predictions the standard alarmist answer is to disavow the predictions. They will say that those are not predictions at all, they are projections—and that means AGW is not a scientific theory at all.”
And this:
“Returning to the subject of proving or disproving the theory of anthropogenic global warming, there are only three possibilities here: AGW makes no predictions and hence is not a scientific theory; AGW depends on vague feedback mechanisms that must be constantly reinterpreted, making AGW a very weak theory and scientifically useless; or the predictions made by climate scientists about the effects of AGW are just that, predictions, and if those predictions can be shown to not be true then AGW is a false theory.”
One of my favorite living historians, Paul Johnson, makes a similar appeal to Popper’s requirements for scientific falsifiability (with unexpected references to Wordsworth and the Venerable Bede – perhaps the first in the post-Climategate AGW debate):
http://spectator.org/archives/2010/02/03/the-real-way-to-save-the-plane

He closes by suggesting that rather than destroy our economies tilting at the AGW windmill, we could better spend what we can on a project dear to all our hearts:
“So vast sums of money will continue to be spent on an unproven and unprovable theory, predicting a global catastrophe from the realms of fantasy. The money could be much more profitably spent on space exploration.”
David

I very much agree, and so does Bayesian analysis: we should spend money on reducing uncertainties in our predictions, not on preparing for outcomes.

 

bubbles

The AGW half-truth

Dr. Pournelle,

Your AGW correspondent made the claim:

“It’s something that non-scientists don’t quite understand: Science is all about models.”

I can’t know if the statement was sloppily imprecise or precisely disingenuous,  but science is all about FALSIFIABLE models, as Karl Popper persuasively argued. “You don’t have a better explanation, so I must be right” is more childish than scientific.

Contrary to the assertion, I think that the average person does understand that a reliable model must be able to make reliable predictions. A reliable prediction would tell us what the climate will be like next year, so we could plan what kind of crops to plant and when. If the predictions turn out to be grossly in error, we would tell the climate modelers to go away and come back when they understood climate prediction better.

Why should we believe that climate modelers can precisely predict the climate in 100 years if they cannot precisely predict next year? And if they can precisely predict next year, why don’t they set up a publicly accessible global temperature measurement experiment to verify their model predictions? Then everyone could compare the model predictions to the data, and the accuracy (or inaccuracy) of the models would become increasingly apparent.

I think that most “non-scientists” could easily grasp this idea, but I am not so sure about the AGW modelers.

Steve Chu
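The public check proposed above is just comparing a predicted series against a measured one. A minimal sketch, with all numbers made up for illustration:

```python
import math

def rmse(predictions, observations):
    """Root-mean-square error between model predictions and measured data."""
    if len(predictions) != len(observations):
        raise ValueError("series must be the same length")
    squared_errors = [(p - o) ** 2 for p, o in zip(predictions, observations)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Illustrative annual temperature anomalies in degrees C (invented)
model    = [0.40, 0.45, 0.50, 0.55, 0.60]
measured = [0.38, 0.35, 0.42, 0.40, 0.44]
print(rmse(model, measured))
```

With public model output and public measurements, anyone could run this comparison year after year and watch the error grow or shrink, which is exactly the accountability the letter asks for.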

Global Warming
Sir,
Having gone through the imminent ice age scare back in the ’70s, I have been skeptical about the global warming claims over the past several years. If there is in fact man-made global warming I don’t see any way to do anything about it. I believe most of the evidence used to proclaim the desperate situation we are in has been produced by cherry picking data to substantiate their claims, or just flat out making up data and concealing any evidence that doesn’t agree with their theories.

A large part of the problem is that research grants for studying global warming are easily obtained, and the financial backing will make researchers tend to skew their results to keep the money coming in. There is already a huge network of companies funded by government dollars that are based totally on saving the world from climate change. As long as there is money to be made by defrauding the country I’m afraid it will only get worse.

I have a background in turbine engine testing and instrumentation, and I know how hard it is to measure temperatures to within one degree, much less 1/10 of a degree. Just maintaining the calibration of measuring equipment is difficult. Many measurements taken these days are software adjusted to average out inconsistent or unexpected data. A model can be created to give you any results you want. When we were correlating instrument readings we threw out the high and the low and averaged the remaining results, and we still couldn’t repeat temp readings to a degree. I don’t have any education regarding statistics, but I could see how you could skew results based on how the data was segregated.

James
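The throw-out-the-extremes averaging described in that letter is a trimmed mean. A minimal sketch (the readings are invented illustrative values, not real test data):

```python
def trimmed_mean(readings):
    """Average after throwing out the single highest and lowest reading,
    as is common practice when correlating instrument readings."""
    if len(readings) < 3:
        raise ValueError("need at least three readings to trim both ends")
    kept = sorted(readings)[1:-1]   # drop one low, one high
    return sum(kept) / len(kept)

# Illustrative thermocouple readings in degrees F, one outlier each way
readings = [997.2, 1001.4, 1000.8, 1000.1, 1003.9]
print(trimmed_mean(readings))   # averages the middle three readings
```

The point of the letter stands either way: how you choose which readings to keep (and how many to trim) changes the answer, which is one way data segregation can skew a result.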

bubbles

bubbles

bubbles

bubbles

Freedom is not free. Free men are not equal. Equal men are not free.

bubbles


bubbles

Extraordinary Claims and other matters

Chaos Manor View, Wednesday, August 19, 2015

“Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded—here and there, now and then—are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

“This is known as ‘bad luck’.”

– Robert A. Heinlein

bubbles

http://earthguide.ucsd.edu/virtualmuseum/climatechange2/01_1.shtml

After this great glaciation, a succession of smaller glaciations has followed, each separated by about 100,000 years from its predecessor, according to changes in the eccentricity of the Earth’s orbit (a fact first discovered by the astronomer Johannes Kepler, 1571-1630). These periods of time when large areas of the Earth are covered by ice sheets are called “ice ages.” The last of the ice ages in human experience (often referred to as the Ice Age) reached its maximum roughly 20,000 years ago, and then gave way to warming. Sea level rose in two major steps, one centered near 14,000 years and the other near 11,500 years. However, between these two periods of rapid melting there was a pause in melting and sea level rise, known as the “Younger Dryas” period. During the Younger Dryas the climate system went back into almost fully glacial conditions, after having offered balmy conditions for more than 1000 years. The reasons for these large swings in climate change are not yet well understood.

bubbles

I hate Time Warner. So, I suppose, does everyone else; certainly I am not alone.

Today I got the Blu-Ray disks for my Pioneer Blu-Ray burner, and thought I would test it out, but I had some questions. The box needs a USB 3.0 cable in; the connector in the box is USB 3.0 Micro B, which is not like any other USB cable I know of. The one that came with the box has a strange looking male plug that cannot possibly fit into any normal looking USB port on one end, and two normal looking USB male plugs on the other. The instructions say they are 3.0, but that is the only way I would know it; they don’t look different from the USB plug on the old keyboard except that the innards of one is blue and the other the more usual white. The two are connected in parallel, so that they can work together to supply power if your computer can’t put enough amperage out of one USB port.

The whole USB port/plug situation interested me. I know there are 3 levels of USB, and several levels of cables/plugs. The Kindle Fire needs one kind, there’s an older mini size for other stuff, and it’s a bit of a mess. Of course when I went to look it up it was just after four o’clock and there was the usual Time Warner slowdown, so I couldn’t use the Internet. Given the years it took to get me any high speed connection I suppose I should rejoice, but Time Warner seems to dislike Studio City. Eric gets faster and better connections way north of the city, and they don’t die at four o’clock.

I connected the Pioneer Blu-ray BDR-XD05S Slim Portable Burner to the docking station for the Surface Pro 3, and everything just worked. It was a bit frustrating at first, causing several – uh, intemperate – remarks. The slim Pioneer is not easy to open and nearly impossible to open when it is not under power. There is an easy way to remove the disk, but it’s not apparent at first. The software – I presume Windows 10 – works, although some of the prompts are not what you expect. Plan on spending half an hour the first time you use it if you haven’t burned some disks for a while; it works very well, but it’s a bit different from earlier times and older OS. And you don’t need Nero Burning ROM any longer. Windows 10 knows how to do everything. The bottom line is that I have all my critical works – books in progress – burned to a Blu-ray; I have taken the burner out of the system and the disk out of the machine, put the disk back in and reconnected the burner to another machine, read enough of the files to know they are good copies, and stowed away that set in the box it came in.
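Spot-reading a few files is a reasonable check; comparing checksums of everything on the disc against the originals is more thorough. A minimal sketch (the directory paths in the example comment are hypothetical):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """SHA-256 of a file, read in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Return the relative paths whose backup copy is missing or differs."""
    bad = []
    src = Path(source_dir)
    for f in src.rglob("*"):
        if f.is_file():
            copy = Path(backup_dir) / f.relative_to(src)
            if not copy.is_file() or sha256_of(f) != sha256_of(copy):
                bad.append(f.relative_to(src))
    return bad

# Example with hypothetical paths: verify_backup("D:/Books", "E:/")
# An empty list means every file on the disc matches the original.
```

Run once right after burning, this catches a bad copy while the source files are still on the machine.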

bubbles

Blu-ray backups
I’ve been reading about your backup issues, and was wondering what your opinion was of the M-Disc technology? I bought an LG Blu-Ray burner that apparently has the ability to use an M-Disc, and until recently had never heard of it. Blu-ray versions are apparently available in 25GB and 100GB sizes.
http://www.mdisc.com/

Tom Brosz

I put this question to my advisors and got:

    It’s a case of getting what you pay for. DoD testing did find the M-Disc had superior longevity, although it did not go so far as to support all of the company’s claims. The cost per disc is substantially higher, so use should be reserved for items that merit it, as opposed to stuff that need only last a couple of years before being replaced several times over by more recent archival backups.

https://en.wikipedia.org/wiki/M-DISC

Eric

Blu-ray backups

I wonder how easy it will be to find high-quality blank M-Disc media in the future?  Uptake seems pretty small so far, and without a big user base I don’t think media will be easy to find or cheap when located.
I was at Costco earlier today. They had a Seagate 5 (!) TB portable hard drive for $139.  USB 3 interface.   Almost bought one, but I’ve already got a couple of 2 TB WD Passports that I’m not really using to best advantage.
I was alive when 5 TB was more storage capacity than existed in the entire universe (as we know it).
RBM

It’s a niche market that the general public isn’t ever likely to know or care about. Blu-ray burning is still pricier than it would be if it had seen the same uptake as DVD before it but if you aren’t handling high quality video the need can be hard to find. A lot of businesses that would love to clear out their warehouse space full of old records can make good use of it but that market isn’t enough for every new PC to ship with a BD-R reader, never mind a burner. Perhaps as 4K recording becomes mainstream it will get a boost.

    If you have the need the product is there with a little looking.

    I suspect the premium on M-Disc includes a very healthy margin for the company. They know their market is always going to be limited but willing to pay for what the product delivers. So long as Milleniata remains in business the media should be available.

Eric

And that, I think, ought to do it. The Blu-ray burner wasn’t expensive, and with USB was extremely rapid. The disks are now safely stored away from fire, ransomware, computer crashes, and it was all quick. The RAID will be automatic, and I have merely to remember to make a disk copy at frequent intervals.

bubbles

The science is settled

Hello Jerry,

That the ‘science is settled’ is NOT confined to what is euphemistically known as ‘climate science’ but is now apparently the position of ALL science.  In particular, physics.

Here are two articles on the subject, both of which point out that nowadays, when observations of the behavior of the universe in action conflict with ‘settled science’, the universe is adjusted to fit theory.

Dark matter and dark energy are cases in point:  when large scale astronomical structures were observed to behave in ways not predicted by ‘settled science’, it was considered to be conclusive evidence that the universe was constructed largely (~95%) of ‘dark matter’ and ‘dark energy’ whose properties, quantities, and distribution could be deduced from the requirement that the universe conform to ‘settled science’.  In other words, since the theory was correct, the universe as observed wasn’t, so the universe was adjusted.

This article was precipitated by the reaction of the experts to the announcement of thrust from what are generically known as EmDrives, but includes references to dark matter and ‘cold fusion’:

http://www.digitaljournal.com/science/op-ed-emdrive-does-work-but-spectator-science-disagrees/article/441374#tab=comments&sc=0

I will be the first to admit that the existence of the ‘EmDrive’ effect is far from confirmed, but what the article is bemoaning is the immediate reaction of the experts:  the observations conflict with theory, therefore they are experimental error or deliberate hoax.  They may be right in this case, but is it necessary to trash the reputations of the apostates, personally and professionally (as they did with Pons and Fleischmann when they announced anomalous heat from their experiments) and as they are now doing with Dr. McCulloch with his MiHsC theory as he describes on his blog posting for 18 August:  http://physicsfromtheedge.blogspot.com ?

McCulloch claims (I certainly don’t have the ‘creds’ to either support or reject his theory) that his theory explains the observations from which the existence of dark matter/dark energy was confirmed (and quite a few other deviations of observations from theory) without requiring either.  The response by ‘settled science’ has not been to point out the error of his ways, but to make him a ‘physics non-person’ and to remove anything about his theory from common reference sources such as Wikipedia (ongoing) and arXiv. 

As Dr. McCulloch says:  “It is possible for a paradigm to survive not because it is more successful, but because it deletes the alternatives, and this is what an unscientific minority of dark matter supporters are doing.”

That is the common practice in ‘Climate Science’, by the way.  Note how over the last 5 years or so the reputation of Dr. Judith Curry has changed from the respected climate scientist who was the Chair of the School of Earth and Atmospheric Sciences at Georgia Tech, when she was enthusiastically on board with Catastrophic Global Warming driven by anthropogenic CO2 (ACO2) to now, when she has merely expressed doubts as to the certainty of the looming catastrophe, she is portrayed as an incompetent, anti-science shill of the Republicans and oil companies by her former comrades-in-arms.

The same goes for anyone with the temerity to engage in research into the existence of low energy nuclear reactions (generic cold fusion).  Even suggesting that research should be conducted in the field, never mind opining that it may be real, is a career killer for budding physicists.

I certainly can’t support OR reject LENR, EmDrives, or theories in conflict with general relativity using theoretical arguments, but as a layman I think that the ex cathedra rejection of experiments and the creation of an unobservable 95% of the universe because of conflict with EXISTING theory bodes ill for the advancement of science.

Bob Ludwick

I asked Stephanie to comment on this because I still have problems typing. I may also get comments from other physicist friends.

Okay, here’s the thing, Jerry. This is my personal opinion on the matter, as well as my attempt to explain; YMMV.

Climate science is, or should be, based on the physics and chemistry of the atmosphere. This is, in fact, the reason why physicists, astronomers, chemists, etc. often do NOT go along with the “consensus” on AGW, because it does not fit the physics/chemistry/astronomy of the situation as we know it. (Yes, I’m aware that the group of astronomers in Belgium is playing games with historical records of sunspot numbers, and as an astronomer I’m not best pleased by it. I see no scientific justification for doing so.)

But when we start looking at cosmology and such like, we are looking at fundamental physics on many levels. And that physics does have many levels, starting with Newtonian physics, then adding special relativity, general relativity, quantum mechanics, the various string theories, M theory, et cetera. So if you encounter something that appears to knock out one of those levels, you have to realize that it doesn’t JUST knock out that level, it knocks out pretty much all the levels above it. The lower the level, the more fundamental and earth-shaking the result. We’re talking, in some cases, about scrapping pretty much the whole of physics and starting from scratch, or nearly so. This is Not A Good Thing, in many ways, because we have used established physics in so many ways in our world. (Engineering is largely physics applied to the real world — imagine if we found, e.g., that quantum mechanical fluctuations could readily occur on a macro scale, and affected a particular structure commonly used in architecture, say. Would you ever feel safe in a high-rise again? In your own house??) Consequently there is a strong urge to try to make the current levels fit observations, rather than immediately going back and saying, “Oh, physics is wrong, drop back and punt.” But this is not a new thing; it is the way it has ALWAYS been.

Example: Epicycles. An attempt to make the previously-known science fit observations of planetary motion. And then Kepler came along and there was a hullaballoo for awhile, and then it was found that his model fit observations better, and so now we have Kepler’s Laws of Planetary Motion.

Example: Steady State Universe. There were no galaxies, there was only the universe, and it had always been just like it is. Then Edwin Hubble realized that the unusual spectra he was getting from those peculiar stars could be explained if they were regular spectra with extreme blueshifts, and he discovered that those peculiar stars are what we now call quasars, and they were far distant galaxies in their own right, speeding away from us at incredible velocities. And then astronomers began to realize that all those “spiral nebulae” and such were also galaxies, and they were also blueshifted, but at an amount corresponding to their distance. And lo, Hubble’s law was born.

I can go on and on like this for a very long time. It is the history, and the nature, of science done properly, according to the scientific method.

So.

The problem most experts have with the “Em Drive” is that it apparently violates principles that are in one of those lower levels of physics. It’s like a perpetual-motion machine — a PMM violates the Second Law of Thermodynamics, which is in the very foundation of physics; if a true PMM were ever constructed, we would have to throw out the whole of physics and start over. Something similar is happening with this “Em Drive,” in that it would nix a very fundamental brick in the foundation of physics, and to most experts and experienced scientists, it smacks strongly of “perpetual-motion machine.” Therefore they are either inclined to the notion that the whole concept is wrong, or that there is something about the setup that hasn’t been taken into account, which CAN be explained by physics as we know it.

(Also, feel a little sorry for those experts — you cannot imagine how many really way-out-there concepts, inventions, etc. they get, and have to deal with. I myself have reached a level of fed-up re: Moon Hoaxers that borders on knee-jerk.)

Stephanie Osborn

“The Interstellar Woman of Mystery”
http://www.Stephanie-Osborn.com

My own view is that if they keep having to adjust the data to fit the models, I don’t care how much consensus they have.

Extraordinary claims require extraordinary evidence. EM drive requires strong evidence that you can get thrust without reaction mass. They have not really shown that yet. Until they do – until they allow someone not connected with them to test it in a swing and demonstrate continuous thrust over time – the burden of proof is on them.
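For a device hung as a pendulum, steady thrust shows up as a steady sideways deflection, and the implied force follows from simple statics. A sketch with made-up numbers (the mass, arm length, and deflection below are illustrative, not from any actual test):

```python
import math

G = 9.80665  # standard gravity, m/s^2

def thrust_from_deflection(mass_kg, arm_m, deflection_m):
    """Steady horizontal force needed to hold a pendulum bob displaced
    sideways by deflection_m on an arm of length arm_m: F = m*g*tan(theta)."""
    theta = math.asin(deflection_m / arm_m)
    return mass_kg * G * math.tan(theta)

# Illustrative: a 10 kg test article on a 2 m arm, displaced 0.5 mm
print(thrust_from_deflection(10.0, 2.0, 0.0005))
```

With those numbers the implied thrust is about 0.025 N; the point is that a continuous deflection held for weeks, measured by an independent party, would be hard to fake with hidden mass ejection.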

Man-made climate change is in the same situation, except that there are no lab experiments; but their models do not predict the past, so why should we believe they will predict the future? We know it has been warmer (in Viking times, and probably in Roman times), and rather than explain that, they adjust the data. I don’t know how to measure the temperature of my city block to a tenth of a degree; when they can do that reliably I will believe they know the Earth’s temperature to that accuracy. When you can take the conditions of 1950 as input to a model, run it, and it gives today’s conditions, I will take the model seriously; but I cannot see any reason to spend billions of dollars on measures there is no real evidence to show we need. I like Los Angeles without smog. I don’t worry a lot about CO2 “pollution”. But then I don’t invest in green technology.

bubbles

Iran Deal Worsens

You will need to set down your beverage before reading this one:

<.>

Iran will be allowed to use its own inspectors to investigate a site it has been accused of using to develop nuclear arms, operating under a secret agreement with the U.N. agency that normally carries out such work, according to a document seen by The Associated Press.

</>

http://hosted.ap.org/dynamic/stories/I/IRAN_NUCLEAR?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT&CTIME=2015-08-19-13-06-05

◊ ◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

bubbles

Footfall and fusion rockets

Jerry,

Your recent discussion of an illustration of the Archangel Michael from Footfall has inspired me to reread the book. It has been years. Having since dallied with writing military hard sci-fi with as many rivets as possible, I invested a bit of time in working out some of the math relevant to fusion rockets. The results are somewhat sobering. The requirement depends on vehicle mass and acceleration, of course, but the type of ships that make interesting stories would require fusion rockets with a power output of petawatts (1e15 W) to exawatts (1e18 W). To put this in perspective, the insolation of the Earth and their habitable planets is on the order of 1e17 W. Most of that energy is hopefully the kinetic energy of the exhaust, but the Gaussian distribution of velocities in a high-temperature plasma, combined with collisions with dust and gases in the near vacuum of space, would transform much of that energy into heat. Most of that heat will be radiated as X-rays and UV, but some will be in the visible spectrum. If you gave any info on the mass of the alien ship in Footfall, I have not reached it yet; but I would imagine it is a multimillion-ton ship. The bottom line is that a big ship with a fusion rocket is going to light up the night sky. You will not have to discover it by comparing successive photographic images.

Of course dramatic license must be exercised to promote a good story. The size of the Motie light sail along with the power of the launching lasers and boost duration were all exaggerated to make a great story.

James Crawford
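Crawford’s petawatt figure is easy to sanity-check. For an ideal rocket the jet power is half the thrust times the exhaust velocity, and the thrust is just mass times acceleration. A minimal sketch, using my own assumed numbers (a million-ton ship, one gravity, and a fusion exhaust velocity of about 10,000 km/s), not anything from his letter:

```python
def drive_power(mass_kg, accel_ms2, v_exhaust_ms):
    """Jet power of an ideal rocket: P = 1/2 * F * v_e, with F = m * a."""
    thrust = mass_kg * accel_ms2
    return 0.5 * thrust * v_exhaust_ms

# Assumed figures: 1 million metric tons (1e9 kg), 1 g, 1e7 m/s exhaust.
P = drive_power(1e9, 9.8, 1e7)
print(f"{P:.1e} W")  # ~4.9e16 W, i.e. tens of petawatts
```

Push the mass to ten million tons or the exhaust velocity higher and you land in the high-petawatt to exawatt range the letter describes.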

Okay, for starters, his comment about “Earth and their habitable planets” doesn’t make sense. Right now we have one habitable planet in our system, and it IS Earth. His calculation of the solar irradiance intercepted by Earth is roughly correct if you simply assume a circular cross-section intercepting the solar flux — it is indeed about 100 petawatts, as an order of magnitude. (Actually it’s closer to 200, being around 175; but order of magnitude, sure. And yes, I did the calculation myself, and THEN found the value online that confirmed my calculation.)
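That figure is easy to reproduce: multiply the solar constant by Earth’s circular cross-section. A quick check, assuming the standard solar constant of about 1361 W/m² and a mean radius of 6371 km:

```python
import math

S = 1361.0   # solar constant at 1 AU, W/m^2
R = 6.371e6  # Earth mean radius, m

insolation = S * math.pi * R**2  # power intercepted by Earth's disk
print(f"{insolation:.2e} W")     # ~1.74e17 W, about 174 petawatts
```

That lands at roughly 174 PW, consistent with her “around 175.”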

Jim is in a better position to calculate the useful output of a fusion engine. At first glance I’m inclined to doubt that it requires that much power to move a spacecraft of reasonable size. Constant acceleration from a decent-output engine (or engines) is the key, as we all know already. So we can reduce the size of the powerplant/engine significantly.

Then, of course, we must define “reasonable size.” I’m thinking offhand that “multimillion ton” spacecraft are overkill in general. I suppose if you’re carrying a small spaceborne city as a generation ship, as in Footfall, it’s possibly reasonable. However, the bigger/more massive the craft, the greater the initial drain on the system building it, so I think that would have been a bigger limitation than the engines, actually. If a Space Shuttle impact can seriously damage the thing, then it ain’t no Borg cube for sure.

For comparison, ISS is about 500 tons currently and IMHO is not the most efficient design, given the multiple countries involved, with no governing body overseeing. The mass of one of the World Trade Center towers was about 495 million tons, and contained nearly 2 million square feet of office space. If we assume a habitable-area ceiling height of 8 feet, then we have a usable/habitable volume of over 15 million cubic feet. If this ain’t enough for a spaceborne city, there’s a problem — and we still haven’t reached a million tons. I’d have to say he needs to significantly scale down his notion of the size of the craft.

I would also think that a 100PW engine output (which I think I’ve already established is way overkill) is hardly going to “light up the night sky.” It might resemble a small planet moving through the sky — IF the “exhaust port” happened to be angled in the right direction for the observer to even see. If the bulk of the spacecraft is between the observer and the exhaust, arguably you would see nothing, or perhaps a smallish IR-emitting cloud.

On the other hand, if you have a solar slingshot trajectory, that should have been the point at which the astronomers detected the alien craft, based on my experience as an astronomer. Might not have been initially recognized as a spacecraft, but they’d almost certainly have known something was there, IMHO. If nothing else, they’d probably have seen the transit against the solar disk. But stuff happens, and we didn’t have things like the STEREO probes up yet, so for story purposes, hey, it works.

The previous discussion assumes he’s talking entirely about the alien craft. If we’re talking about the PO spacecraft, at least to some extent the answer is “we don’t know,” because we don’t really know how nukes behave in space, never having run the tests.

Stephanie Osborn

“The Interstellar Woman of Mystery”
http://www.Stephanie-Osborn.com

Jerry, I mistyped yesterday and only caught it when I was reading your blog today. The mass of the World Trade Center North Tower was 495 million POUNDS, not tons. And that’s of order a quarter of a million tons. Hence yes, we can say readily that a small spaceborne city does not have to weigh multi-million tons.

Sorry about that.
Stephanie Osborn

“The Interstellar Woman of Mystery”

I did enough due diligence for a novel written in the 80’s…

bubbles

Batgirl, RIP.

<http://www.nbcnews.com/pop-culture/tv/batgirl-tv-actress-yvonne-craig-dies-cancer-78-family-n412206>

—————————————

Roland Dobbins

bubbles


Freedom is not free. Free men are not equal. Equal men are not free.

bubbles

clip_image002

bubbles

Lippmann, Bezos, RAID 5, Microsoft Windows 10, and other distractions from fiction.

Chaos Manor View, Monday, August 17, 2015

I’m still in the throes of fiction, so this is a mixed bag of things you might want to pay attention to.

Amazon strikes.  Saturday I ordered a Blu-Ray burner and disks for making a full backup of everything important.  A Pioneer burner came today: a slim thing, USB 3 (2 works), ready to use – but the disks have not yet shipped, so I can’t try it out.  Why Amazon thinks it important to get the burner here without disks is a matter for speculation.  And in my case all the DVD disks are upstairs and thus not accessible to me anyway. Fie. Fie I say.  Of course I had no plans to do anything with the Blu-Ray today because I didn’t expect it to be here, so I have no cause to be angry with Amazon for getting it here before the disks – even ONE blank disk – but, well, Fie! Fie, I say.

bubbles

clip_image002

http://books.simonandschuster.com/Lord-of-Janissaries/Jerry-Pournelle/BAEN/9781476780795

bubbles

RAID 5 is not as safe as you think!
I have been caught out by this recently, and you should consider rebuilding your RAID as RAID 6 if your device supports it.
RAID 5 only allows a single disk to fail. If all your drives come from the same batch, it is possible that a second drive will fail (and the probability is proportional to the size of the disks) while your RAID is rebuilding itself after the first failure. (This is what happened to me – I survived only because of the excellent disk recovery toolset R-TOOLS and its amazing virtual RAID facility.)
If RAID 6 is not a possibility then you should – at least – find alternate sources for your drives and try to ensure that your disks do not all come from the same batch.
Good luck and best wishes,
Roy

True, but our new RAID 5 system is only part of my backup mania. It will replace something else, and it will be automatic; but critical items get copied to several places as they are made, and at periodic intervals are burned into DVD’s, soon to be replaced by a new Blu-ray burner, which can hold most everything on a disk that can be carried home by Niven. The RAID 5 will back up all my systems, invisibly, at low power costs, for about $600 for the system. And it’s something to write about.

A DVD or Blu-Ray burner does it all, cheaper, but less conveniently. I do make certain to burn copies of all works in progress. Everything else can, with effort, be replaced if lost.

Eric adds:

    All I can really say is that RAID 5 is a step up from relying on single disks on a networked PC. The NAS will not be the primary backup solution; it will be the center of storage enabling that solution, which is to burn BD-R discs. This can be done frequently at low cost. The purpose of the NAS is to simplify the storage situation and reduce the power consumption of keeping multiple PCs available on the network. (The use of SSDs for fast booting should also reduce the desirability of leaving machines on 24/7, as it becomes more convenient to wake a machine as needed, especially for a single user.) RAID in general is not a backup strategy. It can be PART of one in that it provides a central target to back up.

    The issue has been known for a while, although not given much consideration by NAS companies in their marketing:

http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

http://www.zdnet.com/article/sorry-about-your-broken-raid-5/

http://www.zdnet.com/article/has-raid5-stopped-working/

There remain questions of what the consumer should and can do. Without getting into a several-times-more-costly NAS box with several more bays, and dedicating those bays to multiple parity drives, how does one safely get a lot of storage in one place? Should one forgo the higher RAID levels and just be prepared to restore the whole thing if a drive fails? Considering the sacrifice in capacity for RAID 5 and higher (in this case 10.9 TB usable out of 16 TB raw), then factoring in the risk of high-capacity drives, it makes one wonder whether it would be easier to run an 8-bay NAS as a mirrored pair of striped arrays and have any failure and repair be a matter of copying rather than a long, arduous rebuild with its own risk of failure.
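For what it’s worth, the 10.9 TB usable out of 16 TB raw is consistent with an 8-bay array of 2 TB drives reporting capacity in binary tebibytes with two drives’ worth of redundancy; that reconciliation is my reading of the numbers, not a statement from the letter. A sketch of the arithmetic:

```python
def usable_tib(bays, drive_tb, redundant_drives):
    """Usable capacity in binary TiB for an array that gives up
    `redundant_drives` worth of space to parity or mirroring."""
    usable_bytes = (bays - redundant_drives) * drive_tb * 1e12
    return usable_bytes / 2**40  # decimal TB on the label, binary TiB reported

# 8-bay NAS populated with 2 TB drives (16 TB raw):
print(round(usable_tib(8, 2, 1), 1))  # RAID 5: 12.7 TiB
print(round(usable_tib(8, 2, 2), 1))  # RAID 6: 10.9 TiB
print(round(usable_tib(8, 2, 4), 1))  # mirrored stripes (RAID 10): 7.3 TiB
```

The RAID 10 row shows the capacity price of the mirrored-pair idea: half the raw space, but a failure means a straight copy instead of a parity rebuild.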

There are no easy answers. Would a set of 1 TB drives have been safer? Probably, as the number of reads required during the rebuild would be substantially lower. Avoid RAID altogether if performance isn’t at issue? We’re talking about a single user most of the time in this case. I recently got a Seagate 4 TB single-drive NAS for under $100. (This has been replaced in the product line by a newer model with improved features and performance, but it is fine for a single user needing an independent drive seen by multiple machines.) I suppose we could have gotten two of these, used one as backup to the other, and had about the same level of safety for an adequate amount of capacity; though on a device where the enclosure scarcely adds more than the price of the bare drive, failure of the enclosure electronics becomes a significant issue. I’ve recovered numerous working drives from failed USB enclosures. On the positive side, these small units can easily be snatched up and taken away in case of a disaster, such as a fire or an earthquake requiring evacuation.
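The hunch about smaller drives can be put in rough numbers. The ZDNet argument turns on the unrecoverable read error (URE) rate, commonly specced at one error per 1e14 bits for consumer drives; a RAID 5 rebuild must read every surviving drive end to end. A sketch under that assumed spec, treating errors as independent:

```python
URE = 1e-14  # unrecoverable read errors per bit, typical consumer drive spec

def rebuild_failure_prob(n_drives, drive_tb, ure=URE):
    """Chance of hitting at least one URE while reading all surviving
    drives end to end during a RAID 5 rebuild."""
    bits_read = (n_drives - 1) * drive_tb * 1e12 * 8
    return 1 - (1 - ure) ** bits_read

print(f"4 x 4 TB: {rebuild_failure_prob(4, 4):.0%}")  # ~62%
print(f"4 x 1 TB: {rebuild_failure_prob(4, 1):.0%}")  # ~21%
```

Same drive count, a quarter the capacity, roughly a third the rebuild risk. That is the sense in which big cheap drives are what breaks RAID 5.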

The second link above lists some measures that can help alleviate the risk. I’ll be investigating whether any of these are implemented in the equipment in question. 4K sectors are almost certainly used but I need to look into the others.

bubbles

You wrote:

<.>

Walter Lippmann once said that diplomacy was like writing checks; but the account they were written against was military power. He later added that he included industrial power in that.

</>

https://www.jerrypournelle.com/chaosmanor/heat-wave/

Lippmann was onto something, but he never asked the deeper question:

What are military power and industrial power instances of? These are instances of national power. I read RAND monographs on measuring national power when I learned threat analysis.

They taught me that we are part of an entity called a “society” and that another entity called “the state” extracts resources from that society and transmutes them into national power, which is ultimately military power. For all activities of state exist on a continuum of warfare running from diplomacy, through covert action and punitive military action, to war.

Everything a citizen does can be measured in terms of national power.

How well are people educated? Do they have access to formal and informal education? Can we exploit existing ethnic divisions among the people? All of these things come into consideration when measuring national power and finding ways to exploit a state’s weaknesses when planning covert or military action.

◊ ◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

Well, yes; but in my defense I wasn’t writing an essay on Mr. Lippmann, or on systems analysis, or on threat assessment. I have to look at the blasted keyboard as I type now, and writing is a bit more painful than it used to be. My point was that the size of the Army is one factor; it might have been important in trying to deter Hitler or Stalin before WW II, and it might not; deterrence is an event that takes place in the mind of the opponent (as is surprise) and his assessment of your will is probably more important than the absolute size of your army. An opponent might not be able to assess your potential on the proper time scale.

Hitler’s advisors had no idea of how quickly the United States could raise, equip, and train a huge military force. They even thought they had detected a fatal flaw in our mobilization capabilities (not in our plans, which were pretty laughable): the ability to make military optics. Based on their own experience they saw this as a major bottleneck; as it happens, we merely invented ways of building optics by new and much more rapid processes. Same with many other bottlenecks.

Sometimes intel finds weaknesses that aren’t really there. Surprise!

bubbles

http://www.siliconbeat.com/2015/08/17/quoted-jeff-bezos-disputes-article-about-amazons-ruthless-culture/

Jeff Bezos disputes article about Amazon’s ruthless culture (MN)

By Levi Sumagaysay / August 17, 2015 at 6:56 AM

“I strongly believe that anyone working in a company that really is like the one described in the NYT would be crazy to stay. I know I would leave such a company.”

Jeff Bezos, Amazon CEO, on the New York Times article over the weekend that painted a nightmarish portrait of his company’s work environment. In a memo to employees obtained by GeekWire, Bezos said the article “doesn’t describe the Amazon I know” and urged employees who see the kind of harsh practices it describes to tell HR, or him directly.

Money quote from the NYT article: “Nearly every person I worked with, I saw cry at their desk,” said Bo Olson, who worked in book marketing at Amazon for less than two years.

The article, based on interviews with more than 100 current and former employees of the retail behemoth, included gems such as: a peer review system in which employees gang up on other employees they see as poor performers; a woman who suffered a miscarriage pressured to go on a business trip the day after surgery; a woman who had breast cancer who was put on a “performance improvement plan.”

In his memo, Bezos told employees: “Hopefully, you don’t recognize the company described.”

The article is not the first unflattering account of Amazon’s demanding culture. The 2013 book “The Everything Store: Jeff Bezos and the Age of Amazon” described Bezos’ management style as sometimes brutal. (“If I hear that idea again, I’m gonna have to kill myself” is among the many putdowns Bezos is said to have uttered to employees.) And that’s just about treatment of the company’s white-collar workers. The experiences of Amazon’s warehouse workers have also gotten plenty of press over the years.

I don’t know any Amazon workers, but I cannot think that a big company of surly terrified workers could be much of a retail success.

bubbles

I used to doubt Microsoft. Then I installed Windows 10.

http://www.washingtonpost.com/news/innovations/wp/2015/08/17/i-used-to-doubt-microsoft-then-i-installed-windows-10/

By Vivek Wadhwa August 17 at 7:00 AM

I don’t know if I broke a law of computing or committed heresy.  But I installed Windows 10 on my MacBook Pro. I had feared that this would condemn me to purgatory in the gates of computing hell.  But it has been an incredibly positive experience: my favorite Microsoft Office applications — Outlook, Word, and PowerPoint — work faster than ever before, and I can still use Apple peripherals — a Thunderbolt Display and Thunderbolt external hard drives. The best part is Windows 10 itself: it is a beautifully designed operating system that gives me the best of the past and present — maintaining the usability and familiarity of the old Windows operating system, and letting me download slick apps designed for tablets.

Another Microsoft product that I had written off years ago is Microsoft’s Internet Explorer.  The jury is still out, but Microsoft’s new browser, Edge, seems faster than Google’s Chrome.  I may end up switching browsers as well.

I had thought I would never install a Microsoft operating system ever again after my experience with Windows 8.  It was terrible: inelegant, difficult and expensive. It took me about 10 minutes to conclude that Microsoft had lost touch with its customers and was destined to go the way of AOL and Myspace, and I switched all I could to Apple. 

But I still needed the Microsoft Office tools, because they are industrial strength and Apple still has no products that are as good.  To use these, I had to load Windows and Office under VMWare on my MacBook.  Instead of getting the best of both worlds, though, I got the worst: pathetically slow applications, poor battery life, and inconsistent user interfaces.

Then, last week, at an event hosted by CIO magazine, where I gave a keynote, I spoke to a group of Chief Information Officers of large and midsized companies about technology trends.  The vast majority said they were buying Microsoft’s Surface Pro tablets for their users and upgrading desktop machines to Windows 10. In this era of iPads and iPhones, why would any company install such antiquated and clumsy technology, I asked. I was surprised at the response. 

Several CIOs told me that I was out of touch with Microsoft’s new products.  They told me that Surface tablets integrated better with their enterprise-computing infrastructure than do iPads; have much-needed features such as USB 3.0 ports and keyboards; are more secure than iPads; and most importantly, provide a consistent user interface and experience to business users. The CIOs said that Microsoft is a much better company to deal with than Apple, which has become known for arrogance and a lack of concern for the needs of enterprise customers.

I realized that Microsoft is no longer the same “evil empire,” the monopoly, which everyone once hated. It has many loyal fans in the business world.

This didn’t jive with all the criticism that I have been reading in the press about the lack of security of Microsoft’s new operating system. The commonly raised concerns are about Windows 10’s continual uploads and downloads of data to Microsoft servers and the default installation options — which give Microsoft all sorts of rights.

I shared these serious criticisms of Windows 10 with Microsoft chief executive Satya Nadella, and asked him how Microsoft planned to address them.  In response, he said that the “core reasons for Windows 10 as a service is more assurance of continuous security updates, app compatibility and roaming of the user info across devices you use with transparency and control with the user. For any business customer there are tons of tools that provide all kinds of additional control.”  He assured me that Microsoft was in touch with customer needs; and all of the CIOs I spoke with agreed with his assessment.  They said that they had customized the Windows 10 installation for their needs and believed that the new method for distributing updates would provide better security.

This is what convinced me to give Microsoft another chance and take the plunge into Windows 10.

The default options for consumers in the Windows 10 installation are indeed problematic.  I would not suggest that anyone use its default installation settings. They grant Microsoft the right to use your data to market to you; to automatically connect you to Wi-Fi networks and marketing “beacons;” and to sell some of your information. But all of these options can be turned off. Microsoft is actually being more honest than other technology companies are that do much of this without informing customers and hide details in the lengthy contracts that no one reads.  Given that Microsoft is providing Windows 10 for free to the majority of its customers, this is a small inconvenience for people who really care about their privacy or don’t want to be marketed to.

What is clear is that Microsoft is back — in full force.  This is a good thing; Apple and Google desperately need the competition that Microsoft will once again provide.

For what it’s worth, my experiences with Windows 10 have been mostly positive. It does take patience, or did for me. I have not used Windows 10 on Apple equipment, but I will probably get a MacBook Air to replace the one that died of a swollen battery (long after the warranty) and we’ll see; I do these silly things so you don’t have to. This is, after all, the successor to the User’s Column.

It may be, though, that the Surface Pro will become my favorite machine. On the other hand, Apple hardware is elegant, and Thunderbolt is great technology; so we’ll see. I’m in the middle of fiction now so it will be a while; I’m in no hurry.

And I much agree that competition can only improve both Apple and Microsoft.

bubbles

Sandbox bypass in Android Google Admin console revealed

http://www.zdnet.com/article/sandbox-bypass-in-android-google-admin-console-revealed/

A researcher has unveiled the details of a vulnerability in the console after Google failed to patch the flaw. [UPDATED]

By Charlie Osborne for Zero Day | August 17, 2015 — 08:07 GMT (01:07 PDT) |

[Update 11.34GMT: Google statement added]

A security flaw that allows third-party applications to bypass sandbox restrictions in the Google Admin console has been disclosed.

In a post to Full Disclosure on Friday, Rob Miller, a senior security researcher at MWR Labs, says the flaw, found within Google’s Android Admin application, allows third-party apps to bypass sandbox restrictions and read arbitrary files through symbolic links.

If the console receives a URL through an IPC call from another application on the same device, the Android app loads this link in WebView. However, if an attacker used a file:// URL which pointed to a file they controlled, then it is possible to use symbolic links to bypass the Same Origin Policy and retrieve data out of the Google Admin sandbox.<snip>
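The mechanics are easier to see outside Android. This toy sketch (Python, and nothing to do with the actual Google Admin code) shows the generic trick: a path check sees a location inside the sandbox, but the symbolic link resolves to a file outside it:

```python
import os
import tempfile

root = tempfile.mkdtemp()
sandbox = os.path.join(root, "sandbox")
os.mkdir(sandbox)

# A file the sandboxed code is not supposed to reach.
secret = os.path.join(root, "secret.txt")
with open(secret, "w") as f:
    f.write("outside the sandbox")

# Attacker plants a symlink whose *name* lives inside the sandbox.
link = os.path.join(sandbox, "innocent.txt")
os.symlink(secret, link)

# A naive string check passes...
assert link.startswith(sandbox)
# ...but reading the link follows it out of the sandbox.
print(open(link).read())  # prints "outside the sandbox"

# Resolving symlinks before checking exposes the escape.
print(os.path.realpath(link).startswith(sandbox + os.sep))  # False
```

The fix pattern is the last line: canonicalize the path (realpath) before enforcing any path- or origin-based restriction.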

bubbles

Astronauts found something troubling in these shots from space

http://www.techinsider.io/astronaut-photos-light-polution-led-nasa-esa-2015-8

clip_image003

NASA photo: The International Space Station in orbit.

Astronauts aboard the International Space Station are snapping photos of Earth to measure light pollution, and they’ve found something surprising: Light-emitting diodes (LEDs) — which are touted for their energy-saving properties — are actually making light pollution worse. And the change is so intense that ISS crew members can see it from space.

To see it, take a look at these photos that astronauts snapped of the bustling city of Milan.<snip>

bubbles

Are Driverless Cars Safer Cars?    (journal)

Regulators likely to accept assisted driver technologies that emphasize protection 

By

Orr Hirschauge

Aug. 14, 2015 5:30 a.m. ET

JERUSALEM—Automotive executives touting self-driving cars as a way to make commutes more productive or relaxing may want to consider another potential marketing pitch: safety.

“If you want to create a car technology with mass adoption, it needs to be about safety,” says Amnon Shashua, chairman of Mobileye NV, a fast-growing supplier of assisted-driving technology. “Positioned as a comfort feature or as something that is cool to have, the autonomous car would not make it to the mass market.”

Jerusalem-based Mobileye develops machine-vision chips and software. According to Mr. Shashua, its chips by 2018 will be used on a car that takes over steering if the driver has a heart attack, falls asleep at the wheel or becomes otherwise incapacitated.

He declined to name the manufacturer or say how its vehicle would monitor the driver’s condition. Mr. Shashua said such monitoring could be done via a smart wristband or biometric sensors in the seat. <snip>

http://www.wsj.com/articles/are-driverless-cars-safer-cars-1439544601

Apple shows interest in driverless car test track (LA Times)

By DAINA BETH SOLOMON

Is Apple building a self-driving car? That’s the rumor, and Apple’s not saying.

The British newspaper the Guardian said Apple may sign up with GoMentum Station in Concord, northeast of San Francisco. The former naval base is now a testing ground for driverless cars, boasting 20 miles of roads and a military guard. Mercedes-Benz and Honda have already put the space to use, said the Guardian.

Apple declined to comment.<snip>

bubbles


Freedom is not free. Free men are not equal. Equal men are not free.

bubbles

clip_image005

bubbles