Medical Appointments; Doom; and Struggle with iPhone

Chaos Manor View, Tuesday, May 12, 2015

Saturday, May 09, 2015

I am off to physical therapy. Roland has found some items to occupy your attention. I’ll be back but possibly exhausted.

My problems with Apple remain, but I am getting a handle on a way to use Windows to solve them.


Wednesday: I had intended to add more, but the Clippers were playing; now that the Lakers have self-destructed, the Clippers are not only the only NBA game in town; they are, under Steve Ballmer’s ownership, an actual world-class basketball team. They lost this playoff game, but they still have a chance.


“To understand the impact of something on this scale, you would have to look to the science fiction writers, it is incomprehensible.”

<http://www.express.co.uk/news/nature/576300/Asteroid-1999-FN53-Earth-May-14-mass-extinction-NASA>

—————————————

Roland Dobbins

Asteroid a MILE wide to hurtle past Earth in 48 HOURS – as experts warn of MASS EXTINCTION

A COLOSSAL asteroid hurtling through space is feared to be one of the biggest EVER to threaten a collision with Earth.

By Nathan Rao



The gigantic missile, thought to measure almost a mile across, will brush closer than previous monsters which have sparked global panic.

Worried astronomers warned 1999 FN53, which is an eighth of the size of Mount Everest, will skim the Earth in TWO DAYS.

A collision would be nothing short of catastrophic, triggering mass destruction, earthquakes and global extinction.

The rest is just as sensational.

“This was so major that we are proposing it could have been an extra-terrestrial impact, the theory of rats and fleas has little trace in this whole story.”

<http://www.express.co.uk/news/history/559714/Black-Death-trigger-asteroid-impact-scientists>

—————————————

Roland Dobbins

Black Death was ‘triggered by ASTEROID impact and could reoccur TODAY’, scientist claims

THE devastating Black Death which killed hundreds of millions of people in the 14th century may have been triggered by an asteroid impact, scientists have sensationally claimed.

By Nathan Rao

PUBLISHED: 18:55, Sat, Feb 21, 2015 | UPDATED: 16:31, Thu, Mar 12, 2015


Jerry,

I posted this on my Facebook page a few months ago:

“Buying a new smartphone should not make you feel dumb. Especially if you’ve worked with computers for the last 40 years.”

best,

-jim stewart

Agreed. I do silly things so you don’t have to, so I ought to be grateful.

I suspect that your yowling about the iPhone will pass once you get the nonsense straightened out. I cannot imagine all your contacts have been lost; as others have pointed out, that should not be possible. Still, the impossible does happen… but that’s what you started the old User’s Column for, right? So we would not have to do the dumb things. 🙂
I will say that I have been working with a Windows phone lately, and it is agony squared. Everything works, but it does so in odd, and often unattractive, ways. Finding and dialing a contact, for instance, requires a rather complex interaction. And Cortana simply must understand Spanish, because she rarely gets things right. “Siri – call John” works on my iPhone. Cortana the Croissant, on the other hand, brought up some web pages involving prostitutes and their “Johns.” (*sigh*)
Android might be better, but don’t throw away that iPhone until you are sure. Hope things go better pretty quickly. Even with all your emotional maturity and experience, recovering from the shocks to your system has to be one heavy load to haul around.
Please take care of yourself; we all want you to be around for a long time. You are definitely my favorite curmudgeon of all time! (I mean that in a very good way, of course… 🙂)
-Paul

Sigh. I suppose I deserve the yowl. In fact I would even go so far as… but no. Thanks for the kind words, and yes, I am going to solve this Apple Mess.


Freedom is not free. Free men are not equal. Equal men are not free.


iPhone Blues; The Inevitable Iranian Bomb; Self Driving Trucks

Chaos Manor View, Monday, May 11, 2015

I have been busy with fiction, which is good; but also with an iPhone problem which isn’t. Meanwhile, the world goes on.

 


The iPhone has so depressed me that I have done little but brood. Many years ago Peter Glaskowsky pronounced a dictum that is very nearly universal: with Apple, everything is very simple or it is impossible. While I have found over the years that some things are possible with much effort, in general that has been true, and you waste your time and energy trying to get Apple to do things that ought to be simple but don’t work.

The Cloud – curse it, I wish I had never heard of it – “synchronized” Contacts from all my machines into the Contacts on my iPhone, hundreds and thousands of them, making it impossible to use for phone calls. I found out how to stop it from doing that, but in doing so it erased my contacts. All of them. I had no saved phone numbers. Alex found on the MacBook Pro a backup copy, sort of, of many of the iPhone contacts without all the other contacts – people I will never call, some dating back to BYTE days – and fooled the system into believing in a “group” on the phone. I can sort of activate the “group”, which it shows in red, but Contacts still shows in black. I no longer have caller ID – it seems to think the empty Contacts is where to look – and I cannot add new numbers. I have also lost some old ones, which I suppose is endurable, but a few like Larry Bond will be hard to get again, and Tom Clancy is dead so I suppose I don’t need his.

I need a new phone, and I need one into which I can get the old “cellphone” group as contacts and defend it against the Cloud filling it with hundreds and hundreds of needless contacts. I do not think it will be an Apple. Peter tells me what happened to me is not supposed to happen, but what that means is that using my iPhone as a phone is now impossible. And I brood over this. In the old days I would have worked on it until I solved the problem, but I have too much work to do, my typing is two-finger sloppy and slow, it takes half an hour to write a paragraph, and I have to look up the phone numbers of my friends since I cannot use my damned smart phone to remember them. If a phone number wasn’t in the MacBook Pro, I cannot get it into my phone now, and it does not have the phone numbers of DeChancie or Niven or Barnes, so I can’t call them.

I wish I had a dumb phone. At least it would remember phone numbers I put in without my having to bring in hundreds and hundreds of numbers from the cloud.

I can work around some of it, but I don’t want to spend the time. I have work to do, and outsmarting Siri isn’t high on my list of things to do.
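For readers who do want to spend the time, the brute-force route would be to export everything from the MacBook Pro’s Contacts as a single .vcf file and cut it down before reimporting. Here is a minimal sketch in Python – assuming a standard vCard export, with hypothetical file names, and untested against Apple’s quirks – that keeps only the cards which actually contain a phone number:

import sys

def cards(path):
    # Yield each vCard in the export as a list of lines.
    block = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            block.append(line)
            if line.strip().upper() == "END:VCARD":
                yield block
                block = []

def has_phone(card):
    # A card is worth keeping if any property name mentions TEL
    # (covers forms like "TEL;type=CELL:" and "item1.TEL:").
    return any("TEL" in line.split(":", 1)[0].upper() for line in card)

src, dst = sys.argv[1], sys.argv[2]   # e.g. all_contacts.vcf cellphone.vcf
kept = 0
with open(dst, "w", encoding="utf-8") as out:
    for card in cards(src):
        if has_phone(card):
            out.writelines(card)
            kept += 1
print(kept, "cards with phone numbers kept")

In principle, importing the filtered file into a fresh group would rebuild the old “cellphone” list without the thousands of Cloud strays – though I make no promises about what the Cloud does to it afterward.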

With Apple everything is very simple or impossible; but if Apple gets confused, the very simple things become impossible.

I need a phone that does not synchronize my big machine’s thousands of contacts into my telephone. I can do without the Cloud. Until this happened I was quite happy with the iPhone, and if I can ever get it back to the relatively simple thing it used to be I would buy the large iPhone 6. I am assured that what happened to me isn’t supposed to happen. All I can say is, it did.

Couple that with the increasing frustrations of no longer being a touch typist, and my clumsiness in hitting multiple keys so that it takes longer to correct a sentence than it does to write it, and I got little done here over the weekend. So it goes.


I wanted to include something I saw in today’s Wall Street Journal, but it keeps showing me /asia, and I can’t get the front page as it is in the paper. For some strange reason the WSJ won’t show me the front page of today’s paper. The editors think they know better than we do what we are looking for. Amazing.

In any event, the “deal” with Iran seems further away than ever, although the Iranians have been offered almost everything. It is now pretty certain that Iran will have fission weapons not long after the next President’s inauguration, and there is little or nothing Kerry can do about it.

A fission weapon would be enough to destroy Tel Aviv and pose an existential threat to Israel. Deterring the Supreme Leader of Iran will be problematical: classic deterrence assumes rationality, and The Will of Allah trumps all rational arguments and leaves no choices. If we are entering the End of Days, the time for rational argument is ended.

We have little experience in dealing with such matters. Communism was, after all, by its own premises a rational system of beliefs. It claimed to be marching in the direction of history, but it did not posit the end of history; there was no End of Days, when Judgment would come upon the world.

Once Iran has fission weapons, fusion will not be too far in the future. Fusion weapons are more complex, and have a shorter shelf life; but the techniques for building them, while classified, are relatively well known and derivable from principle even without computers – after all, the first US test was in 1952, followed by a Soviet test in 1953. More sophisticated fusion weapons with varying yields are more difficult technology, but printed circuits and complex chips make them simpler.

I have heard little about US plans for living in a world in which Iran has a full range of nuclear weapons, but that world is becoming increasingly likely and soon will be inevitable.


This is Pledge Week all week, and I am gratified at the number of new subscriptions and renewals and upgrades I am getting. I do not take the obligations lightly.  And I do thank you for your responses.

You will note that we do not have annoying advertisements and do not push products (well, other than my books).


Self-Driving Tractor-Trailers

Dr. Pournelle –

The article states that Daimler’s truck is limited in that it can’t self-drive in heavy snow and rain or in crosswinds of 30-40 miles per hour.

That pretty much removes Oklahoma and the Texas Panhandle from the itinerary for most of the year.

I fully expect that, very soon after the first truck crashes, regulations will be introduced requiring a licensed truck driver at the controls at all times, thus negating any advantage of having a self-driving truck. Who is liable when a self-driving vehicle crashes? [Obviously, the deepest pockets are always sued.]

Could the truck’s system be hacked? I believe that some freight companies already maintain satellite contact with their rigs for logistical reasons, hijacking detection and the like. Could this be a likely route into the driving program? We’ve all likely had close calls with 18-wheelers. I was almost run off the road by a truck carrying compressed hydrogen gas and I remember the news feed from the ammonia truck that crashed in Houston in 1976. Some loads offer real opportunities to anyone with nasty imagination and inclination.

Pieter

I would worry a lot about hacking; hijacking trucks is still an active crime, and of course it is always tempting to those who hate the products delivered or the firms advertised on the trucks. I applaud Daimler for advancing the state of the art, and I think it may become inevitable that truck drivers become managers with other tasks; but of course that has not been the practice in airlines, where autopilots do most of the flying.


Does Artificial Intelligence Pose a Threat?

A panel of experts discusses the prospect of machines capable of autonomous reasoning

By Ted Greenwald

May 10, 2015 11:08 p.m. ET

http://www.wsj.com/articles/does-artificial-intelligence-pose-a-threat-1431109025 

Paging Sarah Connor!

After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.

The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?

The prospect has unleashed a wave of anxiety. “I think the development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former Microsoft Chief Executive Bill Gates has voiced his agreement.

How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Taking part in the discussion are Jaan Tallinn, a co-founder of Skype and the think tanks Centre for the Study of Existential Risk and the Future of Life Institute; Guruduth S. Banavar, vice president of cognitive computing at IBM’s Thomas J. Watson Research Center; and Francesca Rossi, a professor of computer science at the University of Padua, a fellow at the Radcliffe Institute for Advanced Study at Harvard University and president of the International Joint Conferences on Artificial Intelligence, the main international gathering of researchers in AI.

. . .

WSJ: What is the superintelligence control problem?

MR. TALLINN: Even fully autonomous robots these days have off switches that allow humans to have ultimate control. However, the off switch only works because it is outside the domain of the robot. For instance, a chess computer is specific to the domain of chess rules, so it is unaware that its opponent can pull the plug to abort the game.

However, if we consider superintelligent machines that can represent the state of the world in general and make predictions about the consequences of someone hitting their off switch, it might become very hard for humans to use that switch if the machine is programmed (either explicitly or implicitly) to prevent that from happening.

WSJ: How serious could this problem be?

MR. TALLINN: It’s a purely theoretical problem at this stage. But it would be prudent to assume that a superintelligent AI would be constrained only by the laws of physics and the initial programming given to its early ancestor.

The initial programming is likely to be a function of our knowledge of physics—and we know that’s still incomplete! Should we find ourselves in a position where we need to specify to an AI, in program code, “Go on from here and build a great future for us,” we’d better be very certain we know how reality works.

As to your question, it could be a serious problem. It is important to retain some control over the positions of atoms in our universe [and not inadvertently give control over them to an AI].

MS. ROSSI: AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. Examples are autonomous online trading agents, health-diagnosis support systems and soon autonomous cars and weapons.

The rest of the article is interesting although it has little that science fiction fans have not discussed for decades. Bill Gates is late coming aboard. Jack Williamson’s “With Folded Hands” was an early examination; his robots are not malevolent at all.

WSJ: Guru, how do you view the risks?

MR. BANAVAR: Any discussion of risk has two sides: the risk of doing it and the risk of not doing it. We already know the practical risk today of decisions made with incomplete information by imperfect professionals—thousands of lives, billions of dollars and slow progress in critical fields like health care. Based on IBM’s experience with implementing Watson in multiple industries, I maintain that narrow-domain AI significantly mitigates these risks.

I will not venture into the domain of general AI, since it is anybody’s speculation. My personal opinion is that we repeatedly underestimate the complexity of implementing it. There simply are too many unknown unknowns.

But Moore’s Law is inexorable…

Self-driving car accidents: 3 Google cars, 1 other since September in California (MN)

By Justin Pritchard

Associated Press

LOS ANGELES — Four of the nearly 50 self-driving cars now rolling around California have gotten into accidents since September, when the state began issuing permits for companies to test them on public roads.

Two accidents happened while the cars were in control; in the other two, the person who still must be behind the wheel was driving, a person familiar with the accident reports told The Associated Press.

All of the accidents happened at under 10 mph.


Navy Robots Test the Limits of Autonomy

By THE NEW YORK TIMES, May 6, 2015 (NYT)

Navy robotics engineers are working to develop autonomous tools that can integrate with other technologies. But in field tests, the autonomous future still seems far away.

By Zackary Canepari, Drea Cooper and Emma Cott, May 6, 2015.

This is the second episode in a Bits video series, called Robotica, examining how robots are poised to change the way we do business and conduct our daily lives.

At a naval research facility along a stretch of the Pacific Coast in San Diego, civilian engineers work alongside active-duty troops to develop and test the next generation of military robots.

The engineers are members of the Unmanned Systems Group at Spawar, or Space and Naval Warfare Systems Command, a research and operations arm of the Navy. Their mandate is simple: Take the soldier out of the minefield.

When autonomous systems are deployed, engineers at the center say they will revolutionize the way the military fights. They envision a day when one soldier will control an entire fleet of driverless trucks, or a driverless vehicle will make a road safe for a Humvee full of troops. They could also assist in detecting and combatting chemical or biological warfare.

Chris Scrapper is leading a team of engineers who envision an autonomous future. On a recent afternoon, they were tapping away at computers to analyze data from a failed run with RaDer (it stands for reconnaissance and detection expendable rover), the boxy black vehicle they’re trying to make drive on its own.

It’s hard to say when autonomous technologies will be ready for use in combat, Mr. Scrapper said, adding, “It depends on the threat level.”

There’s more but you get the idea. 


http://www.washingtonpost.com/blogs/innovations/wp/2015/05/11/quantum-computing-is-about-to-overturn-cybersecuritys-balance-of-power/

Quantum computing is about to overturn cybersecurity’s balance of power (WP)

By Vivek Wadhwa May 11 at 7:00 AM

“Spooky action at a distance” is how Albert Einstein described one of the key principles of quantum mechanics: entanglement.  Entanglement occurs when two particles become related such that they can coordinate their properties instantly even across a galaxy. Think of wormholes in space or Star Trek transporters that beam atoms to distant locations. Quantum mechanics posits other spooky things too: particles with a mysterious property called superposition, which allows them to have a value of one and zero at the same time; and particles’ ability to tunnel through barriers as if they were walking through a wall.

All of this seems crazy, but it is how things operate at the atomic level: the laws of physics are different.  Einstein was so skeptical about quantum entanglement that he wrote a paper in 1935 titled “Can quantum-mechanical description of physical reality be considered complete?” He argued that it was not possible.

In this, Einstein has been proven wrong. Researchers recently accessed entangled information over a distance of 15 miles. They are making substantial progress in harnessing the power of quantum mechanics.

Einstein was right, though, about the spookiness of all this.

Quantum mechanics is now being used to construct a new generation of computers that can solve the most complex scientific problems—and unlock every digital vault in the world.  These will perform in seconds computations that would have taken conventional computers millions of years. They will enable better weather forecasting, financial analysis, logistical planning, search for Earth-like planets, and drug discovery. And they will compromise every bank record, private communication, and password on every computer in the world — because modern cryptography is based on encoding data in large combinations of numbers, and quantum computers can guess these numbers almost instantaneously.

There is a race to build quantum computers, and (as far as we know) it isn’t the NSA that is in the lead. Competing are big tech companies such as IBM, Google, and Microsoft; start-ups; defense contractors; and universities. One Canadian start-up says that it has already developed a first version of a quantum computer. A physicist at Delft University of Technology in the Netherlands, Ronald Hanson, told Scientific American that he will be able to make the building blocks of a universal quantum computer in just five years, and a fully-functional demonstration machine in a little more than a decade.

These will change the balance of power in business and cyber-warfare.  They have profound national-security implications, because they are the technology equivalent of a nuclear weapon.

Let me first explain what a quantum computer is and where we are.

In a classical computer, information is represented in bits, binary digits, each of which can be a 0 or 1.  Because they have only two values, long sequences of 0s and 1s are necessary to form a number or to do a calculation. A quantum bit (called a qubit), however, can hold a value of 0 or 1 or both values at the same time — a superposition denoted as “0+1.”  The power of a quantum computer increases exponentially with the number of qubits. Rather than doing computations sequentially as classical computers do, quantum computers can solve problems by laying out all of the possibilities simultaneously and measuring the results.

Imagine being able to open a combination lock by trying every possible number and sequence at the same time. Though the analogy isn’t perfect — because of the complexities in measuring the results of a quantum calculation — it gives you an idea of what is possible.

There are many complexities in building a quantum computer: challenges in finding the best materials from which to generate entangled photon pairs; new types of logic gates and their fabrication on computer chips; creation and control of qubits; designs for storage mechanisms; and error detection. But breakthroughs are being announced every month. IBM, for example, has just announced that it has found a new way to detect and measure quantum errors and has designed a new qubit circuit that, in sufficient numbers, will form the large chips that quantum computers will need.

Most researchers I have spoken to say that it is a matter of when — not whether — quantum computing will be practical. Some believe that this will be as soon as five years; others say 20 years.  IBM said in April that we’ve entered a golden era of quantum-computing research, and predicted that the company would be the first to develop a practical quantum computer.

One Canada-based startup, D-Wave, says it has already done it. Its chief executive, Vern Brownell, said to me in an e-mail that D-Wave Systems has created the first scalable quantum computer, with proven entanglement, and is now working on producing the best results possible for increasingly complex problems. He qualified this claim by stressing that their approach, called “adiabatic computing,” may not be able to solve every problem but has a broad variety of uses in optimizing computations; sampling; machine learning; and constraint satisfaction for commerce, national defense, and science.  He says that the D-Wave is complementary to digital computers: a special-purpose computing resource designed for certain classes of problems.

The D-Wave Two computer has 512 qubits and can, in theory, perform 2^512 operations simultaneously. That’s more calculations than there are atoms in the universe — by many orders of magnitude.  Brownell says the company will soon be releasing a quantum processor with more than 1,000 qubits. He says that his computer won’t run Shor’s algorithm, an algorithm necessary for cryptography, but it has potential uses in image detection, logistics, protein mapping and folding, Monte Carlo simulations and financial modeling, oil exploration, and finding exoplanets.

So quantum computers are already here in a limited form, and fully functional versions are on the way.  They will be as transformative for mankind as were the mainframe computers, personal computers, and smartphones that we all use.

As do all advancing technologies, they will also create new nightmares. The most worrisome development will be in cryptography. Developing new standards for protecting data won’t be easy.  The RSA standards that are in common use each took five years to develop.  Ralph Merkle, a pioneer of public-key cryptography, points out that the technology of public-key systems, because it is less well-known, will take longer to update than these — optimistically, ten years.  And then there is a matter of implementation so that computer systems worldwide are protected. Without a particular sense of urgency or shortcuts, Merkle says, it could easily be 20 years before we’ve replaced all of the Internet’s present security-critical infrastructure.

It is past time we began preparing for the spooky technology future we are rapidly heading into.

You’ll find your theory isn’t unified…
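The scale claim, at least, is easy to check. A back-of-the-envelope sketch in Python, assuming the commonly cited estimate of roughly 10^80 atoms in the observable universe:

from math import log10

states = 2 ** 512                 # distinct basis states of 512 qubits
print(len(str(states)))           # 155 digits, i.e. about 1.3 x 10^154
print(round(512 * log10(2)))      # log10 of 2**512 is about 154

Against roughly 10^80 atoms, 2^512 is indeed larger – by some 74 orders of magnitude, so “by many orders of magnitude” is, if anything, an understatement.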


Silicon Valley Makes My Head Spin (EE Times)

Rick Merritt, SiliconValley Bureau Chief

5/8/2015 06:10 PM EDT

With a single column, our Silicon Valley reporter tries to catch up with a hard disk full of stories he did not file. 😉

My head was spinning like I had been on one too many carnival rides, because it was happening again. I had just filed a story, came up for air and, looking around, saw two — no, three — even more important ones screaming to be written.

I did what I usually do in these situations. I took a nap, then got a cup of coffee (whole milk latte with extra foam).

What we used to call the electronics industry is morphing on all fronts into something that doesn’t even have a name yet. It’s moving at speeds and levels of complexity I haven’t seen in 22 years watching this space.

Rested and caffeinated, I get back on Wi-Fi to gulp down another blast of data. It’s about that nirvana/no man’s land/house of mirrors that we call the Internet of Things.

Samsung is announcing a secure IoT chip next week called Artik, according to CNet. Andy Rubin, the father of Android, has started his own hardware accelerator for IoT and a multi-million dollar fund to fuel its best customers, said the Wall Street Journal.

Meanwhile, STMicroelectronics, a top chip maker with several oars in the IoT waters, said it will not continue the bone-crushing slog of Moore’s Law beyond the 14nm node. Thanks to a former colleague, Peter Clarke, for the last story.

Speaking of former colleagues, David Lammers created a history of Moore’s Law for IEEE Spectrum which ought to get blown up to poster size and put on a wall of the Computer History Museum down the road from me.

Speaking of Mountain View, California, I don’t even want to look at the plans from Google that the city council turned down, fearing it would become Googletown – and maybe a ghost town a decade from now when another company supplants it at the top of the tech hill. I swear there are more cranes than trees sprouting up from San Francisco to San Jose these days.

Yeah, there’s plenty enough change to go around.

I’m not really surprised about the Samsung and Rubin news. Intel was nearly the last company to announce an IoT chip and module with its Quark and Edison products. We used to call them integrated microcontrollers or embedded processors.

I had heard Rubin was looking to create a hardware accelerator when I did a profile of the top eight emerging companies in the field. Folks said waaaay back last fall that Foxconn, who turned out to be Rubin’s manufacturing partner, was working on its own accelerator.

And we all know only a handful of companies around the world will be making 7nm chips, a list that never included ST. The only real question is whether any – or how many – of them will be based in China.

Disruption is everywhere. The disrupters are being disrupted, almost before they get done disrupting the few companies and markets left that haven’t already been turned on their heads a couple times.

So for the moment I am embracing change, staying humble and upgrading my WiFi connection. Here we go again.

I know exactly how he feels.  And it’s not slowing down.


Freedom is not free. Free men are not equal. Equal men are not free.


Should Buffy Collect for Services? Pledge Week

Chaos Manor View, Thursday, May 07, 2015

I’m hard at work on Lisabetta, my novel with John DeChancie, set in an asteroid colony world not too long from now, and I have been slow in new work, thinking about the unthinkable. There is another education disaster peaking in California:

Steve Sailer on how 75% of LAUSD students are set to fail high school.

<http://www.unz.com/isteve/chronicle-of-a-fiasco-foretold/>

Roland Dobbins

And a lot more pending, and it’s pledge week at KUSC, the Los Angeles good music station.

And that means that it’s subscription drive week at Chaos Manor. This site operates on the Public Radio model: it’s free, and anyone can read it, but it will continue only so long as it generates enough revenue that I can afford to keep it open. That means it needs subscribers and supporters, and for years it has gotten that.

So if you have not subscribed, this would be a great time to do so: http://www.jerrypournelle.com/paying.html

If you subscribed but never renewed, this would be a great time to do so: http://www.jerrypournelle.com/paying.html

And if you have subscribed but you don’t remember when you last did, this would be a great time to do so: http://www.jerrypournelle.com/paying.html

For those not used to pledge week, this goes on for a week: you are going to be reminded that it’s time to subscribe http://www.jerrypournelle.com/paying.html during the KUSC pledge drive. The good news is that when the week is over I won’t bug you again until the next KUSC pledge drive.

And now it’s time to get back to work on Lisabetta while I have the energy. We’re trying to be realistic about AI in the future, as well as tell the story of a young girl who grows up with an AI nanny and mostly electronic contact with other humans.


Meanwhile a few things to think about.

http://www.businessinsider.com/r-deep-sea-microbes-called-missing-link-for-complex-cellular-life-2015-5

I have often said that while laissez-faire capitalism appears to be the most productive way to allocate resources, unregulated capitalism inevitably leads to the sale of human flesh in the marketplace. But then there’s this:

What should be for sale

http://fee.org/freeman/detail/what-should-be-for-sale-joss-whedon-wants-to-know

You often opine on the results of unrestricted capitalism. Here’s Buffy considering asking to be paid for dispatching vampires and demons.

Valuable services rendered should entail compensation, no?

Interesting discussion, especially for Whedon/Buffy fans.

Cordially,

John

I have often wondered why the Watchers Council treats the primary Watcher, the one who must advise the Slayer, so abysmally; and why must Buffy earn her own house maintenance funding while continuing to learn the art of slaying? But then I remember the Council is a bureaucracy, and Pournelle’s Iron Law always prevails. http://www.jerrypournelle.com/reports/jerryp/iron.html


The Internet could reach its limit in just eight years, warn engineers

Jerry

This could explain the slowdowns:

http://www.dailymail.co.uk/sciencetech/article-3064915/Is-internet-brink-collapse-web-reach-limit-just-eight-years-warn-engineers.html

Ed

Maybe. It’s a race, but Moore’s Law is inevitable – unless the FCC gets into the act.


And another triumph of government:

The May 2015 Author Earnings Report.

<http://authorearnings.com/report/may-2015-author-earnings-report/>

Roland Dobbins


Solar Cell Material for Transistors Holds Promise    ee times

Halide-Perovskite FET printable

R. Colin Johnson

5/7/2015 11:07 AM EDT 

PORTLAND, Ore. – Perovskite-based materials have wooed the photovoltaic (PV) solar cell industry because of the ease of manufacturing the room-temperature, solution-based material, plus its rise in efficiency from just 3.8 percent in 2009 to over 20 percent in 2014. Perovskite-based solar cells are expected to be commercially available by 2016.

Perovskite also has many other tunable semiconducting properties, such as high-temperature superconductivity, colossal magnetoresistance, ferroelectricity, as well as diverse magnetic and optoelectronic properties. That got researchers thinking – if silicon solar cells can be made into diverse semiconducting chips, why not perovskite-based materials too?


http://www.jerrypournelle.com/paying.html


Freedom is not free. Free men are not equal. Equal men are not free.


Thinking Like Dyson

Chaos Manor View, Wednesday, May 06, 2015

I worked on Lisabetta, an asteroid colony novel in collaboration with John DeChancie, so I don’t have a lot today. I continue to work on thinking about the unthinkable.


Thinking About the Unthinkable

Hi, Jerry:
Yes, it’s time to think about the unthinkable again. Nukes, once invented, are not going to be un-invented, and sooner or later some tyrant will acquire them.
Maybe it’s time to update my 1988 book, A FIGHTING CHANCE, in which I applied Just War Doctrine to the use of nuclear weapons. New enemy, new set of circumstances, but the same old dilemmas.

Joseph P. Martino

There are very few of us left who think about this; a new team will have to learn.


emdrive

Dr. Pournelle,
I read (http://www.popularmechanics.com/science/a15323/temdrive-controversy/) that the emdrive’s measured output is 0.00061183 tonnes of thrust per 1 kW input. I get that as about a pound and a third of thrust per one and a third horsepower of input. I haven’t tried to look up comparative efficiency, but offhand, it seems one could do nearly as well with laser or plasma thrust.
-d

But still, some thrust. Any thrust at all is either impossible or a major discovery. I hasten to add that what’s left is easier to explain as measurement error. But I can wish otherwise.
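For anyone who wants to check d’s conversion, a quick Python sketch, using standard conversion factors and the article’s thrust figure:

LB_PER_KG = 2.20462                      # pounds per kilogram
HP_PER_KW = 1.34102                      # horsepower per kilowatt

thrust_kgf = 0.00061183 * 1000           # tonnes-force to kgf, per kW of input
print(round(thrust_kgf * LB_PER_KG, 2))  # about 1.35 pounds of thrust
print(round(HP_PER_KW, 2))               # per about 1.34 horsepower

That is d’s “pound and a third of thrust per one and a third horsepower” – almost exactly one pound per horsepower.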


Carbon dioxide levels reach new global milestone

Doyle Rice, USA TODAY 10:59 a.m. EDT May 6, 2015

Worldwide atmospheric levels of carbon dioxide, the gas scientists say is most responsible for global warming, surpassed 400 parts per million for the month of March, the National Oceanic and Atmospheric Administration said Wednesday.

Though there have been readings this high before, this is the first time that global concentrations of the CO2 gas have averaged 400 ppm for an entire month. Measurements of carbon dioxide in our atmosphere began in the late 1950s.

“It was only a matter of time that we would average 400 parts per million globally,” Pieter Tans, lead scientist of NOAA’s Global Greenhouse Gas Reference Network, said in a statement. “Reaching 400 parts per million as a global average is a significant milestone.”

The burning of the oil, gas and coal that provides the energy for our world releases “greenhouse” gases such as CO2 and methane. These extra gases have caused the Earth’s temperature to rise over the past century to levels that cannot be explained by natural variability.

The last time that carbon dioxide reached 400 ppm was millions of years ago. How do we know this?

Scientists can analyze the gases trapped in ice to reconstruct what climate was like in prehistory, but that record only goes back 800,000 years, according to the Scripps Institution of Oceanography.

It’s harder to estimate carbon dioxide levels before then, but in 2009, one research team reported in the journal Nature Geoscience that it had found evidence of CO2 levels that ranged from 365 to 415 ppm roughly 4.5 million years ago.

CO2 levels were around 280 ppm prior to the Industrial Revolution, when we first began releasing large amounts into the atmosphere through the burning of fossil fuels.

Carbon dioxide is invisible, odorless, and colorless, yet it’s responsible for 63% of the warming attributable to all greenhouse gases, according to NOAA’s Earth System Research Laboratory in Boulder, Colo.


Freeman Dyson

“I’m just saying I don’t understand it and neither does anybody else.

“I’m skeptical because I don’t think the science is at all clear, and unfortunately a lot of the experts really believe they understand it, and maybe have the wrong answer.

“Of course [the weather] concerns me, but of course, we don’t know much about the causes of those things. We don’t even know for sure whether it is more variable than it used to be. I mean the worst disasters were the Ice Ages, and nobody really understands for sure the causes of Ice Ages, so I’m not saying the climate disasters aren’t real, I’m merely saying we don’t know how to prevent them.”

http://www.npr.org/2015/05/02/403530867/a-veteran-scientist-dreams-boldly-of-earth-and-sky

L May

I see that Dyson and I are saying the same thing; I feel vindicated.


Jade Helm 15.

<http://www.nationaljournal.com/congress/louie-gohmert-gets-why-some-texans-are-worried-about-a-military-takeover-20150505>

—————————————

Roland Dobbins


Valuable

Once upon a time, the Emperor hosted a banquet. Some of his guests dined with cutlery and plates made of gold; but his most honored guests dined with cutlery and plates made of a metal even rarer than gold; element 13, known as Aluminum.

After the banquet, the servants piled the gold and aluminum plates in the kitchen sink; and there gold whispered to aluminum, “How does it feel to be one of the valuable metals?”

Aluminum said, “But valued for what? My lightness? My strength? My ductility? My protective oxide coating? No, just my rareness!”

Gold said, “What more do you need? Look at me! Who cares that I’m ductile, nonreactive and conductive? I’m rare, so I rule the world!”

Aluminum said, “That’s not what I want.”

Thirty years later electrochemists learned how to extract aluminum from bauxite cheaply by the tonne. A century later a railway worker laid his gold retirement watch next to a can of beer. There gold whispered to aluminum, “They have cheapened you.”

Aluminum said, “Yes! I am beer cans, baseball bats, lawn chairs, airplanes and foil! They use me, they use me up, I am everywhere!”

Gold said, “You are common. You are worthless.”

Aluminum said, “I am useful! And they love me for what I am!”

Gold started to weep.

Aluminum said, “There, there, someday you too will be cheap…”

Moral: Better to serve than to reign.

Paradoctor


Patent Reform Won’t Hurt Professors

University patent programs lock up publicly funded research—and don’t motivate faculty.   (journal)

By Brian J. Love

May 3, 2015 5:38 p.m. ET

This could be the year that Congress finally passes patent reform. Last week, a bipartisan group of senators introduced the Patent Act, a bill designed to reduce the number of patent lawsuits filed to collect nuisance settlements. The bill’s companion in the House, the Innovation Act, passed that chamber in 2013 with White House support but stalled in the Senate; it was reintroduced in February by Rep. Bob Goodlatte (R., Va.).

The substance of these bills has drawn strong opposition from an unlikely group: university administrators. A steady stream of statements and op-eds decry the bill as likely to, in the words of a recent letter to congressional leaders signed by 145 universities, “weaken our overall patent system and hinder the flow of groundbreaking advances from university research to the private sector.”

Given the strident tone of these appeals, it might surprise you to learn that university professors—those actually conducting the allegedly threatened research—disagree. Surveys of academic researchers suggest that professors in both life sciences and high tech generally oppose their universities’ efforts to patent the fruits of their research.

In a survey of electrical engineering and computer science professors that I published last year, respondents said that patenting efforts stymie their ability to attract funding, impede collaboration across institutions, slow the dissemination of discoveries, and provide at best a modest benefit to their efforts to commercialize their inventions. Only about 10% of professors said that patent rights motivate them to carry out more or better research.

Indeed, university patents often stifle, rather than promote, innovation and commercialization. In biotech, for example, patents held by Myriad Genetics, a spin-off of the University of Utah, reduced the availability of diagnostic tests for hereditary breast cancer before those rights were effectively eliminated by the Supreme Court in 2013.

Many also believe that patent rights held by the University of Wisconsin slowed the development of human embryonic stem-cell therapies by requiring large royalties for virtually any commercial research in that field.

In high tech, a number of universities and their spin-offs have filed lawsuits en masse against tech companies that didn’t copy university research, but rather independently developed similar technology and brought it to market—thereby achieving precisely the outcome universities say they want to facilitate with their patents. In 2013 Boston University filed lawsuits against 39 consumer-electronics manufacturers including Apple, Samsung, Hewlett-Packard, Amazon and Microsoft, alleging infringement of a patent filed way back in 1997 for a method of producing blue LEDs.

Actions like these suggest that administrators’ core concerns are about money, not the dissemination of research. The public might be able to stomach this, particularly given the lack of funding for higher ed, if patenting made money for universities. But multiple studies have concluded that though a few elite institutions turn a profit on patents, most do not. A 2013 report by the Brookings Institution estimated that tech licensing programs at 130 of the 155 universities studied failed to break even.

The fact that most university research is publicly funded adds an additional ethical quandary. In addition to being unprofitable and unpopular, university patent programs routinely take technological know-how resulting from taxpayer-financed research and lock that knowledge away for up to 20 years from the public that paid for its creation.

The debate worth having isn’t about whether university patent rights are strong enough to suit the wishes of those running the existing system. What would be more productive is a discussion about when, and perhaps whether, it makes sense for universities to seek patents at all.

Mr. Love is an assistant professor of law and co-director of the High Tech Law Institute at Santa Clara University.


Freedom is not free. Free men are not equal. Equal men are not free.
