Chaos Manor View, Monday, May 11, 2015
I have been busy with fiction, which is good; but also with an iPhone problem which isn’t. Meanwhile, the world goes on.
The iPhone has so depressed me that I have done little but brood. Many years ago Peter Glaskowsky pronounced a dictum that is very nearly universal: with Apple, everything is either very simple or it is impossible. While I have found over the years that some things are possible with much effort, in general that has been true, and you waste your time and energy trying to get Apple to do things that ought to be simple but don’t work.
The Cloud – curse it, I wish I had never heard of it – “synchronized” Contacts from all my machines into the Contacts on my iPhone, hundreds and thousands of them, making it impossible to use for phone calls. I found out how to stop it from doing that, but in doing so it erased my contacts. All of them. I had no saved phone numbers. Alex found on the MacBook Pro a backup copy, sort of, of many of the iPhone contacts without all the other contacts – people I will never call, some dating back to BYTE days – and fooled the system into believing in a “group” on the phone. I can sort of activate the “group,” which it shows in red, but Contacts still shows in black. I no longer have caller ID – it seems to think the empty Contacts is where to look – and I cannot add new numbers. I have also lost some old ones, which I suppose is endurable, but a few like Larry Bond will be hard to get again, and Tom Clancy is dead so I suppose I don’t need his.
I need a new phone: one into which I can get the old “cellphone” group as contacts and defend against the Cloud filling it with hundreds and hundreds of needless contacts. I do not think it will be an Apple. Peter tells me what happened to me is not supposed to happen, but what that means is that using my iPhone as a phone is now impossible. And I brood over this. In the old days I would have worked on it until I solved the problem, but I have too much work to do, my typing is two-finger sloppy and slow, it takes half an hour to write a paragraph, and I have to look up the phone numbers of my friends since I cannot use my damned smart phone to remember them. If a phone number wasn’t in the MacBook Pro, I cannot get it into my phone now, and it does not have the phone numbers of DeChancie or Niven or Barnes, so I can’t call them.
I wish I had a dumb phone. At least it would remember phone numbers I put in without my having to bring in hundreds and hundreds of numbers from the cloud.
I can work around some of it, but I don’t want to spend the time. I have work to do, and outsmarting Siri isn’t high on my list of things to do.
With Apple everything is very simple or impossible; but if Apple gets confused, the very simple things become impossible.
I need a phone that does not synchronize my big machine’s thousands of contacts into my telephone. I can do without the Cloud. Until this happened I was quite happy with the iPhone, and if I can ever get it back to the relatively simple thing it used to be I would buy the large iPhone 6. I am assured that what happened to me isn’t supposed to happen. All I can say is, it did.
Coupling that with the increasing frustrations of no longer being a touch typist and my clumsiness in hitting multiple keys so that it takes longer to correct a sentence than it does to write it, I got little done here over the weekend. So it goes.
I wanted to include something I saw in today’s Wall Street Journal, but it keeps showing me /asia, and I can’t get the front page as it is on the paper. For some strange reason the WSJ won’t show me the front page of today’s paper. The editors think they know better than we do what we are looking for. Amazing.
In any event, the “deal” with Iran seems further away than ever, although the Iranians have been offered almost everything. It is now pretty certain that Iran will have fission weapons not long after the next President’s inauguration, and there is little or nothing Kerry can do about it.
A fission weapon would be enough to destroy Tel Aviv and pose an existential threat to Israel. Deterring the Supreme Leader of Iran will be problematical: classic deterrence assumes rationality, and The Will of Allah trumps all rational arguments and leaves no choices. If we are entering the End of Days, the time for rational argument is ended.
We have little experience in dealing with such matters. Communism was, after all, by its own premises a rational system of beliefs. It claimed to be marching in the direction of history, but it did not posit the end of history; there was no End of Days, when Judgment would come upon the world.
Once Iran has fission weapons, fusion will not be too far in the future. Fusion weapons are more complex, and have a shorter shelf life; but the techniques for building them, while classified, are relatively well known and derivable from principle even without computers – after all, the first US test was in 1952, followed by a Soviet test in 1953. More sophisticated fusion weapons with varying yields are more difficult technology, but printed circuits and complex chips make them simpler.
I have heard little about US plans for living in a world in which Iran has a full range of nuclear weapons, but that world is becoming increasingly likely and soon will be inevitable.
This is Pledge Week all week, and I am gratified at the number of new subscriptions and renewals and upgrades I am getting. I do not take the obligations lightly. And I do thank you for your responses.
You do note that we do not have annoying advertisements and push products (well, other than my books).
Self-Driving Tractor-Trailers
Dr. Pournelle –
The article states that Daimler’s truck is limited in that it can’t self-drive in heavy snow and rain or in crosswinds of 30-40 miles per hour.
That pretty much removes Oklahoma and the Texas Panhandle from the itinerary for most of the year.
I fully expect that, very soon after the first truck crashes, regulations will be introduced requiring a licensed truck driver at the controls at all times, thus negating any advantage of having a self-driving truck. Who is liable when a self-driving vehicle crashes? [Obviously, the deepest pockets are always sued.]
Could the truck’s system be hacked? I believe that some freight companies already maintain satellite contact with their rigs for logistical reasons, hijacking detection and the like. Could this be a likely route into the driving program? We’ve all likely had close calls with 18-wheelers. I was almost run off the road by a truck carrying compressed hydrogen gas and I remember the news feed from the ammonia truck that crashed in Houston in 1976. Some loads offer real opportunities to anyone with nasty imagination and inclination.
Pieter
I would worry a lot about hacking; hijacking trucks is still an active crime, and of course trucks are always tempting targets for those who hate the products they deliver or the firms advertised on them. I applaud Daimler for advancing the state of the art, and I think it may become inevitable that truck drivers become managers with other tasks; but of course that has not been the practice in airlines, where autopilots do most of the flying.
Does Artificial Intelligence Pose a Threat?
A panel of experts discusses the prospect of machines capable of autonomous reasoning
By Ted Greenwald
May 10, 2015 11:08 p.m. ET
http://www.wsj.com/articles/does-artificial-intelligence-pose-a-threat-1431109025
Paging Sarah Connor!
After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.
The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?
The prospect has unleashed a wave of anxiety. “I think the development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former Microsoft Chief Executive Bill Gates has voiced his agreement.
How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Taking part in the discussion are Jaan Tallinn, a co-founder of Skype and the think tanks Centre for the Study of Existential Risk and the Future of Life Institute; Guruduth S. Banavar, vice president of cognitive computing at IBM’s Thomas J. Watson Research Center; and Francesca Rossi, a professor of computer science at the University of Padua, a fellow at the Radcliffe Institute for Advanced Study at Harvard University and president of the International Joint Conferences on Artificial Intelligence, the main international gathering of researchers in AI.
. . .
WSJ: What is the superintelligence control problem?
MR. TALLINN: Even fully autonomous robots these days have off switches that allow humans to have ultimate control. However, the off switch only works because it is outside the domain of the robot. For instance, a chess computer is specific to the domain of chess rules, so it is unaware that its opponent can pull the plug to abort the game.
However, if we consider superintelligent machines that can represent the state of the world in general and make predictions about the consequences of someone hitting their off switch, it might become very hard for humans to use that switch if the machine is programmed (either explicitly or implicitly) to prevent that from happening.
WSJ: How serious could this problem be?
MR. TALLINN: It’s a purely theoretical problem at this stage. But it would be prudent to assume that a superintelligent AI would be constrained only by the laws of physics and the initial programming given to its early ancestor.
The initial programming is likely to be a function of our knowledge of physics—and we know that’s still incomplete! Should we find ourselves in a position where we need to specify to an AI, in program code, “Go on from here and build a great future for us,” we’d better be very certain we know how reality works.
As to your question, it could be a serious problem. It is important to retain some control over the positions of atoms in our universe [and not inadvertently give control over them to an AI].
MS. ROSSI: AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. Examples are autonomous online trading agents, health-diagnosis support systems and soon autonomous cars and weapons.
The rest of the article is interesting although it has little that science fiction fans have not discussed for decades. Bill Gates is late coming aboard. Jack Williamson’s “With Folded Hands” was an early examination; his robots are not malevolent at all.
WSJ: Guru, how do you view the risks?
MR. BANAVAR: Any discussion of risk has two sides: the risk of doing it and the risk of not doing it. We already know the practical risk today of decisions made with incomplete information by imperfect professionals—thousands of lives, billions of dollars and slow progress in critical fields like health care. Based on IBM’s experience with implementing Watson in multiple industries, I maintain that narrow-domain AI significantly mitigates these risks.
I will not venture into the domain of general AI, since it is anybody’s speculation. My personal opinion is that we repeatedly underestimate the complexity of implementing it. There simply are too many unknown unknowns.
But Moore’s Law is inexorable…
Self-driving car accidents: 3 Google cars, 1 other since September in California (MN)
By Justin Pritchard
Associated Press
LOS ANGELES — Four of the nearly 50 self-driving cars now rolling around California have gotten into accidents since September, when the state began issuing permits for companies to test them on public roads.
Two accidents happened while the cars were in control; in the other two, the person who still must be behind the wheel was driving, a person familiar with the accident reports told The Associated Press.
All of the accidents happened at under 10 mph.
Navy Robots Test the Limits of Autonomy
By The New York Times, May 6, 2015 (NYT)
Navy robotics engineers are working to develop autonomous tools that can integrate with other technologies. But in field tests, the autonomous future still seems far away.
Video by Zackary Canepari, Drea Cooper and Emma Cott, May 6, 2015.
This is the second episode in a Bits video series, called Robotica, examining how robots are poised to change the way we do business and conduct our daily lives.
At a naval research facility along a stretch of the Pacific Coast in San Diego, civilian engineers work alongside active-duty troops to develop and test the next generation of military robots.
The engineers are members of the Unmanned Systems Group at Spawar, or Space and Naval Warfare Systems Command, a research and operations arm of the Navy. Their mandate is simple: Take the soldier out of the minefield.
When autonomous systems are deployed, engineers at the center say they will revolutionize the way the military fights. They envision a day when one soldier will control an entire fleet of driverless trucks, or a driverless vehicle will make a road safe for a Humvee full of troops. They could also assist in detecting and combatting chemical or biological warfare.
Chris Scrapper is leading a team of engineers who envision an autonomous future. On a recent afternoon, they were tapping away at computers to analyze data from a failed run with RaDer (it stands for reconnaissance and detection expendable rover), the boxy black vehicle they’re trying to make drive on its own.
It’s hard to say when autonomous technologies will be ready for use in combat, Mr. Scrapper said, adding, “It depends on the threat level.”
There’s more but you get the idea.
Quantum computing is about to overturn cybersecurity’s balance of power (WP)
By Vivek Wadhwa May 11 at 7:00 AM
“Spooky action at a distance” is how Albert Einstein described one of the key principles of quantum mechanics: entanglement. Entanglement occurs when two particles become related such that they can coordinate their properties instantly even across a galaxy. Think of wormholes in space or Star Trek transporters that beam atoms to distant locations. Quantum mechanics posits other spooky things too: particles with a mysterious property called superposition, which allows them to have a value of one and zero at the same time; and particles’ ability to tunnel through barriers as if they were walking through a wall.
All of this seems crazy, but it is how things operate at the atomic level: the laws of physics are different. Einstein was so skeptical about quantum entanglement that he wrote a paper in 1935 titled “Can quantum-mechanical description of physical reality be considered complete?” He argued that it was not possible.
In this, Einstein has been proven wrong. Researchers recently accessed entangled information over a distance of 15 miles. They are making substantial progress in harnessing the power of quantum mechanics.
Einstein was right, though, about the spookiness of all this.
Quantum mechanics is now being used to construct a new generation of computers that can solve the most complex scientific problems—and unlock every digital vault in the world. These will perform in seconds computations that would have taken conventional computers millions of years. They will enable better weather forecasting, financial analysis, logistical planning, search for Earth-like planets, and drug discovery. And they will compromise every bank record, private communication, and password on every computer in the world — because modern cryptography is based on encoding data in large combinations of numbers, and quantum computers can guess these numbers almost instantaneously.
There is a race to build quantum computers, and (as far as we know) it isn’t the NSA that is in the lead. Competing are big tech companies such as IBM, Google, and Microsoft; start-ups; defense contractors; and universities. One Canadian start-up says that it has already developed a first version of a quantum computer. A physicist at Delft University of Technology in the Netherlands, Ronald Hanson, told Scientific American that he will be able to make the building blocks of a universal quantum computer in just five years, and a fully-functional demonstration machine in a little more than a decade.
These will change the balance of power in business and cyber-warfare. They have profound national-security implications, because they are the technology equivalent of a nuclear weapon.
Let me first explain what a quantum computer is and where we are.
In a classical computer, information is represented in bits, binary digits, each of which can be a 0 or a 1. Because they have only two values, long sequences of 0s and 1s are necessary to form a number or to do a calculation. A quantum bit (called a qubit), however, can hold a value of 0 or 1 or both values at the same time — a superposition denoted as “0+1.” The power of a quantum computer increases exponentially with the number of qubits. Rather than doing computations sequentially as classical computers do, quantum computers can solve problems by laying out all of the possibilities simultaneously and measuring the results.
Imagine being able to open a combination lock by trying every possible number and sequence at the same time. Though the analogy isn’t perfect — because of the complexities in measuring the results of a quantum calculation — it gives you an idea of what is possible.
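The exponential growth mentioned above can be made concrete with a short sketch: the state of n qubits is described by 2**n complex amplitudes, so even writing the state down classically doubles in cost with every qubit added. This is only an illustrative NumPy simulation of the state vector, not how a real quantum computer is programmed.

```python
import numpy as np

def n_qubit_state(n):
    """Return the state vector for n qubits, each prepared in the equal
    superposition (|0> + |1>)/sqrt(2). The vector has 2**n amplitudes."""
    one_qubit = np.array([1.0, 1.0]) / np.sqrt(2)
    state = one_qubit
    for _ in range(n - 1):
        # The tensor (Kronecker) product doubles the vector's length
        # for every qubit added -- the source of exponential growth.
        state = np.kron(state, one_qubit)
    return state

for n in (1, 4, 10, 20):
    print(n, "qubits ->", n_qubit_state(n).size, "amplitudes")
```

At 20 qubits the vector already holds over a million amplitudes; at 512 (the D-Wave figure quoted below in the article) no classical memory could store it, which is the sense in which a quantum machine "lays out all of the possibilities simultaneously."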
There are many complexities in building a quantum computer: challenges in finding the best materials from which to generate entangled photon pairs; new types of logic gates and their fabrication on computer chips; creation and control of qubits; designs for storage mechanisms; and error detection. But breakthroughs are being announced every month. IBM, for example, has just announced that it has found a new way to detect and measure quantum errors and has designed a new qubit circuit that, in sufficient numbers, will form the large chips that quantum computers will need.
Most researchers I have spoken to say that it is a matter of when — not whether — quantum computing will be practical. Some believe that this will be as soon as five years; others say 20 years. IBM said in April that we’ve entered a golden era of quantum-computing research, and predicted that the company would be the first to develop a practical quantum computer.
One Canada-based startup, D-Wave, says it has already done it. Its chief executive, Vern Brownell, said to me in an e-mail that D-Wave Systems has created the first scalable quantum computer, with proven entanglement, and is now working on producing the best results possible for increasingly complex problems. He qualified this claim by stressing that their approach, called “adiabatic computing,” may not be able to solve every problem but has a broad variety of uses in optimizing computations; sampling; machine learning; and constraint satisfaction for commerce, national defense, and science. He says that the D-Wave is complementary to digital computers: a special-purpose computing resource designed for certain classes of problems.
The D-Wave Two computer has 512 qubits and can, in theory, perform 2^512 operations simultaneously. That’s more calculations than there are atoms in the universe — by many orders of magnitude. Brownell says the company will soon be releasing a quantum processor with more than 1,000 qubits. He says that his computer won’t run Shor’s algorithm, an algorithm necessary for cryptography, but it has potential uses in image detection, logistics, protein mapping and folding, Monte Carlo simulations and financial modeling, oil exploration, and finding exoplanets.
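The cryptographic stakes the article raises come down to one asymmetry: multiplying two primes is easy, but recovering them from the product is classically slow, and RSA rests on that gap. Shor’s algorithm (which, as noted, the D-Wave cannot run) would close it. A toy sketch, using a deliberately tiny modulus; real RSA moduli are 2048 bits or more:

```python
def trial_factor(n):
    """Classical trial division: return the smallest prime factor of n.
    Runtime grows with sqrt(n), i.e. exponentially in the bit length of n,
    which is why large RSA moduli are classically safe -- and why Shor's
    algorithm, polynomial in the bit length, would break them."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# A toy "RSA modulus": the product of two small secret primes.
n = 101 * 103  # 10403
p = trial_factor(n)
print(p, n // p)  # recovers the factors 101 and 103
```

For a 6-digit number this finishes instantly; double the number of digits and the work grows a thousandfold, which is the scaling that protects today’s bank records and passwords against classical attackers.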
So quantum computers are already here in a limited form, and fully functional versions are on the way. They will be as transformative for mankind as were the mainframe computers, personal computers, and smartphones that we all use.
As do all advancing technologies, they will also create new nightmares. The most worrisome development will be in cryptography. Developing new standards for protecting data won’t be easy. The RSA standards that are in common use each took five years to develop. Ralph Merkle, a pioneer of public-key cryptography, points out that the technology of public-key systems, because it is less well-known, will take longer to update than these — optimistically, ten years. And then there is a matter of implementation so that computer systems worldwide are protected. Without a particular sense of urgency or shortcuts, Merkle says, it could easily be 20 years before we’ve replaced all of the Internet’s present security-critical infrastructure.
It is past time we began preparing for the spooky technology future we are rapidly heading into.
You’ll find your theory isn’t unified…
Silicon Valley Makes My Head Spin (EE Times)
Rick Merritt, SiliconValley Bureau Chief
5/8/2015 06:10 PM EDT
With a single column, our Silicon Valley reporter tries to catch up with a hard disk full of stories he did not file. 😉
My head was spinning like I had been on one too many carnival rides, because it was happening again. I had just filed a story, come up for air, and, looking around, saw two — no, three — even more important ones screaming to be written.
I did what I usually do in these situations. I took a nap, then got a cup of coffee (whole milk latte with extra foam).
What we used to call the electronics industry is morphing on all fronts into something that doesn’t even have a name yet. It’s moving at speeds and levels of complexity I haven’t seen in 22 years watching this space.
Rested and caffeinated, I get back on Wi-Fi to gulp down another blast of data. It’s about that nirvana/no man’s land/house of mirrors that we call the Internet of Things.
Samsung is announcing a secure IoT chip next week called Artik, according to CNet. Andy Rubin, the father of Android, has started his own IoT hardware accelerator and a multi-million dollar fund to fuel its best customers, said the Wall Street Journal.
Meanwhile, STMicroelectronics, a top chip maker with several oars in the IoT waters, said it will not continue the bone-crushing slog of Moore’s Law beyond the 14nm node. Thanks to a former colleague, Peter Clarke, for the last story.
Speaking of former colleagues, David Lammers created a history of Moore’s Law for IEEE Spectrum which ought to get blown up to poster size and put on a wall of the Computer History Museum down the road from me.
Speaking of Mountain View, California, I don’t even want to look at the plans from Google that the city council turned down, fearing the town would become Googletown — and maybe a ghost town a decade from now when another company supplants Google at the top of the tech hill. I swear there are more cranes than trees sprouting up from San Francisco to San Jose these days.
Yeah, there’s plenty enough change to go around.
I’m not really surprised about the Samsung and Rubin news. Intel was nearly the last company to announce an IoT chip and module with its Quark and Edison products. We used to call them integrated microcontrollers or embedded processors.
I had heard Rubin was looking to create a hardware accelerator when I did a profile of the top eight emerging companies in the field. Folks said waaaay back last fall that Foxconn, who turned out to be Rubin’s manufacturing partner, was working on its own accelerator.
And we all know only a handful of companies around the world will be making 7nm chips, a list that never included ST. The only real question is whether any – or how many – of them will be based in China.
Disruption is everywhere. The disrupters are being disrupted, almost before they get done disrupting the few companies and markets left that haven’t already been turned on their heads a couple times.
So for the moment I am embracing change, staying humble and upgrading my WiFi connection. Here we go again.
I know exactly how he feels. And it’s not slowing down.
Freedom is not free. Free men are not equal. Equal men are not free.