By Greg Lindsay
The Southwest is famously fertile territory for ghost towns. They didn’t start out depopulated, of course — which is what makes the latest addition to their rolls so strange. Starting next year, Pegasus Holdings, a Washington-based technology company, will build a medium-size town on 20 square miles of New Mexico desert, populated entirely by robots.
Scheduled to open in 2014, the Center for Innovation, Testing and Evaluation, as the town is officially known, will come complete with roads, buildings, water lines and power grids, enough to support 35,000 people — even though no one will ever live there. It will be a life-size laboratory for companies, universities and government agencies to test smart power grids, cyber security and intelligent traffic and surveillance systems — technologies commonly lumped together under the heading of “smart cities.”
The only humans present will be several hundred engineers and programmers huddled underground in a Disneyland-like warren of control rooms. They’ll be playing SimCity for real.
Since at least the 1960s, when New York’s Jane Jacobs took on the autocratic city planner Robert Moses, it’s been an article of faith that cities are immune to precisely this kind of objective, computation-driven analysis. Much like the weather, Ms. Jacobs said, cities are astoundingly complex systems, governed by feedback loops that are broadly understood yet impossible to replicate.
But Pegasus and others insist there’s now another way — that, armed with enough data and computing muscle, we can translate cities’ complexity into algorithms. Sensors automatically do the measuring for us, while software makes the complexity manageable.
“We think that sensor development has gotten to the point now where you can replicate human behavior,” said Robert H. Brumley, the managing director and co-founder of Pegasus. These days, he and others believe that, given enough computing power, even the unpredictable “human factor” becomes predictable. “You can build randomness in.”
Mr. Brumley isn’t alone in his faith that software can replicate human behavior better than humans themselves can. A start-up named Living PlanIT is busy building a smart city from scratch in Portugal, run by an “urban operating system” in which efficiency is all that matters: buildings are ruthlessly junked at the first sign of obsolescence, their architectural quality beside the point.
To the folks at Living PlanIT and Pegasus, such programs are worth it because they let planners avoid the messiness of politics and human error. But that is precisely why these projects are likely to fail.
Take the 1968 decision by New York Mayor John V. Lindsay to hire the RAND Corporation to streamline city management through computer models. RAND built models for the Fire Department to predict where fires were likely to break out, and to decrease response times when they did. But, as the author Joe Flood details in his book “The Fires,” thanks to faulty data and flawed assumptions — not a lack of processing power — the models recommended replacing busy fire companies across Brooklyn, Queens and the Bronx with much smaller ones.
What RAND could not predict was that, as a result, roughly 600,000 people in the poorest sections of the city would lose their homes to fire over the next decade. Given the amount of money and faith the city had put into its models, it’s no surprise that instead of admitting their flaws, city planners bent reality to fit their models — ignoring traffic conditions, fire companies’ battling multiple blazes and any outliers in their data.
The final straw was politics, the very thing the project was meant to avoid. RAND’s analysts recognized that wealthy neighborhoods would never stand for a loss of service, so they were placed off limits, forcing poor ones to compete among themselves for scarce resources. What was sold as a model of efficiency and a mirror to reality was crippled by the biases of its creators, and no supercomputer could correct for that.
Despite its superior computing power and life-size footprint, Pegasus’ project is hobbled by the equally false assumption that such smart cities are relevant outside the sterile conditions of a computer lab. There’s no reason to believe the technologies tested there will succeed in cities occupied by people instead of Sims.
The bias lurking behind every large-scale smart city is a belief that bottom-up complexity can be bottled and put to use for top-down ends — that a central agency, with the right computer program, could one day manage and even dictate the complex needs of an actual city.
Instead, the same lesson that New Yorkers learned so painfully in the 1960s and ’70s still applies: that the smartest cities are the ones that embrace openness, randomness and serendipity — everything that makes a city great.
Greg Lindsay is a visiting scholar at the Rudin Center for Transportation Policy and Management and the co-author of “Aerotropolis: The Way We’ll Live Next.” This piece appeared recently in the New York Times Sunday Review.