This post starts what will be a multi-part series on trends that will shape our future and, in particular, the future of our cities over the next decade or two. Those of you who are regular readers of this blog know that I routinely comment on the rapid pace of change and the need to think of the future as something different from a simple extrapolation of the recent past. In fact, I think much of the current political and social upheaval in this country comes from people who either don’t understand the changes that are taking place or who understand the changes but actively fight against them in an effort to preserve the status quo. One of the failures of our political system is that we rarely level with people about the inevitable changes that are coming. Instead, politicians are more successful if they make hollow promises that allay popular fears and then apply band-aid solutions after the changes have occurred.
I need to begin this series with an important caveat: I may well be wrong. I’m not omniscient and I have no special ability to predict the future. I am, however, pretty good at seeing the commonalities that weave through a variety of occurrences, media outlets and expert opinions, and then synthesizing those into broad trends. Combine that with a lot of experience in urban planning and a lot of time for reading and exploring, and I think the odds that I am completely off base are fairly low. In my mind, I’m not so much trying to predict the future as to recognize existing trends that are in their nascent stage now but will have an outsized impact when all is said and done. I should also point out that very little of what I will be predicting is original to me -- I’m simply combining, extending and repackaging what many other people who are much smarter than I am have been saying for several years. Some of the details may be wrong, but I’m quite confident that the general gist is right.
I also want to point out that my goal in writing this series is not necessarily to predict the future with a high degree of accuracy, but to start a conversation about how we should deal with the future in a general sense. If I am even remotely right about the changes that are coming, we will be much better off if we start preparing for those changes now. It is hard for government to be proactive because there are always current problems that seem more critical than future problems and because there are always people with a vested interest in the status quo who will undermine and second-guess any efforts to prepare for change. My true goal is to help build a consensus about what the future should be like so that local governments can bypass the gridlock at the national level and take steps that will make them successful and resilient. As this series of blog posts unfolds, please let me know your ideas and opinions.
Mega Trends and Urban Impacts
In this first post, I will be discussing three trends that are widely recognized now but which I think are also widely under-appreciated in terms of their eventual impact on our lives. These “mega trends” are global in their reach and so foundational in their impact that they will affect just about every aspect of our lives. I will follow up with posts on eight “mini trends” that I will refer to as Urban Impacts because I will be focusing on how they will end up shaping cities over the next decade or two. Again, none of these topics are likely to be surprising but you may be underestimating their impact and you may not be prepared for the new reality that they will bring.
I should also point out that I am not trying to predict everything that is going to change in the next decade or two. In keeping with the theme of this blog, I’m going to focus on trends that will impact cities. Thus, I won’t be writing about cryptocurrencies or e-sports or vertical farming or any number of other interesting trends and technologies because I don’t think that their impact on cities will be significant. This first post is the exception to that rule in that I won’t be talking much about cities -- instead, I will be laying the foundation for the posts to come.
The Age of Ideas and the Power of Scale
My first “mega trend” has to do with the world economy and the changes that are currently taking place. To give a little bit of context, I’m going to start with a quick review of history. Economists and historians, of course, love to segment time into economic ages, complete with starting and ending dates that are ridiculously arbitrary. Still, it does give a useful perspective on the current pace of economic change.
In very rough terms, the age of hunters and gatherers gave way to the agricultural age, which gave way to the industrial age, which has now given way to the information age. This process, however, was anything but linear or consistent. Homo sapiens appeared on the scene roughly 300,000 years ago with enough brain power to make and utilize crude tools and enough social skills to form small bands or tribes. Hunter/gatherers, of course, live a pretty day-to-day existence, so there wouldn’t have been much of an “economy” other than simple sharing or trading.
Archaeologists have dated the first signs of agriculture to about 10,000 BC, although it was several thousand more years (around 7,000 BC) until organized farming is thought to have started, along with the domestication of goats, sheep and pigs. Between 6,000 BC and 5,000 BC, permanent settlements supported by agriculture began to appear, along with early attempts at irrigation. Over the next several thousand years, the plow was invented, wool was turned into textiles, farming spread across Europe, and eventually the horse was domesticated. All of this created an agricultural economy where surpluses were routinely produced (although not guaranteed) and people were stationary enough to easily trade their surplus of one commodity for someone else’s surplus of another.
The agricultural age lasted several thousand more years until roughly the middle of the 18th century, when the industrial age became the primary economic driver. What most people now refer to as the industrial revolution started in England with the invention of the steam engine by Thomas Newcomen in 1712 and a series of inventions in the textile industry -- the flying shuttle, the spinning jenny, the water frame and the spinning mule -- during the 1700s. These inventions (and many others) increased productivity dramatically, amplified the power that humans could apply to tasks, and led to new products (and new types of jobs) that had never existed or even been imagined by previous generations. [1]
The Spinning Jenny
The transition to the information age can be illustrated by looking at the largest companies (by market capitalization) at the end of the industrial age versus today. In 1970, the list included industrial giants such as General Motors, Ford, General Electric, IBM and US Steel, along with energy companies such as Exxon, Mobil, Texaco and Gulf Oil. Today, the top 10 list doesn’t include any of those companies. The top five are Apple, Microsoft, Alphabet (Google), Amazon and Facebook -- all companies that are heavily invested in creative ideas and specialized knowledge as their path to profits. If you look in more detail at General Electric, an industrial stalwart 50 years ago, its market capitalization is currently $116 billion, its net 2020 income was $5 billion (although it is not always profitable), and it employs roughly 174,000 people. Compare that to Facebook, with a market cap of $915 billion, net 2020 income of $29 billion, and 63,000 employees. The information age companies are far more valuable and make far more money with far fewer employees (market cap per employee is roughly 20 times greater).
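If you want to double-check that last figure, here is a quick back-of-the-envelope calculation using the numbers above (a rough sketch only -- market caps and headcounts move around constantly):

```python
# Rough check of market cap per employee, using the figures cited above.
ge = {"market_cap": 116e9, "employees": 174_000}   # General Electric
fb = {"market_cap": 915e9, "employees": 63_000}    # Facebook

ge_per_emp = ge["market_cap"] / ge["employees"]    # ~$0.7 million
fb_per_emp = fb["market_cap"] / fb["employees"]    # ~$14.5 million

print(f"GE:       ${ge_per_emp / 1e6:.1f}M per employee")
print(f"Facebook: ${fb_per_emp / 1e6:.1f}M per employee")
print(f"Ratio:    {fb_per_emp / ge_per_emp:.0f}x")  # roughly 20x
```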
At this point, I think it is useful to underscore some interesting facts about this continuum of economic change. First, there is the issue of the pace of change. If you think of the continuum as a 24-hour clock, the hunter/gatherer phase lasted from midnight until roughly 11 PM, the agricultural age lasted nearly an hour and ended at about 11:58:30, the industrial age lasted another minute and 15 seconds, and the information age (up to this point) has occupied the final 15 seconds. Our species has continued to evolve during that time, but not at anywhere near that pace. I think it is fair to question whether the technological change that is driving economic change will eventually outpace the ability of our species to adapt, particularly those who are average or below average in intelligence.
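For readers who want to check the clock math, here is a small sketch of the conversion. The era start dates are the rough ones used above (agriculture about 12,000 years ago, the industrial age about 1760, the information age about 1970):

```python
# Map 300,000 years of human history onto a single 24-hour clock.
TOTAL_YEARS = 300_000        # rough age of Homo sapiens
DAY_SECONDS = 24 * 60 * 60   # 86,400 seconds in the "day"

eras = {
    "agricultural age begins": 12_000,  # years ago (~10,000 BC)
    "industrial age begins": 260,       # years ago (~1760 AD)
    "information age begins": 52,       # years ago (~1970 AD)
}

for era, years_ago in eras.items():
    clock_seconds = DAY_SECONDS - (years_ago / TOTAL_YEARS) * DAY_SECONDS
    h, rem = divmod(int(clock_seconds), 3600)
    m, s = divmod(rem, 60)
    print(f"{era}: {h:02d}:{m:02d}:{s:02d}")

# agricultural age begins: 23:02:24
# industrial age begins:   23:58:45
# information age begins:  23:59:45
```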
Second, there is the issue of scale -- by which I mean the ability for an increase in production to create a disproportionate increase in profit or wealth. This concept was irrelevant to the hunter/gatherer and just barely relevant during the agricultural age. Adding more land increased agricultural production but unless it also enabled your workers to farm more acres per person, the increase was proportionate, not disproportionate. It is not until the industrial age that economies of scale really become an important part of economic success. Large factories were disproportionately more profitable than small factories (and certainly individual artisans) because they could negotiate better prices for raw materials, they could spread fixed costs over more units of production, and they could get favorable terms for warehousing and shipping. There is a size at which companies become so unwieldy and dysfunctional that economies of scale cease to expand, but the global corporate powerhouses of the industrial age, and now the information age, are a far cry from the tiny fiefdoms and artisans of the agricultural age.
Third, there is the issue of “agglomeration” -- by which I mean the tendency for economic producers to cluster together to gain some type of advantage. Again, this concept was irrelevant to the hunter/gatherer, but it was relevant during the agricultural age. The more land (and agricultural output) that was under the control of one central authority, the more that the surpluses could be pooled to support larger cities, more specialized craftsmen, and more powerful armies. In the industrial age, agglomeration advantages resulted in the rise of great industrial centers. Factories clustered in select cities where labor was abundant, transportation for raw materials and finished products was efficient, and equipment suppliers with technical knowledge were nearby.
What is perhaps under-appreciated about the current information age is that these same trends continue to play out but to a much greater degree. Ideas “scale” much better than things. Yes, Henry Ford discovered numerous economies of scale as he perfected the assembly line for the production of cars, but it required huge amounts of capital, large factories and thousands of employees. Compare that, for example, to a song which can now be recorded, sold and distributed digitally to millions of people worldwide by a relatively small number of people without the need for factories, warehouses or trucks. It is equally true for the numerous software products, financial instruments and entertainment channels that came into existence in just the past decade or two.
Agglomeration is also continuing in the information age although now the key factors are brain power and venture capital, not factories. In his recent book “The New Urban Crisis,” Richard Florida calls the current agglomeration process “winner-take-all urbanism” and describes it as follows:
“Thanks to the clustering force, the most important and innovative industries and the most talented, ambitious, and wealthiest people are converging as never before in a relative handful of leading superstar cities and knowledge and tech hubs. This small group of elite places forges ever forward, while many -- if not most -- others struggle, stagnate, or fall behind.”
“Superstar cities generate the greatest levels of innovation; control and attract the largest shares of global capital and investment; have far greater concentrations of leading-edge companies in the finance, media, entertainment, and high-tech industries; and are home to a disproportionate share of the world’s talent. They are not just the places where the most ambitious and talented people want to be -- they are where such people need to be. The dynamic is cumulative and self-reinforcing.” [2]
According to his research, New York, London, Tokyo and Hong Kong sit at the top of the “superstar” rankings. Within the United States, the list also includes Los Angeles, Boston, Chicago, and San Francisco. Outside of this relative handful of elite cities, everyone else is scrambling for second- and third-tier businesses and hoping to stay economically relevant. A quick look at property values confirms that the superstar cities have skyrocketed in value compared with other large cities. For the price of one condo in New York’s SoHo neighborhood (median value = $3 million), you could buy 26 homes in Cincinnati, 29 homes in Detroit, or 34 homes in St. Louis (based on median residential unit value). [2]
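Working backward from those ratios gives a rough sense of the implied median home values (a back-of-the-envelope sketch based solely on the figures above):

```python
# Implied median home values behind the SoHo condo comparison above.
soho_condo = 3_000_000  # median SoHo condo value
homes_per_condo = {"Cincinnati": 26, "Detroit": 29, "St. Louis": 34}

for city, count in homes_per_condo.items():
    print(f"{city}: ~${soho_condo / count:,.0f} median home value")

# Cincinnati: ~$115,385 | Detroit: ~$103,448 | St. Louis: ~$88,235
```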
Of course, our national economy includes hundreds of thousands of companies that are not ultra high-tech businesses requiring the best and the brightest. The problem is that the farther a business is down the innovation scale, the more likely its product or service will behave like a commodity that can be readily replaced by a similar product or service from another company in another city or country. Thus, the economies of cities that rely on these second- and third-tier businesses will be less lucrative, less dynamic and less secure because those businesses will be vulnerable to competition from lower-cost markets and to takeovers by larger companies.
I don’t believe, however, that everything is doom and gloom. After all, recent events have shown that many high-tech employees prefer working remotely, which may counteract the pull of the superstar cities. And parts shortages, shipping delays and governmental instability have undermined the reliability of global manufacturing based on distributed supply chains and commodity parts, giving local suppliers a potential advantage over distant, lower-cost suppliers. Still, cities will need to adapt to the “age of ideas” and the new economic realities that are the foundation of this new age.
Climate Change
Of my three mega-trends, I’m fairly confident that climate change will be the most controversial. There are many people, after all, who continue to believe that the climate change we are experiencing can be explained primarily by natural climate cycles or that the effects of climate change will be largely benign. They view efforts to reduce greenhouse gas levels as a waste of time and money that will harm our economy and place additional burdens on poor people and poor countries.
At the other extreme are those who see climate change in apocalyptic terms, think that no measure to reduce greenhouse gases is too extreme, and see each severe thunderstorm or wildfire as proof that climate change will be disastrous. My opinion lands somewhere in the middle, and I will briefly explain the climate change arguments that I find most compelling. But in the end I will argue that my opinion doesn’t really matter. The “climate change train” has left the station, as they say, and it is not coming back. The only real point of debate anymore is how fast the train will gather speed.
Scientists who search for life in other solar systems pay particular attention to planets located in what they call the “Goldilocks zone” -- not so hot that water boils away and not so cold that everything is completely frozen. Earth is in the Goldilocks zone for our sun, which is one of the reasons it teems with life. That does not mean, however, that the climate on Earth is constant. It is, for a variety of reasons, always changing, albeit at a relatively slow pace under normal circumstances.
If you go back 20,000 years (a mere blink of the eye in geologic terms), most of the Midwest was covered by glaciers during the most recent ice age. Starting 10,000 to 15,000 years ago, Earth entered a relatively warm period known as an “interglacial,” in which glaciers retreat and sea levels rise. This means that all of recorded human history has taken place during a relatively warm and stable phase of Earth’s climate cycle.
Global Temperatures Relative to the Average for the 20th Century
One of the factors affecting the pattern of global temperature change is the accumulation of greenhouse gases (such as carbon dioxide and methane) that trap heat from the earth and keep it from escaping into space. Greenhouse gases are an essential part of the climate system that keeps Earth habitable for humans. The controversy is not over whether greenhouse gases exist but over the fact that their levels have spiked during the last 100 to 200 years to levels above anything thought to have existed for hundreds of thousands of years. There is no plausible explanation for this rise other than the activities of humankind during the industrial age and now the information age.
Atmospheric Carbon Dioxide Levels Over Time
My research leads me to believe that climate change is real and that it has largely been caused by our own actions. What is less clear is exactly what impact it will have on people, cities and countries. How hot will it be in Cincinnati during the summer 10 years from now? Will Kansas City get more rain or less? Will the ski resorts in northern Michigan get enough snow and cold weather to operate as they do now? Answering these questions requires complex models that turn global climate projections into localized long-range weather forecasts, and I am skeptical of our ability to do that accurately. There are hundreds of variables that shape weather patterns, and I don’t believe we fully understand how they all interact. Furthermore, the earth may have checks and balances that moderate climate change more than we expect or, conversely, there may be tipping points that, once passed, make weather changes more rapid and extreme than expected. In short, I think we just don’t know.
What we do know is that climate change in the past has sharply curtailed some species of both plants and animals, sometimes to the point of extinction. Better-adapted species, however, have always risen to take their place. What is different now is that the climate is changing relatively rapidly and there is one particular species -- us -- that we don’t want to see go extinct. Thus, in my opinion, climate change is one giant crap shoot where nearly 8 billion of us are hoping for a friendly roll of the dice. Unfortunately, not all of us will be lucky -- there will be climate change winners and losers, and the losers will be unhappy about the results.
Now that I have given you my opinion, I’m going to explain why it doesn’t matter. Opinion surveys in this country (and in developed countries generally) show that a majority of people now consider climate change to be a serious problem that we as a society need to address. Support for this position is particularly strong among members of the Millennial and Gen Z generations [3] -- the very people who will soon be taking the reins of power. Thus, something is going to happen on the topic of climate change and, given the scale of the issue, it is likely to reverberate throughout the world economy and to shape the future form of cities in a variety of ways.
Exactly what actions will be taken I’m not even going to attempt to predict, but they fall into two general categories. First, there are preventative actions that are aimed at reducing greenhouse gases in the atmosphere so that temperature increases are held in check to at least some degree (e.g. replacing coal-fired power plants with solar- and wind-powered energy). Second, there are adaptive measures that attempt to minimize the impact of climate-induced weather changes (e.g. installing pumps in Miami Beach to pump out the sea water that bubbles up from the ground during high tides, a growing necessity as sea levels have risen).
The actions that end up being taken will undoubtedly come from both categories, but I think cities need to be particularly prepared to take adaptive measures. Countries around the globe have repeatedly set climate goals to curtail greenhouse gas emissions and then failed to muster the collective willpower to meet those goals. Change is hard and when that change is inconvenient and expensive -- which is exactly what climate change prevention will be -- change is really hard. As a result, I expect preventative actions to be only modestly successful. That leaves adaptive measures which cities and countries will be forced to take when preventative measures come up short. Some cities may be lucky enough to have relatively benign climate impacts which will necessitate only simple adaptive measures. Other cities won’t be lucky and may end up spending billions. Exactly how that cost gets allocated to the various levels of government (and various pools of taxpayers) will be interesting to watch. In general, I think midwestern cities will be better positioned than most, but in the game of climate craps there are no guarantees.
Artificial Intelligence and Virtual Reality
My final mega trend involves two computer technologies that are really separate and distinct, but there is enough overlap that I have chosen to combine them for the purposes of this post. I’m also going to lump “augmented reality” in with virtual reality because I think the technologies will eventually blur and merge to the point where most people won’t care about the difference. All three of these things have been slowly worming their way into our lives, but are likely to explode in importance over the next decade. Once they do, the way we live, work and entertain ourselves will never be the same.
Artificial intelligence (AI) means different things to different people, but in a very general sense it is a computer system that is able to perform tasks normally requiring human intelligence. Note that the definition does not require that the system complete the task in the same way that a human would. Systems that play chess, for example, generally rely upon the ability to analyze thousands of possible moves in a split second to find the best move to make -- a strategy that no human could replicate. While early AI systems did try to mimic the decision-making steps that human experts would take for a particular task, newer systems have largely abandoned that approach in favor of machine learning algorithms that sift through mountains of past examples to find the parameters that produce the best solutions. As long as the computer has a clear set of criteria for determining success versus failure, it can “learn” the best way to solve a particular problem without a human telling it what to do.
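To make “learning parameters from past examples” a little more concrete, here is a deliberately tiny sketch in Python. Everything in it is invented for illustration -- the flood scenario, the data and the model -- and real systems use vastly richer models trained on millions of examples:

```python
import math

# Hypothetical past examples: (river level in feet, inches of rain) -> flooded? (1 or 0)
examples = [((4.0, 0.1), 0), ((9.0, 2.5), 1), ((5.0, 0.3), 0),
            ((8.5, 3.1), 1), ((3.5, 0.0), 0), ((9.5, 2.8), 1)]

w = [0.0, 0.0]  # the parameters the system will "learn"
b = 0.0
rate = 0.05     # how aggressively to adjust after each mistake

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # probability of flooding, between 0 and 1

for _ in range(2000):                # repeatedly sift through the past examples
    for x, label in examples:
        error = predict(x) - label   # how wrong is the current guess?
        w[0] -= rate * error * x[0]  # nudge each parameter to reduce the error
        w[1] -= rate * error * x[1]
        b -= rate * error

print(predict((9.2, 2.7)))  # close to 1.0: resembles the past flood cases
```

Notice that no one ever writes a flood-prediction rule here -- the rule emerges from the examples, which is exactly why these systems need mountains of data.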
The example of artificial intelligence that people are most familiar with is probably voice-activated digital assistants such as Alexa, Siri and Google Assistant. I asked my Google device, for example, to give me a list of the five tallest mountains in the world. To complete that task, it had to (1) understand the question I was asking, (2) find a database containing information on mountains and their elevations, (3) translate my question into a database query and retrieve the results, and then (4) put the results of the query into the form of an English statement that I could understand. The correct response came back in just a couple of seconds. The technology is not perfect, but it is pretty amazing how much better it has gotten in just a few years -- and the more you talk to it, the better it gets.
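Here is a toy sketch of those four steps. Everything in it is hypothetical and wildly simplified -- a real assistant uses speech recognition, knowledge graphs and learned language models rather than a hard-coded table -- but the shape of the pipeline is the same:

```python
# A toy version of the four-step assistant pipeline described above.
MOUNTAINS = {  # stand-in for a real elevation database (meters)
    "Everest": 8849, "K2": 8611, "Kangchenjunga": 8586,
    "Lhotse": 8516, "Makalu": 8485, "Cho Oyu": 8188,
}

def parse_question(text):
    # Step 1: "understand" the question -- crudely pull out the requested count.
    numbers = {"three": 3, "four": 4, "five": 5}
    words = text.lower().split()
    return next((n for word, n in numbers.items() if word in words), 5)

def run_query(count):
    # Steps 2 and 3: locate the data source and run the "query" against it.
    ranked = sorted(MOUNTAINS.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:count]

def compose_reply(rows):
    # Step 4: phrase the results as an English sentence.
    names = ", ".join(name for name, _ in rows)
    return f"The {len(rows)} tallest mountains are: {names}."

question = "give me a list of the five tallest mountains in the world"
print(compose_reply(run_query(parse_question(question))))
```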
Digital assistants are a fairly rudimentary example of artificial intelligence, but their popularity has been enormous. It took Amazon’s Alexa four years to reach 100 million devices; it took just one more year to double that number. Amazon has integrated Alexa with thousands of other devices and data sources, so it is now possible to ask Alexa to lower your thermostat by three degrees or to order a large pepperoni pizza from Pizza Hut. These third-party “skills” now number in the tens of thousands.
One of the strengths of AI systems is pattern recognition, not only in data but in images as well. Google is particularly strong in this area and is building advanced AI functions into its Android operating system for smartphone cameras. I took a picture of a running shoe that I liked, for example, and asked Google to search on that image. Within seconds I not only knew the manufacturer and model, but had links to several online sites where I could make a purchase.
In a more serious example, researchers at Stanford University developed a system in 2017 to detect skin cancer. Given a photo of a freckle or skin lesion, it can judge whether the lesion is cancerous as accurately as leading dermatologists. It works because it has learned from roughly 130,000 past cases, searching for similarities. In essence, it is able to extract “rules” that dermatologists either do not consciously know or at least are not able to articulate. [4]
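The actual Stanford system is a deep neural network, but the underlying “compare to past cases” intuition can be sketched with something as simple as a nearest-neighbor vote. The features and cases below are invented purely for illustration:

```python
import math

# Hypothetical past cases: (border irregularity, color variation) -> diagnosis
past_cases = [((0.9, 0.8), "malignant"), ((0.2, 0.1), "benign"),
              ((0.8, 0.9), "malignant"), ((0.3, 0.2), "benign"),
              ((0.7, 0.7), "malignant"), ((0.1, 0.3), "benign")]

def diagnose(features, k=3):
    # Rank the past cases by similarity to the new lesion...
    nearest = sorted(past_cases,
                     key=lambda case: math.dist(case[0], features))[:k]
    # ...and let the k most similar cases vote on the diagnosis.
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(diagnose((0.75, 0.85)))  # resembles past malignant cases -> "malignant"
```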
The current state of the art in AI is that it works best when applied to a relatively narrowly defined task (or set of related tasks) where the desired outcome is clearly understood and where there are a lot of past examples to learn from. As a result, AI is likely to be used initially to augment existing workers -- making them far more productive at key tasks -- rather than replacing humans for every task that a particular job requires. Instead of ten paralegals searching past cases for relevant precedents and summarizing the results, you may simply need an AI system and one or two paralegals. The paralegals who are still employed will monitor and double-check the progress of the AI system, and then present the results to the partner (who has no interest in computer systems of any kind). Given that the system can work 24/7 without getting bored, needing meal breaks, or getting distracted by funny cat videos, the results are likely to be better, faster and less expensive.
And that, in a nutshell, explains both the promise and the threat of artificial intelligence. Our society will benefit from job performance that reaches levels never before attainable, at a cost that is likely to fall significantly over time. On the other hand, AI systems will cause job losses across a wide range of both white-collar and blue-collar job categories. There may be some human skills (empathy or creativity, for example) that computers will have a hard time replicating, but I suspect that list is far shorter than most people think. The percentage of jobs that could be replaced by AI systems (combined with robotics where necessary) will probably end up being well above 50 percent. How fast job replacement actually happens will depend more on economic issues than on the theoretical limits of artificial intelligence. Tasks that are simple (e.g. check-out clerks) or where the potential profit from automation is high (e.g. digital assistants such as Alexa) will be the first targets of AI technology. [5] But the trend of AI-powered automation replacing human workers will inevitably spread, and the impact will be larger than most anticipate.
Virtual reality (VR) is a computer generated simulation of a three-dimensional environment that the user can interact with in a seemingly physical way. Currently, users wear a special headset that provides an immersive visual and aural experience of the virtual reality being presented, and users manipulate controllers or wearable sensors to allow direct interaction with objects in the virtual environment. VR programs are often multi-user with each user represented within the virtual reality by an “avatar” that other users can see and interact with.
So far, virtual reality has been used primarily to create more engaging computer games (an extremely lucrative market), but the technology is set to explode in numerous other directions over the next 5 to 10 years. VR technology will enable new forms of social interaction, retail marketing, education and training. Imagine, for example, med students being able to “operate” on virtual patients that are nearly as realistic as actual patients, or history buffs being able to visit Boston prior to the Revolutionary War.
Virtual Reality as a Design Tool
Years ago, there was an educational TV show called The Magic School Bus in which the driver, Ms. Frizzle (voiced by Lily Tomlin!), could transform the school bus so that it could travel into outer space, or inside an anthill, or through the human bloodstream as she tossed out educational tidbits. Using virtual reality, these same imaginary trips could be taken in a much more immersive form, with sights and sounds that make users feel as if they are really inside, for example, a human body. Each viewer would be able to change their view simply by turning their head or by using the controller to steer “the bus” toward a different organ -- all without influencing what other users were seeing. Rather than a linear storyline that is the same for everyone, virtual reality builds a digital replica of the human body that each user can explore as they prefer.
This ability to use computer systems to build an alternate reality, often referred to as the “metaverse” by VR developers, is seen as transformative by many in the computer industry. Over the past 20 years, we have become used to doing things remotely, whether watching YouTube videos, shopping online, or attending Zoom meetings. But the experience has always been detached, with the end user as much an outside observer as a participant. The goal of virtual reality is to flip that experience so that the user is almost as fully engaged as if they were physically present in whatever “metaverse” has been created for them.
Many in the computer industry view virtual reality as the next big thing. Mark Zuckerberg, CEO of Facebook, recently announced that his company will hire 10,000 workers in Europe and spend billions each year developing VR products and metaverse content. In fact, Zuckerberg is going so far as to rename the company “Meta Platforms Inc.” to indicate their new focus. [6]
Summary
One thing all three mega trends have in common is that they will have a substantial impact on the world economy. And, of course, the economy shapes our perception of our society, much of how we live our lives, and the physical form of the cities we inhabit. That influence is pretty abstract, however, so it is difficult to leap directly from artificial intelligence to changes in urban density, transportation systems or land use patterns. That is where the next set of blog posts comes in. Each post will build on the three mega trends but focus in much greater detail on a specific ramification that will have more obvious implications for the future of midwestern cities. Stay tuned!
Thoughts? As always, share your thoughts and ideas by leaving a comment below or sending me an email at doug@midwesturbanism.com. Want to be notified whenever I add a new posting? Send me an email with your name and email address.
Notes:
[1] Richard Baldwin; The Globotics Upheaval: Globalization, Robotics and the Future of Work; Oxford University Press; 2019.
[2] Richard Florida; The New Urban Crisis; Basic Books, Hachette Book Group; 2018.
[3] Alec Tyson, Brian Kennedy, and Cary Funk; “Gen Z, Millennials Stand Out for Climate Change Activism, Social Media Engagement with Issue;” Pew Research Center; May 2021; https://www.pewresearch.org/science/2021/05/26/gen-z-millennials-stand-out-for-climate-change-activism-social-media-engagement-with-issue/.
[4] Taylor Kubota; “Deep Learning Algorithm Does as Well as Dermatologists in Identifying Skin Cancer;” Stanford News; January 2017; https://news.stanford.edu/2017/01/25/artificial-intelligence-used-identify-skin-cancer/.
[5] Daniel Susskind; A World Without Work: Technology, Automation and How We Should Respond; Metropolitan Books/Henry Holt & Company; 2020.
[6] Meghan Borrowsky; “Facebook Bets Big on Virtual World;” The Wall Street Journal; October 27, 2021.