Fear of Acorns
Guest post by Ted Lamade, Managing Director at The Carnegie Institution for Science
I am a big fan of fables. Be it the Ant and the Grasshopper, the Boy Who Cried Wolf, or the Tortoise and the Hare, each is great because it is concise, entertaining, and, most importantly, forever relevant. That said, a fable crossed my desk in recent weeks that I found especially relevant to the world and markets we are currently living in — “Chicken Little”.
For those of you not familiar with the story of Chicken Little, it goes something like this.
Chicken Little is walking in the woods when she is struck by an acorn falling from one of the trees. Convinced this is a sign the sky is falling, Chicken Little rushes from the woods to go and warn the king.
On her way to see the king, she runs into several friends, who are also birds and go by names like Henny Penny, Goosey Loosey, Ducky Lucky, Turkey Lurkey, and so on. As she meets each along her way, Chicken Little warns them that the sky is falling and that she has first-hand evidence of this.
As a result, these birds join Chicken Little as she makes her way to the king. Soon enough, there is a large group of them convinced that the sky is falling on them.
On their way, they come across Foxy Loxy (a fox, of course), who asks them why they are in such a hurry. Chicken Little explains that the sky is falling and that they are on their way to tell the king. Foxy Loxy offers to take them to the castle where they will find the king, and the birds agree to accompany him. However, the cunning fox leads them not to the castle, but to his den, and the birds are never seen alive again.
Fear is not something that is forced upon us. Rather, it is something we force upon ourselves.
Why? Because fear is a reaction we have when we are confronted by something, typically a threat.
This raises a question — is fearing something a problem?
In short, no. Fear itself is not necessarily a bad thing. In fact, a reasonable amount of fear is actually a good thing because it is what makes us more aware of our surroundings and cautious when warranted.
However, an irrational amount of fear is a problem because it makes us susceptible to the “Foxy Loxys” of the world. Those who aim to leverage fear for personal gain. Those who sell advice, products, and services that feed into the fear. Those who want to magnify it at every turn. The media is the obvious culprit, but there are countless others.
The reason this is such an important issue is that while Chicken Little treated a single acorn as a sign that the sky was falling, most people today, investors in particular, seem to treat each and every “acorn” (i.e., negative headline) as a surefire indication that the economy and/or markets are bound to crash.
Just think about the last decade alone. Covid-19, disinformation, crypto and the FTX fraud, Iran, China, Russia, climate change, a tech bubble 2.0, supply chain shortages, globalization, Silicon Valley Bank’s collapse, office vacancies, and higher interest rates (just to name a few) have all been deemed perilous threats to financial and/or geopolitical stability. Yet we are still here, with unemployment close to all-time lows and the stock market near record highs.
So, this begs the question — if we look up in the sky today, what is the next acorn to fall? The next thing to fear?
It is pretty obvious – Artificial Intelligence (“AI”).
This past weekend alone there were more than two dozen articles in the various papers I read highlighting the risks surrounding AI, how it is going to dismantle the American workforce, cause the wealth gap to widen even further, destabilize the economy, and even lead to nuclear holocaust.
Whoa. Talk about Chicken Little.
But should we fear this acorn? Could this finally be the true sign the sky is falling?
History tells us the answer is clearly no. That said, AI is likely going to impact sectors of the economy and markets very differently. Understanding how is the first step towards not fearing its arrival. Here are just a few examples that have been top of mind.
Education
Remember those history reports you had to write in middle school about the Roman Empire? Or essays on the Classics in high school? Or a senior thesis in college on World War II?
While these were great ways to test how well we could regurgitate information, they were utter failures at testing how well we understood it. They taught us nothing about drawing parallels across disciplines, time periods, or circumstances. Said another way, they did not make us think.
The good news is that AI has the potential to enable future students to go well beyond these exercises in regurgitation. Instead of simply reporting on the Roman Empire, Socrates and Plato, or World War II, AI could provide the opportunity for these students to apply the lessons from each to their own lives and the world around them.
As for education more broadly, the news may be even brighter as the Economist recently reported that AI is doing things like “helping teachers write lesson plans and worksheets that are at different reading levels and even in different languages.” Said another way, it is enabling more specialized teaching using the same amount of “manpower”. If so, isn’t this the definition of increased productivity?
Just as calculators replaced the need to manually run math equations, AI has the potential to perform much of the mindless regurgitation that students have grown accustomed to doing, enabling them to be freed up for real thought and creativity.
Healthcare
Diagnosing and treating physical ailments is currently all about probabilities and a trial-and-error approach. Have stomach pain? The first step is typically to change your diet. Could it be something more serious? Sure, but doctors always start with the highest probability first, and rightfully so. If symptoms subside, you are all set. If they don’t, your doctor will probably move to the next highest probability. Maybe they will prescribe antibiotics or another prescription drug. Still not better? Next up will be a CT scan or an MRI, but this will likely be months down the road.
So how could this change with AI? Going forward, doctors may be able to access your personal genetic makeup, cross-reference your symptoms with your family history, and check it all against other patients who have experienced similar symptoms and have similar family histories and genetics. Could this change the probability picture? How about the course of treatment? What about the response time? I am guessing it would, and quite possibly in a very big way.
This is just the tip of the iceberg, as AI will likely also revolutionize countless other aspects of healthcare, such as the way therapeutics and treatments are researched, devised, created, and administered.
Manufacturing
Any company that produces something in a factory, plant, or on an assembly line should benefit tremendously from AI given that it should enable them to streamline operations, save energy (and therefore costs), increase throughput, and raise overall efficiency. This is a pretty visible end result. The better you understand your operations, the better you can run your business.
Finance
Since I cannot say it any better than Bloomberg’s Matt Levine, I am just going to show you what he wrote last week about AI. Needless to say, I can’t imagine a sector that will experience more booms and busts than finance as a result of AI.
“The widespread use of relatively early-stage AI will introduce new ways of making mistakes into finance. Right now there are some classic ways of making mistakes in finance, and they periodically lead to consequences ranging from funny embarrassment through multimillion-dollar trading loss up to systemic financial crises. Many of the most classic mistakes have the broad shape of “overly confident generalizing from limited historical data,” though some are, like, hitting the wrong button. But there are only so many ways to go wrong, and they are all sort of intuitive. But now there are new ways! Weird ways! Oh sure an AI can probably make overly confident generalizations from limited historical data, but perhaps there is room for novelty. Now some banker is going to type into a chat bot “our client wants to hedge the risk of the Turkish election,” and the chatbot will be like “she should sell some Dogecoin call options and use the proceeds to buy a lot of nickel futures,” and the banker will be like “weird okay whatever.” And that trade will go wrong in surprising ways, the client will sue, the client and the banker and the chatbot will all come to court, the judge will ask the chat bot “well why would this trade hedge anything,” and the chatbot will shrug its little imaginary shoulders and be like “bro why are you asking me I’m a chat bot.” Or it will say “actually the Dogecoin/nickel spread was ex ante an excellent proxy for Turkish political risk because” and then emit a series of ones and zeros and emojis and high-pitched noises that you and I and the judge can’t understand but that make perfect sense to the chat bot. New ways to be wrong! It will make life more exciting for financial columnists, for a bit, before we are all replaced by the chat bots.”
Consumer Products
The Wall Street Journal had an entire section this week dedicated to this topic titled “AI has Madison Avenue Excited — and Worried” that nearly perfectly sums up this sector. In short, there will be plenty of pros and cons.
On one hand, as trends change, preferences adjust, and demographics shift, how is AI or ChatGPT supposed to figure out what the next fad will be, what next spring’s clothing lineup should look like, or what destinations will be popular when consumers don’t even know until they know? Ever look at clothing styles decade by decade? Good luck to the AI bot that attempts to figure that one out. Or what about music? Movies? Cars? Home design? Check, check, and check.
This said, there will be parts of the consumer sector that will benefit tremendously from AI, specifically those that are focused on the “here and now”, client service, and sales, as opposed to predicting the future. Look no further than a company called Cresta, which uses generative AI to better inform, educate, and assist people in a wide variety of jobs and industries as they engage with potential customers, existing clients, and current colleagues.
There are too many other possibilities to mention in this article and, as with most things that depend on human behavior, time will tell how they turn out.
Sports
We’ve seen a version of AI in sports for years in the form of Billy Beane’s “Moneyball”, the Houston Rockets/Golden State Warriors “Three Ball Strategy”, and in how all teams scout players, watch film, and study their opponents. I would imagine this next phase will just hypercharge this phenomenon. Ironically though, I don’t think AI adoption will determine future winners. Why? Because it won’t be novel. Everyone will be doing it. Instead, it will more likely just make the entire ecosystem more competitive, which should make it even more challenging to win a title at the highest levels due to the Paradox of Skill.
Geopolitics and Warfare
This might be the trickiest, and most important, of all.
A few years ago I wrote a piece titled “A Centaur Future” about a man named Stanislav Petrov. While you may not have heard of him, he might be the most important person of the 20th century.
Why? Because he may have single-handedly saved the planet and civilization as we know it.
In the fall of 1983, Petrov was in charge of the Soviet Union’s Oko nuclear early-warning system. On September 23rd, the system reported that the United States had launched five nuclear missiles at the Soviet Union. At the time, the Soviets had the second most advanced missile defense technology in the world, so it would have been perfectly logical for Petrov to conclude that the threat was real. Yet Petrov was skeptical. He concluded that the alert was much more likely a false alarm because (a) a genuine U.S. strike would have been an “all-out” attack rather than just five missiles, (b) the launch detection system was new and potentially faulty, (c) the alert had passed through 30 layers of verification too quickly, and (d) ground radar failed to pick up corroborating evidence. Despite the potential personal consequences (the end of his career at best, his life at worst), Petrov chose to disobey his orders.
As you may have guessed, Petrov’s intuition proved to be correct. By relying on his instincts and not blindly following the new technology, he likely prevented a major escalation in the Cold War and an unwarranted nuclear event. Had the Soviet Union relied solely on the “AI of the day” without a human override, things might have turned out very, very differently.
So, what does it all mean?
For something that is so confusing and complicated, the answer is likely relatively simple. For industries less dependent on human behavior, AI will likely be a highly beneficial development. However, for those more dependent on us and our whims, caution is likely warranted.
This said, the majority of industries will unsurprisingly fall somewhere in the middle, which means they will be better off if they find a way to leverage, but not rely too heavily on, these new technologies.
The question is, how might this look?
In his bestselling book, “Range”, author David Epstein profiled the 1997 chess match between grandmaster Garry Kasparov and IBM’s supercomputer Deep Blue. After losing to Deep Blue, Kasparov conceded that,
“Anything we can do, machines will do it better. If we can codify it and pass it to computers, they will do it better.”
However, after studying the match more deeply, Kasparov became convinced that something else was at play. In short, he turned to “Moravec’s Paradox”, which makes the case that,
“Machines and humans have opposite strengths and weaknesses. Therefore, the optimal scenario might be one in which the two work in tandem.”
In chess, it boils down to tactics vs. strategy. While tactics are short combinations of moves used to get an immediate advantage, strategy refers to the bigger picture planning needed to win the game. The key is that while machines are tactically flawless, they are much less capable of strategizing because strategy involves creativity.
Kasparov determined through a series of chess scenarios that the optimal chess player was not Deep Blue or an even more powerful machine. Instead, it came in the form of a human “coaching” multiple computers. The coach would first instruct the computers on what to examine. Then, the coach would synthesize this information in order to form an overall strategy and execute on it. These combined human/computer teams proved to be far superior, earning the nickname “centaurs”.
By taking care of the tactics, computers enabled the humans to do what they do best — strategize.
Sounds about right to me.