Last winter, America's central bank -- the Federal Reserve -- was busy patting itself on the back. The Fed's cut in its basic interest rate to 1.75 percent per year seemed to have worked: the recession was ending. Despite sobered expectations about the high-tech revolution's impact on productivity and profits, as well as the jitters inspired by the terrorist attack on the World Trade Center, US businesses, it was believed, would soon start investing heavily again, because borrowing money at 1.75 percent was too good a deal to pass up.
By late spring, those expectations had disintegrated alongside the collapse of Enron, WorldCom, and Arthur Andersen. Suddenly, everyone doubted the integrity of the financial accounts of US companies, and everyone saw just how much the US system of corporate control had deteriorated during the bubble of the 1990s.
The US stock market fell 15 to 20 percent below its winter levels. Spreads between the interest rate at which the US government could borrow and the interest rates at which US corporations could borrow widened. Suddenly, the Fed stopped congratulating itself: a 1.75 percent interest rate might be the right interest rate to fuel a recovery when the Dow-Jones stock market index stands at 10,000, but not when the Dow-Jones index stands at 8,500.
Throughout the summer, corporate investment news remained disappointing. More and more analysts began to speak about the possibility of a "double dip" recession.
Yet throughout all this, the Federal Reserve remained passive. The short-term interest rates it controlled didn't budge. Only in mid-August did the Fed hint that interest rates might be cut.
The informal and unofficial rationale leaking out of the Fed for its inaction had two parts. First, short-term interest rates were already so low that everyone would see further cuts as only temporary. They would thus have little effect on the longer-term rates that really drive business investment.
Second, short-term interest rates were so low that additional cuts would spook the financial markets: if even the Fed thought conditions warranted cuts, the argument went, businesses would respond not by increasing their investments but by reducing them. The Fed's judgment appeared to be that it was largely (if not completely) powerless: it had done all that it could, and the levers of monetary policy were no longer strongly connected to the level of economic activity.
So the US in 2002 joined Japan in what economists have for sixty-five years called a "liquidity trap": a situation in which the short-term nominal interest rates the central bank controls are so low and so loosely connected to the level of aggregate demand that further reductions are ineffective in fighting a recession.
The US situation was not unique: Japan had been in thrall to a liquidity trap since the mid-1990s. But there had been no other examples since the Great Depression of the 1930s.
Whether the US now really is in a liquidity trap is uncertain. How long this state of affairs will last is unknown. Nevertheless, even if the US is only on the edge of a liquidity trap, and even if it moves away from the current state of affairs soon, this is a frightening situation. If monetary policy is not effective, the only lever the US government has to manage its economy is fiscal policy: changes in the government's tax and spending plans that alter its direct contribution to aggregate demand.
But the lesson of the decades since World War II is that the US government -- with its complex, baroque, eighteenth-century organization -- is incapable of changing policy fast enough to make effective use of fiscal policy as a tool for managing the economy. It simply takes too long for changes in taxes and spending to work their way through Congress and the bureaucracy. A US caught in a liquidity trap is, quite simply, a country with no effective tools of macroeconomic management.
There have been two eras since World War II when policymakers -- US policymakers, at least -- believed that they had solved the riddle of the business cycle and had learned how to manage a modern industrial or post-industrial economy. The first was the Keynesian high-water mark of confidence in demand management of the 1960s. It was destroyed by the inflation of the Vietnam era and the oil-price shocks of the 1970s. The second was the decade of successful business-cycle management by Alan Greenspan's independent, apolitical, and technocratic Fed during the 1990s. This second era now appears to have been as fleeting as the first.
Eighty years ago, John Maynard Keynes argued that governments needed to take responsibility for maintaining full employment and price stability, that the pre-World War I gold standard had not been the golden age people thought it was, and that its successes were the result of a lucky combination of circumstances unlikely to be repeated. Keynes was an optimist in believing that governments could learn to manage the business cycle. He would be shocked to look at today's world: a Europe with stubbornly high unemployment, a Japan mired in a decade of near-stagnation, and now a US lacking the policy tools to deal with any additional economic bad news.
J. Bradford DeLong is professor of economics at the University of California at Berkeley, and former Assistant US Treasury Secretary. Copyright: Project Syndicate