JacketFlap connects you to the work of more than 200,000 authors, illustrators, publishers and other creators of books for Children and Young Adults. The site is updated daily with information about every book, author, illustrator, and publisher in the children's / young adult book industry. Members include published authors and illustrators, librarians, agents, editors, publicists, booksellers, publishers and fans. Join now (it's free).
Paula Yoo is a children’s book writer, television writer, and freelance violinist living in Los Angeles. Her latest book, Twenty-two Cents: Muhammad Yunus and the Village Bank, was released last month. Twenty-two Cents is about Muhammad Yunus, Nobel Peace Prize winner and founder of Grameen Bank. He founded Grameen Bank so people could borrow small amounts of money to start a business, and then pay back the bank without exorbitant interest charges. Over the years that followed, Muhammad’s compassion and determination changed the lives of millions of people, as the bank loaned the equivalent of more than ten billion US dollars in micro-credit. This lending has also served to advocate for and empower the poor, especially women, who often have limited options. In this post, we asked her to share advice on what she’s learned about banking, loans, and managing finances while writing Twenty-two Cents.
What are some reasons why someone might want to take out a loan? Why wouldn’t banks loan money to poor people in Bangladesh?
PAULA: People take out a loan when they do not have enough money in the bank to pay for a major purchase, like a car or a house. Sometimes they take out a loan because they need money to set up a business they are starting. Loans are also used to cover major expenses, like unexpected hospital bills for a family member who is sick or big repairs to a house or car. But asking for a loan is a complicated process, because a person has to prove they can pay the loan back in a reasonable amount of time. A person’s financial history can affect whether or not they are approved. Many people who live below the poverty line are at a disadvantage because their financial history is spotty, so banks may not trust them to pay the loan back on time.
In addition, most loans are given to people who are requesting a lot of money for a very expensive purchase like a house or a car. But sometimes a person only needs a small amount of money – for example, a few hundred dollars. This type of loan does not really exist because most people can afford to pay a few hundred dollars. But if you live below the poverty line, a hundred dollars can seem like a million dollars. Professor Yunus realized this when he met Sufiya Begum, a poor woman who only needed 22 cents to keep her business of making stools and mats profitable in her rural village. No bank would loan a few hundred dollars, or even 22 cents, to a woman living in a mud hut. This is what inspired Professor Yunus to come up with the concept of “microcredit” (also known as microfinancing and micro banking).
In TWENTY-TWO CENTS, microcredit is described as a loan with a low interest rate. What is a low interest rate compared to a high interest rate?
PAULA: When you borrow money from a bank, you have to pay the loan back with interest. Interest is an additional amount of money that you owe the bank on top of the original amount you borrowed. There are complex mathematical formulas for calculating what a fair and appropriate interest rate is for a loan, and the rate is also affected by outside factors such as inflation and unemployment. Although it might seem that a lower interest rate is always preferable for the borrower, very low rates can be risky for the general economy: a low interest rate can inflate a potential “economic bubble” which could burst in the future and cause an economic “depression.” Interest rates are adjusted to try to prevent these problems, which means that at some times interest rates are higher for borrowers than at others.
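A rough sketch of what the interest rate means for what a borrower repays. The $500 loan, the two-year term, and the three rates below are our illustrative numbers, not figures from the book:

```python
def total_repaid(principal, annual_rate, years):
    """Total owed on a loan with annual compounding, repaid as a single lump sum."""
    return principal * (1 + annual_rate) ** years

loan = 500.0  # a small loan of the kind microcredit targets

for rate in (0.05, 0.20, 1.00):  # a low rate, a high rate, a predatory rate
    print(f"{rate:>4.0%} interest -> repay ${total_repaid(loan, rate, 2):,.2f} after 2 years")
```

At a predatory 100% annual rate the borrower owes four times the original loan after two years, which is how a loan shark can trap someone who started out only a few hundred dollars short.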
What is a loan shark?
PAULA: A loan shark is someone who offers loans to poor people at extremely high interest rates. This is also known as “predatory lending.” It can be illegal in several cases, especially when the loan shark uses blackmail or threats of violence to make sure a person pays back the loan by a certain deadline. Often people in desperate financial situations will go to a loan shark to help them out of a financial problem, only to realize later that the loan shark has made the problem worse, not better.
Did your parents explain how a bank works to you when you were a child? Or did you learn about it in school?
PAULA: I remember learning about how a bank works in elementary school and through the “Schoolhouse Rock!” educational cartoons they would show on Saturday mornings. But overall, I would say I learned about banking as a high school student when I got my first minimum-wage job at age 16, as a cashier at a Marshalls department store. I learned how banking worked through a job and real-life experience.
TWENTY-TWO CENTS is a story about economic innovation. Could you explain why Muhammad Yunus’s Grameen Bank was so innovative or revolutionary?
PAULA: Muhammad Yunus’ theories on microcredit and microfinancing are revolutionary and innovative because they provided a practical solution for how banks can offer loans to poor people who do not have any financial security. By having women work together as a group to understand the math behind the loan (along with other important concepts) and borrow as a group, Yunus’ unique idea gave banks the confidence to put their trust in these groups of women. The banks were able to lend with full confidence that the women would pay them back in a timely manner. The humanitarian aspect of Yunus’ economic theories was also revolutionary, because it gave these poverty-stricken women a newfound sense of self-confidence. His theories helped break the cycle of poverty for these women, who were able to save money and finally become self-sufficient. The Nobel Committee praised Yunus’ microcredit theories as one of the first steps towards eradicating poverty, stating, “Lasting peace cannot be achieved unless large population groups find ways in which to break out of poverty.”
Twenty-two Cents: Muhammad Yunus and the Village Bank is a biography of 2006 Nobel Peace Prize winner Muhammad Yunus, who founded Grameen Bank and revolutionized global antipoverty efforts by developing the innovative economic concept of micro-lending.
The business press and general media often lament that firm executives are exhibiting “short-termism”, succumbing to pressure from stock market investors to maximize quarterly earnings while sacrificing long-term investments and innovation. In our new article in the Socio-Economic Review, we suggest that this complaint is partly accurate, but partly not.
What seems accurate is that the maximization of short-term earnings by firms and their executives has become somewhat more prevalent in recent years, and that some of the roots of this phenomenon trace back to stock market investors. What is inaccurate, though, is the assumption that investors, even “short-term traders”, inherently attend to short-term quarterly earnings when making trading decisions. Even “short-term trading” (buying stocks with the aim of selling them after a few minutes, days, or months) does not equal or necessitate a “short-term earnings focus”, i.e., making trading decisions based on short-term earnings (let alone on short-term earnings only). This means that when the media observe, or executives perceive, that firms are being pressured by stock market investors to focus on short-term earnings, that pressure is partly illusory.
The illusion, in turn, rests on the phenomenon of the “vociferous minority”: a minority of stock investors may focus on short-term earnings, producing a weak correlation between short-term earnings and stock price jumps or drops. The illusion is born when this gets interpreted as if most or all investors (the majority) were focusing on short-term earnings only. Alas, in dynamic markets such an interpretation may lead to a self-fulfilling prophecy, whereby a growing number of investors join the vociferous minority and focus increasingly on short-term earnings (even if the majority still do not focus on short-term earnings alone). More importantly, and more unfortunately, firm executives may start to maximize short-term earnings too, under the (inaccurate) illusion that the majority of investors prefer it.
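The dynamic can be illustrated with a deliberately crude toy model of our own (the initial share, media amplification factor, and switching rate are arbitrary assumptions, not estimates from the article): investors perceive the short-term-focused minority as larger than it is, and once the perceived share looks like a majority, a fraction of the remaining investors imitate it each period.

```python
def simulate(initial_share=0.2, amplification=3.0, switch_rate=0.15, periods=20):
    """Toy self-fulfilling-prophecy dynamic: coverage inflates the perceived
    share of short-term-focused investors; if that perceived share looks like
    a majority, a fraction of the rest switch to match it."""
    share = initial_share
    path = [share]
    for _ in range(periods):
        perceived = min(1.0, amplification * share)  # the illusion
        if perceived > 0.5:  # "everyone else is doing it"
            share += switch_rate * (1 - share)
        path.append(share)
    return path

path = simulate()
print(f"short-term-focused share: {path[0]:.0%} at start, {path[-1]:.0%} after 20 periods")
```

In this sketch a 20% minority, amplified to look like a 60% majority, ends up as an actual near-unanimity; without the amplification (try `amplification=1.0`) the minority never grows at all.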
A final paradox is the role of the media. Of course, the media have good intentions in lamenting short-termism in the markets, trying to draw attention to an unsatisfactory state of affairs. However, such lamenting stories may actually contribute to the emergence of the self-fulfilling prophecy: despite their lamenting tone, the articles still emphasize that market participants focus only on short-term earnings. This feeds the illusion that all investors focus on short-term earnings only, which in turn may lead a growing number of investors and firms to join the minority’s bandwagon, in the belief that everyone else is doing so too.
Should the media do something different, then? Well, we suggest that in this case, the media should report more on “positive stories”, or cases whereby firms have managed to create great innovations with a patient, longer-term focus. The media could also report on an increasing number of investors looking at alternative, long-term measures (such as patents or innovation rates) instead of short-term earnings.
So, more stories like this one about Rolls-Royce, but without claiming or lamenting that most investors just want “quick results” (i.e., without portraying cases like Rolls-Royce as rare exceptions). Such positive stories could, in the best scenario, contribute to a reverse self-fulfilling prophecy, whereby more and more investors, and then firm executives, would shed some of the excessive focus on short-term earnings that they might currently have.
Political economy is back on the centre stage of development studies. The ultimate test of its respectability is that the World Bank has realised that it is not possible to separate social and political issues such as corruption and democracy from other factors that influence the effectiveness of its investments, and started using the concept.
It predates the creation of “economics” as a discipline. Adam Smith, David Ricardo, Thomas Malthus, James Mill, and a generation later Karl Marx and Friedrich Engels, explored how groups or classes in society exploited each other or were exploited, and used their conclusions to create theories of change or growth.
Marx’s ideas were taken up in the 1950s by economists and sociologists of the left, such as Paul Baran (The Political Economy of Growth, 1957) and later Samir Amin (The Political Economy of the Twentieth Century, 2000) who linked it to theories of imperialism and neo-colonialism to interpret what was happening in newly independent African countries where nationalist political parties had taken power.
Marx and Engels in their early writings, and Marxist orthodoxy subsequently, espoused determinist theories in which development went through pre-determined stages – primitive forms of social organisation, feudalism, capitalism, and then socialism. But in their later writings Marx and Engels were much more open, and recognised that some pre-capitalist formations could survive, and that there was no single road to socialism. Class analysis, and exploration of the economic interests of powerful classes, and their uses of the technologies available to them, could inform a study of history, but not substitute for it.
That was how I interpreted what happened in Tanzania in the 1970s. My account was built around the economic interests of those involved, and the mistakes made, both inside and outside Tanzania. It focused on the choices made by those who controlled the Tanzanian state or negotiated “foreign aid” deals with Western governments—Issa Shivji’s bureaucratic bourgeoisie. These themes are still current today.
I am not alone. Michael Lofchie (The Political Economy of Tanzania, 2014) focuses on the difficult years of structural adjustment in the 1980s and 1990s. He shows how the salaried elite could personally benefit from an overvalued exchange rate. From 1979 on, under the influence of the charismatic President Julius Nyerere, Tanzania resisted the IMF and World Bank, which urged it to devalue. But eventually, around the mid-1980s, the elite realised that they could make even bigger financial gains if the country devalued and markets were opened, which would allow them to make money from trade or production. They were becoming a productive bourgeoisie.
Lofchie’s analysis can be contested. The benefits of the chaos that resulted from the extremely over-valued exchange rates of the 1980s were reaped by only a few. It is true that rapid growth followed from around 1990 to the present, but that is also due to the high price of gold on international markets and the rapid expansion of gold mining and tourism. There is still plenty of evidence of individuals making money illegitimately – corruption is ever present in the political discourse, and will continue to be so up till the Presidential elections due in October 2015.
A challenge for the ruling class in Tanzania, emerging from the 1970s, was whether they could convert their economic strategies into meaningful growth and benefits for the population. By 2011 the challenge was even more acute, because very large reserves of gas had been discovered off the coast of Southern Tanzania, so money for investment would no longer be a binding constraint. But would those resources be used to create real assets which would create the prerequisites for rapid expansions in manufacturing, services and especially agriculture? Or would they be frittered away on imports of non-productive machinery and infrastructure (such as the non-existent electricity generators purchased through the Richmond Project in 2006, in which several leading members of the ruling political party were implicated)? Or end up in Swiss bank accounts? The jury is very much still out. Achieving the current ambition of a rapid transition to a middle-income country will require much greater understanding of engineering and agricultural science, much better contracts than have recently been achieved, and more proactive responses to the challenges of corruption. Tanzania will need to take its own political economy seriously.
Headline image credit: Tanzania – Mikumi by Marc Veraart. CC-BY-2.0 via Flickr.
It is well known that obesity rates have been increasing around the Western world.
The American obesity prevalence was less than 20% in 1994. By 2010, obesity prevalence was greater than 20% in all states, and 12 states had a prevalence of 30% or more. Approximately 17% of American children aged 2-19 were obese in 2011-2012. In the UK, the prevalence of obesity was similar to the US numbers: between 1993 and 2012 it increased from 13.2% to 24.4% for men and from 16.4% to 25.1% for women. Obesity prevalence is around 18% for children aged 11-15 and 11% for children aged 2-10.
Policy makers, researchers, and the general public are concerned about this trend because obesity is linked to an increased likelihood of health conditions such as diabetes and heart disease, among others. The increase in obesity prevalence among children is of particular concern because childhood obesity may increase the likelihood of being obese as an adult, leading to even higher rates of these health conditions in the future.
Researchers have investigated many possible causes for this trend including lower rates of participation in physical activity and easier access to fast food. Anderson, Butcher, and Levine (2003) identified maternal employment as a possible culprit when they noticed that in the US the timing of these two trends was similar. While the prevalence of obesity was increasing for children so was the employment rate of mothers. Other researchers have found similar results for other countries – more hours of maternal employment is related to a higher likelihood of children being obese.
What could be the relationship between a mother’s hours of work and childhood obesity? When mothers work they have less time to devote to activities around the home, which may mean less concern about nutrition, more meals eaten outside of the home or less time devoted to physical activities. On the other hand, more maternal employment could mean more income and an ability to purchase more nutritious food or encourage healthy activities for children.
We looked at this relationship for Canadian children 12-17 years old, an older group than studied in earlier papers. For youths aged 12 to 17 in Canada, the obesity prevalence was 7.8% in 2008. We analysed not only the relationship between maternal employment and child obesity, but also the possible reasons that maternal employment may affect child obesity.
We find that the effect of hours of work differs from the effect of weeks of work. More hours of maternal work are related to behaviours we expect to be associated with higher rates of obesity: more television viewing, a lower likelihood of eating breakfast daily, and a higher allowance. On the other hand, more weeks of maternal employment are related to behaviour expected to lower obesity: less television viewing and more physical activity. This difference between hours and weeks of work raises some interesting questions. How do families adapt to different aspects of the labour market? When mothers work for more weeks, does this indicate a more regular attachment to the labour force? Do these families have schedules and routines that allow them to manage their child’s weight?
Unlike other studies that focus on younger children, we do not find a relationship between maternal employment and likelihood of obesity for adolescents. Does the impact of maternal employment at younger ages not last into adolescence? Is adolescence a stage during which obesity status is difficult to predict?
The debate over appropriate policy remedies should not focus on whether mothers should work, but rather on what children are doing while mothers are working. What can be done to reduce the obesity prevalence in adolescents? Some ideas include working with the education system and local communities to create an environment that fosters a healthy weight status for adolescents, supporting families with quality childcare, providing viable, high-quality alternative activities, and offering flexible work hours. Programs or policies that help families establish a healthy routine are important. It may not be a case of simply providing activities for adolescents, but of making these activities easy for families to attend on a regular basis.
I have written about the dangers of making economic policy on the basis of ideology rather than cold, hard economic analysis. Ideologically based economic policy has laid the groundwork for many of the worst economic disasters of the last 200 years.
The decision to abandon the first and second central banks in the United States in the early 19th century led to chronic financial instability for much of the next three quarters of a century.
Britain’s re-establishment of the gold standard in 1925, which encouraged other countries to do likewise, contributed to the spread and intensification of the Great Depression.
Europe’s decision to adopt the euro, despite the fact that economic theory and history suggested that it was a mistake, contributed to the European sovereign debt crisis.
President George W. Bush’s decision to cut taxes three times during his first term, while embarking on substantial spending connected to the wars in Afghanistan and Iraq, was an important driver of the macroeconomic boom-bust cycle that led to the subprime crisis.
In each of these four cases, a policy was adopted for primarily ideological, rather than economic reasons. In each case, prominent thinkers and policy makers argued forcefully against adoption, but were ignored. In each case, the consequences of the policy were severe.
So how do we avoid excessively ideological economic policy?
One way is by making sure that policy-makers are exposed to a wide range of opinions during their deliberations. This method has been taken on board by a number of central banks, where many important officials are either foreign-born or have considerable policy experience outside their home institution and/or country. Mark Carney, a Canadian who formerly ran that country’s central bank, is the first non-British governor of the Bank of England in its 320-year history. Stanley Fischer, who was born in southern Africa and has been governor of the Bank of Israel, is now the vice chairman of the US Federal Reserve. The widely respected governor of the Central Bank of Ireland, Patrick Honohan, spent nearly a decade at the World Bank in Washington, DC. One of Honohan’s deputies is a Swede with experience at the Hong Kong Monetary Authority; the other is a Frenchman.
But isn’t it unreasonable to expect politicians to come to the policy-making process without any ideological bent whatsoever? After all, don’t citizens deserve to see a grand contest of ideas between those who propose higher taxes and greater public spending and those who argue for less of both?
In fact, we do expect, and want, our politicians to come to the table with differing views. Nonetheless, politicians often support their arguments with unfounded assertions that their policies will lead to widespread prosperity, while those of their adversaries will lead to doom. The public needs to be able to subject those competing claims to cold, hard economic analysis.
Fortunately, the United States and a growing number of other countries have established institutions that are mandated to provide high quality, professional, non-partisan economic analysis. Typically, these institutions are tasked with forecasting the budgetary effects of legislation, making it difficult for one side or the other to tout the economic benefits of their favorite policies without subjecting them to a reality check by a disinterested party.
In the United States, this job is undertaken by the Congressional Budget Office (CBO) which offers well-regarded forecasts of the budgetary effects of legislation under consideration by Congress. [Disclaimer: The current director of the CBO is a graduate school classmate of mine.]
The CBO is not always the most popular agency in Washington. When the CBO calculates that the cost of a congressman’s pet project is excessive, that congressman can be counted on to take the agency to task in the most public manner possible.
According to the New York Times, the CBO’s “…analyses of the Clinton-era legislation were so unpopular among Democrats that [then-CBO Director Robert Reischauer] was referred to as the ‘skunk at the garden party.’ It has since become a budget office tradition for the new director to be presented with a stuffed toy skunk.”
For the most part, however, congressional leaders from both sides of the aisle hold the CBO and its work in high regard, as do observers of the economic scene from the government, academia, journalism, and the private sector.
These institutions, the CBO and its counterparts abroad, each have their own institutional history and slightly different responsibilities. For the most part, however, they are constituted as non-partisan, independent agencies of the legislative branch of government. We should be grateful for their existence.
Congrats (!) to House of Debt authors Atif Mian and Amir Sufi for making the shortlist for the Financial Times and McKinsey Business Book of the Year. Now in competition with five other titles from an initial offering of 300 nominations, House of Debt—and its story of the predatory lending practices behind the Great American Recession, the burden of consumer debt on fragile markets, and the need for government-bailed banks to share risk-taking rather than skirt blame—will find out its fate at the November 11th award ceremony.
From the official announcement:
“The provocative questions raised by this year’s titles have been addressed with originality, depth of research and lively writing.”
The award, now in its 10th edition, aims to find the book that provides “the most compelling and enjoyable insight into modern business issues, including management, finance and economics.” The judges—who include former winners Mohamed El-Erian and Steve Coll—also gave preference this year to books “whose influence is most likely to stand the test of time.”
To read more about House of Debt, including a list of reviews and a link to the authors’ blog, click here.
Dystopias are trending in contemporary popular culture. Novels and movies abound that deal with fictional societies in which humans, individually and collectively, have to cope with repressive, technologically powerful states that do not usually care for the well-being or safety of their citizens, but instead focus on their control and extortion. The latest resounding dystopian success is The Hunger Games—a box-office hit located in a nation known as Panem, which consists of 12 poor districts, starved for resources, under the absolute control of a wealthy centre called the Capitol. In the story, competitive struggle is carried to its brutal extreme, as poor young adults in a reality TV show must fight to the death in an outdoor arena controlled by an authoritarian Gamemaker, until only one individual remains. The poverty and starvation, combined with terror, create an atmosphere of fear and helplessness that pre-empts any resistance based on hope for a better world.
We fear that part of the popularity of this science-fiction action drama, in Europe at least, lies in the fact that it has a real-life analogue: the Spectacle—in Debord’s (1967) sense of the term—of the current ‘competitiveness game’ in which the Eurozone economies are fighting for their survival. Its Gamemaker is the European Central Bank (ECB), which, wedded to Berlin’s hard line that fiscal profligacy in combination with rigid, over-regulated labour markets has created a deep crisis of labour-cost competitiveness, has kept the pressure on Eurozone countries so as to make them pay for their alleged fiscal sins. The ECB insists that there is ‘no gain without pain’ and that the more one is prepared to suffer, the more one is expected to prosper later on.
The contestants in the game are the Eurozone members—each one trying to bootstrap its economy out of the throes of the most severe crisis in living memory. The audience judging each country’s performance is not made up of reality-TV watchers but of financial (bond) markets and credit rating agencies, whose supposedly rational views can make or break any economy. The name of the game is boosting cost-competitiveness and exports, and its rules were carved in stone in March 2011 in the Euro Plus ‘Competitiveness Pact’ (Gros, 2011).
Raising competitiveness here means reducing costs, and more specifically cutting labour costs, which means lowering the wage share by means of reducing employment protection, lowering minimum wages, raising retirement ages, lowering pensions and, last but not least, cutting real wages. Economic inequality, poverty and social exclusion will all initially increase, but don’t worry: structural reforms hurt in the beginning, but their negative effects will be offset over time by changes in ‘confidence,’ boosting spending and exports. But it will not work, and the damage done by austerity and structural reforms is enormous; sadly, most of it was and is avoidable. The wrong policies follow from ‘design faults’ built into the Euro project right from the start—the creation of an ‘independent’ European Central Bank being the biggest ‘fault’, as it precluded the necessary co-ordination of fiscal and monetary policy and disabled the central banking system from providing support to national governments (Arestis and Sawyer, 2011). But as Palma (2009) reminds us, it is wrong to think about these ‘faults’ as being caused by perpetual incompetence—the monetarist Euro project should instead be read as a purposeful ‘technology of power’ to transform capitalism into a rentiers’ paradise. This way, one can understand why policy makers persist in abandoning the unemployed.
A landmark achievement by Naomi Klein, This Changes Everything is essential reading on the ways climate change creates opportunities for us to reexamine our entire free-market system — and will hopefully provoke us into lasting, significant action.
China has all but overtaken the United States based on GDP at newly-computed purchasing power parity (PPP) exchange rates, twenty years after Paul Krugman predicted: “Although China is still a very poor country, its population is so huge that it will become a major economic power if it achieves even a fraction of Western productivity levels.” But will it eclipse the United States, as Arvind Subramanian has claimed, with the yuan eventually vying with the dollar for international reserve currency status?
Not unless China battles three economic foes. One is well-known: diminishing marginal returns to capital. Two others have received less attention. The first is Carlos Diaz-Alejandro. Not the man, but the results uncovered by his research on the Southern Cone following the opening up of its capital account that culminated in a sovereign debt crisis and contributed to Latin America’s lost 1980s. If the capital account is liberalized before the domestic financial system is ready, the country sets itself up for a fall: goodbye financial repression, hello financial crash. The second is the “reality of transition”: rejuvenating growth requires hard budgets and competition to improve resource allocation and stimulate innovation, counterbalanced with a more competitive real exchange rate. This is the principal insight from the transition in Central and Eastern Europe (CEE), which was far simpler than anything China faces.
China was able to raise total factor productivity (TFP) growth as an offset to diminishing marginal returns to capital, especially after joining the World Trade Organization (WTO) in 2001, and faster growth was accompanied by a rising savings rate. But TFP growth is hard to sustain. Any developing country targeting growth above the steady state level given by the sum of human capital growth, TFP growth and population growth (the latter two falling rapidly in China) will find that its investment rates need to continually increase unless it can rejuvenate TFP growth. China’s investment rates have risen from around 42% of GDP over 2005-7 (prior to the global crisis) to 48% in recent years even as growth has dropped from the 12% to the 7.5% range. Savings rates have hovered around 50%, reducing current account surpluses (numbers drawn from IMF 2010 and 2014 Article IV reports).
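The investment and growth figures above can be turned into a back-of-the-envelope incremental capital-output ratio (ICOR), i.e. the GDP points of investment needed per point of growth. The calculation below uses only the numbers quoted in the text:

```python
def icor(investment_share, growth_rate):
    """Incremental capital-output ratio: investment rate / growth rate.
    A higher value means each point of growth takes more investment."""
    return investment_share / growth_rate

# Figures quoted above, drawn from IMF 2010 and 2014 Article IV reports
pre_crisis = icor(0.42, 0.12)   # 2005-7: ~42% of GDP invested, ~12% growth
recent = icor(0.48, 0.075)      # recent years: ~48% invested, ~7.5% growth

print(f"ICOR 2005-7: {pre_crisis:.1f}; recent: {recent:.1f}")
```

The ratio nearly doubles, from about 3.5 to 6.4: the diminishing-returns squeeze in a single number. Absent a TFP revival, holding growth steady would require investment rates to keep climbing.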
This configuration has forced China to choose between either investing even more, or lowering growth targets. It has chosen the latter, with its leaders espousing anti-corruption, deleveraging, environmental improvement and structural reform to achieve higher quality growth. The central bank, People’s Bank of China (PBoC), has reaffirmed its goal of internationalizing the yuan and liberalizing the capital account.
China’s proposed antidote is to “rebalance” from investment and exports to domestic consumption. But growth arithmetic would require consumption to grow at unrealistic rates, given the relative shares of investment and private consumption in GDP, even to meet scaled-down growth targets. Besides, households need better social benefits and market interest rates on bank deposits to save less and consume more. Hukou reform alone, or placing social benefits received by rural migrants on a par with their urban counterparts, could easily cost 3% of GDP a year for the next seven years as some 150 million additional people gain access to such benefits—quite apart from the public investment needed to upgrade urban infrastructure, according to calculations shared by Xinxin Li of the Observatory Group. And the failure to liberalize bank deposit rates has led to the rise of “wealth management products” in the shadow banking system. These “WMPs” offer higher returns but are poorly regulated and more risky.
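The "growth arithmetic" can be made concrete with a back-of-the-envelope calculation: each expenditure component contributes roughly its GDP share times its growth rate. The shares and growth rates below are hypothetical round numbers chosen only to illustrate the point (the 48% investment share is from the text; the ~35% consumption share is an assumption):

```python
# Back-of-the-envelope rebalancing arithmetic (illustrative numbers).

def required_consumption_growth(target, c_share, i_share, i_growth, other_growth):
    """Solve  target = c_share*c + i_share*i_growth + rest*other_growth
    for c, where rest = 1 - c_share - i_share."""
    rest = 1.0 - c_share - i_share
    return (target - i_share * i_growth - rest * other_growth) / c_share

# Suppose consumption is ~35% of GDP and investment ~48%. If investment
# growth slows to 3% while the rest of the economy grows at the 7.5%
# target rate:
c_growth = required_consumption_growth(target=7.5, c_share=0.35,
                                       i_share=0.48, i_growth=3.0,
                                       other_growth=7.5)
# Consumption would need to grow at roughly 13.7% a year -- the kind of
# unrealistic rate the text refers to.
```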
Indeed, total social financing, a broad measure of credit, has soared from 125% to 200% of GDP over the five years 2009-2013 (Figure 2 in the July 2014 IMF Article IV report, with Box 5 warning that such a rapid trajectory usually ends in tears). Local government debt was estimated at 32% of GDP in mid-2013, much of it short-term and used to fund infrastructure projects and social housing with long paybacks. Housing prices show the signs of a bubble, especially away from the four major cities. Corporate credit is 115% of GDP, about half of it collateralized by land or property. While the focus recently has been on risks from shadow banking, it is hard to separate the shadow from the core. Besides, WMPs have become intertwined with the booming real estate market, a major engine of growth yet the centre of a “web of vulnerabilities” (to quote the IMF) encompassing banks, shadow banks, and local government finances. A real estate shock would ripple through the system, lowering growth and forcing bailouts. The gross cost of the bank workout at the end of the 1990s was 15% of GDP in a much simpler world!
2014 began with fears of a hard landing and an impending default by a bankrupt coal mine on a $500 million WMP-funded loan intermediated by a mega-bank. The government eventually intervened rather than let investors take a hit and risk a confidence crisis. And starting in April, stimulus packages were launched to meet the 7.5% growth target, a tacit admission that rebalancing is not working. But concerns persist around real estate. Besides, stimulus will help only temporarily and China is likely to be facing the same questions about growth and financial vulnerability by the end of the year.
With rebalancing infeasible, and investing even more prohibitively costly, virtually the only remaining option is to spur total factor productivity growth: China is still far from the global technological frontier. This calls for a package that cleans up the financial sector and implements hard budgets and genuine competition, especially for the state-owned enterprises (SOEs), while keeping real exchange rates competitive. The real appreciation of the past few years may have been offset by rising productivity, but continued appreciation will make it harder for the domestic economy to restructure and create 12 million jobs a year to absorb new graduates and displaced SOE workers.
In sum, China must heed Diaz-Alejandro. No one knows what the non-performing loans ratio is in China and few believe the official rate of 1%. If the cornerstone of a financial system is confidence and transparency, China is severely deficient. This must first be fixed and market-determined interest rates adopted before entertaining hopes of internationalizing the currency. China must also accept the reality of transition; the formidable remaining agenda in the fiscal, financial, social, and SOE sectors reminds us that China is still in transition to a full-fledged market economy.
The combination of a financial clean up and the policy trio of hard budgets, competition, and a competitive real exchange rate will improve resource allocation and force innovation, boosting total factor productivity growth. But doing this is hard—that’s the essence of the “middle-income trap”. Huge vested interests will be encountered, evoking Raghuram Rajan’s description of the middle-income trap as one “where crony capitalism creates oligarchies that slow down growth”. Dealing with this agenda is the Chinese leadership’s biggest challenge.
The era of cheap China is ending, while the ability of the government to virtually decree the growth rate has fallen victim to diminishing returns to capital. Diaz-Alejandro and the reality of transition are no less important as China seeks a way forward.
Headline image credit: The Great Wall in fall, by Canary Wu. CC-BY-SA-2.0 via Wikimedia Commons.
Innovation is a primary driver of economic growth and of the rise in living standards, and a substantial body of research has been devoted to documenting the welfare benefits from it (an example being Trajtenberg’s 1989 study). Few areas have experienced more rapid innovation than the Personal Computers (PC) industry, with much of this progress being associated with a particular component, the Central Processing Unit (CPU). The past few decades had seen a consistent process of CPU innovation, in line with Moore’s Law: the observation that the number of transistors on an integrated circuit doubles every 18-24 months (see figure below). This remarkable innovation process has clearly benefitted society in many, profound ways.
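As a quick illustration of the compounding involved, Moore's Law can be written as a doubling formula; the ten-year horizon below is just an example:

```python
# Moore's Law as stated above: transistor counts double every 18-24
# months, i.e. grow by a factor of 2 ** (years / doubling_period).

def moores_law_factor(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

# Over ten years, an 18-month doubling period implies roughly a
# 100-fold increase in transistor counts; a 24-month period, 32-fold.
fast = moores_law_factor(10, 1.5)
slow = moores_law_factor(10, 2.0)
```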
A notable feature of this innovation process is that a new PC is often considered “obsolete” within a very short period of time, leading to the rapid elimination of non-frontier products from the shelf. This happens despite the heterogeneity of PC consumers: while some (e.g., engineers or gamers) have a high willingness-to-pay for cutting edge PCs, many consumers perform only basic computing tasks, such as word processing and Web browsing, that require modest computing power. A PC that used to be on the shelf, say, three years ago, would still adequately perform such basic tasks today. The fact that such PCs are no longer available (except via a secondary market for used PCs which remains largely undeveloped) raises a natural question: is there something inefficient about the massive elimination of products that can still meet the needs of large masses of consumers?
Consider, for example, a consumer whose currently-owned, four-year old laptop PC must be replaced since it was severely damaged. Suppose that this consumer has modest computing-power needs, and would have been perfectly happy to keep using the old laptop, had it remained functional. This consumer cannot purchase the old model since it has long vanished from the shelf. Instead, she must purchase a new laptop model, and pay for much more computing power than she actually needs. Could it be, then, that some consumers are actually hurt by innovation?
A natural response to this concern might be that the elimination of older PC models from the shelves likely indicates that demand for them is low. After all, if we believe in markets, we may think that high levels of demand for something would provide ample incentives for firms to offer it. This intuition, however, is problematic: as shown in seminal theoretical work by Nobel Prize laureate Michael Spence, the set of products offered in an oligopoly equilibrium need not be efficient due to the misalignment of private and social incentives. The possibility that yesterday’s PCs vanish from the shelf “too fast” cannot, therefore, be ruled out by economic theory alone, motivating empirical research.
A recent article addresses this question through a retrospective analysis of the U.S. home personal computer market during the years 2001-2004. Data analysis is used to explore the nature of consumers’ demand for PCs, and firms’ incentives to offer different types of products. Product obsolescence is found to be a real issue: the average household’s willingness-to-pay for a given PC model is estimated to drop by US$257 as the model ages by one year. Nonetheless, substantial heterogeneity is detected: some consumers’ valuation of a PC drops at a much faster rate, while from the perspective of other consumers, PCs become “obsolete” at a much slower pace.
The paper focuses on a leading innovation: Intel’s introduction of its Pentium M® chip, widely considered a landmark in mobile computing. This innovation is found to have crowded out laptops based on older Intel technologies, such as the Pentium III® and Pentium 4®. It is also found to have made a substantial contribution to aggregate consumer surplus, boosting it by 3.2% to 6.3%.
These substantial aggregate benefits were, however, far from uniform across different consumer types: the bulk of the benefits were enjoyed by the 20% least price-sensitive households, while the benefits to the remaining 80% were small and sometimes negligible. The analysis also shows that the benefits from innovation could have “trickled down” to the masses of price-sensitive households, had the older laptop models been allowed to remain on the shelf alongside the cutting-edge ones. This would have happened because the presence of the new models would have exerted downward pressure on the prices of older models. In the market equilibrium, this channel is shut down, since the older laptops promptly disappear.
Importantly, while the analysis shows that some consumers benefit from innovation much more than others, no consumers were found to be actually hurt by it. Moreover, the elimination of the older laptops was not found to be inefficient: the social benefits from keeping such laptops on the shelf would have been largely offset by fixed supplier costs.
So what do we make of this analysis? The main takeaway is that one has to go beyond aggregate benefits and consider the heterogeneous effects of innovation on different consumer types, and the possibility that rapid elimination of basic configurations prevents the benefits from trickling down to price-sensitive consumers. Just the same, the paper’s analysis is constrained by its focus on short-run benefits. In particular, it misses certain long-term benefits from innovation, such as complementary innovations in software that are likely to trickle down to all consumer types. Additional research is, therefore, needed in order to fully appreciate the dramatic contribution of innovation in personal computing to economic growth and welfare.
Quite abruptly, income inequality has returned to the political agenda as a prominent societal issue. At least part of this can be attributed to Piketty’s provocative thesis of rising concentration at the top end of the income and wealth distribution in Capital in the Twenty-First Century (2014), which provided some academic grounding for the ‘We are the 99 percent’ Occupy movement slogan. Yet this renewed attention to inequality rests on broader concerns than concentration at the very top alone. There is growing evidence that earnings in the bottom and the middle of the distribution have hardly risen, if at all, during the last 20 years or so. Incomes are becoming more dispersed not only at the top, but also more generally within developed countries.
We should distinguish between increasing concentration at the top and the rise of inequality across the entire population. Even though both developments might take place simultaneously, the causes, consequences, and possible policy responses differ.
The most widely accepted explanation for rising inequality across the entire population is so-called skill-biased technological change. Current technological developments are particularly suited for replacing routine jobs, which disproportionally lie in the middle of the income distribution. In addition, low- and middle-skilled manufacturing jobs are gradually being outsourced to low-wage countries (see for instance Autor et al., 2013). Decreasing influence of trade unions and more decentralised levels of wage coordination are also likely to play a role in creating more dispersed earnings patterns.
Increased globalisation and technological change are unlikely to be the main drivers of rising top income shares, though the larger size of markets does allow for higher rewards at the top. Since the rise of top income shares has been especially an Anglo-Saxon phenomenon, and since the majority of the top 1 per cent in these countries comes from the financial sector, executive compensation practices play a role. Marginal top tax cuts implemented in these countries and inherited wealth are potentially important as well.
So should we care about these larger income differences? At the end of the day this remains a normative question. Yet whether higher levels of inequality have negative effects on the size of our total wealth is a more technical issue, albeit no less contested in political economy. Again, we should differentiate between the effects of increasing concentration at the top and of broader, higher levels of inequality. To start with the latter, higher dispersion could incite people to put forth additional effort, as the rewards will be higher as well. Yet when income inequality translates into unequal opportunities, there will be an economic cost, as Krugman also argues. Investment in human capital, for instance, will be lower, as Standard & Poor’s notes for the US.
High top income shares do not lead to suboptimal human capital investment, but will disrupt growth if the rich use their wealth for rent-seeking activities. Stiglitz, as well as Hacker and Pierson in Winner-Take-All Politics (2010), argue that this is indeed taking place in the US. On the other hand, a concentration of wealth could facilitate large and risky investments with positive externalities.
If large income differences indeed come at the price of lower total economic output, then the solution seems simple: redistribute income from the rich to the poor. Yet, both means-tested transfers and progressive taxes based on economic outcomes such as income will negatively affect economic growth as they lower the incentives to gain additional wealth. It might thus be that ‘the cure is worse than the disease’, as the IMF phrases this dilemma. Nevertheless, there can be benefits of redistribution in addition to lessening any negative effects of inequality on growth. The provision of public insurance could have stimulating effects by allowing individuals to take risks to generate income.
Where to go from here? First of all, examining whether inequality or redistribution affects growth requires data that make a clean distinction between inequality before and after redistribution, across countries and over time. There are interesting academic endeavours trying to decompose inequality into a part resulting from differences in effort and a part due to fixed circumstances, such as gender, race, or the educational level of parents. This can help us understand which ‘types’ of inequality negatively affect growth and which might boost it. Moreover, redistribution itself can be achieved through multiple means, some of which, such as higher inheritance taxes, are likely to be more pro-growth than others, such as higher income tax rates.
All things considered, whether inequality or redistribution hampers growth is too broad of a question. Inequality at which part of the distribution, due to what economic factors, and how the state intervenes all matter a great deal for total growth.
On September 18, Scots will go to the polls to vote on the question “Should Scotland be an independent country?” A “yes” vote would end the political union between England and Scotland that was enacted in 1707.
The main economic reasons for independence, according to the “Yes Scotland” campaign, are that an independent Scotland would have more affordable daycare, free university tuition, more generous retirement and health benefits, less burdensome regulation, and a more sensible tax system.
As a citizen of a former British colony, I find it tempting to compare the situation in Scotland with those of British colonies and protectorates that gained their independence, such as the United States, India/Pakistan, and a variety of smaller countries in Africa, Asia, and the Americas. Such a comparison, however, is unwarranted.
Historically, independence movements have been motivated by absence of representation in the institutions of government, discrimination against the local population, and economic grievances. These arguments do not hold in the Scottish case.
Scotland is an integral part of the United Kingdom. It is represented in the British Parliament in Westminster, where it holds 9% of the seats—fair representation, considering that Scotland’s population is a bit less than 8.5% of total UK population.
Scots do not seem to have been systematically discriminated against. At least eight prime ministers since 1900, including recent ex-PMs Tony Blair and Gordon Brown, were either born in Scotland or had significant Scottish connections.
Scotland is about as prosperous as the rest of the UK, with output per capita greater than that of Wales, Northern Ireland, and England outside of London (see figure).
Because the referendum asks only whether Scotland should become independent and contains no further details on how the break-up with the UK would be managed, it is important to consider some key economic issues that will need to be tackled should Scotland declare its independence.
Since Scotland already has a parliament that makes many spending and taxing decisions, we know something about Scottish fiscal policy. According to World Bank figures, excluding oil (a resource that is expected to decline in importance in coming decades), Scotland’s budget deficit as a share of gross domestic product already exceeds those of its fiscally troubled neighbors Greece, Spain, Ireland, Portugal, and Italy. Given the “Yes” campaign’s promise to make Scotland’s welfare system even more generous, the fiscal sustainability of an independent Scotland is unclear.
As in any divorce, the parties would need to divide their assets and liabilities.
The largest component of UK liabilities is the British national debt, recently calculated at around £1.4 trillion ($2.4 trillion), or about 90 percent of UK GDP. What share of this would an independent Scotland “acquire” in the break-up?
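One rule of thumb sometimes floated for such a division is population share. Using the figures in this post (Scotland at a bit under 8.5% of the UK population, debt of about £1.4 trillion), the arithmetic is simple, though any actual split would of course be a matter of negotiation:

```python
# Hypothetical population-share split of the UK national debt,
# using figures quoted in the text.
uk_debt_bn = 1400.0          # £1.4 trillion, in billions of pounds
scotland_pop_share = 0.085   # just under 8.5% of UK population

scotland_debt_bn = uk_debt_bn * scotland_pop_share
# Roughly £119 billion would fall to an independent Scotland
# under this rule.
```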
Assets would also have to be divided. One of the greatest assets—North Sea oil—may be more straightforward to divide given that the legislation establishing the Scottish Parliament also established a maritime boundary between England and Scotland, although this may be subject to negotiation. But what about infrastructure in England funded by Scottish taxes and Scottish infrastructure paid for with English taxes?
An even more contentious item is the currency that would be used by an independent Scotland. The pro-independence camp insists that an independent Scotland would remain in a monetary union with the rest of the UK and continue to use the British pound. And, in fact, there is no reason why an independent Scotland could not declare the UK pound legal tender. Or the euro. Or the US dollar, for that matter.
The problem is that the “owner” of the pound, the Bank of England, would be under no obligation to undertake monetary policy actions to benefit Scotland. If a sluggish Scottish economy is in need of loose monetary policy while the rest of the UK is more concerned about inflation, the Bank of England would no doubt carry out policy aimed at the best interests of the UK—not Scotland.
If a Scottish financial institution were on the point of failure, would the Bank of England feel duty-bound to lend pounds? As lender of last resort in England, the Bank has an obligation to supervise—and assist, via the extension of credit—troubled English financial institutions. It seems unlikely that an independent Scotland would allow its financial institutions to be supervised and regulated by a foreign power—nor would that power be morally or legally required to extend the UK financial safety net to Scotland.
At the time of this writing (the second half of August), the smart money (and they do bet on these things in Britain) is on Scotland saying no to independence, although poll results released on August 18 found a surge in pro-independence sentiment. Whatever the polls indicate, no one is taking any chances. Several Scottish-based financial companies are establishing themselves as corporations in England so that, in the case of independence, they will not be at a foreigner’s disadvantage vis-à-vis their English clients. Given the economic uncertainty generated by the vote, the sooner September 18 comes, the better for both Scotland and the UK.
Headline image credit: Scottish Parliament building, by Jamieli. Public domain via Wikimedia Commons.
Long-run trends suggest a broad shift is taking place in the institutional financing structure that supports academic research. According to data compiled by the OECD reported in Figure 1, industry sources are financing a growing share of academic research while “core” public funding is generally shrinking. This ongoing shift from public to private sponsorship is a cause for concern because these sponsorship relationships are fundamentally different. Available evidence suggests that industry financing does not simply replace dwindling public money, but imposes additional restrictions on academic researchers. In particular, industry sponsors frequently limit disclosure of research findings, methods, or materials by delaying or banning public release.
Recent economic research highlights why public disclosure of academic research is important. Disclosure permits the stock of public knowledge to be cumulative, accessible, and reliable. It limits duplication of research efforts, allows new knowledge to be replicated and verified by professional peers, and permits access and use by other researchers which enhances opportunities for complementary research. Some work finds that greater access to ideas and materials in academic research not only increased incentives for direct follow-on research, but led to an increase in the diversity of research by increasing the number of experimental research lines. Other work, examining the theoretical conditions supporting “open science” versus “secrecy”, stressed that maintaining and growing the stock of public knowledge requires a limit on the private financial returns obtained through secrecy.
To better understand the potential implications of increased industry funding, we implemented a research project that examined the relationship between industry sponsorship and restrictions on publication disclosure using individual-level data on German academic researchers. Germany is an apt setting for examining this relationship. It has a strong tradition of public financial support for academic research and, according to the OECD, Germany experienced the most dramatic growth in its share of industry sponsorship, an 11.3 percentage point increase from 1995 to 2010 (see Figure 1).
German academic researchers were surveyed about the degree of publication disclosure restrictions experienced during research projects sponsored by government, foundations, industry, and other sources. To examine if industry sponsorship jeopardizes disclosure of academic research, we modeled the degree of restrictiveness (i.e. delay and secrecy) as a function of the researcher’s budget share financed by industry. This formulation allows us to examine two potential effects of industry sponsored research contracts. The first is an adoption effect that takes place when academic researchers commit to industry funding. The second is an intensity effect that captures how publication restrictions depend on the researcher’s exposure to greater ex post review and evaluation by industry sponsors. Our models include covariates that control for non-industry extramural sponsorship, personal characteristics, research characteristics, institutional affiliations, and scientific fields of study.
Both the descriptive and regression results show a positive relationship between the degree of publication restrictions and industry sponsorship. The percentage of respondents who reported higher secrecy (partial or full) is significantly larger for industry sponsored researchers than it is for researchers with other extramural sponsors, 41% and 7% respectively. Controlling for selection, adopting industry sponsorship more than doubles the expected probabilities of publication delay and secrecy. The intensity effect is positive and significant with a larger effect on publication secrecy than on publication delay when academic researchers become heavily supported by industrial firms. These results are robust to the possibility that researchers self-select into extramural sponsorship and to the possibility that the share of industry sponsorship is endogenous due to unobserved variables.
Based on our analysis, the shift from public to private sponsorship seen in the OECD aggregate data reflects changes in the microeconomic environment shaping incentives for disclosure by academic researchers. On average, academic researchers are willing to restrict disclosure in exchange for financial support by industry sponsors. Our results shed light on an important challenge facing policymakers. Understanding the trade-off between public and private sponsorship of academic research involves gauging the impact of disclosure restrictions on the quantity, quality, and evolution of academic research to better understand how these restrictions may ultimately influence innovation and economic growth.
In 1985, Nobel Laureate Gary Becker observed that the gap in employment between mothers and fathers of young children had been shrinking since the 1960s in OECD countries. This led Becker to predict that such sex differences “may only be a legacy of powerful forces from the past and may disappear or be greatly attenuated in the near future.” In the 1990s, however, the shrinking of the mother-father gap stalled before Becker’s prediction could be realized. In today’s economy, how big is this mother-father employment gap, what forces underlie it, and are there any policies which could close it further?
A simple way to characterize the mother-father employment gap is to sum up how much more work is done by fathers than by mothers of children from ages 0 to 10. In 2010, fathers in the United States worked 3.1 more years on average than mothers over this age range. In the United Kingdom, the comparable number is 3.8, while in Canada it is 2.9 and in Germany 4.5. The figure below traces the evolution of this mother-father employment gap for all four of these countries.
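The gap statistic quoted above is just a sum over child ages of mother-father differences in time worked. A minimal sketch, with hypothetical per-age employment rates (fractions of a year worked) invented purely for illustration:

```python
# Summing the mother-father employment gap over child ages 0-10.
# All rates below are hypothetical, for illustration only.

def employment_gap_years(father_rates, mother_rates):
    """Total extra years worked by fathers over the age range:
    the per-age differences in employment rates, summed up."""
    return sum(f - m for f, m in zip(father_rates, mother_rates))

fathers = [0.95] * 11  # ages 0 through 10
mothers = [0.55, 0.60, 0.62, 0.65, 0.67, 0.70,
           0.72, 0.74, 0.76, 0.78, 0.80]

gap = employment_gap_years(fathers, mothers)
# On these made-up rates the gap is about 2.9 years, in the same
# ballpark as the country figures quoted above.
```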
Becker’s theorizing about the family can help us understand the development of this mother-father employment gap. Becker’s theoretical models suggest that if there are even slight differences between the productivity of mothers and fathers in the home vs. the workplace, spouses will tend to specialize completely in either in-home or out-of-home work. These kinds of productivity differences could arise because of cultural conditioning, as society pushes certain roles and expectations on women and men. Biology could also be important: women bear a heavier physical burden during pregnancy and, after the birth of a child, have an advantage in breastfeeding. It is possible that the initial impact of these unique biological roles for mothers lingers as their children age. Biology is not destiny, but it should be acknowledged as a potential barrier that contributes to the origins of the mother-father work gap.
Will today’s differences in mother-father work patterns persist into the future? To some extent that may depend on how cultural attitudes evolve. But there’s also the possibility that family-friendly policy can move things along more quickly. Both parental leave and subsidized childcare are options to consider.
Analysis of data across the four countries suggests that these kinds of policies can make some difference, but the impact is limited.
Parental leave makes a very big difference when the children are age zero and the parent is actually taking the leave—but because mothers take much more parental leave than fathers, this increases the mother-father employment gap rather than shrinking it. Evidence suggests that after age 0 when most parents return to work, there doesn’t seem to be any lasting impact of having taken a maternity leave on mothers’ employment patterns when their children are ages 1 to 10.
Another policy that might matter is childcare. In the Canadian province of Quebec, a subsidized childcare program was put in place in 1997 that required parents to pay only $5 per day for childcare. This program not only increased mothers’ work at pre-school ages, but also seems to have had a lasting impact when their children reach older ages, as employment of women in Quebec increased at all ages from 0 to 10. When summed up over these ages, Quebec’s subsidized childcare closed the mother-father employment gap by about half a year of work.
Gary Becker’s prediction about the disappearance of mother-father work gaps hasn’t come true – yet. Evidence from Canada, Germany, the United States, and the United Kingdom suggests that policy can contribute to a shrinking of the mother-father employment gap. However, the analysis makes clear that policy alone may not be enough to overcome the combination of strong cultural attitudes and any persistence of intrinsic biological differences between mothers and fathers.
Someone asked me at a recent book talk why I chose to write about hope and children in poverty. They asked whether it was frivolous to write about such a topic at a time when children are experiencing the challenges associated with poverty and economic disadvantage at high rates. As I thought about that question, I began to reflect on the stories of people I know and families I’ve worked with who, despite the challenges they experienced, were managing their lives successfully. I also reflected on popular figures who shared stories in the media about the ways in which they overcame early adversity in their lives.
As I reflected on these stories, it occurred to me that a common theme among these individuals was hope. I began to see the various ways in which hope is a highly influential and motivating force in their lives. This kind of hope is not passive—it is not merely wishing for a better life, but it is active. It involves thinking, planning, and acting on those thoughts and plans to achieve desired outcomes. It is the driving force that keeps us moving despite the adversity and allows us to adapt and to be resilient in the midst of these circumstances. In reflecting on these themes, I decided that I wanted to tell these stories and to link the stories with theoretical frameworks that help illuminate why I believe hope is so important. Most of the theories and ideas I discuss are well known to those of us who study children and families. However, it occurred to me that practitioners and policymakers may not be so familiar with these ideas and may find them useful in planning their work with children and families. My goal is to foster understandings of hope and resilience in practical terms so that together researchers, practitioners, and policymakers alike can help more children and families manage their circumstances and chart pathways toward well-being.
So when I think about a response to the question “Why focus on hope?” — I respond “Why not?” Why not focus on strengths rather than deficits? Why not focus our interventions, legislative activities, and funding priorities on processes that will motivate individuals to strive for the best outcomes for themselves and their children? In so doing, we can formulate an action agenda on behalf of children and families that first assumes they can and will succeed in rising above their circumstances.
As I learned from the families I interviewed, success means different things to different families. For some, success is being able to keep their family together—have dinner together, talk with each other, and support each other. For other families, success means being able to be a good parent: to go to bed at night realizing that you’ve provided for your child emotionally and spiritually as well as materially, and that by doing so, your child might have an even better opportunity than you did to achieve success. These individuals are truly courageous. They have overcome many obstacles and are striving to continue along that path. There are countless other courageous individuals who may never have the opportunity to tell their stories or to have their experiences validated with the concepts and theories I discuss from the psychological literature. I hope this volume will represent their lives too. I challenge those of us who work with children and families, and who advocate for or legislate on their behalf, to have the courage to “hope” and to allow that hope to be a motivating and unrelenting force in our efforts to foster resilience and well-being in these families.
Dr. Valerie Maholmes has devoted her career to studying factors that affect child developmental outcomes. Low-income minority children have been a particular focus of her research, practice, and civic work. She has been a faculty member at the Yale Child Study Center in the Yale School of Medicine, where she held the Irving B. Harris Assistant Professorship of Child Psychiatry, an endowed professorial chair. She is the author of Fostering Resilience and Well-Being in Children and Families in Poverty.
Subscribe to the OUPblog via email or RSS.
Subscribe to only psychology articles on the OUPblog via email or RSS.
As an early-stage graduate student in the 1980s, I took a summer off from academia to work at an investment bank. One of my most eye-opening experiences was discovering just how much effort Wall Street devoted to “Fed watching”, that is, trying to figure out the Federal Reserve’s monetary policy plans.
If you spend any time following the financial news today, you will not find that surprising. Economic growth, inflation, stock market returns, and exchange rates, among many other things, depend crucially on the course of monetary policy. Consequently, speculation about monetary policy frequently dominates the financial headlines.
Back in the 1980s, the life of a Fed watcher was more challenging: not only were the Fed’s future actions unknown, its current actions were also something of a mystery.
You read that right. Thirty years ago, not only did the Fed not tell you where monetary policy was going but, aside from vague statements, it did not say much about where it was either.
Given that many of the world’s central banks were established as private, profit-making institutions with little public responsibility, and even less public accountability, it is unremarkable that central bankers became accustomed to conducting their business behind closed doors. Montagu Norman, the governor of the Bank of England between 1920 and 1944, was famous for the measures he would take to avoid the press. He adopted cloak-and-dagger methods, going so far as to travel under an assumed name, to avoid drawing unwanted attention to himself.
The Federal Reserve may well have learned a thing or two from Norman during its early years. The Fed’s monetary policymaking body, the Federal Open Market Committee (FOMC), was created under the Banking Act of 1935. For the first three decades of its existence, it published brief summaries of its policy actions only in the Fed’s annual report. Thus, policy decisions might not become public for as long as a year after they were made.
Limited movements toward greater transparency began in the 1960s. By the mid-1960s, policy actions were published 90 days after the meetings in which they were taken; by the mid-1970s, the lag was reduced to approximately 45 days.
More recently, the FOMC has publicly announced its target for the federal funds rate, a key monetary policy tool, and explained its reasoning for the particular policy course chosen. Since 2007, the FOMC minutes have included the numerical forecasts generated by the Federal Reserve’s staff economists. And, in a move that no doubt would have appalled Montagu Norman, since 2011 the Federal Reserve chair has held regular press conferences to explain the FOMC’s most recent policy actions.
The Fed is not alone in its move to become more transparent. The European Central Bank, in particular, has made transparency a stated goal of its monetary policy operations. The Bank of Japan and Bank of England have made similar noises, although exactly how far individual central banks can, or should, go in the direction of transparency is still very much debated.
Despite disagreements over how much transparency is desirable, it is clear that the steps taken by the Fed have been positive ones. Rather than making the public and financial professionals waste time trying to figure out what the central bank plans to do—which, back in the 1980s, took a lot of time and effort and often led to incorrect guesses—the Fed just tells us. This makes monetary policy more certain and, therefore, more effective.
Greater transparency also reduces uncertainty and the risk of violent market fluctuations based on incorrect expectations of what the Fed will do. Transparency makes Fed policy more credible and, at the same time, pressures the Fed to adhere to its stated policy. And when circumstances force the Fed to deviate from the stated policy or undertake extraordinary measures, as it has done in the wake of the financial crisis, it allows it to do so with a minimum of disruption to financial markets.
Montagu Norman is no doubt spinning in his grave. But increased transparency has made us all better off.
Crime is a hot issue on the policy agenda in the United States. Despite a significant fall in crime levels during the 1990s, the costs to taxpayers have soared together with the prison population. The US prison population has doubled since the early 1980s and currently stands at over 2 million inmates. According to the latest World Prison Population List (ICPS, 2013), the prison population rate in 2012 stood at 716 inmates per 100,000 inhabitants, against about 480 in the United Kingdom and the Russian Federation – the two OECD countries with the next highest rates – and against a European average of 154. The rise in the prison population is not just a phenomenon in the United States. Over the last twenty years, prison population rates have grown by over 20% in almost all countries in the European Union and by at least 40% in one half of them. The pattern appears remarkably similar in other regions, with a growth of 50% in Australia, 38% in New Zealand and about 6% worldwide.
In many countries – such as the United States and Canada – this fast-paced growth has occurred against a backdrop of stable or decreasing crime rates and is mostly due to mandatory and longer prison sentencing for non-violent offenders. But how much does prison actually cost? And who goes to jail?
The average annual cost per prison inmate in the United States was close to 30,000 dollars in 2008. Costs are even higher in countries like the United Kingdom and Canada. Punishment is an expensive business. These figures have prompted a shift of interest, among both academics and policymakers, from tougher sentencing to other forms of intervention. Prison populations overwhelmingly consist of individuals with poor education and even poorer job prospects. Over 70% of US inmates in 1997 did not have a high school degree. In an influential paper, Lochner and Moretti (2004) establish a sizable negative effect of education, in particular of high school graduation, on crime. There is also a growing body of evidence on the positive effect of education subsidies on school completion rates. In light of this evidence, and given the monetary and human costs of crime, it is crucial to quantify the relative benefits of policies promoting incarceration vis-à-vis alternatives such as boosting educational attainment, and in particular high school graduation.
When it comes to reducing crime, prevention may be more efficient than punishment. Resources devoted to running jails could profitably be employed in productive activities if the same crime reduction could be achieved through prevention.
Establishing which policies are more efficient requires a framework that accounts for individuals’ responses to alternative policies and can compare their costs and benefits. In other words, one needs a model of education and crime choices that allows for realistic heterogeneity in individuals’ labor market opportunities and propensity to engage in property crime. Crucially, this analysis must be empirically relevant and account for several features of the data, in particular for the crime response to changes in enrollment rates and the enrollment response to graduation subsidies.
The findings from this type of exercise are fairly clear and robust. For the same crime reduction, subsidizing high school graduation entails large output and efficiency gains that are absent in the case of tougher sentences. By improving the education composition of the labor force, education subsidies increase the differential between labor market and illegal returns for the average worker and reduce crime rates. The increase in average productivity is also reflected in higher aggregate output. The responses in crime rate and output are large. A subsidy equivalent to about 9% of average labor earnings during each of the last two years of high school induces almost a 10% drop in the property crime rate and a significant increase in aggregate output. The associated welfare gain for the average worker is even larger, as education subsidies weaken the link between family background and lifetime outcomes. In fact, one can show that the welfare gains are twice as large as the output gains. This compares to negligible output and welfare gains in the case of increased punishment. These results survive a variety of robustness checks and alternative assumptions about individual differences in crime propensity and labor market opportunities.
To sum up, the main message is that, although interventions which improve lifetime outcomes may take time to deliver results, given enough time they appear to be a superior way to reduce crime. We hope this research will advance the debate on the relative benefits of alternative policies.
Giulio Fella is a Senior Lecturer in the School of Economics and Finance at Queen Mary University, United Kingdom. Giovanni Gallipoli is an Associate Professor at the Vancouver School of Economics (University of British Columbia) in Canada. They are the co-authors of the paper ‘Education and Crime over the Life Cycle’ in the Review of Economic Studies.
The Review of Economic Studies aims to encourage research in theoretical and applied economics, especially by young economists. It is widely recognised as one of the core top-five economics journals, with a reputation for publishing path-breaking papers, and is essential reading for economists.
Adaptation to climate change is currently high on the agenda of EU bureaucrats exploring the regulatory scope of the topic. Climate change may bring about changes in the frequency of extreme weather events such as heat waves, flooding or thunderstorms, which in turn may require adaptation of our living conditions. Adaptation to these conditions cannot stop climate change, but it can reduce its cost. Building dikes protects the landscape from rising sea levels. New vaccines protect the population from diseases that may spread due to the change in the climate. Leading politicians, the media and prominent interest groups call for more efforts in adaptation.
But who should be in charge? Do governments have to play a leading role in adaptation? Will firms and households make the right choices? Or do governments have to intervene to correct insufficient or false adaptation choices? If intervention is necessary, will the policy have to be decided on a local level or on a national or even supranational (EU) level? In a recent article we review the main arguments for government intervention in climate change adaptation. Overall, we find that the role of the state in adaptation policy is limited.
In many cases, adaptation decisions can be left to private individuals or firms. This is true if private sector decision-makers both bear the cost and enjoy the benefits of their own decisions. Superior insulation of buildings is a good example. It shields the occupants of a building from extreme temperatures during cold winters and hot summers. The occupants – and only the occupants – benefit from the improved insulation. They also bear the costs of the new insulation. If the benefit exceeds the cost, they will invest in the superior insulation. If it does not pay off, they will refrain from the adaptation measure (and they should do so from an efficiency point of view). There is no need for government intervention in the form of building regulation or rehabilitation programmes.
In some other cases, adaptation affects an entire community, as in the case of dikes. A single household will hardly be able – nor have the incentive – to build a dike of the appropriate size. But the local municipality can and should be able to do so. All inhabitants of the municipality can share the costs and appropriate the benefit from flood protection. The decision on the dike could be made at the state level if not at the municipal level. The local population will probably have long-standing experience and superior knowledge about flood events and their potential damage. The subsidiarity principle, which is a major principle of policy task assignment in the European Union, suggests that decisions should be made at the most decentralized level for which there are no major externalities between the decision-makers. In the case of the dike, the appropriate level for the adaptation measure would be the municipality. Again there is no need for intervention from upper-level governments.
So what role is left for the upper echelons of government in climate change adaptation? Firstly, the government has to help in improving our knowledge. Information about climate change and information about technical adaptation measures are typical public goods: the cost of generating the information has to be incurred once, whereas the information can be used at no additional cost. Without government intervention, too little information would be generated. Therefore, financing basic research in this area is one of the fundamental tasks for a central government.
Secondly, the government has to provide the regulatory framework for insurance markets. The economic consequences of natural disasters can be cushioned through insurance markets. However, the incentives to buy insurance are insufficient for several reasons. For instance, whenever a major disaster threatens the economic existence of a larger group of citizens, the government is under social pressure and will typically provide help to all those in need. Citizens who anticipate government support in case of a disaster have little or no incentive to buy insurance in the market. Why should they pay the premium for private insurance, or invest in self-insurance or self-protection measures, if they enjoy a similar amount of free protection from the government? If the government wants to avoid being pressured for disaster relief, it has to make disaster insurance mandatory. And to induce citizens to undertake the appropriate amount of self-protection, insurance premiums have to be differentiated according to local disaster risks.
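The crowding-out logic behind this argument can be made concrete with a stylized back-of-envelope calculation. All numbers below are hypothetical illustrations, not figures from the article:

```python
# Stylized example: anticipated disaster relief crowds out private insurance.
# All figures are hypothetical, chosen only to illustrate the mechanism.

def buys_insurance(loss, prob, premium, relief_share):
    """A risk-neutral household buys cover only if the premium is below its
    expected uninsured loss net of anticipated government relief."""
    expected_uninsured_loss = prob * loss * (1.0 - relief_share)
    return premium < expected_uninsured_loss

# A 1% chance of a 100,000 loss gives an expected loss of 1,000;
# a 900 premium is attractive only if no government bailout is expected.
print(buys_insurance(100_000, 0.01, 900, relief_share=0.0))  # True
print(buys_insurance(100_000, 0.01, 900, relief_share=0.5))  # False: relief halves the incentive
```

With risk aversion the threshold shifts, but the direction is the same: the larger the anticipated relief share, the weaker the incentive to insure, which is why the article argues for mandatory cover with risk-differentiated premiums.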
Thirdly, fostering growth helps in coping with the consequences of climate change and facilitates adaptation. Poor societies and population groups with low levels of education have the highest exposure to climate change, whereas richer societies have the means to cope with its implications. Hence, economic growth – properly measured – and education should not be dismissed easily, as they act as powerful self-insurance devices against the uncertain future challenges of climate change.
Kai A. Konrad is Director at the Max Planck Institute for Tax Law and Public Finance. Marcel Thum is Professor of Economics at TU Dresden and Director of ifo Dresden. They are the authors of the paper ‘The Role of Economic Policy in Climate Change Adaptation’ published in CESifo Economic Studies.
CESifo Economic Studies publishes provocative, high-quality papers in economics, with a particular focus on policy issues. Papers by leading academics are written for a wide and global audience, including those in government, business, and academia. The journal combines theory and empirical research in a style accessible to economists across all specialisations.
Image credit: Flooding, July 2007, by Mat Fascoine. CC-BY-SA-2.0 via Wikimedia Commons.
Technology is changing. The climate is changing. Economic inequality is growing. These issues dominate much public debate and shape policy discussions from local city council meetings to the United Nations. Can we tackle them, or are the issues divisive enough that they’ll eventually get the better of us? In terms of global poverty, economist Marcelo Giugale believes that human beings have the resources and will to overcome the dire state of circumstances under which many people live. In this excerpt from his latest book, Economic Development: What Everyone Needs to Know, Giugale asserts that humans have the means to quash abject poverty on a global scale and make it a thing of the past.
Define poverty as living with two dollars a day or less. Now imagine that governments could put those two dollars and one cent in every poor person’s pocket with little effort and minimal waste. Poverty is finished. Of course, things are more complicated than that. But you get a sense of where modern social policy is going—and what will soon be possible.
Nancy Lindborg’s trip to South Sudan. USAID U.S. Agency for International Development. CC BY-NC 2.0 via USAID Flickr.
To start with, there is a budding consensus—amply corroborated by the 2008–9 global crisis—on what reduces poverty: it is the combination of fast and sustained economic growth (more jobs), stable consumer prices (no inflation), and targeted redistribution (subsidies only to the poor). On those three fronts, developing countries are beginning to make real progress.
So, where will poverty fighters focus next? First, on better jobs. What matters to reduce poverty is not just jobs, but how productive that employment is. This highlights the need for a broad agenda of reforms to make an economy more competitive. It also points toward something much closer to the individual: skills, both cognitive (e.g., critical thinking or communication ability) and non-cognitive (e.g., attitude toward newness or sense of responsibility).
Second, poverty fighters will target projects that augment human opportunity. As will be explained next, it is now possible to measure how important personal circumstances—like skin color, birthplace, or gender—are in a child’s probability of accessing the services—like education, clean water, or the Internet—necessary to succeed in life. That measure, called the Human Opportunity Index, has opened the door for policymakers to focus not just on giving everybody the same rewards but also the same chances, not just on equality but on equity. A few countries, mostly in Latin America, now evaluate existing social programs, and design new ones, with equality of opportunity in mind. Others will follow.
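The Human Opportunity Index mentioned above is commonly computed as a service’s average coverage rate discounted by a dissimilarity index that penalizes unequal access across circumstance groups. A minimal sketch, assuming the standard HOI = C × (1 − D) formulation; the two-group data are invented for illustration:

```python
# Sketch of the Human Opportunity Index: HOI = C * (1 - D), where C is the
# overall coverage rate of a service and D is a dissimilarity index measuring
# how unevenly access is distributed across circumstance groups.
# The group data below are invented for illustration.

def human_opportunity_index(shares, coverage):
    """shares: population share of each circumstance group (sums to 1);
    coverage: fraction of children in each group with access to the service."""
    C = sum(s * c for s, c in zip(shares, coverage))                 # overall coverage
    D = sum(s * abs(c - C) for s, c in zip(shares, coverage)) / (2 * C)
    return C * (1 - D)

# Two equally sized groups with 80% vs. 40% access to, say, clean water:
hoi = human_opportunity_index([0.5, 0.5], [0.8, 0.4])
print(round(hoi, 2))  # 0.5: overall coverage of 0.6, discounted for unequal access
```

The point of the discount is visible in the example: two societies can have the same 60% coverage, but the one where access depends on circumstance scores lower.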
And third, greater focus will be put on lowering social risk and enhancing social protection. A few quarters of recession, a sudden inflationary spike, or a natural disaster, and poverty counts skyrocket—and stay sky-high for years. The technology to protect the middle class from slipping into poverty, and the poor from sinking even deeper, is still rudimentary in the developing world. Just think of the scant coverage of unemployment insurance.
But the real breakthrough is that, to raise productivity, expand opportunity, or reduce risk, you now have a power tool: direct cash transfers. Most developing countries (thirty-five of them in Africa) have, over the last ten years, set up logistical mechanisms to send money directly to the poor—mainly through debit cards and cell phones. Initially, the emphasis was on the conditions attached to the transfer, such as keeping your child in school or visiting a clinic if you were pregnant. It soon became clear that the value of these programs was to be found less in their conditions than in the fact that they forced government agencies to know the poor by name. Now we know where they live, how much they earn, and how many kids they have.
That kind of state–citizen relationship is transforming social policy. Think of the massive amount of information it is generating in real time—how much things actually cost, what people really prefer, what impact government is having, what remains to be done. This is helping improve the quality of expenditures, that is, better targeting, design, efficiency, fairness, and, ultimately, results. It also helps deal with shocks like the global crisis (have you ever wondered why there was no social explosion in Mexico when the US economy nose-dived in early 2009?). Sure, giving away taxpayers’ money was bound to cause debate (how do you know you are not financing bums?). But so far, direct transfers have survived political transitions, from left to right (Chile) and from right to left (El Salvador). The debate has been about doing transfers well, not about abandoning them.
A final point. For all the promise of new poverty-reduction techniques, just getting everybody in the developing world over the two-dollar-a-day threshold would be no moral success. To understand why, try to picture your own life on a two-dollar-a-day budget (really, do it). But it would be a very good beginning.
Marcelo M. Giugale is the Director of Economic Policy and Poverty Reduction Programs for Africa at the World Bank and the author of Economic Development: What Everyone Needs to Know. A development expert and writer, he has twenty-five years of experience spanning Africa, Central Asia, Eastern Europe, Latin America, and the Middle East. He has received decorations from the governments of Bolivia and Peru, and has taught at the American University in Cairo, the London School of Economics, and Universidad Católica Argentina.
By Thomas Peeters, Victor Matheson, and Stefan Szymanski
The World Cup, the Olympics and other mega sporting events give cities and countries the opportunity to be in the world’s spotlight for several weeks, and the competition among them to host these events can be as fierce as the competition among the athletes themselves. Bids that had traditionally gone to wealthier countries have recently become a prize to be won by prospective hosts in the developing world. South Africa became the first African host of the FIFA World Cup in 2010, and this summer, Brazil is hosting the first South American World Cup in 35 years. Russia recently completed its first Winter Olympics in Sochi and will return to the international stage in 2018 when the World Cup heads to Eastern Europe for the first time.
On the surface, this might appear to be a leveling of the playing field, allowing developing countries to finally share in the riches that these events bring to their hosts. A closer look, however, shows that hosting these events is an enormously expensive and risky undertaking that is unlikely to pay off from a purely economic standpoint.
Because of the extensive infrastructure required to host the World Cup or the Summer or Winter Olympics, the cost of hosting these events can run into the tens of billions of dollars, especially for developing countries with limited sports and tourism infrastructure already in place. Cost estimates are often unreliable, but it is said that Brazil is spending a combined $30 billion to host the Olympics and World Cup, Beijing spent $40 billion on the 2008 summer games, and Russia set an all-time record with a $51 billion price tag on the Sochi games. Russia’s record is not likely to stand for long, however, as Qatar looks poised to spend upwards of $200 billion bringing the World Cup to the Middle East in 2022.
South Africa fan in Johannesburg during World Cup 2010
Why do countries throw their hats into the ring to host these events? Politicians typically claim that hosting will generate a financial windfall. For example, the 2010 World Cup in South Africa, the focus of our paper, cost the country $3.9 billion, including at least $1.3 billion in stadium construction costs. The consulting firm Grant Thornton initially predicted that 483,000 international visitors would come to the country for the event and that it would generate “a gross economic impact of $12 billion to the country’s economy”. The firm later revised its figures downward, to 373,000 international visitors, and lowered the estimated economic impact to $7.5 billion. Following the event, a FIFA report stated that “309,554 foreign tourists arrived in South Africa for the primary purpose of attending the 2010 FIFA World Cup.”
Our analysis of monthly tourist arrivals into South Africa during the months of the event, however, suggests that tourist arrivals were even lower than this. The expected crowds and congestion associated with the tournament reduced the number of non-sports fans traveling to the country by over 100,000, leaving the net increase in tourists to the country during the World Cup at just 220,000 visitors. This figure is less than half Grant Thornton’s early projection and a full third below even the lowest visitor estimates provided after the tournament. We estimate that the cost to the nation per World Cup visitor lies in the range of $4,700 to $13,000.
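A rough version of this arithmetic can be reproduced directly from the figures quoted above. Note that the paper’s published range of $4,700 to $13,000 rests on further assumptions about which costs and visitors are attributable to the event, so the bounds below are illustrative only:

```python
# Back-of-envelope cost per net World Cup visitor for South Africa 2010,
# using the round figures quoted in the text. The paper's published range
# ($4,700-$13,000) reflects additional assumptions about attributable costs
# and visitor counts, so these bounds are illustrative only.

net_visitors = 220_000        # net increase in tourists during the tournament
stadium_cost = 1.3e9          # stadium construction alone
total_cost = 3.9e9            # total reported cost of hosting

low = stadium_cost / net_visitors     # counting only stadium spending
high = total_cost / net_visitors      # counting the full hosting bill

print(f"cost per net visitor: ${low:,.0f} to ${high:,.0f}")
# -> cost per net visitor: $5,909 to $17,727
```

The exercise makes the cautionary point of the paper tangible: dividing even the narrowest cost measure by net rather than gross arrivals yields a per-visitor bill far beyond what any tourist plausibly spends.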
Our results provide a cautionary tale for cities and countries bidding for mega-events. The anticipated crowds may not materialize, and the economic gains from the sports fans who do come to watch the games need to be weighed against the economic losses associated from other potential travelers who avoid the region during the event.
Thomas Peeters is a PhD fellow of the Flanders Research Foundation at the University of Antwerp. His main research interests are industrial organization and labor issues related to professional sports leagues. His work has been published in journals such as Economic Policy, the International Journal of Industrial Organization and the Journal of African Economies. Victor Matheson is a professor of economics at the College of the Holy Cross in Worcester, Massachusetts, USA. He is the author of numerous studies concerning the economic impact of major sporting events on host countries and is a member of the executive board of the North American Association of Sports Economists. Stefan Szymanski is the Stephen J. Galetti Professor of Sport Management at the University of Michigan. His research in the economics of sports includes work on the relationship between performance and spending in professional football leagues, the theory of contests applied to sports, the application of sports law to sports organizations, financing of professional leagues and insolvency, and the costs and benefits of hosting major sporting events. They are the authors of the paper ‘Tourism and the 2010 World Cup: Lessons for developing countries’, which is published in the Journal of African Economies.
The Journal of African Economies is a vehicle to carry rigorous economic analysis, focused entirely on Africa, for Africans and anyone interested in the continent – be they consultants, policymakers, academics, traders, financiers, development agents or aid workers.
Image credit: South Africa fan in Johannesburg during World Cup 2010. By Iscar Blanco [Public domain], via Wikimedia Commons
Is thinking on international development pulling itself together or tearing itself apart? The phrase ‘international development’ can be problematic, carrying multiple meanings for those inside the business but often remaining meaningless to those outside it.
On the surface, the Millennium Development Goals and debates towards a post-2015 agenda imply a move towards consensus. The past 60 years saw the development agenda embrace political independence, economic growth, human needs, sustainability, poverty reduction, and human capabilities. Each represented a particular understanding of what constitutes “development” and how to achieve it. Once dominated by economics and political science, the ideas and concepts used to explain development now draw on insights from across the natural and social sciences. Beyond academic debate, such ideas motivate real-life organizations and practitioners and shape their actions.
A female doctor with the International Medical Corps examines a woman patient at a mobile health clinic in Pakistan. Photo by DFID/Russell Watkins; UK Department for International Development. CC BY 2.0 via DFID Flickr
After interacting with over 90 writers over three years, I am convinced that thinking on development is tearing apart. Contemporary thinking comes from an increasingly diverse set of locations, with Beijing, New Delhi, and Rio de Janeiro challenging and enriching the ideas emanating from London, New York, and Paris. The rise of regional powers and localized approaches to development are reshaping our understanding of how human societies change over time. Fundamentally, the policy space for “international development” is tearing into three separate dialogues.
Sovereign problems concern the use of national wealth. All polities face real constraints in public finance and our societies face analogous challenges, such as: expanding access to and improving the quality of education and health, designing social protection to ensure a minimum wellbeing for everyone, or encouraging opportunities for entrepreneurs and minorities. Dialogue and action on sovereign problems involve national treasuries, political parties, and (mis)informed citizens. One can be inspired by experience abroad, but solutions must be tailored to fit within local cultural, political, and economic reality. This aspect of international development is growing.
Common problems concern international public goods. A sizable portion of our problems spill across borders and potential solutions require cooperation among different polities. Climate change, emergent diseases, and trade regimes surpass the ability of any one country and are affected by the choices made by others (indeed the nation-state is seldom the most useful unit of analysis). Common problems involve separate actors, ranging from municipalities and hospitals, to trade negotiators and the alphabet soup of international forums (IPCC, WHO, IFIs). After the rise of globalization, this aspect of international development is holding steady given the reality of a multi-polar world.
Foreign problems concern how to respond to troubled places abroad. Six decades of development saw substantial increases in life expectancy, human rights, and literacy. Yet there remains a stubborn set of poverty hotspots, ungoverned spaces, and fragile states where life continues to be nasty, brutish, and short. Dialogue and action involve foreign ministries, aid agencies, and NGOs. There are encouraging signs that this aspect of international development is in decline. The long-term trend witnessed a decrease in inter-state conflict and a dwindling list of low-income countries reliant on foreign aid. As such, the agenda is narrowing towards humanitarian relief, rural development, and state-building in remote locations.
This triad of sovereign-common-foreign offers one potential typology for the future evolution of thinking currently gathered under “international development”. Put more simply, when world leaders meet, the problems they discuss fit into the categories of mine-ours-theirs: those involving dialogue at home, those that require coordinated action across borders, and those related to hotspots beyond our borders.
In short, the label of “international development” has outlived its usefulness and is tearing apart in both academia and practice. For example, in the United Kingdom there is a tension between “development studies”, focused on low- and middle-income countries, and “development sciences”, which applies technology to the needs of the poor. Meanwhile the range of organizations that engage in development has expanded, diversified, and coalesced into specialized communities. Once the exclusive purview of aid agencies and international organizations, development work now increasingly involves national treasuries, domestic charities, and diasporas in addressing different problems.
Looking forward, it seems likely that what is described as “international development” is destined to become a historical label, describing a period when we jumbled these separate problems together.
Bruce Currie-Alder is Regional Director, based in Cairo, with Canada’s International Development Research Centre (IDRC). He is an expert in natural resource management, and on the policies that govern public research funding and scientific cooperation with developing countries. His previous experience includes facilitating corporate strategy, contributing to Canada’s foreign policy, and work in the Mexican oil industry. He co-edited International Development: Ideas, Experience and Prospects (Oxford 2014) which traces the evolution of thinking about international development over sixty years. Currie-Alder holds a Master’s in Natural Resource Management from Simon Fraser University and a PhD in Public Policy from Carleton University.
Subscribe to the OUPblog via email or RSS.
Subscribe to only business and economics articles on the OUPblog via email or RSS.
From Lawrence H. Summers, former Secretary of the Treasury and president emeritus of Harvard University, in the Financial Times:
“Atif Mian and Amir Sufi’s House of Debt, despite some tough competition, looks likely to be the most important economics book of 2014; it could be the most important book to come out of the 2008 financial crisis and subsequent Great Recession. Its arguments deserve careful attention, and its publication provides an opportunity to reconsider policy choices made in 2009 and 2010 regarding mortgage debt.”
House of Debt takes a complicated premise—unraveling the threads of the 2008 financial crisis from a tangle of Federal Reserve policies, insolvent investment banks, predatory mortgage lenders, and private label securities—and delivers a clean-cut conclusion: the Great Recession and Great Depression, as well as the current economic malaise in Europe, were caused by a large run-up in household debt followed by a sharp drop in household spending. Recently, in addition to Summers’s endorsement in today’s Financial Times, the book has been profiled at the New York Times, the Wall Street Journal, the Atlantic, and the Economist, among others; Paul Krugman, writing for the NYT, noted that its associated House of Debt blog has “instantly become must reading.”
How do we move forward and break the cycle? With a direct attack on debt, say Mian and Sufi. More aggressive debt forgiveness after the crash helps, but as they illustrate, we can be rid of painful bubble-and-bust episodes only if the financial system moves away from its reliance on inflexible debt contracts.
To follow developments in global policy at the House of Debt blog, click here.
By Claudia Gabbioneta, Rajshree Prakash, and Royston Greenwood
Professional service firms have been implicated in numerous cases of corporate fraud. Enron is probably the most striking – albeit by no means the only – example of this involvement. Arthur Andersen (who audited Enron’s financial statements) was accused of helping the company ‘design accounting techniques or models’ that Enron used to boost its performance (Batson Report, 2003: 40-41). Nine banks were named as key players in a series of fraudulent transactions that ultimately cost shareholders more than $25 billion. Two law firms were accused of malpractice as they failed to respond to red flags about Enron’s accounting practices. The three major credit rating agencies were blamed for not lowering their ratings of the company as its financial situation deteriorated. Securities analysts were criticized for not taking into account the company’s cryptic ‘mark to market’ accounting, which allowed Enron to include as current earnings the profits they expected from future contracts, and for staying positive in their assessments and ratings well after the company’s earnings had begun to plummet.
But why did professional service firms – whose collective function is to ensure the probity of financial markets and to nurture the trust necessary for markets to function – fail to recognize and expose corporate corruption? In our paper, we argue that one reason why professionals may fail to recognize and expose corporate corruption is because of the processes of institutional ascription that take place within professional networks.
Institutional ascription occurs when professionals assume that other professionals are behaving ‘professionally’ – that is, when professionals assume that other professionals have conducted and completed their work honestly and diligently, consistent with the idealized version of professional behaviour. This assumption, in turn, leads them to accept uncritically the work done by other professionals. Professionals assume that the opinions expressed by other professionals are reliable and robust, and – importantly – base their own work on those opinions. Ascription is consistent with the ‘moral seduction’ thesis put forward by Moore et al. (2006), who emphasize that, contrary to popular imagery, corruption is often not the result of a personal decision to deviate from an ethical code, but the outcome of systemic structural features that shape professional behaviour.
The assumption that others are acting professionally means that, if any link in a professional network is weak, the entire network is at risk of ‘contagion’ and thus vulnerable to collective blindness. The initial weakness propagates inside the network as more and more professionals rely on the work of other professionals to reach their own – supposedly independent – assessment of the firm. The initial involvement of a few actors results in the entire network being implicated in the failure to expose corporate corruption. As a consequence, networks of professionals, which are supposed to act as gatekeepers against corporate corruption, may actually – albeit unwittingly – enable its concealment because of reciprocal and socially emphasized processes of collective ascription.
Emblematic of professionals’ reliance upon other professionals is again the case of Enron. In 2001, Curt Launer of Credit Suisse First Boston wrote that ‘the so-called LJM Partnerships were fully disclosed in Enron’s financial statements and were subject to appropriate scrutiny by Enron’s board, outside auditors and outside legal counsel. Considering the disclosures made and the appropriateness of the accounting treatment… we anticipate that the negative sentiment surrounding these issues will dissipate over time’ (Financial Oversight of Enron: The SEC and Private-Sector Watchdogs, 2002; emphasis added). And, when asked if she ‘thought that because Vinson & Elkins had said there was no problem, …that did not trigger any kind of requirement…’, Nancy Temple, in-house attorney for Arthur Andersen, answered that she ‘noted that the law firm reported that there was nothing further to follow up on at that point in time; and this was a very large law firm representing Enron Corporation’ (Enron Hearings).
The development of the idea of institutional ascription has two important implications. First, it helps explain why and how networks of professionals may fail to recognize and expose corporate corruption, whereas prior research has focused mainly on the dyadic relationship between a professional service firm and its clients. Second, it seriously questions the behavior of financial markets as currently designed and, intriguingly, cautions against our own ascription of trust to them.
Claudia Gabbioneta is Assistant Professor of Business Economics at the University of Genoa. Her research interests focus upon institutional, political, and social processes on financial markets. She is currently studying the role of professionals in corporate corruption. Rajshree Prakash is Assistant Professor in the Management Department at John Molson School of Business, Concordia University. Her research interests include examining the changing relationship of the professions with their stakeholders and its impact on professional responsibility. Royston Greenwood is the Telus Professor of Strategic Management in the School of Business, University of Alberta. His research interests focus upon institutional and organizational change. Currently he is examining how hybrid organizations cope with the presence of multiple, often competing institutional demands. They are the authors of the paper ‘Sustained corporate corruption and processes of institutional ascription within professional networks‘, which is published in the Journal of Professions and Organization.
The Journal of Professions and Organization (JPO) aims to be the premier outlet for research on organizational issues concerning professionals, including their work, management and their broader social and economic role.
What do the Irish famine and the euro crisis have in common?
The famine, which afflicted Ireland during 1845-1852, was a humanitarian tragedy of massive proportions. It left roughly one million people—or about 12 percent of Ireland’s population—dead and led an even larger number to emigrate.
The euro crisis, which erupted during the autumn of 2009, has resulted in a virtual standstill in economic growth throughout the Eurozone in the years since then. The crisis has resulted in widespread discontent in countries undergoing severe austerity and in those where taxpayers feel burdened by the fiscal irresponsibility of their Eurozone partners.
Despite these widely differing circumstances, these crises have an important element in common: both were caused by economic policies that were motivated by ideology rather than cold hard economic analysis.
The Irish famine came about when the infestation of a fungus-like pathogen, Phytophthora infestans, led to the decimation of the potato crop. Because the Irish relied so heavily on potatoes for food, this had a devastating effect on the population.
At the time of the famine, Ireland was part of the United Kingdom. Britain’s Conservative government of the time, led by Prime Minister Sir Robert Peel, swiftly undertook several measures aimed at alleviating the crisis, including arranging a large shipment of grain from the United States in order to offer temporary relief to those starving in Ireland.
More importantly, Peel engineered a repeal of the Corn Laws, a set of tariffs that kept grain prices high. Because the Corn Laws benefitted Britain’s landed aristocracy, an important constituency of the Conservative Party, Peel soon lost his job and was replaced as prime minister by the Liberal Party’s Lord John Russell.
Russell and his Liberal Party colleagues were committed to an ideology that opposed any and all government intervention in markets. Although the Liberals had supported the repeal of the Corn Laws, they opposed any other measures that might have alleviated the crisis. Of Peel’s decision to import grain, Russell wrote: “It must be thoroughly understood that we cannot feed the people. It was a cruel delusion to pretend to do so.”
Contemporaries and historians have judged Russell’s blind adherence to economic orthodoxy harshly. One of the many coroner’s inquests that followed a famine death recommended that a charge of willful murder be brought against Russell for his refusal to intervene in the famine.
The euro was similarly the result of an ideologically based policy that was not supported by economic analysis.
In the aftermath of two world wars, many statesmen called for closer political and economic ties within Europe, including Winston Churchill, French premiers Edouard Herriot and Aristide Briand, and German statesmen Gustav Stresemann and Konrad Adenauer.
The post-World War II response to this desire for greater European unity was the European Coal and Steel Community, the European Economic Community, and eventually, the European Union, each of which brought increasingly closer economic ties between member countries.
By the 1990s, European leaders had decided that the time was right for a monetary union and, with the Treaty of Maastricht (1993), committed themselves to the establishment of the euro by the end of the decade.
The leap from greater trade and commercial integration to a monetary union was based on ideological, rather than economic reasoning. Economists warned that Europe did not constitute an “optimal currency area,” suggesting that such a currency union would not be successful. The late German-American economist Rüdiger Dornbusch classified American economists as falling into one of three camps when it came to the euro: “It can’t happen. It’s a bad idea. It won’t last.”
The historical experience also suggested that monetary unions that precede political unions, such as the Latin Monetary Union (1865-1927) and the Scandinavian Monetary Union (1873-1914), were bound to fail, while those that came after political union, such as those in the United States in the 18th century, Germany and Italy in the 19th century, and Germany in the 20th century, were more likely to succeed. The various European Monetary System arrangements in the 1970s, none of which lasted very long, also provided evidence that European monetary unification was not likely to be smooth.
Concluding that it was a mistake to adopt the euro in the 1990s is, of course, not the same thing as recommending that the euro should be abandoned in 2014. German taxpayers have every reason to resent the cost of supporting their economically weaker—and frequently financially irresponsible—neighbors. However, Germany’s prosperity rests in large measure on its position as Europe’s most prolific exporter. Should Germany find itself outside the Eurozone, using a new, more expensive German mark, German prosperity would be endangered.
What we can say about the response to the Irish Famine and the decision to adopt the euro is that they were made for ideological, rather than economic reasons. These—and other episodes during the last 200 years—show that economic policy should never be made on the basis of economic ideology, but only on the basis of cold, hard economic analysis.
The recent firing of Jill Abramson, the first female executive editor of the New York Times, after less than three years on the job focused the news cycle on gender inequity, with discussions of glass cliffs (women get shorter leashes even when they get the top jobs) and reports showing the persistence of glass ceilings and pay disparities (e.g. Abramson was paid less than her male predecessor). In the United States, women now represent a substantial majority of those earning advanced degrees. Yet as we look higher and higher up the ladders of career attainment, we see smaller and smaller percentages of women – as well as the persistence of pay gaps for women, even in senior positions. In other words, even as women break through one glass ceiling, they encounter another on the next rung.
Take law firms. Women make up almost half of US law school graduates (up from 5% in 1950). But they represent only 20% of US law firm partners and an even smaller share (16%) of the more elite class of equity partners. And the higher one looks within the partnership stratosphere, the less diverse it gets. Furthermore, the leaders of the profession, as well as clients of law firms, express frustration with the slow pace of progress in generating more gender and ethnic equality at the top of the profession. These efforts can be aided by improving our understanding of the work and career processes within law firms and, by extension, partnerships in other professional fields, such as accounting, consulting, and investment banking.
So how exactly do partners rise to different levels within the partnership hierarchy, and how do those processes challenge female partners? To date, researchers have analyzed the challenge of becoming a partner, but we know curiously little about how professional careers unfold after that. Although partners at large law firms may all be one-percenters, they are certainly not equal, with distinctions made between equity and non-equity partners, and recent surveys showing some “super-partners” earn up to 25 times more than their peers.
To get at these questions, we studied how partners gain power within a partnership, as measured by their “book of business” – the fees paid to the firm by clients with whom the partner holds the primary relationship. The more client revenue a partner is responsible for, the more influence that partner holds in the firm, the more respect they command, and the more career mobility options they can generate in the wider profession. To understand power in a partnership, then, is to understand how partners come to obtain books of business.
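As a rough illustration of how a “book of business” can be computed from client records, the sketch below sums fees by the partner who holds the primary client relationship. All names, figures, and the record layout are hypothetical, invented for this example rather than taken from the study’s data.

```python
from collections import defaultdict

# Hypothetical client records: (client, primary_partner, annual_fees_usd).
# Every name and figure here is illustrative only.
client_records = [
    ("Acme Corp",   "Partner A", 1_200_000),
    ("Beta LLC",    "Partner A",   450_000),
    ("Gamma Inc",   "Partner B",   300_000),
    ("Delta Group", "Partner C", 2_000_000),
]

def books_of_business(records):
    """Sum client fees by the partner holding the primary relationship."""
    books = defaultdict(int)
    for _client, partner, fees in records:
        books[partner] += fees
    return dict(books)

books = books_of_business(client_records)
# Partner A's book: 1,200,000 + 450,000 = 1,650,000
```

Measured this way, a partner’s power is simply the revenue total of the client relationships attributed to them, which is why inheriting or winning clients translates directly into standing within the firm.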
What we found was intriguing. In short, although women may be disadvantaged in a primary “path to power” in the partnership, they may have opportunities along a second pathway of growing importance.
The primary pathway involves “inheriting” clients from an established power partner. To build a book of business, one needs to either pursue that strategy, or the alternative of “making rain” by bringing new clients to the firm. A newly minted partner thus needs to decide which path to invest in—or how much to invest in each path. Do you spend time working for clients of power partners nearing retirement—or pounding the pavement (or the cocktail circuit) seeking new clients of your own? Of course, each path has its risks. Investing in the inheritance path can backfire, for example, if a retiring benefactor bequeaths a client to a rival partner. And the rainmaking strategy can backfire if nibbles of new-client business don’t eventually turn into a large revenue stream for the firm. Since both investments require time and energy, what’s the optimal career strategy?
Deepening the puzzle, both paths are also likely to pose particular challenges to female attorneys, as they depend on forming social relationships with either the senior power partners or with decision makers at potential new client firms. Much research shows the existence of “homophily” in interpersonal relationships, or the tendency for people to be drawn to and feel greater affinity for people who are like themselves in terms of race and gender. So where senior partners and/or client decision makers are largely male, female junior partners may be at a disadvantage in forming the bonds of affinity or trust that help win the client business.
Analysis of the internal records of law firms shows, unsurprisingly, that female partners have smaller books of business than their male peers. More interestingly, though, we are finding that the rate of return on investments in the two paths to power differs between men and women. In fact, the inheritance strategy appears to be a particularly poor investment for women. For women, larger investments in the inheritance path are associated with lower future books of business. Why? We speculate this could be because of “selective affinity.” That is, when it comes time for the power partners to pass on their clients, they may unconsciously favor partners who are more demographically similar to them.
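A gender-specific rate of return like the one described above is the kind of pattern an interaction term in a regression can capture. The sketch below fits an ordinary least-squares model on synthetic data in which the return on inheritance-path investment is lower for female partners; the variable names, effect sizes, and data are assumptions for illustration, not the study’s actual model or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic, hypothetical data: investment in the inheritance path,
# a gender indicator (1 = female), and a future book of business built
# with a negative investment-by-gender interaction (all made up).
invest = rng.uniform(0, 10, n)
female = rng.integers(0, 2, n).astype(float)
noise = rng.normal(0, 1, n)
book = 2.0 + 0.5 * invest - 0.8 * invest * female + noise

# Design matrix with an interaction term: the coefficient on
# invest * female measures how the return on inheritance-path
# investment differs for female partners.
X = np.column_stack([np.ones(n), invest, female, invest * female])
beta, *_ = np.linalg.lstsq(X, book, rcond=None)

# beta[3] estimates the interaction effect; it should come out clearly
# negative, near the -0.8 used to generate the data.
```

A negative interaction coefficient is exactly the “lower future books of business from larger inheritance-path investments” pattern the text describes for women.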
Yet, when it comes to the rainmaking strategy, the opposite may be true. For female partners, investments in the rainmaking path appear to pay handsomely, in fact even better than for male partners. Why could that be? Perhaps female partners recruit new clients in different ways than male partners, or perhaps “selective affinity” can actually favor female partners in the open marketplace (rather than the closed ecosystem of the firm’s internal networks).
What does it all mean? First off, for partnerships, there may be considerable value in studying the inheritance and rainmaking processes going on in their own organizations. Virtually all firms now have the relevant internal data waiting to be analyzed. Second, our findings are important for managing diversity in partnerships. For example, the results suggest there could be a “double payoff” to supporting rainmaking efforts for newly-made female partners – double in the sense of the firm’s overall revenue generation as well as diversity goals.