Working papers 2013


  • A macroeconomic model of liquidity crises

    Abstract

    We develop a macroeconomic model in which liquidity plays an essential role in the production process, because firms have a commitment problem regarding factor payments. A liquidity crisis occurs when firms fail to obtain sufficient liquidity, and may be caused either by self-fulfilling beliefs or by fundamental shocks. Our model is consistent with the observation that the decline in output during the Great Recession is mostly attributable to the deterioration in the labor wedge, rather than in productivity. The government’s commitment to guarantee bank deposits reduces the possibility of a self-fulfilling crisis, but it increases that of a fundamental crisis.

    Introduction

    The Great Recession, that is, the global recession of the late 2000s, was the deepest economic downturn since the 1930s. Lucas and Stokey (2011), among others, argue that just as in the Great Depression, the recession was made more severe by a liquidity crisis. A liquidity crisis is a sudden evaporation of the supply of liquidity that leads to a large drop in production and employment. In addition, the decline in output in the Great Recession was mostly due to deterioration in the labor wedge, rather than in productivity, as emphasized by Arellano, Bai, and Kehoe (2012).

  • Constrained Inefficiency and Optimal Taxation with Uninsurable Risks

    Abstract

    When individuals’ labor and capital income are subject to uninsurable idiosyncratic risks, should capital and labor be taxed, and if so, how? In a two-period general equilibrium model with production, we derive a decomposition of the welfare effects of these taxes into insurance and distribution effects. This allows us to determine how the signs of the optimal taxes on capital and labor depend on the nature of the shocks, the degree of heterogeneity in consumers’ income, and the way in which the tax revenue is used to provide lump-sum transfers to consumers. When shocks affect primarily labor income and heterogeneity is small, the optimal tax on capital is positive. However, in other cases a negative tax on capital is welfare improving. (JEL codes: D52, H21. Keywords: optimal linear taxes, incomplete markets, constrained efficiency)

    Introduction

    The main objective of this paper is to investigate the effects and the optimal taxation of investment and labor income in a two-period production economy with uninsurable background risk. More precisely, we examine whether the introduction of linear, distortionary taxes or subsidies on labor income and/or on the returns from savings is welfare improving, and what is then the optimal sign of such taxes. This amounts to studying the Ramsey problem in a general equilibrium setup. We depart, however, from most of the literature on the subject in that we consider an environment with no public expenditure, where there is no need to raise tax revenue. Nonetheless, optimal taxes are typically nonzero; even distortionary taxes can improve the allocation of risk in the face of incomplete markets. The question is then which production factor should be taxed: we want to identify the economic properties which determine the signs of the optimal taxes on production factors.

  • Estimating Daily Inflation Using Scanner Data: A Progress Report

    Abstract

    We construct a Törnqvist daily price index using Japanese point of sale (POS) scanner data spanning from 1988 to 2013. We find the following. First, the POS-based inflation rate tends to be about 0.5 percentage points lower than the CPI inflation rate, although the difference between the two varies over time. Second, the difference between the two measures is greatest from 1992 to 1994, when, following the bursting of the bubble economy in 1991, the POS inflation rate drops rapidly and turns negative in June 1992, while the CPI inflation rate remains positive until summer 1994. Third, the standard deviation of daily POS inflation is 1.1 percent, compared to a standard deviation for the monthly change in the CPI of 0.2 percent, indicating that daily POS inflation is much more volatile, mainly due to frequent switching between regular and sale prices. We show that the volatility in the daily inflation rate can be reduced by more than 20 percent by trimming the tails of product-level price change distributions. Finally, if we measure price changes from one day to the next and construct a chained Törnqvist index, a strong chain drift arises so that the chained price index falls to 10^(-10) of its base value over the 25-year sample period, which is equivalent to an annual deflation rate of 60 percent. We provide evidence suggesting that one source of the chain drift is fluctuations in sales quantity before, during, and after a sale period.
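
    The chain drift mechanism described above can be illustrated with a minimal sketch (the data below are invented for illustration, not drawn from the POS dataset): when a sale cuts a price and the quantity sold surges, a chained Törnqvist index fails to return to its base value even after the price returns to its regular level.

```python
import math

def tornqvist_link(p0, p1, q0, q1):
    """One-period Tornqvist link: a geometric mean of price relatives
    weighted by the average of the two periods' expenditure shares."""
    e0 = {i: p0[i] * q0[i] for i in p0}
    e1 = {i: p1[i] * q1[i] for i in p1}
    s0 = {i: e0[i] / sum(e0.values()) for i in e0}
    s1 = {i: e1[i] / sum(e1.values()) for i in e1}
    return math.exp(sum(0.5 * (s0[i] + s1[i]) * math.log(p1[i] / p0[i])
                        for i in p0))

def chained_index(prices, quantities):
    """Chain the per-period links; drift accumulates when prices and
    quantities bounce around sale periods, as the abstract describes."""
    levels = [1.0]
    for t in range(1, len(prices)):
        levels.append(levels[-1] * tornqvist_link(
            prices[t - 1], prices[t], quantities[t - 1], quantities[t]))
    return levels
```

    For instance, with two goods, a sale on one good (price halved, quantity quadrupled) followed by a return to the regular price with a post-sale quantity dip leaves the chained index roughly 6 percent below its base value, even though every price ends where it started.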

    Introduction

    Japan's central bank and government are currently engaged in a major experiment to raise the rate of inflation to the target of 2 percent set by the Bank of Japan (BOJ). With overcoming deflation being a key policy priority, a first step in this direction is the accurate assessment of price developments. In Japan, prices are measured by the Statistics Bureau, Ministry of Internal Affairs and Communications, and the consumer price index (CPI) published by the Statistics Bureau is the most important indicator that the BOJ pays attention to when making policy decisions. The CPI, moreover, is of direct relevance to people's lives as, for example, public pension benefits are linked to the rate of inflation as measured by the CPI.

  • Analytical Derivation of Power Laws in Firm Size Variables from Gibrat’s Law and Quasi-inversion Symmetry: A Geomorphological Approach

    Abstract

    We start from Gibrat’s law and quasi-inversion symmetry for three firm size variables (i.e., tangible fixed assets K, number of employees L, and sales Y) and derive a partial differential equation to be satisfied by the joint probability density function of K and L. We then transform K and L, which are correlated, into two independent variables by applying surface openness used in geomorphology and provide an analytical solution to the partial differential equation. Using worldwide data on the firm size variables for companies, we confirm that the estimates on the power-law exponents of K, L, and Y satisfy a relationship implied by the theory.

    Introduction

    In econophysics, it is well known that the cumulative distribution functions (CDFs) of capital K, labor L, and production Y of firms obey power laws at large scales, above certain size thresholds K0, L0, and Y0:
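
    Such tail exponents can be estimated from firm-level data in several ways; one standard sketch (not necessarily the estimator used in the paper) is the Hill maximum-likelihood estimator for a Pareto tail, where P(X > x) is proportional to x^(-mu) above a threshold.

```python
import math

def hill_estimator(data, threshold):
    """Hill maximum-likelihood estimator of the Pareto tail exponent mu,
    where P(X > x) ~ x**(-mu) for observations above the threshold."""
    tail = [x for x in data if x > threshold]
    if not tail:
        raise ValueError("no observations above the threshold")
    return len(tail) / sum(math.log(x / threshold) for x in tail)
```

    On a synthetic Pareto sample with mu = 1 (the Zipf case), the estimator recovers an exponent close to one.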

  • The Structure and Evolution of Buyer-Supplier Networks

    Abstract

    In this paper, we investigate the structure and evolution of customer-supplier networks in Japan using a unique dataset that contains information on customer and supplier linkages for more than 500,000 incorporated non-financial firms for the five years from 2008 to 2012. We find, first, that the number of customer links is unequal across firms; the customer link distribution has a power-law tail with an exponent of unity (i.e., it follows Zipf’s law). We interpret this as implying that competition among firms to acquire new customers yields winners with a large number of customers, as well as losers with fewer customers. We also show that the shortest path length for any pair of firms is, on average, 4.3 links. Second, we find that link switching is relatively rare. Our estimates indicate that the survival rate per year for customer links is 92 percent and for supplier links 93 percent. Third and finally, we find that firm growth rates tend to be more highly correlated the closer two firms are to each other in a customer-supplier network (i.e., the smaller is the shortest path length for the two firms). This suggests that a non-negligible portion of fluctuations in firm growth stems from the propagation of microeconomic shocks – shocks affecting only a particular firm – through customer-supplier chains.
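
    Statistics such as the 4.3-link average shortest path can be computed by running breadth-first search from every node; the sketch below uses a hypothetical four-firm star network, not the half-million-firm dataset, and is only practical at toy scale.

```python
from collections import deque

def avg_shortest_path(adj):
    """Average shortest path length over all connected ordered pairs of
    nodes, via breadth-first search from each node (toy-scale sketch).
    adj maps each node to a list of its neighbours."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs
```

    In a star network with one hub firm supplying three others, hub-spoke pairs are one link apart and spoke-spoke pairs two, giving an average path length of 1.5.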

    Introduction

    Firms in a modern economy tend to be closely interconnected, particularly in the manufacturing sector. Firms typically rely on the delivery of materials or intermediate products from their suppliers to produce their own products, which in turn are delivered to other downstream firms. Two recent episodes vividly illustrate just how closely firms are interconnected. The first is the recent earthquake in Japan. The earthquake and tsunami hit the Tohoku region, the north-eastern part of Japan, on March 11, 2011, resulting in significant human and physical damage to that region. However, the economic damage was not restricted to that region and spread in an unanticipated manner to other parts of Japan through the disruption of supply chains. For example, vehicle production by Japanese automakers, which are located far away from the affected areas, was stopped or slowed down due to a shortage of auto parts supplies from firms located in the affected areas. The shock even spread across borders, leading to a substantial decline in North American vehicle production. The second episode is the recent financial turmoil triggered by the subprime mortgage crisis in the United States. The adverse shock originally stemming from the so-called toxic assets on the balance sheets of U.S. financial institutions led to the failure of these institutions and was transmitted beyond entities that had direct business with the collapsed financial institutions to those that seemed to have no relationship with them, resulting in a storm that affected financial institutions around the world.

  • Buyer-Size Discounts and Inflation Dynamics

    Abstract

    This paper considers the macroeconomic effects of retailers’ market concentration and buyer-size discounts on inflation dynamics. During Japan's “lost decades,” large retailers enhanced their market power, leading to increased exploitation of buyer-size discounts in procuring goods. We incorporate this effect into an otherwise standard New-Keynesian model. Calibrating to the Japanese economy during the lost decades, we find that despite a reduction in procurement cost, strengthened buyer-size discounts did not cause deflation; rather, they caused inflation of 0.1% annually. This arose from an increase in the real wage due to the expansion of production.

    Introduction

    In this paper, we aim to consider the macroeconomic effects of buyer-size discounts on inflation dynamics. It is conventional wisdom that large buyers (downstream firms) are better bargainers than small buyers in procuring goods from sellers (upstream firms). Retailers, wholesalers, and manufacturers negotiate prices, taking account of trade size. The increase in sales of retail giants such as Wal-Mart in the United States, Tesco in the United Kingdom, and Aeon in Japan has been accompanied by an increase in their bargaining power over wholesalers and manufacturers. Figure 1 shows evidence that larger buyers enjoy larger price discounts in Japan. In 2007, the National Survey of Prices by the Statistics Bureau reported the prices of the same types of goods sold by retailers with differing floor space. For nine kinds of goods, from perishables to durable goods, retail prices decrease with the floor space of retailers. This suggests that large retailers purchase goods from wholesalers and manufacturers at lower prices than small retailers do. It is natural to think that these buyer-size discounts influence macro inflation dynamics.

  • Lending Pro-Cyclicality and Macro-Prudential Policy: Evidence from Japanese LTV Ratios

    Abstract

    Using a large and unique micro dataset compiled from the official real estate registry in Japan, we examine the loan-to-value (LTV) ratios for business loans from 1975 to 2009 to draw some implications for the ongoing debate on the use of LTV ratio caps as a macro-prudential policy measure. We find that the LTV ratio exhibits counter-cyclicality, implying that the increase (decrease) in loan volume is smaller than the increase (decrease) in land values during booms (busts). Most importantly, LTV ratios are at their lowest during the bubble period in the late 1980s and early 1990s. The counter-cyclicality of LTV ratios is robust to controlling for various characteristics of loans, borrowers, and lenders. We also find that borrowers with high-LTV loans performed no worse ex post than those with lower-LTV loans, and sometimes performed better, during the bubble period. Our findings imply that a simple fixed cap on LTV ratios might not only be ineffective in curbing loan volume in boom periods but might also inhibit well-performing firms from borrowing. This casts doubt on the efficacy of employing a simple LTV cap as a macro-prudential policy measure.

    Introduction

    The recent financial crisis with its epicenter in the U.S. followed a disastrous financial crisis in Japan more than a decade before. It is probably not an exaggeration to argue that these crises shattered the illusion that the Basel framework – specifically Basel I and Basel II – had ushered in a new era of financial stability. These two crises centered on bubbles that affected both the business sector (business loans) and the household sector (residential mortgages). In Japan, banks mostly suffered from damage in the business sector, while in the U.S. banks mostly suffered from damage in the household sector. Following the first of these crises, the Japanese crisis, a search began for policy tools that would reduce the probability of future crises and minimize the damage when they occur. Consensus began to build in favor of countercyclical macro-prudential policy levers (e.g., Kashyap and Stein 2004). For example, there was great interest and optimism associated with the introduction by the Bank of Spain of dynamic loan loss provisioning in 2000. Also, Basel III adopted a countercyclical capital buffer to be implemented when regulators sense that credit growth has become excessive.

  • The Uncertainty Multiplier and Business Cycles

    Abstract

    I study a business cycle model where agents learn about the state of the economy by accumulating capital. During recessions, agents invest less, and this generates noisier estimates of macroeconomic conditions and an increase in uncertainty. The endogenous increase in aggregate uncertainty further reduces economic activity, which in turn leads to more uncertainty, and so on. Thus, through changes in uncertainty, learning gives rise to a multiplier effect that amplifies business cycles. I use the calibrated model to measure the size of this uncertainty multiplier.

    Introduction

    What drives business cycles? A rapidly growing literature argues that shocks to uncertainty are a significant source of business cycle dynamics—see, for example, Bloom (2009), Fernández-Villaverde et al. (2011), Gourio (2012), and Christiano et al. (forthcoming). In uncertainty shock theories, recessions are caused by exogenous increases in the volatility of structural shocks. However, this literature faces at least two important criticisms. First, fluctuations in uncertainty may be, at least partially, endogenous. The distinction is crucial because if uncertainty is an equilibrium object arising from agents’ actions, policy experiments that treat uncertainty as exogenous are subject to the Lucas critique. Second, some authors (Bachmann and Bayer 2013, Born and Pfeifer 2012, and Chugh 2012) have argued that, given small and transient fluctuations in observed ex-post volatility, changes in uncertainty have negligible effects. However, time-varying volatility need not be the only source of time-varying uncertainty. If this is the case, these papers may be understating the contribution of changes in uncertainty to aggregate fluctuations.

  • Liquidity, Trends and the Great Recession

    Abstract

    We study the impact that the liquidity crunch of 2008-2009 had on the U.S. economy's growth trend. To this end, we propose a model featuring endogenous growth à la Romer and a liquidity friction à la Kiyotaki-Moore. A key finding in our study is that liquidity declined around the demise of Lehman Brothers, which led to the severe contraction in the economy. This liquidity shock was a tail event. Improving conditions in financial markets were crucial in the subsequent recovery. Had conditions remained at their worst 2008 levels, output would have been 20 percent below its actual level in 2011.

    Introduction

    A few years into the recovery from the Great Recession, it is becoming clear that real GDP is failing to return to its pre-crisis trend. Namely, although the economy is growing at pre-crisis growth rates, the crisis seems to have imposed a downward shift on the level of output. Figure 1 shows real GDP and its growth rate over the past decade. Without much effort, one can see that the economy is moving along a (new) trend that lies below the one prevailing in 2007. It is also apparent that if the economy continues to display the dismal post-crisis growth rates (blue dashed line), it will not revert to the old trend. Hence, this tepid recovery has spurred debate on whether the shift is permanent and, if so, what the long-term implications are for the economy. In this paper, we tackle the issue of the long-run impact of the Great Recession by means of a structural model.

  • Growing through cities in developing countries

    Abstract

    This paper examines the effects of urbanisation on development and growth. It starts with a labour market perspective and emphasises the importance of agglomeration economies, both static and dynamic. It then argues that more productive jobs in cities do not come in a void and underscores the importance of job and firm dynamics. In turn, these dynamics are shaped by the broader characteristics of urban systems. A number of conclusions are drawn. First, agglomeration effects are quantitatively important and pervasive. Second, the productive advantage of large cities is constantly eroded and needs to be sustained by new job creation and innovation. Third, this process of creative destruction in cities, which is fundamental for aggregate growth, is determined in part by the characteristics of urban systems and broader institutional features. We highlight important differences between developing countries and more advanced economies. A major challenge for developing countries is to make sure their urban systems act as drivers of economic growth.

    Introduction

    Urbanisation and development are tightly linked. The strong positive correlation between the rate of urbanisation of a country and its per capita income has been repeatedly documented. See for instance World Bank (2009), Henderson (2010), or Henderson (2002) among many others. There is no doubt that much of the causation goes from economic growth to increased urbanisation. As countries grow, they undergo structural change and labour is reallocated from rural agriculture to urban manufacturing and services (Michaels, Rauch, and Redding, 2012). The traditional policy focus is then to make sure that this reallocation occurs at the ‘right time’ and that the distribution of population across cities is ‘balanced’. Urbanisation without industrialisation (Fay and Opal, 1999, Gollin, Jedwab, and Vollrath, 2013, Jedwab, 2013) and increased population concentrations in primate cities (Duranton, 2008) are often viewed as serious urban and development problems.

  • The Political Economy of Financial Systems: Evidence from Suffrage Reforms in the Last Two Centuries

    Abstract

    Initially, voting rights were limited to wealthy elites, who provided political support for stock markets. Franchise expansion induces the median voter to provide political support for banking development, as this new electorate has lower financial holdings and benefits less from the uncertainty and financial returns of stock markets. Our panel data evidence covering 1830-1999 shows that tighter restrictions on the voting franchise induce greater stock market development, whereas a broader voting franchise is more conducive to banking sector development, consistent with Perotti and von Thadden (2006). Our results are robust to controlling for other political determinants and endogeneity.

    Introduction

    Fundamental institutions drive financial development. Political institutions, together with legal institutions and cultural traits, are of first-order importance (La Porta, Lopez-de-Silanes, Shleifer, and Vishny, 1998; Rajan and Zingales, 2003; Guiso, Sapienza, and Zingales, 2004; Acemoglu and Robinson, 2005). This paper is the first to empirically study how an important political institution – the scope of the voting franchise – affects different forms of financial development (stock market and banking) through shifts in the distribution of preferences of the voting class.

  • Investment Horizon and Repo in the Over-the-Counter Market

    Abstract

    This paper presents a three-period model featuring a short-term investor and dealers in an over-the-counter bond market. A short-term investor invests cash over the short term because of a need to pay cash soon. This time constraint lowers the resale price of bonds held by a short-term investor through bilateral bargaining in an over-the-counter market. Ex ante, this hold-up problem explains the use of a repo by a short-term investor, a positive haircut due to counterparty risk, and the fragility of a repo market. This result holds without any risk to the dividends and principals of the underlying bonds or asymmetric information.

    Introduction

    Many securities trade primarily in an over-the-counter (OTC) market. A notable example of such securities is bonds. The key feature of an OTC market is that the buyer and the seller in each OTC trade set the terms of trade bilaterally. A theoretical literature has developed analyzing the effects of this market structure on spot trading; see, for example, Spulber (1996), Rust and Hall (2003), Duffie, Gârleanu, and Pedersen (2005), Miao (2006), Vayanos and Wang (2007), Lagos and Rocheteau (2010), Lagos, Rocheteau and Weill (2011), and Chiu and Koeppl (2011). This literature uses search models, in which each transaction is bilateral, to analyze various aspects of trading and price dynamics, such as liquidity and bid-ask spreads, in OTC spot markets.

  • Separating the Age Effect from a Repeat Sales Index: Land and Structure Decomposition

    Abstract

    Since real estate is heterogeneous and infrequently traded, the repeat sales model has become a popular method to estimate a real estate price index. However, the model fails to adjust for depreciation, as age and the time between sales have an exact linear relationship. This paper proposes a new method to estimate an age-adjusted repeat sales index by decomposing property value into land and structure components. As depreciation is more relevant to the structure than to the land, a property’s depreciation rate should depend on the relative size of its land and structure components: the larger the land component, the lower the depreciation rate of the property. Based on housing transaction data from Hong Kong and Tokyo, we find that Hong Kong has a higher depreciation rate (assuming a fixed structure-to-property value ratio), while the resulting age adjustment is larger in Tokyo because its structure component has grown larger from the first to the second sale.
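
    For reference, the standard repeat sales regression that the paper starts from can be sketched as follows: log price relatives are regressed on time dummies, with -1 at the first sale and +1 at the second (a Bailey-Muth-Nourse-style sketch on invented data; the paper's land-structure age adjustment is not implemented here).

```python
import numpy as np

def repeat_sales_index(pairs, n_periods):
    """Repeat sales sketch: regress log price relatives on time dummies
    (-1 at the first sale period, +1 at the second); period 0 is the base.
    pairs: list of (t_first, t_second, price_first, price_second)."""
    X, y = [], []
    for t1, t2, p1, p2 in pairs:
        row = [0.0] * (n_periods - 1)   # dummies for periods 1..n_periods-1
        if t1 > 0:
            row[t1 - 1] = -1.0
        if t2 > 0:
            row[t2 - 1] = 1.0
        X.append(row)
        y.append(np.log(p2 / p1))
    beta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return np.exp(np.concatenate([[0.0], beta]))  # index levels, base = 1
```

    On toy data consistent with 10 percent appreciation per period, the estimated index levels come out as 1, 1.1, and 1.21.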

    Introduction

    A price index aims to capture the price change of products free from any variations in quantity or quality. When it comes to real estate, the core problem is that it is heterogeneous and infrequently traded. Mean or median price indices are simple to compute, but properties sold in one period may differ from those in another period. To overcome this problem, two regression-based approaches are used to construct a constant-quality real estate price index (Shimizu et al. (2010)).

  • MEASURING THE EVOLUTION OF KOREA’S MATERIAL LIVING STANDARDS 1980-2010

    Abstract

    Based on a production-theoretic framework, we measure the effects of real output prices, primary inputs, multi-factor productivity growth, and depreciation on Korea’s real net income growth over the past 30 years. The empirical analysis is based on a new dataset for Korea with detailed information on labour and capital inputs, including series on land and inventories assets. We find that while over the entire period, capital and labour inputs explain the bulk of Korean real income growth, productivity growth has come to play an increasingly important role since the mid-1990s, providing some evidence of a transition from ‘input-led’ to ‘productivity-led’ growth. Terms of trade and other price effects were modest over the longer period, but had significant real income effects over sub-periods. Overall, real depreciation had only limited effects except during periods of crises where it bore negatively on real net income growth.
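
    The share-weighted decomposition underlying such growth accounting can be sketched in one line. This is a deliberately simplified version: the terms-of-trade, other price, and depreciation effects that the paper also measures are omitted here, and the numbers below are illustrative.

```python
def real_income_growth(share_labour, g_labour, share_capital, g_capital, g_mfp):
    """Growth-accounting sketch: real income growth approximated as the
    cost-share-weighted growth of primary inputs plus MFP growth
    (terms-of-trade and depreciation effects omitted for brevity)."""
    return share_labour * g_labour + share_capital * g_capital + g_mfp
```

    For example, with a 0.6 labour share growing at 2 percent, a 0.4 capital share growing at 5 percent, and 1 percent MFP growth, real income grows by about 4.2 percent.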

    Introduction

    The vast majority of studies on economic growth have been concerned with the growth of gross domestic product (GDP), in other words with the growth of countries’ production. The OECD, in common with many other organisations and economists, has also approximated material living standards in terms of the level and growth of gross domestic product.

  • Matching Indices for Thinly-Traded Commercial Real Estate in Singapore

    Abstract

    We use a matching procedure to construct three commercial real estate indices (office, shop and multiple-user factory) in Singapore using transaction sales from 1995Q1 to 2010Q4. The matching approach is less restrictive than the repeat sales estimator, which is restricted to properties sold at least twice during the sample period. The matching approach helps to overcome problems associated with thin markets and non-random sampling by pairing sales of similar but not necessarily identical properties across the control and treatment periods. We use the matched samples to estimate not just the mean changes in prices, but the full distribution of quality-adjusted sales prices over different target quantiles. The matched indices show three distinct cycles in commercial real estate markets in Singapore, including two booms in 1995-1996 and 2006-2011, and a deep and prolonged recession with declining prices from 1999 to 2005. We also use kernel density functions to illustrate the shift in the distribution of prices across the two post-crisis periods in 1998 and 2008.
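
    A minimal sketch of the matching idea follows; the field names and the single matching characteristic are hypothetical, and the paper's actual procedure matches on richer property characteristics and estimates full quantile distributions rather than simple relatives.

```python
def matched_price_relatives(control, treatment, key=lambda s: s["floor_area"]):
    """Nearest-neighbour matching sketch: pair each treatment-period sale
    with the control-period sale closest in a quality characteristic, then
    return the sorted matched price relatives (a quality-adjusted change
    distribution). Each sale is a dict with 'price' and the match field."""
    rels = []
    for t in treatment:
        c = min(control, key=lambda s: abs(key(s) - key(t)))
        rels.append(t["price"] / c["price"])
    return sorted(rels)
```

    With two control sales and two similar (but not identical) treatment sales each priced 10 percent higher than their nearest match, every matched relative is 1.1.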

    Introduction

    Unlike residential real estate markets, where transactions are abundant, commercial real estate transactions are thin and lumpy. Many institutional owners hold commercial real estate for long-term investment purposes. The dearth of transaction data has led to the widespread use of appraisal-based indices, such as the National Council of Real Estate Investment Fiduciaries (NCREIF) index, as an alternative to transaction-based indices in the U.S. However, appraisal-based indices are vulnerable to smoothing problems. Appraisers appear to systematically underestimate the variance of real estate returns and their correlation with other asset returns (Webb, Miles and Guilkey, 1992). Despite various attempts to correct appraisal bias, it remains an Achilles’ heel of appraisal-based indices. Corgel and deRoos (1999) found that recovering the true variance and correlation of appraisal-based returns reduces the weights of real estate in multi-asset portfolios.

  • The Consumer Price Index: Recent Developments

    Abstract

    The 2004 International Labour Office Consumer Price Index Manual: Theory and Practice summarized the state of the art for constructing Consumer Price Indexes (CPIs) at that time. In the intervening decade, there have been some significant new developments which are reviewed in this paper. The CPI Manual recommended the use of chained superlative indexes for a month to month CPI. However, subsequent experience with the use of monthly scanner data has shown that a significant chain drift problem can occur. The paper explains the nature of the problem and reviews possible solutions to overcome the problem. The paper also describes the recently developed Time Dummy Product method for constructing elementary index numbers (indexes at lower levels of aggregation where only price information is available).
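
    The Time Dummy Product method mentioned above regresses log prices on time dummies and product dummies; the exponentiated time coefficients give the elementary index. Below is a minimal unweighted sketch on invented data (the literature the paper reviews also considers weighted variants).

```python
import numpy as np

def time_product_dummy_index(obs, n_periods, n_products):
    """Time-product-dummy sketch: regress log price on an intercept, time
    dummies, and product dummies; period 0 and product 0 are the bases.
    obs: list of (period, product_id, price)."""
    X, y = [], []
    for t, i, p in obs:
        row = [0.0] * (n_periods - 1 + n_products - 1)
        if t > 0:
            row[t - 1] = 1.0
        if i > 0:
            row[n_periods - 1 + i - 1] = 1.0
        X.append([1.0] + row)
        y.append(np.log(p))
    beta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return np.exp(np.concatenate([[0.0], beta[1:n_periods]]))  # base = 1
```

    With two products whose prices both rise 10 percent between two periods, the estimated index is 1 and then 1.1, regardless of the products' different price levels.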

    Introduction

    A decade has passed since the Consumer Price Index Manual: Theory and Practice was published. Thus it seems appropriate to review the advice given in the Manual in the light of research over the past decade. It turns out that there have been some significant developments that should be taken into account in the next revision of the Manual.

  • The Estimation of Owner Occupied Housing Indexes using the RPPI: The Case of Tokyo

    Abstract

    Dramatic increases and decreases in housing prices have had an enormous impact on the economies of various countries. If this kind of fluctuation in housing prices is linked to fluctuations in the consumer price index (CPI) and GDP, it may be reflected in fiscal and monetary policies. However, during the 1980s housing bubble in Japan and the later U.S. housing bubble, fluctuations in asset prices were not sufficiently reflected in price statistics and the like. The estimation of imputed rent for owner-occupied housing is said to be one of the main reasons for this. Using multiple previously proposed methods, this study estimated the imputed rent for owner-occupied housing in Tokyo and clarified the extent to which the estimates diverged depending on the estimation method. Examining the results obtained showed that, at the bubble's peak, there was an 11-fold discrepancy between the Equivalent Rent Approach currently employed in Japan and the Equivalent Rent calculated with a hedonic approach using market rent. Meanwhile, with the User Cost Approach, during the bubble period when asset prices rose significantly, the estimated values became negative with some estimation methods. Accordingly, we also estimated Diewert's OOH Index, proposed by Diewert and Nakamura (2009). Comparing these estimates to the Equivalent Rent Approach estimates modified with the hedonic approach using market rent revealed that from 1990 to 2009 the Diewert OOH Index results were on average 1.7 times greater than the Equivalent Rent Approach results, with a maximum three-fold difference. These findings suggest that even when the Equivalent Rent Approach is improved, significant discrepancies remain.
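
    The negative User Cost values mentioned above follow directly from the simplified user cost formula: the cost of owning turns negative when expected capital gains exceed the opportunity and depreciation costs. The sketch below uses illustrative parameter values, not the paper's estimates.

```python
def user_cost(value, interest_rate, depreciation_rate, expected_appreciation):
    """Simplified user cost of owner-occupied housing: opportunity cost of
    capital plus depreciation minus the expected capital gain. Turns
    negative when expected appreciation is large, as in bubble periods."""
    return value * (interest_rate + depreciation_rate - expected_appreciation)
```

    For a dwelling worth 100 with a 4 percent interest rate and 2 percent depreciation, zero expected appreciation gives a user cost of 6, while 10 percent expected appreciation (a bubble-period expectation) makes the user cost negative.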

    Introduction

    Housing price fluctuations exert effects on the economy through various channels. More precisely, however, the relative prices between housing and other assets, and between housing and goods and services, are the variables that should be observed.

  • Residential Property Price Indexes for Tokyo

    Abstract

    The paper uses hedonic regression techniques in order to decompose the price of a house into land and structure components using real estate sales data for Tokyo. In order to get sensible results, a nonlinear regression model using data that covered multiple time periods was used. Collinearity between the amount of land and structure in each residential property leads to inaccurate estimates for the land and structure value of a property. This collinearity problem was solved by using exogenous information on the rate of growth of construction costs in Tokyo in order to get useful constant quality subindexes for the price of land and structures separately.

    Introduction

    In this paper, we will use hedonic regression techniques in order to construct a quarterly constant quality price index for the sales of residential properties in Tokyo for the years 2000-2010 (44 quarters in all). The usual application of a time dummy hedonic regression model to sales of houses does not lead to a decomposition of the sale price into a structure component and a land component. But such a decomposition is required for many purposes. Our paper will attempt to use hedonic regression techniques in order to provide such a decomposition for Tokyo house prices. Instead of entering characteristics into our regressions in a linear fashion, we enter them as piece-wise linear functions or spline functions to achieve greater flexibility.
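
    The identification idea described above, pinning the structure price to exogenous construction cost information so that only the land price must be estimated, can be sketched as follows. The numbers and the simple no-intercept OLS are illustrative only; the paper's actual specification is a nonlinear regression with piece-wise linear (spline) characteristics.

```python
import numpy as np

def land_price_given_construction_cost(sales, construction_cost):
    """Builder's-model sketch: assume value = p_land * lot_size +
    construction_cost * floor_area, with the structure price per square
    metre fixed exogenously; estimate p_land by no-intercept OLS on the
    residual value. sales: list of (price, lot_size_m2, floor_area_m2)."""
    y = np.array([price - construction_cost * floor
                  for price, _, floor in sales])
    x = np.array([lot for _, lot, _ in sales])
    return float(x @ y / (x @ x))   # OLS slope through the origin
```

    On noiseless toy data generated with a land price of 300,000 per square metre and a construction cost of 200,000, the estimator recovers the land price exactly; fixing the structure price is what breaks the land-structure collinearity.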

  • A Conceptual Framework for Commercial Property Price Indexes

    Abstract

    The paper studies the problems associated with the construction of price indexes for commercial properties that could be used in the System of National Accounts. Property price indexes are required for the stocks of commercial properties in the Balance Sheets of the country, and related price indexes for the land and structure components of a commercial property are required in the Income Accounts of the country if the Multifactor Productivity of the Commercial Property Industry is calculated as part of the System of National Accounts. The paper suggests a variant of the capitalization of Net Operating Income approach to the construction of property price indexes and uses the one hoss shay or light bulb model of depreciation for the structure component of a commercial property.

    Introduction

    Many of the property price bubbles experienced during the 20th century were triggered by steep increases and sharp decreases in commercial property prices. Given this, there is a need to construct commercial property price indexes, but exactly how should these prices be measured? Since commercial property is highly heterogeneous compared to housing and the number of transactions is much lower, it is extremely difficult to capture trends in this market. In addition, many countries have been experiencing large investments in commercial properties, and in countries where the market has matured, depreciation and investments in improvements and renovations represent a substantial fraction of national output. But clear measurement methods for the treatment of these expenditures in the System of National Accounts are lacking. Given this, one may say that the economic value of commercial property is one of the indicators that is most difficult to measure on a day-to-day basis, and that statistical development in this area is one of the fields that has perhaps lagged the furthest behind. Indexes based on transaction prices for commercial properties have begun to appear in recent years, especially in the U.S. In many cases, however, commercial property indexes are based on property appraisal prices, and appraisal prices need to be based on a firm methodology. Thus in this paper, we will briefly review possible appraisal methodologies and then develop in more detail what we think is the most promising approach.

  • How Much Do Official Price Indexes Tell Us About Inflation?

    Abstract

    Official price indexes, such as the CPI, are imperfect indicators of inflation calculated using ad hoc price formulae different from the theoretically well-founded inflation indexes favored by economists. This paper provides the first estimate of how accurately the CPI informs us about “true” inflation. We use the largest price and quantity dataset ever employed in economics to build a Törnqvist inflation index for Japan between 1989 and 2010. Our comparison of this true inflation index with the CPI indicates that the CPI bias is not constant but depends on the level of inflation. We show the informativeness of the CPI rises with inflation. When measured inflation is low (less than 2.4% per year) the CPI is a poor predictor of true inflation even over 12-month periods. Outside this range, the CPI is a much better measure of inflation. We find that the U.S. PCE Deflator methodology is superior to the Japanese CPI methodology but still exhibits substantial measurement error and biases rendering it a problematic predictor of inflation in low inflation regimes as well.

    Introduction

    We have long known that the price indexes constructed by statistical agencies, such as the Consumer Price Index (CPI) and the Personal Consumption Expenditure (PCE) deflator, measure inflation with error. This error arises for two reasons. First, formula biases or errors appear because statistical agencies do not use the price aggregation formula dictated by theory. Second, imperfect sampling means that official price indexes are inherently stochastic. A theoretical macroeconomics literature starting with Svensson and Woodford [2003] and Aoki [2003] has noted that these stochastic measurement errors imply that one cannot assume that true inflation equals the CPI less some bias term. In general, the relationship is more complex, but what is it? This paper provides the first answer to this question by analyzing the largest dataset ever utilized in economics: 5 billion Japanese price and quantity observations collected over a 23-year period. The results are disturbing. We show that when the Japanese CPI measures inflation as low (below 2.4 percent in our baseline estimates) there is little relation between measured inflation and actual inflation. Outside of this range, measured inflation understates actual inflation changes. In other words, one can infer inflation changes from CPI changes when the CPI is high, but not when the CPI is close to zero. We also show that if Japan were to shift to a methodology akin to the U.S. PCE deflator, the non-linearity would be reduced but not eliminated. This non-linear relationship between measured and actual inflation has important implications for the conduct of monetary policy in low inflation regimes.
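    The Törnqvist index used here as the benchmark weights each good's log price relative by the average of its expenditure shares in the two periods, ln P_T = Σ_i ½(s_{i,t-1} + s_{i,t}) ln(p_{i,t}/p_{i,t-1}), whereas a fixed-base Laspeyres index, the formula closest to an official CPI, freezes base-period quantities. A small sketch with made-up prices and quantities shows the mechanical difference; because Laspeyres weights ignore substitution toward goods whose relative price fell, it typically reads higher:

    ```python
    # Törnqvist vs. fixed-base Laspeyres on a toy three-good basket.
    # All prices and quantities are made up purely for illustration.
    import math

    p0 = [100.0, 200.0, 50.0]   # base-period prices
    p1 = [110.0, 190.0, 60.0]   # comparison-period prices
    q0 = [10.0, 5.0, 20.0]      # base-period quantities
    q1 = [9.0, 6.0, 15.0]       # comparison-period quantities

    def shares(p, q):
        """Expenditure share of each good."""
        total = sum(pi * qi for pi, qi in zip(p, q))
        return [pi * qi / total for pi, qi in zip(p, q)]

    s0, s1 = shares(p0, q0), shares(p1, q1)

    # Törnqvist: price relatives weighted by average expenditure shares.
    log_tornqvist = sum(0.5 * (a + b) * math.log(pb / pa)
                        for a, b, pa, pb in zip(s0, s1, p0, p1))
    tornqvist = math.exp(log_tornqvist)

    # Laspeyres: base-period quantity weights only (no substitution).
    laspeyres = (sum(pb * qa for pb, qa in zip(p1, q0)) /
                 sum(pa * qa for pa, qa in zip(p0, q0)))
    ```

    In this example the Laspeyres index reads about 8.3% inflation against roughly 7.3% for the Törnqvist, a gap of the same nature (though not magnitude) as the CPI bias discussed in the paper.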

  • Zero Lower Bound and Parameter Bias in an Estimated DSGE Model

    Abstract

    This paper examines how and to what extent parameter estimates can be biased in a dynamic stochastic general equilibrium (DSGE) model that omits the zero lower bound constraint on the nominal interest rate. Our experiments show that most of the parameter estimates in a standard sticky-price DSGE model are not biased although some biases are detected in the estimates of the monetary policy parameters and the steady-state real interest rate. Nevertheless, in our baseline experiment, these biases are so small that the estimated impulse response functions are quite similar to the true impulse response functions. However, as the probability of hitting the zero lower bound increases, the biases in the parameter estimates become larger and can therefore lead to substantial differences between the estimated and true impulse responses.

    Introduction

    Dynamic stochastic general equilibrium (DSGE) models have become a prominent tool for policy analysis. In particular, following the development of Bayesian estimation and evaluation techniques, estimated DSGE models have been extensively used by a range of policy institutions, including central banks. At the same time, the zero lower bound constraint on nominal interest rates has been a primary concern for policymakers. Much work has been devoted to understanding, from a theoretical perspective, how the economy works and how policy should be conducted in the presence of this constraint. However, empirical studies that estimate DSGE models including the interest-rate lower bound are still scarce because of computational difficulties in the treatment of the nonlinearity arising from the bound, and hence most practitioners continue to estimate linearized DSGE models without explicitly considering the lower bound.

  • Exchange Rates and Fundamentals: Closing a Two-country Model

    Abstract

    In an influential paper, Engel and West (2005) claim that the near random-walk behavior of nominal exchange rates is an equilibrium outcome of a variant of present-value models when economic fundamentals follow exogenous first-order integrated processes and the discount factor approaches one. Subsequent empirical studies further confirm this proposition by estimating a discount factor that is close to one under distinct identification schemes. In this paper, I argue that the unit market discount factor implies the counterfactual joint equilibrium dynamics of random-walk exchange rates and economic fundamentals within a canonical, two-country, incomplete market model. Bayesian posterior simulation exercises of a two-country model based on post-Bretton Woods data from Canada and the United States reveal difficulties in reconciling the equilibrium random-walk proposition within the two-country model; in particular, the market discount factor is identified as being much lower than one.

    Introduction

    Few equilibrium models for nominal exchange rates systematically beat a naive random-walk counterpart in terms of out-of-sample forecast performance. Since the study of Meese and Rogoff (1983), this robust empirical property of nominal exchange rate fluctuations has stubbornly resisted theoretical attempts to explain the behavior of nominal exchange rates as equilibrium outcomes. The recently developed open-economy dynamic stochastic general equilibrium (DSGE) models also suffer from this problem. Known as the disconnect puzzle, open-economy DSGE models fail to generate random-walk nominal exchange rates along an equilibrium path because their exchange rate forecasts are closely related to other macroeconomic fundamentals.

  • The Relation between Inventory Investment and Price Dynamics in a Distributive Firm

    Abstract

    In this paper, we examine the role of inventory in the price-setting behavior of a distributive firm. Empirically, we establish five facts about the pricing behavior and sales quantities of a particular consumer good, based on daily scanner data, in order to examine the relation between store characteristics and pricing behavior. These facts indicate that price stickiness varies with retailers' characteristics. We argue that the hidden mechanism behind this price stickiness lies in the retailer's policy for inventory investment. We construct a partial equilibrium model of the retailer's optimization behavior with inventory so as to replicate the five empirical facts. Numerical experiments with the model suggest that the frequency of price changes depends on the retailer's order cost, storage cost, and menu cost, not on the price elasticity of demand.

    Introduction

    Price stickiness is one of the most important and controversial concepts in macroeconomics. Many macroeconomists consider it a key ingredient in generating real effects of monetary policy in macroeconomic models, and researchers have developed theories of price dynamics and examined data to establish empirical facts. This paper studies the mechanism of price stickiness by examining the role of inventory in the price-setting behavior of a distributive firm, both empirically, using micro-data scanned in retail stores, and through numerical experiments with a quantitative model of a distributive firm.

  • Labor Force Participation and Monetary Policy in the Wake of the Great Recession

    Abstract

    In this paper, we provide compelling evidence that cyclical factors account for the bulk of the post-2007 decline in the U.S. labor force participation rate. We then proceed to formulate a stylized New Keynesian model in which labor force participation is essentially acyclical during "normal times" (that is, in response to small or transitory shocks) but drops markedly in the wake of a large and persistent aggregate demand shock. Finally, we show that these considerations can have potentially crucial implications for the design of monetary policy, especially under circumstances in which adjustments to the short-term interest rate are constrained by the zero lower bound.

    Introduction

    A longstanding and well-established fact in labor economics is that the labor supply of prime-age and older adults has been essentially acyclical throughout the postwar period, while that of teenagers has been moderately procyclical; cf. Mincer (1966), Pencavel (1986), and Heckman and Killingsworth (1986). Consequently, macroeconomists have largely focused on the unemployment rate as a business cycle indicator while abstracting from movements in labor force participation. Similarly, the literature on optimal monetary policy and simple rules has typically assumed that unemployment gaps and output gaps can be viewed as roughly equivalent; cf. Orphanides (2002), Taylor and Williams (2010).

  • Who faces higher prices? An empirical analysis based on Japanese homescan data

    Abstract

    On the basis of household-level scanner data (homescan) for Japan, we construct a household-level price index and investigate the causes of price differences across households. We observe large price differentials between households, as did Aguiar and Hurst (2007). However, the differences between age and income groups are small. In addition, we find that elderly people face higher prices than younger people, contrary to the results of Aguiar and Hurst (2007). The most important determinant of the price level is reliance on bargain sales: a one-standard-deviation increase in purchases of goods at bargain sales decreases the price level by more than 0.9%, while shopping frequency has only limited effects on the price level.

    Introduction

    Owing to recent technological developments in data creation, numerous commodity price researchers have begun to use not only traditional aggregates, such as the consumer price index, but also micro-level information on commodity prices. To date, commodity-level price information has been used in various fields of economics, such as macroeconomics (Nakamura and Steinsson, 2007), international economics (Haskel and Wolf, 2001), and industrial economics (Bay et al., 2004; Goldberg and Frank, 2005). Recently, on the basis of commodity-level homescan data, Aguiar and Hurst (2007) (hereafter AH) found a violation of the law of one price between different age groups.

  • Is Downward Wage Flexibility the Primary Factor of Japan’s Prolonged Deflation?

    Abstract

    By using both macro- and micro-level data, this paper investigates how wages and prices evolved during Japan’s lost two decades. We find that downward nominal wage rigidity was present in Japan until the late 1990s but disappeared after 1998 as annual wages became downwardly flexible. Moreover, nominal wage flexibility may have contributed to relatively low unemployment rates in Japan. Although macro-level movements in nominal wages and prices seemed to be synchronized, such synchronicity was not observed at the industry level. Therefore, wage deflation does not seem to be a primary factor of Japan’s prolonged deflation.

    Introduction

    Most central banks are now targeting a positive inflation rate of a few percentage points. One of the reasons for not targeting a zero inflation rate is the downward rigidity of nominal wages, which could cause huge inefficiency in the resource allocation of the labor market (Akerlof et al. 1996). By creating an environment in which real wages can be adjusted, a positive inflation rate thereby serves as a “safety margin” against the risk of declining prices.

  • Micro Price Dynamics during Japan’s Lost Decades

    Abstract

    We study micro price dynamics and their macroeconomic implications using daily scanner data from 1988 to 2013. We provide five facts. First, posted prices in Japan are ten times as flexible as those in U.S. scanner data. Second, regular prices are almost as flexible as those in the U.S. and the euro area. Third, heterogeneity is large. Fourth, during Japan's lost decades, temporary sales played an increasingly important role. Fifth, the frequency of upward regular price revisions and the frequency of sales are significantly correlated with the macroeconomic environment, such as labor market indicators.

    Introduction

    Since the asset price bubble burst in the early 1990s, Japan has gone through prolonged stagnation and very low rates of inflation (see Figure 1). To investigate the background to this, in this paper, we study micro price dynamics at the retail shop and product level. In doing so, we use daily scanner or Point of Sale (POS) data from 1988 to 2013 covering over 6 billion records. From the data, we examine how firms' price setting has changed over this period; report similarities and differences in micro price dynamics between Japan and other countries; and draw implications for economic theory as well as policy.

  • Chronic Deflation in Japan

    Abstract

    Japan has suffered from long-lasting but mild deflation since the latter half of the 1990s. Estimates of a standard Phillips curve indicate that a decline in inflation expectations, the negative output gap, and other factors such as a decline in import prices and a higher exchange rate all account for some of this development. These factors, in turn, reflect various underlying structural features of the economy. This paper examines a long list of such structural features that may explain Japan's chronic deflation, including the zero lower bound on the nominal interest rate, public attitudes toward the price level, central bank communication, weaker growth expectations coupled with declining potential growth or a lower natural rate of interest, risk-averse banking behavior, deregulation, and the rise of emerging economies.

    Introduction

    Why have price developments in Japan been so weak for such a long time? What can leading-edge economic theory and research tell us about the possible causes behind these developments? Despite the obvious policy importance of these questions, there is no consensus among practitioners or academics. This paper attempts to shed some light on these issues by drawing on recent work on the subject in the literature.

  • A pass-through revival

    Abstract

    It has been argued that pass-through of the exchange rate and import prices to domestic prices has declined in recent years. This paper argues that it has come back strong, at least in Japan, in the most recent years. To make this point, I estimate a VAR model with time-varying parameters and volatility for Japanese exchange rates and prices. This method allows me to estimate the responses of domestic prices to the exchange rate and import prices at different points in time. I find that the response was fairly strong in the early 1980s but has since declined considerably. Since the early 2000s, however, pass-through has started to show signs of life again. This implies that the exchange rate may have regained its status as an important transmission mechanism of policy to domestic prices. At the end of the paper, I look for a possible cause of this pass-through revival by studying the evolution of the Japanese Input-Output structure.

    Introduction

    This paper re-examines the effects of the exchange rate and import prices on Japanese domestic prices. In recent literature, it has been claimed that the extent of pass-through has declined substantially. My goal is to re-examine this claim by studying the most updated data from Japan, using an approach that allows for flexible forms of structural changes.

  • Product Downsizing and Hidden Price Increases: Evidence from Japan’s Deflationary Period

    Abstract

    Consumer price inflation in Japan has been below zero since the mid-1990s. Given this, it is difficult for firms to raise product prices in response to an increase in marginal costs. One pricing strategy firms have taken in this situation is to reduce the size or the weight of a product while leaving the price more or less unchanged, thereby raising the effective price. In this paper, we empirically examine the extent to which product downsizing occurred in Japan as well as the effects of product downsizing on prices and quantities sold. Using scanner data on prices and quantities for all products sold at about 200 supermarkets over the last ten years, we find that about one third of product replacements that occurred in our sample period were accompanied by a size/weight reduction. The number of product replacements with downsizing has been particularly high since 2007. We also find that prices, on average, did not change much at the time of product replacement, even if a product replacement was accompanied by product downsizing, resulting in an effective price increase. However, comparing the magnitudes of product downsizings, our results indicate that prices declined more for product replacements that involved a larger decline in size or weight. Finally, we show that the quantities sold decline with product downsizing, and that the responsiveness of quantity purchased to size/weight changes is almost the same as the price elasticity, indicating that consumers are as sensitive to size/weight changes as they are to price changes. This implies that quality adjustments based on per-unit prices, which are widely used by statistical agencies in countries around the world, may be an appropriate way to deal with product downsizing.

    Introduction

    Consumer price inflation in Japan has been below zero since the mid-1990s, clearly indicating the emergence of deflation over the last 15 years. The rate of deflation as measured by the headline consumer price index (CPI) has been around 1 percent annually, which is much smaller than the rates observed in the United States during the Great Depression, indicating that although Japan’s deflation is persistent, it is only moderate. It has been argued by researchers and practitioners that at least in the early stages the main cause of deflation was weak aggregate demand, although deflation later accelerated due to pessimistic expectations reflecting firms’ and households’ view that deflation was not a transitory but a persistent phenomenon and that it would continue for a while.

  • Why are product prices in online markets not converging?

    Abstract

    Why are product prices in online markets dispersed despite very small search costs? To address this question, we construct a unique dataset from a Japanese price comparison site, which records the price quotes offered by e-retailers as well as customers' clicks on products, which occur when customers proceed to purchase a product. We find that the distribution of prices that retailers quote for a particular product at a particular point in time (divided by the lowest price) follows an exponential distribution, indicating the presence of substantial price dispersion. For example, 20 percent of all retailers quote prices that are more than 50 percent higher than the lowest price. Next, comparing the probability that customers click on a retailer at a particular price rank with the probability that retailers post prices at that rank, we show that both decline exponentially with price rank and that the associated exponents are quite close. This suggests that some retailers set prices substantially above the lowest price because they know that some customers will choose them even at that high price. Based on these findings, we hypothesize that price dispersion in online markets stems from heterogeneity in customers' preferences over retailers; that is, customers choose a set of candidate retailers based on their preferences, which are heterogeneous across customers, and then pick a particular retailer among the candidates based on the price ranking.

    Introduction

    The number of internet users worldwide is 2.4 billion, constituting about 35 percent of the global population. The number of users has more than doubled over the last five years and continues to increase [1]. In the early stages of the internet boom, observers predicted that the spread of the internet would lead the retail industry toward a state of perfect competition, or a Bertrand equilibrium [2]. For instance, The Economist stated that “[t]he explosive growth of the Internet promises a new age of perfectly competitive markets. With perfect information about prices and products at their fingertips, consumers can quickly and easily find the best deals. In this brave new world, retailers’ profit margins will be competed away, as they are all forced to price at cost” [3]. Even academic researchers argued that online markets would soon be close to perfectly competitive markets [4][5][6][7].
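    The exponential shape of the normalized price distribution reported in the abstract above has a convenient closed-form tail: if the markup m = p/p_min − 1 is exponential with rate λ, then P(m > x) = e^(−λx), so the reported fact that 20 percent of retailers quote more than 50 percent above the lowest price pins down λ ≈ 3.2. A short sketch replaying that arithmetic (the two input numbers come from the abstract; the function and derived quantities are ours):

    ```python
    # Back out the exponential rate implied by the reported tail of normalized
    # prices, then use it to read off other tail probabilities. Only the 20% /
    # 50% figures come from the abstract; the rest is illustrative arithmetic.
    import math

    tail_prob, tail_cutoff = 0.20, 0.50       # 20% of quotes exceed the minimum by 50%
    lam = -math.log(tail_prob) / tail_cutoff  # implied exponential rate, ~3.22

    def survival(markup, rate=lam):
        """P(normalized markup exceeds `markup`) under the exponential fit."""
        return math.exp(-rate * markup)

    # Implied share of retailers quoting more than double the lowest price.
    share_above_double = survival(1.0)
    ```

    Under this fit, the implied share of retailers quoting more than twice the lowest price is 0.2 squared, i.e. 4 percent, which is the kind of sharply thinning but non-negligible tail the paper's click data are compared against.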

  • Detecting Real Estate Bubbles: A New Approach Based on the Cross-Sectional Dispersion of Property Prices

    Abstract

    We investigate the cross-sectional distribution of house prices in the Greater Tokyo Area for the period 1986 to 2009. We find that size-adjusted house prices follow a lognormal distribution except for the period of the housing bubble and its collapse in Tokyo, for which the price distribution has a substantially heavier right tail than that of a lognormal distribution. We also find that, during the bubble era, sharp price movements were concentrated in particular areas, and this spatial heterogeneity is the source of the fat upper tail. These findings suggest that, during a bubble period, prices go up prominently for particular properties, but not so much for other properties, and as a result, price inequality across properties increases. In other words, the defining property of real estate bubbles is not the rapid price hike itself but an increase in price dispersion. We argue that the shape of cross-sectional house price distributions may contain information useful for the detection of housing bubbles.

    Introduction

    Property market developments are of increasing importance to practitioners and policymakers. The financial crises of the past two decades have illustrated just how critical the health of this sector can be for achieving financial stability. For example, the recent financial crisis in the United States in its early stages reared its head in the form of the subprime loan problem. Similarly, the financial crises in Japan and Scandinavia in the 1990s were all triggered by the collapse of bubbles in the real estate market. More recently, the rapid rise in real estate prices - often supported by a strong expansion in bank lending - in a number of emerging market economies has become a concern for policymakers. Given these experiences, it is critically important to analyze the relationship between property markets, finance, and financial crises.
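    The tail diagnostic described in the abstract above (is the right tail of the price distribution heavier than a lognormal would predict?) can be sketched directly: if size-adjusted prices are lognormal, log prices are Gaussian, so the standardized log-price sample should show no excess mass beyond, say, two standard deviations. The following is a synthetic-data illustration, not the paper's actual test; the "bubble" sample mimics the paper's finding of spatial concentration by mixing in a small cluster of sharply appreciated properties:

    ```python
    # Compare the observed right tail of log prices with the Gaussian benchmark
    # implied by lognormality. All data are synthetic; the mixture stands in for
    # a bubble's spatially concentrated price hikes.
    import numpy as np

    rng = np.random.default_rng(1)

    def right_tail_excess(log_prices, z=2.0):
        """Observed share above mean + z*sd, minus the Gaussian prediction."""
        x = (log_prices - log_prices.mean()) / log_prices.std()
        gaussian_share = 0.02275  # P(Z > 2) for a standard normal
        return (x > z).mean() - gaussian_share

    # Calm market: lognormal prices, so log prices are Gaussian.
    calm = rng.normal(np.log(50.0), 0.3, 20_000)

    # Bubble: 10% of properties sit in a "hot" segment with much higher prices,
    # producing a right tail heavier than any single lognormal can match.
    bubble = np.concatenate([rng.normal(np.log(50.0), 0.3, 18_000),
                             rng.normal(np.log(200.0), 0.3, 2_000)])
    ```

    On the calm sample the excess is essentially zero, while on the mixture it is strongly positive, which is the signature the paper proposes as a bubble detector.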
