Working papers

This project is closely related to two earlier projects on inflation dynamics at the University of Tokyo and Hitotsubashi University. In addition to the Working Papers on Central Bank Communication, this page serves as a repository of working papers produced by these earlier projects between 2006 and 2018.


  • On the Source of Seasonality in Price Changes: The Role of Seasonality in Menu Costs

    Abstract


    Seasonality is among the most salient features of price changes, but it has been analyzed far less than the seasonality of quantities or the business cycle component of price changes. To fill this gap, we use scanner data for 199 categories of goods in Japan to empirically study the seasonality of price changes from 1990 to 2021. We find that the following four features hold for most categories: (1) the frequency of price increases and decreases rises in March and September; (2) seasonal components of the frequency of price changes are negatively correlated with those of the size of price changes; (3) seasonal components of the inflation rate track seasonal components of the net frequency of price changes; and (4) the seasonal pattern of the frequency of price changes responds to changes in the category-level annual inflation rate for the year. Using simple state-dependent pricing models, we show that seasonal cycles in menu costs play an essential role in generating the seasonality of price changes.

     

    Introduction


    It is widely known among both scholars and policymakers that price series exhibit a sizable degree of seasonality. Figure 1 decomposes the yearly growth rate of the Japanese CPI, for all items and for goods less fresh food and energy, into the twelve month-to-month changes within the same year. There are months in which prices generally increase, such as March and April, and months in which prices generally decrease, such as January and February. These seasonal patterns have been stable from the 1990s through the 2020s.
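
    The decomposition in Figure 1 rests on an accounting identity: the year-on-year (log) growth rate of the CPI equals the sum of the twelve month-to-month (log) changes within the same year, so the monthly seasonal pattern aggregates exactly to the annual rate. As a sketch, with P_t the CPI level in month t:

        \pi_t^{\mathrm{yoy}} = \ln P_t - \ln P_{t-12} = \sum_{j=0}^{11} \left( \ln P_{t-j} - \ln P_{t-j-1} \right)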

     

    WP052

     

     

  • Liquidity Trap and Optimal Monetary Policy: Evaluations for U.S. Monetary Policy

    Abstract

    This paper shows that the Fed’s exit strategy works as optimal monetary policy in a liquidity trap. We use a conventional New Keynesian model that incorporates recent inflation persistence and confirm several similarities between optimal monetary policy and the Fed’s actual policy. The zero interest rate policy continues even after inflation has accelerated well above the 2 percent target and hit its peak. Under optimal monetary policy, the zero interest rate policy continues until the second quarter of 2022, whereas the Fed terminated it one quarter earlier. Eventually, inflation exceeds the target rate for over three years, up to the latest quarter. The policy rate continues to overshoot its long-run level to suppress high inflation. Furthermore, the high inflation implied by optimal monetary policy can explain about 70 percent of the inflation observed in 2021 and 2022, although it remains below the data. This is because optimal monetary policy raises the policy rate faster than the Fed did. The remaining 30 percent of inflation could have been contained by more aggressive tightening by the Fed after the zero interest rate policy.

     

    Introduction


    The theory of monetary policy has been developed since the 1990s on the basis of the New Keynesian model, as represented by Clarida et al. (1999) and Woodford (2003). Woodford (2003) identifies history dependence as a general property of optimal monetary policy with commitment in a purely forward-looking New Keynesian model, showing that the forward-looking economy and history dependence are two sides of the same coin in optimal monetary policy. Eggertsson and Woodford (2003b,a), Jung et al. (2001, 2005), and Adam and Billi (2006) extend the analysis of optimal commitment policy to an economy in a liquidity trap and show that history dependence remains a robust feature of optimal monetary policy. These papers predict the consequences of optimal monetary policy under commitment in a liquidity trap, but such predictions have not been evaluated over the past two decades. This paper provides that evaluation.

     

    WP051

     

     

  • Optimal Monetary Policy in a Liquidity Trap: Evaluations for Japan’s Monetary Policy

    Abstract

    This paper shows that the Bank of Japan’s monetary policy shares several features with optimal monetary policy in a liquidity trap caused by the large negative shocks of the recent pandemic. The zero interest rate policy continues even after inflation sufficiently exceeds the 2 percent target and hits its peak. Optimal monetary policy keeps the zero interest rate policy in place until the second quarter of 2024, and the Bank of Japan continues the zero interest rate at least until the second quarter of 2024. Recent high inflation can be explained by the prolonged zero interest rate policy: average inflation from 2021 to 2023 is 2.2 percent in the data and 2.1 percent in the simulation. Depending on scenarios for anchored inflation expectations and long-run natural interest rates, the optimal timing for terminating the zero interest rate policy and the speed of monetary tightening thereafter change. As anchored inflation expectations and natural interest rates decline, the zero interest rate policy continues longer.

     

    Introduction


    In Japan, the Bank of Japan (BOJ) effectively introduced the zero interest rate policy in September 1995 by cutting the policy rate to about 0.5 percent. During the zero interest rate policy, a policy commitment, now commonly referred to as forward guidance, is key for monetary policy. For example, the BOJ Governor, Masaru Hayami, announced at a press conference in April 1999 that the BOJ would continue the zero interest rate policy until deflationary concerns were dispelled, in order to lower long-term interest rates. This was the first case of a commitment policy in a liquidity trap. Moreover, in September 2016, the BOJ introduced the inflation-overshooting commitment, under which it commits to continuing monetary easing until the year-on-year CPI inflation rate stably exceeds the 2 percent target. As discussed below, this commitment policy works as optimal monetary policy by raising the inflation rate and inflation expectations and lowering the real interest rate. The BOJ now faces an exit from the liquidity trap under this commitment, and we evaluate whether it conducts optimal monetary policy.

     

    WP050

     

     

  • The Bank of Japan’s Stock Holdings and Long-term Returns

    Abstract

    The Bank of Japan (BoJ) purchased equity index exchange-traded funds (ETFs), including Nikkei 225 ETFs, for over a decade and has not sold any ETFs it purchased. On March 31, 2021, the BoJ’s ETF holdings were more than 10% of the free float of the First Section of the Tokyo Stock Exchange. Primarily because the Nikkei index is price-weighted, the BoJ’s indirect holdings as a percentage of the market capitalization vary widely among individual stocks. To identify the effects of the uneven demand shocks, this paper runs instrumental-variable cross-sectional regressions of cumulative returns between September 30, 2010, a few days before the first announcement of ETF purchases, and March 31, 2021, when the BoJ terminated Nikkei 225 ETF purchases. The results suggest that the price multiplier is around 6 to 9; a 1 percentage point higher BoJ share in a stock’s market capitalization is associated with a roughly 6 to 9 percentage point higher return. The estimated multiplier is much higher than a typical estimate of 1 based on U.S. data. There is no evidence of a return reversal in the 9 months after Nikkei 225 ETF purchases ended. Various analyses, including monthly return regressions, support the analysis of cumulative returns and provide additional insights.  

     

    Introduction

    Many empirical studies find that demand for stocks influences stock prices. The literature often estimates the price multiplier, the percent change in the price of a particular stock when investors purchase 1% of the market capitalization of that stock. Gabaix and Koijen’s (2022) survey suggests that a typical estimate of the price multiplier is about 1. On the other hand, finance theory indicates that the impact of demand shocks on asset prices depends on the nature of demand. For instance, if demand shocks are expected to be more persistent, the impact is larger, since asset prices reflect not only current but also expected demand. To contribute to this literature and the literature on unconventional monetary policy, this paper explores a unique natural experiment: the Bank of Japan’s (BoJ’s) persistent holdings of equity index exchange-traded funds (ETFs).
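
    As an illustration of the multiplier concept above (a hedged sketch, not necessarily the paper’s exact specification), the cross-sectional regression relates each stock’s cumulative return over the holding period to the BoJ’s share of its market capitalization, with the slope interpreted as the price multiplier:

        R_i = \alpha + M \cdot \mathrm{BoJshare}_i + \varepsilon_i

    Here R_i is the cumulative return of stock i between September 30, 2010 and March 31, 2021, BoJshare_i is the BoJ’s indirect ETF holding as a percentage of stock i’s market capitalization, and instrumental variables are used because the BoJ’s share may be correlated with other return determinants; the abstract’s estimates correspond to M of roughly 6 to 9.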

     

    WP049

  • Rich by Accident: the Second Welfare Theorem with a Redundant Asset Under Imperfect Foresight

    Abstract

    We consider a multiperiod (T-period) model with no uncertainty in which short term bonds co-exist with a long term bond. Markets are complete with just the short term bonds, so that under the usual hypothesis of perfect foresight the long term bond is redundant by no arbitrage, in that it has no allocational implications. We dispense with perfect foresight, derive appropriate no arbitrage conditions, and show that the presence of the long term bond has significant allocational implications. Specifically, in the model with just the short term bonds, we show that a T-dimensional subset of efficient allocations can arise as Walrasian equilibria, whereas the dimension of the set of efficient allocations is one less than the number of households (assumed to be much larger than T). In the model with both types of bonds, essentially all efficient allocations might arise as Walrasian equilibria; minute errors in forecasting prices might generate all income transfers that are consistent with efficiency. We argue that the beneficiaries of such unanticipated income transfers are determined not by the superiority of their forecasts but rather by accident.

     

    Introduction

    What allocational role might a redundant financial asset play in an intertemporal Walrasian setting? Traditional wisdom would suggest none, since by definition, a redundant financial asset can be replicated by trading other assets dynamically at market prices so that any trader is indifferent between holding it and ignoring it, and so its presence in no way alters the possibilities of income transfers across periods/states. But notice that this conclusion might not be valid if the market prices are not correctly anticipated. That is, this conclusion relies entirely on the feature that the axiom of perfect foresight is built into the particular equilibrium concept, Radner equilibrium, used in the analysis. We dispense with perfect foresight and show that essentially all intertemporally efficient allocations can arise as Walrasian equilibria when a redundant asset is traded.
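
    A minimal sketch of the redundancy argument (not taken from the paper): under perfect foresight, a two period bond can be replicated by rolling over one period bonds, so no arbitrage pins down its price as

        q_t^{(2)} = q_t^{(1)} \, q_{t+1}^{(1)}

    where q_t^{(1)} is the period-t price of a one period bond and q_{t+1}^{(1)} its correctly anticipated price next period. If the forecast of q_{t+1}^{(1)} is wrong, this equality can fail ex post, and holding the long bond then transfers income in ways the short bonds alone cannot replicate.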

     

    WP048

     

  • Oligopolistic Competition, Price Rigidity, and Monetary Policy

    Abstract

    This study investigates how strategic and heterogeneous price setting influences the real effect of monetary policy. Japanese data show that firms with larger market shares exhibit more frequent and larger price changes than those with smaller market shares. We then construct an oligopolistic competition model with sticky prices and asymmetry in terms of competitiveness and price stickiness, which shows that a positive cross superelasticity of demand generates dynamic strategic complementarity, resulting in decreased price adjustments and an amplified real effect of monetary policy. Whether a highly competitive firm sets its price more sluggishly and strategically than a less competitive firm depends on the shape of the demand system, and the empirical results derived from the Japanese data support Hotelling’s model rather than the constant elasticity of substitution preferences model. Dynamic strategic complementarity and asymmetry in price stickiness can substantially enhance the real effect of monetary policy.

     

    Introduction

    The COVID-19 pandemic has resulted in a resurgence of inflation, which some policymakers and scholars attribute to a surge in firms’ markups.1 The upward trajectory of market oligopoly and markups over the past few decades may have contributed to the inflationary upswing. In contrast, Japan’s inflation has remained low relative to other countries, with firms frequently attributing this phenomenon to the presence of other firms with inflexible pricing policies. These findings underscore the importance of considering strategic price setting in an oligopolistic market, yet macroeconomic analyses in this area are limited due to the predominance of monopolistic competition in macroeconomic models, despite strategic complementarity in price setting being a major source of real rigidity (Romer 2001, Woodford 2003). Furthermore, while markups are increasing, their development is not uniform across firms, and heterogeneity, such as the emergence of superstar firms, cannot be ignored.

     

    WP047

     

    WP047_Appendix

  • On the Welfare Role of Redundant Assets with Heterogeneous Forecasts

    Abstract

    We study a multiperiod model with a nominal bond that matures in one period and identify the set of efficient allocations that can be sustained as Walrasian equilibria with heterogeneous forecasts. We next add a long maturity bond, which under perfect foresight would be a redundant asset, and show that it fundamentally expands the set of efficient allocations that can be sustained as Walrasian equilibria. Indeed, all wealth transfers compatible with efficiency can arise endogenously. The key feature driving this conclusion is forecasting errors, which lead to ex post arbitrage opportunities that induce these income transfers. (JEL classification numbers: D51, D53, D61)

     

    Introduction

    No arbitrage conditions play a fundamental role in the way assets are priced and therefore are instrumental in deciding the set of allocations that can be generated by Walrasian markets. The axiom of perfect foresight is built into the methodology most frequently used to price assets. This paper investigates the allocational implications of relaxing perfect foresight in a model where a short term bond coexists with a longer maturity bond, where the latter under perfect foresight would be a redundant asset. Forecasts are required to satisfy no arbitrage conditions so that market equilibrium is well defined in each period. However, due to errors in forecasting, there may exist arbitrage possibilities in an ex post sense, which allows the presence of the long term bond to expand significantly the set of intertemporally efficient allocations that can be sustained as Walrasian equilibria.

     

    WP046

  • The Demand for Money at the Zero Interest Rate Bound

    Abstract

    This paper undertakes both a narrow and wide replication of the estimation of a money demand function conducted by Ireland (American Economic Review, 2009). Using US data from 1980 to 2013, we show that the substantial increase in the money-income ratio during the period of near-zero interest rates is captured well by the log-log specification but not by the semi-log specification, contrary to the result obtained by Ireland (2009). Our estimate of the interest elasticity of money demand over the 1980-2013 period is about one-tenth that of Lucas (2000), who used a log-log specification. Finally, neither specification satisfactorily fits post-2015 US data.

     

    Introduction

    In regression analyses of money demand functions, there is no consensus on whether the nominal interest rate as an independent variable should be used in linear or log form. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., regressing real money balances (or real money balances relative to nominal GDP) in log on nominal interest rates in log), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log specification (i.e., nominal interest rates are not in log).
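
    Written out, the two specifications take the following form, with m_t the money-income ratio and i_t the nominal interest rate (a sketch of the standard regressions, not necessarily the paper’s exact estimating equations):

        \text{log-log:}\quad \ln m_t = \beta_0 - \eta \ln i_t + u_t
        \text{semi-log:}\quad \ln m_t = \gamma_0 - \xi\, i_t + v_t

    The two forms differ most sharply near the zero bound: the log-log form lets the money-income ratio keep rising as i_t falls toward zero, whereas the semi-log form implies a finite level of money demand at i_t = 0.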

     

    WP044

    WP044_Appendix

  • Individual Trend Inflation

    Abstract

    This paper extends recent approaches to estimating trend inflation using the survey responses of individual forecasters. It relies on a noisy information model to estimate the trend inflation of individual forecasters. Applying the model to recent Japanese data, it reveals that the added noise term plays a crucial role and that there is considerable heterogeneity among individual trend inflation forecasts, which drives the dynamics of the mean trend inflation forecasts. Divergences in forecasts as well as movements in estimates of trend inflation are largely driven by an identifiable group of forecasters who see less noise in the inflationary process, expect the impact of transitory inflationary shocks to wane more quickly, and are more flexible in adjusting their forecasts of trend inflation in response to new information.

     

    Introduction

    There is no doubt that trend inflation, embedded in actual consumer price data and in the inflation expectations of various economic agents, is one of the most important variables for the conduct of monetary policy. For this reason, a great deal of effort has been devoted by researchers to extracting trend inflation. In this paper, we try to contribute to this literature by extending existing studies in the following two ways. First, we incorporate a noisy information model more explicitly in an unobserved components model. An unobserved components model such as that of Beveridge and Nelson (1981) is a useful tool for decomposing actual data into trend and transitory components. Stock and Watson (2007, 2016) apply the procedure to estimate trend inflation by incorporating stochastic volatility in the model. Kozicki and Tinsley (2012) use an unobserved components model to analyze inflation forecasts. Other research papers, many of them more recent, have extracted trend inflation from actual and forecast inflation rates (Chan et al. (2018), Nason and Smith (2021), Patton and Timmermann (2010), and Yoneyama (2021)).
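
    A minimal sketch of the kind of model described above (the paper’s specification may differ, for instance in its stochastic volatility and noise structure): inflation is the sum of a random-walk trend and a transitory component, and forecaster i observes the trend only through a noisy signal,

        \pi_t = \tau_t + \eta_t, \qquad \tau_t = \tau_{t-1} + \varepsilon_t, \qquad s_{it} = \tau_t + \nu_{it}

    so each forecaster’s estimate of trend inflation updates toward new information at a speed governed by the noise variance he or she perceives; forecasters who perceive less noise adjust their trend estimates more flexibly, which is the heterogeneity the paper emphasizes.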

     

    WP042

  • Understanding Cross-Country Heterogeneity in Health and Economic Outcomes during the COVID-19 Pandemic: A Revealed-Preference Approach

    Abstract

    There is a large heterogeneity in health and macroeconomic outcomes across countries during the COVID-19 pandemic. We present a novel framework to understand the source of this heterogeneity, combining an estimated macro-epidemiological model and the idea of revealed preference. Our framework allows us to decompose the difference in health and macroeconomic outcomes across countries into two components: preference and constraint. We find that there is a large heterogeneity in both components across countries and that some countries such as Japan or Australia are willing to accept a large output loss to reduce the number of COVID-19 deaths.

     

    Introduction

    The COVID-19 pandemic has posed the world a question that has not been asked for many decades: How should a society balance infection control and economic activity during a pandemic? Different countries have struggled with this question differently, and we have witnessed a diverse set of health and macroeconomic outcomes across countries during the COVID-19 pandemic. As shown in Figure 1, some countries have seen large output losses and many deaths, while others have seen small output losses and few deaths; there are also countries with large output losses but few deaths, and countries with small output losses but many deaths.

     

    WP041

  • Going Cashless: Government’s Point Reward Program vs. COVID-19

    Abstract

    Using credit card transaction data, we examine the impacts of two successive events that promoted cashless payments in Japan: the government’s program and the COVID-19 pandemic. We find that the number of card users was 9-12 percent higher in restaurants that participated in the program than in those that did not. We present a simple framework accounting for the spread of cashless payments. Our model predicts that the impact of the policy intervention diminished as the use of cashless payments increased, which accords well with Japan’s COVID-19 experience. The estimated impact of COVID-19 was around two-thirds of that of the program.

     

    Introduction

    The share of payments using cashless methods is much lower in Japan than in many other countries. BIS statistics, for example, show that total payments via cashless means such as credit cards, debit cards, and e-money in Japan amounted to 74 trillion yen or 24 percent of household final consumption expenditure in 2018. This percentage is considerably lower than the 40 percent or more in other developed countries such as the United States, the United Kingdom, and Singapore. The social cost of relying on cash payments is substantial. For instance, using data for several European countries, Schmiedel et al. (2012) show that the unit cost of cash payments is higher than that of debit card payments. In addition, Rogoff (2015) argues that cash makes transactions anonymous, which potentially facilitates underground or illegal activities and leads to law-enforcement costs.

     

    WP040

  • Optimal and Robust Disclosure of Public Information

    Abstract

    A policymaker discloses public information to interacting agents who also acquire costly private information. More precise public information reduces the precision and cost of acquired private information. Considering this effect, what disclosure rule should the policymaker adopt? We address this question under two alternative assumptions using a linear-quadratic-Gaussian game with arbitrary quadratic material welfare and convex information costs. First, the policymaker knows the cost of private information and adopts an optimal disclosure rule to maximize the expected welfare. Second, the policymaker is uncertain about the cost and adopts a robust disclosure rule to maximize the worst-case welfare. Depending on the elasticity of marginal cost, an optimal rule is qualitatively the same as in the case of either a linear information cost or exogenous private information. The worst-case welfare is strictly increasing if and only if full disclosure is optimal under some information costs, which provides a new rationale for central bank transparency.

     

    Introduction

    Consider a policymaker (such as a central bank) who discloses public information to interacting agents (such as firms and consumers) who also acquire costly private information. The policymaker’s concern is social welfare, including the agents’ cost of information acquisition. When the policymaker provides more precise public information, the agents have less incentive to acquire private information, reducing its precision and cost. This effect of public information is referred to as the crowding-out effect (Colombo et al., 2014). Less private information can be harmful to welfare, but less information cost is beneficial; that is, the welfare implication of the crowding-out effect is unclear. Then, what disclosure rule should the policymaker adopt?

     

    WP039

  • Robust Voting under Uncertainty

    Abstract

    This paper proposes normative criteria for voting rules under uncertainty about individual preferences to characterize a weighted majority rule (WMR). The criteria stress the significance of responsiveness, i.e., the probability that the social outcome coincides with the realized individual preferences. A voting rule is said to be robust if, for any probability distribution of preferences, the responsiveness of at least one individual is greater than one-half. This condition is equivalent to the seemingly stronger condition requiring that, for any probability distribution of preferences and any deterministic voting rule, the responsiveness of at least one individual is greater than that under the deterministic voting rule. Our main result establishes that a voting rule is robust if and only if it is a WMR without ties. This characterization of a WMR avoiding the worst possible outcomes provides a new complement to the well-known characterization of a WMR achieving the optimal outcomes, i.e., efficiency in the set of all random voting rules.

     

    Introduction

    Consider the choice by a group of individuals of a voting rule for a succession of issues with two alternatives (such as “yes” or “no”). When a voting rule is chosen, the alternatives to come in the future are unknown, and the individuals are uncertain about their future preferences. An individual votes sincerely and is concerned with the probability that the outcome agrees with his or her preference, which is referred to as responsiveness (Rae, 1969). More specifically, an individual prefers a voting rule with higher responsiveness because he or she can expect that a favorable alternative is more likely to be chosen. For example, if an individual has a von Neumann-Morgenstern (VNM) utility function such that the utility from the passage of a favorable issue is one and that of an unfavorable issue is zero, then the expected utility equals the responsiveness.
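
    In the notation suggested above (a sketch, not the paper’s exact formalism), let d_i denote individual i’s realized preference on an issue and D the outcome chosen by the voting rule. Responsiveness and the robustness criterion can then be written as

        R_i = \Pr(D = d_i), \qquad \text{robustness:}\ \ \max_i R_i > 1/2 \ \text{for every distribution of } (d_1, \dots, d_n)

    and the paper’s main result, as stated in the abstract, is that this holds exactly for weighted majority rules without ties.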

     

    WP038

  • Imprecise Information and Second-Order Beliefs

    Abstract

    A decision problem under uncertainty is often given with a piece of objective but imprecise information about the states of the world such as in the Ellsberg urn. By incorporating such information into the smooth ambiguity model of Seo (2009), we characterize a class of smooth ambiguity representations whose second-order beliefs are consistent with the objective information. As a corollary, we provide an axiomatization for the second-order expected utility, which has been studied by Nau (2001), Neilson (2009), Grant, Polak, and Strzalecki (2009), Strzalecki (2011), and Ghirardato and Pennesi (2019). In our model, attitude toward uncertainty can be disentangled from a perception about uncertainty and connected with attitude toward reduction of compound lotteries.

     

    Introduction

    Choice under uncertainty is an important aspect of decision making. Since Ellsberg’s seminal work, it has been recognized that a decision maker’s behavior may be inconsistent with a probabilistic belief (or a subjective probability measure) about the states of the world. Such a situation is called ambiguity and has been studied with several models of decision making, such as the Choquet expected utility (Schmeidler [31]), the maxmin expected utility (Gilboa and Schmeidler [12]), and the smooth ambiguity model (Klibanoff, Marinacci, and Mukerji [22] and Seo [33]).
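
    For reference, the smooth ambiguity representation cited above evaluates an act f through a double expectation, with a second-order belief \mu over probability measures \pi and an increasing function \phi capturing ambiguity attitude (the standard form of Klibanoff, Marinacci, and Mukerji [22]):

        V(f) = \int \phi\!\left( \int u(f)\, d\pi \right) d\mu(\pi)

    The paper’s exercise, as described in the abstract, restricts the second-order belief \mu to be consistent with the objective but imprecise information the decision maker is given.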

     

    WP037

  • Going Cashless: Evidence from Japan’s Point Reward Program

    Abstract

    In October 2019, the Japanese government started a unique program that offered points (discounts) for cashless payments. Using credit card transaction data, we compare credit card usage at restaurants that participated in this program and those that did not. Our main findings are as follows. First, the number of card users was 9-12 percent higher in participating than in non-participating restaurants. Second, the positive impact of the program on the number of card users persisted even after the program ended in June 2020, indicating that the program had a lasting effect in promoting cashless payments. Third, the impact of the program was significantly larger at restaurants that started accepting credit cards more recently, since the share of cash users at those restaurants was larger just before the program started. Finally, two-thirds of the difference between participating and non-participating restaurants disappeared during the first surge of COVID-19 in April 2020. This suggests that customers switched from cash to cashless payments to reduce the risk of infection at both participating and non-participating restaurants, but that the switch was larger at non-participating restaurants, which had a larger share of cash users just before the pandemic.

     

    Introduction

    The share of payments using cashless methods is much lower in Japan than in many other countries. BIS statistics, for example, show that total payments via cashless means such as credit cards, debit cards, and e-money in Japan amounted to 74 trillion yen or 24 percent of household final consumption expenditure in 2018. This percentage is considerably lower than the 40 percent or more in other developed countries such as the United States, the United Kingdom, and Singapore. The social cost of relying on cash payments is substantial. For instance, using data for several European countries, Schmiedel et al. (2012) show that the unit cost of cash payments is higher than that of debit card payments. In addition, Rogoff (2015) argues that cash makes transactions anonymous, which potentially facilitates underground or illegal activities and leads to law-enforcement costs.

    WP036

  • Online Consumption During and After the COVID-19 Pandemic: Evidence from Japan

    Abstract

    The spread of COVID-19 infections has led to substantial changes in consumption patterns. While demand for services that involve face-to-face contact has decreased sharply, online consumption of goods and services, such as through e-commerce, is increasing. The aim of this paper is to investigate whether online consumption will continue to increase even after COVID-19 subsides. Online consumption requires upfront costs, which have been regarded as one of the factors inhibiting the diffusion of online consumption. However, if many consumers made such upfront investments due to the pandemic, they would have no reason to return to offline consumption after the pandemic has ended. We examine whether this was actually the case using credit card transaction data. Our main findings are as follows.  First, the main group responsible for the increase in online consumption are consumers who were already familiar with it before the pandemic. These consumers increased the share of online spending in their overall spending. Second, some consumers that had never used the internet for purchases before started to do so due to COVID-19. However, the fraction of consumers making this switch was not very different from the trend before the crisis. Third, by age group, the switch to online consumption was more pronounced among youngsters than seniors. These findings suggest that it is not the case that during the pandemic a large number of consumers made the upfront investment necessary to switch to online consumption, so a certain portion of the increase in online consumption is likely to fall away again once COVID-19 subsides.

     

    Introduction

    People’s consumption patterns have changed substantially as a result of the spread of COVID-19 infections. One such change is a reduction in the consumption of services that involve face-to-face contact. For instance, “JCB Consumption NOW” data, credit card transaction data provided jointly by JCB Co., Ltd., and Nowcast Inc., show that, since February this year, spending on eating out, entertainment, travel, and lodging has decreased substantially. Even in the case of goods consumption, there has been a tendency to avoid face-to-face contact such as at convenience stores and supermarkets. For example, with regard to supermarket shopping, the amount of spending per consumer has increased, but the number of shoppers has decreased, indicating that consumers purchase more than usual at supermarkets but try to minimize the risk of infection by reducing the number of visits. Another important change is the increase in the consumption of services and goods that do not involve face-to-face contact. The credit card transaction data indicate that with regard to services consumption, spending on movies and theaters has decreased substantially, while spending on streaming media services has increased. As for the consumption of goods, so-called e-commerce, i.e., purchases via the internet, has shown substantial increases.

     

    WP035

  • The Demand for Money at the Zero Interest Rate Bound

    Abstract

    This paper estimates a money demand function using US data from 1980 onward, including the recent near-zero interest rate period. We show that the substantial increase in the money-income ratio during the period of near-zero interest rates is captured well by the log-log specification, but not by the semi-log specification. Our result is the opposite of the result obtained by Ireland (2009), who found that the semi-log specification performs better. This mainly stems from the difference in the sample period employed: ours contains 24 quarters with interest rates below 1 percent, while Ireland’s (2009) sample period contains only three quarters.

     

    Introduction

    In regression analyses of money demand functions, there is no consensus on whether the nominal interest rate as an independent variable should be used in linear or log form. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., regressing real money balances (or real money balances relative to nominal GDP) in log on nominal interest rates in log), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log specification (i.e., nominal interest rates are not in log).

     

    WP034

  • Household Inventory, Temporary Sales, and Price Indices

    Abstract

    Large-scale household inventory buildups occurred in Japan five times over the last decade, including those triggered by the Tohoku earthquake in 2011, the spread of COVID-19 infections in 2020, and the consumption tax hikes in 2014 and 2019. Each of these episodes was accompanied by considerable swings in GDP, suggesting that fluctuations in household inventories are one of the sources of macroeconomic fluctuations in Japan. In this paper, we focus on changes in household inventories associated with temporary sales and propose a methodology to estimate changes in household inventories at the product level using retail scanner data. We construct a simple model on household stockpiling and derive equations for the relationships between the quantity consumed and the quantity purchased and between consumption and purchase prices. We then use these relationships to make inferences about quantities consumed, consumption prices, and inventories. Next, we test the validity of this methodology by calculating price indices and check whether the intertemporal substitution bias we find in the price indices is consistent with theoretical predictions. We empirically show that there exists a large bias in the Laspeyres, Paasche, and Törnqvist price indices, which is smaller at lower frequencies but non-trivial even at a quarterly frequency and that intertemporal substitution bias disappears for a particular type of price index if we switch from purchase-based data to consumption-based data.  
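
    For reference, a standard statement of one of the indices mentioned above: the period-to-period Törnqvist index weights log price changes by average expenditure shares,

        \ln P^{T}_{t-1,t} = \sum_i \tfrac{1}{2}\left(s_{i,t-1} + s_{i,t}\right) \ln\!\left(p_{i,t}/p_{i,t-1}\right)

    When the prices and shares entering this formula are measured from purchases, temporary sales and the associated stockpiling move them without moving consumption one-for-one, which is a source of the intertemporal substitution bias documented above; as the abstract notes, the bias disappears for a particular type of index once purchase-based data are replaced with consumption-based data.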

     

    Introduction

    In the first week of March 2020, when the first wave of COVID-19 infections hit Japan, supermarket sales went up more than 20% over the previous year. This was due to hoarding by consumers stemming from an increase in uncertainty regarding the spread of the virus. Similar hoarding occurred during the third wave, which struck Japan in October 2020. Such hoarding has occurred not only during the COVID-19 pandemic but also after the Tohoku earthquake in March 2011 and the subsequent nuclear power plant accident in Fukushima, when residents of Tokyo and other areas that were spared serious damage went on a buying spree for food and other necessities. Consumer hoarding also occurred due to policy shocks: when the consumption tax rate was raised in April 2014 and in October 2019, people hoarded large amounts of goods just before the tax rate was raised, and a prolonged consumption slump occurred thereafter. Each of these episodes was accompanied by considerable swings in GDP, suggesting that fluctuations in household inventories are one of the sources of macroeconomic fluctuations in Japan.

     

    WP033

  • Strategic Ambiguity in Global Games

    Abstract

    In incomplete information games with ambiguous information, rational behavior depends on fundamental ambiguity (ambiguity about states) and strategic ambiguity (ambiguity about others’ actions). We study the impact of strategic ambiguity in global games, which is evident when one of the actions yields a constant payoff. Ambiguous-quality information makes more players choose this action, whereas (unambiguous) low-quality information makes more players choose an ex-ante best response to the uniform belief over the opponents’ actions. If the ex-ante best-response action yields a constant payoff, sufficiently ambiguous-quality information makes most players choose this action, thus inducing a unique equilibrium, whereas sufficiently low-quality information generates multiple equilibria. In applications to financial crises, we demonstrate that news of more ambiguous quality triggers a debt rollover crisis, whereas news of less ambiguous quality triggers a currency crisis. 

     

    Introduction

    Consider an incomplete information game with players who have ambiguous beliefs about a payoff-relevant state. Players receive signals about a state, but they do not exactly know the true joint distribution of signals and a state. In this game, players’ beliefs about the opponents’ actions are also ambiguous even if players know the opponents’ strategies, which assign an action to each signal, because their beliefs about the opponents’ signals are ambiguous. Thus, rational behavior depends not only on fundamental ambiguity (ambiguity about states) but also on strategic ambiguity (ambiguity about others’ actions).

     

    WP032

  • Decentralizability of Efficient Allocations with Heterogenous Forecasts

    Abstract

    Do the price forecasts of rational economic agents need to coincide in perfectly competitive complete markets in order for markets to allocate resources efficiently? To address this question, we define an efficient temporary equilibrium (ETE) within the framework of a two period economy. Although an ETE allocation is intertemporally efficient and is obtained under perfect competition, it can arise without the agents’ forecasts being coordinated on a perfect foresight price. We show that there is a one-dimensional set of such Pareto efficient allocations for generic endowments.

     

    Introduction

    Intertemporal trade in complete markets is known to achieve Pareto efficiency when the price forecasts of agents coincide and are correct. The usual justification for this coincidence of price forecasts is that if agents understand the market environment perfectly,  they ought to reach the same conclusions, and hence in particular, their price forecasts must coincide. But it is against the spirit of perfect competition to require that agents should understand the market environment beyond the market prices they commonly observe; we therefore study intertemporal trade without requiring that price forecasts of heterogenous agents coincide.  

     

    WP031

  • Imperfect Information, Heterogeneous Demand Shocks, and Inflation Dynamics

    Abstract

    Using sector-level survey data for the universe of Japanese firms, we establish the positive co-movement in firms’ expectations about aggregate and sector-specific demand shocks. We show that a simple model with imperfect information on the current aggregate and sector-specific components of demand explains the positive co-movement of expectations in the data. The model predicts that an increase in the relative volatility of sector-specific demand shocks compared to aggregate demand shocks reduces the sensitivity of inflation to changes in aggregate demand. We test and corroborate the theoretical prediction on Japanese data and find that the observed decrease in the relative volatility of sector-specific demand has played a significant role in the decline in the sensitivity of inflation to movements in aggregate demand from the mid-1980s to the mid-2000s.

     

    Introduction

    A large class of macroeconomic models builds on the premise that firms set prices to fulfil demand. Several studies show that shocks to demand are heterogeneous and reflect aggregate and sector-specific disturbances.1 Knowing the source of a change in demand is important for setting the price consistent with profit maximization. Ball and Mankiw (1995) shows that the price should adjust if the change in demand originates from the aggregate shock, but it should remain unchanged if the change originates from the sector-specific shock. In reality, firms cannot observe the source of any change in demand. They therefore form expectations of the aggregate and sector-specific components of demand based on observed total demand and accrued knowledge from past aggregate and sector-specific shocks. Our analysis establishes important empirical regularities about firms’ expectations of the different aggregate and sector-specific components of total demand, develops a parsimonious model of imperfect information that explains the co-movements in the expectations, and studies the implications of empirically congruous expectations for the sensitivity of inflation to changes in aggregate demand.
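
    The mechanism can be illustrated with a textbook signal-extraction sketch (an illustration, not the paper’s full model): a firm in sector j observes only total demand d_{jt} = a_t + s_{jt}, the sum of aggregate and sector-specific components with variances \sigma_a^2 and \sigma_s^2, and its best estimate of the aggregate component is

        E[a_t \mid d_{jt}] = \frac{\sigma_a^2}{\sigma_a^2 + \sigma_s^2}\, d_{jt}

    A rise in the relative volatility of sector-specific shocks therefore lowers the weight firms attach to the aggregate component of any observed change in demand, dampening the response of prices, and hence of inflation, to aggregate demand.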

     

    WP030

  • Japan’s Voluntary Lockdown: Further Evidence Based on Age-Specific Mobile Location Data

    Abstract

    Changes in people's behavior during the COVID-19 pandemic can be regarded as the result of two types of effects: the "intervention effect" (changes resulting from government orders or requests for people to change their behavior) and the "information effect" (voluntary changes in people's behavior based on information about the pandemic). Using mobile location data to construct a stay-at-home measure for different age groups, we examine how the intervention and information effects differ across age groups. Our main findings are as follows. First, the age profile of the intervention effect of the state of emergency declaration in April and May 2020 shows that the degree to which people refrained from going out was smaller for older age groups, who are at a higher risk of serious illness and death, than for younger age groups. Second, the age profile of the information effect shows that, unlike the intervention effect, the degree to which people stayed at home tended to increase with age for weekends and holidays. Thus, while Acemoglu et al. (2020) proposed targeted lockdowns requiring stricter lockdown policies for the oldest group in order to protect those at a high risk of serious illness and death, our findings suggest that Japan's government intervention had a very different effect in that it primarily reduced outings by the young, and what led to the quarantining of older groups at higher risk instead was people's voluntary response to information about the pandemic. Third, the information effect has been on a downward trend since the summer of 2020. While this trend applies to all age groups, it is relatively more pronounced among the young, so that the age profile of the information effect remains upward sloping, suggesting that people's response to information about the pandemic is commensurate with their risk of serious illness and death.

     

    Introduction

    The number of COVID-19 infections in Japan began to increase in earnest in the latter half of February, and by the end of March, the cumulative number of infections had reached 2,234. In response to the spread of infections, the government declared a state of emergency on April 7 for seven prefectures including Tokyo, and on April 16, the state of emergency was expanded to cover all prefectures. As a result, people refrained from going out, and the number of new infections in Japan, after peaking at 720 on April 11, began to drop, falling to almost zero by the end of May. This was the first wave of infections. However, in July, the number of new infections began to increase again, and continued to increase throughout the summer (peaking at 1,605 new infections on August 7). This was the second wave. While the second wave had subsided by the end of August, the number of new infections began to increase once again in late October, and on December 31, 2020, the number of new infections in Tokyo reached 1,353, exceeding 1,000 for the first time (the number of new infections nationwide was 4,534). In response, the government again declared a state of emergency on January 7. We are currently in the middle of the third wave.

     

    WP029

  • The Welfare Implications of Massive Money Injection: The Japanese Experience from 2013 to 2020

    Abstract

    This paper derives a money demand function that explicitly takes the costs of storing money into account. This function is then used to examine the consequences of the large-scale money injection conducted by the Bank of Japan since April 2013. The main findings are as follows. First, the opportunity cost of holding money calculated using 1-year government bond yields has been negative since the fourth quarter of 2014, and most recently (2020:Q2) was -0.2%. Second, the marginal cost of storing money, which was 0.3% in the most recent quarter, exceeds the marginal utility of money, which was 0.1%. Third, the optimum quantity of money, measured by the ratio of M1 to nominal GDP, is 1.2. In contrast, the actual money-income ratio in the most recent quarter was 1.8. The welfare loss relative to the maximum welfare obtained under the optimum quantity of money in the most recent quarter was 0.2% of nominal GDP. The findings imply that the Bank of Japan needs to reduce M1 by more than 30%, for example through measures that impose a penalty on holding money.

    Introduction

    Seven years have passed since the Bank of Japan (BOJ) welcomed Haruhiko Kuroda as its new Governor and started a new regime of monetary easing, which was nicknamed the “Kuroda bazooka.” The policy goal that the BOJ set itself was to overcome deflation. At the time, the year-on-year rate of change in Japan’s consumer price index (CPI) was -0.9% and had been below zero for a long time. The measure the BOJ chose to escape deflation was to print lots of money. That is, the BOJ thought that it would be possible to overcome deflation by increasing the quantity of money. Specifically, in April 2013, the BOJ announced that it would double the monetary base within two years and thereby raise the CPI inflation rate to 2%. However, currently, CPI inflation remains stuck at 0.3%. The BOJ has not achieved its target of 2%, and there is little prospect that it will be achieved in the near future. While it is true that inflation currently is heavily affected by the sharp fall in aggregate demand since the outbreak of the COVID crisis in February 2020, which is putting downward pressure on prices, even before the crisis CPI inflation was only between 0.2 and 0.8% and therefore below the BOJ’s target.

     

     

    WP028

  • Japan’s Voluntary Lockdown

    Abstract

    Japan’s government has taken a number of measures, including declaring a state of emergency, to combat the spread of COVID-19. We examine the mechanisms through which the government’s policies have led to changes in people’s behavior. Using smartphone location data, we construct a daily prefecture-level stay-at-home measure to identify the following two effects: (1) the effect that citizens refrained from going out in line with the government’s request, and (2) the effect that government announcements reinforced awareness of the seriousness of the pandemic and people voluntarily refrained from going out. Our main findings are as follows. First, the declaration of the state of emergency reduced the number of people leaving their homes by 8.6% through the first channel, which is of the same order of magnitude as the estimate by Goolsbee and Syverson (2020) for lockdowns in the United States. Second, a 1% increase in new infections in a prefecture reduces people’s outings in that prefecture by 0.026%. Third, the government’s requests are responsible for about one quarter of the decrease in outings in Tokyo, while the remaining three quarters are the result of citizens obtaining new information through government announcements and the daily release of the number of infections. Our results suggest that what is necessary to contain the spread of COVID-19 is not strong, legally binding measures but the provision of appropriate information that encourages people to change their behavior.
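
    A hedged sketch of the decomposition described above (illustrative notation, not necessarily the paper’s exact specification): the prefecture-level stay-at-home measure responds both to the state-of-emergency declaration (the first channel) and to information about the pandemic such as the number of new infections (the second channel),

        \ln(\mathrm{outings}_{pt}) = \alpha_p + \beta\, \mathrm{Emergency}_{pt} + \gamma \ln(\mathrm{NewInfections}_{pt}) + \varepsilon_{pt}

    with the abstract’s estimates corresponding to \beta \approx -0.086 (an 8.6% reduction in outings from the declaration) and \gamma \approx -0.026 (a 0.026% reduction in outings per 1% increase in new infections in the prefecture).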

    Introduction

    In response to the spread of COVID-19, the Japanese government on February 27 issued a request to local governments such as prefectural governments to close schools. Subsequently, the Japanese government declared a state of emergency on April 7 for seven prefectures, including Tokyo, and on April 16 expanded the state of emergency to all 47 prefectures. Prime Minister Abe called on citizens to reduce social interaction by at least 70% and, if possible, by 80% by refraining from going out. In response to these government requests, people refrained from going out. For example, in March, the share of people in Tokyo leaving their homes was down by 18% compared to January, before the spread of COVID-19, and by April 26, during the state of emergency, the share had dropped by as much as 64%. As a result of people refraining from leaving their homes, the number of daily new infections in Tokyo fell from 209 at the peak to two on May 23, and the state of emergency was lifted on May 25.

     

     

    WP027

  • Consumer Inventory and the Cost of Living Index: Theory and Some Evidence from Japan

    Abstract

    This paper examines the implications of consumer inventory for cost-of-living indices (COLIs) and business cycles. We begin by providing stylized facts about consumer inventory using scanner data. We then construct a quasi-dynamic model to describe consumers’ purchase, consumption, and inventory behavior. A key feature of our model is that inventory is held by household producers, not by consumers, which enables us to construct a COLI in a static manner even in an economy with storable goods. Based on this model, we show that stockpiling during temporary sales generates a substantial bias, or so-called chain drift, in conventional price indices, which are constructed without paying attention to consumer inventory. However, the chain drift is greatly mitigated in our COLI, which is based on consumption prices (rather than purchase prices) and quantities consumed (rather than quantities purchased). We provide empirical evidence supporting these theoretical predictions. We also show empirically that consumers’ inventory behavior tends to depend on labor market conditions and the interest rate.

    Introduction

    Storable goods are abundant in the real world (e.g., pasta, toilet rolls, shampoos, and even vegetables and milk), although most economic models deal with perishable goods for the sake of simplicity. Goods storability implies that purchases (which are often observable) do not necessarily equal consumption (which is often unobservable), and the difference between the two serves as consumer inventory. In particular, temporary sales and the anticipation of an increase in the value-added tax rate often lead to a greater increase in purchases than consumption. Moreover, the COVID-19 outbreak in 2020 caused many products, such as pasta and toilet rolls, to disappear from supermarket shelves, which would not have happened if these products were not storable. The stockpiling behavior by consumers poses challenges for economists, for example in the construction of price indices. 

     

     

    WP025

    WP025_Appendix

  • Efficiency, Quality of Forecasts and Radner Equilibria

    Abstract

    We study a simple two period economy with no uncertainty and complete markets where agents trade based on forecasts about the second period spot price. We propose as our solution concept a set of forecasts with the following properties: there exist (heterogenous) forecasts contained in this set that lead to efficient allocations; the set contains only those forecasts that correspond to some efficient equilibrium; and the forecasts assign positive probability to the actual market-clearing spot price. We call such a set of prices an efficient equilibrium with ambiguity, and interpret it as a generalization of Radner equilibrium that delivers efficient allocations under forecasts that possess a self-fulfilling property weaker than perfect foresight.

    Introduction

    Walrasian trade in intertemporal economies requires households to forecast the prices that will prevail in spot markets at future dates. The ubiquitous financial equilibrium model used to address this aspect of intertemporal economies is the one proposed by Radner (1972) (following Arrow (1963)) and is the bedrock of modern treatments of general equilibrium. The resulting Radner equilibrium (henceforth, RE) postulates that households correctly anticipate all spot prices at future dates; an RE is accordingly an equilibrium with perfect foresight (henceforth, PFE), in which the forecasts of heterogenous households are perfectly aligned.

     

     

    WP024

  • Online Consumption During the COVID-19 Crisis: Evidence from Japan

    Abstract

    The spread of novel coronavirus (COVID-19) infections has led to substantial changes in consumption patterns. While demand for services that involve face-to-face contact has decreased sharply, online consumption of goods and services, such as through e-commerce, is increasing. The aim of this study is to investigate whether online consumption will continue to increase even after COVID-19 subsides, using credit card transaction data. Online consumption requires upfront costs, which have been regarded as one of the factors inhibiting the diffusion of online consumption. However, if many consumers made such upfront investments due to the coronavirus pandemic, they would have no reason to return to offline consumption after the pandemic has ended, and high levels of online consumption should continue. Our main findings are as follows. First, the main group responsible for the increase in online consumption are consumers who were already familiar with online consumption before the pandemic and purchased goods and services both online and offline. These consumers increased the share of online spending in their spending overall and/or stopped offline consumption completely and switched to online consumption only. Second, some consumers that had never used the internet for purchases before started to use the internet for their consumption activities due to COVID-19. However, the share of consumers making this switch was not very different from the trend before the crisis. Third, by age group, the switch to online consumption was more pronounced among youngsters than seniors. These findings suggest that it is not the case that during the pandemic a large number of consumers made the upfront investment necessary to switch to online consumption, so a certain portion of the increase in online consumption is likely to fall away again as COVID-19 subsides.

    Introduction

    People’s consumption patterns have changed substantially as a result of the spread of the novel coronavirus (COVID-19). One such change is a reduction in the consumption of services that involve face-to-face (F2F) contact. For instance, “JCB Consumption NOW” data, credit card transaction data provided jointly by JCB Co., Ltd. and Nowcast Inc., show that, since February this year, spending on eating out, entertainment, travel, and lodging has decreased substantially. Even in the case of goods consumption, there has been a tendency to avoid face-to-face contact such as at convenience stores and supermarkets. For example, with regard to supermarket shopping, the amount of spending per consumer has increased, but the number of shoppers has decreased. Another important change is the increase in the consumption of services and goods that do not involve face-to-face contact. The credit card transaction data indicate that with regard to service consumption, spending on movies and theaters has decreased substantially, while spending on content delivery has increased. As for the consumption of goods, so-called e-commerce, i.e., purchases via the internet, has shown substantial increases.

     

     

    WP023

  • Will the Increase in Online Consumption Continue after COVID-19 Subsides? An Analysis Using Credit Card Transaction Data

    Abstract

    The spread of COVID-19 has brought about substantial changes in people’s consumption patterns. While demand for services that involve face-to-face contact, such as eating out and entertainment, has fallen sharply, online consumption of goods and services, such as e-commerce, is increasing, and some expect this increase to continue even after COVID-19 subsides. There is also a view that, rather than returning to the pre-pandemic state, post-COVID consumption will be organized around new, online-centered consumption patterns.

    This paper uses credit card transaction data to examine whether the increase in online consumption will continue after COVID-19 subsides. Online consumption involves upfront costs, such as obtaining a device, setting up an internet connection, and acquiring the necessary know-how, and these costs have been regarded as one of the factors inhibiting its diffusion. However, if many consumers have already made these upfront investments because of the pandemic, they will have no reason to return to offline consumption once the pandemic has ended, and the high level of online consumption should continue.

    Our main findings are as follows. First, the main group responsible for the increase in online consumption consists of consumers who were already familiar with online consumption before the pandemic and used both online and offline channels. These consumers raised the share of online consumption in their spending, and some stopped offline consumption altogether and switched to online consumption only. Second, some consumers with no prior experience of online consumption started purchasing online because of the pandemic; however, the extent of this shift did not differ greatly from the pre-pandemic trend toward online consumption. Third, by age group, younger consumers increased their online consumption, while the contribution of seniors was small. For example, among consumers in their late 20s who used both online and offline channels before the pandemic, 16 percent switched to online consumption only, whereas among consumers in their early 60s who likewise used both channels before the pandemic, only 11 percent made this switch. The differences across age groups in switching to online consumption appear to reflect differences in attitudes toward avoiding infection rather than differences in digital literacy.

    These findings suggest that the view that consumers with no experience of online consumption (seniors in particular) newly started purchasing online because of the pandemic is not accurate. Most consumers did not make the upfront investment in response to the pandemic, and a certain portion of the increase in online consumption is therefore likely to fall away as COVID-19 subsides.

    1 Post-COVID Personal Consumption

    The spread of COVID-19 has brought about substantial changes in people’s consumption patterns. One change is the reduction in the consumption of services that involve face-to-face contact. The “JCB Consumption NOW” data show that spending on eating out, entertainment, travel, and lodging has decreased substantially since February 2020. In goods consumption as well, there is a tendency to avoid face-to-face contact at convenience stores and supermarkets. For example, in supermarket shopping, the amount each consumer spends has increased, so that total supermarket sales have risen, but the number of shoppers itself has decreased.
    Another important change is the expansion of consumption of services and goods that do not involve face-to-face contact. The “JCB Consumption NOW” data show that, within service consumption, spending on movies and theaters has decreased substantially, while spending on content delivery has increased. In goods consumption, too, purchases via the internet, i.e., e-commerce, have shown substantial growth.

     

     

    WP022

  • How Much Did People Refrain from Service Consumption due to the Outbreak of COVID-19?

    Abstract

    With the spread of coronavirus infections, there has been a growing tendency to refrain from consuming services such as eating out that involve contact with people. Self-restraint in service consumption is essential to stop the spread of infections, and the national government as well as local governments such as the Tokyo government are calling for consumers as well as firms providing such services to exercise self-restraint. One way to measure the degree of self-restraint has been to look at changes in the flow of people using smart phone location data. As a more direct approach, this note uses credit card transaction data on service spending to examine the degree to which people exercise self-restraint. The results indicate that of men aged 35-39 living in the Tokyo metropolitan area, the share that used their credit card to pay for eating out in March 2020 was 27 percent. Using transaction data for January, i.e., before the full outbreak of the virus in Japan, yields an estimated share of 32 percent for March. This means that the number of people eating out fell by 15 percent. Apart from eating out, similar self-restraint effects can be observed in various other sectors such as entertainment, travel, and accommodation. Looking at the degree of self-restraint by age shows that the self-restraint effect was relatively large among those in their late 30s to early 50s. However, below that age bracket, the younger the age group, the smaller was the self-restraint effect. Moreover, the self-restraint effect was also small among those aged 55 and above. Further, the degree of self-restraint varies depending on the type of service; it is highest with regard to entertainment, travel, and accommodation. The number of people who spent on these services in March 2020 was about half of the number during normal times. However, the 80 percent reduction demanded by the government has not been achieved.
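
    As a small worked example of the calculation described above, the self-restraint effect for eating out follows from the observed share (27 percent) and the January-based counterfactual share for March (32 percent):

    \[
    1 - \frac{0.27}{0.32} \approx 0.16,
    \]

    i.e., a decline of roughly 15 to 16 percent in the number of people eating out, consistent with the 15 percent figure quoted above.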

    Introduction

    With the spread of coronavirus infections, there has been a growing tendency to refrain from consuming services such as eating out that involve contact with people. Self-restraint in service consumption is essential to stop the spread of infections, and the national government as well as local governments such as the Tokyo government are calling for consumers as well as firms providing such services to exercise self-restraint.
    Specifically, Prime Minister Shinzo Abe declared a one-month long state of emergency in Tokyo and six other prefectures on April 7, 2020 and expanded it to the entire country on April 16. PM Abe stated in his speech on April 7 that “According to an estimate by the experts, if all of us make efforts and reduce opportunities for person-to-person contact by a minimum of 70 percent, or ideally 80 percent, we will cause the increase in the number of patients to reach its peak two weeks from now and shift over into a decrease. . . . I ask people to refrain from going out, aiming at a 70 to 80 percent decrease, for the limited period of one month between now and the end of Golden Week holidays on May 6.”
    The purpose of this note is to measure the degree to which people in Japan have been exercising self-restraint since the outbreak of COVID-19. One way to do so is to look at changes in the flow of people using mobile phone location data. As a more direct approach, we use credit card transaction data on service spending to examine the degree to which people exercise self-restraint.

     

     

    WP021

  • The Responses of Consumption and Prices in Japan to the COVID-19 Crisis and the Tohoku Earthquake

    Abstract

    This note compares the responses of consumption and prices to the COVID-19 shock and to another large-scale natural disaster that hit Japan, the Tohoku earthquake in March 2011. The comparison shows that the responses of supermarket sales and prices at a daily frequency during the two crises are quite similar: (1) the year-on-year rate of sales growth increased quickly and reached a peak of 20 percent two weeks after the outbreak of COVID-19 in Japan, which is quite similar to the response immediately after the earthquake; (2) the items consumers purchased at supermarkets in these two crises are almost identical; (3) the year-on-year rate of consumer price inflation for goods rose by 0.6 percentage points in response to the coronavirus shock, compared to 2.2 percentage points in the wake of the earthquake. However, evidence suggests that whereas people expected higher inflation for goods and services in the wake of the earthquake, they expect lower inflation in response to the coronavirus shock. This difference in inflation expectations suggests that the economic deterioration due to COVID-19 should be viewed as driven mainly by an adverse aggregate demand shock to face-to-face service industries such as hotels and leisure, transportation, and retail, rather than as driven by an aggregate supply shock.

    1 The Spread of COVID-19 in Japan and the World

    The spread of COVID-19 is still gaining momentum. The number of those infected in Japan started to rise from the last week of February, and the spread of the virus began to gradually affect everyday life, as exemplified by increasingly empty streets in Ginza. In March, the outbreak spread to Europe and the United States, and stock markets in the United States and other countries began to drop sharply on a daily basis, leading to market turmoil reminiscent of the global financial crisis. At the time of writing (March 29, 2020), the Dow Jones Index of the New York Stock Exchange had dropped by 35%, while the Nikkei Index had fallen by 30%.

     

    WP020

  • Incomplete Information Robustness

    Abstract

    Consider an analyst who models a strategic situation in terms of an incomplete information game and makes a prediction about players’ behavior. The analyst’s model approximately describes each player’s hierarchies of beliefs over payoff-relevant states, but the true incomplete information game may have correlated duplicated belief hierarchies, and the analyst has no information about the correlation. Under these circumstances, a natural candidate for the analyst’s prediction is the set of belief-invariant Bayes correlated equilibria (BIBCE) of the analyst’s incomplete information game. We introduce the concept of robustness for BIBCE: a subset of BIBCE is robust if every nearby incomplete information game has a BIBCE that is close to some BIBCE in this set. Our main result provides a sufficient condition for robustness by introducing a generalized potential function of an incomplete information game. A generalized potential function is a function on the Cartesian product of the set of states and a covering of the action space which incorporates some information about players’ preferences. It is associated with a belief-invariant correlating device such that a signal sent to a player is a subset of the player’s actions, which can be interpreted as a vague prescription to choose some action from this subset. We show that, for every belief-invariant correlating device that maximizes the expected value of a generalized potential function, there exists a BIBCE in which every player chooses an action from a subset of actions prescribed by the device, and that the set of such BIBCE is robust, which can differ from the set of potential maximizing BNE.

    Introduction

    Consider an analyst who models a strategic situation in terms of an incomplete information game and makes a prediction about players’ behavior. He believes that his model correctly describes the probability distribution over the players’ Mertens-Zamir hierarchies of beliefs over payoff-relevant states (Mertens and Zamir, 1985). However, players may have observed signals generated by an individually uninformative correlating device (Liu, 2015), which allows the players to correlate their behavior. In other words, the true incomplete information game may have correlated duplicated belief hierarchies (Ely and Peski, 2006; Dekel et al., 2007). Then, a natural candidate for the analyst’s prediction is the set of outcomes that can arise in some Bayes Nash equilibrium (BNE) of some incomplete information game with the same distribution over belief hierarchies. Liu (2015) shows that this set of outcomes can be characterized as the set of belief-invariant Bayes correlated equilibria (BIBCE) of the analyst’s model. A BIBCE is a Bayes correlated equilibrium (BCE) in which a prescribed action does not reveal any additional information to the player about the opponents’ types and the payoff-relevant state, thus preserving the player’s belief hierarchy.

     

     

    WP019

  • LQG Information Design

    Abstract

    A linear-quadratic-Gaussian (LQG) game is an incomplete information game with quadratic payoff functions and Gaussian information structures. It has many applications, such as Cournot games, Bertrand games, beauty contest games, and network games, among others. LQG information design is the problem of finding an information structure, from a given collection of feasible Gaussian information structures, that maximizes a quadratic objective function when players follow a Bayes Nash equilibrium. This paper studies LQG information design by formulating it as a semidefinite program, which is a natural generalization of a linear program. Using this formulation, we provide sufficient conditions for the optimality and suboptimality of no and full information disclosure. In the case of symmetric LQG games, we characterize the optimal symmetric information structure, and in the case of asymmetric LQG games, we characterize the optimal public information structure, each in a closed-form expression.
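
    As a purely illustrative sketch of the semidefinite-programming machinery referred to above (not the paper's actual formulation), the snippet below maximizes a linear objective over a joint covariance matrix of actions and the state, subject to linear constraints and a positive semidefiniteness constraint; the objective weights and the constraints are hypothetical.

    ```python
    # Illustrative semidefinite program (hypothetical objective and constraints):
    # choose the covariance matrix V of (action_1, action_2, state) to maximize
    # a linear function of V subject to linear restrictions and V being PSD.
    import cvxpy as cp
    import numpy as np

    C = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])      # hypothetical designer objective weights

    V = cp.Variable((3, 3), PSD=True)    # joint covariance matrix, constrained to be PSD

    constraints = [
        V[2, 2] == 1.0,                  # normalize the variance of the state
        V[0, 0] == 2.0 * V[0, 2],        # hypothetical linear (obedience-type) restriction, player 1
        V[1, 1] == 2.0 * V[1, 2],        # hypothetical linear restriction, player 2
    ]

    prob = cp.Problem(cp.Maximize(cp.trace(C @ V)), constraints)
    prob.solve()
    print("optimal value:", prob.value)
    print("optimal covariance matrix:\n", V.value)
    ```

    Loosely, the appeal of such a formulation is that the objective and the linear restrictions are all linear in the covariance matrix, so the only non-linear ingredient is the positive semidefiniteness constraint, which is exactly what a semidefinite program handles.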

    Introduction

    An equilibrium outcome in an incomplete information game depends not only on the payoff structure, which consists of payoff functions together with a probability distribution of a payoff state, but also on the information structure, which maps a payoff state to possibly stochastic signals for the players. Information design analyzes the influence of the information structure on equilibrium outcomes and, in particular, characterizes an optimal information structure that induces an equilibrium outcome maximizing the expected value of an objective function of an information designer, who is assumed to be able to choose and commit to the information structure. General approaches to information design are presented by Bergemann and Morris (2013, 2016a,b, 2019), Taneva (2019), and Mathevet et al. (2020). A rapidly growing body of literature has investigated economic applications of information design in areas such as matching markets (Ostrovsky and Schwarz, 2010), voting games (Alonso and Camara, 2016), congestion games (Das et al., 2017), auctions (Bergemann et al., 2017), contests (Zhang and Zhou, 2016), and stress testing (Inostroza and Pavan, 2018), among others.

     

    WP018

  • Gaussian Hierarchical Latent Dirichlet Allocation: Bringing Polysemy Back

    Abstract

    Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation and Gaussian latent Dirichlet allocation, where the former uses multinomial distributions over words and the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in the sense that it does not capture the polysemy of a word such as “bank.” In this paper, we show that Gaussian latent Dirichlet allocation can recover the ability to capture polysemy by introducing a hierarchical structure in the set of topics that the model can use to represent a given document. Our Gaussian hierarchical latent Dirichlet allocation significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations compared with hierarchical latent Dirichlet allocation. Our extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy over a wide range of corpora and word embedding vectors.

    Introduction

    Topic models are widely used to identify the latent representation of a set of documents. Since latent Dirichlet allocation (LDA) [4] was introduced, topic models have been used in a wide variety of applications. Recent work includes the analysis of legislative text [24], detection of malicious websites [33], and analysis of the narratives of dermatological disease [23]. The modular structure of LDA, and graphical models in general [17], has made it possible to create various extensions to the plain vanilla version. Significant works include the correlated topic model (CTM), which incorporates the correlation among topics that co-occur in a document [6]; hierarchical LDA (hLDA), which jointly learns the underlying topic and the hierarchical relational structure among topics [3]; and the dynamic topic model, which models the time evolution of topics [7].

     

     

    WP017

  • Decentralizability of Efficient Allocations with Heterogenous Forecasts

    Abstract

    Do the price forecasts of rational economic agents need to coincide in perfectly competitive complete markets in order for markets to allocate resources efficiently? To address this question, we define an efficient temporary equilibrium (ETE) within the framework of a two-period economy. Although an ETE allocation is intertemporally efficient and is obtained by perfect competition, it can arise without the agents' forecasts being coordinated on a perfect foresight price. We show that there is a one-dimensional set of such Pareto efficient allocations for generic endowments.

    Introduction

    Intertemporal trade in complete markets is known to achieve Pareto efficiency when the price forecasts of agents coincide and are correct. The usual justification for this coincidence of price forecasts is that if agents understand the market environment perfectly, they ought to reach the same conclusions, and hence, in particular, their forecasts must coincide. But it is against the spirit of perfect competition to require that agents understand the market environment beyond the market prices they commonly observe; we therefore study intertemporal trade without requiring that the price forecasts of heterogeneous agents coincide.

     

     

    WP016

  • Search and Matching in Rental Housing Market   

    Abstract

    This paper builds a model of the rental housing market. With a search and matching friction in the rental housing market, the entry of new houses is endogenized over the business cycle. Price negotiation takes place only when an owner and a tenant newly match and contract on a rental price; after the contract is made, the rental price is fixed until the contract ends. Simulations show that the variation of prices and market tightness changes with the degree of search friction in the housing market, the speed of the housing cycle, and the bargaining power between owner and tenant in price setting. The extensive margin effect brought about by housing entry contributes substantially to price variation, and this effect changes significantly with the parameters.

    Introduction

    Earlier studies, such as Wheaton (1990), focus on search behavior in the housing market and show the advantage of a search model in explaining the housing market.
    Non-homeownership rates are at nontrivial levels for business cycle analysis across countries. For Japan, the Statistics Bureau of Japan (2018) shows that the non-homeownership rate has stayed at about 40 percent for many years. The Australian Bureau of Statistics reports that the proportion of Australian households renting their home was 32 percent in 2017–18. In the U.S., the Census Bureau releases national non-homeownership rates, which have been about 35 percent in the last few years. As well as the buying and selling of houses, house leasing behavior can contribute to the business cycle.

     

     

    WP015

  • Debt Intolerance: Threshold Level and Composition

    Abstract

    Fiscal vulnerabilities depend on both the level and composition of government debt. This study investigates the threshold level of debt and its composition to understand the non-linear behavior of the long-term interest rate by developing a novel approach: a panel smooth transition regression with a general logistic model (i.e., a generalized panel smooth transition regression). Our main findings are threefold: (i) the impact of the expected public debt on the interest rate would increase exponentially and significantly as the foreign private holdings ratio exceeds approximately 20 percent; otherwise, strong home bias would mitigate the upward pressure of an increase in public debt on the interest rate; (ii) if the expected public debt-to-GDP ratio exceeds a certain level that depends on the funding source, an increase in foreign private holdings of government debt would cause a rise in long-term interest rates, offsetting the downward effect on long-term interest rates from expanded market liquidity; and (iii) the out-of-sample forecasts of our novel non-linear model are more accurate than those of previous methods. As such, the composition of government debt plays an important role in the highly non-linear behavior of the long-term interest rate.
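
    For reference, a generic panel smooth transition regression with a logistic transition function can be written as follows (the notation is ours and need not match the paper's generalized specification):

    \[
    y_{it} = \mu_i + \beta_0' x_{it} + \beta_1' x_{it}\, g(q_{it};\gamma,c) + \varepsilon_{it},
    \qquad
    g(q_{it};\gamma,c) = \bigl[1 + \exp\bigl(-\gamma (q_{it}-c)\bigr)\bigr]^{-1},
    \]

    where y_{it} is the long-term interest rate, x_{it} collects regressors such as the expected public debt-to-GDP ratio, q_{it} is the transition variable (for example, the foreign private holdings ratio), \gamma governs the smoothness of the regime change, and c is the threshold location; a general logistic model relaxes the shape of g beyond this symmetric form.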

    Introduction

    As argued by Reinhart et al. (2003), fiscal vulnerabilities depend on both the level and composition (foreign vs. domestic) of government debt. They describe the “debt intolerance” phenomenon, in which interest rates in developing economies can spike above the “tolerance ceiling,” even though the debt levels could be considered manageable by advanced country standards. Long-term interest rates in advanced economies have been lower than those in emerging markets, although debt levels in advanced economies such as Japan, the United Kingdom, and the United States are much higher than in emerging markets (Figures 1 and 2). While significant research has been devoted to estimating the marginal impact of public debt on long-term interest rates, the estimated impacts differ, even after controlling for fundamental variables such as inflation expectations and growth rates.

     

     

    WP014

  • How Large is the Demand for Money at the ZLB? Evidence from Japan

    Abstract

    This paper estimates a money demand function using Japanese data from 1985 to 2017, which includes the period of near-zero interest rates over the last two decades. We compare a log-log specification and a semi-log specification by employing the methodology proposed by Kejriwal and Perron (2010) on cointegrating relationships with structural breaks. Our main finding is that there exists a cointegrating relationship with a single break between the money-income ratio and the interest rate in the case of the log-log form but not in the case of the semi-log form. More specifically, we show that the substantial increase in the money-income ratio during the period of near-zero interest rates is well captured by the log-log form but not by the semi-log form. We also show that the demand for money did not decline in 2006 when the Bank of Japan terminated quantitative easing and started to raise the policy rate, suggesting that there was an upward shift in the money demand schedule. Finally, we find that the welfare gain from moving from 2 percent inflation to price stability is 0.10 percent of nominal GDP, which is more than six times as large as the corresponding estimate for the United States.

    Introduction

    There is no consensus about whether the interest rate variable should be used in log or not when estimating the money demand function. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., the log of real money balances is regressed on the log of the nominal interest rate), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log form (i.e., the log of real money demand is regressed on the level of the nominal interest rate). The purpose of this paper is to specify the functional form of money demand using Japanese data covering the recent period with nominal interest rates very close to zero.
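
    In equation form, the two specifications being compared can be written as follows (a standard rendering; the symbols are ours):

    \[
    \text{log-log:}\quad \ln\!\left(\frac{M_t}{P_t Y_t}\right) = \alpha + \eta \ln i_t + u_t,
    \qquad
    \text{semi-log:}\quad \ln\!\left(\frac{M_t}{P_t Y_t}\right) = \alpha + \xi\, i_t + u_t,
    \]

    where M_t/(P_t Y_t) is the money-income ratio and i_t is the nominal interest rate. The two forms diverge most sharply as i_t approaches zero, which is why a sample containing near-zero interest rates is informative for discriminating between them.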

     

    WP013

  • Dynamic Productivity Decomposition with Allocative Efficiency

    Abstract

    We propose a novel approach to decomposing aggregate productivity growth into changes in technical efficiency, allocative efficiency, and variety of goods as well as relative efficiency of entrants and exiters. We measure technical efficiency by the aggregate production possibility frontier and allocative efficiency by the distance from the frontier. Applying our approach to establishment- and firm-level datasets from Japan, we find that the allocative efficiency among survivors declined during the banking crisis period, while the technical efficiency declined during the Global Financial Crisis period. Furthermore, we find that both entrants and exiters were likely to be more efficient than survivors.  

    Introduction

    Growth in aggregate productivity is key to economic growth in both developing and developed economies. Past studies have proposed various methods of analysis to gain further insight into its driving forces. These include aggregating producer-level productivity to economy-wide productivity, and decomposing changes in aggregate productivity. This decomposition consists of changes in technology, allocation of resources across producers, and the relative productivity of entrants and survivors. However, as far as we know, no preceding study decomposes aggregate productivity into technical efficiency in terms of the aggregate production possibility frontier, and allocative efficiency in terms of distance from the frontier, although this decomposition is straightforward from a microeconomic view.

     

    WP012

  • Who Needs Guidance from a Financial Adviser? Evidence from Japan

    Abstract

    Using individual family household data from Japan, we find that households prefer financial institutions, family and friends, and financial experts as actual sources of financial information, and financial institutions, neutral institutions not reflecting the interests of a particular industry, and financial experts as desirable sources of financial information. We find that households choosing actual sources of financial information involving financial experts have better financial knowledge, as measured in terms of knowledge about the Deposit Insurance Corporation of Japan, than those selecting family and friends for the same purpose. These same households are also more willing to purchase high-yielding financial products entailing the possibility of a capital loss within one to two years. We also find that households choosing desirable sources of financial information involving financial experts and neutral institutions also have better financial knowledge. Conditional on the choice of financial institutions as the actual source, households that regard neutral institutions as a more desirable source tend to have better financial knowledge. However, it is unclear whether households that seek the guidance of a financial expert have higher ratios of stock and investment trusts to financial assets than those selecting family and friends as their source of financial information. 

    Introduction

    The prolonged period of low economic growth and interest rates that has accompanied rapid population aging in Japan over the past two decades requires ever more Japanese households to decide more carefully how much to save and where to invest. For example, many Japanese corporations have begun to implement defined contribution corporate pension plans, such that workers must take much more responsibility for their own saving. However, the Japanese flow of funds accounts show that riskier (higher yielding) assets, such as stocks or investment trusts, represent just 16% of all household financial assets as of December 2018. Observing this rapidly changing landscape for retirement savings, the Financial Services Agency (FSA) of Japan has been actively promoting investment in FSA-selected no-load and simple investment trusts, through tax exemptions on dividend and interest earnings on securities. However, it remains for households to choose from the products approved by the FSA, and they still need sufficient financial knowledge for this purpose.

     

    WP011

  • Detecting Stock Market Bubbles Based on the Cross‐Sectional Dispersion of Stock Prices

    Abstract

    A statistical method is proposed for detecting stock market bubbles that occur when speculative funds concentrate on a small set of stocks. A bubble is defined as the stock price diverging from fundamentals. A firm’s financial standing is certainly a key fundamental attribute of that firm, and the law of one price would dictate that firms of similar financial standing share similar fundamentals. We investigate the variation in market capitalization normalized by fundamentals, where fundamentals are estimated by a Lasso regression on variables describing a firm’s financial standing. The market capitalization distribution has a substantially heavier upper tail during bubble periods; that is, the market capitalization gap opens up in a small subset of firms with similar fundamentals. This phenomenon suggests that speculative funds concentrate in this subset. We demonstrate that this phenomenon could have been used to detect the dot-com bubble of 1998-2000 on different stock exchanges.
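
    As a rough illustration of the normalization step (a minimal sketch of ours, not the paper's code), one can regress log market capitalization on financial-statement variables with a Lasso penalty and then inspect the upper tail of the residuals:

    ```python
    # Minimal sketch: normalize market capitalization by Lasso-fitted fundamentals
    # and inspect the upper tail of the residuals. File and column names are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LassoCV

    df = pd.read_csv("firm_financials.csv")       # one row per firm for a given year

    X = np.log(df[["net_assets", "sales"]])       # fundamentals (hypothetical columns)
    y = np.log(df["market_cap"])

    model = LassoCV(cv=5).fit(X, y)               # penalty chosen by cross-validation
    residual = y - model.predict(X)               # log market cap relative to fitted fundamentals

    # A markedly heavier upper tail of `residual` in a given year is the kind of
    # signal the paper associates with speculative concentration in a few firms.
    print(residual.quantile([0.50, 0.90, 0.99]))
    ```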

    Introduction

    It is common knowledge in macroeconomics that, as Federal Reserve Board Chairman Alan Greenspan said in 2002, “... it is very difficult to identify a bubble until after the fact; that is, when its bursting confirms its existence.” In other words, before a bubble bursts, there is no way to establish whether the economy is in a bubble or not. In economics, a stock bubble is defined as a state in which speculative investment flows into a firm in excess of the firm’s fundamentals, so the market capitalization (= stock price × number of shares issued) becomes excessively high compared to the fundamentals. Unfortunately, it is exceedingly difficult to precisely measure a firm’s fundamentals, and this has made it nearly impossible to detect a stock bubble by simply measuring the divergence between fundamentals and market capitalization [1–3]. On the other hand, we empirically know that the market capitalization and PBR (= market capitalization / net assets) of some stocks increase during bubble periods [4–7]. However, these are also buoyed by rising fundamentals, so it is not always possible to figure out whether increases can be attributed to an emerging bubble.

     

    WP010

  • Product Cycle and Prices: a Search Foundation

    Abstract

    This paper develops a price model with a product cycle. Through a frictional product market with search and matching frictions, an endogenous product cycle is accompanied by a price cycle in which the price of a new good and the price of an existing good are set in different manners. The model nests a New Keynesian Phillips curve with Calvo price adjustment as a special case and generates several new phenomena. Our simple model captures facts observed in Japanese product-level data, such as the pro-cyclicality among product entry, demand, and prices. In a general equilibrium model, endogenous product entry increases the variation of the inflation rate by 20 percent in Japan. This number increases to 72 percent when price discounting after the first price is allowed for.

    Introduction

    "We have all visited several stores to check prices and/or to find the right item or the right size. Similarly, it can take time and effort for a worker to find a suitable job with suitable pay and for employers to receive and evaluate applications for job openings. Search theory explores the workings of markets once facts such as these are incorporated into the analysis. Adequate analysis of market frictions needs to consider how reactions to frictions change the overall economic environment: not only do frictions change incentives for buyers and sellers, but the responses to the changed incentives also alter the economic environment for all the participants in the market. Because of these feedback effects, seemingly small frictions can have large effects on outcomes."

     

    Peter Diamond

     

    WP009

  • House Price Dispersion in Boom-Bust Cycles: Evidence from Tokyo

    Abstract

    We investigate the cross-sectional distribution of house prices in the Greater Tokyo Area for the period 1986 to 2009. We find that size-adjusted house prices follow a lognormal distribution except for the period of the housing bubble and its collapse in Tokyo, for which the price distribution has a substantially heavier upper tail than that of a lognormal distribution. We also find that, during the bubble era, sharp price movements were concentrated in particular areas, and this spatial heterogeneity is the source of the fat upper tail. These findings suggest that, during a bubble, prices increase markedly for certain properties but to a much lesser extent for other properties, leading to an increase in price inequality across properties. In other words, the defining property of real estate bubbles is not the rapid price hike itself but an increase in price dispersion. We argue that the shape of cross-sectional house price distributions may contain information useful for the detection of housing bubbles. 

    Introduction

    Property market developments are of increasing importance to practitioners and policymakers. The financial crises of the past two decades have illustrated just how critical the health of this sector can be for achieving financial stability. For example, the recent financial crisis in the United States in its early stages reared its head in the form of the subprime loan problem. Similarly, the financial crises in Japan and Scandinavia in the 1990s were all triggered by the collapse of bubbles in the real estate market. More recently, the rapid rise in real estate prices - often supported by a strong expansion in bank lending - in a number of emerging market economies has become a concern for policymakers. Given these experiences, it is critically important to analyze the relationship between property markets, finance, and financial crises.

     

    WP008

  • The Lucas Imperfect Information Model with Imperfect Common Knowledge

    Abstract

    In the Lucas Imperfect Information model, output responds to unanticipated monetary shocks. We incorporate more general information structures into the Lucas model and demonstrate that output also responds to (dispersedly) anticipated monetary shocks if the information is imperfect common knowledge. Thus, the real effects of money consist of the unanticipated part and the anticipated part, and we decompose the latter into two effects, an imperfect common knowledge effect and a private information effect. We then consider an information structure composed of public and private signals. The real effects disappear when either signal reveals monetary shocks as common knowledge. However, when the precision of private information is fixed, the real effects are small not only when a public signal is very precise but also when it is very imprecise. This implies that a more precise public signal can amplify the real effects and make the economy more volatile.  
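
    A generic rendering of the kind of information structure described above (our notation, not necessarily the paper's) is

    \[
    x_i = m + \varepsilon_i, \quad \varepsilon_i \sim N(0, \sigma_x^2),
    \qquad
    y = m + \eta, \quad \eta \sim N(0, \sigma_y^2),
    \]

    where m is the monetary shock, x_i is agent i's private signal, and y is the public signal. The shock becomes common knowledge, and the real effects vanish, in the limit \sigma_x \to 0 or \sigma_y \to 0; the interesting cases lie in between, where the precision of the public signal relative to the private signal shapes the size of the real effects.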

    Introduction

    In the Lucas Imperfect Information model (Lucas, 1972, 1973), which formalizes the idea of Phelps (1970), markets are decentralized and agents in each market have only limited information about prices in other markets. As a consequence, output responds to unanticipated monetary shocks; that is, imperfect information about prices generates real effects of money. However, if monetary shocks are anticipated, no real effects arise. This implies that monetary shocks cannot have lasting effects, which is considered to be a serious shortcoming of the Lucas model.

     

    WP007

  • Notes on “Refinements and Higher Order Beliefs”

    The abstract of our 1997 survey paper Kajii and Morris (1997b) on "Refinements and Higher Order Beliefs" reads:

     

    This paper presents a simple framework that allows us to survey and relate some different strands of the game theory literature. We describe a “canonical” way of adding incomplete information to a complete information game. This framework allows us to give a simple “complete theory” interpretation (Kreps 1990) of standard normal form refinements such as perfection, and to relate refinements both to the “higher order beliefs literature” (Rubinstein 1989; Monderer and Samet 1989; Morris, Rob and Shin 1995; Kajii and Morris 1997a) and the “payoff uncertainty approach” (Fudenberg, Kreps and Levine 1988; Dekel and Fudenberg 1990).

     

    In particular, this paper provided a unified framework to relate the notion of equilibria robust to incomplete information introduced in Kajii and Morris (1997a) [Hereafter, KM1997] to the classic refinements literature. It followed Fudenberg, Kreps and Levine (1988) and Kreps (1990) in relating refinements of Nash equilibria to a "complete theory" where behavior was rationalized by explicit incomplete information about payoffs, rather than depending on action trembles or other exogenous perturbations. It followed Fudenberg and Tirole (1991), chapter 14, in providing a unified treatment of refinements and the literature on higher-order beliefs rather than proposing a particular solution concept.

     

    The primary purpose of the survey paper was to promote the idea of robust equilibria in KM1997, and we did not try to publish it as an independent paper. Since we wrote this paper, there have fortunately been many developments in the literature on robust equilibria. However, there has been little work emphasizing a unified perspective, and consequently this paper seems more relevant than ever. We are therefore very happy to publish it twenty years later. In the following, we provide some notes on relevant developments in the literature and how they relate to the survey. These notes assume familiarity with the basic concepts introduced in the survey paper and KM1997.

     

     

    WP006

  • Stability of Sunspot Equilibria under Adaptive Learning with Imperfect Information

    Abstract

    This paper investigates whether sunspot equilibria are stable under agents’ adaptive learning with imperfect information sets of exogenous variables. Each exogenous variable is observable to some agents and unobservable to others, so that agents’ forecasting models are heterogeneously misspecified. The paper finds that the stability conditions of sunspot equilibria are relaxed or unchanged by imperfect information. In a basic New Keynesian model with highly imperfect information, sunspot equilibria are stable if and only if nominal interest rate rules violate the Taylor principle. This result is in contrast to the literature, in which sunspot equilibria are stable only if policy rules follow the principle, and is consistent with observations during past business cycle fluctuations.

    Introduction

    Sunspot-driven business cycle models are popular tools to account for the features of macroeconomic fluctuations that are not explained by fundamental shocks. US business cycles in the pre-Volcker period are considered to have been driven by self-fulfilling expectations, so-called “sunspots” (see Benhabib and Farmer, 1994; Farmer and Guo, 1994). Those non-fundamental expectations are considered to stem from the Fed’s passive stance toward inflation (see Clarida, Gali, and Gertler, 2000; Lubik and Schorfheide, 2004). Even recently, the global financial turmoil of the last decade had a historic magnitude that could not be explained by fundamental factors, and hence it has been analyzed in models with sunspot expectations (see Benhabib and Wang, 2013; Gertler and Kiyotaki, 2015).

     

    WP005

  • Predicting Adverse Media Risk using a Heterogeneous Information Network

    Abstract

    The media plays a central role in monitoring powerful institutions and identifying any activities harmful to the public interest. In the investment sphere, comprising 46,583 officially listed domestic firms on stock exchanges worldwide, there is a growing interest in “doing the right thing”, i.e., in putting pressure on companies to improve their environmental, social and governance (ESG) practices. However, how can one overcome the sparsity of ESG data from non-reporting firms, and how can one identify the relevant information in the annual reports of this large universe? Here, we construct a vast heterogeneous information network that covers the necessary information surrounding each firm, assembled from seven professionally curated datasets and two open datasets, resulting in about 50 million nodes and 400 million edges in total. Exploiting this heterogeneous information network, we propose a model that can learn from past adverse media coverage patterns and predict the occurrence of future adverse media coverage events for the whole universe of firms. Our approach is tested using the adverse media coverage data of more than 35,000 firms worldwide from January 2012 to May 2018. Comparing with state-of-the-art methods with and without the network, we show that the predictive accuracy is substantially improved when using the heterogeneous information network. This work suggests new ways to consolidate the diffuse information contained in big data in order to monitor dominant institutions on a global scale for more socially responsible investment, better risk management, and the surveillance of powerful institutions.

    Introduction

    Adverse media coverage sometimes leads to fatal results for a company. In the press release sent out by Cambridge Analytica on May 2, 2018, the company wrote that “Cambridge Analytica has been the subject of numerous unfounded accusations, ... media coverage has driven away virtually all of the company’s customers and suppliers” [5]. This is just one recent example highlighting the impact of adverse media coverage on a firm’s fate. In another example, the impact of adverse media coverage on Swiss bank profits was estimated to be 3.35 times the median annual net profit of small banks and 0.73 times that of large banks [3]. These numbers are significant, indicating how adverse media coverage can cause huge damage to a bank. Moreover, a new factor, priced as the “no media coverage premium” [10], has been identified to help explain financial returns: stocks with no media coverage earn higher returns than stocks with high media coverage. Within the rational-agent framework, this may result from impediments to trade and/or from lower investor recognition leading to lower diversification [10]. Another mechanism could be associated with the fact that most of the coverage of mass media is negative [15, 23].

     

    WP004

  • Term Structure Models During the Global Financial Crisis: A Parsimonious Text Mining Approach

    Abstract

    This work develops and estimates a three-factor term structure model with explicit sentiment factors over a period including the global financial crisis, when market confidence was said to have eroded considerably. It utilizes a large text dataset of real-time, relatively high-frequency market news and takes account of the difficulties in incorporating market sentiment into such models. To the best of our knowledge, this is the first attempt to use this category of data in term structure models.

    Although market sentiment or market confidence is often regarded as an important driver of asset markets, it is not explicitly incorporated in traditional empirical factor models for daily yield curve data because it is unobservable. To overcome this problem, we use a text mining approach to generate observable variables that are driven by otherwise unobservable sentiment factors. Then, applying the Monte Carlo filter as a filtering method in a Bayesian state space approach, we estimate the dynamic stochastic structure of these latent factors from the observable variables they drive.

    As a result, the three-factor model with text mining is able to distinguish (1) a spread-steepening factor, driven by pessimists’ views and explaining the spreads related to ultra-long-term yields, from (2) a spread-flattening factor, driven by optimists’ views and influencing the long- and medium-term spreads. The three-factor model with text mining also fits the observed yields better than the model without text mining.

    Moreover, we collect market participants’ views about specific spreads in the term structure and find that the movements of the identified sentiment factors are consistent with the market participants’ views, and thus with market sentiment.

    Introduction

    Although “market sentiment” is often regarded as an important driver of asset markets, it is not explicitly incorporated in traditional empirical factor models for the term structure of interest rates. This is because (1) it is not clear what sentiment factors mean, and moreover, (2) there are scant observations, if any, of these sentiment factors. This work formulates and estimates a factor model with explicit sentiment factors over a period including the global financial crisis, in which uncertainty was said to have heightened considerably. It utilizes a large text dataset of real-time, relatively high-frequency market news and takes account of difficulties (1) and (2). To the best of our knowledge, this is the first attempt to use this category of data in term structure models.

     

    WP003

  • The Demand for Money at the Zero Interest Rate Bound

    Abstract

    This paper estimates a money demand function using US data from 1980 onward, including the period of near-zero interest rates following the global financial crisis. We conduct cointegration tests to show that the substantial increase in the money-income ratio during the period of near-zero interest rates is captured well by the money demand function in log-log form, but not by that in semi-log form. Our result is the opposite of the result obtained by Ireland (2009), who, using data up until 2006, found that the semi-log specification performs better. The difference in the result from Ireland (2009) mainly stems from the difference in the observation period employed: our observation period contains 24 quarters with interest rates below 1 percent, while Ireland’s (2009) observation period contains only three quarters. We also compute the welfare cost of inflation based on the estimated money demand function to find that it is very small: the welfare cost of 2 percent inflation is only 0.04 percent of national income, which is of a similar magnitude as the estimate obtained by Ireland (2009) but much smaller than the estimate by Lucas (2000).

    Introduction

    In regression analyses of money demand functions, there is no consensus on whether the nominal interest rate as an independent variable should be used in linear or log form. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., regressing real money balances (or real money balances relative to nominal GDP) in log on nominal interest rates in log), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log specification (i.e., nominal interest rates are not in log).

     

    WP002

  • The Formation of Consumer Inflation Expectations: New Evidence From Japan’s Deflation Experience

    Abstract

    Using a new micro-level dataset we investigate the relationship between the inflation experience and inflation expectations of households in Japan. We focus on the period after 1995, when Japan began its era of deflation. Our key findings are fourfold. Firstly, we find that inflation expectations tend to increase with age. Secondly, we find that measured inflation rates of items purchased also increase with age. However, we find that age and inflation expectations continue to have a positive correlation even after controlling for the household-level rate of inflation. Further analysis suggests that the positive correlation between age and inflation expectations is driven to a significant degree by the correlation between cohort and inflation expectations, which we interpret to represent the effect of historical inflation experience on expectations of future inflation rates.

    Introduction

    Since at least the time of Keynes (1936), economic agents’ expectations of future inflation rates have played a pivotal role in macroeconomics. Woodford (2003) describes the central importance of inflation expectations to modern macroeconomic models due to the intertemporal nature of economic problems, while Sargent (1982) and Blinder (2000) highlight the dependence of monetary policy on these expectations. However, despite the important role of inflation expectations, their formal inclusion in macroeconomic models is usually ad-hoc with little empirical justification.

     

    WP001

  • Role of Expectation in a Liquidity Trap

    Abstract

    This paper investigates how expectation formation affects monetary policy effectiveness in a liquidity trap. We examine two dimensions of expectation formation: (i) different degrees of anchoring of expectations and (ii) different degrees of forward-lookingness in forming expectations. We reveal several points. First, under optimal commitment policy, the formation of inflation expectations does not markedly change the effects of monetary policy. Second, in contrast to optimal commitment policy, the effects of monetary policy change significantly with different inflation expectation formations under the Taylor rule. The declines in the inflation rate and the output gap are mitigated if expectations are well anchored. This rule, however, cannot avoid large drops when the degree of forward-lookingness in expectation formation decreases. Third, a simple rule with price-level targeting shows outcomes that vary with expectation formation in a similar way to the Taylor rule. However, under a simple rule with price-level targeting, the inflation rate and the output gap drop less severely, owing to history-dependent easing, and are less sensitive to expectation formation than under the Taylor rule. Even for the Japanese economy, the effects of monetary policy on economic dynamics change significantly with expectation formation for rules other than optimal commitment policy. Furthermore, when the same expectation formations for the output gap are assumed, we observe similar outcomes.

    Introduction

    Expectations are one of the most important factors in the conduct of monetary policy. In particular, managing agents’ expectations is a nontrivial tool in a liquidity trap, since the central bank faces a limit in reducing the policy interest rate. Many papers, such as Eggertsson and Woodford (2003b), Jung, Teranishi, and Watanabe (2005), Adam and Billi (2006, 2007), and Nakov (2008), analyze optimal monetary policy in a liquidity trap and conclude that optimal commitment policy is very effective. The commitment policy can reduce the real interest rate and stimulate the economy by controlling inflation expectations. Their conclusions, however, depend solely on two important assumptions, namely rational expectations and optimal commitment monetary policy.

  • Product Cycles and Prices: Search Foundation

    Abstract

    This paper develops a price model with a search foundation based on product cycles and prices. Observation shows that firms match with a new product, then set a new price through negotiation, and fix the price until the product exits from the market. This observed behavior results in a new model of price stickiness, a search-based Phillips curve. The model includes a New Keynesian Phillips curve with Calvo price adjustment as a special case and describes new phenomena. First, new parameters and variables of a frictional goods market determine price dynamics. As the separation rate in the goods market decreases, prices become more sticky, i.e., the slope of the search-based Phillips curve becomes flatter, since product turnover is sluggish. Moreover, other goods market features, such as the matching probability, the elasticity of matching, and the bargaining power in price setting, determine price dynamics. Second, goods market friction can generate endogenously persistent inflation dynamics without assuming indexation to the lagged inflation rate. Third, when the number of products increases persistently, deflation continues for a long period, which can explain a secular deflation.

    Introduction

    “We have all visited several stores to check prices and/or to find the right item or the right size. Similarly, it can take time and effort for a worker to find a suitable job with suitable pay and for employers to receive and evaluate applications for job openings. Search theory explores the workings of markets once facts such as these are incorporated into the analysis. Adequate analysis of market frictions needs to consider how reactions to frictions change the overall economic environment: not only do frictions change incentives for buyers and sellers, but the responses to the changed incentives also alter the economic environment for all the participants in the market. Because of these feedback effects, seemingly small frictions can have large effects on outcomes.”

  • Why Has Japan Failed to Escape from Deflation?

    Abstract

    Japan has failed to escape from deflation despite extraordinary monetary policy easing over the past four years. Monetary easing undoubtedly stimulated aggregate demand, leading to an improvement in the output gap. However, since the Phillips curve was almost flat, prices hardly reacted. Against this background, the key question is why prices were so sticky. To examine this, we employ sectoral price data for Japan and seven other countries including the United States, and use these to compare the shape of the price change distribution. Our main finding is that Japan differs significantly from the other countries in that the mode of the distribution is very close to zero for Japan, while it is near 2 percent for other countries. This suggests that whereas in the United States and other countries the “default” is for firms to raise prices by about 2 percent each year, in Japan the default is that, as a result of prolonged deflation, firms keep prices unchanged.

    Introduction

    From the second half of the 1990s onward, Japan suffered a period of prolonged deflation, in which the consumer price index (CPI) declined as a trend. During this period, both the government and the Bank of Japan (BOJ) tried various policies to escape from deflation. For instance, from 1999 to 2000, the BOJ adopted a “zero interest rate policy” in which it lowered the policy interest rate to zero. This was followed by “quantitative easing” from 2001 until 2006. More recently, in January 2013, the BOJ adopted a “price stability target” with the aim of raising the annual rate of increase in the CPI to 2 percent. In April 2013, it announced that it was aiming to achieve the 2 percent inflation target within two years and, in order to achieve this, introduced Quantitative and Qualitative Easing (QQE), which sought to double the amount of base money within two years. Further, in February 2016, the BOJ introduced a “negative interest rate policy,” in which the BOJ applies a negative interest rate of minus 0.1 percent to current accounts held by private banks at the BOJ, followed, in September 2016, by the introduction of “yield curve control,” in which the BOJ conducts JGB operations so as to keep the 10-year JGB yield at zero percent. See Table 1 for an overview of recent policy decisions made by the BOJ.

  • The Formation of Consumer Inflation Expectations: New Evidence From Japan’s Deflation Experience

    Abstract

    Using a new micro-level dataset we investigate the relationship between the inflation experience and inflation expectations of individuals in Japan. We focus on the period after 1995, when Japan began its era of deflation. Our key findings are fourfold. Firstly, we find that inflation expectations tend to increase with age. Secondly, we find that measured inflation rates of items purchased also increase with age. However, we find that age and inflation expectations continue to have a positive correlation even after controlling for the individual-level rate of inflation. Further analysis suggests that the positive correlation between age and inflation expectations is driven to a significant degree by the correlation between cohort and inflation expectations, which we interpret to represent the effect of historical inflation experience on expectations of future inflation rates.

    Introduction

    Since at least the time of Keynes (1936), economic agents’ expectations of future inflation rates have played a pivotal role in macroeconomics. Woodford (2003) describes the central importance of inflation expectations to modern macroeconomic models due to the intertemporal nature of economic problems, while Sargent (1982) and Blinder (2000) highlight the dependence of monetary policy on these expectations. However, despite the important role of inflation expectations, their formal inclusion in macroeconomic models is usually ad-hoc with little empirical justification.

  • Extracting fiscal policy expectations from a cross section of daily stock returns

    Abstract

    The "Fiscal foresight problem" poses a challenge to researchers who wish to estimate macroeconomic impacts of fiscal policies. That is, as much of the policies are pre-announced, the traditional identification strategy which relies on the timing and the amount of actual spending changes could be misleading. In Shioji and Morita (2015), we addressed this problem by constructing a daily indicator of surprises about future public investment spending changes for Japan. Our approach combined a detailed analysis of newspaper articles with information from the stock market. The latter was represented by a weighted average of stock returns across companies from the sector deeply involved with public work, namely the construction industry. A potential shortcoming with this approach is that any shock that has an industry-wide consequence, which happened to arrive on the same day that a news about policy arrived will be reflected in this average return. In contrast, in this paper, we propose a new indicator which takes advantage of heterogeneity across firms within the same industry. Degrees of dependence on public procurement differ markedly between construction companies. For some firms, over 80% of their work is government-related. Others essentially do all their work for the private sector. Yet they share many other features, such as large land ownership and a heavy reliance on bank finance. By looking at differences in the reactions of stock returns between those firms, we should be able to come up with a more purified measure of changes in the private agents' expectations about policies. Based on this idea, we propose two new indicators. One is simply the difference in the average excess returns between two groups of firms characterized by different degrees of dependence on public investment. The other one is more elaborate and is based on the "Target Rotation" approach in the factor analysis.

    Introduction

    This paper is a sequel to Shioji and Morita (2015). In that paper, we sought to overcome a common difficulty faced by researchers who estimate the macroeconomic effects of fiscal policies, known as the "fiscal foresight" problem. The recognition of the presence and importance of this issue has arguably been one of the most noteworthy developments in empirical studies of fiscal policy in recent years. As Ramey (2011) argues, government spending increases, especially major ones, are typically announced long before the actual spending is made. Forward-looking agents would start changing their behavior based on those expectations as soon as the news comes in. In such a circumstance, if an empirical macroeconomist uses only the conventional indicator of policy, namely the actual amount of spending, she/he is unlikely to capture the entire impact of the policy correctly. This is why we need to know when news about policy changes was perceived by the private sector, as well as how large the surprise was.

  • Price Rigidity at Near-Zero Inflation Rates: Evidence from Japan

    Abstract

    A notable characteristic of Japan’s deflation since the mid-1990s is the mild pace of price decline, with the CPI falling at an annual rate of only around 1 percent. Moreover, even though unemployment increased, prices hardly reacted, giving rise to a flattening of the Phillips curve. In this paper, we address why deflation was so mild and why the Phillips curve flattened, focusing on changes in price stickiness. Our first finding is that, for the majority of the 588 items constituting the CPI, making up about 50 percent of the CPI in terms of weight, the year-on-year rate of price change was near-zero, indicating the presence of very high price stickiness. This situation started during the onset of deflation in the second half of the 1990s and continued even after the CPI inflation rate turned positive in spring 2013. Second, we find that there is a negative correlation between trend inflation and the share of items whose rate of price change is near zero, which is consistent with Ball and Mankiw’s (1994) argument based on the menu cost model that the opportunity cost of leaving prices unchanged decreases as trend inflation approaches zero. This result suggests that the price stickiness observed over the last two decades arose endogenously as a result of the decline in inflation. Third and finally, a cross-country comparison of the price change distribution reveals that Japan differs significantly from other countries in that the mode of the distribution is very close to zero for Japan, while it is near 2 percent for other countries including the United States. Japan continues to be an “outlier” even if we look at the mode of the distribution conditional on the rate of inflation. This suggests that whereas in the United States and other countries the “default” is for firms to raise prices by about 2 percent each year, in Japan the default is that, as a result of prolonged deflation, firms keep prices unchanged.

    Introduction

    From the second half of the 1990s onward, Japan suffered a period of prolonged deflation, in which the consumer price index (CPI) declined as a trend. During this period, both the government and the Bank of Japan (BOJ) tried various policies to escape from deflation. For instance, from 1999 to 2000, the BOJ adopted a “zero interest rate policy” in which it lowered the policy interest rate to zero. This was followed by “quantitative easing” from 2001 until 2006. More recently, in January 2013, the BOJ adopted a “price stability target” with the aim of raising the annual rate of increase in the CPI to 2 percent. In April 2013, it announced that it was aiming to achieve the 2 percent inflation target within two years and, in order to achieve this, introduced Quantitative and Qualitative Easing (QQE), which seeks to double the amount of base money within two years. Further, in February 2016, the BOJ introduced a “negative interest rate policy,” in which the BOJ applies a negative interest rate of minus 0.1 percent to current accounts held by private banks at the BOJ, followed, in September 2016, by the introduction of “yield curve control,” in which the BOJ conducts JGB operations so as to keep the 10-year JGB yield at zero percent. See Table 1 for an overview of recent policy decisions made by the BOJ.

  • The Optimum Quantity of Debt for Japan

    Abstract

    Japan's net government debt reached 130% of GDP in 2013. The present paper analyzes the welfare implications of this large debt for Japan. We use an Aiyagari (1994)-style heterogeneous-agent, incomplete-market model with idiosyncratic wage risk and endogenous labor supply. We find that, under the utilitarian welfare measure, the optimal government debt for Japan is -50% of GDP and the current level of debt incurs a welfare cost equal to 0.22% of consumption. Decomposing the welfare cost by the Flodén (2001) method reveals substantial welfare effects arising from changes in the level of consumption, inequality, and uncertainty. The level and inequality costs are 0.38% and 0.52%, respectively, whereas the uncertainty benefit is 0.68%. Adjusting consumption taxes instead of factor income taxes to balance the government budget substantially reduces the overall welfare cost.
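
    As a reading aid, the reported figures are consistent with an approximately additive decomposition of the overall welfare cost (the Flodén (2001) decomposition is multiplicative in consumption-equivalent terms, but the approximation is close for numbers of this size):

    \[
    \underbrace{0.22\%}_{\text{overall cost}} \;\approx\;
    \underbrace{0.38\%}_{\text{level cost}} \;+\;
    \underbrace{0.52\%}_{\text{inequality cost}} \;-\;
    \underbrace{0.68\%}_{\text{uncertainty benefit}} .
    \]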

    Introduction

    Japan's net government debt reached 130% of GDP in 2013, the highest debt-to-GDP ratio among developed countries. A large number of papers, including Hoshi and Ito (2014), Imrohoroğlu, Kitao, and Yamada (2016), and Hansen and Imrohoroğlu (2016), analyze Japan's debt problem. However, the welfare effect of the large government debt remains less well understood. Flodén (2001) finds that the optimal government debt for the United States is 150% of GDP. Is the optimal debt for Japan similar, and hence should Japan accumulate more debt? Or does the current debt exceed the optimal level? How large is the welfare benefit of having the optimal level of debt instead of the current debt?

  • The Effectiveness of Consumption Taxes and Transfers as Insurance against Idiosyncratic Risk

    Abstract

    We quantitatively evaluate the effectiveness of a consumption tax and transfer program as insurance against idiosyncratic earnings risk. Our framework is a heterogeneous-agent, incomplete-market model with idiosyncratic wage risk and indivisible labor. The model is calibrated to the U.S. economy. We find a weak insurance effect of the transfer program. Extending the transfer system from the current scale raises consumption uncertainty, which increases aggregate savings and reduces the interest rate. Furthermore, consumption inequality shows a small decrease.

    Introduction

    Households face substantial idiosyncratic labor income risk, and private insurance against such risk is far from perfect. The presence of uninsurable idiosyncratic earnings risk implies a potential role of government policies. The present paper examines the effectiveness of a consumption tax and transfer system as insurance against idiosyncratic earnings risk in an Aiyagari (1994)-style model with endogenous labor supply. We find that the transfer program is ineffective in terms of risk sharing. Expanding the current transfer scheme in the United States increases consumption volatility and precautionary savings. Thus, aggregate savings increase and the interest rate falls.

  • Product Turnover and Deflation: Evidence from Japan

    Abstract

    In this study, we evaluate the effects of product turnover on a welfare-based cost-of-living index. We first present several facts about price and quantity changes over the product cycle employing scanner data for Japan for the years 1988-2013, which cover the deflationary period that started in the mid-1990s. We then develop a new method to decompose price changes at the time of product turnover into those due to the quality effect and those due to the fashion effect (i.e., the higher demand for products that are new). Our main findings are as follows: (i) the price and quantity of a new product tend to be higher than those of its predecessor at its exit from the market, implying that Japanese firms use new products as an opportunity to take back the price decline that occurred during the life of its predecessor under deflation; (ii) a considerable fashion effect exists, while the quality effect is slightly declining; and (iii) the discrepancy between the cost-of-living index estimated based on our methodology and the price index constructed only from a matched sample is not large. Our study provides a plausible story to explain why Japan’s deflation during the lost decades was mild.

    Introduction

    Central banks need to have a reliable measure of inflation when making decisions on monetary policy. Often, it is the consumer price index (CPI) they refer to when pursuing an inflation targeting policy. However, if the CPI entails severe measurement bias, monetary policy aiming to stabilize the CPI inflation rate may well bring about detrimental effects on the economy. One obstacle lies in frequent product turnover; for example, supermarkets in Japan sell hundreds of thousands of products, with new products continuously being created and old ones being discontinued. The CPI does not collect the prices of all these products. Moreover, new products do not necessarily have the same characteristics as their predecessors, so that their prices may not be comparable.

  • Payment Instruments and Collateral in the Interbank Payment System

    Abstract

    This paper presents a three-period model to analyze the need for bank reserves in the presence of other liquid assets like Treasury securities. If a pair of banks settle bank transfers without bank reserves, they must prepare extra liquidity for interbank payments, because depositors’ demand for timely payments causes a hold-up problem in the bilateral settlement of bank transfers. In light of this result, the interbank payment system provided by the central bank can be regarded as an implicit interbank settlement contract to save liquidity. The central bank is necessary for this contract as the custodian of collateral. Bank reserves can be characterized as the balances of liquid collateral submitted by banks to participate in this contract. This result explains the rate-of-return dominance puzzle and the need for substitution between bank reserves and other liquid assets simultaneously. The optimal contract is the floor system, not only because it pays interest on bank reserves, but also because it eliminates the over-the-counter interbank money market. The model indicates it is efficient if all banks share the same custodian of collateral, which justifies the current practice that a public institution provides the interbank payment system.

    Introduction

    Base money consists of currency and bank reserves. Banks hold bank reserves not merely to satisfy a reserve requirement, but also to make interbank payments to settle bank transfers between their depositors. In fact, the daily transfer of bank reserves in a country tends to be as large as a sizable fraction of annual GDP. Also, several countries have abandoned a reserve requirement. Banks in these countries still use bank reserves to settle bank transfers.

  • Who buys what, where: Reconstruction of the international trade flows by commodity and industry

    Abstract

    We developed a model to reconstruct the international trade network by considering both commodities and industry sectors in order to study the effects of reduced trade costs. First, we estimated trade costs so as to reproduce the WIOD and NBER-UN data. Using these costs, we then estimated the trade costs of sector-specific trade by type of commodity. We successfully reconstructed sector-specific trade for each type of commodity by maximizing the configuration entropy with the estimated costs. In WIOD, trade is conducted actively between the same industry sectors, whereas in NBER-UN, trade is conducted actively between neighboring countries. This seems like a contradiction. We therefore conducted community analysis for the reconstructed sector-specific trade network by type of commodity. The community analysis showed that products are actively traded among the same industry sectors in neighboring countries. Therefore, the observed features of the community structure for WIOD and NBER-UN are complementary.
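
    For intuition, an entropy-maximizing reconstruction subject to marginal and cost constraints can be written in the standard Wilson-type doubly constrained form below; the notation is generic, and the authors' actual constraints and cost estimates may differ:

    \[
    \max_{\{T_{ij}\}} \; -\sum_{i,j} T_{ij}\ln T_{ij}
    \quad \text{s.t.} \quad
    \sum_{j} T_{ij} = O_i, \qquad
    \sum_{i} T_{ij} = D_j, \qquad
    \sum_{i,j} c_{ij} T_{ij} = C,
    \]

    whose solution takes the gravity-model form \(T_{ij} = A_i B_j O_i D_j e^{-\beta c_{ij}}\), where \(c_{ij}\) is the estimated trade cost between origin \(i\) and destination \(j\), and \(A_i\), \(B_j\) are balancing factors that enforce the marginal constraints.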

    Introduction

    In the era of economic globalization, most national economies are linked by international trade, which consequently forms a complex global economic network. It is believed that greater economic growth can be achieved through free trade based on the establishment of Free Trade Agreements (FTAs) and Economic Partnership Agreements (EPAs). However, there are limits to the resolution of the currently available trade data. For instance, NBER-UN records trade amounts between pairs of countries without industry sector information for each type of commodity [1], and the World Input-Output Database (WIOD) records sector-specific trade amounts without commodity information [2]. This limited resolution makes it difficult to analyze community structures in detail and to systematically assess the effects of reduced trade tariffs and trade barriers.

  • Power laws in market capitalization during the Dot-com and Shanghai bubble periods

    Abstract

    The distributions of market capitalization across stocks listed on the NASDAQ and Shanghai stock exchanges have power law tails. The power law exponents associated with these distributions fluctuate around one, but show a substantial decline during the dot-com bubble in 1997-2000 and the Shanghai bubble in 2007. In this paper, we show that the observed decline in the power law exponents is closely related to the deviation of the market values of stocks from their fundamental values. Specifically, we regress the market capitalization of individual stocks on financial variables, such as sales, profits, and asset sizes, using the entire sample period (1990 to 2015) in order to identify variables with substantial contributions to fluctuations in fundamentals. Based on the regression results for stocks listed on the NASDAQ, we argue that the fundamental value of a company is well captured by the value of its net assets, and therefore the price-to-book ratio (PBR) is a good measure of the deviation from fundamentals. We show that the PBR distribution across stocks listed on the NASDAQ has a much heavier upper tail in 1997 than in other years, suggesting that stock prices deviate from fundamentals for a limited number of stocks constituting the tail part of the PBR distribution. However, we fail to obtain a similar result for Shanghai stocks.
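
    For year-by-year tracking of a tail exponent of this kind, a simple rank-size (Zipf) regression over the largest observations is one common estimator; the sketch below is only illustrative and is not necessarily the estimation method used in the paper.

    import numpy as np

    def tail_exponent(values, top_n=500):
        """Estimate a power-law tail exponent from a cross section (e.g., market capitalization)
        by regressing log rank on log size over the top_n largest observations.
        A slope of about -1 corresponds to a power-law exponent of about one."""
        x = np.sort(np.asarray(values, dtype=float))[::-1][:top_n]  # largest observations first
        ranks = np.arange(1, len(x) + 1)
        slope, _ = np.polyfit(np.log(x), np.log(ranks), 1)          # regress log rank on log size
        return -slope

    # Applying this year by year to NASDAQ market capitalizations would trace the decline
    # in the exponent during the 1997-2000 dot-com episode described above.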

    Introduction

    Since B. Mandelbrot identified the fractal structure of price fluctuations in asset markets in 1963 [1], statistical physicists have been investigating the economic mechanism through which a fractal structure emerges. Power laws are an important characteristic of fractal structure. For example, some studies found that the size distribution of asset price fluctuations follows a power law [2,3]. It has also been shown that the firm size distribution (e.g., the distribution of sales across firms) follows a power law [4–8]. The power law exponent associated with firm size distributions has been close to one over the last 30 years in many countries [9, 10]. The situation in which the exponent is equal to one is special in that it is the critical point between the oligopolistic phase and the pseudo-equal phase [11]. If the power law exponent is less than one, a finite number of top firms occupies a dominant share of the market even if there is an infinite number of firms.

  • Puzzles in the Tokyo Fixing in the Forex Market: Order Imbalances and Bank Pricing

    Abstract

    “Fixing” in the foreign exchange market, at 10am in Tokyo and at 4pm in London, is a market practice that determines the bid-ask midpoint exchange rate at a scheduled time of the day. The fixing exchange rate is then applied to the settlement of foreign exchange transactions between banks and retail customers, including broker dealers, institutional investors, insurance companies, exporters, and importers, with varying bid-ask spreads. The findings for the Tokyo fixing are summarized as follows. (1) Price spikes are more frequent than in the London fixing. (2) Customer orders are biased toward buying foreign currencies, and this is predictable. (3) Trading volumes and liquidity are concentrated in the USD/JPY. (4) Before 2008, the fixing price set by banks was biased upward and higher than the highest transaction price during the fixing time window; the banks were earning monopolistic profits, but this gap disappeared after 2008. (5) The fixing price is still above the average transaction price in the fixing window, suggesting that banks make profits, but this can be understood considering the risk of maintaining the fix for the rest of the business day. (6) Calendar effects also matter for the determination of the fixing rate and the price fluctuation around the fixing.

    Introduction

    “Fixing” in the foreign exchange market is a market practice that determines the bid-ask mid-point exchange rate around a pre-announced time of the day. The fixing exchange rate is then applied to the settlement of foreign exchange transactions between banks and retail customers including broker dealers, institutional investors, insurance companies, exporters and importers, with varying bid-ask spreads.

  • The gradual evolution of buyer-seller networks and their role in aggregate fluctuations

    Abstract

    Buyer–seller relationships among firms can be regarded as a longitudinal network in which the connectivity pattern evolves as each firm receives productivity shocks. Based on a data set describing the evolution of buyer–seller links among 55,608 firms over a decade, and using structural equation modeling, we find some evidence that interfirm networks evolve to reflect firms’ local decisions to mitigate adverse effects from neighboring firms through interfirm linkages, while enjoying positive effects from them. As a result, link renewal tends to have a positive impact on the growth rates of firms. We also investigate the role of networks in aggregate fluctuations.

    Introduction

    The interfirm buyer–seller network is important from both the macroeconomic and the microeconomic perspectives. From the macroeconomic perspective, this network represents a form of interconnectedness in an economy that allows firm-level idiosyncratic shocks to propagate to other firms. Previous studies have suggested that this propagation mechanism interferes with the averaging-out process of shocks and possibly has an impact on macroeconomic variables such as aggregate fluctuations (Acemoglu, Ozdaglar and Tahbaz-Salehi (2013), Acemoglu et al. (2012), Carvalho (2014), Carvalho (2007), Shea (2002), Foerster, Sarte and Watson (2011) and Malysheva and Sarte (2011)). From the microeconomic perspective, a network at a particular point in time is the result of each firm’s link renewal decisions, made in order to avoid (or share) negative (or positive) shocks with its neighboring firms. These two views of a network are related by the fact that both concern the propagation of shocks. The former stresses that idiosyncratic shocks propagate through a static network, while the latter provides a more dynamic view in which firms can renew their link structure in order to share or avoid shocks. What is not clear is how the latter view affects the former. Does link renewal increase aggregate fluctuations because firms form new links that convey positive shocks, does it decrease aggregate fluctuations because firms sever links that convey negative shocks, or does it have some other effect?

  • A Double Counting Problem in the Theory of Rational Bubbles

    Abstract

    In a standard overlapping generations model, the unique equilibrium price of a Lucas’ tree can be decomposed into the present discounted value of dividends and the stationary monetary equilibrium price of fiat money, the latter of which is a rational bubble. Thus, the standard interpretation of a rational bubble as the speculative component in an asset price double-counts the value of pure liquidity that is already part of the fundamental price of an interest-bearing asset.
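
    In generic asset-pricing notation, the decomposition referred to above can be written as

    \[
    p_t \;=\;
    \underbrace{\sum_{k=1}^{\infty} \frac{d_{t+k}}{\prod_{j=1}^{k}\left(1+r_{t+j}\right)}}_{\text{fundamental: present discounted value of dividends}}
    \;+\;
    \underbrace{b_t}_{\text{bubble component}},
    \qquad
    b_t = \frac{b_{t+1}}{1+r_{t+1}},
    \]

    where the bubble term satisfies the usual no-arbitrage condition. The abstract's point, restated, is that the liquidity value corresponding to \(b_t\) is already embedded in the fundamental price of an interest-bearing asset, so treating it as an additional speculative component double-counts it.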

    Introduction

    A rational bubble is usually modeled as an intrinsically useless asset. As shown by Tirole (1985), it attains a positive market value if it is expected to be exchangeable for goods in the future. It becomes worthless if it is expected to be worthless in the future, given that it has no intrinsic use. This property of self-fulfilling multiple equilibria has been used to explain a large boom-bust cycle in an asset price, as a stochastic transition between the two equilibria can generate a boom-bust cycle without any associated change in asset fundamentals.

  • Consumption Taxes and Divisibility of Labor under Incomplete Markets

    Abstract

    We analyze lump-sum transfers financed through consumption taxes in a heterogeneous-agent model with uninsured idiosyncratic wage risk and endogenous labor supply. The model is calibrated to the U.S. economy. We find that consumption inequality and uncertainty decrease with transfers much more substantially under divisible than indivisible labor. Increasing transfers by raising the consumption tax rate from 5% to 35% decreases the consumption Gini by 0.04 under divisible labor, whereas it has almost no effect on the consumption Gini under indivisible labor. The divisibility of labor also affects the relationship among consumption-tax financed transfers, aggregate saving, and the wealth distribution.

    Introduction

    What is the effect of government transfers on inequality and risk sharing when households face labor income uncertainty? Previous studies, such as Flodén (2001) and Alonso-Ortiz and Rogerson (2010), find that increasing lump-sum transfers financed through labor and/or capital income taxes substantially decreases consumption inequality and uncertainty in a general equilibrium model with uninsured earnings risk. However, little is known about the impact of increasing consumption-tax financed transfers. Does it help people smooth consumption and does it reduce inequality? What is the impact on efficiency? The present paper analyzes these questions quantitatively.

  • The Mechanism of Inflation Expectation Formation among Consumers

    Abstract

    How do we form our expectations of inflation? Because inflation expectations greatly influence the economy, researchers have long considered this question. Using a survey with randomized experiments among 15,000 consumers, we investigate the mechanism of inflation expectation formation. Learning theory predicts that once people obtain new information on future inflation, they update their expectations, which then become a weighted average of their prior belief and the new information. We confirm that the weight on the prior belief is a decreasing function of the degree of uncertainty. Our results also show that information from the monetary authority affects consumers to a greater extent when expectations are updated: with such information, consumers change their inflation expectations by 32% from the average. This finding supports improvements to monetary policy publicity.
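
    The updating rule summarized above is the standard precision-weighted (Bayesian) form, written here in generic notation that is not taken from the paper:

    \[
    E^{\text{post}}[\pi] \;=\; \lambda\, E^{\text{prior}}[\pi] + (1-\lambda)\,\pi^{\text{signal}},
    \qquad
    \lambda = \frac{1/\sigma^2_{\text{prior}}}{1/\sigma^2_{\text{prior}} + 1/\sigma^2_{\text{signal}}},
    \]

    so the weight \(\lambda\) on the prior belief falls as prior uncertainty \(\sigma^2_{\text{prior}}\) rises, which is the relationship the survey experiments confirm.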

    Introduction

    Expectations vis-à-vis future inflation are very important for economic decision-making. People contemplate the future on many occasions, including when they consider how much to save, or whether to postpone the purchase of a house or not. Thus, economists have been discussing what inflation expectations are, how they influence the overall dynamics of the economy, and how they are formed. Occasionally, such expectations become central to policy debates because the effectiveness of some types of monetary policies crucially depends upon how these are formed (Blinder, 2000; McCallum, 1984; Sargent, 1982). In spite of their long history, inflation expectations have also been renowned for being difficult to measure (Mankiw et al., 2004; Schwarz, 1999).

  • Parameter Bias in an Estimated DSGE Model: Does Nonlinearity Matter?

    Abstract

    How can parameter estimates be biased in a dynamic stochastic general equilibrium model that omits nonlinearity in the economy? To answer this question, we simulate data from a fully nonlinear New Keynesian model with the zero lower bound constraint and estimate a linearized version of the model. Monte Carlo experiments show that significant biases are detected in the estimates of monetary policy parameters and the steady-state inflation and real interest rates. These biases arise mainly from neglecting the zero lower bound constraint rather than linearizing equilibrium conditions. With fixed parameters, the variance-covariance matrix and impulse response functions of observed variables implied by the linearized model substantially differ from those implied by its nonlinear counterpart. However, we find that the biased estimates of parameters in the estimated linear model can make most of the differences small.

    Introduction

    Following the development of Bayesian estimation and evaluation techniques, many economists have estimated dynamic stochastic general equilibrium (DSGE) models using macroeconomic time series. In particular, estimated New Keynesian models, which feature nominal rigidities and monetary policy rules, have been extensively used by policy institutions such as central banks. Most of the estimated DSGE models are linearized around a steady state because a linear state-space representation along with the assumption of normality of exogenous shocks enables us to efficiently evaluate likelihood using the Kalman filter. However, Fernández-Villaverde and Rubio-Ramírez (2005) and Fernández-Villaverde, Rubio-Ramírez, and Santos (2006) demonstrate that the level of likelihood and parameter estimates based on a linearized model can be significantly different from those based on its original nonlinear model. Moreover, in the context of New Keynesian models, Basu and Bundick (2012), Braun, Körber, and Waki (2012), Fernández-Villaverde, Gordon, Guerrón-Quintana, and Rubio-Ramírez (2015), Gavin, Keen, Richter, and Throckmorton (2015), Gust, López-Salido, and Smith (2012), Nakata (2013a, 2013b), and Ngo (2014) emphasize the importance of considering nonlinearity in assessing the quantitative implications of the models when the zero lower bound (ZLB) constraint on the nominal interest rate is taken into account.

  • Strategic Central Bank Communication: Discourse and Game-Theoretic Analyses of the Bank of Japan’s Monthly Report

    Abstract

    We conduct a discourse analysis of the Bank of Japan’s Monthly Report and examine its characteristics in relation to business cycles. We find that the difference between the number of positive and negative expressions in the reports leads the leading index of the economy by approximately three months, which suggests that the central bank’s reports have some superior information about the state of the economy. Moreover, ambiguous expressions tend to appear more frequently with negative expressions. Using a simple persuasion game, we argue that the use of ambiguity in communication by the central bank can be seen as strategic information revelation when the central bank has an incentive to bias the reports (and hence beliefs in the market) upwards.
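
    A minimal sketch of the expression-counting step described above is given below. The word lists and tokenization are hypothetical stand-ins; the paper's dictionaries are built specifically for the Bank of Japan's Monthly Report.

    # Hypothetical positive/negative expression lists (illustrative only).
    POSITIVE = {"increase", "improve", "recover", "expand", "pick up"}
    NEGATIVE = {"decrease", "deteriorate", "weaken", "decline", "slow down"}

    def net_tone(report_text):
        """Number of positive expressions minus number of negative expressions in one report."""
        text = report_text.lower()
        pos = sum(text.count(w) for w in POSITIVE)
        neg = sum(text.count(w) for w in NEGATIVE)
        return pos - neg

    # Computed report by report, this net-tone series is the kind of indicator that the paper
    # compares (with a roughly three-month lead) to the leading index of the economy.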

    Introduction

    Central banks not only implement monetary policy but also provide a significant amount of information for the market (Blinder [2004], Eijffinger and Geraats [2006]). Indeed, most publications of central banks are not solely about monetary policy but provide data and analyses on the state of the economy. It has been widely recognized that central banks use various communication channels to influence market expectations so as to enhance the effectiveness of their monetary policy. Meanwhile, it is not readily obvious whether central banks reveal all information they have exactly as it stands. In particular, although central banks cannot make untruthful statements owing to accountability and fiduciary requirements, they may communicate strategically and can be selective about the types of information they disclose. This concern takes on special importance when central banks’ objectives (e.g., keeping inflation/deflation under control and achieving maximum employment) may not be aligned completely with those of market participants, and possibly, governments.

  • Liquidity Trap and Optimal Monetary Policy Revisited

    Abstract

    This paper investigates history-dependent easing, the conventional wisdom of optimal monetary policy in a liquidity trap. We show that, in an economy where the rate of inflation exhibits intrinsic persistence, monetary tightening comes earlier as inflation becomes more persistent. We refer to this property as early tightening, and in the case of a higher degree of inflation persistence, the central bank implements front-loaded tightening so that it terminates the zero interest rate policy even before the natural rate of interest turns positive. As a prominent feature in a liquidity trap, forward guidance that smooths the change in inflation rates contributes to an early termination of the zero interest rate policy.

    Introduction

    The theory of monetary policy has been developed since the 1990s based on a new Keynesian model, as represented by Clarida et al. (1999) and Woodford (2003). In particular, Woodford (2003) finds history dependence to be a general property of optimal monetary policy. The optimal monetary policy rule explicitly includes lagged endogenous variables, and current monetary policy reflects the past economic environment.

  • Transmission Channels and Welfare Implications of Unconventional Monetary Easing Policy in Japan

    Abstract

    This paper examines the effects of the Quantitative and Qualitative Monetary Easing Policy (QQE, 2013-present) of the Bank of Japan (BOJ) by transmission channel, in comparison with the Comprehensive Monetary Easing Policy (CE) and the subsequent monetary easing policies (2010-2012), based on an event study using financial market data. For the QQE, conducted under normal market conditions, depreciation of the foreign exchange rate through the portfolio balance channel functions quite strongly, while for the CE, the signaling channel through commitment and the credit easing channel in dysfunctional markets are at work. The direct inflation expectation channel is weak for both the QQE and the CE, although the QQE has adopted various ways to exert a direct and strong influence on inflation expectations. It can be conjectured that the gradual rise in inflation expectations comes mainly from other channels, such as the depreciation of the yen. The most crucial characteristic of the QQE is to maximize the potential effects of easing policy by explicitly doubling, and later tripling, the purchased amount of JGBs and the monetary base proportionally. The amount of JGB purchases by the BOJ surpasses the issuance amount of JGBs, thereby reducing the outstanding amount of JGBs in the markets. A shortage of safe assets would increase the convenience yield, which in theory would reduce economic welfare without permeating the yields of other risky assets. This paper then examines the impact of the reduction in JGBs on yield spreads between corporate bonds and JGBs based on a money-in-utility-type model applied to JGBs, and finds that severe scarcity of JGBs as safe assets has so far been avoided, since Japan’s outstanding public debt is the largest in the world. Even so, the event study shows no clear evidence that the decline in the yield of long-maturity JGBs induced by the QQE permeates the yields of corporate bonds. Recently, demand for JGBs as collateral has been increasing from both domestic and foreign investors after the Global Financial Crisis, as well as from financial institutions that must comply with strengthened global liquidity regulations, while the Government of Japan is planning to consolidate the public debt. These recent changes, as well as market expectations regarding the future path of JGB supply, should also be taken into account when examining the scarcity of safe assets in the case of further massive purchases of JGBs.

    Introduction

    Facing the zero lower bound on the short-term interest rate, the Bank of Japan (BOJ) conducted the Quantitative Easing Monetary Policy (QEP) from 2001 to 2006, well in advance of other developed countries. At that time there were heated discussions about its effects (Ugai (2007)). After the Global Financial Crisis (GFC) in 2008, most of the major central banks have also faced the zero lower bound on interest rates (Graph 1), and the Federal Reserve pursued Large Scale Asset Purchases (LSAPs), followed by the Bank of England and the BOJ. Recently, although the Federal Reserve has terminated the LSAPs, the European Central Bank has newly adopted unconventional monetary policies, including an expanded asset purchase program. Although researchers have started to assess the effects and side effects of these unconventional monetary easing policies theoretically and empirically (IMF (2013)), there is no consensus about them so far.

  • Conservatism and Liquidity Traps

    Abstract

    In an economy with an occasionally binding zero lower bound (ZLB) constraint, the anticipation of future ZLB episodes creates a trade-off for discretionary central banks between inflation and output stabilization. As a consequence, inflation systematically falls below target even when the policy rate is above zero. Appointing Rogoff’s (1985) conservative central banker mitigates this deflationary bias away from the ZLB and enhances welfare by improving allocations both at and away from the ZLB.

    Introduction

    Over the past few decades, a growing number of central banks around the world have adopted inflation targeting as a policy framework. The performance of inflation targeting in practice has been widely considered a success. However, some economists and policymakers have voiced the need to re-examine central banks’ monetary policy frameworks in light of the liquidity trap conditions currently prevailing in many advanced economies. As shown in Eggertsson and Woodford (2003) among others, the zero lower bound (ZLB) on nominal interest rates severely limits the ability of inflation-targeting central banks to stabilize the economy absent an explicit commitment technology. Some argue that the ZLB is likely to bind more frequently and that liquidity trap episodes might hit the economy more severely in the future than they have in the past. Understanding the implications of the ZLB for the conduct of monetary policy is therefore of the utmost importance for economists and policymakers alike.

  • Short- and Long-Run Tradeoff Monetary Easing

    Abstract

    In this study, we illustrate a tradeoff between the short-run positive and long-run negative effects of monetary easing by using a dynamic stochastic general equilibrium model embedding endogenous growth with creative destruction and sticky prices due to menu costs. While a monetary easing shock increases the level of consumption because of price stickiness, it lowers the frequency of creative destruction (i.e., product substitution) because inflation reduces the reward for innovation via menu cost payments. The model calibrated to the U.S. economy suggests that the adverse effect dominates in the long run.

    Introduction

    The Great Recession of 2007–09 prompted many central banks to conduct unprecedented levels of monetary easing. Although this helped prevent an economic catastrophe such as the Great Depression, many economies have since experienced only slow and modest recoveries (i.e., they faced secular stagnation). Japan has fallen into an even longer stagnation, namely the lost decades. Firm entry and productive investment have been inactive since the burst of the asset market bubble around 1990, despite a series of monetary easing measures (Caballero et al. (2008)).

  • Money creation at the zero lower bound on interest rates in Japan (in Japanese)

    Abstract

    This study empirically analyzes the behavior of Japanese banks, in particular their lending behavior, under a binding lower bound on nominal interest rates. Ahead of other countries, Japan adopted policies that expanded the monetary base when there was no room left to lower interest rates. However, there is little evidence that the money stock increased in response. In other words, the credit creation process appears to have weakened, and the money multiplier (in the marginal sense) seems to have vanished, which is consistent with standard macroeconomic theory. Nevertheless, and although the point remains debatable, these policies appear to have had some effect on output, prices, and other variables. What is the source of these effects? This study pursues the possibility that the money multiplier has not in fact fallen all the way to zero, and that the massive supply of the monetary base has contributed, if only slightly, to an increase in the money stock. We construct panel data from the financial statements of individual banks and test whether banks that held more excess reserves at the end of the previous period tend to increase lending during the current period. The results show that, on average, such a tendency is observed under the zero interest rate. Further examination reveals heterogeneity across banks in this respect: the tendency for lending to respond to excess reserves is stronger for banks holding more non-performing loans, and it also differs across types of banks. These findings suggest that the recent rapid expansion of the monetary base may be flowing into the credit creation process through a subset of the banking sector rather than through the banking sector as a whole.

    Introduction

    In this study, we construct panel data from the financial statements of individual Japanese banks and conduct an empirical analysis of bank behavior, in particular lending behavior. Our main interest is whether, with nominal interest rates at their lower bound, banks that receive additional excess reserves show any tendency to channel even part of them into lending.
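
    The panel test described above can be sketched as a fixed-effects regression of loan growth on the lagged excess reserve ratio. The file, column names, and specification below are hypothetical; the paper's actual specification (including controls and the splits by non-performing loans and bank type) is richer.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per bank-year, built from individual bank financial statements (hypothetical columns):
    # bank_id, year, loan_growth, excess_reserve_ratio (excess reserves / total assets), npl_ratio.
    df = pd.read_csv("bank_panel.csv").sort_values(["bank_id", "year"])
    df["excess_reserve_ratio_lag"] = df.groupby("bank_id")["excess_reserve_ratio"].shift(1)
    df = df.dropna(subset=["excess_reserve_ratio_lag"])

    # Bank and year fixed effects via dummies; standard errors clustered by bank.
    model = smf.ols("loan_growth ~ excess_reserve_ratio_lag + C(bank_id) + C(year)", data=df)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["bank_id"]})

    # A positive coefficient indicates that banks holding more excess reserves at the end of the
    # previous period tend to increase lending in the current period.
    print(result.params["excess_reserve_ratio_lag"])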

  • Payment Instruments and Collateral in the Interbank Payment System

    Abstract

    This paper presents a three-period model to analyze why banks need bank reserves for interbank payments despite the availability of other liquid assets like Treasury securities. The model shows that banks need extra liquidity if they settle bank transfers without the central bank. In this case, each pair of banks sending and receiving bank transfers must determine the terms of settlement between them bilaterally in an over-the-counter transaction. As a result, a receiving bank can charge a sending bank a premium for the settlement of bank transfers, because depositors’ demand for timely payments causes a hold-up problem for the sending bank. In light of this result, the large-value payment system operated by the central bank can be regarded as an interbank settlement contract to save liquidity. A third party like the central bank must operate this system because a custodian of collateral is necessary to implement the contract. This result implies that bank reserves are not independent liquid assets, but rather the balances of collateral submitted by banks to participate in a liquidity-saving contract. The optimal contract is the floor system. Whether a private clearing house can replace the central bank depends on the range of collateral it can accept.

    Introduction

    Base money consists of cash and bank reserves. Banks hold bank reserves not merely to satisfy a reserve requirement, but also to make interbank payments to settle bank transfers between their depositors. In fact, the daily transfer of bank reserves in a country tends to be as large as a sizable fraction of annual GDP. Also, several countries have abandoned a reserve requirement. Banks in these countries still use bank reserves to settle bank transfers.

  • Novel and topical business news and their impact on stock market activities

    Abstract

    We propose an indicator to measure the degree to which a particular news article is novel, as well as an indicator to measure the degree to which a particular news item attracts attention from investors. The novelty measure is obtained by comparing the extent to which a particular news article is similar to earlier news articles, and an article is regarded as novel if there was no similar article before it. On the other hand, we say a news item receives a lot of attention and thus is highly topical if it is simultaneously reported by many news agencies and read by many investors who receive news from those agencies. The topicality measure for a news item is obtained by counting the number of news articles whose content is similar to an original news article but which are delivered by other news agencies. To check the performance of the indicators, we empirically examine how these indicators are correlated with intraday financial market indicators such as the number of transactions and price volatility. Specifically, we use a dataset consisting of over 90 million business news articles reported in English and a dataset consisting of minute-by-minute stock prices on the New York Stock Exchange and the NASDAQ Stock Market from 2003 to 2014, and show that stock prices and transaction volumes exhibited a significant response to a news article when it is novel and topical.
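
    A minimal sketch of the two indicators, assuming a TF-IDF bag-of-words representation of articles; the paper does not specify this particular similarity measure, and the threshold below is illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def novelty(new_article, past_articles):
        """One minus the maximum similarity to earlier articles: high when nothing similar preceded it."""
        vec = TfidfVectorizer().fit(past_articles + [new_article])
        sims = cosine_similarity(vec.transform([new_article]), vec.transform(past_articles))[0]
        return 1.0 - sims.max()

    def topicality(new_article, same_time_articles_other_agencies, threshold=0.6):
        """Number of articles delivered by other agencies whose content resembles the original article."""
        vec = TfidfVectorizer().fit(same_time_articles_other_agencies + [new_article])
        sims = cosine_similarity(vec.transform([new_article]),
                                 vec.transform(same_time_articles_other_agencies))[0]
        return int((sims >= threshold).sum())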

    Introduction

    Financial markets can be regarded as a non-equilibrium open system. Understanding how they work remains a great challenge to researchers in finance, economics, and statistical physics. Fluctuations in financial market prices are sometimes driven by endogenous forces and sometimes by exogenous forces. Business news is a typical example of exogenous forces. Casual observation indicates that stock prices respond to news articles reporting on new developments concerning companies’ circumstances. Market reactions to news have been extensively studied by researchers in several different fields [1]–[13], with some researchers attempting to construct models that capture static and/or dynamic responses to endogenous and exogenous shocks [14], [15]. The starting point for neoclassical financial economists typically is what they refer to as the “efficient market hypothesis,” which implies that stock prices respond at the very moment that news is delivered to market participants. A number of empirical studies have attempted to identify such an immediate price response to news but have found little evidence supporting the efficient market hypothesis [16]– [21].

  • Replicating Japan’s CPI Using Scanner Data

    Abstract

    We examine how precisely one can reproduce the CPI constructed based on price surveys using scanner data. Specifically, we closely follow the procedure adopted by the Statistics Bureau of Japan when we sample outlets, products, and prices from our scanner data and aggregate them to construct a scanner data-based price index. We show that the following holds the key to precise replication of the CPI. First, the scanner data-based index crucially depends on how often one replaces the products sampled. The scanner data index shows a substantial deviation from the actual CPI when one chooses a value for the parameter associated with product replacement such that replacement occurs frequently, but the deviation becomes much smaller if one picks a parameter value such that product replacement occurs only infrequently. Second, even when products are replaced only infrequently, the scanner data index differs significantly from the actual CPI in terms of volatility. The standard deviation of the scanner data-based monthly inflation rate is 1.54 percent, which is more than three times as large as that for actual CPI inflation. We decompose the difference in volatility between the two indexes into various factors, showing that it mainly stems from the difference in price rigidity for individual products. We propose a filtering technique to make individual prices in the scanner data stickier, thereby making scanner data-based inflation less volatile.

    Introduction

    Scanner data has started to be used by national statistical offices in a number of countries, including Australia, the Netherlands, Norway, Sweden, and Switzerland, for at least part of the production of their consumer price indexes (CPIs). Many other national statistical offices have already started preparing for the use of scanner data in constructing their CPIs. The purpose of this paper is to empirically examine whether price indexes based on scanner data are consistent with price indexes constructed using the traditional survey-based method.

  • Structure of global buyer-supplier networks and its implications for conflict minerals regulations

    Abstract

    We investigate the structure of global inter-firm linkages using a dataset that contains information on business partners for about 400,000 firms worldwide, including all the firms listed on the major stock exchanges. Among these firms, we examine three networks, based on customer-supplier, licensee-licensor, and strategic alliance relationships. First, we show that these networks all have scale-free topology and that the degree distribution for each follows a power law with an exponent of 1.5. The shortest path length is around six for all three networks. Second, we show through community structure analysis that firms tend to form communities with firms that belong to the same industry but have different home countries, indicating the globalization of firms’ production activities. Finally, we discuss what such production globalization implies for the proliferation of conflict minerals (i.e., minerals extracted from conflict zones and sold to firms in other countries to perpetuate fighting) through global buyer-supplier linkages. We show that a limited number of firms belonging to some specific industries and countries play an important role in the global proliferation of conflict minerals. Our numerical simulation shows that regulations on the purchases of conflict minerals by those firms would substantially reduce their worldwide use.

    Introduction

    Many complex physical systems can be modeled and better understood as complex networks [1, 2, 3]. Recent studies show that economic systems can also be regarded as complex networks in which economic agents, like consumers, firms, and governments, are closely connected [4, 5]. To understand the interaction among economic agents, we must uncover the structure of economic networks.

  • Price Stickiness and Trend Inflation: Evidence from Japan’s Deflation Period (in Japanese)

    Abstract

    In Japan, deflation, in which the consumer price index (CPI) declined as a trend, continued from 1995 until the spring of 2013. This deflation was characterized by its mildness, with prices falling at a rate of only about 1 percent per year. Moreover, prices hardly reacted even though unemployment rose, so that the Phillips curve flattened. To examine why deflation was so mild and why the Phillips curve flattened, this paper focuses on changes in price stickiness during the deflationary period. Our main findings are as follows. First, when we compute year-on-year rates of change for each of the 588 items constituting the CPI, items with rates near zero are the most numerous, accounting for about 50 percent of the CPI by weight; in this sense, price stickiness is high. This situation began during the deflationary period in the second half of the 1990s and has continued even after year-on-year CPI inflation turned positive in the spring of 2013. In the United States and other countries, by contrast, items with rates of increase near 2 percent are the most numerous. In those countries, the default for firms is to raise prices by about 2 percent each year, whereas in Japan, still under the influence of deflation, the default can be interpreted as keeping prices unchanged. Second, using monthly data from 1970 onward, we examine the relationship between the share of items whose year-on-year rate of change is near zero and year-on-year CPI inflation, and find that the share of near-zero items declines roughly linearly as CPI inflation rises (that is, as it moves away from zero in the positive direction). This can be interpreted as reflecting the larger opportunity cost of leaving prices unchanged when inflation is higher, and it is consistent with the menu cost hypothesis. In light of this result, the increase in price stickiness since the second half of the 1990s arose endogenously as CPI inflation declined, and prices should gradually regain flexibility if CPI inflation rises in the future. Third, simulation analysis shows that when deflationary pressure persists over a long period, an unusually large number of firms end up with actual prices above the levels they would ideally set; that is, there is a large group of "potential price cutters" (firms that would like to lower their prices if they could). By contrast, the group of "potential price raisers," whose actual prices are below their desired levels, is small. In this situation, the impact of monetary easing on prices is limited. In Japan, as a negative legacy of prolonged deflation, many "potential price cutters" still remain, and clearing this backlog will not be easy.

    Introduction

    In Japan, the consumer price index (CPI) has tended to decline since the mid-1990s, and deflation has persisted. Aiming to escape from deflation, the government and the Bank of Japan have implemented a number of policies. From 1999 to 2000, the BOJ adopted the "zero interest rate policy," lowering its policy rate, the call rate, to zero, followed by "quantitative easing" from 2001 to 2006. More recently, in January 2013, the BOJ started an inflation targeting policy with a target of 2 percent CPI inflation. Furthermore, in April 2013, it announced that it would achieve the 2 percent inflation target within two years and, to that end, launched "Quantitative and Qualitative Easing (QQE)," which aims to double the amount of base money within two years.

  • Optimal taxation and debt with uninsurable risks to human capital accumulation

    Abstract

    We consider an economy where individuals face uninsurable risks to their human capital accumulation, and analyze the optimal level of linear taxes on capital and labor income together with the optimal path of government debt. We show that in the presence of such risks it is beneficial to tax both labor and capital and to issue public debt. We also assess the quantitative importance of these findings, and show that the benefits of government debt and capital taxes both increase with the magnitude of idiosyncratic risks and the degree of relative risk aversion.

    Introduction

    Human capital is an important component of wealth at both the individual and aggregate level, and its role has been investigated in various fields of economics. In public finance, Jones, Manuelli and Rossi (1997) show that the zero-capital-tax result of Chamley (1986) and Judd (1985) can be strengthened if human capital accumulation is explicitly taken into account. Specifically, they demonstrate that, in a deterministic economy with human capital accumulation, in the long run not only capital but also labor income taxes should be zero; hence the government must accumulate wealth - that is, public debt must be negative - to finance its expenditure.

  • Time varying pass-through: Will the yen depreciation help Japan hit the inflation target?

    Abstract

    There is growing recognition that pushing up the public’s inflation expectations is key to a successful escape from chronic deflation. The question is how this can be achieved when the economy is stuck in a liquidity trap. This paper argues that, for Japan, the currency depreciation since late 2012 could turn out to be useful for ending the country’s long battle with falling prices. Prior studies have suggested that household expectations are greatly influenced by the prices of items that they purchase frequently. This paper demonstrates that the extent of exchange rate pass-through to those prices, once near-extinct, has come back strongly in recent years. Evidence based on VARs as well as TVP-VARs indicates that a 25% depreciation of the yen would produce a 2% increase in the prices of goods that households purchase regularly.

    Introduction

    This paper re-examines the issue of exchange rate pass-through to the Japanese CPI. Special attention is paid to the prices of items that households purchase frequently. This focus is partly motivated by recent statements by Bank of Japan officials regarding the transmission mechanism of monetary policy. They stress the importance of shifting the public’s inflation expectations upwards. A question that immediately comes to mind is how to achieve such a goal in an environment of zero interest rates, in which the monetary authority lacks a clear way to directly influence the course of the private sector.

  • A Reformulation of Normative Economics for Models with Endogenous Preferences

    Abstract

    This paper proposes a framework to balance considerations of welfarism and virtue ethics in the normative analysis of economic models with endogenous preferences. We introduce the moral evaluation function (MEF), which ranks alternatives based purely on virtue ethics, and define the social objective function (SOF), which combines the social welfare function (SWF) and the MEF. In a model of intergenerational altruism with endogenous time preference, we use numerical simulations to show that maximizing the SWF may not yield a socially desirable state if society values virtue. This problem can be resolved by using the SOF to evaluate alternative social states.
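
    One simple way to read "combines" is as a weighted sum; the functional form below is an illustrative assumption rather than the paper's definition:

    \[
    SOF(x) \;=\; (1-\theta)\,SWF(x) \;+\; \theta\,MEF(x), \qquad \theta \in [0,1],
    \]

    where \(\theta\) captures how much weight society places on virtue relative to welfare, and \(x\) denotes a social state.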

    Introduction

    Many theoretical and empirical studies have emphasized and identified various channels through which preferences might be endogenously determined in the economy. In the models studied in the literature on intergenerational cultural preference transmission and formation (see Bisin and Verdier (2011) for a survey), children’s preferences are affected by parents’ decisions. Habit formation models have been used in macroeconomics (see, e.g., Christiano, Eichenbaum and Evans (2005)) and finance (see, e.g., Constantinides (1990)). Addiction models have been used in microeconomics (e.g., Becker and Murphy (1988)). In the literature on behavioral economics, reference points are often endogenously determined (see, e.g., Kőszegi and Rabin (2006)).

  • Estimating Quality Adjusted Commercial Property Price Indexes Using Japanese REIT Data

    Abstract

    We propose a new method to estimate quality adjusted commercial property price indexes using real estate investment trust (REIT) data. Our method is based on the present value approach, but the way the denominator (i.e., the discount rate) and the numerator (i.e., cash flows from properties) are estimated differs from the traditional method. We run a hedonic regression to estimate the quality adjusted discount rate based on the share prices of REITs, which can be regarded as the stock market’s valuation of the set of properties owned by the REITs. As for the numerator, we use rental prices associated only with new rental contracts rather than those associated with all existing contracts. Using a dataset with prices and cash flows for about 400 commercial properties included in Japanese REITs for the period 2001 to 2013, we find that our price index signals turning points much earlier than an appraisal-based price index; specifically, our index peaks in the second quarter of 2007, while the appraisal-based price index exhibits a turnaround only in the third quarter of 2008. Our results suggest that the share prices of REITs provide useful information in constructing commercial property price indexes.
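
    For reference, the present value approach prices a property as the discounted stream of its cash flows; in the construction described above, the numerator uses rents on newly signed contracts and the quality-adjusted discount rate comes from a hedonic regression on REIT share prices. In generic notation (ours, not the paper's):

    \[
    P_{it} \;=\; \sum_{k=1}^{\infty} \frac{E_t\!\left[CF_{i,t+k}\right]}{\left(1+r_{it}\right)^{k}}
    \;\approx\; \frac{CF_{it}}{r_{it}}
    \quad \text{when cash flows are expected to stay roughly flat,}
    \]

    so the index moves when either newly contracted rents (the numerator) or the stock-market-implied discount rate (the denominator) moves.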

    Introduction

    Looking back at the history of economic crises, there are a considerable number of cases where a crisis was triggered by the collapse of a real estate price bubble. For example, it is widely accepted that the collapse of Japan’s land and stock price bubble in the early 1990s has played an important role in the subsequent economic stagnation, and in particular the banking crisis that started in the latter half of the 1990s. Similarly, the Nordic banking crisis in the early 1990s also occurred in tandem with a property bubble collapse, while the global financial crisis that began in the United States in 2008 and the European debt crisis were also triggered by the collapse of bubbles in the property and financial markets.

  • Relative Prices and Inflation Stabilisations

    Abstract

    When price adjustment is sluggish, inflation is costly in terms of welfare because it distorts various kinds of relative prices. Stabilising aggregate price inflation does not necessarily minimise these costs; stabilising a well-designed measure of core inflation minimises the cost of relative price fluctuations and thus the cost of inflation.

    Introduction

    In macroeconomic theories, the aggregate price level, often denoted as P, is defined as the monetary value of the minimum cost of attaining a reference utility level. Measures of the price level are called "cost-of-living" indexes. Measuring the price level has been an important topic in macroeconomics, because any fluctuation in the price level, that is, inflation, is regarded as affecting the well-being of households.

  • Constrained Inefficiency and Optimal Taxation with Uninsurable Risks

    Abstract

    When individuals’ labor and capital income are subject to uninsurable idiosyncratic risks, should capital and labor be taxed, and if so how? In a two-period general equilibrium model with production, we derive a decomposition formula of the welfare effects of these taxes into insurance and distribution effects. This allows us to determine how the sign of the optimal taxes on capital and labor depends on the nature of the shocks, the degree of heterogeneity in consumers’ income, and the way in which the tax revenue is used to provide lump-sum transfers to consumers. When shocks affect primarily labor income and heterogeneity is small, the optimal tax on capital is positive. However, in other cases a negative tax on capital is welfare improving. (JEL codes: D52, H21. Keywords: optimal linear taxes, incomplete markets, constrained efficiency)

    Introduction

    The main objective of this paper is to investigate the welfare effects of investment and labor income taxes in a two-period production economy with uninsurable background risk. More precisely, we examine whether the introduction of linear, distortionary taxes or subsidies on labor income and/or on the returns from savings is welfare improving, and what the optimal sign of such taxes then is. This amounts to studying the Ramsey problem in a general equilibrium setup. We depart, however, from most of the literature on the subject in that we consider an environment with no public expenditure, where there is no need to raise tax revenue. Nonetheless, optimal taxes are typically nonzero; even distortionary taxes can improve the allocation of risk in the face of incomplete markets. The question, then, is which production factor should be taxed: we want to identify the economic properties that determine the signs of the optimal taxes on production factors.

  • Rational Bubble on Interest-Bearing Assets

    Abstract

    This paper compares fiat money and a Lucas’ tree in an overlapping generations model. A Lucas’ tree with a positive dividend has a unique competitive equilibrium price. Moreover, the price converges to the monetary equilibrium value of fiat money as the dividend goes to zero in the limit. Thus, the value of liquidity represented by a rational bubble is part of the fundamental price of a standard interest-bearing asset. A Lucas’ tree has multiple equilibrium prices if the dividend vanishes permanently with some probability. This case may be applicable to public debt, but not to stock or urban real estate.

    Introduction

    Fiat money and a rational bubble have the same property as liquidity. As shown by Samuelson (1958) and Tirole (1985), they attain a positive market value if they are expected to be exchangeable for goods in the future. They likewise become worthless if they are expected to be worthless in the future, given that they have no intrinsic use. This property of self-fulfilling multiple equilibria has been used to explain large boom-bust cycles in asset prices, because a stochastic transition between the two equilibria can generate a boom-bust cycle by speculation without any change in asset fundamentals.

  • The Optimal Degree of Monetary-Discretion in a New Keynesian Model with Private Information

    Abstract

    This paper considers the optimal degree of discretion in monetary policy when the central bank conducts policy based on its private information about the state of the economy and is unable to commit. Society seeks to maximize social welfare by imposing restrictions on the central bank’s actions over time, and the central bank takes these restrictions and the New Keynesian Phillips curve as constraints. By solving a dynamic mechanism design problem we find that it is optimal to grant “constrained discretion” to the central bank by imposing both upper and lower bounds on permissible inflation, and that these bounds must be set in a history-dependent way. The optimal degree of discretion varies over time with the severity of the time-inconsistency problem, and, although no discretion is optimal when the time-inconsistency problem is very severe, our numerical experiment suggests that no discretion is a transient phenomenon, and that some discretion is granted eventually.

    Introduction

    How much flexibility should society allow a central bank in its conduct of monetary policy? At the center of the case for flexibility is the argument that central bankers have private information (Canzoneri, 1985), perhaps about the economy’s state or structure, or perhaps about the distributional costs of inflation arising through heterogeneous preferences (Sleet, 2004). If central banks have flexibility over policy decisions, then this gives them the ability to use for the public’s benefit any private information that they have. However, if central banks face a time-inconsistency problem (Kydland and Prescott, 1977), then it may be beneficial to limit their flexibility. Institutionally, many countries have balanced these competing concerns by delegating monetary policy to an independent central bank that is required to keep inflation outcomes low and stable, often within a stipulated range, but that is otherwise given the freedom to conduct policy without interference. Inflation targeting is often characterized as “constrained discretion” (Bernanke and Mishkin, 1997) precisely because it endeavors to combine flexibility with rule-like behavior.

  • The Effects of Financial and Real Shocks, Structural Vulnerability and Monetary Policy on Exchange Rates from the Perspective of Currency Crises Models

    Abstract

    Is there any factor that is not analyzed in the literature but is important for preventing currency crises? What kind of shock is important as a trigger of a currency crisis? Given the same shock, how does the impact of a currency crisis differ across countries depending on the degree of each country’s structural vulnerability? To answer these questions, this paper analyzes currency crises both theoretically and empirically. In the theoretical part, I argue that exports are an important factor in preventing currency crises that has not been frequently analyzed in the existing theoretical literature. Using the third generation model of currency crises, I derive a simple and intuitive formula that captures an economy’s structural vulnerability, characterized by the elasticity of exports and repayments for foreign currency denominated debt. I graphically show that the possibility of a currency crisis equilibrium depends on this structural vulnerability. In the empirical part, I use unbalanced panel data comprising 51 emerging countries from 1980 to 2011. The results obtained here are consistent with the predictions of the theoretical models. First, I found that monetary tightening by central banks can have a significant effect on exchange rates. Second, I found that both productivity shocks in the real sector and shocks to a country’s risk premium in the financial markets affect exchange rate dynamics, while productivity shocks appeared quantitatively more important during the Asian currency crisis. Finally, the structural vulnerability of the country plays a statistically significant role in propagating the effects of the shock.

    Introduction

    The literature on currency crises has analyzed causes and mechanisms of how the crises occur and what happens when countries experience the crises. Little theoretical literature has focused on factors that prevent currency crises other than policy responses. Is there any factor that is not analyzed in the literature but is important for preventing currency crises?

  • A New Look at Uncertainty Shocks: Imperfect Information and Misallocation

    Abstract

    Uncertainty faced by individual firms appears to be heterogeneous. In this paper, I construct new empirical measures of firm-level uncertainty using data from the I/B/E/S and Compustat. These new measures reveal persistent differences in the degree of uncertainty facing individual firms not reflected by existing measures. Consistent with existing measures, I find that the average level of uncertainty across firms is countercyclical, and that it rose sharply at the start of the Great Recession. I next develop a heterogeneous firm model with Bayesian learning and uncertainty shocks to study the aggregate implications of my new empirical findings. My model establishes a close link between the rise in firms’ uncertainty at the start of a recession and the slow pace of subsequent recovery. These results are obtained in an environment that embeds Jovanovic’s (1982) model of learning in a setting where each firm gradually learns about its own productivity, and each occasionally experiences a shock forcing it to start learning afresh. Firms differ in their information; more informed firms have lower posterior variances in beliefs. An uncertainty shock is a rise in the probability that any given firm will lose its information. When calibrated to reproduce the level and cyclicality of my leading measure of firm-level uncertainty, the model generates a prolonged recession followed by anemic recovery in response to an uncertainty shock. When confronted with a rise in firm-level uncertainty consistent with the advent of the Great Recession, it explains 79 percent of the observed decline in GDP and 89 percent of the fall in investment.
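
    The claim that more informed firms have lower posterior variances follows the usual normal-normal learning algebra; the sketch below is a generic statement of that mechanism, not the paper's calibration.

    ```latex
    % A firm observes n noisy signals s_i = \mu + \varepsilon_i, \varepsilon_i \sim N(0,\sigma_\varepsilon^2),
    % about its own productivity \mu, with prior \mu \sim N(m_0, \sigma_0^2).
    % The posterior variance after n signals is
    \sigma_n^2 = \left( \frac{1}{\sigma_0^2} + \frac{n}{\sigma_\varepsilon^2} \right)^{-1},
    % which is decreasing in n: firms with more accumulated information are less uncertain.
    % A shock that wipes out this information resets n to 0, pushing \sigma_n^2 back up to \sigma_0^2.
    ```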

    Introduction

    "Subjective uncertainty is about the "unknown unknowns". When, as today, the unknown unknowns dominate, and the economic environment is so complex as to appear nearly incomprehensible, the result is extreme prudence, [. . . ], on the part of investors, consumers and firms." Olivier Blanchard (2012)

  • Post-Crisis Slow Recovery and Monetary Policy

    Abstract

    In the aftermath of the recent financial crisis and subsequent recession, slow recoveries have been observed and slowdowns in total factor productivity (TFP) growth have been measured in many economies. This paper develops a model that can describe a slow recovery resulting from an adverse financial shock in the presence of an endogenous mechanism of TFP growth, and examines how monetary policy should react to the financial shock in terms of social welfare. It is shown that in the face of the financial shocks, a welfare-maximizing monetary policy rule features a strong response to output, and the welfare gain from output stabilization is much more substantial than in the model where TFP growth is exogenously given. Moreover, compared with the welfare-maximizing rule, a strict inflation or price-level targeting rule induces a sizable welfare loss because it has no response to output, whereas a nominal GDP growth or level targeting rule performs well, although it causes high interest-rate volatility. In the presence of the endogenous TFP growth mechanism, it is crucial to take into account a welfare loss from a permanent decline in consumption caused by a slowdown in TFP growth.

    Introduction

    In the aftermath of the recent financial crisis and subsequent recession, slow recoveries have been observed in many economies. GDP has not recovered to its pre-crisis growth trend in the U.S., while it has not returned to even its pre-crisis level in the Euro area. As indicated by recent studies, such as Cerra and Saxena (2008) and Reinhart and Rogoff (2009), financial crises tend to be followed by slow recoveries in which GDP scarcely returns to its pre-crisis growth trend, involving a considerable economic loss. Indeed, since the financial crisis in the 1990s, Japan’s GDP has never recovered to its pre-crisis growth trend, and Japan’s economy has experienced a massive loss in GDP. The post-crisis slow recoveries therefore cast doubt on the validity of the argument in the literature starting from Lucas (1987) that the welfare costs of business cycles are small enough that they do not justify stabilization policy. Thus our paper addresses the question of whether and to what extent monetary policy can ameliorate social welfare in the face of a severe recession that is caused by a financial factor and is followed by a slow recovery. In particular, in that situation, should monetary policy focus mainly on inflation stabilization and make no response to output, as advocated in the existing monetary policy literature including Schmitt-Grohé and Uribe (2006, 2007a, b)?

  • Multi-Belief Rational-Expectations Equilibria: Indeterminacy, Complexity and Sustained Deflation

    Abstract

    In this paper, we extend the concept of rational-expectations equilibrium from a traditional single-belief framework to a multi-belief one. In the traditional framework of single belief, agents are supposed to know the equilibrium price “correctly.” We relax this requirement in the framework of multiple beliefs. While agents do not have to know the equilibrium price exactly, their beliefs must be correct in the sense that the equilibrium price is always contained in the support of each probability distribution they think possible. We call this equilibrium concept a multi-belief rational-expectations equilibrium. We then show that such an equilibrium exists, that indeterminacy and complexity of equilibria can occur even when the degree of risk aversion is moderate and, in particular, that a decreasing price sequence can be an equilibrium. The last property is highlighted in a linear-utility example in which any decreasing price sequence is a multi-belief rational-expectations equilibrium, while the only possible single-belief rational-expectations equilibrium price sequences are those that are constant over time.

    Introduction

    This paper considers a pure-endowment nonstochastic overlapping-generations economy. In this framework, we extend the concept of rational-expectations equilibrium, or in other words perfect-foresight equilibrium in our setting, in which generations in the model are supposed to know the equilibrium price “correctly.” Thus, there is no surprise in this rational-expectations equilibrium. We relax this requirement to one under which, while generations do not know the equilibrium price exactly, they have a set of purely subjective probability distributions of possible prices. In addition, they must not be surprised by the realization of the equilibrium price. That is, generations’ multi-belief expectations must be “correct” in that the equilibrium price is always contained in the support of each probability distribution they think possible. We call this equilibrium concept multi-belief rational-expectations equilibrium. Furthermore, the realization of the price which clears the market never disappoints generations’ expectations since they assign a positive (but possibly less than unity) probability to the occurrence of that price. Thus, their expectations are “realized.” Importantly, the generations’ beliefs are endogenously determined as a part of the multi-belief rational-expectations equilibrium. This is similar to sequential equilibrium in an extensive-form game, where the probability distribution at each information set is endogenously determined (although a unique distribution is determined in a sequential equilibrium, whereas a set of distributions is determined in ours). Obviously, a single-belief rational-expectations equilibrium, in which generations’ expectations are singleton sets, is an ordinary rational-expectations equilibrium.

  • Reputation and Liquidity Traps

    Abstract

    Can the central bank credibly commit to keeping the nominal interest rate low for an extended period of time in the aftermath of a deep recession? By analyzing credible plans in a sticky-price economy with occasionally binding zero lower bound constraints, I find that the answer is yes if contractionary shocks hit the economy with sufficient frequency. In the best credible plan, if the central bank reneges on the promise of low policy rates, it will lose reputation and the private sector will not believe such promises in future recessions. When the shock hits the economy sufficiently frequently, the incentive to maintain reputation outweighs the short-run incentive to close consumption and inflation gaps, keeping the central bank on the originally announced path of low nominal interest rates.

    Introduction

    Statements about the period during which the short-term nominal interest rate is expected to remain near zero have been an important feature of recent monetary policy in the United States. The FOMC has stated that a highly accommodative stance of monetary policy will remain appropriate for a considerable time after the economic recovery strengthens. With the current policy rate at its effective lower bound, the expected path of short-term rates is a prominent determinant of long-term interest rates, which in turn affect the decisions of households and businesses. Thus, the statement expressing the FOMC’s intention to keep the policy rate low for a considerable period has likely done much to keep long-term nominal rates low, thereby stimulating economic activity.

  • Effects of Commodity Price Shocks on Inflation: A Cross-Country Analysis

    Abstract

    Since the 2000s, large fluctuations in non-energy commodity prices have raised concerns among policymakers about price stability. Using local projections, this paper investigates the effects of commodity price shocks on inflation. We estimate impulse responses of the consumer price indexes (CPIs) to commodity price shocks from a monthly panel consisting of 120 countries. Our analyses show that the effects of commodity price shocks on inflation are transitory. While the effect on the level of consumer prices varies across countries, the transitory effects on inflation are fairly robust, suggesting that policymakers may not need to pay special attention to the recent fluctuations in non-energy commodity prices. Employing smooth transition autoregressive models that use the past inflation rate as the transition variable, we also explore the possibility that the effect of commodity price shocks is influenced by the inflation regime. In this specification, commodity price shocks may not have transitory effects when a country is less developed and its currency is pegged to the U.S. dollar. However, the effect remains transitory in developed countries with exchange rate flexibility.
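
    For readers unfamiliar with the method, a local projection of country-level inflation on a commodity price shock typically takes the form sketched below; the exact shock definition, controls, and fixed effects used in the paper may differ.

    ```latex
    % Impulse response at horizon h, estimated from a panel of countries i and months t:
    \pi_{i,t+h} = \alpha_i^{(h)} + \beta^{(h)} \Delta p^{com}_{t} + \gamma^{(h)\prime} X_{i,t} + \varepsilon_{i,t+h},
    \qquad h = 0, 1, \dots, H
    % The sequence \{\beta^{(h)}\}_{h=0}^{H} traces out the response of inflation to the shock;
    % a "transitory" effect on inflation means \beta^{(h)} returns to zero as h grows.
    ```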

    Introduction

    Fluctuations in the non-energy commodity prices since the early 2000s have renewed policymakers’ attention to their effects on inflation. One of the issues for policymakers is how monetary policy should respond to the commodity price shocks. Among others, Yellen (2011) argues that commodity price shocks have only modest and transitory effects on U.S. inflation and that a recent surge of commodity prices does not “warrant any substantial shift in the stance of monetary policy.” On the other hand, European Central Bank (2008) and International Monetary Fund (2008) express some concerns about the upside risks to price stability due to rising inflation expectations triggered by commodity price shocks.

  • Payment Instruments and Collateral in the Interbank Payment System

    Abstract

    This paper presents a three-period model to analyze the endogenous need for bank reserves in the presence of Treasury securities. The model highlights the fact that the interbank market is an over-the-counter market. It characterizes the large value payment system operated by the central bank as an implicit contract, and shows that the contract requires less liquidity than decentralized settlement of bank transfers. In this contract, bank reserves are the balances of liquid collateral pledged by banks. The optimal contract is equivalent to the floor system. A private clearing house must commit to a time-inconsistent policy to provide the contract.

    Introduction

    Base money consists of cash and bank reserves. Banks do not hold bank reserves merely to satisfy a reserve requirement, but also to settle the transfer of deposit liabilities due to bank transfers. In fact, the daily transfer of bank reserves in a country typically amounts to a sizable fraction of annual GDP. Moreover, several countries have abandoned a reserve requirement, and banks in these countries still settle bank transfers through a transfer of bank reserves.

  • Time-Varying Employment Risks, Consumption Composition, and Fiscal Policy

    Abstract

    This study examines the response of aggregate consumption to active labor market policies that reduce unemployment. We develop a dynamic general equilibrium model with heterogeneous agents and uninsurable unemployment as well as policy regime shocks to quantify the consumption effects of the policy. By implementing numerical experiments using the model, we demonstrate a positive effect on aggregate consumption even when the policy serves as a pure transfer from the employed to the unemployed. The positive effect on consumption results from the reduced precautionary savings of households who indirectly benefit from the policy through a decreased unemployment hazard in the future.

    Introduction

    The impact of the recent recession on the labor market was so severe that the unemployment rate in the U.S. is still above normal and the duration of unemployment remains unprecedentedly long. There is growing interest in labor market policies, which have traditionally been used conservatively to help the unemployed, as effective macroeconomic policy instruments to combat such high unemployment (Nie and Struby (2011)). Two major questions presented in this literature are as follows: (i) What is the effect of the policy on the labor market performance of program participants? and (ii) What is the general equilibrium consequence of such policy? While there have been extensive microeconometric evaluations and discussions that have led to a consensus on the first question, the second question remains unanswered because evidence on the indirect effects of the programs on nonparticipants via general equilibrium adjustments is inconclusive. Heckman, Lalonde and Smith (1999) pointed out that the commonly used partial equilibrium approach implicitly assumes that the indirect effects are negligible and can therefore produce misleading estimates when the indirect effects are substantial. Moreover, Calmfors (1994) investigated several indirect effects, and concluded that microeconometric estimates merely provide partial knowledge about the entire policy impact of such programs.

  • Investment Horizon and Repo in the Over-the-Counter Market

    Abstract

    This paper presents a three-period model featuring a short-term investor in the over-the-counter bond market. A short-term investor stores cash because of a need to pay cash at some future date. If a short-term investor buys bonds, then a deadline for retrieving cash lowers the resale price of bonds for the investor through bilateral bargaining in the bond market. Ex-ante, this hold-up problem explains the use of a repo by a short-term investor, the existence of a haircut, and the vulnerability of a repo market to counterparty risk. This result holds without any uncertainty about bond returns or asymmetric information.

    Introduction

    A repo is one of the primary instruments in the money market. In this transaction, a short-term investor buys long-term bonds with a repurchase agreement in which the seller of the bonds promises to buy back the bonds at a later date. From the seller’s point of view, this transaction is akin to a secured loan with the underlying bonds as collateral. A question remains, however, regarding why a short-term investor needs a repurchase agreement when the investor can simply resell bonds to a third party in a spot market. The answer to this question is not immediately clear, as the bonds traded in the repo market include Treasury securities and agency mortgage-backed securities, for which a secondary market is available.

  • State Dependency in Price and Wage Setting

    Abstract

    The frequency of nominal wage adjustments varies with macroeconomic conditions. Existing macroeconomic analyses exclude such state dependency in wage setting, assuming exogenous timing and constant frequency of wage adjustments under time-dependent setting (e.g., Calvo- and Taylor-style setting). To investigate how state dependency in wage setting influences the transmission of monetary shocks, this paper develops a New Keynesian model in which the timing and frequency of wage changes are endogenously determined in the presence of fixed wage-setting costs. I find that state-dependent wage setting reduces the real impacts of monetary shocks compared to time-dependent setting. Further, with state dependency, monetary nonneutralities decrease with the elasticity of demand for differentiated labor, while the opposite holds under time-dependent setting. Next, this paper examines the empirical importance of state dependency in wage setting. To this end, I augment the model with habit formation, capital accumulation, capital adjustment costs, and variable capital utilization. When parameterized to reproduce the fluctuations in wage rigidity observed in the U.S. data, the state-dependent wage-setting model shows a response to monetary shocks quite similar to that of the time-dependent counterpart. The result suggests that for the U.S. economy, state dependency in wage setting is largely irrelevant to the monetary transmission.

    Introduction

    The transmission of monetary disturbances has been an important issue in macroeconomics. Recent studies using a dynamic general equilibrium model, such as Huang and Liu (2002) and Christiano, Eichenbaum, and Evans (2005), show that nominal wage stickiness is one of the key factors in accounting for the monetary transmission. However, existing studies establish the importance of sticky wages under Calvo (1983)- and Taylor (1980)-style setting. Such time-dependent setting models are extreme in that because of the exogenous timing and constant frequency of wage setting, wage adjustments occur only through changes in the intensive margin. In contrast, there is evidence that the extensive margin also matters, i.e., evidence for state dependency in wage setting. For example, reviewing empirical studies on micro-level wage adjustments, Taylor (1999) concludes that "the frequency of wage setting increases with the average rate of inflation." Further, according to Daly, Hobijn, and Lucking (2012) and Daly and Hobijn (2014), the fraction of wages not changed for a year rises in recessions in the U.S. How does the impact of monetary shocks differ under state- and time-dependent wage setting? Is state dependency in wage setting relevant for the U.S. monetary transmission?

  • Buyer-Supplier Networks and Aggregate Volatility

    Abstract

    In this paper, we investigate the structure and evolution of customer-supplier networks in Japan using a unique dataset that contains information on customer and supplier linkages for more than 500,000 incorporated non-financial firms for the five years from 2008 to 2012. We find, first, that the number of customer links is unequal across firms; the customer link distribution has a power-law tail with an exponent of unity (i.e., it follows Zipf’s law). We interpret this as implying that competition among firms to acquire new customers yields winners with a large number of customers, as well as losers with fewer customers. We also show that the shortest path length for any pair of firms is, on average, 4.3 links. Second, we find that link switching is relatively rare. Our estimates indicate that the survival rate per year for customer links is 92 percent and for supplier links 93 percent. Third and finally, we find that firm growth rates tend to be more highly correlated the closer two firms are to each other in a customer-supplier network (i.e., the smaller is the shortest path length for the two firms). This suggests that a non-negligible portion of fluctuations in firm growth stems from the propagation of microeconomic shocks – shocks affecting only a particular firm – through customer-supplier chains.
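
    The two network statistics emphasized here (a Zipf tail in the number of customers and an average shortest path of about four links) can be computed from an edge list of supplier-to-customer links along the lines of the sketch below. The code is illustrative only: the edge list, variable names, and the rank-size regression are not taken from the paper.

    ```python
    # Sketch (not the authors' code): tail exponent of the customer-link distribution and
    # average shortest path length, from a hypothetical edge list of supplier -> customer pairs.
    import numpy as np
    import networkx as nx

    edges = [("firmA", "firmB"), ("firmA", "firmC"), ("firmB", "firmC")]  # placeholder data
    G = nx.DiGraph(edges)

    # Out-degree = number of customers per firm (drop firms with no customer links).
    deg = np.array([d for _, d in G.out_degree() if d > 0])

    # Rank-size (Zipf) regression: log rank on log degree; a slope near -1 corresponds to
    # a power-law tail with an exponent of unity, i.e. Zipf's law.
    deg_sorted = np.sort(deg)[::-1]
    ranks = np.arange(1, len(deg_sorted) + 1)
    slope, _ = np.polyfit(np.log(deg_sorted), np.log(ranks), 1)
    print("Zipf regression slope:", slope)

    # Average shortest path length, ignoring link direction and restricting attention to
    # the largest connected component (the full network need not be connected).
    U = G.to_undirected()
    largest = U.subgraph(max(nx.connected_components(U), key=len))
    print("Average shortest path length:", nx.average_shortest_path_length(largest))
    ```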

    Introduction

    Firms in a modern economy tend to be closely interconnected, particularly in the manufacturing sector. Firms typically rely on the delivery of materials or intermediate products from their suppliers to produce their own products, which in turn are delivered to other downstream firms. Two recent episodes vividly illustrate just how closely firms are interconnected. The first is the recent earthquake in Japan. The earthquake and tsunami hit the Tohoku region, the north-eastern part of Japan, on March 11, 2011, resulting in significant human and physical damage to that region. However, the economic damage was not restricted to that region and spread in an unanticipated manner to other parts of Japan through the disruption of supply chains. For example, vehicle production by Japanese automakers, which are located far away from the affected areas, was stopped or slowed down due to a shortage of auto parts supplies from firms located in the affected areas. The shock even spread across borders, leading to a substantial decline in North American vehicle production. The second episode is the recent financial turmoil triggered by the subprime mortgage crisis in the United States. The adverse shock originally stemming from the so-called toxic assets on the balance sheets of U.S. financial institutions led to the failure of these institutions and was transmitted beyond entities that had direct business with the collapsed financial institutions to those that seemed to have no relationship with them, resulting in a storm that affected financial institutions around the world.

  • Payment Instruments and Collateral in the Interbank Payment System

    Abstract

    This paper analyzes the distinction between payment instruments and collateral in the interbank payment system. Given that the interbank market is an over-the-counter market, decentralized settlement of bank transfers is inefficient if bank loans are illiquid. In this case, a collateralized interbank settlement contract improves efficiency through a liquidity-saving effect. The large value payment system operated by the central bank can be regarded as an implicit implementation of such a contract. This result explains why banks swap Treasury securities for bank reserves even though both are liquid assets. This paper also discusses whether a private clearing house can implement the contract.

    Introduction

    Base money consists of cash and bank reserves. The latter type of money is used by banks when they settle bank transfers between their depositors. Typically, the daily transfer of bank reserves in a country is as large as a sizable fraction of annual GDP. This large figure implies that banks do not hold bank reserves merely to satisfy a reserve requirement, but also to settle the transfer of deposit liabilities due to daily bank transfers. In fact, several countries have abandoned a reserve requirement. Banks in these countries still settle bank transfers through a transfer of bank reserves.

  • Using Online Prices to Anticipate Official CPI Inflation

    Introduction

    The inflation rate a consumer faces should be, in principle, a relatively simple process to measure. Intuitively, a person has a typical consumption basket, and following the monthly average price of such a basket should give the inflation rate the individual experiences. This is, however, hard to implement in practice. Complications abound, such as the fact that consumption baskets are rarely stable through time, consumers substitute products when they face changes in relative prices, and many products are often discontinued and replaced with new versions or even entirely new product categories. At the aggregate level, the inflation rate is an even harder process to characterize. Baskets and consumption behavior differ markedly across consumers, and traditional data collection methods are expensive and very limited in the quantity of goods and the frequency with which they can be sampled.
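
    As a minimal illustration of the fixed-basket logic described above, and of why substitution and product turnover complicate it, the following sketch computes a month-on-month inflation rate from made-up prices and quantities; none of the numbers or item names come from the paper.

    ```python
    # Sketch with made-up numbers: inflation as the change in the cost of a fixed consumption basket.
    basket = {"rice_kg": 5.0, "milk_l": 8.0, "bus_trips": 20.0}              # fixed monthly quantities

    prices_month1 = {"rice_kg": 400.0, "milk_l": 180.0, "bus_trips": 210.0}  # prices in month 1
    prices_month2 = {"rice_kg": 410.0, "milk_l": 180.0, "bus_trips": 220.0}  # prices in month 2

    def basket_cost(prices, basket):
        """Total cost of buying the fixed basket at the given prices."""
        return sum(quantity * prices[item] for item, quantity in basket.items())

    inflation = basket_cost(prices_month2, basket) / basket_cost(prices_month1, basket) - 1.0
    print(f"Month-on-month fixed-basket inflation: {inflation:.2%}")

    # In practice the basket itself changes over time (substitution, discontinued products,
    # new goods), which is exactly what makes the aggregate inflation rate hard to measure.
    ```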

  • Optimal Macroprudential Policy

    Abstract

    This paper introduces financial market frictions into a standard New Keynesian model through search and matching in the credit market. Under such financial market frictions, a second-order approximation of social welfare includes a term involving credit, in addition to terms for inflation and consumption. As a consequence, the optimal monetary and macroprudential policies must contribute to both financial and price stability. This result holds for the various welfare approximations obtained under alternative macroprudential policy variables. The key features of optimal policies are as follows. The optimal monetary policy requires keeping the credit market countercyclical against the real economy. Commitment in monetary and macroprudential policy, rather than approximated welfare, justifies history dependence and pre-emptiveness. Appropriate combinations of macroprudential and monetary policy achieve perfect financial and price stability.
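
    One common way to read the statement about the second-order welfare approximation is as a quadratic loss function with an additional credit term; the form and weights below are purely illustrative and are not taken from the paper.

    ```latex
    % Stylized period loss (hats denote gaps; \lambda_c, \lambda_f > 0 are illustrative weights):
    L_t \approx \pi_t^2 + \lambda_c \hat{c}_t^{\,2} + \lambda_f \hat{f}_t^{\,2}
    % where \hat{f}_t is a credit (financial) gap.  With \lambda_f > 0, welfare-based policy
    % must weigh financial stability against price stability rather than target inflation alone.
    ```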

    Introduction

    The serious economic disruptions caused by financial crises reveal the critical roles played by financial markets in the U.S. and the Euro area. Acknowledging that the current policy framework can neither fully mitigate nor avoid financial crises, policymakers have begun to focus on two policy measures. The first is monetary policy, which aims to achieve, in addition to traditional policy goals, stability of the financial system. The second is a new policy tool, macroprudential policy, geared toward financial stability.

  • Working Less and Bargain Hunting More: Macro Implications of Sales during Japan’s Lost Decades

    Abstract

    Standard New Keynesian models have often neglected temporary sales. In this paper, we ask whether this treatment is appropriate. In the empirical part of the paper, we provide evidence using Japanese scanner data covering the last two decades that the frequency of sales was closely related with macroeconomic developments. Specifically, we find that the frequency of sales and hours worked move in opposite directions in response to technology shocks, producing a negative correlation between the two. We then construct a dynamic stochastic general equilibrium model that takes households’ decisions regarding their allocation of time for work, leisure, and bargain hunting into account. Using this model, we show that the rise in the frequency of sales, which is observed in the data, can be accounted for by the decline in hours worked during Japan’s lost decades. We also find that the real effect of monetary policy shocks weakens by around 40% due to the presence of temporary sales, but monetary policy still matters.

    Introduction

    Standard New Keynesian models have often neglected temporary sales, although the frequency of sales is far higher than that of regular price changes, and hence it is not necessarily guaranteed that the assumption of sticky prices holds. Ignoring this fact is justified, however, if retailers’ decision to hold sales is independent of macroeconomic developments. If this is the case, temporary sales do not eliminate the real effect of monetary policy. In fact, Guimaraes and Sheedy (2011, hereafter GS) develop a dynamic stochastic general equilibrium (DSGE) model incorporating sales and show that the real effect of monetary policy remains largely unchanged. Empirical studies such as Kehoe and Midrigan (2010), Eichenbaum, Jaimovich, and Rebelo (2011), and Anderson et al. (2012) argue that retailers’ decision to hold a sale is actually orthogonal to changes in macroeconomic developments.

  • Inflation Stabilization and Default Risk in a Currency Union

    Abstract

    By developing a class of dynamic stochastic general equilibrium models with nominal rigidities and assuming a two-country currency union with sovereign risk, we show that there is not necessarily a trade-off between the prevention of default risk and stabilizing inflation. Under optimal monetary and fiscal policy, comprising a de facto inflation stabilization policy, the tax rate as an optimal fiscal policy tool plays an important role in stabilizing inflation, although not completely because of the distorted steady state. Changes in the tax rate to minimize welfare costs via stabilizing inflation then improve the fiscal surplus, and because of this and the incompletely stabilized inflation, the default rate does not increase as much.

    Introduction

    How do we conduct monetary policy in a currency union amid sovereign risk premiums? How do the monetary and fiscal authorities behave in this difficult situation? What clues do we have for removing the trade-off between the prevention of default risk and stabilizing inflation? In this paper, we show that there is not necessarily a trade-off between the prevention of default risk and stabilizing inflation. Policy authorities, namely, the central bank and the government, should, without hesitation, conduct optimal monetary and fiscal policies, which is equivalent to stabilizing inflation.

  • Pareto-improving Immigration and Its Effect on Capital Accumulation in the Presence of Social Security

    Abstract

    The effect of accepting more immigrants on welfare in the presence of a pay-as-you-go social security system is analyzed qualitatively and quantitatively. First, it is shown that if initially there exist intergenerational government transfers from the young to the old, the government can lead an economy to the (modified) golden rule level within a finite time in a Pareto-improving way by increasing the percentage of immigrants to natives (PITN). Second, using a computational overlapping generations model, the welfare gain of increasing the PITN from 15.5 percent to 25.5 percent, and the number of years needed to reach the (modified) golden rule level in a Pareto-improving way, are calculated for a model economy. The simulation shows that the present value of the welfare gain of increasing the PITN amounts to 23 percent of initial GDP. It takes 112 years for the model economy to reach the golden rule level in a Pareto-improving way.

    Introduction

    Transforming a pay-as-you-go (PYGO) social security system into a funded system is not easy. When a PYGO social security system is changed to a funded system, some generations must bear the so-called "double burden": a young generation needs to pay the social security tax twice. Thus, although the transition from a PYGO social security system to a funded system is desirable, since a PYGO social security system causes under-accumulation of capital, it is difficult to make the transition in a Pareto-improving way.

  • Investment Horizon and Repo in the Over-the-Counter Market

    Abstract

    This paper presents a three-period model featuring a short-term investor in the over-the-counter bond market. A short-term investor stores cash because of a need to pay cash at some future date. If a short-term investor buys bonds, then a deadline for retrieving cash lowers the resale price of bonds for the investor through bilateral bargaining in the bond market. Ex-ante, this hold-up problem explains the use of a repo by a short-term investor, the existence of a haircut, and the vulnerability of a repo market to counterparty risk. This result holds without any uncertainty about bond returns or asymmetric information.

    Introduction

    Many securities primarily trade in an over-the-counter (OTC) market. A notable example of such securities is bonds. The key feature of an OTC market is that the buyer and the seller in each OTC trade set the terms of trade bilaterally. A theoretical literature has developed that analyzes the effects of this market structure on spot trading, including Spulber (1996), Rust and Hall (2003), Duffie, Gârleanu, and Pedersen (2005), Miao (2006), Vayanos and Wang (2007), Lagos and Rocheteau (2010), Lagos, Rocheteau and Weill (2011), and Chiu and Koeppl (2011). This literature typically models bilateral transactions using search models and analyzes various aspects of trading and price dynamics, such as liquidity and the bid-ask spread, in an OTC spot market.

  • An Estimated DSGE Model with a Deflation Steady State

    Abstract

    Benhabib, Schmitt-Grohé, and Uribe (2001) argue for the existence of a deflation steady state when the zero lower bound on the nominal interest rate is considered in a Taylor-type monetary policy rule. This paper estimates a medium-scale DSGE model with a deflation steady state for the Japanese economy during the period from 1999 to 2013, when the Bank of Japan conducted a zero interest rate policy and the inflation rate was almost always negative. Although the model exhibits equilibrium indeterminacy around the deflation steady state, a set of specific equilibria is selected by Bayesian methods. According to the estimated model, shocks to households’ preferences, investment adjustment costs, and external demand do not necessarily have an inflationary effect, in contrast to a standard model with a targeted-inflation steady state. An economy in the deflation equilibrium could experience unexpected volatility because of sunspot fluctuations, but it turns out that the effect of sunspot shocks on Japan’s business cycles is marginal and that macroeconomic stability during the period was a result of good luck.

    Introduction

    Dynamic stochastic general equilibrium (DSGE) models have become a popular tool in macroeconomics. In particular, following the development of Bayesian estimation and evaluation techniques, an increased number of researchers have estimated DSGE models for empirical research as well as quantitative policy analysis. These models typically consist of optimizing behavior of households and firms, and a monetary policy rule, along the lines of King (2000) and Woodford (2003). In this class of models, a central bank follows an active monetary policy rule; that is, the nominal interest rate is adjusted more than one for one when inflation deviates from a given target, and the economy fluctuates around the steady state where actual inflation coincides with the targeted inflation. In addition to such a target-inflation steady state, Benhabib, Schmitt-Grohé, and Uribe (2001) argue that the combination of an active monetary policy rule and the zero lower bound on the nominal interest rate gives rise to another long-run equilibrium, called a deflation steady state, where the inflation rate is negative and the nominal interest rate is very close to zero.
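
    The coexistence of the two steady states described here can be seen by combining the steady-state Fisher relation with a Taylor rule truncated at zero; the sketch below uses generic notation (r for the steady-state real rate, \pi^{*} for the inflation target) rather than the paper's estimated parameters.

    ```latex
    % Steady-state Fisher relation:                i = r + \pi
    % Active Taylor rule with a zero lower bound:  i = \max\{\, 0,\; r + \pi^{*} + \phi(\pi - \pi^{*}) \,\}, \quad \phi > 1
    % The two schedules intersect twice:
    %   (1) targeted-inflation steady state:  \pi = \pi^{*}, \; i = r + \pi^{*}
    %   (2) deflation steady state:            i = 0, \; \pi = -r < 0
    ```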

  • Beauty Contests and Fat Tails in Financial Markets

    Abstract

    Using a simultaneous-move herding model of rational traders who infer other traders’ private information on the value of an asset by observing their aggregate actions, this study seeks to explain the emergence of fat-tailed distributions of transaction volumes and asset returns in financial markets. Without making any parametric assumptions on private information, we analytically show that traders’ aggregate actions follow a power law distribution. We also provide simulation results to show that our model successfully reproduces the empirical distributions of asset returns. We argue that our model is similar to Keynes’s beauty contest in the sense that traders, who are assumed to be homogeneous, have an incentive to mimic the average trader, leading to a situation similar to the indeterminacy of equilibrium. In this situation, a trader’s buying action causes a stochastic chain-reaction, resulting in power laws for financial fluctuations. (Keywords: herd behavior, transaction volume, stock return, fat tail, power law. JEL classification code: G14)

    Introduction

    Since Mandelbrot [25] and Fama [13], it has been well established that stock returns exhibit fat-tailed and leptokurtic distributions. Jansen and de Vries [19], for example, have shown empirically that the power law exponent for stock returns is in the range of 3 to 5, which guarantees that the variance is finite but the distribution deviates substantially from the normal distribution in terms of the fourth moment. Such an anomaly in the tail shape, as well as kurtosis, has been regarded as one reason for the excess volatility of stock returns.
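
    The reasoning behind "finite variance but a markedly non-Gaussian fourth moment" is the standard moment condition for power-law tails, stated generically below.

    ```latex
    % If returns have a power-law tail, \Pr(|r| > x) \sim C x^{-\alpha} as x \to \infty, then
    % E|r|^{k} < \infty \iff k < \alpha.
    % With \alpha \in (3,5): the variance (k = 2) is always finite, while the fourth moment (k = 4)
    % diverges whenever \alpha \le 4, so kurtosis can deviate sharply from its Gaussian value.
    ```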

  • Offshoring, Sourcing Substitution Bias and the Measurement of US Import Prices, GDP and Productivity

    Abstract

    The decade ending in 2007 was a period of rapid sourcing substitution for manufactured goods consumed in the US. Imports were substituted for local sourcing, and patterns of supply for imports changed to give a large role to new producers in emerging economies. The change in the price paid by the buyer of an item who substitutes an import for local sourcing is out of scope for the US import price index, and the price change for an imported item when a new supplier in a different country is substituted for an existing one is also likely to be excluded from the index calculation. Sourcing substitution bias can arise in measures of change in import prices, real GDP and productivity if these excluded price changes are systematically different from other price changes. To determine bounds for how large sourcing substitution bias could be, we analyze product-level data on changes in import sourcing patterns between 1997 and 2007. Next, we identify products in the US industry accounts that are used for household consumption and that are supplied by imports. We aggregate CPIs, as well as combinations of MPIs and PPIs, that cover these products up to the product group level using weights that reflect household consumption patterns. With some adjustments, the gap between the growth rate of the product group index containing MPIs and the growth rate of a corresponding product group index constructed from CPIs can be used to estimate sourcing substitution bias. For nondurable goods, which were not subject to much sourcing substitution, the gap is near zero. Apparel and textile products, which were subject to considerable offshoring, have an adjusted growth rate gap of 0.6 percent per year. Durable goods have an adjusted gap of 1.2 percent per year, but the upper bound calculation suggests that some of this gap comes from effects other than sourcing substitution. During the period examined, sourcing substitution bias may have accounted for a tenth of the reported multifactor productivity growth of the US private business sector.

    Introduction

    Globalization has brought with it increased international engagement for many of the world’s economies. In the case of the US economy, one of the more striking changes over the past few decades was the growing substitution of imports for products once sourced from local producers. As a share of US domestic absorption of nonpetroleum goods, imports of nonpetroleum goods grew from a starting point of just 8 percent in 1970-71 to 30 percent in 2008 (figure 1). Imports of goods used for personal consumption expenditures (PCE) exhibited similar growth. Between 1969 and 2009, imports at f.o.b. prices grew from 6.1 to 21.4 percent of PCE for durable goods, from 5.1 to 31.9 percent of PCE for clothing and footwear, and from 2.4 percent to 18.6 percent of PCE for nondurables other than clothing, food and energy (McCully, 2011, p. 19).

  • Zipf’s Law, Pareto’s Law, and the Evolution of Top Incomes in the U.S.

    Abstract

    This paper presents a tractable dynamic general equilibrium model of income and firm-size distributions. The size and value of firms result from idiosyncratic, firm-level productivity shocks. CEOs can invest in their own firms’ risky stocks or in risk-free assets, implying that the CEO’s asset and income also depend on firm-level productivity shocks. We analytically show that this model generates the Pareto distribution of top income earners and Zipf’s law of firms in the steady state. Using the model, we evaluate how changes in tax rates can account for the recent evolution of top incomes in the U.S. The model matches the decline in the Pareto exponent of income distribution and the trend of the top 1% income share in the U.S. in recent decades. In the model, the lower marginal income tax for CEOs strengthens their incentive to increase the share of their firms’ risky stocks in their own asset portfolios. This leads to both higher dispersion and concentration of income in the top income group.

    Introduction

    For the last three decades, there has been a secular trend of concentration of income among the top earners in the U.S. economy. According to Alvaredo et al. (2013), the top 1% income share, the share of total income going to the richest top 1% of the population, declined from around 18% to 8% after the 1930s, but the trend was reversed during the 1970s. Since then, the income share of the top 1% has grown and had reached 18% by 2010, on par with the prewar level.
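
    Under the simplifying assumption that top incomes are exactly Pareto distributed, the link between the Pareto exponent and the top 1% share quoted here is the standard one; the back-of-the-envelope numbers below are illustrative and are not the paper's estimates.

    ```latex
    % For a Pareto tail with exponent \alpha > 1 (counter-CDF \propto x^{-\alpha}),
    % the income share of the top fraction q of earners is
    S(q) = q^{\,1 - 1/\alpha}.
    % Setting S(0.01) = 0.08 gives \alpha \approx 2.2, while S(0.01) = 0.18 gives \alpha \approx 1.6:
    % a rise in the top 1% share from 8% to 18% corresponds to a fall in the Pareto exponent
    % from roughly 2.2 to roughly 1.6.
    ```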

  • A macroeconomic model of liquidity crises

    Abstract

    We develop a macroeconomic model in which liquidity plays an essential role in the production process, because firms have a commitment problem regarding factor payments. A liquidity crisis occurs when firms fail to obtain sufficient liquidity, and may be caused either by self-fulfilling beliefs or by fundamental shocks. Our model is consistent with the observation that the decline in output during the Great Recession is mostly attributable to the deterioration in the labor wedge, rather than in productivity. The government’s commitment to guarantee bank deposits reduces the possibility of a self-fulfilling crisis, but it increases that of a fundamental crisis.

    Introduction

    The Great Recession, that is, the global recession in the late 2000s, was the deepest economic downturn since the 1930s. Lucas and Stokey (2011), among others, argue that just as in the Great Depression, the recession was made more severe by a liquidity crisis. A liquidity crisis is a sudden evaporation of the supply of liquidity that leads to a large drop in production and employment. In addition, the decline in output in the Great Recession was mostly due to deterioration in the labor wedge, rather than in productivity, as emphasized by Arellano, Bai, and Kehoe (2012).

  • Constrained Inefficiency and Optimal Taxation with Uninsurable Risks

    Abstract

    When individuals’ labor and capital income are subject to uninsurable idiosyncratic risks, should capital and labor be taxed, and if so, how? In a two-period general equilibrium model with production, we derive a decomposition formula for the welfare effects of these taxes into insurance and distribution effects. This allows us to determine how the sign of the optimal taxes on capital and labor depends on the nature of the shocks, the degree of heterogeneity in consumers’ incomes, and the way in which the tax revenue is used to provide lump-sum transfers to consumers. When shocks affect primarily labor income and heterogeneity is small, the optimal tax on capital is positive. However, in other cases a negative tax on capital is welfare improving. (JEL codes: D52, H21. Keywords: optimal linear taxes, incomplete markets, constrained efficiency)

    Introduction

    The main objective of this paper is to investigate the effects and the optimal taxation of investment and labor income in a two-period production economy with uninsurable background risk. More precisely, we examine whether the introduction of linear, distortionary taxes or subsidies on labor income and/or on the returns from savings is welfare improving, and what is then the optimal sign of such taxes. This amounts to studying the Ramsey problem in a general equilibrium set-up. We depart, however, from most of the literature on the subject in that we consider an environment with no public expenditure, where there is no need to raise tax revenue. Nonetheless, optimal taxes are typically nonzero; even distortionary taxes can improve the allocation of risk in the face of incomplete markets. The question is then which production factor should be taxed: we want to identify the economic properties that determine the signs of the optimal taxes on production factors.

  • Estimating Daily Inflation Using Scanner Data: A Progress Report

    Abstract

    We construct a Törnqvist daily price index using Japanese point of sale (POS) scanner data spanning from 1988 to 2013. We find the following. First, the POS-based inflation rate tends to be about 0.5 percentage points lower than the CPI inflation rate, although the difference between the two varies over time. Second, the difference between the two measures is greatest from 1992 to 1994, when, following the burst of the bubble economy in 1991, the POS inflation rate drops rapidly and turns negative in June 1992, while the CPI inflation rate remains positive until summer 1994. Third, the standard deviation of daily POS inflation is 1.1 percent compared to a standard deviation for the monthly change in the CPI of 0.2 percent, indicating that daily POS inflation is much more volatile, mainly due to frequent switching between regular and sale prices. We show that the volatility in the daily inflation rate can be reduced by more than 20 percent by trimming the tails of product-level price change distributions. Finally, if we measure price changes from one day to the next and construct a chained Törnqvist index, a strong chain drift arises, so that the chained price index falls to 10^-10 of the base value over the 25-year sample period, which is equivalent to an annual deflation rate of 60 percent. We provide evidence suggesting that one source of the chain drift is fluctuations in sales quantity before, during, and after a sale period.
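
    For reference, the (chained) Törnqvist index referred to above takes the standard form below, where s_{i,t} is item i's expenditure share on day t; the daily weighting details in the paper may of course differ.

    ```latex
    % Day-to-day Törnqvist price index for items i = 1, \dots, N:
    \ln \frac{P_t}{P_{t-1}} = \sum_{i=1}^{N} \tfrac{1}{2}\left(s_{i,t-1} + s_{i,t}\right) \ln \frac{p_{i,t}}{p_{i,t-1}},
    \qquad
    s_{i,t} = \frac{p_{i,t} q_{i,t}}{\sum_{j} p_{j,t} q_{j,t}}
    % Chaining these one-day links is what produces the chain drift discussed above when
    % quantities spike before, during, and after sale periods.
    ```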

    Introduction

    Japan's central bank and government are currently engaged in a major experiment to raise the rate of inflation to the target of 2 percent set by the Bank of Japan (BOJ). With overcoming deflation being a key policy priority, a first step in this direction is the accurate assessment of price developments. In Japan, prices are measured by the Statistics Bureau, Ministry of Internal Affairs and Communications, and the consumer price index (CPI) published by the Statistics Bureau is the most important indicator that the BOJ pays attention to when making policy decisions. The CPI, moreover, is of direct relevance to people's lives as, for example, public pension benefits are linked to the rate of inflation as measured by the CPI.

  • Analytical Derivation of Power Laws in Firm Size Variables from Gibrat’s Law and Quasi-inversion Symmetry: A Geomorphological Approach

    Abstract

    We start from Gibrat’s law and quasi-inversion symmetry for three firm size variables (i.e., tangible fixed assets K, number of employees L, and sales Y) and derive a partial differential equation to be satisfied by the joint probability density function of K and L. We then transform K and L, which are correlated, into two independent variables by applying surface openness used in geomorphology and provide an analytical solution to the partial differential equation. Using worldwide data on the firm size variables for companies, we confirm that the estimates on the power-law exponents of K, L, and Y satisfy a relationship implied by the theory.

    Introduction

    In econophysics, it is well-known that the cumulative distribution functions (CDFs) of capital K, labor L, and production Y of firms obey power laws in large scales that exceed certain size thresholds, which are given by K0, L0, and Y0:
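
    In standard econophysics notation, with tail exponents \mu_K, \mu_L, and \mu_Y introduced here only for exposition, these power laws take the form

    ```latex
    P(K > x) \propto x^{-\mu_K} \;\; (x > K_0), \qquad
    P(L > x) \propto x^{-\mu_L} \;\; (x > L_0), \qquad
    P(Y > x) \propto x^{-\mu_Y} \;\; (x > Y_0)
    ```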

  • The Structure and Evolution of Buyer-Supplier Networks

    Abstract

    In this paper, we investigate the structure and evolution of customer-supplier networks in Japan using a unique dataset that contains information on customer and supplier linkages for more than 500,000 incorporated non-financial firms for the five years from 2008 to 2012. We find, first, that the number of customer links is unequal across firms; the customer link distribution has a power-law tail with an exponent of unity (i.e., it follows Zipf’s law). We interpret this as implying that competition among firms to acquire new customers yields winners with a large number of customers, as well as losers with fewer customers. We also show that the shortest path length for any pair of firms is, on average, 4.3 links. Second, we find that link switching is relatively rare. Our estimates indicate that the survival rate per year for customer links is 92 percent and for supplier links 93 percent. Third and finally, we find that firm growth rates tend to be more highly correlated the closer two firms are to each other in a customer-supplier network (i.e., the smaller is the shortest path length for the two firms). This suggests that a non-negligible portion of fluctuations in firm growth stems from the propagation of microeconomic shocks – shocks affecting only a particular firm – through customer-supplier chains.

    Introduction

    Firms in a modern economy tend to be closely interconnected, particularly in the manufacturing sector. Firms typically rely on the delivery of materials or intermediate products from their suppliers to produce their own products, which in turn are delivered to other downstream firms. Two recent episodes vividly illustrate just how closely firms are interconnected. The first is the recent earthquake in Japan. The earthquake and tsunami hit the Tohoku region, the north-eastern part of Japan, on March 11, 2011, resulting in significant human and physical damage to that region. However, the economic damage was not restricted to that region and spread in an unanticipated manner to other parts of Japan through the disruption of supply chains. For example, vehicle production by Japanese automakers, which are located far away from the affected areas, was stopped or slowed down due to a shortage of auto parts supplies from firms located in the affected areas. The shock even spread across borders, leading to a substantial decline in North American vehicle production. The second episode is the recent financial turmoil triggered by the subprime mortgage crisis in the United States. The adverse shock originally stemming from the so-called toxic assets on the balance sheets of U.S. financial institutions led to the failure of these institutions and was transmitted beyond entities that had direct business with the collapsed financial institutions to those that seemed to have no relationship with them, resulting in a storm that affected financial institutions around the world.

  • Buyer-Size Discounts and Inflation Dynamics

    Abstract

    This paper considers the macroeconomic effects of retailers’ market concentration and buyer-size discounts on inflation dynamics. During Japan's “lost decades,” large retailers enhanced their market power, leading to increased exploitation of buyer-size discounts in procuring goods. We incorporate this effect into an otherwise standard New-Keynesian model. Calibrating to the Japanese economy during the lost decades, we find that despite a reduction in procurement cost, strengthened buyer-size discounts did not cause deflation; rather, they caused inflation of 0.1% annually. This arose from an increase in the real wage due to the expansion of production.

    Introduction

    In this paper, we aim to consider the macroeconomic effects of buyer-size discounts on inflation dynamics. It is conventional wisdom that large buyers (downstream firms) are better bargainers than small buyers when procuring goods from sellers (upstream firms). Retailers, wholesalers, and manufacturers negotiate prices, taking account of trade size. The increase in sales of retail giants such as Wal-Mart in the United States, Tesco in the United Kingdom, and Aeon in Japan has been accompanied by an increase in their bargaining power over wholesalers and manufacturers. Figure 1 shows evidence that larger buyers enjoy larger price discounts in Japan. In 2007, the National Survey of Prices by the Statistics Bureau reported the prices of the same types of goods sold by retailers with differing floor space. For nine kinds of goods, from perishables to durable goods, retail prices decrease with the floor space of the retailer. This suggests that large retailers purchase goods from wholesalers and manufacturers at lower prices than small retailers do. It is natural to think that these buyer-size discounts influence macro inflation dynamics.

  • Lending Pro-Cyclicality and Macro-Prudential Policy: Evidence from Japanese LTV Ratios

    Abstract

    Using a large and unique micro dataset compiled from the official real estate registry in Japan, we examine the loan-to-value (LTV) ratios for business loans from 1975 to 2009 to draw some implications for the ongoing debate on the use of LTV ratio caps as a macro-prudential policy measure. We find that the LTV ratio exhibits counter-cyclicality, implying that the increase (decrease) in loan volume is smaller than the increase (decrease) in land values during booms (busts). Most importantly, LTV ratios are at their lowest during the bubble period in the late 1980s and early 1990s. The counter-cyclicality of LTV ratios is robust to controlling for various characteristics of loans, borrowers, and lenders. We also find that borrowers with high-LTV loans performed no worse ex-post than those with low-LTV loans, and during the bubble period sometimes performed better. Our findings imply that a simple fixed cap on LTV ratios might not only be ineffective in curbing loan volume in boom periods but might also inhibit well-performing firms from borrowing. This casts doubt on the efficacy of employing a simple LTV cap as a macro-prudential policy measure.
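
    To fix ideas on what counter-cyclicality of the LTV ratio means in accounting terms, the following back-of-the-envelope reading uses hypothetical numbers of our own, not figures from the paper:

    \[ \mathrm{LTV}_t = \frac{\text{loan amount}_t}{\text{collateral (land) value}_t}, \qquad \Delta \ln \mathrm{LTV}_t = \Delta \ln(\text{loan}_t) - \Delta \ln(\text{value}_t). \]

    For example, if land values rise by 20 percent during a boom while loan volume rises by only 10 percent, the LTV ratio falls by roughly 10 percent, which is the pattern the paper documents.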

    Introduction

    The recent financial crisis with its epicenter in the U.S. followed a disastrous financial crisis in Japan more than a decade before. It is probably not an exaggeration to argue that these crises shattered the illusion that the Basel framework – specifically Basel I and Basel II – had ushered in a new era of financial stability. These two crises centered on bubbles that affected both the business sector (business loans) and the household sector (residential mortgages). In Japan, banks mostly suffered damage in the business sector, while in the U.S. banks mostly suffered damage in the household sector. Following the first of these crises, the Japanese crisis, a search began for policy tools that would reduce the probability of future crises and minimize the damage when they do occur. Consensus began to build in favor of countercyclical macro-prudential policy levers (e.g., Kashyap and Stein 2004). For example, there was great interest in and optimism associated with the introduction by the Bank of Spain of dynamic loan loss provisioning in 2000. Also, Basel III adopted a countercyclical capital buffer to be implemented when regulators sense that credit growth has become excessive.

  • The Uncertainty Multiplier and Business Cycles

    Abstract

    I study a business cycle model where agents learn about the state of the economy by accumulating capital. During recessions, agents invest less, and this generates noisier estimates of macroeconomic conditions and an increase in uncertainty. The endogenous increase in aggregate uncertainty further reduces economic activity, which in turn leads to more uncertainty, and so on. Thus, through changes in uncertainty, learning gives rise to a multiplier effect that amplifies business cycles. I use the calibrated model to measure the size of this uncertainty multiplier.

    Introduction

    What drives business cycles? A rapidly growing literature argues that shocks to uncertainty are a significant source of business cycle dynamics—see, for example, Bloom (2009), Fernández-Villaverde et al. (2011), Gourio (2012), and Christiano et al. (forthcoming). In these uncertainty shock theories, recessions are caused by exogenous increases in the volatility of structural shocks. However, the literature faces at least two important criticisms. First, fluctuations in uncertainty may be, at least partially, endogenous. The distinction is crucial because if uncertainty is an equilibrium object that arises from agents’ actions, policy experiments that treat uncertainty as exogenous are subject to the Lucas critique. Second, some authors (Bachmann and Bayer 2013, Born and Pfeifer 2012, and Chugh 2012) have argued that, given small and transient fluctuations in observed ex-post volatility, changes in uncertainty have negligible effects. However, time-varying volatility need not be the only source of time-varying uncertainty. If this is the case, these papers may be understating the contribution of changes in uncertainty to aggregate fluctuations.

  • Liquidity, Trends and the Great Recession

    Abstract

    We study the impact that the liquidity crunch in 2008-2009 had on the U.S. economy’s growth trend. To this end, we propose a model featuring endogenous growth à la Romer and a liquidity friction à la Kiyotaki-Moore. A key finding in our study is that liquidity declined around the demise of Lehman Brothers, which led to the severe contraction in the economy. This liquidity shock was a tail event. Improving conditions in financial markets were crucial in the subsequent recovery. Had conditions remained at their worst level in 2008, output would have been 20 percent below its actual level in 2011.

    Introduction

    A few years into the recovery from the Great Recession, it is becoming clear that real GDP is failing to return to its pre-crisis path. Namely, although the economy is growing at pre-crisis growth rates, the crisis seems to have imparted a downward shift to the level of output. Figure 1 shows real GDP and its growth rate over the past decade. Without much effort, one can see that the economy is moving along a (new) trend that lies below the one prevailing in 2007. It is also apparent that if the economy continues to display the dismal post-crisis growth rates (blue dashed line), it will not revert to the old trend. Hence, this tepid recovery has spurred debate on whether the shift is permanent and, if so, what the long-term implications are for the economy. In this paper, we tackle the issue of the long-run impact of the Great Recession by means of a structural model.

  • Growing through cities in developing countries

    Abstract

    This paper examines the effects of urbanisation on development and growth. It starts with a labour market perspective and emphasises the importance of agglomeration economies, both static and dynamic. It then argues that more productive jobs in cities do not come in a void and underscores the importance of job and firm dynamics. In turn, these dynamics are shaped by the broader characteristics of urban systems. A number of conclusions are drawn. First, agglomeration effects are quantitatively important and pervasive. Second, the productive advantage of large cities is constantly eroded and needs to be sustained by new job creation and innovation. Third, this process of creative destruction in cities, which is fundamental for aggregate growth, is determined in part by the characteristics of urban systems and broader institutional features. We highlight important differences between developing countries and more advanced economies. A major challenge for developing countries is to make sure that their urban systems act as drivers of economic growth.

    Introduction

    Urbanisation and development are tightly linked. The strong positive correlation between the rate of urbanisation of a country and its per capita income has been repeatedly documented. See for instance World Bank (2009), Henderson (2010), or Henderson (2002) among many others. There is no doubt that much of the causation goes from economic growth to increased urbanisation. As countries grow, they undergo structural change and labour is reallocated from rural agriculture to urban manufacturing and services (Michaels, Rauch, and Redding, 2012). The traditional policy focus is then to make sure that this reallocation occurs at the ‘right time’ and that the distribution of population across cities is ‘balanced’. Urbanisation without industrialisation (Fay and Opal, 1999, Gollin, Jedwab, and Vollrath, 2013, Jedwab, 2013) and increased population concentrations in primate cities (Duranton, 2008) are often viewed as serious urban and development problems.

  • The Political Economy of Financial Systems: Evidence from Suffrage Reforms in the Last Two Centuries

    Abstract

    Initially, voting rights were limited to wealthy elites, who provided political support for stock markets. Franchise expansion induces the median voter to provide political support for banking development, as this new electorate has lower financial holdings and benefits less from the risky financial returns offered by stock markets. Our panel data evidence covering 1830-1999 shows that tighter restrictions on the voting franchise induce greater stock market development, whereas a broader voting franchise is more conducive to the development of the banking sector, consistent with Perotti and von Thadden (2006). Our results are robust to controlling for other political determinants and endogeneity.

    Introduction

    Fundamental institutions drive financial development. Political institutions, together with legal institutions and cultural traits, are of first-order importance (La Porta, Lopez-de-Silanes, Shleifer, and Vishny, 1998; Rajan and Zingales, 2003; Guiso, Sapienza, and Zingales, 2004; Acemoglu and Robinson, 2005). This paper is the first to empirically study how an important political institution – the scope of the voting franchise – affects different forms of financial development (stock market and banking) through shifts in the distribution of preferences of the voting class.

  • Investment Horizon and Repo in the Over-the-Counter Market

    Abstract

    This paper presents a three-period model featuring a short-term investor and dealers in an over-the-counter bond market. A short-term investor invests cash in the short term because of a need to pay cash soon. This time constraint lowers the resale price of bonds held by a short-term investor through bilateral bargaining in an over-the-counter market. Ex ante, this hold-up problem explains the use of a repo by a short-term investor, a positive haircut due to counterparty risk, and the fragility of a repo market. This result holds without any risk to the dividends and principals of the underlying bonds or any asymmetric information.

    Introduction

    Many securities trade primarily in an over-the-counter (OTC) market. A notable example of such securities is bonds. The key feature of an OTC market is that the buyer and the seller in each OTC trade set the terms of trade bilaterally. A theoretical literature has developed analyzing the effects of this market structure on spot trading; see, for example, Spulber (1996), Rust and Hall (2003), Duffie, Gârleanu, and Pedersen (2005), Miao (2006), Vayanos and Wang (2007), Lagos and Rocheteau (2010), Lagos, Rocheteau and Weill (2011), and Chiu and Koeppl (2011). This literature uses search models, in which each transaction is bilateral, to analyze various aspects of trading and price dynamics, such as liquidity and bid-ask spreads, in OTC spot markets.

  • Separating the Age Effect from a Repeat Sales Index: Land and Structure Decomposition

    Abstract

    Since real estate is heterogeneous and infrequently traded, the repeat sales model has become a popular method to estimate a real estate price index. However, the model fails to adjust for depreciation, as age and the time between sales have an exact linear relationship. This paper proposes a new method to estimate an age-adjusted repeat sales index by decomposing property value into land and structure components. As depreciation is more relevant to the structure than to land, a property’s depreciation rate should depend on the relative size of its land and structure components: the larger the land component, the lower the depreciation rate of the property. Based on housing transaction data from Hong Kong and Tokyo, we find that Hong Kong has a higher depreciation rate (assuming a fixed structure-to-property value ratio), while the resulting age adjustment is larger in Tokyo because its structure component has grown larger from the first to the second sale.
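
    The identification problem can be stated compactly. In our notation (not necessarily the paper's), a repeat sales regression for property i sold at times t_1 and t_2 takes the form

    \[ \ln P_{i,t_2} - \ln P_{i,t_1} = (\tau_{t_2} - \tau_{t_1}) + \delta\,(A_{i,t_2} - A_{i,t_1}) + \varepsilon_i, \qquad A_{i,t_2} - A_{i,t_1} = t_2 - t_1, \]

    so the age (depreciation) term \(\delta\) is perfectly collinear with the time dummies \(\tau_t\). Splitting the property price into a land component, which does not depreciate, and a structure component, which does, is what breaks this collinearity.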

    Introduction

    A price index aims to capture the price change of products free from any variations in quantity or quality. When it comes to real estate, the core problem is that it is heterogeneous and infrequently traded. Mean or median price indices are simple to compute, but properties sold in one period may differ from those in another period. To overcome this problem, two regression-based approaches are used to construct a constant-quality real estate price index (Shimizu et al. (2010)).

  • Measuring the Evolution of Korea’s Material Living Standards, 1980-2010

    Abstract

    Based on a production-theoretic framework, we measure the effects of real output prices, primary inputs, multi-factor productivity growth, and depreciation on Korea’s real net income growth over the past 30 years. The empirical analysis is based on a new dataset for Korea with detailed information on labour and capital inputs, including series on land and inventories assets. We find that while over the entire period, capital and labour inputs explain the bulk of Korean real income growth, productivity growth has come to play an increasingly important role since the mid-1990s, providing some evidence of a transition from ‘input-led’ to ‘productivity-led’ growth. Terms of trade and other price effects were modest over the longer period, but had significant real income effects over sub-periods. Overall, real depreciation had only limited effects except during periods of crises where it bore negatively on real net income growth.

    Introduction

    The vast majority of studies on economic growth have been concerned with the growth of gross domestic product (GDP), in other words with the growth of countries’ production. The OECD, in common with many other organisations and economists, has also approximated material living standards in terms of the level and growth of gross domestic product.

  • Matching Indices for Thinly-Traded Commercial Real Estate in Singapore

    Abstract

    We use a matching procedure to construct three commercial real estate indices (office, shop and multiple-user factory) in Singapore using transaction sales from 1995Q1 to 2010Q4. The matching approach is less restrictive than the repeat sales estimator, which is restricted to properties sold at least twice during the sample period. The matching approach helps to overcome problems associated with thin markets and non-random sampling by pairing sales of similar but not necessarily identical properties across the control and treatment periods. We use the matched samples to estimate not just the mean changes in prices, but the full distribution of quality-adjusted sales prices over different target quantiles. The matched indices show three distinct cycles in commercial real estate markets in Singapore, including two booms in 1995-1996 and 2006-2011, and a deep and prolonged downturn with price declines from 1999 to 2005. We also use kernel density functions to illustrate the shift in the distribution of prices across the two post-crisis periods, 1998 and 2008.

    Introduction

    Unlike residential real estate markets, where transactions are abundant, commercial real estate transactions are thin and lumpy. Many institutional owners hold commercial real estate for long-term investment purposes. The dearth of transaction data has led to the widespread use of appraisal-based indices, such as the National Council of Real Estate Investment Fiduciaries (NCREIF) index, as an alternative to transaction-based indices in the U.S. However, appraisal-based indices are vulnerable to smoothing problems. Appraisers appear to systematically underestimate the variance of real estate returns and their correlation with other asset returns (Webb, Miles and Guilkey, 1992). Despite various attempts to correct appraisal bias, it remains an Achilles’ heel of appraisal-based indices. Corgel and deRoos (1999) found that recovering the true variance and correlation of appraisal-based returns reduces the weights of real estate in multi-asset portfolios.

  • The Consumer Price Index: Recent Developments

    Abstract

    The 2004 International Labour Office Consumer Price Index Manual: Theory and Practice summarized the state of the art for constructing Consumer Price Indexes (CPIs) at that time. In the intervening decade, there have been some significant new developments which are reviewed in this paper. The CPI Manual recommended the use of chained superlative indexes for a month to month CPI. However, subsequent experience with the use of monthly scanner data has shown that a significant chain drift problem can occur. The paper explains the nature of the problem and reviews possible solutions to overcome the problem. The paper also describes the recently developed Time Dummy Product method for constructing elementary index numbers (indexes at lower levels of aggregation where only price information is available).
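
    The chain drift problem can be illustrated with a tiny synthetic example (two goods, four months; the numbers and function names below are our own, not from the Manual): when a sale triggers a quantity surge followed by a post-sale trough, a monthly chained Törnqvist index fails to return to 1 even though prices and quantities end exactly where they started.

    ```python
    import numpy as np

    def tornqvist_link(p0, q0, p1, q1):
        """One Tornqvist link between two adjacent months."""
        s0 = p0 * q0 / np.sum(p0 * q0)        # expenditure shares, month 0
        s1 = p1 * q1 / np.sum(p1 * q1)        # expenditure shares, month 1
        return np.exp(np.sum(0.5 * (s0 + s1) * np.log(p1 / p0)))

    # good 1 goes on sale in month 1 (price halves, quantity surges), demand
    # dips in month 2 as households run down their stock, then everything
    # returns to the starting point in month 3
    prices     = [np.array([1.0, 1.0]), np.array([0.5, 1.0]),
                  np.array([1.0, 1.0]), np.array([1.0, 1.0])]
    quantities = [np.array([1.0, 1.0]), np.array([10.0, 1.0]),
                  np.array([0.1, 1.0]), np.array([1.0, 1.0])]

    chained = 1.0
    for t in range(1, len(prices)):
        chained *= tornqvist_link(prices[t - 1], quantities[t - 1],
                                  prices[t], quantities[t])

    # prices and quantities in month 3 equal those in month 0, yet the
    # chained index ends well below 1: downward chain drift
    print(round(chained, 3))
    ```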

    Introduction

    A decade has passed since the Consumer Price Index Manual: Theory and Practice was published. Thus it seems appropriate to review the advice given in the Manual in the light of research over the past decade. It turns out that there have been some significant developments that should be taken into account in the next revision of the Manual.

  • The Estimation of Owner Occupied Housing Indexes using the RPPI: The Case of Tokyo

    Abstract

    Dramatic increases and decreases in housing prices have had an enormous impact on the economies of various countries. If this kind of fluctuation in housing prices is linked to fluctuations in the consumer price index (CPI) and GDP, it may be reflected in fiscal and monetary policies. However, during the 1980s housing bubble in Japan and the later U.S. housing bubble, fluctuations in asset prices were not sufficiently reflected in price statistics and the like. The estimation of imputed rent for owner-occupied housing is said to be one of the main reasons for this. Using multiple previously proposed methods, this study estimated the imputed rent for owner-occupied housing in Tokyo and clarified the extent to which the estimated imputed rent diverged depending on the estimation method. The results show that, during the bubble’s peak, there was an 11-fold discrepancy between the Equivalent Rent Approach currently employed in Japan and Equivalent Rent calculated with a hedonic approach using market rent. Meanwhile, with the User Cost Approach, the values became negative under some estimation methods during the bubble period, when asset prices rose significantly. Accordingly, we also estimated Diewert’s OOH Index, proposed by Diewert and Nakamura (2009). Comparing these estimates with the Equivalent Rent Approach estimates modified with the hedonic approach using market rent reveals that, from 1990 to 2009, the Diewert OOH Index results were on average 1.7 times greater than the Equivalent Rent Approach results, with a maximum 3-fold difference. These findings suggest that even when the Equivalent Rent Approach is improved, significant discrepancies remain.

    Introduction

    Housing price fluctuations exert effects on the economy through various channels. More precisely, however, the variables that should be observed are the prices of housing relative to other asset prices and to goods/services prices.

  • Residential Property Price Indexes for Tokyo

    Abstract

    The paper uses hedonic regression techniques in order to decompose the price of a house into land and structure components using real estate sales data for Tokyo. In order to get sensible results, a nonlinear regression model using data that covered multiple time periods was used. Collinearity between the amount of land and structure in each residential property leads to inaccurate estimates for the land and structure value of a property. This collinearity problem was solved by using exogenous information on the rate of growth of construction costs in Tokyo in order to get useful constant quality subindexes for the price of land and structures separately.

    Introduction

    In this paper, we will use hedonic regression techniques in order to construct a quarterly constant quality price index for the sales of residential properties in Tokyo for the years 2000-2010 (44 quarters in all). The usual application of a time dummy hedonic regression model to sales of houses does not lead to a decomposition of the sale price into a structure component and a land component. But such a decomposition is required for many purposes. Our paper will attempt to use hedonic regression techniques in order to provide such a decomposition for Tokyo house prices. Instead of entering characteristics into our regressions in a linear fashion, we enter them as piece-wise linear functions or spline functions to achieve greater flexibility.
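
    A stylized version of the decomposition described above (our notation, not necessarily the exact specification estimated in the paper) regresses the sale price on land and depreciated structure components:

    \[ V_{tn} = \alpha_t L_{tn} + \beta\, c_t (1 - \delta)^{A_{tn}} S_{tn} + \varepsilon_{tn}, \]

    where V_{tn} is the price of property n sold in quarter t, L_{tn} its lot size, S_{tn} its floor area, A_{tn} the age of the structure, \(\alpha_t\) the quarter-t price of land, \(\delta\) a depreciation rate, and c_t an exogenous construction cost index. Tying the structure price to c_t is what resolves the collinearity between the land and structure components noted in the abstract.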

  • A Conceptual Framework for Commercial Property Price Indexes

    Abstract

    The paper studies the problems associated with the construction of price indexes for commercial properties that could be used in the System of National Accounts. Property price indexes are required for the stocks of commercial properties in the Balance Sheets of the country, and related price indexes for the land and structure components of a commercial property are required in the Income Accounts of the country if the Multifactor Productivity of the Commercial Property Industry is calculated as part of the System of National Accounts. The paper suggests a variant of the capitalization of Net Operating Income approach to the construction of property price indexes and uses the one hoss shay or light bulb model of depreciation for the structure component of a commercial property.
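
    For orientation, the simplest textbook form of the capitalization approach (our notation, abstracting from vacancies, expected rent growth, and taxes) values a property in period t as

    \[ V_t = \frac{\mathrm{NOI}_t}{r_t}, \]

    where NOI_t is net operating income and r_t the capitalization rate; the variant proposed in the paper refines how both the numerator and the rate are measured and how depreciation of the structure is treated.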

    Introduction

    Many of the property price bubbles experienced during the 20th century were triggered by steep increases and sharp decreases in commercial property prices. Given this, there is a need to construct commercial property price indexes, but exactly how should these prices be measured? Since commercial property is highly heterogeneous compared to housing and the number of transactions is much lower, it is extremely difficult to capture trends in this market. In addition, many countries have been experiencing large investments in commercial properties, and in countries where the market has matured, depreciation and investments in improvements and renovations represent a substantial fraction of national output. Yet clear measurement methods for the treatment of these expenditures in the System of National Accounts are lacking. Given this, one may say that the economic value of commercial property in particular is one of the indicators that is most difficult to measure on a day-to-day basis, and that statistical development related to this is one of the fields that has perhaps lagged the furthest behind. Indexes based on transaction prices for commercial properties have begun to appear in recent years, especially in the U.S. In many cases, however, commercial property indexes are still based on property appraisal prices, and such appraisal prices need to rest on a firm methodology. Thus in this paper, we will briefly review possible appraisal methodologies and then develop in more detail what we think is the most promising approach.

  • How Much Do Official Price Indexes Tell Us About Inflation?

    Abstract

    Official price indexes, such as the CPI, are imperfect indicators of inflation calculated using ad hoc price formulae different from the theoretically well-founded inflation indexes favored by economists. This paper provides the first estimate of how accurately the CPI informs us about “true” inflation. We use the largest price and quantity dataset ever employed in economics to build a Törnqvist inflation index for Japan between 1989 and 2010. Our comparison of this true inflation index with the CPI indicates that the CPI bias is not constant but depends on the level of inflation. We show that the informativeness of the CPI rises with inflation. When measured inflation is low (less than 2.4% per year), the CPI is a poor predictor of true inflation even over 12-month periods. Outside this range, the CPI is a much better measure of inflation. We find that the U.S. PCE Deflator methodology is superior to the Japanese CPI methodology but still exhibits substantial measurement error and biases, rendering it a problematic predictor of inflation in low-inflation regimes as well.
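
    For reference, the Törnqvist index used as the benchmark aggregates item-level price relatives with averaged expenditure shares; in standard notation,

    \[ \ln \frac{P_t}{P_{t-1}} = \sum_i \tfrac{1}{2}\bigl(s_{i,t-1} + s_{i,t}\bigr)\, \ln \frac{p_{i,t}}{p_{i,t-1}}, \]

    where p_{i,t} is the price of item i in period t and s_{i,t} its expenditure share. Because it uses observed quantities at both ends of each comparison, this formula is a superlative index, unlike the fixed-weight formulae used in most official CPIs.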

    Introduction

    We have long known that the price indexes constructed by statistical agencies, such as the Consumer Price Index (CPI) and the Personal Consumption Expenditure (PCE) deflator, measure inflation with error. This error arises for two reasons. First, formula biases or errors appear because statistical agencies do not use the price aggregation formula dictated by theory. Second, imperfect sampling means that official price indexes are inherently stochastic. A theoretical macroeconomics literature starting with Svensson and Woodford [2003] and Aoki [2003] has noted that these stochastic measurement errors imply that one cannot assume that true inflation equals the CPI less some bias term. In general, the relationship is more complex, but what is it? This paper provides the first answer to this question by analyzing the largest dataset ever utilized in economics: 5 billion Japanese price and quantity observations collected over a 23-year period. The results are disturbing. We show that when the Japanese CPI measures inflation as low (below 2.4 percent in our baseline estimates) there is little relation between measured inflation and actual inflation. Outside of this range, measured inflation understates actual inflation changes. In other words, one can infer inflation changes from CPI changes when the CPI is high, but not when the CPI is close to zero. We also show that if Japan were to shift to a methodology akin to the U.S. PCE deflator, the non-linearity would be reduced but not eliminated. This non-linear relationship between measured and actual inflation has important implications for the conduct of monetary policy in low inflation regimes.

  • Zero Lower Bound and Parameter Bias in an Estimated DSGE Model

    Abstract

    This paper examines how and to what extent parameter estimates can be biased in a dynamic stochastic general equilibrium (DSGE) model that omits the zero lower bound constraint on the nominal interest rate. Our experiments show that most of the parameter estimates in a standard sticky-price DSGE model are not biased although some biases are detected in the estimates of the monetary policy parameters and the steady-state real interest rate. Nevertheless, in our baseline experiment, these biases are so small that the estimated impulse response functions are quite similar to the true impulse response functions. However, as the probability of hitting the zero lower bound increases, the biases in the parameter estimates become larger and can therefore lead to substantial differences between the estimated and true impulse responses.
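
    The constraint that such estimations omit can be written, in a generic Taylor-rule form (our notation, not the paper's exact specification), as

    \[ i_t = \max\Bigl\{\,0,\; r^{*} + \pi^{*} + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\, y_t \Bigr\}, \]

    where i_t is the nominal policy rate, r^{*} the steady-state real rate, \(\pi^{*}\) the inflation target, and y_t the output gap. A linearized model estimated without the max operator implicitly allows i_t to go negative, which is the source of the potential parameter bias the paper quantifies.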

    Introduction

    Dynamic stochastic general equilibrium (DSGE) models have become a prominent tool for policy analysis. In particular, following the development of Bayesian estimation and evaluation techniques, estimated DSGE models have been extensively used by a range of policy institutions, including central banks. At the same time, the zero lower bound constraint on nominal interest rates has been a primary concern for policymakers. Much work has been devoted to understanding, from a theoretical perspective, how the economy works and how policy should be conducted in the presence of this constraint. However, empirical studies that estimate DSGE models including the interest-rate lower bound are still scarce because of the computational difficulties involved in treating the nonlinearity arising from the bound, and hence most practitioners continue to estimate linearized DSGE models without explicitly considering the lower bound.

  • Exchange Rates and Fundamentals: Closing a Two-country Model

    Abstract

    In an influential paper, Engel and West (2005) claim that the near random-walk behavior of nominal exchange rates is an equilibrium outcome of a variant of present-value models when economic fundamentals follow exogenous first-order integrated processes and the discount factor approaches one. Subsequent empirical studies further confirm this proposition by estimating a discount factor that is close to one under distinct identification schemes. In this paper, I argue that the unit market discount factor implies the counterfactual joint equilibrium dynamics of random-walk exchange rates and economic fundamentals within a canonical, two-country, incomplete market model. Bayesian posterior simulation exercises of a two-country model based on post-Bretton Woods data from Canada and the United States reveal difficulties in reconciling the equilibrium random-walk proposition within the two-country model; in particular, the market discount factor is identified as being much lower than one.

    Introduction

    Few equilibrium models for nominal exchange rates systematically beat a naive random-walk counterpart in terms of out-of-sample forecast performance. Since the study of Meese and Rogoff (1983), this robust empirical property of nominal exchange rate fluctuations has stubbornly resisted theoretical challenges to understand the behavior of nominal exchange rates as equilibrium outcomes. The recently developed open-economy dynamic stochastic general equilibrium (DSGE) models also suffer from this problem. Because their exchange rate forecasts are closely tied to other macroeconomic fundamentals, open-economy DSGE models fail to generate random-walk nominal exchange rates along an equilibrium path, a failure known as the disconnect puzzle.

  • The Relation between Inventory Investment and Price Dynamics in a Distributive Firm

    Abstract

    In this paper, we examine the role of inventory in the price-setting behavior of a distributive firm. Empirically, we document five facts about the pricing behavior and sales quantities of a particular consumer good, based on daily scanner data, in order to examine the relation between store characteristics and pricing behavior. These results indicate that price stickiness varies with retailers’ characteristics. We argue that the mechanism behind this price stickiness lies in the retailer’s policy for inventory investment. A partial equilibrium model of the retailer’s optimization behavior with inventory is constructed so as to replicate the five empirical facts. The numerical experiments in the constructed model suggest that the frequency of price changes depends on the retailer’s order cost, storage cost, and menu cost, not on the price elasticity of demand.

    Introduction

    Price stickiness is one of the most important and controversial concepts in macroeconomics. Many macroeconomists consider it a key ingredient for generating real effects of monetary policy in macroeconomic models. To establish the relevant empirical facts, researchers have developed theories of price dynamics and examined detailed price data. This paper studies the mechanism of price stickiness by examining the role of inventory in the price-setting behavior of a distributive firm, both empirically, using micro data scanned at retail stores, and through numerical experiments with a quantitative model of a distributive firm.

  • Labor Force Participation and Monetary Policy in the Wake of the Great Recession

    Abstract

    In this paper, we provide compelling evidence that cyclical factors account for the bulk of the post-2007 decline in the U.S. labor force participation rate. We then proceed to formulate a stylized New Keynesian model in which labor force participation is essentially acyclical during "normal times" (that is, in response to small or transitory shocks) but drops markedly in the wake of a large and persistent aggregate demand shock. Finally, we show that these considerations can have potentially crucial implications for the design of monetary policy, especially under circumstances in which adjustments to the short-term interest rate are constrained by the zero lower bound.

    Introduction

    A longstanding and well-established fact in labor economics is that the labor supply of prime-age and older adults has been essentially acyclical throughout the postwar period, while that of teenagers has been moderately procyclical; cf. Mincer (1966), Pencavel (1986), and Heckman and Killingsworth (1986). Consequently, macroeconomists have largely focused on the unemployment rate as a business cycle indicator while abstracting from movements in labor force participation. Similarly, the literature on optimal monetary policy and simple rules has typically assumed that unemployment gaps and output gaps can be viewed as roughly equivalent; cf. Orphanides (2002), Taylor and Williams (2010).

  • Who faces higher prices? An empirical analysis based on Japanese homescan data

    Abstract

    On the basis of household-level scanner data (homescan) for Japan, we construct a household-level price index and investigate the causes of price differences between households. We observe large price differentials between households, as did Aguiar and Hurst (2007). However, the differences between age and income groups are small. In addition, we find that elderly people face higher prices than younger people, which is contrary to the results of Aguiar and Hurst (2007). The most important determinant of the price level is reliance on bargain sales: a one-standard-deviation increase in purchases of goods at bargain sales decreases the price level by more than 0.9%, while shopping frequency has only limited effects on the price level.
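
    As a minimal sketch of what a household-level price index of this kind can look like (toy records and column names of our own; the construction in the paper may differ in detail), each household's spending is compared with what the same basket would have cost at the economy-wide average unit price of each good:

    ```python
    import pandas as pd

    # toy homescan-style purchase records: household, good, unit price paid, quantity
    purchases = pd.DataFrame({
        "household": ["A", "A", "B", "B"],
        "good":      ["milk", "bread", "milk", "bread"],
        "price":     [1.00, 2.20, 1.20, 2.00],
        "quantity":  [3, 1, 1, 2],
    })
    purchases["spend"] = purchases["price"] * purchases["quantity"]

    # reference price of each good: total spending divided by total quantity
    totals = purchases.groupby("good")[["spend", "quantity"]].sum()
    ref_price = totals["spend"] / totals["quantity"]

    # household price index: actual spending relative to the cost of the same
    # basket at reference prices (values above 1 mean the household pays more)
    purchases["at_ref"] = purchases["quantity"] * purchases["good"].map(ref_price)
    by_household = purchases.groupby("household")[["spend", "at_ref"]].sum()
    print((by_household["spend"] / by_household["at_ref"]).round(3))
    ```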

    Introduction

    Owing to recent technological developments in data creation, numerous researchers studying commodity prices have begun to use not only traditional aggregates, such as the consumer price index, but also micro-level information on commodity prices. To date, commodity-level price information has been used in various fields of economics, such as macroeconomics (Nakamura and Steinsson, 2007), international economics (Haskel and Wolf, 2001), and industrial economics (Bay et al., 2004; Goldberg and Frank, 2005). Recently, on the basis of commodity-level homescan data, Aguiar and Hurst (2007) (hereafter AH) found a violation of the law of one price between different age groups.

  • Is Downward Wage Flexibility the Primary Factor of Japan’s Prolonged Deflation?

    Abstract

    By using both macro- and micro-level data, this paper investigates how wages and prices evolved during Japan’s lost two decades. We find that downward nominal wage rigidity was present in Japan until the late 1990s but disappeared after 1998 as annual wages became downwardly flexible. Moreover, nominal wage flexibility may have contributed to relatively low unemployment rates in Japan. Although macro-level movements in nominal wages and prices seemed to be synchronized, such synchronicity was not observed at the industry level. Therefore, wage deflation does not seem to be a primary factor of Japan’s prolonged deflation.

    Introduction

    Most central banks are now targeting a positive inflation rate of a few percentage points. One of the reasons for not targeting a zero inflation rate is the downward rigidity of nominal wages, which could cause huge inefficiency in the resource allocation of the labor market (Akerlof et al. 1996). By creating an environment in which real wages can be adjusted, a positive inflation rate thereby serves as a “safety margin” against the risk of declining prices.

  • Micro Price Dynamics during Japan’s Lost Decades

    Abstract

    We study micro price dynamics and their macroeconomic implications using daily scanner data from 1988 to 2013. We provide five facts. First, posted prices in Japan are ten times as flexible as those in U.S. scanner data. Second, regular prices are almost as flexible as those in the U.S. and the euro area. Third, heterogeneity is large. Fourth, during Japan’s lost decades, temporary sales played an increasingly important role. Fifth, the frequency of upward regular price revisions and the frequency of sales are significantly correlated with macroeconomic conditions such as labor market indicators.
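
    A minimal sketch of the kind of statistic behind these facts (the series, function name, and constant-hazard assumption are ours, purely for illustration): the frequency of price changes in a posted-price series and the implied average duration of a price spell.

    ```python
    import numpy as np

    def price_change_stats(prices):
        """Share of periods with a price change and, under a constant monthly
        hazard, the implied average duration of a price spell in months."""
        p = np.asarray(prices, dtype=float)
        changed = p[1:] != p[:-1]
        freq = changed.mean()
        duration = -1.0 / np.log(1.0 - freq) if 0.0 < freq < 1.0 else float("inf")
        return freq, duration

    # toy monthly posted-price series for one product at one store (yen)
    posted = [198, 198, 178, 198, 198, 198, 188, 188, 198, 198, 198, 178]
    freq, duration = price_change_stats(posted)
    print(f"frequency = {freq:.2f}, implied duration = {duration:.1f} months")
    ```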

    Introduction

    Since the asset price bubble burst in the early 1990s, Japan has gone through prolonged stagnation and very low rates of inflation (see Figure 1). To investigate the background to this, we study micro price dynamics at the level of individual retail shops and products. In doing so, we use daily scanner or point-of-sale (POS) data from 1988 to 2013 covering over 6 billion records. From the data, we examine how firms’ price setting changed over this period; report similarities and differences in micro price dynamics between Japan and other countries; and draw implications for economic theory as well as policy.

  • Chronic Deflation in Japan

    Abstract

    Japan has suffered from long-lasting but mild deflation since the latter half of the 1990s. Estimates of a standard Phillips curve indicate that a decline in inflation expectations, the negative output gap, and other factors such as a decline in import prices and a higher exchange rate all account for some of this development. These factors, in turn, reflect various underlying structural features of the economy. This paper examines a long list of such structural features that may explain Japan’s chronic deflation, including the zero lower bound on the nominal interest rate, public attitudes toward the price level, central bank communication, weaker growth expectations coupled with declining potential growth or a lower natural rate of interest, risk-averse banking behavior, deregulation, and the rise of emerging economies.

    Introduction

    Why have price developments in Japan been so weak for such a long time? What can leading-edge economic theory and research tell us about the possible causes behind these developments? Despite the obvious policy importance of these questions, there has been no consensus among practitioners or academics. This paper is an attempt to shed some light on these issues by drawing on recent work on the subject in the literature.

  • A pass-through revival

    Abstract

    It has been argued that the pass-through of the exchange rate and import prices to domestic prices has declined in recent years. This paper argues that it has come back strong, at least in Japan, in the most recent years. To make this point, I estimate a VAR model for Japanese exchange rates and prices with time-varying parameters and time-varying volatility. This method allows me to estimate the responses of domestic prices to the exchange rate and import prices at different points in time. I find that the response was fairly strong in the early 1980s but had since gone down considerably. Since the early 2000s, however, pass-through has started to show signs of life again. This implies that the exchange rate may have regained the status of an important policy transmission mechanism to domestic prices. At the end of the paper, I look for a possible cause of this pass-through revival by studying the evolution of the Japanese Input-Output structure.

    Introduction

    This paper re-examines the effects of the exchange rate and import prices on Japanese domestic prices. In recent literature, it has been claimed that the extent of pass-through has declined substantially. My goal is to re-examine this claim by studying the most updated data from Japan, using an approach that allows for flexible forms of structural changes.

  • Product Downsizing and Hidden Price Increases: Evidence from Japan’s Deflationary Period

    Abstract

    Consumer price inflation in Japan has been below zero since the mid-1990s. Given this, it is difficult for firms to raise product prices in response to an increase in marginal costs. One pricing strategy firms have taken in this situation is to reduce the size or the weight of a product while leaving the price more or less unchanged, thereby raising the effective price. In this paper, we empirically examine the extent to which product downsizing occurred in Japan as well as the effects of product downsizing on prices and quantities sold. Using scanner data on prices and quantities for all products sold at about 200 supermarkets over the last ten years, we find that about one third of product replacements that occurred in our sample period were accompanied by a size/weight reduction. The number of product replacements with downsizing has been particularly high since 2007. We also find that prices, on average, did not change much at the time of product replacement, even if a product replacement was accompanied by product downsizing, resulting in an effective price increase. However, comparing the magnitudes of product downsizings, our results indicate that prices declined more for product replacements that involved a larger decline in size or weight. Finally, we show that the quantities sold decline with product downsizing, and that the responsiveness of quantity purchased to size/weight changes is almost the same as the price elasticity, indicating that consumers are as sensitive to size/weight changes as they are to price changes. This implies that quality adjustments based on per-unit prices, which are widely used by statistical agencies in countries around the world, may be an appropriate way to deal with product downsizing.
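
    As a purely illustrative calculation (the package sizes are ours, not from the data): if a product shrinks from 500g to 450g while its shelf price p is left unchanged, the effective (per-unit) price rises by

    \[ \frac{p/450\,\mathrm{g}}{p/500\,\mathrm{g}} - 1 \;=\; \frac{500}{450} - 1 \;\approx\; 11\%, \]

    which is the kind of hidden price increase that quality adjustment based on per-unit prices is meant to capture.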

    Introduction

    Consumer price inflation in Japan has been below zero since the mid-1990s, clearly indicating the emergence of deflation over the last 15 years. The rate of deflation as measured by the headline consumer price index (CPI) has been around 1 percent annually, which is much smaller than the rates observed in the United States during the Great Depression, indicating that although Japan’s deflation is persistent, it is only moderate. It has been argued by researchers and practitioners that at least in the early stages the main cause of deflation was weak aggregate demand, although deflation later accelerated due to pessimistic expectations reflecting firms’ and households’ view that deflation was not a transitory but a persistent phenomenon and that it would continue for a while.

  • Why are product prices in online markets not converging?

    Abstract

    Why are product prices in online markets dispersed in spite of very small search costs? To address this question, we construct a unique dataset from a Japanese price comparison site, which records the price quotes offered by e-retailers as well as customers’ clicks on products, which occur when a customer proceeds toward purchasing the product. We find that the distribution of prices retailers quote for a particular product at a particular point in time (divided by the lowest price) follows an exponential distribution, showing the presence of substantial price dispersion. For example, 20 percent of all retailers quote prices that are more than 50 percent higher than the lowest price. Next, comparing the probability that customers click on a retailer with a particular price rank and the probability that retailers post prices at a particular rank, we show that both decline exponentially with price rank and that the exponents associated with the two probabilities are quite close. This suggests that the reason why some retailers set prices at a level substantially higher than the lowest price is that they know some customers will choose them even at that high price. Based on these findings, we hypothesize that price dispersion in online markets stems from heterogeneity in customers’ preferences over retailers; that is, customers choose a set of candidate retailers based on their preferences, which are heterogeneous across customers, and then pick a particular retailer among the candidates based on the price ranking.
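
    A minimal sketch of the first finding (the quotes below are invented for illustration; the site's data are proprietary): normalize each quote by the lowest quote for the product, fit an exponential distribution to the resulting margins by maximum likelihood, and compare a tail share in the data with the fitted value.

    ```python
    import numpy as np

    # toy price quotes for a single product at one point in time (illustrative)
    quotes = np.array([9800, 9850, 9980, 10200, 10400, 10900, 11800, 12900, 15200])

    # margins over the lowest price; for an exponential distribution the MLE of
    # the rate parameter is simply 1 / mean
    margins = quotes / quotes.min() - 1.0
    rate = 1.0 / margins[margins > 0].mean()

    # share of quotes more than 50% above the lowest price: data vs exponential fit
    share_data = (margins > 0.5).mean()
    share_fit = np.exp(-rate * 0.5)
    print(f"rate = {rate:.2f}, share above +50%: data {share_data:.2f}, fit {share_fit:.2f}")
    ```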

    Introduction

    The number of internet users worldwide is 2.4 billion, constituting about 35 percent of the global population. The number of users has more than doubled over the last five years and continues to increase [1]. In the early stages of the internet boom, observers predicted that the spread of the internet would lead the retail industry toward a state of perfect competition, or a Bertrand equilibrium [2]. For instance, The Economist stated in 1990 that “[t]he explosive growth of the Internet promises a new age of perfectly competitive markets. With perfect information about prices and products at their fingertips, consumers can quickly and easily find the best deals. In this brave new world, retailers’ profit margins will be competed away, as they are all forced to price at cost” [3]. Even academic researchers argued that online markets would soon be close to perfectly competitive markets [4][5][6][7].

  • Detecting Real Estate Bubbles: A New Approach Based on the Cross-Sectional Dispersion of Property Prices

    Abstract

    We investigate the cross-sectional distribution of house prices in the Greater Tokyo Area for the period 1986 to 2009. We find that size-adjusted house prices follow a lognormal distribution except during the period of the housing bubble and its collapse in Tokyo, for which the price distribution has a substantially heavier right tail than that of a lognormal distribution. We also find that, during the bubble era, sharp price movements were concentrated in particular areas, and that this spatial heterogeneity is the source of the fat upper tail. These findings suggest that, during a bubble period, prices go up prominently for particular properties but not so much for others, and as a result, price inequality across properties increases. In other words, the defining property of real estate bubbles is not the rapid price hike itself but an increase in price dispersion. We argue that the shape of cross-sectional house price distributions may contain information useful for the detection of housing bubbles.
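
    A minimal sketch of the kind of diagnostic this suggests (synthetic data and an arbitrary 99th-percentile cutoff, chosen by us for illustration): compare an upper quantile of log size-adjusted prices with the quantile implied by a fitted normal distribution; a clearly positive gap points to a right tail heavier than lognormal.

    ```python
    import numpy as np
    from scipy import stats

    def right_tail_excess(log_prices, q=0.99):
        """Gap between the empirical upper quantile of log prices and the
        quantile implied by a fitted normal; positive values suggest a
        heavier-than-lognormal right tail."""
        x = np.asarray(log_prices, dtype=float)
        mu, sigma = x.mean(), x.std(ddof=1)
        return np.quantile(x, q) - stats.norm.ppf(q, loc=mu, scale=sigma)

    rng = np.random.default_rng(1)
    calm = rng.normal(size=20_000)                              # lognormal benchmark
    bubble = np.concatenate([rng.normal(size=19_000),           # most properties
                             rng.normal(loc=3.0, size=1_000)])  # a "hot" segment
    print(round(right_tail_excess(calm), 2), round(right_tail_excess(bubble), 2))
    ```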

    Introduction

    Property market developments are of increasing importance to practitioners and policymakers. The financial crises of the past two decades have illustrated just how critical the health of this sector can be for achieving financial stability. For example, the recent financial crisis in the United States in its early stages reared its head in the form of the subprime loan problem. Similarly, the financial crises in Japan and Scandinavia in the 1990s were all triggered by the collapse of bubbles in the real estate market. More recently, the rapid rise in real estate prices - often supported by a strong expansion in bank lending - in a number of emerging market economies has become a concern for policymakers. Given these experiences, it is critically important to analyze the relationship between property markets, finance, and financial crises.

  • Repos in Over-the-Counter Markets

    Abstract

    This paper presents a dynamic matching model featuring dealers and short-term investors in an over-the-counter bond market. The model illustrates that bilateral bargaining in an over-the-counter market results in an endogenous bond-liquidation cost for short-term investors. This cost makes short-term investors need repurchase agreements to buy long-term bonds. The cost also explains the existence of a margin specific to repurchase agreements held by short-term investors, if repurchase agreements must be renegotiation-proof. Without repurchase agreements, short-term investors do not buy long-term bonds. In this case, the bond yield rises unless dealers have enough capital to buy and hold bonds.

    Introduction

    Repurchase agreements, or repos, are one of the primary instruments in the money market. In a repo, a short-term investor buys bonds with a future contract in which the seller of the bonds, typically a bond dealer, promises to buy back the bonds at a later date. A question arises from this observation regarding why investors need such a promise when they can simply buy and resell bonds in a series of spot transactions. In this paper, I present a model to illustrate that a bond-liquidation cost due to over-the-counter (OTC) trading can explain short-term investors’ need for repos in bond markets. It is not necessary to introduce uncertainty or asymmetric information to obtain this result.

  • Estimating Quality Adjusted Commercial Property Price Indexes Using Japanese REIT Data

    Abstract

    We propose a new method to estimate quality adjusted commercial property price indexes using real estate investment trust (REIT) data. Our method is based on the present value approach, but the way the denominator (i.e., the discount rate) and the numerator (i.e., cash flows from properties) are estimated differs from the traditional method. We estimate the discount rate based on the share prices of REITs, which can be regarded as the stock market’s valuation of the set of properties owned by the REITs. As for the numerator, we use rental prices associated only with new rental contracts rather than those associated with all existing contracts. Using a dataset with prices and cash flows for about 500 commercial properties included in Japanese REITs for the period 2003 to 2010, we find that our price index signals turning points much earlier than an appraisal-based price index; specifically, our index peaks in the first quarter of 2007, while the appraisal-based price index exhibits a turnaround only in the third quarter of 2008. Our results suggest that the share prices of REITs provide useful information in constructing commercial property price indexes.

    Introduction

    Looking back at the history of economic crises, there are a considerable number of cases where a crisis was triggered by the collapse of real estate price bubbles. For example, it is widely accepted that the collapse of Japan’s land/stock price bubble in the early 1990s has played an important role in the subsequent economic stagnation, and in particular the banking crisis that started in the latter half of the 1990s. Similarly, the Nordic banking crisis in the early 1990s also occurred in tandem with a property bubble collapse, while the global financial crisis that began in the U.S. in 2008 and the recent European debt crisis were also triggered by the collapse of bubbles in the property and financial markets.

  • The Emergence of Different Tail Exponents in the Distributions of Firm Size Variables

    Abstract

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three-dimensional space (log K, log L, log Y), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
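
    A minimal sketch of fitting such a plane in (log K, log L, log Y) space (synthetic data; the coefficients 0.35 and 0.65 are assumptions for illustration, not the paper's estimates):

    ```python
    import numpy as np

    # synthetic firm-level data from Y = A * K^alpha * L^beta with log-normal noise
    rng = np.random.default_rng(2)
    n = 50_000
    logK = rng.normal(3.0, 1.2, n)
    logL = rng.normal(2.0, 1.0, n)
    logY = 0.3 + 0.35 * logK + 0.65 * logL + rng.normal(0.0, 0.3, n)

    # fit the plane log Y = c + alpha * log K + beta * log L by least squares
    X = np.column_stack([np.ones(n), logK, logL])
    coef, *_ = np.linalg.lstsq(X, logY, rcond=None)
    print("intercept, alpha, beta ~", np.round(coef, 2))
    ```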

    Introduction

    In various phase transitions, it is universally observed that physical quantities near critical points obey power laws. For instance, in magnetic substances, the specific heat, the magnetic dipole density, and the magnetic susceptibility follow power laws in the temperature or the magnetic field near the critical point. It is also known that the cluster-size distribution of spins follows a power law. The renormalization group approach has been employed to confirm that these power laws arise as critical phenomena of phase transitions [1].

  • High quality topic extraction from business news explains abnormal financial market volatility

    Abstract

    Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affects trading and the pricing of firms in organized stock markets. In this paper we seek to address this issue by performing an analysis of more than 24 million news records provided by Thomson Reuters and of their relationship with trading activity for 205 major stocks in the S&P US stock index. We show that the whole landscape of news affecting stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their “thematic” features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized facts in financial economics, namely that at certain times trading volumes appear to be “abnormally large,” can be explained by the flow of news. In this sense, our results prove that there is no “excess trading” when the news is genuinely novel and provides relevant financial information.
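
    A minimal sketch of the pipeline described here (a toy corpus, illustrative volume figures, and default hyperparameters of our own choosing; the paper's actual implementation is far larger and more careful): decompose news into topic features, then run a regularized regression of trading activity on those features.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import Lasso

    # toy corpus of news headlines and a matching series of trading volume;
    # real inputs would be millions of news records and per-stock volumes
    headlines = [
        "central bank raises rates on inflation fears",
        "chipmaker beats earnings forecasts",
        "oil prices surge after supply disruption",
        "regulator probes accounting at retailer",
        "merger talks between two large banks",
        "earnings warning sends shares lower",
    ]
    volume = np.array([1.8, 1.2, 1.5, 2.1, 1.7, 2.4])   # illustrative values

    # step 1: decompose news into a small number of "thematic" topic features
    counts = CountVectorizer(stop_words="english").fit_transform(headlines)
    topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

    # step 2: regularized regression of trading activity on topic exposures
    model = Lasso(alpha=0.05).fit(topics, volume)
    print(np.round(model.coef_, 2))   # which themes load on trading activity
    ```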

    Introduction

    Neoclassical financial economics based on the “efficient market hypothesis” (EMH) considers price movements as almost perfect instantaneous reactions to information flows. Thus, according to the EMH, price changes simply reflect exogenous news. Such news - of all possible types (geopolitical, environmental, social, financial, economic, etc.) - lead investors to continuously reassess their expectations of the cash flows that firms’ investment projects could generate in the future. These reassessments are translated into readjusted demand/supply functions, which then push prices up or down, depending on the net imbalance between demand and supply, towards a fundamental value. As a consequence, observed prices are considered the best embodiments of the present value of future cash flows. In this view, market movements are purely exogenous without any internal feedback loops. In particular, the most extreme losses occurring during crashes are considered to be solely triggered exogenously.

  • How Fast Are Prices in Japan Falling?

    Abstract

    The consumer price inflation rate in Japan has been below zero since the mid-1990s. However, despite the presence of a substantial output gap, the rate of deflation has been much smaller than that observed in the United States during the Great Depression. Given this, doubts have been raised regarding the accuracy of Japan’s official inflation estimates. Against this background, the purpose of this paper is to investigate to what extent estimates of the inflation rate depend on the methodology adopted. Our specific focus is on how inflation estimates depend on the method of outlet, product, and price sampling employed. For the analysis, we use daily scanner data on prices and quantities for all products sold at about 200 supermarkets over the last ten years. We regard this dataset as the “universe” and send out (virtual) price collectors to conduct sampling following more than sixty different sampling rules. We find that the officially released outcome can be reproduced when employing a sampling rule similar to the one adopted by the Statistics Bureau. However, we obtain numbers quite different from the official ones when we employ different rules. The largest rate of deflation we find using a particular rule is about 1 percent per year, which is twice as large as the official number, suggesting the presence of substantial upward bias in the official inflation rate. Nonetheless, our results show that the rate of deflation over the last decade is still small relative to that in the United States during the Great Depression, indicating that Japan’s deflation is moderate.

    Introduction

    The consumer price index (CPI) inflation rate in Japan has been below zero since the mid-1990s, clearly indicating the emergence of deflation over the last 15 years. However, the rate of deflation measured by headline CPI in each year was around 1 percent, which is much smaller than the rates observed in the United States during the Great Depression. Some suggest that this simply reflects the fact that although Japan’s deflation is persistent, it is only moderate. Others, both inside and outside the country, however, argue that something must be wrong with the deflation figures, questioning Japan’s price data from a variety of angles. One of these is that, from an economic perspective, the rate of deflation, given the huge and persistent output gap in Japan, should be higher than the numbers released by the government suggest. Fuhrer et al. (2011), for example, estimating a NAIRU model for Japan, conclude that it would not have been surprising if the rate of deflation had reached 3 percent per year. Another argument focuses on the statistics directly. Broda and Weinstein (2007) and Ariga and Matsui (2003), for example, maintain that there remains non-trivial mismeasurement in the Japanese consumer price index, so that the officially released CPI inflation rate over the last 15 years contains substantial upward bias.

  • Moderate but persistent deflation in Japan (in Japanese)

    Abstract

    In Japan, since the second half of the 1990s, the policy interest rate has been at zero while the inflation rate has also hovered around zero. This “two zeros” phenomenon characterizes the monetary side of the Japanese economy during this period and forms a pair with the prolonged stagnation of growth that characterizes the real side. This paper surveys the research that has been conducted to uncover the causes of the “two zeros” phenomenon.
    Regarding the zero interest rate phenomenon, one view holds that it was triggered by the natural rate of interest (the real interest rate that equates saving and investment) falling to a negative level, while another holds that firms and households came, for some reason, to hold strong deflationary expectations, which then became the starting point of a self-fulfilling deflationary equilibrium. According to our estimates, Japan’s natural rate of interest has been quite low since the second half of the 1990s and turned negative in some periods. Meanwhile, households expecting prices to fall are a minority. These facts indicate that the negative natural interest rate hypothesis is the leading explanation for Japan’s zero interest rates, although the possibility cannot be ruled out that strong expectations of yen appreciation on the part of firms and households have been the starting point of a self-fulfilling deflationary equilibrium. As for prices, more than 90 percent of firms report that they do not immediately change the selling prices of their products even when costs or demand change, so price rigidity clearly exists. Moreover, analysis using POS data shows that since the second half of the 1990s prices have been revised more frequently while the size of each revision has become smaller. Such small, frequent price changes have made the decline in prices gradual. Behind these small price changes may lie stronger mutual monitoring among stores and firms, in the sense that a firm changes its price if its rivals do and keeps it unchanged if they do not.
    The “two zeros” phenomenon is closely related to two ideas put forward by Keynes: the “liquidity trap” and “price rigidity.” Yet the liquidity trap received little serious study after Keynes, and research exploring the causes of price rigidity in the data has taken off only in the last ten years. This is part of the reason why the debate over the “two zeros” phenomenon has been confused and the policy response delayed. Researchers are called upon to tackle the homework that Keynes left behind.

    Introduction

    If we divide macroeconomic phenomena into a real side and a monetary side, the most important real-side phenomenon after the collapse of the bubble in the early 1990s was the decline in the growth rate. The fall in growth and the accompanying loss of employment were pressing problems for many people, and researchers conducted a wide range of studies on the “lost decade.” The monetary side, by contrast, attracted little attention, at least immediately after the bubble burst, and drew little interest from researchers. In fact, however, important changes were also under way on the monetary side during this period.

  • Emergence of power laws with different power-law exponents from reversal quasi-symmetry and Gibrat’s law

    Abstract

    To explore the emergence of power laws in social and economic phenomena, the authors discuss the mechanism whereby reversal quasi-symmetry and Gibrat’s law lead to power laws with different power-law exponents. Reversal quasi-symmetry is invariance under the exchange of variables in the joint PDF (probability density function). Gibrat’s law means that the conditional PDF of the growth rate of variables does not depend on the initial value. By employing empirical worldwide data on firm size, from categories such as plant assets K, the number of employees L, and sales Y in the same year, reversal quasi-symmetry, Gibrat’s law, and power-law distributions are observed. We note that the relations between the power-law exponents and the parameter of reversal quasi-symmetry in the same year are confirmed here for the first time. Reversal quasi-symmetry not only of two variables but also of three variables is considered. The authors claim the following. There is a plane in the 3-dimensional space (log K, log L, log Y) with respect to which the joint PDF PJ(K, L, Y) is invariant under the exchange of variables. The plane accurately fits empirical data (K, L, Y) that follow power-law distributions. This plane corresponds to the Cobb-Douglas production function, Y = AK^α L^β, which is frequently hypothesized in economics.
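
    The plane-fitting step can be illustrated with synthetic data: regressing log Y on log K and log L recovers the Cobb-Douglas exponents when the data are indeed generated from Y = AK^α L^β. This is only a sketch, not the authors’ estimation procedure.

        # Minimal sketch (synthetic firms): fit the plane log Y = log A + a*log K + b*log L
        # in (log K, log L, log Y) space, i.e., a Cobb-Douglas production function.
        import numpy as np

        rng = np.random.default_rng(0)
        K = rng.lognormal(mean=10, sigma=1.5, size=5000)          # synthetic plant assets
        L = rng.lognormal(mean=5,  sigma=1.2, size=5000)          # synthetic employment
        Y = 2.0 * K**0.4 * L**0.6 * rng.lognormal(0, 0.3, 5000)   # synthetic sales

        X = np.column_stack([np.ones(len(K)), np.log(K), np.log(L)])
        coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
        logA, alpha, beta = coef
        print(f"alpha={alpha:.3f}, beta={beta:.3f}")   # should recover ~0.4 and ~0.6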

    Introduction

    In various phase transitions, it has been universally observed that physical quantities near critical points obey power laws. For instance, in magnetic substances, the specific heat, magnetic dipole density, and magnetic susceptibility follow power laws in temperature or magnetic flux. We also know that the cluster-size distribution of spins follows power laws. Renormalization group methods explain these power laws as critical phenomena of phase transitions [1].

  • Beauty Contests and Fat Tails in Financial Markets

    Abstract

    This paper demonstrates that fat-tailed distributions of trade volume and stock returns emerge in a simultaneous-move herding model of rational traders who infer other traders’ private information on the value of assets by observing aggregate actions. Without parametric assumptions on the private information, I analytically show that the traders’ aggregate actions follow a power-law distribution with exponential truncation. Numerical simulations show that the model is able to generate the fat-tailed distributions of returns as observed empirically. I argue that the learning among a large number of traders leads to a criticality condition for the power-law clustering of actions.

    Introduction

    Since Mandelbrot [27] and Fama [14], it has been well established that the short-term stock returns exhibit a fat-tailed, leptokurtic distribution. Jansen and de Vries [20], for example, estimated the exponent of the power-law tail to be in the range 3 to 5, which warrants a finite variance and yet deviates greatly from the normal distribution in the fourth moment. This anomaly in the tail and kurtosis has been considered as a reason for the excess volatility of stock returns.

  • A New Method for Measuring Tail Exponents of Firm Size Distributions

    Abstract

    We propose a new method for estimating the power-law exponents of firm size variables. Our focus is on how to empirically identify the range in which a firm size variable follows a power-law distribution. As is well known, a firm size variable follows a power-law distribution only beyond some threshold. On the other hand, in almost all empirical exercises, the right end of the distribution deviates from a power law due to finite size effects. We modify the method proposed by Malevergne et al. (2011) so that we can identify both the lower and the upper thresholds and then estimate the power-law exponent using only observations in the range defined by the two thresholds. We apply this new method to various firm size variables, including annual sales, the number of workers, and tangible fixed assets for firms in more than thirty countries.

    Introduction

    Power-law distributions are frequently observed in social phenomena (e.g., Pareto (1897); Newman (2005); Clauset et al. (2009)). One of the most famous examples in economics is the fact that personal income follows a power law, which was first found by Pareto (1897) about a century ago and is thus referred to as the Pareto distribution. Specifically, the probability that personal income x is above x0 is given by

    P>(x) ∝ x^−µ   for x > x0,

    where µ is referred to as the Pareto exponent or the power-law exponent.
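
    A minimal sketch of estimating µ on a restricted range is given below: it treats the observations between an assumed lower and upper threshold as truncated-Pareto and maximizes the likelihood. It is not the modified Malevergne et al. (2011) procedure itself, and the thresholds are taken as given rather than estimated.

        # Minimal sketch (synthetic data): ML estimate of a power-law exponent mu using
        # only observations between a lower and an upper threshold (truncated Pareto).
        import numpy as np
        from scipy.optimize import minimize_scalar

        def mu_hat(x, lo, hi):
            x = x[(x >= lo) & (x <= hi)]
            def nll(mu):
                # truncated Pareto density on [lo, hi]: f(x) = mu*x^(-mu-1) / (lo^-mu - hi^-mu)
                norm = lo**(-mu) - hi**(-mu)
                return -(np.log(mu) - (mu + 1) * np.log(x) - np.log(norm)).sum()
            return minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x

        rng = np.random.default_rng(1)
        sample = (rng.pareto(1.0, 100_000) + 1.0)      # synthetic Pareto(mu = 1) firm sizes
        print(mu_hat(sample, lo=10.0, hi=1_000.0))     # estimate using the middle range only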

  • Nominal Wage Rigidity in Japan (1993-2006): A Quasi-panel Approach (in Japanese, abstract in English)

    Abstract

    We examine the downward rigidity of nominal wages in Japan between 1993 and 2006 by constructing a quasi-panel data set of individual workers from the microdata of the Basic Survey on Wage Structure. The results are as follows: although the downward rigidity of the nominal hourly wage of full-time regular workers is weak in Japan, it has increased since around 2000. The flexibility of hourly wages comes from changes in scheduled hours worked. During economic downturns, many establishments reduce the hourly wage by increasing scheduled hours worked while keeping the amount of payment constant.

    Introduction

    Wages are an important price variable determined in the labor market. In macroeconomic analysis, the way wages are set is often regarded as shaping the effects of fiscal and monetary policy. Within the labor market as well, the behavior of wages, that is, of the equilibrium price, in response to changes in demand and supply continues to give researchers important clues as an element that represents the functioning of the market itself.

  • A New Method for Identifying the Effects of Foreign Exchange Interventions

    Abstract

    Central banks react even to intraday changes in the exchange rate; however, in most cases, intervention data is available only at a daily frequency. This temporal aggregation makes it difficult to identify the effects of interventions on the exchange rate. We apply the Bayesian MCMC approach to this endogeneity problem. We use “data augmentation” to obtain intraday intervention amounts and estimate the efficacy of interventions using the augmented data. Applying this new method to Japanese data, we find that an intervention of one trillion yen moves the yen/dollar rate by 1.7 percent, which is more than twice as much as the magnitude reported in previous studies applying OLS to daily observations. This shows the quantitative importance of the endogeneity problem due to temporal aggregation.

    Introduction

    Are foreign exchange interventions effective? This issue has been debated extensively since the 1980s, but no conclusive consensus has emerged. A key difficulty faced by researchers in answering this question is the endogeneity problem: the exchange rate responds “within the period” to foreign exchange interventions and the central bank reacts “within the period” to fluctuations in the exchange rate. This difficulty would not arise if the central bank responded only slowly to fluctuations in the exchange rate, or if the data sampling interval were sufficiently fine.

  • House Prices at Different Stages of the Buying/Selling Process

    Abstract

    In constructing a housing price index, one has to make at least two important choices. The first is the choice among alternative estimation methods. The second is the choice among different data sources of house prices. The choice of the dataset has been regarded as critically important from a practical viewpoint, but has not been discussed much in the literature. This study seeks to fill this gap by comparing the distributions of prices collected at different stages of the house buying/selling process, including (1) asking prices at which properties are initially listed in a magazine, (2) asking prices when an offer for a property is eventually made and the listing is removed from the magazine, (3) contract prices reported by realtors after mortgage approval, and (4) registry prices. These four prices are collected by different parties and recorded in different datasets. We find that there exist substantial differences between the distributions of the four prices, as well as between the distributions of house attributes. However, once quality differences are controlled for, only small differences remain between the different house price distributions. This suggests that prices collected at different stages of the house buying/selling process are still comparable, and therefore useful in constructing a house price index, as long as they are quality adjusted in an appropriate manner.

    Introduction

    In constructing a housing price index, one has to make several nontrivial choices. One of them is the choice among alternative estimation methods, such as repeat-sales regression, hedonic regression, and so on. There are numerous papers on this issue, both theoretical and empirical. Shimizu et al. (2010), for example, conduct a statistical comparison of several alternative estimation methods using Japanese data. However, there is another important issue which has not been discussed much in the literature, but has been regarded as critically important from a practical viewpoint: the choice among different data sources for housing prices. There are several types of datasets for housing prices: datasets collected by real estate agencies and associations; datasets provided by mortgage lenders; datasets provided by government departments or institutions; and datasets gathered and provided by newspapers, magazines, and websites. Needless to say, different datasets contain different types of prices, including sellers’ asking prices, transactions prices, valuation prices, and so on.

  • A New Method for Specifying Functional Form of Production Function (in Japanese)

    Abstract

    This paper proposes a method for selecting the functional form of the production function. Firms come in all sizes, from tiny firms run by a few employees to giant corporations with hundreds of thousands of employees. The firm size distribution describes how many firms of each size exist, and each of the variables measuring firm size, Y (output), K (capital), and L (labor), is known to follow a power-law distribution. This paper focuses on the relationship between two functions, the distribution function of firm size and the production function, and proposes a method that uses this relationship to identify the shape of the production function. Specifically, starting from the functional forms of the distributions of K and L observed in the data, we derive the distribution function of Y that would obtain if the production function took a particular form, and compare it with the distribution function of Y observed in the data. Applying this method to 25 countries including Japan, we find that in most countries and industries the Cobb-Douglas form is the one consistent with the distributions of Y, K, and L. We also find that the firms forming the tail of the distribution of Y, that is, the giant firms, tend to have exceptionally large Y because their inputs of K and L are exceptionally large; we find no tendency for Y to be exceptionally large because total factor productivity is exceptionally high.

    Introduction

    Various functional forms for the firm production function, such as the Cobb-Douglas and Leontief forms, have been proposed and are widely used by both micro and macro researchers. In studies of macro productivity, for example, the Cobb-Douglas production function is widely used to estimate total factor productivity. But why can the relationship between output Y, capital K, and employment L be represented by the particular functional form of Cobb-Douglas? In what circumstances is that appropriate? Studies that go this far are limited. In many empirical studies, the common expedient is to try several functional forms and choose among them on the basis of regression fit.

  • Price Rigidity and Market Structure: Evidence from the Japanese Scanner Data

    Abstract

    This paper investigates whether price rigidity arises out of specific market structures, such as the degree of market concentration and the pricing decisions of retailers and manufacturers. Using Japanese scanner data containing transaction prices and sales for more than 1,600 commodity groups from 1988 to 2008, we find a statistically significant negative correlation between the degree of market concentration and the frequency of price changes, including both bargain price changes and regular price changes. The results of a two-way analysis of variance suggest that the variation in the frequency of price changes depends on differences among manufacturers as well as those among retailers.

    Introduction

    The relationship between price rigidity and market structure has been discussed since the American economist Gardiner C. Means suggested, in a Senate document in 1935, that the downward rigidity of prices during the Great Depression was related to industrial concentration. The implication of Means' findings is that prices in less competitive markets tend to be sticky. This is referred to as the "administered prices" hypothesis (Domberger, 1979) and still attracts considerable attention. This is partly because the empirical literature in this field has found strong heterogeneity in price stickiness across commodity items and is interested in the determinants of item-level variation in the frequency of price changes.

  • Fiscal Policy Switching in Japan, the U.S., and the U.K.

    Abstract

    This paper estimates fiscal policy feedback rules in Japan, the United States, and the United Kingdom for more than a century, allowing for stochastic regime changes. Estimating a Markov-switching model by the Bayesian method, we find the following: First, the Japanese data clearly reject the view that the fiscal policy regime is fixed, i.e., that the Japanese government adopted a Ricardian or a non-Ricardian regime throughout the entire period. Instead, our results indicate a stochastic switch of the debt-GDP ratio between stationary and nonstationary processes, and thus a stochastic switch between Ricardian and non-Ricardian regimes. Second, our simulation exercises using the estimated parameters and transition probabilities do not necessarily reject the possibility that the debt-GDP ratio may be nonstationary even in the long run (i.e., globally nonstationary). Third, the Japanese result is in sharp contrast with the results for the U.S. and the U.K., which indicate that in these countries the government’s fiscal behavior is consistently characterized by Ricardian policy.

    Introduction

    Recent studies about the conduct of monetary policy suggest that the fiscal policy regime has important implications for the choice of desirable monetary policy rules, particularly, monetary policy rules in the form of inflation targeting (Sims (2005), Benigno and Woodford (2007)). It seems safe to assume that fiscal policy is characterized as “Ricardian” in the terminology of Woodford (1995), or “passive” in the terminology of Leeper (1991), if the government shows strong fiscal discipline. If this is the case, we can design an optimal monetary policy rule without paying any attention to fiscal policy. However, if the economy is unstable in terms of the fiscal situation, it would be dangerous to choose a monetary policy rule independently of fiscal policy rules. For example, some researchers argue that the recent accumulation of public debt in Japan is evidence of a lack of fiscal discipline on the part of the Japanese government, and that it is possible that government bond market participants may begin to doubt the government’s intention and ability to repay the public debt. If this is the case, we may need to take the future evolution of the fiscal regime into consideration when designing a monetary policy rule.

  • Fair or unfair pricing in internet auctions (in Japanese)

    Abstract

    Why are prices rigid? Arthur Okun explained that customers regard raising prices when demand increases as unfair, so firms and stores, fearing customers’ anger, do not raise prices. Changing the price tag on shovels to exploit the surge in demand on a snowy day, for example, is unfair. To test whether this fairness hypothesis also applies to internet auction markets, this paper analyzes changes in the prices of face masks on the Yahoo! Auctions marketplace during the 2009 swine flu scare. The fraction of mask listings that ended in a successful bid (the number of successful bids divided by the number of listings) rose above 80 percent in early May and in late August, indicating that demand was concentrated in those periods. The former is when the first suspected case of infection appeared in Japan, and the latter is when the government declared that a full-scale epidemic had begun. In the May episode, sellers raised both the “starting” price (the price at which bidding begins) and the “buy-it-now” price (the price at which a bidder can win the item immediately without an auction). The buy-it-now price in particular was raised substantially relative to the starting price, suggesting an intention to push winning bids upward. In the August episode, by contrast, starting prices were raised only slightly and buy-it-now prices were not raised at all. The difference between May and August stems from differences in seller attributes: in the May episode sellers were mainly individuals, whereas in the August episode they were mainly firms. Firms, mindful of their reputation with buyers, can be interpreted as having refrained from raising prices to exploit the increase in demand. Okun emphasized the importance of distinguishing between customer markets, in which sellers and buyers have long-term relationships, and auction markets, in which they do not, and argued that the fairness hypothesis applies only to the former. Our results show that, from the viewpoint of fairness, internet auction markets have properties closer to those of customer markets.

    Introduction

    According to a survey of firms conducted in spring 2008 by the Research Center for Price Dynamics at Hitotsubashi University, 90 percent of respondents answered that they do not immediately change their shipping prices in response to fluctuations in demand or costs. Microeconomics teaches that when the demand curve or the supply curve shifts, the equilibrium moves to a new intersection and prices change immediately. In reality, however, firms do not change prices right away even when the demand and cost environment surrounding them changes. This phenomenon is called price rigidity or price stickiness. Price rigidity is a concept at the core of macroeconomics: it is precisely because prices do not adjust instantaneously that fluctuations in unemployment and capacity utilization arise.

  • Mild deflation at the zero lower bound on interest rates (in Japanese)

    Abstract

    In Japan, since the second half of the 1990s, the policy interest rate has been at zero while the inflation rate has also hovered around zero. This “two zeros” phenomenon characterizes the monetary side of the Japanese economy during this period and forms a pair with the prolonged stagnation of growth that characterizes the real side. This paper surveys the research that has been conducted to uncover the causes of the “two zeros” phenomenon.
    Regarding the zero interest rate phenomenon, one view holds that it was triggered by the natural rate of interest (the real interest rate that equates saving and investment) falling to a negative level, while another holds that firms and households came, for some reason, to hold strong deflationary expectations, which then became the starting point of a self-fulfilling deflationary equilibrium. According to our estimates, Japan’s natural rate of interest has been quite low since the second half of the 1990s and turned negative in some periods. Meanwhile, expected inflation rates vary widely across firms and households, and not everyone has expected a sustained decline in prices. These facts indicate that the negative natural interest rate hypothesis is the leading explanation for Japan’s zero interest rates, although the possibility cannot be ruled out that strong expectations of yen appreciation on the part of firms and households have been the starting point of a self-fulfilling deflationary equilibrium.
    As for prices, more than 90 percent of firms report that they do not immediately change the selling prices of their products even when costs or demand change, so price rigidity clearly exists. Moreover, analysis using POS data shows that since the second half of the 1990s prices have been revised more frequently while the size of each revision has become smaller. Such small, frequent price changes have made the decline in prices gradual. Behind these small price changes may lie stronger mutual monitoring among stores and firms, in the sense that a firm changes its price if its rivals do and keeps it unchanged if they do not.
    The “two zeros” phenomenon is closely related to two ideas put forward by Keynes: the “liquidity trap” and “price rigidity.” Yet the liquidity trap received little serious study after Keynes, and research exploring the causes of price rigidity in the data has taken off only in the last ten years. This is part of the reason why the debate over the “two zeros” phenomenon has been confused and the policy response delayed. Researchers are called upon to tackle the homework that Keynes left behind.

    Introduction

    If we divide macroeconomic phenomena into a real side and a monetary side, the most important real-side phenomenon after the collapse of the bubble in the early 1990s was the decline in the growth rate. The fall in growth and the accompanying loss of employment were pressing problems for many people, and researchers conducted a wide range of studies on the “lost decade.” The monetary side, by contrast, attracted little attention, at least immediately after the bubble burst, and drew little interest from researchers. In fact, however, important changes were also under way on the monetary side during this period.

  • On the nominal rigidities of housing rents (in Japanese)

    Abstract

    During the collapse of Japan’s bubble in the first half of the 1990s, housing rents hardly changed despite the sharp fall in house prices. A similar phenomenon has been observed in the United States after the collapse of its housing bubble. Why do rents not change? Why do house prices and rents not move together? To answer these questions, this paper analyzes rent data for about 15,000 units provided by a major property management company and obtains the following results. First, the share of units whose rent is changed in a given year is only about 5 percent. This is one-fourteenth the rate in the United States and one-quarter the rate in Germany, and thus extremely low. Behind this strong rigidity lie features peculiar to the Japanese housing market: tenant turnover is low and rental contracts are long, at two years, so opportunities to change rents are limited to begin with. More important, however, is that rents are not changed even when such opportunities, such as tenant turnover or contract renewal, do arise, and this greatly lowers the probability of a rent change. At tenant turnover, 76 percent of units keep the same rent as before, and at contract renewal, rents are left unchanged for 97 percent of units. Second, analysis using the adjustment hazard function approach proposed by Caballero and Engel (2007) shows that whether a unit’s rent is changed depends hardly at all on how far its current rent deviates from the market rent. In other words, rent revisions are time-dependent rather than state-dependent and can be described by a Calvo-type model.
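
    The adjustment-hazard idea can be illustrated with a toy regression: estimate the probability of a rent change as a function of the gap between the current rent and the market rent, in the spirit of Caballero and Engel (2007). The data below are synthetic; a hazard that does not rise with the gap points to time-dependent (Calvo-type) adjustment.

        # Minimal sketch (synthetic data): adjustment hazard as the probability of a
        # rent change given the absolute gap between current rent and market rent.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        gap = rng.normal(0, 0.1, 10_000)          # log deviation of rent from market rent
        changed = rng.binomial(1, 0.05, 10_000)   # synthetic: 5% chance, independent of gap

        X = sm.add_constant(np.abs(gap))          # hazard as a function of |gap|
        hazard = sm.Logit(changed, X).fit(disp=0)
        print(hazard.params)                      # near-zero slope -> time-dependent adjustment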

    Introduction

    Many major advanced economies share a history in which a rapid rise and subsequent fall in asset prices, housing prices above all, severely damaged the financial system and led to a stagnation of economic activity. Japan and Sweden in the 1990s and the recent financial crisis that originated in the U.S. subprime problem are the most representative examples. Reinhart and Rogoff (2008) compared economic data for many countries comprehensively over long time series and showed that many common economic phenomena occur in the run-up to financial crises. One of these, they pointed out, is that asset prices, and real estate prices in particular, rise substantially relative to rents.

  • On the Nonstationarity of the Exchange Rate Process

    Abstract

    We empirically investigate the nonstationarity property of the dollar-yen exchange rate by using an eight-year span of high-frequency data. We perform a statistical test of strict stationarity based on the two-sample Kolmogorov-Smirnov test for the absolute price changes and Pearson’s chi-square test for the number of successive price changes in the same direction, and find statistically significant evidence of nonstationarity. We further study the recurrence intervals between the days in which nonstationarity occurs, and find that the distribution of recurrence intervals is well approximated by an exponential distribution. Also, we find that the mean conditional recurrence interval 〈T|T0〉 is independent of the previous recurrence interval T0. These findings indicate that the recurrence intervals are characterized by a Poisson process. We interpret this as reflecting the Poisson property regarding the arrival of news.
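
    A minimal sketch of the distribution-comparison step is given below: it draws synthetic absolute price changes for two days and applies the two-sample Kolmogorov-Smirnov test. It is only an illustration of the test, not the paper’s full procedure.

        # Minimal sketch (synthetic data): compare the distribution of absolute price
        # changes on two days with a two-sample Kolmogorov-Smirnov test.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(3)
        day1 = np.abs(rng.normal(0, 1.0, 5000))   # |price changes| on day 1
        day2 = np.abs(rng.normal(0, 1.3, 5000))   # |price changes| on day 2 (different scale)

        stat, pvalue = ks_2samp(day1, day2)
        print(stat, pvalue)   # a small p-value is evidence against stationarity across the days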

    Introduction

    Financial time series data have been extensively investigated using a wide variety of methods in econophysics. These studies tend to assume, explicitly or implicitly, that a time series is stationary, since stationarity is a requirement for most of the mathematical theories underlying time series analysis. However, despite its nearly universal assumption, there are few previous studies that seek to test stationarity in a reliable manner (Tóth et al. (2010)).

  • Stochastic Herding by Institutional Investment Managers

    Abstract

    This paper demonstrates that the behavior of institutional investors around the downturn of the U.S. equity markets in 2007 is consistent with stochastic herding in attempts to time the market. We consider a model of a large number of institutional investment managers who simultaneously decide whether to remain invested in an asset or liquidate their positions. Each fund manager receives imperfect information about the market’s ability to supply liquidity and chooses whether or not to sell the security based on her private information as well as the actions of others. Due to feedback effects, the equilibrium is stochastic and the “aggregate action” is characterized by a power-law probability distribution with exponential truncation, predicting occasional “explosive” sell-out events. We examine highly disaggregated institutional ownership data of publicly traded stocks and find that stochastic herding explains the underlying data generating mechanism. Furthermore, consistent with market-timing considerations, the distribution parameter measuring the degree of herding rises sharply immediately prior to the sell-out phase. The sell-out phase is consistent with the transition from a subcritical to a supercritical phase, whereby the system swings sharply to a new equilibrium. Specifically, the exponential truncation vanishes as the distribution of fund manager actions becomes centered around the same action – all sell.

    Introduction

    Many apparent violations of the efficient market hypothesis, such as bubbles, crashes and “fat tails” in the distribution of returns, have been attributed to the tendency of investors to herd. In particular, in a situation where traders may have private information related to the payoff of a financial asset, their individual actions may trigger a cascade of similar actions by other traders. While the mechanism of a chain reaction through information revelation can potentially explain a number of stylized facts in finance, such behavior remains notoriously difficult to identify empirically. This is partly because many theoretical underpinnings of herding, such as informational asymmetry, are unobservable, and partly because the complex agent-based models of herding do not yield closed-form solutions to be used for direct econometric tests.

  • Forecasting Japanese Stock Returns with Financial Ratios and Other Variables

    Abstract

    This paper extends previous analyses of the forecastability of Japanese stock market returns in two directions. First, we carefully construct smoothed market price-earnings ratios and examine their predictive ability. We find that the empirical performance of the price-earnings ratio in forecasting stock returns in Japan is generally weaker than both the price-earnings ratio in comparable US studies and the price dividend ratio. Second, we also examine the performance of several other forecasting variables, including lagged stock returns and interest rates. We find that both variables are useful in predicting aggregate stock returns when using Japanese data. However, while we find that the interest rate variable is useful in early subsamples in this regard, it loses its predictive ability in more recent subsamples. This is because of the extremely limited variability in interest rates associated with the operation of the Bank of Japan’s zero interest rate policy since the late 1990s. In contrast, the importance of lagged returns increases in subsamples starting from the 2000s. Overall, a combination of logged price dividend ratios, lagged stock returns, and interest rates yields the most stable performance when forecasting Japanese stock market returns.

    Introduction

    In our previous study (Aono & Iwaisako, 2010), we examine the ability of dividend yields to forecast Japanese aggregate stock returns using the single-variable predictive regression framework of Lewellen (2004) and Campbell & Yogo (2006). This paper continues and extends our earlier efforts in two respects. First, we examine the predictive ability of another popular financial ratio, namely, the price-earnings ratio. This is motivated by the fact that some studies using US data (for example, Campbell & Vuolteenaho, 2004) find that smoothed market price-earnings ratios have better forecasting ability than dividend yields. We carefully construct Japanese price-earnings ratios following the methodology pioneered by Robert Shiller (1989, 2005) and examine their ability to forecast aggregate stock returns. We find that the predictive ability of the price dividend ratio is consistently better than that of the price-earnings ratio.
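
    A single-variable predictive regression of this kind can be sketched as follows, with synthetic data standing in for Japanese returns and the log price-dividend ratio; the HAC standard errors and lag length are illustrative choices, not the paper’s.

        # Minimal sketch (synthetic data): regress next-period returns on the lagged
        # log price-dividend ratio, in the spirit of Lewellen (2004) and Campbell & Yogo (2006).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        pd_ratio = np.cumsum(rng.normal(0, 0.02, 300)) + 3.0          # persistent predictor
        returns = -0.05 * pd_ratio[:-1] + rng.normal(0, 0.04, 299)    # next-period returns

        X = sm.add_constant(pd_ratio[:-1])       # predictor lagged one period
        ols = sm.OLS(returns, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
        print(ols.params, ols.tvalues)           # a significant slope indicates predictability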

  • Housing Prices in Tokyo: A Comparison of Hedonic and Repeat Sales Measures

    Abstract

    Do indexes of house prices behave differently depending on the estimation method? If so, to what extent? To address these questions, we use a unique dataset that we compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 470,000 listings of housing prices between 1986 and 2008, including the period of the housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indexes, even though the hedonic index is adjusted for structural changes and the repeat sales index is adjusted in the way Case and Shiller suggested. Specifically, the repeat sales measure signals turning points later than the hedonic measure: for example, the hedonic measure of condominium prices bottomed out at the beginning of 2002, while the corresponding repeat sales measure exhibits a reversal only in the spring of 2004. This discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation.
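
    For concreteness, the repeat-sales side of the comparison can be illustrated with a toy Bailey-Muth-Nourse regression, sketched below with made-up sale pairs; the Case-Shiller weighting and the structural-change adjustment of the hedonic index are omitted.

        # Minimal sketch (toy data): Bailey-Muth-Nourse repeat-sales regression.
        # Each paired sale contributes a row with -1 at the first sale period and +1
        # at the second; the dependent variable is the log price relative.
        import numpy as np

        # (first_period, second_period, log price at first sale, log price at second sale)
        pairs = [(0, 2, 10.0, 10.2), (1, 3, 10.1, 10.15), (0, 3, 9.9, 10.25)]
        T = 4
        X = np.zeros((len(pairs), T - 1))   # period 0 is the base period
        y = np.zeros(len(pairs))
        for i, (t0, t1, p0, p1) in enumerate(pairs):
            if t0 > 0:
                X[i, t0 - 1] = -1.0
            X[i, t1 - 1] = 1.0
            y[i] = p1 - p0                  # log price relative

        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        index = np.exp(np.concatenate([[0.0], beta]))   # repeat-sales index, base period = 1
        print(index)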

    Introduction

    Fluctuations in real estate prices have a substantial impact on economic activity. In Japan, the sharp rise in real estate prices during the latter half of the 1980s and their decline in the early 1990s have led to a decade-long, or even longer, stagnation of the economy. More recently, the rapid rise in housing prices and their reversal in the United States have triggered a global financial crisis. Against this background, having a reliable index that correctly identifies trends in housing prices is of utmost importance.

  • On the Evolution of the House Price Distribution

    Abstract

    Is the cross-sectional distribution of house prices close to a (log)normal distribution, as is often assumed in empirical studies on house price indexes? How does the distribution evolve over time? To address these questions, we investigate the cross-sectional distribution of house prices in the Greater Tokyo Area. We find that house prices (Pi) are distributed with much fatter tails than a lognormal distribution and that the tail is quite close to that of a power-law distribution. We also find that house sizes (Si) follow an exponential distribution. These findings imply that size-adjusted house prices, defined by lnPi − aSi, should be normally distributed. We find that this is indeed the case for most of the sample period, but not the bubble era, during which the price distribution has a fat upper tail even after adjusting for size. The bubble was concentrated in particular areas in Tokyo, and this is the source of the fat upper tail.
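
    The size adjustment described above can be sketched with synthetic data: estimate a by regressing ln P on S, form ln P − aS, and test the adjusted prices for normality. The numbers below are placeholders, not the Tokyo data.

        # Minimal sketch (synthetic data): size-adjusted house prices ln(P_i) - a*S_i,
        # with the slope a estimated by OLS, followed by a normality test.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        size = rng.exponential(scale=60.0, size=20_000)               # house size, exponential
        log_price = 14.0 + 0.01 * size + rng.normal(0, 0.25, 20_000)  # synthetic log prices

        a, b = np.polyfit(size, log_price, 1)     # slope a of ln(P) on S
        adjusted = log_price - a * size           # size-adjusted log prices
        print(stats.normaltest(adjusted))         # high p-value: consistent with normality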

    Introduction

    Researchers on house prices typically start their analysis by producing a time series of the mean of prices across different housing units in a particular region by, for example, running a hedonic or repeat-sales regression. In this paper, we pursue an alternative research strategy: we look at the entire distribution of house prices across housing units in a particular region at a particular point of time and then investigate the evolution of such cross-sectional distribution over time. We seek to describe price dynamics in the housing market not merely by changes in the mean but by changes in some key parameters that fully characterize the entire cross-sectional price distribution.

  • The Great Moderation in the Japanese Economy

    Abstract

    This paper investigates the contribution of technology and nontechnology shocks to the changing volatility of output and labor growth in the postwar Japanese economy. A time-varying vector autoregression (VAR) with drifting coefficients and stochastic volatilities is modeled and a long-run restriction is used to identify technology shocks in line with Galí (1999) and Galí and Gambetti (2009). We find that technology shocks are responsible for significant changes in the output volatility throughout the total sample period, while the volatility of labor input is largely attributed to nontechnology shocks. The driving force behind these results is the negative correlation between labor input and productivity, which holds significantly and persistently over the postwar period.

    Introduction

    Most industrialized economies have experienced a substantial decline in output growth volatility in the postwar period, a phenomenon known as “the Great Moderation.” In the U.S. case, many authors have investigated the characteristics of and reasons for the Great Moderation that started in the mid-1980s. Possible explanations include good luck, better monetary policy, and changes in the economic structure, such as inventory management and labor market statistics. Based on the time-varying and Markov-switching structural VAR methods, the good luck hypothesis has been advocated by many authors, including Stock and Watson (2002, 2005), Primiceri (2005), Sims and Zha (2006), Arias, Hansen, and Ohanian (2006), and Gambetti, Pappa, and Canova (2006). On the other hand, the good policy hypothesis has been supported by many other authors including Clarida, Galí, and Gertler (2000), Lubik and Schorfheide (2004), Boivin and Giannoni (2006), and Benati and Surico (2009). There are different approaches to considering structural changes, including Campbell and Hercowitz (2005) and Galí and Gambetti (2009). In particular, Galí and Gambetti (2009) capture the changing patterns of the correlations among the labor market variables.

  • The Role of the IMF Under the Noise of Signals

    Abstract

    This paper theoretically analyzes the Early Warning System (EWS) of the IMF based on a principal-agent model. We characterize the trade-offs of the IMF’s optimal contract under interim intervention and a noisy signal. The main findings are as follows. First, when the net loss arising from noise under a good fundamental exceeds the net gain from interim intervention under a bad fundamental, the debtor country exerts less effort as the noise becomes larger. Second, when the net loss under a good fundamental is smaller than the net gain under a bad fundamental, a more accurate signal may give rise to a moral hazard problem. Third, when the marginal utility from the IMF’s intervention is higher under bad fundamentals than under good fundamentals, a greater ability of the IMF to mitigate the crisis elicits less policy effort from the country. On the other hand, when marginal utility is higher under good fundamentals, deeper intervention by the IMF gives the country an incentive for greater policy effort. Fourth, mandating the IMF to care about the country’s welfare as well as to safeguard its resources does not necessarily mean that the debtor country will exert less effort.

    Introduction

    As more developing countries liberalize their capital controls and more investors invest large sums abroad, the possibility of financial crises rises. Vulnerable countries thus face the risk of currency crises in exchange for the chance of welcoming beneficial capital flows. The IMF is expected to take the necessary action to prevent crises by forecasting developments and advising developing country authorities. The EWS seems to be a good tool for this challenging work. The IMF is also expected to help developing countries build the reliable economic statistical databases they need, which are the foundation of every EWS study for crisis prevention. Preventing a possible crisis depends on the efforts of both the program country and the IMF.

  • News shocks and the Japanese macroeconomic fluctuations

    Abstract

    Are changes in the future technology process, the so-called “news shocks,” the main contributors to macroeconomic fluctuations in Japan over the past forty years? In this paper, we take two structural vector autoregression (SVAR) approaches to answer this question. First, we quantitatively evaluate the relative importance of news shocks among candidate shocks, estimating a structural vector error-correction model (SVECM). Our estimated results suggest that the contribution of TFP news shocks is nonnegligible, which is in line with the findings of previous works. Furthermore, we disentangle the sources of news shocks by adopting several kinds of restrictions and find that news shocks on investment-specific technology (IST) also have an important effect. Second, to minimize the gap between the SVAR approach and the Bayesian estimation of a dynamic stochastic general equilibrium model, we adopt an alternative approach: an SVAR with sign restrictions. The SVAR with sign restrictions reconfirms the result that news shocks are important in explaining Japanese macroeconomic fluctuations.

    Introduction

    Are news shocks the main source of Japanese macroeconomic fluctuations? Previous works have presented different results. Beaudry and Portier (2005) employ an SVECM with a combination of long-run and short-run restrictions to divide TFP shocks into surprise and news components. The news shock in their econometric model is a shock that has no impact effect on current TFP but raises future TFP several quarters later. They find that the estimated TFP news shock is an important force behind Japanese macroeconomic fluctuations, and that a negative news shock occurred at the beginning of the 1990s, which may have been relevant to the so-called “lost decade.” Fujiwara, Hirose, and Shintani (2011) assess the importance of news shocks based on an estimation of a dynamic stochastic general equilibrium (DSGE) model using a Bayesian method. They introduce one-to-four-quarters-ahead TFP news shocks and find that TFP news shocks are nonnegligible but minor in explaining the macroeconomic fluctuations in Japan.

  • Closely Competing Firms and Price Adjustment: Some Findings from an Online Marketplace

    Abstract

    We investigate retailers’ price setting behavior using a unique dataset containing by-the-second records of prices offered by closely competing retailers on a major Japanese price comparison website. First, we find that, when the average price of a product across retailers falls rapidly, the frequency of price adjustments increases, and the size of price adjustments becomes larger. Second, we find positive autocorrelation in the frequency of price adjustments, implying that there tends to be clustering where price adjustments occur in succession. In contrast, there is no such autocorrelation in the size of price adjustments. These two findings indicate that the behavior of competing retailers is characterized by state-dependent pricing rather than time-dependent pricing.
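
    The clustering result can be illustrated by computing the autocorrelation of the daily number of price adjustments, as sketched below with a synthetic count series; positive short-lag autocorrelations correspond to the succession of adjustments described above.

        # Minimal sketch (synthetic data): autocorrelation of the daily number of
        # price adjustments as a simple check for clustering of adjustment events.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(6)
        # synthetic daily counts with some persistence (a short moving sum of Poisson draws)
        counts = pd.Series(np.round(10 + np.convolve(rng.poisson(3, 400), np.ones(3), "same")))

        acf = [counts.autocorr(lag=k) for k in range(1, 8)]
        print(acf)   # positive values at short lags indicate adjustments occur in succession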

    Introduction

    Since the seminal study by Bils and Klenow (2004), there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. Using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.

  • On the Evolution of the House Price Distribution

    Abstract

    Is the cross-sectional distribution of house prices close to a (log)normal distribution, as is often assumed in empirical studies on house price indexes? How does it evolve over time? What does it look like during periods of housing bubbles? To address these questions, we investigate the cross-sectional distribution of house prices in the Greater Tokyo Area. Using a unique dataset containing individual listings in a widely circulated real estate advertisement magazine from 1986 to 2009, we find the following. First, the house price, Pit, is characterized by a distribution with much fatter tails than a lognormal distribution, and the tail part is quite close to that of a power-law or Pareto distribution. Second, the size of a house, Si, follows an exponential distribution. These two findings about the distributions of Pit and Si imply that the price distribution conditional on house size, i.e., Pr(Pit | Si), follows a lognormal distribution. We confirm this by showing that size-adjusted prices indeed follow a lognormal distribution, except for periods of the housing bubble in Tokyo, when the price distribution remains asymmetric and skewed to the right even after controlling for the size effect.

    Introduction

    Research on house prices typically starts by producing a time series of the mean of prices across housing units in a particular region by, for example, running a hedonic regression or by adopting a repeat-sales method. In this paper, we propose an alternative research strategy: we look at the entire distribution of house prices across housing units in a particular region at a particular point in time, and then investigate the evolution of such cross-sectional distributions over time. We seek to describe price dynamics in a housing market not merely by changes in the mean but by changes in some key parameters that fully characterize the entire cross-sectional price distribution. Our ultimate goal is to produce a new housing price index based on these key parameters.

  • Sales Distribution of Consumer Electronics

    Abstract

    Using the uniformly most powerful unbiased test, we examine the sales distribution of consumer electronics in Japan on a daily basis and report that it follows a lognormal distribution in some periods and a power-law distribution in others, depending on the state of the market. We show that these switches occur quite often. The underlying sales dynamics found in both types of period nicely match a multiplicative process. However, whereas the multiplicative term in the process displays a size-dependent relationship when a steady lognormal distribution holds, it shows a size-independent relationship when the power-law distribution holds. This difference in the underlying dynamics is responsible for the difference between the two observed distributions.
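
    As a crude stand-in for the test used in the paper, the sketch below compares the log-likelihoods of a Pareto and a lognormal fit to the upper tail of synthetic sales data; it is only suggestive and is not the uniformly most powerful unbiased test itself.

        # Minimal sketch (synthetic data): compare Pareto vs. lognormal fits to the
        # upper tail of a sales distribution by log-likelihood.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        sales = rng.lognormal(mean=8.0, sigma=1.0, size=50_000)
        tail = sales[sales > np.quantile(sales, 0.95)]
        x_min = tail.min()

        # Pareto maximum-likelihood (Hill) estimate on the tail and its log-likelihood
        mu = len(tail) / np.log(tail / x_min).sum()
        ll_pareto = (np.log(mu) + mu * np.log(x_min) - (mu + 1) * np.log(tail)).sum()

        shape, loc, scale = stats.lognorm.fit(tail, floc=0)
        ll_lognorm = stats.lognorm.logpdf(tail, shape, loc, scale).sum()
        print("power law:", ll_pareto, " lognormal:", ll_lognorm)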

    Introduction

    Since Pareto pointed out in 1896 that the distribution of income exhibits a heavy-tailed structure [1], many papers have argued that such distributions can be found in a wide range of empirical data that describe not only economic phenomena but also biological, physical, ecological, sociological, and various man-made phenomena [2]. The list of quantities whose distributions have been conjectured to obey such distributions includes firm sizes [3], city populations [4], the frequency of unique words in a given novel [5,6], the number of species per biological genus [7], the number of published papers per scientist [8], web files transmitted over the internet [9], book sales [10], and product market shares [11]. Along with these reports, the argument over the exact distribution, that is, whether these heavy-tailed distributions obey a lognormal distribution or a power-law distribution, has been repeated over many years as well [2]. In this paper we use the statistical techniques developed in this literature to clarify the sales distribution of consumer electronics.

  • Competing Firms and Price Adjustment: Evidence from an Online Marketplace

    Abstract

    We investigate retailers’ price setting behavior, and in particular strategic interaction between retailers, using a unique dataset containing by-the-second records of prices offered by competing retailers on a major Japanese price comparison website. First, we find that, when the average price of a product across retailers falls rapidly, the frequency of price adjustments is high, while the size of adjustments remains largely unchanged. Second, we find a positive autocorrelation in the frequency of price adjustments, implying that there tends to be a clustering where once a price adjustment occurs, such adjustments occur in succession. In contrast, there is no such autocorrelation in the size of price adjustments. These two findings indicate that the behavior of competing retailers is characterized by state-dependent pricing, rather than time-dependent pricing, especially when prices fall rapidly, and that strategic complementarities play an important role when retailers decide to adjust (or not to adjust) their prices.

    Introduction

    Since Bils and Klenow’s (2004) seminal study, there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. For example, using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.

  • Japan’s Intangible Capital and Valuation of Corporations in a Neoclassical Framework

    Abstract

    Employing a new accounting data set, this paper estimates the value of productive capital stocks in Japan using the neoclassical model of McGrattan and Prescott (2005). We compare those estimates to actual corporate valuations, and show that the actual value of equity plus net debt falls within a reasonable range of the theory’s prediction for the value of Japanese corporations during the periods 1981-86 and 1993-97. This finding differs from previous results based on studies of aggregate data sets or based on studies of micro data sets that neglected intangible capital. We also show that the Japanese ratio of the amount of intangible capital stock to the amount of tangible capital stock is comparable to the analogous ratios for the U.S. and U.K.

    Introduction

    This paper provides a new interpretation of Japanese stock market developments since 1980, taking into account the role of intangible capital, based on the framework of McGrattan and Prescott (2005). To do so, we employ a new accounting data set, together with a national aggregate data set from the System of National Accounts (SNA). We show that the ratio of the amount of intangible capital stock to the amount of tangible capital stock for Japan is close to the values for the U.S. and the U.K. Our estimates of the ratio of the actual corporate value to the fundamental value of capital stocks differ from previous studies using national aggregate data, and from previous studies using micro data sets. We show that intangible capital is an important source of actual corporate values in the Japanese stock market, despite being neglected in previous studies of Japanese stock markets.

  • Structural and Temporal Changes in the Housing Market and Hedonic Housing Price Indices

    Abstract

    An economic indicator faces two requirements. It should be reported in a timely manner and should not be significantly altered afterward, to avoid sending erroneous messages. At the same time, it should reflect changing market conditions constantly and appropriately. These requirements are particularly challenging for housing price indices, since housing markets are subject to large temporal/seasonal changes and occasional structural changes. In this study we estimate a hedonic price index of previously owned condominiums in the 23 wards of Tokyo from 1986 through 2006, taking account of seasonal sample selection biases and structural changes in a way that enables us to report indexes in a timely manner without revising them after release. Specifically, we propose an overlapping-period hedonic model (OPHM), in which a hedonic price index is calculated every month based on data in the “window” of a year ending in that month (the current month and the previous eleven months). We also estimate hedonic housing price indexes under alternative assumptions: (i) no structural change (“structurally restricted”) and (ii) a different structure for every month (“structurally unrestricted”). The results suggest that the structure of the housing market, including seasonality, changes over time, and that these changes occur continuously. It is also demonstrated that structurally restricted indices that do not account for structural changes involve a large time lag compared with indices that do account for such changes during periods with significant price fluctuations.
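
    The OPHM idea can be sketched as a rolling hedonic regression: each month, fit log price on attributes and month dummies over the trailing twelve months and read off the latest month effect. The attributes and data below are synthetic placeholders, not the Tokyo dataset.

        # Minimal sketch (synthetic data): overlapping-period hedonic index -- a
        # hedonic regression re-estimated every month on a trailing 12-month window.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(8)
        months = pd.period_range("2000-01", periods=24, freq="M")
        df = pd.DataFrame({
            "month": months.repeat(50),
            "area": rng.uniform(30, 90, 24 * 50),
            "age": rng.uniform(0, 30, 24 * 50),
        })
        df["log_price"] = 15 + 0.01 * df["area"] - 0.01 * df["age"] + rng.normal(0, 0.1, len(df))

        index = {}
        for m in months[11:]:
            window = df[(df["month"] > m - 12) & (df["month"] <= m)].copy()
            window["t"] = window["month"].astype(str)
            fit = smf.ols("log_price ~ area + age + C(t)", data=window).fit()
            latest = f"C(t)[T.{m}]"
            index[str(m)] = np.exp(fit.params.get(latest, 0.0))  # latest month vs. window base
        print(pd.Series(index).head())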

    Introduction

    Japan, the United States, and most advanced nations have experienced housing bubbles and subsequent collapses of those bubbles in succession. Recently, much attention has been focused on housing price indices. In macroeconomic policy, housing price indices are considered a possible source of “early warning signals” of sometimes devastating financial bubbles. In microeconomic spheres, there are growing needs for hedging against volatility in housing markets, and housing price indices may be used as the basis for index trading.

  • Estimation of Redevelopment Probability using Panel Data -Asset Bubble Burst and Office Market in Tokyo-

    Abstract

    Purpose: When Japan’s asset bubble burst, the office vacancy rate soared sharply. This study targets the office market in Tokyo’s 23 special wards during Japan’s bubble burst period. It aims to define economic conditions for the redevelopment/conversion of offices into housing and estimate the redevelopment/conversion probability under the conditions.
    Design/methodology/approach: The precondition for land-use conversion is that subsequent profit excluding destruction and reconstruction costs is estimated to increase from the present level for existing buildings. We estimated hedonic functions for offices and housing, computed profit gaps for approximately 40,000 buildings used for offices in 1991, and projected how the profit gaps would influence the land-use conversion probability. Specifically, we used panel data for two time points in the 1990s to examine the significance of redevelopment/conversion conditions.
    Findings: We found that if random effects are used to control for individual characteristics of buildings, the redevelopment probability rises significantly when profit from land after redevelopment is expected to exceed that from present land uses. This increase is larger in the central part of a city.
    Research limitations/implications: Limitations stem from the nature of Japanese data limited to the conversion of offices into housing. In the future, we may develop a model to generalize land-use conversion conditions.
    Originality/value: This is the first study to specify the process of land-use adjustments that emerged during the bubble burst. This is also the first empirical study using panel data to analyse conditions for redevelopment.
    Keywords: hedonic approach, random probit model, urban redevelopment, Japan’s asset bubble
    Paper type: Research paper

    Introduction

    Sharp real estate price hikes and declines, or the formation and bursting of real estate bubbles, have brought about serious economic problems in many countries.

  • Housing Bubble in Japan and the United States

    Abstract

    Japan and the United States have experienced housing bubbles and their subsequent collapses in succession. In this paper, these two bubbles are compared and the following findings are obtained.
    First, applying twenty years of Japanese data to the “repeat-sales method” and the “hedonic pricing method”, two representative methods for calculating house prices, we find that the timing at which prices bottomed out after the collapse of the bubble differs between the two methods. The bottom estimated by the repeat-sales method comes later than the bottom estimated by the hedonic pricing method, by 13 months for condominiums and by three months for single-family homes. This delay is caused by the depreciation of buildings not being handled appropriately in the repeat-sales method. In the United States, the representative house price indices, the S&P/Case-Shiller Home Price Indices, use the repeat-sales method, so it is possible that the estimated timing of the bottom is similarly delayed. As there is increasing interest in when the US housing market will bottom out, such a recognition lag risks increasing uncertainty and delaying economic recovery.
    Second, when looking at the relationship between housing demand and house prices in time-series data, there is a positive correlation between the two. However, in an analysis using panel data at the prefecture or state level, there is no significant relationship between housing demand and house prices in either Japan or the United States. In this sense, it is hard to explain the existence and size of a bubble in each prefecture (state) using demand factors. This suggests that the view that demographics raise housing demand and thereby cause house prices to increase may not be effective in explaining price fluctuations in either Japan or the United States.
    Third, when looking at the co-movement between house prices and rents, we confirm for both Japan and the United States that rents hardly fluctuate at all even when house prices change substantially during the formation and collapse of a bubble. The background is that landlords and tenants form long-term contractual relationships so that both parties can save on various transaction costs. In addition, the imputed rent of owner-occupied housing is not assessed using market prices in Japan, which further weakens the co-movement. The lack of co-movement means that during a bubble period consumer prices, of which rent is an important component, do not increase even as housing prices rise, delaying a shift toward monetary tightening. Since rents do not move together with house prices even after house prices fall following the collapse of the bubble, consumer prices do not decrease either, which served as a factor delaying a shift toward monetary easing. Rents are an important variable that serves as a node between asset prices and the prices of goods and services, and it is necessary to improve the accuracy with which they are measured.

    Introduction

    This paper’s objective is to find similarities and differences between the Japanese and US housing markets by comparing Japan’s largest postwar real estate bubbles in the 1980s and U.S. housing bubbles since 2000 that have reportedly caused the worst financial crisis since the 1929 Great Depression. While various points have been made about the housing bubbles, this paper attempts to specify the following points.

  • Incumbent’s Price Response to New Entry: The Case of Japanese Supermarkets

    Abstract

    Large-scale supermarkets have rapidly expanded in Japan over the past two decades, partly because of zoning deregulations for large-scale merchants. This study examines the effect of supermarket openings on the price of national-brand products sold at local incumbents, using scanner price data with a panel structure. Detailed geographic information on store location enables us to define treatment and control groups to control for unobserved heterogeneity and temporary demand shock. The analysis reveals that stores in the treatment group lowered their prices of curry paste, bottled tea, instant noodles, and toothpaste by 0.4 to 3.1 percent more than stores in a control group in response to a large-scale supermarket opening.
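
    The treatment/control comparison can be illustrated with a toy difference-in-differences regression, sketched below on synthetic data; the actual study controls for unobserved heterogeneity and demand shocks in more detail.

        # Minimal sketch (synthetic data): difference-in-differences comparison of log
        # prices at incumbent stores near a new supermarket (treated) vs. far away (control).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        df = pd.DataFrame({
            "treated": np.repeat([0, 1], 1000),
            "post": np.tile([0, 1], 1000),
        })
        df["log_price"] = 5.0 - 0.02 * df["treated"] * df["post"] + rng.normal(0, 0.05, 2000)

        did = smf.ols("log_price ~ treated * post", data=df).fit()
        print(did.params["treated:post"])   # the entry effect on incumbent prices (~ -2 percent)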

    Introduction

    The retail sector has been regarded as one of Japan’s least productive industries. In 2000, the McKinsey Global Institute issued a very influential report, which found Japan’s overall retail productivity is half of the US’s; in particular, the productivity of small-scale retail stores is only 19 percent of that in the US. The report points out that the large share of unproductive small retail shops was the main cause of overall low productivity. The report claims that this lower productivity hurt Japanese consumers through high prices.

  • House Prices in Tokyo: A Comparison of Repeat-Sales and Hedonic Measures

    Abstract

    Do the indexes of house prices behave differently depending on the estimation methods? If so, to what extent? To address these questions, we use a unique dataset that we have compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 400 thousand listings of housing prices in 1986 to 2008, including the period of the housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indexes, even though the hedonic index is adjusted for structural change and the repeat sales index is adjusted in the way Case and Shiller suggested. Specifically, the repeat sales measure tends to exhibit a delayed turn compared with the hedonic measure; for example, the hedonic measure of condominium prices hit bottom at the beginning of 2002, while the corresponding repeat-sales measure exhibits a reversal only in the spring of 2004. Such a discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation (age effects).

    Introduction

    Fluctuations in real estate prices have substantial impacts on economic activities. In Japan, a sharp rise in real estate prices during the latter half of the 1980s and its decline in the early 1990s has led to a decade-long stagnation of the Japanese economy. More recently, a rapid rise in housing prices and its reversal in the United States have triggered a global financial crisis. In such circumstances, the development of appropriate indexes that allow one to capture changes in real estate prices with precision is extremely important not only for policy makers but also for market participants who are looking for the time when housing prices hit bottom.

  • A study of income risks faced by recent young people(in Japanese)

    Abstract

    Using the covariance structure of income and consumption expenditure in the Keio Household Panel Survey (KHPS), we attempt to decompose the income fluctuations faced by Japanese working households in recent years into permanent and transitory components. We find that, for households headed by persons in their thirties, the importance of transitory fluctuations has declined in the most recent period and permanent fluctuations have become dominant. Conversely, for households in their forties, most of the recent income fluctuation is attributable to transitory shocks. This suggests that recent economic conditions have been particularly severe for households in their thirties.
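
    The permanent/transitory decomposition referred to in the abstract can be illustrated with a standard moment-based identification from income-growth covariances (in the spirit of the covariance-structure literature; the paper's exact specification may differ). The sketch below assumes log income follows a random walk plus i.i.d. transitory noise.

    ```python
    # Minimal sketch: permanent/transitory variance decomposition of income
    # growth, assuming y_it = p_it + eps_it with p_it a random walk (permanent
    # shocks) and eps_it i.i.d. transitory noise. Not the paper's exact model.
    import numpy as np

    def decompose(dy):
        """dy: (N households x T years) array of log-income first differences."""
        dy = dy - dy.mean(axis=0, keepdims=True)      # remove common year effects
        # var(permanent shock) = cov(dy_t, dy_{t-1} + dy_t + dy_{t+1})
        var_perm = np.mean([
            np.mean(dy[:, t] * (dy[:, t - 1] + dy[:, t] + dy[:, t + 1]))
            for t in range(1, dy.shape[1] - 1)
        ])
        # var(transitory shock) = -cov(dy_t, dy_{t+1})
        var_trans = np.mean([
            -np.mean(dy[:, t] * dy[:, t + 1])
            for t in range(dy.shape[1] - 1)
        ])
        return var_perm, var_trans
    ```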

    Introduction

    Beginning with the seminal work of Tachibanaki (1998), a large body of research has examined income inequality across Japanese households. Opinion surveys on inequality conducted by research institutes and newspapers show that a considerable share of households perceive income inequality to be widening, and in recent years government analyses have appeared in various white papers. According to the FY2006 Annual Report on the Japanese Economy and Public Finance, the Gini coefficient of household income based on the National Survey of Family Income and Expenditure shows no upward trend in any age group of household heads since 1989, with the exception of households headed by persons aged 24 or younger between 1999 and 2004. Similarly, the FY2008 White Paper on Health, Labour and Welfare computes Gini coefficients of household income from the Comprehensive Survey of Living Conditions and reports that, again with the exception of households headed by persons under 25 between 1998 and 2004, the level of the Gini coefficient is stable in every age group and shows no sign of widening inequality. Taken together, the two white papers indicate that, apart from the young around the turn of the century, neither survey confirms a recent widening of income inequality as long as the comparison is made within the same age group.

  • Measuring Nominal and Real Rigidities of Prices: Some Methodological Issues(in Japanese)

    Abstract

    In this paper, we propose a method that uses autocorrelation coefficients to measure the price stickiness arising when firms imitate each other's price-setting behavior, and we apply it to data from an online marketplace. Studies following Bils and Klenow (2004) have estimated price stickiness as the average time elapsed between one price change and the next; for the LCD televisions analyzed here, that value is 1.9 days. By contrast, measurement based on autocorrelation coefficients shows that price change events exhibit history dependence of up to six days: each store revises its price three times on average before a price adjustment is completed. As a result of imitation across stores, the size of each individual price revision becomes smaller, and the time required to complete a price adjustment therefore becomes longer. Because previous studies have ignored the history dependence of price change events, they may have underestimated price stickiness.

    Introduction

    Since Bils and Klenow (2004), a large body of research has measured price stickiness using micro price data. These studies note that prices do not change continuously but are revised only infrequently, say once every few weeks or months, and they measure the frequency with which such price change events occur. Their main finding is that price change events occur quite frequently. For example, Bils and Klenow (2004), using the raw data underlying the U.S. CPI, report that prices are revised once every 4.3 months. Nakamura and Steinsson (2008), using the same U.S. CPI raw data, estimate that once sales are taken into account, prices are revised once every 8 to 11 months. Studies on European countries by Dhyne et al. (2006) and on Japan by Higo and Saita (2007) likewise report that prices are revised roughly once every few months.

  • Japan’s Foreign Exchange Interventions during the Quantitative Easing Period(in Japanese)

    Abstract

    Japan's monetary authorities conducted massive yen-selling/dollar-buying interventions from the beginning of 2003 to the spring of 2004, a period John Taylor has labeled the "Great Intervention." This paper examines how the Great Intervention was related to the quantitative easing policy that the Bank of Japan was pursuing at the time. First, about 60 percent of the yen funds supplied to the market by yen-selling interventions were immediately offset by the Bank of Japan's money market operations, but the remaining 40 percent were not offset and stayed in the market for some time; this contrasts with the preceding period, in which nearly 100 percent was offset. Second, comparing interventions with other government payments, the degree to which the yen funds supplied by interventions were offset by the BOJ's operations was lower, suggesting that the BOJ conducted its operations distinguishing between interventions and other government payments. Third, comparing sterilized and unsterilized interventions, the effect on the exchange rate tends to be stronger for the latter, indicating that even at zero interest rates the effect of an intervention differs depending on whether it is sterilized. This last result, however, depends on how market participants' expectations about sterilization are specified and is not necessarily robust.

    Introduction

    Between 2001 and 2006, Japan's monetary authorities adopted two important and interesting policies. The first is the quantitative easing policy introduced by the Bank of Japan in March 2001. Because lowering the overnight call rate, until then the BOJ's policy rate, to its lower bound of zero did not provide sufficient stimulus to the economy, the BOJ switched its policy variable from the interest rate to the quantity of money as a further easing measure. Quantitative easing remained in place until March 2006, when the Japanese economy had recovered. Second, Japan's Ministry of Finance carried out large-scale yen-selling interventions in the foreign exchange market from January 2003 to March 2004, which Taylor (2006) has called the Great Intervention. Interventions in this period took place once every two days, averaging 270 billion yen per intervention day for a total of 35 trillion yen. Japan's monetary authorities are known for their active interventions, but even so, this frequency and scale are unparalleled in any other period.

  • Real Rigidities: Evidence from an Online Marketplace

    Abstract

    Are prices sticky due to the presence of strategic complementarity in price setting? If so, to what extent? To address these questions, we investigate retailers’ price setting behavior, and in particular strategic interaction between retailers, using a unique dataset containing by-the-second records of prices offered by retailers on a major Japanese price comparison website. We focus on fluctuations in the lowest price among retailers, rather than the average price, examining how quickly the lowest price is updated in response to changes in marginal costs. First, we find that, when the lowest price falls rapidly, the frequency of changes in the lowest price is high, while the size of downward price adjustments remains largely unchanged. Second, we find a positive autocorrelation in the frequency of changes in the lowest price, and that there tends to be a clustering where once a change in the lowest price occurs, such changes occur in succession. In contrast, there is no such autocorrelation in the size of changes in the lowest price. These findings suggest that retailers imitate each other when deciding to adjust (or not to adjust) their prices, and that the extensive margin plays a much more important role than the intensive margin in such strategic complementarity in price setting.
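
    The extensive/intensive margin decomposition described in the abstract can be sketched as follows; the snippet computes the daily frequency and average size of changes in the lowest posted price and their autocorrelations. The column names ("timestamp", "lowest_price") are assumptions for illustration, not the authors' code.

    ```python
    # Minimal sketch: extensive margin (frequency of changes in the lowest
    # price) vs. intensive margin (size of changes), with their daily
    # autocorrelations. Assumes a pandas DataFrame with a datetime column
    # "timestamp" and a column "lowest_price".
    import pandas as pd

    def margins_and_autocorr(df, lags=5):
        df = df.sort_values("timestamp").set_index("timestamp")
        diff = df["lowest_price"].diff()
        changed = diff.ne(0) & diff.notna()
        size = df["lowest_price"].pct_change().where(changed)
        daily_freq = changed.resample("D").mean()       # extensive margin
        daily_size = size.abs().resample("D").mean()    # intensive margin
        return (
            [daily_freq.autocorr(lag=k) for k in range(1, lags + 1)],
            [daily_size.autocorr(lag=k) for k in range(1, lags + 1)],
        )
    ```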

    Introduction

    Since Bils and Klenow’s (2004) seminal study, there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. For example, using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.

  • On the Predictability of Japanese Stock Returns using Dividend Yield

    Abstract

    The aim of this paper is to provide a critical and comprehensive reexamination of empirical evidence on the ability of the dividend yield to predict Japanese stock returns. Our empirical results suggest that in general, the predictability is weak. However, (1) if the bubble economy period (1986—1998), during which dividend yields were persistently lower than the historical average, is excluded from the sample, and (2) if positive autocorrelation in monthly aggregate returns is taken into account, there is some evidence that the log dividend yield is indeed useful in forecasting future stock returns. More specifically, the log dividend yield contributes to predicting monthly stock returns in the sample after 1990 and when lagged stock returns are included simultaneously.
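
    The predictive regression described in the abstract, with the log dividend yield and (optionally) the lagged return as predictors, can be sketched as follows. Variable construction and the HAC lag choice are illustrative assumptions, not the paper's exact specification.

    ```python
    # Minimal sketch: predictive regression of monthly returns on the lagged
    # log dividend yield, optionally adding the lagged return to capture the
    # positive autocorrelation mentioned in the abstract.
    import pandas as pd
    import statsmodels.api as sm

    def predictive_regression(ret, dy, add_lagged_return=True):
        """ret: monthly log returns; dy: log dividend yield (pandas Series)."""
        X = pd.DataFrame({"log_dy": dy.shift(1)})
        if add_lagged_return:
            X["lag_ret"] = ret.shift(1)
        X = sm.add_constant(X)
        data = pd.concat([ret.rename("ret"), X], axis=1).dropna()
        return sm.OLS(data["ret"], data.drop(columns="ret")).fit(
            cov_type="HAC", cov_kwds={"maxlags": 12}
        )
    ```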

    Introduction

    The conventional present value relationship suggests that the “dividend yield” or “price-dividend ratio” is useful in explaining the behaviors of stock prices (see Campbell, Lo, & MacKinlay 1997, Chap. 7 for a review). Accordingly, there is a large literature examining the ability of the dividend yield to predict future stock returns. Empirical studies of US data include Fama & French (1988), Mankiw & Shapiro (1986), Stambaugh (1986, 1999), Lewellen (2004), Torous, Valkanov & Yan (2004), Campbell & Yogo (2006), Ang & Bekaert (2007), and Cochrane (2008) among others.

  • The welfare effect of disclosure through media: a zero-sum case

    Abstract

    We extend the beauty contest framework to allow the authority's disclosure to be received with additional noise, the realization of which varies across agents. In this setup, we find that there could be a situation in which an anti-transparency policy maximizes welfare, no matter how precise the signal the authority can obtain.

    Introduction

    In a beauty contest framework developed by Morris and Shin (2002, henceforth MS), public information, which is interpreted as a disclosure of economic forecast by the authority, may be harmful to social welfare; that is, anti-transparency may be optimal. However, the robustness of their result has been questioned. Angeletos and Pavan (2004) and Hellwig (2005) show that MS’s result depends on the form of the payoff function. Svensson (2006) claims that even in MS’s model, public information increases welfare under plausible parameter values.

  • Housing Prices and Rents in Tokyo: A Comparison of Repeat-Sales and Hedonic Measures

    Abstract

    Do the indices of house prices and rents behave differently depending on the estimation methods? If so, to what extent? To address these questions, we use a unique dataset that we have compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 400 thousand listings of housing prices and about one million listings of housing rents, both from 1986 to 2008, including the period of housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indices, even though the hedonic index is adjusted for structural change and the repeat sales index is adjusted in a way Case and Shiller suggested. Specifically, the repeat sales measure tends to exhibit a delayed turn compared with the hedonic measure; for example, the hedonic measure of condominium prices hit bottom at the beginning of 2002, while the corresponding repeat-sales measure exhibits reversal only in the spring of 2004. Such a discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation (age effects).

    Introduction

    Fluctuations in real estate prices have substantial impacts on economic activities. In Japan, a sharp rise in real estate prices during the latter half of the 1980s and its decline in the early 1990s has led to a decade-long stagnation of the Japanese economy. More recently, a rapid rise in housing prices and its reversal in the United States have triggered a global financial crisis. In such circumstances, the development of appropriate indices that allow one to capture changes in real estate prices with precision is extremely important not only for policy makers but also for market participants who are looking for the time when housing prices hit bottom.

  • A statistical analysis of product prices in online markets

    Abstract

    We empirically investigate fluctuations in product prices in online markets by using tick-by-tick price data collected from a Japanese price comparison site, and find some similarities and differences between product and asset prices. The average price of a product across e-retailers behaves almost like a random walk, although the probability of a price increase/decrease is higher conditional on multiple preceding increases/decreases. This is quite similar to the property reported by previous studies on asset prices. However, we fail to find a long memory property in the volatility of product price changes. Also, we find that the price change distribution for product prices is close to an exponential distribution rather than a power law distribution. These two findings are in sharp contrast with previous results regarding asset prices. We propose an interpretation that these differences may stem from the absence of speculative activity in product markets; namely, e-retailers seldom repeatedly buy and sell a product, unlike traders in asset markets.
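
    Two of the checks described in the abstract can be sketched in a few lines: a variance-ratio statistic for the random walk property of the average price, and a likelihood comparison of exponential versus power-law fits to the absolute price changes. The function and variable names are assumptions for illustration, not the paper's procedure.

    ```python
    # Minimal sketch: random-walk and distribution-shape checks for product
    # prices (illustrative; assumes an array-like `avg_price` of average prices
    # and an array of price changes).
    import numpy as np
    from scipy import stats

    def variance_ratio(avg_price, q=5):
        """Close to 1 if the log average price behaves like a random walk."""
        logp = np.log(np.asarray(avg_price, dtype=float))
        d1 = np.diff(logp)
        dq = logp[q:] - logp[:-q]
        return dq.var() / (q * d1.var())

    def exponential_beats_powerlaw(price_changes):
        """Compare exponential vs. power-law (Pareto) fits to |price changes|."""
        x = np.abs(np.asarray(price_changes, dtype=float))
        x = x[x > 0]
        ll_exp = stats.expon.logpdf(x, scale=x.mean()).sum()
        b, loc, scale = stats.pareto.fit(x, floc=0)
        ll_par = stats.pareto.logpdf(x, b, loc=loc, scale=scale).sum()
        return ll_exp > ll_par   # True if the exponential fits better
    ```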

    Introduction

    In recent years, price comparison sites have attracted the attention of internet users. In these sites, e-retailers update their selling prices every minute, or even every second. Those who visit the sites can compare prices quoted by different e-retailers, thus finding the cheapest one without paying any search costs. E-retailers seek to attract as many customers as possible by offering good prices to them, and this sometimes results in a price war among e-retailers.

  • The bursting of housing bubble as jamming phase transition

    Abstract

    Recently, the housing market bubble and its burst have attracted much interest from researchers in various fields, including economics and physics. Economists have regarded a bubble as a disorder in prices. However, this research strategy has overlooked the importance of the volume of transactions. In this paper, we propose a model of a bursting bubble that focuses on transaction volume, incorporating a traffic model that represents spontaneous traffic jams. Comparing the model with data taken from the U.S. housing market, we find that the phenomenon of a bubble burst shares many properties with traffic jam formation on a highway. Our result suggests that transaction volume could be a driving force of the bursting phenomenon.

    Introduction

    Fluctuations in real estate prices have substantial impacts on economic activities. For example, land prices in Japan exhibited a sharp rise in the latter half of the 1980s and a rapid reversal in the early 1990s. This large swing led to a significant deterioration of the balance sheets of firms, especially financial firms, thereby causing a decade-long stagnation of the Japanese economy, which is called Japan’s “lost decade”. A more recent example is the U.S. housing market bubble, which started somewhere around 2000 and is now in the middle of collapsing. This has already caused substantial damage to financial systems in the U.S. and the euro area, and it may spread worldwide as in the case of the Great Depression in the 1920s and 30s.

  • The Firm as a Bundle of Barcodes

    Abstract

    We empirically investigate the firm growth model proposed by Buldyrev et al. by using a unique dataset that contains the daily sales of more than 200 thousand products, collected from about 200 supermarkets in Japan over the last 20 years. We find that the empirical firm growth distribution is characterized by a Laplace distribution at the center and power law at the tails, as predicted by the model. However, some of these characteristics disappear once we randomly reshuffle products across firms, implying that the shape of the empirical distribution is not produced in the way described by the model. Our simulation results suggest that the shape of the empirical distribution stems mainly from the presence of a relationship between the size of a product and its growth rate.
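
    The reshuffling exercise described in the abstract can be illustrated with a short sketch: compute firm growth rates from product-level sales, randomly reassign products to firms, and compare the tail heaviness of the two growth distributions. Column names ("firm_id", "product_id", "sales_t", "sales_t1") are assumptions for illustration.

    ```python
    # Minimal sketch: firm growth rates from product ("barcode") level sales
    # and the random reshuffling of products across firms.
    import numpy as np
    import pandas as pd

    def firm_growth_rates(df):
        firm = df.groupby("firm_id")[["sales_t", "sales_t1"]].sum()
        return np.log(firm["sales_t1"]) - np.log(firm["sales_t"])

    def reshuffled_growth_rates(df, seed=0):
        """Randomly reassign products to firms, keeping each firm's product count."""
        rng = np.random.default_rng(seed)
        shuffled = df.copy()
        shuffled["firm_id"] = rng.permutation(df["firm_id"].values)
        return firm_growth_rates(shuffled)

    def excess_kurtosis(g):
        """A Laplace distribution implies excess kurtosis of 3; heavier tails push it higher."""
        g = g.dropna()
        z = (g - g.mean()) / g.std()
        return (z ** 4).mean() - 3.0
    ```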

    Introduction

    Why do firms exist? What determines a firm’s boundaries? These questions have been repeatedly addressed by social scientists since Adam Smith argued more than two centuries ago that division of labor or specialization is a key to the improvement of labor productivity.

  • Factor Analysis of Price Changes Using a Large Scale POS Data (in Japanese)

    Abstract

    Using large-scale retail scanner data, this paper conducts a factor analysis of price changes. We decompose price changes into common and idiosyncratic factors with a factor model and examine the extent to which price fluctuations were driven by store-specific factors or by macro-level factors. We find that from the late 1990s to the mid-2000s the idiosyncratic component of individual price series tended to increase, whereas in the late 2000s the contribution of the common factors to the variance of prices rose markedly. A similar pattern is observed for the size of discounts during sales.
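
    The common/idiosyncratic decomposition described in the abstract can be sketched with principal components; the snippet below reports the share of variance attributed to the common factors. This is an illustration of the general technique, not the paper's exact factor model or data treatment.

    ```python
    # Minimal sketch: decompose a panel of price changes into common and
    # idiosyncratic components via principal components.
    import numpy as np

    def common_variance_share(dp, n_factors=1):
        """dp: (T periods x N price series) array of price changes."""
        X = (dp - dp.mean(axis=0)) / dp.std(axis=0)          # standardize each series
        U, S, Vt = np.linalg.svd(X, full_matrices=False)      # principal components
        common = U[:, :n_factors] * S[:n_factors] @ Vt[:n_factors]
        resid = X - common
        return 1.0 - resid.var() / X.var()                    # common factors' share
    ```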

    Introduction

    Japan's Phillips curve is said to have flattened since the 1990s relative to earlier periods. Over the same period, empirical studies using Japanese micro price data report that the probability of price changes has risen, which contradicts the implication of the New Keynesian Phillips curve (NKPC) that a higher probability of price changes should steepen the Phillips curve. An alternative theory of the Phillips curve is the imperfect-information model of Lucas (1972). In that theory, the slope of the Phillips curve is determined not by the probability of price changes but by the ratio of the variance of idiosyncratic shocks to the variance of price changes: the larger the contribution of idiosyncratic shocks to the variance, the flatter the Phillips curve. The motivation for this study is the conjecture that the flattening of Japan's Phillips curve has occurred not through the New Keynesian mechanism but through a Lucas-type imperfect-information mechanism.

  • Employment and Wage Adjustments at Firms under Distress in Japan: An Analysis Based upon a Survey

    Abstract

    We use the result from a survey of Japanese firms in manufacturing and service to investigate the choice of wage and employment adjustments when they needed to reduce substantially the total labor cost. Our regression analysis indicates that the large size reduction favors the layoffs of the core employees, whereas the base wage cuts are more likely if the firms do not feel immediate pressures from the external labor market or the strong competition in the product market. We also find some evidence that the concerns over adverse selection or demoralizing effects of wage cuts are real. Firms do try to avoid using base wage cuts if they consider these factors more important.

    Introduction

    The decade-long stagnation of the economy left visible, and perhaps also invisible, scars in many facets of the Japanese economy. During the decade of stagnation (take 1992-2001, for example, as the decade), the economy lost 3.5 million regular, full-time jobs. Although a precise breakdown is not readily available, the severity of the recession shows up in the proportion of job losses due to outright layoffs rather than to attrition through not replacing retiring employees. Figure 1 can be used to compare the lost decade with past recessions. The share of layoffs was indeed large during the period, yet it is comparable to the figure in the recession after the first oil shock.

  • The Effects of Monetary Policy on the Stock and Bond Markets -An Empirical Examination Using Euro–Yen 3-month Future Rates- (in Japanese)

    Abstract

    To clarify the effects of monetary policy on Japan's stock and bond markets, this paper conducts four analyses. The first examines whether Japanese stock and bond returns are predictable. Whereas Aono (2008) covered only the stock market, this paper follows the approach of Campbell and Ammer (1993) and uses a VAR system that includes variables related to both the stock market and the bond market. We confirm, as in Aono (2008), that stock returns are predictable, and we also find that bond returns are predictable. The second analysis constructs from Japanese data a "surprise" variable analogous to the futures-rate-based measure used by Kuttner (1996) and Bernanke and Kuttner (2005) and uses it in a time-series analysis. As in Honda and Kuroki (2006), we use the three-month Euro-yen futures rate as the futures interest rate for Japan. We find that only the surprise variable has significant explanatory power for stock and bond returns, which suggests that the monetary policy variable constructed here works reasonably well as a proxy for unanticipated monetary policy. The third analysis uses the surprise variable to examine the effect of monetary policy on industry-level stock returns. The surprise variable has significant explanatory power for industries such as non-ferrous metals, machinery, and retail, but not for public-utility-like sectors such as electricity and gas, primary industries such as fisheries, agriculture, and forestry, or heavily regulated industries such as insurance and air transport, which are thus less affected by monetary policy. The fourth analysis, building on the preceding ones and following Bernanke and Kuttner (2005) for the United States, uses the surprise variable to examine the effect of monetary policy on the stock and bond markets. We find that monetary policy affects the stock market in the short run but that the effect diminishes over time. In the Campbell-type variance decomposition for the stock market, the response coefficients on dividends and the real interest rate are positive, indicating that unanticipated monetary policy affects stock returns through positive responses of dividends and the real interest rate. The response of dividends differs from the U.S. results, suggesting that the sources of the shock's effects may differ between Japan and the United States. In the decomposition for the bond market, the response coefficients on the real interest rate and inflation are positive, indicating that unanticipated monetary policy affects bond returns through positive responses of the real interest rate and inflation.

    Introduction

    One effective way to gauge economic conditions is to monitor asset markets such as the stock and bond markets. Many factors can drive fluctuations in stock and bond prices; among the most important are the interactions between the two markets and factors related to economic policy.

  • Understanding the Decline in the Japanese Saving Rate in the New Millennium

    Abstract

    This paper investigates why the Japanese household saving rate, which fell from the late 1990s to the first few years of the new millennium, suddenly stabilized after 2003. Analyzing income and spending data for different age groups, we argue that this is explained by Japanese corporate restructuring prompted by the 1997 financial crisis and the resulting labor income decrease being concentrated in older working households. We believe two important changes in income distribution are associated with this mechanism. First, the negative labor income shock, which was mostly borne by the younger generation in the initial stages of the “lost decade” finally spread to older working households in the late 1990s and early 2000s. Second, there was a significant income shift from labor to shareholders associated with corporate restructuring during this time. This resulted in a decline in the wage share, so that the increase in corporate saving offset the decline in household saving.

    Introduction

    It is more than two decades since Fumio Hayashi tried to explain the apparently high Japanese saving rate in his seminal article (Hayashi 1986). Today, Japan is widely recognized as a country with a "declining saving rate". As shown in Figure 1, the Japanese household saving rate was around 18% at the beginning of the 1980s. It has been declining ever since, down to 3.3% in 2006. The total decline is about 15 percentage points over little more than a quarter of a century. There is little doubt that this declining trend is mostly explained by the aging of Japanese society (Horioka 1997; Dekle 2005; Chen, İmrohoroğlu, and İmrohoroğlu 2006; Braun, Ikeda, and Joines 2008).

  • Price Dynamics in Japan over the Last Quarter Century(in Japanese)

    Abstract

    Looking back over the past quarter century, asset prices showed large swings, rising sharply in the late 1980s and falling steeply in the early 1990s, while the prices of goods and services, as represented by the CPI and the GDP deflator, changed relatively little. This lack of co-movement between asset prices and goods and services prices is a defining feature of the period, and it complicated the conduct of monetary policy. To explore its causes, this paper focuses on housing rent, an important node connecting asset prices and goods and services prices, and examines its co-movement with house prices. We find that Japanese rents are roughly three times as sticky as rents in the United States, and that this stickiness impedes arbitrage between rents and house prices. If rents in Japan had been as flexible as in the United States, we estimate that CPI inflation would have been about one percentage point higher than the actual figure during the bubble period and about one percentage point lower during the bubble's collapse, in which case the shift toward monetary tightening during the bubble and toward easing after its collapse might have come earlier.

  • Housing Bubbles in Japan and US(in Japanese)

    Abstract

    Japan and the United States experienced housing bubbles and their collapse one after the other. This paper compares these two bubbles and obtains the following findings.

    First, applying the two standard methods for measuring house prices, the repeat-sales method and the hedonic method, to Japanese data for the past 20 years, we find that the two methods date the post-bubble bottom differently. The bottom estimated by the repeat-sales method lags that of the hedonic method by 13 months for condominiums and 3 months for single-family houses. This lag arises because the repeat-sales method does not properly handle depreciation due to the age of the building. In the United States, the representative house price index is the S&P/Case-Shiller index, which uses the repeat-sales method and may therefore date the bottom too late. With attention focused on when the U.S. housing market will bottom out, such a recognition lag increases uncertainty and risks delaying the economic recovery.

    Second, time-series data show a positive correlation between housing demand and house prices. In cross-sectional data at the prefecture or state level, however, no significant correlation is found in either Japan or the United States. In this sense, demand factors cannot explain the presence or size of the bubble across prefectures (states). This suggests that the story in which demographics raise housing demand and thereby push up house prices may not be valid, at least as an explanation of price increases during the bubble period.

    Third, looking at the co-movement of house prices and rents, we confirm for both Japan and the United States that rents hardly move even when house prices change substantially during the formation and collapse of a bubble. Behind this lies the fact that landlords and tenants form long-term contractual relationships to save on various transaction costs. In Japan, in addition, the imputed rent of owner-occupied housing is not assessed at market prices, which further weakens the co-movement. Because of this lack of co-movement, consumer prices, which include rent as an important component, did not rise in either country when house prices rose during the bubble, delaying the shift toward monetary tightening. After the collapse of the bubble, consumer prices did not fall because rents did not move with falling house prices, which delayed the shift toward monetary easing. Rent is a key variable linking asset prices to the prices of goods and services, and the precision with which it is measured needs to be improved.

    Introduction

    The purpose of this paper is to compare Japan's real estate bubble of the 1980s, said to be the largest of the postwar period, with the U.S. housing bubble that began around 2000 and is said to have caused the worst financial crisis since the Great Depression of 1929, and thereby to highlight the similarities and differences between the two markets. Among the many points that have been made about housing bubbles, this paper aims to clarify the following in particular.

  • Goodness-of-Fit Test for Price Duration Distributions

    Abstract

    Is the actual price-setting behavior of an individual commodity item consistent with the assumptions of a sticky-price model? Part of the question may formally be addressed by performing a goodness-of-fit test for price duration distributions. For each of the 429 items in the Japanese retail price data for 2000–2005, we fitted the standard parametric models with or without unobserved heterogeneity to the data and tested the goodness of fit. We found that for 8.6 percent of the tested items the hypothesis that the underlying distribution is exponential, which corresponds to the time-dependent pricing model of Calvo (1983), cannot be rejected.
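
    The exponential-duration hypothesis mentioned in the abstract can be illustrated with a simple goodness-of-fit check; the sketch below uses a Kolmogorov-Smirnov test with the scale estimated from the data, as a simplified stand-in for the paper's procedure (which also allows for unobserved heterogeneity and other parametric families).

    ```python
    # Minimal sketch: test whether price-spell durations for one item are
    # consistent with the exponential distribution implied by Calvo (1983).
    import numpy as np
    from scipy import stats

    def calvo_consistent(durations, alpha=0.05):
        """durations: array of completed price-spell lengths (e.g., in months)."""
        d = np.asarray(durations, dtype=float)
        # Note: using the estimated mean makes the nominal p-value approximate
        # (the Lilliefors problem); a simulated critical value would be cleaner.
        stat, pvalue = stats.kstest(d, "expon", args=(0.0, d.mean()))
        return pvalue > alpha, pvalue   # True: constant-hazard exponential not rejected
    ```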

    Introduction

    This paper examines the distributional assumption of the duration of price spells. It forms part of an attempt to construct a formal theory dealing with sticky prices, because existing sticky-price models in macroeconomics explicitly formulate the mechanism of a firm’s price change by assuming that the length of price spells follows a certain distribution. One example is the Calvo (1983) model, which assumes that the probability of a firm’s price change is determined exogenously and does not change over time. This assumption implies that price spell durations have an exponential distribution with a constant hazard rate. The other example is the Dotsey, King, and Wolman (1999) model, which assumes a fixed cost of adjusting price. This model predicts a monotonically increasing hazard function when the general level of prices continues upward.

  • A New Method for Identifying the Effects of Central Bank Interventions

    Abstract

    Central banks react even to intraday changes in the exchange rate; however, in most cases, intervention data is available only at a daily frequency. This temporal aggregation makes it difficult to identify the effects of interventions on the exchange rate. We propose a new method based on Markov Chain Monte Carlo simulations to cope with this endogeneity problem: We use “data augmentation” to obtain intraday intervention amounts and then estimate the efficacy of interventions using the augmented data. Applying this method to Japanese data, we find that an intervention of one trillion yen moves the yen/dollar rate by 1.7 percent, which is more than twice as large as the magnitude reported in previous studies applying OLS to daily observations. This shows the quantitative importance of the endogeneity problem due to temporal aggregation.

    Introduction

    Are foreign exchange interventions effective? This issue was debated extensively in the 1980s and 1990s, but no conclusive consensus has emerged. A key difficulty faced by researchers in answering this question is the endogeneity problem: the exchange rate responds “within the period” to central bank interventions, and the central bank reacts “within the period” to fluctuations in the exchange rate. As an example, consider the case of Japan. The monetary authorities of Japan, which are known to be among the most active interveners, started to disclose intervention data in July 2001, and this has rekindled researchers’ interest in the effectiveness of interventions. However, the information disclosed is limited: only the total amount of interventions on a day is released to the public at the end of a quarter, and no detailed information, such as the time of the intervention(s), the number of interventions over the course of the day, or the market(s) (Tokyo, London, or New York) in which the intervention(s) were executed, is disclosed. Most importantly, the low frequency of the disclosed data poses a serious problem for researchers because it is well known that the Japanese monetary authorities often react to intraday fluctuations in the exchange rate.
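
    The bias from temporal aggregation can be illustrated with a toy simulation (not the paper's MCMC data-augmentation procedure): within each "day" the authority leans against the wind at the intraday frequency and the exchange rate responds to interventions, so OLS on the daily aggregates understates the true intraday effect. All parameter values below are arbitrary assumptions for illustration.

    ```python
    # Minimal sketch: toy simulation of the endogeneity problem caused by
    # temporal aggregation of intraday interventions to daily data.
    import numpy as np

    rng = np.random.default_rng(1)
    true_effect, reaction = 0.5, -0.8          # rate response; leaning against the wind
    days, intervals = 2000, 8
    daily_dx, daily_int = [], []
    for _ in range(days):
        dx_prev, dx_sum, int_sum = 0.0, 0.0, 0.0
        for _ in range(intervals):
            intervention = reaction * dx_prev + rng.normal(0, 0.1)
            dx = true_effect * intervention + rng.normal(0, 1.0)
            dx_prev, dx_sum, int_sum = dx, dx_sum + dx, int_sum + intervention
        daily_dx.append(dx_sum)
        daily_int.append(int_sum)
    daily_dx, daily_int = np.array(daily_dx), np.array(daily_int)
    ols_daily = (daily_int * daily_dx).sum() / (daily_int ** 2).sum()
    print(f"true intraday effect: {true_effect}, daily OLS estimate: {ols_daily:.2f}")
    ```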

  • Residential Rents and Price Rigidity: Micro Structure and Macro Consequences

    Abstract

    Why was the Japanese consumer price index for rents so stable even during the housing bubble period of the 1980s? In addressing this question, we start from the analysis of microeconomic rigidity and then investigate its implications for aggregate price dynamics. We find that ninety percent of the units in our dataset had no change in rents per year, indicating that rent stickiness is three times as high as in the US. We also find that the probability of rent adjustment depends little on the deviation of the actual rent from its target level, suggesting that rent adjustments are not state dependent but time dependent. These two results indicate that both the intensive and extensive margins of rent adjustment are very small, thus yielding a slow response of the CPI to aggregate shocks. We show that the CPI inflation rate would have been higher by one percentage point during the bubble period, and lower by more than one percentage point during the period of the bubble bursting, if Japanese housing rents were as flexible as those in the US.

    Introduction

    Fluctuations in real estate prices have substantial impacts on economic activities. For example, land and house prices in Japan exhibited a sharp rise in the latter half of the 1980s, and its rapid reversal in the early 1990s. This wild swing led to a significant deterioration of the balance sheets of firms, especially those of financial firms, thereby causing a decade-long stagnation of the economy. Another recent example is the U.S. housing market bubble, which started somewhere around 2000 and is now in the middle of collapsing. These recent episodes have rekindled researchers’ interest on housing bubbles.

  • Measuring Fiscal Multipliers during the Zero Interest Rate Period(in Japanese)

    Abstract

    In this paper, we construct quarterly data on government tax revenue and quarterly output elasticities of tax revenue, and use them to estimate a structural VAR model and measure Japan's fiscal multipliers. We find that fiscal multipliers have declined markedly since the mid-1980s: before the bubble (1965-86), shocks to government spending and taxes had significant effects on output, but thereafter (1987-2004) they have had almost no effect. The decline in fiscal multipliers since the 1980s has also been observed in the United States and the United Kingdom, however, and is therefore not necessarily a phenomenon peculiar to Japan.

    Introduction

    Have fiscal multipliers declined? If so, why? These have been central questions in the debate over fiscal policy since the collapse of the bubble, yet no consensus has been reached. For example, Ihori, Nakazato, and Kawade (2002) argue that fiscal multipliers fell in the 1990s, whereas Hori and Ito (2002) find no evidence of such a decline.

  • Stickiness in Producer and Consumer Prices-Investigation through Survey and Scanner Data- (in Japanese)

    Abstract

    This paper reports the results of a survey on the price-setting behavior of 123 Japanese firms producing and shipping food and daily necessities. First, about 90 percent of firms do not change their shipping prices immediately when costs or demand change; in this sense, prices are sticky. Roughly 30 percent of firms each cite the cost of collecting and processing information on costs and demand, and strategic complementarity, as the main reasons for this stickiness, whereas physical costs of changing prices, such as menu costs, are unimportant. Second, regarding the frequency of price changes, more than 30 percent of firms have not changed their shipping prices even once over the past ten years, indicating stickiness that is strong even by international comparison. Third, matching the survey responses with POS data, we find no statistically significant co-movement of retail prices with changes in manufacturers' shipping prices, and the frequency of retail price changes far exceeds that of shipping price changes. These results suggest that most of the variation in retail prices reflects the behavior of retailers rather than manufacturers.

    Introduction

    Recent studies measuring price stickiness use a simple method: counting how many times prices are changed over a given period in the raw data underlying the CPI or in supermarket POS data. For example, Bils and Klenow (2004) measure the frequency of price changes using the raw data of the U.S. CPI and find that prices are changed about once every four months on average, far more often than the "conventional wisdom" in macroeconomics of roughly one change per year. Nakamura and Steinsson (2008), on the other hand, argue that once sales are excluded, prices change about once every 11 months, so stickiness is close to the conventional wisdom.

  • Wage – Employment Adjustment and Price Setting Behavior (in Japanese)

    Abstract

    Using a survey of Japanese firms, this paper empirically examines interactions among employment, wage and price adjustments at firm level. Our major findings are as follows. (1) The firms in the survey generally view moral hazards as the most important factor preventing downward wage adjustments, a perception shared by German firms in a similar survey. Based upon questions on episodes of large negative shocks on labor demand, our findings are: (2) they tended to reduce their employment, not wages, if they viewed that they would lose more productive workers when they lower wages; and (3) those firms facing more competitive product markets tended to use employment, rather than wage, adjustments.

    Introduction

    The Japanese labor market of the 1990s is often interpreted, domestically, as the outcome of labor market rigidity. A typical argument is that the adjustment of regular employees, who are excessively protected with guaranteed employment and wages, did not proceed, and that the resulting distortions appeared as the casualization of the workforce and the sharp rise in unemployment. Opinions differ on these arguments, but what deserves attention is that during the 1990s, an era of structural reform, the debate over the labor market was conducted in isolation from other markets and from the behavior of firms and consumers.

  • Random Walk or A Run -Market Microstructure Analysis of the Foreign Exchange Rate Movements based on Conditional Probability-

    Abstract

    Using tick-by-tick data on the dollar-yen and euro-dollar exchange rates recorded on an actual transaction platform, we find that a "run" (continuous increases or decreases in deal prices over the past several ticks) does carry some predictive information on the direction of the next price movement. Deal price movements, which are consistent with order flows, tend to continue a run once it has started; that is, the conditional probability that deal prices move in the same direction as in the last several ticks in a row is higher than 0.5. Quote prices, however, show no such tendency. Hence, a random walk hypothesis is refuted in a simple test of runs using the tick-by-tick data. In addition, a longer continuous increase in the price tends to be followed by a larger reversal. The findings suggest that market participants who have access to real-time, tick-by-tick transaction data may have an advantage in predicting exchange rate movements. The findings also lend support to the momentum trading strategy.
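
    The run-based test described in the abstract amounts to estimating a conditional continuation probability; a minimal sketch (ignoring quote prices and order-flow information, and with the run length as an assumed parameter) follows.

    ```python
    # Minimal sketch: conditional probability that the next deal-price move
    # continues a run of k same-signed moves.
    import numpy as np

    def run_continuation_probability(prices, k=3):
        moves = np.sign(np.diff(np.asarray(prices, dtype=float)))
        moves = moves[moves != 0]                        # drop unchanged ticks
        cont = total = 0
        for t in range(k, len(moves)):
            if np.all(moves[t - k:t] == moves[t - 1]):   # a run of length k just ended
                total += 1
                cont += moves[t] == moves[t - 1]
        # > 0.5 suggests momentum; 0.5 is consistent with a random walk
        return cont / total if total else np.nan
    ```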

    Introduction

    The foreign exchange market remains sleepless around the clock. Someone is trading somewhere all the time—24 hours a day, 7 days a week, 365 days a year. Analyzing the behavior of the exchange rate has become a popular sport of international finance researchers, while global financial institutions are spending millions of dollars to build real-time computer trading systems (program trading). High-frequency, reliable data are the key in finding robust results for good research for academics or profitable schemes for businesses.

  • Why the Law of One Price Does Not Hold? Evidence from a Japanese Price Comparison Site(in Japanese)

    Abstract

    Using a new dataset that records, at one-second intervals, the prices posted by online retailers on the price comparison site Kakaku.com and consumers' click responses to them, this paper analyzes retailers' price-setting behavior and consumers' purchasing behavior. Our main findings are as follows. First, even when a store's price rank (how low its price is at that moment relative to other stores) is not first, the probability of receiving clicks is not zero. The click probability declines as the price rank falls, however, and the relationship between the price rank and the log of the click probability is close to linear. This linearity suggests that consumers have preferences over stores and choose the store offering the lowest price within their preferred set of stores. Second, the average of the prices posted by the stores follows a random walk with drift, indicating that most price fluctuations are driven by random changes in the stores' inventories. Departures from the random walk are observed in episodes such as sharp price declines, however, suggesting that strategic complementarity in stores' pricing may trigger price collapses.

    Introduction

    The prediction that the spread of the Internet would fundamentally change our lives appears to be rapidly losing support. Consumer and firm behavior has changed in the online world and will continue to change, but not to the extent envisaged when the Internet first spread.

  • Financing Behavior of Japanese Firms(in Japanese)

    Abstract

    Using panel data on Japanese listed firms from 1964 to 2005, this paper tests whether Japanese firms' financing behavior follows the trade-off theory or the pecking order theory of capital structure. We estimate the models derived from each theory and compare their explanatory power. Our main conclusions are as follows. First, although both theories have statistically significant explanatory power for Japanese firms' financing behavior, the pecking order theory explains it relatively better, and this holds for almost the entire period since 1964. Second, quantile regressions that allow for skewness in the conditional distribution of the dependent variable show that the financing behavior of the majority of Japanese firms exactly matches the pecking order theory's prediction of b_PO = 1; in this sense, the majority of Japanese firms follow the pecking order theory strictly. This tendency also holds for most of the period since 1964.

    Introduction

    Uncovering the regularities in firms' financing behavior is an important task from the perspective of the stability of the financial system and the growth of the real economy. In the Japanese economy after the collapse of the bubble, it is still fresh in our memory how firms' excess debt destabilized the financial system through bankruptcies and rising credit risk, and hampered the growth of the real economy through depressed business investment.

  • Tests of the Rank Size Rule Regression (in Japanese)

    Abstract

    Many empirical studies have used rank-size regressions to document the Pareto nature of objects such as city sizes and firms' assets or sales. Specifically, the log of the rank is regressed on the log of size, and one examines whether the coefficient equals -1. Since a zero coefficient on the quadratic term is also a condition for Pareto behavior, this paper uses a regression model that includes a quadratic term. Pareto behavior can then be tested with t tests on the linear and quadratic terms and with an F test of the joint hypothesis that the coefficient on the linear term is -1 and the coefficient on the quadratic term is 0. However, it is known that when the data follow a Pareto distribution, the t statistic diverges as the sample size grows, so the usual t test cannot be applied, and we observe the same problem for the F test. We therefore construct the rejection region of the F statistic by simulation, making the joint hypothesis of the rank-size regression testable, and propose this as a new method for testing Pareto behavior.

    Introduction

    One area in which the rank-size regression fits well and has long been applied in urban and regional economics is the analysis of city size distributions. The cities of a country are sorted by population in descending order and assigned ranks 1, 2, and so on. The rank-size regression regresses the log of a city's population on the log of its rank. In many countries, the constant term is roughly equal to the log of the sample size and the slope is roughly equal to -1: starting from the largest city, the second city has about 1/2 its population, the third about 1/3, and so on. Let S_i, i = 1, ..., n denote the population of city i in a country, and let S_(i) denote the corresponding order statistics sorted in descending order, so that S_(1) ≥ S_(2) ≥ ... ≥ S_(n).
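
    The regression and the simulated rejection region described in the abstract can be sketched as follows. The simulation design (sample generation from an exact Pareto with tail exponent 1, number of replications) is an assumption for illustration, not necessarily the paper's exact setup.

    ```python
    # Minimal sketch: rank-size regression with a quadratic term and a
    # simulated critical value for the joint hypothesis (slope = -1, quad = 0).
    import numpy as np

    def ranksize_F(sizes):
        s = np.sort(np.asarray(sizes, dtype=float))[::-1]
        logS, logR = np.log(s), np.log(np.arange(1, len(s) + 1))
        X = np.column_stack([np.ones_like(logS), logS, logS ** 2])
        beta, *_ = np.linalg.lstsq(X, logR, rcond=None)
        e = logR - X @ beta
        s2 = (e @ e) / (len(s) - 3)
        R = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        diff = R @ beta - np.array([-1.0, 0.0])
        F = diff @ np.linalg.inv(R @ np.linalg.inv(X.T @ X) @ R.T) @ diff / (2 * s2)
        return F

    def draw_pareto(rng, n):
        return 1.0 / rng.uniform(size=n)      # Pareto with tail exponent 1 (Zipf case)

    def simulated_critical_value(n, reps=500, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        sims = [ranksize_F(draw_pareto(rng, n)) for _ in range(reps)]
        return np.quantile(sims, 1 - alpha)
    ```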

  • The Cause of Volatility in Panel Data of Household Expenditure – A Study on Measurement Errors and Time Aggregation- (in Japanese)

    Abstract

    According to the standard model of household consumption, consumption changes smoothly relative to income and behaves approximately like a random walk. In panel data for many countries, however, the variance of recorded changes in household consumption is larger than the variance of income changes, and consumption behaves more like an i.i.d. series than like a random walk. This paper examines whether this instability in consumption data is attributable to measurement error or to the shortness of the survey period. We find that the main cause of consumption volatility is the shortness of the expenditure survey window rather than measurement error. In an analysis of food expenditure using the Needs-Scan/Panel data, household consumption approaches a random walk only when aggregated over periods of a quarter or longer, and even then expenditure on storable foods behaves more like an i.i.d. series than like a random walk. This suggests that household consumption data based on the usual one-week or one-month windows are ill-suited to testing consumption smoothing or the random walk hypothesis.

    Introduction

    According to the standard model of household consumption, consumption changes smoothly relative to income and its movement is close to a random walk. This property follows from the standard assumptions that the utility function is additively separable over time and concave in each period, and it underlies not only macro dynamic models but also much of the economic analysis of households' dynamic decision-making. Consequently, a very large literature has tested consumption smoothing and the random walk hypothesis.

  • Disagreement and Stock Prices in the JASDAQ -An Empirical Investigation Using Market Survey Data

    Abstract

    This article empirically examines “disagreement” models using JASDAQ market data by exploiting institutional investors’ forecasts of future stock prices. We use the standard deviations of the one-month ahead forecasts of stock prices in the QSS Equity Survey as the measure of disagreement in the market. The results indicate that an increase in disagreement is associated with an increase in contemporaneous stock returns and lower average expected returns. In terms of the latter, while the survey data provides an average assessment of marketwide expectations, when disagreement is high the current market price tends to reflect the opinions of more optimistic market participants. These results contrast with comparable findings using TOPIX data (representing larger firms on the Tokyo Stock Exchange’s first section) that contradict the predictions of disagreement models. One reason posited is that firms on the JASDAQ market are much smaller and the number of market participants more limited. Accordingly, institutional “limits of arbitrage”, such as short-sale and liquidity constraints, are more binding and their influence on stock prices is thereby greater.

    Introduction

    This paper uses Japanese data to test the empirical implications of recent developments in behavioral finance, which we refer to as “disagreement” models. These models, starting with seminal contributions by Miller (1977), and Harrison and Kreps (1978), and surveyed in Hong and Stein (2007), display two key elements. First, they assume some disagreement or difference in opinion among investors over the valuation of assets. Second, some institutional factors, typically short-sales constraints, prevent the arbitrage mechanism from working completely. As a result, the market price tends to reflect the expectations of optimistic investors. Accordingly, the informational role of asset prices is partially confined and the market price can be persistently higher than the fundamentals.

  • Disagreement and Stock Prices in the JASDAQ -An Empirical Investigation Using Market Survey Data (in Japanese)

    Abstract

    We test the implications of "disagreement" models in behavioral finance using data from Japan's JASDAQ market. Using the standard deviation of institutional investors' stock price forecasts in QUICK's QSS Equity Survey as a measure of investor disagreement, we first show that greater disagreement raises current stock prices. We then show that when disagreement is large, the expected rate of stock price increase implied by the survey forecasts, which investors cannot observe contemporaneously, declines. This suggests that the optimistic subgroup whose views are directly reflected in market prices expects higher prices next month than the average of all market participants covered by the survey. Moreover, compared with the Nikkei JASDAQ average, to which disagreement models should apply more closely because of institutional "limits of arbitrage," the results for TOPIX, an index of a more mature market, satisfy the models' implications hardly at all. We therefore conclude that institutional limits of arbitrage have a larger influence on price formation for small stocks, such as those traded on the JASDAQ market, than for large stocks such as those in the first section of the Tokyo Stock Exchange.

    Introduction

    The field known as "behavioral finance" has attracted growing attention in Japan in recent years. When the term is used in popular discussions, it often carries a connotation critical of mainstream finance: the superiority and novelty of approaches based on psychology and behavioral science are emphasized, and the traditional neoclassical approach is portrayed as an outdated big-gun, big-battleship doctrine.

  • Micro and Macro Price Dynamics over Twenty Years in Japan ―A Large Scale Study Using Daily Scanner Data―

    Abstract

    Using large-scale daily scanner data, we investigate micro and macro price dynamics in Japan between 1988 and 2005. Drawing upon three billion observations of prices and the number of sold units, we find: (i) the frequency of price change is increasing, (ii) the frequency varies greatly between products and stores, and (iii) the choice of data frequency is crucial when estimating the degree of price stickiness. The estimates obtained with daily data are very different from those employing monthly data. Moreover, (iv) a Consumer Price Index (CPI) based on scanner data exhibits similar movements to the official CPI, except for the first half of the 1990s and in the 2000s, (v) the lower substitution bias does not comprise a serious problem, and (vi) the scanner-based CPI is more strongly correlated with the GDP gap than the official CPI. Our findings of the increasing frequency of price changes and very flexible prices are inconsistent with New Keynesian models of the Phillips Curve and recent Japanese experience with the flattening of the Phillips Curve. The second and third findings cast doubt on the use of monthly data to estimate the degree of price stickiness.

    Introduction

    Investigation of the price dynamics of individual commodities and their aggregates, such as the Consumer Price Index (CPI), has been a central theme of modern macroeconomics. For a long time, researchers have been seeking theories that can describe and predict the price dynamics, statistical indicators that can capture the aggregate movement of prices, and policy tools that enable policymakers to control inflation. Recently, an increasing number of researchers and policymakers have made use of scanner data to analyze price dynamics because of their rich information on prices and the amount of sales.

  • Menu Costs and Price Change Distributions: Evidence from Japanese Scanner Data

    Abstract

    This paper investigates the implications of the menu cost hypothesis for the distribution of price changes, using daily scanner data covering all products sold at about 200 Japanese supermarkets from 1988 to 2005. First, we find that small price changes are indeed rare. The price change distribution for products with sticky prices has a dent in the vicinity of zero, while no such dent is observed for products with flexible prices. Second, we find that the longer the time that has passed since the last price change, the higher is the probability that a large price change occurs. Combined with the fact that the price change probability is a decreasing function of price duration, this means that although the price change probability decreases as price duration increases, once a price adjustment occurs, the magnitude of such an adjustment is large. Third, while the price change distribution is symmetric on a short time scale, it is asymmetric on a long time scale, with the probability of a price decrease being significantly larger than the probability of a price increase. This asymmetry seems to be related to the deflation that the Japanese economy has experienced over the last five years.

    Introduction

    The menu cost hypothesis has several important implications: those relating to the probability of the occurrence of a price change, and those relating to the distribution of price changes conditional on the occurrence of a change. The purpose of this paper is to examine the latter implications using daily scanner data covering all products sold at about 200 Japanese supermarkets from 1988 to 2005.

  • Price change frequency, bargain sales, and the CPI – An empirical analysis based on large scale scanner data(in Japanese)

    Abstract

    In recent macroeconomic theory, the degree of stickiness of individual product prices is a crucial parameter that determines the shape of the Phillips curve and the effectiveness of monetary policy. However, measuring the frequency of price changes is not a straightforward task, because (1) actual price movements differ greatly across stores and products and (2) temporary sales with large sales volumes occur frequently, so the results of previous studies on the degree of stickiness do not agree. Using large-scale daily POS data that include sales volumes, this paper measures the frequency of price changes and the effects of temporary sales and lower-level substitution on the consumer price index. We find that the frequency of price changes in Japan is very high and has been rising in recent years. A price index based on the POS data broadly tracks the official CPI, but diverges in the first half of the 1990s and after 2000: in the former period the POS-based CPI falls below the official CPI, and in the latter it rises above it. The divergence in the first half of the 1990s most likely arose because the official CPI did not take account of temporary sales.

    Introduction

    The consumer price index (CPI) is one of the most important indices for macroeconomic policy, and the mechanism behind its fluctuations has long been a central research theme in macroeconomics. In particular, many papers have examined the properties of the Phillips curve, the theory linking the CPI to aggregate output, and the relationship between the official CPI and the price level posited by macroeconomic theory. On the former, there is the body of New Keynesian work grounded in the stickiness of individual prices; on the latter, representative studies are the Boskin Report (1996) and, for Japan, Shiratsuka (1998).

  • Unobserved Heterogeneity in Price-Setting Behavior: a Duration Analysis Approach

    Abstract

    There is strong empirical evidence that the degree of price stickiness differs across commodity items, and that the nonparametric hazard function of price changes is downward-sloping with some spikes. We introduce item-specific heterogeneity into the standard single-sector model of Calvo (1983) and estimate a hazard function of price adjustment, by applying duration analysis. We present the appropriate form of heterogeneity for the data structure, and show that the decreasing (population) hazard function is well described. In the presence of item-specific heterogeneity, the probability that prices remain unchanged is predicted to be higher than in the single-sector model.
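
    The role of item-specific heterogeneity described in the abstract can be illustrated with a simple parametric comparison: a single-sector Calvo model implies exponential spell durations with a constant hazard, whereas an exponential model with gamma-distributed item-specific heterogeneity implies Lomax (Pareto II) durations, whose population hazard declines with elapsed duration. The sketch below is an illustration of that idea, not the paper's estimation procedure.

    ```python
    # Minimal sketch: plain exponential vs. gamma-mixed exponential (Lomax)
    # fits to price-spell durations.
    import numpy as np
    from scipy import stats

    def fit_and_compare(durations):
        d = np.asarray(durations, dtype=float)
        ll_expon = stats.expon.logpdf(d, scale=d.mean()).sum()
        c, loc, scale = stats.lomax.fit(d, floc=0)       # exponential with gamma frailty
        ll_lomax = stats.lomax.logpdf(d, c, loc=loc, scale=scale).sum()
        return {"loglik_exponential": ll_expon, "loglik_lomax": ll_lomax}

    def lomax_hazard(t, c, scale):
        """Population hazard c / (scale + t), which decreases in elapsed duration t."""
        return c / (scale + np.asarray(t, dtype=float))
    ```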

    Introduction

    Previous studies (Bils and Klenow, 2004; Dhyne et al., 2005; Saita et al., 2006) have shown that the degree of price stickiness differs across commodity items. The time-dependent pricing model (Calvo, 1983), in which one single parameter represents price stickiness, cannot reproduce the strong empirical evidence in such a way that the nonparametric hazard function of price changes is decreasing (Álvarez, Burriel and Hernando, 2005). Each item has specific factors related to its survival experience. These specific factors, whether observable or not, change the shape of the (individual) hazard function. If the variability in hazard is not fully captured by covariates, it is necessary to model unobserved heterogeneity.

  • Microeconometric Analysis on Nominal Price Stickiness (in Japanese)

    Abstract

    This study measures the stickiness of Japanese retail prices according to the definition given in Calvo (1983) and examines the pattern of price changes. Using price data from the Retail Price Survey compiled by the Statistics Bureau of the Ministry of Internal Affairs and Communications (January 2000 to December 2005), we measure the price stickiness of the items making up the CPI basket. The estimated hazard rate of price changes is 21.1 percent, implying that prices are maintained at the same level for four months on average. We also apply the framework of survival analysis to test the validity of Calvo-type price setting. Estimating a Weibull hazard model and performing a Wald test of a constant hazard rate, we find that Calvo-type price setting cannot adequately explain the pattern of price changes in the general price level.

    Introduction

    How real variables such as employment and output respond to changes in aggregate demand has long been a central issue in macroeconomics. If there is a channel through which aggregate demand shocks propagate to real variables, one leading explanation is nominal price rigidity. When the nominal prices of goods and services are not adjusted immediately, firms change output at given prices in response to changes in demand; frictions such as the costs of price adjustment and institutional factors thus create the channel through which demand shocks affect real variables. Clarifying empirically what these frictions are and why prices become sticky is therefore of great importance both for describing short-run macroeconomic fluctuations and for evaluating the various economic policies that affect aggregate demand.

  • Firm Growth and Interfirm Network: Evidence from Japan (in Japanese)

    Abstract

    Using a dataset covering about 820,000 Japanese corporations, roughly one third of all incorporated firms, this paper examines how the number of a firm's capital and transaction relationships is related to firm size. First, the distribution of the number of capital and transaction relationships has a long tail: the top 1 percent of firms by number of relationships account for about 50 percent of all relationships, so relationships are highly concentrated. Second, extracting only the hub firms with many transaction relationships and examining the relationships among them, we find that relationships are concentrated in a small number of super-hub firms, so the concentration is even more pronounced. Third, larger firms have more relationships, and overall the two are roughly proportional; firms that already have many relationships, however, do not increase their number of relationships in proportion to further growth in size, which can be interpreted as firms economizing on the cost of maintaining relationships.

    Introduction

    Firm activity rests on various kinds of interfirm relationships. The first is the flow of goods: firms purchase raw materials from upstream firms, sell their output to downstream firms as intermediate inputs, or sell it to distributors as final goods. Second, the flow of goods is often accompanied by interfirm credit. A typical example is the issuance of promissory notes to postpone the settlement of a transaction, but more generally, whenever the timing of the delivery of goods and of payment differ, interfirm credit arises; such credit is, moreover, often replaced by bank credit. Third, there are capital relationships: not only do parent companies establish subsidiaries, but firms with close trading relationships often hold each other's shares. Fourth, there are personnel ties such as the dispatch of directors.

  • The Great Intervention and Massive Money Injection: The Japanese Experience 2003-2004

    Abstract

    From the beginning of 2003 to the spring of 2004, Japan’s monetary authorities conducted large-scale yen-selling/dollar-buying foreign exchange operations in what Taylor (2006) has labeled the “Great Intervention.” The purpose of the present paper is to empirically examine the relationship between this “Great Intervention” and the quantitative easing policy the Bank of Japan (BOJ) was pursuing at that time. Using daily data of the amount of foreign exchange interventions and current account balances at the BOJ, our analysis arrives at the following conclusions. First, while about 60 percent of the yen funds supplied to the market by yen-selling interventions were immediately offset by the BOJ’s monetary operations, the remaining 40 percent were not offset and remained in the market for some time; this is in contrast with the preceding period, when almost 100 percent were offset. Second, comparing foreign exchange interventions and other government payments, the extent to which the funds were offset by the BOJ was much smaller in the case of foreign exchange interventions, and the funds also remained in the market longer. This finding suggests that the BOJ differentiated between and responded differently to foreign exchange interventions and other government payments. Third, the majority of financing bills issued to cover intervention funds were purchased by the BOJ from the market immediately after they were issued. For that reason, no substantial decrease in current account balances linked with the issuance of FBs could be observed. These three findings indicate that it is highly likely that the BOJ, in order to implement its policy target of maintaining current account balances at a high level, intentionally did not sterilize yen-selling/dollar-buying interventions.

    Introduction

    During the period from 2001 to 2006, the Japanese monetary authorities pursued two important and very interesting policies. The first of these is the quantitative easing policy introduced by the Bank of Japan (BOJ) in March 2001. This step was motivated by the fact that although the overnight call rate, the BOJ’s policy rate, had reached its lower bound at zero percent, it failed to sufficiently stimulate the economy. To achieve further monetary easing, the BOJ therefore changed the policy variable from the interest rate to the money supply. The quantitative easing policy remained in place until March 2006, by which time the Japanese economy had recovered. The second major policy during this period consisted of interventions in the foreign exchange market by Japan’s Ministry of Finance (MOF), which engaged in large-scale selling of the yen from January 2003 to March 2004. Taylor (2006) has called this the “Great Intervention.” The interventions during this period occurred at a frequency of once every two business days, with the amount involved per daily intervention averaging ¥286 billion and the total reaching ¥35 trillion. Even for Japan’s monetary authorities, which are known for their active interventionism, this frequency as well as the sums involved were unprecedented.

  • On the Predictability of Japanese Stock Market (in Japanese)

    Abstract

    To determine whether Japanese stock returns are predictable, this paper applies the log-linear approximation of Campbell and Shiller (1988) and the variance decomposition of Campbell (1991) to the Japanese stock market. Analyzing excess returns in Japan, we find that, in every sample, revisions in expectations of future dividends contribute most to the variance of unexpected excess returns, while revisions in expectations of future excess returns also make a non-negligible contribution. Taking into account the peculiarities of the Japanese stock market in the 1980s and excluding that decade from the sample, the relative contribution of revisions in expected future excess returns becomes larger. We also allow for the possibility of structural change in the forecasting equation for excess returns and detect a structural break in December 1989. Comparing the results for the subsamples before and after the break with the full-sample results, we find that the post-break sample yields more stable evidence on the predictability of excess returns.

    Introduction

    Whether stock returns are predictable has always been one of the central questions both in academic finance and in financial practice. This paper analyzes the Japanese stock market using the "Campbell-type variance decomposition" employed in Campbell (1991) and Campbell and Ammer (1993), important benchmarks in the literature on return predictability. It is the first paper to apply the Campbell-type variance decomposition in earnest to monthly data on the Japanese stock market.
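
    For reference, the Campbell (1991) decomposition referred to above can be written as follows (notation may differ slightly from the paper's): the unexpected excess return is split into revisions in expected future dividend growth and revisions in expected future returns, and the variance of the former is compared with that of the latter.

    ```latex
    % Campbell (1991) decomposition of the unexpected (excess) return.
    e_{t+1} \equiv r_{t+1} - \mathrm{E}_t r_{t+1}
      = (\mathrm{E}_{t+1}-\mathrm{E}_t)\sum_{j=0}^{\infty}\rho^{j}\Delta d_{t+1+j}
      - (\mathrm{E}_{t+1}-\mathrm{E}_t)\sum_{j=1}^{\infty}\rho^{j} r_{t+1+j}
      \equiv e_{d,t+1} - e_{r,t+1},
    \qquad
    \mathrm{Var}(e_{t+1}) = \mathrm{Var}(e_{d,t+1}) + \mathrm{Var}(e_{r,t+1})
      - 2\,\mathrm{Cov}(e_{d,t+1}, e_{r,t+1}).
    ```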

  • Do Larger Firms Have More Interfirm Relationships?

    Abstract

    In this study, we investigate interfirm networks by employing a unique dataset containing information on more than 800,000 Japanese firms, about half of all corporate firms currently operating in Japan. First, we find that the number of relationships, measured by the indegree, has a fat tail distribution, implying that there exist “hub” firms with a large number of relationships. Moreover, the indegree distribution for those hub firms also exhibits a fat tail, suggesting the existence of “super-hub” firms. Second, we find that larger firms tend to have more counterparts, but the relationship between firms’ size and the number of their counterparts is not necessarily proportional; firms that already have a large number of counterparts tend to grow without proportionately expanding it.

    Introduction

    When examining interfirm networks, it comes as little surprise to find that larger firms tend to have more interfirm relationships than smaller firms. For example, Toyota purchases intermediate products and raw materials from a large number of firms, located inside and outside the country, and sells final products to a large number of customers; it has close relationships with numerous commercial and investment banks; it also has a large number of affiliated firms. Somewhat surprisingly, however, we do not know much about the statistical relationship between the size of a firm and the number of its relationships. The main purpose of this paper is to take a closer look at the linkage between the two variables.

  • The Consumption-Wealth Ratio and the Japanese Stock Market

    Abstract

    Following Lettau and Ludvigson (2001a,b), we examine whether the consumption-wealth ratio can explain Japanese stock market data. We construct the data series cay_t, the residuals from the cointegration relationship between the consumption and the total wealth of households. Unlike the US results, cay_t does not predict future Japanese stock returns. On the other hand, it does help to explain the cross-section of Japanese stock returns of industry portfolios. In the US case, cay_t is used as a scaling variable that explains time variation in the market beta. In the Japanese case, the movement of cay_t is interpreted as the change in the constant terms, hence the change in average stock returns. We also propose to improve cay_t by taking real estate wealth into consideration.
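
    The construction of a cay-type series described in the abstract can be sketched as the residual from a cointegrating regression of log consumption on log asset wealth and log labor income, estimated here by dynamic OLS with leads and lags of the first differences. The lag length and data definitions below are illustrative assumptions, not the paper's exact setup.

    ```python
    # Minimal sketch: a Lettau-Ludvigson style cay_t series as a cointegrating
    # residual estimated by dynamic OLS.
    import pandas as pd
    import statsmodels.api as sm

    def estimate_cay(c, a, y, k=4):
        """c, a, y: pandas Series of log consumption, log asset wealth, log income."""
        X = pd.DataFrame({"a": a, "y": y})
        for j in range(-k, k + 1):                     # DOLS leads and lags
            X[f"da_{j}"] = a.diff().shift(-j)
            X[f"dy_{j}"] = y.diff().shift(-j)
        X = sm.add_constant(X)
        data = pd.concat([c.rename("c"), X], axis=1).dropna()
        fit = sm.OLS(data["c"], data.drop(columns="c")).fit()
        beta_a, beta_y, const = fit.params["a"], fit.params["y"], fit.params["const"]
        return c - const - beta_a * a - beta_y * y     # the cay_t residual series
    ```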

    Introduction

    The consumption-based asset pricing model is among the most important benchmarks in financial economics. Yet, its empirical performance with a structural Euler equation of households using aggregate data has been a major disappointment (see Campbell [2003] for a recent survey). Hence, recent studies started looking into other aspects of the consumption-based model. An attractive alternative research strategy is to use disaggregate consumption data, which has been explored by authors such as Mankiw and Zeldes (1991) and Vissing-Jorgensen (2002). More recent studies including Lettau and Ludvigson (2001a,b), Parker and Julliard (2005), and Yogo (2006) examine, using aggregate data, long-run restrictions implied by consumption-based models, and they obtain useful results. In particular, Lettau and Ludvigson (2001a,b) consider the long-run cointegration relationship between consumption and household wealth. They propose to use the “cay” variable, which is in essence the consumption—wealth ratio of the household sector, in both predicting aggregate stock returns and explaining cross-sectional patterns of the US stock market.

  • Voluntary Information Disclosure and Corporate Governance: The Empirical Evidence on Earnings Forecasts

    Abstract

    This study investigates the determinants of companies’ voluntary information disclosure. Employing a large and unique dataset on the companies’ own earnings forecasts and their frequencies, we conducted an empirical analysis of the effects of a firm’s ownership, board, and capital structures on information disclosure. Our finding is consistent with the hypothesis that the custom of cross-holding among companies strengthens entrenchment by managers. We also find that bank directors force managers to disclose information more frequently. In addition, our results show the borrowing ratio is positively associated with information frequency, suggesting that the manager is likely to reveal more when his or her firm borrows money from financial institutions. However, additional borrowings beyond the minimum level of effective borrowings decrease the management’s disclosing incentive.

    Introduction

    The corporate governance literature has discussed many mechanisms for resolving the fundamental issue: the agency problem. Perhaps the most pervasive and important factor causing the agency problem between a manager and an investor is the informational asymmetries between them. If managers who are better informed about their future prospects have divergent incentives with their investors, they may expropriate investors’ benefits for their private objectives.

  • Consumption, Saving, and Labor Profiles in Japan (in Japanese)

    Abstract

    Using ten years of Japanese household panel data, we analyze the mean age profiles and the covariance structure of consumption, labor supply, and asset accumulation. The mean profiles, including the hump shape of the consumption profile and the downward-sloping labor supply profile, are broadly consistent with previous studies on the United States and Europe. As for the covariance structure, the pattern of autocorrelations is similar to that found in previous US and European studies, but the levels of the variances, especially the variance of income growth, are much smaller than in the United States and Europe, and the contemporaneous correlations between income and labor supply or consumption are also small.

    Introduction

    The life-cycle model with incomplete capital markets is now widely used as a tool for analyzing not only households' consumption and saving behavior but also, in recent years, their labor supply behavior. In particular, advances in computing have made simulation-based estimation feasible, and access to micro data has become far easier than before, so researchers increasingly go beyond estimating linearized Euler equations and test the model's ability to explain the data through structural estimation that fully exploits the dynamic model.

  • IMPACTS OF CORPORATE GOVERNANCE AND PERFORMANCE ON MANAGERIAL TURNOVER IN RUSSIAN FIRMS

    Abstract

    In this paper, we examine the possible impacts of corporate governance and performance on managerial turnover using a unique dataset of Russian corporations. This study differs from most previous works in that we deal not only with CEO dismissals but also with managerial turnover in a company as a whole. We find that nonpayment of dividends is significantly correlated with managerial turnover. We also find that the presence of dominant shareholders and foreign investors is another important factor in causing managerial dismissal in Russian corporations, but these two kinds of company owners reveal different effects in terms of turnover magnitude.

    Introduction

    Establishing an effective governance system to discipline top management to produce maximized shareholder wealth is very important, because the diffuse ownership structure in public companies means that shareholders must delegate the daily management of a business to professional managers, and they do not always bend over backward to satisfy their principals.

  • On the Efficiency Costs of De-tracking Secondary Schools

    Abstract

    During the postwar period, many countries have de-tracked their secondary schools, based on the view that early tracking was unfair. What are the efficiency costs, if any, of de-tracking schools? To answer this question, we develop a two-skill, two-job model with a frictional labour market, in which new school graduates need to actively search for their best match. We compute the optimal tracking length and the output gain or loss associated with the gap between actual and optimal tracking length. Using a sample of 18 countries, we find that: a) actual tracking length is often longer than optimal, which might call for some efficient de-tracking; b) the output loss from having a tracking length longer or shorter than optimal is sizeable, close to 2 percent of total net output.

    Introduction

    In most education systems in the developed world, heterogeneous pupils are initially mixed in comprehensive schools, typically primary and lower secondary education. At some stage of the curriculum, however, some form of (self-)selection takes place, typically based on ability and past performance, and students are allocated to schools that specialize in different curricula (tracks) or to classes where subjects are taught at a different level of difficulty (streams). The former system is typical of Central European countries such as Germany, Austria, the Netherlands, and Hungary, but also exists in Korea and Japan, while the latter is typically observed in the US. When no selection whatsoever occurs during upper secondary school, as in some Scandinavian countries, choice and specialization are delayed until college education.

  • The Liquidity Trap and Optimal Monetary Policy: A Survey (in Japanese)

    Abstract

    This paper surveys the research on liquidity traps since Krugman (1998) and organizes the insights it has produced. First, recent studies focus on the phenomenon in which the non-negativity constraint on the very short-term interest rate becomes binding, which differs from Keynes’s definition based on the interest rate on perpetual bonds. Keynes’s trap is a permanent one in which the short-term rate remains at the bound indefinitely into the future, whereas recent research deals with temporary traps. Second, many of the prescriptions that have been proposed for a temporary trap are standard in light of modern monetary policy theory. The monetary policy rule that maximizes economic welfare under a liquidity trap can be expressed as inflation targeting in a broad sense. Because of its strange appearance, the liquidity trap tends to be regarded as a special phenomenon, but at least as long as the trap is temporary, the prescription for it is surprisingly orthodox.

    Introduction

    When Krugman (1998), the pioneering paper on liquidity traps, was written, a search of EconLit for the term “liquidity traps” returned only 21 papers published since 1975 (Krugman (1998, p. 138)). Krugman attributed this lack of interest to the view, widespread among macroeconomists, that “a liquidity trap cannot happen, did not happen, and will not happen again.” Running the same search today (July 2006), however, returns more than 160 papers (Figure 1), showing that macroeconomists’ earlier indifference has been rapidly corrected. This is, needless to say, the result of the Japanese economy having demonstrated that a liquidity trap can in fact occur. The purpose of this paper is to survey the research on liquidity traps since Krugman (1998) and to consider what new insights it has produced.

  • Optimal Monetary Policy at the Zero Interest Rate Bound: The Case of Endogenous Capital Formation

    Abstract

    This paper characterizes optimal monetary policy in an economy with the zero interest rate bound and endogenous capital formation. First, we show that, given an adverse shock to productivity growth, the natural rate of interest is less likely to fall below zero in an economy with endogenous capital than in one with fixed capital. However, our numerical exercises show that, unless investment adjustment costs are very close to zero, the natural rate of interest still turns negative for large shocks to productivity growth. Second, the optimal commitment solution is characterized by a negative interest rate gap (i.e., the real interest rate is lower than its natural rate counterpart) both before and after the shock periods during which the natural rate of interest falls below zero. The negative interest rate gap after the shock periods represents the history dependence property, while the negative interest rate gap before the shock periods emerges because the central bank seeks to increase the capital stock in advance, so as to avoid the decline in capital stock after the shock periods that would otherwise occur due to a substantial fall in investment during the shock periods. The latter property may be seen as the central bank’s preemptive action against future binding shocks, which is entirely absent in fixed-capital models. We also show that the targeting rule implementing the commitment solution is characterized by history-dependent inflation-forecast targeting. Third, a central bank governor without a sophisticated commitment technology tends to resort to preemptive action more than one with it: the governor without commitment technology controls the natural rates of consumption, output, and so on in future periods by changing today’s capital stock through monetary policy.

    Introduction

    Recent literature on optimal monetary policy with the zero interest rate bound has assumed that the capital stock is exogenously given. This assumption of fixed capital stock has some important implications. First, the natural rate of interest is exogenously determined simply because of the lack of endogenous state variables: it is affected by exogenous factors such as changes in technology and preferences, but not by changes in endogenous variables. For example, Jung et al. (2005) and Eggertsson and Woodford (2003a, b), among others, start their analysis by assuming that the natural rate of interest is an exogenous process, either deterministic or a two-state Markov process. More recent research, such as Adam and Billi (2004a, b) and Nakov (2005), extends the analysis to a fully stochastic environment but continues to assume that the natural rate process is exogenously given. These existing studies typically consider a situation in which the natural rate of interest, whether deterministic or stochastic, declines to a negative level entirely due to exogenous shocks, and they characterize the optimal monetary policy responses to the shock, as well as the monetary policy rules that implement the optimal outcome.
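
    For readers less familiar with this literature, the class of problems studied in these papers can be sketched as follows (the notation is generic, not that of this particular paper): the commitment policy minimizes a quadratic loss subject to the standard New Keynesian constraints and the zero bound,

    \[
      \min_{\{\pi_t, x_t, i_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left(\pi_t^2 + \lambda x_t^2\right)
      \quad \text{s.t.} \quad
      \pi_t = \kappa x_t + \beta \mathbb{E}_t \pi_{t+1}, \quad
      x_t = \mathbb{E}_t x_{t+1} - \sigma \left(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t\right), \quad
      i_t \ge 0,
    \]

    where \(r^n_t\) is the natural rate of interest. In the fixed-capital papers cited above \(r^n_t\) is an exogenous process; the point of this paper is that with endogenous capital formation \(r^n_t\) depends on the capital stock and hence on past policy.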

  • Fiscal Policy Switching: Evidence from Japan, US, and UK

    Abstract

    This paper estimates fiscal policy feedback rules in Japan, the United States, and the United Kingdom, allowing for stochastic regime changes. Using Markov-switching regression methods, we find that the Japanese data clearly reject the view that the fiscal policy regime is fixed, i.e., that the Japanese government has adopted either Ricardian or non-Ricardian policy at all times. Instead, our results indicate that fiscal policy regimes evolve over time in a stochastic manner. This is in sharp contrast with the U.S. and U.K. results, in which the government’s fiscal behavior is consistently characterized as Ricardian.

    Introduction

    Recent studies on the conduct of monetary policy argue that the fiscal policy regime has important implications for the choice of desirable monetary policy rules, in particular monetary policy rules in the form of inflation targeting (Sims (2005), Benigno and Woodford (2006)). Needless to say, the fiscal regime in peacetime can safely be regarded as “Ricardian” in the terminology of Woodford (1995), or “passive” in the terminology of Leeper (1991). In such a case, we may design an optimal monetary policy rule without paying any attention to fiscal regimes. However, if the economy is fiscally unstable, it would be dangerous to choose a monetary policy rule independently of the fiscal policy regime. For example, some researchers argue that the rapid accumulation of public debt in Japan is evidence of a lack of fiscal discipline on the part of the Japanese government. If this is the case, participants in the government bond market may come to doubt the government’s intention to repay its debt. In such an environment, it would not be desirable to design a monetary policy rule without paying any attention to the future evolution of the fiscal policy regime. The purpose of this paper is to estimate fiscal policy feedback rules in Japan, the United States, and the United Kingdom over more than a century, so as to acquire a deeper understanding of the evolution of fiscal policy regimes.
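
    As a purely illustrative sketch of the estimation approach described above (the data file, variable names, and two-regime specification below are assumptions made for the example, not the paper’s actual data or specification), a fiscal feedback rule that regresses the primary surplus on lagged public debt can be estimated with regime-switching coefficients using the MarkovRegression class in statsmodels:

    # Illustrative sketch only: a two-regime Markov-switching fiscal feedback rule,
    #   s_t = alpha(regime) + beta(regime) * b_{t-1} + e_t,
    # where s_t is the primary surplus (relative to GDP) and b_{t-1} is lagged
    # public debt (relative to GDP). The file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("fiscal_data.csv")            # hypothetical annual data
    surplus = df["primary_surplus_gdp"]            # dependent variable s_t
    debt_lag = df["public_debt_gdp"].shift(1)      # regressor b_{t-1}
    data = pd.concat([surplus, debt_lag], axis=1).dropna()

    # Two regimes; the intercept, the response to debt, and the error variance
    # all switch with the latent regime.
    mod = sm.tsa.MarkovRegression(
        endog=data["primary_surplus_gdp"],
        exog=data["public_debt_gdp"],
        k_regimes=2,
        switching_variance=True,
    )
    res = mod.fit()
    print(res.summary())

    # A positive estimated response of the surplus to lagged debt within a regime
    # is the usual operational marker of Ricardian (passive) fiscal behaviour;
    # the smoothed regime probabilities trace how the regime evolves over time.
    print(res.smoothed_marginal_probabilities.head())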

  • Massive Money Injection in an Economy with Broad Liquidity Services: The Japanese Experience 2001-2006

    Abstract

    This paper presents a model with broad liquidity services to discuss the consequences of massive money injection in an economy at the zero interest rate bound. We incorporate Goodfriend’s (2000) idea of broad liquidity services into the model by allowing the amounts of bonds of various maturities held by a household to enter its utility function. We show that satiation in money (i.e., a zero marginal utility of money) is not a necessary condition for the one-period interest rate to reach the zero lower bound; instead, we present a weaker necessary condition, namely that the marginal liquidity service provided by money coincides with that provided by one-period bonds, neither of which need equal zero. This result implies that massive money injection can influence the equilibrium of the economy even if it does not alter the private sector’s expectations about future monetary policy. Our empirical results indicate that forward interest rates started to decline relative to the corresponding futures rates just after March 2001, when the Bank of Japan started its quantitative monetary easing policy, and that the spread between forward and futures rates did not close until the policy ended in March 2006. We argue that these findings are not easy to explain with a model without broad liquidity services.

    Introduction

    Recent research on optimal monetary policy in an economy with the zero interest rate bound has emphasized the importance of a central bank’s commitment regarding future monetary policy (Woodford (1999), Jung et al. (2005), and Eggertsson and Woodford (2003), among others). In a normal environment, a central bank conducts monetary easing by lowering the current overnight interest rate through an additional injection of money into the market. However, this no longer works once the overnight interest rate reaches the zero lower bound. Further monetary easing in such a situation can be implemented only through the central bank’s announcements about the future path of the overnight interest rate. Specifically, it has been shown that the optimal monetary policy rule is characterized by “history dependence,” in the sense that the central bank commits itself to continuing monetary easing even after the economy returns to normal.
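
    To fix ideas on the weaker zero-bound condition stated in the abstract, consider a generic money-and-bonds-in-utility sketch (an illustration under our own simplifying assumptions, not the paper’s exact model). With period utility \(u(c_t, m_t, b_t)\), where \(m_t\) is real money and \(b_t\) is real holdings of one-period bonds, the first-order conditions for money and one-period bonds combine to give

    \[
      1 + i_t \;=\; \frac{u_c - u_b}{u_c - u_m},
      \qquad \text{so that} \qquad
      i_t = 0 \iff u_m = u_b .
    \]

    That is, the one-period rate hits zero when the marginal liquidity service of money equals that of one-period bonds, and neither needs to be zero, which is the sense in which satiation in money is not required for the zero bound to bind.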
