Seasonality is among the most salient features of price changes, but it has been analyzed far less than the seasonality of quantities or the business cycle component of price changes. To fill this gap, we use scanner data for 199 categories of goods in Japan to empirically study the seasonality of price changes from 1990 to 2021. We find that the following four features generally hold for most categories: (1) the frequency of price increases and decreases rises in March and September; (2) seasonal components of the frequency of price changes are negatively correlated with those of the size of price changes; (3) seasonal components of the inflation rate track seasonal components of the net frequency of price changes; (4) the seasonal pattern of the frequency of price changes responds to changes in the category-level annual inflation rate for the year. Using simple state-dependent pricing models, we show that seasonal cycles in menu costs play an essential role in generating the seasonality of price changes.
It is widely known among both scholars and policymakers that price time series exhibit a sizable degree of seasonality. Figure 1 decomposes the yearly growth rate of Japan's CPI, for all items and for goods less fresh food and energy, into the twelve month-to-month changes within the same year. There are months in which prices generally increase, such as March and April, and months in which prices generally decrease, such as January and February. Such seasonal patterns have been stable from the 1990s to the 2020s.
WP052
This paper undertakes both a narrow and a wide replication of the estimation of a money demand function conducted by Ireland (American Economic Review, 2009). Using US data from 1980 to 2013, we show that the substantial increase in the money-income ratio during the period of near-zero interest rates is captured well by the log-log specification but not by the semi-log specification, contrary to the result obtained by Ireland (2009). Our estimate of the interest elasticity of money demand over the 1980-2013 period is about one-tenth that of Lucas (2000), who used a log-log specification. Finally, neither specification satisfactorily fits post-2015 US data.
In regression analyses of money demand functions, there is no consensus on whether the nominal interest rate as an independent variable should be used in linear or log form. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., the log of real money balances, or of the money-income ratio, is regressed on the log of the nominal interest rate), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log specification (i.e., the nominal interest rate enters in levels rather than in logs).
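In generic notation (a sketch of the two functional forms discussed here, not the exact regression equations estimated in any of these papers), the competing specifications are

\[
\text{log-log:}\quad \ln\!\left(\frac{M_t}{P_t Y_t}\right) = \beta_0 - \eta \ln i_t + \varepsilon_t,
\qquad
\text{semi-log:}\quad \ln\!\left(\frac{M_t}{P_t Y_t}\right) = \gamma_0 - \xi\, i_t + u_t,
\]

where \(M_t/(P_t Y_t)\) is the money-income ratio, \(i_t\) is the nominal interest rate, \(\eta\) is the interest elasticity, and \(\xi\) is the interest semi-elasticity. The log-log form implies that money demand grows without bound as \(i_t\) approaches zero, while the semi-log form implies a finite satiation level, which is why the two forms diverge most sharply in samples with near-zero interest rates.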
WP044
Using credit card transaction data, we examine the impacts of two successive events that promoted cashless payments in Japan: the government's program and the COVID-19 pandemic. We find that the number of card users was 9-12 percent higher in restaurants that participated in the program than in those that did not. We present a simple framework accounting for the spread of cashless payments. Our model predicts that the impact of the policy intervention diminished as the use of cashless payments increased, which accords well with Japan's COVID-19 experience. The estimated impact of COVID-19 was around two-thirds of that of the program.
The share of payments using cashless methods is much lower in Japan than in many other countries. BIS statistics, for example, show that total payments via cashless means such as credit cards, debit cards, and e-money in Japan amounted to 74 trillion yen, or 24 percent of household final consumption expenditure, in 2018. This percentage is considerably lower than the 40 percent or more in other developed countries such as the United States, the United Kingdom, and Singapore. The social cost of relying on cash payments is substantial. For instance, using data for several European countries, Schmiedel et al. (2012) show that the unit cost of cash payments is higher than that of debit card payments. In addition, Rogoff (2015) argues that cash makes transactions anonymous, which potentially facilitates underground or illegal activities and leads to law-enforcement costs.
WP040
In October 2019, the Japanese government started a unique program that offered points (discounts) for cashless payments. Using credit card transaction data, we compare credit card usage at restaurants that participated in this program and those that did not. Our main findings are as follows. First, the number of card users was 9-12 percent higher in participating than in non-participating restaurants. Second, the positive impact of the program on the number of card users persisted even after the program ended in June 2020, indicating that the program had a lasting effect in promoting cashless payments. Third, the impact of the program was significantly larger at restaurants that had started accepting credit cards more recently, since the share of cash users at those restaurants was larger just before the program started. Finally, two-thirds of the difference between participating and non-participating restaurants disappeared during the first surge of COVID-19 in April 2020. This suggests that customers at both types of restaurants switched from cash to cashless payments to reduce the risk of infection, and that the extent of switching was larger at non-participating restaurants, which had a larger share of cash users just before the pandemic.
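The first finding rests on comparing card usage at participating and non-participating restaurants before and after the program started. Below is a minimal two-by-two difference-in-differences sketch in logs; the panel layout and variable names are assumptions for illustration, not the paper's actual data or estimator.

```python
import numpy as np
import pandas as pd

# Hypothetical restaurant-month panel with columns:
#   participant  (1 if the restaurant joined the program, 0 otherwise)
#   post         (1 for months after October 2019, 0 before)
#   n_card_users (number of distinct card users at the restaurant in the month)
def did_in_logs(panel: pd.DataFrame) -> float:
    cell = (panel.assign(log_users=np.log(panel["n_card_users"]))
                 .groupby(["participant", "post"])["log_users"].mean())
    # Difference in the before/after change between participants and non-participants
    return ((cell.loc[(1, 1)] - cell.loc[(1, 0)])
            - (cell.loc[(0, 1)] - cell.loc[(0, 0)]))

# A value of roughly 0.09-0.12 log points would correspond to the 9-12 percent
# difference reported above.
```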
The share of payments using cashless methods is much lower in Japan than in many other countries. BIS statistics, for example, show that total payments via cashless means such as credit cards, debit cards, and e-money in Japan amounted to 74 trillion yen, or 24 percent of household final consumption expenditure, in 2018. This percentage is considerably lower than the 40 percent or more in other developed countries such as the United States, the United Kingdom, and Singapore. The social cost of relying on cash payments is substantial. For instance, using data for several European countries, Schmiedel et al. (2012) show that the unit cost of cash payments is higher than that of debit card payments. In addition, Rogoff (2015) argues that cash makes transactions anonymous, which potentially facilitates underground or illegal activities and leads to law-enforcement costs.
WP036
The spread of COVID-19 infections has led to substantial changes in consumption patterns. While demand for services that involve face-to-face contact has decreased sharply, online consumption of goods and services, such as through e-commerce, is increasing. The aim of this paper is to investigate whether online consumption will continue to increase even after COVID-19 subsides. Online consumption requires upfront costs, which have been regarded as one of the factors inhibiting its diffusion. However, if many consumers made such upfront investments due to the pandemic, they would have no reason to return to offline consumption after the pandemic has ended. We examine whether this was actually the case using credit card transaction data. Our main findings are as follows. First, the increase in online consumption was driven mainly by consumers who were already familiar with it before the pandemic; these consumers increased the share of online spending in their overall spending. Second, some consumers who had never used the internet for purchases before started to do so due to COVID-19. However, the fraction of consumers making this switch was not very different from the pre-crisis trend. Third, by age group, the switch to online consumption was more pronounced among the young than among seniors. These findings suggest that it is not the case that a large number of consumers made the upfront investment necessary to switch to online consumption during the pandemic, so a certain portion of the increase in online consumption is likely to fall away again once COVID-19 subsides.
People’s consumption patterns have changed substantially as a result of the spread of COVID-19 infections. One such change is a reduction in the consumption of services that involve face-to-face contact. For instance, “JCB Consumption NOW” data, credit card transaction data provided jointly by JCB Co., Ltd., and Nowcast Inc., show that, since February this year, spending on eating out, entertainment, travel, and lodging has decreased substantially. Even in the case of goods consumption, there has been a tendency to avoid face-to-face contact such as at convenience stores and supermarkets. For example, with regard to supermarket shopping, the amount of spending per consumer has increased, but the number of shoppers has decreased, indicating that consumers purchase more than usual per visit but try to minimize the risk of infection by reducing the number of visits. Another important change is the increase in the consumption of services and goods that do not involve face-to-face contact. The credit card transaction data indicate that with regard to services consumption, spending on movies and theaters has decreased substantially, while spending on streaming media services has increased. As for the consumption of goods, so-called e-commerce, i.e., purchases via the internet, has shown substantial increases.
WP035
This paper estimates a money demand function using US data from 1980 onward, including the recent near-zero interest rate period. We show that the substantial increase in the money-income ratio during the period of near-zero interest rates is captured well by the log-log specification, but not by the semi-log specification. Our result is the opposite of the result obtained by Ireland (2009), who found that the semi-log specification performs better. This mainly stems from the difference in the sample period employed: ours contains 24 quarters with interest rates below 1 percent, while Ireland’s (2009) sample period contains only three such quarters.
In regression analyses of money demand functions, there is no consensus on whether the nominal interest rate as an independent variable should be used in linear or log form. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., the log of real money balances, or of the money-income ratio, is regressed on the log of the nominal interest rate), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log specification (i.e., the nominal interest rate enters in levels rather than in logs).
WP034
Large-scale household inventory buildups occurred in Japan five times over the last decade, including those triggered by the Tohoku earthquake in 2011, the spread of COVID-19 infections in 2020, and the consumption tax hikes in 2014 and 2019. Each of these episodes was accompanied by considerable swings in GDP, suggesting that fluctuations in household inventories are one of the sources of macroeconomic fluctuations in Japan. In this paper, we focus on changes in household inventories associated with temporary sales and propose a methodology to estimate changes in household inventories at the product level using retail scanner data. We construct a simple model of household stockpiling and derive equations for the relationships between the quantity consumed and the quantity purchased and between consumption and purchase prices. We then use these relationships to make inferences about quantities consumed, consumption prices, and inventories. Next, we test the validity of this methodology by calculating price indices and checking whether the intertemporal substitution bias we find in the price indices is consistent with theoretical predictions. We empirically show that there exists a large bias in the Laspeyres, Paasche, and Törnqvist price indices, which is smaller at lower frequencies but non-trivial even at a quarterly frequency, and that the intertemporal substitution bias disappears for a particular type of price index if we switch from purchase-based to consumption-based data.
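For reference, the bilateral index formulas involved can be written compactly in code. The sketch below is illustrative only (matched products, NumPy arrays of prices and quantities in two periods); it is not the paper's estimation code.

```python
import numpy as np

def laspeyres(p0, p1, q0, q1):
    # Current prices weighted by base-period quantities
    return (p1 * q0).sum() / (p0 * q0).sum()

def paasche(p0, p1, q0, q1):
    # Current prices weighted by current-period quantities
    return (p1 * q1).sum() / (p0 * q1).sum()

def tornqvist(p0, p1, q0, q1):
    # Geometric mean of price relatives with averaged expenditure-share weights
    s0 = p0 * q0 / (p0 * q0).sum()
    s1 = p1 * q1 / (p1 * q1).sum()
    return np.exp((0.5 * (s0 + s1) * np.log(p1 / p0)).sum())

def chained(prices, quantities, link=tornqvist):
    # Chain drift: a chained index built from purchase data need not return to
    # its initial level after a temporary sale with stockpiling, even when
    # prices and consumption do.
    idx = [1.0]
    for t in range(1, len(prices)):
        idx.append(idx[-1] * link(prices[t - 1], prices[t],
                                  quantities[t - 1], quantities[t]))
    return np.array(idx)
```

Switching the inputs from purchase prices and quantities to the consumption prices and quantities inferred from the stockpiling model is what the consumption-based indices above refer to.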
In the first week of March 2020, when the first wave of COVID-19 infections hit Japan, supermarket sales went up more than 20% over the previous year. This was due to hoarding by consumers stemming from an increase in uncertainty regarding the spread of the virus. Similar hoarding occurred during the third wave, which struck Japan in October 2020. Such hoarding has occurred not only during the COVID-19 pandemic but also after the Tohoku earthquake in March 2011 and the subsequent nuclear power plant accident in Fukushima, when residents of Tokyo and other areas that were spared serious damage went on a buying spree for food and other necessities. Consumer hoarding also occurred due to policy shocks: when the consumption tax rate was raised in April 2014 and in October 2019, people hoarded large amounts of goods just before the tax rate was raised, and a prolonged consumption slump occurred thereafter. Each of these episodes was accompanied by considerable swings in GDP, suggesting that fluctuations in household inventories are one of the sources of macroeconomic fluctuations in Japan.
WP033
Changes in people's behavior during the COVID-19 pandemic can be regarded as the result of two types of effects: the "intervention effect" (changes resulting from government orders or requests for people to change their behavior) and the "information effect" (voluntary changes in people's behavior based on information about the pandemic). Using mobile location data to construct a stay-at-home measure for different age groups, we examine how the intervention and information effects differ across age groups. Our main findings are as follows. First, the age profile of the intervention effect of the state of emergency declaration in April and May 2020 shows that the degree to which people refrained from going out was smaller for older age groups, who are at a higher risk of serious illness and death, than for younger age groups. Second, the age profile of the information effect shows that, unlike the intervention effect, the degree to which people stayed at home tended to increase with age for weekends and holidays. Thus, while Acemoglu et al. (2020) proposed targeted lockdowns requiring stricter lockdown policies for the oldest group in order to protect those at a high risk of serious illness and death, our findings suggest that Japan's government intervention had a very different effect in that it primarily reduced outings by the young, and what led to the quarantining of older groups at higher risk instead was people's voluntary response to information about the pandemic. Third, the information effect has been on a downward trend since the summer of 2020. While this trend applies to all age groups, it is relatively more pronounced among the young, so that the age profile of the information effect remains upward sloping, suggesting that people's response to information about the pandemic is commensurate with their risk of serious illness and death.
The number of COVID-19 infections in Japan began to increase in earnest in the latter half of February, and by the end of March, the cumulative number of infections had reached 2,234. In response to the spread of infections, the government declared a state of emergency on April 7 for seven prefectures including Tokyo, and on April 16, the state of emergency was expanded to cover all prefectures. As a result, people refrained from going out, and the number of new infections in Japan, after peaking at 720 on April 11, began to drop, falling to almost zero by the end of May. This was the first wave of infections. However, in July, the number of new infections began to increase again, and continued to increase throughout the summer (peaking at 1,605 new infections on August 7). This was the second wave. While the second wave had subsided by the end of August, the number of new infections began to increase once again in late October, and on December 31, 2020, the number of new infections in Tokyo reached 1,353, exceeding 1,000 for the first time (the number of new infections nationwide was 4,534). In response, the government again declared a state of emergency on January 7. We are currently in the middle of the third wave.
WP029
This paper derives a money demand function that explicitly takes the costs of storing money into account. This function is then used to examine the consequences of the large-scale money injection conducted by the Bank of Japan since April 2013. The main findings are as follows. First, the opportunity cost of holding money calculated using 1-year government bond yields has been negative since the fourth quarter of 2014, and most recently (2020:Q2) was -0.2%. Second, the marginal cost of storing money, which was 0.3% in the most recent quarter, exceeds the marginal utility of money, which was 0.1%. Third, the optimum quantity of money, measured by the ratio of M1 to nominal GDP, is 1.2. In contrast, the actual money-income ratio in the most recent quarter was 1.8. The welfare loss relative to the maximum welfare obtained under the optimum quantity of money in the most recent quarter was 0.2% of nominal GDP. The findings imply that the Bank of Japan needs to reduce M1 by more than 30%, for example through measures that impose a penalty on holding money.
Seven years have passed since the Bank of Japan (BOJ) welcomed Haruhiko Kuroda as its new Governor and started a new regime of monetary easing, which was nicknamed the “Kuroda bazooka.” The policy goal that the BOJ set itself was to overcome deflation. At the time, the year-on-year rate of change in Japan’s consumer price index (CPI) was -0.9% and had been below zero for a long time. The measure the BOJ chose to escape deflation was to print lots of money. That is, the BOJ thought that it would be possible to overcome deflation by increasing the quantity of money. Specifically, in April 2013, the BOJ announced that it would double the monetary base within two years and thereby raise the CPI inflation rate to 2%. However, currently, CPI inflation remains stuck at 0.3%. The BOJ has not achieved its target of 2%, and there is little prospect that it will be achieved in the near future. While it is true that inflation currently is heavily affected by the sharp fall in aggregate demand since the outbreak of the COVID crisis in February 2020, which is putting downward pressure on prices, even before the crisis CPI inflation was only between 0.2 and 0.8% and therefore below the BOJ’s target.
WP028
Japan’s government has taken a number of measures, including declaring a state of emergency, to combat the spread of COVID-19. We examine the mechanisms through which the government’s policies have led to changes in people’s behavior. Using smartphone location data, we construct a daily prefecture-level stay-at-home measure to identify the following two effects: (1) the effect that citizens refrained from going out in line with the government’s request, and (2) the effect that government announcements reinforced awareness of the seriousness of the pandemic and people voluntarily refrained from going out. Our main findings are as follows. First, the declaration of the state of emergency reduced the number of people leaving their homes by 8.6% through the first channel, which is of the same order of magnitude as the estimate by Goolsbee and Syverson (2020) for lockdowns in the United States. Second, a 1% increase in new infections in a prefecture reduces people’s outings in that prefecture by 0.026%. Third, the government’s requests are responsible for about one quarter of the decrease in outings in Tokyo, while the remaining three quarters are the result of citizens obtaining new information through government announcements and the daily release of the number of infections. Our results suggest that what is necessary to contain the spread of COVID-19 is not strong, legally binding measures but the provision of appropriate information that encourages people to change their behavior.
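A stylized version of the kind of panel regression that could separate the two channels is sketched below. Variable names and the exact specification are assumptions for illustration, not the paper's.

```python
import statsmodels.formula.api as smf

# Hypothetical prefecture-day panel: stay_home is the stay-at-home measure,
# emergency is a dummy equal to one while the state of emergency applies to the
# prefecture, and log_new_cases is the log of newly reported infections there.
fit = smf.ols(
    "stay_home ~ emergency + log_new_cases + C(prefecture) + C(day_of_week)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["prefecture"]})

# The coefficient on `emergency` maps to the intervention channel (1), and the
# coefficient on `log_new_cases` to the information channel (2).
print(fit.params[["emergency", "log_new_cases"]])
```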
In response to the spread of COVID-19, the Japanese government on February 27 issued a request to local governments such as prefectural governments to close schools. Subsequently, the Japanese government declared a state of emergency on April 7 for seven prefectures, including Tokyo, and on April 16 expanded the state of emergency to all 47 prefectures. Prime Minister Abe called on citizens to reduce social interaction by at least 70% and, if possible, by 80% by refraining from going out. In response to these government requests, people refrained from going out. For example, in March, the share of people in Tokyo leaving their homes was down by 18% compared to January, before the spread of COVID-19, and by April 26, during the state of emergency, the share had dropped by as much as 64%. As a result of people refraining from leaving their homes, the number of daily new infections in Tokyo fell from 209 at the peak to two on May 23, and the state of emergency was lifted on May 25.
WP027
This paper examines the implications of consumer inventory for cost-of-living indices (COLIs) and business cycles. We begin by providing stylized facts about consumer inventory using scanner data. We then construct a quasi-dynamic model to describe consumers’ purchase, consumption, and inventory behavior. A key feature of our model is that inventory is held by household producers, not by consumers, which enables us to construct a COLI in a static manner even in an economy with storable goods. Based on this model, we show that stockpiling during temporary sales generates a substantial bias, or so-called chain drift, in conventional price indices, which are constructed without paying attention to consumer inventory. However, the chain drift is greatly mitigated in our COLI, which is based on consumption prices (rather than purchase prices) and quantities consumed (rather than quantities purchased). We provide empirical evidence supporting these theoretical predictions. We also show empirically that consumers’ inventory behavior tends to depend on labor market conditions and the interest rate.
Storable goods are abundant in the real world (e.g., pasta, toilet rolls, shampoos, and even vegetables and milk), although most economic models deal with perishable goods for the sake of simplicity. Goods storability implies that purchases (which are often observable) do not necessarily equal consumption (which is often unobservable), and the difference between the two serves as consumer inventory. In particular, temporary sales and the anticipation of an increase in the value-added tax rate often lead to a greater increase in purchases than consumption. Moreover, the COVID-19 outbreak in 2020 caused many products, such as pasta and toilet rolls, to disappear from supermarket shelves, which would not have happened if these products were not storable. The stockpiling behavior by consumers poses challenges for economists, for example in the construction of price indices.
WP025
The spread of novel coronavirus (COVID-19) infections has led to substantial changes in consumption patterns. While demand for services that involve face-to-face contact has decreased sharply, online consumption of goods and services, such as through e-commerce, is increasing. The aim of this study is to investigate whether online consumption will continue to increase even after COVID-19 subsides, using credit card transaction data. Online consumption requires upfront costs, which have been regarded as one of the factors inhibiting its diffusion. However, if many consumers made such upfront investments due to the coronavirus pandemic, they would have no reason to return to offline consumption after the pandemic has ended, and high levels of online consumption should continue. Our main findings are as follows. First, the increase in online consumption was driven mainly by consumers who were already familiar with online consumption before the pandemic and purchased goods and services both online and offline. These consumers increased the share of online spending in their overall spending and/or stopped offline consumption completely and switched to online consumption only. Second, some consumers who had never used the internet for purchases before started to use it for their consumption activities due to COVID-19. However, the share of consumers making this switch was not very different from the pre-crisis trend. Third, by age group, the switch to online consumption was more pronounced among the young than among seniors. These findings suggest that it is not the case that a large number of consumers made the upfront investment necessary to switch to online consumption during the pandemic, so a certain portion of the increase in online consumption is likely to fall away again as COVID-19 subsides.
People’s consumption patterns have changed substantially as a result of the spread of the novel coronavirus (COVID-19). One such change is a reduction in the consumption of services that involve face-to-face (F2F) contact. For instance, “JCB Consumption NOW” data, credit card transaction data provided jointly by JCB Co., Ltd. and Nowcast Inc., show that, since February this year, spending on eating out, entertainment, travel, and lodging has decreased substantially. Even in the case of goods consumption, there has been a tendency to avoid face-to-face contact such as at convenience stores and supermarkets. For example, with regard to supermarket shopping, the amount of spending per consumer has increased, but the number of shoppers has decreased. Another important change is the increase in the consumption of services and goods that do not involve face-to-face contact. The credit card transaction data indicate that with regard to service consumption, spending on movies and theaters has decreased substantially, while spending on content delivery has increased. As for the consumption of goods, so-called e-commerce, i.e., purchases via the internet, has shown substantial increases.
WP023
With the spread of coronavirus infections, there has been a growing tendency to refrain from consuming services such as eating out that involve contact with people. Self-restraint in service consumption is essential to stop the spread of infections, and the national government as well as local governments such as the Tokyo government are calling for consumers as well as firms providing such services to exercise self-restraint. One way to measure the degree of self-restraint has been to look at changes in the flow of people using smartphone location data. As a more direct approach, this note uses credit card transaction data on service spending to examine the degree to which people exercise self-restraint. The results indicate that among men aged 35-39 living in the Tokyo metropolitan area, the share that used their credit card to pay for eating out in March 2020 was 27 percent. Using transaction data for January, i.e., before the full outbreak of the virus in Japan, yields an estimated counterfactual share of 32 percent for March. This means that the number of people eating out fell by 15 percent. Apart from eating out, similar self-restraint effects can be observed in various other sectors such as entertainment, travel, and accommodation. Looking at the degree of self-restraint by age shows that the self-restraint effect was relatively large among those in their late 30s to early 50s. However, below that age bracket, the younger the age group, the smaller was the self-restraint effect. Moreover, the self-restraint effect was also small among those aged 55 and above. Further, the degree of self-restraint varies depending on the type of service; it is highest with regard to entertainment, travel, and accommodation. The number of people who spent on these services in March 2020 was about half of the number during normal times. However, the 80 percent reduction demanded by the government has not been achieved.
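The 15 percent figure is simply the shortfall of the actual share relative to the counterfactual share predicted from January usage patterns:

\[
1 - \frac{0.27}{0.32} \approx 0.156 \approx 15\ \text{percent}.
\]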
With the spread of coronavirus infections, there has been a growing tendency to refrain from consuming services such as eating out that involve contact with people. Self-restraint in service consumption is essential to stop the spread of infections, and the national government as well as local governments such as the Tokyo government are calling for consumers as well as firms providing such services to exercise self-restraint. Specifically, Prime Minister Shinzo Abe declared a one-month long state of emergency in Tokyo and six other prefectures on April 7, 2020 and expanded it to the entire country on April 16. PM Abe stated in his speech on April 7 that “According to an estimate by the experts, if all of us make efforts and reduce opportunities for person-to-person contact by a minimum of 70 percent, or ideally 80 percent, we will cause the increase in the number of patients to reach its peak two weeks from now and shift over into a decrease. . . . I ask people to refrain from going out, aiming at a 70 to 80 percent decrease, for the limited period of one month between now and the end of Golden Week holidays on May 6.” The purpose of this note is to measure the degree to which people in Japan have been exercising self-restraint since the outbreak of COVID-19. One way to do so is to look at changes in the flow of people using mobile phone location data. As a more direct approach, we use credit card transaction data on service spending to examine the degree to which people exercise self-restraint.
WP021
This note compares the responses of consumption and prices to the COVID-19 shock and another large-scale natural disaster that hit Japan, the Tohoku earthquake in March 2011. The comparison shows that the responses of supermarket sales and prices at a daily frequency during the two crises are quite similar: (1) the year-on-year rate of sales growth increased quickly and reached a peak of 20 percent two weeks after the outbreak of COVID-19 in Japan, which is quite similar to the response immediately after the earthquake; (2) the items consumers purchased at supermarkets in these two crises are almost identical; (3) the year-on-year rate of consumer price inflation for goods rose by 0.6 percentage points in response to the coronavirus shock, compared to 2.2 percentage points in the wake of the earthquake. However, evidence suggests that whereas people expected higher inflation for goods and services in the wake of the earthquake, they expect lower inflation in response to the coronavirus shock. This difference in inflation expectations suggests that the economic deterioration due to COVID-19 should be viewed as driven mainly by an adverse aggregate demand shock to face-to-face service industries such as hotels and leisure, transportation, and retail, rather than by an aggregate supply shock.
The spread of COVID-19 is still gaining momentum. The number of those infected in Japan started to rise from the last week of February, and the spread of the virus began to gradually affect everyday life, as exemplified by increasingly empty streets in Ginza. In March, the outbreak spread to Europe and the United States, and stock markets in the United States and other countries began to drop sharply on a daily basis, leading to market turmoil reminiscent of the global financial crisis. At the time of writing (March 29), the Dow Jones Index of the New York Stock Exchange had dropped by 35%, while the Nikkei Index had fallen by 30%.
WP020
This paper estimates a money demand function using Japanese data from 1985 to 2017, which includes the period of near-zero interest rates over the last two decades. We compare a log-log specification and a semi-log specification by employing the methodology proposed by Kejriwal and Perron (2010) on cointegrating relationships with structural breaks. Our main finding is that there exists a cointegrating relationship with a single break between the money-income ratio and the interest rate in the case of the log-log form but not in the case of the semi-log form. More specifically, we show that the substantial increase in the money-income ratio during the period of near-zero interest rates is well captured by the log-log form but not by the semi-log form. We also show that the demand for money did not decline in 2006 when the Bank of Japan terminated quantitative easing and started to raise the policy rate, suggesting that there was an upward shift in the money demand schedule. Finally, we find that the welfare gain from moving from 2 percent inflation to price stability is 0.10 percent of nominal GDP, which is more than six times as large as the corresponding estimate for the United States.
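Kejriwal and Perron's (2010) break-point cointegration procedure is not available in standard Python libraries; as a rough first pass under that caveat, the two candidate specifications could be screened with an ordinary Engle-Granger cointegration test, as in the sketch below (series names are assumptions for illustration).

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# log_m_y: log of the money-income ratio; i: nominal interest rate
# (both arrays or Series over the sample period).
stat_ll, pval_ll, _ = coint(log_m_y, np.log(i))   # log-log specification
stat_sl, pval_sl, _ = coint(log_m_y, i)           # semi-log specification
print(f"log-log:  t-stat = {stat_ll:.2f}, p-value = {pval_ll:.3f}")
print(f"semi-log: t-stat = {stat_sl:.2f}, p-value = {pval_sl:.3f}")
```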
There is no consensus about whether the nominal interest rate should enter in log form or in levels when estimating money demand functions. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., the log of real money balances is regressed on the log of the nominal interest rate), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log form (i.e., the log of real money balances is regressed on the level of the nominal interest rate). The purpose of this paper is to specify the functional form of money demand using Japanese data covering the recent period with nominal interest rates very close to zero.
WP013
A statistical method is proposed for detecting stock market bubbles that occur when speculative funds concentrate on a small set of stocks. A bubble is defined as a divergence of stock prices from fundamentals. A firm's financial standing is certainly a key fundamental attribute of that firm, and the law of one price would dictate that firms of similar financial standing share similar fundamentals. We investigate the variation in market capitalization normalized by fundamentals, where fundamentals are estimated by a Lasso regression on a firm's financial indicators. The market capitalization distribution has a substantially heavier upper tail during bubble periods; that is, a gap in market capitalization opens up within a small subset of firms with similar fundamentals. This phenomenon suggests that speculative funds concentrate in this subset. We demonstrate that this phenomenon could have been used to detect the dot-com bubble of 1998-2000 in different stock exchanges.
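A minimal sketch of this kind of normalization, assuming a cross-section of firms with market capitalization and a few financial indicators (the variable names and feature set are illustrative, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# market_cap, sales, net_assets: NumPy arrays over firms in a given year.
X = np.column_stack([np.log(sales), np.log(net_assets)])
y = np.log(market_cap)

lasso = LassoCV(cv=5).fit(X, y)
# Normalized capitalization: log of market cap relative to fitted fundamentals.
norm_cap = y - lasso.predict(X)

# A heavier upper tail of norm_cap in a given year (e.g., more mass beyond a
# fixed high threshold) is the bubble signature described above.
tail_mass = np.mean(norm_cap > 1.0)
```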
It is common knowledge in macroeconomics that, as Federal Reserve Board Chairman Alan Greenspan said in 2002, “...it is very difficult to identify a bubble until after the fact; that is, when its bursting confirms its existence.” In other words, before a bubble bursts, there is no way to establish whether the economy is in a bubble or not. In economics, a stock bubble is defined as a state in which speculative investment flows into a firm in excess of the firm's fundamentals, so that the market capitalization (= stock price × number of shares issued) becomes excessively high compared to the fundamentals. Unfortunately, it is exceedingly difficult to precisely measure a firm's fundamentals, and this has made it nearly impossible to detect a stock bubble by simply measuring the divergence between fundamentals and market capitalization [1–3]. On the other hand, we know empirically that the market capitalization and PBR (= market capitalization / net assets) of some stocks increase during bubble periods [4–7]. However, these are also buoyed by rising fundamentals, so it is not always possible to determine whether such increases can be attributed to an emerging bubble.
WP010
We investigate the cross-sectional distribution of house prices in the Greater Tokyo Area for the period 1986 to 2009. We find that size-adjusted house prices follow a lognormal distribution except for the period of the housing bubble and its collapse in Tokyo, for which the price distribution has a substantially heavier upper tail than that of a lognormal distribution. We also find that, during the bubble era, sharp price movements were concentrated in particular areas, and this spatial heterogeneity is the source of the fat upper tail. These findings suggest that, during a bubble, prices increase markedly for certain properties but to a much lesser extent for other properties, leading to an increase in price inequality across properties. In other words, the defining property of real estate bubbles is not the rapid price hike itself but an increase in price dispersion. We argue that the shape of cross-sectional house price distributions may contain information useful for the detection of housing bubbles.
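As a concrete illustration of the tail comparison (a minimal sketch, not the paper's statistical procedure; the data array is hypothetical):

```python
import numpy as np
from scipy import stats

# prices: hypothetical array of size-adjusted house prices for one year.
logp = np.log(prices)
mu, sigma = logp.mean(), logp.std(ddof=1)

# Under lognormality, log prices are normal; compare the observed share of
# observations far above the mean with the share a normal fit would imply.
threshold = mu + 3 * sigma
empirical_tail = np.mean(logp > threshold)
normal_tail = 1 - stats.norm.cdf(3)          # about 0.13 percent
print(f"empirical: {empirical_tail:.4%}, lognormal benchmark: {normal_tail:.4%}")
```

During the bubble period the empirical tail share exceeds the lognormal benchmark, i.e., the upper tail is fatter than a lognormal fit implies.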
Property market developments are of increasing importance to practitioners and policymakers. The financial crises of the past two decades have illustrated just how critical the health of this sector can be for achieving financial stability. For example, the recent financial crisis in the United States in its early stages reared its head in the form of the subprime loan problem. Similarly, the financial crises in Japan and Scandinavia in the 1990s were all triggered by the collapse of bubbles in the real estate market. More recently, the rapid rise in real estate prices - often supported by a strong expansion in bank lending - in a number of emerging market economies has become a concern for policymakers. Given these experiences, it is critically important to analyze the relationship between property markets, finance, and financial crises.
WP008
This paper estimates a money demand function using US data from 1980 onward, including the period of near-zero interest rates following the global financial crisis. We conduct cointegration tests to show that the substantial increase in the money-income ratio during the period of near-zero interest rates is captured well by the money demand function in log-log form, but not by that in semi-log form. Our result is the opposite of the result obtained by Ireland (2009), who, using data up until 2006, found that the semi-log specification performs better. The difference in the result from Ireland (2009) mainly stems from the difference in the observation period employed: our observation period contains 24 quarters with interest rates below 1 percent, while Ireland’s (2009) observation period contains only three such quarters. We also compute the welfare cost of inflation based on the estimated money demand function and find that it is very small: the welfare cost of 2 percent inflation is only 0.04 percent of national income, which is of a similar magnitude as the estimate obtained by Ireland (2009) but much smaller than the estimate by Lucas (2000).
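For reference, under the standard Bailey-type consumer-surplus approach (a generic formula, not necessarily the exact procedure used in the paper), a log-log money demand \(m(i) = A i^{-\eta}\) implies a welfare cost of a nominal interest rate \(i\), expressed as a fraction of income, of

\[
w(i) \;=\; \int_0^{i} m(x)\,dx \;-\; i\,m(i) \;=\; \frac{\eta}{1-\eta}\,A\,i^{\,1-\eta},
\]

so a small estimated interest elasticity \(\eta\) translates directly into a small welfare cost of moderate inflation.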
In regression analyses of money demand functions, there is no consensus on whether the nominal interest rate as an independent variable should be used in linear or log form. For example, Meltzer (1963), Hoffman and Rasche (1991), and Lucas (2000) employ a log-log specification (i.e., the log of real money balances, or of the money-income ratio, is regressed on the log of the nominal interest rate), while Cagan (1956), Lucas (1988), Stock and Watson (1993), and Ball (2001) employ a semi-log specification (i.e., the nominal interest rate enters in levels rather than in logs).
WP002
Using a new micro-level dataset we investigate the relationship between the inflation experience and inflation expectations of households in Japan. We focus on the period after 1995, when Japan began its era of deflation. Our key findings are fourfold. Firstly, we find that inflation expectations tend to increase with age. Secondly, we find that measured inflation rates of items purchased also increase with age. However, we find that age and inflation expectations continue to have a positive correlation even after controlling for the household-level rate of inflation. Further analysis suggests that the positive correlation between age and inflation expectations is driven to a significant degree by the correlation between cohort and inflation expectations, which we interpret to represent the effect of historical inflation experience on expectations of future inflation rates.
Since at least the time of Keynes (1936), economic agents’ expectations of future inflation rates have played a pivotal role in macroeconomics. Woodford (2003) describes the central importance of inflation expectations to modern macroeconomic models due to the intertemporal nature of economic problems, while Sargent (1982) and Blinder (2000) highlight the dependence of monetary policy on these expectations. However, despite the important role of inflation expectations, their formal inclusion in macroeconomic models is usually ad-hoc with little empirical justification.
WP001
Japan has failed to escape from deflation despite extraordinary monetary policy easing over the past four years. Monetary easing undoubtedly stimulated aggregate demand, leading to an improvement in the output gap. However, since the Phillips curve was almost flat, prices hardly reacted. Against this background, the key question is why prices were so sticky. To examine this, we employ sectoral price data for Japan and seven other countries including the United States, and use these to compare the shape of the price change distribution. Our main finding is that Japan differs significantly from the other countries in that the mode of the distribution is very close to zero for Japan, while it is near 2 percent for other countries. This suggests that whereas in the United States and other countries the “default” is for firms to raise prices by about 2 percent each year, in Japan the default is that, as a result of prolonged deflation, firms keep prices unchanged.
From the second half of the 1990s onward, Japan suffered a period of prolonged deflation, in which the consumer price index (CPI) declined as a trend. During this period, both the government and the Bank of Japan (BOJ) tried various policies to escape from deflation. For instance, from 1999 to 2000, the BOJ adopted a “zero interest rate policy” in which it lowered the policy interest rate to zero. This was followed by “quantitative easing” from 2001 until 2006. More recently, in January 2013, the BOJ adopted a “price stability target” with the aim of raising the annual rate of increase in the CPI to 2 percent. In April 2013, it announced that it was aiming to achieve the 2 percent inflation target within two years and, in order to achieve this, introduced Quantitative and Qualitative Easing (QQE), which sought to double the amount of base money within two years. Further, in February 2016, the BOJ introduced a “negative interest rate policy,” in which the BOJ applies a negative interest rate of minus 0.1 percent to current accounts held by private banks at the BOJ, followed, in September 2016, by the introduction of “yield curve control,” in which the BOJ conducts JGB operations so as to keep the 10-year JGB yield at zero percent. See Table 1 for an overview of recent policy decisions made by the BOJ.
Using a new micro-level dataset we investigate the relationship between the inflation experience and inflation expectations of individuals in Japan. We focus on the period after 1995, when Japan began its era of deflation. Our key findings are fourfold. Firstly, we find that inflation expectations tend to increase with age. Secondly, we find that measured inflation rates of items purchased also increase with age. However, we find that age and inflation expectations continue to have a positive correlation even after controlling for the individual-level rate of inflation. Further analysis suggests that the positive correlation between age and inflation expectations is driven to a significant degree by the correlation between cohort and inflation expectations, which we interpret to represent the effect of historical inflation experience on expectations of future inflation rates.
Since at least the time of Keynes (1936), economic agents’ expectations of future inflation rates have played a pivotal role in macroeconomics. Woodford (2003) describes the central importance of inflation expectations to modern macroeconomic models due to the intertemporal nature of economic problems, while Sargent (1982) and Blinder (2000) highlight the dependence of monetary policy on these expectations. However, despite the important role of inflation expectations, their formal inclusion in macroeconomic models is usually ad-hoc with little empirical justification.
A notable characteristic of Japan’s deflation since the mid-1990s is the mild pace of price decline, with the CPI falling at an annual rate of only around 1 percent. Moreover, even though unemployment increased, prices hardly reacted, giving rise to a flattening of the Phillips curve. In this paper, we address why deflation was so mild and why the Phillips curve flattened, focusing on changes in price stickiness. Our first finding is that, for the majority of the 588 items constituting the CPI, making up about 50 percent of the CPI in terms of weight, the year-on-year rate of price change was near-zero, indicating the presence of very high price stickiness. This situation started during the onset of deflation in the second half of the 1990s and continued even after the CPI inflation rate turned positive in spring 2013. Second, we find that there is a negative correlation between trend inflation and the share of items whose rate of price change is near zero, which is consistent with Ball and Mankiw’s (1994) argument based on the menu cost model that the opportunity cost of leaving prices unchanged decreases as trend inflation approaches zero. This result suggests that the price stickiness observed over the last two decades arose endogenously as a result of the decline in inflation. Third and finally, a cross-country comparison of the price change distribution reveals that Japan differs significantly from other countries in that the mode of the distribution is very close to zero for Japan, while it is near 2 percent for other countries including the United States. Japan continues to be an “outlier” even if we look at the mode of the distribution conditional on the rate of inflation. This suggests that whereas in the United States and other countries the “default” is for firms to raise prices by about 2 percent each year, in Japan the default is that, as a result of prolonged deflation, firms keep prices unchanged.
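A minimal sketch of the distributional statistics referred to here, assuming item-level year-on-year price changes and CPI weights are available as arrays (names are illustrative, not the papers' code):

```python
import numpy as np

# pi_items: year-on-year rates of price change for the items making up the CPI
# weights:  the corresponding CPI weights
bins = np.arange(-0.1025, 0.105, 0.005)                  # 0.5 percentage-point bins
hist, edges = np.histogram(pi_items, bins=bins, weights=weights)
mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])

# Weight share of items with (near-)zero price change
share_zero = weights[np.abs(pi_items) < 0.0025].sum() / weights.sum()
print(f"mode of the price change distribution: {mode:.2%}, share near zero: {share_zero:.1%}")
```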
From the second half of the 1990s onward, Japan suffered a period of prolonged deflation, in which the consumer price index (CPI) declined as a trend. During this period, both the government and the Bank of Japan (BOJ) tried various policies to escape from deflation. For instance, from 1999 to 2000, the BOJ adopted a “zero interest rate policy” in which it lowered the policy interest rate to zero. This was followed by “quantitative easing” from 2001 until 2006. More recently, in January 2013, the BOJ adopted a “price stability target” with the aim of raising the annual rate of increase in the CPI to 2 percent. In April 2013, it announced that it was aiming to achieve the 2 percent inflation target within two years and, in order to achieve this, introduced Quantitative and Qualitative Easing (QQE), which seeks to double the amount of base money within two years. Further, in February 2016, the BOJ introduced a “negative interest rate policy,” in which the BOJ applies a negative interest rate of minus 0.1 percent to current accounts held by private banks at the BOJ, followed, in September 2016, by the introduction of “yield curve control,” in which the BOJ conducts JGB operations so as to keep the 10-year JGB yield at zero percent. See Table 1 for an overview of recent policy decisions made by the BOJ.
In this study, we evaluate the effects of product turnover on a welfare-based cost-of-living index. We first present several facts about price and quantity changes over the product cycle employing scanner data for Japan for the years 1988-2013, which cover the deflationary period that started in the mid-1990s. We then develop a new method to decompose price changes at the time of product turnover into those due to the quality effect and those due to the fashion effect (i.e., the higher demand for products that are new). Our main findings are as follows: (i) the price and quantity of a new product tend to be higher than those of its predecessor at its exit from the market, implying that Japanese firms use new products as an opportunity to take back the price decline that occurred during the life of its predecessor under deflation; (ii) a considerable fashion effect exists, while the quality effect is slightly declining; and (iii) the discrepancy between the cost-of-living index estimated based on our methodology and the price index constructed only from a matched sample is not large. Our study provides a plausible story to explain why Japan's deflation during the lost decades was mild.
Central banks need to have a reliable measure of inflation when making decisions on monetary policy. Often, it is the consumer price index (CPI) they refer to when pursuing an inflation targeting policy. However, if the CPI entails severe measurement bias, monetary policy aiming to stabilize the CPI inflation rate may well bring about detrimental effects on the economy. One obstacle lies in frequent product turnover; for example, supermarkets in Japan sell hundreds of thousands of products, with new products continuously being created and old ones being discontinued. The CPI does not collect the prices of all these products. Moreover, new products do not necessarily have the same characteristics as their predecessors, so that their prices may not be comparable.
We developed a model to reconstruct the international trade network by considering both commodities and industry sectors in order to study the effects of reduced trade costs. First, we estimated trade costs to reproduce the WIOD and NBER-UN data. Using these costs, we estimated the trade costs of sector-specific trade by type of commodity. We successfully reconstructed sector-specific trade for each type of commodity by maximizing the configuration entropy given the estimated costs. In the WIOD, trade is actively conducted between the same industry sectors. On the other hand, in the NBER-UN data, trade is actively conducted between neighboring countries. This seems like a contradiction. We conducted a community analysis of the reconstructed sector-specific trade network by type of commodity. The community analysis showed that products are actively traded among the same industry sectors in neighboring countries. The observed features of the community structure for the WIOD and NBER-UN data are therefore complementary.
In the era of economic globalization, most national economies are linked by international trade, which consequently forms a complex global economic network. It is believed that greater economic growth can be achieved through free trade based on the establishment of Free Trade Agreements (FTAs) and Economic Partnership Agreements (EPAs). However, there are limitations to the resolution of the currently available trade data. For instance, the NBER-UN data record trade amounts between pairs of countries without industry sector information for each type of commodity [1], and the World Input-Output Database (WIOD) records sector-specific trade amounts without commodity information [2]. This limited resolution makes it difficult to analyze community structures in detail and to systematically assess the effects of reduced trade tariffs and trade barriers.
The distributions of market capitalization across stocks listed on the NASDAQ and Shanghai stock exchanges have power law tails. The power law exponents associated with these distributions fluctuate around one but show a substantial decline during the dot-com bubble in 1997-2000 and the Shanghai bubble in 2007. In this paper, we show that the observed decline in the power law exponents is closely related to the deviation of the market values of stocks from their fundamental values. Specifically, we regress the market capitalization of individual stocks on financial variables, such as sales, profits, and asset sizes, using the entire sample period (1990 to 2015) in order to identify variables with substantial contributions to fluctuations in fundamentals. Based on the regression results for stocks listed on the NASDAQ, we argue that the fundamental value of a company is well captured by the value of its net assets, and that the price book-value ratio (PBR) is therefore a good measure of the deviation from fundamentals. We show that the PBR distribution across stocks listed on the NASDAQ has a much heavier upper tail in 1997 than in the other years, suggesting that stock prices deviate from fundamentals for a limited number of stocks constituting the tail part of the PBR distribution. However, we fail to obtain a similar result for Shanghai stocks.
Since B. Mandelbrot identified the fractal structure of price fluctuations in asset markets in 1963 [1], statistical physicists have been investigating the economic mechanisms through which a fractal structure emerges. Power laws are an important characteristic of fractal structures. For example, some studies found that the size distribution of asset price fluctuations follows a power law [2,3]. It has also been shown that the firm size distribution (e.g., the distribution of sales across firms) follows a power law [4–8]. The power law exponent associated with firm size distributions has been close to one over the last 30 years in many countries [9,10]. The situation in which the exponent is equal to one is special in that it is the critical point between the oligopolistic phase and the pseudo-equal phase [11]. If the power law exponent is less than one, a finite number of top firms occupies a dominant share of the market even if there is an infinite number of firms.
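One standard way to track such tail exponents over time is a Hill-type estimator, sketched below. This is a generic estimator offered for illustration, not necessarily the method used in these papers.

```python
import numpy as np

def hill_exponent(x, tail_fraction=0.05):
    """Hill estimator of the power law exponent alpha of the upper tail,
    where P(X > x) ~ x^(-alpha). `x` could be, e.g., the market capitalizations
    of all listed firms in a given year (an illustrative use)."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(int(len(x) * tail_fraction), 10)   # number of top observations used
    tail, threshold = x[-k:], x[-(k + 1)]
    return 1.0 / np.mean(np.log(tail / threshold))

# An exponent hovering around one corresponds to Zipf's law; a drop well below
# one in a given year signals the heavier upper tail associated with a bubble.
```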
Buyer–seller relationships among firms can be regarded as a longitudinal network in which the connectivity pattern evolves as each firm receives productivity shocks. Based on structural equation modeling applied to a data set describing the evolution of buyer–seller links among 55,608 firms over a decade, we find some evidence that interfirm networks evolve to reflect firms' local decisions to mitigate adverse effects from neighboring firms through interfirm linkages, while enjoying positive effects from them. As a result, link renewal tends to have a positive impact on the growth rates of firms. We also investigate the role of networks in aggregate fluctuations.
The interfirm buyer–seller network is important from both the macroeconomic and the microeconomic perspectives. From the macroeconomic perspective, this network represents a form of interconnectedness in an economy that allows firm-level idiosyncratic shocks to propagate to other firms. Previous studies have suggested that this propagation mechanism interferes with the averaging-out process of shocks and possibly has an impact on macroeconomic variables such as aggregate fluctuations (Acemoglu, Ozdaglar and Tahbaz-Salehi (2013), Acemoglu et al. (2012), Carvalho (2014), Carvalho (2007), Shea (2002), Foerster, Sarte and Watson (2011) and Malysheva and Sarte (2011)). From the microeconomic perspective, the network at a particular point in time is the result of each firm's link renewal decisions, made in order to avoid negative shocks from, or share positive shocks with, its neighboring firms. These two views of the network are related by the fact that both concern the propagation of shocks. The former view stresses that idiosyncratic shocks propagate through a static network, while the latter provides a more dynamic view in which firms can renew their link structure in order to share or avoid shocks. What is not clear is how the latter view affects the former. Does link renewal increase aggregate fluctuations because firms form new links that convey positive shocks, does it decrease aggregate fluctuations because firms sever links that convey negative shocks, or does it have a different effect?
We propose an indicator to measure the degree to which a particular news article is novel, as well as an indicator to measure the degree to which a particular news item attracts attention from investors. The novelty measure is obtained by comparing the extent to which a particular news article is similar to earlier news articles, and an article is regarded as novel if there was no similar article before it. On the other hand, we say a news item receives a lot of attention and thus is highly topical if it is simultaneously reported by many news agencies and read by many investors who receive news from those agencies. The topicality measure for a news item is obtained by counting the number of news articles whose content is similar to an original news article but which are delivered by other news agencies. To check the performance of the indicators, we empirically examine how these indicators are correlated with intraday financial market indicators such as the number of transactions and price volatility. Specifically, we use a dataset consisting of over 90 million business news articles reported in English and a dataset consisting of minute-by-minute stock prices on the New York Stock Exchange and the NASDAQ Stock Market from 2003 to 2014, and show that stock prices and transaction volumes exhibited a significant response to a news article when it is novel and topical.
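A minimal sketch of similarity-based novelty and topicality scores, using TF-IDF cosine similarity as the text-similarity measure. The similarity measure, threshold, and windows are assumptions for illustration; the paper's actual construction may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novelty_and_topicality(article, earlier_articles, other_agency_articles,
                           sim_threshold=0.6):
    """Novelty: one minus the highest similarity to any earlier article.
    Topicality: number of articles from other agencies similar to this one."""
    corpus = earlier_articles + [article] + other_agency_articles
    vec = TfidfVectorizer().fit(corpus)
    a = vec.transform([article])
    novelty = 1.0 - cosine_similarity(a, vec.transform(earlier_articles)).max()
    topicality = int((cosine_similarity(a, vec.transform(other_agency_articles))
                      >= sim_threshold).sum())
    return novelty, topicality
```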
Financial markets can be regarded as a non-equilibrium open system. Understanding how they work remains a great challenge to researchers in finance, economics, and statistical physics. Fluctuations in financial market prices are sometimes driven by endogenous forces and sometimes by exogenous forces. Business news is a typical example of exogenous forces. Casual observation indicates that stock prices respond to news articles reporting on new developments concerning companies’ circumstances. Market reactions to news have been extensively studied by researchers in several different fields [1]–[13], with some researchers attempting to construct models that capture static and/or dynamic responses to endogenous and exogenous shocks [14], [15]. The starting point for neoclassical financial economists typically is what they refer to as the “efficient market hypothesis,” which implies that stock prices respond at the very moment that news is delivered to market participants. A number of empirical studies have attempted to identify such an immediate price response to news but have found little evidence supporting the efficient market hypothesis [16]–[21].
We examine how precisely one can reproduce the CPI constructed based on price surveys using scanner data. Specifically, we closely follow the procedure adopted by the Statistics Bureau of Japan when we sample outlets, products, and prices from our scanner data and aggregate them to construct a scanner data-based price index. We show that the following holds the key to precise replication of the CPI. First, the scanner data-based index crucially depends on how often one replaces the products sampled. The scanner data index shows a substantial deviation from the actual CPI when one chooses a value for the parameter associated with product replacement such that replacement occurs frequently, but the deviation becomes much smaller if one picks a parameter value such that product replacement occurs only infrequently. Second, even when products are replaced only infrequently, the scanner data index differs significantly from the actual CPI in terms of volatility. The standard deviation of the scanner data-based monthly inflation rate is 1.54 percent, which is more than three times as large as that for actual CPI inflation. We decompose the difference in volatility between the two indexes into various factors, showing that it mainly stems from the difference in price rigidity for individual products. We propose a filtering technique to make individual prices in the scanner data stickier, thereby making scanner data-based inflation less volatile.
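The filtering idea can be illustrated with a deliberately simple example: replace each observed price by the modal price over a trailing window, so that short-lived sale prices are smoothed away and the filtered series changes only when a new regular price persists. The window length and the mode rule below are assumptions for illustration; they are not the paper’s actual filter.

```python
# A minimal, hypothetical filter that makes item-level prices "stickier" by
# replacing each observation with the modal price over a trailing window.
# This illustrates the general idea only; it is not the paper's procedure.
from collections import Counter

def sticky_filter(prices, window=14):
    """Return a series where each price is the most frequent (modal) price
    observed over the previous `window` observations (including today)."""
    filtered = []
    for t in range(len(prices)):
        lo = max(0, t - window + 1)
        mode_price, _ = Counter(prices[lo:t + 1]).most_common(1)[0]
        filtered.append(mode_price)
    return filtered

raw = [100, 100, 80, 100, 100, 100, 80, 80, 100, 100]   # 80 = temporary sale price
print(sticky_filter(raw, window=5))                      # sale prices are smoothed out
```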
Scanner data has started to be used by national statistical offices in a number of countries, including Australia, the Netherlands, Norway, Sweden, and Switzerland, for at least part of the production of their consumer price indexes (CPIs). Many other national statistical offices have also already started preparing for the use of scanner data in constructing their CPIs. The purpose of this paper is to empirically examine whether price indexes based on scanner data are consistent with price indexes constructed using the traditional survey-based method.
We investigate the structure of global inter-firm linkages using a dataset that contains information on business partners for about 400,000 firms worldwide, including all the firms listed on the major stock exchanges. Among the firms, we examine three networks, which are based on customer-supplier, licensee-licensor, and strategic alliance relationships. First, we show that these networks all have scale-free topology and that the degree distribution for each follows a power law with an exponent of 1.5. The shortest path length is around six for all three networks. Second, we show through community structure analysis that firms tend to form communities with firms that belong to the same industry but have different home countries, indicating the globalization of firms’ production activities. Finally, we discuss what such production globalization implies for the proliferation of conflict minerals (i.e., minerals extracted from conflict zones and sold to firms in other countries to perpetuate fighting) through global buyer-supplier linkages. We show that a limited number of firms belonging to some specific industries and countries play an important role in the global proliferation of conflict minerals. Our numerical simulation shows that regulations on the purchases of conflict minerals by those firms would substantially reduce their worldwide use.
Many complex physical systems can be modeled and better understood as complex networks [1, 2, 3]. Recent studies show that economic systems can also be regarded as complex networks in which economic agents, like consumers, firms, and governments, are closely connected [4, 5]. To understand the interaction among economic agents, we must uncover the structure of economic networks.
In Japan, deflation, in the form of a trend decline in the consumer price index (CPI), persisted from 1995 until the spring of 2013. A distinguishing feature of this deflation was its mildness, with prices falling at a rate of only about 1 percent per year. Moreover, prices responded only marginally to rising unemployment, so that the Phillips curve flattened. To examine why deflation was so mild and why the Phillips curve flattened, this paper focuses on changes in price rigidity during the deflationary period. Our main findings are as follows. First, when we compute the year-on-year rate of change for each of the 588 items making up the CPI, items with rates near zero are the most numerous, accounting for about 50 percent of the CPI by weight; in this sense, price rigidity is high. This situation began during the deflationary period of the late 1990s and has continued even after year-on-year CPI inflation turned positive in the spring of 2013. In the United States and other countries, by contrast, items with inflation rates of around 2 percent are the most numerous. In those countries the default is for firms to raise prices by about 2 percent each year, whereas in Japan, still under the lingering influence of deflation, leaving prices unchanged has become the default. Second, using monthly data from 1970 onward to examine the relationship between the share of items with near-zero inflation and year-on-year CPI inflation, we find that the higher CPI inflation is (i.e., the further it moves above zero), the smaller the share of near-zero items, with the relationship being linear. This can be interpreted as reflecting the rising opportunity cost of keeping prices unchanged as inflation increases, and is consistent with the menu cost hypothesis. Given this result, the increase in price rigidity since the late 1990s arose endogenously as CPI inflation declined, and prices can be expected to gradually regain flexibility if CPI inflation rises in the future. Third, simulation analysis shows that when deflationary pressure persists for a long time, more firms than usual find their actual prices above the level they would like to charge; that is, there are many "latent price cutters" (firms that would like to lower their prices if they could), while there are few "latent price raisers" whose actual prices are below their desired level. In this situation, the effect of monetary easing on prices is limited. In Japan, as a negative legacy of prolonged deflation, many latent price cutters still remain, and clearing them out will not be easy.
In Japan, the consumer price index (CPI) has tended to decline since the mid-1990s and deflation has persisted. Aiming to escape deflation, the government and the Bank of Japan have implemented several policy measures. The BOJ adopted the "zero interest rate policy," lowering its policy rate, the call rate, to zero in 1999-2000, and then conducted "quantitative easing" from 2001 to 2006. More recently, in January 2013, it introduced an inflation target of 2 percent CPI inflation. Furthermore, in April 2013 it announced that it would achieve the 2 percent target within two years and, to that end, launched "Quantitative and Qualitative Easing (QQE)," under which the monetary base would be doubled within two years.
We propose a new method to estimate quality adjusted commercial property price indexes using real estate investment trust (REIT) data. Our method is based on the present value approach, but the way the denominator (i.e., the discount rate) and the numerator (i.e., cash flows from properties) are estimated differs from the traditional method. We run a hedonic regression to estimate the quality adjusted discount rate based on the share prices of REITs, which can be regarded as the stock market’s valuation of the set of properties owned by the REITs. As for the numerator, we use rental prices associated only with new rental contracts rather than those associated with all existing contracts. Using a dataset with prices and cash flows for about 400 commercial properties included in Japanese REITs for the period 2001 to 2013, we find that our price index signals turning points much earlier than an appraisal-based price index; specifically, our index peaks in the second quarter of 2007, while the appraisal-based price index exhibits a turnaround only in the third quarter of 2008. Our results suggest that the share prices of REITs provide useful information in constructing commercial property price indexes.
Looking back at the history of economic crises, there are a considerable number of cases where a crisis was triggered by the collapse of a real estate price bubble. For example, it is widely accepted that the collapse of Japan’s land and stock price bubble in the early 1990s has played an important role in the subsequent economic stagnation, and in particular the banking crisis that started in the latter half of the 1990s. Similarly, the Nordic banking crisis in the early 1990s also occurred in tandem with a property bubble collapse, while the global financial crisis that began in the United States in 2008 and the European debt crisis were also triggered by the collapse of bubbles in the property and financial markets.
In this paper, we investigate the structure and evolution of customer-supplier networks in Japan using a unique dataset that contains information on customer and supplier linkages for more than 500,000 incorporated non-financial firms for the five years from 2008 to 2012. We find, first, that the number of customer links is unequal across firms; the customer link distribution has a power-law tail with an exponent of unity (i.e., it follows Zipf’s law). We interpret this as implying that competition among firms to acquire new customers yields winners with a large number of customers, as well as losers with fewer customers. We also show that the shortest path length for any pair of firms is, on average, 4.3 links. Second, we find that link switching is relatively rare. Our estimates indicate that the survival rate per year for customer links is 92 percent and for supplier links 93 percent. Third and finally, we find that firm growth rates tend to be more highly correlated the closer two firms are to each other in a customer-supplier network (i.e., the smaller is the shortest path length for the two firms). This suggests that a non-negligible portion of fluctuations in firm growth stems from the propagation of microeconomic shocks – shocks affecting only a particular firm – through customer-supplier chains.
Firms in a modern economy tend to be closely interconnected, particularly in the manufacturing sector. Firms typically rely on the delivery of materials or intermediate products from their suppliers to produce their own products, which in turn are delivered to other downstream firms. Two recent episodes vividly illustrate just how closely firms are interconnected. The first is the recent earthquake in Japan. The earthquake and tsunami hit the Tohoku region, the north-eastern part of Japan, on March 11, 2011, resulting in significant human and physical damage to that region. However, the economic damage was not restricted to that region and spread in an unanticipated manner to other parts of Japan through the disruption of supply chains. For example, vehicle production by Japanese automakers, which are located far away from the affected areas, was stopped or slowed down due to a shortage of auto parts supplies from firms located in the affected areas. The shock even spread across borders, leading to a substantial decline in North American vehicle production. The second episode is the recent financial turmoil triggered by the subprime mortgage crisis in the United States. The adverse shock originally stemming from the so-called toxic assets on the balance sheets of U.S. financial institutions led to the failure of these institutions and was transmitted beyond entities that had direct business with the collapsed financial institutions to those that seemed to have no relationship with them, resulting in a storm that affected financial institutions around the world.
Standard New Keynesian models have often neglected temporary sales. In this paper, we ask whether this treatment is appropriate. In the empirical part of the paper, we provide evidence using Japanese scanner data covering the last two decades that the frequency of sales was closely related with macroeconomic developments. Specifically, we find that the frequency of sales and hours worked move in opposite directions in response to technology shocks, producing a negative correlation between the two. We then construct a dynamic stochastic general equilibrium model that takes households’ decisions regarding their allocation of time for work, leisure, and bargain hunting into account. Using this model, we show that the rise in the frequency of sales, which is observed in the data, can be accounted for by the decline in hours worked during Japan’s lost decades. We also find that the real effect of monetary policy shocks weakens by around 40% due to the presence of temporary sales, but monetary policy still matters.
Standard New Keynesian models have often neglected temporary sales, although the frequency of sales is far higher than that of regular price changes, and hence it is not necessarily guaranteed that the assumption of sticky prices holds. Ignoring this fact is justified, however, if retailers’ decision to hold sales is independent of macroeconomic developments. If this is the case, temporary sales do not eliminate the real effect of monetary policy. In fact, Guimaraes and Sheedy (2011, hereafter GS) develop a dynamic stochastic general equilibrium (DSGE) model incorporating sales and show that the real effect of monetary policy remains largely unchanged. Empirical studies such as Kehoe and Midrigan (2010), Eichenbaum, Jaimovich, and Rebelo (2011), and Anderson et al. (2012) argue that retailers’ decision to hold a sale is actually orthogonal to changes in macroeconomic developments.
Using a simultaneous-move herding model of rational traders who infer other traders’ private information on the value of an asset by observing their aggregate actions, this study seeks to explain the emergence of fat-tailed distributions of transaction volumes and asset returns in financial markets. Without making any parametric assumptions on private information, we analytically show that traders’ aggregate actions follow a power law distribution. We also provide simulation results to show that our model successfully reproduces the empirical distributions of asset returns. We argue that our model is similar to Keynes’s beauty contest in the sense that traders, who are assumed to be homogeneous, have an incentive to mimic the average trader, leading to a situation similar to the indeterminacy of equilibrium. In this situation, a trader’s buying action causes a stochastic chain-reaction, resulting in power laws for financial fluctuations.
Keywords: Herd behavior, transaction volume, stock return, fat tail, power law. JEL classification code: G14.
Since Mandelbrot [25] and Fama [13], it has been well established that stock returns exhibit fat-tailed and leptokurtic distributions. Jansen and de Vries [19], for example, have shown empirically that the power law exponent for stock returns is in the range of 3 to 5, which guarantees that the variance is finite but the distribution deviates substantially from the normal distribution in terms of the fourth moment. Such an anomaly in the tail shape, as well as kurtosis, has been regarded as one reason for the excess volatility of stock returns.
We construct a Törnqvist daily price index using Japanese point of sale (POS) scanner data spanning from 1988 to 2013. We find the following. First, the POS-based inflation rate tends to be about 0.5 percentage points lower than the CPI inflation rate, although the difference between the two varies over time. Second, the difference between the two measures is greatest from 1992 to 1994, when, following the burst of the bubble economy in 1991, the POS inflation rate drops rapidly and turns negative in June 1992, while the CPI inflation rate remains positive until summer 1994. Third, the standard deviation of daily POS inflation is 1.1 percent compared to a standard deviation for the monthly change in the CPI of 0.2 percent, indicating that daily POS inflation is much more volatile, mainly due to frequent switching between regular and sale prices. We show that the volatility of the daily inflation rate can be reduced by more than 20 percent by trimming the tails of product-level price change distributions. Finally, if we measure price changes from one day to the next and construct a chained Törnqvist index, a strong chain drift arises so that the chained price index falls to 10^-10 of the base value over the 25-year sample period, which is equivalent to an annual deflation rate of 60 percent. We provide evidence suggesting that one source of the chain drift is fluctuations in sales quantity before, during, and after a sale period.
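For reference, a chained Törnqvist index with symmetric trimming of item-level daily log price changes can be sketched as follows. The simulated data, the trimming share, and the renormalization of weights are illustrative assumptions rather than the paper’s exact procedure.

```python
# A minimal sketch of a chained Törnqvist index with tail trimming of the
# item-level daily log price changes. Data and the trimming rule are illustrative.
import numpy as np

def chained_tornqvist(prices, quantities, trim=0.0):
    """prices, quantities: arrays of shape (T, N) for T days and N products."""
    expend = prices * quantities
    shares = expend / expend.sum(axis=1, keepdims=True)      # expenditure shares
    index = [1.0]
    for t in range(1, prices.shape[0]):
        dlogp = np.log(prices[t] / prices[t - 1])             # daily log price changes
        w = 0.5 * (shares[t] + shares[t - 1])                 # Törnqvist weights
        if trim > 0:                                          # drop the largest/smallest changes
            lo, hi = np.quantile(dlogp, [trim, 1 - trim])
            keep = (dlogp >= lo) & (dlogp <= hi)
            dlogp, w = dlogp[keep], w[keep] / w[keep].sum()
        index.append(index[-1] * np.exp(np.sum(w * dlogp)))
    return np.array(index)

rng = np.random.default_rng(0)
p = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(30, 50)), axis=0))
q = rng.uniform(1, 10, size=(30, 50))
print(chained_tornqvist(p, q, trim=0.02)[-1])
```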
Japan's central bank and government are currently engaged in a major experiment to raise the rate of inflation to the target of 2 percent set by the Bank of Japan (BOJ). With overcoming deflation being a key policy priority, a first step in this direction is the accurate assessment of price developments. In Japan, prices are measured by the Statistics Bureau, Ministry of Internal Affairs and Communications, and the consumer price index (CPI) published by the Statistics Bureau is the most important indicator that the BOJ pays attention to when making policy decisions. The CPI, moreover, is of direct relevance to people's lives as, for example, public pension benefits are linked to the rate of inflation as measured by the CPI.
We start from Gibrat’s law and quasi-inversion symmetry for three firm size variables (i.e., tangible fixed assets K, number of employees L, and sales Y) and derive a partial differential equation to be satisfied by the joint probability density function of K and L. We then transform K and L, which are correlated, into two independent variables by applying surface openness used in geomorphology and provide an analytical solution to the partial differential equation. Using worldwide data on the firm size variables for companies, we confirm that the estimates of the power-law exponents of K, L, and Y satisfy a relationship implied by the theory.
In econophysics, it is well-known that the cumulative distribution functions (CDFs) of capital K, labor L, and production Y of firms obey power laws in large scales that exceed certain size thresholds, which are given by K0, L0, and Y0:
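In the form standard in this literature (the notation below is assumed), these power laws can be written as:

```latex
P_>(K) \propto K^{-\mu_K}\ \ (K > K_0),\qquad
P_>(L) \propto L^{-\mu_L}\ \ (L > L_0),\qquad
P_>(Y) \propto Y^{-\mu_Y}\ \ (Y > Y_0),
```

where P_> denotes the cumulative (tail) distribution function and µ_K, µ_L, µ_Y are the power-law exponents.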
Dramatic increases and decreases in housing prices have had an enormous impact on the economies of various countries. If this kind of fluctuation in housing prices is linked to fluctuations in the consumer price index (CPI) and GDP, it may be reflected in fiscal and monetary policies. However, during the 1980s housing bubble in Japan and the later U.S. housing bubble, fluctuations in asset prices were not sufficiently reflected in price statistics and the like. The estimation of imputed rent for owner-occupied housing is said to be one of the most important factors for this. Using multiple previously proposed methods, this study estimated the imputed rent for owner-occupied housing in Tokyo and clarified the extent to which the estimated imputed rent diverged depending on the estimation method. Examining the results obtained showed that, during the bubble’s peak, there was an 11-fold discrepancy between the Equivalent Rent Approach currently employed in Japan and Equivalent Rent calculated with a hedonic approach using market rent. Meanwhile, with the User Cost Approach, during the bubble period when asset prices rose significantly, the values became negative with some estimation methods. Accordingly, we estimated Diewert’s OOH Index, which was proposed by Diewert and Nakamura (2009). When the Diewert OOH Index results estimated here were compared to Equivalent Rent Approach estimation results modified with the hedonic approach using market rent, it was revealed that from 1990 to 2009, the Diewert OOH Index results were on average 1.7 times greater than the Equivalent Rent Approach results, with a maximum 3-fold difference. These findings suggest that even when the Equivalent Rent Approach is improved, significant discrepancies remain.
Housing price fluctuations exert effects on the economy through various channels. Strictly speaking, however, what should be observed are relative prices: the price of housing relative to other asset prices and to the prices of goods and services.
Official price indexes, such as the CPI, are imperfect indicators of inflation calculated using ad hoc price formulae different from the theoretically well-founded inflation indexes favored by economists. This paper provides the first estimate of how accurately the CPI informs us about “true” inflation. We use the largest price and quantity dataset ever employed in economics to build a Törnqvist inflation index for Japan between 1989 and 2010. Our comparison of this true inflation index with the CPI indicates that the CPI bias is not constant but depends on the level of inflation. We show the informativeness of the CPI rises with inflation. When measured inflation is low (less than 2.4% per year) the CPI is a poor predictor of true inflation even over 12-month periods. Outside this range, the CPI is a much better measure of inflation. We find that the U.S. PCE Deflator methodology is superior to the Japanese CPI methodology but still exhibits substantial measurement error and biases rendering it a problematic predictor of inflation in low inflation regimes as well.
We have long known that the price indexes constructed by statistical agencies, such as the Consumer Price Index (CPI) and the Personal Consumption Expenditure (PCE) deflator, measure inflation with error. This error arises for two reasons. First, formula biases or errors appear because statistical agencies do not use the price aggregation formula dictated by theory. Second, imperfect sampling means that official price indexes are inherently stochastic. A theoretical macroeconomics literature starting with Svensson and Woodford [2003] and Aoki [2003] has noted that these stochastic measurement errors imply that one cannot assume that true inflation equals the CPI less some bias term. In general, the relationship is more complex, but what is it? This paper provides the first answer to this question by analyzing the largest dataset ever utilized in economics: 5 billion Japanese price and quantity observations collected over a 23-year period. The results are disturbing. We show that when the Japanese CPI measures inflation as low (below 2.4 percent in our baseline estimates) there is little relation between measured inflation and actual inflation. Outside of this range, measured inflation understates actual inflation changes. In other words, one can infer inflation changes from CPI changes when the CPI is high, but not when the CPI is close to zero. We also show that if Japan were to shift to a methodology akin to the U.S. PCE deflator, the non-linearity would be reduced but not eliminated. This non-linear relationship between measured and actual inflation has important implications for the conduct of monetary policy in low inflation regimes.
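As a rough illustration of the formula bias mentioned above, the fixed-basket Laspeyres-type formula used for the CPI and the superlative Törnqvist formula favored by theory differ as follows (standard textbook forms, not the paper’s notation):

```latex
\text{Laspeyres:}\quad \Pi^{L}_{0,t} = \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}},
\qquad
\text{T\"ornqvist:}\quad \ln \Pi^{T}_{0,t} = \sum_i \tfrac{1}{2}\bigl(s_{i,0}+s_{i,t}\bigr)\,\ln\frac{p_{i,t}}{p_{i,0}},
\quad s_{i,t} = \frac{p_{i,t}\, q_{i,t}}{\sum_j p_{j,t}\, q_{j,t}} .
```

The Laspeyres form holds quantities fixed at the base period, whereas the Törnqvist form weights each item’s log price change by its average expenditure share in the two periods.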
Consumer price inflation in Japan has been below zero since the mid-1990s. Given this, it is difficult for firms to raise product prices in response to an increase in marginal costs. One pricing strategy firms have taken in this situation is to reduce the size or the weight of a product while leaving the price more or less unchanged, thereby raising the effective price. In this paper, we empirically examine the extent to which product downsizing occurred in Japan as well as the effects of product downsizing on prices and quantities sold. Using scanner data on prices and quantities for all products sold at about 200 supermarkets over the last ten years, we find that about one third of product replacements that occurred in our sample period were accompanied by a size/weight reduction. The number of product replacements with downsizing has been particularly high since 2007. We also find that prices, on average, did not change much at the time of product replacement, even if a product replacement was accompanied by product downsizing, resulting in an effective price increase. However, comparing the magnitudes of product downsizings, our results indicate that prices declined more for product replacements that involved a larger decline in size or weight. Finally, we show that the quantities sold decline with product downsizing, and that the responsiveness of quantity purchased to size/weight changes is almost the same as the price elasticity, indicating that consumers are as sensitive to size/weight changes as they are to price changes. This implies that quality adjustments based on per-unit prices, which are widely used by statistical agencies in countries around the world, may be an appropriate way to deal with product downsizing.
Consumer price inflation in Japan has been below zero since the mid-1990s, clearly indicating the emergence of deflation over the last 15 years. The rate of deflation as measured by the headline consumer price index (CPI) has been around 1 percent annually, which is much smaller than the rates observed in the United States during the Great Depression, indicating that although Japan’s deflation is persistent, it is only moderate. It has been argued by researchers and practitioners that at least in the early stages the main cause of deflation was weak aggregate demand, although deflation later accelerated due to pessimistic expectations reflecting firms’ and households’ view that deflation was not a transitory but a persistent phenomenon and that it would continue for a while.
Why are product prices in online markets dispersed in spite of very small search costs? To address this question, we construct a unique dataset from a Japanese price comparison site, which records price quotes offered by e-retailers as well as customers’ clicks on products, which occur when they proceed to purchase the product. We find that the distribution of prices retailers quote for a particular product at a particular point in time (divided by the lowest price) follows an exponential distribution, showing the presence of substantial price dispersion. For example, 20 percent of all retailers quote prices that are more than 50 percent higher than the lowest price. Next, comparing the probability that customers click on a retailer with a particular rank and the probability that retailers post prices at a particular rank, we show that both decline exponentially with price rank and that the exponents associated with the probabilities are quite close. This suggests that the reason why some retailers set prices at a level substantially higher than the lowest price is that they know that some customers will choose them even at that high price. Based on these findings, we hypothesize that price dispersion in online markets stems from heterogeneity in customers’ preferences over retailers; that is, customers choose a set of candidate retailers based on their preferences, which are heterogeneous across customers, and then pick a particular retailer among the candidates based on the price ranking.
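The exponential-distribution claim is easy to probe on any snapshot of quotes: fit an exponential to the normalized margins p_i / p_min − 1 and check the fit. The sketch below does this on simulated data with scipy; the data and the use of a Kolmogorov-Smirnov check are illustrative, not the paper’s estimation procedure.

```python
# A small illustrative check (on simulated data, not the paper's dataset) of whether
# normalized price quotes follow an exponential distribution, using a maximum-
# likelihood fit and a Kolmogorov-Smirnov test from scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lowest = 100.0
margins = rng.exponential(scale=0.2, size=500)        # simulated (p_i - p_min) / p_min
prices = lowest * (1.0 + margins)

norm_margin = prices / prices.min() - 1.0
loc, scale = stats.expon.fit(norm_margin, floc=0.0)    # MLE with location fixed at 0
ks_stat, p_value = stats.kstest(norm_margin, "expon", args=(0.0, scale))
print(f"fitted mean margin = {scale:.3f}, KS p-value = {p_value:.3f}")
```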
The number of internet users worldwide is 2.4 billion, constituting about 35 percent of the global population. The number of users has more than doubled over the last five years and continues to increase [1]. In the early stages of the internet boom, observers predicted that the spread of the internet would lead the retail industry toward a state of perfect competition, or a Bertrand equilibrium [2]. For instance, The Economist stated in 1990 that “[t]he explosive growth of the Internet promises a new age of perfectly competitive markets. With perfect information about prices and products at their fingertips, consumers can quickly and easily find the best deals. In this brave new world, retailers’ profit margins will be competed away, as they are all forced to price at cost” [3]. Even academic researchers argued that online markets would soon be close to perfectly competitive markets [4][5][6][7].
We investigate the cross-sectional distribution of house prices in the Greater Tokyo Area for the period 1986 to 2009. We find that size-adjusted house prices follow a lognormal distribution except for the period of the housing bubble and its collapse in Tokyo, for which the price distribution has a substantially heavier right tail than that of a lognormal distribution. We also find that, during the bubble era, sharp price movements were concentrated in particular areas, and this spatial heterogeneity is the source of the fat upper tail. These findings suggest that, during a bubble period, prices go up prominently for particular properties, but not so much for other properties, and as a result, price inequality across properties increases. In other words, the defining property of real estate bubbles is not the rapid price hike itself but an increase in price dispersion. We argue that the shape of cross sectional house price distributions may contain information useful for the detection of housing bubbles.
Property market developments are of increasing importance to practitioners and policymakers. The financial crises of the past two decades have illustrated just how critical the health of this sector can be for achieving financial stability. For example, the recent financial crisis in the United States in its early stages reared its head in the form of the subprime loan problem. Similarly, the financial crises in Japan and Scandinavia in the 1990s were all triggered by the collapse of bubbles in the real estate market. More recently, the rapid rise in real estate prices - often supported by a strong expansion in bank lending - in a number of emerging market economies has become a concern for policymakers. Given these experiences, it is critically important to analyze the relationship between property markets, finance, and financial crisis.
We propose a new method to estimate quality adjusted commercial property price indexes using real estate investment trust (REIT) data. Our method is based on the present value approach, but the way the denominator (i.e., the discount rate) and the numerator (i.e., cash flows from properties) are estimated differs from the traditional method. We estimate the discount rate based on the share prices of REITs, which can be regarded as the stock market’s valuation of the set of properties owned by the REITs. As for the numerator, we use rental prices associated only with new rental contracts rather than those associated with all existing contracts. Using a dataset with prices and cash flows for about 500 commercial properties included in Japanese REITs for the period 2003 to 2010, we find that our price index signals turning points much earlier than an appraisal-based price index; specifically, our index peaks in the first quarter of 2007, while the appraisal-based price index exhibits a turnaround only in the third quarter of 2008. Our results suggest that the share prices of REITs provide useful information in constructing commercial property price indexes.
Looking back at the history of economic crises, there are a considerable number of cases where a crisis was triggered by the collapse of real estate price bubbles. For example, it is widely accepted that the collapse of Japan’s land/stock price bubble in the early 1990s has played an important role in the subsequent economic stagnation, and in particular the banking crisis that started in the latter half of the 1990s. Similarly, the Nordic banking crisis in the early 1990s also occurred in tandem with a property bubble collapse, while the global financial crisis that began in the U.S. in 2008 and the recent European debt crisis were also triggered by the collapse of bubbles in the property and financial markets.
We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (log K, log L, log Y), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
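For reference, the plane in (log K, log L, log Y) space described above is simply the Cobb-Douglas production function written in logarithms:

```latex
Y = A\,K^{\alpha} L^{\beta}
\quad\Longleftrightarrow\quad
\log Y = \log A + \alpha \log K + \beta \log L .
```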
In various phase transitions, it is universally observed that physical quantities near critical points obey power laws. For instance, in magnetic substances, specific heat, magnetic dipole density, and magnetic susceptibility follow power laws of heat or magnetic flux. It is also known that the cluster-size distribution of the spin follows power laws. The renormalization group approach has been employed to confirm that power laws arise as critical phenomena of phase transitions [1].
Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affect trading and the pricing of firms in organized stock markets. In this paper we seek to address this issue by performing an analysis of more than 24 million news records provided by Thomson Reuters and of their relationship with trading activity for 205 major stocks in the S&P US stock index. We show that the whole landscape of news that affect stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their “thematic” features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized facts in financial economics, namely that at certain times trading volumes appear to be “abnormally large,” can be explained by the flow of news. In this sense, our results prove that there is no “excess trading,” if the news is genuinely novel and provides relevant financial information.
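The pipeline described above can be caricatured in a few lines: decompose the news corpus into topic intensities with a simple topic model and regress a trading activity measure on those intensities with a regularized regression. Everything below (the toy corpus, the number of topics, the Lasso penalty) is an illustrative assumption, not the authors’ implementation.

```python
# A minimal sketch of the general pipeline: topic decomposition of news text
# followed by a regularized (Lasso) regression of trading activity on topic weights.
# The corpus and the volume series are simulated; this is not the authors' code.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Lasso

docs = [
    "central bank raises interest rates",
    "oil prices surge on supply fears",
    "tech firm beats earnings expectations",
    "regulator probes merger of two banks",
] * 25                                               # 100 toy news items

counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=4, random_state=0)
theta = topics.fit_transform(counts)                  # per-document topic weights

rng = np.random.default_rng(0)
volume = theta @ np.array([2.0, 0.0, 1.0, 0.0]) + rng.normal(0, 0.1, len(docs))

model = Lasso(alpha=0.01).fit(theta, volume)
print("estimated topic impacts on trading volume:", model.coef_)
```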
Neoclassical financial economics based on the “efficient market hypothesis” (EMH) considers price movements as almost perfect instantaneous reactions to information flows. Thus, according to the EMH, price changes simply reflect exogenous news. Such news - of all possible types (geopolitical, environmental, social, financial, economic, etc.) - lead investors to continuously reassess their expectations of the cash flows that firms’ investment projects could generate in the future. These reassessments are translated into readjusted demand/supply functions, which then push prices up or down, depending on the net imbalance between demand and supply, towards a fundamental value. As a consequence, observed prices are considered the best embodiments of the present value of future cash flows. In this view, market movements are purely exogenous without any internal feedback loops. In particular, the most extreme losses occurring during crashes are considered to be solely triggered exogenously.
The consumer price inflation rate in Japan has been below zero since the mid-1990s. However, despite the presence of a substantial output gap, the rate of deflation has been much smaller than that observed in the United States during the Great Depression. Given this, doubts have been raised regarding the accuracy of Japan’s official inflation estimates. Against this background, the purpose of this paper is to investigate to what extent estimates of the inflation rate depend on the methodology adopted. Our specific focus is on how inflation estimates depend on the method of outlet, product, and price sampling employed. For the analysis, we use daily scanner data on prices and quantities for all products sold at about 200 supermarkets over the last ten years. We regard this dataset as the “universe” and send out (virtual) price collectors to conduct sampling following more than sixty different sampling rules. We find that the officially released outcome can be reproduced when employing a sampling rule similar to the one adopted by the Statistics Bureau. However, we obtain numbers quite different from the official ones when we employ different rules. The largest rate of deflation we find using a particular rule is about 1 percent per year, which is twice as large as the official number, suggesting the presence of substantial upward bias in the official inflation rate. Nonetheless, our results show that the rate of deflation over the last decade is still small relative to that in the United States during the Great Depression, indicating that Japan’s deflation is moderate.
The consumer price index (CPI) inflation rate in Japan has been below zero since the mid-1990s, clearly indicating the emergence of deflation over the last 15 years. However, the rate of deflation measured by headline CPI in each year was around 1 percent, which is much smaller than the rates observed in the United States during the Great Depression. Some suggest that this simply reflects the fact that although Japan’s deflation is persistent, it is only moderate. Others, both inside and outside the country, however, argue that something must be wrong with the deflation figures, questioning Japan’s price data from a variety of angles. One of these is that, from an economic perspective, the rate of deflation, given the huge and persistent output gap in Japan, should be higher than the numbers released by the government suggest. Fuhrer et al. (2011), for example, estimating a NAIRU model for Japan, conclude that it would not have been surprising if the rate of deflation had reached 3 percent per year. Another argument focuses on the statistics directly. Broda and Weinstein (2007) and Ariga and Matsui (2003), for example, maintain that there remains non-trivial mismeasurement in the Japanese consumer price index, so that the officially released CPI inflation rate over the last 15 years contains substantial upward bias.
In Japan, since the late 1990s, the policy interest rate has been zero while the inflation rate has also hovered around zero. This "two zeros" phenomenon characterizes the monetary side of the Japanese economy during this period and forms a counterpart to the prolonged stagnation of growth that characterizes the real side. This paper surveys the research that has sought to uncover the causes of the "two zeros" phenomenon.
Regarding the zero interest rate phenomenon, there are two views: one holds that it was triggered by the natural rate of interest (the real interest rate that equates saving and investment) falling to a negative level; the other holds that firms and households, for some reason, came to hold strong deflationary expectations, which became the starting point for a fall into a self-fulfilling deflationary equilibrium. According to estimates, Japan's natural rate of interest has been quite low since the late 1990s and at times fell into negative territory. On the other hand, households expecting prices to fall are a minority. These facts suggest that the negative natural rate hypothesis is the leading explanation for Japan's zero interest rates. However, the possibility that strong yen-appreciation expectations on the part of firms and households triggered a self-fulfilling deflationary equilibrium cannot be ruled out. As for prices, more than 90 percent of firms report that they do not change their selling prices immediately when costs or demand change, indicating price rigidity. Furthermore, analysis using POS data shows that, since the late 1990s, the frequency of price changes has risen while the size of price changes has shrunk. Such small, frequent price changes have made the decline in prices gradual. Behind these small price changes may lie an intensification of mutual monitoring among stores and firms, in the sense that a firm changes its price if its rivals do and keeps it unchanged if they do not.
The "two zeros" phenomenon is closely related to two ideas put forward by Keynes: the "liquidity trap" and "price rigidity." However, little serious research was conducted on the liquidity trap after Keynes, and data-based research into the causes of price rigidity has only gained momentum in the last ten years. These circumstances lie behind the confusion in the debate over the "two zeros" phenomenon and the delay in the policy response. Researchers are called upon to tackle in earnest the homework that Keynes left behind.
If macroeconomic phenomena are divided into a real side and a monetary side, the most important phenomenon on the real side after the collapse of the bubble in the early 1990s was the decline in the growth rate. The decline in growth and the accompanying loss of employment were pressing problems for many people, and researchers conducted a wide range of studies on the "lost decade." By contrast, the monetary side attracted little attention, at least immediately after the bubble burst, and rarely drew researchers' interest. In fact, however, important changes were also under way on the monetary side during this period.
To explore the emergence of power laws in social and economic phenomena, we discuss the mechanism whereby reversal quasi-symmetry and Gibrat’s law lead to power laws with different power-law exponents. Reversal quasi-symmetry is invariance under the exchange of variables in the joint PDF (probability density function). Gibrat’s law means that the conditional PDF of the ratio of variables does not depend on the initial value. Using empirical worldwide data on firm size variables measured in the same year, such as plant assets K, the number of employees L, and sales Y, we observe reversal quasi-symmetry, Gibrat’s law, and power-law distributions. We note that the relations between the power-law exponents and the parameter of reversal quasi-symmetry in the same year are confirmed here for the first time. We consider reversal quasi-symmetry not only between two variables but also among three variables, and claim the following: there is a plane in the three-dimensional space (log K, log L, log Y) with respect to which the joint PDF P_J(K, L, Y) is invariant under the exchange of variables. This plane accurately fits the empirical data (K, L, Y), which follow power-law distributions, and it corresponds to the Cobb-Douglas production function, Y = AK^α L^β, which is frequently hypothesized in economics.
In various phase transitions, it has been universally observed that physical quantities near critical points obey power laws. For instance, in magnetic substances, the specific heat, magnetic dipole density, and magnetic susceptibility follow power laws of heat or magnetic flux. We also know that the cluster-size distribution of the spin follows power laws. The renormalization group approach has been used to establish that these power laws arise as critical phenomena of phase transitions [1].
We propose a new method for estimating the power-law exponents of firm size variables. Our focus is on how to empirically identify a range in which a firm size variable follows a power-law distribution. As is well known, a firm size variable follows a power-law distribution only beyond some threshold. On the other hand, in almost all empirical exercises, the right end part of a distribution deviates from a power law due to finite size effects. We modify the method proposed by Malevergne et al. (2011) so that we can identify both the lower and the upper thresholds and then estimate the power-law exponent using observations only in the range defined by the two thresholds. We apply this new method to various firm size variables, including annual sales, the number of workers, and tangible fixed assets for firms in more than thirty countries.
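To fix ideas, a bare-bones version of estimating a power-law exponent only within a range [x_lo, x_hi] is sketched below, using a simple Hill-type maximum-likelihood estimator on the observations inside the range. The ad hoc threshold choice here is only for illustration; the paper’s modification of Malevergne et al. (2011) selects the two thresholds in a data-driven way.

```python
# A rough sketch: a Hill-type estimate of the power-law exponent using only the
# observations inside [x_lo, x_hi]. The thresholds below are assumed, not estimated.
import numpy as np

def hill_exponent(x, x_lo, x_hi):
    """Hill-type estimate of the power-law exponent for observations in [x_lo, x_hi]."""
    x = np.asarray(x, dtype=float)
    sample = x[(x >= x_lo) & (x <= x_hi)]
    return sample.size / np.sum(np.log(sample / x_lo))

rng = np.random.default_rng(2)
sales = (rng.pareto(1.0, size=100_000) + 1.0) * 1e6     # Zipf-like simulated firm sales
print(hill_exponent(sales, x_lo=1e7, x_hi=1e10))        # exponent should be close to 1
```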
Power-law distributions are frequently observed in social phenomena (e.g., Pareto (1897); Newman (2005); Clauset et al. (2009)). One of the most famous examples in economics is the fact that personal income follows a power law, which was first found by Pareto (1897) about a century ago and is thus referred to as the Pareto distribution. Specifically, the probability that personal income x is above x_0 is given by P_>(x) ∝ x^(−µ) for x > x_0, where µ is referred to as the Pareto exponent or the power-law exponent.
Central banks react even to intraday changes in the exchange rate; however, in most cases, intervention data is available only at a daily frequency. This temporal aggregation makes it difficult to identify the effects of interventions on the exchange rate. We apply the Bayesian MCMC approach to this endogeneity problem. We use “data augmentation” to obtain intraday intervention amounts and estimate the efficacy of interventions using the augmented data. Applying this new method to Japanese data, we find that an intervention of one trillion yen moves the yen/dollar rate by 1.7 percent, which is more than twice as much as the magnitude reported in previous studies applying OLS to daily observations. This shows the quantitative importance of the endogeneity problem due to temporal aggregation.
Are foreign exchange interventions effective? This issue has been debated extensively since the 1980s, but no conclusive consensus has emerged. A key difficulty faced by researchers in answering this question is the endogeneity problem: the exchange rate responds “within the period” to foreign exchange interventions and the central bank reacts “within the period” to fluctuations in the exchange rate. This difficulty would not arise if the central bank responded only slowly to fluctuations in the exchange rate, or if the data sampling interval were sufficiently fine.
In constructing a housing price index, one has to make at least two important choices. The first is the choice among alternative estimation methods. The second is the choice among different data sources of house prices. The choice of the dataset has been regarded as critically important from a practical viewpoint, but has not been discussed much in the literature. This study seeks to fill this gap by comparing the distributions of prices collected at different stages of the house buying/selling process, including (1) asking prices at which properties are initially listed in a magazine, (2) asking prices when an offer for a property is eventually made and the listing is removed from the magazine, (3) contract prices reported by realtors after mortgage approval, and (4) registry prices. These four prices are collected by different parties and recorded in different datasets. We find that there exist substantial differences between the distributions of the four prices, as well as between the distributions of house attributes. However, once quality differences are controlled for, only small differences remain between the different house price distributions. This suggests that prices collected at different stages of the house buying/selling process are still comparable, and therefore useful in constructing a house price index, as long as they are quality adjusted in an appropriate manner.
In constructing a housing price index, one has to make several nontrivial choices. One of them is the choice among alternative estimation methods, such as repeat-sales regression, hedonic regression, and so on. There are numerous papers on this issue, both theoretical and empirical. Shimizu et al. (2010), for example, conduct a statistical comparison of several alternative estimation methods using Japanese data. However, there is another important issue which has not been discussed much in the literature, but has been regarded as critically important from a practical viewpoint: the choice among different data sources for housing prices. There are several types of datasets for housing prices: datasets collected by real estate agencies and associations; datasets provided by mortgage lenders; datasets provided by government departments or institutions; and datasets gathered and provided by newspapers, magazines, and websites. Needless to say, different datasets contain different types of prices, including sellers’ asking prices, transactions prices, valuation prices, and so on.
This paper proposes a method for selecting the functional form of the production function. Firms of every size coexist in the economy, from tiny firms run by a handful of employees to giant corporations employing hundreds of thousands. The firm size distribution describes how many firms of each size exist, and it is known that each of the firm size variables Y (output), K (capital), and L (labor) follows a power-law distribution. This paper focuses on the relationship between two functions, the firm size distribution function and the production function, and proposes a method that uses this relationship to identify the shape of the production function. Specifically, starting from the functional forms of the distributions of K and L observed in the data, we derive the distribution function of Y that would obtain if the production function took a particular form, and compare it with the distribution function of Y observed in the data. Applying this method to 25 countries including Japan, we find that in most countries and industries it is the Cobb-Douglas form that is consistent with the distributions of Y, K, and L. Moreover, the firms in the tail of the distribution of Y, that is, the giant firms, tend to have an exceptionally large Y because their inputs of K and L are exceptionally large; we find no tendency for Y to be exceptionally large because total factor productivity is exceptionally high.
Various functional forms for the firm-level production function, such as Cobb-Douglas and Leontief, have been proposed and are widely used by both micro and macro researchers. For example, in research on macro-level productivity, the Cobb-Douglas production function is widely used to estimate total factor productivity. But why can the relationship between output Y, capital K, and employment L be represented by the specific functional form of Cobb-Douglas? Under what circumstances is that appropriate? Studies that go so far as to examine these questions are limited. In many empirical studies, the practice is simply to try several functional forms and choose among them on the basis of the goodness of fit of the regression.
This paper estimates fiscal policy feedback rules in Japan, the United States, and the United Kingdom for more than a century, allowing for stochastic regime changes. Estimating a Markov-switching model by the Bayesian method, we find the following: First, the Japanese data clearly reject the view that the fiscal policy regime is fixed, i.e., that the Japanese government adopted a Ricardian or a non-Ricardian regime throughout the entire period. Instead, our results indicate a stochastic switch of the debt-GDP ratio between stationary and nonstationary processes, and thus a stochastic switch between Ricardian and non-Ricardian regimes. Second, our simulation exercises using the estimated parameters and transition probabilities do not necessarily reject the possibility that the debt-GDP ratio may be nonstationary even in the long run (i.e., globally nonstationary). Third, the Japanese result is in sharp contrast with the results for the U.S. and the U.K., which indicate that in these countries the government’s fiscal behavior is consistently characterized by Ricardian policy.
Recent studies about the conduct of monetary policy suggest that the fiscal policy regime has important implications for the choice of desirable monetary policy rules, particularly, monetary policy rules in the form of inflation targeting (Sims (2005), Benigno and Woodford (2007)). It seems safe to assume that fiscal policy is characterized as “Ricardian” in the terminology of Woodford (1995), or “passive” in the terminology of Leeper (1991), if the government shows strong fiscal discipline. If this is the case, we can design an optimal monetary policy rule without paying any attention to fiscal policy. However, if the economy is unstable in terms of the fiscal situation, it would be dangerous to choose a monetary policy rule independently of fiscal policy rules. For example, some researchers argue that the recent accumulation of public debt in Japan is evidence of a lack of fiscal discipline on the part of the Japanese government, and that it is possible that government bond market participants may begin to doubt the government’s intention and ability to repay the public debt. If this is the case, we may need to take the future evolution of the fiscal regime into consideration when designing a monetary policy rule.
Why are prices rigid? Arthur Okun explained that because customers regard raising prices when demand increases as unfair, firms and stores, fearing customer anger, do not raise prices. For example, taking advantage of the increased demand for shovels on a snowy day to swap the price tags is unfair. To test whether this fairness hypothesis also applies to online auction markets, this paper analyzes changes in the prices of face masks on the Yahoo! Auctions market during the 2009 H1N1 influenza scare. The rate at which masks were successfully sold (the number of successful auctions divided by the number of listings) rose to more than 80 percent in early May and in the second half of August, indicating that demand was concentrated in those periods. The former is when the first suspected infection case appeared in Japan, and the latter is when the government declared that a full-scale epidemic had begun. In the May phase, sellers raised both the "starting" price (the price at which bidding starts) and the "buy-it-now" price (the price at which a bidder can win the item immediately without an auction). The buy-it-now price in particular was raised substantially relative to the starting price, suggesting an intention to steer final sale prices upward. In the August phase, by contrast, although small increases in starting prices were observed, buy-it-now prices were not raised. The difference between May and August stems from differences in seller attributes: in the May phase sellers were mainly individuals, whereas in the August phase they were mainly firms. Firms, being conscious of their reputation among buyers, can be interpreted as having refrained from raising prices on the back of increased demand. Okun emphasized the importance of distinguishing between customer markets, in which sellers and buyers have long-term relationships, and auction markets, in which they do not, and argued that the fairness hypothesis applies only to the former. Our results show that, from the viewpoint of fairness, online auction markets have properties closer to those of customer markets.
According to a survey of firms conducted by the Research Center for Price Dynamics at Hitotsubashi University in the spring of 2008, 90 percent of firms answered that they do not immediately change their shipment prices in response to fluctuations in demand or costs. Microeconomics teaches that when the demand or supply curve shifts, the equilibrium moves to the new intersection and prices change immediately. In reality, however, firms do not change prices immediately even when the demand and cost environment surrounding them changes. This phenomenon is known as price rigidity or stickiness. Price rigidity is a concept at the core of macroeconomics: it is precisely because prices do not adjust instantaneously that fluctuations in unemployment and capacity utilization arise.
In Japan, since the late 1990s, the policy interest rate has been zero while the inflation rate has also hovered around zero. This "two zeros" phenomenon characterizes the monetary side of the Japanese economy during this period and forms a counterpart to the prolonged stagnation of growth that characterizes the real side. This paper surveys the research that has sought to uncover the causes of the "two zeros" phenomenon.
Regarding the zero interest rate phenomenon, there are two views: one holds that it was triggered by the natural rate of interest (the real interest rate that equates saving and investment) falling to a negative level; the other holds that firms and households, for some reason, came to hold strong deflationary expectations, which became the starting point for a fall into a self-fulfilling deflationary equilibrium. According to estimates, Japan's natural rate of interest has been quite low since the late 1990s and at times fell into negative territory. Expected inflation, on the other hand, varies widely across firms and across households, and it is not the case that everyone expected a sustained decline in prices. These facts suggest that the negative natural rate hypothesis is the leading explanation for Japan's zero interest rates. However, the possibility that strong yen-appreciation expectations on the part of firms and households triggered a self-fulfilling deflationary equilibrium cannot be ruled out.
As for prices, more than 90 percent of firms report that they do not change their selling prices immediately when costs or demand change, indicating price rigidity. Furthermore, analysis using POS data shows that, since the late 1990s, the frequency of price changes has risen while the size of price changes has shrunk. Such small, frequent price changes have made the decline in prices gradual. Behind these small price changes may lie an intensification of mutual monitoring among stores and firms, in the sense that a firm changes its price if its rivals do and keeps it unchanged if they do not.
The "two zeros" phenomenon is closely related to two ideas put forward by Keynes: the "liquidity trap" and "price rigidity." However, little serious research was conducted on the liquidity trap after Keynes, and data-based research into the causes of price rigidity has only gained momentum in the last ten years. These circumstances lie behind the confusion in the debate over the "two zeros" phenomenon and the delay in the policy response. Researchers are called upon to tackle in earnest the homework that Keynes left behind.
If macroeconomic phenomena are divided into a real side and a monetary side, the most important phenomenon on the real side after the collapse of the bubble in the early 1990s was the decline in the growth rate. The decline in growth and the accompanying loss of employment were pressing problems for many people, and researchers conducted a wide range of studies on the "lost decade." By contrast, the monetary side attracted little attention, at least immediately after the bubble burst, and rarely drew researchers' interest. In fact, however, important changes were also under way on the monetary side during this period.
During the collapse of Japan's bubble in the first half of the 1990s, rents barely changed despite the large fall in housing prices. A similar phenomenon was observed in the United States after the collapse of its housing bubble. Why do rents not change? Why do housing prices and rents not move together? To answer these questions, this paper analyzes rent data for about 15,000 rental housing units provided by a major property management company and obtains the following results. First, the share of units whose rent changes in a given year is only about 5 percent. This is one-fourteenth the rate in the United States and one-quarter the rate in Germany, an extremely low figure. Behind this high rigidity lie features specific to the Japanese housing market: tenant turnover is low and rental contracts run for as long as two years, so opportunities to change rents are limited to begin with. More important, however, is that rents are not changed even when opportunities for revision arise, such as tenant turnover or contract renewal, and this greatly lowers the probability of a rent change. At tenant turnover, the same rent as before is applied to 76 percent of units, and at contract renewal rents are left unchanged for 97 percent of units. Second, analysis using the adjustment hazard function approach proposed by Caballero and Engel (2007) shows that whether a unit's rent is changed hardly depends on how far its current rent deviates from the market rent. In other words, rent revision is time-dependent rather than state-dependent and can be described by a Calvo-type model.
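In stylized form, the adjustment hazard approach used here (notation assumed) compares the probability of a rent change conditional on the gap x between a unit's current rent and the market rent:

```latex
\Lambda(x) = \Pr(\text{rent adjusted} \mid x),
\qquad
\text{state dependence: } \Lambda \text{ increasing in } |x|,
\qquad
\text{time dependence (Calvo): } \Lambda(x) \equiv \bar{\lambda}.
```

The finding that the estimated hazard is essentially flat in x is what identifies rent revisions as time-dependent.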
Many advanced economies share a common history in which a sharp rise and subsequent fall in asset prices, centered on housing prices, severely damaged the financial system and led to stagnation in economic activity. Japan and Sweden in the 1990s and the recent financial crisis triggered by the U.S. subprime problem are the most representative examples. Reinhart and Rogoff (2008), comparing long time series of economic data across a large number of countries, show that many common economic phenomena lie behind financial crises. One of these is that asset prices, and real estate prices in particular, rise substantially relative to rents.
We empirically investigate the nonstationarity property of the dollar-yen exchange rate by using an eight-year span of high frequency data. We perform a statistical test of strict stationarity based on the two-sample Kolmogorov-Smirnov test for the absolute price changes, and Pearson’s chi-square test for the number of successive price changes in the same direction, and find statistically significant evidence of nonstationarity. We further study the recurrence intervals between the days in which nonstationarity occurs, and find that the distribution of recurrence intervals is well-approximated by an exponential distribution. Also, we find that the mean conditional recurrence interval 〈T|T0〉 is independent of the previous recurrence interval T0. These findings indicate that the recurrence intervals are characterized by a Poisson process. We interpret this as reflecting the Poisson property regarding the arrival of news.
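The two statistical ingredients can be illustrated with scipy on simulated data: a two-sample Kolmogorov-Smirnov test comparing absolute price changes across two sub-periods, and an exponential fit to the recurrence intervals between days flagged as nonstationary. The data and thresholds below are assumptions for illustration only.

```python
# A minimal sketch (simulated data, not the paper's dataset or test design):
# (1) two-sample KS test on absolute price changes from two days;
# (2) exponential fit to recurrence intervals between "nonstationary" days.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
abs_changes_day1 = np.abs(rng.normal(0, 1.0, 5000))    # |price changes|, day 1
abs_changes_day2 = np.abs(rng.normal(0, 1.3, 5000))    # |price changes|, day 2
ks_stat, p_value = stats.ks_2samp(abs_changes_day1, abs_changes_day2)
print(f"KS test: statistic={ks_stat:.3f}, p-value={p_value:.3g}")

# hypothetical days flagged as nonstationary, and their recurrence intervals
nonstat_days = np.sort(rng.choice(np.arange(2000), size=200, replace=False))
intervals = np.diff(nonstat_days)
loc, scale = stats.expon.fit(intervals, floc=0)
print(f"mean recurrence interval (exponential fit): {scale:.1f} days")
```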
Financial time series data have been extensively investigated using a wide variety of methods in econophysics. These studies tend to assume, explicitly or implicitly, that a time series is stationary, since stationarity is a requirement for most of the mathematical theories underlying time series analysis. However, despite this nearly universal assumption, there are few previous studies that seek to test stationarity in a reliable manner (Toth et al. (2010)).
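The two building blocks of the test above can be sketched in a few lines of Python. The data below are hypothetical placeholders (the actual tick data are not reproduced here); the sketch only illustrates the two-sample Kolmogorov-Smirnov comparison of absolute price changes across subperiods and the exponential fit to recurrence intervals between days flagged as nonstationary.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical absolute price changes for two non-overlapping subperiods.
abs_dx_a = np.abs(rng.normal(0, 1.0, 5000))
abs_dx_b = np.abs(rng.normal(0, 1.3, 5000))   # different scale, i.e., nonstationary

# Two-sample Kolmogorov-Smirnov test of identical distributions.
ks_stat, ks_p = stats.ks_2samp(abs_dx_a, abs_dx_b)
print(f"KS statistic = {ks_stat:.3f}, p-value = {ks_p:.3g}")

# Recurrence intervals between days flagged as nonstationary (hypothetical day indices).
nonstationary_days = np.sort(rng.choice(2000, size=200, replace=False))
intervals = np.diff(nonstationary_days)

# Under a Poisson arrival process, intervals are exponential and the mean conditional
# interval <T|T0> does not depend on the previous interval T0.
loc, scale = stats.expon.fit(intervals, floc=0)
print(f"mean interval = {intervals.mean():.1f}, fitted exponential scale = {scale:.1f}")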
Do indexes of house prices behave differently depending on the estimation method? If so, to what extent? To address these questions, we use a unique dataset that we compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 470,000 listings of housing prices between 1986 and 2008, including the period of the housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indexes, even though the hedonic index is adjusted for structural changes and the repeat sales index is adjusted in the way Case and Shiller suggested. Specifically, the repeat sales measure signals turning points later than the hedonic measure: for example, the hedonic measure of condominium prices bottomed out at the beginning of 2002, while the corresponding repeat sales measure exhibits a reversal only in the spring of 2004. This discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation.
Fluctuations in real estate prices have a substantial impact on economic activity. In Japan, the sharp rise in real estate prices during the latter half of the 1980s and their decline in the early 1990s have led to a decade-long, or even longer, stagnation of the economy. More recently, the rapid rise in housing prices and their reversal in the United States have triggered a global financial crisis. Against this background, having a reliable index that correctly identifies trends in housing prices is of utmost importance.
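For readers unfamiliar with the two estimators compared above, the following is a minimal sketch, on a toy dataset with hypothetical columns, of a hedonic regression with time dummies and a Bailey-Muth-Nourse-style repeat-sales regression. The paper's indexes additionally involve structural-change and Case-Shiller adjustments that are not shown here.

import numpy as np
import pandas as pd

# Hypothetical listings: log price, floor space, age, and a period index 0..T-1.
df = pd.DataFrame({
    "logp":   [3.9, 4.1, 4.0, 4.3, 4.2, 4.5],
    "space":  [55., 70., 60., 70., 55., 60.],
    "age":    [10., 5., 8., 6., 12., 9.],
    "period": [0, 0, 1, 1, 2, 2],
})
T = df["period"].nunique()

# Hedonic index: regress log price on attributes and time dummies.
time_dummies = pd.get_dummies(df["period"], prefix="t", drop_first=True).astype(float)
X = np.column_stack([np.ones(len(df)), df[["space", "age"]].values, time_dummies.values])
beta, *_ = np.linalg.lstsq(X, df["logp"].values, rcond=None)
hedonic_index = np.concatenate([[0.0], beta[3:]])          # log index, base period = 0

# Repeat-sales index: for each unit sold twice, regress the log price change on
# -1/+1 time dummies (Bailey-Muth-Nourse; Case-Shiller adds a GLS weighting step).
pairs = [(0, 3.9, 2, 4.2), (1, 4.1, 2, 4.4)]               # (t_first, logp_first, t_second, logp_second)
D = np.zeros((len(pairs), T - 1))
y = np.zeros(len(pairs))
for i, (t0, p0, t1, p1) in enumerate(pairs):
    if t0 > 0:
        D[i, t0 - 1] = -1.0
    D[i, t1 - 1] = 1.0
    y[i] = p1 - p0
rs_beta, *_ = np.linalg.lstsq(D, y, rcond=None)
repeat_sales_index = np.concatenate([[0.0], rs_beta])

print("hedonic log index:     ", np.round(hedonic_index, 3))
print("repeat-sales log index:", np.round(repeat_sales_index, 3))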
Is the cross-sectional distribution of house prices close to a (log)normal distribution, as is often assumed in empirical studies on house price indexes? How does the distribution evolve over time? To address these questions, we investigate the cross-sectional distribution of house prices in the Greater Tokyo Area. We find that house prices (Pi) are distributed with much fatter tails than a lognormal distribution and that the tail is quite close to that of a power-law distribution. We also find that house sizes (Si) follow an exponential distribution. These findings imply that size-adjusted house prices, defined by lnPi − aSi, should be normally distributed. We find that this is indeed the case for most of the sample period, but not the bubble era, during which the price distribution has a fat upper tail even after adjusting for size. The bubble was concentrated in particular areas in Tokyo, and this is the source of the fat upper tail.
Researchers on house prices typically start their analysis by producing a time series of the mean of prices across different housing units in a particular region by, for example, running a hedonic or repeat-sales regression. In this paper, we pursue an alternative research strategy: we look at the entire distribution of house prices across housing units in a particular region at a particular point of time and then investigate the evolution of such cross-sectional distribution over time. We seek to describe price dynamics in the housing market not merely by changes in the mean but by changes in some key parameters that fully characterize the entire cross-sectional price distribution.
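A minimal sketch of the size adjustment described above, using simulated data in place of the actual listings: estimate the coefficient a in ln Pi = const + a Si, test the normality of the size-adjusted prices ln Pi − a Si, and inspect the upper tail on a log-rank basis. Variable names and parameter values are illustrative only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical cross section: house sizes S (exponential) and prices P.
S = rng.exponential(scale=60.0, size=20000)                 # size in square meters
logP = 0.01 * S + rng.normal(loc=10.0, scale=0.3, size=S.size)
P = np.exp(logP)

# Estimate the size coefficient a in ln P = const + a * S by OLS.
a, const = np.polyfit(S, np.log(P), 1)

# Size-adjusted log prices should be close to normal outside bubble periods.
adj = np.log(P) - a * S
k2, p_normal = stats.normaltest(adj)
print(f"a = {a:.4f}, normality test p-value = {p_normal:.3f}")

# A fat upper tail (power law) would show up as approximate linearity of
# log rank against log price for the largest observations.
tail = np.sort(P)[-1000:]
log_rank = np.log(np.arange(len(tail), 0, -1))
slope, _ = np.polyfit(np.log(tail), log_rank, 1)
print(f"upper-tail log-rank slope (rough Pareto exponent estimate) = {-slope:.2f}")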
We investigate retailers’ price setting behavior using a unique dataset containing by-the-second records of prices offered by closely competing retailers on a major Japanese price comparison website. First, we find that, when the average price of a product across retailers falls rapidly, the frequency of price adjustments increases, and the size of price adjustments becomes larger. Second, we find positive autocorrelation in the frequency of price adjustments, implying that there tends to be clustering where price adjustments occur in succession. In contrast, there is no such autocorrelation in the size of price adjustments. These two findings indicate that the behavior of competing retailers is characterized by state-dependent pricing rather than time-dependent pricing.
Since the seminal study by Bils and Klenow (2004), there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. Using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.
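The three statistics emphasized above, the frequency of price adjustments, their size, and the autocorrelation of adjustment events, can be computed as follows. The daily price series here is simulated; with the actual by-the-second data the same logic would simply be applied at a finer time scale.

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical daily price quotes of one product at one retailer.
prices = pd.Series(10000 + np.cumsum(rng.choice([0, 0, 0, -50, 50], size=365)))

changes = prices.diff().dropna()
adjusted = changes != 0

freq = adjusted.mean()                         # fraction of days with a price change
mean_size = changes[adjusted].abs().mean()     # average absolute size when adjusted
implied_duration = 1.0 / freq                  # mean spell length in days

# Clustering of adjustment events: autocorrelation of the 0/1 adjustment series.
autocorr = adjusted.astype(float).autocorr(lag=1)

print(f"frequency = {freq:.3f}, mean |size| = {mean_size:.1f} yen")
print(f"implied duration = {implied_duration:.1f} days, 1st-order autocorr = {autocorr:.3f}")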
Is the cross-sectional distribution of house prices close to a (log)normal distribution, as is often assumed in empirical studies on house price indexes? How does it evolve over time? What does it look like during periods of housing bubbles? To address these questions, we investigate the cross-sectional distribution of house prices in the Greater Tokyo Area. Using a unique dataset containing individual listings in a widely circulated real estate advertisement magazine from 1986 to 2009, we find the following. First, the house price, Pit, is characterized by a distribution with much fatter tails than a lognormal distribution, and the tail part is quite close to that of a power-law or a Pareto distribution. Second, the size of a house, Si, follows an exponential distribution. These two findings about the distributions of Pit and Si imply that the price distribution conditional on the house size, i.e., Pr(Pit | Si), follows a lognormal distribution. We confirm this by showing that size-adjusted prices indeed follow a lognormal distribution, except for periods of the housing bubble in Tokyo, when the price distribution remains asymmetric and skewed to the right even after controlling for the size effect.
Research on house prices typically starts by producing a time series of the mean of prices across housing units in a particular region by, for example, running a hedonic regression or by adopting a repeat-sales method. In this paper, we propose an alternative research strategy: we look at the entire distribution of house prices across housing units in a particular region at a particular point of time, and then investigate the evolution of such cross-sectional distributions over time. We seek to describe price dynamics in a housing market not merely by changes in the mean but by changes in some key parameters that fully characterize the entire cross-sectional price distribution. Our ultimate goal is to produce a new housing price index based on these key parameters.
We investigate retailers’ price setting behavior, and in particular strategic interaction between retailers, using a unique dataset containing by-the-second records of prices offered by competing retailers on a major Japanese price comparison website. First, we find that, when the average price of a product across retailers falls rapidly, the frequency of price adjustments is high, while the size of adjustments remains largely unchanged. Second, we find a positive autocorrelation in the frequency of price adjustments, implying that there tends to be a clustering where once a price adjustment occurs, such adjustments occur in succession. In contrast, there is no such autocorrelation in the size of price adjustments. These two findings indicate that the behavior of competing retailers is characterized by state-dependent pricing, rather than time-dependent pricing, especially when prices fall rapidly, and that strategic complementarities play an important role when retailers decide to adjust (or not to adjust) their prices.
Since Bils and Klenow’s (2004) seminal study, there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. For example, using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.
Japan and the United States experienced housing bubbles and their subsequent collapses in succession. In this paper, we compare these two bubbles and obtain the following findings.
First, applying the repeat-sales method and the hedonic method, two representative methods for constructing house price indexes, to twenty years of Japanese data, we find that the timing at which prices bottomed out after the collapse of the bubble differs between the two methods. The trough estimated by the repeat-sales method lags the hedonic estimate by 13 months for condominiums and by three months for single-family homes. This delay arises because the repeat-sales method does not properly handle the depreciation of buildings with age. In the United States, the representative house price indices, the S&P/Case-Shiller Home Price Indices, use the repeat-sales method, so the timing of the bottom may likewise be estimated with a delay. With interest growing in when the US housing market will bottom out, such a recognition lag risks increasing uncertainty and delaying the economic recovery.
Second, looking at the relationship between housing demand and house prices in time series data, the two are positively correlated. However, in an analysis using panel data at the prefecture or state level, there is no significant relationship between housing demand and house prices in either Japan or the United States. In this sense, demand factors cannot explain the presence or the size of the bubble across prefectures (states). This suggests that the story in which demographics raise housing demand and thereby push up house prices may not be effective in explaining the price fluctuations in either Japan or the United States.
Third, looking at the co-movement between house prices and rents, we confirm for both Japan and the United States that rents hardly moved at all even as house prices changed substantially during the formation and collapse of the bubble. The background to this is that landlords and tenants form long-term contractual relationships so that both parties can save on various transaction costs. In addition, in Japan the imputed rent of owner-occupied housing is not valued at market prices, which further weakens the co-movement. Because of this lack of co-movement, rents did not rise even as house prices rose during the bubble period, so consumer prices, of which rent is an important component, did not rise either; this delayed the shift toward monetary tightening in both countries. Likewise, after the collapse of the bubble, rents did not fall along with house prices, so consumer prices did not fall, which delayed the shift toward monetary easing. Rents are an important variable linking asset prices with the prices of goods and services, and the precision with which they are measured needs to be improved.
This paper’s objective is to find similarities and differences between the Japanese and US housing markets by comparing Japan’s largest postwar real estate bubbles in the 1980s and U.S. housing bubbles since 2000 that have reportedly caused the worst financial crisis since the 1929 Great Depression. While various points have been made about the housing bubbles, this paper attempts to specify the following points.
Do the indexes of house prices behave differently depending on the estimation methods? If so, to what extent? To address these questions, we use a unique dataset that we have compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 400 thousand listings of housing prices in 1986 to 2008, including the period of housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indexes, even though the hedonic index is adjusted for structural change and the repeat sales index is adjusted in a way Case and Shiller suggested. Specifically, the repeat sales measure tends to exhibit a delayed turn compared with the hedonic measure; for example, the hedonic measure of condominium prices hit bottom at the beginning of 2002, while the corresponding repeat-sales measure exhibits reversal only in the spring of 2004. Such a discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation (age effects).
Fluctuations in real estate prices have substantial impacts on economic activities. In Japan, a sharp rise in real estate prices during the latter half of the 1980s and its decline in the early 1990s has led to a decade-long stagnation of the Japanese economy. More recently, a rapid rise in housing prices and its reversal in the United States have triggered a global financial crisis. In such circumstances, the development of appropriate indexes that allow one to capture changes in real estate prices with precision is extremely important not only for policy makers but also for market participants who are looking for the time when housing prices hit bottom.
This paper proposes a method of measuring, with autocorrelation coefficients, the price stickiness that arises when firms imitate one another's price-setting behavior, and measures its degree using data from an online market. Studies since Bils and Klenow (2004) have used the average time elapsed between one price change and the next as the estimate of price stickiness; for the LCD televisions analyzed in this paper, that value is 1.9 days. Measurement using autocorrelation coefficients, by contrast, shows that price change events exhibit dependence on the past for up to 6 days. In other words, each store revises its price about three times on average before a price adjustment is complete. As a result of imitation among stores, each individual price change becomes smaller, and the time required to complete a price adjustment therefore becomes longer. Previous studies, by ignoring this dependence of price change events on the past, may have underestimated price stickiness.
Since Bils and Klenow (2004), research measuring price stickiness with micro price data has been very active. These studies focus on the fact that prices do not change continuously from moment to moment but are revised infrequently, say once every few weeks or months, and they measure the frequency with which such price change events occur. The main finding is that price change events occur quite frequently. For example, Bils and Klenow (2004), using the raw data underlying the US CPI, report that prices are revised about once every 4.3 months. Nakamura and Steinsson (2008), using the same US CPI raw data, estimate that once sales are taken into account, prices are revised once every 8 to 11 months. Studies of European countries by Dhyne et al. (2006) and of Japan by Higo and Saita (2007) also report that prices are revised roughly once every several months.
From the beginning of 2003 to the spring of 2004, the Japanese monetary authorities conducted large-scale yen-selling/dollar-buying interventions, which John Taylor has labeled the "Great Intervention." This paper examines how this Great Intervention was related to the quantitative easing policy that the Bank of Japan was implementing at the time. First, while about 60 percent of the yen funds supplied to the market by the yen-selling interventions were immediately offset by the Bank of Japan's money market operations, the remaining 40 percent were not offset and remained in the market for some time. This result contrasts with the preceding period, in which almost 100 percent was offset. Second, comparing interventions with other treasury payments, the degree to which the yen funds supplied by interventions were offset by the Bank's operations was lower. This suggests that the Bank of Japan distinguished between interventions and other treasury payments in conducting its money market operations. Third, comparing sterilized and unsterilized interventions, the effect on the exchange rate tends to be stronger for the latter, indicating that even at the zero interest rate, the effect of an intervention on the exchange rate differs depending on whether it is sterilized. This last result, however, depends on how market participants' expectations about sterilization are formulated and is not necessarily robust.
Between 2001 and 2006 the Japanese monetary authorities adopted two important and interesting policies. The first is the quantitative easing policy introduced by the Bank of Japan in March 2001. Because lowering the overnight call rate, until then the Bank's policy rate, to its lower bound of zero failed to provide sufficient stimulus to the economy, the Bank switched its policy variable from the interest rate to the quantity of money as a means of further monetary easing. The quantitative easing policy was maintained until March 2006, when the Japanese economy recovered. Second, Japan's Ministry of Finance conducted large-scale yen-selling interventions in the foreign exchange market from January 2003 to March 2004. Taylor (2006) calls this the "Great Intervention." Interventions during this period took place about once every two business days, with an average of 270 billion yen per intervention day and a total of 35 trillion yen. The Japanese monetary authorities are known for intervening actively, but even so, this frequency and scale are without precedent in other periods.
Are prices sticky due to the presence of strategic complementarity in price setting? If so, to what extent? To address these questions, we investigate retailers’ price setting behavior, and in particular strategic interaction between retailers, using a unique dataset containing by-the-second records of prices offered by retailers on a major Japanese price comparison website. We focus on fluctuations in the lowest price among retailers, rather than the average price, examining how quickly the lowest price is updated in response to changes in marginal costs. First, we find that, when the lowest price falls rapidly, the frequency of changes in the lowest price is high, while the size of downward price adjustments remains largely unchanged. Second, we find a positive autocorrelation in the frequency of changes in the lowest price, and that there tends to be a clustering where once a change in the lowest price occurs, such changes occur in succession. In contrast, there is no such autocorrelation in the size of changes in the lowest price. These findings suggest that retailers imitate each other when deciding to adjust (or not to adjust) their prices, and that the extensive margin plays a much more important role than the intensive margin in such strategic complementarity in price setting.
Since Bils and Klenow’s (2004) seminal study, there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. For example, using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.
Do the indices of house prices and rents behave differently depending on the estimation methods? If so, to what extent? To address these questions, we use a unique dataset that we have compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 400 thousand listings of housing prices and about one million listings of housing rents, both from 1986 to 2008, including the period of housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indices, even though the hedonic index is adjusted for structural change and the repeat sales index is adjusted in a way Case and Shiller suggested. Specifically, the repeat sales measure tends to exhibit a delayed turn compared with the hedonic measure; for example, the hedonic measure of condominium prices hit bottom at the beginning of 2002, while the corresponding repeat-sales measure exhibits reversal only in the spring of 2004. Such a discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation (age effects).
Fluctuations in real estate prices have substantial impacts on economic activities. In Japan, a sharp rise in real estate prices during the latter half of the 1980s and its decline in the early 1990s has led to a decade-long stagnation of the Japanese economy. More recently, a rapid rise in housing prices and its reversal in the United States have triggered a global financial crisis. In such circumstances, the development of appropriate indices that allow one to capture changes in real estate prices with precision is extremely important not only for policy makers but also for market participants who are looking for the time when housing prices hit bottom.
We empirically investigate fluctuations in product prices in online markets by using tick-by-tick price data collected from a Japanese price comparison site, and find some similarities and differences between product and asset prices. The average price of a product across e-retailers behaves almost like a random walk, although the probability of a price increase/decrease is higher conditional on multiple preceding events of price increase/decrease. This is quite similar to the property reported by previous studies about asset prices. However, we fail to find a long memory property in the volatility of product price changes. Also, we find that the price change distribution for product prices is close to an exponential distribution, rather than a power law distribution. These two findings are in sharp contrast with previous results regarding asset prices. We propose an interpretation that these differences may stem from the absence of speculative activities in product markets; namely, e-retailers seldom repeatedly buy and sell a product, unlike traders in asset markets.
In recent years, price comparison sites have attracted the attention of internet users. In these sites, e-retailers update their selling prices every minute, or even every second. Those who visit the sites can compare prices quoted by different e-retailers, thus finding the cheapest one without paying any search costs. E-retailers seek to attract as many customers as possible by offering good prices to them, and this sometimes results in a price war among e-retailers.
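A rough sketch of the two checks described above, using a simulated price path instead of the actual tick-by-tick data: the lag-one autocorrelation of price changes as a simple random-walk diagnostic, and a crude log-likelihood comparison of exponential versus power-law (Pareto) fits to the tail of absolute price changes. The tail cutoff and the Hill-type exponent estimate are illustrative choices, not the paper's procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical minute-by-minute average price of one product across e-retailers.
avg_price = 40000 + np.cumsum(rng.normal(0, 30.0, size=50000))
dp = np.diff(avg_price)

# Random-walk check: price changes should be serially uncorrelated.
rho1 = np.corrcoef(dp[:-1], dp[1:])[0, 1]

# Shape of the change distribution: compare exponential and Pareto fits
# to the largest absolute changes by log-likelihood.
abs_dp = np.abs(dp[dp != 0])
tail = np.sort(abs_dp)[-2000:]
xmin = tail.min()
ll_expon = stats.expon.logpdf(tail - xmin, scale=(tail - xmin).mean() + 1e-12).sum()
alpha = 1.0 + len(tail) / np.log(tail / xmin).sum()        # Hill-type exponent estimate
ll_pareto = stats.pareto.logpdf(tail, b=alpha - 1.0, scale=xmin).sum()

print(f"lag-1 autocorrelation of changes = {rho1:.3f}")
print(f"tail log-likelihood: exponential = {ll_expon:.0f}, power law = {ll_pareto:.0f}")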
Recently, the housing market bubble and its burst have attracted much interest from researchers in various fields, including economics and physics. Economists have regarded a bubble as a disorder in prices. However, this research strategy has overlooked the importance of the volume of transactions. In this paper, we propose a model of bubble bursts that focuses on transaction volume, incorporating a traffic model that represents spontaneous traffic jams. Comparing the model with data from the U.S. housing market, we find that the bursting of a bubble shares many properties with the formation of a traffic jam on a highway. Our results suggest that transaction volume could be a driving force of the bursting phenomenon.
Fluctuations in real estate prices have substantial impacts on economic activities. For example, land prices in Japan exhibited a sharp rise in the latter half of the 1980s and a rapid reversal in the early 1990s. This large swing led to a significant deterioration of the balance sheets of firms, especially those of financial firms, thereby causing a decade-long stagnation of the Japanese economy, known as Japan's "lost decade". A more recent example is the U.S. housing market bubble, which started somewhere around 2000 and is now in the middle of collapsing. This has already caused substantial damage to financial systems in the U.S. and the euro area, and it may spread worldwide as in the case of the Great Depression in the 1920s and 30s.
We empirically investigate the firm growth model proposed by Buldyrev et al. by using a unique dataset that contains the daily sales of more than 200 thousand products, collected from about 200 supermarkets in Japan over the last 20 years. We find that the empirical firm growth distribution is characterized by a Laplace distribution at the center and power laws at the tails, as predicted by the model. However, some of these characteristics disappear once we randomly reshuffle products across firms, implying that the shape of the empirical distribution is not produced in the way described by the model. Our simulation results suggest that the shape of the empirical distribution stems mainly from the presence of a relationship between the size of a product and its growth rate.
Why do firms exist? What determines a firm’s boundaries? These questions have been repeatedly addressed by social scientists since Adam Smith argued more than two centuries ago that division of labor or specialization is a key to the improvement of labor productivity.
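The reshuffling exercise mentioned above can be illustrated as follows, with a simulated firm-product sales panel standing in for the scanner data: fit a Laplace distribution to firm growth rates, then randomly reassign products to firms and compare the tail heaviness of the resulting growth distribution. All parameter values are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical sales of products (columns) held by firms (rows) in two years.
n_firms, n_products = 500, 40
sales_t0 = rng.lognormal(mean=3.0, sigma=1.0, size=(n_firms, n_products))
sales_t1 = sales_t0 * rng.lognormal(mean=0.0, sigma=0.3, size=(n_firms, n_products))

def growth_rates(s0, s1):
    # Log growth of total firm sales between the two years.
    return np.log(s1.sum(axis=1)) - np.log(s0.sum(axis=1))

g = growth_rates(sales_t0, sales_t1)
loc, scale = stats.laplace.fit(g)
print(f"Laplace fit of firm growth: loc = {loc:.3f}, scale = {scale:.3f}")

# Reshuffle test: randomly reassign products (keeping each product's own growth)
# to firms and recompute the firm growth distribution.
perm = rng.permutation(n_firms * n_products)
g_shuffled = growth_rates(
    sales_t0.reshape(-1)[perm].reshape(n_firms, n_products),
    sales_t1.reshape(-1)[perm].reshape(n_firms, n_products),
)
print(f"kurtosis: actual = {stats.kurtosis(g):.2f}, reshuffled = {stats.kurtosis(g_shuffled):.2f}")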
過去四半世紀を振り返ると,資産価格は 1980 年代後半に大幅に上昇し 90 年代前半に急落するという大きな変動を示した。ところが消費者物価や GDP デフレータに代表される財サービス価格はそれほど変化していない。資産価格と財サービス価格の連動性の欠如がこの時期の特徴であり,それが金融政策などの運営を難しくした。本稿ではその原因を探るため資産価格と財サービス価格の重要な結節点である家賃に焦点を絞り,住宅の売買価格との連動性を調べた。その結果,日本の家賃には米国の約 3 倍の粘着性があり,それが住宅価格との裁定を妨げていることがわかった。仮に家賃の粘着性が米国並みであったとすれば,消費者物価上昇率はバブル期には実績値に比べ約 1%高く,バブル崩壊期には約 1%低くなっていたと試算できる。バブル期における金融引き締めへの転換,バブル崩壊期における金融緩和への転換が早まっていた可能性がある。
日本と米国は相次いで住宅バブルとその崩壊を経験した。本稿ではこの 2 つのバブルを比較し以下のファインディングを得た。
第 1 に,住宅価格の代表的な計測手法である「リピートセールス法」と「ヘドニック法」をわが国の過去 20 年間のデータに適用した結果,バブル崩壊後の底入れの時期が 2 つの方法で異なることがわかった。リピートセールス法で推計される底入れ時期はヘドニック法の推計に比べマンションで 13 ヶ月,戸建てで 3 ヶ月遅れている。この遅れはリピートセールス法が建物の築年減価を適切に処理できていないために生じるものである。米国ではS&P/Case-Shiller 指数が代表的な住宅価格指数であるがこれはリピートセールス法を用いており,底入れ時期を遅く見積もる可能性がある。米国住宅市場の底入れの時期に関心が集まっている状況下,こうした認知ラグの存在は不確実性を増加させ経済の回復を遅らせる危険がある。
第 2 に,住宅需要と住宅価格の関係を時系列データでみると両者の間には正の相関がある。しかし県あるいは州単位のデータを用いてクロスセクションでみると,日米ともに両者の間に有意な相関は見られない。この意味で,バブルの県(州)別の有無または大小を需要要因で説明することはできない。人口動態が住宅需要に影響を及ぼしそれが住宅価格を押し上げるというストーリーは少なくともバブル期の価格上昇を説明する上では有効でない可能性を示唆している。
第 3 に,住宅価格と家賃の連動性をみると,バブルの形成・崩壊の過程で住宅価格が大きく変動しても家賃はほとんど動かないという現象が日米ともに確認できる。この背景には,家主と店子の双方が様々な取引コストを節約するために長期的な契約関係を結んでいることが挙げられる。また,日本については,持ち家の帰属家賃が市場価格で評価されておらず,それが連動性を弱めている面もある。連動性の欠如は,バブル期に住宅価格が上昇しても家賃が上昇しないためその家賃を重要な要素として含む消費者物価が上昇しないという現象を日米で生み,それが金融引き締めへの転換を遅らせた。また,バブル崩壊後は,住宅価格が下落しても家賃が連動しないため消費者物価が下落しないという現象が見られ,これは金融緩和への転換を遅らせる原因となった。家賃は資産価格と財サービス価格の結節点となる重要な変数であり,その計測精度を高める必要がある。
本稿の目的は,戦後のもっとも大きな不動産バブルといわれた 1980 年代の日本と,1929年の世界大恐慌以来の金融危機をもたらした原因であるといわれる2000年以降の米国の住宅バブルを比較することで,その両市場の共通点と相違点を浮き彫りにすることである。住宅バブルに関して様々なことが指摘される中で,本稿では,特に,以下の点を明らかにすることを目的とした。
Central banks react even to intraday changes in the exchange rate; however, in most cases, intervention data is available only at a daily frequency. This temporal aggregation makes it difficult to identify the effects of interventions on the exchange rate. We propose a new method based on Markov Chain Monte Carlo simulations to cope with this endogeneity problem: We use “data augmentation” to obtain intraday intervention amounts and then estimate the efficacy of interventions using the augmented data. Applying this method to Japanese data, we find that an intervention of one trillion yen moves the yen/dollar rate by 1.7 percent, which is more than twice as large as the magnitude reported in previous studies applying OLS to daily observations. This shows the quantitative importance of the endogeneity problem due to temporal aggregation.
Are foreign exchange interventions effective? This issue was debated extensively in the 1980s and 1990s, but no conclusive consensus has emerged. A key difficulty faced by researchers in answering this question is the endogeneity problem: the exchange rate responds "within the period" to central bank interventions and the central bank reacts "within the period" to fluctuations in the exchange rate. As an example, consider the case of Japan. The monetary authorities of Japan, which are known to be among the most active interveners, started to disclose intervention data in July 2001, and this has rekindled researchers' interest in the effectiveness of interventions. However, the information disclosed is limited: only the total amount of interventions on a day is released to the public at the end of a quarter, and no detailed information, such as on the time of the intervention(s), the number of interventions over the course of the day, and the market(s) (Tokyo, London, or New York) in which the intervention(s) were executed, is disclosed. Most importantly, the low frequency of the disclosed data poses a serious problem for researchers because it is well known that the Japanese monetary authorities often react to intraday fluctuations in the exchange rate.
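The paper's MCMC data-augmentation estimator is not reproduced here, but the temporal aggregation problem it addresses can be illustrated with a small simulation under assumed parameter values: when the authority reacts within the day to exchange rate movements, OLS on daily totals understates the true intervention effect.

import numpy as np

rng = np.random.default_rng(5)
true_effect = 1.7          # assumed percent move of the yen/dollar rate per trillion yen
reaction = -0.5            # assumed intraday intervention response to the morning move

days = 2000
d_rate = np.zeros(days)
intervention = np.zeros(days)
for t in range(days):
    shock1 = rng.normal(0, 1.0)                                  # morning exchange rate shock
    inter = max(0.0, reaction * shock1 + rng.normal(0, 0.2))     # authority reacts within the day
    shock2 = rng.normal(0, 1.0)                                  # afternoon shock
    d_rate[t] = shock1 + true_effect * inter + shock2            # observed daily change
    intervention[t] = inter                                      # only the daily total is observed

# OLS of the daily change on the daily intervention total is biased toward zero
# because interventions respond to the same day's exchange rate movements.
cov = np.cov(intervention, d_rate)
ols = cov[0, 1] / cov[0, 0]
print(f"true effect = {true_effect}, OLS on daily data = {ols:.2f}")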
Why was the Japanese consumer price index for rents so stable even during the period of the housing bubble in the 1980s? In addressing this question, we start from the analysis of microeconomic rigidity and then investigate its implications for aggregate price dynamics. We find that ninety percent of the units in our dataset had no change in rent in a given year, indicating that rent stickiness is three times as high as in the US. We also find that the probability of rent adjustment depends little on the deviation of the actual rent from its target level, suggesting that rent adjustments are not state dependent but time dependent. These two results indicate that both the intensive and extensive margins of rent adjustment are very small, thus yielding a slow response of the CPI to aggregate shocks. We show that the CPI inflation rate would have been higher by one percentage point during the bubble period, and lower by more than one percentage point during the period of the bubble's collapse, if Japanese housing rents were as flexible as those in the US.
Fluctuations in real estate prices have substantial impacts on economic activities. For example, land and house prices in Japan exhibited a sharp rise in the latter half of the 1980s, and a rapid reversal in the early 1990s. This wild swing led to a significant deterioration of the balance sheets of firms, especially those of financial firms, thereby causing a decade-long stagnation of the economy. Another recent example is the U.S. housing market bubble, which started somewhere around 2000 and is now in the middle of collapsing. These recent episodes have rekindled researchers' interest in housing bubbles.
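A minimal sketch of the kind of adjustment-hazard diagnostic described above, on simulated unit-year data: group observations by the absolute gap between the actual rent and its (hypothetical) target, and compare the probability of a rent change across groups. A flat profile across groups is consistent with time-dependent, Calvo-type adjustment; an upward-sloping profile would indicate state dependence.

import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

# Hypothetical unit-year panel: log gap between the actual rent and its market
# target, and whether the rent was changed that year.
gap = rng.normal(0, 0.10, size=50000)
changed = rng.random(gap.size) < 0.05          # Calvo-like: constant probability, no gap dependence

df = pd.DataFrame({"gap": gap, "changed": changed})

# Empirical adjustment hazard: probability of a rent change by decile of |gap|.
df["gap_bin"] = pd.qcut(df["gap"].abs(), 10)
hazard = df.groupby("gap_bin", observed=True)["changed"].mean()
print(hazard.round(3))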
In this paper, we construct quarterly data on government tax revenue and quarterly output elasticities of tax revenue, use them to estimate a structural VAR model, and measure Japan's fiscal multipliers. The analysis confirms that the fiscal multipliers have declined markedly since the mid-1980s: in the pre-bubble period (1965-86), shocks to government spending and taxes had significant effects on output, whereas in the subsequent period (1987-2004) they have had almost no effect. The decline in fiscal multipliers since the 1980s has, however, also been observed in the United States, the United Kingdom, and elsewhere, and is thus not a phenomenon unique to Japan.
Has the fiscal multiplier declined? If so, why? These have been important questions in the fiscal policy debate since the collapse of the bubble, yet even now it is hard to say that a consensus has been reached. For example, Ihori, Nakazato, and Kawade (2002) argue that the fiscal multiplier declined in the 1990s, whereas Hori and Ito (2002) find no evidence of such a decline.
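A heavily simplified sketch of the structural VAR exercise, using simulated quarterly series and a plain Cholesky ordering in place of the paper's identification based on quarterly output elasticities of tax revenue; it is meant only to show the mechanics of estimating a VAR and reading off the output response to a spending shock.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)

# Hypothetical quarterly series: log real government spending, taxes, and GDP.
T = 160
data = pd.DataFrame(np.cumsum(rng.normal(0, 0.01, size=(T, 3)), axis=0),
                    columns=["gov", "tax", "gdp"])

# Reduced-form VAR with four lags; identification here is a simple Cholesky
# ordering with government spending first (Blanchard-Perotti-style SVARs add
# external information such as tax elasticities, which is omitted here).
res = VAR(data).fit(4)
irf = res.irf(12)

# Orthogonalized response of GDP (variable index 2) to a spending shock (shock index 0).
gdp_response = irf.orth_irfs[:, 2, 0]
print("cumulative GDP response to a spending shock:", round(float(gdp_response.cumsum()[-1]), 4))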
This paper reports the results of a survey on price-setting behavior covering 123 Japanese firms that produce and ship food and daily necessities, with the following findings. First, about 90 percent of firms do not change their shipment prices immediately when costs or demand change; in this sense, prices are sticky. As reasons, roughly 30 percent of firms each cite the cost of collecting and processing information on costs and demand, and strategic complementarity; these are the main sources of stickiness. Physical costs of changing prices, such as menu costs, are not important. Second, regarding the frequency of price changes, more than 30 percent of firms have never changed their shipment prices over the past 10 years, indicating strong stickiness, which is high even by international comparison. Third, matching the survey responses with POS data, we find no statistically significant co-movement of retail prices with changes in manufacturers' shipment prices. Moreover, the frequency of retail price changes greatly exceeds that of shipment price changes. These results suggest that most of the variation in retail prices reflects the behavior of distributors rather than that of manufacturers.
Recent studies measuring price stickiness use a simple method: counting how many times prices are revised over a given period, using the raw data underlying consumer price statistics or supermarket POS data. For example, Bils and Klenow (2004), using the raw data of the US consumer price statistics, measure the frequency of price revisions and find that prices are revised roughly once every four months on average. This figure is far below the conventional wisdom in macroeconomics of roughly one price revision per year. Nakamura and Steinsson (2008), on the other hand, argue that excluding sales, prices are revised roughly once every 11 months, so that stickiness is close to the conventional figure.
Using tick-by-tick data on the dollar-yen and euro-dollar exchange rates recorded on an actual transaction platform, we find that a "run"—a sequence of continuous increases or decreases in deal prices over the past several ticks—does carry some predictive information about the direction of the next price movement. Deal price movements, which are consistent with order flows, tend to continue a run once it has started; that is, the conditional probability that the deal price moves in the same direction as in the last several consecutive ticks is higher than 0.5. Quote prices, however, show no such tendency. Hence, a random walk hypothesis is rejected by a simple run test using the tick-by-tick data. In addition, a longer continuous increase in the price tends to be followed by a larger reversal. These findings suggest that market participants who have access to real-time, tick-by-tick transaction data may have an advantage in predicting exchange rate movements. The findings also lend support to momentum trading strategies.
The foreign exchange market remains sleepless around the clock. Someone is trading somewhere all the time—24 hours a day, 7 days a week, 365 days a year. Analyzing the behavior of the exchange rate has become a popular sport of international finance researchers, while global financial institutions are spending millions of dollars to build real-time computer trading systems (program trading). High-frequency, reliable data are the key in finding robust results for good research for academics or profitable schemes for businesses.
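The run test described above reduces to computing the conditional probability that the next tick moves in the same direction as a run of a given length. A minimal version on simulated tick directions (for which the probability should stay near 0.5) is:

import numpy as np

rng = np.random.default_rng(8)

# Hypothetical tick-by-tick deal price directions: +1 for an uptick, -1 for a downtick.
signs = rng.choice([-1, 1], size=200000)

def continuation_probability(signs, run_length):
    """P(next tick moves in the same direction | the last run_length ticks shared a direction)."""
    # run[i] = number of consecutive same-direction moves ending at tick i
    run = np.ones(len(signs), dtype=int)
    for i in range(1, len(signs)):
        run[i] = run[i - 1] + 1 if signs[i] == signs[i - 1] else 1
    cond = run[:-1] >= run_length
    return (signs[1:][cond] == signs[:-1][cond]).mean()

for k in (1, 2, 3, 4):
    print(f"run length >= {k}: continuation probability = {continuation_probability(signs, k):.3f}")
# Under a random walk these probabilities stay at 0.5; values above 0.5 for deal
# prices would indicate the predictability of runs reported in the paper.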
This paper analyzes the price-setting behavior of online stores and the purchasing behavior of consumers using a new dataset that records, second by second, the prices posted by virtual stores on the price comparison site Kakaku.com and consumers' click responses to them. The main findings are as follows. First, even when a store's price rank (how cheap its price is at that point in time relative to other stores) is not first, the probability that it receives clicks is not zero. The click probability does fall as the price rank falls, however, and the relationship between the price rank and the (log) click probability is close to linear. This linear relationship suggests that consumers have preferences over stores and choose the store offering the lowest price within the set of stores they like. Second, the average of the prices posted by the stores follows a random walk with drift, indicating that most price fluctuations are driven by random changes in the stores' inventories. In phases of rapidly falling prices, however, there are deviations from the random walk, suggesting that strategic complementarity in the stores' pricing can trigger a collapse in prices.
The prediction that the spread of the Internet would fundamentally change our lives appears to be rapidly losing support. The behavior of consumers and firms has indeed changed in the online world and will continue to change, but the change has not been as large as was imagined when the Internet first spread.
This paper investigates implications of the menu cost hypothesis for the distribution of price changes using daily scanner data covering all products sold at about 200 Japanese supermarkets from 1988 to 2005. First, we find that small price changes are indeed rare. The price change distribution for products with sticky prices has a dent in the vicinity of zero, while no such dent is observed for products with flexible prices. Second, we find that the longer the time that has passed since the last price change, the higher is the probability that a large price change occurs. Combined with the fact that the price change probability is a decreasing function of price duration, this means that although the price change probability decreases as price duration increases, once a price adjustment occurs, the magnitude of such an adjustment is large. Third, while the price change distribution is symmetric on a short time scale, it is asymmetric on a long time scale, with the probability of a price decrease being significantly larger than the probability of a price increase. This asymmetry seems to be related to the deflation that the Japanese economy has experienced over the last five years.
The menu cost hypothesis has several important implications: those relating to the probability of the occurrence of a price change, and those relating to the distribution of price changes conditional on the occurrence of a change. The purpose of this paper is to examine the latter implications using daily scanner data covering all products sold at about 200 Japanese supermarkets from 1988 to 2005.
Using a dataset covering about 820,000 Japanese corporations (roughly one-third of all corporations), this paper examines how the number of a firm's ownership and transaction relationships is related to its size. First, the distribution of the number of relationships has a long tail: the top 1 percent of firms by number of relationships hold about 50 percent of all relationships, so relationships are heavily concentrated. Second, when we extract only the hub firms with many transaction relationships and look at the relationships among them, links are concentrated on a small number of super-hub firms, and the concentration is even more pronounced. Third, larger firms have more relationships, and overall the two are roughly proportional. Firms that already have many relationships, however, do not expand their number of relationships in proportion to their growth in size, which can be interpreted as firms economizing on the cost of maintaining relationships.
Firms' activities rest on various types of interrelationships. The first is the flow of goods: a firm purchases raw materials from upstream firms while selling its output to downstream firms as intermediate inputs or to distributors as final goods. Second, the flow of goods is often accompanied by credit relationships between firms. The issuance of promissory notes to defer settlement of a transaction is a typical example, but more generally, whenever the timing of the delivery of goods and the delivery of payment differ, interfirm credit arises; such interfirm credit is, moreover, often replaced by bank credit. Third, there are ownership relationships: not only do parent companies establish subsidiaries, but firms with close trading relationships often hold equity stakes in one another. Fourth, there are personal ties such as the exchange of board members.
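The two relationships highlighted above, the concentration of links among the largest hubs and the less-than-proportional scaling of links with firm size, can be summarized with two statistics. The firm-level data below are simulated under assumed parameters; only the computations are meant to carry over to the actual dataset.

import numpy as np

rng = np.random.default_rng(9)

n_firms = 100_000
# Hypothetical firm size (e.g., sales) and number of counterparts, with the number
# of counterparts growing less than proportionally with size (exponent 0.8 assumed).
size = rng.lognormal(10, 1.5, n_firms)
indegree = np.maximum(1, np.round(0.01 * size ** 0.8 * rng.lognormal(0, 0.5, n_firms))).astype(int)

# Share of all links held by the top 1 percent of firms by number of counterparts.
order = np.argsort(indegree)[::-1]
top1 = order[: n_firms // 100]
print(f"share of links held by the top 1% of firms: {indegree[top1].sum() / indegree.sum():.2f}")

# Elasticity of the number of counterparts with respect to firm size
# (a slope below one means links grow less than proportionally with size).
slope, _ = np.polyfit(np.log(size), np.log(indegree), 1)
print(f"log-log slope of counterparts on firm size: {slope:.2f}")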
From the beginning of 2003 to the spring of 2004, Japan's monetary authorities conducted large-scale yen-selling/dollar-buying foreign exchange operations in what Taylor (2006) has labeled the “Great Intervention.” The purpose of the present paper is to empirically examine the relationship between this “Great Intervention” and the quantitative easing policy the Bank of Japan (BOJ) was pursuing at that time. Using daily data of the amount of foreign exchange interventions and current account balances at the BOJ, our analysis arrives at the following conclusions. First, while about 60 percent of the yen funds supplied to the market by yen-selling interventions were immediately offset by the BOJ’s monetary operations, the remaining 40 percent were not offset and remained in the market for some time; this is in contrast with the preceding period, when almost 100 percent were offset. Second, comparing foreign exchange interventions and other government payments, the extent to which the funds were offset by the BOJ was much smaller in the case of foreign exchange interventions, and the funds also remained in the market longer. This finding suggests that the BOJ differentiated between and responded differently to foreign exchange interventions and other government payments. Third, the majority of financing bills issued to cover intervention funds were purchased by the BOJ from the market immediately after they were issued. For that reason, no substantial decrease in current account balances linked with the issuance of FBs could be observed. These three findings indicate that it is highly likely that the BOJ, in order to implement its policy target of maintaining current account balances at a high level, intentionally did not sterilize yen-selling/dollar-buying interventions.
During the period from 2001 to 2006, the Japanese monetary authorities pursued two important and very interesting policies. The first of these is the quantitative easing policy introduced by the Bank of Japan (BOJ) in March 2001. This step was motivated by the fact that although the overnight call rate, the BOJ's policy rate, had reached its lower bound at zero percent, it failed to sufficiently stimulate the economy. To achieve further monetary easing, the BOJ therefore changed the policy variable from the interest rate to the money supply. The quantitative easing policy remained in place until March 2006, by which time the Japanese economy had recovered. The second major policy during this period was the intervention in the foreign exchange market by Japan's Ministry of Finance (MOF), which engaged in large-scale selling of the yen from January 2003 to March 2004. Taylor (2006) has called this the “Great Intervention.” The interventions during this period occurred at a frequency of once every two business days, with the amount involved per daily intervention averaging ¥286 billion and the total reaching ¥35 trillion. Even for Japan's monetary authorities, which are known for their active interventionism, this frequency as well as the sums involved were unprecedented.
In this study, we investigate interfirm networks by employing a unique dataset containing information on more than 800,000 Japanese firms, about half of all corporate firms currently operating in Japan. First, we find that the number of relationships, measured by the indegree, has a fat tail distribution, implying that there exist “hub” firms with a large number of relationships. Moreover, the indegree distribution for those hub firms also exhibits a fat tail, suggesting the existence of “super-hub” firms. Second, we find that larger firms tend to have more counterparts, but the relationship between firms’ size and the number of their counterparts is not necessarily proportional; firms that already have a large number of counterparts tend to grow without proportionately expanding it.
When examining interfirm networks, it comes as little surprise to find that larger firms tend to have more interfirm relationships than smaller firms. For example, Toyota purchases intermediate products and raw materials from a large number of firms, located inside and outside the country, and sells final products to a large number of customers; it has close relationships with numerous commercial and investment banks; it also has a large number of affiliated firms. Somewhat surprisingly, however, we do not know much about the statistical relationship between the size of a firm and the number of its relationships. The main purpose of this paper is to take a closer look at the linkage between the two variables.
This paper surveys the research on liquidity traps since Krugman (1998) and organizes the insights obtained. First, recent studies focus on the phenomenon in which the non-negativity constraint on the very short-term interest rate becomes binding, which differs from Keynes's definition, which focuses on the interest rate on perpetual bonds. Keynes's trap is a permanent one, in which the very short-term rate remains at its bound into the indefinite future, whereas recent studies deal with a temporary trap. Second, many of the prescriptions proposed for a temporary trap are standard in light of modern monetary policy theory: the monetary policy rule that maximizes welfare under a liquidity trap can be expressed as inflation targeting in a broad sense. The liquidity trap tends to be regarded as a peculiar phenomenon because of its odd appearance, but at least as long as the trap is temporary, the prescription for it is surprisingly orthodox.
When Krugman (1998), the pioneering paper on liquidity traps, was written, an EconLit search for the term "liquidity traps" returned only 21 papers published since 1975 (Krugman (1998, p.138)). Krugman attributed this lack of interest to the view, widespread among macroeconomists, that "a liquidity trap cannot happen, did not happen, and will not happen again." Performing the same search now (July 2006) returns more than 160 papers (Figure 1), showing that macroeconomists' lack of interest has been rapidly corrected. This is, needless to say, a consequence of the Japanese economy demonstrating that a liquidity trap can in fact occur. The purpose of this paper is to survey the research on liquidity traps since Krugman (1998) and to consider what new insights it has produced.
This paper characterizes optimal monetary policy in an economy with the zero interest rate bound and endogenous capital formation. First, we show that, given an adverse shock to productivity growth, the natural rate of interest is less likely to fall below zero in an economy with endogenous capital than the one with fixed capital. However, our numerical exercises show that, unless investment adjustment costs are very close to zero, we still have a negative natural rate of interest for large shocks to productivity growth. Second, the optimal commitment solution is characterized by a negative interest rate gap (i.e., real interest rate is lower than its natural rate counterpart) before and after the shock periods during which the natural rate of interest falls below zero. The negative interest rate gap after the shock periods represents the history dependence property, while the negative interest rate gap before the shock periods emerges because the central bank seeks to increase capital stock before the shock periods, so as to avoid a decline in capital stock after the shock periods, which would otherwise occur due to a substantial decline in investment during the shock periods. The latter property may be seen as central bank’s preemptive action against future binding shocks, which is entirely absent in fixed capital models. We also show that the targeting rule to implement the commitment solution is characterized by history-dependent inflation-forecast targeting. Third, a central bank governor without sophisticated commitment technology tends to resort to preemptive action more than the one with it. The governor without commitment technology controls natural rates of consumption, output, and so on in the future periods, by changing capital stock today through monetary policy.
Recent literature on optimal monetary policy with the zero interest rate bound has assumed that capital stock is exogenously given. This assumption of fixed capital stock has some important implications. First, the natural rate of interest is exogenously determined simply due to the lack of endogenous state variables: namely, it is affected by exogenous factors such as changes in technology and preference, but not by changes in endogenous variables. For example, Jung et al. (2005) and Eggertsson and Woodford (2003a, b), among others, start their analysis by assuming that the natural rate of interest is an exogenous process, which is a deterministic or a two-state Markov process. More recent studies such as Adam and Billi (2004a, b) and Nakov (2005) extend the analysis to a fully stochastic environment, but continue to assume that the natural rate process is exogenously given. These existing studies typically consider a situation in which the natural rate of interest, whether it is a deterministic or a stochastic process, declines to a negative level entirely due to exogenous shocks, and conduct an exercise of characterizing optimal monetary policy responses to the shock, as well as monetary policy rules to implement the optimal outcome.
This paper estimates fiscal policy feedback rules in Japan, the United States, and the United Kingdom, allowing for stochastic regime changes. Using Markov-switching regression methods, we find that the Japanese data clearly reject the view that the fiscal policy regime is fixed, i.e., that the Japanese government has been adopting either Ricardian or non-Ricardian policy at all times. Instead, our results indicate that fiscal policy regimes evolve over time in a stochastic manner. This is in sharp contrast with the U.S. and U.K. results, in which the government's fiscal behavior is consistently characterized by Ricardian policy.
Recent studies on the conduct of monetary policy argue that the fiscal policy regime has important implications for the choice of desirable monetary policy rules, in particular monetary policy rules in the form of inflation targeting (Sims (2005), Benigno and Woodford (2006)). Needless to say, we can safely believe that the fiscal regime during peacetime is characterized as "Ricardian" in the terminology of Woodford (1995), or "passive" in the terminology of Leeper (1991). In such a case, we can design an optimal monetary policy rule without paying any attention to fiscal regimes. However, if the economy is unstable in terms of its fiscal situation, it would be dangerous to choose a monetary policy rule independently of the fiscal policy regime. For example, some researchers argue that the rapid accumulation of public debt in Japan is evidence of a lack of fiscal discipline on the part of the Japanese government. If this is the case, participants in the government bond market might come to have doubts about the government's intention to repay public debt. In such an environment, it would not be desirable to design a monetary policy rule without paying any attention to the future evolution of the fiscal policy regime. The purpose of this paper is to estimate fiscal policy feedback rules in Japan, the United States, and the United Kingdom over more than a century, so as to acquire a deeper understanding of the evolution of fiscal policy regimes.
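A minimal sketch of a Markov-switching fiscal feedback rule in the spirit described above, using statsmodels' MarkovRegression on simulated data: the primary surplus responds to lagged debt with a coefficient that switches between a regime with a positive response and one with no response. Variable construction and parameter values are illustrative only.

import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(10)

# Hypothetical annual data: debt and primary surplus, both as ratios to GDP.
T = 120
debt = np.cumsum(rng.normal(0.5, 1.0, T)) + 50.0
regime = (np.arange(T) % 60) < 30                       # alternating regimes in the simulated data
beta = np.where(regime, 0.05, 0.0)                      # response of the surplus to lagged debt
debt_lag = debt[:-1]
surplus = -1.0 + beta[1:] * debt_lag + rng.normal(0, 0.5, T - 1)

# Two-regime Markov-switching regression of the surplus on lagged debt.
mod = MarkovRegression(endog=surplus, k_regimes=2,
                       exog=debt_lag.reshape(-1, 1), switching_variance=False)
res = mod.fit()
print(res.params.round(3))

# Smoothed probabilities of each regime (columns) for the last five years.
probs = np.asarray(res.smoothed_marginal_probabilities)
print(probs[-5:].round(2))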
This paper presents a model with broad liquidity services to discuss the consequences of massive money injection in an economy with the zero interest rate bound. We incorporate Goodfriend's (2000) idea of broad liquidity services into the model by allowing the amounts of bonds with various maturities held by a household to enter its utility function. We show that the satiation of money (or the zero marginal utility of money) is not a necessary condition for the one-period interest rate to reach the zero lower bound; instead, we present a weaker necessary condition that the marginal liquidity service provided by money coincides with the marginal liquidity service provided by the one-period bonds, both of which are not necessarily equal to zero. This result implies that massive money injection would have some influence on the equilibrium of the economy even if it does not alter the private sector's expectations about future monetary policy. Our empirical results indicate that forward interest rates started to decline relative to the corresponding futures rates just after March 2001, when the Bank of Japan started its quantitative monetary easing policy, and that the forward and futures spread never closed until the policy ended in March 2006. We argue that these findings are not easy to explain with a model without broad liquidity services.
Recent research on optimal monetary policy in an economy with the zero interest rate bound has shown the importance of a central bank's commitment regarding future monetary policy (Woodford (1999), Jung et al. (2005), Eggertsson and Woodford (2003), among others). In a usual environment, a central bank conducts monetary easing by lowering the current overnight interest rate through an additional injection of money into the market. However, this does not work well once the overnight interest rate reaches the zero lower bound. Further monetary easing in such a situation can be implemented only through the central bank's announcements about the future path of the overnight interest rate. Specifically, it has been shown that the optimal monetary policy rule is characterized by "history dependence" in the sense that the central bank commits itself to continuing monetary easing even after the economy returns to a normal situation.