Working Papers: FY2010

  • Fiscal Policy Switching in Japan, the U.S., and the U.K.

    Abstract

    This paper estimates fiscal policy feedback rules in Japan, the United States, and the United Kingdom for more than a century, allowing for stochastic regime changes. Estimating a Markov-switching model by the Bayesian method, we find the following: First, the Japanese data clearly reject the view that the fiscal policy regime is fixed, i.e., that the Japanese government adopted a Ricardian or a non-Ricardian regime throughout the entire period. Instead, our results indicate a stochastic switch of the debt-GDP ratio between stationary and nonstationary processes, and thus a stochastic switch between Ricardian and non-Ricardian regimes. Second, our simulation exercises using the estimated parameters and transition probabilities do not necessarily reject the possibility that the debt-GDP ratio may be nonstationary even in the long run (i.e., globally nonstationary). Third, the Japanese result is in sharp contrast with the results for the U.S. and the U.K., which indicate that in these countries the government’s fiscal behavior is consistently characterized by Ricardian policy.

    Introduction

    Recent studies on the conduct of monetary policy suggest that the fiscal policy regime has important implications for the choice of desirable monetary policy rules, particularly monetary policy rules in the form of inflation targeting (Sims (2005), Benigno and Woodford (2007)). It seems safe to assume that fiscal policy is characterized as “Ricardian” in the terminology of Woodford (1995), or “passive” in the terminology of Leeper (1991), if the government shows strong fiscal discipline. If this is the case, we can design an optimal monetary policy rule without paying any attention to fiscal policy. However, if the economy is unstable in terms of the fiscal situation, it would be dangerous to choose a monetary policy rule independently of fiscal policy rules. For example, some researchers argue that the recent accumulation of public debt in Japan is evidence of a lack of fiscal discipline on the part of the Japanese government, and that government bond market participants may come to doubt the government’s intention and ability to repay the public debt. If this is the case, we may need to take the future evolution of the fiscal regime into consideration when designing a monetary policy rule.

  • Are Online Auction Prices Unfair?

    Abstract

    Why are prices sticky? Arthur Okun explained that customers regard raising prices at times of increased demand as unfair, so firms and stores, fearing customer anger, do not raise prices. For example, taking advantage of the surge in demand for shovels on a snowy day to swap in higher price tags is seen as unfair. To test whether this fairness hypothesis also applies to online auction markets, this paper analyzes changes in face mask prices on the Yahoo! Auctions marketplace during the 2009 H1N1 influenza scare. The contract rate for masks (the number of successful bids divided by the number of listings) rose above 80 percent in early May and in late August, indicating that demand was concentrated in those two periods. The former is when the first "suspected infection" case appeared in Japan, and the latter is when the government declared that a full-scale epidemic had begun. In the May episode, sellers raised both the "starting" price (the price at which bidding opens) and the "buy-it-now" price (the price at which a bidder can win the item immediately without going through the auction). In particular, the buy-it-now price was raised substantially more than the starting price, suggesting an intention to steer the final contract price upward. In the August episode, by contrast, starting prices were raised only slightly and buy-it-now prices were not raised at all. The difference between May and August stems from differences in seller attributes: sellers in the May episode were mainly individuals, whereas those in the August episode were mainly firms. Firms, being conscious of their reputation among buyers, can be interpreted as having refrained from exploiting the increase in demand to raise prices. Okun stressed the importance of distinguishing between customer markets, in which sellers and buyers have long-term relationships, and auction markets, in which they do not, and argued that the fairness hypothesis applies only to the former. Our results show that, from the viewpoint of fairness, online auction markets are closer in nature to customer markets.

    Introduction

    According to a survey of firms conducted by the Research Center for Price Dynamics at Hitotsubashi University in the spring of 2008, 90 percent of respondents answered that they do not immediately change their shipment prices in response to fluctuations in demand or costs. Microeconomics teaches that when the demand or supply curve shifts, the equilibrium moves to the new intersection and prices adjust immediately. In practice, however, firms do not change their prices right away even when the demand and cost environment surrounding them changes. This phenomenon is known as price rigidity, or price stickiness. Price rigidity is a concept at the core of macroeconomics: it is precisely because prices do not adjust instantaneously that fluctuations in unemployment and capacity utilization arise.

  • Zero Interest Rates and Mild Deflation

    Abstract

    In Japan, since the late 1990s, the policy interest rate has been zero while the inflation rate has also hovered around zero. This "two zeros" phenomenon characterizes the monetary side of the Japanese economy in this period and is the counterpart of the prolonged stagnation of growth that characterizes the real side. This paper surveys the research that has sought to identify the causes of the "two zeros" phenomenon.
    Regarding the zero interest rate, one view holds that it was triggered by the fall of the natural rate of interest (the real interest rate that equilibrates saving and investment) into negative territory; another holds that firms and households came, for some reason, to hold strong deflationary expectations, which then became the starting point of a self-fulfilling deflationary equilibrium. Estimates indicate that Japan's natural rate of interest has been quite low since the late 1990s and at times fell below zero. Expected inflation, on the other hand, varies widely across firms and households, and not everyone expected prices to fall persistently. These facts suggest that the negative natural rate hypothesis is the leading explanation for Japan's zero interest rate. However, the possibility cannot be ruled out that strong yen-appreciation expectations on the part of firms and households triggered a self-fulfilling deflationary equilibrium.
    As for prices, more than 90 percent of firms report that they do not immediately change the selling prices of their products when costs or demand change, so price rigidity is clearly present. Moreover, analysis using POS data shows that since the late 1990s prices have been revised more frequently but by smaller amounts. Such small, incremental price changes have made the decline in prices gradual. Behind these incremental price changes may lie strengthened mutual monitoring among stores and firms, in the sense that a firm changes its price if its rivals do and leaves it unchanged if they do not.
    The "two zeros" phenomenon is closely related to two ideas put forward by Keynes: the "liquidity trap" and "price rigidity." Yet little substantive research on the liquidity trap has been carried out since Keynes, and empirical research tracing the causes of price rigidity in the data has gathered momentum only in the past decade. These circumstances lie behind the confusion in the debate over the "two zeros" phenomenon and the delay in the policy response. Researchers are called upon to tackle in earnest the homework Keynes left behind.

    Introduction

    If macroeconomic phenomena are divided into a real side and a monetary side, the most important real-side development after the collapse of the bubble in the early 1990s was the decline in the growth rate. The fall in growth and the accompanying loss of jobs were pressing problems for many people, and researchers have carried out a variety of studies of the "lost decade." The monetary side, by contrast, attracted little attention, at least immediately after the bubble burst, and drew relatively little interest from researchers. In fact, however, important changes were under way on the monetary side during this period as well.

  • The Nominal Rigidity of Housing Rents

    Abstract

    During the collapse of Japan's bubble in the first half of the 1990s, housing rents barely changed despite the large fall in housing prices. A similar phenomenon has been observed in the United States after the collapse of its housing bubble. Why do rents not change? Why do housing prices and rents not move together? To address these questions, this paper analyzes rent data for roughly 15,000 units provided by a major housing management company and obtains the following results. First, the share of units whose rent is changed within a year is only about 5 percent. This is one-fourteenth the rate in the United States and one-fourth that in Germany, and thus extremely low. Behind this high rigidity lies a feature specific to the Japanese housing market: tenant turnover is low and rental contracts run for as long as two years, so opportunities to change the rent are limited in the first place. More important, however, is that rents are not changed even when such opportunities arise, such as tenant turnover or contract renewal, and this greatly lowers the probability of a rent change. At tenant turnover, the same rent as before is applied to 76 percent of units, and at contract renewal the rent is left unchanged for 97 percent of units. Second, an analysis using the adjustment hazard function approach proposed by Caballero and Engel (2007) shows that whether a unit's rent is changed depends hardly at all on how far its current rent deviates from the market level. In other words, rent revisions are time-dependent rather than state-dependent and can be described by a Calvo-type model.
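
    As a rough illustration of the adjustment hazard approach mentioned above, the sketch below (in Python, with hypothetical column names) bins rental units by the gap between their current rent and the market rent and computes the share of units adjusting in each bin; a hazard that is flat in the gap is the signature of time-dependent (Calvo-type) adjustment.

    ```python
    # Minimal sketch of an adjustment hazard in the spirit of Caballero and Engel
    # (2007). Column names ("rent", "market_rent", "changed") are hypothetical
    # placeholders, not the variable names used in the paper.
    import numpy as np
    import pandas as pd

    def adjustment_hazard(df: pd.DataFrame, n_bins: int = 20) -> pd.DataFrame:
        """Probability of a rent change as a function of the (log) gap between a
        unit's current rent and the market rent."""
        gap = np.log(df["rent"]) - np.log(df["market_rent"])    # deviation from market level
        bins = pd.qcut(gap, n_bins, duplicates="drop")          # equal-count gap bins
        hazard = df["changed"].groupby(bins).mean()             # share of units adjusting per bin
        return hazard.rename("adjustment_probability").reset_index()

    # A hazard that barely varies with the gap indicates time-dependent (Calvo-type)
    # adjustment; a hazard rising in the absolute gap indicates state dependence.
    ```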

    Introduction

    Many advanced economies share a history in which a sharp rise in asset prices, housing prices in particular, followed by their subsequent decline, severely damaged the financial system and led to a stagnation of economic activity. Japan and Sweden in the 1990s, and the recent financial crisis originating in the U.S. subprime problem, are the most representative examples. Reinhart and Rogoff (2008) compare comprehensive, long-run time-series data for many countries and show that a number of common economic phenomena precede financial crises. One of these is that asset prices, and real estate prices in particular, rise sharply relative to rents.

  • On the Nonstationarity of the Exchange Rate Process

    Abstract

    We empirically investigate the nonstationarity property of the dollar-yen exchange rate using an eight-year span of high-frequency data. We perform a statistical test of strict stationarity based on the two-sample Kolmogorov-Smirnov test for the absolute price changes, and Pearson's chi-square test for the number of successive price changes in the same direction, and find statistically significant evidence of nonstationarity. We further study the recurrence intervals between the days on which nonstationarity occurs, and find that the distribution of recurrence intervals is well approximated by an exponential distribution. We also find that the mean conditional recurrence interval 〈T|T0〉 is independent of the previous recurrence interval T0. These findings indicate that the recurrence intervals are characterized by a Poisson process. We interpret this as reflecting the Poisson property of the arrival of news.
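
    The day-by-day stationarity test described above can be illustrated with a short sketch: under the assumption that tick-level absolute price changes have been grouped by day, scipy's two-sample Kolmogorov-Smirnov test compares one day's distribution against another's.

    ```python
    # Minimal sketch of the two-sample Kolmogorov-Smirnov test for (non)stationarity
    # described above. Only scipy's ks_2samp is assumed; the data here are synthetic.
    import numpy as np
    from scipy.stats import ks_2samp

    def ks_nonstationarity(abs_changes_day_a, abs_changes_day_b, alpha=0.05):
        """Flag this pair of days as nonstationary if their absolute price changes
        are unlikely to be drawn from the same distribution."""
        stat, p_value = ks_2samp(abs_changes_day_a, abs_changes_day_b)
        return {"statistic": stat, "p_value": p_value, "nonstationary": p_value < alpha}

    # Synthetic example: day B is twice as volatile as day A.
    rng = np.random.default_rng(0)
    day_a = np.abs(rng.normal(0.0, 1.0, 5000))
    day_b = np.abs(rng.normal(0.0, 2.0, 5000))
    print(ks_nonstationarity(day_a, day_b))
    ```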

    Introduction

    Financial time series data have been extensively investigated using a wide variety of methods in econophysics. These studies tend to assume, explicitly or implicitly, that a time series is stationary, since stationarity is a requirement for most of the mathematical theories underlying time series analysis. However, despite this nearly universal assumption, there are few previous studies that seek to test stationarity in a reliable manner (Toth et al. (2010)).

  • Stochastic Herding by Institutional Investment Managers

    Abstract

    This paper demonstrates that the behavior of institutional investors around the downturn of the U.S. equity markets in 2007 is consistent with stochastic herding in attempts to time the market. We consider a model of a large number of institutional investment managers who simultaneously decide whether to remain invested in an asset or to liquidate their positions. Each fund manager receives imperfect information about the market’s ability to supply liquidity and chooses whether or not to sell the security based on her private information as well as the actions of others. Due to feedback effects, the equilibrium is stochastic and the “aggregate action” is characterized by a power-law probability distribution with exponential truncation, predicting occasional “explosive” sell-out events. We examine highly disaggregated institutional ownership data of publicly traded stocks and find that stochastic herding explains the underlying data generating mechanism. Furthermore, consistent with market-timing considerations, the distribution parameter measuring the degree of herding rises sharply immediately prior to the sell-out phase. The sell-out phase is consistent with the transition from a subcritical to a supercritical phase, whereby the system swings sharply to a new equilibrium. Specifically, the exponential truncation vanishes as the distribution of fund manager actions becomes centered around the same action – all sell.
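
    The truncated power law for the aggregate action can be checked with standard heavy-tail fitting tools. The sketch below uses the third-party `powerlaw` package (an assumption of this sketch, not the paper's own code) to fit the tail and compare a power law with exponential cutoff against a pure power law; the vanishing of the cutoff is what the abstract describes for the sell-out phase.

    ```python
    # Minimal sketch: fitting a power law with exponential truncation to the
    # "aggregate action" and comparing it with a pure power law. The third-party
    # `powerlaw` package and the synthetic data are assumptions of this sketch.
    import numpy as np
    import powerlaw

    rng = np.random.default_rng(1)
    aggregate_action = rng.pareto(2.0, 10_000) + 1.0   # placeholder heavy-tailed data

    fit = powerlaw.Fit(aggregate_action, xmin=1.0)
    # Positive R favors the first candidate (truncated power law), negative favors
    # the second (pure power law); p is the significance of the comparison.
    R, p = fit.distribution_compare("truncated_power_law", "power_law")
    print(f"tail exponent alpha = {fit.truncated_power_law.alpha:.2f}, R = {R:.2f}, p = {p:.3f}")
    ```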

    Introduction

    Many apparent violations of the efficient market hypothesis, such as bubbles, crashes, and “fat tails” in the distribution of returns, have been attributed to the tendency of investors to herd. In particular, in a situation where traders may have private information related to the payoff of a financial asset, their individual actions may trigger a cascade of similar actions by other traders. While the mechanism of a chain reaction through information revelation can potentially explain a number of stylized facts in finance, such behavior remains notoriously difficult to identify empirically. This is partly because many theoretical underpinnings of herding, such as informational asymmetry, are unobservable, and partly because the complex agent-based models of herding do not yield closed-form solutions to be used for direct econometric tests.

  • Forecasting Japanese Stock Returns with Financial Ratios and Other Variables

    Abstract

    This paper extends previous analyses of the forecastability of Japanese stock market returns in two directions. First, we carefully construct smoothed market price-earnings ratios and examine their predictive ability. We find that the empirical performance of the price-earnings ratio in forecasting stock returns in Japan is generally weaker than both the price-earnings ratio in comparable US studies and the price-dividend ratio. Second, we also examine the performance of several other forecasting variables, including lagged stock returns and interest rates. We find that both variables are useful in predicting aggregate stock returns when using Japanese data. However, while we find that the interest rate variable is useful in early subsamples in this regard, it loses its predictive ability in more recent subsamples. This is because of the extremely limited variability in interest rates associated with the operation of the Bank of Japan's zero interest rate policy since the late 1990s. In contrast, the importance of lagged returns increases in subsamples starting from the 2000s. Overall, a combination of logged price-dividend ratios, lagged stock returns, and interest rates yields the most stable performance when forecasting Japanese stock market returns.

    Introduction

    In our previous study (Aono & Iwaisako, 2010), we examined the ability of dividend yields to forecast Japanese aggregate stock returns using the single-variable predictive regression framework of Lewellen (2004) and Campbell & Yogo (2006). This paper continues and extends our earlier efforts in two respects. First, we examine the predictive ability of another popular financial ratio, namely, the price-earnings ratio. This is motivated by the fact that some studies using US data (for example, Campbell & Vuolteenaho, 2004) find that smoothed market price-earnings ratios have better forecasting ability than dividend yields. We carefully construct Japanese price-earnings ratios following the methodology pioneered by Robert Shiller (1989, 2005) and examine their ability to forecast aggregate stock returns. We find that the predictive ability of the price-dividend ratio is consistently better than that of the price-earnings ratio.
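
    As a rough illustration of the Shiller-style smoothing referred to above, the sketch below computes a cyclically adjusted price-earnings ratio: the current real price divided by a ten-year moving average of real earnings. The monthly input series are hypothetical placeholders.

    ```python
    # Minimal sketch of a Shiller-style smoothed price-earnings ratio. The monthly
    # input Series (price index, earnings, CPI) are hypothetical placeholders.
    import pandas as pd

    def smoothed_pe(price: pd.Series, earnings: pd.Series, cpi: pd.Series,
                    window_months: int = 120) -> pd.Series:
        """Real price divided by a trailing ten-year average of real earnings."""
        real_price = price * cpi.iloc[-1] / cpi        # deflate to latest-period prices
        real_earnings = earnings * cpi.iloc[-1] / cpi
        return real_price / real_earnings.rolling(window_months).mean()

    # The log of the resulting ratio is then used as a regressor in single-variable
    # predictive regressions of aggregate stock returns.
    ```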

  • Housing Prices in Tokyo: A Comparison of Hedonic and Repeat Sales Measures

    Abstract

    Do indexes of house prices behave differently depending on the estimation method? If so, to what extent? To address these questions, we use a unique dataset that we compiled from individual listings in a widely circulated real estate advertisement magazine. The dataset contains more than 470,000 listings of housing prices between 1986 and 2008, including the period of the housing bubble and its burst. We find that there exists a substantial discrepancy in terms of turning points between hedonic and repeat sales indexes, even though the hedonic index is adjusted for structural changes and the repeat sales index is adjusted in the way Case and Shiller suggested. Specifically, the repeat sales measure signals turning points later than the hedonic measure: for example, the hedonic measure of condominium prices bottomed out at the beginning of 2002, while the corresponding repeat sales measure exhibits a reversal only in the spring of 2004. This discrepancy cannot be fully removed even if we adjust the repeat sales index for depreciation.
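
    For concreteness, the sketch below (Python, with hypothetical column names) outlines the two index constructions compared in the paper: a time-dummy hedonic index from a regression of log price on attributes, and a Bailey-Muth-Nourse repeat-sales index built from units observed more than once.

    ```python
    # Minimal sketch of the two index constructions compared above. Column names
    # ("log_price", "period", "unit_id", "floor_area", "age") are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def hedonic_index(df: pd.DataFrame) -> pd.Series:
        """Time-dummy hedonic index: exp of the period-dummy coefficients in a
        regression of log price on housing attributes (base period = 1)."""
        res = smf.ols("log_price ~ C(period) + floor_area + age", data=df).fit()
        return np.exp(res.params.filter(like="C(period)"))

    def repeat_sales_index(df: pd.DataFrame) -> pd.Series:
        """Bailey-Muth-Nourse repeat-sales index: regress log price relatives of
        consecutive sales of the same unit on -1/+1 period dummies."""
        periods = np.sort(df["period"].unique())
        rows, y = [], []
        for _, g in df.sort_values("period").groupby("unit_id"):
            for (_, first), (_, second) in zip(g.iloc[:-1].iterrows(), g.iloc[1:].iterrows()):
                x = pd.Series(0.0, index=periods)
                x[first["period"]] = -1.0
                x[second["period"]] = 1.0
                rows.append(x)
                y.append(second["log_price"] - first["log_price"])
        X = pd.DataFrame(rows)[periods[1:]]            # drop base period to pin the index
        beta = np.linalg.lstsq(X.values, np.asarray(y), rcond=None)[0]
        return pd.Series(np.exp(beta), index=periods[1:])
    ```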

    Introduction

    Fluctuations in real estate prices have a substantial impact on economic activity. In Japan, the sharp rise in real estate prices during the latter half of the 1980s and their decline in the early 1990s have led to a decade-long, or even longer, stagnation of the economy. More recently, the rapid rise in housing prices and their reversal in the United States have triggered a global financial crisis. Against this background, having a reliable index that correctly identifies trends in housing prices is of utmost importance.

  • On the Evolution of the House Price Distribution

    Abstract

    Is the cross-sectional distribution of house prices close to a (log)normal distribution, as is often assumed in empirical studies on house price indexes? How does the distribution evolve over time? To address these questions, we investigate the cross-sectional distribution of house prices in the Greater Tokyo Area. We find that house prices (Pi) are distributed with much fatter tails than a lognormal distribution and that the tail is quite close to that of a power-law distribution. We also find that house sizes (Si) follow an exponential distribution. These findings imply that size-adjusted house prices, defined by lnPi − aSi, should be normally distributed. We find that this is indeed the case for most of the sample period, but not for the bubble era, during which the price distribution has a fat upper tail even after adjusting for size. The bubble was concentrated in particular areas in Tokyo, and this is the source of the fat upper tail.
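
    The size adjustment described above amounts to a one-line transformation once the coefficient a is estimated; a minimal sketch (with hypothetical price and size arrays) is given below, together with a standard normality check on the adjusted prices.

    ```python
    # Minimal sketch of the size adjustment ln(P_i) - a * S_i described above, with
    # a normality check on the adjusted prices. Inputs are hypothetical arrays of
    # prices and house sizes.
    import numpy as np
    from scipy import stats

    def size_adjusted_prices(prices: np.ndarray, sizes: np.ndarray):
        log_p = np.log(prices)
        a, intercept = np.polyfit(sizes, log_p, 1)   # slope of log price in size
        adjusted = log_p - a * sizes                 # ln P_i - a S_i
        _, p_value = stats.normaltest(adjusted)      # D'Agostino-Pearson normality test
        return adjusted, a, p_value

    # In the paper's finding, the adjusted prices pass such a normality check for
    # most of the sample period but not during the bubble, when a fat upper tail remains.
    ```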

    Introduction

    Researchers on house prices typically start their analysis by producing a time series of the mean of prices across different housing units in a particular region by, for example, running a hedonic or repeat-sales regression. In this paper, we pursue an alternative research strategy: we look at the entire distribution of house prices across housing units in a particular region at a particular point of time and then investigate the evolution of such cross-sectional distribution over time. We seek to describe price dynamics in the housing market not merely by changes in the mean but by changes in some key parameters that fully characterize the entire cross-sectional price distribution.

  • The Great Moderation in the Japanese Economy

    Abstract

    This paper investigates the contribution of technology and nontechnology shocks to the changing volatility of output and labor growth in the postwar Japanese economy. A time-varying vector autoregression (VAR) with drifting coefficients and stochastic volatilities is modeled, and a long-run restriction is used to identify technology shocks in line with Galí (1999) and Galí and Gambetti (2009). We find that technology shocks are responsible for significant changes in output volatility throughout the entire sample period, while the volatility of labor input is largely attributed to nontechnology shocks. The driving force behind these results is the negative correlation between labor input and productivity, which holds significantly and persistently over the postwar period.
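
    As a simplified illustration of the identification scheme, the sketch below applies the long-run (Blanchard-Quah / Galí 1999) restriction in a fixed-coefficient bivariate VAR in productivity growth and labor input growth; the paper's actual model additionally has drifting coefficients and stochastic volatility, which the sketch does not attempt.

    ```python
    # Minimal fixed-coefficient sketch of long-run identification of technology
    # shocks (Blanchard-Quah / Gali 1999). `y` is a hypothetical T x 2 array of
    # (d log productivity, d log labor input); the paper's time-varying model with
    # stochastic volatility is not attempted here.
    import numpy as np
    from statsmodels.tsa.api import VAR

    def identify_technology_shock(y: np.ndarray, lags: int = 4) -> np.ndarray:
        """Return the impact matrix B0 (u_t = B0 e_t) such that only the first
        structural shock (technology) affects productivity in the long run."""
        res = VAR(y).fit(lags)
        A1 = np.eye(y.shape[1]) - res.coefs.sum(axis=0)     # I - A(1)
        A1_inv = np.linalg.inv(A1)
        long_run_cov = A1_inv @ res.sigma_u @ A1_inv.T      # long-run covariance of u
        C = np.linalg.cholesky(long_run_cov)                # lower-triangular long-run impact
        return A1 @ C                                       # contemporaneous impact matrix
    ```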

    Introduction

    Most industrialized economies have experienced a substantial decline in output growth volatility in the postwar period, a phenomenon known as “the Great Moderation.” In the U.S. case, many authors have investigated the characteristics of and reasons for the Great Moderation that started in the mid-1980s. Possible explanations include good luck, better monetary policy, and changes in the economic structure, such as inventory management and labor market statistics. Based on the time-varying and Markov-switching structural VAR methods, the good luck hypothesis has been advocated by many authors, including Stock and Watson (2002, 2005), Primiceri (2005), Sims and Zha (2006), Arias, Hansen, and Ohanian (2006), and Gambetti, Pappa, and Canova (2006). On the other hand, the good policy hypothesis has been supported by many other authors including Clarida, Galí, and Gertler (2000), Lubik and Schorfheide (2004), Boivin and Giannoni (2006), and Benati and Surico (2009). There are different approaches to considering structural changes, including Campbell and Hercowitz (2005) and Galí and Gambetti (2009). In particular, Galí and Gambetti (2009) capture the changing patterns of the correlations among the labor market variables.

  • The Role of the IMF Under the Noise of Signals

    Abstract

    This paper theoretically analyzes the Early Warning System (EWS) of the IMF based on a principal-agent model. We examine the trade-offs of the IMF's optimal contract under interim intervention and noisy signals. The main findings are as follows. First, when the net loss coming from noise under good fundamentals is higher than the net gain from interim intervention under bad fundamentals, the debtor country exerts less effort as the noise effect becomes larger. Second, when the net loss under good fundamentals is smaller than the net gain under bad fundamentals, an accurate signal may give rise to a moral hazard problem. Third, when the marginal utility from IMF intervention is higher under bad fundamentals than under good fundamentals, a higher ability of the IMF to mitigate the crisis will elicit less policy effort from the country. On the other hand, when the economy has higher marginal utility under good fundamentals, deeper intervention by the IMF gives the country an incentive for greater policy effort. Fourth, mandating the IMF to care about country welfare as well as safeguarding its resources does not necessarily mean the debtor country will exert less effort.

    Introduction

    As more developing countries liberalize their capital controls and more investors invest large sums abroad, the possibility of financial crises increases. Vulnerable countries are thus always at risk of a currency crisis in exchange for the chance of welcoming beneficial capital flows. The IMF is expected to take the necessary actions to prevent crises by forecasting and advising developing country authorities. The EWS seems to be a good tool for this challenging work. The IMF is also expected to help developing countries build the reliable economic statistical databases they need, which are the foundation of every EWS study for crisis prevention. Preventing a possible crisis depends on the efforts of both the program country and the IMF.

  • News shocks and the Japanese macroeconomic fluctuations

    Abstract

    Are changes in the future technology process, the so-called “news shocks,” the main contributors to macroeconomic fluctuations in Japan over the past forty years? In this paper, we take two structural vector autoregression (SVAR) approaches to answer this question. First, we quantitatively evaluate the relative importance of news shocks among candidate shocks by estimating a structural vector error correction model (SVECM). Our estimates suggest that the contribution of TFP news shocks is nonnegligible, which is in line with the findings of previous works. Furthermore, we disentangle the sources of news shocks by adopting several kinds of restrictions and find that news shocks on investment-specific technology (IST) also have an important effect. Second, to minimize the gap between the SVAR approach and the Bayesian estimation of a dynamic stochastic general equilibrium model, we adopt an alternative approach: SVAR with sign restrictions. The SVAR with sign restrictions reconfirms the result that news shocks are important in explaining Japanese macroeconomic fluctuations.

    Introduction

    Are news shocks the main source of Japanese macroeconomic fluctuations? Previous works have presented different results. Beaudry and Portier (2005) employ an SVECM with a combination of long-run and short-run restrictions to divide TFP shocks into surprise and news components. The news shock in their econometric model is the shock that has no impact effect on current TFP but increases TFP several quarters later. They find that the estimated TFP news shock is a major driving force behind Japanese macroeconomic fluctuations, and that a negative news shock occurred at the beginning of the 1990s, which might have been related to the so-called “lost decade.” Fujiwara, Hirose, and Shintani (2011) assess the importance of news shocks based on the estimation of a dynamic stochastic general equilibrium (DSGE) model using Bayesian methods. They introduce one- to four-quarter-ahead TFP news shocks and find that TFP news shocks are nonnegligible but minor in explaining macroeconomic fluctuations in Japan.

  • Closely Competing Firms and Price Adjustment: Some Findings from an Online Marketplace

    Abstract

    We investigate retailers’ price setting behavior using a unique dataset containing by-the-second records of prices offered by closely competing retailers on a major Japanese price comparison website. First, we find that, when the average price of a product across retailers falls rapidly, the frequency of price adjustments increases, and the size of price adjustments becomes larger. Second, we find positive autocorrelation in the frequency of price adjustments, implying that there tends to be clustering where price adjustments occur in succession. In contrast, there is no such autocorrelation in the size of price adjustments. These two findings indicate that the behavior of competing retailers is characterized by state-dependent pricing rather than time-dependent pricing.
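
    The two statistics in the abstract, the frequency and the size of price adjustments per interval, can be computed from the by-the-second records with a few lines of pandas; the sketch below assumes a hypothetical DataFrame with columns "time", "retailer", and "price".

    ```python
    # Minimal sketch: hourly frequency and mean absolute size of price adjustments,
    # computed from a hypothetical DataFrame with columns "time" (datetime),
    # "retailer", and "price".
    import numpy as np
    import pandas as pd

    def adjustment_stats(df: pd.DataFrame, freq: str = "1h") -> pd.DataFrame:
        df = df.sort_values(["retailer", "time"]).copy()
        df["dlogp"] = df.groupby("retailer")["price"].transform(lambda p: np.log(p).diff())
        changes = df[df["dlogp"].abs() > 0].set_index("time")
        return pd.DataFrame({
            "n_adjustments": changes["dlogp"].resample(freq).count(),       # frequency series
            "mean_abs_size": changes["dlogp"].abs().resample(freq).mean(),  # size series
        })

    # Clustering of adjustments shows up as positive first-order autocorrelation in
    # the frequency series but not in the size series, e.g.:
    # out = adjustment_stats(df); out["n_adjustments"].autocorr(1); out["mean_abs_size"].autocorr(1)
    ```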

    Introduction

    Since the seminal study by Bils and Klenow (2004), there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. Using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.

  • On the Evolution of the House Price Distribution

    Abstract

    Is the cross-sectional distribution of house prices close to a (log)normal distribution, as is often assumed in empirical studies on house price indexes? How does it evolve over time? What does it look like during housing bubbles? To address these questions, we investigate the cross-sectional distribution of house prices in the Greater Tokyo Area. Using a unique dataset containing individual listings in a widely circulated real estate advertisement magazine from 1986 to 2009, we find the following. First, the house price, Pit, is characterized by a distribution with much fatter tails than a lognormal distribution, and the tail part is quite close to that of a power-law or a Pareto distribution. Second, the size of a house, Si, follows an exponential distribution. These two findings about the distributions of Pit and Si imply that the price distribution conditional on house size, i.e., Pr(Pit | Si), follows a lognormal distribution. We confirm this by showing that size-adjusted prices indeed follow a lognormal distribution, except for periods of the housing bubble in Tokyo, when the price distribution remains asymmetric and skewed to the right even after controlling for the size effect.

    Introduction

    Research on house prices typically starts by producing a time series of the mean of prices across housing units in a particular region by, for example, running a hedonic regression or adopting a repeat-sales method. In this paper, we propose an alternative research strategy: we look at the entire distribution of house prices across housing units in a particular region at a particular point in time, and then investigate the evolution of such cross-sectional distributions over time. We seek to describe price dynamics in a housing market not merely by changes in the mean but by changes in some key parameters that fully characterize the entire cross-sectional price distribution. Our ultimate goal is to produce a new housing price index based on these key parameters.

  • Sales Distribution of Consumer Electronics

    Abstract

    Using the uniformly most powerful unbiased test, we examine the daily sales distribution of consumer electronics in Japan and report that it follows either a lognormal distribution or a power-law distribution, depending on the state of the market. We show that switches between the two occur quite often. The underlying sales dynamics found in both regimes closely match a multiplicative process. However, while the multiplicative term in the process displays a size-dependent relationship when a steady lognormal distribution holds, it shows a size-independent relationship when the power-law distribution holds. This difference in the underlying dynamics is responsible for the difference in the two observed distributions.
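
    The size dependence of the multiplicative term mentioned above can be examined directly by binning items by current sales and looking at the distribution of next-day log growth rates within each bin; a minimal sketch with hypothetical input arrays follows.

    ```python
    # Minimal sketch of the size-(in)dependence check for the multiplicative sales
    # process discussed above. Inputs are hypothetical arrays of an item's sales on
    # one day (s_t) and on the next day (s_next).
    import numpy as np
    import pandas as pd

    def growth_by_size(s_t: np.ndarray, s_next: np.ndarray, n_bins: int = 10) -> pd.DataFrame:
        g = np.log(s_next) - np.log(s_t)                        # multiplicative growth term
        size_bin = pd.qcut(np.log(s_t), n_bins, labels=False)   # size deciles (log scale)
        return pd.DataFrame({"growth": g, "bin": size_bin}).groupby("bin")["growth"].agg(["mean", "std"])

    # Mean/std of growth varying systematically across size bins indicates a
    # size-dependent multiplicative term (the lognormal regime); roughly flat values
    # across bins indicate size independence (the power-law regime).
    ```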

    Introduction

    Since Pareto pointed out in 1896 that the distribution of income exhibits a heavy-tailed structure [1], many papers have argued that such distributions can be found in a wide range of empirical data describing not only economic phenomena but also biological, physical, ecological, sociological, and various man-made phenomena [2]. The list of quantities whose distributions have been conjectured to obey such distributions includes firm sizes [3], city populations [4], the frequency of unique words in a given novel [5, 6], the number of species per biological genus [7], the number of papers published by scientists [8], web files transmitted over the internet [9], book sales [10], and product market shares [11]. Alongside these reports, the debate over the exact distribution, that is, whether these heavy-tailed distributions obey a lognormal distribution or a power-law distribution, has likewise continued for many years [2]. In this paper we use the statistical techniques developed in this literature to clarify the sales distribution of consumer electronics.

  • Competing Firms and Price Adjustment: Evidence from an Online Marketplace

    Abstract

    We investigate retailers’ price setting behavior, and in particular strategic interaction between retailers, using a unique dataset containing by-the-second records of prices offered by competing retailers on a major Japanese price comparison website. First, we find that, when the average price of a product across retailers falls rapidly, the frequency of price adjustments is high, while the size of adjustments remains largely unchanged. Second, we find a positive autocorrelation in the frequency of price adjustments, implying that there tends to be a clustering where once a price adjustment occurs, such adjustments occur in succession. In contrast, there is no such autocorrelation in the size of price adjustments. These two findings indicate that the behavior of competing retailers is characterized by state-dependent pricing, rather than time-dependent pricing, especially when prices fall rapidly, and that strategic complementarities play an important role when retailers decide to adjust (or not to adjust) their prices.

    Introduction

    Since Bils and Klenow’s (2004) seminal study, there has been extensive research on price stickiness using micro price data. One vein of research along these lines concentrates on price adjustment events and examines the frequency with which such events occur. An important finding of such studies is that price adjustment events occur quite frequently. For example, using raw data of the U.S. consumer price index (CPI), Bils and Klenow (2004) report that the median frequency of price adjustments is 4.3 months. Using the same U.S. CPI raw data, Nakamura and Steinsson (2008) report that when sales are excluded, prices are adjusted with a frequency of once every 8 to 11 months. Similar studies focusing on other countries include Dhyne et al. (2006) for the euro area and Higo and Saita (2007) for Japan.
