Bayesian Portfolio Analysis: Analyzing the Global Investment Market

By Daniel Roeder
The Developing Economist
2015, Vol. 2 No. 1

Abstract

The goal of portfolio optimization is to determine the ideal allocation of assets to a given set of possible investments. Many optimization models use classical statistical methods, which do not fully account for estimation risk in historical returns or the stochastic nature of future returns. By using a fully Bayesian analysis, however, I am able to account for these aspects and incorporate a complete information set as a basis for the investment decision. I use Bayesian methods to combine different estimators into a succinct portfolio optimization model that takes into account an investor's utility function. I will test the model using monthly return data on stock indices from Australia, Canada, France, Germany, Japan, the U.K. and the U.S.

I. Introduction

Portfolio optimization is one of the fastest growing areas of research in financial econometrics, and only recently has computing power reached a level where analysis of numerous assets is even possible. There are a number of portfolio optimization models used in financial econometrics, and many of them build on aspects of previously defined models. The model I build uses Bayesian statistical methods to combine insights from Markowitz, Black and Litterman (BL), and Zhou. Each of these papers uses techniques from the previous one to specify a novel modeling technique.

Bayesian statistics specifies a few types of functions that are necessary to complete an analysis: the prior distribution, the likelihood function, and the posterior distribution. A prior distribution defines how one expects a variable to be distributed before viewing the data. The prior can carry more or less weight in the posterior distribution depending on how confident one is in it. A likelihood function describes the observed data in the study. Finally, the posterior distribution describes the final result, which is the combination of the prior distribution with the likelihood function. This is done by using Bayes' theorem2, which multiplies the prior by the likelihood and divides by a normalizing constant, which ensures that the probability density function (PDF) of the posterior integrates to 1. Bayesian analysis is an ideal method to use in a portfolio optimization problem because it accounts for the estimation risk in the data. The returns of the assets form a distribution centered on the mean returns, but we are not sure that this mean is the true mean. Therefore it is necessary to model the returns as a distribution to account for the inherent uncertainty in the mean, and this is exactly what Bayesian analysis does.
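In symbols, for a parameter vector θ and observed data x, Bayes' theorem states that

p(θ | x) = p(x | θ) p(θ) / p(x) ∝ p(x | θ) p(θ),

where p(θ) is the prior, p(x | θ) is the likelihood, and the normalizing constant p(x) ensures that the posterior integrates to 1.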

Zhou incorporates all of the necessary Bayesian components in his model: the market equilibrium and the investor's views act as a joint prior and the historical data defines the likelihood function. This strengthens the model by making it mostly consistent with Bayesian principles, but some aspects are still not statistically sound. In particular, I disagree with the fact that Zhou uses the historical covariance matrix, Σ, in each stage of the analysis (prior and likelihood). The true covariance matrix is never observable to an investor, meaning there is inherent uncertainty in modeling Σ, which must be accounted for in the model. Zhou underestimates this uncertainty by using the historical covariance matrix to initially estimate the matrix, and by re-updating the matrix with the historical data again in the likelihood stage. This method puts too much confidence in the historical matrix by re-updating the prior with the same historical matrix. I plan to account for this uncertainty by incorporating an inverse-Wishart prior distribution on the Black-Litterman prior estimate, which will model Σ as a distribution and not a point estimate. The inverse-Wishart prior will use the Black-Litterman covariance matrix as a starting point, but the investor can now model the matrix as a distribution and adjust confidence in the starting point with a tuning parameter. This is a calculation that must be incorporated to make the model statistically sound, and it also serves as a starting point for more extensive analysis of the covariance matrix.

The empirical analysis in Zhou is based on equity index returns from Australia, Canada, France, Germany, Japan, the United Kingdom and the United States. My dataset comprises the total return indices for the same countries, but the data run through 2013 rather than 2007 as in Zhou. This is a similar dataset to that chosen by BL, who used it to analyze different international trading strategies based on equities, bonds and currencies.

The goal of this paper is to extend the Bayesian model created by Zhou by relaxing his strict assumption on the modeling of the covariance matrix by incorporating the inverse-Wishart prior extension. This will in turn create a statistically sound and flexible model, usable by any type of investor. I will then test the models by using an iterative out-of-sample modeling procedure.

In Section II, I further describe the literature on the topic and show how it influenced my analysis. In Section III, I will describe the baseline models and the inverse-Wishart prior extension. In Section IV, I will summarize the dataset and provide descriptive statistics. In Section V, I will describe how the models are implemented and tested. In Section VI, I will describe the results and compare the models, and in Section VII, I will offer conclusions and possible extensions to my model.

II. Literature Review

Models

Harry Markowitz established one of the first frameworks for portfolio optimization in 1952. In his paper, Portfolio Selection, Markowitz solves for the portfolio weights that maximize a portfolio's return while minimizing its volatility by maximizing a specified expected utility function for the investor. The utility function is conditional on the historical mean and variance of the data, which is why it is often referred to as a mean-variance analysis. These variables are the only inputs, so the model tends to be extremely sensitive to small changes in either of them. The model also assumes that historical returns on their own predict future returns, something known to be untrue in financial econometrics.

These difficulties with the mean-variance model do not render it useless. In fact, the model can perform quite well when there are better predictors for the expected returns and covariance matrix (rather than just historical values). The model by BL extends the mean-variance framework by creating an estimation strategy that incorporates an investor's views on the assets in question with an equilibrium model of asset performance. Many investors make decisions about their portfolio based on how they expect the market to perform, so it is intuitive to incorporate these views into the model.

Investor views in the Black-Litterman model can either be absolute or relative. Absolute views specify the expected return for an individual security; for example, an investor may think that the S&P 500 will return 2% next month. Relative views specify the relationship between assets; for example, an investor may think that the London Stock Exchange will have a return 2% higher than the Toronto Stock Exchange next month. BL specify the same assumptions and use a similar model to Markowitz to describe the market equilibrium, and they then incorporate the investor's views through Bayesian updating. This returns a vector of expected returns that is similar to the market equilibrium but adjusted for the investor's views. Only assets that the investor has a view on will deviate from the equilibrium weight. Finally, BL use the same mean-variance utility function as Markowitz to calculate the optimal portfolio weights based on the updated expected returns.
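To make this concrete, suppose the seven indices are ordered as in the abstract (Australia, Canada, France, Germany, Japan, U.K., U.S.); this ordering, and the vector q collecting the expected view returns, are notation introduced here purely for illustration. An absolute view that the U.S. index will return 2% next month and a relative view that the U.K. index will outperform the Canadian index by 2% could then be encoded as

P = [ 0   0   0   0   0   0   1 ]        q = [ 0.02 ]
    [ 0  -1   0   0   0   1   0 ]            [ 0.02 ]

so each row of P picks out the assets involved in one view, and the corresponding element of q is the return the investor expects for that combination of assets.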

Zhou takes this framework one step further by also incorporating historical returns into the analysis because the equilibrium market weights are subject to error that the historical data can help fix. The market equilibrium values are based on the validity of the capital asset pricing model (CAPM)3, which is not always supported by historical data. This does not render the equilibrium returns useless; they simply must be supplemented by historical data in order to make the model more robust. The combination of the equilibrium pricing model and the investor's views with the data strengthens the model by combining different means of prediction. As an extension, it would be useful to research the benefit of including a more complex data modeling mechanism that incorporates more than just the historical mean returns. A return forecasting model could be of great use here, though it would greatly increase the complexity of the model.

Zhou uses a very complete description of the market by incorporating all three of these elements, but there is one other aspect of the model that he neglects: his theoretical framework does not account for uncertainty in the covariance matrix. By neglecting this aspect, he implies that the next period's covariance matrix is described only by the fixed historical covariance matrix. This is in line with the problems that arise in Markowitz, and is also not sound in a Bayesian statistical sense because he is using a data-generated covariance matrix in the prior, which is then updated by the same data. I will therefore put an inverse-Wishart prior distribution on the Black-Litterman estimate of Σ before updating the prior with the data. The primary Bayesian updating stage, in which the equilibrium estimate is updated by the investor's views, will remain unchanged. This way, Σ is modeled as a distribution in the final Bayesian updating stage, which allows the prior to have a more profound effect.
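A minimal sketch of this extension, with ν acting as the confidence (tuning) parameter and N the number of assets; the exact parametrization below is chosen only for illustration:

Σ ~ Inverse-Wishart( (ν - N - 1) Σ_BL , ν ),   ν > N + 1,

where Σ_BL denotes the Black-Litterman covariance estimate used as the starting point. This parametrization centers the prior so that E[Σ] = Σ_BL; larger values of ν concentrate the prior more tightly around Σ_BL, while smaller values allow the historical data in the likelihood stage to pull the posterior further away from it.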

Investment Strategies

Though the Black-Litterman model is quantitatively based, it is extremely flexible, unlike many other models, due to the input of subjective views by the investor. These views are directly specified and can come from any source, whether that is a hunch, the Wall Street Journal, or even an entirely different quantitative model. I will present a momentum-based view strategy, but this is only one of countless strategies that could be incorporated, whether quantitatively based or not. The results of this paper will be heavily dependent on the view specification; this dependence is inherent in the nature of the model. The goal of this paper is not to have a perfect empirical analysis, but instead to present a flexible, statistically sound and customizable model for an investor regardless of their level of expertise.

The investor's views can be independent over time or follow a specific investment strategy. In the analysis I use a function based on the recent price movement of the indices, a momentum strategy, to specify the views. The conventional wisdom of many investors is that individual prices and their movements have nothing to say about the asset's value, but when the correct time frame is analyzed, generally the previous 6-12 months, statistically significant returns can be achieved (Momentum). In the last 5 years alone, over 150 papers have been published investigating the significance of momentum investment strategies (Momentum). Foreign indices are not an exception, as it has been shown that indices with positive momentum perform better than those with negative momentum (AQR).
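As a rough sketch of how a momentum-based view might be generated from recent index levels (the 12-month lookback, the single winner-versus-loser view, and the 2% expected outperformance below are hypothetical choices, not the paper's exact specification):

import numpy as np

def momentum_view(prices, lookback=12):
    # prices: (T, N) array of monthly total return index levels
    # Rank the indices by trailing return and form one relative view:
    # the past winner is expected to outperform the past loser.
    trailing = prices[-1] / prices[-lookback - 1] - 1.0
    winner, loser = int(np.argmax(trailing)), int(np.argmin(trailing))
    P = np.zeros((1, prices.shape[1]))      # 1 x N pick matrix
    P[0, winner], P[0, loser] = 1.0, -1.0
    q = np.array([0.02])                    # assumed 2% expected outperformance
    return P, q

The resulting pick matrix P and view return q can then be fed into the Black-Litterman updating step described in Section III.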

The basis of momentum strategies lies in the empirical failure of the efficient market hypothesis, which states that all possible information about an asset is immediately priced into the asset once the information becomes available. This tends to fail because some investors get the information earlier or respond to it in different manners, so there is an inherent asymmetric incorporation of information that creates short-term price trends (momentum) that can be exploited. This phenomenon can be further explored in Momentum.

Though momentum investing is gaining in popularity, there are countless other investment strategies in use today. Value and growth investing are both examples, and view functions incorporating these strategies are an interesting topic of further research.

III. Theoretical Framework

Baseline

As mentioned in the literature review, Markowitz specifies a mean-variance utility function with respect to the portfolio asset weight vector, w. The investor's goal is to maximize the expected return while minimizing the volatility, and he does so by maximizing the utility function

U(w) = E[w'R_{T+1} | R_T] - (γ/2) Var[w'R_{T+1} | R_T] = w'μ - (γ/2) w'Σw,    (1)

where R_T is the current period's return, R_{T+1} is the future period's return, γ is the investor's risk aversion coefficient, μ is the sample return vector and Σ is the sample covariance matrix. This is referred to as a two-moment utility function since it incorporates the distribution's first two moments, the mean and variance. The first order condition of this utility function, with respect to w, solves to

w* = (1/γ) Σ⁻¹ μ,    (2)

which can be used to solve for the optimal portfolio weights given the historical data.
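As a minimal illustration of (2), assuming only a matrix of historical monthly returns (the function name, variable names, and simulated data below are purely illustrative):

import numpy as np

def mean_variance_weights(returns, gamma):
    # returns: (T, N) array of historical asset returns
    # gamma:   risk aversion coefficient
    # Implements w* = (1/gamma) * inverse(Sigma) * mu from equation (2).
    mu = returns.mean(axis=0)               # sample mean return vector
    sigma = np.cov(returns, rowvar=False)   # sample covariance matrix
    return np.linalg.solve(sigma, mu) / gamma

rng = np.random.default_rng(0)
simulated = rng.normal(0.005, 0.04, size=(120, 7))   # 120 months, 7 indices
weights = mean_variance_weights(simulated, gamma=3.0)

Note that these are the unconstrained weights; they need not be positive or sum to one.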

BL first specify their model by determining the expected market equilibrium returns. To do so, they solve for μ in (2) by plugging in the sample covariance matrix and the market equilibrium weights. The sample covariance matrix comes from the data and the market equilibrium weights are simply the percentage that each country's market capitalization makes up of the total portfolio market capitalization.

In equilibrium, if we assume that the CAPM holds and that all investors have the same risk aversion and views on the market, the demand for any asset will be equal to the available supply. The supply of an asset is simply its market capitalization, or the amount of dollars of the asset available in the market. In equilibrium, when supply equals demand, we know that the weight of each asset in the optimal portfolio will be equal to its share of the total supply, that is, its market capitalization weight. Σ is simply the historical covariance matrix, so we know both w and Σ in (2), meaning we can solve for μ_e, the equilibrium expected excess returns.
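Concretely, inverting the first order condition (2) at the market-capitalization weights w_eq (notation introduced here for convenience) gives the implied equilibrium excess returns

μ_e = γ Σ w_eq,

where γ is the same risk aversion coefficient as in (1). This reverse-optimization step recovers the equilibrium returns without estimating expected returns directly from historical averages.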

It is also assumed that the true expected excess return, μ, is normally distributed with mean μ_e and covariance matrix τΣ. This can be written as

μ ~ N(μ_e, τΣ),    (3)

where μ_e is the vector of market equilibrium returns, τ is a scalar indicating the confidence in how well the true expected returns are modeled by the market equilibrium, and Σ is the fixed sample covariance matrix. It is common practice to use a small value of τ since one would guess that long-term equilibrium returns are less volatile than historical returns.

We must also incorporate the investor's views, which can be modeled by

Pμ = q + ε,   ε ~ N(0, Ω),    (4)

where P is a K × N matrix that specifies K views on the N assets, q is the K × 1 vector of returns the investor expects from those views, and Ω is the covariance matrix describing the degree of confidence that the investor has in his views. Ω is one of the harder variables to specify in the model, but [?] provide a method that also helps with the specification of τ. Ω is a diagonal matrix since it is assumed that the views are independent of one another, meaning all covariance (off-diagonal) elements of the matrix are zero. Each diagonal element of Ω can be thought of as the variance of a view's error term, which can be specified as P_i Σ P_i', where P_i is an individual row (view) from the K × N view-specifying matrix and Σ is again the historical covariance matrix. Again, I do not agree with this overemphasis on the historical covariance matrix, but I include it here for simplicity of explaining the intuition of the model.

Intuitively, one can calibrate the confidence of each view by shrinking each view's error term, multiplying it by τ. This makes τ effectively independent of the posterior analysis because it is now incorporated in the same manner in the two stages of the model. If τ is drastically increased, so too are the error terms in Ω, but the estimated return vector shown in (5) is not changed, because there is an identical effect on the prior covariance τΣ.
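Written compactly, this calibration sets

Ω = τ · diag( P_1 Σ P_1', … , P_K Σ P_K' ),

so that the variance attached to each view scales with the same parameter τ that scales the prior covariance of μ.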

We can combine these two models by Bayesian updating, which leaves us with the Black-Litterman mean and variance
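For reference, one standard way to write the resulting posterior moments of the expected return vector (equivalent forms appear in BL and in Zhou) is

μ_BL = [ (τΣ)⁻¹ + P'Ω⁻¹P ]⁻¹ [ (τΣ)⁻¹ μ_e + P'Ω⁻¹ q ],    (5)

V_BL = [ (τΣ)⁻¹ + P'Ω⁻¹P ]⁻¹,

where V_BL (notation used here for convenience) is the posterior covariance of the expected return vector μ rather than of the returns themselves.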