
Archive for November, 2011

## Black-Litterman Model

The Black-Litterman Model was created by Fischer Black and Robert Litterman in 1992 to resolve shortcomings of the traditional Markowitz mean-variance asset allocation model. It addresses the following two items:

• Lack of diversification of portfolios on the mean-variance efficient frontier.
• Instability of portfolios on the mean-variance efficient frontier: small changes in the input assumptions often lead to very different efficient portfolios.

For a very good non-technical introduction, I recommend The Black-Litterman Model: An Introduction for the Practitioner by T. Becker (2009).

I will take the country allocation example presented in The Intuition Behind Black-Litterman Model Portfolios by G. He and R. Litterman (1999) and update it using current market data.

First, I need market capitalization data for each country to compute the equilibrium portfolio. I found the following two sources of capitalization data:

I will use market capitalization data from World Databank.

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Visualize Market Capitalization History
#--------------------------------------------------------------------------

hist.caps = aa.test.hist.capitalization()
hist.caps.weight = hist.caps/rowSums(hist.caps)

# Plot Transition of Market Cap Weights in time
plot.transition.map(hist.caps.weight, index(hist.caps.weight), xlab='', name='Market Capitalization Weight History')

# Plot History for each Country's Market Cap
layout( matrix(1:9, nrow = 3, byrow=T) )
col = plota.colors(ncol(hist.caps))
for(i in 1:ncol(hist.caps)) {
plota(hist.caps[,i], type='l', lwd=5, col=col[i], main=colnames(hist.caps)[i])
}

There is a major shift in weights between Japan and the USA from 1988 to 2010: in 1988 Japan represented 47% and the USA 33%; in 2010 Japan represents 13% and the USA 55%. The shift was driven by an inflow of capital into the USA; Japanese capitalization was fairly stable over time, as can be observed from the time series plot for each country.

Second, I need historical price series for each country to compute the covariance matrix. I will use historical data from Yahoo Finance:

• Australia: EWA
• Canada: EWC
• France: EWQ
• Germany: EWG
• Japan: EWJ
• U.K.: EWU
• USA: SPY

The first step of the Black-Litterman model is to find implied equilibrium returns using reverse optimization. $\Pi = \delta \Sigma w_{eq}$

where $\Pi$ are equilibrium returns, $\delta$ is risk aversion, $\Sigma$ is covariance matrix, and $w_{eq}$ are market capitalization weights. The risk aversion parameter can be estimated from historical data by dividing the excess market portfolio return by its variance.

# Use reverse optimization to compute the vector of equilibrium returns
bl.compute.eqret <- function
(
risk.aversion, 	# Risk Aversion
cov, 		# Covariance matrix
cap.weight, 	# Market Capitalization Weights
risk.free = 0	# Risk Free Interest Rate
)
{
return( risk.aversion * cov %*% cap.weight +  risk.free)
}
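To make the reverse optimization concrete, here is a tiny pure-Python sketch (separate from the R toolbox above) with two hypothetical assets: risk aversion is estimated as the mean excess market return divided by its variance, then fed into $\Pi = \delta \Sigma w_{eq}$. All numbers are made up.

```python
# Hypothetical two-asset sketch of reverse optimization (all numbers made up).

# monthly excess returns of the "market" portfolio (hypothetical)
market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.02]

mean_ret = sum(market) / len(market)
variance = sum((r - mean_ret) ** 2 for r in market) / (len(market) - 1)

# risk aversion = excess market return / variance
risk_aversion = mean_ret / variance

# covariance matrix (hypothetical) and market-cap weights
cov = [[0.04, 0.01],
       [0.01, 0.09]]
w_eq = [0.6, 0.4]

# equilibrium returns: Pi = delta * Sigma * w_eq
pi = [risk_aversion * sum(cov[i][j] * w_eq[j] for j in range(2)) for i in range(2)]
```

The same numbers dropped into bl.compute.eqret above (with risk.free = 0) would give the same vector.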

#--------------------------------------------------------------------------
# Compute Risk Aversion, prepare Black-Litterman input assumptions
#--------------------------------------------------------------------------
ia = aa.test.create.ia.country()

# compute Risk Aversion
risk.aversion = bl.compute.risk.aversion( ia$hist.returns$USA )

# the latest market capitalization weights
cap.weight = last(hist.caps.weight)

# create Black-Litterman input assumptions
ia.bl = ia
ia.bl$expected.return = bl.compute.eqret( risk.aversion, ia$cov, cap.weight )

# Plot market capitalization weights and implied equilibrium returns
layout( matrix(c(1,1,2,3), nrow=2, byrow=T) )
pie(coredata(cap.weight), paste(colnames(cap.weight), round(100*cap.weight), '%'),
main = paste('Country Market Capitalization Weights for', format(index(cap.weight),'%b %Y'))
, col=plota.colors(ia$n))

plot.ia(ia.bl, T)

Next, let’s compare the efficient frontiers created using historical input assumptions and Black-Litterman input assumptions.

#--------------------------------------------------------------------------
# Create Efficient Frontier(s)
#--------------------------------------------------------------------------
n = ia$n

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# create efficient frontier(s)
ef.risk = portopt(ia, constraints, 50, 'Historical', equally.spaced.risk = T)
ef.risk.bl = portopt(ia.bl, constraints, 50, 'Black-Litterman', equally.spaced.risk = T)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2) )
plot.ef(ia, list(ef.risk), portfolio.risk, T, T)
plot.ef(ia.bl, list(ef.risk.bl), portfolio.risk, T, T)

Comparing the transition maps, the Black-Litterman efficient portfolios are well diversified, with allocations to all asset classes at various risk levels. By its construction, the Black-Litterman model is well suited to address the diversification problem.

The Black-Litterman model also introduces a mechanism to incorporate investor’s views into the input assumptions in such a way that small changes in the input assumptions will NOT lead to very different efficient portfolios. The Black-Litterman model adjusts expected returns and covariance: $\bar{\mu} = \left [ (\tau \Sigma)^{-1}+P'\Omega^{-1}P \right ]^{-1} \left [ (\tau \Sigma)^{-1}\Pi + P'\Omega^{-1}Q \right ] \newline\newline \bar{\Sigma}=\Sigma+\left [ (\tau \Sigma)^{-1}+P'\Omega^{-1}P \right ]^{-1}$

where P is the views pick matrix and Q is the views mean vector. The Black-Litterman model assumes that the views are distributed $N(Q, \Omega)$, where $\Omega$ is the diagonal covariance matrix of the view errors.

bl.compute.posterior <- function
(
mu, 		# Equilibrium returns
cov, 		# Covariance matrix
pmat=NULL, 	# Views pick matrix
qmat=NULL, 	# Views mean vector
tau=0.025 	# Measure of uncertainty of the prior estimate of the mean returns
)
{
out = list()
omega = diag(c(1,diag(tau * pmat %*% cov %*% t(pmat))))[-1,-1]

temp = solve(solve(tau * cov) + t(pmat) %*% solve(omega) %*% pmat)
out$cov = cov + temp
out$expected.return = temp %*% (solve(tau * cov) %*% mu + t(pmat) %*% solve(omega) %*% qmat)
return(out)
}
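To see the posterior mean formula in action, here is a small pure-Python numeric check (separate from the R toolbox) with two assets and one relative view; all inputs are hypothetical. Because this example sets $\Omega = \tau P \Sigma P'$, the prior spread and the view receive equal weight in the posterior.

```python
# Tiny pure-Python check of the Black-Litterman posterior mean for two assets
# and one relative view (asset 1 outperforms asset 2 by 2%). Numbers made up.

def inv2(m):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

tau = 0.025
cov = [[0.04, 0.01], [0.01, 0.09]]   # hypothetical covariance
pi = [[0.05], [0.07]]                # hypothetical equilibrium returns
P = [[1.0, -1.0]]                    # pick matrix: one relative view
Pt = [[1.0], [-1.0]]
Q = [[0.02]]                         # view: 2% outperformance

# omega = tau * P Sigma P'  (diagonal; here a single 1x1 entry)
omega = matmul(matmul(P, [[tau * c for c in row] for row in cov]), Pt)
omega_inv = [[1.0 / omega[0][0]]]

# temp = [ (tau Sigma)^-1 + P' Omega^-1 P ]^-1
tau_cov_inv = inv2([[tau * c for c in row] for row in cov])
PoP = matmul(matmul(Pt, omega_inv), P)
temp = inv2([[tau_cov_inv[i][j] + PoP[i][j] for j in range(2)] for i in range(2)])

# posterior mean = temp * [ (tau Sigma)^-1 Pi + P' Omega^-1 Q ]
PoQ = matmul(matmul(Pt, omega_inv), Q)
rhs = [[matmul(tau_cov_inv, pi)[i][0] + PoQ[i][0]] for i in range(2)]
mu_bar = matmul(temp, rhs)
```

With a prior spread of -2% and a view of +2%, the posterior spread lands between the two, here essentially at zero.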

#--------------------------------------------------------------------------
# Create Views
#--------------------------------------------------------------------------
temp = matrix(rep(0, n), nrow = 1)
colnames(temp) = ia$symbols

# Relative View
# Japan will outperform UK by 2%
temp[,'Japan'] = 1
temp[,'UK'] = -1

pmat = temp
qmat = c(0.02)

# Absolute View
# Australia's expected return is 12%
temp[] = 0
temp[,'Australia'] = 1

pmat = rbind(pmat, temp)
qmat = c(qmat, 0.12)

# compute posterior distribution parameters
post = bl.compute.posterior(ia.bl$expected.return, ia$cov, pmat, qmat, tau = 0.025)

# create Black-Litterman input assumptions with Views
ia.bl.view = ia.bl
ia.bl.view$expected.return = post$expected.return
ia.bl.view$cov = post$cov
ia.bl.view$risk = sqrt(diag(ia.bl.view$cov))

# create efficient frontier(s)
ef.risk.bl.view = portopt(ia.bl.view, constraints, 50, 'Black-Litterman + View(s)', equally.spaced.risk = T)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2) )
plot.ef(ia.bl, list(ef.risk.bl), portfolio.risk, T, T)
plot.ef(ia.bl.view, list(ef.risk.bl.view), portfolio.risk, T, T)

Comparing the transition maps, the Black-Litterman + Views efficient portfolios have more allocation to Japan and Australia, as expected. The portfolios are well diversified and are not drastically different from the Black-Litterman efficient portfolios.

The Black-Litterman model provides an elegant way to resolve shortcomings of the traditional Markowitz mean-variance asset allocation model based on historical input assumptions. It addresses the following two items:

• Lack of diversification of portfolios on the mean-variance efficient frontier. The Black-Litterman model uses equilibrium returns implied from the current market capitalization weights to construct well diversified portfolios.
• Instability of portfolios on the mean-variance efficient frontier. The Black-Litterman model introduces a mechanism to incorporate the investor’s views into the input assumptions in such a way that small changes in the input assumptions will NOT lead to very different efficient portfolios.
I highly recommend exploring and reading the following articles and websites for a better understanding of the Black-Litterman model:

To view the complete source code for this example, please have a look at the aa.black.litterman.test() function in aa.test.r at github.

## Resampling and Shrinkage: Solutions to Instability of mean-variance efficient portfolios

November 11, 2011 1 comment

Small changes in the input assumptions often lead to very different efficient portfolios constructed with mean-variance optimization. I will discuss Resampling and the Covariance Shrinkage Estimator – two common techniques to make portfolios on the mean-variance efficient frontier more diversified and immune to small changes in the input assumptions.

Resampling was introduced by Michaud in Efficient Asset Management, 2nd edition. Please note that the Resampling procedure is a patented portfolio optimization process (Richard Michaud and Robert Michaud co-inventors, December 1999, U.S. Patent # 6,003,018, worldwide patents pending). Use of this technology, including even personal use, requires licensing or authorization.

To create the Resampled Efficient Frontier:

• Step 0: Estimate the mean (Mu*) and covariance (Cov*), for example from historical asset returns.
• Step 1: Sample from a multivariate normal distribution with mean=Mu* and covariance=Cov*.
• Step 2: Compute the sample mean and covariance, and use them to create an efficient frontier.
• Step 3: Save the portfolio weights that are on the efficient frontier.
• Repeat Steps 1, 2, 3 for the desired number of samples.
• Step 4: Average over the saved portfolio weights to obtain the final portfolio weights that lie on the Resampled Efficient Frontier.

Update: please note that due to the above patent, I cannot post the source code for the resampling function, which makes the examples below not reproducible.
Let’s compare portfolios that lie on the mean-variance efficient frontier and the Resampled efficient frontier:

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Create Resampled Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia.rebal()
n = ia$n

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# create efficient frontier(s)
ef.risk = portopt(ia, constraints, 50, 'Risk', equally.spaced.risk = T)
ef.risk.resampled = portopt.resampled(ia, constraints, 50, 'Risk Resampled',
nsamples = 200, sample.len= 10)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(c(1,1,2,3), nrow = 2, byrow=T) )
plot.ef(ia, list(ef.risk, ef.risk.resampled), portfolio.risk, F)
plot.transition.map(ef.risk)
plot.transition.map(ef.risk.resampled)

The Resampled Efficient Frontier is visually superior to the mean-variance efficient frontier. It is better diversified: allocation to all asset classes is present, and the transition between portfolios is continuous and smooth, with no sharp changes. The Resampled Efficient Frontier is also, by construction, immune to small changes in the input assumptions.

The idea of the Covariance Shrinkage Estimator is nicely explained in Honey, I Shrunk the Sample Covariance Matrix by Olivier Ledoit and Michael Wolf (2003). Here is the abstract:

The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.

The Ledoit-Wolf Covariance Matrix Estimator is a convex linear combination aF + (1-a)S, where

• S is a sample covariance matrix,
• F is a highly structured estimator,
• a is a shrinkage constant, a number between 0 and 1.

This technique is called shrinkage, since the sample covariance matrix is ‘shrunk’ towards the structured estimator. The Shrinkage Target, F, is modeled by the constant correlation model, which says that all pairwise correlations are identical. The average of all the sample correlations is the estimator of the common constant correlation.
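The shrinkage combination itself is easy to sketch. Below is a pure-Python illustration (not tawny's implementation) that builds the constant correlation target F from a hypothetical sample covariance matrix and blends it with a fixed, hypothetical shrinkage intensity; the optimal intensity formula from the paper is omitted.

```python
import math

# Shrink a sample covariance matrix towards a constant-correlation target.
# The intensity a is fixed here; Ledoit-Wolf derive the optimal value.

# hypothetical sample covariance matrix S
S = [[0.040, 0.012, 0.006],
     [0.012, 0.090, 0.015],
     [0.006, 0.015, 0.025]]
n = len(S)
sd = [math.sqrt(S[i][i]) for i in range(n)]

# average of all pairwise sample correlations
cors = [S[i][j] / (sd[i] * sd[j]) for i in range(n) for j in range(i + 1, n)]
r_bar = sum(cors) / len(cors)

# constant-correlation target F: same variances, common correlation r_bar
F = [[S[i][i] if i == j else r_bar * sd[i] * sd[j] for j in range(n)]
     for i in range(n)]

a = 0.3  # hypothetical shrinkage intensity in [0, 1]
sigma_shrunk = [[a * F[i][j] + (1 - a) * S[i][j] for j in range(n)] for i in range(n)]
```

Note that the variances are untouched (F and S share the same diagonal); only the off-diagonal entries are pulled towards the common-correlation values.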

The Ledoit-Wolf Covariance Shrinkage Estimator is implemented in the tawny R package (cov.shrink function). Here is an example of an efficient frontier using the Ledoit-Wolf Covariance Shrinkage Estimator:

#--------------------------------------------------------------------------
# Create Efficient Frontier using Ledoit-Wolf Covariance Shrinkage Estimator from tawny package
#--------------------------------------------------------------------------

# load / check required packages

ia.original = ia

# compute Ledoit-Wolf Covariance Shrinkage Estimator
ia$cov = tawny::cov.shrink(ia$hist.returns)
ef.risk.cov.shrink = portopt(ia, constraints, 50, 'Risk Ledoit-Wolf', equally.spaced.risk = T)

ia = ia.original

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(c(1,1,2,3), nrow = 2, byrow=T) )
plot.ef(ia, list(ef.risk, ef.risk.cov.shrink), portfolio.risk, F)
plot.transition.map(ef.risk)
plot.transition.map(ef.risk.cov.shrink)

The Efficient Frontier constructed with the Ledoit-Wolf Covariance Shrinkage Estimator is better diversified: allocation to all asset classes is present, and it is also, by construction, immune to small changes in the input assumptions.

Shrinkage is not the only solution; there are many alternative ways to estimate the covariance matrix. For example, Pat Burns, at the Portfolio Probe blog, discusses estimating the covariance matrix with factor models: A test of Ledoit-Wolf versus a factor model.

Another interesting idea, combining Resampling with Shrinkage, was examined in the Resampling vs. Shrinkage for Benchmarked Managers paper by M. Wolf (2006). It is very easy to implement because we only need to change Step 2 in the above algorithm: instead of using the sample covariance matrix, we use its Shrinkage Estimator.

Let’s compare Resampled and Resampled+Shrinkage Efficient Frontiers:

#--------------------------------------------------------------------------
# Create Resampled Efficient Frontier(using Ledoit-Wolf Covariance Shrinkage Estimator)
# As described on page 8 of
# Resampling vs. Shrinkage for Benchmarked Managers by M. Wolf (2006)
#--------------------------------------------------------------------------

ef.risk.resampled.shrink = portopt.resampled(ia, constraints, 50, 'Risk Ledoit-Wolf+Resampled',
nsamples = 200, sample.len= 10,
shrinkage.fn=tawny::cov.shrink)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(c(1:4), nrow = 2, byrow=T) )
plot.ef(ia, list(ef.risk, ef.risk.resampled, ef.risk.resampled.shrink), portfolio.risk, F)
plot.transition.map(ef.risk)
plot.transition.map(ef.risk.resampled)
plot.transition.map(ef.risk.resampled.shrink)

Both frontiers are close and have very similar portfolio compositions, so it is hard to say whether adding the covariance shrinkage estimator to the resampling algorithm is beneficial.

In conclusion, there are findings both for and against the claim that these methods improve out-of-sample returns. However, they will surely reduce portfolio turnover and the transaction costs associated with rebalancing by making portfolios on the efficient frontier immune to small changes in the input assumptions.

There is an interesting collection of papers on Portfolio Optimization at Worcester Polytechnic Institute:

In the next post I will discuss the Black–Litterman model that addresses both Instability and Diversification problems of the portfolios on the mean-variance efficient frontier.

To view the complete source code for this example, please have a look at the aa.solutions2instability.test() function in aa.test.r at github.

## Geometric Efficient Frontier

What is important for an investor? The rate of return is at the top of the list. Does the expected rate of return shown on the mean-variance efficient frontier paint the full picture? If the investor’s investment horizon is longer than one period, for example 5 years, then the true measure of portfolio performance is the Geometric return, which is always less than or equal to the Arithmetic return plotted on the Y axis of the mean-variance efficient frontier.

There is a good discussion of Arithmetic vs Geometric returns in the A Tale of Two Returns blog post by Pat Burns, and another good post, Why Log Returns, by Quantivity.
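A quick numeric illustration of the gap (made-up returns): a portfolio that gains 50% and then loses 50% has an arithmetic mean return of zero, yet loses money over the two periods.

```python
# Made-up illustration: the arithmetic mean overstates multi-period growth.
returns = [0.50, -0.50]

arithmetic = sum(returns) / len(returns)

growth = 1.0
for r in returns:
    growth *= 1 + r
geometric = growth ** (1 / len(returns)) - 1   # about -13.4% per period
```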

I will use the example and methods presented in the DIVERSIFICATION, REBALANCING, AND THE GEOMETRIC MEAN FRONTIER paper by W. Bernstein and D. Wilkinson (1997) to create the Geometric Efficient Frontier.

Let’s first examine portfolios on the mean-variance efficient frontier and compute their historical geometric returns:

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Create Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia.rebal()
n = ia$n

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# create efficient frontier(s)
ef.risk = portopt(ia, constraints, 50, 'Arithmetic', equally.spaced.risk = T)

# compute historical geometric returns
ef.risk.geometric = ef.risk
ef.risk.geometric$name = 'Geometric'
ef.risk.geometric$return = portfolio.geometric.return(ef.risk$weight, ia)

# Plot multiple Efficient Frontiers and Transition Maps
plot.ef(ia, list(ef.risk, ef.risk.geometric), portfolio.risk, T)

The efficient frontier labeled ‘Geometric’ lies below the ‘Arithmetic’ efficient frontier and reaches its maximum Geometric mean at the 21.6% risk level. All portfolios with risk greater than 21.6% are not efficient for an investor with a multi-period investment horizon.

The DIVERSIFICATION, REBALANCING, AND THE GEOMETRIC MEAN FRONTIER paper by W. Bernstein and D. Wilkinson (1997) discusses four methods to approximate the historical geometric mean calculation using arithmetic means. All computations are based on the following approximation between the Arithmetic (R) and Geometric (G) means (page 8): $G \approx R - \frac{aV}{2(1+bR)} \newline \newline R \approx \frac{2G+aV}{1-bG+\sqrt{(1+bG)^2+2abV}}$

To reproduce the results from the paper, I will follow the geometric efficient frontier construction methods outlined on page 14. The A(1,0) and A(1,1) geometric efficient frontiers are constructed from the classical Markowitz efficient frontier, which uses the arithmetic means as inputs. To compute the portfolio geometric means I use the above approximation.

The G(1,0) and G(1,1) geometric efficient frontiers are constructed from the classical Markowitz efficient frontier using individual pseudo-arithmetic means as inputs. I.e. I first convert the individual geometric means to pseudo-arithmetic means and use them in the optimization. To compute the portfolio geometric means I use the same approximation as for A(1,0) and A(1,1).

###############################################################################
# Functions to convert between Arithmetic and Geometric means
###############################################################################
# page 8, DIVERSIFICATION, REBALANCING, AND THE GEOMETRIC MEAN FRONTIER by W. Bernstein and D. Wilkinson (1997)
###############################################################################
geom2aritm <- function(G, V, a, b)
{
(2*G + a*V^2) / (1 - b*G + sqrt((1+b*G)^2 + 2*a*b*V^2))
}

aritm2geom <- function(R, V, a, b)
{
R - a*V^2 / (2*(1 + b*R))
}

#--------------------------------------------------------------------------
# Following paper's notation : A(1,0) and A(1,1) page 8, 14
#--------------------------------------------------------------------------
# A(1,0)
ef.risk.A10 = ef.risk
ef.risk.A10$name = 'A(1;0)'
ef.risk.A10$return = apply( cbind(ef.risk$return, ef.risk$risk), 1,
	function(x) aritm2geom(x[1], x[2], 1, 0) )
# A(1,1)
ef.risk.A11 = ef.risk
ef.risk.A11$name = 'A(1;1)'
ef.risk.A11$return = apply( cbind(ef.risk$return, ef.risk$risk), 1,
	function(x) aritm2geom(x[1], x[2], 1, 1) )
# G(1,0)
ia.G = ia
ia.G$expected.return = apply( cbind(ia$geometric.return, ia$risk), 1,
	function(x) geom2aritm(x[1], x[2], 1, 0) )
ef.risk.G10 = portopt(ia.G, constraints, 50, 'G(1;0)', equally.spaced.risk = T)
ef.risk.G10$return = apply( cbind(ef.risk.G10$return, ef.risk.G10$risk), 1,
	function(x) aritm2geom(x[1], x[2], 1, 0) )
# G(1,1)
ia.G$expected.return = apply( cbind(ia$geometric.return, ia$risk), 1,
	function(x) geom2aritm(x[1], x[2], 1, 1) )
ef.risk.G11 = portopt(ia.G, constraints, 50, 'G(1;1)', equally.spaced.risk = T)
ef.risk.G11$return = apply( cbind(ef.risk.G11$return, ef.risk.G11$risk), 1,
	function(x) aritm2geom(x[1], x[2], 1, 1) )

# Plot multiple Efficient Frontiers
layout( matrix(1:4, nrow = 2) )
plot.ef(ia, list(ef.risk, ef.risk.geometric, ef.risk.A10), portfolio.risk, F)
plot.ef(ia, list(ef.risk, ef.risk.geometric, ef.risk.A11), portfolio.risk, F)
plot.ef(ia, list(ef.risk, ef.risk.geometric, ef.risk.G10), portfolio.risk, F)
plot.ef(ia, list(ef.risk, ef.risk.geometric, ef.risk.G11), portfolio.risk, F)

The results closely match the paper’s results on page 27.

But we can do better. In the recent paper On the Relationship between Arithmetic and Geometric Returns by D. Mindlin (2011), the author presents a detailed review of four approximation methods for converting between Arithmetic and Geometric Returns. The overall winner is relationship (A4) (page 8): $G \approx \frac{(1+R)}{\sqrt{1+\frac{V}{(1+R)^2}}}-1 \newline \newline R \approx (1+G)\sqrt{\frac{1+\sqrt{1+\frac{4V}{(1+G)^2}}}{2}}-1$

Let’s compare the geometric mean approximation given by the A4 method in the same framework as above:

###############################################################################
# Functions to convert between Arithmetic and Geometric means
###############################################################################
# page 14, A4, On the Relationship between Arithmetic and Geometric Returns by D. Mindlin
###############################################################################
geom2aritm4 <- function(G, V)
{
(1+G)*sqrt(1/2 + 1/2*sqrt(1 + 4*V^2/(1+G)^2)) - 1
}

aritm2geom4 <- function(R, V)
{
(1+R)/(sqrt(1 + V^2/(1+R)^2)) - 1
}
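As a quick sanity check (not from the paper), the two A4 formulas are algebraic inverses of each other, so a round-trip conversion should recover the original arithmetic mean exactly. A pure-Python check with made-up numbers:

```python
import math

# The two A4 conversion formulas, mirroring the R functions above.
def aritm2geom4(R, V):
    # G ~ (1+R) / sqrt(1 + V^2/(1+R)^2) - 1
    return (1 + R) / math.sqrt(1 + V**2 / (1 + R)**2) - 1

def geom2aritm4(G, V):
    # R ~ (1+G) * sqrt(1/2 + 1/2 * sqrt(1 + 4*V^2/(1+G)^2)) - 1
    return (1 + G) * math.sqrt(0.5 + 0.5 * math.sqrt(1 + 4 * V**2 / (1 + G)**2)) - 1

R, V = 0.10, 0.20          # made-up arithmetic mean and risk
G = aritm2geom4(R, V)      # geometric mean is below the arithmetic mean
R_back = geom2aritm4(G, V) # round trip recovers R
```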

#--------------------------------------------------------------------------
# Use A4 method to convert between Arithmetic and Geometric means
#--------------------------------------------------------------------------
# A
ef.risk.A4 = ef.risk
ef.risk.A4$name = 'Risk A4'
ef.risk.A4$return = apply( cbind(ef.risk$return, ef.risk$risk), 1,
	function(x) aritm2geom4(x[1], x[2]) )

# G
ia.G = ia
ia.G$expected.return = apply( cbind(ia$geometric.return, ia$risk), 1,
	function(x) geom2aritm4(x[1], x[2]) )
ef.risk.G4 = portopt(ia.G, constraints, 50, 'Risk G4', equally.spaced.risk = T)
ef.risk.G4$return = apply( cbind(ef.risk.G4$return, ef.risk.G4$risk), 1,
	function(x) aritm2geom4(x[1], x[2]) )

# Plot multiple Efficient Frontiers
layout( matrix(1:2, nrow = 2) )
plot.ef(ia, list(ef.risk, ef.risk.geometric, ef.risk.A4), portfolio.risk, F)
plot.ef(ia, list(ef.risk, ef.risk.geometric, ef.risk.G4), portfolio.risk, F)

The results closely match the Geometric Efficient Frontier and outperform the A(1,0), A(1,1), G(1,0), G(1,1) approximations.

The discussion above was based on the Geometric Efficient Frontier consisting of portfolios on the Arithmetic Efficient Frontier. But why should we expect both the Geometric Efficient Frontier and the Arithmetic Efficient Frontier to contain the same portfolios?

Let’s create the True Geometric Efficient Frontier. I’m not aware of a way to formulate this problem as a linear program, so I will use a nonlinear programming solver, Rdonlp2, which is based on the donlp2 routine developed and copyrighted by Prof. Dr. Peter Spellucci. The following code might not execute on your computer because Rdonlp2 is only available for R version 2.9 or below.

#--------------------------------------------------------------------------
# Create True Geometric Efficient Frontier
#--------------------------------------------------------------------------
ef.true.geometric = ef.risk
ef.true.geometric$name = 'True Geometric'
constraints$x0 = ef.risk$weight[1,]

for(i in 1:len(ef.risk$risk)) {
cat('i =', i, '\n')
ef.true.geometric$weight[i,] = max.geometric.return.portfolio(ia, constraints, ef.risk$risk[i], ef.risk$risk[i])
constraints$x0 = ef.true.geometric$weight[i,]
}
ef.true.geometric$return = portfolio.geometric.return(ef.true.geometric$weight, ia)

# Plot multiple Efficient Frontiers
layout( matrix(1:4, nrow = 2) )
plot.ef(ia, list(ef.risk.geometric, ef.risk, ef.true.geometric), portfolio.risk, T, T)
plot.ef(ia, list(ef.true.geometric, ef.risk, ef.risk.geometric), portfolio.risk, T, T)

The source code for the max.geometric.return.portfolio() function is at the end of this post. The True Geometric Efficient Frontier and the Geometric Efficient Frontier consisting of portfolios on the Arithmetic Efficient Frontier look identical. This conclusion is also supported by the paper On the Maximization of the Geometric Mean with Lognormal Return Distribution by E. Elton and M. Gruber (1974). Following is the abstract:

In this paper we discuss the relevancy of the geometric mean as a portfolio selection criteria. A procedure for finding that portfolio with the highest geometric mean when returns on portfolios are lognormally distributed is presented. The development of this algorithm involves a proof that the portfolio with maximum geometric mean lies on the efficient frontier in arithmetic mean variance space.

The last step is to check that the portfolios found by nonlinear optimization are global maximums. I will create 1,000,000 random portfolios that satisfy the constraints and plot the True Geometric Efficient Frontier and the random portfolios on the same chart.

#--------------------------------------------------------------------------
# Double check that NonLinear Optimization finds global maximums by
# creating random portfolios that satisfy constraints.
# Plot True Geometric Efficient Frontier and random portfolios, check
# that all portfolios lie below the efficient frontier.
#--------------------------------------------------------------------------

# Generate random portfolios
ef.random = list()
ef.random$name = 'Random'
ef.random$weight = randfixedsum(1000000, n, 1, 0, 1)
ef.random$risk = portfolio.risk(ef.random$weight, ia)
ef.random$return = portfolio.geometric.return(ef.random$weight, ia)

# Plot True Geometric Efficient Frontier and random portfolios
layout(1)
plot(100*ef.random$risk, 100*ef.random$return, type='p', pch=20,
	xlim = 100*range(ef.random$risk, ef.true.geometric$risk),
	ylim = 100*range(ef.random$return, ef.true.geometric$return),
	main = 'True Geometric Efficient Frontier vs Random Portfolios',
	xlab = 'portfolio.risk', ylab = 'Return'
)
lines(100*ef.true.geometric$risk, 100*ef.true.geometric$return, type='l', lwd=2, col = 'red')

The nonlinear optimization found global maximums: all random portfolios (black dots) lie below the True Geometric Efficient Frontier (red line). To generate the random portfolios I used the randfixedsum function (Random Vectors with Fixed Sum) created by Roger Stafford, which randomly and uniformly generates vectors with a specified sum and values in a specified interval.

The message that I want to convey in this post is to always check the Geometric Efficient Frontier, constructed from the portfolios on the Arithmetic Efficient Frontier, and make sure that you are compensated for the risk in your portfolios. I.e. the more risk you take on, the greater the expected Geometric return of your portfolios should be.

To view the complete source code for this example, please have a look at the aa.arithmetic.geometric.test() function in aa.test.r at github.

The source code for the max.geometric.return.portfolio() function:

max.geometric.return.portfolio <- function
(
ia, 		# input assumptions
constraints, 	# constraints
min.risk,
max.risk
)
{
n = ia$n
nt = nrow(ia$hist.returns)

require(Rdonlp2)

# Geometric return
fn <- function(x){
portfolio.returns = x %*% t(ia$hist.returns)
prod(1 + portfolio.returns)
}

# control structure, fnscale - set -1 for maximization
cntl <- donlp2.control(silent = T, fnscale = -1, iterma =10000, nstep = 100, epsx = 1e-10)
# lower/upper bounds
par.l = constraints$lb
par.u = constraints$ub

# initial guess
p = rep(1, nrow(constraints$A))
if(!is.null(constraints$x0)) p = constraints$x0

# linear constraints
A = t(constraints$A)
lin.l = constraints$b
lin.u = constraints$b
lin.u[ -c(1:constraints$meq) ] = +Inf

# Nonlinear constraint: portfolio risk
nlcon1 <- function(x){
sqrt(t(x) %*% ia$cov %*% x)
}

# find solution
sol = donlp2(p, fn, par.lower=par.l, par.upper=par.u,
A=A, lin.u=lin.u, lin.l=lin.l, control=cntl,
nlin=list(nlcon1),
nlin.upper=c(max.risk),
nlin.lower=c(min.risk)
)

x = sol$par
return( x )
}

## Maximizing Omega Ratio

November 3, 2011 1 comment

The Omega Ratio was introduced by Keating and Shadwick in 2002. It measures the ratio of average portfolio wins over average portfolio losses for a given target return L.

Let x.i, i = 1,…,n be the weights of the instruments in the portfolio. We suppose that j = 1,…,T scenarios of returns with equal probabilities are available. I will use historical asset returns as scenarios. Let us denote by r.ij the return of the i-th asset in scenario j. The portfolio’s Omega Ratio can be written as $\Omega(L) = \frac{E\left [ max(\sum_{i=1}^{n}r_{ij}x_{i} - L, 0) \right ]}{E\left [ max(L - \sum_{i=1}^{n}r_{ij}x_{i}, 0) \right ]}$

I will use the methods presented in the Optimizing Omega paper by H. Mausser, D. Saunders, and L. Seco (2006) to construct optimal portfolios that maximize the Omega Ratio. The maximization problem (pages 5-6) can be written as $\Omega^{*}(L) = max_{x,u,d}\frac{\frac{1}{T} \sum_{j=1}^{T}u_{j}}{\frac{1}{T} \sum_{j=1}^{T}d_{j}} \newline\newline \sum_{i=1}^{n}r_{ij}x_{i} - u_{j}+d_{j} = L, j=1,...,T \newline\newline u_{j},d_{j}\geq 0, j=1,...,T \newline\newline u_{j} \cdot d_{j} = 0, j=1,...,T$

It can be formulated as a linear programming problem with the following transformation $t=\frac{1}{\frac{1}{T} \sum_{j=1}^{T}d_{j}} \newline\newline \Omega^{*}(L) = max_{\tilde{x},\tilde{u},\tilde{d},t}\frac{1}{T} \sum_{j=1}^{T}\tilde{u}_{j} \newline\newline \sum_{i=1}^{n}r_{ij}\tilde{x}_{i} - \tilde{u}_{j}+\tilde{d}_{j} = Lt, j=1,...,T \newline\newline \frac{1}{T}\sum_{j=1}^{T}\tilde{d}_{j} = 1 \newline\newline \tilde{u}_{j},\tilde{d}_{j}\geq 0, j=1,...,T$

This method will only work for $\Omega^{*}(L) > 1$. In the case $\Omega^{*}(L) \leqslant 1$, I will use a nonlinear programming solver, Rdonlp2, which is based on the donlp2 routine developed and copyrighted by Prof. Dr. Peter Spellucci. The following code might not execute on your computer because Rdonlp2 is only available for R version 2.9 or below.
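Before the optimization code, it may help to see that the Omega Ratio itself is just the average gain above L divided by the average shortfall below L; a pure-Python sketch with made-up returns:

```python
# Pure-Python sketch of the Omega Ratio for one return series (numbers made up).
L = 0.0                                  # target return
returns = [0.03, -0.02, 0.01, 0.04, -0.01, 0.02]

# average win above L and average loss below L
gains = sum(max(r - L, 0) for r in returns) / len(returns)
losses = sum(max(L - r, 0) for r in returns) / len(returns)

omega = gains / losses
```

For these made-up returns the ratio is well above 1, so the linear programming formulation above would apply.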
max.omega.portfolio <- function
(
ia, # input assumptions
constraints # constraints
)
{
n = ia$n
nt = nrow(ia$hist.returns)

constraints0 = constraints
omega = ia$parameters.omega

#--------------------------------------------------------------------------
# Linear Programming, Omega > 1, Case
#--------------------------------------------------------------------------

# objective : Omega
# [ SUM <over j> 1/T * u.j ]
f.obj = c(rep(0, n), (1/nt) * rep(1, nt), rep(0, nt), 0)

constraints = add.variables(2*nt + 1, constraints, lb = c(rep(0,2*nt),-Inf))

# Transformation for inequalities
# Aw < b => Aw1 - bt < 0
constraints$A[n + 2*nt + 1, ] = -constraints$b
constraints$b[] = 0

# Transformation for Lower/Upper bounds, use same transformation
index = which( constraints$ub[1:n] < +Inf )
if( len(index) > 0 ) {
a = rbind( diag(n), matrix(0, 2*nt, n), -constraints$ub[1:n])
constraints = add.constraints(a[, index], rep(0, len(index)), '<=', constraints)
}

index = which( constraints$lb[1:n] > -Inf )
if( len(index) > 0 ) {
a = rbind( diag(n), matrix(0, 2*nt, n), -constraints$lb[1:n])
constraints = add.constraints(a[, index], rep(0, len(index)), '>=', constraints)
}

constraints$lb[1:n] = -Inf
constraints$ub[1:n] = Inf

# [ SUM <over i> r.ij * x.i ] - u.j + d.j - L * t = 0, for each j = 1,...,T
a = rbind( matrix(0, n, nt), -diag(nt), diag(nt), -omega)
a[1 : n, ] = t(ia$hist.returns)
constraints = add.constraints(a, rep(0, nt), '=', constraints)

# [ SUM <over j> 1/T * d.j ] = 1
constraints = add.constraints(c( rep(0,n), rep(0,nt), (1/nt) * rep(1,nt), 0), 1, '=', constraints)

# setup linear programming
f.con = constraints$A
f.dir = c(rep('=', constraints$meq), rep('>=', len(constraints$b) - constraints$meq))
f.rhs = constraints$b

# find optimal solution
x = NA
sol = try(solve.LP.bounds('max', f.obj, t(f.con), f.dir, f.rhs,
lb = constraints$lb, ub = constraints$ub), TRUE)

if(!inherits(sol, 'try-error')) {
x0 = sol$solution[1:n]
u = sol$solution[(1+n):(n+nt)]
d = sol$solution[(n+nt+1):(n+2*nt)]
t = sol$solution[(n+2*nt+1):(n+2*nt+1)]

# Reverse Transformation
x = x0/t
}

#--------------------------------------------------------------------------
# NonLinear Programming, Omega <= 1, Case
#--------------------------------------------------------------------------

# Check if any u.j * d.j != 0 or the LP solver encountered an error
if( any( u*d != 0 ) || sol$status != 0 ) {
require(Rdonlp2)

constraints = constraints0

# compute omega ratio
fn <- function(x){
portfolio.returns = x %*% t(ia$hist.returns)
mean(pmax(portfolio.returns - omega, 0)) / mean(pmax(omega - portfolio.returns, 0))
}

# control structure, fnscale = -1 for maximization
cntl <- donlp2.control(silent = T, fnscale = -1, iterma = 10000, nstep = 100, epsx = 1e-10)

# lower/upper bounds
par.l = constraints$lb
par.u = constraints$ub

# initial guess
if(!is.null(constraints$x0)) p = constraints$x0

# linear constraints
A = t(constraints$A)
lin.l = constraints$b
lin.u = constraints$b
lin.u[ -c(1:constraints$meq) ] = +Inf

# find solution
sol = donlp2(p, fn, par.lower=par.l, par.upper=par.u, A=A, lin.u=lin.u, lin.l=lin.l, control=cntl)
x = sol$par
}

return( x )
}
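The linear-programming branch of this function can also be sketched outside of R. The following Python code is my own illustration using scipy, not part of SIT: it implements the same Mausser-Saunders-Seco linearization, assuming a long-only, fully invested portfolio and a finite optimum (i.e. every feasible portfolio has some shortfall below the target, so Omega is finite):

```python
import numpy as np
from scipy.optimize import linprog

def max_omega_lp(scenario_returns, target):
    """Maximize Omega via the linearized fractional program.

    Variables: [x~ (n), u~ (T), d~ (T), t]; maximize (1/T)*sum(u~) subject to
      R x~ - u~ + d~ = target * t   (one row per scenario)
      (1/T) * sum(d~) = 1           (mean shortfall scaled to 1)
      sum(x~) = t                   (budget constraint, scaled by t)
      x~, u~, d~, t >= 0            (long-only)
    """
    T, n = scenario_returns.shape
    # linprog minimizes, so negate the objective on the u~ block
    c = np.concatenate([np.zeros(n), -np.ones(T) / T, np.zeros(T), [0.0]])
    A_eq = np.zeros((T + 2, n + 2 * T + 1))
    A_eq[:T, :n] = scenario_returns
    A_eq[:T, n:n + T] = -np.eye(T)
    A_eq[:T, n + T:n + 2 * T] = np.eye(T)
    A_eq[:T, -1] = -target
    A_eq[T, n + T:n + 2 * T] = 1.0 / T
    A_eq[T + 1, :n] = 1.0
    A_eq[T + 1, -1] = -1.0
    b_eq = np.zeros(T + 2)
    b_eq[T] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 2 * T + 1))
    t = res.x[-1]
    return res.x[:n] / t  # reverse transformation, as in the R code above

# two assets, four equally likely scenarios (made-up numbers)
r = np.array([[0.05, -0.02], [-0.03, 0.04], [-0.02, -0.01], [0.02, 0.03]])
w = max_omega_lp(r, target=0.0)
```

The `x = x0/t` reverse transformation in the R function corresponds to the final division by `t` here.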


First, let’s examine how the traditional mean-variance efficient frontier looks in the Omega Ratio framework.

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Create Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n

# 0 <= x.i <= 0.8
constraints = new.constraints(n, lb = 0, ub = 0.8)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# Omega - http://en.wikipedia.org/wiki/Omega_ratio
ia$parameters.omega = 13/100
ia$parameters.omega = 12/100

# convert annual to monthly
ia$parameters.omega = ia$parameters.omega / 12

# create efficient frontier(s)
ef.risk = portopt(ia, constraints, 50, 'Risk')

# Plot Omega Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2, byrow=T) )

# weights
rownames(ef.risk$weight) = paste('Risk','weight',1:50,sep='_')
plot.omega(ef.risk$weight[c(1,10,40,50), ], ia)

# assets
temp = diag(n)
rownames(temp) = ia$symbols
plot.omega(temp, ia)

# mean-variance efficient frontier in the Omega Ratio framework
plot.ef(ia, list(ef.risk), portfolio.omega, T, T)


Portfolio returns and the portfolio Omega Ratio increase monotonically as we move along the traditional mean-variance efficient frontier in the Omega Ratio framework. The least risky portfolios (Risk_weight_1, Risk_weight_10) have a lower Omega Ratio for the 13% threshold (target return), and the most risky portfolios (Risk_weight_40, Risk_weight_50) have a higher Omega Ratio.

To create an efficient frontier in the Omega Ratio framework, I propose first computing the range of returns in the mean-variance framework, then splitting this range into a number of equally spaced points, one per portfolio. For each point, I propose finding the portfolio that has an expected return less than the given point’s expected return and the maximum Omega Ratio.
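A minimal sketch of this procedure in Python may help; it is my own illustration with scipy (not SIT code), it assumes long-only weights, and it pins each portfolio’s expected return exactly to a grid level rather than using an inequality:

```python
import numpy as np
from scipy.optimize import minimize

def omega(w, r, target):
    # Omega Ratio of portfolio w under scenario returns r (T x n)
    p = r @ w
    return np.maximum(p - target, 0).mean() / np.maximum(target - p, 0).mean()

def omega_frontier(r, target, n_portfolios=5):
    # grid of return levels between the lowest and highest asset mean
    means = r.mean(axis=0)
    levels = np.linspace(means.min(), means.max(), n_portfolios)
    n = r.shape[1]
    frontier = []
    for lvl in levels:
        cons = [{'type': 'eq', 'fun': lambda w: w.sum() - 1.0},
                {'type': 'eq', 'fun': lambda w, lvl=lvl: w @ means - lvl}]
        # maximize Omega = minimize its negative at this return level
        res = minimize(lambda w: -omega(w, r, target), np.ones(n) / n,
                       bounds=[(0.0, 1.0)] * n, constraints=cons)
        frontier.append(res.x)
    return np.array(frontier)

# two assets, four equally likely scenarios (made-up numbers)
r = np.array([[0.05, -0.02], [-0.03, 0.04], [-0.02, -0.01], [0.02, 0.03]])
W = omega_frontier(r, target=0.0)
```

This uses a general nonlinear solver at every grid point for brevity; the R code below instead solves the linear program when Omega exceeds 1.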

#--------------------------------------------------------------------------
# Create Efficient Frontier in Omega Ratio framework
#--------------------------------------------------------------------------
# Create maximum Omega Efficient Frontier
ef.omega = portopt.omega(ia, constraints, 50, 'Omega')

# Plot Omega Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2, byrow=T) )

# risk weights
plot.omega(ef.risk$weight[c(1,10,40,50), ], ia)

# omega weights
rownames(ef.omega$weight) = paste('Omega','weight',1:50,sep='_')
plot.omega(ef.omega$weight[c(1,10,40,50), ], ia)

# portfolio
plot.ef(ia, list(ef.omega, ef.risk), portfolio.omega, T, T)


The Omega Ratio efficient frontier looks similar to the traditional mean-variance efficient frontier for expected returns greater than the 13% threshold (target return). However, there is a big shift in allocation and an increase in the Omega Ratio for portfolios with expected returns below the 13% threshold. The Omega Ratio efficient frontier looks very inefficient in the Risk framework for portfolios with expected returns below the 13% threshold. But remember that the goal of this optimization was to find portfolios that maximize the Omega Ratio for the given user constraints. Overall, I find the results a bit radical for portfolios with expected returns below the 13% threshold, and these results definitely call for more investigation.

To view the complete source code for this example, please have a look at the aa.omega.test() function in aa.test.r at github.

## Minimizing Downside Risk

In the Maximum Loss and Mean-Absolute Deviation risk measures post, and the Expected shortfall (CVaR) and Conditional Drawdown at Risk (CDaR) post, I started the discussion about alternative risk measures we can use to construct the efficient frontier. Another alternative risk measure I want to discuss is Downside Risk.

In the traditional mean-variance optimization, returns both above and below the mean contribute to the portfolio risk (usually measured by the standard deviation of the portfolio’s return). In the Downside Risk framework, only returns that are below the mean or below the target rate of return (MAR) contribute to the portfolio risk. I will discuss two Downside Risk measures: Lower Semi-Variance and Lower Semi-Absolute Deviation.

I will use the methods presented in the Portfolio Optimization under Lower Partial Risk Measure by H. Konno, H. Waki and A. Yuuki (2002) paper to construct optimal portfolios.

Let x.i, i= 1,…,n be the weights of instruments in the portfolio.
We suppose that j= 1,…,T scenarios of returns with equal probabilities are available. I will use historical asset returns as scenarios. Let us denote by r.ij the return of the i-th asset in scenario j. The portfolio’s Lower Semi-Absolute Deviation (page 6) can be written as

$\frac{1}{T}\sum_{j=1}^{T}\left | \tau - \sum_{i=1}^{n}r_{ij}x_{i} \right |_{-} \newline\newline =\frac{1}{T}\sum_{j=1}^{T}max\left \{ \tau - \sum_{i=1}^{n}r_{ij}x_{i},0 \right \}$

It can be formulated as a linear programming problem

$\min_{}{}\frac{1}{T}\sum_{j=1}^{T}z_{j} \newline\newline \tau - \sum_{i=1}^{n}r_{ij}x_{i} \leq z_{j}, j=1,...,T \newline\newline z_{j} \geq 0, j=1,...,T$

This linear programming problem can be easily implemented

min.mad.downside.portfolio <- function
(
ia, # input assumptions
constraints # constraints
)
{
n = ia$n
nt = nrow(ia$hist.returns)
mar = ia$parameters.mar

# objective : Mean-Lower-Semi-Absolute Deviation (M-LSAD)
#  1/T * [ SUM <over j> z.j ]
f.obj = c( rep(0, n), (1/nt) * rep(1, nt) )

constraints = add.variables(nt, constraints, lb = 0)

#  MAR - [ SUM <over i> r.ij * x.i ] <= z.j , for each j = 1,...,T
a = rbind( matrix(0, n, nt), diag(nt))
a[1 : n, ] = t(ia$hist.returns)
constraints = add.constraints(a, rep(mar, nt), '>=', constraints)

# setup linear programming
f.con = constraints$A
f.dir = c(rep('=', constraints$meq), rep('>=', len(constraints$b) - constraints$meq))
f.rhs = constraints$b

# find optimal solution
x = NA
sol = try(solve.LP.bounds('min', f.obj, t(f.con), f.dir, f.rhs,
lb = constraints$lb, ub = constraints$ub), TRUE)

if(!inherits(sol, 'try-error')) {
x = sol$solution[1:n]
}

return( x )
}


The portfolio’s Lower Semi-Variance (page 7) can be written as

$\frac{1}{T}\sum_{j=1}^{T}\left | \tau - \sum_{i=1}^{n}r_{ij}x_{i} \right |_{-}^{2} \newline\newline =\frac{1}{T}\sum_{j=1}^{T}max\left \{ \tau - \sum_{i=1}^{n}r_{ij}x_{i},0 \right \}^{2}$

It can be formulated as a quadratic programming problem

$\min_{}{}\frac{1}{T}\sum_{j=1}^{T}z_{j}^{2} \newline\newline \tau - \sum_{i=1}^{n}r_{ij}x_{i} \leq z_{j}, j=1,...,T \newline\newline z_{j} \geq 0, j=1,...,T$

This quadratic programming problem can be implemented

min.risk.downside.portfolio <- function
(
ia, # input assumptions
constraints # constraints
)
{
n = ia$n
nt = nrow(ia$hist.returns)
mar = ia$parameters.mar

# objective : Mean-Lower Semi-Variance (MV)
#  1/T * [ SUM <over j> z.j^2 ]
f.obj = c( rep(0, n), (1/nt) * rep(1, nt) )

constraints = add.variables(nt, constraints, lb = 0)

#  MAR - [ SUM <over i> r.ij * x.i ] <= z.j , for each j = 1,...,T
a = rbind( matrix(0, n, nt), diag(nt))
a[1 : n, ] = t(ia$hist.returns)
constraints = add.constraints(a, rep(mar, nt), '>=', constraints)

# setup quadratic minimization
Dmat = diag( len(f.obj) )
diag(Dmat) = f.obj
if(!is.positive.definite(Dmat)) {
Dmat <- make.positive.definite(Dmat)
}

# find optimal solution
x = NA
sol = try(solve.QP.bounds(Dmat = Dmat, dvec = rep(0, nrow(Dmat)),
Amat=constraints$A, bvec=constraints$b, constraints$meq,
lb = constraints$lb, ub = constraints$ub),TRUE)

if(!inherits(sol, 'try-error')) {
x = sol$solution[1:n]
}

return( x )
}


Let’s examine the efficient frontiers computed under different risk measures, using the historical input assumptions presented in the Introduction to Asset Allocation post:

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Create Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n

# 0 <= x.i <= 0.8
constraints = new.constraints(n, lb = 0, ub = 0.8)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# Set target return (or Minimum Acceptable Returns (MAR))
# and consider only returns that are less than the target
ia$parameters.mar = 0/100

# convert annual to monthly
ia$parameters.mar = ia$parameters.mar / 12

# create efficient frontier(s)

ef.risk = portopt(ia, constraints, 50, 'Risk')
ef.risk.downside = portopt(ia, constraints, 50, 'S-Risk', min.risk.downside.portfolio)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2) )
plot.ef(ia, list(ef.risk.downside, ef.risk), portfolio.risk, F)
plot.ef(ia, list(ef.risk.downside, ef.risk), portfolio.risk.downside, F)

plot.transition.map(ef.risk)
plot.transition.map(ef.risk.downside)
The downside risk efficient frontiers, prefixed with S-, are similar to their full counterparts and have similar transition maps in our example.
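As a cross-check on the Lower Semi-Absolute Deviation linear program above, here is a self-contained Python sketch; it is my own illustration using scipy (not SIT code) and assumes a long-only, fully invested portfolio:

```python
import numpy as np
from scipy.optimize import linprog

def min_lsad_portfolio(scenario_returns, mar):
    """Minimize mean shortfall below MAR, the LP from Konno et al.

    Variables: [x (n weights), z (T shortfalls)]; minimize (1/T)*sum(z)
    subject to: mar - R x <= z ; z >= 0 ; sum(x) = 1 ; x >= 0.
    """
    T, n = scenario_returns.shape
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    # rewrite mar - R x <= z as -R x - z <= -mar for linprog's A_ub form
    A_ub = np.hstack([-scenario_returns, -np.eye(T)])
    b_ub = np.full(T, -mar)
    A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T))
    return res.x[:n]

# two assets, four equally likely scenarios (made-up numbers)
r = np.array([[0.05, -0.02], [-0.03, 0.04], [-0.02, -0.01], [0.02, 0.03]])
w = min_lsad_portfolio(r, mar=0.0)
```

At the optimum each z.j equals the scenario's shortfall below the MAR, so the objective value is exactly the portfolio's Lower Semi-Absolute Deviation.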

Another way to approach the minimization of the Lower Semi-Variance that I want to explore later is presented in the Mean-Semivariance Optimization: A Heuristic Approach paper by Javier Estrada.
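For comparison with the quadratic program above, the Lower Semi-Variance objective can also be minimized directly with a general-purpose solver, since the squared shortfall is smooth. The following Python sketch is my own illustration with scipy (not SIT code), assuming long-only, fully invested weights:

```python
import numpy as np
from scipy.optimize import minimize

def min_semivariance_portfolio(scenario_returns, mar):
    # minimize (1/T) * sum( max(mar - R w, 0)^2 ) over the simplex
    n = scenario_returns.shape[1]
    def semivar(w):
        shortfall = np.maximum(mar - scenario_returns @ w, 0)
        return (shortfall ** 2).mean()
    res = minimize(semivar, np.ones(n) / n,
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}])
    return res.x

# two assets, four equally likely scenarios (made-up numbers)
r = np.array([[0.05, -0.02], [-0.03, 0.04], [-0.02, -0.01], [0.02, 0.03]])
w = min_semivariance_portfolio(r, mar=0.0)
```

Unlike the LP for the Lower Semi-Absolute Deviation, this objective penalizes large shortfalls quadratically, which is what the quadratic program in the R code above encodes through the Dmat diagonal.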

To view the complete source code for this example, please have a look at the aa.downside.test() function in aa.test.r at github.