Multiple Factor Model – Building Risk Model
This is the fourth post in the series about Multiple Factor Models. I will build on the code presented in the prior post, Multiple Factor Model – Building CSFB Factors, and show how to build a multiple factor risk model.
The outline of this post:
- Run cross sectional regression to estimate factor returns
- Compute factor covariance using shrinkage estimator
- Forecast stock-specific variances using GARCH(1,1)
- Compute portfolio risk using the multiple factor model and compare it to the historical standard deviation of portfolio returns
Let's start by loading the CSFB factors that we saved at the end of the prior post. [If you are missing the data.factors.Rdata file, please run the fm.all.factor.test() function first to create and save the CSFB factors.] Next, I will run a cross sectional regression to estimate factor returns.
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load factor data that we saved at the end of the fm.all.factor.test function
#******************************************************************
load.packages('quantmod,abind')

load(file='data.factors.Rdata')
# remove Composite Average factor
factors.avg = factors.avg[which(names(factors.avg) != 'AVG')]

#*****************************************************************
# Run cross sectional regression to estimate factor returns
#******************************************************************
nperiods = nrow(next.month.ret)
n = ncol(next.month.ret)

# create sector dummy variables: binary 0/1 values for each sector
nsectors = len(levels(sectors))
sectors.matrix = array(double(), c(nperiods, n, nsectors))
dimnames(sectors.matrix)[[3]] = levels(sectors)
for(j in levels(sectors)) {
    sectors.matrix[,,j] = matrix(sectors == j, nr=nperiods, nc=n, byrow=T)
}

# create matrix for each factor
factors.matrix = abind(factors.avg, along = 3)

# combine sector dummies and all factors
all.data = abind(sectors.matrix, factors.matrix)

# create betas and specific.return
beta = all.data[,1,] * NA
specific.return = next.month.ret * NA
nfactors = ncol(beta)

# append next.month.ret to all.data
all.data = abind(next.month.ret, all.data, along = 3)
dimnames(all.data)[[3]][1] = 'Ret'

# estimate betas (factor returns)
for(t in 12:(nperiods-1)) {
    temp = all.data[t:t,,]
    x = temp[,-c(1:2)]    # drop returns and the first sector dummy
    y = temp[,1]
    b = lm(y~x)$coefficients

    # add the intercept back to the remaining sector betas
    b[2:nsectors] = b[1] + b[2:nsectors]
    beta[(t+1),] = b

    specific.return[(t+1),] = y - rowSums(temp[,-1] * matrix(b, n, nfactors, byrow=T), na.rm=T)
}
To estimate factor returns (betas), we solve for the coefficients of the following multiple factor model:
Ret = b1 * F1 + b2 * F2 + ... + bn * Fn + e

where
- b1...bn are the estimated factor returns
- F1...Fn are the factor exposures, i.e. the sector dummies and CSFB factor exposures
- e is the stock-specific return, not captured by the factors F1...Fn
Note that we cannot include the first sector dummy variable in the regression; otherwise the sector dummies would be linearly dependent on the intercept (together they sum to one). Instead, the first sector dummy is dropped and its sector effect is absorbed into the regression intercept, as the toy example below illustrates.
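Here is a quick toy illustration (hypothetical data, not part of the factor model code) of why all sector dummies cannot enter the regression alongside the intercept:

# toy example: sector dummies always sum to one, so they are collinear with the intercept
sectors.toy = factor(c('A','A','B','B','C','C'))
d = model.matrix(~ sectors.toy - 1)  # one 0/1 dummy column per sector
rowSums(d)                           # every row sums to 1 => collinear with the intercept

y = rnorm(6)
lm(y ~ d)                            # lm() reports NA for the redundant coefficient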
There are a few alternative ways of estimating this regression. For example, a robust linear model can be estimated with the following code:
load.packages('MASS')
temp = rlm(y~x)$coefficients
A quantile regression can be estimated with the following code:
load.packages('quantreg')
temp = rq(y ~ x, tau = 0.5)$coefficients
Next let’s look at the cumulative factor returns.
#*****************************************************************
# helper function
#******************************************************************
fm.hist.plot <- function(temp, smain=NULL) {
    ntemp = ncol(temp)
    cols = plota.colors(ntemp)
    plota(temp, ylim = range(temp), log='y', main=smain)
    for(i in 1:ntemp) plota.lines(temp[,i], col=cols[i])
    plota.legend(colnames(temp), cols, as.list(temp))
}

#*****************************************************************
# Examine historical cumulative factor returns
#******************************************************************
temp = make.xts(beta, index(next.month.ret))
temp = temp['2000::',]
temp[] = apply(coredata(temp), 2, function(x) cumprod(1 + ifna(x,0)))

fm.hist.plot(temp[,-c(1:nsectors)], 'Factor Returns')
The Price Reversals (PR) and Small Size (SS) factors have done well.
Next let's estimate the factor covariance matrix over a rolling 24-month window.
load.packages('BurStFin')

factor.covariance = array(double(), c(nperiods, nfactors, nfactors))
dimnames(factor.covariance)[[2]] = colnames(beta)
dimnames(factor.covariance)[[3]] = colnames(beta)

# estimate factor covariance
for(t in 36:nperiods) {
    factor.covariance[t,,] = var.shrink.eqcor(beta[(t-23):t,])
}
Next let's forecast stock-specific variances using GARCH(1,1). I will use the GARCH estimation routine described in the Trading using Garch Volatility Forecast post.
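For reference, a one-step-ahead GARCH(1,1) volatility forecast can be sketched directly with the fGarch package; this is a simplified stand-in for the logic inside bt.forecast.garch.volatility, not its actual implementation:

load.packages('fGarch')

# fit GARCH(1,1) to a vector of returns and forecast next-period volatility
garch.vol.forecast <- function(x) {
    fit = garchFit(~ garch(1,1), data = x, trace = FALSE)
    predict(fit, n.ahead = 1)$standardDeviation
}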
#*****************************************************************
# Compute stock-specific variance forecasts using GARCH(1,1)
#******************************************************************
load.packages('tseries,fGarch')

specific.variance = next.month.ret * NA

for(i in 1:n) {
    specific.variance[,i] = bt.forecast.garch.volatility(specific.return[,i], 24)
}

# compute historical specific.variance
hist.specific.variance = next.month.ret * NA
for(i in 1:n) hist.specific.variance[,i] = runSD(specific.return[,i], 24)

# if specific.variance is missing use historical specific.variance
specific.variance[] = ifna(coredata(specific.variance), coredata(hist.specific.variance))

#*****************************************************************
# Save multiple factor risk model to be used later during portfolio construction
#******************************************************************
save(all.data, factor.covariance, specific.variance, file='risk.model.Rdata')
Now we have all the ingredients to compute portfolio risk:
Portfolio Risk = (common factor variance + specific variance)^0.5

common factor variance = (portfolio factor exposure) * factor covariance matrix * (portfolio factor exposure)'

specific variance = SUM <over i> (specific.variance.i)^2 * (portfolio weight.i)^2
#*****************************************************************
# Compute portfolio risk
#******************************************************************
portfolio = rep(1/n, n)
portfolio = matrix(portfolio, n, nfactors)

portfolio.risk = next.month.ret[,1] * NA
for(t in 36:(nperiods-1)) {
    portfolio.exposure = colSums(portfolio * all.data[t,,-1], na.rm=T)

    portfolio.risk[t] = sqrt(
        portfolio.exposure %*% factor.covariance[t,,] %*% (portfolio.exposure) +
        sum(specific.variance[t,]^2 * portfolio[,1]^2, na.rm=T)
    )
}
Next let's compare the portfolio risk estimated by the multiple factor risk model with the portfolio's historical risk.
#*****************************************************************
# Compute historical portfolio risk
#******************************************************************
portfolio = rep(1/n, n)
portfolio = t(matrix(portfolio, n, nperiods))

portfolio.returns = next.month.ret[,1] * NA
portfolio.returns[] = rowSums(mlag(next.month.ret) * portfolio, na.rm=T)

hist.portfolio.risk = runSD(portfolio.returns, 24)

#*****************************************************************
# Plot risks
#******************************************************************
plota(portfolio.risk['2000::',], type='l')
plota.lines(hist.portfolio.risk, col='blue')
plota.legend('Risk,Historical Risk', 'black,blue')
The multiple factor risk model does a decent job of estimating portfolio risk most of the time.
To view the complete source code for this example, please have a look at the fm.risk.model.test() function in factor.model.test.r at github.
Backtesting Minimum Variance portfolios
I want to show how to combine the various risk measures I discussed in the series of posts about Asset Allocation with the backtesting library in the Systematic Investor Toolbox. I will use the Minimum Variance portfolio as an example for this post.
I recommend reading the good discussion of Minimum Variance portfolios in the Minimum Variance Sector Rotation post by Quantivity.
I will use the investment universe and rebalancing schedule as outlined in the Forecast-Free Algorithms: A New Benchmark For Tactical Strategies post by David Varadi. The investment universe:
1. S&P 500 (SPY)
2. Nasdaq 100 (QQQ)
3. Emerging Markets (EEM)
4. Russell 2000 (IWM)
5. MSCI EAFE (EFA)
6. Long-term Treasury Bonds (TLT)
7. Real Estate (IYR)
8. Gold (GLD)
The rebalancing is done on a weekly basis and quarterly data is used to estimate input assumptions.
The following code implements this strategy using the backtesting library in the Systematic Investor Toolbox:
# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod,quadprog,lpSolve')
tickers = spl('SPY,QQQ,EEM,IWM,EFA,TLT,IYR,GLD')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)

data.weekly <- new.env()
for(i in tickers) data.weekly[[i]] = to.weekly(data[[i]], indexAt='endof')

bt.prep(data, align='remove.na', dates='1990::2011')
bt.prep(data.weekly, align='remove.na', dates='1990::2011')

#*****************************************************************
# Code Strategies
#******************************************************************
prices = data$prices
n = ncol(prices)

# find week ends
week.ends = endpoints(prices, 'weeks')
week.ends = week.ends[week.ends > 0]

# Equal Weight 1/N Benchmark
data$weight[] = NA
data$weight[week.ends,] = ntop(prices[week.ends,], n)

capital = 100000
data$weight[] = (capital / prices) * data$weight
equal.weight = bt.run(data, type='share')

#*****************************************************************
# Create Constraints
#*****************************************************************
constraints = new.constraints(n, lb = -Inf, ub = +Inf)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

ret = prices / mlag(prices) - 1
weight = coredata(prices)
weight[] = NA

for( i in week.ends[week.ends >= (63 + 1)] ) {
    # one quarter is 63 days
    hist = ret[ (i- 63 +1):i, ]

    # create historical input assumptions
    ia = create.historical.ia(hist, 252)
    s0 = apply(coredata(hist),2,sd)
    ia$cov = cor(coredata(hist), use='complete.obs',method='pearson') * (s0 %*% t(s0))

    weight[i,] = min.risk.portfolio(ia, constraints)
}

# Minimum Variance
data$weight[] = weight
capital = 100000
data$weight[] = (capital / prices) * data$weight
min.var.daily = bt.run(data, type='share', capital=capital)
Next let’s create Minimum Variance portfolios using weekly data:
#*****************************************************************
# Code Strategies: Weekly
#******************************************************************
retw = data.weekly$prices / mlag(data.weekly$prices) - 1
weightw = coredata(prices)
weightw[] = NA

for( i in week.ends[week.ends >= (63 + 1)] ) {
    # map the daily week end to its position in the weekly series
    j = which(index(ret[i,]) == index(retw))

    # one quarter = 13 weeks
    hist = retw[ (j- 13 +1):j, ]

    # create historical input assumptions
    ia = create.historical.ia(hist, 52)
    s0 = apply(coredata(hist),2,sd)
    ia$cov = cor(coredata(hist), use='complete.obs',method='pearson') * (s0 %*% t(s0))

    weightw[i,] = min.risk.portfolio(ia, constraints)
}

data$weight[] = weightw
capital = 100000
data$weight[] = (capital / prices) * data$weight
min.var.weekly = bt.run(data, type='share', capital=capital)

#*****************************************************************
# Create Report
#******************************************************************
plotbt.custom.report.part1(min.var.weekly, min.var.daily, equal.weight)

# plot Daily and Weekly transition maps
layout(1:2)
plotbt.transition.map(min.var.daily$weight)
legend('topright', legend = 'min.var.daily', bty = 'n')
plotbt.transition.map(min.var.weekly$weight)
legend('topright', legend = 'min.var.weekly', bty = 'n')
I find it very interesting that the Minimum Variance portfolios constructed using daily returns to create input assumptions are quite different from the Minimum Variance portfolios constructed using weekly returns. One possible explanation for this discrepancy was examined by Pat Burns in The volatility mystery continues post.
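A quick, rough way to see the scale of the issue (assuming the data and data.weekly environments loaded above) is to compare annualized volatility estimated from daily and from weekly returns:

# compare annualized volatility estimates from daily vs weekly returns for SPY
ret.daily = ROC(Cl(data$SPY), type='discrete')
ret.weekly = ROC(Cl(data.weekly$SPY), type='discrete')

sqrt(252) * sd(coredata(ret.daily), na.rm=T)    # annualized daily volatility
sqrt(52) * sd(coredata(ret.weekly), na.rm=T)    # annualized weekly volatility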
There are a few ways you can play with this code:
- make the covariance matrix estimate more stable; for example, use the Ledoit-Wolf covariance shrinkage estimator from the tawny package:
ia$cov = tawny::cov.shrink(hist)
or
ia$cov = cor(coredata(hist), use='complete.obs',method='spearman') * (s0 %*% t(s0))
or
ia$cov = cor(coredata(hist), use='complete.obs',method='kendall') * (s0 %*% t(s0))
- use a different investment universe
- consider a different rebalancing schedule
- consider different risk measures; for example, I discussed and implemented the following risk measures in the series of posts about Asset Allocation (see the sketch after this list):
- min.maxloss.portfolio
- min.mad.portfolio
- min.cvar.portfolio
- min.cdar.portfolio
- min.mad.downside.portfolio
- min.risk.downside.portfolio
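For example, a minimal sketch of the risk measure swap: inside the weekly rebalancing loop above, only the portfolio construction call changes.

# inside the rebalancing loop, replace
# weight[i,] = min.risk.portfolio(ia, constraints)
# with any of the alternative risk measures, for example
weight[i,] = min.mad.portfolio(ia, constraints)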
To view the complete source code for this example, please have a look at the bt.min.var.test() function in bt.test.r at github.
Maximizing Omega Ratio
The Omega Ratio was introduced by Keating and Shadwick in 2002. It measures the ratio of average portfolio wins over average portfolio losses for a given target return L.
Let x.i, i = 1,…,n be the weights of instruments in the portfolio. We suppose that j = 1,…,T scenarios of returns with equal probabilities are available. I will use historical asset returns as scenarios. Let us denote by r.ij the return of the i-th asset in scenario j. The portfolio's Omega Ratio can be written as

Omega(L) = [ 1/T * SUM <over j> max( SUM <over i> r.ij * x.i - L, 0 ) ] / [ 1/T * SUM <over j> max( L - SUM <over i> r.ij * x.i, 0 ) ]
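This ratio is easy to compute directly from a vector of portfolio returns; here is a small hypothetical helper (not part of the toolbox) mirroring the objective used in the nonlinear optimization code below:

# Omega Ratio: average win above target L over average loss below L
omega.ratio <- function(returns, L = 0) {
    mean( pmax(returns - L, 0) ) / mean( pmax(L - returns, 0) )
}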
I will use the methods presented in the Optimizing Omega paper by H. Mausser, D. Saunders and L. Seco (2006) to construct optimal portfolios that maximize the Omega Ratio.
The maximization problem (pages 5-6) can be written as

max <over x> [ 1/T * SUM <over j> u.j ] / [ 1/T * SUM <over j> d.j ]
subject to
[ SUM <over i> r.ij * x.i ] - u.j + d.j = L, for each j = 1,...,T
u.j >= 0, d.j >= 0

where u.j and d.j are the win above and the loss below the target L in scenario j. It can be formulated as a linear programming problem with the following transformation: scale all variables by t = 1 / [ 1/T * SUM <over j> d.j ], which normalizes the denominator to one:

max [ 1/T * SUM <over j> u.j ]
subject to
[ SUM <over i> r.ij * x.i ] - u.j + d.j - L * t = 0, for each j = 1,...,T
[ 1/T * SUM <over j> d.j ] = 1
u.j >= 0, d.j >= 0

and the original portfolio weights are recovered by dividing the scaled weights by t.

This method will only work in the Omega > 1 case. In the Omega <= 1 case, I will use a Nonlinear programming solver, Rdonlp2, which is based on the donlp2 routine developed and copyright by Prof. Dr. Peter Spellucci. The following code might not execute properly on your computer because Rdonlp2 is only available for R version 2.9 or below.
max.omega.portfolio <- function
(
    ia,            # input assumptions
    constraints    # constraints
)
{
    n = ia$n
    nt = nrow(ia$hist.returns)

    constraints0 = constraints
    omega = ia$parameters.omega

    #--------------------------------------------------------------------------
    # Linear Programming, Omega > 1 Case
    #--------------------------------------------------------------------------

    # objective : Omega
    # [ SUM <over j> 1/T * u.j ]
    f.obj = c(rep(0, n), (1/nt) * rep(1, nt), rep(0, nt), 0)

    # adjust constraints, add u.j, d.j, t
    constraints = add.variables(2*nt + 1, constraints, lb = c(rep(0,2*nt),-Inf))

    # Transformation for inequalities
    # Aw < b => Aw1 - bt < 0
    constraints$A[n + 2*nt + 1, ] = -constraints$b
    constraints$b[] = 0

    # Transformation for Lower/Upper bounds, use same transformation
    index = which( constraints$ub[1:n] < +Inf )
    if( len(index) > 0 ) {
        a = rbind( diag(n), matrix(0, 2*nt, n), -constraints$ub[1:n])
        constraints = add.constraints(a[, index], rep(0, len(index)), '<=', constraints)
    }

    index = which( constraints$lb[1:n] > -Inf )
    if( len(index) > 0 ) {
        a = rbind( diag(n), matrix(0, 2*nt, n), -constraints$lb[1:n])
        constraints = add.constraints(a[, index], rep(0, len(index)), '>=', constraints)
    }

    constraints$lb[1:n] = -Inf
    constraints$ub[1:n] = Inf

    # [ SUM <over i> r.ij * x.i ] - u.j + d.j - L * t = 0, for each j = 1,...,T
    a = rbind( matrix(0, n, nt), -diag(nt), diag(nt), -omega)
    a[1 : n, ] = t(ia$hist.returns)
    constraints = add.constraints(a, rep(0, nt), '=', constraints)

    # [ SUM <over j> 1/T * d.j ] = 1
    constraints = add.constraints(c( rep(0,n), rep(0,nt), (1/nt) * rep(1,nt), 0), 1, '=', constraints)

    # setup linear programming
    f.con = constraints$A
    f.dir = c(rep('=', constraints$meq), rep('>=', len(constraints$b) - constraints$meq))
    f.rhs = constraints$b

    # find optimal solution
    x = NA
    sol = try(solve.LP.bounds('max', f.obj, t(f.con), f.dir, f.rhs,
        lb = constraints$lb, ub = constraints$ub), TRUE)

    if(!inherits(sol, 'try-error')) {
        x0 = sol$solution[1:n]
        u = sol$solution[(1+n):(n+nt)]
        d = sol$solution[(n+nt+1):(n+2*nt)]
        t = sol$solution[(n+2*nt+1):(n+2*nt+1)]

        # Reverse Transformation
        x = x0/t
    }

    #--------------------------------------------------------------------------
    # NonLinear Programming, Omega <= 1 Case
    #--------------------------------------------------------------------------
    # fall back to NLP if the LP failed or the LP solution is invalid (some u.j * d.j != 0)
    if( inherits(sol, 'try-error') || sol$status != 0 || any( u*d != 0 ) ) {
        require(Rdonlp2)
        constraints = constraints0

        # compute omega ratio
        fn <- function(x){
            portfolio.returns = x %*% t(ia$hist.returns)
            mean(pmax(portfolio.returns - omega,0)) / mean(pmax(omega - portfolio.returns,0))
        }

        # control structure; fnscale = -1 turns minimization into maximization
        cntl <- donlp2.control(silent = T, fnscale = -1, iterma =10000, nstep = 100, epsx = 1e-10)

        # lower/upper bounds
        par.l = constraints$lb
        par.u = constraints$ub

        # initial guess
        if(!is.null(constraints$x0)) p = constraints$x0

        # linear constraints
        A = t(constraints$A)
        lin.l = constraints$b
        lin.u = constraints$b
        lin.u[ -c(1:constraints$meq) ] = +Inf

        # find solution
        sol = donlp2(p, fn, par.lower=par.l, par.upper=par.u,
            A=A, lin.u=lin.u, lin.l=lin.l, control=cntl)
        x = sol$par
    }

    return( x )
}
First let's examine what the traditional mean-variance efficient frontier looks like in the Omega Ratio framework.
# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Create Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n

# 0 <= x.i <= 0.8
constraints = new.constraints(n, lb = 0, ub = 0.8)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# Omega - http://en.wikipedia.org/wiki/Omega_ratio
ia$parameters.omega = 13/100
# convert annual to monthly
ia$parameters.omega = ia$parameters.omega / 12

# create efficient frontier(s)
ef.risk = portopt(ia, constraints, 50, 'Risk')

# Plot Omega Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2, byrow=T) )

# weights
rownames(ef.risk$weight) = paste('Risk','weight',1:50,sep='_')
plot.omega(ef.risk$weight[c(1,10,40,50), ], ia)

# assets
temp = diag(n)
rownames(temp) = ia$symbols
plot.omega(temp, ia)

# mean-variance efficient frontier in the Omega Ratio framework
plot.ef(ia, list(ef.risk), portfolio.omega, T, T)
Portfolio returns and the portfolio Omega Ratio increase monotonically as we move along the traditional mean-variance efficient frontier in the Omega Ratio framework. The least risky portfolios (Risk_weight_1, Risk_weight_10) have a lower Omega Ratio for the 13% threshold (target return), and the most risky portfolios (Risk_weight_40, Risk_weight_50) have a higher Omega Ratio.
To create the efficient frontier in the Omega Ratio framework, I first compute the range of returns in the mean-variance framework. Next, I split this range into a given number of equally spaced points. For each point, I find the portfolio with the maximum Omega Ratio whose expected return does not exceed that point's expected return.
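A minimal sketch of this procedure follows; the actual implementation is the portopt.omega() function in the Systematic Investor Toolbox, and the add.constraints / max.omega.portfolio calls and the ia$expected.return field are assumed to work as in the code above:

# sketch: efficient frontier in the Omega Ratio framework
# 1. compute the range of expected returns in the mean-variance framework
ef = portopt(ia, constraints, 2, 'Temp')
return.range = range(ef$return)

# 2. split the range into equally spaced points; for each point, maximize
# Omega subject to expected return not exceeding that point's return
npoints = 50
target.returns = seq(return.range[1], return.range[2], length.out = npoints)
weights = matrix(NA, npoints, ia$n)
for(k in 1:npoints) {
    constraints.temp = add.constraints(ia$expected.return, target.returns[k],
        type = '<=', constraints)
    weights[k,] = max.omega.portfolio(ia, constraints.temp)
}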
#--------------------------------------------------------------------------
# Create Efficient Frontier in Omega Ratio framework
#--------------------------------------------------------------------------
# Create maximum Omega Efficient Frontier
ef.omega = portopt.omega(ia, constraints, 50, 'Omega')

# Plot Omega Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2, byrow=T) )

# weights
plot.omega(ef.risk$weight[c(1,10,40,50), ], ia)

# weights
rownames(ef.omega$weight) = paste('Omega','weight',1:50,sep='_')
plot.omega(ef.omega$weight[c(1,10,40,50), ], ia)

# portfolio
plot.ef(ia, list(ef.omega, ef.risk), portfolio.omega, T, T)
The Omega Ratio efficient frontier looks similar to the traditional mean-variance efficient frontier for expected returns greater than the 13% threshold (target return). However, there is a big shift in allocation and an increase in Omega Ratio for portfolios with expected returns less than the 13% threshold.
The Omega Ratio efficient frontier looks very inefficient in the Risk framework for portfolios with expected returns below the 13% threshold. But remember that the goal of this optimization was to find portfolios that maximize the Omega Ratio subject to the given constraints. Overall I find the results a bit radical for portfolios with expected returns below the 13% threshold, and these results definitely call for more investigation.
To view the complete source code for this example, please have a look at the aa.omega.test() function in aa.test.r at github.
Minimizing Downside Risk
In the Maximum Loss and Mean-Absolute Deviation risk measures post and the Expected shortfall (CVaR) and Conditional Drawdown at Risk (CDaR) post, I started the discussion about alternative risk measures we can use to construct the efficient frontier. Another alternative risk measure I want to discuss is Downside Risk.
In traditional mean-variance optimization, returns both above and below the mean contribute to the portfolio risk (usually measured by the standard deviation of the portfolio's return). In the Downside Risk framework, only returns below the mean, or below a target rate of return (MAR), contribute to the portfolio risk. I will discuss two Downside Risk measures: Lower Semi-Variance and Lower Semi-Absolute Deviation. I will use the methods presented in the Portfolio Optimization under Lower Partial Risk Measure paper by H. Konno, H. Waki and A. Yuuki (2002) to construct optimal portfolios.
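Before setting up the optimizations, here is a small illustrative sketch (hypothetical helpers, using the same MAR convention) that computes both measures directly from a vector of portfolio returns:

# Lower Semi-Absolute Deviation: average shortfall below MAR
lower.semi.abs.dev <- function(returns, mar = 0) {
    mean( pmax(mar - returns, 0) )
}

# Lower Semi-Variance: average squared shortfall below MAR
lower.semi.variance <- function(returns, mar = 0) {
    mean( pmax(mar - returns, 0)^2 )
}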
Let x.i, i = 1,…,n be the weights of instruments in the portfolio. We suppose that j = 1,…,T scenarios of returns with equal probabilities are available. I will use historical asset returns as scenarios. Let us denote by r.ij the return of the i-th asset in scenario j. The portfolio's Lower Semi-Absolute Deviation (page 6) can be written as

1/T * [ SUM <over j> max( MAR - SUM <over i> r.ij * x.i, 0 ) ]

It can be formulated as a linear programming problem

min 1/T * [ SUM <over j> z.j ]
subject to
MAR - [ SUM <over i> r.ij * x.i ] <= z.j, z.j >= 0, for each j = 1,...,T
This linear programming problem can be easily implemented
min.mad.downside.portfolio <- function
(
    ia,            # input assumptions
    constraints    # constraints
)
{
    n = ia$n
    nt = nrow(ia$hist.returns)

    mar = ia$parameters.mar

    # objective : Mean-Lower-Semi-Absolute Deviation (M-LSAD)
    # 1/T * [ SUM <over j> z.j ]
    f.obj = c( rep(0, n), (1/nt) * rep(1, nt) )

    # adjust constraints, add z.j
    constraints = add.variables(nt, constraints, lb = 0)

    # MAR - [ SUM <over i> r.ij * x.i ] <= z.j , for each j = 1,...,T
    a = rbind( matrix(0, n, nt), diag(nt))
    a[1 : n, ] = t(ia$hist.returns)
    constraints = add.constraints(a, rep(mar, nt), '>=', constraints)

    # setup linear programming
    f.con = constraints$A
    f.dir = c(rep('=', constraints$meq), rep('>=', len(constraints$b) - constraints$meq))
    f.rhs = constraints$b

    # find optimal solution
    x = NA
    sol = try(solve.LP.bounds('min', f.obj, t(f.con), f.dir, f.rhs,
        lb = constraints$lb, ub = constraints$ub), TRUE)

    if(!inherits(sol, 'try-error')) {
        x = sol$solution[1:n]
    }

    return( x )
}
The portfolio's Lower Semi-Variance (page 7) can be written as

1/T * [ SUM <over j> max( MAR - SUM <over i> r.ij * x.i, 0 )^2 ]

It can be formulated as a quadratic programming problem

min 1/T * [ SUM <over j> z.j^2 ]
subject to
MAR - [ SUM <over i> r.ij * x.i ] <= z.j, z.j >= 0, for each j = 1,...,T
This quadratic programming problem can be implemented
min.risk.downside.portfolio <- function
(
    ia,            # input assumptions
    constraints    # constraints
)
{
    n = ia$n
    nt = nrow(ia$hist.returns)

    mar = ia$parameters.mar

    # objective : Mean-Lower Semi-Variance (MV)
    # 1/T * [ SUM <over j> z.j^2 ]
    f.obj = c( rep(0, n), (1/nt) * rep(1, nt) )

    # adjust constraints, add z.j
    constraints = add.variables(nt, constraints, lb = 0)

    # MAR - [ SUM <over i> r.ij * x.i ] <= z.j , for each j = 1,...,T
    a = rbind( matrix(0, n, nt), diag(nt))
    a[1 : n, ] = t(ia$hist.returns)
    constraints = add.constraints(a, rep(mar, nt), '>=', constraints)

    # setup quadratic minimization
    Dmat = diag( len(f.obj) )
    diag(Dmat) = f.obj
    if(!is.positive.definite(Dmat)) {
        Dmat <- make.positive.definite(Dmat)
    }

    # find optimal solution
    x = NA
    sol = try(solve.QP.bounds(Dmat = Dmat, dvec = rep(0, nrow(Dmat)),
        Amat=constraints$A, bvec=constraints$b, constraints$meq,
        lb = constraints$lb, ub = constraints$ub),TRUE)

    if(!inherits(sol, 'try-error')) {
        x = sol$solution[1:n]
    }

    return( x )
}
Let’s examine efficient frontiers computed under different risk measures using historical input assumptions presented in the Introduction to Asset Allocation post:
# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

#--------------------------------------------------------------------------
# Create Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n

# 0 <= x.i <= 0.8
constraints = new.constraints(n, lb = 0, ub = 0.8)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# Set target return (or Minimum Acceptable Returns (MAR))
# and consider only returns that are less than the target
ia$parameters.mar = 0/100
# convert annual to monthly
ia$parameters.mar = ia$parameters.mar / 12

# create efficient frontier(s)
ef.mad = portopt(ia, constraints, 50, 'MAD', min.mad.portfolio)
ef.mad.downside = portopt(ia, constraints, 50, 'S-MAD', min.mad.downside.portfolio)

ef.risk = portopt(ia, constraints, 50, 'Risk')
ef.risk.downside = portopt(ia, constraints, 50, 'S-Risk', min.risk.downside.portfolio)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2) )
plot.ef(ia, list(ef.mad.downside, ef.mad), portfolio.mad, F)
plot.ef(ia, list(ef.mad.downside, ef.mad), portfolio.mad.downside, F)

plot.transition.map(ef.mad)
plot.transition.map(ef.mad.downside)

# Plot multiple Efficient Frontiers and Transition Maps
layout( matrix(1:4, nrow = 2) )
plot.ef(ia, list(ef.risk.downside, ef.risk), portfolio.risk, F)
plot.ef(ia, list(ef.risk.downside, ef.risk), portfolio.risk.downside, F)

plot.transition.map(ef.risk)
plot.transition.map(ef.risk.downside)
The downside risk efficient frontiers, prefixed with S-, are similar to their full counterparts and have similar transition maps in our example.
Another approach to minimizing the Lower Semi-Variance that I want to explore later is presented in the Mean-Semivariance Optimization: A Heuristic Approach paper by Javier Estrada.
To view the complete source code for this example, please have a look at the aa.downside.test() function in aa.test.r at github.
The Most Diversified or The Least Correlated Efficient Frontier
The "Minimum Correlation Algorithm" is a term I stumbled upon at the CSS Analytics blog. It is an interesting risk measure that, in my interpretation, means minimizing the Average Portfolio Correlation with each Asset Class for a given level of return.
One might try to use the correlation matrix instead of the covariance matrix in mean-variance optimization, but this approach, as I will show below, does not produce the least correlated portfolios.
The Average Portfolio Correlation with each Asset Class:

AvgCor = 1/n * SUM <over i> cor( portfolio returns, asset i returns )
       = 1/n * SUM <over i> [ SUM <over k> x.k * cov.ki ] / ( sigma.i * sigma.p )

where sigma.i is the volatility of the i-th asset and sigma.p is the portfolio volatility.
This formula can be easily coded in R:
# portfolio volatility (drop the 1x1 matrix to a scalar)
portfolio.sigma = sqrt( t(weight) %*% assets.cov %*% weight )[1]
mean( ( weight %*% assets.cov ) / ( assets.sigma * portfolio.sigma ) )

# Alternatively, via correlations of asset returns with portfolio returns
portfolio.returns = assets.hist.returns %*% weight
mean( cor(assets.hist.returns, portfolio.returns) )
I'm not aware of a way to transform this formula into a linear programming problem, so I will use a Nonlinear programming solver, Rdonlp2, which is based on the donlp2 routine developed and copyright by Prof. Dr. Peter Spellucci. The following code might not execute properly on your computer because Rdonlp2 is only available for R version 2.9 or below.
#--------------------------------------------------------------------------
# Create Efficient Frontier
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n

# 0 <= x.i <= 0.8
constraints = new.constraints(n, lb = 0, ub = 0.8)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# create efficient frontier(s)
ef.risk = portopt(ia, constraints, 50, 'Risk')
ef.cor.insteadof.cov = portopt(ia, constraints, 50, 'Cor instead of Cov', min.cor.insteadof.cov.portfolio)
ef.avgcor = portopt(ia, constraints, 50, 'AvgCor', min.avgcor.portfolio)

# Plot multiple Efficient Frontiers
layout(1:2)
plot.ef(ia, list(ef.risk, ef.avgcor, ef.cor.insteadof.cov), portfolio.risk, F)
plot.ef(ia, list(ef.risk, ef.avgcor, ef.cor.insteadof.cov), portfolio.avgcor, F)

# Plot multiple Transition Maps
layout( matrix(1:4, nrow = 2) )
plot.transition.map(ef.risk)
plot.transition.map(ef.avgcor)
plot.transition.map(ef.cor.insteadof.cov)

# visualize input assumptions
plot.ia(ia)
Using the correlation matrix instead of the covariance matrix in mean-variance optimization is a bad idea if the goal is to produce the least correlated portfolios. The 'Cor instead of Cov' efficient frontier actually increases the average portfolio correlation compared to the standard 'Risk' efficient frontier.
The portfolio composition of the Average Correlation efficient frontier is split between gold (GLD) and bonds (TLT) at the lower risk levels. This is not surprising because both gold and bonds have positive expected returns and low correlation to the other assets.
To view the complete source code for this example, please have a look at the aa.avg.cor.test() function in aa.test.r at github.
Following is the complete source code for the function that minimizes the Average Portfolio Correlation with each Asset Class:
min.avgcor.portfolio <- function
(
    ia,            # input assumptions
    constraints    # constraints
)
{
    require(Rdonlp2)

    cov = ia$cov[1:ia$n, 1:ia$n]
    s = sqrt(diag(cov))

    # avgcor objective: average correlation of the portfolio with each asset
    fn <- function(x){
        sd_x = sqrt( t(x) %*% cov %*% x )[1]
        mean( ( x %*% cov ) / ( s * sd_x ) )
    }

    # control structure
    cntl <- donlp2.control(silent = T, iterma =10000, nstep = 100, epsx = 1e-10)

    # lower/upper bounds
    par.l = constraints$lb
    par.u = constraints$ub

    # initial guess: equal weights, unless one is provided in constraints
    p = rep(1/ia$n, ia$n)
    if(!is.null(constraints$x0)) p = constraints$x0

    # linear constraints
    A = t(constraints$A)
    lin.l = constraints$b
    lin.u = constraints$b
    lin.u[ -c(1:constraints$meq) ] = +Inf

    # find solution
    sol = donlp2(p, fn, par.lower=par.l, par.upper=par.u,
        A=A, lin.u=lin.u, lin.l=lin.l, control=cntl)
    x = sol$par

    return( x )
}