Archive
Maximum Sharpe Portfolio
The Maximum Sharpe Portfolio, or Tangency Portfolio, is the portfolio on the efficient frontier at the point where a line drawn from (0, risk-free rate) is tangent to the efficient frontier.
There is a great discussion of the Maximum Sharpe Portfolio, or Tangency Portfolio, in the quadprog optimization question. In the general case, finding the Maximum Sharpe Portfolio requires a non-linear solver, because we want to find portfolio weights w that maximize w'mu / sqrt(w'Vw), and the Sharpe Ratio is a non-linear function of w. But as discussed in the quadprog optimization question, there are special cases in which a quadratic solver can be used to find the Maximum Sharpe Portfolio. As long as all constraints are homogeneous of degree 0 (i.e. multiplying w by a number leaves the constraint unchanged; for example, w1 > 0 is equivalent to 5*w1 > 5*0), a quadratic solver can be used to find the Maximum Sharpe Portfolio or Tangency Portfolio.
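To make the transformation concrete, here is a minimal, self-contained sketch using the quadprog package with made-up inputs. This is only the idea behind the approach, not the actual max.sharpe.portfolio() implementation; the mu and V values and a zero risk-free rate are assumptions. Substituting y = w / (w'mu) turns the problem into minimizing y'Vy subject to y'mu = 1 plus the homogeneous constraints; rescaling w = y / sum(y) recovers a fully invested portfolio.

library(quadprog)

# hypothetical inputs: 4 assets, risk-free rate assumed to be zero
mu = c(0.08, 0.10, 0.12, 0.07)          # expected returns (made up)
V = diag(c(0.04, 0.09, 0.16, 0.02))     # toy diagonal covariance matrix
n = length(mu)

# minimize y' V y subject to mu' y = 1 (equality) and y >= 0
sol = solve.QP(Dmat = 2*V, dvec = rep(0, n),
    Amat = cbind(mu, diag(n)), bvec = c(1, rep(0, n)), meq = 1)

# rescale back to fully invested, long-only weights
w = sol$solution / sum(sol$solution)
round(w, 4)

# resulting Sharpe Ratio
sum(w * mu) / sqrt( as.numeric(w %*% V %*% w) )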
I have implemented the logic to find the Maximum Sharpe Portfolio or Tangency Portfolio in the max.sharpe.portfolio() function in strategy.r at github. You can specify the following two parameters:
- Type of portfolio: ‘long-only’, ‘long-short’, or ‘market-neutral’
- Portfolio exposure. For example, ‘long-only’ with exposure = 1 is a fully invested portfolio
Now, let’s construct a sample efficient frontier and plot Maximum Sharpe Portfolio.
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Create Efficient Frontier
#******************************************************************
# create sample historical input assumptions
ia = aa.test.create.ia()

# create long-only, fully invested efficient frontier
n = ia$n

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)
constraints = add.constraints(diag(n), type='>=', b=0, constraints)
constraints = add.constraints(diag(n), type='<=', b=1, constraints)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# create efficient frontier
ef = portopt(ia, constraints, 50, 'Efficient Frontier')

#*****************************************************************
# Create Plot
#******************************************************************
# plot efficient frontier
plot.ef(ia, list(ef), transition.map=F)

# find maximum sharpe portfolio
max(portfolio.return(ef$weight,ia) / portfolio.risk(ef$weight,ia))

# plot minimum variance portfolio
weight = min.var.portfolio(ia,constraints)
points(100 * portfolio.risk(weight,ia), 100 * portfolio.return(weight,ia), pch=15, col='red')
portfolio.return(weight,ia) / portfolio.risk(weight,ia)

# plot maximum Sharpe or tangency portfolio
weight = max.sharpe.portfolio()(ia,constraints)
points(100 * portfolio.risk(weight,ia), 100 * portfolio.return(weight,ia), pch=15, col='orange')
portfolio.return(weight,ia) / portfolio.risk(weight,ia)

plota.legend('Minimum Variance,Maximum Sharpe','red,orange', x='topright')
Now let’s see how to construct ‘long-only’, ‘long-short’, and ‘market-neutral’ Maximum Sharpe or Tangency Portfolios:
#*****************************************************************
# Examples of Maximum Sharpe or Tangency portfolios construction
#******************************************************************
weight = max.sharpe.portfolio('long-only')(ia,constraints)
round(weight,2)
round(c(sum(weight[weight<0]), sum(weight[weight>0])),2)

weight = max.sharpe.portfolio('long-short')(ia,constraints)
round(weight,2)
round(c(sum(weight[weight<0]), sum(weight[weight>0])),2)

weight = max.sharpe.portfolio('market-neutral')(ia,constraints)
round(weight,2)
round(c(sum(weight[weight<0]), sum(weight[weight>0])),2)
As expected, the long-only Maximum Sharpe portfolio has an exposure of 100%. The long-short Maximum Sharpe portfolio is 227% long and 127% short. The market-neutral Maximum Sharpe portfolio is 100% long and 100% short.
As the last step, I ran the Maximum Sharpe algorithm against the other portfolio optimization methods I have previously discussed (i.e. Risk Parity, Minimum Variance, Maximum Diversification, Minimum Correlation) on the 10 asset universe used in the Adaptive Asset Allocation post.
#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

tickers = spl('SPY,EFA,EWJ,EEM,IYR,RWX,IEF,TLT,DBC,GLD')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='keep.all', dates='2004:12::')

#*****************************************************************
# Code Strategies
#******************************************************************
prices = data$prices
n = ncol(prices)

models = list()

#*****************************************************************
# Code Strategies
#******************************************************************
# find period ends
period.ends = endpoints(prices, 'months')
period.ends = period.ends[period.ends > 0]

n.mom = 180
n.vol = 60
n.top = 4

momentum = prices / mlag(prices, n.mom)

obj = portfolio.allocation.helper(data$prices, period.ends=period.ends, lookback.len = n.vol,
    universe = ntop(momentum[period.ends,], n.top) > 0,
    min.risk.fns = list(EW=equal.weight.portfolio,
        RP=risk.parity.portfolio,
        MV=min.var.portfolio,
        MD=max.div.portfolio,
        MC=min.corr.portfolio,
        MC2=min.corr2.portfolio,
        MCE=min.corr.excel.portfolio,
        MS=max.sharpe.portfolio())
)

models = create.strategies(obj, data)$models

#*****************************************************************
# Create Report
#******************************************************************
strategy.performance.snapshoot(models, T)

plotbt.custom.report.part2(models$MS)

# Plot Portfolio Turnover for each strategy
layout(1)
barplot.with.labels(sapply(models, compute.turnover, data), 'Average Annual Portfolio Turnover')
The allocation using the Maximum Sharpe Portfolio produced more concentrated portfolios with a higher total return, a higher Sharpe ratio, and higher turnover.
More Principal Components Fun
Today, I want to continue with the Principal Components theme and show how Principal Component Analysis can be used to build portfolios that are not correlated to the market. Most of the content for this post is based on the excellent article, “Using PCA for spread trading” by Jev Kuznetsov.
Let’s start by loading the components of the Dow Jones Industrial Average index over the last 5 years.
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

tickers = dow.jones.components()

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '2009-01-01', env = data, auto.assign = T)
bt.prep(data, align='remove.na')
Next let’s compute the Principal Components based on the first year of price history.
#*****************************************************************
# Principal component analysis (PCA), for interesting discussion
# http://machine-master.blogspot.ca/2012/08/pca-or-polluting-your-clever-analysis.html
#******************************************************************
prices = last(data$prices, 1000)
n = len(tickers)

ret = prices / mlag(prices) - 1

p = princomp(na.omit(ret[1:250,]))

loadings = p$loadings[]

# look at the first 4 principal components
components = loadings[,1:4]

# normalize all selected components to have total weight = 1
components = components / repRow(colSums(abs(components)), len(tickers))

# note that the first component tracks the market, and all components are orthogonal
# i.e. not correlated to the market
market = ret[1:250,] %*% rep(1/n,n)
temp = cbind(market, ret[1:250,] %*% components)
colnames(temp)[1] = 'Market'

round(cor(temp, use='complete.obs',method='pearson'),2)

# the volatility of each component is decreasing
# (per-column sd; sd() on a matrix is no longer supported in current R)
round(100 * apply(temp, 2, sd, na.rm=T), 2)
Correlation:
         Market  Comp.1  Comp.2  Comp.3  Comp.4
Market      1.0     1.0     0.2     0.1     0.0
Comp.1      1.0     1.0     0.0     0.0     0.0
Comp.2      0.2     0.0     1.0     0.0     0.0
Comp.3      0.1     0.0     0.0     1.0     0.0
Comp.4      0.0     0.0     0.0     0.0     1.0

Standard Deviation:
Market  Comp.1  Comp.2  Comp.3  Comp.4
   1.8     2.8     1.2     1.0     0.8
Please note that the first principal component is highly correlated with the market, and all principal components have very low correlation with each other and very low correlation with the market. Also, by construction, the volatility of the principal components is decreasing. An interesting observation that you might want to check yourself: the principal components are quite persistent in time (i.e. if you compute both correlations and volatilities using future prices, for example, 4 years of prices, the principal components maintain their correlation and volatility profiles).
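To check the persistence claim yourself, here is a minimal out-of-sample sketch, reusing the ret, components, and n objects created in the code above: apply the loadings estimated on the first 250 days to the remaining history and re-examine the correlations and volatilities.

# out-of-sample check: same loadings, later data
out = 251:nrow(ret)
market.out = ret[out,] %*% rep(1/n, n)
temp.out = cbind(market.out, ret[out,] %*% components)
colnames(temp.out)[1] = 'Market'

# correlations and volatilities should be close to the in-sample profiles
round(cor(temp.out, use='complete.obs', method='pearson'), 2)
round(100 * apply(temp.out, 2, sd, na.rm=T), 2)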
Next, let’s check whether any of the principal components are mean-reverting. I will use the Augmented Dickey-Fuller test to check whether the principal components are mean-reverting (a small p-value indicates the series is stationary, i.e. mean-reverting).
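Before applying the test, here is a quick toy illustration of how to read its output (assuming the tseries package is installed): a strongly mean-reverting AR(1) series should produce a small p-value, while a random walk should not.

library(tseries)
set.seed(123)

# stationary AR(1) series => small p-value (mean-reverting)
adf.test(as.numeric(arima.sim(list(ar = 0.2), n = 1000)))$p.value

# random walk => large p-value (not mean-reverting)
adf.test(cumsum(rnorm(1000)))$p.value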
#*****************************************************************
# Find stationary components, Augmented Dickey-Fuller test
#******************************************************************
library(tseries)

equity = bt.apply.matrix(1 + ifna(-ret %*% components,0), cumprod)
equity = make.xts(equity, index(prices))

# test for stationarity ( mean-reversion )
adf.test(as.numeric(equity[,1]))$p.value
adf.test(as.numeric(equity[,2]))$p.value
adf.test(as.numeric(equity[,3]))$p.value
adf.test(as.numeric(equity[,4]))$p.value
The Augmented Dickey-Fuller test indicates that the 4th principal component is stationary. Let’s have a closer look at its price history and its composition:
#*****************************************************************
# Plot securities and components
#*****************************************************************
layout(1:2)

# add Bollinger Bands
i.comp = 4
bbands1 = BBands(repCol(equity[,i.comp],3), n=200, sd=1)
bbands2 = BBands(repCol(equity[,i.comp],3), n=200, sd=2)
temp = cbind(equity[,i.comp], bbands1[,'up'], bbands1[,'dn'], bbands1[,'mavg'], bbands2[,'up'], bbands2[,'dn'])
colnames(temp) = spl('Comp. 4,1SD Up,1SD Down,200 SMA,2SD Up,2SD Down')

plota.matplot(temp, main=paste(i.comp, 'Principal component'))

barplot.with.labels(sort(components[,i.comp]), 'weights')
The price history along with the 200 day moving average and the 1 and 2 standard deviation Bollinger Bands is shown in the top pane. The portfolio weights of the 4th principal component are shown in the bottom pane.
So now you have a mean-reverting portfolio that is also uncorrelated with the market. There are many ways you can use this information; one simple idea is sketched below. Please share your ideas and suggestions.
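For example, one idea, shown here only as a rough sketch reusing the equity, bbands2, and i.comp objects from the code above (not a tested strategy), is to fade moves of the component portfolio outside its 2 SD bands:

# sketch: long below the lower 2SD band, short above the upper 2SD band
signal = iif(equity[,i.comp] < bbands2[,'dn'], 1,
    iif(equity[,i.comp] > bbands2[,'up'], -1, 0))

# daily returns of the component portfolio implied by its equity curve
ret.comp = equity[,i.comp] / mlag(equity[,i.comp]) - 1

# trade on the next bar to avoid look-ahead bias
strat.ret = mlag(signal) * ret.comp
plota(make.xts(cumprod(1 + ifna(as.numeric(strat.ret), 0)), index(equity)),
    type='l', main='Mean-reversion sketch on Comp. 4')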
To view the complete source code for this example, please have a look at the bt.pca.trading.test() function in bt.test.r at github.
Modeling Couch Potato strategy
I first read about the Couch Potato strategy in the MoneySense magazine. I liked this simple strategy because it was easy to understand and easy to manage. The Couch Potato strategy is similar to the Permanent Portfolio strategy that I have analyzed previously.
The Couch Potato strategy invests money in given proportions among different types of assets to ensure diversification and rebalances the holdings once a year. For example, the Classic Couch Potato strategy is the following mix (a small worked example of the rebalancing arithmetic follows the list):
- Canadian equity (33.3%)
- U.S. equity (33.3%)
- Canadian bond (33.3%)
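Here is the worked example mentioned above: one year of the Classic Couch Potato mix with made-up returns, just to show the rebalancing arithmetic.

# target mix and hypothetical one-year returns for each sleeve
weights = c(can.eq = 1/3, us.eq = 1/3, can.bond = 1/3)
returns = c(can.eq = 0.10, us.eq = 0.05, can.bond = 0.03)   # made up

# each sleeve grows at its own rate over the year
value = weights * (1 + returns)

sum(value)                      # portfolio growth factor for the year
round(value / sum(value), 3)    # drifted weights just before rebalancing
weights                         # the annual rebalance restores the target mix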
I highly recommend reading the following online resources to get more information about the Couch Potato strategy:
- MoneySense
- Canadian Couch Potato
- AssetBuilder
Today, I want to show how you can model and monitor the Couch Potato strategy with the Systematic Investor Toolbox.
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

# helper function to model Couch Potato strategy - a fixed allocation strategy
couch.potato.strategy <- function
(
    data.all,
    tickers = 'XIC.TO,XSP.TO,XBB.TO',
    weights = c( 1/3, 1/3, 1/3 ),
    periodicity = 'years',
    dates = '1900::',
    commission = 0.1
)
{
    #*****************************************************************
    # Load historical data
    #******************************************************************
    tickers = spl(tickers)
    names(weights) = tickers

    data <- new.env()
    for(s in tickers) data[[ s ]] = data.all[[ s ]]

    bt.prep(data, align='remove.na', dates=dates)

    #*****************************************************************
    # Code Strategies
    #******************************************************************
    prices = data$prices
    n = ncol(prices)
    nperiods = nrow(prices)

    # find period ends
    period.ends = endpoints(data$prices, periodicity)
    period.ends = c(1, period.ends[period.ends > 0])

    #*****************************************************************
    # Code Strategies
    #******************************************************************
    data$weight[] = NA
    for(s in tickers) data$weight[period.ends, s] = weights[s]
    model = bt.run.share(data, clean.signal=F, commission=commission)

    return(model)
}
The couch.potato.strategy() function creates a periodically rebalanced portfolio for a given static allocation.
Next, let’s back-test some Canadian Couch Potato portfolios:
#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

map = list()
map$can.eq = 'XIC.TO'
map$can.div = 'XDV.TO'
map$us.eq = 'XSP.TO'
map$us.div = 'DVY'
map$int.eq = 'XIN.TO'
map$can.bond = 'XBB.TO'
map$can.real.bond = 'XRB.TO'
map$can.re = 'XRE.TO'
map$can.it = 'XTR.TO'
map$can.gold = 'XGD.TO'

data <- new.env()
for(s in names(map)) {
    data[[ s ]] = getSymbols(map[[ s ]], src = 'yahoo', from = '1995-01-01', env = data, auto.assign = F)
    data[[ s ]] = adjustOHLC(data[[ s ]], use.Adjusted=T)
}

#*****************************************************************
# Code Strategies
#******************************************************************
models = list()

periodicity = 'years'
dates = '2006::'

models$classic = couch.potato.strategy(data, 'can.eq,us.eq,can.bond', rep(1/3,3), periodicity, dates)
models$global = couch.potato.strategy(data, 'can.eq,us.eq,int.eq,can.bond', c(0.2, 0.2, 0.2, 0.4), periodicity, dates)
models$yield = couch.potato.strategy(data, 'can.div,can.it,us.div,can.bond', c(0.25, 0.25, 0.25, 0.25), periodicity, dates)
models$growth = couch.potato.strategy(data, 'can.eq,us.eq,int.eq,can.bond', c(0.25, 0.25, 0.25, 0.25), periodicity, dates)
models$complete = couch.potato.strategy(data, 'can.eq,us.eq,int.eq,can.re,can.real.bond,can.bond', c(0.2, 0.15, 0.15, 0.1, 0.1, 0.3), periodicity, dates)
models$permanent = couch.potato.strategy(data, 'can.eq,can.gold,can.bond', c(0.25,0.25,0.5), periodicity, dates)

#*****************************************************************
# Create Report
#******************************************************************
plotbt.custom.report.part1(models)
I have included a few classic Couch Potato portfolios and the Canadian version of the Permanent Portfolio. The equity curves speak for themselves: you can call them by fancy names, but in the end all variations of the Couch Potato portfolios performed similarly and suffered a huge drawdown during 2008. The Permanent Portfolio did a little better during the 2008 bear market.
Next, let’s back-test some US Couch Potato portfolios:
#*****************************************************************
# Load historical data
#******************************************************************
tickers = spl('VIPSX,VTSMX,VGTSX,SPY,TLT,GLD,SHY')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1995-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)

# extend GLD with Gold.PM - London Gold afternoon fixing prices
data$GLD = extend.GLD(data$GLD)

#*****************************************************************
# Code Strategies
#******************************************************************
models = list()

periodicity = 'years'
dates = '2003::'

models$classic = couch.potato.strategy(data, 'VIPSX,VTSMX', rep(1/2,2), periodicity, dates)
models$margarita = couch.potato.strategy(data, 'VIPSX,VTSMX,VGTSX', rep(1/3,3), periodicity, dates)
models$permanent = couch.potato.strategy(data, 'SPY,TLT,GLD,SHY', rep(1/4,4), periodicity, dates)

#*****************************************************************
# Create Report
#******************************************************************
plotbt.custom.report.part1(models)
The US Couch Potato portfolios also suffered huge drawdowns during 2008. The Permanent Portfolio held its ground much better.
Quite a lot has been written about the Couch Potato strategy, but looking at the different variations I cannot really see much difference in terms of performance or drawdowns. That is probably why, in the last few years, I have seen the creation of many new ETFs that try to address this in one way or another. For example, we now have tactical asset allocation ETFs, minimum volatility ETFs, and income ETFs with covered call overlays.
To view the complete source code for this example, please have a look at the bt.couch.potato.test() function in bt.test.r at github.
Some additional references from the Canadian Couch Potato blog are also worth reading.
Permanent Portfolio – Transaction Cost and better Risk Parity
I want to address questions that were asked in the comments of my last post, Permanent Portfolio – Simple Tools, about the Permanent Portfolio strategy. Specifically:
- The impact of transaction costs on performance, and
- Creating a modified version of the risk allocation portfolio that distributes weights across 3 asset classes: stocks (SPY), gold (GLD), and treasuries (TLT), and invests in cash (SHY) only to fill the residual portfolio exposure once the SPY/GLD/TLT portfolio is scaled to the target volatility
The first point is easy: to incorporate transaction costs into your back-test, just add the commission=0.1 parameter to the bt.run.share() function call. For example, to see the dollar allocation strategy's performance assuming a 10c a share commission, use the following code:
# original strategy
models$dollar = bt.run.share(data, clean.signal=F)

# assuming 10c a share commissions
models$dollar = bt.run.share(data, commission=0.1, clean.signal=F)
The second point is a bit more work. First, let's allocate risk across only 3 asset classes: stocks (SPY), gold (GLD), and treasuries (TLT). Next, let's scale the SPY/GLD/TLT portfolio to the 7% target volatility. And finally, let's allocate the residual portfolio exposure to cash (SHY).
#*****************************************************************
# Risk Weighted: allocate only to 3 asset classes: stocks(SPY), gold(GLD), and treasuries(TLT)
#******************************************************************
ret.log = bt.apply.matrix(prices, ROC, type='continuous')
hist.vol = sqrt(252) * bt.apply.matrix(ret.log, runSD, n = 21)

weight.risk = weight.dollar / hist.vol
weight.risk$SHY = 0
weight.risk = weight.risk / rowSums(weight.risk)

data$weight[] = NA
data$weight[period.ends,] = weight.risk[period.ends,]
models$risk = bt.run.share(data, commission=commission, clean.signal=F)

#*****************************************************************
# Risk Weighted + 7% target volatility
#******************************************************************
data$weight[] = NA
data$weight[period.ends,] = target.vol.strategy(models$risk, weight.risk, 7/100, 21, 100/100)[period.ends,]
models$risk.target7 = bt.run.share(data, commission=commission, clean.signal=F)

#*****************************************************************
# Risk Weighted + 7% target volatility + SHY
#******************************************************************
data$weight[] = NA
data$weight[period.ends,] = target.vol.strategy(models$risk, weight.risk, 7/100, 21, 100/100)[period.ends,]

cash = 1 - rowSums(data$weight)
data$weight$SHY[period.ends,] = cash[period.ends]

models$risk.target7.shy = bt.run.share(data, commission=commission, clean.signal=F)
The modified version of the risk allocation portfolio performs well relative to the other portfolios, even after incorporating the 10c a share transaction cost.
To view the complete source code for this example, please have a look at the bt.permanent.portfolio3.test() function in bt.test.r at github.
Permanent Portfolio – Simple Tools
I have previously described and back-tested the Permanent Portfolio strategy based on the series of posts at the GestaltU blog. Today I want to show how we can improve the Permanent Portfolio strategy's performance using the following simple tools:
- Volatility targeting
- Risk allocation
- Tactical market filter
First, let’s load the historical prices for stocks (SPY), gold (GLD), treasuries (TLT), and cash (SHY) and create a quarterly rebalanced Permanent Portfolio strategy using the Systematic Investor Toolbox.
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

tickers = spl('SPY,TLT,GLD,SHY')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)

# extend GLD with Gold.PM - London Gold afternoon fixing prices
data$GLD = extend.GLD(data$GLD)

bt.prep(data, align='remove.na')

#*****************************************************************
# Setup
#******************************************************************
prices = data$prices
n = ncol(prices)

period.ends = endpoints(prices, 'quarters')
period.ends = period.ends[period.ends > 0]
period.ends = c(1, period.ends)

models = list()

#*****************************************************************
# Dollar Weighted
#******************************************************************
target.allocation = matrix(rep(1/n,n), nrow=1)
weight.dollar = ntop(prices, n)

data$weight[] = NA
data$weight[period.ends,] = weight.dollar[period.ends,]
models$dollar = bt.run.share(data, clean.signal=F)
Now let’s create a version of the Permanent Portfolio strategy that targets 7% annual volatility based on a 21-day lookback period.
#*****************************************************************
# Dollar Weighted + 7% target volatility
#******************************************************************
data$weight[] = NA
data$weight[period.ends,] = target.vol.strategy(models$dollar, weight.dollar, 7/100, 21, 100/100)[period.ends,]
models$dollar.target7 = bt.run.share(data, clean.signal=F)
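Conceptually, the overlay scales total exposure by the ratio of target to realized volatility, capped at the maximum exposure (here 100%). The following is only a stylized sketch of that idea, not the actual target.vol.strategy() code; it reuses the prices and n objects from the setup above.

# stylized volatility targeting sketch (not the SIT implementation)
target.vol = 7/100
max.exposure = 100/100

ret.log = bt.apply.matrix(prices, ROC, type='continuous')
portfolio.ret = ret.log %*% rep(1/n, n)    # equal-weight portfolio returns

# annualized realized volatility over a 21-day window
realized.vol = sqrt(252) * runSD(portfolio.ret, n = 21)

# scale total exposure up or down; the residual is implicitly held in cash
leverage = pmin(max.exposure, target.vol / realized.vol)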
Please note that allocating equal dollar amounts to each investment allocates more risk to the risky assets. If we want to distribute the risk budget equally across all assets, we can consider a portfolio based on equal risk allocation instead of equal capital (dollar) allocation.
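For intuition, here is a quick numeric illustration with hypothetical annualized volatilities (made-up numbers): the inverse-volatility weights give the low-volatility assets most of the capital, so each asset contributes roughly equal risk (ignoring correlations).

# hypothetical annualized volatilities, for illustration only
vols = c(SPY = 0.16, TLT = 0.14, GLD = 0.18, SHY = 0.01)

# inverse-volatility weights
w = (1/vols) / sum(1/vols)
round(w, 2)

# each asset's stand-alone risk contribution is now about the same
round(w * vols, 3)

The strategy below applies the same idea using rolling 21-day historical volatility.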
#*****************************************************************
# Risk Weighted
#******************************************************************
ret.log = bt.apply.matrix(prices, ROC, type='continuous')
hist.vol = sqrt(252) * bt.apply.matrix(ret.log, runSD, n = 21)

weight.risk = weight.dollar / hist.vol
weight.risk = weight.risk / rowSums(weight.risk)

data$weight[] = NA
data$weight[period.ends,] = weight.risk[period.ends,]
models$risk = bt.run.share(data, clean.signal=F)
We can also use a market filter, for example a 10 month moving average, to control portfolio drawdowns.
#*****************************************************************
# Market Filter (tactical): 10 month moving average
#******************************************************************
period.ends = endpoints(prices, 'months')
period.ends = period.ends[period.ends > 0]
period.ends = c(1, period.ends)

sma = bt.apply.matrix(prices, SMA, 200)
weight.dollar.tactical = weight.dollar * (prices > sma)

data$weight[] = NA
data$weight[period.ends,] = weight.dollar.tactical[period.ends,]
models$dollar.tactical = bt.run.share(data, clean.signal=F)
Finally, let’s combine market filter and volatility targeting:
#*****************************************************************
# Tactical + 7% target volatility
#******************************************************************
data$weight[] = NA
data$weight[period.ends,] = target.vol.strategy(models$dollar.tactical, weight.dollar.tactical, 7/100, 21, 100/100)[period.ends,]
models$dollar.tactical.target7 = bt.run.share(data, clean.signal=F)

#*****************************************************************
# Create Report
#******************************************************************
plotbt.custom.report.part1(models)
plotbt.strategy.sidebyside(models)
The final portfolio that combines the market filter and volatility targeting is a big step up from the original Permanent Portfolio strategy: the returns are a bit lower, but drawdowns are cut in half.
To view the complete source code for this example, please have a look at the bt.permanent.portfolio2.test() function in bt.test.r at github.