
## Update for Backtesting Asset Allocation Portfolios post

It has been over a year since my original post, Backtesting Asset Allocation portfolios. During this period I have expanded the functionality of the Systematic Investor Toolbox, both in terms of optimization functions and helper back-test functions.

Today, I want to update the Backtesting Asset Allocation portfolios post and showcase the new functionality. I will use the following global asset universe: SPY, QQQ, EEM, IWM, EFA, TLT, IYR, GLD, and form portfolios every month using different asset allocation methods.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')
tickers = spl('SPY,QQQ,EEM,IWM,EFA,TLT,IYR,GLD')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
bt.prep(data, align='remove.na', dates='1990::')

#*****************************************************************
# Code Strategies
#******************************************************************
cluster.group = cluster.group.kmeans.90

obj = portfolio.allocation.helper(data$prices, periodicity = 'months', lookback.len = 60,
	min.risk.fns = list(
		EW=equal.weight.portfolio,
		RP=risk.parity.portfolio(),
		MD=max.div.portfolio,

		MV=min.var.portfolio,
		MVE=min.var.excel.portfolio,
		MV2=min.var2.portfolio,

		MC=min.corr.portfolio,
		MCE=min.corr.excel.portfolio,
		MC2=min.corr2.portfolio,

		MS=max.sharpe.portfolio(),
		ERC=equal.risk.contribution.portfolio,

		# target return / risk
		TRET.12 = target.return.portfolio(12/100),
		TRISK.10 = target.risk.portfolio(10/100),

		# cluster
		C.EW = distribute.weights(equal.weight.portfolio, cluster.group),
		C.RP = distribute.weights(risk.parity.portfolio, cluster.group),

		# rso
		RSO.RP.5 = rso.portfolio(risk.parity.portfolio, 5, 500),

		# others
		MMaxLoss = min.maxloss.portfolio,
		MMad = min.mad.portfolio,
		MCVaR = min.cvar.portfolio,
		MCDaR = min.cdar.portfolio,
		MMadDown = min.mad.downside.portfolio,
		MRiskDown = min.risk.downside.portfolio,
		MCorCov = min.cor.insteadof.cov.portfolio
	)
)

models = create.strategies(obj, data)$models

#*****************************************************************
# Create Report
#******************************************************************
strategy.performance.snapshoot(models, T, 'Backtesting Asset Allocation portfolios')


I hope you will enjoy creating your own portfolio allocation methods and experimenting with the large variety of portfolio allocation techniques that are readily available.

To view the complete source code for this example, please have a look at the bt.aa.test.new() function in bt.test.r at github.

## Adaptive Asset Allocation – Sensitivity Analysis

Today I want to continue with the Adaptive Asset Allocation theme and examine how sensitive the strategy results are to the look-back parameters used for the momentum and volatility computations. I will follow the same steps that were outlined by David Varadi in his post on the robustness of the parameters of the Adaptive Asset Allocation algorithm. Please see my prior post for more information.

Let’s start by loading historical prices for 10 ETFs using the Systematic Investor Toolbox:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

tickers = spl('SPY,EFA,EWJ,EEM,IYR,RWX,IEF,TLT,DBC,GLD')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
bt.prep(data, align='keep.all', dates='2004:12::')

#*****************************************************************
# Code Strategies
#******************************************************************
prices = data$prices
n = ncol(prices)

models = list()

# find period ends
period.ends = endpoints(prices, 'months')
period.ends = period.ends[period.ends > 0]


Next I wrapped the Combo (Momentum and Volatility weighted) strategy and the Adaptive Asset Allocation (AAA) strategy into the bt.aaa.combo and bt.aaa.minrisk functions respectively. Following is an example of how you can use them:

#*****************************************************************
# Test
#******************************************************************
models = list()

models$combo = bt.aaa.combo(data, period.ends, n.top = 5,
	n.mom = 180, n.vol = 20)

models$aaa = bt.aaa.minrisk(data, period.ends, n.top = 5,
	n.mom = 180, n.vol = 20)

plotbt.custom.report.part1(models)


Now let's evaluate all possible combinations of momentum and volatility look-back parameters ranging from 1 to 12 months using the Combo strategy:

#*****************************************************************
# Sensitivity Analysis: bt.aaa.combo / bt.aaa.minrisk
#******************************************************************
# length of momentum look back
mom.lens = ( 1 : 12 ) * 20

# length of volatility look back
vol.lens = ( 1 : 12 ) * 20

models = list()

# evaluate strategies
for(n.mom in mom.lens) {
cat('MOM =', n.mom, '\n')

for(n.vol in vol.lens) {
cat('\tVOL =', n.vol, '\n')

models[[ paste('M', n.mom, 'V', n.vol) ]] =
bt.aaa.combo(data, period.ends, n.top = 5,
n.mom = n.mom, n.vol = n.vol)
}
}

out = plotbt.strategy.sidebyside(models, return.table=T, make.plot = F)


Finally, let's plot the Sharpe, Cagr, DVR, and MaxDD statistics for each strategy:

#*****************************************************************
# Create Report
#******************************************************************
# allocate matrix to store backtest results
dummy = matrix('', len(vol.lens), len(mom.lens))
colnames(dummy) = paste('M', mom.lens)
rownames(dummy) = paste('V', vol.lens)

names = spl('Sharpe,Cagr,DVR,MaxDD')

layout(matrix(1:4, nrow=2))
for(i in names) {
dummy[] = ''

for(n.mom in mom.lens)
for(n.vol in vol.lens)
dummy[paste('V', n.vol), paste('M', n.mom)] =
out[i, paste('M', n.mom, 'V', n.vol) ]

plot.table(dummy, smain = i, highlight = T, colorbar = F)
}


I have also repeated the last two steps for the AAA strategy (the bt.aaa.minrisk function). The results for the AAA and Combo strategies are very similar: shorter-term momentum and shorter-term volatility look-backs produce the best results, but likely at the cost of higher turnover.
To view the complete source code for this example, please have a look at the bt.aaa.sensitivity.test() function in bt.test.r at github.

## Adaptive Asset Allocation

August 14, 2012

Today I want to highlight a whitepaper about Adaptive Asset Allocation by Butler, Philbrick and Gordillo and the discussion by David Varadi on the robustness of the parameters of the Adaptive Asset Allocation algorithm. In this post I will follow the steps of the Adaptive Asset Allocation paper, and in the next post I will show how to test the sensitivity of the parameters of the Adaptive Asset Allocation algorithm.

I will use 10 ETFs that invest in the same asset classes as presented in the paper:

• U.S. Stocks (SPY)
• European Stocks (EFA)
• Japanese Stocks (EWJ)
• Emerging Market Stocks (EEM)
• U.S. REITs (IYR)
• International REITs (RWX)
• U.S. Mid-term Treasuries (IEF)
• U.S. Long-term Treasuries (TLT)
• Commodities (DBC)
• Gold (GLD)

Unfortunately, most of these 10 ETFs only began trading at the end of 2004, so I will only be able to replicate the recent Adaptive Asset Allocation strategy performance.
Let's start by loading historical prices of 10 ETFs using the Systematic Investor Toolbox:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

tickers = spl('SPY,EFA,EWJ,EEM,IYR,RWX,IEF,TLT,DBC,GLD')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='keep.all', dates='2004:12::')

#*****************************************************************
# Code Strategies
#******************************************************************
prices = data$prices
n = ncol(prices)

models = list()

# find period ends
period.ends = endpoints(prices, 'months')
period.ends = period.ends[period.ends > 0]

n.top = 5		# number of momentum positions
n.mom = 6*22	# length of momentum look back
n.vol = 1*22 	# length of volatility look back


Next, let’s create portfolios as outlined in the whitepaper:

#*****************************************************************
# Equal Weight
#******************************************************************
data$weight[] = NA
data$weight[period.ends,] = ntop(prices[period.ends,], n)
models$equal.weight = bt.run.share(data, clean.signal=F)

#*****************************************************************
# Volatility Position Sizing
#******************************************************************
ret.log = bt.apply.matrix(prices, ROC, type='continuous')
hist.vol = bt.apply.matrix(ret.log, runSD, n = n.vol)

adj.vol = 1/hist.vol[period.ends,]

data$weight[] = NA
data$weight[period.ends,] = adj.vol / rowSums(adj.vol, na.rm=T)
models$volatility.weighted = bt.run.share(data, clean.signal=F)

#*****************************************************************
# Momentum Portfolio
#*****************************************************************
momentum = prices / mlag(prices, n.mom)

data$weight[] = NA
data$weight[period.ends,] = ntop(momentum[period.ends,], n.top)
models$momentum = bt.run.share(data, clean.signal=F)

#*****************************************************************
# Combo: weight positions in the Momentum Portfolio according to Volatility
#*****************************************************************
weight = ntop(momentum[period.ends,], n.top) * adj.vol

data$weight[] = NA
data$weight[period.ends,] = weight / rowSums(weight, na.rm=T)
models$combo = bt.run.share(data, clean.signal=F, trade.summary = TRUE)


Finally, let's create the Adaptive Asset Allocation portfolio:

#*****************************************************************
# weight positions in the Momentum Portfolio according to
# the minimum variance algorithm
#*****************************************************************
weight = NA * prices
weight[period.ends,] = ntop(momentum[period.ends,], n.top)

for( i in period.ends[period.ends >= n.mom] ) {
hist = ret.log[ (i - n.vol + 1):i, ]

# require all assets to have full price history
include.index = count(hist) == n.vol

# also only consider assets in the Momentum Portfolio
index = ( weight[i,] > 0 ) & include.index
n = sum(index)

if(n > 0) {
hist = hist[ , index]

# create historical input assumptions
ia = create.historical.ia(hist, 252)
s0 = apply(coredata(hist),2,sd)
ia$cov = cor(coredata(hist), use='complete.obs', method='pearson') * (s0 %*% t(s0))

# create constraints: 0 <= x <= 1, sum(x) = 1
constraints = new.constraints(n, lb = 0, ub = 1)
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# compute minimum variance weights
weight[i,] = 0
weight[i,index] = min.risk.portfolio(ia, constraints)
}
}

# Adaptive Asset Allocation (AAA)
data$weight[] = NA
data$weight[period.ends,] = weight[period.ends,]
models$aaa = bt.run.share(data, clean.signal=F, trade.summary = TRUE)


The last step is to create reports for all models:

#*****************************************************************
# Create Report
#******************************************************************
models = rev(models)

plotbt.custom.report.part1(models)
plotbt.custom.report.part2(models)
plotbt.custom.report.part3(models$combo, trade.summary = TRUE)
plotbt.custom.report.part3(models$aaa, trade.summary = TRUE)


The AAA portfolio performs very well, producing the highest Sharpe ratio and smallest draw-down across all strategies. In the next post I will look at the sensitivity of AAA parameters.

To view the complete source code for this example, please have a look at the bt.aaa.test() function in bt.test.r at github.

## Backtesting Asset Allocation portfolios

In the last post, Portfolio Optimization: Specify constraints with GNU MathProg language, Paolo and MC raised a question: “How would you construct an equal risk contribution portfolio?” Unfortunately, this problem cannot be expressed as a Linear or Quadratic Programming problem.

The outline for this post:

• I will show how the Equal Risk Contribution portfolio can be formulated and solved using a non-linear solver.
• I will backtest the Equal Risk Contribution portfolio and other Asset Allocation portfolios based on the various risk measures I described in the Asset Allocation series of posts.

Pat Burns wrote an excellent post, Unproxying weight constraints, that explains Risk Contribution: the partition of a portfolio's variance into pieces attributed to each asset. The Equal Risk Contribution portfolio is a portfolio that splits total portfolio risk equally among its assets. (The concept is similar to the 1/N portfolio, a portfolio that splits total portfolio weight equally among its assets.)

Risk Contributions (risk fractions) can be expressed in terms of portfolio weights and covariance matrix (V):
$f=\frac{w \circ (Vw)}{w'Vw}$

where $\circ$ denotes element-wise multiplication.

Our objective is to find portfolio weights such that Risk Contributions are equal for all assets. This objective function can be easily coded in R:

risk.contribution = w * (cov %*% w)
sum( abs(risk.contribution - mean(risk.contribution)) )
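To see these risk fractions in action, here is a self-contained toy example in base R; the 3-asset covariance matrix and weights below are made up purely for illustration:

```r
# hypothetical 3-asset covariance matrix and weights (for illustration only)
cov = matrix(c(0.04, 0.01, 0.00,
               0.01, 0.09, 0.02,
               0.00, 0.02, 0.16), 3, 3)
w = c(0.5, 0.3, 0.2)

# risk fractions: f = w * (V %*% w) / (w' V w)
risk.contribution = w * (cov %*% w)
risk.fraction = risk.contribution / sum(risk.contribution)
```

By construction the fractions sum to one; an Equal Risk Contribution portfolio would choose w so that each fraction equals 1/3.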


I recommend the following references for a detailed discussion of Risk Contributions:

To solve for the Equal Risk Contribution portfolio, I will use a Nonlinear programming solver, Rdonlp2, which is based on the donlp2 routine developed and copyrighted by Prof. Dr. Peter Spellucci. [Please note that the following code might not execute on your computer because it requires the Rdonlp2 package, which is not available on CRAN.]

#--------------------------------------------------------------------------
# Equal Risk Contribution portfolio
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

# find Equal Risk Contribution portfolio
w = find.erc.portfolio(ia, constraints)

# compute Risk Contributions
risk.contributions = portfolio.risk.contribution(w, ia)


Next, I want to expand on the Backtesting Minimum Variance portfolios post to include the Equal Risk Contribution portfolio and other Asset Allocation portfolios based on the various risk measures I described in the Asset Allocation series of posts.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod,quadprog,corpcor,lpSolve')
tickers = spl('SPY,QQQ,EEM,IWM,EFA,TLT,IYR,GLD')

data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='remove.na', dates='1990::2011')

#*****************************************************************
# Code Strategies
#******************************************************************
prices = data$prices
n = ncol(prices)

# find week ends
period.ends = endpoints(prices, 'weeks')
period.ends = period.ends[period.ends > 0]

#*****************************************************************
# Create Constraints
#*****************************************************************
constraints = new.constraints(n, lb = 0, ub = 1)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

#*****************************************************************
# Create Portfolios
#*****************************************************************
ret = prices / mlag(prices) - 1
start.i = which(period.ends >= (63 + 1))[1]

weight = NA * prices[period.ends,]
weights = list()
# Equal Weight 1/N Benchmark
weights$equal.weight = weight
weights$equal.weight[] = ntop(prices[period.ends,], n)
weights$equal.weight[1:start.i,] = NA

weights$min.var = weight
weights$min.maxloss = weight
weights$min.mad = weight
weights$min.cvar = weight
weights$min.cdar = weight
weights$min.cor.insteadof.cov = weight
weights$min.mad.downside = weight
weights$min.risk.downside = weight

# following optimizations use a non-linear solver
weights$erc = weight
weights$min.avgcor = weight

risk.contributions = list()
risk.contributions$erc = weight

# construct portfolios
for( j in start.i:len(period.ends) ) {
i = period.ends[j]

# one quarter = 63 days
hist = ret[ (i - 63 + 1):i, ]

# create historical input assumptions
ia = create.historical.ia(hist, 252)
s0 = apply(coredata(hist),2,sd)
ia$correlation = cor(coredata(hist), use='complete.obs', method='pearson')
ia$cov = ia$correlation * (s0 %*% t(s0))

# construct portfolios based on various risk measures
weights$min.var[j,] = min.risk.portfolio(ia, constraints)
weights$min.maxloss[j,] = min.maxloss.portfolio(ia, constraints)
weights$min.mad[j,] = min.mad.portfolio(ia, constraints)
weights$min.cvar[j,] = min.cvar.portfolio(ia, constraints)
weights$min.cdar[j,] = min.cdar.portfolio(ia, constraints)
weights$min.cor.insteadof.cov[j,] = min.cor.insteadof.cov.portfolio(ia, constraints)
weights$min.mad.downside[j,] = min.mad.downside.portfolio(ia, constraints)
weights$min.risk.downside[j,] = min.risk.downside.portfolio(ia, constraints)

# following optimizations use a non-linear solver
constraints$x0 = weights$erc[(j-1),]
weights$erc[j,] = find.erc.portfolio(ia, constraints)

constraints$x0 = weights$min.avgcor[(j-1),]
weights$min.avgcor[j,] = min.avgcor.portfolio(ia, constraints)

risk.contributions$erc[j,] = portfolio.risk.contribution(weights$erc[j,], ia)
}


Next, let's backtest these portfolios and create summary statistics:

#*****************************************************************
# Create strategies
#******************************************************************
models = list()
for(i in names(weights)) {
data$weight[] = NA
data$weight[period.ends,] = weights[[i]]
models[[i]] = bt.run.share(data, clean.signal = F)
}

#*****************************************************************
# Create Report
#******************************************************************
models = rev(models)

# Plot performance
plotbt(models, plotX = T, log = 'y', LeftMargin = 3)
mtext('Cumulative Performance', side = 2, line = 1)

# Plot Strategy Statistics Side by Side
plotbt.strategy.sidebyside(models)

# Plot transition maps
layout(1:len(models))
for(m in names(models)) {
plotbt.transition.map(models[[m]]$weight, name=m)
legend('topright', legend = m, bty = 'n')
}

# Plot risk contributions
layout(1:len(risk.contributions))
for(m in names(risk.contributions)) {
plotbt.transition.map(risk.contributions[[m]], name=paste('Risk Contributions',m))
legend('topright', legend = m, bty = 'n')
}

# Compute portfolio concentration and turnover stats based on the
# On the property of equally-weighted risk contributions portfolios by S. Maillard,
# T. Roncalli and J. Teiletche (2008), page 22
out = compute.stats( rev(weights),
list(Gini=function(w) mean(portfolio.concentration.gini.coefficient(w), na.rm=T),
Herfindahl=function(w) mean(portfolio.concentration.herfindahl.index(w), na.rm=T),
Turnover=function(w) 52 * mean(portfolio.turnover(w), na.rm=T)
)
)

out[] = plota.format(100 * out, 1, '', '%')
plot.table(t(out))


The minimum variance (min.risk) portfolio performed very well during that period with 10.5% CAGR and 14% maximum drawdown. The Equal Risk Contribution portfolio (find.erc) also fares well with 10.5% CAGR and 19% maximum drawdown. The 1/N portfolio (equal.weight) is the worst strategy with 7.8% CAGR and 45% maximum drawdown.

One interesting way to modify this strategy is to consider different measures of volatility when constructing the covariance matrix. For example, the TTR package provides functions for the Garman-Klass Yang-Zhang and the Yang-Zhang volatility estimation methods. For more details, please have a look at the Different Volatility Measures Effect on Daily MR post by Quantum Financier.
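To illustrate the idea, the classic Garman-Klass estimator is only a few lines of base R. This is a sketch with synthetic OHLC bars, not the TTR implementation; in practice you would call TTR's volatility function with calc = 'garman.klass' or calc = 'yang.zhang':

```r
# Garman-Klass volatility: combines the high/low range with the open-to-close move
# sigma^2 = mean( 0.5*log(H/L)^2 - (2*log(2)-1)*log(C/O)^2 ), annualized by N
garman.klass = function(open, high, low, close, N = 252) {
	sqrt(N * mean( 0.5 * log(high/low)^2 - (2*log(2) - 1) * log(close/open)^2 ))
}

# synthetic example: 20 flat bars with a constant 2% high/low range
open  = rep(100, 20)
close = rep(100, 20)
high  = rep(101, 20)
low   = rep( 99, 20)

garman.klass(open, high, low, close)
```

Because the estimator uses intraday range information, it can produce a usable volatility estimate from far fewer bars than a close-to-close standard deviation.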

Inspired by I Dream of Gini by David Varadi, I will show how to create the Gini efficient frontier in the next post.

To view the complete source code for this example, please have a look at the bt.aa.test() function in bt.test.r at github.

## Asset Allocation Process Summary

I want to review the series of posts I wrote about Asset Allocation and Portfolio Construction and show how all of them fit into the portfolio management framework.

The first step of the Asset Allocation process is to create the Input Assumptions: Expected Return, Risk, and Covariance. This is more art than science because we are trying to forecast the future joint realization of all asset classes. There are a number of approaches to creating input assumptions, for example:

Robust estimation of the covariance matrix is usually preferred. For example, the Covariance Shrinkage Estimator is nicely explained in Honey, I Shrunk the Sample Covariance Matrix by Olivier Ledoit and Michael Wolf (2003).
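The mechanics of shrinkage are easy to sketch in base R. The toy below shrinks the sample covariance toward its diagonal with a fixed intensity; a real Ledoit-Wolf estimator chooses the intensity from the data, so the fixed lambda here is purely illustrative:

```r
# shrink the sample covariance toward a diagonal target with fixed intensity lambda
shrink.cov = function(returns, lambda = 0.2) {
	S = cov(returns)
	target = diag(diag(S))   # keep variances, zero out covariances
	(1 - lambda) * S + lambda * target
}

set.seed(1)
returns = matrix(rnorm(60 * 4, sd = 0.01), 60, 4)   # synthetic daily returns
S.shrunk = shrink.cov(returns, lambda = 0.2)
```

Variances are left untouched while every off-diagonal entry is pulled 20% of the way toward zero, which dampens estimation noise in the correlations.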

The introduction of new asset classes with short histories is problematic when using historical input assumptions. For example, Treasury Inflation-Protected Securities (TIPS) were introduced by the U.S. Treasury Department in 1997. This is an attractive asset class that helps fight inflation. To incorporate TIPS, I suggest following the methods outlined in Analyzing investments whose histories differ in length by R. Stambaugh (1997).

The next step of the Asset Allocation process is to create the efficient frontier and select the target portfolio. I recommend looking at different risk measures in addition to the traditional standard deviation of the portfolio's return: for example, Maximum Loss, Mean-Absolute Deviation, Expected Shortfall (CVaR), and Conditional Drawdown at Risk (CDaR). To select a target portfolio, look at the portfolios on the efficient frontier and select one that satisfies both your quantitative and qualitative requirements. For example, a quantitative requirement can be a low historic drawdown, and a qualitative requirement can be sensible weights: if the model suggests a 13.2343% allocation to Fixed Income, round it down to 13%.
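For intuition, the historical version of Expected Shortfall (CVaR) is just the average of the worst-tail returns; here is a minimal base-R sketch on a toy return series:

```r
# historical Expected Shortfall (CVaR): mean of returns at or below the VaR quantile
cvar = function(returns, alpha = 0.05) {
	var.level = quantile(returns, alpha, names = FALSE)
	mean(returns[returns <= var.level])
}

returns = (1:100) / 100    # toy "returns" from 1% to 100%
cvar(returns, 0.05)        # averages the worst 5% of observations
```

Unlike VaR, which only reports the cut-off point, CVaR tells you how bad the tail is on average once the cut-off is breached.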

I also recommend looking at your target portfolio in reference to the geometric efficient frontier to make sure you are properly compensated for the risk of your target portfolio. If you have a view on possible future economic or market scenarios, stress test your target portfolio to see how it would behave under these scenarios. For example, read the article A scenarios approach to asset allocation.

Sometimes we want to combine short-term tactical models with a long-term strategic target portfolio. I think the best way to introduce tactical information into the strategic asset mix is to use the Black-Litterman model. Please read my post The Black-Litterman model for a numerical example.

The next step of the Asset Allocation process is to implement the target portfolio. If you follow a fund-of-funds approach and implement the target asset mix using external managers, please perform a style analysis to determine the style mix of each manager and visually check whether each manager's style was consistent over time. We want to invest in managers that follow their investment mandate, so we can correctly map them into our target asset portfolio.

We can use the information from style analysis to create managers input assumptions. Let’s combine alpha and covariance of tracking error from the style analysis with asset input assumptions to determine managers input assumptions.

$Tracking.Error = Managers_{returns} - Style.Mix \star Assets_{returns} \newline\newline Managers_{Alpha}=mean(Tracking.Error) \newline\newline Managers_{Alpha.Covariance} = cov(Tracking.Error)$

Managers Input Assumptions:

$Managers_{Expected.Return}=Managers_{Alpha} + Style.Mix \star Assets_{Expected.Return} \newline\newline Managers_{Covariance} = Managers_{Alpha.Covariance} + Style.Mix \star Assets_{Covariance}$

Note that we can simply add the means and covariances because the Managers' Tracking Error and the Assets' Returns are independent by construction.
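A small numeric sketch of these formulas in R; every number below (style mixes, alphas, asset assumptions) is hypothetical, and the covariance line uses the full quadratic form Style.Mix × Assets Covariance × Style.Mix':

```r
# hypothetical style-analysis output for 2 managers over 2 asset classes
style.mix = matrix(c(0.6, 0.4,     # manager 1: 60% stocks / 40% bonds
                     0.2, 0.8),    # manager 2: 20% stocks / 80% bonds
                   2, 2, byrow = TRUE)
alpha     = c(0.01, -0.005)         # managers' alphas from tracking error
alpha.cov = diag(c(0.0004, 0.0001)) # covariance of tracking errors

# hypothetical asset input assumptions
asset.mu  = c(0.08, 0.04)
asset.cov = matrix(c(0.0400, 0.0024,
                     0.0024, 0.0036), 2, 2)

# managers' input assumptions, per the formulas above
manager.mu  = alpha + style.mix %*% asset.mu
manager.cov = alpha.cov + style.mix %*% asset.cov %*% t(style.mix)
```

Manager 1's expected return works out to 0.01 + 0.6*0.08 + 0.4*0.04 = 7.4%, i.e. alpha on top of the style-mix-weighted asset returns.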

Next we can create managers efficient frontier, such that all portfolios on this frontier will have target asset allocation, as implied from each manager’s style analysis.

$Minimize ( w \star Managers_{Covariance}\star w') \newline\newline w \star Style.Mix = Target.Portfolio_{Asset.Mix} \newline\newline w \star Managers_{Expected.Return} = E \newline\newline \sum_{i=1}^{N}w_{i} = 1 \newline\newline w_{i}\geqslant 0, \text{ for } i=1 \ldots N$

The last step of the Asset Allocation process is to decide how and when to rebalance, i.e. update the portfolio to the target mix. You could rebalance daily, but that is very costly. A good alternative is to rebalance on a fixed schedule, e.g. quarterly or annually, or to set boundaries, e.g. if an asset class weight is more than 3% away from its target, then rebalance.
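The boundary-based rule can be sketched in a couple of lines of R; the 3% band and the weights below are illustrative:

```r
# rebalance when any asset class drifts more than `band` from its target weight
needs.rebalance = function(current, target, band = 0.03) {
	any(abs(current - target) > band)
}

target = c(0.60, 0.40)
needs.rebalance(c(0.645, 0.355), target)   # 4.5% drift: breaches the band
needs.rebalance(c(0.620, 0.380), target)   # 2.0% drift: within the band
```

In practice the band is a trade-off: a tighter band tracks the target mix more closely but triggers more trades and transaction costs.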

In conclusion, the Asset Allocation process consists of four decision steps:

• create Input Assumptions
• create Efficient Frontier
• implement Target Portfolio
• create Rebalancing Plan

All these steps include some quantitative and qualitative iterations. I highly recommend experimenting as much as possible before committing your hard earned savings to an asset allocation portfolio.