Trading Strategy Sensitivity Analysis

When designing a trading strategy, I want to make sure that small changes in the strategy parameters will not transform a profitable strategy into a losing one. I will study the strategy's robustness and profitability under different parameter scenarios, using a sample strategy presented by David Varadi in the Improving Trend-Following Strategies With Counter-Trend Entries post.

First, let’s implement this trend-following strategy using the backtesting library in the Systematic Investor Toolbox:

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
	source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')
	tickers = spl('SPY')

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
	bt.prep(data, align='keep.all', dates='1970::2011')

	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices    
	
	# Buy & Hold	
	data$weight[] = 1
	buy.hold = bt.run(data)	

	# Trend-Following strategy: Long[Close > SMA(10) ]
	sma = bt.apply(data, function(x) { SMA(Cl(x), 10) } )	
	data$weight[] = NA
		data$weight[] = iif(prices >= sma, 1, 0)
	trend.following = bt.run(data, trade.summary=T)			

	# Trend-Following With Counter-Trend strategy: Long[Close > SMA(10), DVB(1) CounterTrend ]
	dv = bt.apply(data, function(x) { DV(HLC(x), 1, TRUE) } )	
	data$weight[] = NA
		data$weight[] = iif(prices > sma & dv < 0.25, 1, data$weight)
		data$weight[] = iif(prices < sma & dv > 0.75, 0, data$weight)
	trend.following.dv1 = bt.run(data, trade.summary=T)			

	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report(trend.following.dv1, trend.following, buy.hold)	

The Counter-Trend Entries (trend.following.dv1, black line) improved the performance of the simple Trend-Following (trend.following, red line) strategy: both returns are higher and drawdowns are smaller.

Next, I will examine how CAGR, Sharpe, DVR, and Maximum Drawdown are affected by varying the length of the moving average from 10 to 100 and the length of the DV from 1 to 5:

	#*****************************************************************
	# Sensitivity Analysis
	#****************************************************************** 
	ma.lens = seq(10, 100, by = 10)
	dv.lens = seq(1, 5, by = 1)

	# precompute indicators
	mas = matrix(double(), nrow(prices), len(ma.lens))
	dvs = matrix(double(), nrow(prices), len(dv.lens))

	for(i in 1:len(ma.lens)) {
		ma.len = ma.lens[i]
		mas[, i] = bt.apply(data, function(x) { SMA(Cl(x), ma.len) } )
	}
	for(i in 1:len(dv.lens)) {
		dv.len = dv.lens[i]
		dvs[,i] = bt.apply(data, function(x) { DV(HLC(x), dv.len, TRUE) } )
	}

	# allocate matrixes to store backtest results
	dummy = matrix(double(), len(ma.lens), 1+len(dv.lens))
		rownames(dummy) = paste('SMA', ma.lens)
		colnames(dummy) = c('NO', paste('DV', dv.lens))
		
	out = list()
		out$Cagr = dummy
		out$Sharpe = dummy
		out$DVR = dummy
		out$MaxDD = dummy
	
	# evaluate strategies
	for(ima in 1:len(ma.lens)) {
		sma = mas[, ima]
		cat('SMA =', ma.lens[ima], '\n')

		for(idv in 0:len(dv.lens)) {			
			if( idv == 0 ) {
				data$weight[] = NA
					data$weight[] = iif(prices > sma, 1, 0)			
			} else {
				dv = dvs[, idv]
				
				data$weight[] = NA
					data$weight[] = iif(prices > sma & dv < 0.25, 1, data$weight)
					data$weight[] = iif(prices < sma & dv > 0.75, 0, data$weight)
			}
			strategy = bt.run(data, silent=T)
			
			# add 1 to account for benchmark case, no counter-trend
			idv = idv + 1
			out$Cagr[ima, idv] = compute.cagr(strategy$equity)
			out$Sharpe[ima, idv] = compute.sharpe(strategy$ret)
			out$DVR[ima, idv] = compute.DVR(strategy)
			out$MaxDD[ima, idv] = compute.max.drawdown(strategy$equity)
		}
	}

	#*****************************************************************
	# Create Report
	#****************************************************************** 
	layout(matrix(1:4,nrow=2))	
	for(i in names(out)) {
		temp = out[[i]]
		temp[] = plota.format( 100 * temp, 1, '', '' )
		plot.table(temp, smain = i, highlight = T, colorbar = F)
	}

The first column, labeled “NO”, shows the performance of the Trend-Following strategy without Counter-Trend Entries. The Counter-Trend filter improves the strategy's performance for most of the parameter scenarios. This is the result you want from a Sensitivity Analysis: the strategy is robust and profitable under a variety of parameters.

The next step, which you can do as homework, is to examine the strategy's performance with different instruments: for example, a more volatile Nasdaq (QQQ), or a Canadian S&P/TSX index (XIU.TO). A minimal sketch is shown below.
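
As a rough starting point for that homework, here is a minimal sketch that re-runs the combined strategy on another ticker; it reuses the code from above with a different symbol and assumes SIT and quantmod are already loaded:

	#*****************************************************************
	# Homework sketch: same strategy, different instrument
	#****************************************************************** 
	tickers = spl('QQQ')	# or 'XIU.TO' for the Canadian S&P/TSX market

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
	bt.prep(data, align='keep.all', dates='1970::2011')

	prices = data$prices
	sma = bt.apply(data, function(x) { SMA(Cl(x), 10) } )
	dv = bt.apply(data, function(x) { DV(HLC(x), 1, TRUE) } )

	data$weight[] = NA
		data$weight[] = iif(prices > sma & dv < 0.25, 1, data$weight)
		data$weight[] = iif(prices < sma & dv > 0.75, 0, data$weight)
	trend.following.dv1 = bt.run(data, trade.summary=T)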

To view the complete source code for this example, please have a look at the bt.improving.trend.following.test() function in bt.test.r at github.

Introduction to Backtesting library in the Systematic Investor Toolbox

November 25, 2011

I wrote a simple Backtesting library to evaluate and analyze Trading Strategies. I will use this library to present the performance of trading strategies that I will study in the next series of posts.

It is very easy to write a simple Backtesting routine in R, for example:

bt.simple <- function(data, signal) 
{
	# lag signal
	signal = Lag(signal, 1)

	# back fill
	signal = na.locf(signal, na.rm = FALSE)
	signal[is.na(signal)] = 0

	# calculate Close-to-Close returns
	ret = ROC(Cl(data), type='discrete')
	ret[1] = 0
	
	# compute stats	
	bt = list()
		bt$ret = ret * signal
		bt$equity = cumprod(1 + bt$ret)    	    	
	return(bt)
}

# Test for bt.simple functions
load.packages('quantmod')
	
# load historical prices from Yahoo Finance
data = getSymbols('SPY', src = 'yahoo', from = '1980-01-01', auto.assign = F)

# Buy & Hold
signal = rep(1, nrow(data))
buy.hold = bt.simple(data, signal)
        
# MA Cross
sma = SMA(Cl(data),200)
signal = ifelse(Cl(data) > sma, 1, 0)
sma.cross = bt.simple(data, signal)
        
# Create a chart showing the strategies' performance in 2000:2009
dates = '2000::2009'
buy.hold.equity = buy.hold$equity[dates] / as.double(buy.hold$equity[dates][1])
sma.cross.equity = sma.cross$equity[dates] / as.double(sma.cross$equity[dates][1])

chartSeries(buy.hold.equity, TA = c(addTA(sma.cross.equity, on=1, col='red')),	
	theme ='white', yrange = range(buy.hold.equity, sma.cross.equity) )	

The code I implemented in the Systematic Investor Toolbox is a bit longer, but it follows the same logic and provides extra functionality: the ability to handle multiple securities, weights- or shares-based backtesting, and customized reporting. The following sample code implements the above strategies using the backtesting library in the Systematic Investor Toolbox:

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
	source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 	
	load.packages('quantmod')
	tickers = spl('SPY')

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
	bt.prep(data, align='keep.all', dates='1970::2011')

	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices    
	
	# Buy & Hold	
	data$weight[] = 1
	buy.hold = bt.run(data)	

	# MA Cross
	sma = bt.apply(data, function(x) { SMA(Cl(x), 200) } )	
	data$weight[] = NA
		data$weight[] = iif(prices >= sma, 1, 0)
	sma.cross = bt.run(data, trade.summary=T)			

	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report(sma.cross, buy.hold)

The bt.prep function merges and aligns all symbols in the data environment. The bt.apply function applies a user-given function to each symbol in the data environment. The bt.run function computes the equity curve of the strategy specified by the data$weight matrix, which holds the weights (signals) used to open and close positions. The plotbt.custom.report function creates the customized report, which can be fine-tuned by the user. Here is a sample output:

> buy.hold = bt.run(data)
Performance summary :
        CAGR    Best    Worst
        7.2     14.5    -9.9

> sma.cross = bt.run(data, trade.summary=T)
Performance summary :
        CAGR    Best    Worst
        6.3     5.8     -7.2

The visual performance summary:

The statistical performance summary:

The trade summary:

To view the complete source code for this example, please have a look at the bt.test() function in bt.r at github.

Asset Allocation Process Summary

November 22, 2011

I want to review the series of posts I wrote about Asset Allocation and Portfolio Construction and show how all of them fit into portfolio management framework.

The first step of the Asset Allocation process is to create the Input Assumptions: Expected Return, Risk, and Covariance. This is more art than science because we are trying to forecast the future joint realization of all asset classes. There are a number of approaches to creating input assumptions; the simplest is to estimate them from historical returns, as sketched below.
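
A minimal sketch of purely historical input assumptions; the hist.returns matrix of monthly asset class returns and the other names are only for illustration:

	# A sketch of historical input assumptions; hist.returns is assumed to hold
	# monthly asset class returns (all names here are for illustration only)
	annual.factor = 12

	expected.return = annual.factor * apply(hist.returns, 2, mean)       # annualized mean returns
	risk = sqrt(annual.factor) * apply(hist.returns, 2, sd)              # annualized volatilities
	asset.covariance = annual.factor * cov(hist.returns)                 # annualized covariance matrix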

A robust estimate of the covariance matrix is usually preferred. For example, the Covariance Shrinkage Estimator is nicely explained in Honey, I Shrunk the Sample Covariance Matrix by Olivier Ledoit and Michael Wolf (2003).
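
To illustrate the general idea (this is only the blending step with a fixed intensity, not the full Ledoit-Wolf estimator, which also derives the optimal shrinkage intensity from the data), here is a small sketch that shrinks the sample covariance toward a constant-correlation target:

	# A sketch of shrinking the sample covariance toward a constant-correlation
	# target; the shrinkage intensity is fixed at 0.5 for illustration, whereas
	# Ledoit-Wolf derive the optimal intensity from the data
	shrink.covariance <- function(hist.returns, shrinkage = 0.5)
	{
		sample.cov = cov(hist.returns)
		vols = sqrt(diag(sample.cov))

		# constant-correlation prior: average off-diagonal correlation
		sample.cor = cov2cor(sample.cov)
		avg.cor = mean(sample.cor[upper.tri(sample.cor)])

		prior = avg.cor * (vols %o% vols)
		diag(prior) = vols^2

		shrinkage * prior + (1 - shrinkage) * sample.cov
	}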

Introducing new asset classes with short histories is problematic when using historical input assumptions. For example, Treasury Inflation-Protected Securities (TIPS) were introduced by the U.S. Treasury Department in 1997. This is an attractive asset class that helps fight inflation. To incorporate TIPS, I suggest following the methods outlined in Analyzing investments whose histories differ in length by R. Stambaugh (1997).
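
The full Stambaugh procedure adjusts both the means and the covariances of the short-history assets. A much simplified sketch of the underlying idea is to regress the short series on the long-history assets over the overlapping period and back-fill its missing returns from the fitted model (the function and variable names below are illustrative):

	# A simplified sketch of back-filling a short-history asset (e.g. TIPS) from
	# long-history assets; the full Stambaugh (1997) method also adjusts the
	# covariance estimates, which is omitted here
	backfill.returns <- function(short.returns, long.returns)
	{
		# short.returns: numeric vector with NA for the missing early history
		# long.returns : matrix of returns for the long-history assets
		overlap = !is.na(short.returns)

		# regress the short series on the long series over the common period
		fit = lm(short.returns[overlap] ~ long.returns[overlap, ])

		# back-fill the missing history from the fitted relationship
		missing = is.na(short.returns)
		filled = short.returns
		filled[missing] = cbind(1, long.returns[missing, , drop = FALSE]) %*% coef(fit)
		filled
	}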

The next step of the Asset Allocation process is to create the efficient frontier and select the target portfolio. I recommend looking at different risk measures in addition to the traditional standard deviation of the portfolio's return: for example, Maximum Loss, Mean-Absolute Deviation, Expected Shortfall (CVaR), and Conditional Drawdown at Risk (CDaR). To select a target portfolio, look at the portfolios on the efficient frontier and select one that satisfies both your quantitative and qualitative requirements. For example, a quantitative requirement can be a low historical drawdown, and a qualitative requirement can be sensible weights: if the model suggests a 13.2343% allocation to Fixed Income, round it down to 13%.
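
For reference, here is a minimal sketch of these alternative risk measures computed from a vector of portfolio returns (the variable name and the 95% level for CVaR are illustrative):

	# A sketch of alternative risk measures for a vector of portfolio returns
	maximum.loss = -min(portfolio.returns)

	mean.absolute.deviation = mean(abs(portfolio.returns - mean(portfolio.returns)))

	# Expected Shortfall (CVaR) at the 95% level: average of the worst 5% of returns
	var.95 = quantile(portfolio.returns, 0.05)
	cvar.95 = -mean(portfolio.returns[portfolio.returns <= var.95])

	# Maximum Drawdown of the compounded equity curve (a building block for CDaR)
	equity = cumprod(1 + portfolio.returns)
	max.drawdown = max(1 - equity / cummax(equity))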

I also recommend looking at your target portfolio in reference to the geometric efficient frontier to make sure you are properly compensated for the risk of your target portfolio. If you have a view on possible future economic or market scenarios, stress test your target portfolio to see how it would behave under these scenarios. For example, read the A scenarios approach to asset allocation article.

Sometimes we want to combine short-term tactical models with a long-term strategic target portfolio. I think the best way to introduce tactical information into the strategic asset mix is to use the Black-Litterman model. Please read my post The Black-Litterman Model for a numerical example.
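
For a sense of the mechanics, here is a minimal sketch of the Black-Litterman posterior expected returns; the input names (equilibrium returns, covariance, view matrix P, view returns q, tau, omega) follow the usual conventions and are illustrative:

	# A sketch of the Black-Litterman posterior expected returns:
	# mu = [ (tau*S)^-1 + P' Omega^-1 P ]^-1 [ (tau*S)^-1 pi + P' Omega^-1 q ]
	bl.posterior.returns <- function(equilibrium.returns, cov.matrix, P, q,
		tau = 0.025, omega = tau * P %*% cov.matrix %*% t(P))
	{
		ts.inv = solve(tau * cov.matrix)
		omega.inv = solve(omega)

		posterior.cov = solve(ts.inv + t(P) %*% omega.inv %*% P)
		posterior.cov %*% (ts.inv %*% equilibrium.returns + t(P) %*% omega.inv %*% q)
	}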

The next step of the Asset Allocation process is to implement the target portfolio. If you follow a fund-of-funds approach and implement the target asset mix using external managers, perform a style analysis to determine the style mix of each manager and visually check whether each manager's style was consistent over time. We want to invest in managers who follow their investment mandate, so that we can correctly map them into our target asset portfolio.

We can use the information from the style analysis to create the managers' input assumptions. Let's combine the alpha and the covariance of the tracking error from the style analysis with the asset input assumptions to determine the managers' input assumptions.

Tracking.Error = Managers_{returns} - Style.Mix \star Assets_{returns}

Managers_{Alpha} = mean(Tracking.Error)

Managers_{Alpha.Covariance} = cov(Tracking.Error)

Managers' Input Assumptions:

Managers_{Expected.Return} = Managers_{Alpha} + Style.Mix \star Assets_{Expected.Return}

Managers_{Covariance} = Managers_{Alpha.Covariance} + Style.Mix \star Assets_{Covariance} \star Style.Mix'

Note that we can simply add the means and covariances because the managers' tracking error and the asset returns are independent by construction.
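
These formulas translate directly into R. The sketch below assumes style.mix is a managers-by-assets matrix and the alpha, covariance, and asset inputs are as defined above (names are illustrative):

	# A sketch of the managers' input assumptions; style.mix is assumed to be a
	# (managers x assets) matrix and the other inputs as defined above
	managers.expected.return = managers.alpha + style.mix %*% assets.expected.return

	managers.covariance = managers.alpha.covariance +
		style.mix %*% assets.covariance %*% t(style.mix)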

Next, we can create the managers' efficient frontier, such that every portfolio on this frontier has the target asset allocation implied by each manager's style analysis.

Minimize \; w \star Managers_{Covariance} \star w'

subject to:

w \star Style.Mix = Target.Portfolio_{Asset.Mix}

w \star Managers_{Expected.Return} = E

\sum_{i=1}^{N} w_{i} = 1

w_{i} \geqslant 0 \; for \; i = 1 \ldots N
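
A sketch of how one point on this frontier could be computed with the quadprog package, assuming the managers' covariance, expected returns, style mix, and target asset mix are already defined (all names are illustrative):

	# A sketch of one point on the managers' efficient frontier using quadprog;
	# all inputs (style.mix, managers.covariance, etc.) are assumed to be defined
	load.packages('quadprog')

	n.managers = nrow(style.mix)

	# equality constraints: target asset mix, expected return E, fully invested
	# (the fully-invested constraint may be redundant if each manager's style
	# weights already sum to one; drop it in that case)
	Amat.eq = cbind(style.mix, managers.expected.return, rep(1, n.managers))
	bvec.eq = c(target.asset.mix, E, 1)

	# inequality constraints: long-only weights
	Amat = cbind(Amat.eq, diag(n.managers))
	bvec = c(bvec.eq, rep(0, n.managers))

	sol = solve.QP(Dmat = managers.covariance, dvec = rep(0, n.managers),
		Amat = Amat, bvec = bvec, meq = length(bvec.eq))
	managers.weights = sol$solution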

The last step of the Asset Allocation process is to decide how and when to rebalance, i.e. update the portfolio to the target mix. You could rebalance daily, but that is very costly. A good alternative is to rebalance at a fixed time period (e.g. quarterly or annually), or to set boundaries (e.g. rebalance if an asset class weight drifts more than 3% from its target).
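
A small sketch of the boundary rule: rebalance only when some asset class weight drifts more than the band (3% here) away from its target:

	# A sketch of threshold rebalancing: trigger a rebalance only when some
	# weight drifts more than 3% away from its target
	needs.rebalancing <- function(current.weights, target.weights, threshold = 0.03)
	{
		max(abs(current.weights - target.weights)) > threshold
	}

	# example: a 62%/38% portfolio versus a 60/40 target stays within the band
	needs.rebalancing(c(0.62, 0.38), c(0.60, 0.40))   # FALSE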

In conclusion, the Asset Allocation process consists of four decision steps:

  • create Input Assumptions
  • create Efficient Frontier
  • implement Target Portfolio
  • create Rebalancing Plan

All of these steps involve both quantitative and qualitative iterations. I highly recommend experimenting as much as possible before committing your hard-earned savings to an asset allocation portfolio.

Style Analysis

November 18, 2011

During the final stage of the asset allocation process we have to decide how to implement our desired allocation. In many cases we will allocate capital to mutual fund managers who will invest the money according to their fund's mandate. Usually there is no perfect relationship between asset classes and fund managers. To determine the true style of a manager, one can examine the fund's historical holdings or perform a Style Analysis. Style Analysis is a procedure that tries to attribute a fund's performance to the performance of asset classes by running a constrained linear regression. For a detailed review of Style Analysis I recommend the following articles:

I want to examine the style of the Fidelity Worldwide Fund (FWWFX). First, let's get the historical fund and asset class prices from Yahoo Finance:

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

	#--------------------------------------------------------------------------
	# Get Historical Data
	#--------------------------------------------------------------------------
	load.packages('quantmod')

	# load historical prices from Yahoo Finance
	symbols = spl('FWWFX,EWA,EWC,EWQ,EWG,EWJ,EWU,SPY')		
	symbol.names = spl('Fund,Australia,Canada,France,Germany,Japan,UK,USA')	
	getSymbols(symbols, from = '1980-01-01', auto.assign = TRUE)
			
	# align dates for all symbols & convert to monthly frequency
	hist.prices = merge(FWWFX,EWA,EWC,EWQ,EWG,EWJ,EWU,SPY)		
		period.ends = endpoints(hist.prices, 'months')
		hist.prices = Ad(hist.prices)[period.ends, ]
		
		index(hist.prices) = as.Date(paste('1/', format(index(hist.prices), '%m/%Y'), sep=''), '%d/%m/%Y')
		colnames(hist.prices) = symbol.names
	
	# remove any missing data	
	hist.prices = na.omit(hist.prices['1990::2010'])
	
	# compute simple returns	
	hist.returns = na.omit( ROC(hist.prices, type = 'discrete') )
		
	#load 3-Month Treasury Bill from FRED
	TB3M = quantmod::getSymbols('TB3MS', src='FRED', auto.assign = FALSE)	
	TB3M = processTBill(TB3M, timetomaturity = 1/4)
		index(TB3M) = as.Date(paste('1/', format(index(TB3M), '%m/%Y'), sep=''), '%d/%m/%Y')
		TB3M = ROC(Ad(TB3M), type = 'discrete')
		colnames(TB3M) = 'Cash'
		
	# add Cash to the asset classes
	hist.returns = na.omit( merge(hist.returns, TB3M) )

To determine the Fidelity Worldwide Fund's style, I will run a regression of fund returns on the country asset classes over a rolling 36-month window. First, let's run the regression naively without any constraints:

	#--------------------------------------------------------------------------
	# Style Regression over 36 Month window, unconstrained
	#--------------------------------------------------------------------------
	# setup
	ndates = nrow(hist.returns)
	n = ncol(hist.returns)-1
	window.len = 36
		
	style.weights = hist.returns[, -1]
		style.weights[] = NA
	style.r.squared = hist.returns[, 1]
		style.r.squared[] = NA
	
	# main loop
	for( i in window.len:ndates ) {
		window.index = (i - window.len + 1) : i
		
		fit = lm.constraint( hist.returns[window.index, -1], hist.returns[window.index, 1] )	
			style.weights[i,] = fit$coefficients
			style.r.squared[i,] = fit$r.squared
	}

	# plot 
	aa.style.summary.plot('Style UnConstrained', style.weights, style.r.squared, window.len)

The allocations jump up and down in no consistent fashion. The regression also suggests that the fund uses leverage (e.g. a -171% allocation to Cash), which is not the case. To fix these problems, I will introduce the following constraints:

  • All style weights are between 0% and 100%.
  • The sum of the style weights is equal to 100%.

	#--------------------------------------------------------------------------
	# Style Regression over Window, constrained
	#--------------------------------------------------------------------------
	# setup
	load.packages('quadprog')

	style.weights[] = NA
	style.r.squared[] = NA

	# Setup constraints
	# 0 <= x.i <= 1
	constraints = new.constraints(n, lb = 0, ub = 1)

	# SUM x.i = 1
	constraints = add.constraints(rep(1, n), 1, type = '=', constraints)		
	
	# main loop
	for( i in window.len:ndates ) {
		window.index = (i - window.len + 1) : i
		
		fit = lm.constraint( hist.returns[window.index, -1], hist.returns[window.index, 1], constraints )	
			style.weights[i,] = fit$coefficients
			style.r.squared[i,] = fit$r.squared
	}
 	
	# plot	
	aa.style.summary.plot('Style Constrained', style.weights, style.r.squared, window.len)

After introducing the constraints, the allocations are more stable, but the historical allocation to the USA (highlighted in yellow) varies from 0% in 2000 to 60% in 2006. This is very suspicious, and the only way to check whether it is true is to look at the fund's memorandum and historical holdings. For now, I will assume that the asset class allocations can vary around the fund's current allocations, which can be determined from its current holdings.

I imposed the following additional lower and upper bound constraints:

	#--------------------------------------------------------------------------
	# Style Regression  over Window, constrained + limits on allocation
	#--------------------------------------------------------------------------
	# setup
	style.weights[] = NA
	style.r.squared[] = NA

	# Setup constraints
	temp = rep(0, n)
		names(temp) = colnames(hist.returns)[-1]
	lb = temp
	ub = temp
	ub[] = 1
			
	lb['Australia'] = 0
	ub['Australia'] = 5

	lb['Canada'] = 0
	ub['Canada'] = 5
		
	lb['France'] = 0
	ub['France'] = 15

	lb['Germany'] = 0
	ub['Germany'] = 15

   	lb['Japan'] = 0
	ub['Japan'] = 15

   	lb['UK'] = 0
	ub['UK'] = 25
	
   	lb['USA'] = 30
	ub['USA'] = 100
	     
   	lb['Cash'] = 2
	ub['Cash'] = 15
       
	# 0 <= x.i <= 1
	constraints = new.constraints(n, lb = lb/100, ub = ub/100)

	# SUM x.i = 1
	constraints = add.constraints(rep(1, n), 1, type = '=', constraints)		
	
	# main loop
	for( i in window.len:ndates ) {
		window.index = (i - window.len + 1) : i
		
		fit = lm.constraint( hist.returns[window.index, -1], hist.returns[window.index, 1], constraints )	
			style.weights[i,] = fit$coefficients
			style.r.squared[i,] = fit$r.squared
	}
 	
	# plot
	aa.style.summary.plot('Style Constrained+Limits', style.weights, style.r.squared, window.len)

The last style allocation looks more plausible. If the fund's historical holdings were readily available, we could have examined them to refine the upper and lower bounds. The last step is to analyze the fund's actual returns versus the returns implied by its style matrix.

	#--------------------------------------------------------------------------
	# Look at Manager's Tracking Error
	#--------------------------------------------------------------------------
	manager.returns = hist.returns[, 1]
		manager.returns = manager.returns[window.len:ndates,]
	implied.returns = as.xts( rowSums(style.weights * hist.returns[, -1]), index(hist.returns))
		implied.returns = implied.returns[window.len:ndates,]

	tracking.error = manager.returns - implied.returns
	alpha = 12*mean(tracking.error)
	covar.alpha = 12* cov(tracking.error)
		
	# plot
	layout(1:2)
	plota(cumprod(1+manager.returns), type='l')
		plota.lines(cumprod(1+implied.returns), col='red')
		plota.legend('Fund,Style', 'black,red')
			
	par(mar = c(4,4,2,1))
	hist(100*tracking.error, xlab='Monthly Tracking Error',
		main= paste('Annualized Alpha =', round(100*alpha,1), 'Std Dev =', round(100*sqrt(covar.alpha),1))
	)

The Fidelity Worldwide Fund consistently outperformed its proxy, implied from the style matrix, over the last decade. The fund's alpha is 2.9% and the standard deviation of alpha is 3.9%. So if you want to invest in a worldwide fund, the Fidelity Worldwide Fund is not a bad choice.

To view the complete source code for this example, please have a look at the aa.style.test() function in aa.test.r at github.

Periodic Table of Investment Returns

To get a better sense of historical data, I like to examine a Periodic Table of Investment Returns. For an example of a Periodic Table, have a look at the Single Country Index Returns Periodic Table for 2001-2010 published by iShares.

I can easily create a similar table with the following R code, using the historical data from the Black-Litterman Model post.

# load Systematic Investor Toolbox
setInternet2(TRUE)
source(gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb')))

	#--------------------------------------------------------------------------
	# Get Historical Data
	#--------------------------------------------------------------------------
	# Country's IA are based on monthly data
	ia = aa.test.create.ia.country()
		hist.returns = ia$hist.returns

	# convert returns to prices
	hist.prices = cumprod(1 + hist.returns)

	# extract annual prices
	period.ends = endpoints(hist.prices, 'years')
		hist.prices = hist.prices[period.ends, ]

	# compute simple returns
	hist.returns = na.omit( ROC(hist.prices, type = 'discrete') )
		hist.returns = hist.returns['2000::2010']

	#--------------------------------------------------------------------------
	# Create Periodic table
	#--------------------------------------------------------------------------
	# create temp matrix with data you want to plot
	temp = t(coredata(hist.returns))
		colnames(temp) = format(index(hist.returns), '%Y')
		rownames(temp) = 1:ia$n
			rownames(temp)[1] = ' Best '
			rownames(temp)[ia$n] = ' Worst '

	# highlight each column
	col = plota.colors(ia$n)
	highlight = apply(temp,2, function(x) col[order(x, decreasing = T)] )
	
	# sort each column
	temp[] = apply(temp,2, sort, decreasing = T)
	
	# format data as percentages
	temp[] = plota.format(100 * temp, 0, '', '%')	

	# plot temp and legend
	plot.table(temp, highlight = highlight)			
	plota.legend(ia$symbols,col,cex=1.5)

The Canadian and Australian markets outperformed US and Japanese markets in most years.

Here is a slightly different version of Periodic table:

	#--------------------------------------------------------------------------
	# Create Periodic table, another version
	#--------------------------------------------------------------------------
	# create temp matrix with data you want to plot
	temp = t(coredata(hist.returns))
		colnames(temp) = format(index(hist.returns), '%Y')

	# format data as percentages
	temp[] = plota.format(100 * temp, 0, '', '%')

	# highlight each column separately
	highlight = apply(temp,2, function(x) plot.table.helper.color(t(x)) )

	# plot temp with colorbar
	plot.table(temp, highlight = highlight, colorbar = TRUE)

To view the complete source code for this example, please have a look at the aa.periodic.table.test() function in aa.test.r at github.
