Trading using Garch Volatility Forecast

January 6, 2012

Quantum Financier wrote an interesting article, Regime Switching System Using Volatility Forecast. The article presents an elegant algorithm to switch between mean-reversion and trend-following strategies based on market volatility. Two models are examined: one using historical volatility and another using a GARCH(1,1) volatility forecast. The mean-reversion strategy is modeled with RSI(2): Long when RSI(2) < 50, and Short otherwise. The trend-following strategy is modeled with an SMA 50/200 crossover: Long when SMA(50) > SMA(200), and Short otherwise.

I want to show how to implement these ideas using the backtesting library in the Systematic Investor Toolbox.

The following code loads historical prices from Yahoo Finance and compares the performance of the Buy and Hold, Mean-Reversion, and Trend-Following strategies:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = 'SPY'

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)		
	bt.prep(data, align='remove.na', dates='2000::2012')

	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices  
	n = len(tickers)  
	nperiods = nrow(prices)
	
	# Buy & Hold	
	data$weight[] = 1
	buy.hold = bt.run(data)	

		
	# Mean-Reversion(MR) strategy - RSI2
	rsi2 = bt.apply.matrix(prices, RSI, 2)
	data$weight[] = NA
		data$weight[] = iif(rsi2 < 50, 1, -1)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)
	mr = bt.run(data, type='share', capital=capital, trade.summary=T)
				
		
	# Trend Following(TF) strategy - MA 50/200 crossover
	sma.short = bt.apply.matrix(prices, SMA, 50)
	sma.long = bt.apply.matrix(prices, SMA, 200)
	data$weight[] = NA
		data$weight[] = iif(sma.short > sma.long, 1, -1)
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)
	tf = bt.run(data, type='share', capital=capital, trade.summary=T)
		
	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report.part1(mr, tf, buy.hold)

Next, let’s create a strategy that switches between mean-reversion and trend-following strategies based on historical market volatility.

	#*****************************************************************
	# Regime Switching based on historical market volatility
	# Classify current volatility by percentile using a 252 day look-back period
	# percentrank(MA(percentrank(Stdev( diff(log(close)) ,21),252),21),250)
	#****************************************************************** 
	ret.log = bt.apply.matrix(prices, ROC, type='continuous')
	hist.vol = bt.apply.matrix(ret.log, runSD, n = 21)
	vol.rank = percent.rank(SMA(percent.rank(hist.vol, 252), 21), 250)

	# Regime Switching Historical
	data$weight[] = NA
		data$weight[] = iif(vol.rank > 0.5, 
					iif(rsi2 < 50, 1, -1),
					iif(sma.short > sma.long, 1, -1)
				)
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)
	regime.switching = bt.run(data, type='share', capital=capital, trade.summary=T)
	
	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report.part1(regime.switching, mr, tf, buy.hold)

Next, let’s create a GARCH(1,1) volatility forecast. For anyone who wants to find out what GARCH is all about, or to refresh their knowledge, I would recommend reading an introductory article on GARCH models first.
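
For reference, the GARCH(1,1) model describes today's conditional variance as a weighted combination of a long-run level, yesterday's squared return, and yesterday's variance (the same notation is used by the one-day-ahead prediction functions further down):

Sigma[t]^2 = w + a * r[t-1]^2 + b * Sigma[t-1]^2
where
w is the constant term (long-run variance weight)
a is the reaction to the most recent squared return r[t-1]^2
b is the persistence of the previous variance estimate Sigma[t-1]^2
a + b < 1 is required for the variance process to be stationary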

There are a few R packages to fit GARCH models. I will consider the garch function from the tseries package and the garchFit function from the fGarch package. The garch function from tseries is fast but does not always find a solution. The garchFit function from fGarch is slower but converges more consistently. To demonstrate the speed difference between the garch and garchFit functions, I created a simple benchmark:

	#*****************************************************************
	# Benchmarking Garch algorithms 
	#****************************************************************** 
	load.packages('tseries,fGarch,rbenchmark')	
	temp = garchSim(n=252)

	test1 <- function() {
		fit1=garch(temp, order = c(1, 1), control = garch.control(trace = F))
	}
	test2 <- function() {
		fit2=garchFit(~ garch(1,1), data = temp, include.mean=FALSE, trace=F)
	}
		 	
	benchmark(
		test1(),
		test2(),
		columns=spl('test,replications,elapsed,relative'),
		order='relative',
		replications=100
	)

The garchFit function is on average 6 times slower than the garch function. So, to forecast volatility, I will use the garch function whenever it can find a solution and fall back to the garchFit function otherwise.

	#*****************************************************************
	# Forecast Volatility using Garch	
	# garch from tseries is fast, but does not consistently converge
	# garchFit from fGarch is slower, but converges consistently
	#****************************************************************** 
	load.packages('tseries,fGarch')	
			
	# Sigma[t]^2 = w + a * r[t-1]^2 + b * Sigma[t-1]^2   (coef(fit) returns w, a, b in this order)
	garch.predict.one.day <- function(fit, r.last) 
	{
		h.last = tail( fitted(fit)[,1] ,1)			
		sqrt(sum( coef(fit) * c(1,  r.last^2, h.last^2) ))	
	}

	# same as predict( fit, n.ahead=1, doplot=F)[3]
	garchFit.predict.one.day <- function(fit, r.last) 
	{
		h.last = tail(sqrt(fit@h.t), 1)
		sqrt(sum( fit@fit$matcoef[,1] * c(1,  r.last^2, h.last^2) ))
	}
	
	garch.vol = NA * hist.vol
	for( i in (252+1):nperiods ) {
		temp = as.vector(ret.log[ (i-252+1):i, ])
		r.last =  tail( temp, 1 )
			
		fit = tryCatch( garch(temp, order = c(1, 1), control = garch.control(trace = F)),
	    				error=function( err ) FALSE, warning=function( warn ) FALSE )
	                    
		if( !is.logical( fit ) ) {
			if( i == 252+1 ) garch.vol[1:252] = fitted(fit)[,1]
			garch.vol[i] = garch.predict.one.day(fit, r.last)
		} else {
			fit = tryCatch( garchFit(~ garch(1,1), data = temp, include.mean=FALSE, trace=F),
	    				error=function( err ) FALSE, warning=function( warn ) FALSE )
	    				
			if( !is.logical( fit ) ) {
				if( i == 252+1 ) garch.vol[1:252] = sqrt(fit@h.t)
				garch.vol[i] = garchFit.predict.one.day(fit, r.last)
			} 
		}			
		if( i %% 100 == 0) cat(i, '\n')
	}
	garch.vol = ifna.prev(garch.vol)
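
Before plugging the forecast into a strategy, it is worth a quick visual sanity check. Here is a minimal sketch that overlays the GARCH(1,1) forecast on the 21-day historical volatility, using the same SIT plotting helpers used elsewhere in this post:

	# overlay the GARCH(1,1) volatility forecast on the 21-day historical volatility
	layout(1)
	plota(hist.vol, type='l', ylim = range(hist.vol, garch.vol, na.rm=T),
		main='Historical vs GARCH(1,1) Forecast Volatility')
		plota.lines(garch.vol, col='blue')
		plota.legend('Historical,GARCH(1,1) Forecast', 'black,blue')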

Now, let’s create a strategy that switches between mean-reversion and trend-following strategies based on the GARCH(1,1) volatility forecast.

	#*****************************************************************
	# Regime Switching using Garch
	#****************************************************************** 		
	vol.rank = percent.rank(SMA(percent.rank(garch.vol, 252), 21), 250)

	# Regime Switching Garch
	data$weight[] = NA
		data$weight[] = iif(vol.rank > 0.5, 
					iif(rsi2 < 50, 1, -1),
					iif(sma.short > sma.long, 1, -1)
				)
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)
	regime.switching.garch = bt.run(data, type='share', capital=capital, trade.summary=T)	

	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report.part1(regime.switching.garch, regime.switching, buy.hold)

The switching strategy that uses the GARCH(1,1) volatility forecast performed slightly better than the one that uses historical volatility.
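
To put a rough number on "slightly better", we can compare a couple of summary statistics directly from the backtest objects. This is a minimal sketch that assumes the compute.cagr and compute.sharpe helpers loaded with the Systematic Investor Toolbox:

	# compare compound annual growth rate and Sharpe ratio across strategies
	strats = list(Garch = regime.switching.garch, Historical = regime.switching, BuyHold = buy.hold)
	sapply(strats, function(m) round(100 * compute.cagr(m$equity), 1))	# CAGR, %
	sapply(strats, function(m) round(compute.sharpe(m$ret), 2))		# annualized Sharpe ratio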

There are many different approaches you can take to incorporate forecasting into your models and trading strategies, and R has a very rich set of packages to model and forecast time series.

To view the complete source code for this example, please have a look at the bt.volatility.garch() function in bt.test.r at github.

Categories: R, Strategy, Trading Strategies

Yet Another Forecast Dashboard

Recently, I came across quite a few examples of time series forecasting using R.

So I have decided to roll my own version of a Forecast Dashboard as well. First, let’s define some helper functions:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# http://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

# extract forecast info
forecast.helper <- function(fit, h=10, level = c(80,95)) {
	out = try(forecast(fit, h=h, level=level), silent=TRUE)
	if (class(out)[1] != 'try-error') {
		out = data.frame(out)
	} else {
		temp = data.frame(predict(fit, n.ahead=h, doplot=F))
			pred = temp[,1]
			se = temp[,2]
		qq = qnorm(0.5 * (1 + level/100))
		out = matrix(NA, nr=h, nc=1+2*len(qq))
			out[,1] = pred
		for(i in 1:len(qq))
			out[,(2*i):(2*i+1)] = c(pred - qq[i] * se, pred + qq[i] * se)
		colnames(out) = c('Point.Forecast', matrix(c(paste('Lo', level, sep='.'), paste('Hi', level, sep='.')), nr=2, byrow=T))
		out = data.frame(out)
	}	
	return(out)
}

# compute future dates for the forecast
forecast2xts <- function(data, forecast) {
	# length of the forecast
	h = nrow(forecast)
	dates = as.Date(index(data))
 		
	new.dates = seq(last(dates)+1, last(dates) + 2*365, by='day')
	rm.index = date.dayofweek(new.dates) == 6 | date.dayofweek(new.dates) == 0
 	new.dates = new.dates[!rm.index]
 		
 	new.dates = new.dates[1:h] 	
 	return(make.xts(forecast, new.dates))
}

# create forecast plot
forecast.plot <- function(data, forecast, ...) {
	out = forecast2xts(data, forecast)

 	# create plot
 	plota(c(data, out[,1]*NA), type='l', 
 			ylim = range(data,out,na.rm=T), ...) 		
 	
 	# highlight sections
	new.dates = index(out)
		temp = coredata(out)

	n = (ncol(out) %/% 2)
	for(i in n : 1) {
		polygon(c(new.dates,rev(new.dates)), 
			c(temp[,(2*i)], rev(temp[,(2*i+1)])), 
		border=NA, col=col.add.alpha(i+2,150))
	}
	
	plota.lines(out[,1], col='red')
	
	labels = c('Data,Forecast', paste(gsub('Lo.', '', colnames(out)[2*(1:n)]), '%', sep=''))
	plota.legend(labels, fill = c('black,red',col.add.alpha((1:n)+2, 150)))			
}

Now we are ready to fit time series models and create a sample Forecast Dashboard. Below are some examples:

	#*****************************************************************
	# Create models
	#****************************************************************** 
	load.packages('forecast,fGarch,fArma')
	
	sample = last(data$prices$SPY, 200)	# assumes SPY prices are already loaded into data$prices, e.g. via getSymbols/bt.prep as in the post above
	ts.sample = ts(sample, frequency = 12)
	
	models = list(
		# fGarch		
		garch = garchFit(~arma(1,15)+garch(1,1), data=sample, trace=F),
		# fArma
		arima = armaFit(~ arima(1, 1, 15), data=ts.sample),
				
		# forecast
		arma = Arima(ts.sample, c(1,0,1)),
		arfima = arfima(ts.sample),
		auto.arima = auto.arima(ts.sample),
	
		bats = bats(ts.sample),
		HoltWinters = HoltWinters(ts.sample),
		naive = Arima(ts.sample, c(0,1,0))
	)
	
	
	#*****************************************************************
	# Create Report
	#****************************************************************** 						
	# 30 day forecast with 80% and 95% confidence bands
		layout(matrix(1:9,nr=3))
		for(i in 1:len(models)) {
			out = forecast.helper(models[[i]], 30, level = c(80,95))	 	
			forecast.plot(sample, out, main = names(models)[i]) 	
		}	
	
	# 30 day forecast with 75%,85%,95%,97%, and 99% confidence bands
		layout(matrix(1:9,nr=3))
		for(i in 1:len(models)) {
			out = forecast.helper(models[[i]], 30, level = c(75,85,95,97,99))	 	
			forecast.plot(sample, out, main = names(models)[i]) 	
		}	

I encourage you to read more about various time series models available in R and share your examples of Forecast Dashboards.

To view the complete source code for this example, please have a look at the bt.forecast.dashboard() function in bt.test.r at github.

Categories: R

Multiple Factor Model Summary

In this post I want to summarize all the material I covered in the Multiple Factor Models series. A Multiple Factor Model can be used to decompose returns and calculate risk.

The factors in the model are usually created using pricing, fundamental, analyst estimates, and proprietary data. I will only show examples of factors using pricing and fundamental data because this information is readily available from Yahoo Finance and ADVFN.
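
As a small illustration of a pricing-based factor, below is a hypothetical sketch of a 12-minus-1 month price momentum factor, computed from adjusted closes and ranked cross-sectionally. The ticker universe and look-back windows are purely illustrative and are not taken from the posts; the sketch reuses the SIT helpers (load.packages, spl, bt.prep, mlag) loaded in the posts above.

	#*****************************************************************
	# Illustrative pricing factor: 12-1 month price momentum
	#****************************************************************** 
	load.packages('quantmod')
	tickers = spl('XOM,IBM,MSFT,JNJ,PG')	# hypothetical universe

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '2005-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
	bt.prep(data, align='remove.na')

	prices = data$prices

	# return over the past 252 trading days, skipping the most recent 21
	momentum = mlag(prices, 21) / mlag(prices, 252) - 1

	# cross-sectional rank on the most recent day (higher = stronger momentum)
	mom.today = coredata(last(momentum))[1,]
	rank(mom.today)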

Below is a summary of all the posts I wrote about Multiple Factor Models:

  1. Multiple Factor Model – Fundamental Data – in this post I demonstrate how to get a company’s fundamental data into R, create a simple factor, and run correlation analysis.
  2. Multiple Factor Model – Building Fundamental Factors – in this post I demonstrate how to build the fundamental factors described in the CSFB Alpha Factor Framework and compute quantile spreads. For details of the CSFB Alpha Factor Framework, please read CSFB Quantitative Research, Alpha Factor Framework, pages 11 and 49, by P. N. Patel, S. Yao, R. Carlson, A. Banerji, and J. Handelman.
  3. Multiple Factor Model – Building CSFB Factors – in this post I demonstrate how to build the majority of the factors described in the CSFB Alpha Factor Framework, run a cross sectional regression to estimate factor loadings, and create and test an Alpha model.
  4. Multiple Factor Model – Building Risk Model – in this post I demonstrate how to build a multiple factor risk model, compute factor covariance using a shrinkage estimator, and forecast stock-specific variances using GARCH(1,1).
  5. Portfolio Optimization – Why do we need a Risk Model – in this post I explain why we need a risk model and demonstrate how it is used during the portfolio construction process.
  6. Multiple Factor Model – Building 130/30 Index – in this post I demonstrate how to build a 130/30 Index based on the CSFB Factors and the Risk Model we created previously. The 130/30: The New Long-Only (2008) paper by A. Lo and P. Patel presents a very detailed step-by-step guide to building a 130/30 Index using the average CSFB Factors as the alpha model and the MSCI Barra Multi-Factor Risk model.
  7. Multiple Factor Model – Building 130/30 Index (Update) – in this post I demonstrate how to build Market-Neutral and Minimum Variance strategies and compare their performance to the 130/30 Index.

There is an excellent discussion of portfolio construction problems and possible solutions in The top 7 portfolio optimization problems post by Pat Burns. I want to highlight two problems that are relevant to Multiple Factor Models.

Categories: Factor Model, Factors

Multiple Factor Model – Building Risk Model

February 21, 2012

This is the fourth post in the series about Multiple Factor Models. I will build on the code presented in the prior post, Multiple Factor Model – Building CSFB Factors, and show how to build a multiple factor risk model.

The outline of this post:

  • Run a cross sectional regression to estimate factor returns
  • Compute factor covariance using a shrinkage estimator
  • Forecast stock-specific variances using GARCH(1,1)
  • Compute portfolio risk using the multiple factor model and compare it to the historical standard deviation of portfolio returns

Let’s start by loading the CSFB factors that we saved at the end of the prior post. [If you are missing the data.factors.Rdata file, please execute the fm.all.factor.test() function first to create and save the CSFB factors.] Next, I will run a cross sectional regression to estimate factor returns.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# http://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)
	#*****************************************************************
	# Load factor data that we saved at the end of the fm.all.factor.test function
	#****************************************************************** 
	load.packages('quantmod,abind')	
		
	load(file='data.factors.Rdata')
		# remove Composite Average factor
		factors.avg = factors.avg[which(names(factors.avg) != 'AVG')]	
		
	#*****************************************************************
	# Run cross sectional regression to estimate factor returns
	#****************************************************************** 
	nperiods = nrow(next.month.ret)
	n = ncol(next.month.ret)
		
	# create sector dummy variables: binary 0/1 values for each sector
	nsectors = len(levels(sectors))	
	sectors.matrix = array(double(), c(nperiods, n, nsectors))
		dimnames(sectors.matrix)[[3]] = levels(sectors)		
	for(j in levels(sectors)) {
		sectors.matrix[,,j] = matrix(sectors == j,  nr=nperiods, nc=n, byrow=T)
	}
	
	# create matrix for each factor
	factors.matrix = abind(factors.avg, along = 3)		
	
	# combine sector dummies and all factors
	all.data = abind(sectors.matrix, factors.matrix)		
	
	# create betas and specific.return
	beta = all.data[,1,] * NA
	specific.return = next.month.ret * NA
		nfactors = ncol(beta)
		
	# append next.month.ret to all.data			
	all.data = abind(next.month.ret, all.data, along = 3)
		dimnames(all.data)[[3]][1] = 'Ret'
			
	# estimate betas (factor returns)
	for(t in 12:(nperiods-1)) {		
		temp = all.data[t:t,,]
		x = temp[,-c(1:2)]
		y = temp[,1]
		b = lm(y~x)$coefficients
		
		b[2:nsectors] = b[1] + b[2:nsectors]
		beta[(t+1),] = b		
		
		specific.return[(t+1),] = y - rowSums(temp[,-1] * matrix(b, n, nfactors, byrow=T), na.rm=T)	
	}

To estimate factor returns (betas), we solve for the coefficients of the following multiple factor model:

Ret = b1 * F1 + b2 * F2 + ... + bn * Fn + e
where
b1...bn are the estimated factor returns
F1...Fn are the factor exposures, i.e. the sector dummies and the CSFB factor exposures
e is the stock-specific return not captured by the factors F1...Fn

Note that we cannot include the first sector dummy variable in the regression; otherwise the full set of sector dummies would be linearly dependent with the intercept (the dummies sum to one for every stock). The effect of the first sector is instead absorbed into the intercept of the regression.
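
To see the collinearity issue concretely, here is a tiny self-contained example with made-up numbers (not from the original post). With an intercept plus a full set of sector dummies, lm() returns NA for the redundant coefficient; dropping the first dummy puts that sector's effect into the intercept, and adding the intercept back to the other sector coefficients, as the loop above does with b[2:nsectors] = b[1] + b[2:nsectors], recovers the per-sector effects.

	# hypothetical example: 6 stocks in 2 sectors
	y = c(1.2, 0.8, 1.1, -0.5, -0.2, -0.4)			# stock returns
	sector = c('Tech','Tech','Tech','Util','Util','Util')
	d1 = as.numeric(sector == 'Tech')			# first sector dummy
	d2 = as.numeric(sector == 'Util')			# second sector dummy

	lm(y ~ d1 + d2)$coefficients	# one dummy comes back NA: intercept, d1 and d2 are linearly dependent
	b = lm(y ~ d2)$coefficients	# drop the first dummy
	c(Tech = unname(b[1]), Util = unname(b[1] + b[2]))	# per-sector effects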

There are a few alternative ways of estimating this regression. For example, a robust linear model can be estimated with the following code:

	load.packages('MASS')
	temp = rlm(y~x)$coefficients

Quantile regression can be estimated with the following code:

	load.packages('quantreg')
	temp = rq(y ~ x, tau = 0.5)$coefficients

Next let’s look at the cumulative factor returns.

	#*****************************************************************
	# helper function
	#****************************************************************** 	
	fm.hist.plot <- function(temp, smain=NULL) {			
		ntemp = ncol(temp)		
		cols = plota.colors(ntemp)	
		plota(temp, ylim = range(temp), log='y', main=smain)
		for(i in 1:ntemp) plota.lines(temp[,i], col=cols[i])
		plota.legend(colnames(temp), cols, as.list(temp))
	}

	#*****************************************************************
	# Examine historical cumulative factor returns
	#****************************************************************** 	
	temp = make.xts(beta, index(next.month.ret))
		temp = temp['2000::',]
		temp[] = apply(coredata(temp), 2, function(x) cumprod(1 + ifna(x,0)))
	
	fm.hist.plot(temp[,-c(1:nsectors)], 'Factor Returns')

The Price Reversals (PR) and Small Size (SS) factors have done well.

Next let’s estimate the factor covariance matrix over the rolling 24 month window.

	load.packages('BurStFin')	
	factor.covariance = array(double(), c(nperiods, nfactors, nfactors))
		dimnames(factor.covariance)[[2]] = colnames(beta)
		dimnames(factor.covariance)[[3]] = colnames(beta)

	# estimate factor covariance
	for(t in 36:nperiods) {
		factor.covariance[t,,] = var.shrink.eqcor(beta[(t-23):t,])
	}

Next let’s forecast stock-specific variances using GARCH(1,1). I will use the GARCH estimation routine described in the Trading using Garch Volatility Forecast post.

	#*****************************************************************
	# Compute stock-specific variance forecasts using GARCH(1,1)
	#****************************************************************** 	
	load.packages('tseries,fGarch')	

	specific.variance = next.month.ret * NA

	for(i in 1:n) {
		specific.variance[,i] = bt.forecast.garch.volatility(specific.return[,i], 24) 
	}


	# compute historical specific.variance
	hist.specific.variance = next.month.ret * NA
	for(i in 1:n) hist.specific.variance[,i] = runSD(specific.return[,i], 24)	
	
	# if specific.variance is missing use historical specific.variance
	specific.variance[] = ifna(coredata(specific.variance), coredata(hist.specific.variance))	

	#*****************************************************************
	# Save multiple factor risk model to be used later during portfolio construction
	#****************************************************************** 
	save(all.data, factor.covariance, specific.variance, file='risk.model.Rdata')

Now we have all the ingredients to compute portfolio risk:

Portfolio Risk = (common factor variance + specific variance)^0.5
	common factor variance = (portfolio factor exposure) * factor covariance matrix * (portfolio factor exposure)'
	specific variance = sum( (specific.variance)^2 * (portfolio weights)^2 )

	#*****************************************************************
	# Compute portfolio risk
	#****************************************************************** 
	portfolio = rep(1/n, n)
		portfolio = matrix(portfolio, n, nfactors)
	
	portfolio.risk = next.month.ret[,1] * NA
	for(t in 36:(nperiods-1)) {	
		portfolio.exposure = colSums(portfolio * all.data[t,,-1], na.rm=T)
		
		portfolio.risk[t] = sqrt(
			portfolio.exposure %*% factor.covariance[t,,] %*% (portfolio.exposure) + 
			sum(specific.variance[t,]^2 * portfolio[,1]^2, na.rm=T)
			)
	}

Next let’s compare the portfolio risk estimated with the multiple factor risk model to the portfolio’s historical risk.

	#*****************************************************************
	# Compute historical portfolio risk
	#****************************************************************** 
	portfolio = rep(1/n, n)
		portfolio = t(matrix(portfolio, n, nperiods))
	
	portfolio.returns = next.month.ret[,1] * NA
		portfolio.returns[] = rowSums(mlag(next.month.ret) * portfolio, na.rm=T)
	
	hist.portfolio.risk = runSD(portfolio.returns, 24)
	
	#*****************************************************************
	# Plot risks
	#****************************************************************** 			
	plota(portfolio.risk['2000::',], type='l')
		plota.lines(hist.portfolio.risk, col='blue')
		plota.legend('Risk,Historical Risk', 'black,blue')

The multiple factor risk model does a decent job of estimating portfolio risk most of the time.
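
One simple way to quantify "decent job" is to look at how closely the two risk estimates track each other, for example via their correlation. A minimal sketch:

	# correlation between model-based and historical portfolio risk estimates
	temp = merge(portfolio.risk, hist.portfolio.risk)['2000::',]
	cor(coredata(temp[,1]), coredata(temp[,2]), use='complete.obs')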

To view the complete source code for this example, please have a look at the fm.risk.model.test() function in factor.model.test.r at github.
