
Archive for May, 2012

Backtesting Classical Technical Patterns

In the last post, Classical Technical Patterns, I discussed the algorithm and pattern definitions from the Foundations of Technical Analysis paper by A. Lo, H. Mamaysky, and J. Wang (2000). Today, I want to check how the different patterns have performed historically using SPY.

I will follow the rolling window procedure discussed on pages 14-15 of the paper. Let’s begin by loading the historical data for SPY and running a rolling window pattern search.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')
	ticker = 'SPY'
	
	data = getSymbols(ticker, src = 'yahoo', from = '1970-01-01', auto.assign = F)
		data = adjustOHLC(data, use.Adjusted=T)

	#*****************************************************************
	# Search for all patterns over a rolling window
	#****************************************************************** 
	load.packages('sm') 
	history = as.vector(coredata(Cl(data)))
	
	window.L = 35
	window.d = 3
	window.len = window.L + window.d

	patterns = pattern.db()
	
	found.patterns = c()
	
	for(t in window.len : (len(history)-1)) {
		ret = history[(t+1)]/history[t]-1
		
		sample = history[(t - window.len + 1):t]		
		obj = find.extrema( sample )	
		
		if(len(obj$data.extrema.loc) > 0) {
			out =  find.patterns(obj, patterns = patterns, silent=F, plot=F)  
			
			if(len(out)>0) found.patterns = rbind(found.patterns,cbind(t,out,t-window.len+out, ret))
		}
		if( t %% 10 == 0) cat(t, 'out of', len(history), '\n')
	}
	colnames(found.patterns) = spl('t,start,end,tstart,tend,ret')	

Many patterns are found multiple times. Let’s remove the entries that refer to the same pattern and keep only the first occurrence.

	#*****************************************************************
	# Clean found patterns
	#****************************************************************** 	
	# remove patterns that finished after window.L
	found.patterns = found.patterns[found.patterns[,'end'] <= window.L,]
		
	# remove the patterns found multiple times, only keep first one
	pattern.names = unique(rownames(found.patterns))
	all.patterns = c()
	for(name in pattern.names) {
		index = which(rownames(found.patterns) == name)
		temp = NA * found.patterns[index,]
		
		i.count = 0
		i.start = 1
		while(i.start < len(index)) {
			i.count = i.count + 1
			temp[i.count,] = found.patterns[index[i.start],]
			subindex = which(found.patterns[index,'tstart'] > temp[i.count,'tend'])			
						
			if(len(subindex) > 0) {
				i.start = subindex[1]
			} else break		
		} 
		all.patterns = rbind(all.patterns, temp[1:i.count,])		
	}	

Now we can visualize the performance of each pattern using the charts from my presentation about Seasonality Analysis and Pattern Matching at the R/Finance conference.

	#*****************************************************************
	# Plot
	#****************************************************************** 	
	# Frequency for each Pattern
	frequency = tapply(rep(1,nrow(all.patterns)), rownames(all.patterns), sum)
	layout(1)
	barplot.with.labels(frequency/100, 'Frequency for each Pattern')

	
	# Summary for each Pattern: 20-day forward return after the pattern completes
	all.patterns[,'ret'] = history[(all.patterns[,'t']+20)] / history[all.patterns[,'t']] - 1
	data_list = tapply(all.patterns[,'ret'], rownames(all.patterns), list)
	group.seasonality(data_list, '20 days after Pattern')


	# Details for BBOT pattern
	layout(1)
	name = 'BBOT'
	index = which(rownames(all.patterns) == name)	
	time.seasonality(data, all.patterns[index,'t'], 20, name)	

Broadening bottoms (BBOT) and rectangle tops (RTOP) have historically worked well for SPY.

To view the complete source code for this example, please have a look at the bt.patterns.test() function in rfinance2012.r at github.

Categories: Backtesting, R

Classical Technical Patterns

In my presentation about Seasonality Analysis and Pattern Matching at the R/Finance conference, I used examples that I have previously covered on my blog.

The only subject in my presentation that I had not previously discussed was Classical Technical Patterns. One example is the Head and Shoulders pattern, which is most often seen in up-trends and is generally regarded as a reversal pattern. Below I implement the algorithm and pattern definitions from the Foundations of Technical Analysis paper by A. Lo, H. Mamaysky, and J. Wang (2000).

To identify a price pattern I will follow the steps described in the Foundations of Technical Analysis paper:

  • First, fit a smoothing estimator, a kernel regression estimator, to approximate the time series.
  • Next, determine the local extrema, tops and bottoms, using the first derivative of the kernel regression estimator.
  • Define the classical technical patterns in terms of tops and bottoms.
  • Search for the classical technical patterns throughout the tops and bottoms of the kernel regression estimator.

Let’s begin by loading historical prices for SPY:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')
	ticker = 'SPY'
	
	data = getSymbols(ticker, src = 'yahoo', from = '1970-01-01', auto.assign = F)
		data = adjustOHLC(data, use.Adjusted=T)
	
	#*****************************************************************
	# Find Classical Technical Patterns, based on
	# Foundations of Technical Analysis
	# by A.W. Lo, H. Mamaysky, J. Wang
	#****************************************************************** 
	plot.patterns(data, 190, ticker)

The first step is to fit a smoothing estimator, a kernel regression estimator, to approximate the time series. I used the sm package to fit the kernel regression:

	library(sm)
	y = as.vector( last( Cl(data), 190) )
	t = 1:len(y)

	# fit kernel regression with cross-validated bandwidth
	h = h.select(t, y, method = 'cv')
		temp = sm.regression(t, y, h=h, display = 'none')

	# find estimated fit
	mhat = approx(temp$eval.points, temp$estimate, t, method='linear')$y

The second step is to find the local extrema, tops and bottoms, using the first derivative of the kernel regression estimator (more details on page 15 of the paper):

	temp = diff(sign(diff(mhat)))
	# loc - location of extrema, loc.dir - direction of extrema
	loc = which( temp != 0 ) + 1
		loc.dir = -sign(temp[(loc - 1)])

I put the logic for the first and second step into the find.extrema() function.
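Combining the two steps, a minimal version of find.extrema() might look like the sketch below. The name find.extrema.sketch and the exact return structure are my own; the actual SIT function returns additional bookkeeping and handles edge cases:

```r
library(sm)

find.extrema.sketch <- function(y) {
	t = 1:length(y)

	# step 1: kernel regression with cross-validated bandwidth,
	# interpolated back onto the original time index
	h = h.select(t, y, method = 'cv')
	fit = sm.regression(t, y, h = h, display = 'none')
	mhat = approx(fit$eval.points, fit$estimate, t, method = 'linear')$y

	# step 2: extrema are where the first difference of the fit changes sign
	temp = diff(sign(diff(mhat)))
	loc = which(temp != 0) + 1
	loc.dir = -sign(temp[loc - 1])	# +1 = top, -1 = bottom

	list(data = y, mhat = mhat, extrema.loc = loc, extrema.dir = loc.dir)
}
```

On a noisy sine wave, for example, the recovered extrema alternate between tops and bottoms.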

The next step is to define the classical technical patterns in terms of tops and bottoms. The pattern.db() function implements the 10 patterns described on page 12 of the paper. For example, let’s have a look at the Head and Shoulders pattern, which:

  • is defined by 5 extrema points (E1, E2, E3, E4, E5)
  • starts with a maximum (E1)
  • has its middle extremum, the head (E3), above both shoulders (E1 and E5)
  • E1 and E5 are within 1.5 percent of their average
  • E2 and E4 are within 1.5 percent of their average

The R code below is a direct translation of the Head and Shoulders description on page 12 of the paper and is very readable:

	#****************************************************************** 	
	# Head-and-shoulders (HS)
	#****************************************************************** 	
	pattern = list()
	pattern$len = 5
	pattern$start = 'max'
	pattern$formula = expression({
		avg.top = (E1 + E5) / 2
		avg.bot = (E2 + E4) / 2

		# E3 > E1, E3 > E5
		E3 > E1 &
		E3 > E5 &
		
		# E1 and E5 are within 1.5 percent of their average
		abs(E1 - avg.top) < 1.5/100 * avg.top &
		abs(E5 - avg.top) < 1.5/100 * avg.top &
		
		# E2 and E4 are within 1.5 percent of their average
		abs(E2 - avg.bot) < 1.5/100 * avg.bot &
		abs(E4 - avg.bot) < 1.5/100 * avg.bot
		})
	patterns$HS = pattern		
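To see the definition in action, the formula can be evaluated against a set of hypothetical extrema values, exactly the way find.patterns() does it. The numbers below are made up so that the head E3 clears both shoulders and the 1.5 percent tolerances hold:

```r
# head-and-shoulders formula from pattern.db()
formula = expression({
	avg.top = (E1 + E5) / 2
	avg.bot = (E2 + E4) / 2

	E3 > E1 & E3 > E5 &
	abs(E1 - avg.top) < 1.5/100 * avg.top &
	abs(E5 - avg.top) < 1.5/100 * avg.top &
	abs(E2 - avg.bot) < 1.5/100 * avg.bot &
	abs(E4 - avg.bot) < 1.5/100 * avg.bot
})

# hypothetical extrema: shoulders near 100, head at 110, troughs near 95
eval(formula, envir = list(E1 = 100, E2 = 95, E3 = 110, E4 = 95.5, E5 = 100.5))	# TRUE

# moving one trough down by 5% breaks the symmetry condition
eval(formula, envir = list(E1 = 100, E2 = 90, E3 = 110, E4 = 95.5, E5 = 100.5))	# FALSE
```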

The last step is a function that searches for all defined patterns in the kernel regression representation of original time series.

I put the logic for this step into the find.patterns() function. Below is a simplified version:

find.patterns <- function
(
	obj, 	# extrema points
	patterns = pattern.db() 
) 
{
	data = obj$data
	extrema.dir = obj$extrema.dir
	data.extrema.loc = obj$data.extrema.loc
	n.index = len(data.extrema.loc)

	# search for patterns
	for(i in 1:n.index) {
	
		for(pattern in patterns) {
		
			# check same sign
			if( pattern$start * extrema.dir[i] > 0 ) {
			
				# check that there are a sufficient number of extrema to complete the pattern
				if( i + pattern$len - 1 <= n.index ) {
				
					# create environment to check pattern: E1,E2,...,En; t1,t2,...,tn
					envir.data = c(data[data.extrema.loc][i:(i + pattern$len - 1)], 
					data.extrema.loc[i:(i + pattern$len - 1)])									
						names(envir.data) = c(paste('E', 1:pattern$len, sep=''), 
							paste('t', 1:pattern$len, sep=''))
					envir.data = as.list(envir.data)					
										
					# check if pattern was found
					if( eval(pattern$formula, envir = envir.data) ) {
						cat('Found', pattern$name, 'at', i, '\n')
					}
				}
			}		
		}	
	}
	
}

I put the logic for the entire process into the plot.patterns() helper function, which first calls find.extrema() to determine the extrema points and then calls find.patterns() to find and plot the patterns. Let’s find classical technical patterns in the last 150 days of SPY history:

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')
	ticker = 'SPY'
	
	data = getSymbols(ticker, src = 'yahoo', from = '1970-01-01', auto.assign = F)
		data = adjustOHLC(data, use.Adjusted=T)
	
	#*****************************************************************
	# Find Classical Technical Patterns, based on
	# Foundations of Technical Analysis
	# by A.W. Lo, H. Mamaysky, J. Wang
	#****************************************************************** 
	plot.patterns(data, 150, ticker)

It is very easy to define your own custom patterns, and I encourage everyone to give it a try.
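As an illustration, here is how a simplified Double Top might be written in the same style as the entries in pattern.db(). This definition is my own sketch, not one of the ten patterns from the paper; the 1.5 percent tolerance is borrowed from the Head and Shoulders rule, and the 3 percent trough requirement is an arbitrary choice:

```r
# Double Top (simplified, illustrative): 3 extrema starting with a maximum,
# the two tops E1 and E3 within 1.5% of their average,
# and the middle trough E2 at least 3% below the tops
pattern = list()
pattern$name = 'DTOP.simple'
pattern$len = 3
pattern$start = 'max'
pattern$formula = expression({
	avg.top = (E1 + E3) / 2

	abs(E1 - avg.top) < 1.5/100 * avg.top &
	abs(E3 - avg.top) < 1.5/100 * avg.top &
	E2 < 97/100 * avg.top
})
```

Adding `patterns$DTOP.simple = pattern` before the search makes find.patterns() pick up the new definition alongside the ones from the paper.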

To view the complete source code for this example, please have a look at the pattern.test() function in rfinance2012.r at github.

Categories: R

R/Finance 2012 presentation

I attended the R/Finance conference for the first time this year. I must say that I’m impressed with the effort the organizers put into the conference and with the breadth and depth of the material and ideas presented.

I just want to share slides and examples that I used in my presentation about Seasonality Analysis and Pattern Matching.

In the next post, I plan to discuss in more detail the algorithm and the price pattern definitions I used to find classical technical patterns.

Categories: Uncategorized

Cross Sectional Correlation

Diversification is hard to find nowadays because financial markets are becoming increasingly correlated. I found a good visual presentation of the cross-sectional correlation of stocks in the S&P 500 index in the Trading correlation article by D. Varadi and C. Rittenhouse.

Let’s compute and plot the average correlation among stocks in the S&P 500 index and the average correlation between SPY and the stocks in the S&P 500 index using the Systematic Investor Toolbox:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# http://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = sp500.components()$tickers
	
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)		
	bt.prep(data, align='keep.all', dates='1970::')

	spy = getSymbols('SPY', src = 'yahoo', from = '1970-01-01', auto.assign = F)
		ret.spy = coredata( Cl(spy) / mlag(Cl(spy))-1 )
	
	#*****************************************************************
	# Code Logic
	#****************************************************************** 
	prices = data$prices['1993-01-29::']  
		nperiods = nrow(prices)
			
	ret = prices / mlag(prices) - 1
		ret = coredata(ret)
		
	# require at least 100 stocks with prices
	index = which((count(t(prices)) > 100 ))
		index = index[-c(1:252)]
		
	# average correlation among S&P 500 components
	avg.cor = NA * prices[,1]
	
	# average correlation between the S&P 500 index (SPX) and its component stocks
	avg.cor.spy = NA * prices[,1]
	
	for(i in index) {
		hist = ret[ (i- 252 +1):i, ]
		hist = hist[ , count(hist)==252, drop=F]
			nleft = ncol(hist)
		
		correlation = cor(hist, use='complete.obs',method='pearson')
		avg.cor[i,] = (sum(correlation) - nleft) / (nleft*(nleft-1))
		
		avg.cor.spy[i,] = sum(cor(ret.spy[ (i- 252 +1):i, ], hist, use='complete.obs',method='pearson')) / nleft
		
		if( i %% 100 == 0) cat(i, 'out of', nperiods, '\n')
	}
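The `(sum(correlation) - nleft) / (nleft*(nleft-1))` expression above is simply the mean of the off-diagonal entries of the correlation matrix: the diagonal contributes nleft ones, and there are nleft*(nleft-1) off-diagonal entries. A quick check on made-up return data:

```r
set.seed(1)
hist = matrix(rnorm(252 * 5), ncol = 5)	# 252 days of 5 made-up return series
correlation = cor(hist, use = 'complete.obs', method = 'pearson')
nleft = ncol(hist)

# shortcut used in the loop
avg1 = (sum(correlation) - nleft) / (nleft * (nleft - 1))

# explicit mean over the off-diagonal entries
avg2 = mean(correlation[row(correlation) != col(correlation)])

all.equal(avg1, avg2)	# TRUE
```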
		
	#*****************************************************************
	# Create Report
	#****************************************************************** 				
 	sma50 = SMA(Cl(spy), 50)
 	sma200 = SMA(Cl(spy), 200)
 	
 	cols = col.add.alpha(spl('green,red'),50)
	plota.control$col.x.highlight = iif(sma50 > sma200, cols[1], cols[2])
	highlight = sma50 > sma200 | sma50 < sma200
			
	plota(avg.cor, type='l', ylim=range(avg.cor, avg.cor.spy, na.rm=T), x.highlight = highlight,
			main='Average 252 day Pairwise Correlation for stocks in SP500')
		plota.lines(avg.cor.spy, type='l', col='blue')
		plota.legend('Pairwise Correlation,Correlation with SPY,SPY 50-day SMA > 200-day SMA,SPY 50-day SMA < 200-day SMA', 
		c('black,blue',cols))

The overall trend in correlations is up. Moreover, correlations usually rise in bear markets, when the SPY 50-day SMA is below the 200-day SMA.

To view the complete source code for this example, please have a look at the bt.rolling.cor.test() function in bt.test.r at github.

Categories: R, Strategy, Trading Strategies

Volatility Position Sizing to improve Risk Adjusted Performance

Today I want to show how to use volatility position sizing to improve a strategy’s risk-adjusted performance. I will use the Average True Range (ATR) as a measure of volatility, increasing the allocation during low-volatility periods and decreasing it during high-volatility periods.

First, let’s load prices for SPY and compute Buy & Hold performance using the Systematic Investor Toolbox:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# http://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = spl('SPY')

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)			
	bt.prep(data, align='keep.all', dates='1970::')	
	
	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices   
	nperiods = nrow(prices)
	
	models = list()
	
	#*****************************************************************
	# Buy & Hold
	#****************************************************************** 
	data$weight[] = 0
		data$weight[] = 1
	models$buy.hold = bt.run.share(data, clean.signal=T)

Next, let’s modify the Buy & Hold strategy to vary its allocation according to the Average True Range (ATR).

	#*****************************************************************
	# Volatility Position Sizing - ATR
	#****************************************************************** 
	atr = bt.apply(data, function(x) ATR(HLC(x),20)[,'atr'])
		
	# position size in units = (portfolio size * % of capital to risk) / (2 * ATR)
	data$weight[] = NA
		capital = 100000
		
		# risk 2% of capital
		data$weight[] = (capital * 2/100) / (2 * atr)
		
		# make sure you are not committing more than 100%
		max.allocation = capital / prices
		data$weight[] = iif(data$weight > max.allocation, max.allocation,data$weight)
		
	models$buy.hold.2atr = bt.run(data, type='share', capital=capital)					
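To make the sizing rule concrete, here is a quick numeric check of the formula with made-up numbers (illustrative values, not actual SPY prices or ATR readings):

```r
capital = 100000
risk = 2/100	# risk 2% of capital per position
atr = 1.5	# hypothetical 20-day ATR, in dollars
price = 140	# hypothetical share price

# shares such that a 2*ATR adverse move loses exactly the risk budget
units = (capital * risk) / (2 * atr)

# cap the position so no more than 100% of capital is committed
units = min(units, capital / price)

round(units)	# about 667 shares, i.e. roughly 93% of capital at $140/share
```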
	
	#*****************************************************************
	# Create Report
	#****************************************************************** 	
	models = rev(models)
	
	plotbt.custom.report.part1(models)
	
	plotbt.custom.report.part2(models)

The Sharpe ratio and DVR are both higher for the new strategy, and the draw-downs are lower.

To view the complete source code for this example, please have a look at the bt.position.sizing.test() function in bt.test.r at github.
