
Archive for December, 2011

Historical Seasonality Analysis: Which company in the Dow 30 is likely to do well in January?

December 30, 2011

THIS IS NOT INVESTMENT ADVICE. The information is provided for informational purposes only.

Stock market seasonality is easy to understand, but hard to justify if you subscribe to the Efficient Market Hypothesis. For a good summary of seasonality, have a look at the Capitalizing On Seasonal Effects article at Investopedia. Another interesting discussion was started by Douglas R Terry in his post Efficient Market Hypothesis and Seasonality, which goes into a detailed analysis of how these two ideas can co-exist.

To study seasonality, I want to introduce a new tool I developed. The Seasonality Tool is a user-friendly, point-and-click application for creating seasonality statistics and reports.

To demonstrate the Seasonality Tool, I want to show how it can be used to evaluate an investment idea. I want to find which company in the Dow 30 is likely to do well in January and then evaluate it using the Seasonality Tool.

The following code loads historical prices from Yahoo Finance for all companies in the Dow 30 index and computes how many times each was positive in January over the last 20 years. I will use the Systematic Investor Toolbox to load and analyze the data:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

    #*****************************************************************
    # Load historical data
    #****************************************************************** 
    load.packages('quantmod')        
    tickers = dow.jones.components()
    
    data <- new.env()
    getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
        for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)    
    bt.prep(data, align='keep.all', dates='1970::2011')
        
    #*****************************************************************
    # Compute monthly returns
    #****************************************************************** 
    prices = data$prices   
    n = ncol(prices)    
    
    # find month ends
    month.ends = endpoints(prices, 'months')
    
    prices = prices[month.ends,]
    ret = prices / mlag(prices) - 1

    # keep only January    
    ret = ret[date.month(index(ret)) == 1, ]
    
    # keep last 20 years
    ret = last(ret,20)

    #*****************************************************************
    # Compute stats
    #****************************************************************** 
    stats = matrix(rep(NA,2*n), nc=n)
        colnames(stats) = colnames(prices)
        rownames(stats) = spl('N,Positive')
        
    for(i in 1:n) {
        stats['N',i] = sum(!is.na(ret[,i]))
        stats['Positive',i] = sum(ret[,i]>0, na.rm=T)    
    }
    sort(stats['Positive',])
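
The raw Positive counts ignore how much history each ticker has; normalizing by N gives a hit ratio. A small sketch, reusing the stats matrix computed above:

    # hit ratio: percentage of positive Januaries for each ticker
    # (sketch based on the stats matrix computed above)
    sort( round(100 * stats['Positive',] / stats['N',]) )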

The Walt Disney Co. (DIS) was positive 17 times in January in the last 20 years.
Let’s investigate the Walt Disney Co. (DIS) record using the Seasonality Tool.

The Seasonality Analysis Report confirms the Walt Disney Co.'s (DIS) outstanding track record in January. Next, let's look at the details for January.

The Detailed Seasonality Analysis Report shows that the Walt Disney Co. (DIS) had amazing returns in January up until 2008, followed by three consecutive negative Januaries, which most likely indicates a change in trend (regime) for this company.
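
A quick way to check those recent Januaries directly, assuming the ret matrix of January returns from the code above is still in memory (the 'DIS' column name is taken from the Yahoo ticker):

    # last five January returns for DIS, in percent (sketch)
    round( 100 * last(ret[, 'DIS'], 5), 1 )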

So do I expect the Walt Disney Co. (DIS) to be positive in the upcoming January? I'm not so sure anymore.

To view the complete source code for this example, please have a look at the bt.seasonality.test() function in bt.test.r at github.

Categories: Uncategorized

Happy Holidays and Best Wishes for 2012

December 23, 2011

This is just a quick note to wish you and your family very happy and healthy holidays and a wonderful New Year! I hope you have enjoyed reading my blog, and thank you for your comments and emails.

Here is a short piece of R code that implements an interesting idea from the Charting the Santa Claus Rally post by Woodshedder. I will plot and compare SPY's performance this December against its average performance in previous Decembers.

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
	source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = spl('SPY')

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)		
	bt.prep(data, align='remove.na', dates='1970::2011')

	#*****************************************************************
	# Prepare Data for the plot
	#****************************************************************** 
	prices = data$prices  
	n = len(tickers)  
	ret = prices / mlag(prices) - 1

	
	# find prices in December
	dates = index(prices)
	years = date.year(dates)	
	index = which(date.month(dates) == 12)
	
	# rearrange data in trading days
	trading.days = sapply(tapply(ret[index,], years[index], function(x) coredata(x)), function(x) x[1:22])
		
	# average return each trading days, excluding current year
	avg.trading.days = apply(trading.days[, -ncol(trading.days)], 1, mean, na.rm=T)
	current.year = trading.days[, ncol(trading.days)]
	
	# cumulative
	avg.trading.days = 100 * ( cumprod(1 + avg.trading.days) - 1 )
	current.year = 100 * ( cumprod(1 + current.year) - 1 )
	
	#*****************************************************************
	# Create Plot
	#****************************************************************** 	
	par(mar=c(4,4,1,1))
	plot(avg.trading.days, type='b', col=1,
		ylim=range(avg.trading.days,current.year,na.rm=T),
		xlab = 'Number of Trading Days in December',
		ylab = 'Avg % Profit/Loss'
		)
		lines(current.year, type='b', col=2)
	grid()
	plota.legend('Avg SPY,SPY Dec 2011', 1:2)

I hope this year will not disappoint and that we will see a rally toward the year end.

If you want to find the average performance in other months, I recommend reading the Trading Calendar article by CXO Advisory.
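
For example, to build the same profile for a different month, only the month filter and the number of trading days need to change. A quick sketch for January, assuming the ret, dates and years objects from the code above are still in memory:

	# same December logic applied to January (sketch; January has roughly 21 trading days)
	index = which(date.month(dates) == 1)
	trading.days = sapply(tapply(ret[index,], years[index], function(x) coredata(x)), function(x) x[1:21])
	avg.trading.days = 100 * ( cumprod(1 + apply(trading.days, 1, mean, na.rm=T)) - 1 )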

To view the complete source code for this example, please have a look at the bt.december.trading.test() function in bt.test.r at github.

Categories: R, Uncategorized

Rotational Trading Strategies: borrowing ideas from Engineering Returns

December 20, 2011

Frank Hassler at the Engineering Returns blog wrote an excellent article, Rotational Trading: how to reduce trades and improve returns. The article presents four methods to reduce trades:

  • Trade less frequently, e.g. weekly instead of daily rebalancing.
  • Use different criteria for entering / exiting a trade.
  • Smooth the rank over the last couple of bars.
  • A combination of the above.

I want to show how to implement these ideas using the backtesting library in the Systematic Investor Toolbox. I will use the 21 ETFs from the ETF Sector Strategy post as the investment universe.

The following code loads historical prices from Yahoo Finance and compares the performance of daily versus weekly rebalancing using the backtesting library in the Systematic Investor Toolbox:

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
	source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = spl('XLY,XLP,XLE,XLF,XLV,XLI,XLB,XLK,XLU,IWB,IWD,IWF,IWM,IWN,IWO,IWP,IWR,IWS,IWV,IWW,IWZ')

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)		
	bt.prep(data, align='remove.na', dates='1970::2011')

	#*****************************************************************
	# Code Strategies : weekly rebalancing
	#****************************************************************** 
	prices = data$prices  
	n = len(tickers)  

	# find week ends
	week.ends = endpoints(prices, 'weeks')
		week.ends = week.ends[week.ends > 0]		

		
	# Rank on ROC 200
	position.score = prices / mlag(prices, 200)	
		position.score.ma = position.score		
		buy.rule = T

	# Select Top 2 funds daily
	data$weight[] = NA
		data$weight[] = ntop(position.score, 2)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)				
	top2.d = bt.run(data, type='share', trade.summary=T, capital=capital)

	# Select Top 2 funds weekly
	data$weight[] = NA
		data$weight[week.ends,] = ntop(position.score[week.ends,], 2)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)		
	top2.w = bt.run(data, type='share', trade.summary=T, capital=capital)
	
	# Plot Strategy Metrics Side by Side
	plotbt.strategy.sidebyside(top2.d, top2.w, perfromance.fn = 'engineering.returns.kpi')	

The number of trades falls from 443 to 164 as we switch from daily to weekly rebalancing. An additional bonus is the better returns of the weekly rebalancing.

Next, let’s examine different entry/exit ranks. We will buy the top 2 ETFs and keep them until their ranks drop below 4 / 6.

	#*****************************************************************
	# Code Strategies : different entry/exit rank
	#****************************************************************** 
	
	# Select Top 2 funds, Keep till they are in 4/6 rank
	data$weight[] = NA
		data$weight[] = ntop.keep(position.score, 2, 4)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)		
	top2.d.keep4 = bt.run(data, type='share', trade.summary=T, capital=capital)
	
	data$weight[] = NA
		data$weight[] = ntop.keep(position.score, 2, 6)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)		
	top2.d.keep6 = bt.run(data, type='share', trade.summary=T, capital=capital)

	# Plot Strategy Metrics Side by Side
	plotbt.strategy.sidebyside(top2.d, top2.d.keep4, top2.d.keep6, perfromance.fn = 'engineering.returns.kpi')

The number of trades falls from 443 to 95 and then 52 as we hold on to our selections for longer periods.

Next, let’s examine rank smoothing. Instead of using the most recent rank, we will use different averages of the rank’s recent values.

	#*****************************************************************
	# Code Strategies : Rank smoothing
	#****************************************************************** 

	models = list()
	models$Bench = top2.d
	for( avg in spl('SMA,EMA') ) {
		for( i in c(3,5,10,20) ) {		
			position.score.smooth = bt.apply.matrix(position.score.ma, avg, i)	
				position.score.smooth[!buy.rule,] = NA
			
			data$weight[] = NA
				data$weight[] = ntop(position.score.smooth, 2)	
				capital = 100000
				data$weight[] = (capital / prices) * bt.exrem(data$weight)		
			models[[ paste(avg,i) ]] = bt.run(data, type='share', trade.summary=T, capital=capital)		
		}
	}
		
	# Plot Strategy Metrics Side by Side
	plotbt.strategy.sidebyside(models, perfromance.fn = 'engineering.returns.kpi')

The number of trades falls as we increase the length of the averaging period. There is no big difference between using a simple moving average (SMA) and an exponential moving average (EMA).
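
To build intuition for why the two smoothers behave so similarly, here is a toy comparison on a simulated series (SMA and EMA come from the TTR package, which is loaded with quantmod; this is only an illustration, not part of the strategy):

	# toy comparison of the two smoothers on a simulated series
	x = cumsum(rnorm(250))
	matplot(cbind(SMA(x, 20), EMA(x, 20)), type='l', lty=1, col=1:2,
		xlab='bar', ylab='smoothed value')
	legend('topleft', legend=c('SMA 20','EMA 20'), col=1:2, lty=1, bty='n')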

Next, let’s combine the different methods to reduce the number of trades.

	#*****************************************************************
	# Code Strategies : Combination
	#****************************************************************** 

	# Select Top 2 funds daily, Keep till they are 6 rank, Smooth Rank by 10 day EMA
	position.score.smooth = bt.apply.matrix(position.score.ma, 'EMA', 10)	
		position.score.smooth[!buy.rule,] = NA
	data$weight[] = NA
		data$weight[] = ntop.keep(position.score.smooth, 2, 6)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)		
	top2.d.keep6.EMA10 = bt.run(data, type='share', trade.summary=T, capital=capital)
		
	# Select Top 2 funds weekly, Keep till they are 6 rank
	data$weight[] = NA
		data$weight[week.ends,] = ntop.keep(position.score[week.ends,], 2, 6)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)		
	top2.w.keep6 = bt.run(data, type='share', trade.summary=T, capital=capital)
	
	# Select Top 2 funds weekly, Keep till they are 6 rank, Smooth Rank by 10 week EMA
	position.score.smooth[] = NA
		position.score.smooth[week.ends,] = bt.apply.matrix(position.score.ma[week.ends,], 'EMA', 10)	
			position.score.smooth[!buy.rule,] = NA
	
	data$weight[] = NA
		data$weight[week.ends,] = ntop.keep(position.score.smooth[week.ends,], 2, 6)	
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)		
	top2.w.keep6.EMA10 = bt.run(data, type='share', trade.summary=T, capital=capital)
	
	# Plot Strategy Metrics Side by Side
	plotbt.strategy.sidebyside(top2.d, top2.d.keep6, top2.d.keep6.EMA10, top2.w, top2.w.keep6, top2.w.keep6.EMA10, perfromance.fn = 'engineering.returns.kpi')

The overall winner is the weekly strategy that buys the top 2 ETFs based on the 10-week exponential moving average of rank and keeps them until their ranks drop below 6. The number of trades falls from 443 to 28 and performance (CAGR) goes up from 2.4% to 7.3%.
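
For reference, CAGR here is the compound annual growth rate of the strategy's equity curve. A minimal sketch of how it can be computed for an xts equity curve (my own helper, not necessarily the exact formula SIT reports):

	# compound annual growth rate of an equity curve, in percent (sketch)
	cagr <- function(equity) {
		years = as.numeric( diff(range(index(equity))) ) / 365
		100 * ( ( as.numeric(last(equity)) / as.numeric(first(equity)) ) ^ (1 / years) - 1 )
	}
	# e.g. cagr(top2.w.keep6.EMA10$equity), assuming the backtest result exposes an equity curve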

The next step, which you can do as homework, is to find ways to control the strategy’s drawdowns. One solution is discussed in the Avoiding severe draw downs post.
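
As a starting point for that homework, here is a minimal sketch of one optional idea: only let an ETF into the portfolio when it trades above its 200-day simple moving average. This is my own illustration, not the method from the referenced post, and the top2.w.keep6.filter name is made up for this example:

	# sketch: trend filter to limit drawdowns (illustration only)
	sma200 = bt.apply.matrix(prices, 'SMA', 200)
	above.sma = coredata(prices) > coredata(sma200)
		above.sma[ is.na(above.sma) ] = FALSE

	# drop ETFs trading below their 200-day SMA from the ranking
	position.score.filtered = position.score
		position.score.filtered[] = ifelse(above.sma, coredata(position.score), NA)

	data$weight[] = NA
		data$weight[week.ends,] = ntop.keep(position.score.filtered[week.ends,], 2, 6)
		capital = 100000
		data$weight[] = (capital / prices) * bt.exrem(data$weight)
	top2.w.keep6.filter = bt.run(data, type='share', trade.summary=T, capital=capital)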

To view the complete source code for this example, please have a look at the bt.rotational.trading.trades.test() function in bt.test.r at github.

Backtesting Rebalancing methods

December 16, 2011

I wrote about rebalancing in the Asset Allocation Process Summary post. Deciding how and when to rebalance (i.e., bring the portfolio back to the target mix) is one of the critical steps in the asset allocation process. I want to study portfolio performance and turnover for the following rebalancing methods:

  • Periodic Rebalancing: rebalance to the target mix every month, quarter, or year.
  • Maximum Deviation Rebalancing: rebalance all the way to the target mix when asset weights deviate by more than a given percentage from the target mix.
  • Partial Maximum Deviation Rebalancing: rebalance only half-way to the target mix when asset weights deviate by more than a given percentage from the target mix.

I will study a very simple target mix: a 50% allocation to equities (SPY) and a 50% allocation to fixed income (TLT).

First, let’s examine Periodic Rebalancing. The following code implements this strategy using the backtesting library in the Systematic Investor Toolbox:

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
	source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')
	tickers = spl('SPY,TLT')
	
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1900-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
	bt.prep(data, align='remove.na', dates='1900::2011')
	
	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices   
	nperiods = nrow(prices)
	target.allocation = matrix(c(0.5, 0.5), nrow=1)
	
	# Buy & Hold	
	data$weight[] = NA	
		data$weight[1,] = target.allocation
		capital = 100000
		data$weight[] = (capital / prices) * data$weight
	buy.hold = bt.run(data, type='share', capital=capital)

	
	# Rebalance periodically
	models = list()
	for(period in spl('months,quarters,years')) {
		data$weight[] = NA	
			data$weight[1,] = target.allocation
			
			period.ends = endpoints(prices, period)
				period.ends = period.ends[period.ends > 0]		
			data$weight[period.ends,] = repmat(target.allocation, len(period.ends), 1)
						
			capital = 100000
			data$weight[] = (capital / prices) * data$weight
		models[[period]] = bt.run(data, type='share', capital=capital)	
	}
	models$buy.hold = buy.hold				
	
	#*****************************************************************
	# Create Report
	#****************************************************************** 		
	plotbt.custom.report(models)		
	

	# Plot BuyHold and Monthly Rebalancing Weights
	layout(1:2)
	plotbt.transition.map(models$buy.hold$weight, 'buy.hold', spl('red,orange'))
		abline(h=50)
	plotbt.transition.map(models$months$weight, 'months', spl('red,orange'))
		abline(h=50)


	# helper function to create barplot with labels
	barplot.with.labels <- function(data, main, plotX = TRUE) {
		par(mar=c( iif(plotX, 6, 2), 4, 2, 2))
		x = barplot(100 * data, main = main, las = 2, names.arg = iif(plotX, names(data), ''))
		text(x, 100 * data, round(100 * data,1), adj=c(0.5,1), xpd = TRUE)
	}
	# Plot Portfolio Turnover for each Rebalancing method
	layout(1)
	barplot.with.labels(sapply(models, compute.turnover, data), 'Average Annual Portfolio Turnover')

As expected, the Buy and Hold strategy has zero turnover, but deviates a lot from the Target Mix. On the other hand, Monthly Rebalancing keeps the portfolio close to the Target Mix, but at the cost of 36% annual turnover.

Next, to trigger rebalancing based on the Maximum Deviation from the Target Mix, I created a function, bt.max.deviation.rebalancing, that iteratively searches for violations of the Maximum Deviation threshold and forces a rebalance once a violation takes place. The function can rebalance all the way to the target mix or only partially, controlled by the rebalancing.ratio parameter. For example, with rebalancing.ratio equal to 0, each time a violation of the Maximum Deviation is found the portfolio is rebalanced all the way to the target mix; with rebalancing.ratio equal to 0.5, the portfolio is rebalanced half-way to the target mix.

# Rebalancing method based on maximum deviation
bt.max.deviation.rebalancing <- function
(
	data, 
	model, 
	target.allocation, 
	max.deviation = 3/100, 
	rebalancing.ratio = 0	# 0 means rebalance all-way to target.allocation
				# 0.5 means rebalance half-way to target.allocation
) 
{
	start.index = 1
	nperiods = nrow(model$weight)
	action.index = rep(F, nperiods)
	
	while(T) {	
		# find rows that violate max.deviation
		weight = model$weight
		index = which( apply(abs(weight - repmat(target.allocation, nperiods, 1)), 1, max) > max.deviation)	
	
		if( len(index) > 0 ) {
			index = index[ index > start.index ]
		
			if( len(index) > 0 ) {
				action.index[index[1]] = T
				
				data$weight[] = NA	
					data$weight[1,] = target.allocation
					
					temp = repmat(target.allocation, sum(action.index), 1)
					data$weight[action.index,] = temp + 
						rebalancing.ratio * (weight[action.index,] - temp)					
					
					capital = 100000
					data$weight[] = (capital / data$prices) * data$weight
				model = bt.run(data, type='share', capital=capital, silent=T)	
				start.index = index[1]
			} else break			
		} else break		
	}
	return(model)
}
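
To make the rebalancing.ratio behaviour concrete, here is a tiny worked example of the adjustment used inside the function, target + ratio * (current - target):

	# toy example of the partial rebalancing adjustment
	target  = c(0.50, 0.50)
	current = c(0.56, 0.44)	# weights have drifted 6% away from the target mix

	target + 0.0 * (current - target)	# ratio = 0   : rebalance all the way, back to 50% / 50%
	target + 0.5 * (current - target)	# ratio = 0.5 : rebalance half-way, to 53% / 47%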

Now, let’s compare the 5% Maximum Deviation Rebalancing methods to Periodic Rebalancing.

	#*****************************************************************
	# Code Strategies that rebalance based on maximum deviation
	#****************************************************************** 
	
	# rebalance to target.allocation when portfolio weights are 5% away from target.allocation
	models$smart5.all = bt.max.deviation.rebalancing(data, buy.hold, target.allocation, 5/100, 0) 
	
	# rebalance half-way to target.allocation when portfolio weights are 5% away from target.allocation
	models$smart5.half = bt.max.deviation.rebalancing(data, buy.hold, target.allocation, 5/100, 0.5) 
		
	#*****************************************************************
	# Create Report
	#****************************************************************** 						
	# Plot BuyHold, Years and Max Deviation Rebalancing Weights	
	layout(1:4)
	plotbt.transition.map(models$buy.hold$weight, 'buy.hold', spl('red,orange'))
		abline(h=50)
	plotbt.transition.map(models$smart5.all$weight, 'Max Deviation 5%, All the way', spl('red,orange'))
		abline(h=50)
	plotbt.transition.map(models$smart5.half$weight, 'Max Deviation 5%, Half the way', spl('red,orange'))
		abline(h=50)
	plotbt.transition.map(models$years$weight, 'years', spl('red,orange'))
		abline(h=50)


	# Plot Portfolio Turnover and Maximum Deviation for each Rebalancing method
	layout(1:2)
	barplot.with.labels(sapply(models, compute.turnover, data), 'Average Annual Portfolio Turnover', F)
	barplot.with.labels(sapply(models, compute.max.deviation, target.allocation), 'Maximum Deviation from Target Mix')


	# Plot Strategy Statistics  Side by Side
	plotbt.strategy.sidebyside(models)

As expected, Maximum Deviation Rebalancing keeps the portfolio close to the Target Mix. Moreover, the Maximum Deviation Rebalancing strategy has lower turnover than Monthly Rebalancing, so in this case Maximum Deviation Rebalancing wins over the Periodic Rebalancing method.

To view the complete source code for this example, please have a look at the bt.rebalancing.test() function in bt.test.r at github.

Backtesting Minimum Variance portfolios

December 13, 2011

I want to show how to combine the various risk measures I discussed in the series of posts about Asset Allocation with the backtesting library in the Systematic Investor Toolbox. I will use the Minimum Variance portfolio as the example for this post.

I recommend reading the good discussion of Minimum Variance portfolios in the Minimum Variance Sector Rotation post by Quantivity.
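
For intuition, the Minimum Variance portfolio minimizes w' Cov w subject to the weights summing to one (with no other bounds, matching the lb = -Inf, ub = +Inf constraints used below). Here is a minimal quadprog sketch of that optimization; it is my own illustration, not the code inside SIT's min.risk.portfolio:

	# minimal sketch: minimize w' Cov w subject to sum(w) = 1 (weights unrestricted in sign)
	library(quadprog)
	min.var.weights <- function(cov.matrix) {
		n = ncol(cov.matrix)
		solve.QP(Dmat = cov.matrix, dvec = rep(0, n),
			Amat = matrix(1, n, 1), bvec = 1, meq = 1)$solution
	}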

I will use the investment universe and rebalancing schedule outlined in the Forecast-Free Algorithms: A New Benchmark For Tactical Strategies post by David Varadi. The investment universe:

1. S&P500 (SPY)
2. Nasdaq 100 (QQQ)
3. Emerging Markets (EEM)
4. Russell 2000 (IWM)
5. MSCI EAFE (EFA)
6. Long-term Treasury Bonds (TLT)
7. Real Estate (IYR)
8. Gold (GLD)

The rebalancing is done on a weekly basis, and one quarter of historical data is used to estimate the input assumptions.

The following code implements this strategy using the backtesting library in the Systematic Investor Toolbox:

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
	source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod,quadprog,lpSolve')
	tickers = spl('SPY,QQQ,EEM,IWM,EFA,TLT,IYR,GLD')

		
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
		
	data.weekly <- new.env()
		for(i in tickers) data.weekly[[i]] = to.weekly(data[[i]], indexAt='endof')
					
	bt.prep(data, align='remove.na', dates='1990::2011')
	bt.prep(data.weekly, align='remove.na', dates='1990::2011')

	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices   
	n = ncol(prices)
	
	# find week ends
	week.ends = endpoints(prices, 'weeks')
		week.ends = week.ends[week.ends > 0]		

		
	# Equal Weight 1/N Benchmark
	data$weight[] = NA
		data$weight[week.ends,] = ntop(prices[week.ends,], n)		
		
		capital = 100000
		data$weight[] = (capital / prices) * data$weight
	equal.weight = bt.run(data, type='share')
		
	
	#*****************************************************************
	# Create Constraints
	#*****************************************************************
	constraints = new.constraints(n, lb = -Inf, ub = +Inf)
	
	# SUM x.i = 1
	constraints = add.constraints(rep(1, n), 1, type = '=', constraints)		

		
	ret = prices / mlag(prices) - 1
	weight = coredata(prices)
		weight[] = NA
		
	for( i in week.ends[week.ends >= (63 + 1)] ) {
		# one quarter is 63 days
		hist = ret[ (i- 63 +1):i, ]
		
		# create historical input assumptions
		ia = create.historical.ia(hist, 252)
			s0 = apply(coredata(hist),2,sd)		
			ia$cov = cor(coredata(hist), use='complete.obs',method='pearson') * (s0 %*% t(s0))
			
		weight[i,] = min.risk.portfolio(ia, constraints)
	}

	# Minimum Variance
	data$weight[] = weight		
		capital = 100000
		data$weight[] = (capital / prices) * data$weight
	min.var.daily = bt.run(data, type='share', capital=capital)

Next, let’s create Minimum Variance portfolios using weekly data:

	#*****************************************************************
	# Code Strategies: Weekly
	#****************************************************************** 	
	retw = data.weekly$prices / mlag(data.weekly$prices) - 1
	weightw = coredata(prices)
		weightw[] = NA
	
	for( i in week.ends[week.ends >= (63 + 1)] ) {	
		# map
		j = which(index(ret[i,]) == index(retw))
		
		# one quarter = 13 weeks
		hist = retw[ (j- 13 +1):j, ]
		
		# create historical input assumptions
		ia = create.historical.ia(hist, 52)
			s0 = apply(coredata(hist),2,sd)		
			ia$cov = cor(coredata(hist), use='complete.obs',method='pearson') * (s0 %*% t(s0))

		weightw[i,] = min.risk.portfolio(ia, constraints)
	}	
		
	data$weight[] = weightw		
		capital = 100000
		data$weight[] = (capital / prices) * data$weight
	min.var.weekly = bt.run(data, type='share', capital=capital)
	
	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report.part1(min.var.weekly, min.var.daily, equal.weight)

	# plot Daily and Weekly transition maps
	layout(1:2)
	plotbt.transition.map(min.var.daily$weight)
		legend('topright', legend = 'min.var.daily', bty = 'n')
	plotbt.transition.map(min.var.weekly$weight)
		legend('topright', legend = 'min.var.weekly', bty = 'n')

I find it very interesting that the Minimum Variance portfolios constructed using daily returns for the input assumptions are quite different from the Minimum Variance portfolios constructed using weekly returns. One possible explanation for this discrepancy was examined by Pat Burns in The volatility mystery continues post.
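
A quick way to see the effect for a single asset is to compare annualized volatility estimated from the most recent quarter of daily versus weekly returns. A rough sketch, assuming the data and data.weekly objects from the code above are still loaded:

	# annualized SPY volatility from daily vs weekly returns over the last quarter (sketch)
	ret.d = data$prices$SPY / mlag(data$prices$SPY) - 1
	ret.w = data.weekly$prices$SPY / mlag(data.weekly$prices$SPY) - 1

	100 * sqrt(252) * sd( as.numeric(last(ret.d, 63)), na.rm=T )
	100 * sqrt(52)  * sd( as.numeric(last(ret.w, 13)), na.rm=T )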

There are a few ways I suggest you can play with this code:

  • make the covariance matrix estimate more stable; for example, use the Ledoit-Wolf covariance shrinkage estimator from the tawny package:
    ia$cov = tawny::cov.shrink(hist)
    

    or

    ia$cov = cor(coredata(hist), use='complete.obs',method='spearman') * (s0 %*% t(s0))
    

    or

    ia$cov = cor(coredata(hist), use='complete.obs',method='kendall') * (s0 %*% t(s0))
    
  • use a different investment universe
  • consider a different rebalancing schedule
  • consider different risk measures; for example, I discussed and implemented the following risk measures in the series of posts about Asset Allocation:
    • min.maxloss.portfolio
    • min.mad.portfolio
    • min.cvar.portfolio
    • min.cdar.portfolio
    • min.mad.downside.portfolio
    • min.risk.downside.portfolio

To view the complete source code for this example, please have a look at the bt.min.var.test() function in bt.test.r at github.