Archive

Archive for October, 2012

Modeling Couch Potato strategy

October 26, 2012 2 comments

I first read about the Couch Potato strategy in MoneySense magazine. I liked this simple strategy because it is easy to understand and easy to manage. The Couch Potato strategy is similar to the Permanent Portfolio strategy that I have analyzed previously.

The Couch Potato strategy invests money in fixed proportions across different asset classes to ensure diversification, and rebalances the holdings once a year (a quick sketch of the rebalancing arithmetic follows the list below). For example, the Classic Couch Potato allocation is:

  • Canadian equity (33.3%)
  • U.S. equity (33.3%)
  • Canadian bond (33.3%)
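
Rebalancing simply trades each holding back to its target weight once a year. Here is a minimal sketch of the arithmetic with made-up numbers (the couch.potato.strategy() function below automates this as part of the back-test):

	# rebalance back to fixed target weights (hypothetical holdings)
	target = c(1/3, 1/3, 1/3)           # target allocation
	values = c(40000, 35000, 25000)     # current dollar value of each holding
	trade  = sum(values) * target - values
	trade                               # -6666.67 -1666.67 8333.33
	# sell the two positions that ran ahead, buy the one that lagged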

I highly recommend reading the following online resources to get more information about the Couch Potato strategy:

Today, I want to show how you can model and monitor the Couch Potato strategy with the Systematic Investor Toolbox.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)


	# helper function to model Couch Potato strategy - a fixed allocation strategy
	couch.potato.strategy <- function
	(
		data.all,
		tickers = 'XIC.TO,XSP.TO,XBB.TO',
		weights = c( 1/3, 1/3, 1/3 ), 		
		periodicity = 'years',
		dates = '1900::',
		commission = 0.1
	) 
	{ 
		#*****************************************************************
		# Load historical data 
		#****************************************************************** 
		tickers = spl(tickers)
		names(weights) = tickers
		
		data <- new.env()
		for(s in tickers) data[[ s ]] = data.all[[ s ]]
		
		bt.prep(data, align='remove.na', dates=dates)
	
		#*****************************************************************
		# Setup
		#******************************************************************
		prices = data$prices   
			n = ncol(prices)
			nperiods = nrow(prices)
	
		# find period ends
		period.ends = endpoints(data$prices, periodicity)
			period.ends = c(1, period.ends[period.ends > 0])
	
		#*****************************************************************
		# Code Strategies
		#******************************************************************
		data$weight[] = NA
			for(s in tickers) data$weight[period.ends, s] = weights[s]
		model = bt.run.share(data, clean.signal=F, commission=commission)
		
		return(model)
	} 	

The couch.potato.strategy() function creates a periodically rebalanced portfolio for a given static allocation.
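
Internally, the rebalance dates come from the endpoints() function in the xts package, which marks the last bar of each period; couch.potato.strategy() also prepends the first bar so the portfolio is invested from the start of the back-test. A quick illustration of what endpoints() returns:

	load.packages('xts')
	x = xts(1:500, order.by = as.Date('2010-01-01') + 1:500)
	endpoints(x, 'years')   # 0, then the row index of each year's last bar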

Next, let’s back-test some Canadian Couch Potato portfolios:

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	map = list()
		map$can.eq = 'XIC.TO'
		map$can.div = 'XDV.TO'		
		map$us.eq = 'XSP.TO'
		map$us.div = 'DVY'			
		map$int.eq = 'XIN.TO'		
		map$can.bond = 'XBB.TO'
		map$can.real.bond = 'XRB.TO'
		map$can.re = 'XRE.TO'		
		map$can.it = 'XTR.TO'
		map$can.gold = 'XGD.TO'
			
	data <- new.env()
	for(s in names(map)) {
		data[[ s ]] = getSymbols(map[[ s ]], src = 'yahoo', from = '1995-01-01', env = data, auto.assign = F)
		data[[ s ]] = adjustOHLC(data[[ s ]], use.Adjusted=T)	
	}
		
	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	models = list()
		periodicity = 'years'
		dates = '2006::'
	
	models$classic = couch.potato.strategy(data, 'can.eq,us.eq,can.bond', rep(1/3,3), periodicity, dates)
	models$global = couch.potato.strategy(data, 'can.eq,us.eq,int.eq,can.bond', c(0.2, 0.2, 0.2, 0.4), periodicity, dates)
	models$yield = couch.potato.strategy(data, 'can.div,can.it,us.div,can.bond', c(0.25, 0.25, 0.25, 0.25), periodicity, dates)
	models$growth = couch.potato.strategy(data, 'can.eq,us.eq,int.eq,can.bond', c(0.25, 0.25, 0.25, 0.25), periodicity, dates)
	
	models$complete = couch.potato.strategy(data, 'can.eq,us.eq,int.eq,can.re,can.real.bond,can.bond', c(0.2, 0.15, 0.15, 0.1, 0.1, 0.3), periodicity, dates)	
	
	models$permanent = couch.potato.strategy(data, 'can.eq,can.gold,can.bond', c(0.25,0.25,0.5), periodicity, dates)	
		
	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report.part1(models)

I have included a few classic Couch Potato portfolios and the Canadian version of the Permanent portfolio. The equity curves speak for themselves: you can give them fancy names, but in the end all variations of the Couch Potato portfolio performed similarly and suffered a huge draw-down during 2008. The Permanent portfolio did a little better during the 2008 bear market.

Next, let’s back-test some US Couch Potato portfolios:

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	tickers = spl('VIPSX,VTSMX,VGTSX,SPY,TLT,GLD,SHY')
	
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1995-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)	
		
		# extend GLD with Gold.PM - London Gold afternoon fixing prices
		data$GLD = extend.GLD(data$GLD)

	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	models = list()
		periodicity = 'years'
		dates = '2003::'
	
	models$classic = couch.potato.strategy(data, 'VIPSX,VTSMX', rep(1/2,2), periodicity, dates)
	models$margarita = couch.potato.strategy(data, 'VIPSX,VTSMX,VGTSX', rep(1/3,3), periodicity, dates)
	models$permanent = couch.potato.strategy(data, 'SPY,TLT,GLD,SHY', rep(1/4,4), periodicity, dates)
		
	#*****************************************************************
	# Create Report
	#****************************************************************** 
	plotbt.custom.report.part1(models)

The US Couch Potato portfolios also suffered huge draw-downs during 2008. The Permanent portfolio held its ground much better.

Quite a lot has been written about the Couch Potato strategy, but looking at different variations I cannot really see much difference in terms of performance or draw-downs. That is probably why, in the last few years, I have seen the creation of many new ETFs that try to address this in one way or another: for example, tactical asset allocation ETFs, minimum volatility ETFs, and income ETFs with covered-call overlays.

To view the complete source code for this example, please have a look at the bt.couch.potato.test() function in bt.test.r at github.

Some additional references from the Canadian Couch Potato blog that are worth reading:

Company Valuation using Discounted Cash Flows

October 19, 2012 1 comment

Today I want to show a simple example of how we can value a company using Discounted Cash Flow (DCF) analysis. The idea is to compute the company’s Intrinsic Value based on its discounted future cash flows. To project future cash flows I will use the historical Free Cash Flow growth rate, and to compute the present value of these cash flows I will use a conservative 9% discount rate. If you want to read more about Discounted Cash Flow (DCF) analysis, I recommend the following references:

First let’s load historical prices and fundamental data for Apple (AAPL) using the Systematic Investor Toolbox.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical fundamental and pricing data
	#****************************************************************** 
	load.packages('quantmod') 
	tickers = spl('AAPL')
	tickers.temp = spl('NASDAQ:AAPL')
	
	# get fundamental data
	data.fund <- new.env()
	for(i in 1:len(tickers))
		data.fund[[tickers[i]]] = fund.data(tickers.temp[i], 80, 'annual')
			
	# get pricing data
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)			

	# prepare data
	fund = data.fund[[tickers[1]]]
	fund.date = date.fund.data(fund)			
	price = Cl(data[[tickers[1]]]['1995::'])

Next, let’s extract the fundamental data needed for the Discounted Cash Flow (DCF) analysis.

	#*****************************************************************
	# Extract Inputs for DCF Valuation
	#****************************************************************** 				
	# Free Cash Flows
	FCF = get.fund.data('free cash flow', fund, fund.date)
	
	# Invested Capital
	IC = get.fund.data('invested capital', fund, fund.date)
		
	# Sales
	SALE = get.fund.data('total revenue', fund, fund.date)

	# Common Equity
	CEQ = get.fund.data('total equity', fund, fund.date)

	# Common Shares Outstanding
	CSHO = get.fund.data('total common shares out', fund, fund.date)

	# Growth Rate: cash return on invested capital (CROIC) as a growth proxy
	CROIC = FCF/IC
	
	# Smooth both inputs with a 5-period average
	g = runMean(CROIC, 5)
	cash = runMean(FCF, 5)

Next, I created a simple function to estimate a company’s Intrinsic Value using Discounted Cash Flow (DCF) analysis.

	#*****************************************************************
	# Helper function to compute Intrinsic Value
	#****************************************************************** 				
	compute.DCF.IV <- function(cash, equity, shares, g, R) {
		# no meaningful valuation for non-positive cash flows
		if( cash <= 0 ) return(NA)
		
		# allow a single discount rate to be applied to every period
		if( len(R) == 1 ) R = rep(R, len(g))
		
		# present value of projected cash flows plus common equity, per share
		value = equity + sum(cash * cumprod(1 + g) / cumprod(1 + R))
		return( value / shares )
	}
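
For example, with purely hypothetical inputs (all figures in billions): $40 of average free cash flow, $100 of common equity, 0.95 shares outstanding, a flat 10% growth path for 10 years, and a 9% discount rate:

	# hypothetical inputs, just to exercise the function
	compute.DCF.IV(cash = 40, equity = 100, shares = 0.95,
		g = rep(10/100, 10), R = 9/100)   # ~548 per share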

Finally, let’s compute AAPL’s Intrinsic Value and create plots

	#*****************************************************************
	# Compute Intrinsic Value, assumptions:
	# Company will grow for the first 3 years at current Growth Rate
	# slowed down by 20% for the next 4 years, and slowed down by a further 20% for the next 3 years
	# and finally 3% growth for the next 10 years
	#
	# The Discount Rate is 9%
	#
	# http://www.oldschoolvalue.com/blog/stock-analysis/apple-aapl-valuation/
	#****************************************************************** 				
	dcf.price = NA * g
	i.start = which(!is.na(g))[1] 
	
	for(i in i.start : nrow(g)) {
		# Create Growth Rate scenario: 		
		g.scenario = c(rep(g[i],3), rep(g[i],4)*0.8, rep(g[i],3)*0.8*0.8, rep(3/100,10))
		
		# Compute Intrinsic Value
		dcf.price[i] = compute.DCF.IV(cash[i], CEQ[i], CSHO[i], g.scenario, 9/100)
	}
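
To make the growth scenario concrete, here is the path it produces for a hypothetical 30% current growth rate:

	# 3 years at 30%, 4 years at 24%, 3 years at 19.2%, then 10 years at 3%
	c(rep(0.30,3), rep(0.30,4)*0.8, rep(0.30,3)*0.8*0.8, rep(3/100,10))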
	
	#*****************************************************************
	# Create Plots
	#****************************************************************** 
	plota(price, type='l', log = 'y', col='blue', main=tickers[1],
		ylim=range(price,dcf.price,na.rm=T))
	plota.lines(dcf.price, type='s', col='red', lwd=2)
	plota.legend('Close,Intrinsic Value', 'blue,red', list(price, dcf.price))	

	
	plota(g, type='b', col='blue', pch=0, main='Growth Rate')	


	plota(cash, type='b', col='blue', pch=0, main='Free Cash Flows')	

The Intrinsic Value calculations are highly sensitive to your assumptions about the company’s Growth Rate and the Discount Rate used in the Discounted Cash Flow (DCF) analysis.
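
One way to see this sensitivity is to re-price the most recent observation under a few different discount rates, holding the growth scenario fixed; a quick sketch using the inputs computed above:

	# sensitivity sketch: vary the discount rate for the latest inputs
	i = nrow(g)
	for(R in c(7, 9, 11)/100) {
		g.scenario = c(rep(g[i],3), rep(g[i],4)*0.8, rep(g[i],3)*0.8*0.8, rep(3/100,10))
		cat('R =', R, 'IV =', as.numeric(compute.DCF.IV(cash[i], CEQ[i], CSHO[i], g.scenario, R)), '\n')
	}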

AAPL has experienced an amazing Growth Rate over the last 5 years, and the big question is whether AAPL will be able to maintain it in the future. If yes, then the stock price can easily reach new highs, as suggested by the Discounted Cash Flow (DCF) analysis.

To view the complete source code for this example, please have a look at the fundamental.dcf.test() function in fundamental.test.r at github.

Categories: R

Permanent Portfolio – Transaction Cost and better Risk Parity

October 10, 2012 3 comments

I want to address questions that were asked in the comments to my last post, Permanent Portfolio – Simple Tools, about the Permanent Portfolio strategy. Specifically:

  • The impact of transaction costs on performance, and
  • A modified version of the risk allocation portfolio that distributes weights across 3 asset classes: stocks (SPY), gold (GLD), and treasuries (TLT), and only invests in cash (SHY) to fill the residual portfolio exposure once the SPY/GLD/TLT portfolio is scaled to the target volatility

The first point is easy: to incorporate transaction costs into your back-test, just add the commission=0.1 parameter to the bt.run.share() function call. For example, to see the dollar allocation strategy’s performance assuming a 10c per share commission, use the following code:

# original strategy
models$dollar = bt.run.share(data, clean.signal=F)

# assuming 10c a share commissions
models$dollar = bt.run.share(data, commission=0.1, clean.signal=F)

The second point is a bit more work. First, let’s allocate risk across only 3 asset classes: stocks (SPY), gold (GLD), and treasuries (TLT). Next, let’s scale the SPY/GLD/TLT portfolio to the 7% target volatility. And finally, let’s allocate the residual portfolio exposure to cash (SHY).

	#*****************************************************************
	# Risk Weighted: allocate only to 3 asset classes: stocks(SPY), gold(GLD), and treasuries(TLT)
	#****************************************************************** 				
	ret.log = bt.apply.matrix(prices, ROC, type='continuous')
	hist.vol = sqrt(252) * bt.apply.matrix(ret.log, runSD, n = 21)	
	weight.risk = weight.dollar / hist.vol
		weight.risk$SHY = 0 
		weight.risk = weight.risk / rowSums(weight.risk)
		
	data$weight[] = NA
		data$weight[period.ends,] = weight.risk[period.ends,]
	models$risk = bt.run.share(data, commission=commission, clean.signal=F)

	#*****************************************************************
	# Risk Weighted + 7% target volatility
	#****************************************************************** 				
	data$weight[] = NA
		data$weight[period.ends,] = target.vol.strategy(models$risk,
						weight.risk, 7/100, 21, 100/100)[period.ends,]
	models$risk.target7 = bt.run.share(data, commission=commission, clean.signal=F)

	#*****************************************************************
	# Risk Weighted + 7% target volatility + SHY
	#****************************************************************** 				
	data$weight[] = NA
		data$weight[period.ends,] = target.vol.strategy(models$risk,
						weight.risk, 7/100, 21, 100/100)[period.ends,]

		# allocate the residual exposure left over after volatility scaling to cash
		cash = 1 - rowSums(data$weight)
		data$weight$SHY[period.ends,] = cash[period.ends]
	models$risk.target7.shy = bt.run.share(data, commission=commission, clean.signal=F)

The modified version of the risk allocation portfolio performs well relative to the other portfolios, even after incorporating the 10c per share transaction cost.

To view the complete source code for this example, please have a look at the bt.permanent.portfolio3.test() function in bt.test.r at github.

Weekend Reading – Facebook’s P/E ratio

The Barron’s article Still Too Pricey by Andrew Bary looks at the share price of Facebook and, based on the P/E ratio valuation metric, concludes that even at current prices the stock is overvalued. I want to show how to do this type of fundamental analysis using the Systematic Investor Toolbox.

First let’s load historical prices and earnings per share (EPS) for Facebook and a few stocks in the technology sector: LinkedIn, Groupon, Apple, and Google.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical fundamental and pricing data
	#****************************************************************** 
	load.packages('quantmod') 
	tickers = spl('FB,LNKD,GRPN,AAPL,GOOG')
	tickers.temp = spl('NASDAQ:FB,NYSE:LNKD,NASDAQ:GRPN,NASDAQ:AAPL,NASDAQ:GOOG')
	
	# get fundamental data
	data.fund <- new.env()
	for(i in 1:len(tickers)) {
			cat(tickers[i],'\n')
			data.fund[[tickers[i]]] = fund.data(tickers.temp[i], 80)
	}
	
	# get pricing data
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)			

Next, let’s combine fundamental and pricing data and create P/E ratio for all stocks.

	#*****************************************************************
	# Combine fundamental and pricing data
	#****************************************************************** 				
	for(i in tickers) {
		fund = data.fund[[i]]
		fund.date = date.fund.data(fund)
			
		# Earnings per Share: annualize quarterly EPS (x 4) for the first 3 quarters,
		# then switch to 12-month rolling EPS once a full year of data is available
		EPS = 4 * get.fund.data('Diluted EPS from Total Operations', fund, fund.date)
		if(nrow(EPS) > 3)
			EPS = rbind(EPS[1:3], get.fund.data('Diluted EPS from Total Operations', fund, fund.date, is.12m.rolling=T)[-c(1:3)])
		
		# merge	
		data[[i]] = merge(data[[i]], EPS)
	}

	bt.prep(data, align='keep.all', dates='1995::')
	
	#*****************************************************************
	# Create PE
	#****************************************************************** 
	prices = data$prices
		prices = bt.apply.matrix(prices, function(x) ifna.prev(x))
		
	EPS =  bt.apply(data, function(x) ifna.prev(x[, 'EPS']))
	
	PE = ifna(prices / EPS, NA)
		PE[ abs(EPS) < 0.001 ] = NA	
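
A note on the ifna.prev() helper used above: EPS only changes on reporting dates, so it carries the last known value forward to every trading day. A tiny illustration (toy input, not from the post):

	ifna.prev( c(NA, 1, NA, NA, 2, NA) )   # NA 1 1 1 2 2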

Please note that for a very small EPS the P/E ratio becomes meaninglessly large (for example, a $20 price against an EPS of 0.0005 gives a P/E of 40,000); therefore, I set the P/E to NA in such cases.

The hard part is done; now let’s plot the P/E ratios for all companies.

    #*****************************************************************
    # Create Report
    #******************************************************************       
    plota.matplot(PE)

    plota.matplot(PE, type='b',pch=20, dates='2012::')
		
    plota.matplot(EPS)
	
    plota.matplot(prices)

P/E ratios for all companies:

P/E ratios for all companies in 2012:

Earnings per share (EPS) for all companies:

Prices for all companies:

From these charts, I would say it is too early to decide if Facebook is overvalued on a historical P/E basis alone: we only have 3 financial statements, which is not enough to make an informed conclusion. You might use projected one year (FY1) and two year (FY2) earnings estimates to make a better decision.

What is interesting in these charts is how LinkedIn manages to sustain its astronomical P/E ratio.

I have previously shown examples of how to get and use fundamental data. Here are links for your reference:

To view the complete source code for this example, please have a look at the fundamental.fb.test() function in fundamental.test.r at github.

Categories: Factors, R

Permanent Portfolio – Simple Tools

October 5, 2012 5 comments

I have previously described and back-tested the Permanent Portfolio strategy based on the series of posts at the GestaltU blog. Today I want to show how we can improve the Permanent Portfolio strategy’s performance using the following simple tools:

  • Volatility targeting
  • Risk allocation
  • Tactical market filter

First, let’s load the historical prices for stocks (SPY), gold (GLD), treasuries (TLT), and cash (SHY), and create a quarterly rebalanced Permanent Portfolio strategy using the Systematic Investor Toolbox.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = spl('SPY,TLT,GLD,SHY')
		
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
		
		# extend GLD with Gold.PM - London Gold afternoon fixing prices
		data$GLD = extend.GLD(data$GLD)
	
	bt.prep(data, align='remove.na')

	#*****************************************************************
	# Setup
	#****************************************************************** 		
	prices = data$prices   
		n = ncol(prices)

	period.ends = endpoints(prices, 'quarters')
		period.ends = period.ends[period.ends > 0]		
		period.ends = c(1, period.ends)
					

	models = list()
	
	
	#*****************************************************************
	# Dollar Weighted
	#****************************************************************** 			
	target.allocation = matrix(rep(1/n,n), nrow=1)
	# ntop() gives equal weight to the top n columns; with n = ncol(prices)
	# every asset gets the same 1/n (dollar) weight
	weight.dollar = ntop(prices, n)
	
	data$weight[] = NA
		data$weight[period.ends,] = weight.dollar[period.ends,]
	models$dollar = bt.run.share(data, clean.signal=F)

Now let’s create a version of the Permanent Portfolio strategy that targets 7% annual volatility, based on a 21 day look-back period.

	#*****************************************************************
	# Dollar Weighted + 7% target volatility
	#****************************************************************** 				
	data$weight[] = NA
		data$weight[period.ends,] = target.vol.strategy(models$dollar,
						weight.dollar, 7/100, 21, 100/100)[period.ends,]
	models$dollar.target7 = bt.run.share(data, clean.signal=F)
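
The target.vol.strategy() function is part of SIT; conceptually, it scales the portfolio’s exposure by the ratio of target volatility to realized volatility, capped at the maximum allowed exposure (100% here), leaving the residual uninvested. A minimal sketch of that idea (my simplification, not SIT’s exact implementation):

	# volatility targeting sketch: scale weights by target / realized vol,
	# capped at the maximum allowed exposure
	target.vol.sketch <- function(weight, realized.vol, target.vol = 7/100, max.exposure = 1) {
		exposure = pmin(target.vol / realized.vol, max.exposure)
		weight * exposure
	}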

Please note that allocating equal dollar amounts to each investment concentrates risk in the riskier assets. If we want to distribute the risk budget equally across all assets, we can consider a portfolio based on equal risk allocation instead of equal capital (dollar) allocation.

	#*****************************************************************
	# Risk Weighted
	#****************************************************************** 				
	ret.log = bt.apply.matrix(prices, ROC, type='continuous')
	hist.vol = sqrt(252) * bt.apply.matrix(ret.log, runSD, n = 21)	
	weight.risk = weight.dollar / hist.vol
		weight.risk = weight.risk / rowSums(weight.risk)
		
	data$weight[] = NA
		data$weight[period.ends,] = weight.risk[period.ends,]
	models$risk = bt.run.share(data, clean.signal=F)

We can also use a market filter, for example a 10 month moving average (implemented below as a 200 day SMA), to control portfolio drawdowns.

	#*****************************************************************
	# Market Filter (tactical): 10 month moving average
	#****************************************************************** 				
	period.ends = endpoints(prices, 'months')
		period.ends = period.ends[period.ends > 0]		
		period.ends = c(1, period.ends)

	sma = bt.apply.matrix(prices, SMA, 200)
	weight.dollar.tactical = weight.dollar * (prices > sma)	
	
	data$weight[] = NA
		data$weight[period.ends,] = weight.dollar.tactical[period.ends,]
	models$dollar.tactical = bt.run.share(data, clean.signal=F)

Finally, let’s combine market filter and volatility targeting:

	#*****************************************************************
	# Tactical + 7% target volatility
	#****************************************************************** 				
	data$weight[] = NA
		data$weight[period.ends,] = target.vol.strategy(models$dollar.tactical,
						weight.dollar.tactical, 7/100, 21, 100/100)[period.ends,]
	models$dollar.tactical.target7 = bt.run.share(data, clean.signal=F)
		
			
	#*****************************************************************
	# Create Report
	#******************************************************************       
	plotbt.custom.report.part1(models)       
	
	plotbt.strategy.sidebyside(models)	

The final portfolio, which combines the market filter and volatility targeting, is a big step up from the original Permanent Portfolio strategy: the returns are a bit lower, but the draw-downs are cut in half.

To view the complete source code for this example, please have a look at the bt.permanent.portfolio2.test() function in bt.test.r at github.