Weekend Reading: F-Squared
Mebane Faber posted another interesting article, Building a Simple Sector Rotation on Momentum and Trend, that caught my interest. Today I want to show how you can test such a strategy using the Systematic Investor Toolbox:
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

data = new.env()

# load historical market returns
temp = get.fama.french.data('F-F_Research_Data_Factors', periodicity = '', download = T, clean = T)
ret = cbind(temp[[1]]$Mkt.RF + temp[[1]]$RF, temp[[1]]$RF)
price = bt.apply.matrix(ret / 100, function(x) cumprod(1 + x))
data$SPY = make.stock.xts( price$Mkt.RF )
data$SHY = make.stock.xts( price$RF )

# load historical sector returns
temp = get.fama.french.data('10_Industry_Portfolios', periodicity = '', download = T, clean = T)
ret = temp[[1]]
price = bt.apply.matrix(ret[,1:9] / 100, function(x) cumprod(1 + x))
for(n in names(price)) data[[n]] = make.stock.xts( price[,n] )

# align dates
data$symbolnames = c(names(price), 'SHY', 'SPY')
bt.prep(data, align='remove.na', dates='2000::')

# back-test dates
bt.dates = '2001:04::'

#*****************************************************************
# Setup
#******************************************************************
prices = data$prices
n = ncol(data$prices)

models = list()

#*****************************************************************
# Benchmark Strategies
#******************************************************************
data$weight[] = NA
	data$weight$SPY[1] = 1
models$SPY = bt.run.share(data, clean.signal=F, dates=bt.dates)

weight = prices
	weight$SPY = NA
	weight$SHY = NA

data$weight[] = NA
	data$weight[] = ntop(weight[], n)
models$EW = bt.run.share(data, clean.signal=F, dates=bt.dates)
#*****************************************************************
# Code Strategies
# http://www.mebanefaber.com/2013/12/04/square-root-of-f-squared/
#******************************************************************
sma = bt.apply.matrix(prices, SMA, 10)

# create position score
position.score = sma
position.score[ prices < sma ] = NA
	position.score$SHY = NA
	position.score$SPY = NA

# equal weight allocation
weight = ntop(position.score[], n)

# number of invested funds
n.selected = rowSums(weight != 0)

# cash logic
weight$SHY[n.selected == 0,] = 1

weight[n.selected == 1,] = 0.25 * weight[n.selected == 1,]
weight$SHY[n.selected == 1,] = 0.75

weight[n.selected == 2,] = 0.5 * weight[n.selected == 2,]
weight$SHY[n.selected == 2,] = 0.5

weight[n.selected == 3,] = 0.75 * weight[n.selected == 3,]
weight$SHY[n.selected == 3,] = 0.25

# cbind(round(100*weight,0), n.selected)

data$weight[] = NA
	data$weight[] = weight
models$strategy1 = bt.run.share(data, clean.signal=F, dates=bt.dates)

#*****************************************************************
# Create Report
#******************************************************************
strategy.performance.snapshoot(models, one.page = T)
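The cash logic above can be summarized as one rule: the invested fraction is min(n.selected, 4) / 4, split equally among the selected sectors, with the remainder parked in cash (SHY). Here is a minimal sketch of just that rule in base R; the `cash.logic` helper is a hypothetical illustration, not part of SIT:

```r
# Hypothetical helper illustrating the cash-allocation rule:
# k sectors pass the 10-month SMA filter; the invested fraction
# is min(k, 4) / 4, split equally among the k sectors, and the
# remainder goes to cash (SHY).
cash.logic = function(k, n.steps = 4) {
	invested = min(k, n.steps) / n.steps
	c(per.sector = if (k > 0) invested / k else 0, cash = 1 - invested)
}

cash.logic(0)  # all cash
cash.logic(2)  # 25% in each of 2 sectors, 50% cash
cash.logic(6)  # fully invested, 1/6 in each sector
```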
Mebane, thank you very much for sharing your great ideas. I would encourage readers to experiment with this strategy and report back.
Please note that I back-tested the strategy using monthly observations. The strategy's drawdown is around 17% using monthly data; if we switch to daily data, the drawdown grows to around 22%. There was one really bad month in 2002.
To view the complete source code for this example, please have a look at the bt.mebanefaber.f.squared.test() function in bt.test.r at GitHub.
Averaged Input Assumptions and Momentum
Today I want to share another interesting idea contributed by Pierre Chretien. Pierre suggested using Averaged Input Assumptions and Momentum to create a reasonably quiet strategy. The averaging techniques are used to avoid over-fitting to any particular frequency.
To create Averaged Input Assumptions, we combine returns over different look-back periods into one sample. Because the shorter windows overlap the longer ones, recent returns appear more often in the combined sample and therefore carry more weight in the overall Input Assumptions.
create.ia.averaged <- function(lookbacks, n.lag) {
	lookbacks = lookbacks
	n.lag = n.lag

	function(hist.returns, index = 1:ncol(hist.returns), hist.all) {
		nperiods = nrow(hist.returns)

		temp = c()
		for (n.lookback in lookbacks)
			temp = rbind(temp, hist.returns[(nperiods - n.lookback - n.lag + 1):(nperiods - n.lag), ])
		create.ia(temp, index, hist.all)
	}
}
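To see why stacking windows gives more weight to recent returns, consider a toy example in base R. The `stack.windows` helper below is a hypothetical stand-in that mirrors only the row-stacking inside create.ia.averaged, without the SIT create.ia call:

```r
# Toy illustration of the row-stacking idea (hypothetical data):
# stacking the trailing 2-, 4-, and 6-row windows repeats the most
# recent rows more often, so they carry proportionally more weight
# in the combined sample used to estimate input assumptions.
hist.returns = matrix(rnorm(10 * 2), ncol = 2)  # 10 periods, 2 assets

stack.windows = function(hist.returns, lookbacks, n.lag = 0) {
	nperiods = nrow(hist.returns)
	temp = c()
	for (n.lookback in lookbacks)
		temp = rbind(temp, hist.returns[(nperiods - n.lookback - n.lag + 1):(nperiods - n.lag), ])
	temp
}

combined = stack.windows(hist.returns, c(2, 4, 6))
nrow(combined)  # 2 + 4 + 6 = 12 rows; the last 2 original rows appear 3 times each
```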
To create Averaged Momentum, we take an average of annualized momentum values computed over different look-back periods; each momentum value is scaled by 252 / look-back to annualize it, which effectively gives more weight to the shorter look-backs.
momentum.averaged <- function(prices,
	lookbacks = c(20, 60, 120, 250),	# lengths of momentum look-backs
	n.lag = 3
) {
	momentum = 0 * prices
	for (n.lookback in lookbacks) {
		part.mom = mlag(prices, n.lag) / mlag(prices, n.lookback + n.lag) - 1
		momentum = momentum + 252 / n.lookback * part.mom
	}
	momentum / len(lookbacks)
}
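As a sanity check on the 252 / n.lookback scaling, here is a self-contained sketch in base R on a synthetic constant-return series. The `mlag` function below is a minimal stand-in for SIT's mlag, and the numbers are illustrative only:

```r
# Synthetic series with a constant daily return r: every look-back then
# measures roughly the same annualized momentum, so the average of the
# annualized momentum values should land near 252 * r as well.
r = 0.001
prices = matrix(cumprod(rep(1 + r, 300)), ncol = 1)

# minimal stand-in for SIT's mlag: shift a matrix down by k rows
mlag = function(x, k) rbind(matrix(NA, k, ncol(x)), x[1:(nrow(x) - k), , drop = FALSE])

lookbacks = c(20, 60, 120, 250)
n.lag = 3
momentum = 0 * prices
for (n.lookback in lookbacks) {
	part.mom = mlag(prices, n.lag) / mlag(prices, n.lookback + n.lag) - 1
	momentum = momentum + 252 / n.lookback * part.mom
}
avg.mom = momentum / length(lookbacks)

tail(avg.mom, 1)  # near 252 * r = 0.252, slightly higher due to compounding
```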
Next, let's compare historical Input Assumptions vs Averaged Input Assumptions, and Momentum vs Averaged Momentum. I will consider Absolute Momentum (not cross-sectional); for more information about relative and absolute momentum, please see:
Absolute Momentum: A Simple Rule-Based Strategy and Universal Trend-Following Overlay, Gary Antonacci, 2013
Generalized Momentum and Flexible Asset Allocation (FAA): An Heuristic Approach, W.J. Keller and H.S. Van Putten, 2012
###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

# 10 funds
tickers = spl('Us.Eq = VTI + VTSMX,
	Eurpoe.Eq = IEV + FIEUX,
	Japan.Eq = EWJ + FJPNX,
	Emer.Eq = EEM + VEIEX,
	Re = RWX + VNQ + VGSIX,
	Com = DBC + QRAAX,
	Gold = GLD + SCGDX,
	Long.Tr = TLT + VUSTX,
	Mid.Tr = IEF + VFITX,
	Short.Tr = SHY + VFISX')

start.date = 1998
dates = paste(start.date, '::', sep='')

data <- new.env()
getSymbols.extra(tickers, src = 'yahoo', from = '1980-01-01', env = data, set.symbolnames = T, auto.assign = T)
	for(i in data$symbolnames) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='keep.all', dates=paste(start.date-2, ':12::', sep=''), fill.gaps = T)

#*****************************************************************
# Setup
#******************************************************************
prices = data$prices
n = ncol(prices)
nperiods = nrow(prices)

periodicity = 'months'
period.ends = endpoints(prices, periodicity)
	period.ends = period.ends[period.ends > 0]

max.product.exposure = 0.6

#*****************************************************************
# Input Assumptions
#******************************************************************
lookback.len = 40
create.ia.fn = create.ia

# input assumptions are averaged over 20, 40, 60 days using a 1 day lag
ia.array = c(20, 40, 60)
avg.create.ia.fn = create.ia.averaged(ia.array, 1)

#*****************************************************************
# Momentum
#******************************************************************
universe = prices > 0

mom.lookback.len = 120
momentum = prices / mlag(prices, mom.lookback.len) - 1
mom.universe = ifna(momentum > 0, F)

# momentum is averaged over 20, 60, 120, 250 days using a 3 day lag
mom.array = c(20, 60, 120, 250)
avg.momentum = momentum.averaged(prices, mom.array, 3)
avgmom.universe = ifna(avg.momentum > 0, F)

#*****************************************************************
# Algos
#******************************************************************
min.risk.fns = list(
	EW = equal.weight.portfolio,
	MV = min.var.portfolio,
	MCE = min.corr.excel.portfolio,

	MV.RSO = rso.portfolio(min.var.portfolio, 3, 100, const.ub = max.product.exposure),
	MCE.RSO = rso.portfolio(min.corr.excel.portfolio, 3, 100, const.ub = max.product.exposure)
)

#*****************************************************************
# Code Strategies
#******************************************************************
make.strategy.custom <- function(name, create.ia.fn, lookback.len, universe, env) {
	obj = portfolio.allocation.helper(data$prices,
		periodicity = periodicity,
		universe = universe,
		lookback.len = lookback.len,
		create.ia.fn = create.ia.fn,
		const.ub = max.product.exposure,
		min.risk.fns = min.risk.fns,
		adjust2positive.definite = F
	)
	env[[name]] = create.strategies(obj, data, prefix = paste(name, '.', sep=''))$models
}

models <- new.env()
make.strategy.custom('ia.none'       , create.ia.fn    , lookback.len, universe       , models)
make.strategy.custom('ia.mom'        , create.ia.fn    , lookback.len, mom.universe   , models)
make.strategy.custom('ia.avg_mom'    , create.ia.fn    , lookback.len, avgmom.universe, models)
make.strategy.custom('avg_ia.none'   , avg.create.ia.fn, 252         , universe       , models)
make.strategy.custom('avg_ia.mom'    , avg.create.ia.fn, 252         , mom.universe   , models)
make.strategy.custom('avg_ia.avg_mom', avg.create.ia.fn, 252         , avgmom.universe, models)

#*****************************************************************
# Create Report
#*****************************************************************
strategy.snapshot.custom <- function(models, n = 0, title = NULL) {
	if (n > 0)
		models = models[ as.vector(matrix(1:len(models), ncol=n, byrow=T)) ]

	layout(1:3)
	plotbt(models, plotX = T, log = 'y', LeftMargin = 3, main = title)
		mtext('Cumulative Performance', side = 2, line = 1)
	plotbt.strategy.sidebyside(models)
	barplot.with.labels(sapply(models, compute.turnover, data), 'Average Annual Portfolio Turnover', T)
}

# basic vs basic + momentum => momentum filter has better results
models.final = c(models$ia.none, models$ia.mom)
strategy.snapshot.custom(models.final, len(min.risk.fns), 'Momentum Filter')

# basic vs basic + avg ia => averaged ia reduces turnover
models.final = c(models$ia.none, models$avg_ia.none)
strategy.snapshot.custom(models.final, len(min.risk.fns), 'Averaged Input Assumptions')

# basic + momentum vs basic + avg momentum => mixed results for averaged momentum
models.final = c(models$ia.mom, models$ia.avg_mom)
strategy.snapshot.custom(models.final, len(min.risk.fns), 'Averaged Momentum')

# basic + momentum vs avg ia + avg momentum
models.final = c(models$ia.mom, models$avg_ia.avg_mom)
strategy.snapshot.custom(models.final, len(min.risk.fns), 'Averaged vs Base')
Above, I compared results for the following four cases:
1. Adding a Momentum filter: all algos perform better.
2. Input Assumptions vs Averaged Input Assumptions: returns are very similar, but using Averaged Input Assumptions helps reduce portfolio turnover.
3. Momentum vs Averaged Momentum: returns are very similar, but using Averaged Momentum increases portfolio turnover.
4. Historical Input Assumptions + Momentum vs Averaged Input Assumptions + Averaged Momentum: results are mixed; there is no consistent advantage to using the Averaged methods.
Overall, the Averaged methods are a very interesting idea, and I hope you will experiment with them and share your findings, as Pierre did. Pierre, again, thank you very much for sharing.
The full source code and example for the bt.averaged.test() function are available in bt.test.r at GitHub.