Archive for the ‘Risk Measures’ Category

Simulating Multiple Asset Paths in R

I recently came across the paper Optimal Rebalancing Strategy Using Dynamic Programming for Institutional Portfolios by W. Sun, A. Fan, L. Chen, T. Schouwenaars, and M. Albota, which examines the cost of different rebalancing methods. For example, one might use calendar rebalancing: i.e. rebalance every month / quarter / year. Or one might use threshold rebalancing: i.e. rebalance only if asset weights break out from a certain band around the policy mix, say 3%.

To investigate the cost of the different rebalancing methods, the authors run 10,000 simulations. Today, I want to show how to simulate asset price paths given expected returns and covariances. I will assume that prices follow a Geometric Brownian Motion. I will also show a simple application of Monte Carlo option pricing. In the next post I will evaluate the cost of different rebalancing methods.

Let’s assume that a stock price can be described by the stochastic differential equation:

dS_{t}=\mu S_{t}dt+\sigma S_{t}dW_{t}

where \mu is the expected annual return of the underlying asset, \sigma is the annualized volatility, and W_{t} is a Brownian Motion. This stochastic differential equation has the following solution:

S_{t}=S_{0}\exp\left(\left(\mu - \frac{\sigma^{2}}{2}\right)t+\sigma W_{t}\right)

which I implemented in the asset.paths() function. The asset.paths() function is based on the Simulating Multiple Asset Paths in MATLAB code.

asset.paths <- function(s0, 	# initial price(s)
	mu, 			# expected annual return(s)
	sigma, 			# annualized volatility, or covariance matrix for multiple assets
	nsims = 10000, 		# number of simulation paths
	periods = c(0, 1)	# time periods at which to simulate prices
) 
{
	s0 = as.vector(s0)
	nsteps = len(periods)	# len() is the SIT shortcut for length()
	dt = c(periods[1], diff(periods))
	
	if( len(s0) == 1 ) {
		# single asset
		drift = mu - 0.5 * sigma^2
		if( nsteps == 1 ) {
			s0 * exp(drift * dt + sigma * sqrt(dt) * rnorm(nsims))
		} else {
			temp = matrix(exp(drift * dt + sigma * sqrt(dt) * rnorm(nsteps * nsims)), nc=nsims)
			# compound per-period growth factors across time steps
			for(i in 2:nsteps) temp[i,] = temp[i,] * temp[(i-1),]
			s0 * temp
		}
	} else {
		# multiple assets: correlated normal draws via MASS::mvrnorm,
		# with sigma interpreted as the covariance matrix
		require(MASS)
		drift = mu - 0.5 * diag(sigma)
		n = len(mu)
		
		if( nsteps == 1 ) {
			s0 * exp(drift * dt + sqrt(dt) * t(mvrnorm(nsims, rep(0, n), sigma)))
		} else {
			temp = array(exp(as.vector(drift %*% t(dt)) + t(sqrt(dt) * mvrnorm(nsteps * nsims, rep(0, n), sigma))), c(n, nsteps, nsims))
			# compound per-period growth factors across time steps
			for(i in 2:nsteps) temp[,i,] = temp[,i,] * temp[,(i-1),]
			s0 * temp
		}
	}
}
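
As a quick sanity check (my own addition, not from the original post): under Geometric Brownian Motion E[S_t] = S0 * exp(mu * t), so the simulated mean should come close to that value. Note that asset.paths() relies on SIT helpers such as len(), so SIT (loaded below) must be sourced first:

	# sanity check: under GBM, E[S_t] = S0 * exp(mu * t)
	paths = asset.paths(100, mu = 0.05, sigma = 0.11, nsims = 100000, periods = 1)
	mean(paths)	# should be close to 100 * exp(0.05), i.e. about 105.13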

Next, let’s visualize some simulated asset paths:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)	# Windows only
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Plot some price paths
	#******************************************************************  
	S = c(100,105)
	X = 98
	Time = 0.5
	r = 0.05
	sigma = c(0.11,0.16)
	rho = 0.63
	N = 10000

	# Single Asset for 10 years
	periods = 0:10
	prices = asset.paths(S[1], r, sigma[1], N, periods = periods)
	
	# plot
	matplot(prices[,1:100], type='l', xlab='Years', ylab='Prices',
		main='Selected Price Paths')
	
		
	# Multiple Assets for 10 years
	periods = 0:10
	cov.matrix = sigma%*%t(sigma) * matrix(c(1,rho,rho,1),2,2)
	prices = asset.paths(S, c(r,r), cov.matrix, N, periods = periods)

	# plot
	layout(1:2)
	matplot(prices[1,,1:100], type='l', xlab='Years', ylab='Prices',
		main='Selected Price Paths for Asset 1')
	matplot(prices[2,,1:100], type='l', xlab='Years', ylab='Prices',
		main='Selected Price Paths for Asset 2')

Next, let’s look at examples of Monte Carlo option pricing using the asset.paths() function; the complete code is in the random.r file at github.

	#*****************************************************************
	# Price European Call Option
	#******************************************************************  
	load.packages('fOptions')

	# Black–Scholes
	GBSOption(TypeFlag = "c", S = S[1], X = X, Time = Time, r = r, b = r, sigma = sigma[1])
	
	# Monte Carlo simulation	
	N = 1000000	
	prices = asset.paths(S[1], r, sigma[1], N, periods = Time)
		future.payoff = pmax(0, prices - X)
		discounted.payoff = future.payoff * exp(-r * Time)
	# option price
	mean(discounted.payoff) 
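
Because this is a simulation estimate, it is worth gauging its accuracy. Here is a minimal sketch (my addition) that computes the Monte Carlo standard error and an approximate 95% confidence interval from the discounted.payoff vector above:

	# Monte Carlo standard error shrinks as 1/sqrt(N)
	se = sd(discounted.payoff) / sqrt(N)
	mean(discounted.payoff) + c(-1.96, 1.96) * se	# approximate 95% confidence interval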

	#*****************************************************************
	# Price Asian Call Option
	#******************************************************************  
	load.packages('fExoticOptions')

	Time = 1/12

	# Approximation
	TurnbullWakemanAsianApproxOption(TypeFlag = "c", S = S[1], SA = S[1], 
  		X = X, Time = Time, time = Time, tau = 0 , r = r, b = r, sigma = sigma[1])
		
	# Monte Carlo simulation		
	N = 100000
	periods = seq(0,Time,1/360)
		n = len(periods)
	prices = asset.paths(S[1], r, sigma[1], N, periods = periods)
		future.payoff = pmax(0, colSums(prices)/n - X)
		discounted.payoff = future.payoff * exp(-r * Time)
	# option price
	mean(discounted.payoff) 

	#*****************************************************************
	# Price Basket Option
	#******************************************************************  
	
	Time = 0.5
	
	# Approximation
	TwoRiskyAssetsOption(TypeFlag = "cmax", S1 = S[1], S2 = S[2],
		X = X, Time = Time, r = r, b1 = r, b2 = r,
		sigma1 = sigma[1], sigma2 = sigma[2], rho = rho)
	
	# Monte Carlo simulation		
	N = 100000
	cov.matrix = sigma%*%t(sigma) * matrix(c(1,rho,rho,1),2,2)
	prices = asset.paths(S, c(r,r), sigma = cov.matrix, N, periods = Time)
		future.payoff = pmax(0, apply(prices,2,max) - X)
		discounted.payoff = future.payoff * exp(-r * Time)
	# option price
	mean(discounted.payoff) 

	#*****************************************************************
	# Price Asian Basket Option
	#******************************************************************  
	
	Time = 1/12
	
	# Monte Carlo simulation		
	N = 10000
	periods = seq(0,Time,1/360)
		n = len(periods)
	
	prices = asset.paths(S, c(r,r), sigma = cov.matrix, N, periods = periods)
		future.payoff = pmax(0, colSums(apply(prices,c(2,3),max))/n - X)
		discounted.payoff = future.payoff * exp(-r * Time)
	# option price
	mean(discounted.payoff) 

Please note that Monte Carlo option pricing requires many simulations to converge to the option price, and it takes longer as we increase the number of simulations, time periods, or assets. On the positive side, it provides a viable way to price difficult problems that cannot be solved analytically.
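
One simple way to speed up convergence is antithetic variates: pair each normal draw with its negation, which typically reduces the variance of the estimate for monotone payoffs. Below is a self-contained sketch (my addition, not part of asset.paths()) using the European call parameters from above:

	# antithetic variates sketch for the European call
	S0 = 100; X = 98; Time = 0.5; r = 0.05; sigma = 0.11; N = 100000
	z = rnorm(N/2)
	z = c(z, -z)	# pair each draw with its negation
	ST = S0 * exp((r - 0.5 * sigma^2) * Time + sigma * sqrt(Time) * z)
	mean(exp(-r * Time) * pmax(0, ST - X))	# typically less noisy than plain Monte Carlo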

In the next post I will look at the cost of different rebalancing methods.

To view the complete source code for this example, please have a look at the asset.paths.test() function in random.r at github.


Volatility Quantiles

Today I want to examine the performance of stocks in the S&P 500 grouped into Quantiles based on one year historical Volatility. The idea is very simple: each week we form Volatility Quantiles portfolios by sorting stocks in the S&P 500 into Quantiles of one year historical Volatility. Next we backtest each portfolio and check whether low historical volatility corresponds to low realized volatility.

Let’s start by loading historical prices for all companies in the S&P 500 and create SPY and Equal Weight benchmarks using the Systematic Investor Toolbox:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod')	
	tickers = sp500.components()$tickers

	
	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1970-01-01', env = data, auto.assign = T)
		# remove companies with less than 1000 days (roughly 4 years) of data
		rm.index = which( sapply(ls(data), function(x) nrow(data[[x]])) < 1000 )	
		rm(list=names(rm.index), envir=data)
		
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)		
	bt.prep(data, align='keep.all', dates='1994::')
	
	data.spy <- new.env()
	getSymbols('SPY', src = 'yahoo', from = '1970-01-01', env = data.spy, auto.assign = T)
	bt.prep(data.spy, align='keep.all', dates='1994::')

	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices
		nperiods = nrow(prices)
		n = ncol(prices)
			
	models = list()
	
	# SPY
	data.spy$weight[] = NA
		data.spy$weight[] = 1
	models$spy = bt.run(data.spy)
	
	# Equal Weight
	data$weight[] = NA
		data$weight[] = ntop(prices, 500)
	models$equal.weight = bt.run(data)	

Next let’s divide stocks in the S&P 500 into Quantiles using one year historical Volatility and create a backtest for each quantile.

	#*****************************************************************
	# Create Quantiles based on the historical one year volatility 
	#****************************************************************** 
	# setup re-balancing periods
	period.ends = endpoints(prices, 'weeks')
		period.ends = period.ends[period.ends > 0]

	# compute historical one year volatility
	p = bt.apply.matrix(coredata(prices), ifna.prev)	
	ret = p / mlag(p) - 1		
	sd252 = bt.apply.matrix(ret, runSD, 252)		
		
	# split stocks in the S&P 500 into Quantiles using one year historical Volatility
	n.quantiles=5
	start.t = which(period.ends >= (252+2))[1]	
	quantiles = weights = p * NA
				
	for( t in start.t:len(period.ends) ) {
		i = period.ends[t]

		factor = sd252[i,]
		ranking = ceiling(n.quantiles * rank(factor, na.last = 'keep','first') / count(factor))
	
		quantiles[i,] = ranking
		weights[i,] = 1/tapply(rep(1,n), ranking, sum)[ranking]			
	}

	quantiles = ifna(quantiles,0)
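
The ranking and weighting logic above is compact, so here is a standalone toy example (my own illustration with hypothetical factor values) showing how stocks are mapped to quantiles and how equal weights are assigned within each quantile:

	# toy example: 10 hypothetical volatility values, 5 quantiles
	factor = c(0.10, 0.25, 0.15, 0.30, 0.05, 0.20, 0.35, 0.12, 0.28, 0.08)
	n.quantiles = 5
	ranking = ceiling(n.quantiles * rank(factor, na.last = 'keep', 'first') / sum(!is.na(factor)))
	ranking	# quantile for each stock: lowest volatility stocks land in quantile 1
	1 / tapply(rep(1, 10), ranking, sum)[ranking]	# each quantile holds 2 stocks, so each weight is 1/2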
	
	#*****************************************************************
	# Create backtest for each Quantile
	#****************************************************************** 
	for( i in 1:n.quantiles) {
		temp = weights * NA
		temp[period.ends,] = 0
		temp[quantiles == i] = weights[quantiles == i]
	
		data$weight[] = NA
			data$weight[] = temp
		models[[ paste('Q',i,sep='_') ]] = bt.run(data, silent = T)
	}

Finally let’s plot historical equity curves for each Volatility Quantile and create a summary table.

	#*****************************************************************
	# Create Report
	#****************************************************************** 					
	plotbt.custom.report.part1(models)		
	plotbt.strategy.sidebyside(models)

Our thesis holds for stocks in the S&P 500 index: low historical volatility leads to low realized volatility. There is a one-to-one correspondence between the historical and realized volatility quantiles.

Please note that performance numbers have to be taken with a grain of salt because I used the current components of the S&P 500 index, which introduces survivorship bias.

To view the complete source code for this example, please have a look at the bt.volatility.quantiles.test() function in bt.test.r at github.

Gini Efficient Frontier

David Varadi recently wrote two posts about the Gini Coefficient: I Dream of Gini, and Mean-Gini Optimization. I want to show how to use the Gini risk measure to construct an efficient frontier and compare it with the alternative risk measures I discussed previously.

I will use the Gini mean difference risk measure – the mean of the absolute difference between every possible pair of returns – to construct the Mean-Gini Efficient Frontier. I will use the methods presented in “The Generation of Mean Gini Efficient Sets” by J. Okunev (1991) to construct optimal portfolios.

Let x.i, i = 1,…,N be the weights of the instruments in the portfolio, and let r.it denote the return of the i-th asset in time period t, for i = 1,…,N and t = 1,…,T. Let Y.t = SUM <over i> x.i * r.it be the portfolio return in period t. The portfolio’s Gini mean difference (page 5) can then be written as:

\Gamma = \frac{1}{T^2}\sum_{j=1}^{T}\sum_{k>j}^{T}\left | Y_{j} - Y_{k} \right |
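
To make the definition concrete, here is a brute-force sketch (my own helper, not part of SIT) that computes the Gini mean difference of a vector of portfolio returns directly from the formula above:

	# brute-force Gini mean difference of portfolio returns Y
	portfolio.gini.brute <- function(Y) {
		nt = length(Y)
		# sum |Y_j - Y_k| over all pairs with k > j, scaled by 1/nt^2
		sum(abs(outer(Y, Y, '-')[upper.tri(matrix(0, nt, nt))])) / nt^2
	}
	portfolio.gini.brute(rnorm(60, 0.01, 0.05))	# e.g. 60 hypothetical monthly returns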

Minimizing the portfolio’s Gini mean difference can be formulated as a linear programming problem. Writing Y_{j} - Y_{k} = a_{jk}^{+} - a_{jk}^{-} with both variables non-negative, so that \left | Y_{j} - Y_{k} \right | = a_{jk}^{+} + a_{jk}^{-} at the optimum:

min \frac{1}{T^2}\sum_{j=1}^{T}\sum_{k>j}^{T}\left ( a_{jk}^{+} + a_{jk}^{-} \right )  \newline  \sum_{i=1}^{N}x_{i}\left ( r_{ij} - r_{ik} \right ) - a_{jk}^{+} + a_{jk}^{-} = 0 \quad \left ( j=1,\dots,T; \; k>j \right )  \newline  a_{jk}^{+}, a_{jk}^{-}\geqslant 0

This linear programming problem can be easily implemented:

min.gini.portfolio <- function
(
	ia,		# input assumptions
	constraints	# constraints
)
{
	n = ia$n
	nt = nrow(ia$hist.returns)
		
	# objective : Gini mean difference - the mean of the difference between every possible pair of returns
	#  1/(T^2) * [ SUM <over j = 1,...,T , k>j> a.long.jk + a.short.jk ]
	f.obj = c(rep(0, n), (1/(nt^2)) * rep(1, nt*(nt-1)))

	# adjust constraints, add a.long.jk , a.short.jk
	constraints = add.variables(nt*(nt-1), constraints, lb=0)

	# [ SUM <over i> x.i * (r.ij - r.ik) ] - a.long.jk + a.short.jk = 0
	# for each j = 1,...,T , k>j	
	a = matrix(0, n + nt*(nt-1), nt*(nt-1)/2)
		diag(a[(n+1) : (n + nt*(nt-1)/2), ]) = -1
		diag(a[(n+1+nt*(nt-1)/2) : (n + nt*(nt-1)), ]) = 1
	hist.returns = as.matrix(ia$hist.returns)
	
	i.start = 0
	for(t in 1:(nt-1)) {
		index = (i.start+1) : (i.start + nt -t)
		for(i in 1:n) {
			a[i, index] = ( hist.returns[t,i] - hist.returns[,i] ) [ (t+1) : nt ] 
		}
		i.start = i.start + nt -t		
	}
	constraints = add.constraints(a, 0, '=', constraints)
				
	# setup linear programming	
	f.con = constraints$A
	f.dir = c(rep('=', constraints$meq), rep('>=', len(constraints$b) - constraints$meq))
	f.rhs = constraints$b

	# find optimal solution
	x = NA
	sol = try(solve.LP.bounds('min', f.obj, t(f.con), f.dir, f.rhs, 
					lb = constraints$lb, ub = constraints$ub), TRUE)

	if(!inherits(sol, 'try-error')) {
		x = sol$solution[1:n]
	}

	return( x )
}

Let’s examine efficient frontiers computed under Gini and Standard deviation risk measures using sample historical input assumptions.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#--------------------------------------------------------------------------
	# Create Efficient Frontier
	#--------------------------------------------------------------------------
	ia = aa.test.create.ia.rebal()
	n = ia$n		

	# 0 <= x.i <= 1 
	constraints = new.constraints(n, lb = 0, ub = 1)
		
	# SUM x.i = 1
	constraints = add.constraints(rep(1, n), 1, type = '=', constraints)		

	# create efficient frontier(s)
	ef.risk = portopt(ia, constraints, 50, 'Risk')
	ef.gini = portopt(ia, constraints, 50, 'GINI', min.gini.portfolio)
		
	#--------------------------------------------------------------------------
	# Create Plots
	#--------------------------------------------------------------------------
	layout( matrix(1:4, nrow = 2) )
	plot.ef(ia, list(ef.risk, ef.gini), portfolio.risk, F)	
	plot.ef(ia, list(ef.risk, ef.gini), portfolio.gini.coefficient, F)	

	plot.transition.map(ef.risk)
	plot.transition.map(ef.gini)
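
As a quick cross-check (my addition), the minimum-Gini portfolio returned by min.gini.portfolio() can be verified against the brute-force helper sketched earlier:

	# verify the LP solution with the brute-force Gini helper from above
	x = min.gini.portfolio(ia, constraints)
	Y = as.vector(as.matrix(ia$hist.returns) %*% x)
	portfolio.gini.brute(Y)	# should match the LP objective at the minimum-Gini point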

The Gini efficient frontier is almost identical to the Standard deviation efficient frontier, labeled ‘Risk’. This is not a surprise because the asset returns used in the sample input assumptions are well behaved. The Gini measure of risk would be most appropriate if asset returns contained large outliers.

To view the complete source code for this example, please have a look at the aa.gini.test() function in aa.test.r at github.

Next I added Gini risk measure to the mix of Asset Allocation strategies that I examined in the Backtesting Asset Allocation portfolios post.

The Gini portfolios and Minimum Variance portfolios show very similar performance.

To view the complete source code for this example, please have a look at the bt.aa.test() function in bt.test.r at github.

Backtesting Asset Allocation portfolios

In the last post, Portfolio Optimization: Specify constraints with GNU MathProg language, Paolo and MC raised a question: “How would you construct an equal risk contribution portfolio?” Unfortunately, this problem cannot be expressed as a Linear or Quadratic Programming problem.

The outline for this post:

  • I will show how the Equal Risk Contribution portfolio can be formulated and solved using a non-linear solver.
  • I will backtest the Equal Risk Contribution portfolio and other Asset Allocation portfolios based on the various risk measures I described in the Asset Allocation series of posts.

Pat Burns wrote an excellent post, Unproxying weight constraints, that explains Risk Contribution: partitioning the variance of a portfolio into pieces attributed to each asset. The Equal Risk Contribution portfolio is a portfolio that splits total portfolio risk equally among its assets. (The concept is similar to the 1/N portfolio – a portfolio that splits total portfolio weight equally among its assets.)

Risk Contributions (risk fractions) can be expressed in terms of portfolio weights (w) and the covariance matrix (V); the risk fraction of asset i is:
f_{i}=\frac{w_{i}\left ( Vw \right )_{i}}{w'Vw}

Our objective is to find portfolio weights (w) such that Risk Contributions are equal for all assets. This objective function can be easily coded in R:

	risk.contribution = w * (cov %*% w)
	sum( abs(risk.contribution - mean(risk.contribution)) )
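
Before turning to the specialized solver, here is a rough self-contained sketch (my own illustration using base R’s optim() and a hypothetical diagonal covariance matrix, not the Rdonlp2-based find.erc.portfolio() used below) of minimizing this objective:

	# minimal ERC sketch with base R optim()
	erc.objective <- function(w, cov) {
		w = abs(w) / sum(abs(w))	# map to long-only, fully invested weights
		rc = w * (cov %*% w)		# risk contributions
		sum(abs(rc - mean(rc)))
	}
	cov = diag(c(0.04, 0.09, 0.16, 0.25))	# hypothetical covariance matrix, 4 assets
	sol = optim(rep(1/4, 4), erc.objective, cov = cov, method = 'Nelder-Mead')
	w = abs(sol$par) / sum(abs(sol$par))
	round( w * (cov %*% w) / drop(t(w) %*% cov %*% w), 4 )	# risk fractions, roughly 0.25 each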

I recommend the following references for a detailed discussion of Risk Contributions:

I will use a Nonlinear programming solver, Rdonlp2, which is based on the donlp2 routine developed and copyrighted by Prof. Dr. Peter Spellucci, to solve for the Equal Risk Contribution portfolio. [Please note that the following code might not execute properly on your computer because the required Rdonlp2 package is not available on CRAN.]

#--------------------------------------------------------------------------
# Equal Risk Contribution portfolio
#--------------------------------------------------------------------------
ia = aa.test.create.ia()
n = ia$n		

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)		

# find Equal Risk Contribution portfolio 
w = find.erc.portfolio(ia, constraints)	

# compute Risk Contributions 	
risk.contributions = portfolio.risk.contribution(w, ia)

Next, I want to expand on the Backtesting Minimum Variance portfolios post to include the Equal Risk Contribution portfolio and other Asset Allocation portfolios based on the various risk measures I described in the Asset Allocation series of posts.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)

	#*****************************************************************
	# Load historical data
	#****************************************************************** 
	load.packages('quantmod,quadprog,corpcor,lpSolve')
	tickers = spl('SPY,QQQ,EEM,IWM,EFA,TLT,IYR,GLD')

	data <- new.env()
	getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
		for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)							
	bt.prep(data, align='remove.na', dates='1990::2011')
 
	#*****************************************************************
	# Code Strategies
	#****************************************************************** 
	prices = data$prices   
	n = ncol(prices)
	
	# find week ends
	period.ends = endpoints(prices, 'weeks')
		period.ends = period.ends[period.ends > 0]

	#*****************************************************************
	# Create Constraints
	#*****************************************************************
	constraints = new.constraints(n, lb = 0, ub = 1)
	
	# SUM x.i = 1
	constraints = add.constraints(rep(1, n), 1, type = '=', constraints)		

	#*****************************************************************
	# Create Portfolios
	#*****************************************************************			
	ret = prices / mlag(prices) - 1
	start.i = which(period.ends >= (63 + 1))[1]
	
	weight = NA * prices[period.ends,]
	weights = list()
		# Equal Weight 1/N Benchmark
		weights$equal.weight = weight
			weights$equal.weight[] = ntop(prices[period.ends,], n)	
			weights$equal.weight[1:start.i,] = NA
			
		weights$min.var = weight
		weights$min.maxloss = weight
		weights$min.mad = weight
		weights$min.cvar = weight
		weights$min.cdar = weight
		weights$min.cor.insteadof.cov = weight
		weights$min.mad.downside = weight
		weights$min.risk.downside = weight
		
		# following optimizations use a non-linear solver
		weights$erc = weight		
		weights$min.avgcor = weight		
		
	risk.contributions = list()	
		risk.contributions$erc = weight		
		
	# construct portfolios
	for( j in start.i:len(period.ends) ) {
		i = period.ends[j]
		
		# one quarter = 63 days
		hist = ret[ (i- 63 +1):i, ]
		
		# create historical input assumptions
		ia = create.historical.ia(hist, 252)
			s0 = apply(coredata(hist),2,sd)		
			ia$correlation = cor(coredata(hist), use='complete.obs',method='pearson')
			ia$cov = ia$correlation * (s0 %*% t(s0))
			
		# construct portfolios based on various risk measures
		weights$min.var[j,] = min.risk.portfolio(ia, constraints)
		weights$min.maxloss[j,] = min.maxloss.portfolio(ia, constraints)
		weights$min.mad[j,] = min.mad.portfolio(ia, constraints)
		weights$min.cvar[j,] = min.cvar.portfolio(ia, constraints)
		weights$min.cdar[j,] = min.cdar.portfolio(ia, constraints)
		weights$min.cor.insteadof.cov[j,] = min.cor.insteadof.cov.portfolio(ia, constraints)
		weights$min.mad.downside[j,] = min.mad.downside.portfolio(ia, constraints)
		weights$min.risk.downside[j,] = min.risk.downside.portfolio(ia, constraints)

		# following optimizations use a non-linear solver		
		constraints$x0 = weights$erc[(j-1),]
		weights$erc[j,] = find.erc.portfolio(ia, constraints)		
		
		constraints$x0 = weights$min.avgcor[(j-1),]
		weights$min.avgcor[j,] = min.avgcor.portfolio(ia, constraints)						
		
		risk.contributions$erc[j,] = portfolio.risk.contribution(weights$erc[j,], ia)
	}

Next let’s backtest these portfolios and create summary statistics:

	#*****************************************************************
	# Create strategies
	#****************************************************************** 		
	models = list()
	for(i in names(weights)) {
		data$weight[] = NA
			data$weight[period.ends,] = weights[[i]]	
		models[[i]] = bt.run.share(data, clean.signal = F)
	}
		
	#*****************************************************************
	# Create Report
	#****************************************************************** 
	models = rev(models)

	# Plot performance
	plotbt(models, plotX = T, log = 'y', LeftMargin = 3)	    	
		mtext('Cumulative Performance', side = 2, line = 1)

	# Plot Strategy Statistics  Side by Side
	plotbt.strategy.sidebyside(models)
	
	# Plot transition maps
	layout(1:len(models))
	for(m in names(models)) {
		plotbt.transition.map(models[[m]]$weight, name=m)
			legend('topright', legend = m, bty = 'n')
	}

	# Plot risk contributions
	layout(1:len(risk.contributions))
	for(m in names(risk.contributions)) {
		plotbt.transition.map(risk.contributions[[m]], name=paste('Risk Contributions',m))
			legend('topright', legend = m, bty = 'n')
	}

	# Compute portfolio concentration and turnover stats based on the
	# On the property of equally-weighted risk contributions portfolios by S. Maillard, 
	# T. Roncalli and J. Teiletche (2008), page 22
	# http://www.thierry-roncalli.com/download/erc.pdf
	out = compute.stats( rev(weights),
		list(Gini=function(w) mean(portfolio.concentration.gini.coefficient(w), na.rm=T),
			Herfindahl=function(w) mean(portfolio.concentration.herfindahl.index(w), na.rm=T),
			Turnover=function(w) 52 * mean(portfolio.turnover(w), na.rm=T)
			)
		)
	
	out[] = plota.format(100 * out, 1, '', '%')
	plot.table(t(out))

The minimum variance portfolio (min.var) performed very well during that period with 10.5% CAGR and 14% maximum drawdown. The Equal Risk Contribution portfolio (erc) also fares well with 10.5% CAGR and 19% maximum drawdown. The 1/N portfolio (equal.weight) is the worst strategy with 7.8% CAGR and 45% maximum drawdown.

One interesting way to modify this strategy is to consider different measures of volatility for constructing the covariance matrix. For example, the TTR package provides functions for the Garman Klass – Yang Zhang and the Yang Zhang volatility estimation methods, as sketched below. For more details, please have a look at the Different Volatility Measures Effect on Daily MR post by Quantum Financier.
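
For instance, a sketch of the Yang Zhang estimate with the TTR volatility() function (my addition; wiring the estimate into the covariance matrix above is left as an exercise):

	# Yang Zhang volatility estimated from OHLC data, assuming the TTR package is installed
	load.packages('TTR')
	vol.yz = volatility(data$SPY, n = 63, calc = 'yang.zhang', N = 252)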

Inspired by I Dream of Gini by David Varadi, I will show how to create the Gini efficient frontier in the next post.

To view the complete source code for this example, please have a look at the bt.aa.test() function in bt.test.r at github.

Portfolio Optimization – Why do we need a Risk Model


In the last post, Multiple Factor Model – Building Risk Model, I showed how to build a multiple factor risk model. In this post I want to explain why we need a risk model and how it is used during the portfolio construction process.

The covariance matrix is used during mean-variance portfolio optimization to estimate portfolio risk. The sample covariance matrix can be estimated using historical asset returns, but its use becomes problematic when the number of assets (N) is of the same order of magnitude as, or larger than, the number of observations (T). In typical applications, there can be over a thousand stocks to choose from, but rarely more than ten years of monthly data (i.e. N = 1,000 and T = 120). I recommend the following references for a detailed discussion:

One possible solution to estimating a covariance matrix is to model risk with a low-dimensional multiple factor model. Another advantage of using a multiple factor model to describe covariances between assets is that the portfolio optimization algorithm works faster because the covariance matrix is quite sparse and well structured. For a detailed discussion please read: Portfolio Optimization with Factors, Scenarios, and Realistic Short Positions by B. Jacobs, K. Levy, and H. Markowitz (2005) [In problems with a large number of securities, computation time may differ by orders of magnitude between using a dense covariance matrix (the sample covariance matrix) and using the diagonal or nearly diagonal covariance matrices permitted by a multiple factor model of covariance.]
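
To see why the factor structure helps, here is a toy sketch (my own illustration with hypothetical dimensions) of the covariance matrix implied by a K-factor model, V = B F B' + D, where B holds factor exposures, F is the factor covariance matrix and D is the diagonal matrix of specific variances:

	# toy factor-model covariance: N assets, K factors
	N = 1000; K = 10
	B = matrix(rnorm(N * K), N, K)			# factor exposures
	factor.cov = diag(runif(K, 0.01, 0.05))		# factor covariance (diagonal for simplicity)
	spec.var = diag(runif(N, 0.01, 0.10))		# specific variances
	V = B %*% factor.cov %*% t(B) + spec.var	# implied N x N covariance matrix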

The outline of this post:

  • Construct minimum variance portfolio using the sample covariance matrix
  • Construct minimum variance portfolio using the multiple factor risk model we created in the prior post

Let’s start by loading the multiple factor risk model we saved at the end of the prior post. [If you are missing the risk.model.Rdata file, please execute the fm.risk.model.test() function first to create and save the multiple factor risk model.] Next, I will construct a minimum variance portfolio using the historical covariance matrix.

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# https://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
    source(con)
close(con)
	#*****************************************************************
	# Load data
	#****************************************************************** 
	load.packages('quantmod')	

	# Load factor data that we saved at the end of the fm.all.factor.test function
	load(file='data.factors.Rdata')
		nperiods = nrow(next.month.ret)
		n = ncol(next.month.ret)
		tickers = colnames(next.month.ret)
		
	# Load multiple factor risk model data that we saved at the end of the fm.risk.model.test function	
	load(file='risk.model.Rdata')
		
		
	#*****************************************************************
	# Construct minimum variance portfolio using the sample covariance matrix
	#****************************************************************** 
	load.packages('quadprog,corpcor')
	
	#--------------------------------------------------------------------------
	# Create Covariance matrix based on the last 24 months of returns
	#--------------------------------------------------------------------------
	temp = last(mlag(next.month.ret),24)
	cov.temp = cov(temp, use='complete.obs', method='pearson')	
	hist.cov = cov.temp
	
	#--------------------------------------------------------------------------
	# Adjust Covariance matrix
	#--------------------------------------------------------------------------
	if(!is.positive.definite(cov.temp)) {
		cov.temp <- make.positive.definite(cov.temp, 0.000000001)
	}	
		
	#--------------------------------------------------------------------------
	# Create constraints
	#--------------------------------------------------------------------------
	# set min/max wgts for individual stocks: 0 <= x <= 1
	constraints = new.constraints(n, lb = 0, ub = 1)
	
	# wgts must sum to 1 (fully invested)
	constraints = add.constraints(rep(1,n), 1, type = '=', constraints)
			
	#--------------------------------------------------------------------------
	# Solve QP problem
	#--------------------------------------------------------------------------		
	sol = solve.QP.bounds(Dmat = cov.temp, dvec = rep(0, nrow(cov.temp)) , 
		Amat=constraints$A, bvec=constraints$b, constraints$meq,
		lb = constraints$lb, ub = constraints$ub)

	#--------------------------------------------------------------------------
	# Plot Portfolio weights
	#--------------------------------------------------------------------------		
	x = round(sol$solution,4)
		x = iif(x < 0, 0, x)
		names(x) = colnames(next.month.ret)
	hist.min.var.portfolio = x

	# plot
	barplot(100*x,las = 2, 
		main = 'Minimum variance portfolio weights (sample covariance matrix)')		

To minimize portfolio risk computed under the risk model framework, we have to combine the specific risk and the factor covariance matrix into one covariance matrix. This process is described on pages 4-5 of Portfolio Optimization with Factors, Scenarios, and Realistic Short Positions by B. Jacobs, K. Levy, and H. Markowitz (2005).

Let K be the number of factors in the risk model. We introduce K additional variables that represent the portfolio’s exposure to each factor, and add K constraints:

X1 ... Xn * factor.exposures = X(n+1) ... X(n+K)

The new covariance matrix is block diagonal:

[specific.risk  0                 ]
[0              factor.covariance ]

Next, I will construct a minimum variance portfolio using the multiple factor risk model we created in the prior post.

	#*****************************************************************
	# Construct minimum variance portfolio using the multiple factor risk model
	#****************************************************************** 
	t = nperiods	
	factor.exposures = all.data[t,,-1]	
		nfactors = ncol(factor.exposures)
		
	#--------------------------------------------------------------------------
	# Create constraints
	#--------------------------------------------------------------------------
	# set min/max wgts for individual stocks: 0 <= x <= 1
	constraints = new.constraints(n, lb = 0, ub = 1)
	
	# wgts must sum to 1 (fully invested)
	constraints = add.constraints(rep(1,n), 1, type = '=', constraints)
		
	# adjust prior constraints, add v.i
	constraints = add.variables(nfactors, constraints)
	
	# BX - Xnew = 0
	constraints = add.constraints(rbind(factor.exposures, -diag(nfactors)), rep(0, nfactors), type = '=', constraints)

	#--------------------------------------------------------------------------
	# Create Covariance matrix
	# [Qu  0]
	# [ 0 Qf]
	#--------------------------------------------------------------------------
	temp = diag(n)
		diag(temp) = specific.variance[t,]^2
	cov.temp = diag(n + nfactors)
		cov.temp[1:n,1:n] = temp
	cov.temp[(n+1):(n+nfactors),(n+1):(n+nfactors)] = factor.covariance[t,,]
	
	#--------------------------------------------------------------------------
	# Solve QP problem
	#--------------------------------------------------------------------------
	load.packages('quadprog')
	
	sol = solve.QP.bounds(Dmat = cov.temp, dvec = rep(0, nrow(cov.temp)) , 
		Amat=constraints$A, bvec=constraints$b, constraints$meq,
		lb = constraints$lb, ub = constraints$ub)
	
	#--------------------------------------------------------------------------
	# Plot Portfolio weights
	#--------------------------------------------------------------------------		
	x = round(sol$solution,4)[1:n]
		x = iif(x < 0, 0, x)
		names(x) = colnames(next.month.ret)
	risk.model.min.var.portfolio = x

	# plot
	barplot(100*x,las = 2, 
		main = 'Minimum variance portfolio weights (multiple factor risk model)')

The minimum variance portfolio computed under the risk model is more diversified. Also, for a larger stock universe (i.e. 1,000-2,000 stocks), solving the quadratic problem will take less time with a factor risk model because it results in a sparse covariance matrix.

To view the complete source code for this example, please have a look at the fm.risk.model.test() function in factor.model.test.r at github.