Cross Correlation Analysis: Analysing Currency Pairs in Python

When working with a time series, one important thing we wish to determine is whether one series “causes” changes in another. In other words, is there a strong correlation between one time series and another at a given number of lags? One way to detect this is by measuring cross correlation.
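As a quick illustration of the idea (not the full walkthrough from the post), the sketch below computes cross correlations between two synthetic series with statsmodels; the data and the built-in lag of three periods are placeholders.

```python
import numpy as np
from statsmodels.tsa.stattools import ccf

# Synthetic placeholder data: series_b is series_a shifted by three periods
# plus noise, standing in for a pair of related currency series.
rng = np.random.default_rng(0)
series_a = rng.normal(size=500)
series_b = np.roll(series_a, 3) + rng.normal(scale=0.5, size=500)

# ccf(x, y)[k] estimates the correlation between x at time t+k and y at time t,
# so with this construction the values should peak around lag 3.
cross_corr = ccf(series_b, series_a)
print(cross_corr[:10])
```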

Continue reading “Cross Correlation Analysis: Analysing Currency Pairs in Python”

Kalman Filter: Modelling Time Series Shocks with KFAS in R

We have already seen how time series models such as ARIMA can be used to make time series forecasts. While these models can achieve a high degree of accuracy, they have one major shortcoming: they do not account for “shocks”, or sudden changes in a time series. Let’s see how we can potentially alleviate this problem using a model known as the Kalman Filter.
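The post itself works with the KFAS package in R; purely as a sketch of the underlying idea, the Python snippet below fits a local level state-space model with statsmodels, whose level estimates come from the Kalman filter and smoother, on made-up data containing a sudden level shift.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder series with an abrupt level shift ("shock") at t = 100.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(10, 1, 100), rng.normal(15, 1, 100)])

# Local level model: the Kalman filter/smoother tracks a time-varying mean,
# so the estimated level adapts to the shift rather than averaging it away.
model = sm.tsa.UnobservedComponents(y, level="local level")
result = model.fit(disp=False)
estimated_level = result.smoothed_state[0]
print(estimated_level[95:105])  # the estimate steps up around the break
```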

Continue reading “Kalman Filter: Modelling Time Series Shocks with KFAS in R”

ARIMA Models: Stock Price Forecasting with Python and R

ARIMA (Autoregressive Integrated Moving Average) is a major tool used in time series analysis to forecast the future values of a variable from its past values. For this particular example, I use a Johnson & Johnson (JNJ) stock price dataset from 2006-2016 and apply the aforementioned model to forecast prices for this time series.
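As a rough sketch of the workflow (with placeholder data rather than the JNJ series, and an arbitrary ARIMA(1, 1, 1) order chosen purely for illustration), the model can be fit and used for forecasting with statsmodels along these lines:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder random-walk-with-drift series standing in for a price history.
rng = np.random.default_rng(2)
prices = pd.Series(100 + np.cumsum(rng.normal(0.05, 1.0, 500)))

# ARIMA(1, 1, 1): one AR term, first differencing, one MA term. In practice
# the order would be chosen from ACF/PACF plots or information criteria.
model = sm.tsa.ARIMA(prices, order=(1, 1, 1))
result = model.fit()
print(result.summary())
print(result.forecast(steps=10))  # forecast the next ten observations
```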

Continue reading “ARIMA Models: Stock Price Forecasting with Python and R”

Cross-Correlation of Currency Pairs In R (ccf)

When working with a time series, one important thing we wish to determine is whether one series “causes” changes in another. In other words, is there a strong correlation between one time series and another at a given number of lags? One way to detect this is by measuring cross-correlation.
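The post works through R's ccf function, which reports correlations at both negative and positive lags. As a hedged Python approximation of the same idea on synthetic data (the pair names below are placeholders, not real quotes), a two-sided cross-correlation can be computed by hand:

```python
import numpy as np

# Synthetic placeholder series: "gbpusd" tracks "eurusd" with a two-period lag.
rng = np.random.default_rng(6)
eurusd = rng.normal(size=400)
gbpusd = np.roll(eurusd, 2) + rng.normal(scale=0.5, size=400)

def cross_correlation(x, y, max_lag=10):
    """Correlation between x[t] and y[t - lag] for lags from -max_lag to +max_lag."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            corrs[lag] = np.corrcoef(x[lag:], y[:n - lag])[0, 1]
        else:
            corrs[lag] = np.corrcoef(x[:n + lag], y[-lag:])[0, 1]
    return corrs

corrs = cross_correlation(gbpusd, eurusd)
print(max(corrs, key=corrs.get))  # lag with the strongest correlation (2 here)
```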

Continue reading “Cross-Correlation of Currency Pairs In R (ccf)”

Chow Test For Structural Breaks in Time Series

A Chow test is designed to determine whether a structural break exists in a time series, that is, a sharp change in trend that merits further study. For instance, a structural break in one series can give useful clues as to whether such a change is being propagated across other variables, assuming there is a significant correlation between them under normal circumstances.
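To make the mechanics concrete, here is a hedged sketch of the Chow F-statistic computed directly from three OLS fits (pooled, pre-break and post-break) on made-up data with a known break point; the post's own data and break date are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def chow_test(y, x, break_index):
    """Chow F-test for a structural break at a known break point.

    Compares the residual sum of squares of one pooled regression against
    two regressions fit separately before and after the break.
    """
    X = sm.add_constant(x)
    k = X.shape[1]                      # number of estimated parameters
    rss_pooled = sm.OLS(y, X).fit().ssr
    rss_1 = sm.OLS(y[:break_index], X[:break_index]).fit().ssr
    rss_2 = sm.OLS(y[break_index:], X[break_index:]).fit().ssr
    n = len(y)
    f_stat = ((rss_pooled - (rss_1 + rss_2)) / k) / ((rss_1 + rss_2) / (n - 2 * k))
    p_value = 1 - stats.f.cdf(f_stat, k, n - 2 * k)
    return f_stat, p_value

# Placeholder data whose slope changes halfway through the sample.
rng = np.random.default_rng(3)
x = np.arange(200, dtype=float)
y = np.where(x < 100, 2 * x, 2 * x + 0.5 * (x - 100)) + rng.normal(0, 5, 200)
print(chow_test(y, x, break_index=100))
```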

Continue reading “Chow Test For Structural Breaks in Time Series”

Serial Correlation: Durbin-Watson and Cochrane-Orcutt Remedy

Serial correlation (also known as autocorrelation) is a violation of the Ordinary Least Squares assumption that the observations of the error term are uncorrelated with one another. In a model with first-order serial correlation, the current value of the error term is a function of the one immediately preceding it:

  e_t = ρe_{t-1} + u_t

  where e_t is the error term of the equation in question, ρ is the first-order autocorrelation coefficient, and u_t is a classical (not serially correlated) error term.
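As a rough illustration (with made-up data and an assumed ρ of 0.8), the snippet below fits OLS on a regression whose errors follow that AR(1) process, checks the Durbin-Watson statistic, and then re-estimates with statsmodels' GLSAR, which applies a Cochrane-Orcutt-style iterative correction.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Placeholder regression with AR(1) errors (rho = 0.8) to mimic serial correlation.
rng = np.random.default_rng(4)
x = rng.normal(size=200)
errors = np.zeros(200)
for t in range(1, 200):
    errors[t] = 0.8 * errors[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + errors

X = sm.add_constant(x)
ols_result = sm.OLS(y, X).fit()
print("Durbin-Watson:", durbin_watson(ols_result.resid))  # well below 2 here

# GLSAR alternates between estimating rho from the residuals and refitting the
# transformed model, in the spirit of the Cochrane-Orcutt remedy.
glsar_result = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)
print("Estimated rho:", glsar_result.model.rho)
print(glsar_result.params)
```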

Continue reading “Serial Correlation: Durbin-Watson and Cochrane-Orcutt Remedy”

Stationarity and Cointegration in R (adf, egcm, pp, kpss)

When we refer to a time series as stationary, we mean that its mean, variance and autocorrelation are all constant over time. Cointegration, on the other hand, arises when two time series are individually non-stationary but some linear combination of them is stationary. So, why is the concept of stationarity important? Well, a large part of the purpose of time series modelling is to predict future values from current data, and if a series’ statistical properties are shifting over time, such predictions cannot be made reliably.
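The post carries out these tests in R; as a minimal Python illustration of the same two ideas on synthetic data, the sketch below runs an augmented Dickey-Fuller test on a random walk and an Engle-Granger cointegration test on a pair built to be cointegrated.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

# Placeholder series: a random walk (non-stationary) and a second series that
# tracks it plus stationary noise, so the pair should appear cointegrated.
rng = np.random.default_rng(5)
walk = np.cumsum(rng.normal(size=500))
partner = 0.5 * walk + rng.normal(scale=0.3, size=500)

# Augmented Dickey-Fuller: the null hypothesis is that the series has a unit
# root (i.e. is non-stationary), so a large p-value means we cannot reject it.
adf_stat, adf_pvalue, *_ = adfuller(walk)
print("ADF p-value for the random walk:", adf_pvalue)

# Engle-Granger cointegration test: the null hypothesis is no cointegration.
coint_stat, coint_pvalue, _ = coint(walk, partner)
print("Cointegration test p-value:", coint_pvalue)
```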

Continue reading “Stationarity and Cointegration in R (adf, egcm, pp, kpss)”