Title: Sample Questions
Author: 婧 李
Course: Time Series Analysis
Institution: University College Cork



Description

Along with all the examples covered in class, the following are some extra questions which can be used when revising the course. (N.B. The questions listed below should be considered as parts of questions rather than whole questions.)

What is an ARIMA model? Why might ARIMA models be considered particularly useful for financial time series?

An autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. ARIMA models are applied in cases where the data show evidence of non-stationarity; an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity. ARMA models are of particular use for financial series due to their flexibility: they are fairly simple to estimate, can often produce reasonable forecasts and, most importantly, they require no knowledge of any structural variables that might be required for more "traditional" econometric analysis.

Describe the steps that Box and Jenkins (1970) suggested should be involved in constructing an ARMA model.

1. Identification: determining the order of the model, using graphical procedures such as the ACF and PACF.
2. Estimation: estimating the parameters, which can be done using least squares or maximum likelihood depending on the model.
3. Model diagnostic checking: determining whether the model specified and estimated is adequate. Box and Jenkins suggest two methods: deliberate overfitting and residual diagnostics.

4. Criteria for model selection: the identification stage rarely comes up with only one possible model; usually there are a number of models which could fit the data.
5. Forecasting: forecasting = prediction, and an important test of the adequacy of a model.

'Given that the objective of any econometric modelling exercise is to find the model that most closely "fits" the data, then adding more lags to an ARMA model will almost invariably lead to a better fit. Therefore, a large model is best because it will fit the data more closely.' Comment on the validity (or otherwise) of this statement.

Endlessly adding more lags to an ARMA model risks over-fitting, so the model cannot predict the future well. Such a model serves neither a predictive purpose nor an explanatory one, since the data have been overfit.

What is meant by the term stationary, as applied to a time series model? Explain how the notation I(0) and I(1) is related to the concept of stationarity. Give one example of a stationary model and one of a non-stationary model.

A time series variable is said to be strictly stationary if the properties of its elements do not depend on time (i.e. the properties are unaffected by a change of time origin, or the joint and conditional probability distributions of the process are unchanged if displaced in time). If a non-stationary series y_t must be differenced d times before it becomes stationary, then it is said to be integrated of order d. We write y_t ~ I(d). So if y_t ~ I(d), then Δ^d y_t ~ I(0). An I(0) series is a stationary series; an I(1) series contains one unit root,

e.g. the random walk y_t = y_{t-1} + u_t. (A stationary, I(0), example is an AR(1) process y_t = φy_{t-1} + u_t with |φ| < 1.)
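As a quick numerical sketch (not from the notes; the series is simulated), first-differencing a random walk recovers its white-noise increments, which is why one difference takes an I(1) series to I(0):

```python
import random

random.seed(0)

# Simulate a random walk y_t = y_{t-1} + u_t with u_t ~ N(0, 1).
# The levels y are I(1); the first differences are I(0).
u = [random.gauss(0, 1) for _ in range(500)]
y = [0.0]
for shock in u:
    y.append(y[-1] + shock)

# First-differencing recovers the white-noise shocks (up to rounding).
dy = [y[t] - y[t - 1] for t in range(1, len(y))]
assert all(abs(d - s) < 1e-9 for d, s in zip(dy, u))
```

The same trick applied d times is exactly the "integrated" step of an ARIMA(p, d, q) model.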

What are unit root tests and why are they important?

What different forms can the Dickey-Fuller test for unit root processes take?

Dickey-Fuller tests are also known as τ tests: τ, τ_μ, τ_τ. The null (H0) and alternative (H1) models in each case are:

i) H0: y_t = y_{t-1} + u_t
   H1: y_t = φy_{t-1} + u_t, φ < 1
   (no constant or trend);

ii) H0: y_t = y_{t-1} + u_t
    H1: y_t = φy_{t-1} + μ + u_t, φ < 1
    (with constant/drift);

iii) H0: y_t = y_{t-1} + u_t
     H1: y_t = φy_{t-1} + μ + λt + u_t, φ < 1
     (with constant and trend).

Leverage effects are the tendency for volatility to rise more following a large price fall than following a price rise of the same magnitude.
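A minimal sketch of case (i), on simulated data with hand-rolled OLS (the τ statistic must be compared with Dickey-Fuller, not standard normal, critical values; all numbers here are made up):

```python
import random

random.seed(1)

# Simulate a stationary AR(1): y_t = 0.5*y_{t-1} + u_t, so the
# unit-root null should be firmly rejected.
y = [0.0]
for _ in range(500):
    y.append(0.5 * y[-1] + random.gauss(0, 1))

# DF regression, case (i), no constant or trend: dy_t = psi*y_{t-1} + u_t.
# H0: psi = 0 (unit root) against H1: psi < 0 (stationary).
lagged = y[:-1]
dy = [y[t] - y[t - 1] for t in range(1, len(y))]

psi = sum(l * d for l, d in zip(lagged, dy)) / sum(l * l for l in lagged)
resid = [d - psi * l for l, d in zip(lagged, dy)]
s2 = sum(e * e for e in resid) / (len(dy) - 1)
se = (s2 / sum(l * l for l in lagged)) ** 0.5
tau = psi / se  # strongly negative here: reject the unit-root null
```

For cases (ii) and (iii) the regression would also include a constant, or a constant and a trend, with their own τ_μ and τ_τ critical values.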

A number of stylised features of financial data have been suggested at the start of Chapter 9 and in other places throughout the book:

- Frequency: Stock market prices are measured every time there is a trade or somebody posts a new quote, so often the frequency of the data is very high.
- Non-stationarity: Financial data (asset prices) are covariance non-stationary; but if we assume that we are talking about returns from here on, then we can validly consider them to be stationary.
- Linear independence: They typically have little evidence of linear (autoregressive) dependence, especially at low frequency.
- Non-normality: They are not normally distributed; they are fat-tailed.
- Volatility pooling and asymmetries in volatility: The returns exhibit volatility clustering and leverage effects.

Of these, we can allow for the non-stationarity within the linear (ARIMA) framework, and we can use whatever frequency of data we like to form the models, but we cannot hope to capture the other features using a linear model with Gaussian disturbances.

Why, in recent empirical research, have researchers preferred GARCH(1,1) models to pure ARCH(p)? In general, a GARCH(1,1) model will be sufficient to capture the volatility clustering in the data.
1. GARCH is more parsimonious and avoids overfitting.
2. GARCH is less likely to breach non-negativity constraints.

Describe how one would test for ARCH effects. How would you estimate the model?

1. First, run any postulated linear regression, e.g.

y_t = β1 + β2 x_2t + ... + βk x_kt + u_t,

saving the residuals, û_t.

2. Then square the residuals, and regress them on q own lags to test for ARCH of order q, i.e. run the regression

û_t² = γ0 + γ1 û²_{t-1} + γ2 û²_{t-2} + ... + γq û²_{t-q} + v_t,

where v_t is iid. Obtain R² from this regression.

3. The test statistic is defined as TR² (the number of observations multiplied by the coefficient of multiple correlation) from the last regression, and is distributed as χ²(q).

4. The null and alternative hypotheses are

H0: γ1 = 0 and γ2 = 0 and γ3 = 0 and ... and γq = 0
H1: γ1 ≠ 0 or γ2 ≠ 0 or γ3 ≠ 0 or ... or γq ≠ 0.
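The steps above can be sketched as follows (a toy illustration with q = 1 on simulated ARCH(1) residuals; all parameter values are made up, and with a single lag the TR² statistic reduces to T times the squared correlation of û_t² with û²_{t-1}):

```python
import random

random.seed(2)

# Simulate ARCH(1) "residuals" (made-up parameters): u_t = sigma_t * z_t,
# sigma_t^2 = 0.2 + 0.5*u_{t-1}^2, so squared residuals are autocorrelated.
u = [0.0]
for _ in range(2000):
    sigma2 = 0.2 + 0.5 * u[-1] ** 2
    u.append(sigma2 ** 0.5 * random.gauss(0, 1))
u = u[1:]

# Step 2 with q = 1: regress u_t^2 on a constant and u_{t-1}^2.
y = [v * v for v in u[1:]]   # u_t^2
x = [v * v for v in u[:-1]]  # u_{t-1}^2
T = len(y)
mx, my = sum(x) / T, sum(y) / T
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r2 = sxy * sxy / (sxx * syy)  # with one regressor, R^2 = r^2

lm = T * r2  # Engle's TR^2 statistic, chi^2(1) under H0: no ARCH
```

For general q the same idea applies, but the regression on q lags needs multivariate OLS rather than a simple correlation.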

If the value of the test statistic is greater than the critical value from the χ² distribution, then reject the null hypothesis. Note that the ARCH test is also sometimes applied directly to returns instead of the residuals from Stage 1 above.

Describe two extensions to the original GARCH model. What additional characteristics of financial data might they be able to capture?

The exponential GARCH (EGARCH) model, which is an asymmetric GARCH model, and the GARCH-M model.

EGARCH:

ln(σ_t²) = ω + β ln(σ²_{t-1}) + γ u_{t-1}/√(σ²_{t-1}) + α [ |u_{t-1}|/√(σ²_{t-1}) − √(2/π) ]

An asymmetric model should allow for the possibility that an unexpected drop in price ("bad news") has a larger impact on future volatility than an unexpected increase in price ("good news") of similar magnitude, i.e. leverage effects.

GARCH-M:

y_t = μ + δσ_{t-1} + u_t,  u_t ~ N(0, σ_t²)
σ_t² = α0 + α1 u²_{t-1} + β σ²_{t-1}

The GARCH-M model lets the return of a security be partly determined by its risk, and allows for feedback from the conditional variance to the conditional mean.

A1.) In the context of these series, explain the concept of cointegration.

Consider two I(1) series Y and X, and suppose that there is a linear relationship between Y and X. This is reflected in the proposition that there exists some value of β such that Y_t − βX_t is I(0), although both Y and X are I(1). In such a case, Y_t and X_t are said to be cointegrated.
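A numerical sketch of this idea (the series and β = 2 are made up): both Y and X wander without bound, yet Y_t − βX_t stays near zero:

```python
import random

random.seed(3)

# X is a random walk (I(1)); Y_t = 2*X_t + e_t with stationary noise e_t.
# Then Y and X are cointegrated with beta = 2: each is I(1), but the
# linear combination Y_t - 2*X_t is I(0).
x = [0.0]
for _ in range(1000):
    x.append(x[-1] + random.gauss(0, 1))
y = [2 * xt + random.gauss(0, 0.5) for xt in x]

spread = [yt - 2 * xt for yt, xt in zip(y, x)]  # equals e_t: stationary
widest = max(abs(s) for s in spread)            # stays bounded near zero
```

Without cointegration there would be no such β, and any linear combination of the two series would itself drift like an I(1) process.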

GARCH models are designed to capture the volatility clustering effects in the returns (GARCH(1,1) can model the dependence in the squared returns, or squared residuals), and they can also capture some of the unconditional leptokurtosis, so that even if the residuals of a linear model of the form given by the first part of the equation in part (e), the û_t's, are leptokurtic, the standardised residuals from the GARCH estimation are likely to be less leptokurtic. Standard GARCH models cannot, however, account for leverage effects.

A2.) Discuss how a researcher might test for cointegration between the variables using the Engle-Granger approach.

Step 1:
- Make sure that all the individual variables are I(1).
- Then estimate the cointegrating regression using OLS.
- Save the residuals of the cointegrating regression, û_t.
- Test these residuals to ensure that they are I(0).

Step 2:
- Use the Step 1 residuals as one variable in the error correction model, e.g.

Δy_t = β1 Δx_t + β2 û_{t-1} + u_t

where û_{t-1} = y_{t-1} − β̂x_{t-1} is the lagged residual from the cointegrating regression.
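Step 1 can be sketched as follows (made-up cointegrated series; the residual unit-root test itself, and Step 2, are omitted):

```python
import random

random.seed(4)

# Made-up cointegrated pair: x is a random walk, y_t = 2*x_t + e_t
# with stationary e_t, so the true cointegrating coefficient is 2.
x = [0.0]
for _ in range(1000):
    x.append(x[-1] + random.gauss(0, 1))
y = [2 * xt + random.gauss(0, 0.5) for xt in x]

# Step 1: cointegrating regression y_t = alpha + beta*x_t + u_t by OLS.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x)
alpha = my - beta * mx
u_hat = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
# Step 1 would then test u_hat for a unit root (an ADF test on the
# residuals); Step 2 uses u_hat[t-1] as the error-correction term.
```

Because the regressors are I(1), the OLS estimate of β here is superconsistent: it converges to the true value much faster than in a standard stationary regression.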

Discuss the limitations of the Engle-Granger test for cointegration.

1. Unit root and cointegration tests have low power in finite samples.
2. We are forced to treat the variables asymmetrically and to specify one as the dependent variable and the other as the independent variable.

3. We cannot perform any hypothesis tests about the actual cointegrating relationship estimated at Stage 1.

Describe an Equilibrium/Error Correction Model and discuss why it might be used.

When the concept of non-stationarity was first considered, a usual response was simply to take the first differences of a set of I(1) variables and use those first differences in any subsequent modelling process. The problem with this approach is that pure first-difference models have no long-run solution. e.g. Consider y_t and x_t, both I(1). The model we may want to estimate is

Δy_t = βΔx_t + u_t

But this collapses to nothing in the long run. The definition of the long run that we use is where y_t = y_{t-1} = y and x_t = x_{t-1} = x, so that Δy_t = 0 and Δx_t = 0 and the model reduces to 0 = 0, saying nothing about the relationship between the levels y and x.

Give a further example from finance where cointegration between a set of variables may be expected. Explain, by reference to the implication of noncointegration, why cointegration between the series might be expected.

Why is forecasting ability one of the most important tests of the adequacy of a model?

Discuss the use of MA models and AR models in forecasting.

How can we tell whether a forecast, and thus a model, is accurate or not? Some of the most popular criteria for assessing the accuracy of time series forecasting techniques are:

Mean squared error:

MSE = (1/N) Σ_{t=1}^{N} (y_{t+s} − f_{t,s})²

Mean absolute error:

MAE = (1/N) Σ_{t=1}^{N} |y_{t+s} − f_{t,s}|

Mean absolute percentage error:

MAPE = (100/N) Σ_{t=1}^{N} |(y_{t+s} − f_{t,s}) / y_{t+s}|

Contrast out-of-sample forecasting with in-sample forecasting and discuss why it is useful.

In-sample forecasts are those generated for the same set of data that was used to estimate the model's parameters. Thus, you would expect the "forecasts" of the model to be good in-sample. A more accurate way of evaluating a model's accuracy is not to use all of the observations in estimating the model's parameters, but rather to hold some observations back. These would then be used to construct out-of-sample forecasts.

Briefly discuss the limits of forecasting.

• Forecasting models are prone to break down around turning points.
• Series subject to structural changes or regime shifts cannot be forecast.
• Predictive accuracy usually declines with forecasting horizon.
• Forecasting is not a substitute for judgement.

Explain why a test for purchasing power parity can be based on an analysis of the real exchange rate.
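A minimal sketch of the holdout idea (made-up series and a deliberately naive mean forecast):

```python
import statistics

# Estimate on the first part of the sample only, then judge the model
# on observations it never saw (the holdout, or out-of-sample, period).
series = [10.2, 10.4, 10.1, 10.6, 10.5, 10.9, 10.7, 11.0, 11.2, 11.1]
train, test = series[:7], series[7:]

# A deliberately simple "model": forecast every future value with the
# training-sample mean.
fit = statistics.mean(train)

in_sample_mae = sum(abs(y - fit) for y in train) / len(train)
out_of_sample_mae = sum(abs(y - fit) for y in test) / len(test)
# For this trending series, the in-sample fit flatters the model:
flattered = in_sample_mae < out_of_sample_mae
```

The gap between the two MAEs is exactly why out-of-sample performance is the more honest test of model adequacy.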

Discuss the limitations of ARCH modeling.

• How do we decide on q?
• The required value of q might be very large.
• Non-negativity constraints might be violated.

What is technical analysis and why is it useful? What is the difference between technical analysis and fundamental analysis?

Technical analysis is a method of predicting price movements and future market trends by studying charts of past market action, which take into account the price of instruments, volume of trading and, where applicable, open interest in the instruments.

Discuss the strengths and weaknesses of technical analysis.

Strengths
1. Technical analysis can be used to follow a wide range of instruments in almost any marketplace.
2. Charts can be used to analyse data for time periods ranging from hours to a century.
3. The basic principles of technical analysis are very easy to understand and have been developed from the way markets operate; technical analysis is concerned with what actually happens in markets.

4. Technical analysis relies on the use of accurate and timely data, which is available in real time or with only a short delay when necessary.

Weaknesses
1. It is a subjective process.
2. There are limits to the extent to which the future can be simply extrapolated from the past.
3. Technical analysis is concerned with the degree of probability that an event will happen, not the certainty of the event.
4. It is vital to the success of technical analysis that the information used is both timely and accurate.

Discuss the construction of different types of charts used in technical analysis and give examples of trends found in the data.

1. The bar chart is the most common method used to represent price action. A bar chart plots instrument data activity for each period as a series of vertical bars. The period may be anything from one minute to one year, depending upon the time horizon of the analysis.
2. A candlestick chart shows the same data as is used for each period of a bar chart in a particular way that highlights the relationship between the opening and closing prices. Each period is represented by a candle, composed of its real body and its 'shadows.'
3. Point and Figure charting is a simple technique for plotting the price a...

