
Licensed to Christian Rey Magtibay at [email protected]. Downloaded August 21, 2021.฀ The information provided in this document is intended solely for you. Please do not freely distribute.

P1.T2. Quantitative Analysis Bionic Turtle FRM Practice Questions Chapter 1: Fundamentals of Probability This is a super-collection of quantitative practice questions. It represents several years of cumulative history mapped to the current reading. Previous readings include Miller, Stock, and Gujarati, which we have retained in this practice question set. By David Harper, CFA FRM CIPM www.bionicturtle.com


Note that this pertains to Chapters 1-6 in Topic 2, Quantitative Analysis. We will include this introduction in each of those practice question sets for reference. Within each chapter, our practice questions are sequenced in reverse chronological order (the questions written most recently appear first). For example, if you consider Miller's Chapter 2 (Probabilities), you will notice there are fully three (3) sets of questions:

Questions T2.708 to 709 (Miller Chapter 2) were written in 2017; the 7XX denotes 2017. Questions T2.300 to 301 (Miller Chapter 2) were written in 2013; the 3XX denotes 2013. Questions T2.201 & 204 (Stock & Watson) were written in 2012. These are relevant but optional.

The reason we include the prior questions is simple: although the FRM’s econometrics readings have churned in recent years (specifically, for Probabilities and Statistics, from Gujarati to Stock and Watson to Miller and now to GARP), the learning objectives (AIMs) have remained essentially unchanged. The testable concepts themselves, in this case, are generally quite durable over time. Therefore, do not feel obligated to review all of the questions in this document! Rather, consider the additional questions as merely a supplemental, optional resource for those who want to spend additional time with the concepts. The major sections are: 

This Chapter: Fundamentals of Probabilities (current QA-1, Chapter 1)
o Most Recent BT questions (20.1 and 20.2)
o Previous BT questions, Miller Chapter 2 (T2.708 & T2.709)
o Previous BT questions, Miller Chapter 2 (T2.300 & T2.301)
o Previous BT questions, Stock & Watson Chapter 2 (T2.201 & T2.204)
o Previous BT questions, Gujarati (T2.59 to T2.61, T2.65)



Random Variables (current QA-2, Chapter 2)
o Most Recent BT questions (20.3 and 20.4)
o Previous BT questions, Miller Chapter 3 (T2.710 & T2.712)
o Previous BT questions, Miller Chapters 2 & 3 (T2.303 & T2.307)
o Previous BT questions, Gujarati (T2.58, T2.59, T2.62, T2.65 & T2.66)



Common Univariate Random Variables (current QA-3, Chapter 3)
o Most Recent BT questions (20.5 to 20.7)
o Previous BT questions, Miller Chapter 4 (T2.309 to T2.312 & T2.713 to T2.716)
o Previous BT questions, Stock & Watson Chapter 2 (T2.205)
o Previous BT questions, Rachev Chapters 2 & 3 (T2.110 to T2.126)
o Previous BT questions, Gujarati (T2.59, T2.68, T2.72 to T2.74, T2.82)



Multivariate Random Variables (current QA-4, Chapter 4)
o Most Recent BT questions, Chapter 4 (20.8 to 20.10)
o Previous BT questions, Miller Chapters 2, 3 & 4 (T2.304, T2.709, T2.711 & T2.716)
o Previous BT questions, Stock & Watson Chapters 2 & 3 (T2.202, T2.212 to T2.213)
o Previous BT questions, Gujarati (T2.62, T2.64, T2.65 & T2.67)




Sample Moments (current QA-5, Chapter 5)
o Most Recent BT questions (20.11 to 20.13)
o Previous BT questions, Miller Chapter 3 (T2.303 to T2.308, T2.710 to T2.712)
o Previous BT questions, Stock & Watson Chapters 2 & 3 (T2.203, T2.206 to T2.208, T2.213)
o Previous BT questions, Gujarati (T2.66, T2.67, T2.69 to T2.71)



Hypothesis Testing & Confidence Intervals (current QA-6, Chapter 6)
o Most Recent BT questions (20.14 to 20.15)
o Previous BT questions, Miller Chapters 5 & 7 (T2.313 to T2.315, T2.718 & T2.719)
o Previous BT questions, Stock & Watson Chapter 3 (T2.209 to T2.212)
o Previous BT questions, Gujarati (T2.75, T2.77, T2.79 to T2.81)


PROBABILITIES - KEY IDEAS .......... 5
Probabilities
P1.T2.20.1. CONDITIONALLY INDEPENDENT EVENTS .......... 7
P1.T2.20.2. MORE PROBABILITIES AND BAYES RULE .......... 10
P1.T2.708. PROBABILITY FUNCTION FUNDAMENTALS .......... 14
P1.T2.709. JOINT PROBABILITY MATRICES .......... 17
P1.T2.300. PROBABILITY FUNCTIONS (MILLER) .......... 20
P1.T2.301. MILLER'S PROBABILITY MATRIX .......... 23
Probabilities (Stock & Watson Chapter 2)
P1.T2.201. RANDOM VARIABLES .......... 26
P1.T2.204. JOINT, MARGINAL, AND CONDITIONAL PROBABILITY FUNCTIONS .......... 29
Statistics (Gujarati's Essentials of Econometrics)
P1.T2.59. GUJARATI'S INTRODUCTION TO PROBABILITIES .......... 31
P1.T2.60. BAYES THEOREM .......... 34
P1.T2.61. STATISTICAL DEPENDENCE .......... 36
P1.T2.65. VARIANCE AND CONDITIONAL EXPECTATIONS .......... 39


Probabilities - Key Ideas 

Risk measurement is largely the quantification of uncertainty. We quantify uncertainty by characterizing outcomes with random variables. Random variables have distributions which are either discrete or continuous.



In general, we observe samples and use them to make inferences about a population (in practice, we tend to assume the population exists but is not available to us)



We are concerned with the first four moments of a distribution:
o Mean, typically denoted µ
o Variance, the square of the standard deviation. Annualized standard deviation is called volatility; e.g., 12% volatility per annum. Variance is almost always denoted σ^2 and standard deviation by sigma, σ
o Skew (a function of the third moment about the mean): a symmetrical distribution has zero skew or skewness
o Kurtosis (a function of the fourth moment about the mean). The normal distribution has kurtosis = 3.0. Excess kurtosis = kurtosis − 3; the normal distribution, being the benchmark, has excess kurtosis equal to zero. Kurtosis > 3.0 refers to a heavy-tailed distribution (a.k.a., leptokurtosis). Heavy-tailed distributions do tend to exhibit higher peaks, but our emphasis in risk is their heavy tails.
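The four moments above can be illustrated numerically. Below is a minimal Python sketch (our own illustration, not part of the reading) that computes the first four moments of a small hypothetical sample using population moment formulas; note the symmetric data produce zero skew, and excess kurtosis is simply kurtosis minus 3.0:

```python
# Hypothetical symmetric sample (our own illustration) for the first four moments.
data = [-2.0, -1.0, 0.0, 1.0, 2.0]

n = len(data)
mean = sum(data) / n

# Central moments: m_k = (1/n) * sum of (x - mean)^k
m2 = sum((x - mean) ** 2 for x in data) / n   # variance
m3 = sum((x - mean) ** 3 for x in data) / n
m4 = sum((x - mean) ** 4 for x in data) / n

std_dev = m2 ** 0.5
skew = m3 / m2 ** 1.5             # zero for a symmetric sample
kurtosis = m4 / m2 ** 2           # the normal distribution has 3.0
excess_kurtosis = kurtosis - 3.0  # zero for the normal benchmark
```

For this sample, skew is exactly zero and kurtosis is 1.7 (excess kurtosis of -1.3), i.e., the flat sample is thinner-tailed than the normal benchmark.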



The concepts of joint, conditional and marginal probability are important.



To test a hypothesis about a sample mean (i.e., is the true population mean different than some value?), we use the Student's t or normal distribution:
o Student's t if the population variance is unknown (it usually is unknown)
o If the sample is large, the Student's t remains applicable, but because it approximates the normal, for large samples the normal is used since the difference is not material



To test a hypothesis about a sample variance, we use the chi-squared



To test a joint hypothesis about regression coefficients, we use the F distribution



In regard to the normal distribution:
o N(µ, σ^2) indicates the only two parameters required. For example, N(3,10) connotes a normal distribution with mean of 3 and variance of 10 and, therefore, standard deviation of SQRT(10)
o The standard normal distribution is N(0,1) and therefore requires no parameter specification: by definition it has mean of zero and variance of 1.0.
o Please memorize, with respect to the standard normal distribution:
 For N(0,1), Pr(Z < -2.33) ≈ 1.0% (CDF is one-tailed)
 For N(0,1), Pr(Z < -1.645) ≈ 5.0% (CDF is one-tailed)






The definition of a random sample is technical: the draws (or trials) are independent and identically distributed (i.i.d.)
o Identical: same distribution
o Independent: no correlation (in a time series, no autocorrelation)

The assumption of i.i.d. is a precondition for:
o Law of large numbers
o Central limit theorem (CLT)
o Square root rule (SRR) for scaling volatility; e.g., we typically scale a daily volatility of (V) to an annual volatility with V*SQRT(250). Please note that i.i.d. returns is the unrealistic precondition.


Probabilities
P1.T2.20.1. Conditionally independent events
P1.T2.20.2. More probabilities and Bayes rule
P1.T2.708. Probability function fundamentals
P1.T2.709. Joint probability matrices
P1.T2.300. Probability functions
P1.T2.301. Miller's probability matrix

P1.T2.20.1. Conditionally independent events

Learning objectives: Describe an event and an event space. Describe independent events and mutually exclusive events. Explain the difference between independent events and conditionally independent events.

20.1.1. A specialized credit portfolio contains only three loans, but they are very risky, as each has a single-period default probability of 10.0%. They are independent (therefore, we have the i.i.d. condition). You know enough probability to determine (for example) that, at the end of a single period, the probability that all three loans default is 0.1% and the probability that all three loans survive is 72.9%. However, at the end of the period, the portfolio manager gives you a piece of additional information when she tells you that "AT LEAST two of the loans have defaulted." What is the (conditional) probability that the other (third) loan also defaulted?

a) 0.09%
b) 0.10%
c) 3.57%
d) 10.0%

20.1.2. Yesterday a web page hosted by Acme received tens of thousands of page views, but some were views by malicious bots. Acme utilizes two software applications to detect these malicious "bot-views." It uploads the same data file from yesterday to both applications. The first application detects 200 bot-views and the second application detects 300 bot-views. Among these, only 40 bot-views were detected by both applications. All bot-views are equally likely to be located, but clearly both applications only identify a minority of the bot-views (otherwise there would be a much higher number of identified bot-views common to both applications). Further, the identification of a bot-view by one application is independent of its identification by the other application. How many malicious bot-views did the web page experience on this day?

a) 300
b) 460
c) 540
d) 1,500


20.1.3. Albert and Betty share an office where each month they attempt to predict the best-performing industry within their respective sectors. Albert's sector is Financials and Betty's sector is Information Technology. Each contains several industries. Without any help, the probability that Albert predicts the best-performing industry (within Financials) is 12.0%, and the probability that Betty predicts the best-performing industry (within I.T.) is 15.0%. Put another way, their unconditional success probabilities are, respectively, P(A) = 12.0% and P(B) = 15.0%. Without any help, the probability that they both simultaneously predict their best industry is 1.80%; that is, the joint Pr(A ∩ B) = 1.80%. Their firm also subscribes to software with artificial intelligence and the software boosts their predictive abilities. In fact, when using the software to help them, their respective success probabilities double. Specifically, P(A | S) = 24.0% and P(B | S) = 30.0%; for example, the probability that Betty picks the best-performing industry conditional on her utilization of the software jumps to 30.0%. When they both use the software, their joint probability of success is 15.0%. In regard to the observed dependencies, which of the following statements is accurate?

a) Independent and conditionally independent
b) Independent but conditionally dependent
c) Dependent but conditionally independent
d) Dependent and conditionally dependent


Answers: 20.1.1. C. True: 3.57% The unconditional probability that TWO or MORE loans default equals 3*(10%^2*90%) + 10%^3 = 2.80% such that the conditional probability, Pr (3 default | two or more default) = 0.10% / 2.80% = 3.5714%. 20.1.2. D. True: 1,500 The second application identified 40/200 = 20.0% of those identified by the first application, therefore (per the independence), we can infer that its own 300 identifications is about 20.0% of the total number such that we estimate 300 / 20% = 1,500 total bot-views. Similarly, the first application identified 40/300 = 13.33% of those identified by the second application, so we can infer that its own 200 identification represents about 13.33% of the total, which is also 200/13.33% = 1,500.
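Both answers above can be verified with a short Python sketch (illustrative only; the variable names are our own). The first part enumerates the eight default/survive outcomes for three i.i.d. loans; the second applies the capture-recapture style estimate implied by the independence of the two detection applications:

```python
from itertools import product

# 20.1.1: enumerate all default (1) / survive (0) outcomes for three i.i.d. loans
p = 0.10                      # single-period default probability per loan
pr = {}                       # maps number of defaults -> total probability
for outcome in product([0, 1], repeat=3):
    k = sum(outcome)          # number of defaults in this outcome
    prob = p ** k * (1 - p) ** (3 - k)
    pr[k] = pr.get(k, 0.0) + prob

p_all_three = pr[3]                   # all three default: 0.10%
p_two_or_more = pr[2] + pr[3]         # at least two default: 2.80%
p_cond = p_all_three / p_two_or_more  # Pr(3 default | >= 2 default) ~ 3.57%

# 20.1.2: independence implies a capture-recapture estimate, N = n1 * n2 / overlap
n_total = 200 * 300 / 40              # 1,500 total bot-views
```

Either detector's catch rate (40/200 = 20% or 40/300 = 13.33%) leads to the same total of 1,500, exactly as the written solution shows.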

20.1.3. B. Independent but conditionally dependent They are independent because P(A)*P(B) = 12%*15% = 1.80% and this is equal to the joint P(AB) = 1.80%. However, they are conditionally dependent because it is not true that P(A|S) * P(B|S) = P(AB|S). The product, P(A|S)* P(B|S) = 24.0% * 30% = 7.20%, but the P(AB|S) is given as 15.0%.
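The arithmetic in this answer reduces to two product checks, sketched below in Python (an illustration, not the official solution method):

```python
TOL = 1e-9  # tolerance for comparing floating-point probabilities

# Unconditional: P(A) * P(B) equals the joint probability -> independent
p_a, p_b, p_ab = 0.12, 0.15, 0.018
unconditionally_independent = abs(p_a * p_b - p_ab) < TOL   # 1.80% == 1.80%

# Conditional on software S: P(A|S) * P(B|S) differs from P(A and B | S)
# -> conditionally dependent (7.20% versus the given 15.0%)
p_a_s, p_b_s, p_ab_s = 0.24, 0.30, 0.15
conditionally_independent = abs(p_a_s * p_b_s - p_ab_s) < TOL
```

The first check passes and the second fails, which is answer (b): independent but conditionally dependent.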

Discuss here in the forum: https://www.bionicturtle.com/forum/threads/p1-t2-20-1-conditionally-independent-events.23249/


P1.T2.20.2. More probabilities and Bayes rule Learning objectives: Calculate the probability of an event for a discrete probability function. Define and calculate a conditional probability. Distinguish between conditional and unconditional probabilities. Explain and apply Bayes’ rule. 20.2.1. The probability graph below illustrates event A (the yellow rectangle) and event B (the blue rectangle). The unconditional probability of event A is 50.0% and the unconditional probability of event B is 44.0%; i.e., Pr(A) = 50.0% and Pr(B) = 44.0%. Their overlap is graphed by the green rectangle such that Pr(A ∩ B) = 27.0%. The orange rectangle conditions on the event C. For example, conditional on event C, there is a 50.0% probability that event A occurs, Pr(A | C) = 50.0%.

Which of the following is TRUE about, respectively, the unconditional and conditional relationship between events A and B?

a) A and B are unconditionally dependent but conditionally (on event C) independent
b) A and B are unconditionally dependent and also conditionally (on event C) dependent
c) A and B are unconditionally independent but conditionally (on event C) dependent
d) A and B are unconditionally independent and also conditionally (on event C) independent


20.2.2. Rebecca is a risk analyst who wants to characterize the loss frequency distribution of a certain minor operational process during each day. On most days, there is no loss event; i.e., Pr(X = 0) > 50.0%. On days when there is at least one loss, there occurs either one, two, three, or four loss events. For this process, she likes the shape of the Poisson distribution with a low mean (e.g., lambda = 1), but the problem is that the Poisson has a long, thin right tail. However, given her frequency outcome is finite, Rebecca prefers a domain limited to only five outcomes including zero: X = {0, 1, 2, 3, or 4}, but X cannot be five or more.

She settles on an elegant formula to express the probability mass as a function of a constant. Her function is Pr(X = x) = (5-x)^3*a, where (a) is a constant, over the domain mentioned. Specifically, Pr(X = 0) = 125*a, Pr(X = 1) = 64*a, and so on. Basically, this assigns the lowest probability (a) to an outcome of four. An outcome of three is eight times (8a) more likely than an outcome of four, an outcome of two is 27 times (27a) more likely, an outcome of one is 64 times (64a) more likely, and an outcome of zero is 125 times (125a) more likely than an outcome of four.

This allows her to fit her sample database by characterizing the distribution of outcomes in relative terms; i.e., relative to an outcome of four, which is the least likely. Specifically, it reflects her desire for a distribution under which a zero or one occurs more than 80.0% of the time, yet in rare cases the outcome can be as much as four. Unlike the Poisson, it has no tail beyond an outcome of four. Her probability mass distribution looks like the following:

What is the probability that X will be at least two, Pr(X ≥ 2), which in this case of a discrete distribution is the same as Pr(X > 1)?

a) 2.78%
b) 9.50%
c) 16.00%
d) 36.00%


20.2.3. Among a set of filtered stocks, a stock screener assigns stocks to one of three style categories: value, quality, or momentum. At the end of each month, the stock's performance is compared to the S&P such that it either beats or does not beat the index. The prior beliefs (aka, unconditional probabilities) are the following: Pr(Style = Value) = 15.0%, Pr(Style = Quality) = 30.0%, and Pr(Style = M...

