
Title: CPU OS Simulator Observations
Course: Computer Science and Information Systems
Institution: Birla Institute of Technology and Science, Pilani

Summary

Observations for direct-mapped, set-associative and fully associative cache memory, recorded by running a binary search program in the CPU-OS Simulator.


Description

Assignment I, Problem Bank 47

Assignment Description: The assignment aims to provide a deeper understanding of cache memory by analysing its behaviour using the cache implementation of the CPU-OS Simulator. The assignment has three parts:
● Part I deals with Cache Memory Management with Direct Mapping
● Part II deals with Cache Memory Management with Associative Mapping
● Part III deals with Cache Memory Management with Set Associative Mapping

Submission: You will have to submit this documentation file, and the name of the file should be GROUP-NUMBER.pdf. For example, if your group number is 1, the file name should be GROUP-1.pdf. Submit the assignment by 22nd December 2021, through Canvas only. Files submitted by any means outside Canvas will not be accepted or marked. In case of any issues, please drop an email to the course TAs, Ms. Michelle Gonsalves ([email protected]).

Caution!!! Assignments are designed separately for individual groups; they may look similar, but you may not notice the minor changes between them. Hence, refrain from copying or sharing documents with others. Any evidence of such practice will attract a severe penalty.

Evaluation:
● The assignment carries 13 marks
● Grading will depend on:
  o Contribution of each student in the implementation of the assignment
  o Plagiarism or copying will result in -13 marks

************************ FILL IN THE DETAILS GIVEN BELOW ************************

Assignment Set Number:

Group Name:

Contribution Table: (This table should contain the list of all the students in the group. Clearly mention each student's contribution towards the assignment. Mention "No Contribution" where applicable.)

Sl. No. | Name (as appears in Canvas) | ID No. | Contribution

Resources for Parts I, II and III:
● Use the following link to log in to the "eLearn" portal:
  o https://elearn.bits-pilani.ac.in
● Click on "My Virtual Lab – CSIS".
● Log in to the Virtual Lab using your Canvas credentials.
● In the "BITS Pilani" Virtual Lab, click on "Resources" and then on the "Computer Organization and software systems" course.
  o Use the resources within "LabCapsule3: Cache Memory".

Code to be used: The following code, written in the STL language, implements searching for an element (key) in an array using the binary search technique.

program BinarySearch
	VAR a array(11) INTEGER
	for n = 0 to 10
		a(n) = n
		writeln (a(n))
	next
	VAR key INTEGER
	VAR first INTEGER
	VAR last INTEGER
	VAR middle INTEGER
	VAR temp INTEGER
	key = 5
	writeln("Key to be searched",key)
	first = 0
	last = 10
	middle = (first+last)/2
	while first <= last
		temp = a(middle)
		if key > temp then
			first = middle + 1
		else
			last = middle - 1
		end if
		middle = (first+last)/2
	wend
	if first > last then
		writeln("Key Not Found")
	end if
end

General procedure to convert the given STL program into ALP:
● Open CPU OS Simulator. Go to the Advanced tab and press the Compiler button.
● Copy the above program into the Program Source window.
● Open the Compile tab and press the Compile button.
● In Assembly Code, enter the start address and press the Load in Memory button.
● Now the assembly language program is available in the CPU simulator.
● Set the speed of execution to FAST.
● Open the I/O console.
● To run the program, press the RUN button.

General procedure to use the cache set-up in the CPU-OS Simulator:
● After compiling and loading the assembly language code in the CPU simulator, press the "Cache-Pipeline" tab and select cache type as "both". Press the "SHOW CACHE" button.
● In the newly opened cache window, choose the appropriate cache type, cache size, set blocks, replacement algorithm and write-back policy.
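For reference, the hit and miss percentages recorded in the tables that follow come directly from the simulator's hit and miss counters. A minimal Python sketch of that calculation (the counts 104 and 115 are taken from the first row of the Part I table; the function name is ours, used only for illustration):

def ratios(hits: int, misses: int) -> tuple[float, float]:
    """Return (hit ratio, miss ratio) as percentages of all cache accesses."""
    total = hits + misses
    return 100.0 * hits / total, 100.0 * misses / total

# Example: first row of the direct-mapped table (block size 2, cache size 8).
hit_pct, miss_pct = ratios(hits=104, misses=115)
print(f"hit ratio {hit_pct:.1f}%, miss ratio {miss_pct:.1f}%")   # 47.5% and 52.5%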

Part I: Direct Mapped Cache

a) Execute the above program by setting block size to 2, 4, 8, 16 and 32 for cache size = 8, 16 and 32. Record the observation in the following table.

Block Size | Cache Size | # Hits | # Misses | % Miss Ratio | % Hit Ratio
2          | 8          | 104    | 115      | 52.5%        | 47.5%
4          | 8          | 118    | 101      | 46.1%        | 53.9%
8          | 8          | 109    | 110      | 50.2%        | 49.8%
2          | 16         | 117    | 102      | 46.5%        | 53.5%
4          | 16         | 139    | 80       | 36.5%        | 63.5%
8          | 16         | 143    | 76       | 34.7%        | 65.3%
16         | 16         | 154    | 65       | 29.6%        | 70.4%
2          | 32         | 120    | 99       | 45.2%        | 54.8%
4          | 32         | 150    | 69       | 31.5%        | 68.5%
8          | 32         | 166    | 53       | 24.2%        | 75.8%
16         | 32         | 178    | 41       | 18.7%        | 81.3%
32         | 32         | 186    | 33       | 15.0%        | 85.0%

b) Plot a single graph of cache hit ratio vs block size with respect to cache size = 8, 16 and 32. Comment on the graph that is obtained.

[Chart: % Hit Ratio vs Block Size for cache sizes 8, 16 and 32]

Increasing the cache size improves the hit ratio and hence the performance of the program: the larger the cache, the better the CPU performs for this workload.
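One way to see why both block size and cache size influence the hit ratio is to work out how a direct-mapped cache splits an address into a block offset, a line index and a tag. The Python sketch below illustrates that mapping under the usual power-of-two assumptions; the function and the address range are ours, not taken from the CPU-OS Simulator.

def direct_mapped_fields(address: int, block_size: int, cache_size: int):
    """Split an address into (tag, line_index, block_offset) for a direct-mapped
    cache of `cache_size` cells organised as blocks of `block_size` cells."""
    num_lines = cache_size // block_size      # a direct-mapped cache has one block per line
    block_offset = address % block_size       # position of the word inside its block
    block_number = address // block_size      # which memory block the address belongs to
    line_index = block_number % num_lines     # the single line that block may occupy
    tag = block_number // num_lines           # distinguishes blocks that share a line
    return tag, line_index, block_offset

# A larger cache has more lines, so fewer memory blocks collide on the same
# line and the hit ratio rises, as observed in the table above.
for cache_size in (8, 16, 32):
    lines = [direct_mapped_fields(a, block_size=2, cache_size=cache_size)[1]
             for a in range(48, 64)]
    print(f"cache size {cache_size}: line indices {lines}")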

c) Fill in the table below and write a short note on your observations from the data cache.
● Block Size = 16
● Cache Size = 16
● Cache Type = Direct Mapped

Address | Data | Miss (%)
0048    | 00   | 29.6%
0049    | 02   | 29.6%
0050    | 00   | 29.6%
0051    | 05   | 29.6%
0052    | 02   | 29.6%
0053    | 00   | 29.6%
0054    | 00   | 29.6%
0055    | 02   | 29.6%
0056    | 00   | 29.6%
0057    | 0A   | 29.6%
0058    | 02   | 29.6%
0059    | 00   | 29.6%
0060    | 05   | 29.6%
0061    | 02   | 29.6%
0062    | 00   | 29.6%
0063    | 05   | 29.6%
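A short check of why every address above shows the same miss percentage: with a block size of 16, addresses 0048 through 0063 (read here as decimal byte addresses, which is an assumption about the simulator's addressing) all fall inside a single cache block, so the per-address figure simply repeats the overall 29.6% miss ratio already recorded for block size 16 and cache size 16 in part (a). A trivial Python check of the shared block:

# Addresses 0048..0063 with a 16-cell block: every address maps to the same block.
block_size = 16
blocks = {addr // block_size for addr in range(48, 64)}
print(blocks)   # -> {3}: one block covers the whole listed address range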

Part II: Associative Mapped Cache

a) Execute the above program by setting block size to 2, 4, 8, 16 and 32 for cache size = 8, 16 and 32. Record the observation in the following table.

Replacement Algorithm: LRU

Block Size | Cache Size | # Hits | # Misses | % Miss Ratio | % Hit Ratio
2          | 8          | 121    | 98       | 44.7%        | 55.3%
4          | 8          | 112    | 107      | 48.8%        | 51.2%
8          | 8          | 109    | 110      | 50.2%        | 49.8%
2          | 16         | 123    | 96       | 43.8%        | 56.2%
4          | 16         | 156    | 63       | 28.7%        | 71.3%
8          | 16         | 136    | 83       | 37.9%        | 62.1%
16         | 16         | 154    | 65       | 29.6%        | 70.4%
2          | 32         | 127    | 92       | 42.0%        | 58.0%
4          | 32         | 162    | 57       | 26.0%        | 74.0%
8          | 32         | 189    | 30       | 13.7%        | 86.3%
16         | 32         | 197    | 22       | 10.0%        | 90.0%
32         | 32         | 186    | 33       | 15.0%        | 85.0%

b) Plot a single graph of cache hit ratio vs block size with respect to cache size = 8, 16 and 32. Comment on the graph that is obtained.

[Chart: % Hit Ratio vs Block Size for cache sizes 8, 16 and 32]

Increasing the cache size improves the hit ratio and hence the performance of the program: the larger the cache, the better the CPU performs for this workload.
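As a rough model of what the simulator does in this part, the Python sketch below simulates a fully associative cache with LRU replacement over an address trace and counts hits and misses. The trace and parameters are invented purely for illustration; only the mapping and replacement logic mirror the configuration used above.

from collections import OrderedDict

def fully_associative_lru(addresses, cache_size, block_size):
    """Count hits and misses for a fully associative cache with LRU replacement;
    any memory block may occupy any of the cache_size // block_size lines."""
    num_lines = cache_size // block_size
    resident = OrderedDict()                 # block number -> None, ordered by recency
    hits = misses = 0
    for addr in addresses:
        block = addr // block_size
        if block in resident:
            hits += 1
            resident.move_to_end(block)      # mark as most recently used
        else:
            misses += 1
            if len(resident) >= num_lines:
                resident.popitem(last=False) # evict the least recently used block
            resident[block] = None
    return hits, misses

# Invented trace purely for illustration.
trace = [48, 49, 50, 51, 48, 52, 49, 53, 48, 54, 55, 48]
print(fully_associative_lru(trace, cache_size=8, block_size=2))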

c) Fill up the following table for three different replacement algorithms and state which replacement algorithm is better, and why.

Replacement Algorithm: Random

Block Size | Cache Size | Misses | Hits | Hit Ratio (%)
2          | 4          | 127    | 92   | 42.0
2          | 8          | 104    | 115  | 52.5
2          | 16         | 100    | 119  | 54.3
2          | 32         | 89     | 130  | 59.3
2          | 64         | 87     | 132  | 60.2

Replacement Algorithm: FIFO

Block Size | Cache Size | Misses | Hits | Hit Ratio (%)
2          | 4          | 131    | 88   | 40.2
2          | 8          | 108    | 111  | 50.68
2          | 16         | 100    | 119  | 54.3
2          | 32         | 91     | 128  | 58.4
2          | 64         | 87     | 132  | 60.3

Replacement Algorithm: LRU

Block Size | Cache Size | Misses | Hits | Hit Ratio (%)
2          | 4          | 131    | 88   | 40.2
2          | 8          | 98     | 121  | 55.3
2          | 16         | 96     | 123  | 56.2
2          | 32         | 92     | 127  | 58.0
2          | 64         | 85     | 134  | 61.2

There is no significant difference in the misses and hit ratios across the different algorithms. From the marginal differences in hit ratio observed in the tables above, we can conclude that the LRU algorithm performs slightly better than the other algorithms.

d) Plot the graph of cache hit ratio vs cache size with respect to the different replacement algorithms. Comment on the graph that is obtained.

[Chart: % Hit Ratio vs Cache Size for the Random, FIFO and LRU replacement algorithms]

Irrespective of the replacement algorithm used, increasing the cache size improves the hit ratio.
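The three replacement algorithms compared above differ only in which resident block is evicted on a miss. The Python sketch below makes that explicit by swapping the eviction rule while keeping the rest of the (fully associative) cache model fixed; the trace is invented for illustration and is not the simulator's actual access sequence.

import random

def simulate(addresses, num_lines, block_size, policy, seed=0):
    """Hit/miss counts for a fully associative cache; `policy` chooses the victim."""
    rng = random.Random(seed)
    resident = []            # resident block numbers; recency order for LRU,
                             # arrival order for FIFO (never reordered on a hit)
    hits = misses = 0
    for addr in addresses:
        block = addr // block_size
        if block in resident:
            hits += 1
            if policy == "LRU":              # refresh recency only for LRU
                resident.remove(block)
                resident.append(block)
            continue
        misses += 1
        if len(resident) >= num_lines:
            if policy == "Random":
                resident.pop(rng.randrange(len(resident)))
            else:
                resident.pop(0)   # index 0 is the LRU block (LRU) or oldest arrival (FIFO)
        resident.append(block)
    return hits, misses

# Invented trace; the simulator's actual access sequence differs.
trace = [48, 49, 50, 48, 51, 49, 52, 48, 53, 49] * 3
for policy in ("Random", "FIFO", "LRU"):
    print(policy, simulate(trace, num_lines=4, block_size=2, policy=policy))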

Part III: Set Associative Mapped Cache

Execute the above program by setting the following parameters:
● Number of sets (Set Blocks): 2-way
● Cache Type: Set Associative
● Replacement: LRU/FIFO/Random

a) Fill up the following table for three different replacement algorithms and state which replacement algorithm is better, and why.

Replacement Algorithm: Random

Block Size | Cache Size | Misses | Hits | Hit Ratio (%)
2          | 4          | 127    | 92   | 42.01
2          | 8          | 101    | 118  | 53.88
2          | 16         | 99     | 120  | 54.79
2          | 32         | 93     | 126  | 57.53
2          | 64         | 87     | 132  | 60.27

Replacement Algorithm: FIFO

Block Size | Cache Size | Misses | Hits | Hit Ratio (%)
2          | 4          | 134    | 85   | 38.81
2          | 8          | 68     | 151  | 68.95
2          | 16         | 60     | 159  | 72.60
2          | 32         | 54     | 165  | 75.34
2          | 64         | 45     | 174  | 79.45

Replacement Algorithm: LRU

Block Size | Cache Size | Misses | Hits | Hit Ratio (%)
2          | 4          | 134    | 85   | 38.81
2          | 8          | 60     | 159  | 72.60
2          | 16         | 55     | 164  | 74.89
2          | 32         | 51     | 168  | 76.71
2          | 64         | 44     | 175  | 79.91

There is no significant difference in the misses and hit ratios across the different algorithms. From the marginal differences in hit ratio observed in the tables above, we can conclude that the LRU algorithm performs slightly better than the other algorithms.

b) Plot the graph of cache hit ratio vs cache size with respect to the different replacement algorithms. Comment on the graph that is obtained.

[Chart: % Hit Ratio vs Cache Size for the Random, FIFO and LRU replacement algorithms]

Irrespective of the replacement algorithm used, increasing the cache size improves the hit ratio.

c) Fill in the following table and analyse the behaviour of the set-associative cache. Which configuration is better, and why?

Replacement Algorithm: LRU

Block Size, Cache Size | Set Blocks | Misses     | Hits | Hit Ratio
2, 64                  | 2-way      | 86 (39.2%) | 133  | 0.60
2, 64                  | 4-way      | 85 (38.8%) | 134  | 0.61
2, 64                  | 8-way      | 85 (38.8%) | 134  | 0.61

Rush's values — Replacement Algorithm: LRU

Block Size, Cache Size | Set Blocks | Misses     | Hits | Hit Ratio
2, 64                  | 2-way      | 44 (20%)   | 175  | 0.79
2, 64                  | 4-way      | 43 (19.6%) | 176  | 0.80
2, 64                  | 8-way      | 43 (19.6%) | 176  | 0.80

As the number of set blocks (ways) increases, the number of sets decreases, the number of misses reduces, and the number of hits increases. We can conclude that the 8-way set-block configuration performs best with the LRU algorithm.
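To relate the set-block counts above to the mapping itself: a k-way set-associative cache with N lines has N / k sets, and a block may be placed in any of the k lines of the set it indexes. The Python sketch below shows just that set-index computation; the parameters are illustrative only (1-way would reduce to direct mapping, and ways equal to the line count to a fully associative cache).

def set_index(address: int, block_size: int, num_lines: int, ways: int) -> int:
    """Set that an address maps to in a `ways`-way set-associative cache of `num_lines` lines."""
    num_sets = num_lines // ways          # e.g. 8 lines: 2-way -> 4 sets, 8-way -> 1 set
    block_number = address // block_size
    return block_number % num_sets

# More ways mean fewer, larger sets: blocks that collided in a 2-way cache can
# coexist in a 4-way or 8-way set, which is consistent with the slight drop in
# misses as associativity grows in the tables above.
for ways in (2, 4, 8):
    indices = [set_index(a, block_size=2, num_lines=8, ways=ways) for a in range(48, 64, 2)]
    print(f"{ways}-way: {indices}")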

