CS6303 Computer Architecture notes
Author: Latha Palani
Course: Computer Science, Anna University

CS6303

Computer Architecture

R.M.K ENGINEERING COLLEGE DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CS6303 COMPUTER ARCHITECTURE NOTES

P.Latha,ASP/ECE.


COMPUTER ARCHITECTURE

L T P C 3 0 0 3

OBJECTIVES:
• To make students understand the basic structure and operation of a digital computer.
• To understand the hardware-software interface.
• To familiarize the students with the arithmetic and logic unit and the implementation of fixed-point and floating-point arithmetic operations.
• To expose the students to the concept of pipelining.
• To familiarize the students with the hierarchical memory system, including cache memories and virtual memory.
• To expose the students to the different ways of communicating with I/O devices and standard I/O interfaces.

UNIT I OVERVIEW & INSTRUCTIONS 9
Eight ideas – Components of a computer system – Technology – Performance – Power wall – Uniprocessors to multiprocessors; Instructions – operations and operands – representing instructions – Logical operations – control operations – Addressing and addressing modes.

UNIT II ARITHMETIC OPERATIONS 7
ALU – Addition and subtraction – Multiplication – Division – Floating point operations – Subword parallelism.

UNIT III PROCESSOR AND CONTROL UNIT 11
Basic MIPS implementation – Building a data path – Control implementation scheme – Pipelining – Pipelined data path and control – Handling data hazards & control hazards – Exceptions.

UNIT IV PARALLELISM 9
Instruction-level parallelism – Parallel processing challenges – Flynn's classification – Hardware multithreading – Multicore processors.

UNIT V MEMORY AND I/O SYSTEMS
Memory hierarchy – Memory technologies – Cache basics – Measuring and improving cache performance – Virtual memory, TLBs – Input/output system, programmed I/O, DMA and interrupts, I/O processors.

OUTCOMES:
At the end of the course, the student should be able to:
• Design an arithmetic and logic unit.
• Design and analyse pipelined control units.
• Evaluate the performance of memory systems.
• Understand parallel processing architectures.

TEXT BOOK:
1. David A. Patterson and John L. Hennessy, "Computer Organization and Design", Fifth Edition, Morgan Kaufmann / Elsevier, 2014.

REFERENCES:
1. V. Carl Hamacher, Zvonko G. Vranesic and Safwat G. Zaky, "Computer Organization", Sixth Edition, McGraw-Hill, 2012.
2. William Stallings, "Computer Organization and Architecture", Seventh Edition, Pearson Education, 2006.
3. Vincent P. Heuring and Harry F. Jordan, "Computer System Architecture", Second Edition, Pearson Education, 2005.
4. Govindarajalu, "Computer Architecture and Organization: Design Principles and Applications", First Edition, Tata McGraw-Hill, New Delhi, 2005.
5. John P. Hayes, "Computer Architecture and Organization", Third Edition, Tata McGraw-Hill, 1998.
6. http://nptel.ac.in/


UNIT I

OVERVIEW & INSTRUCTIONS

Eight ideas – Components of a computer system – Technology – Performance – Power wall – Uniprocessors to multiprocessors; Instructions – operations and operands – representing instructions – Logical operations – control operations – Addressing and addressing modes.

Introduction

Computer architecture → The conceptual design and fundamental operational structure of a computer system.
Computer organization → The operational units and their interconnections that realize the architectural specification.

PostPC Era
Replacing the PC is the personal mobile device (PMD). PMDs are battery operated with wireless connectivity to the Internet and typically cost hundreds of dollars, and, like PCs, users can download software ("apps") to run on them. Unlike PCs, they no longer have a keyboard and mouse, and are more likely to rely on a touch-sensitive screen or even speech input. Today's PMD is a smart phone or a tablet computer, but tomorrow it may include electronic glasses.
Taking over from the traditional server is cloud computing, which relies upon giant datacenters now known as Warehouse Scale Computers (WSCs). Companies like Amazon and Google build WSCs containing 100,000 servers and then let other companies rent portions of them so that they can provide software services to PMDs without having to build WSCs of their own.

Figure 1.1 - The number manufactured per year of tablets and smart phones, which reflect the PostPC era, versus personal computers and traditional cell phones.


Figure 1.1 shows the rapid growth of tablets and smart phones versus that of PCs and traditional cell phones. Indeed, Software as a Service (SaaS) deployed via the cloud is revolutionizing the software industry just as PMDs and WSCs are revolutionizing the hardware industry.

1.1 Eight Great Ideas in Computer Architecture
These are the eight great ideas that computer architects have invented in the last 60 years of computer design. These ideas are so powerful that they have lasted long after the first computer that used them, with newer architects demonstrating their admiration by imitating their predecessors.
1. Design for Moore's Law
2. Use Abstraction to Simplify Design
3. Make the Common Case Fast
4. Performance via Parallelism
5. Performance via Pipelining
6. Performance via Prediction
7. Hierarchy of Memories
8. Dependability via Redundancy

1) Design for Moore's Law
• The one constant for computer designers is rapid change, which is driven largely by Moore's Law, stated by Gordon Moore, one of the founders of Intel.
• Moore's Law states that "integrated circuit resources double every 18–24 months". It resulted from a 1965 prediction of such growth in IC capacity.
• As computer designs can take years, the resources available per chip can easily double or quadruple between the start and finish of the project.
• Computer architects must anticipate where the technology will be when the design finishes, rather than design for where it starts.


2) Use Abstraction to Simplify Design
• Both computer architects and programmers had to invent techniques to make themselves more productive; otherwise, design time would lengthen as dramatically as resources grew by Moore's Law.
• A major productivity technique for hardware and software is to use abstractions to represent the design at different levels of representation; lower-level details are hidden to offer a simpler model at higher levels.

3) Make the Common Case Fast
• Making the common case fast will tend to enhance performance better than optimizing the rare case.
• The common case is often simpler than the rare case and hence is often easier to enhance.
• Identify the common case by careful experimentation and measurement.

4) Performance via Parallelism
• Computer architects have offered designs that get more performance by performing operations in parallel.

5) Performance via Pipelining
• Performance is increased by pipelining, in which multiple instructions are overlapped in execution.
• The processing is done in steps, called stages, that can operate concurrently.


• Each stage of the pipeline has separate resources.

6) Performance via Prediction
• It can be faster on average to guess and start working rather than wait until you know for sure, assuming that the mechanism to recover from a misprediction is not too expensive and the prediction is relatively accurate.
• Performance is also improved by predicting and executing the next instruction. If the prediction is accurate, and if the mechanism to recover from a misprediction is not too expensive, performance will improve.

7) Hierarchy of Memories
• Programmers want memory to be fast, large, and cheap, as memory speed often shapes performance, capacity limits the size of problems that can be solved, and the cost of memory is often the majority of computer cost.
• Architects have found that they can address these conflicting demands with a hierarchy of memories, with the fastest, smallest, and most expensive memory per bit at the top of the hierarchy and the slowest, largest, and cheapest per bit at the bottom.
• Cache memories give the programmer the illusion that main memory is nearly as fast as the top of the hierarchy and nearly as big and cheap as the bottom of the hierarchy.
• The layered triangle icon is used to represent the memory hierarchy. The shape indicates speed, cost, and size: the closer to the top, the faster and more expensive per bit the memory; the wider the base of the layer, the bigger the memory.

8) Dependability via Redundancy
• Computers not only need to be fast; they need to be dependable.
• Since any physical device can fail, we make systems dependable by including redundant components that can take over when a failure occurs and that help detect failures.


Hardware and Software

These layers of software are organized primarily in a hierarchical fashion, with applications being the outermost ring and a variety of systems software sitting between the hardware and applications software. There are many types of systems software, but two types of systems software are central to every computer system today: an operating system and a compiler.

View of Hardware and software

An operating system interfaces between a user's program and the hardware and provides a variety of services and supervisory functions. Among the most important functions are:
➢ Handling basic input and output operations
➢ Allocating storage and memory
➢ Providing for protected sharing of the computer among multiple applications using it simultaneously
Examples of operating systems in use today are Linux, iOS, and Windows.


Compilation of C program into machine language

Systems software → Software that provides services that are commonly useful, including operating systems, compilers, loaders, and assemblers.
Compiler → A program that translates high-level language statements into assembly language statements.
Instruction → A command that computer hardware understands and obeys.
Assembler → A program that translates a symbolic version of instructions into the binary version.
Assembly language → A symbolic representation of machine instructions.
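As an illustration of the assembler's job defined above, the sketch below packs one MIPS R-format instruction into its binary machine-language word (MIPS is the architecture used later in the syllabus). The field widths follow the standard MIPS convention, the register numbers are the standard ones ($s1 = 17, $s2 = 18, $t0 = 8), and the helper name is ours.

```python
# Sketch: how an assembler turns one symbolic MIPS instruction
# into its 32-bit machine-language word.
# R-format fields are op/rs/rt/rd/shamt/funct = 6/5/5/5/5/6 bits.

def encode_r_format(op, rs, rt, rd, shamt, funct):
    """Pack the six R-format fields into one 32-bit word."""
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t0, $s1, $s2   (op 0 with funct 0x20 selects "add")
word = encode_r_format(0, 17, 18, 8, 0, 0x20)
print(hex(word))  # 0x2324020
```

The same word read back as binary, 000000 10001 10010 01000 00000 100000, is the machine-language form; the symbolic line `add $t0, $s1, $s2` is its assembly-language form.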


Machine language → A binary representation of machine instructions.

1.2 Components of a computer system
The five classic components of a computer are: (i) input, (ii) output, (iii) memory, (iv) datapath (ALU) and (v) control, with the last two sometimes combined and called the processor.

Components of computer

1) Input device
Input device → A mechanism through which the computer is fed information, such as a keyboard.
An input device reads data into the computer, e.g. keyboards, joysticks, trackballs and mice (graphical input devices), or microphones (which capture audio input).

LCD (Liquid Crystal Display)
The most fascinating I/O device is probably the graphics display. Most personal mobile devices use liquid crystal displays (LCDs) to get a thin, low-power display. The LCD is not the source of light; instead, it controls the transmission of light. A typical LCD includes rod-shaped molecules in a liquid that form a twisting helix that bends light entering the display, from either a light source behind the display or, less often, from reflected light. The rods straighten out when a current is applied and no longer bend the light. Since the liquid crystal material is between two screens polarized at 90 degrees, the light cannot pass through unless it is bent.


Most LCD displays use an active matrix that has a tiny transistor switch at each pixel to precisely control current and make sharper images. A red-green-blue mask associated with each dot on the display determines the intensity of the three color components in the final image; in a color active matrix LCD, there are three transistor switches at each point.
The image is composed of a matrix of picture elements, or pixels, which can be represented as a matrix of bits, called a bit map. Depending on the size of the screen and the resolution, the display matrix in a typical tablet ranges in size from 1024 × 768 to 2048 × 1536. A color display might use 8 bits for each of the three colors (red, blue, and green), for 24 bits per pixel, permitting millions of different colors to be displayed.
The computer hardware support for graphics consists mainly of a raster refresh buffer, or frame buffer, to store the bit map. The image to be represented onscreen is stored in the frame buffer, and the bit pattern per pixel is read out to the graphics display at the refresh rate.
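The frame-buffer size implied by the numbers above can be checked with a quick sketch. The function name is ours; the 2048 × 1536 resolution and 24 bits per pixel are the figures quoted in the text.

```python
# Sketch: frame-buffer size for a bit-mapped display.
# One full bit map must hold bits_per_pixel bits for every pixel.

def frame_buffer_bytes(width, height, bits_per_pixel=24):
    """Bytes needed to store one full bit map of the screen."""
    return width * height * bits_per_pixel // 8

size = frame_buffer_bytes(2048, 1536)
print(size)          # 9437184
print(size / 2**20)  # 9.0 -- about 9 MiB per frame
```

At a typical 60 Hz refresh rate, that entire bit map is read out to the display sixty times a second, which is why the frame buffer is built as dedicated, fast memory.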

Coordinates in raster CRT

Touch Screen
There are a variety of ways to implement a touch screen; many tablets today use capacitive sensing. Since people are electrical conductors, if an insulator like glass is covered with a transparent conductor, touching distorts the electrostatic field of the screen, which results in a change in capacitance. This technology can allow multiple touches simultaneously, which allows gestures that can lead to attractive user interfaces.


2) Output device
Output device → A mechanism that conveys the result of a computation to a user, such as a display, or to another computer.
➢ Its function is to send processed results to the outside world.
➢ E.g. printers (mechanical impact, ink jet, laser).

3) CPU (Datapath + Control)
It is also called the processor. This is the active part of the computer. The processor logically comprises two main components:
1) Datapath – the datapath performs the arithmetic operations.
2) Control – control tells the datapath, memory, and I/O devices what to do according to the wishes of the instructions of the program.

Instruction Set Architecture (or Architecture): The abstract interface between the hardware and lower-level software that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O and so on, is the Instruction Set Architecture (ISA).
ABI (Application Binary Interface): The user portion of the instruction set plus the operating system interfaces used by application programmers, defined as a standard for binary portability across computers, is called the ABI.
Computer Architecture: Computer architecture consists of the instruction set architecture (ISA) and the hardware that implements the instruction set.
Both hardware and software consist of hierarchical layers using abstraction, with each lower layer hiding details from the level above. One key interface between the levels of abstraction is the instruction set architecture: the interface between the hardware and low-level software.


This abstract interface enables many implementations of varying cost and performance to run identical software.

4) Memory
The memory is where the programs are kept when they are running; it also contains the data needed by the running programs.
The memory is organized so that a group of n bits can be stored or retrieved in a single, basic operation. Each group of n bits is referred to as a word of information, and n is called the word length. Modern computers have word lengths that typically range from 16 to 64 bits. A unit of 8 bits is called a byte. Machine instructions may require one or more words for their representation.
Accessing the memory to store or retrieve a single item of information, either a word or a byte, requires distinct names or addresses for each item location. It is customary to use numbers from 0 through 2^k − 1, for some suitable value of k, as the addresses of successive locations in the memory. The 2^k addresses constitute the address space of the computer, and the memory can have up to 2^k addressable locations. For example, a 24-bit address generates an address space of 2^24 (16,777,216) locations. This number is usually written as 16M (16 mega), where 1M is the number 2^20 (1,048,576). A 32-bit address creates an address space of 2^32 or 4G (4 giga) locations, where 1G is 2^30. Other notational conventions that are commonly used are K (kilo) for the number 2^10 (1,024), and T (tera) for the number 2^40.
We have seen how to input data, compute using the data, and display data. If we were to lose power to the computer, however, everything would be lost, because the memory inside the computer is volatile; that is, when it loses power, it forgets. In contrast, a DVD disk doesn't forget the movie when you turn off the power to the DVD player, and is thus a nonvolatile memory technology.
To distinguish between the volatile memory used to hold data and programs while they are running and the nonvolatile memory used to store data and programs between runs, the term main memory or primary memory is used for the former, and secondary memory for the latter. Secondary memory forms the next lower layer of the memory hierarchy. DRAMs have dominated main memory since 1975, while magnetic disks dominated secondary memory starting even earlier. Because of their size and form factor, personal mobile devices use flash memory, a nonvolatile semiconductor memory, instead of disks.
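The address-space arithmetic above can be verified with a short sketch (the helper name is illustrative):

```python
# Sketch: address-space sizes for the address widths quoted above.
# A k-bit address can name 2**k distinct locations.

def address_space(k):
    """Number of addressable locations with a k-bit address."""
    return 2 ** k

print(address_space(24))           # 16777216 -- 16M, since 1M = 2**20
print(address_space(32) // 2**30)  # 4        -- i.e. 4G locations
```

Note that these are binary prefixes: 1K = 2^10 = 1,024, not 1,000, so 16M here means 16 × 1,048,576 locations.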

Main memory (Primary memory)
The memory is built from DRAM chips. DRAM stands for dynamic random access memory. Multiple DRAMs are used together to contain the instructions and data of a program. In contrast to sequential access memories, such as magnetic tapes, the RAM portion of the term DRAM means that memory accesses take basically the same amount of time no matter what portion of the memory is read. It is volatile, and is used to hold data and programs while they are running.

Cache memory
Inside the processor is another type of memory: cache memory. Cache memory consists of a small, fast memory that acts as a buffer for the DRAM memory. Cache is built using a different memory technology, static random access memory (SRAM). SRAM is faster but less dense, and hence more expensive, than DRAM. SRAM and DRAM are two layers of the memory hierarchy.


Secondary memory
It is nonvolatile memory used to store data and programs between runs. Secondary memory forms the next lower layer of the memory hierarchy. Magnetic disks dominated secondary memory. Because of their size and form factor, personal mobile devices use flash memory, a nonvolatile semiconductor memory, instead of disks.

Flash memory
A non-volatile semiconductor memory. It is cheaper and slower than DRAM but more expensive per bit and faster than magnetic disks. Access times are about 5 to 50 microseconds. Hence, flash memory is the standard secondary memory for PMDs. Alas, unlike disks and DRAM, flash memory bits wear out after 100,000 to 1,000,000 writes. Thus, file systems must keep track of the number of writes and have a strategy to avoid wearing out storage, such as by moving popular data.

Magnetic disk
Also c...

