In Praise of Computer Organization and Design: The Hardware/ Software Interface, ARM® Edition “Textbook selection is often a frustrating act of compromise—pedagogy, content coverage, quality of exposition, level of rigor, cost. Computer Organization and Design is the rare book that hits all the right notes across the board, without compromise. It is not only the premier computer organization textbook, it is a shining example of what all computer science textbooks could and should be.” —Michael Goldweber, Xavier University
“I have been using Computer Organization and Design for years, from the very first edition. This new edition is yet another outstanding improvement on an already classic text. The evolution from desktop computing to mobile computing to Big Data brings new coverage of embedded processors such as the ARM, new material on how software and hardware interact to increase performance, and cloud computing. All this without sacrificing the fundamentals.” —Ed Harcourt, St. Lawrence University
“To Millennials: Computer Organization and Design is the computer architecture book you should keep on your (virtual) bookshelf. The book is both old and new, because it develops venerable principles—Moore’s Law, abstraction, common case fast, redundancy, memory hierarchies, parallelism, and pipelining—but illustrates them with contemporary designs.” —Mark D. Hill, University of Wisconsin-Madison
“The new edition of Computer Organization and Design keeps pace with advances in emerging embedded and many-core (GPU) systems, where tablets and smartphones will/are quickly becoming our new desktops. This text acknowledges these changes, but continues to provide a rich foundation of the fundamentals in computer organization and design which will be needed for the designers of hardware and software that power this new class of devices and systems.” —Dave Kaeli, Northeastern University
“Computer Organization and Design provides more than an introduction to computer architecture. It prepares the reader for the changes necessary to meet the ever-increasing performance needs of mobile systems and big data processing at a time that difficulties in semiconductor scaling are making all systems power constrained. In this new era for computing, hardware and software must be co-designed and system-level architecture is as critical as component-level optimizations.” —Christos Kozyrakis, Stanford University
“Patterson and Hennessy brilliantly address the issues in ever-changing computer hardware architectures, emphasizing the interactions among hardware and software components at various abstraction levels. By interspersing I/O and parallelism concepts with a variety of mechanisms in hardware and software throughout the book, the new edition achieves an excellent holistic presentation of computer architecture for the post-PC era. This book is an essential guide for hardware and software professionals facing energy efficiency and parallelization challenges in Tablet PC to Cloud computing.” —Jae C. Oh, Syracuse University
ARM® EDITION

Computer Organization and Design
THE HARDWARE/SOFTWARE INTERFACE
David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1976, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM, CRA, and SIGARCH. At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI reduced instruction set computer, and the foundation of the commercial SPARC architecture. He was a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to dependable storage systems from many companies. He was also involved in the Network of Workstations (NOW) project, which led to cluster technology used by Internet companies and later to cloud computing. These projects earned four dissertation awards from ACM. His current research projects are Algorithm-Machine-People and Algorithms and Specializers for Provably Optimal Implementations with Resilience and Efficiency. 
The AMP Lab is developing scalable machine learning algorithms, warehouse-scale-computer-friendly programming models, and crowd-sourcing tools to gain valuable insights quickly from big data in the cloud. The ASPIRE Lab uses deep hardware and software co-tuning to achieve the highest possible performance and energy efficiency for mobile and rack computing systems.

John L. Hennessy is the tenth president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Sciences, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates. In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a leave from the university to cofound MIPS Computer Systems (now MIPS Technologies), which developed one of the first commercial RISC microprocessors. As of 2006, over 2 billion MIPS microprocessors have been shipped in devices ranging from video games and palmtop computers to laser printers and network switches. Hennessy subsequently led the DASH (Directory Architecture for Shared Memory) project, which prototyped the first scalable cache-coherent multiprocessor; many of the key ideas have been adopted in modern multiprocessors. In addition to his technical activities and university responsibilities, he has continued to work with numerous startups, both as an early-stage advisor and an investor.
David A. Patterson University of California, Berkeley John L. Hennessy Stanford University
With contributions by Perry Alexander The University of Kansas
David Kaeli Northeastern University
Kevin Lim Hewlett-Packard
Nicole Kaiyan University of Adelaide
John Nickolls NVIDIA
David Kirk NVIDIA
John Y. Oliver Cal Poly, San Luis Obispo
Zachary Kurmas Grand Valley State University
Milos Prvulovic Georgia Tech
Jichuan Chang Google
James R. Larus School of Computer and Communications Science at EPFL
Partha Ranganathan Google
Matthew Farrens University of California, Davis
Jacob Leverich Stanford University
Peter J. Ashenden Ashenden Designs Pty Ltd
Jason D. Bakos University of South Carolina
Javier Diaz Bruguera Universidade de Santiago de Compostela
Mark Smotherman Clemson University

AMSTERDAM • BOSTON • HEIDELBERG • LONDON NEW YORK • OXFORD • PARIS • SAN DIEGO SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Morgan Kaufmann is an imprint of Elsevier
Publisher: Todd Green
Acquisitions Editor: Steve Merken
Development Editor: Nate McFadden
Project Manager: Lisa Jones
Designer: Matthew Limbert

Morgan Kaufmann is an imprint of Elsevier
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, USA

Copyright © 2017 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies, and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our Web site: www.elsevier.com/permissions

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods or professional practices may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information or methods described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

All material relating to ARM® technology has been reproduced with permission from ARM Limited, and should only be used for education purposes.
All ARM-based models shown or referred to in the text must not be used, reproduced or distributed for commercial purposes, and in no event shall purchasing this textbook be construed as granting you or any third party, expressly or by implication, estoppel or otherwise, a license to use any other ARM technology or know-how. Materials provided by ARM are copyright © ARM Limited (or its affiliates).

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-0-12-801733-3

For information on all MK publications visit our Web site at www.mkp.com

Printed and bound in the United States of America
To Linda, who has been, is, and always will be the love of my life
A C K N O W L E D G M E N T S
Figures 1.7, 1.8 Courtesy of iFixit (www.ifixit.com). Figure 1.9 Courtesy of Chipworks (www.chipworks.com). Figure 1.13 Courtesy of Intel. Figures 1.10.1, 1.10.2, 4.15.2 Courtesy of the Charles Babbage Institute, University of Minnesota Libraries, Minneapolis. Figures 1.10.3, 4.15.1, 4.15.3, 5.12.3, 6.14.2 Courtesy of IBM.
Figure 1.10.4 Courtesy of Cray Inc. Figure 1.10.5 Courtesy of Apple Computer, Inc. Figure 1.10.6 Courtesy of the Computer History Museum. Figures 5.17.1, 5.17.2 Courtesy of Museum of Science, Boston. Figure 5.17.4 Courtesy of MIPS Technologies, Inc. Figure 6.15.1 Courtesy of NASA Ames Research Center.
Contents Preface xv
C H A P T E R S
1
Computer Abstractions and Technology 2 1.1 Introduction 3 1.2 Eight Great Ideas in Computer Architecture 11 1.3 Below Your Program 13 1.4 Under the Covers 16 1.5 Technologies for Building Processors and Memory 24 1.6 Performance 28 1.7 The Power Wall 40 1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors 43 1.9 Real Stuff: Benchmarking the Intel Core i7 46 1.10 Fallacies and Pitfalls 49 1.11 Concluding Remarks 52 1.12 Historical Perspective and Further Reading 54 1.13 Exercises 54
2
Instructions: Language of the Computer 60 2.1 Introduction 62 2.2 Operations of the Computer Hardware 63 2.3 Operands of the Computer Hardware 67 2.4 Signed and Unsigned Numbers 75 2.5 Representing Instructions in the Computer 82 2.6 Logical Operations 90 2.7 Instructions for Making Decisions 93 2.8 Supporting Procedures in Computer Hardware 100 2.9 Communicating with People 110 2.10 LEGv8 Addressing for Wide Immediates and Addresses 115 2.11 Parallelism and Instructions: Synchronization 125 2.12 Translating and Starting a Program 128 2.13 A C Sort Example to Put it All Together 137 2.14 Arrays versus Pointers 146
2.15 Advanced Material: Compiling C and Interpreting Java 150 2.16 Real Stuff: MIPS Instructions 150 2.17 Real Stuff: ARMv7 (32-bit) Instructions 152 2.18 Real Stuff: x86 Instructions 154 2.19 Real Stuff: The Rest of the ARMv8 Instruction Set 163 2.20 Fallacies and Pitfalls 169 2.21 Concluding Remarks 171 2.22 Historical Perspective and Further Reading 173 2.23 Exercises 174
3
Arithmetic for Computers 186 3.1 Introduction 188 3.2 Addition and Subtraction 188 3.3 Multiplication 191 3.4 Division 197 3.5 Floating Point 205 3.6 Parallelism and Computer Arithmetic: Subword Parallelism 230 3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86 232 3.8 Real Stuff: The Rest of the ARMv8 Arithmetic Instructions 234 3.9 Going Faster: Subword Parallelism and Matrix Multiply 238 3.10 Fallacies and Pitfalls 242 3.11 Concluding Remarks 245 3.12 Historical Perspective and Further Reading 248 3.13 Exercises 249
4
The Processor 254 4.1 Introduction 256 4.2 Logic Design Conventions 260 4.3 Building a Datapath 263 4.4 A Simple Implementation Scheme 271 4.5 An Overview of Pipelining 283 4.6 Pipelined Datapath and Control 297 4.7 Data Hazards: Forwarding versus Stalling 316 4.8 Control Hazards 328 4.9 Exceptions 336 4.10 Parallelism via Instructions 342 4.11 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Pipelines 355 4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply 363 4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations 366
4.14 Fallacies and Pitfalls 366 4.15 Concluding Remarks 367 4.16 Historical Perspective and Further Reading 368 4.17 Exercises 368
5
Large and Fast: Exploiting Memory Hierarchy 386 5.1 Introduction 388 5.2 Memory Technologies 392 5.3 The Basics of Caches 397 5.4 Measuring and Improving Cache Performance 412 5.5 Dependable Memory Hierarchy 432 5.6 Virtual Machines 438 5.7 Virtual Memory 441 5.8 A Common Framework for Memory Hierarchy 465 5.9 Using a Finite-State Machine to Control a Simple Cache 472 5.10 Parallelism and Memory Hierarchy: Cache Coherence 477 5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 481 5.12 Advanced Material: Implementing Cache Controllers 482 5.13 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Memory Hierarchies 482 5.14 Real Stuff: The Rest of the ARMv8 System and Special Instructions 487 5.15 Going Faster: Cache Blocking and Matrix Multiply 488 5.16 Fallacies and Pitfalls 491 5.17 Concluding Remarks 496 5.18 Historical Perspective and Further Reading 497 5.19 Exercises 497
6
Parallel Processors from Client to Cloud 514 6.1 Introduction 516 6.2 The Difficulty of Creating Parallel Processing Programs 518 6.3 SISD, MIMD, SIMD, SPMD, and Vector 523 6.4 Hardware Multithreading 530 6.5 Multicore and Other Shared Memory Multiprocessors 533 6.6 Introduction to Graphics Processing Units 538 6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors 545 6.8 Introduction to Multiprocessor Network Topologies 550 6.9 Communicating to the Outside World: Cluster Networking 553 6.10 Multiprocessor Benchmarks and Performance Models 554 6.11 Real Stuff: Benchmarking and Rooflines of the Intel Core i7 960 and the NVIDIA Tesla GPU 564
6.12 Going Faster: Multiple Processors and Matrix Multiply 569 6.13 Fallacies and Pitfalls 572 6.14 Concluding Remarks 574 6.15 Historical Perspective and Further Reading 577 6.16 Exercises 577

A P P E N D I X
A
The Basics of Logic Design A-2 A.1 Introduction A-3 A.2 Gates, Truth Tables, and Logic Equations A-4 A.3 Combinational Logic A-9 A.4 Using a Hardware Description Language A-20 A.5 Constructing a Basic Arithmetic Logic Unit A-26 A.6 Faster Addition: Carry Lookahead A-37 A.7 Clocks A-47 A.8 Memory Elements: Flip-Flops, Latches, and Registers A-49 A.9 Memory Elements: SRAMs and DRAMs A-57 A.10 Finite-State Machines A-66 A.11 Timing Methodologies A-71 A.12 Field Programmable Devices A-77 A.13 Concluding Remarks A-78 A.14 Exercises A-79
Index I-1

O N L I N E   C O N T E N T

B
Graphics and Computing GPUs B-2 B.1 Introduction B-3 B.2 GPU System Architectures B-7 B.3 Programming GPUs B-12 B.4 Multithreaded Multiprocessor Architecture B-25 B.5 Parallel Memory System B-36 B.6 Floating Point Arithmetic B-41 B.7 Real Stuff: The NVIDIA GeForce 8800 B-46 B.8 Real Stuff: Mapping Applications to GPUs B-55 B.9 Fallacies and Pitfalls B-72 B.10 Concluding Remarks B-76 B.11 Historical Perspective and Further Reading B-77
C
Mapping Control to Hardware C-2 C.1 Introduction C-3 C.2 Implementing Combinational Control Units C-4 C.3 Implementing Finite-State Machine Control C-8 C.4 Implementing the Next-State Function with a Sequencer C-22 C.5 Translating a Microprogram to Hardware C-28 C.6 Concluding Remarks C-32 C.7 Exercises C-33
D
Survey of RISC Architectures for Desktop, Server, and Embedded Computers D-2
D.1 Introduction D-3 D.2 Addressing Modes and Instruction Formats D-5 D.3 Instructions: The MIPS Core Subset D-9 D.4 Instructions: Multimedia Extensions of the Desktop/Server RISCs D-16 D.5 Instructions: Digital Signal-Processing Extensions of the Embedded RISCs D-19 D.6 Instructions: Common Extensions to MIPS Core D-20 D.7 Instructions Unique to MIPS-64 D-25 D.8 Instructions Unique to Alpha D-27 D.9 Instructions Unique to SPARC v9 D-29 D.10 Instructions Unique to PowerPC D-32 D.11 Instructions Unique to PA-RISC 2.0 D-34 D.12 Instructions Unique to ARM D-36 D.13 Instructions Unique to Thumb D-38 D.14 Instructions Unique to SuperH D-39 D.15 Instructions Unique to M32R D-40 D.16 Instructions Unique to MIPS-16 D-40 D.17 Concluding Remarks D-43 Glossary G-1 Further Reading FR-1
Preface

The most beautiful thing we can experience is the mysterious. It is the source of all true art and science.
—Albert Einstein, What I Believe, 1930
About This Book We believe that learning in computer science and engineering should reflect the current state of the field, as well as introduce the principles that are shaping computing. We also feel that readers in every specialty of computing need to appreciate the organizational paradigms that determine the capabilities, performance, energy, and, ultimately, the ...