Ch05 - Designing Trusted Operating Systems

Title: Ch05 - Designing Trusted Operating Systems
Author: Nthabiseng Motau
Course: Information Security
Institution: University of South Africa

Summary

Textbook notes.


Description

Chapter 5: Designing Trusted Operating Systems. Charles P. Pfleeger & Shari Lawrence Pfleeger, Security in Computing, 4th Ed., Pearson Education, 2007.


• An operating system is trusted if we have confidence that it provides these
  four services consistently and effectively:
  • Policy - every system can be described by its requirements: statements of
    what the system should do and how it should do it.
  • Model - designers must be confident that the proposed system will meet its
    requirements while protecting appropriate objects and relationships.


  • Design - designers choose a means to implement it.
  • Trust - trust in the system is rooted in two aspects:
    • Features - the operating system has all the necessary functionality
      needed to enforce the expected security policy.
    • Assurance - the operating system has been implemented in such a way that
      we have confidence it will enforce the security policy correctly and
      effectively.


5.1. What Is a Trusted System?

• Software is trusted software if we know that the code has been rigorously
  developed and analyzed, giving us reason to trust that the code does what it
  is expected to do and nothing more.
• Trusted software has certain key characteristics:
  • Functional correctness.
  • Enforcement of integrity.
  • Limited privilege.
  • Appropriate confidence level.


• Security professionals prefer to speak of trusted instead of secure
  operating systems.
• Secure reflects a dichotomy: something is either secure or not secure.
• Trust is not a dichotomy; there are degrees of trust.


Secure vs. Trusted:

Secure:
• Either-or: something either is or is not secure.
• A property of the presenter.
• Asserted, based on product characteristics.
• Absolute: not qualified as to how, where, when, or by whom used.
• A goal.

Trusted:
• Graded: there are degrees of "trustworthiness."
• A property of the receiver.
• Judged, based on evidence and analysis.
• Relative: viewed in the context of use.
• A characteristic.

5.2. Security Policies

• A security policy is a statement of the security we expect the system to
  enforce.
• Military Security Policy
  • Based on protecting classified information.
  • Each piece of information is ranked at a particular sensitivity level, such
    as unclassified, restricted, confidential, secret, or top secret.
  • The ranks or levels form a hierarchy, and they reflect an increasing order
    of sensitivity.


• Military Security Policy (Cont'd)
  • Information access is limited by the need-to-know rule.
  • Each piece of classified information may be associated with one or more
    projects, called compartments, describing the subject matter of the
    information.
  • Compartments are identified by names such as snowshoe, crypto, and Sweden.
  • The combination of a sensitivity level and a set of compartments is called
    the class or classification of a piece of information.

• Military Security Policy (Cont'd)
  • Introduce a relation ≤, called dominance, on the sets of sensitive objects
    and subjects. For a subject s and an object o,

      s ≤ o if and only if rank(s) ≤ rank(o) and compartments(s) ⊆ compartments(o).

  • We say that o dominates s (or s is dominated by o) if s ≤ o; the relation ≥
    is its opposite. Dominance is used to limit the sensitivity and content of
    information a subject can access. A minimal sketch of the dominance check
    follows.
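The sketch below, in Python, checks the dominance relation for labels written
as (rank, compartments) pairs. The rank names and compartment names in the
example are illustrative assumptions, not values taken from the text.

    # Dominance check for labels of the form (rank, compartments).
    RANKS = {"unclassified": 0, "restricted": 1, "confidential": 2,
             "secret": 3, "top secret": 4}

    def leq(a, b):
        """The relation a <= b (b dominates a): b's rank is at least a's rank
        and a's compartments are a superset of a's... i.e. a's compartments are
        a subset of b's compartments."""
        (a_rank, a_comp), (b_rank, b_comp) = a, b
        return RANKS[a_rank] <= RANKS[b_rank] and a_comp <= b_comp

    # A secret-cleared subject with compartments {crypto, sweden} dominates a
    # confidential object in {crypto}, but not one in {snowshoe}.
    subject = ("secret", {"crypto", "sweden"})
    print(leq(("confidential", {"crypto"}), subject))    # True
    print(leq(("confidential", {"snowshoe"}), subject))  # False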

• Military Security Policy (Cont'd)
  • A subject can read an object only if
    • the clearance level of the subject is at least as high as that of the
      information, and
    • the subject has a need to know about all compartments for which the
      information is classified.
  • These conditions are equivalent to saying that the subject dominates the
    object.


• Commercial Security Policies
  • Data items at any level may have different degrees of sensitivity, such as
    public, proprietary, or internal; the names may vary among organizations,
    and no universal hierarchy applies.
  • Projects and departments tend to be fairly well separated, with some
    overlap as people work on two or more projects.
  • Corporate-level responsibilities tend to overlie projects and departments,
    as people throughout the corporation may need accounting or personnel data.

Figure: Commercial View of Sensitive Information.


• Commercial Security Policies
  • Two significant differences exist between commercial and military
    information security:
    • First, outside the military there is usually no formalized notion of
      clearances.
    • Second, because there is no formal concept of a clearance, the rules for
      allowing access are less regularized.


5.3. Models of Security

• Multilevel Security
  • We want to build a model to represent a range of sensitivities and to
    reflect the need to separate subjects rigorously from objects to which they
    should not have access.
  • The generalized model is called the lattice model of security.


• What Is a Lattice?
  • A lattice is a mathematical structure of elements organized by a relation
    among them, represented by a relational operator.

Figure: Sample Lattice.

• Multilevel Security (Cont'd)
  • Lattice Model of Access Security
    • The military security model is representative of a more general scheme,
      called a lattice.
    • The dominance relation ≤ defined in the military model is the relation
      for the lattice.
    • The relation ≤ is transitive and antisymmetric:
      • Transitive: if a ≤ b and b ≤ c, then a ≤ c.
      • Antisymmetric: if a ≤ b and b ≤ a, then a = b.


• Multilevel Security (Cont'd)
  • Lattice Model of Access Security (Cont'd)
    • The largest element of the lattice is the classification ⟨top secret; all
      compartments⟩.
    • The smallest element is the classification ⟨unclassified; no
      compartments⟩. A sketch of combining two classes within this lattice
      follows.
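As an illustration of the lattice structure, the sketch below computes the
least upper bound (join) of two classes: the lowest class that dominates both.
It reuses the (rank, compartments) label representation of the earlier
dominance sketch; the names are illustrative assumptions.

    # Least upper bound (join) of two classes under the dominance relation.
    RANKS = {"unclassified": 0, "restricted": 1, "confidential": 2,
             "secret": 3, "top secret": 4}

    def join(a, b):
        """Return the lowest class that dominates both a and b: the higher of
        the two ranks together with the union of the compartments."""
        (a_rank, a_comp), (b_rank, b_comp) = a, b
        higher = a_rank if RANKS[a_rank] >= RANKS[b_rank] else b_rank
        return (higher, a_comp | b_comp)

    # Combining secret/{crypto} with confidential/{sweden} yields
    # secret/{crypto, sweden}; joining everything yields the lattice's largest
    # element, top secret with all compartments.
    print(join(("secret", {"crypto"}), ("confidential", {"sweden"})))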


• Multilevel Security (Cont'd)
  • Bell-La Padula Confidentiality Model
    • A formal description of the allowable paths of information flow in a
      secure system.
    • The model's goal is to identify allowable communication when maintaining
      secrecy is important.
    • The model has been used to define security requirements for systems
      concurrently handling data at different sensitivity levels.
    • We are interested in secure information flows because they describe
      acceptable connections between subjects and objects of different levels
      of sensitivity.

• Multilevel Security (Cont'd)
  • Bell-La Padula Confidentiality Model (Cont'd)
    • Consider a security system with the following properties:
      • The system covers a set of subjects S and a set of objects O.
      • Each subject s in S and each object o in O has a fixed security class
        C(s) and C(o) (denoting clearance and classification level).
      • The security classes are ordered by a relation ≤.


• Multilevel Security (Cont'd)
  • Bell-La Padula Confidentiality Model (Cont'd)
    • Two properties characterize the secure flow of information:
      • Simple Security Property. A subject s may have read access to an
        object o only if C(o) ≤ C(s).
      • *-Property (called the "star property"). A subject s who has read
        access to an object o may have write access to an object p only if
        C(o) ≤ C(p).
    • The *-property prevents write-down, which occurs when a subject with
      access to high-level data transfers that data by writing it to a
      low-level object.
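A minimal sketch of the two Bell-La Padula checks, assuming the
(rank, compartments) security classes and dominance relation used in the
earlier sketches; the labels in the example calls are illustrative.

    # Bell-La Padula: no read-up (simple security) and no write-down (*-property).
    RANKS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def leq(a, b):
        """C(a) <= C(b): lower-or-equal rank and subset compartments."""
        return RANKS[a[0]] <= RANKS[b[0]] and a[1] <= b[1]

    def may_read(subject_clearance, obj_class):
        """Simple security property: read only if C(o) <= C(s)."""
        return leq(obj_class, subject_clearance)

    def may_write(read_class, target_class):
        """*-property: a subject that has read at C(o) may write to p only if
        C(o) <= C(p), so observed data never flows to a lower class."""
        return leq(read_class, target_class)

    s = ("secret", {"crypto"})
    print(may_read(s, ("confidential", {"crypto"})))                   # True
    print(may_read(s, ("top secret", {"crypto"})))                     # False: read-up blocked
    print(may_write(("secret", {"crypto"}), ("unclassified", set())))  # False: write-down blocked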

• Multilevel Security (Cont'd)
  • Bell-La Padula Confidentiality Model (Cont'd)
    • The implications of these two properties are shown in Figure 5-7.


• Multilevel Security (Cont'd)
  • Bell-La Padula Confidentiality Model (Cont'd)
    • The classifications of subjects (represented by squares) and objects
      (represented by circles) are indicated by their positions: as the
      classification of an item increases, it is shown higher in the figure.
    • The flow of information is generally horizontal (to and from the same
      level) and upward (from lower levels to higher).
    • A downward flow is acceptable only if the highly cleared subject does not
      pass any high-sensitivity data to the lower-sensitivity object.


• Multilevel Security (Cont'd)
  • Bell-La Padula Confidentiality Model (Cont'd)
    • For computing systems, downward flow of information is difficult because
      a computer program cannot readily distinguish between having read a piece
      of information and having read a piece of information that influenced
      what was later written.


• Multilevel Security (Cont'd)
  • Biba Integrity Model
    • The Biba model is the counterpart (sometimes called the dual) of the
      Bell-La Padula model.
    • Biba defines "integrity levels," which are analogous to the sensitivity
      levels of the Bell-La Padula model.
    • Subjects and objects are ordered by an integrity classification scheme,
      denoted I(s) and I(o).


• Multilevel Security (Cont'd)
  • Biba Integrity Model (Cont'd)
    • The properties are:
      • Simple Integrity Property. Subject s can modify (have write access to)
        object o only if I(s) ≥ I(o).
      • Integrity *-Property. If subject s has read access to object o with
        integrity level I(o), s can have write access to object p only if
        I(o) ≥ I(p).
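A minimal sketch of the two Biba checks; the integrity levels used in the
example are illustrative assumptions, not values from the text.

    # Biba: no write-up in integrity, and low-integrity reads taint later writes.
    HIGH, MEDIUM, LOW = 3, 2, 1   # illustrative integrity levels

    def simple_integrity(subject_level, obj_level):
        """Simple integrity property: s may modify o only if I(s) >= I(o)."""
        return subject_level >= obj_level

    def integrity_star(read_level, target_level):
        """Integrity *-property: having read an object at I(o), s may write
        object p only if I(o) >= I(p), so low-integrity data cannot flow into
        high-integrity objects."""
        return read_level >= target_level

    print(simple_integrity(MEDIUM, LOW))   # True: may write to lower integrity
    print(simple_integrity(LOW, HIGH))     # False: may not modify higher-integrity data
    print(integrity_star(LOW, HIGH))       # False: data read at LOW stays out of HIGH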


5.4. Trusted Operating System Design

• Trusted System Design Elements
  • That security considerations pervade the design and structure of operating
    systems implies two things:
    • First, an operating system controls the interaction between subjects and
      objects, so security must be considered in every aspect of its design.
    • Second, because security appears in every part of an operating system,
      its design and implementation cannot be left fuzzy or vague until the
      rest of the system is working and being tested.


• Trusted System Design Elements (Cont'd)
  • Several important design principles are quite particular to security and
    essential for building a solid, trusted operating system:
    • Least privilege. Each user and each program should operate by using the
      fewest privileges possible.
    • Economy of mechanism. The design of the protection system should be
      small, simple, and straightforward. Such a protection system can be
      carefully analyzed, exhaustively tested, perhaps verified, and relied on.
    • Open design. An open design is available for extensive public scrutiny,
      thereby providing independent confirmation of the design security.

• Trusted System Design Elements (Cont'd)
  • Several important design principles are quite particular to security and
    essential for building a solid, trusted operating system. (Cont'd)
    • Complete mediation. Every access attempt must be checked. Both direct
      access attempts (requests) and attempts to circumvent the access checking
      mechanism should be considered, and the mechanism should be positioned so
      that it cannot be circumvented.
    • Permission based. The default condition should be denial of access. A
      conservative designer identifies the items that should be accessible,
      rather than those that should not. (A default-deny sketch appears after
      this list of principles.)

• Trusted System Design Elements (Cont'd)
  • Several important design principles are quite particular to security and
    essential for building a solid, trusted operating system. (Cont'd)
    • Separation of privilege. Ideally, access to objects should depend on more
      than one condition, such as user authentication plus a cryptographic key.
      In this way, someone who defeats one protection system will not have
      complete access.
    • Least common mechanism. Shared objects provide potential channels for
      information flow. Systems employing physical or logical separation reduce
      the risk from sharing.
    • Ease of use. If a protection mechanism is easy to use, it is unlikely to
      be avoided.
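The following sketch illustrates the permission-based (default-deny) principle
together with complete mediation: a single check is consulted for every
request, and anything not explicitly granted is refused. The subjects,
objects, and rights in the table are illustrative assumptions.

    # Default deny: only explicitly granted (subject, object, right) triples pass.
    GRANTS = {
        ("alice", "payroll.db", "read"),
        ("alice", "payroll.db", "write"),
        ("bob",   "payroll.db", "read"),
    }

    def check_access(subject, obj, right):
        """Complete mediation point: every access request is answered here,
        and the default answer is denial."""
        return (subject, obj, right) in GRANTS

    print(check_access("bob", "payroll.db", "read"))    # True: explicitly granted
    print(check_access("bob", "payroll.db", "write"))   # False: not listed, so denied
    print(check_access("carol", "payroll.db", "read"))  # False: unknown subjects get nothing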

• Security Features of Ordinary Operating Systems


• Security Features of Ordinary Operating Systems (Cont'd)
  • User authentication.
  • Memory protection.
  • File and I/O device access control.
  • Allocation of and access control to general objects.
  • Enforced sharing.
  • Guaranteed fair service.
  • Interprocess communication and synchronization.
  • Protected operating system protection data.

• Security Features of Trusted Operating Systems


• Security Features of Trusted Operating Systems (Cont'd)
  • Identification and Authentication
    • Trusted operating systems require secure identification of individuals,
      and each individual must be uniquely identified.
  • Mandatory and Discretionary Access Control
    • Mandatory access control (MAC) means that access control policy decisions
      are made beyond the control of the individual owner of an object.
    • Discretionary access control (DAC) leaves a certain amount of access
      control to the discretion of the object's owner or to anyone else who is
      authorized to control the object's access.
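A minimal sketch contrasting the two kinds of control: a MAC decision made
from system-assigned labels that the owner cannot override, and a DAC decision
made from an owner-managed access list. The levels, owners, and names are
illustrative assumptions.

    # MAC: system-wide labels decide; the owner has no say.
    LEVELS = {"public": 0, "internal": 1, "proprietary": 2}

    def mac_allows(subject_level, object_level):
        return LEVELS[subject_level] >= LEVELS[object_level]

    # DAC: the owner maintains the access list and may grant access at will.
    class DacObject:
        def __init__(self, owner):
            self.owner = owner
            self.acl = {owner}

        def grant(self, granter, grantee):
            if granter != self.owner:
                raise PermissionError("only the owner may change the ACL")
            self.acl.add(grantee)

        def dac_allows(self, subject):
            return subject in self.acl

    doc = DacObject(owner="alice")
    doc.grant("alice", "bob")
    # When both controls apply, access requires both checks to pass.
    print(mac_allows("internal", "internal") and doc.dac_allows("bob"))  # True
    print(mac_allows("public", "internal") and doc.dac_allows("bob"))    # False: MAC blocks it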

• Security Features of Trusted Operating Systems (Cont'd)
  • Object Reuse Protection
    • To prevent object reuse leakage, operating systems clear (that is,
      overwrite) all space to be reassigned before allowing the next user to
      have access to it.
  • Complete Mediation
    • All accesses must be controlled.
  • Trusted Path
    • Users want an unmistakable communication path, called a trusted path, to
      ensure that they are supplying protected information only to a legitimate
      receiver.
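As a small illustration of object reuse protection, the sketch below models an
allocator that overwrites a page before handing it to the next user; the page
pool and sizes are illustrative assumptions.

    # Storage is cleared before reassignment, so no residue leaks to the next user.
    class PagePool:
        PAGE_SIZE = 4096

        def __init__(self, num_pages):
            self._free = [bytearray(self.PAGE_SIZE) for _ in range(num_pages)]

        def allocate(self):
            page = self._free.pop()
            page[:] = bytes(self.PAGE_SIZE)   # overwrite with zeros before reuse
            return page

        def release(self, page):
            self._free.append(page)           # old contents may still be present here

    pool = PagePool(num_pages=2)
    p = pool.allocate()
    p[:6] = b"secret"
    pool.release(p)
    q = pool.allocate()                       # the next user receives a zeroed page
    print(q[:6])                              # bytearray(b'\x00\x00\x00\x00\x00\x00')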


• Security Features of Trusted Operating Systems (Cont'd)
  • Accountability and Audit
    • Accountability usually entails maintaining a log of security-relevant
      events that have occurred, listing each event and the person responsible
      for the addition, deletion, or change. This audit log must obviously be
      protected from outsiders, and every security-relevant event must be
      recorded.
  • Audit Log Reduction
  • Intrusion Detection
    • Intrusion detection software builds patterns of normal system usage,
      triggering an alarm any time the usage seems abnormal.
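A minimal sketch of anomaly-based intrusion detection in the spirit of the
description above: a baseline of normal usage is learned, and an alarm is
raised when an observation deviates too far from it. The statistic (mean and
standard deviation), the threshold, and the sample counts are illustrative
assumptions.

    from statistics import mean, stdev

    class AnomalyDetector:
        def __init__(self, baseline_counts, threshold=3.0):
            # baseline_counts: e.g. failed logins per hour observed during normal use
            self.mu = mean(baseline_counts)
            self.sigma = stdev(baseline_counts)
            self.threshold = threshold

        def is_abnormal(self, observed):
            """Alarm when the observation lies more than `threshold` standard
            deviations from the learned baseline."""
            return abs(observed - self.mu) > self.threshold * self.sigma

    detector = AnomalyDetector([2, 3, 1, 4, 2, 3, 2])   # quiet, normal hours
    print(detector.is_abnormal(3))    # False: within the normal range
    print(detector.is_abnormal(40))   # True: usage looks abnormal, raise an alarm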

• Kernelized Design
  • A security kernel is responsible for enforcing the security mechanisms of
    the entire operating system.
  • The security kernel provides the security interfaces among the hardware,
    operating system, and other parts of the computing system.
  • Typically, the operating system is designed so that the security kernel is
    contained within the operating system kernel.


• Kernelized Design (Cont'd)
  • There are several good design reasons why security functions may be
    isolated in a security kernel:
    • Coverage. Every access to a protected object must pass through the
      security kernel.
    • Separation. Isolating security mechanisms both from the rest of the
      operating system and from the user space makes it easier to protect those
      mechanisms from penetration by the operating system or the users.
    • Unity. All security functions are performed by a single set of code, so
      it is easier to trace the cause of any problems that arise with these
      functions.

• Kernelized Design (Cont'd)
  • There are several good design reasons why security functions may be
    isolated in a security kernel. (Cont'd)
    • Modifiability. Changes to the security mechanisms are easier to make and
      easier to test.
    • Compactness. Because it performs only security functions, the security
      kernel is likely to be relatively small.
    • Verifiability. Being relatively small, the security kernel can be
      analyzed rigorously. For example, formal methods can be used to ensure
      that all security situations (such as states and state changes) have been
      covered by the design.

• Reference Monitor


• Reference Monitor (Cont'd)
  • Must be:
    • Tamperproof, that is, impossible to weaken or disable.
    • Unbypassable, that is, always invoked when access to any object is
      required.
    • Analyzable, that is, small enough to be subjected to analysis and
      testing, the completeness of which can be ensured.
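The sketch below shows the reference monitor idea in miniature: one small
checkpoint through which every access request passes, consulting a policy and
recording the decision. The class name, the policy callback, and the audit
list are illustrative assumptions, not an interface from the text.

    # A single mediation point: subjects never touch objects directly.
    class ReferenceMonitor:
        def __init__(self, policy):
            self._policy = policy      # policy(subject, obj, right) -> bool
            self._audit = []           # every decision is recorded

        def access(self, subject, obj, right, action):
            """Mediate one access: check the policy, log the decision, and only
            then perform the action on behalf of the subject."""
            allowed = self._policy(subject, obj, right)
            self._audit.append((subject, obj, right, allowed))
            if not allowed:
                raise PermissionError(f"{subject} may not {right} {obj}")
            return action()

    # Usage: every object access is routed through the monitor.
    rm = ReferenceMonitor(lambda s, o, r: (s, o, r) == ("alice", "report.txt", "read"))
    print(rm.access("alice", "report.txt", "read", lambda: "file contents"))  # "file contents"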


• Trusted Computing Base (TCB)
  • The trusted computing base, or TCB, is the name we give to everything in
    the trusted operating system necessary to enforce the security policy.


• Trusted Computing Base (Cont'd)
  • The TCB, which must maintain the secrecy and integrity of each domain,
    monitors four basic interactions:
    • Process activation.
    • Execution domain switching. Processes running in one domain often invoke
      processes in other domains to obtain more sensitive data or services.
    • Memory protection. Because each domain includes code and data stored in
      memory, the TCB must monitor memory references to ensure secrecy and
      integrity for each domain.
    • I/O operation.


• Separation/Isolation
  • Physical separation: two different processes use two different hardware
    facilities.
  • Temporal separation occurs when different processes are run at different
    times.
  • Encryption is used for cryptographic separation.
  • Logical separation, also called isolation, is provided when a process such
    as a reference monitor separates one user's objects from those of another
    user.


• Virtualization
  • The operating system emulates or simulates a collection of a computer
    system's resources.
  • A virtual machine is a collection of real or simulated hardware facilities.


• Virtualization (Cont'd)
  • Multiple Virtual Memory Spaces


• Virtualization (Cont'd)
  • Virtual Machines


• Layered Design

Figure: Layered Operating System.


5.5. Assurance in Trusted Operating Systems

• Typical Operating System Flaws
  • Known Vulnerabilities
    • User interaction is the largest single source of operating system
      vulnerabilities.
    • An ambiguity in access policy: on one hand, we want to separate users and
      protect their individual resources; on the other hand, users depend on
      shared access to libraries, utility programs, common data, and system
      tables.
    • Incomplete mediation.
    • Generality is a fourth protection weakness, especially among commercial
      operating systems for large computing systems.

• Assurance Methods
  • Testing
    • Testing is the most widely accepted assurance technique.
    • However, conclusions based on testing are necessarily limited, for the
      following reasons:
      • Testing can demonstrate the existence of a problem, but passing tests
        does not demonstrate the absence of problems.
      • Testing adequately within reasonable time or effort is difficult
        because...

