
Chapter 2 Operating System Overview

Operating System Definition(定义)

  • controls the execution of application programs
  • Acts as an interface between applications and hardware

Operating System Objectives (操作系统目标)

  1. As User/Computer Interface —Convenience/方便
    Makes the computer more convenient to use
  2. As Resource Manager—Efficiency /有效
    Allows computer system resources to be used in an efficient manner
  3. As System Software—Ability to evolve /扩展
    Permits effective development, testing, and introduction of new system functions without interfering with service.

Kernel(内核)

  • Portion of operating system that is in main memory
  • Contains most frequently used functions
  • Also called the nucleus(核心)

Job Control Language (JCL)

  • Special type of programming language
  • Provides instruction to the monitor

Hardware Features

  1. Memory protection
    Do not allow the memory area containing the monitor to be altered
  2. Timer
    Prevents a job from monopolizing(独占) the system
  3. Privileged instructions
    Certain machine level instructions can only be executed by the monitor
  4. Interrupts
    This feature gives the OS more flexibility in relinquishing control to and regaining control from user programs

CPU mode

  1. User program executes in user mode(用户模式)
    Certain instructions may not be executed
    Some memory can not be accessed
  2. Monitor executes in system mode(系统模式)
    Kernel mode(内核模式)
    Privileged instructions are executed
    Protected areas of memory may be accessed

    A process consists of three components(进程的三个组成部分)

  1. An executable program / code(代码段)
  2. Associated data needed by the program(数据段)
  3. Execution context of the program(执行上下文,核心部分)
    All information the operating system needs to manage the process

The factors that should be considered(应考虑的因素)

  • Fairness(公平性)
    Give equal and fair access to resources
  • Differential responsiveness(有差别的响应性)
    Discriminate among different classes of processes
  • Efficiency(高效性)
    Maximize throughput(吞吐量), minimize response time(响应时间), and accommodate as many users as possible

    OS has five storage management responsibilities

  • Process isolation(进程隔离)
  • Automatic allocation and management(自动分配和管理)
  • Support of modular programming(模块化程序设计)
  • Protection and access control (保护与存取控制)
  • Long-term storage(长期存储)

    Virtual Memory / VM

  • Allows programmers to address memory from a logical point of view
  • No hiatus(脱节) between the execution of successive processes while one process was written out to secondary store and the successor process was read in

    Page

  • Allows a process to be comprised of a number of fixed-size blocks, called pages
    Each page may be located anywhere in main memory

    Virtual Memory Addressing

  • Storage system consists of main memory and auxiliary memory


Symmetric multiprocessing (SMP) (对称多处理)

  • There are multiple processors
  • These processors share same main memory and I/O facilities
  • All processors can perform the same functions
  • The OS of an SMP schedules processes or threads across all of the processors.
  • SMP has a number of potential(潜在的) advantages over uniprocessor(单处理机)architecture, including the following:
    1. Performance 性能
    2. Availability 可用性
    3. Incremental growth 可增长
    4. Scaling(可扩展): vendors can offer a range of products(多型号)

      Distributed operating systems(分布式操作系统)

  • Provides the illusion of a single main memory space and single secondary memory space

Object-oriented design(面向对象程序设计)

  • Used for adding modular extensions to a small kernel
    Enables programmers to customize an operating system without disrupting system integrity

Unsolved Questions

  1. The operating system provides many types of services to end-users, programmers and system designers, including:
    a. Built-in user applications
    b. All of the above
    c. Relational database capabilities with the internal file system
    $\color{red}{d}$. Error detection and response
    Feedback: the OS typically provides services in the following areas:• Program development • Program execution • Access to I/O devices • Controlled access to files • System access • Error detection and response • Accounting
  2. A computer hardware feature that is vital to the effective operation of a multiprogramming operating system is:
    a. Very large memory
    b. Multiple processors
    $\color{red}{c}$. I/O interrupts and DMA
    d. All of the above
    Feedback: support for I/O interrupts and DMA (direct memory access)
  3. Which of the following major lines of computer system development created problems in timing and synchronization that contributed to the development of the concept of the process?
    a. Multiprogramming batch operation systems
    b. Real time transaction systems
    c. Time sharing systems
    $\color{red}{d}$. All of the above

Chapter 3 Process Description and Control

Process Control Block (PCB) (进程控制块)

  1. Contains the process elements

    Process Identification

    Processor State Information

    • User-Visible Registers
    • Control and Status Registers
    • Stack Pointers

      Process Control information

    • Scheduling and State Information
    • Data Structuring (link information)
    • Interprocess Communication
    • Process Privileges
    • Memory Management
    • Resource Ownership and Utilization
  2. Created and managed by the operating system

  3. Allows support for multiple processes

    Two-State Process Model

    Process may be in one of two states
  1. Running
  2. Not-running
    Limitations:
  • Not-running lumps together two different situations:
    ready to execute
    waiting for I/O (blocked)(阻塞)
  • Dispatcher(调度器) cannot simply select the process that has been in the queue the longest, because it may be blocked

Five-State Model

States:

  1. Running(运行态)
  2. Ready(就绪态)
  3. Blocked(阻塞态)
  4. New(新建态)
  5. Exit(退出态)
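The legal transitions of the five-state model can be captured in a small table; this is a sketch, and the state names and `can_move` helper are illustrative, not from the notes:

```python
# Legal transitions in the five-state process model.
TRANSITIONS = {
    ("New", "Ready"),        # admit: OS is ready to take on a new process
    ("Ready", "Running"),    # dispatch
    ("Running", "Ready"),    # timeout / preemption
    ("Running", "Blocked"),  # event wait (e.g. an I/O request)
    ("Blocked", "Ready"),    # event occurs
    ("Running", "Exit"),     # normal or abnormal termination
}

def can_move(src, dst):
    """True if src -> dst is a valid transition in the five-state model."""
    return (src, dst) in TRANSITIONS
```

Note that a Blocked process cannot be dispatched directly; it must pass through Ready first.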

    Operating System Control Structures

  1. Information about the current status of each process and resource (每个进程和资源的当前状态)
  2. Tables are constructed for each entity the operating system manages (操作系统构造并维护它所管理的所有实体的信息表)

    Memory Tables

  1. Allocation of main memory to processes (分配给进程的主存)
  2. Allocation of secondary memory to processes (分配给进程的辅存)
  3. Protection attributes for access to shared memory regions(共享内存区域的保护属性)
  4. Information needed to manage virtual memory(虚拟内存的管理信息)

    I/O Tables

  1. I/O device is available or assigned(分配状态)
  2. Status of I/O operation
  3. Location in main memory being used as the source or destination of the I/O transfer (数据传送的源和目的地址)

    File Tables

  1. Existence of files
  2. Location on secondary memory
  3. Current status
  4. Attributes

    Process Table

    Every entry is a description of a process image (进程映像)
  • process image: collection of program, data, stack, and attributes

    Process Creation

  1. Assign a unique process identifier(进程标识符)
  2. Allocate space for the process
  3. Initialize process control block
  4. Set up appropriate linkages(建立适当的联系)
    Ex: add new process to linked list used for scheduling queue
  5. Create or expand other data structures
    Ex: maintain an accounting file

    When to Switch a Process

  1. Interrupt
  • Clock interrupt
    process has executed for the maximum allowable time slice
  • I/O interrupt
  • Memory fault
    Referenced virtual address is not in main memory, so it must be brought in.
  2. Trap
  • error or exception occurred
  • may cause process to be moved to Exit state
  3. Supervisor call (System Call)
  • such as file open

    Process Switching

  1. Save context of processor including program counter and other registers
  2. Update the process control block of the currently Running process and change its state
  3. Move process control block to appropriate queue – ready; blocked; ready/suspend
  4. Select another process for execution
  5. Update the process control block of the process selected and change its state
  6. Update memory-management data structures
  7. Restore context of the selected process
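The seven steps above can be mimicked in a toy simulation; `PCB`, `process_switch`, and the dict standing in for processor registers are illustrative names, not part of the notes:

```python
class PCB:
    """Minimal process control block: identifier, state, saved context."""
    def __init__(self, pid, context=None):
        self.pid = pid
        self.state = "Ready"
        self.context = context or {}

def process_switch(cpu, current, nxt, ready_queue):
    """Steps 1-7 of a process switch, simulated on a register dict."""
    current.context = dict(cpu)          # 1. save processor context (PC, registers)
    current.state = "Ready"              # 2. update the PCB of the running process
    ready_queue.append(current)          # 3. move its PCB to the appropriate queue
    ready_queue.remove(nxt)              # 4. select another process for execution
    nxt.state = "Running"                # 5. update the selected PCB
    # 6. memory-management structures would be updated here
    cpu.clear()
    cpu.update(nxt.context)              # 7. restore context of the selected process
    return nxt
```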

    Execution of the Operating System

  8. Non-process Kernel(无进程内核)
    Execute kernel outside of any process
    Operating system code is executed as a separate entity that operates in privileged mode
  9. Execution Within User Processes(在用户进程中执行)
    Operating system software within context of a user process
    Process executes in privileged mode when executing operating system code
  10. Process-Based Operating System(基于进程的OS)
    Implement operating system as a collection of system processes
    Useful in multi-processor or multi-computer environment

    Unsolved Questions

  1. The behavior of a processor can be characterized by examining:
    a. Multiple process traces
    b. A single process trace
    c. All of the above
    $\color{red}{d}$. The interleaving of the process traces
    Feedback: We can characterize the behavior of the processor by showing how the traces of the various processes are interleaved. (A single process is characterized by its single process trace.)
  2. The Process Image element that contains the modifiable part of the user space is called the:
    a. User Program
    $\color{red}{b}$. None of the above
    c. System Stack
    d. Process Control Block
    Feedback: User data.
  3. In a typical UNIX system, the element of the process image that contains the processor status information is the:
    a. All of the above
    b. System-level context
    $\color{red}{c}$. Register context
    d. User-level context
    Feedback: processor status information is kept in the register context.
  4. In a Process Model that implements two suspend states, a valid state transition is represented by:
    a. Ready -> Ready/Suspend
    b. Running -> Ready/Suspend
    c. Ready/Suspend -> Ready
    $\color{red}{d}$. All of the above
Chapter 4 Threads, SMP, and Microkernels

Thread and Process Definitions(线程和进程定义)

  • The unit of dispatching is referred to as a thread or lightweight process(调度的单位称为线程或轻量进程)
  • The unit of resource ownership is referred to as a process or task(资源所有权的单位称为进程或者任务)

    Benefits(优点) of Threads

  1. Takes less time to create a new thread than a process (创建快)
  2. Less time to terminate a thread than a process (结束快)
  3. Less time to switch between two threads within the same process (切换快)
  4. Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel (通信快)

    User-Level Threads (ULT,用户级线程)

  1. Multithreading is implemented by a threads library
  2. All thread management is done by the application
  3. The kernel is not aware of the existence of threads & scheduling is done on a process basis

    Kernel-Level Threads(内核级线程)

  1. Kernel maintains context information for the process and the threads
  2. Scheduling is done on a thread basis

    Advantages of ULT over KLT

  1. Less overhead(更少的代价) for thread switches (no mode switch is required)
  2. Scheduling can be application specific
  3. ULTs can run on any operating system

    Disadvantages of ULT over KLT

  1. When one thread blocks, all other threads of the process are blocked
  2. A multithreaded application cannot take advantage of multiprocessing
  3. Ways to work around these drawbacks:
    Multiple processes(多进程)
    Jacketing(套管): convert a blocking call into a non-blocking one
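Jacketing can be sketched as a wrapper that replaces a blocking call with a non-blocking check plus a yield to the threads library, so other ULTs in the process keep running; `FakeFile` and `yield_to_scheduler` are invented stand-ins for illustration:

```python
class FakeFile:
    """Simulated device that becomes ready only after a few polls."""
    def __init__(self, polls_needed, data):
        self.polls = polls_needed
        self.data = data
    def ready(self):
        self.polls -= 1
        return self.polls <= 0
    def read_nonblocking(self):
        return self.data

def jacketed_read(f, yield_to_scheduler):
    """Jacket routine: instead of issuing a blocking read (which would
    block every ULT in the process), poll with a non-blocking check and
    yield control to the threads library while the device is not ready."""
    while not f.ready():
        yield_to_scheduler()   # another ULT in this process can run now
    return f.read_nonblocking()
```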

    Advantages of KLT over ULT

  • Overcomes the two principal drawbacks of the ULT
  1. Multiple threads in one process can simultaneously run on multiple processors
  2. One thread blocking does not block the other threads of the same process
  3. Kernel routines(例程) themselves can be multithreaded

    Disadvantages of KLT over ULT

  • The principal disadvantage is that a thread switch requires a mode switch(模式切换) to the kernel

    Microkernel

  • Small operating system core
  • Contains only essential core operating systems functions
  • Many services traditionally included in the operating system are now external subsystems
    Device drivers
    File systems
    Virtual memory manager
    Windowing system
    Security services
  • A kernel of the opposite, all-in-one kind is called a monolithic kernel(单体内核)

    Benefits of a Microkernel Organization

  1. Uniform interface(一致接口) on request made by a process
    Don’t distinguish between kernel-level and user-level services
    All services are provided by means of message passing
  2. Extensibility(可扩展性)
    Allows the addition of new services
  3. Flexibility(灵活性)
    New features added
    Existing features can be subtracted(删减)
  4. Portability(可移植性)
    Changes needed to port the system to a new processor are confined to the microkernel, not the other services
  5. Reliability(可靠性)
    Modular design
    Small microkernel can be rigorously tested
  6. Distributed system support
    Message are sent without knowing what the target machine is (消息的发送不用关心目标是本机还是异机)
  7. Object-oriented operating system
    Components are objects with clearly defined interfaces that can be interconnected to form software (定义接口明晰的组件,以搭积木方式通过组件互联构造软件)

    Microkernel Performance

  • A potential disadvantage of microkernels that is often cited is that of performance (消息传递机制是微内核工作的主要机制,消息传递必须通过内核,因此性能较差)
  • One response to this problem was to enlarge the microkernel by reintegrating critical servers and drivers back into the OS (把关键服务重新纳入内核,减少消息传递开销)

Microkernel Design

  • The microkernel must include those functions that depend directly on the hardware. Those functions fall into the following general categories:
  1. Low level memory management
    Mapping each virtual page to a physical page frame
  2. Interprocess communication (IPC)
    Message
    Port
  3. I/O and interrupt management
    The microkernel transforms interrupts into messages and dispatches them to device-specific user-level interrupt handlers.
    Driver thread

    Unsolved Questions

  1. Early operating systems that were designed with little concern about structure are typically referred to as:
    $\color{red}{a}$. Monolithic operating systems
    b. Kernel operating systems
    c. All of the above
    d. Layered operating systems
    Feedback: operating systems with a monolithic structure.
  2. In low-level microkernel memory management, an example of an operation that can support external paging and virtual memory management is the:
    a. Map operation
    b. Flush operation
    c. Grant operation
    $\color{red}{d}$. All of the above
    Feedback: three microkernel operations support paging and virtual memory management outside the kernel: grant, map, and flush.
  3. In a W2K system, the state that a thread enters when it has been unblocked and the resource for which it has been blocked is not yet available is called the:
    a. Waiting state
    b. Standby state
    c. None of the above
    $\color{red}{d}$. Transition state
    Feedback: the transition state.
  4. In a Solaris system, a User-Level Thread (ULT) that enters the active state is assigned to a:
    a. Heavy-Weight Process (HWP)
    $\color{red}{b}$. Light-Weight Process (LWP)
    c. None of the above
    d. Kernel thread
    Feedback: a lightweight process.
  5. In a Linux system, when a new process is cloned, the two processes share the same:
    a. Process identifier
    $\color{red}{b}$. Virtual memory
    c. task_struct data structure
    d. All of the above
    Feedback: after cloning, the two processes share the same virtual memory.
  6. An example of a system that implements a single process with multiple threads is:
    a. WIN 2000
    b. All of the above
    c. Solaris
    $\color{red}{d}$. Java
  7. In an SMP system, each processor maintains a local cache and must alert all other processors that a cache update has taken place. This is referred to as the:
    $\color{red}{a}$. Cache coherency problem
    b. Interconnection mechanism problem
    c. None of the above
    d. Synchronization mechanism problem
    Feedback: the cache coherency problem.

Chapter 5 Mutual Exclusion(互斥) and Synchronization(同步)

Concurrency arises in three different contexts:

  1. Multiple applications(多应用程序)
    Multiprogramming
  2. Structured application(结构化应用程序)
    Some application can be designed as a set of concurrent processes
  3. Operating-system structure(操作系统结构)
    Operating system is a set of processes or threads

    Race Condition(竞争条件)

  • A race condition occurs when multiple processes or threads read and write data items so that the final result depends on the order of execution of instructions in the multiple processes or thread.
    (竞争条件发生在当多个进程或者线程在读写数据时,其最终结果依赖于多个进程或者线程的指令执行顺序)

    Process Interactions(进程交互)


    Requirements for Mutual Exclusion

  1. Only one process at a time is allowed in the critical section for a resource (一次只允许一个进程进入临界区,忙则等待)
  2. A process that halts in its noncritical section must do so without interfering(干扰) with other processes (阻塞于临界区外的进程不能干涉其它进程)
  3. No deadlock or starvation (不会发生饥饿和死锁,有限等待)
  4. A process must not be delayed access to a critical section when there is no other process using it (闲则让进)
  5. No assumptions are made about relative process speeds or number of processes (对相关进程的执行速度和处理器数目没有要求)
  6. A process remains inside its critical section for a finite time only (有限占用)

Implementation of Semaphores

  • Implement in hardware or firmware(固件)
  • Implement in software, e.g. Dekker's or Peterson's algorithm
  • Implement by inhibiting interrupts(中断禁止) on a single-processor system

    Monitors

  1. Local data variables are accessible only by the monitor
  2. Process enters monitor by invoking one of its procedures
  3. Only one process may be executing in the monitor at a time
  4. A monitor supports synchronization by the use of condition variables
  5. cwait(c): Suspend execution of the calling process on condition c. The monitor is now available for use by another process.
  6. csignal(c): Resume execution of some process blocked after a cwait on the same condition. If there are several such processes, choose one of them; if there is no such process, do nothing.
  7. Note that monitor wait and signal operations are different from those for the semaphore. If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost.

[condition variables]:条件变量 [Monitors]:管程
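A bounded buffer guarded in monitor style can be sketched with Python's `threading.Condition`, whose `wait`/`notify` play the roles of cwait/csignal; the class and method names are illustrative:

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock gives mutual exclusion,
    two condition variables stand in for cwait/csignal."""
    def __init__(self, capacity):
        self.buf = []
        self.capacity = capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def append(self, item):
        with self.lock:                      # enter the monitor
            while len(self.buf) == self.capacity:
                self.not_full.wait()         # cwait(notfull)
            self.buf.append(item)
            self.not_empty.notify()          # csignal(notempty)

    def take(self):
        with self.lock:                      # enter the monitor
            while not self.buf:
                self.not_empty.wait()        # cwait(notempty)
            item = self.buf.pop(0)
            self.not_full.notify()           # csignal(notfull)
            return item
```

The `while` (rather than `if`) re-check mirrors the note above: a lost or stolen signal simply sends the waiter back to sleep.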

Addressing

Direct addressing

  • Send primitive includes a specific identifier of the destination process
  • Receive primitive either knows ahead of time from which process a message is expected, or uses a source parameter to return the sender's identity when the receive operation has been performed
    [destination]:目标

    Indirect addressing

  • Messages are sent to a shared data structure consisting of queues
  • Queues are called mailboxes
  • One process sends a message to the mailbox and the other process picks up the message from the mailbox

    Coroutines

    Processes that are designed to be able to pass execution control back and forth between themselves

    Weak Semaphore and Strong Semaphore

  • Weak Semaphore: does not specify the order in which processes are removed from the waiting queue
  • Strong Semaphore: specifies the order (typically FIFO) in which processes are removed from the waiting queue

    Unsolved Questions

  1. The following requirement must be met by any facility or capability that is to provide support for mutual exclusion:
    $\color{red}{a}$. All of the above
    b. Only one process at a time can be allowed into a critical code section
    c. A process remains in its critical code section for a finite time only
    d. No assumptions can be made about relative process speeds
  2. In a system employing message passing, the typical message is divided into two primary sections:
    $\color{red}{a}$. None of the above
    b. Body and mailbox
    c. Destination ID and Source ID
    d. Header and mailbox
    Feedback: header and body.

Chapter 6 Concurrency: Deadlock(死锁) and Starvation(饥饿)

Resource Categories(资源的分类)

Reusable Resources

  • Used by only one process at a time and not depleted(耗尽) by that use
  • Processes obtain resources that they later release for reuse by other processes
  • Examples include processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases

[Reusable Resources]:可重用资源 [obtain]:获取

Consumable Resources

*[Consumable Resources]:可消费资源

  • May be created (produced) and destroyed (consumed) by processes
  • Examples include interrupts, signals, messages, and information in I/O buffers

    Restrictions of Deadlock Avoidance

    *[Restrictions]:限制
  • Maximum resource requirement must be stated in advance
  • Processes under consideration must be independent; no synchronization requirements
  • There must be a fixed number of resources to allocate
  • No process may exit while holding resources

    An Integrated Deadlock Strategy

    [Integrated]:综合 [Strategy]:策略
  • Group resources into a number of different resource classes(资源分类)
  • Use the linear ordering strategy(线性排序策略) defined previously for the prevention of circular wait to prevent deadlocks between resource classes(类与类之间用线性排序策略避免死锁)
  • Within a resource class, use the algorithm that is most appropriate for that class(每个类中使用适合于该类的策略)

Deadlock: the permanent blocking of a set of processes that compete for system resources or communicate with each other
*[permanent blocking]:永久阻塞

Banker's Algorithm(银行家算法)
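The safety check at the heart of the Banker's Algorithm can be sketched as follows; the function and variable names (`available`, `alloc`, `claim`) are assumed, and the matrices in the test are invented for illustration:

```python
def is_safe(available, alloc, claim):
    """Banker's algorithm safety check: the state is safe if some
    completion order lets every process finish within its claim."""
    n = len(alloc)
    work = list(available)       # resources currently free
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # need = maximum claim minus what is already allocated
            need = [c - a for c, a in zip(claim[i], alloc[i])]
            if not finished[i] and all(x <= w for x, w in zip(need, work)):
                # process i can run to completion, then releases everything
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                progress = True
    return all(finished)
```

A resource request is granted only if the state that would result still passes this check; otherwise the requester is suspended.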

Unsolved Questions

  1. A software mechanism that informs a process of the occurrence of asynchronous events in UNIX is called:
    a. Pipes
    b. Messages
    c. Signals √
    d. All of the above
    Feedback: a signal is a software mechanism that informs a process of the occurrence of an asynchronous event.
  2. In the Resource Allocation Denial approach to Deadlock Avoidance, a safe state is defined as one in which:
    a. At least one potential process sequence does not result in a deadlock √
    b. None of the above
    c. All potential process sequences do not result in a deadlock
    d. Several potential process sequences do not result in a deadlock
    Feedback: a single safe sequence is enough.

Chapter 7 Memory Management(内存管理)

Memory Management

  • Subdividing memory to accommodate multiple processes
  • Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time
    [Subdividing memory to accommodate multiple processes]:为支持多道程序将内存进行划分 [Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time]:内存管理应确保有适当数目的就绪进程使用处理器时间

    Memory Management Requirements

    1. Relocation
  • Programmer does not know where the program will be placed in memory when it is executed
  • While the program is executing, it may be swapped(交换) to disk and returned to main memory at a different location (relocated)
  • Memory references(访问) must be translated in the code to actual physical memory address(物理内存地址)
    [Relocation]:重定位
    2. Protection
    [Protection]:保护
  • Processes should not be able to reference memory locations in another process without permission or jump to instructions area of another process
  • Normally, processes cannot access any portion of the OS, neither program nor data
  • Impossible to check absolute addresses at compile time; instead, absolute addresses must be checked at run time
  • Memory protection requirement must be satisfied by the processor (hardware) rather than the operating system

3. Sharing
*[Sharing]:共享
Allow several processes to access the same portion of memory

  • Share same copy of the program
  • Share data structure to cooperate on some task

4. Logical Organization
*[Logical Organization]:逻辑组织

  • Main memory is organized in a linear address space, consisting of a sequence of bytes or words
  • Programs are written in modules
    • Modules can be written and compiled independently
    • Different degrees of protection given to modules (read-only, execute-only)
    • Share modules among processes
  • Segmentation satisfies these requirements

5. Physical Organization
*[Physical Organization]:物理组织

  • Memory is organized into at least two levels, referred to as main memory and secondary memory
  • Memory available for a program plus its data may be insufficient
    • Overlaying allows various modules to be assigned the same region of memory
  • Programmer does not know how much space will be available and where his/her program will be loaded in memory
    [Overlaying]:覆盖:The practice in which a program and data are organized in such a way that various modules can be assigned the same region of memory [Memory available for a program plus its data may be insufficient]:内存对程序和其数据来说可能不足

Memory Partitioning

*[Partitioning]:分区

1. Fixed Partitioning

  • Alternatives for Fixed Partitioning
    • Equal-size partitions
    • Unequal-size partitions
  • DisAdvantages
    • The number of active processes in the system is limited by the number of partitions
    • Small jobs will not utilize partition efficiently

2. Dynamic Partitioning

Characteristics:
  1. Partitions are of variable length and number
  2. Process is allocated exactly as much memory as required
  3. Eventually get holes in the memory. This is called external fragmentation(外部碎片/外零头)
  4. Must use compaction(压缩) to shift(移动) processes so they are contiguous and all free memory is in one block
    *[contiguous]:相连的
Dynamic Partitioning Placement Algorithm:
1. Best-fit algorithm
  • Chooses the block that is closest in size to the request
  • Worst performer overall
  • Since the smallest adequate block is chosen, the fragment left over is as small as possible
    Memory compaction must be done more often
2. First-fit algorithm
  • Scans memory from the beginning and chooses the first available block that is large enough
  • Simplest and usually fastest and best
  • May have many processes loaded in the front end of memory that must be searched over when trying to find a free block
3. Next-fit
  • Scans memory from the location of the last placement and chooses the next available block that is large enough
  • More often allocates a block of memory at the end of memory, where the largest block is found
  • The largest block of memory is broken up into smaller blocks
  • Compaction is required to obtain a large block at the end of memory
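The three placement algorithms above can be sketched over a list of free-block sizes; the function names and the example sizes in the test are illustrative:

```python
def first_fit(free, size):
    """Index of the first free block large enough, else None."""
    for i, b in enumerate(free):
        if b >= size:
            return i
    return None

def best_fit(free, size):
    """Index of the smallest free block that still fits, else None."""
    candidates = [(b, i) for i, b in enumerate(free) if b >= size]
    return min(candidates)[1] if candidates else None

def next_fit(free, size, start):
    """Like first-fit, but scan from the position of the last placement."""
    n = len(free)
    for k in range(n):
        i = (start + k) % n
        if free[i] >= size:
            return i
    return None
```

A real allocator would then split the chosen block and keep the remainder on the free list.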

    3. Buddy System

    *[Buddy System]:伙伴系统
  • Entire space available is treated as a single block of size $2^U$ (e.g. 1M = $2^{20}$)
  • If a request of size s such that $2^{U-1} \lt s \leq 2^U$, entire block is allocated
  • Otherwise block is split into two equal buddies
  • Process continues until smallest block greater than or equal to s is generated
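The splitting rule above can be sketched as a function that returns the block size the buddy system would allocate for a request s; the function name and the default pool size of 2^20 bytes are assumptions for illustration:

```python
def buddy_block_size(s, U=20):
    """Block size the buddy system allocates for a request of size s,
    starting from a single block of 2**U bytes (1 MiB by default)."""
    size = 1 << U              # entire space as one block
    while size // 2 >= s:      # split into two equal buddies as long as
        size //= 2             # the half-block still satisfies the request
    return size
```

Real implementations also enforce a lower bound 2^L on the block size so the split does not continue indefinitely for tiny requests.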

4. Relocation

  • When program loaded into(载入) memory the actual (absolute) memory locations are determined
  • A process may occupy different partitions which means different absolute memory locations during execution (from swapping,交换)
  • Compaction will also cause a program to occupy a different partition which means different absolute memory locations
    *[Compaction]:压缩

Paging

  • Partition memory into small equal fixed-size chunks(块) which are called frames(帧)
  • Divide each process into small equal fixed-size chunks which are called pages(页). The size of pages are equal to the size of frames
  • Operating system maintains a page table(页表) for each process
    • Contains the frame location(帧位置) for each page in the process
    • Memory address consists of a page number(页号) and offset(偏移量) within the page
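Address translation under paging can be sketched as follows; the 1 KiB page size and the sample page table in the test are assumptions for illustration:

```python
PAGE_SIZE = 1024  # bytes per page and per frame (assumed)

def translate(logical_addr, page_table):
    """Map a logical address to a physical one: split it into
    (page number, offset), look up the frame, and recombine.
    Page faults are not modeled here."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset
```

With a power-of-two page size, hardware does the `divmod` by simply splitting the address bits.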

      Segmentation

      *[Segmentation]:分段
  • Program and its data can be divided into a number of segments, all segments of all programs do not have to be of the same length
  • There is a maximum segment length
  • Addressing consist of two parts
    1. segment number(段号)
    2. offset(偏移量)
  • Since segments are not equal, segmentation is similar to dynamic partitioning(动态分区)

    Chapter 8 Virtual Memory

    Resident set

    *[Resident set]:常驻集

    • portion of process that is in main memory

    When an address is needed that is not in main memory:

    1. An interrupt is generated, which is known as a page fault interrupt(缺页中断)
    2. Operating system places the process in a blocking state
    3. Piece of process that contains the logical address is brought into main memory
      • Operating system issues a disk I/O Read request
      • Another process is dispatched to run while the disk I/O takes place
      • An interrupt is issued when disk I/O completes
    4. Operating system places the affected process in the Ready state

    Thrashing

    *[Thrashing]:系统抖动/系统颠簸
  • Swapping out a piece of a process just before that piece is needed
  • Too much of this activity leads the processor to spend most of its time swapping pieces rather than executing user instructions

    Inverted Page Table

    *[Inverted Page Table]:反向页表
  • Used on PowerPC, UltraSPARC, and IA-64 architecture
  • Page number portion of a virtual address is mapped into a hash value
  • Hash value points to inverted page table
  • Fixed proportion of real memory is required for the tables regardless of the number of processes or virtual pages supported
  • Page table entry
    • Page number(页号)
    • Process identifier(进程标志符)
    • Control bits(控制位)
    • Chain pointer(链指针)

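The structure described above, a hash on the virtual page with entries carrying the process identifier, page number, and a chain pointer, can be sketched as a toy in-memory table; the class layout and hash function are illustrative:

```python
class InvertedPageTable:
    """One entry per physical frame; hash collisions are resolved
    by chain pointers, so table size is fixed by real memory
    regardless of the size of the virtual address space."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.entries = [None] * nframes  # (pid, page) stored in each frame
        self.chain = [None] * nframes    # next frame with the same hash value
        self.head = {}                   # hash value -> first frame in chain

    def _hash(self, pid, page):
        return (pid * 31 + page) % self.nframes  # toy hash function

    def insert(self, pid, page, frame):
        h = self._hash(pid, page)
        self.entries[frame] = (pid, page)
        self.chain[frame] = self.head.get(h)     # prepend to the chain
        self.head[h] = frame

    def lookup(self, pid, page):
        """Return the frame holding (pid, page), or None if not resident."""
        f = self.head.get(self._hash(pid, page))
        while f is not None:
            if self.entries[f] == (pid, page):
                return f
            f = self.chain[f]
        return None
```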

Fetch Policy

*[Fetch Policy]:读取策略
Determines when a page should be brought into memory

  1. Demand paging(请求分页式)
    only brings pages into main memory when a reference is made to a location on the page
    Many page faults when process first started
  2. Prepaging(预约分页式)
    brings in more pages than needed
    More efficient to bring in pages that reside contiguously on the disk

    Placement Policy

    *[Placement Policy]:放置策略
    Determines where in real memory a process piece is to reside
  • Important in a segmentation system and nonuniform memory access(NUMA, 非一致存储器访问) system (放置策略对分段系统和NUMA系统影响大)
  • Paging or combined paging with segmentation hardware performs address translation and placement is usually irrelevant(放置策略对分页和段页式系统影响不大)

Replacement Policy

*[Replacement Policy]:置换策略
Which page is replaced?

Frame Locking

  • If frame is locked, it may not be replaced, some examples:

    1. Kernel of the OS
    2. Important Control structures of OS
    3. I/O buffers
  • Associate a lock bit with each frame
1. Optimal policy

*[Optimal]:最佳的

  • Selects for replacement that page for which the time to the next reference is the longest(下次访问距当前时间最长的页)
  • Impossible to have perfect knowledge of future events

2. Least Recently Used (LRU)

*[Least Recently Used (LRU)]:最近最少使用

  • Replaces the page that has not been referenced for the longest time(上次访问距当前时间最长的页)
  • By the principle of locality, this should be the page least likely to be referenced in the near future
  • Each page could be tagged with the time of last reference. This would require a great deal of overhead.
    *[overhead]:开销
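LRU can be sketched with an ordered dictionary standing in for the per-page timestamps; the function name and the reference string in the test are invented for illustration:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU for a reference string."""
    mem = OrderedDict()                  # resident pages, oldest first
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # referenced: now most recent
        else:
            faults += 1
            if len(mem) == nframes:
                mem.popitem(last=False)  # evict the least recently used
            mem[page] = True
    return faults
```

The ordered dictionary plays the role of the time tags mentioned above, without the per-reference tagging overhead a hardware implementation would need.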

    3. First-in, first-out (FIFO)

    *[First-in, first-out (FIFO)]:先进先出
  • Treats page frames allocated to a process as a circular buffer(把分配给进程的页帧看做是一个循环缓冲区)
  • Pages are removed in round-robin style
  • Simplest replacement policy to implement
  • Page that has been in memory the longest is replaced
  • These pages may be needed again very soon
    *[round-robin]:循环

    4. Clock Policy

    *[Clock Policy]:时钟策略
  • An additional bit, called a use bit, is associated with every page
  • When a page is first loaded in memory, the use bit is set to 1
  • When the page is referenced, the use bit is set to 1
  • When it is time to replace a page, the first frame encountered with the use bit set to 0 is replaced
  • During the search for replacement, each use bit set to 1 is changed to 0
    *[use bit]:使用位
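The clock policy above can be sketched directly: frames treated as a circular buffer, a use bit per frame, and a hand that clears use bits while searching for a victim; the class name and the reference string in the test are invented:

```python
class ClockReplacer:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page resident in each frame
        self.use = [0] * nframes         # use bit per frame
        self.hand = 0                    # clock pointer

    def access(self, page):
        """Reference a page; return True on a page fault."""
        if page in self.frames:
            self.use[self.frames.index(page)] = 1   # hit: set use bit
            return False
        # fault: advance the hand, clearing use bits, until a 0 is found
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page    # replace the victim
        self.use[self.hand] = 1          # newly loaded page starts with use bit 1
        self.hand = (self.hand + 1) % len(self.frames)
        return True
```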

    Page Buffering

    Replaced page is added to one of two lists
  • Free page list if page has not been modified (空闲页表)
  • Modified page list (修改页表)
  • Modified pages are written out in clusters rather than one at a time

    Process Suspension

    *[Suspension]:挂起
  1. Lowest priority process(最低优先级进程)
  2. Faulting process(页错误进程)
    This process does not have its working set in main memory so it will be blocked anyway
  3. Last process activated(最后被激活的进程)
    This process is least likely to have its working set resident
  4. Process with smallest resident set(驻留集最小的进程)
    This process requires the least future effort to reload
  5. Largest process(占用空间最大的进程)
    Obtains the most free frames
  6. Process with the largest remaining execution window(具有最大剩余执行窗口的进程)

Unsolved Questions

  1. In SVR4 and Solaris systems, the memory management scheme that manages user processes and disk I/O is called the:
    a. Paging system √
    b. None of the above
    c. Virtual memory manager
    d. Kernel memory allocator

  2. In a combined paging/segmentation system, a user’s address space is broken up into a number of :
    a. Fixed-size pages, which are in turn broken down into variable-sized segments
    b. Segments or pages, at the discretion of the programmer
    c. Variable-sized Segments, which are in turn broken down into fixed-size pages √
    d. All of the above

  3. Sharing is achieved in a segmentation system by:
    a. Having a common data area that all processes can share
    b. Referencing a segment in the segment tables of more than one process √
    c. All of the above
    d. Each process segment table having a reference to the dispatcher main memory area

Chapter 9 Uniprocessor Scheduling

Types of Scheduling

Scheduling and Process State Transitions(调度与进程状态转换)

Alternative Scheduling Policies

Terms

  • Response time(响应时间)
  • Turnaround time(周转时间)
  • Normalized turnaround time(归一化周转时间,周转时间/服务时间)
  • Throughput (吞吐率)
  • Predictability(可预测性)
  • Selection function(选择函数,确定在就绪进程中选择哪一个进程在下一次执行)
  • Decision mode(决策模式,选择函数被执行瞬间的处理方式)
  • Preemptive (抢占,当前正在执行的进程可能被操作系统中断,并转移到就绪态)
  • Nonpreemptive(非抢占,一旦处于运行态就不断执行直到终止或者被阻塞)

Process Scheduling Example

1. First-Come-First-Served (FCFS)

*[First-Come-First-Served (FCFS)]:先来先服务
  • Decision mode: Nonpreemptive
  • Dispatch time: the current process ceases(终止) to execute
  • Method: The oldest process in the Ready queue is selected
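The FCFS rule can be sketched as follows (the five-process workload A–E, arriving at 0, 2, 4, 6, 8 with service times 3, 6, 4, 5, 2, is assumed as an illustrative example, since the figure the notes reference is not reproduced here):

```python
def fcfs(processes):
    """Nonpreemptive FCFS. processes: (name, arrival, service) tuples
    sorted by arrival time. Returns {name: (finish, turnaround)}."""
    time, result = 0, {}
    for name, arrival, service in processes:
        start = max(time, arrival)        # wait if the CPU is still busy
        time = start + service
        result[name] = (time, time - arrival)
    return result

jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
# The short job E arrives at 8 but runs only after A-D have finished.
```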

2. Round-Robin (RR)

*[Round-Robin (RR)]:轮转

  • Decision mode: Preemptive (at time quantum)
  • Dispatch time: a clock interrupt every time quantum q
  • Method: select the next ready process in FCFS order
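A round-robin sketch (illustrative; it assumes the common tie-break that a newly arrived process enters the ready queue ahead of the process just preempted):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: (name, arrival, service) tuples sorted by arrival.
    Returns {name: finish_time} under RR with the given quantum."""
    time, ready, finish = 0, deque(), {}
    arrivals = list(processes)
    remaining = {n: s for n, _, s in processes}
    while arrivals or ready:
        while arrivals and arrivals[0][1] <= time:
            ready.append(arrivals.pop(0)[0])   # admit arrived processes
        if not ready:
            time = arrivals[0][1]              # idle until next arrival
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # processes arriving during the slice queue before the preempted one
        while arrivals and arrivals[0][1] <= time:
            ready.append(arrivals.pop(0)[0])
        if remaining[name]:
            ready.append(name)                 # preempted: back of the queue
        else:
            finish[name] = time
    return finish
```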

3. Shortest Process Next (SPN)

*[Shortest Process Next (SPN)]:最短进程优先
  • Decision mode: Nonpreemptive
  • Dispatch time: the current process ceases to execute
  • Method: Process with shortest expected processing time is selected next
  • Predictability of longer processes is reduced(长进程的可预测性降低了)
  • Possibility of starvation(饥饿) for longer processes
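A nonpreemptive SPN sketch (illustrative; the same assumed A–E workload as above shows how the short job E jumps ahead of C and D):

```python
def spn(processes):
    """Nonpreemptive Shortest Process Next.
    processes: (name, arrival, service) tuples.
    Returns the order in which processes complete."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = pending[0][1]                      # idle until an arrival
            continue
        job = min(ready, key=lambda p: p[2])          # shortest service time
        pending.remove(job)
        time += job[2]                                # run it to completion
        order.append(job[0])
    return order
```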

4. Shortest Remaining Time (SRT)

*[Shortest Remaining Time (SRT)]:最短剩余时间
  • Decision mode: Preemptive version of shortest process next
  • Dispatch time: decision is made when a new process arrives
  • Method: Process with shortest remaining time is selected next
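An SRT sketch (illustrative; since decisions are made only when a process arrives or finishes, the simulation runs each pick until the next arrival or completion, whichever comes first):

```python
import heapq

def srt(processes):
    """Preemptive Shortest Remaining Time.
    processes: (name, arrival, service) tuples sorted by arrival.
    Returns {name: finish_time}."""
    pending = list(processes)
    ready = []                        # min-heap of (remaining_time, name)
    time, finish = 0, {}
    while pending or ready:
        while pending and pending[0][1] <= time:
            name, _, service = pending.pop(0)
            heapq.heappush(ready, (service, name))
        if not ready:
            time = pending[0][1]      # idle until the next arrival
            continue
        remaining, name = heapq.heappop(ready)
        next_arrival = pending[0][1] if pending else float("inf")
        run = min(remaining, next_arrival - time)  # run to arrival or end
        time += run
        if run < remaining:
            heapq.heappush(ready, (remaining - run, name))  # preempted
        else:
            finish[name] = time
    return finish
```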

5. Highest Response Ratio Next (HRRN)

*[Highest Response Ratio Next (HRRN)]:最高响应比优先
  • Decision mode: Nonpreemptive
  • Dispatch time: the current process ceases to execute
  • Method: Choose the next process with the greatest response ratio R = (w + s) / s, where w is the time spent waiting so far and s is the expected service time
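The selection function can be sketched directly from the ratio (a hypothetical helper; waiting time is computed as current time minus arrival time):

```python
def hrrn_pick(ready, time):
    """ready: list of (name, arrival, service) tuples of waiting processes.
    Returns the name with the highest response ratio
    R = (waiting_time + service_time) / service_time."""
    def ratio(p):
        name, arrival, service = p
        waiting = time - arrival
        return (waiting + service) / service
    return max(ready, key=ratio)[0]

# At time 9, C has waited 5 of an expected 4: R = 2.25,
# while B has waited 7 of 6: R ≈ 2.17, so C is chosen.
```

Note how a process that has waited long relative to its service time climbs in priority, which is what prevents the starvation of long processes that SPN suffers from.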

6. Feedback

*[Feedback]:反馈
  • SPN/SRT/HRRN need to know the process service time in advance, while Feedback does not
  • Preemptive
  • Penalizes(惩罚) jobs that have been running longer

    Each of the scheduling algorithms introduced above has its limitations. SPN, for example, favors short processes at the expense of long ones, and if process lengths are not known in advance, neither SPN nor any preemptive length-based algorithm can be used at all. The multilevel feedback queue algorithm needs no advance knowledge of execution times and can serve processes of every type, so it is widely regarded as a good general-purpose scheduling algorithm. In a system using multilevel feedback queue scheduling, the algorithm works as follows.
    (1) Several ready queues are set up, each with a different priority. The first queue has the highest priority, the second the next highest, and each remaining queue a successively lower priority. The time quantum also differs per queue: the higher a queue's priority, the smaller the quantum its processes receive. For example, the quantum of the second queue is twice that of the first, and in general the quantum of queue i+1 is twice that of queue i.
    (2) When a new process enters memory, it is placed at the tail of the first queue and waits to be scheduled FCFS. If it completes within its quantum, it can leave the system; if it has not finished when the quantum expires, the scheduler moves it to the tail of the second queue, where it again waits FCFS; if it still has not finished after a quantum in the second queue, it is moved in turn to the third queue, and so on. Once a long process has descended from the first queue all the way to queue n, it runs round-robin within queue n.
    (3) The scheduler runs processes from the second queue only when the first queue is empty, and runs processes from queue i only when queues 1 through i-1 are all empty. If the processor is serving a process from queue i and a new process arrives in a higher-priority queue (any of queues 1 through i-1), the new process preempts the running one: the scheduler returns the running process to the tail of queue i and gives the processor to the newly arrived higher-priority process.
    (The multilevel feedback queue explanation above is from a CSDN post by @make great efforts, licensed CC 4.0 BY-SA.)
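The three rules above can be sketched as follows (a simplification: preemption of a running process by a higher-priority arrival *mid-quantum* is omitted, and processes demote one level per expired quantum):

```python
from collections import deque

def mlfq(processes, quanta):
    """Multilevel feedback queue sketch. processes: (name, arrival,
    service) tuples sorted by arrival; quanta: quantum per level,
    e.g. [1, 2, 4]. A process that exhausts its quantum drops one level;
    the lowest level runs round-robin. Returns {name: finish_time}."""
    levels = [deque() for _ in quanta]
    pending = list(processes)
    remaining = {n: s for n, _, s in processes}
    time, finish = 0, {}

    def admit():
        while pending and pending[0][1] <= time:
            levels[0].append(pending.pop(0)[0])   # new processes enter queue 1

    while pending or any(levels):
        admit()
        lvl = next((i for i, q in enumerate(levels) if q), None)
        if lvl is None:
            time = pending[0][1]                  # idle until next arrival
            continue
        name = levels[lvl].popleft()              # highest nonempty queue, FCFS
        run = min(quanta[lvl], remaining[name])
        time += run
        remaining[name] -= run
        admit()
        if remaining[name] == 0:
            finish[name] = time
        else:                                     # quantum expired: demote,
            levels[min(lvl + 1, len(levels) - 1)].append(name)  # floor at last queue
    return finish
```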

Fair-Share Scheduling(公平共享调度)

  • User’s application runs as a collection of processes (threads)
  • User is concerned about the performance of the application
  • Need to make scheduling decisions based on process sets
  • Scheduling is done on the basis of

    1. underlying priority of the process
    2. recent processor usage of the process
    3. recent processor usage of the group
  • The higher the numerical value of the priority, the lower the priority.
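The per-second priority recomputation can be sketched with the decay formulas used in Stallings' fair-share scheduling example (illustrative; real implementations differ in detail):

```python
def fair_share_priority(base, cpu, gcpu, weight):
    """One recomputation step of a fair-share scheduler:
        CPU  <- CPU / 2          (decayed per-process usage)
        GCPU <- GCPU / 2         (decayed group usage)
        P = Base + CPU/2 + GCPU/(4 * W)
    where weight W is the group's allotted fraction of the processor
    (0 < W <= 1). Higher numeric P means lower scheduling priority.
    Returns (priority, decayed_cpu, decayed_gcpu)."""
    cpu /= 2
    gcpu /= 2
    priority = base + cpu / 2 + gcpu / (4 * weight)
    return priority, cpu, gcpu

# After a process accumulates 60 usage ticks in a group with a 50% share,
# its recomputed priority value rises (i.e., its priority drops).
```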

Unsolved Questions

  1. Typically, the swapping-in function for processes is based on the need to manage:
    a. Virtual memory
    b. The degree of multiprogramming √
    c. None of the above
    d. Process priorities
  2. Response time in an interactive system is an example of:
    a. System-oriented criteria for short-term scheduling policies
    b. None of the above
    c. User-oriented criteria for short-term scheduling policies √
    d. System-oriented criteria for long-term scheduling policies
  3. In terms of the queuing model, the total time that a process spends in a system (waiting time plus service time) is called:
    a. None of the above
    b. Normalized turnaround time (TAT)
    c. Turnaround or residence time (TAT) √
    d. Finish time (FT)
    *[Turnaround or residence time]:周转或驻留时间 *[Normalized turnaround time]:标准化周转时间

Chapter 11 I/O Management and Disk Scheduling

Performing I/O

  1. Programmed I/O(可编程I/O)
    The process busy-waits until the operation completes
  2. Interrupt-driven I/O(中断驱动I/O)
    I/O command is issued
    Processor continues executing instructions
    I/O module sends an interrupt when done
  3. Direct Memory Access (DMA,直接存储器访问)
    DMA module controls exchange of data between main memory and the I/O device
    Processor interrupted only after entire block has been transferred

I/O Buffering

  • Block-oriented(面向块)
    Information is stored in fixed sized blocks
    Transfers are made a block at a time
    Used for disks and tapes
  • Stream-oriented(面向流)
    Transfers information as a stream of bytes
    Used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage

Unsolved Questions

  1. In a UNIX system, which of the following types of I/O devices make use of character queues:
    a. Communications lines √
    b. Tape drive
    c. All of the above
    d. Disk drive
  2. The bus configuration for DMA that provides no path other than the system bus between the DMA module(s) and I/O devices is:
    a. I/O bus
    b. Single bus, detached DMA √
    c. Single bus, integrated DMA-I/O
    d. None of the above
  3. The scenario where multiple buffers are used in an attempt to alleviate the problem of absorbing rapid bursts of I/O is typically referred to as:
    a. None of the above
    b. Double buffering
    c. Circular buffering √
    d. Single buffering