CS 439 Exam 1 Flashcard Deck

Flashcards for Our First Exam

OS Definition

Software that manages a computer's resources

OS Hats



Referee
Manages shared resources
Isolation (protects processes from each other)
Communication (processes work together)

Illusionist
Illusion of resources that aren't really present
Virtualization (processor, memory, screen space)

Glue
Provides standard interfaces to the hardware
Simplifies application design
Decouples hardware and application development
Starts, stops, and cleans up after programs

Operating System Evaluation Criteria



Reliability
OS does exactly what it is designed to do
Availability: percentage of time the OS is useful

Security
OS cannot be compromised by a malicious hacker

Security Policy

Defines what is permitted

Enforcement Mechanism

Ensures only permitted actions are allowed


Portability
OS does not change as hardware changes
Three interfaces

Three Interfaces

Abstract Machine Interface (AMI)
Application Programming Interface (API)
Hardware Abstraction Layer (HAL)

Abstract Machine Interface (AMI)

Between OS and apps
Memory Access Model
Legally Executable Instructions

Application Programming Interface (API)

Function calls provided to apps

Hardware Abstraction Layer

Abstracts hardware internally to OS


Performance

Overhead
Answer: How much is lost by not running on bare hardware?

Fairness
Answer: How are resources divided?

Response Time
Answer: How long does it take to deliver a response to the user?

Throughput
Answer: How many tasks complete per unit of time?

Predictability
Answer: Are performance metrics consistent over time?

Three Operating System Phases

Phase 1: Hardware expensive, humans cheap
Phase 2: Hardware cheap, humans cheap
Phase 3: Hardware very cheap, humans very expensive

Phase 1

One user, one process
Batch processing
Overlap I/O and Computations

One user, one process

(1945 - 1955)
Started with programming directly with wires (plugboard)
Later, punch cards
Marked by low utilization of expensive components

Batch processing

(1955 - 1965)
Load program
Output to tape
Print results
Next job can be loaded immediately as previous job finishes
Hard to debug (make sure punch cards are correct and in order; horror when cards were dropped)
Idle I/O

Overlap of I/O and Computation

Allows I/O and computation to happen at the same time
Hand the work off to the I/O device and resume computation until the end-of-I/O interrupt is received
Concurrency within the same process


Multiprogramming

(1965 - 1980)
Several programs run at the same time on machine
Program runs until it gets interrupt, then scheduler determines next program to run

Phase 2

Interactive Timesharing
New OS services (shell, file system, rapid process switching, virtual memory)

Interactive Timesharing

(1970 -)
Cheap terminals allow many users to interact with same machine
A timer interrupt is used to multiplex CPU between jobs

Phase 3

Personal Computing
Parallel and Distributed Computing

Personal Computing

Computers are cheap, so give everyone a computer
At first OS was simplified, but didn't work: remember, humans are expensive

Parallel and Distributed Computing

Computers are cheap, so give everyone a bunch of them
Increased performance, increased reliability, sharing of specialized resources


Process

A program during execution

Minimum Requirements:
Memory for code and data
CPU registers to support execution

Process State

Contains at least:

Address Space, including:
Static data

Registers and their contents, including:
Heap Pointer
Program Counter
Stack Pointer

OS resources in use

Process Identifier (PID)

Process execution state (running, ready, etc.)

Process Execution State



New
OS is setting up process state

Ready
Ready to run, but waiting for CPU

Running
Executing instructions on the CPU

Waiting
Waiting for an event to complete

Terminated
OS is destroying this process

Process Control Block (PCB)

Dynamic kernel data structure in memory
Represents the execution state and location of each process when it is not executing

Process State Minimums (see flashcard)
General purpose registers
Username of owner

Initialized when process created, deleted when process terminates

Dual Mode Execution

Designate processes as kernel or user at a given time with a bit in the process status register

Executing a privileged instruction while in user mode causes a processor exception...
...which passes control to the kernel

User Mode

Restricted access

Cannot:
Directly access I/O
Use instructions that manipulate OS memory
Set the mode bits to user or kernel
Halt the machine

Kernel Mode

Unrestricted access

Can:
Directly access I/O
Use instructions that manipulate OS memory
Set the mode bits to user or kernel
Halt the machine

Three ways to transition from user mode to kernel mode

System Calls/Traps
Interrupts
Exceptions

Three efficient protection requirements

Privileged Instructions
Timer Interrupts
Memory Protection

System Call

A request by a user-level process to call a function in the kernel


fork()

Way processes are created in Unix
Copies the calling process into an identical child process
Returns twice: once to parent (PID), once to child (0)
Both processes begin execution at same point
Each process has its own memory and copy of variables
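The return-twice behavior and per-process copies of variables can be sketched with Python's `os` module, which wraps the same Unix system call (a minimal sketch; Unix-only, and the exit code 7 and variable `x` are made-up illustration values):

```python
import os

def fork_demo():
    """Sketch of fork(): it returns twice, and each process gets its own
    copy of variables (the child's write does not affect the parent)."""
    x = 1
    pid = os.fork()
    if pid == 0:           # child: fork() returned 0
        x = 99             # changes only the child's copy of x
        os._exit(7)        # child's return value, collected by the parent
    # parent: fork() returned the child's PID
    _, status = os.waitpid(pid, 0)
    return pid, os.WEXITSTATUS(status), x
```

The parent sees a positive PID and the child's exit value, while its own `x` is still 1.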


exec()

Overlays a process with a new program
PID doesn't change (same process!)
Arguments to new program can be specified
Code, stack, and heap are overwritten
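The usual fork-then-exec pattern can be sketched the same way (a hedged sketch; the program being exec'd, `sh -c "exit 5"`, is an arbitrary choice so the parent has a visible return value to collect):

```python
import os

def exec_demo():
    """Sketch of fork() + exec(): the child is overlaid with a new program;
    its PID does not change, but code, stack, and heap are overwritten."""
    pid = os.fork()
    if pid == 0:
        os.execlp("sh", "sh", "-c", "exit 5")  # never returns on success
        os._exit(127)                          # reached only if exec fails
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```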


wait()

Causes parent to wait until child terminates
Allows parent to get return value from child
Parent blocked until child calls exit
If a zombie child already exists, its state is deallocated and its return value handed back immediately
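The blocking behavior can be sketched as follows (the 0.2-second sleep and exit value 3 are made-up illustration values; the parent measurably blocks in `waitpid` until the child exits):

```python
import os
import time

def wait_demo():
    """Sketch of wait(): the parent blocks until the child calls exit,
    then receives the child's return value."""
    start = time.monotonic()
    pid = os.fork()
    if pid == 0:
        time.sleep(0.2)    # child does some "work"
        os._exit(3)
    _, status = os.waitpid(pid, 0)   # parent blocks here
    elapsed = time.monotonic() - start
    return os.WEXITSTATUS(status), elapsed
```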

Zombie Process

Process has terminated, but parent hasn't collected its status
Dead, but not gone

Orphaned Process

Occurs when parent terminates before child
Only occurs in some instances, in others the children are killed


exit()

Called when a program finishes execution
Takes return value as argument
Closes all open files, connections, etc.
Deallocates most of the OS structures supporting the process
Checks if parent is alive: if alive, become a zombie; otherwise deallocate all data structures (process is dead)
Cleans up all waiting zombies


kill()

Sends a signal to the specified process
Parent can terminate child using kill(), but that is not its only use

Long-Term Scheduling

Answer: How does the OS determine the degree of multiprogramming, i.e., the number of jobs residing at once in the primary memory?

Short-Term Scheduling

Answer: How does (or should) the OS select a process from the ready queue to execute?


Scheduler (Dispatcher)

Schedules the next process on the CPU
It is the mechanism of a policy

Two Main Scheduling Policies

Non-Preemptive Scheduling
Preemptive Scheduling

Non-Preemptive Scheduling

Scheduler runs when process blocks or terminates---not on hardware interrupts

Preemptive Scheduling

An executing process might be pre-empted from the CPU
Scheduler runs on interrupts, mostly timer, but also system calls and other hardware device interrupts
Used in most OSes

Evaluating Scheduling Policies Criteria

CPU Utilization
Throughput
Turnaround time
Response time
Waiting time

(Scheduler) CPU Utilization

Percentage of time that the CPU is busy

(Scheduler) Throughput

Number of processes completing in a unit of time

(Scheduler) Turnaround time

Length of time to run a process from
initialization to termination, including all the waiting time

(Scheduler) Response time

Time between issuing a command and
getting a result

(Scheduler) Waiting time

Total amount of time that a process is in
the ready queue.


First-Come, First-Served (FCFS)

Scheduler executes jobs in order of arrival


Round Robin

Run each job for its (equal) time slice, which is typically chosen to be about 100 times the context-switch time
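Round robin can be sketched with a toy simulation (a hedged sketch: it ignores context-switch overhead, and the job names and burst times are made-up illustration values):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Toy round-robin simulation. bursts maps job name -> CPU time needed.
    Each job runs for at most one quantum, then goes to the back of the
    ready queue. Returns each job's completion time."""
    queue = deque(bursts.items())
    clock, done = 0, {}
    while queue:
        job, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((job, remaining - run))  # not finished: requeue
        else:
            done[job] = clock
    return done
```

For example, jobs A (3 units) and B (5 units) with a quantum of 2 interleave as A,B,A,B,B, so A finishes at time 5 and B at time 8.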

Context Switch

The way to change from one process to another
Main source of overhead for a scheduling policy

Shortest Job First

Maximizes throughput
Can lead to starvation
Can be preemptive or non-preemptive
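The throughput claim can be illustrated with a toy non-preemptive SJF sketch (assumes all jobs are ready at time 0; job names and burst times are made up):

```python
def sjf_waits(bursts):
    """Toy non-preemptive SJF with all jobs ready at time 0: running the
    shortest job first minimizes total waiting time, but a steady stream
    of short jobs could starve a long one."""
    clock, waits = 0, {}
    for job, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[job] = clock      # time this job spent in the ready queue
        clock += burst
    return waits
```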

Multilevel Feedback Queue

Multiple queues with multiple priorities
Priority changes depending on execution time
Approximation of shortest job first


Thread

Short for "Thread of Control"
Defines a single sequential execution stream within a process
Each thread is bound to a single process, but a process may have multiple threads
Has the same life cycle as a process
Shares the address space of its process, but each thread has its own stack

Thread Advantages

Creating a thread is cheaper than creating a process
Communication between threads is easier than processes
Context switching between threads is cheaper (same address space)
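The shared address space is what makes communication easy: all threads in a process see the same data. A minimal sketch using Python's `threading` module (the counter, thread count, and iteration count are made-up illustration values):

```python
import threading

def thread_demo(n):
    """Threads in one process share the address space: every worker sees
    the same `total` object. The lock serializes the read-modify-write."""
    total = [0]
    lock = threading.Lock()

    def worker():
        for _ in range(1000):
            with lock:
                total[0] += 1

    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```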

Thread Control Block (TCB)

Contains thread-specific information:
Stack pointer
Program Counter
Register Values
Pointer to PCB

User-Level Thread

A thread that the OS does not know about
Programmer uses a thread library to manage threads
Switching threads does not involve a context switch

Kernel-Level Thread

A thread that the OS knows about
Every process has at least one kernel-level thread
Switching between kernel-level threads of the same process requires a small context switch

Critical Section Correctness Requirements

Mutual Exclusion
Progress
Bounded Waiting
Failure Atomicity


Mutual Exclusion

Only one thread in the critical section


Progress

If no threads are executing in a critical section, and a thread wishes to enter a critical section, that thread must be guaranteed to eventually enter the critical section

Bounded waiting

If a thread wishes to enter a critical section, then there exists a bound on the number of other threads that may enter the critical section before that thread does

Combines safety and liveness

Failure atomicity

It’s okay for a thread to die in the critical section

Mutual Exclusion

Exactly one thread (or process) is doing a particular activity at a time.
Resources are not accessed by
multiple threads at the same time

Atomic Operation

An operation that is uninterruptible


Synchronization

Using atomic operations to ensure cooperation between threads


Locks

Provide mutual exclusion to shared data with Lock::Acquire() and Lock::Release()
Are either busy or free

Atomic Read-Modify-Write Instructions

Atomically read a value from memory into a register and write a new value
Test&Set is the most common RMW instruction across architectures


Test&Set

Reads a value from memory
Writes "1" back to the memory location


Semaphores

Invented by Dijkstra in 1965
A way to ensure mutual exclusion and general synchronization
Support two atomic operations (Up and Down) on an integer to provide synchronization
Calling Down when the integer is 0 adds the process to the waiting list
Used by the kernel

Binary Semaphore

Integer has 2 values: 0 or 1

Counted Semaphore

Integer is non-negative
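The Down/Up semantics can be sketched with a lock and condition variable (a minimal sketch, not how a kernel would implement it; real kernels make these operations atomic with respect to interrupts):

```python
import threading

class Semaphore:
    """Counting semaphore sketch: Down blocks while the integer is 0;
    Up increments the integer and wakes one waiter."""
    def __init__(self, value=0):
        self._count = value
        self._cond = threading.Condition()

    def down(self):
        with self._cond:
            while self._count == 0:   # join the waiting list
                self._cond.wait()
            self._count -= 1

    def up(self):
        with self._cond:
            self._count += 1
            self._cond.notify()       # wake one waiting thread
```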


Deadlock

Occurs when two or more threads are waiting for an event that can only be generated by these same threads

Deadlock Conditions

Necessary and Sufficient

Mutual Exclusion
Hold and Wait
No Pre-emption
Circular Wait

Hold and Wait

At least one thread holds a resource and is waiting for other resources to become available; those resources are held by different threads

No Pre-emption

A thread only releases a resource voluntarily; another thread or the OS cannot force the thread to release the resource

Circular Wait

A set of waiting threads {t1, ..., tn} where ti is waiting on ti+1 (i = 1 to n-1) and tn is waiting on t1
Basically the definition of deadlock
Prevent by imposing an ordering on the locks and requesting them in order
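Lock ordering can be sketched with a hypothetical bank-transfer example (the per-account locks and transfer function are made up for illustration): even when two threads transfer in opposite directions, both acquire locks in the same global (index) order, so no cycle can form.

```python
import threading

def transfer(locks, balances, src, dst, amount):
    """Always take the two account locks in ascending index order,
    breaking the circular-wait condition."""
    first, second = sorted((src, dst))
    with locks[first]:
        with locks[second]:
            balances[src] -= amount
            balances[dst] += amount
```

Acquiring in arbitrary order (src's lock first, then dst's) could deadlock: one thread holds lock 0 waiting on 1 while the other holds 1 waiting on 0.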


Monitor

Defines a lock and zero or more condition variables for managing concurrent access to shared data

A solution to the problems that can arise from the general purpose nature of semaphores

Used in user mode

Condition Variables

A queue of waiting threads without state

Wait adds thread to waiting queue
Signal wakes up waiting thread
Broadcast wakes up all waiting threads

Resource Variables

A way for a monitor to check state

Mesa/Hansen Semantics

The thread that signals keeps the lock
The waiting thread waits for the lock
Thus, need while loop for condition variable

Hoare Semantics

The thread that signals gives up the lock and the waiting thread gets the lock
Do not need while loop for condition variable (for our class, we use while loop due to portability)
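A bounded-buffer monitor illustrates the Mesa-style while loop (a sketch using Python's `threading.Condition`, which has Mesa semantics: a signaled thread must re-check the condition because it only reacquires the lock later):

```python
import threading

class BoundedBuffer:
    """Monitor sketch with Mesa semantics: waiters re-check their
    condition in a while loop, because a signal is only a hint."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, x):
        with self.lock:
            while len(self.items) >= self.capacity:   # while, not if
                self.not_full.wait()
            self.items.append(x)
            self.not_empty.notify()

    def get(self):
        with self.lock:
            while not self.items:                     # while, not if
                self.not_empty.wait()
            x = self.items.pop(0)
            self.not_full.notify()
            return x
```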

Fine-Grained Locking

More than one lock
Better for performance
Complex (more likely to cause deadlock)

Conservative 2-phase Locking

Provides serializability and prevents deadlock in fine-grained locking

Acquire all locks it needs (if it can't get a lock, release all held locks and try again)
Make necessary changes
Release locks
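The acquire-all-or-back-off step can be sketched as follows (a hedged sketch of the acquisition phase only; the retry pause is an arbitrary choice to avoid tight spinning):

```python
import threading
import time

def acquire_all(locks):
    """Conservative 2PL sketch: try to take every lock up front; if any
    is unavailable, release everything already held and retry, so the
    thread never holds some locks while waiting on others."""
    while True:
        held = []
        for lk in locks:
            if lk.acquire(blocking=False):
                held.append(lk)
            else:
                for h in held:        # couldn't get them all: back off
                    h.release()
                held = None
                break
        if held is not None:
            return held               # all locks acquired
        time.sleep(0.001)             # brief pause before retrying
```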


Transaction

Group actions together so that they are atomic, serializable, and durable

Critical sections give us atomicity and serializability, but not durability


Serializable

Actions appear to happen one after the other


Durable

Once it happens, it sticks

Achieving Durability

We need to be able to:
Commit
Rollback

If we get to the commit state, we're OK.
If we don't, we need to roll back as if the transaction never happened


Commit

Indicate when a transaction is finished


Rollback

Recover from an aborted transaction
Essentially trying to reverse steps
Reversing steps is hard, so instead we keep a write-ahead log, commit changes there, and write those changes behind to their on-disk locations

Write-Ahead Log

All changes are written to the log (a special part of disk) prior to being written to their true location

Changes must be idempotent


Idempotent

Unchanged when the operation is applied to its own result (doing it twice has the same effect as doing it once)
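Why idempotence matters for log replay can be sketched with a hypothetical key-value store (the store, the log entries, and their values are made up for illustration):

```python
def replay(store, log):
    """Write-ahead-log replay sketch: each entry ("x", 5) means
    "set x to 5", which is idempotent, so replaying the log after a
    crash is safe even if some entries were already applied.
    An entry meaning "add 1 to x" would NOT be safe to re-apply."""
    for key, value in log:
        store[key] = value
    return store
```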

The Six Commandments of Synchronization

Thou shalt always do things the same way

Thou shalt always synchronize with locks and
condition variables

Thou shalt always acquire the lock at the
beginning of a function and release it at the end
Thou shalt always hold lock when operating
on a condition variable

Thou shalt always wait in a while loop

(Almost) Never sleep()