Renormalize files
Lecture Topic: Round Robin Scheduling

# Lecture Topic: Real Time CPU Scheduling

**Material up to 5.27 on the slides will be on the midterm**

Multilevel Feedback Queue:
A process can move between multiple queues
Get from slides (?)

Real Time CPU scheduling can present challenges.
Soft real time systems: Critical real time tasks have the highest priority, but there are no guarantees
Hard real time systems: Tasks must be serviced by their deadlines

Event Latency: The amount of time that elapses from when an event occurs to when it is serviced
Two types of latency affect performance:
- Interrupt latency
- Dispatch latency

Dispatch latency conflicts:
- The need to preempt any process running in kernel mode
- The release by low priority processes of resources needed by high priority processes

Priority based scheduling:
For real time scheduling, the scheduler needs to support preemptive, priority based scheduling, but this only guarantees soft real time.
For hard real time, it must also provide the ability to meet deadlines:
Processes have new characteristics; periodic ones require the CPU at constant intervals:
- each has a processing time t, a deadline d, and a period p
- 0 < t < d < p
- the rate of a periodic task is 1/p

Rate Monotonic Scheduling:
A priority is assigned based on the inverse of the task's period.
Shorter periods = higher priority
Longer periods = lower priority

Earliest Deadline First:
Priorities are assigned according to deadlines:
The earlier the deadline, the higher the priority
The later the deadline, the lower the priority
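As a sanity check on the rate monotonic rule, here is a minimal sketch (not lecture code; it assumes the standard Liu and Layland bound, which the slides may phrase differently): n periodic tasks are guaranteed schedulable under RMS if the total utilization, the sum of t/p over all tasks, is at most n(2^(1/n) - 1). The task values below are made up for illustration.

```c
#include <math.h>
#include <stdio.h>

/* Checking the Liu & Layland utilization bound for RMS:
 * guaranteed schedulable if sum(t/p) <= n * (2^(1/n) - 1). */
struct task { double t; /* processing time */ double p; /* period */ };

int main(void) {
    struct task tasks[] = { {1.0, 4.0}, {2.0, 8.0}, {1.0, 10.0} }; /* made up */
    int n = sizeof tasks / sizeof tasks[0];

    double u = 0.0;                      /* total CPU utilization */
    for (int i = 0; i < n; i++)
        u += tasks[i].t / tasks[i].p;    /* each task contributes t/p */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f -> %s\n", u, bound,
           u <= bound ? "guaranteed schedulable under RMS" : "no guarantee");
    return 0;
}
```

The bound is sufficient but not necessary: a task set that fails it may still be schedulable. EDF, by contrast, can schedule any task set whose total utilization is at most 1.
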
Algorithm Evaluation:
(?)

# **Midterm Review:**

Midterm is balanced:
Applying tools relevant to concepts
Answering theory questions
About 60% of the grade is theory (no coding)
50 minutes

Understanding the labs and the stuff under misc. in D2L should be enough

Topics to review:
Slides (Chapter 1 to Chapter 5, up to rate monotonic scheduling)
Lab 2 and Lab 3
A1 and A2
Content of the misc. useful stuff folder
Scheduling, metrics
Material occurring multiple times in the lectures

How to answer if you don't know:
If you remember the book answer, use that
Answer in your own words if you are trying to explain a concept
The book is the most reliable source

A question asking what the output of a given function would be will probably be on the midterm

Lecture Topic: Producer Consumer

There is a problem with using a counter in a multithreaded queue: the counter can be held in each CPU's registers at the same time, and when the threads write those registers back, possibly with out of order execution, the result can be a misleading count.

Say a CPU loads a variable into a register on each of two threads, one increments it and the other decrements it; you can encounter a scenario where the value saved back is not the intended one. This is called a race condition.

You need some form of synchronization mechanism to ensure this operation occurs correctly.
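A minimal sketch of that race (illustrative, not from the lecture): two pthreads hammer a shared counter with no synchronization, so the load/modify/store steps of `counter++` and `counter--` interleave, updates get lost, and the final value is usually not 0.

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;   /* shared, deliberately unsynchronized */

void *inc(void *arg) { for (int i = 0; i < 1000000; i++) counter++; return NULL; }
void *dec(void *arg) { for (int i = 0; i < 1000000; i++) counter--; return NULL; }

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, dec, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 0)\n", counter);  /* usually nonzero */
    return 0;
}
```
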
You may also need a synchronization mechanism for scenarios where only 1 thread reads and 1 thread writes. An example is an ATM, where a read returns a valid balance, another separate transaction occurs, and in the time between the start and finalization of the purchase there are now insufficient funds.

Critical Section Problem:

A typical example of a critical section problem is that, inside a loop, there is some critical section that reads shared memory.

Lecture Topic: Critical Section

A solution to the critical section problem has to
- Ensure mutual exclusion
- Ensure progress: if 2 resources are needed to progress, the program needs to assign resources properly and not perform no-progress work, such as assigning only 1 resource to each thread? (Pen and form problem from notes)
- Provide bounded waiting

Discusses this algorithm:
https://en.wikipedia.org/wiki/Peterson%27s_algorithm

It is mentioned that the problem with this algorithm is that the original formulation is only applicable to 2 threads.

It also has problems on modern implementations that reorder instructions or memory accesses.
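For reference, a minimal sketch of Peterson's algorithm for two threads, following the linked page. As noted above, on hardware that reorders memory accesses this needs memory barriers (volatile alone is not enough), so treat it as the textbook version only.

```c
/* Peterson's algorithm for thread ids 0 and 1 (textbook form). */
volatile int flag[2] = {0, 0};  /* flag[i]: thread i wants to enter */
volatile int turn = 0;          /* whose turn it is to yield */

void lock(int self) {
    int other = 1 - self;
    flag[self] = 1;     /* announce intent to enter */
    turn = other;       /* give the other thread priority */
    /* spin while the other thread wants in and it is its turn */
    while (flag[other] && turn == other)
        ;               /* busy wait */
}

void unlock(int self) {
    flag[self] = 0;     /* leave the critical section */
}
```
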
Memory models
- Strongly ordered: Changes are immediately visible to all other processors
- Weakly ordered: Changes in memory are not immediately visible to all other processors

A memory barrier is a way to force changed values in memory to propagate to all other processors.

# Mutex Locks:
Before a critical section, acquire a mutex lock, then release it after.

For this to be functional, the acquire and release operations need to be atomic.

This also requires busy waiting, because the thread needs to poll for the lock; thus it is called a spinlock.
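A minimal spinlock sketch (illustrative, using C11 atomics rather than anything shown in the lecture): the atomic test-and-set provides the atomic acquire, and the while loop is the busy waiting described above.

```c
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquire(void) {
    /* atomically set the flag; repeat while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;  /* spin (busy wait) until the holder releases */
}

void release(void) {
    atomic_flag_clear(&lock);  /* atomically clear so a waiter can enter */
}
```
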
# Semaphore

It is accessed with the wait and signal functions; these atomic functions do the following:

- wait: waits while the semaphore is less than or equal to 0, then decrements the semaphore.
- signal: simply increments the semaphore by 1

A semaphore can be thought of as a mutex lock that allows execution by more than one thread at once. The initial semaphore value is the number of threads that are allowed concurrent execution.
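A minimal sketch using POSIX semaphores (an assumption: the lecture's wait/signal map to `sem_wait`/`sem_post`). The semaphore starts at 2, so at most 2 of the 4 threads are in the guarded region at once.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sem;

void *worker(void *arg) {
    sem_wait(&sem);                  /* wait: blocks while the count is 0 */
    printf("thread %ld in region\n", (long)arg);
    sem_post(&sem);                  /* signal: increment the count */
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 2);            /* initial value = allowed concurrency */
    pthread_t t[4];
    for (long i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    sem_destroy(&sem);
    return 0;
}
```
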
Semaphores can also be used to solve general synchronization problems, not just the critical section problem.

Lecture Topic: Synchronization

Look at the slides later; it was about POSIX and waiting for variables to change for synchronization.

Lecture Topic: Producer Consumer & Reader Writer Problem

[Lecture 20](https://lms.unb.ca/d2l/le/content/231539/viewContent/2623161/View)

If you use two semaphores and a mutex, you can have multiple producers and multiple consumers.

Producer
```c
while (true) {
    // produce an item
    wait(empty)       // wait for a free slot
    wait(mutex)       // lock the buffer
    buffer[index] = item
    index++
    signal(mutex)     // unlock the buffer
    signal(full)      // announce one more item
}
```

Consumer
```c
while (true) {
    wait(full)        // wait for an item (waiting on empty here would deadlock)
    wait(mutex)       // lock the buffer
    index--           // step back to the last filled slot
    item = buffer[index]
    signal(mutex)     // unlock the buffer
    signal(empty)     // announce one more free slot
    // consume the item
}
```

[Slides Chapter 7](https://lms.unb.ca/d2l/le/content/231539/viewContent/2622059/View)

In the reader writer problem you can encounter the "first reader-writer problem", in which readers keep arriving and the writer process never gets to write, and the "second reader-writer problem", in which once a writer is ready to write no new reader may start reading, so readers can starve instead. These problems can sometimes be solved by kernel reader-writer locks.

[Dining Philosopher problem](https://en.wikipedia.org/wiki/Dining_philosophers_problem)

[Slides Chapter 8](https://lms.unb.ca/d2l/le/content/231539/viewContent/2623159/View)

Deadlock Characterization:

Deadlocks can occur if four conditions are true simultaneously:
- Mutual exclusion: only one process can use a resource at a time
- Hold and wait: a process holding at least one resource is waiting for more
- No preemption: a resource is only released voluntarily by the process holding it
- Circular wait: there is a cycle of processes, each waiting for a resource held by the next

Lecture Topic: Memory Management

# Midterm Review

1. When a syscall is performed.
2. The difference is single core vs multiple core concurrency, and yes, it's possible for concurrency and parallel execution to occur at the same time
3. It is not going to differ in output
4. It's a visual answer
5. Depends on 4

There are generally three forms of memory: memory in the CPU, system memory, and block devices. In the CPU you have registers and cache.

Cost increases as you go from block devices to CPU memory, but speed also increases, so it is an important tradeoff to consider.

Lecture Topic: Multithreading

Scheduling Algorithms:
Preemptive and non-preemptive

Preemptive means that a process might be interrupted to improve responsiveness of the system.

Non-preemptive means that processes are allowed to execute until an I/O request or termination.

Lecture Topic: Scheduling

Scheduling Criteria:
- CPU Utilization
- Throughput
- Turnaround Time
- Waiting Time: Time spent in the ready queue
- Response Time

Multiprocessing Scheduler Design Choices:
- One queue for all processes
- A queue per processor

CPU: Homogeneous or heterogeneous

| CPU 0 | CPU 1 |
| ----- | ----- |
| Cache | Cache |
| ISA   | ISA   |

Affinity:
Soft - May be scheduled on multiple CPUs
Hard - Has to be scheduled on a designated CPU

Load Balancing: Keeping jobs evenly distributed
- Job pushing: Jobs may be pushed to another CPU
- Job stealing: An idle CPU may steal jobs from another CPU's queue

The CPU alternates between computation and input/output.

Load stall minimization:
A load stall is when the CPU is idle while a memory access is being performed.

Instruction reordering:
The CPU may perform operations out of order to better overlap computation with memory access cycles, to minimize load stalls.

Hyper threading:

CPU bursts are most frequently short.

There are different kinds of algorithms that solve a few kinds of problems:
- First Come First Serve: Jobs are worked on in the order they arrive
- Tie Breaker: Needed when two jobs arrive at the same time

So P0 comes first, and the tie between P1 and P2 is broken by their order. In the first come first serve case:

| Process | Arrival | Duration | Wait Time | Response Time | Turnaround |
| ------- | ------- | -------- | --------- | ------------- | ---------- |
| P0      | 0       | 3        | 0         | 0             | 3          |
| P1      | 1       | 20       | 2         | 2             | 22         |
| P2      | 1       | 5        | 22        | 22            | 27         |

So the average wait time is
(0 + 2 + 22)/3 = 8

If there are different arrival times:

| Process | Arrival | Duration | Wait | Turnaround |
| ------- | ------- | -------- | ---- | ---------- |
| P0      | 0       | 3        | 0    | 3          |
| P1      | 2       | 20       | 6    | 26         |
| P2      | 1       | 5        | 2    | 7          |

In this case the average wait time is
(0 + 6 + 2)/3 = 2.666666...

Much better.
We can reorder jobs to reduce the wait time for jobs.

The turnaround time also improves (do the calculation in your head, dummy).

The throughput remains the same, however.
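A minimal sketch (not lecture code) that reproduces the second table's numbers: jobs are served strictly in arrival order, wait is start time minus arrival, and turnaround is completion time minus arrival.

```c
#include <stdio.h>

struct job { const char *name; int arrival, duration; };

int main(void) {
    /* the second table above, already sorted by arrival time: P0, P2, P1 */
    struct job jobs[] = { {"P0", 0, 3}, {"P2", 1, 5}, {"P1", 2, 20} };
    int n = sizeof jobs / sizeof jobs[0];

    int clock = 0;
    double total_wait = 0;
    for (int i = 0; i < n; i++) {
        if (clock < jobs[i].arrival)       /* CPU idle until job arrives */
            clock = jobs[i].arrival;
        int wait = clock - jobs[i].arrival;        /* time in ready queue */
        clock += jobs[i].duration;
        int turnaround = clock - jobs[i].arrival;  /* arrival to completion */
        total_wait += wait;
        printf("%s: wait=%d turnaround=%d\n", jobs[i].name, wait, turnaround);
    }
    printf("average wait = %.4f\n", total_wait / n);  /* 8/3 = 2.6667 */
    return 0;
}
```
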
Kinds of algorithms:
Shortest Job First

| Process | Arrival | Duration | Wait Time | Response Time | Turnaround |
| ------- | ------- | -------- | --------- | ------------- | ---------- |
| P0      | 0       | 9        |           |               |            |
| P1      | 1       | 15       |           |               |            |
| P2      | 1       | 7        |           |               |            |

The Gantt Chart

|     | P0  |     | P2  |     | P1  |     |
| --- | --- | --- | --- | --- | --- | --- |
| 0   |     | 9   |     | 16  |     | 31  |

The problem with this kind of algorithm is determining the duration of jobs.

Lecture Topic: Memory

There is a problem with 32 bit systems in which the maximum addressable RAM is 4 GB at a time.

This can be solved with the use of 64 bit systems, which allow a maximum address space of 2^64 memory locations, which is in the tens of thousands of petabytes.

Lecture Topic: Paging

Lecture Topic: Mass Storage

Lecture Topic: Mass Storage

Shortest Seek Time First:
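The note stops at the heading, but SSTF is standard: serve whichever pending request is closest to the current head position. A minimal sketch of that idea; the request queue and starting head position of 53 are the classic textbook example, not values from the lecture.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};  /* pending cylinders */
    int n = sizeof req / sizeof req[0];
    int head = 53, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {            /* find nearest pending */
            if (req[i] < 0) continue;            /* already served */
            int d = abs(req[i] - head);
            if (best < 0 || d < best_dist) { best = i; best_dist = d; }
        }
        total += best_dist;                      /* seek distance moved */
        head = req[best];
        req[best] = -1;                          /* mark as served */
        printf("-> %d\n", head);
    }
    printf("total head movement = %d cylinders\n", total);
    return 0;
}
```
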
Lecture Topic: Virtualization

|                   | Windows        | Linux                 |
| ----------------- | -------------- | --------------------- |
| Executable Format | .exe .bat .ps1 | .sh ELF standard .out |
| Library Format    | .dll           | ELF .o .so            |

Executables tend not to be portable: on Linux usually due to differences in the flavor of Linux, and even on Windows due to different platform architectures. Interoperability is not guaranteed.

The differences between simulation and emulation:
Simulation: getting the result
Emulation: mimicking the behavior

Virtualization:
An alternative to emulation and simulation: allows access to hardware by generating appropriate machine code.

Why Virtualization?
- Access to hardware
- Safety: a sandboxed, controlled environment
- Multiple users

Kinds of VMs

Type 1 hypervisor: manages the VMs directly on the hardware, with no base kernel underneath
Type 2 hypervisor: manages the VMs by going through a main (host) kernel to reach the hardware

Process VM
- A single purpose VM
- The Java VM is an example

OS VM
- A virtual machine that emulates entire OS/machine functionality

Example VM stack
- VirtualBox VM
- VirtualBox VMM
- Windows
- Hyper-V
- Hardware

| VirtualBox       | Hyper-V        |
| ---------------- | -------------- |
| OS VM, Type 2 VM | VMM, Type 1 VM |

| WSL                                           | Docker                           |
| --------------------------------------------- | -------------------------------- |
| Uses Hyper-V, reuses system calls if possible | Containers, minimal dependencies |

Lecture Topic: Operating System Services

Operating systems provide an environment for the execution of programs, and services to programs and users.

Some of these services are:
- User Interface (CLI, GUI, touch screen)
- Program execution: Load the program into memory, run it, and end execution either normally or abnormally (errors)
- I/O operations: Files, I/O devices
- File system manipulation: Supports reading and writing of files and directories, create and delete, search, listing file metadata, and permission management of files and directories

There are 2 ways to design a CLI:
- Executing the program by jumping directly into the program code
- Creating a new process and then invoking the program in that new process

A few sets of services that are helpful are:
- Communication between different processes on the same PC, and between computers on the network, either by shared memory or message passing (packets)
- Error handling/detection
- Resource allocation: multiple users or jobs run concurrently and resources need to be shared, like CPU cycles, memory, storage, and I/O devices
- Logging
- Protection and security: protection being that all access to system resources is controlled, and security being protecting the OS from outsiders, e.g. by using authentication

System Calls:
The programming interface provided by the OS, typically written in C or C++, usually accessed through a high level API rather than by direct system calls. Common examples would be the Win32 API, the POSIX API for Linux, Unix, BSD, and macOS, or the Java API for the JVM.

Typically a number is associated with each system call; the system-call interface maintains a table indexed according to these numbers. The system call interface invokes the intended system call, and the OS returns the status and any other return values. The caller needs to know nothing about how the system call is implemented; they just need to obey the API and understand what the OS will return.
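On Linux this number-based interface can be poked at directly. A minimal sketch (a Linux-specific assumption, not from the lecture) using the libc `syscall(2)` wrapper and the `SYS_write` table index instead of the usual `write()` wrapper:

```c
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from a raw system call\n";
    /* invoke write by its system call number; fd 1 = stdout */
    long ret = syscall(SYS_write, 1, msg, sizeof msg - 1);
    return ret < 0 ? 1 : 0;  /* a negative return means the call failed */
}
```
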
Often more information is required than simply the identity of the system call. Three methods are generally used to pass information to the OS:

- Simplest: Pass the parameters in registers. In some cases there are more parameters than registers
- Parameters are stored in a block/table in memory, and the address of the block is passed as a parameter
- Parameters are placed on the stack and then popped off the stack as used

Linkers and Loaders:
When source code is compiled, it becomes a relocatable object file. A linker combines these into a single binary executable file (and also brings in libraries).
- The program resides on secondary storage as a binary executable.
- It must be brought into memory by the loader to be executed.
- Relocation assigns final addresses to program parts and adjusts code and data in the program to match those addresses.
- Modern general purpose systems don't link libraries into executables, instead using dynamically linked libraries that are loaded as needed and shared by all processes that use the same version of the same library.
- Object and executable files have standard formats, so the operating system knows how to load and start them.

Lecture Topic: Processes

Parts of a process:
- Stack
- Heap
- Program Counter
- Registers
- Arguments Vector
- Environment PATH

| Memory Layout                  |
| ------------------------------ |
| argv, argc                     |
| stack                          |
| heap                           |
| uninitialized global variables |
| initialized global variables   |
| text                           |

Heap memory: only the result of calls to malloc or calloc.
Stack memory: anything else, even pointers or custom typedefs, unless it was specifically allocated.
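A minimal sketch of that distinction (illustrative): the pointer variable itself is a stack local; only the block that malloc returns lives on the heap.

```c
#include <stdlib.h>

int global_initialized = 1;   /* initialized global data segment */
int global_uninitialized;     /* uninitialized (bss) segment */

int main(void) {
    int local = 42;                        /* stack */
    int *ptr = malloc(sizeof *ptr);        /* ptr itself: stack;
                                              the int it points to: heap */
    if (ptr) { *ptr = local; free(ptr); }  /* heap memory must be freed */
    return 0;
}
```
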
Possible Process States
- New: First stage, when the OS loads the program and before it is given CPU cycles
- Ready: When the program is ready to execute and has been added to the OS queue for execution
- Running: The program is executing instructions on the CPU
- Waiting: The program is waiting for CPU cycles or is dormant, either due to a timeout or waiting for I/O or events
- Terminated: The program's memory is deallocated and revoked. Termination can be invoked by the program itself or by its parent (including the OS)

Process Control Block or Task Control Block
- Process state
- Program counter
- Scheduling information
- Memory info
- Accounting info
- I/O status info (fds, etc)

In Linux this is implemented with a structure called `task_struct`.

It's organized in a linked list, and it's used for scheduling.

The process scheduler selects which process gets CPU attention, with the goals of improving response time and multiprogramming.

Maintained Queues
- Waiting queue
- Ready queue

Processes can be CPU bound or I/O bound.

In general, 100% utilization (CPU, I/O) is not bad.

Lecture Topic: Processes

Resource Sharing Options:
- Not shared
- Shared
- Partially shared

fork():
- stack not shared
- heap not shared

Process creation options:
- Child duplicates the parent
- Child has a program loaded into it

Examples:
fork()
exec()
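A minimal sketch combining the two creation options (in the spirit of the in-class demo, not the demo code itself): fork() duplicates the parent, then the child loads a new program with exec; /bin/ls is an arbitrary choice.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();        /* duplicate the parent */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: replace this process image with /bin/ls */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");       /* only reached if exec failed */
        return 1;
    }
    /* parent: wait for the child to finish */
    waitpid(pid, NULL, 0);
    printf("child %d finished\n", (int)pid);
    return 0;
}
```
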
Was a bit late today, but it was mostly a demo of how forking works in C.

Topic: Consumer Producer

Forgor ADHD medication 💀

Lecture Topic: Forking and Threading