
PA4
Context Switching
Goals
● Solidify understanding of the mechanisms underlying process context switches
● Get a little practice making a short explanatory video (this may be valuable for the final
project)
● Learn about an OS simulator on which you might base a final project
● Reinforce your understanding of other various aspects of computer system hardware
(COMSC-221 stuff) and various OS stuff (e.g., syscalls, schedulers)
● Potentially extend your knowledge of material only touched on briefly or in passing so far
(e.g., executable files, the C library, compiler toolchains)
A note!
This document is pretty long. It contains a wealth of information on the simulator it uses
(serving as a sort of manual), as well as background on various OS-related topics. All of it may
be useful, but you do not necessarily need to read it from start to finish. But if there's something
you wonder about, you may want to use it like a reference. I do recommend that you read at
least these sections:
1. Introduction
2. About the Simulator (the introduction part)
3. Demonstration Programs
4. Getting and Running the Simulator (the whole thing)
5. Completing the Assignment
6. Tips
You may also want to look through the highlighted portions of other sections.
Additionally, if you find the simulator interesting, you might want to look at the final section,
which has some ideas for final projects. It may not be too late to pivot to a new project!
Introduction
The purpose of this programming assignment is really to deepen your understanding of exactly
what happens when an OS context switches between processes. It has a distinctly different
flavor than all the other programming assignments. There is much more stuff going on, but
you'll need to write much less code (it ultimately comes down to only a handful of lines)
-- and that code will be in Python.
In this programming assignment, you are given a simulated computer system that only runs one
process at a time. Your task is to finish the implementation of preemptive multitasking in the
simulated operating system (primarily by finishing the implementation of the context_switch()
function) such that it can run multiple processes "simultaneously".
Below is a video showing the simulator running several of the demo programs and context
switching between them successfully.
About the Simulator
This project uses a simulated computer system. This includes:
● Simulated Hardware
○ The simulator simulates a 32-bit MIPS CPU similar to what you've seen in
COMSC-221. While the simulation is incomplete, it is enough to run some real
software.
○ The simulator simulates some computer memory (i.e., primary storage or RAM)
and a simple MMU.
○ The simulator also simulates bitmap or "pixmap" graphics hardware.
○ The hardware simulation is mostly written in Python.
● A simulated operating system (or at least the beginnings of one)
○ There are implementations of a number of syscalls that user mode programs
can utilize. These are based on syscalls from the Linux operating system.
○ It has an incomplete Process Control Block data structure (it's enough to run
one program, but not enough to multitask -- getting that working is your job!).
○ It has a simple round-robin scheduler powered by a simulated timer interrupt.
○ It has a file format for executable files, and an implementation of exec() which
can load these into a process. (And there's a tool to convert from a Linux
executable to this format.)
○ It has a simple windowed graphical user interface so that programs can output
graphics.
○ The OS simulation is mostly written in Python (with the exception being parts of
the graphics, which are in C).
● A C compiler toolchain
○ Getting from C source code to working executables requires several pieces of
software called a toolchain. This crucially includes the C compiler and several
related tools, the C standard library (where things like strtok() and malloc()
are implemented), and some OS-support code (e.g., the C functions which wrap
the OS's syscalls). Since our system simulates a MIPS-like CPU and (pieces of)
a Linux-like OS, our toolchain is assembled from fairly ordinary pieces, though it's
simpler than many in real-world use.
● Several demonstration programs
○ What's a computer system without some programs to run on it? There are
several C programs included; most of them are little graphical demonstrations
based on notable and/or historically interesting algorithms.
The remaining subsections contain details on each of these aspects. If you have a good sense
of what is going on, you may be able to skip them. They may have useful information, be
valuable as a reference, or be interesting, though.
The Simulated Hardware
Note: The simulated hardware is largely implemented by code in the mipssim/ directory, and
the mipssim/hardware.py file especially.
The simulated CPU is based on 32-bit MIPS processors, such as were focused on in
COMSC-221. These are still in use today, but were pretty cutting edge in the late 1980s and the
1990s. One example was the MIPS R3000 CPU used by the original Sony PlayStation. Similar
CPUs were used in the Silicon Graphics workstations which were part of the revolution of
computer imagery in movies. The simulated CPU does not implement all of the MIPS
instructions. At least not yet. It implements enough to run real software produced by a C
compiler, though! One specific thing it lacks is the floating point math instructions, so while
integers work, floats and doubles will not (if you wanted to add floating point support, there's a
final project in there somewhere). It also lacks a kernel mode -- it only runs in user mode.
That's okay. The equivalent of the OS kernel in the simulator doesn't actually directly use the
simulated CPU -- the OS is simulated in Python, as described in the section on the OS below.
In terms of the CPU simulation code, this project uses a subclass of the basic CPU
implementation. The subclass is in simulator.py toward the top. You can see that it holds a
reference to the "memory" attached to the CPU in the .mem attribute, and holds a reference to
the CPU registers in the .R attribute. You can also see that the CPU is configured to have
custom syscall and interrupt handlers (since these are implemented by the simulated OS; more
on them later).
If you're interested in how the CPU is actually simulated, a lot of the basic CPU code is in
emu.py, and the implementation of the individual MIPS instructions is in instr.py. For
example, the ADDIU instruction is the MIPS instruction to add the value of one register to an
immediate value and store the result in another register. In instr.py is approximately the
implementation shown below, which adds the value of one register (register number rs, which
has its value stored in rsv1) to an immediate value (imm). It uses a bitwise AND to ensure the
result hasn't overflowed 32 bits, and finally uses SetReg() to store the result into the register
referred to by rt. The reason we use SetReg() instead of simply assigning the result to
cpu.R[rt] is that we may not actually want to store the result value immediately -- it
theoretically happens during a later pipeline stage.
def op_addiu (cpu, pc, rsv, rtv, rs, rt, imm):
    return SetReg(rt, (rsv + imm) & 0xffFFffFF)
The primary storage or RAM that's connected to the "CPU" is simulated at a fairly high level by
the Memory class. By simply having multiple instances of the Memory class, one can easily
support multiple independent virtual address spaces -- you just switch which one is hooked up
to the simulated CPU (by switching the mem attribute of the CPU instance). The Memory class
(implemented in the hardware.py file) supports virtual address spaces that can start at any
virtual address and extend to any range of larger contiguous addresses. In essence this gives it
simulated capabilities similar to a basic MMU much like the Dream Segmentation discussed in
lecture, but with only a single segment (or, if you prefer, it's like Base and Bounds except that
you can control the virtual base address as well -- it's not stuck at zero).
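As a rough illustration of that single-segment translation, here is a minimal sketch in Python (the class and method names are invented for illustration; the real implementation is the Memory class in hardware.py):

```python
# Hedged sketch (not the simulator's actual Memory class): a single-segment
# address space with a configurable virtual base -- like Base and Bounds,
# but the base isn't stuck at zero.
class SegFault(Exception):
    pass

class SimpleMemory:
    def __init__(self, base, size):
        self.base = base             # first valid virtual address
        self.size = size             # length of the single segment, in bytes
        self.data = bytearray(size)  # the backing storage

    def _translate(self, vaddr):
        # Bounds-check, then map the virtual address to an offset.
        if not (self.base <= vaddr < self.base + self.size):
            raise SegFault("bad address 0x%08x" % vaddr)
        return vaddr - self.base

    def read_byte(self, vaddr):
        return self.data[self._translate(vaddr)]

    def write_byte(self, vaddr, value):
        self.data[self._translate(vaddr)] = value & 0xFF
```

Two instances of a class like this give two independent address spaces; "switching" address spaces is just pointing the CPU's mem attribute at a different instance.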
The graphics simulation is fairly straightforward. A user mode program can use some of its
memory as a bitmap/pixmap buffer. Essentially this means that the memory is treated like a 2D
array, and each position in the array corresponds to a pixel in a 2D area of the screen starting
with the upper left and progressing horizontally until you reach the last pixel on the first line and
then continuing with the leftmost pixel of the second line, and so on (similar to the order used
when reading English text). In the usual configuration, each pixel is 32 bits (four bytes). One
byte is the amount of red, one the amount of green, one the amount of blue, and one is called
an "alpha channel" (understanding the alpha channel is not important for this class; the only
thing you might care about is that it should have all the bits turned on -- be the value 0xFF or
255). If your graphics area was 200 pixels wide by 100 pixels tall, you might have a
uint32_t * pixels which pointed to the start of it. pixels[205] would refer to the sixth
pixel/column on the second row. If you set pixels[205] = 0xffFFffFF, that would set red,
green, blue (and the alpha) to their maximum values, resulting in that pixel turning white. To
actually do this, the program will allocate memory for a buffer (e.g., an array allocated using
malloc()) and then issue a syscall to tell the system where the memory is located, how many
pixels wide/tall it is, and so on. The simulated OS will then open a window to display it. The
demos/common.h file has the init_graphics() function which wraps this syscall with a
slightly easier-to-use interface.
1 That is, if rs is 3, that means we are interested in the value stored in register 3 (which is the register
named V1). We could access that using cpu.R[3], but it has already been loaded into rsv for us.
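The pixel arithmetic described above can be sketched in Python (the buffer size and helper names here are made up for illustration; the byte order within a pixel depends on the pixel format the hardware expects):

```python
# Sketch of row-major pixel addressing for a hypothetical 200x100 buffer.
WIDTH, HEIGHT = 200, 100

def pixel_index(x, y):
    # Pixels are stored left to right, top to bottom, so the pixel at
    # column x, row y lives at index y * WIDTH + x.
    return y * WIDTH + x

def pack_rgba(r, g, b, a=0xFF):
    # One byte each of red, green, blue, and alpha in a 32-bit value
    # (this particular byte layout is an assumption for illustration).
    return (r << 24) | (g << 16) | (b << 8) | a

pixels = [0] * (WIDTH * HEIGHT)
# The sixth pixel on the second row, turned white:
pixels[pixel_index(5, 1)] = pack_rgba(0xFF, 0xFF, 0xFF)
```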
The Simulated OS
On a real MIPS CPU, the OS is also MIPS machine code -- but running in the CPU's kernel
mode. In the simulator, the CPU doesn't have a kernel mode. But this is okay, because the
simulated OS isn't written in MIPS machine code -- our OS kernel is just written/simulated in
Python, primarily in the simulator.py file. For example, the OS's PCB is implemented by the
Process class, the interrupt handler is implemented by the interrupt_handler() function,
and the equivalent of the syscall table is implemented by the syscall_handler() function.
Supported Syscalls
To the last point, this means that when user mode code running on the simulated CPU performs
a syscall, it performs a "controlled jump" into the appropriate Python code. If you recall from
lecture, a syscall on MIPS CPUs is performed by putting a number that corresponds to the
desired syscall (e.g., 0x0FB4 is used for the getpid syscall) into a register (specifically register 2,
which is known as V0), and then executing the syscall instruction. This results in the code in
syscall_handler() running. Looking at a snippet from this function shown below, you can
see this puts the current process's PID into the V0 register which serves as the return value.

elif cpu.R.V0 == 0xfb4: # getpid
    # Get the PID of the current process
    cpu.R.V0 = current_process.pid if current_process else 0
elif cpu.R.V0 == 0xfa4: # write

The specific syscalls supported in this "kernel" (and their numbers) are inspired by the MIPS
version of the Linux kernel. Only a few of Linux's syscalls are supported (if I recall, Linux has
about 300!). And those which are supported don't necessarily behave exactly as they do on
Linux. However, enough syscalls are supported well enough that this simulated OS can
run a number of programs that were compiled with the intention of running on MIPS machines
running Linux.
Aside from the Linux-inspired syscalls, the simulated OS also supports at least one syscall that
doesn't come from Linux -- one which creates a graphics window for use with the simulated
graphics hardware as discussed in the previous section (the code that actually draws windows
and such is the only part of the simulator not written in Python -- it's written using a C library
called Lux). The way graphics works on Linux is generally quite a bit more complicated.
Because many of the example programs use this simulator-specific syscall, they won't actually
work unmodified on Linux (but the non-graphical programs like hello and primes will).
Executable Files
As we know from lecture, compiled programs live in executable files stored on secondary
storage (disk). These executable files contain all of a program's code and static data. They
also contain some other useful information, such as the starting address of the code (the
beginning of the code may not necessarily be at the very beginning of the file). Different OSes
use different file formats for executables. Linux uses one called ELF, which is very flexible and
rather complex. Thus, the simulator uses a much simpler one. It's not especially good, so it's
called a Bad Executable or "badex". Since the simulated OS is inspired by Linux, we use a
pretty normal C compiler to produce code (this is described in a section below). This compiler
doesn't know how to produce Bad Executables, so we actually let it produce ELFs, and then
there's a tool to convert from ELF to Bad Executable (misc/elf_to_badex.py). The actual
format of a Bad Executable file is like this:
● Magic (4 bytes): A magic number which can be used to identify a file as a badex.
Specifically, 0x1badecec.
● Entrypoint (4 bytes): The address of the program's "entry point" -- the first instruction
that should be run.
● MemLow (4 bytes): The starting virtual address at which the executable should be
loaded.
● MemSize (4 bytes): The size of the executable code/data to load in bytes.
● BSSStart (4 bytes): The starting address of the BSS segment. See below.
● BSSSize (4 bytes): The size of the BSS segment. As you know, global variables (and
other static data) in C are initialized to zero unless another initial value is specified.
While those other values would need to be stored in the executable, it's very common to
just have zeros. Instead of making us actually store the zeros in the file (a bit wasteful),
the BSS essentially just says how many bytes of zeros we need (this is BSSSize). When
the executable is loaded, the OS allocates that amount of memory (starting at the
address BSSStart -- usually immediately after the loaded code/data), and initializes it to
zero.
● HeapSize (4 bytes): The number of bytes to allocate after the BSS for use by the heap.
Note: This may be zero and the heap may be managed another way (e.g., by having the
malloc() implementation allocate space out of the BSS or by using a syscall like
brk/sbrk to get more virtual memory).
● StackSize (4 bytes): The number of bytes that the OS should dedicate to the stack. A
zero means to use a default stack size.
● Code/Data (MemSize bytes): This is a variable amount of bytes which are the actual
code and data to be loaded into memory starting at MemLow.
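For illustration, the fixed-size header described above could be parsed with a few lines of Python (a hedged sketch with invented names; the real loader is Process.exec() in simulator.py). Little-endian byte order is assumed, to match the mipsel target:

```python
import struct

BADEX_MAGIC = 0x1BADECEC

def parse_badex(raw):
    # The eight header fields are 32-bit little-endian integers (32 bytes
    # total), followed by MemSize bytes of code/data.
    (magic, entry, mem_low, mem_size,
     bss_start, bss_size, heap_size, stack_size) = struct.unpack_from("<8I", raw, 0)
    if magic != BADEX_MAGIC:
        raise ValueError("not a badex file")
    code = raw[32:32 + mem_size]  # Code/Data immediately follows the header
    return {"entry": entry, "mem_low": mem_low, "code": code,
            "bss_start": bss_start, "bss_size": bss_size,
            "heap_size": heap_size, "stack_size": stack_size}
```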
The code to load a badex into memory and get it ready to run is implemented in
Process.exec() in simulator.py, so-named because this is basically what the exec syscall
family does.
Processes and the Scheduler
The "kernel" keeps track of the current process (the one actively using the CPU) and has a
single list of all loaded processes (these are the current_process and processes global
variables, respectively, in simulator.py). Keeping just one combined list is a bit unusual; many OSes keep some
separate lists for different process states (e.g., a list of runnable processes, a list of blocked
processes, etc.). We don't bother and just keep them all together; we can tell which state a
process is in by examining some of its attributes. For example, if a process has terminated, it
will have its exit_code attribute set (and since our kernel does not presently have a wait()
syscall, it will get "reaped" automatically in the near future).
The kernel has a simple round-robin scheduler implemented by do_context_switch() and
next_process(). When do_context_switch() is called, it will call next_process(), which
first finds the current process in the process list, and then finds the next runnable process after it
(possibly looping back around to the beginning of the list). Processes that are sleeping are
skipped (or awakened if their sleep time is over). If all processes are sleeping, the simulator will
just go to sleep itself for a while. If there are no processes alive at all, the simulator will exit.
When the scheduler has identified the next process that should run, it will call
context_switch(), and pass it the PCBs of the process we are switching away from and the
process we are switching to. Note that the implementation of context_switch() you are given
is incomplete, which prevents the OS from being able to actually support multiple processes
simultaneously! Fixing this is the primary goal of this project.
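The round-robin selection described above can be sketched like this (a simplified illustration with invented names; the real logic in next_process() also handles awakening sleepers and the all-asleep case):

```python
# Hedged sketch of round-robin process selection: start just after the
# current process, wrap around the list, and return the first runnable one.
def pick_next(processes, current):
    start = processes.index(current)
    n = len(processes)
    for offset in range(1, n + 1):
        candidate = processes[(start + offset) % n]
        if candidate.runnable():
            return candidate
    return None  # nothing is runnable right now
```

Note that if the current process is the only runnable one, the wrap-around means it simply gets picked again.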
The scheduler is invoked in a couple situations. For example, when a process executes the
exit syscall, the exit syscall handler calls do_context_switch() in order to switch away
from the terminated process to some still-running process (the code for this from
syscall_handler() is shown below). More importantly, the simulated timer interrupt just calls
do_context_switch() periodically to allow for preemptive multitasking (since the timer is the
only kind of interrupt implemented in the simulator at present, this is all the
interrupt_handler() trap handling function does).

elif cpu.R.V0 == 0xfa1: # exit
    # Exit the process by setting the .exit_code attribute in the PCB
    # and then context switching to the next process.
    current_process.exit_code = cpu.R.A0
    do_context_switch()
elif cpu.R.V0 == 0xfee: # gettimeofday

The Toolchain
Note: This section describes how programs for the simulated computer/OS are actually
compiled. It's probably the safest one to skip. Unless you want to actually understand it, or do
a final project based on it!
There's more to compiling working C code than just the C compiler. You need a whole chain of
tools for a working system. Perhaps unsurprisingly, this is often referred to as a compiler
toolchain. This typically includes the C compiler itself, an assembler (to turn the assembly code
produced by the compiler into machine code), a linker (which can glue multiple pieces of
assembled machine code into a single executable), libraries (including the C standard library
which contains things like the implementations of strtok() and malloc(), as well as functions
which provide C interfaces to an OS's syscalls), and more. A major accomplishment of the
open source software community was creating freely available, nonproprietary versions of
these. In particular, the GNU Project developed the GNU Compiler Collection (gcc), GNU
binutils, and the GNU C Library (glibc), which have been (and continue to be) a cornerstone of
open source software (and perhaps especially Linux).2
As described in the sections above, our simulated hardware is based on MIPS CPUs and our
simulated kernel is inspired by Linux. gcc supports these, so we can use the gcc compiler and
other parts of the GNU toolchain. When you type "gcc" on a Linux machine, you generally get
the compiler that will compile for the machine you're using. That probably uses an Intel or an
ARM processor (rhea, for example, has an Intel processor). However, one can install additional
compilers. It's common for these to be available with the specific "target machine type" as a
prefix to the filename. For example, on rhea, mipsel-linux-gnu-gcc is the gcc compiler for
Linux systems running MIPS processors. That's what we'll use (the "el" in "mipsel" is for Little
Endian -- for a 32 bit integer, the four bytes are stored with the least significant one first; that's
the common way to do it on most contemporary CPUs).
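A quick way to see byte ordering in action is Python's struct module, which can pack the same 32-bit value both ways:

```python
import struct

# Little-endian ("<"): the least significant byte comes first in memory,
# which is how a mipsel target stores a 32-bit integer.
little = struct.pack("<I", 0x12345678)  # b'\x78\x56\x34\x12'
big    = struct.pack(">I", 0x12345678)  # b'\x12\x34\x56\x78'
```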
2 For many years, the GNU tools were almost "the only game in town" if you wanted open source. More
recently, other projects in this space (e.g., the clang and LLVM compiler technologies, the musl C library,
etc.) have become truly competitive. In part this is thanks to Apple wanting more control of their
technologies; they hired the original author of the clang compiler and transitioned from gcc (and other
GNU tools) to clang (and its associated tools) -- making huge advances in clang along the way.
Our toolchain deviates from a standard one in two ways. First, as mentioned above, the final
output of compiling/assembling on a Linux machine is generally an executable file in the ELF
format. Linux knows how to load and run these, but our OS does not -- our OS only knows how
to load/run Bad Executables (badex files). gcc actually knows several executable file formats,
but none of them are badex. So our toolchain requires a final step to convert from ELF to
badex. The other major difference is the C library.
C libraries contain all the stuff defined by the C standard (like strtok(), printf(), and
malloc()). Implementing some of this functionality typically requires interfacing with an
operating system (for example, printf() needs to be able to send data to an output terminal);
this type of OS interface is therefore also often part of C libraries like GNU glibc and the popular
alternative, musl. These are great projects meant for real use in production systems. As a
result, they're a bit "heavyweight" and make a lot of decisions to get the best performance they
can on real Linux systems. For an OS class running a simulated OS on a simulated computer,
flexibility and simplicity are more important. As a result, the C library I am using is based on one
called picolibc, which is based on one called newlib. These are generally meant for running on
special purpose devices rather than ordinary desktop/server computers. For example, they're
used on things like embedded systems (things like smart watches, laboratory and factory
equipment, smart household appliances, etc.), and are at the heart of the Nintendo DS
homebrew video game community. I maintain a version of picolibc which has basic support for
Linux and the OS simulator; this project uses that.
Invoking the compiler to use the custom libraries and such is more than you probably want to
remember. The included Makefile compiles the existing demos for you using the correct
options. You can also use the included misc/compile.sh shell script, like so:
$ misc/compile.sh demos/hello.c
This will invoke the correct compiler with the correct options, will use the correct C library, and
will automatically convert the resulting ELF executable to a badex executable. Thus, it will
produce hello.elf (an ELF executable which you can run on Linux by just typing
./hello.elf) as well as hello.badex, which can be run with the simulator (see the section on
running the simulator to see how).
Demonstration Programs
To demonstrate the system, we need some programs to run on it. A number of C demonstration
programs are available in the demos/ directory. Most of them are little graphical demos, and
many of them have periods of heavy CPU usage or are entirely CPU-bound. Many of them
support some simple command line options (e.g., to change the window size); check their
main() functions to see. Most of them also have some more information about their origins as
comments in the code, but here are some brief descriptions:
Source code Description
hello.c Text based. A "hello world" program which also prints the command line
arguments, if any. Runs and terminates quickly.
primes.c Text based. Computes and prints out prime numbers; there's an option to
set how many. CPU-bound.
knockknock.c Text based. Generates almost entirely nonsensical knock-knock jokes.
Usually sleeps a lot (adjustable).
dazzler.c A historical kaleidoscope program which ran on the first color graphics
adapter for microcomputers and supposedly caused a traffic jam on Fifth
Avenue in New York in the late 1970s. The original version was written by
Li-Chen Wang, who was part of the rather historically significant
Homebrew Computer Club.
ds.c An implementation of the famous Diamond-Square algorithm which has
been used to make computer generated clouds for images, 3D terrain for
video games, etc.
mandelbrot.c Computes the famous Mandelbrot Set fractal using 64-bit fixed-point math.
It focuses on a part of the set that I think is pretty, but you can tweak it in
the code for a more classic Mandelbrot look.
maze.c Generates and solves mazes using two variations of the Depth First
Search algorithm. Options to set the size, etc.
mystify.c Bouncing lines like this were a common little graphical demo in the 1980s
and 1990s (similar to the Windows "Mystify Your Mind" screensaver).
Getting and Running the Simulator
To get started, running make will build all the demonstration programs in both .elf and .badex
forms as described in the Executable Files section (in short: the ELF versions of the
non-graphical programs are normal Linux programs; the Bad Executable versions can run in the
simulator). If you run make asm, it will build assembly listings of all of the programs (the ".s" files);
you can read these in a text editor to see the compiled programs in assembly language form.
The simulator doesn't do anything on its own -- you need to tell it one or more programs to load
and run. When you're first given the assignment, it can load multiple programs, but it can't
switch between them because context_switch() is incomplete. Let's start by running a single
instance of the knockknock program. We'll pass it the option -n1 so that it only tells us a single
joke. It should look something like this:
[simulator]$ ./simulator.py demos/knockknock -n1
Process 1 PC:00400480
Knock knock!
Who's there?
Dog says.
Dog says who?
Dog says the water bottle, it's not knocking!
All processes terminated.
[simulator]$
This joke itself may be buggy (i.e., not very funny), but the program ran successfully.
Of course, we'd like to be able to run multiple programs at once. To do this, we enter each of
their command lines, separated by "---". For example, we should be able to run the
knock-knock generator and the prime number generator at the same time, which would look
something like this:
[simulator]$ ./simulator.py demos/knockknock -j0 -l0 --- demos/primes 500
Process 1 PC:00400480
Process 2 PC:00400270
Prime number 2 is: 3
Prime number 3 is: 5
Knock knock!
Who's there?
Prime number 4 is: 7
Apple.
Prime number 5 is: 11
Apple who?
Apple the table, it's not working!
Prime number 6 is: 13
Knock knock!
Prime number 7 is: 17
Who's there?
...
In the example, we're running the knock-knock generator with the -j0 and -l0 options to
remove all delays, and also generating the first 500 prime numbers. You can see that these are
running at the same time and being switched between (the simulator applies different colors to
different processes so that you can tell their outputs apart). In theory, we could add more
programs, multiple instances of the same program with different options, etc. But again, this
currently won't work, because it's up to you to make context switching work!
Graphical Programs
To view graphical programs, you'll use your web browser. To get started, try running the
mystify program in the simulator. While it's running, open your web browser to:
https://rhea.sockette.net:8182/gui
Note: You should probably use Chrome or Firefox for this. Safari doesn't seem to remember the
password and constantly prompts for it.
This should open a page where you can see the running graphical programs. When the
simulator exits and is restarted, your browser should reconnect to the new simulator
momentarily. You can test that the GUI is working by running one of the graphical demos, for
example:
[simulator]$ ./simulator.py demos/mystify
Context Switching Background
Both lecture and the text have discussed context switching. If you have been able to follow
along, you may be able to skip this section. If you could use another perspective or would like
to review, the remainder of this section attempts a brief coverage of the subject.
As discussed in the lecture on the Process Abstraction, we use time sharing to share the CPU
among the various independent processes running on a machine. Essentially, one process gets
to use the CPU for a while (it's in the running state), then another process gets to use the CPU
for a while, and so on. The actual transition from one process running to another is called a
context switch.
There are two interesting things which can cause a context switch to occur, both of which
involve a type of trap:
1. A process performs a blocking syscall, essentially voluntarily giving up the CPU for a
while. When this happens, the process enters the blocked state, also known as the
sleeping state. One example blocking syscall is sleep(), which specifically puts the
process to sleep for a given amount of time. Another example is using read() to read
input when there is no input available (or at least there isn't any available yet). When
the blocked process becomes unblocked (e.g., because the desired amount of time has
passed), the process switches to the ready state -- ready to be run again. (In the context
of the system you're given for this project, the only blocking syscall corresponds to the
Linux nanosleep syscall, which puts the process to sleep for at least the given number of
nanoseconds -- you can see the code for this inside the syscall_handler() function.)
2. The timer interrupt goes off, which executes the kernel's timer interrupt handler code,
which determines that the current process has been in the running state for too long and
that another process should get a chance to run. In this case, the interrupted process
enters the ready state -- it's immediately ready to run again. (In the context of this
project, there is a simulated timer which goes off periodically which causes an interrupt
trap which is handled by the interrupt_handler() function in simulator.py.)
There are also other less-interesting things which will cause a context switch to occur -- for
example, if the current process exits, you had better context switch to another process! (You
can see that the code for the exit syscall switches to another process inside
syscall_handler().)
Processes are an abstraction implemented by the OS, and the OS maintains a data structure for
each process known as the process control block or PCB. This is where the OS stashes all of
the crucial information about a process. The OS also implements the context switch code. As
discussed in lecture, a context switch can be (very vaguely) thought of like so:
1. The OS takes a "snapshot" of the state of the computer while it's running Process A
(with the snapshot being stored in or referred to by Process A's PCB).
2. The OS then rearranges the state of the computer so it looks like the most recent
snapshot it took while running Process B.
Eventually (after running Process B and possibly other processes), Process A's state will be
restored in the second step, and it can continue running.
When we say a snapshot of the "state of the computer", we basically mean everything
necessary to let us resume where we are leaving off. This must clearly include the process's
primary storage (RAM), and the values of the CPU registers. Note that this isn't just the
"regular" CPU registers. We also need to remember the special registers. For example, MIPS
processors have the HI and LO registers used for multiplication, and all the common CPUs I
know of have a register which contains the address of the instruction currently being executed
(this is often called the Program Counter or PC; Intel calls it the Instruction Pointer).
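The two-step snapshot idea can be sketched in Python (conceptual only -- the attribute names here are invented, and this is not the assignment's actual code; the real PCB is the Process class in simulator.py):

```python
# Conceptual sketch of a context switch as snapshot/restore.
def switch(cpu, old_pcb, new_pcb):
    # Step 1: snapshot the outgoing process's CPU state into its PCB.
    # This must include every register -- the general-purpose ones as
    # well as specials like the PC (and HI/LO on MIPS).
    old_pcb.saved_registers = dict(cpu.registers)

    # Step 2: rearrange the machine to look like the incoming process's
    # most recent snapshot. Its RAM lives in a separate address space,
    # so we point the CPU at that memory rather than copying it around.
    cpu.registers = dict(new_pcb.saved_registers)
    cpu.memory = new_pcb.memory
```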
There's a slight bootstrap problem here, of course: When a process first starts, there's no saved
snapshot to restore from! The OS essentially makes one up. In the initial "snapshot", the PC
register should be pointing at some initialization code (which does a little bit of setup and then
jumps to the main() function), and the stack pointer register should be set to point at the top of
the stack memory. Not much else matters!
Completing the Assignment
In this programming assignment, you are given a simulated computer system that only runs one
process at a time. Your task is to finish the implementation of preemptive multitasking in the
simulated operating system such that it can run multiple processes "simultaneously". But you
must also document your process. That is, only half of the grade comes from the code you
submit (as mentioned above, it's only a few lines of code to complete the assignment!). The
other half comes from a video in which you describe and reflect on how you arrived at the
solution.
The video should be between three and five minutes long. It should describe how you
approached the problem, what problems you faced, what you learned along the way, and how
you eventually completed the assignment. Documenting your missteps is as important as
documenting how you succeeded! I can look at the code you turned in to see what you
eventually did -- the video should explain how you reached that point. Thus, you should
probably be thinking about and making notes about your progress as you go, instead of
leaving it to the end. It may be useful to think of questions like the following (though not limited
to the following!) starting from the very beginning:
● How good of a grasp did you have on the assignment coming into it? Did you
understand what you were supposed to do? Did you have some expectation of what the
simulator code would look like before you started? After looking at the code, did it look
like you expected?
● How did you start working on the project? Did you read specific sections of this
document? Read more of the provided code? Refer to lecture slides or your notes?
Google stuff? Try running examples?
● How did your first attempt at solving the problem go? Did you get anything wrong? How
did you go about figuring out the problem?
● What was the single biggest issue you ran into when completing the assignment? How
did you end up working through it? Were you missing some piece of information (or
more than one piece)? Was there something you misunderstood?
● What (if anything) do you think you learned related to the course material? Did you learn
anything that seems useful but wasn't directly related to the course material? It's okay to
say you didn't learn anything -- it would be good for me to know that it's a waste of time
and I should do something else next semester!
● Ultimately, what does your solution do? How does it work? What is different about it in
the simulator versus how it would be in a real OS on real hardware?
Again, your video should probably be three to five minutes. Feel free to show screenshots of
old versions of your code, or slides that help illustrate your points.
To turn it in:
1. Upload your video to Google Drive
2. Share your video with me (people always forget this!)
3. Submit on Moodle
a. The assignment has an upload portion -- submit just your simulator.py file (it's
the only one you should have needed to change)
b. The assignment also has a text portion -- paste in the link to your video
Grading
● 50% for a working solution. I appreciate a nice clean solution, but this isn't a Python
programming class, so as long as it works / implements the correct idea, you should
expect a good grade.
● 50% for the video. Note that if this is a solid reflection on the work you did, you can get
these points even without a working solution (but please get a working solution!)
