CSIS 460 Final Examination
Spring 2020
ID number:
Instructions
This test is open-book and open-notes. Answer each question in the space provided. You may use the back
of the page if necessary.
1. Suppose we had a 16-bit OS that used demand-paged virtual memory where each page-frame was 1024
bytes in size.
(a) (5 points) Of the 16 bits in an address, how many bits are used for the logical page number and
how many for the offset?
(b) (5 points) Suppose a process has the following logical-to-physical page mappings in its TLB:
0→ 805, 1→ 10, 2→ 2
Given a logical address of 1026 (i.e., binary 0000010000000010) what byte (byte#, not page &
offset) in physical memory is being addressed?
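For illustration only (not part of the exam), the arithmetic in part (b) can be checked with a short C sketch; the lookup table below hard-codes the hypothetical TLB mappings given above:

```c
#include <assert.h>
#include <stdint.h>

/* With 1024-byte (2^10) pages, the low 10 bits of a 16-bit address are
   the offset and the high 6 bits are the logical page number. */
#define OFFSET_BITS 10
#define PAGE_SIZE   (1u << OFFSET_BITS)

/* Hypothetical table encoding the TLB mappings from the question:
   page 0 -> frame 805, page 1 -> frame 10, page 2 -> frame 2. */
static const uint32_t frame_of_page[] = { 805, 10, 2 };

static uint32_t translate(uint16_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;     /* high-order bits */
    uint32_t offset = logical & (PAGE_SIZE - 1);  /* low-order bits  */
    return frame_of_page[page] * PAGE_SIZE + offset;
}
```

For the address 1026, the shift yields page 1 and the mask yields offset 2, so the physical byte is in frame 10.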
2. (5 points) Demand-paged virtual memory extends physical memory to secondary storage. Demand-paging:
I. Is the same thing as swapping
II. Uses secondary storage allocated for swapping
III. Improves response time through partial program loading
A. I only
B. II only
C. I and III only
D. II and III only
E. I, II and III
3. (5 points) Our virtual memory simulation program confirmed¹ that we generally expect an LRU page-replacement policy to result in fewer page faults than a FIFO policy because
I. LRU uses the past to predict the future
II. Memory references exhibit “locality of reference”
III. FIFO requires a linked-list queue
A. I only
B. II only
C. III only
D. I and II only
E. I, II and III
4. (15 points) Previous assignments involved computing the “expected access time” for accessing memory
or accessing data on disk. Now consider the complexity of the memory-hierarchy and the worst case
of manipulating a value in memory (e.g., i++) where the value (i.e., i) resides on a logical page of
memory that (a) has been paged-out to disk and (b) currently has no mapping in the TLB.
Describe, as a sequence of steps, the process required before we can manipulate such a value in the
ALU (i.e., compute i++). Assume a worst case where “misses” occur at every level (you may assume
that the process’ page-table is resident in main memory). Your description should account for elements
of the “memory hierarchy” including: registers, cache, the TLB in the MMU, main memory, disk buffer
“cache”, and physical disk.
¹ Or should have confirmed.
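For reference, the “expected access time” referred to in this question is the standard probability-weighted average over hit and miss costs; for demand paging with page-fault rate p it takes the textbook form:

```latex
\mathrm{EAT} = (1 - p)\,t_{\text{mem}} + p \cdot t_{\text{page fault}}
```

with analogous expressions at each other level of the hierarchy (TLB, cache, disk buffer), each weighting its hit time and miss penalty by their probabilities.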
5. Suppose a process on a 32-bit machine using demand-paged virtual memory has been allocated five
4096-byte page-frames in its working set and its memory holds a large array (defined as int myArray[5121])
that the process is repeatedly iterating over.
(a) (5 points) Explain why LRU is the worst possible choice for managing these 5 page-frames.
(b) (5 points) What would be a better mechanism and why?
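To see why a sequential scan defeats LRU, here is a minimal fault-counting sketch in C (an illustration, not the course's simulator). The 5121-int array spans six 4096-byte pages, so repeatedly scanning pages 0 through 5 with five frames evicts each page just before it is needed again:

```c
#include <assert.h>
#include <string.h>

#define FRAMES 5

/* Minimal LRU fault counter: frames[] is kept in recency order,
   with the least recently used page at index 0. */
static int lru_faults(const int *refs, int n) {
    int frames[FRAMES];
    int used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = j; break; }
        if (hit < 0) {                          /* page fault */
            faults++;
            if (used < FRAMES) {
                frames[used++] = refs[i];       /* free frame available */
            } else {                            /* evict the LRU page */
                memmove(frames, frames + 1,
                        (FRAMES - 1) * sizeof frames[0]);
                frames[FRAMES - 1] = refs[i];
            }
        } else {                                /* hit: move to MRU end */
            int page = frames[hit];
            memmove(frames + hit, frames + hit + 1,
                    (used - 1 - hit) * sizeof frames[0]);
            frames[used - 1] = page;
        }
    }
    return faults;
}
```

Two sequential passes over pages {0,1,2,3,4,5} produce a fault on every single reference under this policy.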
6. (5 points) Which of the following CPU scheduling algorithms potentially allow a process to starve (i.e.,
never get sufficient CPU time to complete)? Circle all that apply:
A. First Come First Served
B. Shortest Job First
C. Shortest Job First with Preemption
D. Round Robin
E. Priority-Based
F. Priority-Based with Preemption
7. Consider the following code:

   #include <stdio.h>
   void fillUp(char *buffer) {
       int i = 0;
       while ((buffer[i++] = fgetc(stdin)) != EOF);  // Clever ;-)
       buffer[i] = '\0';
   }
   int main() {
       char myBuffer[100];
       fillUp(myBuffer);
       puts(myBuffer);
   }
(a) (5 points) Why is this code “unsafe” with respect to memory (be specific)?
(b) (5 points) How might we change the “signature” and implementation of fillUp() to make this
safe (hint: consider how fgets() works)?
8. (10 points) On a modern general-purpose OS, such as Linux or Windows, how do kernel mode and,
separately, the use of logical memory prevent a buggy program from adversely affecting other processes
or the OS?
9. The UNIX filesystem uses multi-level indexed allocation implemented via inodes (i.e., index-nodes). As
shown in Figure 11.9 on page 518 of the text, UNIX uses a combined scheme where each inode contains
direct pointers to the first 12 blocks of data as well as indirect pointers to index blocks (if necessary),
packing multiple inodes into a single disk block.
As an alternative, suppose we increase the size of an inode to a single disk block (say 4K bytes) and
replace the 12 direct pointers with almost an entire block of data instead:
[Figure: the enlarged inode fills an entire disk block. It holds the metadata fields (mode, owners, timestamps, size, block count), then nearly a full block of direct data, then single-, double-, and triple-indirect pointers.]
(a) (10 points) For systems with many tiny files (i.e., < 4K bytes), how and why would this improve
the performance of file I/O (i.e., how many disk blocks would have to be read to access a tiny
file’s data using this scheme as opposed to the scheme of Figure 11.9)?
(b) (10 points) For systems with few such tiny files how and why would this degrade the performance
of file I/O (be specific — how does this add I/O to blocks of small files larger than a “tiny” file)?
10. Imagine I wrote the following (pointless) multi-threaded program that uses “mailboxes” for communication.
My Mailbox class (not shown) is similar to the MessageQueue class shown in Figure 3.21 of
the text except that my Mailbox class does not rely on Java synchronization for the Vector class but
uses explicit semaphores (also not shown) that are associated with each mailbox. Thus my Mailbox
class includes acquire() and release() operations to get mutually exclusive access to a mailbox:
 1 public class Server {
 2     public Server() {
 3         Mailbox mbox1 = new Mailbox();
 4         Mailbox mbox2 = new Mailbox();
 5
 6         // Stick a couple things in the mailbox for starters
 7         mbox1.send("this");
 8         mbox2.send("that");
 9         MyThread thread1 = new MyThread(mbox1, mbox2);
10         MyThread thread2 = new MyThread(mbox2, mbox1);
11         thread1.start();
12         thread2.start();
13     } // Server
14
15     public static void main(String[] args) {
16         Server server = new Server();
17     } // main
18
19     public class MyThread extends Thread {
20         private Mailbox inBox, outBox;
21
22         public MyThread(Mailbox in, Mailbox out) {
23             inBox = in;
24             outBox = out;
25         } // MyThread
26
27         public void run() {
28             String data;
29
30             while (true) {
31                 // Lock the inbox & outbox
32                 inBox.acquire();
33                 outBox.acquire();
34                 // Read the data and transfer it from my inbox to my outbox
35                 outBox.send(data = inBox.receive());
36                 // Unlock the inbox & outbox
37                 outBox.release();
38                 inBox.release();
39                 if (data != null) System.out.println(data);
40             } // while
41         } // run
42     } // MyThread
43 } // Server
(a) (10 points) Unfortunately, my Server program often seems to randomly “hang” after running for
a while. What and where is the problem (give specific line number(s))?
(b) (5 points) What relatively simple change(s) could I make to my program to fix this problem and
why does your fix solve the issue? Be specific — show code above and/or give code and line
numbers here. Note, the change(s) are relatively simple and do not require the introduction of
any new objects or classes; however, they are not trivial.
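For context only (this is not the requested Java edit): the standard cure for a hang of this kind is to impose one global acquisition order on the two locks, so no pair of threads can each hold one lock while waiting for the other. A C/pthreads sketch of the idea, with all names invented for illustration:

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of the classic deadlock remedy: whenever two locks must be
   held together, acquire them in one global order (here, by address),
   no matter which one is "mine". */
typedef struct {
    pthread_mutex_t lock;
    int items;
} box_t;

static void lock_pair(box_t *a, box_t *b) {
    if (a < b) {
        pthread_mutex_lock(&a->lock);
        pthread_mutex_lock(&b->lock);
    } else {
        pthread_mutex_lock(&b->lock);
        pthread_mutex_lock(&a->lock);
    }
}

static void unlock_pair(box_t *a, box_t *b) {
    pthread_mutex_unlock(&a->lock);
    pthread_mutex_unlock(&b->lock);
}

/* Move one item between boxes while holding both locks. */
static void transfer(box_t *from, box_t *to) {
    lock_pair(from, to);
    if (from->items > 0) { from->items--; to->items++; }
    unlock_pair(from, to);
}
```

Because every thread takes the pair in the same order, the circular-wait condition for deadlock can never arise.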
11. (0 points) T F
While the well-known Trojan Horse is a diabolical program that masquerades as an innocuous piece of
software, the lesser-known Quaker Pony is an overly friendly program that refuses to take any action
it deems hostile in any way to the system. A common Quaker Pony is a hacked UNIX kill utility
that refuses to terminate processes.
12. We used a Java ThreadPool for the Web Crawler.
(a) (5 points) “Pooling” of threads can be beneficial by
I. Eliminating the overhead of dynamically allocating/deallocating resources
II. Reducing the overhead of a process context-switch
III. Limiting the number of instances of a resource
A. I only
B. II only
C. III only
D. I and III only
E. I, II, and III
(b) (5 points) Why was a ThreadPool not appropriate for the Foxes & Hounds program?
13. In discussing “dynamic read-only memory” and the kernel, Linux Weekly News (LWN) describes a kernel patch that
I. Allows user-mode programs to dynamically allocate read-only memory pages
II. Enables marking a “pool” of dynamically allocated kernel memory pages as read-only
III. Allows read-only pages to dynamically be made read-write
A. I only
B. II only
C. III only
D. II and III only
E. I, II, and III
14. In discussing proposed optimizations to the ext3 filesystem, Linux Weekly News (LWN) described a
technique called “plugging”.
(a) (10 points) What is “plugging” and how is it related to disk scheduling algorithms such as LOOK,
SCAN, etc.?
(b) (5 points) How and why does file fragmentation work against this optimization?
(c) (5 points) Why would this optimization be largely irrelevant for solid-state-devices (SSD)?
