Round Robin processor scheduling is queue-based, as is FIFO disk-arm scheduling.
More general processor or disk-arm scheduling policies often use priority queues (with various definitions of priority). We will learn how to implement priority queues later in this chapter (section 2.4).
Homework: (You may refer to your 202 notes if you wish; mine are on-line, linked from my home page.) How can you interpret Round Robin processor scheduling and FIFO disk scheduling as priority queues? That is, what is the priority? Answer the same question for SJF (shortest job first) and SSTF (shortest seek time first). If you have not taken an OS course (202 or the equivalent at some other school), you are exempt from this question; just write on your homework paper that you have not taken an OS course.
Problem Set #1, Problem 2: C-2.2
Unlike stacks and queues, the structures in this section support operations in the middle, not just at one or both ends.
The rank of an element in a sequence is the number of elements before it. So if the sequence contains n elements, 0≤rank<n.
A vector storing n elements supports the following operations:

    elemAtRank(r): return the element with rank r.
    replaceAtRank(r,e): replace the element at rank r with e and return the old element.
    insertAtRank(r,e): insert e at rank r, increasing the rank of subsequent elements.
    removeAtRank(r): remove and return the element at rank r, decreasing the rank of subsequent elements.
Use an array A and store the element with rank r in A[r].
Algorithm insertAtRank(r,e)
    for i = n-1, n-2, ..., r do
        A[i+1] ← A[i]
    A[r] ← e
    n ← n+1

Algorithm removeAtRank(r)
    e ← A[r]
    for i = r, r+1, ..., n-2 do
        A[i] ← A[i+1]
    n ← n-1
    return e
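To make the array-based approach concrete, here is a minimal sketch in Java. The class name ArrayVector, the generic element type, and the fixed capacity with no overflow check are my own choices for illustration, not part of the ADT.

    public class ArrayVector<E> {
        private E[] A;      // element with rank r is stored in A[r]
        private int n = 0;  // number of elements currently stored

        @SuppressWarnings("unchecked")
        public ArrayVector(int capacity) {
            A = (E[]) new Object[capacity];   // no growth/overflow handling, for brevity
        }

        // Shift A[r..n-1] one slot to the right, then store e at rank r.
        // Worst case Theta(n) because of the shifting loop.
        public void insertAtRank(int r, E e) {
            for (int i = n - 1; i >= r; i--)
                A[i + 1] = A[i];
            A[r] = e;
            n++;
        }

        // Remove and return the element at rank r, shifting A[r+1..n-1] left.
        // Worst case Theta(n) because of the shifting loop.
        public E removeAtRank(int r) {
            E e = A[r];
            for (int i = r; i <= n - 2; i++)
                A[i] = A[i + 1];
            n--;
            return e;
        }

        public E elemAtRank(int r) { return A[r]; }                                  // Theta(1)
        public E replaceAtRank(int r, E e) { E old = A[r]; A[r] = e; return old; }   // Theta(1)
        public int size() { return n; }
    }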
The worst-case time complexity of these two algorithms is Θ(n); the remaining algorithms are all Θ(1).
Homework: When does the worst case occur for insertAtRank(r,e) and removeAtRank(r)?
By using a circular array we can achieve Θ(1) time for insertAtRank(0,e) and removeAtRank(0). Indeed, that is the third problem of the first problem set.
Problem Set #1, Problem 3:
Part 1: C-2.5 from the book
Part 2: This implementation still has worst-case complexity Θ(n). When does the worst case occur?
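For reference, here is a minimal sketch (again in Java, with the class and field names f and n chosen by me) of just the rank-0 operations on a circular array; the general insertAtRank(r,e) is left to the problem set.

    // Sketch of the circular-array idea: the element with rank r lives at
    // A[(f + r) % A.length], where f is the index of the rank-0 element.
    public class CircularArrayVector<E> {
        private E[] A;
        private int f = 0;  // index of the rank-0 element
        private int n = 0;  // number of elements stored

        @SuppressWarnings("unchecked")
        public CircularArrayVector(int capacity) { A = (E[]) new Object[capacity]; }

        // Insert at rank 0 by moving f back one slot (wrapping around): Theta(1).
        public void insertAtRankZero(E e) {
            f = (f - 1 + A.length) % A.length;
            A[f] = e;
            n++;
        }

        // Remove the rank-0 element and advance f (wrapping around): Theta(1).
        public E removeAtRankZero() {
            E e = A[f];
            A[f] = null;
            f = (f + 1) % A.length;
            n--;
            return e;
        }
    }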
So far we have been considering what Knuth refers to as sequential allocation, in which the next element is stored in the next location. Now we will consider linked allocation, in which each element explicitly refers to the next and/or preceding element(s).
We think of each element as contained in a node, which is a placeholder that also contains references to the preceding and/or following node.
But in fact we don't want to expose nodes to users' algorithms, since this would freeze the possible implementations. Instead we define the idea (i.e., the ADT) of a position in a list. The only method available to users is element(), which returns the element stored at that position.
Given the position ADT, we can now define the methods for the list ADT. The first several methods only query the list; the remaining ones actually modify it.
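In Java the two ADTs might be written as interfaces roughly like the following. This is a sketch only: the method names follow common textbook conventions rather than any required API, and exception handling for invalid positions is omitted.

    public interface Position<E> {
        E element();   // return the element stored at this position
    }

    public interface PositionList<E> {
        // query methods: these do not change the list
        int size();
        boolean isEmpty();
        Position<E> first();
        Position<E> last();
        Position<E> before(Position<E> p);
        Position<E> after(Position<E> p);
        // update methods: these modify the list
        Position<E> insertFirst(E e);
        Position<E> insertLast(E e);
        Position<E> insertBefore(Position<E> p, E e);
        Position<E> insertAfter(Position<E> p, E e);
        E remove(Position<E> p);
        E replaceElement(Position<E> p, E e);
    }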
Now when we are implementing a list we can certainly use the concept of nodes. In a singly linked list each node contains a next link that references the next node. A doubly linked list contains, in addition, a prev link that references the previous node.
Singly linked lists work well for stacks and queues, but do not perform well for general lists. Hence we use doubly linked lists.
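For example, with a doubly linked node class along these lines (a sketch; it assumes that p always has a predecessor, e.g. that a header sentinel guards the front of the list), insertBefore is just a constant-time pointer splice:

    // Each node knows both its neighbors, so splicing in a new node before a
    // given node needs only a fixed number of pointer updates.
    class DNode<E> {
        E element;
        DNode<E> prev, next;

        DNode(E e, DNode<E> p, DNode<E> n) { element = e; prev = p; next = n; }

        // Insert e just before node p: Theta(1) for a doubly linked list.
        static <T> DNode<T> insertBefore(DNode<T> p, T e) {
            DNode<T> v = new DNode<>(e, p.prev, p);  // v links to both neighbors on creation
            p.prev.next = v;                         // old predecessor now points forward to v
            p.prev = v;                              // p points back to v
            return v;
        }
    }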
Homework: What is the worst-case time complexity of insertBefore for a singly linked list implementation, and when does it occur?