Offline prefetching

We are given (offline) a sequence of page requests, one for every time unit. We have a cache of size k. If a page is in the cache it can be served directly; otherwise it must be fetched, and some page must be evicted from the cache to make room. A fetch takes F time units, so a page fetched only when it is requested introduces F units of stall time. The idea is that pages can be fetched in advance, and while a fetch is in progress other pages can be served from the cache, thereby reducing the stall time. The constraint is that two fetches cannot overlap in time.
In this implementation we assume that the cache initially contains the first k distinct requested pages.
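A minimal sketch of this cost model in Python, assuming a hypothetical schedule format in which each fetch is described by the index of the request at which it is issued, the page it brings in, and the page it evicts (names and format are illustrative, not those of the actual implementation):

    # Sketch of the stall-time model (hypothetical helper, not the actual program).
    # schedule: list of (issue_at, page_in, page_out), sorted by issue_at, where
    # issue_at is the index of the request at which the fetch is started; at most
    # one fetch is issued per request in this sketch.
    def stall_time(k, F, requests, schedule):
        cache = set()
        for p in requests:              # the cache initially holds the first k
            cache.add(p)                # distinct requested pages
            if len(cache) == k:
                break

        time = 0        # current time: one unit per served request, plus stalls
        stall = 0       # total stall time accumulated so far
        pending = None  # page currently being fetched, if any
        done = 0        # completion time of the fetch in progress
        s = 0           # index of the next fetch in the schedule

        for i, page in enumerate(requests):
            if s < len(schedule) and schedule[s][0] == i:
                _, page_in, page_out = schedule[s]
                time = max(time, done)   # two fetches may not overlap in time
                cache.discard(page_out)  # the evicted slot stays empty ("_")
                pending, done = page_in, time + F
                s += 1
            if pending is not None and done <= time:
                cache.add(pending)       # the fetch in flight has completed
                pending = None
            if page not in cache:        # must be the page still in flight:
                assert page == pending   # stall until the fetch completes
                stall += done - time
                time = done
                cache.add(pending)
                pending = None
            time += 1                    # serving one request takes one time unit
        return stall

Note that only waiting for a page that is still in flight counts as stall time; delaying the start of a fetch because the previous one has not yet finished does not stall the processor.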

Input

Cache size
Fetch duration
Request sequence
Output orientation (horizontally or vertically)
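
For example, a hypothetical input for the run shown below, one value per line (the request sequence is the one appearing in the sample output; the fetch duration is only a guess, and the exact formatting depends on the implementation):

    4
    4
    a a h f h a h h c a b a h a c c c c b h
    vertically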

Output

request : cache at the beginning of the request
(an underscore marks a slot whose page has been evicted while its replacement is still being fetched)
a : a h f c
a : a h f c
h : a h f c
f : a h f c
h : a h f c
a : a h f c
h : a h _ c
h : a h _ c
c : a h _ c
a : a h _ c
b : a h b c
a : a h b c
h : a h b c
a : a h b c
c : a h b c
c : a h b c
c : a h b c
c : a h b c
b : a h b c
h : a h b c
total cost (stall time) =  0
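
Under the hypothetical schedule format of the sketch above, this run corresponds to a single fetch that brings in b and evicts f, issued at the seventh request; a fetch duration of 4 is consistent with the displayed cache contents, since b appears exactly when it is requested and no stall is incurred:

    # hypothetical usage; the schedule and F = 4 are inferred from the trace above
    reqs = "a a h f h a h h c a b a h a c c c c b h".split()
    print(stall_time(4, 4, reqs, [(6, "b", "f")]))   # prints 0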