# Talk:Japan

### From Progteam

Ok, so I'm not quite sure I follow the idea given below. I've taken a different route, which isn't as efficient, but was easier for me to understand. By modifying mergesort I was able to do inversion counting in O(nlogn) time, which is now fast enough for it to run to completion and tell me that my answer is wrong on PKU. This has been quite frustrating since my code works on all the inputs I've made up so far. Would it be OK to post my code up here and see if others have an input that can break it? --Jinho 22:58, 22 March 2008 (EDT)
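For reference, the modified-mergesort approach works like this (a minimal sketch in Python, not Jinho's actual code): whenever the merge step takes an element from the right half before the remaining elements of the left half, that element forms one inversion with each of those remaining left elements.

```python
def count_inversions(a):
    """Return (sorted copy of a, number of inversions), in O(n log n)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv = [], inv_l + inv_r
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i   # right element jumps over the rest of left
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inv
```

For example, `count_inversions([2, 4, 1, 3, 5])` reports 3 inversions: (2,1), (4,1), and (4,3).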

For easy/fast coding, one way to use data structures is to take a general data structure and specialize it to take advantage of the regularity in the data. So rather than build a general search tree, use the fact that the keys are, say, integers between 0 and m-1. Use, for example, the perfectly balanced binary tree given by a heap. Suppose 2^k is the first power of 2 that is greater than or equal to m. Let the leaves of the tree be A[2^k .. 2^k+m-1]. Each location in A[0..2^k+m-1] is initialized to zero. The count at each node is the number of items in the subtree rooted at the node. The parent of a node j is j/2 (why? answer: why not? So A[0] is wasted. So what. Root A[1] is not needed either.) To insert key j, write:

```
procedure insert(j)
    loc <- j + twotothek
    repeat
        A[loc] <- A[loc] + 1
        loc <- loc/2
    until loc = 0
```

(Okay, the zero could be a one. So what.)

Delete is no different: decrement instead of increment.

To find how many entries are less than j use:

```
function lessthan(j)
    temp <- 0
    loc <- j + twotothek
    repeat
        if odd(loc) then temp <- temp + A[loc-1] endif
        loc <- loc/2
    until loc = 1
    return temp
```

The code is fast, easy to read, easy to write. --Siegel 11:26, 22 March 2008 (EDT)
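For anyone who wants to run it, here is a direct translation of the pseudocode above into Python (the class and method names are mine):

```python
class CountTree:
    """Counts of integer keys in [0, m), stored in a heap-style flat array."""

    def __init__(self, m):
        self.two_to_k = 1
        while self.two_to_k < m:          # first power of 2 >= m
            self.two_to_k *= 2
        self.A = [0] * (2 * self.two_to_k)  # A[0] is wasted, as noted above

    def insert(self, j, delta=1):
        """Add a copy of key j; delta=-1 gives delete."""
        loc = j + self.two_to_k
        while loc >= 1:                   # walk leaf-to-root, updating counts
            self.A[loc] += delta
            loc //= 2

    def lessthan(self, j):
        """Number of inserted keys strictly less than j."""
        temp, loc = 0, j + self.two_to_k
        while loc > 1:
            if loc % 2 == 1:              # odd: left sibling holds smaller keys
                temp += self.A[loc - 1]
            loc //= 2
        return temp
```

After inserting keys 3, 5, and 7 into a `CountTree(10)`, `lessthan(6)` returns 2.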

This problem is a data structures/sorting question. For each road (i,j), how many roads (k,l) with index k<i have destination l>j? So the idea is:

Step 1. Build a data structure to return the number of pairs (k,l) where k<i.

Enter all of the roads into it.

Step 2. As Jason suggested, process the roads for j = 1, 2, 3, ... (j as the primary key, with the secondary key i also increasing).

Compute the count for the current (i,j), which is just the number of roads (k,l) still in the structure with k<i. Then remove road (i,j) so that the j you process next is always the smallest remaining.

Use an efficient data structure.

So the idea is: for each road (k,l), insert the record (primary key k, secondary key l). Use standard tricks to compute the number of entries with key less than a given k.

Enter all such roads. Now greedily sequence through the roads (i,j) in the order (j = primary key, i = secondary key) and count the number of entries (k,l) with k<i. Then remove (i,j). The counts sum to the number of intersections.

This was a procedural description of an algorithm that runs in O(r log r) time, where r is the number of roads.

Classifications: sorting, inversion counting. Also enhanced search trees. Fast (in code design) implementations of enhanced search trees (over pairs of records with keys that are integers in [0,n]). --Siegel 19:43, 14 March 2008 (EDT)

Hmm, interesting interval problem. Do you think a greedy approach might work well? --Siegel 14:05, 26 February 2008 (EST)

Did you try divide and conquer? Here is an equivalent problem (why?) that is kind of nice (it will appear somewhere, sometime, and probably already has). The data is a list of n horizontal intervals on the x-axis. You can assume that the endpoints are integers, if you like. The interval [a,b] contains [c,d] if a<=c and d<=b. You can assume that all intervals are distinct. Find the number of pairs where one contains the other. I guess a problem variant is to define the other relationships (incomplete overlap and disjoint) and count those as well. --Siegel 02:38, 26 February 2008 (EST)
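One way the divide and conquer might go (my own sketch, under the stated assumption that all intervals are distinct): sort by left endpoint, longer interval first on ties; then an interval can contain a later one exactly when its right endpoint is at least as large, and those pairs can be counted during a mergesort over the right endpoints.

```python
def containing_pairs(intervals):
    """Count pairs of distinct intervals where one contains the other."""
    # After sorting by (a, -b), interval p contains a later interval q
    # exactly when b_p >= b_q, so count pairs p < q with rights[p] >= rights[q].
    order = sorted(intervals, key=lambda iv: (iv[0], -iv[1]))
    rights = [b for _, b in order]

    def solve(seq):                     # returns (sorted seq, pair count)
        if len(seq) <= 1:
            return seq, 0
        mid = len(seq) // 2
        left, cl = solve(seq[:mid])
        right, cr = solve(seq[mid:])
        merged, count, i, j = [], cl + cr, 0, 0
        while i < len(left) and j < len(right):
            if left[i] >= right[j]:     # this left element covers right[j]...
                count += len(left) - i  # ...and so do all later left elements
                merged.append(right[j]); j += 1
            else:
                merged.append(left[i]); i += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, count

    return solve(rights)[1]
```

For example, among [0,10], [1,5], [2,3], [4,12] the containing pairs are ([0,10],[1,5]), ([0,10],[2,3]), and ([1,5],[2,3]), so the count is 3.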

I added a lookup table, but it's still too slow. Any ideas?

-- Melanie