Whither Polymath?

Over at the Polymath blog, Gil Kalai recently proposed a discussion about possible future Polymath projects. This post is partly to direct you to that discussion in case you haven’t noticed it and might have ideas to contribute, and partly to start a specific Polymathematical conversation. I won’t call it a Polymath project, but rather an idea I’d like to discuss that might or might not become the basis for a nice project. One thing that Gil and others have said is that it would be a good idea to experiment with problems of various levels of difficulty and importance. Perhaps one way of getting a Polymath project to take off is to tackle a problem that isn’t necessarily all that hard or important, but is nevertheless sufficiently interesting to appeal to a critical mass of people. That is very much the spirit of this post.

Before I go any further, I should say that the topic in question is one about which I am not an expert, so it may well be that the answer to the question I’m about to ask is already known. I could, I suppose, try to find out on MathOverflow, but I’m not sure I can formulate the question precisely enough to make a suitable MathOverflow question, so instead I’m doing it here. This has the added advantage that if the question does turn out to be suitable, then whatever discussion there is will take place where I would want any continuation of it to take place.

Fast parallel sorting.

I am in the middle of writing an article about Szemerédi’s mathematical work (for a book about Abel Prize winners), and one of his results that I am writing about is a famous parallel sorting network, discovered with Ajtai and Komlós, that sorts n objects in C\log n rounds. This is a gorgeous result, but unfortunately the constant C is quite large, and remains so after subsequent improvements, which means that in practice their sorting network does not work as well as a simpler one of depth C'(\log n)^2 for a much smaller constant C'.

The thought I would like to explore is not a way of solving that problem — obtaining the AKS result with a reasonable constant — but it is closely related. I have a rough idea for a randomized method of sorting, and I’d like to know whether it can be made to work. If it gave a good constant that would be great, but I think even proving that it works with any constant would be nice, unless it has been done already — in which case congratulations to whoever did it and I very much like your result.

The rough idea is this. As with any fast parallel sorting method, the sorting should take place in rounds, where each round consists in partitioning the objects into disjoint pairs (or pairing up just a constant fraction of them, though there isn’t much to be lost by doing extra comparisons), comparing each pair, and swapping the two objects round if the one that should come before the other in fact comes after it.

An obvious thing to try is a random partition. So you start with n objects arranged in an arbitrary order. Your target is to rearrange them so that they are arranged according to some linear ordering that you don’t know, and all you are allowed to do is pairwise comparisons. To visualize it, I like to imagine that the objects are n rocks that all look fairly similar, that you want to lay them out in a line going from lightest to heaviest, and that all you have to help you is a set of n/2 balances that can fit at most one rock on each side.

With this picture, the random method would be to partition the rocks randomly into pairs and do the corresponding n/2 comparisons. When you have compared rocks x and y, you put them back in the same two places in the line, but with the lighter one to the left of the heavier one (so they may have been switched round).
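In code, one round of this naive random method might look as follows. This is a minimal Python sketch: the function name and the representation of the rocks as a list of comparable keys are incidental choices rather than part of the proposal.

import random

def naive_random_round(a):
    # Pair up the positions uniformly at random and compare-swap each pair
    # so that the smaller element ends up further to the left.  The n/2
    # comparisons are disjoint, so they could all be carried out in parallel.
    positions = list(range(len(a)))
    random.shuffle(positions)
    for i in range(0, len(positions) - 1, 2):
        p, q = sorted(positions[i:i + 2])
        if a[p] > a[q]:
            a[p], a[q] = a[q], a[p]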

What happens if we do C\log n rounds of random comparisons of this kind? The answer is that it doesn’t work very well at all. To see why not, suppose that the ordering is approximately correct, in the sense that for every m the mth lightest rock is in roughly the mth place. (To be more concrete, we could say that it is in the rth place and that |r-m|\leq\sqrt{n} or something like that.) Then when we do a random comparison, the probability that we move any given rock is extremely small (in the concrete case around n^{-1/2} at most). Basically, we are wasting our time comparing rocks when we are pretty certain of the answer in advance.

This suggests an adjustment to the naive random strategy, which is to have a succession of random rounds, but to make them gradually more and more “localized”. (Something like this is quite similar to what Ajtai, Komlós and Szemerédi do, so this idea doesn’t come out of nowhere. But it is also very natural.) That is, at each stage, we would choose a distance scale d and pair up the rocks randomly in a way that favours distances that are of order of magnitude d, and we would make d shrink exponentially quickly as we proceed.

The thing that makes it a bit complicated (and again this has strong echoes in the AKS proof) is that if you just do a constant number of rounds at one distance scale, then a few points will get “left behind” — by sheer bad luck they just don’t get compared with anything useful. So the challenge is to prove that as the distance scale shrinks, the points that are far from where they should be get moved with such high probability that they “catch up” with the other points that they should be close to.

It seems to me at least possible that a purely random strategy with shrinking distance scales could sort n objects in C\log n rounds, and that if one devised the random partitions in a particularly nice way then the proof might even be rather nice — some kind of random walk with drift might be taking place for each rock.
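For concreteness, here is one possible Python rendering of the shrinking-distance-scale idea. Everything in it (the function name, the constant c, the number of rounds per scale and the way the offset is drawn) is an illustrative choice, and it is not claimed that this sorts every input; that is precisely the question.

import math
import random

def shrinking_scale_rounds(a, c=0.5, rounds_per_scale=3):
    # Repeated compare-swap rounds in which each position is paired with a
    # partner at a random offset of order d, and d shrinks exponentially.
    # With rounds_per_scale constant this uses O(log n) parallel rounds.
    n = len(a)
    d = float(n)
    while d >= 1:
        for _ in range(rounds_per_scale):
            offset = max(1, int(random.uniform(d / 2, d)))
            # two alternating phases, so that each position is compared both
            # with the place at distance offset to its left and to its right
            for start in (0, offset):
                for p in range(start, n - offset, 2 * offset):
                    q = p + offset
                    if a[p] > a[q]:
                        a[p], a[q] = a[q], a[p]
        d *= math.exp(-c)
    return a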

So my questions are these.

(i) Has something like this been done already?

(ii) If not, is anyone interested in exploring the possibility?

A final remark is that something that contributed a great deal to the health of the discussion about the Erdős discrepancy problem was the possibility of doing computer experiments. That would apply to some extent here too: one could devise an algorithm along the above lines and just observe experimentally how it does, whether points catch up after getting left behind, etc.
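For such experiments, the natural quantity to track after each round is how far each element is from its correct place. A small helper along these lines (the name is an illustrative choice) can be called after every round of whichever scheme is being tested, which makes it easy to see whether a few stragglers remain far out of position and whether they eventually catch up.

def displacement_stats(a):
    # Maximum and mean distance of each element from the place it would
    # occupy in the sorted order (the elements are assumed to be distinct).
    target = {x: i for i, x in enumerate(sorted(a))}
    disps = [abs(pos - target[x]) for pos, x in enumerate(a)]
    return max(disps), sum(disps) / len(disps)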

39 Responses to “Whither Polymath?”

  1. Uwe Stroinski Says:

    (i) I do not know. My knowledge about sorting does not go far beyond ‘quicksort’.

    (ii) I would be interested.

  2. g Says:

    This seems highly reminiscent of “Shell sort” — take a decreasing sequence a_1, a_2, …, ending in 1; for each k and each residue b mod a_k, insertion-sort the elements in positions b, b+a_k, b+2a_k, …; note that these “sub-sorts” can run in parallel — which has the following mildly discouraging features.

    1. The (worst-case or average) execution time depends very strongly on the exact choice of gap sequence.

    2. The way in which this dependence works is poorly understood.

    3. No gap sequence produces a worst-case execution time that’s O(N log N). No gap sequence is known to produce an average execution time that’s O(N log N).

    Now, maybe Shell sort is too rigid somehow, and the sort of randomized schedule you propose would somehow average things out in a helpful way. A bit of googling turns up a paper by Michael Goodrich describing what he calls “randomized Shellsort”, which allegedly runs in time O(N log N) and sorts correctly with very high probability. This looks really quite similar to what you propose here.

  3. g Says:

    Er, sorry, of course I should have given a link to Goodrich’s paper. It’s at https://www.siam.org/proceedings/soda/2010/SODA10_101_goodrichm.pdf .

  4. gowers Says:

    Thanks for that interesting reference, which I didn’t know about. It does indeed look similar, though if I understand correctly the depth is greater than C\log N. Also, I was hoping for a sorting network that sorts every single permutation rather than just almost every permutation. But the complicated argument in this paper suggests that that may be hard to achieve.

    Having said that, I don’t want to rule out the possibility that one might be able to do things cleverly somehow and make the argument simpler.

  5. Gerhard Paseman Says:

    I was motivated to ask http://mathoverflow.net/questions/31364/inversion-density-have-you-seen-this-concept by a problem similar to yours, but without the random element. The references listed in the question may be of use; Poonen’s analysis is general enough that it might apply. However, you or a student of yours might try an experiment comparing, say, combsort with a randomized version which changes the gap length by ±1 each time a comparison is done.

  6. Gil Kalai Says:

    At the time I felt that, among the various suggestions made before we agreed on EDP, the best chance of a definite success was the first unknown case of polynomial HJT: http://gowers.wordpress.com/2009/11/14/the-first-unknown-case-of-polynomial-dhj/
    (Of course, some proposed projects were more “open ended”.)

  7. Alec Edgington Says:

    Sounds like a nice question: I’d be interested in exploring it.

  8. gowers Says:

    From the response so far, it’s not clear whether a Polymath project based on this question would get off the ground (I mean that both positively and negatively), so let me try to be more specific about the question. The weakness in the proposed proof so far is that I don’t have a very precise idea about why it might be reasonable to hope that the elements that get “left behind” should eventually catch up. However, I do have a vague idea, and it might be interesting to try to turn that into a more precise heuristic argument, by which I mean turn it from a qualitative argument into a quantitative one, perhaps making some plausible assumptions along the way.

    The vague idea is this. Suppose that we have a linearly ordered set x_1<\dots<x_n and an initial permutation of that set. Let us write \pi_r(m) for the place occupied by x_m after r rounds of sorting. And suppose that r is such that |\pi_r(m)-m|\leq t for all but a small percentage of m. What we expect is that t should be ne^{-cr} for some positive constant c, and indeed I think that something like that should be easy to prove, at least for small r.

    That means that by the rth stage, the algorithm should by and large be working at distance scale ne^{-cr}. Suppose now that the element x_m is one of the few that is way out of position: for convenience, let us assume that \pi_r(m)-m is much bigger than ne^{-cr}. Then at time r, x_m will be surrounded (at a distance scale of ne^{-cr}) by elements that are much larger, so practically every comparison involving x_m will either leave it where it is or move it closer to place m. (To be more precise, I would expect about half the comparisons to be with objects in later places, which would result in no movement at all, and half to be with objects in earlier places, which would result in significant movement towards m. Putting those two together should give significant movement towards m on average.)

    So the thing that gives me hope about this is that the elements that are far from their correct positions are much more likely to move towards their correct positions.

    To be slightly more precise still, let’s say that an element is unlucky at stage r if it doesn’t get moved significantly nearer to its correct position. At the beginning of the process, we would normally expect an element to be unlucky with constant positive probability. If that remained true, then even after logarithmically many steps we would expect at least some elements to be unlucky. But by then the distance scale would have contracted to n^\alpha for some \alpha<1 and it starts to look very difficult for that element to catch up. However, one of the assumptions I have just made is too pessimistic, since after a few steps the probability that an element is unlucky should no longer be a constant.

    Having written that, I realize that it is not quite correct as I’ve stated it. Suppose that at each stage the probability that the element in place a is compared with an element in place b for some b>a is 1/2. Then if \pi_1(m) is significantly greater than m, there is a constant probability at each stage that x_m will be compared with an element in a place greater than \pi_1(m), in which case x_m is unlucky. So for my vague argument to work, the random comparisons need to be organized in a way that guarantees that roughly half the comparisons are with an element to the left and roughly half with an element to the right.

    One possible way of doing that would be to pick at each stage a random number s at the right distance scale, partition into residue classes mod s, and do two rounds, one where the pairs are (a,a+s), (a+2s,a+3s), etc. and the other where the pairs are (a+s,a+2s), (a+3s,a+4s), etc.
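    Concretely, the pairs for those two rounds could be generated as follows. This is just a sketch: the function name and the treatment of the ends of each residue class are incidental choices. Each returned batch consists of disjoint pairs, so applying a compare-swap to every pair in a batch is a single parallel round.

    def paired_rounds(n, s):
        # The two batches described above: within each residue class b mod s,
        # one batch pairs (b, b+s), (b+2s, b+3s), ..., and the other pairs
        # (b+s, b+2s), (b+3s, b+4s), ...  Together they compare every place
        # with both of its neighbours at distance s, where those exist.
        batches = []
        for phase in (0, 1):
            batch = []
            for b in range(s):
                chain = list(range(b, n, s))   # places b, b+s, b+2s, ...
                for i in range(phase, len(chain) - 1, 2):
                    batch.append((chain[i], chain[i + 1]))
            batches.append(batch)
        return batches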

  9. gowers Says:

    Here’s the kind of heuristic calculation I’d like to see done. First, we should work out what we expect the distribution of \pi_s(u)-u to look like. Next, suppose that an element x_m starts in a position that is \theta n places to the right of m: that is, \pi_1(m)-m=\theta n. Then we should work out the probability that \pi_r(m)-m is large assuming that \pi_s(u)-u has roughly the expected distribution at all intermediate stages.

    What makes it a bit tricky is that the probability in question affects the distribution: it looks as though the distribution is some kind of fixed point, and therefore quite hard to guess. The distribution itself depends on the distribution of the random comparisons, but that is less problematic — one could choose the comparisons in as nice a way as possible in the hope of simplifying the calculations. This could be a case where some computer experimentation would be interesting: if we choose some natural-seeming collection of comparisons, such as the ones suggested at the end of the previous comment, with s normally distributed with standard deviation e^{-cr}n for some smallish constant c, what is the distribution of \pi_r(u)-u like?

    I’ve just seen an additional complication, which is that it is not clear what I mean by “the distribution”. In particular, am I talking about a random initial arrangement of the x_i or the worst-case initial arrangement? It is the latter that I unconsciously had in mind. So I suppose that what I’m looking for is an “upper bound” for the distribution: that is, for each k an upper bound for the proportion of m such that |\pi_1(m)-m|\geq k, the upper bound being valid whatever the initial sequence is.
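    Here is a rough, self-contained Python sketch of that experiment. All the parameter values and names are illustrative, and it uses random initial arrangements, so it only probes the “typical” distribution rather than the worst-case one discussed above; it prints a few quantiles of the final displacements once all the rounds have been done.

    import math
    import random

    def displacement_quantiles(n=4096, c=0.3, trials=5, seed=0):
        # At stage r, draw s (roughly) from a normal distribution with standard
        # deviation n*e^{-cr}, pair places by residue classes mod s as in the
        # previous comment, do the two corresponding rounds of compare-swaps,
        # and at the end report quantiles of the displacements.
        random.seed(seed)
        for _ in range(trials):
            a = list(range(n))
            random.shuffle(a)                  # random initial arrangement
            r = 0
            while n * math.exp(-c * r) >= 1:
                sigma = n * math.exp(-c * r)
                s = max(1, min(n - 1, int(abs(random.gauss(0, sigma)))))
                for phase in (0, 1):
                    for b in range(s):
                        chain = list(range(b, n, s))
                        for i in range(phase, len(chain) - 1, 2):
                            p, q = chain[i], chain[i + 1]
                            if a[p] > a[q]:
                                a[p], a[q] = a[q], a[p]
                r += 1
            disps = sorted(abs(pos - val) for pos, val in enumerate(a))
            print([disps[int(f * (n - 1))] for f in (0.5, 0.9, 0.99, 1.0)])

    displacement_quantiles()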

  10. gowers Says:

    I’ve just realized that there is a problem with my suggested random scheme for sorting if we want to sort all sequences rather than just almost all. Given any place p, the number of elements x_i that can be at p after r rounds of comparison is at most 2^r. So if r is less than logarithmic, then we can fix it so that the element initially in place p is much too large, and all the elements that could possibly end up at place p are even larger.

    This implies that after, say, (\log n)/2 rounds, we may have some elements that are out of place by n/2. But by that stage, the distance scale has dropped to n^\alpha, which means that those elements will need to take at least n^{1-\alpha} further steps, which is impossible in only logarithmically many rounds.

    This seems to suggest that if the distance scales decrease in such a way that after a short time it becomes very unlikely that comparisons between distant places are ever made again, then we are in trouble. So for an approach like this to work, what seems to be required is for a typical comparison to become more and more focused on small distance scales without losing sight of the large distance scales. Exactly how this should be done is not clear to me, though it might be possible to get some insight into the problem by looking at the AKS scheme and seeing how they cope with elements that are left behind. Maybe some randomized version of what they do would work, though if it was too close to their approach then it wouldn’t be very interesting.

    • Gil Kalai Says:

      A few remarks: 1) There is a simplification of AKS due to M. S. Paterson: Improved sorting networks with O(log N) depth, Algorithmica 5 (1990), no. 1, pp. 75–92. Perhaps there are further works.

      2) There is a beautiful area of random sorting networks. See Random Sorting Networks by Omer Angel, Alexander E. Holroyd, Dan Romik, Balint Virag http://arxiv.org/abs/math/0609538

      3) If you take a random sorting network of depth C\log n (or more generally depth m), can we give a heuristic estimate for the probability p that this network will sort every ordering of n numbers? (Or, perhaps simpler, an expectation and variance for the number of orderings it will sort?) Then we can see when p is larger than 1/(n!)^m.

    • gowers Says:

      A quick further remark. The following is a slightly daring conjecture, but I don’t see an obvious argument for its being false. One could simply do what I was suggesting and then do it again. My remark above shows that the first time round it can’t sort everything, but the second time round the initial ordering will on average be very close to correct, and then it isn’t clear that a second pass won’t tidy everything up.

    • Gil Kalai Says:

      Maybe it will be useful to restate (copy and paste) or link to the explicit description of “what I was suggesting”, so we will be clear about your conjecture.

      Let me remark that one thing we can do is that any time we compare two rocks we put them back in a random order (each of the two orders with probability 1/2), and then the question is how we can guarantee, with such a small-depth network, that the resulting distribution is very close to uniform.

  11. Polymath proposal (Tim Gowers): Randomized Parallel Sorting Algorithm | The polymath blog Says:

    [...] sorting network for n numbers of depth O(\log n), with n/2 comparisons in each round. Tim Gowers proposes to find collectively a randomized sorting network with the same [...]

  12. Gerhard Paseman Says:

    I want to see how your idea would work on a simple input, say [n,1,2,3,4,...,n-1]. My concern is that the sorting network will do a lot of unsorting (say [d,1,2,3,...,n,...]) and not make any provable progress in a logarithmic number of rounds.

    • gowers Says:

      That’s a nice example to think about. It’s not necessarily a major problem if d gets moved to the beginning, because it will have a strong tendency to move back. But of course, that will move other numbers out of place, and exactly how things pan out is not clear.

  13. gowers Says:

    I’ve thought about Gerhard’s example now and am in a position to explain why I don’t find it too threatening. What I’m going to do is consider a highly idealized and unrealistic model of what the randomized network would do, which to me makes it believable that an actual randomized network would have a chance of success.

    To describe the idealized model, let’s assume that n is a power of 2. For convenience I’ll call the numbers 1 to 2^k and the places 0 to 2^k-1. The initial arrangement is 2^k,1,2,3,\dots,2^k-1. The worry about this arrangement is that almost all the numbers are in order, so very few comparisons will actually change anything. Meanwhile, those few comparisons that do change anything seem to be messing things up.

    However, these two worries in a sense cancel each other out: the messing up is necessary to get all of 1,2,\dots,2^k-1 to shift to the left.

    Let’s now do the following sequence of rounds. We will begin by comparing place r with place 2^{k-1}+r for each r=0,1,\dots,2^{k-1}-1. Note that these are precisely those r for which \epsilon_{k-1}=0 if we represent them as \sum_{i=0}^{k-1}\epsilon_i2^i with each \epsilon_i\in\{0,1\}.

    Next, we compare place r with place 2^{k-2}+r for each r such that \epsilon_{k-2}=0 (that is, the first and third quarters of all the r). And we continue in this way.

    After the first round, everything is as it was before, except that now we have 2^k in place 2^{k-1} and 2^{k-1} in place 0. So we now have two half-sized copies of the original ordering, one in the left half and one in the right half. What’s more, in each half the remaining rounds are exactly the sequence of rounds corresponding to k-1. Therefore, by induction, after k rounds everything is sorted.

    What is happening here is that as we proceed, we replace one sequence that is messed up at a distance scale d by a concatenation of two sequences that are messed up in precisely the same way at a distance scale d/2. In a sense, this concatenation is messier than the original sequence, but because we can operate on it in parallel, this does not matter.
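    For what it’s worth, the induction is easy to check by direct simulation. Here is a small sketch (the names and 0-indexing are incidental choices): at scale 2^j it compares place r with place r+2^j exactly when bit j of r is 0, which is the sequence of rounds described above.

    def idealized_rounds_sort(k):
        # Run the idealized scheme on the rotated input 2^k, 1, 2, ..., 2^k - 1
        # and check that the k rounds (one per distance scale) sort it.
        n = 1 << k
        a = [n] + list(range(1, n))            # places 0, ..., n-1
        for j in range(k - 1, -1, -1):         # scales 2^{k-1}, ..., 2, 1
            gap = 1 << j
            for r in range(n):
                if not (r >> j) & 1:           # bit j of r is 0, so r + gap < n
                    if a[r] > a[r + gap]:
                        a[r], a[r + gap] = a[r + gap], a[r]
        return a == sorted(a)

    assert all(idealized_rounds_sort(k) for k in range(1, 11))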

    I think it would be very interesting to try to analyse what a more randomized set of rounds would do with this initial ordering.

  14. Alec Edgington Says:

    Even if the goal is to find (or prove the existence of) small-depth circuits that sort all initial inputs (and not merely ‘almost all’), I wonder if it might be useful to think about the more general case. A proof that depended upon following some properties of the input distribution through the rounds might be easier to make work in that case. On the other hand, it might be possible to control some properties (e.g. some measure of sortedness) possessed by all potential inputs to each round.

    I also wonder if we are after a (randomized construction of a) circuit that works with high probability, or just with positive probability? Or are these somehow equivalent?

  15. Gil Kalai Says:

    As I heard from Michael Ben-Or, there is a randomized construction by Tom Leighton of a sorting network with 7.2\log n rounds that sorts all orderings except an exponentially small fraction of them. (The problem with repeating is that it seems that a logarithmic number of repeats is needed.)

    • Gil Kalai Says:

      The book by Leighton, “Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes,” devotes a chapter to this construction. See also this paper by Leighton and Plaxton: http://epubs.siam.org/doi/pdf/10.1137/S0097539794268406

    • gowers Says:

      Do you know why a log number of repeats is needed? One might think that if after a first round the number of elements that are out of place was n^{1-c} (with similar statements at all distance scales), then at the next repetition the probability that tidying up fails becomes very small after only a constant number of rounds.

      Just to be clear, what I’m suggesting is that one does a randomized sorting procedure with the distance scales shrinking exponentially, and when one reaches a distance scale of 1, one repeats the entire process.

  16. Gil Kalai Says:

    I suppose roughly the reason is that for n^{(1-c)t} to be small you need t of the order of \log n.

    • gowers Says:

      Hmm … it depends what we mean by small. I was thinking of “small” as meaning n^{-C}, so that for n^{-ct} to be small we just need t to be constant. But I can see that it isn’t obvious that that will work for all sequences rather than just almost all.

  17. gowers Says:

    It may not be feasible, but I quite like the idea of attempting something along the following lines. Given some kind of random comparator network, try to design a permutation, depending on the network, that will not be correctly sorted. With the help of this process, try to develop a sufficient condition for the permutation not to exist, and then try to show that that condition holds with positive probability.

    To obtain the sufficient condition, what one might try to do is find a number of necessary conditions (saying things like that the network must “mix well” at every distance scale), and hope that eventually you have enough necessary conditions that taken together they are sufficient.

    • Alec Edgington Says:

      I wonder what lower bounds are known for the constant C. There’s a trivial lower bound of 2 / \log 2 (from the fact that each round can have at most n/2 comparisons, and if K is the total number of comparisons then we must have 2^K \geq n!, and Stirling’s formula). Can anything more be said in this direction? What are the obstacles to creating such an efficient circuit?
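      Spelling out that counting argument (taking \log to be the natural logarithm, which is what the constant 2/\log 2 suggests): a network of depth D makes at most K\leq Dn/2 comparisons, and since it must distinguish all n! input orderings we need 2^K\geq n!. Hence

      D \geq \frac{2\log_2(n!)}{n} = \frac{2\log(n!)}{n\log 2} = \frac{2(n\log n - O(n))}{n\log 2} \sim \frac{2}{\log 2}\log n,

      which gives C\geq 2/\log 2\approx 2.89. Any improvement on this constant would presumably have to exploit the fact that a sorting network is oblivious, since the counting bound applies to arbitrary comparison-based sorting.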

  18. gowers Says:

    Earlier, Gil asked for a precise statement of “what I am suggesting”. I’m not sure I can provide that yet, because there are a number of precise statements that I would be happy with. However, let me give a precise statement that I would be delighted to prove, but that might easily be false. If it is false, then I would want to find a modification of it that is true.

    The precise statement is as follows. Let me define an m-round of type 0 to be a set of comparisons of the following kind. First, partition \mathbb{Z} into intervals of the form [2m(r-1), 2mr-1], then partition each such interval into pairs (x,x+m) (which can be done in exactly one way), and finally, do a comparison for all such pairs for which both x and x+m are between 1 and n. Define an m-round of type 1 to be the same except that the initial partition is into intervals of the form [2m(r-1)+m,2mr-1+m]. If we do an m-round of both types, then every place x between 1 and n will be compared with both place x-m and place x+m, provided both those places are between 1 and n.

    Now I propose the following comparator network. We choose a random sequence of integers m_0,m_1,\dots,m_t, where m_s is uniformly distributed in the interval (of integers) [1,ne^{-cs}] and t is maximal such that ne^{-ct}\geq 1. Then at step s we do an m_s round of type 0 followed by an m_s round of type 1.

    When we have finished this, we repeat the process once.

    My probably false suggestion is that for some fixed c>0 and with reasonable probability this defines a sorting network.
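    For concreteness, here is one way the construction just described might be rendered in Python. It is only a sketch: the places are 0-indexed for convenience, and the function names, the rounding of ne^{-cs} to an integer and the default value of c are incidental choices.

    import math
    import random

    def m_round_pairs(n, m, round_type):
        # Pairs (x, x+m) of an m-round of the given type (0 or 1), with places
        # numbered 0, ..., n-1.  Type-0 blocks of length 2m start at multiples
        # of 2m; type-1 blocks are shifted by m; pairs leaving [0, n) are dropped.
        pairs = []
        first_block = 0 if round_type == 0 else -m
        for block in range(first_block, n, 2 * m):
            for x in range(block, block + m):
                if 0 <= x and x + m < n:
                    pairs.append((x, x + m))
        return pairs

    def proposed_network(n, c=0.5):
        # m_s uniform in [1, n*e^{-cs}] for s = 0, 1, ..., t; at each step an
        # m_s-round of type 0 followed by one of type 1; the whole process twice.
        rounds = []
        for _ in range(2):
            s = 0
            while n * math.exp(-c * s) >= 1:
                m = random.randint(1, max(1, int(n * math.exp(-c * s))))
                rounds.append(m_round_pairs(n, m, 0))
                rounds.append(m_round_pairs(n, m, 1))
                s += 1
        return rounds

    def apply_network(a, rounds):
        for batch in rounds:                   # each batch is one parallel round
            for p, q in batch:
                if a[p] > a[q]:
                    a[p], a[q] = a[q], a[p]
        return a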

    Incidentally, there is a detail in the above suggestion that reflects a point I have only recently realized, which is that if one is devising a random sorting network of roughly this kind — that is, with increasingly local comparisons — then it is important that there should be correlations between what happens in two different places at the same small distance scale. If not, then when we get to small distance scales, there are many opportunities for something bad to happen. That is, it might be that each little piece of network has a quite high probability of behaving well, but that if you choose many little pieces independently, then some of them will misbehave. The use of m-rounds above is an attempt to get round this problem by making the network do essentially the same thing everywhere at each distance scale. (It doesn’t do this exactly, because of the ends of the intervals of length 2m, but I’m hoping that that minor variation is not too damaging.)

  19. gowers Says:

    I now think that the suggestion in the previous comment is unlikely to give a depth better than \log n \log\log n. Here is why. The sum of the GP 1, e^{-c}, e^{-2c},\dots is approximately c^{-1}. So if an element starts out at a distance \theta n from where it should be and is not moved at any time during the first approximately r=2c^{-1}\log(c^{-1}) rounds, then at the end of all rounds it will still be at a distance proportional to n from where it should be. If we start with a random permutation, then the chances of this happening look to me like at least 10^{-2^r}, since we only care about the values in at most 2^r places.

    So I would expect that after the first pass of the whole process, the proportion of points that are a long way from where they should be will be at least 10^{-2^r}, which is a constant (depending only on the absolute constant c that we choose at the beginning).

    After the next pass, something similar can be said, except that this time the probability that a point in one of the 2^r places is a bad point has gone down to more like 10^{-2^r}. But that still gives us a probability of (10^{-2^r})^{2^r}=10^{-2^{2r}} that the point will end up way out of place. And after k repetitions, we get to 10^{-2^{kr}}. For that to be smaller than 1/n we need k\approx\log\log n.

    • gowers Says:

      To be clear, when I say that the suggestion in the above comment is unlikely to give a depth better than \log n\log\log n, what I actually mean is that the suggestion itself looks false, and to rescue it I think one would need to repeat at least \log\log n times rather than just once.

  20. Weekly links for March 4 | God plays dice Says:

    [...] Gowers suggests a polymath project based on parallel sorting; Alexander Holroyd has a gallery of pictures of sorting [...]

  21. mixedmath Says:

    I’m a bit late to this post, but the question sounds interesting and I would support it as well. Unfortunately, I don’t know much about it either.

    – David Lowry-Duda

  22. mixedmath Says:

    I’d just like to throw something out there, so I beg forgiveness if this is a bit nonsensical. But I was reading up on the AKS network sort, and I read that a key idea in the method is to construct an expander graph. As a student of modular forms, I’ve heard that there is a way to construct expander graphs that is intimately connected to modular forms (somehow). While I don’t know much about this yet, I wonder if there is something to be gained from examining that connection a bit more closely.

  23. Gerhard Paseman Says:

    Here is a test comment on another post, as requested.

  24. michelledelcourt Says:

    I am definitely interested in thinking more about this problem.

  25. vznvzn Says:

    “A final remark is that something that contributed a great deal to the health of the discussion about the Erdős discrepancy problem was the possibility of doing computer experiments. That would apply to some extent here too: one could devise an algorithm along the above lines and just observe experimentally how it does, whether points catch up after getting left behind, etc.”

    huge fan of empirical/experimental approaches here too & think they are much underutilized. an interesting strategy along these lines is something like what is called a “magnification lemma”, seen in Jukna’s new book on circuit complexity. basically it is a way of translating or scaling problems of size N to size log(N), which might make them more accessible to computer experiments.

    have recently collected various refs on experimental math on my blog home page links & am planning to do a post on the subject. there’s also an excellent article on the Simons web site profiling Zeilberger.

    suggest that future polymath projects be launched with an eye toward supporting empirical attacks to support building technical intuition.

    also, have been thinking that nobody has addressed this issue much, but basically polymath is an attempt to find “viral-like” projects which have high popularity. but there is currently no good way to sort out this popularity. so, suggest a voting system be built up so that projects can gain “critical mass” through measuring interest via votes. [similar to stackexchange mechanisms].

    think that this difficulty in achieving “critical mass” is one of the key issues facing further polymath success. which by the way to my eye, speaking honestly/critically but hopefully also constructively, does not seem to be “clicking” so much since the original/initial gowers project.

  26. vznvzn Says:

    one other point. it would be helpful if someone put together a survey of the status of polymath problems launched since the original Gowers Hales–Jewett (DHJ) problem — it’s hard to keep track of them all. analyzing in particular not so much the results but the level of crowd participation and enthusiasm for each, which imho is a key underlying factor of project success, a social-cultural aspect which so far seems largely unaddressed by any participants.

  27. mixedmath Says:

    I’d like to say something else about expander graphs and that approach. I recently heard a talk by Dr. Cristina Ballantine at ICERM where she explained that she could construct expander bigraphs, i.e. low degree highly connected biregular bipartite graphs.

    • mixedmath Says:

      I did not mean to end that post already – so I continue in a comment to myself.

      From my (somewhat limited) experience with computer science algorithms, bigraphs made things much better than just highly connected graphs. Here, having a graph be biregularly bipartite (meaning that the vertices are of two families; in each, all the vertices have the same degree – but the two families can have different degrees from each other – and are directly connected only to the other family) seems like it might behave well with parallelizing the work, maybe.

      Maybe this is something to consider?

  28. aaronsheldon Says:

    Sorry to divert the thread, but based purely on heuristics, if you would like to have an O(\log n) randomized parallel sorting algorithm I would think it would be hard to beat Markov chain Monte Carlo with an appropriate score to optimize the ordering.
