<HTML>
<HEAD>
<TITLE>2013 Erd&#337;s Lectures in Discrete Mathematics and Theoretical&nbsp;Computer&nbsp;Science</TITLE>
<script type="text/javascript">
// Toggle the visibility of the abstract whose element id is given.
function toggle(id) {
  var el = document.getElementById(id);
  if (el.style.display == "block")
    el.style.display = "none";
  else
    el.style.display = "block";
}
</script>
</HEAD>
<BODY BGCOLOR="#FFFFF0" TEXT="#000000" LINK="#003366" VLINK="#440088" ALINK="#FF0000">

<TABLE WIDTH=70% ALIGN="CENTER" BORDER=0>
<TR>
<TD WIDTH=40%>
<A HREF="http://www.huji.ac.il/" TARGET="_blank" style="text-decoration: none">The Hebrew University of Jerusalem</A><BR>
<A HREF="http://www.ma.huji.ac.il" TARGET="_blank" style="text-decoration: none">Einstein Institute of Mathematics</A>
<TD WIDTH=20%>
<A HREF="http://www.huji.ac.il/" style="text-decoration: none"><IMG SRC="../gifs/HUJI_logos1_08.jpg" ALT="Hebrew University of Jerusalem" width=97 height=99 BORDER=0 hspace=10 vspace=10></A>
<TD WIDTH=40%>
<A HREF="http://www.huji.ac.il/" TARGET="_blank" style="text-decoration: none">The Hebrew University of Jerusalem</A><BR>
<A HREF="http://www.cse.huji.ac.il/" TARGET="_blank" style="text-decoration: none">The Rachel and Selim Benin School of Computer Science &amp; Engineering</A>
<TR>
<TH COLSPAN=3>Center for Theoretical Computer Science and Discrete Mathematics
</TABLE><P>

<CENTER>
<font face="Times New Roman,Times" SIZE=+1><B>
<A HREF="erdos.html" style="text-decoration: none">Erd&#337;s Lectures in Discrete Mathematics and Theoretical&nbsp;Computer&nbsp;Science</A>
</B></font>
</CENTER>
<HR><P>

<CENTER>
The Hebrew University Center for Theoretical Computer Science and Discrete Mathematics invites you to this year's
Erd&#337;s&nbsp;Lectures&nbsp;in&nbsp;Discrete&nbsp;Mathematics&nbsp;and&nbsp;Theoretical&nbsp;Computer&nbsp;Science<P>
<FONT SIZE=+2><B>
<I>Prof. <A HREF="http://www.math.rutgers.edu/~saks/" style="text-decoration: none" TARGET="_blank">Michael Saks</A></I>
&nbsp;&nbsp; (Rutgers University)</B>
</FONT>
</CENTER>
<HR>

<BLOCKQUOTE>
<B>Lectures</B>:
<!-- &nbsp;&nbsp;&nbsp;&nbsp; (<B><A HREF="erdos_10.pdf">Poster</A></B>) -->
<OL>

<LI><B>Two very efficient approximation algorithms for the longest increasing subsequence</B><BR>
<I>Monday, March 18, 2013</I> &nbsp;&nbsp; 11:00 am &nbsp;&nbsp; Ross Building, Room 201<BR>
<a href="javascript: void(0);" onClick="toggle('a1')">Abstract</a>
<div id="a1" style="display:none;">
The longest increasing subsequence (LIS) of a finite sequence has been the subject of much study within combinatorics and probability. As a computational problem it is a classic example of a problem efficiently solved by dynamic programming. Simple algorithms are known that run in time O(n log(n)), where n is the length of the array (one such algorithm is sketched at the end of this abstract).<P>
In recent years, many algorithmic problems have been revisited under the assumption that the input is so large that one can only afford restricted access to it, and one is willing to settle for an approximation to the solution. In this talk, I'll discuss joint work with <I>C. Seshadhri</I>, in which we study the LIS problem from this perspective in two distinct models of computation.<P>
In the standard computational model we obtain an approximation algorithm that runs in time polylogarithmic in the input size. For any chosen constant c > 0, the algorithm outputs an approximation to the length of the LIS that (with high probability) is accurate to within an additive cn error.<P>
In the data stream model, the input data is not available for random access, but rather arrives as a sequence of data items. The goal is to estimate the length of the LIS while minimizing the memory needed. We present a simple algorithm that is accurate to within an additive cn error and uses O(log^2(n)/c) memory.<P>
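<FONT SIZE=-1>For illustration only (not part of the talk): the O(n log(n)) exact algorithm mentioned above can be implemented in the standard "patience sorting" style, maintaining, for each length k, the smallest possible last element of an increasing subsequence of length k. The following minimal Python sketch shows the idea; the function name and the sample input are illustrative and do not come from the talk.</FONT>
<PRE>
import bisect

def lis_length(a):
    """Return the length of the longest strictly increasing subsequence of a,
    in O(n log n) time."""
    # tails[k] = smallest possible last element of an increasing
    # subsequence of length k+1 seen so far; tails is always sorted.
    tails = []
    for x in a:
        i = bisect.bisect_left(tails, x)  # first index with tails[i] >= x
        if i == len(tails):
            tails.append(x)               # x extends the longest subsequence found so far
        else:
            tails[i] = x                  # x is a smaller tail for subsequences of length i+1
    return len(tails)

# Example: the LIS of [3, 1, 4, 1, 5, 9, 2, 6] has length 4 (e.g. 1, 4, 5, 9).
print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 4
</PRE>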
</div><P>

<LI><B>Tight lower bounds for file maintenance</B><BR>
<I>Wednesday, March 20, 2013</I> &nbsp;&nbsp; 10:30 am &nbsp;&nbsp; Ross Building, Room 201<BR>
<a href="javascript: void(0);" onClick="toggle('a2')">Abstract</a>
<div id="a2" style="display:none;">
In the file maintenance problem, n integer items from the set {1,...,r} are to be stored in an array of size m >= n. The items are presented sequentially in an arbitrary order and must be stored in the array in sorted order (but not necessarily in consecutive locations in the array). Each new item is stored before the next arrives. If r <= m then we can simply store item j in location j, but if r > m then we may have to shift stored items to new locations to make space for a newly arrived item. The algorithm is charged each time an item is stored or moved to a new location. The goal is to minimize the total number of such moves. The problem is non-trivial for n <= m < r.<P>
In the case that m = Cn for some constant C > 1, <I>Itai, Konheim and Rodeh</I> gave an algorithm that stores the items at an amortized cost of at most O(log^2(n)) per item. In the case that m = n^C for some C > 1, their algorithm can be tuned so that the cost is O(log(n)) per item.<P>
In this talk I'll discuss my work, in various papers, with <I>Martin Babka, Jan Bul&aacute;nek, Vladim&iacute;r &#268;un&aacute;t, and Michal Kouck&yacute;</I>, in which we establish lower bounds that match the above upper bounds.
</div><P>

<LI><B>Population recovery under high erasure probability</B><BR>
<I>Thursday, March 21, 2013</I> &nbsp;&nbsp; 2:30 pm &nbsp;&nbsp; Mathematics Building, Lecture Hall 2 &nbsp;&nbsp; (Department Colloquium)<BR>
<a href="javascript: void(0);" onClick="toggle('a3')">Abstract</a>
<div id="a3" style="display:none;">
In the population recovery problem (introduced by <I>Dvir, Rao, Wigderson and Yehudayoff</I>) there is an unknown probability distribution D over length-n binary strings, and we want to determine an estimate D* of the distribution such that for each string x, |D*(x)-D(x)| is at most some desired bound b. Samples can be obtained from the distribution, but each sample is obscured as follows: for each sampled string, each bit of the string is independently erased (changed to "?") with some probability q.<P>
In this talk, I'll discuss my recent work with <I>Ankur Moitra</I> showing that for any constant erasure probability q < 1, the distribution D can be well approximated using a number of samples (and an amount of computation) that is polynomial in n and 1/b. This improves on the previous quasi-polynomial time algorithm of <I>Wigderson and Yehudayoff</I> and the polynomial time algorithm of <I>Dvir et al.</I>, which was shown to work for q < 0.7 by <I>Bateman, Impagliazzo, Murray and Paturi</I>. The algorithm we analyze is implicit in previous work on the problem; our main contribution is its analysis. Using linear programming duality, the problem is reduced to a question about the behavior of polynomials on the complex unit disk, which is answered using the Hadamard 3-circle theorem from complex analysis.
</div><P>

</OL>
</BLOCKQUOTE>

<HR noshade>
<font size=-1><B>Comments to</B>: Naavah Levin, email: <I>naavah at math.huji.ac.il</I><BR>
<B>Design, construction &amp; editing</B>: Naavah Levin<BR>
<B>Last updated</B>: March 12th, 2013</font>
</BODY>
</HTML>