5th Athens Colloquium on
Algorithms and Complexity – ACAC 2010 |
|
Thursday, August 26 |
|
9:15 – 9:30 |
Opening Remarks |
9:30 – 10:30 |
Invited Session 1

Lefteris Kirousis, University of Patras and RA Computer Technology Institute
|
10:30 – 11:00 |
Coffee
Break |
11:00 – 13:00 |
Session 1

Ioannis Caragiannis, University of Patras and RA Computer Technology Institute

ABSTRACT: In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it
suffers from serious deficiencies, both from the computational point of
view---it is NP-hard even to approximate the Dodgson score within
sublogarithmic factors---and from the social choice point of view---it fails
basic social choice desiderata such as monotonicity and homogeneity. In a previous work we have
asked whether there are approximation algorithms for Dodgson's rule that are
monotonic or homogeneous. We give definitive answers to these questions. We
design a monotonic exponential-time algorithm that yields a 2-approximation
to the Dodgson score, while matching this result with a tight lower bound. We
also present a monotonic polynomial-time O(log m)-approximation algorithm
(where m is the number of alternatives); this result is tight as well due to
a complexity-theoretic lower bound. Furthermore, we show that a slight
variation on a known voting rule yields a monotonic, homogeneous,
polynomial-time O(m log m)-approximation algorithm, and establish that it is
impossible to achieve a better approximation ratio even if one just asks for
homogeneity. We complete the picture by studying several additional social
choice properties; for these properties, we prove that algorithms with an
approximation ratio that depends only on m do not exist. The talk is based on joint work with Christos Kaklamanis, Nikos Karanikolas, and Ariel D. Procaccia.

Evangelos Markakis, Athens University of Economics and Business

ABSTRACT: We consider approval voting
elections in which each voter votes for a (possibly empty) set of candidates
and the outcome consists of a set of k candidates for some parameter k, e.g.,
committee elections. We are interested in the minimax approval voting rule in
which the outcome represents a compromise among the voters, in the sense that
the maximum distance between the preference of any voter and the outcome is
as small as possible. This voting rule has two main drawbacks. First,
computing an outcome that minimizes the maximum distance is computationally
hard. Second, any algorithm that always returns such an outcome provides incentives to voters to misreport their true preferences. In order to
circumvent these drawbacks, we resort to approximation algorithms. We present
a polynomial-time 2-approximation algorithm that uses a natural linear
programming relaxation for the underlying optimization problem; this result
improves upon the previously best known algorithm that has an approximation
ratio of $3$. We are furthermore interested in approximation algorithms that
are resistant to manipulation by (coalitions of) voters. We complement
previous results in the literature with new upper and lower bounds on
strategyproof and group-strategyproof algorithms. Joint work with Ioannis Caragiannis and Dimitris Kalaitzis.
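As an illustration of the minimax objective in the abstract above (a brute-force sketch of ours, not the authors' algorithm), the following Python snippet returns a size-k committee minimizing the maximum Hamming distance to any approval ballot; its exponential running time is exactly why the talk resorts to approximation:

```python
from itertools import combinations

def minimax_committee(ballots, candidates, k):
    """Brute-force minimax approval voting: pick the size-k committee
    minimizing the maximum Hamming (symmetric-difference) distance to
    any voter's approval ballot."""
    best, best_cost = None, float("inf")
    for committee in combinations(candidates, k):
        s = set(committee)
        # Hamming distance of two 0/1 vectors = size of symmetric difference
        cost = max(len(s ^ set(b)) for b in ballots)
        if cost < best_cost:
            best, best_cost = s, cost
    return best, best_cost

# Three voters, three candidates, committee size k = 1:
ballots = [{"a", "b"}, {"a"}, {"b", "c"}]
print(minimax_committee(ballots, "abc", 1))  # -> ({'b'}, 2)
```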
Orestis Telelis, CWI Amsterdam

ABSTRACT: We formally study two simple discrete bidding
strategies in the context of iterative best response procedures, for the game
induced by the Generalized Second Price keyword auction mechanism. These
strategies have seen experimental evaluation in the recent literature as
parts of best response procedures, which have been shown not to converge.
Here we give a detailed definition of iterative best response under these
strategies and, under appropriate discretization of the players’ strategy
spaces, we find that the best response state space contains socially optimal
pure Nash equilibria of the original game (in continuous strategies). We cast
the strategies under a new light, arguing that they constitute natural
choices for conservative myopic bidders, that need to act based on local
information only. For this case we provide bounds for the worst-case ratio of
the social welfare of the reached locally stable states, relative to the
socially optimum welfare. Finally, we make several interesting observations
regarding convergence of the studied strategies, present related experimental
evidence and discuss challenging open problems. This is joint work with Evangelos Markakis.

Angelina Vidali, Max-Planck-Institut für Informatik

ABSTRACT: Characterizing the mechanisms for the domains of combinatorial auctions and scheduling unrelated machines are two outstanding problems in mechanism design. Since the scheduling domain is essentially the subdomain of combinatorial auctions with additive valuations, we consider whether one can extend a characterization of a subdomain to a larger domain. This is possible for two players when the only truthful mechanisms for the subdomain are the affine maximizers. Although this is not true for scheduling, because besides the affine maximizers there are other truthful mechanisms (the threshold mechanisms), we still show that the truthful mechanisms of practically any domain which is a strict superdomain of scheduling and a subdomain of combinatorial auctions are only the affine maximizers.
Joint work with
Elias Koutsoupias.
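For illustration only (a sketch of ours, not from the talk): an affine maximizer is an allocation rule that selects the outcome maximizing a fixed weighted sum of the players' valuations plus an outcome-dependent constant.

```python
def affine_maximizer(outcomes, valuations, weights, boosts):
    """Affine maximizer: pick the outcome o maximizing
    sum_i weights[i] * valuations[i][o] + boosts[o].
    `valuations` holds one dict per player, mapping outcome -> value."""
    def score(o):
        return sum(w * v[o] for w, v in zip(weights, valuations)) + boosts[o]
    return max(outcomes, key=score)

# Two players, two outcomes; with equal weights and zero boosts this
# degenerates to plain welfare maximization:
vals = [{"A": 3, "B": 0}, {"A": 0, "B": 2}]
print(affine_maximizer(["A", "B"], vals, [1, 1], {"A": 0, "B": 0}))  # -> A
print(affine_maximizer(["A", "B"], vals, [1, 1], {"A": 0, "B": 2}))  # -> B
```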
|
13:00 – 14:00 |
Lunch
Break |
14:00 – 16:00 |
Session 2

Aris Anagnostopoulos, University of Rome La Sapienza

ABSTRACT: In a classical stochastic online network design
problem, traffic requirements follow some distributional assumptions and are
gradually revealed to an algorithm. Each time a new request arrives, the
algorithm has to satisfy it by augmenting the network under construction in a
proper way (with no possibility of recovery). In this talk I will introduce a
natural generalization of online network design problems, where a fraction of
the requests (the outliers) can be disregarded. Now, each time a request
arrives, the algorithm first decides whether to satisfy it or not, and only
in the first case it acts accordingly. I will focus on the online stochastic
Steiner tree problem with outliers: A set of t terminals that belong to an
n-node graph is presented, one at a time, to an algorithm. Each time a new
terminal arrives, the algorithm can either discard or select it. In the
latter case, the algorithm connects it to the Steiner tree under
construction. At the end of the process, at least k terminals must be
selected. I will show inapproximability results and present an online algorithm that gives bicriteria guarantees.

Panagiotis Cheilaris, Ben-Gurion University of the Negev

ABSTRACT: We
consider unique-maximum and conflict-free vertex colorings of graphs which
are stronger versions of traditional vertex coloring. These two colorings
have applications in:
* efficient solving of sparse linear systems of equations,
* scheduling parallel assembly of products,
* finding local optima in neighborhood structures,
* frequency assignment in cellular, wireless, and sensor networks.
A unique-maximum coloring of a graph G is an
assignment of colors (i.e., positive integers) to the vertices of G such that
in every simple path p of G, the maximum color that occurs in p occurs exactly once in p. In a conflict-free coloring of G, in every simple path p of G, there must be a color that occurs exactly once in p (not
necessarily the maximum one in p). We present several results about the computational
complexity of and the relation between the two colorings.

Tobias Mueller, CWI Amsterdam

ABSTRACT: A graph is a disk graph if it is the intersection
graph of disks in the plane. That is, we can represent each vertex by a disk in such a way that two vertices are adjacent if and only if the corresponding disks intersect. Similarly, one can define a segment graph as the intersection graph of line segments. A d-dot product graph is a graph that can be represented by vectors in d-space such that two vertices are adjacent if and only if the inner product of the corresponding vectors is at least one. When d=1 this yields the familiar, well-studied class of threshold graphs. Amongst other things, I will consider the recognition problem for these graph classes and the least number of bits needed to store geometric realisations of these graphs in the memory of a computer. By making use of some classical results in algebraic geometry, on pseudoline arrangements (also known as oriented matroids of rank 3), and on the Colin de Verdière parameter, I will settle some conjectures and questions by Breu & Kirkpatrick, Fiduccia et al., Spinrad, and Van Leeuwen & Van Leeuwen, and improve upon a result by Kratochvil & Matousek. (Based on joint work with Colin McDiarmid and Ross Kang.)

Tamara Mchedlidze, National Technical University of Athens

ABSTRACT: A
digraph D is unilateral if for every pair x, y of its vertices there exists a
directed path from x to y, or a directed path from y to x, or both. A mixed
graph M = (V,A,E) with arc set A and edge set E admits a unilateral orientation,
if its edges can be oriented so that the resulting digraph is unilateral. In
this talk, we present the first linear-time recognition algorithm for
unilaterally orientable mixed graphs. Based on this algorithm we derive a
polynomial algorithm for testing whether a unilaterally orientable mixed
graph has a unique unilateral orientation. |
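To make the definition in the abstract above concrete, here is a small Python sketch of ours (not from the talk) that tests whether a given digraph is unilateral, via a Floyd-Warshall-style transitive closure:

```python
def is_unilateral(n, arcs):
    """A digraph is unilateral if for every pair of vertices x, y there
    is a directed path x -> y or y -> x (or both).  O(n^3) check via
    transitive closure; vertices are 0..n-1."""
    reach = [[i == j for j in range(n)] for i in range(n)]
    for u, v in arcs:
        reach[u][v] = True
    for k in range(n):            # transitive closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return all(reach[i][j] or reach[j][i]
               for i in range(n) for j in range(i + 1, n))

print(is_unilateral(3, [(0, 1), (1, 2)]))  # directed path 0->1->2: True
print(is_unilateral(3, [(0, 1)]))          # vertex 2 unrelated to 0, 1: False
```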
16:00 – 16:30 |
Coffee
Break |
16:30 – 18:30 |
Session 3

Aristeidis Tentes, New York University

ABSTRACT: The RSA signature is one of the most commonly used "hash-then-sign" signature schemes,
whose security, however, has only been proven in the Random Oracle Model.
This paper studies the problem of instantiating Random Oracle in the RSA
signature with a concrete Hash Function Family. Our main result is a black
box separation between the security of the RSA signature (using any concrete,
efficient Hash Function Family) and almost any natural assumption about the
RSA modulus n. This includes the standard RSA assumption and, more generally,
any assumption which can be
expressed using only multiplication and inverse operations in Z_n^*. Our
separation rules out an important class of reductions, namely those which do
not use the representation of group elements (which includes most standard
reductions from RSA to other primitives, including RSA signatures in the
Random Oracle Model). To complement our negative
result, for any fixed t, we construct a Hash Function Family of size poly(t,
log n), which makes RSA signatures provably secure (under the RSA assumption
in the standard model) against t-chosen-message attacks. Joint work with Yevgeniy Dodis and Iftach Haitner.

Paraschos Koutris, National Technical University of Athens

ABSTRACT: We study
distributed broadcasting protocols with few transmissions (`shots') in radio
networks with unknown topology. In particular, we examine the case in which a
bound k is given and a node may transmit at most k times during the
broadcasting protocol. We focus on almost oblivious algorithms for k-shot
broadcasting, that is, algorithms where the nodes decide whether to transmit
or not with very little consideration of the transmission history. In this
context, we show a lower bound of \Omega(\frac{n^2}{k}) on the broadcasting
time of any almost oblivious k-shot broadcasting algorithm. Furthermore, we prove
a stronger lower bound of \Omega(n^2) for any 1-shot adaptive broadcasting
protocol. We also present an almost oblivious protocol that matches the lower
bound of \Omega(\frac{n^2}{k}) for every k \le \sqrt{n}.

Othon Michail, RA Computer Technology Institute, Patras

ABSTRACT: Population Protocols (PPs) constitute a model for studying populations of passively mobile tiny computational devices. The first attempt was to model such devices as communicating finite automata of constant memory. In this talk, we outline some recent developments. In particular, we propose two new models, the Mediated Population Protocol (MPP) model and the Passively mobile Machines (PM) model. The former extends the PP model by allowing the communication links to be finite-state buffers, while the latter assumes that the devices are TMs of bounded memory (where the bound is some function of the number of devices). For both models we focus on the class of computable predicates on complete communication graphs and arrive at surprising exact characterizations, establishing that both models are extremely strong.
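As a toy example of the basic PP model (the standard one-way epidemic protocol, a sketch of ours rather than material from the talk): each agent holds a bit, and whenever an agent in state 1 interacts with another agent, the other agent also moves to state 1. Under a random scheduler the population converges to all-1 if and only if some input bit was 1, so the protocol computes the OR predicate:

```python
import random

def simulate_or_protocol(inputs, steps=10000, seed=0):
    """One-way epidemic population protocol for the OR predicate.
    States are the agents' bits; transition (1, q) -> (1, 1) for any q.
    A uniformly random scheduler picks each interacting pair."""
    rng = random.Random(seed)
    states = list(inputs)
    for _ in range(steps):
        i, j = rng.sample(range(len(states)), 2)  # initiator i, responder j
        if states[i] == 1:
            states[j] = 1
    return states

print(simulate_or_protocol([0, 0, 1, 0, 0]))  # the single 1 spreads to everyone
print(simulate_or_protocol([0, 0, 0, 0, 0]))  # no 1 to spread: stays all zeros
```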
|
Friday, August 27 |
|
9:30 – 10:30 |
Invited Session 2

Ioannis Emiris, National and Kapodistrian University of Athens
|
10:30 – 11:00 |
Coffee
Break |
11:00 – 13:00 |
Session 4

Iordanis Kerenidis, CNRS - University Paris

ABSTRACT: In a celebrated paper,
Valiant and Vazirani raised the question of whether the difficulty of
NP-complete problems was due to the wide variation of the number of witnesses
of their instances. They gave a strong negative answer by showing that
distinguishing between instances having zero or one witnesses is as hard as
recognizing NP, under randomized reductions. We consider the same question in
the quantum setting and investigate the possibility of reducing quantum
witnesses in the context of the complexity class QMA, the quantum analogue of
NP. The natural way to quantify the number of quantum witnesses is the
dimension of the witness subspace W in some appropriate Hilbert space H. We
present an efficient deterministic procedure that reduces any problem where
the dimension d of W is bounded by a polynomial to a problem with a unique
quantum witness. The main idea of our reduction is to consider the
alternating subspace of the tensor power H^{\otimes d}. Indeed, the intersection of this subspace with W^{\otimes d} is one-dimensional, and therefore can play the role of the unique quantum witness. Based on work by Rahul Jain, Iordanis Kerenidis, Greg Kuperberg, Miklos Santha, Or Sattath, and Shengyu Zhang, which appeared at the 1st Symposium on Innovations in Computer Science (ICS 2010).

Spyridon Antonakopoulos, Bell Laboratories, Alcatel-Lucent

ABSTRACT: Given a network, a set of
demands and a cost function f(.), the min-cost network design problem is to
route all demands with the objective of minimizing the sum of f(l_e), where
l_e is the traffic load on link e under the routing. We focus on cost
functions of the form f(x) = s + x^a for x > 0, with f(0) = 0. For a <=
1, f(.) is subadditive and exhibits behavior consistent with economies of
scale. This problem corresponds to the well-studied Buy-at-Bulk network
design problem and admits polylogarithmic approximability and hardness results. In this paper, we focus on the less studied scenario of a > 1 with a positive startup cost s > 0. Now,
the cost function f(.) is neither subadditive nor superadditive. This is
motivated by minimizing network-wide energy consumption when supporting a set
of traffic demands. It is commonly accepted that, for some computing and
communication devices, doubling processing speed more than doubles the energy
consumption. Hence, in Economics parlance, such a cost function reflects
diseconomies of scale. We begin by discussing why
existing routing techniques such as randomized rounding and tree-metric
embedding fail to generalize directly. We then present our main contribution,
which is a polylogarithmic approximation algorithm. We obtain this result by
first deriving a bicriteria approximation for a related capacitated min-cost
flow problem that we believe is interesting in its own right. Our approach
for this problem builds upon the well-linked decomposition due to
Chekuri-Khanna-Shepherd, the construction of expanders via matchings due to
Khandekar-Rao-Vazirani, and edge-disjoint routing in well-connected graphs
due to Rao-Zhou. However, we also develop new techniques that allow us to
keep a handle on the total cost, which was not a concern in the
aforementioned literature. This is joint work with Matthew Andrews and Lisa
Zhang, and will appear in FOCS 2010.

Nikos Mutsanas, IDSIA, Lugano

ABSTRACT: We study a classical single
machine scheduling problem with precedence constraints. This problem was
first studied in the seventies, and still poses perplexing questions
concerning its approximability. It was recently shown to be strongly related
to the Vertex Cover problem of an appropriately defined graph. Based on a
connection between this graph and a well-known graph in Dimension Theory of
partial orders, we devise a framework that unifies and often improves on the
best known approximation algorithms for all previously considered special
cases of precedence constraints. Besides these immediate results, the
connection to dimension theory sheds new light on this old problem and explains some older results. We conclude by arguing that a similar approach seems promising in the study of other ordering problems as well.

Elias Tsigaridas, Aarhus University

ABSTRACT: Our probabilistic analysis sheds light on the following questions: Why do random polynomials seem to have few, well-separated real roots, on average? Why may exact algorithms for real root isolation perform comparably to, or even better than, numerical ones? We exploit results by Kac and by
Edelman and Kostlan and we use tools from statistical physics in order to
estimate the real root separation of degree d polynomials with i.i.d.
coefficients that follow two zero-mean normal distributions, and to obtain
the expected (bit) complexity of the Sturm solver. We also prove that the expected
number of real roots of a degree d polynomial in the Bernstein basis is
\sqrt{2d}\pm O(1), when the coefficients are i.i.d. variables with moderate
standard deviation. This is joint work with André Galligo and Ioannis Emiris. |
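For readers unfamiliar with the Sturm method mentioned in the abstract above, here is a compact exact implementation of ours (rational arithmetic throughout) that counts the distinct real roots of a polynomial in an interval via sign changes along the Sturm chain:

```python
from fractions import Fraction

def trim(p):
    """Drop leading zero coefficients (coefficients are high-to-low)."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def polyrem(p, q):
    """Remainder of the polynomial division p mod q."""
    p, q = trim(list(p)), trim(list(q))
    while len(p) >= len(q) and any(c != 0 for c in p):
        c = Fraction(p[0]) / Fraction(q[0])
        for i in range(len(q)):   # subtract c * x^(deg p - deg q) * q
            p[i] -= c * q[i]
        p = trim(p)               # leading term cancels exactly
    return p

def polyval(p, x):
    r = Fraction(0)
    for c in p:                   # Horner's scheme
        r = r * x + c
    return r

def sturm_chain(p):
    chain = [trim(list(p)), trim(deriv(p))]
    while True:
        r = polyrem(chain[-2], chain[-1])
        if all(c == 0 for c in r):
            return chain
        chain.append([-c for c in r])

def count_real_roots(p, a, b):
    """Distinct real roots of p in the half-open interval (a, b]."""
    chain = sturm_chain([Fraction(c) for c in p])
    def sign_changes(x):
        signs = [v for v in (polyval(q, x) for q in chain) if v != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if (s > 0) != (t > 0))
    return sign_changes(a) - sign_changes(b)

# x^3 - x has the three real roots -1, 0, 1; x^2 + 1 has none:
print(count_real_roots([1, 0, -1, 0], -2, 2))  # -> 3
print(count_real_roots([1, 0, 1], -2, 2))      # -> 0
```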
13:00 – 14:00 |
Lunch
Break |
14:00 – 15:30 |
Session 5

Alexis Kaporis, University of the Aegean

ABSTRACT: We present some fine
probabilistic characteristics of the method that improve the search time of a random file.

Emmanouil Pountourakis, Northwestern University

ABSTRACT: We study the problem of
designing group-strategyproof cost-sharing mechanisms. The players report
their bids for getting serviced and the mechanism decides a set of players
that are going to be serviced and how much each one of them is going to
pay. We determine three conditions: Fence Monotonicity, Stability of the
allocation and Validity of the tie-breaking rule that are necessary and
sufficient for group-strategyproofness, regardless of the cost function.
Consequently, Fence Monotonicity characterizes group-strategyproof
cost-sharing schemes closing an important open problem. Finally, we use our
results to prove that there exist families of cost functions, where any
group-strategyproof mechanism has arbitrarily poor budget balance.

Vasilis Gkatzelis, Courant Institute, New York University

ABSTRACT: We study policies aiming to minimize the weighted
sum of completion times of jobs in the context of coordination mechanisms for
selfish scheduling problems. Our goal is to design local policies that
achieve a good price of anarchy in the resulting equilibria for unrelated
machine scheduling. In short, we present the first
constant-factor-approximate coordination mechanisms for this model and show
that our bounds imply a new combinatorial constant-factor approximation
algorithm for the underlying optimization problem. More specifically:
- We present a generalization of the ShortestFirst policy for weighted jobs, called SmithRule; we prove that it achieves an approximation ratio of 4, and we show that any set of strongly local ordering policies can result in equilibria with approximation ratio at least 4, even for unweighted jobs.
- The main result of our paper is ProportionalSharing, a preemptive strongly local policy that generalizes the EqualSharing policy for weighted jobs and beats this lower bound of 4; we show that this policy achieves an approximation ratio of 2.619 for the weighted sum of completion times and an approximation ratio of 2.5 for the (unweighted) sum of completion times. Again, we observe that these bounds are tight. Furthermore, we show that ProportionalSharing induces potential games (in which best-response dynamics converge to pure Nash equilibria).
- All of our upper bounds are for the robust price of anarchy, defined by Roughgarden (Intrinsic robustness of the price of anarchy. In STOC, pages 513–522, 2009), so they naturally extend to mixed Nash equilibria, correlated equilibria, and regret minimization dynamics.
- Finally, we prove that the games induced by ProportionalSharing are $\beta$-nice, which yields the first combinatorial constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling. |
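As background for the SmithRule policy in the abstract above: on a single machine, Smith's classical rule (schedule jobs in non-increasing order of the weight-to-processing-time ratio) exactly minimizes the weighted sum of completion times. A minimal sketch of ours:

```python
def smith_order(jobs):
    """Smith's rule: sort jobs (weight, processing_time) by
    non-increasing w/p ratio, i.e. non-decreasing p/w."""
    return sorted(jobs, key=lambda job: job[1] / job[0])

def weighted_completion_time(schedule):
    """Sum of w_j * C_j when jobs run back-to-back in the given order."""
    t = total = 0
    for w, p in schedule:
        t += p
        total += w * t
    return total

jobs = [(1, 3), (3, 1)]  # (weight, processing time) pairs
print(weighted_completion_time(smith_order(jobs)))  # -> 7
print(weighted_completion_time(jobs))               # -> 15 (worse order)
```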
15:30 – 16:00 |
Coffee
Break |
16:00 – 17:00 |
Session 6

Simon Yoffe, Tel-Aviv Academic College

ABSTRACT: The Union-Find data structure for maintaining disjoint sets is one of the best-known and most widespread data
structures, in particular the version with constant-time Union and efficient
Find. Recently, the question of how to handle deletions from the structure in
an efficient manner has been taken up, first by Kaplan, Shafrir and Tarjan
(2002) and subsequently by Alstrup et al. (2005). The latter work shows that
it is possible to implement deletions in constant time, without affecting
adversely the asymptotic complexity of other operations, even when this
complexity is calculated as a function of the current size of the set. In
this talk we present a conceptual and technical simplification of the
algorithm, which has the same asymptotic complexity (and constant factors may
be smaller).

Vissarion Fisikopoulos, National and Kapodistrian University of Athens

ABSTRACT: Motivated by resultant
computations we study the convex polytope, called the Secondary polytope,
whose vertices correspond to all regular triangulations of a given point set
$A$ in $R^{d}$. A triangulation of $A$ is called regular if it can be
obtained as the projection of the lower hull of the lifted $A$, for some
lifting to $R^{d+1}$. For this polytope there is an output-sensitive
algorithm for enumerating its vertices. We describe two equivalence classes
of the vertices of this polytope, where the one is a superset of the other.
Our goal is output-sensitive algorithms for enumerating one representative from
each class. To this end, we offer algorithmic characterizations of the
polytope's edges that connect two different classes. Some applications of
this theory are in polynomial equation solving and in the implicitization of
parametric (hyper)surfaces. |
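To illustrate regularity in the simplest case d = 1 (a toy sketch of ours, not from the talk): lift points a_i on the line to (a_i, w_i) in R^2; the edges of the lower hull of the lifted points, projected back to the line, give the regular subdivision induced by the lifting w.

```python
def lower_hull(points):
    """Lower convex hull of 2D points (Andrew's monotone chain)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for p in sorted(points):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def regular_subdivision_1d(points, lifting):
    """Project the lower hull of the lifted points back to the line:
    consecutive hull vertices bound the cells of the subdivision."""
    hull = lower_hull(list(zip(points, lifting)))
    return [(hull[i][0], hull[i + 1][0]) for i in range(len(hull) - 1)]

# Lifting the middle point up hides it; lifting it down makes it a vertex:
print(regular_subdivision_1d([0, 1, 2], [0, 1, 0]))   # -> [(0, 2)]
print(regular_subdivision_1d([0, 1, 2], [0, -1, 0]))  # -> [(0, 1), (1, 2)]
```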
17:00 – 17:45 |
Break |
17:45 – 18:30 |
Invited Session 3

Constantinos Daskalakis, Massachusetts Institute of Technology

ABSTRACT: Recent work has shown that finding a mixed Nash equilibrium in normal-form games is PPAD-complete, while the pure Nash equilibrium problem in congestion games is PLS-complete. Nevertheless, there are important classes of games, e.g. simple stochastic games, and fixed point problems, e.g. fixed points of contraction maps, that appear easier. For these problems the PPAD and PLS machinery seems inappropriate to provide an accurate complexity-theoretic characterization; at the
same time, polynomial-time algorithms have been evading researchers for decades. We provide complexity-theoretic
machinery that could lead to completeness results at the interface of PPAD
and PLS, in the form of geometrically embedded local search. |
21:00 |
Dinner,
Taverna Virinis (Archimidous 11, Pagkrati, Tel: 210-7012153) |