09:00 - 09:30 |
Registration - Opening |
09:30 - 10:30 |
Geometric random walks and volume computation
Ioannis Emiris, Athena Research Center and NKUA
Abstract:
This talk surveys efficient algorithms for approximating the volume of a convex polytope, which is a $\#P$-hard problem. State-of-the-art techniques are based on sampling the convex region by means of geometric random walks. We refer to
recent progress, and emphasize a number of applications, including one in estimating parameters characterizing financial markets.
This is joint work with Apostolos Chalkis and Vissarion Fisikopoulos.
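As an illustrative aside (a minimal sketch of the sampling primitive behind such methods, not the speakers' algorithm; practical implementations, e.g. in the volesti library, use more refined walks such as hit-and-run and billiard walks), here is a basic ball walk over a polytope $\{x : Ax \le b\}$:

```python
import numpy as np

def ball_walk(A, b, x0, steps=1000, delta=0.2, seed=0):
    """Ball walk in the polytope {x : A @ x <= b}: from the current
    point, propose a uniform point in a ball of radius delta and
    move there only if the proposal stays inside the polytope."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    out = []
    for _ in range(steps):
        d = rng.normal(size=x.shape)
        # uniform point in the delta-ball: direction times radius delta*U^(1/d)
        d *= delta * rng.random() ** (1 / x.size) / np.linalg.norm(d)
        y = x + d
        if np.all(A @ y <= b):  # reject proposals that leave the body
            x = y
        out.append(x.copy())
    return np.array(out)

# The square [-1, 1]^2 written as {x : Ax <= b}
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
pts = ball_walk(A, b, x0=np.zeros(2))
assert np.all(pts @ A.T <= b + 1e-12)  # every iterate stays feasible
```

Volume algorithms run such walks inside a sequence of nested convex bodies and multiply the estimated ratios of successive volumes.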
Short Bio:
Ioannis Emiris has been President and General Director of the Athena Research Center since April 2021, and Professor at the National and Kapodistrian University of Athens since 2002. He obtained his BSc from Princeton University in 1989 and his PhD
from UC Berkeley in 1994. He was a tenured researcher at INRIA Sophia-Antipolis (France) from 1995 to 2002. He has supervised 14 PhD theses. His h-index is 40, and he has twice earned the Distinguished Paper award at the ACM
International Symposium on Symbolic & Algebraic Computation (ISSAC). He was Program Committee chair of ISSAC 2011 and General co-chair of ISSAC 2020. He was a member of the Scientific Board of the Hellenic Foundation for Research &
Innovation, representing Mathematics and Informatics (2018-21).
|
10:30 - 11:00 |
Coffee break |
11:00 - 12:40 |
Operating rooms scheduling with a shared resource: a red-blue knapsack modeling approach
Federico Della Croce, DIGEP, Politecnico di Torino, Italy
Abstract:
A practical operating rooms scheduling problem arising at a public hospital in Northern Italy is considered. It consists in deriving a daily schedule for a set of operating rooms, where each operating room is associated with a
surgical specialty and there is a set of elective surgical interventions to be performed on the considered day. Each intervention has a priority coefficient and may or may not require the use of a shared resource. The goal is to
minimize the total weight of the interventions that cannot be processed during the day and need to be postponed to the following days. We denote this problem as the Shared Resource Operating Rooms Scheduling Problem (SRORSP).
For the case with 2 machines, we show that SRORSP is weakly NP-hard and provide a fully polynomial time approximation scheme.
For the general case with m machines, we show that SRORSP is strongly NP-hard and provide a negative approximation result.
Also, we propose and compare two exact solution approaches: the first induced by a knapsack-like ILP formulation, the second exploiting a column generation approach based on the above-mentioned formulation.
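The scheme itself is not detailed in the abstract; as a hedged sketch of the standard machinery such results typically build on (the classical profit-scaling FPTAS for 0/1 knapsack, our own illustration, not the paper's algorithm), consider:

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """Classical profit-scaling FPTAS for 0/1 knapsack: round profits
    down to multiples of K = eps * max(profits) / n, then solve the
    scaled instance exactly with a profit-indexed dynamic program.
    The true profit of the returned solution is >= (1 - eps) * OPT."""
    n = len(profits)
    K = eps * max(profits) / n
    scaled = [int(p // K) for p in profits]
    total = sum(scaled)
    INF = float("inf")
    dp = [0.0] + [INF] * total  # dp[v] = min weight reaching scaled profit v
    for sp, w in zip(scaled, weights):
        for v in range(total, sp - 1, -1):
            if dp[v - sp] + w < dp[v]:
                dp[v] = dp[v - sp] + w
    best = max(v for v in range(total + 1) if dp[v] <= capacity)
    return K * best  # a lower bound on the profit actually achieved

# OPT = 220 (take items 2 and 3: weight 20 + 30 = 50)
val = knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1)
assert val >= (1 - 0.1) * 220
```

The running time is polynomial in $n$ and $1/\epsilon$, which is what makes the weak NP-hardness of the 2-machine case compatible with an FPTAS.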
Spectral Relaxations and Fair Densest Subgraphs
Aris Anagnostopoulos, Sapienza University of Rome
Abstract:
Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. In this paper, we address the problem of identifying a densest subgraph, while ensuring that neither of
the two groups defined by a binary protected attribute is disparately impacted.
Unfortunately, the underlying algorithmic problem is NP-hard, even in its approximation version: approximating the densest fair subgraph with a polynomial-time algorithm is at least as hard as finding a densest subgraph on at most $k$
vertices, for which no constant-factor approximation algorithms are known.
Despite such negative premises, we are able to provide approximation results in two important cases. In particular, we are able to prove that a suitable spectral embedding allows recovery of an almost optimal, fair, dense subgraph
hidden in the input data, whenever one is present, a result that is further supported by experimental evidence. We also show a polynomial-time, $2$-approximation algorithm, whenever the underlying graph is itself fair. We finally
prove that, under the small set expansion hypothesis, this result is tight for fair graphs.
The above theoretical findings drive the design of heuristics, which we experimentally evaluate on a scenario based on real data, in which our aim is to strike a good balance between diversity and highly correlated items from Amazon
co-purchasing graphs.
Perfect Matchings and Lower Bounds on the Computation of the Permanent
Archontia Giannopoulou, NKUA
Abstract:
The study of Graph Minors theory was sparked by
the proof of Kuratowski’s theorem, which characterised
planar graphs as exactly the class of graphs excluding two specific
graphs as topological minors, and by Wagner’s Conjecture
that every class of graphs closed under taking minors admits an
exact characterisation in terms of forbidding a finite set of graphs
as minors.
The proof of Wagner’s Conjecture by Robertson and Seymour,
extending over a series of 23 papers, gave rise to numerous structural
theorems, some of which resulted in many algorithmic results
and applications, including a complete dichotomy result on the
computational complexity of computing the permanent of symmetric
matrices. In this talk we will introduce the main structural results
and their algorithmic applications, and discuss their extensions
to graphs admitting perfect matchings and potential applications.
Hierarchical Clustering: Upper Bounds, Lower Bounds and Some Open Questions
Vaggos Chatziafratis, University of California, Santa Cruz
Abstract:
Hierarchical Clustering is an important tool for unsupervised learning whose goal is to construct a hierarchical decomposition of a given dataset describing relationships at all levels of granularity simultaneously. Despite its long
history, Hierarchical Clustering was underdeveloped from a theoretical perspective, partly because of a lack of suitable objectives and algorithms with guarantees.
In this talk, I will present recent progress in the area, with an emphasis on connections to approximation algorithms such as those for Sparsest Cut and Balanced Cut and on some hardness-of-approximation results, and I will highlight
some interesting open problems in the area.
|
12:40 - 12:50 |
Short break |
12:50 - 13:50 |
A computational view of the Probabilistic Method
Dimitris Achlioptas, NKUA
Abstract:
Imagine being in front of n switches hooked up to an array of lightbulbs. Each lightbulb is connected to only a handful of switches and turns on under many, but not all, configurations of those switches. The lightbulbs are often in conflict: for a given switch, some lightbulbs turn on when the switch is up, while others turn on when it is down. Is it possible to set the switches so that all lightbulbs turn on?
When the answer to the above question is positive, then flipping a coin for how to set each switch will cause all lightbulbs to turn on with positive probability. Crucially, any multiplicative approximation of this probability, no matter how poor, suffices to establish that the answer is positive. The Lovasz Local Lemma (LLL), a cornerstone of the Probabilistic Method, is a tool for establishing such positive probability lower bounds in the face of conflict. In this talk, we will present a new, computational view of the LLL which leads to significant generalizations, including a passage from the setting of probabilities to general supermodular functions.
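One concrete computational reading of the LLL is the resampling algorithm of Moser and Tardos; the sketch below (our illustration, with a made-up instance, not the speaker's construction) sets the switches at random and, while some lightbulb is off, resamples only the switches that bulb depends on:

```python
import random

def moser_tardos(n, constraints, seed=0):
    """Moser-Tardos resampling for the switches/lightbulbs setting.
    `constraints` is a list of (variables, predicate) pairs: each
    lightbulb depends on a few switches, and `predicate` reports
    whether it is on under the current assignment."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n)]  # uniformly random switches
    while True:
        bad = next((c for c in constraints if not c[1](x)), None)
        if bad is None:
            return x                       # all lightbulbs are on
        for i in bad[0]:                   # resample only that bulb's switches
            x[i] = rng.random() < 0.5

# Toy instance: bulb i is on unless its three switches are all down.
bulbs = [((i, i + 1, i + 2), lambda x, i=i: any(x[i:i + 3]))
         for i in range(8)]
assignment = moser_tardos(10, bulbs)
assert all(pred(assignment) for _, pred in bulbs)
```

Under the LLL condition, this loop terminates after an expected linear number of resamplings, turning the positive-probability argument into an algorithm.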
Short Bio:
Dimitris Achlioptas joined the Department of Informatics and Telecommunications of the University of Athens in 2010. From 1998 to 2005 he was with Microsoft Research and since 2005 with UC Santa Cruz. In theory, his expertise lies in the interaction between randomness and computation and his work has received an NSF CAREER award, a Sloan Fellowship, and an ERC Starting Grant. In practice, he likes to think about scalability questions and holds patents on topics ranging from load balancing and cache optimization to web search personalization.
|
13:50 - 15:00 |
Lunch Break |
15:00 - 16:15 |
EM’s Convergence in Gaussian Latent Tree Models
Anthimos Vardis Kandiros, MIT
Abstract:
We study the optimization landscape of the log-likelihood function and the convergence of the Expectation-Maximization (EM) algorithm in latent Gaussian tree models, i.e. tree-structured Gaussian graphical models whose leaf nodes are
observable and non-leaf nodes are unobservable. We show that the unique non-trivial stationary point of the population log-likelihood is its global maximum, and establish that the expectation-maximization algorithm is guaranteed to
converge to it in the single latent variable case. Our results for the landscape of the log-likelihood function in general latent tree models provide support for the extensive practical use of maximum likelihood based-methods in this
setting. Our results for the EM algorithm extend an emerging line of work on obtaining global convergence guarantees for this celebrated algorithm. We show our results for the non-trivial stationary points of the log-likelihood by
arguing that a certain system of polynomial equations obtained from the EM updates has a unique non-trivial solution. The global convergence of the EM algorithm follows by arguing that all trivial fixed points are higher-order saddle
points.
Universal Learning Rates for Multiclass Classification
Alkis Kalavasis, NTUA
Abstract:
We study the problem of multiclass classification with a bounded number of different labels $k$, in the realizable setting. We extend the traditional PAC model to distribution-dependent learning rates. In this universal learning model
(Bousquet, Hanneke, Moran, van Handel and Yehudayoff, STOC'21), we provide a complete characterization of the achievable learning rates that holds for every fixed distribution. In particular, we show the following trichotomy: for any
concept class, the optimal learning rate is either exponential, linear or arbitrarily slow. Additionally, we provide complexity measures of the underlying hypothesis class that characterize when these rates occur.
Label Ranking through Nonparametric Regression
Eleni Psaroudaki, NTUA
Abstract:
Label Ranking (LR) corresponds to the problem of learning a hypothesis that maps features to rankings over a finite set of labels. We adopt a nonparametric regression approach to LR and obtain theoretical performance guarantees for
this fundamental practical problem. We introduce a generative model for Label Ranking, in noiseless and noisy nonparametric regression settings, and provide sample complexity bounds for learning algorithms in both cases. In the
noiseless setting, we study the LR problem with full rankings and provide computationally efficient algorithms using decision trees and random forests in the high-dimensional regime. In the noisy setting, we consider the more general
cases of LR with incomplete and partial rankings from a statistical viewpoint and obtain sample complexity bounds using the One-Versus-One approach of multiclass classification. Finally, we complement our theoretical contributions
with experiments, aiming to understand how the input regression noise affects the observed output.
|
16:15 - 16:45 |
Coffee Break |
16:45 - 17:35 |
Prophet Inequalities for Cost Minimization
Vasilis Livanos, UIUC
Abstract:
Prophet inequalities for rewards maximization are fundamental results from optimal stopping theory with several applications to mechanism design and online optimization. We study the cost minimization counterpart of the classical
prophet inequality due to Krengel, Sucheston, and Garling, where one is facing a sequence of costs $X_1, X_2, \ldots, X_n$ in an online manner and must ``stop'' at some point and take the last cost seen.
Given that the $X_i$'s are independent random variables drawn from known distributions, the goal is to devise a stopping strategy $S$ (online algorithm) that minimizes the expected cost. The best cost possible is $E[\min_i X_i]$
(offline optimum), achievable only by a prophet who can see the realizations of all $X_i$'s upfront. We say that strategy $S$ is an $\alpha$-approximation to the prophet ($\alpha \ge 1$) if $E[S] \le \alpha E[\min_i X_i]$. We first
observe that if the $X_i$'s are not
identically distributed, then no strategy can achieve a bounded approximation, no matter if the arrival order is adversarial or random, even when restricted to $n=2$ and distributions with support size at most two. This leads us to
consider the case where the $X_i$'s are independent and identically distributed (I.I.D.). For the I.I.D. case, we give a complete characterization of the optimal stopping strategy. We show that it achieves a (distribution-dependent)
constant-factor approximation to the prophet's cost for almost all distributions and that this constant is tight. In particular, for distributions for which the integral of the hazard rate is a polynomial $H(x)=\sum_{i=1}^k a_i
x^{d_i}$, where $d_1 < ... < d_k$, the approximation factor is $\lambda(d_1)$, a decreasing function of $d_1$, and is the best possible for $H(x)=x^{d_1}$. Furthermore, when the hazard rate is monotonically increasing (i.e. the
distribution is MHR), we show that this constant is at most 2, and this again is the best possible for the MHR distributions. For the classical prophet inequality for reward maximization, single-threshold strategies have been
powerful enough to achieve the best possible approximation factor. Motivated by this, we analyze single-threshold strategies for the cost prophet inequality problem. We design a threshold that achieves an
$O(\mathrm{polylog}(n))$-factor approximation, where the exponent in the logarithmic factor is a distribution-dependent constant, and we show a matching lower bound. We believe that our results may be of independent interest for
analyzing approximately optimal (posted-price-style) mechanisms for procuring items.
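A single-threshold strategy is easy to simulate; the sketch below (an illustration with an arbitrary threshold and exponential costs, not the paper's optimal strategy) estimates the empirical approximation factor against the prophet:

```python
import random

def threshold_strategy(costs, tau):
    """Accept the first cost below tau; if none is, take the last one."""
    for i, c in enumerate(costs):
        if c < tau or i == len(costs) - 1:
            return c

rng = random.Random(42)
n, trials = 10, 20000
strat_total = prophet_total = 0.0
for _ in range(trials):
    xs = [rng.expovariate(1.0) for _ in range(n)]  # i.i.d. exponential costs
    strat_total += threshold_strategy(xs, tau=0.2)
    prophet_total += min(xs)                       # offline optimum

ratio = strat_total / prophet_total  # empirical approximation factor
assert ratio >= 1.0  # pointwise, the strategy never beats the prophet
```

For the exponential distribution the hazard rate is constant, i.e. $H(x) = x$, so this is the $d_1 = 1$ case of the polynomial-hazard-rate regime discussed above.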
Graph Connectivity With Noisy Queries
Miltiadis Stouras, EPFL
Abstract:
Graph connectivity is a fundamental combinatorial optimization problem
that arises in many practical applications, where usually a spanning
subgraph of a network is used for its operation. However, in the real world,
links may fail unexpectedly, rendering the networks non-operational, while
checking whether a link is damaged is costly and possibly erroneous.
After an event that has damaged an arbitrary subset of the edges, the network
operator must find a spanning tree of the network using non-damaged edges
by making as few checks as possible.
Motivated by such questions, we study the problem of finding a spanning tree
in a network, when we only have access to noisy queries of the form
“Does edge e exist?”. We design efficient algorithms, even when edges fail
adversarially, for all possible error regimes: 2-sided error (where any answer
might be erroneous), false positives (where “no” answers are always correct)
and false negatives (where “yes” answers are always correct).
In the first two regimes we provide efficient algorithms and give matching
lower bounds for general graphs. In the false-negatives case we design
efficient algorithms for large interesting families of graphs (e.g. bounded
treewidth, sparse). Using the previous results, we provide tight algorithms
for the practically useful family of planar graphs in all error regimes.
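One natural baseline for such settings (our own hedged sketch, not the paper's algorithms, which are more refined) is to amplify each noisy query by majority vote and then build the tree Kruskal-style:

```python
import random

def noisy_oracle(edges, p, rng):
    """A query oracle answering "does edge e exist?" that lies
    independently with probability p (2-sided error)."""
    def query(e):
        truth = e in edges
        return truth if rng.random() >= p else not truth
    return query

def spanning_tree_noisy(nodes, candidates, query, reps=25):
    """Kruskal-style construction where each candidate edge's
    existence is decided by majority vote over `reps` noisy queries."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for e in candidates:
        votes = sum(query(e) for _ in range(reps))
        if 2 * votes > reps:                 # majority says the edge exists
            a, b = find(e[0]), find(e[1])
            if a != b:                       # avoid cycles
                parent[a] = b
                tree.append(e)
    return tree

rng = random.Random(1)
nodes = range(6)
real = {(i, i + 1) for i in range(5)} | {(0, 5)}   # a connected graph
cands = [(i, j) for i in nodes for j in nodes if i < j]
tree = spanning_tree_noisy(nodes, cands, noisy_oracle(real, 0.1, rng))
assert len(tree) == 5 and set(tree) <= real  # spanning tree of real edges
```

Majority voting costs a logarithmic factor per edge; the point of the talk's results is to do better than this naive repetition.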
|
17:35 - 17:50 |
Short break |
17:50 - 18:50 |
TBA
Manolis Kellis, MIT
Abstract:
|
18:50 - 19:00 |
Short break |
19:00 - 19:30 |
Lightning Talks - Open Problems |
|
|
09:30 - 10:30 |
Two Models for Scheduling under Uncertainty
Evripidis Bampis, Sorbonne University
Abstract:
|
10:30 - 11:00 |
Coffee Break |
11:00 - 12:15 |
Descriptive complexity of counting problems the decision version of which is easy
Aggeliki Chalki, NTUA
Abstract:
The class #P is the class of functions that count the number of solutions to problems in NP. Since very few counting problems can be computed exactly in polynomial time (e.g. counting spanning trees), the interest of the community has
turned to the complexity of approximating them. To this end, the class #PE, of problems in #P whose decision version is in P, is of great significance.
We focus on a subclass of #PE, namely TotP, which is the class of functions that count the total number of paths of non-deterministic polynomial-time Turing machines (NPTMs). TotP contains all self-reducible #PE functions and it is
robust, in the sense that it has natural complete problems and is closed under addition, multiplication, and subtraction by one.
We present logical characterizations of TotP and two other robust subclasses of this class building upon two seminal works in the area of descriptive complexity for counting problems (by Saluja et al. 1995 and Arenas et al. 2020).
Specifically, to capture TotP, we use recursion on functions over second-order variables which, we believe, is of independent interest.
Joint work with Antonis Achilleos, Eleni Bakali, and Aris Pagourtzis.
Pure-Circuit: Strong Inapproximability for PPAD
Alexandros Hollender, University of Oxford
Abstract:
We introduce the Pure-Circuit problem: a new tool for proving hardness of approximation for problems in the class PPAD. We apply this new technique to show tight inapproximability results for various Nash equilibrium computation
problems. Namely, the following problems are all PPAD-complete:
- computing an $\varepsilon$-Nash equilibrium in a graphical game for any $\varepsilon < 1/2$,
- computing an $\varepsilon$-well-supported Nash equilibrium in a graphical game for any $\varepsilon < 1$,
- computing an $\varepsilon$-well-supported Nash equilibrium in a polymatrix game for any $\varepsilon < 1/3$. We show that these hardness results are tight for the case of two-action games by providing matching upper bounds.
Joint work with Argyrios Deligkas, John Fearnley, and Themistoklis Melissourgos.
Almost Universally Optimal Distributed Laplacian Solvers via Low-Congestion Shortcuts
Themis Gouleakis, National University of Singapore
Abstract:
In this talk, we refine the distributed Laplacian solver recently developed by Forster, Goranci, Liu, Peng, Sun, and Ye (FOCS `21) via the Ghaffari-Haeupler framework (SODA `16) of low-congestion shortcuts. Specifically, if
$\varepsilon > 0$ is the error of the Laplacian solver, we obtain two main results:
- In the supported version of the CONGEST model, we establish an almost universally optimal Laplacian solver. Namely, we show that any Laplacian system on an $n$-node graph with shortcut quality $\text{SQ}(G)$ can be
solved after $n^{o(1)} \text{SQ}(G) \log(1/\varepsilon)$ rounds, almost matching our lower bound of $\widetilde{\Omega}(\textrm{SQ}(G))$ rounds on any graph $G$. Our techniques also imply almost universally optimal Laplacian
solvers
in the full generality of CONGEST, conditional on the efficient construction of shortcuts. In particular, they unconditionally imply a novel $D n^{o(1)} \log (1/\varepsilon)$ Laplacian solver for excluded-minor graphs with
hop-diameter $D$.
- Following a recent line of work in distributed algorithms, we consider a hybrid communication model which enhances CONGEST with limited global power in the form of the node-capacitated clique (NCC) model. In this model, we show
the
existence of a Laplacian solver with round complexity $n^{o(1)} \log(1/\varepsilon)$.
The unifying thread of these results, and our main technical contribution, is the development of nearly-optimal distributed algorithms for a novel congested generalization of the standard part-wise aggregation problem.
This primitive accelerates the Laplacian solver of Forster, Goranci, Liu, Peng, Sun, and Ye, and we believe it will find further applications in the future.
|
12:15 - 12:30 |
Short break |
12:30 - 13:45 |
Full Analysis of Nakamoto Consensus in Bounded-Delay Networks
Nikos Leonardos, NKUA
Abstract:
Nakamoto consensus, arguably the most exciting development in
distributed computing in the last few years, is in a sense a recasting
of the traditional state-machine-replication problem in an
unauthenticated setting, where furthermore parties come and go without
warning. The protocol relies on a cryptographic primitive known as
proof of work (PoW) which is used to throttle message passing with the
PoW difficulty level being appropriately adjusted throughout the
course of the protocol execution. While the original formulation was
only accompanied by rudimentary analysis, significant and steady
progress has been made in abstracting the protocol’s properties and
providing a formal analysis under various restrictions and protocol
simplifications. Still, a full analysis of the protocol that includes
its target recalculation and, notably, the timestamp adjustment
mechanisms which equip it to operate in its intended setting of
bounded communication delays, imperfect clocks and dynamic
participation, has remained open. This paper’s main result fills this
final gap in the protocol's analysis in the Byzantine setting, presenting
a proof that Nakamoto’s protocol achieves, under suitable conditions,
consistency and liveness in the above setting. A number of technical
tools and techniques are introduced that may be of independent
interest in the analysis of blockchain protocols.
Joint work with Juan Garay and Aggelos Kiayias.
MAX-CUT for Triangle-free Graphs
Felix Clemen, Karlsruher Institut für Technologie
Abstract:
A well-known conjecture by Erdős states that every triangle-free graph on n vertices can be made bipartite by removing at most $(n^2)/25$ edges. We will present progress on this conjecture in different directions. In particular, we
explore a connection of this conjecture to spectral theory.
Denote by $q_n(G)$ the smallest eigenvalue of the signless Laplacian of a graph $G$. Brandt conjectured in 1997 that $q_n(G) \leq (4/25)n$ for regular triangle-free graphs. We prove Brandt's conjecture in a strong form by confirming it
for all triangle-free graphs.
This is joint work with József Balogh, Bernard Lidický, Sergey Norin and Jan Volec.
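The quantity in question is easy to compute numerically; as a small sanity check (our illustration, assuming the standard definition $Q = D + A$), the 5-cycle, a 2-regular triangle-free graph, satisfies the $(4/25)n$ bound:

```python
import numpy as np

def smallest_signless_laplacian_eig(adj):
    """q_n(G): smallest eigenvalue of the signless Laplacian Q = D + A."""
    A = np.asarray(adj, dtype=float)
    Q = np.diag(A.sum(axis=1)) + A
    return np.linalg.eigvalsh(Q)[0]  # eigvalsh returns ascending order

# C5: a 2-regular triangle-free graph on n = 5 vertices
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

qn = smallest_signless_laplacian_eig(A)
assert qn <= (4 / 25) * n + 1e-9            # the conjectured bound
assert abs(qn - (3 - np.sqrt(5)) / 2) < 1e-9  # = 2 + 2*cos(4*pi/5)
```

Here $q_n(C_5) = (3-\sqrt{5})/2 \approx 0.382$, comfortably below $(4/25)\cdot 5 = 0.8$.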
Complexity of Finding Maximum Locally Irregular Induced Subgraphs
Nikos Melissinos, Paris Dauphine
Abstract:
If a graph $G$ is such that no two adjacent vertices of $G$ have the same
degree, we say that $G$ is \emph{locally irregular}. In this work we introduce
and study the problem of identifying a largest induced subgraph of a given
graph $G$ that is locally irregular. Equivalently, given a graph $G$, find a subset
$S$ of $V(G)$ of minimum order such that deleting the vertices of $S$
from $G$ results in a locally irregular graph; we denote by $I(G)$ the order of
such a set $S$. We first examine some easy graph families, namely paths,
cycles, trees, complete bipartite and complete graphs. However, we show that
the decision version of the introduced problem is NP-complete, even for
restricted families of graphs, such as subcubic planar bipartite or cubic bipartite
graphs. We then show that we cannot even approximate an optimal solution
within a ratio of $\mathcal{O}(n^{1-\frac{1}{k}})$, where $k\geq 1$ and $n$ is the
order of the graph, unless $P=NP$, even when the input graph is bipartite.
Then, looking for more positive results, we turn our attention towards computing
$I(G)$ through the lens of parameterised complexity. In particular, we provide
two algorithms that compute $I(G)$, each one considering different parameters.
The first one considers the size of the solution $k$ and the maximum degree
$\Delta$ of $G$ with running time $(2\Delta)^kn^{\mathcal{O}(1)}$, while the
second one considers the treewidth $tw$ and $\Delta$ of $G$, and has running
time $\Delta^{2tw}n^{\mathcal{O}(1)}$. Therefore, we show that the problem is
FPT by both $k$ and $tw$ if the graph has bounded maximum degree $\Delta$.
Since these algorithms are not FPT for graphs with unbounded maximum
degree (unless we consider $\Delta + k$ or $\Delta + tw$ as the parameter), it is
natural to wonder whether there exists an algorithm that does not include additional
parameters (other than $k$ or $tw$) in its dependency.
We answer this question negatively, by showing that our algorithms are
essentially optimal. In particular, we prove that there is no algorithm that
computes $I(G)$ with dependence $f(k)n^{o(k)}$ or $f(tw)n^{o(tw)}$, unless the
ETH fails.
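On tiny instances the definition of $I(G)$ can be checked directly by brute force; the sketch below (exponential time, purely illustrative, not one of the paper's algorithms) verifies it on a few small graphs:

```python
from itertools import combinations

def is_locally_irregular(adj, verts):
    """No two adjacent vertices of the induced subgraph share a degree."""
    deg = {v: sum(1 for u in adj[v] if u in verts) for v in verts}
    return all(deg[v] != deg[u]
               for v in verts for u in adj[v] if u in verts)

def I(adj):
    """Smallest number of vertex deletions leaving a locally irregular
    induced subgraph (brute force over all deletion sets)."""
    V = set(adj)
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            if is_locally_irregular(adj, V - set(S)):
                return k

# P3 (a-b-c) is already locally irregular; P2 and C4 each need one deletion.
P3 = {0: [1], 1: [0, 2], 2: [1]}
P2 = {0: [1], 1: [0]}
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert I(P3) == 0 and I(P2) == 1 and I(C4) == 1
```

Note how quickly this blows up: the talk's $(2\Delta)^k n^{\mathcal{O}(1)}$ and $\Delta^{2tw} n^{\mathcal{O}(1)}$ algorithms are precisely about avoiding this $2^n$ enumeration.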
|
13:45 - 15:00 |
Lunch break |
15:00 - 16:00 |
Data Driven Algorithm Design: Learning to Search
Christos Tzamos, UW-Madison and NKUA
Abstract:
|
16:00 - 16:30 |
Coffee break |
16:30 - 17:45 |
Constant Inapproximability for PPA
Themistoklis Melissourgos, University of Essex
Abstract:
We show that $\varepsilon$-Consensus Halving is PPA-complete even when the parameter $\varepsilon$ is a constant. Consensus Halving is a fundamental fair division problem which was the first natural problem shown to be complete for
the class PPA, and all subsequent PPA-completeness results for other natural problems have been obtained by reducing from it. In fact, we prove that PPA-completeness holds for any constant $\varepsilon < 1/5$. This immediately implies
constant inapproximability results for all known natural PPA-complete problems, including "Necklace Splitting", "Discrete Ham Sandwich", two variants of the "Pizza Sharing" problem, and finding fair independent sets in cycles and
paths.
Joint work with Argyrios Deligkas, John Fearnley, and Alexandros Hollender.
First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces
Emmanouil Vasileios Vlatakis Gkaragkounis, UC Berkeley
Abstract:
From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast into the min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in
the Euclidean setting, it has proved elusive to translate these results to the Riemannian case. Zhang et al. [2022] have recently shown that geodesic convex concave Riemannian problems always admit saddle-point solutions. Inspired by
this result, we study whether a performance gap between Riemannian and optimal Euclidean-space convex-concave algorithms is necessary. We answer this question in the negative: we prove that the Riemannian corrected extragradient (RCEG)
method achieves last-iterate convergence at a linear rate in the geodesically strongly-convex-concave case, matching the Euclidean result. Our results also extend to the stochastic or non-smooth case, where RCEG and Riemannian gradient
ascent descent (RGDA) achieve near-optimal convergence rates up to factors depending on the curvature of the manifold.
Efficiently Computing Nash Equilibria in Adversarial Team Markov Games
Fivos Kalogiannis, UC Irvine
Abstract:
Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, in light of computational intractability barriers in
general-sum games, provable guarantees have been thus far either limited to fully competitive or cooperative scenarios or impose strong assumptions that are difficult to meet in most applications.
In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically-interested players---without any explicit
coordination or communication---is competing against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step to model more realistic strategic
interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary $\epsilon$-approximate Nash equilibria in adversarial team Markov games with complexity that is
polynomial in all the natural parameters of the problem and $1/\epsilon$.
The proposed algorithm is particularly natural and practical, and it is based on performing independent policy gradient steps for each player in the team, in tandem with best-responses from the side of the adversary; in turn, the
policy for the adversary is then obtained by solving the dual of a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex
constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers. Along the way, we significantly extend an important characterization of optimal policies due to Von Stengel and Koller (GEB `97) in the
context of adversarial (normal-form) team games.
|
17:45 - 18:00 |
Short Break |
18:00 - 19:15 |
Strong Approximations and Irrationality in Financial Networks with Derivatives
Stavros Ioannidis, King's College London
Abstract:
Financial networks model a set of financial institutions interconnected by obligations. Recent work has introduced to this model a class of obligations called credit default swaps, a certain kind of financial derivatives. The main
computational
challenge for such systems is known as the clearing problem, which is to determine which firms are in default and to compute their exposure to systemic risk, technically known as their recovery rates. It is known that the recovery
rates form the
set of fixed points of a simple function, and that these fixed points can be irrational. We study the clearing problem from the point of view of irrationality and approximation strength. Firstly, we observe that weakly approximate
solutions may
contain misleading information regarding the actual financial state of an institution. On this basis, we study the complexity of finding a strongly approximate solution, and show FIXP-completeness. We then study the structural
properties required
for irrationality, and we give necessary conditions for irrational solutions to emerge: The presence of certain types of cycles in a financial network forces the recovery rates to take the form of roots of non-linear polynomials. In
the absence
of a large subclass of such cycles, we study the complexity of finding an exact fixed point, which we show to be a problem close to, albeit outside of, PPAD.
Metric Distortion Bounds under Limited Information
Panagiotis Patsilinakos, NTUA
Abstract:
In this work we study the metric distortion problem in voting theory under a limited amount of ordinal information. Our primary contribution is threefold. First, we consider mechanisms which perform a sequence of pairwise comparisons
between
candidates. We show that a widely-popular deterministic mechanism employed in most knockout phases yields distortion $O(\log m)$ while eliciting only $m-1$ out of $\Theta(m^2)$ possible pairwise comparisons, where m represents the
number of
candidates. Our analysis for this mechanism leverages a powerful technical lemma recently developed by (Kempe, AAAI 2020a). We also provide a matching lower bound on its distortion. In contrast, we prove that any mechanism which
performs fewer
than $m-1$ pairwise comparisons is destined to have unbounded distortion. Moreover, we study the power of deterministic mechanisms under incomplete rankings. Most notably, when every agent provides her $k$-top preferences we show an
upper bound
of $6m/k+1$ on the distortion, for any $k \in \{1, 2, \ldots, m\}$. Thus, we substantially improve over the previous bound of $12m/k$ recently established by (Kempe, AAAI 2020a; Kempe, AAAI 2020b), and we come closer to matching the
best-known lower
bound. Finally, we are concerned with the sample complexity required to ensure near-optimal distortion with high probability. Our main contribution is to show that a random sample of $\Theta(m/\epsilon^2)$ voters suffices to guarantee
distortion
$3+\epsilon$ with high probability, for any sufficiently small $\epsilon>0$. This result is based on analyzing the sensitivity of the deterministic mechanism introduced by (Gkatzelis, Halpern, and Shah, FOCS 2020). Importantly, all
of our
sample-complexity bounds are distribution-independent.
From an experimental standpoint we present several empirical findings on real-life voting applications, comparing the scoring systems employed in practice with a mechanism explicitly minimizing (metric) distortion. Interestingly, for
our case
studies we find that the winner in the actual competition is typically the candidate who minimizes the distortion.
Joint work with Ioannis Anagnostides and Dimitris Fotakis.
Fault-Tolerant Total Domination
Ioannis Vaxevanakis, Department of Informatics and Telecommunications, NKUA
Abstract:
In \textit{total domination}, given a graph $G=(V,E)$, we seek a minimum-size set of nodes $S\subseteq V$ such that every node in $V \setminus S$ has at least one neighbor in $S$ and every node in $S$ also has at least one neighbor
in $S$. We define the \textit{fault-tolerant} version of total domination by strengthening the requirement for nodes in $V \setminus S$: any node in $V \setminus S$ must have at least $m$ neighbors in $S$. Let $\Delta$ denote the
maximum degree in $G$. We prove a first $(1 + \ln(\Delta + m - 1))$-approximation for fault-tolerant total domination.
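Logarithmic ratios of this shape are typical of set-cover-style greedy analyses; as a hedged sketch (our own illustration of a natural greedy heuristic together with a feasibility checker, not necessarily the speaker's algorithm):

```python
def unmet(adj, S, m):
    """Unmet demand per vertex: a vertex outside S needs m neighbors
    in S; a vertex inside S needs at least one neighbor in S."""
    d = {}
    for v, nbrs in adj.items():
        cov = sum(1 for u in nbrs if u in S)
        d[v] = max(0, m - cov) if v not in S else (0 if cov else 1)
    return d

def greedy_ftd(adj, m):
    """Greedily add the vertex outside S whose addition leaves the
    least total unmet demand (ties broken by index), until feasible."""
    S = set()
    while sum(unmet(adj, S, m).values()) > 0:
        v = min((u for u in adj if u not in S),
                key=lambda u: (sum(unmet(adj, S | {u}, m).values()), u))
        S.add(v)
    return S

# C6 with m = 2: every vertex outside S needs both of its neighbors in S.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
S = greedy_ftd(C6, 2)
assert all(d == 0 for d in unmet(C6, S, 2).values())  # valid solution
```

The loop always terminates (taking $S = V$ meets all demands in a graph without isolated vertices), and the checker makes the two defining conditions of the problem explicit.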
|
|
|