23rd
International Symposium on
Fundamentals of Computation Theory
September 12-15, 2021, Athens, Greece (virtually)
Title: Characterizations and Approximability of Hard Counting Classes below #P
Abstract: An important objective of research in counting complexity is to understand which counting problems are approximable. In this quest, the complexity class TotP, a hard subclass of #P, is of key importance, as it contains self-reducible counting problems with easy decision version, which are thus natural candidates for approximability. Indeed, most problems known so far to admit an fpras (fully polynomial randomized approximation scheme) fall into this class. An open question raised recently by the descriptive complexity community is to find a logical characterization of TotP and of robust subclasses of TotP. In this work we define two subclasses of TotP in terms of descriptive complexity, both of which are robust in the sense that they have natural complete problems, defined in terms of satisfiability of Boolean formulae. We then explore the relationship between the class of approximable counting problems and TotP. We prove that TotP is not a subclass of FPRAS if and only if NP$\neq$RP, and that FPRAS is not a subclass of TotP unless RP$=$P. To this end we introduce two ancillary classes, both of which can be seen as counting versions of RP. We further show that FPRAS lies between one of these classes and a counting version of BPP. Finally, we give a complete picture of the inclusions among all the classes defined or discussed in this paper, under different conjectures about the relationships among NP, RP, and P.
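As a hedged illustration of the defining property of TotP (self-reducible counting problems whose decision version is easy), the Python sketch below counts the independent sets of a graph by self-reduction; the decision version is trivial, since the empty set is always independent. The function name and graph representation are illustrative and not taken from the paper.

```python
def count_independent_sets(adj, vertices):
    """Count independent sets of the graph induced on `vertices` by
    self-reduction: branch on one vertex, either excluding it or including
    it and discarding its neighbours. The decision version is trivial
    (the empty set is always independent), the flavour of problem TotP captures.

    adj: dict mapping each vertex to the set of its neighbours.
    vertices: frozenset of vertices still under consideration.
    """
    if not vertices:
        return 1  # only the empty independent set remains
    v = next(iter(vertices))
    rest = vertices - {v}
    # Either v is excluded, or v is included and its neighbours are excluded.
    return (count_independent_sets(adj, rest)
            + count_independent_sets(adj, rest - adj[v]))


# Tiny usage example: a path on three vertices has 5 independent sets.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
print(count_independent_sets(adj, frozenset(adj)))  # -> 5
```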
Title: Strategyproof Facility Location in Perturbation Stable Instances
Abstract: We consider $k$-Facility Location games, where $n$ strategic agents report their locations on the real line, and a mechanism maps them to $k\ge 2$ facilities. Each agent seeks to minimize her distance to the nearest facility. We are interested in (deterministic or randomized) strategyproof mechanisms without payments that achieve a reasonable approximation ratio with respect to the optimal social cost of the agents. To circumvent the inapproximability of $k$-Facility Location by deterministic strategyproof mechanisms, we restrict our attention to perturbation stable instances. An instance of $k$-Facility Location on the line is $\gamma$-perturbation stable (or simply, $\gamma$-stable), for some $\gamma\ge 1$, if the optimal agent clustering is not affected by moving any subset of consecutive agent locations closer to each other by a factor of at most $\gamma$. We show that the optimal solution is strategyproof in $(2+\sqrt{3})$-stable instances whose optimal solution does not include any singleton clusters, and that allocating the facility to the agent next to the rightmost one in each optimal cluster (or to the unique agent, for singleton clusters) is strategyproof and $(n-2)/2$-approximate for $5$-stable instances (even if their optimal solution includes singleton clusters). On the negative side, we show that for any $k\ge 3$ and any $\delta > 0$, there is no deterministic anonymous mechanism that achieves a bounded approximation ratio and is strategyproof in $(\sqrt{2}-\delta)$-stable instances. We also prove that allocating the facility to a random agent of each optimal cluster is strategyproof and $2$-approximate in $5$-stable instances. To the best of our knowledge, this is the first time that the existence of deterministic (resp. randomized) strategyproof mechanisms with a bounded (resp. constant) approximation ratio has been shown for a large and natural class of $k$-Facility Location instances.
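A minimal Python sketch of the deterministic mechanism described in the abstract: given the optimal clustering of a stable instance (assumed to be computed elsewhere), it places one facility per cluster at the agent next to the rightmost one, or at the unique agent of a singleton cluster. The helper names are illustrative.

```python
def second_rightmost_mechanism(clusters):
    """Place one facility per optimal cluster at the location of the agent
    next to the rightmost one, or at the unique agent for singleton clusters.

    clusters: list of lists of agent locations on the real line,
              one inner list per cluster of the optimal solution.
    """
    facilities = []
    for cluster in clusters:
        pts = sorted(cluster)
        facilities.append(pts[-2] if len(pts) >= 2 else pts[0])
    return facilities


def social_cost(locations, facilities):
    """Sum over all agents of the distance to their nearest facility."""
    return sum(min(abs(x - f) for f in facilities) for x in locations)


# Usage with two hypothetical optimal clusters on the line.
clusters = [[0.0, 1.0, 1.5], [10.0]]
facs = second_rightmost_mechanism(clusters)                  # -> [1.0, 10.0]
cost = social_cost([x for c in clusters for x in c], facs)   # -> 1.5
```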
Title: Efficient Algorithms for Learning from Coarse Labels
Abstract: For many learning problems one may not have access to fine-grained label information; e.g., an image can be labeled as husky, dog, or even animal depending on the expertise of the annotator. In this work, we formalize these settings and study the problem of learning from such coarse data. Instead of observing the actual labels from a set $\mathcal{Z}$, we observe coarse labels corresponding to a partition of $\mathcal{Z}$ (or a mixture of partitions). Our main algorithmic result is that essentially any problem learnable from fine-grained labels can also be learned efficiently when the coarse data are sufficiently informative. We obtain our result through a generic reduction for answering Statistical Queries (SQ) over fine-grained labels given only coarse labels. The number of coarse labels required depends polynomially on the information distortion due to coarsening and the number of fine labels $|\mathcal{Z}|$. We also investigate the case of (infinitely many) real-valued labels, focusing on a central problem in censored and truncated statistics: Gaussian mean estimation from coarse data. We provide an efficient algorithm when the sets in the partition are convex, and establish that the problem is NP-hard even for very simple non-convex sets. In this talk, I will focus on the unsupervised learning question of Gaussian mean estimation from coarse data.
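As a hedged one-dimensional illustration of the coarse-data setting (not the paper's algorithm), the sketch below draws samples from a Gaussian, reveals only which interval of a fixed convex partition each sample falls in, and recovers the mean by a naive grid search over the coarse-data log-likelihood. The function names and the grid-search estimator are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

def coarse_samples(mu, sigma, cuts, n, rng):
    """Draw n samples from N(mu, sigma^2), but reveal only the index of the
    interval of the partition induced by `cuts` that contains each sample."""
    x = rng.normal(mu, sigma, size=n)
    return np.searchsorted(cuts, x)  # coarse label (cell index) per sample

def coarse_loglik(mu, sigma, cuts, cells):
    """Log-likelihood of the observed cell indices if the mean were mu."""
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    cell_probs = norm.cdf(edges[1:], mu, sigma) - norm.cdf(edges[:-1], mu, sigma)
    return np.log(cell_probs[cells] + 1e-300).sum()

# Naive grid-search estimate of the mean from coarse observations only.
rng = np.random.default_rng(0)
cuts = np.array([-1.0, 0.0, 2.0])        # interval (convex) partition of the line
cells = coarse_samples(1.3, 1.0, cuts, 5000, rng)
grid = np.linspace(-3.0, 3.0, 601)
mu_hat = grid[np.argmax([coarse_loglik(m, 1.0, cuts, cells) for m in grid])]
```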
Title: Learning Ising Models from One or Multiple Samples
Abstract: There have been two separate lines of work on estimating Ising models: (1) estimating them from multiple independent samples under minimal assumptions about the model's interaction matrix; and (2) estimating them from one sample in restrictive settings. We propose a unified framework that smoothly interpolates between these two settings, enabling significantly richer estimation guarantees from one, a few, or many samples. Our main theorem provides guarantees for one-sample estimation, quantifying the estimation error in terms of the metric entropy of a family of interaction matrices. As corollaries of our main theorem, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold. In fact, our main result handles multiple independent samples by viewing them as one sample from a larger model, and can be used to derive estimation bounds that are qualitatively similar to those obtained in the multiple-sample literature described above. Our technical approach benefits from sparsifying a model's interaction network by conditioning on subsets of variables that make the dependencies in the resulting conditional distribution sufficiently weak. We use this sparsification technique to prove strong concentration and anti-concentration results for the Ising model, which we believe have applications beyond the scope of this paper.
Joint work with Yuval Dagan, Constantinos Daskalakis, and Nishanth Dikkala.
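The observation that multiple independent samples can be viewed as one sample from a larger model admits a concrete reading: $k$ independent samples of an $n$-variable Ising model with interaction matrix $J$ form a single sample of a $kn$-variable model whose interaction matrix is block-diagonal with $k$ copies of $J$. The Python sketch below carries out only this bookkeeping step, with illustrative names; it does not implement the estimator from the paper.

```python
import numpy as np

def stack_as_one_sample(samples, J):
    """View k independent samples of an n-variable Ising model with
    interaction matrix J (shape n x n) as a single sample of a (k*n)-variable
    model whose interaction matrix is block-diagonal with k copies of J.

    samples: list of k arrays of +/-1 spins, each of shape (n,).
    Returns the concatenated spin vector and the block-diagonal matrix.
    """
    k = len(samples)
    big_sample = np.concatenate(samples)   # shape (k*n,)
    big_J = np.kron(np.eye(k), J)          # k diagonal copies of J, zeros elsewhere
    return big_sample, big_J


# Usage: three independent samples of a 2-variable model.
J = np.array([[0.0, 0.5], [0.5, 0.0]])
samples = [np.array([1, -1]), np.array([1, 1]), np.array([-1, -1])]
spins, big_J = stack_as_one_sample(samples, J)  # spins: shape (6,), big_J: 6x6
```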