ACAC '25

20th Athens Colloquium on Algorithms and Complexity

August 25-26, 2025

National Technical University of Athens


Scope

ACAC is an annual meeting in Athens that aims to bring together researchers working in all areas of the theory of algorithms and computational complexity. It serves as a lively forum for presenting research results that are in a preliminary stage or have recently been accepted or presented at a major conference. Contributions may appear, fully or partially, in informal electronic proceedings available only to the participants (subject to the authors' approval). The language of the workshop is English.


Registration

There are no registration fees. However, participants should register for administrative purposes, by filling in the registration form no later than August 20.


Contributions

Participants interested in giving a presentation should register, providing a tentative title and a short abstract, by using the registration form no later than August 17. The organizers will make every possible effort, subject to scheduling constraints, to allow all interested participants to present their work at the workshop.


Topics of interest

Include, but are not limited to:

  •   Analysis of Algorithms
  •   Randomized and Approximation Algorithms
  •   Computational Complexity
  •   Data Structures
  •   Cryptography
  •   Graph Theory
  •   Algorithmic Game Theory
  •   Computational Geometry
  •   Combinatorial Optimization
  •   Algorithmic Algebra and Coding Theory
  •   Theoretical Aspects of Databases
  •   Computational Biology
  •   Quantum Computing
  •   Parallel and Distributed Computing
  •   Computational Learning Theory
  •   Applications of Logic


News

  • Important Dates

    Registration Deadline: August 20
    Submission Deadline: August 17

    (extended)




Keynote Talks

  • Christina Fragouli, UCLA
  • Aggelos Kiayias, University of Edinburgh and IOG
  • Christos Papadimitriou, Columbia University and Archimedes AI
  • Vijay Vazirani, UC Irvine



Contributed Talks

  • Georgios Boulougouris
  • Loukas Georgiadis
  • Spyros Dragazis
  • Stelios Drakontaeidis
  • Sotiris Kanellopoulos
  • Maria Kokkou
  • Nikos Lazaropoulos
  • Alviona Mancho
  • Nicolaos Matsakis
  • Panayotis Mertikopoulos
  • Nicos Protopapas
  • Ioannis Psarros
  • Alkmini Sgouritsa
  • Stratis Skoulakis
  • Panagiotis Tsamopoulos
  • Giorgos Tsimos
  • Christos Tsoufis
  • Manolis Vasilakis



Organizing Committee

  • Dimitris Fotakis
  • Nikos Leonardos
  • Evangelos Markakis
  • Aris Pagourtzis
  • Alkmini Sgouritsa
  • Stathis Zachos
  • Vassilis Zissimopoulos



Administration

  • Dimitris Lazarou
  • Giannis Pavlovic
  • Ioanna Protekdikou



Local Arrangements

  • Antonis Antonopoulos
  • Pourandokht Behrouz
  • Sotiris Kanellopoulos
  • Christos Pergaminelis
  • Thanos Tolias





Contact

For any further details please contact the organizers by email at acac25@corelab.ntua.gr.



Program

Monday, August 25

09:00 - 09:45 Coffee and Registration
09:45 - 10:00 Opening remarks
10:00 - 11:00 Computing with dynamical systems: games and language
Christos Papadimitriou, Columbia University and Archimedes AI
Short bio: Christos Papadimitriou is the Donovan Family Professor of Computer Science at Columbia Engineering, and has held faculty positions at UC Berkeley, Harvard, MIT, Stanford, UC San Diego, and the National Technical University of Athens (NTUA), Greece. He has authored influential textbooks, including Computational Complexity, Combinatorial Optimization, Algorithms, and Elements of the Theory of Computation, as well as several novels, including the graphic novel Logicomix, which explores the life and work of the philosopher and mathematician Bertrand Russell. His research is diverse and includes Computational Biology, Artificial Intelligence, Machine Learning, Computational Neuroscience, Economics and Game Theory, Generative AI and Large Language Models, Natural Language Processing, Theoretical Computer Science, Algorithms, and Complexity. He is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the Academy of Athens, and a fellow of the American Academy of Arts & Sciences. He has received numerous awards for his scientific contributions, including the 2002 Knuth Prize, the 2012 Gödel Prize, the 2015 EATCS Award, the 2016 IEEE John von Neumann Medal, the 2018 Technion Harvey Prize, and the 2024 INFORMS John von Neumann Theory Prize.
11:00 - 11:30 Coffee Break
11:30 - 12:45
Session 1:
Computing k-edge-connected components in directed graphs
Loukas Georgiadis, UOI
Abstract:
The computation of edge-connected components in directed and undirected graphs is a well-studied problem that is motivated by several applications, such as network reliability and analysis. In this talk, we survey recent results for the computation of the k-edge-connected components in directed graphs, both in the static and in the dynamic case. We will also discuss some open problems in this area.
Faster Approximation Algorithms for k-Center via Data Reduction
Ioannis Psarros, Archimedes, Athena RC
Abstract:
We study efficient algorithms for the Euclidean k-Center problem, focusing on the regime of large k. We take the approach of data reduction via α-coresets: an α-coreset is a small subset S of the dataset P such that any β-approximation on S is an (α + β)-approximation on P. We give efficient algorithms to construct coresets whose size is k · o(n), which immediately speeds up existing approximation algorithms. Notably, we obtain a near-linear time O(1)-approximation when k = n^c for any 0 < c < 1. We validate the performance of our coresets on real-world datasets with large k, and observe that the coreset speeds up the well-known Gonzalez algorithm by up to 4 times, while still achieving similar clustering cost. Technically, one of our coreset results is based on a new efficient construction of consistent hashing with competitive parameters. This general tool may be of independent interest for algorithm design in high-dimensional Euclidean spaces.
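The Gonzalez algorithm mentioned in the abstract is the classic greedy farthest-point-first heuristic for k-Center. The following is a minimal sketch of that baseline only (not of the paper's coreset construction), with illustrative function names:

```python
import math

def gonzalez(points, k):
    """Greedy farthest-point heuristic for Euclidean k-Center
    (Gonzalez, 1985): a classic 2-approximation."""
    centers = [points[0]]
    # dist[i] = distance from points[i] to its nearest chosen center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        # next center: the point currently farthest from all centers
        far = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[far])
        dist = [min(d, math.dist(p, points[far])) for p, d in zip(points, dist)]
    return centers

def kcenter_cost(points, centers):
    # maximum distance from any point to its nearest center
    return max(min(math.dist(p, c) for c in centers) for p in points)
```

A coreset in the sense above would simply be fed to such an algorithm in place of the full dataset P.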
Streaming Diameter of High-Dimensional Points
Nicolas (Nicolaos) Matsakis, Reykjavik University
Abstract:
We improve the space bound for streaming approximation of the Diameter, as well as of Farthest Neighbor queries, Minimum Enclosing Ball and its Coreset, in high-dimensional Euclidean spaces. In particular, our deterministic streaming algorithms store O(ε^{-2} log(1/ε)) points. This improves by a factor of ε^{-1} the previous space bound of Agarwal and Sharathkumar (SODA 2010), while retaining the state-of-the-art approximation guarantees, such as √2 + ε for Diameter or Farthest Neighbor queries, and also offering a simpler and more complete argument. Moreover, we show that storing Ω(ε^{-1}) points is necessary for a streaming (√2 + ε)-approximation of Farthest Pair and Farthest Neighbor queries. (Joint work with Magnus Halldorsson and Pavel Vesely. To appear in ESA 2025)
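For context only (this is the folklore baseline, not the algorithm of the talk): fixing the first streamed point p and tracking the maximum distance M from p already gives M ≤ diameter ≤ 2M while storing a single point; results like the above sharpen the factor towards √2 + ε at the cost of storing more points. A sketch of that baseline:

```python
import math

def stream_diameter_lower_bound(stream):
    """Folklore one-point streaming baseline for Diameter: fix the
    first point p and track M = max distance from p to any later
    point. Then M <= diameter <= 2M, i.e. a 2-approximation that
    stores only O(1) points."""
    it = iter(stream)
    p = next(it)
    m = 0.0
    for q in it:
        m = max(m, math.dist(p, q))
    return m  # the true diameter lies in [m, 2m]
```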
12:45 - 14:00 Lunch Break
14:00 - 15:00 Compression for learning
Christina Fragouli, UCLA
Abstract:
Compression is a classical, very well researched topic, dating back to Shannon's famous source coding theorem and before. In this talk, I will argue that there still exist interesting and novel research directions in exploring compression, and sometimes these explorations may lead to results of independent interest, especially as we consider applications in distributed learning or, more generally, learning under resource constraints. In particular, I will present two results on compression for learning, one related to Multi-Armed Bandits (MABs), and the other to Large Language Models (LLMs). The first result establishes a reduction that converts every stochastic contextual linear bandit instance to a linear bandit instance, implying that in distributed settings we do not need to convey the context at all (0 bits); while the second result presents a novel framework that leverages outlier statistics to design an efficient index coding scheme for outlier-aware weight-only quantization in LLMs that outperforms the state-of-the-art solutions at the very low bits regime (<4 bits per weight).

Short bio: Christina Fragouli is a Professor in the Electrical and Computer Engineering Department at UCLA. She received the B.S. degree in Electrical Engineering from the National Technical University of Athens, Greece, and the M.Sc. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles. She has worked at the Information Sciences Center, AT&T Labs, Florham Park, New Jersey, and has also visited Bell Laboratories, Murray Hill, NJ, and DIMACS, Rutgers University. Between 2006 and 2015 she was an Assistant and Associate Professor in the School of Computer and Communication Sciences, EPFL, Switzerland. She is an IEEE Fellow, served as the 2022 President of the IEEE Information Theory Society, and has served on several IEEE-wide and Information Theory Society committees as member or Chair. She has also served as TPC Chair of several conferences, including the IEEE International Symposium on Information Theory in 2024, as an Information Theory Society Distinguished Lecturer, and as an Associate Editor for IEEE Communications Letters, the Elsevier Journal on Computer Communication, IEEE Transactions on Communications, IEEE Transactions on Information Theory, and IEEE Transactions on Mobile Communications. She has received several awards, including the Okawa Foundation Award, the European Research Council (ERC) Starting Investigator Grant, and the Zonta Prize. Her research interests are in the intersection of coding techniques, machine learning and information theory, with a wide range of applications that include network information flow, network security and privacy, compression, wireless networks and bioinformatics.
15:00 - 15:30 Coffee Break
15:30 - 16:45
Session 2:
Online EFX Allocations with Predictions
Nicos Protopapas, Archimedes, Athena RC
Abstract:
We study an online fair division problem where a fixed number of goods arrive sequentially and must be allocated to a given set of agents. Once a good arrives, its true value for each agent is revealed, and it has to be immediately and irrevocably allocated to some agent. The ultimate goal is to ensure envy-freeness up to any good (EFX) after all goods have been allocated. Unfortunately, as we show, approximate EFX allocations are unattainable in general, even under restrictive assumptions on the valuation functions. To address this, we follow a recent and fruitful trend of augmenting algorithms with predictions. Specifically, we assume access to a prediction vector estimating the agents' true valuations -- e.g., generated by a machine learning model trained on past data. Predictions may be unreliable, and we measure their error using the total variation distance from the true valuations, that is, the percentage of predicted value-mass that disagrees with the true values. Focusing on the natural class of additive valuations, we prove impossibility results even on approximate EFX allocations for algorithms that either ignore predictions or rely solely on them. We then turn to algorithms that use both the predictions and the true values and show strong lower bounds on the prediction accuracy that is required by any algorithm to compute an approximate EFX. These negative results persist even under identical valuations, contrary to the offline setting where exact EFX allocations always exist without the necessity of predictions. We then present an algorithm for two agents with identical valuations that effectively uses both the predictions and the true values. The algorithm approximates EFX, with its guarantees improving as the accuracy of the predictions increases.
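As a point of reference, one common formalization of the (offline) EFX condition for additive valuations is easy to state in code. A small illustrative checker, not taken from the talk:

```python
def is_efx(allocation, valuations):
    """Check envy-freeness up to any good (EFX) under additive
    valuations.
    allocation: list of bundles (lists of item indices), one per agent.
    valuations: valuations[i][g] = value of item g to agent i.
    Agent i must satisfy v_i(A_i) >= v_i(A_j \ {g}) for every other
    agent j and every good g in A_j."""
    n = len(allocation)
    for i in range(n):
        v = valuations[i]
        own = sum(v[g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            other = sum(v[g] for g in allocation[j])
            # removing even the least-valued good must eliminate envy
            if own < other - min(v[g] for g in allocation[j]):
                return False
    return True
```

In the online setting of the talk, such a check can only be run after all goods have arrived, which is exactly what makes irrevocable allocation decisions hard.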
No-Regret Safety: Balancing Tests and Misclassification in Logistic Bandits
Spyros Dragazis, Boston University
Abstract:
We study the problem of sequentially testing individuals for a binary disease outcome whose true risk is governed by an unknown logistic model. At each round, a patient arrives with feature vector $x_t$, and the decision maker may either pay to administer a (noiseless) diagnostic test—revealing the true label—or skip testing and predict the patient's disease status based on their feature vector and prior history. Our goal is to minimize the total number of costly tests while guaranteeing that the fraction of misclassifications does not exceed a prespecified error tolerance $\alpha$, with probability at least $1-\delta$. To address this, we develop a novel algorithm that (i) maintains a confidence ellipsoid for the unknown logistic parameter $\theta^\star$, (ii) interleaves label‐collection and distribution‐estimation to estimate both $\theta^\star$ and the context distribution, and (iii) computes a conservative, data‐driven threshold $\tau_t$ on the logistic score $|x_t^\top\theta|$ over $\theta$ in the confidence set to decide when testing is necessary. We prove that, with probability at least $1-\delta$, our procedure does not exceed the target misclassification rate, and requires only $\widetilde O(\sqrt{T})$ excess tests compared to the oracle baseline that knows both $\theta^\star$ and the patient feature distribution. This establishes the first no‐regret guarantees for error‐constrained logistic testing, with direct applications to cost‐sensitive medical screening. Simulations corroborate our theoretical guarantees, showing that in practice our procedure efficiently estimates $\theta^\star$ while retaining safety guarantees, and does not require too many excess tests.
Finite Pinwheel Scheduling: the k-Visits Problem
Sotiris Kanellopoulos, NTUA and Archimedes, Athena RC
Abstract:
Pinwheel Scheduling is a fundamental scheduling problem, in which each task i is associated with a positive integer di, and the objective is to schedule one task per time slot, ensuring each task perpetually appears at least once in every di time slots. Although conjectured to be PSPACE-complete, it remains open whether Pinwheel Scheduling is NP-hard (unless a compact input encoding is used) or even contained in NP. We introduce k-Visits, a finite version of Pinwheel Scheduling, where given n deadlines, the goal is to schedule each task exactly k times. While we observe that the 1-Visit problem is trivial, we prove that 2-Visits is strongly NP-complete through a reduction from Numerical 3-Dimensional Matching (N3DM). As intermediate steps in the reduction, we define NP-complete variants of N3DM which may be of independent interest. We further extend our strong NP-hardness result to a generalization of k-Visits (k>=2) in which the deadline of each task may vary throughout the schedule, as well as to a similar generalization of Pinwheel Scheduling, thus making progress towards settling the complexity of Pinwheel Scheduling. Additionally, we prove that 2-Visits can be solved in linear time if all deadlines are distinct, rendering it one of the rare natural problems which exhibit the interesting dichotomy of being in P if their input is a set and NP-complete if the input is a multiset. We achieve this through a Turing reduction from 2-Visits to a variation of N3DM, which we call Position Matching. Based on this reduction, we also show an FPT algorithm for 2-Visits parameterized by a value related to how close the input deadlines are to each other, as well as a linear-time algorithm for instances with up to two distinct deadlines.
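To make the perpetual Pinwheel condition described above concrete, here is a small illustrative sketch (not from the paper) that checks whether one period of a cyclic schedule satisfies every deadline d_i:

```python
def is_valid_pinwheel(schedule, deadlines):
    """Check the Pinwheel condition on a cyclic schedule given as one
    period, one task per slot: for each task i with deadline d_i,
    every window of d_i consecutive slots must contain task i."""
    n = len(schedule)
    for task, d in deadlines.items():
        for start in range(n):
            # cyclic window of length d starting at 'start'
            window = {schedule[(start + j) % n] for j in range(d)}
            if task not in window:
                return False
    return True
```

For example, deadlines {A: 2, B: 4, C: 4} are satisfied by the period A, B, A, C, whereas three tasks all with deadline 2 admit no valid schedule.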
16:45 - 17:00 Short Break
17:00 - 18:15
Session 3:
Deterministic Self-Stabilising Leader Election for Programmable Matter with Constant Memory
Maria Kokkou, Aix-Marseille University
Abstract:
The problem of electing a unique leader is central to all distributed systems, including programmable matter systems where particles have constant size memory. In this paper, we present a silent self-stabilising, deterministic, stationary, election algorithm for particles having constant memory, assuming that the system is simply connected. Our algorithm is elegant and simple, and requires constant memory per particle. We prove that our algorithm always stabilises to a configuration with a unique leader, under a daemon satisfying some fairness guarantees (Gouda fairness [Gouda 2001]). We use the special geometric properties of programmable matter in 2D triangular grids to obtain the first self-stabilising algorithm for such systems. This result is surprising since it is known that silent self-stabilising algorithms for election in general distributed networks require Ω(log n) bits of memory per node, even for ring topologies [Dolev et al. 1999].
Parameterized Maximum Node-Disjoint Paths
Manolis Vasilakis, Université Paris-Dauphine
Abstract:
We revisit the Maximum Node-Disjoint Paths problem, the natural optimization version of the famous Node-Disjoint Paths problem, where we are given an undirected graph G, k (demand) pairs of vertices (s_i, t_i), and an integer l, and are asked whether there exist at least l vertex-disjoint paths in G whose endpoints are among the given pairs. This problem has been intensely studied from both the approximation and parameterized complexity point of view and is notably known to be intractable by standard structural parameters, such as tree-depth, as well as the combined parameter l plus pathwidth. We present several results improving and clarifying this state of the art, with an emphasis towards FPT approximation. Our main positive contribution is to show that the problem's intractability can be overcome using approximation: We show that for several of the structural parameters for which the problem is hard, most notably tree-depth, the problem admits an efficient FPT approximation scheme, returning a (1-ε)-approximate solution in time f(td,ε)n^{O(1)}. We manage to obtain these results by comprehensively mapping out the structural parameters for which the problem is FPT if l is also a parameter, hence showing that understanding l as a parameter is key to the problem's approximability. This, in turn, is a problem we are able to solve via a surprisingly simple color-coding algorithm, which relies on identifying an insightful problem-specific variant of the natural parameter, namely the number of vertices used in the solution. The results above are quite encouraging, as they indicate that in some situations where the problem does not admit an FPT algorithm, it is still solvable almost to optimality in FPT time. A natural question is whether the FPT approximation algorithm we devised for tree-depth can be extended to pathwidth. We resolve this negatively, showing that under the Parameterized Inapproximability Hypothesis no FPT approximation scheme for this parameter is possible, even in time f(pw,ε)n^{g(ε)}. We thus precisely determine the parameter border where the problem transitions from "hard but approximable" to "inapproximable". Lastly, we strengthen existing lower bounds by replacing W[1]-hardness by XNLP-completeness for parameter pathwidth, and improving the n^{o(sqrt{td})} ETH-based lower bound for tree-depth to (the optimal) n^{o(td)}.
The computational complexity of computing refunds (alternatively, From Daedalus to Icarus)
Stelios Drakontaeidis, UW Madison and Archimedes, Athena RC
Abstract:
We study a mechanism design setting where a seller offers an item to a buyer who is uncertain about her own valuation. Rather than using a simple take-it-or-leave-it price, the seller can extract significantly more revenue by offering refund options at different price levels. This setup, known as Sequential Screening, models a class of dynamic decision-making problems under uncertainty with applications ranging from airline ticket pricing to online education and healthcare diagnostics, where agents pay for flexible options under ambiguity. We investigate the power of deterministic mechanisms in this context and view them as a form of semi-adaptive preference elicitation, where the seller leverages knowledge of the buyer's value distribution to design refund-based menus that screen types indirectly. We show that the revenue gap between the optimal deterministic mechanism and simpler menus with bounded (possibly randomized) options can be arbitrarily large. We further establish that computing the revenue-optimal mechanism is NP-hard, and complement this result with an efficient algorithm that computes approximately optimal refund schedules.

Tuesday, August 26

09:45 - 10:00 Coffee and Registration
10:00 - 11:00 Cardinal-Utility Matching Markets: Pricing is Intractable, but Nash Bargaining Works!
Vijay Vazirani, UC Irvine
Abstract:
For a mechanism to be truly impactful, it must combine strong game-theoretic properties with computational efficiency -- a classic example is the Gale–Shapley (1962) stable matching algorithm. This talk focuses on cardinal-utility matching markets, for which the most well known mechanism is the pricing-based approach of Hylland and Zeckhauser (1979). While this mechanism satisfies several desirable game-theoretic properties, recent work has shown it to be computationally intractable, both in theory and in practice.
This talk will review a series of papers that:
a) establish this intractability;
b) propose an alternative mechanism based on Nash bargaining;
c) demonstrate that this new mechanism achieves game-theoretic and computational guarantees; and
d) provide evidence that significantly better alternatives are unlikely.
This talk is self-contained and is based on these (1, 2, 3, 4, 5) and related papers.

Short bio: Vijay Vazirani is ICS Alumni Professor in the School of Information and Computer Sciences and Distinguished Professor in the Computer Science Department at the University of California, Irvine. A description of his research appears in the citation of his 2022 INFORMS John von Neumann Theory Prize. His most recent book, Online and Matching-Based Market Design (co-edited), appeared in July 2023.
11:00 - 11:30 Coffee Break
11:30 - 12:45
Session 4:
Mechanism Design Augmented with Incomplete Prediction
Alkmini Sgouritsa, AUEB and Archimedes, Athena RC
Abstract:
Our work revisits the design of mechanisms via the learning-augmented framework. In this model, the algorithm is enhanced with imperfect (machine-learned) information usually referred to as prediction. The goal is to use the prediction to design mechanisms with low approximation guarantees whenever the prediction is accurate (consistency), but at the same time to provide worst-case guarantees whenever the prediction is arbitrarily off (robustness). In algorithm design with predictions, various types of prediction models are considered, each providing different levels of information. Input predictions are generally more informative than output predictions/recommendations, as the latter can typically be derived from the former, but not vice versa. In this talk, we first discuss the consistency-robustness tradeoff for truthful mechanisms for the scheduling problem on unrelated machines under those two extreme types of predictions. We then explore a hybrid advice model combining a recommendation about the output allocation with a partial prediction of the input types. Investigating intermediate forms of prediction offers insight into the amount of information required to achieve performance guarantees comparable to those obtained with full input predictions.

Short bio: Alkmini Sgouritsa is an Assistant Professor in the Department of Informatics at Athens University of Economics and Business, and a Lead Researcher at Archimedes Unit / Athena Research Center. She was previously an Assistant Professor in the Department of Computer Science at the University of Liverpool and a visiting researcher in the Geo Algorithm Group at Google Research, CA, USA. Prior to that, she held a Postdoctoral Research position at the Max Planck Institute for Informatics, Saarbrücken, Germany, where she was awarded the Lise Meitner Postdoctoral Fellowship Award. She completed her PhD at the Department of Computer Science, University of Liverpool, where her PhD thesis was second runner-up for the British Computer Society (BCS) Distinguished Thesis Award (2018).
Online Budget-Feasible Mechanism Design with Predictions
Panagiotis Tsamopoulos, AUEB
Abstract:
Augmenting the input of algorithms with predictions is an algorithm design paradigm that suggests leveraging a (possibly erroneous) prediction to improve worst-case performance guarantees when the prediction is perfect (consistency), while also providing a performance guarantee when the prediction fails (robustness). Recently, Xu and Lu [2022] and Agrawal et al. [2024] proposed to consider settings with strategic agents under this framework. In this work, we initiate the study of budget-feasible mechanism design with predictions. These mechanisms model a procurement auction scenario in which an auctioneer (buyer) with a strict budget constraint seeks to purchase goods or services from a set of strategic agents, so as to maximize her own valuation function. We focus on the online version of the problem where the arrival order of agents is random. We design mechanisms that are truthful, budget-feasible, and achieve a significantly improved competitive ratio for both monotone and non-monotone submodular valuation functions compared to their state-of-the-art counterparts without predictions. Our results assume access to a prediction for the value of the optimal solution to the offline problem. We complement our positive results by showing that for the offline version of the problem, access to predictions is mostly ineffective in improving approximation guarantees.
Fairness under Equal-Sized Bundles: Impossibility Results and Approximation Guarantees
Alviona Mancho, AUEB
Abstract:
We study the fair allocation of indivisible goods under cardinality constraints, where each agent must receive a bundle of fixed size. This models practical scenarios, such as assigning shifts or forming equally sized teams. Recently, variants of envy-freeness up to one/any item (EF1, EFX) were introduced for this setting, based on flips or exchanges of items. Namely, one can define envy-freeness up to one/any flip (EFF1, EFFX), meaning that an agent does not envy another agent after performing one (or any) one-item flip between their bundles that improves the value of the envious agent. We explore algorithmic aspects of this notion, and our contribution is twofold: we present both algorithmic and impossibility results, highlighting a stark contrast between the classic EFX concept and its flip-based analogue. First, we explore standard techniques used in the literature and show that they fail to guarantee EFFX approximations. On the positive side, we show that we can achieve a constant-factor approximation guarantee when agents share a common ranking over item values, based on the well-known envy cycle elimination technique. This idea also leads to a generalized algorithm with approximation guarantees when agents agree on the top items and their valuation functions are bounded. Finally, we show that an algorithm that maximizes the Nash welfare guarantees a 1/2-EFF1 allocation, and that this bound is tight.
12:45 - 14:00 Lunch break
14:00 - 15:00 Blockchain meets computational complexity and economics - recent results and new directions
Aggelos Kiayias, University of Edinburgh and IOG
Abstract:
Blockchain systems have shed new light on previously unexplored areas of theoretical computer science. A common theme in these new questions is the aspiration of these systems to coordinate multiple agents toward the same goal with minimal assumptions regarding their initial alignment or entry requirements, a feature sometimes referred to as being permissionless. In this talk we discuss two important theoretical angles that are crucial to the understanding of the blockchain space. The need for Sybil resilience calls for advances in fine-grained complexity and the theory of moderately hard functions. One central question is whether it is possible to achieve permissionless consensus in the standard model under falsifiable assumptions. Our second theme involves problems in economics around monetary policy. A central question in this setting asks whether the digital assets minted within blockchain systems can achieve some form of value stability, especially in the presence of external shocks. These questions point towards the direction of designing algorithmic monetary policies at the intersection of economic theory and mechanism design. In this talk we will review recent results and also discuss directions and open questions along both of these themes.
15:00 - 15:30 Coffee break
15:30 - 16:45
Session 5:
Are zero-sum games good models of adversarial learning?
Panayotis Mertikopoulos, CNRS / NKUA and Archimedes, Athena RC
Abstract:
Zero-sum games are often thought of as the strategic counterpart to potential / common interest games, but this view is overly simplistic—there are simple examples of min-max games that are potential, so this analogy is hardly appropriate.
In this talk, we will discuss the class of harmonic games, originally due to Candogan et al. (2011), which can be shown to be the orthogonal complement of potential games, and thus provide a more principled counterpart thereof. The main focus of the talk will be the long-run behavior of regularized learning in harmonic games, in both continuous and discrete time. In a continuous-time setting, we show that regularized learning leads to Poincaré recurrence, i.e., the dynamics return arbitrarily close to their starting point infinitely often, and fail to converge. In discrete time, the standard, "vanilla" implementation of the method may lead to even worse outcomes, eventually trapping the players in a perpetual cycle of best-responses. However, by adding a suitable extrapolation step (which includes as special cases the optimistic and mirror-prox variants of the algorithm) we show that regularized learning converges to a Nash equilibrium from any initial condition, and all players are guaranteed at most O(1) regret. In this regard, harmonic games comprise the canonical complement of potential games not only from a strategic, but also from a dynamical viewpoint.
Revenue Efficiency of First Price Auctions over Approximate Correlated Equilibrium
Stratis Skoulakis, Aarhus University
Abstract:
We study the revenue of approximate correlated equilibrium in discrete first price auctions - the set of allowable bids is $\mathcal{B} = \{0, 1/k, \dots, 1 - 1/k, 1\}$ for some $k \in \mathbb{N}$. We show that the revenue of any $\epsilon$-approximate correlated equilibrium is at least $v_2 - \Theta(1/k)- \Theta(\epsilon k^2)$, where $v_2 \geq 0$ is the second-highest valuation. Our results establish the first polynomial convergence rates on the revenue generated by no-swap regret bidders in first-price auctions. For instance, if bidders admit the optimal swap regret of $\mathcal{O}(\sqrt{k T})$, then the time-averaged revenue is at least $v_2 - \Theta(1/k) - \Theta(\epsilon)$ after $\mathcal{O}(k^5/\epsilon^2)$ rounds.
Provably Efficient Resource Allocation of Cloud Native Functions for Network Services
Nikos Lazaropoulos, NKUA
Abstract:
Modern Network Function Virtualization (NFV) revolutionizes network service provisioning by orchestrating Cloud Native Functions (CNFs), deployed as containerized workloads across distributed cluster workers. These deployments consist of modular microservices that enable elastic scalability and collaborative service delivery. This study presents a framework for addressing the Resource Allocation problem of CNFs, focusing on approximation algorithms with provable performance guarantees. The proposed framework, designed for Capacity Constrained Resource Allocation of CNFs, encompasses three variations of the Group Generalized Assignment Problem (Group GAP). Key contributions include: (1) a $1/2$-approximation algorithm for instances where each CNF's size is at most half of the cluster capacity, enhancing known results for Group GAP; (2) a $1/2(1 - e^{-1/d})$-approximation algorithm for shared microservices among multiple CNFs, where $d$ is the degree of sharing, supported by experimental evaluation of the algorithm’s relative error; and (3) an algorithm for cases involving diverse microservice sizes and profits, achieving a $(1/2 - \epsilon)$ approximation ratio. By employing linear programming relaxation and rounding techniques, our framework ensures computational efficiency and scalable solution quality, effectively addressing CNF resource management in modern network infrastructures.
16:45 - 17:00 Short break
17:00 - 18:15
Session 6:
Minimizing Rosenthal's Potential in Monotone Congestion Games    Go to Abstract
Christos Tsoufis, Université Paris Dauphine - PSL
Abstract:
Congestion games are attractive because they can model many concrete situations where competing entities interact through shared resources, and also because they always admit pure Nash equilibria, which correspond to the local minima of a potential function. We explore the problem of computing a state of minimum potential in this setting. Using the maximum number of resources that a player can use at a time, and the possible symmetry in the players' strategy spaces, we settle the complexity of the problem for instances having monotone (i.e., either non-decreasing or non-increasing) latency functions on their resources. The picture, delineating polynomial and NP-hard cases, is complemented with tight approximation algorithms.
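The potential in question is Rosenthal's potential, a standard object: $\Phi(s) = \sum_r \sum_{i=1}^{n_r(s)} \ell_r(i)$, where $n_r(s)$ is the number of players using resource $r$ in state $s$ and $\ell_r$ is its latency function. A minimal sketch of this textbook definition (illustrative only, not the speakers' algorithms), on a hypothetical two-player instance:

```python
# Standard Rosenthal potential: Phi(s) = sum_r sum_{i=1}^{n_r(s)} l_r(i).
# Its local minima are exactly the pure Nash equilibria of the congestion game.
from collections import Counter

def rosenthal_potential(strategies, latency):
    """strategies: one set of resources per player.
    latency: dict mapping resource -> latency function of the load i >= 1."""
    load = Counter(r for s in strategies for r in s)
    return sum(latency[r](i) for r, n in load.items() for i in range(1, n + 1))

# Hypothetical instance: two players, resources 'a' and 'b' with
# non-decreasing latencies l_a(i) = i and l_b(i) = 2i.
phi = rosenthal_potential([{'a'}, {'a', 'b'}], {'a': lambda i: i, 'b': lambda i: 2 * i})
# Phi = (1 + 2) + 2 = 5
```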
Sublinear-Round Broadcast without Trusted Setup    Go to Abstract
Giorgos Tsimos, Pod Network
Abstract:
Byzantine broadcast is one of the fundamental problems in distributed computing. Many of its practical applications, from multiparty computation to consensus mechanisms for blockchains, require increasingly weaker trust assumptions, as well as scalability for an ever-growing number of users n. This rules out existing solutions which run in a linear number of rounds in n or rely on trusted setup requirements. In this paper, we propose the first sublinear-round and trustless Byzantine broadcast protocol for the dishonest majority setting. Unlike previous sublinear-round protocols, our protocol assumes neither the existence of a trusted dealer who honestly issues keys and correlated random strings to the parties nor random oracles. Instead, we present a solution whose setup is limited to an unstructured uniform reference string and a plain public key infrastructure (a.k.a. bulletin-board PKI).
On the Event Horizon Monte Carlo approach    Go to Abstract
Georgios Boulougouris, DUTH
Abstract:
In this work we will present the Event Horizon Monte Carlo approach, a stochastic method that integrates time-dependent stochastic problems by combining an analytical solution of the differential equation describing event-driven problems, such as chemical reactions or diffusion at the molecular level, used to create a local network of states, with an iterative stochastic expansion of the network boundary. Furthermore, the connection of the proposed approach with path-finding methods will be discussed.
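For readers unfamiliar with event-driven stochastic simulation, a standard Gillespie-style step is the usual baseline for problems like chemical reactions: sample which event fires next and after how long. This is background only, a minimal sketch of the classical algorithm, not the Event Horizon method presented in the talk:

```python
# Background sketch: one step of the classical Gillespie stochastic simulation
# algorithm (SSA) for event-driven problems such as chemical reactions.
# This illustrates the general setting only, not the Event Horizon approach.
import math
import random

def gillespie_step(rates, rng=random.random):
    """Given per-event rates, sample (event_index, waiting_time)."""
    total = sum(rates)
    dt = -math.log(rng()) / total       # exponential waiting time, mean 1/total
    u, acc = rng() * total, 0.0
    for i, r in enumerate(rates):       # pick an event with probability r/total
        acc += r
        if u < acc:
            return i, dt
    return len(rates) - 1, dt           # guard against floating-point round-off

# Hypothetical two-reaction system with rates 1.0 and 2.0:
event, dt = gillespie_step([1.0, 2.0])
```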

Venue

ACAC will take place in the Multimedia Amphitheater of the National Technical University of Athens, located in the basement of the building of NTUA's Central Library. See the map below:

You can reach the Central Library in several ways:

By public transport:

The easiest way is to take the Blue Metro line and get off at the "ΚΑΤΕΧΑΚΗ" station. Then take bus 242, get off at the "ΘΥΡΩΡΕΙΟ" stop, and walk 5 minutes to the Central Library.
Another option is to take bus 140 from the "ΚΑΤΕΧΑΚΗ" metro station and get off at the "ΠΟΛΥΤΕΧΝΕΙΟΥΠΟΛΗ" stop. Then enter the campus and walk 10 minutes to the Central Library.

By car:

You can use this Google map to get directions from Alimou-Katechaki Avenue.