8:45 |
Registration - Coffee |
9:15 |
Opening |
9:30 - 10:30 |
Plenary Session (Chair: Stathis Zachos)
On the Algorithmic Lovász Local Lemma and Acyclic Edge Coloring
Lefteris Kirousis, University of Athens & Computer Technology Institute and Press “Diophantus”
Abstract: The algorithm for Lovász Local Lemma by Moser and Tardos gives a constructive way to prove the existence of
combinatorial objects that satisfy a system of constraints. We present an alternative probabilistic analysis of the algorithm that does not involve
reconstructing the history of the algorithm. We apply our technique to improve the best known upper bound on the acyclic chromatic index.
Specifically we show that a graph with maximum degree $\Delta$ has an acyclic proper edge coloring with at most
$\lceil 3.74(\Delta-1)\rceil+1$ colors, whereas the previously known best bound was $4(\Delta-1)$. The same technique is also applied to improve
corresponding bounds for graphs with bounded girth. An interesting aspect of the latter application is that the probabilities of the “undesirable” events
do not have a uniform upper bound, i.e., it constitutes a case of the asymmetric Lovász Local Lemma.
Joint work by Ioannis Giotis, Kostas I. Psaromiligkos and Dimitrios M. Thilikos.
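The Moser–Tardos algorithm discussed above has a strikingly simple statement: sample all variables, and while some "bad" event occurs, resample just that event's variables. A minimal illustrative sketch follows; the `bad_events` encoding and the 3-SAT toy instance are invented for illustration and this is not the improved analysis of the talk:

```python
import random

def moser_tardos(num_vars, bad_events, sample, rng=random.Random(0)):
    """Generic Moser-Tardos resampling loop.

    bad_events: list of (variables, predicate) pairs; predicate(x) is True
    when the bad event occurs under the current assignment x.
    sample(rng): returns a fresh random value for a single variable.
    """
    x = [sample(rng) for _ in range(num_vars)]
    while True:
        violated = next((ev for ev in bad_events if ev[1](x)), None)
        if violated is None:
            return x                       # no bad event occurs: done
        for v in violated[0]:              # resample only that event's variables
            x[v] = sample(rng)

# Toy instance: 3-SAT clauses, one bad event per clause ("clause falsified").
# (v, want) means the literal is satisfied if variable v has value `want`.
clauses = [[(0, True), (1, False), (2, True)],
           [(1, True), (2, False), (3, True)],
           [(0, False), (3, False), (4, True)]]

def clause_event(clause):
    variables = [v for v, _ in clause]
    falsified = lambda x: all(x[v] != want for v, want in clause)
    return (variables, falsified)

events = [clause_event(c) for c in clauses]
assignment = moser_tardos(5, events, lambda rng: rng.random() < 0.5)
assert not any(pred(assignment) for _, pred in events)
```

The Local Lemma guarantees termination when each bad event depends on few others; the sketch above makes no attempt to check that condition.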
|
10:30 - 11:00 |
Coffee Break |
11:00 - 12:30 |
Session 1 (Chair: Stathis Zachos)
Designing Networks with Good Equilibria under Uncertainty
Giorgos Christodoulou, University of Liverpool
Abstract: We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty.
The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs.
A set of k terminal vertices or players need to establish connectivity with the root. The social optimum is the Minimum Steiner Tree.
We are interested in situations where the designer has incomplete information about the input. We propose two different models,
the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying metric but the requested subset of the players
is not known and is activated either in an adversarial manner (adversarial model) or is drawn from a known probability distribution (stochastic model).
Stable marriages with evolving preferences
Nikos Leonardos, LIAFA, Univ Paris Diderot, France
Abstract: We consider the classical Stable Marriages problem in a model introduced by Anagnostopoulos, Kumar, Mahdian, and Upfal, in 2011.
The essential property of the model is that the input is not static but is dynamically changing.
The changes are probabilistic and occur in discrete time-steps and the algorithm tries to detect them by querying the input in some predefined manner.
The goal of the algorithm is to maintain an output which is dynamically changing and is sufficiently "good" in expectation and with high probability.
For the Stable Marriages problem we focus on the number of "unstable" pairs as a measure of the quality of the output.
Joint work with Varun Kanade and Frederic Magniez.
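The quality measure used above, the number of "unstable" (blocking) pairs, can be computed directly from the preference lists. A small illustrative sketch; the dictionary encoding and the toy instances are invented for illustration, not taken from the talk:

```python
def unstable_pairs(men_pref, women_pref, matching):
    """Count blocking ("unstable") pairs of a matching.

    men_pref[m] / women_pref[w]: preference lists, most preferred first.
    matching: dict mapping each man to his matched woman.
    """
    wife = matching
    husband = {w: m for m, w in matching.items()}
    rank_m = {m: {w: i for i, w in enumerate(p)} for m, p in men_pref.items()}
    rank_w = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    count = 0
    for m in men_pref:
        for w in women_pref:
            if w == wife[m]:
                continue
            # (m, w) blocks the matching if both prefer each other
            # to their current partners
            if (rank_m[m][w] < rank_m[m][wife[m]]
                    and rank_w[w][m] < rank_w[w][husband[w]]):
                count += 1
    return count

men = {'a': ['x', 'y'], 'b': ['x', 'y']}
women = {'x': ['a', 'b'], 'y': ['a', 'b']}
print(unstable_pairs(men, women, {'a': 'y', 'b': 'x'}))  # → 1 (a and x block)
print(unstable_pairs(men, women, {'a': 'x', 'b': 'y'}))  # → 0 (stable)
```

In the dynamic model of the talk, preferences change over time, so this count would be re-evaluated as the input evolves.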
Enumeration of 2-level polytopes
Vissarion Fisikopoulos, Université libre de Bruxelles
Abstract: We propose the first algorithm for enumerating all combinatorial types
of 2-level (a.k.a. compressed) polytopes of a given dimension d, and
provide complete experimental results for d≤6. Our approach is based
on new findings on 2-level polytopes, also presented here. In
particular, we introduce the notion of a simplicial core, which allows
us to reduce the problem to the enumeration of the closed sets of a
discrete closure operator, along with some convex hull computations
and isomorphism tests. (to appear in ESA'15)
Joint work with: Adam Bohn, Yuri Faenza, Samuel Fiorini, Marco
Macchia, Kanstantsin Pashkovich
|
12:30 - 12:45 |
Coffee Break |
12:45 - 13:45 |
Session 2 (Chair: Aris Pagourtzis)
On mobile agent verifiable problems
Evangelos Bampas, LaBRI, University of Bordeaux
Abstract:
We consider decision problems that are solved in a distributed fashion by synchronous mobile agents operating in an unknown, anonymous network.
Each agent has a unique identifier and an input string, and the agents have to collectively decide a property which may involve their input strings,
the graph on which they are operating, and their particular starting positions.
Building on recent work by Fraigniaud and Pelc [LATIN 2012, LNCS 7256, pp. 362–374], we introduce several natural new computability classes
allowing for a finer classification of problems below co-MAV or MAV, the latter being the class of problems that are verifiable when the
agents are provided with an appropriate certificate. We provide inclusion and separation results among all these classes.
We also determine their closure properties with respect to set-theoretic operations. Our main technical tool, which may be of
independent interest, is a new meta-protocol that enables the execution of a possibly infinite number of mobile agent protocols essentially
in parallel, similarly to the well-known dovetailing technique from classical computability theory.
Joint work with David Ilcinkas.
Collaborative Exploration by Energy-Constrained Mobile Robots
Christina Karousatou, Aix-Marseille University and CNRS, France
Abstract: We study the problem of exploration of a tree by mobile agents (robots) that have limited energy.
The energy constraint bounds the number of edges that can be traversed by a single agent. Thus we need a team of agents to
completely explore the tree and the objective is to minimize the size of this team. The agents start at a single node,
the designated root of the tree, and the height of the tree is bounded by the energy bound B. We provide an exploration
algorithm without any knowledge about the tree and we compare our algorithm with the optimal offline algorithm that
has complete knowledge of the tree. Our algorithm has a competitive ratio of O(logB), independent of the number of nodes
in the tree. We also show that this is the best possible competitive ratio for exploration of unknown trees.
|
13:45 - 15:00 |
Lunch Break |
15:00 - 16:00 |
Session 3 (Chair: Vangelis Markakis)
Improving Selfish Routing for Risk-Averse Players
Thanasis Lianeas, University of Texas at Austin
Abstract: We investigate how and to what extent one can exploit risk-aversion and modify the perceived cost of the players in
selfish routing so that the Price of Anarchy (PoA) w.r.t. the total latency is improved. The starting point is to introduce some small random
perturbations to the edge latencies so that the expected latency does not change, but the perceived cost of the players increases, due to risk-aversion.
We adopt the simple model of γ-modifiable routing games, a variant of selfish routing games with restricted tolls. We prove that computing the best
γ-enforceable flow is NP-hard for parallel-link networks with affine latencies and two classes of heterogeneous risk-averse players.
On the positive side, we show that for parallel-link networks with heterogeneous players and for series-parallel networks with homogeneous players,
there exists a nicely structured γ-enforceable flow whose PoA improves fast as γ increases. We show that the complexity of computing such a γ-enforceable
flow is determined by the complexity of computing a Nash flow for the original game. Moreover, we prove that the PoA of this flow is best possible in
the worst-case, in the sense that there are instances where (i) the best γ-enforceable flow has the same PoA, and (ii) considering more flexible modifications
does not lead to any improvement on the PoA.
Pushing the Envelope in Graph Compression
Katia Papakonstantinopoulou, University of Athens
Abstract: We improve the state-of-the-art method for the compression of web and other similar graphs by introducing an elegant technique which
further exploits the clustering properties observed in these graphs. The analysis and experimental evaluation of our method shows that it outperforms the currently
best method of Boldi et al. by achieving a better compression ratio and retrieval time. Our method exhibits vast improvements on certain families of graphs,
such as social networks, by taking advantage of their compressibility characteristics, and ensures that the compression ratio will not worsen for any graph,
since it easily falls back to the state-of-the-art method.
|
16:00 - 16:15 |
Coffee Break |
16:15 - 17:45 |
Session 4 (Chair: Dimitris Fotakis)
Who to Trust for Truthfully Maximizing Welfare?
Emmanouil Zampetakis, Massachusetts Institute of Technology
Abstract: We introduce a general approach based on selective verification and obtain approximate mechanisms without money for
maximizing the social welfare in the general domain of utilitarian voting. Having a good allocation in mind, a mechanism with verification selects
a few critical agents and detects, using a verification oracle, whether they have reported truthfully. If yes, the mechanism produces the desired allocation.
Otherwise, the mechanism ignores any misreports and proceeds with the remaining agents. We obtain randomized truthful (or almost truthful) mechanisms
without money that verify only O(ln m / ϵ) agents, where m is the number of outcomes, independently of the total number of agents, and are (1−ϵ)-approximate
for the social welfare. We also show that any truthful mechanism with a constant approximation ratio needs to verify Ω(log m) agents. A remarkable property of
our mechanisms is robustness, namely that their outcome depends only on the reports of the truthful agents.
On the Efficiency of All-Pay Mechanisms
Alkmini Sgouritsa, University of Liverpool
Abstract: We study the inefficiency of mixed equilibria, expressed as the price of anarchy, of all-pay auctions in three different
environments: combinatorial, multi-unit and single-item auctions. First, we consider item-bidding combinatorial auctions where m all-pay auctions run
in parallel, one for each good. For fractionally subadditive valuations, we strengthen the upper bound from 2 to 1.82 by proving some structural
properties that characterize the mixed Nash equilibria of the game. Next, we design an all-pay mechanism with a randomized allocation rule for
the multi-unit auction. We show that, for bidders with submodular valuations, the mechanism admits a unique, 75% efficient, pure Nash equilibrium.
The efficiency of this mechanism outperforms all the known bounds on the price of anarchy of mechanisms used for multi-unit auctions.
Finally, we analyze single-item all-pay auctions motivated by their connection to contests and show tight bounds on the price of anarchy of
social welfare, revenue and maximum bid.
Manipulation of h-index, g-index, i10-index by merging articles
Chrystalla Pavlou, University of Oxford
Abstract: In the academic world different measures are used to quantify the productivity and the impact of a researcher.
The h-index, g-index and i10-index are examples of popular scientific impact indices that are computed in many article databases.
Although these indices are widely used, there have been some observations about how someone can manipulate them. Extending the model proposed
by van Bevern et al. (2014), we investigate whether it is easy for a researcher to improve his g-index and i10-index by means of the merge operation.
We also study the manipulation of h-index, g-index and the i10-index under a model that takes the social network of the researcher into account.
Specifically, we study the case where a researcher wants to improve his target index without affecting a second researcher's index and two
adversarial scenarios. In the first case a researcher wants to improve his target index in a way that does not improve his opponent's index
and in the second case a researcher wants to harm his opponent's index in a way that does not harm his own index.
|
17:45 - 18:00 |
Short Break |
18:00 - 19:00 |
Session 5 (Chair: Vassilis Zissimopoulos)
Cost-sharing models in Participatory Sensing
Giorgos Birmpas, Athens University of Economics and Business
Abstract: In Smart City and Participatory Sensing initiatives the key concept is for user communities to contribute sensor information
and form a body of knowledge that can be exploited by innovative applications and data analytics services. A key aspect in all such platforms is that
sensor information is not free but comes at a cost. As a result, these platforms may suffer from insufficient sensor information being made publicly
available if applications do not efficiently share the cost of the sensor information they consume. We explore the design of specialized market
mechanisms that match demand to supply while taking into account important positive demand externalities: sensors are digital goods and their cost
can be shared by applications. We focus on the buyer side and define different demand models according to the flexibility in choosing sensor data
for satisfying application needs. We then investigate the properties of various cost-sharing mechanisms with respect to efficiency and budget balance.
In doing so, we also propose and study a new mechanism which, although it lacks strategyproofness, exhibits important efficiency improvements along
with certain fairness properties.
Aggregating partial rankings with applications to peer grading in MOOCs
Alexandros Voudouris, University of Patras
Abstract: We investigate the potential of using ordinal peer grading for the evaluation of students in massive online open courses (MOOCs).
According to such grading schemes, each student receives a few assignments (by other students) which she has to rank. Then, a global ranking is produced by
combining the individual ones. This is a novel application area for social choice concepts and methods where the important problem to be solved is as follows:
how should the assignments be distributed so that the collected individual rankings can be easily merged into a global one that is as close as possible to the
ranking that represents the relative performance of the students in the assignment? Our main theoretical result suggests that, using very simple ways to distribute
the assignments so that each student has to rank only k of them, a Borda-like aggregation method can recover a 1−O(1/k) fraction of the true ranking when each
student correctly ranks the assignments she receives. Experimental results indicate that this method works even better in practice and is extremely robust even
when students have imperfect capabilities as graders. Our results provide strong evidence that ordinal peer grading can be a highly effective and scalable solution
for evaluation in MOOCs.
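A Borda-like aggregation of partial rankings, of the kind described above, can be sketched as follows. The exact scoring and normalization are assumptions made for illustration and are not necessarily the method analyzed in the talk:

```python
from collections import defaultdict

def aggregate(partial_rankings, num_items):
    """Borda-like aggregation of partial rankings.

    Each partial ranking lists item ids, best first. An item placed at
    position i in a bundle of size k earns k - i points; scores are
    averaged over the number of bundles containing the item.
    """
    score = defaultdict(float)
    seen = defaultdict(int)
    for ranking in partial_rankings:
        k = len(ranking)
        for pos, item in enumerate(ranking):
            score[item] += k - pos
            seen[item] += 1
    for item in range(num_items):
        if seen[item]:
            score[item] /= seen[item]   # average points per appearance
    return sorted(range(num_items), key=lambda it: -score[it])

# Items 0..3, true order 0 > 1 > 2 > 3; each "grader" ranks a bundle of 3
# and, in this toy run, always ranks it correctly.
bundles = [[0, 1, 2], [1, 2, 3], [0, 2, 3], [0, 1, 3]]
print(aggregate(bundles, 4))  # → [0, 1, 2, 3]
```

With accurate graders and bundles that cover all items evenly, the true ranking is recovered exactly; the talk's 1−O(1/k) guarantee concerns how much survives under sparser coverage.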
|
| |
9:30 - 11:00 |
Session 6 (Chair: Ioannis Milis)
Mobile Agents Rendezvous in Rings, in spite of a Malicious Agent
Euripides Markou, University of Thessaly
Abstract: A number of ‘honest’ mobile agents need to gather at a node of a ring network, while a malicious agent can move around the network
and block the honest agents, preventing them from gathering. The malicious agent is assumed to be arbitrarily fast, to have full knowledge of the network, and it cannot
be exterminated by the honest agents. The honest agents are asynchronous and anonymous finite automata, they have no knowledge about the network, apart from
its topology, and they can communicate only when they meet at a node. We solve the problem for any number of agents in oriented ring networks and for any odd
number of agents in unoriented ring networks, while we show that rendezvous is impossible for an even number of agents in unoriented rings. Our model lies between
the more powerful but static fault model of ‘black holes’ (which can even destroy the agents), and the less powerful but mobile fault model of ‘Byzantine agents’
(which can only imitate the honest agents).
Efficient Money-Burning in General Domains
Dimitris Tsipras, National Technical University of Athens
Abstract: We study mechanism design where the payments charged to the agents are not in the form of monetary transfers, but effectively burned.
In this setting, the objective shifts from maximizing social welfare, to maximizing social utility, i.e. the social welfare minus the payments charged.
We consider the general setting with m discrete outcomes and n multidimensional agents. We present mechanisms that provide optimal worst case guarantees
with respect to maximizing social utility, while simultaneously approximating the optimal social welfare within a constant factor. Such mechanisms are obtained
by defining a maximization problem on a suitably chosen smooth surface of probability distributions. This approach leads to a closed-form, smooth mechanism, which we
believe to be of independent interest.
Welfare Ratios for One-sided Matching Mechanisms
Aris Filos-Ratsikas, University of Oxford
Abstract: We study the Price of Anarchy of mechanisms for the fundamental problem of social welfare maximization in one-sided matching settings,
when agents have general cardinal preferences over a finite set of items. We show that the two most well-studied mechanisms in the literature, Probabilistic Serial
and Random Priority, have a Price of Anarchy of O(sqrt(n)). We complement our results with a lower bound of Ω(sqrt(n)) on the Price of Anarchy of all mechanisms.
As a result, we conclude that these mechanisms are optimal. We extend our lower bound to the Price of Stability of all mechanisms that satisfy a mild "proportionality-like"
property.
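The Probabilistic Serial mechanism studied above has a simple "simultaneous eating" description: every agent eats its favorite remaining item at unit speed, and the fraction each agent eats becomes its allocation probability. A minimal sketch, where the encoding of preferences as index lists is an assumption for illustration:

```python
def probabilistic_serial(prefs):
    """prefs[i]: agent i's ranking of items, best first.
    Returns p with p[i][j] = probability that agent i receives item j."""
    n = len(prefs)
    remaining = [1.0] * n                  # fraction of each item left
    p = [[0.0] * n for _ in range(n)]
    eaten = 0.0                            # time elapsed (each agent eats 1 unit total)
    while eaten < 1.0 - 1e-12:
        # each agent eats its favorite item with something left
        fav = [next(j for j in prefs[i] if remaining[j] > 1e-12)
               for i in range(n)]
        eaters = {j: [i for i in range(n) if fav[i] == j] for j in set(fav)}
        # advance time until some item is exhausted or everyone is full
        dt = min(1.0 - eaten,
                 min(remaining[j] / len(e) for j, e in eaters.items()))
        for j, e in eaters.items():
            for i in e:
                p[i][j] += dt
            remaining[j] -= dt * len(e)
        eaten += dt
    return p

# Disjoint favorites: each agent gets its top item with probability 1.
print(probabilistic_serial([[0, 1], [1, 0]]))
# Identical preferences: the contested item is split evenly.
print(probabilistic_serial([[0, 1], [0, 1]]))
```

The Price-of-Anarchy results of the talk concern how much welfare such randomized allocations can lose when agents report strategically rather than truthfully.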
|
11:00 - 11:30 |
Coffee Break |
11:30 - 13:00 |
Session 7 (Chair: Euripides Markou)
An introduction to utility elicitation
Angelina Vidali, Pierre and Marie Curie University
Abstract: I present some of the standard methods in the literature for eliciting the utility of a user in the expected utility model as
well as in the more realistic model of prospect theory e.g. when users are risk-averse.
The SYNAISTHISI Project: Using Multi-Agent Systems Technologies in Resource Allocation
Ioannis Vetsikas, NCSR Demokritos
Abstract: The proliferation of mobile smart devices and the Internet of Things (IoT) vision brings about a number of opportunities for new services
and business models. Taking a cue from cloud computing, we develop the SYNAISTHISI platform: it interfaces with heterogeneous components (devices or biological entities)
that offer sensing, processing and actuator capabilities and integrates them as reusable services that are managed and easily “woven” into complex applications.
In this talk, we will briefly examine two of the key (multi-agent) technologies used in the platform. First, to create complex cyber-physical systems (CPS) on-demand
from available services, we develop agent strategies and mechanisms that facilitate the allocation of services to specific CPS. More specifically, each customer submits a
blueprint of the CPS to be created together with a budget, and their agent bids for possible services (offered by service providers) so as to maximize the customer's expected profit.
A mechanism with desirable properties (e.g. efficient, fair) is developed that matches these services to customers, as well as strategies for the customer agents that allow
them to deal with uncertainties, such as failing services or realizing at run time that more services might be necessary than those invoked initially. Second, for one of the
SYNAISTHISI pilots, pertaining to the smart grid, our goal is to incentivize the usage of renewable energy and to flatten the demand from traditional power plants (coal, etc.).
To achieve this, we develop multi-agent based technologies for Demand Reduction (DR) and Demand Side Management (DSM). In contrast to existing DR/DSM technologies, our
approach is not centralized: each actor (e.g. a household or end user) optimizes their energy usage individually. To coordinate the process, we developed mechanisms
(giving pricing or social incentives through the appropriate gamification), so that the optimal action for users and their agents is to shift and/or reduce their demand to
coincide with times when energy is more abundant and available from renewable energy sources. The benefits of this approach are increased privacy, and also that no central
system imposes a decision on the user; rather, the user herself chooses to take the actions that are good for the overall system.
Scheduling MapReduce Jobs and Data Shuffle on Unrelated Processors
Georgios Zois, Athens University of Economics and Business
Abstract: We propose a constant approximation algorithm for generalizations of the Flexible Flow Shop (FFS) problem which form a realistic model for
nonpreemptive scheduling in MapReduce systems. Our results concern the minimization of the total weighted completion time of a set of MapReduce jobs on unrelated processors
and improve substantially on the model proposed by Moseley et al. (SPAA 2011) in two directions: (i) we consider jobs consisting of multiple Map and Reduce tasks, which is
the key idea behind MapReduce computations, and (ii) we introduce into our model the crucial cost of the data shuffle phase, i.e., the cost for the transmission of intermediate
data from Map to Reduce tasks. Moreover, we experimentally evaluate our algorithm compared with a lower bound on the optimal cost of our problem as well as with a fast algorithm,
which combines a simple online assignment of tasks to processors with a standard scheduling policy. As we observe, for random instances that capture data locality issues, our
algorithm achieves a better performance.
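The objective above, total weighted completion time, is easiest to illustrate in the single-machine special case, where Smith's rule (schedule by weight-to-processing-time ratio) is known to be optimal. This sketch is only background for the objective, not the authors' algorithm for unrelated processors; the job data is invented:

```python
def total_weighted_completion(jobs, order):
    """jobs: {id: (processing_time, weight)}; order: sequence of job ids."""
    t, total = 0, 0
    for j in order:
        p, w = jobs[j]
        t += p                 # completion time of job j
        total += w * t
    return total

def wspt(jobs):
    """Smith's rule: schedule by weight/processing-time ratio, largest first."""
    return sorted(jobs, key=lambda j: -jobs[j][1] / jobs[j][0])

jobs = {'a': (2, 1), 'b': (1, 3), 'c': (4, 2)}
order = wspt(jobs)                              # ['b', 'a', 'c']
print(total_weighted_completion(jobs, order))   # → 20
```

On unrelated processors, where each job's processing time depends on the machine, and with MapReduce precedence between Map and Reduce tasks, no such simple rule is optimal, which is what makes the approximation results of the talk nontrivial.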
|
13:00 - 15:00 |
Lunch Break |
15:00 - 15:45 |
Session 8 (Chair: Stathis Zachos)
Stochastic Congestion Games with Risk-Averse Players
Haris Angelidakis, Toyota Technological Institute at Chicago
Abstract: Congestion games ignore the stochastic nature of resource delays and the risk-averse attitude of the players to uncertainty.
To take these into account, we introduce two variants of atomic congestion games, one with stochastic players, where players assign load to their strategies
independently with some probability, and another with stochastic edges, where the latency functions are random. In both variants, the players are risk-averse,
and their individual cost is a player-specific quantile of their delay distribution.
We focus on parallel-link networks and prove that stochastic congestion games on such networks admit an efficiently computable pure Nash equilibrium if the
players have either the same risk attitude or the same participation probabilities, and also admit a potential function if the players have the same risk attitude.
On the negative side, there exist games with players of different risk attitudes that are not potential. As for the inefficiency of equilibria, for parallel-link
networks with n players and linear delays, the Price of Anarchy is Θ(n), and may be unbounded in the case of stochastic edges.
This is joint work with Dimitris Fotakis and Thanasis Lianeas.
|
15:45 - 16:15 |
Coffee Break |
16:15 - 17:00 |
Session 9 (Chair: Stathis Zachos)
System Fragility under Load Redistribution from an Attacker's Perspective
Evangelos Chatziafratis, National Technical University of Athens
Abstract: System robustness and system fragility concern the question of how robust (resistant to failure) a system is when some of its parts malfunction or do not
work as expected. System robustness is studied for computer systems, electrical and mechanical systems, energy systems, network security, etc. It is an important area of
research with many applications in today's extremely connected world of the Internet and in the efforts to construct complex systems such as the smart energy grid. In
addition, it is also important for the design and study of the so-called interdependent systems.
In this thesis, we study the problem of cascading failures: the situation in which an attack destroys some part of the system, and these initial failures
may lead to further failures in other parts of the system. The main causes of this cascading effect and of the interconnection between different nodes are possible node
dependencies (e.g. power supply) or the load redistribution that takes place when an attacked node fails. The load may be redistributed equally among all remaining
nodes or targeted to a selected few of them (e.g. neighbouring nodes). Some of the aspects considered here for the load redistribution problem
are also related to virus propagation, social networks and the well-known problem of influence maximization.
We begin the thesis by presenting recent research in the area of system robustness and the different techniques that have been developed
to address the problem. We study cases where the initial failures are random (e.g. due to a natural disaster) and cases where they are caused by strategically planned
attacks (e.g. terrorist attacks). We also present a new approach to system robustness based on loads and capacities with cascading failures, and we prove
the hardness of finding the best strategy to attack the system under constraints on the number of attacks and on their magnitude. We present
some greedy algorithms that may be intuitive but are not guaranteed to give a good solution and may in fact behave very poorly. We then
impose extra constraints on the values of the system's loads and capacities in order to exclude pathological cases and focus on instances most frequently seen in real
situations, and we give efficient algorithms that find the optimal solution from an attacker's perspective. Finally, we conclude the thesis by briefly describing
the future work needed to address some interesting problems that we came across during the preparation of this work.
|
17:00 - 17:15 |
Short Break |
17:15 - 18:15 |
Short Sessions |
18:15 |
Closing |
21:00 |
Dinner |
| |