Aims & Scope
The aim of this workshop is to promote the study of problems motivated by real-life applications, including the appropriate choice of parameters, the design and analysis of FPT algorithms, and the development of kernelization techniques. We believe that the workshop will be of interest to a wide audience, including the ICALP community, and that it will become the first in a series of APAC workshops.
Background
It is well known that the vast majority of practical problems are NP-hard, so we cannot hope to obtain algorithms that compute exact solutions efficiently. One way to overcome this obstacle is to settle for approximate solutions.
However, is this really necessary?
After all, the data of real-life problems are the outputs of other processes,
such as natural selection, chemical and physical processes, and
other algorithms. Whilst these data are often very large and complex,
some structure is almost always present.
Parameterized complexity is an approach that uses the underlying structure of the data to design efficient and exact algorithms. First, it uses a parameter to "capture" the amount of structure present in the data: generally, the lower the parameter, the more structured the data are. Here the meaning of "structured" depends on the choice of parameter. For instance, the well-known treewidth parameter measures how much a graph resembles a tree.
Once a suitable parameter is chosen, we can use it to develop efficient parameterized algorithms. The runtime of such algorithms depends on both the input size and the parameter. Ideally, a practical parameterized algorithm should run in time O(f(par) · poly(n)), where poly(n) is a polynomial in the input size and f(par) is a computable function of the parameter; algorithms with a runtime of this form are called FPT, or fixed-parameter tractable. Such algorithms can solve NP-hard problems on large instances efficiently (in polynomial or even linear time) as long as the value of the parameter is relatively small.
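To make this concrete, here is a minimal sketch (our own illustration, not part of the workshop material) of perhaps the best-known FPT algorithm: the bounded search tree for Vertex Cover parameterized by the solution size k. The branching depth is at most k, so it runs in O(2^k · poly(n)) time.

```python
# Minimal sketch: bounded search tree for Vertex Cover, parameterized by k.
# Every edge (u, v) must be covered by u or v, so we branch on the two
# choices; the recursion depth is at most k, giving O(2^k * poly(n)) time.

def vertex_cover(edges, k):
    """Return a vertex cover of size at most k, or None if none exists."""
    if not edges:
        return set()                      # nothing left to cover
    if k == 0:
        return None                       # edges remain but the budget is spent
    u, v = next(iter(edges))              # pick an arbitrary uncovered edge
    for w in (u, v):                      # branch: w joins the cover
        rest = {e for e in edges if w not in e}
        cover = vertex_cover(rest, k - 1)
        if cover is not None:
            return cover | {w}
    return None

# Example: a 4-cycle with a chord has a vertex cover of size 2.
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]}
print(vertex_cover(edges, 2))             # e.g., {2, 4}
```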
It is well known that the input of every FPT problem can be compressed, in polynomial time, so that its size is bounded by a function of the parameter (in many cases, a polynomial). This compression (called kernelization in the Parameterized Complexity community) amounts to preprocessing with a worst-case guarantee. Note that kernelization is of interest even if non-parameterized algorithms such as heuristics are used to solve the compressed instances of the problem.
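As an illustration of such a preprocessing step (again ours, not taken from the workshop), the following sketch implements Buss's kernelization for Vertex Cover: in polynomial time it either decides the instance outright or shrinks it to an equivalent one with at most k^2 edges.

```python
# Minimal sketch: Buss's kernelization for Vertex Cover.
# Rule 1: a vertex of degree > k must be in every cover of size <= k.
# Rule 2: once all degrees are <= k, a yes-instance has at most k*k edges.

def buss_kernel(edges, k):
    """Return (reduced_edges, remaining_budget, forced_vertices) or "NO"."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        v = high[0]
        forced.add(v)                         # v is in every cover of size <= k
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return "NO"
    if len(edges) > k * k:
        return "NO"                           # too many edges for a yes-instance
    return edges, k, forced
```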
Format
The workshop program will consist of several invited talks. Everyone is most welcome to attend. The workshop fee is 35 British pounds, which covers two coffee breaks, lunch, and a coach to and from Leamington Spa for dinner.
Speakers
- Faisal Abu-Khzam,
Lebanese American University, Lebanon
"The Capacitated Cluster Edit Problem: Theory and Experiments"
[slides in pdf]
Abstract.
The Cluster Edit problem seeks a transformation of a given undirected graph into a transitive graph via a minimum number of edge-edit operations. Existing
parameterized algorithms exhibit slow performance and may deliver clusters of no practical significance, such as singletons.
We introduce a generalized version of Cluster Edit, dubbed Capacitated Cluster Edit, featuring additional input parameters that bound the number of edge deletions and additions per vertex. In addition to its practical significance, the new formulation is used to improve the efficiency of recent Cluster Edit algorithms when the input is a yes-instance. The proposed problem warrants further theoretical study, and in this respect a few open problems are posed.
- Liming Cai,
The University of Georgia, USA
"Parameterized Computation for Biomolecule Folding"
[slides in pdf]
Abstract.
Tasks in prediction of bio-molecule structures and biological networks require new, efficient solutions
to a number of classical algorithmic graph problems. Such problems, often associated with graphs of small treewidth, are parameterized intractable with respect to the treewidth parameter (e.g., not MSO-logic definable over graphs of small treewidth). We investigate parameterized computation of these problems constrained by certain engineering parameters of biological significance. We demonstrate that different graph morphism problems may have different parameterized complexities under such parameterizations. In particular, some (but probably not all) morphism properties allow the problem to be solved efficiently by reduction to MSO-logic definable sets over graphs of non-trivially small treewidth.
- Jiong Guo,
University of Saarland, Germany
"Parameterized Complexity of Local Search Problems"
[slides in pdf]
Abstract.
Local search is a universal algorithmic approach for coping with computationally hard optimization problems.
The main idea is to start with some solution, then search inside the "local neighborhood" of this solution
for a better solution until a locally optimal solution has been found. Usually, a brute-force local search for
the better solution takes n^{O(k)} time, where n is the total input length and k is a parameter measuring the "radius" of the neighborhood, that is, the maximum distance to the current solution. It is therefore natural to ask whether n^{O(k)} time is required for searching this neighborhood, or whether f(k) * poly(n) time can be achieved. This is precisely the main question underlying parameterized local search. In this talk, I will survey recent developments in parameterized local search for graph problems, string problems, and more. (A brute-force sketch of such a neighborhood search is given after the speaker list below.)
- Michael A. Langston,
University of Tennessee, USA
"Uncovering Latent Relationships in High Dimensional Data with High Performance Parameterized Algorithms:
A Smorgasbord of Applications"
[slides in pdf]
Abstract.
We will focus on applications, and discuss the use of fixed-parameter tractable algorithms and powerful
computational platforms in the analysis of highly complex data. We will address important issues with noise,
and the role of model organisms in successful applications to human health. Effective load balancing and efficient
combinatorial search are found to be critical concerns. Examples will be drawn from genomic, transcriptomic,
methylation and other types of high-throughput biological data, as well as a plethora of data drawn from research
in social and environmental disparities.
- Rolf Niedermeier,
TU Berlin, Germany
"Partitioning into Colorful Components: From Parameterized Algorithmics to Algorithm Engineering"
[slides in pdf]
Abstract.
We consider the NP-hard Colorful Components problem: given a
vertex-colored graph, delete a minimum number of edges such that
no connected component contains two vertices of the same color.
First applications occurred in a bioinformatics context (e.g.,
multiple sequence alignment), but the problem also finds applications
in network analysis (e.g., correcting inaccurate interlanguage links
in Wikipedia). We study the parameterized complexity of Colorful Components
as well as its practical solvability, comparing (new) heuristic
and ILP-based approaches and discussing the (potential) contributions
of parameterized algorithmics.
- Stefan Szeider,
Vienna University of Technology, Austria
"Parameterized Complexity Results for Probabilistic Network Structure Learning"
[slides in pdf]
Abstract.
Probabilistic network structure learning is the notoriously difficult task of inferring probabilistic networks from data. A probabilistic network is a directed
acyclic graph (DAG) whose vertices are labeled with certain tables that express probabilities. Under various goodness-of-fit measures, finding an optimal network
is an NP-hard problem, even if restricted to polytrees. One of the very rare polynomial cases is due to Edmonds, who showed in a 1967 paper that an optimal
branching (i.e., a polytree with maximum in-degree at most 1) can be inferred in polynomial time.
In this talk I will present some recent parameterized complexity results on probabilistic network structure learning with respect to various parameters,
including the edit-distance of the inferred network from being a branching and the treewidth of the super-structure (a certain undirected graph that contains all
the optimal networks). Further results that will be discussed concern the parameterized complexity of k-neighborhood local search, where the task is to improve
the goodness-of-fit of a probabilistic network by reversing, deleting, or adding at most k arcs.
This is joint work with Serge Gaspers, Mathieu Liedloff, Mikko Koivisto, and Sebastian Ordyniak.
- Anders Yeo,
University of Johannesburg, South Africa
"Parameterized Complexity of Certain Information
Security Problems"
[slides in pdf]
Abstract.
A workflow specification defines a set of steps, the order in which those steps must be executed,
and constraints on which groups of users are permitted to perform subsets of those steps.
A workflow specification is said to be satisfiable if there exists an assignment of users to
workflow steps that satisfies all the constraints.
An algorithm for determining whether such an assignment exists is important, both as a static analysis tool
for workflow specifications, and for the construction of run-time reference monitors for workflow management systems.
Finding such an assignment is a hard problem in general, but work by Wang and Li in 2010 using parameterized
complexity suggests that efficient algorithms exist under reasonable assumptions about workflow specifications.
In this talk, improved complexity bounds for the workflow satisfiability problem (WSP) will be discussed.
We also generalize and extend the types of constraints that may be defined in a
workflow specification such that the satisfiability problem remains fixed-parameter tractable.
Finally, we consider kernels for WSP.
This is joint work with Jason Crampton and Gregory Gutin.
- Norbert Zeh,
Dalhousie University, Canada
"Comparing Phylogenies: Kernelization, Depth-Bounded Search and Beyond"
[slides in pdf]
Abstract.
A phylogenetic tree represents the (putative) evolutionary history of
a set of species. Nowadays these trees are usually constructed using
statistical methods based on the comparison of gene sequences. The problem, and
a great opportunity, is that different genes tell different stories about how
different species evolved. A key step in reconciling these stories and gaining
insights into the evolution of groups of species is the comparison of
phylogenetic trees using biologically meaningful distance measures that capture
non-tree-like evolutionary processes, such as lateral gene transfer or
hybridization.
This talk discusses the state of the art in computing these distances and
corresponding putative evolutionary histories of the given set of species. It
will give an overview of kernelization results and the application of
depth-bounded search techniques for computing the subtree prune-and-regraft
distance and hybridization number of two phylogenies. The analysis underlying
the depth-bounded search algorithms also leads to fairly good approximation
algorithms, which can be used to develop a branch-and-bound heuristic to speed
up the depth-bounded search.
Towards the end of the talk, experimental results
are discussed that show the impact of these improvements and demonstrate the
practical utility of the developed algorithms in attacking larger and more
diverse inputs than would have been possible using previous methods. The talk
concludes with a short summary of ongoing efforts to further improve the running
times of the developed algorithms, apply them to evolutionary networks instead
of trees, and develop new methods based on these algorithms, for example, for
computing a supertree (putative species tree) from a set of gene trees.
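As a small companion to Jiong Guo's talk above, here is a brute-force sketch (our illustration, not the speaker's) of a k-exchange local search step for Vertex Cover: given a cover, it searches the entire radius-k neighborhood for a strictly smaller cover, which takes roughly n^{O(k)} time; the question raised in the talk is whether f(k) · poly(n) time is achievable instead.

```python
# Minimal sketch: brute-force k-exchange local search for Vertex Cover.
# Given a cover, try every way to remove d vertices and add a < d vertices
# with d + a <= k; enumerating these candidates takes roughly n^{O(k)} time.

from itertools import combinations

def is_cover(edges, cover):
    return all(u in cover or v in cover for u, v in edges)

def improve_within_radius(vertices, edges, cover, k):
    """Return a strictly smaller cover within exchange distance k, or None."""
    outside = sorted(set(vertices) - cover)
    for d in range(1, k + 1):                         # vertices leaving the cover
        for removed in combinations(sorted(cover), d):
            for a in range(0, min(d, k - d + 1)):     # fewer vertices entering
                for added in combinations(outside, a):
                    candidate = (cover - set(removed)) | set(added)
                    if is_cover(edges, candidate):
                        return candidate
    return None

# Example: {1, 2, 3} covers the path 1-2-3-4, but {2, 3} lies within radius 2.
edges = [(1, 2), (2, 3), (3, 4)]
print(improve_within_radius({1, 2, 3, 4}, edges, {1, 2, 3}, 2))   # {2, 3}
```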
Organization
The workshop is organized by
Gregory Z. Gutin, Royal Holloway, University of London, UK
Venue
The workshop will take place in lecture room MS.03 of the Warwick Mathematics Institute (Zeeman Building).
Schedule
8:50            Foreword (Gregory Z. Gutin)
9:00 — 9:45     Michael A. Langston: Uncovering Latent Relationships in High Dimensional Data with High Performance Parameterized Algorithms: A Smorgasbord of Applications
9:45 — 10:30    Liming Cai: Parameterized Computation for Biomolecule Folding
10:30 — 11:00   Coffee
11:00 — 11:45   Norbert Zeh: Comparing Phylogenies: Kernelization, Depth-Bounded Search and Beyond
11:45 — 12:30   Faisal Abu-Khzam: The Capacitated Cluster Edit Problem: Theory and Experiments
12:30 — 14:00   Lunch and Discussion
14:00 — 14:45   Rolf Niedermeier: Partitioning into Colorful Components: From Parameterized Algorithmics to Algorithm Engineering
14:45 — 15:30   Stefan Szeider: Parameterized Complexity Results for Probabilistic Network Structure Learning
15:30 — 16:00   Coffee
16:00 — 16:45   Anders Yeo: Parameterized Complexity of Certain Information Security Problems
16:45 — 17:30   Jiong Guo: Parameterized Complexity of Local Search Problems
17:30           Conclusion
Photos
Photos by Juraj Stacho.