The ever-growing need to efficiently store, retrieve, and analyze massive datasets, generated by very different sources, is made even more complex by the heterogeneous requirements posed by users and applications. This new level of complexity cannot be handled properly by current data structures for big-data problems.
To meet these challenges, we launched a project, funded by the Italian Ministry of University and Research (PRIN no. 2017WR7SHH), that will lay the theoretical and algorithm-engineering foundations of a new generation of Multicriteria Data Structures and Algorithms. The term “multicriteria” refers to the fact that we wish to seamlessly integrate, via a principled optimization approach, modern compressed data structures with new data structures learned from the input data using suitable machine-learning tools. The goal of the optimization is to select, from a family of properly designed data structures, the one that “best fits” the multiple constraints imposed by its context of use, thus eventually dominating the multitude of trade-offs currently offered by known solutions, especially in the realm of Big Data applications.
A multicriteria data structure, for a given problem $P$, is defined by a pair $\langle \mathcal F, \mathcal A \rangle_P$, where $\mathcal F$ is a family of data structures, each solving $P$ with a specific trade-off in the use of some resources (e.g., time, space, energy), and $\mathcal A$ is an optimization algorithm that selects from $\mathcal F$ the data structure that best fits an instance of $P$.
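As an illustration of the $\langle \mathcal F, \mathcal A \rangle_P$ abstraction, the following Python sketch models each member of the family $\mathcal F$ by its estimated resource costs and lets the optimizer $\mathcal A$ pick the fastest structure that fits a space budget. The candidate names and cost figures are purely illustrative assumptions, not outputs of the project.

```python
# Minimal sketch of the <F, A> pair: each candidate in the family F
# reports estimated space and query-time costs, and the optimizer A
# returns the fastest candidate that respects the space constraint.
# All candidates and numbers below are illustrative, not real data.

from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    space_bytes: int      # estimated space occupancy
    query_time_ns: float  # estimated time per query

def select_best(family, space_budget_bytes):
    """The optimization algorithm A: among the structures in F that
    fit the space budget, return the one with the fastest queries."""
    feasible = [c for c in family if c.space_bytes <= space_budget_bytes]
    if not feasible:
        raise ValueError("no structure in F fits the space budget")
    return min(feasible, key=lambda c: c.query_time_ns)

# A toy family F for some search problem P.
family = [
    Candidate("plain sorted array", space_bytes=8_000_000, query_time_ns=250.0),
    Candidate("compressed bitvector", space_bytes=1_200_000, query_time_ns=900.0),
    Candidate("learned index", space_bytes=300_000, query_time_ns=400.0),
]

best = select_best(family, space_budget_bytes=500_000)
print(best.name)  # -> learned index
```

With a tight budget only the smallest structure is feasible; relaxing the budget lets $\mathcal A$ trade space for query time, which is exactly the selection the project aims to automate.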
For more details on the project, have a look at its full description here.
A compressed rank/select dictionary exploiting approximate linearity and repetitiveness.
A Combination of Convolutional and Recurrent Deep Neural Networks for Nucleosome Positioning Identification.
An extensible framework to efficiently compute alignment-free functions on a set of large genomic sequences.
A general software framework for the efficient acquisition of FASTA/Q genomic files in a MapReduce environment.
A software library to speed up sorted table search procedures via learning from data.
Data structures supporting Longest Common Extensions and Suffix Array queries, built on the prefix-free parsing of the text.
A data structure enabling fast searches in arrays of billions of items using orders of magnitude less space than traditional indexes.
A Python library of sorted containers with state-of-the-art query performance and compressed memory usage.
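Several of the artifacts above exploit the same underlying idea: learn a simple model of the key distribution and use its prediction, corrected within a known error bound, in place of a full-blown index. The sketch below shows this idea in its simplest form, with a single least-squares line and a binary search confined to a window of $\pm\varepsilon$ around the prediction; all names and the data are illustrative assumptions, not code from the project's libraries.

```python
# Illustrative sketch of learned search: fit a linear model mapping
# keys to positions, compute its maximum error eps over the data,
# then answer lookups with a binary search restricted to +/- eps
# around the predicted position.

import bisect

def fit_linear(keys):
    """Least-squares fit of position ~ slope * key + intercept."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = cov / var if var else 0.0
    return slope, mean_p - slope * mean_k

def max_error(keys, slope, intercept):
    """Largest gap between true and predicted positions."""
    return max(abs(i - (slope * k + intercept)) for i, k in enumerate(keys))

def search(keys, slope, intercept, eps, target):
    """Predict a position, then binary-search only within +/- eps of it."""
    pred = int(slope * target + intercept)
    lo = max(0, pred - eps)
    hi = min(len(keys), pred + eps + 1)
    i = bisect.bisect_left(keys, target, lo, hi)
    return i if i < len(keys) and keys[i] == target else -1

keys = list(range(0, 1_000_000, 7))          # sorted, roughly linear data
slope, intercept = fit_linear(keys)
eps = int(max_error(keys, slope, intercept)) + 1
print(search(keys, slope, intercept, eps, 700))  # position of key 700
```

When the data is close to linear, $\varepsilon$ is tiny and each lookup touches only a handful of positions, which is why a learned structure can use orders of magnitude less space than a traditional index while remaining fast.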