SPARQL Autocompletion on Very Large Knowledge Bases

VLDB 2021: Reproducibility Material for Submission #1807

Evaluation web application

Click through our evaluation and explore our results in full detail

Evaluation web app

Evaluation script, queries, AC query templates, and result files

Evaluation Script

Source code of our extended version of QLever

Note: QLever needs several submodules which are not included in the following download link. To obtain a fully working version of QLever, follow the instructions in the next section.
QLever Sources (without required external submodules)
Alternatively, we provide a pre-compiled Docker image:

Docker image of our QLever extensions

Running QLever, Virtuoso and Blazegraph

Machine Requirements

We ran our experiments on an AMD Ryzen 7 3700X CPU (8 cores + SMT) with 128 GB of DDR4 RAM and 4 TB of SSD storage (NVMe, RAID 0). To roughly reproduce our results, you need a similar machine. In particular, you need (at least) 128 GB of RAM and 3 TB of SSD storage (needed by QLever's Wikidata index). If you only want to run evaluations on the two smaller datasets (Freebase and Fbeasy), 2 TB of SSD storage suffice. Running only the Fbeasy evaluations should also work on a machine with 500 GB of SSD storage and 64 GB of RAM. Your machine needs to run Linux, and Docker must be installed. (Everything runs inside Docker, so the exact Linux version and distribution should not matter much; we used Ubuntu 18.04.)
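Before starting, the requirements above can be verified with a quick pre-flight check. The following is a minimal sketch (the thresholds are taken from the text; the script itself is not part of the original material and assumes a Linux machine with GNU coreutils):

```shell
#!/usr/bin/env bash
# Sketch of a pre-flight check for the Wikidata evaluation: Docker
# installed, at least 128 GB of RAM, at least 3 TB of free disk space.
# Thresholds are from the text; lower them for the smaller datasets.

need_ram_gb=128    # at least 128 GB of RAM
need_disk_gb=3000  # 3 TB of SSD storage for QLever's Wikidata index

# meets HAVE NEED -- succeeds if HAVE is at least NEED.
meets() { [ "$1" -ge "$2" ]; }

# Total RAM in GB, from /proc/meminfo (Linux only).
have_ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
# Free space in GB on the current filesystem (GNU df).
have_disk_gb=$(df --output=avail -BG . | tail -n 1 | tr -dc '0-9')

command -v docker >/dev/null || echo "warning: Docker not found"
meets "$have_ram_gb" "$need_ram_gb" || echo "warning: less than ${need_ram_gb} GB RAM"
meets "$have_disk_gb" "$need_disk_gb" || echo "warning: less than ${need_disk_gb} GB free disk"
```

Run it from the directory where the index files will be stored, so the free-disk check applies to the right filesystem.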

Instructions for Running the Evaluation
How to run Virtuoso
How to run Blazegraph
How to run QLever
How to run the evaluation

Index Files

index files for virtuoso-wikidata
index files for virtuoso-freebase
index files for virtuoso-fbeasy
index files and executables for blazegraph (all KBs)
index files for qlever-wikidata
index files for qlever-freebase
index files for qlever-fbeasy