LightDB is a database management system (DBMS) designed to efficiently ingest, store, and deliver virtual reality (VR) content at scale. LightDB currently targets both live and prerecorded spherical panoramic (a.k.a. 360°) VR videos. It persists content as a multidimensional array that utilizes both dense (e.g., space and time) and sparse (e.g., bitrate) dimensions. LightDB uses orientation prediction to reduce data transfer by degrading out-of-view portions of the video. Content delivered through LightDB requires up to 60% less bandwidth than existing methods and scales to many concurrent connections.
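To make the dense/sparse distinction concrete, the following is a minimal sketch (not LightDB's actual storage layer; the names TileKey, TileStore, and make_store are illustrative assumptions) of an array keyed by dense time and spatial-tile coordinates plus a sparse bitrate coordinate, where only bitrates that were actually encoded occupy storage:

```cpp
#include <map>
#include <tuple>
#include <vector>

// Hypothetical sketch of the data model described above: dense
// dimensions (time t, tile column x, tile row y) plus a sparse
// bitrate dimension. Only materialized bitrates consume storage.
using TileKey = std::tuple<int, int, int, int>;  // (t, x, y, kbps)
using TileStore = std::map<TileKey, std::vector<unsigned char>>;

// Build a toy store: tile (0,0) at t=0 exists at only two bitrates.
TileStore make_store() {
    TileStore store;
    store[{0, 0, 0, 500}] = {0x1};   // degraded encoding
    store[{0, 0, 0, 5000}] = {0x2};  // full-quality encoding
    return store;
}
```

A lookup at a bitrate that was never encoded (say 1000 kbps) simply finds no entry, which is what makes the bitrate axis cheap to leave mostly empty.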
LightDB builds on recent work in multidimensional array processing and develops new techniques for VR data storage and retrieval and near-real-time, in-memory processing of VR videos. Our system combines the state of the art in array-oriented systems (e.g., efficient multidimensional array representation, tiling, prefetching) with recently introduced optimizations from the multimedia (e.g., motion-constrained tile sets) and machine learning (e.g., path prediction) communities. LightDB reduces bandwidth (and thus also power) consumption on client devices, scales to many concurrent connections, and offers an enhanced viewer experience over congested network connections.
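The path-prediction idea above can be sketched as follows. This is an illustrative toy, not LightDB's predictor: it linearly extrapolates the viewer's yaw from the two most recent samples and selects a degraded bitrate for tiles outside the predicted field of view. The function names, bitrates, and the 90° field-of-view assumption are all hypothetical.

```cpp
#include <cmath>

// Constant-velocity extrapolation of viewer yaw one step ahead,
// from the previous and current yaw samples (degrees).
double predict_yaw(double prev, double curr) {
    return curr + (curr - prev);
}

// Pick a bitrate for a tile: full quality if the tile's yaw lies
// within +/-45 degrees of the predicted orientation, degraded
// otherwise. The wrap-around arithmetic keeps the angular distance
// in [0, 180] even across the 0/360 boundary.
int tile_bitrate_kbps(double tile_yaw, double predicted_yaw) {
    double delta =
        std::fabs(std::fmod(tile_yaw - predicted_yaw + 540.0, 360.0) - 180.0);
    return delta <= 45.0 ? 5000 : 500;  // in-view vs. degraded
}
```

A tile behind the predicted viewpoint is fetched at a tenth of the in-view bitrate, which is the mechanism (in miniature) by which out-of-view degradation cuts bandwidth.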
To launch a LightDB server using a custom data source, instantiate a video source and pass it as an argument to the LightDB server constructor:
// Create a file-backed video source.
FileIngestAccessMethod source(name, path);
// Construct the server with its name, network endpoint, and source.
LightDBServer server(name, hostname, port, source, ...);
// Begin accepting client connections.
server.start();
LightDB currently supports loading from the file system (FileIngestAccessMethod) and from an RTMP endpoint.
This work is supported in part by the National Science Foundation through NSF grants CCF-1247469, IIS-1247469, IIS-1546083, CCF-1518703, and CNS-1563788; DARPA award FA8750-16-2-0032; DOE award DE-SC0016260; and gifts from the Intel Science and Technology Center for Big Data, Intel Corporation, Adobe, Amazon, and Google.