Data Lens is a simple, scalable platform for getting all your data into any Graph Database. Welcome to the User Guide, which explains how to set up, use, and get the most out of each of the Lenses and services Data Lens has to offer.
The Structured File Lens is used to ingest structured files and transform them into RDF. The supported file types are CSV, JSON and XML.
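To illustrate the kind of transformation the Structured File Lens performs, here is a minimal sketch that turns CSV rows into RDF triples in N-Triples syntax. The namespace, property URIs, and function names are hypothetical illustrations, not the Lens's actual mapping engine or defaults.

```python
import csv
import io

# Hypothetical namespace for the generated resources and properties.
EX = "http://example.com/"

def csv_to_ntriples(csv_text: str) -> list:
    """Turn each CSV row into subject-predicate-object triples (N-Triples)."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Use the 'id' column to mint a subject URI for the row.
        subject = f"<{EX}person/{row['id']}>"
        for column, value in row.items():
            if column == "id":
                continue
            # Every remaining column becomes one triple with a literal object.
            triples.append(f'{subject} <{EX}{column}> "{value}" .')
    return triples

data = "id,name,city\n1,Alice,London\n2,Bob,Paris\n"
for triple in csv_to_ntriples(data):
    print(triple)
```

In practice the Lens drives this transformation from your mapping file rather than hard-coded column handling, but the input-to-triples shape is the same.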
Ingest data from SQL databases with ease. All popular SQL databases are supported via a JDBC connection.
Configure this Lens to fetch data from any RESTful endpoint.
Using AI technologies, the Document Lens will analyse and tag your documents, linking them to other data in your Knowledge Graph. Supported types are .docx, .pdf, and .txt.
The Stream Lens, similar to the Structured File Lens, transforms flat data files such as CSV, JSON, and XML, but adds optimised support for very large files (1GB+) using streaming technology such as Apache Flink.
We support writing to all Semantic Knowledge Graph databases. Support for writing directly to Property Graph databases is coming soon.
One of the many ways to interface with the Lenses is through the use of Apache Kafka. A Kafka Message Queue can be used for managing both the input and the output of data to and from the Lens.
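Conceptually, a Lens sits between an input topic and an output topic: ingestion requests arrive on one queue and transformed results are published to another. The sketch below illustrates that flow with in-memory queues; real deployments use Kafka topics, and the message fields shown here are purely illustrative, not the Lens's actual message schema.

```python
import json
import queue

# Stand-ins for Kafka topics; a real setup would use a Kafka broker.
input_topic = queue.Queue()
output_topic = queue.Queue()

def lens_process(message: str) -> str:
    """Stand-in for a Lens transformation: acknowledge and annotate the request."""
    request = json.loads(message)
    return json.dumps({"source": request["url"], "status": "transformed"})

# A producer drops an ingestion request onto the input topic...
input_topic.put(json.dumps({"url": "file:///data/people.csv"}))

# ...and the Lens consumes it, transforms it, and publishes the result.
while not input_topic.empty():
    output_topic.put(lens_process(input_topic.get()))

result = output_topic.get()
print(result)
```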
Time-series data is supported as standard. Every time a Lens ingests data, provenance information is added, giving you a full record of your data over time and allowing you to see what the state of the data was at any moment.
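The following sketch shows why timestamped provenance enables this kind of "as of" query: because each ingestion is recorded with its own timestamp, the state of any fact can be replayed at a chosen moment. The record structure and field names are illustrative only, not the actual provenance model Data Lens stores.

```python
from datetime import datetime

# Each ingestion event is kept, not overwritten, with its timestamp.
records = [
    {"subject": "sensor-1", "value": 20, "ingested_at": datetime(2023, 1, 1)},
    {"subject": "sensor-1", "value": 25, "ingested_at": datetime(2023, 6, 1)},
]

def state_at(subject: str, moment: datetime):
    """Return the most recent value for `subject` as of `moment`."""
    candidates = [r for r in records
                  if r["subject"] == subject and r["ingested_at"] <= moment]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r["ingested_at"])["value"]

print(state_at("sensor-1", datetime(2023, 3, 1)))  # state in March: 20
print(state_at("sensor-1", datetime(2023, 7, 1)))  # state in July: 25
```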
The mapping file is what creates the links between your source data and your target model (ontology): it tells the Lens which source fields map to which ontology terms.
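To make the idea concrete, here is a toy mapping applied to one source record. The dictionary syntax is hypothetical and exists only to show the concept of source-to-ontology linking; the actual Data Lens mapping file format may differ.

```python
# Hypothetical mapping: a URI template for subjects, plus a
# source-column -> ontology-property lookup.
MAPPING = {
    "subject_template": "http://example.com/person/{id}",
    "columns": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "city": "http://example.com/ontology/city",
    },
}

def apply_mapping(row: dict) -> list:
    """Apply the mapping to one source record, yielding (s, p, o) triples."""
    subject = MAPPING["subject_template"].format(**row)
    return [(subject, predicate, row[column])
            for column, predicate in MAPPING["columns"].items()
            if column in row]

triples = apply_mapping({"id": "1", "name": "Alice", "city": "London"})
for triple in triples:
    print(triple)
```

The key point is that the source data and the target ontology stay decoupled: changing the target model means editing the mapping, not the source files.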