User Guide - Structured File Lens v2.0
Intro
This is the full User Guide for the Structured File Lens. It contains an in-depth set of instructions to fully set up, configure, and run the Lens so you can start ingesting data as part of an end-to-end system. For a guide to getting the Lens up and running in the quickest and simplest way possible, see the Quick Start Guide. Once deployed, you can use any of our ready-made sample input, mapping, and expected output files to test your Lens. For a list of what has changed since the last release, visit the User Release Notes.
Table of Contents
- 1 Intro
- 2 Table of Contents
- 3 Creating a Mapping File
- 4 Configuring the Lens
- 4.1 Configuration Manipulation
- 4.1.1 Accessing the Config
- 4.1.2 Editing the Config
- 4.1.3 Backup and Restore Config
- 4.2 Configuration Categories
- 4.2.1 Mandatory Configuration (Local Deployment)
- 4.2.2 Lens Directories Configuration
- 4.2.2.1 Directories in Lenses
- 4.2.3 AWS Configuration
- 4.2.4 Kafka Configuration
- 4.2.5 Provenance Configuration
- 4.2.6 Logging Configuration
- 4.2.7 Optional Configuration
- 5 Running the Lens
- 6 Ingesting Data / Triggering the Lens
- 6.1 RESTful API Endpoint
- 6.2 Kafka
- 6.3 S3 Lambda
- 6.4 CSV Splitting and Validation
- 6.5 XML Parsing
- 7 Output Data
- 7.1 Endpoint
- 7.2 Kafka
- 7.2.1 Dead Letter Queue
- 8 Provenance Data
- 9 REST API Endpoints
- 9.1 Process
- 9.1.1 GET /process
- 9.2 Config
- 9.2.1 GET /config
- 9.3 Update Config
- 9.3.1 PUT /updateConfig
- 9.4 License
- 9.4.1 GET /license
- 9.5 RML
- 9.6 Custom Functions
Creating a Mapping File
The first step in configuring the Structured File Lens is to create a mapping file. The mapping file is what creates the links between your source data and your target model (ontology). To assist with the creation of your RML mapping files, please see our detailed step-by-step guide on creating one from scratch.
Configuring the Lens
Each of the Lenses has a wide array of user configuration options, all of which can be set and altered both before the startup of the Lens and while a Lens is running. The former is done through environment variables in your Docker container or ECS Task Definition, and the latter through exposed endpoints, as described below. For a breakdown of every configuration option in the Structured File Lens, see the full list here.
Configuration Manipulation
Accessing the Config
Once a Lens has started and is operational, you can view the current configuration by calling the /config endpoint. This is expanded upon below, including the ability to request specific config properties.
Editing the Config
As explained below, the configuration on a running Lens can be edited through the /updateConfig endpoint.
Backup and Restore Config
A useful feature of the Lens is the ability to back up and restore your configuration. This is particularly beneficial when you've made multiple changes to the config on a running Lens and want to be able to restore them without rerunning any update config commands. To back up your config, simply call the /uploadConfigBackup endpoint, and all changes you've made to the config will be uploaded to the storage location specified in your CONFIG_BACKUP env var.
Restoring your configuration must be done on the startup of a Lens, by setting the CONFIG_BACKUP config option as an environment variable in your startup script / task definition. This must, however, be a remote directory such as S3, as anything local will be deleted if a task or container is stopped.
Configuration Categories
Mandatory Configuration (Local Deployment)
License - LICENSE
This is the license key required to operate the Lens when it is run on a local machine outside of the AWS Marketplace. Request your unique license key here.
Lens Directories Configuration
Lens Directory - LENS_DIRECTORY
This is the directory where all Lens files are stored (assuming the individual file directory config options haven't been edited). On Lens startup, if this has been declared, folders will be created at the specified location for mapping, output, yaml-mapping, provenance output, and config backup.
By default, this option is set to a local directory within the Docker container (file:///var/local/), so it isn't mandatory. As with all directories in the Lens, this can be either local or on a remote S3 bucket; we recommend using S3 when running the Lens on AWS (for example, s3://example-bucket/sflens/).
Mapping Directory URL - MAPPINGS_DIR_URL
This is the directory where your mapping file(s) are located. All mapping files within this directory are downloaded and added to the store for processing.
Output Directory URL - OUTPUT_DIR_URL
This is the directory where all generated RDF files are saved. It supports both local and remote URLs.
Provenance Output Directory URL - PROV_OUTPUT_DIR_URL
Out of the box, the Structured File Lens supports Provenance, and it is generated by default. Once generated, the Provenance is saved to output files separate from the transformed source data. This option specifies the directory where provenance RDF files are saved, and it also supports local and remote URLs.
If you do not wish to generate Provenance, you can turn it off by setting the RECORD_PROVO variable to false. In this case, the PROV_OUTPUT_DIR_URL option is no longer required. For more information on Provenance configuration, see below.
Config Backup - CONFIG_BACKUP
The Lens supports functionality to back up your configuration for the scenario where you wish to reboot your Lens. Upon calling the upload config endpoint, your configuration settings will be backed up to the URL directory specified here. It must be a remote directory such as S3 to support rebooting of the Lens.
Directories in Lenses
The Lenses are designed to support files and directories from an array of sources, including both local URLs and remote URLs such as cloud-based storage on AWS S3. The location should be expressed as a URL string (Ref. RFC-3986).
- To use a local URL for directories and files, both the file:///var/local/sflens/output/ and /var/local/sflens/output/ formats are supported.
- To use a remote http(s) URL for files, https://example.com/input-file.csv is supported.
- To use a remote AWS S3 URL for directories and files, s3://example/folder/ is supported, where the format is s3://<bucket-name>/<directory>/<file-name>. If you are using an S3 bucket for any directory and are not running the Lens via the Marketplace, you must specify an AWS access key and secret key.
AWS Configuration
When running the Lens in ECS, these settings are not required, as all credentials are taken directly from the EC2 instance running the Lens. If you wish to use AWS cloud services while running the Lens on-prem, you need to specify an AWS Access Key, Secret Key, and AWS Region. Providing your AWS credentials gives the Lens permission to access, download, and upload remote files in S3 Buckets. The region option specifies where in AWS your files and services reside. The Lenses utilise the AWS Default Credential Provider Chain, allowing a number of methods to be used; the simplest is setting the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION.
Kafka Configuration
One of the many ways to interface with the Lens is through Apache Kafka. With the Structured File Lens, a Kafka Message Queue can be used both for managing the output of data from the Lens and to trigger a Lens transformation. To properly set up your Kafka Cluster, see the instructions here. Once complete, use the following Kafka configuration variables to connect the cluster with your Lens. If you wish to use Kafka, you must switch it on by setting the LENS_RUN_STANDALONE variable to false.
The Kafka Broker (KAFKA_BROKERS) tells the Lens where to look for your Kafka Cluster, so set this property in the form <kafka-broker>:<kafka-port>. The recommended port is 9092.
All other Kafka configuration variables can be found here, all of which have default values that can be overridden.
Provenance Configuration
As previously mentioned, Provenance is generated by default. This can be turned off by setting the RECORD_PROVO variable to false; otherwise, the provenance output files will be stored at the directory specified by PROV_OUTPUT_DIR_URL. If you wish to store this Provenance remotely in an S3 Bucket, you are required to specify your region, access key, and secret key, as explained in the AWS Configuration section above.
If you wish to manage the Provenance output files through Kafka, then you can choose to use the same brokers and topic names as with the previously specified data files, or an entirely different cluster. All Provenance configuration can be found here.
Logging Configuration
When running the Lens locally from the command line using the instructions below, the Lens will automatically log to your terminal instance. In addition, archives of logs are saved within the Docker container at /var/log/datalens/archive/current/ and /var/log/datalens/json/archive/ for text and JSON logs respectively, while the current logs can be found at /var/log/datalens/text/current/ and /var/log/datalens/json/current/. By default, a maximum of 7 log files will be archived for each file type; this can be overridden. If running a Lens in an AWS cloud environment, connect to your instance via SSH or PuTTY, and the previously outlined logging locations apply. Alternatively, configuring CloudWatch Logs is the easiest way to view your Lens's live logging.
By default, the Lens logs at INFO level. This can be changed by overriding the LOG_LEVEL_DATALENS option; however, this can only be done at Lens startup, so changing it on a running Lens requires a reboot.
Optional Configuration
There is also a further selection of optional configuration options for specific situations; see here for the full list.
Running the Lens
All of our Lenses are designed and built to be versatile, allowing them to be set up and run in a number of environments, whether in the cloud or on-premises. This is achieved through the use of Docker containers. In addition, we now have full support for the Amazon Web Services Marketplace, where you can directly subscribe to and run your Lens.
Local Docker Image
To run the Lens's Docker image locally, please first ensure you have Docker installed. Once installed, execute a docker run command with the following structure, and Docker will start the container and run the Lens from your downloaded image.
For UNIX based machines (macOS and Linux):
docker run \
-e LICENSE=<<<REQUEST LICENSE>>> \
-e LENS_DIRECTORY=file:///data/sflens/ \
-e RECORD_PROVO=false \
-p 8080:8080 \
-v /User/DataLens/sflens/:/data/sflens/ \
lens-static:Release_2.0.4.250
For Windows:
docker run ^
-e LICENSE=<<<REQUEST LICENSE>>> ^
-e LENS_DIRECTORY=file:///data/sflens/ ^
-e RECORD_PROVO=false ^
-p 8080:8080 ^
-v //c//User/DataLens/sflens/:/data/sflens/ ^
lens-static:Release_2.0.4.250
The above examples demonstrate how to override configuration options using environment variables in your Lens. Since the Lens runs on port 8080, line 5 exposes and binds that port on the host machine so that the APIs can be triggered. The -v flag on line 6 mounts the working directory into the container; when the host directory of a bind-mounted volume doesn't exist, Docker will automatically create it on the host for you. Finally, line 7 is the name and version of the Docker image you wish to run. For more information on running Docker images, see the official docs.
Structured File Lens via AWS Marketplace
To run the Structured File Lens on AWS, we have full support for the AWS Marketplace. First subscribe to the Structured File Lens, then use the CloudFormation template we have created to deploy a one-click solution, starting up an ECS Cluster with all the required permissions and networking, with the Lens running within it as a task. See here for more information about how the template works and what is initialised.
For more information on the Architecture and Deployment of an Enterprise System, see our guide.
Alternatively, you can manually start the Lens by creating a Task Definition to be run within an ECS or EKS cluster, using the Lens's Image ID, exposing port 8080, and ensuring there is a Task Role with at least the AmazonS3FullAccess and AWSMarketplaceMeteringRegisterUsage policies included.
Ingesting Data / Triggering the Lens
The Structured File Lens supports a number of ways to ingest your data files. While all five supported file types (CSV, XML, JSON, XLSX, and ODS) are ingested in the same way, there are some additional parameters you may wish to set for CSV and XML, as detailed below.
RESTful API Endpoint
First, the easiest way to ingest a file into the Structured File Lens is to use the built-in APIs. Using the process GET endpoint, you can specify the URL of a file to ingest, along with the logicalSource correlating to your mapping, and in return you will be provided with the URL(s) of the generated RDF data file(s).
The structure and parameters for the GET request are as follows: http://<lens-ip>:<lens-port>/process?inputFileURLs=<input-file-url>&logicalSources=<logical-source>, for example http://127.0.0.1:8080/process?inputFileURLs=file:///var/local/input-data.csv&logicalSources=sample.csv, where the response is a success report in the form of a JSON object.
For ingestions where you wish to transform an input source file with a specific mapping file, or with the mapping files within a specific directory, from v2.0.21 onwards you can also pass the URL of your mapping file or mapping directory as part of the GET request. For example: http://<lens-ip>:<lens-port>/process?inputFileURLs=<input-file-url>&logicalSources=sample.csv&mappingURL=<mapping-url>. If no mapping file URL is specified in the request, the Lens will run normally, importing all mapping files within the mapping directory URL specified in the configuration.
In addition, you may wish to ingest multiple input source files in instances where there are joins in your mapping, creating links between your data. This is done simply by including multiple inputFileURLs and logicalSources parameters in your request. For example, ingesting two files would look like the following: /process?inputFileURLs=file:///var/local/employees-123.csv&logicalSources=employees.csv&inputFileURLs=file:///var/local/jobRoles-123.xml&logicalSources=jobRoles.xml. This can be extended by also including a specific mapping in the request, in the same way as the example above.
Kafka
The second, and the more versatile and scalable, ingestion method is to use a message queue such as Apache Kafka. To set up a Kafka Cluster, follow the instructions here. In short, to ingest files into the Structured File Lens you must set up a Producer and connect to your cluster by setting the KAFKA_BROKERS variable to your Kafka endpoint, and ensure the LENS_RUN_STANDALONE configuration is set to false. The topic this Producer publishes to must have the same name that you specified in the KAFKA_TOPIC_NAME_SOURCE config option (defaults to “source_urls”). Once set up, each message sent from the Producer must be a JSON object containing the URL(s) of the input file(s) and the correlating logical sources, as well as, optionally, a mapping file or mapping directory if you wish to give a specific mapping per process.
The JSON object must be structured as follows:
{
"input": [
{"inputFileURL": "", "logicalSource": ""},
{"inputFileURL": "", "logicalSource": ""}
],
"mappingURL": ""
}
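For illustration, a message pairing the two example files from the endpoint section above with their logical sources, and pointing at a hypothetical mapping directory, might look like the following (a sketch; all URLs are examples only):
{
  "input": [
    {"inputFileURL": "file:///var/local/employees-123.csv", "logicalSource": "employees.csv"},
    {"inputFileURL": "file:///var/local/jobRoles-123.xml", "logicalSource": "jobRoles.xml"}
  ],
  "mappingURL": "s3://example-bucket/sflens/mapping/"
}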
S3 Lambda
If you wish to use Kafka, and you are also using S3 to store your source data, we have developed an AWS Lambda to aid with the ingestion of data into your Structured File Lens. The Lambda is designed to monitor a specific Bucket in S3; when a file arrives or is modified in a specific directory, a message is written to a specified Kafka Topic containing the URL of the new or modified file, which will then be ingested by the Lens. If you wish to use this Lambda, please contact us for more information.
CSV Splitting and Validation
While ingesting CSV files is the same as with XML and JSON, there are a couple of points to note. A very large CSV file with a large number of rows will be split into chunks and processed separately, by default every 100,000 lines. This allows for better performance and continuous output of RDF files. When processing via Kafka, messages are continuously pushed to the Success Queue; when using the Process endpoint, however, the response will only be returned once the entire file transformation has been completed. The chunk size can be overridden with the configuration option MAX_CSV_ROWS, or chunking can be turned off entirely by setting this to 0; this is not recommended unless your machine or instance has a significant amount of RAM.
In addition, CSV files are validated by default before being processed, and any erroneous lines will be removed and not transformed. This has a negligible effect on performance, but it can be turned off by setting VALIDATE_CSV to false. Please note that multiline CSV records are not supported by this validation functionality, so turn this feature off if multiline records are present in your dataset.
XML Parsing
Ingesting XML files is also the same process as with CSV and JSON files; however, there are currently two different parsing methodologies available. This is explained in more detail in our mapping guide: SAXPath can be used for faster transformation speed, while XPath can be utilised when complex iterators are used, including accessing parent nodes.
Output Data
The Lenses can create output data for both Semantic Knowledge Graphs and Property Graphs. The data produced for Knowledge Graphs is RDF, whereas for Property Graphs it is in the form of two CSV files, one for nodes and one for edges.
When creating RDF data for Semantic Knowledge Graphs, the Lens supports a number of serialisations: NQuads, NTriples, JSON-LD, Turtle, TriG, and TriX. By default, the resulting RDF is represented as NQuads, or NTriples if provenance is off; this can be changed by setting the configuration option OUTPUT_FILE_FORMAT to either nquads, ntriples, jsonld, turtle, trig, or trix.
To create CSV output data for a Property Graph, you must turn on Property Graph mode by setting PROPERTY_GRAPH_MODE to true, and then select your graph provider by setting PG_GRAPH to either neptune, tigergraph, neo4j, or default.
The RDF or CSV data files created and output by the Lens are the same regardless of how the Lens was triggered; only the way this information is communicated back to you varies slightly for each method.
Endpoint
Once an input file has successfully been processed after being ingested via the Process endpoint, the response returned from the Lens is a JSON object. Within the JSON response is the outputFileLocations element, which contains a list of the URLs of all generated RDF files. Usually this will be a single file (or two files for Property Graphs); however, multiple files will be generated and listed when ingesting large CSV files.
Sample Knowledge Graph output, sketched below with a hypothetical output file URL (the outputFileLocations element is as described above):
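{
  "outputFileLocations": [
    "file:///var/local/sflens/output/output-01.nq"
  ]
}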
Sample Property Graph output, again a sketch with hypothetical URLs for the two generated CSV files (one for nodes, one for edges):
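{
  "outputFileLocations": [
    "file:///var/local/sflens/output/nodes.csv",
    "file:///var/local/sflens/output/edges.csv"
  ]
}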
Kafka
If you have a Kafka Cluster set up and running, the URL(s) of successfully generated RDF files will be pushed to your Kafka Queue, on the Topic specified in the KAFKA_TOPIC_NAME_SUCCESS config option (defaults to “success_queue”). This happens with both methods of triggering the Lens. One of the many advantages of this approach is that the transformed data can then be ingested by our Lens Writer, which will publish the RDF to a Semantic Knowledge Graph or the CSV to a Property Graph of your choice!
Dead Letter Queue
If something goes wrong during the operation of the Lens, the system will publish a message to the Dead Letter Queue Kafka topic (defaults to “dead_letter_queue”) explaining what went wrong, along with metadata about that ingestion, allowing the problem to be diagnosed and the data later re-ingested. If enabled, the provenance generated for the current ingestion will also be included as JSON-LD. This message is a JSON object with the following structure:
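The exact schema is not reproduced in this guide; as a rough sketch of the shape described above, using hypothetical field names:
{
  "message": "<description of what went wrong (hypothetical field name)>",
  "input": [
    {"inputFileURL": "file:///var/local/input-data.csv", "logicalSource": "sample.csv"}
  ],
  "provenance": "<provenance for this ingestion as JSON-LD, included if enabled>"
}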
Provenance Data
Within the Structured File Lens, time-series data is supported as standard: every time a Lens ingests some data, we add provenance information. This means that you have a full record of the data over time, allowing you to see what the state of the data was at any moment. The model we use to record Provenance information is the W3C-standard PROV-O model.
Provenance files are uploaded to the location specified in PROV_OUTPUT_DIR_URL, and this file location is then pushed to the Kafka Topic declared in PROV_KAFKA_TOPIC_NAME_SUCCESS. The provenance activities in the Structured File Lens are main-execution, kafkaActivity, and lens-iteration.
For more information on how the provenance is laid out, as well as how to query it from your Triple Store, see the Provenance Guide.
REST API Endpoints
In addition to the Process Endpoint designed for triggering the ingestion of data into the Lens, there is a selection of built-in exposed endpoints for you to call.
API | HTTP Request | URL Template | Description |
---|---|---|---|
Process | GET | /process?inputFileURLs=<input-file-url>&logicalSources=<logical-source> | Tells the Lens to ingest the file located at the specified URL using the correlating logical source. |
 | GET | /process?inputFileURLs=<input-file-url>&logicalSources=<logical-source>&mappingURL=<mapping-url> | When an input file needs to be transformed against a specific mapping file, an additional optional parameter for the mapping file or mapping directory URL can be given. |
 | GET | /process?inputFileURLs=<url-1>&logicalSources=<source-1>&inputFileURLs=<url-2>&logicalSources=<source-2> | You can also process multiple input source files at once, for instances where there are joins in your mapping, creating links between your data. Each input file requires a correlating logical source pairing it with the source in the mapping. (This approach can also use the specific mapping parameter.) |
Config | GET | /config | Displays configuration as a JSON string. |
 | GET | /config?paths=<comma-separated-list> | Displays all Lens configuration specified in the comma-separated list. |
Update Config | PUT | /updateConfig?configEntry=<entry>&configValue=<value> | Updates configuration options on a running Lens. |
Upload Config Backup | PUT | /uploadConfigBackup | Uploads the current configuration to the specified config backup location so that it can be restored at a later date. |
License | GET | /license | Displays license information. |
RML | GET | /rml?fileName=<file-name> | Displays the RML mapping file at the specified location, in Turtle RDF serialisation. |
 | PUT | /rml?fileName=<file-name> | Deploys a new mapping file, given in the request body, to the Lens. |
Functions | GET | | Allows for the deployment of new custom functions to the Lens. It will download the files set in the CUSTOM_FUNCTION_JAR_URL and CUSTOM_FUNCTION_TTL_URL config options. |
Restart Kafka | GET | | Turns the Lens's Kafka connection on or off depending on its current state. |
Process
GET /process
As previously outlined in the Ingesting Data via Endpoint section, using the process endpoint is one way of triggering the Lens to ingest your source data. When an execution of the Lens fails after being triggered in this way, the response will have status 400 Bad Request and contain a JSON message similar to that sent to the dead letter queue, as outlined above.
Config
GET /config
The config endpoint is a GET request that allows you to view the configuration settings of a running Lens. By sending GET http://<lens-ip>:<lens-port>/config (for example http://127.0.0.1:8080/config), you will receive the entire configuration represented as a JSON object, as seen in the small snippet below. All confidential values (such as credentials) are hidden. This endpoint is also useful as a means of health-checking the Lens.
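For illustration, the response might begin like the following sketch (the property names under lens.config follow the paths parameter example below; the values shown are hypothetical):
{
  "lens": {
    "config": {
      "friendlyName": "StructuredFileLens",
      "outputDirUrl": "file:///var/local/sflens/output/"
    }
  }
}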
Alternatively, you can specify exactly which config options you wish to return by providing a comma-separated list of variables in the paths parameter. For example, the request GET http://<lens-ip>:<lens-port>/config?paths=lens.config.outputDirUrl,logging.loggers would return the following.
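Under the same assumptions as the snippet above, the filtered response would contain only the requested paths, for example (illustrative values):
{
  "lens": {
    "config": {
      "outputDirUrl": "file:///var/local/sflens/output/"
    }
  },
  "logging": {
    "loggers": {
      "datalens": "INFO"
    }
  }
}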
Update Config
PUT /updateConfig
The configuration on a running Lens can now be edited without having to restart it. This is done through the update config endpoint. For example, running /updateConfig?configEntry=friendlyName&configValue=GraphBuilder changes the friendly name of the Lens to GraphBuilder. To see a list of the configuration entry names, consult the Structured File Lens Configurable Options.
License
GET /license
The license endpoint is a GET request that allows you to view information about the license key in use on a running Lens. By sending GET http://<lens-ip>:<lens-port>/license (for example http://127.0.0.1:8080/license), you will receive a JSON response containing the following values.
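The exact values are not reproduced in this guide; as a rough sketch, the response contains license details along the lines of the following (hypothetical field names and values):
{
  "licensee": "Example Org",
  "expiryDate": "2021-12-31",
  "valid": true
}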
RML
The RML endpoints are all about the mapping file that you will have created. They consist of a GET and a PUT endpoint, allowing you to retrieve the master mapping file currently in use on the Lens, as well as to replace the master mapping file with a new one.
GET /rml
By sending GET http://<lens-ip>:<lens-port>/rml?fileName=<file-name>, you will receive a response containing the contents of the mapping file at the specified location, written in RDF/Turtle.
PUT /rml
By sending PUT http://<lens-ip>:<lens-port>/rml?fileName=<file-name> with a Turtle mapping file in the body of the request, the Lens will upload it to the specified file location. If a file already exists at that location it will be replaced; otherwise, a new file will be created. The mapping file should be in RDF/Turtle format, and the declared HTTP Content-Type should be text/turtle. A successful upload is indicated by an empty response with HTTP status OK (Ref. RFC-7231), and the new mapping will be functional immediately.
Custom Functions
If, when designing the mapping file for your Lens, you require an operation that cannot be performed using the built-in functions, it is possible to create and use your own. This can be done by setting the CUSTOM_FUNCTION_JAR_URL and CUSTOM_FUNCTION_TTL_URL config options to point at your jar and ttl files, and calling this endpoint to download and apply these files. For further instructions on how to correctly carry out this process, please see our guide.