User Guide - Stream Lens v1.0

Intro

This is the full User Guide for the Stream Lens. It contains an in-depth set of instructions to fully set up, configure, and run the Lens so you can start ingesting data as part of an end-to-end system. For a guide to getting the Lens up and running in the quickest and simplest way possible, see the Quick Start Guide. Once deployed, you can use any of our ready-made sample input, mapping, and expected output files to test your Lens. For a list of what has changed since the last release, see the User Release Notes.

 


Overview

The Stream Lens, similar to the Structured File Lens, is able to translate structured flat data files, such as CSV, JSON, and XML, into fully W3C-compatible semantic web data. However, this Lens is optimised to be able to process very large data files, greater than 1GB in size, under conditions of limited resources. This is made possible by splitting the transformation and spreading it across nodes within an autonomous Apache Flink cluster.

The Lens Structure

A minimal, fully functional Stream Lens cluster consists of three major components:

  • Stream Lens node

  • Apache Flink sub-cluster, which itself is composed of:

    • Job manager node

    • Task manager (at least one)

  • Shared file system

The Stream Lens node provides both a RESTful interface and ingestion of Apache Kafka messages, while also handling the downloading and processing of the mapping and input data files. The Apache Flink Job Manager provides a web and RESTful access point to the Apache Flink cluster. After the streaming job is triggered, it is responsible for launching the application within the cluster and spreading it across the task managers. The Task Manager executes the main transformation job, where each task manager writes its output data to a separate output file. For more information about the Apache Flink cluster structure and its functionality, see the project documentation pages.

 


Creating the Mapping File

The first step in configuring the Stream Lens is to create a mapping file. The mapping file is what creates the links between your source data and your target model (ontology). It can be created using our online Data Lens Mapping Tool, which provides an intuitive web-based UI. Log in here to get started, and select the option for Structured File Lens. Similar to the Stream Lens, the Structured File Lens is capable of ingesting XML, CSV, and JSON files. Creation of mapping files differs slightly between file types, so ensure you select the correct options for your use case. Alternatively, the Mapping Tool can be deployed to your own infrastructure; this enables additional functionality such as the ability to update mapping files on a running Lens. To do this, follow these instructions.

However, if you wish to create your RML mapping files manually, there is a detailed step-by-step guide on creating one from scratch.

 


Configuring the Lens

All Lenses supplied by Data Lens are configurable through the use of Environment Variables. How to declare these environment variables will differ slightly depending on how you choose to run the Lens, so please see Running the Lens for more info. For a breakdown of every configuration option in the Stream Lens, see the full list here.

Mandatory Configuration

For the Lens to operate, the following configuration options are required; a minimal example configuration follows this list.

  • License - LICENSE

    • This is the license key required to operate the Lens; request your unique license key here. (This option is not required when running the Lens via AWS Marketplace.)

  • Mapping Directory URL - MAPPINGS_DIR_URL

    • This is the directory where your mapping file(s) are located. As with all directories, this can be either local or in a remote S3 bucket.

  • Output Directory URL - OUTPUT_DIR_URL

    • This is the directory where all generated RDF files are saved to. This also supports local and remote URLs.

  • Provenance Output Directory URL - PROV_OUTPUT_DIR_URL

    • Out of the box, the Stream Lens supports Provenance, and it is generated by default. Once generated, the Provenance is saved to RDF output files separate from the transformed source data. This option specifies the directory where provenance RDF files are saved, and it also supports local and remote URLs.

    • If you do not wish to generate Provenance, you can turn it off by setting the RECORD_PROVO variable to false. In this case, the PROV_OUTPUT_DIR_URL option is no longer required. For more information on Provenance configuration, see below.
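
As a minimal sketch, the mandatory options above could be declared in a lens-configuration.env file (or exported as environment variables). The license key value below is a placeholder, and the directory URLs reuse the local paths from the docker-compose example later in this guide.

LICENSE=<your-license-key>
MAPPINGS_DIR_URL=file:///mnt/efs/mapping/
OUTPUT_DIR_URL=file:///mnt/efs/output/
PROV_OUTPUT_DIR_URL=file:///mnt/efs/prov-output/

# Alternatively, disable provenance generation, in which case
# PROV_OUTPUT_DIR_URL is no longer required:
# RECORD_PROVO=false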

AWS Configuration

If you wish to use cloud services such as Amazon Web Services, you need to specify an AWS Access Key, Secret Key, and AWS Region through AWS_ACCESS_KEY, AWS_SECRET_KEY, and S3_REGION respectively. Providing your AWS credentials gives the Lens permission to access, download, and upload remote files in S3 Buckets. The S3 Region option specifies the AWS region where your files and services reside. Please note that all services must be in the same region, including when you run the Lens on an EC2 instance or with Lambdas.
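
For example, the AWS options might sit alongside the mandatory configuration as follows; the key values are placeholders and the region is only illustrative.

AWS_ACCESS_KEY=<your-aws-access-key>
AWS_SECRET_KEY=<your-aws-secret-key>
S3_REGION=eu-west-2

# With these set, S3 URLs can be used for any directory, e.g.:
# OUTPUT_DIR_URL=s3://<bucket-name>/output/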

Kafka Configuration

One of the many ways to interface with the Lens is through Apache Kafka. With the Stream Lens, a Kafka Message Queue can be used to manage both the input and the output of data to and from the Lens. To properly set up your Kafka Cluster, see the instructions here. Once complete, use the following Kafka configuration variables to connect the cluster to your Lens. If you do not wish to use Kafka, set the variable LENS_RUN_STANDALONE to true.

The Kafka Broker is what tells the Lens where to look for your Kafka Cluster, so set this property as follows: <kafka-ip>:<kafka-port>. The recommended port is 9092.
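
As an illustrative sketch only: the exact name of the broker option is listed in the full Kafka configuration list linked below, and KAFKA_BROKERS here is an assumed name. The topic variable and the standalone switch are the documented ones.

# Assumed variable name for the broker address - check the full Kafka
# configuration list for the exact option name on your Lens version.
KAFKA_BROKERS=<kafka-ip>:9092

# Topic the Lens listens on for incoming file URLs (defaults to "source_urls")
KAFKA_TOPIC_NAME_SOURCE=source_urls

# Or, to run without Kafka entirely:
# LENS_RUN_STANDALONE=true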

All other Kafka configuration variables can be found here, all of which have default values that can be overridden.

Provenance Configuration

As previously mentioned, Provenance is generated by default; this can be turned off by setting the RECORD_PROVO variable to false, otherwise PROV_OUTPUT_DIR_URL is required. If you wish to store this Provenance remotely in an S3 Bucket, you are required to specify your region, access key, and secret key through PROV_S3_REGION, PROV_AWS_ACCESS_KEY, and PROV_AWS_SECRET_KEY respectively.
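
For instance, writing provenance to a remote S3 bucket might look like the following sketch; the bucket name, region, and credentials are placeholders.

# Provenance is generated by default; shown here for clarity
RECORD_PROVO=true
PROV_OUTPUT_DIR_URL=s3://<bucket-name>/prov-output/
PROV_S3_REGION=eu-west-2
PROV_AWS_ACCESS_KEY=<your-aws-access-key>
PROV_AWS_SECRET_KEY=<your-aws-secret-key>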

If you wish to manage the Provenance output files through Kafka, then you can choose to use the same brokers and topic names as with the previously specified data files, or an entirely different cluster. All Provenance configuration can be found here.

Logging Configuration

Logging in the Stream Lens works the same way as in all other Lenses and, like most functionality, is configurable through environment variables; the list of overridable options and their descriptions can be found here. When running the Lens locally from the command line using the instructions below, the Lens will automatically log to your terminal instance. In addition, archived logs are saved within the Docker container at /var/log/datalens/archive/current/ and /var/log/datalens/json/archive/ for text and JSON logs respectively, while the current logs can be found at /var/log/datalens/text/current/ and /var/log/datalens/json/current/. By default, a maximum of 7 log files will be archived for each file type, although this can be overridden. If running a Lens in an AWS cloud environment, connect to your instance via SSH or PuTTY; the previously outlined logging locations still apply.
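
When running via the provided docker-compose file, a quick way to follow the Lens logs is through Docker itself. The following sketch assumes the streamlens service name used in the compose file later in this guide.

# Follow the live logs of the Stream Lens service
docker-compose logs -f streamlens

# Inspect the current and archived log files inside the container
docker-compose exec streamlens ls /var/log/datalens/text/current/
docker-compose exec streamlens ls /var/log/datalens/json/archive/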

Optional Configuration

There is also a further selection of optional configurations for given situations, see here for the full list.

Directories in Lenses

The Lenses are designed to support files and directories from an array of sources. This includes both local URLs and remote URLs, such as cloud-based technologies like AWS S3. The location should be expressed as a URL string (Ref. RFC-3986).

  • To use a local URL for directories and files, both the format of file:///mnt/efs/data-lens-output/ and /mnt/efs/data-lens-output/ are supported.

  • To use a remote http(s) URL for files, https://example.com/input-file.csv is supported.

  • To use a remote AWS S3 URL for directories and files, s3://example/folder/ is supported where the format is s3://<bucket-name>/<directory>/<file-name>. If you are using an S3 bucket for any directory, and not running the Lens via the Marketplace, then you must specify an AWS access key and secret key.

Accessing the configuration of a running Lens

Once a Lens has started and is operational, you can request to view the current config by calling one of the Lens's built-in APIs; this is explained in more detail below. Please note that in order to change any config variable on a running Lens, it must be shut down and restarted.

 


Running the Lens

All of our Lenses are designed and built to be versatile, allowing them to be set up and run in a number of environments, in the cloud or on-premise. This is achieved through the use of Docker Containers. In addition, we now have full support for the Amazon Web Services Marketplace, where you can subscribe to and run your Lens directly.

Local Docker Image

To run the Lens locally, we will utilise a docker-compose file to build our stack. First, please ensure you have Docker installed.

  1. Once you have pulled the latest version of the Stream Lens, tag your local image as latest using: docker tag datalensltd/lens-stream:{VERSION} datalensltd/lens-stream:latest.

  2. Next, download the docker-compose file and the lens-configuration file. The contents of the docker-compose file are explained in more detail below.

  3. Configure your Lens by opening and adding your variables to the lens-configuration.env file.

    1. In this example, we have set our mapping and output directories to a local directory simply by assigning each variable a string value. As for the License, in this format the value will be taken from the machine's environment variable named LICENSE.

    2. The /tmp directory on the local machine is mounted as a volume at the /mnt/efs directory in the running Docker containers. Therefore, if we wish to use locally stored input, mapping, and output files with the provided docker-compose file, we should store them in the /tmp directory on the local machine.

      # Please configure your lens, example: KEY=VALUE. Leaving a field blank will take the value from your environment variables.
      LICENSE
      MAPPINGS_DIR_URL=file:///mnt/efs/mapping/
      OUTPUT_DIR_URL=file:///mnt/efs/output/
      PROV_OUTPUT_DIR_URL=file:///mnt/efs/prov-output/
  4. Please ensure the /tmp/data/ directory exists on the host machine using: mkdir -p /tmp/data/.

  5. Now, from within the directory where you downloaded the compose and config files, run the Lens using docker-compose up. A consolidated sketch of these commands is shown below.
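
Putting the steps together, a minimal local run might look like the following sketch, assuming the compose and configuration files have been downloaded into the current directory and {VERSION} is the image version you pulled.

# Tag the pulled image as latest so the compose file picks it up
docker tag datalensltd/lens-stream:{VERSION} datalensltd/lens-stream:latest

# Create the shared local directory that is mounted into the containers
mkdir -p /tmp/data/

# Start the Stream Lens, Flink job manager, and task managers
docker-compose up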

 

Docker-Compose File

version: '3'
services:
  jobmanager:
    image: flink:1.10.0-scala_2.11
    expose:
      - "6123"
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - datashared:/mnt/efs
  # This service executes the job.
  taskmanager_1:
    image: flink:1.10.0-scala_2.11
    expose:
      - "6121"
      - "6122"
    command: taskmanager
    links:
      - "jobmanager:jobmanager"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - datashared:/mnt/efs
  taskmanager_2:
    image: flink:1.10.0-scala_2.11
    expose:
      - "6121"
      - "6122"
    command: taskmanager
    links:
      - "jobmanager:jobmanager"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - datashared:/mnt/efs
  streamlens:
    image: datalensltd/lens-stream:latest
    links:
      - "jobmanager:jobmanager"
    ports:
      - "8080:8080"
    volumes:
      - datashared:/mnt/efs
    env_file:
      - lens-configuration.env
    environment:
      - LENS_RUN_STANDALONE=true
      - AWS_ACCESS_KEY
      - AWS_SECRET_KEY
      - PROV_AWS_ACCESS_KEY=$AWS_ACCESS_KEY
      - PROV_AWS_SECRET_KEY=$AWS_SECRET_KEY
      - LOCAL_SIDE_SHARED_STORAGE=/mnt/efs/data
      - FLINK_SIDE_SHARED_STORAGE=/mnt/efs/data
      - FLINK_JOB_MANAGER_ADDRESS=http://jobmanager:8081
      - RMLSTREAMER_PARALLELISM=2
volumes:
  datashared:
    driver: local
    driver_opts:
      type: none
      device: /tmp
      o: bind

Docker-Compose is a tool for defining and running multi-container Docker applications. Using Compose, we have created a YAML file to configure the application’s services so that with a single command, you create and start all the services. Compose works in all environments: production, staging, development, testing, as well as CI workflows.

 

The compose file configures a service running the Apache Flink Job Manager, which is responsible for managing Flink streams (cluster I/O operations), downloading required dependencies, and delegating jobs to the task manager(s). The compose file also provides monitoring for the RESTful API endpoint, as well as mounting the shared storage volumes.

 

As you can see in the environment section of the streamlens service, this is where a number of environment variables used for the Lens's configuration are declared. This includes AWS credentials, which in this case are taken from the local machine's environment variables. From here you can add, edit, or remove any configuration options you wish, using this list of options as a guide.

 

For more information on running Docker images with docker-compose, see the official docs.

 

Docker on AWS

The deployment approach we recommend at Data Lens is to use Amazon Web Services, both to store your source and RDF data and to host and run your Lenses and Writer.

The aim is to deploy the Lens and other services on AWS as part of an end-to-end enterprise architecture.

For more information on the Architecture and Deployment of an Enterprise System, see our guide.

 

AWS Marketplace

We now have full support for the Amazon Web Services Marketplace, where you can directly subscribe to a Lens. Then, using our CloudFormation Templates, you can deploy a one-click solution to run your Lens. See here for further details and instructions to get you started.

 


 

Ingesting Data

The Stream Lens supports a number of ways to ingest your data files; however, each of the three supported file types, CSV, XML, and JSON, is ingested in the same way.

Endpoint

First, the easiest way to ingest a file into the Stream Lens is to use the built-in APIs. Using the process GET endpoint, you can specify the URL of a file to ingest in the same way as previously outlined, and in return, you will be provided with the URL of the generated RDF data file.

The structure and parameters for the GET request are as follows: http://<lens-ip>:<lens-port>/process?inputFileURL=<input-file-url>, for example, http://127.0.0.1:8080/process?inputFileURL=file:///var/local/input-data.csv. The response is returned as JSON.
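
For example, using curl against a locally running Lens with the same illustrative input file:

# Ask the Lens to ingest a local CSV file via the process endpoint
curl "http://127.0.0.1:8080/process?inputFileURL=file:///var/local/input-data.csv"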

Kafka

The second, and more versatile and scalable, ingestion method is to use a message queue such as Apache Kafka. To set up a Kafka Cluster, follow the instructions here; in short, to ingest files into the Stream Lens you require a Producer. The topic this Producer publishes to must have the same name that you specified in the KAFKA_TOPIC_NAME_SOURCE config option (defaults to “source_urls”). Once set up, each message sent from the Producer must consist solely of the URL of the file, for example s3://examplebucket/folder/input-data.csv.
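
As a sketch, you could publish a single file URL to the source topic using the standard Kafka console producer; the broker address is a placeholder.

# Publish a file URL to the topic the Lens is listening on
echo "s3://examplebucket/folder/input-data.csv" | \
  kafka-console-producer.sh --broker-list <kafka-ip>:9092 --topic source_urls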

S3 Lambda

If you wish to use Kafka, and you are also using S3 to store your source data, we have developed an AWS Lambda to aid with the ingestion of data into your Stream Lens. The Lambda is designed to monitor a specific Bucket in S3, and when a file arrives or is modified in a specific directory, a message is written to a specified Kafka Topic containing the URL of the new/modified file. Subsequently, this will then be ingested by the Lens. For instructions on how to set up the Lambda within your AWS environment, click here.

 


 

Output Data

The data files created and output by the Lens are the same regardless of how the Lens was triggered or how the data was ingested; however, the way this information is communicated back to you differs slightly for each method.

Endpoint

Once an input file has been successfully processed after being ingested via the Process endpoint, the response returned from the Lens is a JSON document. Within the JSON response is the outputFileLocations element; this element contains a list of the URLs of all RDF files generated during that transformation.

Sample output:

{
    "input": "file:///mnt/efs/input/input-data.csv",
    "failedIterations": 0,
    "successfulIterations": 1,
    "outputFileLocations": [
        "file:///mnt/efs/output/Stream-Lens-44682bd6-3fbc-429b-988d-40dda8892328.nq"
    ]
}

Kafka

If you have a Kafka Cluster set up and running, then the URL of each successfully generated RDF file will be pushed to your Kafka Queue. It will be pushed to the Topic specified in the KAFKA_TOPIC_NAME_SUCCESS config option, which defaults to “success_queue”. One of the many advantages of this approach is that the transformed data can then be ingested by our Lens Writer, which will publish the RDF to a Semantic Knowledge Graph (or selection of Property Graphs) of your choice!
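
To watch these output file URLs arrive, you could, for example, attach a console consumer to the success topic; the broker address is a placeholder.

# Watch the success queue for URLs of newly generated RDF files
kafka-console-consumer.sh --bootstrap-server <kafka-ip>:9092 \
  --topic success_queue --from-beginning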

Dead Letter Queue

If something goes wrong during the operation of the Lens, the system will publish a message to the Dead Letter Queue Kafka topic (defaults to “dead_letter_queue”) explaining what went wrong, along with metadata about that ingestion, allowing the problem to be diagnosed and the data to be re-ingested later. If enabled, the provenance generated for the current ingestion will also be included as JSON-LD. This message will be in the form of a JSON with the following structure:

Data type

The Stream Lens supports data transformation into two different output types: NQuads and JSON-LD. By default, the resulting RDF is represented as NQuads; however, you can change this by setting the OUTPUT_FILE_FORMAT configuration option to json-ld.
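
For example, to switch the output serialisation to JSON-LD, add the following to your Lens configuration:

# Default is NQuads; override to produce JSON-LD output files
OUTPUT_FILE_FORMAT=json-ld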




 

Provenance Data

Within the Stream Lens, time-series data is supported as standard: every time a Lens ingests some data, we add provenance information. This means that you have a full record of the data over time, allowing you to see what the state of the data was at any moment. The model we use to record Provenance information is the W3C standard PROV-O model.

Provenance files are uploaded to the location specified in the PROV_OUTPUT_DIR_URL, then this file location is pushed to the Kafka Topic declared in PROV_KAFKA_TOPIC_NAME_SUCCESS. The provenance activities in the Stream Lens are main-execution, kafkaActivity, and lens-iteration.

For more information on how the provenance is laid out, as well as how to query it from your Triple Store, see the Provenance Guide.

 


 

REST API Endpoints

In addition to the Process Endpoint designed for ingesting data into the Lens, there is a selection of built-in exposed endpoints for you to call.

| API     | HTTP Request | URL Template                            | Description                                                                  |
|---------|--------------|-----------------------------------------|------------------------------------------------------------------------------|
| Process | GET          | /process?inputFileURL=<input-file-url>  | Tells the Lens to ingest the file located at the specified URL location       |
| Config  | GET          | /config                                 | Displays all Lens configuration as JSON                                       |
| Config  | GET          | /config?paths=<config-options>          | Displays the Lens configuration options specified in the comma-separated list |
| License | GET          | /license                                | Displays license information                                                  |
| RML     | GET          | /rml                                    | Displays the current RML mapping file as Turtle RDF serialisation             |
| RML     | PUT          | /rml                                    | Deploys a new mapping file, specified in the request body, to the Lens        |

 

Config

The config endpoint is a GET request that allows you to view the configuration settings of a running Lens. By sending GET http://<lens-ip>:<lens-port>/config (for example http://127.0.0.1:8080/config), you will receive the entire configuration represented as JSON. All confidential values (such as AWS credentials) are replaced with the fixed string “REDACTED”.

Alternatively, you can specify exactly which config options you wish to return by providing a comma-separated list of variables under the paths parameter. For example, the request GET http://<lens-ip>:<lens-port>/config?paths=lens.config.outputDirUrl,logging.loggers would return only those configuration values.
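
For example, using curl (the responses themselves are omitted here, as their exact shape depends on your Lens configuration):

# Retrieve the full (redacted) configuration of a running Lens
curl "http://127.0.0.1:8080/config"

# Retrieve only selected configuration values
curl "http://127.0.0.1:8080/config?paths=lens.config.outputDirUrl,logging.loggers"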

License

The license endpoint is a GET request that allows you to view information about the license key in use on a running Lens. By sending GET http://<lens-ip>:<lens-port>/license (for example http://127.0.0.1:8080/license), you will receive a JSON response containing your license details.

Process

As previously outlined in the Ingesting Data via Endpoint section, using the process endpoint is one way of triggering the Lens to ingest your source data. When an execution of the Lens fails after being triggered in this way, the response will be a status 400 Bad Request and contain a response message similar to that sent to the dead letter queue as outlined above.

RML

The RML endpoint is all about the mapping file that you created, either manually or using the Mapping Config Web App. It consists of a GET and a PUT endpoint, allowing you to retrieve the master mapping file currently in use on the Lens, as well as replace the master mapping file with a new one.

By sending GET http://<lens-ip>:<lens-port>/rml, you will receive a response containing the contents of the mapping file written in RDF/Turtle. By sending PUT http://<lens-ip>:<lens-port>/rml with a Turtle mapping file in the body of the request, the Lens will upload it to the file location specified by the MAPPINGS_DIR_URL and MASTER_MAPPING_FILE configuration options, replacing the existing file. The mapping file should be in RDF/Turtle format and the declared HTTP Content-Type should be text/turtle. A successful upload is indicated by an empty response with HTTP status OK (Ref. RFC-7231), and the new mapping is functional immediately.
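
For example, retrieving and replacing the mapping file with curl might look like the following sketch; mapping.ttl is an illustrative local file name.

# Fetch the mapping file currently in use on the Lens
curl "http://127.0.0.1:8080/rml"

# Replace the master mapping file with a local Turtle file
curl -X PUT "http://127.0.0.1:8080/rml" \
  -H "Content-Type: text/turtle" \
  --data-binary "@mapping.ttl"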