Bigger Data,
Integrated Faster
Getting Started with EVL
The EVL Data Integration Suite gives ETL (Extract-Transform-Load) developers the power to solve complex problems by keeping the simple things simple and splitting complex problems into simple steps.

Code Based. Developer Friendly.

EVL’s code-based ETL job development is more efficient and flexible than GUI systems, while providing a good balance of robustness and functionality. Components, templates, variables, and a high level of abstraction make creating and orchestrating complicated ETL jobs faster and simpler. Knowing the basics of C++ and bash is enough to get started with EVL.
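To give a feel for the code-based style, here is a minimal extract-transform-load step written as a plain shell pipeline. This is not actual EVL job syntax, and the file names and data are made up; it only illustrates the scriptable, composable approach the suite builds on.

```shell
#!/bin/sh
# Illustrative only: a tiny ETL step as a shell pipeline.
# NOT real EVL syntax; data and paths are hypothetical.
set -eu

# Extract: a small sample source file.
cat > /tmp/source.csv <<'EOF'
id,name,amount
1,alice,10
2,bob,20
3,carol,30
EOF

# Transform: keep the header plus rows with amount >= 20.
awk -F, 'NR==1 || $3 >= 20' /tmp/source.csv > /tmp/target.csv

# Load: here we just print; a real job would write to the target system.
cat /tmp/target.csv
```

Because each step is plain code, it can be versioned, diffed, templated, and reviewed like any other source file.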


Visual Inspection Tools

EVL Manager is packaged with tools to view code-based ETL jobs and workflows for easier management and troubleshooting.


EVL uses intelligent processing for high-performance joins and aggregations. With EVL, expensive DBMS processing can also be moved to much cheaper processing on ordinary Linux machines.
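As a sketch of the kind of work that can move off a DBMS onto an ordinary Linux box, the following uses only standard Unix tools (not EVL's own components) to perform the equivalent of a SQL join plus GROUP BY aggregation; the data is invented for illustration.

```shell
#!/bin/sh
# Sketch: join + aggregation with standard Unix tools, i.e. the class of
# work that can leave the DBMS. Data is made up; EVL components not shown.
set -eu

cat > /tmp/orders.csv <<'EOF'
1,100
1,50
2,70
EOF
cat > /tmp/customers.csv <<'EOF'
1,alice
2,bob
EOF

# Roughly: SELECT c.name, SUM(o.amount)
#          FROM customers c JOIN orders o ON c.id = o.id GROUP BY c.name
sort -t, -k1,1 /tmp/orders.csv    > /tmp/orders.sorted
sort -t, -k1,1 /tmp/customers.csv > /tmp/customers.sorted
join -t, /tmp/customers.sorted /tmp/orders.sorted \
  | awk -F, '{sum[$2] += $3} END {for (n in sum) print n "," sum[n]}' \
  | sort > /tmp/totals.csv

cat /tmp/totals.csv
```

The sort-merge pattern shown here scales to files far larger than RAM, which is one reason commodity Linux machines can absorb this workload.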

Ingestion Job Comparison

| Setup | Machines (cores) | Job Duration |
|---|---|---|
| Edge Node | 1 (16 cores) | 7-8 minutes |
| Old Solution | | 40-45 minutes |
| 3 Data Nodes | 2 (2x10 cores) | 4-8 hours |


Linux Environment: Stable and Secure

  • Resource efficient: EVL installations can be as small as 40 MB.
  • Utilizes open-source security tools to protect data.

EVL Data Integration Tools


EVL

The ETL tool itself. EVL jobs can be run from the command line, by EVL Workflow, or by any other scheduler and/or job manager.

EVL Workflow

A job manager that orchestrates EVL jobs or any other command-line command. It can be used standalone, with no need for an EVL installation.

EVL Manager

A web user interface to monitor and manage EVL jobs and EVL Workflows.
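Since jobs are launched from the command line, any scheduler can drive them. Below is a hedged sketch of a workflow wrapper in plain shell; the step commands are placeholders (`touch`), not real EVL invocations, and the log path in the cron example is hypothetical.

```shell
#!/bin/sh
# Sketch: driving command-line jobs from any scheduler.
# Step commands are placeholders, not real EVL invocations.
set -eu

run_job() {
    # Run one step; abort the workflow on the first failure.
    name=$1; shift
    echo "starting: $name"
    "$@" || { echo "FAILED: $name" >&2; exit 1; }
    echo "finished: $name"
}

run_job extract   touch /tmp/wf_extract.done
run_job transform touch /tmp/wf_transform.done
run_job load      touch /tmp/wf_load.done

# A scheduler such as cron could then launch this script, e.g.:
#   0 2 * * * /path/to/workflow.sh >> /var/log/workflow.log 2>&1
```

Fail-fast sequencing like this is the simplest form of orchestration; EVL Workflow exists precisely so that dependencies and retries need not be hand-rolled this way.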


> Replacement of traditional ETL

> Performance optimization of homemade ETL

> Migration from relational database to big data

> Heterogeneous data integration

> IoT – streaming and messaging integration

> Fast prototyping in proof of concepts

Data Warehousing

  • Staging data – We provide predefined generic jobs for staging, including getting data definitions from DBMSs and CSV files, with switches such as “incremental/delta/full load”, SCD2 historization, etc.
  • Move SQL into ETL – Replace DBMS processing with much cheaper processing on ordinary Linux machines. We provide the SQL2EVL script, which automates most of the work.
  • Replace heavy ETL – For some ETL tools we provide migration scripts that make migrations straightforward.
  • Data Marts – High-performance joins and aggregations are well suited to preparing data for Data Marts.
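The “incremental/delta” switch mentioned above can be sketched with standard tools: load only the rows that are new relative to the staged snapshot. File names and data here are hypothetical, and EVL’s predefined staging jobs are not shown.

```shell
#!/bin/sh
# Sketch of a delta load: new extract minus staged snapshot.
# Hypothetical files; not EVL's staging job.
set -eu

cat > /tmp/staged.csv <<'EOF'
1,alice
2,bob
EOF
cat > /tmp/extract.csv <<'EOF'
1,alice
2,bob
3,carol
EOF

# Delta = rows present in the new extract but not in the staged data.
sort /tmp/staged.csv  > /tmp/staged.sorted
sort /tmp/extract.csv > /tmp/extract.sorted
comm -13 /tmp/staged.sorted /tmp/extract.sorted > /tmp/delta.csv

cat /tmp/delta.csv
```

Only the delta file then needs to be loaded, which is what makes incremental staging cheap compared with a full reload.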

Data preparation in Hadoop

  • Data ingestion – We provide generic jobs for data ingestion, including data masking (salt, encryption, hash), enrichment by lookup, etc.
  • Spark code generator – EVL jobs can wrap Spark template code. Use the power of Spark while keeping the solution design clear and easily debuggable.
  • High-performance JSON parsing – Filter/mask/cleanse your data immediately on the Edge node.
  • Parquet producer – Generate this columnar file format directly from sources, including partitioning.
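The salted-hash masking mentioned in the ingestion bullet can be sketched as follows: replace a sensitive field with a salted SHA-256 digest before the data leaves the edge. The salt, data, and file names are made up, and EVL’s own masking jobs are not shown.

```shell
#!/bin/sh
# Sketch of salted-hash masking during ingestion.
# Hypothetical salt and data; not EVL's masking job.
set -eu
SALT="demo-salt"   # illustrative only; keep real salts out of scripts

cat > /tmp/raw.csv <<'EOF'
1,alice@example.com,100
2,bob@example.com,200
EOF

# Mask column 2 with sha256(salt + value), truncated for readability.
while IFS=, read -r id email amount; do
    masked=$(printf '%s%s' "$SALT" "$email" | sha256sum | cut -c1-16)
    printf '%s,%s,%s\n' "$id" "$masked" "$amount"
done < /tmp/raw.csv > /tmp/masked.csv

cat /tmp/masked.csv
```

Hashing with a salt keeps the masked values stable for joins while preventing trivial dictionary lookups of the original values.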

Internet of Things

  • Move processing to the Edge – EVL is lightweight and suits any Linux installation, even those with limited resources.

Real-time data processing

  • Stream data processing – Kafka, Flume, or any other streams or queues can be switched (i.e., consumed, modified, and produced) by EVL.
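The consume-modify-produce pattern can be sketched with a plain pipe standing in for the stream; in a real deployment the same filter could sit between a consumer and a producer (for example Kafka’s console consumer/producer tools). The source function and data below are invented, and no EVL stream components are shown.

```shell
#!/bin/sh
# Sketch: consume-modify-produce, simulated with a pipe.
# produce() is a stand-in for a real stream source.
set -eu

produce() {
    printf 'sensor1 21.5\nsensor2 19.0\nsensor1 22.3\n'
}

# Modify: tag accepted messages and drop readings below 20.
produce | awk '$2 >= 20 {print "ok", $0}' > /tmp/stream.out

cat /tmp/stream.out
```

Because the transform is just a filter on stdin/stdout, the same logic works unchanged whether the stream arrives from a file, a socket, or a message queue.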


100% of our PoCs have led to a production installation, so please do not hesitate to contact us: team@evltool.com. We definitely prefer to invest in proofs of concept rather than fluffy marketing messages!