Code Based. Developer Friendly.
EVL’s code-based ETL job development is more efficient and flexible than GUI systems, while providing a good balance of robustness and functionality. Components, templates, variables, and a high level of abstraction make creating and orchestrating complicated ETL jobs faster and simpler. Knowing the basics of C++ and bash is enough to get started with EVL.
Visual Inspection Tools
EVL Manager is packaged with tools to view code-based ETL jobs and workflows for easier management and troubleshooting.
Performance
EVL uses intelligent processing for high-performance joins and aggregations. With EVL, expensive DBMS processing can also be moved to cheaper processing on ordinary Linux machines.
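To make the join claim concrete, here is a minimal sketch of a sort-merge join, the classic technique for joining large pre-sorted datasets outside a DBMS. This is plain Python for illustration only, not EVL syntax; the record fields are invented for the example.

```python
# Conceptual sketch (not EVL code): sort-merge join of two datasets
# already sorted by the join key. Runs in a single pass over each input.
def sort_merge_join(left, right, key):
    """Inner-join two lists of dicts on `key`; both must be sorted by it."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Emit all right-side rows in the matching key group.
            j0 = j
            while j < len(right) and right[j][key] == lk:
                out.append({**left[i], **right[j]})
                j += 1
            i += 1
            # Re-scan the group if the next left row repeats the key.
            if i < len(left) and left[i][key] == lk:
                j = j0
    return out
```

Because neither input needs to fit in a hash table, this pattern scales to very large files, which is why it suits moving joins off the database onto an ordinary Linux machine.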
Ingestion Job Comparison (chart: 3 data nodes vs. 2 machines with 2×10 cores)
Linux Environment – Stable and Secure
- Resource efficient – EVL installations can be as small as 40 MB.
- Utilize open-source security tools to protect data.
EVL Data Integration Tools
- EVL – the ETL tool itself. EVL jobs can be run from the command line, by EVL Workflow, or by any other scheduler and/or job manager.
- EVL Workflow – a job manager that orchestrates EVL jobs or any other command-line command. It can be used standalone, with no need for an EVL installation.
- EVL Manager – a web user interface to monitor and manage EVL jobs and EVL Workflows.
> Replacement of traditional ETL
> Performance optimization of homemade ETL
> Migration from relational database to big data
> Heterogeneous data integration
> IoT – streaming and messaging integration
> Fast prototyping in proof of concepts
- Staging data – We provide predefined generic jobs for staging, including getting data definitions from DBMSs and CSV files, with switches like “incremental/delta/full load”, SCD2 historization, etc.
- Move SQL into ETL – Replace DBMS processing with much cheaper processing on ordinary Linux machines. We provide the SQL2EVL script, which automates most of the work.
- Replace heavy ETL – For some ETL tools we provide migration scripts that make migrations straightforward.
- Data Marts – High-performance joins and aggregations make EVL well suited to preparing data for Data Marts.
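The SCD2 historization mentioned above is a well-defined pattern: when a tracked attribute changes, close the current record and insert a new version with fresh validity dates. A minimal sketch in plain Python, assuming hypothetical `valid_from`/`valid_to` column names (not EVL's generated job code):

```python
# Illustrative SCD2 (slowly changing dimension, type 2) historization.
# Column names and the open-ended date are assumptions for this example.
OPEN_END = "9999-12-31"

def scd2_apply(history, incoming, key, load_date):
    """Close changed records and append new versions; return new history."""
    current = {r[key]: r for r in history if r["valid_to"] == OPEN_END}
    out = list(history)
    for row in incoming:
        old = current.get(row[key])
        if old is None:
            # Brand-new key: open its first version.
            out.append({**row, "valid_from": load_date, "valid_to": OPEN_END})
        elif any(old.get(k) != v for k, v in row.items() if k != key):
            # Attribute changed: close the old version, open a new one.
            old["valid_to"] = load_date
            out.append({**row, "valid_from": load_date, "valid_to": OPEN_END})
    return out
```

Unchanged rows are left alone, so the full history of every key stays queryable by date range.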
Data preparation in Hadoop
- Data ingestion – We provide generic jobs for data ingestion, including data masking (salt, encryption, hash), enrichment by lookup, etc.
- Spark code generator – EVL jobs can wrap Spark template code. Use the power of Spark, while keeping the solution design clear and easily debuggable.
- High-performance JSON parsing – Filter/mask/cleanse your data immediately on the Edge node.
- Parquet producer – Generate this columnar file format directly from sources, including partitioning.
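The edge-node filtering and masking ideas above can be sketched in a few lines: parse newline-delimited JSON as it arrives, drop records you don't want, and replace a sensitive field with a salted hash before the data lands anywhere. Standard-library Python with invented field names, not EVL job code:

```python
# Conceptual edge-node ingestion: filter NDJSON and mask a field with a
# salted SHA-256 hash. Field names ("country", "email") are examples only.
import hashlib
import json

SALT = b"example-salt"   # in practice, a managed secret, not a literal

def mask(value):
    """One-way mask: salted SHA-256, hex-encoded."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()

def ingest(lines, keep_country):
    """Yield records for one country, with the 'email' field masked."""
    for line in lines:
        rec = json.loads(line)
        if rec.get("country") != keep_country:
            continue                        # filter at the edge
        rec["email"] = mask(rec["email"])   # mask before data lands
        yield rec
```

Doing this on the edge node means raw personal data never reaches the cluster, only its irreversible hash does.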
Internet of Things
- Move processing to Edges – EVL is light and suits any Linux installation, even those with limited resources.
Real-time data processing
- Stream data processing – Kafka, Flume, or any other streams or queues can be processed (i.e. consumed, modified, and re-produced) by EVL.
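The consume-modify-produce loop described above can be sketched with in-memory queues standing in for Kafka or Flume topics. The transformation and the end-of-stream marker are illustrative assumptions, not EVL configuration:

```python
# Sketch of consume-modify-produce, with queue.Queue standing in for
# real message topics. `None` is used here as an end-of-stream sentinel.
import queue

def pump(source, sink, transform, sentinel=None):
    """Consume from `source`, apply `transform`, produce to `sink`."""
    while True:
        msg = source.get()
        if msg is sentinel:     # stop at the end-of-stream marker
            break
        sink.put(transform(msg))

# Usage: uppercase every message flowing between two "topics".
in_q, out_q = queue.Queue(), queue.Queue()
for m in ("sensor-1:20.5", "sensor-2:21.0", None):
    in_q.put(m)
pump(in_q, out_q, str.upper)
```

In a real deployment the same loop shape applies; only the source and sink become Kafka/Flume consumers and producers.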