|Tool Name||Talend Data Integration|
|Tool Version||5.x to 7.x|
|Tool Web Site||https://www.talend.com/products/data-integration/|
|Supported Methodology||[Data Integration] Multi-Model, Data Store (Physical Data Model, Logical Data Model, Stored Procedure Expression Parsing), ETL (Source and Target Data Stores, Transformation Lineage, Expression Parsing), Graphical Layout via Eclipse Java API|
Import tool: Talend Data Integration 5.x to 7.x (https://www.talend.com/products/data-integration/)
Import interface: [Data Integration] Multi-Model, Data Store (Physical Data Model, Logical Data Model, Stored Procedure Expression Parsing), ETL (Source and Target Data Stores, Transformation Lineage, Expression Parsing), Graphical Layout via Eclipse Java API from Talend Data Integration
Import bridge: 'Talend' 10.1.0
Reads Talend Jobs/Joblets and/or Connections metadata from Project directory.
FREQUENTLY ASKED QUESTIONS
Q: How do we get lineage from hand-written Java code in tJavaRow?
A: You can provide the data mapping specifications at the bottom of the Comment parameter of custom code components like tJavaRow, using the following syntax:
*** lineage start ***
output_row.newColumn = input_row.newColumn;
output_row.newColumn1 = input_row.newColumn1;
*** lineage end ***
The user can specify data lineage dependencies using one or more statements with arithmetic operations and functions.
The following three examples produce the same dependencies but different operations.
output_row.newColumn = input_row.newColumn+input_row.newColumn1;
output_row.newColumn = input_row.newColumn;
output_row.newColumn = input_row.newColumn1;
output_row.newColumn = custom_function(input_row.newColumn, input_row.newColumn1);
The user can specify control lineage dependencies using the Java ?: (ternary) operator.
output_row.newColumn = (input_row.newColumn > 0) ? input_row.newColumn1 : 12;
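Putting the pieces together, here is a sketch of a tJavaRow component documented for lineage (the column names fullName, firstName, etc. are hypothetical). The hand-written transformation lives in the component's Code parameter; the lineage specification goes at the bottom of its Comment parameter:

```
Code parameter (hand-written transformation):
  output_row.fullName = input_row.firstName + " " + input_row.lastName;
  output_row.total = input_row.quantity * input_row.unitPrice;

Comment parameter (lineage specification at the bottom):
  *** lineage start ***
  output_row.fullName = input_row.firstName + input_row.lastName;
  output_row.total = input_row.quantity + input_row.unitPrice;
  *** lineage end ***
```

Since only the dependencies matter, the lineage statements need not repeat the exact operators used in the real code.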
|Project directory||File directory where the Talend project is located.
It should contain a process, metadata, or joblet directory.
|Project items||Names of items, like Jobs or Connections, separated with semicolons. An item is identified by its path in the Talend repository (file system). For example, a job jobName within a folder folderName is identified as process/folderName/jobName.
The following types of items and their root path are supported:
Job Designs - process
Db Connections - metadata/connections
File delimited - metadata/fileDelimited
File positional - metadata/filePositional
Specify a list of the top-level executable jobs whose data lineage you would like to analyze.
A job can execute another job. The list should not include jobs that are only executed from other jobs, as including them can cause the resulting lineage to contain false and duplicate information.
If a folder contains only the necessary jobs, its path can be listed instead. This is helpful when you have many executable jobs and would rather reference them all by the folders where they reside.
Sample list of jobs and folder names: 'process/jobName1; process/folder/jobName2; process/parentFolder/childFolder/'
When you have a well-documented Connection that has comments/business names for tables/columns, you can forward-engineer the metadata to other tools (e.g. data modeling).
When you would like to design mappings in Microsoft Excel, you can prepopulate the design with source and target connections that are already available in Talend. Specify source and target connections of type Database, File, etc.
Sample list of connections: 'metadata/connections/dbConnection1; metadata/connections/folder/dbConnection1; metadata/filePositional/file1'
Connections are ignored when Jobs are specified.
|Job Context||Specify the Talend Job context. If this parameter is empty, the 'Default' context will be used.
Jobs may have several contexts, for example DEV/QA parameter sets. You may specify which parameter set to use during the import.
|Context File Directory||Allows you to provide the path to the directory that contains Talend Context flat files (*.txt, *.prn, *.csv).
Files in the directory define 'global' parameter values that apply to all imported Jobs.
Talend DI organizes Jobs in folders. When you need to specify 'local' values for a particular Job, create the Job's folder hierarchy under this directory and place the Job-specific context files in the leaf folder representing the Job.
Each file defines parameters as key/value pairs delimited by '=' (equals sign), ';' (semicolon), ' ' (whitespace), ':' (colon), or ',' (comma).
Note: the bridge will not trim any whitespace around a parameter's value.
By default, this is 'data' folder under 'Project Directory'.
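The delimiter and no-trim rules above can be illustrated with a minimal sketch. This is only an assumption of how such a line could be parsed, not the bridge's actual implementation; the class and method names are hypothetical:

```java
// Minimal sketch of context-file line parsing: the first occurrence of
// '=', ';', ':', ',', or whitespace separates the key from the value,
// and the value is kept verbatim (no trimming of surrounding whitespace).
public class ContextLineParser {
    public static String[] parseLine(String line) {
        int idx = -1;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == '=' || c == ';' || c == ':' || c == ','
                    || Character.isWhitespace(c)) {
                idx = i;
                break;
            }
        }
        // no delimiter found: treat the whole line as a key with an empty value
        if (idx < 0) return new String[] { line, "" };
        return new String[] { line.substring(0, idx), line.substring(idx + 1) };
    }

    public static void main(String[] args) {
        String[] kv = parseLine("db_host=localhost");
        System.out.println(kv[0] + " -> '" + kv[1] + "'");   // db_host -> 'localhost'
        String[] kv2 = parseLine("db_port: 8080");
        System.out.println(kv2[0] + " -> '" + kv2[1] + "'"); // db_port -> ' 8080'
    }
}
```

Note how the value of db_port keeps its leading space, consistent with the bridge's documented no-trim behavior.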
|Incremental import||Specifies whether to import only the changes made in the source or to re-import everything (as specified in other parameters).
True - import only the changes made in the source.
False - import everything (as specified in other parameters).
An internal cache is maintained for each metadata source, which contains previously imported models. If this is the first import or if the internal cache has been deleted or corrupted, the bridge will behave as if this parameter is set to 'False'.
|Miscellaneous||Additional semicolon separated parameters for debug purposes.
-zip [path] - compress the project into a zip file. Sensitive data with type 'password' or field name 'password' is removed. Only supported items are compressed.
[path] - path where zip file will be created. Example : -zip C:\temp
-pre [cmd] - runs command before import. If command fails import will fail. Example: -pre update.bat
-cfd [new delimiter] - used with Context File Directory option. Replaces default name-value delimiter with new value. Example: -cfd ~#*#~
-pppd: enables the DI/ETL post-processor, which processes DI/ETL designs in order to create the design connections and connection data sets.
-cd: split or merge file system connections by a directory path.
For example, a connection can have two root folders, a and b. When you have imported separate File stores for each root folder, you may want to split the connection into two connections that can be resolved using these File stores. This can be achieved with an option requesting to create an 'a_con' connection and move the 'a' folder to it from the 'orig_con' connection. The result will have the a_con and orig_con connections. The orig_con connection will keep the folder branch 'b' that is left over after splitting the folder branch 'a' out.
Here is a slightly more complex example:
-cd a_con=orig_con:/root/a - create 'a_con' connection for the 'root/a' folder branch in the 'a_con' connection.
You can use the option to merge several connections into one. For example, when you have two file stores, C:\a and B:\b, you can merge them with an option that moves all folders from the B:\ connection to C:\, which ends up with the a and b root folders.
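Since Miscellaneous options are semicolon separated, several of them can be combined into a single parameter value. The paths, command, and connection names below are hypothetical placeholders, reusing only flags documented above:

```
-zip C:\temp; -pre update.bat; -cd a_con=orig_con:/root/a
```

This single value would compress the project to C:\temp, run update.bat before the import, and split the 'root/a' folder branch out of the orig_con connection into a new a_con connection.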
Mapping information is not available