Meta Integration® Model Bridge (MIMB)
"Metadata Integration" Solution

MIMB Bridge Documentation

MIMB Import Bridge from Apache Hadoop Distributed File System (HDFS Java API)

Bridge Specifications

Vendor Apache
Tool Name Hadoop Distributed File System (HDFS)
Tool Version 2.5
Tool Web Site http://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html
Supported Methodology [Flat File] Multi-Model, Data Store (Physical Data Model) via Java API

Import tool: Apache Hadoop Distributed File System (HDFS) 2.5 (http://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html)
Import interface: [Flat File] Multi-Model, Data Store (Physical Data Model) via Java API from Apache Hadoop Distributed File System (HDFS Java API)
Import bridge: 'ApacheHDFS' 10.0.1

The bridge uses the Apache Hadoop HDFS Java library (JARs) to access the Hadoop file system.
The library JAR files are located in the /java/Hadoop directory.
You may specify a Configuration files directory, and often that is sufficient, as the values for the other bridge parameters can be taken from the configuration files it contains.
This bridge supports the following file formats:
- Flat File (CSV)
- Open Office Excel (XLSX)
- COBOL Copybook
- JSON (JavaScript Object Notation)
- Apache Avro
- Apache Parquet
- Apache ORC
- W3C XML

as well as the compressed versions of the above formats:
- ZIP (as a compression format, not as an archive format)
- BZIP
- GZIP
- LZ4
- Snappy (as standard Snappy format, not as Hadoop native Snappy format)

Please refer to the individual parameter's tool tips for more detailed examples.
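For illustration, the following is a minimal, hypothetical sketch (not the bridge's actual code) of how the Hadoop HDFS Java API is typically used to connect and list files. The configuration directory, NameNode host, and HDFS directory below are placeholder assumptions; they correspond to the same core-site.xml / hdfs-site.xml files you would point the 'Configuration files directory' parameter at.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsListExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder configuration directory; the same files the
            // 'Configuration files directory' parameter would point at.
            conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
            conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
            // Or set the NameNode URI directly, which is what the 'NameNode URI'
            // bridge parameter overrides (fs.default.name / fs.defaultFS):
            // conf.set("fs.defaultFS", "hdfs://namenode-host:8020");

            try (FileSystem fs = FileSystem.get(conf)) {
                // List the contents of a placeholder root directory
                for (FileStatus status : fs.listStatus(new Path("/data"))) {
                    System.out.println(status.getPath() + (status.isDirectory() ? " [dir]" : ""));
                }
            }
        }
    }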


Bridge Parameters

Parameter Name Description Type Values Default Scope
Configuration files directory Directory containing core-site.xml and hdfs-site.xml for your environment.

It is an optional parameter that allows you to reuse existing configuration files and avoid specifying the Hadoop connection and Kerberos security details manually using the other parameters.

If you would like to specify the details manually, leave this parameter value empty. If you specify a directory that does not contain the configuration files, the bridge exits with an error.

You can override the parameters available in the configuration files using the bridge parameters.
For example, you can override the fs.default.name property from the configuration files using the NameNode URI bridge parameter.

DIRECTORY      
NameNode URI URI of the Hadoop NameNode, for example hdfs://host:8020
STRING   hdfs://[server host]:[port]  
Root directory Enter the directory containing the metadata files, or specify it using the browsing tool. The bridge provides up to 3 levels of browsing depth. REPOSITORY_MODEL
Include filter The include folder and file filter pattern relative to the root directory.
The pattern uses extended Unix glob case-sensitive expression syntax.
Here are some common examples:
*.* - include any file at the root level
*.csv - include only csv files at the root level
**.csv - include only csv files at any level
*.{csv,gz} - include only csv or gz files at the root level
dir\*.csv - include only csv files in the 'dir' folder
dir\**.csv - include only csv files under the 'dir' folder at any level
dir\**.* - include any file under the 'dir' folder at any level
f.csv - include only f.csv at the root level
**\f.csv - include only f.csv at any level
**dir\** - include all files under any 'dir' folder at any level
**dir1\dir2\** - include all files under any 'dir2' folder under any 'dir1' folder at any level
A minimal Java sketch illustrating this include/exclude matching appears after the parameter list below.
STRING      
Exclude filter The exclude folder and file filter pattern relative to the root directory.
The pattern uses the same syntax as the Include filter. See that parameter for syntax details and examples.
Files that match the exclude filter are skipped.
When both the include and exclude filters are empty, all folders and files under the Root directory are included.
When the include filter is empty and the exclude filter is not, folders and files under the Root directory are included except those matching the exclude filter.
STRING      
Partition directories Paths of file-based partition directories.
The bridge tries to detect partitions automatically, which can take a long time when partitions contain many files.
You can shortcut the detection process for a partition by specifying it in this parameter.
Specify the partition directory path relative to the Root directory.
Use . to specify the root directory itself as the partitioned directory.

Separate multiple paths with the , (or ;) character.
For example: dir1/dir2,dir3/dir4,dir5
STRING      
Sample size Number of files to scan when analyzing data partition directories NUMERIC
Hadoop properties Custom Hadoop and HDFS configuration properties.

The bridge uses a default configuration to access a Hadoop distribution. If you need to use a custom configuration, specify its parameter values here.

For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation at http://hadoop.apache.org/docs and select the version of the documentation you want. For demonstration purposes, links to some properties are listed below:
Typically, the HDFS-related properties can be found in the hdfs-default.xml file of your distribution, such as
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml.
STRING      
Keytab file Full path to the Kerberos keytab file. The file is necessary to log into a Kerberos-enabled Hadoop system. It contains pairs of Kerberos principals and encrypted keys. Enter the principal using the Principal parameter.

The user that runs the bridge is not necessarily the one the principal designates, but that user must have the right to read the keytab file being used. For example, if the user name you are using to run the bridge is UserA and the principal to be used is UserB, ensure that UserA has the right to read the keytab file to be used. A minimal Java sketch of a keytab-based login appears after the parameter list below.
STRING      
Principal User principal name. See the “Keytab file” parameter documentation for details. STRING      
Username User authentication name for HDFS, sometimes referred to as the proxy name.
The parameter is only used for Kerberos authentication.
It does not affect the user that runs the bridge.
STRING      
HDFS encryption key provider (KMS) The location of the KMS proxy. For example, kms://http@localhost:16000/kms.
Specify the HDFS encryption key provider only when the HDFS transparent encryption has been enabled in your cluster. Leave the value empty otherwise.
For further information about the HDFS transparent encryption and its KMS proxy, see Transparent Encryption in HDFS at https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html.

STRING      
Miscellaneous Specify miscellaneous options, each identified with a -letter prefix and a value.

For example, -m 4G -f 100 -j -Dname=value -Xms1G

-m the maximum Java memory size as a whole number (e.g. -m 4G or -m 2500M).
-v set environment variable(s) (e.g. -v var1=value -v var2="value with spaces").
-j the last option, followed by Java command line options (e.g. -j -Dname=value -Xms1G).
-hadoop key1=val1;key2=val2 to manually set Hadoop configuration options.
-tps 10 maximum thread pool size.
-tl 3600s processing time limit, in s (seconds), m (minutes) or h (hours).
-fl 1000 limit on the number of files to process.
STRING      
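The Include filter and Exclude filter parameters above are described as extended Unix glob patterns. The bridge's exact matching engine is not documented here, but as a sanity check for patterns, Java's built-in glob PathMatcher behaves similarly; the sketch below is a hypothetical illustration using placeholder paths and forward slashes as path separators.

    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.PathMatcher;
    import java.nio.file.Paths;

    public class GlobFilterDemo {
        public static void main(String[] args) {
            // Patterns are evaluated against paths relative to the Root directory.
            PathMatcher include = FileSystems.getDefault().getPathMatcher("glob:dir/**.csv");
            PathMatcher exclude = FileSystems.getDefault().getPathMatcher("glob:**/archive/**");

            Path candidate = Paths.get("dir/2018/sales.csv");
            // A file is selected when it matches the include filter and not the exclude filter.
            boolean selected = include.matches(candidate) && !exclude.matches(candidate);
            System.out.println(candidate + " selected: " + selected); // prints: dir/2018/sales.csv selected: true
        }
    }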
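The Keytab file and Principal parameters correspond to a standard Hadoop client keytab login. The sketch below is a minimal, hypothetical illustration using the Hadoop Java API, not the bridge's actual code; the NameNode host, principal, and keytab path are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode-host:8020");   // placeholder NameNode URI
            conf.set("hadoop.security.authentication", "kerberos");  // enable Kerberos authentication

            UserGroupInformation.setConfiguration(conf);
            // Values correspond to the 'Principal' and 'Keytab file' bridge parameters (placeholders here).
            UserGroupInformation.loginUserFromKeytab("hdfsuser@EXAMPLE.COM", "/path/to/hdfsuser.keytab");

            try (FileSystem fs = FileSystem.get(conf)) {
                System.out.println("Connected as: " + UserGroupInformation.getCurrentUser());
            }
        }
    }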

 

Bridge Mapping

Mapping information is not available

Last updated on Fri, 21 Sep 2018 16:15:06

Copyright © Meta Integration Technology, Inc. 1997-2018 All Rights Reserved.

Meta Integration® is a registered trademark of Meta Integration Technology, Inc.
All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.