Meta Integration® Model Bridge (MIMB)
"Metadata Integration" Solution

MIMB Bridge Documentation

MIMB Import Bridge from Amazon Web Services (AWS) S3 Storage

Bridge Specifications

Vendor Amazon
Tool Name AWS Simple Storage Service (S3)
Tool Version 1.0
Tool Web Site http://aws.amazon.com/s3/
Supported Methodology [File System] Multi-Model, Data Store (NoSQL / Hierarchical) via Java API

Import tool: Amazon AWS Simple Storage Service (S3) 1.0 (http://aws.amazon.com/s3/)
Import interface: [File System] Multi-Model, Data Store (NoSQL / Hierarchical) via Java API from Amazon Web Services (AWS) S3 Storage
Import bridge: 'AmazonS3' 10.0.1

IMPORTING FROM Amazon Simple Storage Service.

This bridge establishes a connection with a chosen bucket in order to extract the physical metadata. It is critical that the parameters are filled in correctly to satisfy the connection requirements on the client workstation that runs the bridge.
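For illustration only, here is a minimal sketch of the kind of S3 connection and object listing this involves, using the AWS SDK for Java (v1). The bucket name, region, and credential values are placeholders, and the bridge's actual implementation may differ:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class S3MetadataScan {
    public static void main(String[] args) {
        // Placeholder credentials; the bridge takes these via the
        // 'Access Key' and 'Secret Key' parameters below.
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_KEY");

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion("us-east-1") // placeholder region
                .build();

        // Enumerate the objects whose names, sizes, and formats become
        // the physical metadata (file/folder structure) of the model.
        ListObjectsV2Result result = s3.listObjectsV2("my-bucket", "dir1/dir2/");
        for (S3ObjectSummary summary : result.getObjectSummaries()) {
            System.out.println(summary.getKey() + " (" + summary.getSize() + " bytes)");
        }
    }
}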

This bridge supports the following file formats:
- Flat File (CSV)
- Office Open XML Excel (XLSX)
- COBOL Copybook
- JSON (JavaScript Object Notation)
- Apache Avro
- Apache Parquet
- Apache ORC
- W3C XML

as well as the compressed versions of the above formats:
- ZIP (as a compression format, not as an archive format)
- BZIP
- GZIP
- LZ4
- Snappy (as the standard Snappy format, not the Hadoop-native Snappy format)

Please refer to the individual parameters' tooltips for more detailed examples.


Bridge Parameters

Parameter Name Description Type Values Default Scope
Access Key Your access key id to sign programmatic requests to AWS services. STRING      
Secret Key Your secret key to sign programmatic requests to AWS services. PASSWORD      
Root directory The directory containing the metadata files. Set it directly or select it with the browsing tool, which provides up to 3 levels of browsing depth. Don't forget to specify the 'Region' parameter when using the browsing tool.

Specify the * symbol to import from all available buckets under the specified region.

The bridge uses only the s3a protocol to load files,
e.g. s3a://bucket/dir1/dir2 (see the sketch after this entry).
REPOSITORY_SUBSET     Mandatory
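Because the bridge loads files over s3a only, the root directory behaves like a Hadoop filesystem URI. As a hedged illustration (assuming the hadoop-aws module and its AWS SDK dependency are on the classpath; the path and credential values are placeholders), listing a root directory might look like:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3aListing {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The same credentials as the 'Access Key' / 'Secret Key' parameters.
        conf.set("fs.s3a.access.key", "ACCESS_KEY_ID");
        conf.set("fs.s3a.secret.key", "SECRET_KEY");

        // The 'Root directory' parameter, expressed as an s3a URI.
        Path root = new Path("s3a://bucket/dir1/dir2");
        FileSystem fs = FileSystem.get(URI.create(root.toString()), conf);

        // List the folders and files the bridge would consider for import.
        for (FileStatus status : fs.listStatus(root)) {
            System.out.println(status.getPath() + (status.isDirectory() ? "/" : ""));
        }
    }
}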
Include filter The include folder and file filter pattern, relative to the root directory.
The pattern uses extended Unix glob case-sensitive expression syntax.
Here are some common examples (see also the sketch after this entry):
*.* - include any file at the root level
*.csv - include only csv files at the root level
**.csv - include only csv files at any level
*.{csv,gz} - include only csv or gz files at the root level
dir\*.csv - include only csv files in the 'dir' folder
dir\**.csv - include only csv files under the 'dir' folder at any level
dir\**.* - include any file under the 'dir' folder at any level
f.csv - include only f.csv at the root level
**\f.csv - include only f.csv at any level
**dir\** - include all files under any 'dir' folder at any level
**dir1\dir2\** - include all files under any 'dir2' folder under any 'dir1' folder at any level
STRING      
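The bridge's matcher is internal, but the semantics above are close to Java NIO's glob support, so a minimal sketch (an approximation, using forward slashes, which NIO accepts on all platforms) can show how * and ** differ in depth:

import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class GlobFilterDemo {
    public static void main(String[] args) {
        // "**.csv" matches .csv files at any depth; "*.csv" only at the top level.
        PathMatcher anyCsv = FileSystems.getDefault().getPathMatcher("glob:**.csv");
        PathMatcher topCsv = FileSystems.getDefault().getPathMatcher("glob:*.csv");

        System.out.println(anyCsv.matches(Paths.get("dir/sub/data.csv"))); // true
        System.out.println(topCsv.matches(Paths.get("dir/sub/data.csv"))); // false
        System.out.println(topCsv.matches(Paths.get("data.csv")));         // true
    }
}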
Exclude filter The exclude folder and file filter pattern, relative to the root directory.
The pattern uses the same syntax as the Include filter; see that parameter for syntax details and examples.
Files that match the exclude filter are skipped.
When both the include and exclude filters are empty, all folders and files under the Root directory are included.
When the include filter is empty and the exclude filter is not, all folders and files under the Root directory are included except those matching the exclude filter.
STRING      
Sample size The number of files to scan during the analysis of data-partitioned directories. NUMERIC      
Incremental import Specifies whether to import only the changes made in the source or to re-import everything (as specified in other parameters).

True - import only the changes made in the source.
False - import everything (as specified in other parameters).

An internal cache is maintained for each metadata source, which contains previously imported models. If this is the first import or if the internal cache has been deleted or corrupted, the bridge will behave as if this parameter is set to 'False'.
BOOLEAN False, True True  
Partition directories File-based partition directories' paths.
The bridge tries to detect partitions automatically. This can take a long time when partitions contain a lot of files.
You can shortcut the detection process for some or all partitions by specifying them in this parameter.
Specify each partition directory path relative to the Root directory.
Use . to specify the root directory as the partitioned directory.
Separate multiple paths with the , (or ;) character.

ETL tools can read and write to pattern-based partition directories.
For example, an ETL tool can read all *.csv files from a folder F. The ETL bridge represents this as the '*.csv' dataset in the 'F' folder (F/*.csv).
You can instruct this bridge to generate the matching dataset by specifying its name in square brackets after the folder name, like F[*.csv].
The same is true for application-specific partitions.
For example, an ETL tool can write files under folder F to partition sub-folders named using the 'getDate@[yyyyMMdd]' function expression.
The result is represented as the 'getDate@[yyyyMMdd]' dataset in the 'F' folder (F/getDate@[yyyyMMdd]).
Again, you can instruct this bridge to generate the matching dataset by specifying something like F/[getDate@[yyyyMMdd]].

You may specify additional information about a partitioned directory's internal structure using [dataset name] and {partition column name} patterns, for the following cases:
For application partitions like:
zone/po/us/2018/00001.csv
use: zone/[po]/{region}/{year}/*.csv, or
zone/[po]/{*}/{*}/*.csv
if the partition column names are not important; they will then be stitched by position (see the sketch after this entry).

For custom application partitions like:
zone/table1/2018/data/00001.csv
zone/table1/2018/log/00001.txt
zone/table2/2018/data/00001.csv
zone/table2/2018/log/00001.txt
use: zone/*/{year}/[data]/*.csv, zone/*/{year}/[log]/*.txt
STRING      
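To make the [dataset name] and {partition column name} semantics concrete, here is a hypothetical sketch (not the bridge's implementation) that decomposes the example path above against its pattern:

import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionPatternDemo {
    public static void main(String[] args) {
        // Pattern and path taken from the examples in this entry.
        String[] pattern = "zone/[po]/{region}/{year}/*.csv".split("/");
        String[] path    = "zone/po/us/2018/00001.csv".split("/");

        String dataset = null;
        Map<String, String> partitionColumns = new LinkedHashMap<>();
        for (int i = 0; i < pattern.length; i++) {
            String p = pattern[i];
            if (p.startsWith("[") && p.endsWith("]")) {
                dataset = p.substring(1, p.length() - 1);            // [po] -> dataset "po"
            } else if (p.startsWith("{") && p.endsWith("}")) {
                // {region} -> partition column bound to that path segment's value
                partitionColumns.put(p.substring(1, p.length() - 1), path[i]);
            }
        }
        System.out.println("dataset=" + dataset + " columns=" + partitionColumns);
        // prints: dataset=po columns={region=us, year=2018}
    }
}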
Miscellaneous Specify miscellaneous options, each identified with a -letter and a value.

For example, -m 4G -f 100 -j -Dname=value -Xms1G

-m the maximum Java memory size as a whole number (e.g. -m 4G or -m 2500M).
-v set environment variable(s) (e.g. -v var1=value -v var2="value with spaces").
-j the last option, followed by Java command line options (e.g. -j -Dname=value -Xms1G).
-hadoop key1=val1;key2=val2 to manually set Hadoop configuration options.
-tps 10 the maximum thread pool size.
-tl 3600s the processing time limit, in s (seconds), m (minutes), or h (hours).
-fl 1000 the limit on the number of files processed.
-delimited.top_rows_skip 1 the number of rows to skip when processing CSV files.
-delimited.extra_separators ~- extra characters, each of which will be used as a delimiter when processing CSV files.
STRING      

 

Bridge Mapping

Mapping information is not available

Last updated on Mon, 3 Dec 2018 18:35:45

Copyright © Meta Integration Technology, Inc. 1997-2018 All Rights Reserved.

Meta Integration® is a registered trademark of Meta Integration Technology, Inc.
All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.