Target database: Snowflake, Databricks SQL
The STORAGE.EXTERNAL_TABLE_FILE_FORMAT option specifies the file format used by an external table.
You can either reference an existing named file format that describes the staged data files to scan (with FORMAT_NAME) or define the TYPE directly.
Usage
STORAGE.EXTERNAL_TABLE_FILE_FORMAT: String
Default: Empty
Mandatory: Yes
Snowflake content, either:
- FORMAT_NAME = file_format_name
  Specifies an existing named file format that describes the staged data files to scan. The named file format determines the format type (e.g. CSV, JSON) and any other format options for the data files.
- TYPE = CSV | JSON | AVRO | ORC | PARQUET [ ... ]
  Specifies the format type of the staged data files to scan when querying the external table. If a file format type is specified, additional format-specific options can be defined. For more information, see Format Type Options.
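As an illustration of the FORMAT_NAME variant, suppose a named file format has already been created in Snowflake outside Agile Data Engine (the name publish.PARQUET_FORMAT below is a hypothetical example, not a generated object):

```sql
-- Hypothetical named file format, created beforehand in Snowflake:
CREATE FILE FORMAT publish.PARQUET_FORMAT
  TYPE = PARQUET;
```

The option could then reference it as: FORMAT_NAME = publish.PARQUET_FORMAT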
Databricks content, one of:
- AVRO
- BINARYFILE
- CSV
- JSON
- ORC
- PARQUET
- TEXT
Notes
Supported for Snowflake and Databricks SQL external tables.
Examples
Snowflake: Set the STORAGE.EXTERNAL_TABLE_FILE_FORMAT
Setting STORAGE.EXTERNAL_TABLE_FILE_FORMAT for an External table (e.g. publish.F_TRIP):
STORAGE.EXTERNAL_TABLE_FILE_FORMAT: TYPE = PARQUET
DDL generated by Agile Data Engine:
CREATE EXTERNAL TABLE publish.F_TRIP (
...
)
...
FILE_FORMAT = (TYPE = PARQUET)
...;
Databricks SQL: Set the STORAGE.EXTERNAL_TABLE_FILE_FORMAT
Setting STORAGE.EXTERNAL_TABLE_FILE_FORMAT for an External table (e.g. publish.F_TRIP):
STORAGE.EXTERNAL_TABLE_FILE_FORMAT: PARQUET
DDL generated by Agile Data Engine:
CREATE TABLE publish.F_TRIP (
...
)
USING PARQUET
LOCATION 's3://data/location'
...;
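For comparison, changing the option value only changes the USING clause of the generated DDL. A sketch for STORAGE.EXTERNAL_TABLE_FILE_FORMAT: CSV is shown below (the table name and location are reused from the example above for illustration; the remaining clauses are elided as in the generated DDL):

```sql
-- Sketch of the Databricks SQL DDL generated for a CSV external table:
CREATE TABLE publish.F_TRIP (
...
)
USING CSV
LOCATION 's3://data/location'
...;
```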