
from_path

Target database: Snowflake, Amazon Redshift, Azure SQL Database, Azure Synapse SQL, Microsoft Fabric, Databricks SQL

Resolved as the source path(s) in file load execution.


Usage

VARIABLE
<from_path>

Notes

Supported in file loads.


Examples

Snowflake: Manifest load

One or multiple files queued with a manifest (SaaS, Private):

azure://example.blob.core.windows.net/example-container/taxidata1/ratecode_description.batch.1618916378963.csv.gz
azure://example.blob.core.windows.net/example-container/taxidata2/ratecode_description.batch.1618916457230.csv.gz

File load customized with OVERRIDE_FILE_LOAD:

SQL
COPY INTO ...
FROM <from_path>
...
FILES=(<from_file_list>);

Here <from_path> is used together with <from_file_list>: <from_path> resolves to the lowest common part of the file paths, while <from_file_list> resolves to the unique remainder of each path.

File load executed in Snowflake:

SQL
COPY INTO ...
FROM azure://example.blob.core.windows.net/example-container/
...
FILES=('taxidata1/ratecode_description.batch.1618916378963.csv.gz','taxidata2/ratecode_description.batch.1618916457230.csv.gz')
...
;

Snowflake: Single file load

Edition: Private

File queued:

s3://example-bucket/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv.gz

In a single file load, <from_path> is resolved as the full file path in single quotes, including the bucket/container address:

's3://example-bucket/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv.gz'

File load customized with OVERRIDE_FILE_LOAD:

SQL
COPY INTO ...
FROM <from_path>
...
;

File load executed in Snowflake:

SQL
COPY INTO ...
FROM 's3://example-bucket/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv.gz'
...
;

Amazon Redshift: Manifest load

One or multiple files queued with a manifest (SaaS, Private):

s3://example-bucket/taxidata1/ratecode_description.batch.1618916378963.csv.gz
s3://example-bucket/taxidata2/ratecode_description.batch.1618916457230.csv.gz

Amazon Redshift supports loading source files listed in a manifest file, which is given in the COPY statement. The Agile Data Engine manifest format is compatible with Redshift.
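A Redshift-compatible manifest lists each queued file as an entry. For the files above it would look roughly like this (a sketch in the standard Redshift manifest format; the exact attributes Agile Data Engine writes may differ):

JSON
{
  "entries": [
    {"url": "s3://example-bucket/taxidata1/ratecode_description.batch.1618916378963.csv.gz", "mandatory": true},
    {"url": "s3://example-bucket/taxidata2/ratecode_description.batch.1618916457230.csv.gz", "mandatory": true}
  ]
}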

File load customized with OVERRIDE_FILE_LOAD:

SQL
COPY ...
FROM <from_path>
MANIFEST
...
;

The MANIFEST parameter must be given when loading with a manifest. It is included in the <file_format> value unless <file_format> is overridden with OPT_FILE_FORMAT_OPTIONS.
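For reference, a template that relies on <file_format> instead of writing MANIFEST explicitly could look like this (a sketch; here <file_format> is assumed to resolve to the format options defined for the load, including MANIFEST):

SQL
COPY ...
FROM <from_path>
<file_format>
...
;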

File load executed in Amazon Redshift:

SQL
COPY ...
FROM 's3://example-bucket/manifest/ratecode_description/manifest-table.ratecode_description.1619690919.batch.1619690919575.fullscanned.false.skiph.1.delim.comma.csv.gz.json'
MANIFEST
...
;

Note that in the COPY statement you also need to specify an IAM role that has access to the manifest bucket and to the bucket where the source files are located:

SQL
CREDENTIALS 'aws_iam_role=arn:aws:iam::<aws-account-id>:role/<role-name>'

Private Edition: If you are using the default file load bucket, you can use the <source_credentials> variable.
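For example, with the default file load bucket the override could be written as follows (a sketch, assuming <source_credentials> resolves to the full CREDENTIALS clause shown above):

SQL
COPY ...
FROM <from_path>
MANIFEST
<source_credentials>
...
;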


Amazon Redshift: Single file load

Edition: Private

File queued:

s3://example-bucket/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv.gz

In a single file load, <from_path> is resolved as the full file path in single quotes, including the bucket address:

's3://example-bucket/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv.gz'

File load customized with OVERRIDE_FILE_LOAD:

SQL
COPY ...
FROM <from_path>
...
;

File load executed in Amazon Redshift:

SQL
COPY ...
FROM 's3://example-bucket/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv.gz'
...
;

Azure SQL Database: Manifest load

One or multiple files queued with a manifest:

taxidata1/ratecode_description.batch.1618916378963.csv
taxidata2/ratecode_description.batch.1618916457230.csv

In a manifest load (SaaS, Private), <from_path> is resolved as a file path in single quotes, as given in a manifest entry, for example:

'taxidata1/ratecode_description.batch.1618916378963.csv'

An external data source example_external_data_source is selected with OPT_STORAGE_INTEGRATION. It has been configured to point to the file load container:

https://example.blob.core.windows.net/example-container/

Note that the manifest entries specify paths relative to the external data source. Thus, a full file path is the external data source location + a path given in a manifest entry, for example:

https://example.blob.core.windows.net/example-container/
+ taxidata1/ratecode_description.batch.1618916378963.csv
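
For reference, such an external data source is a regular database-scoped object in Azure SQL Database; it could be created along these lines (a sketch, assuming a database scoped credential example_credential already exists):

SQL
CREATE EXTERNAL DATA SOURCE example_external_data_source
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://example.blob.core.windows.net/example-container',
    CREDENTIAL = example_credential
);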

File load customized with OVERRIDE_FILE_LOAD:

SQL
BULK INSERT ...
FROM <from_path>
WITH (DATA_SOURCE = '<storage_integration>', ...);

Note that <storage_integration> is used here to provide the external data source name for the DATA_SOURCE property.

Loads executed in Azure SQL Database:

SQL
BULK INSERT ...
FROM 'taxidata1/ratecode_description.batch.1618916378963.csv'
WITH (DATA_SOURCE = 'example_external_data_source', ...);

BULK INSERT ...
FROM 'taxidata2/ratecode_description.batch.1618916457230.csv'
WITH (DATA_SOURCE = 'example_external_data_source', ...);

In Azure SQL Database, file loads are executed separately even if multiple files have been queued with one manifest.


Azure SQL Database: Single file load

Edition: Private

File queued:

azure://example.blob.core.windows.net/example-container/taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv

In a single file load, <from_path> is resolved as the file path in single quotes, without the bucket/container address:

'taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv'

An external data source example_external_data_source is selected with OPT_STORAGE_INTEGRATION. It has been configured to point to the file load container:

https://example.blob.core.windows.net/example-container/

Thus, the full file path is the external data source location + the value of <from_path>:

https://example.blob.core.windows.net/example-container/
+ taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv

File load customized with OVERRIDE_FILE_LOAD:

SQL
BULK INSERT ...
FROM <from_path>
WITH (DATA_SOURCE = '<storage_integration>', ...);

Note that <storage_integration> is used here to provide the external data source name for the DATA_SOURCE property.

Load executed in Azure SQL Database:

SQL
BULK INSERT ...
FROM 'taxidata1/ratecode_description.1619598629.batch.1619598629291.fullscanned.false.skiph.1.delim.comma.csv'
WITH (DATA_SOURCE = 'example_external_data_source', ...);

Azure Synapse SQL and Microsoft Fabric: Manifest load

One or multiple files queued with a manifest (SaaS, Private):

https://example.blob.core.windows.net/example-container/taxidata1/ratecode_description.batch.1618916378963.csv.gz
https://example.blob.core.windows.net/example-container/taxidata2/ratecode_description.batch.1618916457230.csv.gz

File load customized with OVERRIDE_FILE_LOAD:

SQL
COPY INTO ...
FROM <from_path>
...
;

The COPY statement supports listing multiple file paths in the FROM clause.

File load executed in Azure Synapse SQL:

SQL
COPY INTO ...
FROM 'https://example.blob.core.windows.net/example-container/taxidata1/ratecode_description.batch.1618916378963.csv.gz', 'https://example.blob.core.windows.net/example-container/taxidata2/ratecode_description.batch.1618916457230.csv.gz'
...
;

Databricks SQL: Manifest load

One or multiple files queued with a manifest (SaaS, Private):

abfss://data@example.blob.core.windows.net/example-container/taxidata/ratecode_description.batch.1618916378963.csv.gz
abfss://data@example.blob.core.windows.net/example-container/taxidata/ratecode_description.batch.1618916457230.csv.gz

File load customized with OVERRIDE_FILE_LOAD:

SQL
COPY INTO ...
FROM <from_path>
...
;

The COPY statement supports listing multiple file paths in the FROM clause.

File load executed in Databricks:

SQL
COPY INTO ...
FROM 'abfss://data@example.blob.core.windows.net/example-container/taxidata/'
...
FILES = ('ratecode_description.batch.1618916378963.csv.gz','ratecode_description.batch.1618916457230.csv.gz')
;
