Connect to Your Data on Dataiku Online

Supported Connections

Dataiku Online allows you to connect to multiple sources of data as read-only sources or read-and-write storage.

Note

A (read-only) Data Source tells Dataiku how to access data stored externally: Dataiku remembers the location of the original source datasets. Access is read-only; no data is stored or modified in the original system. You typically use these datasets as the entry point (leftmost part) of your Flow.

A (read-and-write) Data Storage allows Dataiku not only to read the data, but also to create new datasets (write) and, for SQL Data Storage, to perform in-database computation, thus improving performance.

From Dataiku Online, you can connect to the following:

Type                     Read / Data Sources    Read and Write / Data Storage
Snowflake                X                      X
Azure Synapse            X                      X
Google BigQuery          X                      X
Amazon Redshift          X                      X
PostgreSQL               X                      X
Oracle                   X                      X
SQL Server               X                      X
MySQL                    X                      X
Amazon S3                X                      X
Azure Blob Storage       X                      X
Google Cloud Storage     X                      X
MongoDB                  X

With Data Connector Plugins, you can also connect to the following: Salesforce, Zendesk, and Google Sheets.

Note

Depending on your subscription plan, not all connectors may be available.

How to Add a New Data Connection

  • Navigate to the Launchpad to get started.

  • In your space, open the Connections tab and click the Add a connection button:

../../_images/add-a-connection.png

  • Choose your connection type from the Read Only Data Sources or Read/Write Data Storage sections:

../../_images/add-a-feature.png

  • Fill in the connection details, then click Test:

../../_images/snowflake-connection-4.jpg

Once the test passes, you can add the connection. You will get a confirmation message, as well as a message listing the IP addresses you may need to whitelist to allow the connection.
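Before filling in the form, it can help to confirm that the database endpoint is reachable at all from a machine on the relevant network. The sketch below is a generic TCP reachability check, not part of Dataiku; the host and port are placeholders you would replace with your database's endpoint.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder endpoint; use your own database host and port):
# is_reachable("my-warehouse.example.com", 5432)
```

A failed check usually points to a firewall or whitelisting issue rather than wrong credentials, which narrows down why the Test button fails.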

../../_images/snowflake-connection-5.jpg

Relocatable Datasets

What relocatable means

When creating new datasets, Dataiku uses the settings of the chosen connection to determine where each dataset will be created and under which table or file name.

Dataiku Online makes managed datasets relocatable by default, as this is good practice (see the reference documentation for more information on what making managed datasets relocatable means). In short, it means that if a user creates another dataset within the same connection, the two datasets will not overlap, thus avoiding potential conflicts.

Otherwise, conflicts could arise when:

  • Creating two datasets with the same name in different projects,

  • Duplicating a project in a Dataiku instance,

  • Publishing a project in the Automation node.

How Dataiku Online makes managed datasets relocatable

To avoid overlaps, it is good practice to use variables in the connection settings for creating new datasets. Even if a user creates two datasets with the same name in the same connection, the variables ensure they will point to different locations.

By default, Dataiku Online adds the variables ${projectKey} and ${node} to ensure the datasets are relocatable. Your connection will make all datasets it contains relocatable by default unless you change those settings.

These variables are included in the following dataset fields:

  • For SQL databases: Table prefix

  • For filesystem databases: Path prefix
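The mechanism can be illustrated with a small sketch. The expansion function and the variable values below are illustrative assumptions, not Dataiku's actual implementation; they only mimic how a table prefix containing ${projectKey} and ${node} keeps two same-named datasets apart.

```python
# Illustrative sketch: expand Dataiku-style ${...} variables in a table prefix.
# The expand() helper and the variable values are assumptions for illustration,
# not Dataiku's actual substitution code.

def expand(template: str, variables: dict) -> str:
    """Replace each ${name} placeholder in template with its value."""
    for name, value in variables.items():
        template = template.replace("${" + name + "}", value)
    return template

prefix = "${projectKey}_${node}_"  # a relocatable table prefix

# Two projects each create a dataset named "customers":
t1 = expand(prefix, {"projectKey": "SALES", "node": "design"}) + "customers"
t2 = expand(prefix, {"projectKey": "MARKETING", "node": "design"}) + "customers"

print(t1)  # SALES_design_customers
print(t2)  # MARKETING_design_customers -- no collision despite the same name
```

Because the project key (and node) is baked into the physical table name, duplicating a project or deploying it to another node lands its datasets in distinct tables.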

Limits and exceptions

If your connection was created before the ${node} variable mechanism was implemented, its datasets will not be relocatable when transferring them between the Design and the Automation node. Therefore, when publishing a project to the Automation node without remapping the connections (i.e., all datasets use the same connection on both nodes), the two projects will write to the same datasets and cause conflicts.

In the same way, if you edit the table name in the settings of a dataset, that dataset will no longer be relocatable.

How to check if my connection makes managed datasets relocatable?

If your connection was created after January 6, 2023, or is a Redshift, BigQuery, or managed Snowflake connection, Dataiku Online makes its datasets relocatable by default; no action is needed.

If you have any doubt about whether your connection makes datasets relocatable, you can ask Support to verify that it uses the above-mentioned variables in its settings (table prefix or path prefix).

You can also open the Settings tab of a dataset created in the connection to verify that the dataset was created with the variables ${projectKey} and ${node} in:

  • the path in the bucket for filesystem connections,

  • the table in SQL connections.
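This manual inspection amounts to checking that both variables appear in the table (or path) setting. The hypothetical helper below scripts that string check; it is not a Dataiku API, just a sketch of the rule described above.

```python
# Sketch of the manual check described above: a table or path setting is
# considered relocatable here if it still contains both variables.
# This is an illustrative helper, not part of Dataiku's API.

REQUIRED_VARIABLES = ("${projectKey}", "${node}")

def looks_relocatable(setting: str) -> bool:
    """Return True if the table/path setting contains both relocation variables."""
    return all(var in setting for var in REQUIRED_VARIABLES)

print(looks_relocatable("${projectKey}_${node}_customers"))  # True
print(looks_relocatable("customers"))                        # False: hard-coded name
```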

What to do if my connection does not support relocatable datasets between the Design and Automation node?

If your connection was created before January 6, 2023, and is not a Redshift, BigQuery, or a managed Snowflake connection, then the datasets contained in that connection are not relocatable between the Design and Automation node.

We strongly recommend using different connections between the Design and the Automation nodes in this case. Having two separate schemas or buckets will ensure that the datasets are not created in the same place and will not overlap.

To do that:

  • When creating or editing a connection in the Launchpad, use the form to specify whether the connection should be used on the Design node, the Automation node, or both.

  • Remap the connections when deploying a project to the Automation node in the Deployer > Deployments > Select the Deployment > Settings > Connections.

  • You will then need to update the deployment and execute your Flow in the Automation node to create the datasets in the new connection.