Create a destination for AWS + S3 + Databricks or Snowflake

Last updated: Apr 13, 2026
IMPLEMENTATION
HEALTH TECH VENDOR

To populate your Amazon Web Services (AWS) S3 repository with healthcare data from an EHR system via Redox (and then to optionally feed that data into Databricks or Snowflake for analytics), you must configure a specific Redox cloud destination. A Redox destination represents where a message is delivered, like the address in the “To” line of an email header. Learn more about connecting Redox to your cloud repository.

You'll need to perform some steps in your cloud product(s) and some in Redox. You can perform Redox setup in our dashboard or with the Redox Platform API.

Prerequisites

  • Establish a connection with your preferred EHR system. Learn how to request a connection.
  • Decide which combination of cloud products to use. Redox currently supports any of these combinations with your AWS cloud repository:
    1. AWS + AWS S3
    2. AWS + AWS S3 + Databricks
    3. AWS + AWS S3 + Snowflake
  • Complete your AWS (and any other cloud product) configuration before creating your Redox destination. Save any downloads with secret values, since you’ll need to enter some of these details into the Redox dashboard.
  • Grant access to Redox from AWS (and any other cloud product) to authorize Redox to push data to your cloud repository.

Configure in AWS

  1. Navigate to the AWS dashboard and log in.
  2. Attach a policy to your IAM user that allows s3:PutObject actions against your S3 bucket.
  3. Generate an access key and secret pair. Save the secret key, since it will only be visible once. You'll need it for Redox setup later.
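As a reference for step 2, an IAM policy granting only the needed permission might look like the following sketch. The bucket name `my-redox-bucket` is a hypothetical placeholder; substitute your own bucket name.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-redox-bucket/*"
    }
  ]
}
```

Scoping the policy to `/*` under the bucket allows object uploads without granting broader bucket or account access.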

Create a cloud destination in Redox

Next, create a cloud destination in your Redox organization. When the EHR system sends healthcare data to Redox, we push it on to your configured AWS + S3 cloud destination.

In the dashboard

  1. From the Product type field, select Databricks or Snowflake if you’re using one of those cloud products with AWS S3. Your S3 settings will be ingested with the additional cloud product. Select S3 if you're not using either Databricks or Snowflake.
  2. For the configure destination step, populate these fields, then click the Next button.
    1. Bucket name: Enter the S3 bucket name. Locate this value in the AWS dashboard.
    2. Object key prefix (optional): Enter any prefix you want prepended to new files when they're created in the S3 bucket. Add / to put the files in a subdirectory. For example, redox/ puts all the files in the redox directory.
  3. For the auth credential step, either a drop-down list of existing auth credentials displays or a new auth credential form opens. Learn how to create an auth credential for AWS SigV4.

With the Redox Platform API

  1. In your terminal, prepare the /v1/authcredentials request.
  2. Specify these values in the request.
    • Locate the accessKey and secretKey values in the AWS dashboard.
      Example: Create auth credential for AWS S3 + Databricks or Snowflake

      ```shell
      curl 'https://api.redoxengine.com/platform/v1/authcredentials' \
        --request POST \
        --header "Authorization: Bearer $API_TOKEN" \
        --header 'accept: application/json' \
        --header 'content-type: application/json' \
        --data '{
          "organization": "<Redox_organization_id>",
          "name": "<human_readable_name_for_auth_credential>",
          "environmentId": "<Redox_environment_ID>",
          "authStrategy": "AwsSigV4",
          "accessKey": "<access_key_from_AWS>",
          "secretKey": "<secret_key_from_AWS>",
          "serviceName": "s3",
          "awsRegion": "<aws_region_of_AWS_S3_bucket>"
        }'
      ```
  3. You should get a successful 200 response and a payload populated with the details of the new auth credential.
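     If you want to capture the new credential's ID for the destination request in the next step, you can extract it from the response with jq. This is a sketch only: the sample payload and the `id` field name are assumptions about the response shape, not confirmed by this guide.

     ```shell
     # Hypothetical response payload; the "id" field name is an assumption.
     RESPONSE='{"id": "auth-cred-123", "name": "my-s3-credential", "authStrategy": "AwsSigV4"}'

     # Extract the credential ID for use in the destination request.
     AUTH_CREDENTIAL_ID=$(echo "$RESPONSE" | jq -r '.id')
     echo "$AUTH_CREDENTIAL_ID"
     ```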
  4. In your terminal, prepare the /v1/environments/{environmentId}/destinations request with these values:
    • Set authCredential to the auth credential ID from the response you received in step #3.
    • Populate cloudProviderSettings with these settings.
      • Enter the productId based on your specific setup:
        • S3 only: s3
        • S3 + Databricks: databricks
        • S3 + Snowflake: snowflake
      • Locate the bucketName in the AWS dashboard.
      • The keyPrefix is optional. If specified, it gets prepended to the created file path in AWS S3. You can append / after the prefix name to indicate a directory path.
        Example: Values for S3, Databricks, or Snowflake cloudProviderSettings

        ```json
        {
          "cloudProviderSettings": {
            "typeId": "aws",
            "productId": "<s3_databricks_or_snowflake>",
            "settings": {
              "bucketName": "<bucket_name>",
              "keyPrefix": "<optional_prefix>"
            }
          }
        }
        ```
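        Putting the settings together, a full destination request might look like the following sketch. The `name` field and the overall request body shape beyond cloudProviderSettings are assumptions; consult the Redox Platform API reference for the exact schema.

        ```shell
        curl 'https://api.redoxengine.com/platform/v1/environments/<Redox_environment_ID>/destinations' \
          --request POST \
          --header "Authorization: Bearer $API_TOKEN" \
          --header 'accept: application/json' \
          --header 'content-type: application/json' \
          --data '{
            "name": "<human_readable_name_for_destination>",
            "authCredential": "<auth_credential_id_from_step_3_response>",
            "cloudProviderSettings": {
              "typeId": "aws",
              "productId": "<s3_databricks_or_snowflake>",
              "settings": {
                "bucketName": "<bucket_name>",
                "keyPrefix": "<optional_prefix>"
              }
            }
          }'
        ```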
  5. You should get a successful 200 response with a payload populated with the details of the new AWS cloud destination. Specifically, the verified status of the destination should be set to true.
  6. Your new destination can now receive messages. Each message pushed to this destination creates a file in the S3 bucket, named based on the log ID of the message.
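To picture how the optional object key prefix and the message log ID might combine into a final S3 object key, consider this sketch. The exact file-naming convention (separator, extension) is an assumption; only "prefix is prepended" and "name is based on the log ID" come from this guide.

```shell
# Hypothetical example: composing an S3 object key from the configured
# key prefix and a message log ID (exact naming details assumed).
KEY_PREFIX="redox/"
LOG_ID="a1b2c3d4-0000-1111-2222-333344445555"
OBJECT_KEY="${KEY_PREFIX}${LOG_ID}"
echo "$OBJECT_KEY"
```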