...
We’re providing code snippets that use the header auth method, but we would advise against writing production code with the API key secret in plain text. If an API key is believed to have been exposed, we would recommend refreshing the secret immediately.
As this uses a personal API key, we would suggest creating team or dataset dummy users to generate the API key, so that if a flow is set up by an individual and that user leaves, the workflows they set up will continue to function.
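For reference, the header auth method used by all of the snippets below looks like the following. This is a minimal sketch; as noted above, in production we’d suggest loading the secret from the environment rather than hard-coding it (the environment variable name here is our own choice, not a platform requirement):

```python
import os

# Every action below is a POST with the API key pair passed as request headers.
headers = {
    'api-key-id': '[your api-key-id]',
    'api-key-secret': '[your api-key-secret]',
    'Content-Type': 'application/json',
}

# Safer for production: read the secret from the environment instead of
# hard-coding it. USMART_API_KEY_SECRET is an illustrative name only.
headers['api-key-secret'] = os.environ.get('USMART_API_KEY_SECRET',
                                           headers['api-key-secret'])
```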
Organisation GUID
Programmatic access works with a POST method to an action. The path to the action includes your Organisation GUID, which can be found in your DCAT “@id” key as the alphanumeric string between “….io/org/” and “/dcat/…”, so “28ccd497-7acd-4470-bd17-721d5cbbd6ef” in the example below. Note this is also available in the DCAT URL and is required when making API calls to datasets.
...
...
A link to your DCAT can be found in a drop-down menu below the logo on your Organisation page, as shown below. Public will return all public datasets as per DCAT; Private will return all datasets.
...
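As a quick illustration, here is one way to pull the Organisation GUID out of the “@id” value described above. This is a minimal sketch that assumes you have already parsed your DCAT JSON into a Python dict (the URL in the example is only illustrative of the shape):

```python
# Minimal sketch: extract the Organisation GUID from the DCAT "@id" value.
# Assumes `dcat` is your DCAT document already parsed into a Python dict.
def org_guid_from_dcat(dcat):
    # The GUID is the path segment between "/org/" and the next "/".
    return dcat["@id"].split("/org/")[1].split("/")[0]

# Example, using the GUID from the example above:
dcat = {"@id": "https://data.usmart.io/org/28ccd497-7acd-4470-bd17-721d5cbbd6ef/dcat/example"}
print(org_guid_from_dcat(dcat))  # 28ccd497-7acd-4470-bd17-721d5cbbd6ef
```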
uSmart Actions
View a dataset’s metadata, dataset:view
This is the simplest action, and it is required to validate other actions:
...
The return from the dataset_view function above will be a JSON document including all of the metadata and descriptions of the APIs and files that make up a dataset, and it provides useful information for many of the other actions discussed below.
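For reference, a minimal sketch of the dataset_view function in the same style as the other snippets in this guide; the datasetGUID payload field name is an assumption, so verify it against your own environment:

```python
import json
import requests

def dataset_view(dataset_guid):
    # Endpoint pattern matches the other actions in this guide.
    url = "https://data.usmart.io/org/[Your Organisation GUID]/dataset:view"
    # Assumption: the action takes the dataset GUID in the request body.
    payload = json.dumps({"datasetGUID": dataset_guid})
    headers = {
        'api-key-secret': '[your api-key-secret]',
        'api-key-id': '[your api-key-id]',
        'Content-Type': 'application/json',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()
```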
Add a file to a dataset, file:create
Add a data file to an existing dataset; you need the dataset GUID and the file to call this function. Two functions are presented here, but these could be refactored into a single function. The s3:generatePutRequest action generates a signed URL that is used with the file:create action to enable a user to upload a file to our AWS S3 service; this action is also used by our file:updateRevision action.
...
Note that filename above will be a path to the file; this is what shows in the UI, so you will expose any folder structure if you run this code from a directory other than where the data is located. This is designed for the UI, which uploads a file where the path is simply the filename. The dataset GUID is as described in the dataset:view action.
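A minimal sketch of the two-step flow described above, reusing the generatePutRequest_from_s3 helper shown in full in the file:updateRevision section below; the payload is modelled on the file:updateRevision payload, so the datasetGUID field name is an assumption:

```python
import json
import os
import requests

def dataset_file_create(filename, dataset_guid):
    # Step 1: get a signed S3 URL (see generatePutRequest_from_s3 below).
    getS3Putrequest = generatePutRequest_from_s3(filename)
    signedRequest_url = getS3Putrequest["result"]["signedRequest"]

    # Step 2: PUT the file to S3 using the signed URL.
    with open(filename, 'rb') as f:
        headers = {'Content-Type': getS3Putrequest["result"]["contentType"]}
        requests.put(signedRequest_url, headers=headers, data=f)

    # Step 3: register the uploaded file against the dataset.
    # Payload modelled on file:updateRevision; "datasetGUID" is an assumption.
    url = "https://data.usmart.io/org/[Your Organisation GUID]/file:create"
    payload = json.dumps({
        "reference": getS3Putrequest["result"]["reference"],
        "fileName": filename,
        "datasetGUID": dataset_guid,
        "fileSize": os.path.getsize(filename),
        "action": "file:create"
    })
    headers = {
        'api-key-secret': '[your api-key-secret]',
        'api-key-id': '[your api-key-id]',
        'Content-Type': 'application/json',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()
```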
Create the output pipelines, resourceContainer:create
Create an output pipeline to enable data sharing; you need the dataset GUID and a pipeline ID to call this function.
...
While we’ve provided the full list of pipelineIds, we’re not currently supporting the real-time Data API, as other actions may be required to enable it. We can look to support this in the future.
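A minimal sketch of the call; the payload field names are assumptions modelled on the other actions in this guide:

```python
import json
import requests

def resource_container_create(dataset_guid, pipeline_id):
    url = "https://data.usmart.io/org/[Your Organisation GUID]/resourceContainer:create"
    # Assumption: field names follow the pattern of the other actions.
    payload = json.dumps({
        "datasetGUID": dataset_guid,
        "pipelineId": pipeline_id,
        "action": "resourceContainer:create"
    })
    headers = {
        'api-key-secret': '[your api-key-secret]',
        'api-key-id': '[your api-key-id]',
        'Content-Type': 'application/json',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()
```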
Replace a file in a dataset, file:updateRevision
Use this action to replace an existing dataset file with a new file; you need the file GUID, which you can get with the dataset:view action, and the new file to call this action. This uses the s3:generatePutRequest action that was also required for file:create.
```python
import json
import os
import requests


# Generates the signed request to load the data into AWS
def generatePutRequest_from_s3(fileName):
    url = "https://data.usmart.io/org/[Your Organisation GUID]/s3:generatePutRequest"
    payload = json.dumps({"fileName": fileName})
    headers = {
        'api-key-secret': '[your api-key-secret]',
        'api-key-id': '[your api-key-id]',
        'Content-Type': 'application/json',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()


def dataset_file_update(filename, file_guid):
    getS3Putrequest = generatePutRequest_from_s3(filename)
    signedRequest_url = getS3Putrequest["result"]["signedRequest"]
    with open(filename, 'rb') as f:
        headers = {'Content-Type': getS3Putrequest["result"]["contentType"]}
        http_response = requests.put(signedRequest_url, headers=headers, data=f)
    fileSize = os.path.getsize(filename)
    url = "https://data.usmart.io/org/[Your Organisation GUID]/file:updateRevision"
    payload = json.dumps({
        "reference": getS3Putrequest["result"]["reference"],
        "fileName": filename,
        "fileGUID": file_guid,
        "fileSize": fileSize,
        "action": "file:updateRevision"
    })
    headers = {
        'api-key-secret': '[your api-key-secret]',
        'api-key-id': '[your api-key-id]',
        'Content-Type': 'application/json',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()


# Example function call
filename = r"[Your file here]"
file_guid = "[your file GUID]"
output = dataset_file_update(filename, file_guid)
print(output)
```
This uses the file GUID to determine the specific file that is to be updated; this can be found by reviewing the output of the dataset:view action, and an example is provided below.
...
Refresh the output pipelines, resourceContainer:process
Use this action to update an output pipeline after updating a file. The function below can be called with a resourceContainerGUID, which can be sourced from the response to dataset:view. The code snippet below includes a function to refresh all output pipelines of a dataset using the dataset_view function above; it is called with the dataset GUID.
...
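A minimal sketch of both calls. It assumes resourceContainer:process takes the resourceContainerGUID in the request body, and the keys used to walk the dataset:view response (result, resourceContainers, GUID) are guesses to verify against a real response:

```python
import json
import requests

def resource_container_process(resource_container_guid):
    url = "https://data.usmart.io/org/[Your Organisation GUID]/resourceContainer:process"
    # Assumption: the action takes the resourceContainerGUID in the body.
    payload = json.dumps({
        "resourceContainerGUID": resource_container_guid,
        "action": "resourceContainer:process"
    })
    headers = {
        'api-key-secret': '[your api-key-secret]',
        'api-key-id': '[your api-key-id]',
        'Content-Type': 'application/json',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()

def refresh_all_pipelines(dataset_guid):
    # Uses the dataset_view sketch from earlier to list the dataset's
    # resource containers. The "resourceContainers" and "GUID" keys are
    # assumptions; verify them against a real dataset:view response.
    dataset = dataset_view(dataset_guid)
    results = []
    for container in dataset["result"].get("resourceContainers", []):
        results.append(resource_container_process(container["GUID"]))
    return results
```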
Closing thoughts and future
These are our most commonly used actions from the UI and should enable most use cases. We will look to document and support other actions in the future depending on demand. We have not provided support for enabling data access to Redshift and SQL at this point, as more actions are currently required to set up the schema and update Redshift from S3.