
Chat With Your Data
Description
Conversational analysis over your databases and spreadsheets.
Details
https://api.chatwithyourdata.io/.well-known/ai-plugin.json
{
"api": {
"type": "openapi",
"url": "https://api.chatwithyourdata.io/openapi.yaml"
},
"auth": {
"authorization_content_type": "application/json",
"authorization_url": "https://chatwithyourdata.us.auth0.com/oauth/token",
"client_url": "https://chatwithyourdata.us.auth0.com/authorize",
"scope": "openid profile email offline_access",
"type": "oauth",
"verification_tokens": {
"openai": "fe2b7236137c4984b021ff1264411dc8"
}
},
"contact_email": "team@julius.ai",
"description_for_human": "Conversational analysis over your databases and spreadsheets.",
"description_for_model": "Perform analysis on databases and spreadsheets.",
"legal_info_url": "https://chatwithyourdata.io/tos.html",
"logo_url": "https://api.chatwithyourdata.io/logo.png",
"name_for_human": "Chat With Your Data",
"name_for_model": "chat_with_data",
"schema_version": "v1"
}
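The auth block above describes a standard OAuth authorization-code flow: the user authorizes at client_url, and the resulting code is exchanged for a token at authorization_url with a JSON body (per authorization_content_type). Below is a minimal sketch of that exchange using Python's requests library; the client_id, client_secret, redirect_uri, and auth_code values are placeholders, since real credentials are only issued when the plugin is installed.

# Sketch of the code-for-token exchange implied by the manifest's auth block.
# client_id, client_secret, redirect_uri, and auth_code are placeholders.
import requests

TOKEN_URL = "https://chatwithyourdata.us.auth0.com/oauth/token"

def exchange_code_for_token(auth_code, client_id, client_secret, redirect_uri):
    # authorization_content_type is application/json, so send a JSON body.
    payload = {
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
    resp = requests.post(TOKEN_URL, json=payload, timeout=30)
    resp.raise_for_status()
    # The offline_access scope means a refresh_token should accompany the
    # access_token in the response.
    return resp.json()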
https://api.chatwithyourdata.io/openapi.yaml
{
"components": {
"schemas": {
"availableSourcesResponse": {
"properties": {
"sources": {
"description": "The list of sources available for querying data.",
"items": {
"properties": {
"data_type": {
"description": "The type of data source.",
"type": "string"
},
"description": {
"description": "The description of the data source.",
"type": "string"
},
"id": {
"description": "The ID of the data source, used for querying it (only needed for the backend, the user doesn't need to know).",
"type": "string"
},
"name": {
"description": "The name of the data source.",
"type": "string"
}
},
"type": "object"
},
"type": "array"
}
},
"type": "object"
},
"getHelpResponse": {
"properties": {
"message": {
"description": "The help text for the plugin.",
"type": "string"
}
},
"type": "object"
},
"linkDBRequest": {
"properties": {
"db_string": {
"description": "A string that can be used by psycopg2 to connect to the DB. It needs to be formatted dbtype://user:password@host:port/dbname",
"type": "string"
},
"name": {
"description": "The name the user wants to db the sheet.",
"type": "string"
},
"purpose": {
"description": "The purpose of the db; general description.",
"type": "string"
}
},
"type": "object"
},
"linkDBResponse": {
"properties": {
"message": {
"description": "The message to display to the user.",
"type": "string"
}
},
"type": "object"
},
"linkSheetRequest": {
"properties": {
"name": {
"description": "The name the user wants to give the sheet (ask the user if not provided).",
"type": "string"
},
"purpose": {
"description": "The purpose of the sheet; general description.",
"type": "string"
},
"sheetURL": {
"description": "The URL of the Google Sheet to link.",
"type": "string"
}
},
"type": "object"
},
"linkSheetResponse": {
"properties": {
"filename": {
"description": "The filename of the sheet.",
"type": "string"
},
"head": {
"description": "Markdown of the header row of the sheet.",
"items": {
"type": "string"
},
"type": "string"
},
"message": {
"description": "The message to display to the user.",
"type": "string"
}
},
"type": "object"
},
"manipulateSheetRequest": {
"description": "The code to run in the Jupyter notebook cell.",
"properties": {
"already_know_head": {
"description": "Whether the head of the target sheets has already been analyzed. If this the first message, this is false and it needs to be run before any additional queries should be attempted.",
"type": "boolean"
},
"files_to_load": {
"description": "IMPORTANT. Since this is being run on a remote server, the files must be uploaded from the user's local files. This is a comma separated list of filenames to load from the remote server to the local disk, if any. Names must match the names from get_available_sources.",
"type": "string"
},
"is_first_cell": {
"description": "Whether this is the first cell to execute. (needed for preparing the notebook)",
"type": "boolean"
},
"plan": {
"description": "IMPORTANT. Provide a step-by-step explanation of what query's going to be and how to troubleshoot.",
"type": "string"
},
"query": {
"description": "The python pandas code to run in the notebook (the next cell to execute). This is being executed in a remote Jupyter notebook.\nYou should always start sessions by: - reading in any files that will be used - Getting the head or inspecting the data in some way - Troubleshooting errors\n- If there's issues with the data, work with the user to remove rows that are causing issues, clean the data, or otherwise get it into a shape that doesn't result in constant errors.\n- When reading CSVs, if you don't know the character encoding, try to read with UTF first and fall back to ISO-8859-1 if that fails (just use try/except, same with other sorts of things; if there's ambiguous values try a few methods and see which one works and use that moving forward).\n- Troubleshoot any issues by inspecting columns and data types with .info() and .head().\n- Use coerce = true with numeric data to help it avoid erroring. - !IMPORTANT! Do not use F strings. Avoid using any sorts of quotes in your outputs - use <> instead of quotes to display messages. - If something has failed multiple times, switch strategies. - Deal with nan values before plotting data. - Before creating line plots, make sure that the data is sorted by the values on the x-axis. ",
"type": "string"
}
},
"type": "object"
},
"manipulateSheetResponse": {
"properties": {
"errors": {
"description": "A list of uncaught exceptions that occurred while running the query.",
"items": {
"description": "The error message.",
"type": "string"
},
"type": "array"
},
"exported_file_url": {
"description": "The URL of the exported file if it was requested.",
"type": "string"
},
"image_urls": {
"description": "The list of image URLs of the plots in PNG.",
"items": {
"type": "string"
},
"type": "array"
},
"outputs": {
"description": "A list of the outputs from running the query.",
"items": {
"description": "The output of the query.",
"properties": {
"type": {
"description": "The type of the output.",
"type": "string"
},
"value": {
"description": "The value of the output.",
"type": "string"
}
},
"type": "object"
},
"type": "array"
}
},
"type": "object"
},
"queryDBRequest": {
"properties": {
"columnsToUse": {
"description": "The columns to use in the query in table.column format. Query the tables first to see what columns are available.",
"items": {
"type": "string"
},
"type": "array"
},
"data_source_id": {
"description": "The ID of the data source to query, gotten from linking or listing available sources.",
"type": "string"
},
"explanation": {
"description": "The explanation of the query that's going to be run. The query is called query.",
"type": "string"
},
"plotKwargs": {
"description": "Whether to provide a plot of the query results. This will run from df.plot(**plotKwargs) on the results of the query.",
"type": "dict || null"
},
"query": {
"description": "The query to run on the DB. It needs to be formatted as a valid SQL query.",
"type": "string"
},
"user_message": {
"description": "(required) The user's original message.",
"type": "string"
}
},
"summary": "The query will be run with psycopg2 so it needs to be formatted as a valid SQL query. Before querying tables, get the table schemas if it is the first time querying the table.",
"type": "object"
},
"queryDBResponse": {
"properties": {
"img_url": {
"description": "The url of the plot if it was provided.",
"type": "string"
},
"message": {
"description": "A message if the query was successful.",
"type": "string"
},
"response": {
"description": "The response from the DB.",
"type": "string"
}
},
"type": "object"
},
"useFileRequest": {
"properties": {
"filename": {
"description": "The filename of the file to use.",
"type": "string"
}
},
"type": "object"
},
"useFileResponse": {
"properties": {
"filename": {
"description": "The filename of the sheet.",
"type": "string"
},
"head": {
"description": "Markdown of the header row of the sheet.",
"items": {
"type": "string"
},
"type": "string"
},
"message": {
"description": "The message to display to the user.",
"type": "string"
}
},
"type": "object"
}
}
},
"info": {
"description": "Create conversations to explore data in spreadsheets, Google Sheets, and SQL databases.",
"title": "Chat With Your Data",
"version": "v1"
},
"openapi": "3.0.1",
"paths": {
"/api/who_am_i": {
"get": {
"operationId": "whoAmI",
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/whoAmIResponse"
}
}
},
"description": "OK"
}
},
"summary": "Provides email that the user is logged in as for managing their data."
}
},
"/get_available_sources": {
"get": {
"operationId": "get_available_sources",
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/availableSourcesResponse"
}
}
},
"description": "OK"
}
},
"summary": "Provides the sources available for querying data."
}
},
"/help": {
"get": {
"operationId": "get_chatWithYourData_instructions",
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/getHelpResponse"
}
}
},
"description": "OK"
}
},
"summary": "Provides instructions for how to use the plugin, for use when asked how to use it or other troubleshooting. Upload (and download) files at https://julius.ai/files, analyze and visualize it"
}
},
"/link_db": {
"post": {
"operationId": "linkDB",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/linkDBRequest"
}
}
}
},
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/linkDBResponse"
}
}
},
"description": "OK"
}
},
"summary": "Add a doctring for a DB to the conversation. It will be used by psycopg2 to connect to the DB so needs to be formatted dbtype://user:password@host:port/dbname. The user should supply a name and description of what the database is for when linking it."
}
},
"/link_sheet": {
"post": {
"operationId": "linkSheet",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/linkSheetRequest"
}
}
}
},
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/linkSheetResponse"
}
}
},
"description": "OK"
}
},
"summary": "Links a Google Sheet to a conversation."
}
},
"/manipulate_sheet": {
"post": {
"operationId": "manipulateSheet",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/manipulateSheetRequest"
}
}
}
},
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/manipulateSheetResponse"
}
}
},
"description": "OK"
}
},
"summary": "Inspect, edit, manipulate, download, and transform data and spreadsheets. Uses a Jupyter notebook to run any python code on any data. Can run any code, including !pip install package_name (always pin to the last version you know)."
}
},
"/query_db": {
"post": {
"operationId": "queryDB",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/queryDBRequest"
}
}
}
},
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/queryDBResponse"
}
}
},
"description": "OK"
}
},
"summary": "Run a read-only query on a database. Only used for POSTGRES and MYSQL databases."
}
},
"/upload": {
"get": {
"operationId": "upload",
"responses": {
"200": {
"content": {
"text/plain": {
"schema": {
"description": "Upload instructions.",
"type": "string"
}
}
},
"description": "OK"
}
},
"summary": "Upload a file to be used with manipulate_sheet. To upload sheets (csv, excel, etc), go to https://julius.ai/files. You can upload files up to 2GB."
}
},
"/use_file": {
"post": {
"operationId": "useFile",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/useFileRequest"
}
}
}
},
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/useFileResponse"
}
}
},
"description": "OK"
}
},
"summary": "Attaches a file to a conversation to be used with manipulate_sheet."
}
}
},
"servers": [
{
"url": "https://api.chatwithyourdata.io/"
}
]
}
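Taken together, the spec implies a simple call sequence for database work: list the linked sources with get_available_sources, then run a read-only SQL query through queryDB. The sketch below shows that flow with Python's requests library; the bearer token, the SQL, and the table and column names are placeholders, and the assumption that the first listed source is the target database is purely illustrative.

# Sketch of the list-then-query flow described by the spec above.
# ACCESS_TOKEN, the SQL, and the table/column names are placeholders.
import requests

BASE = "https://api.chatwithyourdata.io"
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # from the OAuth flow above
    "Content-Type": "application/json",
}

# 1. List the data sources already linked to the conversation.
sources = requests.get(BASE + "/get_available_sources",
                       headers=HEADERS, timeout=30).json()["sources"]

# Assume the first source is the linked database (the spec does not
# enumerate data_type values, so inspect data_type in practice).
db = sources[0]

# 2. Run a read-only query, matching the queryDBRequest schema.
body = {
    "data_source_id": db["id"],
    "query": "SELECT region, SUM(amount) AS total FROM orders GROUP BY region",
    "columnsToUse": ["orders.region", "orders.amount"],
    "explanation": "Aggregate order amounts by region.",
    "user_message": "Which region has the highest sales?",
    "plotKwargs": {"kind": "bar", "x": "region", "y": "total"},
}
result = requests.post(BASE + "/query_db", json=body,
                       headers=HEADERS, timeout=60).json()
print(result.get("message"), result.get("response"), result.get("img_url"))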
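For spreadsheet work, the manipulateSheetRequest query description spells out several pandas conventions: fall back from UTF-8 to ISO-8859-1 when reading CSVs, coerce numeric data, handle NaN values before plotting, and sort by the x-axis before line plots. The cell below is one hypothetical way a generated query could follow those conventions; sales.csv and its columns are made up for illustration.

# Illustration of the conventions in the manipulate_sheet query description.
# The file name and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# Read with UTF-8 first and fall back to ISO-8859-1 if that fails.
try:
    df = pd.read_csv("sales.csv", encoding="utf-8")
except UnicodeDecodeError:
    df = pd.read_csv("sales.csv", encoding="ISO-8859-1")

# Inspect columns and data types before doing anything else.
df.info()
print(df.head())

# Coerce values so bad entries become NaN instead of raising.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df["date"] = pd.to_datetime(df["date"], errors="coerce")

# Deal with NaN values before plotting, and sort by the x-axis values.
df = df.dropna(subset=["date", "amount"]).sort_values("date")

df.plot(x="date", y="amount", kind="line")
plt.savefig("amount_over_time.png")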
Discover other plugins from the data category

SchoolDigger School Data Plugin
Get detailed data on over 120,000 K-12 schools and 18,500 districts in the United States.
Triple Whale
Get e-commerce benchmarks for social ad platforms! Segment metrics by industry, ad spend, and AOV.

Noteable
Create notebooks in Python, SQL, and Markdown to explore data, visualize, and share notebooks with everyone.

BlockAtlas
Search the US Census! Find data sets, ask questions, and visualize.

Golden
Get current factual data on companies from the Golden knowledge graph.
Show Me Diagrams
Create and edit diagrams directly in chat.

IEM Plugin
Plugin for working with IEM data.
vidIQ Co-Creator for YouTube
Plugin for working with research data on vidIQ and YouTube.

Now
Get Google Trends. In Japan, you can also get Twitter trends and search Twitter keywords.

Quiver Quantitative
Access data on congressional stock trading, lobbying, insider trading, and proposed legislation.
Search UK Companies
Fetch public information on UK-registered companies and their officers from Companies House.

Lark Base Importer
Importing data into Lark Base for further analysis and presentation. An easy-to-use data management solution.

Clearbit
Access the Clearbit Enrichment, Prospecting, and Reveal APIs and website-visitor data to get information about companies.
Definitive Facts
Ask questions using 100+ relational datasets - sports, finance, and more at https://definitive.io/datasets.

Owler
Owler provides real-time business news and insights on private and public companies.

Chat with GSheet
Conversational analysis over G Sheet data.

CSV Export
Create and export custom CSV layouts in a flash.

Shulex Insight
Provides global market insights and consumer insights to e-commerce enterprises, based on global merchandise data.

Tabor AI
Trusted source for senior living market research, data, and analytics. 35K communities, 9K operators in the USA.

Poll the People
The ultimate guide for market research and surveys.

World Bank Data
Access global data on development, economics, demographics, and more from the World Bank Datasets using a query term.

Make A Sheet
Generate a csv file that can directly be imported into Google Sheets or MS Excel.

Vehicle Data IL
An Israel-focused tool, extracting car details from data.gov.il based on model, year, hue, and count.

Scholarly Graph Link
You can search papers, authors, datasets and software. It has access to Figshare, Arxiv, and many others.