Claude Artifacts - Build Interactive Apps and Dashboards

October 16, 2024 · Shaun Smith

Introduction

It’s been a few months since Anthropic launched Artifacts, and since then it has graduated from being a Labs feature to being built into the Claude.ai front-end.

Claude has the ability to publish Artifacts, enabling rapid prototyping and sharing without needing to worry about hosting.

For this article, we’ll concentrate on building and publishing interactive Artifacts and provide ideas and prompts to help you get the most from this feature.

All Artifacts in this article were created purely by prompting Claude.ai, with no manual code editing or adjustments.

Coding with Claude

Claude has been trained on huge amounts of source code, documentation and tutorials, which provide the basis for its code generation abilities.

Asking Claude to produce familiar programs or games (for example, “produce an artifact game of asteroids”) will usually produce a workable, albeit simple, result. Further prompts can refine the game, for example by adding features, changing the theme or adjusting the rules.

Note that the same prompt will produce different results every time it is run. The following Artifacts were all produced with the prompt “create a demoscene style display of the word "LLMindset" animated against an animated background”.

This visually demonstrates the degree of randomness in Claude’s responses. Getting the best results requires a combination of refining the prompt and the occasional regeneration.

Below are some techniques and prompts to get the most out of Artifacts.

Store Data


Task List
Task List with Import/Export

Applications are significantly more valuable when they can store and load data between sessions. Artifacts can use the browser’s Local Storage to persist data.

This Task List application demonstrates this, saving the tasks and reloading them when we return (as well as having a pleasing confetti animation when a task is marked complete!).
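The persistence pattern behind this can be sketched as a pair of helpers. This is an assumed implementation, not the generated artifact's code: the `'tasks'` key and the injected `storage` parameter are illustrative choices (in the browser, `storage` would be `window.localStorage`).

```javascript
// Hypothetical storage key; the generated artifact chooses its own.
const STORAGE_KEY = 'tasks';

// Persist the task list as JSON.
function saveTasks(storage, tasks) {
  storage.setItem(STORAGE_KEY, JSON.stringify(tasks));
}

// Load the task list, falling back to an empty list when nothing has been
// saved yet or the stored data is corrupt (the "Data Reset" behaviour).
function loadTasks(storage) {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) ?? [];
  } catch {
    return [];
  }
}
```

In the React artifact, the save call would typically live in a `useEffect` that runs whenever the task list changes.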

Task List with Confetti Animation

Task List Prompts

Main Application

Although the Claude Artifacts prompt refers to React and associated libraries, referring to them explicitly seems to improve component choices. The first version of this prompt failed due to an attempt to import the “date-fns” library, hence the additional guardrail statement at the end.

Create a TODO list application, with the following requirements:

* Simple, responsive and reactive user interface. Use React, Shadcn, Tailwind
 and Lucide icons to make it appealing, animated and easy to use. 
 Use Tailwind's Blue and Orange as the Primary and Secondary colour theme.
* Ensure that at least 2/3 of the display width is used to show the list.
* Items in the TODO list have a "Task", "Due Date" and "Completed" status.
* It must be easy and natural to add new items. Typically, the User will want
 to quickly enter a task and select a Due Date such as "Today", "Tomorrow",
 "Next Week" or "At Weekend". Make it fast, easy and intuitive to add a task.
 Feel free to use a large button and a popup for room. Use "Tomorrow" as the
 default to make sure tasks can be added in as few clicks as possible. If
 the User has selected another option, leave that as the default. It must
 also be possible to enter a specific date.
* It must be possible to mark tasks as "Completed". Show a celebratory 
 animation (e.g. confetti) when a task is complete. Indicate clearly when an
 item has been completed.
* Once the "Completed Animation" is complete, move the Completed item to a 
 separate list. Use tabs to navigate between "Completed" and "Active" tasks.
* It must be possible to EDIT and DELETE tasks.
* Items which are overdue must be subtly marked in a red colour. Overdue tasks
  can be updated  with a single click to set the due date to "Tomorrow" or
  "+3 days" (meaning Today's date +3 days).
* Use browser local storage to keep the task list persistent between sessions.
 Include a "Data Reset" button somewhere in the UI to reset the database. 
 Prompt the User with a message such as "Are you sure - this will permanently
  delete all tasks?"
* date-fns is NOT available to you.

Refinements

Claude had assumed I could change the Tailwind CSS file to enable animations, hence the guidance.

make the "completion animation" more exciting (floating/falling confetti) - note
i cannot alter the tailwind CSS file.
Rename the application "Persistent Task List". 
In the "Add New Task" Dialog make sure that the date selector always shows the
selected date - update it with the buttons (rather than dd/mm/yyyy).

ℹ️
When using Local Storage, it’s a good idea to ask Claude to include a button to clear/reset the saved data to make sure the Artifact can’t get stuck in an invalid state.

While Local Storage is useful, the data is only available on one device. We’ll remix the application to make backup and data migration easier by adding an Import/Export function.
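The Import/Export feature boils down to serialising the task list to JSON text the user can copy out, and parsing (with light validation) what they paste back in. A rough sketch, with hypothetical field names rather than the artifact's actual ones:

```javascript
// Hypothetical task shape: { task, dueDate, completed }.
// Produce human-readable JSON for the "Export" tab.
function exportTasks(tasks) {
  return JSON.stringify(tasks, null, 2);
}

// Parse pasted JSON for the "Import" tab; throws on malformed input
// so the UI can show an error instead of corrupting the stored list.
function importTasks(jsonText) {
  const parsed = JSON.parse(jsonText);
  if (!Array.isArray(parsed)) throw new Error('Expected a JSON array of tasks');
  return parsed;
}
```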

Task List Remix Prompts

Add Backup/Restore

Can you add a TAB that will allow me to extract or replace the JSON tasks so
that I can back it up and restore it easily. 

Reproduce all code.  Be terse and remove all duplication.

Mobile Device Support

Make sure mobile devices are properly supported. Reproduce all code.

The remixed artifact with import/export functionality is here. This entire feature was added with one short prompt!

We can then prompt Claude to generate test data: “the current date is 17 October 2024. produce the JSON for 20 representative tasks, ensure at least 7 are complete and 3 are overdue.”, and import it.

You can use the test data generation prompt if you remix the Import/Export version of the Artifact and want to try it.

Tasks are visible across the two versions, since both read from the same Local Storage. This means we can use the import feature to load the test data, and then use the simpler version for demonstration if we prefer.

Read Files


Base 64 Converter
API Builder

It’s possible to read files with an Artifact.

As an example, to send image or sound data to a model such as GPT-4o or Claude, we often need to Base64-encode it. This simple converter allows the user to drag and drop a file and Base64-encode it.
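The core conversion the artifact performs can be sketched as follows. This is an assumed implementation, not the generated code: in the browser, `FileReader.readAsDataURL` yields a data URL, and the Base64 payload is everything after the first comma.

```javascript
// Extract the Base64 payload from a data URL such as
// "data:image/jpeg;base64,AAAA...". Throws if the input is not a data URL.
function base64FromDataUrl(dataUrl) {
  const comma = dataUrl.indexOf(',');
  if (comma === -1) throw new Error('not a data URL');
  return dataUrl.slice(comma + 1);
}

// Browser usage (illustrative only; FileReader is a browser API):
// const reader = new FileReader();
// reader.onload = () => setOutput(base64FromDataUrl(reader.result));
// reader.readAsDataURL(file);
```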

Base64 Converter Prompts

Main Application

A React developer would usually use the more capable “react-dropzone” component. Because that is disallowed in the Artifact sandbox, we prompt Claude not to use it, and instead use inbuilt browser functionality.

write a base64 converter that allows me to select or drag and drop a file on
to it, and convert it to base64. 
Note - you do NOT have access to "react-dropzone".

make the user experience attractive (using react and tailwind) and "dashboard"
like.  display at least the file name and output length. style the base64
output text box in a way that shows it is machine generated content.

make sure that all components are visible to the user at the start 
even when no file has been selected. 

Update UX

combine the file information with the file input card.

Add Icons

add further icons for consistency.

To test out the OpenAI API, it’s convenient to be able to run calls from the command line.

I’ve taken some sample code from the OpenAI Developer Documentation and provided it to Claude to generate commands to call the API with an image:

API Builder Prompts

Provide API Examples

The highlighted lines indicate which parts of this prompt were copied and pasted from the API Documentation.

The Base64 output will be used to send an image as part of an API call. 

## API Call Example
import base64
from openai import OpenAI
client = OpenAI()
# Function to encode the image
def encode_image(image_path):
  with open(image_path, "rb") as image_file:
    return base64.b64encode(image_file.read()).decode('utf-8')
# Path to your image
image_path = "path_to_your_image.jpg"
# Getting the base64 string
base64_image = encode_image(image_path)
response = client.chat.completions.create(
  model="gpt-4o-mini",
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image?",
        },
        {
          "type": "image_url",
          "image_url": {
            "url":  f"data:image/jpeg;base64,{base64_image}"
          },
        },
      ],
    }
  ],
)
print(response.choices[0])

## cURL API Call
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'

Add the following tabs:

Tab 1) A textbox that contains valid JSON to send to the API.
Tab 2) A cURL command that completes the API call with the image.
Tab 3) An equivalent Powershell 7 command to call the API with the image.

Although the prompt looks long, most of it was pasted from the API Documentation - I only needed to write 4 short instructions to produce a working utility.
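The JSON the generated cURL and PowerShell commands send mirrors the pasted Python example; assembling it can be sketched like this (the model name and field layout come from the example above, the function name is a hypothetical):

```javascript
// Build a chat-completions payload that sends a Base64-encoded image
// alongside a text question, following the structure of the API example.
function buildImagePayload(base64Image, question) {
  return {
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: question },
          {
            type: 'image_url',
            image_url: { url: `data:image/jpeg;base64,${base64Image}` },
          },
        ],
      },
    ],
  };
}
```

The cURL and PowerShell tabs essentially wrap `JSON.stringify` of this object in the appropriate command.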

Visualise Data


Rainfall Dashboard

Artifacts can be used to build interactive visualisations of data.

For this example we’ll visualise rainfall data available from an API supplied by DEFRA (a UK Government Agency).

Rainfall Dashboard

To start with, I copied the API specification into a Project, and began by asking Claude to generate a URL so I could download rainfall data from the API.

Rainfall Dashboard Prompts

Get Rainfall Data

Because we have placed the API specification into a Project file, Claude can refer to it, making it easy to download a dataset. I looked up the long Station ID and included it in the prompt. Note that DEFRA have an interactive website if you want to explore the data further.

Refer to the Hydrology API to construct a URL to download rainfall station 
6b80bd1c-6c19-4c26-875d-108bf8ef1a84 between 01 July 2024 and 01 October 2024.

This prompt generated a URL which I pasted into the browser, saving the resulting data.
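The kind of URL Claude produced can be sketched from the query parameters visible in the downloaded JSON's `@id` field (`station`, `mineq-date`, `max-date`, `observedProperty`); the helper below is an illustration, not Claude's output:

```javascript
// Construct a Hydrology API readings URL for a rainfall station and date range.
function readingsUrl({ station, from, to }) {
  const params = new URLSearchParams({
    station,
    'mineq-date': from,   // inclusive start date
    'max-date': to,       // end date
    observedProperty: 'rainfall',
  });
  return `http://environment.data.gov.uk/hydrology/data/readings.json?${params}`;
}
```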

Create Dashboard

Having downloaded the data, we can then generate a dashboard for the data. The highlighted lines were pasted from the downloaded file.

i have downloaded the rainfall data in JSON format. refer to the Hydrology API for definitions.

here is a snippet of the JSON:

{
    "meta": {
        "@id": "http://environment.data.gov.uk/hydrology/data/readings.json?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a",
        "publisher": "Environment Agency",
        "license": "http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/",
        "licenseName": "OGL 3",
        "comment": "Hydrology API for sub-daily data",
        "version": "2.0.0",
        "hasFormat": [
            "http://environment.data.gov.uk/hydrology/data/readings.html?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a",
            "http://environment.data.gov.uk/hydrology/data/readings.ttl?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a",
            "http://environment.data.gov.uk/hydrology/data/readings.json?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a",
            "http://environment.data.gov.uk/hydrology/data/readings.csv?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a",
            "http://environment.data.gov.uk/hydrology/data/readings.rdf?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a",
            "http://environment.data.gov.uk/hydrology/data/readings.geojson?max-date=2024-10-01&mineq-date=2024-07-01&observedProperty=rainfall&station=95d41928-f47b-4b8c-b449-0fdcb9770c8a"
        ],
        "limit": 100000
    },
    "items": [
        {
            "measure": {
                "@id": "http://environment.data.gov.uk/hydrology/id/measures/95d41928-f47b-4b8c-b449-0fdcb9770c8a-rainfall-t-900-mm-qualified"
            },
            "date": "2024-07-26",
            "dateTime": "2024-07-26T06:15:00",
            "value": 0.01,
            "valid": "10000",
            "invalid": "0",
            "missing": "0",
            "completeness": "Complete",
            "quality": "Good"
        },
        {
            "measure": {
                "@id": "http://environment.data.gov.uk/hydrology/id/measures/95d41928-f47b-4b8c-b449-0fdcb9770c8a-rainfall-t-900-mm-qualified"
            },
            "date": "2024-09-18",
            "dateTime": "2024-09-18T08:45:00",
            "value": 0,
            "valid": "10000",
            "invalid": "0",
            "missing": "0",
            "completeness": "Complete",
            "quality": "Good"
        },

---

Create an artifact that produces visualises this data in a dashboard.

Use React, Tailwind and Lucide icons for styling and make it visually appealing.

Make the dashboard interactive and beautiful
Construct the dashboard with a flex container with flex-wrap.
Use StatCard like elements to display key metrics, styling them with a light
background colour and subtle shadows for the cards. Use flex-wrap to lay these 
out.
Key Metrics could include Total Rainfall, Rainiest Day, Average Rainfall, 
Dry Days, Total Days and more.
Sort the data appropriately, there are multiple readings per day, and the data
is not necessarily in order.
It must be possible to filter data.
It must be possible to view rainfall within a day.
It must be possible to view rainfall data over the whole time series.
Use clean sans-serif fonts for readability, vary font sizes to create visual
hierarchy.
Always round displayed data to 2 decimal places and use UK dates.
It must have a "File Upload" to allow uploading of JSON data for presentation. 
The artifact should not crash when no data has been uploaded. You do not have 
access to react-dropzone.

Add Specific Day Breakdown

add an ability to see the rainfall on a particular day.

UX Improvements

when clicking on a date in the "Daily Rainfall Overview" make that select the
"Rainfall on Specific Day". reduce the height of both charts by 25%

I then shared a sample of the downloaded data and asked Claude to produce a dashboard with a file upload.

Combined with the browser’s Local Storage, this opens up opportunities to load and compare multiple datasets or series over time.

Including API specifications or data schemas in a Project makes it possible for Claude to refer to them when generating Artifacts. This can be helpful for sharing complex data structures between Artifacts, or for parsing and converting data.
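The key-metric aggregation the dashboard prompt asks for (Total Rainfall, Rainiest Day, Dry Days, Total Days, with multiple unordered readings per day) can be sketched like this. It assumes the reading shape from the JSON snippet (`date`, `value`) and is an illustration rather than the generated code:

```javascript
// Aggregate sub-daily rainfall readings into dashboard metrics.
function rainfallMetrics(items) {
  // Sum readings per day (readings arrive unordered, several per day).
  const byDay = new Map();
  for (const { date, value } of items) {
    byDay.set(date, (byDay.get(date) || 0) + value);
  }
  const days = [...byDay.entries()].sort(([a], [b]) => a.localeCompare(b));
  if (days.length === 0) {
    // Don't crash when no data has been uploaded.
    return { totalRainfall: 0, rainiestDay: null, dryDays: 0, totalDays: 0 };
  }
  const total = days.reduce((sum, [, mm]) => sum + mm, 0);
  const rainiest = days.reduce((best, d) => (d[1] > best[1] ? d : best), days[0]);
  return {
    totalRainfall: +total.toFixed(2), // round to 2 decimal places, per the prompt
    rainiestDay: rainiest[0],
    dryDays: days.filter(([, mm]) => mm === 0).length,
    totalDays: days.length,
  };
}
```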

Split Artifacts


Cost Calculator
Scenario Editor
Advanced Calculator

In our article on Batch Pricing, we created this artifact to calculate the cost of running prompts against different models.

Prompt Scenario Price Calculator

This artifact is close to the maximum length of output that Claude can produce (4,096 tokens, or roughly 320 lines of code).

The prompts for creating the Price Calculator are below. Note that although the prompt is long, a large part of it was produced by asking the AI to enhance the specification. I used OpenAI’s o1-preview model for this, but it could just as easily have been Claude.

Price Calculator Prompts

Prompt Refinement

This was the prompt supplied to o1-preview to write out the formulas for the calculations (“write out the formulas for the various scenarios so we can check they are correct”).

help me refine this specification:


# Prompt Scenario Price Calculator

I want to create a tool to enable the User to calculate the cost of running
 Scenarios against different Models.

Given a Scenario and the number of Messages, the Calculator should display the
 cost of running the Scenario against each model, calculated in four different
  ways: Standard, Cached, Batched and Combined.

## Prompt Cost Calculations

### Scenarios

A Scenario consists of tokens of the following types:

* Static (cacheable) Tokens - The number of Input tokens that are cacheable
* Dynamic (changing) Tokens - The number of Input tokens
that change on each run
* Output Tokens - The expected number of output tokens

It should be possible for the User to add and modify the available Scenarios.

#### Default Scenarios

By default the following Scenarios should be available:

| Scenario               | Static Tokens | Dynamic Tokens | Output Tokens |
| ---------------------- | ------------: | -------------: | ------------: |
| Knowledge Base         |         30000 |            200 |           200 |
| Document Summarisation |          3500 |          15000 |          3000 |

### Model Pricing

There are 2 Pricing Structures to accommodate.

In all structures there is:

* "Input Token" Price
* "Output Token" Price
* "Batch Input Token" Price is 50% of the "Input Token" Price
* "Batch Output Token" Price is 50% of the "Output Token" Price

Pricing Structure 1 is known as "OpenAI" and is as follows:

* "Cache Write Token" Price is the same as "Input Token" Price
* "Cache Read Token" Price is 50% of the "Input Token" Price.

Pricing Structure 2 is known as "Anthropic" and is as follows:

* "Cache Write Token" Price is +75% of the "Input Token" price
* "Cache Read Token" Price is -90% of the "Input Token" price
* "Batch - Cache Write Token" is +75% of the "Batch Input Token" price
* "Batch - Cache Read Token" is -90% of the "Batch Input Token" price

#### Default Models

It should be possible for the User to add and modify the name and price table
for different Models. By default, the following Model profiles should be 
available:

| Model       | Pricing Structure | Input Token Price | Output Token Price |
| ----------- | ----------------- | ----------------: | -----------------: |
| GPT-4o      | OpenAI            |             $2.50 |              $2.50 |
| GPT-4o mini | OpenAI            |             $0.15 |              $0.60 |
| Sonnet 3.5  | Anthropic         |             $3.00 |              $3.75 |
| Haiku 3     | Anthropic         |             $0.25 |              $1.25 |

All prices are specified in $ per million tokens (known as m/tok).

### Calculations

The calculator should allow the User to specify the number of Messages for the
 Scenario, and show the final costs for each model.

For example, the User may wish to find out the pricing for 
"Document Summarisation" for 25,000 runs and compare the Standard, Cached,  
Batch and Combined pricing for each Model.

In this case, the number of each type of token in the Scenario is multiplied by
 the number of messages, and the Model Pricing Structure used to calculate
  the cost of the run.

Standard = ((Static Tokens + Dynamic Tokens) * Messages) * Input Price) +
 (Output Tokens * Messages) * Output Price
Batch = ((Static Tokens + Dynamic Tokens) * Messages) * Batch Input Price) +
 (Output Tokens * Messages) * Batch Output Price

Note, that for Cached and Combined  pricing the first run uses the 
"Cache Write" pricing, and further runs use "Cache Read" pricing. Therefore
in our "Document Summarisation" scenario, Run 1 would use the
"Input Cache Write" prices, and Runs the remaining 24,999 runs would use
"Input Cache Read" pricing.

For Models that support the Anthropic pricing structure, the "Combined" colum
should use "Batch - Cache Write Token", "Batch - Cache Read Token" and 
"Batch Output" pricing. Models that use the "OpenAI" pricing structure should
show a "-" in the column to indicate that it is not supported.

If there are fewer than 1024 Static Tokens, then Standard pricing should be
used instead of Cache pricing. In this case, a warning should be displayed
indicating that normal pricing has been used.


## Features

* There must be a display showing the relative proportion of Static, Dynamic
and Output tokens for the Prompt Scenarios. Use horizontal 100% 
Stacked Bar Chart for this.
* The lowest cost for each model must be highlighted
* It must be possible to add and modify Models.
* It must be possible to add and modify Scenarios.
* Models and Scenario editing must be tabular; use an empty row with an "Add"
button to allow the user to fill entries. On complete rows, use a "Remove" 
button to enable deletion of models or scenarios. When showing a model, the 
calculated token prices (e.g. Batch Input) should be shown.
* The User Interface must be intuitive, beautiful to look at and responsive.
* Consider the layout - the interface should be tabbed so that Model and 
Scenario editing are separate activities.
* Edge cases must be considered and handled (for example, if the User deletes 
all Models and Scenarios the tool should not crash).
* You do not have access to the "@/components/ui/table"
* You do not have access to the "@/components/custom-ui"

in particular, i need the price calculations to be well defined. write out the
formulas for the various scenarios so we can check they are correct.

Main Application

The output from the earlier step was then incorporated into the prompt as-is (the generated portion is highlighted below):

# Prompt Scenario Price Calculator

I want to create a tool to enable the User to calculate the cost of 
running Scenarios against different Models.

Given a Scenario and the number of Messages, the Calculator should display
the cost of running the Scenario against each model, calculated in four
different ways: Standard, Cached, Batched and Combined.

## Prompt Cost Calculations

### Scenarios

A Scenario consists of tokens of the following types:

* Static (cacheable) Tokens - The number of Input tokens that are cacheable
* Dynamic (changing) Tokens - The number of Input tokens that change on each run
* Output Tokens - The expected number of output tokens

It should be possible for the User to add and modify the available Scenarios.

#### Default Scenarios

By default the following Scenarios should be available:

| Scenario               | Static Tokens | Dynamic Tokens | Output Tokens |
| ---------------------- | ------------: | -------------: | ------------: |
| Knowledge Base         |         30000 |            200 |           200 |
| Document Summarisation |          3500 |          15000 |          2000 |

### Model Pricing

There are 2 Pricing Structures to accommodate.

In all structures there is:

* "Input Token" Price
* "Output Token" Price
* "Batch Input Token" Price is 50% of the "Input Token" Price
* "Batch Output Token" Price is 50% of the "Output Token" Price

Pricing Structure 1 is known as "OpenAI" and is as follows:

* "Cache Write Token" Price is the same as "Input Token" Price
* "Cache Read Token" Price is 50% of the "Input Token" Price.

Pricing Structure 2 is known as "Anthropic" and is as follows:

* "Cache Write Token" Price is +75% of the "Input Token" price
* "Cache Read Token" Price is -90% of the "Input Token" price
* "Batch - Cache Write Token" is +75% of the "Batch Input Token" price
* "Batch - Cache Read Token" is -90% of the "Batch Input Token" price

#### Default Models

It should be possible for the User to add and modify the name and price
table for different Models. By default, the following Model profiles
should be available:

| Model       | Pricing Structure | Input Token Price | Output Token Price |
| ----------- | ----------------- | ----------------: | -----------------: |
| GPT-4o      | OpenAI            |             $2.50 |             $10.00 |
| GPT-4o mini | OpenAI            |             $0.15 |              $0.60 |
| Sonnet 3.5  | Anthropic         |             $3.00 |             $15.00 |
| Haiku 3     | Anthropic         |             $0.25 |              $1.25 |

All prices are specified in $ per million tokens (known as m/tok).

### Calculations

## **Definitions**

### **Variables**

- **S**: Static Tokens (number of input tokens that are cacheable)
- **D**: Dynamic Tokens (number of input tokens that change on each run)
- **O**: Output Tokens (expected number of output tokens)
- **M**: Messages (number of runs)

### **Token Counts**

- **Total Input Tokens**: `Total_Input_Tokens = (S + D) * M`
- **Total Output Tokens**: `Total_Output_Tokens = O * M`

### **Pricing Variables**

All prices are in dollars per million tokens (`$/1,000,000 tokens`).

#### **Base Prices**

- **Input Price (IP)**: Given for each model.
- **Output Price (OP)**: Given for each model.

#### **Derived Prices**

- **Batch Input Price (BIP)**: `BIP = 0.5 * IP`
- **Batch Output Price (BOP)**: `BOP = 0.5 * OP`

#### **Cache Prices**

**For OpenAI Pricing Structure:**

- **Cache Write Price (CWP)**: `CWP = IP`
- **Cache Read Price (CRP)**: `CRP = 0.5 * IP`

**For Anthropic Pricing Structure:**

- **Cache Write Price (CWP)**: `CWP = IP + (0.75 * IP) = 1.75 * IP`
- **Cache Read Price (CRP)**: `CRP = IP - (0.90 * IP) = 0.1 * IP`
- **Batch Cache Write Price (BCWP)**: `BCWP = BIP + (0.75 * BIP) = 1.75 * BIP`
- **Batch Cache Read Price (BCRP)**: `BCRP = BIP - (0.90 * BIP) = 0.1 * BIP`

---

## **Price Calculations**

### **1. Standard Pricing**

**Formula:**

```
Standard Total Cost = [ (S + D) * M * IP + O * M * OP ] / 1,000,000
```

**Explanation:**

- **Input Cost**: All input tokens are charged at the standard 
input price (IP).
- **Output Cost**: All output tokens are charged at the standard 
output price (OP).

---

### **2. Batched Pricing**

**Formula:**

```
Batch Total Cost = [ (S + D) * M * BIP + O * M * BOP ] / 1,000,000
```

**Explanation:**

- **Input Cost**: All input tokens are charged at the batched input price (BIP).
- **Output Cost**: All output tokens are charged at the batched output
price (BOP).

---

### **3. Cached Pricing**

**Condition:**

- If `S < 1,024`, caching is not used. Use **Standard Pricing** instead and
display a warning.

**Formulas:**

**First Run:**

```
First Run Cost = [ S * CWP + D * IP + O * OP ] / 1,000,000
```

**Subsequent Runs:**

```
Subsequent Runs Cost = [ S * CRP + D * IP + O * OP ] * (M - 1) / 1,000,000
```

**Total Cached Cost:**

```
Cached Total Cost = First Run Cost + Subsequent Runs Cost
```

**Explanation:**

- **First Run**:
  - **Static Tokens (S)**: Charged at the cache write price (CWP).
  - **Dynamic Tokens (D)**: Charged at the standard input price (IP).
  - **Output Tokens (O)**: Charged at the standard output price (OP).
- **Subsequent Runs**:
  - **Static Tokens (S)**: Charged at the cache read price (CRP).
  - **Dynamic Tokens (D)**: Charged at the standard input price (IP).
  - **Output Tokens (O)**: Charged at the standard output price (OP).

---

### **4. Combined Pricing** *(Applicable only to Anthropic Pricing Structure)*

**Condition:**

- Applicable only for models with the **Anthropic** pricing structure.
- If `S < 1,024`, caching is not used. Use **Batched Pricing** instead and
display a warning.

**Formulas:**

**First Run:**

```
First Run Combined Cost = [ S * BCWP + D * BIP + O * BOP ] / 1,000,000
```

**Subsequent Runs:**

```
Subsequent Runs Combined Cost = [ S * BCRP + D * BIP + O * BOP ]
 * (M - 1) / 1,000,000
```

**Total Combined Cost:**

```
Combined Total Cost = First Run Combined Cost + Subsequent Runs Combined Cost
```

**Explanation:**

- **First Run**:
  - **Static Tokens (S)**: Charged at the batch cache write price (BCWP).
  - **Dynamic Tokens (D)**: Charged at the batched input price (BIP).
  - **Output Tokens (O)**: Charged at the batched output price (BOP).
- **Subsequent Runs**:
  - **Static Tokens (S)**: Charged at the batch cache read price (BCRP).
  - **Dynamic Tokens (D)**: Charged at the batched input price (BIP).
  - **Output Tokens (O)**: Charged at the batched output price (BOP).

**Note:**

- For models using the **OpenAI** pricing structure, the **Combined Pricing**
 is **not supported**. Display a "-" in the Combined column for these models.

---

## **Example Calculation**

Let's calculate the **Standard** and **Cached** costs for the
**"Document Summarisation"** scenario using the **GPT-4o mini** model for
**25,000 messages**.

### **Given:**

- **Scenario**: Document Summarisation
  - **Static Tokens (S)**: 3,500
  - **Dynamic Tokens (D)**: 15,000
  - **Output Tokens (O)**: 3,000
- **Messages (M)**: 25,000
- **Model**: GPT-4o mini
  - **Pricing Structure**: OpenAI
  - **Input Price (IP)**: $0.15 per million tokens
  - **Output Price (OP)**: $0.60 per million tokens

### **Derived Prices:**

- **Batch Input Price (BIP)**: `0.5 * IP = 0.5 * $0.15 = $0.075` per 
million tokens
- **Batch Output Price (BOP)**: `0.5 * OP = 0.5 * $0.60 = $0.30` per 
million tokens
- **Cache Write Price (CWP)**: `IP = $0.15` per million tokens
- **Cache Read Price (CRP)**: `0.5 * IP = 0.5 * $0.15 = $0.075` per 
million tokens

### **Calculations**

#### **Standard Pricing**

```
Total_Input_Tokens = (S + D) * M = (3,500 + 15,000) * 25,000 = 
18,500 * 25,000 = 462,500,000 tokens
Total_Output_Tokens = O * M = 3,000 * 25,000 = 75,000,000 tokens

Input Cost = (Total_Input_Tokens * IP) / 1,000,000 = 
(462,500,000 * $0.15) / 1,000,000 = $69.375
Output Cost = (Total_Output_Tokens * OP) / 1,000,000 = 
(75,000,000 * $0.60) / 1,000,000 = $45.00

Standard Total Cost = Input Cost + Output Cost = $69.375 + $45.00 = $114.375
```

#### **Cached Pricing**

Since `S = 3,500` which is greater than `1,024`, caching is applicable.

**First Run Cost:**

```
First_Run_Input_Cost = [ S * CWP + D * IP ] / 1,000,000
                    = [ (3,500 * $0.15) + (15,000 * $0.15) ] / 1,000,000
                    = [ $525 + $2,250 ] / 1,000,000
                    = $2,775 / 1,000,000 = $0.002775

First_Run_Output_Cost = (O * OP) / 1,000,000
                      = (3,000 * $0.60) / 1,000,000
                      = $1,800 / 1,000,000 = $0.0018

First Run Cost = First_Run_Input_Cost + First_Run_Output_Cost = 
$0.002775 + $0.0018 = $0.004575
```

**Subsequent Runs Cost (for `M - 1 = 24,999` runs):**

```
Subsequent_Input_Cost = [ S * CRP + D * IP ] * (M - 1) / 1,000,000
                      = [ (3,500 * $0.075) + (15,000 * $0.15) ] * 24,999 / 1,000,000
                      = [ $262.50 + $2,250 ] * 24,999 / 1,000,000
                      = $2,512.50 * 24,999 / 1,000,000
                      = $62,809,987.50 / 1,000,000 = $62.8099875

Subsequent_Output_Cost = (O * OP) * (M - 1) / 1,000,000
                       = (3,000 * $0.60) * 24,999 / 1,000,000
                       = $1,800 * 24,999 / 1,000,000
                       = $44,998,200 / 1,000,000 = $44.9982

Subsequent Runs Cost = Subsequent_Input_Cost + Subsequent_Output_Cost 
= $62.8099875 + $44.9982 = $107.8081875
```

**Total Cached Cost:**

```
Cached Total Cost = First Run Cost + Subsequent Runs Cost
                  = $0.004575 + $107.8081875 = $107.8127625
```

**Savings Compared to Standard Pricing:**

```
Savings = Standard Total Cost - Cached Total Cost
        = $114.375 - $107.8127625 = $6.5622375
```

## **Edge Case Handling**

- **Static Tokens < 1,024**:
  - Use **Standard Pricing** instead of **Cached Pricing**.
  - Display a warning: "Static Tokens are less than 1,024. Cached pricing 
    is not applied. Standard pricing is used instead."
- **Combined Pricing for OpenAI Models**:
  - Display a "-" or "Not Supported" in the Combined column.
  - No calculations are performed for Combined Pricing on OpenAI models.

## Features

* The lowest cost for each model must be highlighted
* The User Interface must be intuitive, beautiful to look at and responsive.
* Consider the layout - the interface should be tabbed so that Model and 
Scenario editing are separate activities.
* Edge cases must be considered and handled (for example, if the User deletes 
all Models and Scenarios the tool should not crash).
* You do not have access to the "@/components/ui/table"
* You do not have access to the "@/components/custom-ui"
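The Standard and Cached formulas in the specification can be sanity-checked in a few lines. This is a sketch of the formulas only (shown for the OpenAI cache structure), not the artifact's generated code:

```javascript
const PER_M = 1_000_000; // prices are in $ per million tokens

// Standard: all input at IP, all output at OP.
function standardCost({ S, D, O, M, IP, OP }) {
  return ((S + D) * M * IP + O * M * OP) / PER_M;
}

// Cached: first run writes the cache (CWP), later runs read it (CRP);
// below 1,024 static tokens, caching is not applied.
function cachedCost({ S, D, O, M, IP, OP, CWP, CRP }) {
  if (S < 1024) return standardCost({ S, D, O, M, IP, OP });
  const firstRun = (S * CWP + D * IP + O * OP) / PER_M;
  const laterRuns = ((S * CRP + D * IP + O * OP) * (M - 1)) / PER_M;
  return firstRun + laterRuns;
}
```

The Batched and Combined variants follow the same shape with the batch prices substituted in.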

Show Price Table

the calculated results should be shown in a table, models is rows, standard, 
cached, batch, combined as columns. 
update the chart so that the x axis is ordered by standard highest to lowest.
Error - Claude Automatic Fix

Update Error Message, Refine Scenario Editor UX

1) Update the warning text to be "This scenario has fewer than 1024 static 
tokens - Cache Pricing is not applied."
2) On the Manage Scenarios tab, display the scenarios in a table, with an "Action"
column that allows the user to delete or add appropriately. Use "Placeholder Text"
in the input field so that the user can see which value is which. The current
input field forces a "0" there and it can't be overwritten.
Error - Claude Automatic Fix

Update Model Editor UX

Showing Claude some appreciation, making the Model editor consistent with Scenario editing. Prompt edited to “remove unnecessary duplication” as previous attempt failed due to output length limits.

that is truly excellent. can you use that same component on the "manage models"
tab to enable me to manage the models? use a dropdown for "pricing structure". 
make sure that "initialScenarios" is defined. be sure to refactor the code to 
remove any unnecessary duplication.

Placeholder Text

Error - Reproduction Failure (placeholder comments)

Ensuring that new rows show the placeholder text in the Input Boxes.

One final fix - for newly added scenarios and models, don't put a "0" in there
by default so that the placeholder text displays properly. 
you need to reproduce all code so far.
Publish
Remix

UX refinements.

can you put the "scenario" and "number of messages" side by side to save 
a bit of space.
can you make the delete/add buttons a bit more colourful.

Price Display Refinement

can we right-align the numbers in the calculator table, and use a simple "-" rather
than "Not Supported". can we also have a table row hover effect on all table rows.
you will need to reproduce all code.

Reorder Columns

can you update the column order to be Standard, Cached, Batched, Combined. 
reproduce all code.

Model and Scenario Persistence

save the scenarios and models in local storage

State Reset Button.

Output Length Errors

A number of turns at the end were regenerated due to output length limits.

can you add a "Reset" button on the Scenarios and Models tabs to clear local storage,
and go back to the default state for each. reproduce all code, remove duplication
where possible. only produce the artifact, no other text unless absolutely necessary. 
Publish

During creation some responses were regenerated due to Claude occasionally truncating responses for being too long.

Because the Price Calculator uses Local Storage, we can break it into separate, communicating parts, making it easier to expand and modify.

This prompt -

can you extract out the scenario and model manager components in to a new artifact,
use the same local storage format so it is compatible with the original calculator

- produces one artifact for the Model and Scenario editor and this one -

can you now produce the calculator tab as a separate artifact 
(so it has the calculator only), and listen for storage events to update the 
scenarios and models if they change.

- creates a separate Calculator component.

When creating the Calculator, we prompted it to listen for storage events. This makes the Calculator update immediately when the Models or Scenarios are changed.
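The mechanism behind this is the browser's `storage` event, which fires in every *other* open document whenever a localStorage key changes. A minimal sketch of the pattern follows; the key name and function names are assumptions for illustration, not the generated artifact's code:

```javascript
// Shared localStorage key both artifacts agree on (assumed name).
const SCENARIOS_KEY = "price-calculator-scenarios";

// Editor artifact: persist scenarios whenever they change.
function saveScenarios(scenarios) {
  localStorage.setItem(SCENARIOS_KEY, JSON.stringify(scenarios));
}

// Either artifact: load on startup, defaulting to an empty list.
function loadScenarios() {
  const raw = localStorage.getItem(SCENARIOS_KEY);
  return raw ? JSON.parse(raw) : [];
}

// Calculator artifact: the "storage" event fires in other open documents
// when the key changes, so the Calculator refreshes without a reload.
function watchScenarios(onChange) {
  window.addEventListener("storage", (event) => {
    if (event.key === SCENARIOS_KEY && event.newValue !== null) {
      onChange(JSON.parse(event.newValue));
    }
  });
}
```

Note that the `storage` event does not fire in the document that performed the write, which is why the Editor and Calculator must be separate artifacts for this to work.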

We’ll remix the Calculator and add a couple of extra features to the front-end:

Advanced Calculator Prompts

Signpost the Editor

We want to make sure that the editor is easy to find and, if this is the first run, to signpost to users that it needs to be run first.

Always display this link in the Prompt Scenario Calculator "Model/Scenario 
Editor - https://claude.site/artifacts/48c6f145-bc6a-4a41-8199-bf04e79b2809".  

Tell the User to visit that link if there are no scenarios or models available,
or if there is an error loading them.

Add Functionality

Some small functional updates.

reduce the height of the bar chart by 25%. 
show price to 4 decimal places on the tooltip (use 2 decimal places in the 
table). 
add a horizontal stacked bar chart above the price table to show the
proportion of static/dynamic/output tokens.

move the price table beneath the bar chart.

Add Sensitivity Analysis

Adding a Cache Hit Rate analysis feature, and fixing a small calculation defect.

if there are fewer than 1024 static tokens, don't calculate the "Combined" cost
 and display a "-" instead.

add a tab selector to switch the view between the price chart and a new 
"Sensitivity Analysis" view.

This enables users to see the consequence of different "Cache Hit" rates for
"Combined" pricing. 

A Cache Hit Rate of 0% means that all Messages are charged at the
Batch Cache Write Price.

A Cache Hit Rate of 70% means that 30% of messages are charged at the 
Batch Cache Write Price, and 70% at the Batch Cache Read Price, and so on.

The Sensitivity Analysis view has the following features:
* A drop down to select models that have the "Anthropic" pricing structure. 
Always use a default.
* A chart which shows the cost of the Combined run at 0-100% hit rates
(use 10% intervals).
* The chart should also show the standard batch run price (this will be a 
horizontal line on the chart).
* Scale the Y-Axis appropriately (it does not need to be zero-based) on the
min/max prices.
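The hit-rate arithmetic being requested can be sketched as follows. The prices and scenario here are placeholder values purely for illustration (not real model rates), and the formula is our reading of the prompt rather than the artifact's actual code:

```javascript
// Sketch of the requested sensitivity analysis (assumed formula, placeholder
// prices). At hit rate h, (1 - h) of messages pay the batch cache-write rate
// on the static tokens and h pay the batch cache-read rate.
function combinedCostAtHitRate(h, s, p) {
  const staticPerMsg =
    (1 - h) * s.staticTokens * p.batchCacheWrite +
    h * s.staticTokens * p.batchCacheRead;
  const restPerMsg = s.dynamicTokens * p.batchInput + s.outputTokens * p.batchOutput;
  return (staticPerMsg + restPerMsg) * s.messages / 1e6;
}

// Placeholder scenario and prices (per 1M tokens), for illustration only.
const scenario = { staticTokens: 3500, dynamicTokens: 15000, outputTokens: 3000, messages: 25000 };
const prices = { batchInput: 0.15, batchOutput: 0.75, batchCacheWrite: 0.1875, batchCacheRead: 0.015 };

// One chart point per 10% interval, 0-100%, as the prompt asks for.
const series = Array.from({ length: 11 }, (_, i) => ({
  hitRate: i * 10,
  cost: combinedCostAtHitRate(i / 10, scenario, prices),
}));
```

Because the cache-read rate is lower than the cache-write rate, the cost falls monotonically as the hit rate rises, which is the downward-sloping curve the chart displays against the flat standard batch-price line.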

The final version instructs the user to open the Model/Scenario editor if the Artifact has not been set up, and includes a Batch Price Sensitivity calculator to show the effect of different Cache Hit rates.

External Libraries

Claude HTML artifacts can incorporate libraries hosted on cdnjs.com. This can help keep Artifacts compact, and significantly expand their functionality. They can be used in the following way6:

<script src="https://cdnjs.cloudflare.com/ajax/libs/animejs/3.2.2/anime.min.js"
 crossorigin="anonymous" referrerpolicy="no-referrer"></script>

use the above script include in an HTML file and use the anime.js to create
spectacular dynamic explosion effects when i click the mouse on the canvas.

Sometimes it may be more convenient to make small changes to Text, Colours or Spacing by editing the Artifact code directly. In that case, make the changes and use the Remix template to bring the changes in.

<remix_instructions>
The following is an artifact created from an earlier conversation. Please begin
the conversation by reproducing the content in the appropriate artifact 
line by line without any change. Doing so will render the artifact in the UI
and allows the user to interact with it. To maintain functionality and 
usefulness, any changes and remixes created in this conversation should 
maintain the full content each time without omission (e.g. Do not 
leave comments like "// Rest of the code here...").
</remix_instructions>
<artifact>
  <title>Artifact Title</title>
  <type>application/vnd.ant.react</type>
  <content>
    ... Updated Code Here
  </content>
</artifact>

Note: This technique has NOT been used in producing the Artifacts in this article.

Conclusion

Models

Sonnet 3.5’s launch demonstrated a step-change in LLM Coding ability - compare the outputs from the prompts in this article against Opus 3 - and Artifacts are a good showcase for it.

This makes the anticipated launch of Opus 3.5 all the more exciting - if it pairs similar code-production training with greater attention capabilities, it will enable coherent code production on much larger problems, especially when combined with agent-style approaches that let the LLM work on more targeted chunks.

Tooling

IDE and LLM interfaces are rapidly converging. One-click hosting is a compelling feature for deploying these types of small applications and utilities.

The Artifacts feature itself would be improved by the ability to edit Artifact code and assets in-place to make small changes. There are also a number of guardrails that need reinforcing (library availability for example) - an option to automatically inject these prompts would likely improve the experience for the casual user.

Final Words

Over the next few months, the combination of more sophisticated models and improved tools will enable more complex and innovative applications to be built and deployed at speed and scale. The ability to incorporate external APIs, or the AI itself, from Artifacts-style environments may begin to challenge the value of traditional IDE approaches and code structures.



Footnotes


  1. Artifacts is enabled by default. It can be switched on or off by following the instructions here↩︎

  2. Demoscene is a style of Computer Art. ↩︎

  3. Local Storage is usable with React artifacts. The mobile application does not support it (however, shared artifacts work fine). ↩︎

  4. Chrome based browsers support “Sync Storage”, however Artifacts do not have access to this. ↩︎

  5. Sonnet 3.5 supports 8192 Output Tokens via the API. ↩︎

  6. JavaScript explosion effects are something that Claude already excels at - using anime.js does not give a better result in this simple case! ↩︎