Claude Analysis Tool - First Look
Introduction
Last week, Anthropic released an upgraded version of Sonnet 3.5, followed a couple of days later by a new Analysis Tool within Claude.ai. The new feature allows Claude to conduct precise data analysis, using context to generate visualisations and intelligent insights.
The Analysis Tool is available as a preview for Professional plan users. It builds upon the Artifacts feature we explored previously, adding powerful data analysis capabilities. This article examines how to use these features effectively, compares them with ChatGPT’s tooling, and explores the implications of Anthropic’s unique design.
Getting Started
For initial testing, I downloaded some Exercise Tracker Data from Kaggle.
After I uploaded the data, Claude analysed it, and with a couple of extra prompts produced an animated 3D visualisation and an interactive dashboard with detailed contextual analysis.
Claude understood the meaning within the data, providing not just statistical analysis but useful interpretation. The second image above shows an example of this: detailed insights and analysis based on the movement data for each exercise type. Additionally, it was able to provide insights on:
- Athlete Fatigue over Repetitions
- Relative “Smoothness” of technique between athletes
- Consistency of movement between sets
It’s impressive to watch Claude interrogate datasets in different ways over a number of rounds, conducting statistical analysis and asking deeper questions of the data.
I’ve tried this with a number of other datasets - Enterprise, Financial and Environmental Data - and been impressed at the levels of understanding shown.
There are some limitations in the preview that are worth being aware of:
- The upload size is limited to roughly 0.5 MB of data, constrained by the size of the Chat Context Window. A partial workaround is detailed below.
- Although Analysis Artifacts can be published, they don’t load correctly.
- Analysis Artifacts cannot be opened on Mobile.
Design Approach
Anthropic’s choice of architecture for this feature is noteworthy: deterministic analysis runs in the browser, while orchestration and inference remain server-side.
This is how - and where - your data is processed with the Analysis feature:
Step | Activity |
---|---|
1 | You upload your data to Claude |
2 | Claude examines the data and decides whether to use the Analysis Tool (displaying a message such as “Contemplating”) |
3 | If the data is suitable, Claude generates an analysis program and sends it to your Browser |
4 | Your Browser runs the program, summarising the data locally |
5 | The results are sent back to Claude to produce an Artifact or summary, or to conduct further analysis |
6 | The produced Analysis Artifact then reads the data from the Browser |
This happens automatically and transparently, with Claude orchestrating the execution and interpretation of the analysis results between Server and Browser.
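To make steps 3 to 5 concrete, here is a minimal sketch of the kind of program Claude might generate and hand to the browser, using the Papa Parse and Lodash libraries the tool relies on (see the comparison table below). The file name and column names are hypothetical stand-ins; the real sandbox wires the uploaded file in through its own API.

```typescript
// A sketch of a browser-side analysis program, assuming Papa Parse and
// Lodash are available as they are in the Analysis Tool's sandbox.
import Papa from "papaparse";
import _ from "lodash";

// Hypothetical schema for the exercise tracker data.
type Row = { exercise_type: string; reps: number; weight: number };

// Stand-in for the sandbox's file access: fetch the uploaded CSV as text.
const csv = await fetch("exercise_tracker.csv").then((r) => r.text());

const { data } = Papa.parse<Row>(csv, {
  header: true,        // first row holds column names
  dynamicTyping: true, // convert numeric strings to numbers
  skipEmptyLines: true,
});

// Summarise locally: group rows by exercise type and compute per-group stats.
const summary = _.map(_.groupBy(data, "exercise_type"), (rows, type) => ({
  type,
  sets: rows.length,
  meanReps: _.meanBy(rows, "reps"),
  maxWeight: _.maxBy(rows, "weight")?.weight,
}));

// Only this compact summary travels back to Claude, not the raw rows.
console.log(JSON.stringify(summary, null, 2));
```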
The file size limitation is driven by the need to transmit the full dataset to Claude within its Context Window before beginning its analysis tasks: at a rough four characters per token, 0.5 MB of text already consumes around 125,000 of the model’s 200,000-token window, leaving headroom for the conversation and analysis itself. Fortunately, Anthropic have confirmed they are working on addressing this.
This design offers intriguing possibilities when paired with the new Computer Use features built into the model.
For example, Claude could use its Bash tool to work with data sources directly on remote computers - extracting data, processing files, or even conducting machine learning. Visualisation dashboards could run entirely in your environment, without data needing to pass through third parties.
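As a rough illustration of that direction, the sketch below uses Anthropic’s Computer Use beta (as documented at the time of writing) from the TypeScript SDK to let Claude request Bash commands that run on a machine you control. The prompt, model string, and single-pass command handling are illustrative assumptions, not a production agent loop.

```typescript
// A sketch of letting Claude drive Bash on a machine you control,
// assuming the Computer Use beta API shape at time of writing.
import Anthropic from "@anthropic-ai/sdk";
import { execSync } from "node:child_process";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const response = await client.beta.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  betas: ["computer-use-2024-10-22"],
  tools: [{ type: "bash_20241022", name: "bash" }],
  messages: [
    { role: "user", content: "Report the row counts of the CSV files in ./data" },
  ],
});

// If Claude asked to run a command, execute it locally; a real agent would
// send the output back as a tool_result message and continue the loop.
for (const block of response.content) {
  if (block.type === "tool_use" && block.name === "bash") {
    const { command } = block.input as { command: string };
    console.log(execSync(command, { encoding: "utf8" }));
  }
}
```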
Handling Larger Datasets
To overcome the file size limitation, there is a limited workaround: we can provide Claude with a representative sample of data, and ask for the generated Artifact to allow file uploads. This is very similar to the technique we outlined here.
The example artifact here was produced using the Analysis Tool, then extended with multiple file upload functionality. It’s worth noting that the Analysis Tool is able to work with fragments of JSON - this example was produced by supplying a partial file.
This approach lets you take advantage of Claude’s ability to design effective visualisations while keeping your complete dataset local. You could even use synthesised or anonymised data to build the visualisation, then use the artifact without sensitive data ever leaving your environment.
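A minimal sketch of such an Artifact follows: a React component, using Recharts as Artifacts do for visualisation, that accepts a local CSV upload and parses it with Papa Parse entirely in the browser. The column names are hypothetical stand-ins for whatever a sampled dataset would teach Claude to expect.

```tsx
// A sketch of the workaround: chart a locally uploaded CSV in the browser,
// so no data leaves the machine. Column names here are hypothetical.
import React, { useState } from "react";
import Papa from "papaparse";
import { LineChart, Line, XAxis, YAxis, Tooltip } from "recharts";

type Row = { rep: number; velocity: number };

export default function LocalDataDashboard() {
  const [rows, setRows] = useState<Row[]>([]);

  const handleFile = (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (!file) return;
    // Parse in the browser; the full dataset stays local.
    Papa.parse<Row>(file, {
      header: true,
      dynamicTyping: true,
      skipEmptyLines: true,
      complete: (results) => setRows(results.data),
    });
  };

  return (
    <div>
      <input type="file" accept=".csv" onChange={handleFile} />
      <LineChart width={600} height={300} data={rows}>
        <XAxis dataKey="rep" />
        <YAxis />
        <Tooltip />
        <Line type="monotone" dataKey="velocity" dot={false} />
      </LineChart>
    </div>
  );
}
```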
OpenAI ChatGPT Comparison
In our July Chat Interface Roundup we said that ChatGPT’s Cloud Storage integration and Data Analysis features might give it an edge in a number of scenarios.
Here’s a brief comparison of the feature set now available:
Feature | ChatGPT Data Analysis | Claude.ai Analysis Tool (Preview) |
---|---|---|
Data Upload Size | ~50 MB | ~0.5 MB |
Model Availability | Excludes o1 reasoning models | Claude 3/3.5 Models |
Analysis | Server-based, using Python’s Pandas library | Server/Browser-based, using the JavaScript Lodash and Papa Parse libraries |
Visualisation | Matplotlib Graph Generation | React and Recharts based custom Artifact |
Data Integration | Microsoft OneDrive, Google Drive | GitHub (with Enterprise Plan) |
Data Manipulation | Table Editor in Chat Interface | - |
The User Experience with the Claude.ai Analysis Tool feels slicker, with more sophisticated contextual understanding and interactive presentation of the data. The use of Artifacts allows a level of customisation far beyond that offered by ChatGPT’s Matplotlib charts. Availability of the o1 reasoning models for Data Analysis would enhance ChatGPT’s functionality here, especially as the new Sonnet 3.5 outperforms GPT-4o in this area.
Ultimately, if you are using ChatGPT Data Analysis to process large datasets today, this Claude.ai Preview is not yet a direct replacement.
Given that Anthropic have confirmed larger file support is coming, this situation could change quickly.
It feels as though OpenAI designed for the use case of an individual uploading data for point-in-time analysis. Anthropic’s design seems focused on distributed data collection, analysis and orchestration, with this initial interactive use case simply being the first demonstration of those capabilities.
One final point for comparison: Claude.ai restricts the number of messages you can send based on token usage and server capacity. For people relying on these analysis features for work, Anthropic needs to be clearer about these limits. A simple usage meter would help users understand when they’re approaching restrictions and plan accordingly. We’ve written before about how giving this type of feedback can encourage smarter LLM usage.
Conclusion
The Analysis Tool Preview is impressive, showcasing Sonnet 3.5’s improved reasoning and code generation abilities. You can design and build sophisticated visualisations through simple conversation, and the resulting Artifacts are genuinely interactive and useful. The design approach opens up interesting possibilities too - distributed processing combined with Claude’s orchestration could reshape how organisations approach data analysis, especially once Artifacts can consume external APIs.
This could spell trouble for traditional Enterprise tools. The combination of distributed processing and LLM-powered analysis offers deeper insights at a fraction of the cost, while allowing organisations to keep sensitive data local. Expensive data analysis and visualisation projects and products will struggle to compete with this approach.
One final reflection: When we looked at Chat interfaces in July, Open Source offerings were leading the charge with features and usability. The last few months have shown proprietary front-ends delivering new features such as OpenAI’s Canvas and Realtime Voice, Google’s NotebookLM Audio Production features and Anthropic’s Projects and Artifacts features. Some of these offerings are unique due to tight integration with underlying models. This might lead to further specialisation, with open source tools focusing on flexibility and integration with proprietary offerings optimising specific workflows.