Arca: How it works

Arca is the data layer for your personal AI.
Your AI handles the reasoning and UI. Arca handles the storage, structure, and skills.

1. The core idea: Skills

In Arca, everything your AI knows about your life is a skill.

A skill is:

  • Data – stored as either tabular rows (tables) or semantic vectors
  • Schema / metadata – what fields exist, types, constraints
  • Skill docs – a SKILL.md file Arca generates so your AI knows how to use it

Instead of building "apps", you and your AI design skills.

Examples:

  • meals skill for food + calories + meal type
  • workouts skill for exercise, duration, intensity
  • journal_entries skill for daily reflections + mood
  • favorite_places skill for restaurants you love with tags and notes
  • todos, checkins, weight_logs, recipes, grocery_lists, family_events, etc.

2. Structured skills vs semantic skills

Arca supports two main types of skills.

Structured skills (Tables API)

For things that look like rows and columns:

  • meals, workouts, todos, check-ins
  • weight logs, habits, health metrics
  • lists, events, recurring tasks

Your AI uses Arca's Tables API to:

  • create or upsert tables
  • append new records
  • query with filters, aggregations, and custom SQL-like conditions
  • export tables as Parquet or CSV

When you pass skill metadata with a table (description, examples, relationships, notes), Arca turns it into a SKILL.md file that documents:

  • schema
  • purpose
  • example queries
  • relationships to other tables
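As a hypothetical sketch of what a generated SKILL.md could contain (the real file's layout is up to Arca; this table and its fields are illustrative):

```markdown
# meals

Purpose: daily meals with calories and meal type.

## Schema
| field     | type   | notes                      |
|-----------|--------|----------------------------|
| date      | date   | day the meal was eaten     |
| food      | string | free-text description      |
| calories  | int    | estimated kcal             |
| meal_type | string | breakfast / lunch / dinner |

## Example queries
- total calories per day for the last week
- all dinners above 800 kcal

## Relationships
- joins to `workouts` on `date` for energy-balance summaries
```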

Semantic skills (Vector API)

For things your AI should search by meaning, not just exact text:

  • journal entries
  • favorites (brands, products, places)
  • preferences and settings
  • experiences and memories
  • saved links, notes, and snippets
  • learning and research notes

Your AI uses the Vector API to:

  • add new vector entries with free text and metadata
  • run semantic search with filters (e.g. only positive mood, or specific categories)
  • export vector collections as CSV (without embeddings)

3. The vault: how your data is stored

When you sign in, Arca:

  • creates a vault for you inside Arca's AWS environment
  • gives your vault an isolated folder/prefix in S3 (your own storage namespace)
  • writes structured tables as Parquet files and vector collections as LanceDB-backed files

Key properties:

  • Isolation – each user has their own logical storage space
  • Short-lived access – AI assistants get temporary credentials to act on your behalf
  • Exportability – you can export structured + vector data in standard formats

No shared SaaS database full of user rows.
Just per-user vaults.

4. Portability via MCP and SDKs

Arca is built to move with you.

  • The Arca MCP server lets assistants like Claude and ChatGPT connect as tools
  • They can load your SKILL.md files, query your tables, and run semantic search over your vectors
  • When you switch assistants, you just plug in Arca again — same skills, same data

For custom apps and scripts, we provide an official Python SDK:

  • Native Python client for scripts, notebooks, and AI agents
  • Type-safe access to Tables and Vectors APIs
  • Upsert data, query, and fetch/update skills from your own code
  • Perfect for data science workflows, automation, and custom integrations

Quick Install:

pip install arca-ai-vault
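The SDK's actual surface isn't documented here, so as a toy illustration only: the class and method names below are made up to show the table-handle style of access, not the published arca-ai-vault API.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TableHandle:
    """Toy in-memory stand-in for an SDK table client; real names differ."""
    name: str
    rows: list[dict[str, Any]] = field(default_factory=list)

    def upsert(self, record: dict[str, Any]) -> None:
        """Append a record (a real client would sync it to the vault)."""
        self.rows.append(record)

    def query(self, **filters: Any) -> list[dict[str, Any]]:
        """Return rows whose fields equal every given filter value."""
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in filters.items())]

workouts = TableHandle("workouts")
workouts.upsert({"exercise": "run", "duration_min": 30, "intensity": "high"})
workouts.upsert({"exercise": "yoga", "duration_min": 45, "intensity": "low"})
high_intensity = workouts.query(intensity="high")
```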

Your AI stack can change.
Your data model stays in Arca.

5. Skills instead of apps (the "apps no more" model)

The old model:

  1. think of a use case
  2. build or download an app
  3. design UI, backend, tables
  4. store user data in your app's database

The Arca + personal AI model:

  1. You describe the use case in natural language
  2. The AI reasons about what data needs to be stored
  3. The AI uses Arca's APIs to create a new skill (table or vector collection)
  4. Arca auto-generates SKILL.md docs
  5. The AI starts logging, querying, and reasoning over that skill

So instead of:

"I built a workout app."

it becomes:

"I gave my AI a workout skill in my Arca vault."

Vibe-coding apps becomes a temporary phase.
Designing skills for your AI becomes the main way people "build software" for themselves.

6. For developers

Arca exposes simple endpoints:

  • POST /api/v1/tables/upsert – create/append structured records, optionally with skill metadata
  • POST /api/v1/tables/query – query tables with filters, aggregations, and custom WHERE clauses
  • GET /api/v1/tables/list – list tables with metadata
  • GET /api/v1/tables/export – export a table as Parquet
  • POST /api/v1/vectors/add – add vector entries + metadata, optionally with skill metadata
  • POST /api/v1/vectors/search – semantic search with optional filters
  • GET /api/v1/vectors/list – list vector collections
  • POST /api/v1/vectors/export – export a vector collection as CSV
  • GET /api/v1/tables/skills and GET /api/v1/vectors/skills – fetch all SKILL.md docs in one request (ideal for MCP servers)

Auth is via a Bearer token tied to the user.
No central app-owned database is required.
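For example, an MCP server bootstrapping itself could fetch every SKILL.md in one authenticated GET. A minimal sketch, assuming a placeholder base URL and a JSON response body:

```python
import json
import urllib.request

def skills_request(base_url: str, token: str) -> urllib.request.Request:
    """Build the authenticated GET for /api/v1/tables/skills
    (swap in /api/v1/vectors/skills for vector skills)."""
    return urllib.request.Request(
        base_url + "/api/v1/tables/skills",
        headers={"Authorization": f"Bearer {token}"},
    )

def fetch_skills(base_url: str, token: str) -> dict:
    """Perform the request and decode the JSON body (shape is an assumption)."""
    with urllib.request.urlopen(skills_request(base_url, token)) as resp:
        return json.loads(resp.read())

req = skills_request("https://api.arca.example", "YOUR_TOKEN")
```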

7. Putting it together

  • Your vault = isolated storage space in Arca's cloud
  • Skills = how your AI knows what's stored and how to use it
  • Structured skills = tables queried with SQL-like filters
  • Semantic skills = vectors queried with semantic search
  • MCP and SDKs = how assistants and apps plug into your vault

Arca keeps the data layer honest and user-owned.
Your AI becomes the "app."