Quick Start

Whenever AI agents need to understand their human user, the Synvo API delivers precise context: it finds the most relevant information, expresses user intent, and captures the user's task-completion habits and workflows from files of any format, with minimal hallucination.

This Quick Start demonstrates how Synvo extracts factual context from documents, images, videos, and audio, then returns grounded answers with evidence that agents can trust. It shows how memory is built from files, how queries retrieve key facts, and how citations enable verification.

Get started with the Synvo API through the Synvo Dashboard. Our free tier includes analytics and visualization tools to help you understand user data, track API usage, and optimize your integration. Sign up to access your API key and start building with contextual intelligence.


What this Quick Start Covers

  1. Upload several representative files (PDF, image, video, audio) — Synvo builds context memory.
  2. Query for verifiable facts — the API returns relevant fragments + references.
  3. Verify evidence — answers are grounded in the user’s own data.

Synvo AI can process thousands of files in parallel and perform contextual search across all of them within seconds, providing agents with relevant, verifiable, and multimodal context on demand. This guide, however, walks through a mini end-to-end example using only a few sample files, so developers can quickly see how Synvo's contextual memory and retrieval pipeline works in practice.


Demo Files (as a Mini Context Set)

Let's explore Synvo's multimodal capabilities through a set of example files that demonstrate how the API handles different content types. Below are four sample files representing common formats users might share, along with example questions that showcase Synvo's contextual understanding of facts. Each tab contains a different media type with its corresponding query and expected response. Browse the examples by clicking the tabs below:

NTU Annual Report 2024

Question: "How many patents did NTU file in FY2023?"

Expected Answer: 672

📥 Download PDF | 📊 This report contains detailed patent filing statistics for FY2023

WellFest Event Poster

Question: "When is the WellFest Finale event and where is it located?"

Expected Answer: 23rd October 2025 (Thursday), 11am to 4pm, Nanyang Auditorium Foyer

📥 Download Image | 🎪 Contains event timing and location details

Andrew's Machine Learning Lecture

Question: "What is the answer to the in-video quiz in Andrew's lecture?"

Expected Answer: a_2^3 = g(\Omega_2^3 \cdot a^2 + b_2^3)

📥 Download Video | 🎓 Educational ML content with mathematical formulas and quiz questions

Finance Lecture on EBIT

Question: "In the lecture about the operating profit margin formula, what is the value of EBIT used in the example calculation for the year 2018?"

Expected Answer: EBIT = $8 million

📥 Download Audio | 💰 Finance lecture covering EBIT calculations and profit margin formulas

Step 1: Setup

1.1 Create API Key

First, create an API key to authenticate your requests:

  1. Log into your Synvo dashboard
  2. Navigate to API Keys section
  3. Click "Create New Key"
  4. Give it a name (e.g., "My First Key")
  5. Copy and save the key immediately - you won't see it again!

1.2 Set API Key as Environment Variable

Set your API key as an environment variable:

export SYNVO_API_KEY="your_api_key_here"

1.3 Download Demo Files

Download all demo files to your local directory:

Save all files to the same directory where you will run the code examples.
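
Before moving on, you can optionally confirm that your environment is ready. The snippet below is a small pre-flight check, not part of the Synvo API; it only verifies that the SYNVO_API_KEY variable from Step 1.2 is set and that the four demo filenames used in Step 2 are present in the current directory.

# Optional pre-flight check (not part of the Synvo API).
import os

required_files = [
    "NTU_Annual_Report_2024.pdf",
    "poster.jpg",
    "Andrew_lecture.mp4",
    "Finance_lecture.mp3",
]

if not os.getenv("SYNVO_API_KEY"):
    raise SystemExit("SYNVO_API_KEY is not set - see Step 1.2")

missing = [name for name in required_files if not os.path.exists(name)]
if missing:
    raise SystemExit(f"Missing demo files: {', '.join(missing)}")

print("✅ API key found and all demo files present")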

Step 2: Upload and Build Context Memory

Upload all four demo files at once and wait for processing to complete.

Python

import requests
import json
import time
import os

api_key = os.getenv("SYNVO_API_KEY")
BASE_URL = "https://api.synvo.ai"

# Define all files to upload
files_to_upload = [
    "NTU_Annual_Report_2024.pdf",
    "poster.jpg",
    "Andrew_lecture.mp4",
    "Finance_lecture.mp3"
]

# Upload all files
file_ids = {}
for filename in files_to_upload:
    print(f"Uploading {filename}...")
    with open(filename, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/file/upload",
            files={"file": f},
            data={"path": "/"},
            headers={"X-API-Key": api_key}
        )
    file_id = response.json()["file_id"]
    file_ids[filename] = file_id
    print(f"✅ {filename} uploaded: {file_id}")

# Wait for all files to complete processing
print("\n⏳ Waiting for all files to process...")
for filename, file_id in file_ids.items():
    while True:
        status_response = requests.get(
            f"{BASE_URL}/file/status/{file_id}",
            headers={"X-API-Key": api_key}
        )
        status = status_response.json()["status"]
        if status == "COMPLETED":
            print(f"✅ {filename} processing complete!")
            break
        elif status == "FAILED":
            print(f"❌ {filename} processing failed!")
            break
        time.sleep(5)

print("\n🎉 All files ready for querying!")
print("\nFile IDs:")
for filename, file_id in file_ids.items():
    print(f"  {filename}: {file_id}")
cURL

# Upload all files
echo "Uploading files..."

# Upload PDF
PDF_RESPONSE=$(curl -s -X POST "https://api.synvo.ai/file/upload" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F "file=@./NTU_Annual_Report_2024.pdf" \
  -F "path=/")
PDF_FILE_ID=$(echo $PDF_RESPONSE | jq -r '.file_id')
echo "✅ PDF uploaded: $PDF_FILE_ID"

# Upload Image
IMAGE_RESPONSE=$(curl -s -X POST "https://api.synvo.ai/file/upload" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F "file=@./poster.jpg" \
  -F "path=/")
IMAGE_FILE_ID=$(echo $IMAGE_RESPONSE | jq -r '.file_id')
echo "✅ Image uploaded: $IMAGE_FILE_ID"

# Upload Video
VIDEO_RESPONSE=$(curl -s -X POST "https://api.synvo.ai/file/upload" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F "file=@./Andrew_lecture.mp4" \
  -F "path=/")
VIDEO_FILE_ID=$(echo $VIDEO_RESPONSE | jq -r '.file_id')
echo "✅ Video uploaded: $VIDEO_FILE_ID"

# Upload Audio
AUDIO_RESPONSE=$(curl -s -X POST "https://api.synvo.ai/file/upload" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F "file=@./Finance_lecture.mp3" \
  -F "path=/")
AUDIO_FILE_ID=$(echo $AUDIO_RESPONSE | jq -r '.file_id')
echo "✅ Audio uploaded: $AUDIO_FILE_ID"

# Wait for all files to process
echo -e "\n⏳ Waiting for all files to process..."
for FILE_ID in "$PDF_FILE_ID" "$IMAGE_FILE_ID" "$VIDEO_FILE_ID" "$AUDIO_FILE_ID"; do
  while true; do
    STATUS=$(curl -s -X GET "https://api.synvo.ai/file/status/$FILE_ID" \
      -H "X-API-Key: $SYNVO_API_KEY" | jq -r '.status')
    
    if [ "$STATUS" = "COMPLETED" ]; then
      echo "✅ File $FILE_ID ready!"
      break
    elif [ "$STATUS" = "FAILED" ]; then
      echo "❌ File $FILE_ID failed!"
      break
    fi
    sleep 5
  done
done

echo -e "\n🎉 All files ready for querying!"
echo -e "\nFile IDs:"
echo "  PDF: $PDF_FILE_ID"
echo "  Image: $IMAGE_FILE_ID"
echo "  Video: $VIDEO_FILE_ID"
echo "  Audio: $AUDIO_FILE_ID"

2.1 Personalization Preview: Profile & User Events

When files finish processing, Synvo also updates the user's profile and user events.
Both are created automatically during file ingestion.

  • User profile combines structured signals extracted from the new files with any profile information that already exists.
  • User events track temporal information, including:
    • File upload time,
    • Times mentioned in the file,
    • Time flow of the described activity.

Together, these records give a clear chronological trace of what happened and when.

This section shows a small preview based on the four demo files above. A full explanation of profile and event construction is available in the Profile & User Event Tutorial.


A. User Profile Summary

The table below summarises the profile signals that Synvo extracted from the demo files in Section 1.3.

Synvo organises profile information into categories and more fine-grained subtopics, with a total of 8 categories and 34 subtopics defined in the current schema.

The entries shown here are only a small sample generated from this mini context set.

Category: education | Sub Topic: school
Extracted Signals: User may be a student at Nanyang Technological University (NTU).
Source Files: NTU_Annual_Report_2024.pdf, poster.jpg, Finance_lecture.mp3

Category: education | Sub Topic: major
Extracted Signals: User may be studying artificial intelligence or a finance-related field.
Source Files: Andrew_lecture.mp4, Finance_lecture.mp3

Category: psychological | Sub Topic: motivations
Extracted Signals: The user is motivated by a desire to contribute to education through responsible leadership and meaningful service, while also aiming to improve financial literacy to make future decisions more accurate and less dependent on external guidance.
Source Files: NTU_Annual_Report_2024.pdf, Finance_lecture.mp3

Category: psychological | Sub Topic: values
Extracted Signals: User values leadership and service in education (reflected in NTU's mission) and enhancing knowledge through interactive learning methods, as well as valuing financial literacy and aiming to improve financial decision-making.
Source Files: NTU_Annual_Report_2024.pdf, poster.jpg, Andrew_lecture.mp4, Finance_lecture.mp3

Additional profile categories and subtopics are omitted here for brevity.


B. User Event Summary (Per File)

Each file also produces an event record.

These events describe the action, the content, and the related time information. The happen_time field is taken from explicit dates in the file; if no reliable date is found, it is set to None, shown as “—”. The mention_time field reflects when the information was processed, and the file_path identifies the source.

Event Type: personal
Content: User uploaded Nanyang Technological University (NTU) Singapore Annual Report for 2024.
Happen Time: —
Mention Time: 2025-11-12 15:29:18 (UTC)
File Path: NTU_Annual_Report_2024.pdf

Event Type: personal
Content: User uploaded a video lecture presented by Andrew Ng regarding neural networks.
Happen Time: —
Mention Time: 2025-11-12 15:31:00 (UTC)
File Path: Andrew_lecture.mp4

Event Type: personal
Content: User recorded/saved audio lecture discussing operating profit margin.
Happen Time: —
Mention Time: 2025-11-12 15:32:40 (UTC)
File Path: Finance_lecture.mp3

Event Type: personal
Content: User received an email invitation for the October WellFest event at Nanyang Auditorium.
Happen Time: 2025-10-23 00:00:00 (UTC)
Mention Time: 2025-11-12 15:34:06 (UTC)
File Path: poster.jpg

Event Type: personal
Content: User encouraged colleagues to participate in the October WellFest event focused on self-care activities.
Happen Time: 2025-10-23 00:00:00 (UTC)
Mention Time: 2025-11-12 15:34:06 (UTC)
File Path: poster.jpg

Event Type: financial
Content: User plans to attend the October WellFest event on 23rd October 2025.
Happen Time: 2025-10-23 00:00:00 (UTC)
Mention Time: 2025-11-12 15:34:06 (UTC)
File Path: poster.jpg
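
This Quick Start does not cover the endpoints for reading profile and event records; see the Profile & User Event Tutorial for that. Purely as an illustration of the fields described above, a single profile entry and a single user event could be represented as follows. The field names mirror the table labels; the structure is illustrative and is not the API's actual response format.

# Illustrative only: these dicts mirror the tables above, not the Synvo API wire format.
profile_entry = {
    "category": "education",
    "sub_topic": "school",
    "extracted_signals": "User may be a student at Nanyang Technological University (NTU).",
    "source_files": ["NTU_Annual_Report_2024.pdf", "poster.jpg", "Finance_lecture.mp3"],
}

user_event = {
    "event_type": "personal",
    "content": "User uploaded Nanyang Technological University (NTU) Singapore Annual Report for 2024.",
    "happen_time": None,  # rendered as a dash when no reliable date is found
    "mention_time": "2025-11-12 15:29:18 (UTC)",
    "file_path": "NTU_Annual_Report_2024.pdf",
}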


Step 3: Query for User Context (Evidence-Backed)

Queries search within the user’s context set, not the public web. Responses include supporting chunks and references to enable agent-side verification.

Python

Question: "How many patents did NTU file in FY2023?"
Expected Answer: 672

import requests
import json
import os

api_key = os.getenv("SYNVO_API_KEY")

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "How many patents did NTU file in FY2023?"}
        ]
    }]
}

response = requests.post(
    "https://api.synvo.ai/ai/query",
    data={
        "payload": json.dumps(payload),
        "model": "synvo",
        "final_answer": "true"
    },
    headers={"X-API-Key": api_key}
)

result = response.json()

# Print top 3 related facts
k = 3
facts = result["content"][0]["facts"][:k]
for i, fact in enumerate(facts, 1):
    print(f"\n--- Fact {i} ---")
    print(f"Chunk: {fact['chunk']}...")
    print(f"Reference: {fact['reference']}")

Question: "When is the WellFest Finale event and where is it located?"
Expected Answer: 23rd October 2025 (Thursday), 11am to 4pm, Nanyang Auditorium Foyer

import requests
import json
import os

api_key = os.getenv("SYNVO_API_KEY")

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "When is the WellFest Finale event and where is it located?"}
        ]
    }]
}

response = requests.post(
    "https://api.synvo.ai/ai/query",
    data={
        "payload": json.dumps(payload),
        "model": "synvo",
        "final_answer": "true"
    },
    headers={"X-API-Key": api_key}
)

result = response.json()

# Print top 3 related facts
k = 3
facts = result["content"][0]["facts"][:k]
for i, fact in enumerate(facts, 1):
    print(f"\n--- Fact {i} ---")
    print(f"Chunk: {fact['chunk']}...")
    print(f"Reference: {fact['reference']}")

Question: "What is the answer to the in-video quiz in Andrew's lecture?"
Expected Answer: a_2^3 = g(\Omega_2^3 \cdot a^2 + b_2^3)

import requests
import json
import os

api_key = os.getenv("SYNVO_API_KEY")

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the answer to the in-video quiz in Andrew's lecture?"}
        ]
    }]
}

response = requests.post(
    "https://api.synvo.ai/ai/query",
    data={
        "payload": json.dumps(payload),
        "model": "synvo",
        "final_answer": "true"
    },
    headers={"X-API-Key": api_key}
)

result = response.json()

# Print top 3 related facts
k = 3
facts = result["content"][0]["facts"][:k]
for i, fact in enumerate(facts, 1):
    print(f"\n--- Fact {i} ---")
    print(f"Chunk: {fact['chunk']}...")
    print(f"Reference: {fact['reference']}")

Question: "In the lecture about the operating profit margin formula, what is the value of EBIT used in the example calculation for the year 2018?"
Expected Answer: EBIT = $8 million

import requests
import json
import os

api_key = os.getenv("SYNVO_API_KEY")

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "In the lecture about the operating profit margin formula, what is the value of EBIT used in the example calculation for the year 2018?"}
        ]
    }]
}

response = requests.post(
    "https://api.synvo.ai/ai/query",
    data={
        "payload": json.dumps(payload),
        "model": "synvo",
        "final_answer": "true"
    },
    headers={"X-API-Key": api_key}
)

result = response.json()

# Print top 3 related facts
k = 3
facts = result["content"][0]["facts"][:k]
for i, fact in enumerate(facts, 1):
    print(f"\n--- Fact {i} ---")
    print(f"Chunk: {fact['chunk']}...")
    print(f"Reference: {fact['reference']}")

cURL

Question: "How many patents did NTU file in FY2023?"
Expected Answer: 672

curl -X POST "https://api.synvo.ai/ai/query" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F 'payload={"messages":[{"role":"user","content":[{"type":"text","text":"How many patents did NTU file in FY2023?"}]}]}' \
  -F "model=synvo" \
  -F "final_answer=true"

Question: "When is the WellFest Finale event and where is it located?"
Expected Answer: 23rd October 2025 (Thursday), 11am to 4pm, Nanyang Auditorium Foyer

curl -X POST "https://api.synvo.ai/ai/query" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F 'payload={"messages":[{"role":"user","content":[{"type":"text","text":"When is the WellFest Finale event and where is it located?"}]}]}' \
  -F "model=synvo" \
  -F "final_answer=true"

Question: "What is the answer to the in-video quiz in Andrew's lecture?"
Expected Answer: a_2^3 = g(\Omega_2^3 \cdot a^2 + b_2^3)

curl -X POST "https://api.synvo.ai/ai/query" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F 'payload={"messages":[{"role":"user","content":[{"type":"text","text":"What is the answer to the in-video quiz in Andrew'\''s lecture?"}]}]}' \
  -F "model=synvo" \
  -F "final_answer=true"

Question: "In the lecture about the operating profit margin formula, what is the value of EBIT used in the example calculation for the year 2018?"
Expected Answer: EBIT = $8 million

curl -X POST "https://api.synvo.ai/ai/query" \
  -H "X-API-Key: $SYNVO_API_KEY" \
  -F 'payload={"messages":[{"role":"user","content":[{"type":"text","text":"In the lecture about the operating profit margin formula, what is the value of EBIT used in the example calculation for the year 2018?"}]}]}' \
  -F "model=synvo" \
  -F "final_answer=true"

Note

Large File Processing

  • The 8-minute demo video may take around 10 seconds to process; larger video files take longer.
  • Check processing status with the /file/status/{file_id} endpoint.

File Format Support

  • Images: .png, .jpg, .jpeg, .gif, .webp
  • Video: .mp4, .mov, .m4v, .avi, .mkv
  • Audio: .mp3, .wav, .m4a, .aac, .flac
  • Documents: .pdf, .docx, .ppt, .pptx, .txt, .json, .jsonl, .xlsx
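
If you plan to upload a whole directory rather than the four demo files, a small convenience filter can keep the batch to supported formats. The sketch below simply matches the extensions listed above; the upload call itself remains the one shown in Step 2.

# Convenience sketch (not part of the API): collect files whose extensions
# appear in the supported-format list above.
from pathlib import Path

SUPPORTED_EXTENSIONS = {
    ".png", ".jpg", ".jpeg", ".gif", ".webp",   # images
    ".mp4", ".mov", ".m4v", ".avi", ".mkv",     # video
    ".mp3", ".wav", ".m4a", ".aac", ".flac",    # audio
    ".pdf", ".docx", ".ppt", ".pptx", ".txt",   # documents
    ".json", ".jsonl", ".xlsx",
}

def supported_files(directory="."):
    return [
        p for p in Path(directory).iterdir()
        if p.is_file() and p.suffix.lower() in SUPPORTED_EXTENSIONS
    ]

# Example: print(supported_files())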

When agents need to understand their user, Synvo provides the precise, verifiable context required to answer correctly — across any file format, with minimal hallucination. Explore and manage context in the Synvo Dashboard.