Walkthrough

Kubrick includes a CLI tool that automates infrastructure setup and deployment, so developers can have an ingestion pipeline up and running in minutes.

git clone https://github.com/kubrick-ai/kubrick.git
cd kubrick/cli && npm run build
kubrick deploy

Full CLI deployment documentation is available here.

Deploying the infrastructure will expose the Kubrick API:

curl -X POST https://API_URL/v1_0/search \
  -H "Content-Type: multipart/form-data" \
  -F query_type=text \
  -F query_text="A red sports car driving down the highway"

Example Response:
{
  "data": [
    {
      "id": 1234,
      "modality": "visual-text",
      "scope": "clip",
      "start_time": 6.0,
      "end_time": 12.0,
      "similarity": 0.7654,
      "video": {
        "id": 5678,
        "s3_bucket": "kubrick-video-library-<uuid>",
        "s3_key": "uploads/<uuid>/red-sports-car.mp4",
        "filename": "red-sports-car.mp4",
        "duration": 120.0,
        "created_at": "2025-08-05T00:00:00.000000",
        "updated_at": "2025-08-05T00:00:00.000000",
        "height": 1080,
        "width": 1920,
        "url": "<presigned_url>"
      }
    },
    ...
  ],
  "metadata": { "page": 0, "limit": 10, "total": 10 }
}
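
The metadata object suggests that search results are paginated. A minimal sketch of requesting a specific page, assuming the endpoint accepts page and limit as additional form fields (these parameter names are inferred from the response metadata, not confirmed):

# page and limit are assumed form fields; check the API docs for the exact names
curl -X POST https://API_URL/v1_0/search \
  -H "Content-Type: multipart/form-data" \
  -F query_type=text \
  -F query_text="A red sports car driving down the highway" \
  -F page=1 \
  -F limit=5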

Full API documentation is available here.

The deployment includes a playground web interface, hosted as an S3 static site. The playground serves as a reference implementation of the Kubrick API's core features.

Users can upload videos into a private library; each upload automatically creates an embedding task.
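
The playground handles uploads through the browser, but since the example response above shows videos stored under an uploads/ prefix in the library bucket, the same pipeline can likely be triggered by placing an object there directly. A hedged sketch using the AWS CLI (the bucket name and key layout are taken from the example response; that an S3 upload alone triggers the embedding task is an assumption):

# Hypothetical direct upload; substitute the real bucket name from your deployment
aws s3 cp ./red-sports-car.mp4 \
  "s3://kubrick-video-library-<uuid>/uploads/<uuid>/red-sports-car.mp4"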

Text queries return relevant results even when they describe abstract qualities such as emotion, tone, composition, visual style, and movement.
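
For example, a query describing mood and camera movement rather than concrete objects uses the same endpoint and parameters as above (the query text is purely illustrative):

curl -X POST https://API_URL/v1_0/search \
  -H "Content-Type: multipart/form-data" \
  -F query_type=text \
  -F query_text="melancholic slow-motion scene with muted colors"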

Kubrick also enables querying with image, audio, and video files. Users can upload a reference image to find visually similar scenes, search using an audio clip to locate moments with matching soundscapes or music, or provide a video segment to retrieve the most similar clips in the library.
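
A sketch of an image query, assuming query_type accepts image and the file is sent in a field such as query_media_file (both values are assumptions; consult the API documentation for the exact field names):

# query_type=image and query_media_file are hypothetical; verify against the API docs
curl -X POST https://API_URL/v1_0/search \
  -H "Content-Type: multipart/form-data" \
  -F query_type=image \
  -F query_media_file=@reference-image.jpg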