Create packshot
Generate clean product packshots from raw garment photos
You will need an API token to send HTTP requests. See Authentication for instructions.
Quick start
1. Create a project to organize your images. The project_id will be used in subsequent requests. See Creating a project for details.
2. Upload one or more garment-bearing images (front, back, side, detail shots) to the project. This is a two-step process:
   - Request a pre-signed upload URL
   - PUT the image binary to that URL
   Collect all file_id values for the next step. See Uploading images for details.
   The upload_url is only valid for a limited time. Upload the image immediately after receiving the response.
3. Start a Create Packshot job by providing the list of uploaded file IDs and one or more instructions. Each instruction produces one output image (or num_variations outputs). No identity is required: packshots are product-only. See Starting a job for details.
4. Track job progress with SSE or webhooks. Filter events with your job ID and stop when you receive a terminal status. See Tracking progress for details.
Unlike Flat-lay-to-on-model and Model Swap, Create Packshot does not require an identity. Outputs are product-only (no model). If you need a model wearing the garment, use Flat-lay-to-on-model instead.
Creating a project
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
response = requests.post(
    api_url + "/project",
    headers={"Authorization": "Bearer " + access_token},
    json={"project_name": "my-packshot-project"},
).json()
project_id = response["project_id"]
project_name = response["project_name"]

{
  "project_id": "abc123...", // PROJECT_ID
  "project_name": "my-packshot-project"
}

Uploading images
Upload all garment photos that the packshot job will consume: hanger shots, mannequin shots, flat-lays, on-model photos, even phone snaps. The AI uses every photo to reconstruct the garment, so more angles produce sharper packshots. You can upload between 1 and 10 images per job.
The upload_url is only valid for a limited time. Upload the image immediately after receiving the response.
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
project_name = "my-packshot-project"
image_path = "path/to/blazer-front.jpg"
# Step 1: Get pre-signed upload URL
response = requests.post(
    api_url + "/upload",
    headers={"Authorization": "Bearer " + access_token},
    json={
        "project_name": project_name,
        "filename": "blazer-front.jpg",
    },
).json()
upload_url = response["upload_url"]
content_type = response["content_type"]
file_id = response["file_id"]
# Step 2: Upload the image binary
with open(image_path, "rb") as f:
    requests.put(
        upload_url,
        headers={"Content-Type": content_type},
        data=f.read(),
    )
print(f"Uploaded file ID: {file_id}")

{
  "upload_url": "https://s3...", // Pre-signed PUT URL
  "download_url": "https://...",
  "project_id": "abc123...",
  "project_name": "my-packshot-project",
  "file_id": "img_001...", // FILE_ID
  "filename": "blazer-front.jpg",
  "content_type": "image/jpeg"
}

Starting a job
A Create Packshot job takes N garment images plus M instructions and produces M output packshots (or Σ(num_variations) if any instruction requests more than one variation):
- Each instruction produces one output by default; set num_variations (1-8) to fan out a single instruction into multiple variations.
- Stack instructions to cover multiple catalog surfaces in one job (e.g., one ghost-mannequin set for your PDP plus one flat-lay set for editorial), all from the same input photos.
- Instructions run in parallel.
Credit cost: 3 credits per output at 1K, 5 at 2K, 10 at 4K, billed per output (matches Flat-lay-to-on-model pricing).
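To sanity-check the bill before submitting, the cost can be estimated client-side. A minimal sketch, assuming the per-output rates above (the `estimate_credits` helper is illustrative, not part of any SDK, and assumes 1K when no size is set):

```python
# Illustrative helper, not an official SDK function.
# Rates from the pricing above; size is assumed to default to "1K" if unset.
RATES = {"1K": 3, "2K": 5, "4K": 10}

def estimate_credits(instructions: list[dict]) -> int:
    """Sum credits across instructions: rate(size) x num_variations."""
    total = 0
    for inst in instructions:
        size = inst.get("options", {}).get("size", "1K")
        total += RATES[size] * inst.get("num_variations", 1)
    return total

# The two-instruction example in this section yields 4 outputs at 2K:
# (3 variations + 1 output) x 5 credits = 20 credits
job = [
    {"num_variations": 3, "options": {"size": "2K"}},
    {"options": {"size": "2K"}},
]
```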
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
project_id = "abc123..."
file_ids = ["img_001...", "img_002..."] # Front + back of the same garment
# Define instructions - each produces one output (or num_variations outputs)
instructions = [
    {
        "style": "ghost_mannequin",
        "background": "white studio",
        "framing": "tall_3_4",
        "angle": "three_quarter",
        "shadow": "contact",
        "num_variations": 3,
        "options": {
            "size": "2K",
            "ar": "3:4",
            "format": "jpg",
        },
    },
    {
        "style": "flat_lay",
        "surface": "linen",
        "shadow": "natural",
        "options": {
            "size": "2K",
            "ar": "1:1",
            "format": "jpg",
        },
    },
]
response = requests.post(
    api_url + "/create-packshot",
    headers={"Authorization": "Bearer " + access_token},
    json={
        "project_id": project_id,
        "images": file_ids,
        "instructions": instructions,
        "post_process": False,  # Optional: enable post-processing
        "options": {
            "model": "auto",  # Optional: see "Generation options" below
            "use_anchor": False,  # Optional: opt-in cohesion across variations
        },
    },
).json()
job_id = response["job_id"]
total_outputs = response["total_outputs"]
print(f"Job started: {job_id} ({total_outputs} outputs)")

{
  "job_id": "job_abc123...", // JOB_ID
  "status": "pending",
  "message": "Job created successfully",
  "total_outputs": 4 // Sum of num_variations across instructions
}

Instruction parameters
Each instruction can contain the following parameters. Only style is required; everything else has sensible defaults derived from the chosen style.
| Parameter | Type | Required | Description |
|---|---|---|---|
| style | enum | yes | Packshot style. One of flat_lay, ghost_mannequin, marketing_ready, white_cutout, free. See Style values. |
| background | string \| int \| object | no | Background description (e.g. "white studio"), Design Value index, or {text, image} object. Style-default if omitted. |
| composition | string \| int \| object | no | Composition descriptor (e.g. "single garment centered", "folded on shelf"). |
| props | string \| int \| object | no | Props for the scene. Only meaningful for marketing_ready; ignored for other styles. |
| surface | string \| int \| object | no | Surface the garment lies on (e.g. "matte concrete", "linen"). Only meaningful for flat_lay; ignored for other styles. |
| color_palette | string \| int \| object | no | Color scheme descriptor. Most useful for marketing_ready. |
| lighting | string \| int \| object | no | Lighting descriptor. Also accepts a structured object with optional direction, quality, complexity sub-fields. |
| framing | enum | no | How the garment fills the canvas. One of square_packshot, tall_3_4, wide_4_3, full_frame. |
| angle | enum | no | View angle. One of front, back, three_quarter, top_down, detail_macro. top_down is forced for flat_lay. |
| shadow | enum | no | Shadow treatment. One of auto, none, contact, soft_drop, natural. Style-specific defaults apply when omitted. |
| prompt | string | no | Free-form prompt overlay. When empty, the engine builds one automatically from style and the structured fields above. |
| seed | integer | no | Reproducibility seed for this instruction. |
| num_variations | integer (1-8) | no | Number of output variations to generate from this instruction. Defaults to 1. |
| preset_name | string | no | Metadata only. Name of the preset this instruction came from, surfaced in the UI. |
| category_names | string[] | no | Metadata only. Categories the preset belongs to. |
| options | object | no | Per-instruction output options. See Output options. |
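Since only style is required, {"style": "white_cutout"} is already a complete instruction. For catching obvious mistakes before spending a request, a client-side check along these lines may help (a sketch only; the API performs its own validation, and `validate_instruction` is a hypothetical helper):

```python
# Hypothetical pre-flight check; the server remains authoritative.
VALID_STYLES = {"flat_lay", "ghost_mannequin", "marketing_ready", "white_cutout", "free"}

def validate_instruction(inst: dict) -> list[str]:
    """Return a list of problems; an empty list means the instruction looks valid."""
    problems = []
    if inst.get("style") not in VALID_STYLES:
        problems.append("style is required and must be one of the five style values")
    n = inst.get("num_variations", 1)
    if not (isinstance(n, int) and 1 <= n <= 8):
        problems.append("num_variations must be an integer between 1 and 8")
    return problems
```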
Style values
The style field controls the look of the packshot and the defaults applied to the structured fields. Pick the style that matches the catalog surface you're producing for.
| Style | Description | Best for |
|---|---|---|
| flat_lay | Top-down view of the garment laid on a clean surface. angle is forced to top_down. | Editorial product grids, lookbooks |
| ghost_mannequin | Invisible mannequin: the garment hovers in 3D as if worn, no body visible. | Most fashion PDPs |
| marketing_ready | Editorial scene with props, lighting, and atmosphere. props and color_palette carry weight here. | Campaign imagery, hero banners |
| white_cutout | Pure white seamless background with sharp edges and minimal contact shadow. | Marketplaces (Amazon, Zalando), feeds |
| free | Unconstrained. The engine interprets the structured fields and prompt without applying a style preset. | Custom looks that do not fit the four above |
Output options
The options object inside each instruction can contain:
| Parameter | Values | Description |
|---|---|---|
| size | "1K", "2K", "4K" | Output image resolution. Drives credit cost: 3 / 5 / 10 credits per output. |
| ar | "1:1", "3:4", "4:3", "9:16", "16:9" | Aspect ratio. |
| format | "jpg", "png" | Output file format. |
| width | integer (256-7000, multiple of 8) | Custom output width. Must be set together with height. Aspect ratio must be between 1:4 and 4:1. Requires the OUTPUT_CUSTOM_DIMENSIONS policy on your account. |
| height | integer (256-7000, multiple of 8) | Custom output height. Must be set together with width. |
| seed | integer | Random seed for this output. Prefer the top-level seed field on the instruction. |
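The width/height constraints can be verified locally before submitting. A minimal sketch of the documented rules (the helper name is ours, not the API's):

```python
def valid_custom_dimensions(width: int, height: int) -> bool:
    """Check custom dimensions against the documented constraints:
    256-7000 px, multiples of 8, aspect ratio between 1:4 and 4:1."""
    for dim in (width, height):
        if not (256 <= dim <= 7000 and dim % 8 == 0):
            return False
    ratio = width / height
    return 0.25 <= ratio <= 4.0
```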
Generation options
Top-level fields inside the request's options object that control how the batch is generated (as opposed to per-instruction styling).
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | "auto" \| "nano_banana_pro" \| "seedream" | "auto" | Which engine generates outputs. auto runs the default engine with a safety fallback if content is refused. Specifying an engine disables the fallback. |
| use_anchor | boolean | false | When true, the engine pins one instruction as the canonical look reference and aligns every output to it. Defaults to false for Create Packshot. |
| anchor_index | integer | 0 | Which instruction (zero-indexed into instructions) is used as the anchor reference when use_anchor is true. Must satisfy 0 <= anchor_index < len(instructions). |
{
  "project_id": "abc123...",
  "images": ["img_001...", "img_002..."],
  "instructions": [/* ... */],
  "options": {
    "model": "nano_banana_pro",
    "use_anchor": true,
    "anchor_index": 0
  }
}

Per-image annotations
You can provide optional notes for individual images to highlight context the AI might otherwise miss: which view the photo shows, fabric peculiarities, lining details to preserve, or which input represents the canonical "hero" angle.
The images field accepts two formats:
| Format | Example | Description |
|---|---|---|
| Simple | ["uuid-1", "uuid-2"] | List of file IDs (default, backward compatible) |
| Annotated | [{"file_id": "uuid-1", "note": "..."}, ...] | Objects with optional note per image |
Both formats can be mixed. Images without notes behave exactly as before.
{
  "project_id": "abc123...",
  "images": [
    {"file_id": "img_001...", "note": "front view, hanger removed in post"},
    {"file_id": "img_002...", "note": "back view, same garment"},
    {"file_id": "img_003...", "note": "detail of printed lining"}
  ],
  "instructions": [/* ... */]
}

Example notes:
- "front view" / "back view" / "three-quarter angle"
- "detail of stitching" or "close-up of buttons"
- "shows printed inner lining, preserve in output"
- "phone photo, ignore the wrinkled bedsheet background"
- "hero angle, prioritize this look"
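Because the two formats can be mixed in one request, it can be convenient to normalize everything to the annotated shape before building the payload. A sketch (hypothetical helper, not part of any SDK):

```python
def normalize_images(images: list) -> list[dict]:
    """Turn a mixed list of file-ID strings and annotated objects
    into a uniform list of {"file_id": ..., "note": ...} dicts."""
    normalized = []
    for item in images:
        if isinstance(item, str):
            normalized.append({"file_id": item})
        else:
            normalized.append(dict(item))  # already annotated; copy as-is
    return normalized
```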
Post-processing
Set post_process to true in the job request to enable automatic post-processing of results. The job status will include post_processing_status to track this additional step.
{
  "project_id": "abc123...",
  "images": ["img_001...", "img_002..."],
  "instructions": [/* ... */],
  "post_process": true
}

Tracking progress
Use either SSE or webhooks to receive notifications for job updates.
import json
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
job_id = "job_abc123..."
headers = {"Authorization": "Bearer " + access_token}
with requests.get(
    api_url + "/notifications/events",
    headers=headers,
    stream=True,
    timeout=600,
) as response:
    response.raise_for_status()
    for raw_line in response.iter_lines(decode_unicode=True):
        if not raw_line or raw_line.startswith(":"):
            continue
        if not raw_line.startswith("data: "):
            continue
        notification = json.loads(raw_line[6:])
        data = notification.get("data", {})
        task_id = data.get("id_task") or data.get("job_id")
        if task_id != job_id:
            continue
        print(f"Notification: {notification['name']}")
        print(f"Data: {data}")
        if notification["name"] == "batch_edit":
            status = data.get("status")
            if status == "completed":
                print("Job completed!")
                break
            if status == "failed":
                raise RuntimeError(data.get("error_message", "Job failed"))

import hashlib
import hmac
import requests
from flask import Flask, abort, request
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
job_id = "job_abc123..."
public_webhook_url = "https://example.com/webhooks/piktid"
setup = requests.put(
    api_url + "/webhooks",
    headers={"Authorization": "Bearer " + access_token},
    json={"url": public_webhook_url},
).json()
webhook_secret = setup["secret"]
app = Flask(__name__)

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    expected = "sha256=" + hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

@app.post("/webhooks/piktid")
def handle_piktid_webhook():
    body = request.get_data()
    signature = request.headers.get("X-Webhook-Signature", "")
    if not verify_signature(webhook_secret, body, signature):
        abort(401)
    notification = request.get_json()
    data = notification.get("data", {})
    task_id = data.get("id_task") or data.get("job_id")
    if task_id != job_id:
        return "", 204
    if notification["name"] == "batch_edit":
        status = data.get("status")
        if status == "completed":
            print("Job completed!")
        elif status == "failed":
            print(f"Error: {data.get('error_message')}")
    return "", 204

app.run(port=8000)

[
  {
    "id": 12345,
    "name": "batch_edit", // Notification type
    "timestamp": 1702819200.0,
    "data": { // Job-specific data
      "id_task": "job_abc123...",
      "status": "completed",
      "total_images": 4,
      "processed_images": 4
    }
  }
]

Use DELETE /notifications/{id} after processing events so they are not replayed on reconnect.
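Acknowledging an event after handling it might look like the following (a sketch assuming the DELETE /notifications/{id} endpoint above and the `id` field from the notification payload):

```python
import requests

api_url = "https://v2.api.piktid.com"
headers = {"Authorization": "Bearer your_access_token"}

def notification_path(notification_id) -> str:
    # Path for DELETE /notifications/{id}
    return f"/notifications/{notification_id}"

def ack_notification(notification_id) -> None:
    """Delete a processed notification so it is not replayed on reconnect."""
    response = requests.delete(api_url + notification_path(notification_id), headers=headers)
    response.raise_for_status()
```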
Retrieving results
Once the job is complete, retrieve the processed images.
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
job_id = "job_abc123..."
response = requests.get(
    api_url + f"/jobs/{job_id}/results",
    headers={"Authorization": "Bearer " + access_token},
).json()
for result in response["results"]:
    print(f"Image {result['image_index']}: {result['output']['full_size']}")

{
  "job_id": "job_abc123...",
  "job_type": "create_packshot",
  "status": "completed",
  "results": [
    {
      "image_index": 0,
      "group_index": 0,
      "output": {
        "full_size": "https://...", // Result image URL
        "thumbnail": "https://..."
      },
      "model_used": "nano_banana_pro", // Engine that produced this output
      "status": "completed"
    },
    {
      "image_index": 1,
      "group_index": 0,
      "output": {
        "full_size": "https://...",
        "thumbnail": "https://..."
      },
      "model_used": "nano_banana_pro",
      "status": "completed"
    }
  ],
  "summary": {
    // Job statistics
  }
}

Bulk download
For bulk downloads, generate a temporary download URL that packages all results into a ZIP file.
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
job_id = "job_abc123..."
# Generate download URL
response = requests.post(
    api_url + "/download",
    headers={"Authorization": "Bearer " + access_token},
    json={"job_id": job_id},
).json()
download_url = response["download_url"]
expires = response["expires"]
print(f"Download URL: {download_url}")
print(f"Expires: {expires}")
# Download the ZIP file (no auth required for the token URL)
zip_response = requests.get(download_url)
with open("results.zip", "wb") as f:
    f.write(zip_response.content)

{
  "download_url": "https://v2.api.piktid.com/download/token123...",
  "expires": "2024-12-17T11:00:00Z" // URL expiration time
}

Checking job status
You can also check the job status directly without waiting for notifications.
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
job_id = "job_abc123..."
response = requests.get(
    api_url + f"/jobs/{job_id}/status",
    headers={"Authorization": "Bearer " + access_token},
).json()
print(f"Status: {response['status']}")
print(f"Progress: {response['progress']}%")
print(f"Processed: {response['processed_images']}/{response['total_images']}")

{
  "job_id": "job_abc123...",
  "job_type": "create_packshot",
  "status": "processing",
  "progress": 50.0,
  "total_images": 4,
  "processed_images": 2,
  "should_post_process": false,
  "post_processing_status": null,
  "created_at": "2024-12-17T10:00:00Z",
  "updated_at": "2024-12-17T10:05:00Z"
}

Error handling
Jobs may fail for a variety of reasons. Check the error_message field in the job status or results.
import requests
api_url = "https://v2.api.piktid.com"
access_token = "your_access_token"
job_id = "job_abc123..."
response = requests.get(
    api_url + f"/jobs/{job_id}/results",
    headers={"Authorization": "Bearer " + access_token},
).json()
if response["status"] == "failed":
    print(f"Job failed: {response['error_message']}")
else:
    for result in response["results"]:
        if result["status"] == "failed":
            print(f"Image {result['image_index']} failed: {result['error_message']}")

Common errors at job creation
POST /create-packshot returns the following errors before the job is queued:
| HTTP | Meaning |
|---|---|
| 400 | Missing or invalid image, invalid anchor_index, invalid custom dimensions, unsupported model. |
| 402 | Insufficient credits. Response body includes required_credits, in_progress_credits, user_credits. |
| 403 | Output size exceeds your plan's policy, or you do not have access to the target project. |
| 404 | Project not found. |
| 429 | Concurrent job limit reached. Non-enterprise accounts cap at 5 active jobs across model-swap, flat-2-model, and create-packshot combined. The endpoint is also rate-limited to 5 requests per minute per user. |
{
  "error": "Insufficient credits to create job",
  "required_credits": 20.0,
  "in_progress_credits": 5.0,
  "user_credits": 12.0
}