Cylindo Quickshot

Intro to Quickshot

Quickshot is Cylindo's product-first lifestyle imagery solution, designed to help teams create accurate, on-brand visuals at the speed of commerce. It combines AI-generated environments with structured product inputs, so the product stays recognizable, consistent, and central in every image.

With Quickshot, teams can:

  • Use Cylindo Master Assets in the 2D canvas to generate content instantly, with the creative freedom and product-feature accuracy provided by the Master Asset's 32 frames

  • Use Cylindo Master Assets for scale-accurate, multi-product scenes in the 3D editor

  • Edit, adapt, and create visual variations without restarting from scratch

  • Distribute content directly through Cylindo’s Curator and Content API

With Quickshot, users can generate high-quality lifestyle imagery in 2K native resolution*, powered by AI models trained on your specific product logic. The system enhances user creativity by blending prompt-based input, style presets, reference images, and 2D or 3D scenes into realistic, detail-rich lifestyle images in seconds.

There are two different creation workflows users can choose from:

  1. 3D Editor

    Trained on Cylindo’s 3D Master Assets and integrated directly into the Cylindo platform, the “3D editor” combines AI-generated scenes with 3D layout and composition for a controlled approach designed for scenes where scale, camera intent, and spatial accuracy matter.

  2. 2D Canvas

    Starting from uploaded product imagery you own or 2D frames from existing Master Assets, the “2D canvas” allows you to control the scale and position of one or more products and uses AI to generate the surrounding scene. It offers a faster approach for exploration and content creation when full spatial control is not a hard requirement. Products that have a Cylindo Master Asset can be featured in a scene using one or more of the 32 frames, in any available configuration, giving additional creative freedom and an image variety unmatched by AI background generators.
    Although AI models such as Nano Banana can infer a product's look from a camera angle other than the one provided, product accuracy can suffer.

We expect 80-100% of generated images to be "usable". A "usable image" will meet the following criteria:

  • Accurate product representation

  • No or minor artifacts / product inaccuracies (typically fixable via inpainting)

  • Alignment with creative expectations

Key Considerations on AI-Generated Images

AI-generated lifestyle images in Quickshot incorporate elements of controlled randomness, meaning that product position, layout, scale relationships, and other visual details are guided—but not strictly constrained—by the underlying scene, prompt, presets, and reference image inputs.

Unlike V-Ray renders, which produce deterministic outputs, Quickshot's AI generations may introduce slight variations in product placement and accuracy across images.

Important distinction:

  • In the 3D editor, dimensional accuracy is enforced via 3D Master Assets and the underlying 3D scene.

  • In the 2D canvas, scale is inferred from the uploaded image, unless a Master Asset frame is used – in which case the product data and logic are known.


2D Canvas vs 3D Editor generation method

| | Quickshot – 3D editor | Quickshot – 2D canvas |
| --- | --- | --- |
| Primary role | Fast, accurate lifestyle imagery from 3D | Immediate lifestyle imagery from existing photos |
| Best for | Campaigns, PDP refreshes, lifestyle content | Early-stage content, fast campaigns, merchandising |
| Speed | Fast (seconds to ~1 min) | Fastest (seconds to ~1 min) |
| Input | Cylindo 3D Master Assets | Product photos, cutouts, or MA frames |
| Product accuracy | High (anchored to 3D data) | Med–High (anchored to 2D image) |
| Scale & dimensions | Exact | Best-effort (AI inferred) |
| Creative freedom | Unlimited | Unlimited |
| Creative control | High | Med (32 frames from MA); Low (product imagery) |
| Control effort | Med | Low |
| Camera control | Full (angle + focal length) | Indirect (driven by image composition) |
| Multi-product | Coming soon (high scale accuracy) | Supported (AI may adjust perspectives to match) |
| Learning curve | Low–Med (3D knowledge helpful) | Low |

Current Release Specifications

The current release of Quickshot includes:

  1. 2K native resolution (2048 px on the longest side)

  2. Generation time between 30–60 seconds

  3. User inputs, via prompt, reference image, or style presets, are intelligently upsampled to align with or enhance the staged scene

  4. AI-based post-processing allows props to be added, adjusted, or replaced within seconds. Moreover, large elements of the scene can be changed via prompting alone, or by prompting and marking the targeted area.

  5. Multi-product support available in the 2D canvas workflow. Availability in the 3D editor will come in Q2 2026.

  6. Final images are ready for distribution via the Cylindo Content API and Curator
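As a rough sketch of how the 2K spec translates to pixel dimensions: assuming the output simply scales a chosen aspect ratio so that the longest side is 2048 px (the exact scaling and rounding rule is an assumption, not documented above), the resulting dimensions can be computed as:

```python
from math import floor

def output_dimensions(aspect_w: int, aspect_h: int, longest_side: int = 2048) -> tuple[int, int]:
    """Scale an aspect ratio so its longest side equals `longest_side` px.

    Assumption: the shorter side is rounded down to a whole pixel; the
    actual product may round differently.
    """
    if aspect_w >= aspect_h:
        return longest_side, floor(longest_side * aspect_h / aspect_w)
    return floor(longest_side * aspect_w / aspect_h), longest_side

# A 4:3 landscape canvas at 2K:
print(output_dimensions(4, 3))   # (2048, 1536)
# A 9:16 portrait canvas at 2K:
print(output_dimensions(9, 16))  # (1152, 2048)
```

Under this assumption, only a square canvas produces a full 2048 × 2048 px image; any other aspect ratio yields fewer pixels on the shorter side.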

Scene Setup in 3D Editor

Step 1: Product selection

Select any product that has a Cylindo Master Asset rendered in 32 frames and, where available, amend its configuration.

Step 2: Scene Templates and Generation Settings

Pick from Indoor or Outdoor templates to direct the 3D scene layout and guide generation. Use no-context templates when you want the AI to define the scene architecture and props.

It’s important to note that the prompt is the primary factor influencing the AI-generated setting. When selecting a template, keep in mind that Quickshot uses the underlying 3D scene layout as guidance during generation, not the preview image.

Open and Basic context templates can be very effective for both outdoor and indoor scenarios when:

  • You want to blur the background

  • You prefer to delegate architecture, props, and placement to the AI

These templates remove most scene context and let the AI decide how to fill the space creatively.

Camera Angle

The camera angle must be set manually via the dedicated Camera Angle panel. The focal length ranges from ultra-wide to telephoto lenses. Use an ultra-wide lens for shots from above, and a telephoto lens when you want to compress the perspective and pair it with a prompt that heavily blurs the background.

Note: Templates load with a standard 50mm focal length, so make sure to adjust it as needed.
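To make the ultra-wide vs telephoto trade-off concrete: perspective "compression" follows from the narrower field of view of longer lenses. A minimal sketch, assuming full-frame-equivalent focal lengths with a 36 mm sensor width (an assumption; Quickshot does not state its equivalence):

```python
from math import atan, degrees

SENSOR_WIDTH_MM = 36.0  # assumption: full-frame-equivalent focal lengths

def horizontal_fov(focal_length_mm: float) -> float:
    """Horizontal field of view in degrees for a rectilinear lens."""
    return degrees(2 * atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

# Ultra-wide (24mm) sees roughly 74 degrees, the default 50mm about 40,
# and a telephoto 85mm only about 24 -- hence the compressed look.
for f in (24, 50, 85):
    print(f"{f}mm -> {horizontal_fov(f):.1f} degrees")
```

The halving of the field of view from 50mm to 85mm is why telephoto shots flatten depth and make background blur prompts more effective.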

Scene Setup in 2D Canvas

Step 1: Product Selection

Upon entering Quickshot, the workspace will default to the ‘3D editor’. You will need to select the ‘2D canvas’ from the workflow options at the top of the screen.

From there, you have two options for creating:

  1. If you have existing Master Assets, you can choose from your products on the left-hand panel.

  2. If you would like to use external media, select the ‘Uploads’ option and drop your file into the upload box.

    1. Note: Toggle the auto-remove background setting on or off at your discretion.

⚠️ Attention: Images created with Quickshot from uploaded imagery currently cannot be distributed via the Curator and Content API.
⚠️ Attention: The option to upload images directly in Quickshot will be deprecated and replaced with the ability to upload them as external media to a Product page. These will then be accessible in Quickshot and Curator.

Step 2: Scene placement and settings

After clicking your selection in the left-hand panel, your product image will appear in the workspace.

Click and drag to move the product within the workspace to define the layout of the scene.

Click on the product and drag the corners of the bounding box to adjust sizing.

If using an existing Master Asset:

  • Use the rotation bar to select which frame you would like to use in the scene.

  • Click the settings button to adjust the configuration.

In the case of an uploaded image, the dimensions of the product, spatial awareness, product quality, and overall scene generation rely on the provided image. When a frame from a Master Asset is used, the underlying product logic from the structured data package informs the AI model more accurately.

Product duplication: Any product can be duplicated in the canvas in order to feature multiple instances in a given visual. This interaction is particularly relevant and effective when Cylindo MA are used in the canvas.

Multi-product: you can add multiple products, Master Assets or uploads, to the scene as desired.

Aspect ratio: adjust the aspect ratio of the workspace, and the resulting image, by clicking the button in the upper left corner of the space.

Prompting

There are several inputs that you can choose from that will affect prompting.

3D editor

  • Select a style preset

  • Upload a reference image

  • And/or enter a custom prompt

2D canvas

  • Upload a reference image

  • And/or enter a custom prompt

These inputs will be analyzed in combination with the 3D scene or 2D layout to enhance the prompt before being submitted to the AI model for generation. 

After the first generation, you will be able to review the upsampled prompt and make adjustments — such as adding, removing, or editing objects and settings.

💡Tip! We recommend describing lighting styles with vivid, directional language such as: "Dynamic warm light coming from a window casting long shadows and highlights on the scene." See some examples below for inspiration.

To enter a new prompt from scratch, please make sure to clear the existing prompt first.

Note: It is expected that some props listed in the upsampled prompt may be missing or misplaced in one or more generated images. This is a normal part of how AI generation works and does not indicate an error.

Review images and generate more

After clicking ‘Generate’, images will appear gradually. You can click on any thumbnail on the right-hand side to enter the Gallery View.

From the Gallery View, you can:

  • Review product consistency by interacting with the product viewer on the right (only available if a Master Asset was used in the scene)

  • Generate more images using the same scene and prompt

  • Apply edits to specific areas of the image using the inpainting tools

Editing a Generated image

Instead of discarding and regenerating an image, you can edit the content of a generated scene directly. This is useful for refining props, correcting small product inaccuracies, or changing visual details without generating an entirely new image.

While simply amending the prompt can sometimes help, it rarely applies precise edits such as changing the material of a rug or adjusting a specific painting. For that, use the Smart edit feature to paint over areas and guide the AI.

  1. From the Gallery View of any generated image, hover over the image and click Smart edit to open the editing interface.

  2. Choose from the following options at the top of the prompting box:

    1. Edit if you would like to change or add something to the scene.

    2. Erase if you would like to remove an object from the scene.

    3. Fix product if there is an issue with the product within the scene.

Changing or adding to the scene

  1. Using the mask tool, paint over the area within the image that you would like to change.

  2. Add a prompt and/or reference image for the specific change you desire.

  3. Hit the blue Edit button at the bottom right corner of the prompting box to regenerate with the applied changes. 

  4. New thumbnails will appear at the bottom of the screen with variations of the changed image. You can choose to add additional variations as desired.

  5. Once you are satisfied with an image, select Keep and the new image will replace the original in your main gallery.

Removing objects

  1. Using the mask tool, paint over the object within the image that you would like to remove.

  2. Select the blue Erase button in the lower corner of the prompt box to regenerate the image. 

  3. New thumbnails will appear at the bottom of the screen with variations of the changed image. You can choose to add additional variations as desired.

  4. Once you are satisfied with an image, select Keep and the new image will replace the original in your main gallery.

Fixing the Product

  1. The product will be auto-selected. Adjust the selection as needed using the mask tool and painting directly on the image.

  2. Select the blue Fix product button in the lower corner of the prompt box to regenerate the image. 

  3. New thumbnails will appear at the bottom of the screen with variations of the changed image. You can choose to add additional variations as desired.

  4. Once you are satisfied with an image, select Keep and the new image will replace the original in your main gallery.

Tips for Effective Editing

  • You can often fix product inaccuracies via the editing tool and avoid discarding otherwise suitable images. Before doing so:

    • Check how the product is described in the refined prompt and adjust the description if needed.

    • Identify any props or materials (like extra pillows or cushions) that may interfere with product accuracy. Remove or revise them.

  • You can highlight multiple areas or objects and apply all changes in a single edit. You don’t need to be highly accurate — the prompt also guides placement and positioning. 

    • Example: Roughly select the whole sofa and write: "There is a pillow on the right side of the sofa and a blanket on the left arm, touching the ground."

  • Use the format: "There is [object] on/inside/over [area]." You can write multiple lines — one per selected area. 

  • Commands like "add..." or "change..." or "replace..." usually do not work.

  • Be descriptive for precise results; otherwise the AI will match the prop to the overall style of the scene. For example:

    • "lamp hanging from the ceiling" → vague; a lamp made from materials already present in the scene might appear

    • "vintage lamp hanging from the ceiling" → better; results will show lamps that match the scene while also having a vintage style

    • "large vintage lamp with brushed metal finish and a green and purple polka dot shade" → best approach if you have something specific in mind

  • People and animals: results are mixed; sometimes convincing, other times not. We recommend adding and iterating on people in Smart edit rather than in the generation step. Replacing people props that partially cover a product can lead to product inaccuracies.

Trying Again & Keeping Results

You can make more than two edit attempts, but if the results still don't match your intent after four tries, we recommend refining the prompt.

After a successful result, be sure to click Keep to save the edited image. If you do not click Keep, the result will be discarded.

⚠️ Attention:

  1. Keeping an edited image will replace the existing one.

  2. If you close the gallery view without keeping your results, all edit attempts will be lost. Always confirm your edits by clicking Keep before exiting.

When to Use Edits vs. Full Regeneration

Edits are best for adding props or fixing small localized issues. If your changes affect large areas or multiple elements of the scene, consider generating a new image with an updated prompt.

Quickshot Content Distribution

Click the ⭐ Publish button in the Gallery View or on the thumbnail in the main Results stream to mark a generated Quickshot image for export. Published images become retrievable via CAPI and can be distributed via Curator.

⚠️ Attention: Currently, only images that use Master Assets can be published and distributed. Images made with uploaded imagery will need to be downloaded.
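For teams scripting retrieval of published images, the snippet below is a hypothetical URL builder only: the customer-ID and product-code path shape mirrors Cylindo's content CDN conventions, but the `quickshot` segment, the image ID format, and the example customer ID are illustrative placeholders. Verify the actual route against the official Content API documentation before use.

```python
# Hypothetical sketch: the exact CAPI route for Quickshot images is not
# documented above. The customer-ID / product-code URL shape mirrors
# Cylindo's content CDN conventions, but the "quickshot" segment and
# image ID are placeholders, not a confirmed endpoint.
BASE_URL = "https://content.cylindo.com/api/v2"

def quickshot_image_url(customer_id: int, product_code: str, image_id: str) -> str:
    """Build an illustrative retrieval URL for a published Quickshot image."""
    return f"{BASE_URL}/{customer_id}/products/{product_code}/quickshot/{image_id}.jpg"

# Placeholder customer ID and product code for illustration only:
print(quickshot_image_url(4965, "SOFA-123", "img-001"))
```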