Cylindo Quickshot

Intro to Quickshot

Quickshot generates high-quality lifestyle imagery in 1.5K native resolution, powered by AI models trained on Cylindo’s premium 3D Master assets. The system enhances user creativity by blending prompt-based input, style presets, and reference images into realistic 3D scenes in seconds. On average, expect roughly 1 out of every 6 generated images to be "usable" (a rough batch-planning sketch follows the list below). A "usable" image meets the following criteria:

  • Accurate product representation

  • No artifacts, or only minor artifacts or product inaccuracies (typically fixable via inpainting)

  • Alignment with creative expectations
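
As a rough illustration of what the 1-in-6 rate means for batch planning, the Python sketch below treats each generation as an independent trial with a 1/6 usable probability. That independence assumption is a simplification, not a documented property of the system:

    p_usable = 1 / 6  # stated expectation: ~1 usable image per 6 generations

    for n in (6, 12, 18):
        expected = n * p_usable                  # expected usable images in a batch of n
        at_least_one = 1 - (1 - p_usable) ** n   # chance the batch yields at least one
        print(f"{n} generations: ~{expected:.1f} usable expected, "
              f"{at_least_one:.0%} chance of at least one")

Under these assumptions, a batch of 6 yields about one usable image and a roughly 67% chance of at least one, so plan on generating more than one batch when you need several keepers.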

Key Considerations for AI-Generated Images

AI-generated lifestyle images in Quickshot incorporate elements of controlled randomness, meaning that product position, layout, and other visual details are guided—but not strictly constrained—by the underlying 3D scene and prompt.

Unlike V-Ray renders, which produce deterministic outputs, Quickshot's AI generations may introduce slight variations in product placement and accuracy across images.

The initial release of Quickshot comes with the following specifications:

  1. 1.5K native resolution (1536×1024px – 3:2 fixed aspect ratio) lifestyle image generation using AI trained on Cylindo 3D Master Assets

  2. Generation time between 30–60 seconds (ongoing performance optimizations)

  3. User creative input, via prompt, reference image, or style preset, is intelligently upsampled to align with or enhance the staged 3D scene

  4. AI-based post-processing allows props to be added, adjusted, or replaced within seconds. Final images are ready for distribution via the Cylindo Content API and Curator.
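
As a back-of-the-envelope planning aid, the sketch below combines the stated 30–60 second generation time with a six-image batch. Whether generations actually run sequentially or in parallel is an assumption here, so treat the result as an upper-bound estimate:

    per_image_seconds = (30, 60)   # stated generation time range
    batch_size = 6                 # matches the ~1-in-6 usable expectation

    best = batch_size * per_image_seconds[0] / 60
    worst = batch_size * per_image_seconds[1] / 60
    print(f"Batch of {batch_size}: ~{best:.0f}-{worst:.0f} minutes if run sequentially")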


Preparing a product for Quickshot

Quickshot's AI is trained on your Cylindo 3D Master Assets. Before you can start creating imagery with Quickshot, you will need to select the products and configurations that the AI should be trained on. To do this, follow these steps:

  • Log in to the Cylindo platform and navigate to the 'Assets' tab.

  • Select a product from your asset library and enter the product page.

  • From here, select the configuration you would like to use in Quickshot.

  • Click the three dots to the upper left of the product viewer and choose 'Convert for Quickshot'.

  • When you open Quickshot, the product will appear among the selections marked as in training. Training takes roughly 20–30 minutes before the product is fully trained and available for use.


Quickshot flow overview

Step 1: Select a Product

Step 2: Choose a Template

Step 3: Add Prompting Inputs

Step 4: Review Images & Generate More

Step 5: Edit a Generated Scene

Step 6: Distribute Quickshot Content

Step 1: Product Selection

Selecting a Quickshot-ready product is the first step in creating a lifestyle image.

The AI model is trained on the 360 asset and uses that information to replicate the product's likeness in the generated lifestyle image.

Step 2: Scene Templates and Generation Settings

Pick from Indoor or Outdoor templates to direct the 3D scene layout and guide generation. Use no-context templates when you want the AI to define the scene architecture and props.

It’s important to note that the prompt is the primary factor influencing the AI-generated setting. When selecting a template, keep in mind that Quickshot uses the underlying 3D scene layout as guidance during generation, not the preview image.

Open and Basic context templates can be very effective for both outdoor and indoor scenarios when:

  • You want to blur the background

  • You prefer to delegate architecture, props, and placement to the AI

These templates remove most scene context and let the AI decide how to fill the space creatively.

Camera Angle

The camera angle must be set manually via the dedicated Camera Angle panel. Focal lengths range from ultra-wide to telephoto. Ultra-wide lenses work well for shots from above; telephoto lenses compress the perspective and pair well with prompts that heavily blur the background.

Note: Templates do not define the camera angle, so please make sure to adjust it yourself as needed.
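
For intuition about how focal length changes framing, the horizontal field of view follows the standard formula fov = 2 × atan(sensor_width / (2 × focal_length)). The sketch below evaluates it for a few illustrative focal lengths on a full-frame (36 mm wide) sensor; Quickshot's actual camera parameters are not documented here, so treat these values as examples only:

    import math

    # Horizontal field of view for a full-frame (36 mm wide) sensor.
    # The focal lengths below are illustrative, not Quickshot's actual range.
    def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    for f in (16, 35, 85):  # ultra-wide, normal, telephoto
        print(f"{f} mm lens: ~{horizontal_fov_deg(f):.0f} degrees horizontal field of view")

A 16 mm lens covers roughly 97 degrees horizontally, while an 85 mm lens covers about 24 degrees, which is why telephoto settings appear to compress the perspective and isolate the product.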

Step 3: Prompting

You can select a style preset, upload a reference image, and/or enter a custom prompt. Any of these inputs will be analyzed in combination with the 3D scene to generate a prompt that harmonizes with the selected layout. After the first generation, you will be able to review the upsampled prompt and make adjustments — such as adding, removing, or editing objects and settings.

We recommend describing lighting styles with vivid, directional language such as: "Dynamic warm light coming from a window casting long shadows and highlights on the scene." See some examples below for inspiration.

To enter a new prompt from scratch, please make sure to clear the existing prompt first.

Note: It is expected that some props listed in the upsampled prompt may be missing or misplaced in one or more generated images. This is a normal part of how AI generation works and does not indicate an error.

Step 4: Review Images and Generate More

After clicking Generate, images will appear gradually. You can click on any thumbnail on the right-hand side to enter the Gallery View.

From the Gallery View, you can:

  • Review product consistency by interacting with the 3D product viewer on the right

  • Generate more images using the same 3D scene and prompt

  • Apply edits to specific areas of the image using the inpainting tools

Step 5: Editing a Generated Image

Instead of discarding and regenerating an image, you can edit the content of a generated scene directly. This is useful for refining props, correcting small product inaccuracies, or changing visual details without generating an entirely new image.

While simply amending the prompt can sometimes help, it rarely applies precise edits such as changing the material of a rug or adjusting a specific painting. For that, use the Edit objects feature to paint over areas and guide the AI.

How to Access Edits

From the gallery view of any generated image, click Edit objects. This opens the editing interface, where you can select areas and define new prompts for specific edits.

Fixing the Product

You can often fix product inaccuracies via the editing tool and avoid discarding otherwise suitable images. Before doing so:

  • Check how the product is described in the refined prompt. Adjust the description if needed.

  • Identify any props or materials (like extra pillows or cushions) that may interfere with product accuracy. Remove or revise them.

To fix the product:

  1. Click the Fix product button in the image view.

  2. The product will be auto-selected. You can adjust the mask to include more or fewer areas.

  3. Click Fix product to correct the shape/material, or enter a prompt like "oak wooden legs" for a targeted fix.

Note: When edits are being generated, you must click on the thumbnail of the result to view the edited image. Otherwise, the main image will still show the default (unedited) version.

Add and Remove Objects: Tips for Effective Editing

  • You can highlight multiple areas or objects and apply all changes in a single edit. You don’t need to be highly accurate — the prompt also guides placement and positioning. Example: Roughly select the whole sofa and write: "There is a pillow on the right side of the sofa and a blanket on the left arm, touching the ground."

  • Use the format: "There is [object] on/inside/over [area]." You can write multiple lines — one per selected area. Commands like "add..." or "change..." or "replace..." usually do not work.

  • Be descriptive for precise results; otherwise the AI will try to match the prop to the overall style of the scene. For example:

    • "lamp hanging from the ceiling" → vague; a lamp made from materials already present in the scene might appear

    • "vintage lamp hanging from the ceiling" → better; results will show vintage lamps that match the scene's style

    • "large vintage lamp with brushed metal finish and a green and purple polka dot shade" → best if you have something specific in mind

  • People and animals: results are mixed — sometimes convincing, other times not.

  • Object removal: success rate is around 50%. Describing the location of the object helps.

Trying Again & Keeping Results

You can make more than four edit attempts, but if the results still don't match your intent after five tries, we recommend refining the prompt.

After a successful result, be sure to click "Keep" to save the edited image. If you do not click Keep, the result will be discarded.

⚠️ Attention: If you close the gallery view without keeping your results, all edit attempts will be lost. Always confirm your edits by clicking Keep before exiting.

When to Use Edits vs. Full Regeneration

Edits are best for adding props or fixing small localized issues. If your changes affect large areas or multiple elements of the scene, consider generating a new image with an updated prompt.

Step 6: Quickshot Content Distribution

Clicking the ⭐ Star button in the gallery view marks a generated Quickshot image for export. Starred images are made retrievable via the Cylindo Content API (CAPI) and can be distributed via Curator.
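
For teams that pull content programmatically, the sketch below shows one way a starred-image list might be fetched over HTTP with Python's requests library. The endpoint path, query parameter, account ID, product code, and response shape are all placeholders for illustration, not documented CAPI routes; refer to the Cylindo Content API documentation for the actual URL structure:

    import requests

    # Sketch only: the path and "starred" parameter below are assumptions,
    # not a documented CAPI route for Quickshot content.
    ACCOUNT_ID = "1234"        # hypothetical Cylindo account ID
    PRODUCT_CODE = "SOFA-01"   # hypothetical product code

    url = f"https://content.cylindo.com/api/v2/{ACCOUNT_ID}/products/{PRODUCT_CODE}/quickshot"
    response = requests.get(url, params={"starred": "true"}, timeout=30)
    response.raise_for_status()

    for image in response.json().get("images", []):
        print(image.get("url"))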


Quickshot gallery of examples