FluxNote
✂️ Edit & Refine

Edge Detect

Composition control via edges

Use a reference image's edges to control composition. Generate new content with the same layout.

Composition control · Pose control · Layout match

New model

Edge Detect

Be among the first creators to generate real outputs.

Cost per image

3 credits

Free renders

~33 on free plan

Watermark

None — even free

Animate to video

1-click, 5–10s

What is Edge Detect?

Use a reference image's edges to control composition. Generate new content with the same layout.

Edge Detect is an edge-guided image model: you provide a reference image and a plain-English prompt, and the model reads the reference's edge map (its outlines, pose, and layout) and generates new content that follows that structure exactly. Unlike text-to-image models that compose from scratch, Edge Detect locks composition to the reference while your prompt controls style, subject, color, and lighting. Common use cases include restyling brand templates, redrawing the same pose in different art styles, rendering a building in alternative architectural styles, and producing layout-identical campaign variants, all without touching Photoshop.

Edge Detect accepts JPG, PNG, and WebP source images at up to 1536×1536. Generation time is 10–16 seconds — faster than most text-to-image models because the model is modifying rather than generating from scratch. Edit instructions work best when they're specific about what to change and explicit about what to preserve: "change the background to a white studio backdrop, keep the product and lighting" reliably outperforms "clean up the background". For large structural changes — removing a person, adding a new object — multiple iterations work better than trying to describe the entire transformation in one prompt.

On FluxNote, Edge Detect costs 3 credits per image. The free plan includes 100 credits per month — enough for 33 Edge Detect renders with no credit card and no watermark. Paid plans (from $9/month) scale up to 15,000 credits without changing how the model or the interface works. Edge Detect sits alongside 18 other image models in the same dashboard; switching to a different model for a different job is a single click, no separate subscription required.
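The credit math above is simple enough to sketch — plan numbers are taken from this page:

```python
# Credit arithmetic for Edge Detect on FluxNote (numbers from this page).
FREE_CREDITS_PER_MONTH = 100
COST_PER_IMAGE = 3  # credits per Edge Detect render

# Integer division: partial renders aren't possible.
free_renders = FREE_CREDITS_PER_MONTH // COST_PER_IMAGE
print(free_renders)  # 33 renders per month on the free plan
```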

Every Edge Detect output has a one-click animate button. FluxNote sends the still to one of its AI video models — Sora 2 Pro, Veo 3, Kling 3.0, Runway Gen-4, or Seedance 2.0 — and returns a 5–10 second clip. This image-to-video workflow is particularly useful for ad creatives (static images animate well for story and feed placements), social content (a single Edge Detect image can become a TikTok, Reel, and YouTube Short), and product showcases (slow camera moves around a product image that isn't technically a 3D model).

For most workflows, Edge Detect works best as part of a multi-model pipeline: generate concepts fast with FLUX Schnell or Gemini Flash 2.5 at 1–4 credits per image, switch to Edge Detect for the final render, then run the result through FLUX Kontext Edit for last-mile corrections, or Nucleus Image if you need a 2× or 4× upscale for print. All of these models are available under the same FluxNote subscription — there's no additional charge to access other models.
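The pipeline above can be sketched as a sequence of model calls. This is purely illustrative: `generate` is a stand-in, not a real FluxNote SDK function, and the credit costs for FLUX Kontext Edit and Nucleus Image are assumed for the example (only Schnell's 1 credit and Edge Detect's 3 credits come from this page).

```python
# Hypothetical sketch of the concept -> render -> edit -> upscale pipeline.
# `generate` is a placeholder, NOT a documented FluxNote API.
def generate(model: str, prompt: str, credits: int) -> dict:
    """Stand-in for a generation call: records which model ran and its cost."""
    return {"model": model, "prompt": prompt, "credits": credits}

def concept_to_print(prompt: str) -> list[dict]:
    """Concept drafts -> composition-locked render -> corrections -> upscale."""
    return [
        generate("FLUX Schnell", prompt, credits=1),               # cheap concept drafts
        generate("Edge Detect", prompt, credits=3),                # final composition-locked render
        generate("FLUX Kontext Edit", "fix details", credits=4),   # last-mile corrections (cost assumed)
        generate("Nucleus Image", "upscale 2x", credits=6),        # print-grade upscale (cost assumed)
    ]

pipeline = concept_to_print("watercolor product hero, same layout as template")
total_credits = sum(step["credits"] for step in pipeline)
```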

Spec sheet

Hard numbers — what Edge Detect accepts and what it produces.

Resolution

Up to 1536×1536

Generation time

10–16 seconds

Aspect ratios

Match source

Inputs

Reference image + text prompt

Edge Detect: strengths & limitations

An honest picture of what Edge Detect does well — and where it doesn't. Use this to decide whether to pick Edge Detect or one of FluxNote's 18 other image models for a given job.

Strengths

  • Locks composition and pose from a reference; you swap the visual content underneath.
  • Great for matching brand templates exactly.
  • Useful for redrawing/restyling user-uploaded photos at scale.

Limitations

  • You need a reference image — not a pure text-to-image model.
  • Fine details (faces, hands) require careful prompt work to preserve.

3 ways creators use Edge Detect

Real-world workflows we see most often in the FluxNote dashboard.

01

Template-locked campaign assets

Scenario: Brand has a strict campaign layout (subject left, copy right) and needs 8 visual variants without breaking the grid.

Walkthrough: Use one approved hero as the edge reference, prompt 8 visual variants, and all of them match the layout exactly.

02

Pose-matched series

Scenario: You need the same character pose redrawn in 5 different art styles.

Walkthrough: Edge-detect from one source, swap the style prompt for each render, and end with 5 stylistically distinct but pose-identical renders.

03

Architectural style variants

Scenario: An architect wants the same building rendered in modernist, Victorian, brutalist, and futuristic styles.

Walkthrough: Edge-detect the original blueprint render, prompt the new styles, and deliver a comparison sheet.

Prompt examples that work with Edge Detect

Copy-paste any of these prompts into the Edge Detect model to get a result close to the example.

Style swap, same layout

[edge ref: brand template] generate watercolor version with same composition

Pose match

[edge ref: yoga pose photo] generate as a stylized line illustration

Architecture variant

[edge ref: modernist house] re-render in Victorian gothic style
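The three prompts above all follow the same `[edge ref: …] instruction` pattern, which is easy to template. A minimal sketch, assuming the bracketed prefix shown on this page is the expected syntax:

```python
def edge_prompt(reference: str, instruction: str) -> str:
    """Compose an Edge Detect prompt in the '[edge ref: ...]' pattern shown above."""
    return f"[edge ref: {reference}] {instruction}"

# Reproduces the first example prompt on this page.
prompt = edge_prompt("brand template", "generate watercolor version with same composition")
# -> "[edge ref: brand template] generate watercolor version with same composition"
```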

How to get the best results from Edge Detect

Prompt strategies that consistently improve output quality — based on how Edge Detect was trained and what it responds to.

01

Be specific about lighting

Instead of "product photo", try "product on white marble, soft diffused window light from the left, slight shadow on the right, 85mm lens perspective". Lighting direction and quality are the single biggest driver of Edge Detect's realism.

02

Name the camera or lens

"35mm film grain", "85mm portrait lens", "macro lens close-up", "wide-angle environmental shot" — Edge Detect was trained on a corpus that includes photography metadata. Camera/lens references activate a different distribution of outputs than generic prompts.

03

Describe what's NOT in the frame

Adding "no text, no watermark, no border, clean background" prevents Edge Detect from adding visual clutter that wasn't requested. Negative constraints are especially useful for product photography and editorial portraits.

04

Use aspect ratio to guide composition

Edge Detect adapts composition to the requested aspect ratio. For a 9:16 vertical, it naturally produces portrait-style framing. For 16:9, it composes for landscape. Choosing the right ratio before prompting — rather than cropping after — gives you better compositions and less wasted generation.

05

Iterate on the seed before changing the prompt

When Edge Detect produces a near-miss result, try regenerating with a different seed before rewriting the prompt. More often than not, the issue is randomness, not prompt direction. Changing the seed with the same prompt often produces the version you were looking for.
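Seed iteration is just a loop over the same prompt. This sketch uses a fake `render` stand-in (not a FluxNote API) with a made-up quality score, to show the shape of the workflow: vary only the seed, keep the prompt fixed, keep the best result.

```python
import random

def render(prompt: str, seed: int) -> dict:
    """Stand-in for a seeded generation call -- the score is a fake quality proxy."""
    random.seed(seed)  # deterministic per seed, like a real seeded render
    return {"prompt": prompt, "seed": seed, "score": random.random()}

def best_of_seeds(prompt: str, seeds: range) -> dict:
    """Regenerate with different seeds, same prompt; keep the best-scoring result."""
    return max((render(prompt, s) for s in seeds), key=lambda r: r["score"])

pick = best_of_seeds("change the jacket to red, keep face and background", range(5))
```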

06

Specify what to preserve explicitly

"Change the jacket to red, keep the face, hair, and background exactly as-is" — Edge Detect needs to know what not to touch. Edits that don't specify what to preserve sometimes propagate changes further than intended. Explicit preservation constraints produce cleaner, more surgical results.

07

Use incremental edits for large changes

For transformative changes (swapping a background, adding a new object), break the edit into steps rather than asking for everything in one prompt. Edit the background first, then add the new element, then adjust lighting to match. Edge Detect handles incremental edits more reliably than single large transformations.
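The incremental approach above amounts to chaining edits, where each step's output becomes the next step's input. A sketch under that assumption; `apply_edits` just records the chain rather than calling any real service:

```python
# One large transformation broken into sequential edit instructions,
# as recommended above: background first, then the new object, then lighting.
incremental_edits = [
    "replace the background with a white studio backdrop, keep the subject",
    "add a potted plant on the right, keep everything else as-is",
    "adjust the lighting so the plant matches the soft key light",
]

def apply_edits(image: str, edits: list[str]) -> list[tuple[str, str]]:
    """Stand-in: records (input image, instruction) per step; each output feeds the next."""
    history, current = [], image
    for i, instruction in enumerate(edits, start=1):
        history.append((current, instruction))
        current = f"{image}+edit{i}"  # placeholder for the image returned by step i
    return history

log = apply_edits("source.png", incremental_edits)
```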

Edge Detect vs other FluxNote models

Quick reference for picking the right model — every alternative below ships in the same FluxNote subscription.

vs Remix

Remix varies the whole image; Edge Detect locks composition.

How to generate with Edge Detect

Four steps from prompt to publish-ready output. Total time: under 16 seconds.

STEP 01

Upload your source image

Sign in to FluxNote (free, no card) and upload the image you want to edit. Edge Detect works on JPG, PNG, and WebP — no special format needed, and files up to 25 MB are accepted. Your source is private to your account.

STEP 02

Write your prompt

Describe the result you want in plain English. Edge Detect responds best to specific details — subject, setting, lighting, lens, mood. It's tuned for composition control, pose control, and layout matching, but it handles general-purpose prompts too. If your prompt is short, FluxNote's built-in prompt assistant can expand it for you in one click.

STEP 03

Pick aspect ratio and style

Choose 1:1, 9:16, 16:9, 4:5, or 3:2 to match your destination — TikTok, Instagram, YouTube thumbnail, blog hero, print. Add an optional style preset (cinematic, anime, photoreal, painterly, etc.) to bias the output. Premium aspect ratios like 21:9 unlock on the Pro plan.
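Choosing the ratio up front can be reduced to a lookup table. The ratios are the ones listed in this step; the destination pairings are illustrative assumptions, not FluxNote defaults:

```python
# Destination -> aspect ratio (ratios from this page; pairings are assumptions).
ASPECT_FOR = {
    "tiktok": "9:16",
    "instagram_feed": "4:5",
    "youtube_thumbnail": "16:9",
    "blog_hero": "3:2",
}

def pick_ratio(destination: str) -> str:
    """Return the aspect ratio for a destination, falling back to square."""
    return ASPECT_FOR.get(destination, "1:1")
```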

STEP 04

Generate, refine, animate

Hit generate. Edge Detect typically renders in 10–16 seconds. Iterate on the prompt or seed to dial it in, then export full-resolution PNG with no watermark. Want it animated? Click the animate button to turn the still into a 5–10 second video clip in the same dashboard, no separate tool required.

Why creators pick FluxNote for Edge Detect

Edge Detect is one of 19 image models on FluxNote. One subscription unlocks them all.

1

One subscription covers Edge Detect plus 18 other AI image models — FLUX 2 Pro, FLUX PuLID, FLUX Kontext Edit, Seedream 3, Gemini Flash 2.5, FLUX Schnell, and more. No per-model paywalls and no separate logins for each provider.

2

100 free credits per month, no credit card required to start. Edge Detect costs 3 credits per image — that's around 33 free renders on the free plan alone.

3

Zero watermark on every plan including free. Your Edge Detect images export as clean full-resolution PNG, ready for paid social, print, or anywhere else.

4

Animate any Edge Detect output into a 5–10 second video clip with one click. Useful for ads, story posts, reel openers, and YouTube thumbnails — the animation cost is metered separately and starts at 6 credits.

5

Built-in prompt assistant, batch generation, image-to-image variations, and a reusable prompt library. You don't need to memorize prompt syntax — paste a rough idea and FluxNote refines it.

6

Reusable seed and style controls — lock the visual direction once, regenerate variants without losing it. Edge Detect respects seed control like the underlying API does.

7

Private by default. Outputs are visible only to your account unless you publish them to the showcase. We never sell or share your prompts and reference images.

Edge Detect FAQ

The 10 questions creators ask most often before switching to Edge Detect.

Is Edge Detect free on FluxNote?

Yes — every FluxNote plan including the free tier (100 credits/month) can generate with Edge Detect. Each generation costs 3 credits, so the free tier covers around 33 renders per month with no credit card required.

What is Edge Detect best for?

Composition control, pose control, and layout matching: use a reference image's edges to control composition and generate new content with the same layout. If your project is specifically commercial-print or hero-creative work, you may want to pair it with one of FluxNote's premium models (FLUX 2 Max, Imagen 4 Fast, Seedream 4) for the final render.

Does Edge Detect add a watermark?

No. FluxNote does not watermark any output, on any plan, including free. Every Edge Detect image exports as a clean full-resolution PNG that you can use commercially without attribution.

Can I use Edge Detect commercially?

Yes. FluxNote grants commercial usage rights on outputs from every model including Edge Detect. You can use the images in ads, products, books, and merchandise. As with any AI generation, double-check that the prompt and output don't reproduce protected likenesses or trademarks.

What input does Edge Detect need?

Reference image + text prompt. Edit models need an existing image to operate on, plus a text instruction describing the change.

Can I turn an Edge Detect image into a video?

Yes — every Edge Detect output has a one-click animate button. FluxNote will generate a 5–10 second video clip from your still using one of the AI video models (Runway Gen-4, Kling, Sora 2 Pro, Veo 3, Seedance). Pick the model from the animate dropdown; cost starts at 6 credits per clip.

What resolutions can Edge Detect export?

Up to 1536×1536. Free plan exports the same resolution as paid plans — there is no quality gating. If you need print-grade resolution, run the result through Nucleus afterwards to upscale 2× or 4× without losing detail.

Is my data private?

Yes. Reference photos and prompts are visible only to your account. We do not sell prompts, do not train on your inputs, and you can delete generated images at any time from your library. Identity-locked models (FLUX PuLID) embed reference photos privately and you can purge embeddings on request.

Edge Detect vs other AI image models — when should I pick it?

Use Edge Detect when composition control, pose control, or layout matching matters most. Compared to Remix: Remix varies the whole image, while Edge Detect locks composition. For an everyday default, most creators pick FLUX Schnell or Gemini Flash 2.5; for hero work, FLUX 2 Max or Seedream 4; for identity, FLUX PuLID; for targeted edits, FLUX Kontext Edit.

Do I need to install anything?

No. FluxNote runs in any modern browser — Chrome, Firefox, Safari, Edge — on desktop and mobile. There is no install, no GPU requirement, and no waiting in line. Your library, prompts, and outputs sync across devices automatically.

More Edit & Refine models

Same category, different strengths.

Other models on FluxNote

Browse all 19


Your first viral video is 90 seconds away.

Type a topic. AI writes, voices, captions, and edits. You download a 1080p video — yours to post anywhere.

No credit card · No watermark · Cancel anytime

Over 100,000 creators already use it, and they won't tell you it's their secret.