

Insense Pro

Type
SaaS Platform
Role
Senior Product Designer
Platform
WebApp
Design Tools
Figma, Miro, User testing
Designing an AI-powered workflow to fix and scale creator-submitted video content.
As Insense grew, brands were producing more creator content than ever — but scaling quality proved harder than scaling volume.
Creators delivered videos quickly, but the output wasn’t always campaign-ready. Some videos needed small fixes, others required structural changes, and almost all needed multiple hook variations to perform across platforms.
The only way to solve this was manual: asking creators for revisions, coordinating feedback, or commissioning entirely new videos. Each iteration added time, cost, and friction — slowing campaigns and frustrating both brands and creators.
After speaking with brand teams and reviewing campaign workflows, it became clear that the problem wasn’t access to creators. It was the lack of a fast, reliable way to adapt creator-submitted videos once they were delivered.
The question became: how might we help brands fix and scale creator video content without restarting the entire production process?

“We were spending more time fixing creator videos than launching campaigns.”
What makes scaling creator video content so difficult?
To understand why brands struggled to scale creator video content, I spoke with brand managers and growth teams running ongoing creator campaigns on Insense.
While their goals varied, a consistent pattern emerged: producing content wasn’t the hard part — adapting it for performance was.
Brands described spending significant time reviewing creator submissions, requesting fixes, and manually coordinating revisions. For teams managing multiple campaigns, this process quickly became a bottleneck.
“The videos are good, but rarely ready to launch without changes.”

“We need multiple hooks, but asking creators for revisions slows everything down.”

“Fixing small issues shouldn’t take another production cycle.”

“We spend more time coordinating fixes than testing creatives.”

These conversations surfaced four key insights:

Creator videos often need adaptation, not complete re-production

Manual fixes and revisions slow down campaign launches

Generating multiple hook variations is essential for performance testing

Brands want flexibility without adding more work for creators
These challenges weren’t isolated edge cases — they reflected a systemic issue in how creator content was adapted once it entered a campaign workflow.

Lexi
“I need to move fast and test what works. Every delay means missed performance opportunities.”
Growth marketing managers are responsible for launching and optimising performance campaigns across paid social channels. They work under tight timelines, balancing speed, experimentation, and measurable results. Their success is judged by how quickly they can test, iterate, and scale winning creatives.
Goals
Launch campaigns quickly without waiting on additional production cycles
Maximise ROI by extracting more value from existing creator content
Frustrations
Creator videos often require fixes before they’re usable in campaigns
Small content issues can block launches, even when the core video is strong
Producing variations manually adds coordination overhead

Alex
“The content looks promising, but it still needs work before it represents our brand.”
Brand and creative leads are responsible for maintaining visual consistency and quality across all marketing outputs. They review creator content with a critical eye, ensuring it aligns with brand guidelines and platform standards. Their challenge is balancing creative control with the speed required by modern campaign cycles.
Goals
Ensure creator content meets brand and quality standards
Adapt videos for different platforms without redoing production
Reduce repetitive review and manual rework
Frustrations
Fixing small visual or structural issues takes disproportionate time
Inconsistent quality across creators increases review effort
Restarting production for minor changes is costly and inefficient
Although their priorities differed, both groups shared the same underlying problem: adapting creator-submitted videos was too slow, too manual, and difficult to scale.
Personas helped surface these patterns, but more importantly, they pointed to a single opportunity — improving what happens after creator content is delivered.
Figuring out what to build
To define what the product should achieve, I focused on the most impactful problem shared across both personas.
Brands didn’t need more creators or more content — they needed a faster way to adapt existing videos for real campaign use.

I need to move fast and test what works. Every delay costs performance.
Defining the structure of the experience
With the problem clearly defined, the next step was to shape an experience that fit naturally into how brands already worked.
The goal wasn’t to introduce a new, complex tool — it was to support a moment that already existed in the workflow: reviewing and preparing creator videos for launch.
Establishing the primary user flow
Through research and stakeholder discussions, it became clear that the most important action brands needed to take was simple: take an existing creator video and make it campaign-ready as quickly as possible.
This meant the experience had to prioritise:
Speed over configuration
Clear, predictable outcomes
Minimal context switching

How might we help brands adapt creator videos quickly and confidently?
With a “no bad ideas” mindset, I explored multiple ways to help users make confident decisions. Starting loose allowed me to test different approaches before committing to a single flow.
The experience starts from an existing creator-submitted video. This ensured the AI feature felt like a continuation of the campaign workflow, not a separate system users had to learn.
Early wireframes explored how to help users express what they wanted to fix before generating anything.
Instead of exposing AI controls upfront, the flow focused on intent — improving hooks, fixing issues, or creating variations — keeping the decision lightweight and clear.
To avoid overwhelming users, the flow was structured as a guided sequence rather than an open canvas. This allowed users to move step by step, with the AI responding to specific needs instead of producing unpredictable results.





The most important action for teams using Insense was reviewing and improving creator-submitted videos.
This became the primary flow around which the AI feature was structured.
I mapped a clear end-to-end path, from selecting a creator video to generating usable variations, focusing on the most common decisions users needed to make and defining a “happy path” that let teams move quickly without friction.

Iterating on the solution
This flow gave us a solid foundation to build on. I created a lightweight prototype of the AI video generation experience and shared it with internal stakeholders, including product, growth, and customer success teams.
Early feedback highlighted two key needs:

Clients wanted confidence and control when using AI on creator-submitted content

Speed mattered, but not at the cost of clarity or brand safety
Based on this, I refined the flow to make the AI’s role more transparent — clearly showing what would be generated, what could be edited, and how variations would be used across campaigns.

How would users behave?
Once the prototype was ready, we tested it with a small group of internal users and select clients to understand how easily they could:

Review creator-submitted videos

Identify issues (framing, hooks, pacing)

Generate and compare AI-powered variations
The goal was to validate whether the feature reduced manual effort without introducing uncertainty or friction into existing workflows.


Testing early was critical — this feature needed to feel helpful, not risky.
Key insights from testing
From early testing and feedback sessions, a few clear patterns emerged:

Users valued quick previews of AI-generated variations before committing

Clear labeling and explanations increased trust in the AI output

Most users preferred starting from a single video and iterating, rather than generating from scratch
These insights helped us prioritise clarity and speed over advanced configuration in the initial release.

The feedback we received on the new approach.
Initial user feedback was positive, validating the direction and supporting progression to large-scale testing.
