Product-led growth · PushOwl × Brevo

Turn first use into first win.

Simplified onboarding that reduces confusion, ships a first win, and builds trust across email, SMS, and push.

Trial architecture · Pricing paths · AI bundles · Managed services

Order-aware offers

Routed every entry point through order-bucket logic so each store sees the right offer first.

AI creatives for first win

Prefilled copy, images, and guardrails so the first campaign ships with confidence.

Discounted trial fallback

A quiet safety net that appears only after stall signals - never the headline.

Managed setup plan

People-led outcomes with a defined scope, tight SLA, and capped weekly slots.

+35%

Revenue lift

+23%

Top-funnel lift

+15%

Conversion rate

PushOwl trial system collage

Impact

The redesigned trial system drove significant growth across the entire funnel

+35%

Revenue lift

Segmented offers and AI creatives unlocked higher-value conversions

+42%

Day-2 campaign sends

AI templates removed blank-page fear for new users

$350

Managed setup ARPU

Capacity signals drove urgency for high-ARPU segments

+23%

Top-funnel lift

More qualified trial starts

+15%

Conversion rate

Trial-to-paid improvement

Order-aware trials that ship a first win

I rebuilt PushOwl’s PLG surface so merchants see the finished experience first - the UI shows the right plan, AI help, or people-led path before we ever talk pricing.

  • Routed every entry point through order buckets so a 50-order store never reads 10k-order copy or discounts.
  • Shipped AI creatives and quiet fallbacks that manufacture the first campaign, then offer discounted trials or managed setup only when a user stalls.
Context & Problem

Product-driven revenue had stalled

About PushOwl

PushOwl is a Shopify app (now part of Brevo) that helps e-commerce stores send push notifications, email, and SMS campaigns. Merchants install it, connect their store, and use it to recover abandoned carts, announce sales, and re-engage customers.

Pricing Structure

  • Free tier: Basic push notifications (subscriber limits)
  • Business tier ($19–$99/mo): Email + SMS + push, automation, AI features
  • Plus tier ($199+/mo): Premium features, priority support
  • Managed setup: High-touch onboarding service for enterprise stores

The Problem

Despite steady Shopify app store traffic, product-driven revenue growth had plateaued. Trial starts were flat, activation rates were low, and refund requests were climbing.

"Merchants weren't afraid of price, they were afraid of wasting time."

Key symptoms:

  • 60% drop-off between trial start and first campaign sent
  • Same trial offer shown to 50-order stores and 10,000-order brands
  • Blank-page paralysis: Users installed, then stalled at campaign creation
  • Money anxiety: Unclear billing terms led to support tickets and refunds
  • No safety net: Hesitant users left without trying alternative paths

Top of funnel working, conversion broken

App Store Traffic → Install App → Start Trial → (60% drop) → Send Campaign → Paid

Fix: Trial → Activation → Conversion

Goal

Increase activation and paid conversion while protecting trial quality and trust and reducing refund anxiety.

Success metrics: Trial starts ↑, time-to-first-campaign ↓, trial value per signup steady, refund questions ↓

The core hypothesis: different store sizes have different needs, fears, and time constraints. A one-size-fits-all trial was leaving revenue on the table.

Research & Discovery

Finding where trials broke down

Before designing solutions, I spent 3 weeks analyzing Intercom data, support tickets, PostHog funnels, and Metabase cohorts to understand exactly where and why trials were failing.

Investigation methods

  • Intercom conversations: Reviewed 200+ support chats tagged with "trial," "billing," and "confused"
  • PostHog funnels: Tracked drop-off points from install → trial → first campaign → paid
  • Metabase cohorts: Analyzed trial-to-paid conversion by order volume segments
  • Support team interviews: Synthesized patterns from CS team's daily interactions

What users were telling us & support ticket patterns

"

I installed the trial but I'm scared to get charged if I forget to cancel...

S
Sarah Chen
Small store owner
"

The pricing changed after I signed up? Or did I miss something?

M
Mike Rodriguez
Support chat
"

Does the trial include all features or just basic ones?

J
Jessica Park
Pre-install
"

I sent one campaign but now it's asking me to upgrade?

A
Alex Thompson
First-time user
"

What happens if I go over my order limit?

P
Priya Sharma
Growing merchant
Key friction points from Intercom conversations
Support conversations tagged by theme
Support team's qualitative insights

Drop-off analysis

PostHog funnel showing where users dropped off between trial start and first campaign

Key finding: 60% of trial users never sent a campaign. The biggest drop-off happened between "trial started" and "first campaign created" - not at pricing.

Cohort behavior patterns

What the numbers revealed

  • 0-100 order stores: Highest drop-off. Needed immediate value (AI templates) to overcome blank-page paralysis.
  • 101-2k order stores: Price-sensitive but willing to try. Needed clear trial terms and fallback options.
  • 2k-10k order stores: Feature-focused. Wanted to explore AI capabilities before committing.
  • 10k+ order stores: Time-poor. Converted better with managed setup offers.

The starting point: What needed to change

Old trial UI showing one-size-fits-all approach
❌ Original trial UI: Same offer for everyone, regardless of business size or needs

⚠️ What was broken

  • One-size-fits-all messaging
  • No fallback for hesitant users
  • Hidden billing terms → refunds
  • No AI help for blank-page fear

What we fixed

  • Bucket-aware routing & copy
  • Smart fallback offers when stalled
  • Clear money-anxiety copy upfront
  • AI templates for first campaign

Key insight: A 50-order Shopify store and a 10,000-order brand have completely different needs, fears, and time constraints. Treating them the same = lost revenue.

Make the trial feel safe

Every touchpoint reminds users they stay in control while we chase a first win.

Trust scaffolding features showing money clarity, cancel path, and safety guardrails
Building trust through transparent controls and safety defaults
💰

Money clarity

Charged only after 14 days through Shopify. Cancel anytime - link visible on every page.

↩️

Cancel path

Cancel in two clicks with confirmation before billing.

🛡️

Safety defaults

Frequency caps, subscriber limits, and review-before-send built in.

Key clarity features:

  • Trial countdown visible: Users see exactly how many days remain
  • Shopify billing badge: "Billed through Shopify" reduces payment anxiety
  • Preview everything: Test sends, draft previews, and sandbox modes available
Segmentation & Strategy

Order-bucket segmentation emerged as the key insight

The data showed clear patterns: stores with different order volumes had completely different behaviors, needs, and price sensitivity. A 50-order store starting out needed hand-holding and quick wins. A 10,000-order brand needed efficiency and outcomes.

Why order buckets?

After analyzing Metabase cohorts by various dimensions (store age, traffic, revenue, category), Shopify order count emerged as the strongest predictor of trial behavior and conversion patterns. It correlated with:

  • Business maturity: How comfortable they are with marketing tools
  • Time availability: Whether they need self-serve or managed help
  • Price sensitivity: Willingness to pay for premium features
  • Activation patterns: Speed to first campaign and conversion rates

The four segments

0–100 orders

Profile: New stores, first-time founders

Behavior: Highest drop-off, blank-page paralysis

Need: Immediate value, AI templates, hand-holding

101–2,000 orders

Profile: Growing stores, price-conscious

Behavior: Trial-curious but hesitant

Need: Clear trial terms, fallback options

2,001–10,000 orders

Profile: Established stores, feature-focused

Behavior: Explore features before committing

Need: AI capabilities, self-serve with hints

10,000+ orders

Profile: Enterprise brands, time-poor

Behavior: Want outcomes, not knobs

Need: Managed setup, tight SLA, premium tier

Cut-offs chosen based on conversion rate inflection points in Metabase data

Decision-making & routing logic

Bucket-level routing: How we determine the right path

User lands on PushOwl → Check Shopify order count

  • 0–100 orders (high-touch needed) - Primary: AI Bundle trial · Secondary: Discounted fallback
  • 101–2k orders (price-sensitive) - Primary: Standard trial · Secondary: Discounted fallback
  • 2k–10k orders (feature-focused) - Primary: Standard + AI · Secondary: Managed teaser
  • 10k+ orders (time-poor) - Primary: Managed setup · Secondary: Self-serve option
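The routing logic above amounts to a small lookup on order count. A minimal sketch, assuming a hypothetical `route_offer` helper - the bucket boundaries and offer names come from this case study, but the function and data shapes are illustrative, not PushOwl's actual code:

```python
# Sketch of the bucket-level routing described above.
# Bucket cut-offs (100 / 2k / 10k) match the segmentation in this case study;
# the function name and return shape are hypothetical.

def route_offer(order_count: int) -> dict:
    """Map a store's Shopify order count to a primary and secondary offer."""
    if order_count <= 100:
        return {"bucket": "0-100", "primary": "ai_bundle_trial",
                "secondary": "discounted_fallback"}
    if order_count <= 2_000:
        return {"bucket": "101-2k", "primary": "standard_trial",
                "secondary": "discounted_fallback"}
    if order_count <= 10_000:
        return {"bucket": "2k-10k", "primary": "standard_trial_plus_ai",
                "secondary": "managed_teaser"}
    return {"bucket": "10k+", "primary": "managed_setup",
            "secondary": "self_serve_trial"}

print(route_offer(50)["primary"])      # ai_bundle_trial
print(route_offer(15_000)["primary"])  # managed_setup
```

Keeping the routing in one place like this made it cheap to test new cut-offs: moving a boundary is a one-line change, and every entry point (modal, topbar, pricing page) calls the same function.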

Three core hypotheses to test

Based on discovery findings, I formed three testable hypotheses. Each became a distinct offer in the system:

H1: AI creatives remove blank-page fear

Hypothesis: Pre-filled campaign templates with AI-generated copy, images, and safe defaults will increase Day-2 campaign sends and reduce drop-off in the 0-100 order segment.

Measure: % of trials sending first campaign within 48 hours

H2: Segmented offers increase trial starts

Hypothesis: Routing each order bucket to a tailored offer (AI bundle for small stores, managed setup for large stores) will increase trial starts without degrading trial quality.

Measure: Trial starts by cohort, trial value per signup

H3: Discounted fallbacks reduce anxiety

Hypothesis: Offering a discounted trial only after stall signals (modal closed, no progress after 3 days) will recover hesitant users without conditioning everyone to wait for discounts.

Measure: Conversion rate of stalled users, discount exposure % per cohort

Growth through intent spikes

Igniting growth at every stage

Understanding when users show buying intent allowed us to surface the right offer at the right moment - not too early, not too late.

Key intent signals we tracked

📈

First campaign sent

Signal: User completed their first push notification
Action: Show value receipt ("You reached X subscribers") + upsell to paid plan
Timing: Within 24 hours of send

⏱️

Trial day 10+ active use

Signal: User logged in 5+ times, sent 3+ campaigns
Action: In-app modal with plan comparison + "Continue with paid plan"
Timing: Day 10-12 of trial

💡

Draft created, not sent

Signal: Campaign drafted but sitting idle >24 hours
Action: Completion nudge ("You're 90% there") + template suggestions
Timing: 24-48 hours after draft save

⚠️

Trial stalled (Day 5+)

Signal: No campaigns sent, low engagement, approaching end of trial
Action: Discounted trial fallback or managed setup teaser
Timing: Day 5+ with no progress

Key insight: Intent isn't binary. We built a spectrum - from high-intent (already shipping campaigns) to stalled (need a nudge or fallback). Each segment got a different intervention at a different time.
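The spectrum of signals and interventions above can be sketched as a priority-ordered check. This is a hedged illustration: the thresholds (5+ logins, 3+ campaigns, day 5+ stall, 24-hour idle draft) mirror the signals listed in this section, but the `TrialState` shape and `pick_intervention` helper are hypothetical:

```python
from dataclasses import dataclass

# Illustrative model of the intent-signal spectrum described above.
# Thresholds mirror the case study; names and structure are assumptions.

@dataclass
class TrialState:
    day: int                 # days since trial start
    logins: int
    campaigns_sent: int
    idle_draft_hours: float  # hours the newest draft has sat unsent (0 if none)

def pick_intervention(s: TrialState) -> str:
    """Return the highest-intent intervention that applies, checked in order."""
    if s.campaigns_sent >= 1 and s.day <= 1:
        return "value_receipt_upsell"        # first campaign sent (within ~24h)
    if s.day >= 10 and s.logins >= 5 and s.campaigns_sent >= 3:
        return "plan_comparison_modal"       # day 10+ active use
    if s.idle_draft_hours >= 24:
        return "completion_nudge"            # draft created, not sent
    if s.day >= 5 and s.campaigns_sent == 0:
        return "discount_or_managed_teaser"  # trial stalled
    return "none"
```

The ordering encodes the "spectrum" idea: high-intent signals win over stall signals, so an active user never sees a discount nudge just because a draft sat idle for a day.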

Where offers surface (Nudges & entry points)

Each surface is a gentle nudge layered on top of the trust work - not a pop-up trap.

Different entry points and nudge surfaces including nav chip, modal, and BFCM tab
Contextual surfaces that meet intent without interrupting flow
01

Pick up where you left off

The nav chip shifts between default, "discount available," and "resume your draft" so stores can jump back in without hunting.

02

Unblock the moment you try

A lightweight modal appears when someone explores AI creatives or automations without starting the trial, clarifying cost and next step.

03

Ship something today

A seasonal tab bundles countdown, AI templates, and the managed card with capacity so "I'll do it later" becomes "I shipped today."

Experiments & Iterations

What worked, what didn't

Each hypothesis was tested through controlled experiments. Some validated quickly, others required iteration, and one failed completely - leading to a better approach.

EXPERIMENT 1

AI Creative Templates

AI creative templates interface showing pre-filled campaigns
AI-powered templates that reduced blank-page paralysis by 42%

Variant Tested

Pre-filled campaign with AI copy, product images, safe send time vs. blank campaign builder

Metric Measured

% sending first campaign within 48 hours (0-100 order cohort)

Result

✓ +42% Day-2 sends

What changed: Made AI bundle the primary offer for 0-100 order stores. Added "Edit before sending" safety copy to reduce send anxiety.

EXPERIMENT 2

Discount Timing (Failed)

Discounted trial flow showing timing and placement
Showing discounts too early conditioned users to wait - trial value dropped 18%

Variant Tested

Show discounted trial in modal footer immediately vs. only after stall signal

Metric Measured

Trial starts, trial value per signup

Result

❌ Trial value dropped 18%

Why it failed:

Showing discount upfront conditioned users to wait. Standard-tier users saw it and hesitated, degrading overall trial quality.

What we changed:

Rolled back. Only surface discount after clear stall signals (modal closed without starting, 3+ days no progress). Cap exposure to once per user per month.

EXPERIMENT 3

Managed Setup Capacity Signal

Managed setup service interface with capacity indicator
Capacity signals drove +67% bookings for high-value stores

Variant Tested

Show "3 slots left this week" capacity indicator vs. generic "Book consultation" CTA

Metric Measured

Consultation booking rate (10k+ order cohort)

Result

✓ +67% bookings

What changed: Added dynamic capacity ribbon ("2 slots left") with real team availability. Scarcity drove urgency without feeling manipulative since it was truthful.

Managed setup unlocked high-ARPU segments - this wasn't just an experiment; it was a strategic premium lane for 10k+ stores who prefer outcomes over tools.

EXPERIMENT 4

Segment-Specific Messaging

Variant Tested

Tailored copy per bucket ("Launch your first campaign" for 0-100 vs. "Scale your campaigns" for 10k+) vs. generic copy

Metric Measured

Trial start rate by cohort

Result

✓ +19% overall lift

What changed: Created 4 copy variants matched to each segment's maturity level. Small stores saw "first win" language, large stores saw "scale" language.

Key learning: Not all hypotheses validated. The discount timing experiment taught us that visibility matters as much as the offer itself. Showing safety nets too early degraded perceived value.

Cohorts

Increase trial avenues by order bucket

Four cohorts, one system. Each store gets a primary offer and a secondary path tuned to their reality.

0–100 orders

Promise: Launch your first campaign today with AI creatives.

Primary: AI Bundle trial with outcome-first language.

Secondary: Fair trial leading to a discounted fallback if you pause.

Watch: First campaign inside two days, unsubscribe safety, support load.

101–2,000 orders

Promise: Standard trial first, discounted only if you stall.

Primary: Standard trial path.

Secondary: Discounted trial fallback when hesitation shows.

Watch: Trial value per signup, steady revenue quality.

2,001–10,000 orders

Promise: Self-serve plus AI assist; managed teaser after a win.

Primary: Standard trial with AI creatives surfaced inside "Create."

Secondary: Managed teaser that appears once a first win ships.

Watch: Day-2 ship rate and healthy escalations to managed.

10,000+ orders

Promise: People-led setup with a tight SLA.

Primary: Managed setup plan near our upper tier.

Secondary: Self-serve trial stays available but not primary.

Watch: Consult bookings, plan upgrades, and same-day time-to-value.

How the system flows

  1. PLG Ideas Doc: buckets and subdivided experiments as the starting point. Design → Test → Run Experiment.
  2. Analyzing Experiment Data: reviewing experiment results to inform the next steps.

A structured approach: ideate, execute, and analyze to continuously improve the PLG strategy.

Cross-Functional Collaboration

Building with engineering, product, and CS

This wasn't a solo design effort. I worked closely with engineering, product managers, customer success, and leadership to ship a system that balanced user needs with technical constraints and business goals.

Collaboration & rollout

This was a cross-functional effort with Engineering, Product, Customer Success, and Leadership working together on routing logic, success metrics, and phased rollout strategy.

Cross-functional documentation and collaboration (numbers blurred for confidentiality)
Phased rollout plan by cohort with validation checkpoints (details blurred)

Launch sequence

  1. Phase 1 (Week 1-2): AI creative bundle for 0-100 order cohort only (highest need, lowest risk)
  2. Phase 2 (Week 3-4): Bucket-aware routing for all cohorts with standard trial path
  3. Phase 3 (Week 5-6): Discounted fallback triggers added for stalled users (A/B tested)
  4. Phase 4 (Week 7+): Managed setup offer for 10k+ cohort with capacity controls

Why phased? Each phase validated key assumptions before expanding scope. If AI templates failed in Phase 1, we'd pivot before building the full routing system. This de-risked the investment and allowed engineering to work incrementally.

Premium Lane

Managed setup unlocked high-ARPU segments

Managed setup wasn't an afterthought - it was a strategic decision by leadership to capture high-value stores (10k+ orders) who prefer outcomes over self-serve tools.

Why managed setup?

Data showed that large stores (10,000+ orders) had fundamentally different needs:

  • Time constraints: Busy teams couldn't afford trial-and-error
  • Higher willingness to pay: Premium pricing acceptable for guaranteed outcomes
  • Complex needs: Multi-channel campaigns, integrations, custom automations
  • Retention potential: Once set up correctly, they stayed and expanded

Leadership's Strategic Intent

Leadership wanted to unlock higher ARPU and retention from enterprise Shopify brands. The managed setup service was positioned as a premium lane to:

  1. Increase average revenue per account by $200–400/month
  2. Reduce churn in the 10k+ cohort (proper setup = stickier customers)
  3. Differentiate from competitors who only offered self-serve

My design ownership

I owned the full experience design for managed setup:

Sign-up flow

Designed the consultation booking UI with capacity indicators, service scope preview, and calendar integration

Scope definition

Defined what's included: core automations, first campaign setup, KPI review, 2-week handover plan

Trust signals

Added "3 slots left this week" capacity ribbon, tight SLA copy ("same-day first call"), client testimonials

Triggers & surfaces

Showed managed offer in BFCM tab, post-install banner for 10k+ stores, and after first-win for 2k-10k stores

Managed service offering and premium positioning
Managed service positioned as premium lane
Managed setup booking interface with capacity indicator
Booking flow with dynamic capacity signal

Connection to segmentation strategy

Managed setup wasn't a random add-on - it was deeply integrated into the bucket-aware system:

The upgrade path:

  1. 10k+ stores saw managed setup as primary offer (not fallback)
  2. 2k-10k stores saw managed teaser after first win ("You shipped one campaign - want us to set up your full automation?")
  3. Booking flow checked real team capacity via internal dashboard
  4. Success tracked: consultation bookings, plan upgrades, setup completion rate
+67%

Booking rate increase

After capacity signal added

$350

Average ARPU lift

Managed customers vs. self-serve

-40%

Churn reduction

In 10k+ cohort with managed setup

By positioning managed setup as a premium lane for time-poor, high-value stores, we unlocked a new revenue stream while keeping self-serve accessible for smaller merchants.

Measurement & instrumentation

Prove value, guard quality

Every offer was instrumented by order bucket, surface, and offer type. We tracked trial starts, time-to-first-campaign, trial value per signup, and qualitative support themes.

Before vs. After: What changed

❌ Before

  • One-size-fits-all trial - same offer for 50-order and 10k-order stores
  • 60% drop-off between trial start and first campaign
  • Blank page paralysis - users didn't know where to start
  • High refund requests due to unclear billing terms
  • No fallback path - hesitant users just left

✓ After

  • Bucket-aware routing - right offer for each store size
  • Trial starts ↑ in 0-100 and 101-2k buckets
  • AI templates shipped campaigns within 2 days
  • Refund questions ↓ after clear money-anxiety copy
  • Smart fallbacks - discounted trials when users stall

Trial value per signup remained steady - we grew volume without diluting quality

Instrumentation

Segment trials by order bucket × UTM × offer type (primary, fallback, AI, managed) × surface (modal, topbar, feature, pricing, seasonal).
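That segmentation maps directly onto event properties. A sketch of how a trial event might be tagged so funnels can be cut by any combination of those dimensions - the property names and `trial_event` builder are illustrative, not PushOwl's actual schema:

```python
# Sketch of the instrumentation scheme: every trial event carries the four
# dimensions listed above. Names and shapes are assumptions for illustration.

VALID_SURFACES = {"modal", "topbar", "feature", "pricing", "seasonal"}
VALID_OFFERS = {"primary", "fallback", "ai", "managed"}

def trial_event(name: str, order_bucket: str, utm_source: str,
                offer_type: str, surface: str) -> dict:
    """Build an analytics event payload tagged with all four dimensions."""
    assert offer_type in VALID_OFFERS and surface in VALID_SURFACES
    return {
        "event": name,
        "properties": {
            "order_bucket": order_bucket,  # e.g. "0-100", "10k+"
            "utm_source": utm_source,      # acquisition channel
            "offer_type": offer_type,      # primary / fallback / ai / managed
            "surface": surface,            # where the offer was shown
        },
    }

event = trial_event("trial_started", "0-100", "shopify_app_store",
                    "ai", "modal")
# the payload would then be sent to the analytics tool (PostHog here)
```

Validating the dimension values at the call site keeps dashboards clean: a typo'd surface name fails loudly in code instead of silently creating a new segment in PostHog.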

What moved

Trial starts moved up meaningfully in the two smallest buckets and more trials shipped a campaign within two days.

Quality holds

Trial value per signup held steady while refund questions dropped after money copy changes.

Dashboards

PostHog for experiments, Metabase for revenue cohorts, Intercom for support-conversation tags, and Canny for feedback themes.

What worked / what broke

Keep the wins, own the misses

Worked

  • Fallback as a safety net lifted starts without conditioning every cohort to wait.
  • AI Bundle turned blank-page fear into same-day shipping.
  • Managed lane gave big stores a certainty path with a clear SLA and capacity caps.

Broke / Learned

  • One path for all cohorts created friction - order buckets fixed it.
  • Hiding billing details spiked refunds; explicit billing copy brought them down.
  • Auto-enable without safe defaults flooded support; paired guardrails kept trust.
What's next

What I'd ship next

Show what you achieved by Day-2 (subs, sends, est. lift)

  • A live module with clear caveats around assisted revenue.

Turn on a safe starter preset during trial

  • Auto-enable essentials with a single preset so stores feel progress without risking their list.

Adjust which offer you see by season and store type

  • Run an offer matrix that blends seasonality, vertical, and order bucket to keep routing sharp.

Managed “audit to plan” flow

  • Templatize scope, slot scheduling, and follow-through so ops keep pace with demand.

Honesty

Risks I'm owning

  • Discount conditioning: Guarded by exposure caps, trial value monitoring, and LTV deltas.
  • Support load: AI + auto-enable paired with safe defaults and in-product guidance.
  • Seasonality bias: Maintain evergreen templates and adjust measurement windows.
  • Attribution honesty: Show assumptions; no over-claiming trial revenue.
Learnings & Reflection

What I learned and how it applies forward

"PLG isn't about free trials - it's about removing friction until the first success feels inevitable."

Core takeaway

A three-offer system (AI creatives, discounted trial fallback, managed setup) routed by order bucket and activated through lightweight nudges made PushOwl trials feel trustworthy and fast to value - lifting trial starts, first wins, and paid conversion without leaking trust or margin.

The insight that mattered most:

Segmentation isn't just about pricing tiers - it's about understanding that different users have different fears, time constraints, and definitions of "value." Treating a 50-order store the same as a 10,000-order brand was leaving money on the table.

Key learnings

  • First wins beat feature lists: Users don't want to learn your tool - they want to ship something fast. AI templates moved the needle more than any pricing experiment.
  • Visibility timing matters: Showing discounts too early degraded perceived value. Safety nets work best when they appear after stall signals, not upfront.
  • Premium lanes unlock enterprise value: Managed setup wasn't a nice-to-have - it was strategic. High-ARPU customers exist, but they need a different path.
  • Experiments fail, and that's data: Not every hypothesis validated. The discount timing failure taught us as much as the AI creative success.

The best growth doesn't come from aggressive tactics - it comes from designing systems that help users win faster than they expected.

Business Impact

The results: Massive revenue growth

The bucket-aware trial system drove significant gains across all cohorts while maintaining quality.

Revenue growth chart showing dramatic increase after implementing bucket-aware trials
Metabase revenue dashboard: Before vs. After implementing the new trial system
CEO shoutout celebrating the success of the bucket-aware trial system