StarChild is a mobile app that helps parents better understand their children through astrology-based insights, translated into warm, emotionally grounded guidance.
Version 2.0 was a major rebuild. We shifted StarChild from manual editorial insights (written by the content team in Sanity) to a data-driven, AI-generated insight system grounded in deeper astrological calculations. My focus was designing the UX layer that made dynamic content feel calm, trustworthy, and consistent across the app — even when content was still generating, unavailable, or erroring.
My Role
UX/UI Designer
Project Manager
QA Lead
Team
3 Engineers
3 Designers
Duration
6+ Months
Tools
Figma
Payload
Sanity
Skills
Product Strategy
Quality Assurance
CMS
Product Design
Where we started (v1)
In V1, insights were manually written and published through Sanity by the content team. The experience worked, but it had limits: content creation didn't scale, updates were slow, and the insights weren't as tightly connected to deeper astrological calculations as the product vision required.
Scope
I did not implement the backend calculations, but I owned the experience behavior that made the system usable: the insight "state system" across the app, back-in-time browsing behaviors and states, notification and re-entry logic tied to generation events, consistent hierarchy and readability for long-form insights, and QA + regression coverage so V2 shipped stable.
What Changed in V2
V2 introduced a robust AI content pipeline backed by stronger calculations and a more accurate data foundation (including moving toward Divine API for higher accuracy). This unlocked scale and personalization, but it also changed the UX everywhere.
Impact at a glance
4 States
Designed a state system for living content
Defined and implemented consistent UX patterns for Generating, Ready, Empty, and Error/Retry states so users always understood what was happening.
Back-in-Time
Shipped without confusion
Designed the back-in-time experience with clear availability rules, loading animations, empty states, and retry states so browsing past insights felt predictable.
Re-entry
Notification + re-entry logic
Defined notification triggers and re-entry landing behavior so returns felt supportive, not chaotic.
Stability
Improved stability and velocity
V2 shipped with a noticeably improved crash rate and smoother performance, and the content team's workload dropped significantly.
The Problem
The app now had to support
Generation delays
Missing insight availability — "there is nothing to read"
Error states — "hmm, something went wrong" + retry logic
Content refreshing at multiple entry points
Both page-level and card-level loading
Users coming and going mid-generation
Meaningful re-entry through notifications
This wasn't a "feature" to design. It was a shift in the nature of the product.
Guardrails That Shaped Every Decision
• Insights could generate at different speeds depending on system load
• Some dates/content simply weren't available
• Multiple triggers could kick off generation (onboarding, add a person, profile changes, back-in-time browsing)
• Migration from Sanity to Payload happening across the broader evolution
• Existing design system was already established visually — focus was behavior, depth, and consistency
Predictable beats clever
If content isn't ready, say it. If it failed, explain it. If it's empty, make it clear.
Trust lives in microcopy + states
"Hmm" vs "Error." "Try again" vs "Reload." These moments determine whether the product feels premium.
The experience should feel simple even when the system behind it is not.
Generating should look and behave like generating everywhere, not feel random.
Role & Collaboration
As engineers refined the backend pipeline and event logic, I translated that reality into user-facing behavior: what the user sees, what the user can do, and how the app recovers when something fails.
• UX rules for insight generation states across the app
• Back-in-time UX and interaction logic
• Notification + re-entry flows tied to generation events
• Edge-case mapping (missing data, still loading, erroring)
• Working with another designer + frontend within the established design system
• QA ownership, regression scripts, cross-device testing
I didn't implement the backend calculations, but I acted as the bridge between design and engineering as the system logic became clearer and more complex, working through those realities in close collaboration across many meetings.
Solution
Three Connected Layers
After wireframes and early validation, I moved into high-fidelity designs to refine hierarchy, tone, and trust signals. The focus was making insight content scannable, guiding parents through long-form reading, and keeping every state of the experience clear at the right moments.
01
Insight State System
A consistent set of states and UI patterns for dynamic content across every surface of the app.
02
Back-in-Time Experience
A date-based browsing feature with clear rules for availability, loading, and recovery.
03
Notifications + Re-entry
A return journey that makes asynchronous content feel intentional and supportive.
Deep Dive
Insight State System
In V1, the product mostly assumed insights existed because they were manually published. In V2, insights became dependent on generation and availability. We needed a consistent visual and copy language for every possible state.
Generating
Content is being created. The app feels alive, not frozen.
Ready
Insight is available and rendered with full hierarchy and readability.
Empty
"There is nothing to read." Clear, honest, not alarming.
Error + Retry
"Hmm, something went wrong." Warm tone with a clear recovery path.
What we designed
Consistent visual and copy patterns for each state
Clear hierarchy so long-form insights remained readable
"Page grabber" patterns when insights were first generating (so the app felt alive, not frozen)
Card-level loading animations (so partial readiness didn't feel broken)
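The four-state model above can be sketched as a discriminated union that every surface resolves the same way. This is a minimal TypeScript sketch; the names (`InsightState`, `resolveInsightView`) are illustrative assumptions, not from the actual codebase.

```typescript
// Hypothetical model of the four insight states described above.
type InsightState =
  | { kind: "generating"; startedAt: Date }   // content is being created
  | { kind: "ready"; body: string }           // insight available, fully rendered
  | { kind: "empty" }                         // nothing to read for this date
  | { kind: "error"; retry: () => void };     // failure with a recovery path

// Every surface maps a state to the same visual/copy pattern,
// so "generating" looks and behaves like generating everywhere.
function resolveInsightView(state: InsightState): string {
  switch (state.kind) {
    case "generating": return "page-grabber";   // alive, not frozen
    case "ready":      return "insight-body";
    case "empty":      return "empty-message";  // "There is nothing to read."
    case "error":      return "error-retry";    // "Hmm, something went wrong."
  }
}
```

Centralizing the state-to-pattern mapping is what keeps card-level and page-level loading consistent rather than ad hoc per screen.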
Deep Dive
Back-in-Time Insights
Back-in-time introduced a new mental model. Users could browse to dates where content might be available, still generating, not available, or erroring with a retry path. The date browsing experience needed a clear product story around availability and states — not just a calendar UI.
What was designed
A calm browsing experience with clear rules
• Date navigation that feels simple, not technical
• Clear empty state language when nothing exists for that date
• Consistent error messaging + retry patterns
• Deliberate loading animation moments so the app feels responsive and intentional
Key UX Artifacts
• Loading moon animation in back-in-time (communicates "this is working" without feeling stressful)
• Back-in-time loading / empty / error state system
• Card-level loading if some insights are available and others are still generating
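The availability rules behind date browsing can be expressed as a small resolver. This is a hypothetical sketch of that logic; the function and parameter names are assumptions for illustration, not the shipped implementation.

```typescript
// Hypothetical availability resolver for back-in-time browsing.
type DateAvailability = "available" | "generating" | "unavailable" | "error";

function resolveDate(
  date: string,
  generated: Set<string>,   // dates with ready insights
  inFlight: Set<string>,    // dates currently generating
  failed: Set<string>       // dates whose generation errored
): DateAvailability {
  if (generated.has(date)) return "available";
  if (inFlight.has(date)) return "generating";  // show the loading moon animation
  if (failed.has(date)) return "error";         // show the retry path
  return "unavailable";                         // clear empty-state language
}
```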
Deep Dive
Notifications + Re-entry Logic
When insights generate asynchronously, the return moment matters as much as the reading moment. If a user taps a notification and lands in confusion, trust drops instantly.
What was designed
A supportive return journey tied to generation events
• Notification triggers fired when an insight was actually ready, not on a fixed schedule
• Re-entry landing behavior that placed users directly on the relevant insight, in its correct state
• Handling for users returning mid-generation, so the landing screen showed the familiar generating pattern instead of a broken page
• Consistent error messaging + retry patterns if generation had failed by the time the user returned
Key UX Artifacts
• Notification-to-landing flows mapped against generation events
• Re-entry states reusing the same generating / empty / error patterns as the rest of the app
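The re-entry behavior can be summarized as a small routing decision: where a notification tap should land depending on the generation state at return time. This is a hypothetical sketch; the names are illustrative, not from the actual app.

```typescript
// Hypothetical re-entry router for notification taps.
type GenerationStatus = "done" | "in-progress" | "failed";

function reentryDestination(status: GenerationStatus): string {
  switch (status) {
    case "done":        return "insight";             // land directly on the new insight
    case "in-progress": return "insight-generating";  // same generating pattern, no surprise spinner
    case "failed":      return "insight-retry";       // explain what happened and offer retry
  }
}
```

Routing every return through the same state patterns is what makes re-entry feel supportive rather than chaotic.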
QA and Release Stabilization
Dynamic content systems create invisible failure modes. In V2, it wasn't enough to check if screens "worked." We had to validate state behavior across real-world conditions.
As QA Lead, what I did
Owned regression scripts across onboarding, charts, insights, notifications
Validated edge cases: missing data, profile edits, add person, delete person, mid-generation re-entry
Tested across iOS and Android
V2 was significantly more stable than V1. Crash rate improved and the product felt smoother and more consistent in day-to-day use.
AI Content Pipeline Architecture
Data Sources
• Astrology data + rules
• Source content library
Processing Layer
• Grounding layer
• Style extraction
• Editorial rules
• Content team feedback (system + developer + user)
Computation
• Compute chart signals (aspects, transits, placements)
• Create style guide + voice rubric
Generation
• Generate structured facts (JSON-like payload for the LLM)
• Create few-shot examples (before/after rewrites)
• Prompt + few-shot conditioning
• Fine-tune / instruction tune
• Candidate generations
Evaluation Harness
• Rubric scoring (voice match, clarity, usefulness)
• Fact consistency (must align with the chart payload)
• A/B tests in app (engagement, saves, retention)
• Automated checks (length, reading level, banned claims)
Quality Gate
• Meets quality threshold?
• Yes → deploy to the production pipeline, daily generation at scale, ongoing monitoring
• No → human review loop (content team approves / edits), with approved edits added back into the dataset
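The quality gate at the end of the pipeline can be sketched as a simple check over the evaluation signals. This is a hypothetical TypeScript sketch; the field names and the 0.8 thresholds are illustrative assumptions, not the real system's values.

```typescript
// Hypothetical quality-gate check mirroring the pipeline above.
interface CandidateScores {
  voiceMatch: number;        // 0..1 rubric score
  clarity: number;           // 0..1 rubric score
  factConsistent: boolean;   // must align with the chart payload
  passesAutoChecks: boolean; // length, reading level, banned claims
}

function meetsQualityThreshold(s: CandidateScores): boolean {
  // Hard gates first: a factually inconsistent insight never ships.
  if (!s.factConsistent || !s.passesAutoChecks) return false;
  // Soft rubric thresholds; failures route to the human review loop.
  return s.voiceMatch >= 0.8 && s.clarity >= 0.8;
}
```

Candidates that fail the gate go to human review, and approved edits feed back into the dataset, which is what closes the loop between the content team and the pipeline.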
Outcomes
What We Shipped
Product Outcomes
• Shifted from manual insights to scalable generated insight delivery
• Launched back-in-time browsing with predictable, parent-friendly states
• Supported system-wide asynchronous content behaviors without making the app feel confusing
Team & Business Outcomes
• Content team workload reduced significantly (no longer fully manual)
• Faster shipping velocity once the pipeline and UX system were in place
• Improved insight accuracy through stronger calculation and adoption of Divine API
User Outcomes
• Users consistently gave positive feedback
• App felt smoother, more polished, and more trustworthy
App Store Recognition
V2 supported a major milestone: App Store App of the Day recognition, a testament to the polish and trust the redesigned experience delivered.
Reflection
Premium UX Isn't Decoration
V2 taught me that premium UX isn't decoration — it's clarity and consistency under uncertainty.
The real work was invisible
The most important work wasn't making insight screens pretty — it was designing the invisible system: the states, the messaging, the recovery paths, and the return journey. That's what made AI-generated content feel like a stable product parents could trust.