The 7 metrics to track in the first 30 days after a KDP coloring book launch are: daily BSR for each of the 3 categories, daily units sold, search-indexing status for all 7 keywords, ad spend efficiency (after day 7), review count and verified ratio, refund rate, and Author Central traffic source mix. Track them daily for days 1 to 14, then twice-weekly for days 15 to 30. The first 30 days are the rolling window the A10 algorithm uses to decide how aggressively to surface your book in month 2, so this is the only month where what you measure changes the outcome [3].
TL;DR:
- The first 30 days are an algorithm evaluation window, not a sales window. A10 weights this rolling period when deciding visibility for the rest of the book's life. A book that sells 20 units a day for 30 days outranks a book that sells 200 on day 1 and 5 a day after [3].
- Daily cadence matters more than peak numbers. Sustained, low-variance velocity reads as "real demand". Spikes that don't sustain read as "promo-driven" and decay fast.
- Review velocity is a two-sided trap. Under 5 reviews by day 30 signals a dead listing. Over 40 reviews in 48 hours triggers Amazon's velocity-anomaly filter, which removes reviews and can flag the account.
- The single most expensive thing to miss is a search-indexing gap. If by day 10 your 7 keywords don't surface the book on page 1 for a long-tail query, the listing has a metadata problem, not a velocity problem. Fix the listing, not the ad budget.
This post is the day 1 to day 30 companion to the launch day playbook, which covers the 12 moves at hour 0. The 25-point pre-publish checklist sets up the launch; the playbook handles the launch day; this post handles the month that follows.
Table of contents
- What does the first 30 days mean to Amazon's algorithm?
- How often should you check each metric?
- The 7 metrics to track
- The week-by-week schedule for the first 30 days
- Red flags by day 30 and what they mean
- What decisions do you make at the 30-day mark?
- The 30-day tracking template
What does the first 30 days mean to Amazon's algorithm?
The A10 algorithm evaluates new books on a rolling 3 to 4 week window, weighting recent performance more heavily than older performance but still averaging across the full period [3]. That window is what most KDP publishers call the "honeymoon" or "30-day boost", though Amazon never names it in any official documentation. What's documented is the decay behavior: a book's rank weighting on day 7 is built from sales across days 1 through 7, the day-21 weighting from days 1 through 21, and so on. By day 30, the algorithm has roughly 4 weeks of velocity data to anchor against, and the rank you settle into in week 5 is built from that 4-week average, not from any single day inside it.
The practical implication is straightforward. Three days of strong sales in week 1 won't carry the rank past week 5 if the next 25 days are flat. Conversely, 30 days of modest but consistent daily sales builds a stronger anchor than 1 day of triple-digit sales surrounded by zeros. This is why the launch day playbook's contrarian rule (don't spike, sustain) matters more than any single day-0 move [3].
The second thing the first 30 days does is build the listing's conversion-rate baseline. Every impression Amazon serves (in search, in browse, on competitor product pages) gets logged with a click-through and a buy-rate. By day 30, you have a conversion-rate fingerprint, and the algorithm uses it to decide whether to keep serving impressions or pull back. A book with poor week-1 conversion that doesn't improve typically sees Amazon ad eligibility scores drop and organic impressions thin out by week 5 [5].
The 30-day window is the only month where you can change the trajectory cheaply. After day 30, the listing's baseline is mostly set, and improvements require either a price reset, a cover re-upload (which triggers a fresh review), or aggressive new ad spend.
How often should you check each metric?
A check cadence that's too dense (every 2 hours) burns your week 1 attention on noise. KDP dashboard data has a documented reporting lag of up to 24 hours, and print book sales can lag 24 hours to a week because the unit isn't reported until it ships [2][4]. Checking 6 times a day shows you the same data 6 times. Checking once a week misses the recoverable problems (a missing category, a deindexed keyword) until they've cost you the algorithm window.
The right cadence is per-metric, not per-day. Some metrics shift hourly, some shift weekly. The table below gives the read-frequency for each of the 7:
| Metric | Days 1 to 7 | Days 8 to 14 | Days 15 to 30 |
|---|---|---|---|
| Daily BSR per category | Once a day | Once a day | Twice a week |
| Daily units sold | Once a day | Once a day | Twice a week |
| Search-indexing status | Day 3, day 7 | Day 10, day 14 | Day 21, day 30 |
| Ad spend efficiency | Hold spend | Day 8 onward, daily | Twice a week |
| Review count + verified ratio | Once a day | Once a day | Twice a week |
| Refund rate | Day 7 | Day 14 | Day 21, day 30 |
| Author Central traffic mix | Day 7 | Day 14 | Day 21, day 30 |
Pin the table to whatever surface you use as your tracking journal. The journal can be a Notion page, a Google Sheet, or a notebook. What matters is one place where the 7 metrics get a daily timestamp.
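If your tracking journal is a spreadsheet or script, the cadence table can be encoded so each morning tells you exactly what to check. This is an illustrative sketch, not part of any KDP tooling: the metric names come from the table above, and the twice-weekly pattern for days 15 to 30 is approximated with two fixed weekdays (an assumption; pick whichever two days suit you).

```python
def checks_due(day: int) -> list[str]:
    """Which of the 7 metrics to read on a given launch day (1-30).

    Encodes the cadence table: core metrics daily through day 14, then
    twice weekly; fixed-date checks for indexing, refunds, and traffic mix.
    """
    due = []
    core = ["BSR per category", "units sold", "review count + verified ratio"]
    # Rough twice-weekly stand-in for days 15-30 (e.g. Mondays and Thursdays
    # if day 1 was a Monday) -- an assumption, adjust to your own week.
    twice_weekly = day % 7 in (1, 4)
    if day <= 14 or twice_weekly:
        due += core
    if day in (3, 7, 10, 14, 21, 30):
        due.append("search-indexing status")
    if day >= 8 and (day <= 14 or twice_weekly):
        due.append("ad spend efficiency")  # hold spend before day 8
    if day in (7, 14, 21, 30):
        due += ["refund rate", "Author Central traffic mix"]
    return due
```

Calling `checks_due(3)` returns the three core metrics plus the first search-indexing check; ad spend efficiency never appears before day 8, matching the hold-spend rule.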
The 7 metrics to track
Each metric has a what (the number), a where (where you read it), and a tell (what it signals when the number moves).
Metric 1: Daily BSR per category
What: the Best Sellers Rank in each of the 3 categories your book sits in. Coloring books usually carry a primary BSR (Books overall) plus a sub-category BSR (e.g., "Coloring Books for Grown-Ups" leaf).
Where: scroll to the "Product details" section near the bottom of the live listing. Amazon updates BSR roughly hourly inside Amazon's reporting infrastructure, but the public-facing number can lag a few hours. Use the same time-of-day daily so you're comparing apples to apples.
Tell: a sub-category BSR under 50,000 in "Coloring Books for Grown-Ups" indicates daily sales. Between 50,000 and 200,000 indicates weekly sales. Above 500,000, sales are rare. The BSR primer covers what each range converts to in coloring-book niches, and the BSR sales estimator does the rank-to-units math for you. The sales estimation methodology post explains why the same BSR converts to wildly different unit estimates across book types.
The single most useful BSR pattern in the first 30 days is the direction over 7-day windows, not the daily number. A book that drifts from rank 200,000 on day 7 to rank 80,000 on day 14 is gaining ground. A book that holds steady at 200,000 across 3 weeks is flat. A book that drifts from 80,000 to 200,000 is decaying, and decay in week 2 is the strongest signal that your launch-day numbers were promo-driven, not demand-driven.
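The direction-over-7-day-windows read can be sketched as a small helper. This is not an Amazon API, just one way to classify a daily BSR log; the 15% change threshold is an illustrative assumption, not a number from Amazon.

```python
def bsr_trend(bsr_log: list[int], window: int = 7) -> str:
    """Classify trajectory by comparing two adjacent 7-day BSR averages.

    bsr_log is daily BSR readings, oldest first. Lower BSR = better rank,
    so a falling average means the book is gaining ground.
    """
    if len(bsr_log) < 2 * window:
        return "insufficient data"
    earlier = sum(bsr_log[-2 * window:-window]) / window
    recent = sum(bsr_log[-window:]) / window
    change = (recent - earlier) / earlier
    if change < -0.15:   # rank number dropping: gaining ground
        return "gaining"
    if change > 0.15:    # rank number rising: decaying
        return "decaying"
    return "flat"

# The drift described above, ~200,000 on day 7 toward ~80,000 on day 14:
log = [200_000] * 7 + [80_000] * 7
# bsr_trend(log) -> "gaining"
```

The same function flags the week-2 decay pattern: a log that moves from 80,000-range averages to 200,000-range averages returns "decaying".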
Metric 2: Daily units sold
What: the unit count for the previous day across paperback, hardcover (if you enabled it), and any expanded-distribution channels.
Where: KDP Reports dashboard, "Sales Dashboard" view, set to the previous calendar day. Print sales report when the unit ships from Amazon's fulfillment center, not when the buyer clicks "buy", which is why print numbers can land 24 hours to a week after the click [2][4].
Tell: the first 7 days are noise because of reporting lag. The signal lives in the 7-day rolling average from day 8 onward. A 7-day average of 5 to 10 units a day for a single-category coloring book in a moderately competitive sub-niche is a launch that's working. Under 2 units a day for 14 straight days is a dying launch, and the algorithm response in week 5 will reflect it.
Compare your week-1 average to your week-3 average. If week 3 is 30% higher than week 1, the algorithm is rewarding your sustained velocity by serving more impressions [3]. If week 3 is lower than week 1 with no obvious cause, the listing is losing relevance for its keywords, which is metric 3's job to investigate.
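The week-over-week comparison is simple arithmetic on the daily unit counts from the KDP Sales Dashboard. A minimal sketch (the function names are mine, not KDP's):

```python
def rolling_avg(units: list[int], start_day: int, window: int = 7) -> float:
    """Average daily units for the window beginning at start_day (1-indexed)."""
    chunk = units[start_day - 1 : start_day - 1 + window]
    return sum(chunk) / len(chunk)

def week3_vs_week1(units: list[int]) -> float:
    """Fractional change of the week-3 average over the week-1 average."""
    w1 = rolling_avg(units, 1)   # days 1-7
    w3 = rolling_avg(units, 15)  # days 15-21
    return (w3 - w1) / w1

# A book averaging 10/day in week 1 and 13/day in week 3 is up 0.30,
# exactly the 30% threshold the paragraph above describes.
```

A positive return value near or above 0.30 is the sustained-velocity signal; a negative value with no obvious cause hands the investigation to metric 3.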
Metric 3: Search-indexing status for all 7 keywords
What: whether your book surfaces on Amazon search for each of the 7 backend keywords you submitted, plus the 2 to 3 long-tail variations they were meant to cover.
Where: an incognito Amazon search bar (your account's history changes the results). Type the keyword, scroll the results, see if your book appears in the first 5 pages. By day 10, every backend keyword should resolve to your book somewhere in the top 5 pages of search results. By day 21, every backend keyword should land your book on page 1 for the long-tail forms.
Tell: this is the most diagnostic metric in the first 30 days because indexing failures are silent. The book is live, it has a BSR, units are trickling in, but if you type your own backend keyword and your book is on page 20, the keyword isn't doing anything. Most "my launch flopped" cases are silent indexing failures dressed up as velocity problems.
If a keyword still isn't indexed by day 14, the cause is almost always one of three things: the keyword conflicts with a banned term in KDP's metadata rules, the keyword overlaps too heavily with your title (which makes it redundant and KDP discards it), or the keyword is too broad for your category. The keyword optimizer catches the overlap and rule-violation cases. The 7-slot strategy guide covers the long-tail patterns that actually index for coloring books.
The fix is a metadata update, not a velocity push. Updating the 7 backend keyword slots in the KDP listing form takes 5 minutes and triggers no re-review. The new keyword set re-indexes over the next 72 hours.
Metric 4: Ad spend efficiency (only after day 7)
What: if you're running Amazon Sponsored Product ads, the cost per click (CPC), click-through rate (CTR), and advertising cost of sale (ACoS) per campaign.
Where: Amazon Ads dashboard, accessed from KDP via the Marketing tab. Ads data refreshes roughly every hour.
Tell: do NOT start ads on day 0. Hold spend through day 7 minimum, ideally day 10, so the search index has stabilized [3]. Once indexing is settled, run one auto-targeted Sponsored Product campaign at $5 a day for 7 days. The auto campaign discovers which keywords convert without your guesswork. After day 14, take the top 5 to 10 keyword phrases from the auto campaign's search-term report and split them into manual campaigns. ACoS targets for coloring books in 2026 sit at 30 to 50% during the discovery phase and should drift toward 20 to 30% by day 30 as the manual campaigns mature [5].
The profit calculator converts your price tier and page count into a unit margin so you can set a maximum ACoS that still leaves a profit. Spending 60% ACoS on a book that earns 40% margin loses money on every unit. The pricing guide covers the royalty-tier interaction.
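The break-even logic is worth making explicit. Under KDP's published paperback royalty formula (60% of list price minus printing cost), the maximum ACoS you can sustain is just the per-unit royalty as a fraction of list price. A sketch, where the example printing cost is an assumption (pull your real figure from KDP's pricing page for your trim size and page count):

```python
def max_acos(list_price: float, printing_cost: float) -> float:
    """Break-even ACoS: ad spend per sale can't exceed the per-unit royalty.

    Uses KDP's 60% paperback royalty rate; printing_cost varies by trim
    size, page count, and ink, so treat the inputs as your own numbers.
    """
    royalty = 0.60 * list_price - printing_cost
    return royalty / list_price  # as a fraction of the sale price

# e.g. a $9.99 book with an assumed ~$3.65 printing cost:
# royalty ~= $2.34, so any ACoS above ~23% loses money on that unit.
```

Run this before setting bids: the 30 to 50% discovery-phase ACoS target only makes sense if it sits at or below your book's own ceiling.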
Metric 5: Review count and verified ratio
What: total review count, total rating count, and the verified-purchase ratio (visible by clicking each review).
Where: the live listing's review section. KDP's Author Central also shows the count but not the verified breakdown.
Tell: aim for 5 to 15 reviews by day 30, with at least 70% verified. Verified reviews carry roughly 5 times the algorithmic weight of unverified ones, so 10 verified beats 30 unverified for ranking purposes. Under 3 reviews by day 30 in a competitive sub-niche is a conversion-rate red flag, which usually points at the cover (thumbnail readability) or the listing copy (the description guide covers the conversion-affecting parts).
The danger zone is the opposite. Over 40 reviews in 48 hours triggers Amazon's velocity-anomaly detection. Reviews land in a moderation queue, often get removed, and the listing can get flagged for review-velocity abuse. Enforcement in 2026 is tighter than it was in 2023. The safe pattern is 1 to 3 verified reviews per day, paced naturally through the first 30 days, not a 30-review blast in week 1.
Review-swap groups, paid reviews, and "instant best seller" services trip the same filter. Amazon removes the reviews within days and flags the account. Don't.
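The pacing rule above can be checked against your own daily review log. The 40-reviews-in-48-hours threshold is this article's heuristic, not a published Amazon number, so treat the sketch as a sanity check, not a guarantee:

```python
def review_velocity_ok(daily_new_reviews: list[int], limit: int = 40) -> bool:
    """Flag any 2-day window of new reviews that crosses the limit.

    daily_new_reviews is the count of new reviews per day, oldest first.
    """
    for i in range(len(daily_new_reviews) - 1):
        if daily_new_reviews[i] + daily_new_reviews[i + 1] > limit:
            return False
    return True

# 1-3 verified reviews a day stays comfortably under the threshold:
# review_velocity_ok([2, 3, 1, 2]) -> True
# a 30-review blast followed by 15 more trips it:
# review_velocity_ok([30, 15]) -> False
```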
Metric 6: Refund rate
What: the percentage of units sold that were refunded.
Where: KDP Reports dashboard, "Returns" report. Refunds report 1 to 14 days after the original sale, so this metric stabilizes around day 14.
Tell: industry-typical refund rate for coloring books sits at 2 to 4%. Above 6% by day 14 signals a product problem (most often: paper bleed-through with markers, color shift on the cover after the CMYK conversion, or page-count mismatch versus the listing description). All three are catchable with a proof copy, which is move 5 in the launch day playbook for exactly this reason.
Refund-driven reviews are the most expensive review type a book can get because the algorithm reads "refund + 1-star review" as a fundamental quality flag, not a one-off bad experience. One refund-driven 1-star in week 1 takes a 20-review cushion to outweigh. The fix is upstream (the proof copy in the launch checklist), not downstream.
Metric 7: Author Central traffic source mix
What: which referral sources are driving views to your book's listing.
Where: Author Central's "Sales Dashboard" tab shows aggregate views by source (Amazon search, Amazon browse, external referrer). Hourly granularity isn't available, but daily totals are.
Tell: a healthy first-30-day mix is 60 to 80% Amazon-native (search + browse + competitor-page recommendations) and 20 to 40% external (Pinterest, your newsletter, off-Amazon promo). If external traffic is over 60% of total views in week 2 and beyond, the listing isn't earning Amazon-native impressions, which usually points back at metric 3 (search indexing) or a cover/listing conversion problem.
If Amazon-native is 95%+ and external is near zero, that's also a problem in a different direction. The algorithm uses external referral signals as a relevance proof point. A book that pulls some external traffic in week 1 to week 4 signals "real audience demand" to the algorithm in a way pure-Amazon traffic doesn't. The launch-day playbook's move 11 (seed external traffic, spread over days) sets this up; this metric verifies it landed [3].
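Both failure directions of the traffic mix reduce to one ratio. A sketch using the article's own heuristic bands (the 60% and near-zero cutoffs are from the paragraphs above; the function itself is illustrative):

```python
def traffic_mix_flag(amazon_views: int, external_views: int) -> str:
    """Read the Author Central daily view totals against the healthy band.

    Healthy: roughly 60-80% Amazon-native, 20-40% external.
    """
    total = amazon_views + external_views
    if total == 0:
        return "no data"
    ext = external_views / total
    if ext > 0.60:
        return "over-reliant on external traffic: check indexing and conversion"
    if ext < 0.05:
        return "no external signal: seed off-Amazon traffic"
    return "healthy mix"

# 800 Amazon-native views against 200 external is a 20% external share,
# inside the healthy band.
```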
The week-by-week schedule for the first 30 days
The 30 days break into 4 chunks with different priorities. The pattern is: investigate early, optimize in the middle, document late.
Days 1 to 7: indexing window and proof arrival
- Daily: BSR per category, units sold, review count.
- Day 3: first search-indexing check. Type each of the 7 backend keywords incognito. Note which surface the book, which don't.
- Day 5 to 7: proof copy arrives (assuming you ordered on submit day per move 5 of the launch playbook). Open it. Check paper bleed-through with the marker brand you target, cover color accuracy, spine alignment, barcode placement.
- Day 7: second search-indexing check. Any keyword still not surfacing the book is a candidate for a metadata fix.
- Do NOT start ads. The search index isn't stable.
Days 8 to 14: ad ignition and refund-rate read
- Daily: BSR, units, review count, ad performance (once ads are running).
- Day 8 to 10: start one auto-targeted Sponsored Product campaign at $5 per day. Let it run untouched for 7 days.
- Day 10: third search-indexing check. If a keyword is still missing, edit the listing's 7 backend slots and replace it. New keywords re-index over 72 hours.
- Day 14: first refund-rate read. Anything over 6% is a quality flag, investigate the proof copy.
- Day 14: fourth search-indexing check.
Days 15 to 21: ad refinement and conversion-rate baseline
- Twice-weekly: BSR, units, reviews.
- Day 15: pull the auto campaign's search-term report. Take the top 5 to 10 converting keywords, build manual targeted campaigns at higher bids ($0.50 to $1.20 starting CPC for coloring books in 2026).
- Day 17 to 21: pause the auto campaign or drop its budget to $2 per day. The manual campaigns now do the lifting.
- Day 21: fifth search-indexing check. By now every backend keyword should surface the book on page 1 for the long-tail form.
Days 22 to 30: documentation and decision prep
- Twice-weekly: BSR, units, reviews.
- Day 28 to 30: collect the full 30 days of data into a single sheet. Calculate the 30-day rolling averages for BSR, units, and ad ACoS.
- Day 30: review the categories guide against your current category placements. If you launched into wrong categories, this is the window to request a fix without losing momentum.
Red flags by day 30 and what they mean
Six failure patterns show up in the first 30 days. Each has a cause and a fix.
- Zero or near-zero BSR movement by day 14. The listing isn't getting impressions. Cause is almost always metric 3 (search-indexing failure) or metric 7 (no external traffic, no Amazon trust signals). Fix the metadata, then seed external traffic.
- BSR drift upward (worsening) starting in week 2. Initial launch signals decayed and the listing isn't sustaining velocity. Cause is usually launch-day spike behavior (promo blast on day 1 that didn't sustain). Fix is to start sustained velocity moves: Pinterest pin schedule, modest ad spend, real audience promo over weeks.
- 5+ refunds in week 2 with no obvious cause. Almost always a paper or cover problem the digital previewer didn't show. Check the proof copy with marker bleed-through. If the issue is real, re-export the interior or cover and re-upload, accepting the re-review delay. This is cheaper than a refund-driven review pattern.
- Reviews stuck at 0 to 2 through day 21. Listing conversion is low (visitors aren't buying) OR external traffic is zero (no audience to convert). Audit the cover thumbnail at phone size. Audit the description with the description generator and the description guide.
- ACoS over 70% through week 3. Either targeting is too broad (auto picked irrelevant keywords and you didn't prune) or the price doesn't support the margin. Check the profit calculator for your max sustainable ACoS, then prune campaigns to that ceiling.
- A keyword that worked in week 1 deindexes in week 2. Rare but it happens, usually when Amazon updates a banned-term list or a category mapping. Replace the keyword and the listing re-indexes in 72 hours.
What decisions do you make at the 30-day mark?
Day 30 is the first natural decision point. The data is stable enough to point you down one of 4 paths.
Path 1: Working as planned. BSR trending down (improving), 5+ reviews verified, ad ACoS at or below 40%, refund rate under 4%. Action: scale ad spend by 50% for the next 30 days, keep the price, do not change the listing.
Path 2: Working but slow. BSR flat or slowly improving, 3 to 5 reviews, ACoS 40 to 60%, refund rate fine. Action: keep current ad budget, audit the cover for thumbnail readability, A/B test the description by editing it in place (no re-review needed for description edits).
Path 3: Stalled, fixable. BSR flat through 30 days, fewer than 3 reviews, no indexing problem. The conversion side is the issue. Action: cover swap (this requires a re-upload and a 24 to 72 hour re-review), price test (drop $1 to test elasticity), or aggressive Pinterest seed for week 5.
Path 4: Broken. BSR drifted upward through 30 days, indexing failures persisted, refund rate over 6%. Action: stop ad spend, pull the listing for revisions (or unpublish if the manuscript itself has a defect), republish with corrections. A 14-day publishing pause is cheaper than 60 more days of accumulating bad signals.
The 30-day mark is also when you decide whether the niche itself is working, or whether the book needs to move into a sister sub-niche. The niche selection guide covers the framework for re-evaluating a niche at the 30-day mark.
The 30-day tracking template
Pin a 4-column journal somewhere visible: date, BSR (primary + 2 subs), units sold, notes. Add the metric-specific columns as the days they're due come up. The journal does two things you can't get any other way.
First, it makes the algorithm's response to your moves visible. Without daily logs, "I posted to Pinterest on day 3" and "BSR moved from 250k to 95k on day 5" feel like two unrelated events. With logs, the connection is visible, and the next 30 days of decisions can use it.
Second, it builds the data feedstock for any second book you launch in the same niche. A 30-day journal of book #1 tells you what week 2 looks like before book #2 ever ships, which means book #2's launch plan can be tighter and the ad ramp can start earlier.
A blank 30-day window with no journal is the worst case. Future-you in month 2 can't tell which moves worked, which didn't, and what to change for book #2. The journal is 5 minutes a day. Skip it once and you'll skip it forever, so just don't skip the first day.
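If the journal lives in a spreadsheet, a tiny append-only CSV script keeps the 5-minute habit honest and makes the day-28-to-30 rollups a two-line calculation. The file name and columns below are suggestions, not a required format:

```python
import csv
from datetime import date

FIELDS = ["date", "day", "bsr_primary", "bsr_sub1", "bsr_sub2", "units", "notes"]

def log_day(path: str, day: int, bsr: tuple[int, int, int],
            units: int, notes: str = "") -> None:
    """Append one day's reading to the journal CSV, writing a header first."""
    try:
        with open(path) as f:
            new_file = f.read(1) == ""
    except FileNotFoundError:
        new_file = True
    with open(path, "a", newline="") as f:
        w = csv.writer(f)
        if new_file:
            w.writerow(FIELDS)
        w.writerow([date.today().isoformat(), day, *bsr, units, notes])
```

One call per day (e.g. `log_day("journal.csv", 5, (180_000, 7_000, 9_500), 4, "pinned 3 boards")`) produces exactly the date, BSR, units, notes structure described above.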
Related reading:
- The 12-move launch day playbook for the hour-by-hour moves on submit day.
- The 25-point launch checklist for the pre-publish validation pass.
- The BSR primer and the BSR sales estimator for converting rank to units.
- The 7-slot keyword strategy and the keyword optimizer for fixing indexing failures.
- The categories guide for verifying or fixing category placement.
- The pricing guide and the profit calculator for setting an ACoS ceiling.
BookIllustrationAI's pre-publish flow keeps the manuscript-side variables (interior DPI, single-sided layout, color mode, KDP-aware bleed) automated, so the post-launch tracking work runs against a clean baseline. The 7 metrics above stay your job because they live inside Amazon's account, not inside the manuscript.
References
- [1] Timelines (KDP Help), Amazon KDP
- [2] KDP Reports, Amazon KDP
- [3] Launch Velocity in the A10 Era, Vappingo
- [4] KDP Dashboard Explained: What to Watch and What to Ignore, Written Word Media
- [5] Optimize Your Books for the Amazon A10 Algorithm, Hidden Gems