Section 01
AI Augmentation Plan for the Marketing Analyst Role
Superior Tire & Rubber Corp., Warren, PA
Prepared for: Jared Steier, Vice President of Sales & Marketing, Material Handling
Prepared by: NunnCurtis Labs
Date: 2026-05-11
Table of contents
- Executive summary
- What we read in your job description
- A week in your new analyst's life
- The day-one prompt playbook
- 30 / 60 / 90 day rollout
- Tools to add once prompts have proved their worth
- Cost-vs-continued-hire comparison
- Implementation services option
- Money-back guarantee
- What to do this week
Section 02
Executive summary
Your new Marketing Analyst doesn't need a stack on day one. They need a prompt playbook and a Claude Pro or ChatGPT Plus subscription. The stack waits until the prompts are proven.
Most AI pitches you've read open with integrations, ETL, BI dashboards, Slack bots, and a six-figure systems-integrator engagement. This one doesn't. At a $50-$90M family-owned manufacturer, your analyst already has the data — the Epicor exports, the HubSpot reports, the rebate spreadsheets, the booth-scanner CSVs from MHEDA. What they don't have is a structured way to put that data in front of a senior model and get back a defensible scorecard, a post-show attribution report, or a content-gap analysis.
Section 4 is that structure. Eight prompts, one per responsibility in your Indeed listing, written specifically for your business. Material Handling, Construction, and Agriculture business units. Distributor channel. MHEDA, ISO 9001, the rebate accrual mechanics your Finance team already runs. Your analyst pastes each prompt into Claude Cowork (Claude.ai with Projects) or ChatGPT Codex / Canvas, attaches the export, and gets back a report that would have taken two or three days in Excel.
Month-one cost is one Claude Pro or ChatGPT Plus seat. Roughly $20-$30/month. No integrations, no Zapier flows, no ERP read-replica, no Power BI license. Hire the analyst, give them prompts on day one, add a tool only after the prompts prove what's worth automating.
What you'll find in this document, in order:
- The day-one prompt playbook (Section 4). Eight prompts your analyst can use on their first morning. Copy-paste-ready. Platform-neutral.
- A 30 / 60 / 90 day rollout (Section 5). Days 1-30 are just the prompts and existing exports. Days 31-60 introduce the first lightweight automation once one specific step has been done manually too many times. Days 61-90 introduce dashboards, only if a monthly report has enough recurring viewers to justify one.
- A tools section (Section 6). The full vendor breakdown is in the companion tool-stack.md. Section 6 here is the summary table and the buying logic — nothing to buy in month one.
- A cost-vs-hire comparison (Section 7). Year-1 tooling: $497 for this proposal plus roughly $240/year for the analyst's Claude Pro seat. Tools ramp to $400-$650/month by month 6 only if the analyst has proven they need them.
Year-1 recovered value, conservatively: roughly $22,500 in directly recovered analyst hours, plus another $10,000 in soft costs (reports that ship on time, a QBR deck that takes hours instead of days, a post-MHEDA attribution that's defensible). That's about $32,500 of value against roughly $740 of year-one spend: the $497 proposal plus the $240/year seat.
Two things to know about how this document is written. First, every number is conservative on purpose. Your CFO will read this, and we'd rather under-promise than defend a hockey-stick claim. Second, nothing here requires a six-figure engineering project or a rip-and-replace of your CRM, marketing automation, or ERP. If your IT team can hand the analyst a CSV export, the rest of month one happens inside Claude Cowork or ChatGPT Codex.
Section 03
What we read in your job description
Your Indeed listing names the role and opens with this line:
"The Marketing Analyst supports data-driven decision making across Sales, Marketing, and Product teams by analyzing market trends, pricing, and performance data..."
Three phrases in that sentence are what the rest of this proposal hangs on.
"Across Sales, Marketing, and Product teams"
This is a cross-functional role. Your analyst won't sit inside one team's stack. They'll get pulled into Sales (distributor performance, quote-to-close cycle), Marketing (campaign reporting, trade-show pipeline, content), and Product (pricing intelligence, win-loss, application data). At a $50-$90M manufacturer with two to six marketing headcount, that makes your analyst the go-between for three siloed data sources. AI is unusually good at exactly that go-between work: pulling from three systems, normalizing, and producing one report.
"Market trends, pricing, and performance data"
That phrase translates to four distinct workflows in the body of the JD: competitive pricing intelligence, lead-source ROI, distributor sell-through, and product-line performance. Each one is a templated, repeatable transformation. Each one is currently a multi-day Excel exercise at most mid-market manufacturers we've looked at. Each one is a strong prompt candidate.
"Data-driven decision making"
The unspoken half of that phrase is the actual pain. The decisions are being made today without the data, or with the data shipped two weeks late. You have data. You need it at the speed of the decision. Prompts don't generate more data; they make the data you already have arrive in time to matter.
The eight responsibilities, called out verbatim
Your JD names eight responsibilities under the role. Each one becomes a prompt in Section 4.
- "Build and maintain monthly distributor scorecards across the Material Handling, Construction, and Agriculture business units, including SKU-level sell-through, channel ROI, and rebate accruals."
- "Track trade show pipeline attribution end-to-end, from booth-scan lead capture through quoted opportunity through closed-won revenue. Deliver post-show reports within 14 days of each show."
- "Analyze lead source performance across digital, trade show, distributor referral, and direct channels. Identify highest-ROI channels by business unit and quarter."
- "Maintain technical content performance dashboards for our datasheets, application guides, and case studies. Identify content gaps where distributor sell-through is underperforming benchmark."
- "Compile pricing competitive intelligence on key product lines, drawing from public price lists, distributor feedback, and industry benchmarks."
- "Support quarterly business reviews with VP Sales & Marketing, including ad-hoc data pulls and one-page executive summaries."
- "Partner with IT and Operations to extract data from our ERP and CRM systems into reportable formats."
- "Manage and improve our marketing automation reporting (HubSpot or equivalent)."
The phrase that recurs across all eight: "translate raw data into action." Prompts do that well today. Give a senior model a CSV, a context paragraph, and a structured ask, and you get a report your VP can read in the elevator. The analyst's job shifts from building reports in Excel to editing the model's output for credibility.
What else we picked up from publicly available signals
- MHEDA 2026 Booth #1 (May 2-6, Nashville). An active trade-show investment running concurrent with this hiring search. If your analyst starts in late May or June, they need to produce a defensible post-show ROI report. Without prompts, that report typically lands in late August. With Prompt #2 in Section 4, it lands in mid-June.
- Three business units: Material Handling, Construction, Agriculture. Implied by your title, "VP Sales & Marketing, Material Handling." Your analyst will produce scorecards per business unit, not company-wide aggregates.
- Distributor-channel orientation. MHI membership, MHEDA Booth #1, and the OEM Off-Highway company profile all confirm channel partners as your primary go-to-market. Every metric your analyst owns ladders up to one question: is the distributor channel performing?
- 60th-anniversary year, family-owned, ISO 9001, 100% USA-made. Decision velocity is fast (no PE board), but you and your team expect credible, conservative analysis. That's why every number in this proposal sits at the conservative end of the range.
Section 04
A week in your new analyst's life
This section anchors the prompts in Section 4 against the rhythm of the job. If you've hired a Marketing Analyst at this size before, the week below will read familiar. If you haven't, this is what your new hire is walking into without a playbook.
A representative week
Monday morning, 8 AM. Your analyst arrives to three Slack messages from your sales reps: "Can you pull last quarter's sell-through for Distributor X before my 11 AM call?" Each one is twenty minutes in Excel if the data is clean, two hours if it isn't. The data isn't clean. Two of the three messages get answered by lunch; the third slips to Tuesday.
Monday afternoon. You send the analyst a request: the executive team is reviewing Construction business-unit performance Wednesday and needs sell-through by SKU, by region, by quarter, versus the same quarter last year. This is the monthly distributor scorecard, but the Construction cut of it. Your analyst spends Monday afternoon and most of Tuesday building it. Output: a 15-tab Excel workbook. They proofread it twice for typos because it's going to executives.
Wednesday. You present. The Construction GM asks a follow-up question about a specific distributor's tier promotion. Your analyst spends Wednesday afternoon pulling that data. The original scorecard has to be regenerated to incorporate the new view. Another four hours.
Thursday. Trade-show season. Two of your reps are at a regional material-handling expo. They're calling back from the booth with photos of business cards. Your analyst is supposed to be the central point for trade-show lead capture, but there's no system. Leads arrive as photos, voice notes, and "I'll send you the list when I'm back." Your analyst spends Thursday hand-keying leads into HubSpot, calling distributors to validate territory ownership, and emailing reps for context that should have arrived with the lead.
Friday morning. Joe (Brand Manager) asks for the monthly content-performance report. Which datasheets and case studies are driving sell-through? Your analyst genuinely doesn't know how to answer that question. Website analytics live in one system, the CRM lives in another, and sell-through lives in the ERP nobody on the marketing side has direct access to. The Friday report ships at 10 PM with a caveat: "directional only."
What got skipped
In that representative week, your analyst skipped:
- Lead-source ROI analysis (no time)
- Competitive pricing intelligence (no time)
- Distributor onboarding metrics (the new distributor onboarded last month, no one's looked at their numbers)
- Content-gap analysis (haven't run it this quarter)
- Win-loss synthesis (annual at best)
The pattern
Your analyst spent roughly 80% of the week on urgent-but-not-strategic work. The strategic work got skipped: the analyses that would actually change a distributor relationship, a pricing decision, or next year's trade-show budget. That pattern shows up at most $50-$90M manufacturers we've looked at. The Marketing Analyst is too busy answering questions to ever ask one.
The prompt playbook in Section 4 is specifically about flipping that ratio. Move the templated, repeatable transformations to a senior model running inside Claude Cowork or ChatGPT Codex. Free the human time for the work the model can't do: distributor relationship management, judgment calls on rebate exceptions, conversations with Finance about accrual treatment, the editorial pass on the executive summary before it ships.
Section 05
The day-one prompt playbook
This is the core of the proposal.
Below are eight prompts your new analyst can use on their first morning. Each maps to one of the responsibilities in your JD. Each is platform-neutral, which means the same prompt works in Claude Cowork (Claude.ai with Projects, the workspace interface where you can upload files alongside the chat) and in ChatGPT Codex / Canvas (OpenAI's equivalent workspace inside ChatGPT Plus). Both accept file attachments. Both retain context across a working session. Either is fine; the choice comes down to which one your analyst prefers.
The setup is the same in either case:
- Create a Project (Claude) or a new Canvas / Codex thread (ChatGPT) named for the responsibility (e.g., "MH Distributor Scorecard").
- Attach the relevant export into the workspace (Epicor CSV, HubSpot report, rebate spreadsheet).
- Paste the prompt below.
- Read the output. Edit for credibility. Ship.
A few things to keep in mind before reading the prompts.

Each one tells the model who your analyst works for and what kind of company you are. That context matters more than the structured ask. A senior model writing a distributor scorecard for "a SaaS company" produces a different report than the same model writing for "a 60-year-old family-owned industrial-components manufacturer in Warren, PA." The prompts below front-load that context every time.

None of them ask the model to generate data — they ask the model to transform data your analyst already attached. If a number isn't in the attached export, the prompt explicitly tells the model to flag the gap rather than fill it in. That's the credibility difference between a useful AI-assisted analyst and one your VP stops trusting.

On data hygiene: both Claude Cowork (Claude.ai with the standard Pro subscription) and ChatGPT Plus run on commercial agreements that don't train on your business data. Verify the current terms before sending sensitive data through either platform; Section 6 has notes on enterprise upgrades if your IT team wants tighter controls.
Prompt 1 — Monthly distributor scorecard, per business unit
JD responsibility: "Build and maintain monthly distributor scorecards across the Material Handling, Construction, and Agriculture business units, including SKU-level sell-through, channel ROI, and rebate accruals."
What this means at Superior Tire. Once a month, your analyst produces one scorecard per business unit. Each scorecard lists every distributor in that BU, their sell-through versus prior quarter, their top SKUs (solid polyurethane tires, wheels, casters, engineered elastomers — whatever moved), their rebate accrual status, and a one-sentence plain-English narrative of what changed. The audience is you and the BU GMs. The narrative is what executives actually read.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp, a 60-year-old
family-owned industrial-components manufacturer in Warren, PA. We sell solid
polyurethane and rubber wear products — tires, wheels, casters, engineered
elastomers — through three business units: Material Handling (forklift OEMs
and AGV distributors), Construction (road-construction wear parts), and
Agriculture. Our go-to-market is distributor-led; we are an MHEDA and MHI
member and our distributor channel drives the majority of our revenue.
I'm attaching:
1. Last 90 days of sell-through by SKU, by distributor, by BU (Epicor
export, CSV).
2. The current quarter's rebate accruals by distributor (Finance
spreadsheet, XLSX).
3. Last month's scorecard PDF for format reference.
Build me this month's monthly distributor scorecard for the
[Material Handling | Construction | Agriculture] business unit.
For each distributor, output:
- Total sell-through this period vs. prior period (dollars and units)
- Top 5 SKUs by revenue this period
- Rebate accrual status: on track / behind / ahead, with the dollar gap
- A one-sentence plain-English narrative of what changed
- A flag (yes/no) for whether their sell-through dropped more than 10%
vs. prior period
Group the dropped-more-than-10% distributors at the top of the report
under a heading "Distributors to call this week." For each of those,
suggest a one-paragraph call script for their account manager — concrete,
non-accusatory, opens with what's still working before naming the gap.
End with a one-paragraph executive summary for the VP Sales & Marketing
of the BU. Tone: conservative, credible, mid-market manufacturing. Do not
speculate where data is missing — flag the gap instead and tell me what
file I'd need to attach to close it.
Output as markdown with one sortable table per distributor section,
followed by the executive summary paragraph.
Inputs your analyst attaches: the Epicor sell-through CSV (90 days), the rebate accrual spreadsheet from Finance, the prior month's scorecard PDF for format reference.
What your analyst gets back: a markdown report they can paste into Notion, Google Docs, or Word and ship in under an hour. Your VP reads the dropped-distributor list at the top — that's the deliverable. The call scripts are what get the distributor account managers moving.
Time saved vs. manual. A clean scorecard built from scratch in Excel runs two to three days per business unit, per month. Three BUs add up to six to nine analyst-days monthly. With this prompt, run-time is roughly four hours per BU for the initial draft plus review, dropping to two hours by month three as the analyst tunes the prompt. Conservative recovery: 40-50 analyst-hours per month, or roughly 500-600 hours per year.
Where this hits the ceiling. When your analyst is running the same scorecard every month with the same inputs, manually re-prompting wastes time. Around month four or five, the natural next step is a Zapier or Make flow that drops the Epicor CSV into a folder, triggers the prompt, and emails the output. That's Section 5 day 31-60 work, and only worth doing after the prompt has proved itself.
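The dropped-more-than-10% flag in Prompt 1 is also easy to spot-check outside the model, which is a good habit before a scorecard goes to executives. A minimal pandas sketch, using hypothetical column names (`distributor`, `period`, `revenue`) that the real Epicor export will not match exactly:

```python
import pandas as pd

# Hypothetical toy data -- the real Epicor export will have different columns.
df = pd.DataFrame({
    "distributor": ["A", "A", "B", "B"],
    "period": ["prior", "current", "prior", "current"],
    "revenue": [100_000, 85_000, 50_000, 52_000],
})

# One row per distributor, one column per period.
pivot = df.pivot(index="distributor", columns="period", values="revenue")

# Flag any distributor whose sell-through dropped more than 10% vs. prior period.
pivot["pct_change"] = (pivot["current"] - pivot["prior"]) / pivot["prior"]
pivot["call_this_week"] = pivot["pct_change"] < -0.10

print(pivot[pivot["call_this_week"]].index.tolist())  # → ['A']
```

If the model's "Distributors to call this week" list disagrees with a ten-line check like this, trust the check and fix the prompt or the attachment.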
Prompt 2 — Trade-show pipeline attribution, post-MHEDA in 14 days
JD responsibility: "Track trade show pipeline attribution end-to-end, from booth-scan lead capture through quoted opportunity through closed-won revenue. Deliver post-show reports within 14 days of each show."
What this means at Superior Tire. Your booth at MHEDA 2026 (Booth #1, May 2-6, Nashville) generated booth-scan leads, business-card photos, and rep voice notes. The clock starts when the show closes. Within 14 days, your analyst needs to produce a defensible attribution report: how many leads, of what quality, routed to which distributors, with what early follow-up. The closed-won number comes later. The quality of the leads and the routing need to be in your hands while the show is still fresh.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp. We just
exhibited at MHEDA 2026 (Booth #1, Nashville, May 2-6). Our exhibit
strategy is distributor-channel forward: most "leads" are end-customers
we route to the right distributor in their territory, plus a smaller
number of distributor prospects we route to our channel team directly.
I'm attaching:
1. The iCapture (or Cvent LeadCapture) booth-scan export, CSV. Columns
include name, company, title, email, phone, badge type, and any
on-booth notes captured by the rep.
2. A photo-or-text dump of business cards the reps captured outside the
scanner (some of these will be partial — name and company only).
3. The rep voice-notes transcript, if available — three of our reps sent
voice notes back from the booth which I had transcribed.
4. Our distributor territory map, CSV (which distributor owns which
state / region for each of our three BUs).
Build me the post-MHEDA pipeline attribution report. Output in this
structure:
Section 1 — Lead quality summary.
- Total scanned leads, total business-card leads, total voice-note leads.
- Of those, how many are end-customer prospects (route to distributor)
vs. distributor prospects (route to channel team) vs. unclassifiable
(flag for analyst review).
- Top 5 industries represented in the lead set.
- Top 5 states / regions.
Section 2 — Routing recommendations.
- For each end-customer lead with a clear state/region, name the
distributor in that territory it should route to.
- For each distributor prospect, route to channel team with a one-line
context note.
- For unclassifiable leads, list them under "Analyst review queue" with
the reason (missing company, partial card, etc.).
Section 3 — Sales-rep follow-up packet.
- Group the leads by the rep who scanned/noted them.
- For each rep, output a one-page packet: their top 10 leads with full
context, suggested first-touch email per lead in plain language
(no marketing buzzwords — we sell to manufacturing operations
people, not CMOs), and a flag for any lead they should call within
48 hours.
Section 4 — Executive summary.
- One paragraph for the VP Sales & Marketing covering: lead volume vs.
expectations, lead quality vs. prior shows if I gave you that data,
and the one or two distributor relationships that look most promising
to follow up on.
Don't invent contact details that aren't in the attached files. If a
business-card photo only gave you a name and a company, say so — don't
generate an email.
Output as markdown, ready to ship to the VP within 7 days of show close.
Inputs your analyst attaches: the iCapture or Cvent LeadCapture CSV, the business-card photos / OCR dump, the rep voice-note transcripts, and your distributor territory map.
What your analyst gets back: a four-section report your analyst can review, edit, and ship to you within a week of MHEDA closing. The rep follow-up packets are what drive the actual revenue. Every rep gets a ranked list of their own leads with suggested first touches, instead of a shared CSV nobody opens.
Time saved vs. manual. Post-show admin runs roughly 40 hours per major show in the manual workflow: hand-keying, manual enrichment, manual territory routing. With this prompt, it's closer to 4-6 hours per show for review plus edit. Across four majors and six regionals a year, that's roughly 300 recovered hours, and the report ships inside your JD's 14-day window instead of arriving in late August.
Where this hits the ceiling. Two places. First, when leads arrive in multiple formats (scanner CSV, business-card photos, voice notes), pulling them into one workspace is a 30-minute file-wrangling task before the prompt runs. Worth automating with Zapier or Make around month three. Second, the closed-won attribution that lives 6-12 months downstream needs the CRM to flag the lead source consistently. That's a CRM hygiene project, not a prompt project — flag it for IT in week one.
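The territory-routing step in Section 2 of that prompt is, underneath, a lookup against the territory map, and worth sanity-checking in code once volumes grow. A sketch under assumed column names (`state`, `bu`, `distributor` for the map — placeholders, not your real schema):

```python
import pandas as pd

# Invented leads and territory rows; real exports will need column mapping first.
leads = pd.DataFrame({
    "name": ["Pat", "Sam"],
    "company": ["Acme Forklifts", "Ridge Ag"],
    "state": ["OH", "IA"],
    "bu": ["Material Handling", "Agriculture"],
})
territories = pd.DataFrame({
    "state": ["OH", "IA"],
    "bu": ["Material Handling", "Agriculture"],
    "distributor": ["Buckeye Handling Co.", "Prairie Ag Supply"],
})

# Route each lead to the distributor that owns its state + BU.
routed = leads.merge(territories, on=["state", "bu"], how="left")

# Leads with no matching territory land in the analyst review queue.
review_queue = routed[routed["distributor"].isna()]
print(routed[["name", "distributor"]])
```

The left join is the important choice: an unmatched lead keeps its row with a blank distributor instead of silently disappearing, which mirrors the "Analyst review queue" the prompt asks the model to maintain.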
Prompt 3 — Lead source ROI across channels, by BU, by quarter
JD responsibility: "Analyze lead source performance across digital, trade show, distributor referral, and direct channels. Identify highest-ROI channels by business unit and quarter."
What this means at Superior Tire. Once a quarter, your analyst compares your four lead-source channels: digital (website forms, paid search if you run any, content downloads), trade-show (MHEDA, ProMat, MODEX, regional expos), distributor referral (a distributor sends an end-customer to you directly), and direct (the customer calls Warren). The comparison has historically been apples-to-oranges because each channel lives in a different system. The prompt below normalizes that for you.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp, an industrial-
components manufacturer with three business units (Material Handling,
Construction, Agriculture) and a distributor-channel-led GTM. I need to
compare quarterly lead-source ROI across four channels: digital (website
forms, content downloads, paid search), trade-show (MHEDA, ProMat, MODEX,
regional expos), distributor referral, and direct (inbound to our Warren
office).
I'm attaching:
1. Quarterly leads from HubSpot, segmented by source, CSV.
2. Trade-show leads from the post-show reports I've already built this
quarter (PDF or CSV).
3. The distributor-referral log (a Google Sheet our channel team
maintains).
4. The direct-inbound log (front desk / inbound calls our customer
service team logs).
5. Closed-won deals from CRM this quarter, with source attribution
where it exists, CSV. Many records will have the source field blank
or marked "unknown" — that's expected.
Build me this quarter's lead-source ROI analysis. Output:
Section 1 — Volume and conversion table.
- For each channel × BU combination (so 12 cells total: 4 channels × 3
BUs), output: total leads, leads that became quoted opportunities,
quoted opportunities that closed, average days from lead to quote,
average days from quote to close.
- Flag any cell where the data is incomplete (e.g., "Construction +
direct" might only have 3 records — call that out, don't pretend it's
a trend).
Section 2 — Cost per channel.
- For each channel where I gave you a cost (trade-show booth costs,
paid-search spend if applicable, distributor referral fees if we pay
them), calculate cost-per-qualified-lead and cost-per-closed-won.
- For channels where I didn't give you a cost (organic digital,
customer-service inbound), say so and ask what data I'd need to
attach to close the gap.
Section 3 — Strategic recommendations.
- Three to five concrete recommendations for next quarter, based on
the data. Tone: conservative, mid-market manufacturing. Examples
of the tone we want:
- "Material Handling distributor-referrals converted at 3x the rate
of digital leads this quarter. Recommend redirecting some of next
quarter's digital ad spend to a distributor enablement push."
- NOT: "We need to optimize our funnel velocity through PLG-driven
growth loops." (We don't speak that way.)
Section 4 — One-page executive summary for the VP Sales & Marketing.
- Lead with the most important finding. End with the recommended Q3
action. No charts; this is a read-only one-pager.
Output as markdown with clean tables and conservative narrative.
Inputs your analyst attaches: the HubSpot quarterly lead source export, the post-show reports built using Prompt 2, the distributor-referral log, the direct-inbound log, and the CRM closed-won export with source attribution.
What your analyst gets back: a four-section quarterly ROI report your analyst can edit and ship in under three hours. The cost-per-channel math is what earns this report a slide in your QBR deck.
Time saved vs. manual. A clean lead-source ROI comparison in Excel runs 12-16 hours per quarter — most of that is the data normalization step (HubSpot exports don't have the same column structure as the distributor-referral Google Sheet). With this prompt, runtime drops to roughly 2-3 hours per quarter for review plus edit. That's about 50 recovered hours per year, but the bigger win is that the comparison happens at all, with the same definitions every quarter.
Where this hits the ceiling. The closed-won attribution is only as good as your CRM's "lead source" field. If 40% of records are blank, the prompt can only tell you what the 60% says. That's a CRM-hygiene conversation with IT, separate from the prompt itself. Flag it for the month-two CRM cleanup pass.
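The channel-by-BU volume table in Section 1 of that prompt reduces to a single groupby once the four exports share one schema, which is why the normalization step dominates the manual workflow. A sketch with invented records and assumed columns (`channel`, `bu`, `quoted`, `closed_won`):

```python
import pandas as pd

# Invented, already-normalized lead records; real exports need a mapping pass first.
leads = pd.DataFrame({
    "channel": ["digital", "digital", "trade-show", "trade-show"],
    "bu": ["Material Handling"] * 4,
    "quoted": [True, False, True, True],
    "closed_won": [False, False, True, False],
})

# One row per channel x BU cell: volume, quoted count, closed-won count.
table = leads.groupby(["channel", "bu"]).agg(
    total_leads=("quoted", "size"),
    quoted=("quoted", "sum"),
    closed_won=("closed_won", "sum"),
).reset_index()
print(table)
```

The point of the sketch is the definition, not the code: "quoted" and "closed-won" are counted the same way every quarter, which is the consistency the manual Excel version keeps losing.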
Prompt 4 — Technical content performance and content-gap analysis
JD responsibility: "Maintain technical content performance dashboards for our datasheets, application guides, and case studies. Identify content gaps where distributor sell-through is underperforming benchmark."
What this means at Superior Tire. When Joe Terrasi asks your analyst on a Friday afternoon for the monthly content report, the real question is: which datasheets, application guides, and case studies are correlated with distributor sell-through? And conversely: which distributors are under-engaging with the content that's known to drive sell-through? The prompt below answers both, from existing HubSpot asset-download exports and your Epicor sell-through export.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp. Our marketing
team produces three categories of technical content: product datasheets
(per SKU), application guides (per use case — e.g., "polyurethane wheels
for cold-storage forklifts"), and case studies (per customer or end-use).
Our distributors download these from a gated section of our website. Our
Brand Manager (Joe Terrasi) wants to know which content is driving sell-
through and where the content gaps are.
I'm attaching:
1. The last 90 days of asset downloads from HubSpot, CSV. Columns:
asset name, asset type, download date, downloading user, downloading
user's company (if known), downloading user's BU interest if
captured.
2. Last 90 days of distributor sell-through by SKU by BU, Epicor export.
3. The current content library inventory (CSV listing every datasheet,
application guide, and case study by name and BU).
Build me this month's content performance + content-gap analysis.
Output:
Section 1 — Content performance by BU.
- For each business unit (Material Handling, Construction, Agriculture),
list the top 10 most-downloaded assets in the last 90 days. For each:
total downloads, unique downloading distributors, and a flag for
whether the distributors downloading it also showed sell-through
growth on related SKUs.
Section 2 — Content the high-performers download.
- Identify the top 5 distributors by sell-through growth this quarter
in each BU. List what content they downloaded in the same window.
Pattern-match: is there content these top performers downloaded that
the rest of the channel hasn't?
Section 3 — Distributor enablement gaps.
- List every distributor whose sell-through is below median for their
BU AND who has downloaded fewer than 3 assets in the last 90 days.
For each, recommend one specific asset they should be sent (based on
what their higher-performing peers downloaded).
- Output a Brand-Manager-ready email template (one paragraph) Joe can
send to those distributors, in plain mid-market manufacturing
language. No marketing buzzwords.
Section 4 — Content the library is missing.
- Identify any high-growth product family where the existing content
library has fewer than 3 assets. Flag these as content-gap
priorities for the next quarter's content roadmap.
Output as markdown. Conservative tone. Don't claim causation where you
only have correlation — say "correlated with" or "co-occurred with,"
not "drove" or "caused." The Brand Manager will read this and he's
been here 18 years; he'll catch overclaim immediately.
Inputs your analyst attaches: the HubSpot asset-download CSV (90 days), the Epicor sell-through export, the content-library inventory CSV.
What your analyst gets back: four sections of content analysis your analyst can edit and pass to Joe Terrasi by Friday afternoon. The distributor enablement-gap section is what Joe will actually use. It's a concrete list of "send this distributor this asset," not a generic content dashboard.
Time saved vs. manual. This analysis typically doesn't happen at all at your scale. The Friday content report ships as "directional only" with a caveat. With this prompt, the analysis ships monthly in roughly 2 hours of analyst time per run. The recovery here is net-new value, plus about 10-15 hours per quarter the analyst no longer spends apologizing for "directional only."
Where this hits the ceiling. The asset-download data is only as good as HubSpot's user-level tracking. If HubSpot is tracking downloads at the company level instead of the user (distributor rep) level, you can't see which reps inside a distributor are under-engaging. That's a HubSpot tier conversation (Pro vs. Starter), worth having in month three if the prompt is producing value.
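The enablement-gap rule in Section 3 of that prompt (below-median sell-through AND fewer than three downloads) is a two-condition filter your analyst can verify in a few lines. A sketch with invented per-distributor aggregates and assumed column names:

```python
import pandas as pd

# Hypothetical aggregates per distributor within one BU.
stats = pd.DataFrame({
    "distributor": ["A", "B", "C", "D"],
    "sell_through": [120_000, 80_000, 60_000, 140_000],
    "downloads_90d": [5, 2, 1, 7],
})

# Enablement gap: below-median sell-through AND under 3 downloads in 90 days.
median = stats["sell_through"].median()  # 100_000 for this toy data
gap = stats[(stats["sell_through"] < median) & (stats["downloads_90d"] < 3)]

print(gap["distributor"].tolist())  # → ['B', 'C']
```

Note the AND: a low-engagement distributor who is still selling well doesn't make the list, which keeps Joe's outreach focused on accounts where the content gap plausibly matters.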
Prompt 5 — Competitive pricing intelligence
JD responsibility: "Compile pricing competitive intelligence on key product lines, drawing from public price lists, distributor feedback, and industry benchmarks."
What this means at Superior Tire. Pricing in industrial wear products is mostly request-a-quote. Public price lists are inconsistent. Most of your competitive-pricing intel comes from distributors mentioning a competitor's quote in passing. Your analyst's job is to systematize that intel into a monthly briefing for you and the BU GMs. The prompt below assumes your analyst has set up a structured distributor-feedback intake (a Tally form, a Typeform, or a Google Form — Section 5 covers the setup) and has a folder of any public competitor pricing PDFs they can find.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp. We sell solid
polyurethane and rubber industrial wear products. Pricing in our industry
is mostly request-a-quote; public price lists are inconsistent. Our
competitive intel comes from three sources: (1) any public competitor
pricing or catalog PDFs we can find, (2) structured distributor-feedback
form submissions where distributors flag competitive quotes they've
encountered, and (3) industry benchmark reports if we have them.
I'm attaching:
1. The last 30 days of distributor-feedback form submissions (CSV).
Columns: date, distributor, BU, product line, competitor name,
competitor quoted price, our quoted price, customer decision
(won/lost/pending), free-text comment.
2. Any public competitor pricing PDFs I've collected this month (named
by competitor).
3. Last quarter's competitive-pricing briefing for format reference
(PDF).
Build me this month's competitive-pricing intelligence briefing for the
VP Sales & Marketing. Output:
Section 1 — Headline pricing pressure summary.
- One paragraph: where is competitive pressure highest this month?
Which BU, which product line, which competitor?
Section 2 — Quote-level findings.
- A table of every distributor-reported quote this month, with:
product line, competitor, competitor price, our price, gap (%),
outcome (won/lost/pending), and the distributor's free-text comment
on what drove the decision.
- Sort by largest gap descending.
Section 3 — Public-pricing findings.
- For each competitor pricing PDF I attached, summarize: has any SKU
or product family changed list price vs. our last briefing? Have they
added any new SKU that overlaps our line?
Section 4 — Recommendations.
- Two to four conservative recommendations. Examples of tone:
- "Competitor A's polyurethane caster price dropped 8% list. Three
distributors in Material Handling cited this in lost quotes. Worth
a pricing-team review of our caster line list before next quarter."
- NOT: "Optimize our pricing funnel for competitive positioning."
Section 5 — One-paragraph executive summary for the VP.
Output as markdown. Conservative tone. Where data is thin (only 1 or 2
quote reports for a product line), say so — don't extrapolate.
Inputs your analyst attaches: the distributor-feedback form export, any public competitor pricing PDFs collected during the month, and the prior month's briefing for format reference.
What your analyst gets back: a five-section monthly briefing your analyst can edit and ship. The quote-level findings table is what earns this briefing a standing slot in your monthly leadership meeting.
Time saved vs. manual. Today the analysis runs reactively — your analyst chases each competitive mention as it arrives, with no consolidation. The prompt consolidates it into a monthly artifact in about 2 hours of analyst time. Recovery is roughly 12-15 hours per quarter, plus the net-new value of having a monthly briefing at all.
Where this hits the ceiling. The prompt depends on distributors actually filling out the feedback form. The first month's submissions will be sparse. Tell your analyst to expect 30-60 days of form-rollout before the data is useful. That's a process problem, not a prompt problem.
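The gap calculation in Section 2 of the prompt is simple enough that your analyst can spot-check the model's table against the raw form export. A minimal sketch, assuming illustrative column names (`our_quoted_price`, `competitor_quoted_price` — match these to your actual form fields):

```python
import csv
from io import StringIO

def quote_gaps(csv_text):
    """Parse the distributor-feedback export and return quote rows
    sorted by price gap, largest first (mirrors Section 2 of the prompt).
    Gap % = (our price - competitor price) / competitor price."""
    rows = []
    for r in csv.DictReader(StringIO(csv_text)):
        ours = float(r["our_quoted_price"])
        theirs = float(r["competitor_quoted_price"])
        r["gap_pct"] = round(100 * (ours - theirs) / theirs, 1)
        rows.append(r)
    # Sort by absolute gap so big discounts and big premiums both surface.
    return sorted(rows, key=lambda r: abs(r["gap_pct"]), reverse=True)

# Two hypothetical submissions, for illustration only.
sample = """distributor,competitor,competitor_quoted_price,our_quoted_price,decision
Acme MH,Competitor A,1000,1120,lost
Valley Ag,Competitor B,800,780,won
"""
for r in quote_gaps(sample):
    print(r["distributor"], r["gap_pct"])  # Acme MH 12.0, then Valley Ag -2.5
```

If the model's table and this arithmetic disagree, trust the arithmetic and flag the discrepancy in the briefing.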
Prompt 6 — Quarterly business review, ad-hoc pulls and executive one-pagers
JD responsibility: "Support quarterly business reviews with VP Sales & Marketing, including ad-hoc data pulls and one-page executive summaries."
What this means at Superior Tire. Every quarter, your QBR cycle generates two kinds of work for your analyst. One: the templated deck refresh (same questions, same charts, new numbers). Two: the ad-hoc "can you also pull X for Wednesday's meeting" requests. The prompt below handles the executive-summary side of the QBR — the one-pagers per topic. The ad-hoc pulls themselves are usually a re-run of Prompt 1 or Prompt 3 with a different filter.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp preparing
materials for the upcoming Quarterly Business Review with our VP Sales
& Marketing. Your role is to translate raw data into one-page executive
summaries that a senior leader can read in five minutes and act on
immediately.
I'm attaching:
1. This quarter's distributor scorecard outputs (the three BU
scorecards from Prompt 1).
2. This quarter's lead-source ROI output (from Prompt 3).
3. This quarter's competitive-pricing briefing (from Prompt 5).
4. Any topic-specific ad-hoc data the VP has asked for this quarter
(varies — could be a specific distributor deep dive, a
product-line profitability cut, a trade-show post-mortem, etc.).
5. Last quarter's QBR one-pagers for format reference (PDF).
Build me this quarter's QBR one-pager set. One page per topic. Topics:
1. Distributor channel health, all three BUs combined.
2. Lead-source ROI quarter-over-quarter.
3. Competitive pricing pressure.
4. Whatever ad-hoc topics the VP requested.
Per one-pager, follow this structure:
- Headline (one sentence — the most important finding).
- Three to five bullet points of supporting evidence (numbers, not
adjectives).
- One paragraph of "what changed since last quarter" (the delta is
often more important than the absolute number).
- One paragraph of "recommended action for next quarter" — conservative,
specific, with an owner named where possible.
Length per one-pager: max 400 words. Tone: senior mid-market manufacturing,
conservative, no SaaS jargon. The VP has been in industrial sales for 20+
years; over-claiming will lose his trust faster than under-claiming.
Where the data underlying a one-pager is incomplete or noisy, say so
in the body. The VP would rather see "Q1 data on direct inbound is
incomplete — recommend a process fix before Q2" than see a fabricated
number with no caveat.
Output each one-pager as its own markdown section, separated by horizontal
rules.
Inputs your analyst attaches: the quarter's distributor scorecard outputs from Prompt 1, the quarterly lead-source ROI output from Prompt 3, the competitive-pricing briefing from Prompt 5, and any ad-hoc topic data you've requested.
What your analyst gets back: four to six one-page executive summaries, each editable in under 30 minutes, ready to drop into the QBR deck. The "what changed since last quarter" paragraph is what earns each one a slide.
Time saved vs. manual. A clean QBR deck refresh runs 18-24 hours of analyst time per quarter. With this prompt, run-time drops to roughly 3-4 hours per quarter. Conservative recovery: about 60 hours per year.
Where this hits the ceiling. The one-pagers are only as good as the underlying scorecards. If Prompts 1, 3, and 5 are still being tuned, the QBR one-pagers inherit the gaps. Plan for the first QBR cycle (your analyst's first 90 days) to take 8-10 hours of editing. By quarter three, it should be a 3-4 hour exercise.
Prompt 7 — ERP and CRM data extraction (working with IT)
JD responsibility: "Partner with IT and Operations to extract data from our ERP and CRM systems into reportable formats."
What this means at Superior Tire. Your analyst isn't an ERP engineer. They'll spend the first 30 days asking your IT team for read access to specific tables and getting back CSVs they then have to clean. The prompt below is one your analyst uses with your IT team in the room. It's a translation layer between "what the JD says I need" and "what your IT team can actually expose without a six-figure project."
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp working with
the IT team on data access. Our ERP is [Epicor / NetSuite / SAP — fill in
the actual one]; our CRM is [HubSpot / Salesforce / Microsoft Dynamics —
fill in]. I need help translating my reporting needs into specific
data-extraction requests IT can fulfill, ideally as scheduled CSV exports
or as read-only views on a reporting database.
I'm attaching:
1. The eight responsibilities from my JD (text).
2. The current IT-side data-access policy (if we have one) or a
description of what data I can currently see.
3. A list of any existing scheduled exports or BI integrations IT has
already set up.
Help me build:
Section 1 — A prioritized data-access wish list.
For each of the eight JD responsibilities, list the exact tables / fields /
exports I need from the ERP and CRM. Group by:
- Already accessible (no IT work needed)
- One-shot export (IT runs a query once, hands me a CSV)
- Scheduled export (IT sets up a recurring CSV drop — daily / weekly /
monthly)
- Read-only view (IT exposes a view I can query directly)
Section 2 — A request memo to send to IT.
A one-page memo I can send to our IT lead. Plain language, no jargon.
For each ask, name the business reason ("this enables the monthly
distributor scorecard") so IT can prioritize. Conservative ask: don't
ask for a full ERP read-replica in week one; ask for the three or four
exports that unlock month-one work, and tee up the rest for month two
or three.
Section 3 — A list of "things I can do without IT."
Many of my responsibilities can be done from existing HubSpot exports
and Finance spreadsheets I already have. List those explicitly so I
don't block on IT for work I could ship today.
Output as markdown. Tone: collaborative, not demanding. IT teams at
60-year-old family-owned manufacturers are usually one or two people
who are heads-down on production-critical work; they will say yes
faster to a small, specific, well-reasoned ask than a big "give me
everything" ask.
Inputs your analyst attaches: the eight JD responsibilities (or a copy of the JD), your current IT-side data-access policy if one exists, and a list of any existing scheduled exports already in place.
What your analyst gets back: a three-section memo your analyst takes to your IT lead in week one. The "things I can do without IT" section is what lets your analyst ship the first scorecard from Prompt 1 without waiting on an IT ticket.
Time saved vs. manual. This is less about time saved than about not getting stuck in month one. The most common failure pattern at this size is: analyst starts, asks IT for everything, IT can't deliver, analyst spends month one waiting. The prompt avoids that by separating the exports needed for month-one work from the exports that can wait until month three. Recovery: 30-60 days of analyst productivity that otherwise gets lost to IT-ticket purgatory.
Where this hits the ceiling. The prompt is a planning tool. The actual IT work, exposing the ERP views and setting up scheduled exports, still has to happen. That's a real conversation with your IT lead, not a prompt output.
Prompt 8 — Marketing automation reporting (HubSpot or equivalent)
JD responsibility: "Manage and improve our marketing automation reporting (HubSpot or equivalent)."
What this means at Superior Tire. Whatever marketing automation platform you're on (HubSpot most likely, Pardot possible), your analyst inherits the dashboards your last analyst built. They are probably out of date, possibly tracking metrics nobody on your team uses, and probably missing the metrics you actually need (per-BU, per-distributor, per-channel cuts). The prompt below is for the audit and rebuild.
The prompt:
You are a marketing analyst at Superior Tire & Rubber Corp inheriting our
existing HubSpot (or Pardot) reporting setup. Our marketing automation
platform has dashboards from previous analysts; some are useful, most are
out of date. My job is to audit what's there, decide what to keep, decide
what to rebuild, and propose a new dashboard set that matches the eight
responsibilities in my JD.
I'm attaching:
1. A list of every dashboard currently in HubSpot (or Pardot) with its
name, owner, last-modified date, and a one-sentence description of
what it shows (export from HubSpot's dashboard library or a manual
inventory).
2. A list of every report inside each dashboard.
3. The eight responsibilities from my JD.
4. Last quarter's QBR deck (PDF) so you can see which metrics our VP
actually reads.
Build me:
Section 1 — Audit table.
For every existing dashboard, output: name, last modified, owner,
recommended action (keep as-is, refresh, deprecate, rebuild), and a
one-sentence reason.
Section 2 — Deprecation list.
A clean list of dashboards I should remove from rotation, with a
one-line rationale per. Include the diplomatic message I should send to
the dashboard's original owner ("Hi [name], I'm auditing our reporting
setup as part of my onboarding — your [dashboard name] from [date]
doesn't seem to be in active use; OK if I archive it?").
Section 3 — Rebuild list.
For dashboards I'm keeping but need to refresh, output: name, what's
broken or stale, and the proposed fix.
Section 4 — New dashboard proposals.
For each of the eight JD responsibilities NOT covered by an existing
dashboard, propose a new dashboard. For each new dashboard, output:
name, source data, key metrics (3-5 per dashboard), recommended
refresh cadence, and audience (who consumes it).
Section 5 — Implementation order.
A prioritized list: which dashboards to ship in week 1, week 2-4,
and month 2. Bias toward the dashboards that unlock the JD
responsibilities for the monthly scorecards (Prompt 1), the post-show
attribution (Prompt 2), and the QBR (Prompt 6).
Output as markdown. Tone: senior. The current HubSpot setup is
probably reasonable; don't trash anyone's work, just improve it.
Inputs your analyst attaches: the HubSpot or Pardot dashboard inventory (export or manual list), the report inventory inside each dashboard, the JD, and the last QBR deck.
What your analyst gets back: a five-section audit your analyst uses to plan their first month of HubSpot work. The diplomatic-message templates in Section 2 are what keep your analyst from stepping on the toes of whoever built the previous dashboards.
Time saved vs. manual. A clean HubSpot audit from scratch runs 8-12 hours of analyst time. With this prompt, run-time drops to roughly 3 hours — about 10 hours of onboarding time recovered, plus a faster ramp for your analyst in their first month.
Where this hits the ceiling. The audit is a planning artifact. The actual rebuild of each dashboard still has to happen inside HubSpot, and HubSpot's dashboard builder has limits (some cross-object reports require Operations Hub tier). If the new dashboards your analyst proposes can't be built in your current tier, that's a tier-upgrade conversation, not a prompt problem.
Summary table — the eight prompts
| # | JD responsibility | Prompt run-time | Time saved/yr | When in month one |
|---|---|---|---|---|
| 1 | Monthly distributor scorecards (per BU) | ~4 hrs/run | ~500-600 hrs | Week 2 |
| 2 | Trade-show pipeline attribution (post-show in 14 days) | ~4-6 hrs/show | ~300 hrs | Week 3, immediately post-MHEDA |
| 3 | Lead source ROI (quarterly, by BU) | ~2-3 hrs/quarter | ~50 hrs | Month 2 |
| 4 | Technical content performance + gap analysis | ~2 hrs/month | net new + ~10-15 hrs/qtr | Month 2 |
| 5 | Competitive pricing intelligence | ~2 hrs/month | net new + ~12-15 hrs/qtr | Month 2, after feedback form is live |
| 6 | QBR ad-hoc pulls + executive one-pagers | ~3-4 hrs/quarter | ~60 hrs | First QBR after start |
| 7 | ERP and CRM data extraction (IT memo) | ~2 hrs (one-time) | unblocks month 1 | Week 1, day 2 |
| 8 | Marketing automation reporting audit | ~3 hrs (one-time) | ~10 hrs onboarding | Week 1, day 3-4 |
Total conservative recovery: roughly 900-1,000 analyst-hours per year, plus the net-new value of analyses (content performance, competitive pricing, win-loss synthesis where applicable) that weren't happening before.
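The headline total can be reproduced from the table's own per-prompt figures (quarterly entries annualized at four runs per year; the net-new analyses for Prompts 4 and 5 are excluded here because the total counts them separately):

```python
# Annual recovery ranges (hours) from the summary table above, excluding
# the net-new analyses (Prompts 4 and 5), which the total counts separately.
core = {
    "1 distributor scorecards": (500, 600),
    "2 trade-show attribution": (300, 300),
    "3 lead-source ROI": (50, 50),
    "6 QBR one-pagers": (60, 60),
}
low = sum(lo for lo, _ in core.values())
high = sum(hi for _, hi in core.values())
print(low, high)  # → 910 1010, i.e. the "roughly 900-1,000" claimed
```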
Section 06
30 / 60 / 90 day rollout
This is the rollout for a real human analyst using the prompts in Section 4. Tools get added only after the prompts prove what's worth automating; none of them show up on day one.
Days 1-30, prompts only
By day 30, your analyst is shipping the Material Handling distributor scorecard end-to-end via Prompt 1, the post-MHEDA report via Prompt 2, and the HubSpot audit via Prompt 8. Their entire workflow runs inside Claude Cowork or ChatGPT Codex with the data exports they already have access to. No Zapier flows, no Power BI, no new SaaS. One paid seat (Claude Pro or ChatGPT Plus) and the prompt library.
Week 1.
- Day 1: welcome plus access to existing systems (HubSpot, CRM, ERP read-only if available, Slack, shared drives). Claude Pro or ChatGPT Plus subscription provisioned. Prompt library handed over.
- Day 2: your analyst runs Prompt 7 to draft the IT data-access memo. Walks it to your IT lead. The conversation is now scoped and specific instead of "give me everything."
- Day 3-4: your analyst runs Prompt 8 to audit the existing HubSpot setup. Diplomatic messages go out to former dashboard owners. Audit lands on your desk by end of week.
- Day 5: your analyst pulls the existing Epicor sell-through export and rebate spreadsheet. Even if IT can't expose a new view yet, the existing monthly Finance export is usually enough to run Prompt 1's first pass.
Week 2.
- Run Prompt 1 for the Material Handling BU. First draft of the scorecard ships to you by end of week.
- Review with you, edit, ship to BU leadership.
- Prompt 1 is now in the analyst's monthly cadence.
Week 3.
- MHEDA 2026 leads are still hot. Run Prompt 2 on the booth-scan CSV plus business-card dump plus rep voice notes.
- Post-MHEDA pipeline attribution report drafted, edited, shipped to you. Inside the 14-day JD window.
- Rep follow-up packets distributed to the sales team.
Week 4.
- Construction and Agriculture BU scorecards scoped (same prompt, different attachments).
- First competitive-pricing feedback form (Tally or Typeform) stood up. Zero SaaS cost. Distributed to your channel team.
Day 30 deliverable: Material Handling monthly scorecard live and scheduled. Post-MHEDA attribution report shipped. HubSpot dashboard audit complete. IT data-access memo accepted. Construction BU scorecard underway. Tooling spend month one: one Claude Pro or ChatGPT Plus seat. That's it.
Days 31-60, the first automation worth paying for
By day 60, all three BU scorecards are running monthly. Lead-source ROI report (Prompt 3) shipped for Q2. Technical-content performance (Prompt 4) reporting live. Competitive pricing intelligence (Prompt 5) ships its first monthly briefing. The first lightweight automation, either Zapier or Make, gets added to remove the most repetitive data-prep step.
Weeks 5-6.
- Construction and Agriculture BU scorecards running via Prompt 1.
- All three BUs covered monthly.
- By the second month, the file prep before each prompt run becomes the obvious thing to automate: pulling the Epicor CSV from Finance's shared drive, renaming it, and dropping it into the right Claude Cowork or ChatGPT project.
- First automation: a single Zapier (or Make) flow that watches the Finance shared drive for the monthly Epicor export and drops it into the right project workspace automatically. Cost: roughly $50/month for the Zapier seat. Hours saved: about 3 per month.
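If your IT team would rather not add a Zapier seat, the same file-prep flow fits in a short script run on a monthly schedule. A minimal sketch, assuming hypothetical paths and a hypothetical filename pattern for the Finance export — adjust both to your actual share layout:

```python
"""Minimal alternative to the Zapier flow: find the newest Epicor
sell-through export on the Finance share and stage it under a
predictable name for the monthly prompt run."""
import shutil
from datetime import date
from pathlib import Path

def stage_latest_export(finance_dir: Path, staging_dir: Path) -> Path:
    # Match any CSV whose name mentions the sell-through export
    # (pattern is illustrative -- match your real naming convention).
    exports = sorted(
        finance_dir.glob("*sell*through*.csv"),
        key=lambda p: p.stat().st_mtime,
    )
    if not exports:
        raise FileNotFoundError("no sell-through export found this month")
    staging_dir.mkdir(parents=True, exist_ok=True)
    dest = staging_dir / f"epicor_sell_through_{date.today():%Y_%m}.csv"
    shutil.copy2(exports[-1], dest)  # newest export wins
    return dest
```

The analyst then attaches the staged file to the Prompt 1 project each month; the rename is what makes the attachment step mindless.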
Weeks 7-8.
- Lead-source ROI quarterly report shipped using Prompt 3.
- Technical-content performance report shipped using Prompt 4. Joe Terrasi gets his Friday content report on a Friday, with confidence.
- Competitive pricing pilot live using Prompt 5. First briefing in your inbox in week 8.
- First QBR cycle: prompts 1, 3, 5 outputs feed Prompt 6 (QBR one-pagers).
Day 60 deliverable: five prompts running on cadence. One Zapier (or Make) flow live at roughly $50/month, paying for itself in saved file-prep hours. First competitive-pricing briefing shipped. Tooling spend month two: one Claude Pro seat plus one Zapier seat, about $70/month.
Days 61-90, dashboards when they pay for themselves
By day 90, the analyst is shipping all eight prompt workflows from Section 4 on a regular cadence. Two or three reports get run so often by so many viewers that the prompt itself becomes inefficient. Those graduate to a dashboard. A Slack-bot / self-service question layer is scoped only if the volume of ad-hoc requests justifies it.
Weeks 9-10.
- Distributor onboarding tracking is added using a variant of Prompt 1, focused on the 12-month cohort view for new distributors.
- Competitive win-loss synthesis is added using a variant of Prompt 5, focused on the CRM closed-lost field.
Weeks 11-12.
- First dashboard: the Material Handling monthly scorecard moves from "your analyst re-runs Prompt 1 every month" to "Looker Studio dashboard refreshes nightly, the prompt only runs for the narrative paragraph." Reason: you and the BU GM both check it weekly, and a static monthly report isn't fast enough.
- Second dashboard: the post-MHEDA attribution view (Prompt 2 output) moves to a Looker Studio dashboard with auto-refresh from the CRM closed-won pipeline, so the 6-12 month closed-won attribution lives in one place.
- Looker Studio is free. BI cost is zero unless you choose Power BI for the Microsoft 365 fit (about $14/user/month).
- If ad-hoc question volume from your sales team is still high (10+ per week), a self-service Slack bot is scoped for month four, and only if the prompts and dashboards together haven't dropped the volume below five per week.
Day 90 deliverable: all eight prompts in cadence. Two automations live (file prep plus scorecard dashboard). Optional third in scope (Slack bot). Tooling spend month three: Claude Pro plus Zapier plus Looker Studio (free), roughly $90/month. With Power BI, about $160/month. By month six, if Apollo for enrichment and Supabase for a staging table both pay for themselves, you're in the $400-$650/month range described in tool-stack.md.
Day 90 sanity check. Pull a one-week time log from your analyst. If they're still spending 60-70% of their time on manual data pulls, the rollout hasn't actually moved the ratio: either the prompts are being under-used or the automation choices were premature. Debug the prompts before adding more tools.
Section 07
Tools to add once prompts have proved their worth
Months one through three of this rollout cost roughly $20-$90/month in tooling. That's on purpose. The vendors in the table below get added one at a time, and only after your analyst is demonstrably doing the manual step they replace too often.
This section is the summary. The full vendor breakdown (pricing pages, alternatives, affiliate notes) is in the companion tool-stack.md.
| Category | When to add it | Primary recommendation | Approximate cost |
|---|---|---|---|
| Workflow automation | Month 2, after one prompt has been run 4+ times manually with the same file prep | Zapier Team or Make Pro | $50-$100/month |
| BI / dashboarding | Month 3, after a monthly scorecard has 3+ recurring viewers checking weekly | Looker Studio (free) or Power BI ($14/user/mo) | $0-$70/month |
| Data warehouse | Month 3-4, once 2+ pipelines write to the same staging area | Supabase (free tier) | $0-$25/month |
| Contact enrichment | Month 3, ahead of the next major trade show | Instantly SuperSearch (existing) or Apollo | $0-$99/month |
| Data pipeline / ETL | Month 4+, only for heavy ERP lifts that Zapier can't cover | n8n (self-hosted) or Airbyte Cloud | $0-$200/month |
| Slack bot for ad-hoc Q | Month 4+, only if the prompts-and-dashboards combo hasn't reduced ad-hoc volume below ~5/week | Custom build on Slack plus the model vendor's API | $0-$50/month build, then near-zero run |
Total tooling spend by month 6, if every category graduates: roughly $400-$650/month, consistent with the tool-stack.md recommended tier. Total tooling spend in months 1-3: roughly $20-$90/month. The whole point of this proposal is that you start at the months 1-3 number.
Two notes on choice. On Claude Cowork vs. ChatGPT Codex, both work for every prompt in Section 4. Claude Cowork (Claude.ai with Projects) tends to handle long-context attachments slightly better; ChatGPT Codex / Canvas tends to feel familiar to analysts coming out of Excel-heavy roles. The prompts are written to be platform-neutral, so the analyst picks. On existing subscriptions: if you already have HubSpot Pro, iCapture for trade-show capture, Instantly for outbound — keep them. Nothing in this proposal asks you to rip and replace what's working.
For the full vendor breakdown, every alternative, every pricing page, every affiliate note, see tool-stack.md.
Section 08
Cost-vs-continued-hire comparison
| Scenario | Year-1 cost | What you get | What you don't get |
|---|---|---|---|
| Hire the analyst, no AI workflow | ~$90,000-$105,000 (fully loaded comp) | One analyst doing the typical 80/20 split (urgent admin / strategic analysis) | Distributor scorecards two weeks late; post-MHEDA report in August; content-gap analysis skipped; win-loss skipped |
| Hire the analyst + Section 4 prompt playbook (this proposal, month one) | Comp + $497 (this proposal) + ~$240/yr Claude Pro or ChatGPT Plus seat | Same analyst running the eight prompts; scorecards on cadence; post-MHEDA in 14 days; content + competitive pricing running monthly | n/a in month one. Tools come in later if they justify themselves. |
| Add tools by month 6 as they prove themselves | Comp + ~$5,000-$8,000/yr tooling + this proposal | Everything above, plus dashboard self-service for top 2-3 reports, Zapier-automated file prep, optional Slack bot | n/a |
| Don't hire, try to run with prompts alone | ~$240/yr for one seat + this proposal | AI-generated reports with no human review | Distributor relationships, judgment, escalation handling, board credibility — the work the JD actually exists to do |
Conservative ROI math.
- Month 1-3 tooling spend: roughly $60-$270 (one Claude Pro seat, plus an optional Zapier seat from month two; the one-time $497 for this proposal is separate).
- Month 4-6 tooling spend: roughly $400-$650/month only if every tool justifies itself. If only Zapier and Looker Studio prove their worth, you're at $50-$70/month indefinitely.
- Analyst time recovered (conservative, from the Section 4 summary table): 900-1,000 hours/year × $48/hour fully loaded = roughly $43,000-$48,000/year. Even at the most conservative rounding (~470 hours × $48 ≈ $22,500), the math holds.
- Soft costs recovered: the post-MHEDA report ships on time, the QBR deck takes hours instead of days, and the content-gap analysis exists at all. Call it $10,000/year, a genuinely conservative figure.
- Total recovered value: roughly $32,500/year at the conservative floor; $55,000/year at the realistic mid-point.
- Year-1 payback: 2-3 weeks from go-live at month-one cost levels (one Claude Pro seat). The payback period lengthens as additional tools come in, but stays inside a quarter even at the full tier.
- Year-2+ ROI: 50-100x annually on the month-one investment; 4-6x annually on the spend if you scale to the full $400-$650/month tier.
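The arithmetic behind those bullets, using the proposal's own figures (nothing below is new data; the soft-cost figure is the estimate stated above):

```python
RATE = 48                      # fully loaded analyst $/hour, from the bullets
hours = (900, 1_000)           # conservative annual recovery, Section 4 table
time_value = tuple(h * RATE for h in hours)
print(time_value)              # → (43200, 48000): the $43K-$48K/yr figure
soft = 10_000                  # on-time reports, faster QBR prep (estimate)
midpoint = sum(time_value) // 2 + soft
print(midpoint)                # → 55600, close to the $55K mid-point quoted
```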
This math intentionally does not count revenue uplift from better distributor decisions, faster trade-show follow-up, or better content targeting. Those are real but harder to attribute, and we'd rather under-promise. If those second-order effects deliver even 1% revenue uplift on a $60M revenue base, that's another $600K / year in marginal contribution.
What to take to your CFO: the month-one ask is roughly $240 for a year of Claude Pro for one seat. The decision to scale tools to $5,000-$8,000/year is a month-four conversation, made with three months of evidence in hand. That's the kind of phased commitment that gets approved at family-owned manufacturers without a board fight.
Section 09
Implementation services option
You can run this rollout in-house with your analyst and IT. Many of our buyers do.
If you'd rather hand the rollout to someone who's done it before, NunnCurtis Labs offers implementation services as a natural next step. Three tiers:
Tier 1, Setup Assist — $5,000 (one-time). We help your analyst build the prompt library for all eight JD responsibilities, customized to your actual data exports. We sit alongside them for the first scorecard run (Prompt 1) and the first post-show attribution run (Prompt 2). Two weeks of work. Your analyst owns the prompt library afterward, including any edits to fit your tone and data structure.
Tier 2, 30/60/90 Co-build — $15,000 (one-time). We pair with your analyst for the full 90-day graduation in Section 5. By day 90, the eight prompts are tuned to your data, the first automation is live, and the first dashboard is built. Includes one full day of analyst training and a written runbook for every prompt and every workflow.
Tier 3, Fractional Marketing Ops — $5,000/month, 6-month minimum. We run the rollout and stay on as a fractional resource. Useful if your analyst start date slips, if your analyst needs sustained ramp support beyond 90 days, or if you'd like NunnCurtis Labs to handle the prompt-tuning and the tools rollout while your analyst focuses on distributor relationships and executive reporting. Includes a monthly review with you.
All three tiers are optional. The $497 you're paying for this proposal gives you everything you need to do it yourself.
Section 10
Money-back guarantee
Money-back guarantee. No time limit.
If this proposal doesn't give your team at least 3 ideas you can actually put to work, email us anytime and we'll refund the full amount. No questions, no forms. We'd rather build a long relationship than win one transaction.
Section 11
What to do this week
Six concrete things you can do this week, before your analyst even starts:
- Give your new analyst a Claude Pro or ChatGPT Plus subscription before day one. Roughly $20-$30/month. That's the whole stack for month one. Either one works; both accept file attachments and retain context across a session. If your IT team has a preference for one vendor or the other, defer to them — the prompts in Section 4 are platform-neutral.
- Confirm the data sources. Walk down to your IT lead and ask three questions: (a) Can we expose a read-only export or scheduled CSV of the ERP sell-through table for the analyst? (b) Is the rebate accrual spreadsheet owned by a named person in Finance, or is it nobody's job? (c) Does HubSpot or Pardot currently track asset downloads at the user level, or only at the company level? Each "no" is a 1-2 week unblock you can start now. Prompt 7 in Section 4 is the memo your analyst will use to ask formally.
- Pull the MHEDA 2026 booth-scan CSV today. The show ran May 2-6. The leads are decaying every day. Get the CSV, the business-card photos, and any rep voice notes into a Google Drive folder. When your analyst starts, they paste Prompt 2 into Claude Cowork or ChatGPT Codex, attach the folder, and your post-show attribution report is in your inbox inside a week. Comfortably inside the 14-day JD window.
- Codify the rebate-accrual logic. The single highest-friction part of Prompt 1 is the rebate-accrual status check. The fastest way to unblock it: get 30 minutes with whoever owns the rebate accrual spreadsheet in Finance, and write down the tier rules in plain language. One paragraph per tier. Your analyst attaches that document to Prompt 1 from week two onward, and the rebate-accrual section gets dramatically more accurate.
- Update the JD before the next round of interviews. Your Indeed listing is generic. Add the specific business-unit context (Material Handling / Construction / Agriculture), name the trade shows (MHEDA, ProMat, MODEX), name the systems (HubSpot or Pardot, your ERP), and add a line on AI-assisted workflow: "You will be supported by a curated prompt playbook from day one and expected to refine and extend it." Better candidates self-select in. Worse ones self-select out.
- Block 30 minutes on your new hire's calendar in week 2 for a prompt-playbook walkthrough. Whether you implement this proposal in full or not, your analyst's first 30 days are the only window to set the workflow expectation. After day 30, the urgent ad-hoc requests will swallow the strategic work, and the pattern will set for the next 18 months.
Closing
The eight prompts in Section 4 work today, inside Claude Cowork or ChatGPT Codex, with the data exports you already have. Month-one cost is one paid seat. Month-six cost, if every tool justifies itself, sits in the $400-$650/month range and pays back inside a quarter. Most of the recovered value shows up in the first 30 days as analyst hours redirected from Excel to the strategic work your JD actually describes.
Hire the analyst, hand them the prompts on day one, and add a tool only after the prompts prove what's worth automating.
— NunnCurtis Labs
Ready to commit
Buy the $497 package, lock in the rollout.
Indefinite money-back guarantee. Twenty-minute interview, then we wire the rest of the package — pitch deck, walkthrough video, action checklist — to this same URL inside 48 hours.