What a social media playbook actually is (and what it isn't)
A social media playbook is not a content calendar, not a posting schedule, and not a list of tactics ranked by some agency. A real playbook is a system that connects four things: a defined audience, a small set of channels chosen for that audience, a repeatable engagement loop, and a measurement model that tells you whether the loop is working before you have wasted six months. Every component below is downstream of the system. Skip the system and you are running tactics; tactics on social compound to roughly zero.
The most common failure mode in 2026 is the tactic-first playbook: a marketing lead reads a newsletter, picks “build in public” or “short-form video” or “LinkedIn carousels,” and tells the team to start producing. Six months later the team has 80 carousels, no measurable pipeline, and a vague sense that “it didn't work.” The carousel was not the problem. The problem was that the carousel was a tactic chosen before the audience, the channel, the engagement loop, or the measurement model had been decided. Reverse the order and almost everything else falls into place.
The second most common failure is the publish-only playbook: a team that thinks of “social” as “the place we post things” and ignores the much larger surface area of conversations already happening about their category, their competitors, and their buyers' daily problems. Publishing is one node in the system. Listening is the trunk. Reply is the branch where most of the actual leverage lives. A team that publishes ten posts a week and answers zero questions in the threads where their buyers are choosing between vendors is leaving the largest source of pipeline on the table.
A real playbook reverses both mistakes. It decides who you are talking to, where they are, how you will show up usefully, and how you will know whether it worked. The tactics — the carousels, the threads, the AMAs — get chosen last, in service of the system, not in spite of it. The rest of this document is the system.
The system, in one diagram
Every social media playbook that actually compounds is a four-stage loop. The stages run in order, every cycle, and skipping a stage breaks the loop downstream. The point of writing this out explicitly is that most teams skip stage one and stage four every single time.
Discover
Find the conversations your ICP is already having about your category, your competitors, and the daily problems your product solves. Output: a ranked list of channels, a starter set of keyword queries, and a working hypothesis about which audiences you serve and which you do not.
Monitor
Run the keyword and brand queries from stage one continuously across every channel that matters. Output: a unified inbox where every relevant conversation surfaces in one place, scored for signal, deduplicated across channels, and triaged within hours rather than days.
Engage
Reply, like, share, and selectively publish into the conversations the monitoring layer surfaces. Output: a steady drumbeat of substantive comments, a smaller drumbeat of original posts, and an even smaller drumbeat of high-effort pieces (AMAs, deep replies, original research) timed to the moments where they pay back.
Measure
Track brand-search lift, share-of-voice, attributed signups, and the count of mentions you did not generate yourself. Output: a quarterly review that tells you which channels to double down on, which to drop, and what to change in stages one through three on the next cycle.
Stage one is where almost every team underinvests. They jump to stage three because publishing feels like “doing social,” and they discover six months later that they have been publishing into channels their buyers do not use. Stage four is where almost every team also underinvests, because measurement is hard and the wrong metrics (likes, followers, vanity reach) are right there waiting to be over-tracked. The non-negotiable discipline of a real playbook is doing stage one and stage four well; stages two and three mostly take care of themselves once those are honest.
The other thing this diagram makes obvious is that social listening is not a separate program from “social media” — it is the trunk that everything else grows from. A team that spins up a separate “listening initiative” in Q3 after running “social” for two years has been running a stage-three-only program and is now retrofitting stage two. That works, but it costs roughly the value of the missing two years.
Choose channels by ICP, not by reach
Channel selection is the most reversible decision in a playbook and the one teams agonize over most. The right answer is mechanical: pick the two-to-four channels where your ICP spends working hours, run the loop on those, and ignore the rest until you have proof that the chosen ones are saturated. Most teams pick five-to-seven channels, run the loop on none of them well, and conclude social does not work for their category.
The 2026 channel landscape splits cleanly along audience-intent lines, and the splits matter. A B2B SaaS team will not win on TikTok in any reasonable horizon, and a DTC apparel brand will not win on LinkedIn no matter how many carousels they post. Pick on the actual evidence of where your buyers are, not on whichever platform has the most hype this quarter.
X (Twitter)
Who's here: Founders, B2B operators, devtools buyers, journalists, indie hackers, build-in-public crowd.
Best for: Real-time conversations, breaking news, replies-as-distribution, founder-led marketing. The platform with the highest reply-to-attention ratio if you participate in the right threads.
Skip if: Your audience is non-technical SMB owners who do not check social during the workday; for them, X is a vanity channel. Do not let founder enthusiasm drive a team-wide commitment to a platform their buyers do not use.
LinkedIn
Who's here: B2B buyers, ops leaders, finance, HR, agencies, mid-market and enterprise everything. The dominant B2B graph in every English-speaking market.
Best for: Thought leadership, ABM-style account targeting, hiring, vertical commentary. Comments on other people's posts often outperform original posts for new accounts; use the inbox aggressively.
Skip if: LinkedIn is a poor place for tactical experimentation — the tone discipline is high and the algorithm punishes off-key voice harshly. Treat it as the channel you commit to or do not touch.
Reddit
Who's here: Devtools, vertical SaaS buyers, prosumer hobby categories, anyone who searches before they buy. The unspoken graph that drives a huge share of B2B and prosumer purchasing decisions.
Best for: High-intent threads where buyers are openly comparing vendors. A great Reddit comment can outrank your own marketing site for the same query for years. Read the dedicated Reddit playbook below before posting anything.
Skip if: Reddit is the channel most likely to actively punish bad behavior. Posting before reading, cross-posting, AI-generated comments, paying for upvotes — all paths to permanent bans. If your team cannot commit to the discipline, skip Reddit entirely.
Facebook
Who's here: SMB owners, local services, e-commerce community managers, parent-and-PTA-style local groups. Underrated for any business with a local or community angle.
Best for: Group-driven engagement, customer-support replies, local discovery, recovery of unlinked mentions. The graph most teams write off and then quietly drive 20% of pipeline from.
Skip if: Facebook is a poor place for B2B thought leadership and a poor place for prosumer buyer-comparison threads. If your audience is not on Facebook for work, leave Facebook alone.
Instagram & TikTok
Who's here: DTC commerce, lifestyle, creator-economy, food, fashion, fitness, travel, anything visual. Almost any consumer brand.
Best for: Short-form video for discovery, Stories and DMs for retention, creator collaborations for trust transfer. The two channels where a great 15-second clip can drive 10,000 site visits in a day.
Skip if: You are selling B2B SaaS or any high-consideration B2B service; Instagram and TikTok are not where that pipeline lives. The ROI on a single LinkedIn comment for those teams beats the ROI on 50 Reels.
The rule of thumb across all five: pick two channels where your ICP overlaps strongly, add a third only after you are running the loop well on the first two, and treat the rest as optional surfaces you can add quarterly if the data supports it. Most teams should be on two or three channels, not seven. For each channel you pick, set up a corresponding monitor in your inbox so the listening layer is paying attention even when your publishing layer is not. Use the search query builder to draft the operator-syntax queries you will run; it saves twenty minutes of fiddling and produces queries you can paste into ReplySocial, Google site-searches, and competitor monitoring without rework.
The audience-fit map: how the playbook bends per audience
A pillar playbook is genuinely useful only if it bends correctly per audience. The four-stage loop is universal; the channel mix, the cadence, and the engagement style are not. Below is the map for the six audiences ReplySocial sees most often. For each, the right answer is a slightly different shape of the same system — and each links to a focused playbook or use case page where the bend is fully written out.
Small businesses
Channel mix: Facebook + one of (Instagram | X | LinkedIn) chosen by audience. Not five channels.
Bend: Heavy on customer-support replies and local discovery. Light on thought leadership. Treat the inbox as a customer-service queue, not a publishing studio.
See the small-business playbook →

Agencies
Channel mix: The channels your clients are on. Often X + LinkedIn for B2B agencies; IG + TikTok for DTC agencies.
Bend: Run the system per-client, not as one big agency program. Multi-tenant inbox; per-client monitors; reporting that ties back to client outcomes.
See the agency playbook →

B2B SaaS
Channel mix: X + LinkedIn (most teams), with Reddit added if the category is technical or prosumer.
Bend: Founder-led at first, then a marketer-augmented system once founder time is the bottleneck. ICP-fit beats reach by an order of magnitude.
Read the B2B playbook →

E-commerce
Channel mix: Instagram + TikTok + Facebook. X and LinkedIn are mostly noise for this audience.
Bend: Visual-first publishing, creator collaborations, customer-service replies in DMs. Social is a top-of-funnel discovery engine and a bottom-of-funnel support channel; both layers need monitoring.
See the e-commerce playbook →

Creators
Channel mix: One or two channels, picked by where your specific audience already follows you. Spread thin = die.
Bend: Voice and consistency over channel diversification. Build-in-public mechanics, audience-of-one engagement, and a measurement model that values reply quality over reach.
See the creator playbook →

Nonprofits
Channel mix: The channels your supporters and beneficiaries actually use, which is often Facebook + Instagram + a niche third.
Bend: Story-driven publishing, donor-relationship management, volunteer recruitment via social inbox. Free-tier tooling matters more than feature-completeness here.
See the nonprofit playbook →

The pattern across all six is the same: the loop runs the same way; the channel selection and the engagement cadence change. If your audience does not appear above, the right move is to read the closest match, then bend it for the specific differences in your audience. There is no seventh audience hiding behind the first six that requires a fundamentally different system.
Monitor first: the listening layer under everything
Listening is the foundation of every playbook in this document. If you take nothing else from this page, take this: a working keyword and brand monitoring layer running across your chosen channels in a single unified inbox is the highest-leverage thing a social team can build. Everything else compounds on top of it. The teams that skip the monitoring layer end up running a publishing program against a void. The teams that build the monitoring layer first end up with a publishing program that knows exactly which conversations to walk into.
The right number of queries is small. Five categories, two-to-three queries each, gives you ten-to-fifteen total — enough surface area to catch every relevant conversation and few enough to stay triagable in 20 minutes a day. The five categories below cover almost every B2B and prosumer use case; consumer brands swap the “competitor mention” category for a “product or category mention” category but otherwise the structure holds.
Brand mentions (linked and unlinked)
Your brand name with and without the @-handle, common misspellings, and any product names. Most teams catch the @-mentions and miss the unlinked mentions, which are typically 3-5x as common. Use X mentions tracking to capture the @-handle stream and a separate keyword query for the unlinked variant. Pair with the unlinked-mentions tool for a full picture.
Competitor mentions
The names of your top three to five competitors. The threads where someone is asking 'what should I use for X' and listing your category are the highest-intent threads on the entire internet for your business. You want to be in those threads with a useful, honest comment within hours, not days.
Category and pain-phrase queries
The two or three phrases your buyer types when they have your problem but do not know your category yet. 'how do I track competitor mentions on twitter,' 'best tool for replying to brand mentions,' 'social listening on a budget' — the pain-phrase queries surface threads that are early-funnel for you and where a useful answer compounds for years.
Switching-intent queries
Phrases that indicate someone is leaving a competitor or about to choose between vendors. “moving off [competitor],” “[competitor] alternative,” “[competitor] vs [other-competitor]” — the highest-intent queries in the set. Mid-volume but extremely high-fit. Pair with the competitor-watch planner to design queries that catch your competitors' weaknesses without crossing into negativity.
Job and role queries
The roles your buyers fill and the daily problems those roles complain about. 'product marketing manager,' 'head of community,' 'social media manager at agency' — the threads where a real practitioner is grappling with a real problem are where useful, helpful comments build authority over a long horizon.
Once the queries are running, the listening inbox needs two more pieces of discipline. First, bot and spam filtering — open conversations on X are now estimated at 40-70% bot replies in many categories, and a monitoring inbox that does not filter the bots out is unusable inside a week. Second, cross-channel deduplication: a Reddit thread that gets screenshotted onto X four hours later should appear once in your inbox, not twice. Without dedup, your team spends time re-triaging the same conversation and starts ignoring the inbox entirely.
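A minimal version of the cross-channel dedup described above can be a text fingerprint: normalize each mention's text and keep only the first occurrence per fingerprint. The field names below are assumptions about an inbox record, not a real schema.

```python
# Hypothetical sketch of cross-channel deduplication: a Reddit thread
# screenshotted onto X should collapse to one inbox item. The mention
# dicts and their fields are illustrative assumptions.
import re

def fingerprint(text: str) -> str:
    # Lowercase and strip non-word characters so trivial repost
    # variants ("Check this out!!" vs "check this out") collide.
    return re.sub(r"\W+", "", text.lower())

def dedupe(mentions):
    """Keep the first occurrence per fingerprint (assumes oldest-first)."""
    seen, unique = set(), []
    for m in mentions:
        key = fingerprint(m["text"])
        if key not in seen:
            seen.add(key)
            unique.append(m)
    return unique

inbox = [
    {"channel": "reddit", "text": "Anyone tried Acme for social listening?"},
    {"channel": "x", "text": "anyone tried acme for social listening"},
]
print(len(dedupe(inbox)))  # 1: the X repost collapses into the Reddit original
```

A production inbox would use fuzzier matching (shingles, embeddings) to catch paraphrased reposts, but even this exact-normalized version removes the worst of the re-triage burden.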
For brands operating on LinkedIn at meaningful scale, the LinkedIn-specific layer matters too. The LinkedIn inbox surface — comments on your posts, comments on your competitors' posts, and DMs from people who saw a thoughtful reply you left on someone else's thread — is where most B2B teams find their highest-quality inbound. Wire it into the same unified inbox as your X and Reddit monitors so triage is one action, not three.
Engagement: the reply-as-distribution model
Most playbooks treat replying as a small, polite afterthought to the “real work” of publishing. They have it backwards. Reply is the largest source of compounding return on social, and publishing is the second largest. The math is simple: a great reply on a high-engagement thread reaches the same number of people as a mediocre original post and attaches your voice to a conversation an audience is already paying attention to. Original posts have to earn their audience cold. Replies inherit one.
The discipline that makes reply work is the 9:1 ratio — at least nine substantive replies that mention nothing you sell for every one reply that mentions your product. The ratio is a floor, not a target. Drifting toward 5:1 because "the replies are useful, this is fine" is how brand accounts get muted, throttled, and quietly written off as spam. Teams that hold the ratio at 12:1 or 15:1 build an account that is genuinely valued by the audience and earns the right to mention the product when it is actually relevant.
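The floor is easy to enforce mechanically if each reply carries a tag for whether it mentioned the product. The sketch below assumes a hypothetical per-reply `mentions_product` flag; it is an illustration of the 9:1 check, not a real tooling feature.

```python
# Hypothetical sketch: check the 9:1 reply-to-mention floor over a
# window of replies. The `mentions_product` flag is an assumed tag.

def ratio_ok(replies, floor=9):
    """True if there are at least `floor` non-promotional replies
    for every reply that mentions the product."""
    promo = sum(1 for r in replies if r["mentions_product"])
    organic = len(replies) - promo
    if promo == 0:
        return True  # no product mentions yet: trivially above the floor
    return organic / promo >= floor

week = [{"mentions_product": False}] * 12 + [{"mentions_product": True}]
print(ratio_ok(week))  # True: 12 organic replies per 1 product mention
```

Running this over a trailing two-week window, rather than all time, keeps a burst of product mentions from hiding behind months of earlier organic replies.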
The voice discipline matters as much as the ratio. A reply that reads like a marketer is a reply that gets ignored or downvoted regardless of how useful the content is. The fastest test: read your reply aloud. If a sentence sounds like it could appear on a landing page, rewrite it. The fix is almost always to be more specific and more concrete — cite numbers, name workflows, admit limits, mention competitors honestly. The good reply in the Reddit marketing playbook is the canonical example; it discloses the relationship up front, names competitors fairly, puts the asker's actual problem above the tool choice, and sounds like a peer. Mirror that shape on every channel.
Three engagement archetypes worth running
Beyond the basic reply, three archetypes show up in every playbook in this document because they each have a distinct payoff curve and a distinct cost. Use them as a menu, not as a checklist; not every team should run all three.
The deep reply
Cost: 20-40 minutes per reply, 1-3 per week.
Payoff: Outsized weight in the threads where it lands. A 600-word reply on a high-fit thread can outrank your own marketing pages on Google for the same query for years.
When to run it: Save for high-intent, high-visibility threads where the asker is genuinely confused and a thoughtful answer compounds. Do not deep-reply on every thread; the math does not work.
The build-in-public post
Cost: 10-20 minutes per post, 2-3 per week.
Payoff: Audience growth via founder authenticity. A two-year build-in-public habit is worth roughly the value of a mid-six-figure marketing budget for a pre-launch B2B SaaS, conservatively.
When to run it: If the founder or a team member is genuinely doing interesting work and is willing to be honest about the boring parts. If you are inventing the 'in public' part, the audience can tell instantly.
The AMA / open-experiment
Cost: 3-5 hours of prep, half a day of live engagement.
Payoff: Concentrated trust transfer in a single window. A great AMA in r/SaaS or r/Entrepreneur can drive months of pipeline in 48 hours.
When to run it: When you have something genuinely AMA-worthy: a published number, a thing you built and open-sourced, a candid failure post. AMAs without substance fail loudly and the failure is public.
The thread that runs through all three archetypes is the same: the team showing up has actual experience to share and is willing to be honest about the limits. There is no third-party shortcut that can fake that durably. Tools, templates, and reply scaffolding can make the operational side faster — but the substance has to come from you, and the substance is what compounds.
Six illustrative vignettes
The six vignettes below are illustrative composites, not case studies — every name and number is invented to convey shape, not data. They cover the audience spectrum the pillar playbook addresses; each shows what the four-stage loop looks like in practice for that audience.
A local plumbing company sets up monitors for their brand name, three competitor names, and the phrase 'plumber near [city]' on Facebook and Instagram. They reply to every relevant DM within four hours and post one customer-job photo per week. By month four, 31% of new bookings cite social as the first touchpoint.
What surprised them. The single highest-performing channel was a Facebook neighborhood group the owner had been a passive member of for years. Two helpful replies a week in that group out-pulled the entire publishing program.
Lesson. For SMBs, the playbook bends hard toward listening and replying in the specific places your local audience already congregates. Generic-channel publishing is mostly noise. See the full small-business breakdown for the operational rhythm.
A 10-person devtools startup runs the full four-stage loop on X and Reddit only — no LinkedIn, no Instagram, no TikTok. The founder commits to 2 hours/day on monitoring and replies for 90 days. By month 4, the brand-search volume on Google has roughly doubled and inbound demos shifted from 'cold leads' to 'people who have been reading my X for three months.'
What surprised them. A single Reddit comment in r/devops outranked the company's pricing page on Google for their primary buyer query. The engineering team had to redirect the canonical product page to capture the search traffic.
Lesson. Two channels well > five channels poorly, every time. The compounding is in the reply layer, not the publishing layer. The B2B playbook elsewhere on this site has the full mechanics.
A DTC apparel brand runs IG, TikTok, and Facebook with a small team. They invest in customer-service replies within an hour and seed product mentions only via a careful creator-collaboration program. Eighteen months in, social is the largest single source of new-customer acquisition and the lowest-CAC channel they have.
What surprised them. The customer-service inbox drove more retention than every retention email campaign combined. A four-minute DM reply was worth more than a 12-touchpoint flow.
Lesson. For DTC, social is two channels stacked: top-of-funnel discovery via creators and bottom-of-funnel support via DMs. Both layers need monitoring; treating either as optional cuts the system in half.
A 200-person SaaS company hires a content marketer who publishes 12 LinkedIn posts a week and 8 X posts a week for six months with zero monitoring layer underneath. Engagement metrics look fine — likes, follows, the occasional viral post — but pipeline attribution is roughly zero. The CMO cuts the program.
What surprised them. When the company belatedly turned on monitoring after the cuts, they discovered roughly 240 unanswered threads where their product was being actively recommended or compared by customers and competitors. A year of free distribution had been ignored.
Lesson. Publishing without monitoring is half a system. The pipeline lives in the conversations, not the posts. Most 'social didn't work' diagnoses are actually 'we were running stage three of the loop with no stage two underneath.'
A solo software creator commits to one channel (X) for 18 months. They ship one substantive reply a day on threads in their niche, one original post every other day, and ignore every other social platform entirely. By month 12, they have a list of 8,000 engaged followers; by month 18, that list converts a $19/mo product to a six-figure ARR.
What surprised them. The single largest predictor of conversion was not follower count or post engagement; it was reply count. Followers who had replied to the creator at least once converted at 11x the rate of pure-follower lurkers.
Lesson. Reply is distribution, even at solo scale. One-channel discipline plus relentless reply quality outperforms five-channel scatter every time. The creator playbook documents this rhythm in detail.
A 30-person Series A company commits to LinkedIn, X, Reddit, Instagram, TikTok, and YouTube simultaneously after a board meeting. They hire two full-time creators and an agency. After eight months and roughly $300K of spend, the only channel showing meaningful pipeline impact is LinkedIn. The other five are quiet.
What surprised them. The team had not run a four-stage loop on any of the six channels. They had run stage three only, on all six, with no listening layer and no measurement model. The agency's 'cross-channel content strategy' was the program; nothing else.
Lesson. Six channels at 30% effort almost never beats two channels at 100% effort. The system is what compounds; channel diversity without the system is just spend. Cut to two, run the loop, and add channels only when the loop is saturated.
The pattern across the wins and the failures: the operators who win run the four-stage loop on a small number of channels with discipline. The operators who fail run stage three only, across too many channels, and conclude social does not work for their category. There is no third option.
The 90-day starter sequence
Ninety days is the right horizon for a starter sequence — long enough to see the first evidence of compounding, short enough to keep the team accountable. The plan below assumes one operator (founder, head of marketing, or community lead) and zero new headcount. Each phase is roughly four weeks; each week is four-to-six hours of focused work plus a daily 20-30 minute reading and replying habit starting in week two.
Phase 1 (weeks 1-4): Audience, channels, queries, and account hygiene
Decide your two-to-four channels (see "Choose channels by ICP, not by reach"). Define your audience-fit shape (see the audience-fit map). Draft your ten-to-fifteen monitoring queries (see the listening-layer section). Spin up the unified inbox in ReplySocial across the channels you picked. Audit the accounts you intend to use; if any are fresh, start commenting today on low-stakes threads to age them and accumulate baseline credibility. Read the rules of every community you intend to participate in. Budget 4-6 hours/week.
Exit criteria: Channels picked. Queries running. Inbox triaged daily by end of week 4. Accounts active and credible.
If stuck: If you are still debating channels in week 3, force a decision. Two-channel commitment with imperfect choices beats four-channel paralysis. Ship the listening layer; you can rotate channels at the 90-day review.
Phase 2 (weeks 5-8): Daily triage, no posting yet
Spend the entire phase reading and replying in target threads without publishing anything original. Aim for 3-5 substantive replies a day across all channels. No links to your stuff. No product mentions. No company name. Pure community participation, focused on threads where you have actual experience to share. Track reply count, reply quality (skim your own work weekly), and qualitative signal: are people replying to your replies?
Exit criteria: 60+ substantive replies shipped. Reply quality consistent. Names and faces in target communities starting to recognize you. Inbox triage running on autopilot.
If stuck: If your replies are getting ignored, they are reading as marketer voice. Re-read the engagement section. The fix is almost always more specificity and fewer abstractions.
Phase 3 (weeks 9-12): Add publishing, mentions, and measurement
Layer in the second half of the engagement model. Begin publishing original posts at whatever cadence you can sustain (start with 2-3/week). Begin product mentions in threads where they are warranted, holding the 9:1 reply-to-mention ratio. Set up the measurement layer: brand-search volume baseline (Google Trends + GSC), share of voice across competitors using the share-of-voice tool, attributed signups via referrer logs, and the count of mentions you did not generate yourself. By week 12, you have a baseline you can track against in Q2.
Exit criteria: Publishing rhythm sustainable. 9:1 ratio holding. Measurement baseline captured. First evidence of inbound (DMs, mentions you did not seed).
If stuck: If publishing is killing your reply cadence, cut publishing back. Reply is the larger leverage. You can scale publishing later; you cannot retroactively repair an inbox that went cold for two weeks.
By the end of the 90 days, the system runs in roughly an hour a day, you have a measurable baseline on each layer of the loop, and you have a hypothesis about which channels and which engagement archetypes are working in your category. Months 4-12 are where the compounding starts in earnest. Quitting at month 3 is the single most common reason a real playbook fails — the loop is real, but the payoff curve is non-linear and the largest part lives on the far side of the first 90 days.
Measurement and iteration
Measurement on social is harder than measurement on any other marketing channel and pretending otherwise is how social budgets get cut. Direct attribution captures maybe 20-40% of actual social-driven signups on a good day. The rest gets lost to mobile-app traffic that reports as direct, screenshot reposts that lose the source, multi-touch lag where someone reads your post in March and signs up in July, and dark-social referrals (Slack, DMs, group chats) that leave no trace in any tool. A measurement model that relies only on direct attribution will chronically under-credit social and the program will get cut before it has a chance to compound. The model below is layered in priority order.
Layer 1: Brand-search lift. The single most honest signal that a social program is working. Track your brand-name search volume in Google Trends monthly and your branded-query impressions in Google Search Console weekly. A working program shows up here within 60-90 days as a slow, irregular climb in branded-search volume — people who saw your name on social last week are typing it into Google this week. If brand-search is rising and direct attribution is flat, the program is working and the attribution layer is just under-counting. If brand-search is flat after 90 days, the program is not yet working regardless of what your referrer logs say.
Layer 2: Share of voice. Track your share of mentions versus your top three to five competitors monthly across your chosen channels. The metric is comparative: are you a louder voice than the alternatives in the conversations your buyers are having? If yes, the long-tail conversion will follow. If not, the program is not yet creating the surface area that conversion needs. Pair with the broader social listening playbook for the cross-channel measurement story; the listening layer feeds the share-of-voice metric directly.
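The share-of-voice arithmetic itself is simple: your mentions as a fraction of all mentions across you plus your tracked competitors for the month. The counts below are invented for illustration.

```python
# Hypothetical sketch of the monthly share-of-voice metric. Brand
# names and mention counts are illustrative placeholders.

def share_of_voice(counts, you):
    """Your mentions as a fraction of all tracked mentions this month."""
    total = sum(counts.values())
    return counts[you] / total if total else 0.0

month = {"Acme": 42, "CompA": 90, "CompB": 18}
print(round(share_of_voice(month, "Acme"), 2))  # 0.28
```

The absolute number matters less than the trajectory: the same 42 mentions are a win in a month where the category total shrank and a loss in a month where a competitor tripled.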
Layer 3: Direct + assisted attribution. Pull referrers from your analytics, tag any signup that landed via a social URL, and add a self-reported “how did you hear about us” field on signup or in a 7-day onboarding survey. The mismatch between referrer-attributed and self-reported social is your hidden volume. Most teams find self-reported is 2-3x what referrer logs show — and that ratio is itself a useful baseline you can track over time.
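Tracking that mismatch as a single multiplier makes the hidden volume concrete. The sketch below is a trivial illustration with invented numbers, not a prescription for your analytics stack.

```python
# Hypothetical sketch: the hidden-volume multiplier between
# self-reported social signups and referrer-attributed ones.
# All counts are illustrative.

def hidden_volume_multiplier(self_reported, referrer_attributed):
    """How many self-reported social signups exist per referrer-attributed one."""
    if referrer_attributed == 0:
        return float("inf")  # attribution captured nothing at all
    return self_reported / referrer_attributed

# e.g. 60 signups answer "social" in the survey; 24 carry a social referrer
print(hidden_volume_multiplier(60, 24))  # 2.5
```

If that multiplier sits near the 2-3x range most teams report and then drifts upward, your program is increasingly flowing through dark social — which is a reason to trust the survey field more, not a reason to cut the program.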
Layer 4: The unprompted-mention metric. The single hardest metric to game: how many threads per month mention your product, by name, posted by someone who is not on your team. The first time this happens, you have crossed a real threshold. The trajectory of this metric across 6, 12, and 24 months is the closest thing social has to an honest growth curve. The teams that run the four-stage loop with discipline almost all see this metric inflect in the second year — which is exactly why the teams that quit at month 6 forfeit the largest part of the return.
The metrics to ignore are the obvious ones: follower count (low correlation with anything that matters), like count (lower correlation), individual-post reach (vanity at almost every scale), and engagement rate as defined by any specific platform (each platform redefines it to flatter the platform). None of these are useful in isolation. They are inputs to the system, not outputs. If you find yourself reporting them up the chain as the primary metrics, the chain has the wrong measurement model and the program will eventually get cut on the wrong evidence.
Show up on two-to-four channels at a time. Listen before you speak. Reply more than you publish. Mention sparingly and honestly. Measure on the right horizon and the right metrics. That is the entire pillar playbook. The dedicated playbooks elsewhere on this site — the B2B strategy playbook and the Reddit marketing playbook — bend this system for specific audiences. The audience and use-case pages bend it further. The system is the spine; the rest is shape. Now go connect a free account and start listening before you say a word; an honest 90 days is worth more than a clever quarterly-strategy slide.