[{"data":1,"prerenderedAt":230},["ShallowReactive",2],{"/en/workflows/signals-that-survive-the-meeting":3},{"id":4,"slug":5,"locale":6,"translationGroupId":7,"localeSwitchApproved":8,"title":9,"description":10,"documentationMarkdown":11,"workflowJson":12,"category":211,"tags":212,"integrations":216,"difficulty":218,"author":217,"verified":33,"featured":33,"date":219,"modified":219,"icon":7,"imageSrc":7,"path":220,"alternates":221,"seo":222},"ac14f992-e8a3-4421-a9f9-dbdb4ca20669","signals-that-survive-the-meeting","en",null,true,"Signals That Survive the Meeting","A decision-focused coaching flow that helps teams test branch numbers, spot dirty signals, and set practical guardrails for when to trust automation versus human judgment.","## How it works\nThis workflow helps operators turn messy branch numbers, conversations, and “looks-fine” dashboards into decision-ready signals. It starts with a knowledge-base pass (so straightforward questions get answered fast), then routes the user into a guided menu of practical checks.\n\nEach path gives a crisp, non-academic playbook: what to trust, what to challenge, and what usually breaks first—before the team walks into a confident meeting powered by quietly wrong data.\n\n## Key features\n- Knowledge-base-first responses for quick definitions and policy-aligned guidance\n- Menu-driven routing to six decision-shaped signal checks (branch metrics, dirty signals, automation judgment, messy evidence, comparisons, culture)\n- “Back to menu” loop after each coaching response to support iterative investigation\n- Safe fallback to a human team when the user’s request doesn’t match a known path\n\n## Step-by-step\n1. **Trigger:** A user starts the conversation in Calypso.\n2. **Knowledge base pass:** The workflow attempts to answer using your KB rules (with a short opener that frames the tone).\n3. **Choose a checkpoint:** The user gets a menu of signal/decision topics to pick from.\n4. **Route by selection:** The workflow evaluates the selected button and routes to the matching checkpoint.\n5. **Deliver the coaching answer:** A short, practical checklist is sent for the chosen topic.\n6. **Loop for another check:** The workflow returns the user to the menu to run another checkpoint.\n7. 
**Fallback (if needed):** If the selection can’t be matched, the workflow hands off to the Insights Ops team.\n\n## Setup requirements\n- **Calypso** workspace with canvas workflows enabled\n- (Optional) A configured **Knowledge Base** for the knowledge-base policy step\n- No external credentials are required for this workflow",{"id":13,"teamId":14,"name":9,"version":15,"workflowVersion":16,"nodes":17,"connections":176,"routingEnabled":8,"active":33},"wf_signals_that_survive_the_meeting","calypso-public-library","1.0.0",1,[18,34,41,53,84,93,102,108,115,121,127,133,140,146,152,158,165],{"id":19,"name":20,"type":21,"typeVersion":16,"position":22,"parameters":25,"category":32,"deletable":33,"connectable":33},"node_flow_configs","Workflow settings","flow-configs",[23,24],-560,-40,{"name":9,"description":26,"tags":27,"triggerType":31},"Decision-focused signal checks for branch metrics, messy evidence, and automation guardrails.",[28,29,30],"signal-quality","decision-systems","branch-metrics","input","policy",false,{"id":35,"name":36,"type":31,"typeVersion":16,"position":37,"parameters":40,"category":31,"deletable":33,"connectable":8},"node_input","Incoming message",[38,39],-320,120,{},{"id":42,"name":43,"type":44,"typeVersion":16,"position":45,"parameters":47,"category":52,"deletable":8,"connectable":8},"node_kb_policy","Knowledge base policy","knowledge-base-policy",[46,39],-80,{"enabled":8,"fallbackToRouting":8,"sticky":33,"stickyMode":48,"activationOpener":49,"personalization":51},"default",{"enabled":8,"instruction":50},"I can help you pressure-test signals before they pressure-test you in a meeting. Pick a checkpoint below, or ask a direct question and I’ll use the knowledge base first.",{"useContactName":8},"response",{"id":54,"name":55,"type":56,"typeVersion":16,"position":57,"parameters":59,"category":52,"deletable":8,"connectable":8},"node_menu","Choose a checkpoint","interactive-message",[58,39],160,{"messageType":60,"headerText":9,"bodyText":61,"footerText":62,"sectionTitle":63,"buttons":64,"ctaDisplayText":83,"ctaUrl":83},"list","Pick what you’re deciding, and I’ll give you the fast checks that prevent confident wrong calls.","If it looks clean, check twice.","Decision checkpoints",[65,68,71,74,77,80],{"id":66,"title":67},"branch_trust","Trust branch numbers",{"id":69,"title":70},"dirty_signal","Spot dirty signal",{"id":72,"title":73},"automation_vs_humans","Automation vs humans",{"id":75,"title":76},"messy_evidence","Messy evidence insights",{"id":78,"title":79},"comparison_misreads","Compare branches safely",{"id":81,"title":82},"signal_culture","Build signal culture","",{"id":85,"name":86,"type":87,"typeVersion":16,"position":88,"parameters":90,"category":92,"deletable":8,"connectable":8},"node_if_branch_trust","If: Branch numbers","if",[89,39],420,{"buttonId":66,"operator":91},"equals","routing",{"id":94,"name":95,"type":96,"typeVersion":16,"position":97,"parameters":100,"category":52,"deletable":8,"connectable":8},"node_text_branch_trust","Branch numbers: trust vs noise","text-message",[98,99],680,40,{"text":101},"Branch numbers that deserve trust usually share three boring traits: (1) a clear denominator, (2) stable definitions, (3) a known delay.\n\nQuick pressure test:\n- Denominator check: “Per what?” Per visitor, per lead, per staffed hour? 
If nobody can say it in one breath, it’s decorative.\n- Definition drift: Did ‘qualified’, ‘appointment’, or ‘conversation started’ change by branch or over time?\n- Lag reality: Many ‘daily’ numbers are actually ‘yesterday plus backfill.’ Ask: what’s the typical late-arrival rate?\n- Coverage: Are we measuring most activity or just the trackable slice? If a branch has more offline work, online metrics will flatter or punish them.\n\nRule of thumb: Trust metrics that can be reconciled to a simple count you could audit in 30 minutes. Be suspicious of metrics that only exist as a dashboard.",{"id":103,"name":104,"type":87,"typeVersion":16,"position":105,"parameters":107,"category":92,"deletable":8,"connectable":8},"node_if_dirty_signal","If: Dirty signal",[89,106],240,{"buttonId":69,"operator":91},{"id":109,"name":110,"type":96,"typeVersion":16,"position":111,"parameters":113,"category":52,"deletable":8,"connectable":8},"node_text_dirty_signal","Dirty signal: early warning",[98,112],200,{"text":114},"Dirty signal rarely announces itself. It shows up as “wow, we’re doing great” right before the meeting.\n\nRed flags that look harmless:\n- Sudden smoothness: variance disappears. Real systems are lumpy.\n- Perfect symmetry: every branch improves the same week. That’s usually a definition change, not performance.\n- Metric stacking: three ‘independent’ charts rise together because they share one upstream counter.\n- Funnel teleportation: mid-funnel moves but top/bottom don’t (unless you changed tracking).\n\nTwo questions that save reputations:\n1) What changed in instrumentation, staffing, incentives, or routing the week the metric changed?\n2) If this number is wrong, what’s the most likely way it’s wrong (missing, double-counted, mis-attributed)?",{"id":116,"name":117,"type":87,"typeVersion":16,"position":118,"parameters":120,"category":92,"deletable":8,"connectable":8},"node_if_automation_vs_humans","If: Automation vs judgment",[89,119],360,{"buttonId":72,"operator":91},{"id":122,"name":123,"type":96,"typeVersion":16,"position":124,"parameters":125,"category":52,"deletable":8,"connectable":8},"node_text_automation_vs_humans","Automation vs human judgment",[98,119],{"text":126},"Automation is great at consistency; it’s terrible at knowing when the world changed.\n\nTrust automation when:\n- The environment is stable (same offer, same routing, same branch capacity).\n- The cost of a wrong decision is low, and you can detect errors quickly.\n- You have guardrails: thresholds, anomaly checks, and a defined rollback.\n\nInsist on human judgment when:\n- You’re comparing branches with different constraints (hours, staffing, local demand).\n- Incentives are in play (people will ‘learn’ the metric).\n- The metric is a proxy (sentiment score, attribution, “engagement”) and not a count.\n\nPractical guardrail: require a ‘why might this be wrong?’ note next to any automated recommendation. If nobody can write one, the system is too magical.",{"id":128,"name":129,"type":87,"typeVersion":16,"position":130,"parameters":132,"category":92,"deletable":8,"connectable":8},"node_if_messy_evidence","If: Messy evidence",[89,131],480,{"buttonId":75,"operator":91},{"id":134,"name":135,"type":96,"typeVersion":16,"position":136,"parameters":138,"category":52,"deletable":8,"connectable":8},"node_text_messy_evidence","Messy evidence → usable insight",[98,137],520,{"text":139},"Cleaning can quietly delete the truth. 
The trick is to *separate* mess from meaning.\n\nInstead of over-cleaning, do this:\n- Keep raw + curated: store the unedited signal and your decision-ready summary side by side.\n- Tag uncertainty: ‘unverified’, ‘partial’, ‘self-reported’, ‘model-inferred’. Uncertainty is not a bug—it’s a label.\n- Triangulate, don’t average: if conversations say one thing and branch outcomes say another, don’t force a single number. Ask what each channel can actually observe.\n- Preserve outliers with a reason: outliers are either data errors or business reality. Both are important.\n\nDecision habit: write the decision first (“We will do X because Y”). Then ask: what minimum evidence would change our mind? That’s the signal you need.",{"id":141,"name":142,"type":87,"typeVersion":16,"position":143,"parameters":145,"category":92,"deletable":8,"connectable":8},"node_if_comparison_misreads","If: Comparison misreads",[89,144],600,{"buttonId":78,"operator":91},{"id":147,"name":148,"type":96,"typeVersion":16,"position":149,"parameters":150,"category":52,"deletable":8,"connectable":8},"node_text_comparison_misreads","Comparing branches: common traps",[98,98],{"text":151},"Teams misread branch comparisons in predictable ways—because the numbers are politely lying.\n\nMost common traps:\n- Unequal exposure: one branch gets higher-intent demand (or different routing). Your ‘performance’ metric is partly an allocation metric.\n- Capacity blindness: understaffed branches look “less efficient” while actually being overloaded.\n- Attribution mirages: the cleanest attribution often belongs to the simplest channel, not the most causal one.\n- Blended averages: one segment (e.g., repeat customers) dominates and hides new-customer performance.\n\nFix in one meeting:\n- Compare like-for-like segments (intent, product, time window).\n- Normalize by capacity (staffed hours, appointment slots).\n- Always ask: what would we expect to happen *if nothing changed*? If you can’t answer, you’re not measuring change—you’re measuring noise.",{"id":153,"name":154,"type":87,"typeVersion":16,"position":155,"parameters":157,"category":92,"deletable":8,"connectable":8},"node_if_signal_culture","If: Signal culture",[89,156],720,{"buttonId":81,"operator":91},{"id":159,"name":160,"type":96,"typeVersion":16,"position":161,"parameters":163,"category":52,"deletable":8,"connectable":8},"node_text_signal_culture","Signal culture that makes decisions",[98,162],820,{"text":164},"A healthy signal culture doesn’t worship data; it *uses* it to decide.\n\nMake it real (and less slidey):\n- One owner per metric: someone who can explain definition, collection, and failure modes.\n- Pre-mortems: before big decisions, spend 3 minutes on “how could this metric be wrong?”\n- Decision logs: record the decision, the signals used, and what would falsify it. This turns hindsight into learning.\n- Fewer metrics, higher standards: a small set of auditable signals beats a buffet of dashboards.\n\nWitty but true: if your dashboard needs a narrator every week, it’s not a dashboard—it’s a storybook. Make the signals boringly reliable, then let the decisions be interesting.",{"id":166,"name":167,"type":168,"typeVersion":16,"position":169,"parameters":171,"category":175,"deletable":8,"connectable":8},"node_fallback","Handoff to Insights Ops","fallback",[98,170],920,{"handoffMessage":172,"departmentId":173,"departmentName":174},"I’m not fully confident I routed that correctly. 
I’m handing this to Insights Ops so you get a careful read before anyone makes a confident wrong decision.","insights-ops","Insights Ops","terminal",[177,181,183,185,188,191,193,195,197,199,201,203,205,207,209],{"id":178,"source":35,"target":42,"sourceHandle":179,"targetHandle":179,"type":180},"conn_input_to_kb","main","edge",{"id":182,"source":42,"target":54,"sourceHandle":179,"targetHandle":179,"type":180},"conn_kb_to_menu",{"id":184,"source":54,"target":85,"sourceHandle":179,"targetHandle":179,"type":180},"conn_menu_to_if1",{"id":186,"source":85,"target":94,"sourceHandle":187,"targetHandle":179,"type":180},"conn_if1_true_to_text1","true",{"id":189,"source":85,"target":103,"sourceHandle":190,"targetHandle":179,"type":180},"conn_if1_false_to_if2","false",{"id":192,"source":103,"target":109,"sourceHandle":187,"targetHandle":179,"type":180},"conn_if2_true_to_text2",{"id":194,"source":103,"target":116,"sourceHandle":190,"targetHandle":179,"type":180},"conn_if2_false_to_if3",{"id":196,"source":116,"target":122,"sourceHandle":187,"targetHandle":179,"type":180},"conn_if3_true_to_text3",{"id":198,"source":116,"target":128,"sourceHandle":190,"targetHandle":179,"type":180},"conn_if3_false_to_if4",{"id":200,"source":128,"target":134,"sourceHandle":187,"targetHandle":179,"type":180},"conn_if4_true_to_text4",{"id":202,"source":128,"target":141,"sourceHandle":190,"targetHandle":179,"type":180},"conn_if4_false_to_if5",{"id":204,"source":141,"target":147,"sourceHandle":187,"targetHandle":179,"type":180},"conn_if5_true_to_text5",{"id":206,"source":141,"target":153,"sourceHandle":190,"targetHandle":179,"type":180},"conn_if5_false_to_if6",{"id":208,"source":153,"target":159,"sourceHandle":187,"targetHandle":179,"type":180},"conn_if6_true_to_text6",{"id":210,"source":153,"target":166,"sourceHandle":190,"targetHandle":179,"type":180},"conn_if6_false_to_fallback","automation",[28,30,29,213,214,215],"data-hygiene","automation-guardrails","research-ops",[217],"Calypso","intermediate","2026-03-30T11:05:42.813Z","/en/workflows/signals-that-survive-the-meeting",{"en":220},{"title":9,"description":223,"ogDescription":224,"twitterDescription":225,"canonicalPath":220,"robots":226,"schemaType":227,"alternates":228},"Coach teams to test branch numbers, spot dirty signals, and set guardrails for automation vs judgment before decisions get locked in.","Turn polished noise into decision-ready signals. Check branch numbers, detect dirty data, and choose when to trust automation vs human judgment.","A practical signal coach: validate branch numbers, catch dirty signal early, and decide when automation helps—or quietly hurts.","index,follow","HowTo",[229],{"hreflang":6,"href":220},1775310170521]