Scrunch recommends that users identify the external sources AI platforms already cite for core topics, then pursue brand mentions through outreach, review generation, original research, and partner distribution.

For example, Contentstack's AI search & visibility playbook offers several good "mention engineering" ideas, including:
- Use Scrunch to identify and prioritize the external sources shaping AI responses for tracked prompts: filter citations by third-party ownership and sort them by Influence Score.
- Use purpose-built tools to automate the process. For example, Noble automates outreach, negotiation, and payment to secure mentions in sources already cited by AI platforms, while Stacker automates native, non-sponsored content placements across trusted news outlets.
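The prioritization step above can be sketched in a few lines. This is an illustrative example only: the `Citation` type, `owned_by_third_party` flag, and `influence_score` field are assumptions for demonstration, not part of any real Scrunch API.

```python
# Hypothetical sketch of prioritizing third-party citations by influence.
# Field names are illustrative, not a real Scrunch data model.
from dataclasses import dataclass

@dataclass
class Citation:
    url: str
    owned_by_third_party: bool  # True if the brand does not control the source
    influence_score: float      # how strongly the source shapes AI answers

def prioritize_outreach_targets(citations):
    """Keep third-party sources and rank them by influence, highest first."""
    third_party = [c for c in citations if c.owned_by_third_party]
    return sorted(third_party, key=lambda c: c.influence_score, reverse=True)

citations = [
    Citation("https://example-review-site.com", True, 0.82),
    Citation("https://our-own-blog.example.com", False, 0.95),
    Citation("https://example-news.com", True, 0.64),
]
targets = prioritize_outreach_targets(citations)
```

Owned properties are excluded up front because mention engineering targets sources the brand does not already control, even when those owned sources score higher.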
Scrunch recommends tracking brand presence, citations, referral traffic, AI agent traffic, and share of voice versus competitors as key performance indicators.
Scrunch recommends monitoring AI search trend data, such as brand mentions and citations, consistently over 2-3 week periods to distinguish real trends from one-off changes.
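One simple way to apply the 2-3 week rule is to compare the mean of the most recent window against the window before it. This is a minimal sketch of that heuristic, not a Scrunch feature; the window length and lift threshold are assumptions you would tune.

```python
# Illustrative heuristic: compare mean daily brand mentions across two
# consecutive 3-week (21-day) windows to separate a sustained trend from
# a one-off spike. Thresholds are assumptions, not Scrunch defaults.
def is_sustained_trend(daily_mentions, window=21, min_lift=0.2):
    """Return True if the recent window's mean exceeds the prior
    window's mean by at least min_lift (20% by default)."""
    if len(daily_mentions) < 2 * window:
        return False  # not enough history to judge
    prior = daily_mentions[-2 * window:-window]
    recent = daily_mentions[-window:]
    prior_mean = sum(prior) / window
    recent_mean = sum(recent) / window
    return recent_mean >= prior_mean * (1 + min_lift)

# A single spike in otherwise flat data should not register as a trend,
# while a sustained lift across the whole recent window should.
flat_with_spike = [10] * 41 + [50]     # 42 days, one outlier at the end
steady_growth = [10] * 21 + [14] * 21  # 42 days, sustained 40% lift
```

Averaging over the full window is what makes a single outlier day wash out, which matches the playbook's point about one-off changes.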
Scrunch recommends estimating how many prompts to track for AI search with a simple formula: X [# of topic clusters] x Y [12-15 questions per topic cluster] = Z [# of AI search prompts to track]. The primary goal is a representative sample of data across all customer journey stages, using a mix of branded and non-branded prompts.
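The sizing formula above is just Z = X × Y. A minimal sketch, with the 12-15 range from the recommendation enforced as a guard (the function name and validation are illustrative, not from Scrunch):

```python
# Sketch of the prompt-count estimate: Z = X topic clusters multiplied by
# 12-15 questions per cluster. The range check reflects the recommendation.
def estimate_prompt_count(num_topic_clusters, questions_per_cluster=12):
    """Estimate how many AI search prompts to track (Z = X * Y)."""
    if not 12 <= questions_per_cluster <= 15:
        raise ValueError("recommendation is 12-15 questions per cluster")
    return num_topic_clusters * questions_per_cluster

# e.g. 5 topic clusters at the low and high ends of the range:
low = estimate_prompt_count(5, 12)   # 60 prompts
high = estimate_prompt_count(5, 15)  # 75 prompts
```

Running both ends of the range gives a tracking budget (here 60-75 prompts) rather than a single number, which is usually more practical for planning.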