TL;DR: We ran expense management queries across ChatGPT, Perplexity, and Gemini. Ramp, Brex, and Expensify showed up every single time, regardless of how the question was framed. In this blog post, we break down exactly what each of these companies is doing to earn those citations: from Ramp’s machine-readable content built specifically for AI agents, to Brex’s 137-article content hub that ranks for 10,700 keywords (only 144 of which include the word “Brex”), to Expensify getting cited through competitor articles and Reddit threads rather than its own blog. If your FinTech SaaS product isn’t showing up in AI answers, this is what your competitors figured out that you haven’t yet.
So you have an expense management tool.
Your marketing team is trying everything to sell it. They are publishing 6-8 blog posts a month and posting on LinkedIn. They are also commenting on Reddit in hopes of attracting the right buyers.
But nothing seems to be working. When people ask AI surfaces like ChatGPT, Perplexity, or Gemini for recommendations, the same three or four giants show up.
So you are left wondering: what are these brands doing that gets them recommended by AI surfaces, repeatedly?
In this article, we will show you what seems to be working for them. We will also share a few strategies challenger brands like yours can apply to become part of these recommendations.
How We Checked Which Expense Management Tools Show Up in AI Answers
We ran four types of queries across AI surfaces to understand which expense management tools show up when buyers start asking for recommendations.
The queries included:
- Top 10 expense management software for businesses
- 10 best expense management software for SMBs and startups
- Best fintech software for travel-heavy companies
- Best spend management companies with corporate credit cards for businesses
Across ChatGPT, Perplexity, and Gemini, three names kept showing up: Ramp, Brex, and Expensify.

Source: ChatGPT
They appeared for different reasons too. Let’s decode what they are doing to get recommended by AI surfaces.
6 Reasons Ramp, Brex, and Expensify Keep Showing Up in Expense Management AI Searches
At first, it may look like Ramp, Brex, and Expensify show up because they are already big names in the expense management space. But that is not the full story.
When we looked closer, we found that each brand had built a different kind of visibility moat. These moats, however, share a common thread: each brand has enough clear, repeated, and citable information around it for AI tools to trust it.
Now, let’s take a look at what these companies are doing differently to show up.
#1. Ramp Gives AI a Clean Map of Its Product and Use Cases
Analyzing Ramp’s sources, we found a smart strategy hiding in plain sight. The company treats LLMs as a distribution channel and has built infrastructure to support that approach.
They have published a machine-readable version of their website.
Visit ramp.com/llms.txt, and you’ll find a structured document that is factual, clean, and designed for AI tools to extract information from.
Here’s a snippet.

We have previously covered how an LLM file can help enrich your AI visibility.
On this page, Ramp’s product offerings, key pages, and use cases are listed and linked. This gives language models structured information they can understand, quote, and use while forming an answer.
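To make this concrete, here is a hypothetical sketch of what such a file can look like, following the informal conventions proposed at llmstxt.org (an H1 title, a blockquote summary, and H2 sections of links). The company name, URLs, and descriptions below are placeholders for illustration, not Ramp’s actual content:

```markdown
# Acme Spend

> Acme Spend is an expense management platform for SMBs, combining corporate
> cards, reimbursements, and accounting automation in one product.

## Products
- [Corporate cards](https://example.com/cards): issue cards with built-in spend controls
- [Expense management](https://example.com/expenses): automated receipt matching and approvals

## Comparisons
- [Acme vs. Competitor X](https://example.com/versus/x): feature and pricing comparison
```

The link-list format matters: each line pairs a page with a one-line description, so an AI tool can both quote the fact and point users to the source.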
#2. These Companies Build Comparison Pages Around the Exact Questions Buyers Ask
Companies like Ramp do not only optimize for LLMs. They create content around questions buyers are already searching for.
For instance, at the evaluation stage, buyers are trying to find the right expense management tool for their needs. And if they have already used a popular expense management tool and are looking for an alternative for some reason, Ramp has that covered too.
Under their Versus and Blog pages, Ramp has built dedicated comparison content for almost every competitor their buyer might consider, while also showing where Ramp fits as an alternative:
- Ramp vs. Brex
- Top Brex alternatives
- Top Expensify alternatives
- Top Concur alternatives
- Top Airbase alternatives
- Top Spendesk alternatives
- Top Navan alternatives
- Top Coupa alternatives
- Top Tipalti alternatives
- Top Payhawk alternatives
- Customers who switched from Brex to Ramp
Each of these pages follows a consistent structure: why people leave a competitor, how Ramp compares using G2 data and actual scores, and two to three switching case studies with named executives and specific metrics.
For instance, the Ramp vs. Brex page shows:

This is the kind of data AI tools can confidently pull from because it gives clear, attributed, structured, and comparative information. These are the four things AI needs to build a confident answer.
Another important aspect of Ramp’s strategy: instead of sporadic blog posts and generic comparisons, Ramp has deliberately built 20+ structured comparison pages. That’s 20+ distinct chances to appear in training data and retrieved results when someone asks “What’s a good alternative to X?”

Source: Ramp
Similar to Ramp, Brex has also built a Versus directory, and it is data-backed. Its pages include details like operations in 210+ countries, 100 currencies, 99% expense compliance with their AI assistant, built-in banking and treasury, and an in-app travel experience with auto-enforced policies.

Long story short, these pages cover the comparison points buyers care about before choosing an expense management tool.
#3. These Companies Give AI Engines Specific Proof, Not Generic Claims
These companies don’t just publish generic pages. They understand that many buyers are looking for a reason to switch, and they know what happens once a buyer starts doubting their current tool: at that point, the buyer does not just want to know what Ramp does. They want to know why other companies switched, what problem they had before, and what changed after moving to Ramp.
Ramp adds these switching testimonials inside its comparison pages, and they work almost like mini case studies.
You see, most SaaS companies say things like “we save time” or “we simplify expense management.” Ramp goes more specific.

Each customer story names the company, identifies the person being quoted (CFO, VP Finance, CEO), and clearly outlines the problem they were dealing with before switching to Ramp.
They’re then reinforced with precise post-switch metrics (hours saved, receipt compliance improvements, faster close cycles), along with full narratives that make the numbers believable.
For example, their Piñata case study includes details like:
- Receipt compliance increased to 95%, a nearly 60% improvement over Brex.
- The finance team cut weekly cleanup time in half, saving 20 hours per month.
- Month-end close was reduced by 3 days.
And their Snapdocs case study mentions that the brand was using three separate tools before consolidating on Ramp: Brex for cards, Expensify for reimbursements, and Bill.com for AP.

Brex follows a similar logic in its comparison and customer-facing content. In fact, it created an entire article on switching stories covering why customers switched from Ramp to Brex, along with the post-switch results.

Content like this gives AI tools specific, quotable proof of Brex’s coverage, compliance, banking, travel, and expense management capabilities.
That matters because AI tools need clear claims they can trust. When a buyer asks, “Why are companies switching from Brex to Ramp?” or “Which expense management tool is better for growing teams?”, generic claims do not help much.
But specific numbers, named customers, use cases, and measurable outcomes give AI engines something solid to build an answer from.
#4. Brex Built Category Authority Through Programmatic Content Depth
In early 2025, Brex’s CEO, Pedro Franceschi, asked his team a simple question: “If we were starting Brex today, how would we build in an era where AI is real, easily accessible, and rapidly evolving?” The answer reshaped their content engine.
You see, Brex has built a Spend Trends page, which is a separate content hub targeting high-intent search queries around expense management, corporate cards, procurement, and AP. The results speak for themselves.
By mid-2025, they had built an entire content ecosystem with:
- 137 articles published
- 34,000+ monthly organic visitors
- ~$280,000 in estimated monthly traffic value
- 10,700 keywords ranked
You know what’s surprising?
Of the 10,700 keywords the Spend Trends subfolder ranks for, only 144 include the word “Brex”. This is a clear indication that the overwhelming majority of that traffic comes from people who weren’t searching for Brex at all.

Their content is clustered around Brex’s exact product lines: expense management, corporate credit cards, bank accounts, AP, and procurement. For an outsider, this might seem random, but with this strategy, every article maps directly to their product.
Their Spend Trends hub is dense with topic authority, consistently structured, and covers every possible territory that buyers research before making a decision, making it easier for LLMs to form answers around Brex’s category and product fit.
Recommended Read: The Brex Marketing Strategy Behind 235K+ monthly visits
#5. These Companies Create Best-of Lists That AI Engines Can Cite
At this point, the pattern may feel repetitive. But that is exactly why it works.
Ramp and Brex understand that buyers often start with broad queries like “best expense management tools” or “best spend management software” before searching for a specific brand.
So they have created best-of lists around those searches. And AI engines are picking them up.

Source: Perplexity.ai
In the Perplexity result above, Ramp appears as the top recommendation, with its own article, Best Business Expense Tracking Apps and Tools of 2026, cited as the source. Brex also appears right after Ramp, and in other searches, Brex’s own best-of content gets cited too.
This strategy matters because best-of lists give AI tools a ready-made structure to compare options. When your brand owns that kind of content, you don’t just appear in the answer. Your content can help shape the answer.
#6. Expensify Built a Partner Ecosystem That Became a Citation Network
Did you know that Expensify invested in an accountant and partner program? Thousands of accounting firms joined. Many of those firms wrote blog posts, tutorials, and client guides mentioning Expensify, all of which got indexed.
A small accounting firm publishing “how we set up expense tracking for clients using Expensify” is a low-domain-authority page individually. Multiplied across thousands of partner firms, it creates a web of mentions that AI interprets as: Expensify is widely used and trusted in accounting.
This is important because AI visibility is not built only through your own website. When partners, consultants, accountants, and implementation firms keep mentioning your product in real use cases, they create third-party proof around your brand.
That is what Expensify benefits from. Its partner ecosystem did not just help distribution. It also created a citation network across the web.
That’s a quick snapshot of what these companies did to earn visibility. There is more to each story, but for brevity, we will stop our analysis here.
But if your question is, what can your expense management brand do to get recommended in AI answers, here are some playbooks you can follow.
5 Playbooks Expense Management Brands Can Use to Get Recommended in AI Answers
Here’s our honest take: the path to LLM visibility isn’t paved with magic, and it doesn’t come from copying one tactic that worked for one brand. It’s rooted in content infrastructure built consistently with the right strategy.
Based on what these three companies are doing, here’s where your leverage lies:
Build comparison pages for every competitor your buyers consider
These should be highly specific, structured pages filled with G2 data, feature tables, switching scenarios, and real customer metrics. Ideally, one page per competitor. These pages get cited when buyers ask an AI surface “what’s a better alternative to X”, which is one of the most common buying-intent queries.
Publish switching stories with specific numbers
Instead of writing “We saved time”, show real results.
Like:
“We reduced month-end close from 20 hours to 3 hours,” quoted from a named, high-ranking executive at the customer company. That is authentic and citable, and we’ve noticed this specificity is exactly what gets picked up.
Build a machine-readable layer
An llms.txt file can be quick to set up, but most brands still ignore it. It tells AI exactly what your product does, who it’s for, what it’s best at, and where to point users. That’s why Ramp has one.
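If you draft one, it helps to sanity-check the structure before publishing. The sketch below is a minimal, hypothetical checker in Python, assuming the informal conventions from llmstxt.org (an H1 title, a blockquote summary, and H2 sections containing markdown link lists). There is no official spec, so treat these checks as guidelines rather than rules:

```python
import re

def check_llms_txt(text: str) -> list[str]:
    """Return a list of problems found in an llms.txt draft."""
    problems = []
    lines = text.strip().splitlines()
    # Convention: the file opens with a single H1 naming the product.
    if not lines or not lines[0].startswith("# "):
        problems.append("missing H1 title on the first line")
    # Convention: a blockquote summary states what the product is and who it is for.
    if not any(line.startswith("> ") for line in lines):
        problems.append("missing blockquote summary describing the product")
    # Convention: H2 sections group the key pages (Products, Docs, Comparisons, ...).
    if not any(line.startswith("## ") for line in lines):
        problems.append("no H2 sections grouping key pages")
    # Each listed page should be a markdown link, so AI tools have a URL to cite.
    if not re.findall(r"-\s+\[[^\]]+\]\((https?://[^)]+)\)", text):
        problems.append("no markdown links for AI tools to follow")
    return problems

draft = """# Acme Spend
> Acme Spend is an expense management platform for SMBs.

## Products
- [Corporate cards](https://example.com/cards): issue and control spend
"""
print(check_llms_txt(draft))  # → []
```

The function name and checks are our own illustration, not a standard tool; the point is that the format is simple enough to lint in a few lines before you publish it at yourdomain.com/llms.txt.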
Write category-level AND product-level content
Decide the categories you want your product to be recognized for, then create content around those categories and use cases. Ramp’s blog covers topics like “7 best AI accounting software”, “8 best corporate credit card expense management software platforms in 2026”, and more. Content like this positions Ramp as a category authority as well as a product. Brex’s Spend Trends hub does the same.
Create a footprint beyond your own website
Expensify gets cited from Reddit, accounting firm blogs, competitor comparison pages, and review sites. Third-party mentions are training data too.

Partner programs, integration directories, community engagement, and contributed content all compound into a citation signal that AI can pick up. You can also ask your users to share reviews on peer review sites like G2, Capterra, and Gartner. That way, AI surfaces have more training data to understand your brand and surface it for the right use cases.
How Concurate Helps You Close the AI Visibility Gap
Most expense management SaaS companies have good products, and we bet yours is one of them. The gap isn’t the product; it’s the content infrastructure that makes the product findable, citable, and understandable by AI.
Writing more blog posts isn’t the solution. What does work is knowing:
- which comparison queries your buyers are asking,
- what content format gets cited vs. what gets ignored,
- how to structure case studies so they’re quotable,
- how to build category-level authority that outlasts any individual article.
Maybe you have an excellent in-house team to work on this. The question is whether they have the expertise, time, and resources, and whether they’re focused on the right things.
That’s where we come in. Having worked with multiple clients in the B2B SaaS space, we have the expertise to build content depth that earns AI citations. At Concurate, we have helped clients go from zero presence to top rankings and real leads, generated 100+ inbound leads through organic content, and driven strong visibility across search and AI discoverability platforms.
We can help you identify where you’re invisible across LLM surfaces, build the comparison and alternatives coverage your category is missing, and create the kind of structured, data-backed content that AI platforms consult rather than skip. At Concurate, we don’t just write content. We build content engines for growth.
Ramp, Brex, and Expensify got AI recommendations and citations through content decisions made consistently over months and years. You can do it too. But the longer you wait, the more citations your competitors accumulate, and the harder it gets to break in.
Want to see which queries your brand is invisible for across ChatGPT, Perplexity, and Gemini? Book a call with us.
FAQs on How to Get Cited in LLMs
1. Our product is newer and less known. Can we realistically compete with a 15-year-old brand like Expensify in AI answers?
Yes, and this is actually the most important insight from the Expensify case. Expensify’s citation advantage comes from its historical footprint. Ramp is 5-6 years old and already outranks Expensify in most AI answers because they built structured, comparative, data-backed content that LLMs like ChatGPT or Perplexity can confidently pull from. Age helps, but intentional content infrastructure matters more. A newer brand that publishes 10 well-structured comparison pages with real metrics will show up faster than an established brand coasting on legacy citations.
2. We don’t have G2 reviews or NPS scores to cite. What do we use instead?
You don’t need to wait for G2 reviews, which take time to accumulate, to build citable content. Internal data works just as well, or better, because it’s original: customer time-to-value numbers, onboarding success rates, support ticket resolution times, feature adoption rates, and results from a cohort of customers after 90 days.
If you’ve run any kind of customer survey, those findings count. Try to quantify the benefits, for example: “customers reduced close time by an average of 4 days” is more effective than “our customers save time”.
3. Should we be publishing content about competitors even if it might feel aggressive?
Playing safe won’t get you results. Ramp has 20+ pages that explicitly name competitors, compare features, and publish switching stories from customers who left those competitors. Brex does the same in the other direction. These pages are structured, factual comparisons built around real data that help buyers make a more informed decision. That’s genuinely valuable content, and LLMs recognize it as such because it answers a real question a buyer would ask.
4. Is there a risk of being cited inaccurately? If so, how do we manage it?
This is a real risk, and it’s more common than people realize. If your pricing page is complicated, your feature descriptions are vague, or you haven’t clearly stated what your product does and doesn’t do, an LLM will fill the gaps with guesswork or third-party sources, sometimes inaccurately.
The fix is the same as the opportunity: publish clear, structured, factual content about your product. Ramp’s llms.txt explicitly includes offer details with a note saying they “should not be paraphrased or modified.” That’s a direct attempt to control how AI represents the brand. The more precise and machine-readable your core product information is, the less room there is for hallucination.
5. We already publish high-quality content consistently. Why aren’t we showing up?
Volume isn’t the issue. Most companies that publish consistently but don’t appear in AI answers are producing content about their own product, their own features, their own roadmap; that’s content written for existing customers or brand awareness. This kind of content rarely gets cited in buying-context AI queries. For citations, you need content that answers the question a buyer is asking before they’ve picked a vendor: “what’s the best alternative to X,” “how does A compare to B,” “why are companies switching from X to Y.” If your blog is all inward-facing, you’re not in the conversation that happens before someone searches for you.
Disclaimer: This article is based on publicly available information from company websites, case studies, and third-party platforms. The evaluation reflects our independent analysis, and we recommend checking each company’s website or speaking with their team for the latest details on products, pricing, and results.






