Key Results Glossary

Overview

Key Results are specific, measurable performance indicators tied to an agent’s objectives. Each KR answers a simple question:

Is this agent delivering the outcome it was designed to achieve?

Every agent has multiple KRs, and each KR:

  • Has a clear definition

  • Uses a consistent measurement methodology

  • Is benchmarked against industry standards

  • Rolls up into objective and campaign performance views

Key Results Reference Table

Below is a reference table of all Key Results currently used in the Agent Performance Console. This table outlines what each KR measures, how it’s calculated, and what success looks like in practice.

Each entry below lists the campaign, the Key Result, its definition, how it is measured, and a practical example.

Campaign: Guest Concierge
Key Result: 90% quality response rate
Definition: This is the core accuracy metric for the Guest Experience agent. It ensures fans receive precise, helpful answers to logistical questions, minimizing frustration and building trust.
How Measured: The percentage of agent responses tagged as "Correct" by our evaluation system, verifying that the AI provided the right answer to the specific question.
Example: A guest asks, "What time do gates open?" and the agent responds with the exact time for that specific event day.

Campaign: Guest Concierge
Key Result: X service conversations resolved without a human agent
Definition: This highlights the agent's efficiency in handling routine support queries, like parking or bag policy, completely on its own. It demonstrates operational cost savings by deflecting volume away from live staff.
How Measured: The total number of service-related (non-sales) conversations minus those that were escalated to a human, leaving only the "self-contained" conversations.
Example: A fan asks "Is my purse allowed?" and "Where is the nearest restroom?" The agent answers both instantly, and the fan leaves satisfied without needing a human staff member.

Campaign: Guest Concierge
Key Result: X sales lead handoffs to human agents
Definition: This tracks how often the AI identifies a high-value sales opportunity (like premium seating) and successfully connects the guest to a live staff member to close the deal.
How Measured: The number of conversations involving revenue-generating topics (like ticket sales or suites) that resulted in a handoff to a live chat agent.
Example: A guest asks, "How much are luxury suites?" The agent recognizes this as a high-value lead and instantly connects the guest to a sales representative.

Campaign: Guest Concierge
Key Result: X revenue opportunities created in chat
Definition: This measures the volume of conversations where users express interest in purchasing items, tickets, or upgrades. It quantifies the potential revenue pipeline the agent is uncovering automatically.
How Measured: The total count of conversations that include at least one user message related to a revenue-generating topic (as defined in our system).
Example: A user asks, "Can I upgrade my seats?" or "Buy parking pass." Even if they don't buy immediately, the agent has identified a revenue opportunity.

Campaign: Guest Concierge
Key Result: 3+ user interactions per Guest Experience Agent conversation
Definition: This measures engagement depth. A higher number suggests fans are having meaningful back-and-forth dialogues, discovering more about the venue, rather than just asking a single question.
How Measured: The average number of messages a user sends within a single conversation session.
Example: Instead of just asking "Wifi password," a user follows up with "Is it free?" and "How do I connect?", showing deep engagement.
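
A minimal sketch of the average, assuming a hypothetical mapping of conversation sessions to the messages a user sent in each:

```python
# Average user interactions per conversation = user messages sent, averaged
# across conversation sessions. The message lists below are illustrative.
sessions = {
    "c1": ["Wifi password", "Is it free?", "How do I connect?"],
    "c2": ["What time do gates open?"],
}

avg_interactions = sum(len(msgs) for msgs in sessions.values()) / len(sessions)
print(avg_interactions)  # 2.0 here; the Key Result targets 3 or more
```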

Campaign: Guest Concierge
Key Result: 5% feedback rate on all Guest Experience Agent responses
Definition: This tracks how actively users are engaging with the help they receive. It measures the percentage of responses where a guest takes the time to rate the answer (positive or negative), providing the essential data volume needed to identify gaps and improve the AI over time.
How Measured: The percentage of total agent messages that receive a "thumbs up" or "thumbs down" rating.
Example: A fan asks "Is the roof open today?" and clicks "thumbs up" after reading the answer. A "thumbs down" would count toward this engagement rate just the same.

Campaign: Guest Concierge
Key Result: 70% positive response feedback rate
Definition: This tracks the "Customer Satisfaction" (CSAT) of the AI agent. High positive feedback confirms the agent is being helpful, friendly, and accurate in its support.
How Measured: Of all the thumbs-up/thumbs-down ratings received, the percentage that are "thumbs up."
Example: A user is lost and asks for "Directions to the team store." The agent provides a map link, and the user clicks "thumbs up" to confirm the help was useful.
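
A sketch of the positive-rate arithmetic; note that the denominator is rated responses only, not all agent messages:

```python
# Positive feedback rate = thumbs-up ratings as a share of all ratings
# received (unrated messages are excluded from the denominator).
ratings = ["up", "up", "down", "up"]

positive_rate = 100 * ratings.count("up") / len(ratings)
print(f"{positive_rate:.0f}%")  # 75% here, above the 70% target
```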

Campaign: Guest Concierge
Key Result: X recorded customer preference data points
Definition: This captures insights about what your guests care about most (e.g., accessibility, parking, merchandise). It helps build a profile of fan interests to inform operational decisions.
How Measured: The sum of distinct topics detected across all conversations, with each distinct topic a customer raises in a conversation counted as one data point.
Example: Over a season, the agent records 5,000 inquiries about "ADA Parking" and 3,000 about "Bag Valet," providing concrete data on high-demand services.
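
Read together with the F&B variant of this metric below, the counting unit appears to be a distinct conversation-and-topic pair. The sketch that follows adopts that assumption, with illustrative topic names:

```python
# Preference data points = distinct (conversation, topic) pairs: the same topic
# raised in two different conversations counts twice, but a topic repeated
# within one conversation counts once.
detected_topics = [
    ("c1", "ADA Parking"),
    ("c1", "ADA Parking"),   # repeated in the same conversation: still one point
    ("c1", "Bag Valet"),
    ("c2", "ADA Parking"),
]

data_points = len(set(detected_topics))
print(data_points)  # 3 recorded preference data points
```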

Campaign: F&B Finder
Key Result: 90% quality response rate
Definition: This measures the agent's accuracy in handling F&B inquiries. It ensures fans receive correct information about menus, dietary restrictions, and concession locations, directly impacting their dining experience.
How Measured: The percentage of agent responses scored as "Correct" by our AI evaluation tools, excluding unclear or nonsensical user inputs.
Example: A user asks, "Where can I find gluten-free beer?" and the agent correctly identifies and lists the specific stands carrying that product.

Campaign: F&B Finder
Key Result: X completed Food & Beverage Agent conversations
Definition: This measures the total volume of unique conversations where guests specifically utilized the Food & Beverage agent. It highlights the demand for dining information and the agent's reach.
How Measured: The count of unique conversation sessions where the Food & Beverage agent was triggered and successfully interacted with a user.
Example: A fan starts a chat specifically to find "Cocktails near Section 105" and continues asking about prices; this entire session counts as one completed conversation.
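
A minimal sketch of the session count, assuming hypothetical per-message records that name the agent involved:

```python
# Completed Food & Beverage Agent conversations = distinct conversation
# sessions in which the F&B agent was triggered. Field names are illustrative.
events = [
    {"conversation_id": "c1", "agent": "food_and_beverage"},
    {"conversation_id": "c1", "agent": "food_and_beverage"},  # same session
    {"conversation_id": "c2", "agent": "guest_experience"},
    {"conversation_id": "c3", "agent": "food_and_beverage"},
]

fnb_sessions = {e["conversation_id"] for e in events if e["agent"] == "food_and_beverage"}
print(len(fnb_sessions))  # 2 completed F&B conversations
```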

Campaign: F&B Finder
Key Result: 5% feedback rate on all Food & Beverage Agent responses
Definition: This metric tracks how actively users are engaging with the agent's dining suggestions. A higher rate indicates users are paying attention to the recommendations and taking the time to rate the quality of the answer.
How Measured: The percentage of total agent responses that receive any user feedback (thumbs up or thumbs down).
Example: If the agent suggests a "Vegan Burger" and the user clicks "thumbs up" to acknowledge the recommendation, this counts toward the feedback rate.

Campaign: F&B Finder
Key Result: 70% positive response feedback rate
Definition: This gauges guest satisfaction with the dining help they receive. It tells us if the agent's food and beverage suggestions are helpful, accurate, and resonating well with your fans.
How Measured: Of the responses that received user ratings, the percentage that were positive ("thumbs up").
Example: A fan asks for "Best hot dogs" and gives a "thumbs up" after the agent directs them to a highly rated stand near their section.

Campaign: F&B Finder
Key Result: X recorded customer food & beverage preference data points
Definition: This tracks the specific dining interests your guests reveal during chats. It turns casual questions into actionable insights, helping you understand which items (e.g., "craft beer," "vegan options") are most popular.
How Measured: The sum of distinct combinations of conversation IDs and specific food/beverage topics detected during the interaction.
Example: In one chat, a user asks about "IPAs" and later asks for "Pretzels." This is recorded as two distinct preference data points.

Campaign: Visitor Concierge
Key Result: 90% quality response rate
Definition: This is the core accuracy metric for the Visitor Experience agent. It ensures visitors receive precise, helpful answers to logistical questions, minimizing frustration and building trust.
How Measured: The percentage of agent responses tagged as "Correct" by our evaluation system, verifying that the AI provided the right answer to the specific question.
Example: A visitor asks, "What time do pools open?" and the agent responds with the exact time for that specific day.

Campaign: Visitor Concierge
Key Result: X service conversations resolved without a human agent
Definition: This highlights the agent's efficiency in handling routine support queries, like parking or bag policy, completely on its own. It demonstrates operational cost savings by deflecting volume away from live staff.
How Measured: The total number of service-related (non-sales) conversations minus those that were escalated to a human, leaving only the "self-contained" conversations.
Example: A visitor asks "Is my purse allowed?" and "Where is the nearest restroom?" The agent answers both instantly, and the visitor leaves satisfied without needing a human staff member.

Campaign: Visitor Concierge
Key Result: 3+ user interactions per Visitor Experience Agent conversation
Definition: This measures engagement depth. A higher number suggests visitors are having meaningful back-and-forth dialogues, discovering more about the venue, rather than just asking a single question.
How Measured: The average number of messages a user sends within a single conversation session.
Example: Instead of just asking "Wifi password," a user follows up with "Is it free?" and "How do I connect?", showing deep engagement.

Campaign: Visitor Concierge
Key Result: 5% feedback rate on all Visitor Experience Agent responses
Definition: This tracks how actively users are engaging with the help they receive. It measures the percentage of responses where a visitor takes the time to rate the answer (positive or negative), providing the essential data volume needed to identify gaps and improve the AI over time.
How Measured: The percentage of total agent messages that receive a "thumbs up" or "thumbs down" rating.
Example: A visitor asks "What is the address?" and clicks "thumbs up" after reading the answer. A "thumbs down" would count toward this engagement rate just the same.

Campaign: Visitor Concierge
Key Result: 70% positive response feedback rate
Definition: This tracks the "Customer Satisfaction" (CSAT) of the AI agent. High positive feedback confirms the agent is being helpful, friendly, and accurate in its support.
How Measured: Of all the thumbs-up/thumbs-down ratings received, the percentage that are "thumbs up."
Example: A user is lost and asks for "Directions to main street." The agent provides a map link, and the user clicks "thumbs up" to confirm the help was useful.

Campaign: Visitor Concierge
Key Result: X recorded customer preference data points
Definition: This captures insights about what your visitors care about most (e.g., accessibility, parking, merchandise). It helps build a profile of visitor interests to inform operational decisions.
How Measured: The sum of distinct topics detected across all conversations, with each distinct topic a customer raises in a conversation counted as one data point.
Example: Over a given timeframe, the agent records 5,000 inquiries about "Concerts" and 3,000 about "Attractions," providing concrete data on high-demand services.
