How We Built an Engagement Scoring System Using Census AI Columns

Amanda Sieglock
3 February 2025

Find out how to build an automation that assigns an engagement score, highlights each account's areas of interest, and generates talking points that reps can access right from Salesforce. Bonus: we'll also discuss which model worked best for this use case and why.

So What Are We Actually Trying to Solve Here?

Let's be real about what understanding customer engagement usually involves:

  • Manually tracking multiple engagement touchpoints
  • Figuring out which activities actually matter
  • Creating engagement scores that make sense
  • Writing summaries that sales will actually read
  • Getting all of this into Salesforce in a usable format
  • And probably maintaining yet another spreadsheet

But here's where it gets interesting: we built an automated engagement scoring system using Census AI Columns that your sales team can start using today. Let's dive into how!

The Solution: Census AI Columns

See it in action: If you're more of a video person than a reader, take a look at this quick video guide on creating engagement scores using an LLM. 👇

For the readers, here's how to set it up:

Step 1: Initial Setup

Before diving in, make sure you have:

  1. Connected your data warehouse as a source
  2. Connected Salesforce as a destination
  3. Set up your AI integration

Step 2: Create Your Dataset

Navigate to Datasets in Census and create a query that pulls:

  • Account information
  • Engagement activities (ebook downloads, pricing page views, free trials)
  • Asset names for each activity
  • Lightweight engagement scores for each activity

Pro tip: While you can bring in all historical data, we recommend limiting to the last 90 days to keep insights relevant for your sales team.
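
One lightweight way to apply that 90-day window is a date filter in the warehouse query. Here's a minimal sketch in Snowflake syntax, assuming happened_at is a timestamp column on the same events table used in the query below:

SELECT *
FROM census_on_census.production.unioned_events
-- keep only the trailing 90 days of activity
WHERE happened_at >= DATEADD(day, -90, CURRENT_DATE)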

Here's the SQL we used to pull this dataset:

WITH event AS (
  SELECT
    account_id,
    asset_name,
    asset_type,
    buying_questions,
    engagement_likelihood,
    engagement_score,
    event_id,
    happened_at,
    key_insights,
    page_summary
  FROM census_on_census.production.unioned_events
  WHERE account_id IN (115, 124, 147, 159, 168, 174, 181)
)

-- roll every event for an account into a single JSON array the LLM can read in one pass
SELECT
  account_id,
  ARRAY_AGG(
    OBJECT_CONSTRUCT(
      'event_id', event_id,
      'asset_name', asset_name,
      'happened_at', happened_at,
      'engagement_score', engagement_score,
      'engagement_likelihood', engagement_likelihood,
      'ai_summary', OBJECT_CONSTRUCT(
        'key_insights', key_insights,
        'buying_questions', buying_questions,
        'page_summary', page_summary
      )
    )
  ) AS event_detail_json
FROM event
GROUP BY account_id
 


 

Step 3: Create Your AI Column

This is where the magic happens! Click 'AI Enrich and Enhance' in the right corner, then select 'GPT' to create your prompt. We've broken our prompt into three key parts:

  1. Basic Instructions
    • Create a total engagement score (1-10)
    • Review all engagement data
    • Generate a summary
    • Set clear guidelines (e.g., scores 6-10 indicate high engagement)
  2. Output Structure
    • Specify exactly how you want the data formatted
    • Request top 3 assets with highest engagement scores
    • Include color coding for visual clarity
  3. Sample Output
    • Show GPT what "good" looks like
    • Demonstrate the desired format
    • Make it easily readable for the sales team

Pro tip: Set your output type to 'string' for the full summary, or 'number' if you just want the engagement score. You can also add default values for accounts with no data.

Here's the prompt we used:

 
You are an AI assistant specialized in analyzing customer engagement data for Census, a B2B SaaS startup focusing on Reverse ETL solutions. Your task is to evaluate potential customer accounts by analyzing their engagement data, identifying trends, and estimating the likelihood of conversion into a Census customer based on the provided data. Please ensure all instructions are followed exactly. A score of 6 or greater is routed to the sales team. Please keep that in mind while scoring.

 


Instructions:
1. Initial Condition Check:
• Conditions:
• If EVENT_DETAIL_JSON is empty or malformed:
• Actions:
• Output:

<p><strong>❇️ Engagement Score: 0/10</strong></p>


<p><strong>Engagement Summary - Activity Last 90 Days:</strong></p>
<ul>
<li>Engagement Likelihood: Low. Not enough activity to provide a detailed summary.</li>
</ul>



• Terminate further processing for this record.

2. Engagement Summary:
• Conditions:
• If valid data exists in EVENT_DETAIL_JSON.
• Actions:
• Analyze the company’s engagement trends based on web activity over the last 90 days using the data provided in EVENT_DETAIL_JSON.

3. Engagement Score Determination:
• Process:
1. Filter Out Irrelevant Events:
• Exclude any events where asset_type is marketing email or engagement_likelihood is low, as these do not indicate a high likelihood to buy.
2. Extract Engagement Likelihood:
• Extract the engagement_likelihood field from each remaining individual event within EVENT_DETAIL_JSON.
3. Assign Numerical Scores:
• High: Add 3 points.
• Medium: Add 2 points.
• Low: Add 1 point (only include “low” if explicitly instructed).
4. Calculate Overall Engagement Likelihood Score:
• Sum all individual scores from the filtered events.
• Normalize the total score to fit within the 0-10 range, if necessary.
• Round the final score to the nearest whole number.
5. Assign Final Engagement Likelihood:
• Use the calculated and normalized score as the overall engagement likelihood for the account.

4. Insights from JSON Data:
• Analyze the following fields from EVENT_DETAIL_JSON to refine insights:
• asset_name: The specific content asset viewed.
• engagement_score: Impact score of individual interactions.
• engagement_likelihood: Likelihood of purchase (Low, Medium, High).
• asset_type: Type of asset interacted with (ensure marketing email types are excluded from analysis).

5. Output Structure:
• Engagement Score:

<p><strong>❇️ Engagement Score: [0 - 10]/10</strong></p>
- Replace `[0 - 10]` with the calculated whole number score.



• Engagement Summary - Activity Last 90 Days:

<p><strong>Engagement Summary - Activity Last 90 Days:</strong></p>
<ul>
<li>[Summary derived from `EVENT_DETAIL_JSON`]</li>
<li>[Additional details focusing on prioritized pages & events and areas of interest]</li>
<li>[Do not include any events where `engagement_likelihood = low` or `asset_type = marketing email`]</li>
</ul>



• Page & Event Engagement:

<p><strong>Page & Event Engagement:</strong></p>
<ul>
<li>💡 <strong>[Page Name 1]:</strong> [Page Purpose]. [Specific tools and their purposes if applicable.]</li>
<li>💡 <strong>[Event Name 2]:</strong> [Event Purpose]. [Specific tools and their purposes if applicable.]</li>
<!-- Up to 5 prioritized Pages & Events -->
</ul>



• Talking Points:

<p><strong>Talking Points:</strong></p>
<ul>
<li>📊 [Actionable Point 1 derived from engagement insights]</li>
<li>📈 [Actionable Point 2 derived from engagement insights]</li>
<li>💡 [Actionable Point 3 derived from engagement insights]</li>
<!-- 3-5 points total -->
</ul>


6. Additional Guidelines:
• Accuracy:
• Ensure all information is derived exclusively from the provided data, strictly based on EVENT_DETAIL_JSON. Avoid making inferences or fabricating details not explicitly present. If key data is missing, return a placeholder or indicate the absence clearly.
• Clarity:
• Present information in a clear and concise manner, making it easy for the sales team to understand and utilize.
• Relevance:
• Focus on the most impactful insights that can drive sales strategies and conversations.
• Formatting:
• Maintain the structure and formatting as illustrated in the sample output to ensure consistency and readability.
• Error Handling:
• If EVENT_DETAIL_JSON is malformed or incomplete, output a clear message indicating the issue and skip processing for that record.
• Automation and Integration:
• Where possible, leverage automation tools or scripts to parse EVENT_DETAIL_JSON and generate the output structure automatically to reduce manual effort and potential errors.
 • Ensure that the generated insights can be seamlessly integrated into existing Customer Relationship Management (CRM) systems for streamlined workflow.
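
If you want a quick way to sanity-check the scores the model returns, the point system described in step 3 of the prompt can also be computed deterministically in the warehouse. Here's a rough sketch (assuming engagement_likelihood is stored as lowercase 'high' / 'medium' / 'low', asset_type uses the literal value 'marketing email', and capping at 10 stands in for the normalization step):

WITH scored AS (
  SELECT
    account_id,
    -- High = 3 points, Medium = 2 points; low-likelihood events are filtered out below
    CASE engagement_likelihood
      WHEN 'high' THEN 3
      WHEN 'medium' THEN 2
      ELSE 0
    END AS event_points
  FROM census_on_census.production.unioned_events
  WHERE happened_at >= DATEADD(day, -90, CURRENT_DATE)
    AND asset_type <> 'marketing email'
    AND engagement_likelihood <> 'low'
)
SELECT
  account_id,
  -- sum the points and keep the result on the 0-10 scale
  LEAST(SUM(event_points), 10) AS baseline_engagement_score
FROM scored
GROUP BY account_id

Comparing this baseline against the AI column's score is a handy way to spot accounts where the prompt needs tuning.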

 

 

 

Step 4: Set Up the Sync

Now let's get this data into Salesforce:

  1. Go to Syncs and create a new sync
  2. Select your source (engagement dataset) and destination (Salesforce Account)
  3. Choose "Update Only" sync behavior
  4. Map your engagement score and summary fields (see the example mapping below)
  5. Set your sync schedule
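
For example, a minimal field mapping might look like this (the AI column and Salesforce field names below are hypothetical; substitute whatever your org has defined):

account_id             → Account ID (sync key)
gpt_engagement_score   → Engagement_Score__c (number field)
gpt_engagement_summary → Engagement_Summary__c (long text area or rich text field)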

Real Results at Census

When we sync this to Salesforce, our sales team gets:

  • Clear engagement scores
  • Explanation of why each account got its score
  • Highlighted pages/assets the company engaged with
  • Specific talking points for hyper-personalized outreach

Which Model Do We Prefer?

Census offers connections to GPT, Claude, and Gemini. While we tested this workflow across all three, our verdict is that GPT makes the most sense for this use case.

  • Both GPT and Claude produced excellent results; Claude was slightly stronger in its writing.
  • However, at about 7x the cost, we can't justify using Claude for this internal-facing use case. We still love it for anything customer-facing or fully automated, though!

 

Ready to Get Started?

Whether you're trying to understand engagement patterns, prioritize accounts, or enable more personalized outreach, Census AI Columns make it happen in minutes.

Want to try it yourself? Check out our documentation at docs.getcensus.com for detailed setup instructions and our complete AI Columns recipe book for more use cases. Or reach out to us at support@getcensus.com for personalized guidance; we'd love to help you supercharge your sales engagement!