Membership engagement scores are everywhere—in articles, at conferences and even built into our association management systems.
As we gain access to more and more data, we seek out new and better ways to use them to understand member behavior, and member engagement scores stand at the ready, offering us the key to this knowledge.
But when we measure member engagement, what are we really measuring? These scores are a little like time: you know exactly what it is until you have to explain it.
In fact, member engagement scores often have little to do with the real world; they need to connect to something more substantial to reflect how our members actually interact with our organizations and why.
WHAT WE TALK ABOUT WHEN WE TALK ABOUT ENGAGEMENT
We can look at member engagement from a number of perspectives:
· Are my members using my products and services? (Or, put another way, am I providing my members what they want and need?)
· How are my members interacting with the association as an entity and with each other?
And the last, which I think is stronger and a combination of the first two:
· What aspects of my association are my members using, and does this mean anything for their future behavior?
This last way of conceiving of engagement combines service provision and interaction, and keeps us thinking proactively. If we know how current behaviors manifest themselves in future actions, we have a better sense of what our members will want and need tomorrow.
Now that we have a sense of engagement’s meaning, let’s look at the nuts and bolts. Measuring membership engagement is, in essence, taking very different data sets and turning them into a single score. Comparing the number of events one person attends with the number of times another person posts on an association forum is hard: the opportunities to post on a forum far outnumber the opportunities to attend events. I can write in a forum every day, but may only make it to a few events per year.
The way to overcome this problem is to normalize the data: turn them into an index that puts every score on a common scale, say 0 to 100 or 1 to 5.
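As a sketch, that normalization step could be simple min-max scaling. The member names and activity counts below are invented for illustration:

```python
# Min-max normalization: rescale raw activity counts to a 0-100 index,
# relative to the lowest and highest counts observed across members.

def normalize(value, low, high):
    """Rescale a raw count to a 0-100 index."""
    if high == low:
        return 0.0
    return 100 * (value - low) / (high - low)

# Hypothetical forum-post counts for three members.
forum_posts = {"Bill": 250, "Annie": 12, "Carlos": 40}

lo, hi = min(forum_posts.values()), max(forum_posts.values())
indexed = {name: round(normalize(v, lo, hi), 1) for name, v in forum_posts.items()}
# Now forum posts and event attendance can live on the same 0-100 scale,
# even though their raw counts differ by orders of magnitude.
```

The same scaling applied to event attendance would make the two activities directly comparable.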
I have to admit: I love indices. Each year when Freedom House’s Freedom in the World Index comes out, I’m right there glancing over how every country has changed. I do the same thing with the Gini and the Human Development Index. Imagine it as my version of reading the box scores on Sunday morning.
But the problem with indices is the same as their strength: they boil a complex set of data down into one number, and it’s hard to know what to do with the scores once you have them.
Let’s say Bill got an engagement score of 4.2 and Annie got a score of 3.5. What does this mean? Did Bill’s high participation in our online forum outweigh Annie’s volunteering on committees and boards? Bill is a daily user of our forum, while this year Annie has only had time to be on one committee, our annual meeting’s steering committee. Both of them took a professional development course and attended the Annual Meeting, so, though Annie’s score is good, she doesn’t have as high an average score as Bill.
The case of Annie and Bill illustrates the limitation of averaging activity even when normalized into a score.
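A minimal sketch of the straight average behind Annie and Bill’s scores; the per-activity scores are hypothetical, chosen so the averages land at the 4.2 and 3.5 mentioned above:

```python
# Straight (unweighted) average over normalized activity scores on a 0-5 scale.

def straight_average(activity_scores):
    """Average all activity scores equally, regardless of effort or impact."""
    return sum(activity_scores.values()) / len(activity_scores)

# Hypothetical normalized scores (0-5) per activity.
bill  = {"forum": 5.0, "volunteering": 1.8, "courses": 5.0, "events": 5.0}  # avg 4.2
annie = {"forum": 2.0, "volunteering": 4.0, "courses": 4.0, "events": 4.0}  # avg 3.5
# Bill's daily forum use pulls his average above Annie's, even though her
# committee work may matter more to the association.
```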
PUT A LOAD ON ANNIE
To overcome the problem of a straight average, many tools offer us the option to put higher weights on some activities and lower weights on others. We may decide that Bill’s high forum score reflects less effort, and less contribution, than Annie’s volunteering. Bill simply has to turn on a computer, while Annie may have to buy plane tickets, book a hotel, and get herself out of the office (while emails pile up) to attend committee meetings. Annie’s contributions also have a larger impact on a significant aspect of the organization. As such, we may want to weight volunteering higher than forum contributions.
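That weighting step might be sketched like this; the weights and per-activity scores are hypothetical, with volunteering weighted most heavily:

```python
# Weighted average of normalized activity scores (0-5 scale).
# Weights are a subjective judgment about each activity's effort and impact.
WEIGHTS = {"forum": 1, "volunteering": 4, "courses": 2, "events": 2}

def weighted_score(activity_scores):
    """Weighted average: volunteering counts four times as much as forum posts."""
    total = sum(WEIGHTS[a] * s for a, s in activity_scores.items())
    return total / sum(WEIGHTS[a] for a in activity_scores)

# Hypothetical normalized scores (0-5) per activity.
bill  = {"forum": 5.0, "volunteering": 1.8, "courses": 5.0, "events": 5.0}
annie = {"forum": 2.0, "volunteering": 4.0, "courses": 4.0, "events": 4.0}
# With these weights, Annie's committee work now outscores Bill's forum activity.
```

Note how sensitive the outcome is to the weights we pick, which is exactly the subjectivity problem discussed next.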
Annie’s score now indicates her more valuable role in our association. Right?
Well…we have created a subjective indication of the value we place on her contribution. But we still can’t be sure the score shows us anything real, or how much our weighted engagement scores actually tell us.
DON’T JUST KNIT A DOGGY SWEATER
Ultimately, the answer lies in what we are trying to accomplish with our score. An engagement score for its own sake is great, but it is a little like knitting a doggy sweater with no dog to wear it. Both are substantial accomplishments, but the functionality is limited.
It is much better to have an outcome in mind from the beginning. Are we interested in how engagement leads to continued membership? How about the relationship between engagement and leadership development? Maybe we want to know how engagement helps our members advance in their careers.
Now our engagement score has direction, and we connect the score to the real world. Moreover, if for our weighting technique we use the probability that a person’s volunteering or forum contributions will result in our chosen outcome, our weights become less subjective.
When forum contributions have a low probability of advancing our members’ careers, we should leave the indicator out of the score. But if they register a high probability that the contributor will remain a member—and that is the outcome we’d like our engagement score to support—then we should leave it in.
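One way to sketch this outcome-driven weighting: use invented renewal probabilities as the weights, with a cutoff that drops weak indicators from the score entirely. All numbers here are hypothetical:

```python
# Outcome-driven weighting: each activity's weight is the (hypothetical)
# probability that doing it predicts membership renewal, our chosen outcome.
RENEWAL_PROBABILITY = {"forum": 0.70, "volunteering": 0.85, "courses": 0.60, "events": 0.30}
CUTOFF = 0.50  # indicators below this probability don't enter the score at all

def retention_score(activity_scores):
    """Score built only from activities that meaningfully predict renewal."""
    kept = {a: s for a, s in activity_scores.items() if RENEWAL_PROBABILITY[a] >= CUTOFF}
    total = sum(RENEWAL_PROBABILITY[a] * s for a, s in kept.items())
    return total / sum(RENEWAL_PROBABILITY[a] for a in kept)

# Hypothetical normalized scores (0-5) per activity.
annie = {"forum": 2.0, "volunteering": 4.0, "courses": 4.0, "events": 4.0}
# "events" (0.30) falls below the cutoff, so it is excluded from Annie's score.
```

Because the weights now come from observed relationships rather than gut feel, the score is anchored to something we can validate against future behavior.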
Given that we are determining scores based on desired outcomes, we can now envision multiple scores, returning the depth and complexity to the data, while still giving us a shorthand way of talking about member behavior.
The result is that we now have leadership engagement scores, membership engagement scores and/or event attendance engagement scores, and we develop a better understanding of our members, which is what we wanted in the first place…something the vague “engagement score” never really offered.
Kerry spoke in the “Exploring New Markets: How Research, Analytics and Risk Assessment Can Help” session during SURGE Spring, an interactive virtual summit hosted by AssociationSuccess.org on May 2nd-4th.