Guest contributors to Higher Logic’s blog are valued members of our community who have offered to share their insights. The following blog post was written by a guest contributor who has been compensated for this contribution. The content contained and the opinions expressed in the blog post are solely those of the author and do not necessarily reflect the views of Higher Logic or its employees. Higher Logic does not endorse or make any representations regarding the content and disclaims any liability related to the blog post.
Artificial Intelligence (AI) has become a significant force driving innovation and transformation across various industries. Tools like ChatGPT are making headlines, and everyone seems to be joking about our robot overlords, for better or worse. It’s a dynamic, exciting time, and many association leaders are wondering how they can leverage AI tools and technologies to enhance their operations, engagement, and value proposition for members.
It can be hard to navigate the amount of information (and hype!) out there. So let’s break down some of those opportunities. You can think of AI as having four major areas of benefit to the ways you gather and engage your people: Reducing Risk, Lifting Burden, Deepening Relevance and Enhancing Analysis.
Reducing Risk

Association communities are delicate, innately human ecosystems. Though they can be extraordinarily resilient, they can also be quite vulnerable. Whether it's moderation tensions or plateauing growth and engagement, AI can help you flag and mitigate risks that could impact your online community.
Australian online community hosts, including associations, have important regulatory obligations around online harms. Monitoring and acting on defamation, hate speech, harassment, misinformation and more isn't just part of your duty of care; it's required under legislation such as the Online Safety Act 2021.
AI moderation tools that integrate with your community can assist you in meeting these obligations in a range of ways. They can help you prioritise reports and issues and flag potential problems before they escalate, based on criteria you input. They can also automate stop-gaps such as timing out users or locking conversations, educate users through constructive prompts and reminders, automatically add user notes to your moderation records, and more.
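To make the prioritisation idea concrete, here is a minimal sketch of report triage: sorting a moderation queue so the riskiest categories and repeat offenders surface first. The category names, weights and data shape are all illustrative assumptions, not features of any real moderation product.

```python
# Hypothetical report-triage sketch: severity weights and field names are
# invented for illustration, not drawn from a real moderation tool.
SEVERITY_WEIGHTS = {"harassment": 3, "hate_speech": 3, "defamation": 2, "misinformation": 1}

def triage_reports(reports):
    """Sort reports so higher-risk categories and repeat offenders come first."""
    def score(report):
        base = SEVERITY_WEIGHTS.get(report["category"], 0)
        # Escalate when the same user has been reported before (capped at +3).
        return base + min(report.get("prior_reports", 0), 3)
    return sorted(reports, key=score, reverse=True)

reports = [
    {"id": 1, "category": "misinformation", "prior_reports": 0},
    {"id": 2, "category": "harassment", "prior_reports": 2},
    {"id": 3, "category": "defamation", "prior_reports": 0},
]
queue = triage_reports(reports)  # harassment with prior reports rises to the top
```

In practice the scoring criteria would come from your own moderation policy, but the shape of the workflow (score, sort, act on the top of the queue) stays the same.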
Increasingly sophisticated AI tools like Image Analyzer are emerging to support specific types of moderation, such as visual content, while API-driven tools like ActiveFence can be integrated into community platforms to help maintain compliance across key moderation issues.
AI tools can be a valuable foundation to your governance toolset, freeing up staff time for the nuanced human moderation that shapes member culture and experience.
By analysing member behaviour and engagement patterns across your online community, AI can help identify potential churn risks among your member base and inform proactive, targeted retention strategies. You can see whether members are disengaging from content, community discussions or other activities you track, or whether their sentiment or tone has recently changed.
In addition to spotting downturns in engagement, this can help you anticipate any financial shortfalls or operational implications that such a downturn might trigger, positioning you to intervene before that happens.
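The core of churn-risk flagging is simply comparing recent activity against each member's own baseline. The sketch below shows that shape in plain Python; the threshold and field names are assumptions for illustration, and a real tool would draw on richer signals such as sentiment.

```python
# Illustrative churn-risk flag: members whose recent activity dropped below
# half their historical average. Threshold and data shape are assumptions.
def flag_churn_risks(members, drop_threshold=0.5):
    """Return IDs of members whose recent engagement fell below a fraction
    of their historical monthly average."""
    at_risk = []
    for m in members:
        baseline = m["avg_monthly_actions"]
        if baseline > 0 and m["actions_last_month"] < baseline * drop_threshold:
            at_risk.append(m["id"])
    return at_risk

members = [
    {"id": "a", "avg_monthly_actions": 20, "actions_last_month": 4},
    {"id": "b", "avg_monthly_actions": 10, "actions_last_month": 9},
]
risks = flag_churn_risks(members)  # member "a" has dropped sharply
```

A flagged member then becomes a prompt for human outreach, not an automatic action.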
Lifting Burden

Cumbersome and repetitive administrative busy-work can be automated and streamlined using AI tools, releasing you and your team to focus on the tasks that demand uniquely human perspective and insight.
There’s a perpetually long list of administrative tasks involved in community management and association life. Though member experience should be designed thoughtfully by people who understand and serve their community, once plans are in place, AI can execute sets of tasks on your behalf. There are AI tools that can help you categorise and prioritise your email inbox, draft emails, manage documents, schedule meetings, and more!
You can use them to manage communications around events (like registration notifications, updates and wrap-up surveys), schedule tasks for staff or members, take notes during meetings or events, or handle simple content creation. For example, you could use AI to transcribe a recording of a conference session or webinar, then summarise the transcription into a list of highlights from a guest speaker.
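To illustrate the transcript-to-highlights idea, here is a toy extractive summariser: it scores each sentence by word frequency and keeps the top ones. A real workflow would hand this step to a hosted AI service; this pure-Python version just shows the shape of the task.

```python
# Toy extractive summariser: score sentences by how often their words appear
# across the whole text, then keep the highest-scoring sentences in order.
import re
from collections import Counter

def summarise(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in top]  # preserve original order

transcript = "Cats are great. Cats are great pets. Dogs bark."
highlights = summarise(transcript)
```

The sample transcript is obviously invented; the point is that summarisation turns a long recording into a short, reviewable artefact a human can polish.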
You could also use AI to automate member onboarding and offboarding tasks at scale, such as sending welcome emails, showcasing community features, recommending content or places to start, and nudging people to complete forms or surveys.
These and more tasks can be automated to free time for higher-value activities, such as strategic planning, and those interactions that require the all-important human touch.
Enhancing Analysis

AI-powered analytics can help associations make sense of vast amounts of data, uncover hidden patterns, gain valuable insights into their member community and articulate the value of their work.
As noted, behavioural analysis can help identify churn risk, but it offers other benefits too.
You can use social graphing or network analysis tools to explore connections and relationships between members that reveal hidden opportunities (e.g. the formation of a new sub-community or network based on shared needs). You can also use AI to help you identify potential volunteers, super-users or ambassadors based on specific interaction criteria.
Many of these tools don’t require coding skills or deep IT knowledge and have user-friendly interfaces to guide you.
AI can also be helpful for bigger-picture needs, such as identifying and creating efficiencies within your wider organisation.
If you can share key data points, AI tools can assist with optimising your resource allocation, including budgeting and staffing, and can help you prioritise tasks when you’re feeling overwhelmed.
AI can also be used to provide trend forecasting within your sector, identify potential sponsors, or analyse research around your key subject matter that could then fuel evidence-based advocacy efforts and strategic decision-making.
One of the most common challenges facing community managers is articulating the value their work is generating. Are unreported benefits being generated within your association community? Are users achieving learning outcomes, making lasting connections, reaching new professional goals, or reporting that they feel more supported than ever?
Over time, AI tools are expected to help uncover patterns of value – mapped to the broader organisation – that association leaders and community managers can hang their hats on.
Deepening Relevance

AI tool sets can help reduce the effort it takes to personalise your member experience, thus increasing engagement, loyalty and retention.
AI algorithms enable associations to deliver tailored programming, content, recommendations, and services to their members. By leveraging member data, AI can identify targeted learning opportunities or educational programs, networking recommendations, and customised event experiences, fostering member engagement and satisfaction.
For example, you can use AI to analyse member data or interactions to help inform programming decisions for member conferences and events, ensuring you’re addressing timely needs and interests. You might use ChatGPT to help you flesh out a program schedule, and a DIY chatbot builder like landbot.io to create a bespoke virtual concierge for an online gathering.
AI tools are increasingly capable of match-making – making those critical connections between people in your community that lead to lasting relational value for both them and your association.
You can use AI to introduce members with similar interests or backgrounds, members seeking specific expertise or resources with peers who can provide them, or volunteers with opportunities in your sector.
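At its simplest, interest-based match-making is a similarity calculation over member profiles. This hedged sketch pairs a member with the peer whose declared interests overlap most (Jaccard similarity); the profiles are invented for illustration.

```python
# Interest-overlap match-making sketch. Profiles and interests are invented.
def jaccard(a, b):
    """Jaccard similarity of two interest lists: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(member, others):
    """Return the member from `others` with the highest interest overlap."""
    return max(others, key=lambda o: jaccard(member["interests"], o["interests"]))

profiles = [
    {"name": "Priya", "interests": ["governance", "events", "policy"]},
    {"name": "Marco", "interests": ["policy", "advocacy"]},
    {"name": "Lin", "interests": ["events", "governance", "mentoring"]},
]
match = best_match(profiles[0], profiles[1:])  # Priya's closest peer
```

Production match-making tools layer on behavioural signals and availability, but the underlying logic is still "find the peers most like (or most useful to) this member".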
Assess with Care
Associations can use AI to elevate their operations, reduce risks, tailor member experiences, automate burdensome tasks, and access deeper insights to demonstrate value and enhance decision-making. However, it’s important to remember that AI is not one-size-fits-all, and despite major advances, it’s still a work in progress. For each of those areas of benefit, there are risks to be mindful of.
Always assess the AI tools you’re considering with care.
Consider Accuracy and Copyright Issues
Accuracy and copyright issues are significant considerations when using generative AI tools. Always critically evaluate the accuracy of AI-generated content and be aware of the potential copyright implications associated with using such content.
Generative AI tools, including text and image generators, might not always produce accurate or contextually appropriate content. The accuracy of AI tools is heavily dependent on the quality and diversity of the data they were trained on. If the training data is biased or incomplete, it can impact the accuracy of the generated content. These tools can generate content that seems plausible but is factually incorrect or misleading. As such, it is important to never assume that AI-generated output is entirely accurate – always review and edit the content generated by AI tools before using it for any purpose.
Additionally, generative AI tools do not inherently guard against copyright infringement. The responsibility for addressing copyright concerns when using these tools falls primarily on their users and developers.
If you’re using an AI tool trained on a data selection that you dictate, you might consider avoiding copyrighted materials in your training data to reduce the risk of generating content that closely resembles copyrighted works.
But if you're using a tool like ChatGPT, your best defence is educating your users about copyright risks and carefully reviewing generated content (via reverse searches, plagiarism detection tools, text analysis tools, and even your legal department).
Consider Data Privacy and Ethics
AI tools rely on data to provide insights and predictions. Assess the privacy and security aspects of tools you use to ensure their compliance with relevant regulations. Globally this includes regulation like GDPR; here in Australia it includes our Privacy Act (1988) and our Data Sovereignty laws. You have a responsibility to understand how the tool handles and protects your sensitive member data.
You'll also want to think about how transparent you need to be with your members about your AI usage. Would your members be comfortable knowing their community interactions were feeding a machine-learning model, for example? When you're using a public generative AI tool like ChatGPT, remember that the data you input may be used to train future versions of the model. This carries privacy risks if that data is private or proprietary (for example, member data or internal company reports).
More broadly, it's important to reflect on the ethical implications of using AI tools. Top researchers around the world, such as Timnit Gebru and Virginia Eubanks, have documented risky and dangerous outcomes of using AI, particularly impacting marginalised or structurally disadvantaged people. These include racial and gender bias, predictive policing, deepening financial inequality, and outright harassment (exemplified in the Australian Robodebt scandal). There is also concern around generative AI tools using existing content and intellectual property for profit, without credit or compensation, and around labour abuses of the people behind the scenes who make AI work (such as data labellers).
These systemic problems aren’t going away, so it’s important to proceed cautiously when adopting AI tools. All AI carries bias, as it’s shaped by the people who made it and the data it consumes. It’s important to do our individual part by applying some scrutiny to the products and practices of AI companies we might engage.
Ask questions about transparency and fairness. Ask where data sets come from. Look for a history of best practices and compliance with both local and global guidance in this space – you want to see shared values in action (not just on paper). You can reach out to not-for-profit and watchdog organisations (such as the Algorithmic Justice League) for guidance about bad actors and issues to look out for.
Consider What You May Be Losing
As well as evaluating the benefits you may gain from applying AI, assess what might disappear from your association's online community, and ensure you're okay with that.
For example, your members may have regularly chatted with each other as part of foraging for needed information. If you use AI to connect them with that information more efficiently, ensure you still design ways for that social interaction to occur, or you risk losing important aspects of what makes your community special.
AI should always be deployed contextually to support your community and their needs, as well as your own needs. It’s not something you should adopt just for the sake of using it, but rather a tool you can consider leveraging to help you save time and streamline. When integrated thoughtfully and ethically, AI tools and technologies can be a positive engine for your association community – from discovery, through membership and beyond.
Venessa Paech (BFA, MA) is an internationally regarded online community expert and educator. She led Community for Lonely Planet, REA Group, Envato and Australia Post. Through her consultancy PeerSense she helps organisations design, build and maintain safe and thriving communities via strategy and coaching. Her clients have included: ABC, AASW, Teach for Australia, QUT, University of Sydney, SANE and Woolworths.
In 2009 Venessa founded the Australian Community Manager Roundtable, which in 2011 became Swarm, Australia's premier conference for community practitioners. She is Co-Founder and Director of Australian Community Managers (ACM), the national centre of excellence for online community management training and resources. ACM hosts an online community of more than 6,000 members, produces research into community management, facilitates the national Code of Ethics for community professionals and offers training for anyone working with online groups or communities (professional or volunteer). In 2021 ACM launched the first ever national training for online community professionals in defamation, in response to new regulatory obligations in the Australian market, and in 2023 ACM launched the All Things in Moderation global conference for humans who moderate.
Venessa works in several international community management leadership collectives, and liaises with industry, government and researchers to grow community management practice in APAC. She has served as an expert witness in Australia for legal matters surrounding the use of social media and online communities. Venessa developed and teaches the first post-graduate online community management Unit for the University of Sydney, where she also teaches business executives about strategic community, lectures in Internet Cultures and Governance, and is a PhD Candidate (University of Sydney) researching AI and online communities.