Privacy and Security Considerations When Using AI for Your Association

Hear From an Expert on AI, Privacy, Security, and Data Protection to Keep Your Association and Its Members Safe

In an episode of The Member Engagement Show, I sat down with Amanda DeLuke, Senior Privacy Analyst at Higher Logic, to talk about key privacy and security considerations for associations as they incorporate AI into how they do their work. Amanda is also a member of the International Association of Privacy Professionals (IAPP) and the Messaging, Malware, and Mobile Anti-Abuse Working Group (M3AAWG), where she recently spoke on an AI governance panel at the group’s conference.

While larger organizations may have dedicated roles for privacy and security, smaller associations often lack these resources, so Amanda and I also talked about how to identify those essential security and privacy steps to make things more manageable and scalable.

Higher Logic’s Commitment to Privacy and Security

In her role, Amanda handles Higher Logic’s data protection agreement, runs our internal privacy assessments, manages our privacy program, and educates staff on privacy practices. She also assists with privacy and security reviews of the vendors Higher Logic uses for day-to-day work.

“Privacy and security is really a group effort,” says Amanda. “It’s important to involve everyone throughout the business.”

One important part of Amanda’s work is supporting Higher Logic’s internal and external audits for our ISO certification and SOC 2 attestation. These standards let companies measure their security and privacy practices and demonstrate a commitment to safeguarding data. ISO 27001 is a standard against which organizations can achieve formal certification. SOC 2 is an attestation based on five trust services criteria for evaluating the effectiveness of an organization’s security controls.

“[ISO and SOC 2] are really important industry standards. They show folks that you can trust the organization to safeguard data. I work with a lot of vendors, and I’ve seen where some only have ISO, but Higher Logic has both because that SOC 2 framework is really important and it’s valuable doing that audit of your whole program to evaluate security and privacy controls.”

Trends in AI and Privacy

AI’s rapid rise introduces unique privacy and security challenges for associations. As powerful as it is for saving time, it also creates opportunities for abuse and for breaches of privacy and security.

“AI is new and it’s hard to detect. So it’s become a new vector for abuse because it gives people more ‘brain’ power to potentially use for cybercrime. In that way it’s a blessing and a curse because this tool is in the hands of everyone – both people who want to do good things and people who use it negatively… There are also concerns on the privacy side around using personal data to train AI models. So there are lots of questions around how we approach this.”

Some public AI tools, for example, may train their underlying models on the data you put into them (which is why Higher Logic’s AI features keep your data private). It’s important to understand whether the AI tools you’re using do this. If they do, that should influence which data and information you feel comfortable using in those tools (e.g., you wouldn’t want to put private data into a public tool).

For example, Amanda shared that some organizations use practices like differential privacy, “a mathematical framework that adds in randomness or noise to data sets so you can’t see any personal information in what’s being used to train the model,” or they’ll anonymize data when using it with AI.
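To make differential privacy a little more concrete, here’s a minimal sketch of the classic Laplace mechanism applied to a simple count query. The member records and the query are made up for illustration; this is a toy example of the idea Amanda describes, not a production implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one person changes
    the result by at most 1), so the noise scale is 1 / epsilon. Smaller
    epsilon means stronger privacy but noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical member records
members = [{"age": a} for a in (24, 31, 45, 52, 67)]

# "How many members are over 40?" -- the answer is randomized, so no single
# individual's presence or absence can be confidently inferred from it.
noisy = dp_count(members, lambda m: m["age"] > 40, epsilon=0.5)
```

Each call returns a slightly different number centered on the true count (3 here), which is what prevents anyone from reverse-engineering an individual’s data out of the aggregate.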

As AI becomes more commonplace and is incorporated into more systems and software, it’s important that your organization establishes internal guidelines to ensure data privacy and responsible use.

data classification chart separating data into public, private, internal, confidential, and restricted

Best Practices in AI Governance

Governance is key for associations aiming to use AI responsibly. As you incorporate AI into your work:

  • Create a data map or data inventory – a detailed record of your organization’s data assets. Consider the type and location of each data point, as well as how that data is transferred between systems.
  • Evaluate and classify your data – data classification organizes data into groups based on sensitivity, importance, and other criteria. Example levels drawn from NIST guidance include public data (freely available and accessible to anyone), private data (more sensitive than public data but requiring minimal security), internal data (pertaining directly to the organization, with restricted access), and confidential data (which may carry legal requirements for how it is handled and therefore has tightly limited access).
  • Learn from existing frameworks from organizations like NIST, the OECD, and the International Association of Privacy Professionals, and ask around in your networks (like the Higher Logic User Group – HUG) to see what other people are doing. You don’t have to reinvent the wheel – you can borrow from the guidance of others.
  • Leverage existing processes – for example, if you already have a process for identifying and eliminating risks, use it for AI too.
  • Evaluate external tools and vendors – make sure you know whether the vendors you use have or are adding AI components, and understand how those features work. If it’s hard to tell from their website and communications, contact your account rep and have them walk you through their privacy and security measures with regard to AI. You should only work with, and input data into, vendors you trust.
  • Form an internal working group – if you don’t have dedicated security and privacy staff, you can still take steps to use AI responsibly. Share the effort by creating a working group to research and discuss security and privacy considerations for your organization. Empower those on the working group to set aside time to do this work – it’s important to your organization’s reputation and maintaining members’ trust.
  • Make sure staff are on the same page – your data and AI privacy and security policies are only effective if staff understand them and follow them. Set clear expectations and reiterate why everyone needs to be bought in. For example, if you don’t want certain organizational data sets used with public AI tools like ChatGPT, make sure staff know that and understand why.
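To make the first two steps above concrete, here’s a minimal Python sketch of a data inventory with classification labels and a simple policy check. The asset names, systems, and the “public data only” rule are illustrative assumptions, not a prescription for your organization:

```python
from dataclasses import dataclass, field
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    PRIVATE = "private"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"

@dataclass
class DataAsset:
    name: str                        # what the data is
    location: str                    # system where it lives
    classification: Classification   # sensitivity level
    shared_with: list = field(default_factory=list)  # systems it flows to

# A tiny, hypothetical inventory
inventory = [
    DataAsset("event calendar", "website CMS", Classification.PUBLIC),
    DataAsset("member emails", "AMS", Classification.CONFIDENTIAL,
              shared_with=["email platform"]),
]

def allowed_in_public_ai_tools(asset: DataAsset) -> bool:
    # Example policy: only public data may be used with public AI tools
    return asset.classification is Classification.PUBLIC

ok = [a.name for a in inventory if allowed_in_public_ai_tools(a)]
```

Even a small table like this gives a working group something concrete to review: where each data set lives, where it flows, and which classifications your AI policy permits in which tools.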

For associations, transparency about data usage and about your privacy and security policies is essential for keeping all your staff on the same page, and for being mindful in how you use AI so you can maintain the trust of your members.

Want to learn more about AI, Privacy, and Security?

Join Amanda DeLuke and Kelly Whelan on January 22, 2025, for ASAE’s upcoming webinar, Embracing AI Safely: Practical Strategies for Associations.

AI Guidelines and Legislation

Amanda also talked about some of the emerging guidelines and legislation around AI, though she called out that this is a tricky area because AI technologies are evolving so quickly that legislation can’t keep up.

There are a few, like the OECD guidelines, the EU AI Act, and various state laws in the U.S. And don’t forget the GDPR, which already regulates how organizations handle data and absolutely affects how you should approach AI. For example, since the GDPR and other data privacy laws require you to be able to delete someone’s personal information upon request, you wouldn’t want to put that sort of data into an AI tool that stores it in a way you can’t delete.

Some overall themes that stand out to Amanda currently:

  • Many laws focus on “prohibited uses” and “high-risk” data; health and biometric information, along with surveillance data, is often restricted.
  • On the state side, legislation focuses more on the development and deployment of AI systems than on their use.
  • Most legislation continues to encourage innovation alongside safe and responsible use of AI.
  • Some companies exclude data from AI tools in regions with strict data privacy laws, which could affect how complete their training data is and, in turn, how reliable or biased the results they generate are.

Associations should maintain awareness of these developments as they happen and ensure they’re in compliance, especially if they collect or use data from highly regulated areas.

Practical Tips for Associations Incorporating AI

Smaller associations may feel overwhelmed by the prospect of trying to evaluate and use AI. But with a mindful, proactive approach, there are ways to use it carefully.

  • Start small by testing AI with only public or test data.
  • Start with vendors you trust who already have strong privacy and security protections, and confirm that they store your data privately and don’t use it to train their models.
  • Form an internal working group or committee to continuously discuss AI’s uses and potential risks.
  • Lean on communities where you can learn from what others are doing. You’re not alone – we’re all figuring this out together.

Moving Forward with AI Safely

What’s most important for associations is transparency and thoughtful planning. Be open about your use of AI and how it impacts members. If you have a clear, responsible AI strategy, not only does that help you minimize risk, it can also increase member trust and enhance the association’s reputation.

AI is a significant technological shift, and associations can save a lot of time and benefit tremendously from taking advantage of its many functions and features. Just ensure you pair that use with a responsible AI strategy and strong privacy practices so you can embrace innovation while keeping your organization and your members safe.

Kelly Whelan

Kelly Whelan is the Content Marketing Manager for Higher Logic. In this role, she develops content to support association professionals and advise them on member engagement and communication strategy. She also hosts Higher Logic’s podcast, The Member Engagement Show. She has ~10 years of experience working in marketing for associations and nonprofits.