Hear From an Expert on AI, Privacy, Security, and Data Protection to Keep Your Association and Its Members Safe
In an episode of The Member Engagement Show, I sat down with Amanda DeLuke, Senior Privacy Analyst at Higher Logic, to talk about key privacy and security considerations for associations as they incorporate AI into how they do their work. Amanda is also a part of the International Association of Privacy Professionals (IAPP), and the Messaging, Malware, and Mobile Anti-Abuse Working Group (M3AAWG) – she recently spoke on an AI governance panel at their conference.
While larger organizations may have dedicated roles for privacy and security, smaller associations often lack these resources, so Amanda and I also talked about how to identify those essential security and privacy steps to make things more manageable and scalable.
In her role, Amanda handles Higher Logic’s data protection agreement, runs our internal privacy assessments, manages our privacy program, and educates staff on privacy practices. She also assists with privacy and security reviews of the vendors Higher Logic uses for day-to-day work.
“Privacy and security is really a group effort,” says Amanda. “It’s important to involve everyone throughout the business.”
One important part of what Amanda does is supporting Higher Logic’s internal and external audits for our ISO certification and SOC 2 attestation. These standards let companies measure their security and privacy practices and demonstrate a commitment to safeguarding data. ISO 27001 is a standard that organizations can be formally certified against. SOC 2 is an attestation based on five trust services principles for evaluating the effectiveness of an organization’s security controls.
“[ISO and SOC 2] are really important industry standard[s]. They show folks that you can trust the organization to safeguard data. I work with a lot of vendors, and I’ve seen where some only have ISO, but Higher Logic has both, because that SOC 2 framework is really important, and it’s valuable doing that audit of your whole program to evaluate security and privacy controls.”
AI’s rapid rise introduces unique privacy and security challenges to associations. As amazing as it is for helping you save time, it also creates opportunities for abuse and breaches in privacy and security.
“AI is new and it’s hard to detect. So it’s become a new vector for abuse because it gives people more ‘brain’ power to potentially use for cybercrime. In that way it’s a blessing and a curse because this tool is in the hands of everyone – both people who want to do good things and people who use it negatively… There are also concerns on the privacy side around using personal data to train AI models. So there are lots of questions around how we approach this.”
Some public AI tools, for example, may train their underlying model on the data you put into them (which is why Higher Logic’s AI features keep your data private). It’s important to understand whether the AI tools you’re using do this. If they do, that should influence which data and information you feel comfortable using in those tools (e.g., you wouldn’t want to put private data into a public tool).
For example, Amanda shared that some organizations use practices like differential privacy, “a mathematical framework that adds in randomness or noise to data sets so you can’t see any personal information in what’s being used to train the model,” or they’ll anonymize data when using it with AI.
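To make the idea concrete, here is a minimal sketch of the Laplace mechanism, one common way differential privacy adds calibrated noise to a statistic. The function names, the example count, and the epsilon value are illustrative assumptions, not anything from the interview:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5          # uniform on (-0.5, 0.5)
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a differentially private count.

    A smaller epsilon means more noise and stronger privacy; sensitivity
    is how much one person's data can change the count (1 for a simple count).
    """
    scale = sensitivity / epsilon
    return true_count + laplace_noise(scale)

# Hypothetical example: release a member count of 100 with epsilon = 1.0.
noisy = dp_count(100, epsilon=1.0)
```

Each released value is randomized, so no single member's presence can be confidently inferred, yet across many queries the noisy answers stay close to the truth on average.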
As AI becomes more commonplace and is incorporated into more systems and software, it’s important that your organization come up with internal guidelines to ensure data privacy and responsible use.
Governance is key for associations aiming to use AI responsibly as they incorporate it into their work.
For associations, transparency about data usage and about privacy and security policies is essential: it keeps all your staff on the same page and ensures you use AI mindfully enough to maintain the trust of your members.
Join Amanda DeLuke and Kelly Whelan on January 22, 2025, for ASAE’s upcoming webinar, Embracing AI Safely: Practical Strategies for Associations.
Amanda also talked about some of the emerging guidelines and legislation around AI, though she called out that this is a tricky area because AI technologies are evolving so quickly that legislation can’t keep up.
There are a few, like the OECD Guidelines, the EU AI Act, and various state laws in the U.S. And don’t forget the GDPR, which already regulates how organizations handle data and absolutely impacts how you should approach AI. For example, because the GDPR and other data privacy laws require you to be able to delete someone’s personal information upon request, you wouldn’t want to feed that sort of data into an AI tool that stores it in a way you can’t delete.
Some overall themes that stand out to Amanda currently:
Associations should stay aware of these developments as they happen and ensure they remain in compliance, especially if they collect or use data from highly regulated areas.
Smaller associations may feel overwhelmed by the prospect of trying to evaluate and use AI. But with a mindful, proactive approach, there are ways to use it carefully.
What’s most important for associations is transparency and thoughtful planning. Be open about your use of AI and how it impacts members. If you have a clear, responsible AI strategy, not only does that help you minimize risk, it can also increase member trust and enhance the association’s reputation.
AI is a significant revolution, and associations can understandably save a lot of time and benefit tremendously from taking advantage of its many functions and features. Just ensure you pair that use with a responsible AI strategy and strong privacy practices, so you can embrace innovation while keeping your organization and your members safe.