Navigating AI in Social Impact Work: Our Journey at Engage R+D
by Meghan Hunt
As artificial intelligence tools become increasingly accessible, organizations across the social sector are grappling with how to harness their potential while staying true to their values. At Engage R+D, we’ve spent the past year exploring how generative AI can enhance our evaluation, strategy, and communications work—and we’ve learned a lot along the way.
Here’s what we’ve discovered, the framework we’ve developed, and the practical lessons that might help other organizations on their own AI journeys.
Starting with Values, Not Tools
Our exploration began with a fundamental question. Instead of asking “How can we use AI?” we started with “How can we use AI responsibly?” This shift in perspective shaped everything that followed—from our tool selection to our usage guidelines.
We recognize that the technology will continue to evolve, so rather than focusing on how to use the tools available today, we wanted guiding principles for responsible AI use that could carry over to new contexts. We are starting with the following three principles:
Human Oversight is Mandatory: Every piece of AI-generated or AI-assisted content that leaves our organization is reviewed by a human team member for accuracy, tone, and appropriateness. AI can produce content that appears confident but is factually incorrect or contextually inappropriate, making human judgment essential.
Use Professional Accounts for Work: Just as we wouldn’t send work emails from personal accounts, we don’t use personal AI accounts for work-related tasks. Our organizational subscription (more on that below) provides security controls, data protection, and consistency that personal accounts, even paid ones, cannot match.
Handle Personal Information Thoughtfully: While our AI subscription provides strong security protections, we still exercise judgment about what information we share. We’re comfortable using names and meeting notes, but we avoid sensitive personal details, off-the-record content, and information from clients who have specifically asked us not to use AI.
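As one example of how this judgment can be supported in practice, here is a minimal sketch of a pre-submission scrub. The patterns and names below are hypothetical and deliberately incomplete; a check like this can flag obvious personal details before notes are shared with an AI tool, but it complements rather than replaces human review:

```python
# Hypothetical pre-submission scrub: flag obvious personal details before
# notes are shared with an AI tool. Patterns are illustrative and incomplete;
# human judgment still makes the final call.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def flag_pii(text: str) -> dict[str, list[str]]:
    """Return any matches for each PII pattern found in the text."""
    return {
        name: pattern.findall(text)
        for name, pattern in PII_PATTERNS.items()
        if pattern.findall(text)
    }

notes = "Follow up with jsmith@example.org; cell 555-123-4567."
for kind, matches in flag_pii(notes).items():
    print(f"Review before sharing ({kind}): {matches}")
```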
Our Two-Mode Framework: Explorer and Navigator
Next, we created guidance to help us experiment responsibly with AI. To do this, we considered the types of work we do and developed what we refer to as an Explorer/Navigator framework for AI use:
Explorer Mode represents low-risk activities where AI can enhance efficiency and creativity without significant ethical concerns. These include:
Drafting routine communications
Sourcing ideas for data analysis approaches
Getting feedback on writing (proofreading and copy editing)
Brainstorming facilitation techniques and discussion questions
Navigator Mode covers more complex tasks that require careful human oversight and alignment with our values. These include:
Creating meeting agendas, which requires review for inclusivity and meaningful dialogue
Analyzing quantitative data, which requires verifying the accuracy of calculations and outputs
Summarizing qualitative data, which requires accuracy and bias checks
Drafting sensitive communications, which requires careful attention to tone and relationships
This framework helps our team quickly distinguish low-risk activities, where experimentation is easy, from AI applications that require a more intentional approach. A sketch of one Navigator-mode accuracy check follows below.
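To make the accuracy checks above concrete, here is a minimal sketch of the kind of spot-check we have in mind for quantitative work. The file, column, and function names are hypothetical; the point is simply to recompute any AI-reported statistic from the original data before it goes into a deliverable:

```python
# Hypothetical spot-check: recompute an AI-reported statistic from source data.
# The file path, column name, and reported value are illustrative only.
import pandas as pd

def verify_reported_mean(csv_path: str, column: str,
                         reported: float, tol: float = 0.01) -> bool:
    """Return True if the AI-reported mean matches a recomputation within tol."""
    df = pd.read_csv(csv_path)
    actual = df[column].mean()
    if abs(actual - reported) > tol:
        print(f"MISMATCH: AI reported {reported}, data says {actual:.3f}")
        return False
    print(f"OK: {column} mean {actual:.3f} matches the report")
    return True

# Example: an AI summary claimed the average survey score was 4.2.
verify_reported_mean("survey_responses.csv", "satisfaction_score", reported=4.2)
```

The same recompute-before-you-trust habit applies to counts, percentages, and crosstabs.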
Why We Chose Claude
After evaluating various AI options, we selected Anthropic’s Claude as our primary generative AI tool. We chose Claude over alternatives like ChatGPT for two key reasons: it performed well on text analysis and thoughtful conversation, and, equally important, it aligned with our ethical standards. Anthropic maintains a public Transparency Hub and offers a framework for mitigating the potential harms of AI, commitments that fit our organizational values and give us confidence in using the tool for analytical tasks and sensitive communications. We use an organizational subscription that provides enhanced security features and ensures all team members operate under consistent privacy protections.
Practical Lessons Learned
Along the way, we have learned several lessons that may be useful to other organizations experimenting with AI:
Start Small and Build Confidence: We began with low-stakes applications like brainstorming and proofreading before moving to more complex tasks. This allowed our team to develop comfort and competence gradually.
Create Clear Guidelines Early: Our reference guide has been invaluable for ensuring consistent, responsible use across the team. Having written guidelines prevents ad-hoc decisions that might compromise our standards.
Expect Ongoing Evolution: AI technology and our understanding of it continue to evolve rapidly. We’ve built flexibility into our approach, relying on guiding principles rather than application-specific approaches and regularly updating our guidelines as we consider new applications and concerns. We do this collaboratively, through all-team learning sessions and shared documents.
Quality Assurance is Everything: For qualitative analysis in particular, we've developed robust processes to verify AI-generated insights against original data. The efficiency gains can be significant, but only when paired with rigorous validation.
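As one illustration of what that validation can look like for qualitative work, here is a minimal sketch, with hypothetical file and function names, that flags quotes in an AI-drafted summary that do not appear verbatim in the source transcripts:

```python
# Hypothetical check: confirm quotes attributed in an AI-drafted summary
# exist verbatim in the source transcripts. Paths and quotes are illustrative.
from pathlib import Path

def find_unsupported_quotes(quotes: list[str], transcript_dir: str) -> list[str]:
    """Return quotes that do not appear verbatim in any transcript file."""
    transcripts = [p.read_text(encoding="utf-8")
                   for p in Path(transcript_dir).glob("*.txt")]
    # Collapse whitespace and lowercase so trivial formatting differences
    # do not hide a genuine match.
    normalized = [" ".join(t.split()).lower() for t in transcripts]
    missing = []
    for quote in quotes:
        needle = " ".join(quote.split()).lower()
        if not any(needle in t for t in normalized):
            missing.append(quote)
    return missing

# Example: quotes pulled from an AI-drafted findings summary.
flagged = find_unsupported_quotes(
    ["We felt heard for the first time."],
    transcript_dir="interview_transcripts",
)
for quote in flagged:
    print(f"Needs human review, no verbatim match found: {quote!r}")
```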
The Road Ahead
We’re still in the early stages of understanding AI’s role in social impact work. What we’ve learned so far is that the technology’s potential is significant, but realizing that potential responsibly requires intentionality, clear boundaries, and ongoing reflection. Our approach continues to evolve as we gain experience and as the technology itself advances. We’re particularly interested in exploring how AI might support more complex analytical tasks while maintaining the human insight and contextual understanding that are essential to our work.
For organizations considering their own AI journey, our biggest recommendation is to start with your values, not the technology. Ask hard questions about responsibility and ethics from the beginning, and build your approach from there. The tools are powerful, but they’re only as good as the framework you create for using them. We’d love to hear from other organizations navigating similar questions. How are you approaching AI in your social impact work? What challenges and opportunities have you encountered?
If you’re interested in seeing a copy of our internal guidelines, please reach out: info@engagerd.com.