Written By Spreeha Foundation
Originally posted on the Spreeha Foundation website on 12/9/25
Artificial intelligence has been an emerging subject in nonprofit circles for several years, often sitting at the periphery of conversations – either viewed with skepticism or treated as something that belonged primarily to corporate or academic domains. At this year’s Global Washington Annual Conference in Seattle, that perception appeared to shift. The tone of the discussion was notably different: nonprofit leaders were not debating whether AI mattered, but rather seeking clarity on how to integrate it responsibly, meaningfully, and in ways that enhance service delivery for communities.
This evolving mindset framed one of the conference’s first dedicated AI workshops, featuring Michael Tjalve, PhD, Board Chair of Spreeha Foundation, as a panelist alongside development leaders Paul Essene of Opportunity International and Cameron Birge of Microsoft. The session centered on practical considerations for mission-driven organizations exploring AI, balancing its potential to strengthen nonprofit work with a clear-eyed discussion of limitations, risks, and organizational readiness.

Michael Tjalve with co-panelists Paul Essene and Cameron Birge at the Conference
Spreeha’s Perspective: Application Over Abstraction
Spreeha Foundation’s contribution to the conversation was rooted in operational experience rather than theory. As an organization operating a tech-enabled urgent care network in Bangladesh, Spreeha has been testing how digital tools and carefully designed AI-assisted systems can support frontline healthcare delivery. Current efforts focus on strengthening triage decision-making, providing structured decision-support guidance for health workers, improving diagnostic and referral pathways, and designing follow-up systems for patients managing chronic conditions.
Reflecting this approach, Michael emphasized the importance of staying grounded in real-world needs:
“We’re combining many years of healthcare expertise and deep community trust with a clear-eyed view of where AI can provide the most value for the communities we serve.”
For many attendees, this framing offered reassurance that innovation does not need to be speculative or detached from context. Instead, meaningful use of AI can emerge from local realities, embedded clinical and operational capacity, and a willingness to learn gradually while remaining attentive to risk.
From Theory to Practice: Making AI Understandable
A core strength of the workshop was its ability to move AI from abstraction into operational reality. Michael and his fellow panelists drew on concrete examples from their respective fields to show how AI can support development work when embedded within real organizational contexts.
Reflecting on the discussion, Michael noted:
“My fellow panelists and I shared a range of real-world examples that helped demystify how AI can be leveraged for impact across the sector.”
For participants, this approach helped situate AI within existing workflows rather than framing it as an external disruption. The implication was significant: when practitioners see AI functioning within real service environments—particularly in resource-constrained systems—the conversation shifts away from uncertainty and toward careful, informed experimentation.
A Mindset Shift Among Nonprofits
Michael also noted a shift that reflects growing maturity in the sector:
“More nonprofits are moving beyond the initial, well-placed concerns around AI and are starting to find contextually optimized value from its capabilities.”
This observation suggests that nonprofits are no longer approaching AI solely as a risk or a novelty. Instead, they are beginning to treat it as a tool that can be deliberately shaped to strengthen outcomes—while remaining attentive to limitations and risk. Workshop discussions indicated that this transition is being driven by exposure to practical examples, peer learning, and a growing recognition that AI adoption does not need to be overwhelming. It can begin with targeted, context-specific applications embedded within existing systems.
AI as an Enabler Rather Than an End Goal
A recurring theme in Michael’s reflections was that technology must ultimately support people and the real conditions in which they work.
He emphasized:
“Connecting the dots between true local needs and what the technology can do is where I find the most engaging discussions. That’s when AI shifts from being a technology to becoming a catalyst for meaningful change.”
This perspective aligns closely with Spreeha’s philosophy, which prioritizes human capability, clinical quality, and system strengthening over technological ambition. In contexts like Bangladesh—where access gaps, workforce shortages, and fragmented information systems remain persistent barriers—AI holds value not as an add-on, but as a practical mechanism to extend reach, support frontline decision-making, and improve continuity of care.
What Grounded AI Adoption Looks Like in Practice
While Michael’s reflections helped frame how nonprofit leaders are thinking about AI at a sector level, they also raised a more practical question: what does responsible adoption look like inside real organizations, operating under financial, staffing, and system constraints? For Spreeha, this question moved from theory to practice through the lens of implementation – an area closely observed by Smriti Shrestha, Director of Development & Partnerships, who attended Global Washington 2025 as a conference delegate.
From a practitioner’s perspective, conversations at the conference often returned to a shared set of questions: What is the largest impact AI can realistically deliver? And how can organizations integrate it without losing the human element that sits at the core of nonprofit work? The emerging answer was not scale or speed, but incremental adoption—introducing AI in ways that align with organizational capacity and real operational needs.
A key distinction that surfaced repeatedly was between small language models (SLMs) and large language models (LLMs). SLMs are already proving useful for focused, lower-risk tasks such as grant drafting, donor communications, and summarizing reports—areas where benefits are immediate and manageable. LLMs, by contrast, are being approached more selectively, reserved for more complex needs such as multilingual content creation, advanced data analysis, or predictive insights. This sequencing allows organizations to test, learn, and adapt without overwhelming teams or budgets.
AI’s role in nonprofit work appeared less speculative and more tangible. Teams shared how AI is supporting proposal development, tailoring language for specific donors, and translating and localizing content to improve accessibility. Others described its use in data analysis for impact measurement and program planning, particularly where analytical capacity is limited.
Several practitioners also emphasized how reducing administrative burden through automation can free up staff time for deeper community engagement. In these cases, the value of AI lay not in technological sophistication, but in its ability to fit within existing workflows and respond to real constraints.
Implications for the Development Sector
Taken together, these practitioner reflections point to a broader pattern in how AI is entering the development sector. Progress is unlikely to come from dramatic technological shifts. Instead, it is being shaped through incremental, grounded experimentation, led by organizations closest to communities and most attuned to local constraints.
For Spreeha, these discussions reinforced the value of approaching AI through lived operational experience. Delivering urgent care in low-resource settings has required navigating fragmented systems, workforce constraints, and trust-dependent service delivery—conditions that closely mirror the realities many nonprofits face. This context shapes how Spreeha approaches AI not as a finished solution, but as an iterative process that demands ongoing testing, community engagement, ethical reflection, and evidence generation over time.
For funders and partners, this underscores the importance of supporting organizational capacity, learning, and governance—not just tools or platforms. For nonprofits, it highlights the value of integrating AI into existing systems in ways that strengthen, rather than disrupt, mission delivery.
About Michael Tjalve, PhD
Michael Tjalve, PhD, is the Board Chair of Spreeha Foundation, bringing more than two decades of global experience in AI research, product development, and nonprofit innovation. A former Chief AI Architect at Microsoft Philanthropies, he has helped humanitarian organizations and nonprofits leverage AI to amplify their impact. In 2024, he founded Humanitarian AI Advisory, supporting social impact institutions in harnessing AI responsibly.
Michael is also an Assistant Professor at the University of Washington, where he teaches AI for humanitarian action and ethical innovation. He currently serves as AI Advisor to the United Nations (OCHA) and co-leads the SAFE AI initiative, promoting responsible AI in humanitarian settings. He is also Co-founder of RootsAI Foundation, which empowers Indigenous and underrepresented youth to preserve their heritage and shape AI in equitable and culturally grounded ways.
About Smriti Shrestha
Smriti Shrestha is the Director of Development and Partnerships at Spreeha Foundation. She is a results-oriented professional with more than ten years of experience in program and project management, fundraising, and partnership development for global nonprofits. Her work focuses on translating organizational strategy into fundable, scalable partnerships grounded in community realities.