
Bengaluru, December 16, 2025
As artificial intelligence spreads quietly but rapidly across public systems in India, from classrooms and clinics to courtrooms and content moderation, philanthropy is being pulled into unfamiliar territory. At the recent Asia-Pacific Meet on AI and Philanthropy convened with the Geneva Centre for Philanthropy, leaders from grantmaking foundations, AI deployment organisations and regional advisory bodies confronted a question that is becoming harder to avoid: what responsibilities do funders of social change have in shaping ethical and inclusive AI?
For Natasha Joshi of Rohini Nilekani Philanthropies (RNPF), the entry point was not excitement but unease. When generative AI burst into public view in 2023, she said, the question of whether philanthropy should engage with it became irrelevant almost overnight. “This is not optional anymore,” she said. “It exists. And now we have to decide what our role is inside it.”
At RNPF, that role has meant looking for potential harms as carefully as for new possibilities. Instead of starting with tools, Joshi said, the foundation starts with people, especially those whose vulnerabilities don’t always show up in the usual categories of caste, income, gender or location. Through exercises with technologists, lawyers, teachers and frontline workers, RNPF explored many possible futures shaped by AI, including the ripple effects that often get overlooked in mainstream tech conversations.
One unsettling finding came from research into young people’s engagement with chatbots. Around 65 percent of respondents in one survey reported confiding in AI systems about personal problems, seeking advice and sharing distress. “We actually have no idea what the true extent of generative AI usage is,” Joshi warned. “If you have a smartphone and some data, you’re already inside this world. We just don’t know what that means yet.”
For philanthropy, that uncertainty creates both obligation and risk. Unlike markets or governments, Joshi argued, philanthropy has the freedom, and responsibility, to resource the invisible work: legal aid, digital harm research, advocacy and early warning systems for forms of exploitation that enforcement agencies have not yet learned to recognise.
While RNPF is working at the level of governance and safeguards, Wadhwani AI operates at the level of national infrastructure. Dr Prachi Karkhanis, who leads Monitoring, Evaluation and Learning at the organisation, described what it takes to move beyond the “AI for social good” rhetoric to actually build tools that work at scale in public programmes.
Wadhwani AI’s tools are now embedded inside major government platforms, from India’s national telemedicine service to tuberculosis tracking systems and education portals used across multiple states. One widely used education tool assesses children’s reading fluency through smartphones, giving teachers immediate feedback on pronunciation, speed and comprehension. The data has helped states design targeted remediation programmes affecting hundreds of thousands of students.
But Karkhanis was careful not to romanticise the process. Most AI projects, she noted, fall into what she called the “pilot trap”: small, promising pilots that never grow into solutions a government can fully adopt. Philanthropy, she argued, could play a decisive role in breaking this cycle by funding not just models but the full system around them: working closely with government, training frontline users, improving data infrastructure and investing in long-term evaluation.
“Scaling AI is not scaling technology,” she said. “It’s scaling trust, capacity and institutions.”
And trust, she noted, isn’t just about accuracy. It’s about being able to show where information comes from. In their health tools for frontline workers, for example, Wadhwani AI now links every answer back to the exact government guideline it draws from, making the system easier to trust and easier to hold accountable.
If Karkhanis showed what AI can achieve when it’s woven into public systems, Naghma Mulla of EdelGive brought the conversation back to basics. She cautioned that philanthropy sometimes mistakes quick activity, such as new tools, new pilots and new data, for real change on the ground.
“We are putting icing on a cake that isn’t even baked yet,” she said, referring to chronic underfunding of core operations in the non-profit sector. “Are we putting good technology behind bad systems?”
For Mulla, the real risk isn’t that AI will fail; it’s that it will succeed on the wrong terms. Cheaper processes, faster reports and smoother dashboards do not automatically reduce inequality. “Is everything going to be about making misery efficient?” she asked. “Does it even matter if a child learns Pythagoras in one session instead of five?”
Her argument landed on a structural fault line: philanthropy may have the money for AI, but most non-profit organisations don’t even have the budget for basic digital tools. CSR funding caps, tight project budgets and the long-standing reluctance to fund “overheads” have created a system where organisations hide their real costs just to survive. In that reality, Mulla argued, AI is more likely to expose existing weaknesses than fix them.
“Fund like a business,” she said. “You don’t starve your operations and then expect innovation to magically appear.”
From a regional vantage point, Kithmina Hewage of the Centre for Asian Philanthropy and Society (CAPS) added another layer of complexity. Across much of Asia, he noted, philanthropy is closely tied to business interests and government agendas. This leaves very little funding for taking risks or testing long-term tech ideas, and even less for the basic operational support that non-profits need to function.
Data from CAPS’s 2024 Doing Good Index offered a sobering snapshot: while most Indian social sector organisations are now using digital tools, only 23 percent reported having cybersecurity protections in place. Even fewer had data protection policies. “They are inside these systems without the basic safeguards,” Hewage said. “One bad breach can destroy trust across the entire sector.”
He also noted that India is different from many other countries in the region because it already has organisations building AI directly into government systems at scale. The opportunity is real, he said, but it’s also uneven, still fragile and depends heavily on clear rules and regulation.
One point kept resurfacing: ethics cannot be left to machines. “AI itself is not ethical,” Joshi said. “Ethics is something only humans have.” That means humans, and the systems that create, fund and manage AI, carry the responsibility.
That burden becomes heavier when ecological costs enter the equation. Energy consumption, water usage and data centre footprints are rarely factored into social sector AI conversations. Many Indian organisations use AI systems and algorithms but do not actually control them; the companies and countries that do hold considerable power, concentrated in a few parts of the world rather than locally.
When the discussion turned to the future, the answers were cautious. No one claimed that India is fully “AI-ready,” and no one promised that philanthropy will transform AI. Instead, the focus was on the slow, often difficult work of building strong institutions: inside government, in funding systems, in regulation and in accountability practices.
For Hewage, the way forward depends on two things: strong government leadership to set rules and build trust, and funders providing flexible support for day-to-day work. Without these, he warned, old problems could just come back in new, tech-driven forms.
Mulla, meanwhile, cautioned that philanthropy is neither the whole solution nor the system itself, but a way to connect and influence the community, the market and the government. The real risk, she said, isn’t using too little AI; it is believing that technology alone can replace the hard work of brave steps, patience and real, lasting change in institutions and society.
“This is not a technology question anymore,” Joshi had said earlier in the discussion. “It’s a social question.”
That seemed to be the unspoken agreement: AI will keep moving forward, whether anyone approves or not. The bigger challenge, and the one philanthropy can’t avoid, is figuring out who pays the price when AI fails, who benefits when it works and who decides the rules in between.