Why AI Demands More from Philanthropy Than Funding Alone

With AI rapidly entering India’s public systems, philanthropy can no longer afford to stay on the sidelines.

Bengaluru, December 22, 2025

Artificial intelligence is often described as a tool: faster, cheaper, and capable of operating at a scale no human system can match. In philanthropy, this framing is especially attractive. AI promises better targeting of welfare, clearer insights from complex data, and new ways to reach communities that institutions have historically struggled to serve.

But as Dr. Sarayu Natarajan, Founder of the Aapti Institute, argued in her talk at the Asia-Pacific Meeting on AI and Philanthropy, held in collaboration with the Geneva Centre for Philanthropy, treating AI as just another tool misses what is distinctive about this moment. The technology is developing under very specific conditions: a small number of private companies control enormous amounts of data and influence; governments are stepping back from parts of public life even as services move online; and digital systems can tailor information so precisely that people no longer share the same view of the world. These conditions shape not only what AI can do, but how it affects society.

For philanthropy, the question is no longer whether AI will have an impact, but what role philanthropic actors choose to play within this ecosystem. Sarayu noted that philanthropy is often absent from key debates on AI governance and policy, even though its values and priorities are directly implicated. Entering the conversation late means standards are already set and harms are already visible.

One way to understand philanthropy’s role, she suggested, is through two parallel responsibilities: doing more of the good, and doing less of the harm.

Doing more of the good starts with public-interest AI—technology designed not for extraction or scale alone, but for equity. Sarayu pointed to language-based tools that allow people to access state services in intuitive ways. “You can post a message on WhatsApp asking, ‘I’m a person living in Bangalore, I have a college degree—what schemes and welfare programmes am I eligible for?’ and it gives you an answer,” she explained, noting that such systems can increasingly help people apply to relevant schemes as well. These applications are rarely highly profitable, but they can be socially transformative.

Language is one of the clearest places where bias shows up. In multilingual countries, people are often excluded simply because their language is missing from the data. When AI is trained on dominant languages or narrow groups, it carries those biases forward. She suggested that philanthropy can play a role by investing in inclusive data, better oversight, and long-term skills and institutions.

Data, however, is only one part of the equation. Capacity matters just as much. Many organisations and communities hold rich qualitative knowledge built up over years—through surveys, community journalism or grassroots work—but lack the tools to analyse and act on it. Sarayu explained that AI can help bring together insights from large amounts of qualitative material, making it easier for philanthropies and governments to spot patterns, understand where help is needed, and avoid repeating the same work. Used carefully, it can also help organisations think differently about impact, beyond what is easy to count.

She pointed out that technology usually fails for human reasons. Systems don’t fall apart because the software is broken, but because they ignore how people actually live and make decisions. Funding technology without supporting the people who are meant to use it often leads nowhere.

The second responsibility is to limit harm, even when it is harder to see. Sarayu pointed to the risks AI poses to democracy, particularly how quickly misinformation can spread at scale. Supporting fair elections, fact-checking work and organisations that monitor these effects is, she said, an important task for philanthropy.

She also raised concerns about personal freedom. She warned that when algorithms quietly influence what people see, buy or believe, they can shape choices without people realising it. She suggested that philanthropy can help by funding research and public discussion on these issues, even when they are difficult or politically awkward.

She broadened the conversation to AI’s environmental impact. The technology depends on energy, water and land in ways that can harm communities far removed from centres of innovation. She cautioned that simply tracking sustainability indicators can overlook a bigger ethical issue: how AI is reshaping our relationship with the planet and other forms of life.

Despite these risks, she was clear that this was not an argument against AI. Used carefully, she said, it can help philanthropy work more effectively, learn faster and connect efforts that are currently fragmented. That requires action on several fronts: long-term funding for public-interest technology, ethical decision-making in daily work, and support for initiatives that put social and environmental goals first.

Near the end of her talk, Sarayu returned to a line, adapted from W. B. Yeats, that captured her message: “The best (people) lack all conviction, while the worst are full of passionate intensity.” For philanthropy, she suggested, the challenge is not only to be careful, but to act with confidence and purpose.

Philanthropy may not be able to control how AI develops, but, as Sarayu argued, it can shape the conditions in which it is built and used. At a time when change is fast and power is concentrated, insisting early on care, inclusion and accountability may be its most important contribution.