Re-imagining Stewardship Intelligence with Large Language Models (LLMs)

Frederick Fabian
Feb 10, 2025

Re-imagining Stewardship Intelligence
In today’s fast-moving world of sustainable investing and corporate governance, stewardship teams face a daunting information overload. Proxy statements, annual reports, ESG disclosures, regulatory filings, and third-party research pour in by the thousands of pages, especially during proxy season when most companies hold their annual meetings. Sifting through this deluge can slow even the most committed teams, leaving little time for thoughtful engagement. However, emerging AI technologies – particularly large language models (LLMs) – offer a way to turn this data tsunami into actionable intelligence. By acting as tireless, knowledgeable assistants, LLM-based systems can rapidly gather and analyze vast information, providing insights that are fast, transparent, and tailored to each investor’s needs. Some firms (for example, Root) are already exploring how purpose-built AI agents paired with deep governance expertise can help reimagine stewardship intelligence to support more effective and timely decision-making.
How AI Augments Stewardship Workflows
Modern AI-driven stewardship tools can streamline the end-to-end workflow of proxy research and engagement. Here are four key capabilities such solutions provide:
Gather the Right Documents – Instantly
An AI assistant can automatically pull all relevant documents into one secure feed, saving analysts countless hours of hunting and gathering. This includes obvious sources like regulatory filings (e.g. annual reports, proxy statements, sustainability reports), but also hard-to-find disclosures on the open web. Instead of manually checking multiple repositories or company websites, the system’s web agents fetch what’s needed so that nothing slips through the cracks. In short, all the raw material for analysis is assembled in moments, ensuring the stewardship team starts with a complete information set.
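As a rough sketch of what that gathering step could look like, the snippet below pulls a set of disclosures from a list of sources; the URLs and helper names are purely illustrative, not any particular platform’s API.
```python
import requests

# Hypothetical disclosure sources for one company (illustrative URLs only).
DOCUMENT_SOURCES = {
    "proxy_statement": "https://example.com/acme/2025-proxy-statement.pdf",
    "annual_report": "https://example.com/acme/2024-annual-report.pdf",
    "sustainability_report": "https://example.com/acme/2024-sustainability-report.pdf",
}

def gather_documents(sources: dict[str, str]) -> dict[str, bytes]:
    """Fetch each disclosure and return its raw content keyed by document type."""
    documents = {}
    for doc_type, url in sources.items():
        response = requests.get(url, timeout=30)
        response.raise_for_status()  # fail loudly if a source is missing
        documents[doc_type] = response.content
    return documents

if __name__ == "__main__":
    docs = gather_documents(DOCUMENT_SOURCES)
    for doc_type, content in docs.items():
        print(f"Fetched {doc_type}: {len(content):,} bytes")
```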
Read Like an Experienced Analyst – at Machine Speed
Once the documents are in hand, LLMs trained on relevant domain knowledge (governance codes, stewardship principles, climate frameworks, and even a firm’s own voting policy) go to work. These AI models digest and summarize complex filings at extraordinary speed, flagging the details that would matter to a seasoned governance analyst – for example, meeting agendas, proposal details, key ESG data points, or a subtle change in how executive pay is tied to climate targets. Crucially, the LLM doesn’t operate as a mysterious black box: every finding can be accompanied by references to the source text. Each data point is stored with a citation or annotation, allowing the human team to trace every insight back to its origin for verification. In essence, the AI reads everything, notes what matters, and does it in minutes rather than days, while preserving an audit trail for trust.
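A minimal sketch of this citation-preserving extraction, assuming an OpenAI-compatible client and an illustrative prompt and schema (the model name and field names are assumptions, not a prescribed setup):
```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = """You are a governance analyst. From the filing excerpt below,
extract each material data point as JSON: a list of objects with the fields
"finding", "quote" (the exact supporting sentence), and "page" (if stated).
Return only JSON.

Filing excerpt:
{excerpt}
"""

def extract_with_citations(excerpt: str, model: str = "gpt-4o") -> list[dict]:
    """Ask the LLM for findings, each tied to a verbatim quote for verification."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(excerpt=excerpt)}],
    )
    # A production system would validate the JSON and handle malformed output;
    # this sketch assumes the model returns clean JSON.
    return json.loads(response.choices[0].message.content)
```
Because every returned finding carries its supporting quote, a human reviewer can trace each insight back to the original filing text.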
Enrich Raw Facts with Context
Data points in isolation only tell part of the story. AI agents can cross-reference and enrich the raw facts with wider context that a human analyst would consider. For instance, if a company’s proxy statement says the board recommends voting against a shareholder proposal on emissions targets, the AI can pull up peer benchmarks and historical data: How have the company’s industry peers voted or acted on similar issues? What were this investor’s own voting decisions on similar resolutions last year? The AI system links each item to relevant peer behavior, past voting records, and even prior company statements, creating a living map of “who said what, when, and why.” It can also incorporate internal research memos or third-party ESG research into the analysis – for example, noting that the asset manager’s sustainability team has concerns about the company’s climate strategy, or that an independent research provider flagged a controversy. All this context is classified and scored against either industry benchmarks (e.g. net-zero alignment standards) or custom criteria that the investor specifies. The result is an analysis that doesn’t just list facts from one report, but provides a 360° view of each issue in context.
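A toy sketch of the enrichment step, combining an extracted finding with peer context, past votes, and internal notes, then scoring it against a configurable benchmark; the data, field names, and threshold are invented for illustration.
```python
from dataclasses import dataclass, field

@dataclass
class EnrichedFinding:
    finding: str                      # fact extracted from the filing
    source: str                       # citation back to the document
    peer_context: list[str] = field(default_factory=list)
    past_votes: list[str] = field(default_factory=list)
    internal_notes: list[str] = field(default_factory=list)
    score: float | None = None        # alignment score against the chosen benchmark

def score_against_benchmark(finding: EnrichedFinding, benchmark_threshold: float,
                            observed_value: float) -> EnrichedFinding:
    """Attach a simple alignment score: 1.0 once the observed value meets the benchmark."""
    finding.score = min(observed_value / benchmark_threshold, 1.0)
    return finding

item = EnrichedFinding(
    finding="Board recommends voting against the emissions-target proposal",
    source="Proxy statement, board recommendation section",
    peer_context=["3 of 5 industry peers supported similar proposals last year"],
    past_votes=["We supported a comparable resolution at this company last year"],
    internal_notes=["Sustainability team flagged weak interim targets"],
)
item = score_against_benchmark(item, benchmark_threshold=0.20, observed_value=0.05)
print(item.score)  # 0.25, well short of the benchmark threshold
```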
Deliver Clear, Actionable Intelligence
Finally, the AI transforms the analysis into outputs that fit the team’s workflow – whether it’s a comprehensive proxy briefing, a focused climate transition report, or a real-time dashboard. These deliverables can be in whatever format is most useful (PDF reports, spreadsheets of key data, or via an API into the investor’s systems). Importantly, each recommendation or flagged issue comes with a transparent explanation chain. Rather than a terse or unexplained recommendation (“Vote Against CEO’s pay”), the report would include the reasoning: for example, the pay plan misaligns with the company’s own climate goals, citing page X of the sustainability report. This level of clarity empowers the stewardship team to engage or vote with confidence and consistency, armed with intelligence they can easily communicate to investment committees or portfolio managers.
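One way such a deliverable could be structured is a recommendation record that always carries its reasoning and citations; a minimal sketch with assumed field names:
```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    proposal: str
    action: str           # e.g. "For", "Against", "Abstain"
    rationale: str        # the explanation chain, in plain language
    citations: list[str]  # pointers back to the source documents

    def to_briefing(self) -> str:
        """Render the recommendation as a short, human-readable briefing entry."""
        refs = "; ".join(self.citations)
        return f"{self.proposal}: {self.action}\nRationale: {self.rationale}\nSources: {refs}"

rec = Recommendation(
    proposal="Advisory vote on executive compensation",
    action="Against",
    rationale="Pay plan is not tied to the company's own stated climate goals.",
    citations=["Sustainability report, climate strategy section",
               "Proxy statement, pay-for-performance discussion"],
)
print(rec.to_briefing())
```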
Why It Matters
Adopting LLM-powered stewardship tools isn’t just a tech upgrade – it directly addresses many of the challenges in today’s investment stewardship process:
Speed and Proactivity
AI can dramatically accelerate the research cycle. Instead of waiting weeks for a human-generated proxy analysis (or scrambling last-minute), an AI-driven report can land within days of a new disclosure. This head start gives teams ample time to digest findings and act before the shareholder vote. Early identification of issues means more opportunity for dialogue – engaging a company to rectify a concern before casting a vote against management, for example. In busy proxy seasons, when hundreds of meetings pile up, this speed is a game-changer. Studies note that AI systems are capable of quickly processing large volumes of unstructured data (like proxy filings) and enabling faster analysis of complex proposals. In practice, that means less time lost in reading and more time available for stewardship actions.
Customization and Context Awareness
One size rarely fits all in stewardship. With AI, analysis can be highly customized to an investor’s own policies and context. For instance, an asset manager might maintain multiple voting policies – a baseline policy for most holdings, a stricter policy for climate-focused funds, and perhaps client-specific guidelines for separate accounts. An AI platform can easily handle this complexity: it applies the right policy rules to each company’s situation without the confusion of manual tracking. If policies fragment by region or client, the AI can juggle them all, ensuring each vote recommendation aligns with the correct mandate. Moreover, the analysis isn’t done in a vacuum – the AI can incorporate the investor’s prior interactions and stances. If the institution abstained on a director election last year due to a pending governance reform, the AI will “remember” and check if that reform was implemented. If the stewardship team has an internal position (say, believing a company isn’t doing enough on climate), the AI can flag whether the latest disclosures address that concern. This level of context awareness leads to advice that feels as if it were written for your specific firm, not a generic investor. In short, you decide the rules and priorities; the AI provides the data accordingly.
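A simplified sketch of how multiple voting policies might be kept straight, mapping each fund or mandate to its own rule set so the right policy is applied automatically; the mandate names, rules, and thresholds are illustrative.
```python
# Each mandate gets its own policy: a set of named rules with thresholds.
POLICIES = {
    "baseline": {"min_board_independence": 0.50, "require_climate_link_in_pay": False},
    "climate_fund": {"min_board_independence": 0.50, "require_climate_link_in_pay": True},
    "client_a_segregated": {"min_board_independence": 0.67, "require_climate_link_in_pay": True},
}

# Which holdings fall under which mandate.
MANDATE_BY_FUND = {
    "Global Equity Fund": "baseline",
    "Climate Transition Fund": "climate_fund",
}

def policy_for(fund_name: str) -> dict:
    """Return the rule set that governs voting for a given fund."""
    mandate = MANDATE_BY_FUND.get(fund_name, "baseline")  # fall back to the baseline policy
    return POLICIES[mandate]

print(policy_for("Climate Transition Fund"))
# {'min_board_independence': 0.5, 'require_climate_link_in_pay': True}
```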
Holistic, Nuanced Insights
Traditional proxy research often uses blunt checklists – e.g. “vote against if board independence < X%” or “require Y% of pay tied to sustainability”. Those rules can miss nuance. AI allows for more holistic assessment of companies against their stated strategies and the broader environment. Take the example of climate strategy: rather than simply requiring that “20% of an oil & gas company’s executive compensation is linked to climate outcomes,” an AI agent can delve into the company’s actual climate action plan. It might discover the plan’s key goals (e.g. methane reduction, renewable investment milestones) and then check if executive bonuses are explicitly tied to achieving those specific goals. If the plan is ambitious but the incentives only mention generic “emissions reduction,” the AI will surface that disconnect. This kind of cross-checking – effectively connecting the dots between a firm’s promises and its actions – has not been practical at scale before. With AI’s capacity to read everything and draw links, investors can push for more meaningful alignment (e.g. pay actually rewarding the delivery of climate targets, not just box-ticking metrics). In short, the qualitative gets quantified: the messy details that matter are no longer beyond analytical reach.
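The disconnect check described above can be sketched as a comparison between the goals named in the climate plan and the metrics named in the pay plan; in practice an LLM would do this matching over free text, so the keyword version below is only a toy illustration.
```python
def find_incentive_gaps(plan_goals: list[str], incentive_metrics: list[str]) -> list[str]:
    """Return plan goals that no executive incentive metric appears to reference."""
    gaps = []
    for goal in plan_goals:
        covered = any(goal.lower() in metric.lower() for metric in incentive_metrics)
        if not covered:
            gaps.append(goal)
    return gaps

plan_goals = ["methane reduction", "renewable investment milestones"]
incentive_metrics = ["generic emissions reduction", "total shareholder return"]

print(find_incentive_gaps(plan_goals, incentive_metrics))
# ['methane reduction', 'renewable investment milestones'] -- neither stated plan goal
# is explicitly rewarded, which is exactly the disconnect the AI would surface.
```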
Consistency and Transparency
Human analysts vary in experience, judgment, and potential biases. When dozens of analysts are involved across a global portfolio, inconsistencies inevitably creep in. By contrast, an AI system applying pre-defined logic offers far more consistency in evaluations. As legal scholars have noted, pre-formulated AI evaluation rules can reduce subjective variability and arbitrariness in proxy advice, leading to more impartial assessments. This consistency means, for example, that a board independence issue will be flagged in the same way across all markets, rather than depending on who happens to cover that region. Furthermore, the AI’s work is fully traceable. Every recommendation comes with a trail: the data points, the sources, and the rationale used. This tackles the “black box” problem – users aren’t left wondering why the AI said yes or no. In fact, transparency is fast becoming a regulatory expectation for AI in finance; the EU’s 2024 ESG Rating regulations even require disclosure of how AI models are used and call for explanations of AI-driven evaluations. A well-implemented stewardship AI will thus enhance clarity, not diminish it, by providing a human-readable explanation for each insight. For investors, this builds trust in the recommendations and makes it easier to defend votes or engagement decisions to stakeholders.
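The consistency point can be made concrete with a tiny deterministic rule: given the same inputs, it produces the same flag and the same rationale wherever and by whomever it is run (the threshold and wording are illustrative).
```python
def check_board_independence(independent_directors: int, board_size: int,
                             threshold: float = 0.50) -> dict:
    """Apply one pre-defined rule and return the flag together with its audit trail."""
    ratio = independent_directors / board_size
    return {
        "flag": ratio < threshold,
        "rationale": f"Board independence is {ratio:.0%} against a {threshold:.0%} policy threshold.",
        "inputs": {"independent_directors": independent_directors, "board_size": board_size},
    }

print(check_board_independence(4, 10))
# {'flag': True, 'rationale': 'Board independence is 40% against a 50% policy threshold.', ...}
```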
Efficiency and Human Focus
By automating the grunt work of research, AI liberates human experts to focus on what humans do best. An AI-powered workflow can analyze thousands of pages and highlight issues in a fraction of the time it would take a team of people. This efficiency has two big implications: first, small stewardship teams can cover far more ground without scaling up headcount, and second, those same professionals can reallocate their time to higher-value activities. Instead of poring over reports late into the night, an analyst could spend that time crafting an engagement strategy or coordinating with fellow investors on a collective action. Over time, there’s also a cost benefit – what the industry spends on proxy research and data could decrease as AI takes on more of the workload. That could either save money or free up budget for other stewardship initiatives. Perhaps most importantly, better tools lead to better outcomes: when even a modest asset owner can afford detailed, policy-tailored analysis (because it’s done by AI at low incremental cost), they can hold companies to account with the same rigor that only the largest investors once could. In that sense, AI has the potential to democratize high-quality stewardship, raising the bar for everyone.
Looking Ahead: Towards Autonomous Stewardship Agents
As we look to the next few years (with the 2025 proxy season on the horizon and beyond), the role of AI in stewardship is set to deepen. We anticipate a shift where human stewardship teams focus more on strategic, interpersonal, and oversight tasks, while delegating much of the repetitive analytical heavy lifting to semi-autonomous AI agents. Rather than replacing humans, these AI agents will act as an extension of the team – a collection of specialized “colleagues” handling clearly defined duties at machine speed, under human guidance.
In practical terms, we may soon see AI-managed workflows handling routine stewardship processes end-to-end. For example, consider a typical engagement escalation framework: an investor might have a policy that if a company’s progress on diversity or climate targets stalls, they first send a letter, then escalate to a meeting, and eventually consider voting against directors. Setting up such a framework involves a lot of tracking and follow-up. In the near future, an AI agent (or a group of them) could continuously monitor all relevant indicators – pulling the latest public data, news, and even internal research updates – to automatically populate an engagement tracker for every portfolio company. The moment a policy threshold is crossed (say, the company missed a stated sustainability goal or a controversy erupted), the AI could draft the appropriate email or briefing for the stewardship team to review. It might even schedule meetings or prepare voting rationale statements as the situation escalates, all while keeping an audit log. Essentially, the AI would ensure that no important development slips by unnoticed and that the response is timely and in line with the predetermined strategy. The human team, meanwhile, can spend its time on the high-touch parts of the process: refining the criteria, interacting directly with company leadership in critical meetings, and coordinating with other investors on collective actions.
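An engagement escalation tracker of this kind could be sketched as a small state machine: a company advances through predefined stages whenever a policy threshold is crossed, and every transition is written to an audit log. The stage names and trigger below are assumptions for illustration.
```python
from datetime import date

ESCALATION_STAGES = ["monitor", "letter", "meeting", "vote_against_directors"]

class EngagementTracker:
    def __init__(self, company: str):
        self.company = company
        self.stage = "monitor"
        self.audit_log: list[str] = []

    def record(self, event: str) -> None:
        """Append a timestamped entry to the audit log."""
        self.audit_log.append(f"{date.today().isoformat()} | {self.company} | {event}")

    def threshold_crossed(self, reason: str) -> str:
        """Advance to the next escalation stage and draft the corresponding action."""
        next_index = min(ESCALATION_STAGES.index(self.stage) + 1, len(ESCALATION_STAGES) - 1)
        self.stage = ESCALATION_STAGES[next_index]
        self.record(f"escalated to '{self.stage}' because: {reason}")
        return f"Draft {self.stage.replace('_', ' ')} for {self.company}: {reason}"

tracker = EngagementTracker("Acme Energy")
print(tracker.threshold_crossed("missed stated methane reduction milestone"))
print(tracker.audit_log)
```
The human team would review each drafted action before anything is sent, in line with the oversight model described above.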
To achieve this vision, AI agents will become increasingly integrated with both internal and external systems. We already see early signs of this: some advanced platforms can interface directly with third-party data feeds (for financial metrics, ESG ratings, etc.) and even execute tasks like updating a vote in a proxy voting portal or logging an engagement in a CRM system. Tech companies are rolling out autonomous business agents that can handle multi-step workflows – from drafting communications to executing transactions – based on natural language instructions. In stewardship, this could translate to an AI that, for instance, receives a natural language command like “check if any portfolio companies are lagging on their net-zero targets and prepare an engagement plan,” and then carries out the necessary sub-tasks across various platforms. While this level of autonomy will require rigorous oversight (and very clear rules to prevent errors), the tools to enable it are quickly maturing.
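In spirit, such an agent decomposes one natural-language instruction into sub-tasks and runs them against the relevant systems. The toy orchestrator below hard-codes that decomposition with stubbed tool functions; everything here, from the function names to the routing logic, is an assumption about how one might wire it, not a real platform’s API.
```python
def fetch_lagging_companies() -> list[str]:
    """Stub: would query portfolio data and net-zero target trackers."""
    return ["Acme Energy", "Borealis Mining"]

def assess_progress(company: str) -> str:
    """Stub: would pull the latest disclosures and compare them with stated targets."""
    return f"{company}: interim target appears at risk"

def draft_engagement_plan(assessment: str) -> str:
    """Stub: would produce a first-draft plan for human review."""
    return f"Engagement plan (draft, for review): {assessment}"

def handle_instruction(instruction: str) -> list[str]:
    """Toy dispatcher: a real agent would plan these steps itself from the instruction."""
    outputs = []
    if "net-zero" in instruction and "engagement plan" in instruction:
        for company in fetch_lagging_companies():
            outputs.append(draft_engagement_plan(assess_progress(company)))
    return outputs

for draft in handle_instruction(
    "check if any portfolio companies are lagging on their net-zero targets "
    "and prepare an engagement plan"
):
    print(draft)
```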
It’s important to note that human oversight and insight will remain indispensable. AI might compile the data and even draft recommendations, but human judgment will sign off on the final decisions and strategies. The relationship is akin to a co-pilot: the AI handles the instrumentation and navigation aids, but the human pilot sets the destination and can take control when there’s turbulence. In the coming proxy seasons, we expect many stewardship teams to experiment with this human-AI collaboration. Early pilots are already underway at a handful of asset managers, testing things like AI-generated proxy vote reports and AI-assisted engagement summaries. Yes, there are challenges to work through – ensuring data privacy, avoiding algorithmic biases, satisfying regulators that proper controls are in place – and AI systems will undoubtedly make some mistakes as they learn. But the trajectory is clear: each month we’re seeing fewer errors and more sophisticated capabilities. The rapid improvement in just the past two years suggests that what feels somewhat “futuristic” now could become standard practice sooner than we think.
In summary, re-imagining stewardship with AI is about enabling better outcomes, not just doing the same old processes faster. It’s a chance to refocus human talent on strategy, values, and relationships, while entrusting machines with the heavy analytical lifting. The payoff is a stewardship function that is faster, more customized, and more holistic in its view – one that can move from information overload to decisive, values-aligned action. As we head into 2025 and beyond, those prepared to thoughtfully integrate AI into their stewardship programs will be well positioned to raise engagement effectiveness and stay ahead in the evolving landscape of sustainable investing. The next era of stewardship intelligence is on the horizon, and it promises to empower teams to drive positive change at a scale and speed previously unimaginable.
Ready to rethink stewardship? Let’s talk.