Finance ReAct Agent with LangGraph

Augusto Gonzalez-Bonorino
17 min read · Jan 4, 2025

💸 2025 is the year of Agents. Seize the trend and learn the core concepts of LangGraph to start building Agentic products.

The field of Natural Language Processing (NLP) has been progressing at an astounding rate. The Transformer architecture [1], introduced by Vaswani et al. in 2017, lends itself nicely to mountains of data and billions of parameters. But the scaling laws might be reaching a plateau [2, 3], prompting new approaches to AI training. Take OpenAI’s o1, for example: an allegedly “reasoning” model that implements the latest and hottest technique, “test-time compute”. The concept is fairly simple: rather than adding more compute at “train-time”, spend it at “test-time”. In practice, this means the Large Language Model (LLM) powering o1 either iteratively refines intermediate responses (aka “self-refinement”) or generates a set of candidate responses from which a reward model (aka “verifier”) selects the best. HuggingFace recently published a paper and technical blog explaining how they reverse-engineered this technique, if you want more details.
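The “verifier” variant of test-time compute, often called best-of-N sampling, can be sketched in a few lines. This is a minimal illustration, not o1’s actual implementation: `generate_candidate` and `score` below are hypothetical stand-ins for a real LLM sampling call and a real reward model.

```python
# Hedged sketch of best-of-N test-time compute: sample several candidate
# answers, then let a "verifier" (reward model) pick the highest-scoring one.
import random

def generate_candidate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for an LLM sampling call; varies output by seed.
    return f"candidate-{seed} for: {prompt}"

def score(candidate: str) -> float:
    # Hypothetical stand-in for a reward model scoring a candidate answer.
    return random.random()

def best_of_n(prompt: str, n: int = 4) -> str:
    # Spend extra compute at test time: generate n candidates, keep the best.
    candidates = [generate_candidate(prompt, i) for i in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is the capital of France?"))
```

The key design point is that quality is bought with inference-time compute (n samples plus n verifier calls) instead of a larger or longer-trained model.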

Another approach is Agentic design. While no clear definition has been agreed upon, “AI agent” generally refers to an LLM-powered workflow with a set of Tools at its disposal for solving problems at runtime. The technology has existed for a while, but it has been challenging to implement at scale to…
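The idea of an LLM with tools at its disposal can be sketched as a tiny decision loop. This is a hedged toy example, not LangGraph’s API: `fake_llm` is a hypothetical stand-in for a real model deciding whether to call a tool or answer directly.

```python
# Minimal sketch of an agentic step: a policy (here a fake LLM) inspects the
# query and either routes it to a registered tool or answers directly.

def calculator(expression: str) -> str:
    # A simple tool the agent can invoke at runtime.
    # eval with empty builtins limits it to plain arithmetic expressions.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(query: str) -> dict:
    # Hypothetical stand-in for an LLM: route arithmetic-looking queries
    # to the calculator tool, otherwise answer directly.
    if any(ch in query for ch in "+-*/"):
        return {"action": "calculator", "input": query}
    return {"action": "final", "input": query}

def run_agent(query: str) -> str:
    decision = fake_llm(query)
    if decision["action"] in TOOLS:
        return TOOLS[decision["action"]](decision["input"])
    return decision["input"]

print(run_agent("2 + 3 * 4"))  # routed to the calculator tool
```

Frameworks like LangGraph generalize this pattern into a graph of nodes (model calls, tool calls) with explicit state passed between them.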
