Agentic AI - Building Autonomous Systems
Lesson 2 of 7: Implementing Reflection
Key Takeaways
- Agentic AI iterates to improve quality
- 4 patterns: Reflection, Tool Use, Planning, Multi-Agent
- Agents can critique and refine their own work
- Production agents need evaluation and optimization
Implementing the Reflection Pattern
The Reflection pattern enables AI to critique its own work and iterate to improve quality. Below is a Python implementation using LangChain:
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
# Initialize the LLM
llm = ChatOpenAI(model="gpt-4", temperature=0.7)
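# Note: temperature=0.7 keeps some variety in the generated drafts; a lower
# temperature for the critique step can make feedback more consistent
# (an optional tweak, not required by the pattern).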
# Step 1: Generate initial output
generator_prompt = PromptTemplate(
    input_variables=["task"],
    template="""Generate a solution for: {task}
Be thorough and consider edge cases.""",
)
# Step 2: Critique the output
critic_prompt = PromptTemplate(
    input_variables=["solution", "task"],
    template="""Review this solution for the task: {task}
Solution: {solution}
Provide specific, actionable feedback on:
1. Correctness
2. Completeness
3. Edge cases
4. Improvements needed
If the solution is already sound, say "No major issues" and nothing else.""",
)
# Step 3: Refine based on critique
refiner_prompt = PromptTemplate(
    input_variables=["solution", "critique", "task"],
    template="""Improve this solution based on the critique.
Original Task: {task}
Original Solution: {solution}
Critique: {critique}
Provide an improved solution:""",
)
def reflection_agent(task: str, max_iterations: int = 3) -> str:
    """Agentic reflection pattern: generate, critique, and refine a solution."""
    generator = LLMChain(llm=llm, prompt=generator_prompt)
    critic = LLMChain(llm=llm, prompt=critic_prompt)
    refiner = LLMChain(llm=llm, prompt=refiner_prompt)

    # Initial generation
    solution = generator.run(task=task)

    for _ in range(max_iterations):
        # Critique the current solution
        critique = critic.run(solution=solution, task=task)

        # Stop early if the critique indicates quality is sufficient
        # (a simple keyword heuristic; a more explicit check is sketched below)
        if "no major issues" in critique.lower():
            break

        # Refine the solution based on the critique
        solution = refiner.run(
            solution=solution,
            critique=critique,
            task=task,
        )

    return solution
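A minimal usage sketch follows; the task string is illustrative, and an OpenAI API key is assumed to be configured in the environment:

# Example usage (illustrative task; assumes OPENAI_API_KEY is set in the environment)
if __name__ == "__main__":
    improved = reflection_agent("Write a Python function that validates email addresses")
    print(improved)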
Key Insight
The reflection pattern typically improves output quality by 20-40% compared to single-pass generation, especially for complex tasks like code generation, analysis, and content creation.
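The substring check on "no major issues" is a simple heuristic and can miss differently worded verdicts. A more explicit alternative, sketched below, assumes the critic prompt is extended to end its review with a line such as VERDICT: PASS or VERDICT: REVISE, and parses that line instead. The VERDICT convention and the helper are illustrative, not part of LangChain:

def critique_passes(critique: str) -> bool:
    """Return True if the critique ends with a VERDICT line that says PASS."""
    for line in reversed(critique.strip().splitlines()):
        if line.upper().startswith("VERDICT:"):
            return "PASS" in line.upper()
    return False  # no verdict line found: keep refining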