In Part 1, I made the case that agentic AI breaks in production because agents behave like stateful distributed systems: sessions, credentials, scaling, tool access, and observability become the real work.
For Part 2, I’m going to prove the point in the most practical way possible:
- I’ll deploy two agents built using two different frameworks:
  - Strands Agents (task-centric / lightweight agent style)
  - LangGraph (graph-based orchestration, good for multi-step flows)
- I’ll run both agents on Amazon Bedrock AgentCore Runtime
- I’ll use OpenAI as the “brain” behind both agents (model provider choice happens during scaffolding)
This is the path I wish had existed earlier: standardize production operations once, and keep the freedom to choose frameworks and models. The two sketches below show roughly what that looks like.
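To make the shape of this concrete, here is a minimal sketch of the first agent: a Strands agent backed by an OpenAI model, wrapped in an AgentCore Runtime entrypoint. Treat the specifics (the `OpenAIModel` arguments, `BedrockAgentCoreApp`, the `@app.entrypoint` decorator, and the `gpt-4o` model id) as assumptions based on the current SDKs, not the finished code we’ll build in this part.

```python
# Minimal sketch, assuming strands-agents[openai] and the bedrock-agentcore SDK.
from strands import Agent
from strands.models.openai import OpenAIModel
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

# OpenAI as the "brain"; model_id and client_args are illustrative assumptions.
model = OpenAIModel(
    client_args={"api_key": "<OPENAI_API_KEY>"},
    model_id="gpt-4o",
)

agent = Agent(model=model)

@app.entrypoint
def invoke(payload):
    """AgentCore Runtime entrypoint: take a JSON payload, return the agent's reply."""
    prompt = payload.get("prompt", "")
    result = agent(prompt)
    return {"result": str(result)}

if __name__ == "__main__":
    app.run()
```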
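And a companion sketch for the second agent: a LangGraph agent (here just the prebuilt ReAct-style graph with no tools, assumed via `create_react_agent` and `ChatOpenAI`) exposed through the same AgentCore Runtime entrypoint pattern. Again, a sketch under assumed APIs rather than a definitive implementation.

```python
# Minimal sketch, assuming langgraph, langchain-openai, and the bedrock-agentcore SDK.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

# OpenAI as the model provider; a single ReAct-style node here, but LangGraph
# lets you wire multi-step flows as explicit graph nodes and edges.
llm = ChatOpenAI(model="gpt-4o")
graph = create_react_agent(llm, tools=[])

@app.entrypoint
def invoke(payload):
    """AgentCore Runtime entrypoint: run the graph and return the final message."""
    prompt = payload.get("prompt", "")
    state = graph.invoke({"messages": [("user", prompt)]})
    return {"result": state["messages"][-1].content}

if __name__ == "__main__":
    app.run()
```

The point of showing both side by side: the framework-specific parts stay inside each file, while the entrypoint contract and the deployment target stay the same.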
