AWS Architecture Showcase

Powered by Amazon Web Services

Chatbot with LangChain and S3
A serverless chatbot architecture that uses LangChain for processing and S3 for storage; a minimal handler sketch follows the component list.

Components:

  • API Gateway: Accepts user input (e.g., chatbot query).
  • Lambda: Processes the query and calls LangChain.
  • LangChain: Decides how to handle the query (e.g., fetch knowledge, call external APIs) and generates the reply.
  • S3: Stores chat logs or files generated by the AI.
  • DynamoDB: Stores metadata (e.g., user sessions, preferences).
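
A minimal sketch of the Lambda handler behind API Gateway, assuming an OpenAI-backed LangChain chain and illustrative resource names (CHAT_LOG_BUCKET, SESSIONS_TABLE) that are not specified by the architecture above:

```python
# Chatbot Lambda: API Gateway event in, LangChain reply out, with the chat turn
# logged to S3 and session metadata kept in DynamoDB. Bucket/table names and
# the gpt-4o-mini model are illustrative assumptions.
import json
import os
import time
import uuid

import boto3
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

s3 = boto3.client("s3")
sessions = boto3.resource("dynamodb").Table(os.environ.get("SESSIONS_TABLE", "chat-sessions"))

# A minimal chain: prompt template piped into a chat model.
chain = ChatPromptTemplate.from_template(
    "Answer the user's question: {question}"
) | ChatOpenAI(model="gpt-4o-mini")


def handler(event, context):
    body = json.loads(event.get("body") or "{}")             # API Gateway proxy payload
    question = body.get("query", "")
    session_id = body.get("session_id") or str(uuid.uuid4())

    answer = chain.invoke({"question": question}).content    # LangChain generates the reply

    # S3: persist the chat log for this turn.
    s3.put_object(
        Bucket=os.environ.get("CHAT_LOG_BUCKET", "chat-logs"),
        Key=f"{session_id}/{int(time.time())}.json",
        Body=json.dumps({"question": question, "answer": answer}),
    )
    # DynamoDB: lightweight session metadata.
    sessions.put_item(Item={"session_id": session_id, "last_active": int(time.time())})

    return {"statusCode": 200, "body": json.dumps({"session_id": session_id, "answer": answer})}
```

In a fuller build the chain would be wired to a retriever or tools so LangChain can decide whether to fetch knowledge or call external APIs, as described above.
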
Event-Driven AI Workflow
An event-driven architecture for AI-powered file processing and analysis; a sketch of the triggered Lambda follows the component list.

Components:

  • S3: Stores uploaded files (e.g., documents for analysis).
  • S3 Event: Triggers Lambda when a file is uploaded.
  • Lambda: Processes the file and calls LangChain for AI reasoning.
  • LangChain: Generates insights and stores results in DynamoDB.
  • EventBridge: Publishes an event when processing is complete.
  • SQS: Queues tasks for downstream processing.
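
A sketch of the S3-triggered Lambda, assuming plain-text uploads, an OpenAI-backed summarization chain, and illustrative resource names; fanning the completion event out to SQS is assumed to be handled by an EventBridge rule rather than by this code:

```python
# Event-driven Lambda: fired by an S3 upload, it summarizes the file with
# LangChain, writes the insights to DynamoDB, and emits a completion event to
# EventBridge. Table name, prompt, and the "ai.file-processor" source are
# illustrative assumptions.
import json
import os

import boto3
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

s3 = boto3.client("s3")
events = boto3.client("events")
results = boto3.resource("dynamodb").Table(os.environ.get("RESULTS_TABLE", "file-insights"))

chain = ChatPromptTemplate.from_template(
    "Summarize the key points of this document:\n\n{text}"
) | ChatOpenAI(model="gpt-4o-mini")


def handler(event, context):
    # The S3 event carries one record per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        insights = chain.invoke({"text": text}).content       # LangChain generates insights

        results.put_item(Item={"file_key": key, "insights": insights})    # DynamoDB

        # EventBridge: announce completion; a rule can route this event to the
        # SQS queue that feeds downstream processing.
        events.put_events(Entries=[{
            "Source": "ai.file-processor",
            "DetailType": "FileProcessed",
            "Detail": json.dumps({"bucket": bucket, "key": key}),
        }])
```
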
Asynchronous Task Processing
An asynchronous architecture for processing AI tasks with a queue-based approach; sketches of the producer and worker Lambdas follow the component list.

Components:

  • API Gateway: Accepts user requests.
  • Lambda (producer): Validates the request and sends a task to SQS.
  • SQS: Queues the task for processing.
  • Lambda (worker): Polls SQS, processes the task, and calls LangChain.
  • LangChain: Executes the AI workflow and stores results in DynamoDB.
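
A sketch of both Lambdas in the queue-based flow, assuming a task payload with task_id and instruction fields and illustrative queue/table names; with an SQS event source mapping, the Lambda service performs the polling and hands batches to the worker:

```python
# Asynchronous task processing: a producer Lambda behind API Gateway enqueues
# validated tasks, and a worker Lambda consumes them from SQS, runs LangChain,
# and stores the result in DynamoDB. Names and payload shape are illustrative
# assumptions.
import json
import os

import boto3
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table(os.environ.get("RESULTS_TABLE", "task-results"))
chain = ChatPromptTemplate.from_template("{instruction}") | ChatOpenAI(model="gpt-4o-mini")


def submit_handler(event, context):
    """Producer Lambda: validate the request and enqueue the task."""
    body = json.loads(event.get("body") or "{}")
    if not body.get("task_id") or not body.get("instruction"):
        return {"statusCode": 400, "body": json.dumps({"error": "task_id and instruction are required"})}

    sqs.send_message(QueueUrl=os.environ["TASK_QUEUE_URL"], MessageBody=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "queued", "task_id": body["task_id"]})}


def worker_handler(event, context):
    """Worker Lambda: the SQS event source mapping delivers batches of queued tasks."""
    for record in event["Records"]:
        task = json.loads(record["body"])
        result = chain.invoke({"instruction": task["instruction"]}).content   # AI workflow
        table.put_item(Item={"task_id": task["task_id"], "result": result})   # DynamoDB
```
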
Multi-Model AI Pipeline
An AI pipeline that handles multiple input types (text, image, and speech) across several AI services; an orchestrator sketch follows the component list.

Components:

  • API Gateway: Accepts user input (text, image, or speech).
  • Lambda Orchestrator: Directs input to appropriate AI services.
  • LangChain: Processes text input and generates the final response.
  • Amazon Rekognition: Analyzes image input.
  • Amazon Transcribe: Converts speech input to text.
  • Lambda Aggregator: Combines results from different AI services.
  • DynamoDB: Stores processed results and metadata.
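
A sketch of the orchestrator Lambda, assuming image and speech inputs are already in S3 and using illustrative request fields; Transcribe runs asynchronously, so the aggregator Lambda (not shown) would collect its output alongside the other partial results before writing the combined record to DynamoDB:

```python
# Orchestrator Lambda: inspects the request type and routes it to LangChain,
# Rekognition, or Transcribe. Request fields (type, text, bucket, key, format)
# are illustrative assumptions.
import json
import uuid

import boto3
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

rekognition = boto3.client("rekognition")
transcribe = boto3.client("transcribe")
chain = ChatPromptTemplate.from_template("Respond to: {text}") | ChatOpenAI(model="gpt-4o-mini")


def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    input_type = body.get("type")                  # "text", "image", or "speech"

    if input_type == "text":
        # LangChain handles text directly and produces the response.
        result = {"text_response": chain.invoke({"text": body["text"]}).content}
    elif input_type == "image":
        # Rekognition labels an image already stored in S3.
        labels = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": body["bucket"], "Name": body["key"]}},
            MaxLabels=10,
        )["Labels"]
        result = {"image_labels": [label["Name"] for label in labels]}
    elif input_type == "speech":
        # Transcribe converts speech to text asynchronously; the aggregator
        # picks up the finished transcript later.
        job_name = f"transcribe-{uuid.uuid4()}"
        transcribe.start_transcription_job(
            TranscriptionJobName=job_name,
            Media={"MediaFileUri": f"s3://{body['bucket']}/{body['key']}"},
            MediaFormat=body.get("format", "mp3"),
            LanguageCode="en-US",
        )
        result = {"transcription_job": job_name}
    else:
        return {"statusCode": 400, "body": json.dumps({"error": "unknown input type"})}

    return {"statusCode": 200, "body": json.dumps(result)}
```
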
Serverless AI Training Pipeline
A serverless architecture for training and deploying AI models; training-trigger and inference sketches follow the component list.

Components:

  • S3: Stores datasets and trained models.
  • Lambda: Triggers training jobs and model deployment.
  • SageMaker: Runs training jobs and hosts deployed models.
  • SNS: Notifies data scientists of job completion.
  • API Gateway: Provides inference endpoint for deployed models.
  • LangChain: Processes inference requests and responses.
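
A sketch of the Lambdas at either end of the pipeline, assuming illustrative image/role/endpoint names supplied via environment variables; the SNS completion notification is assumed to be wired through an EventBridge rule on SageMaker job-status changes, and the model-deployment step itself is omitted for brevity:

```python
# Training pipeline: one Lambda starts a SageMaker training job when a dataset
# lands in S3; another serves inference behind API Gateway and lets LangChain
# shape the endpoint's raw output. All names are illustrative assumptions.
import json
import os
import time

import boto3
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

sagemaker = boto3.client("sagemaker")
runtime = boto3.client("sagemaker-runtime")
chain = ChatPromptTemplate.from_template(
    "Explain this model output for the user: {raw}"
) | ChatOpenAI(model="gpt-4o-mini")


def start_training_handler(event, context):
    """S3-triggered Lambda: kick off a training job for the uploaded dataset."""
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    sagemaker.create_training_job(
        TrainingJobName=f"train-{int(time.time())}",
        AlgorithmSpecification={"TrainingImage": os.environ["TRAINING_IMAGE_URI"],
                                "TrainingInputMode": "File"},
        RoleArn=os.environ["SAGEMAKER_ROLE_ARN"],
        InputDataConfig=[{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/{key}",
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        OutputDataConfig={"S3OutputPath": f"s3://{bucket}/models/"},
        ResourceConfig={"InstanceType": "ml.m5.large", "InstanceCount": 1, "VolumeSizeInGB": 30},
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )


def inference_handler(event, context):
    """API Gateway Lambda: call the deployed endpoint, then post-process with LangChain."""
    body = json.loads(event.get("body") or "{}")
    raw = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="application/json",
        Body=json.dumps(body.get("features", {})),
    )["Body"].read().decode("utf-8")

    answer = chain.invoke({"raw": raw}).content
    return {"statusCode": 200, "body": json.dumps({"raw": raw, "answer": answer})}
```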