In this blog post, we’ll build an AI-powered chatbot for e-commerce customer support using LangChain and Google’s Gemini AI. This chatbot will provide real-time responses by fetching data from a PostgreSQL database. You’ll learn how to integrate an AI model that dynamically generates SQL queries to retrieve product and order information, using this data as context to deliver accurate, relevant answers. Unlike our previous RAG-based app, which used a vector database for context retrieval, here we’ll leverage Supabase to manage our product and order data, allowing the AI to query the database directly and generate responses.
Let's get started with the code. We will build a FastAPI application.
First, we set up the project and install the dependencies listed in a requirements.txt file:
langchain
langchain-google-genai
fastapi[standard]
python-dotenv
langchain-community
sqlalchemy
psycopg2
pydantic
pip install -r requirements.txt
Store your credentials securely in a .env file. Along with the database password, the LangChain Google integration reads your Gemini key from the GOOGLE_API_KEY environment variable, so keep it here too:
DB_PASSWORD=your_database_password
GOOGLE_API_KEY=your_gemini_api_key
To start, we establish a connection to a PostgreSQL database using SQLAlchemy. This allows our chatbot to retrieve relevant data for customer queries. First, import all necessary libraries. We’re using LangChain’s integration with Google Gemini, FastAPI for our web server, and SQLAlchemy for database operations. We set up our PostgreSQL connection in read-only mode to protect the data from accidental write operations.
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from dotenv import load_dotenv
from sqlalchemy import create_engine
from langchain_community.utilities.sql_database import SQLDatabase
from pydantic import BaseModel
from fastapi import FastAPI
from prompts import sql_agent_prompt, main_agent_prompt
import os
import datetime
load_dotenv()
app = FastAPI()
engine = create_engine(
    f"postgresql://postgres:{os.getenv('DB_PASSWORD')}@db.a.supabase.co:5432/postgres",
    execution_options={"postgresql_readonly": True},
)
sql_database = SQLDatabase(engine)
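The sql_agent_prompt and main_agent_prompt strings imported from the prompts module aren’t shown in this post. A minimal sketch of what prompts.py could contain is below; the table and column names are placeholders you’d replace with your actual schema:

```python
# prompts.py -- hypothetical prompt strings; adapt the schema description
# (table and column names) to your own database.

sql_agent_prompt = (
    "You are a SQL assistant for an e-commerce store. "
    "The database has tables such as products(id, name, price, stock) "
    "and orders(id, product_id, customer_email, status, created_at). "
    "When the user's question needs data, call the fetch_data_from_db tool "
    "with a single read-only SELECT statement. Never modify data."
)

main_agent_prompt = (
    "You are a friendly e-commerce support agent. "
    "Answer the customer's question using only the database extract provided. "
    "If the extract is empty or does not answer the question, say so politely."
)
```

Describing the schema in the system prompt is what lets the model write correct SQL, so keep this section in sync with your migrations.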
Next, we define a function that fetches data from the database. The AI model will later call this tool to retrieve relevant information before generating a response.
@tool
def fetch_data_from_db(query: str) -> str:
    """Pass in a SQL statement and get the result back."""
    return sql_database.run(query)
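The read-only connection already blocks writes at the database level, but since the model is the one writing the SQL, an extra application-side guard is cheap insurance. Here is an illustrative sketch (not a full SQL parser) that you could call inside fetch_data_from_db, returning an error string for anything it rejects:

```python
# Defense in depth: reject anything that is not a single read-only statement
# before it reaches the database. This is a deliberately simple check, not a
# substitute for proper SQL parsing or database-level permissions.

def is_safe_select(query: str) -> bool:
    """Return True only for single-statement SELECT (or WITH ... SELECT) queries."""
    stripped = query.strip().rstrip(";")
    if not stripped or ";" in stripped:  # empty or multi-statement input
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word in ("SELECT", "WITH")
```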
We integrate Google’s Gemini AI, which will be responsible for processing customer queries, generating SQL queries, and formulating responses based on database results. In the sql_llm variable, we bind the model to the fetch_data_from_db function so the LLM knows the tool exists and can request it.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
sql_llm = llm.bind_tools([fetch_data_from_db])
Next, we’ll define a function that processes customer queries. It starts by generating an SQL query, retrieves relevant data from the database, and then crafts a response using AI. To ensure context-aware responses, we’ll also pass the conversation history so the chatbot can remember previous interactions.
def generate_response(query: str, conversation_history: list | None = None) -> str:
    conversation_history = conversation_history or []  # avoid a mutable default argument
    current_date = datetime.datetime.now().strftime("%Y-%m-%d")
    sql_messages = ChatPromptTemplate.from_messages(
        [
            ("system", sql_agent_prompt),
            MessagesPlaceholder("chat_history"),
            ("human", "{query} \nCurrent Date: {current_date}"),
        ]
    )
    chain = sql_messages | sql_llm
    response = chain.invoke(
        {
            "query": query,
            "current_date": current_date,
            "chat_history": conversation_history,
        }
    )
    context = []
    for tool_call in response.tool_calls:
        if tool_call["name"] == "fetch_data_from_db":
            # Execute the SQL the model asked for and collect the result as context.
            data = fetch_data_from_db.invoke(tool_call["args"])
            context.append(data)
    main_messages = ChatPromptTemplate.from_messages(
        [
            ("system", main_agent_prompt),
            MessagesPlaceholder("chat_history"),
            # Pass the database extract as a template variable so braces in the
            # query results cannot break prompt parsing.
            ("human", "{query} \nDatabase Extract: {context}"),
        ]
    )
    response_chain = main_messages | llm
    final_response = response_chain.invoke(
        {
            "query": query,
            "chat_history": conversation_history,
            "context": "\n".join(context),
        }
    )
    return final_response.content
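The conversation_history that generate_response expects can be a plain list of (role, content) tuples, which MessagesPlaceholder accepts directly. A small sketch of maintaining it between turns (the product names here are made up):

```python
# chat_history entries are (role, content) tuples; append each completed
# user/assistant exchange so later questions like "What do they cost?" resolve.
conversation_history = [
    ("human", "Do you have the AeroRun 2 sneakers in stock?"),
    ("ai", "Yes, the AeroRun 2 is currently in stock in sizes 8-12."),
]

def record_turn(history: list, user_msg: str, ai_msg: str) -> None:
    """Append one completed user/assistant exchange to the history."""
    history.append(("human", user_msg))
    history.append(("ai", ai_msg))
```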
We define an API endpoint that allows users to interact with the chatbot. The request body includes the user’s query and conversation history.
...
class Query(BaseModel):
    query: str
    conversation_history: list = []


class Response(BaseModel):
    success: bool
    response: str


@app.post("/chat", response_model=Response)
def chat(request: Query):
    try:
        req_dict = request.model_dump()
        query = req_dict.get("query")
        conversation_history = req_dict.get("conversation_history")
        response = generate_response(query, conversation_history)
        return {"success": True, "response": response}
    except Exception as e:
        print("Exception", e)
        return {"success": False, "response": str(e)}
To run the chatbot, start the FastAPI server:
fastapi dev app.py
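Before wiring up a real client, you can sanity-check the request body the /chat endpoint expects. This sketch only builds the JSON payload (no network call); the curl line in the comment assumes the FastAPI dev server's default port 8000:

```python
# Build the JSON body for POST /chat. Send `body` with any HTTP client, e.g.:
# curl -X POST http://localhost:8000/chat -H "Content-Type: application/json" -d "$BODY"
import json

payload = {
    "query": "Do you have any running shoes under $100?",
    "conversation_history": [
        ["human", "Hi, I'm looking for sneakers."],
        ["ai", "Happy to help! What would you like to know?"],
    ],
}

body = json.dumps(payload)
```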
You can now test the chatbot by sending a request with a query and conversation history — our API is fully functional and ready to provide chat-based support. While we’ve used the Supabase database, you can easily swap it for different database providers or other data sources by adjusting the prompt and corresponding tool functions.
For even better performance, consider adding an additional Retrieval-Augmented Generation (RAG) layer. This would involve creating a vector store of sample questions and responses and then updating the prompt to reference these examples. When a user submits a query, the system retrieves relevant samples and uses them to enhance the SQL generation and final response — leading to more accurate and well-styled answers.
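As a lightweight stand-in for that vector-store idea, the sketch below picks the sample Q&A pairs whose wording overlaps most with the incoming query, ready to be prepended to the SQL-generation prompt as few-shot examples. A real setup would use embeddings and a vector store; the sample questions, table names, and keyword-overlap scoring here are all illustrative:

```python
# Retrieve the most relevant sample (question, SQL) pairs for a query by
# counting shared lowercase words. This is a toy ranking to show the flow;
# swap in embedding similarity for production use.

SAMPLES = [
    ("How many orders did I place this month?",
     "SELECT count(*) FROM orders WHERE customer_email = :email "
     "AND created_at >= date_trunc('month', now());"),
    ("Is product X in stock?",
     "SELECT name, stock FROM products WHERE name ILIKE '%X%';"),
]

def top_samples(query: str, k: int = 1) -> list:
    """Rank SAMPLES by the number of words shared with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        SAMPLES,
        key=lambda s: len(q_words & set(s[0].lower().split())),
        reverse=True,
    )
    return scored[:k]
```

The selected pairs would then be formatted into the system prompt before invoking the SQL chain, giving the model concrete examples of the query style you expect.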
With this AI-powered chatbot, e-commerce businesses can automate customer support and provide quick, accurate answers using real-time data. By combining LangChain and Google’s Gemini AI, you enable intelligent, context-aware interactions that significantly improve customer experience. Keep refining your prompts and techniques to take your chatbot’s performance even further.
If there’s any topic you’d like me to cover in more detail in a practical way or if you have suggestions for future posts, feel free to share them in the comments. I’d love to hear your thoughts and make this space even more helpful for you.
Thanks for reading, and Happy Coding!