
Building an AI Chat App with .NET, Azure OpenAI and Vue.js - Part 1

A tutorial on creating a chat application using .NET, Azure OpenAI and Vue.js. In this first part, we'll set up the project, create the backend API and run it inside a Docker container.


Introduction

Have you ever wanted to build your own AI-powered chat application?

Instead of relying on generic AI solutions like OpenAI’s ChatGPT or Google’s Gemini, a custom or tuned AI model lets you tailor the chat experience to your specific needs. This could mean using industry jargon, answering questions about your unique products or services, or even personalizing responses based on user data or company documents.

With the combination of Microsoft’s OpenAI integration, .NET and Vue.js, it’s easier than you might think!

This multipart tutorial will guide you through the entire process: creating the Azure resources, building a .NET backend, and setting up a Vue.js frontend to interact with the AI model.

In this first part, we’ll start by building the foundation: a .NET backend running inside a Docker container.

Let’s dive in!

Prerequisites

These prerequisites are required for all parts of this tutorial.

Setting Up the Project

I like to start with a fixed folder structure for my projects. Here’s the simplified layout I recommend for this tutorial:

📂ChatApp/
├── 📁backend/
└── 📁frontend/

First, inside the 📁backend folder, let’s create a new empty .NET web project using the following command:

# 🖥️CLI
dotnet new web -n ChatApi

This will scaffold a new .NET web project named ChatApi inside a new 📁ChatApi subfolder. When you open the project in your favorite code editor and examine the Program.cs file, you’ll notice that it uses a Minimal API for the initial route:

// 📄Program.cs
var builder = WebApplication.CreateBuilder(args);

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

We will keep using Minimal APIs for this project, as they provide a lightweight way to build APIs without the overhead of traditional controllers and routing.

Next, let’s add the required NuGet packages to the project (run these commands inside the 📁ChatApi folder):

# 🖥️CLI
dotnet add package Microsoft.Extensions.Configuration.EnvironmentVariables
dotnet add package Microsoft.SemanticKernel

These packages will help us manage configuration settings and the Azure OpenAI integration.

Loading Configuration Settings

To load configuration settings from environment variables, add the following to the beginning of the Program.cs file:

// 📄Program.cs
var config = new ConfigurationBuilder()
    .AddEnvironmentVariables()
    .Build();

var endpoint = config["AZURE_OPENAI_ENDPOINT"];
var deployment = config["AZURE_OPENAI_GPT_NAME"];
var key = config["AZURE_OPENAI_KEY"];

These settings will be passed to the Docker container as environment variables and will be used to authenticate with the Azure OpenAI service. By using environment variables, we can easily switch between different environments (e.g., development, staging, production) without changing the code and don’t have to worry about storing sensitive information in the source code.
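Optionally, and as an addition of mine rather than part of the original setup, you can fail fast when one of these variables is missing by adding a small guard right after reading them:

// 📄Program.cs (optional guard – my own addition, not required for the tutorial)
// Fail fast at startup if any of the Azure OpenAI settings is missing.
if (string.IsNullOrWhiteSpace(endpoint) ||
    string.IsNullOrWhiteSpace(deployment) ||
    string.IsNullOrWhiteSpace(key))
{
    throw new InvalidOperationException(
        "AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_GPT_NAME and AZURE_OPENAI_KEY must be set.");
}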

We will use these settings in the next part of the tutorial when we interact with the Azure OpenAI service.
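As a rough preview of where these values are headed (the actual integration follows in Part 2), Semantic Kernel’s Azure OpenAI connector expects exactly these three pieces of information. A minimal sketch could look like the following, although the wiring in Part 2 may differ:

// 📄Program.cs (sketch only – the real integration is covered in Part 2)
// using Microsoft.SemanticKernel;   // place this using directive at the top of the file

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: deployment!,   // AZURE_OPENAI_GPT_NAME
        endpoint: endpoint!,           // AZURE_OPENAI_ENDPOINT
        apiKey: key!)                  // AZURE_OPENAI_KEY
    .Build();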

Adding the Messages Endpoint

Now, let’s add a new endpoint to our API that will handle sending messages to the Azure OpenAI service and returning the response.

// 📄Program.cs
app.MapPost("/messages", (ChatMessage message) =>
{
  // We will handle the Azure OpenAI communication in Part 2
  return $"You said: '{message.Content}'";
});

In a separate file, create a new class named ChatMessage:

// 📄ChatMessage.cs
namespace ChatApi;

public class ChatMessage
{
    public string? Content { get; set; }
}
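Minimal APIs bind the JSON request body to the ChatMessage parameter automatically via System.Text.Json. If you prefer an immutable type, a positional record works just as well (purely optional; the class above is what the rest of the tutorial assumes):

// 📄ChatMessage.cs (optional alternative – a record instead of a class)
namespace ChatApi;

public record ChatMessage(string? Content);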

CORS

To allow the frontend to communicate with the API running on a different port, we need to enable CORS (Cross-Origin Resource Sharing) in the API. We specifically allow requests from http://localhost and http://localhost:80 to access the API, since the API itself will run on port 8080 inside the Docker container. (As a bonus, we will also allow the development port 5173 of the Vue.js frontend, which we’ll need in Part 3 of this tutorial.)

// 📄Program.cs
var builder = WebApplication.CreateBuilder(args);
+builder.Services.AddCors(options =>
+{
+  options.AddPolicy("AllowLocalhost", policy =>
+    policy.WithOrigins("http://localhost", "http://localhost:80", "http://localhost:5173")
+          .AllowAnyHeader()
+          .AllowAnyMethod()
+  );
+});

var app = builder.Build();
+app.UseCors("AllowLocalhost");

Running the API in a Docker Container

To host and run the API inside a Docker container, we need to create a Dockerfile in the root of the ChatApi project (📁backend/ChatApi):

# 📄Dockerfile
# build image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /source
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o out

# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /source/out .
# non root user
USER $APP_UID
EXPOSE 8080
ENTRYPOINT ["dotnet", "ChatApi.dll"]

This Dockerfile uses the official .NET SDK image to build the project and then copies the output to a smaller runtime image to run the API. It also uses a feature introduced with the 8.0 images: running the application as a non-root user ($APP_UID) for better security. By default, applications in the 8.0 images listen on port 8080. You can read more about the changes in this blog post by Andrew Lock.

After finishing our Dockerfile, we can build the image using the following command:

# 🖥️CLI
docker build -t jd/aichatnetvue:v1 .

Feel free to replace jd/aichatnetvue:v1 with your desired image name and tag.

Finally, we can run the container using the following command:

# 🖥️CLI
docker run --name chatappbackend -d --rm -p 8080:8080 jd/aichatnetvue:v1

If you changed the image name and tag in the previous command, make sure to use the same name and tag instead of jd/aichatnetvue:v1 here as well.

If everything went well, you should see the following output in the logs of the running Docker container:

# 🐋Logs
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:8080
...
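If you don’t have the logs open, you can inspect them via the container name chosen in the run command above:

# 🖥️CLI
docker logs chatappbackend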

Testing the API

With the API running inside a Docker container, we can now test the endpoint using curl or any other HTTP client (Postman, Insomnia, etc.):

# 🖥️CLI
curl -X POST \
  --url 'http://localhost:8080/messages' \
  -H 'content-type: application/json' \
  -d '{
      "Content": "Hello AI!"
    }'

# Output
# You said: 'Hello AI!'

Docker Compose

Docker Compose helps us manage our containers for development and testing by defining the services and networks in a single file. In Part 3 of this tutorial, we will add a Vue.js frontend to the project, and Docker Compose will make it easy to run both the API and the frontend together.

Create a new file named docker-compose.yml in the root folder of the project 📁ChatApp:

# 📄docker-compose.yml
version: '3.8'

services:
  backend:
    build:
      context: ./backend/ChatApi
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    environment:
      - AZURE_OPENAI_ENDPOINT=${AZURE_OPENAI_ENDPOINT}
      - AZURE_OPENAI_GPT_NAME=${AZURE_OPENAI_GPT_NAME}
      - AZURE_OPENAI_KEY=${AZURE_OPENAI_KEY}

This docker-compose.yml file defines a service named backend that builds the API using the Dockerfile in the 📁backend/ChatApi folder. It also maps port 8080 from the container to the host machine and sets the required environment variables for the Azure OpenAI service.
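The ${...} placeholders are substituted by Docker Compose at startup, for example from an .env file placed next to the docker-compose.yml. The values below are placeholders of my own for illustration; use your own, and keep this file out of source control:

# 📄.env (placeholder values)
AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com/
AZURE_OPENAI_GPT_NAME=<your-gpt-deployment-name>
AZURE_OPENAI_KEY=<your-api-key>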

To run the API using Docker Compose, we can simply use the following command:

# 🖥️CLI
docker-compose up -d

This will build the image and run the container in the background. Testing our API again should yield the same result as before.

The API is working as expected, and we are ready to move on to the next part of the tutorial. In Part 2, we will interact with the Azure OpenAI service to send and receive messages.