Tutorial

KTOR WAR hosting on Jetty9 using an NGINX proxy

This tutorial will guide you through creating an example KTOR project, hosting it with Jetty9, and using NGINX as a proxy that points a domain to the Jetty9 server.

The project

To start this off we'll create a project using the KTOR project generator:

https://start.ktor.io/

For the purposes of this tutorial we'll be creating a project for 'example.com', which is the default for the project generator. The only change we'll make is setting the engine to Jetty (under 'Adjust project settings').

For this to work as a WAR package we'll need to make a few adjustments to the project:

New file: /src/main/webapp/WEB-INF/web.xml

This is the deployment descriptor file; it was taken directly from the KTOR documentation.

<?xml version="1.0" encoding="ISO-8859-1" ?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <servlet>
        <display-name>KtorServlet</display-name>
        <servlet-name>KtorServlet</servlet-name>
        <servlet-class>io.ktor.server.servlet.ServletApplicationEngine</servlet-class>
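        <!-- Tell Ktor which HOCON configuration file (on the classpath) to load -->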
        <init-param>
            <param-name>io.ktor.ktor.config</param-name>
            <param-value>application.conf</param-value>
        </init-param>
        <async-supported>true</async-supported>
    </servlet>

    <servlet-mapping>
        <servlet-name>KtorServlet</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>

Edit file: /src/main/kotlin/com/example/Application.kt

Let's create a new entry point for the webapp which does everything the embedded server does, but without starting the embedded server. We move all the existing functionality into a separate function and call that from both the embedded server and the webapp.

/**
 * For running locally via the embedded server
 */
fun main() {
    embeddedServer(Jetty, port = 8080, host = "0.0.0.0") {
        start(this)
    }.start(wait = true)
}

/**
 * For running via WAR
 */
fun Application.module() {
    start(this)
}

fun start(app: Application) {
    app.configureRouting()
    app.configureTemplating()
    app.configureMonitoring()
    app.configureHTTP()
    app.configureSecurity()
    app.registerUserRoutes()
}

New file: /src/main/resources/application.conf

This file is required for running as a webapp; it tells the Ktor servlet engine which module function to run and where to find it. The value com.example.ApplicationKt.module points at the module function in Application.kt (Kotlin compiles the top-level functions of Application.kt into a class called ApplicationKt).

ktor {
    application {
        modules = [ com.example.ApplicationKt.module ]
    }
}

Edit file: /build.gradle.kts

Finally, we add the war plugin to the plugins section of build.gradle.kts:

plugins {
    application
    ...
    id ("war")
}

Now we can build the war file with the following command:

gradle :main war

This will build a war file in the build/libs directory. We can copy that to the server; in this example we'll be storing the war file at:

/var/www/vhosts/example.war
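
For example, assuming you have SSH access to the server and the /var/www/vhosts/ directory already exists, the war file could be copied over with something like the following (the local file name and the user are placeholders that depend on your project name, version and server setup):

scp build/libs/example-0.0.1.war user@example.com:/var/www/vhosts/example.war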

The server

First off, let's install the packages we need:

sudo apt-get install default-jre nginx jetty9

Jetty runs on port 8080 by default and uses XML files to define the separate web apps it serves. Let's create the following new file:

sudo nano /usr/share/jetty9/webapps/example.xml

With the following contents:

<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/</Set>
  <Set name="war">/var/www/vhosts/example.war</Set>
  <Set name="virtualHosts">
    <Array type="java.lang.String">
      <Item>example.com</Item>
    </Array>
  </Set>
</Configure>

This file tells Jetty to listen for the domain example.com and send any traffic for it to the location of our war file. The war file can go anywhere, but I've decided to go with /var/www/vhosts/. It's also worth noting that the virtualHosts element is an array, allowing you to easily have multiple (sub)domains pointing to the same war file.
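
For example, to serve both example.com and a www subdomain from the same war file (the www.example.com entry is just an illustration), the array could be extended like this:

<Set name="virtualHosts">
  <Array type="java.lang.String">
    <Item>example.com</Item>
    <Item>www.example.com</Item>
  </Array>
</Set>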

We now reload Jetty to pick up the new file:

sudo service jetty9 force-reload

Now that Jetty is ready, let's set up an Nginx proxy to take any incoming traffic on port 80 and point it at Jetty on port 8080:

New file: /etc/nginx/sites-available/example.com

server {
    listen 80;

    server_name example.com;

    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;

        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect     off;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
    }

    # Send any direct requests for /index.php back to the site root
    if ($request_uri = /index.php) {
        return 301 $scheme://$host;
    }
}

Now let's enable the site. We do this with a simple symbolic link from the sites-available directory to the sites-enabled directory. This means that if something goes wrong, we only need to remove the link while we fix things instead of trying to fix a live configuration file:

cd /etc/nginx/sites-enabled
sudo ln -s ../sites-available/example.com

It's good practice to test the configuration before reloading it:

sudo nginx -t

If everything is alright, we can load it in:

sudo service nginx reload

HTTPS (optional)

If you want to secure your application with HTTPS, you can do so for free using Let's Encrypt.

To start off we'll install the certbot package along with its nginx plugin:

sudo apt-get install certbot python3-certbot-nginx

With that installed, we then need to update the nginx config file by adding the following into the server block:

server {
    ...
    location /.well-known/acme-challenge {
        root /var/www/letsencrypt;
    }
}

We need to enable this change by reloading nginx:

sudo service nginx reload

Now we can finally request a new certificate with the following:

sudo certbot --nginx -d example.com

The --nginx parameter will make certbot update the virtualhost file for us. It will change the existing block to listen on port 443 (HTTPS) and reference the newly created certificates. It will also create a new HTTP server block that redirects any non-HTTPS traffic.
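
As a rough sketch (the exact output depends on your certbot version), the updated config ends up looking something like this, with the certificate paths pointing at certbot's default locations:

server {
    listen 443 ssl;

    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        ...
    }
}

server {
    listen 80;

    server_name example.com;

    return 301 https://$host$request_uri;
}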