Motivation for this Series of Posts
As a PhD candidate, you’re often faced with the challenge of managing complex projects and collaborating digitally. Simultaneously, showcasing your skills and achievements to potential employers and colleagues becomes important. Unfortunately, academic institutions frequently fall short in providing the necessary infrastructure and tools for these tasks. Either you’re denied access to essential tools or you’re stuck with inconvenient, admin-restricted options. Worse still, upon completing your PhD, access to these institutional resources typically vanishes, leaving you to start from scratch.
In this series, I’ll guide you through a solution to these challenges. Imagine having your own virtual server, equipped with powerful open-source software, tailored to make your academic journey smoother and look professional throughout. These tools include:
Project Management with OpenProject: Organize your projects and track progress. You can use it just for yourself, or create user accounts to facilitate team communication. It includes nice visualization options, like Gantt charts and boards.
File Management and Online Collaboration via Nextcloud: Say goodbye to sending a bunch of files back and forth via email. Nextcloud offers a unified platform for storing your documents, sharing them with peers, and collaborating in real time. It even supports professional video calls!
Showcasing Your Work with a Quarto Website: You do all these intelligent and creative things - showcase them! Quarto allows you to create a personal website to display your research, publications, and professional achievements, ensuring you stand out to employers and collaborators. You can also blog and create interactive presentations with it.
The cost? Surprisingly minimal. You’re looking at around 5 € per month for the server and an additional 1 € or so for your personal domain. This includes 40 GB of storage for your files. Need more space? Upgrading is just a few clicks away.
What Do You Need to Know to Follow This Guide?
Embarking on this journey requires no special prior knowledge, but what you do need is curiosity, a willingness to learn, and a bit of patience. This guide is crafted for beginners, and I’ll walk you through every step in detail. The result is somewhat lengthy. I hope this will not discourage you. However, I think the merit of this post is its comprehensiveness. There are excellent resources available for each of the topics we will be covering, but putting it all together (and even knowing which options are available) has been quite time consuming. I hope I can help you spend less time setting up and more time using the tools for your projects. Let’s break down the key areas you’ll become familiar with:
Linux/Ubuntu Server: The backbone of your virtual server. You’ll learn the (very) basics of server management, including scripting in Bash, and how to get one in the first place.
Domains: Understand how to choose, buy and manage a domain name that reflects your professional identity.
Nginx Proxy Manager: Enhance your server security and encrypt connections using SSL with just a few clicks.
Docker: Encapsulate every app in its own container that has everything the app needs to run. Your apps will not get in each other's way.
Powerful Open Source Tools: Install and run OpenProject and Nextcloud instances.
RStudio and Quarto for Your Website: These tools will help you create a professional website to showcase your academic achievements, and share your thoughts.
The time and effort you invest here will not only expand your skill set but also give you full control over your data. Your projects and presentations will look more professional, with everything carrying your personal brand.
Do I Really Need All This?
Setting everything up requires a dedicated investment of time, and maintaining it involves periodic updates. This means that time spent learning how to set up and use new tools is time you can’t spend on other things. So it’s important to balance the benefits of getting started with your personal interests and current circumstances. Use the checklist below to start thinking about whether it’s a good idea:
Does the idea of managing your own server excite you? Do you like tinkering with digital tools, or would you like to find out if you do?
Will the skills and tools acquired be useful to you on a regular basis (in your private and professional life)?
Are you on the cusp of beginning a long-term role in a new organization? It’s often prudent to first acclimate yourself to the existing technological environment and the preferences of your new colleagues. This understanding can inform whether or not integrating a personal server into your workflow is beneficial.
Consider the resources already available to you: Are the benefits of budget friendliness, privacy, long-term availability (independent of employer resources), and personal branding relevant to you?
Do you have some time and brain space to devote to this side project? Please consider this carefully. Don’t take on too much or do it just to procrastinate. Speaking from experience: Open source tools can streamline your workflow, but they cannot write that proposal or create that presentation.
Did these considerations make you want to continue? If not, you’ve just found out what you don’t want to do in the near future. That can feel pretty good too. Of course, this guide isn’t going anywhere - it’s ready for you whenever you are, whether that’s now or in the future. With so much going on in your academic life, the most important thing is to make the right choices for you.
Content of This Post
Renting Your Virtual Server: Dive into the various options available for renting a virtual server. I’ll guide you through a cost-effective choice.
Choosing and Securing Your Domain: Your domain is your digital identity. We’ll explore how to select a domain that resonates with your professional image and the steps to acquire and use it.
Basic Server Configuration: I’ll walk you through the initial setup of your server, including the installation of Docker and the configuration of an Nginx Proxy Manager.
Implementing OpenProject: Learn how to install OpenProject to manage your projects and tasks.
Nextcloud and Quarto will be covered another time.
Ready? Let’s get to it!
Step 1: Get Your Server.
This is your situation right now: You have your local machine, connected to the internet. And you want to serve some content, e.g. a website, to potentially every device with a browser. You need a server!
Physical servers are essentially computer hardware configured to function as servers. It’s a role you assign to some hardware. Technically, you could even repurpose an old laptop for this role. However, for our purposes, we’re taking a more convenient path: renting a segment of a professional-grade server. This approach spares us the hassle of dealing with network configuration, hardware maintenance, backups, and concerns about internet upload speeds. (If you are interested in setting up a server in your home, you can start here.)
Many companies offer server rentals through virtual machines, which allocate a portion of their physical server’s resources to your use. Essentially, the more you invest, the more resources (like performance capabilities and storage space) your “Virtual Private Server” (VPS) will have at its disposal.
For typical academic needs, the most basic options are sufficient. As an example, all my personal academic projects run smoothly on a VPS equipped with a 2-core CPU, 4 GB of RAM, and 40 GB of SSD space. When choosing a provider, my top three criteria were:
Price: A primary goal here is cost-effectiveness, especially compared to the cumulative subscriptions for various file storage and project management tools.
Server Location: I prioritized providers offering servers in the EU, preferably close to my location. The EU’s data protection regulations are a plus, and a geographically nearer server means better performance during regular access.
User-Friendly Setup: The provider should offer an intuitive graphical interface, comprehensive documentation, and flexibility in using external services, like those for domain management.
I personally opted for Hetzner, a German VPS provider that ticks all these boxes and is among the most affordable options I found. They have servers in Germany, Finland, and the USA. The servers in Europe are powered by sustainable energy (water and wind). Another commendable choice is DigitalOcean, known for its extensive documentation and user-friendly interface. However, there are numerous viable alternatives worth exploring, so do a little research yourself.
Sometimes, your institution might offer sponsored access to VPS. For instance, researchers affiliated with universities in Baden-Württemberg, Germany, can access VPS services through bwCloud. Rest assured, whichever provider you choose won’t impact the functionality of the setups we’ll cover next.
I opted for the “CX22” plan from Hetzner. For a price increase of 20 %, I recommend activating the automatic backup feature. This service from Hetzner creates daily backups of your entire server. If something goes awry, you can easily revert to the last backup with just one click. For me, this feature reduces worries about potential data loss or mishandled commands. In total, I pay 5.29 € per month.
Assuming you choose to utilize this option as well, the first step is to set up an account. Once logged in, navigate to your “Cloud Console” dashboard. Here, you’ll find the option to “add server”. Clicking on this will present you with a screen similar to the following:
Select the server location that’s closest to you to optimize performance.
Next, we’ll choose the server’s operating system. Unlike the desktop and laptop market, Linux dominates the server market. Hetzner offers six different Linux distributions, but I recommend selecting Ubuntu. Its popularity ensures easy access to tutorials and community support, making it an ideal choice for those new to server management. Additionally, the code and instructions in the following sections are specifically tailored for Ubuntu. You can use either the brand new 24.04 or the 22.04 edition.
When it comes to server performance, click on “x86 (Intel/AMD)” and then select “CX22” for the most economical option. AMD hardware is fine, too. I would not use ARM64 for our purpose. ARM64 is a hardware architecture often used in smaller devices such as mobile phones and IoT devices due to its energy efficiency, and it is becoming increasingly popular in data centers and servers. Your local machine is most likely based on the AMD64 architecture, and ARM64 can sometimes lead to compatibility issues with software developed for AMD64. There are workarounds, but if there is no significant price difference, use the Intel or AMD options.
In the “Networking” section, make sure to enable both IPv4 and IPv6. These are standard internet protocols, with IPv6 being the more modern version. However, IPv4 remains prevalent among Internet Service Providers (ISPs) and other internet infrastructure components. Opting for IPv6 only would restrict your server’s accessibility, making it unreachable from networks that rely on IPv4. By enabling both, you ensure maximum compatibility and reach for your server.
At some point, we have to log in to our server. You can let Hetzner send you the root password of your server via email and subsequently use their interface to log in, or establish a remote connection from the terminal of your local machine via SSH (Secure Shell). While using a password is technically an option, SSH keys are recommended for enhanced security. They are a pair of cryptographic keys for authenticating in the SSH protocol. One is a private key, which you keep confidential on your local machine, and the other is a public key that you store on the server. SSH keys are favored over passwords as they offer superior security, being almost immune to brute force attacks. Additionally, they don’t require manual entry each time (but you can set an additional password to further enhance security).
For Linux and Mac users, generating an SSH key pair is straightforward. Open your terminal (search for “terminal” in your system if you’re unsure where to find it), and type in:
ssh-keygen

This command creates a standard 2048- or 3072-bit RSA key pair by default. You will be asked for a filename (just press Enter for the default) and whether you want to add a passphrase (password). If you choose to do so, you will have to enter it each time you use SSH. You can increase the key size (more secure but slower) with ssh-keygen -t rsa -b 4096. There are other key types besides RSA, such as ECDSA and ED25519, if you want to have a look at them.

You’ll find the public SSH key in /home/yourusername/.ssh/id_rsa.pub. Open this file, and copy its entire content. In Hetzner’s interface, click on “Add SSH Key” and paste the content from your public SSH key file. Give it a recognizable name.
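If you are curious what a non-default invocation looks like, here is a hedged sketch that generates an ED25519 key pair non-interactively into a throwaway directory, so nothing in your real ~/.ssh is touched. The file names and the email comment are placeholders; in the guide itself you would simply run `ssh-keygen` and accept the defaults.

```shell
# Generate an ED25519 key pair in a temporary directory (safe to run anywhere).
# -t selects the key type, -N "" sets an empty passphrase, -f the output file,
# -C adds a comment (placeholder email).
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$tmpdir/id_ed25519" -C "you@example.com"
# The single line in the .pub file is the public half - this is what you
# would paste into Hetzner's "Add SSH Key" dialog:
cat "$tmpdir/id_ed25519.pub"
```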
Windows users can utilize the Windows Subsystem for Linux (WSL) to perform the same steps. WSL also enables you to log into your server later, bypassing the need to use Hetzner’s interface for this purpose.
If you need more comprehensive instructions, DigitalOcean has an excellent guide covering SSH keys for all operating systems.
Next, you’ll have the option to add more SSD storage via volumes. The default 40 GB included in your plan is typically sufficient for regular academic use. You can always expand your storage space later if needed.
Hetzner also provides the option to set up firewall rules at this stage. However, I opted not to use this feature, as we will establish a firewall later using just a few lines of code.
Now, let’s talk about backups. While it’s optional, I highly recommend enabling this feature for added security. You can activate or deactivate backups at any time. The next set of options – groups, labels, and cloud config – are not crucial for our purposes, so we’ll skip them.
The final step here is to name your server. Choose a name that’s easy to remember and reflective of its purpose. Once you’ve done this, you’re ready to generate your very own server. You have the flexibility to stop the service and end payments at any time by simply destroying the server. The price of €5.29 is for the whole month. For example, if you destroy your server (not just stop it, but remove it completely) after 14 days, you will only pay about half of that. Pay-per-hour is the typical VPS payment model and applies to DigitalOcean as well.
Step 1 brought you this far:
With your server set up, let’s move on to acquiring a professional domain to complement it.
Step 2: Get Your Domain.
This step, while optional, is highly recommended for enhancing your server’s accessibility and professionalism. Every internet-connected device, including your server, is assigned a unique Internet Protocol (IP) address, such as 203.0.113.145. This numerical address, akin to a postal address, identifies and locates the device in the digital realm. However, numbers are not as user-friendly or memorable as words. This is where a domain comes in handy – it links a readable, memorable string (like “dgerdesmann.de”) to your server’s IP address. Without a domain, users would have to navigate to URLs like https://203.0.113.145:8080, which are not only unattractive but also appear less trustworthy. A domain transforms this into a more appealing and professional-looking URL, such as https://dgerdesmann.de.
Once you have acquired a domain, you can create as many subdomains as you want. For example, you can serve a blog under https://blog.yourdomain.com, an app under https://coolapp.yourdomain.com, and a portfolio website under https://portfolio.yourdomain.com. One domain - lots of opportunities. If you want to learn more about domain names, Mozilla provides a nice overview.
Like VPS providers, there are services for domain registration. You can find appealing domains for less than 1 € per month, and sometimes even for free. Before choosing a provider, let’s focus on selecting the right domain name. Keep these tips in mind:
Short and Sweet: Your domain should be concise, easy to spell, and memorable.
Long-Term Relevance: Avoid overly specific names tied to your current research, as your interests may evolve.
Choose the Right Extension: Common extensions like “.com” or “.net” are versatile. Country-specific domains like “.de” for Germany are great for localized relevance. For broader European contexts, “.eu” might be suitable (for example, if you move a lot or work primarily on EU-sponsored projects). I suggest that you avoid extensions that people are largely unfamiliar with, even if they are very cheap. The goal is for people to click on your URLs without thinking twice, so keep it simple and familiar.
Personal Branding: Using your name is a natural choice for academics. If you have a longer name, consider a shortened version. For instance, I chose “dgerdesmann.de” over the longer “danielgerdesmann.de”. But if your name is common, like “Schmidt”, “Smith” or “Singh”, you might want to include middle names or additional identifiers to avoid confusion (e.g. yourname-biostatistician.com).
Once you have some potential names in mind, use tools like Domcomp to compare prices across providers, or visit popular registrars like Namecheap directly to see if your preferred domain is available. Choosing a well-known provider can be advantageous, as it often means a larger user base and more online resources for troubleshooting. I registered my domain with Namecheap, and the upcoming guide will detail settings specific to their platform. Namecheap also offers these additional features:
For our purposes, these aren’t essential. I opted for “Premium DNS” due to its low cost and the promised minor performance improvements, but it’s not a necessity for what we’re doing. As for “SSL”, there’s no need to purchase this service. We’ll later go through how to set up secure SSL connections for free, ensuring your server communications are safe and encrypted.
Now that you’ve got your domain, it’s exciting to think about the possibilities! Not only will we use it for our server, but you can also leverage it for other purposes. For instance, some email providers allow you to use your custom domain for personalized email addresses (like “yourname@yourdomain.com” instead of “yourname@gmail.com”). If you’re interested, here’s a guide from Tuta, my email provider, on setting this up. However, the process is different for every email provider. It’s also possible to integrate custom domains with Gmail, though additional fees might apply, especially on free plans.
With your server and domain ready, we’re set to start tinkering.
Step 3: Basic Server Setup
Step 3.1: User and Firewall Configuration
It’s time to get your server ready for the installation of powerful open source tools. We’ll initially access our server through Hetzner’s interface. This is for those who chose not to use SSH keys, but if you have your SSH keys set up, you can also log in directly from your local machine (see the DigitalOcean tutorial above and command below).
Accessing Your Server: Log into your provider’s account. For Hetzner users, select your generated server and click on the “Console” button in the right corner. A window will open. You’re now looking at a server console, using the scripting language ‘Bash’ to issue commands to the server. Bash has been around for 35 years and is still widely used. Now it will serve you! There are plenty of tutorials for Bash if you are curious. For now, here are some practical tips for working with Bash code:
- To paste code into your console, use Ctrl+Shift+V.
- Move to the beginning of the line with Ctrl+A, and to the end with Ctrl+E.
- Delete the word to the left of the cursor with Ctrl+W. Delete everything from the cursor to the beginning of the line with Ctrl+U, and to the end of the line with Ctrl+K.
- Use the up and down arrows to navigate through your command history.
- Repeat the last command quickly by typing !! and pressing Enter.
- Comment code with # (just like in R).
- MOST IMPORTANTLY: Be careful when running commands, especially those that modify or delete files. That’s often permanent. It’s a good idea to save your commands in a .txt file on your system for future reference and troubleshooting.
SSH Login: [optional] If you set up SSH, use your terminal to log in with:
ssh root@your_server_ip
Replace “your_server_ip” with the IP of your server. You can find it just below your console or in the Hetzner menu.
- Creating a User: Once logged in as “root”, it’s safer to create a new user and operate from that account. You can choose any name you like. To add a user, use:
adduser username
Use any username you like, but keep it short as you will need to type it in sometimes. You will be prompted for user information; press Enter to leave it empty.
- Then, grant this user admin privileges:
usermod -aG sudo username
Command breakdown: usermod = the command itself; -a = append the user to a group; -G sudo = the “sudo” (“super user do”) group, which grants admin privileges.
- Setting Up a Firewall: We’ll configure a basic firewall to restrict external traffic to specific ports.
- [optional] First, allow SSH connections for future remote logins:
ufw allow 22/tcp
Command breakdown: ufw = “Uncomplicated Firewall”, a command-line tool to configure the iptables firewall in Linux; allow = what it says; 22/tcp = the standard OpenSSH port is 22, and “tcp” stands for Transmission Control Protocol. We use this protocol because of its reliability. Another protocol is UDP, which is less reliable but has lower latency (used for streaming media, for example). Simply typing “ufw allow 22” would enable both protocols.
- Allow HTTP (Hypertext Transfer Protocol) and HTTPS (HTTP Secure) connections. These are protocols used for transmitting data over the internet, typically webpages. HTTPS encrypts the data to provide secure communication. We’ll primarily use HTTPS for security, but HTTP is needed during the setup phase:
ufw allow 80/tcp
Port 80 is the standard port for HTTP.
ufw allow 443/tcp
Port 443 is the standard port for HTTPS.
- Open some ports that we will need. A port is like a gate for web traffic. We want to open gates so that users can reach our website, for example, from anywhere. But we want to open as few gates as possible, so as to provide as little surface area as possible for attacks.
ufw allow 81/tcp
Port 81 is used by the Nginx Proxy Manager we are going to set up later. We need to open this port so we can reach its interface in the browser.
- Enable the firewall and check its status:
ufw enable
We set the rules for our firewall; now we need to activate it.
ufw status
Gives you a breakdown of the status. If the firewall is active, incoming traffic is only allowed on the ports we specified. Incoming traffic means, for example, access from your browser.
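With the rules from this guide, the output should look roughly like this (a sketch; ufw also lists the corresponding IPv6 rules, marked “(v6)”):

```
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
81/tcp                     ALLOW       Anywhere
```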
- Optional Remote Login Setup: If you wish to log in from your local machine’s terminal, point to your SSH keys. Replace ‘username:username’ with your username and adjust the path to where your SSH keys are stored:
rsync --archive --chown=username:username ~/.ssh /home/username

Replace “username” with the username you chose above, and adjust the path if your SSH keys are stored somewhere else. Command breakdown: rsync = tool for (remote) file synchronization; --archive = tells rsync to preserve file attributes (permissions, timestamps, …); --chown=username:username = sets the user and group ownership of the copied files to the specified user. The rest of the command is the path to the directory where the SSH keys are stored, followed by the destination.
You’re probably still logged in as root. Switch to the new user you created with:
su - username
From now on, if you ever get “permission denied” errors, put “sudo” in front of the command to run it with admin privileges.
- Server Updates: Finally, update your server’s software, including security patches. Refresh the list of available packages:
sudo apt update
Command breakdown: sudo = run a command with admin privileges; apt = Advanced Package Tool, a package management command-line tool; update = a subcommand of apt that fetches information about the latest packages and updates available.
Then, install any updates:
sudo apt upgrade
“upgrade” checks the information from “update” against the installed package versions. If newer versions are available, you will be asked if you want to install them. Confirm with “y”. If there is a prompt about which services to restart, just press Enter.
Sometimes something critical gets upgraded, such as the kernel. Rebooting now is a good idea.
sudo reboot
This will log you out. Make sure you know how to log back in (ssh username@server-ip, plus your password if you set one). If you’re stuck, you can use the Hetzner console.
Step 3.2: Directing Your Domain to the Server IP
With your domain in hand, it’s time to link it to your server. Navigate to your domain provider’s website, such as Namecheap, and log into your account. Once you’re in, follow these steps:
Accessing Domain Settings: On the dashboard, locate and click on the “Domain List” in the left panel. Find your purchased domain and click on the “MANAGE” button next to it.
Setting Up DNS Records: Head over to the “Advanced DNS” section. Here, we’ll create an A-record to connect your domain to the server’s IP address.
Click on “Add New Record” under the Host Records section.
Select “A Record” from the dropdown menu.
For the Host field, you can choose any string. As we’ll be using Docker later, I used “docks”. This means your server’s setup page will be accessible via http://docks.yourdomain.com.
In the Value field, enter your server’s IP address.
Leave the “TTL” (Time To Live) setting on automatic.
Activating the Record: After confirming these details, Namecheap will register the connection between “docks.yourdomain.com” and your server’s IP in a DNS internet database. This process makes it so that when someone navigates to “docks.yourdomain.com”, they are directed to a web page hosted on your server.
Keep in mind that this DNS record entry might take some time to become active, usually up to 30 minutes. This delay is perfectly normal, and it’s a good opportunity to continue with other server setups in the meantime.
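Conceptually, the A-record you just created corresponds to a single line in a DNS zone file, roughly like this (placeholder subdomain and IP; the TTL is in seconds):

```
docks.yourdomain.com.    1800    IN    A    203.0.113.145
```

Once public resolvers return your server’s IP for the subdomain, the record is live.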
Need images with these instructions? Namecheap has a tutorial.
Step 3.3: Installing Docker
Finally, we’re ready to install some exciting software on our server. Our aim is to host multiple applications, like an OpenProject instance, a Nextcloud instance, and a personal webpage. To manage these efficiently and securely, we’ll use containerization, a method where each application is isolated in its own container(s) with all necessary components (code, libraries, settings, etc.). This separation ensures that the applications don’t interfere with each other and enhances security. For this purpose, we’ll use the popular container technology Docker. Here’s how to install it:
- Preparing for Docker Installation (where to get it from):
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
Command breakdown: curl makes the HTTPS request for you; -f = fail on server errors; -s = silent (no progress bar); -S = still show an error message if the download fails; -L = follow redirects if necessary; https://download.docker[…] = the URL where the GPG key is stored that verifies the authenticity of Docker packages; -o /etc/apt/keyrings/docker.asc = save the key to this file, so APT can use it to verify Docker packages in the future.
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
In short, this command allows the APT package manager to recognize and use the Docker repository when installing or updating Docker packages.
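For reference, on an AMD64 server running Ubuntu 24.04 (codename “noble”), the resulting /etc/apt/sources.list.d/docker.list would contain a single line like this (the architecture and codename are filled in automatically to match your system):

```
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable
```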
- Updating Package List:
sudo apt update
Refresh the package list again, now that you have added a repository. It fetches the latest Docker package versions.
- Installing Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl status docker
Shows detailed information about the Docker service, including whether it is running, how long it has been running, and its ID. You can also test whether Docker is running properly by entering “sudo docker run hello-world”.
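If everything worked, the first lines of the status output look roughly like this (a sketch; paths, dates, and IDs will differ on your machine):

```
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled)
     Active: active (running) since ...
```

Press q to leave the status view.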
- Adding User to Docker Group: To avoid needing root access every time you use Docker, add your user to the Docker group:
sudo usermod -aG docker ${USER}
This command adds the currently logged-in user (fetched with the variable ${USER}) to the docker group.
su - ${USER}
Command breakdown: su = command to switch users (without a username it defaults to root). Here it logs you back in as the same user, so the new group membership takes effect and you no longer need to type “sudo” in front of every Docker command.
groups
Lists the groups your user belongs to, so you can check whether the commands worked properly. “docker” should now be among them.
- Installing Docker Compose: Docker Compose is a tool for defining and running multi-container Docker applications. We need it because some apps are split into multiple containers to make them more robust, and Docker Compose serves as the maestro putting it all together. It was already included in the install command above, but running the following does no harm and ensures it is present:
sudo apt install docker-compose-plugin -y
Challenge: Explain every part of this command to yourself :)
Troubleshooting:
Check for Updates: Ensure the Docker installation commands are current. Refer to the official Docker documentation for the latest installation steps on Ubuntu servers.
Further Learning: Digital Ocean offers an excellent guide on installing and using Docker. Additionally, the YouTube channel “Awesome Open Source” has a detailed video on using Docker in conjunction with the Nginx Proxy Manager, which we’ll set up next. You can find that here.
Alright! With these steps, you could containerize your applications and users could theoretically request them with your domain. However, there’s no application to serve yet:
Step 4: Setting Up Nginx Proxy Manager
After all the planning, purchasing, and setup, it’s time to see some tangible results - your own webpages. We’re now going to install and configure the Nginx Proxy Manager, which will manage incoming requests on your server and serve content efficiently.
Nginx is a popular choice for home and web server applications because it’s lightweight, stable, and relatively simple to configure. It also acts as a reverse proxy, functioning as a gatekeeper that receives incoming requests and directs them to the appropriate server resources. This approach reduces security risks since we only need to open a few ports for users to reach this gatekeeper, rather than a separate port for each application. Additionally, Nginx Proxy Manager simplifies the process of encrypting connections with HTTPS and implementing basic exploit protection. So: pretty good performance and security without needing to really know what’s going on. Win!
Let’s start the setup:
- Creating a Directory for Installation: First, we’ll make a directory (“folder”) named “nproxy” for our installation and go there:
mkdir nproxy
mkdir = “make directory”; this is the command-line equivalent of right-clicking and creating a folder on your system.
cd nproxy
cd = “change directory”, used to move into the directory you specify (so you can create, delete, and move files there).
- Preparing the Configuration File: We’ll create a .yml file, which is essentially a recipe for our reverse proxy setup. If your server doesn’t have the text editor nano, install it with sudo apt install nano. Then, open a new file:
nano docker-compose.yml
- 1
- Nano is a simple text editor. There are others, such as Vim; Nano is a good choice here because it is lightweight, relatively easy to use, and we don’t need any special features. .yml (YAML) files are a common configuration format; ours tells Docker Compose which containers to run and how they should communicate. We use one here because the Nginx Proxy Manager is split into several Docker containers. YAML files are sensitive to indentation, so keep the spacing exactly as you see it, or the file won’t do its job.
Copy and paste the following configuration into the file. You might need to update the version to the latest one, which you can check here:
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
- 1
- The version of the Compose file format. “version:” is now obsolete, but it is still in the official setup instructions. There’s no harm in leaving it in.
- 2
- Specifies that we want the latest Docker image of the proxy manager.
- 3
- Let the proxy manager containers run until we explicitly stop them.
- 4
- These are the ports it uses. We have already opened them for incoming traffic, so you can access the proxy manager in your browser.
- 5
- Specifies directories where to store data, and SSL certificates.
Make sure you get the indentation right. Save and exit nano (Ctrl+S to save, Ctrl+X to exit).
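Because YAML is indentation-sensitive, a quick syntax check can save you a debugging round. This is optional and not part of the official setup: if python3 with the PyYAML package happens to be available on your server, you can parse the file before handing it to Docker; `docker compose config` performs an even stricter, Compose-aware check.

```shell
# Parse docker-compose.yml to catch indentation mistakes early
# (assumes python3 and PyYAML are installed; purely optional)
python3 -c 'import yaml; yaml.safe_load(open("docker-compose.yml")); print("YAML OK")'

# Stricter, Compose-aware validation (requires Docker Compose v2):
# docker compose config --quiet && echo "Compose file OK"
```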
- Building from the Configuration: Use Docker Compose to build and deploy from this configuration:
docker compose up -d
- 1
- This command creates and starts Nginx Proxy Manager in Docker containers as specified in the .yml file. The “-d” option will start the containers in the background, “detached” from the current session. If we type more commands in our terminal, it won’t affect the running containers (if we want to, say, stop the containers, we can use their ID).
Once the process completes, check the status of your containers:
docker ps
- 1
- Shows details of all running containers. Should show “up” under STATUS. If you want to see a list of all containers, including stopped ones, add “-a” (all) to the command. There should be no stopped containers on your fresh server yet, so both commands should return the same list.
- Accessing Your Webpage: Now, navigate to
docks.yourdomain.com
. You should be able to access your first basic webpage, which will look something like this:
If you’re unable to access your webpage and encounter a timeout, don’t worry: Namecheap may still be in the process of registering your A-record. This shouldn’t take more than half an hour, though.
- Logging into Nginx Proxy Manager:
Go to docks.yourdomain.com:81 in your browser. This will take you to port 81, which hosts your Nginx Proxy Manager interface. The default login is admin@example.com with the password changeme.
Once logged in, navigate to the Admin panel, located in the right corner of the interface.
Change your username and password to something safe and memorable. Don’t forget to save these new login details in a secure place.
- Setting Up Secure Connection:
For encrypted access to your proxy manager interface, return to Namecheap and add another A-record.
Use the same IP address as before, but instead of “docks”, use a label like “manage-proxy”.
Back on your Nginx Proxy Manager page, go to Proxy Hosts and click “Add Proxy Host”.
In the form, enter your new subdomain (e.g., manage-proxy.yourdomain.com). For the IP address, use 172.17.0.1 (you can verify this by typing ip addr show docker0 in your server console). Set the Forward Port to 81, and enable Block Common Exploits as well as Websockets Support.
Save your settings and test your new subdomain in the browser. If it times out, give Namecheap some time to register this new subdomain.
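If you want just the gateway address rather than the full `ip addr show docker0` output, you can extract it with a small pipeline. A convenience sketch, assuming GNU grep (standard on Ubuntu):

```shell
# Print only the IPv4 address of the docker0 bridge (usually 172.17.0.1).
# The lookbehind (?<=inet ) matches the address right after "inet ".
ip -4 addr show docker0 | grep -oP '(?<=inet )\d+(\.\d+){3}'
```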
- Enabling SSL:
Once your subdomain is accessible, edit the domain entry in your proxy manager.
Go to the SSL tab and select “Request a new SSL Certificate”.
Force SSL, enter your email address, and agree to the terms of service.
SSL certificates are usually valid for 90 days, and the Proxy Manager will renew them automatically. If automatic renewal fails, you’ll receive an email notification when it’s time to renew.
After saving these settings, your subdomain should be accessible securely via HTTPS (e.g.,
https://manage-proxy.yourdomain.com
). HTTP requests will automatically redirect to HTTPS.
Troubleshooting:
A-Record Delays: A-records can take some time to become active. Patience is key.
Firewall Settings: Ensure you’ve allowed outside traffic to the relevant ports in your firewall setup (refer to the sudo ufw allow commands from earlier). Double-check that ports 80/tcp, 81/tcp and 443/tcp are open using sudo ufw status.
Local DNS: Try accessing manage-proxy.yourdomain.com with your mobile phone while not on your local network. If that succeeds but access from your local network does not, there might be an issue with your local DNS or browser cache. Try clearing the cache. Otherwise, the problem may resolve itself after some time, as your local DNS updates.
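To check name resolution from the server itself, you can resolve the name on the command line. `getent hosts` uses the same system resolver as your applications (the hostname below is a placeholder for your own subdomain):

```shell
# Resolve the subdomain with the system resolver; empty output means
# the name does not resolve (yet) from this machine
getent hosts manage-proxy.yourdomain.com
```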
Further Learning: There are many tutorials on YouTube, e.g. from Christian Lempa and Awesome Open Source.
This step got you this far:
Take a moment to appreciate what you’ve accomplished! You’ve set up a server, a domain, and a reverse proxy manager, and secured them with SSL. This foundation should motivate you to power through the final step: installing OpenProject.
Step 5: Installing OpenProject
First, we are going to enter another A-record for our new OpenProject subdomain. For example, you could choose openproject.yourdomain.com. Just repeat the process used for manage-proxy.yourdomain.com, this time with openproject.yourdomain.com, at Namecheap (or wherever you manage your DNS). Use the same settings (Type, Value, TTL) as before. You only need to enter your new subdomain, e.g. openproject, in “Host”. While this new subdomain is being registered, we can move on to installing OpenProject.
There are two ways to install OpenProject in Docker containers: (i) The recommended way is to install and run OpenProject across multiple containers, just like we did for Nginx Proxy Manager. (ii) You can also install OpenProject in a single container. While the one-container way is easier to set up, the multi-container way gives you more security and a more efficient way to update OpenProject in the future. For our purposes, the one-container way may be acceptable: you probably won’t store super confidential data in OpenProject, so even if something happens, the damage is limited. However, why not invest a few minutes more and do it the recommended way? It’s not much more difficult. And if you get stuck or prefer the easier way, I’ll cover that afterwards as well.
For the recommended multi-container way, follow these steps in your server console:
- Clone the OpenProject GitHub repository
The developers of OpenProject kindly maintain a GitHub repository with predefined installation files, such as a docker-compose.yml file. Using git, we can copy those files to our server. If git is not already installed, get it with sudo apt install git. Before pasting the command into your console, make sure version 14 is indeed the latest community version. Check that here.
git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/14 openproject
- 1
- Clones the repository files from the URL into a new folder called “openproject” in your current directory. The “--depth=1” option downloads only the latest state, not the whole history. Change the version in “=stable/14” if needed.
You’ll now have a new folder called “openproject” in your current directory. In this folder, there is another one called “compose”. Move there with:
cd openproject/compose
- Customize the compose files.
We need to make some changes to the generic files so they fit our Nginx Proxy Manager setup. But instead of changing the original files, we create new ones that override the originals where needed. This ensures that we don’t lose these changes if we pull the repository again to upgrade to a new OpenProject version. First, copy the “.env.example” file to a new file called “.env”:
cp .env.example .env
- 1
- cp is the standard copy command, followed by the file you would like to copy, and the name of the new file.
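If cp is new to you, here is a harmless demonstration you can run in any scratch directory (the file names are made up for the example):

```shell
echo "hello" > original.txt   # create a small test file
cp original.txt copy.txt      # copy it under a new name
diff original.txt copy.txt && echo "files are identical"
rm original.txt copy.txt      # clean up
```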
Also copy the original docker-compose.yml file, in case we need it later, as we will make some changes to docker-compose.yml:
cp docker-compose.yml docker-compose.original.yml
- 1
- We give the copy of the original file the name “docker-compose.original.yml”.
Next, we edit the .env file with:
nano .env
You’ll see the following content. Look at the annotations to make the necessary changes.
##
# All environment variables defined here will only apply if you pass them
# to the OpenProject container in docker-compose.yml under x-op-app -> environment.
# For the examples here this is already the case.
#
# Please refer to our documentation to see all possible variables:
# https://www.openproject.org/docs/installation-and-operations/configuration/environment/
#
TAG=14-slim
OPENPROJECT_HTTPS=false
OPENPROJECT_HOST__NAME=localhost
PORT=127.0.0.1:8080
OPENPROJECT_RAILS__RELATIVE__URL__ROOT=
IMAP_ENABLED=false
DATABASE_URL=postgres://postgres:some-long-strong-password@db/openproject?pool=20&encoding=unicode&reconnect=true
RAILS_MIN_THREADS=4
RAILS_MAX_THREADS=16
PGDATA="/var/lib/postgresql/data"
OPDATA="/var/openproject/assets"
POSTGRES_PASSWORD=some-long-strong-password
- 1
- Nothing to change here, but in case you are wondering why we disable HTTPS: it’s only until we set everything up in our proxy manager. After that, we’ll come back and enable HTTPS.
- 2
- Replace “localhost” with the subdomain you want to use for your OpenProject instance, e.g. openproject.yourdomain.com. Must be identical to the entry in Nginx Proxy Manager.
- 3
- Remove the IP address (127.0.0.1:) in front of 8080, so the application port is reachable by the proxy manager. You can change 8080 to another port if you want, e.g. 8085. This is actually quite useful, since 8080 is a common port.
- 4
- Replace “some-long-strong-password” with a strong password. Look below for a command that generates one.
- 5
- Add a new line with “POSTGRES_PASSWORD=” followed by the same password you used above (shown here already as the last line). You can copy and paste it to make sure they are identical.
Save the file with CTRL + O, then press Enter to confirm. Use CTRL + X to exit nano.
In case you want to generate a strong password, you can use this command:
head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo ''
- 1
- Generates a pseudo-random sequence of letters and numbers. Reuse as often as you like. The output will be different every time.
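If you prefer, the same thing can be done with openssl, assuming it is installed (it usually is on Ubuntu):

```shell
# Alternative password generator: 48 random bytes, base64-encoded,
# reduced to letters and digits, trimmed to 32 characters
openssl rand -base64 48 | tr -dc 'A-Za-z0-9' | head -c 32 ; echo ''
```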
- Start your OpenProject instance.
docker compose up -d && docker compose logs -f
- 1
- The “&& docker compose logs -f” part says: show me the logs while OpenProject is building. It’s great for seeing if there’s a problem somewhere. You can stay and watch the running logs (Ctrl+C stops following the logs without stopping the containers), or move on to the next step.
The installation takes about five minutes. That’s enough time to configure our proxy manager.
- Make a new entry in Nginx Proxy Manager.
Go to manage-proxy.yourdomain.com. Add a new proxy host, as we did above. For the domain name, enter exactly what you specified under OPENPROJECT_HOST__NAME in the .env file earlier.
Enter the same IP as above. Use port 8080 (or the port you chose in the .env file).
Enable Block Common Exploits, and Websocket Support. Click save.
It’s time to check if everything worked so far. In your browser, check if you can reach your OpenProject domain (e.g. http://openproject.yourdomain.com). If not, remember that Namecheap needs some time to register the subdomain.
If it is reachable, go back to your Proxy Manager. Edit the openproject entry. Go to the SSL tab and request a certificate like we did above. Enable Force SSL, HTTP/2 Support, and HSTS. Agree to the terms, and save.
For the all-in-one-container way, follow these steps:
- Creating a Directory for OpenProject
sudo mkdir -p /var/lib/openproject/{pgdata,assets}
- 1
- This command creates two directories, “pgdata” and “assets”. These will contain the data of our OpenProject instance, e.g. your projects. The “-p” flag enables the automatic creation of the parent directories (/var/lib/openproject/), so we don’t have to do it manually.
- Building Your OpenProject Instance:
Ensure you replace OPENPROJECT_HOST__NAME with your actual domain and OPENPROJECT_SECRET_KEY_BASE with a randomly generated string (see above for generating such a string). Verify whether version 14 is still the latest OpenProject community version here.
Run the Docker command:
docker run -d -p 8080:80 --name openproject \
-e OPENPROJECT_HOST__NAME=openproject.yourdomain.com \
-e OPENPROJECT_SECRET_KEY_BASE=secret \
-v /var/lib/openproject/pgdata:/var/openproject/pgdata \
-v /var/lib/openproject/assets:/var/openproject/assets \
openproject/community:14
- Setting up the Domain:
- At manage-proxy.yourdomain.com, add a new proxy host for openproject.yourdomain.com, with the IP 172.17.0.1 and port 8080.
- Once reachable, set up SSL certification as previously discussed.
Finalizing Setup:
Navigate to openproject.yourdomain.com and check if it redirects to HTTPS. Log in to OpenProject using the default admin credentials (username and password: admin).
Change these credentials immediately and store them securely.
If you have used the recommended method, you will probably see a yellow warning about an “HTTPS mode setup mismatch” at the bottom of the page. This occurs because we disabled HTTPS during setup but access the site over an HTTPS connection. Let’s enable HTTPS in our .env file. In your server console, type:
nano .env
Change OPENPROJECT_HTTPS=false to OPENPROJECT_HTTPS=true. That’s it. Save and exit. Then run:
docker compose up -d
Troubleshooting:
A-Record Delay: Ensure you’ve given the A-records enough time to activate. You can use tools such as dnschecker to check whether your subdomain has propagated yet.
Firewall Settings: If you are getting 504 timeouts, the most likely reason is that your firewall is blocking the port used. Normally, Nginx Proxy Manager should route incoming requests to the container(s), so you don’t need to open any other ports. Check this by running sudo ufw allow 8080/tcp (or whatever port you used for OpenProject). If the page is reachable after that, the proxy manager can’t communicate properly with the container, and the containerized application is directly reachable through the opened port. You can try fiddling with Docker’s network settings, or use a single docker-compose.yml file for both the Nginx Proxy Manager and OpenProject. However, it can be tedious to get this to work. There’s always the other option: be happy that it works, and know that it will probably never be a problem. All the other security measures are still in place.
Consult Guides: Refer to the official OpenProject installation guide for detailed instructions. Also, consider watching this Awesome Open Source YouTube Tutorial.
Upgrading OpenProject
In the future, you might want to upgrade to a new version. Upgrade the multi-container OpenProject instance with:
docker compose pull
- 1
- This command is supposed to fetch the latest OpenProject Docker image. If it doesn’t, you may need to manually change the version number in the docker-compose.yml file to the new version (e.g. change 13 in image: openproject/community:${TAG:-13} to 14). For consistency, do that in the .env file as well.
docker compose up -d
- 1
- Rebuilds your OpenProject instance, leaving your data intact.
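If you pinned a version via the TAG variable in your .env file (as in the file we edited earlier), the manual version bump can be scripted with sed. A sketch; the version numbers are examples:

```shell
# Bump the pinned tag in .env from 14-slim to 15-slim (sed -i edits in place)
sed -i 's/^TAG=14-slim$/TAG=15-slim/' .env
grep '^TAG=' .env   # verify the change
```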
Upgrade a one-container OpenProject instance like this:
- Pull the new image:
docker pull openproject/community:14
- 1
- This command pulls the specified OpenProject Docker image. Look up the latest version and change 14 accordingly, e.g. to 15.
- Stop and remove the current instance:
docker stop openproject
- 1
- Stops your current OpenProject instance.
docker rm openproject
- 1
- Removes the instance.
- Rerun the setup command with the updated version number. Your data stored in pgdata and assets will remain intact during the update process. Note that this upgrade procedure is a bit less efficient, since everything needs to be rebuilt; with the multi-container setup, only certain parts are upgraded. However, the difference is not huge for personal use cases.
- To free up some space on your VPS, you can remove the old OpenProject image. Type docker image ls to list the images. Locate the old image, then use docker rmi image_id to remove it.
Upgrading your apps is also a good opportunity to upgrade the Ubuntu and Docker software on your server (important for security updates). It’s easy and quick:
sudo apt update
- 1
- That’s the command we used in step 1. It updates the list of available packages.
sudo apt upgrade
- 1
- This will upgrade all installed packages. Sometimes it is advisable to reboot your server afterwards, especially if the kernel has been upgraded (a message will be displayed). Type sudo reboot. It will log you out, so you need to log in again. Then use docker ps -a to list all Docker containers and make sure they are all running again. Sometimes you may need to restart them manually, especially the single-container apps. Use docker restart appname to do this. You can find the app name/ID in the docker ps list.
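Whether a reboot is actually needed can also be checked explicitly: Ubuntu writes a marker file when an update requires a restart. A small helper (the file path is standard on Ubuntu):

```shell
# Ubuntu creates this file when an update (e.g. a new kernel) needs a reboot
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required"
else
    echo "No reboot needed"
fi
```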
Congratulations!
You now have a server linked to your own domain, with OpenProject set up for project management. In my next post, I’ll guide you through installing Nextcloud for file storage and online collaboration. Feel free to reach out with questions or comments.
Thank you for following along!
Giving Back
Every day, smart people develop powerful open source software for us. Often we can use it for free. Consider showing your appreciation and motivating them to keep working on the software.
Nginx Proxy Manager: Buy developer jc21 a coffee here.
OpenProject: You can support OpenProject by upgrading to Premium. However, that is only really useful if you use it heavily and with multiple users.
Other smart people show us how to use open source software. For example, I struggled to make the multi-container OpenProject option work, and Brian McGonagill’s tutorials (Awesome Open Source) really helped. He has a ton of other tutorials for your new server. You can find ways to support him on his linked wiki page.