Article
Setting Up SonarQube the Right Way: A Production-Ready Guide with Docker, PostgreSQL & SSL

Why I Decided to Document This
You know that feeling when you finally get something working in production, and you think "I should write this down before I forget"? That's exactly what happened with my SonarQube setup.
I'd been running code quality checks manually—copying results, generating reports, basically doing everything the hard way. Then management decided we needed a proper code analysis platform. "Just set up SonarQube," they said. "It'll be quick," they said.
Spoiler alert: It wasn't quick. But after battling through documentation, Stack Overflow threads, and a few "why isn't this working" moments, I ended up with a setup I'm actually proud of. Not just something that works, but something that follows proper Linux standards, stays secure, and won't fall apart when you least expect it.
What We're Actually Building Here
Before diving in, let's be clear about what you're getting:
- SonarQube running in a Docker container (no messy manual installations)
- PostgreSQL as the database (because H2 is for demos, not production)
- Nginx as a reverse proxy with SSL/HTTPS (because your security team will thank you)
- Everything organized properly following Linux Filesystem Hierarchy Standard (FHS)
- Health checks and automatic restarts (because 3 AM alerts are no fun)
This isn't a "quick and dirty" setup. This is the setup you can show your senior engineer without feeling embarrassed.
The Prerequisites (Don't Skip This)
You'll need:
- An Ubuntu server (I used an EC2 instance, but any Ubuntu server works)
- A domain name pointing to your server
- Root or sudo access
- SSH access
That's it. No prior Docker expertise or PhD in DevOps required; I'll walk you through everything.
Step 1: Getting Docker Installed (The Foundation)
If you're starting with a fresh Ubuntu instance, Docker isn't there yet. Let's fix that.
The installation is straightforward but has a few steps. We're adding Docker's official repository because the version in Ubuntu's default repos tends to be outdated:
# Update your system first
sudo apt update
sudo apt upgrade -y
# Install dependencies
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker's GPG key (this verifies package authenticity)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add the Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
# Start Docker and make it run on boot
sudo systemctl start docker
sudo systemctl enable docker
Here's the part everyone forgets: adding your user to the docker group so you don't need sudo every time:
sudo usermod -aG docker $USER
Important: Log out and log back in after this command. Otherwise, you'll keep getting permission errors and wonder why nothing works.
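If you'd rather not log out right away, newgrp opens a subshell with the new group membership already applied:
newgrp docker
docker ps  # should now work without sudo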
Now install Docker Compose. Since we're already on Docker's official repository, the cleanest option is the Compose v2 plugin, which provides the docker compose subcommand used throughout the rest of this guide (the old standalone binary download from GitHub has asset names that no longer match a plain $(uname -s), so skip that route):
sudo apt install -y docker-compose-plugin
Verify everything:
docker --version
docker compose version
docker run hello-world
If that last command pulls a test image, runs it, and prints a welcome message, you're golden.
Step 2: Creating the Directory Structure (This Actually Matters)
This is where most tutorials go wrong. They tell you to throw everything in /home or some random directory, and six months later nobody knows where anything is. That's fine for learning, but for production? Terrible.
Linux has a Filesystem Hierarchy Standard (FHS) that defines where things should go, and we're going to follow it. It sounds fancy, but it just means putting files where they belong, which makes your life much easier when you're troubleshooting at 3 AM. Trust me on this:
# Application files go in /opt (optional software)
sudo mkdir -p /opt/sonarqube-stack
# Data that needs to persist goes in /var/lib (variable library files)
sudo mkdir -p /var/lib/sonarqube/{data,extensions}
sudo mkdir -p /var/lib/postgresql/sonarqube
# Configuration goes in /etc (etcetera/configuration)
sudo mkdir -p /etc/sonarqube
# Logs go in /var/log (you'll thank me later)
sudo mkdir -p /var/log/sonarqube
sudo mkdir -p /var/log/postgresql
This isn't me being pedantic: when something breaks, you'll know exactly where to look. You don't want to be hunting through random directories trying to find log files. Everything has its place. Database acting weird? Check /var/log/postgresql. SonarQube crashing? Look in /var/log/sonarqube. Simple.
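A quick sanity check that everything landed where it should:
ls -ld /opt/sonarqube-stack /var/lib/sonarqube/{data,extensions} /var/lib/postgresql/sonarqube /etc/sonarqube /var/log/sonarqube /var/log/postgresql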
Step 3: Setting Permissions (The Part That Causes Most Errors)
Docker containers run as specific user IDs for security. If the permissions are wrong, nothing works, and you'll get cryptic error messages.
Here's what worked for me:
# Your user owns the application directory
sudo chown -R $USER:$USER /opt/sonarqube-stack
# SonarQube runs as UID 1000
sudo chown -R 1000:1000 /var/lib/sonarqube
sudo chown -R 1000:1000 /var/log/sonarqube
# PostgreSQL runs as UID 70
sudo chown -R 70:70 /var/lib/postgresql
sudo chown -R 70:70 /var/log/postgresql
# Configuration stays root-owned but readable
sudo chown -R root:root /etc/sonarqube
sudo chmod 755 /etc/sonarqube
I spent hours debugging permission errors once. Don't be me. Get this right now, and you'll never have to think about it again.
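To verify, ls -ln shows raw numeric IDs, which is handy here because UID 70 usually has no named user on the host:
ls -lnd /var/lib/sonarqube /var/log/sonarqube /var/lib/postgresql /var/log/postgresql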
Step 4: System Configuration (SonarQube's Special Needs)
SonarQube uses Elasticsearch internally, which has some specific kernel parameter requirements. Without these, it won't start:
sudo tee -a /etc/sysctl.conf << EOF
vm.max_map_count=524288
fs.file-max=131072
EOF
# Apply the changes immediately
sudo sysctl -p
I learned about this the hard way after my first SonarQube container kept crashing. These parameters tell the Linux kernel to allocate more memory map areas and file descriptors, things Elasticsearch needs to function properly.
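Double-check that the new values took effect:
sysctl vm.max_map_count fs.file-max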
Step 5: Environment Variables (Keeping Secrets Secret)
Never hardcode passwords in your docker-compose files. Just don't. Create a proper .env file:
sudo nano /etc/sonarqube/.env
Add your configuration (use strong passwords, not these examples):
# Database Configuration
POSTGRES_DB=sonarqube
POSTGRES_USER=sonarqube
POSTGRES_PASSWORD=YourActualStrongPassword123!
# SonarQube Database Connection
SONARQUBE_JDBC_USERNAME=sonarqube
SONARQUBE_JDBC_PASSWORD=YourActualStrongPassword123!
SONARQUBE_JDBC_URL=jdbc:postgresql://postgres:5432/sonarqube
# Security
POSTGRES_HOST_AUTH_METHOD=md5
Secure it properly:
sudo chmod 600 /etc/sonarqube/.env
sudo chown $USER:$USER /etc/sonarqube/.env
The 600 permissions mean only the owner can read/write this file. Nobody else on the system can see your passwords.
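If you need inspiration for those strong passwords, openssl can generate one for you:
# Prints a random 32-character base64 string
openssl rand -base64 24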
Step 6: The Docker Compose Configuration (Where It All Comes Together)
Navigate to your application directory:
cd /opt/sonarqube-stack
# Create a symlink to the config (keeps things clean)
ln -s /etc/sonarqube/.env .env
That symbolic link is a nice touch: it keeps the actual config file in /etc where it belongs, but makes it accessible from your app directory.
Now, here's where you have a choice. I'm giving you two configurations because different scenarios call for different approaches; just pick the one that fits your situation.
The Basic Setup (Simple and Clean)
If you have a dedicated server with decent resources (a t3.medium or larger, 4GB+ RAM), this version is perfect. Create docker-compose.yml:
services:
  postgres:
    image: postgres:15-alpine
    container_name: sonarqube-postgres
    restart: unless-stopped
    env_file:
      - .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST_AUTH_METHOD: ${POSTGRES_HOST_AUTH_METHOD}
    volumes:
      - /var/lib/postgresql/sonarqube:/var/lib/postgresql/data
      - /var/log/postgresql:/var/log/postgresql
    networks:
      - sonarnet
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 30s
      timeout: 10s
      retries: 3

  sonarqube:
    image: sonarqube:10-community
    container_name: sonarqube-app
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    env_file:
      - .env
    environment:
      SONAR_JDBC_URL: ${SONARQUBE_JDBC_URL}
      SONAR_JDBC_USERNAME: ${SONARQUBE_JDBC_USERNAME}
      SONAR_JDBC_PASSWORD: ${SONARQUBE_JDBC_PASSWORD}
    volumes:
      - /var/lib/sonarqube/data:/opt/sonarqube/data
      - /var/lib/sonarqube/extensions:/opt/sonarqube/extensions
      - /var/log/sonarqube:/opt/sonarqube/logs
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9000/api/system/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 90s

networks:
  sonarnet:
    driver: bridge
This configuration includes health checks, proper volume mounting, and dependency management. The depends_on with service_healthy means SonarQube won't even try to start until PostgreSQL is actually ready to accept connections (not just running, but ready).
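Once the stack is up (Step 7), you can confirm the health checks are actually passing:
docker inspect --format '{{.State.Health.Status}}' sonarqube-postgres sonarqube-app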
The Production Setup (With Resource Limits)
If you're on a smaller instance (t3.small, t3.micro) or running multiple services, you want this version with explicit resource limits:
services:
  postgres:
    image: postgres:15-alpine
    container_name: sonarqube-postgres
    restart: unless-stopped
    env_file:
      - .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST_AUTH_METHOD: ${POSTGRES_HOST_AUTH_METHOD}
    volumes:
      - /var/lib/postgresql/sonarqube:/var/lib/postgresql/data
      - /var/log/postgresql:/var/log/postgresql
    networks:
      - sonarnet
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.5'

  sonarqube:
    image: sonarqube:10-community
    container_name: sonarqube-app
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    env_file:
      - .env
    environment:
      SONAR_JDBC_URL: ${SONARQUBE_JDBC_URL}
      SONAR_JDBC_USERNAME: ${SONARQUBE_JDBC_USERNAME}
      SONAR_JDBC_PASSWORD: ${SONARQUBE_JDBC_PASSWORD}
    volumes:
      - /var/lib/sonarqube/data:/opt/sonarqube/data
      - /var/lib/sonarqube/extensions:/opt/sonarqube/extensions
      - /var/log/sonarqube:/opt/sonarqube/logs
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9000/api/system/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 90s
    deploy:
      resources:
        limits:
          memory: 3G
          cpus: '2.0'
        reservations:
          memory: 2G
          cpus: '1.0'

networks:
  sonarnet:
    driver: bridge
The difference? Resource limits. PostgreSQL can't use more than 512MB of RAM, and SonarQube is capped at 3GB. This prevents one container from consuming all available memory and bringing down your entire server.
I learned this lesson when SonarQube decided to use 6GB of RAM during a large analysis, causing my monitoring alerts to go crazy.
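To watch usage against those caps in real time, docker stats does the job:
# One-shot snapshot of memory/CPU usage per container
docker stats --no-stream sonarqube-app sonarqube-postgres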
Important Note:
Running out of memory?
If you're on a smaller instance (2GB RAM), you'll want the production setup with resource limits. Also consider setting up swap space:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Make it permanent by adding this line to /etc/fstab:
/swapfile none swap sw 0 0
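Then confirm the swap is active:
# Both should show the 2G swapfile
swapon --show
free -h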
Step 7: Starting Everything Up
This is the moment of truth:
cd /opt/sonarqube-stack
docker compose up -d
The -d flag runs everything in detached mode (background). Now check the status:
docker compose ps
You should see both containers running. If not, check the logs:
docker compose logs -f
SonarQube takes a few minutes to fully initialize. Grab a coffee, check Slack, and be patient. You'll see various startup messages, and eventually it'll settle down.
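If you'd rather not stare at logs, you can poll the status API until it reports UP (it returns STARTING while the app warms up):
# Poll every 10 seconds until SonarQube is ready
until curl -s http://localhost:9000/api/system/status | grep -q '"status":"UP"'; do
  echo "Still starting..."
  sleep 10
done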
Test if everything's working:
# Check PostgreSQL
docker exec sonarqube-postgres psql -U sonarqube -d sonarqube -c "SELECT version();"
# Check SonarQube
curl http://localhost:9000
If both return something sensible, you're in business.
Step 8: Setting Up Nginx (Making It Accessible)
Install Nginx:
sudo apt update
sudo apt install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx
# Remove the default config
sudo rm -f /etc/nginx/sites-enabled/default
Create your SonarQube configuration:
sudo nano /etc/nginx/sites-available/sonarqube
Add this configuration (replace your-domain.com with your actual domain):
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Enable the site:
sudo ln -s /etc/nginx/sites-available/sonarqube /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
The nginx -t command tests your configuration before applying it. Always run it; it'll save you from breaking your web server. Now you can access SonarQube via your domain name instead of remembering port 9000. Boom!
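One optional tweak worth knowing about: Nginx's default 1 MB request-body limit can reject large scanner report uploads with a 413 error. If you ever hit that, raise the limit (64m here is an arbitrary but comfortable ceiling):
# Add inside the server { } block of /etc/nginx/sites-available/sonarqube
client_max_body_size 64m;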
Step 9: Getting That Sweet SSL Certificate
Here's where it gets beautiful. Certbot automates the entire SSL setup. This is easier than you think:
# Install certbot
sudo apt install certbot python3-certbot-nginx -y
# Get your certificate (replace your-domain.com)
sudo certbot --nginx -d your-domain.com
Follow the prompts. Certbot will:
- Verify you own the domain
- Generate an SSL certificate
- Automatically update your Nginx configuration
- Set up HTTPS redirect
- Configure automatic renewal
It's genuinely impressive how much certbot handles for you. Remember when SSL certificates cost money and required manual installation? We've come a long way.
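If you want proof that the auto-renewal is wired up correctly, certbot has a dry-run mode:
sudo certbot renew --dry-run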
Step 10: Securing Everything (Optional but Recommended)
Set up the firewall:
# Allow web traffic
sudo ufw allow 'Nginx Full'
sudo ufw allow ssh
sudo ufw --force enable
# Verify
sudo ufw status
This allows HTTP (80), HTTPS (443), and SSH (22) while blocking everything else. Simple, effective security.
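One caveat worth flagging: Docker publishes ports by writing iptables rules directly, so UFW does not actually block the "9000:9000" mapping from the compose file. Since Nginx is proxying everything anyway, you can bind SonarQube's port to loopback in docker-compose.yml so it's only reachable locally:
ports:
  - "127.0.0.1:9000:9000"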
Accessing Your SonarQube Instance
Open your browser and navigate to https://your-domain.com. You should see the SonarQube login page.
Default credentials:
- Username: admin
- Password: admin
Change this immediately. SonarQube will force you to on first login anyway.
If you see the login page with a green padlock in your browser, congratulations. You just deployed production-grade infrastructure.
Where to Go From Here
Once you have SonarQube running, the next steps are:
- Integrate it with your CI/CD pipeline
- Configure quality gates and rules
- Set up project analysis
- Customize quality profiles for your tech stack
But that's a topic for another article.
Final Thoughts
This setup has been running in production for months now without issues. It's stable, secure, and maintainable. Most importantly, it's something you can hand off to another developer, and they'll understand what's going on. Setting up infrastructure properly takes more time upfront, but it pays off every single day afterward.
Is it perfect? No. Can it be improved? Absolutely. But it's solid, maintainable, and it won't explode at 3 AM. No scrambling to find where logs are stored. No permission errors after server restarts.
Just a solid, working system that does its job.
GitHub Repository
You can find the complete configuration files, scripts, and detailed documentation for this SonarQube setup in my GitHub repository:
Repository: SonarQube Production Level Setup
Feel free to fork it, star it if you find it helpful, and open issues if you run into problems. I'm always open to improvements and suggestions.
"The best time to fix technical debt was yesterday. The second best time is now.".


