Hello,
I am attempting to set up an MQTT cluster but I have a couple of problems.
```yaml
version: '3'
resources:
balancers:
  custom:
    service: mqtt
    ports:
      1883: 1883
      9001: 9001
services:
  mqtt:
    image: eclipse-mosquitto
    ports:
      - 1883
      - 9001
    volumeOptions:
      - awsEfs:
          id: "efs-mosquitto-config"
          accessMode: ReadWriteMany
          mountPath: "/mosquitto/config"
      - awsEfs:
          id: "efs-mosquitto-data"
          accessMode: ReadWriteMany
          mountPath: "/mosquitto/data"
      - awsEfs:
          id: "efs-mosquitto-logs"
          accessMode: ReadWriteMany
          mountPath: "/mosquitto/logs"
    scale:
      count: 2
```
Problem 1:
It doesn't appear that the volumes are shared between my two instances. When I `convox run` or `exec` into one pod and create a file (`touch /mosquitto/config/hello`), it doesn't show up in the config folder of the other pod.
Problem 2:
I don't know how to create a health check for my mqtt service running behind the balancer. I don't think the MQTT server has an endpoint that returns 200. Is there a way to disable the health check? Or can I make the health check ping MQTT in some way?
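For reference, a rough sketch of what "pinging" MQTT directly could look like, assuming the Mosquitto client tools (`mosquitto_sub`) are available wherever the check runs. It subscribes to a static `$SYS` topic and uses the exit status as the health signal:

```shell
#!/bin/sh
# Hypothetical MQTT-level liveness probe: try to receive one message from
# the broker's $SYS version topic, giving up after a few seconds.
# -C 1 = exit after receiving one message, -W 3 = timeout in seconds.
if mosquitto_sub -h localhost -p 1883 -t '$SYS/broker/version' -C 1 -W 3 >/dev/null 2>&1; then
    status=up
else
    status=down
fi
echo "broker is $status"
```

Host, port, and topic here are placeholders; adjust to match the broker's listener.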
For anyone interested, I managed to solve this like so.
- Make the health check succeed by creating a custom Dockerfile for mosquitto with a very simple socat HTTP server that checks whether the mosquitto process is running.
- This let the new pods deploy properly (with EFS attached).
mosquitto/Dockerfile
```dockerfile
FROM eclipse-mosquitto

# Install the socat package so we can host a health check endpoint
RUN apk add --no-cache socat

# Copy the startup and health check scripts
COPY startup.sh ./startup.sh
COPY health.sh ./health.sh
RUN chown mosquitto:mosquitto ./startup.sh ./health.sh && \
    chmod +x ./startup.sh ./health.sh

# Copy the mosquitto configuration file to the container
# It will be copied into the mounted /mosquitto/config volume
# when the container is started if there is no existing configuration file
COPY init/mosquitto.conf /mosquitto/init/mosquitto.conf

ARG MOSQUITTO_USERNAME
ARG MOSQUITTO_PASSWORD

# Create the default password file
# It will be copied into the mounted /mosquitto/config volume
# when the container is started if there is no existing password file
RUN touch /mosquitto/init/pwfile
RUN chmod 0700 /mosquitto/init/pwfile
RUN mosquitto_passwd -b /mosquitto/init/pwfile "${MOSQUITTO_USERNAME}" "${MOSQUITTO_PASSWORD}"

# Ensure there's a group with GID 1000 which has EFS access permissions
RUN addgroup -g 1000 appgroup
# Add the mosquitto user (UID 1883) to that group
RUN adduser mosquitto appgroup

# The startup script serves the health check endpoint on port 80
EXPOSE 80
CMD ./startup.sh
```
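Since the credentials arrive as build args, a local build of this image could look something like this (the tag and password value are placeholders):

```sh
docker build \
  --build-arg MOSQUITTO_USERNAME=mosquitto \
  --build-arg MOSQUITTO_PASSWORD=changeme \
  -t mqtt-broker ./mosquitto
```

Note that values consumed via `ARG` in `RUN` steps can be recovered from the image history, so treat the resulting image as containing the password.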
mosquitto/startup.sh
```sh
#!/bin/sh
# If the mosquitto config has not been initialised yet, copy the init files
# into the mounted volume
if [ ! -f /mosquitto/config/initialised ]; then
    echo "Config not initialised"
    cp /mosquitto/init/mosquitto.conf /mosquitto/config/mosquitto.conf
    cp /mosquitto/init/pwfile /mosquitto/config/pwfile
    chmod 0700 /mosquitto/config/pwfile
    chown mosquitto:mosquitto /mosquitto/config/pwfile
    touch /mosquitto/config/initialised
fi

# Start Mosquitto in the background, then run the health check server
# in the foreground as the container's main process
mosquitto -c /mosquitto/config/mosquitto.conf &
exec socat TCP-LISTEN:80,reuseaddr,fork EXEC:./health.sh
```
mosquitto/health.sh
```sh
#!/bin/sh
# Respond with a minimal hand-rolled HTTP message depending on whether
# the mosquitto process is running (printf instead of echo -e for
# POSIX sh portability)
health() {
    if pidof mosquitto > /dev/null; then
        printf 'HTTP/1.1 200 OK\r\n\r\nMQTT Broker is running\n'
    else
        printf 'HTTP/1.1 500 Internal Server Error\r\n\r\nMQTT Broker is down\n'
    fi
}
health
```
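socat simply relays whatever the script writes to stdout back to the HTTP client, so the response is just text in HTTP shape. A small variant of the same check, parameterised by process name so it can be tried outside the container (the process name and the use of `pidof` are assumptions matching the script above):

```shell
#!/bin/sh
# Same idea as health.sh, but the process to look for is an argument,
# which makes the logic easy to exercise against any process name.
health() {
    if pidof "$1" > /dev/null 2>&1; then
        printf 'HTTP/1.1 200 OK\r\n\r\nMQTT Broker is running\n'
    else
        printf 'HTTP/1.1 500 Internal Server Error\r\n\r\nMQTT Broker is down\n'
    fi
}

response="$(health mosquitto)"
echo "$response"
```

Either branch produces a status line the balancer can parse, so a failed check returns a well-formed 500 rather than dropping the connection.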
convox.yml
```yaml
version: '3'
balancers:
  custom:
    service: mqtt
    ports:
      80: 80
      1883: 1883
      9001: 9001
services:
  mqtt:
    build: ./mosquitto
    port: 80
    health: /
    ports:
      - 1883
      - 9001
    volumeOptions:
      - awsEfs:
          id: "mosquitto-config"
          accessMode: ReadWriteMany
          mountPath: "/mosquitto/config"
      - awsEfs:
          id: "mosquitto-data"
          accessMode: ReadWriteMany
          mountPath: "/mosquitto/data"
      - awsEfs:
          id: "mosquitto-log"
          accessMode: ReadWriteMany
          mountPath: "/mosquitto/log"
    scale:
      count: 2
    environment:
      - MOSQUITTO_USERNAME=mosquitto
      - MOSQUITTO_PASSWORD=mosquitto
```
Hey @rhysawilliams2010
Thanks for writing up your solution.
As an additional callout, you can disable service health checks with the option `disable: true`
e.g.
```yaml
services:
  mqtt:
    build: ./mosquitto
    port: 80
    health:
      disable: true
```