Self-hosting HyperDX for fun and profit
HyperDX is a relatively new, but complete, product in the observability space. It supports logs, spans, session replay, dashboards, alerts, and everything else necessary for a complete observability solution. I compared its features to a list of other solutions and HyperDX came out on top.
Quick sidenote: They’re working on a v2. From the website and docs it appears it’ll be only a Next.js application (without server-side processing) that connects to ClickHouse, with fewer features than v1. I talked with the devs about it, and they told me that’s only the current beta state; over time, v2 will have the same features as v1. They’ll publish a roadmap, but I don’t know when. Judging from their GitHub history, almost all of their dev time goes into v2.
Self-hosting on localhost
Their GitHub repo is an excellent starting point. The first step is to check out the code.
git clone https://github.com/hyperdxio/hyperdx.git
cd hyperdx
git checkout dddfbbc31548defe6d73c9e1e2a0d221d94efa72
The checked-out commit is version 1.10.0. That’s the version I’m using. At the time of writing, the commits that followed make deployment a bit more complicated (not on purpose, of course, just incidentally) and don’t add new functionality.
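A quick sanity check that the checkout landed where expected (the tag lookup assumes the release was tagged in the repo):
# confirm you're on the expected commit
git log -1 --format='%H'
# list any release tags pointing at this commit
git tag --points-at HEAD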
After checking it out you can start it with Docker.
docker compose up -d
Then it runs on localhost. The UI is on port 8080, the API on port 8000, and the OpenTelemetry endpoint on port 4318.
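To quickly confirm everything is listening, a few curl checks against those ports are enough. The exact HTTP status codes aren’t guaranteed, but a connection refused means something didn’t start:
curl -s -o /dev/null -w 'UI:   %{http_code}\n' http://localhost:8080
curl -s -o /dev/null -w 'API:  %{http_code}\n' http://localhost:8000
curl -s -o /dev/null -w 'OTLP: %{http_code}\n' http://localhost:4318/v1/logs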
If you run it locally and access the UI via http://localhost:8080, you’re done. Everything works.
But when you access it on a server via its IP, or via a domain pointed to the server’s IP, the UI shows Loading HyperDX indefinitely. The reason is that it tries to access the API via http://localhost:8000.
Self-hosting on a server
From the GitHub repo’s readme:
By default, HyperDX app/api will run on localhost with port 8080/8000. You can change this by updating HYPERDX_APP_** and HYPERDX_API_** variables in the .env file. After making your changes, rebuild images with make build-local.
It doesn’t matter if it’s an IP address or a domain name. I set up a temporary server, and in this post will use its IP address, 159.69.12.166. For my production setup I assigned a domain.
The first step is updating the .env file:
# Used by docker-compose.yml
IMAGE_NAME=ghcr.io/hyperdxio/hyperdx
LOCAL_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-local
LOCAL_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-local
IMAGE_VERSION=1.10.0
# Set up domain URLs
HYPERDX_API_PORT=8000
HYPERDX_API_URL=http://159.69.12.166
HYPERDX_APP_PORT=8080
HYPERDX_APP_URL=http://159.69.12.166
HYPERDX_LOG_LEVEL=debug
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 # port is fixed
Then rebuild the images with make build-local. That takes a while; on the server I used, it took about 10-15 minutes.
Rebuilding the Docker image is required because the server URL is baked in at build time via the Dockerfile, not read from the environment at runtime.
ENV NEXT_PUBLIC_SERVER_URL $SERVER_URL
The command in the Makefile sets the SERVER_URL variable as a build argument.
docker build \
--build-arg CODE_VERSION=${LATEST_VERSION} \
--build-arg OTEL_EXPORTER_OTLP_ENDPOINT=${OTEL_EXPORTER_OTLP_ENDPOINT} \
--build-arg OTEL_SERVICE_NAME=${OTEL_SERVICE_NAME} \
--build-arg PORT=${HYPERDX_APP_PORT} \
--build-arg SERVER_URL=${HYPERDX_API_URL}:${HYPERDX_API_PORT} \
. -f ./packages/app/Dockerfile -t ${IMAGE_NAME}:${LATEST_VERSION}-app --target prod
When the images are built, HyperDX can again be started with docker compose up -d. Now accessing HyperDX via the server’s IP works as expected, and it shows a beautiful setup screen.
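If you want to verify that the rebuilt app image really has the public URL baked in, inspecting the image environment is one way to do it. The exact tag depends on what the Makefile resolved as the latest version; 1.10.0-app is my assumption here.
docker image inspect ghcr.io/hyperdxio/hyperdx:1.10.0-app \
  --format '{{range .Config.Env}}{{println .}}{{end}}' | grep NEXT_PUBLIC_SERVER_URL
# expected for this example server: NEXT_PUBLIC_SERVER_URL=http://159.69.12.166:8000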
Other considerations
Blocking the default MongoDB port
The MongoDB instance that’s started doesn’t require a username or password to log in, and its port is exposed, which means anyone who knows the server’s IP address and the MongoDB port can connect to it.
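You can confirm the exposure on the server itself; a listener bound to 0.0.0.0:27017 means the port is reachable from anywhere (ss ships with iproute2 on most distros):
ss -tlnp | grep 27017
# typically shows docker-proxy listening on 0.0.0.0:27017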
There are actors who continuously scan the internet for open MongoDB instances and delete all their data. On my server, the data was deleted every couple of hours. This led to the loss of user and team information and forced me to repeatedly create new users.
At the recommendation of the HyperDX team, I blocked the port so that it cannot be accessed externally.
Deleting existing iptables rules
The first step is deleting any existing rules for DOCKER-USER. To check if there are any, use the following iptables command.
iptables -L DOCKER-USER -n --line-numbers
Then, delete them with the -D flag.
iptables -D DOCKER-USER <number>
The final output of iptables -L DOCKER-USER -n --line-numbers should only include the RETURN rule.
Chain DOCKER-USER (1 references)
num target prot opt source destination
1 RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Drop all traffic for port 27017
The next step is dropping all traffic for port 27017. To do that, add a rule before the RETURN rule with the -I flag.
iptables -I DOCKER-USER -p tcp --dport 27017 -j DROP
Now the output of iptables -L DOCKER-USER -n --line-numbers should show the new rule at position 1.
Chain DOCKER-USER (1 references)
num target prot opt source destination
1 DROP 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:27017
2 RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Allow traffic from localhost and Docker network
MongoDB should be available from localhost and the Docker subnet.
To find out which subnet the Docker network uses, inspect it with docker network inspect hyperdx_internal.
The relevant part of the output is this:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
In this case, the subnet is 172.18.0.0/16.
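If you’d rather not read through the JSON, the inspect command also takes a Go template; this one-liner prints just the subnet (assuming the network is indeed named hyperdx_internal):
docker network inspect hyperdx_internal --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# prints e.g. 172.18.0.0/16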
The next commands allow traffic from specific sources: the first one from localhost, the second one from the Docker subnet. If your subnet is different, replace 172.18.0.0/16 with the correct one.
iptables -I DOCKER-USER -s 127.0.0.1 -p tcp --dport 27017 -j ACCEPT
iptables -I DOCKER-USER -s 172.18.0.0/16 -p tcp --dport 27017 -j ACCEPT
Now the output of iptables -L DOCKER-USER -n --line-numbers should show the new rules at positions 1 and 2.
Chain DOCKER-USER (1 references)
num target prot opt source destination
1 ACCEPT 6 -- 172.18.0.0/16 0.0.0.0/0 tcp dpt:27017
2 ACCEPT 6 -- 127.0.0.1 0.0.0.0/0 tcp dpt:27017
3 DROP 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:27017
4 RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Rules are evaluated in order, which means that if either of the ACCEPT rules matches, the request is accepted. If neither matches, the request is dropped by the DROP rule. This applies only to port 27017; every other port is handled by the RETURN rule.
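A simple way to confirm the rules behave as intended is a connectivity test with netcat: from a machine outside the server the connection should time out (the packets are dropped, not rejected), while on the server itself it should succeed. This is just a sketch and assumes nc is installed; the IP is the example server from above.
# from an external machine: should time out
nc -zv -w 3 159.69.12.166 27017
# on the server itself: should report the port as open
nc -zv 127.0.0.1 27017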
Save config
The final step is to ensure the iptables rules are persisted and reloaded after rebooting the server.
apt-get install iptables-persistent
netfilter-persistent save
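On Debian and Ubuntu, iptables-persistent writes the IPv4 ruleset to /etc/iptables/rules.v4, so a quick grep confirms the MongoDB rules made it into the saved file:
grep 27017 /etc/iptables/rules.v4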
Data retention and storage
HyperDX sets the data retention period of the ClickHouse tables to one month. ClickHouse makes up most of the data, so that’s the only real storage concern. In my case it needs about 60 GB of space for one month of logs.
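If you’re curious where the space actually goes, ClickHouse’s system.parts table breaks it down per table. The service name ch-server is an assumption about the compose file; check docker compose ps for the actual name.
docker compose exec ch-server clickhouse-client --query "
  SELECT table, formatReadableSize(sum(bytes_on_disk)) AS size
  FROM system.parts
  WHERE active
  GROUP BY table
  ORDER BY sum(bytes_on_disk) DESC"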
The symptom when no space is left on the server is that the MongoDB database crashes, and after restarting it crashes again in less than a minute. Don’t ask me how I know.
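Keeping an eye on free space is cheap insurance. Run from the repo checkout, df shows the disk and du the size of the ClickHouse volume directory:
df -h /
du -sh .volumes/ch_data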
The system is quite efficient, so a relatively cheap server (2 CPUs, 4 GB memory) is enough to handle the traffic from Lighthouse. Instead of upgrading the server, I added a volume and moved the hyperdx/.volumes/ch_data directory to it via a symlink.
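For reference, the move itself is just stop, move, symlink, start. The /mnt/volume mount point is made up for this example; substitute wherever your volume is mounted.
docker compose down
mv .volumes/ch_data /mnt/volume/ch_data   # /mnt/volume is an assumed mount point
ln -s /mnt/volume/ch_data .volumes/ch_data
docker compose up -d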
Final words
HyperDX is a relatively new product in the observability space, so it takes a bit more tinkering to self-host than more established products, and its documentation is not yet as complete.
It seems to me the team focused most of their efforts on building a great product, and they succeeded. After the initial setup phase I had no issues, and their integration with session recordings is exceptional. Being able to see what the user did leading up to an error is incredible.