<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Dominik Weber</title><description>I develop products, share what I learn and how I think about things.</description><link>https://weberdominik.com/</link><item><title>Lighthouse update February 23rd</title><link>https://weberdominik.com/blog/lighthouse-update-2026-02-23/</link><guid isPermaLink="true">https://weberdominik.com/blog/lighthouse-update-2026-02-23/</guid><pubDate>Mon, 23 Feb 2026 00:00:00 GMT</pubDate><content:encoded>During the past week a couple of nice improvements happened.

**Finally implemented a 2 week trial without requiring a credit card**

Every user now gets the trial by default. This is a nice improvement because, from what I can observe, in B2C most people want to test a product before entering their credit card. It was also a good step toward a better first product experience.

**Finished the website to feed feature**

The last remaining task was automatically finding items. When you enter a website, Lighthouse checks it and tries to find relevant items. If items are found, they are highlighted and their selectors are added, without users having to do anything.

**Updated blogroll editor**

This is a small free tool on the Lighthouse website. It&apos;s for creating collections of feeds, websites, and newsletters. For a long time I&apos;ve wanted to create collections for specific areas, for example company engineering blogs, AI labs, the JavaScript ecosystem, and so on. The reworked blogroll editor makes that much simpler to do.

## Next steps

An issue that became important is feed URLs being behind bot protection. It doesn&apos;t really make sense for feeds to be configured that way, because feed URLs are designed to be accessed by bots, but in some cases it may be difficult to configure properly. This affects only a small number of feeds, but it&apos;s enough to be noticeable, and it prevents people from moving to Lighthouse from other services. Consequently, one of the next tasks is to fix this.

Besides that, the first user experience remains an ongoing area of improvement. I have a couple of ideas on how to make it better, and will keep working on it.</content:encoded></item><item><title>AI made coding more enjoyable</title><link>https://weberdominik.com/blog/ai-coding-enjoyable/</link><guid isPermaLink="true">https://weberdominik.com/blog/ai-coding-enjoyable/</guid><pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate><content:encoded>To me, one of the most annoying parts of software engineering is writing code that doesn&apos;t require thinking. It&apos;s just a typing exercise, and that&apos;s boring.

That includes code outside of the happy path, like error handling and input validation. It also includes other typing exercises, like processing an entity with 10 different types where each type must be handled separately, or propagating one property through 5 different types across multiple layers.

Writing tests is another use-case where AI takes over the tedious part. I design the architecture so the code is testable, then write the first test so the AI knows how tests should be written and which cases should be covered. After that I tell the AI each test case and it writes them for me.

The only thing where I don&apos;t trust it yet is when code must be copy-pasted. I can&apos;t tell whether it actually cuts and pastes the code, or whether the LLM brain is in between. In the latter case there may be tiny errors that I&apos;d never find, so I&apos;m not doing that. But maybe I&apos;m paranoid.

In any case, this is incredible. In the past years I&apos;ve been handed tools that do the most tedious tasks of software engineering for me. And I love it.</content:encoded></item><item><title>We should talk about LLMs, not AI</title><link>https://weberdominik.com/blog/llms-not-ai/</link><guid isPermaLink="true">https://weberdominik.com/blog/llms-not-ai/</guid><pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate><content:encoded>Currently, every conversation that mentions AI actually refers to LLMs. It&apos;s not wrong, LLMs are part of AI after all, but AI is so much more than LLMs. The field of artificial intelligence has existed for decades, not just the past couple of years where LLMs got big.

So saying the word “AI” is actually highly unspecific. And in a few years, when the next breakthrough in AI arrives, we&apos;ll all refer to that when we say “AI”.</content:encoded></item><item><title>Lighthouse update February 16th</title><link>https://weberdominik.com/blog/lighthouse-update-2026-02-16/</link><guid isPermaLink="true">https://weberdominik.com/blog/lighthouse-update-2026-02-16/</guid><pubDate>Mon, 16 Feb 2026 00:00:00 GMT</pubDate><content:encoded>## Website to feed

The past week brought first and foremost one improvement: website-to-feed conversion. It enables users to subscribe to websites that don&apos;t provide an RSS feed.

This feature consists of multiple areas. The backbone is extracting items from a website based on CSS selectors, and then putting those items through the same pipeline as items of an RSS feed: extracting full content, calculating reading time, creating a summary, creating an about sentence, interpreting language and topic, and so on.
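
As a rough illustration of that backbone, here is a minimal sketch. This is not Lighthouse&apos;s actual code; it assumes items are marked by a single CSS class and uses only Python&apos;s standard library (a real implementation would handle full selectors, links, attributes, and messier markup):

```python
from html.parser import HTMLParser

class ItemExtractor(HTMLParser):
    """Collect the text of every element carrying a given class.

    A heavily simplified stand-in for selector-based item extraction.
    """

    def __init__(self, cls):
        super().__init__()
        self.cls = cls
        self.depth = 0   # greater than 0 while inside a matching element
        self.items = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.depth or self.cls in classes:
            self.depth += 1
            if self.depth == 1:
                self.items.append("")  # a new item starts here

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.items[-1] += data.strip()

parser = ItemExtractor("post")
parser.feed('<ul><li class="post">First</li><li class="post">Second</li></ul>')
print(parser.items)  # ['First', 'Second']
```

Each extracted item would then enter the same enrichment pipeline as a regular feed item.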

Additional areas are all about making the feature easier to use. Showing the website and letting users select items simplifies it; for many websites it&apos;s not necessary to even know about the selectors. This also required some heuristics about which elements to select and how to find the repeating items from just one selection.

The user experience can always be improved, but I think as it is right now it&apos;s already quite decent.

The next step for this feature is to automatically find the relevant items, without the user having to select anything.

## Next steps

An ongoing thing is the first user experience. It&apos;s not where I want it to be, but honestly it&apos;s difficult to know or imagine how it should be. One issue that came up repeatedly is the premium trial: users don&apos;t want to provide their credit card just to start the trial. That&apos;s fair. Though Paddle, the payment system Lighthouse uses, doesn&apos;t provide another option. They have one in private beta, but unfortunately I didn&apos;t get invited. So I&apos;m going to bite the bullet and implement this myself. It won&apos;t be as polished as if Paddle handled it, but at least users will get the premium experience for 2 weeks after signup.
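
For illustration, the trial check could be as simple as the sketch below. The field names (`signed_up_at`, `subscribed`) are made up for this example, not Lighthouse&apos;s actual schema:

```python
from datetime import datetime, timedelta, timezone

TRIAL_LENGTH = timedelta(days=14)

def has_premium(user, now=None):
    # Hypothetical fields: signed_up_at (timezone-aware datetime), subscribed (bool).
    # Every account is premium for the first 2 weeks after signup,
    # or indefinitely once a paid subscription exists.
    now = now or datetime.now(timezone.utc)
    remaining = TRIAL_LENGTH - (now - user["signed_up_at"])
    return user["subscribed"] or remaining > timedelta(0)

user = {"signed_up_at": datetime.now(timezone.utc) - timedelta(days=3), "subscribed": False}
print(has_premium(user))  # True: still inside the 2-week trial
```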

An improvement I&apos;ve had my eyes on for some time is using the HTML of RSS feed items for the preview. Lighthouse attempts to parse the full content for all items, but that&apos;s not always possible: if websites disallow it via robots.txt, or block it via bot protection, Lighthouse doesn&apos;t get the content. In these cases it shows that access was blocked. But if the feed itself contains some content, that could be displayed instead. Feeds usually don&apos;t contain the full content, but it&apos;s at least something.
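
The fallback order could look roughly like this sketch; the field names (`content_encoded`, `description`) are assumptions for illustration, not Lighthouse&apos;s data model:

```python
def preview_html(item, fetched_full_content=None):
    # Preference order: full content extracted from the article page if the
    # fetch succeeded, then the feed item content itself, then the plain
    # description. Only when all of those are missing do we show the notice.
    if fetched_full_content:
        return fetched_full_content
    return (
        item.get("content_encoded")
        or item.get("description")
        or "Access to the full content was blocked."
    )

item = {"description": "A short teaser from the feed."}
print(preview_html(item))  # falls back to the description from the feed
```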

One more thing I wanted to do for a long time, and can finally make time for, is creating collections of feeds for specific topics. For example &quot;Frontier AI labs&quot;, &quot;Company engineering blogs&quot;, &quot;JS ecosystem&quot;, and so on. The [blogroll editor](https://lighthouseapp.io/tools/blogroll-editor) is the basis for that. It lets you create a collection of websites and feeds, and export OPML from that. I&apos;m going to improve its UX a bit and then start creating these collections.</content:encoded></item><item><title>Lighthouse update February 9th</title><link>https://weberdominik.com/blog/lighthouse-update-2026-02-09/</link><guid isPermaLink="true">https://weberdominik.com/blog/lighthouse-update-2026-02-09/</guid><pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate><content:encoded>During the past week I finished the most important onboarding improvements. For new users it&apos;s now easier to get into Lighthouse. The biggest updates were

- An onboarding email drip which explains the features of Lighthouse
- Feed subscribe changes, now showing a suggestion list of topics and curated feeds, and a search for websites and feeds to subscribe to

The next step became clear after talking to users and potential customers.

The insight was that even if the structure and features of Lighthouse are much better for content curation, it doesn&apos;t matter if not all relevant content can be pulled into Lighthouse. This means first and foremost websites that don&apos;t have a feed or newsletter.

So the next feature will be website-to-feed conversion, so that websites can be subscribed to even if they don&apos;t have a feed or newsletter.

## Pricing

Big parts of the indie business community give the advice to charge more. &quot;You&apos;re not charging enough, charge more&quot; is generic and relatively popular advice. I stopped frequenting these (online) places as much, so I&apos;m not sure they give the same advice in the current environment, but for a long time I read it a lot.

I&apos;m sure in some areas this holds true, but I&apos;ve since realized that the content aggregator space is different. It&apos;s a relatively sticky type of product; people don&apos;t like to switch. Even if OPML exports and imports make it easy to move feeds, additional custom features like newsletter subscriptions, rule setups, tags, and so on make it harder to move.

So people rightfully place a risk premium on smaller products. Pricing it close to the big ones is too high, and I now consider this a mistake. So I&apos;m lowering the price from 10€ to 7€ for the premium plan.

Another issue is the 3-part pricing structure. Everyone does it because the big companies do. And maybe at this point the big companies do it because &quot;it&apos;s always been done that way&quot;. But as a small company I don&apos;t yet know where the lines are, which features are important to which customer segment. Therefore I&apos;ll remove the 2nd paid plan, leaving only a free plan and a single paid plan.

I&apos;m worried that the pricing changes are seen as erratic, but honestly too few people care yet for this worry to be warranted or important.

What I find interesting is that I&apos;m much more confident on the product side than on the business side. On the one hand this is clear, because I&apos;m a software engineer. But on the other hand I believe it&apos;s also because (software) products are additive, in the sense that features can always be added, whereas a product only ever has one price. The more time I have, the more features I can add, so the only decision is what to do first. With pricing, it doesn&apos;t matter how much time I have: I must always choose one price over another. It doesn&apos;t really have a consequence, but I found it an interesting meta-thought.</content:encoded></item><item><title>Lighthouse update February 2nd</title><link>https://weberdominik.com/blog/lighthouse-update-2026-02-02/</link><guid isPermaLink="true">https://weberdominik.com/blog/lighthouse-update-2026-02-02/</guid><pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate><content:encoded>The current state of Lighthouse is that the website gets a decent amount of views, and a good share of those visitors also create an account. However, of those who create an account only a tiny fraction start using the product.

This points to an onboarding issue. That people don&apos;t know what to do with the product after they signed up.

Lighthouse is a powerful product. Particularly the rule system can do _a lot_. But it&apos;s worthless if users don&apos;t get to that point.

To improve that I&apos;m going to work on a better onboarding flow and an after-signup email drip campaign.

The after-signup email drip campaign will be a series of emails, each explaining a different aspect of Lighthouse. This serves two purposes. First, it shows what Lighthouse can do and how to use it. And second, it reminds people that the product exists. I had this before, but when Sendgrid changed their pricing I stopped it and didn&apos;t reimplement it with another system. That may have been a mistake.

The onboarding flow changes I have planned are to reduce the explanation steps (now handled via the emails) and to show a discovery page with search immediately after signup. The idea is that this makes the product easier to explore.

Until now I didn&apos;t implement a typical website search because users can just enter the website and Lighthouse finds the feed automatically. But looking at popular products in the space, they all have a search mechanism.

The new user experience has been a blind spot for me. Now I&apos;m working to remedy that.</content:encoded></item><item><title>Learning photography: composition</title><link>https://weberdominik.com/blog/learning-photography-composition/</link><guid isPermaLink="true">https://weberdominik.com/blog/learning-photography-composition/</guid><pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate><content:encoded>I&apos;ve been taking pictures with my smartphone for years, as I imagine most smartphone owners do. When I see something picture-worthy, I pull it out and take a shot. But I never seem to be able to do the scenes justice. The pictures always fall flat. What I found so captivating when seeing it with my own eyes is nowhere to be seen in the picture.

I wanted to learn how to take pictures that capture what I like about the scenes. So, to improve my photography skills, I decided to learn the foundations.

The books I read covered a wide array of topics: composition, lighting, technical knowledge (e.g. lenses, exposure), how to recognize picture-worthy scenes, and much more. It was great to read about all that, and it showed me many areas to learn and improve.

But as an amateur photographer it&apos;s too much to focus on at once. I have to take my time and practice step by step.

Composition, learning how to arrange the elements, appeared to be the foundation. Other topics, like lighting, build on top of composition. That is why I focused my efforts on improving my composition skills before anything else.

## Composition

Composition is a catchall term that covers a bunch of techniques and guidelines. The ones I learned about are leading lines, framing, foreground interest, symmetry, landscape vs portrait, rule of thirds, and visual weight.

It&apos;s possible to take great pictures without those guidelines, or by breaking them, but as a beginner they helped me improve my pictures. And knowing about them sharpened my eye.

I classify them into 2 areas:

- Creating balance in the picture: symmetry, rule of thirds, visual weight
- Guiding the eye of the viewer: leading lines, framing, foreground interest, landscape vs portrait

They can be arbitrarily combined. It&apos;s not necessary to use all of them, and using more of them doesn&apos;t guarantee a great result. It takes experimentation and experience. For me, this knowledge combined with intuition and experimentation (taking many pictures of the same subject) creates the best results. And I assume (hope) that over time the accumulated experience will allow me to take better pictures with fewer takes.

## Guiding the eye

Techniques for guiding the eye are about connecting the viewer with the picture.

My pictures often appeared lifeless, and the primary reason was that I photographed the main subject without paying attention to the surroundings. The result was that the images had nothing to draw the viewer in, nothing to connect the viewer with the subject. Techniques that guide the eye to the subject fix that.

### Foreground interest

When photographing a subject that&apos;s large or far away, images often feel disconnected and lose their sense of depth. Objects in the foreground can counteract that effect. They are a stepping stone into the image, towards the subject.

This is particularly important for landscape photography. It&apos;s easy to focus on the big picture, on the landscape. But a good picture also needs something close by to connect the viewer to the subject or landscape.

![](./IMG_0873.jpeg)

In this picture, the subject is the Arco da Rua Augusta (the building). The bucket and the bag with the red jacket were happy accidents (I didn&apos;t know they were there, didn&apos;t pay attention to them). Without them, the foreground would be empty, and there would be nothing to connect the viewer to the subject further back.

As a side note, cropping the image, shaving some off the top, would probably further improve it.

### Framing

Framing is surrounding (parts of) the picture with something that resembles a frame. It can be anything, a window, a door frame, or any other kind of opening. The frame calls attention to the part of the picture that&apos;s inside.

![](./IMG_0933.jpeg)

In this picture the scenery is framed by the stone wall around it.

Framing can have a stronger impact if there is a specific subject that&apos;s worthy of framing, and if it frames only part of the picture.

Here, the frame covers the whole picture. I still like it, but mostly because of the symmetry (I&apos;ll reuse the picture later in this article).

All the framed pictures I have cover the full image. More complex scenes, where only part of the image is framed and the other part is still interesting enough to be in the picture, are much more difficult to spot.

### Leading lines

Leading lines are any objects that create some sort of line. They help draw viewers into the picture by guiding the eye to a specific part, often the subject.

![](./IMG_1017.jpg)

In this picture I used the bridge as a leading line going to the colorful buildings in the back. Without the bridge, it would be a picture of random buildings in the background.

### Landscape vs portrait

I was never quite sure which orientation to take my pictures in. I mostly decided that based on the picture and how the scene would fit in it.

Now I know that landscape pictures encourage the eye to move from side to side, while portrait pictures encourage up and down movement. It&apos;s best to choose the orientation that matches the flow of the subject, or the dominant lines in the image.

For example, in the image above (Leading lines section), the dominant line is the bridge. At first it goes straight into the image, but the more important part is when it turns right and becomes horizontal. This is why I shot it in landscape.

In the image below, the eye should quite clearly move from the street in the front to the hill and building in the back. That&apos;s why this picture is in portrait mode.

![](./IMG_0938.jpeg)

## Creating balance

Balance in a photo creates harmony and evokes an emotional response. If the viewer is connected with the image (through the aforementioned techniques), balance is what causes captivation.

I imagine there are many more ways to evoke an emotional response and create great pictures. But for me, currently, harmony is what I&apos;m after in my photos.

### Symmetry

There is beauty in symmetry, and we are instinctively drawn to it. The same in pictures. However, if it&apos;s too symmetric, it can feel a bit eerie, a bit too perfect. Slight imperfections make it interesting.

![](./IMG_0933.jpeg)

In this picture I like the frame (as mentioned before) but also the symmetric aspect of the man standing on the left and the stand on the right. They create a balance, without perfectly mirroring each other.

### Rule of thirds

This is probably the most well-known technique. Pretty much every camera and camera app has a setting to show lines at 1/3rd and 2/3rd of the width and height.

The rule of thirds is about putting the subject at one of the points where the lines intersect, so that the focal point of the image sits about a third of the width or height in from the edge.

Putting the subject in the center can be too boring, too predictable. Moving it off-center can make the picture more exciting while still keeping it balanced.
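
The four intersection points are simple arithmetic. A throwaway sketch, just to make the grid concrete:

```python
def power_points(width, height):
    # The four intersections of the thirds grid, as (x, y) coordinates
    # measured from the top-left corner of the image.
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

# For a 3000x2000 photo, these are the four candidate subject positions.
print(power_points(3000, 2000))
```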

### Visual weight

The visual weight is how strong an element appears in the image. For a balanced picture, the size of the elements is less important than their visual weight. If one element looks heavier than the others, it should take less space in the picture to balance it out.

Finding the correct relationship between the elements is complicated. I haven&apos;t read any specific tips, besides trusting yourself when taking and viewing the picture, and adjusting based on feeling.

![](./IMG_1063.jpeg)

![](./IMG_1064.jpeg)

I think it&apos;s obvious why I like these pictures. The sky is breathtaking. What I find interesting though is that on my phone, the first picture looked better. There the balance of the bright sky and dark foreground was good. But when looking at them on my laptop (with a bigger screen), I find the second one better. I&apos;m not sure what to make of that, maybe the size of the image changes the visual balance?

## Final words

Photos are good if they create an emotional response. As I paid more attention to the mentioned techniques, it became easier to create such pictures, but they&apos;re by no means a requirement. The books I read mentioned that it&apos;s also possible to take great pictures by throwing all rules out the window. It&apos;s much more difficult though, it takes a trained eye which I, as an amateur, don&apos;t have. So for now I&apos;ll use these rules when taking pictures.

And I am happy that I finally decided to read photography books. With that little investment my pictures are 10x better than before.</content:encoded></item><item><title>Thoughts on using synthetic users for product development</title><link>https://weberdominik.com/blog/thoughts-on-synthetic-users-product-development/</link><guid isPermaLink="true">https://weberdominik.com/blog/thoughts-on-synthetic-users-product-development/</guid><pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate><content:encoded>## TLDR

LLMs can provide information about how specific user groups behaved in the past. Combining a product idea with that can sharpen the understanding of how the product may help the user. It can give feature ideas on how to make the product more complete for the user, and help with understanding which features are not as important. Best for development of MVPs, but can also be useful for existing products.

---

Synthetic users are AI-generated personas. They can be asked questions and respond with an approximation of what a real user would say. They come from the UX research space, but while reading [this article](https://www.nngroup.com/articles/synthetic-users/) I had the idea that they may be used for product research and development.

The strengths of LLMs are their knowledge and their ease of interaction. They encode the knowledge of the world, and they are easy to chat with.

This means I can give it a persona (e.g. engineering manager in a software company) and tell it to answer questions as this person would. LLMs have a tendency to revert to the mean, to be average. That tendency actually helps here, because I get the typical behavior of that persona. When talking to real people I&apos;d have to talk to quite a few to get that kind of understanding. On the flip side, LLMs are incapable of giving specific responses (e.g. an engineering manager with a 4-person team at Google).

So, let&apos;s say I have an idea for a product, and a rough idea about who may be interested in it. I can then create a synthetic user (an AI persona) and ask about their typical day, their tasks, their workflows, and so on. I can find out how an average person does things, and may even ask for multiple ways to achieve the same goal. This is purely information gathering, which LLMs are good at.

Based on that information I can judge how the product can help and which features may be useful. Then I can go deeper into these areas and get more details about the specific tasks the product may help with. Using that process I can define (a minimal version of) a product or feature that is more grounded in reality than it would be otherwise.

The key is to stay clear of any kind of value judgments and future predictions. Don&apos;t ask it if a task is annoying, how important it is, how much time it takes, or how the feature and product would change the behavior or workflow. This is where LLMs are even worse than humans.

The core message of the book &quot;The Mom Test&quot; is to ask users about their behavior, what they did in the past, and never to ask users to predict how they would behave. This is even more critical with LLMs. They will tell you what you want to hear, but by staying factual, by asking how user groups behaved, they may have value.
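
As a sketch, the persona and the behavior-only constraint can be combined in a single system prompt. Everything here is illustrative (the persona text, the question, the function name), and no particular LLM API is assumed, only the common chat message shape:

```python
def persona_messages(persona, question):
    # Build a chat-style message list for a synthetic-user interview.
    # The system prompt pins the model to past, factual behavior,
    # following the advice from "The Mom Test".
    system = (
        f"You are a typical {persona}. Answer as that person would, "
        "describing only what you actually did in the past. "
        "Do not predict future behavior, do not rate importance, "
        "and do not give opinions on hypothetical products."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = persona_messages(
    "engineering manager in a software company",
    "Walk me through how you prepared your last sprint planning.",
)
```

The resulting list can be handed to whichever chat model you use; the important part is that every question stays about observed past behavior.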

The use-case I&apos;m thinking of for this type of interaction is a software engineer having a product idea. Maybe a side project, maybe something that should become a business. In any case, they want others to use the product. Often they would not do any user research before starting development. By using LLMs it&apos;s possible to get information quickly which may help to refine the idea before writing the first line of code.

Note: It goes without saying that contact with real people, either through interviews or selling the product, is at some point necessary to verify if the information holds true. But that&apos;s a much higher investment, one that isn&apos;t warranted every time before starting development.</content:encoded></item><item><title>Rules for creating good-looking user interfaces, from a developer</title><link>https://weberdominik.com/blog/rules-user-interfaces/</link><guid isPermaLink="true">https://weberdominik.com/blog/rules-user-interfaces/</guid><pubDate>Tue, 16 Sep 2025 00:00:00 GMT</pubDate><content:encoded>Creating good-looking user interfaces has always been a struggle for me. If you&apos;re in the same camp, this might help.

I recently redesigned [Lighthouse](https://lighthouseapp.io/), and during that process built a system that helped me create much better designs than I ever did before.

This system is about achieving the best possible design with the least amount of effort. There&apos;s no need to know about the psychological impact of colors, which fonts are best for which purpose, golden ratios, etc. This is expert-level design knowledge that is just distracting if you&apos;re not on that level. **The key is to focus on the few important aspects, and not try to optimize every tiny detail.**

## Hallmarks of bad design

For a long time I could tell when a design was good and that my own designs were bad, but I could never specify why my designs were bad. Now I can summarize it in two words: **alignment and consistency**.

Let me show you.

![](./old.png)

This is the previous UI of [Lighthouse](https://lighthouseapp.io/), before the redesign. A couple issues I see:

1. Icons in the navigation sidebar are not aligned
   - The logo is further on the left than the other icons.
2. Icon weight mismatch in the navigation sidebar
   - The icons are thin, compared to the text, which is bold.
   - This one is very subtle, I&apos;d never have thought of it, but once you see the difference it&apos;s so obvious.
3. “show summary” buttons
   - The position of the “plus” button in the items is inconsistent, sometimes it&apos;s more on the right, other times more on the left, depending on the text that comes before it.
4. Alignment of item counts
   - Item counts are shown in parentheses right after the view name, so they start in different positions. That makes it much harder to compare counts across views (e.g. finding the ones with more than 100 items).

They are all either about positions being off (alignment) or about elements looking different from the ones next to them (consistency). These issues are subtle; if no one tells you what to look for, they&apos;re difficult to recognize.

Below is the same page in the new design. You have to look quite closely to see the differences, but it looks much smoother, less grating. And it feels much calmer while using it.

In some ways the difference is tiny. For [Lighthouse](https://lighthouseapp.io/) I&apos;m probably the only one who noticed that it&apos;s calmer. But it&apos;s also huge in a less obvious way: the app is much more enjoyable to use now, even though there are no new features.

![](./new.png)

I want to mention one more hallmark of bad design. This one is harder to show with screenshots, so I&apos;ll leave them out for now.

It&apos;s **inconsistency between pages**. When similar UI elements look, feel, and work differently than others. Take the example of filtering items. Different item types have different properties to filter by. It might make sense to implement a separate filter component for every item, to optimize the UI for each item type. Without proper care this might result in a look and feel that&apos;s different on every page. It isn&apos;t obvious while testing one page, but users have to adapt ever so slightly depending on which screen they see. These slight differences make the UI feel worse.

While it&apos;s best to optimize the UI for every page and still keep the look and feel consistent, in many cases that&apos;s a lot of work, especially since you have to keep in mind how the UX works on other pages while implementing the functionality of the current one.

The tradeoff is to either focus on local perfection and sacrifice overall consistency, or keep it overall consistent and sacrifice local perfection. In my opinion the latter is better.

## Component libraries

It takes an immense amount of effort to implement the functionality of a component library and make sure all components work well together, from a design (colors, sizes, etc.) and behavior (animations, states like disabled, etc.) perspective.

It&apos;s best to build on top of a good component library, and not develop your own. For [Lighthouse](https://lighthouseapp.io/) I use [HeroUI](https://www.heroui.com/). I also backed [Web Awesome](https://webawesome.com/)&apos;s Kickstarter campaign, and would&apos;ve loved to use it, but at the time of the redesign it wasn&apos;t ready yet.

### How to use them

For using component libraries I have 2 rules:

- Use the components of the library as much as possible, and don&apos;t adapt them
- Decide which parts to use

#### Use the components of the library as much as possible, and don&apos;t adapt them

In the past I tried to optimize the user experience for every little part of the interface. To do that I created my own components or adapted library components to fit that specific case better.

The result was a UI that had no consistency at all, every part looked and felt different than the others. I simply don&apos;t have the design skills (or the time) to create a smooth UI with adaptations for every situation.

Now I&apos;m in a different camp. I use the components the library provides as much as possible, even if they don&apos;t fit perfectly, even if I&apos;d like to have something different. This makes the UI consistent across all pages and elements.

Before I optimized the small parts and forgot about the larger picture. Now I focus on the larger picture and ignore little imperfections.

#### Decide which parts to use

[HeroUI](https://www.heroui.com/) offers a couple different styles, and other component libraries do the same. For example the button component:

![](./variants.png)

Using all of them would be too much for most applications. In [Lighthouse](https://lighthouseapp.io/), I decided to use only 3 variants, and treat them as primary, secondary, tertiary.

- Primary = solid
- Secondary = flat
- Tertiary = light

Other components, like `Listbox`, have the same style options. For those I decided to use only `flat`.

These decisions keep the overall UI consistent and have the added benefit of simplifying the design process.

Similar decisions can be made for component sizes, spacing, colors, shadows, etc. The more you decide beforehand, the more consistent the UI, and the simpler and faster the design process will be.

### How to choose them

The component library you use has the biggest impact on the design of the product. For choosing the right one, I have 3 rules:

- Use a library that includes all components you&apos;ll need
- Use a library that you find appealing design-wise
- Don&apos;t use copy-paste libraries (less important)

#### Use a library that includes all components you&apos;ll need

This is my most important rule. A lot of work is required to make the components of the library consistent in design and behavior, and work well with all the other components. If I have to add a component myself, then most likely it won&apos;t work as smoothly with the library as built-in ones.

`DatePicker` is the component I miss the most in component libraries.

There are many excellent libraries that have all the components one could wish for, so there are very few reasons to choose an incomplete one.

Keep in mind that it&apos;s not necessary to choose the library with the most components. If you don&apos;t need a component, there&apos;s no point in looking for it. `DataTable`, for example, is not required for [Lighthouse](https://lighthouseapp.io/), so I didn&apos;t consider it in the selection process.

#### Use a library that you find appealing design-wise

From the libraries that survived the initial selection process, choose the one you find the most appealing design-wise.

Besides the obvious (why would you choose a library with worse design?), I&apos;d argue that whatever you create will gravitate towards your design taste anyway. The library you choose should be closest to it, and more often than not it&apos;ll be the one you like best.

Don&apos;t overthink the library choice.

A professional designer might choose a specific look depending on what the product is. Some are more playful, others more serious. There are skeuomorphic, brutalist, flat, and many other styles. But without deep design experience this is just noise, and might make the product worse in the long run.

#### Don&apos;t use copy-paste libraries

This is more of a personal preference, because previously I used one and it didn&apos;t work out at all.

The whole allure of copy-paste libraries is that you can adapt components to your liking. And if you have the code right there, that&apos;s also quite tempting when the components don&apos;t work exactly as you want them to. But as mentioned above, changing components leads to inconsistencies in the UI that are hard to get rid of. And if you don&apos;t change the components, there&apos;s no reason to even add them.

Even if you could make changes without adding inconsistencies, to get upstream improvements you then have to continuously rebase your changes onto the source. That&apos;s just more work with questionable payoff.

Copy-paste libraries are probably best for teams that have a designer and want to have a starting point for their own component library and design system.

## Design rules

By using a component library, a lot of the small-scale design is already done for you. However, some elements you have to create yourself, most commonly text (body text, headlines, etc.) and icons. And you of course have to combine the components into pages to create the full user experience. For that I have a couple of specific rules.

### Use only 2 font weights

There is important text (e.g. headlines, bold inline text) and less important text (e.g. body text). Regardless of which ones you choose, one is slightly bolder than the other. Use them accordingly.

This makes it much easier to keep the UI consistent.

### Use only 2 text colors

The same goes for colors. By using a component library, colors are already handled for the most part. What&apos;s left are the colors for the texts.

Here the same distinction applies. More important text gets a slightly darker color than less important text. In dark mode it&apos;s reversed.

If you use Tailwind, this could be `text-gray-700` and `text-gray-900`.

### Adapt icon weights to the content next to them

I mentioned this one before. But it&apos;s so subtle it doesn&apos;t hurt to mention it again.

![](./mismatch.png)

![](./match.png)

In the first version, the icons are too thin and don&apos;t match the text. In the second, they match and look much better.

### Consider the purpose of elements

Don&apos;t just put information in the UI because it exists in the backend. It&apos;s not necessary that the UI shows everything. The same thing applies to functionality. Just because it&apos;s simple to implement doesn&apos;t mean it should be shown to the user.

Less is more is a good principle here.

Thinking about what a user wants to achieve, and what they need to get there, helps create less cluttered user interfaces. With fewer elements in the UI, users can decide faster what they need to achieve their goal. This is a win-win: less work and a better UI.

## A note on dark mode

Dark mode was one of the most requested features for [Lighthouse](https://lighthouseapp.io/). For a long time I refrained from adding it because it adds work to every UI task: every change must be checked in both light and dark mode. It may not sound like much, but it adds up.

With [HeroUI](https://www.heroui.com/) setting up dark mode is straightforward, and with a bit of additional setup I don&apos;t have to think about it at all anymore. It just works. Dark mode without additional work? I&apos;ll take that anytime.

Showing the setup is beyond the scope of this article. I&apos;ll add the link here as soon as it&apos;s published.

## Project-specific rules

I focus a lot on keeping a consistent user interface across the whole product. For this purpose I created an additional document of design rules. This is where I define everything that appears in multiple places.

From small things

&gt; **Loading states**
&gt;
&gt; \- Loading states use skeleton loaders, except buttons and other interactive elements, which use the default loader

&gt; **Button**
&gt;
&gt; \- Variants
&gt;
&gt;   - Primary: `solid`
&gt;
&gt;   - Secondary: `flat`
&gt;
&gt;   - Tertiary: `light`
&gt;
&gt; \- Icon weights
&gt;
&gt;   - `regular` in all cases except
&gt;
&gt;   - tertiary button, then `light`
&gt;
&gt;   - tertiary icon only button, then `solid`
&gt;
&gt;   - secondary button in button group, then `light`

To large things

&gt; **Actions**
&gt;
&gt; \- If possible, the UI should update immediately, and the action done in the background
&gt;
&gt; \- If not possible, the UI should show a loading indicator until the action is complete
&gt;
&gt; \- If it&apos;s a button that was pressed, then the loading indicator should be in that button

&gt; **Forms**
&gt;
&gt; \- Edit directly, no save button click required
&gt;
&gt; \- If buttons are required (e.g. subscribe page), they are left-aligned
&gt;
&gt; \- Always in a `Card` component
&gt;
&gt; \- Multiple sections are separated with `LighthouseCard` and multiple `CardHeader` and `CardBody` elements within the card
&gt;
&gt; \- Labels are always on top of the input element
&gt;
&gt; \- Additional info (e.g. preview) can appear below the card, as a second card

This is one document that defines how most of the UI of [Lighthouse](https://lighthouseapp.io/) works. It&apos;s a living document and will change over time, usually when I discover new generic rules.

Having it written down removes many of the small decisions. It makes for a better UI and less work.

## Resources / books

There are a lot of design books, and many that are targeted at developers. I read a ton of them; here are the 3 that I found most valuable:

### Practical UI

This book is incredible. No fluff, no unnecessary deep design talk, just incredibly useful introductions to the most important topics of web design. Many of the rules (e.g. the icons) are from that book.

[https://www.practical-ui.com/](https://www.practical-ui.com/)

![](./practicalui.png)

### Refactoring UI

This one is similar to Practical UI. Many immediately usable design tips while building a foundational understanding of UI design.

If you have the time, read both. It helps to get different perspectives.

[https://www.refactoringui.com/](https://www.refactoringui.com/)

![](./refactoringui.png)

### Designing interfaces

This book introduces you to many UI design patterns. These are foundational patterns used in most products and websites, for example dropdowns, navigation menus, and so on.

Even though you probably already know them, the book is still valuable. It explains what each pattern can be used for, where it works well, where it doesn&apos;t, and what to consider when using a specific pattern.

The less experience you have with UI design, the more value this book will provide.

[https://www.oreilly.com/library/view/designing-interfaces-3rd/9781492051954/](https://www.oreilly.com/library/view/designing-interfaces-3rd/9781492051954/)

![](./designinginterfaces.jpg)

## Summary

Designing a beautiful user interface is difficult, but by adhering to a couple of rules it becomes much simpler.

- Use a component library
- Use the components of the library as much as possible

  - Prefer slight imperfections with library components over perfection with custom components

- Don&apos;t adapt library components
- Choose a component library that provides all components your product needs
- Choose a component library based on your style preferences
- Keep the UI as consistent as possible

  - Use similar components and patterns for similar interactions
  - Use only 2 font weights
  - Use only 2 text colors
  - Adapt icon weights to the content next to them

- Consider the purpose of elements
- Create a document of project-specific design rules

If you had to summarize these rules in one sentence, it&apos;d be:

**Prefer global UI consistency over local optimizations.**</content:encoded></item><item><title>An approach for automated fact checking</title><link>https://weberdominik.com/blog/fact-checking-approach/</link><guid isPermaLink="true">https://weberdominik.com/blog/fact-checking-approach/</guid><pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate><content:encoded>A short while ago I took part in a hackathon from the [Wiener Zeitung](https://www.wienerzeitung.at/), where the theme was to tackle problems in the media space. Since I’m working on an [RSS feed reader](https://lighthouseapp.io/) myself, I have a lot of ideas, but not the time to work on them. This hackathon was the perfect opportunity to validate if the fact-checking system I thought of some time ago could work. The result was better than expected.

## Screenshot

I wish I’d taken screenshots along the way so I could show example results of each step. But I didn’t (it was a hackathon with a time limit, after all), so the textual description of the intermediate steps has to be enough.

But at least here’s a screenshot of the end result. It contains mistakes, I didn’t try to find a case where it works particularly well. And the implementation itself isn’t great, but more on that below.

![](./result.png)

The full screenshot is quite long, including it in the article would be too much, but you can find it [here](/images/automated-fact-checking-rull-result.png).

## The approach

The main approach is very simple.

1. Extract statements to verify
2. Verify them

LLMs can do a lot of that, and with such a simple approach I feared that I’d be done 1h after the hackathon started. That fear was unfounded.

## Extracting statements

### Using LLMs

The first approach was to use an LLM (GPT-4o) to extract factual statements. As you can imagine this didn’t work so well. LLMs are not reliable enough (at least not yet) to do that properly. Sometimes the result contained 10 items, sometimes 25. Sometimes it split one statement containing two claims into two items, sometimes not.

It may be possible to improve the consistency of results with prompt engineering, but that would then make the system reliant on the model and its specific version. That’s a dependency I didn’t want to have, because at some point it should work with smaller models as well.

So I was looking for an alternative.

### Splitting by sentence

The next approach was much simpler. Just split the text into sentences. This could be quite complex logic, considering quotes, periods in numbers, and so on, but for this first version I just split by period. In some cases this resulted in gibberish, but for the hackathon I ignored those.

The result is a list of sentences, which are statements that potentially need verification. Not every sentence does though. There’s no point in verifying “When a german party triggers a Zählappell, or parliamentary roll call, it is serious business.” for example.

### Classifying sentences into fact types

Ignoring some sentences for verification would make for a better user experience (only relevant verification information is displayed) and reduce costs.

The first approach I tried was classifying each sentence in isolation, into a fact type. With the help of GPT-4.5 I created a list, categorized into how important it is to verify it.

![](./classification.png)

Based on that I created a list of fact types that I want to verify.

```
Quantified data: anything with numbers
Trend and change statements
Risk or safety claims
Legal claims
Scientific claims
Significant historical facts
Significant societal facts
```

Neither classification with the full list nor with the reduced list worked well. It was just too easy to classify each statement into one of the important types, even if it wasn’t important. The classification always made at least some sense, so I couldn’t even say the LLM made a mistake. Which is a surefire sign that a different strategy is needed.

### Classifying sentences in context

Next I tried to send the specific sentence and the whole article to the LLM, asking it if it’s important to verify this sentence in the context of the article.

I tried it with a yes/no and high/medium/low classification. This worked better, but still not really great.

### Extended classification in context

There was a bit more experimentation along those lines, but I’m jumping to the end now. The approach that worked the best is first classifying each sentence into a category. The ones I used are

```
&quot;Specific Facts&quot;,
&quot;General Knowledge/Facts&quot;,
&quot;Opinions/Subjective&quot;,
&quot;Speculative/Hypothetical&quot;,
&quot;Questions/Rhetoricals&quot;,
&quot;Instructions/Imperatives&quot;,
&quot;Quotations/Attributions&quot;,
&quot;Ambiguous/Uncertain&quot;,
&quot;Other&quot;
```

Only `Specific Facts` and `Quotations/Attributions` are further processed. All other categories basically act as a honeypot, so the LLM has other options and puts each sentence where it fits best, instead of putting everything that loosely fits into one of those two.

This happens out of context. Each sentence is sent to the LLM without additional information.

The second step is to analyze the sentence in context, how important it is for the text and how big the impact is if it’s wrong. This is the prompt I used for it:

```
`Original text: &quot;${articleText}&quot;

Sentence to analyze: &quot;${fact}&quot;

Analyze this sentence from the text along the following dimensions:
- Relevance &amp; Context: How central is the sentence to the text&apos;s main arguments or narrative?
- Impact of Misinformation: If the sentence is wrong, would it have significant consequences?`
```

The LLM call also includes a tool which restricts the results for both dimensions to `&quot;low&quot; | &quot;medium&quot; | &quot;high&quot;`.

If the result for any of those is `low` then it’s ignored for further verification.

## Verification

Compared to statement extraction, the verification step is simple. I spent much more time on statement extraction. Before starting this little project I thought it’d be the other way around.

Basically, the verification step sends the statements to Perplexity and asks it to verify each one. It makes one request per statement.

This is the prompt:

```
`${fact}

Verify the above fact. Return true, false, or null (if not enough sources could be found to verify).`
```

To check the result, the code checks whether `true`, `false`, or `null` is contained in the resulting string. It’s a naive approach and sometimes flags the result as all three at once, because Perplexity responds with a sentence along the lines of _“There is not enough information to verify the result being either true or false, so the correct response is null”_. This part is not where I wanted to spend time, so I left it like that.

Apart from this small issue it worked surprisingly well. Well enough that I didn’t spend any time on improving it, apart from one specific case.

In a text, each sentence exists in the context of that text. Verifying sentences without that context is often impossible.

One particular example is the sentence _“The draft defence budget for 2025 is €53bn.”_. It’s an article about Germany, so while reading it it’s clear that Germany’s budget is meant. But that information is lost when sending the sentence to Perplexity on its own. For some reason it always assumed it’s the Netherlands’ budget, which I thought was kinda funny.

The solution to that particular problem was converting each sentence into a self-contained statement before sending it to Perplexity for verification. Since every sentence is already analyzed in the context of the article, simply amending the prompt did the trick.

Below is the same prompt as I mentioned above, but with the statement `Also convert the sentence into another one to make it a self-contained statement, to not require any additional information to understand and verify it.` added at the end.

```
`Original text: &quot;${articleText}&quot;

Sentence to analyze: &quot;${fact}&quot;

Analyze this sentence from the text along the following dimensions:
- Relevance &amp; Context: How central is the sentence to the text&apos;s main arguments or narrative?
- Impact of Misinformation: If the sentence is wrong, would it have significant consequences?

Also convert the sentence into another one to make it a self-contained statement, to not require any additional information to understand and verify it.`
```

This results in the sentence _“Germany’s draft defence budget for 2025 is €53bn.”,_ which contains all necessary information for verification.

This step considerably improved the accuracy of verification.

## The code

This is a separate headline so people looking for the code can jump directly here. In short, I’m not including it because it’s atrocious.

The goal was validating the approach, not writing good code. For the sake of speed I let Claude generate the code and iterated with it until I arrived at the final state. During that iteration some (UI) bugs crept in that I couldn’t be bothered to fix. The typical 70% problem with AI-generated code.

You could probably use this article as a prompt and get a similar result anyway.

## Conclusion

I’m quite happy to say that the current version works for the most part, better than I initially expected. However, there are things to iron out.

The verification sometimes doesn’t find the right content to verify statements, even though it exists. It also has difficulties with more complex statements. Additionally, it searches the internet without limiting the results to credible sources. I assume Perplexity does some of that, but I’m not sure to what extent.

Similarly, statement extraction right now works by splitting sentences. This is ok for a prototype, but a sentence can contain multiple statements.

Another interesting case is quotes, which need double verification: did the person really say what is quoted, and is what they said correct?

And arguably the most important improvement is to find a way to extract and categorize statements in a single pass, to bring the costs down. Currently the whole article is sent n times to the LLM (where n is the number of sentences). For long articles the costs can balloon quite fast.
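Some back-of-the-envelope numbers make the problem concrete. All values below are assumptions for illustration, not measurements:

```ts
// Assumed numbers; adjust to your article length and model pricing.
const articleTokens = 2_000; // tokens in one article
const sentences = 80; // n sentences means n LLM calls, each with the full article
const dollarsPerMillionInputTokens = 2.5;

const inputTokens = sentences * articleTokens; // 160,000 tokens for one article
const costPerArticle = (inputTokens / 1_000_000) * dollarsPerMillionInputTokens;
console.log(costPerArticle.toFixed(2)); // prints 0.40 at the assumed price
```

Multiply that by the number of articles processed per day and the costs balloon exactly as described.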

This issue in particular makes the approach, in its current form, infeasible for integration into a product. What could work though is a browser extension where users provide their own API key. The cost is predictable, because the content is known when the user is on a website. So the UI of the browser extension can show predicted costs to the user, which would make for a nice UX.</content:encoded></item><item><title>Self-hosting HyperDX for fun and profit</title><link>https://weberdominik.com/blog/self-host-hyperdx/</link><guid isPermaLink="true">https://weberdominik.com/blog/self-host-hyperdx/</guid><pubDate>Tue, 14 Jan 2025 00:00:00 GMT</pubDate><content:encoded>[HyperDX](https://www.hyperdx.io/) is a relatively new, but complete, product in the observability space. It supports logs, spans, session replay, dashboards, alerts, and everything else necessary for a complete observability solution. I compared its features to a list of other solutions and HyperDX came out on top.

&gt; Quick sidenote: They’re working on a v2. From the website and docs it appears as if it’ll only be a Next.js application (without server-side processing) that connects to ClickHouse, having fewer features than v1. I talked with the devs about it and they told me it’s only the current beta state. Over time v2 will have the same features as v1. They’ll publish a roadmap, but I don’t know when. Judging from their GitHub history almost all their dev time goes into v2.

## Self-hosting on localhost

Their [GitHub repo](https://github.com/hyperdxio/hyperdx) is an excellent starting point. The first step is to check out the code.

```
git clone https://github.com/hyperdxio/hyperdx.git
cd hyperdx
git checkout dddfbbc31548defe6d73c9e1e2a0d221d94efa72
```

The checked-out [commit](https://github.com/hyperdxio/hyperdx/tree/dddfbbc31548defe6d73c9e1e2a0d221d94efa72) is version `1.10.0`. That’s the version I’m using. At the time of writing, the commits that followed make deployment a bit more complicated (not on purpose, just incidentally) and do not add new functionality.

After checking it out you can start it with Docker.

```
docker compose up -d
```

Then it runs on `localhost`. The UI is on port 8080, the API on port 8000, and the OpenTelemetry endpoint on port 4318.

If you run it locally and access the UI via `http://localhost:8080`, you’re done. Everything works.

But when accessing it on a server via its IP, or via a domain pointed at the server’s IP, the UI shows `Loading HyperDX` indefinitely.

![](https://lex-img-p.s3.us-west-2.amazonaws.com/img/068077d3-4d35-4252-9966-d1919d73385f-RackMultipart20250108-170-qp0861.png)

The reason is that it tries to access the API via `http://localhost:8000`.

![](https://lex-img-p.s3.us-west-2.amazonaws.com/img/19db58d6-5233-45c9-a335-8337b39f6038-RackMultipart20250108-126-75535f.png)

## Self-hosting on a server

From the GH repo’s readme:

&gt; By default, HyperDX app/api will run on localhost with port `8080`/`8000`. You can change this by updating `HYPERDX_APP_**` and `HYPERDX_API_**` variables in the `.env` file. After making your changes, rebuild images with `make build-local`.

It doesn’t matter if it’s an IP address or a domain name. I set up a temporary server, and in this post will use its IP address, `159.69.12.166`. For my production setup I assigned a domain.

The first step is updating the `.env` file:

```
# Used by docker-compose.yml
IMAGE_NAME=ghcr.io/hyperdxio/hyperdx
LOCAL_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-local
LOCAL_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-local
IMAGE_VERSION=1.10.0

# Set up domain URLs
HYPERDX_API_PORT=8000
HYPERDX_API_URL=http://159.69.12.166
HYPERDX_APP_PORT=8080
HYPERDX_APP_URL=http://159.69.12.166
HYPERDX_LOG_LEVEL=debug
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 # port is fixed
```

Then rebuild the images with `make build-local`. That takes a while; on the server I used, about 10-15 minutes.

Rebuilding the Docker image is required because the server URL is [set](https://github.com/hyperdxio/hyperdx/blob/e6d8501bd918f4f5e7bb0aeca67f159cb5af3d51/packages/app/Dockerfile#L32) in the `Dockerfile`, not taken from the environment.

```
ENV NEXT_PUBLIC_SERVER_URL $SERVER_URL
```

The [command](https://github.com/hyperdxio/hyperdx/blob/e6d8501bd918f4f5e7bb0aeca67f159cb5af3d51/Makefile#L80) in the `Makefile` sets the `SERVER_URL` variable as build argument.

```
	docker build \
		--build-arg CODE_VERSION=${LATEST_VERSION} \
		--build-arg OTEL_EXPORTER_OTLP_ENDPOINT=${OTEL_EXPORTER_OTLP_ENDPOINT} \
		--build-arg OTEL_SERVICE_NAME=${OTEL_SERVICE_NAME} \
		--build-arg PORT=${HYPERDX_APP_PORT} \
		--build-arg SERVER_URL=${HYPERDX_API_URL}:${HYPERDX_API_PORT} \
		. -f ./packages/app/Dockerfile -t ${IMAGE_NAME}:${LATEST_VERSION}-app --target prod
```

When the images are built, HyperDX can again be started with `docker compose up -d`. Now accessing HyperDX via the server’s IP works as expected, and it shows a beautiful setup screen.

![](https://lex-img-p.s3.us-west-2.amazonaws.com/img/91917a94-1ee9-406f-b439-f86bb6f6cf63-RackMultipart20250108-122-og30nl.png)

## Other considerations

### Blocking the default MongoDB port

The MongoDB instance that’s started doesn’t require a username or password to log in, and its port is exposed, which means anyone who knows the server’s IP address and the MongoDB port can connect to it.

There are actors who continuously scan the internet for any open MongoDB instances and delete all their data. On my server the data was deleted every couple hours. This led to the loss of user and team information and required me to repeatedly create new users.

At the recommendation of the HyperDX team, I blocked the port so that it cannot be accessed externally.

#### Deleting existing iptables rules

The first step is deleting any existing rules for `DOCKER-USER`. To check if there are any, use the following `iptables` command.

```
iptables -L DOCKER-USER -n --line-numbers
```

Then, delete them with the `-D` flag.

```
iptables -D DOCKER-USER &lt;number&gt;
```

The final output of `iptables -L DOCKER-USER -n --line-numbers` should only include the `RETURN` rule.

```
Chain DOCKER-USER (1 references)
num  target     prot opt source               destination
1    RETURN     0    --  0.0.0.0/0            0.0.0.0/0
```

#### Drop all traffic for port 27017

The next step is dropping all traffic for port `27017`. To do that, add a rule before the `RETURN` rule, with the `-I` flag.

```
iptables -I DOCKER-USER -p tcp --dport 27017 -j DROP
```

Now the output of `iptables -L DOCKER-USER -n --line-numbers` should show the new rule on position 1.

```
Chain DOCKER-USER (1 references)
num  target     prot opt source               destination
1    DROP       6    --  0.0.0.0/0            0.0.0.0/0            tcp dpt:27017
2    RETURN     0    --  0.0.0.0/0            0.0.0.0/0
```

#### Allow traffic from localhost and the Docker network

MongoDB should be available from localhost and the Docker subnet.

To find out which subnet the Docker network uses, inspect it with `docker network inspect hyperdx_internal`.

The relevant part of the output is this:

```
&quot;IPAM&quot;: {
    &quot;Driver&quot;: &quot;default&quot;,
    &quot;Options&quot;: null,
    &quot;Config&quot;: [
        {
            &quot;Subnet&quot;: &quot;172.18.0.0/16&quot;,
            &quot;Gateway&quot;: &quot;172.18.0.1&quot;
        }
    ]
},
```

In this case, the subnet is `172.18.0.0/16`.

The next commands allow traffic from specific subnets. The first one from localhost, the second one from the Docker subnet. If in your case the subnet is different, replace `172.18.0.0/16` with the correct one.

```
iptables -I DOCKER-USER -s 127.0.0.1 -p tcp --dport 27017 -j ACCEPT
iptables -I DOCKER-USER -s 172.18.0.0/16 -p tcp --dport 27017 -j ACCEPT
```

Now the output of `iptables -L DOCKER-USER -n --line-numbers` should show the new rules on position 1 and 2.

```
Chain DOCKER-USER (1 references)
num  target     prot opt source               destination
1    ACCEPT     6    --  172.18.0.0/16        0.0.0.0/0            tcp dpt:27017
2    ACCEPT     6    --  127.0.0.1            0.0.0.0/0            tcp dpt:27017
3    DROP       6    --  0.0.0.0/0            0.0.0.0/0            tcp dpt:27017
4    RETURN     0    --  0.0.0.0/0            0.0.0.0/0
```

Rules are evaluated in order: if one of the `ACCEPT` rules matches, the request is accepted; if none do, the request is dropped by the `DROP` rule. This applies only to port `27017`; every other port is handled by the `RETURN` rule.

#### Save config

The final step is to ensure the iptables rules are persisted and reloaded after rebooting the server.

```
apt-get install iptables-persistent
netfilter-persistent save
```

### Data retention and storage

HyperDX sets the [data retention period](https://github.com/hyperdxio/hyperdx/blob/main/packages/api/src/clickhouse/index.ts#L293) of ClickHouse tables to one month. ClickHouse makes up most of the data, so that’s the only real storage concern. In my case it needs about 60GB of space for one month of logs.

If no space is left on the server, the symptom is that the MongoDB database crashes, and after restarting, crashes again in less than a minute. Don’t ask me how I know.

The system is quite efficient, so a relatively cheap server (2 CPUs, 4 GB memory) is enough to handle the traffic from Lighthouse. Instead of upgrading the server, I added a volume and moved the `hyperdx/.volumes/ch_data` directory to it via a symlink.

## Final words

HyperDX is a relatively new product in the observability space, so it takes a bit more tinkering to self-host than other products, and its documentation is not yet as complete.

It seems to me the team focused most of their efforts on building a great product, and they succeeded. After the initial setup phase I had no issues, and their integration with session recordings is exceptional. Being able to see what the user did leading up to an error is incredible.</content:encoded></item><item><title>AsyncLocalStorage and how to use it to reduce repetition of log data</title><link>https://weberdominik.com/blog/asynclocalstorage-log-repetition/</link><guid isPermaLink="true">https://weberdominik.com/blog/asynclocalstorage-log-repetition/</guid><pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate><content:encoded>`AsyncLocalStorage` is a Node.js class that makes it possible to store global data for one specific function execution. It doesn’t matter if the function is synchronous or asynchronous, but in practice it’s more relevant for async functions.

Let’s start with an example to understand what it does and how it works.

```ts
import { AsyncLocalStorage } from &quot;node:async_hooks&quot;;

const asyncLocalStorage = new AsyncLocalStorage&lt;{ task: number }&gt;();

const tasks = [1, 2, 3, 4, 5];

async function longRunningWork() {
  const duration = Math.floor(Math.random() * 1000) + 500;
  await new Promise((resolve) =&gt; setTimeout(resolve, duration));

  const task = asyncLocalStorage.getStore()?.task;
  console.log(`Task ${task} completed`);
}

async function main() {
  await Promise.all(
    tasks.map(async (task) =&gt; {
      const context = { task };
      return asyncLocalStorage.run(context, async () =&gt; {
        await longRunningWork();
      });
    }),
  );
}
main();

// Example output:
// Task 4 completed
// Task 2 completed
// Task 5 completed
// Task 3 completed
// Task 1 completed
```

The tasks are all executed at the same time, and when the log is written they get the value of `task` that’s specific to that particular execution context.

That behavior is impossible to replicate without `AsyncLocalStorage`. Global objects, in comparison, exist only once for the entire application. Adding that value to a global object would only retain the last one, resulting in logging `Task 5 completed` five times.

```ts
const globalStorage = { task: 0 };

const tasks = [1, 2, 3, 4, 5];

async function longRunningWork() {
  const duration = Math.floor(Math.random() * 1000) + 500;
  await new Promise((resolve) =&gt; setTimeout(resolve, duration));

  const task = globalStorage.task;
  console.log(`Task ${task} completed`);
}

async function main() {
  await Promise.all(
    tasks.map(async (task) =&gt; {
      globalStorage.task = task;
      await longRunningWork();
    }),
  );
}
main();

// Output:
// Task 5 completed
// Task 5 completed
// Task 5 completed
// Task 5 completed
// Task 5 completed
```

## Use-cases

Global data should be used sparingly. It can make the data flow difficult to follow and the code harder to understand. In the example above, it’d be simpler to add `task` as a parameter to `longRunningWork`.

Prime candidates are pieces of auxiliary logic that have nothing to do with the core business logic but need to pass data down the execution stack. Being able to pass that data outside the normal control flow (i.e. not as function parameters) keeps it from cluttering the code.

Tracking request IDs is a good example. The framework can set the ID when a request is received, and it’ll be available throughout the handling of that request. If another framework function that needs the request ID is called, it can access it without the developer having to do extra work.

### Passing repetitive log data

With structured logging it’s possible to include data within log entries. Log aggregation software stores that data and makes it possible to filter logs based on it.

In my specific case, for [Lighthouse](https://lighthouseapp.io/), it’s helpful that I can filter all logs that are specific to one URL.

![](https://lex-img-p.s3.us-west-2.amazonaws.com/img/fc4f02b2-d264-44cb-87f1-5c551d144592-RackMultipart20250108-124-zr9p4m.png)

To make that possible, the `url` field must be added to the data object every time a log line is written.

It is possible to do that manually by passing the URL to every function that writes logs. This is error-prone (I might forget somewhere) and makes the code harder to read. Not every function would need the URL otherwise. When reading the code a team member might be confused about why the function `computeReadingTime` requires the URL in addition to the text.

Another option is to create a child logger and pass that through. Child loggers can store additional data, which is added to every log. Every log library I know has that feature. But child loggers must also be passed as a parameter to the function.

With `AsyncLocalStorage`, there’s essentially a side-channel for data, which the log functions can use.

## How I use `AsyncLocalStorage` to pass data

```ts
import { AsyncLocalStorage } from &quot;node:async_hooks&quot;;
import pino from &quot;pino&quot;;

const asyncLocalStorage = new AsyncLocalStorage();
export function withData&lt;R&gt;(data: Record&lt;string, any&gt;, callback: () =&gt; R): R;
export function withData&lt;R, TArgs extends any[]&gt;(
  data: Record&lt;string, any&gt;,
  callback: (...args: TArgs) =&gt; R,
  ...args: TArgs
): R;
export function withData&lt;R, TArgs extends any[]&gt;(
  data: Record&lt;string, any&gt;,
  callback: (...args: TArgs) =&gt; R,
  ...args: TArgs
): R {
  const currentStore = asyncLocalStorage.getStore() ?? {};
  const combinedData = { ...(typeof currentStore === &quot;object&quot; ? currentStore : {}), ...data };
  return asyncLocalStorage.run(combinedData, callback, ...args);
}

const logger = pino();
export function info(message: string, data?: Record&lt;string, any&gt;) {
  const store = asyncLocalStorage.getStore();
  const storeData = typeof store === &quot;object&quot; ? store : {};
  const combinedData = { ...storeData, ...data };
  logger.info({ ...combinedData, message });
}
```

This is the code of my `logUtils.ts`. The real one has additional functions for other log levels, but they’re essentially the same. The `info` function merges data from `asyncLocalStorage` with any additional data passed to it directly.

The `withData` function receives `data` and `callback` parameters, combines the `data` with the object currently in the store, and calls the `callback` with the combined object. In the context of logging, I view data as contextual information. The further down the execution gets, the more specific the context should become. Therefore context should only be added, never removed.

`withData` is a tiny wrapper around `asyncLocalStorage.run` that combines existing data with new data. The type of `withData` is the same as that of `asyncLocalStorage.run`.

Below is an example demonstrating how it works.

```ts
withData({ a: 1 }, () =&gt; {
  info(&quot;Log 1&quot;, { b: 2 }); // { message: &quot;Log 1&quot;, a: 1, b: 2 }

  withData({ c: 3 }, () =&gt; {
    info(&quot;Log 2&quot;, { d: 4 }); // { message: &quot;Log 2&quot;, a: 1, c: 3, d: 4 }
  });
});
```</content:encoded></item><item><title>Type-safe logging with custom string interpolation</title><link>https://weberdominik.com/blog/type-logging-string-interpolation/</link><guid isPermaLink="true">https://weberdominik.com/blog/type-logging-string-interpolation/</guid><pubDate>Fri, 03 Jan 2025 00:00:00 GMT</pubDate><content:encoded>In an effort to improve observability of [Lighthouse](https://lighthouseapp.io/), I updated the logging infrastructure and switched from `console.log` to [pino](https://getpino.io). That led me down the rabbit hole of typing the logging functions.

I wanted to create a function that takes a message string and data object, where the message string can reference values in the data object, and TypeScript verifies that all referenced values are in the data object.

```
log.info(&quot;Something happened %time&quot;, { time: new Date() }); // OK
log.info(&quot;Something happened %time&quot;, { }); // Property &apos;time&apos; is missing in type &apos;{}&apos;
```

It looks simple, but took quite a lot of tinkering to make it work.

With the typing of the log function I want to ensure that changing log lines will never result in missing data in those log lines. Writing new code is relatively easy; most unintended changes (aka mistakes) happen while editing existing code.

## Logging with pino

Pino is a logging library for JavaScript. It supports structured logging and writes log lines as JSON. The message is included as `msg` property, and its log functions accept an additional `data` parameter, which is merged into the logged JSON object.

It also supports string interpolation.

The signature is

```
logger.info([mergingObject], [message], [...interpolationValues])
```

This is an example with string interpolation:

```
logger.info({ property: &quot;value&quot; }, &quot;hello %s&quot;, &quot;world&quot;);
// {&quot;property&quot;: &quot;value&quot;,&quot;level&quot;:30,&quot;time&quot;:1531257826880,&quot;msg&quot;:&quot;hello world&quot;,&quot;pid&quot;:55956,&quot;hostname&quot;:&quot;x&quot;}
```

`%s` is replaced by `&quot;world&quot;` in the final `msg` property.

## Motivation

I want the interpolated data in the `data` object as well, not only in the message. Having to pass it twice, once in the `data` object and once for string interpolation, seems like unnecessary effort.

`logger.info(&quot;Something happened %time&quot;, { time: new Date() });`

is cleaner than

`logger.info({ time: new Date() }, &quot;Something happened %s&quot;, new Date());`

## Implementation

The first step is to implement that behavior. Typing comes later.

It’s possible to pass a custom formatter to pino. This formatter receives the `data` object and can manipulate it. There it’s possible to replace the value references (e.g. `%time`) with the actual values.

```
const logger = pino({
  formatters: {
    // Make it possible to interpolate provided data into the log message
    log(data) {
      const message = data.msg as string;
      if (message == null) return data;

      const resultMessage = message.replace(/%(\w+)/g, (_, key) =&gt; {
        const value = data.hasOwnProperty(key) ? data[key] : `%${key}`;
        if (typeof value === &quot;string&quot;) return value;
        if (value instanceof Date) return value.toISOString();
        if (value instanceof Error) return value.message;
        return JSON.stringify(value);
      });
      return {
        ...data,
        msg: resultMessage,
      };
    },
  },
});
```

This function implements custom formatting for `Date` and `Error` objects; everything else is converted to JSON strings.

The `logger` is used in a separate log function

```
function info(message: string, data?: Record&lt;string, any&gt;) {
  logger.info({ ...data, msg: message });
}
```

The `log` formatter function doesn’t receive the message text, but since it’s included in the resulting object as `msg` property, it’s possible to work around that by merging the message into the object directly.

## Typing

The `info` function has `message` and `data` as parameters, but doesn’t yet verify that values referenced in `message` (e.g. `%time`) are part of the `data` parameter.

To reiterate, the goal is to type the log functions so that referencing a value which doesn’t exist in the data object leads to a TypeScript error.

```
log.info(&quot;Something happened %time&quot;, { time: new Date() }); // OK
log.info(&quot;Something happened %time&quot;, { }); // Property &apos;time&apos; is missing in type &apos;{}&apos;
```

### Solution

The full solution adds 5 helper types and changes the types of the `info` function:

```
type IsOnlyAlphabet&lt;T extends string&gt; = T extends `${infer F}${infer R}`
  ? Uppercase&lt;F&gt; extends Lowercase&lt;F&gt;
    ? false
    : IsOnlyAlphabet&lt;R&gt;
  : true;

type OnlyAlphabet&lt;T extends string&gt; = T extends string
  ? IsOnlyAlphabet&lt;T&gt; extends true
    ? T
    : never
  : never;

type ExtractPlaceholdersRaw&lt;S extends string&gt; = S extends
  | `${string}%${infer Key}${&quot; &quot; | &quot;,&quot; | &quot;.&quot;}${infer Rest}`
  | `${string}%${infer Key}`
  ? Key | ExtractPlaceholdersRaw&lt;Rest&gt;
  : never;

type ExtractPlaceholders&lt;S extends string&gt; = OnlyAlphabet&lt;ExtractPlaceholdersRaw&lt;S&gt;&gt;;

type PlaceholdersPresent&lt;S extends string, P extends Record&lt;string, any&gt;&gt; = {
  [K in ExtractPlaceholders&lt;S&gt;]: any;
} &amp; P;

function info&lt;S extends string, P extends Record&lt;string, any&gt;&gt;(message: S, data?: PlaceholdersPresent&lt;S, P&gt;) {
  logger.info({ ...data, msg: message });
}
```

### Explanation

The solution works in 2 steps

1. Extract all words prefixed with `%` into a string union
2. Ensure that the `data` parameter contains all strings of that union as key

`IsOnlyAlphabet`, `OnlyAlphabet`, `ExtractPlaceholdersRaw`, and `ExtractPlaceholders` are relevant for step 1.

`PlaceholdersPresent` is for step 2.

#### Extracting keys

`ExtractPlaceholdersRaw` is a recursive type that converts a string into a union of strings which includes every substring prefixed by a `%` sign.

```
type T1 = ExtractPlaceholdersRaw&lt;&quot;This %is a %test and %more&quot;&gt;
// type T1 = &quot;is&quot; | &quot;is a %test and %more&quot; | &quot;test&quot; | &quot;test and %more&quot; | &quot;more&quot;
```

The most important part is the condition:

```
S extends
  | `${string}%${infer Key}${&quot; &quot; | &quot;,&quot; | &quot;.&quot;}${infer Rest}`
  | `${string}%${infer Key}`
```

If it matches, it extracts `Key` and recursively unions it with other keys extracted from `Rest`. If it doesn’t match, it returns the type `never`.

```
? Key | ExtractPlaceholdersRaw&lt;Rest&gt;
: never
```

The condition itself is a union. The first part of the union makes sure that `Key` is followed by a delimiter, in this case either a space (`&quot; &quot;`), comma (`,`), or period (`.`). Without the delimiter, `Key` would only match one letter, and the result would be `&quot;t&quot;` instead of `&quot;test&quot;`, for example.

The second part of the union ensures that interpolated values are extracted even if they are at the end of the string. In the example above that’d be `&quot;more&quot;`.

The remaining problem is that the second part of the union also extracts `&quot;is a %test and %more&quot;` and `&quot;test and %more&quot;`, because it matches any string that starts with `%`.

This is where the `OnlyAlphabet` type is relevant.

#### Checking if a key contains only letters

The type `IsOnlyAlphabet` converts a string type into `true` if it only contains letters, or `false` otherwise.

```
type IsOnlyAlphabet&lt;T extends string&gt; = T extends `${infer F}${infer R}`
  ? Uppercase&lt;F&gt; extends Lowercase&lt;F&gt;
    ? false
    : IsOnlyAlphabet&lt;R&gt;
  : true;
```

It’s also a recursive type, and it walks over every character of the given string.

```
T extends `${infer F}${infer R}`
```

The condition works similarly to the one in `ExtractPlaceholdersRaw`. It extracts the first character into `F` and the rest into `R`.

`Uppercase&lt;F&gt; extends Lowercase&lt;F&gt;` checks whether the character is identical in upper- and lowercase. If it is, it cannot be a letter.

Some letters in languages other than English are the same in upper- and lowercase, but in code that’s usually not an issue.

The result is

```
type A1 = IsOnlyAlphabet&lt;&quot;abcd&quot;&gt;;
// type A1 = true
type A2 = IsOnlyAlphabet&lt;&quot;1234&quot;&gt;;
// type A2 = false
```

#### Filtering a union of strings

As explained above, `ExtractPlaceholdersRaw` creates a union of strings which includes undesired values. It should be filtered down to the values that consist only of letters.

`&quot;is&quot; | &quot;is a %test and %more&quot; | &quot;test&quot; | &quot;test and %more&quot; | &quot;more&quot;` should become `&quot;is&quot; | &quot;test&quot; | &quot;more&quot;`.

This is what the `OnlyAlphabet` type does.

```
type OnlyAlphabet&lt;T extends string&gt; = T extends string
  ? IsOnlyAlphabet&lt;T&gt; extends true
    ? T
    : never
  : never;
```

For every member of the union it checks whether `IsOnlyAlphabet&lt;T&gt;` is `true`. If so, it returns the string type; if not, it returns `never`.

To understand how this type works it’s important to know that conditional types distribute over unions: when the checked type is a bare type parameter, the conditional is applied to every member of the union separately.

Applying it to the example above replaces `&quot;is a %test and %more&quot;` and `&quot;test and %more&quot;` with `never`.

```
OnlyAlphabet&lt;&quot;is&quot; | &quot;is a %test and %more&quot; | &quot;test&quot; | &quot;test and %more&quot; | &quot;more&quot;&gt;
// &quot;is&quot; | never | &quot;test&quot; | never | &quot;more&quot;
```

The result is shortened to `&quot;is&quot; | &quot;test&quot; | &quot;more&quot;`.

#### Combining it all

The type `ExtractPlaceholders` combines `ExtractPlaceholdersRaw` with `OnlyAlphabet` to only extract the keys we want.

```
type ExtractPlaceholders&lt;S extends string&gt; = OnlyAlphabet&lt;ExtractPlaceholdersRaw&lt;S&gt;&gt;;
```

`ExtractPlaceholdersRaw` extracts the list of keys, including some undesired keys, and `OnlyAlphabet` filters out the undesired keys to leave a clean list.

#### Ensuring the keys are present in the `data` parameter

The `data` parameter must have values corresponding to the keys from the `message` string, and it should be possible to add additional keys.

```
type PlaceholdersPresent&lt;S extends string, P extends Record&lt;string, any&gt;&gt; = {
  [K in ExtractPlaceholders&lt;S&gt;]: any;
} &amp; P;

function info&lt;S extends string, P extends Record&lt;string, any&gt;&gt;(message: S, data?: PlaceholdersPresent&lt;S, P&gt;) {
  logger.info({ ...data, msg: message });
}
```

`PlaceholdersPresent` receives a string and a record type as generic parameters. It creates one type with the extracted keys (`[K in ExtractPlaceholders&lt;S&gt;]: any;`) and intersects it with the record type.

To stick with the example from before, the resulting type would be

```
{
  is: any;
  test: any;
  more: any;
} &amp; P
```

The result is that if a `data` parameter is passed to the `info` function, it must have values for `is`, `test`, and `more` properties.

A slight caveat is that, with the current function definition, the `data` parameter is optional. Consequently, if no `data` parameter is provided, TypeScript doesn’t show errors even if there are some mandatory keys.

```
info(&quot;This %is a %test and %more&quot;); // No error
info(&quot;This %is a %test and %more&quot;, {}); // Type &apos;{}&apos; is missing the following properties from type &apos;{ is: any; test: any; more: any; }&apos;: is, test, more
```

This can be solved with more conditional type magic, but it’s already complicated enough. I left it as a future improvement.

## Final words

This must be one of the most complicated types I ever created. Even though this article only explains the final solution, it took many experiments and detours to finally get there.

I hope it helps others achieve the same.</content:encoded></item><item><title>On log levels</title><link>https://weberdominik.com/blog/on-log-levels/</link><guid isPermaLink="true">https://weberdominik.com/blog/on-log-levels/</guid><pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate><content:encoded>Logs are an important part of observability. When something went wrong, logs help with the investigation and understanding of what happened.

One of the most common problems is inconsistent use of log levels. If that’s the case, filtering by log level is almost useless and logs become hard to read.

Defining and documenting log levels helps maintain consistency. Everyone working on the project should be able to find the definitions one way or another. In [Lighthouse](https://lighthouseapp.io/), for example, there’s a `logUtils.ts` file which includes the documentation at the top and defines the log levels within each respective log function.

## Log levels

There are different widespread definitions for log levels. For example, [RFC 5424](https://datatracker.ietf.org/doc/html/rfc5424) defines eight severities, numbered 0 to 7.

```
           Numerical         Severity
             Code

              0       Emergency: system is unusable
              1       Alert: action must be taken immediately
              2       Critical: critical conditions
              3       Error: error conditions
              4       Warning: warning conditions
              5       Notice: normal but significant condition
              6       Informational: informational messages
              7       Debug: debug-level messages

              Table 2. Syslog Message Severities
```

[npm defines log levels](https://docs.npmjs.com/cli/v8/using-npm/logging#setting-log-levels) as `&quot;silent&quot;`, `&quot;error&quot;`, `&quot;warn&quot;`, `&quot;notice&quot;`, `&quot;http&quot;`, `&quot;timing&quot;`, `&quot;info&quot;`, `&quot;verbose&quot;`, `&quot;silly&quot;`.

In JavaScript’s `console` API, available levels include `error`, `warn`, `info`, `debug`, and `trace`.

There’s not one right system. The most obvious difference is the number of log levels. RFC 5424 defines 8 levels, npm has 9, other projects use more, and some use fewer.

As a general rule of thumb: the more complex DevOps is in the organization and the more integrations with other systems exist, the more log levels are required.

For example, if logs integrate with a paging system, and engineers are paged for errors, they should only be paged for errors that need immediate attention. Logging non-critical errors should still be possible, allowing engineers to review them later. In that case, multiple error levels make sense.

## Defining log levels

Regardless of the number of levels used, documenting the purpose and appropriate usage of each level is essential for consistency. The following are the definitions I use for [Lighthouse](https://lighthouseapp.io/):

- **Error**

  - unexpected things that are not recoverable

- **Warning**

  - unexpected things that are recoverable

- **Info**:

  - high-level what happens in the system
  - it should be possible to read info logs without becoming overwhelmed
  - engineers, even if they’re unfamiliar with the code, should understand what’s going on

- **Debug**:

  - significant changes made in the system
  - E.g. database updates, important new value computed and set on an object

- **Trace**:

  - general information about code execution
  - E.g. function start, value returned, specific code branch visited

In my experience, `error` and `warning` are quite intuitive. The difference between `info`, `debug`, and `trace` is less so, and must be more clearly defined.

## Other considerations

Defining log levels is a start, but does not automatically lead to good logs.

It’s important to keep log levels consistent. One unexpected event sometimes seems less important than another, and there’s a temptation to use `info` instead of `warning`. Don’t. It muddies the water and makes filtering more difficult later on.

Instead, it’s good to ask if it should be logged at all. Every log line should provide value. If it doesn’t, it should be removed. Just because a log line fits any of the definitions above doesn’t mean it should be added.

I find that imagining reading through the final log helps determine whether a log line adds value: if I saw this line, would it help me understand what happened?

The same principle applies to data. I use structured logging (log lines written as JSON objects), so data is part of every log line. The log aggregator I use shows the message of each log line in the overview, and its data in the details. Still, adding data just so it’s there is unnecessary. It increases load on the log aggregator and might obscure useful data.

## Final words

As a junior engineer, I didn’t understand the value of logs yet. As I became more experienced, worked with larger systems, and bugs became more complicated, my appreciation increased.

Back then, bugs were obvious and I could always attach a debugger and reproduce them easily. With more complicated bugs, just finding out what happened and how it was caused is 90% of the work.

Good logs help, a lot.</content:encoded></item><item><title>Monorepo setup with TypeScript, Tailwind, NextJs, and WXT (browser extension development) with shared components</title><link>https://weberdominik.com/blog/monorepo-wxt-nextjs/</link><guid isPermaLink="true">https://weberdominik.com/blog/monorepo-wxt-nextjs/</guid><pubDate>Mon, 23 Dec 2024 00:00:00 GMT</pubDate><content:encoded>The most-requested feature for Lighthouse is a browser extension to add articles to the library. Lighthouse has always been a monorepo to share code between the NextJs application and a couple of Lambda functions. Since there was only one application that used UI components, they were always part of the NextJs codebase. To avoid code duplication with the browser extension, the UI components had to move to a separate package.

## The goal: shared styles and components, and good developer experience

The fastest and easiest way would be to copy the Tailwind config and the components the extension requires and call it a day. But as developers we know: duplicating code that serves the same purpose is a sin.

The monorepo has two workspace directories, `apps` and `packages`. Apps are deployed entities, and packages are shared code.

Until now there was a `web` app, and with adding the browser extension there will now be an additional `web-extension` app and `ui-base` package.

```
├── apps/
│   ├── web/
│   └── web-extension/
└── packages/
    └── ui-base/
```

Everything that can be shared between client applications should live in the `ui-base` package. This includes the Tailwind config, UI components, and other shared code like the API client.

The developer experience should be as you’d expect. Autocomplete suggesting imports, hot module reloading during development, and type checking within the IDE.

Achieving these goals was not as straightforward as I thought it would be.

## Different build pipelines across frameworks

The main aspect that makes monorepos complicated to set up is that frameworks use different build tools.

NextJs uses swc, WXT uses Vite, and for the Lambda functions I use TypeScript (tsc).

NextJs [doesn’t support](https://github.com/vercel/next.js/discussions/50866) the `references` field of the `tsconfig`, and WXT [doesn’t support](https://wxt.dev/guide/essentials/config/typescript.html) path aliases.

When working on only one application it doesn’t matter. However, when working in a monorepo with multiple apps using different frameworks you have to constantly be aware of these limitations, and how they interact with TypeScript and the TypeScript language server (which is used for autocomplete suggestions).

## Aside: How Tailwind is included in the build pipeline

While expanding the setup with the WXT project, I was amazed and confused at the same time by how Tailwind is included.

Adding the config files `tailwind.config.js` and `postcss.config.js`, adding the Tailwind directives to a CSS file, and importing that file is enough.

```
@tailwind base;
@tailwind components;
@tailwind utilities;
```

Turns out both [NextJs](https://nextjs.org/docs/pages/building-your-application/configuring/post-css) and [Vite](https://vite.dev/guide/features#postcss) handle PostCSS natively, and since Tailwind is a PostCSS plugin nothing else is required.
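
One detail to watch (a general Tailwind requirement, shown here as a sketch rather than my exact config): each app’s `tailwind.config.js` must list the shared package’s files in its `content` globs, otherwise classes used only inside `ui-base` components won’t be generated.

```
// apps/web/tailwind.config.js (sketch)
module.exports = {
  content: [
    &quot;./src/**/*.{ts,tsx}&quot;,
    // Include the shared package so its classes are generated too.
    &quot;../../packages/ui-base/src/**/*.{ts,tsx}&quot;,
  ],
};
```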

## Examining monorepo templates

After my first try of moving all relevant code to the shared package failed with incomprehensible compile errors, I started checking out other NextJs monorepo setups to see what I can learn from them.

The first stop was the Turborepo [example](https://github.com/vercel/turborepo/tree/main/examples/with-tailwind) with NextJs and Tailwind. The shared `ui` package exports every component separately. Adding every component to the list is too much overhead, so it was a non-starter.

`packages/ui/package.json`

```
...
&quot;exports&quot;: {
    &quot;./styles.css&quot;: &quot;./dist/index.css&quot;,
    &quot;./card&quot;: &quot;./src/card.tsx&quot;
  },
...
```

Another [example](https://github.com/belgattitude/nextjs-monorepo-example/blob/main/apps/nextjs-app/tsconfig.json) uses path aliases.

`apps/nextjs-app/tsconfig.json`

```
...
&quot;paths&quot;: {
  &quot;@your-org/ui-lib/*&quot;: [&quot;../../../packages/ui-lib/src/*&quot;],
},
...
```

This was more promising, and very close to the final setup.

## Result

One benefit is that the `ui-base` package isn’t published, which makes it possible to treat it as just a directory for code separation and reuse. Its config files (`tsconfig.json` and `tailwind.config.js`) are there for the editor, so the VS Code plugins pick them up.

Leaving the compiling and bundling to the respective frameworks, which both support TypeScript, Tailwind, and React, means there’s no need to cater to their intricacies. The only requirement is adapting the respective config files so the shared code is picked up.

Path aliases are the perfect solution. Even though the files are in a different directory and imported via `@packages/ui-base/*`, they are treated like files in the project.

NextJs can handle TypeScript’s path aliases natively, so it’s enough to add it in the `tsconfig` file.

`apps/web/tsconfig.json`

```
...
&quot;paths&quot;: {
  &quot;@packages/ui-base/*&quot;: [&quot;../../packages/ui-base/src/*&quot;]
}
...
```

WXT handles path aliases differently: they must be added in the config file `wxt.config.ts`, and WXT doesn’t necessarily require them in the `tsconfig`.

`apps/web-extension/wxt.config.ts`

```
...
alias: {
  &quot;@packages/ui-base&quot;: resolve(&quot;../../packages/ui-base/src&quot;),
},
...
```

### Developer experience

With the above setup, it’s possible to import components, e.g. a button, from `@packages/ui-base`.

```
import { Button } from &quot;@packages/ui-base/components/library/button&quot;;
```

It works for NextJs and WXT during development and for production builds, but while writing code autocomplete doesn’t suggest the components and imports. To stay with the button example, writing `&lt;Bu` doesn’t suggest importing the `Button` component, but I’d very much like it to do so.

That’s because even though the files are referenced via the `paths` field in the `tsconfig`, TypeScript doesn’t automatically pick up those files. They must be added to the `include` paths of the `tsconfig.json`.

```
...
&quot;include&quot;: [
  ...
  &quot;../../packages/ui-base/src/**/*.ts&quot;,
  &quot;../../packages/ui-base/src/**/*.tsx&quot;
],
...
```

With that it works as expected for the NextJs app.

Since the [WXT docs](https://wxt.dev/guide/essentials/config/typescript.html) recommend against it, I initially didn’t include the path alias in the `web-extension` `tsconfig` file.

Because of that omission TypeScript didn’t auto-suggest importing components from `ui-base`. This surprised me. I would have expected that it still suggests the components, but with `../../packages/ui-base/src/…` as path.

Despite the WXT docs recommending against it, adding path aliases to the `tsconfig` doesn’t seem to cause any issues. After adding it, TypeScript correctly suggests component imports.

`apps/web-extension/tsconfig.json`

```
...
&quot;paths&quot;: {
  &quot;@packages/ui-base/*&quot;: [&quot;../../packages/ui-base/src/*&quot;]
}
...
```

## Final configuration

The end result is that the `tsconfig` files of the `web` and `web-extension` apps have path aliases and include paths, and the WXT config file has an alias.

`tsconfig.json`

```
{
  ...
  &quot;compilerOptions&quot;: {
    ...
    &quot;paths&quot;: {
      &quot;@packages/ui-base/*&quot;: [&quot;../../packages/ui-base/src/*&quot;]
    }
  },
  &quot;include&quot;: [
    ...
    &quot;../../packages/ui-base/src/**/*.ts&quot;,
    &quot;../../packages/ui-base/src/**/*.tsx&quot;
  ]
}
```

`apps/web-extension/wxt.config.ts`

```
export default defineConfig({
  alias: {
    &quot;@packages/ui-base&quot;: resolve(&quot;../../packages/ui-base/src&quot;),
  },
  ...
});
```

The `ui-base` package doesn’t need any special config, since its files are processed by the build pipelines of the apps.

I’m happy that the final setup is simple. However, understanding why it’s configured this way helps when dealing with future changes to NextJs, WXT, or TypeScript.

## Final words

Now that I have a working setup, it seems easy. But getting there was more difficult than I initially expected. With so many different options (e.g. project references, building the package) I ran into more than one dead end.

It would be amazing if in the future we as the web dev community can unify around one build and bundling system. It’d make setup and configuration so much simpler.

On the other hand, the web dev community is so great because people and companies frequently experiment with new approaches.

Both are different sides of the same coin. Nonetheless, one can dream.</content:encoded></item><item><title>When to choose the freemium pricing model as solo founder</title><link>https://weberdominik.com/blog/solo-founder-freemium/</link><guid isPermaLink="true">https://weberdominik.com/blog/solo-founder-freemium/</guid><pubDate>Mon, 06 May 2024 00:00:00 GMT</pubDate><content:encoded>Pricing is a difficult topic, especially for first-time SaaS founders who don&apos;t have experience in that area. One common question is whether the product should have a free plan. I have read many articles on that topic, and received a lot of contradictory advice. Often it&apos;s just a &quot;it worked for me so it&apos;ll work for you&quot; kinda thing. I haven&apos;t found a good reason for including or not including a free plan. Until recently.

Before explaining my reasons I should address the elephant in the room. Why not just include a free plan in every product and be done with it?

While it can work, it may hinder growth or hurt profitability.

Free only means free for the customer, not for the product creator. Free users still generate infrastructure costs, and sometimes need support. Spending significant amounts of time assisting non-paying users can ruin businesses. As a frame of reference, some people talk about a 100:1 free-to-paid ratio, meaning each paid customer needs to support 100 free ones. That can be difficult to make economically viable.

Free plans may also cannibalize paid usage. A user who would&apos;ve paid for the product might be ok with the free version. Structuring the free plan appropriately can be challenging. It has to be good enough for people to use it, but not so good that they see no need to upgrade. Achieving that balance can be tricky. Even more so for new products which don&apos;t have many features yet.

## Different archetypes of product-market fit

Sequoia recently posted an [article](https://www.sequoiacap.com/article/pmf-framework/) about the 3 archetypes of product-market fit, which was the missing piece for me. I&apos;ll explain them, but for more details check out the article; it&apos;s worth it.

**Hair on Fire** is when you solve a problem that&apos;s a clear and urgent need for customers. Demand is obvious, and customers are actively wrestling with the problem. They need a solution and compare products to see which is best for their situation.

**Hard fact** is when there&apos;s a pain point that&apos;s accepted as a part of life. People deal with the problem in one way or another, but are not actively trying to solve it. Often because there is no clear solution. It&apos;s annoying, but not annoying enough to warrant searching everywhere for one.

**Future vision** is when the product makes something possible that wasn&apos;t possible before. It&apos;s a fundamental change. Customers can&apos;t look for it because no one even knows that it&apos;s possible.

For the discussion of the freemium pricing model as a solo founder, the future vision archetype is irrelevant. Such products take a lot of effort to develop, which is out of reach for solo founders. It takes many people and a lot of money to make them happen. Which means solo founders work on either &quot;Hair on fire&quot; or &quot;Hard fact&quot; problems.

## Differences in required customer education

A big difference between &quot;Hair on fire&quot; and &quot;Hard fact&quot; problems is the amount of customer education that&apos;s necessary for people to understand the value of the product.

&quot;Hair on fire&quot; problems are intimately understood by the people having them. They know it is a problem, they know there is a solution, and they&apos;re actively looking for a solution that fits their needs. There is no need to convince people they have a problem worth solving. The only convincing required is that your product is the best tool for the job.

In contrast, &quot;Hard fact&quot; problems fade into the background. Often there is no solution, or existing solutions are not well-known, so people don&apos;t look for them. In other cases the problem is not important enough to be worth solving.

Another way to think about it is expected effort vs expected reward. The expected effort is high, because it&apos;s unknown if a solution really exists. The expected reward is low because the current way of doing things is accepted and rarely thought about.

In these cases it&apos;s important to educate customers that there is a better way, that it works for them, and that switching is easy enough to warrant the effort.

## When to choose freemium

Free plans are a tool for customer acquisition. They let users try the product stress-free, and ideally, once they&apos;re used to the product, they will convert to a paying customer.

Based on that I&apos;m sure you can already see where I&apos;m going with this argument. Whether a free plan makes sense depends on the kind of problem the product solves.

If you work on a **Hair on fire** problem, then there&apos;s **no need for a free plan**. Customers understand the problem, they have it now, and need to solve it now. Which means they&apos;re ready to choose a solution. They only need to verify that your solution is the right one. A free **trial** is enough for them to evaluate the product and compare it with others. If your product is the right one for them, they will pay to have their problem solved. If it&apos;s not, they will move on to another product that is.

If you offer a free plan, you&apos;ll likely get customers who choose your product because of the free plan, and never intend to pay. With a bottom-up strategy (where you want individual contributors introducing your product to the companies they work for, often dev tools like website analytics) this is great. Without that strategy you&apos;re doing charity.

For **Hard fact** problems, a **free plan is very useful**. Such products offer a better way to deal with a problem, but users need to be convinced. They need time to recognize the advantages of the new approach. Since it&apos;s not a pressing problem, they won&apos;t spend significant time figuring out how the product works and how it&apos;s better than what they did before.

A trial is time-bound, and would stress users. It sets a time limit to figure out if the product is better. This often results in users not starting to use it at all. With a free plan they have all the time in the world, and when they see that it is better than what they do currently, they will use the product more and more.

The key here is to design the free plan in a way that, as soon as people are at that inflection point, when they realize the value of the new solution, they are required to pay.

There are gray zones in between. There can be many other factors that affect the pricing model, such as competition. But I found it a useful framework to start with.

## My product decision

With [Lighthouse](https://lighthouseapp.io/) I was unsure what path to take. The reasons I laid out in this article helped me make the decision with conviction.

Lighthouse combines the features of an RSS feed reader, newsletter reader, and read-it-later app. It focuses on fighting information overload, and as such has a different structure than traditional RSS feed readers. Those show each feed as a separate list. In Lighthouse, newly published content goes into the inbox, where users select the content they are interested in and move it to the library. The rest is ignored.

Because its core structure differs from existing RSS readers, Lighthouse falls into the &quot;Hard fact&quot; category of problems. Therefore, I decided to add a free plan.

## Conclusion

For a long time I was in the &quot;never add a free plan&quot; camp. I read many articles about companies where adding a free plan hurt their business, sometimes significantly. And a lot of advice for solopreneurs is to not add a free plan.

Given that in the RSS reader landscape most companies have a free plan, this was always a difficult choice to make and maintain, especially without any grounding theory on whether or not to add one.

This way of thinking about free plans helped me a lot. It&apos;s impossible to know if a choice is right, or if another might have been better. But this framework helps me feel good about the decision I took.

And I hope it also adds clarity to your decision-making process.</content:encoded></item><item><title>Pivoting to a well-defined product category</title><link>https://weberdominik.com/blog/pivoting-well-defined-category/</link><guid isPermaLink="true">https://weberdominik.com/blog/pivoting-well-defined-category/</guid><pubDate>Wed, 17 Apr 2024 00:00:00 GMT</pubDate><content:encoded>I&apos;m a solopreneur working on my first serious product. It&apos;s called Lighthouse, and it combines the features of an RSS feed reader, newsletter reader, and read-it-later app.

The problem that I want to address is that when people subscribe to many blogs, newsletters, YouTube channels, and various other sources, the volume of content quickly becomes overwhelming. You can say that the vision is to fix content overload.

I&apos;ve written about it in detail [before](https://lighthouseapp.io/blog/introducing-lighthouse), so I&apos;m not going into more detail here. But that&apos;s the gist of it.

## A unique product

In the early days of Lighthouse I was under the impression that the product has to be unique. I intentionally distanced its messaging from the RSS feed reader landscape, and on the website mentioned RSS feeds only in passing. The messaging focused on the vision rather than the product.

This made it very difficult for me to communicate what the product is, and to figure out how I can contact potential customers. It was all about content aggregation and fixing content overload. Content aggregation is a tangible target market, but it&apos;s dominated by marketing use cases: content curation and marketing through curated content. Not the direction I envision for Lighthouse. And finding people who have the problem of content overload is almost impossible. Everyone deals with that somehow, and it&apos;s rarely recognized as a specific problem to solve.

I was effectively trying to create my own market category, with neither the knowledge nor capital to achieve it.

April Dunford writes in &quot;Obviously Awesome&quot;:

&gt; This [positioning] style is the most difficult because it involves dramatically shifting the way customers think, and shifting customer thinking takes a very strong, consistent, long-term effort. That means you need a certain amount of money and time to convince the market to make this shift. Because of the investment and time required, this style is generally best used by more established companies with massive resources to put toward educating the market and establishing a leadership position.

## The pivot

When I read that quote from April Dunford I decided to switch my approach. Instead of positioning Lighthouse as &quot;fixing content overload&quot;, I positioned it as a competitor to existing RSS feed readers. It was an RSS feed reader anyway, so that change made a huge amount of sense. So much so that I regret not making the change sooner.

Now, it&apos;s much easier to explain what I&apos;m building. And with a clear market to address, I can finally make progress in my user acquisition efforts.

The shift in strategy had another, unexpected, benefit. I freed my mind to look at other feed readers to check what a complete product looks like, get ideas for new features, and how existing features should work.

Before, I refrained from doing that, because I wasn&apos;t building just another RSS feed reader; I was building a tool to fix content overload. In my mind I was building something different, so there was no point in looking at other products.

The strategy shift prompted me to reevaluate where Lighthouse stands. The result was a list of features and improvements that needed to get done. It meant crunch time. Over the next 3 weeks I worked day and night to develop those features.

It was a lot of work, but I&apos;m very happy I did it. It made the product 100x better. And, incidentally, it now serves the original purpose of fixing content overload a lot better too.

## Learning

So, what&apos;s the moral of this story?

I learned that, as a solopreneur, when building a new product, it&apos;s best to first align with an existing category. Build a minimum version, with all the required features that people need. This makes it a lot easier to communicate what the product does and where it&apos;s useful.

You want to get the product ready for users as quickly as possible. That means taking shortcuts. But instead of cutting corners with the product, do that with non-differentiating features, like authentication. If performance is not a differentiating factor, shortcuts with the underlying architecture are also a good option to increase development velocity. Though at some point it will need cleanup.

And the final lesson I learned is to prioritize the product category over the unique aspects of the product. In messaging, because that will give users an easier time understanding what the product is. But also in the way I think about the product. Instead of building only what makes the product unique, check competitors and which supporting features they have that make the product useful and lovable. Differentiation will take care of itself as you shape the product to fit with your vision.</content:encoded></item><item><title>Product naming trends over time</title><link>https://weberdominik.com/blog/product-naming-trends/</link><guid isPermaLink="true">https://weberdominik.com/blog/product-naming-trends/</guid><pubDate>Mon, 15 Apr 2024 00:00:00 GMT</pubDate><content:encoded>It took me months to settle on the name Lighthouse for the product I&apos;m working on, which combines the functionality of an RSS feed reader, a newsletter reader, and a read-it-later app, to fight content overload.

Some people excel at naming their products. I&apos;m not one of them. I experimented a lot with different strategies. After a friend mentioned that all of my suggestions sounded outdated, I started to research how naming trends of SaaS products changed over time.

Here&apos;s what I found.

## Naming

### Descriptive and functional

In the early days of SaaS, names often related directly to the functionality of the software. For example, Salesforce (with sales right in the name), TurboTax, or Mailchimp. The name was usually enough to guess at least the market the product is in.

### Misspellings, dropped vowels, and suffixes

A couple of years later companies became more creative. Names tended to have misspellings or dropped vowels, just enough to create a unique name but still have the original word recognizable. For example Flickr or Tumblr.

Another trend at the same time was adding suffixes, most often -ly and -ify. Shopify and Spotify are probably the most well-known examples.

### Playfulness

The next trend was a move toward unique names that didn&apos;t necessarily have a direct relation to the product&apos;s function but were memorable and often had a playful tone, such as Zapier or Slack.

### Abstract words

Products shifted towards single-word names, not directly describing the tool&apos;s function but rather a concept related to the product.

Lighthouse, for example (the name I chose in the end), is such a case. Here the concept is that a lighthouse guides ships. It&apos;s similar to the product, which helps users weed out low-quality content.

Larger companies usually have a whole branding strategy around the name.

## Domains

Memorable domain names are important for businesses, and when a company becomes large enough, it eventually buys its .com domain. .com is still the most important TLD for SaaS businesses.

Over time fewer and fewer good .com domains were available, so people started to get creative. At first, they used prefixes like get or use before the product name.

Over time this expanded to include prefixes like join, my, go, and postfixes like app or hq.

Additionally, other TLDs like .io, .co, .ai, .app, and .dev have gained popularity alongside .com.</content:encoded></item><item><title>About risk taking in life and job</title><link>https://weberdominik.com/blog/risk-taking-in-life-and-job/</link><guid isPermaLink="true">https://weberdominik.com/blog/risk-taking-in-life-and-job/</guid><pubDate>Thu, 29 Feb 2024 00:00:00 GMT</pubDate><content:encoded>When speaking with individuals in the finance community who aim to maximize their wealth, the most common strategy I encounter regarding property investment is to buy as much as you can reasonably afford. If you have enough cash to buy one flat outright, you could alternatively buy three flats by taking out mortgages.

I&apos;m not going into detail about the reasons here, but on the surface it makes sense. You get three flats, get tenants to pay the mortgages, and at the end you have three instead of one. Additionally, there are also tax benefits associated with mortgages (at least where I live).

While it is sound financial advice, I don&apos;t think it makes sense for everyone. My thought was that there are different areas one can take risk in. The ones I&apos;m focusing on now are financial risk and job risk. Financial risk is about what you do with money: whether you save it, invest it somewhere, buy property, or take on debt. I define job risk as how you earn money through your day-to-day activities. A normal day job and entrepreneurship are the most common ones.

I believe it&apos;s necessary to further explain what I mean by job risk. It involves the ability to change how you earn money. Switching companies is one way. It&apos;s usually low-risk, but still a risk. It&apos;s a new environment, new people, possibly a new city. It&apos;s risky because you can&apos;t say how it&apos;ll turn out. You might not like it, or worst case the company might let you go. Going full-time on founding a new company is higher risk. You can never know if or when you&apos;ll start making money with it.

A well-known principle in finance is that taking on more risk increases your leverage. With more leverage the reward can be higher, but it&apos;s also easier to get wiped out. In this analogy it means going bankrupt.

Going back to the example: either buy three properties and take on mortgages, or buy one property without debt. The clear benefit of buying three properties is that, if all goes well, you end up with more wealth. Renters will help you out with mortgage payments, and after the mortgages are paid you got three for the price of one (oversimplified of course, forgive me).

On the other hand, until the mortgages are repaid, you&apos;re responsible for significantly higher monthly payments. If renters move out, you still have to pay. This adds financial pressure, making the prospect of changing jobs seem far riskier.

This is the tradeoff. Assuming financial risk, or debt, can reduce personal freedom, but also has the upside of ending up with more wealth. Taking personal risk, or job risk, means potentially not being able to afford great investments, but often has the upside of liking what you do much more.

There is no right answer. Everyone has to figure out for themselves what&apos;s right for them. Personally, I want to be able to take job risks, and am perfectly happy to give up some potential financial gain for it.</content:encoded></item><item><title>What I want to achieve with Lighthouse</title><link>https://weberdominik.com/blog/what-to-achieve-with-lighthouse/</link><guid isPermaLink="true">https://weberdominik.com/blog/what-to-achieve-with-lighthouse/</guid><pubDate>Mon, 26 Feb 2024 00:00:00 GMT</pubDate><content:encoded>I&apos;ve been learning from articles, blog posts, and newsletters since I was in school. They&apos;re not my only source of knowledge, but they are great to stay up to date, get practical knowledge from people who&apos;ve done it, expose myself to new ideas, and much more.

Over the years I&apos;ve gone through a couple different tools. Initially, I opened everything in a new tab. It worked for a while, but it became unsustainable. At some point I switched to Pocket, a read-it-later app, and my browser (and laptop memory) was freed of the tens to hundreds of tabs.

Over time I accumulated a long list of blogs I follow. Not all of them have newsletters, and relying on their content being upvoted on HackerNews or Reddit wasn&apos;t going to cut it. This is when I looked into RSS feed readers, and eventually developed my own.

Life was good. I followed the blogs I wanted, and even moved all newsletter subscriptions there. Goodbye email clutter.

Fast forward a few years, I have acquired much more knowledge than I had during my university years. Back then almost everything was new and interesting. Today, a lot of it just rehashes what I already know.

I find myself skimming and archiving content much more frequently now. Long content often stays in my reading list due to time constraints, only for me to discover it&apos;s not valuable at all when I finally take the time.

It&apos;s frustrating. Ideally I&apos;d like to know beforehand if content is high-quality and worth the time.

To that end I&apos;m building [Lighthouse](https://lighthouseapp.io/). It&apos;s a combination of RSS feed reader, newsletter reader, and read-it-later app, which focuses on dealing with content overload. Specifically overload of educational content. The application is targeted towards lifelong learners, so we know where to put our limited attention.

My vision is that everything you read, watch, or listen to contains valuable information. Phrased differently, no content you put your valuable attention on is a waste of time.

This is also the rationale behind the name `Lighthouse`. It&apos;s an analogy for how it shines a light on great content, leaving the rest in the dark.

This is not the only way to solve content overload, and there is no shortage of feed readers out there. Some of them already attempt to tackle content overload. They usually attempt it by using an AI that filters or sorts content for you.

I follow a different approach.

In my opinion, no AI, no content curator, not even friends, can recommend content that&apos;s 100% perfect for you. You always need to check yourself if it&apos;s worth the time. They suggest content, but you&apos;re the only one who can say if it&apos;s useful to you.

This process of filtering and selecting content is what I aim to facilitate. I want to provide as much insight into content as possible before you open it, in a scannable manner, so you can quickly decide if it&apos;s valuable or not.

Additional information is helpful because the headline is often not enough to make that decision. Sometimes it&apos;s misleading (aka clickbait), other times it lacks information altogether. In other instances, expectations are misaligned. We anticipate a deep dive, only to find it&apos;s merely an overview. There are endless reasons why we end up reading something that provides no value.

The more information we have upfront the better we can pick the pieces that we are looking for.

If you share this view on content overload, try out [Lighthouse](https://lighthouseapp.io/).</content:encoded></item><item><title>Think of dopamine as finite resource to spend on activities</title><link>https://weberdominik.com/blog/concept-dopamine-first/</link><guid isPermaLink="true">https://weberdominik.com/blog/concept-dopamine-first/</guid><pubDate>Sat, 10 Feb 2024 00:00:00 GMT</pubDate><content:encoded>We usually think of dopamine as a feel-good chemical we get from specific activities, without considering that dopamine production has limits. What if we reversed our thinking to dopamine-first, that it&apos;s a finite resource we can spend on activities?

Dopamine is known as the motivation and reward chemical in the brain. There are others as well, and it&apos;s more complicated than that, but for the context of this article let&apos;s think of getting (releasing) dopamine as feeling good and motivated. It&apos;s a good enough proxy.

In our daily lives we try to feel good as much as possible. The more we feel good the better. Nothing wrong with that, that&apos;s how it&apos;s supposed to be. Nobody wakes up thinking &quot;today I want to feel shit&quot;.

The body, however, can&apos;t produce an endless amount of dopamine. The brain operates via a self-regulation process called homeostasis, meaning for every high, there&apos;s a low. Regardless how amazing our lives are, we will have down phases. They&apos;re impossible to avoid.

Instead of trying to avoid down phases, which is a losing battle anyway, how about we control from which activities we get our dopamine, i.e. high phases?

We can get dopamine from a variety of different activities. Some are more productive, like working out, creating something, finishing tasks, learning, spending time in nature, or being with friends. Others are less productive, like scrolling TikTok, eating junk food, watching TV, or taking drugs.

If we have a limited amount of dopamine each day, how would you rather get it? By scrolling TikTok or by working on a project you&apos;re passionate about?

I&apos;d assume by working on your project. The problem is that if you spend too much time on TikTok, you&apos;ll enjoy that work less.

That is why I propose thinking of dopamine as a resource to spend on activities. It&apos;s the reverse of what actually happens (we release dopamine based on activities), but because it&apos;s limited, at some point it&apos;s over. And when this resource is depleted, our enjoyment of these activities goes down.

Thinking of dopamine, motivation, or enjoyment as an endless well doesn&apos;t work, because it&apos;s limited. At some point even the most enjoyable activity won&apos;t bring joy anymore.

My hope is that having the question &quot;How would you rather get dopamine?&quot; in the back of our minds will make us consider whether we really want to spend the 15 minutes of waiting time scrolling TikTok, thereby depleting dopamine and reducing motivation for the next thing.

As mentioned before, the body is more complex than that, but anecdotally this feels true. Whenever I spend my time watching short videos, I&apos;m less motivated for the task after that. Similarly, when I&apos;m well-rested I get more enjoyment from videos and movies than when I&apos;m tired and exhausted (apart from the guilt I feel of not being productive when I have the capacity to do so).

Even if it isn&apos;t 100% scientifically correct (and I&apos;m not claiming it is), it can be a useful concept, a useful approximation, in our daily lives.

Applying this concept made it easier for me to resist certain activities during the day, like scrolling TikTok when I&apos;m waiting for a couple of minutes. It makes clear that doing that will lessen the enjoyment of my work, workouts, and other productive tasks. And enjoying my work is more important to me than avoiding five minutes of boredom.

In short, it helps me enjoy my productive activities more.</content:encoded></item><item><title>Reading articles and newsletters reduces blind spots</title><link>https://weberdominik.com/blog/reading-reducing-blind-spots/</link><guid isPermaLink="true">https://weberdominik.com/blog/reading-reducing-blind-spots/</guid><pubDate>Fri, 09 Feb 2024 00:00:00 GMT</pubDate><content:encoded>Reading articles, blog posts, and newsletters is such an enjoyable activity for me because it reduces blind spots without needing a lot of cognitive effort. Finding out about a novel approach that I can use in my work, or a tool that gives me new capabilities, is exciting. I get to see what else is out there without expending much mental energy.

You know the feeling: you&apos;re trying to achieve something, going about it the most complicated way, and then a friend comes along and shows you a simple way to do it. It&apos;s great when that happens. You just saved hours, days, maybe even weeks of time. Reading articles and newsletters does the same. On steroids.

I divide reading into focused and unfocused. Focused reading is part of learning and acquiring skills. It&apos;s about picking a topic, reading about it and practicing what you read. Practice is important to hammer in the learned material.

Unfocused reading can stand alone. It&apos;s about reading non-fiction content for enjoyment, without the goal of learning anything specific. There are many different reasons to enjoy it. It might be a good story, the ups and downs of emotions, the validation if someone agrees with your view, or any other reason. For me it&apos;s mostly about excitement when I discover something new.

Phrasing it differently, it&apos;s about reducing blind spots and expanding my horizon.

The world contains so much information. A lot more than we can ever know. Luckily, being aware that information exists is a lot easier than knowing what that information is.

For example, the information that it&apos;s possible to create images with AI by describing what I want is already helpful. I don&apos;t need to know which software to use. When I am in a situation where I need it, I can use Google to find relevant software. But if I didn&apos;t even know about it, then I wouldn&apos;t think to look for it.

And that&apos;s what I mean by expanding my horizon and reducing blind spots. To widen the top line of the T, in T-shaped knowledge. To have superficial knowledge without knowing the details. If I need the details, I can dive in. But without knowing that there are details to explore, I wouldn&apos;t know where to start.

This is where relatively short-form content, usually less than 30 minutes, excels. I can expose myself to new and different knowledge at a high rate.

In a similar sense, this also exposes me to new and different ideas at a high rate. People often write about how they see the world, and about their unique insights. Some ideas resonate, others don&apos;t. But all of them inform and expand my own thinking and often lead to my very own new insights.</content:encoded></item><item><title>Startups and marketing</title><link>https://weberdominik.com/blog/startups-marketing/</link><guid isPermaLink="true">https://weberdominik.com/blog/startups-marketing/</guid><pubDate>Sun, 26 Nov 2023 00:00:00 GMT</pubDate><content:encoded>When I heard marketing professionals giving advice, it never fully made sense to me. Until a couple days ago, when it finally clicked.

An important distinction is the state of the company. As an indie developer I was looking at marketing from the point of view of a new product (getting first users), and marketers work through the lens of established businesses. They assume that the target audience, the ideal customer, is known. But in early stage products this is usually unclear.

What comes up most often in “marketing for beginners” lessons and advice is knowing the target audience, the ideal customer profile, defining a brand, having a vision and mission, and knowing the why, how, and what of Simon Sinek&apos;s golden circle.

![Golden circle](./golden-circle.png)

Marketing is always talked about as one pillar of a product company. I assumed that it&apos;s just something one has to do to get the business going.

It is a lot though, and can feel overwhelming. Especially for founders who have never done anything like it before.

In the early days, your understanding of that information changes quickly as you expand your knowledge of the market. At that stage it&apos;s not yet clear what the best market for the product is, and part of the fun (and challenge) is to figure that out.

That&apos;s why this way of marketing is not for new product companies.

It was important for me to understand that even beginner courses are for marketers working in established companies. Marketing in small or new companies is an entirely different discipline.

A product still needs users though. If it&apos;s not marketing, what do I call it? I decided that in the early phase I simply think of it as “user acquisition”. No marketing, no fancy marketing strategy, just plain and simple user acquisition.

This is the core message I want to get across. It&apos;s not necessary to define the ideal customer profile, mission, vision, and whatever else in the beginning. Just do whatever works to get users and worry about the rest later.

Once the product has acquired an initial set of customers, work from there. Find out which group of users gets the most value from the product, define the ideal customer from that, and then work toward product-market fit.

## Marketing

Marketing professionals threw around terms that didn&apos;t quite make sense in my engineering mind.

It&apos;s all about perspective of course. The marketing lens is different from the engineering lens, which is different from the product lens, and so forth. Understanding these different perspectives is a (communication) superpower.

Today I&apos;m an engineer and indie developer looking through the marketing lens.

![](./engineer-lens.png)

For the purpose of this article I split marketing into three areas.

- Advertising
- Product marketing
- Brand building

What I mean by product marketing is actions that bring users directly to the product. An obvious example is creating content which explains the benefits and features of the product.

I&apos;ll focus on brand building, as this is where my confusion came from. The other two are quite straightforward and easier to understand in their purpose and operations. That doesn&apos;t mean they&apos;re easy to do, though.

Brand building is about communicating what the company cares about. Their values, the vision and mission, the why, how and what. It is the baseline for the brand.

![why-how-what](./why-how-what.png)

Communicating it effectively is the job of marketing. What was crucial for me to understand is that it&apos;s about more than just explaining, for example, what the mission is. It&apos;s also about going beyond the product.

Let&apos;s assume the mission of a fictional company is to reduce carbon emissions to zero. Their product analyzes computing infrastructure, shows the impact it has on emissions, and suggests improvements. A marketing action can be to create ads and put them on specific keywords in Google.

Going beyond the product can be to write an article about how companies can reduce their carbon footprint with their travel policy. This article also works toward the mission. Such actions build a brand. This one communicates that the company cares about reducing carbon emissions, not only their product.

## Conclusion

I believe it&apos;s important to form a good mental model about all aspects of product business building to be able to have repeatable success. It helps set the right actions and avoid unnecessary ones.

Trying to do things the way they&apos;re supposed to be done is the perfectionist&apos;s curse. If you never tried to &apos;do it properly&apos; and just did what works, be happy about it, I envy you.

For me it was an important step to understand that most of the information marketing courses tell you to get is not necessary to get first customers.

It all seems quite obvious when I write it down like that, and I feel a bit stupid for not getting there earlier. But now that I know, I can put marketing advice better into context.</content:encoded></item><item><title>Refactoring an entire NextJs application to server components</title><link>https://weberdominik.com/blog/server-components-refactoring/</link><guid isPermaLink="true">https://weberdominik.com/blog/server-components-refactoring/</guid><pubDate>Mon, 20 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Next.js 13 introduced the app directory and React Server Components. On Star Wars day (May 4th) a couple of months later, React Server Components were marked as stable.

Server components separate page rendering, with some parts rendered on the server and others on the client. The key difference is that server components are _always_ rendered on the server, not prerendered on the server and hydrated on the client. Server-side rendering existed before, but now for the first time it&apos;s possible to mix it with client-side rendering _on the same page_.

I was intrigued by these changes and spent 3 days migrating the product I&apos;m working on, [https://looking-glass.app](https://looking-glass.app).

Before I started, the client rendered everything; every layout and page had the `use client` directive at the top.

The goal I set myself was to move as much rendering to the server as I could, meaning client components should sit as far down the tree as possible.

## User experience

In the early days of the internet, all pages were rendered on the server. The (simplified) flow was

- Navigation start
- Server requests data
- Server renders HTML
- Browser displays page with data
- Navigation finish

The rise of single-page applications changed this.

- Navigation start
- SPA changes DOM to show new page
- Navigation finish
- SPA calls API and shows loading state
- Server responds with data
- Browser renders data

With SPAs navigation is instant, but retrieving data for display on the new page takes time. Most applications show a loading state while the API calls are ongoing.  
With pages rendered on the server navigation might take longer but the page is complete, no need to load additional data.

Server components make it possible to get the best of both worlds. For every element on a page, developers can choose the pattern that best fits the needs.

It is possible to show [loading states](https://nextjs.org/docs/app/api-reference/file-conventions/loading) with server components as well. I opted to not do that because subjectively the application feels snappier without loading states.

There is a cutoff though. If rendering the new page takes too long, users might wonder whether it&apos;s still happening. In my case it was possible to keep the time to 200-300ms, which is fast enough that a loading state is superfluous. A couple of performance improvements were needed to get to that state though.

## Performance

Initially the product performed terribly with server components. Every server-side render took multiple seconds to complete. I attribute this to mistakes I made while refactoring, not to a flaw in server components. With one additional day of investigation and performance improvements it was at an acceptable level.

![performance before](./performance-before.png)

This is the timing pattern most requests had before the improvements. Content download takes so long because the server starts streaming the response immediately, while component rendering is still happening. Rendering includes any async data fetching as well; the more there is, the longer it takes.

After improvements, the typical pattern looked like this. From 1.2s down to 0.2s, a reduction of 80%!

![performance after](./performance-after.png)

### Improvement: Combining data requests

At first some server components fetched data themselves. The component that displays library content is an example. Here&apos;s how it looks.

![component](./component.png)

Besides content data, which is passed as prop, it needs the icon url and a list of all tags. The component fetched this additional data itself.

With the standard page size being 20, it resulted in 40 requests for each page (icon url and tags). As you can imagine, this was the cause of a big chunk of the request duration. After moving these database calls to the page and combining them into one, render time was reduced by ~700ms.

### Improvement: Caching

When multiple components require the same data, caching can substantially decrease processing times. In my case user data is used by the middleware, layouts, and pages. Without caching it would be fetched 3 times.

`fetch` is [extended](https://nextjs.org/docs/app/building-your-application/data-fetching/fetching-caching-and-revalidating) to memoize calls by default.

&gt; Next.js extends the native fetch Web API to allow you to configure the caching and revalidating behavior for each fetch request on the server. React extends fetch to automatically memoize fetch requests while rendering a React component tree.

Async code that doesn&apos;t use `fetch`, database calls for example, needs custom caching code.

With React&apos;s [`cache`](https://react.dev/reference/react/cache) it&apos;s possible to cache data for the same request. It takes a function and returns a cached function. During the request, if the function is called multiple times with the same parameters, it will execute only once and always return the result of the first execution.

Unfortunately at the time of writing it&apos;s only available in the canary and experimental release channels of React.
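As a sketch of what wrapping a database call could look like (`getUserDataFromDb` is a hypothetical stand-in for the application&apos;s actual query code):

```
import { cache } from &apos;react&apos;;

// Hypothetical database helper, stands in for the real query code.
async function getUserDataFromDb(userId: string) {
  // ...query the database...
  return { id: userId, name: &apos;Ada&apos; };
}

// During a single request, repeated calls with the same userId
// execute the query only once and reuse the first result.
export const getUserData = cache(getUserDataFromDb);
```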

With NextJs&apos; [`unstable_cache`](https://nextjs.org/docs/app/api-reference/functions/unstable_cache) it&apos;s possible to reuse the results of expensive operations across multiple requests. This also means caching across users, which creates opportunities for even greater performance improvements but also introduces the risk of accidentally exposing data to the wrong user.

The best way to avoid that risk is to pass in the user id to the cached function. `unstable_cache` creates a key to store cached data with. This [key](https://github.com/vercel/next.js/blob/02103feb296759d5b873b64ad4fca4e3030ad063/packages/next/src/server/web/spec-extension/unstable-cache.ts#L61) includes the function itself, the key parts passed as a second parameter, and the arguments given to the function.

```
const joinedKey =
  `${cb.toString()}-${Array.isArray(keyParts) &amp;&amp; keyParts.join(&apos;,&apos;)}
  -${JSON.stringify(args)}`
```
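A sketch of what this could look like in practice (`getProjectsFromDb` is a hypothetical helper, not from the codebase above):

```
import { unstable_cache } from &apos;next/cache&apos;;

// Hypothetical database helper, stands in for the real query.
async function getProjectsFromDb(userId: string) {
  return [{ id: 1, owner: userId }];
}

// Because userId is an argument of the cached function, it becomes
// part of the cache key, so one user&apos;s data is never served to another.
const getCachedProjects = unstable_cache(
  async (userId: string) =&gt; getProjectsFromDb(userId),
  [&apos;projects&apos;],
  { revalidate: 60 }
);
```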

Note that `unstable_cache` [only works for JSON values](https://github.com/vercel/next.js/blob/76da32e43fc5aafc9787f53a772458e828febbcd/packages/next/src/server/web/spec-extension/unstable-cache.ts#L127). Whatever data is cached will be returned in JSON format. `Date` objects, and any other values without a direct JSON representation, are returned as strings.

```
// TODO: handle non-JSON values?
body: JSON.stringify(result),
```

This is unexpected because the resulting function [type](https://github.com/vercel/next.js/blob/02103feb296759d5b873b64ad4fca4e3030ad063/packages/next/src/server/web/spec-extension/unstable-cache.ts#L12C1-L19C7) is the same as the callback provided.

```
export function unstable_cache&lt;T extends Callback&gt;(
  cb: T, // The callback type
  keyParts?: string[],
  options: {
	revalidate?: number | false
	tags?: string[]
  } = {}
): T { // Returns the same callback type, indicating that the return value is the same as well
// …
}
```

It&apos;s still unstable so I expect there will be improvements, but something to be aware of.

In my case it was useful to cache user data for the request, but not longer. I used `cache` and shaved off another 100ms.

### Parallelizing data requests

Pages often make more than one async call. If they&apos;re not dependent on each other it&apos;s possible to parallelize them.

For example changing

```
const sources = await getSourcesOverview(cookieStorage);
const inbox = await getInbox(cookieStorage, sourceId, page, 20, sortProperty, sortDirection);
const userData = await getUserData(cookieStorage);
```

into

```
const [sources, inbox, userData] = await Promise.all([
  getSourcesOverview(cookieStorage),
  getInbox(cookieStorage, sourceId, page, 20, sortProperty, sortDirection),
  getUserData(cookieStorage)
]);
```

In my case it didn&apos;t affect performance at all, which leads me to believe that the database provider might use only one connection. It&apos;s an area for further investigation.

## Refactoring process

The strategy I chose was to refactor from the outside in. Starting with layouts, continuing with pages, and then components further and further down the tree. This approach gradually led to more and more content being rendered on the server.

The first step of every layout, page, or component refactoring was to convert it to a server component, removing `&quot;use client&quot;` and marking it `async`.

To handle API calls the application uses React Query. Initially every GET API call became one awaited function call. The improvements mentioned in [Performance](#performance) were left for later.

After that only event handling code was left. Functions that change data based on user actions or update stale data. Moving them into a new client component to encapsulate these actions was enough to remove them from the current page or component. This added a bunch of new files to the application, which was cleaned up as the last step of the refactoring.
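A minimal sketch of such an extracted client component (the names and endpoint are illustrative, not taken from the actual application):

```
&apos;use client&apos;;

import { useRouter } from &apos;next/navigation&apos;;

// Hypothetical client component that encapsulates a single user action.
export function DeleteButton({ id }: { id: string }) {
  const router = useRouter();

  async function handleClick() {
    await fetch(`/api/items/${id}`, { method: &apos;DELETE&apos; });
    router.refresh(); // rerender the surrounding server components
  }

  return &lt;button onClick={handleClick}&gt;Delete&lt;/button&gt;;
}
```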

During the cleanup multiple components with similar purposes were combined into one. For example, the `TagButton` has a callback prop which is executed when a tag is selected. Multiple entities can be tagged, so the tag endpoint to call is context-specific. At first there were multiple wrappers based on the entities that are tagged, `ContentEntryTagButton` and `ContentSubscriptionTagButton`.

The combined component can either receive a callback or a string defining which entity should be updated.

```
interface Props {
  onSelect: &quot;updateUserContent&quot; | &quot;updateContentSubscription&quot; | ((tag: string) =&gt; void);
  elementId: string | null;
  …
}
```

If `&quot;updateUserContent&quot;` or `&quot;updateContentSubscription&quot;` is passed, the component does the API call itself. If a function is passed, it calls the function.

I&apos;m sure there are better solutions, but for the size of this application it was the most straightforward one.

## Reflections

Server components significantly impact the development of NextJs applications. What I&apos;m writing here won&apos;t even scratch the surface, and over time new ideas will be developed and best practices will adapt.

### What to render where

One idea I had is that all data loading should happen in server components, and user interaction should be in client components. Then there&apos;s no need for `GET` API endpoints or serialization of return data anymore.

It used to be good practice to return only the data the UI needs, nothing more. That becomes moot as well: even if the backend fetches all the data in the world, only the rendered components are sent to the client.

Because of these changes I could remove a lot of code, which always feels nice. And I&apos;m excited about [server actions](link) for the same reason.

On the other hand, there&apos;s a big difference in how much data is transferred. Rendered components are larger than raw data. For this reason it might be beneficial to do most rendering on the server except for lists, where raw data is transferred and the same component rendered repeatedly. The raw data plus component code is probably smaller than the rendering result in these cases.

### UI updates

Server components are updated by calling `router.refresh()`. Every server component on the page is rerendered and transferred to the client. The more components there are the larger the response. If it were only server components and one list item out of hundreds is removed, the whole list is transferred again.

It&apos;d be amazing if only the diff were transferred.

A nice consequence of centralizing data retrieval in server components is that following any kind of user action, calling `router.refresh()` is enough to ensure the UI is up to date. No more granular data invalidation. This of course leads to excessive database usage, but makes development easier. It&apos;s a tradeoff. If at some point in the future we get a way to update only certain server components, we&apos;d have the best of both worlds.

The prerequisite here is that retrieving data and rendering server components is fast enough. If it&apos;s slow, the user experience will suffer.

At the moment I don&apos;t have a way to show a loading state while server components refresh. My application is fast enough that it&apos;s not needed, however I am curious and it is on my list to check out soon.

### Note

One thing to keep in mind is that on page reload client components are prerendered on the server. There&apos;s no need to switch to server components if that&apos;s all you want. Component trees like ServerC-&gt;ClientC-&gt;ServerC can still be fully prerendered on the server. When the page hydrates, the state of ClientC is used to adapt the UI as required.

## Conclusion

Server components are a significant change, and still relatively new. We don&apos;t know all the implications they will have on web application development yet. Big companies need time to develop trust in new technologies and features, and adapting their huge applications takes even longer.

My application is small, and I only just refactored it. Discovering all the benefits and tradeoffs takes time, and I&apos;m looking forward to finding out more.

Besides that, the next innovation is already waiting. Server actions was released as stable with NextJs 14, on October 26th.</content:encoded></item><item><title>Vertical tabs in Visual Studio Code</title><link>https://weberdominik.com/blog/vscode-vertical-tabs/</link><guid isPermaLink="true">https://weberdominik.com/blog/vscode-vertical-tabs/</guid><description>With vertical tabs it&apos;s possible to have an overview over much more open files. Here&apos;s how to get it in VS Code.</description><pubDate>Sat, 25 Jun 2022 00:00:00 GMT</pubDate><content:encoded>I love vertical tabs. I use them wherever possible. Mostly in my browser (Firefox) and the IDEs I use.

For years I tried to have the same in VS Code too, but couldn&apos;t find a way. And now, finally, I found one.

It&apos;s actually quite easy.

First make sure that `View-&gt;Appearance-&gt;Show secondary side bar` is checked.

![](./1.png)

Then drag `Open Editors` to the secondary sidebar.

![](./2.png)

That&apos;s it. Now the open files are separated from the folder structure.

Now there&apos;s only one minor improvement, hiding the tabs on top.

Open the Command Palette (Ctrl + Shift + P) and go to `Preferences: Open Settings (JSON)`.

There add `&quot;workbench.editor.showTabs&quot;: false`.

Another improvement I found helpful is increasing the indent in the file structure. By default it&apos;s too small for me.  
To do that add `&quot;workbench.tree.indent&quot;: 20`, or whatever value works for you.

That&apos;s it. Here&apos;s the end result:

![](./3.png)</content:encoded></item><item><title>List of Built-In Helper Types in TypeScript</title><link>https://weberdominik.com/blog/ts-builtin-types-list/</link><guid isPermaLink="true">https://weberdominik.com/blog/ts-builtin-types-list/</guid><description>TypeScript has a few very useful helper types predefined, which aren&apos;t known widely enough. Here&apos;s a list of them with examples and explanations how they work for the more complex ones.</description><pubDate>Mon, 15 Jul 2019 00:00:00 GMT</pubDate><content:encoded>TypeScript has a few very useful helper types predefined, which aren&apos;t known widely enough. Here&apos;s a list of them with examples and explanations how they work for the more complex ones.

These helper types are either conditional or mapped types. To get an understanding of how they work in general, check out my other blogpost [Mapped Types in TypeScript](/blogposts/ts-mapped-types).

## Mapped Types

### Partial

```TypeScript
// Make all properties in T optional
type Partial&lt;T&gt; = {
    [P in keyof T]?: T[P];
};
```

### Required

```TypeScript
// Make all properties in T required
type Required&lt;T&gt; = {
    [P in keyof T]-?: T[P];
};
```

### Readonly

```TypeScript
// Make all properties in T readonly
type Readonly&lt;T&gt; = {
    readonly [P in keyof T]: T[P];
};
```

### Pick

```TypeScript
// From T, pick a set of properties whose keys are in the union K
type Pick&lt;T, K extends keyof T&gt; = {
    [P in K]: T[P];
};
```

Example:

```TypeScript
interface I {
    a: string;
    b: string;
    c: string;
}

type T = Pick&lt;I, &apos;a&apos; | &apos;b&apos;&gt;;
// T = { a: string; b: string; }
```

To ensure proper type checking of the provided properties, `K extends keyof T`. Since `keyof T` is a union of all property names, `&apos;a&apos; | &apos;b&apos; | &apos;c&apos;` in the example, anything that extends it can only contain a subset of those.

### Record

```TypeScript
// Construct a type with a set of properties K of type T
type Record&lt;K extends keyof any, T&gt; = {
    [P in K]: T;
};
```

Example:

```TypeScript
type T = Record&lt;&apos;a&apos; | &apos;b&apos; | 1, string&gt;;
// T = { a: string; b: string; 1: string }
```

The `Record` type is an interesting one. The example above is how it is supposed to be used. Provide a list of property keys and a type and it creates a type with all of these properties set to the given type.

To ensure that only valid keys can be given, `K extends keyof any`. The result of that is `string | number | symbol`.  
In my opinion it would be clearer to use `PropertyKey` instead, which is defined to be the same thing, but there may be historical reasons for defining it as `keyof any`.

Usually `keyof` gives a union of literal string types. The reasoning behind the result of `keyof any` is that `any` is no concrete type; it can be anything, with any possible property key. Keys can be strings, numbers, or symbols, so `keyof any` returns a union of all strings, all numbers, and all symbols, which is equivalent to `string | number | symbol`.

_Note: I don&apos;t know if this is actually the reasoning. It is just the best explanation I could come up with that doesn&apos;t make `keyof any` a special case._

Technically there are no `number` property keys in JavaScript (they are converted to strings). The TypeScript team chose to include it anyway, to make it easier for developers and keep consistency. You can read more about that decision in [issue #21983](https://github.com/Microsoft/TypeScript/issues/21983) on the TypeScript repository.

## Conditional Types

### Exclude

```TypeScript
// Exclude from T those types that are assignable to U
type Exclude&lt;T, U&gt; = T extends U ? never : T;
```

Example:

```TypeScript
type T = Exclude&lt;&apos;a&apos; | &apos;b&apos; | &apos;c&apos;, &apos;c&apos; | &apos;f&apos;&gt;;
// T = &apos;a&apos; | &apos;b&apos;
```

`Exclude` makes the most sense when applied to union types, because of the way conditional types are applied to them: distributively, on each type making up the union separately.  
In this example each string, `a`, `b`, `c`, is checked for whether it is contained in `&apos;c&apos; | &apos;f&apos;`, and if not, appended to the result.

The obvious use is for property keys, but it can also be used to exclude types extending another from a union.

```TypeScript
interface Base { z: string; }
interface E1 extends Base {}
interface E2 extends Base {}
interface Other { a: string }
type T = Exclude&lt;E1 | E2 | Other, Base&gt;;
// T = Other
```

You have to watch out though: TypeScript checks the _shape_, not the actual type. This means that if two types have the same properties, they are considered the same type, even if their names differ.

```TypeScript
interface Base { a: string; }
interface E1 extends Base {}
interface E2 extends Base {}
interface Other { a: string }
type T = Exclude&lt;E1 | E2 | Other, Base&gt;;
// T = never
```

In this example, now that `Base` has a property `a` instead of `z`, `Other extends Base` is also true and the result is `never`.

### Extract

```TypeScript
// Extract from T those types that are assignable to U
type Extract&lt;T, U&gt; = T extends U ? T : never;
```

`Extract` is the reverse of `Exclude`. Which means the same examples apply, just in reverse.

```TypeScript
type T = Extract&lt;&apos;a&apos; | &apos;b&apos; | &apos;c&apos;, &apos;c&apos; | &apos;f&apos;&gt;;
// T = &apos;c&apos;
```

```TypeScript
interface Base { z: string; }
interface E1 extends Base {}
interface E2 extends Base {}
interface Other { a: string }
type T = Extract&lt;E1 | E2 | Other, Base&gt;;
// T = E1 | E2
```

### Omit

```TypeScript
// Construct a type with the properties of T except for those in type K.
type Omit&lt;T, K extends keyof any&gt; = Pick&lt;T, Exclude&lt;keyof T, K&gt;&gt;;
```

Example:

```TypeScript
interface I {
    a: string;
    b: string;
    c: string;
}
type T = Omit&lt;I, &apos;a&apos; | &apos;b&apos;&gt;;
// T = { c: string; }
```

`Omit` is the reverse of `Pick`.  
`Exclude&lt;keyof T, K&gt;` results in all properties that are not in `K` (which is `&apos;a&apos; | &apos;b&apos;` in the example), and `Pick` creates the type with the remaining properties.

### NonNullable

```TypeScript
// Exclude null and undefined from T
type NonNullable&lt;T&gt; = T extends null | undefined ? never : T;
```

Types are often defined as a union with `null` or `undefined` to make assigning a value optional.  
For example

```TypeScript
interface I {}
type T = I | undefined;
const x: T = undefined;
```

`NonNullable` essentially makes it required again.

```TypeScript
type T2 = NonNullable&lt;T&gt;;
// T2 = I
```

_Note: For this to have any effect, `strictNullChecks` must be set to `true` in `tsconfig.json`. Otherwise `null` and `undefined` can be assigned to any type, and `I | undefined` is essentially the same as `I`._

### Parameters

```TypeScript
// Obtain the parameters of a function type in a tuple
type Parameters&lt;T extends (...args: any) =&gt; any&gt; = T extends (...args: infer P) =&gt; any ? P : never;
```

Example:

```TypeScript
type F = (p1: string, p2: number) =&gt; boolean;
type T = Parameters&lt;F&gt;;
// T = [string, number]
```

The definition looks quite complicated, so let&apos;s dissect it. On the left side there is

```TypeScript
type Parameters&lt;T extends (...args: any) =&gt; any&gt;
```

`(...args: any) =&gt; any` maps to any function. It&apos;s pretty much the same as `Function`. This means the left side can also be written as

```TypeScript
type Parameters&lt;T extends Function&gt;
```

Written like that it&apos;s easier to see that it requires `T` to be a function type.

On the right side it&apos;s not possible to do such a simplification, because the type of `args` has to be inferred.

```TypeScript
T extends (...args: infer P) =&gt; any ? P : never
```

The `infer` keyword can be used in combination with `extends` and instructs TypeScript to infer the type of some part of the `extends` condition.  
It can only be used with `extends` because otherwise there is nothing the type is limited to, hence nowhere to infer the type from. For the same reason it&apos;s only possible to reference the inferred type in the true branch of the condition.

In the case above `...args` is typed with `infer P`, which means that the arguments type will be inferred into `P`.

The return type of the function (`(...args: infer P) =&gt; any`) can be ignored, so it&apos;s typed `any`.

If the condition that `T` is a function is true, the result is the inferred arguments type `P`.  
Since `T` is restricted to be a function, this is always the case, otherwise there&apos;d be a compiler error.

### ConstructorParameters

```TypeScript
// Obtain the parameters of a constructor function type in a tuple
type ConstructorParameters&lt;T extends new (...args: any) =&gt; any&gt; = T extends new (...args: infer P) =&gt; any ? P : never;
```

Example:

```TypeScript
type F = new (p1: string, p2: number) =&gt; boolean;
type T = ConstructorParameters&lt;F&gt;;
// T = [string, number]
```

This is the same as `Parameters`, just typed for constructor functions with the `new` keyword added.

### ReturnType

```TypeScript
// Obtain the return type of a function type
type ReturnType&lt;T extends (...args: any) =&gt; any&gt; = T extends (...args: any) =&gt; infer R ? R : any;
```

Example:

```TypeScript
type F = (p1: string, p2: number) =&gt; boolean;
type T = ReturnType&lt;F&gt;;
// T = boolean
```

`ReturnType` works very similarly to `Parameters`. The only difference is that instead of the parameters, the return type is inferred (`infer R`) and returned as the result of the `extends` condition.

### InstanceType

```TypeScript
// Obtain the return type of a constructor function type
type InstanceType&lt;T extends new (...args: any) =&gt; any&gt; = T extends new (...args: any) =&gt; infer R ? R : any;
```

This is the same as `ReturnType`, just typed for constructor functions with the `new` keyword added.</content:encoded></item><item><title>Mapped Types in TypeScript</title><link>https://weberdominik.com/blog/ts-mapped-types/</link><guid isPermaLink="true">https://weberdominik.com/blog/ts-mapped-types/</guid><description>Mapped types, introduced in TypeScript 2.1, can significantly reduce typing effort. They can be hard to understand though, as they unfold their full potential only in combination with other (complicated) features.</description><pubDate>Mon, 15 Jul 2019 00:00:00 GMT</pubDate><content:encoded>Mapped types, introduced in TypeScript 2.1, can significantly reduce typing effort. They can be hard to understand though, as they unfold their full potential only in combination with other (complicated) features.

## `keyof` and Indexed Types

Let&apos;s start with the features necessary for mapped types, before taking a full dive.

`keyof`, also called the _index type query operator_, creates a literal string union of the public property names of a given type.

```ts
interface I {
  a: string;
  b: number;
}

type Properties = keyof I;
// Properties = &quot;a&quot; | &quot;b&quot;
```

Indexed types, specifically the _indexed access operator_ allow accessing the type of a property and assigning it to a different type. With the same interface `I` from above, it&apos;s possible to get the type of property `a` simply by accessing it.

```ts
type PropertyA = I[&quot;a&quot;];
// PropertyA = string
```

It&apos;s also possible to **pass multiple properties** as a union, which yields a union of the respective property types.

```ts
type PropertyTypes = I[&quot;a&quot; | &quot;b&quot;];
// PropertyTypes = string | number
```

Both features also work in combination.

```ts
type PropertyTypes = I[keyof I];
// PropertyTypes = string | number
```

The _indexed access operator_ is also type-checked, so accessing a property that doesn&apos;t exist would lead to an error.

```ts
type PropertyA = I[&quot;nonexistent&quot;];
// Property &apos;nonexistent&apos; does not exist on type &apos;I&apos;.
```

## Simple Mapped Types

With the basics down we can move on to mapped types themselves. In general, a mapped type maps a list of strings to properties. The list of strings is defined as a literal string union.

```ts
type Properties = &quot;a&quot; | &quot;b&quot; | &quot;c&quot;;
```

A simple mapped type based on that could look like this

```ts
type T = { [P in Properties]: boolean };
// type T = {
//   a: boolean;
//   b: boolean;
//   c: boolean;
// }
```

All it does is iterate over each possible string value and create a boolean property out of it.

By itself this is not terribly useful, but adding generics to the mix will be a great improvement. With it, it&apos;s possible to define a mapped type that makes every property optional.

```ts
type Partial&lt;T&gt; = { [P in keyof T]?: T[P] };

type IPartial = Partial&lt;I&gt;; // &apos;I&apos; is the interface defined on top
// type IPartial = {
//   a?: string;
//   b?: number;
// }
```

It looks a bit more complicated, but uses the same structure as the simpler definition before. The major difference here is that it takes an existing type and adapts the properties.

First it uses `keyof` to get a literal string union of all property names (`keyof T`). Then it iterates over all of them (`[P in keyof T]`) and makes each optional by adding the question mark. The indexed access operator (`T[P]`) assigns each new property the same type it has on the original type.

It&apos;s not limited to making properties optional. Every modifier and type can be used; it&apos;s not even necessary to use the original property type. For example, changing every property into a number

```ts
type ToNumber&lt;T&gt; = { [P in keyof T]: number };
```

or into a `Promise`

```ts
type ToPromise&lt;T&gt; = { [P in keyof T]: Promise&lt;T[P]&gt; };
```

It&apos;s even possible to remove modifiers, by adding a `-` in front of it. For example removing the `readonly` modifier from all properties of a type.

```ts
type RemoveReadonly&lt;T&gt; = { -readonly [P in keyof T]: T[P] };
```

The same thing works for removing the optional marker, effectively making the property required:

```ts
type RemoveOptional&lt;T&gt; = { [P in keyof T]-?: T[P] };
```

One thing to note here is that **mapped types don&apos;t apply to basic types**.

```ts
type MappedBasic = Partial&lt;string&gt;;
// type MappedBasic = string
```

This covers the basics of mapped types.  
The next sections will show how mixing together additional advanced TypeScript features makes them even more powerful (and complicated).

## Conditional Types

TypeScript 2.8 introduced conditional types, which select a possible type based on a type relationship test. For example

```ts
T extends Function ? string : boolean
```

It can be used wherever generics are available, such as the return type of a function.

```ts
declare function f&lt;T&gt;(p: T): T extends Function ? string : boolean;
```

If the parameter `p` is a function, the return type is `string`; otherwise it&apos;s `boolean`.

The same is true for classes

```ts
class C&lt;T&gt; {
  value: T extends Function ? string : boolean;
}
```

and type aliases

```ts
type T1&lt;T&gt; = T extends Function ? string : boolean;
type T2 = T1&lt;() =&gt; number&gt;;
// T2 = string
```

### Distributive Conditional Types

Conditional types have a special case, namely if the **type parameter** to a conditional type **is a union**. It&apos;s called a _distributive conditional type_. In that case, the conditional type is applied separately to each type making up the union.

To illustrate:

```ts
type T1&lt;T&gt; = T extends string ? string : boolean;
type Union = &quot;a&quot; | &quot;b&quot; | true;
type T2 = T1&lt;Union&gt;;
// T2 = string | boolean
```

What happens here is that `T1` is applied _separately_ to `&apos;a&apos;`, `&apos;b&apos;` and `true`, and the results are combined back into a union, which yields `string | string | boolean`. The two `string`s collapse into one, so the end result is `string | boolean`.

While the TypeScript team has given this case for conditional types a special name, it **also applies to mapped types**. Similar to conditional types, applying a mapped type to a union applies it separately to each type making up the union and combines the results back into a union.

```ts
interface I1 {
  p1: boolean;
}
interface I2 {
  p2: string;
}
interface I3 {
  p3: number;
}
type Union = I1 | I2 | I3;
type T = Partial&lt;Union&gt;;
// T = Partial&lt;I1&gt; | Partial&lt;I2&gt; | Partial&lt;I3&gt;
```

## Enhanced Mapped Types

Up until now, every type mapping was uniform: either all properties got the same type (e.g. `string`), or each kept the corresponding type from the original type. The only exception was modifiers.

Conditional types add the ability to express non-uniform type mappings, for example keeping all function property types while changing all other properties to `boolean`.

```ts
interface I {
  p1: () =&gt; void;
  p2: (a: string) =&gt; boolean;
  p3: string;
  p4: string;
}

type T1&lt;T&gt; =
  { [P in keyof T]: T[P] extends Function ? T[P] : boolean };

type T2 = T1&lt;I&gt;;
// T2 = {
//   p1: () =&gt; void;
//   p2: (a: string) =&gt; boolean;
//   p3: boolean;
//   p4: boolean;
// }
```

## Final Words

What I found was that in normal application code, mapped types are rarely needed. They are much more useful for library and framework code. Most people won&apos;t have to write them themselves, but will encounter them when reading type definitions of libraries.

I hope you now have a better understanding of mapped types and the related TypeScript features, so that you&apos;ll at least have an easier time reading the type definitions of the packages you depend on.

</content:encoded></item><item><title>Setting up a Reverse-Proxy with Nginx and docker-compose</title><link>https://weberdominik.com/blog/reverse-proxy-nginx-docker-compose/</link><guid isPermaLink="true">https://weberdominik.com/blog/reverse-proxy-nginx-docker-compose/</guid><description>Nginx is a great piece of software that allows you to easily wrap your application inside a reverse-proxy, which can then handle server-related aspects, like SSL and caching, completely transparent to the application behind it.</description><pubDate>Sat, 05 May 2018 00:00:00 GMT</pubDate><content:encoded>Nginx is a great piece of software that allows you to easily wrap your application inside a reverse-proxy, which can then handle server-related aspects, like SSL and caching, completely transparent to the application behind it.

## Introduction

Some aspects of web applications, like SSL encryption, request caching and service discovery can be managed outside of the application itself. Reverse-proxies like Nginx can handle many of those responsibilities, so we as developers don&apos;t have to think about it in our software.

Additionally, some software is not meant to be exposed to the internet, since it doesn&apos;t have proper security measures in place. Many databases are like that. And in general it is good practice not to make internal services public-facing when they don&apos;t have to be.

All of that can be achieved with docker-compose and Nginx.

## docker-compose

[docker-compose](https://docs.docker.com/compose/) is a neat little tool that lets you define a range of docker containers that should be started at the same time, and the configuration they should be started with. This includes the exposed ports, the networks they belong to, the volumes mapped to them, the environment variables, and everything else that can be configured with the `docker run` command.

In this section I&apos;ll briefly explain how to configure the docker-compose features used in this article. For more details take a look at the [documentation](https://docs.docker.com/compose/compose-file/).

The main entry point is a `docker-compose.yml` file. It configures all aspects of the containers that should be started together.

Here is an example `docker-compose.yml`:

```yaml
version: &quot;3&quot;
services:
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
      - 443:443

  ismydependencysafe:
    image: ismydependencysafe:latest
    container_name: production_ismydependencysafe
    expose:
      - &quot;80&quot;
```

As you can see, there are 2 images specified.  
First `nginx`, with the container name `production_nginx`. It specifies a volume that replaces the default Nginx configuration file, and maps the host&apos;s ports 80 and 443 to the container&apos;s ports 80 and 443.  
The second image is one I created myself. It exposes port 80. The difference to the `ports` configuration is that exposed ports are not published to the host machine. That&apos;s why it can also specify port 80, even though `nginx` already maps it.

There are a few other configuration options used in this article, specifically networks, volumes and environment variables.

### Networks

With networks it is possible to specify which containers can talk to each other. They are declared as a new root config entry and referenced in the container configurations.

```YAML
version: &apos;3&apos;
services:
  nginx:
    ...
    networks:
      - my-network-name

  ismydependencysafe:
    ...
    networks:
      - my-network-name

networks:
  my-network-name:
```

In the root object `networks`, the network `my-network-name` is defined. Each container is assigned to that network by adding it to the container&apos;s `networks` list.

If no network is specified, all containers are in the same network, which is created by default. Therefore, if only one network is used, no network has to be specified at all.

A convenient feature of networks is that containers in the same one can reference each other by name. In the example above, the url `http://ismydependencysafe` will resolve to the container `ismydependencysafe`.

### Volumes

Volumes define persistent storage for docker containers. If an application writes to a location where no volume is defined, that data will be lost when the container stops.

There are 2 types of volumes: ones that map a file or directory on the host to one inside the container, and named volumes, which just make a file or directory persistent without making it accessible on the host&apos;s file system (of course it is stored _somewhere_, but that is docker implementation specific and should not be meddled with).

The first type, volumes that map a specific file or directory into the container, we have already seen in the example above. Here it is again, with an additional volume that maps a directory in the same way:

```YAML
version: &apos;3&apos;
services:
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt/:/etc/letsencrypt/
...
```

Named volumes are specified similarly to networks: as a separate root configuration entry and directly on the container configuration.

```YAML
version: &apos;3&apos;
services:
  nginx:
    ...
    volumes:
      - &quot;certificates:/etc/letsencrypt/&quot;

    ...

volumes:
  certificates:
...
```

### Environment Variables

Docker can also specify environment variables for the application in the container. In the compose config, there are multiple ways to do so, either by specifying a file that contains them, or declaring them directly in `docker-compose.yml`.

```YAML
version: &apos;3&apos;
services:
  nginx:
    ...
    env_file:
      - ./common.env
    environment:
      - ENV=development
      - APPLICATION_URL=http://ismydependencysafe
    ...
```

As you can see, both ways can also be used at the same time. Just be aware that variables set in `environment` overwrite the ones loaded from the files.

The environment files must have the format `VAR=VAL`, one variable on each line.

```
ENV=production
APPLICATION_URL=http://ismydependencysafe
```
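Since the format is plain `VAR=VAL` pairs, such a file can also be loaded in a shell to sanity-check it (the values here are the illustrative ones from above; docker-compose parses the file itself, this is only a demonstration):

```bash
# Create a sample env file and source it with auto-export enabled
printf 'ENV=production\nAPPLICATION_URL=http://ismydependencysafe\n' > common.env
set -a
. ./common.env
set +a
echo "$APPLICATION_URL"
# prints http://ismydependencysafe
```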

### CLI

The commands for starting and stopping the containers are pretty simple.

To start use `docker-compose up -d`.  
The `-d` specifies that it should be started in the background. Without it, the containers would be stopped when the command line is closed.

To stop use `docker-compose down`.

Both commands look for a `docker-compose.yml` file in the current directory. If it is somewhere else, specify it with `-f path/to/docker-compose.yml`.

Now that the basics of docker-compose are clear, let&apos;s move on to Nginx.

## Nginx

Nginx is a web server with a wide array of features, including reverse proxying, which is what it is used for in this article.  
It is configured with a `nginx.conf`. By default it looks for it in `/etc/nginx/nginx.conf`, but it is of course possible to specify another file.

As a reverse proxy, it can transparently handle two very important aspects of a web application, encryption and caching. But before going into detail about that, let&apos;s see how the reverse proxy feature itself is configured:

```
http {
  server {
    server_name your.server.url;

    location /yourService1 {
      proxy_pass http://localhost:80;
      rewrite ^/yourService1(.*)$ $1 break;
    }

    location /yourService2 {
      proxy_pass http://localhost:5000;
      rewrite ^/yourService2(.*)$ $1 break;
    }
  }

  server {
    server_name another.server.url;

    location /yourService1 {
      proxy_pass http://localhost:80;
      rewrite ^/yourService1(.*)$ $1 break;
    }

    location /yourService3 {
      proxy_pass http://localhost:5001;
      rewrite ^/yourService3(.*)$ $1 break;
    }
  }
}
```

The Nginx config is organized in **contexts**, which define the kind of traffic they are handling. The `http` context is (obviously) handling http traffic. Other contexts are `mail` and `stream`.

The `server` configuration specifies a virtual server, where each can have its own rules. The `server_name` directive defines which urls or IP addresses the virtual server responds to.

The `location` configuration defines where to route incoming traffic. Depending on the url, the requests can be passed to one service or another. In the config above, the start of the route specifies the service.  
`proxy_pass` sets the new url, and with `rewrite` the url is rewritten so that it fits the service. In this case, the `yourService{x}` prefix is removed from the url.
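To see what the `rewrite` regex actually does, here it is applied with `sed` outside of Nginx (the url is just an illustrative example):

```bash
# Strip the /yourService1 prefix, exactly what the rewrite rule does
echo "/yourService1/api/status" | sed -E 's#^/yourService1(.*)$#\1#'
# prints /api/status
```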

This was a general overview, later sections will explain how caching and SSL can be configured.

For more details, check out the [docs](https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/).

Now that we know the pieces, lets start putting them together.

## Setup Nginx as a Reverse-Proxy inside Docker

For a basic setup only 3 things are needed:

1. Mapping of the host ports to the container ports
2. Mapping a config file to the default Nginx config file at `/etc/nginx/nginx.conf`
3. The Nginx config

In a docker-compose file, the port mapping can be done with the `ports` config entry, as we&apos;ve seen above.

```YAML
    ...
    ports:
      - 80:80
      - 443:443
    ...
```

The mapping for the Nginx config is done with a volume, which we&apos;ve also seen before:

```YAML
    ...
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ...
```

The Nginx config is assumed to be in the same directory as `docker-compose.yml` (`./nginx.conf`), but it can of course be anywhere.

### Cache Configuration

Adding caching to the setup is quite easy; only the Nginx config has to be changed.  
In the `http` context, add a `proxy_cache_path` directive, which defines the local filesystem path for cached content and the name and size of the shared memory zone.  
Keep in mind though that the path is _inside_ the container, not on the host&apos;s filesystem.

```
http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
}
```

In the `server` or `location` context for which responses should be cached, add a `proxy_cache` directive specifying the memory zone.

```
  ...
  server {
    proxy_cache one;
  ...
```

That&apos;s enough to define the cache with the default caching configuration. There are a lot of other directives which specify which responses to cache in much more detail. For more details on those, have a look at the [docs](https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/#specifying-which-requests-to-cache).
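For instance, a slightly more explicit cache setup might look like this (the values are illustrative, not tuned recommendations):

```
http {
  proxy_cache_path /data/nginx/cache keys_zone=one:10m max_size=500m inactive=60m;

  server {
    proxy_cache one;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    ...
  }
}
```

`max_size` caps the cache on disk, `inactive` evicts entries that haven&apos;t been requested for a while, and `proxy_cache_valid` controls how long responses with the given status codes are considered fresh.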

## Securing HTTP Traffic with SSL

By now the server setup is finished. docker-compose starts up all containers, and the Nginx container acts as a reverse-proxy for the services. There is just one thing left to set up, as [this](https://doesmysiteneedhttps.com) site so beautifully explains, encryption.

To install certbot, the client that fetches certificates from Let’s Encrypt, follow the [install instructions](https://certbot.eff.org).

### Generating SSL Certificates with certbot

certbot has a variety of ways to get SSL certificates. There are plugins for widespread webservers, like Apache and Nginx, one to use a standalone webserver to verify the domain, and of course a manual way.

We&apos;ll use the `standalone` plugin. It starts up a separate webserver for the certificate challenge, which means port 80 or 443 must be available. For this to work, the Nginx webserver has to be shut down, as it binds to both ports, and the certbot server needs to be able to accept inbound connections on at least one of them.

To create a certificate, execute

```bash
certbot certonly --standalone -d your.server.url
```

and follow the instructions. You can also create a certificate for multiple urls at once, by adding more `-d` parameters, e.g. `-d your.server1.url` `-d your.server2.url`.

### Automating Certificate Renewal

The Let&apos;s Encrypt CA issues short-lived certificates: they are only valid for 90 days. This makes automating the renewal process important. Thankfully, certbot makes that easy with the command [`certbot renew`](https://certbot.eff.org/docs/using.html#renewing-certificates). It checks all installed certificates, and renews the ones that will expire in less than 30 days.

It will use the same plugin for the renewal as was used when initially getting the certificate. In our case that is the `standalone` plugin.

The challenge process is the same, so the renewal also needs port 80 or 443 to be free.  
certbot provides pre and post hooks, which we use to stop and start the webserver during the renewal, freeing the ports.  
The hooks are executed only if a certificate actually needs to be renewed, so there is no unnecessary downtime of your services.

Since we are using `docker-compose`, the whole command looks like this:

```bash
certbot renew --pre-hook &quot;docker-compose -f path/to/docker-compose.yml down&quot; --post-hook &quot;docker-compose -f path/to/docker-compose.yml up -d&quot;
```

To complete the automation simply add the previous command as a cronjob.  
Open the cron file with `crontab -e`.  
In there add a new line with

```
@daily certbot renew --pre-hook &quot;docker-compose -f path/to/docker-compose.yml down&quot; --post-hook &quot;docker-compose -f path/to/docker-compose.yml up -d&quot;
```

That&apos;s it. Now the renew command is executed daily, and you won&apos;t have to worry about your certificates&apos; expiration date.

### Using the Certificates in the Nginx Docker Container

By now the certificates are requested and stored on the server, but we don&apos;t use them yet. To achieve that, we have to

1. Make the certificates available to the Nginx container and
2. Change the config to use them

To make the certificates available to the Nginx container, simply specify the whole `letsencrypt` directory as a volume on it.

```YAML
  ...
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - /etc/letsencrypt/:/etc/letsencrypt/
  ...
```

Adapting the config and making it secure is a bit more work.
By default, a virtual server listens on port 80, but with SSL it should also listen on port 443. This is specified with 2 `listen` directives.  
Additionally, the certificate must be defined. This is done with the `ssl_certificate` and `ssl_certificate_key` directives.

```
  ...
  server {
    ...
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/your.server.url/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your.server.url/privkey.pem;
  }
  ...
```

These small changes are enough to configure nginx for SSL.  
It uses the default SSL settings of Nginx though, which is ok, but can be improved upon.
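One common addition (not part of the setup above, so adapt it to your needs) is to stop serving the application over plain HTTP entirely and redirect it to HTTPS instead, with a separate `server` block; a sketch:

```
server {
  listen 80;
  server_name your.server.url;
  return 301 https://$host$request_uri;
}
```

The main virtual server then only keeps the `listen 443 ssl;` directive.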

### Improving Security of Nginx Config

First things first: if you use the latest version of Nginx, its default SSL settings are secure. There is no strict need to define the protocols, ciphers and other parameters yourself.

That said, there are a few SSL directives with which we can improve security even further.  
Just keep in mind that by setting these, you are responsible for keeping them up to date yourself. The changes Nginx does to the default config settings won&apos;t affect you, since you&apos;re overwriting them.

First, set

```
ssl_protocols TLSv1.1 TLSv1.2;
```

This disables all SSL protocols and TLSv1.0, which are considered insecure ([TLSv1.0](https://www.netsparker.com/web-vulnerability-scanner/vulnerabilities/insecure-transportation-security-protocol-supported-tls-10/), [SSLv3](https://www.netsparker.com/web-vulnerability-scanner/vulnerabilities/insecure-transportation-security-protocol-supported-sslv3/), [SSLv2](https://www.netsparker.com/web-vulnerability-scanner/vulnerabilities/insecure-transportation-security-protocol-supported-sslv2/)). TLSv1.1 and TLSv1.2 are, at the time of writing (July 2018), considered secure, but nobody can promise that they will not be broken in the future.

Next, set

```
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;
```

The ciphers define how the encryption is done. Those values are copied from [this article](https://bjornjohansen.no/optimizing-https-nginx), as I&apos;m not an expert in this area.

Those are the most important settings. To improve security even more, follow these articles:

- [Optimizing HTTPS on Nginx](https://bjornjohansen.no/optimizing-https-nginx)
- [How to setup Let&apos;s Encrypt for Nginx on Ubuntu 18.04](https://gist.github.com/cecilemuller/a26737699a7e70a7093d4dc115915de8#stronger-settings-for-a)

You can check the security of your SSL configuration with a great [website](https://www.ssllabs.com/ssltest/analyze.html) SSL Labs provides.

## Wrap up

In this article we&apos;ve covered how to set up docker-compose, how to use its network and volume features and set environment variables, and how to use Nginx as a reverse proxy, including caching and SSL. Everything that&apos;s needed to host a project.

Just keep in mind that this is not a terribly professional setup, any important service will need a more sophisticated setup, but for small projects or side-projects it is totally fine.

## Amendment

Here are the resulting `nginx.conf` and `docker-compose.yml` files. They include placeholder names, urls and paths for your applications.

**docker-compose.yml**

```
version: &apos;3&apos;
services:
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/error.log:/etc/nginx/error_log.log
      - ./nginx/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443

  your_app_1:
    image: your_app_1_image:latest
    container_name: your_app_1
    expose:
      - &quot;80&quot;

  your_app_2:
    image: your_app_2_image:latest
    container_name: your_app_2
    expose:
      - &quot;80&quot;

  your_app_3:
    image: your_app_3_image:latest
    container_name: your_app_3
    expose:
      - &quot;80&quot;
```

**nginx.conf**

```
events {

}

http {
  error_log /etc/nginx/error_log.log warn;
  client_max_body_size 20m;

  proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

  server {
    server_name server1.your.domain;

    location /your_app_1 {
      proxy_pass http://your_app_1:80;
      rewrite ^/your_app_1(.*)$ $1 break;
    }

    location /your_app_2 {
      proxy_pass http://your_app_2:80;
      rewrite ^/your_app_2(.*)$ $1 break;
    }
  }

  server {
    server_name server2.your.domain;
    proxy_cache one;
    proxy_cache_key $request_method$request_uri;
    proxy_cache_min_uses 1;
    proxy_cache_methods GET;
    proxy_cache_valid 200 1y;

    location / {
      proxy_pass http://your_app_3:80;
      rewrite ^/your_app_3(.*)$ $1 break;
    }

    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/server2.your.domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server2.your.domain/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
  }
}
```</content:encoded></item><item><title>Hosting Asp.Net Core Applications on Windows Server Core</title><link>https://weberdominik.com/blog/hosting-asp-net-core/</link><guid isPermaLink="true">https://weberdominik.com/blog/hosting-asp-net-core/</guid><description>Recently, I&apos;ve found myself in the position of having to host an application on Windows Server. Having never managed a Windows Server before, I struggled to find relevant information, especially since most of it is written for a Windows Server with installed UI, and the default image on Azure is a Core image, without UI. This is mostly documentation for myself, but maybe you find it helpful too.</description><pubDate>Wed, 04 Apr 2018 00:00:00 GMT</pubDate><content:encoded>Recently, I&apos;ve found myself in the position of having to host an application on Windows Server. Having never managed a Windows Server before, I struggled to find relevant information, especially since most of it is written for a Windows Server with installed UI, and the default image on Azure is a Core image, without UI. This is mostly documentation for myself, but maybe you find it helpful too.

## Introduction

This is a step-by-step introduction to hosting an Asp.Net Core application on Windows Server Core with IIS (Internet Information Services).

We will cover how to set up IIS, how to configure it, how to deploy to it with Web Deploy in Visual Studio and securing connections to that application with https.

I&apos;m using a virtual machine from Azure, which provides a nice UI for managing firewall rules. That is probably very different for you, so I&apos;ll just say which ports have to be open, and not cover how to do that.

## Setting up the Server

After logging in on the server, you are greeted by a command prompt. Since most commands we will use are PowerShell commands, we have to start it.  
Just enter `powershell` and execute it. After that you should see a `PS` in front of the prompt.

![PowerShell Prompt](./static/1-powershell.png)

Now IIS has to be installed. This is done with this command:

```PowerShell
Install-WindowsFeature Web-Server
```

While installing, PowerShell shows a nice little progress bar:

![PowerShell Progress](./static/2-install-progress.png)

## Enabling Remote Management

By default, the server does not allow remote management. It has to be enabled by installing the Web-Mgmt-Service and setting the registry entry `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server\EnableRemoteManagement` to `1`.

Keep in mind that the registry key is only available after Web-Mgmt-Service is installed.

```PowerShell
Install-WindowsFeature Web-Mgmt-Service
```

```PowerShell
Set-ItemProperty -Path Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server -Name EnableRemoteManagement -Value 1
```

After executing those commands restart the web server so that the changes take effect:

```PowerShell
net stop was /y
net start w3svc
```

Also start the Web Management Service, otherwise you won&apos;t be able to connect to it.

```PowerShell
net start wmsvc
```

**Note:** IIS Manager connects via port 8172, so make sure it is open on your server.
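If the server&apos;s own Windows firewall blocks that port as well (in addition to your cloud provider&apos;s rules), a rule like the following should open it (a sketch; the display name is just an illustrative label):

```PowerShell
New-NetFirewallRule -DisplayName "IIS Remote Management" -Direction Inbound -Protocol TCP -LocalPort 8172 -Action Allow
```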

## Enabling Management on your Windows 10 Device

To remotely manage an IIS server, the IIS Manager has to be installed on your device. This can be done in `Control Panel -&gt; Programs -&gt; Programs and Features -&gt; Turn Windows features on or off`. Activating `IIS Management Console` is sufficient; IIS itself does not have to be installed.

![IIS Manager Installation](./static/3-iis-activation.png)

Out of the box IIS Manager cannot manage remote servers. That feature has to be added with _IIS Manager for Remote Administration_. You can download it [here](https://www.microsoft.com/en-us/download/details.aspx?id=41177).  
After it is installed, IIS Manager will have the menus enabled to connect to a remote IIS.

Now the connection to the remote IIS can be added. Just go to `File -&gt; Connect to a Server` and fill out the required information.

![IIS Manager Installation](./static/4-connect-iis.png)

**Note:** If you can&apos;t connect, most likely the Port 8172 is not open, or the Web Management Service is not started. Do that with

```PowerShell
net start wmsvc
```

## Configuring IIS to host Asp.Net Core Applications

By default IIS cannot host Asp.Net Core applications. The [Asp.Net Core Module](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/aspnet-core-module) is needed for that, which is installed with the .NET Core Windows Server Hosting bundle.

1. Go to the [.Net all downloads page](https://www.microsoft.com/net/download/all)
2. Select the .Net Core runtime you need
3. Download Server Hosting Installer (this is just to copy the download url, we need it on the server, not locally)
4. Copy the download url
5. Download the installer on the server with the command

```PowerShell
Invoke-WebRequest https://download.microsoft.com/download/8/D/A/8DA04DA7-565B-4372-BBCE-D44C7809A467/DotNetCore.2.0.6-1-WindowsHosting.exe -OutFile C:\Users\YourUsername\Downloads\DotNetCore.2.0.6-1-WindowsHosting.exe
#This is the download url for the latest non-preview runtime at the time of writing (2.0.6).
```

6. Execute the installer

```PowerShell
C:\Users\YourUsername\Downloads\DotNetCore.2.0.6-1-WindowsHosting.exe
```

Now, this is what was really surprising for me. The installer executes with a UI, the same as on any Windows. Being on a Core installation, I thought there would be absolutely no UI, but I was wrong.  
This also opens the interesting option to install Chrome and download all necessary files with it.

Restart the web server so that the changes take effect:

```PowerShell
net stop was /y
net start w3svc
```

## Preparing IIS for Web Deploy

Since this is a small project, the most convenient deploy option is Web Deploy directly in Visual Studio.  
As with almost everything else, this is not supported out of the box, but can be added.

Web Deploy can be downloaded from the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=43717).  
Use the same process outlined above, or Chrome, your choice :-)

```PowerShell
Invoke-WebRequest https://download.microsoft.com/download/0/1/D/01DC28EA-638C-4A22-A57B-4CEF97755C6C/WebDeploy_amd64_en-US.msi -OutFile C:\Users\dominik\Downloads\WebDeploy_amd64_en-US.msi
#This is the download url for the latest Web Deploy at the time of writing (3.6).
```

Also execute that installer

```PowerShell
C:\Users\dominik\Downloads\WebDeploy_amd64_en-US.msi
```

**Note:** I&apos;ve read somewhere that all features have to be installed, and that the installer&apos;s _Complete_ option does not actually install everything. So just select _Custom_ and make sure that all features are enabled.

## Deploying an Asp.Net Core Application

Now we are finally ready to publish the application. Well, almost. A publish profile has to be created first.

1. Right-click on the Asp.Net Core application in the Solution Explorer
2. Select _Publish_
3. Click on _Create new Profile_
4. Select _IIS, FTP, etc._
5. Select _Create Profile_ where by default _Publish_ is entered

![IIS Manager Installation](./static/5-publish-target.png)

6. Enter the required information
   - _Site name_ is either _Default Web Site_, or, if you created a different one in IIS, the name of that one.
7. Click _Validate Connection_ to check if everything was entered correctly
8. If it was, click _Save_
9. Select the created profile
10. Click _Publish_ and watch the magic happen :-)

## Configuring SSL

We&apos;ve achieved what we wanted, hosting the application. Now there is only one step left: securing it with SSL. Don&apos;t worry, it&apos;s not difficult, I promise.  
There is a great project out there, called [Windows ACME Simple](https://github.com/PKISharp/win-acme), which makes this process really simple.

1. Download the latest release (you can get the download link from the release page of the Github project)

```PowerShell
Invoke-WebRequest https://github.com/PKISharp/win-acme/releases/download/v1.9.10.1/win-acme.v1.9.10.1.zip -OutFile C:\Users\dominik\Downloads\win-acme.v1.9.10.1.zip
#This is the download url for the latest version at the time of writing (1.9.10.1).
```

2. If this fails with the message `The request was aborted: Could not create SSL/TLS secure channel.`, try executing `[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12` beforehand (from [StackOverflow](https://stackoverflow.com/a/41618979/3107430)).

3. Extract the zip file

```PowerShell
Expand-Archive C:\Users\dominik\Downloads\win-acme.v1.9.10.1.zip -DestinationPath C:\Users\dominik\Downloads\win-acme.v1.9.10.1
```

4. Execute _letsencrypt.exe_

```PowerShell
C:\Users\dominik\Downloads\win-acme.v1.9.10.1\letsencrypt.exe
```

![IIS Manager Installation](./static/6-letsencrypt.png)

5. Select `N` to create a new certificate in simple mode
6. Select `1` to create a single binding of an IIS site
7. Now you should see a selection of sites you have configured. Select the site you want to secure
8. After you&apos;ve added an email address and agreed to the subscriber agreement, it does its magic
9. If all goes well, your site is now encrypted and you can quit Windows ACME Simple (`Q`)

## Closing

That&apos;s it. The application is now fully set up. I hope this walkthrough helped you as much as it undoubtedly will help me in the future, the next time I have to set up a Windows Server.

## Resources

[Introducing Windows Server, version 1709](https://docs.microsoft.com/en-us/windows-server/get-started/get-started-with-1709)  
[Manage a Server Core server](https://docs.microsoft.com/en-us/windows-server/administration/server-core/server-core-manage)  
[Configure an IIS Server Core server for remote management](https://blogs.msdn.microsoft.com/benjaminperkins/2015/11/02/configure-an-iis-server-core-server-for-remote-management/)  
[Host ASP.NET Core on Windows with IIS](https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/iis/?tabs=aspnetcore2x)</content:encoded></item></channel></rss>