
Updates for Sprints 33–36

In short: Again, we’ve fixed a lot of bugs and regressions in our application. Some were trivial, some cost us countless hours of work and a few grey hairs.


We improved the user experience again, mostly small things that were barely noticeable but still annoying. Sponsor cards now look better for basic events that don’t display logos.

We added introduction texts to speakers and sessions to improve their clarity and visual design. Sessions no longer require a set date and time; those without one are shown as “unscheduled”. We also fixed some issues with our Markdown parser not correctly escaping HTML, and added limited Markdown formatting to the introduction texts.
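The escaping fix can be sketched roughly like this: escape any raw HTML first, then apply the limited formatting subset. This is a minimal illustration, not our actual parser; the function name and the supported subset are hypothetical.

```python
import html
import re

def render_intro(text: str) -> str:
    """Render an introduction text safely.

    Raw HTML is escaped before any formatting is applied, so
    user-supplied tags like <script> come out as literal text.
    """
    safe = html.escape(text)  # <script> becomes &lt;script&gt;
    # Limited Markdown subset: **strong** first, then *emphasis*.
    safe = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", safe)
    safe = re.sub(r"\*(.+?)\*", r"<em>\1</em>", safe)
    return safe
```

Escaping before formatting (rather than after) is the important ordering: the substitutions only ever emit tags the renderer itself produced.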


On the infrastructure side, we improved our Ansible playbooks to run much faster, and we can now control all our servers in the DigitalOcean environment entirely via Ansible.

To improve the performance of our database and application servers, we moved a lot of expensive application code into SQL triggers. This caused a couple of regressions, but the performance difference is noticeable for users.
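The pattern looks roughly like this: instead of recomputing an expensive aggregate in application code on every request, a trigger keeps a denormalised value up to date inside the database. The schema, table names, and counter here are hypothetical, and SQLite is used only so the sketch is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INTEGER PRIMARY KEY, attendee_count INTEGER NOT NULL DEFAULT 0);
CREATE TABLE registrations (id INTEGER PRIMARY KEY, event_id INTEGER NOT NULL);

-- Keep the counter in sync inside the database instead of
-- counting rows in application code on every page view.
CREATE TRIGGER registrations_insert AFTER INSERT ON registrations
BEGIN
    UPDATE events SET attendee_count = attendee_count + 1 WHERE id = NEW.event_id;
END;

CREATE TRIGGER registrations_delete AFTER DELETE ON registrations
BEGIN
    UPDATE events SET attendee_count = attendee_count - 1 WHERE id = OLD.event_id;
END;
""")

conn.execute("INSERT INTO events (id) VALUES (1)")
conn.execute("INSERT INTO registrations (event_id) VALUES (1)")
conn.execute("INSERT INTO registrations (event_id) VALUES (1)")
count = conn.execute("SELECT attendee_count FROM events WHERE id = 1").fetchone()[0]
```

Reading `attendee_count` is now a single indexed lookup, which is where the performance win comes from; the trade-off is that trigger bugs surface as data inconsistencies rather than application errors, which matches the regressions mentioned above.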

We also extended the session lifetime, so users stay logged in for a longer period without having to re-authenticate.

For event venues, we now allow people to add an address. This works with a combination of OpenStreetMap’s Nominatim service and the Geonames database.

When we first built this, we followed the recommended approach of querying Nominatim with a structured set of address attributes, which meant a lot of addresses were not found. We have since changed our implementation and can now resolve many more addresses, so the correct map shows on the event venue page.
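The difference between the two lookup styles can be sketched as follows. A structured query requires every attribute to match, so a single wrong or oddly formatted field yields no result; a free-form `q=` query lets Nominatim do the fuzzy matching itself. The helper names are hypothetical; the query parameters (`street`, `city`, `postalcode`, `country`, `q`, `format`, `limit`) are Nominatim’s documented search parameters.

```python
from urllib.parse import urlencode

NOMINATIM_SEARCH = "https://nominatim.openstreetmap.org/search"

def structured_query(street: str, city: str, postalcode: str, country: str) -> str:
    # Strict: every attribute must match Nominatim's data exactly.
    params = {"street": street, "city": city, "postalcode": postalcode,
              "country": country, "format": "jsonv2", "limit": 1}
    return f"{NOMINATIM_SEARCH}?{urlencode(params)}"

def freeform_query(address: str) -> str:
    # Lenient: one string, fuzzy-matched by Nominatim itself.
    params = {"q": address, "format": "jsonv2", "limit": 1}
    return f"{NOMINATIM_SEARCH}?{urlencode(params)}"
```

Only the URL construction is shown here; the actual HTTP request (and Nominatim’s usage policy around rate limits and a `User-Agent` header) is left out of the sketch.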

A bigger issue with the Geonames database required more work from us: for each event we need its city, which we use to build a readable, unique URL. Using the Geonames data isn’t trivial, though, and our first implementation was far too optimistic, e.g. it didn’t take duplicate city names around the world into account. We’re fixing this by rewriting the complete implementation on top of the massive dataset, and are on track to finish it in the current sprint.
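One common way to handle the duplicate-name problem is to only add a disambiguating suffix when a name actually collides, keeping the common case readable. This is an illustrative sketch of that idea, not our implementation; the function names and the country-code suffix scheme are assumptions (and cities duplicated within one country would need a further tiebreaker).

```python
import re
from collections import Counter

def slugify(name: str) -> str:
    """Lowercase a city name and reduce it to URL-safe characters."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def unique_city_slugs(cities: list[tuple[str, str]]) -> dict[tuple[str, str], str]:
    """Map (name, country_code) pairs to unique, readable slugs.

    Names that occur only once keep the short slug; duplicates get
    the country code appended so every slug stays unique.
    """
    counts = Counter(name for name, _ in cities)
    slugs = {}
    for name, country in cities:
        slug = slugify(name)
        if counts[name] > 1:
            slug = f"{slug}-{country.lower()}"
        slugs[(name, country)] = slug
    return slugs
```

So a one-of-a-kind city keeps a clean URL, while the many Springfields of the world each get a distinguishable one.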

We’re now also setting up the server synchronisation, which was unfortunately blocked by an external pull request we had created on a required open-source project, and by the corresponding package release. This will allow us to easily scale our servers and, for example, add or remove application servers on demand without much effort.

Lastly, we need to set up the event coverage link service on the production environment. The code is done, but the deployment still has to happen.

Looking at our Launch backlog, we’re now pretty confident that we’ll have some exciting public news to share soon.

Want to read more articles like this? Subscribe to the category What’s new or the feed for all our articles.