Gopher is still alive - Welcome to port 70
It’s been a while since I wrote my last post.
Actually, I was absent because in the meantime I got a new job and I’m going through some family changes (new house, new life), so all of my efforts were (and are) focused on doing my best at the new job and handling everything life has reserved for me. And yes, it’s been quite hard to find a moment to stop, think, and write.
Finally I’m here, trying to find some time to write a little post on things I love, and I really hope this can become a habit over the next months.
Even though I love what I do day by day, there are a few side projects I think deserve to be told about, and yes, this post is about one of them.
To make a long story short, a few months ago, while reading about how Redis development was going, I was inspired by a new old topic: the implementation of the Gopher protocol.
It’s not a joke. I know that today Gopher is not widely used anymore (HTTP rules), but I saw in it an opportunity to learn a bit of history and, why not, to make a pure C implementation of a Gopher server: to revive its RFC 1436 and figure out what the internet (the internet people know today) was like at the very beginning, when the need was to empower people to publish and share information, leaving aside everything related to style and the cosmetics of presenting information to end users.
Anyway, c_gopherd tries to be a tiny, minimal re-implementation of a Gopher server, and tries to bring back to life that world in which you just had text-based documents and hyperlinks to other resources: a different way of doing things.
If you want more context on this, I suggest reading a few useful documents:
Scaling applications with Openshift - A real Use Case Scenario
One of the last days of this year, and one of the last work trips, gives me some
time to write a post.
This year I’ve done a full immersion in Kubernetes and related technologies like OpenShift as a container orchestration platform, and I have to say that I’ve definitely learned tons of fun things about cloud-native software.
With the help of these kinds of tools, the software development cycle is becoming more agile and more flexible: you can deploy applications thousands of times without breaking the service you’re offering to your customers, rolling out a new deployment (maybe a different version, or just a percentage of traffic) in no time;
Data Structures you should care about: HashTables
It’s been a sunny day, and after being out with my family, walking a lot and playing with my daughter until she got very tired,
I’m finally here, ready to write a simple post I’ve been thinking about for a while.
Generally speaking, at work I’ve returned to writing tons of code, and although my job is related to the cloud world (OpenStack, k8s, and other such amazing things), made of abstractions of every type, where the code you write manipulates very high-level data, I found myself nostalgic, remembering with a smile the time when code had to be optimized, was written in resource-limited environments, and every problem was solved through a process that involved software design, algorithms, and data structures in a massive way.
Starting from this point, I thought I would start a series of posts on data structures and algorithms, hoping I can find the time to make this post the p->head of a real series and not an isolated episode. From Java to Golang developers, passing through bash (or any other kind of) scripting people, one of the most popular data structures that comes to mind is the hash table and, indeed, it comes with a great story to tell.
The thing that made me go “wow” is that this DS has many apparent authors, but I like to think Hans Peter Luhn was the first creator, writing the IBM memorandum that used hashing with chaining.
Now, since this is not a historical overview of the topic, let’s talk about how it technically works:
Upgrade Openstack from Kilo to Mitaka
The upgrade of our OpenStack infrastructure based on Kilo wasn’t an easy task… we performed a two-release jump, and the process took several months because we had many goals to achieve and impacts to take care of, both from an infrastructure point of view and from the application side: minimizing downtime for all running applications and having a rollback path were (and always MUST be) the core concerns!