
Background information

Since this will be relevant in some articles, here is some background information that might be useful.


The first steps

I got access to my first computer around 1984, plus or minus a year or two; I'm not exactly sure any more. It was an old Apple II clone called the ITT-2020, which I actually still have.
I tortured it with the supplied BASIC.
Around 1984 or so I got access to my first laptop, an Epson HX-20, which I also still own. I tortured it with BASIC as well (a slightly different dialect).
Both computers were powerful in their time, but laughable by modern standards. The ITT-2020 had a roughly 1 MHz 8-bit CPU with 16 kB RAM, the HX-20 a 614 kHz main CPU, also with 16 kB RAM.
To "convert" this to modern CPU speed labels: single-core, 0.001 GHz.

Second steps

The next step in performance was my next computer, an Amiga 500, around 1989: a whopping 7 MHz, fantastic graphics (at the time) and 512 kB RAM (later expanded to a whopping 1 MB).
I started with the supplied BASIC and sooner or later reached the limit of what it could do. I briefly tried to program in C, but there were no really good (free) compilers I could get my hands on.
A friend of mine was active in the demo scene, and he convinced me to join him and to learn 68K assembly using a public domain assembler. I loved it. I was active in the scene for a while and worked on a few demos that have been lost a looong time ago. Sadly. But we came up with a lot of nifty things to push the physical limits of the machine. More on that at a later time. After a while I also switched to a commercial assembler IDE. Yes, this was a thing back then.

A few years later I switched to an Amiga 1200 with some beefy extras, which I also still own.

The PC era

After the Amiga went the way of the Dodo, I switched to the PC. I tried to program it in assembly as well, but coming from the 68K architecture, the PC one felt like having to run a marathon with tied legs: memory bank switching, only a handful of registers, a lack of instructions and so on. So I gave Turbo Pascal a try; a friend of mine had an extra license that he gave to me (yes, commercial software was still a big thing!). It was fine, but the crippled graphics features of the time sucked, and every graphics card needed its own way to talk to it. Proper graphics drivers for DOS were not a thing. That didn't prevent me from trying to do graphical stuff as well, with limited success, but I was able to build my own tiny 3D engine and other fun stuff. We are still talking about non-hardware-accelerated graphics (not even 2D ones) and ~66 MHz CPUs with ~4 MB RAM. Not those 1000+ cores and 4 GB RAM on the graphics card alone...

Then came Windows 3.11 ...


Don't get me wrong, the first versions of Windows were a huge improvement over DOS, but they still sucked a lot. No memory protection, meaning a wrong write to memory could kill the complete OS; no multitasking (something my old Amigas could do from day 1); but they brought drivers with common interfaces and abstractions. It no longer mattered which graphics card or sound card you had. At that time I switched to Turbo Pascal on Windows and tried C programming again. Also around that time, somewhere around 1994, I started to become more interested in hardware development and embedded systems. Way before Arduinos became popular.

Hardware developing

In 1998 I started my first company and focused mainly on developing hardware and drivers for it. It was fine for a while, but the tighter and tighter integration and the rise of surface-mounted components with non-public documentation that you had to buy for a lot of $$$ made me look for alternatives. There was this new thing called the "Internet". So I took a closer look at it and the possibilities it offered. This was around the year 2000.

The Internet

I quickly realized this had huge potential and switched my main focus to developing for the new platform, running on this new operating system called Linux. I mainly used Perl for the first few interactive Internet pages. But it wasn't that great to work with, so I looked for alternatives. I soon found PHP; version 3.0 had just been released, and I gave it a try. It gave me the things I needed, so I used it from then on for my website projects. Sooner or later my projects grew, and everything was fine. On the side, I did some client application development to pay my rent, mostly using Visual C++ on Windows.

Concentrating on backend stuff

While developing GUI apps on Windows, I soon realized: this is not what I want to do. I want to concentrate on backend stuff. I worked on some high-profile and very important backend things that I'm not allowed to talk about (no, not the spying stuff). But basically, lives depended on them, so they had to be completely tested, memory-leak free, and able to run for years without human interaction. Memory-leak free means that over time, the memory consumption must not rise. It was OK to use more memory for a while, but every byte had to be given back to the OS. This is sometimes an art form, and sadly most developers don't even try to achieve it any more. Even the simple "ls" command leaks memory. It's not that important for short-lived tools, since the OS reclaims the memory after the program has closed. But my programs didn't close and kept running 24/7/365. Also, coming from the demo scene, I had developed a more or less intuitive approach to efficient programming.

Some non-interesting projects

I did a lot of smaller and bigger projects in a lot of different areas, like the travel industry, a bit of pharmaceutical stuff and so on. I had contact with a lot of programming languages and also learned (to hate) a few of them.
Most of the time it wasn't the programming language itself that I had issues with, but the people who treated it as their religion. They defended it at any cost, even in the face of objective flaws.
One of my favourite tests is to run a >>print "Hello World"<< "program" through Valgrind. For those who do not know Valgrind: it's primarily a tool to detect memory misuse and memory leaks, but it can do a lot more.
So if you run this test on Python:

valgrind --trace-children=yes python -c 'print("Hello World")'

And you get errors like this:

Address 0x4ed0020 is 16 bytes after a block of size 32 in arena "client"
Address 0x4ed1020 is 80 bytes inside a block of size 1,112 free'd
still reachable: 314,959 bytes in 154 blocks
ERROR SUMMARY: 713 errors from 78 contexts (suppressed: 0 from 0)

You know the programming language, or to be exact, the interpreter for that language, cannot be trusted when it writes to more memory than it is allowed to (first line), uses memory after it has been freed (second line), or has "lost" 314 kB of memory (a.k.a. not returned to the OS). That's almost 20 times the total RAM my first computers had. In total, 713 dangerous memory accesses were made.

Compare that to the much-hated PHP:

valgrind php7.4 -r 'echo "Hello World\n";'
==14550== Command: php7.4 -r echo\ "Hello\ World\\n";
Hello World
==14550== LEAK SUMMARY:
==14550==    still reachable: 3,077 bytes in 26 blocks
==14550==         suppressed: 0 bytes in 0 blocks
==14550== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

No dangerous memory usage (crash- and security-relevant) and "only" 3 kB of memory lost.

A trip to the "dark side" of the internet

No, I don't mean adult pages or illegal content... My background in memory- and resource-efficient programming got me a nice contract in the ad-serving industry. I was building systems that delivered the ads you saw on many webpages (and still do). But to break a few expectations: the ad-serving companies themselves are not the bad guys. They work in a simple way: based on the website and which banner slot your browser asks for, the system checks which ads could be delivered from a defined list, matches criteria and selects the best matching ad. Which ad it actually is is not known to the system; it's just a bunch of IDs or text snippets. Whether the ad is for some beverage or a scam is not known. That is up to the users who book those campaigns. Back then, those were agencies that directly booked ads on specific websites. Those huge marketplaces with interconnected bidding systems did not exist yet. I also created one of the first real-time profiling systems, which calculated your interests based on the pages you visit. For example: if you read a lot of politics-related news and websites, you might be interested in politics. Or sports, cars, travel and so on. One more important thing: it had to scale, a lot. For comparison: the English Wikipedia measured around 9.6 billion page views in March 2020. Seems impressive, right? Now imagine that back in 2009, the software I built handled around 4 billion ad requests (plus tracking pixels), so roughly the same amount. But not per month: per day!
And a side note: I often hear something like "the user will not notice a 1 ms speed increase, so why bother". Just look at it from the other side: at somewhere around 200,000 requests per second, if the server farm needs 1 ms more per request, you will need 200 extra CPUs!


Then came a few other projects, some more interesting, some less; some that went viral, some that went bankrupt. I also had the chance to learn more languages, and to hate some more.

Back to another dark side

This time a short trip into the cyber-security world. You know, this firewall stuff with magic AI to filter out the bad guys and protect you, and other snake-oil claims. Don't get me wrong, there are good and valid reasons for good firewalls, and also for detecting malicious behaviour in company networks. But the stuff you can download and install on your PC is 90% snake oil. If you want something that really works, you need a piece of hardware in your network architecture that is properly secured against access. I was working on this hardware-based stuff, but not on the packet filtering itself: on the analytics part with the generated metadata (connection from machine A to machine B, which protocol, how many bytes/packets and so on). Again the high-performance stuff, dealing with several thousand, sometimes millions of records per second.

Today: back to the first "dark side" and at the same time building an alternative

Right now I'm partially back on the first ad-serving side. I still had contacts with former colleagues, and one of them nagged me for a long time to help him with a project. Basically, it's the following setup: the company my friend works for wanted a feature from the ad-serving company they use. This ad-serving company didn't want to build it, but was forced by contract to do so. So they probably thought: let's raise the requirements so high that nobody wants to, or is able to, do it. It almost worked. My friend asked several companies they had worked with before; nobody wanted to or was able to. Every time a company said no, my friend asked me again. Finally I said yes, at least to get the requirements. They were: an HTTP-based service, request to the service, do magic, return results. Max response time: 10 ms including network latency, at several thousand requests per second. My first reaction was: what should I do with the leftover 8 ms? So after a bit of talking and discussion, I built, and still run, this service. My predictions were surprisingly accurate: the average response time (measured at the load balancer) is ~1 ms, plus 1-2 ms network latency to the external server (both running in the same data centre for latency reasons). And the magic is not trivial. But I'm back in the tracking, profiling and data-collection area of the internet. Don't get me wrong, it is well paid, and since everyone is at home right now (thanks to Covid), there is a lot of traffic and a lot to do.


Just to name-drop a few programming languages I have used, with my personal preference: 5 = I like it a lot, 0 = it's OK, -5 = I'll never use it for something important unless I'm paid a good amount.

  • Assembly: Embedded (0), 68k (1), Intel (-5); I prefer C or C++, which is just less verbose
  • BASIC: 0 (not needed any more)
  • C (4, for anything embedded and no C++ available)
  • C++ (5, my preference for anything performance)
  • C# (0, only when I get paid for, no benefit over preferred languages)
  • Java (0, it hates me and I don't like it as well)
  • Javascript/Typescript (2, it gets the job done, first one is mandatory for web stuff)
  • Perl (-3, just gets messy very quickly)
  • PHP (4, my preference for web pages)
  • Python (-5, runtime is buggy as hell)
  • Ruby (-5, runtime is buggy as hell)
  • (Turbo) Pascal: (0 it's dead, Jim)

Content Nation: my alternative

To make the internet a better, less tracking-infested place, I had the idea for Content Nation. The original idea dates back to mid-2000, but there were several things that blocked it. Those blockers are gone, or have been worked around in the currently active version you are using right now. It's a tracking-, profiling- and ad-free place for normal people to publish their content and have it hosted and indexed by its own search engine, with good reachability for external search engines as well. It is also easy to use, and did I mention it's free? Not only that: if you can convince your readers to support you, you can collect donations from them, with only a minimal fee that pays the hosting costs for this site and maybe a coffee a month for me.
