
std::iostream of consciousness

— Enrico Maria De Angelis

Work in progress…

Big-O complexity is not all you need

This article is about the time I discovered, the hard way, that there's much more to performance analysis than big-O complexity: even if two algorithms have the same big-O complexity, the difference in how they perform can be tremendous, because of the huge multiplicative constant that can exist between them as a consequence of hardware optimizations.

A simple problem — string equality

One of the simplest expressions that sooner or later all programmers write is

str == "some string"

and sometimes we don't even care much about the time complexity of such a comparison.

For instance, this comparison might be done in the context of command line parsing, to recognize whether the user has selected an option; whatever the complexity of that comparison is, we'll pay for it only once, or at most a small, known number of times.

String equality — many times

However, there are times when this comparison is run for all elements of a possibly long collection of strings

auto validStrings = strings | filter(not_fn(equals("<invalid>"s)));

(See example on Compiler Explorer.)

If strings contains N strings, we'll pay the cost of string comparison N times, so we do want to know how much that cost is. And if we make those comparisons inside an algorithm which is not linear in the number of strings like filter, but, say, quadratic, then knowing the cost of the individual comparisons is even more important.

So, what is the time complexity of string comparison?

Well, I told myself, the best case scenario is when the strings have different lengths, in which case the algorithm can return in constant time.

And if their lengths are equal, then the characters have to be compared one by one, for as long as it takes to find a character that differs between the two strings.

Rushed conclusions

In summary, my conclusion was that

  • "hello" and "hello world" are compared in constant time because their lengths are different. (This is the best case scenario.)
  • "hello" and "hello" are compared in linear time, and since they are equal, it takes 6 comparisons to conclude that they are indeed equal, because 1 comparison is to check their lengths are equal and 5 is the number of characters that we need to compare. (This is the worst case scenario.)
  • "hello" and "help!" are also compared in linear time, but it takes just 5 comparisons to conclude so, because we can bale out as soon as the second 'l' in "hello" compares different from the 'p' in "help!". (This is an average scenario.)

A more complex problem — whitespace-trimmed string equality

Fast forward a few months, and I had to consider that, in the dummy example described above, the invalid strings could contain some leading and trailing whitespace, e.g. an invalid string could be "␣␣␣<invalid>␣␣", where I've used the ␣ symbol to mean a whitespace character.

Forgetful of my conclusions above, with confidence I changed the code to something like this

auto validStrings = strings | filter(compose(not_fn(equals("<invalid>"s)), trim(isSpace)));

(See example on Compiler Explorer.)

where compose(f, g) is a function that first runs g on its input and then f on the result; here, it first runs trim(isSpace) to remove any leading and trailing whitespace, and then runs not_fn(equals("<invalid>")) on what's left.

It's neat, isn't it?

And what about the complexity? Well, I thought, the trimming is a linear operation too, just like checking strings for equality: all it does is compare the leading characters one by one with the space character, incrementing a counter along the way; when the first non-space is met, the counter is the offset from which the string needs to be compared with "<invalid>" (the same holds, with slightly different wording, for trailing whitespace).

The whitespace-trimmed equality comparison is much slower!

It turned out that the code above performed poorly compared to the original one, and I initially had no idea why, because based on my analysis both functions passed to filter should run in time linear in the length of the string they are given.

Here's why

Stepping through the code with the debugger is what eventually helped me figure out what my mistake was:

  • when comparing "␣␣␣hello␣␣" with itself via std::equal_to<>{}, a.k.a. the plain == operator, the function __builtin_memcmp is called once to compare the two strings, and the function __builtin_memchr is never called.
  • when comparing "␣␣␣hello␣␣" with itself via equal on trim(isSpace), i.e. the plain == operator applied to the whitespace-trimmed strings, the function __builtin_memchr is called 7 times! Once for each space that is trimmed, plus once for the 'h' and once for the 'o' (i.e. the first and last non-space characters); only after all these calls does the program run a string comparison between what's left, the two "hello" substrings, with pretty much the same efficiency as in the previous point.

So it is evident that the trimming part of the algorithm is what really costs, because it requires the processor to truly process the trimmed characters one at a time.

Fundamentally, what's wrong in the conclusion I had initially drawn is that I didn't consider at all that in the non-best-case scenario, e.g. when comparing two equally long strings for equality, the comparisons between corresponding characters need not be run in any specific order; actually, they can be run at the same time. If the hardware is capable of comparing those 5 characters at the same time, it will, thus dividing the cost of the comparison roughly by 5.

On the contrary, trimming is an inherently directional operation: you want to find the first character which is not equal to the characters you're discarding, so you must walk the characters from the first onward, asking for each single one of them: should I discard it?


I've learned that Big-O complexity is maybe where one has to start when assessing how good an algorithm is, but it is certainly not where one should stop.

As regards the fact that a processor can fundamentally perform string equality in (what to me looks like) constant time, I don't know much about it, probably because I lack an academic background on this stuff (I'm an aerospace engineer).

For this reason, I decided to ask a question on cs.stackexchange; at the time I'm writing this post it has only been commented on, not answered, but the comments mention a CPU implementation detail called "superscalarity".


Bitten by SSO

I was working on a text processing tool that accepts text in a form as simple as

std::string input{"this is the text the algorithm is fed with"};
and does simple stuff like filtering and transforming text (e.g. removing vowels, converting tabs to spaces, …, whatever comes to mind), eventually showing the string in some GUI with colored overlays based on the outcome of those text operations.

Instead of performing these operations eagerly, by dealing with actual std::strings (which would mean potentially constructing, copying, and moving around a lot of std::strings), the algorithm works with views, and returns a collection of views on the input string, without actually instantiating, copying, or moving any std::string. Making the simplifying assumption that the algorithm returns one single std::string_view, the code could look like this

std::string_view algo(std::string const& s) {
    // very complex algorithm: chops off the first and last characters
    return std::string_view(s.c_str() + 1, s.size() - 2);
}
and be used like this
auto out = algo(input);

Now, in the application I was working on, the out object played the role of the model in the MVVM pattern, and it was not directly used in the GUI.

What was used in the GUI were two things:

  • the actual text contained in input in order to… well, show the text on screen,
  • and the viewmodel which, in practice, just consisted of out.length().

And how were these two things made available to the front end? By wrapping them both in a class like this

struct TextAndViewmodel {
    std::string s;
    int vm;
    TextAndViewmodel(std::string&& s, int vm)
        : s{std::move(s)}, vm{vm}
    {}
};

and that's where a silent bug was waiting for me! Can you spot it?

The bug is in that TextAndViewmodel holds the text by value (rather than via a std::unique_ptr or some other owning wrapper with pointer or reference semantics).

Because of that, when such an object was initialized, for instance via

TextAndViewmodel tv{std::move(input), out.length()};
the input was moved from under the std::string_view returned by the algorithm (out), which ended up dangling.

To be precise, this was not yet a bug: the std::string underlying out was gone forever, but out.length() was not affected, so the TextAndViewmodel object was just fine.

It became a bug as soon as, for other reasons, I had to include out itself into tv, and make use of it to decide whether vm should be used or not for each of the TextAndViewmodel objects (it was a kind of a last-minute filtering right before sending stuff to the GUI).

struct TextAndViewmodel {
    std::string s;
    std::string_view sv;
    TextAndViewmodel(std::string&& s, std::string_view sv)
        : s{std::move(s)}, sv{sv}
    {}
    // if the client needs what was `vm` in the
    // previous snippet, they can go for `sv.length()`
};

At that point, indeed, I started noticing some weird things going on in the GUI:

  • the text was ok, because input was moved into tv,
  • and for "normally sized" text files everything was fine in all respects,
  • but whenever I tried with files only a few characters long, the GUI behaved as if the filtering I mentioned above had never happened, even though it should have.

After spending some time (I'd rather not reveal how much) debugging and banging my head on the monitor, I understood what was going on: SSO was biting me. Actually, the other way around: SSO is the only reason why I could see the bug at all; otherwise it could have lingered there, silent, for a long while, waiting to bite somebody other than me, maybe a customer.

So what was going on?

When input was long enough and SSO wasn't kicking in, the char const* inside it was left intact, as the "movement" of resources simply consisted in copying the pointer held by input into the one held by tv.s, and then setting the former to nullptr (yes, std::move doesn't move anything).

The consequence is that out was "legally" invalid, because input was std::moved from under it, but "romantically" (aka SSO not kicking in) still pointing at the correct area of memory.

On the other hand, when SSO kicked in, the actual chars stored in the "special short-string compartment" of input were truly copied, one by one, into the corresponding member of tv.s, and the source was then invalidated, which for me resulted in input being left full of garbage characters and, in turn, in the filtering I mentioned not happening at all.

In the process of debugging, I wrote this little program to find the string length at which SSO stops kicking in (which is an implementation detail, by the way):

#include <iostream>
#include <string>

int main() {
    std::string str1{""};
    bool sso{true};
    while (sso) {
        str1 += "x";
        char const* ptr = str1.c_str();
        std::string str2{std::move(str1)};
        sso = (void*)ptr != (void*)str2.c_str();
        std::cout << (void*)ptr << std::endl;
        std::cout << (void*)str2.c_str() << std::endl;
        str1 = str2;
    }
    std::cout << str1.length() << std::endl;
}


The importance of Item 26 from Effective Modern C++


Some time ago I tried to partially apply std::visit to a visitor, the reason being that I had to visit several std::variants with the same visitor (in different places of the code, so passing all the std::variants at the same time to std::visit was not viable).

Sure, I could quickly write a lambda to do the job:

auto visitWith = [](auto const& visitor){
  return [visitor](auto const&... variants){
    return std::visit(visitor, variants...);
  };
};
but why should I, if there's already the Boost.Hana library out there offering hana::partial? However, to use it I had a problem: std::visit is a template function, so it can't be passed around as if it were an object. Easy peasy, I wrapped it in a lambda:

auto visit = [](auto const& visitor, auto const&... variants){
  return std::visit(visitor, variants...);
};

Yeah, I hear you: if I'm happy with this lambda, why should the previous one bother me?

Well, that's true, but the point is that stubbornly trying to use hana::partial is what made me understand, once more, that Scott Meyers' book is a holy bible!

The problem

Wrapping std::visit in the visit object, however, wasn't enough to be able to partially apply visit to a visitor; not with the naive code I wrote,

auto visitor = hana::overload(
  [](auto arg)               { /* whatever */ },
  [](double arg)             { /* whatever */ },
  [](const std::string& arg) { /* whatever */ });

auto visitWithVis = hana::partial(visit, visitor);

which was failing to compile with a pretty confusing error, whose key part was (in hindsight!)

error: no matching function for call to ‘hana::overload_t<main()::<lambda(double)>, main()::<lambda(const string&)> >::overload_t()’

Therefore I asked a question on StackOverflow, but nobody answered (for 5 months!), so today I decided to spend some time to find out the answer myself.

The investigation

Eventually, I was able to set up a very minimal example to reproduce the problem:

#include <boost/hana/functional/overload.hpp>

// l1, l2, l3 are the lambdas defined in the next snippet
auto ovl{hana::overload(l1, l2, l3)};
using Ovl = decltype(ovl);
Ovl ovl2{ovl}; // this triggers the compilation error

From the comment you can tell that there's something wrong with copying the ovl object. As suggested in a comment, however, making ovl a const object solves the problem, which gave me some food for thought.

By peeping for some time into hana::partial and hana::overload I could later make up a less minimal, but more meaningful example:

#include <boost/hana/functional/overload.hpp>

auto l1 = [](int){};    using L1 = decltype(l1);
auto l2 = [](double){}; using L2 = decltype(l2);
auto l3 = [](float){};  using L3 = decltype(l3);

using Ovl = hana::overload_t<L1, L2, L3>;

Ovl ovl{l1,l2,l3};

Ovl ovl2{ovl}; // this triggers the compilation error

where hana::overload_t is the (template) class of the object returned by hana::overload.

To understand what's going on, I had to look once more, and for more than just a few minutes, into the implementation of hana::overload_t, and to play the role of the compiler a bit, by "manually instantiating" it:

struct overload_t<L1, L2, L3>
    : L1
    , overload_t<L2, L3>
{
    using type = overload_t;
    using L1::operator();
    using overload_t<L2, L3>::operator();

    template <typename F_, typename ...G_>
    constexpr explicit overload_t(F_&& f, G_&& ...g)
        : L1(static_cast<F_&&>(f))
        , overload_t<L2, L3>(static_cast<G_&&>(g)...) // error points here
    { }
};

The solution

It is then clear what's going on: the code Ovl ovl2{ovl};, which should make use of the copy constructor, uses instead the template constructor above, with F_ = overload_t<L1, L2, L3>& and G_ an empty pack, which in turn causes an attempt to default-construct overload_t<L2, L3>; that is erroneous, and maps nicely onto the key part of the error I copied above.

But why does the compiler try to use the template constructor above, instead of the compiler-generated copy constructor (the generation of which is not suppressed, as per Item 17 from the same book)?

The answer is at the end of Item 26 from Scott Meyers' Effective Modern C++:

cloneOfP is being initialized with a non-const lvalue (p), and that means that the templatized constructor can be instantiated to take a non-const lvalue of type Person


Calling the copy constructor would require adding const […], but calling the instantiated template requires no such addition. […] the template is thus a better match

(In our case cloneOfP corresponds to ovl2, p to ovl, and Person to overload_t<L1, L2, L3>.)

Thanks again, Scott, always!


CSS-only radar diagram


I still remember I was flying back home, or away from it, when I started thinking about this webpage, these thoughts coming in sequence:

  • Oh, I really enjoyed CSS in Depth,
  • I think I could really leverage that to make a good blog;
  • I should probably start by doing what I initially planned to do: putting my CV online…
  • … with my working experience and all my skills…
  • Yeah, especially all my programming skills!
  • A diagram with the skills would be nice…
  • I should try doing it without any JavaScript!
  • With CSS I can certainly do it!

Well, the reality was not that smooth, though. As soon as I had literally put my feet back on the ground and played around a bit, I realized that there's a fundamental problem with making CSS-only plots:

  • ideally, the data, e.g. the x and y coordinates of a piecewise linear plot, should be contained in the HTML markup, they are part of the content of the page, having nothing to do with style (CSS), let alone dynamics (JavaScript);
  • it is reasonable to think that each pair (x, y) should belong to one HTML element representing a "point" of the curve;
  • then CSS should take care of "drawing" lines, curves, colors, and what not.

So what should that ideal HTML look like? Probably something along these lines

<div class="plot">
  <!-- 6 points of the y = x² parabola -->
  <div class="point" style="--x: 0; --y: 0;"></div>
  <div class="point" style="--x: 1; --y: 1;"></div>
  <div class="point" style="--x: 2; --y: 4;"></div>
  <div class="point" style="--x: 3; --y: 9;"></div>
  <div class="point" style="--x: 4; --y: 16;"></div>
  <div class="point" style="--x: 5; --y: 25;"></div>

where the x and y coordinates are in the style attribute so that the CSS can pick them up easily.

But now the point is the following:

How the heck do I draw anything, say a line, via CSS according to quantities specific to two separate elements?

Well, the answer is simply that I cannot, the reason being that, in order to draw lines or more complex stuff via CSS, one has to leverage things like borders (to draw lines), background-colors (to draw areas), and so on. But those things are specific to one element, and cannot bridge between two elements.

Regardless of how reasonable my self-argument sounded, I couldn't just assume I already had the correct answer, so I searched a bit online:

  • First, searching online for "CSS plot" et similia I generally found stuff containing .js written here or there, sooner or later down the page, which already made me feel like I could be right.
  • Second, I found this article from CSS Tricks, which links to Chart.js in the first few lines, and later shows a solution that features, for each HTML element representing a point, not just the --x and --y CSS properties, but also properties --hypotenuse and --angle in a nested HTML element (peep into the code and you'll see). If you tamper with those, you'll see the lines detaching from the points; in other words, the lines are not bound to the points, and you have to recompute the values of --hypotenuse and --angle by hand if you change --x and --y.
  • Finally, I found Charts.css, a CSS data visualization framework that claims
    Charts.css is a modern CSS framework. It uses CSS utility classes to style HTML elements as charts.
    (By the way, the very chart I was looking for, the radar chart, is under development while I'm writing these notes.)

The last result was kind of illuminating. Indeed, if you look at the documentation page of perhaps the simplest example, the line, you'll see that it states the following:

When you add data, you need to supply not only the data --size variable but also the --start variable that indicates the starting point.

Rephrasing it (and you can verify it by playing around with the code), each HTML element that represents a point in the graph must provide data to the CSS not just for that point, but also for the point before it. And this is in the simple scenario of a piecewise linear curve; for something more complex, one HTML element might need to carry data about even more neighboring points.

Given this (bad) news and going back to the HTML snippet above, we could rewrite it like this (where the trailing p of --xp and --yp stands for previous),

<div class="plot">
  <!-- 5 segments (between the 6 points) of the y = x² parabola -->
  <div class="point" style="--xp: 0; --yp: 0; --x: 1; --y: 1;"></div>
  <div class="point" style="--xp: 1; --yp: 1; --x: 2; --y: 4;"></div>
  <div class="point" style="--xp: 2; --yp: 4; --x: 3; --y: 9;"></div>
  <div class="point" style="--xp: 3; --yp: 9; --x: 4; --y: 16;"></div>
  <div class="point" style="--xp: 4; --yp: 16;--x: 5; --y: 25;"></div>
giving our CSS a chance to pick up the coordinates of the two points between which a line has to be drawn.

As regards how that line is drawn, I don't think there's one single solution, and to be honest, I haven't bothered making up random plots; I went full steam ahead for the radar plot I wanted.

Here are the few points on which that chart is based (I'm referring to the data content of the graph, and not the meta content, aka the grid):

  1. a radar diagram looks like a polygon, with as many vertices (and sides) as the number of quantities we want to show, say N, so we need to provide that N via a CSS property, say --N that we can encode in the HTML in the root parent of the chart;
  2. that polygon is made up of N triangles;
  3. all triangles have one vertex in common, the one at the center of the chart (so let's call that vertex the center), and the angle at that vertex is the same for all triangles, it being equal to 360° divided by N; the position, left, and right CSS properties can be used to achieve the translational positioning, whereas transform and its rotate value can be used to achieve the angular positioning;
  4. for each triangle, the lengths of the two sides starting from the center are exactly the "current" quantity and the "previous" quantity, so we need to make those quantities available to the CSS, as I've already explained and shown in the previous HTML snippet;
  5. any triangle can be thought of as half of an appropriate parallelogram, the half on one side of the appropriate diagonal, and any parallelogram can be thought of as a square subject to anisotropic scaling (which makes it a rectangle) and shear (which makes it "skewed"), so we can make use of the background-image CSS property, setting it to something along the lines of linear-gradient(45deg, black 50%, transparent 50%); to get a black right triangle, and of the transform property with its skew and scale values.
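As a concrete illustration of point 5, here's a simplified sketch; the class name and the transform values are made up, but they show the gradient-plus-transform trick:

```css
/* A square painted as a right triangle: the hard color stop at 50%
   along the 45° diagonal fills exactly half of the square. */
.triangle {
  width: 100px;
  height: 100px;
  background-image: linear-gradient(45deg, black 50%, transparent 50%);
  /* Shear and anisotropic scaling then deform the right triangle into
     the generic triangle we need (example values). */
  transform: skew(-20deg) scale(0.8, 0.6);
  transform-origin: bottom left;
}
```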

That's mostly it, but the difficult part is actually writing CSS code such that it is reusable. Doing so goes through making appropriate use of CSS custom properties.

Thoroughly inspecting my Radar plot here, both HTML-wise and CSS-wise (via dev tools or by peeping into the repo on GitHub), together with the associated KSS documentation, should be enough to understand how the various bricks fit together. But it's not the easiest of things…


However, here is an example of a plot with 12 skills (in hindsight, I should probably put some effort into drawing as many diameters in the canvas as there are skills on the plot, but that's easier said than done, considering that each of those lines is done with one dedicated linear-gradient in the CSS):


Notice that the markup is essentially very simple, as each entry of the plot has 4 classes and 3 CSS custom properties; here are the first few entries:

<div class="trigonometry triangle center-origin--after skill" style="--i: 0; --this-skill: 5; --next-skill: 3"></div>
<div class="trigonometry triangle center-origin--after skill" style="--i: 1; --this-skill: 3; --next-skill: 4"></div>
<div class="trigonometry triangle center-origin--after skill" style="--i: 2; --this-skill: 4; --next-skill: 1"></div>
<div class="trigonometry triangle center-origin--after skill" style="--i: 3; --this-skill: 1; --next-skill: 3"></div>
The markup is just a bit more complicated, with two more nested elements, when we want to add some "labels" to each entry; here's the entry labelled as B:
<div class="trigonometry triangle center-origin--after skill" style="--i: 9; --this-skill: 2; --next-skill: 3">
  <div class="polar-diagram-icon polar-position">

It is understood that using JavaScript we could dynamically set --next-skill instead of hardcoding it in the HTML, which clutters it with content which is not semantically relevant.



This blog


Before starting the adventure of setting up a personal blog, I had never really dedicated myself to a personal programming project requiring such an amount of work.

In the Beginning, There Was Nothing. Then Keith J. Grant Wrote CSS in Depth.

…and that's how I truly came to believe I could write a blog myself.

To be precise, it all started with the slightly less pretentious aim of putting my CV online. Here's the first sketch

But as soon as I started to feel comfortable with CSS and HTML, I also started to aim somewhat higher… Why not write a blog?

Therefore I had to approach JavaScript as well, including meeting jQuery.

Eventually, I asked several questions around the topic of web development.

That's how it all started.

Well, for now nobody knows of it, but... you never know, maybe somebody will start reading it…, and maybe find it useful too, if they want to start a similar adventure themselves!


Painless window navigation in Vim


Today my Vim plugin for easy window navigation, which I named WinZoZ, got its first star! Judging from this comment on StackOverflow, the star is from the user @yolenoyer.

With this plugin you can hit <leader>w to enter a sort of "CTRL-W mode", during which each key pressed will be assumed preceded by CTRL-W.

This can be very helpful, imho, if you're the kind of Vim user who works with a big monitor and splits the terminal several times both horizontally and vertically.

Indeed, if this is the case, chances are good that you hit CTRL-W a lot of times to move from one window to another. (Search for :help CTRL-W to see how many commands start by CTRL-W.)

Below is a GIF demonstrating the use of WinZoZ:



Clean Code (?) - part 1

I didn't like that book.

(OK, maybe I should be more verbose than that.)

First of all, it's full of Java code and nothing else. With such a premise, I can't expect the author to express ideas as general as the title would imply, and indeed he doesn't; what he does instead is try to give a recipe made up of dogmatic rules for writing presumably clean Java code. And given that languages evolve (I guess Java does too, doesn't it? I don't know Java, unfortunately), that old book is probably not applicable anymore.

Secondly, the book does have something good: I do agree with several statements across the book, and those statements are often engraved in section titles…

… but as far as the examples are concerned, I think they are flat out wrong: they mislead, and generally go against what the titles express, at least based on the few chapters I could read before giving up.

And yes, this is gonna be a rant. (And it's not just my rant: It's probably time to stop recommending Clean Code.)

Take Chapter 2, Meaningful Names.

After an Introduction that is as non-controversial as it is a tautology, a section named Use Intention-Revealing Names gives an example of a badly named variable

int d; // elapsed time in days
The name d reveals nothing.

followed by examples of presumably better names for variables representing time

We should choose a name that specifies what is being measured and the unit of that measurement:
int elapsedTimeInDays;
int daysSinceCreation;
int daysSinceModification;
int fileAgeInDays;

Now, I do understand and appreciate the suggestion of giving a name that specifies what is being measured; after all, if I don't know what a variable is, how can I make use of it or understand how the surrounding code uses it?

On the other hand, I can't help notice that all those variables are ints and that their names contain the word days.

What does that make me think? That daysSinceCreation is sitting there just waiting to be subtracted from secondsSinceTheEpoch, and the difference to be assigned to dayOfCreation, resulting in a wonderfully cinematographic temporal paradox that features us travelling to a very far future to create a new file that has always existed… (I hope the joke is not too obscure… the bold text should help)

So going back to the suggestion, my question is: why should I encode the unit of measure in the name of the variable? The age of a file is just time, whether I express it in days or in seconds. The fileAge part of the name fileAgeInDays is semantically relevant, as it tells me what that variable represents in the piece of code I'm reading. But the InDays part is just an implementation detail.

I hear somebody saying once I have decided to use an int to represent time, I have to encode the unit of measure in the variable names, otherwise I risk adding apples and oranges. My answer is that this approach is flawed from the beginning: if you are worried about an int representing a time in days being added to an int representing a time in years, that means int is just not the right type for you, and you should use a type that handles the unit of measurement automatically.

(To know more about this, I suggest reading the series on strong types by Jonathan Boccara, which I find excellent.)

I hear somebody else saying come on, it was just an example!, and my answer is

  • this is a book about writing clean code, so it should be damn careful when giving examples
  • tell me what the cost of writing the above snippet like this would have been:
    Time elapsedTime;
    Time timeSinceCreation; // a better name is probably just "age"
    Time timeSinceModification;
    Time fileAge;
    Time just screams no worries, I'm the one handling the units. (No, starting a variable name with time in this case is not duplicating information already contained in the type; it is disambiguating: with a standalone sinceCreation we wouldn't know whether that is a time or a number of events, for instance; fileAge encodes the "time-ness" in the Age part, so we don't need time in its name.)

Then Martin claims that it's hard to tell what the following code is doing

public List<int[]> getThem() {
  List<int[]> list1 = new ArrayList<int[]>();
  for (int[] x : theList)
    if (x[0] == 4)
      list1.add(x);
  return list1;
}

which I can't disagree with, but I feel there are two candidates for why the code is not clear:

  1. the names of the variables are badly chosen
  2. the code mixes two levels of abstraction and hard codes stuff that should be named appropriately and taken as input

Uncle Bob obviously picks number 1 and goes full steam on his crusade, but we disagree even earlier than that, when he writes There aren't even any fancy classes or polymorphic methods, which for me just means that somebody is easily scared by generic code.

Anyway, he eventually shows us his shiny code

public List<Cell> getFlaggedCells() {
  List<Cell> flaggedCells = new ArrayList<Cell>();
  for (Cell cell : gameBoard)
    if (cell.isFlagged())
      flaggedCells.add(cell);
  return flaggedCells;
}

With these simple name changes, it's not difficult to understand what's going on. This is the power of choosing good names.

Incidentally, where does gameBoard come from? It's a global variable, which means the getFlaggedCells() function has side effects: it can return different outputs without you changing the inputs you pass to it (yeah, it doesn't take inputs at all, which is itself a symptom of impurity). A few pages later Uncle Bob will suggest that code should not have side effects. What an undecided man…

I don't know how understandable that code is unless you parse it as if you were the compiler (or the JVM, or whatever is needed to run that crap), but I know that Uncle Bob will write the following code somewhere else (modulo I-don't-know-Java-sorry)

public Set<Book> getProgrammingBooks() {
  Set<Book> programmingBooks = new ArraySet<Book>();
  for (Book book : libraryShelf)
    if (book.isOnProgramming())
      programmingBooks.add(book);
  return programmingBooks;
}

And guess what's the difference? None! The two snippets do exactly the same thing: filtering a container according to a predicate (indeed, I obtained the latter snippet from the former just by renaming variables, types, and methods).

So why should we write those two functions, instead of writing just one generic function that we use in the context we need? Again, I don't know any Java, but I guess this pseudo-Java conveys what I mean:

public Container<Item> filter(Container<Item> container, Pred pred) {
  Container<Item> newCont = new ArrayContainer<Item>();
  for (Item item : container)
    if (pred(item))
      newCont.add(item);
  return newCont;
}

Notice that this piece of code is closer to the initial version (the one Uncle Bob says is hard to understand) than to the questionably refined one:

  • The original version
    • used specific types, such as List, ArrayList, and int,
    • hard-coded the predicate,
    • depended on a global variable,
    • but at least used names that were not bound to the specific domain of the caller (even though they uselessly hard-code information that is already encoded in the type; why call a List theList, for instance?);
  • uncle Bob's final version
    • kept using specific types, such as List, ArrayList, Cell,
    • kept hard-coding the predicate,
    • kept depending on a global variable,
    • used the caller's domain-specific names for the variables;
  • my version
    • well, by Container I mean a generic type, but I don't know how to do that in Java,
    • takes the predicate as an input argument,
    • takes the input collection as an input argument,
    • uses generic names that don't commit to any specific domain the caller could choose to use it for.

And if the C++ programmer that I am were to kick in, then I would say that if your language doesn't have a library function for filtering a container, then you'd better change the language.

Here's a C++11 solution, which uses the erase-remove idiom:

#include <algorithm>

template<typename Container, typename Predicate>
Container filter(Container c, Predicate p) {
    // remove_if drops the elements that satisfy its predicate, so we
    // negate p in order to *keep* the elements that satisfy p
    c.erase(std::remove_if(c.begin(), c.end(),
                           [&p](typename Container::const_reference x) { return !p(x); }),
            c.end());
    return c;
}
and here is a C++17 solution, based on the range-v3 library,
#include <range/v3/view/filter.hpp>
using ranges::views::filter;
and finally here is a C++20 solution,
#include <ranges>
using std::ranges::views::filter;
What's that?! Just an #include and a using? Yeah, because in both cases the function for filtering is just there, in the appropriate header and in the appropriate namespace, and you can easily use it
auto w = filter(v, pred);  // these two
auto w = v | filter(pred); // are equivalent
(See full example on Compiler Explorer.)

Going back to my claim that Martin's code mixes two levels of abstraction, now it should be clearer: he prompts us to name every aspect that is inside the body of that function according to the use that the caller makes of it, which clearly depends on the domain of the caller: in his example a board game, in my counter example the old library down the road. In some way, the specific use-case that the caller makes of this function, which is at a given level of abstraction, has leaked into the function, the content of which, says Martin, should be one abstraction level lower than its call site (exact quote is Notice that the three steps of the function are one level of abstraction below the stated name of the function, at the top of page 36). This is clearly in contrast with page 33, first paragraph, where he (righteously) claims that mixing too many levels of abstractions in a function is a bad thing.

But the function's implementation screams I'm not so dumb, I can do much more!, and naming stuff as Martin suggests just makes us deaf to that call.

Clearly, another thing that deafens us is designing the function so that it takes no arguments (another dogmatic, unjustified, and flat-out wrong suggestion that Martin dispenses in the next chapter, Chapter 3), implying that it writes to, or at least reads from, some global state. Because of that, we could erroneously think:

  • I can filter "things"; if my function takes no "things" in input, how can it be a filter?
  • Filtering is done according to a predicate, but my function takes no predicate.

Therefore, blindly following Martin's prescriptions will indirectly lead us to a lot of code duplication, because the names chosen his way will prevent us from seeing common patterns. This is clearly in contrast with page 32, first paragraph, where he (righteously) refers to code duplication as a bad thing.

Then the crusade goes on in a (slightly) less controversial way:

Avoid Disinformation


A truly awful example of disinformative names would be the use of lower-case L or uppercase O as variable names, especially in combination. The problem, of course, is that they look almost entirely like the constants one and zero, respectively.

int a = l;
if ( O == l )
  a = O1;
else
  l = 01;

I can't disagree, in general, but I'd say that the example is again silly: the IDE will show O1 and 01 in different colors, because the former is a variable name and the latter a numeric literal. What if my IDE doesn't? Then why are you still using it?

Oh, I appreciate a footnote related to the use of a and the prefixes for local variables and function arguments (parameters, not arguments, Uncle Bob):

Uncle Bob used to do this in C++ but has given up the practice because modern IDEs make it unnecessary.

Going forward, about Make Meaningful Distinctions,

Number-series naming (a1, a2, .. aN) is the opposite of intentional naming. Such names are not disinformative—they are noninformative; they provide no clue to the author's intention.

… except if I'm coding a mathematical function that traditionally uses that well known "number-series naming" convention.
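For instance (my example, not the book's), evaluating the polynomial a0 + a1·x + … + aN·x^N: here the "number-series" names are the conventional, intentional ones, and anything more verbose would only obscure the math.

```cpp
#include <vector>

// Horner's scheme: the coefficients are canonically called a0..aN in
// the math, so the "number-series" name `a` is the clearest choice.
double horner(const std::vector<double>& a, double x) {
    double result = 0.0;
    for (auto it = a.rbegin(); it != a.rend(); ++it)
        result = result * x + *it;  // fold from the highest coefficient down
    return result;
}
```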

And then:

How are the programmers in this project supposed to know which of these functions to call?

Now jump to page 51, Listing 3-7 (Chapter 3), and you'll find two member functions named includeSetupPages() and includeSetupPage() respectively. It is true that the former is public whereas the latter is private, so a user of that class won't have a chance to be in doubt about which one to call. But the current chapter demonizes the coexistence of a singular name with its plural for its own sake.

Anyway, while I might agree that getActiveAccountInfo vs getActiveAccount can be (and maybe is) confusing, why should getActiveAccount vs getActiveAccounts be confusing at all? It's clear that the former gives one account and the latter a collection of accounts. What's the problem with that? I mean, is the following not clear?

for (Item item : items)
    // do something with this item

Then the crusade goes on and targets single-letter names, luckily admitting that they can be acceptable in short contexts (the claim that the length of a name should correspond to the size of its scope is simply hilarious, though).

Once more into madness with this code,

for (int j=0; j<34; j++) {
  s += (t[j]*4)/5;
}

which is claimed to be unclear, versus this,

int realDaysPerIdealDay = 4;
const int WORK_DAYS_PER_WEEK = 5;
int sum = 0;
for (int j=0; j < NUMBER_OF_TASKS; j++) {
  int realTaskDays = taskEstimate[j] * realDaysPerIdealDay;
  int realTaskWeeks = (realTaskDays / WORK_DAYS_PER_WEEK);
  sum += realTaskWeeks;
}

which, in my opinion, just hides the fact that we have a trivial application of transform/map followed by a reduce/accumulate (unfortunately not all languages agree on what these general purpose utilities are called, sigh).

Indeed, in JavaScript, you'd write the above as

taskEstimate.map(x => x*4/5).reduce((x, y) => x + y)

or even more clearly like

_.sum(_.map(taskEstimate, x => x * 4/5))    // lodash
_.sum(_.map(_.multiply(4/5), taskEstimate)) // lodash/fp

if you use Lodash (or its lodash/fp flavor).
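And the C++ programmer that I am would spell the same map-then-sum with std::transform_reduce (C++17); a sketch of mine, mirroring the book's numbers:

```cpp
#include <functional>
#include <numeric>
#include <vector>

// transform (ideal estimate → real days → weeks) and reduce (sum),
// made explicit in a single standard-library call
int estimateWeeks(const std::vector<int>& taskEstimate) {
    return std::transform_reduce(taskEstimate.begin(), taskEstimate.end(),
                                 0,                                // initial sum
                                 std::plus<>{},                    // the reduce step
                                 [](int t) { return t * 4 / 5; }); // the map step
}
```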

But it was just an example! Good, thanks. And what do I get from a book on programming which has silly examples? Philosophy?

As regards using searchable names, I agree, but if you are in need of searching the codebase for the running index of a for loop on an array, then either you have no idea what debugging is, or you're dealing with a much bigger problem in the surrounding code, so finding that index won't really help you much. After all, if I have a for loop running on the indices of a vector/array (why am I doing this in 2022, by the way?), then why should I name that index anything other than i or j? It is just a number to loop over the array/vector; it is an implementation detail.

Good news is that Martin suggests that we shouldn't use m_ prefixes for member variables anymore. Praise the lord!

Here's another couple of suggestions which are truly Java-specific:

  • Classes and objects should have noun or noun phrase names
  • Methods should have verb or verb phrase names

Sure, and what if a class exists for the sole purpose of providing a method (especially in a language like Java, where everything is a class)? What do I call it then? ClassForMethodMethodName? The point is that objects and functions are just the same thing, artificially made different by several (most?) languages; establishing different naming rules is equivalent to encoding the weakness of the language in the code. Take std::equal_to in C++: it is a class, but it has a call operator, and we are meant to use an object of that class, std::equal_to<>{}, as if it were a function.
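To make the point concrete, here's a tiny sketch of mine:

```cpp
#include <functional>
#include <string>

// std::equal_to is a class (a noun, if you want), yet an object of it
// is invoked exactly as if it were a function (a verb): the noun/verb
// naming rule doesn't survive contact with function objects.
bool sameString(const std::string& a, const std::string& b) {
    return std::equal_to<>{}(a, b);  // class instance, called like a function
}
```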

Eventually, there's one point I strongly agree with, in the Final Words of this chapter (my emphasis):

The hardest thing about choosing good names is that it requires good descriptive skills and a shared cultural background. This is a teaching issue rather than a technical, business, or management issue.
which prompts me to ask a question:
Should I write code as if my grandma were to read it? Or should we be willing to learn continuously during our life as programmers, thus lifting our cultural background with time?

… and this was most of what I had to say about Chapter 2.


Clojure Programming

In this post, I'm collecting my thoughts on the Clojure Programming book published by O'Reilly.

At page 165, in the middle of the section named Promises, the authors claim that

An immediately practical application of promises is in easily making callback-based APIs synchronous.

and then propose this asynchronous function,

(defn call-service
  [arg1 arg2 callback-fn]
  ; ... do stuff
  (future (callback-fn (+ arg1 arg2) (- arg1 arg2))))

this synchronous wrapper,

(defn sync-fn
  [async-fn]
  (fn [& args]
    (let [result (promise)]
      (apply async-fn (conj (vec args) #(deliver result %&)))
      @result)))

and this use case,

((sync-fn call-service) 10 7) ; (17 3)

What puzzled me about this solution is the fact that sync-fn doesn't really preserve call-service's API: we can't even pass a callback function to call-service anymore, because the callback is encoded in sync-fn itself, specifically in the function literal (aka lambda) #(deliver result %&), which entangles two businesses:

  • executing a pure function on the inputs (the two inputs that call-service passes to callback-fn);
  • populating the result promise via the non-pure deliver utility.

Indeed, #(deliver result %&) is a variadic function that wraps all its arguments into a sequence, via %&, and then passes that sequence as the second argument to deliver, so we can rewrite it as the composition of those two businesses

; equivalent to #(deliver result %&)
(comp (partial deliver result) list)

It is apparent, at this point, that list is just one possible function that we want call-service to use on callback-fn's arguments, so there's no reason for hard-coding it into sync-fn.

In fact, we can make it an input to sync-fn:

(defn sync-fn
  [async-fn callback-fn]
  (fn [& args]
    (let [result (promise)]
      (apply async-fn (conj (vec args)
                            (comp (partial deliver result) callback-fn)))
      @result)))

which can then be used like this

((sync-fn call-service list)          10 7) ; (17 3)
((sync-fn call-service vector)        10 7) ; [17 3]
((sync-fn call-service #(apply + %&)) 10 7) ; 20
((sync-fn call-service #(apply * %&)) 10 7) ; 51

where we pass, as sync-fn's second argument, the same lambda we would pass to call-service as a third argument.

This way, sync-fn doesn't even need any change in the face of changes in what call-service expects the callback to look like (even though the fact that call-service expects the callback function as the last argument is still hard-coded in sync-fn). For instance, if call-service expected a callback-fn that takes a list of arguments rather than distinct arguments,

(defn call-service-2
  [arg1 arg2 callback-fn]
  ; ... do stuff
  (future (callback-fn (list (+ arg1 arg2) (- arg1 arg2)))))

then we'd only need to adapt the callback lambdas by making them accept one argument instead of many:

((sync-fn call-service-2 identity)          10 7) ; (17 3)
((sync-fn call-service-2 vec)               10 7) ; [17 3]
((sync-fn call-service-2 (partial apply +)) 10 7) ; 20
((sync-fn call-service-2 (partial apply *)) 10 7) ; 51
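Incidentally, the same pattern translates almost verbatim into the C++ world, with std::promise and std::future playing the roles of Clojure's promise and deref. A rough sketch of mine (all names are made up, not from the book):

```cpp
#include <functional>
#include <future>
#include <memory>
#include <thread>

// A call-service-like async API: computes two values and hands them
// to the callback on another thread.
void callService(int a, int b, std::function<void(int, int)> cb) {
    std::thread([=] { cb(a + b, a - b); }).detach();
}

// sync-fn in C++: route the callback's result through a promise and
// block on the matching future.
template <typename AsyncFn, typename CallbackFn>
auto syncFn(AsyncFn asyncFn, CallbackFn callbackFn) {
    return [=](int a, int b) {
        using R = decltype(callbackFn(0, 0));
        // the shared_ptr keeps the promise alive inside the detached thread
        auto result = std::make_shared<std::promise<R>>();
        auto fut = result->get_future();
        asyncFn(a, b, [result, callbackFn](int x, int y) {
            result->set_value(callbackFn(x, y));  // the "deliver"
        });
        return fut.get();  // the synchronous part: wait for delivery
    };
}
```

As in the Clojure version, the pure combination of the two values (callbackFn) is taken as an input, while the delivery through the promise stays hidden inside syncFn.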

Programming skills

Self-assessment
Stack Overflow

C++ Software Engineer

MathWorks, Cambridge (UK)

Design, development, and testing of text comparison back-end tools and algorithms.

Experience with:

  • C++17
  • template meta-programming
  • concepts
  • Boost, Boost.Hana, Range-v3
  • functional programming
  • Google Benchmark
  • Google Test

Application Support Engineer

MathWorks, Cambridge (UK)

Main roles:

  • Phone/e-mail tech support
  • training assistant at JLR (Jaguar Land Rover)
  • MATLAB Expo
  • lead computer science weekly meetings
  • coaching new hires
  • technical interviews

Development C++ projects:

  • Implementation of Kuo&Cross longest common subsequence algorithm
  • Transition from a third-party (3p) library to an internal library for HTTP requests


Ph.D. in Aerospace Engineering

Federico II, Naples (Italy)

Thesis

Development of a 3D fluid-dynamic solver for turbulent flows, from scratch

  • Object-Oriented Fortran 2003/2008, parallelized via MPI 3.1
  • employed three-dimensional MPI domain decomposition

Master's degree in Aerospace Engineering

Federico II, Naples (Italy)

Final grade: 110/110 cum laude

Thesis title: Parallel implementation of compact schemes for Navier-Stokes simulations

  • Description: Compact finite difference numerical schemes are implicit in space, thus requiring the solution of linear systems in order to be used. Parallelization opportunities in applying such schemes have been investigated from the algorithmic point of view and tested on 2D benchmarks.
  • Contribution: An original method has been developed which uses explicit estimates of derivatives to decouple the algebraic systems.
  • Results: The method reduces dissipation by 75% with respect to more standard approaches.

Favourite courses:

  • Multimedia signal processing
  • Control theory
  • Complements of gas dynamics (e.g. shockwave structure)

Bachelor's degree in Aerospace Engineering

Federico II, Naples (Italy)

Final grade: 110/110 cum laude

Thesis title: Convective heat transfer measurements in transitional circular and chevron jets

  • Description: Study of the "cooling" capabilities of chevron impinging jets
  • Contribution: Experimental campaign consisting of heat transfer measurements by means of a thermographic camera

Favourite courses:

  • Numerical methods (FD, FV, FEM)
  • Aerodynamics
  • Thermodynamics
  • Gasdynamics

High school diploma

Liceo "Colombo", Marigliano (Naples, Italy)

Final grade: 100/100


  • Bronze medal at the National stage of "Olimpiadi della Matematica 2008" (90th classified out of 300 participants)
  • 2nd classified at the Provincial stage of "Olimpiadi della Fisica 2008" (among ~100 participants)
  • Top 5 classification every year at the institute competitions to select participants for
    • Olimpiadi della Matematica
    • Olimpiadi della Fisica
    • Olimpiadi della Chimica