I coined the term "Stupid Knowledge" for anything you have to learn that you shouldn't have to learn just to make the system work -- especially things you wouldn't need to know if they (the designers) had designed it "right", or at least better. It's a stupid waste of your time -- they were stupid, and now you have to know about it!
As computer nerds migrate from being average users (those who use the computer and just want to get their work done) to being "power users" (those who want to learn how and why things behave as they do, and all the little details of the system), they start learning all sorts of things they shouldn't have to know just to get work done: the early design decisions, the history of why we do it that way, the how's and why's behind behavior that appears to make "no sense". It appears that way because it's true -- we do it that way because that's how it evolved, not because that's the right way to think of it now. All of this is "stupid knowledge".
One example I used to use a lot was the Windows registry -- the fact that for decades, Microsoft users had to know about that monstrosity (especially when the software was failing to do what it was supposed to) was evidence of failure, not success, despite some smug geeks who would lord their ability to fix shit that shouldn't have broken in the first place.
If I'm using a GUI machine, and I need to do something on the command-line interface (because the UI can't do it), it is Stupid Knowledge that I should need to know how to do that. (The same goes for tricks I can do in the GUI that I can't do in the command line, but those are far rarer.) But then it gets deeper and all meta.
Unix is the worst for this: almost everything is done a certain way not because it's right or makes sense, but because that's the way it was always done. Think of the directory structure (the file system). Unix has this old two-dimensional system for laying out n-dimensional files, so the tree can look like the following:
Directory Tree 1 (scope first):

    /
        Local
            Applications
            Libraries
        Network
            Applications
            Libraries

Directory Tree 2 (type first):

    /
        Applications
            Local
            Network
        Libraries
            Local
            Network
In Directory Tree 1, the different Applications folders differentiate whether an application is local (available only to people on this machine) or available to people across the network -- you basically have two different paths, Local and Network. But why is it done this way? Why not define the type of file first -- like Applications or Libraries -- and then have the attribute of whether they are local or remote as subdirectories of that (Directory Tree 2)? Each has advantages and disadvantages. The truth is that it's done this way more for historical reasons than anything else.
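The point about the two layouts can be sketched in a few lines. This is a minimal illustration with hypothetical paths and a made-up app name, not any real system's layout: both trees encode the same two attributes (scope and type) of the same file, and only the nesting order differs -- neither order is the "true" one.

```python
from pathlib import PurePosixPath

# Hypothetical example: one application described by two attributes.
app = {"name": "Mail.app", "scope": "Local", "type": "Applications"}

# Tree 1 nests by scope first (Local/Network), Tree 2 by type first
# (Applications/Libraries). Same file, same attributes, different paths.
tree1_path = PurePosixPath("/", app["scope"], app["type"], app["name"])
tree2_path = PurePosixPath("/", app["type"], app["scope"], app["name"])

print(tree1_path)  # /Local/Applications/Mail.app
print(tree2_path)  # /Applications/Local/Mail.app
```

Either path is just one arbitrary flattening of the attribute set -- which is exactly why having to memorize the chosen one is Stupid Knowledge.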
Why shouldn't you be able to rename the Applications or Libraries folder, or the Local or Remote folder, to something that makes sense to you (the user)? The name shouldn't affect the type -- though in traditional UNIX it certainly does (the name is used to define the type).
Either way, the problem is that you are trying to cram a multi-dimensional file-space (many attributes like position, type, access, name, etc.) into a two-dimensional hierarchy -- and it just doesn't quite fit.
In a well-designed system, the attributes (metadata like position, type, access, etc.) wouldn't be coupled to the name or location: then I could move the Network folder anywhere and rename it, and the same with Applications. The fact that you need to know how and why it works this way is just Stupid Knowledge.
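The decoupled design described above can be sketched as a tiny attribute catalog. All the names here (Item, find, the sample entries) are hypothetical, invented for illustration: the idea is just that type and scope live in metadata, so renaming an item changes nothing about what it is, and you look things up by attributes rather than by path.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str                                   # user-chosen, freely renameable
    attrs: dict = field(default_factory=dict)   # metadata: type, scope, access, ...

# Hypothetical catalog: attributes are stored with the item,
# not encoded in its name or its position in a hierarchy.
catalog = [
    Item("Mail.app", {"type": "application", "scope": "network"}),
    Item("libssl", {"type": "library", "scope": "local"}),
]

def find(items, **wanted):
    """Select items by attribute values, not by path or name."""
    return [i for i in items
            if all(i.attrs.get(k) == v for k, v in wanted.items())]

# Renaming the item doesn't change its type or scope:
catalog[0].name = "My Mail"
apps = find(catalog, type="application")
print([i.name for i in apps])  # ['My Mail']
```

In a scheme like this, "where is it?" and "what is it?" are independent questions, so there is nothing path-shaped to memorize -- no Stupid Knowledge required.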
In the end, whenever you see something in computers that you need to know but shouldn't have to, that's Stupid Knowledge. Every one of those is a sign of failure -- something we learned later but didn't go back and fix. (This is also called tech debt, for technical debt: things you should fix after the fact because you didn't do them right the first time.) It's sort of evidence of engineering malfeasance... and sloth at not fixing it.