When will someone write documentation that tells you what the bits mean rather than just what they are? I have been working to integrate a library into our system, and every time I try to figure out what it wants from my code, all it tells me is what a part of it is: "This is the foo field." The problem is that it does not tell me what happens when I set foo. It is as if I am supposed to know that already.
Confoosed
Nowhere is this problem more prevalent than in hardware documentation. I am sure Dante reserved a special ring of hell for people who document this way, telling you what something is while never explaining the why or the how.
The problem with that approach is assumed knowledge. Most engineers, of either the hardware or the software persuasion, seem to assume that the people they are writing documentation for (if they write documentation at all) already have the full context of the widget in their heads when they start to read the docs. The documentation in this case is a reference, not a guide. If you already know what you need to know, then you are using a reference; if you do not yet know what you need to know, then you need a guide. Companies that care about their documentation will, at this point, hire a decent technical writer.
The job of a technical writer is to tease out of the engineer not only the what of a device or piece of software, but also the why and the how. It is a delicate job, because given the incredible malleability of software, one could go on for thousands of pages about the what, not to mention the why and the how. The biggest problem is that the what is the easiest question to answer, because it is in the code when dealing with software, or in the VHDL (VHSIC Hardware Description Language) when dealing with hardware. The what can be extracted without talking to another person, and who really wants to spend the day pulling engineers' teeth to get coherent explanations of how to use their systems? Since the what is the easiest to get at, most documentation concentrates on it, often to the exclusion of the other two. Most tutorial documentation is short, and at some point the rest is left as "an exercise for the reader." And exercise it is. Have you ever tried to lift a reference manual?
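To make the difference concrete, here is a contrived C sketch; the widget structs, the FOO_* flags, and widget_start() are invented for illustration, not taken from any real library. The first comment answers only the what; the second teases out the why and the how as well.

    #include <stdint.h>

    /* Reference style: the what, and nothing but the what. */
    struct widget {
        uint32_t foo;  /* This is the foo field. */
    };

    /* Guide style: the same field, with the why and the how. */
    struct widget_guided {
        /*
         * foo: a bitmask of transfer options (the what). The device
         * buffers writes by default, trading latency for throughput;
         * set bits here only when that default hurts you (the why).
         * OR together the FOO_* flags and set foo before calling the
         * (hypothetical) widget_start(); changes made afterward are
         * ignored (the how).
         */
        uint32_t foo;
    };

    #define FOO_NO_BUFFER 0x0001  /* synchronous writes: bounded latency */
    #define FOO_CRC_CHECK 0x0002  /* verify a CRC on every transfer */

The first version is all you need if you already know the device; the second is what lets a newcomer set foo without spending an afternoon pulling teeth.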
Although many engineers and engineering managers now pay lip service to the need for "good documentation," they continue to churn out the same garbage that technical people have joked about since IBM started intentionally leaving pages blank. A good writer knows that his or her job is to form in the mind of the reader a sense and an image of what the writer is trying to communicate. Alas, programmers and engineers have rarely been known as good writers; in fact, they are most often known as atrocious ones. It turns out that writers generally want to relate, in some way, to people; that is rarely said of technical folk, and in fact the opposite is more common. Most of us want to go off into a corner and "do cool stuff" and be left alone. Unfortunately, none of us works in a vacuum, and so we must at least learn to communicate effectively with others of our ilk, if only for the sake of our own project deadlines.
Every software and hardware developer should be able to answer the following questions about systems they are developing:

1. Why is it being built?
2. What does it do?
3. How does it do it?

And if the answer to #1 is, "Management told me to," then it is time to fire management, or find a new job.
KV
During a recent rollout, I overheard one of our DevOps folks bemoaning the fact that upgrading our software had slowed down the overall system. This is a complaint I seem to hear more and more often. The problem is that the folks at my gig do not do enough performance testing; they just upgrade systems whenever our vendors tell them to, so as not to miss any new features, whether or not they actually use those features.
Bogged Down by Upgrades
What you are really seeing are 10-year-old expectations trashed by modern hardware trends. Everyone in computing has been talking about the end of frequency scaling for at least five years, and probably more. While plenty of folks sounded the warning about this problem, and talked at length about the ways in which software would have to change to meet it, not enough software has been rewritten (oh, I'm sorry, I meant refactored) to handle this new reality. I am often amazed by people who upgrade software and expect it automatically to be faster.
Expecting more features makes some sense, because that is what marketing and management are always going to push for in a new version of a system. The more boxes you can tick, the more money you can charge, even if the features provided are of little or no use. Given that upgrades always include new features, what makes anyone think the upgraded system will run any faster? Surely more code to execute means the system will run slower, not faster, after the upgrade, unless you upgrade your hardware at the same time. None of this is to say that it must be the case; it is simply that it often is.
The end of frequency scaling (the ever-upward tick of CPU frequencies) was supposed to spur the software industry into building applications that take advantage of multiple cores, since transistor density is still climbing even if clock frequency is not. Newer software does seem to take advantage of the multiple cores in a system, but even when it does, another problem presents itself: memory locality. Anyone who has been building software on the latest hardware knows that programs now need to know where they are running in order to get fast access to memory. In multiprocessor systems, memory access is now nonuniform, meaning that if my program runs on processor A but the operating system gives me memory nearer to processor B, then I am going to be very, very annoyed.
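Getting memory near the processor you are running on is something a program can ask for explicitly. What follows is a minimal Linux-only sketch using libnuma (link with -lnuma); pinning to CPU 0 and the 64MB size are arbitrary choices for illustration.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <numa.h>   /* libnuma: numa_alloc_onnode() et al. */

    int
    main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this machine\n");
            return EXIT_FAILURE;
        }

        /* Pin this thread to CPU 0 so the scheduler cannot migrate it
         * away from the memory we are about to allocate. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return EXIT_FAILURE;
        }

        /* Find the node that owns CPU 0, then allocate 64MB there:
         * processor A's memory for a program running on processor A. */
        int node = numa_node_of_cpu(0);
        if (node < 0) {
            perror("numa_node_of_cpu");
            return EXIT_FAILURE;
        }
        size_t len = 64UL * 1024 * 1024;
        char *buf = numa_alloc_onnode(len, node);
        if (buf == NULL) {
            fprintf(stderr, "numa_alloc_onnode failed\n");
            return EXIT_FAILURE;
        }

        buf[0] = 1;  /* touch the memory so the page actually lands on the node */
        printf("CPU 0 is on node %d; 64MB allocated there\n", node);

        numa_free(buf, len);
        return EXIT_SUCCESS;
    }

Production code would discover which CPU it is actually scheduled on rather than hard-coding one, but the point stands: on this hardware, where your memory lives is now your problem.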
Modern operating systems are trying to handle NUMA (nonuniform memory access) correctly, but when they get it wrong, you become, as you signed this letter, bogged down again. These are the new rules of the game programmers must contend with. Processors are not getting faster; they are splitting into parallel machines with nonuniform memory. In the current environment, we now need to worry about all the things we may have last seen in parallel-programming classes in graduate school. All programming will now be threaded programming, and we will have to deal with all that entails, plus the fact that we now need to know where our memory is coming from. My advice is to switch careers from programming to ditch digging (where at least at the end of the day you will know you did something). If you cannot switch careers, here are the only two tools you will need as you try to improve the responsiveness of your code: printf() and a spoon.

KV
Related articles
on queue.acm.org
Keeping Bits Safe: How Hard Can It Be?
David S.H. Rosenthal
http://queue.acm.org/detail.cfm?id=1866298
Successful Strategies for IPv6 Rollouts. Really.
Thomas A. Limoncelli and Vinton G. Cerf
http://queue.acm.org/detail.cfm?id=1959015
Get Real about Realtime
George Neville-Neil
http://queue.acm.org/detail.cfm?id=1466445