I’d answer the question posed by UWNThesis’ post ‘Can anyone become a programmer?’ and the study papers linked to it with ‘yes’: anyone could become a programmer. After some thought, I’ve come up with three learning methods that might work better than conventional programming courses. Ultimately, though, I think people run into problems because programming languages are tools that require context and technical understanding to use. Anyone could become a programmer in much the same way anyone could become an engineer.
1. Usability and Control – The Command Line as an Ideal Interface
It was almost a decade ago, around the time I was discovering the Linux operating system, that I attended a talk given by a computer scientist from Cardiff University, Dr. Frank Langbein, who argued that anyone can and should learn to program. He didn’t argue that people should ‘learn to code’ for its own sake, but because it gives us the ability to break through the limitations imposed by ‘user-friendly’ graphical interfaces. Sure enough, the same arguments, and the same distilled computing wisdom, can be found in Neal Stephenson’s Cryptonomicon and In the Beginning was the Command Line – please do read them for yourself.
Perhaps the clearest illustrations of Langbein’s argument are the horrid (in my opinion) desktop interface that became Ubuntu’s default around 2011, and the widely disliked Windows 8 desktop. Despite these entirely cosmetic changes, both were the same Linux and Windows operating systems as before, and both just as usable from the command line. The only change was the graphical interface, yet there was enough demand that programmers forked GNOME, and third parties developed programs to reintroduce a start menu in Windows 8.
I had other reasons to dispense with the desktop interface: I’m composing this post on an old Compaq that’s barely powerful enough to run a modern desktop environment, and it certainly can’t handle graphics or video without almost combusting, yet it’s too good to throw away. Consequently I got into the habit of doing most things in the Linux command line – browsing the Internet, accessing email, modifying spreadsheets, configuring and administering servers remotely, drafting research papers and typesetting them in the same format you’d see in a scientific journal, and so on. It’s important to mention that I’m no genius – the command line was designed to be usable by humans, and proficiency is just another learned pattern of behaviour, a habit.
This is actually a good way to begin learning how to program. Given enough time (say three months) and very little prior experience, a novice can install Linux or FreeBSD in a VirtualBox virtual machine, start experimenting with the command line and become reasonably adept at it. I’d state that much as fact. With some familiarity with command line syntax, it’s a trivial step to string together the execution of shell binaries in a BASH script, much like you’d string together modules in Python. From there, one could learn Python itself, and perhaps later move on to a more ‘serious’ language.
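The jump from shell habits to Python is smaller than it looks: the same pattern of chaining one program’s output into another’s input translates almost directly. As a minimal sketch (the particular commands here are just examples), this recreates the shell pipeline `ls -1 | wc -l` using Python’s standard-library subprocess module:

```python
import subprocess

# List the current directory, one entry per line (the shell's `ls -1`)
listing = subprocess.run(
    ["ls", "-1"], capture_output=True, text=True, check=True
)

# Feed that output into `wc -l` to count the lines, as a pipe would
count = subprocess.run(
    ["wc", "-l"], input=listing.stdout,
    capture_output=True, text=True, check=True
)

print(count.stdout.strip())
```

The `check=True` flag makes Python raise an error if either binary fails, which is roughly what `set -e` gives you in a BASH script.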
2. Context and Technical Understanding
The abstract of Saeed’s 2006 paper claims that ‘programming teaching is useless for those who are bound to fail and pointless for those who are certain to succeed’.
I think it’s unfair and unreasonable to make that determination about undergraduates (and anyone else, for that matter) before they’ve grasped the technical theory and methodical reasoning. Both assertions may still be correct, but for a very different reason.
Where I studied, it was actually possible (though not common) to graduate in computer security without having done any programming beyond a first-year module in Visual Basic. What I observed is that many of us successfully taught ourselves BASH scripting and Python during our final year, skipping the standard tutorials to focus instead on scripting the execution of security-related tools and third-party Python modules. Programming became simply a means of achieving something, once we had gained some level of technical understanding and, with it, the ability to clarify the problems. For example, to develop a database-driven application, we might fetch a graphics rendering module, a database driver module, an Object Relational Mapper module, and so on, then develop the code that brings those modules together as a software application. Could we have done that two years earlier? Maybe, but certainly not without some difficulty.
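The module-gluing approach described above can be sketched with the standard library alone. In this hedged example, sqlite3 stands in for the database driver module, and the `Book` class is a hypothetical stand-in for the kind of model class an Object Relational Mapper would generate – the point is only how little original code is needed to tie the pieces together:

```python
import sqlite3

class Book:
    """Hypothetical record class, standing in for an ORM-mapped model."""
    def __init__(self, title, author):
        self.title = title
        self.author = author

# An in-memory database stands in for a real database server
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, author TEXT)")

def save(book):
    """Persist a Book via the driver module."""
    conn.execute("INSERT INTO books VALUES (?, ?)", (book.title, book.author))

def all_books():
    """Map stored rows back into Book objects."""
    return [Book(t, a) for t, a in conn.execute("SELECT title, author FROM books")]

save(Book("Cryptonomicon", "Neal Stephenson"))
titles = [b.title for b in all_books()]
```

The application-specific logic amounts to two short functions; everything else is provided by the modules being glued together.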
What this suggests, especially given there’s a world of difference between coding and having the skills required to properly engineer software, is that a lot of additional expertise is required to become a decent software developer. I think programming courses should be taught in that context, and not necessarily as a first step. Nacko, commenting on Ars Technica, put it another way:
‘Saying a person “can’t program” is like saying he can’t build a house. There is a lot of foundation skill and knowledge that goes into being able to successfully build a house. I would think the only way forward is to identify a more specific deficiency than “unable to build a house” and correct that with appropriate study and training. I suspect that applies to programming as well, and that most people can at least achieve competency’.
By the way, I’ve put together a short list of required skills on GitHub for someone applying for an entry-level developer role.
3. Learning Through Application
Does a course need to teach how to print ‘hello world’, how to manipulate arrays and how to loop operations in a given order? Not necessarily. I think the typical syllabus is inadequate, ineffective and often not very motivating. With Python, Java or C#, beginners can start making things with third-party extensions after only the briefest introduction to the language – I actually flicked through a £7 Python book in WH Smiths’ magazine section the other month that uses this method. A course might take the form of a project to build a Python-based graphical interface to a MySQL database, during which students would gain a broader and more applicable understanding. Many of the concepts learned with Python could then be carried over into a secondary course in Java or C#, which might introduce real-world software design and engineering concepts – the SOLID design principles chief among them.
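Those SOLID ideas can be previewed in Python before students ever touch Java or C#. Here’s a minimal sketch of one of them, the dependency-inversion principle; the class and function names are my own invention for illustration:

```python
from abc import ABC, abstractmethod

class Store(ABC):
    """Abstraction the high-level code depends on."""
    @abstractmethod
    def add(self, item): ...
    @abstractmethod
    def items(self): ...

class MemoryStore(Store):
    """Concrete implementation; a MySQL-backed class could replace it
    without touching any code written against Store."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def items(self):
        return list(self._items)

def register(store: Store, name: str) -> None:
    # High-level logic knows only the abstraction, not the storage details
    store.add(name)

store = MemoryStore()
register(store, "alice")
```

Swapping `MemoryStore` for a database-backed implementation changes one line at the call site and none of the application logic, which is the whole point of the principle.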
Another thing I’ve noticed over the years is that a development environment’s font and colour scheme can make a huge difference to the comprehensibility of a source file, just as surely as the way the source is formatted. Quite often I switch to another editor, such as Notepad++ or Atom, when modifying tricky-to-follow .NET projects. It’s also important to use the right editor – you don’t want an IDE that’s dauntingly feature-rich and presents its own learning curve.