Common Sense Software Engineering: Letter to a Young Woman (Part II)

Author Notes:

This is the second part of a three-part series.   It is a huge piece of writing.  It resulted from a conversation I had with a young woman who showed interest in learning how to program and possibly entering the IT profession.  It is also an attempt to bring the realities of the Information Technology profession as it is today into perspective, so that a young woman interested in this field can make informed choices as to how she may enter it, either professionally or for self-interest.

Those who read this piece and would like to pursue further study are more than welcome to contact me with their questions and requests for assistance at blackfalconsoftware@outlook.com.

I will do everything I can to help you on this long but potentially exciting journey while also offering advice on how to avoid the most serious pitfalls you may encounter.

In addition, since this is such a long piece, it is also available in downloadable PDF form at the following address… https://1drv.ms/b/s!AnW5gyh0E3V-g2bQ4UCq4Df-V2tf

The 21st Century

By 2000, Microsoft had introduced its next-generation operating system, Windows 2000, which could take advantage of the newer Pentium chip sets, which, though still 32-bit processors, could execute computer instructions even faster due to advances in their internal chip architectures.

And then the dot-com economic bubble reared its head: a frenzy began to unfold in the field as the Internet became the platform of choice for development.

New startup companies began popping up in the technical industry like an uncontrolled growth of lawn weeds.  A new generation of young professionals was entering the industry in droves and was suddenly being offered highly inflated salaries on the strength of their technical educations, though few had any real-world experience.  Venture capitalists were pouring money like water into new companies that barely had legitimate business plans for development.

The result, in a word, was… chaos.   The only real change to come out of this economic fantasy was a severe increase in working hours for developers and the beginnings of a decrease in job security.

After the dot-com bubble burst, thousands of technical personnel lost their positions while numerous companies collapsed under the weight of their own mismanagement.  One thing remained: the business perception that development could be sped up by increasingly cutting corners in the design of applications.

Another outcome, though not a direct result, was that to increase the speed of development while cutting its costs, US businesses turned to outsourcing technical work at ever-increasing levels.  Insourcing increased similarly, with a new generation of foreign workers entering the US technical workforce who were not trained nearly as well as their earlier counterparts, nor did they have the engaging personalities of those earlier contemporaries.  Foreign outsourcing companies began feeding into the technical pipeline personnel who were simply not qualified to work in the profession, from either a technical standpoint or a personal one.

Many of the new foreign personnel were trained only in the details of technology and had little understanding of how systems and applications were actually built for longevity.  They thought they did, and many lorded it over their American counterparts, as did the foreign management that was increasingly brought in at lower costs as well, solely for the purpose of brow-beating developer staffs into meeting increasingly deadly deadlines.  Until the US Millennials began entering the profession, the atmosphere in the Information Technology field became one of terrible pressure and arrogance, which caused an undercurrent of sociological disruption in the US technical workforce.  US citizens were viewed as second-class members of the profession, since so many could not compete with the exploitative circumstances of both the foreign insourced personnel and the outsourced ones.

It was at this point that professional women in the field began leaving the industry in droves, as the oppressive working conditions in the workplace started to get out of control and developers were seemingly expected to be either working or on call 24/7.  To encourage this perspective, television advertisements began “glorifying” the non-stop work habits of young workers, habits that left no time for personal lives.

In short, female technical personnel reacted with a sense of sanity towards a profession that was barreling towards an all-consuming, technically-oriented lifestyle as mobile computing technologies emerged, along with a maturation of development techniques and tools that in reality left little room for innovation in business environments.  To this end, developers who adopted the increasingly promoted, freely available, open-source software products (software\source code provided freely) started using them to design their own tools, with the idea that redundancy was some form of innovation.

“Open Source” software was a new wrinkle in a profession that up through the early 2000s had supported a substantially successful “cottage industry”, in which software developers could sell their own crafted software at moderate prices under the aegis of what was called “shareware”: simply software that had either a trial period or a limited feature set, both enforced by licensing.

The “Open Source” movement grew out of the sociology of the growing Java community, many of whose promoters were quite young and still living with their parents, or were academics who saw writing software on their own time as a way to promote their own ideas.  However, to be fair, this movement was also given impetus by Richard Stallman's “Free Software Foundation”, which promoted the idea that all software should be free.

The concept of “Open Source” was that a community of professional and non-professional technicians would contribute to a software product’s base code allowing for many extensions and corrections to any defects.  It was a highly altruistic movement in the beginning but many more senior professionals worried, and rightly so, that such a movement would eventually destroy any developer’s ability to sell their creations for income.  They were right.

The “Open Source” movement did in fact eventually destroy, in large part, the original cottage industries of third-party vendors that had grown up from the late 1980s through the 1990s.  Its long-term effects on the ability of professional developers to earn either a full-time or part-time income from their own personal interests in development have been immeasurable.

This movement would later be spurred on by Apple's cheapening of the music industry as it promoted its iTunes products, splintering how music was acquired (increasingly through Internet downloads) and attempting to cut out the royalties due to the artists who put their hearts and souls into one of Humanity's greatest achievements.

The “Open Source” movement has indeed provided the industry with some great products, such as the MySQL and PostgreSQL database engines.  However, for the most part, the ability of individual developers to sell third-party products at moderate cost was reduced to almost nothing.

This detrimental loss to the profession was also seriously compounded by smart-phone technology, which made everything appear as either a sound-bite or a simple blip on a screen.  The psychologies fostered by such equipment, along with the changing economics of the US, produced a dramatic trend towards increasing self-gratification, which was also reflected in a demand for lower prices; in the software world, for no cost at all.

This combination of forces ensured the destruction of the third-party, independent developers who created software for sale at moderate prices and who became known as ISVs (independent software vendors).  In and around 2005 a counter-movement attempted to re-initialize the software cottage industries through the development and promotion of the Micro-ISV: small groups of software developers who banded together to create their own products for sale in the hope of re-creating the third-party software industry.  However, this effort completely failed, as initial attempts went nowhere due to the overshadowing of what were to become known as Internet aggregators.

Aggregators, in particular software aggregators, were companies that offered (in theory) a way for many people to make money by aligning their efforts and products with singular companies that offered their platforms for use by the “masses”.  So, for example, you have Apple's App Store and Microsoft's Windows Store, where developers can upload their products and sell them for a fraction of the prices the earlier cottage industries would have supported.  Both companies make off with large profits from the fees paid to them on those sales, while providing the chimera of professionalism to those who have taken advantage of their services.  Many other such services exist, such as Amazon's cloud services, Microsoft's cloud services, and Google's advertising programs.  The list becomes endless to the point of oblivion.

This is not to say that there has been no innovation in these past years; there has.  However, for the most part it has been predicated on wealthier companies driving out much of the creative spirit that once inhabited the software development world in favor of “committee-based” sales and development efforts.  What has emerged instead is a circular process of producing new tools, concepts, and development processes that are to a very great extent merely redundant with the original techniques of creating quality software.  It is as if the entire profession is now simply running in circles, with everyone trying to one-up one another with a new framework, design concept, or software tool, mostly predicated on free software provided by larger, monolithic organizations in an effort, apparently, to keep the masses busy building what is in essence “junk” software.

It is possible that the huge loss of female professional technical personnel over the years has been an underlying factor for the Information Technology profession’s derailment from the development of quality applications to the creation of software “junk”; this as a result of the loss of the nurturing, female mindset.

Not surprisingly, however, out of all this the mature products that software developers have used for years to build their applications have all remained well entrenched in development organizations.  This is primarily true of the entrenched, third-generation languages: BASIC (Visual Basic), C# (pronounced “see-sharp”), and Java.

Along the way a host of new languages have been developed, such as Python (the top general scripting language), PHP (web only), Ruby on Rails, Go (Google), Swift (Apple, discussed below), and Scala, to name a few.  However, none of these languages has yet exceeded the popularity of the primary languages just mentioned, with the possible exception of Python, which has become the most advanced of what are called the scripting or dynamic languages.  The reason for this is that the primary languages can do anything the newer languages are capable of, and more, except for very specific, esoteric types of processes (i.e., string parsing, the extraction of data from a single line of text).
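
As a small aside, the kind of single-line text extraction just mentioned is perfectly achievable in a primary language as well, just a bit more verbosely.  Here is a minimal sketch of my own in C# (the sample text and names are illustrative only, not from any real system):

    using System;

    class ParseDemo
    {
        static void Main()
        {
            // Extract the third comma-separated field from a line of text.
            string line = "Smith,Jane,Engineering,2016";
            string[] fields = line.Split(',');
            Console.WriteLine(fields[2]);   // prints "Engineering"
        }
    }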

Out of this array of newer languages only one stands out as truly innovative: Apple's new Swift language.  The reason for this is that the majority of Apple development has been based upon a variant of the C language called “Objective-C”.  It is probably the most difficult C variant to pick up and learn well, so this new language, Swift, should bring about an easier form of development for those devoted to Apple software creation.

The problem with releasing a new language is that it must go through the same maturation process that all previous languages have experienced, which entails adding new features as the people using the new language contribute their suggestions as well as unearth defects in it.  This maturation process will eventually weed out quite a number of newcomers as the original developers of a language lose interest or cannot gain enough developer interest to keep working on their creations.  A classic example of this maturation process is the trajectories that Microsoft's standard ASP.NET and its later implementation, ASP.NET MVC, have taken.  Reading the comparative technical articles on the features being implemented in the latter, you can easily see that they are primarily the same as those that had earlier been added to the former, and that each is going through the same exact process.

To this end, the primary languages are always being refined, so it will be very difficult for any new language to supersede them.  If one does, it will simply be a result of good marketing and the recent tendency of younger professionals to gravitate to anything new, no matter the credibility of the new software tool.

A Note on Successfully Studying Programming

No matter what language any student wants to learn, there are two phases that must accompany each level of the process.  You must first read and understand the topic you are currently studying (or be taught it in a formal class) and then actually use the computer to apply the part of the language you just finished learning.  Learning a programming language has many similarities to learning a foreign language.  You first learn a concept in terms of grammar and then you must go out and use it.  For a programming language, you must actually develop with it, no matter how small your first applications may appear.  You have to get used to using your computer to build with it.
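
To make this concrete, here is about the smallest complete program you could type in and run as a first exercise, written in C# (the language recommended below); the class name is simply my own choice:

    using System;

    // A first program: it prints a single line of text to the console window.
    class HelloWorld
    {
        static void Main()
        {
            Console.WriteLine("Hello, World!");
        }
    }

Typing this in, compiling it, and running it is exactly the kind of small, hands-on exercise the two phases above call for.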

Do not be afraid of making mistakes… and a lot of them.  You will; and even when you become very competent with programming you will still make them.  It is just a part of the endeavor and you cannot escape it…  So don’t ever be ashamed about it…

What Language to Study…

Coming into this history of the Information Technology profession and its current state, even for one who may be interested in simply developing programming capabilities out of self-interest, can be quite daunting, considering that the ability to develop an application is not confined simply to knowledge of a programming language.

Beyond knowledge of a programming language, you must also know how to use the tools that accompany development with that language.  Most often this also includes understanding design constructs such as object-oriented programming, database access, interface design (the display of an application), and the constructs required for the particular type of application you are interested in eventually creating.  For example, if you are interested in game programming, you will have to learn how to use one of the many game libraries that are currently available for just about any type of programming language.

The upshot, then, when considering a language is to choose one that offers the most flexibility, so that once its foundations are learned they can be used to pick up other languages more easily.

In this case the best alternatives are the three primary languages: Visual Basic (VB.NET), C#, and Java.  All three provide tremendous flexibility towards understanding all of the foundations of modern programming.  However, only one of these languages provides a new programmer with the greatest amount of flexibility while being rather easy to learn and allowing for the development of literally any type of application.  That language is C#.  Understanding C# will allow you to move towards other languages with C syntax and nomenclature, and there are many, with Java being the most prominent of them outside of C++.  In fact, several of the most prominent game development environments use C/C#-like languages for scripting purposes.
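
As a small illustration of that shared C syntax, consider this sketch of mine; aside from the class wrapper and the Console call, the loop itself would compile nearly verbatim in Java or C++:

    using System;

    class CFamilyDemo
    {
        static void Main()
        {
            // The braces, semicolons, and three-part for-statement below
            // read almost identically across the C family of languages.
            int total = 0;
            for (int i = 1; i <= 10; i++)
            {
                total += i;
            }
            Console.WriteLine(total);   // prints 55
        }
    }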

Admittedly, I am a Microsoft software engineer, so C# is one of the languages with which I can provide assistance, along with the various tools that are used to work with it.  In addition, I do not use a lot of the fancy innovations that have been brought to the C# language, since most of these new features are nothing more than redundant ways of doing the same things with the primary tenets of the language.  Many of these new features also act to make the language more arcane and difficult to decipher (like C++).

Beyond this, however, C# sits right in the middle in terms of capability, flexibility, and, most importantly, complexity.  By the latter, complexity, what is meant here is that the C# language, like both VB.NET and Java, contains all of the foundations that a general-purpose language requires, allowing it to support any type of application.  In addition, the inclusion of such complexity will allow you to study any aspect of development that could then be applied to any other equivalent language.

In terms of the Microsoft development environments, C# is no more complex than VB.NET, and no more powerful.  Though the debate has raged over the years in the Microsoft development communities as to the superiority in performance of C# compared to VB.NET, this debate is really one of simple preference.  There are no existing scientific benchmarks that demonstrate any superior overall performance of C# when compared to VB.NET.  This is because they use the same compiler foundations and run against the same run-time support foundations.  Thus, neither could be faster than the other.

VB.NET has a somewhat simpler syntax than C#, especially for people who have been working in some other variant of the BASIC language, but that is about it.
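
To give a feel for that difference, here is the same trivial calculation in C#, with my rendering of the VB.NET equivalent of each line shown as a trailing comment:

    using System;

    class SyntaxComparison
    {
        static void Main()
        {
            int count = 10;                  // VB.NET: Dim count As Integer = 10
            double price = 19.95;            // VB.NET: Dim price As Double = 19.95
            double total = count * price;    // VB.NET: Dim total As Double = count * price
            Console.WriteLine(total);        // VB.NET: Console.WriteLine(total)
        }
    }

The C-family punctuation (braces, semicolons, lower-case keywords) versus VB.NET's English-like keywords really is the bulk of the difference.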

The most complex choice would be Java.  Like both VB.NET and C#, Java is a very mature language with all of the same types of tools available to work with it as are available for the Microsoft languages.  The only difference is that the tools for the Microsoft languages are generally provided by Microsoft, though there are some very good third-party tools available, while Oracle has been the primary vendor providing the majority of the foundational tools for the Java language (Oracle took over stewardship of the language when it bought Sun Microsystems in 2010).  However, as regards some of the tools support, this appears to be changing, as there is talk of possibly spinning off the “NetBeans” Java development environment (the tool that is used to code the Java language) and its staff into a separate organization.

If you were planning on being employed by a large-scale development organization that supports many of the large businesses in the United States, Java would probably be your best choice.  Java, though a highly capable language that can do anything VB.NET or C# are capable of, was nonetheless originally released commercially with very large-scale application development in mind, and the environment comes with a variety of tools to support such deployments.  Java actually originated as a language for appliances, but when its capabilities were proven in the development labs, it was decided to release it as a new third-generation, general development language.

The result is that learning Java well is a more complex experience than that required for VB.NET or C#.  In addition, I would venture to say that, as a result, most hobbyist developers do not choose Java simply for personal use.

How to Begin Your Language Studies…

All general development languages primarily follow one of two types of formats.  The first is that the language is compiled to a native format that both the hardware and the operating system can support.  Most often this means that the language is compiled for a specific chip-set type (microprocessor).

Notice in the image above that what you write for a compiler is called “source code”.  This is the English-like text that a developer writes, comprised of a language's specific “reserved words” (the commands that make up the language's vocabulary, if you will).

A compiler is a separate application that takes whatever code the developer writes and, in this case, compiles it into Assembler (a low-level internal language), outputting the results in binary executable format.  If one were to open such a file in a text editor, one would simply see what appears to be complete gibberish.  The advantage of the natively compiled format is that the executable will be processed at the maximum speed the microprocessor on any specific machine can provide.  The additional advantage is that, without some very expensive software, such an executable cannot be easily reverse engineered.  This Assembler\Binary file can then be executed simply by double-clicking on it, as you would for any executable application, since it also includes any support software required for its execution.

Languages that produce such output have had a cyclic popularity in the industry.  Some years they are in and others they have been out.  Today, the two prominent languages that still produce such output are C++ and Pascal.  Interestingly enough, both C++ and Pascal, though they are capable of producing all types of software, are primarily used for internals development (i.e., the design of compiler technologies and databases).  At one point, back in the 1980s and early 1990s, Pascal was nearly as popular for internals development as C++.  At that time approximately 50% of the compilers in the industry worldwide were written in Pascal.  Today, Pascal is more or less a niche product provided by Embarcadero and RemObjects.  Both vendors use variants of the once-famous Borland International compiler.

The Java language, when it was introduced in 1995, was designed around the semi-interpretive concept, as were VB.NET and C# when they were commercially released later, in 2002.  In this regard, these languages are all compiled to a form of pseudo-code that is then run interpretively against supporting applications.  The one that Java uses is called the Java Virtual Machine (JVM), and the similar one for the VB.NET and C# languages is called the Common Language Runtime (CLR).

An interpretive language is one in which its instructions are re-interpreted by the application that executes them every time an instruction comes up for processing in the internal execution cycle.  This makes such languages somewhat less powerful in terms of performance than natively compiled executables, and they can be easily reverse engineered by freely available tools on the Internet, making them insecure in terms of both security and intellectual-property concerns.  As a result, additional tools have been designed to scramble the pseudo-code output so that it is nearly impossible to reverse engineer adequately.
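
As a small illustration of why such pseudo-code assemblies are so easy to inspect, the compiled file retains the full names of your types and methods as metadata, which any freely available decompiler simply reads back out.  This sketch of mine uses .NET's built-in reflection facilities to make a compiled program list the contents of its own assembly:

    using System;
    using System.Reflection;

    class MetadataDemo
    {
        static void Main()
        {
            // Load the very assembly this program was compiled into
            // and print the type and method names recorded inside it.
            Assembly self = Assembly.GetExecutingAssembly();
            foreach (Type type in self.GetTypes())
            {
                Console.WriteLine("Type: " + type.Name);
                foreach (MethodInfo method in type.GetMethods(
                    BindingFlags.DeclaredOnly | BindingFlags.Public |
                    BindingFlags.NonPublic | BindingFlags.Static |
                    BindingFlags.Instance))
                    Console.WriteLine("  Method: " + method.Name);
            }
        }
    }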

The advantage of such languages is that many languages can be developed to run against either of these interpretive platforms, which provide a wide array of support tools.  The disadvantage is that no such language can provide anything unique to its environment that doesn't meet the specifications of the required runtime interpreter.

It should be noted that this is a very simplistic view of how executable code is created for these three languages.  However, since 2002, this form of executable file creation has become the standard in the industry for most business application development.  Like native executables, these interpretive assemblies can simply be double-clicked on and executed, since they have embedded in them the locations of the interpretive runtimes that process them.

What is .NET?

So far we have talked about the three primary languages that are recommended for study.  All three of these languages are what are known as “general purpose” languages, as they can accommodate literally any style of application as well as the requirements for them.  That being said, none of these languages was developed as a simply separate entity.  All three of them were designed around what are known as “integrated environments”, with the Microsoft languages initially being considered more advanced in their integration than Java.  However, in recent years Java has reached similar levels of integration with the maturity of its own tools.

What is an “integrated environment”?  It is simply a complete set of tools that work together to support the languages designed for them.  For VB.NET and C#, this environment is called Microsoft .NET.  A high level view of the .NET environment would look like the graphic below…

Don’t worry about the technologies in this image as you will not be dealing with them for quite a while.  However, this graphic representation provides some insight into what the Microsoft .NET environment comprises and this is only from Microsoft.  To get a better understanding of the details of these technologies go to the following link on Wikipedia…

https://en.wikipedia.org/wiki/.NET_Framework

With the exception of the database tool, “Entity Framework”, all of these technologies are contained within what is called the .NET Framework: a single installation on your machine that will support the entirety of your studies and development efforts for the C# language.

Notice that the C# and VB.NET languages are not listed here.  This is because such languages are designed as separate compilers that interact with the .NET Framework, which is what makes them .NET languages; and there are quite a few beyond the two described.  Each .NET language is designed to compile against the Framework, using its libraries to process the language's instruction set.  The Common Language Runtime then supports the interpretation and execution of the pseudo-code that these compilers generate.

Looking at this graphic, anyone new to programming would legitimately wonder and ask how anyone would be able to work with all these technologies.  There are two ways to do this.

Just about everything in .NET is designed to work at the lowest level of development, which is through the use of a simple text editor.  A good example of such an editor would be Notepad++.

See…  https://notepad-plus-plus.org/

The Java language has also been designed from this standpoint.

Thus, with a simple text editor you can create your code, save it to a file, and then, with a .NET Command Prompt console screen (which comes with the installation of the Framework), execute the compiler to create an executable file.
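
As a rough sketch of what that looks like, assuming you saved the earlier HelloWorld example as HelloWorld.cs (a file name of my own choosing) and that the Framework's csc.exe C# compiler is reachable from the prompt (its folder name varies by Framework version), a session might run as follows, with the compiler's banner output omitted:

    C:\Dev> csc HelloWorld.cs

    C:\Dev> HelloWorld.exe
    Hello, World!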

This is, of course, a very low-level way of working and as a result is highly inefficient, since there is much that cannot be done without great difficulty, such as stepping through your code while it is running to uncover errors.  This is called “debugging” your code.

To work more easily with .NET language code, the preferred choice of most developers is to make use of what is known as an “Integrated Development Environment” or IDE, which Borland International was the first to popularize with version 4.0 of Turbo Pascal back in the late 1980s.

Since then, every language vendor (or its supporting tools community) has provided an IDE for the coding of its language or languages.  Many IDEs can support multiple languages, since an IDE simply holds your own code, or what (as previously mentioned) is known as “source code”.  The IDE will compile it for you by presenting your code to an external compiler and then retrieving that compiler's results.

For Microsoft languages, the IDE of choice is called, “Visual Studio”…

Visual Studio  (See… https://en.wikipedia.org/wiki/Microsoft_Visual_Studio)

Visual Studio has a very long history and has evolved from the earliest days of Microsoft's first web development environment, which used what were called Active Server Pages in the 1990s.  This was just a fancy name for the style of mixing code and interface markup (HTML) in the same module.  Developers both loved and hated this type of coding for the web.  They loved the simplicity but hated the confusion of the mixed code bases, with HTML being used for the interface and VBScript being used to code processes.  This entanglement led to many sloppily built applications.  However, there were ways to make such development very legible if only a few standards were followed.

Today, Active Server Pages is mimicked by the very popular web language PHP and its corresponding support tools.  However, Visual Studio offers a complete development environment for just about anything related to .NET development.  From database applications, to internals, to applications such as word processors, to game development, it can all be done with this excellent IDE.  It is considered the finest IDE in the entire development industry, with only a single competitor, Java's “NetBeans” IDE.

There are undoubtedly other very fine IDEs but they are for the more specialized languages such as Micro Focus’ COBOL IDE, which has taken this aging language and turned it into a developer powerhouse.

The graphic image above of Visual Studio, though a little blurry, gives an impression of a lot of complexity, and the complexity is there.  However, you will find that learning and understanding the basic features for its use will take less time than one would expect.

Like many tools for developers these days, Visual Studio is offered completely free in what is called the Community Edition.  Prior to the latest free release of Visual Studio, Microsoft offered the free IDE in two different flavors, each scaled down from the paid-for Professional Edition: a web development IDE and a desktop application IDE.

With the latest release of Visual Studio 2015, these separate installations are no longer available, as the new Community Edition is now a single, complete installation with nearly the same power and features as the Professional Edition.

In the next and final part of this series, we will discuss how to get and install the .NET Framework and Visual Studio, along with notes on how to go about beginning your studies, whether to become a professional or a hobbyist, and the exploding world of game development.

Stay tuned…
