Part two

Letter to a young woman: A history of computer science

Steve Naidamast

We’re checking in again with Steve Naidamast and his exhaustive history of the tech industry. In part one, he went over the historical background of women in tech. Today, we’re exploring more recent history all the way up to today, including important lessons all of us should learn.

If you missed our last installment, read part one of this series here!

Previously:

The following is a huge piece of writing. It resulted from a conversation I had with a young woman who showed interest in learning how to program and possibly entering the IT profession. It is also an attempt to bring the realities of the Information Technology profession as it is today into perspective, so that a young woman interested in this field can make informed choices about how she might enter it, either professionally or out of self-interest.

Those who read this piece and would like to pursue further study are more than welcome to contact me with their questions and requests for assistance at support@blackfalconsoftware.com.

I will do everything I can to help you on this long but potentially exciting journey while also offering advice on how to avoid the most serious pitfalls you may encounter.

In addition, since this is such a long piece, it is also available in downloadable PDF form here.

The 21st Century

By 2000, Microsoft had introduced its next-generation operating system, Windows 2000. It could take advantage of the newer 486 and then Pentium chip sets, as 32-bit machines processed computer instructions ever faster thanks to advances in internal chip architectures.

As the Dot.com economic bubble reared its head, a frenzy unfolded in the field and the Internet became the platform of choice for development. New startup companies began popping up in the technical industry like weeds. A new generation of young professionals entered the industry in droves. These professionals were offered highly inflated salaries for their technical and educational backgrounds, even though few had any real-world experience. Venture capitalists were pouring money into new companies that barely had legitimate business plans.

The result was chaos. The only real change to come out of this economic fantasy was a severe increase in working hours for developers and the beginnings of a decrease in job security.

After the Dot.com bubble burst, thousands of technical personnel lost their positions. Numerous companies collapsed under the weight of their own mismanagement.  Only one thing remained: the idea that development could be sped up by cutting corners in the design of applications.

Another indirect outcome was that the US turned to outsourcing technical work in order to increase development speed while cutting costs. Companies also started insourcing by sponsoring foreign technical workers to enter the US workforce despite their lack of training. Foreign outsourcing companies began feeding personnel into the technical pipeline who were simply not qualified to work in the profession.

SEE MORE: The top 10 catastrophes in the history of IT

Many of the new foreign personnel were unprepared, trained only in the details of technology. They had little understanding of how systems and applications were actually built to last. The atmosphere in the IT field became one of terrible pressure and arrogance. It was difficult for US tech workers to compete with the exploitative conditions under which the insourced and outsourced workers were employed.

It was at this point that professional women in the field began leaving the industry in droves. As the oppressive working conditions spiraled out of control, developers were expected to be either working or on call 24/7. To encourage this perspective, television began “glorifying” the non-stop work habits of young workers who had no time for personal lives.

In short, the profession was barreling towards an all-consuming, technically-oriented lifestyle. Mobile computing technologies emerged along with a maturation of development techniques and tools. At the same time, developers who adapted to the increasing promotion of freely available, open-source software products started using them to design their own tools.

“Open source” software was a new development in the profession. Until the early 2000s, there was a substantially successful cottage industry in which software developers could sell their own software. This operated under the aegis of shareware: software offered with either a trial period or a limited feature set enforced by licensing.

SEE MORE: The tangled ways of open source and how to master it: GitHub shares its wisdom

The open source movement grew out of the growing Java Community. Many of its promoters were quite young and still living with their parents. Others were academics who saw writing software on their own time as a way to promote their own ideas. However, this movement was also given impetus by Richard Stallman’s “Free Software Foundation”, which promoted the idea that all software should be free.

The concept of open source was that a community of professional and non-professional technicians would contribute to a software product’s base code. This allowed for many extensions and corrections to any defects.  It was a highly altruistic movement in the beginning. However, many senior professionals worried that such a movement would eventually destroy a developer’s ability to sell their creations for income.  They were right, to a certain extent.

The open source movement did in fact eventually destroy the original cottage industries that had grown up around third-party vendors during the early years of tech. Its long-term effect on the ability of professional developers to earn income from their own development interests has been immeasurable.

The open source movement has indeed provided the industry with some great products, including the MySQL and PostgreSQL database engines. But for the most part, the ability of individual developers to sell third-party products was reduced to almost nothing.

This detrimental loss to the profession was compounded by smart-phone technology, which made everything appear as either a sound-bite or a simple blip on a screen. The changing economics of the US saw a dramatic trend towards a demand for lower prices and, in the software world, for no cost at all.

SEE MORE: Who should fund open-source projects?

This combination of forces ensured the destruction of the third-party, independent developers who created software for sale at moderate prices. In 2005, a counter-movement attempted to re-initialize the software cottage industries. Small groups of software developers banded together as micro-ISVs (independent software vendors) to create their own products for sale, hoping to re-create the third-party software industry. However, this effort completely failed; initial attempts went nowhere, overshadowed by what would become known as Internet aggregators.

Aggregators were companies that offered a way for many people to make money by aligning their efforts and products with companies that offered their platforms for use by the “masses”. For example, the Apple Store and Microsoft’s App Store allow developers to upload their products and sell them at a fraction of the cost of running the earlier cottage businesses. Both companies make large profits from the fees collected on each sale. Many other such services exist, such as Amazon Cloud Services, Microsoft Cloud Services, Google’s advertising programs, etc. The list goes on and on.

This is not to say that there has been no innovation in these past years; there has. However, it has been predicated on wealthier companies driving out much of the creative spirit that once inhabited the software development world in favor of “committee-based” sales and development efforts. Instead, what has emerged is a circular process of producing new tools, concepts, and development processes that are merely redundant with the original techniques of creating quality software. It is as if the entire profession is now simply running in circles, trying to one-up itself with a new framework, design concept, or software tool.

SEE MORE: Whose job is it to promote diversity in the Eclipse community?

It’s not surprising that mature products used by software developers for years have remained entrenched in development organizations. This is primarily true for the third-generation languages Visual Basic, C#, and Java.

Along the way, a host of new languages have been developed, such as Python, PHP, Ruby on Rails, Go, Swift, and Scala, to name a few. However, none of these languages has yet exceeded the popularity of the primary languages in use; the possible exception is Python, which is now the most advanced of the scripting or dynamic languages. The reason for this is that the primary languages can do anything the newer languages can, and more, except for very specific, esoteric types of processes (e.g., string parsing, the extraction of data from a single line of text).

Out of this array of newer languages, only one stands out as truly innovative: Apple’s new Swift language. The reason is that the majority of Apple development has been based upon a variant of the C language called Objective-C, probably the most difficult C variant to pick up and learn well. Swift should bring about an easier form of development for those devoted to Apple software creation.

SEE MORE:  Trending on the charts: Swift hits Top 10 for popular programming languages

The problem with releasing a new language is that it must go through the same maturation process that all previous languages have experienced. This entails adding new features as the people using the new language contribute their suggestions and unearth its defects. This maturation process weeds out a number of newcomers, as some of the original developers of a language lose interest or cannot gain enough developer interest to keep working on their creations. A classic example of this maturation process is the trajectory of Microsoft’s standard ASP.NET against its later implementation, ASP.NET MVC. Viewing their maturation side by side, one can easily see that they are going through the same process.

My belief is that the primary languages are always being refined, so it will be very difficult for any new language to supersede them. If one does, it will simply be a result of good marketing and humanity’s obsession with novelty.

A note on successfully studying programming

Learning a programming language has many similarities to learning a foreign language. You first learn a concept in terms of grammar and then you must go out and use it. For a programming language, you must actually develop with it, no matter how small your first applications may appear. You have to get used to using your computer to build with it.

Do not be afraid of making mistakes… a lot of mistakes. You will make them, and even when you become very competent with programming you will still make them. It is just a part of the endeavor and you cannot escape it. So don’t ever be ashamed of it; we all go through the same process.
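For instance, a complete first program really can be tiny. Below is a minimal sketch in C# (the language recommended later in this piece); the file and class names are arbitrary, chosen only for the example:

    // FirstProgram.cs: a complete, runnable C# program.
    using System;

    class FirstProgram
    {
        static void Main()
        {
            // Print a single line to the console window.
            Console.WriteLine("My first program runs!");
        }
    }

Typing in, compiling, and running something even this small exercises the full development cycle, and that is exactly the practice that matters at the start.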

Choosing a programming language to study

Coming into this field can be quite daunting. The ability to develop an application is not simply confined to the knowledge of a programming language.

Beyond the knowledge of a programming language, you must also know how to use the tools that support development with that language. Most often this also means understanding design constructs such as object-oriented programming, database access, interface design (the display of an application), and the constructs required for the particular kind of application you eventually want to create. If you are interested in game programming, for example, you will have to learn how to use one of the many game libraries currently available for just about any type of programming language.
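As one small, hedged illustration of the object-oriented construct mentioned above, consider the C# sketch below; the class, property, and method names are invented purely for the example:

    using System;

    // A class bundles data (properties) and behavior (methods) together,
    // which is the core idea of object-oriented programming.
    class Player
    {
        public string Name { get; set; }
        public int Score { get; private set; }

        public void AddPoints(int points)
        {
            Score += points;   // behavior that works on the object's own data
        }
    }

    class Program
    {
        static void Main()
        {
            var player = new Player { Name = "Ada" };
            player.AddPoints(10);
            Console.WriteLine(player.Name + " has " + player.Score + " points.");
        }
    }

The same bundling of data and behavior appears, with different syntax, in every object-oriented language you might later pick up.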

Choose a language that offers the most flexibility so that, once learned, its foundations can be used to learn other languages more easily.

In this case the best alternatives are the three primary languages: Visual Basic, C#, and Java. All three provide tremendous flexibility towards understanding all of the foundations of modern programming. However, only one of these languages provides a new programmer with the greatest amount of flexibility while being rather easy to learn, and it will also allow for the development of literally any type of application.

SEE MORE: 5 programming languages worth learning in 2017

This language is C#. Understanding C# will allow you to move towards other languages with C syntax and nomenclature, and there are many, with Java being the most prominent of them outside of C++. In fact, several of the most prominent game development environments use C/C#-like languages for scripting purposes.
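To see why C# knowledge transfers so readily, consider this small sketch; apart from the Console call, the loop below is nearly character-for-character valid Java or C++ as well:

    using System;

    class SyntaxDemo
    {
        static void Main()
        {
            // Braces, semicolons, and this for-loop shape are common to
            // the whole C family: C, C++, Java, and C#.
            for (int i = 1; i <= 3; i++)
            {
                Console.WriteLine("Iteration " + i);
            }
        }
    }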

Admittedly, I am a Microsoft software engineer. C# is one of the languages with which I can provide assistance, along with the various tools that are used to work with it. That said, I do not use many of the fancy innovations that have been brought to the C# language, as I feel many of these new features act to make the language more arcane and difficult to decipher (like C++).

However, C# also sits right in the middle in terms of capability, flexibility and, most importantly, complexity. By complexity, I mean that the C# language, like both VB.NET and Java, contains all of the foundations that a general-purpose language requires, allowing it to support any type of application. In addition, this breadth will allow you to study any aspect of development that could be applied to any other equivalent language.

In terms of the Microsoft development environments, C# is no more complex than VB.NET, nor any more powerful. Though the debate has raged over the years as to the superiority in performance of C# compared to VB.NET, this debate is simply one of preference. There are no existing, scientific benchmarks that demonstrate any superior overall performance of C# when compared to VB.NET. This is because they use the same compiler foundations and run against the same run-time support foundations. Thus, neither could be faster than the other.

VB.NET has a somewhat simpler syntax than C#, especially for people who have been working in some other variant of the BASIC language, but that is about it.

SEE MORE: Technology trends 2017: Here are the top programming languages

The most complex choice would be Java. Like both VB.NET and C#, Java is a very mature language with all of the same types of tools available for working with it as are available for the Microsoft languages. The only difference is that the tools for the Microsoft languages are generally provided by Microsoft, though there are some very good third-party tools available, while Oracle has been the primary vendor providing the majority of the foundational tools for the Java language. However, as regards some of the tools support, this appears to be changing, as there is talk of spinning off the “NetBeans” Java development environment (the tool that is used to code the Java language) and its staff into a separate company.

If you were planning on being employed by a large-scale development organization that supports many of the large businesses in the United States, Java would probably be your best choice. Though Java is a highly capable language and can do anything that VB.NET or C# can, it was nonetheless released commercially with very large-scale application development in mind, and the environment comes with a variety of tools to support such deployments. Java actually originated as a language for appliances, but when its capabilities were proven in the development labs, it was decided to release it as a new third-generation, general development language.

The result is that learning Java well is a more complex experience than that required for VB.NET or C#. In addition, I would venture to say that, as a result, most hobbyist developers do not choose Java simply for personal use.

How to begin your language studies

All general development languages primarily follow two types of formats. The first is that the language is compiled to a native format that both the hardware and the operating system can support. Most often this means that the language is compiled for a specific chip-set type (microprocessor).

[Diagram: source code passing through a compiler to produce a native executable]

Notice in the image above that what you write for a compiler is called “source code”. This is the English-like text that a developer writes, comprised of specific language-based “reserved words” (the commands that make up a language’s vocabulary, if you will).

A compiler is a separate application that takes whatever code the developer writes and, in this case, compiles it into Assembler (a low-level internal language) and outputs the result in binary executable format. If one were to open such a file in a text editor, one would simply see what appears to be complete gibberish. The advantage of the natively compiled format is that the executable will be processed at the maximum speed the microprocessor on any specific machine can provide. The additional advantage is that without some very expensive software, such an executable cannot be easily reverse engineered. This Assembler\Binary file can now be executed simply by double-clicking on it, as you would for any executable application, since it also includes any support software required for its execution.

SEE MORE: Learning skills – understanding new programming languages more efficiently

Languages that produce such output have had a cyclic popularity in the industry: some years they are in, other years out. Today, the two languages that still produce such output are C++ and Pascal. Interestingly enough, both C++ and Pascal are primarily used for internals development. At one point, back in the 1980s and early 1990s, Pascal was nearly as popular for internals development as C++. At that time approximately 50% of the compilers in the industry worldwide were written in Pascal. Today, Pascal is more or less a niche product provided by Embarcadero and RemObjects. Both vendors use variants of the once-famous Borland International compiler.

The Java language was designed around the semi-interpretive concept, as were VB.NET and C# when they were commercially released later, in 2001. In this regard, these languages are all compiled to a form of pseudo-code that is then run interpretively against supporting applications. The one that Java uses is called the Java Virtual Machine (JVM), and the similar one for the VB.NET and C# languages is called the Common Language Runtime (CLR).

[Diagram: source code compiled to pseudo-code, which is then executed by the JVM or the CLR]

An interpretive language is one in which its instructions are re-interpreted by the application that executes them every time an instruction comes up for processing in the internal execution cycle. This makes such languages somewhat less powerful in terms of performance than natively compiled executables. Their output can also be easily reverse engineered by freely available tools on the Internet, raising security and intellectual-property concerns. As a result, additional tools have been designed to scramble the pseudo-code outputs so that they are nearly impossible to reverse engineer.
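You can see this pseudo-code for yourself. The sketch below assumes a Visual Studio Developer Command Prompt, where Microsoft’s ildasm disassembler is available; the excerpt of its output is abbreviated and will vary with compiler versions:

    REM Compile a C# source file, then display the pseudo-code (Intermediate
    REM Language) that the compiler produced instead of native machine code.
    csc Program.cs
    ildasm /text Program.exe

    REM Abbreviated output for a one-line "hello" program:
    REM   .method private hidebysig static void Main() cil managed
    REM   {
    REM     .entrypoint
    REM     ldstr      "My first program runs!"
    REM     call       void [mscorlib]System.Console::WriteLine(string)
    REM     ret
    REM   }

The readability of this output is precisely why scrambling tools exist for interpretive languages.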

The advantage of such languages is that many languages can be developed to run against either of these interpretive platforms, which provide a wide array of support tools. The disadvantage is that no such language can provide anything unique to its environment that doesn’t meet the specifications of the required runtime interpreter.

It should be noted that this is a very simplistic view of how executable code is created for these three languages. However, since 2001, this form of executable file creation has become the industry standard for most business application development. Like native executables, these interpretive assemblies can simply be double-clicked and executed, since they have embedded in them the information about where the interpretive runtimes that process them are located.

What is .NET?

So far we have talked about the three primary languages that are recommended for study. All three are what are known as “general purpose” languages, as they can accommodate literally any style of application as well as the requirements for them. That being said, none of these languages was developed as a simply separate entity. All three were designed around what are known as “integrated environments”, with the Microsoft languages initially being considered more advanced in their integration than Java. However, in recent years Java has reached similar levels of integration with the maturity of its own tools.

What is an “integrated environment”?  It is simply a complete set of tools that work together to support the languages designed for them.  For VB.NET and C#, this environment is called Microsoft .NET.  A high level view of the .NET environment would look like the graphic below…

[Diagram: a high-level view of the Microsoft .NET environment and its component technologies]

Don’t worry about the technologies in this image, as you will not be dealing with them for quite a while. However, this graphic provides some insight into what the Microsoft .NET environment comprises, and this is only from Microsoft. To get a better understanding of the details of these technologies, look here.

SEE MORE: Frameworks are challenging programming languages’ supremacy

With the exception of the database tool, “Entity Framework”, all of these technologies are contained within what is called the .NET Framework: a single installation on your machine that will support the entirety of your studies and development efforts for the C# language.

Notice that the C# and VB.NET languages are not listed here. This is because such languages are designed as separate compilers that interact with the .NET Framework in order to be considered .NET languages; and there are quite a few beyond the two described. Each .NET language is designed to compile against the Framework so as to use its libraries to process its instruction set. Subsequently, the Common Language Runtime supports the interpretation and execution of the pseudo-code that these compilers generate.
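As a small, hedged example of what “using its libraries” means in practice, the C# sketch below performs no file handling of its own; it simply calls classes from the Framework’s System.IO library, and the CLR executes the pseudo-code the compiler generates (the file name is invented for the example):

    using System;
    using System.IO;

    class FrameworkDemo
    {
        static void Main()
        {
            // Both calls below are serviced entirely by the .NET Framework's
            // class libraries; the compiler only emits pseudo-code that
            // references them, and the CLR executes that output.
            File.WriteAllText("notes.txt", "Saved through the .NET Framework.");
            Console.WriteLine(File.ReadAllText("notes.txt"));
        }
    }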

Looking back at this graphic, anyone new to programming would legitimately wonder how anyone is able to work with all of these technologies. There are two ways to do this.

Just about everything in .NET is designed to work at the lowest level of development, which is through the use of a simple text editor.  A good example of such an editor would be Notepad++.

The Java language has also been designed from this standpoint. Thus, with a simple text editor you can create your code, save it to a file, and then, from a .NET Command Prompt console screen (which comes with the installation of the Framework), execute the compiler to create an executable file.
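As a hedged sketch, assuming you saved the earlier example as FirstProgram.cs, the whole edit-compile-run cycle from the .NET Command Prompt looks like this (csc.exe is the C# compiler that installs with the Framework):

    REM Compile the source file; this produces FirstProgram.exe alongside it.
    csc FirstProgram.cs

    REM Run the resulting executable.
    FirstProgram.exe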

SEE MORE: Is Java bad for beginners? Stanford thinks so

This is, of course, a very low-level way of working and, as a result, highly inefficient, since a lot cannot be done without great difficulty, such as stepping through your code while it is running to uncover errors. This is called “debugging” your code.

To work with .NET language code more easily, the preferred choice of most developers is what is known as an “Integrated Development Environment”, or IDE, which Borland International was the first to popularize with version 4.0 of Turbo Pascal back in the late 1980s.

Since then, every language vendor (or its supporting tools community) has provided an IDE for coding its language or languages; many IDEs can support multiple languages, since an IDE simply holds your own code, or what (as previously mentioned) is known as “source code”. The IDE will compile it for you by presenting your code to an external compiler and then retrieving that compiler’s results.

Visual Studio

For the Microsoft languages, the IDE of choice is called Visual Studio.

Visual Studio has a very long history and has evolved from the earliest days of Microsoft’s first web development environment, which used what were called Active Server Pages in the 1990s. This was just a fancy name for the style of mixing code and interface markup (HTML) in the same module. Developers both loved and hated this type of coding for the web. They loved the simplicity but hated the confusion of the mixed code bases, with HTML being used for the interface and VBScript being used to code processes. This entanglement led to many sloppily built applications. However, there were ways to make such development very legible if only a few standards were followed.

Today, Active Server Pages is mimicked by the very popular web language PHP and its corresponding support tools. Visual Studio, however, offers a complete development environment for just about anything related to .NET development. From database applications, to internals such as word processing, to game development, it can all be done with this excellent IDE. It is considered the finest IDE in the entire development industry, with only a single competitor, Java’s “NetBeans” IDE.

There are undoubtedly other very fine IDEs but they are for the more specialized languages such as Micro Focus’ COBOL IDE, which has taken this aging language and turned it into a developer powerhouse.

[Screenshot: the Visual Studio IDE]

The image of Visual Studio above, though a little blurry, gives an impression of a lot of complexity, and the complexity is there. However, you will find that learning and understanding its basic features will take less time than one would expect.

Like many tools for developers these days, Visual Studio is offered completely free in what is called the Community Edition. Prior to the latest free release of Visual Studio, Microsoft offered this IDE in two flavors scaled down from the paid-for Professional Edition: a web development IDE and a desktop application IDE.

With the latest release of Visual Studio 2015, these separate installations are no longer available, as the new Community Edition is now a single, complete installation with nearly the same power and features as the Professional Edition.

Requirements for Installing Visual Studio 2015 & the .NET Framework

The minimum requirement for installing Visual Studio 2015 and the .NET Framework is a machine running Windows 7 with Service Pack 1. If you have a brand new machine, you will most likely have Windows 10 on it, which is fine.

Installing Visual Studio & the .NET Framework

With a single installation package you can install both the Visual Studio 2015 Community Edition and the .NET Framework Version 4.6.  Depending on the deployment of the Visual Studio Community Edition that you download you may get version 4.6, 4.6.1, or 4.6.2 of the .NET Framework.  All of these latest frameworks are capable of supporting any application type you wish to create.

To get the download package, go here and click download. This will bring you to a secondary page where you will be given the option to download the file labeled “vs_community.exe” or the one labeled “VS2015.com_enu.iso”.

The “vs_community.exe” file will allow you to install the package over the web after you have saved the file to a selected directory on your computer.

The “VS2015.com_enu.iso” file is the complete package in downloadable form. Like an “exe” file, the “iso” file can be launched by double-clicking on it. However, it will immediately request that you burn it to a DVD, since “iso” files cannot be directly executed.

The download page(s) will also provide you with detailed descriptions for either type of installation package. Either installation method will provide you with all the tools you may require to study and learn C# and application development.

Learning C#, Visual Studio & the .NET Framework

There are several ways to begin studying this environment. If you feel you can handle a lot of technical information all at once, then one of the best places to look is here. You will find tutorials on every aspect of Visual Studio, C#, and the .NET Framework. However, these tutorials tend to be for seasoned developers who are looking to learn something new or are possibly changing over from the Java Community.

This site is also heavily reliant on reading the material and researching it, which in any event is something that all developers have to do, since videos aren’t the best medium for hard-core technical subject material.

Another way to learn this material is to go to the online videos that are offered by a variety of sites and take some of the good tutorials that many developers and instructors have provided without charge to the development community.

Here are some sources for you to familiarize yourself with the Visual Studio 2015 environment:

Note that these videos go into quite a bit of detail about the actual use of the Visual Studio development environment. So don’t be discouraged if you don’t understand everything all at once. All of this technical information will become second nature over your time of studying it.

For learning C#, Microsoft has a set of recommended tutorials at Channel9 for beginners.  You can find the entire list in order here.

Despite the many advances in media technologies in recent years, the best way to learn how to program is from those ancient technologies called “books”. With books you have advances such as “dog-earing pages”, “writing in the margins”, and even “bookmarking” by placing a piece of paper within the pages so you can easily return to where you left off reading.

Simple but highly effective technologies

There are tons of books available on every aspect of technology you can imagine.  And for C# I would recommend the following to begin with:

  • C# For Beginners: The tactical guidebook – Learn CSharp by coding
  • Microsoft Visual C# Step by Step (8th Edition) (Developer Reference) 8th Edition

Both of these books are available from Amazon.com. If you look at the details of the books on the Amazon site, you will notice that they are fairly large, each probably taking a single tree to produce. Nonetheless, the advantage of such books is that they have everything, from “soup to nuts”, needed to learn and understand the C# language. And you can study them at your own pace, building up your technical capabilities slowly.

Professional or hobbyist?

Everyone who starts to learn to program will consider the possibility of doing this work professionally. Many who take such courses in school have already decided upon such a career, though this number has been decreasing over the years. Surprisingly, many people in the field, including younger professionals, find that university graduates in Computer Science are not nearly as good as self-taught developers, since the graduates tend to have more theoretical knowledge while the self-taught developer has experienced more real-world development situations.

However, the Information Technology profession is nowhere near what is promoted by Hollywood or television. Technical professionals are not “Nerds” or “Geeks” as the media likes to portray us. No doubt we have such personalities within the profession. Nonetheless, the majority of people who make up the bulk of business and general application developers and software engineers tend to be a rather motley but tough engineering bunch.

There is a tremendous amount of competitiveness within the field, along with a lot of arrogance that is often abrasive and difficult to handle. Hopefully, the new millennial generation of developers will root out these traits, as they have been very damaging to the profession and have helped destroy its allure as a career consideration.

SEE MORE: Why today’s computer science students need to know more about ‘professional coding’

There is also always the feeling of a need to keep up, or you will not be accepted by your colleagues in the workplace, nor may you be prepared for a new position at another company. Thus, the field has developed a trajectory similar to officer career paths in the US military, which has come to be called the “up or out” syndrome: you either advance or you’re toast. Luckily, this has mostly been the thrust in corporate environments, leaving the consulting and freelancing arenas far more open to those who simply want to enjoy the arts and sciences of software development. In these areas, individual developers can maintain a software specialty while still being able to accrue projects as their experience increases. The reason for this lessened pressure to know so many technologies is that freelancing and consulting assignments are often for clients that are more flexible in how certain applications are built, though there are just as many that require the standard technological implementations the market is fostering.

Though such independent personnel still have to maintain their studies and gain knowledge, the pressures are not as extreme. At least in my years as a senior consultant I did not find that to be the case. The pressures instead come from the need to maintain a constant flow of contracts and/or assignments. If you work with a good agency, this is less of a problem than it is for a freelancer.

SEE MORE: Java is alive and well, thank you, and is just as relevant as ever

The corporate environments can also be very dehumanizing and brutal, where the enjoyment of accomplishment is often overridden by impossible deadlines and intense pressures. Most of these situations are the result of the terrible corporate politics that come into play with development projects, usually fostered by incompetent technical management. I have found that as a senior consultant, being outside of employment, you are not as affected by such impediments to your development. You are there to do an assignment and that is it. Many consultants I have spoken with, at all levels of expertise, have felt the same way, and many have adamantly refused to become employees as a result, even if it meant not working for a while.

Due to the rampant outsourcing from the late 1990s up through the 2000s and even currently, a large percentage of all technical professionals are now either consultants (hired through agencies) or freelancers who work independently. Consultants and freelancers are in many respects technical mercenaries who do their jobs for a variety of reasons; many do it just because they are good at it and for the money when contracts are plentiful. However, many also do it for the pure joy of being able to create something of quality and to be more in control of their own lives.

For young people today the corporate environments can be quite discouraging, and many have opted not to enter the field at all as a result. This has been especially true for young women. Look at any technical site and review the articles there, and you can immediately see the lack of female contributors.

To get your foot in the door of a corporate environment, you need a good amount of experience, a university degree in Computer Science with an emphasis on development, or both. If you can’t get hired, you cannot get the experience, and no university degree will help in a situation where experience is a requirement, which is quite often the case.

However, in the small business and startup communities, if you can show you have the aptitude, have some experience with development, and have enthusiasm for the profession, your chances of getting a position are greatly increased.

Reviewing some of the videos on YouTube by younger professionals, it appears that, despite the daunting obstacles placed in the way of young people who want to enter the profession, there are still viable ways to consider such a career path.

SEE MORE: Behind the scenes as a remote worker

Surprisingly, the “Mustang” still appears to be a very credible way for a new person to enter the profession. A “Mustang” is one who acquires their education and experience all on their own and enters the profession without any formal training. I was a “Mustang” in the 1970s, when the profession was literally closed to non-experienced personnel. I took the operations route and ran the big mainframe machinery as an operator at night, slowly taught myself mainframe development, and finally moved into a developer position in about three years.

Since operations is no longer a viable path into the profession today, young people may opt instead to work with older professionals as interns, acquiring their experience with a mentor; to gain experience through freelance projects found on one of the many freelancer sites; and, when enough experience has been obtained, to consider decent-paying contracts with consulting agencies.

None of this is easy, but the Information Technology profession has never been an easy profession to enter, no matter which path is chosen or which of its major eras one enters in. It takes determined perseverance, a lot of studying, a tremendous desire to solve problems and create things from scratch, and an eye on the changing nature of the technologies at your disposal.

No matter how well you develop your skills, you have to realize that your first projects may be poorly paid or possibly not paid at all. The idea is to gain experience, and no matter how you do that, as long as the experience is legitimate and defensible in an interview, few really care how you got to the point you did. They only care that you can show you can do the job.

As hard as the profession may be, and it is considered one of the most difficult professions in the world to enter and succeed at, it is still very much an artist’s life at the beginning if you choose to do it on your own. However, as long as you keep at it, you will succeed. You may not become rich, but you will be able to secure a livable income.

Many times, people who begin their studies intending to develop programming capabilities simply as a hobby find that they have developed a piece of software with actual money-making potential, or that their skills have grown to the point where they want to program professionally. The latter case is found all over the field, with many changing their career paths to become a “computer jockey”.

Selling software in the industry has become a rather difficult proposition, since one of the results of the open source movement (previously described) has been to make software development for sale a thing of the past; everyone these days wants their software for free. This came about because many of the younger professionals who entered the Java Community years ago were living at home or already had support mechanisms that allowed them to develop their software and give it away.

When you don’t have to support yourself, as these people apparently didn’t, you don’t have a view towards survival, something many more senior professionals adamantly complained about. Hopefully this viewpoint will change, and as these professionals grow older they will want to see earnings from their own creations. It is only natural.

For now, offering services for a software product or the development of new ones (freelancing) is the way many individual professionals earn their keep. However, if one perseveres with a credible product, money can still be earned from its sale.

The gaming development industry

One area that is growing substantially and has not been completely subordinated to open source views on software distribution is the game programming industry. There are a growing number of independent developers creating games for sale and/or offering cost-based extensions to game environments, or even games themselves.

The gaming industry has been dramatically opened up to professionals wanting to develop games professionally. The most prominent of these game environments, all offered completely free to game developers, can be found at the following site addresses…

There are literally hundreds of freely available game development tools that any developer with skills in a major language can use to produce literally any type of game their imagination can conjure up.

However, there are two caveats to this part of the industry.  One, game development is very difficult no matter how you approach it.  You will have to spend quite a bit of time studying its foundations before you will have any chance of creating anything remotely near what you would like to eventually accomplish.

Two, due to the incredible increase in the availability of 3D game development tools, video games have disastrously given rise to an ongoing flood of violent, mind-numbing offerings. Like the sociology of the “smart phone”, these types of games have had terrible effects on the young people playing them. One such effect is the dramatic increase in obesity among young people, who spend less time being active, which their bodies need, while consuming large amounts of junk food, leading many into early symptoms of diabetes.

SEE MORE: DevOps maturity models: What computer games can teach us

If you decide to enter such a development area, I hope you will consider the development of games that make people think, so that instead of detracting from the development of a young person’s mind, you actually contribute to its growth.

One such area in this type of development is the “war game”.  Despite the ominous sound of this type of game, it was exactly this type of game that dominated the 1960s, 1970s, 1980s, and into the 1990s.  War games were considered by industry analysts to be the games of choice for the “educated”.

War games teach “critical thinking” skills, the understanding of resources and their limitations and strengths, as well as tactical and strategic thinking skills.

Most often, war games are based on actual historical battles and events, placing the player in a position to see if he or she can outperform the original commanders.

According to certain writings, the “war game” is starting to make a reemergence as a popular pastime for players. And there simply aren’t enough developers out there doing this type of work, because it is so difficult. So you may want to give this path some consideration if business application development is not your cup of tea…

Closing notes

The massive amount of information presented here is not meant in any way to deter you from an interest in the software development field, no matter what path you decide upon: professional or hobbyist. It was meant to provide you with a glimpse into the incredible amount of information and knowledge required to succeed. And this is only the tip of an enormous iceberg.

The good news is that no one ever sinks as long as they persevere in developing their skills. It will take at least six months to become proficient in your chosen language, no matter the choice. It will take at least two years of study and experimentation altogether to become familiar and competent with the many support tools you will use in the development of applications, as well as with the common variety of application types that most professionals tend to work with.

This paper was merely a first step for your potential journey. Good luck!

Author
Steve Naidamast
Steve Naidamast is a Senior Software Engineer at Black Falcon Software.
