There has been a lot of talk lately about automating development environments and about DevOps approaches.
From my perspective, I've been involved in what's now called DevOps, the automation of development environments, since back in 2005-2006, when I grew more and more tired of doing the same tiresome tasks over and over. My first feeble steps were small scripts that took away some of that need: first simple ones that removed the need to constantly remember all the input parameters to, for example, gcc, sometimes starting out as nothing more than an alias for quick re-runs, and later more complicated scripts to handle different configurations.
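To give a flavour of those first steps, here's a minimal sketch of that kind of wrapper. The script name, flags and source files are all hypothetical, not the actual ones from back then:

```bash
#!/usr/bin/env bash
# rebuild.sh - hypothetical wrapper so nobody has to remember the gcc flags.
# Usage: ./rebuild.sh [debug|release]
set -e

CONFIG="${1:-debug}"            # pick a configuration, default to debug
COMMON_FLAGS="-Wall -Wextra -std=c99"

case "$CONFIG" in
    debug)   EXTRA_FLAGS="-g -O0" ;;
    release) EXTRA_FLAGS="-O2 -DNDEBUG" ;;
    *) echo "unknown config: $CONFIG" >&2; exit 1 ;;
esac

gcc $COMMON_FLAGS $EXTRA_FLAGS -o myprog main.c util.c
```

The "quick redo" part was then just an alias on top of it, along the lines of `alias rb='./rebuild.sh debug'`.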
Again, this was mainly driven by laziness. I didn't feel like typing the same commands over and over, and going through logs was really boring. I'm hardly the first to feel this way: since the beginning of IT, the tedious task of checking logs and text files is what added tools such as grep, sed and awk to the *nix portfolio of regexp tools, and they simplify the work of extracting and comparing pass/fail criteria a lot.
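As a sketch of that kind of extraction, assuming a log where compiler errors show up as lines containing "error:" and the test runner prints one PASS/FAIL line per case (the log format here is an assumption, not a real tool's output):

```bash
#!/usr/bin/env bash
# check_log.sh - hypothetical pass/fail extraction from a build/test log.
LOG="$1"

# Any compiler error means a hard fail.
if grep -q "error:" "$LOG"; then
    echo "FAILED (compile errors)"
    exit 1
fi

# Count test outcomes printed by the (assumed) test runner.
PASSED=$(grep -c '^PASS' "$LOG")
FAILED=$(grep -c '^FAIL' "$LOG")
echo "tests: $PASSED passed, $FAILED failed"

if [ "$FAILED" -eq 0 ]; then
    echo "PASSED"
else
    echo "FAILED"
    exit 1
fi
```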
After some trial and error, the scripts gained the nice feature of creating a log of each execution, and by adding simple parsers that extracted the pass/fail of each run, this enabled the advent of generated reports. True, the first ones were simple enough, but as time went on, more and more features were added to the parsers and report generators, cleaning up the contents and adding more information that was useful for improving the product.
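Conceptually, those first report generators were not much more than a loop over the log directory; a sketch, assuming one log file per run with the verdict on its last line (both assumptions for illustration):

```bash
#!/usr/bin/env bash
# report.sh - hypothetical report generator over per-execution logs.
# Assumes one log per run in logs/, with the last line holding PASSED/FAILED.
echo "Build report, generated $(date)"
echo "--------------------------------"
for log in logs/*.log; do
    verdict=$(tail -n 1 "$log")          # verdict written by the log checker
    printf '%-40s %s\n' "$(basename "$log")" "$verdict"
done
```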
As most of the script executions took place on *nix machines, using cron for scheduled execution also enabled 24-hour usage of the nodes dedicated to test and build. That really made the report generation more important, as the number of jobs that needed checking increased ten-fold. Some things we did to simplify the checking were adding better pass/fail criteria to the jobs and starting to use web servers to present the results.
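A nightly build on a dedicated node is a one-line crontab entry; something along these lines, with paths and times purely illustrative (note that % has to be escaped in crontab):

```
# crontab -e on the build node: run the nightly build at 02:00,
# keeping one timestamped log per execution for the report generator.
0 2 * * * /home/build/rebuild.sh release > /home/build/logs/nightly-$(date +\%Y\%m\%d).log 2>&1
```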
At first it was mostly generation of long lists of the executed jobs, each line containing links to the contents of the job's log directory, and then a first quick "passed/failed" depending, of course, on the outcome. The problem in the beginning was that most jobs actually failed, so to add some kind of levels, the concept of steps was introduced: for example compile, basic test, build, module test and system test, making it visible in which step a job was failing.
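The step concept can be as simple as running the stages in order and reporting the first one that breaks. A minimal sketch, with the step names from above and the one-script-per-step convention being a hypothetical:

```bash
#!/usr/bin/env bash
# run_steps.sh - hypothetical step runner: report how far a job got.
STEPS="compile basic_test build module_test system_test"

for step in $STEPS; do
    echo "=== step: $step ==="
    if ! "./${step}.sh" >> job.log 2>&1; then   # one script per step, by convention
        echo "FAILED in step: $step"
        exit 1
    fi
done
echo "PASSED all steps"
```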
When the concept was introduced to others as an available view, with a queue to add jobs into, the rate of suggested improvements increased. The amount was so great that a couple of different approaches emerged: one was to introduce per-user views, another to prioritize the needs.
For a couple of years this was a pretty common setup, popping up in most larger companies, at least those not using tools such as CruiseControl, which became available around 2000. It wasn't until Hudson, later forked into Jenkins, that a specialized tool came into widespread use for setting up so-called CI (Continuous Integration) build environments, triggering build/test/integration jobs and keeping the history easy to view and retrieve in a web view.
...