Distributed Continuous Quality Assurance
I'm going to toss this Google tech talk into the "wow, that sounds amazing, but how the he#$# would you implement something like that in real software systems without developing a testing harness of equal or greater complexity than the product under test … but there are practical lessons to be learned from the theory" bucket.
Stumbling around QA videos on Google Tech Talks tonight, I was interested enough in Adam Porter's talk regarding distributed continuous integration testing to devote an hour of my life to the cause. The basic premise (obviously I'm skimming over vast amounts of detail, given that a one-hour talk is now condensed into four points) goes something like this:
a) Derive a list of legal configurations and variables for the system under test
b) Develop a centralized "quality assurance space" server (I'll coin it the QAS) that holds the conditions under test, the results, and the heuristics for deploying work to test clients that call back home
c) Deploy a number of said rogue testing clients to thousands of test machines (VMs, clusters, and whatever else is available) that call back home to the centralized server
d) Apply machine learning and pattern recognition within the QAS to steer what gets tested next (a minimal sketch of this loop follows the list)
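To make that architecture a little more concrete, here's a minimal sketch of what the QAS loop could look like. To be clear, this is my own toy illustration, not anything from the talk: the class and option names (QASServer, run_client, the os/db/threads/compression options) are all made up.

```python
import itertools
import random

# Hypothetical sketch of the "quality assurance space" (QAS) idea.
# None of these names come from the talk; they're purely illustrative.

# a) Legal configurations: each option and its valid values.
OPTIONS = {
    "os":          ["linux", "windows"],
    "db":          ["mysql", "postgres"],
    "threads":     [1, 4, 16],
    "compression": ["on", "off"],
}

def legal_configs():
    """Enumerate every combination of option values (the full space)."""
    keys = list(OPTIONS)
    for values in itertools.product(*(OPTIONS[k] for k in keys)):
        yield dict(zip(keys, values))

class QASServer:
    """b) Central server holding configurations, results, and a (very
    simple) heuristic for handing work out to test clients."""
    def __init__(self):
        self.pending = list(legal_configs())
        random.shuffle(self.pending)     # stand-in for a smarter scheduler
        self.results = []                # (config, passed) tuples from clients

    def next_assignment(self):
        """Hand the next untested configuration to a calling client."""
        return self.pending.pop() if self.pending else None

    def report(self, config, passed):
        """d) Clients call back home; a real QAS would mine these results
        for patterns and reprioritize self.pending accordingly."""
        self.results.append((config, passed))

def run_client(server, run_suite):
    """c) A test client: fetch a configuration, run the suite, call home."""
    while (config := server.next_assignment()) is not None:
        server.report(config, run_suite(config))

# Toy run: pretend the suite fails whenever compression is on under Windows.
server = QASServer()
run_client(server, lambda cfg: not (cfg["os"] == "windows"
                                    and cfg["compression"] == "on"))
failures = [cfg for cfg, ok in server.results if not ok]
print(len(failures), "failing configurations out of", len(server.results))
```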
Aside from the fact that there are obvious constraints on deploying any significant number of rogue clients, even with the advent of VMware, I completely agree with Adam that modern systems are bigger, more complex, and more geographically distributed than in the past, and that the results of traditional QA functional and automated testing in controlled environments are being reduced to nil. The catch is that 99.9% of quality assurance groups (let's just say everyone outside of Microsoft and Google) lack the resources, both personnel and machine, for such an elaborate system. Still, there were a few pieces of data I pulled out of the talk that I can easily apply to the Collabomatic automated tests.
a) In Adam's test results, traversing 2-way (pairwise) option combinations yielded nearly as many defects as an exhaustive test of a complex system. In other words, a broad sweep of scenarios covering every 2-way combination of valid inputs was just about as good as an exhaustive search but required roughly 1/10000th of the execution and design time. Given the computational power of my 3 Collabomatic test machines (I'm guessing no one would allow me to deploy rogue software just yet), 2-way combinations are about as much as I could expect anyway. A system like this could be administered from the centralized FitNesse server I already have deployed (a rough sketch of pairwise generation follows this list).
b) A good set of configuration settings that maps easily back to automated tests lets you focus on the variables that really do matter. It also lets the tests become more intelligent over time, stressing the areas and configurations with a higher defect yield. In practical terms, a software engineer in test can derive automated tests in those areas, flipping those configurations around, and leave the lower-yield areas alone (a small sketch of this yield-weighted selection also follows the list).
c) Adam himself admits that quite possibly the most valuable data gained from these tests came from the exercise of deriving all of the legal configurations in the first place. This is very much the same 80/20 school of thought where QA involvement at the design level finds bugs when they are cheap to fix (i.e., before the code is even written). We've all heard that song and dance, but it was interesting to hear that even after a beast of an automation tool was built, one of the most effective testing results came from a glorified design review.
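Since point a) is the one I can actually act on, here's a rough sketch of what pairwise generation could look like: a naive greedy pass that keeps only the configurations that still cover an unseen pair of option values. The options are the same made-up ones from the earlier sketch, redeclared so the snippet stands alone; a real pairwise tool would produce a tighter set, but even this shows how much smaller the 2-way set is than the full cross product.

```python
import itertools

# Made-up option space (same illustrative values as the earlier sketch).
OPTIONS = {
    "os":          ["linux", "windows"],
    "db":          ["mysql", "postgres"],
    "threads":     [1, 4, 16],
    "compression": ["on", "off"],
}

def pairwise_configs(options):
    """Naive greedy 2-way covering set: walk the full cross product and keep
    any configuration that still covers at least one unseen value pair."""
    keys = list(options)

    # Every (option, value) pairing that must appear together somewhere.
    uncovered = {((k1, v1), (k2, v2))
                 for k1, k2 in itertools.combinations(keys, 2)
                 for v1 in options[k1]
                 for v2 in options[k2]}

    chosen = []
    for values in itertools.product(*(options[k] for k in keys)):
        config = dict(zip(keys, values))
        pairs = {((k1, config[k1]), (k2, config[k2]))
                 for k1, k2 in itertools.combinations(keys, 2)}
        if pairs & uncovered:            # covers something new, so keep it
            chosen.append(config)
            uncovered -= pairs
        if not uncovered:
            break
    return chosen

exhaustive = 2 * 2 * 3 * 2               # 24 configurations in the full space
print(len(pairwise_configs(OPTIONS)), "of", exhaustive, "configurations cover every pair")
```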
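And for point b), a tiny sketch of the "get smarter over time" idea: weight each configuration area by its historical defect yield and sample future runs in proportion, so the suite drifts toward the settings that keep finding bugs without abandoning the rest. The bookkeeping here (defect_counts, pick_next_area, the sample numbers) is entirely my own invention, not anything prescribed in the talk.

```python
import random
from collections import Counter

# Hypothetical defect history, bucketed by the configuration setting that
# was being exercised when each defect turned up. Numbers are made up.
defect_counts = Counter({
    "compression=on": 9,
    "db=postgres":    3,
    "threads=16":     1,
    "os=windows":     1,
})

def pick_next_area(counts, smoothing=1):
    """Choose the next area to stress, weighted by past defect yield.
    The smoothing term keeps low-yield areas from being starved entirely."""
    areas = list(counts)
    weights = [counts[a] + smoothing for a in areas]
    return random.choices(areas, weights=weights, k=1)[0]

# Over many picks, high-yield areas dominate but nothing is ignored.
picks = Counter(pick_next_area(defect_counts) for _ in range(1000))
print(picks.most_common())
```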
Anyway, I'd highly recommend viewing the video for anyone interested in emerging QA technologies. I'm excited about a system like the one described and have picked up some ideas that I can apply to the Collabomatic automated tests.