Senior Research Project
Christopher Varanese
December 16, 2014
I. Title of Project:
Achieving Software Quality Through Testing
II. Statement of Purpose:
Nearly every application, program, website, or piece of hardware contains some unforeseen problems after it is created. To ensure quality and minimize "bugs," developers rely on effective testing; in most cases, testing is the only practical way to keep the user from running into problems. There are multiple methods of testing, and each examines the software at a different level of detail. For this project, I will look at three different types of testing. The first, ad hoc testing, is unstructured and performed primarily by improvisation. The second, unit testing, consists of tests written against individual segments of code. The last, continuous integration, builds and tests the code every time the pieces are combined, checking how all of the code works together. For my research, I will answer the question: "How do user perceptions of product quality vary by testing method?"
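To make the distinction concrete, the sketch below shows what a unit test looks like in Java with JUnit 4. The Calculator class and its add method are hypothetical stand-ins rather than code from this project; an ad hoc test of the same code would amount to running it by hand and checking the output, while a continuous integration server would run tests like this automatically every time the code changes.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test; it stands in for any small unit of code.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // A unit test exercises one small piece of code in isolation and states
    // the expected result explicitly, so a failure is reported automatically
    // instead of having to be spotted by hand.
    public class CalculatorTest {
        @Test
        public void addReturnsSumOfTwoNumbers() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));
        }

        @Test
        public void addHandlesNegativeNumbers() {
            Calculator calc = new Calculator();
            assertEquals(-1, calc.add(2, -3));
        }
    }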
III. Background:
Both of my parents were computer programmers, which is one reason I decided to take the AP Computer Science class. In that class I worked with Java, which sparked my interest in computer science as a whole. More recently, I have worked with MATLAB in my Physics class, which involves a substantial amount of coding. Since my code is never perfect on the first attempt, I have gained a great deal of experience with ad hoc testing, but with no other method.
IV. Prior Research:
Demand for automated testing of code has grown over the last decade, since testing a growing amount of code by hand takes too much time. Many companies and programmers have begun to rely on these systematic testing tools, allowing them to finish their work more efficiently.
In their article, Godefroid, de Halleux, Nori, et al. state that "testing usually accounts for about half the R&D budget of software development organizations." They discuss two different types of automated testing: static and dynamic. Automated static testing systematically works through every possible computation the system may have to complete, but it breaks down when the tested code contains elements that are outside the scope of the analysis. Automated dynamic testing takes previous inputs into account in order to infer constraints and limitations within the code, so that a future execution will test it more thoroughly.
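As a rough illustration of the dynamic approach (a hypothetical example, not code from the cited paper), consider the Java method below. A dynamic test-generation tool would first run it on an arbitrary input, notice which branch was not taken, and then derive a new input from that branch's condition so the next run covers the remaining path. The main method simply plays out the two inputs such a tool might end up trying.

    // Hypothetical example of why a single input is rarely enough: the
    // defect is only reachable when the condition x > 100 holds.
    public class BranchExample {

        static int process(int x) {
            if (x > 100) {
                // Defect hidden behind the branch: divides by zero when x == 101.
                return 1000 / (x - 101);
            }
            return x * 2;
        }

        public static void main(String[] args) {
            // A first run with an arbitrary input only exercises the "else" path.
            System.out.println(process(5));   // prints 10; the x > 100 branch is never reached

            // From the untaken branch, a dynamic tool infers the constraint
            // x > 100 and generates a new input that satisfies it.
            try {
                System.out.println(process(101));
            } catch (ArithmeticException e) {
                System.out.println("Defect found on the x > 100 path: " + e);
            }
        }
    }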
In his article "When Should a Test Be Automated?" Brian Marick describes how he looks at code segments and what he takes into account in order to decide whether to run a test manually or automatically. He lays out the three main questions he asks: "How much more will automating this test and running it once cost than simply running it manually once?" "How long will this automated test be able to function properly, and what events might end it?" and "How likely is this test to find additional bugs, and how does this benefit balance against the cost of automation?" Marick assumes up front that automating a test will cost more than running it manually, and claims that this is nearly always the case. He considers the length of time a test stays useful, stating that after some number of runs, the code will have changed enough that the test may need to be rewritten. Lastly, he describes a method for determining whether the test will have continued value. He claims: "An automated test's value is mostly unrelated to the specific purpose for which it was written. It's the accidental things that count: the untargeted bugs that it finds." This raises the problem of parts of the code changing that were never intended to change, which feeds into the cost of automating a test. In summary, Marick says the paper presents two insights that took him a long time to grasp. First, the cost of automating a test is best measured by the number of manual tests it cuts out of the process, and by the bugs that will consequently go unfound. Second, as stated previously, an automated test should be expected to find bugs that have nothing to do with its original purpose, and much of the value of an automated test lies in how well it can do that.
In Kerry Zallar's article, "Practical Experience in Automated Testing," the author outlines what he has learned from working with manual and automated testing. He begins by saying that "many efforts in test automation do not live up to expectations." Since the article is geared toward testing in practice, he advises doing an in-depth cost/benefit analysis before automating a manual test, and starting small rather than planning to automate a large section of code at once. Though test automation is a powerful tool when implemented correctly, he states, it is not a replacement for "walkthroughs, inspections, good project management, coding standards, [and] good configuration management." The benefits he lists are the expected ones: speed, consistency, and reusability. However, he also mentions false benefits, such as the idea that automation is easier than manual testing, which are commonly assumed but do not hold in practice. Lastly, he brings up potential risks, including a loss of team morale when quick, visible results fail to appear, and the possibility that the application under test is completely redesigned, rendering the automated tests useless.
V. Significance:
Testing is performed on nearly everything that is developed or created. With a greater understanding of automated testing, the process should be completed faster and more efficiently. Companies stand to benefit from improved testing procedures in cost, safety, and value: their products will have fewer defects, attracting more customers and more profit; they will be safer because of fewer errors; and the systems will be finished more quickly with fewer resources devoted to their creation, adding value to the product.
VI. Description:
Most of the research I conduct will consist of actually applying the different testing methods to a unit of software given to me by Fluidic Energy. The research will be experimental, and I hope to demonstrate the significance of the different types of testing. I will then be able to compare all of the methods used and determine when each is most useful.
VII. Methodology:
The company I am working with, Fluidic Energy, will provide the software to practice testing on. The software will contain defects, since it has not yet been tested. I will write the testing programs and perform the different types of tests, keeping track of the defects that I find. Afterward, I will compare the results and determine the relationship between the code tested, the testing method used, and the number of defects remaining.
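As a sketch of how the results could be recorded (the method names and defect identifiers below are hypothetical placeholders, not part of the actual plan), a small tally like the following would track which distinct defects each testing method uncovers, making the final comparison straightforward.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Minimal bookkeeping sketch: record which defects each testing method
    // found, then print a per-method count for comparison.
    public class DefectLog {
        private final Map<String, Set<String>> defectsByMethod = new HashMap<>();

        void record(String testingMethod, String defectId) {
            defectsByMethod.computeIfAbsent(testingMethod, k -> new HashSet<>()).add(defectId);
        }

        void printSummary() {
            defectsByMethod.forEach((method, defects) ->
                System.out.println(method + ": " + defects.size() + " distinct defect(s) found"));
        }

        public static void main(String[] args) {
            DefectLog log = new DefectLog();
            // Hypothetical entries for illustration only.
            log.record("ad hoc", "DEF-1");
            log.record("unit testing", "DEF-1");
            log.record("unit testing", "DEF-2");
            log.record("continuous integration", "DEF-3");
            log.printSummary();
        }
    }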
VIII. Problems:
The creation of the testing programs may prove harder than anticipated, and the software may not be as compatible with the three testing types as I hope. It could take several iterations of code writing to produce a well-automated test. In addition, not all code can be tested with all three methods, so I may find partway through testing that I cannot successfully apply one of my three types of tests to it.
IX. Bibliography:
Godefroid, P., de Halleux, P., & Levin, M. Y., et al. (2008). Automated Software Testing Using Program Analysis. Microsoft Research. Retrieved November 10, 2014, from http://research.microsoft.com/pubs/74119/ieeesw2008.pdf
Marick, B. (2001, November 15). When Should a Test Be Automated? StickyMinds. Retrieved November 21, 2014, from http://www.stickyminds.com/article/when-should-test-be-automated
Rouse, M. (2007, February 1). Unit testing. Retrieved December 9, 2014, from http://searchsoftwarequality.techtarget.com/definition/unit-testing
Rouse, M. (2014, November 1). Automated software testing. Retrieved December 9, 2014, from http://searchsoftwarequality.techtarget.com/definition/automated-software-testing
Test Automation & Best Practices. (n.d.). Atlassian Bamboo. Retrieved December 9, 2014, from https://www.atlassian.com/test-automation
Testing Methodologies. (2014, September 18). Retrieved December 16, 2014, from http://www.inflectra.com/Ideas/Topic/Testing-Methodologies.aspx
Top Automated Software Testing for Mission-critical Software Systems. (n.d.). Retrieved December 16, 2014, from http://idtus.com/what-is-automated-software-testing/
Tutorials Point Simply Easy Learning. (n.d.). Retrieved December 16, 2014, from http://www.tutorialspoint.com/software_testing/testing_types.htm
Why Automated Testing? (n.d.). Retrieved November 21, 2014, from http://support.smartbear.com/articles/testcomplete/manager-overview/
Zallar, K. (2000, January 1). Practical Experience in Automated Software Testing. Retrieved December 16, 2014, from http://www.methodsandtools.com/archive/archive.php?id=33