Quality Assurance Report
Group Project 4
John Holmberg, Sean Austin, Christian Dillon, Charles Williams, Matthew Serdy, Frank Opoku
IT488 – IT Capstone II (IT488-1902B-01)
Table of Contents

- Section 9 – Quality Assurance Report
- Quality Management Report Overview
- Entry Testing – Exit Criteria
- Types of Testing
- Testing Details
- Quality Management Report Table
- Test Case Summary
- Lifecycle Testing of Software
- Change Board
- Changes of Design
- Prototyping
- References
- Quality Management Report
Testing has begun on the system as the development process nears the end of the project timeline. Extensive and exhaustive manual and automated unit testing, performed by an independent and qualified testing team, will uncover any outstanding issues. Unit testing is working through the units as time allows, with revision and retesting of failed items. Multiple groups within the EiS organization oversee the testing phase. The first group, "Verification," is responsible for verifying that the code and systems are compliant with the source information, documentation, and the requirements agreed upon by EiS and the organizational stakeholders. The second group, "Creation," is responsible for building an automated and manual test suite that finds and reports faults to the development team that coded the item; the two teams then work together to re-code, re-test, and validate any code or functional faults. The third team, "Final," runs automated and manual tests on the units, works through the complete functionality, and returns any faults or mis-coding that could result in noncompliance with the documentation and functional requirements. Passing final testing validates and certifies the software to gold status.
All teams will work simultaneously as the project nears completion and gold status. At this point the development team combines the modules and units of the system into a final product and begins integration. The "Final" team will also test the final integration and validate that all units and modules function as a single system. Testing will cover functionality, usability, and stability.
Entry testing – Entry criteria are the conditions that must be in place before any testing process or activity begins. The software testing life cycle demands that entry measures be in place before and during the testing phases, including the time allotted to ensure the process is completed. Before entry there must be a defined set of requirements, a designated test plan, appropriate test cases and testing criteria, the tools with which to perform the necessary testing, and a testing environment.
Exit criteria – These are the documented conditions that mark the completion of the testing phase: the criteria, conditions, or continued processes that must be fully satisfied before the software testing life cycle ends. Exit-criteria documentation should be completed at the end of every aspect of the development phase. It should record that all testing criteria have been met, that all workflows function properly, that all defects, bugs, or inaccuracies have been resolved, and that verification testing was repeated after all defects were addressed.
Regression testing: A defined set of procedures used to confirm that recent changes to the software or its code have not broken existing features.
Network-readiness testing: A thorough examination of a network to confirm its strength and its ability to perform the basic functions of a network, namely transmitting data successfully.
Volume testing: Exercising the software with a large, quantified amount of data to confirm that it behaves correctly under that volume.
Recovery testing: Measuring how quickly and successfully the software recovers after a system failure, crash, or deadlock, covering both hardware and software.
Penetration testing: The use of black-box techniques to check and verify the security policy of a system or database.
Migration testing: Verifying the successful transfer of data and the inherited framework from one system to another.
Ready-for-use testing: The set of tools used to test a network and confirm that it is immediately available for service delivery.
Hardware-certification testing: Checking hardware components to confirm that they conform to the specific hardware standards they are required to meet.
EiS is considering a network for educational institutions that receives and transacts with students, lecturers, employees, and other institutions.
The EiS system connects to:

1. Customer
2. Central education location
3. Headquarters of EiS
4. SMS company
5. Other educational institutions
Objective of test case: To prove the competence and strength of the network to perform its basic functions, its ability to be transferred successfully to another platform, and its compliance with basic software and hardware component standards.
Setup procedure: The way the system will be set up to perform those functions: how it will interact with the database, record client details per transaction, and communicate effectively with other systems, such as those of other educational institutions.
Expected results: The system should be able to receive any request from a given customer account.
Figure 1 shows a representation of how the test cases are documented.
Procedure for executing test case: Scenario activity: A client has two accounts, Student Account A and Account B. Account B should be able to receive an order from another account and then successfully track the order to the client's location. EiS will then alert the client of the successful transaction.
| Test Case | Category (Owner: Dev Team / Testing Team 1) | Recorded Date | Test Actions | Time Required | Time Used | Final Result |
|---|---|---|---|---|---|---|
| TestCase01 | Validation | Week 1 | Determine PW length variables; decide on necessary fields; test against required policy; test correct access; analyze and verify code for prompt pages; validate correct actions; re-test all account types | 3 hours | 5 hours | Extra coding required to correct the database |
| TestCase02 | Validation | Week 1 | Revisit PW policy; determine "guessable dictionary"; determine "strength" coding variables; test against required policy; verify security protocols function on guessable, length, and complexity variables; re-test lockout policy; ensure all fields follow the standards | 5 hours | 4 hours | Functionality verified; saved time in testing |
| TestCase03 | Validation | Week 1 | Decide account status; validate account status; attempt account logins based on different statuses; determine results; verify options to remedy; validate remedy solutions | 4 hours | 6 hours | Support options not available; recoded page |
| TestCase04 | Confirmation | Week 2 | Validate HTML scripting; validate HTML linking through the site; determine hyperlinking functional; determine internet/extranet linking; verify linking functional | 5 hours | 6 hours | Share-documents linking failure caused by an excluded .html in the hyperlink |
| TestCase05 | Database Verification | Week 2 | Validate necessary space available; validate proper access to the share-documents function; calculate space necessary and compare to available host space; test upload/download/delete; validate document transition; verify share permissions; validate available-space change | 10 hours | 15 hours | Host share primary-key relationship failure; DBA corrected the key relationship to account status |
| TestCase06 | Database Verification | Week 3 | Validate stored document; compare stored document size to local document size; complete upload/download and verify completion; compare data in stored/moved/original; validate non-corruption | 7 hours | 9 hours | Validated that document movement does not corrupt or change the document |
| TestCase07 | Retention | Week 3 | Validate uploaded files; initiate backup tasks; verify tasks completed successfully; validate data size based on compression; validate data encryption on media; delete local files; initiate restore tasks; verify tasks completed successfully; validate data migration to the local source; validate data matches the original; validate date changes based on retention | 6 hours | 10 hours | Data archival success; retention policy failure required multiple re-tests to validate proper retention |
| TestCase08 | Load Performance | Week 4 | Validate services functional; initiate timed tests; verify navigation successful; validate load times per page; initiate server load test; repeat timed tests; re-validate load times per page | 7 hours | 7 hours | Performance at optimal load and stress load resides within acceptable ranges |
| TestCase09 | System Performance | Week 5 | Validate current login count; process concurrent logons; validate current login count; initiate timed tests; log-in/log-out process; file-move process; validate load times per page; verify log-out of concurrent logons | 40 hours | 40 hours | Per agreements, system performance is within optimal ranges for use, load, and stress |
| TestCase10 | Integration | Week 5 | Validate HTML code for Mailto; verify navigation successful; validate external application launch; validate pre-listed data population is correct; validate return after process; verify email navigation; validate communication is correct; confirm email delivery | 10 hours | 15 hours | Test results successful; integration with the Outlook and Europa mail subsystems verified |
Test Case 1
Test Case 1's objective is to test the creation of a new user account. We will test this objective by creating an account that exercises the system as well as its redundancy checks. The first step is to enter a new email address and confirm that the system accepts it and creates the account. The second step is to enter an existing email address to test for a current account; this checks whether the system recognizes that an account already exists and alerts the user. The third step is to enter the pertinent information when creating an account to ensure that the system recognizes and accepts the user's details. The fourth step is to display the user information as entered, and the last step is to confirm that the system stores and saves a zip code after it is entered.
In testing our system we had to fix the email redundancy check to ensure that the system recognizes when an existing email is entered during account creation. Without this fix, all emails would be accepted, which would cause overlap in our system and prevent users from creating distinct accounts.
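As a minimal sketch of the redundancy check described above (the in-memory account store and the function name are illustrative assumptions, not the actual EiS code), the duplicate-email logic might look like this:

```python
# Hypothetical sketch of the duplicate-email check from Test Case 1.
# A set stands in for the real account database.
existing_accounts = {"alice@example.edu"}

def create_account(email: str) -> str:
    """Return 'created' for a new email, 'exists' for a duplicate."""
    normalized = email.strip().lower()   # treat case variants as the same address
    if normalized in existing_accounts:
        return "exists"                  # redundancy check: alert on existing account
    existing_accounts.add(normalized)
    return "created"
```

The key design point, as the test found, is that the check must run before the insert; otherwise every email is accepted and accounts overlap.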
Test Case 2
The second test case sets up a strong password policy. This policy defines our security procedures and ensures that each user's account is secured with a unique password and that no account can be created with a blank password field.
The first step is to enter a new password and confirm that the system only allows strong passwords based on criteria we will set, giving the user clear parameters for password strength. Steps 2 and 3 repeat this check: a generic password should return a weak-password warning and block account setup, while an acceptable or strong password should return a notification informing the user of its strength. Steps 4 and 5 ensure that the password does not match the username and that a strong, unique password returns a success alert. Lastly, based on the parameters set, each password has an expiration date, and we will test that the system properly informs the user that the password must be updated or changed based on its creation date.
Ensuring that our users have strong passwords reduces the chance of a data breach and lets our users feel safe with the security measures we have put in place.
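The strength rules above can be sketched as a small classifier. The exact thresholds, the "guessable dictionary," and the return labels are assumptions for illustration, not the policy's actual values:

```python
import re

# Hypothetical weak-word dictionary; the real policy's list would be larger.
WEAK_WORDS = {"password", "letmein", "qwerty"}

def password_strength(password: str, username: str = "") -> str:
    """Classify a candidate password as blank, weak, ok, or strong."""
    if not password:
        return "blank"                                   # blank passwords are rejected
    lowered = password.lower()
    if lowered in WEAK_WORDS or (username and lowered == username.lower()):
        return "weak"                                    # guessable or matches username
    long_enough = len(password) >= 10                    # assumed length threshold
    mixed = bool(re.search(r"[A-Z]", password)) and bool(re.search(r"\d", password))
    return "strong" if (long_enough and mixed) else "ok"
```

A setup flow would block "blank" and "weak" results and surface the label to the user as the strength notification.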
Test Case 3
The third test covers user account login. With this test we verify that a user can log in and that the account is active and valid and the password has not expired. We will test the verification of account access using the account verification system established in Tests 1 and 2. The major points of this test are to ensure that the user can log on and then log off, and that the system recognizes the user's account information.
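The three login conditions named above (active, valid, password unexpired) can be sketched as a single gate function. The account dictionary and field names are assumptions standing in for the real account store:

```python
from datetime import date

# Hypothetical account records: one active, one deactivated.
accounts = {
    "student1": {"active": True, "password_expires": date(2099, 1, 1)},
    "student2": {"active": False, "password_expires": date(2099, 1, 1)},
}

def can_log_in(username: str, today: date) -> bool:
    """Apply the Test Case 3 checks: valid, active, and unexpired password."""
    acct = accounts.get(username)
    if acct is None:                          # invalid account name
        return False
    if not acct["active"]:                    # inactive or locked account
        return False
    return today < acct["password_expires"]   # expired password blocks login
```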
Test Case 4
Test 4 will determine whether the system provides accurate and responsive application navigation. We will perform this test by using the different navigation buttons on the system itself to ensure that they work properly. The buttons we will mainly test are the Menu, Home, New Documents, My Documents, Share, and Friends buttons. The main way to perform this test is to navigate through the site to the numerous different areas and ensure that, no matter which page we are currently viewing, we can use the different buttons and they return us to the desired locations.
Test Case 5
Test 5 performs the document management tests, focusing mainly on uploading, downloading, and sharing documents within the system. We will check that the user can create new documents and perform all the tasks required across the document life cycle. This test has dependencies on previous tests: the user must have an active account, memory space must be available to produce new documents, and when sharing documents the recipient must be on the user's friends list. Within this test we will also test the recycle bin, which is vital to deleting any documents the users no longer need.
Test Case 6
Test Case 6 tests the data integrity of documents: that the system can upload and download data that remains available to your account and to the accounts you share with. We will perform uploads and downloads and then test the documents within the system to ensure they can be opened and are not corrupted. We will do this by creating a document, navigating to the MyDocuments page, and uploading the document. We will then download the document and ensure that it opens, which confirms that the document is not corrupted.
Some dependencies for this test are that Tests 4 and 5 have already been confirmed, so the system allows us to perform the tasks needed to verify that the document is not corrupted.
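One common way to automate this kind of integrity check, offered here as a sketch rather than the team's actual procedure, is to hash the document before upload and after download and compare digests. The temporary files below stand in for the MyDocuments store:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Digest a file's bytes so two copies can be compared."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def round_trip(data: bytes) -> bool:
    """Simulate upload + download of a document and confirm no corruption."""
    src = tempfile.NamedTemporaryFile(delete=False); src.write(data); src.close()
    dst = tempfile.NamedTemporaryFile(delete=False); dst.close()
    with open(src.name, "rb") as f, open(dst.name, "wb") as g:
        g.write(f.read())                # stand-in for the upload/download cycle
    same = sha256_of(src.name) == sha256_of(dst.name)
    os.unlink(src.name); os.unlink(dst.name)
    return same
```

Matching digests prove the downloaded bytes are identical to the original, which is a stronger check than merely confirming the file opens.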
Test Case 7
Test 7 tests the ability of our system to provide data-retention backup, covering both system backup and backup-restore capabilities. The dependencies for this test are that the backup and restore policy we have developed is active and running, and that space is available for the data to be backed up. The documents within the MyDocuments section will serve as the test data for the backup retention policy. We will perform this test by adding documents to the MyDocuments folder and then backing them up. Then we will delete all documents from the folder and initiate a restore. Finally, we will confirm that the deleted documents have been restored and that the files are not corrupted.
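The backup-delete-restore cycle above can be sketched in a few lines. A dictionary stands in for the MyDocuments folder and the backup medium; the function name is an assumption:

```python
def backup_restore_cycle(my_documents: dict) -> bool:
    """Back up, wipe, and restore a document folder; True if nothing was lost."""
    backup = dict(my_documents)        # initiate backup tasks (copy to media)
    my_documents.clear()               # delete all local documents
    assert my_documents == {}          # confirm the folder really is empty
    my_documents.update(backup)        # initiate restore tasks
    return my_documents == backup      # restored data must match the original
```

In the real test the copy and restore steps would hit the backup subsystem, but the pass condition is the same: the restored set equals the original set, byte for byte.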
Test Case 8
Within Test Case 8 we will test load times throughout our system and how the different pages respond. The dependencies for this test are that all of our links work and are not corrupted, as tested in Test Case 4, and that we have a stopwatch to time page loads. We will perform this test by logging in to the system, clicking our hyperlinks with the navigation buttons, and running the stopwatch as the system loads the different pages. From these tests we will determine whether the system is running correctly based on the speeds at which our pages load.
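The manual stopwatch can also be replaced by a programmatic timer, sketched below. `load_page` and the two-second budget are illustrative assumptions, not the team's actual targets:

```python
import time

def load_page(name: str) -> None:
    """Stand-in for navigating to a real page; sleeps to simulate load time."""
    time.sleep(0.01)

def timed_load(name: str, budget_seconds: float = 2.0) -> bool:
    """Time one page load and pass if it fits within the load-time budget."""
    start = time.perf_counter()        # high-resolution clock, the code stopwatch
    load_page(name)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds
```

Running this over every navigation target gives repeatable numbers that a hand stopwatch cannot, which matters when comparing optimal-load against stress-load times.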
Test Case 9
Test 9 is a stress test to ensure that our system can handle a large load of log-ins and that the full system can handle heavy traffic. The dependency for this test is the 500 concurrent log-ins that we exercised through previous test cases. We will run load tests across the entire platform, using heavy traffic to ensure that the system still performs at the speeds we intend while navigating properly between pages and links. We will provide 500 dummy accounts, have each account navigate to different pages, and, using all the previous test-case procedures concurrently, have each dummy account perform those same operations. This will stress the system to the point where we can determine how much load it can handle at any one time. By performing numerous tests like this, we can observe the system and see how its processes respond to heavy load.
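A thread pool is one simple way to drive the 500 dummy accounts concurrently. This is a sketch of the harness shape only; `fake_login` stands in for the real login-navigate-logout sequence, and the worker count is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_login(account_id: int) -> bool:
    """Stand-in for one dummy account's login and navigation run."""
    return True                         # a real harness would drive the UI or API

def stress_test(n_accounts: int = 500, workers: int = 50) -> int:
    """Run n_accounts logins concurrently; return how many succeeded."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(fake_login, range(n_accounts)))
    return sum(results)                 # count of successful dummy-account runs
```

The pass criterion would be that the success count equals the account count while per-page load times stay inside the Test Case 8 budgets.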
Test Case 10
In Test 10 we will test whether the system can properly email our support staff, ensuring that the link works and the email system is operational. The dependencies for this test are that the system's email server is working properly and that our hyperlinks are working and active, as in Test Case 4. We will navigate to the contact-support link and confirm that it is active. We will test that the fields populate correctly and that each field on the support contact page is operational and accepts user text. Then we will verify that, once the fields are populated, the system delivers the email with the same information that was originally entered. This confirms that the fields accurately relay information to our support team and that the system operates as intended.
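The field-relay half of this check can be sketched with the standard library's email types: build the support message from the form fields, then assert each field survives into the message. The field names and support address are assumptions:

```python
from email.message import EmailMessage

def build_support_email(fields: dict) -> EmailMessage:
    """Assemble the support email from the contact-form fields."""
    msg = EmailMessage()
    msg["To"] = "support@eis.example"    # hypothetical support address
    msg["From"] = fields["email"]        # user's address from the form
    msg["Subject"] = fields["subject"]
    msg.set_content(fields["body"])      # free-text field from the form
    return msg
```

Actual delivery would then go through the mail subsystem, but comparing the assembled message against the form input isolates the field-relay step from the server.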
Testing the software is a vital part of improving the quality of the project. With this testing, it is essential that lifecycle testing be strictly followed in all cases.
Planning is a very important part of facilitating testing, and the lack of it can cause delays and budget overruns. Test design is vital because it decides which tests to perform and when. Testing flow matters because it starts with simple tests and then moves on to more complex ones. In unit testing, each individual unit is tested from start to finish before implementation. Module testing combines individual units into modules so the interactions between the parts can be checked for functionality. Subsystem testing exercises those larger combined parts of the system for greater functionality, and user interfacing can be validated at this stage as well (M.U.S.E. © 2019). System testing occurs when all components of the system have been combined and completed, at which point performance testing can be performed (M.U.S.E. © 2019). The last and final test is acceptance testing, in which all tests are complete and live data is used by the client/customer to see whether all requirements and functionalities have been met. If they have, there is a sign-off and payment is made.
(Cited: M.U.S.E. © 2019, retrieved from https://class.ctuonline.edu)
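The unit-testing stage described above can be illustrated with a minimal example: one unit (here a hypothetical zip-code validator, not actual EiS code) tested in isolation before it is combined into a module:

```python
import unittest

def zip_code_valid(zip_code: str) -> bool:
    """The unit under test: a five-digit US zip code check (illustrative)."""
    return len(zip_code) == 5 and zip_code.isdigit()

class TestZipCode(unittest.TestCase):
    """Unit-level tests exercise this one function from start to finish."""
    def test_valid(self):
        self.assertTrue(zip_code_valid("30301"))
    def test_invalid(self):
        self.assertFalse(zip_code_valid("3030A"))
        self.assertFalse(zip_code_valid("303"))

# unittest.main() would run these tests when the file is executed directly.
```

Passing tests at this level give the module and subsystem stages a known-good building block, which is the point of testing simple things before complex ones.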
| Aspect | Assessment |
|---|---|
| Effect on risk | Decrease of risk |
| First-time success | Good chances |
| Long-term success | Excellent chances |
(Cited: S. McConnell © 1996, Rapid Development, retrieved from https://ebooksbvd.my-education-connection.com)
Note: The main general risk is the possibility of too many changes to the software product as you progress through the project. Integrations and trade-offs can be achieved through the use of other practices.
(S. McConnell © 1996, Rapid Development retrieved from https://ebooksbvd.my-education-connection.com)
Note: Another type of test is the Daily Build and Smoke Test, in which the build is tested daily through a series of checks that verify the operation of the system.
The Daily Build and Smoke Test carries some risks, the major one being pressure to release too frequently. Its major interactions and trade-offs include small project overhead, effectiveness with miniature milestones, and support for life-cycle models (Rapid Development by S. McConnell © 1996). Other benefits of the Daily Build and Smoke Test include minimized integration risk, reduced risk of low quality, support for defect diagnosis, support for monitoring, improved morale, and better customer relations.
The daily build and smoke test are performed every day for better functionality of the project. An effective build process must check for broken builds. The smoke test is conducted daily, checking the entire system from beginning to end. On smaller builds one person can conduct the build, but it is sometimes more effective to have more than one; on larger builds it is advantageous to have a group of individuals working strictly on the daily build and smoke testing. Code should be added only when it is sensible to do so, but do not wait too long (Rapid Development by S. McConnell © 1996). A smoke test should be required before new code is added to the system, to ensure that everything works properly. Other practices to implement include keeping a holding area for new code until it is ready to be added, keeping the build healthy by penalizing whoever breaks it, releasing builds in the morning for better productivity, and continuing to build and smoke test even under pressure, because discipline otherwise falls by the wayside.
(Cited: S. McConnell Rapid Development © 1996, retrieved from https://ebooksbvd.my-education-connection.com)
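A daily build-and-smoke driver of the kind described above can be sketched as follows. The `build` step and the three named checks are stand-ins for the real project's compile step and end-to-end checks, not actual EiS procedures:

```python
def build() -> bool:
    """Stand-in for compiling/assembling the whole system."""
    return True

# Hypothetical end-to-end smoke checks, each exercising one major operation.
SMOKE_CHECKS = {
    "login page loads": lambda: True,
    "upload a document": lambda: True,
    "send support email": lambda: True,
}

def daily_build_and_smoke() -> list:
    """Return the names of broken checks; an empty list means a healthy build."""
    if not build():
        return ["build failed"]          # a broken build fails everything at once
    return [name for name, check in SMOKE_CHECKS.items() if not check()]
```

Run every morning, a non-empty result immediately names which operation broke, which is the defect-diagnosis benefit the text attributes to the practice.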
Designing for change comprises several practices that, if not applied at the beginning of the software lifecycle, leave the software much less adaptable (Rapid Development, S. McConnell © 1996). Its profile is a fair reduction from the nominal schedule, no effect on progress visibility, decreased schedule risk, good chances of first-time success, and excellent chances of long-term success. Designing for change is a single design methodology referring to flexible software design (Rapid Development, S. McConnell © 1996). The practice includes the following: identify areas likely to change, use information hiding for those changes, make a development change plan, plan families of programs, and use object-oriented design (Rapid Development © 1996).
(Cited: Rapid Development by Steve McConnell © 1996 retrieved from https://ebooksbvd.my-education-connection.com)
Note: Design for change is not as easy as one might think, and you should rely on the design itself, not the diagrams or tables. The best advice is to stick to the actual design that must be accomplished and to focus on the areas likely to change (Rapid Development © 1996).
Evolutionary prototyping is a lifecycle model in which the system is developed in smaller increments, improved through end-user input and customer feedback (Rapid Development © 1996). With this kind of prototyping, the efficacy against a nominal schedule is excellent, progress visibility is excellent, the chances of long-term success are excellent, first-time success is likely, and schedule risk is increased (Rapid Development © 1996). The risks involved are unrealistic schedule and budget expectations, inefficient use of prototyping, unrealistic performance expectations, poor design, and poor maintainability (Rapid Development © 1996).
(Cited: Rapid Development by Steve McConnell © 1996, retrieved from https://ebooksbvd.my-education-connection.com)
For all practical purposes, evolutionary prototyping is a way of exploring requirements when you are not exactly sure what needs to be built (Rapid Development © 1996).
Some of the issues in prototyping are as follows: unrealistic schedule and budget expectations, a lack of project controls, poor customer and end-user feedback, performance problems, unrealistic performance expectations, poor design, poor maintainability, feature creep, and inefficient prototyping time (Rapid Development © 1996). Some of prototyping's side effects are improved morale among customers and end-users, early feedback, decreased code length, lower defect rates, and smoother effort curves (Rapid Development © 1996).
(Cited: Rapid Development by Steve McConnell © 1996, retrieved from https://ebooksbvd.my-education-connection.com).
M.U.S.E. © 2019. Retrieved from https://class.ctuonline.edu
McConnell, S. © 1996. Rapid Development. Retrieved from https://ebooksbvd.my-education-connection.com
| Weeks | Category | Issues | Corrective Actions | Lessons Learned | Implementation Checklist |
|---|---|---|---|---|---|
| Unit 1 | Metrics Planning (Establishment of Group One) | Internal (personnel) | Listings of group expectations | Follow directions | Submission of assigned paperwork |
| Group One | | Time management | Accountability of one's time | Plan your time successfully | Personal time-management checklist |
| PMGR | | Lack of knowledge | Read all reading materials | Reading and research create knowledge of the material needed | Knowledge creates multiple successes |
| C. Dillion | | Sickness or injury | Research outside the reading materials | | |
| Unit 2 | Defects Reporting | Vendor delays | Vendor accountability | Vendor reliability established | Vendor deadlines met |
| | | Software/hardware failures | Testing at each implementation | Testing solves issues | Testing fixes issues before they become major |
| | | Impracticable deadlines set | Set real-world deadlines | Real-time deadlines are more reliable | Real-time deadlines create less stress and greater productivity |
| Unit 3 | Workload Balance | End-user lack of knowledge | Manuals for ease of access | Training guides in place | Verify knowledge, or the lack thereof |
| | | Client changes requirements | Meetings to establish guidelines | Set requirements at the start of the project | Establish requirements and answer all questions before the start |
| | | Vendor delivery delays | Vendor real-time delivery times | Vendors commit to a delivery timeline | Vendors must stick to strict delivery guidelines |
| Unit 4 | Testing | Completion of testing phases | Allocation of more time to complete tests | Over-estimate continued testing-protocol time allotment | Ensure proper time and estimates from development |
(Cited: QMR Template provided by Professor H. Okoro for Rapid Development IT488-1902B-01)