Monday, February 6, 2012

Hey guys,

I am going to post some more useful information...

Wednesday, May 13, 2009

Some abbreviations of Software Engineering Glossary

This glossary is intended for visitors who may be unfamiliar with basic software engineering terminology.

Abstraction - (1) the level of technical detail of some representation of software; (2) a cohesive model of data or an algorithmic procedure

Adaptive maintenance - activity associated with changing an application to make it conform to changes in its external environment

Analysis - a set of activities that attempt to understand and model customer needs and constraints

Analysis methods - a notation and heuristics for creating models of customer needs and constraints

Architectural design - an activity that attempts to lay out the module "floor plan" for the software

Automated estimation tools - tools that help in estimating project cost or effort

Automatic code generation - tools that generate source code from a representation of software that is not source code

Baseline - a point at which some deliverable produced during the software engineering process is put under formal change control

Basis path testing - a white box test case design technique that uses the algorithmic flow of the program to design tests

Basis set - the set of tests derived using basis path testing

Behavioral modeling - representing the modes of behavior (called states) of an application and the events that cause transitions from state to state

Beta testing - testing that is conducted by the user

Black box testing - testing that does not focus on the internal details of the program but uses external requirements

Boundary value analysis - a black box testing method that designs test cases that exercise data boundaries
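
As a small sketch of the idea, suppose a hypothetical validator accepts ages from 18 to 65 inclusive (the function and its range are made up for illustration); boundary value analysis places test inputs at and just beyond each edge of the range:

```python
# Hypothetical example: a validator that accepts ages 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# Boundary value analysis picks test inputs at and around each boundary,
# where off-by-one defects tend to hide.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # the lower boundary itself
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # the upper boundary itself
    66: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, value
```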

Bounding - removing ambiguity from specification

Builds - see Clusters

Business risks - the set of potential business problems or occurrences that may cause the project to fail

CASE - computer-aided software engineering; see also, Tools

Cause-effect graphing - a black-box testing method

Change control - an umbrella process that enables a project team to accept, evaluate, and act on changes in a systematic manner

Change control authority (CCA) - the person(s) who have responsibility for deciding whether a change is to be made

Change report - provides detail on the nature of work required to make a change

Change request - provides detail on the type of change that is requested

Chief programmer team - one way of organizing project staff

Classes - a basic construct in object-oriented methods that categorizes elements of the problem

Classic life cycle - a linear, sequential approach to process modeling

Clusters - a collection of program components (modules) that is tested as a group

Coding - the generation of source code

Complexity - a quantitative measure of the intricacy of a program's logic or structure; see also, Cyclomatic complexity

Component reuse - the ability to reuse a portion of a model, source code, test case, etc.

Configuration - the collection of programs, documents and data that must be controlled when changes are to be made

Configuration audit - an activity performed by an SQA group with the intent of ensuring that the change control process is working

Configuration control - the control of changes to programs, documents or data

Configuration items - the individual pieces of programs, documents and data that are controlled using SCM

Configuration status reporting (CSR) - an activity that helps software developers understand what changes have been made and why

Constraints - restrictions or limitations placed on requirements or design

Corrective maintenance - finding and fixing defects that have been reported by users

Customer - the person or group that has requested the software and will be paying the bill for its development

Cyclomatic complexity - a measure of the logical complexity of an algorithm, used in white-box testing
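
For a control-flow graph with E edges, N nodes, and P connected components, cyclomatic complexity is V(G) = E - N + 2P, and V(G) is also the size of the basis set of independent paths. A minimal sketch (the example graph sizes are illustrative, not taken from any particular program):

```python
# Cyclomatic complexity V(G) = E - N + 2P for a control-flow graph,
# where E = edges, N = nodes, P = connected components (1 for one program).
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Hypothetical flow graph for an if/else nested in a loop: 9 edges, 7 nodes.
# V(G) = 9 - 7 + 2 = 4, so basis path testing needs 4 independent paths.
print(cyclomatic_complexity(9, 7))  # prints 4
```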

Data design - an activity that translates the data model developed during analysis into implementable data structures

Data dictionary - a database that contains definitions of all data items defined during analysis; see also, Requirements dictionary

Data flow diagram (DFD) - a modeling notation that represents a functional decomposition of a system

Data modeling - an analysis method that models data objects and their relationships

Data objects - a user-visible input or output

Debugging - the activity associated with finding and correcting an error or defect

Defect - a lack of conformance to requirements found in the software after delivery to the customer

Design - an activity that translates the requirements model into a more detailed model that is the guide to implementation of the software

Design specification - a document that describes the design

Design walkthrough - a formal technical review of the design

Detail design - a design activity that focuses on the creation of an algorithm

Documentation - descriptive information

Documents - deliverables produced as part of the software engineering process

Effort - the work-time product (e.g., person-days) associated with a project

Engineering change order (ECO) - a mini-specification that describes the technical aspects of a change

Enhancement - an extension of functional or performance requirements

Equivalence partitioning - a black-box testing method
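
As a sketch of the method, suppose a hypothetical discount routine (invented here for illustration) treats its input in three classes: invalid negatives, totals under 100, and totals of 100 or more. Equivalence partitioning tests one representative value per class instead of every possible input:

```python
# Hypothetical routine whose inputs fall into three equivalence classes.
def discount_rate(order_total):
    if order_total < 0:
        raise ValueError("negative total")
    if order_total < 100:
        return 0.0
    return 0.10

# One representative per partition stands in for the whole class.
assert discount_rate(50) == 0.0      # partition: 0 <= total < 100
assert discount_rate(500) == 0.10    # partition: total >= 100
try:
    discount_rate(-5)                # partition: invalid (negative)
    assert False, "expected ValueError"
except ValueError:
    pass
```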

Errors - a lack of conformance found before software is delivered to the customer

Estimation - a project planning activity that attempts to project effort and cost for a software project

FAST - Facilitated application specification techniques, a structured meeting between developer and customer; intent is to define basic requirements

Formal technical reviews - a structured meeting conducted by software engineers with the intent of uncovering errors in some deliverable or work product

Function points - a measure of the utility delivered by an application

Functional decomposition - a technique used during planning, analysis and design; creates a functional hierarchy for the software

Go, no-go decision - a point at which the manager or the customer decides whether the project should proceed

Grammatical parse - a technique that is used during analysis and intended to help isolate basic data objects and functions

High-order tests - black-box tests conducted once the software has been integrated

Independent test group (ITG) - a group of people whose primary responsibility is software testing

Integration testing - a testing step that constructs the software while testing it

Integration - the specific approach to integration testing

Interoperability - the degree to which one application communicates or interfaces with another

Joint application development (JAD) - a specific FAST technique

Levels of abstraction - the degree of detail with which some representation of the software is presented

Line-of-code metrics - measures of quality or productivity that are normalized using lines of code produced
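
As an illustrative sketch (the project figures below are invented), line-of-code metrics normalize raw counts by thousands of lines of code (KLOC):

```python
# Normalizing a quality or productivity count by KLOC (thousands of LOC).
def per_kloc(count, loc):
    return count / (loc / 1000.0)

loc = 25_000     # assumed project size in lines of code
defects = 50     # assumed number of defects found
effort_pm = 10   # assumed effort in person-months

print(per_kloc(defects, loc))    # 2.0 defects per KLOC
print(loc / 1000.0 / effort_pm)  # 2.5 KLOC per person-month
```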

LOC - lines of code

Loop testing - a white box testing technique that exercises program loops

Maintainability - the degree to which a program is amenable to change

Maintenance - the activities associated with changes to software after it has been delivered to end-users

Make-buy decision - determining whether software should be built internally, acquired, contracted or built from reusable components

Measurement - collecting quantitative data about the software or the software engineering process

Metrics - a specific measurement

Milestones - a point in time that is used to indicate progress during a project

Modular design - a design approach that stresses modularity

Modularity - an attribute of a design that leads to the creation of high quality program components

Object-oriented - an approach to software development that makes use of a classification approach and packages data and processing together

Object-oriented analysis (OOA) - a technique for defining classes of objects, their relationships, and basic structure

Object-oriented design (OOD) - a technique for translating the OOA model into an implementation model

Objects - a named element of the problem domain containing data and processing

Outsourcing - contracting software work to a third party

Paper prototype - a paper representation of an application (e.g., story boards that describe the interaction at a human interface)

Paradigm - a model of the software process

PDL - program design language; a combination of natural language with programming language-like constructs

Perfective maintenance - see Enhancement

Portability - the ability to transport software from one target environment to another

Preliminary design - creates representation of the data and architecture

Procedural design - creates representations of algorithmic detail within a module

Processing narrative - a natural language description of a model (program component)

Productivity - work output per unit time

Program design language - see PDL

Project database - the place where configuration items are kept

Project Plan - a description of the management approach for a project

Project planning - the activity that creates the Project Plan

Project risks - the set of potential project problems or occurrences that may cause the project to fail

Project scope - a statement of basic requirements of the software to be built

Project size - an indication of the overall effort to be expended or the number of people working on the project

Project tracking - the activity that enables a manager to understand the status of a project

Project control - the control of quality and change

Prototyping - the creation of a mock-up of an application

Quality - the degree to which a product conforms to both explicit and implicit requirements

Quality metrics - measures of quality

Re-engineering - a series of activities that transform legacy systems (with poor maintainability) into software that exhibits high quality

Regression testing - tests that are conducted repeatedly to ensure that a change has not introduced side effects

Repository - see Project Database

Requirements analysis - a modeling activity whose objective is to understand what the customer really wants

Resources - anything that is required to get the project done: people, hardware, materials, information, etc.

Reusability - the ability to reuse an already-existing program component in another application

Reusable components - configuration items that are reusable

Reverse engineering - trying to develop design models or an understanding of design using program code as a starting point

Reviews - see formal technical reviews

Risk - a potential problem or occurrence that puts a project in jeopardy

Risk analysis - a technique for identifying and evaluating risks

Risk Management and Monitoring Plan (RMMP) - a plan for mitigating, monitoring and managing risks

Scheduling - the activity that lays out a timeline for work to be conducted on a project

Scope - a bounded statement of what must be accomplished

Selective testing - testing only a selected set of program paths and data inputs

Side effects - errors that occur because of changes

Software - programs, documents and data

Software engineering - a discipline that encompasses the process associated with software development, the methods used to analyze, design and test computer software, the management techniques associated with the control and monitoring of software projects and the tools used to support process, methods, and techniques

Software maintenance - see Maintenance

Software metrics - quantitative measures of the process or the product

Software problem report - a report of a defect

Software quality - see quality

Software quality assurance (SQA) - a series of activities that assist an organization in producing high quality software

Software Requirements Specification - a deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software

Software testing - a set of activities conducted with the intent of finding errors in software

Spiral model - an evolutionary software engineering paradigm

State transition diagram (STD) - a notation for behavioral modeling

Statistical quality assurance - techniques for process improvement that are based on measurements of the product and the process

Stepwise refinement - a technique for accomplishing functional decomposition or procedural design (also called partitioning)

Structured programming - a design method that limits design constructs to three basic forms and constrains program flow for better quality

Technical risks - the set of potential technical problems or occurrences that may cause the project to fail

Test case design - a set of techniques for deriving effective test cases

Test cases, derivation of - the creation of data that can be used to uncover errors in the software

Test plan and procedure - a description of testing strategy and tactics

Testing - a set of activities that attempt to find errors

Tools - application software used to perform software engineering tasks (e.g., design tools, testing tools); see also CASE tools

Total quality management - a company commitment to develop a process that achieves high-quality products and customer satisfaction

Unit testing - part of the testing strategy that focuses on tests of individual program components

User - the person who actually uses the software or the product that has software embedded within it

Validation - tests to ensure that the software conforms to its requirements

White box testing - a test case design technique that makes use of a knowledge of the internal program logic

Work breakdown structure (WBS) - the set of work tasks required to build the software; defined as part of the process model

Wednesday, May 6, 2009

Comparison of the Selenium variants

                               Selenium IDE    Selenium RC    Selenium Core   Selenium Core HTA
Browser Support                Firefox Only    Many           All             IE Only
Requires Remote Installation   No              No             Yes             No
Supports HTTPS/SSL             Yes             Yes*           Yes             Yes
Supports Multiple Domains      Yes             Yes*           No              Yes
Requires Java                  No              Yes            No              No
Saves Test Results to Disk     No**            Yes            No              Yes
Language Support               Selenese Only   Many           Selenese Only   Selenese Only

* = Experimental support is available in Selenium RC
** = Theoretically possible, but not currently implemented

Selenium IDE: recommended for first-time users.

Selenium Core: requires hand-written tests.

Selenium RC: for more advanced users.

Thursday, June 12, 2008

Selenium Core

Selenium Core is a test tool for web applications. Selenium Core tests run directly in a browser, just as real users do, and they run in Internet Explorer, Mozilla, and Firefox on Windows, Linux, and Macintosh. No other test tool covers such a wide array of platforms. It was developed by a team of programmers and testers at ThoughtWorks.

It allows tests to run in any supported browser on the client side, and mainly lets you do the following:

Browser compatibility testing: test your application to see whether it works correctly on different browsers and operating systems; the same script can run on any Selenium platform.

System functional testing: create regression tests to verify application functionality and user acceptance.


Advantages: Selenium Core works on all the major browsers on all the platforms. It is a simpler form of Selenium, suitable for non-developers, but it has some inherent limitations, noted below.

  • Recommended use: non-developers
  • Better use: developers writing custom tests
  • Optional command line

Supported platforms and browsers:

Windows:

  • Internet Explorer 6.0 and 7.0
  • Firefox 0.8 to 2.0
  • Mozilla Suite 1.6+, 1.7+
  • Seamonkey 1.0
  • Opera 8 & 9

Mac OS X:

  • Safari 2.0.4+
  • Firefox 0.8 to 2.0
  • Camino 1.0a1
  • Mozilla Suite 1.6+, 1.7+
  • Seamonkey 1.0
  • Not yet supported: OmniWeb

Linux:

  • Firefox 0.8 to 2.0
  • Mozilla Suite 1.6+, 1.7+
  • Konqueror
  • Opera 8 & 9

Disadvantages: you must have write access to the web server hosting the application under test, because Selenium Core tests are deployed directly alongside the web application. It can also be quite complicated for first-time users.


Selenium Core installation: If this is your first time using a Selenium tool, you may not want to use Selenium Core directly; one good way to get introduced to Selenium's features is Selenium IDE, which embeds Selenium Core internally. Download Selenium IDE and you'll get Selenium Core along with it. Give it a spin!

Selenium Core is pure DHTML/JavaScript, so to use it you must make it available from the same web server as the application you want to test. That means you can't use Selenium Core to write a test of google.com: because it is pure JavaScript, it is bound by JavaScript's security restrictions, specifically the browser's same-origin policy. If you can't or won't modify the web server you want to test, Selenium Core may not be the right tool for you; you may want to use Selenium IDE or Selenium RC instead. If you're on Windows and you only want to use Selenium Core with Internet Explorer, you may also use HTA mode to run your tests against a remote web site.

Normally, this is as simple as extracting the Selenium Core zip file into the DocumentRoot, htdocs, or webroot of your webserver. Then, try to open the TestRunner.html page on your website, which should be in the core/ directory.


Run Selenium's test suite: scripts are run using the TestRunner only. TestRunner gives you a nice browser interface to execute your tests, along with a summary of how many tests were executed, how many passed, and how many failed.

The screen is divided into four sections: "Test Suite", "Current Test", "Control Panel" and the main application frame where your application will reside. Initially, none of the Control Panel buttons will work, because you haven't selected a test suite. By default, Selenium Core offers to run the test suite in "../tests/TestSuite.html". Click "Go" to open that test suite.

If you installed the /tests along with /core, you should now see a list of tests to run. Press the "Run All Tests" button to run all of your automated tests. Note that some tests may fail if your browser blocks pop-ups, manages passwords, or does other helpful things that interfere with automation. Be sure to turn these features off before running the tests!


Here's a close-up view of the Selenium Control Panel:

Run All Tests: Run every test in the test suite.

Run Selected Test: Click on one test in the test suite, then click here to run just that one test.

Pause / Continue: While a test is running, press the Pause button to temporarily stop running commands. After you press Pause, you can press Continue to resume the test.

Step: Click on a test in the test suite, and then click on just one line of the test to set a "breakpoint." When the TestRunner reaches a breakpoint, it will automatically pause execution on that line. You may then press the Step button to run the next line of the test and pause again. You can also click on a breakpoint line to remove the breakpoint.


HTA Mode: Selenium Core provides an additional mechanism for running automated tests called "HTA mode." HTA mode only works on Windows, and only with Internet Explorer; you can't use it to test any other browser. An HTA file is a special type of HTML file that is allowed to violate the same-origin policy and to write files to disk. When running in HTA mode, you don't have to install Selenium Core on the same webserver as the application under test (AUT). HTA files are also allowed to save test results directly to disk, rather than POSTing them to a webserver.
To run Selenium Core in HTA mode, just double-click on TestRunner.hta, in the /core directory of your Selenium installation. (As you can see, TestRunner.hta is just a copy of TestRunner.html; but that ".hta" extension makes a big difference!) You can then run your test suite just like any other Selenium Core test, but since the tests are being run in HTA mode, you're free to run your tests against any website.

You can also run TestRunner.hta from the command line, passing arguments to it just like you would pass arguments to an HTML file, like this:

C:\selenium\core>TestRunner.hta "test=..%2Ftests%2FTestSuite.html&auto=true&close=true&resultsUrl=results.html&save=true"


Advantages of HTA mode:

  • Can run tests against other sites
  • Suggested use: regression testing against a site being rebuilt

Disadvantage of HTA mode:

  • Only runs with IE

Thursday, April 17, 2008

Selenium IDE

Selenium IDE is an integrated development environment for Selenium tests. It is implemented as a Firefox extension (in other words, Selenium via a Firefox plugin) and allows you to record, edit, and debug tests. Selenium IDE includes the entire Selenium Core, allowing you to easily and quickly record and play back tests in the actual environment in which they will run. It is the easiest way to use Selenium, and most of the time it also serves as a starting point for your automation. Compared with most test automation tools it is very simple and lightweight.

Advantages: Selenium IDE is the only flavor of Selenium that allows you to record user actions in a browser window. It can also export recorded actions to most of the popular languages, such as Java, C#, Perl, and Ruby, which eliminates the need to learn a new vendor scripting language. To execute scripts created in these languages, you will need Selenium Remote Control; if you do not want to use Remote Control, then you will need to create your test scripts in HTML format.

Disadvantages: the biggest drawback of Selenium IDE is its limited browser support. Although Selenium scripts can be used with most browsers and operating systems, scripts written using Selenium IDE can run only in Firefox unless they are used with Selenium RC or Selenium Core.

Features:

  • Easy record and playback.
  • Intelligent field selection will use IDs, names, or XPath as needed.
  • Autocomplete for all common Selenium commands.
  • Walk through tests.
  • Debug and set breakpoints.
  • Save tests as HTML, Ruby scripts, or any other format.
  • Support for Selenium user-extensions.js file.
  • Option to automatically assert the title of every page.

Selenium IDE installation: the extension can be installed from either the OpenQA or the Mozilla distribution site. It is downloaded as an XPI file; if you open this file using File -> Open in Firefox (version 1.5 or higher), it should get installed. Once installed, Selenium IDE can be accessed from Tools -> Selenium IDE in your browser toolbar.

Selenium toolbar contents and their functionality: the small red button on the right-hand side indicates whether Selenium is in recording mode. By default, recording is turned on when the IDE comes up; you might want to turn it off by clicking the red button until you are ready to begin recording. Go to a web site that you want to record, click the record button, and begin your browsing tasks. You will notice that as you click and type in the browser, the IDE records everything you do. Selenium IDE will not record any operation you perform on your computer apart from the events in the Firefox browser window, so go ahead and read your mail, open a Word document, or do anything else; Selenium will record only your actions in the browser. When you are done, simply end recording and click the green arrow to play back the script you just created. Don't forget to save your script before closing the IDE.

If you created your script manually, simply go to the Source tab in the IDE and paste the script there. Now go back to the Editor tab and click the green arrow button. You should see the actions taking place in your Firefox browser. If you want to slow down the execution, select the Walk radio button before playing the script. You may need to increase the pause commands if the web pages are not rendering fast enough, which would otherwise cause an error.

If you open the Options window by going to Options, you will see some self-explanatory settings, for example the encoding of test files and the timeout. You can also specify Selenium Core and Selenium IDE extensions on this page; Selenium extensions can be used to enhance the functionality provided by Selenium.

The final button on the Selenium IDE toolbar is the Selenium TestRunner. TestRunner gives you a nice browser interface to execute your tests and a summary of how many tests were executed, how many passed, and how many failed; it also gives similar information on which commands passed or failed. TestRunner is available only for tests developed in HTML.




Tuesday, April 15, 2008

Selenium - an automation tool for web-based applications

Selenium is an open source tool for web application testing. Using it we can run functional and system tests against web applications; Selenium is not meant for unit testing. It is primarily developed in JavaScript and browser technologies, and hence supports all the major browsers on all the platforms.

In terms of platform and browser coverage, Selenium is one of the best tools available in the market. For example, you can have your automation scripts written for Firefox on Windows and, without major changes, run them on Firefox on the Mac.

Selenium tests run directly in a browser, just as real users do. It runs in Internet Explorer, Mozilla, and Firefox on Windows, Linux, and Macintosh, and in Safari on the Mac. The team also plans to target Safari on the iPhone in the coming months.

There are three variants of Selenium, which can be used in isolation or in combination to create a complete automation suite for your web applications:

  • Selenium IDE.
  • Selenium Core.
  • Selenium Remote Control.

Thursday, January 10, 2008

Importance Of Testing

Testing is an important step in the software development life cycle. Testing takes place at various stages of development, and it is a vital step because it helps identify mistakes and send the program back for correction. While seen by some as tedious, tiresome, and unnecessary, this process plays a crucial role in software development.

Software is deeply woven into human lives: it has infiltrated almost all areas of industry and has over the years become more and more widespread as a crucial component of many systems. System failure in any industry can be very costly, and in the case of critical systems (flight control, nuclear reactor monitoring, medical applications, etc.) it can mean lost human lives. These cost factors call for some kind of failure prevention, and one way to ensure a system's reliability is to test it extensively. Since software is a system component, it too requires a testing process.

Basic theory of testing:
The process of software testing involves creating test cases to "break the system", but before these can be designed, a few principles have to be followed.

1. Testing should be based on user requirements, in order to uncover any defects that might cause the program or system to fail to meet the client's requirements.

2. Testing time and resources are limited. Avoid redundant tests.

3. It is impossible to test everything. Exhaustive tests of all possible scenarios are impossible, simply because of the many different variables affecting the system and the number of paths a program flow might take.

4. Use effective resources to test. Use the most suitable tools, procedures, and individuals to conduct the tests. The test team should use tools they are confident and familiar with, and testing procedures should be clearly defined and based on the client's requirements.

5. Test planning should be done early, because test planning can begin independently of coding, as soon as the client requirements are set.

6. Testing should begin at the module level. The focus of testing should be on the smallest programming units first, expanding to other parts of the project afterwards.

7. Give equal importance to functional and non-functional attributes. Traditional testing focuses only on functional attributes (e.g., correct calculations), but non-functional attributes (e.g., reliability, availability, timeliness) are equally important.
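
A quick back-of-the-envelope calculation illustrates principle 3; the tests-per-second rate below is an arbitrary assumption, chosen only to show the scale of the problem:

```python
# Even one pair of 32-bit integer inputs has 2**64 possible combinations.
combinations = (2 ** 32) ** 2

# Assume (generously) we could execute a billion tests per second.
tests_per_second = 1_000_000_000
years = combinations / tests_per_second / (60 * 60 * 24 * 365)

print(f"{combinations} combinations, roughly {years:.0f} years to try them all")
```

Centuries of machine time for two integer inputs is why test case design techniques (boundary value analysis, equivalence partitioning, basis path testing) exist: they select a small, effective subset of the input space.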

We look at software testing in the traditional (procedural) sense and then describe some testing strategies and methods used in Object Oriented environment. We also introduce some issues with software testing in both environments.