Performance Testing

Abstract

This one-day tutorial is targeted toward those who want to learn about performance testing for web-based as well as desktop applications.

The tutorial introduces various types of performance tests and their objectives. Customers are often unsure about what type of performance test they need, and the problem is compounded when they insist on one type of test while actually needing another. The tutorial presents common questions that testers should ask stakeholders to determine which performance test types are needed, along with the information to gather before starting a performance test.

The tutorial also covers setting up the performance environment and the performance testing lab, and benchmarking load clients to determine the maximum load each client machine can generate. It demonstrates JMeter as a performance testing tool by running a simple performance test against a sample application.
The last section of the tutorial focuses on analyzing the captured data and preparing a test report.
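
As a minimal illustration of the kind of analysis applied to the captured data, the sketch below computes the average and 90th percentile response times and the throughput that a typical test report includes. The sample figures and the `summarize` helper are invented for this sketch; they are not part of JMeter, although JMeter's results files contain similar per-request timings.

```python
# A minimal sketch of analyzing captured response times from a load test.
# Sample data is hypothetical; a tool such as JMeter exports similar figures.

def summarize(response_times_ms, duration_s):
    """Compute basic statistics a performance test report typically includes."""
    ordered = sorted(response_times_ms)
    n = len(ordered)
    avg = sum(ordered) / n
    # 90th percentile: the response time that 90% of requests stayed under
    p90 = ordered[min(n - 1, int(0.9 * n))]
    throughput = n / duration_s  # requests per second
    return {"avg_ms": round(avg, 1), "p90_ms": p90, "rps": round(throughput, 2)}

samples = [120, 95, 110, 250, 130, 105, 400, 115, 98, 140]
print(summarize(samples, duration_s=5))
# → {'avg_ms': 156.3, 'p90_ms': 400, 'rps': 2.0}
```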

The tutorial also provides a list of DOs and DON'Ts from a performance testing perspective, in order to perform effective tests in an efficient and repeatable manner.

 

Vipul Kocher's Biography

 

 

Practical Risk Based Testing

Abstract

This workshop provides senior test engineers, test leaders and test managers with the main definitions, ideas, processes and tools they need in order to apply risk-based testing in their projects and organizations.

This workshop covers the major concepts of risk-based testing and risk management. During the workshop, the methodology behind RBT is discussed, and examples and Excel tool templates are provided to assist participants in their day-to-day work.

The risk-based testing process is presented through theory, examples, discussions and (where relevant) exercises, focused on risk identification, selection and planning on the one hand, and risk mitigation, tracking and control on the other.
A short review of risk-based testing in an Agile environment is also included.

Objectives
Through presentation, discussion, and examples, participants will learn to:
* Understand the concepts of Risk Management
* Describe Risk-Based Testing principles
* Understand what risk language offers in day-to-day work, and why we should adopt it
* Define where Risk-Based Testing can assist in the testing life cycle
* Define risks at the project level
* Perform risk analysis for the different testing topics
* Discuss test planning strategy issues with regard to Risk-Based Testing
* Discuss test execution strategy issues with regard to Risk-Based Testing

Real-life case studies will be presented, where reports and graphs will be discussed.
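
As a minimal illustration of the core RBT calculation, risk exposure is commonly computed as likelihood times impact and used to rank test areas. The scales, area names and the `rank_by_risk` helper below are invented for this sketch.

```python
# A hypothetical sketch of risk-based test prioritization:
# risk exposure = likelihood x impact, used to rank test areas.
# The 1-5 scales and example areas are invented for illustration.

def rank_by_risk(items):
    """Sort test areas by risk exposure (likelihood * impact), highest first."""
    return sorted(items, key=lambda i: i["likelihood"] * i["impact"], reverse=True)

areas = [
    {"name": "Login",    "likelihood": 2, "impact": 5},  # exposure 10
    {"name": "Payments", "likelihood": 4, "impact": 5},  # exposure 20
    {"name": "Reports",  "likelihood": 3, "impact": 2},  # exposure 6
]
for area in rank_by_risk(areas):
    print(area["name"], area["likelihood"] * area["impact"])
# → Payments 20, then Login 10, then Reports 6
```

The ranking then drives selection and planning: the highest-exposure areas get the earliest and deepest testing.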

Audience
This course is intended for professionals who deal with managing risks for the testing group. Among them are:
* Experienced test engineers
* Test Team Leaders
* Test Managers
* Quality Officers/Engineers/Managers

Prerequisites
This is not an introductory course. Participants should be familiar with/have knowledge in:
* Testing basic concepts
* Testing main processes
* Testing lifecycle
* Defect management principles
* Test requirements basics
* General knowledge of testing measurements

 

Alon Linetzki's Biography

Exploratory Testing

Presentation Format: Full Day Tutorial/Workshop

Abstract:

Exploratory Testing (ET) has gained popularity over the last few years and has quite a following, but, like Agile approaches, it still causes some dispute among software professionals. Some testers love its spontaneity, creativity and flexibility; others mistrust it as unsystematic, unplanned, undocumented and an excuse for sloppy work. Of course, ET has its time and place.

ET has a role in all environments, no matter how structured, but it should be regarded as another tool in the tester's armoury, to be used with judgement.
In fact, Paul argues that all testing is exploratory.

See http://gerrardconsulting.com/?q=node/588.

This tutorial sets out the background to the New Model for Testing and exploration, and explains why all testers should know how to explore. The course discusses the important psychological issues relating to exploration and how it can be used in the context of uncertain requirements. A range of techniques is described, with examples.

There will be a lot of practical work, testing functionality available on the internet. A laptop and wireless connection will be required to participate fully.

The course does not assume any detailed technical knowledge.

 

3 Key Points:

  • All testing is exploratory to some degree
  • Exploration of software, even without specifications, relies on sources of knowledge
  • Exploration depends on mental models that we use to inform good testing

 

Paul Gerrard's Biography

Software Test Estimation

Abstract

How long will the testing take? That’s a question we test professionals often struggle to answer, and, when we do, the response is often, “That’s too long!”

In this practical, hands-on course, Rex Black guides you through the tricky questions of test estimation. Can we use risk to determine what we should test—and how extensively? What tasks must we carry out to be ready to perform those tests when the time comes? Can we combine techniques like work-breakdown-structures, historical project data, and rules of thumb to estimate the time and money required for those tasks? How can we respond to management requests to compress testing efforts into pre-existing schedule or budget targets? Rex’s experience-based presentation, lively group discussion, hypothetical case study, and a realistic running exercise will put the essential estimation tools in your hands so you’ll be ready for your next testing project.
This course draws on Rex’s best-selling book, Managing the Testing Process, 2e, his new book, Critical Testing Processes, and over two decades of software, hardware, and systems experience.

 

Learning Objectives

Through presentation, discussion, and hands-on exercises, attendees will learn to:

• Analyze risks to system quality to determine what should be tested—and to what degree—in a test subproject.

• Use work-breakdown-structures to create an actionable, realistic estimate of the tasks, dependencies, resources, and time required for the testing subproject.

• Refine estimates using developer/tester ratios, industry averages, historical data, and test point analysis.

• Sell the estimate to management on a dollars-and-cents, risk-management basis.

• Adjust the estimated schedule and budget to fit project constraints without undermining accuracy or unduly increasing risk.

Course Materials

This course includes the following materials:

Course Outline: A general description of the course along with learning objectives, course materials and an outline of the course topics, including approximate timings for each section
Noteset: A set of approximately 130 PowerPoint slides covering the topics to be addressed
Project Source Documents for Course Exercises: Specifications used in the realistic example project used in exercises for the course
Estimation Factors: Factors that influence test estimation
Test Estimation: Tools and techniques for realistic predictions of your test effort
Bibliography and resources: A set of further readings, Web sites, tools and other resources to help implement the concepts



The printed course materials are provided in a binder, which makes it convenient for course attendees to remove portions as needed for reference, e.g., during exercises.

Session Plan

Introduction

Deciding what you should test

  • Quality
  • Quality and customer usage
  • Quality risk analysis
  • Case study

Exercise: Quality risk analysis

Estimating what you can test: Fundamentals

  • Work-breakdown-structures
  • Deliverables
  • Delphic oracle, three-point, and wideband
  • Dependencies and resources
  • Case study
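
The three-point technique listed above can be sketched with the standard PERT weighting, which combines optimistic, most-likely and pessimistic task estimates into a single figure; the example task figures below are invented.

```python
# Three-point (PERT) estimate: a weighted mean of optimistic, most-likely,
# and pessimistic estimates for a task. The weights are the standard PERT
# formula; the example figures (in days) are invented.

def three_point(optimistic, most_likely, pessimistic):
    """PERT estimate E = (O + 4M + P) / 6, with standard deviation (P - O) / 6."""
    estimate = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return estimate, std_dev

e, sd = three_point(optimistic=2, most_likely=4, pessimistic=12)
print(f"estimate={e:.1f} days, std dev={sd:.1f}")
# → estimate=5.0 days, std dev=1.7
```

Summing such per-task estimates across a work-breakdown-structure gives a defensible figure for the whole test subproject.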

Estimating what you can test: Important considerations

  • Test execution time
  • Bug removal time
  • People, process, and materials factors

Estimating what you can test: Refinements

  • Historical-data framework
  • Industry averages
  • Developer/tester and project effort ratios
  • Test point analysis
  • Uses and misuses of these techniques
  • Sticky-note work-breakdown-structure technique

Exercise: Developing an estimate

Selling your estimate

  • Cost of quality—and poor quality
  • The value of known bugs
  • Testing as an insurance substitute
  • Test information as a project guide
  • Case study

Exercise: Presenting and defending a budget

Exercise: Calculating costs of failure

Adapting to project constraints

  • Overlap phases
  • Add staff
  • Reduce test execution time
  • Use risk as a guide
  • Drop features
  • The risks of overtime and stretch goals

Exercise: Risk-driven reductions in test subproject scope

Bibliography

Recommended Readings

The class materials include a bibliography of books related to software testing, project management, quality, and other topics of interest to the test professional.

Rex Black's Biography

 

Measurements & Metrics for making good Testing Decisions

Measurements & Metrics for making good Testing Decisions - A practical workshop for controlling your Testing and Quality

Abstract

“If you can’t measure it, you can’t manage it,” said Peter Drucker. As test professionals, we face critical decisions throughout the life cycle: choosing the right test strategy and approach, estimating the right amount of effort and budget, deciding whether to add another test execution cycle, deciding what the real quality of the system is right now, and more.
In this workshop, Alon Linetzki aims to build the awareness and know-how needed to identify, analyze and maintain important measurements and metrics (choosing the right ones…) for making critical decisions during the product development and testing life cycle. The GQM method for defining measurements & metrics will be explained and demonstrated.
Through presentation, discussion, debate, brainstorming and questioning, we will go through terminology, methodology and concepts to learn how to plan, design and use measurements & metrics in a meaningful way that helps us make good testing decisions.
Relevant exercises are integrated into the workshop in order to close the gap between methodology and actual implementation.
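
The GQM (Goal-Question-Metric) structure mentioned above can be sketched as a small tree: each goal is refined into questions, and each question is answered by one or more metrics. The goal, questions and metric names below are invented for illustration.

```python
# A hypothetical GQM tree: goal -> questions -> metrics.
# All names here are invented examples, not a prescribed catalog.

gqm = {
    "Improve release quality": {
        "Are defects found early enough?": [
            "defect detection percentage by phase",
        ],
        "Is the product stable at release?": [
            "open critical defects",
            "defect arrival rate trend",
        ],
    },
}

def metrics_for(goal, tree):
    """Collect every metric that traces back to a given goal."""
    return [m for metrics in tree.get(goal, {}).values() for m in metrics]

print(metrics_for("Improve release quality", gqm))
# → ['defect detection percentage by phase', 'open critical defects',
#    'defect arrival rate trend']
```

The point of the structure is traceability: a metric with no question, or a question with no goal, is a candidate for dropping.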

Audience

Experienced test engineers, testing team leaders, test managers, Quality Assurance engineers, QA team leaders and QA managers who would like to know which metrics are critical and worth setting up and maintaining for making good testing and quality decisions.

Pre-requisites

Participants should have knowledge of test planning & estimation, test design, test execution, and cooperation with development, engineering and product teams; they should have worked with test status reports and have done some defect analysis.

Duration

1 day.
Note: This workshop is an extract from a two-day workshop, which includes more debate, discussion and company-based exercises.

Course Outline

Day 1

Chapter 1 - Introduction to Measurements and Metrics

● Introduction

○ Presenting participants and coach

● General

○ Measurement; Metric – definition

○ In which way should we use M&M

○ Measurements & metrics misconceptions

○ Discussion exercise – measurements & metrics – why & how

 

Chapter 2 – Choosing the right metrics

● Motivation

○ How are decisions really made?

○ The 7 most expensive words in a project

○ Why is selecting good metrics important?

○ Getting the right decisions, using the wrong metrics

● GQM – Goal Question Metric

○ Introduction

○ Defining Business Goals (BGs) and Testing Goals (TGs)

○ Asking the right questions 

Exercise – Defining BGs/TGs and Questions [optional]

○ Defining the right metrics

○ Exercise – Defining metrics

 

Chapter 3 – Measurement & metrics implementation

● M&M Implementation and Operation

○ The Measurements & metrics Catalog

○ M&M Roll Out dimensions

■ Functional

● Operational

● Managerial

■ Aggregation

■ Comparison (Projects, Releases, Products, Departments)

■ Category (Quality, Budget, Scope, Time)

○ Exercise – Catalog [optional]

○ M&M Economics

● Using M&M for making Good Testing Decisions

○ Decision points during the product development lifecycle

Scenario exercise – what are our goals? How will we know we reached them? What will help us know that?

○ Integrating M&Ms, Setting up Dashboards

● Measurements & metrics in Agile

● Measurements & metrics Benefits & Risks

● Workshop Retrospective

 

Note: the syllabus outline may vary, and topics and subtopics may be added or removed, all in favor of delivering better workshop content that is relevant, up to date, and adds value.

 

Alon Linetzki's Biography

 

Click here to enroll!

 

    PLATINUM

AlmavivA
Alten Italia
CA
Capgemini
HP
IBM

    GOLD

iSQI
media motive

    EXHIBITOR

Denamik
Neotys
Parasoft
Perfecto Mobile
Polarion