EvangelistPunk Reading List

If you look on the site Menu, you might notice a new item has appeared – my Reading List.

This is, in short, a list of the books I’ve read which have particularly helped me during my journey in software testing, and that I would recommend to anyone carving their own path through our profession.

Any glaring omissions? Anything you want to point me towards? Let me know!

Testing Big Data: A Blueprint

In my current role, I’m looking after a team that deals with Big Data. This is a new area to me, and having tried to do some research on testing in this discipline, it became apparent that I’m not the only one. This is a brave new world, and it seems that, besides the old data testing standards of getting your test data set in order, no-one has really figured out how they’re going to do this properly, or offered any kind of industry standard for testing in this area.

So, I challenged myself to have a closer look at Big Data, and to put together a blueprint for how to test it. None of this is gospel, but it’s how I intend to get my teams to start thinking about what they’re doing, and the approach I’d like to see them taking.

Best Practices

Data Quality: First of all, the tester should establish the data quality requirements for different forms of data (e.g. traditional data sources, data from social media, data from sensors, etc.). If that’s done properly, the transformation logic can be tested in isolation by executing tests against all possible data sets.
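
To make this concrete, here’s a minimal Python sketch of exercising transformation logic in isolation against hand-crafted data sets for different source types. The transform() function, the field names and the sample records are all illustrative assumptions rather than anything prescribed above.

```python
# A minimal, hypothetical sketch of exercising transformation logic in
# isolation against hand-crafted data sets for different source types.
# transform() stands in for whatever logic your pipeline actually applies.

def transform(record):
    """Example transformation: normalise names and drop records with no ID."""
    if not record.get("id"):
        return None
    return {"id": record["id"], "name": record.get("name", "").strip().lower()}

# One crafted data set per source type (traditional, social media, sensor ...)
crafted_data_sets = {
    "traditional": [{"id": 1, "name": " Alice "}, {"id": None, "name": "ghost"}],
    "social":      [{"id": 2, "name": "BOB"}],
    "sensor":      [{"id": 3, "name": ""}],
}

expected = {
    "traditional": [{"id": 1, "name": "alice"}],
    "social":      [{"id": 2, "name": "bob"}],
    "sensor":      [{"id": 3, "name": ""}],
}

for source, records in crafted_data_sets.items():
    actual = [r for r in (transform(rec) for rec in records) if r is not None]
    assert actual == expected[source], f"{source}: {actual} != {expected[source]}"
print("Transformation logic behaves as expected for every source type.")
```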

Data Sampling / Risk Based Testing: Data sampling becomes hugely important in Big Data implementation, and it’s the tester’s job to identify suitable sampling techniques, and to establish appropriate levels of risk based testing to include all critical business scenarios and the right test data set(s). Whether this is done with handcrafted data, or a sample of production data is down to the circumstances you’re working in, but do think carefully about security / confidentiality constraints if using real data.
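
As a sketch of what the sampling step might look like, the snippet below draws a stratified random sample from a (hypothetical) production extract, keeping every business-critical stratum represented, and masks a confidential field before the data reaches the test environment. Field names, the masking rule and the sampling fraction are illustrative assumptions.

```python
import hashlib
import random
from collections import defaultdict

def mask_email(email):
    """Replace a confidential value with a stable, non-reversible token."""
    return hashlib.sha256(email.encode("utf-8")).hexdigest()[:12]

def stratified_sample(records, key, fraction, seed=42):
    """Sample each business-critical stratum separately so rare but
    high-risk scenarios are still represented in the test data set."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[rec[key]].append(rec)
    sample = []
    for group in strata.values():
        n = max(1, int(len(group) * fraction))   # always keep at least one record
        sample.extend(rng.sample(group, n))
    return sample

# Hypothetical production extract: transaction type is the risk dimension.
production = [{"txn_type": t, "email": f"user{i}@example.com", "amount": i}
              for i, t in enumerate(["purchase"] * 90 + ["refund"] * 9 + ["chargeback"])]

test_data = [dict(rec, email=mask_email(rec["email"]))
             for rec in stratified_sample(production, key="txn_type", fraction=0.1)]
print(f"{len(test_data)} masked records sampled from {len(production)}")
```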

Automation: Automate the test suites as much as possible. Big Data regression tests must be run regularly, as the database will be updated frequently. Automated regression suites should be created with a view to being run after each iteration.
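
A minimal sketch of how such a regression check might be wired up with pytest so it can run after each iteration. The load_output_rows() helper and both checks are hypothetical placeholders for however you read your job output and whatever assertions matter in your pipeline.

```python
# Minimal pytest-style regression sketch. load_output_rows() is a hypothetical
# stand-in for however you read the latest job output; both checks are
# placeholders for whatever assertions matter in your pipeline.
import pytest

def load_output_rows():
    # In a real suite this would read the latest processed output
    # (e.g. files exported from the cluster); stubbed here for illustration.
    return [{"id": 1, "total": 10.0}, {"id": 2, "total": 5.5}]

@pytest.fixture(scope="module")
def output_rows():
    return load_output_rows()

def test_no_duplicate_keys(output_rows):
    ids = [row["id"] for row in output_rows]
    assert len(ids) == len(set(ids)), "duplicate keys found in output"

def test_totals_are_non_negative(output_rows):
    assert all(row["total"] >= 0 for row in output_rows)
```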

Parallel test execution: Hopefully, this is obvious – the volume of data being checked will probably require parallel execution. But here’s a talking point – if data sampling is good enough, is it even necessary?

Pairing with developers: This is vital to understand the system under test. Testers will require knowledge on par with the developers about Hadoop / HDFS (Hadoop Distributed File System) / Hive.

Make things simpler: If possible, the data warehouse should be organised into smaller units that are easier to test. This will offer improved test coverage, and optimisation of the test data set.

Normalise design and tests: Effective generation of normalised test data can be achieved by normalising the dynamic schemas at the design level.

Test Design
One thing I’ve seen repeated in many places is the mantra that test design should centre around measurement of the four Vs of data – Variety, Velocity, Volume and Veracity.

Variety:
Different forms of data. The variety of data types is increasing, as we must now consider structured data, unstructured text-based data, and semi-structured data like social media data, location-based data, log-file data etc. They break down as follows:

  • Structured Data comes in a defined format from RDBMS tables or structured files. Transactional data can be handled in files or tables for validation purposes.
  • Semi-structured Data does not have any defined format, but structure can be determined based on data patterns – for example, data scraped from other websites for analysis purposes. For validation, the data needs to be transformed into a structured format using custom-built scripts: first the patterns are identified, then copy books or pattern outlines are prepared, then the copy books are used in scripts to convert the incoming data into a structured format, and finally validations are performed using comparison tools (see the sketch after this list).
  • Unstructured Data has no format at all and is stored in documents, web content, etc., so testing it can be complex and time consuming. A level of automation could be achieved by converting the unstructured data into structured data using Pig scripting or something similar – but the overall coverage of automation will be affected by any unexpected behaviour of the data, because the input data can be in any form and could potentially change every time a new test is performed. So a business scenario validation strategy should be employed for unstructured data: identify the different scenarios that could occur in data analysis, and create handcrafted test data based on those scenarios.
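
As a worked example of the semi-structured case above, here’s a minimal Python sketch: a regular expression plays the role of the copy book / pattern outline, incoming log-style lines are converted into structured rows, and the result is compared against an expected set. The log format and field names are purely illustrative.

```python
import re

# The pattern plays the role of the "copy book": it captures the structure
# we expect to find in the incoming semi-structured data (illustrative format).
LINE_PATTERN = re.compile(r"(?P<ts>\d{4}-\d{2}-\d{2}) user=(?P<user>\w+) action=(?P<action>\w+)")

def to_structured(lines):
    """Convert semi-structured lines into structured rows; collect rejects separately."""
    rows, rejects = [], []
    for line in lines:
        match = LINE_PATTERN.search(line)
        (rows if match else rejects).append(match.groupdict() if match else line)
    return rows, rejects

incoming = [
    "2024-05-01 user=alice action=login",
    "2024-05-01 user=bob action=purchase",
    "garbled line that does not match the pattern",
]

expected = [
    {"ts": "2024-05-01", "user": "alice", "action": "login"},
    {"ts": "2024-05-01", "user": "bob", "action": "purchase"},
]

rows, rejects = to_structured(incoming)
assert rows == expected, "structured output does not match expectation"
assert len(rejects) == 1   # unmatched lines should be investigated, not ignored
print(f"{len(rows)} rows validated, {len(rejects)} lines rejected for review")
```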

Velocity:
The speed at which new data is created. Speed – and the need for real-time analytics to derive business value from it – is increasing thanks to digitization of transactions, mobile computing and the sheer number of internet and mobile device users. Data speed needs to be considered when implementing any Big Data appliance to overcome performance problems. Performance testing plays an important role in the identification of any performance bottlenecks in the system, and in ensuring the system can handle high velocity streaming data.

Volume:
Scale of data. Comparison scripts must be run in parallel across multiple nodes. As data stored in HDFS is in file format, scripts can be written to compare two files and extract the differences using comparison tools. Data is converted into the expected result format, then compared against the actual data. This approach requires an up-front time investment in scripting, but it reduces the required regression testing time. When there isn’t time to validate the complete data set, risk-based sampling should be used for validation. Depending on your circumstances, there could potentially be a case for building tools for E2E testing across the cluster.
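
A minimal sketch of the file-comparison idea, assuming line-oriented files where the first pipe-delimited field is a record key. In a real run, many such comparisons would execute in parallel, each against its own slice of the data; the file paths are hypothetical.

```python
# Minimal sketch: compare an expected-result file against actual output and
# report the differences. In practice many such comparisons would run in
# parallel, each against its own data slice; the paths are hypothetical.
def load_keyed_lines(path):
    """Read delimited lines into {key: line} so differences can be pinpointed."""
    with open(path, encoding="utf-8") as handle:
        return {line.split("|", 1)[0]: line.rstrip("\n") for line in handle if line.strip()}

def diff_files(expected_path, actual_path):
    expected, actual = load_keyed_lines(expected_path), load_keyed_lines(actual_path)
    missing = sorted(set(expected) - set(actual))        # rows lost in processing
    unexpected = sorted(set(actual) - set(expected))     # rows that should not exist
    changed = sorted(k for k in set(expected) & set(actual) if expected[k] != actual[k])
    return missing, unexpected, changed

if __name__ == "__main__":
    missing, unexpected, changed = diff_files("expected/part-00000.txt", "actual/part-00000.txt")
    for label, keys in (("missing", missing), ("unexpected", unexpected), ("changed", changed)):
        print(f"{label}: {len(keys)}", keys[:10])
```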

Veracity:
Accuracy of data. This is the assurance that the final data provided to the EDW (Enterprise Data Warehouse) has been processed correctly and matches the original data file, regardless of its type. The accuracy of any subsequent analysis is dependent on the veracity of the data. This also means ensuring that “data preparation” processes such as removing duplicates, fixing partial entries, eliminating null / blank entries, concatenating data, collapsing or splitting columns, aggregating results into buckets etc. are not onerous manual tasks.
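
To keep those data preparation steps from becoming onerous manual tasks, simple automated checks can confirm they were actually applied. A sketch, assuming the prepared data can be read into a list of dictionaries; the field names and bucket boundaries are illustrative.

```python
# A sketch of automated checks on "data preparation" output: no duplicates,
# no null/blank mandatory fields, and values bucketed as expected.
# Field names and bucket boundaries are illustrative assumptions.
prepared = [
    {"id": "A1", "amount": 12.5, "bucket": "10-20"},
    {"id": "A2", "amount": 3.0,  "bucket": "0-10"},
    {"id": "A3", "amount": 18.0, "bucket": "10-20"},
]

def bucket_for(amount):
    return "0-10" if amount < 10 else "10-20" if amount < 20 else "20+"

ids = [row["id"] for row in prepared]
assert len(ids) == len(set(ids)), "duplicates survived data preparation"
assert all(row["id"] and row["amount"] is not None for row in prepared), "null/blank entries remain"
assert all(row["bucket"] == bucket_for(row["amount"]) for row in prepared), "rows bucketed incorrectly"
print("Data preparation checks passed.")
```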

Getting these right allows us to use the data to offer two more Vs – Visibility and Value.

Potential Issues:
Test planning and design:
Existing automated scripts generally cannot be scaled to test Big Data. Trying to scale up test data sets without proper planning and design will lead to delayed response times, timeouts etc. during test execution. However, performing action-based testing (ABT), and treating tests as actions made up of keywords and appropriate parameters in a test module, will help mitigate this issue.
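
A rough sketch of the action-based idea: each test step is a keyword plus parameters in a test module, and a small driver maps keywords onto implementations. The keywords and their (stubbed) implementations below are invented purely for illustration.

```python
# A rough sketch of action-based testing: each test step is a keyword plus
# parameters, and a driver dispatches them to implementations. Keywords and
# their stubbed implementations are invented purely for illustration.
def check_row_count(table, expected):
    actual = {"orders": 3, "customers": 2}.get(table, 0)   # stubbed lookup
    assert actual == expected, f"{table}: expected {expected} rows, got {actual}"

def check_no_nulls(table, column):
    nulls = 0                                              # stubbed lookup
    assert nulls == 0, f"{table}.{column} contains {nulls} null values"

ACTIONS = {"check row count": check_row_count, "check no nulls": check_no_nulls}

# The "test module": a list of (keyword, parameters) pairs.
test_module = [
    ("check row count", {"table": "orders", "expected": 3}),
    ("check no nulls", {"table": "orders", "column": "order_id"}),
]

for keyword, params in test_module:
    ACTIONS[keyword](**params)
    print(f"PASS: {keyword} {params}")
```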

When To Test

Testing should be performed at each of the three phases of Big Data processing to ensure that data is getting processed without any errors.

Functional Testing should include:

  • Validation of pre-Hadoop processing
  • Validation of Hadoop MapReduce process data output
  • Validation of data extract, and load into EDW

Apart from these functional validations, non-functional testing including performance testing and failover testing should be performed.

Validation of Pre-Hadoop Processing
Data from various sources like weblogs, social network sites, call logs, transactional data etc., is extracted based on the requirements and loaded into HDFS before it is processed further.

Validations:
1. Comparing the input data file against source system data to ensure the data is extracted correctly (see the sketch after this list)

2. Validating the data requirements and ensuring the right data is extracted

3. Validating that the files are loaded into HDFS correctly

4. Validating that the input files are split, moved and replicated across different data nodes.
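
As a sketch of validations 1 and 3, the snippet below compares record counts and a content checksum between the source extract and the copy read back out of HDFS via the standard hdfs dfs -cat command. The paths are placeholders, and the checks assume plain-text, line-oriented files.

```python
import hashlib
import subprocess

def summarise(data: bytes):
    """Record count plus an order-insensitive content checksum for line-oriented data."""
    lines = [l for l in data.splitlines() if l.strip()]
    digest = hashlib.sha256(b"\n".join(sorted(lines))).hexdigest()
    return len(lines), digest

def read_local(path):
    with open(path, "rb") as handle:
        return handle.read()

def read_hdfs(path):
    # Read the file back out of HDFS using the standard CLI.
    return subprocess.run(["hdfs", "dfs", "-cat", path],
                          check=True, capture_output=True).stdout

# Placeholder paths for the source extract and its copy in HDFS.
source_count, source_digest = summarise(read_local("/data/extracts/weblogs_2024-05-01.txt"))
hdfs_count, hdfs_digest = summarise(read_hdfs("/landing/weblogs/2024-05-01/part-0000"))

assert source_count == hdfs_count, f"record counts differ: {source_count} vs {hdfs_count}"
assert source_digest == hdfs_digest, "content checksums differ: data was not loaded intact"
print(f"{hdfs_count} records loaded into HDFS intact.")
```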

Potential Issues:
Incorrect data captured from source systems
Incorrect storage of data
Incomplete or incorrect replication

Validation of the Hadoop MapReduce Process
Once the data is loaded into HDFS, the Hadoop MapReduce process is run to process the data coming from different sources.

Validations:
1. Validating that data processing is completed and the output file is generated

2. Validating the business logic on a standalone node, and then validating it after running against the test cluster

3. Validating the MapReduce process to verify that key-value pairs are generated correctly

4. Validating the aggregation and consolidation of data after the reduce process (see the sketch after this list)

5. Validating the output data against the source files and ensuring the data processing is completed correctly

6. Validating the output data file format, and ensuring that the format meets the requirement
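
A minimal sketch of the aggregation check referenced in point 4: the aggregation is recomputed independently from the source records and compared against the (hypothetical) reduce output. The field names and the aggregation itself are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical source records and the corresponding reduce-side output.
source_records = [
    {"region": "north", "amount": 10.0},
    {"region": "north", "amount": 5.0},
    {"region": "south", "amount": 7.5},
]
reduce_output = {"north": 15.0, "south": 7.5}   # key-value pairs produced by the job

# Independently recompute the aggregation from the source data ...
expected = defaultdict(float)
for record in source_records:
    expected[record["region"]] += record["amount"]

# ... and verify keys, aggregated values, and that nothing was dropped or invented.
assert set(reduce_output) == set(expected), "key sets differ between source and output"
for key, value in expected.items():
    assert abs(reduce_output[key] - value) < 1e-9, f"aggregation mismatch for {key}"
print("Reduce output matches independently computed aggregation.")
```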

Potential Issues:
Coding issues in MapReduce jobs
Jobs working correctly when run on a standalone node, but not on multiple nodes
Incorrect aggregations
Node configurations
Incorrect output format

Validation of Data Extract, and Load into EDW
Once the MapReduce process is complete and the data output files are generated, this processed data is moved to the enterprise data warehouse, or to other transactional systems, depending on the requirement.

Validations:
1. Validating that transformation rules are applied correctly

2. Validating that there is no data corruption by comparing the target table data against the HDFS file data (see the sketch after this list)

3. Validating the data load in the target system

4. Validating the aggregation of data

5. Validating the data integrity in the target system
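
A sketch of the data corruption check referenced in point 2: rows exported from the HDFS output are compared against the rows that landed in the warehouse table. The in-memory sqlite3 database stands in for whatever warehouse you actually target, and the table and column names are illustrative.

```python
import csv
import io
import sqlite3

# The warehouse connection is stubbed with an in-memory sqlite3 database so the
# sketch is self-contained; in reality you would connect to the actual EDW.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (sale_id TEXT, region TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                      [("S1", "north", 15.0), ("S2", "south", 7.5)])

# Stand-in for the processed output pulled from HDFS (e.g. via `hdfs dfs -cat`).
hdfs_export = "sale_id,region,amount\nS1,north,15.0\nS2,south,7.5\n"

hdfs_rows = {(r["sale_id"], r["region"], float(r["amount"]))
             for r in csv.DictReader(io.StringIO(hdfs_export))}
table_rows = set(warehouse.execute("SELECT sale_id, region, amount FROM sales"))

# No data corruption: everything extracted from Hadoop arrived, and nothing extra appeared.
assert hdfs_rows - table_rows == set(), f"rows missing from EDW: {hdfs_rows - table_rows}"
assert table_rows - hdfs_rows == set(), f"unexpected rows in EDW: {table_rows - hdfs_rows}"
print(f"{len(table_rows)} rows verified between the HDFS output and the warehouse table.")
```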

Potential Issues:
Incorrectly applied transformation rules
Incorrect load of HDFS files into EDW
Incomplete data extract from Hadoop HDFS

Validation of Reports
Analytical reports are generated using reporting tools by fetching the data from EDW or running queries on Hive.

Validations:
1. Reports Validation: Reports are tested after the ETL/transformation workflows have been executed for all the source systems and the data has been loaded into the DW tables. The metadata layer of the reporting tool provides an intuitive business view of the data available for report authoring. Checks are performed by writing queries to verify whether the views are getting the exact data needed for the generation of the reports.

2. Cube Testing: Cubes are tested to verify that dimension hierarchies with pre-aggregated values are calculated correctly and displayed in the report.

3. Dashboard Testing: Dashboard testing consists of testing the individual web parts and reports placed in a dashboard. Testing would involve ensuring all objects are rendered properly and the resources on the webpage are current. The data fetched from the various web parts is validated against the databases.

Potential Issues:
Report definition not set as per the requirement
Report data issues
Layout and format issues

Kent Software Testing Meetup – An Update

First of all, a big thank you to everyone who has expressed an interest in putting together a Software Testing meetup in Kent. There was definitely enough interest to consider moving forward and putting something together, so that’s what I’ll try to do after Easter.

In preparation for that, it would really help if you could let me know which town you think would be best for such an event, and the sort of things you’d like to see covered. I want this to be right for as many people as possible, and I can only do that if I know what people actually want.

I’d be grateful if you could keep any requests and ideas to the comments for this post on my blog page. My posts get shared across a few branches of social media, and I don’t want anyone’s feedback to get lost, or to have to check all over the place if I can help it.

Thanks again!

Software Testing Meetup – Kent / South East

In London, it seems that you can’t throw a brick without it crashing into the middle of a Software Testing / QA Meetup*. But once you get outside of the big cities, things tend to thin out quite a bit. And as far as I can tell, there’s absolutely nothing going on for Testers / QAs in my neck of the woods – Kent.

So, are there any Software Testing / QA meetups happening in Kent that I don’t know about, and that I could get involved with?

If not, would there be any regular interest in either attending or speaking at one if I were to organise it?

If I were to put a meetup together, I’d like it to be an open, friendly environment where junior testers can seek advice and learn their craft, where experienced testers can mentor while sharing experiences and ideas, and where everyone involved can discuss recent innovations or working practices they’ve found to work really well, issues they’re facing, interesting articles or media they’ve come across, and so on. As a result, everyone involved would be helping to build and contribute to a local support network where questions and ideas can be floated, issues can be discussed, and feedback and answers given in a positive and productive way.

The contacts I’ve made in my other life mean I’ll have no problem getting a room sorted out in a pub somewhere, so we can get together and talk about our industry, and our particular discipline within our industry.

As for location, if I were to put something together in Ashford, that could potentially capture Canterbury, Folkestone, Hythe, Romney Marsh, Dover, Deal, Thanet, Tonbridge, Maidstone, Tenterden and Tunbridge Wells – so, the majority of South Kent.

If the interest is there, I’ll happily organise the meetup via… erm… meetup.com – but I need to know the interest is there first. So feel free to share this post far and wide with folks in the Testing / QA industry across the South East – the interest it receives will be the key factor in deciding whether or not to go ahead with a dedicated meetup in Kent.

*Please note – I do not advocate the throwing of bricks in London as a means of finding a Testing Meetup.

Metrics & Measuring Performance in QA

I’ve always thought of metrics in QA as a bit of a tricky subject, as I find it difficult to identify and attach meaningful numbers to performance in a role based around providing information.
Technically, there is stuff we can quantify, but I’m dead against keeping track of statistics like personal bug counts, numbers of tests executed and so on. They bring about pointless bugs, endless raising of non-issues, and underhanded tactics all over the place, so they don’t give any true measure of an individual’s performance in the QA field. As I mentioned, the real measure of QA’s effectiveness and value is in the information they provide to their customers – the rest of their development team, the product and business teams they work with, and indeed, anyone else who is a stakeholder in the work the team carries out.
Even when trying to compare the performance of one person against another, the nature of QA means that, due to different pressures, time constraints, the relative state of the system under test and so on, you will never see different people running the same test in exactly the same set of circumstances. So it’s unfair to use that sort of thing as a measure or comparison of performance either.
But I do understand the need to monitor performance, particularly for new hires or new additions to a team, and there are a few things I use to measure the performance, throughput and relative value of folks in QA. While many of these metrics are geared towards the performance of new members of a team, they could easily be adapted to track the progress and performance of established team members too.
Bug Quality
The general quality of bugs raised should be spot checked, with closer attention being paid to bugs raised due to issues missed in testing (indicative of areas where testing and detection methods should be improved) and bugs returned as ‘Will Not Fix’ (indicative of areas where priorities and understanding of requirements / product needs / customer needs should be improved). For new hires, I’d expect the numbers of such issues to decrease over time as the QA ramps up in their new domain. Also, keep an eye open for any bug reports which fail to describe incorrect system behaviour accurately, have been assigned an inappropriately low priority, or otherwise understate the significance of a problem. These will highlight areas where coaching is required to improve understanding of the system under test.
 
Critical Bugs in Test vs Production
Keep an eye on the ratio of critical bugs (>=P2) raised in Test vs Production. Customer satisfaction is the true North of quality, and if there are more than a handful of instances of critical bugs being identified post-sprint, this could be indicative of a coaching need.
 
Test Coverage for Applications
Whenever a new hire fills a vacancy, I’d expect to see an increase in test coverage over time. Establish the current baseline as the areas the team currently covers, and track this for increases. But, importantly, you must track for increases in areas where increases are expected. Don’t forget that, particularly with automation, there are upper limits for test coverage, so don’t make the mistake of setting a coverage target without first discussing and identifying the areas it is actually possible to cover. And of course, as well as the expected coverage levels, any timescale implemented must also be realistic, or you risk setting an unachievable target.
Load Shift
When a new QA comes on board, overall team output should increase as the new member of the team ramps up and takes on more of the testing load. This one is a bit arbitrary, and not entirely dependent on the new QA, but it’s still worth monitoring as an identifier for potential issues and bottlenecks in your team’s workflow, as well as the performance of QA.
 
Overall increase in Story Turnaround & Completion
Keep track of the team’s commitments for each sprint, and of how many of those commitments were delivered with a high standard of quality. Again, this isn’t always going to be directly in the hands of QA, but where a team has recently filled a vacancy, I’d expect a month-on-month increase in the number of stories committed to, in the percentage of commitments met, and in the speed with which stories are completed. Take the current averages as a baseline, and monitor for the expected increases.
 
Engagement
Not a ‘numbers’ metric, but arguably the most important one. Are the QA team making meaningful contributions to Retrospectives? Planning & Estimation? How are they communicating the information they’re finding during the course of their work? For new team members, I’d look for their engagement to increase as they ramp up in their new domain and adapt to the team and company culture. But as QA professionals bringing a fresh pair of eyes to the team, I’d expect there to be some level of insight and engagement from the very beginning. I’d also expect the QAs to be actively involved in the solutions to any bugs / issues they raise – so, conferring with developers working on solutions, discussing how the fix should be retested etc., to improve their knowledge of the system under test and its workings.

Regardless of how you decide to measure performance in QA, it is worth remembering that any metric should be used as an informational tool rather than as any kind of absolute measure. The reality is that there is no substitute for getting to know what your folks are doing, the problems they encounter, how they handle those problems, and how they communicate with the people around them. These are the things that the team and your customers will be assessing their performance on, and the truest measure of success is also the simplest – ‘Is the customer happy?’

Ministry of Testing

I’ve been asked a few times if there are any communities or other blogs that I recommend for continuing to learn the QA & Testing craft, and for engaging with other folks in our field of expertise.

Yes, there is.

Support MoT

I’m a supporter and fan of Ministry of Testing. Rosie Sherry and her ever-growing group of presenters and contributors have come together to build a fantastic software testing community, full of tutorials, discussions, knowledge sharing and information for testers of all levels. Plus, their growing portfolio of TestBash conferences around the world are some of the friendliest, most accessible conferences around.

And now, I’m very proud to have my blog as part of the Testing Blog Feed on their site. This feed is an absolute goldmine of thoughts and ideas on all aspects of testing, and I’m immensely looking forward to contributing to it.