We recently conducted a 24-hour test automation hackathon. The intent was simple: get every team member up to speed on the latest features of Sahi Pro and familiarize them with how our end users and customers use the product.
Sahi Pro has been around for over 12 years now. Some of our team members were introduced to Sahi Pro a few years back, while some of our newer folks have only just started using it. As often happens, we tend to become familiar with one particular way of using a product and may not always be aware of the latest developments and best practices for doing the same thing today. With this hackathon, we wanted all our team members to come up to speed with the latest recommended way of using Sahi Pro. And of course, when we use our own product to automate a real-world application, we find bugs and usage patterns that give us ideas for improvement.
With this intention in mind, we split ourselves into 4 teams. Each team had a mix of developers, testers and support people. While some of us had experience working on real-world automation projects, some did not. We ensured a good mix.
We had looked at a bunch of applications to automate for this hackathon and narrowed it down to OpenCart. OpenCart is an open source online store management system used to create and manage e-commerce stores and websites. It is PHP/MySQL based, easy to install locally, and has rich, real-world functionality. The domain of online shopping was also familiar to most people. We decided to automate the modules "Sales", "Regular Checkout", "Reports", "System", "Catalog", and "Marketing". Within the "System" module, we would look at the sub-modules "Tax", "Currency", and "Localisation". The scope seemed ambitious. However, since it was decided by mutual discussion, we took it up as a challenge.
The day dawned with everyone eager to showcase their best. The teams formed their group huddles and discussed their strategy. And off they went to click away on their laptops. Different teams took different approaches. While some teams distributed work module-wise without overlap, some preferred to identify common areas and tackle them first. Some stuck to the simplicity of Sahi Pro's BDTA, while some tried to build in a little more abstraction. Some exported all data into external CSV files; some preferred to keep it in the code. Some had all object identifiers neatly tucked away in Accessor Repositories; some kept them hard-coded in the function implementations.
Overall, there were a few features which most teams used and found useful:
BDTA or Business Driven Test Automation
This way of declaring business scenarios in an Excel-like interface, with the prime focus on business flow and data, was well appreciated by all the teams. Some had just started using it, but everyone felt that BDTA was expressive and easy to use. Though it may surprise outsiders, we have a healthy culture of disagreement inside Sahi Pro, and not everyone is always in favor of or supportive of new improvements and innovations. But BDTA was unanimously hailed as very useful and easy to use.
Data driven or data externalization
In BDTA scenario files, it is fairly easy to right-click and move inline data into external CSV files. All the teams used this well to repeat the same test cases with varying sets of data. Given the time pressure and the scope of automation, teams tried to use ready-made features so as not to reinvent the wheel (as is often done in Resume Driven Test Automation).
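To give a flavor of what data externalization looks like when scripted by hand, here is a minimal Sahi Script sketch ("users.csv" and the login() keyword are illustrative names, not from the actual hackathon code):

```
// Read the externalized data; _readCSVFile returns the rows as a 2-D array
var $data = _readCSVFile("users.csv");
for (var $i = 0; $i < $data.length; $i++) {
    var $row = $data[$i];
    // Re-run the same keyword with each data row
    login($row[0], $row[1]);
}
```

In BDTA scenario files this happens without writing the loop by hand: the scenario refers to the CSV file and the test case is repeated for each row of data.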
Relational APIs and Accessor Repository
If there is one feature everyone in Sahi Pro uses, it is the Relational APIs. Identifying a radio button by its label, or clicking a down arrow near a button - these are easily possible with the _near, _in, _under, _leftOf, _rightOf etc. APIs. Every team understood this and used it well. However, some teams also stored these accessors in AR (Accessor Repository) files. Apart from keeping all UI elements in a separate file, there is one other slightly hidden advantage of using an AR file. Suppose we identified an element as _radio(0, _near(_label("Female"))) and added it to the AR file. The next time we record a fresh scenario and interact with the same element, the recorder recognizes it from the AR and emits _radio(0, _near(_label("Female"))) again - it is automatically recognized with the relational API! Interestingly, many teams did not know this and were surprised when they saw it.
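As a minimal sketch, relational accessors look like this in Sahi Script (the labels and element names are illustrative, not taken from OpenCart's actual markup):

```
// Pick the radio button closest to the "Female" label
_click(_radio(0, _near(_label("Female"))));

// Click the cell under a particular column header,
// or a link to the right of a button
_click(_cell(0, _under(_cell("Status"))));
_click(_link("Edit", _rightOf(_button("Save"))));
```

The point of these APIs is that elements are located the way a human describes them, so the scripts stay readable and survive markup changes better than hard-coded ids or XPaths.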
Git Integration
Git integration lets us work with repositories directly from the Sahi Pro user interface. Without leaving the Sahi Pro Editor, we can pull changes, commit changes, and push them to repositories.
One team in particular divided their work so that different members worked on different modules and kept pushing to Git. Team members periodically pulled the changes and could quickly utilize the keywords built by others. The team claimed this reduced effort duplication and made integrating each other's work quicker and smoother. It was interesting to see 2 teams use Git, while the other teams went the old-fashioned way of merging everything together later as a separate exercise. Notably, the team that won used Git integration.
Reports
All teams used Sahi Pro's inbuilt reporting. Sahi Pro automatically creates logs for all test executions. Starting from the suite level, reports can be drilled down to individual scenario files, then to the keyword called, and further to the exact line of script that was executed on the browser along with the data used. Reports for BDTA scenarios retain the format of the scenario file, along with the data passed to it. When an error occurs, a screenshot is taken and is available at the relevant failed step in the report. No team customized the reports; the default HTML reports sufficed for their requirements.
Other interesting usages:
1) The OpenCart application threw up random alerts at times. One team used Sahi Pro's alert handling so that all such random alerts were dismissed during playback, but also duly logged in the reports for later analysis. Only one team had used this to their advantage.
2) There was another problem in the OpenCart application where a button sometimes needed to be clicked twice to elicit the correct response. It was interesting to see how different teams handled it. While normally this would have been left as a bug with a failing test, most teams felt that bypassing the point was necessary, since most other flows depended on getting past it. So while the error was duly logged, teams also chose to work around it and proceed further with the automation. (We are yet to investigate whether it was a Sahi Pro bug in click simulation!)
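One way to script such a workaround in Sahi Script is a conditional retry (a hedged sketch; "Continue" and the confirmation check are illustrative names, not the actual OpenCart flow):

```
_click(_button("Continue"));
// If the expected next page did not appear, log the quirk
// in the report and click once more
if (!_exists(_heading2("Order Confirmation"))) {
    _log("Continue button needed a second click");
    _click(_button("Continue"));
}
```

This keeps the quirk visible in the reports while letting the dependent flows proceed past the problem point.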
Post Hackathon Discussion:
We had initially planned 30-minute sessions where each team would showcase what they had done. However, we soon realized there was a wealth of information in the freely flowing discussions. So what was to be a 2-hour session turned into a full 8-hour exercise! But it was very worthwhile. Technical approaches were questioned, conflicting solutions proposed, and code and implementations scrutinized for errors and bad practices. High-drama discussions ensued, with laughter and at times bruised egos, leading to a very entertaining and informative session.
Some teams accomplished a lot of breadth in their automation, but not all their scenarios were necessarily working. Some were very focused on specific scenarios which could be showcased well. Our criterion for choosing a winner was the quantity and quality of tests automated; there were no extra points for using any specific features of Sahi Pro. One team accomplished a good balance of breadth and correctness: they automated the maximum number of test cases, and their implementation was easy to read and understand. Interestingly, they used BDTA, Accessor Repository, Git integration and the data-driven approach. Their solution was not over-engineered and stuck to the essence of test automation. Congratulations, Team Sahi Squad!
Learning and observations:
1) Automating a real-world application in a time-bound manner helped us understand the pressures our customers face and check whether our proposed solutions were useful to them. We were happy to see that Sahi Pro was indeed effective.
2) It is easy to underestimate the work involved in test automation. Even with experienced folks on the teams, the task at hand proved very large for the time available.
3) We noticed that new developers on the teams had a tendency to over-engineer the solution. It takes a bit of automation experience to settle on the right amount of abstraction.
4) Even with a tool like Sahi Pro, a lot of thought and effort is needed to design good test cases. It may not be a programming task, but it is still a technical task requiring careful thought.
5) Competitions like these are a welcome break from routine work. While it was a fun-filled couple of days (though originally planned as a 24-hour hackathon, we split it into 2 working days), we learnt a tremendous amount about the product and its target use cases. While this was a competition, there was also an undercurrent of camaraderie which made the event less stressful and more fun. Food and refreshments on the house helped too. Teams freely helped each other where the application had quirks. It was also a good opportunity for people from dev, QA and support to mingle and work on a common problem.
Some quotes from our team mates:
"A great peer-to-peer team-building activity that helped me to be an active team player. The experience of the Sahi Pro Hackathon helped me identify and clear my gaps in implementing Sahi Pro, and see the scope of enhancements for Sahi Pro."
- Avanish Kantesh.
"Sahi is becoming a more powerful tool every day. The post-hackathon discussions of conflicting opinions and ideas for solving a given problem were a good learning experience."
- Sachin Goyal.
We definitely look forward to more of these!