Mouna Hammoudi (PhD, University of Nebraska) identifies five main reasons why recorded automation scripts break easily. In this article, we will look at solutions to, and critiques of, the problems she raises. Her study found that weak locators account for 73.6 percent of automation script failures in record/replay tests.
You can read the academic paper here.
### Cause 1: Weak Locators
What’s nice about this write-up is that it attaches a percentage to each class of breakage. In the case of locators, it is a whopping 73.6%! That means, according to this study, we can fix a majority of recorded/replay test failures if we can fix this one problem. This happens because recorders generate locators automatically.
The hierarchy of breakages is:
- Element Attribute not Found.
  - An attribute in this context would be something like

    ```html
    <input name="FullName" type="text" />
    ```

    that is changed to:

    ```html
    <input name="Name" type="text" />
    ```
- Hierarchy-Based Locator Target not Found.
  - This issue falls under the general umbrella of selectors not being located successfully, but the hierarchy of DOM structures is emphasized in her argument. This occurs when a developer changes something about the DOM structure that the automation script is relying on. The issue worsens the more specific the selector gets. For instance:

    ```css
    html > body.universal-auth-page.new-topbar > div.container > div#content > div.subheader > div#tabs > a
    ```

    is more likely to break than

    ```css
    #tabs a
    ```

    I agree that relying on hierarchy excessively is a dangerous way to build locators. Designers and developers are prone to change the HTML document to make the application work, and hierarchy is one of the things most likely to change. I recommend talking to your developers to discuss what is best. Specifically, I would build some sort of agreement on which selectors are stable and reliable.
- Index-Based Locator Target not Found.
  - This directly translates into `.some-selector div div:nth-child(1)`. I agree that this is a dangerous thing to do. But how would you then select something in a list? In this case, it would be wise to rely on a class name such as `.active`, as it is less likely to change.

<!-- Obviously, this issue is not specific to recorders as well. In our application ABT, we do have an automatic CSS selector builder as well. -->

### Cause 2: Invalid Values

Let's start with an example:

#### Invalid Text Field Input

```html
<input required type="string" />
```
Is changed to:

```html
<input required type="email" />
```
And the value that was entered was “hello world”. This would no longer be valid, because the browser would not allow the form to be submitted. I actually don’t think there is a good solution to this problem. The input has changed to the “email” type, so the automation script should behave accordingly; anything else would produce a non-deterministic result.
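One mitigation, sketched below as my own suggestion rather than anything from the paper, is to validate the recorded value against the field's current `type` before replaying it, so the script fails fast with a clear message instead of silently submitting an invalid form. The `validate_for_input_type` helper and its regex are assumptions of mine, roughly mirroring the browser's built-in email check:

```python
import re

# Rough approximation of the browser's type="email" validation
# (the real HTML spec's rules are more involved; this is a sketch).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_for_input_type(value: str, input_type: str) -> bool:
    """Return True if `value` would plausibly pass the browser's
    basic check for an <input> of the given type. Hypothetical helper."""
    if input_type == "email":
        return EMAIL_RE.match(value) is not None
    # text and friends: the browser accepts any string
    return True

# The recorded value "hello world" was fine for type="text",
# but fails once the field becomes type="email":
assert validate_for_input_type("hello world", "text")
assert not validate_for_input_type("hello world", "email")
assert validate_for_input_type("hello@example.com", "email")
```

A pre-check like this does not fix the test, but it turns a confusing “form never submitted” failure into an explicit “recorded value no longer matches the field type” error.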
### Cause 3: Page Reloading
I can answer this with a straightforward technical solution. The problem exists because recorders do not understand that the browser is navigating to a whole different page. The solution would be to listen for when the browser is about to jump ship and act accordingly. I think Selenium IDE fixes this issue by using “clickAndWait” instead of just “click” when it knows the browser is navigating to a different page.
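A minimal sketch of the “clickAndWait” idea: trigger the action, then poll until a page-load predicate succeeds or a timeout expires, instead of blindly interacting with a page that is mid-navigation. The `click` and `page_loaded` callables here are a hypothetical interface of mine, not Selenium's actual API:

```python
import time

def click_and_wait(click, page_loaded, timeout=10.0, poll=0.1):
    """Perform an action that triggers navigation, then poll until the
    new page reports loaded. `click` and `page_loaded` are callables
    supplied by the caller (hypothetical interface, not Selenium's)."""
    click()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if page_loaded():
            return True
        time.sleep(poll)
    raise TimeoutError("page did not finish loading in time")

# Simulated browser: the page becomes ready right after the click.
state = {"ready": False}

def fake_click():
    state["ready"] = True  # in a real browser this happens asynchronously

assert click_and_wait(fake_click, lambda: state["ready"], timeout=1.0)
```

In real Selenium you would express `page_loaded` as an explicit wait on some condition of the new page (for example, a known element becoming present), which is exactly what “clickAndWait” bakes in.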
### Cause 4: User Session Times
This issue is directly related to a misuse of implicit waits. Misconfigured timeouts can be a timesink to debug, because the resulting failures are intermittent. If your recorded tests do this, it may be time to jump ship to another provider.
### Cause 5: Popup Boxes
Recorded scripts can fail because popup boxes can appear (or not appear). Once again, I think this is a case of not communicating with feature developers to make sure that this problem does not occur. It used to be that resolving popups (accepting them) was difficult, but advancements in WebDriver, and browsers shipping better drivers, have eased the technical problem at hand.
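The WebDriver-era fix boils down to “accept the alert if it's there, carry on if it isn't.” The sketch below shows that pattern with stand-in callables of my own invention (in real Selenium you would use the alert-handling API around `driver.switch_to.alert`):

```python
def run_step(step, get_alert, accept_alert):
    """Run one scripted step; if a popup (alert) is present afterwards,
    accept it so the next step isn't blocked. `get_alert` returns the
    current alert or None (stand-in for WebDriver's alert API)."""
    step()
    alert = get_alert()
    if alert is not None:
        accept_alert(alert)
        return True   # a popup appeared and was handled
    return False      # no popup; nothing to do

# Simulation: the first step raises a popup, the second does not.
pending = []

def noisy_step():
    pending.append("Are you sure?")

handled = run_step(noisy_step,
                   lambda: pending[-1] if pending else None,
                   lambda a: pending.clear())
assert handled and not pending
assert run_step(lambda: None, lambda: None, lambda a: None) is False
```

Because the wrapper tolerates both outcomes, the script no longer breaks when a developer adds or removes a confirmation dialog.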
In essence, automating a browser has two functions: where you want to act and what you want to do. More often than not, the fact that automation scripts break when the web application changes in the smallest of ways makes automation less fruitful work.
I would speculate that the root of these technical issues lies in the communication (or lack thereof) between the developer and the tester. The issue lies in part with the tester assuming that the feature was stable. It also lies with the developer going in and changing it because they thought it wouldn’t break the testing suite (or just didn’t care). There are two main solutions to this problem:
- Continuous integration. Get the developer to put your automated tests in their continuous integration flow. That way, if there are any breaking changes to the automation suite, the developer will receive immediate feedback.
- Don’t write it. Pretty simple, huh? But it might just be that the application isn’t really ready for automation testing yet. Automation suites are a way to harden the software, so that even minor changes become significant and may halt development.