Pairing with a dev to create a utility to test translations

In another example of how automation is not just Selenium (or just unit testing or just…), I had a really great interaction with some developers on my team the other week while we were introducing a translations file for the first time in our greenfield app, and I wanted to recap it in case it helps others.

Disclaimer: it was arguably too late to be doing this at 6 months into our project. There is a lot more to translations (and even more to internationalisation), but I have yet to find a good reason not to at least extract all strings to a resource file of some sort from the beginning. Try to get that practice included ASAP, since it takes a lot of development AND testing effort to clean up random hard-coded strings later on.

But whether or not you were quicker to the resource file than we were, many of us have to test translations, and that is really what this is about.

One day we went to finalise analysis and kick off a story which was meant to make our app translation-friendly. As we dug into what that meant, we realised that the hidden costs were in:

  1. the number of pages and states that would need to be tested for full coverage of all strings
  2. exactly how translations were going to work moving forward

Because of this we chose to immediately split the story and worry about one thing at a time. First things first: let’s get the structure of a single translation understood, working well, and tested well. The stories became:

  • pick one static string that is easily accessible and translate it
  • translate all strings on page 1
  • translate all strings on page 2
  • translate all strings on page 3

We decided that it was not necessary to split any more finely than by page to start, but agreed we would reassess before playing those stories.

Now it is time to dig into that idea of understanding, implementing, and testing a single string…

This was inherently going to be a bit of an exploration story, so we were working in abstracts during the kickoff. That being said, there were plenty of good questions to ask from the testing perspective. I shared my experience from a previous project where we set up a couple of tools to assist testing, and I was hoping for the same tools here. The idea was to be able to create a “test” resource file (just as you may have a FR-fr or EN-gb language file) which could be kept up to date with the evolving resource file and loaded as the language of choice easily. We also spoke about looking for more automated ways to test, but regardless of unit or integration test coverage, this tool was necessary for exploratory testing moving forward as more enhancements (and therefore more strings to translate) were added to the app. The devs seemed keen on the idea, so I went on my merry way right into a 3 day vacation.

I came back rested and revived and so excited to see that story in the “done” column! But wait…there in the “to do” column was a glaring card “test translations”. skreeeeeech! What is that? Since when does our team not build in testing?! Obviously I was due for an update.

I spoke with one of the developers from the original kickoff, first congratulated her on the story being done, and then asked how it went. She explained to me that the framework for translations already existed in other apps and we were asked to follow the pattern. Because of this, implementation was pretty easy, but understanding was still quite limited and testing was viewed as “not necessary” since it is the same as all other teams. Whew, ok, a lot to unpack…

  1. “We have always done it that way” is not only the most dangerous phrase in the English language, but absolutely the most dangerous phrase on a quality software team. I raised some questions about how the framework would impact our team directly (maintenance, performance, etc.) and we quickly came to the conclusion that keeping it as a black box wasn’t going to work for us.
  2. Let’s define testing. No no no, not in another ranting way, in this context only. The developer was thinking regression testing, I was thinking future exploratory testing. They both needed to be discussed!

As we dug into the framework we adopted, I was brought up to speed on why putting in automated regression testing was probably not worth the effort in the short term. We moved on to the exploratory side. Translation strings are not something that will stay static. New strings will be added all the time, so we need a way to make sure that new effort can be validated. With very little explanation, this became clear to the developer and we put our heads together to come up with a solution given the somewhat convoluted translations framework. Within about 10 minutes we had devised a plan which would take less than an hour to code and would provide me a two-command process to load up a test translations file. Success!

So what does this look like?

create_test_translations.bash – Makes a copy of the base translations file, names it TE-te, and adds “TRANSLATED–” to the beginning of all strings.

load_translations.bash – This was a bit of a workaround for our framework and dealt with restarting the app in the required way to pick up the new test translations file.
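To make the idea concrete, here is a minimal sketch of what the first script could look like. The paths and the flat JSON key/string format are stand-ins, not our client’s actual setup — adapt the sed pattern to whatever your resource file looks like:

```shell
#!/usr/bin/env bash
# Rough sketch of create_test_translations.bash; file layout is an assumption.
set -euo pipefail

create_test_translations() {
  local base="$1"                            # e.g. src/locales/EN-gb.json
  local test_file
  test_file="$(dirname "$base")/TE-te.json"  # the generated "test" language file

  cp "$base" "$test_file"
  # Prefix every string value with "TRANSLATED-" so untranslated UI text
  # stands out at a glance.
  sed -i.bak -E 's/(:[[:space:]]*")/\1TRANSLATED-/g' "$test_file"
  rm -f "$test_file.bak"
  echo "$test_file"
}
```

With a resource file containing `"greeting": "Hello"`, the generated TE-te.json would contain `"greeting": "TRANSLATED-Hello"`, which is all we needed for spotting missed strings.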

And to clean up after?

git checkout

Definitely not fancy. Definitely not enough forever, but for now it met our cost/value ratio needs. I unfortunately can’t show the client site, so instead I am going to use Troy Hunt’s amazing site HackYourselfFirst so that you can get the gist. Hopefully you can see the “untranslated” strings a bit more easily as well.

First, the site translated to Dutch…

[Screenshot: Troy Hunt’s HackYourselfFirst site translated to Dutch]

 

Did you spot the 4 places that were not translated? Did you think you spotted more than 4? Notice that some words intentionally stay the same (proper nouns, etc.) while others should have been translated but weren’t.

Now for the site to be translated to my “test” language…

[Screenshot: Troy Hunt’s HackYourselfFirst site with translations being applied to _most_ pieces of text]

 

This time I could quickly tell that the vote counter was not translated, as well as the “Top manufacturers” text. At least for me, this made it A LOT easier to cut down on both false positives (thinking Lexus or McLaren should have been translated) and false negatives (eyes skipping past the word “votes”).

Translations can be a tricky part of testing a global website, since most of us do not speak every language that the site will be displayed in. But there are certain heuristics that we can use to at least combat the most outrageous issues. Take a look at how strings of different lengths will render; look at how we handle new strings, deleted strings, changed strings. Etc etc etc.

As I mentioned at the beginning, there is A LOT more to internationalisation, but this was a way for our team to spend a little less time combing through text updates.

Bookmarklets/Scriptlets for data entry

Getting back on the horse with a quick post thanks to Mark Winteringham’s suggestion. We are working together on an API testing workshop for CASTx17 and it has been a lot of fun. We are trying to showcase the need to marry the technical and business needs when discussing APIs, and each of us comes to that conversation with a vision on both sides, but he is stronger on the tech implementation side and I am often more focused on the business/usability aspects. This has meant that I am taking the lead on creating the user stories associated with our workshop with his verification of them, and he is taking the lead on the API creation while I then test that implementation.

One of the user stories has to do with the option to introduce pagination for high volumes of data. This has meant that I need to create a number of entries, and that can be extremely time consuming. One way I can improve on this is to create a Postman script to seed the data. This is extremely helpful when data setup is the key. But at the same time, it means you are only testing the API and not the GUI as well. Therefore, to see how the JavaScript reacts to the data, I also like to create small bookmarklets.
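For the seeding side, the idea boils down to posting a batch of distinct entries, enough to push past the page size. The endpoint and payload fields below are invented for illustration — shape them to your own API:

```javascript
// Build `count` distinct booking payloads so that pagination kicks in.
// The field names and the /booking endpoint are assumptions.
function buildBookings(count) {
  return Array.from({ length: count }, (_, i) => ({
    firstname: `Test${i + 1}`,
    lastname: "Tester",
    totalprice: 100 + i,
    depositpaid: true,
  }));
}

// POST each booking; 11+ entries should be enough to force a second page.
async function seedBookings(baseUrl, count) {
  for (const booking of buildBookings(count)) {
    await fetch(`${baseUrl}/booking`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(booking),
    });
  }
}
```

Something like `seedBookings("http://localhost:3000", 11)` (URL again hypothetical) sets the data up in seconds, but it exercises only the API, which is exactly why the bookmarklet below-the-GUI approach still matters.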

To build a bookmarklet, I muck around in the Chrome Developer Tools to figure out how to set a field’s value using jQuery. This is very similar to coding with Selenium, but you will definitely need to get used to the syntax. Depending on your comfort level with Dev Tools and selectors you may jump past these first few steps, but I have documented them all in images with captions below.

Opening dev tools by right clicking within chrome and selecting "inspect"
Step 1 – Open Chrome Dev Tools from your working web page. You can use keyboard shortcuts and/or dropdown menu options, but I find it most helpful to right click on the element I need the ID of anyway, since it will then open up focused on the place in the HTML that I need.
Copying an HTML element's selector within Chrome Dev Tools by right clicking and selecting Copy > Copy selector
Step 2 – Find the element in the HTML and identify its selector. If you are unsure how to do this, a quick way to learn more about selectors is to use the tools to create them.
Using the console tab of Chrome Dev Tools to run JQuery that finds an element and then clicks on it.
Step 3 – Move to the Console tab of Dev Tools and use jQuery to confirm the selector has found the one (and only the one) element you are interested in. Then experiment until you are able to take the correct action on it. In this case, I needed to click on the link to open a modal.
Finding the ID for the Firstname field using Chrome Dev Tools
Step 4 – Continue finding elements that you will need to complete the task. In this case, I need to also add text to the Firstname field.
Adding a text value to an input field using JQuery in Chrome Dev Tools
Step 5 – An example of how to add a text value to a field using jQuery. The .val() method used here does not work on all field types, so use the Console and Google searches to find out how to fill in each field you need to.
A completed JQuery command that opens, fills in, and submits a form.
Step 6 – String together all your commands by putting semicolons (;) at the end of each one.
Selecting to Add Page and create a new bookmark in Chrome
Step 7 – Create a new bookmark. In this case, I right clicked on my bookmark bar and selected to “Add Page…”
Creating a bookmark by adding "javascript:" to the beginning of the script and placing it into the URL field
Step 8 – Name your bookmark and paste the complete script into the URL field. Be sure to add “javascript: ” to the beginning so that Chrome knows to execute the script rather than open a webpage.
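Putting steps 6–8 together, the finished bookmarklet is just the jQuery commands joined with semicolons and prefixed with “javascript:”. The selectors below are made-up examples — substitute the ones you copied in step 2:

```javascript
// Each command here is hypothetical; replace the selectors with your own.
const steps = [
  '$("#addBooking").click()',     // Step 3: open the modal
  '$("#firstname").val("Test")',  // Step 5: fill in the Firstname field
  '$("#lastname").val("Tester")',
  '$("#submitBooking").click()',  // finally, submit the form
];

// Step 6: join with semicolons; Step 8: prefix with "javascript:"
// so Chrome executes the script instead of treating it as a URL.
const bookmarklet = "javascript:" + steps.join(";") + ";";
console.log(bookmarklet);
```

Paste the printed string straight into the bookmark’s URL field; note this assumes the page itself already loads jQuery, as ours did.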

 

I find that I create these little bookmarklets a lot on my teams. In this case, I found a defect that using Postman for data setup would not have exposed. It turns out that when you create the 11th booking through the GUI (the first one that would force a 2nd page), that second page is not created; instead everything is shown as one big list of bookings until the page is refreshed.

In other cases, I have used this tool (with a fairly complex script) to replace text on the page with long strings. This was a quick way for me to evaluate the responsiveness of the text. This was valuable given that we had a high percentage of user entered data as well as translations into many languages so we had to be prepared for different length strings.

Hope this helps someone stop entering that same ol’ data into the form just to get started on the more interesting exploring!

#TW30DaysOfTesting – Day 29: Nice to not be in an echo chamber

This challenge may not make sense to most people reading this blog, so let me start by explaining it a bit. My.ThoughtWorks is an internal communication tool where we have the opportunity to create content, share ideas, and get exposed to a diverse set of opinions on topics ranging from low level machine learning to design patterns and current human rights causes from around the world. From what I hear, many companies that reach a certain size start this type of communication forum and it provides great value, and my.TW is no different. However, in creating the 30 days of challenges, I was thinking about how much we consume each other’s outputs, but how much less frequently we provide feedback on (or extensions of) them. Some of this is time or interest, but I know (from personal experience) that some of this is also from how daunting it can be to respond to a post when there are over 4,000 extremely smart and outspoken individuals on the other end who may be drawn into the debate.

While I am an active member of the testing and women in tech channels on my.TW, I am more hesitant to write on the infamous software development channel (arguably the most useful and active channel) or some other more highly specific channels like our security channel. Already in this 30 days I have posted to Software Development about collating a list of podcasts and that has received tremendous support and been extended to so many new and interesting ideas which encourages me to engage even more.

So today, I have been inspired by a post in the security channel about how to engage the business in cyber security which points to a very interesting article in HBR entitled “How Cybersecurity Teams Can Convince the C-Suite of Their Value“. The post is from 4 days ago and has a single reply so far which also points to threat modeling with executive sponsors as a good way to extend interest in cyber security. Below is my reply to the thread:

Thank you for sharing this article. I find it really interesting to see a case study of how other companies are tackling these challenges. And while I agree with and support our focus on threat modeling as a way to inform businesses of their risks in the security space, I think that this article is pointing at some really new and innovative ideas that we could also look to trial.

In particular I really liked the proactive approaches that the Facebook team took. They remind me of my interest in creating “safety nets” for future defects instead of just wrapping tests around current code. An example of this for me is when we are doing endpoint testing. Just putting in tests for each endpoint requires us to remember to add the test. Instead, I look for ways to scan for endpoints, so when a developer is creating a new one, that is one less thing they need to keep in mind. Another example would be teams that run security scans in their build looking for vulnerable dependencies and other risks. Can we try and cultivate other examples of where we are doing work like this?

And at the risk of dividing this thread, I want to also raise a point we discussed at the last TWASP meeting. As a QA I am often in a position to raise these risks, and often feel they are not well received because they are viewed as abstract. One request I made to the room, and now here, is whether we could create something more impactful to showcase to clients when we find a vulnerability. For example, when I find reflected JS injection risks, I tend to showcase them with a popup. This is proof the risk exists, but in my experience it has been met with a bit of an eye roll and acceptance that if someone wants to create a pop up on their own computer…fine…let ’em. Also during the TWASP meeting I was shown just how powerful reflected attacks can be, but I am unsure if I could recreate it. Do we have (or could we create) a repository that could showcase really business-critical examples for the OWASP Top 10 when they are identified on a program?

#TW30DaysOfTesting – Day 28: If you don’t have anything nice to say…you probably aren’t looking hard enough

Oi. This has come on quite a day! But as I mention in the title, there is almost always a shining spot and it is worth sharing that encouragement with people as they struggle through the challenges.

So today’s task…test a new integration point which has taken almost 3 weeks to wire up.

I am feeling fairly stuck in the mode of “trying to make it work” rather than really trying to challenge assumptions and push on boundaries. That is never a place I like to be in, and it is probably a blog post in its own right. But suffice to say, there is a lot to be upset with today while testing this story.

But!

The developers were brilliant while producing this story 🙂 They knew that the data setup would be a struggle, and we had talked about how that would need to be improved before story sign off. Thankfully, they not only agreed with that need, but really gave it a great effort. They created (and checked into source control) some Postman scripts. Not only that, they thought to parameterise them and use pre-request scripts so that shared variables only need to be set up once.

No. This is not an ideal story slicing. No. This is not an ideal implementation. And no. This is not a warm fuzzy feeling as we move towards completion. Lucky for me, I have a team where I can pass on my sincere thanks and praise for how much they value building a quality culture on our team. I know that with that type of attention, we can sort through the challenges and create a fantastic product!

#TW30DaysOfTesting – Day 27: Long time no see

This is one of my favourite of the challenges for this 30 days. I find there to be such great camaraderie built when I am on a project with amazing people, especially when we are on the road and trying to build a bit of a life in a new city, but then all of a sudden you separate! The relationships I have built have translated into wedding invites, group vacations, and lots and lots of work related debates and support.

Today I decided to reach out to an old friend that I actually first met in relation to the new hire program at ThoughtWorks. She was a graduate hire not long after I had been through the program and we started to chat about that and about life after returning from the training. Since then I have been lucky enough to be staffed near her (yay social events!) and even on her same project (though still not on the same team).

Aside from just wanting to say hi to an old friend, the reason I chose to reach out and say hi to Aubry was because she is on the project I left when I moved to the UK. It was a big engagement and had so much potential but also had a very long timeline. They are currently in a push for go live and I am super excited to hear how things are going.

Other reasons I find myself reaching out to old teammates…

  • Quick hellos
  • Recon on what they are working on and if they need a new teammate (wink wink)
  • And of course the other way to see if they would want to join my current team
  • Questions about old experiences I want to use as case studies in my current project
  • Opportunities to speak at new conferences (oh heyyyy James Spargo / TestBash Philly!)
  • Congratulations for recent go-lives or other accomplishments
  • So many more…

So I hope you all take the time to say hi to a past friend as well, it is truly a wealth of support and learning that I try to take advantage of as much as possible.

#TW30DaysOfTesting – Day 26: Getting comfortable with mindmaps

Mindmaps are a tool which I have heard about and dabbled in for at least a couple of years now. While I have always wanted to see them work for me, I just haven’t quite found them to be natural within my workflow. When I use a tool to create them I feel bogged down by how it is structured or by non-intuitive shortcut keys. When I do them by hand I have a mix of issues with my handwriting and with not being able to erase or edit once it’s been written. That being said, I have always found them to be valuable and interesting as a thinking tool. I think this is why I thought to use them again more recently on my current project.

In my current project we are “adopting and adapting” a product from another region. One large module we needed to adopt was the file storage system for a user’s documents. This includes functionality like saving/deleting/copying/moving as you would assume. But when you dig in deeper there is creating new folders, restoring deleted files, renaming files, adding notes, and the list goes on.

Once the developers wired the whole module up there were a lot of questions around whether that meant the feature worked. Being my usual cautious self, I decided to list out the risks and in turn the testing options to verify them. I laid them out a bit like the three bears, and as you might assume the business picked the medium route. The more intense route required data/user permissions to be clarified significantly more, and also included development work as it would have wrapped automation around the current behaviours. The smaller route was really just a gut check and may have caught the big things, but relied on end of lifecycle certification testing to catch anything more hidden. The middle route meant that I would dig into every nook and cranny we could identify, but with only a single user permission and a priority set of 3 document types. This clearly is not 100% coverage, but it seemed like the right cost/value call for the team.

While the product owner had identified the features they could as user stories, I was not familiar with the original feature and wanted to dig in more. I decided to explore the original team’s site and see if there were any more features we should be testing. During this exploration, I created a mind map with Freemind. After a couple days I found that my perceived issues with the tool fell away and I all of a sudden felt like it sped up my work significantly. I could link externally to the user stories I was creating to track my work and I could also link internally within the map so that I did not duplicate nodes when it was the same toolbar on multiple pages.

In the end, I colour coded the mind map to indicate what the original product owner’s stories covered, what the test cases we dug up from the other team covered, and where I identified additional work to be done. This set me up to have a really frank conversation with the business about the cost and time frames for the testing, as well as to visualise why, even with the middle route of testing, there is significant risk given the size of the feature.

[Mindmap: test coverage for a folders feature in our application. Red indicates the areas totally uncovered (approx. 1/3 of the map).]

Since then I have been creating smaller mindmaps for each feature as I test it. I use the mindmap to write down my targets for the testing and some different data cases I may need to create. I then identify any behaviours I notice along with screenshots and annotations of them so that we can review and decide which of them are defects in our adoption, global defects across products, or enhancements that could be applied globally.

This has proven to be an amazing way to showcase the testing effort and everyone has really loved having more of a discussion around testing instead of the historical way which included the business signing off test cases before they could be run. I am now actively looking for other ways to use mindmaps effectively and I am excited to next create a mindmap which describes my current responsibilities and activities so that as I onboard a new QA/Scrum Master it can drive our conversations.