Using cURL and jq to work with Trello data

As mentioned in this blog post, we are using Trello to handle our data at SpeakEasy instead of the Google Sheets we used previously. There are a lot of pros to this, but one con is that the data is just not as accessible as when you could sort lists etc. in Google Sheets. I decided to make that better by creating a few scripts in Python, leveraging just a basic cURL command and the jq utility.

Todo list:

  1. Access the Trello API from the command line
  2. Identify first use case for API call from command line
  3. Create script for first use case

1. Access the Trello API from the command line

First things first, I want to make sure I can access the API with a cURL command with the right auth etc. To do this I leveraged my old Postman scripts and stole the cURL command from there. If you haven't seen it before, Postman has a bunch of easy-to-export formats if you click the Code link just under the Save button.

How to open the code snippet window in Postman and select the cURL export type.

Once I copied this to my clipboard I was able to run it in my command line and get the same results as in Postman. Realistically this isn't super helpful on its own, because if anything it is harder to read. The power really comes once I start using the jq utility. So now I need to install jq, which I am able to do easily through brew install jq. This means that I can now "pipe" the response into jq and see the JSON formatted, as well as start manipulating it. So first things first, below you can see the difference in the cURL response before jq and after being piped to jq.

curl -X GET 'https://api.trello.com/1/board/######/lists?key=######&token=######' | jq '.'
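As a first taste of that manipulation, a jq filter can pull out just the pieces you care about. For example, the same call (placeholder IDs and all) with a filter that prints only each list's name:

curl -s 'https://api.trello.com/1/board/######/lists?key=######&token=######' | jq '.[].name'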

2. Identify first use case for API call from command line

So why do I want to do this anyway? The first thing I thought about is how we create a mailing list from our mentors. Previously it was a column in our Google Sheets doc. Now it is spread out across cards. I want to collect the email addresses from each of the cards into a single list.

So let’s get to work!

I actually needed to improve the API call more than the jq filtering. In this case I was able to ask the Trello API for only the fields I cared about (name and desc, which includes the email).

curl -X GET 'https://api.trello.com/1/board/######/cards?lists=open&checklists=all&fields=name,desc&key=######&token=######' | jq '.'
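To get a step closer to a mailing list, jq's object construction can then reduce each card down to just those two fields. A sketch against the same call:

curl -s 'https://api.trello.com/1/board/######/cards?lists=open&fields=name,desc&key=######&token=######' | jq '.[] | {name, desc}'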

 

So here is where I bet you have already spotted my mistake, and I have to fully admit that I am writing this live as I go and have just realised a showstopper! The whole point of this exercise was to access the email addresses more easily. The kicker, though, is that while we have email addresses in the cards, they are in the body of the description, and therefore I would need to not only do all this funky fun with jq, but also use some funky regex to parse them out. And that, my friends, is where I will be drawing the line.

So instead of working harder, I decided to work smarter and figure out how to add more useful data fields to the cards. Turns out there is a Power-Up for that called "Custom Fields" which has allowed me to add an email text field to all cards. This comes with some docs for the API as well.

My Mentor Trello card with the custom email field.

So now, when I ask the Trello API for fields, I can ask for customFieldItems instead of the desc field.

curl -X GET 'https://api.trello.com/1/board/######/cards?lists=open&fields=name&customFieldItems=true&key=######&token=######' | jq '.[] | select(.name == "MENTOR: Abby Bangser (EU)").customFieldItems[].value[]'
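And to pull every mentor's email in one go rather than selecting a single card, something like the sketch below should work, assuming the email text field is the only custom field on each card (an assumption worth checking against your own board):

curl -s 'https://api.trello.com/1/board/######/cards?lists=open&fields=name&customFieldItems=true&key=######&token=######' | jq -r '.[].customFieldItems[].value[]'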

Wrap up

So given I have just created myself a new todo list for this activity:

  1. Access the Trello API from the command line
  2. Identify first use case for API call from command line
  3. Introduce email fields for all Mentors
  4. Create script for first use case

I am going to call it quits for the day. The next blog will need to dive into moving this proof of concept into a Python script 🙂

In the meantime… is jq interesting? Want more details in a blog, or is it just a utility I should show usages of?

 


First things first…AWS training wheels

So I am getting started on a first website implementation tomorrow. I have worked on enough early cloud projects at this point to know there are some basic housekeeping things I need to get in order before using my AWS account for, well, anything.

The two areas that I have found most important are basic user auth and account monitoring (which includes billing awareness), so I am going to focus on those two for tonight. One thing to note: I am not going to go through the step-by-step of how to do these, as they are very well documented. I am happy to share specific links if you would like a place to start, but given the expectations set in this post, a Google search will provide tons of starting points.

Basic IAM

IAM is the AWS service which provides both authentication and authorisation (yes, I cheated and used the ambiguous term "auth" earlier). Given IAM is a security service, there are only a couple of things which you need to tune, and most of them are provided as a checklist right on the landing page for the service.

The basic expectations of an AWS account's IAM setup: no root access keys, MFA, individual logins, and a password policy.

 

Access keys

The one you may be least familiar with, if you are not an AWS user, is the "Delete your root access keys" request. Access keys are the credentials used to sign command line requests to AWS. Your root user is an unbounded power user who cannot be restricted from deleting/changing/creating things at will. If this gets compromised it could be a problem, but also most people like to protect themselves from themselves, and this is a perfect example of when to do that.

So you may ask, if you can't use the root user account that you have created, what can you use? That is when you create a user with only the permissions you need to get the job done. Don't worry, if your "job" expands over time you can always log back in as root and expand your access, but it will require that extra bit of thought, which can be good. In my case I have given pretty much complete admin access to S3, CloudFront, CodeBuild, and Route53.
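If you prefer the command line over the console for this, a rough sketch of that user setup with the AWS CLI might look like the following (the user name is made up, and the managed policy ARNs are worth double-checking against what you actually need):

aws iam create-user --user-name site-admin
aws iam attach-user-policy --user-name site-admin --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name site-admin --policy-arn arn:aws:iam::aws:policy/CloudFrontFullAccess
aws iam attach-user-policy --user-name site-admin --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess
aws iam attach-user-policy --user-name site-admin --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess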

MFA

The second item on the list is about MFA (Multi-Factor Authentication) being set up on your root user. This is great, but I would highly suggest that you get this set up on ALL users. Unfortunately this is not set by default and cannot be. However, one thing you can do before switching from the root account is to create a policy which requires MFA before a user can do anything of use, and apply this to all users. I used this tutorial to create the policy (https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_users-self-manage-mfa-and-creds.html).
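The heart of the policy in that tutorial is a deny-everything-unless-MFA statement. Trimmed down to a sketch (the tutorial's real version lists the specific IAM actions a user needs to manage their own MFA device, rather than the broad iam:* used here for brevity):

{
  "Effect": "Deny",
  "NotAction": ["iam:*", "sts:GetSessionToken"],
  "Resource": "*",
  "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } }
}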

Account monitoring

Based on the AWS docs (https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#keep-a-log) I wanted to confirm I had some basic level of account awareness. The ones applicable to a completely new account, I figured, were CloudTrail, CloudWatch, and AWS Config.

 

CloudTrail

This is a very high level access log which records create, modify, and delete actions in the account. There are of course concerns around reads, but that is just not what CloudTrail is used for. The good part is that CloudTrail is set up by default with the past 90 days of events for free. Between the limited data retention and the difficulty of actually using the massive amount of data CloudTrail generates, this is realistically something you need to enhance given a larger footprint in AWS. But for now I am happy with the defaults.

CloudWatch

CloudWatch is a more fine-grained method of tracking services and, unlike CloudTrail, requires configuration. The only thing I am initially concerned with is being a bit of a dummy on costs, so this is where I am going to set up my billing alert. Thankfully this is another setup activity that is so well accepted that it has clear documentation, which I followed (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html).
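For reference, the same alarm can be sketched out from the CLI as below (the $10 threshold and SNS topic are stand-ins, and note that the billing metrics only live in us-east-1):

aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-alert \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 10 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:############:billing-alerts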

AWS Config

Kinda wish I had looked this tool up before, if I am honest. It appears to be an easy framework to create the "detective checks" I discussed at a previous engagement. What these detective checks are meant to do is evaluate the ongoing compliance of your account, given that changes can be made by many different people and processes over time. At this point I have just clicked through and used an existing rule, "iam-user-no-policies-check", which actually did catch some issues with an old user on my account. I hope to create a new rule around the MFA requirement, but that involves a bit more work with lambdas etc., so that will have to wait for another day and another blog post.
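For completeness, enabling that same managed rule from the CLI (assuming the Config recorder is already turned on) looks something like:

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "iam-user-no-policies-check",
  "Source": { "Owner": "AWS", "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK" }
}'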

Wrap up

So where are we now? Honestly, not that far. But the bare minimum setup is complete, and I should now be able to use this account with basic assurances around access and billing issues.

And what's next? I am going to work with Gojko Adzic on getting a basic website up under a new domain name. The main technologies will be Jekyll, CodeDeploy, S3, and CloudFront.

 

Kicking off #SEBehindTheScenes

Hopefully this marks the start of a new onslaught of blogging, so I figure I should set out why, as well as what is to come.

For over a year now I have been working with SpeakEasy as an admin, based on this simple call for help. When I first signed up, I had big dreams of the impact I could have. We talked about improving the service by removing some of the admin overhead. We talked about generating more data on how we were doing on matching and, more importantly, how our applicants were getting on as speakers! We talked about how to support our mentors better so as to make that a great experience as well. Unfortunately I underestimated 1) my life circumstances and 2) the amount of effort Fiona and Anne-Marie had been putting in to "keep the lights on".

So what have I done? I have moved us from a Google Docs admin experience to a Trello one, and I believe we have a couple of good outcomes (and areas for growth) from this move.

We now have visibility of the journey from original sign-up to match. It will need some work, but it has already quantified a fear we had around the drop-off rate from signing up on the site to getting a mentor match. We now know that over the last year 40% of applicants never got to the point of confirming their interest via email. Hmmm, that sounds like a barrier we may want to lower.

We also now have a process which allows us to track who has been contacted, for what reason, and at what time. It still has a lot of manual process around moving columns, adding comments, etc. It also still has a lot of limitations, since the comments are hand written and may limit our ability to search across applicant and mentor experiences for better data in the future. But it is a start, and given that I am handing over "head matching admin" to Kristine Corbus after a year of driving the process myself, I have the utmost faith this process will be put through its paces and improved!

So what am I going to be doing, if not matching? A couple of things. First of all, Anne-Marie and I have been through the cycle a few times of being excited about all the changes we could drive for SE, losing momentum because of the other things going on and the size of the task ahead, and then restarting. We have refocused and are planning on identifying the microtasks that can make a difference and hopefully start a push.

I will also, hopefully, be putting some of my goals around pipelines, observable systems, and testing in production to the test by putting up a new website for SE, where we can start to incorporate some of the new changes we want to see. That is really the key to this hashtag. It is a promise I have made to Anne-Marie to broadcast the learnings I have as I take on the task of re-platforming and then operating the SpeakEasy website.

Step 1? A call for help on Twitter:

 

Step 2? A pairing session with the one and only Gojko Adzic next week, which I plan to blog about as well 🙂

Fun with google form scripting

Sometimes doing the not most important thing on your todo list is so fun. That was the case today.

I have started to work with a lot of really amazing people to support SpeakEasy as a place for new voices to be heard at testing conferences. Turns out it is a lot of work! It is so impressive to me that Anne-Marie and Fiona dreamed this up, actually got it running, and have made such a sustained impact on the community over the last few years. I personally am a graduate of the program, have continued on as a mentor, and am now a volunteer. As a group of new volunteers, we were given the open invitation from Anne-Marie and Fiona to make SpeakEasy our own. They know that the need to keep it running took its toll on some of the processes, and that with some extra volunteer power we may be able to address some of the underlying processes.

One process I was particularly interested in was how we were alerted about new mentees, and how we would then take them through the process from new sign-up to being mentored.

Oh, the glory of Google Sheets! We added three columns to the sheet that gets created from the Google form on the website. One was used to sign up for the job of getting that person matched, one was for tracking your progress on that journey, and one was for confirming the final status of that new sign-up. As you can imagine this gets messy, and it also limits the value we can get from data like how long it takes for someone to get matched.

An example row from the “to be mentored” sheet.

 

In comes the power of a small script. I have known about the ability to script Google apps for a while now but never played with it myself. This felt like the perfect opportunity. What I really wanted was for a new mentee to show up in a management tool for tracking and visibility, in our case Trello. But let's start with how to get a Google form to take action on submit.

I found this website that gave a great framework for my efforts. A couple of interesting notes to get it working, though. First of all, do make sure you update all of the fields they suggest so that they match your form. But even if you do that, if you choose "Run > onFormSubmit" from the toolbar you will get an error like this:

TypeError: Cannot read property "response" from undefined. (line 24, file "Code")
This is the error when trying to run onFormSubmit from inside the Google scripting tool.

 

Basically it is telling you that because no form has been submitted, you cannot run this command. Shucks. I really wanted to test this before going live. Good thing I attached it to a play form, so I just went and did that! Submitted the form and waited for my glorious Trello card to show up.

Wahwahhhhh, turns out that there are some typos in the script. It took me a while to sort them out, but using the "notification" option on the trigger to email me immediately helped a lot with debugging. In particular, you will need to look at SendEmail() and update the "title" and "message" variables to "trelloTitle" and "trelloDescription" for the form to work.
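For a sense of the overall shape, the working script boils down to an onFormSubmit function that turns the form response into a Trello card via the API. This is a minimal sketch rather than the tutorial's exact code, with placeholder Trello IDs:

function onFormSubmit(e) {
  // Collect the submitted answers from the form response.
  var answers = e.response.getItemResponses();
  // First answer becomes the card title; everything goes into the description.
  var trelloTitle = answers[0].getResponse();
  var trelloDescription = answers.map(function(a) {
    return a.getItem().getTitle() + ': ' + a.getResponse();
  }).join('\n');
  // Create the card on the target list (placeholder list ID, key, and token).
  var url = 'https://api.trello.com/1/cards'
    + '?idList=######&key=######&token=######'
    + '&name=' + encodeURIComponent(trelloTitle)
    + '&desc=' + encodeURIComponent(trelloDescription);
  UrlFetchApp.fetch(url, { method: 'post' });
}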

Voila! Now on submit I have a new line in my response sheet as well as a new card in Trello.

A new Trello card for a mentee to match, with the details from the entry form on the SpeakEasy website.

But honestly, the more exciting thing is the power of collaboration that is unlocked now. We now have timestamps for the activities that we perform, we can see at a glance how much any one person has in progress, and we can track where bottlenecks may occur during the process.

A filled-in Trello card with dates and times of activities from new application through to being introduced to a mentor – a possible example process of getting Jane Doe matched.

 

Obviously we as a team need to sort out our own process with this tool, but the opportunities for collaboration are so much greater. Looking forward to seeing how it progresses!

Pairing with a dev to create a utility to test translations

In another example of how automation is not just Selenium (or just unit testing or just…), I had a really great interaction with some developers on my team the other week while we were introducing a translations file for the first time in our greenfield app, and I wanted to recap it in case it helps others.

Disclaimer: it was arguably too late to be doing this at 6 months into our project. There is a lot more to translations (and even more to internationalisation), but I have yet to find a good reason not to at least extract all strings to a resource file of some sort from the beginning. Try to get that practice included ASAP, since it takes a lot of development AND testing effort to clean up random hard-coded strings later on.

But whether you were quicker to the resource file than us or not, many of us have to test translations, and that is really what this is about.

One day we went to finalise analysis and kick off a story which was meant to make our app translation friendly. As we dug into what that meant, we realised that the hidden costs were in:

  1. the number of pages and states that would need to be tested for full coverage of all strings
  2. exactly how translations were going to work moving forward

Because of this we chose to immediately split the story and worry about one thing at a time. First things first: let's get the structure of a single translation understood, working well, and tested well. The stories became:

  • pick one static string that is easily accessible and translate it
  • translate all strings on page 1
  • translate all strings on page 2
  • translate all strings on page 3

We decided that it was not necessary to split any more fine grained than by page to start, but agreed we would reassess before playing those stories.

Now it is time to dig into that idea of understanding, implementing, and testing a single string…

This was inherently going to be a bit of an exploration story, so we were working in abstracts during the kickoff. That being said, there were plenty of good questions to ask from the testing perspective. I shared my experience from a previous project where we set up a couple of tools to assist testing, and I was hoping for the same tools here. The idea was to be able to create a "test" resource file (just as you may have an FR-fr or EN-gb language file) which could be kept up to date with the evolving resource file and loaded as the language of choice easily. We also spoke about looking for more automated ways to test, but regardless of unit or integration test coverage, this tool was necessary for exploratory testing moving forward as more enhancements (and therefore more strings to translate) were added to the app. The devs seemed keen on the idea, so I went on my merry way right into a 3 day vacation.

I came back rested and revived and so excited to see that story in the "done" column! But wait… there in the "to do" column was a glaring card: "test translations". Skreeeeeech! What is that? Since when does our team not build in testing?! Obviously I was due for an update.

I spoke with one of the developers from the original kick off, first congratulated her on the story being done, and then asked how it went. She explained to me that the framework for translations already existed in other apps and we were asked to follow that pattern. Because of this, implementation was pretty easy, but understanding was still quite limited, and testing was viewed as "not necessary" since it is the same as all other teams. Whew, ok, a lot to unpack…

  1. “We have always done it that way” is not only the most dangerous phrase in the English language, but absolutely the most dangerous phrase in a quality software team. I raised some questions about how the framework would impact our team directly (maintenance, performance, etc.) and we quickly came to the conclusion that keeping it as a black box wasn't going to work for us.
  2. Let's define testing. No no no, not in another ranting way, in this context only. The developer was thinking regression testing; I was thinking future exploratory testing. They both needed to be discussed!

As we dug into the framework we adopted, I was brought up to speed on why putting in automated regression testing was probably not worth the effort in the short term. We moved on to the exploratory side. Translation strings are not something that will stay static. New strings will be added all the time, so we need a way to make sure that new effort can be validated. With very little explanation, this became clear to the developer, and we put our heads together to come up with a solution given the kind of convoluted translations framework. Within about 10 minutes we had devised a plan which would take less than an hour to code and would provide me a 2-command process to load up a test translations file. Success!

So what does this look like?

create_test_translations.bash – Makes a copy of the base translations file, names it TE-te, and adds "TRANSLATED--" to the beginning of all strings.

load_translations.bash – This was a bit of a work around for our framework and dealt with restarting the app in the required way to use the new test translations file.
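I can't share the client scripts, but as a rough sketch of the first one (assuming a flat key=value resource file; the real file names, format, and app restart mechanics were all project-specific):

#!/bin/bash
# create_test_translations.bash - copy the base resource file to a TE-te
# "language" and prefix every value with TRANSLATED-- so that untranslated
# strings stand out on the page.
cp translations/base.properties translations/TE-te.properties
sed -i.bak 's/^\([^=]*\)=\(.*\)$/\1=TRANSLATED--\2/' translations/TE-te.properties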

And to clean up after?

git checkout

Definitely not fancy. Definitely not enough forever, but for now it met our cost/value ratio needs. I unfortunately can't show the client site, so instead I am going to use Troy Hunt's amazing site HackYourselfFirst so that you can get the gist. Hopefully you can see the "untranslated" strings a bit more easily as well.

First, the site translated to Dutch…

Troy Hunt's HackYourselfFirst site translated to Dutch.

 

Did you spot the 4 places that it was not translated? Did you think you spotted more than 4? Notice that some words intentionally stay the same (proper nouns etc.) while others should have been translated but weren't.

Now for the site to be translated to my “test” language…

Troy Hunt's HackYourselfFirst site with translations applied to _most_ pieces of text.

 

This time I could quickly tell that the vote counter was not translated, as well as the "Top manufacturers" text. At least for me, this made it A LOT easier to cut down on both false positives (thinking Lexus or McLaren should have been translated) and false negatives (eyes skipping past the word votes).

Translations can be a tricky part of testing a global website, since most of us do not speak every language that the site will be displayed in. But there are certain heuristics that we can use to at least combat the most outrageous issues. Take a look at how strings of different lengths will look; look at how we handle new strings, deleted strings, changed strings. Etc etc etc.

As I mentioned at the beginning, there is A LOT more to internationalisation, but this was a way for our team to spend a little less time combing through text updates.

Bookmarklets/Scriptlets for data entry

Getting back on the horse with a quick post thanks to Mark Winteringham's suggestion. We are working together on an API testing workshop for CASTx17 and it has been a lot of fun. We are trying to showcase the need to marry the technical and business needs when discussing APIs, and each of us comes to that conversation with a vision on both sides, but he is stronger on the tech implementation side and I am often more focused on the business/usability aspects. This has meant that I am taking the lead on creating the user stories associated with our workshop, with his verification of them, and he is taking the lead on the API creation while I then test that implementation.

One of the user stories has to do with the option to introduce pagination for high volumes of data. This has meant that I need to create a number of entries, and it can be extremely time consuming. One way I can improve on this is to create a Postman script to seed the data. This is extremely helpful when data setup is the key. But at the same time, it means you are only testing the API and not the GUI as well. Therefore, to see how the JavaScript reacts to the data, I also like to create small bookmarklets.

To do this, I muck around in the Chrome Developer Tools to figure out how to set a field's value using jQuery. This is very similar to coding with Selenium, but you will definitely need to get used to the syntax. Depending on your comfort level with Dev Tools and selectors you may jump past these first few steps, but I have documented them all in images with captions below.

Opening dev tools by right clicking within chrome and selecting "inspect"
Step 1 – Open Chrome Dev Tools from your working web page. To do this, you can use keyboard shortcuts and/or dropdown menu options, but I find it most helpful to right click on the element I need the ID of anyways since it will then open up focused to the place in the HTML that I needed.
Copying an HTML element's selector within Chrome Dev Tools by right clicking and selecting Copy > Copy selector
Step 2 – Find the element in the HTML and identify its selector. If you are unsure how to do this, a quick way to learn more about selectors is to use the tools to create them.
Using the console tab of Chrome Dev Tools to run JQuery that finds an element and then clicks on it.
Step 3 – Move to the Console tab of Dev Tools and use jQuery to confirm the selector has found the one (and only the one) element you are interested in. Then experiment until you are able to take the correct action on it. In this case, I needed to click on the link to open a modal.
Finding the ID for the Firstname field using Chrome Dev Tools
Step 4 – Continue finding elements that you will need to complete the task. In this case, I need to also add text to the Firstname field.
Adding a text value to an input field using JQuery in Chrome Dev Tools
Step 5 – An example of how to add a text value to a field using jQuery. The .val() method used here does not work on all field types, so use the Console and Google searches to find out how to fill in each field you need to.
A completed JQuery command that opens, fills in, and submits a form.
Step 6 – String together all your commands by putting semicolons (;) at the end of each one.
Selecting to Add Page and create a new bookmark in Chrome
Step 7 – Create a new bookmark. In this case, I right clicked on my bookmark bar and selected to “Add Page…”
Creating a bookmark by adding "javascript:" to the beginning of the script and placing it into the URL field
Step 8 – Name your bookmark and paste the complete script into the URL field. Be sure to add “javascript: ” to the beginning so that Chrome knows to execute the script rather than open a webpage.
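Pulling those steps together, a hypothetical bookmarklet for a booking form (all of the selectors and values here are made up; yours will come out of steps 2 to 5) would end up looking something like:

javascript: $('#openBookingLink').click(); $('#firstname').val('Jane'); $('#lastname').val('Doe'); $('#totalprice').val('100'); $('#submitBooking').click();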

 

I find that I create these little bookmarklets a lot on my teams. In this case, I found a defect that using Postman for data setup would not have exposed. Turns out, when you create the 11th booking through the GUI (the first one that would force a 2nd page), that second page is not created; instead it is listed as one big list of bookings until the page is refreshed.

In other cases, I have used this tool (with a fairly complex script) to replace text on the page with long strings. This was a quick way for me to evaluate the responsiveness of the text. It was valuable given that we had a high percentage of user-entered data as well as translations into many languages, so we had to be prepared for strings of different lengths.

Hope this helps someone stop entering that same ol’ data into the form just to get started on the more interesting exploring!