Josh Male

software development blog

Angular on Rails :: First Attempt

| Comments

I recently completed a 5 month project for a small real estate company called BresicWhitney, in which we built their customer-facing website. This project was a milestone for me for two reasons: the first being that I was returning to coding after doing other things for two years, the second being that it was the first time I had put on the frontend developer hat. The last time I had coded, I was on a project where we wrote a single page web app for an insurance company using sammy.js. A bit has happened in that time, notably:

  • single page apps are more popular now, as evidenced by the variety of frameworks
  • web page designs and CSS have come a long way. The gap between what the designer produces in Photoshop and what actually gets built has become much smaller.

Anyway, I was determined to provide the best user experience I could so I decided to go the single page app route again (to avoid page refreshes and still have the URL to match the application’s state etc). I chose Angular.js after hearing good things about it from colleagues. Rails was the team’s choice for the server-side due to our collective skill sets.

For this post, I’ve cherry-picked some of the challenges I had, particularly around integrating Angular.js with Rails. It is in no way a holistic guide.

Website

Architecture

The main components of the customer-facing website are shown below. I’ve left out the back-office application (kinda like the CMS) because it did not use Angular.

Templating

I elected to use Haml for my HTML DSL though I’d probably consider Slim next time. I ended up having separate templates for the following cases:

  • page templates (displayed in the ng-view element)
  • directive templates

Directives are an awesome feature which I made use of extensively.

first attempt: put everything in the header

To get myself up and running, I put the = yield statement in the <head> section of the frontend Rails layout, and then a loop in the included index.haml to bring in the templates. A snippet is shown below:

['search_bar_template',
 'buy_listings_search_bar_template',
 'rent_listings_search_bar_template',
 'image_slider_template',
 'bed_bath_car_template',
 'street_address_details_template',
 'tenancy_application_template',
 .
 .
 .
 'disclaimer_template',
 'error_page_template',
 'contact_us_template'].each do |template_name|

  %script(type="text/ng-template" id="_#{template_name}.haml")
    = render :partial => "frontend/#{template_name}"

The template files themselves were put in the same folder as index.haml. Thus, whenever I needed a new template, I’d create it and then add it to the array above. I could then refer to the templates via templateUrl in my javascript code.
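For illustration, a route definition could then point at one of these inlined templates by its script tag id. This is a hypothetical sketch (the route paths and controller names here are made up, and a tiny lookup helper stands in for what Angular’s router does when resolving templateUrl against the template cache):

```javascript
// Hypothetical route table mapping paths to the inlined template ids.
// With ng-template script tags, Angular resolves templateUrl against
// $templateCache before falling back to an HTTP request.
var routes = {
  '/contact': { templateUrl: '_contact_us_template.haml', controller: 'ContactUsController' },
  '/rent':    { templateUrl: '_rent_listings_search_bar_template.haml', controller: 'RentController' }
};

// A tiny lookup helper standing in for the router's template resolution.
function templateFor(path) {
  var route = routes[path];
  return route ? route.templateUrl : null;
}
```

The important part is that the id in the script tag and the templateUrl string must match exactly, or Angular will issue an HTTP request for a template that only exists inline.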

This worked okay, except for the fact that there was a lot of “stuff” in my <head> section. Also, the above technique requires that all templates be loaded on the initial page load.

second attempt: loading via ajax and bringing in the asset pipeline

One of Rails’ great features, IMHO, is the asset pipeline (provided by the sprockets library). It basically allows indefinite browser caching of static assets – a feature I wanted to take advantage of for the templates. Unfortunately, sprockets does not support Haml (the authors don’t seem to like HTML DSLs) and the “haml” gem only supports putting Haml templates in the “views” folder. After reading through the sprockets and sass gem source code, I cobbled together the following code, which allows Haml files in the asset pipeline in development mode as well as during pre-compilation:

In application.rb:

require 'haml'

config.assets.paths << Rails.root.join('app', 'assets', 'templates')

class HamlTemplate < Tilt::HamlTemplate
  def prepare
    @options = @options.merge :format => :html5
    super
  end
end

config.before_initialize do |app|
  require 'sprockets'
  Sprockets::Engines #force autoloading
  Sprockets.register_engine '.haml', HamlTemplate
end

This allowed me to put the templates in app/assets/templates (of course, this path is easy to change).

And then, to load the templates from the browser side, I created a file called templates.coffee.erb:

templates_version_number = 1.6

templates = ->
  templates = {}

  <% Dir.glob(Rails.root.join('app', 'assets', 'templates', '*.haml')).each do |f| %>
  templates["<%=  File.basename(f) %>"] = "<%= asset_path File.basename(f, '.haml')%>"
  <% end %>
    
  templates

Frontend.Templates = templates()

Frontend.Template = (logical_name) -> Frontend.Templates[logical_name]

And, from my routes and directive files, I could link to the template via “templateUrl”, e.g. templateUrl: Frontend.Template '_tenancy_application_template.html.haml'. Keeping the template ID the same as the actual file name made the template files easy to find.

One “gotcha” with this method is that if you create a new template, you need to increment templates_version_number so that the templates file is recompiled to include the new template. This is not required for modifications to existing templates though.

This method has its drawbacks, as even the homepage requires many templates to be loaded – and hence many HTTP requests. If I had more time, I would have pre-loaded the templates required for the homepage, potentially using the technique from my first attempt above.

adding efficiency, lazy-loading templates

To make subsequent page transitions faster, I added some code to templates.coffee.erb to pre-load any remaining templates after 15 seconds – thereby allowing the homepage to complete its loading first:

Frontend.factory('LazyLoadTemplates', ['$templateCache', '$timeout', '$http', ($templateCache, $timeout, $http) ->
  ->
    $timeout( ->
      Object.keys(Frontend.Templates).forEach (logical_name) -> 
        if not $templateCache.get(Frontend.Templates[logical_name])?
          $http.get Frontend.Templates[logical_name], {cache: $templateCache}
    , 15000)
])

I then called the LazyLoadTemplates function from the Frontend module run method so it could be injected with what it needs.

adding a CDN, dealing with CORS

The next step was to load the templates (as well as the other static assets) from a CDN such as Cloudfront. Unfortunately for this, I had to deal with CORS, as I was loading the assets via ajax. I say unfortunately because Angular makes this pretty hard. The problem is that its template loading code is hardwired to use the $http component, which is not set up to handle CORS. The fact that $http does not handle CORS is understandable, as CORS imposes too many restrictions for a general HTTP component. It would be nice if the template loading process were more configurable/pluggable though.

Complicating matters further was having to support IE 8/9 which rely on the XDomainRequest object.
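The resulting decision can be boiled down to a plain function. This is a simplified sketch, not the app’s actual code; the capability flags are passed in as an argument so the logic is independent of any real browser object:

```javascript
// Pick an XHR strategy given browser capabilities (a simplified sketch).
// - Modern browsers: XMLHttpRequest supports CORS, so the CDN URL is fine.
// - IE 8/9: cross-origin requests need XDomainRequest, which has its own
//   restrictions (no custom headers, same scheme only).
// - Otherwise: fall back to a same-origin URL and skip CORS entirely.
function chooseTransport(caps) {
  if (caps.corsXhr) {
    return { transport: 'XMLHttpRequest', crossOrigin: true };
  }
  if (caps.xDomainRequest) {
    return { transport: 'XDomainRequest', crossOrigin: true };
  }
  return { transport: 'XMLHttpRequest', crossOrigin: false };
}
```

In my case I took the last branch for old IE: rather than wrestling with XDomainRequest, I simply served those templates same-origin.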

In the end, I ran out of time to deal with this issue properly, so I worked around it by only loading the lazily-loaded templates from the CDN for “modern” browsers. templates.coffee.erb then became:

templates_version_number = 1.6

internet_explorer_version = ->
  result = -1
  if navigator.appName == 'Microsoft Internet Explorer'
    re  = new RegExp 'MSIE ([0-9]{1,})[\.0-9]{0,}'
    if re.exec(navigator.userAgent)?
      result = parseInt RegExp.$1
  return result

ie_version = internet_explorer_version()

templates = ->
  templates = {}

  <% Dir.glob(Rails.root.join('app', 'assets', 'templates', '*.haml')).each do |f| %>
  templates["<%=  File.basename(f) %>"] = "<%= asset_path File.basename(f, '.haml')%>"
  <% end %>
    
  templates

Frontend.Templates = templates()

Frontend.Template = (logical_name) -> 
  result = Frontend.Templates[logical_name]
  if not /^\/assets\//.test result
    result = '/assets/' + result.match(/^.*\/([^\/]+)$/)[1] 
  result

lazy_template = (logical_name) ->
  result = Frontend.Templates[logical_name]
  if ie_version < 10 and ie_version != -1
    result = '/assets/' + result.match(/^.*\/([^\/]+)$/)[1] 
  result

create_request = (method, url) ->
  xhr = new XMLHttpRequest()
  xhr.open method, url, true
  xhr

Frontend.factory('LazyLoadTemplates', ['$templateCache', '$timeout', ($templateCache, $timeout) ->
  ->
    $timeout( ->
      Object.keys(Frontend.Templates).forEach (logical_name) -> 
        if not $templateCache.get(lazy_template(logical_name))?
          request = create_request 'get', lazy_template(logical_name)
          request.onload = -> $templateCache.put lazy_template(logical_name), request.responseText
          request.send()
    , 15000)
])

Thick model, thin controllers

In most of the “hello world” tutorials, you’ll see all the logic shoved into the controllers. If you do this on larger applications, you’ll start to get controllers which resemble a big ball of mud. Borrowing from the Rails world, I wanted to structure my code in a similar way to how Rails does with its controllers and ActiveRecord models. The “rent listing” model object is shown below:

Frontend.factory('RentListing', ['$http', '$q', '$filter', 'ListingDetails', ($http, $q, $filter, ListingDetails) ->

  sydney_moment = $filter 'sydneyMoment'
  currency_without_cents = $filter 'noFractionCurrency'
  yes_no = $filter 'yesNo'

  quantity_specified = (quantity) -> quantity? and not /^[\s0\.]+$/.test quantity

  RentListing = (data) ->

    is_leased = data.status == "leased"

    now_or_in_the_future = (date) ->
      result = null
      if date?
        lease_start_date = new timezoneJS.Date date, 'Australia/Sydney'
        now = new timezoneJS.Date 'Australia/Sydney'
        result = if lease_start_date > now then sydney_moment(date, 'dddd D MMMM') else 'Now'

      result

    angular.extend(ListingDetails(data),

      rent_per_week: currency_without_cents data.price_display_min

      show_inspection_times: not is_leased and data.next_opening_times? and data.next_opening_times.length > 0

      is_leased: is_leased

      has_floorplan: data.floorplan_links.length > 0

      available_date: now_or_in_the_future data.lease_start_date

      bond_specified: quantity_specified data.bond
      bond: currency_without_cents data.bond

      furnished_specified: quantity_specified data.furnished
      furnished: yes_no data.furnished

      pets_specified: quantity_specified data.pets_allowed
      pets_allowed: yes_no data.pets_allowed
    )

  RentListing.find = (property_id, preview_hash) ->
    config =
      url: "/frontend/rent/rent_listings/#{property_id}"
      method: 'GET'

    if preview_hash
      config.params =
        preview_hash: preview_hash

    $http(config).then(
      (response) -> RentListing(response.data)
      (response) -> $q.reject response.data.error
    )

  RentListing
])

The corresponding controller is:

Frontend.RentListingController = ($scope, $routeParams, RentListing) ->

  $scope.rent_listing = RentListing.find $routeParams.listing_id, $routeParams.preview_hash

Frontend.RentListingController.$inject = ['$scope', '$routeParams', 'RentListing']

In general, my controllers only did the following types of things:

  • attached data to the scope, making it available to the view
  • handled routing logic such as redirects or back-button functionality
  • a little view-only logic such as hover states and the like

Model objects followed the pattern shown for the “rent listing” example above, i.e. they had:

  • a constructor function which used normal javascript closures to add methods to data (from the server side)
  • “static” functions (e.g. RentListing.find) to retrieve data from the server using the $http service and call the constructor function

The model constructor functions could then be tested in isolation from the ajax interactions and the view-controller.

In order to reuse code between model objects, I injected the “common” constructor function and used it as a template with angular.extend. As there weren’t that many objects, I didn’t bother with prototypal inheritance. For a more complete guide on how to reuse code in javascript, I greatly recommend Eric Elliot’s post on the subject.
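Stripped of the Angular plumbing, the reuse pattern is plain javascript. The field names below are hypothetical, and a tiny merge helper stands in for angular.extend so the sketch is self-contained:

```javascript
// A tiny stand-in for angular.extend: copy own properties of src onto dst.
function extend(dst, src) {
  for (var key in src) {
    if (Object.prototype.hasOwnProperty.call(src, key)) dst[key] = src[key];
  }
  return dst;
}

// "Common" constructor shared by all listing types (hypothetical fields).
function ListingDetails(data) {
  return {
    address: data.address,
    has_photos: data.photo_links.length > 0
  };
}

// A specific model builds on the common one, closing over its own data.
function RentListing(data) {
  return extend(ListingDetails(data), {
    is_leased: data.status === 'leased'
  });
}
```

Because each constructor is just a function from server data to an object, it can be exercised directly in unit tests with literal data, with no $http or DOM involved.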

IE 8 gotchas

In addition to following the advice on the Angular.js website, I found I needed to do the following extra things in order to use “transclude” in my directives:

  • I needed to set replace = true
  • I needed to use the attribute form rather than the element form of the directive.

Luckily, other than that, things worked okay on IE 8.

Conclusion

I would be happy to use Angular.js again on future projects. It provides many things which aid the structure of a thick browser app:

  • dependency injection
  • asynchronous IO with promises
  • directives – which allow encapsulation of HTML and javascript

My only complaint is the size of Angular.js – both in terms of bytes and the number of features. As an example, Angular.js comes with jqLite (a low-footprint version of jQuery), but I needed more than it provided, so I ended up including jQuery anyway. I nearly ran into the same issue with its lightweight $q promises service, as it lacked a few functions such as map/reduce.

Apparently, Angular.js has had a great influence on the feature pipelines for the major browsers. As more features are implemented natively, the library itself should get a lot smaller.

Can Angular.js become the “Rails” of the browser world?

Automated Acceptance Testing for an OTS Product

| Comments

Automated acceptance testing for customisable off-the-shelf products is useful, but the approach should be different from that for bespoke-built products in order to get the best bang for buck. This is particularly true for OTS products which provide most of the external interface (e.g. a web browser interface).

TL;DR: Complement rather than duplicate the testing done by the vendor. Use Specification by Example to avoid vendor lock-in where possible. Jump to the Conclusion for more details.

Example: automated testing on a fully bespoke product

The product we were building was a web application to sell business insurance online. It was basically designed as a wizard with many input fields requiring validation. At the mid point, the user was given a quote and at the end, the user was invited to sign up for the insurance (and hence become a customer).

The architecture of the application was as follows:

The validation library was written in javascript and was run on both the browser and the server:

  • on the browser, to display validation messages to the user before submitting data to the server
  • on the server (via Rhino), to make sure that no invalid data was passed down to the backend service. This was basically done for security, just in case someone bypassed the browser application and used HTTP directly.
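The key to running the same code in both places is keeping each rule a pure function. As a sketch of the idea (a hypothetical rule, not the project’s actual code), a rule like this has no DOM or server dependencies, so the same file can be loaded in the browser and under Rhino:

```javascript
// A validation rule as a pure function: no DOM access, no server APIs,
// so the same code can run in the browser and under Rhino on the server.
function validatePostcode(value) {
  if (value == null || value === '') {
    return { valid: false, message: 'Postcode is required' };
  }
  if (!/^[0-9]{4}$/.test(value)) {
    return { valid: false, message: 'Postcode must be 4 digits' };
  }
  return { valid: true };
}
```

Rules in this shape are also exactly what made unit-level testing (via JS Test Driver) so cheap compared with driving every case through Selenium.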

To test the validation, we used a combination of Selenium to test through the browser and JS Test Driver to test at the unit level. Our Test Pyramid (for just validation) looked like:

We decided that there was little value in testing all validation cases through the browser as:

  • browser tests are slow to run
  • unit tests are much more precise and hence any error can be found and fixed more quickly

The other areas of functionality (e.g. referrals, denials) followed the same pattern as for validation. Hence, the pyramid above was representative of the whole test suite.

Example: automated testing on an OTS product

Borrowing again from the insurance space, the product this time was a claims processing system which handled things like:

  • claims lodgement
  • communications with suppliers / repairers
  • payments
  • general ledger updates

This time, the company decided to go for the “buy” option and selected a product called Guidewire ClaimCenter. ClaimCenter provides a default configuration which covers the above activities (claims lodgement etc.) using a browser interface for human interactions.

ClaimCenter can be customised in 3 ways:

  • via XML to tweak the browser interfaces
  • via GOSU scripts (a bit like Groovy)
  • via plugins written in Java for integration to backend systems (not supported out-of-the-box)

To support the target claims process (across all insurance brands), a significant amount of customisation was necessary.

About 3 months into the project, I sat in on a meeting to discuss automated testing. The conversation went something like this:

Guidewire consultant: Why are you testing through the browser with Selenium? Can’t you just test the XML?
Automated testers: We need to make sure the application works end-to-end and that all the bits integrate together.
Guidewire consultant: But we already test that everything fits together. That’s the whole point of buying an OTS product.
Automated testers: What do you use to test?
Guidewire consultant: Selenium

At the time, the conversation ended in a stalemate, and automated testing continued to be done through the browser. I’ll be honest at this point and admit I was in the “automated tester” camp, though I regret taking that point of view now.

In retrospect, the Guidewire consultant had a good point. For most of the acceptance tests, we could have tested just the customisation code (XML, GOSU, Java) in isolation. By testing the UI, we were duplicating the testing already done internally by Guidewire. Had we tested more pragmatically, our tests plus the internal Guidewire tests would have yielded a pyramid very similar to the bespoke example above:

As it turned out, we ended up with a test suite which took many hours to run and hence couldn’t be run before a developer committed their code.

There are two cases I can think of where a project may want to test at a higher level (eg via the browser):

  • as a deployment test – e.g. to make sure no one has deleted a vital piece of config
  • when defects are found in the OTS product itself.

How can we avoid vendor lock-in?

One advantage of testing at a higher level (e.g. via a browser) is that the implementation beneath can change without affecting the test code. One might suggest that this is a good way of avoiding vendor lock-in. I am of the opinion that it isn’t, for two reasons:

  • the trade-off in time generally isn’t worth it.
  • the interface design is often heavily influenced by the defaults of the OTS product. I.e. when the OTS product changes, so does the interface

What is less likely to change (when a vendor changes) are the business rules and processes. What we can do is specify these business rules and then link the automated tests to them. A great technique for doing this is Specification by Example. For the OTS case above, the automated testers were using a tool called Concordion which is capable of this very thing.

Of course if business processes are changed to fit an OTS product, then it is almost impossible to avoid vendor lock-in.

Conclusion

I recommend using Specification By Example when writing acceptance tests in general, regardless of whether a product is built bespoke or purchased off-the-shelf and then customised.

I also recommend using the ideas in Test Pyramid to write tests at the appropriate level. This will help keep the run-time of the automated test suite down, and allow for bugs to be pin-pointed quickly. There is no point in duplicating testing done by a vendor unless there are holes in that test suite – e.g. defects in the OTS product.

Building a Product in 4 Weeks

| Comments

I was recently part of a small team which built a dynamic web site for a customer in 4 weeks. I would like to share what I think were the factors in making the project successful. But first some context…

Our customer

The customer was a non-profit organisation in the education sector. Part of their remit is to give presentations in high school classrooms, including those in remote parts of Australia. These presentations are typically 5 minutes or less in duration, as part of a longer 1 hour educational session.

Their goal

The goal was to provide the means to train upwards of 40 classroom presenters, who were geographically dispersed, in any one of about 25 different presentations. This varied from the current situation where a small number of presenters were coached in person on a one-to-one basis.

The end product

The end product was an online portal which provided a central place from which classroom presenters could view training materials and upload candidate presentations of their own for feedback and coaching. In addition, it allowed management to endorse presenters for particular classroom sessions, making it easier to staff those sessions.

The team

We were a 4 person team, which included a product owner (from the customer side), a tech lead, a tester (who could write automated tests) and myself as an all-rounder (developer, BA, scrum master).

Our process

We gave ourselves 4 weeks to build a useful product. I.e., we fixed time and cost and left scope variable. The project then broke down as follows:

  • 1 day for project inception
  • 1 day for technical setup (i.e. iteration 0)
  • 4 one week iterations (sprints) of build and deploy

What made us successful?

Co-location

We all sat around a small 6 person table with power outlets in the center. This gave us enough space for guests/stakeholders and also other equipment such as monitors.

The table was in a small room, bounded by windows and whiteboards. This obviated the need for an electronic planning system as we could communicate verbally and use the wall space to articulate (via index cards, whiteboard diagrams etc) the plan and the work in progress.

Also crucial to this was that our product owner spent half his time co-located with us in the project room.

Single goal

Goals tend to have a one to many relationship with both people and features, so it’s crucial not to focus on too many goals at once. Thankfully, our product owner was happy to focus on one goal: that of being able to train 40+ presenters in the classroom sessions.

A focused inception :: impact mapping

For the inception process, we used a technique called Impact Mapping, albeit in a simplified form. For those unaware, Impact Mapping is a structured mind-mapping technique with a goal (the “why”) in the center, people (the “who”) and impacts (the “how”) surrounding the goal, and features/actions (the “what”) on the edge. It was the first time I had tried the technique, and after some initial scepticism, I was pleasantly surprised at the positive feedback I received (the product owner used the words “awesome” and “so impressive”).

After discovering the people and impacts, we did MoSCoW analysis on the impacts – ie before even considering the software features required. At this point, the product owner marked several impacts as “shoulds” and “coulds”, meaning that we didn’t need to consider them further during the inception.

We then analysed our “must have” impacts and discovered the software features required to achieve each impact. We then did another round of MoSCoW analysis, this time on the software features.

By this stage, the impact map had provided a lot of value in discovering the project’s scope. In essence, it had allowed us to discover the Minimal Marketable Product or Minimum Viable Product.

The impact map also provided a lot of value during the project, allowing us to explain the essence of the project to stakeholders and to onboard new people who gave specialist advice (e.g. our user experience designer who worked with us for a few days).

At the end of the project, I digitised the impact map (although I could have just taken a photo of it) and handed it over to the customer for validation – ie they can now validate whether what we have built actually achieved the impacts and the goal.

Just enough planning and management

The first step was to do an initial assessment of what we could do in 4 weeks. We had as an input our MMF from the impact map. We then went through the following process:

  • t-shirt sized each of the “must have” features (XS, S, M, L)
  • decided on a relative scale for the t-shirt sizes: 1, 2, 4, 8
  • did a shopping-cart style velocity estimation to work out roughly how much we could do as a team in 4 weeks.

At this point, we found that the MMF fitted into the 4 weeks pretty much exactly. We then wrote out the features in user story format (which was quite easy, as the role and impact were visible on the impact map) and then scheduled the stories into the 4 one-week iterations. Of course, as it was just a best-guess estimate, we advised the product owner to schedule the least important user stories in iteration 4. This all took place on the inception day.
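The estimation arithmetic behind those three steps is simple enough to sketch. The feature list and sizes below are made up for illustration; the point is only that t-shirt sizes become points, and the point total is compared against the team’s estimated capacity:

```javascript
// Relative points for each t-shirt size, as agreed by the team.
var points = { XS: 1, S: 2, M: 4, L: 8 };

// Hypothetical "must have" features with their t-shirt sizes.
var features = [
  { name: 'upload presentation', size: 'M' },
  { name: 'endorse presenter',   size: 'S' },
  { name: 'view training video', size: 'L' }
];

// Total effort in points; comparing this against the team's estimated
// 4-week capacity shows whether the MMF fits in the timebox.
var total = features.reduce(function (sum, f) { return sum + points[f.size]; }, 0);
```

If the total comes out above capacity, the least important “must haves” are the first candidates to slip to the final iteration or out of scope.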

As the release horizon was short, we didn’t see the need for burn-up charts. We simply counted cards (both user stories and tasks) completed in each iteration and used that to plan the next iteration and to groom the release scope. We did find that we completed more cards in iterations 3 and 4 – a ramp-up that you might expect in any project. Iteration planning and grooming typically took about 30-40 minutes each week and was done just prior to the start of the next iteration – though we did also re-plan to a lesser extent on an ad-hoc basis.

Weekly showcases were held at the customer’s premises, so as to get feedback from other customers and users.

Our card wall looked something like this during iteration 2:

Triage: a place where we put ideas and candidate stories when the product owner wasn’t there (for discussion at the next opportunity)
Zoo: column for bugs which hadn’t been scheduled yet
White cards: user stories
Yellow cards: tasks
Red cards: bugs

Anyone in the team could talk to the product owner to flesh out a story or prioritise something new in. As we were co-located, it was easy for everyone to keep abreast of what was going on.

Frequent customer interaction

Having our product owner co-located with us allowed much quicker turnaround on our questions in comparison to asynchronous communication such as email. It also meant that he could view the state of the project by reading the walls, rather than receiving a report or viewing an electronic tool, which also saved us time.

The IT manager also co-located with us in order to receive a handover of the deployment process and maintenance tools. This was much more effective than relying on documentation alone.

Weekly showcases allowed us to get feedback from presenters and led to a few features, previously de-prioritised by the MoSCoW process, getting planned in (at the expense of other features where necessary).

Everything was big and visible

In addition to the impact map and card walls, every decision we made was written up on the whiteboards. Examples include:

  • the domain model
  • the architecture diagram
  • what browsers we had to support
  • the agreed naming structure for a classroom session

This made it easier to remember everything as well as to onboard new people.

Technology choices

An often underestimated factor in delivering quickly, our technology choices were vital in allowing us to build and deploy a product within 4 weeks.

Our architecture was as follows:

The choice of Heroku and Rails allowed us to setup our development environments in a day despite only one of us having used Rails before. All the tech choices satisfied the following criteria:

  • easy to set up multiple accounts for use in development, testing and production. This made it easy to create a “production like” environment for testing and showcasing.
  • amenable to automated testing, in that they were:
    • easy to stub out
    • quick to process requests, meaning that the automated test suite did not take a long time to run.

The only non-free components were the production Vimeo account (we opted for a Vimeo Plus account to meet our speed and storage requirements) and RubyMine (our Rails IDE).

Automated tests

Another often underestimated factor in delivering quickly, our automated tests provided a lot of value in iterations 3 and 4. They allowed us to deliver stories right up until the end of the 4th iteration – i.e. no need for a code freeze or manual testing period. We did allow about half a day for the customer to test out the production site, however no bugs were uncovered during that time.

To ensure that our tests were adding value, we focused on writing cucumber specs to elaborate each story – a la Specification by Example. Specifications which we felt were not suitable for automation were marked as @manual and excluded from the rake build.

Conclusion

This article illustrates some important factors which, in combination, allowed us to deliver a high quality, marketable product in a short time with a small team (i.e. at low cost). There are of course alternatives to all the examples shown above. For example, it is possible to achieve the same thing with a Java stack rather than Rails, and other inception techniques (besides impact mapping) can work well too.

However, I believe that as we start to sacrifice these factors (e.g. co-location), we start to inhibit the ability of a team to deliver value quickly. If too much is sacrificed, we tend to find ourselves in the unenviable position of costly and/or slow development.