Acceptance tests simulate real user actions and are very often run on real devices. These automated tests take quite a long time, in some cases many hours, especially when acceptance stories have to be validated on multiple devices. But fast feedback from tests is very important in the development process, so we need to decrease test execution time. How? Running the tests simultaneously on all devices sounds like a good idea, right? Unfortunately, it is not going to be that simple.

Calaba.sh, an acceptance test framework for iOS applications, uses Apple's UI Automation to launch and control these applications, but Apple has limited the possibility of running multiple UI Automation instances on the same Mac OS X system. For reasons unknown, a UI Automation instance uses a predefined port that cannot be changed, and more than one instance cannot be launched on a single port.

Our Solution

Hosting virtual machines (VMs) on our physical computer allows us to run one UI Automation instance on each machine, virtual or physical. That gives us an opportunity to scale out our tests. To do that we need to fulfill a few requirements:

  • Mac OS X host machine with at least 16GB of RAM
  • Bootable Mac OS X setup image
  • VirtualBox
  • Vagrant
  • Ruby
  • Calaba.sh
  • Three iOS devices (one for the host, two for the VMs)
  • Xcode
  • iOS development app with debugging enabled

On each VM we need to set up exactly the same environment as on the host, so that we can run our tests with Ruby, Calaba.sh, Xcode, etc. We also need to enable SSH access on every system where tests will run (the host and all VMs). Managing all the VMs with Vagrant is our best option, and the VMs should be started by a launch script right after the host machine has completed its boot process.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.node-1.vagrant</string>
    <key>StandardOutPath</key>
    <string>/Users/Shared/virtualmachines/node-1-boot.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/Shared/virtualmachines/node-1-boot.err</string>
    <key>EnableGlobbing</key>
    <true/>
    <key>ProgramArguments</key>
    <array>
    <string>/usr/local/bin/vagrant</string>
    <string>up</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>WorkingDirectory</key>
    <string>/Users/Shared/virtualmachines/node-1</string>
</dict>
</plist>

This boot script asks Vagrant to start all the VMs, and Vagrant provisions them with a predefined configuration. This configuration includes port forwarding to the host machine and installation of all the test requirements (Ruby, npm, etc.).

config.vm.provision 'shell', privileged: false, inline: <<-SHELL
    # export TRAVIS=1
    echo 'Checking brew...'
    brew_temp=$(type brew | head -1)
    if [ "$brew_temp" != "brew is /usr/local/bin/brew" ]; then
      echo 'installing brew...'
      ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
      echo 'Installing gnupg...'
      brew install gnupg
    fi
    echo 'Checking npm...'
    npm_temp=$(type npm | head -1)
    if [ "$npm_temp" != "npm is /usr/local/bin/npm" ]; then
      echo 'Installing npm...'
      brew install node
    fi
    echo 'Checking ios-deploy...'
    ios_deploy_temp=$(type ios-deploy | head -1)
    if [ "$ios_deploy_temp" != "ios-deploy is /usr/local/bin/ios-deploy" ]; then
      echo 'Installing ios-deploy...'
      npm install ios-deploy -g
    fi
    echo 'Checking rvm...'
    rvm_temp=$(type rvm | head -1)
    if [ "$rvm_temp" != "rvm is a function" ]; then
      echo 'Installing rvm...'
      \\curl -sSL https://get.rvm.io | bash -s stable
    fi
    echo 'Checking ruby...'
    ruby_temp=$(type ruby | head -1)
    if [[ "$ruby_temp" != "ruby is /Users/vagrant/.rvm/rubies/"* ]]; then
      echo 'Installing ruby...'
      rvm install 2.2.3
      rvm use 2.2.3 --default
    fi
    gem install bundler
SHELL

And this is the Vagrant configuration for port forwarding.

config.vm.network 'forwarded_port', guest: 22, host: 2001

Forwarding each VM's SSH port to an unused host port simplifies our configuration: we don't have to know the IP addresses of the VMs, and if the setup ever uses more than one host machine, it is very simple to see which VM is located on which host.

To run these tests on real devices we need to provide a separate device for each VM. To do this we add a few more lines to the Vagrantfile, which forward each device to its VM at boot.

config.vm.provider 'virtualbox' do |vb|
   vb.customize ['modifyvm', :id, '--usb', 'on']
   vb.customize ['modifyvm', :id, '--usbehci', 'on']
   vb.customize ['usbfilter', 'remove', '0', '--target', :id]
   vb.customize ['usbfilter', 'add', '0',
                 '--target', :id,
                 '--name', 'Apple Inc. iPhone 5c [xxxx]',
                 '--manufacturer', 'Apple Inc.',
                 '--product', 'iPhone',
                 '--serialnumber', 'xxxxxxxxxxxxxxxxxxxxxx']
 end

To manage the test running process I am using a launch script that reads a simple XML configuration file storing the configuration of each VM. When the configuration is parsed, the script creates a thread for each VM and provides the information required by the test management process (name, host, port and authentication details).

<config>
  <node name="virtual_node1" host="localhost" port="2001" username="vagrant" password="vagrant"/>
  <node name="virtual_node2" host="localhost" port="2002" username="vagrant" password="vagrant"/>
  <node name="physical_node" host="localhost" port="22" username="user" password="vagrant"/>
</config>

After that, each thread starts its own independent test running process, and the launch script waits for all threads to complete.

require 'nokogiri' # XML parser gem

# Read the node configuration and build a list of Node objects
node_config = Nokogiri::XML(File.open('configuration/node_config.xml'))

node_list = []
node_config.xpath('//node').each do |node|
  node_list.push(Node.new(host: node['host'], port: node['port'],
                          username: node['username'],
                          password: node['password']))
end

threads = []
node_list.each do |node|
  threads << Thread.new do
    ParallelRunner.new(node: node,
                       temp_dir: temp_dir, log_dir: log_dir,
                       zip_dir: zip_dir, run_options: options).run
  end
end

threads.each(&:join)

The rest of the test running process can be split into three general steps: preparing environments, running tests and collecting results.

The environment preparation stage starts by pinging all the nodes from the VM configuration file to check which of them are online. For each online VM, its thread opens an SSH connection and cleans the environment of old logs and reports left from previous runs. When cleaning is done, the thread copies the test scripts to the VM using SCP.

[Figure: example flow of the environment preparation stage.]
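The preparation step for a single node could be sketched roughly as below, assuming a `net` object that responds to the `open_ssh`, `execute_ssh` and `upload` methods of the networking class shown later; the directory paths are purely illustrative:

```ruby
# Hypothetical sketch: clean up one node and copy the test scripts onto it.
# `net` is assumed to expose open_ssh / execute_ssh / upload, like the
# networking class described below; paths are illustrative only.
def prepare_environment(net, remote_dir)
  net.open_ssh
  # Remove logs and reports left over from a previous run
  net.execute_ssh("rm -rf #{remote_dir}/logs #{remote_dir}/reports")
  # Copy the current test scripts to the node over SCP
  net.upload('.', remote_dir)
end
```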

The test running stage starts by preparing Ruby and installing all the requirements from the Gemfile over the previously opened SSH connection to each VM. When the Ruby gems are installed, the thread launches Calaba.sh to run the defined tests. Now the tests are running in parallel on all online VMs and the host.

[Figure: example flow of the test running stage.]
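The commands one thread issues during this stage might look like the following sketch. The remote directory and the idea of passing the device to the tests through an environment variable are assumptions, not the exact commands from the project:

```ruby
# Hypothetical sketch: install gems and launch the tests on one node.
# `net` is assumed to respond to execute_ssh; the directory and the
# DEVICE_ENDPOINT value are illustrative.
def run_tests(net, remote_dir, device_endpoint)
  # Install everything listed in the Gemfile
  net.execute_ssh("cd #{remote_dir} && bundle install")
  # Launch the Calaba.sh (cucumber) tests against the attached device
  net.execute_ssh("cd #{remote_dir} && " \
                  "DEVICE_ENDPOINT=#{device_endpoint} bundle exec cucumber")
end
```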

In the last stage, when the tests are done, the results are collected over SCP by moving all the report files from the VMs to the host. Once the report files are collected, a single HTML report is generated.

[Figure: example flow of collecting results.]
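The collection step for one node could be sketched as below, assuming the `download` and `close_ssh` methods of the networking class and hypothetical directory names:

```ruby
require 'fileutils'

# Hypothetical sketch: pull the report files from one node into a
# per-node folder on the host. `net` is assumed to respond to
# download / close_ssh; paths are illustrative.
def collect_results(net, remote_report_dir, local_report_root, node_name)
  local_dir = File.join(local_report_root, node_name)
  FileUtils.mkdir_p(local_dir)  # one folder per node on the host
  net.download(remote_report_dir, local_dir)
  net.close_ssh                 # we are done with this node
end
```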

SSH and SCP connections are managed in a networking class, where we define methods for all network-related commands. The open_ssh and execute_ssh methods use the Net::SSH::Telnet gem, while the upload and download methods use the Net::SCP gem.

def open_ssh
  @ssh = Net::SSH::Telnet.new('Host' => @node.host,
                              'Username' => @node.username,
                              'Password' => @node.password,
                              'Port' => @node.port,
                              'Timeout' => 600)
end

def close_ssh
  @ssh.close
end

def execute_ssh(cmd)
  @ssh.cmd(cmd)
end

def download(remote_path, local_path)
  Net::SCP.download!(@node.host, @node.username,
                     remote_path, local_path,
                     ssh: { password: @node.password, port: @node.port },
                     recursive: true)
end

def upload(local_path, remote_path)
  Net::SCP.upload!(@node.host, @node.username,
                   local_path, remote_path,
                   ssh: { password: @node.password, port: @node.port },
                   recursive: true)
end

Limitations and Possible Improvements

In our example, each iOS device was linked to a specific VM manually. This kind of device linking can be cumbersome when you have to change test devices often. One way to avoid this issue is to link a USB hub to each VM, so that all of a VM's devices can be connected to the corresponding hub. Every time we need to mix devices or add new ones, we simply plug them into the correct USB hub.

Another issue is related to the Mac OS X license, which allows only two Mac OS X VMs to be hosted on one piece of Mac hardware. A solution for this is already built into the main idea of this post: in the same configuration file where we define all the VMs, we can simply add entries for another Mac host with two more VMs. Of course, the other Mac must have the SSH service enabled.
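For example, the node configuration file could simply grow two more entries pointing at the second Mac; the host name here is hypothetical, and the forwarded ports mirror the ones used on the first host:

```xml
<config>
  <node name="virtual_node1" host="localhost" port="2001" username="vagrant" password="vagrant"/>
  <node name="virtual_node2" host="localhost" port="2002" username="vagrant" password="vagrant"/>
  <node name="physical_node" host="localhost" port="22" username="user" password="vagrant"/>
  <!-- Hypothetical second Mac host carrying two more VMs behind forwarded ports -->
  <node name="virtual_node3" host="second-mac.local" port="2001" username="vagrant" password="vagrant"/>
  <node name="virtual_node4" host="second-mac.local" port="2002" username="vagrant" password="vagrant"/>
</config>
```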

This is Not the Only Solution

This solution with VMs has been tested and proven good and stable for running automated tests simultaneously with Calaba.sh and UI Automation, but the situation has recently changed and it is no longer the only solution around. Apple added support for WebDriverAgent, which is used by the Appium test automation framework and allows launching multiple test instances in one environment.

Although some time is lost distributing the tests and collecting the results, running the tests in parallel dramatically reduces total run time. In general, the test run time is divided by the VM count, provided there are enough test devices to keep every thread busy.
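As a rough model, with an assumed fixed overhead for distributing scripts and collecting reports, the expected wall-clock time behaves like this (all numbers are illustrative):

```ruby
# Rough model of parallel run time, assuming tests split evenly across
# nodes plus a fixed overhead for distribution and collection.
def parallel_run_time(serial_minutes, node_count, overhead_minutes)
  serial_minutes / node_count.to_f + overhead_minutes
end

parallel_run_time(180, 3, 10) # a 3-hour serial run on 3 nodes => 70.0 minutes
```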