Communications of the ACM

Viewpoint

Lessons from the Tech Transfer Trenches


As researchers employed by a company, we wear two hats. One of our roles is to participate in the research community. The other is to channel some of our research toward "business impact" on the company we work for. In this Viewpoint, we present our experiences in taking our research project on test automation to business impact. Notwithstanding differences between organizations, we hope our colleagues in other research institutions will find some of these lessons useful in their own attempts toward business impact.

The work we describe here was done in the context of IBM's service delivery organization. Since this context may be unfamiliar to many readers, we first explain it briefly. Software businesses fall roughly into two categories: those (for example, Microsoft) that manufacture and sell software, essentially licenses to pre-packaged software, to other businesses and consumers, and those (for example, Accenture) that sell software development as a service to other businesses; some companies engage in both kinds of business. Both product and services businesses employ lots of software engineers, and both serve very large markets. Product companies differentiate their offerings based on the features in their products. By contrast, services companies differentiate their offerings based on the cost and quality of service. Since cost and quality are determined largely by engineering skill, prowess in software engineering is directly connected to a services company's success.

The topic of our research project was regression testing for Web applications. For commercial Web applications, such as an e-commerce portal for a bank, there are thousands of test cases, to be run against a large number of browser and platform variants. Moreover, the application itself is updated frequently. Companies that own these Web applications prefer not to invest in in-house staff to carry out this testing, and so this work is often outsourced to service providers. Large service providers, including IBM, offer a variety of testing services, including regression testing.

Given the scale of the problem and the limited time available in a regression testing cycle, comprehensive manual test execution is generally infeasible. This is where test automation comes in. The idea is to write programs, a.k.a. test scripts, that drive these Web applications programmatically by taking control of a browser and simulating user actions. Unfortunately, automation has its costs. One is the cost of creating these test scripts in the first place. The second, often hidden, cost is that of keeping these scripts working in the face of small UI changes. If not done properly, the cost of continual test maintenance can negate the benefits of automation.
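
To make this concrete, here is a minimal, hypothetical example of such a test script, assuming Selenium WebDriver as the library that drives the browser; the URL, element IDs, and credentials are made up for illustration:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive the application the way a user would: open the login page,
# fill in the form, and check that we landed on the right page.
driver = webdriver.Chrome()
try:
    driver.get("https://bank.example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    # A locator tied to the page's DOM structure; a small layout change breaks it.
    driver.find_element(By.XPATH, "//div[2]/form/button[1]").click()
    assert "Account Summary" in driver.title
finally:
    driver.quit()

Scripts like this are easy enough to write once, but every locator that depends on page structure, like the XPath above, is a maintenance liability; this is the hidden cost we refer to.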


In our research, we proposed a new approach to test automation. Since test cases are initially available as "manual" tests, written as test steps in plain English, our idea was to generate a program almost automatically, based on lightweight natural language processing and local exploration, as is common in approaches to program synthesis. This semi-automation lowers the cost of entry into test automation, because it requires less programming expertise than hand-coding the scripts would. Our approach also addresses the issue of script fragility: it represents the generated scripts in a DOM-independent[a] manner. We called our tool ATA, for Automating Test Automation [3].
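
To give a flavor of the approach (this is a simplified sketch of the general idea, not the actual ATA implementation; the step wording, URL, and labels are hypothetical), a tool in this spirit might interpret plain-English steps with lightweight pattern matching and locate elements by their user-visible labels rather than by their position in the DOM:

import re
from selenium import webdriver
from selenium.webdriver.common.by import By

def find_by_label(driver, label):
    # Locate an element by its visible text (a button) or by the label
    # attached to it (an input field), not by a layout-dependent path.
    xpath = (f"//button[normalize-space()='{label}'] | "
             f"//input[@id=//label[normalize-space()='{label}']/@for]")
    return driver.find_element(By.XPATH, xpath)

def run_step(driver, step):
    # Very lightweight "natural language processing": a few step patterns.
    m = re.match(r"Enter (.+) in the (.+) field", step, re.I)
    if m:
        find_by_label(driver, m.group(2)).send_keys(m.group(1))
        return
    m = re.match(r"Click the (.+) button", step, re.I)
    if m:
        find_by_label(driver, m.group(1)).click()
        return
    raise ValueError("Cannot interpret step: " + step)

# A manual test as a tester might have written it (hypothetical wording).
manual_test = [
    "Enter testuser in the Username field",
    "Enter secret in the Password field",
    "Click the Sign In button",
]

driver = webdriver.Chrome()
try:
    driver.get("https://bank.example.com/login")  # hypothetical URL
    for step in manual_test:
        run_step(driver, step)
finally:
    driver.quit()

Because the resulting actions refer to what the user sees (the "Sign In" button, the "Username" field) rather than to a particular DOM path, small changes to the page layout are less likely to break them; the actual inference and script representation in ATA are, naturally, far more involved than this toy pattern matching.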

Given a research prototype, and given our context (IBM's service delivery organization), we wanted to see if there was an opportunity for business impact by deploying ATA in client accounts. The service delivery organization in IBM is basically set up as separate account teams, each servicing a particular client. We decided to approach, one by one, the account teams for which we knew test automation was one of the deliverables.

Our research affiliation was effective in opening doors. In most cases, we were able to quickly arrange a demo for the account team. These demos worked well, and in many cases we were able to persuade the team to allow us to do a pilot engagement with them. This was a good outcome, as it was clear that the approach we presented was at least of preliminary interest to the practitioners of test automation; after all, a pilot implied a time commitment on the part of the practitioners too.

This brings us to our first lesson: for tech transfer, the technology has to be directly applicable to the business (see the accompanying table for a summary of lessons learned).

Actually conducting the pilots turned out to be a much more difficult undertaking than we had anticipated. The goal of a pilot is to show that the technology works in a real-world context, that is, in the context of a client account. This immediately exposed all sorts of assumptions we had baked into our research prototype. We had to extend the prototype to accommodate issues such as poorly written manual tests, missing test steps, exceptional flows, verification steps, and so on. The automatic exploration option in ATA did not work as well as we had hoped, and besides, users did not like ATA automatically crawling through their apps. Each account with which we piloted exposed a fresh set of shortcomings, and we had to fix ATA to handle all of these issues. Since the account teams we worked with were all located in Bangalore, we could not risk any negative comments on ATA leaking out in this more-or-less close-knit community!

After months of hard work, we felt we had shown that ATA works well in real situations. Yet no team signed up to adopt ATA in its day-to-day work. This was a disappointment to us. Delivery managers were being, we felt, overly conservative in the way they executed their projects. On the other hand, we were asking for a lot: we were asking their teams to change the way they carried out test automation. After all, ATA was not a tool whose output you could take or ignore and continue as usual; ATA was how you would get your work done. If ATA did not work out, a lot of time would be wasted and the project would fall behind schedule. Delivery managers are foremost responsible for predictable and consistent delivery, and were understandably circumspect about adopting ATA in their teams. Finally, their clients were satisfied with the existing level of productivity, and there was no incentive to change the status quo.

This brings us to the second important lesson: although we had established the technical feasibility of using ATA in real projects, we had not made the case that the benefits outweighed the costs and the risks involved. Just because a research tool is available for free does not mean people will adopt it. People wanted to see a prior deployment they could point to in defending the adoption of ATA, and we had none to show. Moreover, we had no data indicating the actual productivity improvements from using ATA.

This chicken-and-egg problem was resolved by a lucky coincidence. We came in contact with a team, located in the building next to ours, that was in charge of test automation for an internal website. This team was trying to get through the automation of about 7,000 tests, and they were falling behind. Since they were desperate, and were not under the confines of a client contract, they decided to try out ATA. This allowed us to collect some citable data [2]. The actual data is not important here, and possibly had caveats, but it corroborated the claims of higher productivity as well as script resilience. We tried to offer this team as good "customer service" as we possibly could, which came in handy later when we asked them to be our reference for others.

The obvious but crucial third lesson is that your users, particularly the early adopters, are precious and should be treated as such, because their referrals are what open more doors.

Over time, we realized that trying to change the ways of existing projects might be a fruitless initiative. It might be better to approach the sales side of the business and get ATA worked into the deal right from the start. This turned out to be a good idea. Salespeople liked flaunting the unique technology in test automation that IBM Research had to offer. For our part, we enjoyed getting to talk to higher-level decision makers on the client side (these would often be executive-level people in the CIO teams of major corporations) as opposed to just delivery managers on the IBM side. Once salespeople promised the use of ATA to the clients, the delivery managers had no choice but to comply. The result: better traction!

The fourth lesson then is that tech transfer is subject to organizational dynamics, and sometimes a top-down approach might be more appropriate than a bottom-up push.

Getting client teams interested turned out to be only part of the battle. There was significant work involved in customizing ATA to suit a client's needs. Since the automation tool is only one part of the overall workflow, we needed to ensure that ATA interoperated with whatever third-party quality management infrastructure (such as Quality Center [1]) the client used in their organization. We also found ourselves under a lot of pressure because people could be vocal about their wish list for ATA, whereas they would have quietly put up with the limitations of off-the-shelf software! Notwithstanding the unique capabilities of ATA that differentiated it from competing product offerings, users were not averse to comparing ATA with these other tools in terms of feature completeness. Addressing this partly involved managing user expectations of a tool that was, in essence, a research prototype rather than a product.

One of the recurring issues in client acceptance was tool usability. ATA was, by design, a tool for non-expert or semi-expert users. As such, we had to pay significant attention to making the tool behave well under all sorts of usage, some of it comically inept. Failure to anticipate such abuses of the tool resulted in escalations, and with them the risk of creating a bad image for the technology.

The final lesson, then, is that the "last mile" is a deeply flawed metaphor when applied to the tech transfer of software tools. In reality, a research prototype is just the first mile; everything after that is the work needed to make the technology work in real-world scenarios, to make it usable by the target audience, and, perhaps most importantly, to establish a positive value proposition for it. This requires patience and a long-term commitment on the part of researchers who wish to carry out a successful tech transfer.


References

1. Quality Center Enterprise, HP; http://www8.hp.com/us/en/software-solutions/quality-center-quality-management/

2. Thummalapenta, S., Devaki, P., Sinha, S., Chandra, S., Gnanasundaram, S., Nagaraj, D., and Sathishkumar, S. Efficient and change-resilient test automation: An industry case study. In Proceedings of the International Conference on Software Engineering (Software Engineering in Practice), 2013.

3. Thummalapenta, S., Sinha, S., Singhania, N., and Chandra, S. Automating test automation. In Proceedings of the International Conference on Software Engineering, 2012.


Authors

Satish Chandra ([email protected]) is Senior Principal Engineer at Samsung Research America, Mountain View, CA.

Suresh Thummalapenta ([email protected]) is a member of the Tools for Software Engineering department at Microsoft Corporation, Redmond, WA.

Saurabh Sinha ([email protected]) is a member of the Programming Technologies department at the IBM T.J. Watson Research Center, Yorktown Heights, NY.


Footnotes

a. DOM is the document object model, a structure used by all browsers to represent a Web page.

This Viewpoint is based on an invited talk Satish Chandra presented at the 22nd ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2014). The work described here was carried out at IBM Research in Bangalore, India.


Tables

Table. Summary of lessons for tech transfer.

1. The technology has to be directly applicable to the business.
2. Technical feasibility is not enough; the benefits must be shown to outweigh the costs and risks.
3. Early adopters are precious and should be treated as such; their referrals open more doors.
4. Tech transfer is subject to organizational dynamics; sometimes a top-down approach works better than a bottom-up push.
5. A research prototype is only the first mile; making the technology work in the real world, and establishing its value proposition, takes patience and long-term commitment.



Copyright held by authors.
