1 Introduction

Mobile service applications are essential in both business and leisure contexts. They make user actions and activities more effective and productive and simplify routines [1]. Although valuable, their adoption has been much slower than expected [1]. This may be due to poor decision making in the mobile service innovation process, resulting from a lack of structured and transparent activities [2–4]. Research suggests that the mobile service innovation process lacks structure and transparency [4, 7]. Furthermore, it has been suggested that poor understanding in this innovation process results in problems flowing into the later development and deployment stages [6, 7]. Understanding, within this research, refers to decision makers’ perceptions of particular factors that can influence the adoption of the mobile concept. Finally, poor communication is also a challenge for this process [7, 8]. Research suggests that the terminology used by different members of the development team can be contradictory (due to their diverse backgrounds) and can result in misunderstandings in relation to elements of the mobile concept.

Providing transparency and facilitating understanding and communication amongst team members is vital for rational decisions to take place [7]. These challenges are treated as important problems that the proposed research must solve. Consequently, the underlying research question addressed is the following: how can the process of mobile service application innovation be improved to address the challenges of transparency, understanding and communication?

To address the aforementioned challenges, prescriptive knowledge must be provided to the process for mobile service application innovation [5, 10–15]. Design science is a promising research approach that makes it possible to scientifically study the human experience as it relates to IT artifacts, while simultaneously creating new and powerful interactive experiences [5]. It facilitates the creation of prescriptive design knowledge through building and evaluating IT artifacts that can change the world. Despite this, the use of design science in HCI is rare [5]. Due to its prescriptive and practical suitability, this paper follows the Design Science Research Methodology (DSRM) proposed by [16]. By doing so, this work provides comprehensive insight for the knowledge base and practice, while also demonstrating the applicability of the design science research methodology for HCI.

To address the issue of transparency, the ‘decision situation’ is logically structured into a new calculative space and quantified [19, 20]. This involves defining the decision situation (e.g. concept definition and evaluation in the innovation process), structuring its key elements (e.g. influencing-factor scales and adoption scales) and quantifying these (e.g. aggregating the data to quantify adoption for the alternative levels of influencing factors present). Specifically, we propose to do this in the form of an interactive assessment instrument, namely the Mobile Concept Assessment Instrument. To address the issue of poor understanding, the instrument represents and structures the parameters of the decision in graphical form (i.e. a visual aid). Providing a visual aid helps decision makers to filter relevant dimensions of the context. Consequently, the visual aid can enable a deeper understanding of key factors and how they can influence the adoption of the mobile concept. Finally, communication can be improved by structuring the decision situation into traceable units of analysis with clear and consistent descriptions. This encourages collective discussion of the important parameters of the decisions [21].

The following section provides an overview of the existing literature on the process for mobile service application innovation. Section three then describes the research methodology in more detail. Section four describes the development of the assessment instrument and the ex-ante evaluation, which involved interviews with industry experts and resulted in improvements to the initial design. Following the design improvements, section five describes the ex-post evaluation, which involved the implementation of the refined assessment instrument in a real-world case study organization. Section six discusses the findings from the ex-post evaluation, and finally, section seven concludes the paper and summarizes the main contributions.

2 Literature Review

Since the 1950s, substantial research has examined the phases and activities of the innovation process. Despite the vast body of work in this area, no one model of the innovation process has proved superior to the others. Within this research, we discuss the innovation process in the context of mobile service applications.

Koen et al. [17] developed a theoretical construct, defined as the New Concept Development (NCD) model, to provide a common language and insight into the front end activities of innovation. Their model consists of three key parts: the front end elements, the engine that powers the elements, and the external influencing factors. The engine represents senior and executive-level management support, which powers the elements of the NCD model. The outer area denotes the influencing factors that affect the decisions of the two inner parts. The front end elements (which can be conducted in an iterative manner) include the following activities:

  • Opportunity Identification: This is where the organization, by design or default, identifies the opportunities that the company might want to pursue.

  • Opportunity Analysis: This is where additional information is used to translate the identified opportunity into specific business and technology opportunities, and where technology and market assessments are conducted.

  • Idea Generation: This represents an evolutionary process in which ideas are built upon, torn down, combined, reshaped, modified, and upgraded.

  • Idea Selection: This involves choosing which ideas to pursue in order to achieve the most business value.

  • Concept Development: This involves the development of a business case based on estimates of market potential, customer needs, investment requirements, competitor assessments, technology unknowns, and overall project risk.

  • Concept Evaluation: This is the final ‘go/no-go’ decision point prior to moving into the planning and development stage. Here, the results of the concept development case are evaluated.

The NCD model [17] best reflects the innovation activities of ‘real world’ mobile service application development organizations. Consequently, this research uses the NCD model [17] to describe the innovation activities for mobile service applications. In particular, this study focuses on the final two activities: concept development and evaluation.

While thinking about service innovation has shifted over the years, challenges still exist, particularly for mobile service innovations. In recent years, it has been argued that too many mobile service innovations fail, or do not achieve their inventors’ expectations [18]. One reason for this is poor decision making in the mobile service innovation process [1, 3, 4]. Research suggests that the mobile service innovation process still lacks structure and transparency [7]. Furthermore, poor understanding and communication in the innovation stage can result in problems flowing into the later development and deployment stages [8]. Providing transparency and facilitating understanding and communication among team members is vital for rational decision making to take place [9]. Little published research, and few industry initiatives, have sought to improve decision making in the mobile service innovation process. The area can be improved by adding transparency to the process and facilitating communication and understanding among members. This research is based on this identified gap and suggests that facilitating these elements can improve overall decision making in the mobile service innovation process.

3 Research Methodology

Design science is a promising approach that makes it possible to scientifically study the human experience as it relates to IT artifacts while simultaneously creating new and powerful interactive experiences [5]. Despite its advantages, the use of design science in HCI is rare [5]. Due to its prescriptive and practical suitability, we follow the DSRM proposed by [16] to design an interactive assessment instrument and to evaluate users’ experiences with it during the process of mobile service application innovation. By doing so, this research contributes to both theory and practice while also demonstrating the applicability of the DSRM for HCI.

The six DSRM steps were followed: problem identification and motivation, objectives of a solution, design and development, demonstration, evaluation (ex-ante and ex-post) and communication [16]. The lack of transparency, poor understanding and poor communication in the innovation process represent the first DSRM phase of problem identification and motivation. The objectives of a solution are derived from the three-step process model of crafting rational decisions in practice [9]. The model was reviewed and selected for use during the design and development of the artefact (see Fig. 1). The three steps in the model (contextualization, quantification and calculation) are used as principles for the assessment instrument’s design and development. Following this, an ex-ante evaluation was conducted via interviews with practitioners, which resulted in improvements to the design. Further details of the tool’s design and development are outlined in Sect. 4.

Fig. 1. The research methodology based on [9, 16]

Once the design was refined, a case study organization (a mobile app development organization in Galway, Ireland) was selected as the test site to execute the demonstration and ex-post evaluation. The ex-post evaluation captures the participants’ experiences with the assessment instrument and the change to the mobile service innovation process as a result of using it. The method of evaluation is a qualitative investigation through semi-structured interviews with the development team members at the case study organization. The interview data was analyzed using a comprehensive thematic analysis approach [22]. In particular, the researchers investigated whether elements of decision theory were replicated [9].

4 Assessment Instrument Design and Ex-ante Evaluation

This research has outlined the existing challenge of poor decision making in the mobile service innovation process as a result of a lack of transparency, communication and understanding. This section details the iterative design of a solution to these challenges, in the form of an interactive assessment instrument, namely the Mobile Concept Assessment Instrument.

Firstly, the researchers gathered the requirements needed to address the aforementioned challenges. These requirements were gathered via interviews with industry experts and are summarized in Table 2. In addition, a further review of relevant literature was conducted to identify suitable theories that could be used to design a solution to these challenges and to meet the outlined requirements. This resulted in the theory of crafting rational decisions in practice [9] being incorporated as the kernel theory to guide the design and development. The choice of this theory is justified by the core focus of our research, which is to improve decision making in the mobile service innovation process. The theory [9] includes a three-step process model which illustrates how decision analysts perform rational choice theory in practice. The model comprises ‘contextualization’, ‘quantification’ and ‘calculation’. Descriptions of these are outlined in Table 1 below.

Table 1. The theory of rational decision making in practice [9]

These three steps (contextualization, quantification and calculation) form the principles of the assessment instrument design and are included in Table 2 along with the summarized challenges and requirements.

Table 2. Tool requirements and design principles

In particular, Table 2 illustrates that structure must be present in the innovation process if the ‘lack of transparency’ is to be addressed. Consequently, ‘process structuring’ is the first requirement. In order to add structure, it is critical to embed the ‘contextualization’ step outlined by [9] while developing the tool. This involves turning the unstructured situation into a decision-analysable problem. Thus, ‘contextualization’ is the first design principle. To contextualize the decision situation, the researchers conducted an in-depth content analysis and a focus group with practitioners (as part of a larger research project) to identify and select factors that should be considered during the innovation process. This resulted in thirteen (adoption) factors being prioritized and selected for inclusion in the assessment instrument.

Table 2 also illustrates that, to address the issue of ‘poor understanding’, relevant dimensions of the decision context must be filtered. Consequently, the second design principle is ‘quantification’, which allows factors to be filtered and the decision to become calculable. To filter the data, the assessment is divided into three scales. Firstly, the development team must select the particular type of mobile concept from six categories (communication, information, transaction, learning, social media, context sensitive). This filters the aggregated data in the background. The second part of the assessment instrument involves answering questions about the characteristics of the mobile service, for example its complexity and intuitiveness. Finally, the third part involves answering questions about the context of use of the service, for example the environment and use situations. The team answer each question applicable to them and allocate a score on the scale. To allocate a score to a question, the team must discuss the factor that the question addresses in detail. While this forces the team to consider adoption factors they would not previously have considered, the structured list of questions acts as a guide during the meeting, helping them to stay focused. It was also important at this stage to use consistent terminology to facilitate communication flow.
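To make the scoring mechanics concrete, the following is a minimal sketch of the three-scale logic described above. The function names, question labels and the simple averaging rule are illustrative assumptions; the paper does not publish the instrument’s internal aggregation rules.

```python
# Minimal sketch of the three-scale scoring described above. All names,
# question labels and the averaging rule are illustrative assumptions;
# the instrument's internals are not published.

SERVICE_TYPES = {"communication", "information", "transaction",
                 "learning", "social media", "context sensitive"}

def scale_score(answers):
    """Average the 0-100 scores the team allocated to one scale's questions."""
    return sum(answers.values()) / len(answers)

def assess(service_type, characteristics, context_of_use):
    """Score the two question scales for a concept of the selected type."""
    if service_type not in SERVICE_TYPES:
        raise ValueError(f"unknown concept type: {service_type}")
    # In the instrument, selecting the type filters the aggregated
    # background adoption data; that data is not reproduced here.
    return {"service": scale_score(characteristics),
            "context": scale_score(context_of_use)}

# Hypothetical example: a transaction concept scored by the team.
scores = assess("transaction",
                characteristics={"complexity": 80, "intuitiveness": 84},
                context_of_use={"environment": 78, "use situations": 86})
print(scores)  # {'service': 82.0, 'context': 82.0}
```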

Finally, Table 2 illustrates that the activities of allocating a score and calculating the user adoption must be automated to ensure a positive user experience (UX). Additionally, the decision situation must be represented in graphical form, so a ‘visual aid’ is required. To do so, it was necessary to apply ‘calculative tools’ to automate the assessment and represent the results in a visual aid (a bubble chart). The mobile service concept is classified based on the scores allocated to the questions for each of the scales; for example, the type of service, the service characteristics and the intended context of use of the service are classified. Following this, the ‘potential’ adoption score is calculated and visually represented. This score is based on existing adoption data that has been classified and aggregated in the background of the assessment instrument.
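The calculation and visual-aid step could be wired up as in the sketch below. Since the aggregated adoption data behind the instrument is not published, a simple mean of the two scale scores stands in for the actual lookup, and the matplotlib chart is only an approximation of the instrument’s bubble-chart visual aid.

```python
import matplotlib.pyplot as plt

def potential_adoption(service, context):
    """Stand-in for the instrument's lookup against aggregated adoption
    data (not published); a simple mean of the two scale scores."""
    return (service + context) / 2

def plot_bubble(service, context, adoption):
    """Approximate the visual aid: position encodes the two scale
    classifications, bubble size encodes the potential adoption score."""
    fig, ax = plt.subplots()
    ax.scatter([service], [context], s=adoption * 30, alpha=0.5)
    ax.set_xlim(0, 100)
    ax.set_ylim(0, 100)
    ax.set_xlabel("Service classification (%)")
    ax.set_ylabel("Context-of-use classification (%)")
    ax.set_title(f"Potential adoption: {adoption:.0f}%")
    plt.show()

plot_bubble(82, 82, potential_adoption(82, 82))
```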

4.1 Ex-ante Evaluation and Design Improvements

By understanding the impact the designed assessment instrument has on its users, we can identify issues with the design and areas for improvement. This is referred to as the ex-ante evaluation.

A series of interviews was conducted with industry experts to identify areas for improvement. The assessment instrument was demonstrated to practitioners, who were then asked whether they recommended any areas for refinement. The practitioners highlighted some issues with the initial design while also suggesting areas for refinement. The identified issues, suggestions and refinements are summarized in Table 3.

Table 3. Design issues, practitioner suggestions and refinements

Specifically, Table 3 illustrates that the practitioners found some of the questions confusing. As a result, the questions were refined to reflect industry-standard definitions. They also highlighted that the scales used to categorize the mobile concepts were confusing (e.g. the descriptions of the scale categories were unclear). Consequently, all scales were clearly defined using industry-standard definitions. Additionally, they suggested that the scales should be adjusted, as the visual aid did not emphasize much of a difference between categories on the 1 to 5 scale; for example, there was little visual difference between scores of 3.5 and 5. As a result, the scale was adjusted: the new scale categorizes factors between 0 and 100 % as opposed to 1 to 5.

The potential adoption score is divided into three bands: low, moderate and high adoption. Within this research, adoption is based on intention to use. Low intention to use is captured as any score under the threshold of 50 %. Any score above 50 % represents a moderate to high intention to use, with moderate moving to high once the score passes 60 %. This information is represented in a three-dimensional bubble chart. The practitioners highlighted that the differences between bubble charts (adoption scores) were difficult to distinguish. As a result, the bubble chart was refined so that bubble size varies with the score: the smaller the bubble, the lower the score (and vice versa), thereby indicating low adoption. The bubble is also colour coded for a deeper visual effect, using a traffic-light colouring system in which red indicates poor adoption and green indicates high adoption. Providing this information in a bubble chart can assist team members’ understanding of how these factors will positively or negatively affect adoption. This visual aid also provides information which they can later use to justify their decisions for including or not including certain elements in the mobile service.
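The banding and traffic-light styling described above translate directly into a small lookup; the sketch below uses the stated 50 % and 60 % thresholds, while the amber colour for the moderate band and the exact size formula are assumptions.

```python
def adoption_band(score):
    """Band a 0-100 adoption score using the thresholds stated above:
    under 50 % is low, 50-60 % moderate, above 60 % high."""
    if score < 50:
        return "low"
    return "moderate" if score <= 60 else "high"

def bubble_style(score):
    """Traffic-light styling: red for poor, green for high adoption.
    Amber for the moderate band and the size formula are assumptions."""
    colour = {"low": "red", "moderate": "orange", "high": "green"}
    return {"size": 10 + score * 2, "colour": colour[adoption_band(score)]}

print(bubble_style(90))  # {'size': 190.0, 'colour': 'green'}
```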

Finally, they suggested that automating the process would be useful, as some applications have a ‘fast-to-market’ need. To automate the process, instant feedback is provided: scores are allocated using a ‘scroll-bar’ and the bubble chart data automatically adjusts to the scores allocated. The suggestions provided by industry experts, as well as recommendations in the literature, were used to refine the assessment instrument. Several iterations of design refinement were conducted until the researchers were satisfied.
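One plausible way to wire up this instant-feedback behaviour is sketched below, with ipywidgets sliders standing in for the instrument’s scroll bars; the paper does not describe the actual implementation, and the mean and thresholds reuse the assumptions of the earlier sketches.

```python
# Hypothetical instant-feedback wiring: ipywidgets sliders stand in for
# the instrument's scroll bars (the real implementation is not described).
from ipywidgets import FloatSlider, interact

@interact(service=FloatSlider(min=0, max=100, step=1, value=50),
          context=FloatSlider(min=0, max=100, step=1, value=50))
def update(service, context):
    # Recomputed on every slider move, giving immediate feedback.
    score = (service + context) / 2
    band = "low" if score < 50 else ("moderate" if score <= 60 else "high")
    print(f"{band} intention to use ({score:.0f}%)")
```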

5 Case Study Demonstration and Ex-Post Evaluation

The ex-post evaluation involved the use of the assessment instrument in a case study organization. Following this, interviews were conducted to investigate the participants’ experience with the assessment instrument and the changes to the innovation process. Specifically, we investigated whether a change occurred to transparency, understanding or communication in order to claim ‘replication’ (i.e. to replicate the logic of decision theory [9] in the process of mobile service application innovation).

An agreement was reached with a mobile application development organization in Galway, Ireland, to trial the M-Concept Assessment Instrument. The organization is one of the leading app development organizations in Ireland. It provides cutting-edge applications to both large and small clients and also develops in-house applications. Using the categorization of company size proposed by the European Commission, the organization fits between the categories Micro-Entity (< 10 employees) and Small Company (< 50 employees), as it has nine employees on-site and five further employees working overseas. The development team consisted of six members: a project manager, a UX designer, a business analyst, two software developers and one member from marketing. All members participated in this study. A mobile transaction service, which permits payment for products (e.g. food at a grocery store) on a smartphone, anytime and anywhere, was selected as the mobile concept. End users need to create a profile and purchase online tokens which they can use as credit for products. The supplier of the products can approve payment by selecting an ‘approve’ option when the customer notifies them of the products they wish to purchase.

The study was carried out on-site at the organization. A presentation demonstrating the console of the assessment instrument was given to the development team. After this, a workshop was held in which the development team conducted the assessment. The assessment instrument questions were answered by all team members together. During the workshop, they read the questions aloud and discussed each point. The discussion began with one member offering their opinion and continued until each member of the group had voiced theirs. The team then debated which score to allocate to each question, and this continued until all questions were answered. Based on the scores allocated to each question, the instrument automatically calculated the potential user adoption score. Specifically, the instrument classified the mobile service at 82 % and the mobile service context at 82 %, and indicated that the potential user adoption for these scores was 90 %. This means that the mobile concept fitted into the category ‘high intention to adopt’: a high adoption score indicates that the user has a high intention to use the service. The participants’ experiences with the assessment instrument are summarized in Sect. 6, ‘study findings’.

6 Study Findings

To gather data for the ex-post evaluation, semi-structured interviews were conducted with the case study participants after the use of the assessment instrument. The interviews sought feedback about the participants’ experience with the assessment instrument and the changes to the innovation process. Specifically, we investigated whether a change occurred to transparency, understanding or communication in order to claim ‘replication’. The interview data was examined using a comprehensive thematic analysis approach [22]. The findings from this analysis are briefly summarized under the following themes:

Transparency: There is strong evidence that the assessment instrument has added transparency. For example, one member suggested that it helped them to scope the concept: “I think that if we used this, there would be more structure because from the beginning you are starting to determine the scope of the project and peoples roles in the project”.

Another member suggested that it assisted with documenting the process and keeping focused: “It adds transparency because it is more easily documentable, when everyone is together and answer specific questions, you can go back and see who said what… Also the more structure there is the more transparent the process becomes, because we had a list of questions today, we knew we couldn’t leave anything out”.

Communication: There is also evidence from the interviews that the assessment instrument can improve communication in the innovation process. For example, one member suggested: “Using this tool we are more equal, we all talked about each point and everyone expressed their opinions and ideas it wasn’t just one or two members of the team, with the team leader. It was more integrated”.

Another member mentioned that it can help the team members communicate the extent to which particular elements of the app exist: “It definitely promotes communication because you are scoring each question, it means that there may be a broad agreement among the team that yes maybe we are on this factor but they may not agree on the extent to which we are covered”.

Understanding: The interviews also indicated that the assessment instrument can improve understanding among team members. One member felt that it helped them to recognize factors that they would not have previously considered: “I do think that using the assessment instrument brought up some conversations that would not have come up otherwise”.

Another team member mentioned that it helps not only to consider adoption factors but also to understand each member’s role better: “This helps us to understand the roles in the project and what is expected for each member…like for example after our discussions… I know the person in the marketing role might have a much bigger role in the project than we would have originally thought, because the project might be very dependent on branding and that is something that we would not have discovered if we did not use this tool”.

Along with capturing the changes to the innovation process, the interviews sought feedback about the participants’ experience of using the assessment instrument. The outcomes are summarized under the following themes:

Appropriateness: One member mentioned that they were originally apprehensive about using the assessment instrument; however, after using it, they found it very useful and appropriate for this stage: “When we started I thought that this wouldn’t be the best project to use this with but when we actually did the assessment I thought that this made us think a lot more about the concept… so yes in this stage it is very useful”.

Ease of use: Finally, the findings from the analysis suggested that the participants found the assessment instrument easy to use. One participant mentioned: “The way it was structured with the scroll bar was useful… it was easy and well presented. The graph shows you your score so you get immediate feedback that was presented in a very straightforward way”.

7 Conclusions

In summary, the aim of this paper was to describe the construction and evaluation of an interactive assessment instrument to improve the process for mobile service innovation, following the DSRM. The ex-post evaluation has confirmed the potential of the M-Concept Assessment Instrument to address the transparency, communication and understanding challenges within the innovation process. Naturally, the proposed assessment instrument needs further evaluation before ‘replication’ can be claimed; consequently, further case study investigations are currently being undertaken. Nonetheless, the results of the evaluation provide comprehensive insight for the knowledge base in terms of decision making in the mobile service application innovation process. A further significant achievement is the incorporation of the tool in industry, providing strong evidence of the industry relevance of the research outcome [23]. Finally, the paper demonstrates that the DSRM provides a clearly defined, step-by-step set of actions to design and evaluate an interactive IT artefact within the HCI field, and it can therefore serve as a reference for other researchers who wish to use design science in HCI.