Why Interactive Visuals Are the Next Big Thing in Software Collaboration
Table of Links
Abstract and I. Introduction
II. Approach
A. Architectural Design
B. Proof of Concept Implementation
III. Envisioned Usage Scenarios
IV. Experiment Design and Demographics
A. Participants
B. Target System and Task
C. Procedure
V. Results and Discussion
VI. Related Work
VII. Conclusions and Future Work, Acknowledgment, and References
III. ENVISIONED USAGE SCENARIOS
Besides its use of modern web technologies, our approach differs from related work in its use of dynamic analysis and collaborative SV features. We therefore now introduce envisioned usage scenarios that may follow from our approach and related future work.
Scenario 1 (SC1): Facilitate the Onboarding Process
In professional software development, companies utilize different techniques for the onboarding of new developers. Peer support, product overviews, and simple tasks are perceived as useful in this context [22], while finding documentation and technical issues, e.g., setting up a development environment, impede the onboarding process, especially for remote work [23]. We envision a scenario in which cloud-based code editors with embedded SVs are prepared to guide new developers step by step through a software system’s behavior. Users click on a use case of the analyzed (distributed) target system and observe how it unfolds in the SV. Furthermore, increasingly large portions of the source code (e.g., depending on experience) are directly linked to SV entities. This allows developers to understand which portion of the source code acts in which use case. The approach can then be used for task-oriented onboarding, where developers additionally face small tasks to comprehend the software [22], [24]. At any time, users can invite other developers for collaborative comprehension, or invite their mentor and ask for help. In addition to voice communication, participants use collaborative features such as synchronized text selection and shared information popups to interact and exchange ideas [16].
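To make the collaborative features more concrete, the following TypeScript sketch shows one way synchronized text selections and shared information popups might be broadcast among session participants. This is a minimal sketch, not our implementation: the WebSocket transport, the event shapes, and the editor hooks highlightRemoteSelection, openSharedPopup, and closeSharedPopup are all hypothetical.

```typescript
// Minimal sketch of collaborative SV events (all names hypothetical).
// A session broadcasts each participant's action so that text selections
// and information popups stay synchronized across code editors.

type CollaborationEvent =
  | { kind: 'text-selection'; file: string; startLine: number; endLine: number }
  | { kind: 'popup-open'; entityId: string }  // shared info popup on an SV entity
  | { kind: 'popup-close'; entityId: string };

class CollaborationSession {
  constructor(private socket: WebSocket, private userId: string) {
    // Apply events from other participants as they arrive.
    socket.addEventListener('message', (msg) => {
      const event = JSON.parse(msg.data) as CollaborationEvent & { userId: string };
      if (event.userId !== this.userId) this.apply(event);
    });
  }

  /** Broadcast a local action, e.g., selecting lines 10-14 of a file. */
  send(event: CollaborationEvent): void {
    this.socket.send(JSON.stringify({ ...event, userId: this.userId }));
  }

  private apply(event: CollaborationEvent): void {
    // Hypothetical hooks into editor and SV; real integration depends on the IDE API.
    switch (event.kind) {
      case 'text-selection':
        highlightRemoteSelection(event.file, event.startLine, event.endLine);
        break;
      case 'popup-open':
        openSharedPopup(event.entityId);
        break;
      case 'popup-close':
        closeSharedPopup(event.entityId);
        break;
    }
  }
}

// Editor/SV integration points (stubs for illustration only).
declare function highlightRemoteSelection(file: string, start: number, end: number): void;
declare function openSharedPopup(entityId: string): void;
declare function closeSharedPopup(entityId: string): void;
```

Filtering on userId keeps a participant’s own actions from being re-applied when the server echoes a broadcast back to all session members.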
Scenario 2 (SC2): Highlight Changes During Code Reviews
Feature requests and the resulting change-based code reviews are common in professional software development [25]. However, reviewers tend to give vacuous feedback and report on the limitations of review tools in complex scenarios [26]. In this context, we see another potential usage scenario for our approach, which we outline in the following. A team member is asked to review a colleague’s source code changes. To do so, they can click on a link inside the pull request that opens a prepared, cloud-based code editor with an embedded SV of the new program behavior resulting from the source code changes. Source code changes are color-coded in the IDE. To understand the program behavior, users can switch between the old and new program behavior in the SV at the press of a button. The colleague who issued the pull request can be invited to the session so that the changes can also be discussed together.
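The envisioned old/new behavior toggle could be realized as follows. This is a minimal TypeScript sketch under the assumption that each behavior is recorded as a set of traces; the Trace shape, the BehaviorToggle class, and the render callback are hypothetical illustrations, not part of our prototype.

```typescript
// Minimal sketch of switching the SV between old and new program behavior
// (names hypothetical). Each behavior is a set of recorded traces; a button
// toggles which set the visualization renders, with changed entities color-coded.

interface Trace { spans: { methodId: string; durationMs: number }[] }

class BehaviorToggle {
  private showNew = true;

  constructor(
    private oldBehavior: Trace[],  // traces recorded before the change
    private newBehavior: Trace[],  // traces recorded after the change
    private render: (traces: Trace[], changed: Set<string>) => void,
  ) {}

  /** Methods that appear in only one behavior are marked as changed. */
  private changedMethods(): Set<string> {
    const ids = (traces: Trace[]) =>
      new Set(traces.flatMap((t) => t.spans.map((s) => s.methodId)));
    const oldIds = ids(this.oldBehavior);
    const newIds = ids(this.newBehavior);
    const changed = new Set<string>();
    for (const id of newIds) if (!oldIds.has(id)) changed.add(id);
    for (const id of oldIds) if (!newIds.has(id)) changed.add(id);
    return changed;
  }

  /** Wired to the toggle button in the embedded SV. */
  toggle(): void {
    this.showNew = !this.showNew;
    this.render(this.showNew ? this.newBehavior : this.oldBehavior, this.changedMethods());
  }
}
```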
Scenario 3 (SC3): Integrate Runtime Information into Development Activities
Staging environments are used to test software systems in a production-like setting. We envision code editors that inform selected developers about performance problems of a software system installed, e.g., in such a staging environment. A developer can click on this notification to open the embedded SV. The visualization depicts the runtime behavior that includes the performance problem and highlights the entity that introduced it, e.g., a method call that took too long to finish. Developers thus see runtime information directly in their code editor and can analyze the affected code lines.
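One way such a notification could be derived from runtime data is sketched below in TypeScript. The Span shape, the 500 ms budget, and the findSlowCalls helper are hypothetical illustrations, not our implementation.

```typescript
// Minimal sketch of flagging the SV entity behind a performance problem
// (threshold and names hypothetical). Spans whose duration exceeds a budget
// are reported so the editor can notify developers and highlight the method.

interface Span { methodId: string; file: string; line: number; durationMs: number }

/** Return the slowest offending call per method, e.g., to drive a notification. */
function findSlowCalls(spans: Span[], budgetMs = 500): Span[] {
  const worst = new Map<string, Span>();
  for (const span of spans) {
    if (span.durationMs <= budgetMs) continue;
    const current = worst.get(span.methodId);
    if (!current || span.durationMs > current.durationMs) worst.set(span.methodId, span);
  }
  // Sort so the most severe problem is shown first in the notification.
  return [...worst.values()].sort((a, b) => b.durationMs - a.durationMs);
}

// Example: one call to checkout() took 1.2 s against a 500 ms budget.
const offenders = findSlowCalls([
  { methodId: 'ShoppingCart.checkout', file: 'cart.ts', line: 42, durationMs: 1200 },
  { methodId: 'ShoppingCart.add', file: 'cart.ts', line: 17, durationMs: 3 },
]);
console.log(offenders); // -> highlight ShoppingCart.checkout in the embedded SV
```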
IV. EXPERIMENT DESIGN AND DEMOGRAPHICS
Effectiveness is one of the most common properties used to evaluate SV approaches. In this context, Merino et al. [27] present a systematic literature review of SV evaluations. Their work analyzes the body of full papers published at the SOFTVIS/VISSOFT conferences, resulting in the examination of 181 papers. The authors focus on evaluations that validate the effectiveness of the presented approach. They note that many evaluations omit other variables that can contribute to or generally influence effectiveness [28], such as recollection and emotions. We share this view and argue that a new, exploratory approach should first be evaluated with respect to properties such as perceived usefulness, perceived usability, and feature requests, so that the approach can be refined. Only afterwards should effectiveness and efficiency be evaluated with a sufficiently large number of participants in controlled experiments [29]. We therefore decided to conduct an exploratory user study first. We designed an experiment in which participants use and evaluate our approach in a task-oriented onboarding process, i.e., a scenario similar to SC1 (see Section III). In the future, we will also evaluate our approach in the other scenarios using a similar experiment. In this paper, however, we developed the experiment with a focus on SC1 due to the approach’s prototype implementation, the exploratory nature of the study, and the duration of a single experiment run. As a result, our research questions (RQ) are not concerned with effectiveness or efficiency. Instead, we focus on several aspects to gather qualitative feedback and quantitative results, such as time spent in the embedded SV, to gain first insights into the use of our approach:
• RQ1: How do subjects use the embedded SV and code editor during task solving?
• RQ2: Is the code editor perceived as more useful than the embedded SV?
• RQ3: Do subjects recognize the usefulness of collaborative SV features for specific tasks?
• RQ4: What is the general perception of the usefulness and usability of the approach?
• RQ5: Is the approach perceived as useful in the envisioned usage scenarios?
We again emphasize that the findings of this contribution should be seen as first insights and indicators for refinement rather than statistically grounded results. However, by answering the research questions, we can derive the following main contributions of our evaluation:
• Further insights regarding the perceived usefulness of software cities to comprehend runtime behavior.
• First quantitative and qualitative results regarding the perceived usefulness, perceived usability, and usage time for collaborative, code-proximal software cities.
• A supplementary package containing the evaluation’s raw results, screen recordings of all participants, and detailed instructions as well as software packages for reproduction [30].
In the following, we present the participants’ demographics and our experiment’s procedure.
Authors:
(1) Alexander Krause-Glau, Software Engineering Group, Kiel University, Kiel, Germany ([email protected]);
(2) Wilhelm Hasselbring, Software Engineering Group, Kiel University, Kiel, Germany ([email protected]).