My research interests are in information visualization and human-computer interaction, at the intersection of perceptual evaluation, cognitive measures, visualization design, and meta-analysis. Perceptual models (e.g., the topology-based data models developed during my PhD) make it possible to optimize design decisions based on visual factors, whether visual encodings (such as symbol type or area) or data aspects (such as the number of data symbols), and perceptual evaluation can further improve visualization effectiveness.
Design choices in visualization, such as the graphical encodings, can directly impact the quality of decision making. Effective visualizations improve understanding of data by leveraging visual perception, e.g., the size of marks in scatterplots better represents quantitative data, while color can express categorical data. In addition, the effectiveness of a visualization varies with the tasks being performed on it, e.g., when searching for clusters vs. outliers, opacity impacts the visibility of data. Hence, constructing frameworks that consider both the perception of the visual encodings and the task being performed enables optimizing visualization design to maximize efficacy. Using the proposed framework, we provide less ambiguous presentations of data, leading to better-quality and higher-confidence decision making.
Scatterplots are among the most widely used visualization techniques. Compelling scatterplot visualizations improve understanding of data by leveraging visual perception to boost awareness when performing specific visual analytics tasks. Design choices in scatterplots, such as graphical encodings or data aspects, can directly impact decision-making quality for low-level tasks like clustering. In this work, we propose an automatic tool to optimize the design factors of scatterplots to reveal the most salient cluster structure. Our approach leverages the merge tree data structure to identify the clusters and to optimize the choice of subsampling algorithm, sampling rate, symbol size, and symbol opacity used to generate a scatterplot image.
Ghulam Jilani Quadri, Jeniffer Adorno, Brenton Wiernik, and Paul Rosen, "Automatic Scatterplot Design Optimization for Clustering Identification." IEEE Transactions on Visualization & Computer Graphics, (Under Revision).
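The optimization described above can be sketched as a search over the scatterplot design space. The sketch below is hypothetical: the parameter grids and the `cluster_salience` scoring function are invented stand-ins for illustration, whereas the actual measure in the work is derived from the merge tree of the rendered plot.

```python
import random
from itertools import product

random.seed(0)

# Hypothetical candidate grids for the design factors named above.
SAMPLING_RATES = [0.25, 0.5, 1.0]   # fraction of points kept by subsampling
SYMBOL_SIZES = [2, 4, 8]            # symbol radius in pixels
OPACITIES = [0.3, 0.6, 1.0]         # symbol alpha

def cluster_salience(rate, size, opacity):
    # Placeholder score: a real implementation would render the
    # scatterplot and evaluate the merge tree of its density field.
    return random.random()

# Exhaustively score every design and keep the most salient one.
best = max(product(SAMPLING_RATES, SYMBOL_SIZES, OPACITIES),
           key=lambda cfg: cluster_salience(*cfg))
print(best)
```

In practice, the design space also includes the choice of subsampling algorithm, and the scoring model replaces the random placeholder above.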
Scatterplots are used for various visual analytics tasks, including cluster identification. The visual encodings used on a scatterplot play a deciding role in the visual separation of clusters. For visualization designers, optimizing the visual encodings is crucial to maximizing data clarity. This requires accurately modeling human perception of cluster separation, which remains challenging. We present a multi-stage user study focusing on four factors---distribution size of clusters, number of points, size of points, and opacity of points---that influence cluster identification in scatterplots. Using the merge tree data structure from Topological Data Analysis, we have constructed two models from these parameters: a distance-based model and a density-based model. Our analysis demonstrates that these factors play an important role in the number of clusters perceived, and it verifies that the distance-based and density-based models can reasonably estimate the number of clusters a user observes. Finally, we demonstrate how these models can be used to optimize visual encodings on real-world data.
Ghulam Jilani Quadri and Paul Rosen, "Modeling the Influence of Visual Density on Cluster Perception in Scatterplots Using Topology." IEEE Transactions on Visualization & Computer Graphics, 2021 (to appear). (Preprint | PDF | Demo | Data)
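A minimal sketch of the distance-based idea, using SciPy's single-linkage hierarchy as a stand-in for the merge tree: as the distance threshold grows, points merge into larger components, and clusters correspond to branches that remain separate up to a chosen threshold. This illustrates the concept only; the synthetic data and the 1.0 threshold are example choices, not the paper's models.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Two well-separated synthetic Gaussian clusters.
pts = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.3, size=(50, 2)),
])

# Single-linkage dendrogram: components merge as pairwise distance
# grows, mirroring a merge tree over the distance field.
Z = linkage(pts, method="single")

# Clusters that are still separate at the distance threshold.
labels = fcluster(Z, t=1.0, criterion="distance")
n_clusters = len(np.unique(labels))
print(n_clusters)  # -> 2
```

The density-based variant works analogously, but over a density estimate of the plotted symbols rather than raw pairwise distances.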
When line chart data are noisy, visualization designers can turn to smoothing to reduce the visual clutter. However, there are many techniques available, and while the results they produce may look similar, each preserves different properties of the data. To preserve some properties of the input data, each smoothing technique must also lose information, which can have a negative impact on the utility of the resulting data. The importance of the lost information can be influenced by both the data being used and the visual analytics tasks being performed. We present an analytical framework for measuring the effectiveness of various smoothing techniques and evaluate perceptual judgments of the smoothed results through user studies.
Paul Rosen and Ghulam Jilani Quadri, "LineSmooth: An Analytical Framework for Evaluating the Effectiveness of Smoothing Techniques on Line Charts." IEEE Transactions on Visualization & Computer Graphics, 2021 (to appear). (Preprint | PDF | Demo | Data)
Knowledge of human perception has long been incorporated into visualizations to enhance their quality and effectiveness. The last decade, in particular, has shown an increase in perception-based visualization research studies. Despite all of this recent progress, the visualization community lacks a comprehensive guide to contextualize their results. In this report, we provide a systematic and comprehensive review of research studies on perception related to visualization. This survey reviews perception-focused visualization studies since 1980 and summarizes their research developments focusing on low-level tasks, further breaking techniques down by visual encoding and visualization type. In particular, we focus on how perception is used to evaluate the effectiveness of visualizations, to help readers understand and apply the principles of perception to their visualization designs through a task-optimized approach. We conclude our report with a summary of the weaknesses and open research questions in the area.
Reproducibility has been increasingly encouraged by science communities to validate experimental conclusions, and replication studies represent a significant opportunity for vision scientists wishing to contribute new perceptual models, methods, or insights to the visualization community. Unfortunately, the notion of replicating previous studies does not lend itself to how we communicate research findings. Simply put, studies that re-conduct and confirm earlier results do not hold any novelty, a key element of the modern research publication system. Nevertheless, savvy researchers have discovered ways to produce replication studies by embedding them into other, sufficiently novel studies. In this position work, we define three methods---re-evaluation, expansion, and specialization---for implanting a replication study into a novel published work. Finally, we discuss why publishing a pure replication study should be avoided while providing suggestions for how vision scientists and others can still use replication studies as a vehicle for producing visualization research publications.
Ghulam Jilani Quadri and Paul Rosen, "You Can't Publish Replication Studies (and How to Anyways)." In Proceedings of VIS'19: IEEE Conference on Visualization, Workshop on Vis X Vision, 2019. (PDF)
The Visual Analytics Science and Technology (VAST) Challenge is an annual contest to advance visual analytics through competition. It is designed to help researchers understand how their software would be used in a novel analytic task and determine whether their data transformations, visualizations, and interactions would be beneficial for particular analytical tasks. In the summer of 2017, our team of three (with Sulav Malla and Anwesh Tuladhar), under the guidance of Dr. Paul Rosen, participated in three mini-challenges (MC1, MC2, and MC3) and submitted our work to the IEEE VAST Challenge community. Our MC3 submission was awarded an Honorable Mention for Good Facilitation of Single Image Analysis.
Sulav Malla, Anwesh Tuladhar, Ghulam Jilani Quadri, and Paul Rosen, "Multi-Spectral Satellite Image Analysis for Feature Identification and Change Detection. VAST Challenge 2017: Honorable Mention for Good Facilitation of Single Image Analysis," Proceedings of the IEEE Conference on VAST, October 2017. (Link | PDF)