Invited Speakers

EuroVis:

Bioinformatics of non-coding RNAs
by Peter Stadler

RNA has long been considered a boring intermediate between DNA, the genomic storage device, and proteins, which perform nearly all important functions in the cell. With the advent of high-throughput sequencing and the technical capability for unbiased and reasonably complete measurements of RNA and protein complements, this view has changed dramatically. The overwhelming majority of transcribed RNA exerts its function, which most likely is of a regulatory nature, without ever being translated into protein. Functional RNAs can be identified by their conservation in evolutionarily related organisms, and often by the preservation of structural features even in the presence of large sequence divergence. The computation and comparison of RNA structures give rise to a variety of challenging computational problems. In my presentation I will give an overview of the state of the art in RNA bioinformatics, with an emphasis on those topics and questions where modern visualization strategies could drastically improve the insights into the data and help generate new hypotheses: structure comparison, evolutionary changes in large, complex transcript networks, as well as genome-wide patterns in relation to a wide variety of other genomic features.
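
The structure-comparison problems mentioned here can be made concrete with a small example. RNA secondary structures are commonly written in dot-bracket notation, and one of the simplest measures for comparing two structures of equal length is the base-pair distance: the number of base pairs present in exactly one of the two structures. A minimal Python sketch (the notation and the distance measure are standard in the field; the function names are our own):

    def base_pairs(dot_bracket):
        """Return the set of base pairs (i, j) encoded in a dot-bracket string."""
        stack, pairs = [], set()
        for i, ch in enumerate(dot_bracket):
            if ch == '(':
                stack.append(i)
            elif ch == ')':
                pairs.add((stack.pop(), i))
        return pairs

    def base_pair_distance(s1, s2):
        """Count base pairs present in exactly one of the two structures."""
        return len(base_pairs(s1) ^ base_pairs(s2))

    # Two alternative structures for the same 12-nucleotide sequence:
    print(base_pair_distance("((((....))))", "((.((..)).))"))  # prints 2

Realistic structure comparison must also handle sequences of different lengths, pseudoknots, and tree-structured alignments, which is where the computational challenges the abstract alludes to begin.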

Biography
Peter F. Stadler received his Ph.D. in Chemistry from the University of Vienna in 1990, following studies in chemistry, mathematics, physics and astronomy. After a PostDoc at the Max Planck Institute for Biophysical Chemistry in Goettingen he returned to Vienna to work in the area of theoretical biochemistry. Since 1994 he has been an External Professor at the Santa Fe Institute, a research center focused on Complex Systems. In 2002 he moved to the University of Leipzig as Full Professor of Bioinformatics. Since 2010 he has been an External Scientific Member of the Max Planck Society, affiliated with the MPI for Mathematics in the Sciences. The general theme of his research is the search for a consistent understanding of biological processes (with an emphasis on (molecular) evolution) at the genotypic, phenotypic, and dynamical level. The techniques range from the analysis of the dynamical systems arising in chemical kinetics and population genetics, to large-scale simulations of RNA evolution and the analysis of viral sequence data, to knowledge-based protein potentials, and to algebraic combinatorics applied to the study of fitness landscapes.

Visualization in the Neuroscience Era: The Road Ahead
by Hans-Christian Hege

The brain, as the organ of perception, memory, emotion and action, as well as the most complex thing in the (known) universe, holds a compelling fascination for researchers. According to many scientists, we are at the beginning of a new era in neuroscience that is characterized by particularly rapid knowledge advancement, mainly due to new and powerful experimental techniques. Possibly this will lead to an understanding of the fundamental principles of brain function.

All four scientific avenues are utilized in neuroscience: experimental, phenomenological-descriptive, theoretical and computational approaches. Corresponding to the complexity of its object of study, the academic field comprises about 20 different branches, with the anatomical, physiological and cognitive sciences as major categories. The branches can also be sorted into sciences dealing with (1) molecular and cellular objects, (2) neural circuits and neural systems, (3) cognitive and behavioral aspects, and (4) translational and medical aspects.

In relation to visualization (used here as shorthand for data visualization, knowledge visualization, visual analysis, visual analytics, …), neuroscience plays two roles. On one side, it represents an application domain dealing with a huge amount of highly complex data that must be visualized and analyzed in order to distill insights. On the other side, it is a supportive science providing answers to perceptual and cognitive questions in visualization. In the talk, both relations will be examined, highlighting a few of the many exciting perspectives.

Considering neuroscience as an application domain, the abundant opportunities to support data exploration, data filtering, data analysis, hypothesis generation, and modeling with interactive visual tools are obvious. Such tools are needed on all spatiotemporal scales: from the molecular to the sub-cellular, cellular, microcircuit and macrocircuit levels, each with its own time scales. In the talk I will discuss some tools, focusing on tasks related to revealing the layout of anatomical neural connections (the connectome) and the configuration of functional networks through self-organizing processes. Specific topics will be the dense reconstruction of neuronal circuits using electron tomography and the high-speed mapping of brain circuitry using optogenetic technologies.

Besides the traditional approach of single-hypothesis testing, hypothesis generation and prioritization will also become important as omics- and discovery-based methods emerge in the neurosciences. One of the most exciting and challenging frontiers in neuroscience thus involves harnessing the value of large-scale genetic, genomic, phenotypic, connectomic and physiological data sets, as well as the development of tools for data integration and mining. New methods for the analysis of dynamical networks offer the promise of integrating these different types of data and thereby providing a more integrative understanding.

Now consider the second role of neuroscience, as a supportive science: imagine that a deeper understanding of brain function has been achieved, that computational models have been established to simulate aspects of human attention, perception and cognition, and that refined tools for conducting sophisticated experiments in cognitive neuroscience have been developed. This would bring great benefits to our field, as fundamental questions of HCI and visualization could potentially be answered. Among these questions are, e.g.: How do humans perceive, interpret and use visual information? How effective are visual representations? How do visual means facilitate cognitive tasks? How do humans interact with graphical representations in order to make sense of a situation? How do forms of interaction contribute to understanding and familiarity? How do changes in understanding happen, and how do insights emerge? How to design effective graphical representations? How to design interactive visual systems? How to evaluate graphical representations? How to evaluate interactive visual systems?

More detailed answers to these old questions may arise only in the distant future. In the talk it will be explained how modern methods of cognitive psychology can already be employed today, e.g., to evaluate the effectiveness of visual representations more closely.

All in all, in the foreseeable future, neuroscience and visualization will move towards each other – with great and mutual benefit.

Biography
Hans-Christian Hege is head of the Visualization and Data Analysis Department at Zuse Institute Berlin (ZIB). After studying physics and mathematics, he performed research in computational physics and quantum field theory at Freie Universität Berlin (1984-1989). Then he joined ZIB, initially as a scientific consultant for high-performance computing, and then as head of the Scientific Visualization Department, which he started in 1991. His group performs research in visual data analysis and develops visualization software such as Amira and Biosphere3D. He is also the co-founder of Mental Images (1986) (now: NVIDIA Advanced Rendering Center), Indeed-Visual Concepts (1999) (now: Visage Imaging), and Lenné3D (2005).
He taught as a guest professor at Universitat Pompeu Fabra, Barcelona, and as honorary professor at the German Film School (University for Digital Media Production). His research interests include many branches in visual computing as well as applications in life sciences, natural sciences and engineering.
He is a member of ACM, IEEE, Eurographics, GI, DPG and CURAC.


EnvirVis:

Big Data Challenges in Environmental Management
by Joerg Meyer

Visual analysis of environmental data for the purpose of understanding and managing contaminant and groundwater dynamics requires the ability to aggregate data from a variety of heterogeneous sources. Earth scientists, hydrologists, chemists, physicists and computer scientists are working together to create models for environmental management, in particular for fluid contaminant transport. Over decades, scientists have collected data that need to be interpreted in order to create accurate and valid computational models that describe the past and, ultimately, the future. The challenge for these researchers lies mainly in the fact that large collections of data exist on multiple scales, ranging, for instance, from microscopic images and models that describe rock porosity to scattered observational or dense simulated data over terrains stretching across several acres. The time scale of such data sets also varies greatly and may depend, for example, on the half-life of chemical isotopes of contaminants. Historic data, often sparse and stored in a variety of formats, must be combined and presented in a form that makes it easy for scientists to discover patterns or assess the effectiveness of preventive mitigation measures.
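
The point about isotope-driven time scales can be made concrete with the standard exponential-decay law: after time t, the fraction of an isotope remaining is (1/2)^(t / t_half). A small Python sketch using published half-lives for three contaminant-relevant isotopes (the choice of isotopes and the 100-year horizon are ours, for illustration only):

    def remaining_fraction(t_years, half_life_years):
        """Fraction of an isotope remaining after t_years of exponential decay."""
        return 0.5 ** (t_years / half_life_years)

    # Half-lives in years: tritium ~12.3, strontium-90 ~28.8, uranium-238 ~4.47e9.
    for name, half_life in [("tritium", 12.3),
                            ("strontium-90", 28.8),
                            ("uranium-238", 4.47e9)]:
        print(f"{name}: {remaining_fraction(100.0, half_life):.2%} left after 100 years")

The spread in the results, from a fraction of a percent for tritium to essentially the full amount for uranium-238, is exactly the multi-scale problem described above.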

We have demonstrated that visual analytics and high-performance computing can help to address these challenges. By displaying data in their geographical context and by giving the domain scientists tools for selecting, retrieving and analyzing data, we have helped them gain a better and more comprehensive understanding of the available data, which has ultimately led to improved models for contaminant fate prediction.

Biography
Joerg Meyer is a Computer Systems Engineer in the Computational Research Division at Lawrence Berkeley National Laboratory (LBNL). His research is focused on large-scale, parallel scientific data visualization and high-performance computing for visualization applications. He also provides visual analytics support to the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. He has taught classes and served on the faculty of the University of California campuses at Davis and Irvine and of Mississippi State University (MSU), and conducted research at the California Institute for Telecommunications and Information Technology (Calit2) and the National Science Foundation (NSF) Engineering Research Center (ERC) at MSU. He received his doctoral degree in 1999 from the University of Kaiserslautern, Germany. He has led and served on various conference and program committees for multiple professional organizations, including IEEE, ACM SIGGRAPH and EuroVis. He has served as a reviewer for the National Science Foundation and other funding agencies, and has authored more than 150 publications in his research field, including journal articles, book chapters, conference papers, short papers and posters.


EuroRV^3:

Evaluation in Science vs. Design -- a Visualization Perspective
by Torsten Möller

Evaluation is as important in the field of visualization as it is in any other scientific and engineering discipline, as well as in design. Doing a proper evaluation is often crucial to building a better tool or to arriving at a scientific insight. Hence, what constitutes "proper evaluation" is often debated, especially in our discipline -- visualization. In this talk I will give a historical perspective on evaluation in the scientific method and contrast it with evaluation in a human-centered design process. I will look at their similarities and differences and at how these manifest themselves in the fascinating field of visualization.

Biography
Torsten Möller is a professor at the University of Vienna, Austria. He received his PhD in Computer and Information Science from Ohio State University in 1999 and a Vordiplom (BSc) in mathematical computer science from Humboldt University of Berlin, Germany. He is a senior member of IEEE and ACM, and a member of Eurographics. His research interests include the fields of Visualization and Computer Graphics, especially the mathematical foundations thereof.

He heads the research group of Visualization and Data Analysis. He served as the appointed Vice Chair for Publications of the IEEE Visualization and Graphics Technical Committee (VGTC) between 2003 and 2012. He has served on a number of program committees and has been papers co-chair for IEEE Visualization, EuroVis, Graphics Interface, and the Workshop on Volume Graphics as well as the Visualization track of the 2007 International Symposium on Visual Computing. He has also co-organized the 2004 Workshop on Mathematical Foundations of Scientific Visualization, Computer Graphics, and Massive Data Exploration as well as the 2010 Workshop on Sampling and Reconstruction: Applications and Advances at the Banff International Research Station, Canada. He is a co-founding chair of the Symposium on Biological Data Visualization (BioVis). In 2010, he was the recipient of the NSERC DAS award. He received best paper awards from IEEE Conference on Visualization (1997), Symposium on Geometry Processing (2008), and EuroVis (2010), as well as two second best paper awards from EuroVis (2009, 2012).

Fairly Sharing the Costs of Reproducibility: Precedents and Possibilities
by Gordon L. Kindlmann

Doing work that is not reproducible is easier than doing work that is reproducible. Much of the resistance and hesitation about reproducibility in visualization amounts to concerns about the unfair allocation of the additional work that reproducibility requires. The burden of reproducibility on the author might be distributed more fairly among the author's institution, the research community, the reviewers, the publication venue, and its professional organization. Figuring this out will be challenging in a research area as diverse as visualization. Difficult questions remain, such as: reproducible for whom, and for how long? Can the burden of reproducibility stifle experimental work with cutting-edge hardware? For our community to make progress on this task, we should find inspiration in the expanding space of solutions created by our scientific peers in other research areas within computer science, and in other fields of science and engineering.

Biography
Gordon Kindlmann researches scientific visualization and image analysis to improve the biomedical applications of three-dimensional imaging modalities. His past research simplified the creation of informative direct volume renderings, inspired by traditional techniques of edge detection. His current work explores ways of translating mathematical principles of image processing and computer vision into practical methods for detecting, measuring, and understanding biological or physical structure in modern imaging data. His research software is all open source: http://teem.sf.net.




EuroVA:

Visual Analytics for Competitive Advantage
by William Ribarsky

Big Data analytics is getting a great amount of attention in business and government, not least because studies from McKinsey to Gartner predict that by 2018, in the U.S. alone, there will be a deficit of 200,000 professionals with deep data analytics skills and a need to retrain 1.5M or more managers to take advantage of big data opportunities. If it lives up to its name, visual analytics will be a prime path by which visualization competes successfully in this arena. In this talk, I will discuss how this might be done by focusing on a main goal in business and government: competitive advantage. Competitive advantage implies that one can derive business value from proper analysis of big or complex data that wouldn't be achieved otherwise, and that this confers an advantage over one's competitors. I will illuminate this discussion with relevant examples from the work of my group and that of my colleagues.

Biography
William Ribarsky is the Bank of America Endowed Chair in Information Technology at UNC Charlotte and the founding director of the Charlotte Visualization Center. He is currently Chair of the Computer Science Department. His research interests include visual analytics; 3D multimodal interaction; bioinformatics visualization; sustainable system analytics; visual reasoning; and interactive visualization of large-scale information spaces. Dr. Ribarsky is the former Chair and a current Director of the IEEE Visualization and Graphics Technical Committee. He is also a member of the Executive Steering Committees for IEEE VisWeek, which comprises the Scientific Visualization, Information Visualization, and Visual Analytics Conferences, the leading international conferences in their respective fields. He was an Associate Editor of IEEE Transactions on Visualization and Computer Graphics and is currently an Editorial Board member for IEEE Computer Graphics & Applications. Dr. Ribarsky co-founded the Eurographics/IEEE visualization conference series (now called EuroVis) and led the effort to establish the current Virtual Reality Conference series. For the above efforts on behalf of IEEE, Dr. Ribarsky won the IEEE Meritorious Service Award in 2004. In 2007, he was general co-chair of the IEEE Visual Analytics Science and Technology (VAST) Symposium. Dr. Ribarsky has published over 160 scholarly papers, book chapters, and books.

A Matter of Time: Visual Analytics of Time-Oriented Data and Information
by Silvia Miksch

Due to the proliferating capabilities to generate and collect vast amounts of heterogeneous data and information, we face the challenge that users and analysts get lost in irrelevant or inappropriately processed or presented information. The aim of Visual Analytics is to support the information discovery process in potentially large volumes of complex and heterogeneous data and information. But how can we tackle time-oriented data and information? Time itself is an exceptional data dimension with distinct characteristics (e.g., scale, scope, structure, viewpoint, and granularity). In order to design and develop Visual Analytics methods that effectively deal with the complexity of time, these special characteristics need to be considered within the intertwined interactive visualization and analysis process used to explore trends, patterns, and relationships.
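
One of these characteristics, granularity, is easy to make concrete: the same events can be analyzed at the raw-timestamp, day, or month level, and the chosen level determines which trends become visible. A minimal Python sketch (the data and function names are hypothetical, for illustration only):

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical measurements: (timestamp, value).
    events = [(datetime(2013, 6, 17, 9, 15), 4.0),
              (datetime(2013, 6, 17, 14, 40), 6.0),
              (datetime(2013, 6, 18, 10, 5), 5.0)]

    def mean_per_bin(events, granularity):
        """Re-bin events at a coarser temporal granularity: 'day' or 'month'."""
        bins = defaultdict(list)
        for ts, value in events:
            key = ts.date() if granularity == "day" else (ts.year, ts.month)
            bins[key].append(value)
        return {key: sum(vals) / len(vals) for key, vals in bins.items()}

    print(mean_per_bin(events, "day"))    # two bins: June 17 and June 18
    print(mean_per_bin(events, "month"))  # one bin: June 2013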

In this talk, I will illustrate the different aspects of time and time-oriented data. A particular focus will be put on the main goal of Visual Analytics – the facilitation of deeper insights into huge heterogeneous data resources – which can be achieved by considering (1) the characteristics of the data, (2) the users, and (3) the users’ tasks and needs. I will give various examples to illustrate what has been achieved so far and show possible future directions and challenges.

Biography
Silvia Miksch is Associate Professor and head of the Information and Knowledge Engineering research group at the Institute of Software Technology & Interactive Systems, Vienna University of Technology. From 2006 to 2010 she was professor and head of the Department of Information and Knowledge Engineering at Danube University Krems, Austria. In April 2010 she established the Laura Bassi Centre of Expertise "CVAST -- Center for Visual Analytics Science and Technology (Design, Interact & Explore)", funded by the Federal Ministry of Economy, Family and Youth of the Republic of Austria. Silvia has acquired, led, and been involved in several national and international research projects. She has served on various program committees of international scientific conferences and was, for example, conference paper co-chair of the IEEE Conferences on Visual Analytics Science and Technology (IEEE VAST 2010 and 2011) at VisWeek and of the Eurographics/IEEE Conference on Visualization (EuroVis 2012). She has reviewed for several scientific journals, serves on the editorial boards of Artificial Intelligence in Medicine (AIM-J, Elsevier), AI Communications (AICOM, IOS Press), and IEEE Transactions on Visualization and Computer Graphics (TVCG, IEEE CS), and has served as guest editor for Artificial Intelligence in Medicine (Elsevier), IEEE Transactions on Visualization and Computer Graphics (TVCG, IEEE CS), and Information Visualization (IV, Palgrave Macmillan/SAGE). Her main research interests are Information Visualization and Visual Analytics (in particular Focus+Context and interaction methods), Process and Plan Management, Interaction Design, User-Centered Design, and Time.



VAMP:

Dimensionality Reduction From Several Angles
by Tamara Munzner

I will present several projects that attack the problem of dimensionality reduction (DR) in visualization from different methodological angles, in order to answer different kinds of questions. First, can we design better DR algorithms? Glimmer is a multilevel multidimensional scaling (MDS) algorithm that exploits the GPU. Glint is a new MDS framework that achieves high performance on costly distance functions. Second, can we build a DR system for real people? DimStiller is a toolkit for DR that provides local and global guidance to users who may not be experts in the mathematics of high-dimensional data analysis, in hopes of "DR for the rest of us". Third, how should we show people DR results? An empirical lab study provides guidance on visual encoding for system developers, showing that points are more effective than spatialized landscapes for visual search tasks with DR data. A data study, where a small number of people make judgements about a large number of datasets rather than vice versa as in a typical user study, produced a taxonomy of visual cluster separation factors. Fourth, when do people need to use DR? Sometimes it is not the right solution, as we found when grappling with the design of the QuestVis system for an environmental sustainability simulation. We provide guidance for researchers and practitioners engaged in this kind of problem-driven visualization work with the nested model of visualization design and evaluation and the nine-stage framework for design study methodology. Much of this work was informed by preliminary results from an ongoing project, a two-year qualitative study of high-dimensional data analysts in many domains, to discover how the use of DR "in the wild" may or may not match the assumptions that underlie previous algorithmic work.
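
For readers unfamiliar with MDS, the classical (metric) variant that systems like Glimmer and Glint build upon can be stated in a few lines: double-center the squared distance matrix and read coordinates off its top eigenpairs. A minimal NumPy sketch of classical MDS, not the authors' GPU or framework code:

    import numpy as np

    def classical_mds(D, k=2):
        """Embed n points in k dimensions from an n-by-n distance matrix D."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
        eigvals, eigvecs = np.linalg.eigh(B)
        top = np.argsort(eigvals)[::-1][:k]      # indices of the k largest eigenvalues
        scale = np.sqrt(np.maximum(eigvals[top], 0.0))
        return eigvecs[:, top] * scale           # n-by-k coordinate matrix

    # Pairwise distances of three points at positions 0, 1, 3 on a line:
    D = np.array([[0.0, 1.0, 3.0],
                  [1.0, 0.0, 2.0],
                  [3.0, 2.0, 0.0]])
    print(classical_mds(D, k=1))  # recovers the 1-D layout up to shift and flip

Scalable systems such as Glimmer instead minimize a stress function iteratively, since the eigendecomposition above costs O(n^3) and becomes impractical for large point counts.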

Biography
Tamara Munzner is a professor at the University of British Columbia Department of Computer Science, where she has been since 2002. She was a research scientist from 2000 to 2002 at the Compaq Systems Research Center (the former DEC SRC), and earned her PhD from Stanford between 1995 and 2000. She was a technical staff member at the National Science Foundation Research Center for Computation and Visualization of Geometric Structures (The Geometry Center) at the University of Minnesota from 1991 to 1995. She holds a BS from Stanford (1991).

Her research interests include the development, evaluation, and characterization of information visualization systems and techniques from both user-driven and technique-driven perspectives. She has worked on visualization projects in a broad range of application domains, including evolutionary biology, genomics, systems biology, large-scale system administration, computer networking, web log analysis, computational linguistics, and geometric topology. She has consulted for or collaborated with many companies including Agilent, AT&T Labs, Google, Microsoft, and Silicon Graphics, and early-stage startups.

Dr. Munzner was the IEEE Symposium on Information Visualization (InfoVis) Program/Papers Co-Chair in 2003 and 2004, and the Eurographics/IEEE Symposium on Visualization (EuroVis) Program/Papers Co-Chair in 2009 and 2010. She was a Member at Large of the Executive Committee of the IEEE Visualization and Graphics Technical Committee (VGTC) from 2004 through 2009, and is currently a member of the InfoVis Steering Committee and the VisWeek Executive Committee. She was one of the six authors of the 2006 Visualization Research Challenges report, commissioned by several directorates of the US National Science Foundation and the National Institutes of Health.