Designing a parallel dataflow architecture for streaming large-scale visualization on heterogeneous platforms

Title Designing a parallel dataflow architecture for streaming large-scale visualization on heterogeneous platforms
Publication Type dissertation
School or College College of Engineering
Department Computing
Author Vo, Huy T.
Date 2011-08
Description Dataflow pipeline models are widely used in visualization systems. Despite recent advances in parallel architecture, most systems still support only a single CPU or a small collection of CPUs, such as an SMP workstation. Even systems specifically tuned for parallel visualization provide execution models that support only data-parallelism, ignoring task-parallelism and pipeline-parallelism. As machines equipped with multicore CPUs and multiple GPUs become commonplace, these visualization systems fall ever further from achieving maximum efficiency. On the other hand, several libraries exist that can schedule program execution on multiple CPUs and/or multiple GPUs. However, because executing a task graph differs from executing a pipeline, and because these libraries expose considerably low-level APIs, integrating such run-time libraries into current visualization systems remains a challenge. Thus, a redesigned dataflow architecture is needed to fully support and exploit the power of highly parallel machines in large-scale visualization. The new design must be able to schedule execution on heterogeneous platforms while supporting arbitrarily large datasets through streaming data structures. The primary goal of this dissertation is to develop a parallel dataflow architecture for streaming large-scale visualizations. The framework supports platforms ranging from multicore processors to clusters with thousands of CPUs and GPUs. We achieve this by introducing the notions of Virtual Processing Elements and Task-Oriented Modules, along with a highly customizable scheduler that dynamically controls the assignment of tasks to elements. This provides an intuitive way to maintain multiple CPU/GPU kernels while still ensuring coherence and synchronization across module executions. We have implemented these techniques in HyperFlow, which comprises an API providing the basic dataflow constructs described in the dissertation and a distributed run-time library for deploying those pipelines on multicore, multi-GPU, and cluster-based platforms.
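The abstract describes the scheduling idea only at a high level. The following is a minimal, hypothetical C++ sketch of that idea, not HyperFlow's actual API: task-oriented module executions are broken into tasks and handed to a pool of Virtual Processing Elements (worker threads here, though a real system could bind them to CPU cores or GPU contexts). All class and function names below are illustrative assumptions.

```cpp
// Hypothetical sketch (not HyperFlow's API): a scheduler assigning tasks
// produced by dataflow modules to a pool of Virtual Processing Elements.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// A unit of work produced by one module execution. The scheduler is free
// to run it on any available VPE; a GPU-backed VPE could instead dispatch
// the task to a CUDA kernel.
struct Task {
    std::function<void()> run;
};

// A pool of Virtual Processing Elements: each VPE is a worker thread that
// pulls ready tasks from a shared queue until shutdown.
class VPEPool {
public:
    explicit VPEPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }
    ~VPEPool() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();  // drain remaining tasks, then exit
    }
    void submit(Task t) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            tasks_.push(std::move(t));
        }
        cv_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            Task t;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                t = std::move(tasks_.front());
                tasks_.pop();
            }
            t.run();  // execute the module's task on this VPE
        }
    }
    std::vector<std::thread> workers_;
    std::queue<Task> tasks_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main() {
    VPEPool pool(4);  // four virtual processing elements
    // A toy "source module" whose output is streamed in pieces, so data-,
    // task-, and pipeline-parallelism can all be exploited at once.
    for (int piece = 0; piece < 8; ++piece) {
        pool.submit({[piece] {
            std::string msg = "source module, piece " + std::to_string(piece) + "\n";
            std::cout << msg;
        }});
    }
    return 0;  // pool destructor waits for all pieces to finish
}
```

In this sketch the scheduling policy is a plain FIFO queue; the abstract's "highly customizable scheduler" suggests that policy (e.g., matching tasks to CPU versus GPU elements) is precisely the part a real implementation would make pluggable.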
Type Text
Publisher University of Utah
Subject Dataflow architecture; Heterogeneous platforms; Multi-CPU; Multi-GPU; Parallel execution
Dissertation Institution University of Utah
Dissertation Name Doctor of Philosophy
Language eng
Rights Management Copyright © Huy T. Vo 2011
Format application/pdf
Format Medium application/pdf
Format Extent 35,190,222 bytes
Identifier us-etd3,38255
Source Original housed in Marriott Library Special Collections, QA3.5 2011 .V6
ARK ark:/87278/s6gh9zqf
Setname ir_etd
ID 194543
Reference URL https://collections.lib.utah.edu/ark:/87278/s6gh9zqf